< Dongguk Open Databases & CNN Models >
---------------------------------------------
49. Dongguk ESSN Models and Algorithm for Semantic Segmentation
48. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles Database (DI&V-DB)
47. Dongguk Ultrasound Thyroid Nodule Classification (DUS-TNC) Algorithm
46. Dongguk Modified CycleGAN for Age Estimation (DMC4AE) and Generated Images
45. Dongguk Vess-Net Models with Algorithm
44. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with Algorithms
43. Dongguk CNN Stacked LSTM and CycleGAN for Action Recognition, Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)
42. Dongguk Generation Model of Presentation Attack Face Images (DG_FACE_PAD_GEN)
41. Label Information of Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN)
40. Dongguk cGAN-based Iris Image Generation Model and Generated Images (DGIM&GI)
39. Dongguk CNN and LSTM Models for the Classification of Multiple Gastrointestinal (GI) Diseases, and Video Indices of Experimental Endoscopic Videos
38. Dongguk Dual-Camera-based Gaze Database (DDCG-DB1) and CNN Models with Algorithms
37. Dongguk Mobile Finger-Wrinkle Database (DMFW-DB1) and CNN Models with Algorithms
36. Dongguk Low-Resolution Drone Camera Dataset (DLDC-DB1, DLDC-DB2) & CNN Models
35. Dongguk CNN Model for CBMIR
34. Dongguk Person ReID CNN Models (DPRID-CNN)
33. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms
32. Dongguk OR-Skip-Net Model for Image Segmentation with Algorithm and Black Skin People (BSP) Label Information
31. Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with Algorithms
30. Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases
29. Dongguk CNN Model for NIR Ocular Recognition (DC4NO) with Algorithm
28. Dongguk Face Presentation Attack Detection Algorithms by Spatial and Temporal Information (DFPAD-STI)
27. Dongguk Dual Camera-based Driver Database (DDCD-DB1) and Trained Faster R-CNN Model with Algorithm
26. Dongguk FRED-Net with Algorithm
25. Dongguk Face and Body Database (DFB-DB1) with CNN Models and Algorithms
24. Dongguk Night-Time Face Detection Database (DNFD-DB1) and Algorithm Including CNN Model
23. Dongguk Iris Spoof Detection CNN Model version 2 (DFSD-CNN-2) with Algorithm
22. Dongguk Fitness Database (DF-DB2) & CNN Model
21. Dongguk Body-Movement-based Human Identification Database version 2 (DBMHI-DB2) & CNN Model
20. Dongguk Multimodal Recognition CNN of Finger-vein and Finger Shape (DMR-CNN) with Algorithm
19. Dongguk Drone Camera Database (DDroneC-DB2) with CNN Models
18. Dongguk Periocular Database (DP-DB1) with CNN Models and Algorithms
17. Dongguk IrisDenseNet CNN Model (DI-CNN) with Algorithm
16. Dongguk Visible Light Iris Recognition CNN Model (DVLIR-CNN)
15. Dongguk Aggressive and Smooth Driving Database (DASD-DB1) and CNN Model
14. Dongguk Night-time Pedestrian Detection Faster R-CNN and Algorithm
13. Dongguk Shadow Detection Database (DSDD-DB1) & CNN Model
49. Dongguk ESSN Models and Algorithm for Semantic Segmentation
(1) Introduction
We propose a new model (ESSN) for semantic segmentation. The proposed model was trained on the open SBD [1] and CamVid [2] datasets. We have made our trained model and algorithm open to other researchers.
[1] S. Gould, R. Fulton, and D. Koller, “Decomposing a Scene into Geometric and Semantically Consistent Regions,” in Proc. IEEE Int. Conf. Comput. Vis., Kyoto, Japan, 29 Sep.-2 Oct. 2009, pp. 1-8.
[2] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla, “Segmentation and Recognition Using Structure from Motion Point Clouds,” in Proc. European Conf. Comput. Vis., Marseille, France, 12-18 Oct. 2008, pp. 44-57.
(2) Request for Models
To obtain our pretrained model, please fill out the request form below and send an email to Mr. Dong Seop Kim (seob2@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.
Dong Seop Kim, Muhammad Arsalan, Muhammad Owais, and Kang Ryoung Park, “ESSN: Enhanced Semantic Segmentation Network by Residual Concatenation of Feature Maps,” IEEE Access, in submission.
< Request Form for Pretrained Models and Algorithm >
Please complete the following form to request access to our pretrained models. These models should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
48. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles Database (DI&V-DB)
(1) Introduction
We trained a Mask R-CNN model on our thermal image database for the purpose of eliminating thermal reflections. We have made the models, the generated data, the Dongguk Thermal Image Database (DTh-DB), and the Dongguk Items & Vehicles Database (DI&V-DB) open to other researchers.
(2) Request for Models, Generated Data, and Databases
To obtain our pretrained model, generated data, and databases, please fill out the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Hyo Sik Yoon, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, “A Study on the Elimination of Thermal Reflections,” IEEE Access, in submission.
< Request Form for Models, Generated Data, and Databases >
Please complete the following form to request access to our pretrained model, generated data, and database (all contents must be completed). This model, data, and database should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Name (signature)
47. Dongguk Ultrasound Thyroid Nodule Classification (DUS-TNC) Algorithm
(1) Introduction
In this study, we enhance the performance of an ultrasound-image-based thyroid nodule classification system by cascading FFT-based and CNN-based classifiers (a sketch of the frequency-domain stage follows the reference below). The pretrained model was trained on the TDID dataset [1]. We have made our trained model and algorithm open to other researchers.
[1] Pedraza, L.; Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero, E. An open access thyroid ultrasound-image database. In Proceedings of the 10th International Symposium on Medical Information Processing and Analysis, Colombia, 28 January 2015; SPIE Proceedings, Vol. 9287, pp. 1-6.
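To make the cascade concrete, the following Python sketch shows one way a frequency-domain feature vector can be computed from an ultrasound ROI with NumPy. The function name, bin count, and downstream classifier are illustrative assumptions, not the released DUS-TNC code.

import numpy as np

def fft_radial_features(roi, n_bins=32):
    # 2-D FFT of the ROI, shifted so low frequencies sit at the center
    spectrum = np.fft.fftshift(np.fft.fft2(roi.astype(np.float32)))
    magnitude = np.log1p(np.abs(spectrum))
    h, w = magnitude.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    # Average log-magnitude within concentric frequency bands
    bins = np.linspace(0, radius.max() + 1e-6, n_bins + 1)
    feats = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        band = magnitude[(radius >= lo) & (radius < hi)]
        feats.append(band.mean() if band.size else 0.0)
    return np.asarray(feats)

In a cascade of this kind, the frequency-domain stage gives a fast first decision, and only uncertain cases are passed on to the CNN stage.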
(2) Request for our algorithm
To gain access to our algorithm (code and pretrained models), please sign and scan the request form below and email it to Prof. D. T. Nguyen at nguyentiendat@dongguk.edu. Any work that uses our algorithm must acknowledge the authors by including the following reference.
D. T. Nguyen, T. D. Pham, G. Batchuluun, H. S. Yoon, and K. R. Park, "Artificial Intelligence-based Thyroid Nodule Classification Using Information from Spatial and Frequency Domains", Journal of Clinical Medicine, in submission.
< Request Form for DUS-TNC algorithm >
Please complete the following form to request access to our algorithm (all contents must be completed). These models should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
46. Dongguk Modified CycleGAN for Age Estimation (DMC4AE) and Generated Images
(1) Introduction
We trained our modified CycleGAN models for age estimation on the heterogeneous MegaAge and MORPH databases [1,2]. We have made our trained models, and the images generated by the modified CycleGAN, open to other researchers.
1. Y. Zhang, L. Liu, C. Li, and C. C. Loy, Quantifying facial age by posterior of age comparisons. In Proceedings of the British Machine Vision Conference, London, UK, 4-7 September 2017; pp. 1-14.
2. K. Ricanek and T. Tesafaye, MORPH: A longitudinal image database of normal adult age-progression. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, 10-12 April 2006; pp. 341-345.
(2) Request for our models and images
To gain access to our models and images, download the following request form. Please sign and scan the request form and email it to Mr. Yu Hwan Kim (taekkuon@dongguk.edu).
Any work that uses these models and images must acknowledge the authors by including the following reference.
Yu Hwan Kim, Min Beom Lee, Se Hyun Nam, and Kang Ryoung Park, “Enhancing the Accuracies of Age Estimation with Heterogeneous Databases Using Modified CycleGAN,” IEEE Access, in submission.
< Request Form for DMC4AE and Generated Images >
Please complete the following form to request access to these models and images (all contents must be completed). These models should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
45. Dongguk Vess-Net Models with Algorithm
(1) Introduction
We trained our Vess-Net model, based on a dual-stream feature empowerment scheme, for retinal vessel segmentation to aid the diagnosis of diseases such as diabetic and hypertensive retinopathy. In our experiments, we validated the performance of our method on three publicly available fundus image databases: DRIVE [1], CHASE-DB1 [2], and STARE [3]. We have made our trained models open to other researchers.
1. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501-509.
2. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538-2548.
3. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203-210.
(2) Request for our Vess-Net models
To gain access to the Vess-Net trained models, download the following request form. Please sign and scan the request form and email it to Mr. Muhammad Arsalan (arsal@dongguk.edu).
Any work that uses these Vess-Net models must acknowledge the authors by including the following reference.
Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Se Woon Cho, and Kang Ryoung Park, “Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-based Semantic Segmentation,” Journal of Clinical Medicine, Vol. 8, Issue 9(1446), pp. 1-27, September 2019.
< Request Form for Vess-Net Models >
Please complete the following form to request access to these models (all contents must be completed). These models should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
44. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with Algorithms
(1) Introduction
We created adaptive ROI images before using them to train our convolutional neural network (CNN). In the first stage, a vanishing point is detected in order to create the ROI image. The ROI image, which covers the majority of the road region, is then used as the input to train the CNN-based detector and classifier in the second stage (see the sketch after the references below). We made the models, generated data, and labeled database information open to other researchers. Our CNN model was trained on the Malaga urban dataset [1], the Daimler dataset [2], and the Cambridge dataset [3].
1. The Málaga Stereo and Laser Urban Data Set – MRPT. Available online: https://www.mrpt.org/MalagaUrbanDataset (accessed on 1 October 2018).
2. Daimler Urban Segmentation Dataset. Available online: http://www.6d-vision.com/scene-labeling (accessed on 2 January 2019).
3. Cambridge-driving Labeled Video Database (CamVid). Available online: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ (accessed on 1 October 2018).
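As an illustration of the two-stage idea above, the following Python/OpenCV sketch crops an adaptive ROI below a given vanishing point before the frame is passed to the CNN detector. The file name, margin, and vanishing-point coordinate are assumptions, and vanishing-point detection itself is outside this sketch.

import cv2

def adaptive_roi(frame, vanishing_y, margin=10):
    # Keep only the road region below the detected vanishing point,
    # with a small margin above the horizon line
    h, w = frame.shape[:2]
    top = max(0, vanishing_y - margin)
    return frame[top:h, 0:w]

frame = cv2.imread("frame.png")              # hypothetical input frame
roi = adaptive_roi(frame, vanishing_y=300)   # y-coordinate from stage one
# roi is then resized to the CNN input size and fed to the detector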
(2) Request for models, generated data, and labeled information
To obtain our pretrained model, generated data, and labeled information, please fill out the request form below and send an email to Dr. Toan Minh Hoang at hoangminhtoan@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Toan Minh Hoang, Se Hyun Nam, and Kang Ryoung Park, “Enhanced Detection and Recognition of Road Markings Based on Adaptive Region of Interest and Deep Learning,” IEEE Access, Vol. 7, pp. 109817-109832, August 2019.
< Request Form for Models, Generated Data, and Labeled Information >
Please complete the following form to request access to our pretrained model, generated data, and labeled database information (all contents must be completed). This model, data, and database should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
43. Dongguk CNN Stacked LSTM and CycleGAN for Action Recognition, Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)
(1) Introduction
We trained our convolutional neural network (CNN), CNN stacked with long short-term memory (CNN-LSTM), and cycle-consistent adversarial network (CycleGAN) models on our action database. We have made the models, generated data, and database open to other researchers.
(2) Request for Models, Generated Data, and DA&A-DB2
To obtain our pretrained model, generated data, and database, please fill out the request form below and send an email to Dr. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Dat Tien Nguyen, Tuyen Danh Pham, Chanhum Park, and Kang Ryoung Park, “Action Recognition from Thermal Videos,” IEEE Access, Vol. 7, pp. 103893-103917, August 2019.
< Request Form for Models, Generated Data, and DA&A-DB2 >
Please complete the following form to request access to our pretrained model, generated data, and database (all contents must be completed). This model, data, and database should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
42. Dongguk Generation Model of Presentation Attack Face Images (DG_FACE_PAD_GEN)
(1) Introduction
We trained our generative adversarial network (GAN)-based model to artificially generate presentation attack (PA) face images, reducing the effort of PA image acquisition.
(2) Request for obtaining DG_FACE_PAD_GEN
To obtain our pretrained model, please fill out the request form below and send an email to Mr. Nguyen at nguyentiendat@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Kyoung Jun Noh, and Kang Ryoung Park, “Presentation Attack Face Image Generation Based on Deep Generative Adversarial Network,” Sensors, in preparation for submission.
< Request Form for DG_FACE_PAD_GEN >
Please complete the following form to request access to DG_FACE_PAD_GEN. These files should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
41. Label Information of Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN)
(1) Introduction
We collected gender information for the Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) database and trained a gender recognition system based on ResNet-101 using two databases: SYSU-MM01 and the Dongguk Body-based Gender Database (DBGender-DB2). We have made the label information of the SYSU-MM01 database and the Dongguk Gender Recognition CNN models (DGR-CNN) open to other researchers.
(2) Request for Label Information and DGR-CNN
To gain access to the label information and DGR-CNN, download the following request form for the label information of SYSU-MM01 and DGR-CNN. Please sign and scan the request form and email it to Ms. Na Rae Baek (naris27@dongguk.edu).
Any work that uses the label information of the SYSU-MM01 database or this CNN model must acknowledge the authors by including the following reference.
Na Rae Baek, Se Woon Cho, Ja Hyung Koo, Noi Quang Truong, and Kang Ryoung Park, “Multimodal Camera-based Gender Recognition Using Human-body Image with Two-step Reconstruction Network,” IEEE Access, Vol. 7, pp. 104025-104044, August 2019.
< Request Form for label information of SYSU-MM01 and DGR-CNN >
Please complete the following form to request access to the label information of SYSU-MM01 and DGR-CNN. These files should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
40. Dongguk cGAN-based Iris Image Generation Model and Generated Images (DGIM&GI)
(1) Introduction
We trained generation models based on cGAN (the pix2pix model) using the NICE.II training dataset (selected from UBIRIS.v2) and the MICHE database for the visible light environment, and the CASIA-Iris-Distance database for the NIR environment. Additionally, we generated iris images using the trained generation models with each database. We have made the DGIM (trained generation models) and GI (images generated from the trained models) open to other researchers.
(2) Request for DGIM&GI
To gain access to the DGIM&GI, download the following request form for DGIM&GI. Please sign and scan the request form and email it to Mr. Min Beom Lee (smin6180@naver.com).
Any work that uses this DGIM&GI must acknowledge the authors by including the following reference.
Min Beom Lee, Yu Hwan Kim, and Kang Ryoung Park, “Conditional Generative Adversarial Network-Based Data Augmentation for Enhancement of Iris Recognition Accuracy,” IEEE Access, Vol. 7, pp. 122134-122152, September 2019.
< Request Form for DGIM&GI >
Please complete the following form to request access to the DGIM&GI. These files should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
39. Dongguk CNN and LSTM Models for the Classification of Multiple Gastrointestinal (GI) Diseases, and Video Indices of Experimental Endoscopic Videos
(1) Introduction
We trained a cascaded ResNet18 and LSTM model for the classification of multiple gastrointestinal diseases using endoscopic video data (a sketch of this cascade follows the references below). Two publicly available endoscopic databases [1,2] were used for the training and validation of our proposed CNN+LSTM model. Moreover, the trained model is also used for class prediction-based retrieval of endoscopic images. We have made our trained model and the video indices of the experimental endoscopic videos open to other researchers.
1. Gastrolab – The gastrointestinal site. Available online: http://www.gastrolab.net/ni.htm (accessed on 1 February 2019).
2. Pogorelov, K.; Randel, K. R.; Griwodz, C.; Eskeland, S. L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.-T.; Lux, M.; Schmidt, P. T.; Riegler, M.; Halvorsen, P. KVASIR: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM Multimedia Systems Conference, Taipei, Taiwan, 20-23 June 2017; pp. 164-169.
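The following PyTorch sketch illustrates the general CNN+LSTM pattern described above: per-frame ResNet18 features are fed to an LSTM that classifies the whole clip. The layer sizes, sequence length, and class count are assumptions for illustration, not the authors' released model.

import torch
import torch.nn as nn
from torchvision import models

class CNNLSTM(nn.Module):
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # keep 512-d frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))  # CNN runs on every frame
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])           # classify from the last step

model = CNNLSTM(num_classes=8)                 # assumed class count
logits = model(torch.randn(2, 8, 3, 224, 224))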
(2) Request for our CNN+LSTM models and video indices
To gain access to the models and video indices, download the following request form for CNN+LSTM models and video indices. Please sign and scan the request form and email it to Mr. Muhammad Owais (malikowais266@gmail.com).
Any work that uses these CNN+LSTM models and video indices must acknowledge the authors by including the following reference.
Muhammad Owais, Muhammad Arsalan, Jiho Choi, Tahir Mahmood, and Kang Ryoung Park, “Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis,” Journal of Clinical Medicine, Vol. 8, Issue 7(986), pp. 1-33, July 2019.
< Request Form for CNN+LSTM Models and Video Indices >
Please complete the following form to request access to these models and video indices (all contents must be completed). These models and video indices should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
38. Dongguk Dual-Camera-based Gaze Database (DDCG-DB1) and CNN Models with Algorithms
(1) Introduction
A natural gaze-detection database, the Dongguk Dual-Camera-based Gaze Database (DDCG-DB1), was constructed from images of 26 drivers captured by dual near-infrared (NIR) cameras with illuminators in a vehicle environment; it is classified into nine situations, such as wearing sunglasses, different glasses, or hats, and using a mobile phone. We have made DDCG-DB1 and our CNN model trained on this database open to other researchers.
(2) Request for DDCG-DB1 and CNN model
To gain access to DDCG-DB1 with the CNN model, download the following request form. Please scan the request form and email it to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).
Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.
Hyo Sik Yoon, Na Rae Baek, Noi Quang Truong, and Kang Ryoung Park, “Driver Gaze Detection Based on Deep Residual Networks Using the Combined Single Image of Dual Near-Infrared Cameras,” IEEE Access, Vol. 7, pp. 93448-93461, July 2019.
< Request Form for DDCG-DB1 and CNN models >
Please complete the following form to request access to the DDCG-DB1 and CNN models (all contents must be completed). This dataset should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
37. Dongguk Mobile Finger-Wrinkle Database (DMFW-DB1) and CNN Models with Algorithms
(1) Introduction
We collected the smartphone-acquired finger-wrinkle open database DMFW-DB1 using the LG V20's frontal-viewing camera (8 megapixels (2,160 × 3,840 pixels), 30 fps, auto mode) from 33 people (both hands) in five different indoor environments. In addition, we trained a finger-wrinkle recognition system based on ResNet-101. We have made DMFW-DB1 and our CNN model trained on this database open to other researchers.
(2) Request for DMFW-DB1 and CNN model
To gain access to DMFW-DB1 with the CNN model, download the following request form. Please scan the request form and email it to Mr. Chan Sik Kim (kimchsi90@dongguk.edu).
Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.
Chan Sik Kim, Nam Sun Cho, and Kang Ryoung Park, “Deep Residual Network-Based Recognition of Finger Wrinkles Using Smartphone Camera,” IEEE Access, Vol. 7, pp. 71270-71285, June 2019.
< Request Form for DMFW-DB1 and CNN models >
Please complete the following form to request access to the DMFW-DB1 and CNN models (all contents must be completed). This dataset should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
36. Dongguk Low-Resolution Drone Camera Dataset (DLDC-DB1, DLDC-DB2) & CNN Models
(1) Introduction
We used the open Dongguk drone camera dataset ver. 2 (DDroneC-DB2) to build an artificial low-resolution dataset, DLDC-DB1, by generating 80×80-pixel low-resolution images from the original 320×320-pixel images using bicubic interpolation (a sketch of this step follows below). Additionally, we collected a real low-resolution dataset, DLDC-DB2, using a low-resolution visible-light camera mounted on a DJI Phantom 4 drone during landing. The camera presents a downward view from the drone and captures images of 320×240 pixels. We also make the CNN models trained on these datasets open to other researchers.
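The artificial low-resolution step is simple to reproduce; a minimal Python/OpenCV sketch (file names are placeholders) is:

import cv2

img = cv2.imread("original_320x320.png")       # hypothetical path
low = cv2.resize(img, (80, 80), interpolation=cv2.INTER_CUBIC)
cv2.imwrite("low_res_80x80.png", low)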
(2) Request for DLDC-DB1, DLDC-DB2 and CNN models
To gain access to the datasets with CNN models, download the following request form. Please scan the request form and email it to Mr. Noi Quang Truong (noitq.hust@gmail.com).
Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.
Noi Quang Truong, Phong Ha Nguyen, Se Hyun Nam, and Kang Ryoung Park, “Deep Learning-Based Super-Resolution Reconstruction and Marker Detection for Drone Landing,” IEEE Access, Vol. 7, pp. 61639-61655, May 2019.
< Request Form for DLDC-DB1, DLDC-DB2 and CNN models >
Please complete the following form to request access to the DLDC-DB1, DLDC-DB2, and CNN models (all contents must be completed). This dataset should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
35. Dongguk CNN Model for CBMIR
(1) Introduction
We trained an enhanced ResNet50 model for the classification and retrieval of multimodal medical images. Twelve different publicly available databases [1] comprising 50 classes were used for the training and validation of our enhanced ResNet50. Finally, the trained model is used for content-based medical image retrieval (CBMIR) by performing deep feature-based classification of medical images (a sketch of this retrieval step follows the reference below). We have made our trained model open to other researchers.
1. Multiple medical imaging databases. Available online: https://sites.google.com/site/aacruzr/image-datasets (accessed on 28 February 2019).
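The retrieval step described above follows a common pattern: embed every image with the CNN backbone and rank the gallery by similarity. A minimal PyTorch sketch of that generic pattern (ImageNet weights stand in here; this is not the authors' enhanced model) is:

import numpy as np
import torch
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()      # expose 2048-d deep features
backbone.eval()

@torch.no_grad()
def embed(batch):                      # batch: (N, 3, 224, 224), normalized
    f = backbone(batch).numpy()
    return f / np.linalg.norm(f, axis=1, keepdims=True)

def retrieve(query_vec, gallery_vecs, k=5):
    # Indices of the k most similar gallery images by cosine similarity
    scores = gallery_vecs @ query_vec
    return np.argsort(-scores)[:k]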
(2) Request for CNN Model for CBMIR
To gain access to the models, download the following request form for CBMIR-CNN. Please sign and scan the request form and email it to Mr. Muhammad Owais (malikowais266@gmail.com).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
Muhammad Owais, Muhammad Arsalan, Jiho Choi, and Kang Ryoung Park, “Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence,” Journal of Clinical Medicine, Vol. 8, Issue 4(462), pp. 1-31, April 2019.
< Request Form for CBMIR-CNN >
Please complete the following form to request access to the CBMIR-CNN (all contents must be completed). These CNN models should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
34. Dongguk Person ReID CNN Models (DPRID-CNN)
(1) Introduction
We trained a person re-identification system based on ResNet-50 using two databases: the Dongguk Body-based Person Recognition Database (DBPerson-Recog-DB1) [1] and the Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) database [2]. We have made the trained models open to other researchers.
1. DBPerson-Recog-DB1. Available on this page, No. 3.
2. SYSU-MM01. Available online: https://github.com/wuancong/SYSU-MM01 (accessed on 28 February 2019).
(2) Request for DPRID-CNN
To gain access to the models, download the following request form for DPRID-CNN. Please sign and scan the request form and email it to Mr. Jin Kyu Kang (kangjinkyu@dgu.edu).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
Jin Kyu Kang, Toan Minh Hoang, and Kang Ryoung Park, “Person Re-Identification Between Visible and Thermal Camera Images Based on Deep Residual CNN Using Single Input,” IEEE Access, Vol. 7, pp. 57972-57984, May 2019.
< Request Form for DPRID-CNN >
Please complete the following form to request access to the DPRID-CNN (all contents must be completed). These CNN models should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
33. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms
(1) Introduction
We trained a finger-vein recognition system based on DenseNet-161 using two databases: the Hong Kong Polytechnic University Finger Image Database (version 1) [1] and the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger-vein database [2]. We have made the trained models and algorithm open to other researchers.
1. Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.
2. SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm (accessed on 7 May 2018).
(2) DDFRM with Algorithm Request
To gain access to the models and algorithm, download the following request form for DDFRM with algorithm. Please sign and scan the request form and email it to Mr. Jong Min Song (whdwhd93@gmail.com).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
Jong Min Song, Wan Kim, and Kang Ryoung Park, “Finger-vein Recognition Based on Deep DenseNet Using Composite Image,” IEEE Access, Vol. 7, pp. 66845-66863, June 2019.
< Request Form for DDFRM with algorithm >
Please complete the following form to request access to the DDFRM with algorithm (all contents must be completed). This CNN model with algorithm should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
32. Dongguk OR-Skip-Net Model for Image Segmentation with Algorithm and Black Skin People (BSP) Label Information
(1) Introduction
We trained an outer skip connection-based deep convolutional network (OR-Skip-Net) for image segmentation related to medical diagnosis and other applications, and evaluated its segmentation performance using ten databases: HGR [1], EDds [2], LIRIS [2], SSG [2], UT [2], AMI [2], Pratheepan [3], BSP, Warwick-QU [4], and NICE.II [5]. We have made the trained models, algorithm, and BSP label information open to other researchers.
1. Hand detection and pose estimation for creating human-computer interaction project. Available online: http://sun.aei.polsl.pl/~mkawulok/gestures/ip.html (accessed on 31 October 2018).
2. Skin detection datasets for video monitoring. Available online: http://www-vpu.eps.uam.es/publications/SkinDetDM/ (accessed on 5 November 2018).
3. Pratheepan dataset + ground truth. Available online: http://cs-chan.com/downloads_skin_dataset.html (accessed on 5 November 2018).
4. GlaS@MICCAI'2015: Gland segmentation challenge contest. Available online: https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/ (accessed on 24 January 2019).
5. NICE.II. Noisy iris challenge evaluation - part II. Available online: http://nice2.di.ubi.pt/ (accessed on 8 November 2018).
(2) Request for OR-Skip-Net Model with Algorithm and Black Skin People (BSP) Label Information
To gain access to the models, algorithm, and BSP label information, download the following request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).
Any work that uses this CNN model with algorithm and label information must acknowledge the authors by including the following reference.
Muhammad Arsalan, Dong Seop Kim, Muhammad Owais, and Kang Ryoung Park, “OR-Skip-Net: Outer Residual Skip Network for Skin Segmentation in Non-Ideal Situations,” Expert Systems With Applications, in press, 2020.
< Request form for OR-Skip-Net model with algorithm and BSP label information >
Please complete the following form to request access to the OR-Skip-Net model with algorithm and BSP label information (all contents must be completed). This CNN model with algorithm and label information should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
31. Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with Algorithms
(1) Introduction
We constructed the Dongguk Banknote Type and Fitness Database (DF-DB3) from Indian rupee (INR 10/20/50/100/500/1000), Korean won (KRW 1000/5000/10000/50000), and United States dollar (USD 5/10/20/50/100) banknotes, and we make the database and trained CNN models (AlexNet, GoogleNet, and ResNet-18/50) with algorithms available for fair comparison by other researchers.
(2) Request for Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with Algorithms
To gain access to these files, download the following request form. Please scan the request form and email it to Dr. Tuyen Danh Pham (phamdanhtuyen@dongguk.edu). Any work that uses these files with the algorithm must acknowledge the authors by including the following reference.
< Request Form for Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with Algorithms >
Please complete the following form to request access to these files (all contents must be completed). These files should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
30. Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases
(1) Introduction
Although the open Malaga urban dataset [1], Daimler dataset [2], and Cambridge dataset [3] have been widely used in previous studies, they do not provide annotated road marking information, which increases the time and effort required for system implementation. Therefore, we provide manually annotated road marking information for the Malaga urban dataset, the Daimler dataset, and the Cambridge dataset. We also provide to other researchers the proposed RetinaNet models trained on these databases with different backbones, with and without pre-trained weights.
1. The Málaga Stereo and Laser Urban Data Set – MRPT. Available online: https://www.mrpt.org/MalagaUrbanDataset (accessed on 1 October 2018).
2. Daimler Urban Segmentation Dataset. Available online: http://www.6d-vision.com/scene-labeling (accessed on 1 October 2018).
3. Cambridge-driving Labeled Video Database (CamVid). Available online: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ (accessed on 1 October 2018).
(2) Request for Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases
To gain access to these files, download the following request form. Please scan the request form and email it to Mr. Toan Minh Hoang (hoangminhtoan@dongguk.edu). Any work that uses these files with the algorithm must acknowledge the authors by including the following reference.
< Request Form for Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases >
Please complete the following form to request access to these files (all contents must be completed). These files should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
29. Dongguk CNN Model for NIR Ocular Recognition (DC4NO) with Algorithm
(1) Introduction
We developed an algorithm for rough pupil detection based on sub-block-based template matching (a generic sketch of template matching follows the reference below) and trained deep ResNet models with three open databases: CASIA-Iris-Distance, CASIA-Iris-Lamp, and CASIA-Iris-Thousand [1]. We have made these trained CNN models for ocular recognition, together with the algorithm, open to other researchers.
1. CASIA-Iris version 4. Available online: http://www.cbsr.ia.ac.cn/china/Iris%20Databases%20CH.asp (accessed on 9 November 2018).
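For readers unfamiliar with the idea, the following Python/OpenCV sketch shows plain template matching as a generic stand-in for the sub-block scheme. The image paths, the template itself, and the use of a single template are assumptions.

import cv2

eye = cv2.imread("nir_eye.png", cv2.IMREAD_GRAYSCALE)             # hypothetical
template = cv2.imread("pupil_template.png", cv2.IMREAD_GRAYSCALE) # hypothetical

# Normalized cross-correlation over the whole image; the best-scoring
# location gives a rough pupil position for later refinement
scores = cv2.matchTemplate(eye, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, (x, y) = cv2.minMaxLoc(scores)
h, w = template.shape
pupil_center = (x + w // 2, y + h // 2)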
(2) Request for DC4NO with algorithm
To gain access to the DC4NO with algorithm, download the following request form. Please scan the request form and email it to Mr. Young Won Lee (lyw941021@dongguk.edu).
Any work that uses this DC4NO with algorithm must acknowledge the authors by including the following reference.
< Request Form for DC4NO with algorithm >
Please complete the following form to request access to the DC4NO with algorithm (all contents must be completed). The DC4NO with algorithm should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
28. Dongguk Face Presentation Attack Detection Algorithms by Spatial and Temporal Information (DFPAD-STI)
(1) Introduction
We developed a stacked convolutional neural network (CNN)-recurrent neural network (RNN) combined with handcrafted features for face presentation attack detection, using images from the CASIA database [1] and the Replay-Mobile dataset [2]. We have made these trained CNN models open to other researchers.
1. Zhang, Z.; Yan, J.; Liu, S.; Lei, Z.; Yi, D.; Li, S. Z. A face anti-spoofing database with diverse attacks. In Proceedings of the 5th International Conference on Biometrics, New Delhi, India, 29 March - 1 April 2012.
2. Costa-Pazo, A.; Bhattacharjee, S.; Vazquez-Fernandez, E.; Marcel, S. The Replay-Mobile face presentation attack database. In Proceedings of the International Conference on the Biometrics Special Interest Group, Darmstadt, Germany, 21-23 September 2016.
(2) Request for DFPAD-STI
To gain access to the DFPAD-STI, download the following request form. Please scan the request form and email it to Prof. Dat Tien Nguyen (nguyentiendat@dongguk.edu).
Any work that uses this DFPAD-STI must acknowledge the authors by including the following reference.
< Request Form for DFPAD-STI >
Please complete the following form to request access to the DFPAD-STI (all contents must be completed). The DFPAD-STI should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
27. Dongguk Dual Camera-based Driver Database (DDCD-DB1) and Trained Faster R-CNN Model with Algorithm
(1) Introduction
When acquiring DDCD-DB1, the driver's gaze area was divided into 15 zones. The drivers gazed at the 15 predefined zones in order, and each of the 26 participants was assigned 8 different situations (i.e., wearing a hat; wearing four different types of glasses (rimless, gold-rimmed, half-frame, and horn-rimmed); wearing sunglasses; making a call on a mobile phone; covering the face with a hand; etc.). The data were collected by two NIR cameras with NIR illuminators. As the participants gazed at the designated regions in turn, natural head rotations that would occur in actual driving were permitted, and no other restrictions or instructions were given. When acquiring actual driving data, because of the risk of a traffic accident, rather than actually driving, a real vehicle (an SM5 New Impression by Renault Samsung) was started from a parked state in various locations (from daylight roads to a parking garage).
In addition, we made public two Faster R-CNN models, trained with our DDCD-DB1 and the open CAVE-DB [1], respectively.
1. Smith, B.A.; Yin, Q.; Feiner, S.K.; Nayar, S.K. Gaze Locking: Passive Eye Contact Detection for Human-Object Interaction. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St. Andrews, Scotland, UK, 8-11 October 2013; pp. 271-280.
(2) Request for DDCD-DB1 and Faster R-CNN model with algorithms
To gain access to DDCD-DB1 and the Faster R-CNN model with algorithms, download the following request form. Please scan the request form and email it to Mr. Sung Ho Park (pshgod91@dongguk.edu).
Any work that uses this DDCD-DB1 and Faster R-CNN model with algorithms must acknowledge the authors by including the following reference.
< Request Form for DDCD-DB1 and Faster R-CNN model with algorithms >
Please complete the following form to request access to the DDCD-DB1 and Faster R-CNN model with algorithms (all contents must be completed). This database and CNN model with algorithms should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
26. Dongguk FRED-Net with Algorithm
(1) Introduction
We trained fully residual encoder-decoder network (FRED-Net) CNN models for iris and road scene segmentation, and evaluated the segmentation performance using seven databases: NICE-II [1], MICHE [2], CASIA distance [3], CASIA interval [3], IITD [4], CamVid [5], and KITTI [6]. We have made the trained models and algorithm open to other researchers.
1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 28 December 2017).
2. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.
3. CASIA-Iris-databases. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 28 December 2017).
4. IIT Delhi Iris Database. Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm (accessed on 28 December 2017).
5. Brostow, G. J.; Fauqueur, J.; Cipolla, R. Semantic object classes in video: A high-definition ground truth database. Pattern Recognit. Lett. 2009, 30, 88-97.
6. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231-1237.
(2) FRED-Net Model with Algorithm Request
To gain access to the models and algorithm, download the following FRED-Net model with algorithm request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
Muhammad Arsalan, Dong Seop Kim, Min Beom Lee, Muhammad Owais, and Kang Ryoung Park, “FRED-Net: Fully residual encoder–decoder network for accurate iris segmentation,” Expert Systems with Applications, Vol. 122, pp. 217-241, May 2019.
< FRED-Net model with algorithm Request Form >
Please complete the following form to request access to the FRED-Net model with algorithm (all contents must be completed). This CNN model with algorithm should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
25. Dongguk Face and Body Database (DFB-DB1) with CNN Models and Algorithms
(1) Introduction
DFB-DB1 was created from images of 22 people captured by two types of cameras to assess the performance of the proposed method in a variety of camera environments. The first camera was a Logitech BCC 950; its specifications include a viewing angle of 78°, a maximum resolution of full high-definition (Full HD) 1080p, and autofocus at 30 frames per second (fps). The second camera was a Logitech C920; its specifications include a maximum resolution of Full HD 1080p, a viewing angle of 78° at 30 fps, and autofocus. Images were taken in an indoor environment with the indoor lights on, and each camera was installed at a height of 2 m 40 cm. The database is divided into two categories according to the camera: the first contains the images captured by the Logitech BCC 950, and the second is composed of the images obtained by the Logitech C920, with a camera angle similar to that used for the first.
In addition, we have made public our two CNN models, trained on DFB-DB1 and on the open ChokePoint database [1], respectively, together with our algorithms.
1. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 21 Feb. 2018).
(2) Request for DFB-DB1 with CNN model and algorithms
To gain access to DFB-DB1 with the CNN model and algorithms, download the following request form. Please scan the request form and email it to Mr. Ja Hyung Koo (koo6190@naver.com).
Any work that uses this DFB-DB1 with CNN model and algorithms must acknowledge the authors by including the following reference.
< DFB-DB1 and CNN model Request Form >
Please complete the following form to request access to the DFB-DB1 and CNN model (all contents must be completed). This database and CNN model should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
24. Dongguk Night-Time Face Detection Database (DNFD-DB1) and Algorithm Including CNN Model
(1) Introduction
DNFD-DB1 is a self-constructed database acquired with a fixed single visible-light camera at a distance of approximately 20-22 m at night. The resolution of the camera is 1600 × 1200 pixels, but the images are cropped to the average adult height, which is approximately 600 pixels. A total of 2,002 images of 20 different people were prepared, with 4-6 people in each frame. To carry out 2-fold cross-validation, the 20 people were divided into two subsets of 10 people each (a sketch of this protocol follows the reference below). In addition, we made public two 2-stage Faster R-CNN models, trained with our DNFD-DB1 and the open database of Fudan University [1], respectively.
[1] Open database of Fudan University. Available online: https://cv.fudan.edu.cn/_upload/tpl/06/f4/1780/template1780/humandetection.htm (accessed on 26 March 2018).
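A minimal Python sketch of this subject-disjoint protocol (identity labels are placeholders) is:

import random

people = [f"person_{i:02d}" for i in range(20)]
random.seed(0)
random.shuffle(people)
fold_a, fold_b = people[:10], people[10:]

for train_ids, test_ids in [(fold_a, fold_b), (fold_b, fold_a)]:
    # train the face detector on images of train_ids,
    # then evaluate it on images of test_ids
    pass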
(2) DNFD-DB1 and CNN model Request
To gain access to DNFD-DB1 and the CNN model, download the following request form. Please scan the request form and email it to Mr. Se Woon Cho (jsu319@naver.com).
Any work that uses this DNFD-DB1 and CNN model must acknowledge the authors by including the following reference.
< DNFD-DB1 and CNN model Request Form >
Please complete the following form to request access to the DNFD-DB1 and CNN model (all contents must be completed). This database and CNN model should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
23. Dongguk Iris Spoof Detection CNN Model version 2 (DFSD-CNN-2) with Algorithm
(1) Introduction
We trained CNN models using local and global image features, based on the VGG-19 architecture, for presentation attack detection in iris recognition systems, using two public databases: Warsaw-2017 [1] and Notre Dame-2015 [2]. We have made the trained models open to other researchers.
1. Yambay, D.; Becker, B.; Kohli, N.; Yadav, D.; Czajka, A.; Bowyer, K. W.; Schuckers, S.; Singh, R.; Vatsa, M.; Noore, A.; Gragnaniello, D.; Sansone, C.; Verdoliva, L.; He, L.; Ru, Y.; Li, H.; Liu, N.; Sun, Z.; Tan, T. LivDet iris 2017 - iris liveness detection competition 2017. In Proceedings of the International Conference on Biometrics, Denver, CO, USA, 1-4 October 2017.
2. Doyle, J. S.; Bowyer, K. W. Robust detection of textured contact lenses in iris recognition using BSIF. IEEE Access 2015, 3, 1672-1683.
(2) DFSD-CNN-2 Model Request
To gain access to the models and algorithm, download the following DFSD-CNN-2 request form. Please sign and scan the request form and email it to Prof. Nguyen (nguyentiendat@dongguk.edu).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
D. T. Nguyen, T. D. Pham, Y. W. Lee, and K. R. Park, "Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor," Sensors, Vol. 18, Issue 8(2601), pp. 1-32, August 2018.
< DFSD-CNN-2 model Request Form >
Please complete the following form to request access to the DFSD-CNN-2 model (all contents must be completed). This CNN model should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
22. Dongguk Fitness Database (DF-DB2) & CNN Model
(1) Introduction
We collected banknote fitness databases (DF-DB2) for three national currencies: the Korean won (KRW), the Indian rupee (INR), and the United States dollar (USD).
Six denominations exist in the INR dataset (10, 20, 50, 100, 500, and 1000 rupees) and two in the KRW dataset (1000 and 5000 won), each of which consists of three fitness levels (fit, normal, and unfit for recirculation), called the case 1 fitness level. In these case 1 datasets, each banknote image was captured using VR sensors on both sides and IRT sensors on the front side. Five denominations exist for the USD (5, 10, 20, 50, and 100 dollars), divided into two fitness levels (fit and unfit), called the case 2 fitness level. Two images were captured per banknote: the VR and IRT images of one side of the banknote. In addition, we have made the CNN models trained on our DF-DB2 public.
(2) DF-DB2 and CNN model Request
To gain access to DF-DB2 and the CNN models, download the following request form. Please scan the request form and email it to Prof. Tuyen Danh Pham (phamdanhtuyen@gmail.com).
Any work that uses this DF-DB2 and CNN Model must acknowledge the authors by including the following reference.
< DF-DB2 and CNN model Request Form >
Please complete the following form to request access to the DF-DB2 and CNN model (all contents must be completed). This database and CNN model should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
21. Dongguk Body-Movement-based Human Identification Database version 2 (DBMHI-DB2) & CNN Model
(1) Introduction
We collected our database in both dark and bright environments. It includes both front- and back-view images of humans, and was collected in five different places on different days with the same camera height. The database consists of data from 100 people, both men and women. It includes both thermal and visible-light images, but only the thermal images were used in this research. The people in our database have different heights and widths; their sizes vary from 27 to 150 pixels in width and from 90 to 390 pixels in height. In addition, we have made our trained CNN and CNN-LSTM models public.
(2) DBMHI-DB2 database & the trained CNN model Request
To gain access to the database and CNN model, download the following DBMHI-DB2 and CNN model request form. Please scan the request form and email it to Mr. Ganbayar Batchuluun (ganabata87@dongguk.edu).
Any work that uses or incorporates the database and CNN model must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Hyo Sik Yoon, Jin Kyu Kang, and Kang Ryoung Park, "Gait-Based Human Identification by Combining Shallow Convolutional Neural Network-Stacked Long Short-Term Memory and Deep Convolutional Neural Network," IEEE Access, Vol. 6, pp. 63164-63186, October 2018.
< DBMHI-DB2 database & the trained CNN model Request Form >
Please complete the following form to request access to the DBMHI-DB2 and CNN model (all contents must be completed). This database and CNN model should not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
20. Dongguk Multimodal Recognition CNN of Finger-vein and Finger Shape (DMR-CNN) with Algorithm
(1) Introduction
We trained a multimodal recognition system of finger-vein and finger shape based on ResNet-50 and ResNet-101 using two databases: the Shandong University homologous multi-modal traits (SDUMLA-HMT) database [1] and the Hong Kong Polytechnic University Finger Image Database (version 1) [2]. We have made the trained models and algorithm open to other researchers.
1. SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm (accessed on 7 May 2018).
2. Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.
(2) DMR-CNN Model with Algorithm Request
To gain access to the models and algorithm, download the following DMR-CNN model with algorithm request form. Please sign and scan the request form and email it to Mr. Wan Kim (daiz0128@naver.com).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
W. Kim, J. M. Song, and K. R. Park, “Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor,” Sensors, Vol. 18, Issue 7(2296), pp. 1-34, July 2018.
< DMR-CNN model with algorithm Request Form >
Please complete the following form to request access to the DMR-CNN model with algorithm (all contents must be completed). This CNN model with algorithm should not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
19. Dongguk Drone Camera Database (DDroneC-DB2) with CNN models
(1) Introduction
In our experiments, we used a DJI Phantom 4 quadcopter to capture video while the drone was landing or hovering. It includes a color camera with a 1/2.3-inch complementary metal–oxide–semiconductor (CMOS) sensor, a 94° field of view (FOV), and an f/2.8 lens. The captured videos are in MPEG-4 (MP4) format at 30 fps, with a frame size of 1280 × 720 pixels. The drone's gimbal is adjusted 90° downward so that the camera faces the ground during landing. For our database (shown in Table 1), we captured three videos, acquiring footage in varying environmental conditions (humidity level, wind velocity, temperature, and weather). We also make our CNN model trained on this database, and the models trained on the PASCAL VOC and MS COCO databases, open to other researchers.
Table 1. Description of DDroneC-DB2

Morning
- Far: 3,088 images. Condition: humidity 44.7%, wind speed 5.2 m/s, temperature 15.2 °C, autumn, sunny, illuminance 1,800 lux. Description: landing speed 5.5 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200).
- Close: 641 images (same condition and description as Far).
- Close (from DDroneC-DB1): 425 images. Condition: humidity 41.5%, wind speed 1.4 m/s, temperature 8.6 °C, spring, sunny, illuminance 1,900 lux. Description: landing speed 4 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200).

Afternoon
- Far: 2,140 images. Condition: humidity 82.1%, wind speed 6.5 m/s, temperature 28 °C, summer, sunny, illuminance 2,250 lux. Description: landing speed 7 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200).
- Close: 352 images (same condition and description as Far).
- Close (from DDroneC-DB1): 148 images. Condition: humidity 73.8%, wind speed 2 m/s, temperature -2.5 °C, winter, cloudy, illuminance 1,200 lux. Description: landing speed 6 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200).

Evening
- Far: 3,238 images. Condition: humidity 31.5%, wind speed 7.2 m/s, temperature 6.9 °C, autumn, foggy, illuminance 650 lux. Description: landing speed 6 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200).
- Close: 326 images (same condition and description as Far).
- Close (from DDroneC-DB1): 284 images. Condition: humidity 38.4%, wind speed 3.5 m/s, temperature 3.5 °C, winter, windy, illuminance 500 lux. Description: landing speed 4 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200).
(2) Request for DDroneC-DB2 and CNN models
To gain access to the database with CNN models, download the following request form. Please scan the request form and email it to Mr. Phong Ha Nguyen (stormwindvn@dongguk.edu).
Any work that uses or incorporates the database must acknowledge the authors by including the following reference.
Phong Ha Nguyen, Muhammad Arsalan, Ja Hyung Koo, Rizwan Ali Naqvi, Noi Quang Truong, and Kang Ryoung Park, “LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by
Visible Light Camera Sensor on Drone,” Sensors, Vol. 18, Issue 6(1703), pp. 1-30, May 2018.
===========================================================================================================================================================================================================
< Request Form for DDroneC-DB2 and CNN models >
Please complete the following form to request access to the DDroneC-DB2 and CNN models (All contents must be completed). This database should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
18. Dongguk Periocular Database (DP-DB1) with CNN models and
algorithms
(1) Introduction
The DP-DB1 database was created for research on periocular recognition in an indoor surveillance environment. The images were captured with a Logitech BCC 950 camera, whose specifications include a 79° viewing angle, a maximum resolution of full high definition (Full HD, 1080p), and a frame rate of 30 fps with autofocus. The images were captured in an indoor hallway (with indoor lights on), with the camera installed at a height of 2 m 40 cm. The database consists of 20 people captured in three scenarios: straight-line movement, corner movement, and standing still. In the standing-still scenario, the images were acquired from four different positions. In addition, we release two CNN models, trained on DP-DB1 and on the open ChokePoint database [1, 2], respectively, together with our algorithms; a toy example of periocular ROI cropping is sketched after the references below.
1. Wong, Y.; Chen, S.; Mau, S.; Sanderson, C.; Lovell, B. C. Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, Colorado Springs, CO, USA, 20-25 June 2011; pp. 74-81.
2. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 21 Feb. 2018).
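For illustration only, periocular regions are commonly cropped around detected eyes before being fed to a CNN. The sketch below uses an OpenCV Haar eye cascade; the cascade choice and margin factor are our own assumptions, not the released DP-DB1 algorithm.

```python
# Minimal sketch: crop periocular ROIs around detected eyes with a Haar
# cascade. Cascade choice and margin factor are assumptions; this is not
# the released DP-DB1 algorithm.
import cv2

img = cv2.imread("subject.png")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(gray, 1.1, 5)):
    m = w // 2  # extra margin so the ROI covers brows and surrounding skin
    roi = img[max(0, y - m):y + h + m, max(0, x - m):x + w + m]
    cv2.imwrite(f"periocular_{i}.png", roi)
```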
(2) Request for DP-DB1 with CNN model and algorithms
To gain access to DP-DB1 with CNN model and algorithms, download the following request form. Please scan the request form and email it to Mr. Min Cheol Kim (mincheol9166@naver.com).
Any work that uses this DP-DB1 with CNN model and algorithms must acknowledge the authors by including the following reference.
===========================================================================================================================================================================================================
< Request form for DP-DB1 with CNN model and algorithms >
Please complete the following form to request access to the DP-DB1 with CNN model and algorithms (All contents must be completed). This database and CNN model with algorithms should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
17. Dongguk IrisDenseNet CNN Model
(DI-CNN) with Algorithm
(1) Introduction
We trained IrisDenseNet CNN models, based on the DenseNet and SegNet architectures, for iris segmentation, and evaluated segmentation performance using five databases: NICE-II [1], MICHE [2], CASIA distance [3], CASIA interval [3], and IITD [4]. We have made the trained models and algorithm open to other researchers; a toy dense encoder-decoder is sketched after the references below.
1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 28 December 2017).
2. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.
3. CASIA-Iris-databases. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 28 December 2017).
4. IIT Delhi Iris Database. Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm (accessed on 28 December 2017).
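For intuition about dense connectivity in a segmentation network, the sketch below builds a toy densely connected encoder-decoder that outputs a two-class (iris/non-iris) mask. The depth and channel counts are toy values; this is not the released IrisDenseNet architecture.

```python
# Minimal sketch: a tiny densely connected encoder-decoder that outputs
# a two-class (iris / non-iris) mask. Channel counts and depth are toy
# values; NOT the released IrisDenseNet architecture.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth, layers):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for conv in self.convs:
            x = torch.cat([x, conv(x)], dim=1)  # dense connectivity
        return x

class TinyDenseSeg(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.block1 = DenseBlock(32, growth=16, layers=4)
        self.down = nn.MaxPool2d(2)
        self.block2 = DenseBlock(self.block1.out_channels, 16, 4)
        self.up = nn.ConvTranspose2d(self.block2.out_channels,
                                     64, 2, stride=2)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        x = self.block1(self.stem(x))
        x = self.block2(self.down(x))
        return self.head(self.up(x))  # per-pixel class logits

if __name__ == "__main__":
    net = TinyDenseSeg()
    print(net(torch.randn(1, 3, 128, 128)).shape)  # (1, 2, 128, 128)
```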
(2) DI-CNN Model with Algorithm Request
To gain access to the models and algorithm, download the following DI-CNN model with algorithm request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).
Any work that uses this CNN model must acknowledge the authors by including the following reference.
Muhammad Arsalan, Rizwan Ali Naqvi, Dong Seop Kim, Phong Ha Nguyen, Muhammad Owais and Kang Ryoung Park, “IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors,” Sensors, Vol. 18, Issue 5(1501), pp. 1-30, May 2018.
< DI-CNN model with algorithm Request Form >
Please complete the following form to request access to the DI-CNN model with algorithm (All contents must be completed). This CNN model with algorithm should not be used for commercial use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
16. Dongguk Visible Light Iris Recognition CNN Model (DVLIR-CNN)
(1) Introduction
We developed an iris-region recognition algorithm based on three convolutional neural networks (CNNs), trained with the NICE-II training database [1, 2], the mobile iris challenge evaluation (MICHE) database [3, 4], and the CASIA-Iris-Distance database [5], respectively. We have made these trained CNN models open to other researchers; a toy feature-matching example is sketched after the references below.
1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 26 July 2017).
2. Proença, H.; Filipe, S.; Santos, R.; Oliveira, J.; Alexandre, L. A. The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1529-1535.
3. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.
4. Haindl, M.; Krupička, M. Unsupervised detection of non-iris occlusions. Pattern Recognit. Lett. 2015, 57, 60-65.
5. CASIA-Iris-Distance. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 13 November 2017).
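For illustration, the sketch below shows a generic CNN feature-extraction and cosine-similarity matching pipeline for two iris images. The backbone and threshold are our own stand-ins; this is not the released DVLIR-CNN model.

```python
# Minimal sketch: CNN feature extraction from two iris images followed
# by cosine-similarity matching. Backbone choice and threshold are
# assumptions; this is not the released DVLIR-CNN model.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=None)   # stand-in feature extractor
backbone.fc = torch.nn.Identity()          # yields a 512-d embedding
backbone.eval()

def embed(img):                            # img: (1, 3, H, W) tensor
    with torch.no_grad():
        return F.normalize(backbone(img), dim=1)

enrolled = embed(torch.randn(1, 3, 224, 224))  # dummy enrolled iris
probe = embed(torch.randn(1, 3, 224, 224))     # dummy probe iris
score = float((enrolled * probe).sum())        # cosine similarity
print("match" if score > 0.8 else "non-match", score)  # 0.8: toy threshold
```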
(2) DVLIR-CNN model Request
To gain access to the CNN models, download the following DVLIR-CNN model request form. Please scan the request form and email it to Mr. Min Beom Lee (mblee@dongguk.edu).
Any work that uses this CNN Model must acknowledge the authors by including the following reference.
===========================================================================================================================================================================================================
< DVLIR-CNN model Request Form >
Please complete the following form to request access to the DVLIR-CNN model (All contents must be completed). This CNN model should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
15. Dongguk Aggressive and Smooth Driving Database (DASD-DB1) and
CNN Model
(1) Introduction
Fifteen subjects voluntarily participated in our experiments. Because it was too risky to create an aggressive driving situation under real traffic conditions, we utilized two types of driving simulators to assess aggressive and smooth driving situations. As illustrated in Figure 1, the experiment included 5 min of smooth driving and another 5 min of aggressive driving. Between the sections of the experiment, every subject watched a sequence of neutral images from the international affective picture system, thereby maintaining a neutral emotional state. After the experiment, the subjects rested for about 10 min. This procedure was repeated three times.
(2) DASD-DB1 and CNN model Request
To gain access to DASD-DB1 and CNN models, download the following request form. Please scan the request form and email it to Mr. Kwan Woo Lee (leekwanwoo@dgu.edu).
Any work that uses this DASD-DB1 and CNN Model must acknowledge the authors by including the following reference.
===========================================================================================================================================================================================================
< DASD-DB1 and CNN model Request Form >
Please complete the following form to request access to the DASD-DB1 and CNN model (All contents must be completed). This database and CNN model should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
14. Dongguk Night-time Pedestrian Detection Faster R-CNN and
Algorithm
(1) Introduction
We developed a modified Faster R-CNN model with algorithm for pedestrian detection at nighttime, trained with augmented images from the KAIST database [1] and the Caltech database [2]. We have made this trained CNN model open to other researchers; a toy detection example is sketched after the references below.
1. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7-12 June 2015; pp. 1037-1045.
2. Dollár, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743-761.
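For illustration, the sketch below runs pedestrian detection with torchvision's stock Faster R-CNN; the released model is a modified Faster R-CNN trained on augmented night-time images, so this stand-in only shows the detection interface. The input file name and score threshold are hypothetical.

```python
# Minimal sketch: pedestrian detection with torchvision's stock Faster
# R-CNN (COCO label 1 = person). This stand-in illustrates only the
# detection interface, not the released modified model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
img = read_image("night_scene.png").float() / 255.0  # hypothetical image
with torch.no_grad():
    out = model([img])[0]
for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if label.item() == 1 and score > 0.5:   # person class, toy threshold
        print("pedestrian at", [round(v) for v in box.tolist()])
```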
(2) Modified faster R-CNN model with algorithm Request
To gain access to the CNN models with algorithm, download the following request form. Please scan the request form and email it to Mr. Jong Hyun Kim (zzingae@dongguk.edu).
Any work that uses this CNN Model with algorithm must acknowledge the authors by including the following reference.
===========================================================================================================================================================================================================
< Modified faster R-CNN model with algorithm Request Form >
Please complete the following form to request access to this CNN model with algorithm (All contents must be completed). This CNN model with algorithm should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
13. Dongguk Shadow Detection Database (DSDD-DB1) & CNN Model
(1) Introduction
DSDD-DB1 was obtained by installing visible light cameras 5 to 10 m above the ground, which approximates the conventional height of surveillance cameras. As shown in Figure 1 and Table 1, images were captured in the morning, the afternoon, the evening, and on rainy days, under various weather conditions, temperatures, and illumination levels. A total of 24,000 images, constituting five sub-datasets, were obtained. The original image size is 800 × 600 pixels with three RGB channels.
Table 1. Description of five datasets.

Dataset | Condition | Detailed Description
I | −0.9 °C, afternoon, sunny, humidity 24%, wind 3.6 m/s | Shadow with dark color cast due to strong sunlight.
II | −6.0 °C, afternoon, cloudy, humidity 39%, wind 1.9 m/s | Sunlight weakened by clouds, so that a shadow of lighter color is cast.
III | 8.0 °C, evening, cloudy, humidity 42%, wind 3.5 m/s | Darker image due to weak evening sunlight. Long and numerous shadows due to the sun's position in the evening and reflections on buildings.
IV | −5.2 °C, morning, sunny, humidity 37%, wind 0.6 m/s | Background and object become less distinguishable due to strong morning sunlight.
V | 13.8 °C, afternoon, rainy, humidity 65%, wind 2.0 m/s | Overall dark image due to rain. Many shadows generated by the wet background floor.
In addition, we have made public two CNN models, trained on our DSDD-DB1 and on the open CAVIAR database [1], respectively; a toy patch-classification example is sketched after the reference below.
[1] CAVIAR: Context Aware Vision using Image-based Active Recognition. Available online: http://homepages.inf.ed.ac.uk/rbf/CAVIAR/ (accessed on 8 August 2017).
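For illustration, shadow detection can be framed as patch-level classification; the toy CNN below classifies 32 × 32 patches as shadow or non-shadow. The patch size, depth, and channel counts are our own assumptions; this is not the released DSDD-DB1 model.

```python
# Minimal sketch: a small CNN that classifies image patches as shadow
# vs. non-shadow. Patch size, depth, and channels are toy assumptions;
# this is not the released DSDD-DB1 model.
import torch
import torch.nn as nn

class ShadowPatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))                        # 16 -> 8
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # shadow / non-shadow

    def forward(self, x):                           # x: (B, 3, 32, 32)
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    net = ShadowPatchNet()
    patches = torch.randn(4, 3, 32, 32)  # dummy 32x32 RGB patches
    print(net(patches).shape)            # torch.Size([4, 2])
```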
(2) DSDD-DB1 and CNN model Request
To gain access to DSDD-DB1 and CNN models, download the following request form. Please scan the request form and email it to Mr. Dong Seop Kim (k_ds1028@naver.com).
Any work that uses this DSDD-DB1 and CNN Model must acknowledge the authors by including the following reference.
===========================================================================================================================================================================================================
< DSDD-DB1 and CNN model Request Form >
Please complete the following form to request access to the DSDD-DB1 and CNN model (All contents must be completed). This database and CNN model should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)