< Dongguk Open Databases & CNN Model >

 

---------------------------------------------

67. Dongguk Korean Banknote Database Version1 (DKB v1) with Faster R-CNN model and post-processing algorithms

66. Dongguk Face and Body Database Version2 (DFB-DB2) with GAN model, CNN models, and algorithms

65. Dongguk Computer-Aided Framework to Diagnose Tuberculosis from Chest X-Ray Images

64. Dongguk blurred gaze database (DBGD) and CycleGAN model

63. Dongguk Models for Thermal Image Super-resolution Reconstruction and Deblurring

62. Dongguk RPS-Net based retinal pigment sign detection model (DRPM) with Algorithms

61. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms

60. CNN model for Thermal Reflection Removal

59. Synthesized Low Light Cambridge-driving Labeled Video Database (Syn-CamVid), Synthesized Low Light Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (Syn-KITTI) database, and Algorithm Including CNN Models

58. Dongguk Drone Motion Blur Dataset - Versions 1 and 2 (DDBD-DB1 and DDBD-DB2) & Pretrained Models

57. Dongguk X-RayNet Model with Algorithms (DXM)

56. Dongguk Mitotic Cell Detection Models (DMM)

55. Dongguk CNN Models for Fake Banknote Image Classification Using Visible-Light Images Captured by Smartphone Camera

54. Dongguk mobile finger wrinkle database versions 1 and 2 (DMFW-DB1 and DMFW-DB2), and GAN with CNN models for motion deblurring

53. Enhanced Ultrasound Thyroid Nodule Classification (US-TNC-V2) Algorithm

52. Dongguk generation model of presentation attack face images (DG_FACE_PAD_GEN)

51. Dongguk Spatiotemporal Features-Based Classification Network (DenseNet+LSTM) to Classify Multiple Gastrointestinal Diseases, Including the Video Indices of Experimental Endoscopy Videos

50. Dongguk Modified Conditional GAN & Deep CNN Models, and Generated Images

49. Dongguk Super-resolution Reconstruction & Age Estimation CNN Model (DSR&AE-CNN)

48. Dongguk ESSN models and algorithm for Semantic Segmentation

47. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles Database (DI&V-DB)

46. Dongguk Ultrasound Thyroid Nodule Classification (DUS-TNC) algorithm

45. Dongguk Modified CycleGAN for Age Estimation (DMC4AE) and Generated Images

44. Dongguk Vess-Net Models with Algorithm

43. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with Algorithms

42. Dongguk CNN stacked LSTM and CycleGAN for Action Recognition, Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)

41. Label Information of Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN).

40. Dongguk cGAN-based Iris Image Generation Model and Generated Images (DGIM&GI)

39. Dongguk CNN and LSTM models for the classification of multiple gastrointestinal (GI) diseases, and video indices of experimental endoscopic videos

38. Dongguk Dual-Camera-based Gaze Database (DDCG-DB1) and CNN models with Algorithms

37. Dongguk Mobile Finger-Wrinkle Database (DMFW-DB1) and CNN models with Algorithms

36. Dongguk low-resolution drone camera dataset & CNN models

35. Dongguk CNN Model for CBMIR

34. Dongguk Person ReID CNN Models (DPRID-CNN)

33. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with algorithms

32. Dongguk OR-Skip-Net Model for Image Segmentation with Algorithm and Black Skin People (BSP) Label Information

31. Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with algorithms

30. Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases

29. Dongguk CNN Model for NIR Ocular Recognition (DC4NO) with algorithm

28. Dongguk Face Presentation Attack Detection Algorithms by Spatial and Temporal Information (DFPAD-STI)

27. Dongguk Dual Camera-based Driver Database (DDCD-DB1) and Trained Faster R-CNN Model with Algorithm

26. Dongguk FRED-Net with Algorithm

25. Dongguk Face and Body Database (DFB-DB1) with CNN models and algorithms

24. Dongguk Night-Time Face Detection database (DNFD-DB1) and algorithm including CNN model

23. Dongguk Iris Spoof Detection CNN Model version 2 (DFSD-CNN-2) with Algorithm

22. Dongguk Fitness Database (DF-DB2) & CNN Model

21. Dongguk-body-movement-based human identification database version 2 (DBMHI-DB2) & CNN Model

20. Dongguk Multimodal Recognition CNN of Finger-vein and Finger shape (DMR-CNN) with Algorithm

 

 

67. Dongguk Korean Banknote Database Version1 (DKB v1) with Faster R-CNN model and post-processing algorithms

 

(1) Introduction

The DKB v1 contains eight classes, namely, 10, 50, 100, 500, 1000, 5000, 10000, and 50000 KRW, with each class having 800 images, yielding a total of 6,400 images. The images were photographed using the frontal viewing camera of a Galaxy Note 5 [36]. The banknotes were captured from various distances. To reflect the real-world environment as closely as possible, the images were captured at various locations, under various lighting conditions, and in cases where the bills were randomly folded. The size of the obtained images is 1920 × 1080 pixels. Furthermore, an experiment was conducted using the open JOD database to verify whether the proposed algorithm can be applied to various types of banknote images. The JOD open database contains nine classes (i.e., 1 qirsh; 5 and 10 piastres; and 1/4, 1/2, 1, 5, 10, and 20 dinars), yielding a total of 330 images. The size of the obtained images is 3264 × 2448 pixels. We use these databases with Faster R-CNN and three post-processing algorithms.

 

(2) Request for DKB v1 and Faster R-CNN model

To gain access to the DKB v1 with Faster R-CNN model and post processing algorithms, download the following request form. Please scan the request form and email to Mr. Chan Hum Park (pipetsupport@naver.com). Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Chan Hum Park, Se Woon Cho, Na Rae Baek, Jiho Choi, and Kang Ryoung Park, Deep Feature-based Three-stage Detection of Banknotes and Coins for Assisting Visually Impaired People, IEEE Access, in submission.

 

 

< Request Form for database and Models >

 

Please complete the following form to request access to our database and trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                            Name (signature)

 

 

 

66. Dongguk Face and Body Database Version2 (DFB-DB2) with GAN model, CNN models, and algorithms

 

(1) Introduction

DFB-DB2 was created from images of 22 people obtained by two types of cameras to assess the performance of the proposed method in a variety of camera environments. The first camera was a Logitech BCC 950; its specifications include a viewing angle of 78°, a maximum resolution of full high-definition (HD) 1080p, and auto-focusing at 30 frames per second (fps). The second camera was a Logitech C920, whose specifications include a maximum resolution of full HD 1080p, a viewing angle of 78° at 30 fps, and auto-focusing. Images were taken in an indoor environment with the lights on, and each camera was installed at a height of 2.4 m. The database is divided into two categories according to the camera: the first comprises the images captured by the Logitech BCC 950, and the second comprises the images obtained by the Logitech C920 at a similar camera angle. DFB-DB2 differs from DFB-DB1 in that it contains blurred images, which are not included in DFB-DB1.

In addition, we make our GAN model, two CNN models trained with DFB-DB2 and the open ChokePoint database [1], and our algorithms publicly available.

 

1. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 20 June 2020).

 

(2) Request for DFB-DB2 with GAN model, CNN model, and algorithms

To gain access to DFB-DB2 with GAN model, CNN model and algorithms, download the following request form. Please scan the request form and email to Mr. Ja Hyung Koo (koo6190@naver.com).

Any work that uses this DFB-DB2 with GAN model, CNN model and algorithms must acknowledge the authors by including the following reference.

 

Ja Hyung Koo, Se Woon Cho, Na Rae Baek, and Kang Ryoung Park, Face and Body-based Human Recognition by GAN-based Blur Restoration, Sensors, in submission.

 

 

< DFB-DB2, GAN model, and CNN model Request Form >

 

Please complete the following form to request access to the DFB-DB2, GAN model, and CNN model (All contents must be completed). This database, GAN model, and CNN model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

65. Dongguk Computer-Aided Framework to Diagnose Tuberculosis from Chest X-Ray Images

 

(1) Introduction

We proposed a novel deep learning-based computer-aided framework to diagnose Tuberculosis from a given CXR image and provide the appropriate visual and descriptive information from a previous database. Such information can further assist radiologists to subjectively validate the computer decision. Thus, both subjective and computer decisions will validate each other and ultimately result in effective diagnosis and treatment.

 

(2) Request for Our Model and Dataset Indices

To obtain our trained model and the training and testing data splitting information, please fill the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arslan, Tahir Mahmood, Yu Hwan Kim, and Kang Ryoung Park, Mining-based Diagnosis: A Comprehensive Computer-Aided Framework to Diagnose Tuberculosis from Chest X-Ray Images based on Multi-Scale Information Fusion, Journal of Medical Internet Research, in submission.

 

 

< Request Form for Models and Databases Indices>

 

Please complete the following form to request access to our trained model and the training and testing data splitting information. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

64. Dongguk blurred gaze database (DBGD) and CycleGAN model

 

(1) Introduction

The blurred gaze database [Dongguk blurred gaze database (DBGD)] is constructed from images of 26 drivers captured by dual near-infrared (NIR) cameras with illuminators in a vehicle environment, and classified into 16 situations, such as wearing sunglasses, different types of glasses, or hats, and using mobile phones. We make DBGD and our CycleGAN model trained with this database open to other researchers.

 

(2) Request for DBGD and CycleGAN model

To gain access to the DBGD with CycleGAN model, download the following request form. Please scan the request form and email to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Hyo Sik Yoon and Kang Ryoung Park, CycleGAN-based Deblurring for Gaze Tracking in Vehicle Environments, IEEE Access, in submission.

 

 

< Request Form for database and Models >

 

Please complete the following form to request access to our database and trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                            Name (signature)

 

 

 

63. Dongguk Models for Thermal Image Super-resolution Reconstruction and Deblurring

 

(1) Introduction

We trained GAN models with our thermal image database and an open database for the purpose of thermal image reconstruction. In the proposed super-resolution method, a low-resolution image and an original image are used as inputs to the GAN model. In the proposed deblurring method, a blurred image and an original image are used as inputs to the GAN model. We made both models (super-resolution reconstruction and deblurring) open to other researchers.
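
As a rough illustration of how such (degraded, original) training pairs could be prepared, the sketch below builds a bicubic low-resolution input for the super-resolution task and a Gaussian-blurred input for the deblurring task. The scale factor, kernel size, and sigma are assumptions for illustration, not the settings used for our models.

# Minimal sketch (assumed parameters) of preparing (degraded, original) pairs.
import cv2

def make_sr_pair(path, scale=4):
    """Return (low-resolution input, original target) for super-resolution."""
    hr = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # thermal images are single-channel
    h, w = hr.shape
    lr = cv2.resize(hr, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
    return lr, hr

def make_deblur_pair(path, ksize=9, sigma=3.0):
    """Return (blurred input, original target) for deblurring."""
    sharp = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(sharp, (ksize, ksize), sigma)
    return blurred, sharp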

 

(2) Request for Models

To obtain our pretrained models, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Young Won Lee, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, Thermal Image Reconstruction Using Deep Learning, IEEE Access, in submission.

 

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained model. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

 

62. Dongguk RPS-Net based retinal pigment sign detection model (DRPM) with Algorithms

 

(1) Introduction

In this study, we proposed an accurate retinal pigment segmentation network (RPS-Net) that segments pigment signs for diagnostic purposes. RPS-Net is a deep learning-based semantic segmentation network specifically designed to accurately detect and segment pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes and to accurately segment the diseased area from the background.

 

(2) Request for Models

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Muhammad Arsalan at arsal@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Na Rae Baek, Muhammad Owais, Tahir Mahmood, and Kang Ryoung Park, "Deep Learning-based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa," Sensors, in submission.

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                            Name (signature)

 

 

 

61. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms

 

(1) Introduction

In this study, we proposed a finger-vein recognition system based on a score-level fusion method with shape and texture images. To extract the matching score of each shape image and texture image, a revised DenseNet-161 with composite image input is used. The finger-vein recognition models trained with our experimental databases are made available to other researchers for fair performance comparison.
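
For illustration only, the sketch below shows one common form of score-level fusion, a weighted sum of the two matching scores; the paper's exact fusion rule, weight, and decision threshold may differ.

# Hedged sketch of score-level fusion (a weighted sum is shown for illustration).
def fuse_scores(shape_score: float, texture_score: float, w: float = 0.5) -> float:
    """Combine the matching scores of the shape and texture branches.

    Lower scores are assumed to mean a better match (e.g., a distance);
    the fused score keeps that convention.
    """
    return w * shape_score + (1.0 - w) * texture_score

def is_genuine(shape_score, texture_score, threshold=0.4):
    # Accept the finger-vein claim if the fused score falls below a
    # threshold tuned on a validation set (the value here is hypothetical).
    return fuse_scores(shape_score, texture_score) < threshold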

 

(2) Request for Models

To gain access to our pretrained models with algorithms, please sign and scan the request form and send an email to Mr. Kyoung Jun Noh at nohkyungjun@dongguk.edu. Any work that uses our models and algorithm must acknowledge the authors by including the following reference.

 

Kyoung Jun Noh, Jiho Choi, Jin Seong Hong and Kang Ryoung Park, Finger-vein Recognition Based on Densely Connected Convolutional Network Using Score-Level Fusion with Shape and Texture Images, IEEE Access, Vol. 8, pp. 96748-96766, June 2020.

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

60. CNN model for Thermal Reflection Removal

 

(1) Introduction

We trained the CNN model with our thermal image database and an open database for the purpose of thermal reflection removal. In the proposed method, a region image and an original image are used as inputs to the CNN model. We made the model (a pruned fully convolutional network (PFCN)) open to other researchers.

 

(2) Request for Models

To obtain our pretrained models please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Na Rae Baek, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, Region-based Removal of Thermal Reflection using Pruned Fully Convolutional Network, IEEE Access, Vol. 8, pp. 75741-75760, May 2020.

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

59. Synthesized Low Light Cambridge-driving Labeled Video Database (Syn-CamVid), Synthesized Low Light Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (Syn-KITTI) database, and Algorithm Including CNN Models

 

(1) Introduction

We used synthesized databases that resemble real low-light environments to perform multi-class segmentation in low light. Images taken in real low-light or nighttime environments have poor quality and visibility due to low brightness, blur, and noise; this makes it difficult for humans to create segmentation labels for all the objects in an image, and such labels are not accurate. Therefore, to obtain accurate segmentation labels and paired images, experiments were performed using the Syn-CamVid and Syn-KITTI databases, which result from converting the daytime CamVid and KITTI databases into low-light images, respectively. To create extremely low-light images similar to an actual low-light environment with little external light, we used existing low-light image generation methods in combination. In a real low-light environment with little external light, the brightness value does not decrease linearly: comparing a daytime image with a nighttime image, the brightness of highly bright pixels decreases more, whereas that of pixels with lower brightness decreases less. We used gamma correction to produce this nonlinear brightness change. In a low-light environment, blurry images are captured due to the amount of light and the camera's exposure time, and we used a Gaussian blur kernel to implement this effect. Finally, the noise in a low-light image is generated by the camera sensor, which we added using Gaussian and Poisson noise functions.
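
A minimal sketch of this synthesis pipeline is given below, assuming OpenCV and NumPy; the gamma value, blur kernel, and noise scales are illustrative assumptions rather than the exact settings used to build Syn-CamVid and Syn-KITTI.

# Illustrative low-light synthesis: gamma correction, Gaussian blur,
# then Poisson (signal-dependent) and Gaussian (read) noise.
import cv2
import numpy as np

def synthesize_low_light(img_bgr, gamma=3.0, ksize=5, sigma=1.5, read_noise=2.0):
    img = img_bgr.astype(np.float32) / 255.0
    dark = np.power(img, gamma)                           # nonlinear brightness reduction
    dark = cv2.GaussianBlur(dark, (ksize, ksize), sigma)  # blur from long exposure
    photons = np.random.poisson(dark * 255.0)             # Poisson sensor noise
    noisy = photons + np.random.normal(0.0, read_noise, dark.shape)  # Gaussian read noise
    return np.clip(noisy, 0, 255).astype(np.uint8)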

 

(2) Request for Our Models and Algorithms

To gain access to our datasets and pretrained models with algorithm, please sign and scan the request form and email to Mr. Se Woon Cho at jsu319@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Se Woon Cho, Na Rae Baek, Ja Hyung Koo, Muhammad Arsalan, and Kang Ryoung Park, "Semantic Segmentation with Low Light Images by Modified CycleGAN-based Image Enhancement", IEEE Access, Vol. 8, pp. 93561-93585, June 2020.

 

< Request Form for Models and Databases >

Please complete the following form to request access to our trained models and databases. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

58. Dongguk Drone Motion Blur Dataset - Versions 1 and 2 (DDBD-DB1 and DDBD-DB2) & Pretrained Models

 

(1) Introduction

We used the Dongguk drone camera dataset ver.2 (DDroneC-DB2) open dataset to generate two datasets by two different methods, denoted as the synthesized motion blur drone database 1 (SMBD-DB1) and synthesized motion blur drone database 2 (SMBD-DB2). For the first dataset, motion-blurred images were generated by applying motion-blurring kernels, which are created by applying subpixel interpolation to a trajectory vector. Each trajectory vector, a complex-valued vector, corresponds to the discrete positions of an object undergoing 2D random motion in a continuous domain. For the second dataset, we synthesized a dataset containing realistic motion blur, close to motion blur in the wild. Specifically, we used a video frame interpolation model to increase the frame rate of DDroneC-DB2 videos from 30 to 120 FPS. Then, we generated blurred images by averaging consecutive frames of the generated high-frame-rate videos. With these two datasets, we trained and evaluated the proposed deblurring CNN and marker detection CNN. We made our synthesized datasets and CNN models publicly available for fair comparison and reproduction of our results.
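
The second method can be sketched as follows, assuming a high-frame-rate video is already available; the window length and the choice of the middle frame as the sharp target are assumptions for illustration.

# Sketch of synthesizing realistic motion blur by averaging consecutive frames.
import cv2
import numpy as np

def blur_from_video(video_path, n_frames=8):
    """Yield (blurred, sharp) pairs: the average of n consecutive frames
    and the middle frame of each window."""
    cap = cv2.VideoCapture(video_path)
    window = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        window.append(frame.astype(np.float32))
        if len(window) == n_frames:
            blurred = np.mean(window, axis=0).astype(np.uint8)
            sharp = window[n_frames // 2].astype(np.uint8)
            yield blurred, sharp
            window = []   # non-overlapping windows; a sliding window also works
    cap.release()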

 

(2) Request for Our Models and Algorithms

To gain access to our datasets and pretrained models with algorithm, please sign and scan the request form and email to Prof. Tuyen Danh Pham at phamdanhtuyen@gmail.com. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Noi Quang Truong, Young Won Lee, Muhammad Owais, Dat Tien Nguyen, Ganbayar Batchuluun, Tuyen Danh Pham*, and Kang Ryoung Park, SlimDeblurGAN-based Motion Deblurring and Marker Detection for Autonomous Drone Landing, Sensors, in submission.

 

< Request Form for Models and Databases >

Please complete the following form to request access to our trained models and databases. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

57. Dongguk X-RayNet Model with Algorithms (DXM)

 

(1) Introduction

In this study, semantic segmentation-based automatic cardiothoracic ratio (CTR) estimation is proposed. The CTR is important for diagnosing cardiac and related diseases. The proposed method consists of two multiclass segmentation networks (X-RayNet1 and X-RayNet2) that provide accurate boundaries of chest anatomical structures such as the lungs, heart, and clavicle bones. The accurate boundary segmentation of these anatomies helps to compute the CTR automatically, where the CTR is considered a biomarker for cardiomegaly and other diseases. Three publicly available datasets, the Japanese Society of Radiological Technology (JSRT), Montgomery County (MC), and Shenzhen (SC) X-ray sets, were used to evaluate the performance of the proposed network. The experimental results show that our method outperforms existing approaches and provides accurate boundaries for CTR computation. We made our models publicly available for fair comparison and result reproduction. All the experiments were implemented in MATLAB R2019a.
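
As a hedged illustration of the final step, the sketch below computes a CTR from binary heart and lung masks as the maximal horizontal cardiac width divided by the maximal thoracic width; approximating the thoracic width by the lung-field extent is our assumption, and the paper may measure it differently.

# Toy CTR computation from binary segmentation masks.
import numpy as np

def horizontal_width(mask: np.ndarray) -> int:
    """Widest horizontal extent (in pixels) over all rows of a binary mask."""
    widths = []
    for row in mask:
        cols = np.flatnonzero(row)
        if cols.size:
            widths.append(cols[-1] - cols[0] + 1)
    return max(widths) if widths else 0

def cardiothoracic_ratio(heart_mask, lungs_mask):
    # The thoracic width is approximated here by the lung-field extent.
    return horizontal_width(heart_mask) / max(horizontal_width(lungs_mask), 1)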

 

(2) Request for Our Models and Algorithms

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Muhammad Arsalan at arsal@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Jiho Choi, and Kang Ryoung Park, "Artificial Intelligence-based Diagnosis of Cardiac and Related Diseases," Journal of Clinical Medicine, Vol. 9, Issue 3(871), pp. 1-27, March 2020.

 

< Request Form for Models and Algorithms >

Please complete the following form to request access to our trained models and algorithms. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

56. Dongguk Mitotic Cell Detection Models (DMM)

 

(1) Introduction

In this study, we proposed a multistage mitosis detection method based on a faster region-based convolutional neural network (Faster R-CNN) and deep CNNs. Two open breast cancer datasets, ICPR 2012 and MITOS-ATYPIA-14, are used. Our proposed technique outperforms the existing techniques. We made our models publicly available to allow other researchers to reproduce our results and make fair comparisons. All the experiments were implemented in MATLAB R2019a.

 

(2) Request for Our Models and Algorithms

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Tahir Mahmood at tahirmahmood@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Tahir Mahmood, Muhammad Arsalan, Muhammad Owais, Min Beom Lee, and Kang Ryoung Park, "Artificial Intelligence-based Mitosis Detection in Breast Cancer Histopathology Images Using Faster R-CNN and Deep CNNs", Journal of Clinical Medicine, Vol. 9, Issue 3(749), pp. 1-25, March 2020.

 

< Request Form for Models and Algorithms >

Please complete the following form to request access to our trained models and algorithms. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

55. Dongguk CNN Models for Fake Banknote Image Classification Using Visible-Light Images Captured by Smartphone Camera

 

(1) Introduction

In this study, we proposed a fake banknote classification method using CNNs on banknote images captured by smartphone cameras under visible-light conditions. The fake banknote dataset used for the experiments consists of images of banknotes of three national currencies: EUR (EUR 5, EUR 10, EUR 20, EUR 50, and EUR 100), USD (USD 1, USD 5, USD 10, USD 20, USD 50, and USD 100), and KRW (KRW 1000, KRW 5000, KRW 10,000, and KRW 50,000). The fake banknotes were created by capturing genuine banknotes with a scanner and smartphone cameras and printing them out with a color printer. We subsequently captured banknote images with the same smartphones while holding the fake and genuine banknotes in front of the cameras or placing them on tables. The training process was conducted using the MATLAB implementation of CNNs with the AlexNet, ResNet-18, and GoogLeNet architectures.
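
The training itself was done in MATLAB; as a rough, non-authoritative equivalent, the PyTorch sketch below fine-tunes an ImageNet-pretrained ResNet-18 for two classes (genuine vs. fake), assuming a recent torchvision. The folder layout, hyperparameters, and single-epoch loop are illustrative assumptions.

# Hypothetical PyTorch fine-tuning sketch (not the study's MATLAB setup).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("banknotes/train", transform=tfm)  # assumed folder layout
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the 1000-class head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:       # one epoch shown; train longer in practice
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()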

 

(2) Request for Our Models and Algorithm

To gain access to these files, download the following request form. Please scan the request form and email to Dr. Tuyen Danh Pham (phamdanhtuyen@dongguk.edu). Any work that uses these files with algorithm must acknowledge the authors by including the following reference.

 

Tuyen Danh Pham, Chanhum Park, Dat Tien Nguyen, Ganbayar Batchuluun, and Kang Ryoung Park, Deep Learning-Based Fake-Banknote Detection Using Visible-Light Images Captured by Smartphone Cameras, IEEE Access, Vol. 8, pp. 63144-63161, April 2020.

 

 

 

< Request Form for Models, Algorithm, and Databases >

Please complete the following form to request access to our trained models and algorithm. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

54. Dongguk mobile finger wrinkle database versions 1 and 2 (DMFW-DB1 and DMFW-DB2), and GAN with CNN models for motion deblurring

 

(1) Introduction

To evaluate performance on images captured by a variety of smartphone cameras, DMFW-DB2 was collected using the rear camera of a Samsung Galaxy S8+. Frames were extracted from the captured videos at 30 fps, and motion-blurred images were obtained by averaging consecutive captured frames. In addition, DMFW-DB1 (see item 37) was artificially blurred by a motion-blurring kernel. This study used DeblurGAN to restore the motion-blurred images of DMFW-DB1 and DMFW-DB2. The restored images obtained by DeblurGAN are used as the input to a ResNet-101 to perform finger-wrinkle recognition.

 

(2) Request for Our Models, Algorithm, and Databases

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Nam Sun Cho at diko93@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Nam Sun Cho, Chan Sik Kim, Chanhum Park, and Kang Ryoung Park, "GAN-based Blur Restoration for Finger Wrinkle Biometrics System", IEEE Access, Vol. 8, pp. 49857- 49872, March 2020.

 

 

< Request Form for Models and Algorithm >

 

Please complete the following form to request access to our trained models and algorithm. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

53. Enhanced Ultrasound Thyroid Nodule Classification (US-TNC-V2) Algorithm

 

(1) Introduction

In this study, we enhance the classification performance of an ultrasound image-based thyroid nodule classification system. The pretrained model was successfully trained using the TDID dataset [1].

 

[1] Pedraza, L.; Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero, E. An open access thyroid ultrasound-image database. In Proceedings of the 10th International Symposium on Medical Information Processing and Analysis, Colombia, 28 January, 2015 (in SPIE Proceedings, Vol. 9287, pp. 1-6).

 

(2) Request for Our Models and Algorithm

To gain access to our algorithm and pretrained models, please sign and scan the request form and email it to Prof. D. T. Nguyen at nguyentiendat@dongguk.edu. Any work that uses our algorithm must acknowledge the authors by including the following reference.

 

D. T. Nguyen, et al. "Ultrasound Image-based Diagnosis of Malignant Thyroid Nodule Using Artificial Intelligence", Sensors, Vol. 20, Issue 7(1822), pp. 1-23, March 2020.

 

 

< Request Form for Models and Algorithm >

 

Please complete the following form to request access to our trained models and algorithm. These models should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

52. Dongguk generation model of presentation attack face images (DG_FACE_PAD_GEN)

 

(1) Introduction

We trained our generative adversarial network (GAN)-based model to artificially generate presentation attack (PA) face images to reduce the efforts of PA image acquisition.

 

(2) Request for obtaining DG_FACE_PAD_GEN

 

To obtain our pretrained model, please fill in the request form below and send an email to Mr. Nguyen at nguyentiendat@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Kyoung Jun Noh, and Kang Ryoung Park, Presentation Attack Face Image Generation Based on Deep Generative Adversarial Network, Sensors, Vol. 20, Issue 7(1810), pp. 1-24, March 2020.

 

 

< Request Form for DG_FACE_PAD_GEN >

 

Please complete the following form to request access to the DG_FACE_PAD_GEN. These files should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

51. Dongguk Spatiotemporal Features-Based Classification Network (DenseNet+LSTM) to Classify Multiple Gastrointestinal Diseases, Including the Video Indices of Experimental Endoscopy Videos

 

(1) Introduction

We trained a spatiotemporal features-based classification model (named DenseNet+LSTM) to classify multiple gastrointestinal diseases using endoscopic videos. Moreover, after performing the classification, the extracted features were further used to retrieve images of similar medical conditions, such as normal and abnormal cases, from a large endoscopic database.
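
A non-authoritative sketch of a classifier in this spirit is shown below: per-frame DenseNet features are fed to an LSTM whose final hidden state is classified. The DenseNet variant, hidden size, and clip length are assumptions, not the paper's configuration.

# Hypothetical DenseNet+LSTM video classifier sketch (PyTorch).
import torch
import torch.nn as nn
from torchvision import models

class DenseNetLSTM(nn.Module):
    def __init__(self, num_classes, hidden=512):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features            # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(input_size=1024, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, clips):                        # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        x = self.features(clips.flatten(0, 1))       # fold time into the batch axis
        x = self.pool(x).flatten(1).view(b, t, -1)   # (batch, time, 1024)
        _, (h, _) = self.lstm(x)
        return self.classifier(h[-1])                # classify from the final hidden state

logits = DenseNetLSTM(num_classes=8)(torch.randn(2, 16, 3, 224, 224))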

 

(2) Request for Our Algorithm and Dataset Indices

To obtain our trained model along with the video indices of experimental endoscopy videos, please fill in the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arsalan, Tahir Mahmood, Jin Kyu Kang and Kang Ryoung Park, Automated Diagnosis of Various Gastrointestinal Lesions Using Deep Learning-Based Classification and Retrieval Framework with Large Endoscopic Database, Journal of Medical Internet Research, In Submission.

 

 

 

< Request Form for Models and Databases Indices>

 

Please complete the following form to request access to our trained model along with the video indices of experimental endoscopy videos. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

50. Dongguk Modified Conditional GAN & Deep CNN Models, and Generated Images

 

(1) Introduction

We trained our modified conditional GAN and deep CNN models for finger-vein optical blur restoration and finger-vein recognition using the PolyU-DB [1] and SDU-DB [2] databases. We made our trained models and generated images open to other researchers.

 

[1] Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.

[2] SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm

 

(2) Request for Models and Images

To gain access to our models and generated images, download the following request form. Please fill the request form below and send an email to Mr. Jiho Choi (choijh1027@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.

 

Jiho Choi, Kyoung Jun Noh, Se Woon Cho, Se Hyun Nam, Muhammad Owais, and Kang Ryoung Park, “Modified Conditional Generative Adversarial Network-based Optical Blur Restoration for Finger-vein Recognition,” IEEE Access, Vol. 8, pp. 16281- 16301, January 2020.

 

 

< Request Form for Pretrained Models, Algorithm, and Images>

 

Please complete the following form to request access to our pretrained models, algorithms, and images. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

                 (Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

Date :

 

                                                             Name (signature)

 

 

 

49. Dongguk Super-resolution Reconstruction & Age Estimation CNN Model (DSR&AE-CNN)

 

(1) Introduction

We trained our models (DSR&AE-CNN) for facial image super-resolution reconstruction and age estimation using the PAL [1] and MORPH [2] databases. We made our trained models and generated images open to other researchers.

 

[1] PAL database. Available online: http://agingmind.utdallas.edu/download-stimuli/face-database/ (accessed on 17 May 2019).

[2] MORPH database. Available online: https://ebill.uncw.edu/C20231_ustores/web/store_main.jsp?STOREID=4 (accessed on 17 May 2019).

 

(2) Request for Models

To gain access to our models and images, download the following request form. Please fill the request form below and send an email to Mr. Se Hyun Nam (nsh6473@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.

 

Se Hyun Nam, Yu Hwan Kim, Noi Quang Truong, Jiho Choi, and Kang Ryoung Park, Age Estimation by Super-Resolution Reconstruction Based on Adversarial Networks, IEEE Access, Vol. 8, pp. 17103-17120, January 2020.

 

< Request Form for Pretrained Models and Algorithm>

 

Please complete the following form to request access to our pretrained models. These models should not be used for commercial use.

 

Name :

 

Contact : (Email)

              (Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

Date :

 

               Name (signature)

 

 

48. Dongguk ESSN models and algorithm for Semantic Segmentation

 

(1) Introduction

We propose a new model (ESSN) for semantic segmentation. The proposed model was trained with the open datasets SBD [1] and CamVid [2]. We made our trained model and algorithm open to other researchers.

 

[1] S. Gould, R. Fulton, and D. Koller, Decomposing a Scene into Geometric and Semantically Consistent Regions, in Proc. IEEE Int. Conf. Comput. Vis., Kyoto, Japan, 29 Sep.-2 Oct. 2009, pp. 1-8.

[2] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla, Segmentation and Recognition Using Structure from Motion Point Clouds, in Proc. European Conf. Comput. Vis., Marseille, France, 12-18 Oct. 2008, pp. 44-57.

 

(2) Request for Models

To obtain our pretrained model, please fill the request form below and send an email to Mr. Dong Seop Kim (seob2@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.

 

Dong Seop Kim, Muhammad Arsalan, Muhammad Owais, and Kang Ryoung Park, ESSN: Enhanced Semantic Segmentation Network by Residual Concatenation of Feature Maps, IEEE Access, Vol. 8, pp. 21363-21379, February 2020.

 

< Request Form for Pretrained Models and Algorithm>

 

Please complete the following form to request access to our pretrained models. These models should not be used for commercial use.

 

Name :

 

Contact : (Email)

              (Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

Date :

 

               Name (signature)

 

 

 

47. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles Database (DI&V-DB)

 

(1) Introduction

We trained the Mask R-CNN model with our thermal image database for the purpose of elimination of thermal reflections. We made the models, generated data with Dongguk thermal image database (DTh-DB), and Dongguk items & vehicles database (DI&V-DB) open to other researchers.

 

(2) Request for Models, Generated Data, and databases

To obtain our pretrained model, generated data, and databases, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Hyo Sik Yoon, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, A Study on the Elimination of Thermal Reflections, IEEE Access, Vol. 7, pp. 174597-174611, December 2019.

 

< Request Form for Models, Generated Data, and Databases >

 

Please complete the following form to request access to our pretrained model, generated data, and database (All contents must be completed). This model, data, and database should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

46. Dongguk Ultrasound Thyroid Nodule Classification (DUS-TNC) algorithm

 

(1) Introduction

In this study, we enhance the classification performance of an ultrasound image-based thyroid nodule classification system by cascading classifiers that use FFT-based and CNN-based methods. The pretrained model was successfully trained using the TDID dataset [1]. We made our trained model and algorithm open to other researchers; a toy sketch of the cascading idea follows the reference below.

 

[1] Pedraza, L.; Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero, E. An open access thyroid ultrasound-image database. In Proceedings of the 10th International Symposium on Medical Information Processing and Analysis, Colombia, 28 January, 2015 (in SPIE Proceedings, Vol. 9287, pp. 1-6).
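
The sketch below illustrates the cascading idea only; the frequency-domain statistic, the thresholds, and the fallback rule are all hypothetical, not the paper's classifier.

# Toy cascade: a cheap FFT-based stage decides confident cases and defers
# ambiguous ones to a CNN (cnn_predict is a user-supplied callable).
import numpy as np

def fft_score(image: np.ndarray) -> float:
    """Toy frequency-domain statistic: fraction of spectral energy outside
    a central low-frequency block, which the first stage could threshold."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8].sum()
    return 1.0 - low / spectrum.sum()

def classify(image, cnn_predict, low=0.3, high=0.7):
    s = fft_score(image)
    if s < low:
        return "benign"        # first stage is confident
    if s > high:
        return "malignant"
    return cnn_predict(image)  # ambiguous cases fall through to the CNN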

 

(2) Request for our algorithm

To gain access to our algorithm (code and pretrained models), please sign and scan the request form and email it to Prof. D. T. Nguyen at nguyentiendat@dongguk.edu. Any work that uses our algorithm must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Hyo Sik Yoon, and Kang Ryoung Park, Artificial Intelligence-based Thyroid Nodule Classification Using Information from Spatial and Frequency Domains, Journal of Clinical Medicine, Vol.  8, Issue 11(1976), pp. 1-24, November 2019.

 

 

< Request Form for DUS-TNC algorithm >

 

Please complete the following form to request access to our algorithm (All contents must be completed). These models should not be used for commercial use.

 

Name:

 

Contact:  (Email)

          (Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

                                                         Date:

                                                         Name (signature)

 

 

45. Dongguk Modified CycleGAN for Age Estimation (DMC4AE) and Generated Images

 

(1) Introduction

We trained our modified CycleGAN models for age estimation with the heterogeneous MegaAge and MORPH databases [1,2]. We made our trained models and the images generated by the modified CycleGAN open to other researchers.

 

1. Y. Zhang, L. Liu, C. Li, and C. C. Loy, Quantifying facial age by posterior of age comparisons, In Proceedings of British Machine Vision Conference, London, UK, 4-7 September 2017; pp. 1-14.

2. K. Ricanek and T. Tesafaye, Morph: A longitudinal image database of normal adult age-progression, In Proceedings of 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, 10-12 April 2006; pp. 341-345.

 

(2) Request for our models and images

To gain access to our models and images, download the following request form. Please sign and scan the request form and email to Mr. Yu Hwan Kim (taekkuon@dongguk.edu).

 

Any work that uses these models and images must acknowledge the authors by including the following reference.

 

Yu Hwan Kim, Min Beom Lee, Se Hyun Nam, and Kang Ryoung Park, Enhancing the Accuracies of Age Estimation with Heterogeneous Databases Using Modified CycleGAN, IEEE Access, Vol. 7, pp. 163461-163477, November 2019.

 

< Request Form for DMC4AE and Generated Images >

 

Please complete the following form to request access to these models with images (All contents must be completed). These models should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

44. Dongguk Vess-Net Models with Algorithm

 

(1) Introduction

We trained our Vess-Net model, based on a dual-stream feature empowerment scheme, for retinal vessel segmentation to aid the process of diagnosing diseases such as diabetic and hypertensive retinopathy. In our experiments, we validated the performance of our method with three different publicly available fundus image databases: DRIVE [1], CHASE-DB1 [2], and STARE [3]. We made our trained models open to other researchers.

 

1. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging, 2004, 23, 501-509.

2. Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538-2548.

3. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging, 2000, 19, 203-210.

 

(2) Request for our Vess-Net models

To gain access to the Vess-Net trained models, download the following request form. Please sign and scan the request form and email to Mr. Muhammad Arsalan (arsal@dongguk.edu).

 

Any work that uses these Vess-Net models must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Se Woon Cho and Kang Ryoung Park, Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-based Semantic Segmentation, Journal of Clinical Medicine, Vol.  8, Issue 9(1446), pp. 1-27, September 2019.

 

< Request Form for Vess-Net Models >

 

Please complete the following form to request access to these models (All contents must be completed). These models should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

43. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with Algorithms

(1) Introduction

We created adaptive ROI images before using them to train our convolutional neural network (CNN). In the first stage, a vanishing point is detected in order to create the ROI image. The ROI image, which covers the majority of the road region, is then used as the input to train the CNN-based detector and classifier in the second stage. We made the models, generated data, and labeled information of the databases open to other researchers; a sketch of the ROI cropping step follows the references below. Our CNN model was trained with the Malaga urban dataset [1], the Daimler dataset [2], and the Cambridge dataset [3].

 

1. The Málaga Stereo and Laser Urban Data Set (MRPT). Available online: https://www.mrpt.org/MalagaUrbanDataset (accessed on 1 October 2018).

2. Daimler Urban Segmentation Dataset. Available online: http://www.6d-vision.com/scene-labeling (accessed on 2 January 2019).

3. Cambridge-driving Labeled Video Database (CamVid). Available online: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ (accessed on 1 October 2018).
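
A minimal sketch of the cropping step is given below, assuming the vanishing point has already been estimated in stage one; the margin below the horizon is an illustrative assumption.

# Toy adaptive ROI crop from an estimated vanishing point (vx, vy).
import numpy as np

def adaptive_roi(image: np.ndarray, vx: int, vy: int, margin: int = 10):
    """Return the sub-image from just below the vanishing point to the
    bottom of the frame, to be fed to the CNN detector and classifier."""
    h, w = image.shape[:2]
    top = min(max(vy + margin, 0), h - 1)   # start slightly below the horizon
    return image[top:h, 0:w]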

 

(2) Request for models, generated data, and labeled information

To obtain our pretrained model, generated data, and labeled information, please fill in the request form below and send an email to Dr. Toan Minh Hoang at hoangminhtoan@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Toan Minh Hoang, Se Hyun Nam, and Kang Ryoung Park, Enhanced Detection and Recognition of Road Markings Based on Adaptive Region of Interest and Deep Learning, IEEE Access, Vol. 7, pp. 109817-109832, August 2019.

 

 

 

< Request Form for Models, Generated Data, and Labeled Information >

 

Please complete the following form to request access to our pretrained model, generated data, and labeled information of database (All contents must be completed). This model, data, and database should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

42. Dongguk CNN stacked LSTM and CycleGAN for Action Recognition, Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)

 

(1) Introduction

We trained our convolutional neural network (CNN), CNN stacked with long short-term memory (CNN-LSTM), and cycle-consistent adversarial network (CycleGAN) models with our action database. We made the models, generated data, and database open to other researchers.

 

(2) Request for Models, Generated Data, and DA&A-DB2

To obtain our pretrained model, generated data, and database, please fill in the request form below and send an email to Dr. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Dat Tien Nguyen, Tuyen Danh Pham, Chanhum Park, and Kang Ryoung Park, Action Recognition from Thermal Videos, IEEE Access, Vol. 7, pp. 103893-103917, August 2019.

 

 

 

< Request Form for Models, Generated Data, and DA&A-DB2 >

 

Please complete the following form to request access to our pretrained model, generated data, and database (All contents must be completed). This model, data, and database should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

41. Label Information of Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN).

 

(1) Introduction

We collected gender information for the Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) database and trained a gender recognition system based on ResNet-101 using two databases: SYSU-MM01 and the Dongguk Body-based Gender Database (DBGender-DB2). We made the label information of the SYSU-MM01 database and the Dongguk Gender Recognition CNN (DGR-CNN) open to other researchers.

 

(2) Request for Label Information and DGR-CNN

To gain access to the label information and DGR-CNN, download the following request form for label information of SYSU-MM01 and DGR-CNN. Please sign and scan the request form and email to Ms. Na Rae Baek (naris27@dongguk.edu).

 

Any work that uses the label information of SYSU-MM01 database or this CNN model must acknowledge the authors by including the following reference.

Na Rae Baek, Se Woon Cho, Ja Hyung Koo, Noi Quang Truong, and Kang Ryoung Park, Multimodal Camera-based Gender Recognition Using Human-body Image with Two-step Reconstruction Network, IEEE Access, Vol. 7, pp. 104025-104044, August 2019.

 

 

< Request Form for label information of SYSU-MM01 and DGR-CNN >

 

Please complete the following form to request access to the label information of SYSU-MM01 and DGR-CNN. These files should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

40. Dongguk cGAN-based Iris Image Generation Model and Generated Images (DGIM&GI)

 

(1) Introduction

We trained generation models based on cGAN (the pix2pix model) using the NICE.II training dataset (selected from UBIRIS.v2) and the MICHE database in a visible-light environment, and the CASIA-Iris-Distance database in an NIR environment. Additionally, we generated iris images using the generation models trained with each database. We made DGIM (the trained generation models) and GI (the images generated by the trained models) open to other researchers.

 

(2) Request for DGIM&GI

To gain access to the DGIM&GI, download the following request form for DGIM&GI. Please sign and scan the request form and email to Mr. Min Beom Lee (smin6180@naver.com).

 

Any work that uses this DGIM&GI must acknowledge the authors by including the following reference.

 

Min Beom Lee, Yu Hwan Kim, and Kang Ryoung Park, Conditional Generative Adversarial Network-Based Data Augmentation for Enhancement of Iris Recognition Accuracy, IEEE Access, Vol. 7, pp. 122134-122152, September 2019.

 

< Request Form for DGIM&GI >

 

Please complete the following form to request access to the DGIM&GI. These files should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

39. Dongguk CNN and LSTM models for the classification of multiple gastrointestinal (GI) diseases, and video indices of experimental endoscopic videos

 

(1) Introduction

We trained a cascaded ResNet18 and LSTM model for the classification of multiple gastrointestinal diseases using endoscopic video data. Two different publicly available endoscopic databases [1,2] were used for the training and validation of our proposed CNN+LSTM-based model. Moreover, the trained model is also used in class prediction-based retrieval of endoscopic images. We made our trained model and the video indices of experimental endoscopic videos open to other researchers.

 

1. Gastrolab: The gastrointestinal site. Available online: http://www.gastrolab.net/ni.htm (accessed on 1 February 2019).

2. Pogorelov, K.; Randel, K. R.; Griwodz, C.; Eskeland, S. L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.-T.; Lux, M.; Schmidt, P. T.; Riegler, M.; Halvorsen, P. KVASIR: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM Multimedia Systems Conference, Taipei, Taiwan, 20-23 June 2017; pp. 164-169.

 

(2) Request for our CNN+LSTM models and video indices

To gain access to the models and video indices, download the following request form for CNN+LSTM models and video indices. Please sign and scan the request form and email to Mr. Muhammad Owais (malikowais266@gmail.com).

 

Any work that uses these CNN+LSTM models and video indices must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arsalan, Jiho Choi, Tahir Mahmood, and Kang Ryoung Park, Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis, Journal of Clinical Medicine, Vol.  8, Issue 7(986), pp. 1-33, July 2019.

 

< Request Form for CNN+LSTM Models and Video Indices >

 

Please complete the following form to request access to these models and video indices (All contents must be completed). These models and video indices should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

38. Dongguk Dual-Camera-based Gaze Database (DDCG-DB1) and CNN models with Algorithms

 

(1) Introduction

A natural gaze-detection database [Dongguk dual-camera-based gaze database (DDCG-DB1)] is constructed from images of 26 drivers captured by dual near-infrared (NIR) cameras with illuminators in a vehicle environment, and classified into nine situations, such as wearing sunglasses, different types of glasses, or hats, and using mobile phones. We make DDCG-DB1 and our CNN model trained with this database open to other researchers.

 

(2) Request for DDCG-DB1 and CNN model

To gain access to the DDCG-DB1 with CNN model, download the following request form. Please scan the request form and email to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Hyo Sik Yoon, Na Rae Baek, Noi Quang Truong, and Kang Ryoung Park, Driver Gaze Detection Based on Deep Residual Networks Using the Combined Single Image of Dual Near-Infrared Cameras, IEEE Access, Vol. 7, pp. 93448-93461, July 2019.

 


 

< Request Form for DDCG-DB1 and CNN models >

 

Please complete the following form to request access to the DDCG-DB1 and CNN models (All contents must be completed). This dataset should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

37. Dongguk Mobile Finger-Wrinkle Database (DMFW-DB1) and CNN model with Algorithms

 

(1) Introduction

We collected the smartphone-acquired finger-wrinkle open database DMFW-DB1 using the frontal-viewing camera of an LG V20 (8 megapixels (2,160 × 3,840 pixels), 30 fps, auto mode) from 33 people (both hands) in five different indoor environments. In addition, we trained a finger-wrinkle recognition system based on ResNet-101. We make DMFW-DB1 and our CNN model trained with this database open to other researchers.

 

(2) Request for DMFW-DB1 and CNN model

To gain access to DMFW-DB1 with the CNN model, download the following request form. Please scan the request form and email it to Mr. Chan Sik Kim (kimchsi90@dongguk.edu).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Chan Sik Kim, Nam Sun Cho, and Kang Ryoung Park, Deep Residual Network-Based Recognition of Finger Wrinkles Using Smartphone Camera, IEEE Access, Vol. 7, pp. 71270-71285, June 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for DMFW-DB1 and CNN models >

 

Please complete the following form to request access to the DMFW-DB1 and CNN models (All contents must be completed). This dataset should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

36. Dongguk low-resolution drone camera dataset (DLDC-DB1, DLDC-DB2) & CNN models

 

(1) Introduction

We used the open Dongguk drone camera dataset ver. 2 (DDroneC-DB2) to build an artificial low-resolution dataset, DLDC-DB1, by generating low-resolution images of 80 × 80 pixels from the original images of 320 × 320 pixels using bicubic interpolation (a sketch of this downsampling step follows below). Additionally, we collected a real low-resolution dataset, DLDC-DB2, using a low-resolution visible-light camera mounted on a DJI Phantom 4 drone during landing. The camera provides a downward view from the drone and captures images of 320 × 240 pixels. We also make our CNN models trained on these datasets open to other researchers.
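The artificial low-resolution step can be reproduced in a few lines; the sketch below assumes PNG files in placeholder directories and uses Pillow's bicubic filter.

from PIL import Image
import glob, os

src_dir, dst_dir = "DDroneC-DB2/images", "DLDC-DB1/images"  # placeholder paths
os.makedirs(dst_dir, exist_ok=True)

for path in glob.glob(os.path.join(src_dir, "*.png")):
    img = Image.open(path)                    # 320 x 320 original
    lr = img.resize((80, 80), Image.BICUBIC)  # bicubic downsampling
    lr.save(os.path.join(dst_dir, os.path.basename(path)))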

 

(2) Request for DLDC-DB1, DLDC-DB2 and CNN models

To gain access to the datasets with CNN models, download the following request form. Please scan the request form and email it to Mr. Noi Quang Truong (noitq.hust@gmail.com).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Noi Quang Truong, Phong Ha Nguyen, Se Hyun Nam, and Kang Ryoung Park, Deep Learning-Based Super-Resolution Reconstruction and Marker Detection for Drone Landing, IEEE Access, Vol. 7, pp. 61639-61655, May 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for DLDC-DB1, DLDC-DB2 and CNN models >

 

Please complete the following form to request access to the DLDC-DB1, DLDC-DB2 and CNN models (All contents must be completed). These datasets and CNN models should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

35. Dongguk CNN Model for CBMIR

 

(1) Introduction

We trained an enhanced ResNet50 model for the classification and retrieval of multimodal medical images. Twelve different publicly available databases [1], covering 50 classes, were considered for the training and validation of our enhanced ResNet50. The trained model is then used for content-based medical image retrieval (CBMIR) by performing deep feature-based classification of medical images. We make our trained model open to other researchers (a sketch of deep feature-based retrieval follows the reference below).

 

1. Multiple medical imaging databases. Available online: https://sites.google.com/site/aacruzr/image-datasets (accessed on 28 Feb 2019).
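The retrieval step works on deep features. The sketch below uses a plain ImageNet ResNet50 in place of our enhanced model and ranks an archive by cosine similarity to the query; it illustrates the mechanism, not the released model.

import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()  # keep the 2048-D pooled feature
backbone.eval()

prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

@torch.no_grad()
def embed(path):
    x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=1)  # unit-length feature vector

def retrieve(query_path, archive_paths, k=5):
    # Cosine similarity of unit vectors reduces to a dot product.
    q = embed(query_path)
    feats = torch.cat([embed(p) for p in archive_paths])
    scores = (feats @ q.T).squeeze(1)
    top = scores.topk(min(k, len(archive_paths)))
    return [(archive_paths[int(i)], float(scores[int(i)])) for i in top.indices]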

 

(2) Request for CNN Model for CBMIR

To gain access to the models, download the following request form for CBMIR-CNN. Please sign and scan the request form and email it to Mr. Muhammad Owais (malikowais266@gmail.com).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arsalan, Jiho Choi, and Kang Ryoung Park, Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence, Journal of Clinical Medicine, Vol. 8, Issue 4(462), pp. 1-31, April 2019.

 

< Request Form for CBMIR-CNN >

 

Please complete the following form to request access to the CBMIR-CNN (All contents must be completed). These CNN models should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

34. Dongguk Person ReID CNN Models (DPRID-CNN)

 

(1) Introduction

We trained person re-identification models based on ResNet-50 using two databases: the Dongguk Body-based Person Recognition Database (DBPerson-Recog-DB1) [1] and the Sun Yat-sen University multiple modality re-ID (SYSU-MM01) database [2]. We make the trained models open to other researchers.

 

1. DBPerson-Recog-DB1. Available on this page (entry No. 3).

2. SYSU-MM01. Available online: https://github.com/wuancong/SYSU-MM01 (accessed on 28 Feb 2019).

 

(2) Request for DPRID-CNN

To gain access to the models, download the following request form for DPRID-CNN. Please sign and scan the request form and email it to Mr. Jin Kyu Kang (kangjinkyu@dgu.edu).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Jin Kyu Kang, Toan Minh Hoang, and Kang Ryoung Park, Person Re-Identification Between Visible and Thermal Camera Images Based on Deep Residual CNN Using Single Input, IEEE Access, Vol. 7, pp. 57972-57984, May 2019.

< Request Form for DPRID-CNN >

 

Please complete the following form to request access to the DPRID-CNN (All contents must be completed). These CNN models should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

33. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with algorithms

 

(1) Introduction

We trained a finger-vein recognition system based on DenseNet-161 using two databases: the Hong Kong Polytechnic University Finger Image Database (version 1) [1] and the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger-vein database [2]. We make the trained models and algorithm open to other researchers (a rough sketch of composite-input matching follows the references below).

 

1. Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.

2. SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm (accessed on 7 May 2018).
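As a rough illustration of matching with a composite input (how the composite image is actually formed in the paper may differ), one can stack the enrolled image, the probe image, and their difference as the three input channels and let DenseNet-161 output a genuine/imposter decision:

import torch
import torch.nn as nn
from torchvision import models

net = models.densenet161(weights=None)
net.classifier = nn.Linear(net.classifier.in_features, 2)  # genuine vs. imposter

@torch.no_grad()
def match_logits(enrolled, probe):
    # enrolled, probe: grayscale tensors of shape (1, 1, 224, 224).
    # Composite input: enrolled, probe, and their difference as 3 channels.
    composite = torch.cat([enrolled, probe, enrolled - probe], dim=1)
    return net(composite)  # logits over {genuine, imposter}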

 

(2) DDFRM with Algorithm Request

To gain access to the models and algorithm, download the following request form for DDFRM with algorithm. Please sign and scan the request form and email it to Mr. Jong Min Song (whdwhd93@gmail.com).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Jong Min Song, Wan Kim, and Kang Ryoung Park, Finger-vein Recognition Based on Deep DenseNet Using Composite Image, IEEE Access, Vol. 7, pp. 66845-66863, June 2019.

< Request Form for DDFRM with algorithm >

 

Please complete the following form to request access to the DDFRM with algorithm (All contents must be completed). This CNN model with algorithm should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

32. Dongguk OR-Skip-Net Model for Image Segmentation with Algorithm and Black Skin People (BSP) Label Information

 

(1) Introduction

We trained an outer skip connection-based deep convolutional network (OR-Skip-Net) for image segmentation in medical diagnosis and other applications, and evaluated its segmentation performance using ten databases: HGR [1], EDds [2], LIRIS [2], SSG [2], UT [2], AMI [2], Pratheepan [3], BSP, Warwick-QU [4], and NICE.II [5]. We make the trained models, algorithm, and BSP label information open to other researchers (a toy sketch of the outer skip idea follows the references below).

 

1.     Hand detection and pose estimation for creating human-computer interaction project. Available online: http://sun.aei.polsl.pl/~mkawulok/gestures/ip.html (accessed on October 31, 2018).

2.     Skin detection datasets for video monitoring. Available online: http://www-vpu.eps.uam.es/publications/SkinDetDM/ (accessed on November 5, 2018).

3.     Pratheepan dataset + ground truth. Available online: http://cs-chan.com/downloads_skin_dataset.html (accessed on November 5, 2018).

4.     GlaS@MICCAI'2015: Gland segmentation challenge contest. Available online: https://warwick.ac.uk/fac/sci/dcs/research/tia/glascontest/ (accessed on 24 January 2019).

5.     NICE. II. Noisy iris challenge evaluation - part II. Available online: http://nice2.di.ubi.pt/ (accessed on November 8, 2018).
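The sketch below is a toy reading of the outer residual skip idea: features from an early encoder stage are carried around the network and added back (not concatenated) at the matching decoder stage, so edge information reaches the output directly. Layer sizes are illustrative, not the published configuration.

import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TinyORSkipNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.down = nn.MaxPool2d(2)
        self.enc2 = conv_block(32, 64)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # outer features, rich in edge detail
        e2 = self.enc2(self.down(e1))
        d1 = self.dec1(self.up(e2))
        d1 = d1 + e1                   # outer residual skip: addition, not concat
        return self.head(d1)           # per-pixel class logits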

 

(2) Request for OR-Skip-Net Model with Algorithm and Black Skin People (BSP) Label Information

To gain access to the models, algorithm, and BSP label information, download the following request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).

Any work that uses this CNN model with algorithm and label information must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Dong Seop Kim, Muhammad Owais, and Kang Ryoung Park, OR-Skip-Net: Outer Residual Skip Network for Skin Segmentation in Non-Ideal Situations, Expert Systems With Applications, in press, 2020.

 

< Request Form for OR-Skip-Net model with algorithm and BSP label information >

 

Please complete the following form to request access to the OR-Skip-Net model with algorithm and BSP label information (All contents must be completed). This CNN model with algorithm and label information should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

31. Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with algorithms

 

(1) Introduction

We constructed the Dongguk Banknote Type and Fitness Database (DF-DB3) from Indian rupee (INR 10/20/50/100/500/1000), Korean won (KRW 1,000/5,000/10,000/50,000), and United States dollar (USD 5/10/20/50/100) banknotes, and make it, together with trained CNN models (AlexNet, GoogLeNet, and ResNet-18/50) and algorithms, available for fair comparison by other researchers (a sketch of the combined sensor input follows below).
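One simple way to present both sensor captures to a CNN is sketched below under our own assumptions: the paper combines the visible-light reflection (VR) and infrared-light transmission (IRT) images into one input, but the exact combination scheme here is illustrative, as are the file paths.

import numpy as np
import cv2

def make_combined_input(vr_path, irt_path, size=(224, 224)):
    vr = cv2.resize(cv2.imread(vr_path, cv2.IMREAD_GRAYSCALE), size)
    irt = cv2.resize(cv2.imread(irt_path, cv2.IMREAD_GRAYSCALE), size)
    # Channel order [VR, IRT, VR] keeps a 3-channel shape, so standard
    # AlexNet/GoogLeNet/ResNet input layers can be reused unchanged.
    return np.stack([vr, irt, vr], axis=2)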

 

(2) Request for Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with algorithms

To gain access to these files, download the following request form. Please scan the request form and email it to Dr. Tuyen Danh Pham (phamdanhtuyen@dongguk.edu). Any work that uses these files and algorithms must acknowledge the authors by including the following reference.

 

Tuyen Danh Pham, Dat Tien Nguyen, Chanhum Park and Kang Ryoung Park, Deep Learning-Based Multinational Banknote Type and Fitness Classification with the Combined Images by Visible-Light Reflection and Infrared-Light Transmission Image Sensors, Sensors, Vol. 19, Issue 4(792), pp. 1-28, February 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for Dongguk Banknote Type and Fitness Database (DF-DB3) & CNN Model with algorithms >

 

Please complete the following form to request access to these files (All contents must be completed). These files should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

30. Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases

 

(1) Introduction

Although the open Malaga urban dataset [1], Daimler dataset [2], and Cambridge dataset [3] have been widely used in previous studies, they do not provide annotated information for road markings, which increases the time and effort needed for system implementation. Therefore, we provide manually annotated road-marking information for the Malaga urban dataset, the Daimler dataset, and the Cambridge dataset. We also provide to other researchers the proposed RetinaNet models trained on these databases with different backbones, with and without pre-trained weights.

 

1. The Málaga Stereo and Laser Urban Data Set MRPT. Available online: https://www.mrpt.org/MalagaUrbanDataset (accessed on 1 October 2018).

2. Daimler Urban Segmentation Dataset. Available online: http://www.6d-vision.com/scene-labeling (accessed on 1 October 2018).

3. Cambridge-driving Labeled Video Database (CamVid). Available online: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ (accessed on 1 October 2018).

 

(2) Request for Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases

To gain access to these files, download the following request form. Please scan the request form and email it to Mr. Toan Minh Hoang (hoangminhtoan@dongguk.edu). Any work that uses these files and algorithms must acknowledge the authors by including the following reference.

 

Toan Minh Hoang, Phong Ha Nguyen, Noi Quang Truong, Young Won Lee and Kang Ryoung Park, Deep RetinaNet-Based Detection and Classification of Road Markings by Visible Light Camera Sensors, Sensors, Vol. 19, Issue 2(281), pp. 1-25, January 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for Dongguk RetinaNet for Detecting Road Marking Objects with Algorithms and Annotated Files for Open Databases >

 

Please complete the following form to request access to these files (All contents must be completed). These files should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

29. Dongguk CNN Model for NIR Ocular Recognition (DC4NO) with algorithm

 

(1) Introduction

We developed an algorithm for rough pupil detection based on sub-block template matching, together with deep ResNet models trained on three open databases: CASIA-Iris-Distance, CASIA-Iris-Lamp, and CASIA-Iris-Thousand [1]. We make these trained CNN models for ocular recognition, with the algorithm, open to other researchers (a sketch of the template-matching step follows the reference below).

 

1. CASIA-iris version 4. Available online: http://www.cbsr.ia.ac.cn/china/Iris%20Databases%20CH.asp (accessed on 9 November 2018).
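For orientation, rough pupil localization by template matching can be sketched with OpenCV as below; the synthetic dark-disk template and the single-template search are stand-ins for the sub-block scheme of the actual algorithm.

import cv2
import numpy as np

def rough_pupil_center(eye_gray, template):
    # Normalized cross-correlation between the eye image and the template.
    result = cv2.matchTemplate(eye_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    th, tw = template.shape
    return (max_loc[0] + tw // 2, max_loc[1] + th // 2)  # (x, y) center

# Example template: a dark disk (pupil) on a brighter background.
template = np.full((40, 40), 200, np.uint8)
cv2.circle(template, (20, 20), 14, 0, -1)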

 

(2) Request for DC4NO with algorithm

To gain access to the DC4NO with algorithm, download the following request form. Please scan the request form and email it to Mr. Young Won Lee (lyw941021@dongguk.edu).

Any work that uses this DC4NO with algorithm must acknowledge the authors by including the following reference.

 

Young Won Lee, Ki Wan Kim, Toan Minh Hoang, Muhammad Arsalan and Kang Ryoung Park, Deep Residual CNN-Based Ocular Recognition Based on Rough Pupil Detection in the Images by NIR Camera Sensor, Sensors, Vol. 19, Issue 4(842), pp. 1-30, February 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for DC4NO with algorithm >

 

Please complete the following form to request access to the DC4NO with algorithm (All contents must be completed). This DC4NO with algorithm should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

28. Dongguk Face Presentation Attack Detection Algorithms by Spatial and Temporal Information (DFPAD-STI)

 

(1) Introduction

We built a stacked convolutional neural network (CNN)-recurrent neural network (RNN), combined with handcrafted features, for face presentation attack detection, using images from the CASIA database [1] and the Replay-mobile dataset [2]. We make these trained CNN models open to other researchers (a sketch of the stacked CNN-RNN idea follows the references below).

 

1. Zhang, Z.; Yan, J.; Liu, S.; Lei, Z.; Yi, D.; Li, S. Z. A face anti-spoofing database with diverse attacks. In Proceedings of the 5th International Conference on Biometrics, New Delhi, India, 29 March-1 April 2012.

2. Costa-Pazo, A.; Bhattacharjee, S.; Vazquez-Fernandez, E.; Marcel, S. The Replay-Mobile face presentation attack database. In Proceedings of the International Conference on the Biometrics Special Interest Group, Darmstadt, Germany, 21-23 September 2016.
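A minimal sketch of the stacked CNN-RNN idea follows (backbone and sizes are illustrative, and the handcrafted-feature branch of the method is omitted here): per-frame CNN features feed an LSTM, whose last state is classified as live vs. attack.

import torch
import torch.nn as nn
from torchvision import models

class CnnRnnPAD(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                 # 512-D feature per frame
        self.cnn = cnn
        self.rnn = nn.LSTM(512, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)         # live vs. attack

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))  # per-frame CNN features
        _, (h, _) = self.rnn(feats.view(b, t, -1))
        return self.fc(h[-1])                  # one live/attack logit pair per clip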

 

(2) Request for DFPAD-STI

To gain access to the DFPAD-STI, download the following request form. Please scan the request form and email it to Prof. Dat Tien Nguyen (nguyentiendat@dongguk.edu).

Any work that uses this DFPAD-STI must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Tuyen Danh Pham, Min Beom Lee and Kang Ryoung Park, Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information, Sensors, Vol. 19, Issue 2(410), pp. 1-27, January 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for DFPAD-STI >

 

Please complete the following form to request access to the DFPAD-STI (All contents must be completed). This DFPAD-STI should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

27. Dongguk Dual Camera-based Driver Database (DDCD-DB1) and Trained Faster R-CNN Model with Algorithm

 

(1) Introduction

When acquiring DDCD-DB1, the driver's gaze area was divided into 15 zones. The drivers gazed at the 15 predefined zones in order, and a total of 26 participants were each assigned 8 different situations (i.e., wearing a hat, wearing four different types of glasses (rimless, gold-rimmed, half-frame, and horn-rimmed), wearing sunglasses, making a call on a mobile phone, covering the face with a hand, etc.); the data were collected by two NIR cameras with NIR illuminators. As the participants gazed at the designated regions in turn, the natural head rotations that would occur in actual driving were permitted, and no other restrictions or instructions were given. Because acquiring data during actual driving carried the risk of a traffic accident, a real vehicle (a Renault Samsung SM5 New Impression) was instead started from a parked state in various locations (from daylit roads to a parking garage).

In addition, we make public two Faster R-CNN models trained with our DDCD-DB1 and the open CAVE-DB [1], respectively.

 

1. Smith, B.A.; Yin, Q.; Feiner, S.K.; Nayar, S.K. Gaze Locking: Passive Eye Contact Detection for Human-Object Interaction. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St. Andrews, Scotland, UK, 8-11 October 2013; pp. 271-280.

 

(2) Request for DDCD-DB1 and faster R-CNN model with algorithms

To gain access to DDCD-DB1 and the Faster R-CNN model with algorithms, download the following request form. Please scan the request form and email it to Mr. Sung Ho Park (pshgod91@dongguk.edu).

Any work that uses this DDCD-DB1 and faster R-CNN model with algorithms must acknowledge the authors by including the following reference.

 

Sung Ho Park, Hyo Sik Yoon and Kang Ryoung Park, Faster R-CNN and Geometric Transformation-Based Detection of Driver's Eyes Using Multiple Near-Infrared Camera Sensors, Sensors, Vol. 19, Issue 1(197), pp. 1-29, January 2019.

 

===========================================================================================================================================================================================================

 

< Request Form for DDCD-DB1 and faster R-CNN model with algorithms >

 

Please complete the following form to request access to the DDCD-DB1 and Faster R-CNN model with algorithms (All contents must be completed). This database and CNN model with algorithms should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

26. Dongguk FRED-Net with Algorithm

 

(1) Introduction

We trained fully residual encoder-decoder network (FRED-Net) CNN models for iris and road-scene segmentation, and evaluated segmentation performance using seven databases: NICE-II [1], MICHE [2], CASIA distance [3], CASIA interval [3], IITD [4], CamVid [5], and KITTI [6]. We make the trained models and algorithm open to other researchers (a sketch of a residual stage follows the references below).

 

1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 28 December 2017).

2. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.

3. CASIA-Iris-databases. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 28 December 2017).

4. IIT Delhi Iris Database. Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm (accessed on 28 December 2017).

5. Brostow, G. J.; Fauqueur, J.; Cipolla, R. Semantic object classes in video: A high-definition ground truth database. Pattern Recognit. Lett. 2009, 30, 88-97.

6. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231-1237.
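For orientation, the sketch below shows a generic residual stage of the kind a fully residual encoder-decoder composes at every scale; the exact connectivity and layer counts are those of the paper, not this toy block.

import torch.nn as nn

class ResidualStage(nn.Module):
    # One encoder/decoder stage with an identity shortcut, so gradients
    # flow through the whole encoder-decoder chain unimpeded.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)  # residual (identity) connection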

 

(2) FRED-Net Model with Algorithm Request

To gain access to the models and algorithm, download the following FRED-Net model with algorithm request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Dong Seop Kim, Min Beom Lee, Muhammad Owais, and Kang Ryoung Park, FRED-Net: Fully residual encoder-decoder network for accurate iris segmentation, Expert Systems with Applications, Vol. 122, pp. 217-241, May 2019.

 

 

< FRED-Net model with algorithm Request Form >

 

Please complete the following form to request access to the FRED-Net model with algorithm (All contents must be completed). This CNN model with algorithm should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

25. Dongguk Face and Body Database (DFB-DB1) with CNN models and algorithms

 

(1) Introduction

DFB-DB1 was created from images of 22 people obtained by two types of cameras to assess the performance of the proposed method in a variety of camera environments. The first camera was a Logitech BCC 950, with a viewing angle of 78°, a maximum resolution of full high-definition (HD) 1080p, and auto-focusing at 30 frames per second (fps). The second camera was a Logitech C920, with a maximum resolution of full HD 1080p, a viewing angle of 78° at 30 fps, and auto-focusing. Images were taken in an indoor environment with the indoor lights on, and each camera was installed at a height of 2.4 m. The database is divided into two categories according to the camera: the first consists of images captured by the Logitech BCC 950, and the second of images obtained by the Logitech C920 at a similar camera angle to the first.

In addition, we open our two CNN models, trained on DFB-DB1 and the open ChokePoint database [1], respectively, together with our algorithms.

 

1. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 21 Feb. 2018).

 

(2) Request for DFB-DB1 with CNN model and algorithms

To gain access to DFB-DB1 with the CNN model and algorithms, download the following request form. Please scan the request form and email it to Mr. Ja Hyung Koo (koo6190@naver.com).

Any work that uses this DFB-DB1 with CNN model and algorithms must acknowledge the authors by including the following reference.

 

Ja Hyung Koo, Se Woon Cho, Na Rae Baek, Min Cheol Kim, and Kang Ryoung Park, CNN-Based Multimodal Human Recognition in Surveillance Environments, Sensors, Vol. 18, Issue 9(3040), pp. 1-34, September 2018. 

 

===========================================================================================================================================================================================================

 

< DFB-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DFB-DB1 and CNN model (All contents must be completed). This database and CNN model should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

24. Dongguk Night-Time Face Detection database (DNFD-DB1) and algorithm including CNN model

 

(1) Introduction

DNFD-DB1 is a self-constructed database acquired through a fixed single visible-light camera at a distance of approximately 20-22 m at night. The resolution of the camera is 1600 × 1200 pixels, but the images are cropped to the average adult height, approximately 600 pixels. A total of 2,002 images of 20 different people were prepared, with 4-6 people in each frame. To carry out 2-fold cross-validation, the 20 people were divided into two subsets of 10 people each. In addition, we make public two 2-stage Faster R-CNN models trained with our DNFD-DB1 and the open database of Fudan University [1], respectively.

 

1. Open database of Fudan University. Available online: https://cv.fudan.edu.cn/_upload/tpl/06/f4/1780/template1780/humandetection.htm (accessed on 26 March 2018).

 

(2) DNFD-DB1 and CNN model Request

To gain access to DNFD-DB1 and the CNN model, download the following request form. Please scan the request form and email it to Mr. Se Woon Cho (jsu319@naver.com).

Any work that uses this DNFD-DB1 and CNN model must acknowledge the authors by including the following reference.

 

Se Woon Cho, Na Rae Baek, Min Cheol Kim, Ja Hyung Koo, Jong Hyun Kim, and Kang Ryoung Park, Face Detection in Nighttime Images Using Visible-Light Camera Sensors with Two-Step Faster Region-Based Convolutional Neural Network, Sensors, Vol. 18, Issue 9(2995), pp. 1-31, September 2018.

 

===========================================================================================================================================================================================================

 

< DNFD-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DNFD-DB1 and CNN model (All contents must be completed). This database and CNN model should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

23. Dongguk Iris Spoof Detection CNN Model version 2 (DFSD-CNN-2) with Algorithm

 

(1) Introduction

We trained CNN models based on the VGG-19 architecture, using local and global image features, for presentation attack detection in iris recognition systems, on two public databases: Warsaw-2017 [1] and Notre Dame 2015 [2]. We make the trained models open to other researchers (a sketch of the local/global feature fusion follows the references below).

 

1. Yambay, D.; Becker, B.; Kohli, N.; Yadav, D.; Czajka, A.; Bowyer, K. W.; Schuckers, S.; Singh, R.; Vatsa, M.; Noore, A.; Gragnaniello, D.; Sansone, C.; Verdoliva, L.; He, L.; Ru, Y.; Li, H.; Liu, N.; Sun, Z.; Tan, T. LivDet iris 2017 - iris liveness detection competition 2017. In Proceedings of the International Conference on Biometrics, Denver, CO, USA, 1-4 October 2017.

2. Doyle, J. S.; Bowyer, K. W. Robust detection of textured contact lens in iris recognition using BSIF. IEEE Access 2015, 3, 1672-1683.
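The local + global feature idea can be sketched as below: VGG-19 features are taken once from the whole image (global) and once from a local region, then concatenated for the final live/attack decision. The central-crop choice for the local region and the fc7 cut are our illustrative assumptions, not the paper's exact regions.

import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=None)
vgg.classifier = vgg.classifier[:5]  # stop at fc7: 4096-D features
vgg.eval()

@torch.no_grad()
def pad_features(image):                      # image: (1, 3, 224, 224)
    global_feat = vgg(image)                  # whole iris image
    h, w = image.shape[2:]
    local = image[:, :, h//4:3*h//4, w//4:3*w//4]       # central local region
    local = nn.functional.interpolate(local, size=(224, 224))
    local_feat = vgg(local)
    return torch.cat([global_feat, local_feat], dim=1)  # 8192-D fused feature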

 

(2) DFSD-CNN-2 Model Request

To gain access to the models and algorithm, download the following DFSD-CNN-2 request form. Please sign and scan the request form and email it to Prof. Nguyen (nguyentiendat@dongguk.edu).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

D. T. Nguyen, T. D. Pham, Y. W. Lee, and K. R. Park, "Deep Learning-Based Enhanced Presentation Attack Detection for Iris Recognition by Combining Features from Local and Global Regions Based on NIR Camera Sensor," Sensors, Vol. 18, Issue 8(2601), pp. 1-32, August 2018.

 

===========================================================================================================================================================================================================

 

< DFSD-CNN-2 model Request Form >

 

Please complete the following form to request access to the DFSD-CNN-2 model (All contents must be completed). This CNN model should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

22. Dongguk Fitness Database (DF-DB2) & CNN Model

 

(1) Introduction

 

We collected banknote fitness databases (DF-DB2) for three national currencies: the Korean won (KRW), the Indian rupee (INR), and the United States dollar (USD). Six denominations exist in the INR dataset (10, 20, 50, 100, 500, and 1000 rupees) and two in the KRW dataset (1,000 and 5,000 won), each graded into three fitness levels (fit, normal, and unfit for recirculation), called the case 1 fitness levels. In these case 1 datasets, each banknote was captured by visible-light reflection (VR) sensors on both sides and an infrared-light transmission (IRT) sensor on the front side. Five denominations exist for the USD (5, 10, 20, 50, and 100 dollars), divided into two fitness levels (fit and unfit), called the case 2 fitness levels; here, two images were captured per banknote, the VR and IRT images of one side. In addition, we make public the CNN models trained with our DF-DB2.

 

 

(2) DF-DB2 and CNN model Request

To gain access to DF-DB2 and the CNN models, download the following request form. Please scan the request form and email it to Prof. Tuyen Danh Pham (phamdanhtuyen@gmail.com).

Any work that uses this DF-DB2 and CNN Model must acknowledge the authors by including the following reference.

 

Tuyen Danh Pham, Dat Tien Nguyen, Jin Kyu Kang, and Kang Ryoung Park, "Deep Learning-Based Multinational Banknote Fitness Classification with a Combination of Visible-Light Reflection and Infrared-Light Transmission Images," Symmetry-Basel, Vol. 10, Issue 10(431), pp. 1-26, October 2018.

 

===========================================================================================================================================================================================================

 

< DF-DB2 and CNN model Request Form >

 

Please complete the following form to request access to the DF-DB2 and CNN model (All contents must be completed). This database and CNN model should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

21. Dongguk-body-movement-based human identification database version 2 (DBMHI-DB2) & CNN Model

 

(1) Introduction

We collected our database in both dark and bright environments, with both front- and back-view images of humans. It was collected in five different places on different days with the same camera height, and consists of data from 100 people, both men and women. The database includes both thermal and visible-light images, but only the thermal images were used in this research. The people in our database have different heights and widths, varying from 27 to 150 pixels in width and from 90 to 390 pixels in height. In addition, we make our trained CNN and CNN-LSTM models public.

 

(2) DBMHI-DB2 database & the trained CNN model Request

To gain access to the database and CNN model, download the following DBMHI-DB2 and CNN model request form. Please scan the request form and email it to Mr. Ganbayar Batchuluun (ganabata87@dongguk.edu).

Any work that uses or incorporates the database and CNN model must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Hyo Sik Yoon, Jin Kyu Kang, and Kang Ryoung Park, "Gait-Based Human Identification by Combining Shallow Convolutional Neural Network-Stacked Long Short-Term Memory and Deep Convolutional Neural Network," IEEE Access, Vol. 6, pp. 63164-63186, October 2018.

 

===========================================================================================================================================================================================================

 

< DBMHI-DB2 database & the trained CNN model Request Form >

 

Please complete the following form to request access to the DBMHI-DB2 and CNN model (All contents must be completed). This database and CNN model should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

===========================================================================================================================================================================================================

 

 

20. Dongguk Multimodal Recognition CNN of Finger-vein and Finger shape (DMR-CNN) with Algorithm

 

(1) Introduction

We trained a multimodal recognition system for finger-vein and finger shape based on ResNet-50 and ResNet-101, using two databases: the Shandong University homologous multi-modal traits (SDUMLA-HMT) database [1] and the Hong Kong Polytechnic University Finger Image Database (version 1) [2]. We make the trained models and algorithm open to other researchers (an illustrative score-fusion sketch follows the references below).

 

1. SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm (accessed on 7 May 2018).

2. Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.
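For illustration only, a weighted-sum score-level fusion of the two modality matchers could look like this; the weights, threshold, and the fusion rule itself are assumptions, not necessarily the paper's method.

def fused_decision(vein_dist, shape_dist, w_vein=0.7, w_shape=0.3, threshold=0.4):
    # Distances are assumed normalized to [0, 1]; lower means more similar.
    score = w_vein * vein_dist + w_shape * shape_dist
    return score < threshold             # True -> accept as genuine

# Example: strong vein similarity outweighs a mediocre shape score.
print(fused_decision(0.15, 0.55))        # 0.7*0.15 + 0.3*0.55 = 0.27 -> True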

 

(2) DMR-CNN Model with Algorithm Request

To gain access to the models and algorithm, download the following DMR-CNN model with algorithm request form. Please sign and scan the request form and email it to Mr. Wan Kim (daiz0128@naver.com).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

W. Kim, J. M. Song, and K. R. Park, Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor, Sensors, Vol. 18, Issue 7(2296), pp. 1-34, July 2018.

< DMR-CNN model with algorithm Request Form >

 

Please complete the following form to request access to the DMR-CNN model with algorithm (All contents must be completed). This CNN model with algorithm should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)