< Dongguk Open Databases & CNN Models >

 

---------------------------------------------

35. Dongguk FRED-Net with Algorithm

34. Dongguk Face and Body Database (DFB-DB1) with CNN models and algorithms

33. Dongguk Night-Time Face Detection database (DNFD-DB1) and algorithm including CNN model

32. Dongguk Iris Spoof Detection CNN Model version 2 (DFSD-CNN-2) with Algorithm

31. Dongguk Fitness Database (DF-DB2) & CNN Model

30. Dongguk-body-movement-based human identification database version 2 (DBMHI-DB2) & CNN Model

29. Dongguk Multimodal Recognition CNN of Finger-vein and Finger shape (DMR-CNN) with Algorithm

28. Dongguk Drone Camera Database (DDroneC-DB2) with CNN models

27. Dongguk Periocular Database (DP-DB1) with CNN models and algorithms

26. Dongguk IrisDenseNet CNN Model (DI-CNN) with Algorithm

25. Dongguk Iris Spoof Detection CNN Model (DISD-CNN) with Algorithm

24. Dongguk Visible Light Iris Recognition CNN Model (DVLIR-CNN)

23. Dongguk Aggressive and Smooth Driving Database (DASD-DB1) and CNN Model

22. Dongguk Night-time Pedestrian Detection Faster R-CNN and Algorithm

21. Dongguk Face Spoof Detection CNN Model (DFSD-CNN) with Algorithm

20. Dongguk Shadow Detection Database (DSDD-DB1) & CNN Model

19. Dongguk Fitness Database (DF-DB1) & CNN Model

18. Dongguk driver gaze classification database (DDGC-DB1) and CNN model

17. Dongguk Age Estimation CNN Model (DAE-CNN)

16. Dongguk Single Camera-based Driver Database (DSCD-DB1)

15. Dongguk Body Movement-based Human Identification Database (DBMHI-DB1) & CNN Model

14. Dongguk Visible Light Iris Segmentation CNN Model (DVLIS-CNN)

13. Dongguk Drone Camera Database (DDroneC-DB1)

12. ISPR Database (real and presentation attack finger-vein images) & Algorithm Including CNN Model

11. Dongguk Visible Light & FIR Pedestrian Detection Database (DVLFPD-DB1) & CNN Model

10. Dongguk Open and Closed Eyes Database (DOCE-DB1) & CNN Model

9. Dongguk Multi-national Currencies Database (DMC-DB1) & CNN Model

8. Dongguk Finger-Vein Database (DFingerVein-DB1) & CNN Model

7. Dongguk Night-time Human Detection Database (DNHD-DB1) & CNN Model

6. Dongguk Body-based Person Recognition Database (DBPerson-Recog-DB1)

5. Dongguk Body-based Gender Database (DBGender-DB2)

4. Dongguk Activities & Actions Database (DA&A-DB1)

3. Dongguk Body-based Gender Database (DBGender-DB1)

2. Dongguk Face Database (DFace-DB1)

1. Dongguk Banknote Database (DBanknote-DB1)

 

 

35. Dongguk FRED-Net with Algorithm

 

(1) Introduction

We trained fully residual encoder-decoder network (FRED-Net) CNN models for iris and road scene segmentation, and evaluated segmentation performance on seven databases: NICE-II [1], MICHE [2], CASIA distance [3], CASIA interval [3], IITD [4], CamVid [5], and KITTI [6]. We have made the trained models and algorithm available to other researchers.

 

1.     NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 28 December 2017).

2. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.

3.     CASIA-Iris-databases. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 28 December 2017).

4.     IIT Delhi Iris Database. Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm (accessed on 28 December 2017).

5. Brostow, G. J.; Fauqueur, J.; Cipolla, R. Semantic object classes in video: A high-definition ground truth database. Pattern Recognit. Lett. 2009, 30, 88-97.

6.     Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 12311237.
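
For readers who want a concrete picture of the architecture family, the following is a minimal PyTorch sketch of a fully residual encoder-decoder for per-pixel segmentation. It is only an illustration under assumed layer sizes (a 32-channel stem and a single stage per side), not the released FRED-Net.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual unit: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)  # residual (shortcut) connection

class TinyEncoderDecoder(nn.Module):
    """Illustrative residual encoder-decoder producing per-pixel class scores."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, 3, padding=1)
        self.enc = ResidualBlock(32)
        self.down = nn.MaxPool2d(2)                        # encoder: halve resolution
        self.mid = ResidualBlock(32)
        self.up = nn.ConvTranspose2d(32, 32, 2, stride=2)  # decoder: restore resolution
        self.dec = ResidualBlock(32)
        self.head = nn.Conv2d(32, num_classes, 1)          # per-pixel logits

    def forward(self, x):
        x = self.enc(self.stem(x))
        x = self.mid(self.down(x))
        x = self.dec(self.up(x))
        return self.head(x)

# A 3-channel 128 x 128 image yields per-pixel logits at the same resolution.
logits = TinyEncoderDecoder()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])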

 

(2) FRED-Net Model with Algorithm Request

To gain access to the models and algorithm, download the following FRED-Net model with algorithm request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Dong Seop Kim, Min Beom Lee, Muhammad Owais, and Kang Ryoung Park, FRED-Net: Fully Residual Encoder-Decoder Network for Accurate Iris Segmentation, Expert Systems with Applications, in submission.

 

 

===========================================================================================================================================================================================================

< FRED-Net model with algorithm Request Form >

 

Please complete the following form to request access to the FRED-Net model with algorithm (All contents must be completed). This CNN model with algorithm must not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

34. Dongguk Face and Body Database (DFB-DB1) with CNN models and algorithms

 

(1) Introduction

DFB-DB1 was created from images of 22 people obtained by two types of cameras, to assess the performance of the proposed method in a variety of camera environments. The first camera was a Logitech BCC 950; its specifications include a viewing angle of 78°, a maximum resolution of full high-definition (HD) 1080p, and auto-focusing at 30 frames per second (fps). The second camera was a Logitech C920; its specifications include a maximum resolution of full HD 1080p, a viewing angle of 78° at 30 fps, and auto-focusing. Images were taken in an indoor environment with indoor lights on, and each camera was installed at a height of 2 m 40 cm. The database is divided into two categories according to the camera: the first sub-database contains the images captured by the Logitech BCC 950, and the second consists of the images obtained by the Logitech C920 at a camera angle similar to that used for the first.

In addition, we release our two CNN models, trained with DFB-DB1 and with the open ChokePoint database [1], respectively, together with our algorithms.

 

1. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 21 Feb. 2018).

 

(2) Request for DFB-DB1 with CNN model and algorithms

To gain access to DFB-DB1 with CNN model and algorithms, download the following request form. Please scan the request form and email it to Mr. Ja Hyung Koo (koo6190@naver.com).

Any work that uses this DFB-DB1 with CNN model and algorithms must acknowledge the authors by including the following reference.

 

Ja Hyung Koo, Se Woon Cho, Na Rae Baek, Min Cheol Kim, and Kang Ryoung Park, CNN-based Multimodal Recognition in Surveillance Environments by Visible Light Camera Sensor, Sensors, in submission.

 

===========================================================================================================================================================================================================

 

< DFB-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DFB-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

33. Dongguk Night-Time Face Detection database (DNFD-DB1) and algorithm including CNN model

 

(1) Introduction

DNFD-DB1 is a self-constructed database acquired through a fixed single visible-light camera at a distance of approximately 20-22 m at night. The resolution of the camera is 1600 × 1200 pixels, but the images are cropped to the average adult height, which is approximately 600 pixels. A total of 2,002 images of 20 different people were prepared, and there are 4-6 people in each frame. To carry out the 2-fold cross-validation, those 20 people were divided into two subsets of 10 people each. In addition, we have made public two 2-stage Faster R-CNN models, trained with our DNFD-DB1 and with the open database of Fudan University [1], respectively.

 

[1] Open database of Fudan University. Available online: https://cv.fudan.edu.cn/_upload/tpl/06/f4/1780/template1780/humandetection.htm (accessed on 26 March 2018).

 

(2) DNFD-DB1 and CNN model Request

To gain access to DNFD-DB1 and CNN model, download the following request form. Please scan the request form and email it to Mr. Se Woon Cho (jsu319@naver.com).

Any work that uses this DNFD-DB1 and CNN model must acknowledge the authors by including the following reference.

 

Se Woon Cho, Na Rae Baek, Min Cheol Kim, Ja Hyung Koo, Jong Hyun Kim, and Kang Ryoung Park, Face Detection in Nighttime Images Using Visible-Light Camera Sensors with 2-stage Faster R-CNN, Sensors, in submission.

 

===========================================================================================================================================================================================================

 

< DNFD-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DNFD-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

32. Dongguk Iris Spoof Detection CNN Model version 2 (DFSD-CNN-2) with Algorithm

 

(1) Introduction

We trained CNN models using local and global image features, based on the VGG-19-Net architecture, for presentation attack detection in an iris recognition system, using two public databases: Warsaw-2017 [1] and Notre Dame 2015 [2]. We have made the trained models available to other researchers.

 

1. Yambay, D.; Becker, B.; Kohli, N.; Yadav, D.; Czajka, A.; Bowyer, K. W.; Schuckers, S.; Singh, R.; Vatsa, M.; Noore, A.; Gragnaniello, D.; Sansone, C.; Verdoliva, L.; He, L.; Ru, Y.; Li, H.; Liu, N.; Sun, Z.; Tan, T. LivDet iris 2017 - iris liveness detection competition 2017. In Proceedings of the International Conference on Biometrics, Denver, CO, USA, 1-4 October 2017.

2. Doyle, J. S.; Bowyer, K. W. Robust detection of textured contact lens in iris recognition using BSIF. IEEE Access, 2015, 3, 1672-1683.
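
Here, "local and global image features" means the backbone sees both the whole image and an iris-region crop. The sketch below illustrates that idea with torchvision's stock VGG-19 as a stand-in; the crop coordinates, feature dimensions, and the way the two feature vectors are combined are assumptions for illustration, not the released models.

import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg19(weights=None)      # stand-in VGG-19 backbone
vgg.classifier[-1] = nn.Identity()    # expose 4,096-D features instead of class scores

def global_local_feature(image, iris_box):
    """Concatenate features of the whole image (global) and an iris crop (local)."""
    x1, y1, x2, y2 = iris_box
    crop = image[:, :, y1:y2, x1:x2]
    crop = nn.functional.interpolate(crop, size=(224, 224),
                                     mode="bilinear", align_corners=False)
    with torch.no_grad():
        g = vgg(image)                # global feature of the full image
        l = vgg(crop)                 # local feature of the iris region
    return torch.cat([g, l], dim=1)   # combined 8,192-D descriptor

image = torch.randn(1, 3, 224, 224)   # placeholder NIR iris image
feat = global_local_feature(image, (60, 60, 160, 160))
print(feat.shape)  # torch.Size([1, 8192]); fed to a real-vs-attack classifier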

 

(2) DFSD-CNN-2 Model Request

To gain access to the models and algorithm, download the following DFSD-CNN-2 request form. Please sign and scan the request form and email it to Prof. Nguyen (nguyentiendat@dongguk.edu).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

D. T. Nguyen, T. D. Pham, Y. W. Lee, and K. R. Park, Deep Learning-based Enhanced Presentation Attack Detection for Iris Recognition by Combining Local and Global Features Based on NIR Camera Sensor, Sensors, in submission, 2018.

 

===========================================================================================================================================================================================================

 

< DFSD-CNN-2 model Request Form >

 

Please complete the following form to request access to the DFSD-CNN-2 model (All contents must be completed). This CNN model must not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

31. Dongguk Fitness Database (DF-DB2) & CNN Model

 

(1) Introduction

 

We collected banknote fitness databases (DF-DB2) from three national currencies: the Korean won (KRW), the Indian rupee (INR), and the United States dollar (USD). Six denominations exist in the INR dataset (10, 20, 50, 100, 500, and 1000 rupees) and two in the KRW dataset (1000 and 5000 won), each of which consists of three fitness levels (fit, normal, and unfit for recirculation), called the case 1 fitness levels. In these case 1 datasets, each banknote was captured using visible-light reflection (VR) sensors on both sides and an infrared-light transmission (IRT) sensor on the front side. Five denominations exist for the USD (5, 10, 20, 50, and 100 dollars), divided into two fitness levels (fit and unfit), called the case 2 fitness levels. In case 2, two images were captured per banknote: the VR and IRT images of one side. In addition, we have made public the CNN models trained with our DF-DB2.
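
To make the per-banknote input structure concrete, here is a small hypothetical sketch of how the captures described above could be stacked as CNN input channels. The array names and the sensor image size are assumptions for illustration, not the actual DF-DB2 format.

import numpy as np

# Case 1 (KRW, INR): three captures per banknote -- visible-light reflection
# (VR) of both sides plus infrared-light transmission (IRT) of the front side.
H, W = 115, 250                                 # hypothetical sensor image size
vr_front = np.zeros((H, W), dtype=np.float32)
vr_back = np.zeros((H, W), dtype=np.float32)
irt_front = np.zeros((H, W), dtype=np.float32)
case1_input = np.stack([vr_front, vr_back, irt_front])  # shape (3, H, W)

# Case 2 (USD): two captures per banknote -- VR and IRT of one side.
case2_input = np.stack([vr_front, irt_front])           # shape (2, H, W)
print(case1_input.shape, case2_input.shape)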

 

 

(2) DF-DB2 and CNN model Request

To gain access to DF-DB2 and CNN models, download the following request form. Please scan the request form and email it to Prof. Tuyen Danh Pham (phamdanhtuyen@gmail.com).

Any work that uses this DF-DB2 and CNN Model must acknowledge the authors by including the following reference.

 

Tuyen Danh Pham, Dat Tien Nguyen, Jin Kyu Kang, and Kang Ryoung Park, "CNN-Based Multinational Banknote Fitness Classification with a Combination of Visible-Light Reflection and Infrared-Light Transmission Image Sensors," Sensors, in submission.

 

===========================================================================================================================================================================================================

 

< DF-DB2 and CNN model Request Form >

 

Please complete the following form to request access to the DF-DB2 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

30. Dongguk-body-movement-based human identification database version 2 (DBMHI-DB2) & CNN Model

 

(1) Introduction

We collected our database in both dark and bright environments. The database includes both front and back view images of humans. It was collected in five different places on different days with the same camera height, and consists of data from 100 people, including men and women. The database includes both thermal and visible-light images, but only the thermal images were utilized in this research. The people in our database have different heights and widths; their sizes vary from 27 to 150 pixels in width and from 90 to 390 pixels in height. In addition, we have made our trained CNN and CNN-LSTM models public.
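
As background for the CNN-LSTM model mentioned above, the sketch below shows the generic idea of feeding per-frame CNN features of a thermal image sequence into an LSTM for identification. All layer sizes and the sequence length are assumptions for illustration; the released model follows the referenced paper, not this sketch.

import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Per-frame CNN features fed to an LSTM over the gait sequence."""
    def __init__(self, num_ids=100):              # 100 people in DBMHI-DB2
        super().__init__()
        self.cnn = nn.Sequential(                 # shallow per-frame encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.fc = nn.Linear(64, num_ids)          # identity scores

    def forward(self, x):                         # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.fc(h[-1])                     # classify from last hidden state

scores = CNNLSTM()(torch.randn(2, 8, 1, 96, 32))  # 8 thermal frames per sample
print(scores.shape)  # torch.Size([2, 100])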

 

(2) DBMHI-DB2 database & the trained CNN model Request

To gain access to the database and CNN model, download the following DBMHI-DB2 and CNN model request form. Please scan the request form and email it to Mr. Ganbayar Batchuluun (ganabata87@dongguk.edu).

Any work that uses or incorporates the database and CNN model must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Hyo Sik Yoon, Jin Kyu Kang, and Kang Ryoung Park, "Gait-Based Human Identification by Combining Shallow Convolutional Neural Network-stacked Long Short-term Memory and Deep Convolutional Neural Network," IEEE Access, in submission.

 

===========================================================================================================================================================================================================

 

< DBMHI-DB2 database & the trained CNN model Request Form >

 

Please complete the following form to request access to the DBMHI-DB2 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

===========================================================================================================================================================================================================

 

 

29. Dongguk Multimodal Recognition CNN of Finger-vein and Finger shape (DMR-CNN) with Algorithm

 

(1) Introduction

We trained a multimodal recognition system for finger-vein and finger shape, based on ResNet-50 and ResNet-101, using two databases: the Shandong University homologous multi-modal traits database (SDUMLA-HMT) [1] and the Hong Kong Polytechnic University Finger Image Database (version 1) [2]. We have made the trained models and algorithm available to other researchers.

 

1. SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm (accessed on 7 May 2018).

2. Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.
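
The sketch below shows one generic way to combine two such backbones at the score level, using torchvision's stock ResNets as stand-ins. The embedding choice, cosine similarity measure, and fusion weights are assumptions for illustration, not necessarily the fusion rule of the released models.

import torch
import torch.nn as nn
from torchvision import models

# Two independent matchers (finger-vein and finger shape), each a ResNet
# backbone producing an embedding; matching scores are fused by a weighted sum.
vein_net = models.resnet50(weights=None)
shape_net = models.resnet101(weights=None)
vein_net.fc = nn.Identity()    # use the 2,048-D pooled features as embeddings
shape_net.fc = nn.Identity()

def match_score(net, enrolled, probe):
    """Cosine similarity between enrolled and probe embeddings."""
    with torch.no_grad():
        e, p = net(enrolled), net(probe)
    return nn.functional.cosine_similarity(e, p).item()

enrolled = torch.randn(1, 3, 224, 224)   # placeholder image tensors
probe = torch.randn(1, 3, 224, 224)
s_vein = match_score(vein_net, enrolled, probe)
s_shape = match_score(shape_net, enrolled, probe)
fused = 0.7 * s_vein + 0.3 * s_shape     # illustrative fusion weights
print(fused)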

 

(2) DMR-CNN Model with Algorithm Request

To gain access to the models and algorithm, download the following DMR-CNN model with algorithm request form. Please sign and scan the request form and email it to Mr. Wan Kim (daiz0128@naver.com).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

W. Kim, J. M. Song, and K. R. Park, Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor, Sensors, Vol. 18, Issue 7(2296), pp. 1-34, July 2018.

===========================================================================================================================================================================================================

< DMR-CNN model with algorithm Request Form >

 

Please complete the following form to request access to the DMR-CNN model with algorithm (All contents must be completed). This CNN model with algorithm must not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

28. Dongguk Drone Camera Database (DDroneC-DB2) with CNN models

 

(1) Introduction

In our experiments, we used a DJI Phantom 4 quadcopter to capture video while the drone was landing or hovering. It includes a color camera with a 1/2.3-inch complementary metal-oxide-semiconductor (CMOS) sensor, a 94° field of view (FOV), and an f/2.8 lens. The captured videos are in MPEG-4 (MP4) format at 30 fps and have a size of 1280 × 720 pixels. The drone's gimbal is adjusted 90° downward so that the camera faces the ground during landing. For our database (shown in Table 1), we captured three videos, acquiring them in varying types of environments (humidity level, wind velocity, temperature, and weather). We also make available our CNN model trained on this database and another trained on the PASCAL VOC and MS COCO databases.

 

Table 1. Description of DDroneC-DB2

Morning
- Far: 3,088 images; Close: 641 images
  Condition: humidity 44.7%, wind speed 5.2 m/s, temperature 15.2 °C, autumn, sunny, illuminance 1,800 lux
  Description: landing speed 5.5 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200)
- Close (from DDroneC-DB1): 425 images
  Condition: humidity 41.5%, wind speed 1.4 m/s, temperature 8.6 °C, spring, sunny, illuminance 1,900 lux
  Description: landing speed 4 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200)

Afternoon
- Far: 2,140 images; Close: 352 images
  Condition: humidity 82.1%, wind speed 6.5 m/s, temperature 28 °C, summer, sunny, illuminance 2,250 lux
  Description: landing speed 7 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200)
- Close (from DDroneC-DB1): 148 images
  Condition: humidity 73.8%, wind speed 2 m/s, temperature -2.5 °C, winter, cloudy, illuminance 1,200 lux
  Description: landing speed 6 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200)

Evening
- Far: 3,238 images; Close: 326 images
  Condition: humidity 31.5%, wind speed 7.2 m/s, temperature 6.9 °C, autumn, foggy, illuminance 650 lux
  Description: landing speed 6 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200)
- Close (from DDroneC-DB1): 284 images
  Condition: humidity 38.4%, wind speed 3.5 m/s, temperature 3.5 °C, winter, windy, illuminance 500 lux
  Description: landing speed 4 m/s; auto mode of camera shutter speed (8~1/8000 s) and ISO (100~3200)

 

 

(2) Request for DDroneC-DB2 and CNN models

To gain access to the database with CNN models, download the following request form. Please scan the request form and email it to Mr. Phong Ha Nguyen (stormwindvn@dongguk.edu).

Any work that uses or incorporates the database must acknowledge the authors by including the following reference.

 

Phong Ha Nguyen, Muhammad Arsalan, Ja Hyung Koo, Rizwan Ali Naqvi, Noi Quang Truong, and Kang Ryoung Park, LightDenseYOLO: A Fast and Accurate Marker Tracker for Autonomous UAV Landing by Visible Light Camera Sensor on Drone, Sensors, Vol. 18, Issue 6(1703), pp. 1-30, May 2018.

 

===========================================================================================================================================================================================================

 

< Request Form for DDroneC-DB2 and CNN models >

 

Please complete the following form to request access to the DDroneC-DB2 and CNN models (All contents must be completed). This database must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

27. Dongguk Periocular Database (DP-DB1) with CNN models and algorithms

 

(1) Introduction

The DP-DB1 database was created for research on periocular recognition in an indoor surveillance environment. The camera used to capture the images was a Logitech BCC 950, whose specifications include a viewing angle of 79 degrees, a maximum resolution of full high definition (Full HD) 1080p, and a frame rate of 30 fps with auto-focusing. The images were captured in an indoor hallway (with indoor lights on), and the camera was installed at a height of 2 m 40 cm. This database consists of 20 people captured in three scenarios: straight-line movement, corner movement, and standing still. In the standing-still scenario, the images were acquired from 4 different positions. In addition, we release our two CNN models, trained with DP-DB1 and with the open ChokePoint database [1, 2], respectively, together with our algorithms.

 

1. Wong, Y.; Chen, S.; Mau, S.; Sanderson, C.; Lovell, B. C. Patch-based probabilistic image quality assessment for face selection and improved video-based face recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, Colorado Springs, CO, USA, 20-25 June 2011; pp. 74-81.

2. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 21 Feb. 2018).

 

(2) Request for DP-DB1 with CNN model and algorithms

To gain access to DP-DB1 with CNN model and algorithms, download the following request form. Please scan the request form and email it to Mr. Min Cheol Kim (mincheol9166@naver.com).

Any work that uses this DP-DB1 with CNN model and algorithms must acknowledge the authors by including the following reference.

 

Min Cheol Kim, Ja Hyung Koo, Se Woon Cho, Na Rae Baek, and Kang Ryoung Park, Convolutional Neural Network-based Periocular Recognition in Surveillance Environments, IEEE Access, in submission.

 

===========================================================================================================================================================================================================

 

< Request form for DP-DB1 with CNN model and algorithms >

 

Please complete the following form to request access to the DP-DB1 with CNN model and algorithms (All contents must be completed). This database and CNN model with algorithms must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

26. Dongguk IrisDenseNet CNN Model (DI-CNN) with Algorithm

 

(1) Introduction

We trained IrisDenseNet CNN models, based on the DenseNet and SegNet architectures, for iris segmentation, and evaluated segmentation performance on five databases: NICE-II [1], MICHE [2], CASIA distance [3], CASIA interval [3], and IITD [4]. We have made the trained models and algorithm available to other researchers.

 

1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 28 December 2017).

2. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.

3. CASIA-Iris-databases. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 28 December 2017).

4. IIT Delhi Iris Database. Available online: http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htm (accessed on 28 December 2017).
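
The defining ingredient of IrisDenseNet is dense connectivity, in which each convolutional layer receives the concatenation of all preceding feature maps. A minimal PyTorch sketch of one dense block follows; the growth rate and layer count are assumptions for illustration, not the released model's values.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv layer sees the concatenation of all earlier feature maps."""
    def __init__(self, in_ch, growth=12, layers=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, 3, padding=1)
            for i in range(layers)
        )

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)  # dense reuse of all features

block = DenseBlock(in_ch=16)
out = block(torch.randn(1, 16, 64, 64))
print(out.shape)  # torch.Size([1, 64, 64, 64]): 16 + 4 * 12 channels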

 

(2) DI-CNN Model with Algorithm Request

To gain access to the models and algorithm, download the following DI-CNN model with algorithm request form. Please sign and scan the request form and email it to Mr. Arsalan (arsal@dongguk.edu).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Rizwan Ali Naqvi, Dong Seop Kim, Phong Ha Nguyen, Muhammad Owais and Kang Ryoung Park, IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors, Sensors, Vol. 18, Issue 5(1501), pp. 1-30, May 2018.

===========================================================================================================================================================================================================

< DI-CNN model with algorithm Request Form >

 

Please complete the following form to request access to the DI-CNN model with algorithm (All contents must be completed). This CNN model with algorithm must not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

25. Dongguk Iris Spoof Detection CNN Model (DISD-CNN) with Algorithm

 

(1) Introduction

We trained CNN models, based on the VGG-19-Net architecture, for presentation attack detection in an iris recognition system, using two public databases, Warsaw-2017 [1] and Notre Dame 2015 [2], as shown in Tables 1 and 2. We have made the trained models available to other researchers.

Table 1. Description of training and testing data used with the Warsaw-2017 dataset

Original dataset
- Training: 1,844 real / 2,669 attack / 4,513 total
- Test-known: 974 real / 2,016 attack / 2,990 total
- Test-unknown: 2,350 real / 2,160 attack / 4,510 total

Augmented dataset
- Training: 27,660 real (1,844 x 15) / 24,021 attack (2,669 x 9) / 51,681 total
- Test-known: 974 real / 2,016 attack / 2,990 total
- Test-unknown: 2,350 real / 2,160 attack / 4,510 total

Table 2. Description of training and testing data used with the Notre Dame 2015 dataset

Original ND2015 dataset
- Training: 600 real / 600 attack / 1,200 total
- Test-known: 900 real / 900 attack / 1,800 total
- Test-unknown: 900 real / 900 attack / 1,800 total

Augmented dataset
- Training: 29,400 real (600 x 49) / 29,400 attack (600 x 49) / 58,800 total
- Test-known: 900 real / 900 attack / 1,800 total
- Test-unknown: 900 real / 900 attack / 1,800 total

 

1. Yambay, D.; Becker, B.; Kohli, N.; Yadav, D.; Czajka, A.; Bowyer, K. W.; Schuckers, S.; Singh, R.; Vatsa, M.; Noore, A.; Gragnaniello, D.; Sansone, C.; Verdoliva, L.; He, L.; Ru, Y.; Li, H.; Liu, N.; Sun, Z.; Tan, T. LivDet iris 2017 - iris liveness detection competition 2017. In Proceedings of the International Conference on Biometrics, Denver, CO, USA, 1-4 October 2017.

2. Doyle, J. S.; Bowyer, K. W. Robust detection of textured contact lens in iris recognition using BSIF. IEEE Access, 2015, 3, 1672-1683.
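
The augmentation factors in Tables 1 and 2 (for example, 600 x 49) indicate that each training image was expanded into many variants offline. The sketch below shows one generic way to do this with a grid of translated crops; the crop size and step are assumptions for illustration, not the augmentation actually used in the paper.

from PIL import Image

def translated_crops(img, crop_size, step):
    """Expand one image into several shifted crops (generic offline augmentation)."""
    w, h = img.size
    cw, ch = crop_size
    crops = []
    for top in range(0, h - ch + 1, step):
        for left in range(0, w - cw + 1, step):
            crops.append(img.crop((left, top, left + cw, top + ch)))
    return crops

# A 7 x 7 grid of shifted crops turns one image into 49 training samples.
img = Image.new("L", (260, 260))  # placeholder iris image
crops = translated_crops(img, crop_size=(224, 224), step=6)
print(len(crops))  # 49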

 

(2) DISD-CNN Model Request

To gain access to the models and algorithm, download the following DISD-CNN request form. Please sign and scan the request form and email it to Mr. Nguyen (nguyentiendat@dongguk.edu).

 

Any work that uses this CNN model must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Na Rae Baek, Tuyen Danh Pham, and Kang Ryoung Park, Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor, Sensors, Vol. 18, Issue 5(1315), pp. 1-30, May 2018.

===========================================================================================================================================================================================================

< DISD-CNN model Request Form >

 

Please complete the following form to request access to the DISD-CNN model (All contents must be completed). This CNN model must not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

24. Dongguk Visible Light Iris Recognition CNN Model (DVLIR-CNN)

 

(1) Introduction

We built the iris recognition algorithm on three convolutional neural networks (CNNs) trained with the NICE-II training database [1, 2], the mobile iris challenge evaluation (MICHE) data [3, 4], and the CASIA-Iris-Distance database [5], respectively. We have made these trained CNN models available to other researchers.

 

1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 26 July 2017).

2. Proença, H.; Filipe, S.; Santos, R.; Oliveira, J.; Alexandre, L. A. The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1529-1535.

3. de Marsico, M.; Nappi, M.; Ricco, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.

4. Haindl, M.; Krupička, M. Unsupervised detection of non-iris occlusions. Pattern Recognit. Lett. 2015, 57, 60-65.

5. CASIA-Iris-Distance. Available online: http://biometrics.idealtest.org/dbDetailForUser.do?id=4 (accessed on 13 November 2017).

 

(2) DVLIR-CNN model Request

To gain access to the CNN models, download the following DVLIR-CNN model request form. Please scan the request form and email it to Mr. Min Beom Lee (mblee@dongguk.edu).

Any work that uses this CNN Model must acknowledge the authors by including the following reference.

 

Min Beom Lee, Hyung Gil Hong, and Kang Ryoung Park, "Noisy Ocular Recognition Based on Three Convolutional Neural Networks," Sensors, Vol. 17, Issue 12(2933), pp. 1-26, December 2017.

 

===========================================================================================================================================================================================================

 

< DVLIR-CNN model Request Form >

 

Please complete the following form to request access to the DVLIR-CNN model (All contents must be completed). This CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

23. Dongguk Aggressive and Smooth Driving Database (DASD-DB1) and CNN Model

 

(1) Introduction

Fifteen subjects voluntarily participated in the experiment. Because it was too risky to create an aggressive driving situation under real traffic conditions, we utilized two types of driving simulators to assess baseline aggressive and smooth driving situations. As illustrated in Figure 1, the experiment included 5 min of smooth driving and another 5 min of aggressive driving. Between the sections of the experiment, every subject watched a sequence of neutral images from the international affective picture system, thereby maintaining neutral emotional input. After the experiment, the subjects rested for about 10 min. This procedure was repeated three times.

 

(2) DASD-DB1 and CNN model Request

To gain access to DASD-DB1 and CNN models, download the following request form. Please scan the request form and email it to Mr. Kwan Woo Lee (leekwanwoo@dgu.edu).

Any work that uses this DASD-DB1 and CNN Model must acknowledge the authors by including the following reference.

 

Kwan Woo Lee, Hyo Sik Yoon, Jong Min Song, and Kang Ryoung Park, Convolutional Neural Network-Based Classification of Driver's Emotion during Aggressive and Smooth Driving Using Multi-Modal Camera Sensors, Sensors, Vol. 18, Issue 4(957), pp. 1-22, March 2018.

 

===========================================================================================================================================================================================================

 

< DASD-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DASD-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

22. Dongguk Night-time Pedestrian Detection Faster R-CNN and Algorithm

 

(1) Introduction

 

We developed a modified Faster R-CNN model with an algorithm for pedestrian detection at nighttime, using augmented images from the KAIST database [1] and the Caltech database [2]. We have made this trained CNN model available to other researchers.

 

1. Hwang, S.; Park, J.; Kim, N.; Choi, Y.; Kweon, I.S. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7-12 June 2015; pp. 1037-1045.

2. Dollár, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743-761.
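
The referenced paper fuses deep convolutional features of successive images; one generic way to realize such fusion is to combine per-frame feature maps before the detection head. The sketch below averages the features of two consecutive frames using a stand-in backbone; it illustrates the fusion idea only and is not the authors' exact architecture.

import torch
import torch.nn as nn

backbone = nn.Sequential(            # stand-in convolutional feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def fused_features(frames):
    """Average deep features of successive frames to suppress noise
    that is uncorrelated across time (e.g., night-time sensor noise)."""
    feats = [backbone(f) for f in frames]
    return torch.stack(feats).mean(dim=0)

frame_t0 = torch.randn(1, 3, 240, 320)   # consecutive night-time frames
frame_t1 = torch.randn(1, 3, 240, 320)
fmap = fused_features([frame_t0, frame_t1])
print(fmap.shape)  # torch.Size([1, 32, 120, 160]); fed to an RPN/detection head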

 

(2) Modified faster R-CNN model with algorithm Request

To gain access to the CNN models with algorithm, download the following request form. Please scan the request form and email it to Mr. Jong Hyun Kim (zzingae@dongguk.edu).

Any work that uses this CNN Model with algorithm must acknowledge the authors by including the following reference.

 

Jong Hyun Kim, Ganbayar Batchuluun, and Kang Ryoung Park, Pedestrian Detection Based on Faster R-CNN in Nighttime by Fusing Deep Convolutional Features of Successive Images, Expert Systems with Applications, (in press).

 

===========================================================================================================================================================================================================

 

< Modified faster R-CNN model with algorithm Request Form >

 

Please complete the following form to request access to this CNN model with algorithm (All contents must be completed). This CNN model with algorithm must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

21. Dongguk Face Spoof Detection CNN Model (DFSD-CNN) with Algorithm

 

(1) Introduction

 

We built CNN models with algorithms for face spoof detection, using augmented images from the NUAA database [1] and the CASIA database [2], respectively. We have made these trained CNN models available to other researchers.

 

1. Tan, X.; Li, Y.; Liu, J.; Jiang, L. Face liveness detection from a single image with sparse low rank bilinear discriminative model. In Proceedings of the 11th European Conference on Computer Vision, Greece, 2010.

2. Zhang, Z.; Yan, J.; Liu, S.; Lei, Z.; Yi, D.; Li, S. Z. A face anti-spoofing database with diverse attacks. In Proceedings of the 5th International Conference on Biometrics, New Delhi, India, 29 March-1 April 2012.
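
Combining deep and handcrafted image features generally means concatenating a CNN embedding with a classic descriptor before a final classifier. The sketch below uses a simple intensity histogram as a stand-in handcrafted feature and an assumed 4,096-D CNN embedding; the paper's actual descriptors and classifier may differ.

import numpy as np

def hist_feature(gray):
    """Handcrafted stand-in: a normalized 32-bin intensity histogram."""
    h, _ = np.histogram(gray, bins=32, range=(0, 256))
    return h / max(h.sum(), 1)

def combined_feature(cnn_embedding, gray_face):
    """Concatenate deep and handcrafted features for the spoof classifier."""
    return np.concatenate([cnn_embedding, hist_feature(gray_face)])

cnn_embedding = np.random.rand(4096).astype(np.float32)  # assumed CNN output
gray_face = np.random.randint(0, 256, (224, 224))        # placeholder face image
feat = combined_feature(cnn_embedding, gray_face)
print(feat.shape)  # (4128,); fed to, e.g., an SVM for the real-vs-attack decision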

 

(2) DFSD-CNN model with algorithm Request

To gain access to the CNN models with algorithm, download the following request form. Please scan the request form and email it to Prof. Dat Tien Nguyen (nguyentiendat@dongguk.edu).

Any work that uses this CNN Model with algorithm must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Tuyen Danh Pham, Na Rae Baek, and Kang Ryoung Park, Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors, Sensors, Vol. 18, Issue 3(699), pp. 1-28, February 2018.

 

===========================================================================================================================================================================================================

 

< DFSD-CNN model with algorithm Request Form >

 

Please complete the following form to request access to the DFSD-CNN model with algorithm (All contents must be completed). This CNN model with algorithm must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

20. Dongguk Shadow Detection Database (DSDD-DB1) & CNN Model

 

(1) Introduction

 

DSDD-DB1 was obtained by installing visible-light cameras 5 to 10 m above the ground, which approximates the conventional height of a surveillance camera. As shown in Figure 1 and Table 1, images were captured in the morning, in the afternoon, in the evening, and on rainy days, under various weather, temperature, and illumination conditions. A total of 24,000 images, constituting five sub-datasets, were obtained. The original image size is 800 × 600 pixels with three RGB channels.

 

Table 1. Description of five datasets.

Dataset I
- Condition: 0.9 °C, afternoon, sunny, humidity 24%, wind 3.6 m/s
- Shadow with dark color cast due to strong sunlight.

Dataset II
- Condition: 6.0 °C, afternoon, cloudy, humidity 39%, wind 1.9 m/s
- Sunlight weakened by cloud, so that a shadow of lighter color is cast.

Dataset III
- Condition: 8.0 °C, evening, cloudy, humidity 42%, wind 3.5 m/s
- Darker image due to weak evening sunlight.
- Long and many shadows due to the sun position in the evening and the reflection on buildings.

Dataset IV
- Condition: 5.2 °C, morning, sunny, humidity 37%, wind 0.6 m/s
- Background and object become less distinguishable due to strong morning sunlight.

Dataset V
- Condition: 13.8 °C, afternoon, rainy, humidity 65%, wind 2.0 m/s
- Overall dark image due to rainy day.
- Many shadows generated by the wet background floor.

 

In addition, we have made public two CNN models, trained with our DSDD-DB1 and with the open CAVIAR database [1], respectively.

 

[1] CAVIAR: Context Aware Vision using Image-based Active Recognition. Available online: http://homepages.inf.ed.ac.uk/rbf/CAVIAR/ (accessed on 8 August 2017).

 

(2) DSDD-DB1 and CNN model Request

To gain access to DSDD-DB1 and CNN models, download the following request form. Please scan the request form and email it to Mr. Dong Seop Kim (k_ds1028@naver.com).

Any work that uses this DSDD-DB1 and CNN Model must acknowledge the authors by including the following reference.

 

Dong Seop Kim, Muhammad Arsalan, and Kang Ryoung Park, Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor, Sensors, Vol. 18, Issue 4(960), pp. 1-19, March 2018.

 

===========================================================================================================================================================================================================

 

< DSDD-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DSDD-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

19. Dongguk Fitness Database (DF-DB1) & CNN Model

 

(1) Introduction

 

We collected banknote fitness databases (DF-DB1) from three national currencies, the Korean won (KRW), the Indian rupee (INR), and the United States dollar (USD), as shown in Table 1. The KRW banknote image database is composed of banknotes in two denominations, 1000 and 5000 won. The denominations of banknotes in the INR database are 10, 20, 50, 100, 500, and 1000 rupees, and those for the USD are 5, 10, 50, and 100 dollars. Three levels of fitness (fit, normal, and unfit for recirculation) are assigned to the banknotes of each denomination for the KRW and INR, and two levels (fit and unfit) are defined for the USD banknotes in the experimental dataset.

 

Table 1. Number of banknote images in each national currency database.

Fit
- Number of images: KRW 10,084; INR 11,909; USD 2,907
- Number of images after data augmentation: KRW 30,252; INR 71,454; USD 61,047

Normal
- Number of images: KRW 12,430; INR 7,952; USD N/A
- Number of images after data augmentation: KRW 37,290; INR 47,712; USD N/A

Unfit
- Number of images: KRW 11,274; INR 2,203; USD 642
- Number of images after data augmentation: KRW 33,822; INR 13,218; USD 45,582

 

In addition, we made CNN models trained with our DF-DB1 public.

 

 

(2) DF-DB1 and CNN model Request

To gain access to DF-DB1 and CNN models, download the following request form. Please scan the request form and email it to Prof. Tuyen Danh Pham (phamdanhtuyen@gmail.com).

Any work that uses this DF-DB1 and CNN Model must acknowledge the authors by including the following reference.

 

Tuyen Danh Pham, Dat Tien Nguyen, Wan Kim, Sung Ho Park, and Kang Ryoung Park, "Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor," Sensors, Vol. 18, Issue 2(472), pp. 1-19, February 2018.

 

===========================================================================================================================================================================================================

 

< DF-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DF-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

18. Dongguk driver gaze classification database (DDGC-DB1) and CNN model

 

(1) Introduction

 

Seventeen spots (gaze zones) were designated for the experiment, and each driver stared at each spot five times. Data were collected from 20 drivers, including 3 wearing glasses. The image size is 1600 × 1200 pixels with 3 channels. When the participants were staring at each spot, they were told to act naturally, as if they were actually driving; they were not restrained to one position or given any special instructions to act in an unnatural manner. Because of the risk of car accidents, the participants could not be asked to accurately stare at the 17 designated spots while actually driving. Instead, this study obtained images from various locations (from roads in daylight to a parking garage) in a real vehicle (model name SM5 New Impression by Renault Samsung) with its power on, but in park, to create an environment most similar to actual driving (including factors like car vibration and external light). Moreover, to understand the influence of various kinds of external light on driver gaze detection, test data were acquired at different times of the day: in the morning, in the afternoon, and at night.

In addition, we have made public two CNN models, trained with our DDGC-DB1 and with the open database (CAVE-DB [1]), respectively.

 

[1] Smith, B.A.; Yin, Q.; Feiner, S.K.; Nayar, S.K. Gaze Locking: Passive Eye Contact Detection for Human-Object Interaction. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St. Andrews, Scotland, UK, 8-11 October 2013; pp. 271-280.

 

(2) DDGC-DB1 and CNN model Request

To gain access to DDGC-DB1 and CNN models, download the following request form. Please scan the request form and email it to Mr. Rizwan Ali Naqvi (rizwanali@dongguk.edu).

Any work that uses this DDGC-DB1 and CNN Model must acknowledge the authors by including the following reference.

 

Rizwan Ali Naqvi, Muhammad Arsalan, Ganbayar Batchuluun, Hyo Sik Yoon, and Kang Ryoung Park, Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor, Sensors, Vol. 18, Issue 2(456), pp. 1-34, February 2018.

 

===========================================================================================================================================================================================================

 

< DDGC-DB1 and CNN model Request Form >

 

Please complete the following form to request access to the DDGC-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

17. Dongguk Age Estimation CNN Model (DAE-CNN)

 

(1) Introduction

 

We made age estimation robust to optical and motion blurring using ResNet-152 models trained with the PAL database [1, 2] and the MORPH database [3], respectively, each including artificially (optical and motion) blurred images. We have made these trained CNN models available to other researchers.

 

1. Minear, M.; Park, D.C. A lifespan database of adult facial stimuli. Behav. Res. Methods Instrum. Comput. 2004, 36, 630-633.

2. PAL database. Available online: http://agingmind.utdallas.edu/download-stimuli/face-database/ (accessed on 17 May 2017).

3. MORPH database. Available online: https://ebill.uncw.edu/C20231_ustores/web/store_main.jsp?STOREID=4 (accessed on 17 May 2017).
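
The artificial optical and motion blurring used to augment the training images can be approximated with standard filters, as in the sketch below; the Gaussian sigma and the motion-kernel length are assumptions for illustration, not the paper's exact parameters.

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def optical_blur(img, sigma=2.0):
    """Approximate optical (defocus-like) blur with a Gaussian filter."""
    return gaussian_filter(img, sigma=sigma)

def motion_blur(img, length=9):
    """Approximate horizontal motion blur with a 1-D averaging kernel."""
    kernel = np.ones((1, length)) / length
    return convolve(img, kernel, mode="nearest")

face = np.random.rand(128, 128)   # placeholder grayscale face image
print(optical_blur(face).shape, motion_blur(face).shape)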

 

(2) DAE-CNN model Request

To gain access to the CNN models, download the following DAE-CNN model request form. Please scan the request form and email it to Mr. Jeon Seong Kang (kjs2605@dgu.edu).

Any work that uses this CNN Model must acknowledge the authors by including the following reference.

 

Jeon Seong Kang, Chan Sik Kim, Young Won Lee, Se Woon Cho, and Kang Ryoung Park, Age Estimation Robust to Optical and Motion Blurring by Deep Residual CNN, Symmetry-Basel, Vol. 10, Issue 4(108), pp. 1-23, April 2018. 

 

===========================================================================================================================================================================================================

 

< DAE-CNN model Request Form >

 

Please complete the following form to request access to the DAE-CNN model (All contents must be completed). This CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

16. Dongguk Single Camera-based Driver Database (DSCD-DB1)

 

(1) Introduction

 

We collected DSCD-DB1 from a total of 26 participants: 10 wearing nothing, 8 wearing only glasses (four kinds), 5 wearing only sunglasses (two kinds), and 3 wearing only a hat. In addition, even the participants wearing nothing took various poses, including putting one hand to the cheek or using a mobile phone. Fifteen spots in the car were designated for the experiment, and each participant stared at each spot five times. When the participants were staring at each spot, they were told to act naturally, as if they were actually driving; they were not restrained to one position or given any special instructions to act in an unnatural manner. Because of the risk of car accidents, the participants could not be asked to accurately stare at the 15 designated spots while actually driving. Instead, this study obtained images from various locations (from roads in daylight to a parking garage) in a real vehicle (model name SM5 New Impression by Renault Samsung) with its power on, but in park, in order to create an environment most similar to actual driving (including factors like car vibration and external light). Moreover, to understand the influence of various kinds of external light on driver gaze detection, test data were acquired at different times of the day: in the morning, in the afternoon, and at night.

In addition, we collected data from participants sitting in the seat beside the driver, gazing at the 15 spots while the driver was actually driving the car. Because these data were collected by attaching our gaze tracking device in front of the participant in the seat beside the driver, the conditions of data acquisition were similar to those for the driver. To check our method in various car environments, we used a different car (model name Daewoo Lacetti Premiere by Chevrolet). This database was collected from a total of 10 participants: 3 wearing nothing, 3 wearing only glasses (three kinds), 2 wearing only sunglasses (two kinds), and 2 wearing only a hat. In addition, even the participants wearing nothing took various poses, including putting one hand to the cheek or using a mobile phone. When the participants were staring at each spot, they were told to act naturally, as if they were actually driving, and were not restrained to one position or given any special instructions to act in an unnatural manner. To understand the influence of various kinds of external light on driver gaze detection, test data were acquired at different times of the day (in the morning, in the afternoon, and at night) and were collected while driving on various roads.

Data were obtained and processed on a laptop computer with a 2.80 GHz CPU (Intel® Core i5-4200H) and 8 GB of RAM.

 

(2) DSCD-DB1 database Request

To gain access to the database, download the following DSCD-DB1 request form. Please scan the request form and email it to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).

Any work that uses or incorporates the database must acknowledge the authors by including the following reference.

 

Hyo Sik Yoon, Hyung Gil Hong, Dong Eun Lee, and Kang Ryoung Park, "Driver's Eye-based Gaze Tracking System by One-Point Calibration Based on Single NIR Camera," in submission to Multimedia Tools and Applications.

 

===========================================================================================================================================================================================================

 

< DSCD-DB1 database Request Form >

 

Please complete the following form to request access to the DSCD-DB1 (All contents must be completed). This database must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

===========================================================================================================================================================================================================

 

 

 

15. Dongguk Body Movement-based Human Identification Database (DBMHI-DB1) & CNN Model

 

(1) Introduction

We collected our database in both dark and bright environments. The database includes both front and back view images of humans. It was collected in five different places on different days with the same camera height, and consists of data from 80 people, including men and women. The database includes both thermal and visible-light images, but only the thermal images were utilized in this research. The people in our database have different heights and widths; their sizes vary from 27 to 150 pixels in width and from 90 to 390 pixels in height. The description of this database is presented in Table 1.

 

Table 1. Description of our database

Dataset I (19 °C, 62 lux)
- The 3rd floor inside building
- A man is walking in a corridor
- Halo effect occurs below feet in thermal image

Dataset II (17 °C, 84 lux)
- The 6th floor inside building
- Four men are walking in a corridor
- Halo effect occurs below feet in thermal image

Dataset III (20 °C, 125 lux)
- The 7th floor inside building
- A man is walking in a corridor while using a cellphone
- Difference between human body and background is small

Dataset III (18 °C, 8 lux)
- The 7th floor inside building
- A woman with a bag is walking in a dark corridor
- Halo effect occurs below feet in thermal image

Dataset IV (18 °C, 140 lux)
- The 8th floor inside building
- A man is walking in a corridor
- Halo effect occurs below feet in thermal image

Dataset IV (17 °C, 1 lux)
- The 8th floor inside building
- Two men are walking in a dark corridor
- Halo effect occurs below feet in thermal image

Dataset V (18 °C, 130 lux)
- The 7th floor inside building (different corridor than that in dataset III)
- Two men are walking in a corridor
- Difference between human body and background is small

Dataset V (18 °C, 1 lux)
- The 7th floor inside building (different corridor than that in dataset III)
- A man with a small item is walking in a dark corridor
- Halo effect occurs below feet in thermal image

 

In addition, we made our trained CNN model public.

 

(2) DBMHI-DB1 database & the trained CNN model Request

To gain access to the database and CNN model, download the following DBMHI-DB1 and CNN model request form. Please scan the request form and email it to Mr. Ganbayar Batchuluun (ganabata87@dongguk.edu).

Any work that uses or incorporates the database and CNN model must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Rizwan Ali Naqvi, Wan Kim, and Kang Ryoung Park, "Body-movement-based human identification using convolutional neural network," Expert Systems with Applications, Vol. 101, pp. 56-77, July 2018.

 

===========================================================================================================================================================================================================

 

< DBMHI-DB1 database & the trained CNN model Request Form >

 

Please complete the following form to request access to the DBMHI-DB1 and CNN model (All contents must be completed). This database and CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

===========================================================================================================================================================================================================

 

 

 

14. Dongguk Visible Light Iris Segmentation CNN Model (DVLIS-CNN)

 

(1) Introduction

We built the segmentation algorithm of the iris region on two convolutional neural networks (CNNs) trained with the NICE-II training database [1, 2] and the mobile iris challenge evaluation (MICHE) data [3, 4], respectively. We have made these trained CNN models available to other researchers.

 

1. NICE.II. Noisy Iris Challenge Evaluation-Part II. Available online: http://nice2.di.ubi.pt/index.html (accessed on 26 July 2017).

2. Proença, H.; Filipe, S.; Santos, R.; Oliveira, J.; Alexandre, L. A. The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1529-1535.

3. de Marsico, M.; Nappi, M.; Ricco, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17-23.

4. Haindl, M.; Krupička, M. Unsupervised detection of non-iris occlusions. Pattern Recognit. Lett. 2015, 57, 60-65.

 

(2) DVLIS-CNN model Request

To gain access to the CNN models, download the following DVLIS-CNN model request form. Please scan the request form and email it to Mr. Muhammad Arsalan (arsal@dongguk.edu).

Any work that uses this CNN Model must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Hyung Gil Hong, Rizwan Ali Naqvi, Min Beom Lee, Min Cheol Kim, Dong Seop Kim, Chan Sik Kim, and Kang Ryoung Park, "Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment," Symmetry-Basel, Vol. 9, Issue 11(263), pp. 1-25, November 2017.

 

===========================================================================================================================================================================================================

 

< DVLIS-CNN model Request Form >

 

Please complete the following form to request access to the DVLIS-CNN model (All contents must be completed). This CNN model must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

13. Dongguk Drone Camera Database (DDroneC-DB1)

 

(1) Introduction

 

In our experiments, we used a DJI Phantom 4 quadcopter to capture video while the drone was landing or hovering. It includes a color camera with a 1/2.3-inch complementary metal-oxide-semiconductor (CMOS) sensor, a 94° field of view (FOV), and an f/2.8 lens. The captured videos are in MPEG-4 (MP4) format at 30 fps and have a size of 1280 × 720 pixels. The drone's gimbal is adjusted 90° downward so that the camera faces the ground during landing. Our database (shown in Table 1) is divided into two sub-databases: drone landing on the marker, and drone hovering over the same position while the marker is moving on the ground. For each sub-database, we captured four videos, at 10 AM, 2 PM, 6 PM, and 10 PM, in varying types of environments (humidity level, wind velocity, temperature, and weather). The marker was visible in the sequences for the morning, afternoon, and evening, but barely visible in the night video.

 

Table 1. Description of DDroneC-DB1

Sub-database 1 (drone landing):

- Morning (humidity 41.5 %, wind speed 1.4 m/s, temperature 8.6 °C, spring, sunny): a sunny day with a clear sky, which affected the illumination on the marker; landing speed 4 m/s.

- Afternoon (humidity 73.8 %, wind speed 2 m/s, temperature -2.5 °C, winter, cloudy): the low level of illumination observed in winter affected the intensity of the background area; landing speed 6 m/s.

- Evening (humidity 38.4 %, wind speed 3.5 m/s, temperature 3.5 °C, winter, windy): the marker's position changed due to strong wind; landing speed 4 m/s.

- Night (humidity 37.5 %, wind speed 3.2 m/s, temperature 6.9 °C, spring, foggy): the marker cannot be seen owing to the low level of light at night; landing speed 6 m/s.

Sub-database 2 (drone hovering): the drone hovers above the marker, and the marker is manually moved and rotated while the videos are captured.

- Morning: humidity 41.6 %, wind speed 2.5 m/s, temperature 11 °C, spring, foggy.

- Afternoon: humidity 43.5 %, wind speed 2.8 m/s, temperature 13 °C, spring, sunny.

- Evening: humidity 42.9 %, wind speed 2.9 m/s, temperature 10 °C, spring, sunny.

- Night: humidity 41.5 %, wind speed 3.1 m/s, temperature 6 °C, spring, dark night.

 

 

(2) DDroneC-DB1 Request

To gain access to the database, download the following DDroneC-DB1 request form. Please scan the completed request form and email it to Mr. Phong Ha Nguyen (stormwindvn@dongguk.edu).

Any work that uses or incorporates the database must acknowledge the authors by including the following reference.

 

Phong Ha Nguyen, Ki Wan Kim, Young Won Lee, and Kang Ryoung Park, "Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor," Sensors, Vol. 17, Issue 9(1987), pp. 1-38, August 2017.

 

===========================================================================================================================================================================================================

 

< DDroneC-DB1 Request Form >

 

Please complete the following form to request access to the DDroneC-DB1 (all fields must be completed). This database must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

 

12. ISPR Database (real and presentation attack finger-vein images) & Algorithm Including CNN Model

 

(1) Introduction

The ISPR database consists of 3300 real and 7560 presentation attack finger-vein images. The real finger-vein database was collected by capturing finger-vein images from 33 people, both male and female. For each person, all 10 fingers were used, and 10 trials were captured for each finger. Consequently, the real finger-vein database contains 3300 (33 people × 10 fingers × 10 trials) real finger-vein images for our experiments. Among these 3300 real finger-vein images, we selected 56 fingers with clear vein patterns for making the presentation attack finger-vein images.

There are two reasons for this selection scheme. First, users normally enroll a finger with a clear vein pattern in a finger-vein recognition system in order to guarantee the security level of the biometric feature. Therefore, if an attacker steals a finger-vein pattern image from a user, it will normally be a clear finger-vein image. Second, for fingers that normally have poor vein patterns, such as thumbs or little fingers, it is very hard for attackers to produce a clear presentation attack finger-vein image. Instead, such presentation attack images tend to contain poor vein patterns and considerable noise caused by the reproduction process. As a result, the attack is likely to be rejected by the finger-vein recognition system.

The presentation attack finger-vein image database was collected by re-capturing printed versions of the 56 selected real finger-vein images on three different printing materials: A4 paper, MAT paper, and OHP film. In addition, we used three printing resolutions: low (300 dpi), middle (1200 dpi), and high (2400 dpi). Using this scheme, we could collect presentation attack finger-vein images covering various combinations of printing material and printing resolution. Finally, in order to simulate the attacking process, we captured presentation attack finger-vein images at three z-distances (the distance between the camera and the finger-vein sample) by slightly changing the z-distance during image acquisition, with 5 trials for each z-distance.

As a result, a presentation attack finger-vein image database of 7560 images (56 real images × 3 printing materials × 3 printing resolutions × 3 z-distances × 5 trials) was collected.
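
The two totals follow directly from the capture protocol, as this two-line Python check confirms:

real   = 33 * 10 * 10        # 33 people x 10 fingers x 10 trials
attack = 56 * 3 * 3 * 3 * 5  # 56 images x 3 materials x 3 resolutions x 3 z-distances x 5 trials
print(real, attack)          # -> 3300 7560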

 

Table 1. Description of the ISPR presentation attack finger-vein image database

Image making protocol                          | Real access (train set / test set / total) | Printed access (train set / test set / total)

Material:
- Printed on A4 paper (ISPR-DB1)               | 1700 / 1600 / 3300 | 1440 / 1080 / 2520
- Printed on MAT paper (ISPR-DB2)              | 1700 / 1600 / 3300 | 1440 / 1080 / 2520
- Printed on OHP film (ISPR-DB3)               | 1700 / 1600 / 3300 | 1440 / 1080 / 2520

Printer resolution:
- Printed using 300 dpi resolution printer (ISPR-DB4)   | 1700 / 1600 / 3300 | 1440 / 1080 / 2520
- Printed using 1200 dpi resolution printer (ISPR-DB5)  | 1700 / 1600 / 3300 | 1440 / 1080 / 2520
- Printed using 2400 dpi resolution printer (ISPR-DB6)  | 1700 / 1600 / 3300 | 1440 / 1080 / 2520

Entire database (ISPR-DB)                      | 1700 / 1600 / 3300 | 4320 / 3120 / 7560

 

In addition, we have made our algorithm, including the trained CNN model, publicly available.
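
As an illustration only, the following minimal Python sketch shows how a trained presentation attack detection CNN of this kind could classify a single finger-vein image as real or attack. The file names, input size, and two-class output convention are assumptions for the sketch, not the released model's actual interface.

import cv2
import numpy as np
import torch

# Hypothetical file name and class order; the released package defines its own.
model = torch.load("ispr_pad_cnn.pth", map_location="cpu")
model.eval()

img = cv2.imread("finger_vein.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
inp = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
inp = torch.from_numpy(inp)[None, None]                    # 1 x 1 x 224 x 224

with torch.no_grad():
    p = torch.softmax(model(inp), dim=1)[0]                # assumed [p(real), p(attack)]

print("presentation attack" if p[1] > p[0] else "real access")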

 

(2) ISPR database & algorithm including the trained CNN model Request

To gain access to the database and algorithm, download the following ISPR database and algorithm request form. Please scan the completed request form and email it to Prof. Dat Tien Nguyen (nguyentiendat@dongguk.edu).

Any work that uses or incorporates the database and algorithm must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Hyo Sik Yoon, Tuyen Danh Pham, and Kang Ryoung Park, "Spoof Detection for Finger-Vein Recognition System Using NIR Camera," Sensors, Vol. 17, Issue 10(2261), pp. 1-33, October 2017.

 

===========================================================================================================================================================================================================

 

< ISPR Database & Algorithm Request Form >

 

Please complete the following form to request access to the ISPR database and algorithm (all fields must be completed). This database and algorithm must not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

===========================================================================================================================================================================================================

 

 

 

11. Dongguk Visible Light & FIR Pedestrian Detection Database (DVLFPD-DB1) & CNN Model

 

(1) Introduction

 

There are 4 sub-databases, and the database contains 4080 frames of visible light images and 4080 frames of FIR images. To obtain the images, this study used a dual camera system consisting of a FLIR Tau640 FIR camera (19 mm lens) and a Logitech C600 visible light web-camera. A WH-1091 wireless weather station was used to record the filming conditions.
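
As an illustration only, the following minimal Python sketch shows one way a user might iterate over paired visible-light and FIR frames after obtaining the database; the directory layout and file names are assumptions for the sketch, not the database's released structure.

from pathlib import Path

import cv2

vis_dir = Path("sub_database1/visible")  # hypothetical local layout
fir_dir = Path("sub_database1/fir")

for vis_path in sorted(vis_dir.glob("*.png")):
    fir_path = fir_dir / vis_path.name   # same file name pairs the two modalities
    vis = cv2.imread(str(vis_path))
    fir = cv2.imread(str(fir_path), cv2.IMREAD_GRAYSCALE)
    if vis is None or fir is None:
        continue                         # skip frames without a matching pair
    # pedestrian candidate detection would fuse vis and fir here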

 

Table 1. Description of Database

                                     | Sub-database 1 | Sub-database 2 | Sub-database 3 | Sub-database 4
Number of images                     | 598            | 651            | 2364           | 467
Number of pedestrian candidates      | 1123           | 566            | 2432           | 169
Number of non-pedestrian candidates  | 763            | 734            | 784            | 347
(range of width) × (range of height) (pixels):
- Pedestrian                         | (27 ~ 91) × (87 ~ 231) | (47 ~ 85) × (85 ~ 163) | (31 ~ 105) × (79 ~ 245) | (30 ~ 40) × (90 ~ 120)
- Non-pedestrian                     | (51 ~ 161) × (63 ~ 142)