< Dongguk Open Databases & CNN Models >
106. Dongguk RMOBF-Net and CNN with motion and optical
blurred finger-vein image database
105. Dongguk CCS-Net and MFRA-Net
104. Dongguk ESS-Net and FBSS-Net
103. MASS-Net for human blastocyst component detection
102. Multinational fake banknote detection model and
algorithm in cross-dataset environment
101. VSUL-Net models and Algorithms
100. SSS-Net model
99. CAM-CAN model
98. IMFC-Net for Shoulder Prostheses Recognition
97. DSF-Net and DSA-Net Models
96. Proposed CNN Models for
Breast Tumor Segmentation with Algorithms
95. OSRCycleGAN with Algorithms
94. Dongguk LDS-Net and LDAS-Net for WBCs accurate segmentation
92. Attention-guided GAN for synthesizing infrared
image (SI-AGAN) and syn-IR datasets
91. INF-GAN with algorithm and nonuniform finger-vein
images
90. Image Prediction Generative Adversarial Network v2
(IPGAN-2)
89. Generative Adversarial Network for Low-light Age
Estimation (LAE-GAN) and CNN for age estimation
88. DMDF-Net:
Dual Multiscale Dilated Fusion Network for Accurate Segmentation of Lesions
Related to COVID-19 in Lung Radiographic Scans
87. Dongguk
face and body database version 3 (DFB-DB3), modified EnlightenGAN, and deep
CNNs for human recognition
86. Image Prediction Generative Adversarial Network
(IPGAN)
85. Dongguk DAL-Net model for segmentation-based
recognition of COVID-19 lesions in chest CT scans
84. Dongguk modified DeblurGAN and CNN for recognition
of blurred finger-vein image with motion blurred image database
83. Dongguk DRE-Net model for shoulder implants
classification
82. SSF-Net and TSF-Net models for pigment sign
detection
81. Dongguk SLS-Net and SLSR-Net
80. Grouped Dilated Convolution Module (GDCM)-based
Semantic Segmentation Network with Algorithm
79. Dongguk MDA-BN Model for Effective Diagnosis of
COVID-19 Infection
78. PLS-Net
and PLRS-Net models
77. Dongguk Joint-GAN and CNN-LSTM for
action recognition
76. AS-RIG (Adaptive Selection to Reconstructed Input
Data using a Generator) with algorithm for Person Re-Identification
75. Synthesized Low Light CamVid and KITTI database
(Syn-CamVid and Syn-KITTI) and Algorithms Including CNN Models
74. Dongguk DeblurGAN and CNN for Iris Recognition
73. Dongguk Light-weighted Ensemble Network for Robust
Diagnosis of COVID-19 Pneumonia
71. Dongguk Nuclei-Net
Model (R-NSN) with Algorithms
70. Dongguk
Pathological Site Classification Models with Algorithm
69.
Dongguk single model both for thermal image super-resolution reconstruction and
deblurring, and detection model of object and thermal reflection
68.
Dongguk enhanced CycleGAN for age estimation and generated images
67. Dongguk Korean Banknote Database Version1
(DKB v1) with Faster R-CNN model and post processing algorithms
66. Dongguk Face and Body Database Version2 (DFB-DB2) with GAN model,
CNN models, and algorithms
65. Dongguk Computer-Aided
Framework to Diagnose Tuberculosis from Chest X-Ray Images
64. Dongguk blurred gaze database (DBGD) and
CycleGAN model
63.
Dongguk Models for Thermal Image Super-resolution Reconstruction and Deblurring
62. Dongguk RPS-Net based
retinal pigment sign detection model (DRPM) with Algorithms
61. Dongguk
DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms
60. CNN model for Thermal
Reflection Removal
59. Synthesized Low Light
Cambridge-driving Labeled Video Database (Syn-CamVid), Synthesized Low Light
Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago
(Syn-KITTI) database, and Algorithm Including CNN Models
58. Dongguk Drone Motion Blur
Dataset - Versions 1 and 2 (DDBD-DB1 and DDBD-DB2) & Pretrained Models
57. Dongguk X-RayNet Model with Algorithms (DXM)
56. Dongguk Mitotic Cell
Detection Models (DMM)
55. Dongguk CNN Models for Fake Banknote Image Classification Using
Visible-Light Images Captured by Smartphone Camera
54. Dongguk mobile finger
wrinkle database versions 1 and 2 (DMFW-DB1 and DMFW-DB2), and GAN with CNN
models for motion deblurring
53. Enhanced Ultrasound Thyroid Nodule Classification (US-TNC-V2)
Algorithm
52. Dongguk generation model of presentation attack face images
(DG_FACE_PAD_GEN)
51. Dongguk Spatiotemporal
Features-Based Classification Network (DenseNet+LSTM) to Classify the Multiple
Gastrointestinal Diseases with Including the Video Indices of Experimental
Endoscopy Videos
50. Dongguk Modified Conditional GAN &
Deep CNN Models, and Generated Images
49. Dongguk Super-resolution
Reconstruction & Age Estimation CNN Model (DSR&AE-CNN)
48. Dongguk ESSN models and
algorithm for Semantic Segmentation
47. Dongguk Mask R-CNN Model for Elimination of
Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB),
and Items and Vehicles Database (DI&V-DB)
46. Dongguk
Ultrasound Thyroid Nodule Classification (DUS-TNC) algorithm
45. Dongguk Modified CycleGAN for Age
Estimation (DMC4AE) and Generated Images
44. Dongguk Vess-Net Models with
Algorithm
43. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with
Algorithms
42. Dongguk CNN stacked LSTM and CycleGAN for Action
Recognition, Generated Data, and Dongguk Activities & Actions Database
(DA&A-DB2)
41. Label Information of Sun Yat-sen University Multiple
Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models
(DGR-CNN).
40. Dongguk cGAN-based Iris Image Generation Model and
Generated Images (DGIM&GI)
106. Dongguk RMOBF-Net and CNN with motion
and optical blurred finger-vein image database
We provide RMOBF-Net and CNN with the motion and optical blurred finger-vein image database via the following site:
https://github.com/dongguk-dm/RMOBF-Net
105. Dongguk CCS-Net and MFRA-Net
(1) Introduction
We developed computer-assisted methods to aid diagnostic and surgical procedures for colorectal cancer. In this study, two effective networks (CCS-Net and MFRA-Net) are developed to perform accurate segmentation with fewer trainable parameters.
(2) Request for Our Models with Algorithm
To gain access to our trained models and algorithm, please scan the request form shown in the description below and email it to Mr. Adnan Haider (adnanhaider@dgu.ac.kr). Any work that uses our data must acknowledge the authors by including the following reference.
Adnan Haider, Muhammad Arsalan, Se Hyun Nam, Haseeb Sultan, and Kang Ryoung Park, "Multi-scale feature retention and aggregation for colorectal cancer diagnosis using gastrointestinal images," Future Generation Computer Systems, in submission.
< Request Form for Models with Algorithm >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
104. Dongguk ESS-Net and FBSS-Net
(1) Introduction
We propose an engineering-assisted optic cup and optic disc segmentation method for glaucoma detection in retinal fundus images. Optic cup and optic disc segmentation plays a vital role in the computer-aided diagnosis of glaucoma. We employed internal and external feature blending to improve segmentation performance. We developed two networks (ESS-Net and FBSS-Net) capable of achieving state-of-the-art segmentation performance using a small number of trainable parameters.
(2) Request for Our Models with Algorithm
To gain access to our trained models and algorithm, please scan the request form shown in the description below and email it to Mr. Adnan Haider (adnanhaider@dgu.ac.kr). Any work that uses our data must acknowledge the authors by including the following reference.
Adnan Haider, Muhammad Arsalan, Chanhum Park, Haseeb Sultan, and Kang Ryoung Park, "Exploring Deep Feature-blending Capabilities to Assist Glaucoma Screening," Applied Soft Computing, in submission.
< Request Form for Models with Algorithm >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
103. MASS-Net for human
blastocyst component detection
(1) Introduction
We propose a novel multiscale aggregation
semantic segmentation network (MASS-Net) that combines four different scales
via depth-wise concatenation. The extensive use of depth-wise separable
convolutions resulted in a decrease in the number of trainable parameters.
Further, the innovative multiscale design provided rich spatial information of
different resolutions, thereby achieving good segmentation performance without
a very deep architecture. MASS-Net utilized 2.3 million trainable parameters
and accurately detects TE, ZP, ICM, and BL without using preprocessing stages.
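As a rough illustration of why depth-wise separable convolutions reduce trainable parameters, the following sketch compares the weight counts of a standard convolution and its depth-wise separable counterpart. The kernel and channel sizes are hypothetical examples, not the actual MASS-Net configuration.

```python
# Illustrative parameter counts (bias terms omitted); the layer sizes
# below are hypothetical, not taken from MASS-Net.

def standard_conv_params(k, c_in, c_out):
    # A k x k convolution mapping c_in channels to c_out channels.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depth-wise k x k filters (one per input channel) followed by
    # a 1 x 1 point-wise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

standard = standard_conv_params(3, 64, 128)    # 73728 weights
separable = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 weights
print(standard, separable)  # the separable layer needs roughly 12% of the weights
```

The same substitution applied throughout a network is what keeps the overall trainable-parameter count low.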
(2) Request for Our
Models with Algorithms
To obtain our pretrained models with algorithms, please fill the request form below and send an email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad Arsalan, Adnan Haider, Se Woon Cho,
Yu Hwan Kim, and Kang Ryoung Park, “Human Blastocyst Components Detection Using Multiscale Aggregation
Semantic Segmentation Network For Embryonic Analysis”, Biomedicines, in submission.
< Request Form
for Proposed CNN Models and Algorithms >
Please complete the following form to request access to our trained models with algorithms. These models with algorithms must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
102. Multinational fake banknote detection
model and algorithm in cross-dataset environment
(1) Introduction
We propose a multinational fake banknote detection model and algorithm for a cross-dataset environment. The proposed model and algorithm are tested on genuine and fake banknotes of four national currencies: the Euro (EUR), Korean won (KRW), US dollar (USD), and Jordanian dinar (JOD). There are five denominations of EUR (EUR 5, EUR 10, EUR 20, EUR 50, and EUR 100), four of KRW (KRW 1,000, KRW 5,000, KRW 10,000, and KRW 50,000), six of USD (USD 1, USD 5, USD 10, USD 20, USD 50, and USD 100), and four of JOD (JOD 1, JOD 5, JOD 10, and JOD 20).
(2) Request for Our
Model with Algorithm
To obtain our pretrained model with algorithm, please fill the request form below and send an email to Dr. Tuyen Danh Pham at phamdanhtuyen@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Tuyen Danh Pham, Young Won Lee, Chanhum Park, and Kang Ryoung Park, “Deep Learning-Based Detection of Fake Multinational Bank-notes in a Cross-Dataset Environment by Utilizing Smartphone Cameras for Assisting Visually Impaired Individuals,” Mathematics, in submission.
< Request Form
for Proposed CNN Model and Algorithm >
Please complete the following form to request access to our trained model with algorithm. This model with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
101. VSUL-Net Models and Algorithms
(1) Introduction
We propose a vessel segmentation ultra-lite
network (VSUL-Net) to accurately extract the retinal vasculature from the
background. The proposed VSUL-Net comprises only 0.37 million trainable
parameters and uses a raw image as input without preprocessing. The proposed
method is tested on three publicly available datasets: digital retinal images
for vessel extraction (DRIVE), structured analysis of retina (STARE), and
children’s heart health study in England
database (CHASE-DB1) for retinal vasculature segmentation. The experimental
results demonstrated that VSUL-Net provides robust segmentation of retinal
vasculature.
(2) Request for Our
Models with Algorithms
To obtain our pretrained models with algorithms, please fill the request form below and send an email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad Arsalan, Adnan Haider, Ja Hyung Koo, and Kang Ryoung Park, “Detecting Retinal Vasculature Using a Low-Cost Artificially Intelligent Segmentation Network for Ophthalmic Diagnosis,” Mathematics, in submission.
< Request Form
for Proposed CNN Models and Algorithms >
Please complete the following form to request access to our trained models with algorithms. These models with algorithms must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
100. SSS-Net model
(1) Introduction
We propose a shallow low-cost deep-learning
architecture: sprint semantic segmentation network (SSS-Net) to accurately
detect human blastocyst components in microscopic images for embryological
analysis. The SSS-Net utilizes the sprint convolutional block which uses
asymmetric convolutions in combination with separable convolutions to reduce
the number of trainable parameters. The feature aggregation by concatenation
helps to increase the segmentation performance of the network. The proposed SSS-Net consumes just 4.04 million trainable parameters while achieving competitive segmentation performance. SSS-Net provides accurate segmentation of zona pellucida (ZP), trophectoderm (TE), inner cell mass (ICM), and blastocoel (BL) for morphological analysis to increase the success rate of in vitro fertilization. The performance of the proposed method is analyzed using a publicly available blastocyst image dataset; the results show that the proposed method achieves promising segmentation performance compared with existing state-of-the-art methods.
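The parameter saving from the asymmetric convolutions mentioned above can be sketched with simple weight arithmetic. The channel count below is a hypothetical example, not the actual SSS-Net configuration: factoring a k x k kernel into a k x 1 kernel followed by a 1 x k kernel shrinks the per-layer weight count by a factor of 2/k.

```python
# Illustrative weight counts for the asymmetric factorization (bias
# terms omitted); channel sizes are hypothetical, not from SSS-Net.

def square_conv_params(k, c):
    # One k x k convolution with c channels in and out.
    return k * k * c * c

def asymmetric_conv_params(k, c):
    # A k x 1 convolution followed by a 1 x k convolution, c channels each.
    return k * c * c + k * c * c

print(square_conv_params(3, 32))      # 9216
print(asymmetric_conv_params(3, 32))  # 6144, one third fewer weights
```

Combining this factorization with separable convolutions is what the sprint block relies on to stay small.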
(2) Request for Our
Model with Algorithm
To obtain our pretrained model, please fill the request form below and send an email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad Arsalan, Adnan Haider, Jiho Choi, and Kang Ryoung Park, “Detecting Blastocyst Components by Artificial Intelligence for Human Embryological Analysis to Improve Success Rate of In vitro Fertilization,” Journal of Personalized Medicine, Vol. 12(2), 124, pp. 1-14, February 2022.
< Request Form
for Proposed CNN Model >
Please complete the following form to request access to our trained model with algorithm. This model with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
99. CAM-CAN model
(1) Introduction
In the proposed method, a convolutional neural network (CNN) is trained using the class activation map (CAM) to focus on specific areas in the input image. The CAM image is used as the ground-truth image. Furthermore, the concept of the CAM-based categorical adversarial network (CAM-CAN), in which the CNN is trained based on a generative adversarial network, is proposed in this paper. An action recognition
experiment was performed using the self-collected Dongguk thermal image
database (DTh-DB) and open database, and the results revealed that the
accuracies of the existing state-of-the-art methods significantly increased
after applying the proposed method.
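A class activation map of the kind used here as a ground-truth signal can be sketched minimally as follows. This is an illustrative toy computation, not the authors' implementation: the map for a class is the sum of the final convolutional feature maps weighted by that class's classifier weights.

```python
# Toy CAM computation: feature maps and class weights are made-up values
# chosen only to show the weighted-sum structure.

def class_activation_map(feature_maps, class_weights):
    """feature_maps: list of 2-D maps (one per channel);
    class_weights: one classifier weight per channel for the target class."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    return cam

maps = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
weights = [0.5, 0.25]
print(class_activation_map(maps, weights))  # [[0.5, 0.5], [0.5, 0.5]]
```

The resulting map highlights the regions that most influence the class score, which is what lets it serve as a spatial training target.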
(2) Request for Model
To obtain our pretrained model, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Jiho Choi, and Kang Ryoung Park, “CAM-CAN: Class Activation Map-based Categorical
Adversarial Network,” Expert Systems with Applications, in submission.
< Request Form
for Proposed Model >
Please complete the following form to request access to our trained model with algorithm. This model with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
98. IMFC-Net for Shoulder Prostheses
Recognition
(1) Introduction
We proposed IMFC-Net for accurate recognition of shoulder prostheses in X-ray scans. The proposed IMFC-Net is based on the ensemble connectivity of our designed IFC-Net and MFC-Net, followed by JMLP. The proposed IMFC-Net contains fewer parameters than the previous ensemble model for the problem under investigation.
(2) Request for Our
Models with Algorithm
To obtain our pretrained models, please fill the request form below and send an email to Mr. Haseeb at haseebsltn@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Haseeb Sultan, Muhammad Owais, Jiho Choi, Tahir
Mahmood, Adnan Haider, Nadeem Ullah, and Kang Ryoung Park, “Artificial
Intelligence-based Solution in Personalized Computer-aided Arthroscopy of
Shoulder Prostheses.” Journal of Personalized Medicine, Vol. 12(1), 109, pp. 1-18, January
2022.
< Request Form
for Proposed CNN Models >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
97. DSF-Net and DSA-Net Models
(1) Introduction
We propose two new shallow deep-learning
architectures: dual-stream fusion network (DSF-Net) and dual-stream aggregation
network (DSA-Net) to accurately detect retinal vasculature using semantic
segmentation in raw color fundus images for the screening of diabetic and
hypertensive retinopathy. Two-stream fusion and aggregation produce potential
features that facilitate the fine segmentation of the vessels without expensive
conventional preprocessing. The proposed DSF-Net and DSA-Net consume just 1.5 million trainable parameters while achieving competitive segmentation performance. The performance of the proposed method is analyzed using three publicly available datasets (DRIVE, STARE, and CHASE-DB); the results show that the proposed method achieves promising segmentation performance compared with existing state-of-the-art methods.
(2) Request for Our
Models with Algorithm
To obtain our pretrained models, please fill the request form below and send an email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad Arsalan, Adnan Haider, Jiho Choi, and Kang Ryoung Park, “Diabetic and Hypertensive Retinopathy Screening in Fundus Images Using Artificially Intelligent Shallow Architectures,” Journal of Personalized Medicine, Vol. 12(1), 7, pp. 1-17, January 2022.
< Request Form
for Proposed CNN Models >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
96. Proposed CNN Models for Breast Tumor
Segmentation with Algorithms
(1) Introduction
We propose ultrasound image-based BTEC-Net and RFS-UNet for the segmentation of breast tumors.
(2) Request for Our
Models with Algorithm
To gain access to our trained models and algorithm, please fill the request form with signature below and send an email to Mr. Se Woon Cho (jsu319@dgu.edu).
< Request Form
for Proposed CNN Models for Breast Tumor Segmentation with Algorithms >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
95. OSRCycleGAN with Algorithms
(1) Introduction
A super-resolution reconstruction
method for ocular recognition is newly proposed.
(2) Request for Our
Models with Algorithm
To gain access to our trained models and algorithm, please fill the request form with signature below and send an email to Mr. Young Won Lee (lyw941021@dgu.ac.kr).
< Request Form
for Our Models with Algorithm >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
94. Dongguk LDS-Net and LDAS-Net for WBCs
accurate segmentation
(1) Introduction
We propose two novel
morphology-aware segmentation networks (LDS-Net and LDAS-Net) for
computer-aided diagnosis using microscopic images. The traditional methods used for diagnosis are expensive, subjective, time-consuming, and require specialized equipment. To address these issues, deep-learning-based microscopic image analysis is proposed for automated diagnosis. Experimental results show that we achieved state-of-the-art segmentation performance. The proposed methods can assist medical experts in diagnosis and prognosis, thereby reducing the burden on the health system.
(2) Request for Our
Models with Algorithm
To gain access to our trained models and algorithm, please fill in the request form with your signature below and send an email to Mr. Adnan Haider (adnanhaider@dgu.ac.kr). Any work that uses our data must acknowledge the authors by including the following reference.
Adnan Haider, Muhammad Arsalan, Young Won Lee, and Kang Ryoung Park, "Deep Features Aggregation-Based Joint Segmentation of Cytoplasm and Nuclei in White Blood Cells," IEEE Journal of Biomedical and Health Informatics, 2022, in press.
< Request Form
for LDS-Net and LDAS-Net Models >
Please complete the following form to request access to our trained models with algorithm. These models with algorithm must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
92. Attention-guided GAN for synthesizing
infrared image (SI-AGAN) and syn-IR datasets
(1) Introduction
We propose an attention-guided generative adversarial network for synthesizing infrared images (SI-AGAN). SI-AGAN performs style transfer from visible-light images to infrared (IR) images. By introducing depthwise separable convolution, we significantly reduce the computational cost. Using SI-AGAN, we generate the syn-RegDB database and the syn-SYSU-MM01 database (from an open database). For a fair performance assessment by other researchers, we have made SI-AGAN with its algorithm and the syn-IR datasets publicly available in this study.
(2) Request for Models
To obtain SI-AGAN with algorithm and syn-IR databases, please fill the request form below and send an email to Ms. Na Rae Baek (naris27@dgu.ac.kr). Any work that uses the provided pretrained network or images must acknowledge the authors by including the following reference.
Na Rae Baek, Se Woon Cho, Ja Hyung Koo and Kang Ryoung Park, “Pedestrian Gender
Recognition by Style Transfer of Visible-Light Image to Infrared-Light Image
Based on an Attention-Guided Generative Adversarial Network,” Mathematics,
9(20), 2535, pp. 1-32, October 2021.
< Request Form
for SI-AGAN with algorithm and syn-IR datasets >
Please complete the following form to request access to our trained model and images. The model and images must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
91. INF-GAN with algorithm and nonuniform
finger-vein images
(1) Introduction
We propose a generative adversarial
network for the illumination normalization of finger-vein images (INF-GAN). In
the INF-GAN, a one-channel image containing texture information is generated
through a residual image generation block, and finger-vein texture information
deformed by the severe nonuniformity of illumination is restored, thus
improving the recognition performance. Also, we generate images containing nonuniform illumination from the images of the
Hong Kong Polytechnic University finger image database version 1 (HKPU-DB) and
the Shandong University homologous multimodal traits finger-vein database
(SDUMLA-HMT-DB). For a fair performance assessment by other researchers, we have made the nonuniform finger-vein images, INF-GAN, and the algorithm proposed in this study publicly available.
(2) Request for Models
To obtain INF-GAN with algorithm and nonuniform finger-vein images, please fill the request form below and send an email to Mr. Jin Seong Hong (turtle1990@dgu.ac.kr). Any work that uses the provided pretrained network or images must acknowledge the authors by including the following reference.
Jin Seong Hong, Jiho Choi, Seung Gu Kim, Muhammad Owais, and Kang
Ryoung Park, “INF-GAN: Generative Adversarial Network for
Illumination Normalization of Finger-Vein Images,”
Mathematics, 9(20), 2613, pp.
1-32, October 2021.
< Request Form
for Models and Images >
Please complete the following form to request access to our trained model and images. The model and images must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
90. Image Prediction Generative Adversarial
Network v2 (IPGAN-2)
(1) Introduction
We trained the GAN model using the marathon sub-dataset of the Boston University-thermal infrared video (BU-TIV) benchmark open dataset for the purpose of the image prediction. In the proposed image prediction generative adversarial network (IPGAN-2) method, thermal and binary sequential images are used as inputs to the IPGAN-2 model. The proposed IPGAN-2 method performs image-to-image translation. Compared to our previous study (IPGAN), IPGAN-2 predicts and generates left and right regions of a current image to make the current image wider. We made the model (image prediction generative adversarial network version 2 (IPGAN-2)) open to other researchers.
(2) Request for Models
To obtain our pretrained model, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Na Rae Baek, and Kang Ryoung Park, “Enlargement of the Field of View Based on Image Region Prediction Using Thermal Videos,” Mathematics, 9(19), 2379, pp. 1-29, September 2021.
< Request Form
for Models >
Please complete the following form to request access to our trained model. This model must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
89. Generative Adversarial Network for Low-light
Age Estimation (LAE-GAN) and CNN for age estimation
(1) Introduction
We propose a low-illumination facial image enhancement system based on a generative adversarial network for low-light age estimation (LAE-GAN), together with CNN models for age estimation. These systems are designed to overcome the performance degradation caused by low-illumination environments. Three open databases, MORPH [1], FG-Net [2], and AFAD [3], are used for the experiments. The LAE-GAN and age estimation models are made available to other researchers for fair assessment.
[1] MORPH database.
Available online:
https://ebill.uncw.edu/C20231_ustores/web/store_main.jsp?STOREID=4 (accessed on
17 May 2021)
[2] FGNET database.
Available online: https://yanweifu.github.io/FG_NET_data/index.html (accessed
on 17 May 2021).
[3] AFAD database.
Available online: https://afad-dataset.github.io (accessed on 17 May 2021).
(2) Request for Our
Model
To gain access to the LAE-GAN model, age estimation CNN model, and algorithms, please download the following request form and fill it in with your signature. Then, scan the request form and email it to Mr. Se Hyun Nam (nsh6473@dongguk.edu).
Any work that uses this LAE-GAN model, age estimation CNN model, and
algorithms must acknowledge the authors by including the following reference.
Se Hyun Nam, Yu Hwan Kim, Jiho Choi, Seung Baek Hong, Muhammad Owais, and Kang Ryoung Park, “LAE-GAN-based Face Image Restoration for Low-light Age Estimation,” Mathematics, 9(18), 2329, pp. 1-28, September 2021.
< Request Form
for LAE-GAN model, age estimation CNN model,
and algorithms >
Please complete the following form to request access to our LAE-GAN, age estimation CNN model, and algorithms. The proposed models and algorithms must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
88. DMDF-Net: Dual Multiscale Dilated Fusion
Network for Accurate Segmentation of Lesions Related to COVID-19 in Lung
Radiographic Scans
(1) Introduction
We proposed a dual multiscale dilated fusion network (DMDF-Net) that includes additional pre- and post-processing steps to address generality issues and perform effective diagnosis of COVID-19 lesions in lung CT scans. In the post-processing step, post-region-of-interest (ROI) fusion is performed to reduce false-positive pixels. Additionally, the post-ROI fusion provides a way to precisely measure the diseased area of the lung. Experimental results show the superior performance of the proposed framework over various state-of-the-art methods.
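The post-ROI fusion idea can be sketched in miniature as follows. This is an assumed, simplified rendering, not the authors' code: lesion predictions falling outside the lung region are discarded, and the surviving lesion pixels are expressed as a fraction of the lung area.

```python
# Toy post-ROI fusion on binary masks (1 = positive pixel); the masks
# below are made-up examples, not real CT data.

def fuse_and_measure(lesion_mask, lung_mask):
    """Keep only lesion pixels inside the lung ROI (suppressing false
    positives outside the lung), then return the diseased lung fraction."""
    lesion_in_lung = 0
    lung_pixels = 0
    for lesion_row, lung_row in zip(lesion_mask, lung_mask):
        for lesion, lung in zip(lesion_row, lung_row):
            lung_pixels += lung
            lesion_in_lung += lesion and lung
    return lesion_in_lung / lung_pixels

lesion = [[1, 1, 0], [0, 1, 0]]
lung   = [[0, 1, 1], [1, 1, 1]]
print(fuse_and_measure(lesion, lung))  # 2 lesion pixels in 5 lung pixels -> 0.4
```

Note how the lesion pixel outside the lung mask contributes nothing, which is the false-positive suppression described above.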
(2) Request for Our
Model
To obtain our proposed framework (including implementation of
DMDF-Net, pre-, and post-processing steps), please fill the request form below and
send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that
uses the provided pretrained network must acknowledge the authors by including
the following reference.
Muhammad
Owais, Na Rae Baek, and Kang Ryoung Park, “DMDF-Net: Dual Multiscale Dilated Fusion
Network for Accurate Segmentation of Lesions Related to COVID-19 in Lung
Radiographic Scans,” Expert Systems with Applications,
in submission.
< Request Form for DMDF-Net
Model >
Please complete the following form to request access to our proposed framework (including the implementation of DMDF-Net and the pre- and post-processing steps). The proposed model must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
87. Dongguk face and body database version 3
(DFB-DB3), modified EnlightenGAN, and CNN models for face & body
recognition
(1) Introduction
We proposed a low-illumination enhancement system with modified Enlighten Generative Adversarial Networks (modified EnlightenGAN), and CNN models for face & body recognition using VGG face net-16 and ResNet-50.
These systems are designed to overcome the performance degradation caused by low-illumination environments. We changed the number of patches and the patch sizes of the discriminator, and the optimal parameter of the perceptual loss, in the modified EnlightenGAN. Two databases, the Dongguk Face and Body database (DFB-DB3) and the ChokePoint dataset [1], are used for the experiments.
In addition, we make our GAN model, the two CNN models trained on DFB-DB3 and the open ChokePoint database [1], and our algorithms publicly available.
1. ChokePoint Dataset. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 20 February 2021).
(2) Request for Models and Database
To gain access to DFB-DB3 with GAN model, CNN model and algorithms, download the following request form. Please scan the request form and email to Mr. Ja Hyung Koo (koo6190@naver.com).
Any work that uses this DFB-DB3 with GAN model, CNN model and algorithms must acknowledge the authors by including the following reference.
Ja Hyung Koo, Se Woon Cho, Na Rae Baek, and Kang Ryoung Park, “Multimodal Human Recognition in Significantly Low Illumination Environment Using Modified EnlightenGAN,” Mathematics, 9(16), 1934, pp. 1-43, August 2021.
< Request Form
for Models >
Please complete the following form to request access to our trained models and database. These models and the database must not be used for commercial purposes.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
86. Image Prediction Generative Adversarial Network (IPGAN)
(1) Introduction
We trained a GAN model for image prediction using the marathon sub-dataset of the Boston University thermal infrared video (BU-TIV) open benchmark dataset. In the proposed image prediction generative adversarial network (IPGAN) method, converted three-channel thermal images are used as inputs to the IPGAN model, which performs image-to-image translation. We made the IPGAN model open to other researchers.
(2) Request for Models
To obtain our pretrained models, please fill out the request form below and send it by email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Ja Hyung Koo, Yu Hwan Kim, and Kang Ryoung Park, “Image Region Prediction from Thermal Videos Based on Image Prediction Generative Adversarial Network”, Mathematics, 9(9), 1053, pp. 1-20, May 2021.
< Request Form
for Models >
Please complete the following form to request access to our trained models. These models may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
85. Dongguk DAL-Net model for
segmentation-based recognition of COVID-19 lesions in chest CT scans
(1) Introduction
We proposed a domain-adaptive lightweight network (DAL-Net) for the effective and timely recognition of minimal COVID-19 lesions in chest CT scans. Our DAL-Net model is designed to overcome the performance degradation caused by multi-source datasets. Two open databases, COVID-19-CT-Seg [1,2] and MosMed [3], are used in the experiments. Experimental results show the superior performance of the proposed network over various state-of-the-art methods.
[1] J. Ma et al., “Towards data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation,” Med. Phys., vol. 48, no. 3, pp. 1197–1210, 2021.
[2] M. Jun et al., “COVID-19 CT lung and infection segmentation dataset,” Zenodo. Available online: http://doi.org/10.5281/zenodo.3757476 (accessed on 01 January 2021).
[3] S. P. Morozov et al., “MosMedData: Chest CT scans with COVID-19 related findings dataset,” 2020, arXiv:2005.06465.
(2) Request for Our
Model
To obtain our pretrained DAL-Net model, please fill out the request form below and send it by email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad Owais, Na Rae Baek, and Kang Ryoung Park, “Domain-Adaptive Artificial Intelligence-based Model for Personalized Diagnosis of Trivial Lesions Related to COVID-19 in Chest Computed Tomography Scans,” Journal of Personalized Medicine, Vol. 11(10), 1008, pp. 1-22, October 2021.
< Request Form
for DAL-Net Model >
Please complete the following form to request access to our trained model. This model may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
84. Dongguk modified DeblurGAN
and CNN for recognition of blurred finger-vein image with motion blurred image
database
(1) Introduction
We proposed a motion-blurred finger-vein restoration system based on modified Blind Motion Deblurring Using Conditional Adversarial Networks (modified DeblurGAN), and a finger-vein recognition system using DenseNet-161. These systems are designed to overcome the performance degradation caused by motion blur. Two open databases, SDUMLA-HMT-DB [1] and HKPolyU-DB [2], are used in the experiments. The finger-vein restoration and recognition models, together with the motion-blurred image database, are open to other researchers for fair comparison.
[1] Y. Yin, L. Liu, and X. Sun, "SDUMLA-HMT: A multimodal biometric database", in Proc. 6th Chin. Conf. Biometric Recognit., Beijing, China, Dec. 2011, pp. 260-268.
[2] A. Kumar and Y. Zhou, ‘‘Human identification using finger images,’’ IEEE Trans. Image Process., vol. 21, no. 4, pp.
2228–2244,
Apr. 2012.
(2) Request for Our
Models with Algorithm and Databases
To obtain our pretrained models with the algorithm and databases, please fill out the request form below and send it by email to Mr. Jiho Choi at choijh1027@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Jiho Choi, Jin Seong Hong, Muhammad Owais, Seung Gu Kim, and Kang Ryoung Park, “Restoration of Motion Blurred Image by Modified DeblurGAN for Enhancing the Accuracies of Finger-vein Recognition,” Sensors, Vol. 21, Issue 14(4635), pp. 1-33, July 2021.
< Request Form
for Models with Algorithm and Databases >
Please complete the following form to request access to our trained models with the algorithm and databases. These may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
83. Dongguk DRE-Net model for
shoulder implants classification
(1) Introduction
We proposed DRE-Net by implementing two spatial feature extraction networks, using a densely connected convolutional network and a residual neural network, together with an SCN for robust classification of different types of shoulder implants. We also proposed a rotation-invariant augmentation technique, used in DRE-Net to achieve state-of-the-art classification performance.
(2) Request for Our
Models with Algorithm
To obtain our pretrained models, please fill out the request form below and send it by email to Mr. Haseeb at haseebsltn@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Haseeb Sultan, Muhammad Owais, Chanhum Park, Tahir Mahmood, Adnan Haider, and Kang Ryoung Park, “Artificial Intelligence-based Recognition of Different Types of Shoulder Implants in X-Ray Scans Based on Dense Residual Ensemble-Network for Personalized Medicine,” Journal of Personalized Medicine, Vol. 11(6), 482, pp. 1-28, May 2021.
< Request Form
for Models with Algorithm >
Please complete the following form to request access to our trained models with the algorithm. These may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
82. SSF-Net and TSF-Net models
for pigment sign detection
(1) Introduction
We propose SSF-Net and TSF-Net for the computer-aided diagnosis of retinitis pigmentosa.
(2) Request for Our Models
with Algorithm
To obtain our pretrained models, please fill out the request form below and send it by email to Prof. Arsalan at arsal@dongguk.edu.
< Request Form
for Models with Algorithm >
Please complete the following form to request access to our trained models with the algorithm. These may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
81. Dongguk SLS-Net and SLSR-Net
(1) Introduction
We propose an artificial intelligence-based optic cup and optic disc segmentation method for glaucoma detection in retinal fundus images. Optic cup and optic disc segmentation plays a vital role in the computer-aided diagnosis of glaucoma. We employed separable convolution links and residual skip connections in our architecture. We developed two networks (SLS-Net and SLSR-Net) capable of achieving state-of-the-art segmentation performance with a small number of trainable parameters.
(2) Request for Our
Models with Algorithm
To gain access to our trained models and algorithm, please complete the request form below, then scan it and email it to Mr. Adnan Haider (adnanhaider@dgu.ac.kr). Any work that uses our data must acknowledge the authors by including the following reference.
Adnan Haider, Muhammad Arsalan, Min Beom Lee, Muhammad Owais, Tahir Mahmood, Haseeb Sultan, and Kang Ryoung Park, “Artificial Intelligence-based Computer-aided Diagnosis of Glaucoma Using Retinal Fundus Images,” Expert Systems with Applications, in submission.
< Request Form
for Models with Algorithm >
Please complete the following form to request access to our trained models with the algorithm. These may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
80. Grouped Dilated Convolution Module
(GDCM)-based Semantic Segmentation Network with Algorithm
(1) Introduction
We proposed a grouped dilated convolution module that combines existing grouped convolution and atrous spatial pyramid pooling techniques; the networks were trained on two open databases, the Cambridge Driving Labeled Video Database (CamVid) and the Stanford Background Dataset (SBD). The proposed method can learn multi-scale features more simply and effectively than existing methods. Because each convolution group in the proposed model has a different dilation, the groups have receptive fields of different sizes and learn features corresponding to those receptive fields. As a result, multi-scale context can be easily extracted. Moreover, optimal hyper-parameters were obtained from an in-depth analysis, and excellent segmentation performance is achieved.
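The grouping idea above can be sketched as follows. This is a minimal, illustrative 1D sketch in plain Python, not the released network: the function names (`dilated_conv1d`, `grouped_dilated_module`) and kernels are our own assumptions, and the actual GDCM operates on 2D feature maps with learned kernels.

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1D convolution with the given dilation (zero padding)."""
    n, k = len(signal), len(kernel)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            # A dilation of d spaces the kernel taps d samples apart,
            # enlarging the receptive field without adding parameters.
            idx = i + (j - k // 2) * dilation
            if 0 <= idx < n:
                acc += kernel[j] * signal[idx]
        out.append(acc)
    return out

def grouped_dilated_module(channels, kernels, dilations):
    """Split the input channels into groups, give each group its own
    dilation, and concatenate the per-group outputs, so the module
    extracts multi-scale context in a single pass."""
    assert len(channels) % len(dilations) == 0
    group_size = len(channels) // len(dilations)
    out = []
    for g, d in enumerate(dilations):
        group = channels[g * group_size:(g + 1) * group_size]
        out.extend(dilated_conv1d(ch, kernels[g], d) for ch in group)
    return out
```

Each group sees the same input resolution but a different effective receptive field, which is the mechanism the module relies on for multi-scale feature learning.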
(2) Request for Our
Model
To obtain our trained GDCM-based semantic segmentation network with its algorithm, please fill out the request form below and send it by email to Mr. Dong Seop Kim at k_ds1028@naver.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Dong
Seop Kim, Yu Hwan Kim, and Kang Ryoung Park, “Semantic Segmentation by Multi-scale Feature
Extraction Based on Grouped Dilated Convolution Module,” Mathematics, May 2021.
< Request Form
for Model >
Please complete the following form to request access to our trained model. This model may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
79. Dongguk MDA-BN Model for Effective Diagnosis of COVID-19
Infection
(1) Introduction
We proposed an optimal
multilevel deep-aggregated boosted network (MDA-BN) model, which includes a
total of 1.76 million trainable parameters. Our method leverages multilevel
deep-aggregated features and multistage training via a mutually beneficial approach
to maximize the overall CAD performance. Quantitative analysis shows the
superior results of our model over various existing methods.
(2) Request for Our
Model and Dataset Indices
To obtain our trained MDA-BN model, please fill out the request form below and send it by email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad
Owais, Young Won Lee, Tahir Mahmood, Adnan Haider, Haseeb Sultan, and Kang
Ryoung Park, “Multilevel Deep-Aggregated Boosted Network to Recognize COVID-19
Infection from Large-Scale Heterogeneous Radiographic Data,” IEEE Journal of Biomedical and Health Informatics, Vol. 25, Issue
6, pp. 1881-1891, June 2021.
< Request Form
for Model >
Please complete the following form to request access to our trained model with the training and testing data split information. This model may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
78. PLS-Net and PLRS-Net models
(1) Introduction
We trained our own PLRS-Net and PLS-Net semantic segmentation networks for accurate retinal blood vessel segmentation for diagnostic purposes. Both networks use a pool-less residual convolutional design to enhance segmentation accuracy without expensive preprocessing or deeper networks. Our proposed method is tested on three publicly available vessel segmentation datasets: DRIVE, CHASE-DB1, and STARE. The experimental results show that it outperforms existing state-of-the-art methods for retinal vessel segmentation. In addition, it gives medical practitioners and ophthalmologists an opportunity to screen for and analyze diabetic and hypertensive retinopathy. We made our models open to other researchers.
(2) Request for Models
To obtain our pretrained models, please fill out the request form below and send it by email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad Arsalan, Adnan Haider, Young Won Lee, and Kang Ryoung Park, “Detecting Retinal Vasculature as a Key Biomarker for Deep Learning-based Intelligent Screening and Analysis of Diabetic and Hypertensive Retinopathy,” Expert Systems With Applications, in submission.
< Request Form
for Models >
Please complete the following form to request access to our trained models. These models may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
77. Dongguk Joint-GAN and CNN-LSTM for action recognition
(1) Introduction
We trained GAN and CNN-LSTM models with our thermal image database and an open database for the purpose of generating joint and skeleton images of the human body. In the proposed joint and skeleton generation method, both the original grayscale thermal images and converted color thermal images are used as inputs to the GAN model. The proposed generation method performs image-to-image translation using a GAN model. In addition, our proposed action recognition method recognizes human actions using the joint and skeleton images generated by the GAN model as inputs to a CNN-LSTM model. We made the models (joint and skeleton generation (Joint-GAN) and action recognition (CNN-LSTM)) open to other researchers.
(2) Request for Models
To obtain our pretrained models, please fill out the request form below and send it by email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Jin Kyu Kang, Dat Tien Nguyen, Tuyen Danh Pham, Muhammad Arsalan, and Kang Ryoung Park, “Action Recognition from Thermal Videos Using Joint and Skeleton Information”, IEEE Access, Vol. 9, pp. 11716-11733, January 2021.
< Request Form
for Models >
Please complete the following form to request access to our trained models. These models may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name (signature)
76. AS-RIG (Adaptive Selection to Reconstructed Input Data using a
Generator) with algorithm for Person Re-Identification
(1) Introduction
We trained a GAN-based reconstruction model for AS-RIG on DBPerson-Recog-DB1 and SYSU-MM01. For ease of comparison, we developed the proposed algorithm using a model that was made available by other researchers.
(2) Request for trained
model and generated images
To gain access to our pre-trained models with the algorithm, please sign and scan the request form and email it to Mr. Jin Kyu Kang (kangjinkyu@dongguk.edu). Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Jin Kyu Kang, Min Beom
Lee, Hyo Sik Yoon, and Kang Ryoung Park, “AS-RIG:
Adaptive Selection of Reconstructed Input by Generator or Interpolation for
Person Re-Identification in Cross-Modality Visible and Thermal Images,” IEEE Access, Vol. 9, pp.
12055-12066, January 2021.
< Request Form for Model and Generated Images >
Please complete the following form to request access to the DRM. These files may not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name
(signature)
75. Synthesized Low
Light CamVid and KITTI database (Syn-CamVid and Syn-KITTI) and Algorithms
Including CNN Models
(1) Introduction
We used synthesized databases resembling actual nighttime images to measure and evaluate segmentation performance in an extremely low light environment. CamVid and KITTI were used as daytime databases, and synthesized low light CamVid (Syn-CamVid) and KITTI (Syn-KITTI), produced by converting the two daytime databases into low light images, were used as low light databases. First, we used gamma correction to reduce the brightness nonlinearly. Second, because the small amount of light and long camera exposure times at night blur the images, we applied a Gaussian blur filter to reproduce this phenomenon. Lastly, we generated noisy images similar to actual nighttime images by adding both Poisson and Gaussian noise.
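The three-step synthesis above (gamma darkening, Gaussian blur, Poisson plus Gaussian noise) can be sketched roughly as follows. This is a minimal plain-Python illustration on a grayscale image stored as a list of rows; the function names and parameter values (`gamma=3.0`, `sigma=5.0`, the 3x3 blur kernel) are our own assumptions, not the settings used to build Syn-CamVid and Syn-KITTI.

```python
import math
import random

def gamma_darken(img, gamma=3.0):
    """Nonlinearly reduce brightness: out = 255 * (in/255) ** gamma (gamma > 1 darkens)."""
    return [[255.0 * (p / 255.0) ** gamma for p in row] for row in img]

def gaussian_blur3(img):
    """3x3 Gaussian blur (separable [1 2 1]/4 kernel) with edge clamping."""
    h, w = len(img), len(img[0])
    k = [0.25, 0.5, 0.25]
    tmp = [[sum(k[d + 1] * img[y][min(max(x + d, 0), w - 1)] for d in (-1, 0, 1))
            for x in range(w)] for y in range(h)]
    return [[sum(k[d + 1] * tmp[min(max(y + d, 0), h - 1)][x] for d in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def add_poisson_gaussian_noise(img, sigma=5.0, rng=None):
    """Signal-dependent shot noise plus Gaussian read noise, clipped to [0, 255].
    The Poisson component is approximated by a Gaussian with variance = signal."""
    rng = rng or random.Random(0)
    out = []
    for row in img:
        noisy = []
        for p in row:
            shot = rng.gauss(p, math.sqrt(max(p, 0.0)))
            noisy.append(min(max(shot + rng.gauss(0.0, sigma), 0.0), 255.0))
        out.append(noisy)
    return out

def synthesize_low_light(img, gamma=3.0, sigma=5.0):
    """Chain the three steps: darken, blur, then add noise."""
    return add_poisson_gaussian_noise(gaussian_blur3(gamma_darken(img, gamma)), sigma)
```

A real pipeline would operate on NumPy arrays or image files, but the order of operations, darken first, then blur, then noise, is the point being illustrated.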
(2) Request for Our
Models and Algorithms
To gain access to our datasets and pretrained models with the algorithm, please sign and scan the request form and email it to Mr. Se Woon Cho at jsu319@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Se Woon Cho, Na Rae
Baek, Ja Hyung Koo, and Kang Ryoung Park, “Modified
Perceptual Cycle Generative Adversarial Network-based Image Enhancement for
Improving Accuracy of Low light Image Segmentation,” IEEE Access, Vol. 9, pp. 6296-6324,
January 2021.
< Request Form
for Models and Databases >
Please complete the following form to request access to our trained models and databases. These may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date:
Name (signature)
74. Dongguk DeblurGAN and CNN for Iris Recognition
(1) Introduction
We trained DeblurGAN models for iris image deblurring on the NICE.II and MICHE iris databases, whose images were blurred with a random motion-blur kernel. We made our trained models and generated images open to other researchers.
(2) Request for trained
model and generated images
To gain access to our models and images, download the request form below. Please sign and scan the request form and email it to Mr. Min Beom Lee (mblee@dongguk.edu). Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Min Beom Lee, Jin Kyu
Kang, Hyo Sik Yoon, and Kang Ryoung Park, “Enhanced
Iris Recognition Method by Generative Adversarial Network-based Image
Reconstruction,” IEEE Access, Vol. 9, pp. 10120-10135,
January 2021.
< Request Form for Model and Generated Images >
Please complete the following form to request access to the DGIM&GI. These files may not be used for commercial purposes.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name
(signature)
73. Dongguk Light-weighted Ensemble Network for Robust Diagnosis of
COVID19 Pneumonia
(1) Introduction
We proposed an optimal deep network with a total of 3.16 million trainable parameters. Moreover, a multilevel activation visualization layer added to the proposed network visualizes the lesion patterns as multilevel color activation maps (ML-CAMs) along with the diagnostic result (either COVID19-positive or -negative). This additional ML-CAM output provides visual insight into the computer's decision and may assist radiologists in validating it, particularly in uncertain situations. Additionally, a novel hierarchical training procedure was adopted to perform the optimal training of our network.
(2) Request for Our
Model and Dataset Indices
To obtain our trained model and the training and testing data split information, please fill out the request form below and send it by email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Muhammad
Owais, Hyo Sik Yoon, Tahir Mahmood, Haseeb Sultan, Adnan Haider and Kang Ryoung
Park, “Light-weighted Ensemble Network with Multilevel Activation
Visualization for Robust Diagnosis of COVID19 Pneumonia from Large-scale Chest
Radiographic Database,” Applied Soft Computing, Vol.
108(107490), pp. 1-15, September 2021.
< Request Form
for Models and Databases Indices>
Please complete the following form to request access to our trained model with the training and testing data split information. This model may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
72. Dongguk
CycleGAN-based Domain Adaptation and DenseNet-based Finger-vein Recognition
Models (DCDA&DFRM) with Algorithms
(1) Introduction
We propose a finger-vein recognition system with domain adaptation based on a cycle-consistent adversarial network. This system is designed to overcome the performance drop caused by heterogeneous data. For feature extraction, a modified DenseNet-161 is used. Two open databases, SDUMLA-HMT-DB [1] and HKPolyU-DB [2], are used in the experiments. The finger-vein recognition model and the CycleGAN model used for domain adaptation are open to other researchers for fair comparison.
[1] Y.
Yin, L. Liu, and X. Sun, "SDUMLA-HMT: A multimodal biometric
database", in Proc.
6th Chin. Conf. Biometric Recognit., Beijing, China, Dec. 2011, pp. 260-268.
[2] A. Kumar and Y.
Zhou, ‘‘Human identification using finger images,’’ IEEE Trans. Image Process., vol. 21, no. 4, pp.
2228–2244, Apr. 2012.
(2) Request for
algorithm and trained models
To gain access to our dataset, algorithm, and trained models, please complete the request form below, then scan it and email it to Mr. Kyoung Jun Noh (nohkyungjun@dongguk.edu). Any work that uses our data must acknowledge the authors by including the following reference.
Kyoung Jun Noh, Ji Ho Choi, Jin Seong Hong, and Kang Ryoung
Park, “Finger-vein Recognition
Using Heterogeneous Databases by Domain Adaption Based on a Cycle-Consistent
Adversarial Network,” Sensors,
Vol. 21, Issue 2(524), pp. 1-28, January 2021.
< Request Form for Models and
Algorithms >
Please complete the following form to request access to our trained models and algorithm. These may not be used for commercial purposes.
Name:
Contact:
(Email)
(Telephone)
Organization
Name:
Organization
Address:
Purpose:
Date:
Name (signature)
71. Dongguk Nuclei-Net Model
(R-NSN) with Algorithms
(1) Introduction
We propose an artificial intelligence-based nuclei segmentation method for multi-organ histopathology images. Nuclei segmentation plays an important role in cell phenotyping, grading, and prognosis of cancer. In our proposed method, we adopt a new nuclei segmentation network empowered by residual skip connections. Our method outperforms state-of-the-art methods proposed for nuclei segmentation.
(2) Request for
algorithm and trained models
To gain access to our algorithm and trained models, please complete the request form below, then scan it and email it to Mr. Tahir Mahmood (tahirmahmood@dongguk.edu). Any work that uses our data must acknowledge the authors by including the following reference.
Tahir
Mahmood, Muhammad Owais, Kyoung Jun Noh, Hyo Sik Yoon, Ja Hyung Koo, Adnan
Haider, Haseeb Sultan, and Kang Ryoung Park, "Accurate Segmentation of
Nuclear Regions with Multi‐Organ Histopathology Images Using Artificial
Intelligence for Cancer Diagnosis in Personalized Medicine,” Journal of Personalized Medicine, Vol.
11(6), 515, pp. 1-25, June 2021.
< Request Form for Program and
Models >
Please complete the following form to request access to our program and trained models. These may not be used for commercial purposes.
Name:
Contact:
(Email)
(Telephone)
Organization
Name:
Organization
Address:
Purpose:
Date:
Name (signature)
70. Dongguk Pathological Site
Classification Models with Algorithm
(1) Introduction
We propose a classification method based on an ensemble of deep learning models to overcome the limitations of single-model approaches to the endoscopic pathological site classification problem. Our algorithm was successfully applied to gastric endoscopic pathological site classification using an open dataset, the Hamlyn-GI dataset [1].
[1] Ye,
M.; Giannarou, S.; Meining, A.; Yang, G-Z. Online tracking and retargeting with
applications to optical biopsy in gastrointestinal endoscopic examinations. Med. Image Anal., 2016, 30, 144-157.
(2) Request for
algorithm and trained models
To gain access to our algorithm and trained models, please complete the request form below, then scan it and email it to Mr. D. T. Nguyen (nguyentiendat@dongguk.edu). Any work that uses our data must acknowledge the authors by including the following reference.
Dat Tien Nguyen, Min Beom Lee, Tuyen Danh Pham, Ganbayar
Batchuluun*, Muhammad Arsalan, and Kang Ryoung Park, “Enhanced
Image-based Endoscopic Pathological Site Classification Using an Ensemble of
Deep Learning Models,” Sensors, Vol. 20, Issue
21(5982), pp. 1-24, October 2020.
< Request Form for Program and
Models >
Please complete the following form to request access to our program and trained models. These may not be used for commercial purposes.
Name:
Contact:
(Email)
(Telephone)
Organization
Name:
Organization
Address:
Purpose:
Date:
Name (signature)
69. Dongguk single model both for thermal image super-resolution
reconstruction and deblurring, and detection model of object and thermal
reflection
(1) Introduction
We trained GAN models with our thermal image database and an open database for the purposes of thermal image reconstruction and object detection. In the proposed reconstruction method, a blurry low-resolution image and the original image are used as inputs to the GAN model. The proposed reconstruction method performs super-resolution and deblurring at the same time with a single GAN model. In addition, our proposed detection method detects objects and thermal reflections in thermal images. Both proposed methods use color thermal images as inputs, converted by applying a colormap. We made the models (reconstruction and detection) open to other researchers.
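The colormap conversion mentioned above, turning a single-channel thermal image into a three-channel color image, can be sketched as follows. This is an illustrative plain-Python example with a simple "hot"-style ramp of our own choosing (`hot_colormap`, `thermal_to_color` are hypothetical names); the actual work may use a different colormap.

```python
def hot_colormap(v):
    """Map a normalized intensity v in [0, 1] to an (r, g, b) triple in [0, 255]
    using a 'hot'-style ramp: black -> red -> yellow -> white."""
    v = min(max(v, 0.0), 1.0)
    r = min(3.0 * v, 1.0)                      # red saturates first
    g = min(max(3.0 * v - 1.0, 0.0), 1.0)      # then green
    b = min(max(3.0 * v - 2.0, 0.0), 1.0)      # blue last
    return (round(255 * r), round(255 * g), round(255 * b))

def thermal_to_color(img):
    """Convert a single-channel thermal image (rows of 0-255 values)
    into a three-channel color image via the colormap."""
    return [[hot_colormap(p / 255.0) for p in row] for row in img]
```

The point of the conversion is that a pretrained three-channel network (or GAN) can then consume the thermal data directly, with the colormap spreading the single intensity channel across R, G, and B.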
(2) Request for Models
To obtain our pretrained models, please fill out the request form below and send it by email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Jin Kyu Kang, Dat Tien
Nguyen, Tuyen Danh Pham, Muhammad Arsalan, and Kang Ryoung Park, “Deep Learning-based Thermal
Image Reconstruction and Object Detection”, IEEE Access, Vol. 9,
pp. 5951-5971, January 2021.
< Request Form
for Models >
Please complete the following form to request access to our trained models. These models may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
68. Dongguk enhanced CycleGAN for
age estimation and generated
images
(1) Introduction
We propose an enhanced CycleGAN for generating facial images of untrained races and ages for age estimation, and use it to improve age prediction performance on testing data comprising untrained age ranges and races. This algorithm is the first to improve age prediction performance by generating data for untrained age ranges and races using an enhanced CycleGAN (an improvement over the existing CycleGAN). Generating data for untrained age ranges and races also mitigates the overfitting and class-imbalance problems that arise with many classes. The enhanced CycleGAN was trained separately on the Morph, MegaAge, and AFAD face databases. We make the enhanced CycleGAN and the generated images publicly available.
(2) Request for
enhanced CycleGAN model and generated database
To gain access to the generated database and the enhanced CycleGAN model, download the request form below. Please scan the request form and email it to Mr. Yu Hwan Kim (taekkuon@dongguk.edu). Any work that uses the generated images or the enhanced CycleGAN model must acknowledge the authors by including the following reference.
Yu Hwan Kim, Se Hyun Nam, and Kang Ryoung Park, “Enhanced Cycle Generative Adversarial Network for
Generating Face Images of Untrained Races and Ages for Age Estimation,” IEEE Access, Vol. 9, pp. 6087-6112, January 2021.
< Request Form for database and
Models >
Please complete the following form to request access to our database and trained models. These may not be used for commercial purposes.
Name :
Contact
: (Email)
(Telephone)
Organization
Name :
Organization
Address :
Purpose
:
Date :
Name (signature)
67. Dongguk Korean
Banknote Database Version1 (DKB v1) with
Faster R-CNN model and post
processing algorithms
(1) Introduction
The DKB v1 contains eight classes, namely 10, 50, 100, 500, 1000, 5000, 10000, and 50000 KRW, with each class having 800 images, yielding a total of 6,400 images. The images were photographed using the frontal viewing camera of a Galaxy Note 5 [36] and captured from various distances. To reflect the real-world environment as closely as possible, they were captured at various locations, under various lighting conditions, and in cases where the bills were randomly folded. The size of the obtained images is 1920 × 1080 pixels. Furthermore, an experiment was conducted using the open JOD database to verify whether the proposed algorithm can be applied to various types of banknote images. The JOD open database contains nine classes (i.e., 1 qirsh; 5 and 10 piastres; and 1/4, 1/2, 1, 5, 10, and 20 dinars), yielding a total of 330 images. The size of the obtained images is 3264 × 2448 pixels. We use these databases with Faster R-CNN and three post-processing algorithms. We make DKB v1 with the Faster R-CNN model and post-processing algorithms publicly available.
(2) Request for DKB v1
and Faster R-CNN model
To gain access to DKB v1 with the Faster R-CNN model and post-processing algorithms, download the request form below. Please scan the request form and email it to Mr. Chan Hum Park (pipetsupport@naver.com). Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.
Chan Hum Park, Se Woon
Cho, Na Rae Baek, Jiho Choi, and Kang Ryoung Park, “Deep Feature-based Three-stage Detection of Banknotes
and Coins for Assisting Visually Impaired People,” IEEE Access, Vol.
8, pp. 184598-184613, October 2020.
< Request Form for
database and Models >
Please complete the following form to request access to our database and trained models. These may not be used for commercial purposes.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date
:
Name
(signature)
66. Dongguk Face and Body Database (DFB-DB2) with GAN model, CNN
models, and algorithms
(1) Introduction
DFB-DB2 was created for the experiments using
images of 22 people obtained by two types of cameras to assess the performance
of the proposed method in a variety of camera environments. The first camera
was a Logitech BCC 950, and the camera specifications include a camera viewing
angle of 78°, a maximum resolution of full high-definition
(HD) 1080 p, and auto-focusing at 30 frames per second (fps). The second camera
was a Logitech C920, and its specifications include a maximum resolution of
full HD 1080p, a viewing angle of 78° at 30 fps, and auto
focusing. Images were taken in an indoor environment with the lights on, and
each camera was installed at a height of 2 m 40 cm. The database is divided
into two categories according to the camera: the first consists of images
captured by the Logitech BCC 950, and the second of images obtained by the
Logitech C920 at a similar camera angle to the first. DFB-DB2 differs from
DFB-DB1 in that it contains blurred images, which are not included in DFB-DB1.
In addition, we make our GAN model, two CNN
models trained with DFB-DB2 and the open ChokePoint database [1], and
our algorithms publicly available.
1. ChokePoint Database. Available online:
http://arma.sourceforge.net/chokepoint/ (accessed on 20 June 2020).
(2) Request for DFB-DB2 with GAN model, CNN model, and algorithms
To gain access to DFB-DB2 with GAN model, CNN
model and algorithms, download the following request form. Please scan the
request form and email to Mr. Ja Hyung Koo (koo6190@naver.com).
Any work that uses this DFB-DB2 with GAN model,
CNN model and algorithms must acknowledge the authors by including the
following reference.
Ja Hyung Koo, Se Woon Cho, Na Rae Baek, and Kang
Ryoung Park, “Face and Body-based Human Recognition by GAN-based Blur Restoration,” Sensors, Vol. 20, Issue 18(5229), pp. 1-37, September
2020.
< DFB-DB2, GAN model, and CNN model Request Form >
Please complete the following form to request
access to the DFB-DB2, GAN model, and
CNN model (All contents must be completed). This database, GAN model,
and CNN model should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name
(signature)
65. Dongguk Computer-Aided Framework to Diagnose Tuberculosis from
Chest X-Ray Images
(1) Introduction
We proposed a novel deep learning-based computer-aided framework to
diagnose tuberculosis from a given chest X-ray (CXR) image and provide the
appropriate visual and descriptive information from a previous database. Such information can
further assist radiologists to subjectively validate the computer decision.
Thus, both subjective and computer decisions will validate each other and
ultimately result in effective diagnosis and treatment.
(2) Request for Our
Model and Dataset Indices
To obtain our trained
model and the training and testing data splitting information, please fill the
request form below and send an email to Mr. Muhammad Owais at
malikowais266@gmail.com. Any work that uses the provided pretrained network
must acknowledge the authors by including the following reference.
Muhammad
Owais, Muhammad Arsalan, Tahir Mahmood, Yu Hwan Kim, and Kang Ryoung Park, “Comprehensive
Computer-Aided Decision Support Framework to Diagnose Tuberculosis From Chest
X-Ray Images: Data Mining Study,” JMIR Medical
Informatics, Vol. 8, Issue 12: e21790, pp. 1-23, December 2020.
< Request Form
for Models and Databases Indices>
Please complete the
following form to request access to our trained model and the training/testing
data splitting indices. This model should not be used for
commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
64. Dongguk blurred gaze database (DBGD) and CycleGAN model
(1) Introduction
The blurred gaze
database [Dongguk blurred gaze database (DBGD)] is constructed from the images
of 26 drivers by dual near-infrared (NIR) cameras with illuminators in a
vehicle environment, and is classified into 16 situations, such as wearing
sunglasses, different glasses, or hats, and using mobile phones. We make DBGD and
our CycleGAN model trained with this database open to other researchers.
(2) Request for DBGD
and CycleGAN model
To gain access to the
DBGD with CycleGAN model, download the following request form. Please scan the
request form and email to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).
Any work that uses or
incorporates the dataset must acknowledge the authors by including the
following reference.
Hyo Sik Yoon and Kang
Ryoung Park, “CycleGAN-based Deblurring for Gaze Tracking
in Vehicle Environments,” IEEE Access, Vol.
8, pp. 137418-137437, August 2020.
< Request Form for
database and Models >
Please
complete the following form to request access to our database and trained
models. These should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date
:
Name
(signature)
63. Dongguk Models for Thermal Image Super-resolution Reconstruction
and Deblurring
(1) Introduction
We trained the GAN models with our thermal image
database and an open database for the purpose of thermal image reconstruction.
In the proposed super-resolution method, a low-resolution image and an original
image are used as inputs to the GAN model. In the proposed deblurring method, a
blurred image and an original image are used as inputs to the GAN model. We
made the models (super-resolution reconstruction and deblurring) open to other
researchers.
(2) Request for Models
To obtain our pretrained models, please fill the
request form below and send an email to Prof. Batchuluun at
ganabata87@dongguk.edu. Any work that uses the provided pretrained network must
acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Young Won Lee, Dat Tien
Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, “Thermal Image Reconstruction
Using Deep Learning”, IEEE Access, Vol. 8, pp. 126839-126858,
July 2020.
< Request Form
for Models >
Please complete the
following form to request access to our trained model. This model should not be
used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
62. Dongguk RPS-Net
based retinal pigment sign detection model (DRPM) with Algorithms
(1) Introduction
In this study, we proposed an accurate retinal
pigment segmentation network (RPS-Net) that segments pigment signs for
diagnostic purposes. RPS-Net is a specifically designed deep learning-based
semantic segmentation network to accurately detect and segment the pigment
signs with fewer trainable parameters. Compared with the conventional deep
learning methods, the proposed method applies a feature enhancement policy
through multiple dense connections between the convolutional layers, which
enables the network to discriminate between normal and diseased eyes, and
accurately segment the diseased area from the background.
(2) Request for Models
To gain access to our
databases and pretrained models with algorithms, please sign and scan the
request form and email to Mr. Muhammad Arsalan at arsal@dongguk.edu. Any work
that uses our models, algorithm, and databases must acknowledge the authors by
including the following reference.
Muhammad
Arsalan, Na Rae Baek, Muhammad Owais, Tahir Mahmood, and Kang Ryoung Park,
"Deep Learning-based Detection of Pigment Signs for Analysis and Diagnosis
of Retinitis Pigmentosa," Sensors, Vol. 20, Issue 12(3454), pp. 1-19, June
2020.
< Request Form for Models >
Please complete the
following form to request access to our trained models. These should not be
used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
61. Dongguk
DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms
(1) Introduction
In this study, we proposed a finger-vein
recognition system based on a score-level fusion method using shape and texture
images. To extract the matching score of each shape image and texture
image, a revised DenseNet-161 with composite image input is used. Finger-vein
recognition models trained with our experimental databases in this study are
made available to other researchers for a fair judgment on the performance.
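The score-level fusion described above can be sketched as a weighted sum of the two matching scores (a minimal illustration; the weight and decision threshold below are assumed values, not those selected in the paper):

```python
def fuse_scores(shape_score, texture_score, w=0.5):
    """Weighted-sum score-level fusion of the shape- and texture-based
    matching scores (both assumed to be distances in [0, 1]; `w` is an
    illustrative weight, not the paper's value)."""
    return w * shape_score + (1.0 - w) * texture_score

def is_genuine(shape_score, texture_score, threshold=0.4, w=0.5):
    """Accept a finger-vein pair when the fused distance falls below the
    (assumed) decision threshold."""
    return fuse_scores(shape_score, texture_score, w) < threshold
```

In practice, the weight and threshold would be tuned on a validation set to minimize the equal error rate.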
(2) Request for Models
To gain access to our pretrained models with
algorithms,
please sign and scan the request form and send an email to Mr. Kyoung Jun Noh
at nohkyungjun@dongguk.edu. Any work that uses our models and algorithm must
acknowledge the authors by including the following reference.
Kyoung Jun Noh, Jiho Choi, Jin Seong Hong and
Kang Ryoung Park, “Finger-vein Recognition Based on Densely Connected Convolutional
Network Using Score-Level Fusion with Shape and Texture Images”, IEEE Access, Vol. 8, pp. 96748-96766, June 2020.
< Request Form for Models >
Please complete the
following form to request access to our trained models. These should not be
used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
60. CNN model for
Thermal Reflection Removal
(1) Introduction
We trained the CNN model with our thermal image
database and an open database for the purpose of thermal reflection removal. In
the proposed method, a region image and an original image are used as inputs to
the CNN models. We made the models (pruned fully convolutional network (PFCN))
open to other researchers.
(2) Request for Models
To obtain our pretrained models please fill the
request form below and send an email to Prof. Batchuluun at
ganabata87@dongguk.edu. Any work that uses the provided pretrained network must
acknowledge the authors by including the following reference.
Ganbayar Batchuluun, Na Rae Baek, Dat Tien
Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, “Region-based Removal of
Thermal Reflection using Pruned Fully Convolutional Network”, IEEE Access, Vol. 8,
pp. 75741-75760, May 2020.
< Request Form for Models >
Please complete the
following form to request access to our trained models. These should not be
used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
59. Synthesized Low
Light Cambridge-driving Labeled Video Database (Syn-CamVid), Synthesized Low
Light Karlsruhe Institute of Technology and Toyota Technological Institute at
Chicago (Syn-KITTI) database, and Algorithm Including CNN Models
(1) Introduction
We used
synthesized databases that resemble real low light environments to
perform multi-class segmentation in low light conditions. Images taken in
real low light or nighttime environments have poor image quality and visibility
due to low brightness, blur, and noise, which makes it difficult for humans to
create accurate segmentation labels for all the objects in the image.
Therefore, to utilize accurate segmentation labels and paired
images, experiments were performed using the Syn-CamVid and Syn-KITTI
databases, which are the results of converting the daytime CamVid and KITTI
databases into low light images, respectively. To create extremely
low light images similar to an actual low light environment with little
external light, we combined existing low light image generation methods. In a real
low light environment with little external light, the brightness value does not
decrease linearly. When comparing the daytime image with the nighttime image,
the brightness of highly bright pixels will decrease more, whereas that of the
pixels with lower brightness will decrease less. We used gamma correction to
produce this nonlinear brightness change. In a low light
environment, blurry images are captured due to the amount of light and the
camera’s exposure time, and
we used the Gaussian blur kernel to implement this effect. Finally, the noise
in the low light image is generated by the camera sensor, which is added in
this experiment using the Gaussian and Poisson noise functions.
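The combination of gamma correction, Gaussian blur, and Gaussian/Poisson noise described above can be sketched as follows (a minimal illustration for a 2-D grayscale image; all parameter values are assumptions, not the settings used for Syn-CamVid and Syn-KITTI):

```python
import numpy as np

def synthesize_low_light(img, gamma=3.0, blur_sigma=1.0,
                         read_noise_std=0.01, peak=50.0):
    """Convert a daytime 2-D grayscale image (float, range [0, 1]) into a
    synthetic low light image. All parameter values are illustrative."""
    rng = np.random.default_rng(0)

    # 1) Gamma correction: gamma > 1 darkens bright pixels more than dark
    #    ones, approximating the nonlinear brightness drop at night.
    dark = np.clip(img, 0.0, 1.0) ** gamma

    # 2) Gaussian blur via a separable 1-D kernel applied along each axis,
    #    mimicking the blur caused by low light and exposure time.
    radius = int(3 * blur_sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * blur_sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, dark)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, blurred)

    # 3) Sensor noise: Poisson (signal-dependent shot noise) plus
    #    Gaussian (read noise).
    shot = rng.poisson(blurred * peak) / peak
    noisy = shot + rng.normal(0.0, read_noise_std, size=img.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)
```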
(2) Request for Our
Models and Algorithms
To gain access to our
datasets and pretrained models with algorithm, please sign and scan the request
form and email to Mr. Se Woon Cho at jsu319@dongguk.edu. Any work that uses our
models, algorithm, and databases must acknowledge the authors by including the
following reference.
Se Woon Cho, Na Rae Baek, Ja
Hyung Koo, Muhammad Arsalan, and Kang Ryoung Park, "Semantic Segmentation
with Low Light Images by Modified CycleGAN-based Image Enhancement", IEEE
Access, Vol. 8, pp. 93561-93585, June 2020.
< Request Form
for Models and Databases >
Please
complete the following form to request access to our trained models and
databases. These should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
58. Dongguk Drone
Motion Blur Dataset - Versions 1 and 2 (DDBD-DB1 and DDBD-DB2) & Pretrained
Models
(1) Introduction
We used the open Dongguk drone camera dataset ver. 2
(DDroneC-DB2) to generate two datasets by two different methods,
denoted as the synthesized motion blur
drone database 1 (SMBD-DB1) and synthesized
motion blur drone database 2 (SMBD-DB2). For the first dataset, the motion-blurred images were
generated by applying
the motion-blurring kernels, which are created by applying subpixel
interpolation to the trajectory vector. Each trajectory vector, which is a
complex-valued vector, corresponds to the discrete positions of an object
undergoing 2D random motion in a continuous domain. For the second dataset, we synthesized a dataset that contains realistic motion blur
close to motion blur in the wild. Specifically, we used a video frame
interpolation model to increase the frame rate of DDroneC-DB2 videos from 30 to
120 FPS. Then, we generated blurred images by averaging consecutive frames of
the generated high-frame-rate videos. With these two datasets, we trained and
evaluated the proposed deblurring CNN and marker detection CNN. We made our synthesized
datasets and CNN models publicly available for fair comparison and reproduction of results.
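The second, frame-averaging method can be sketched as follows (a hypothetical illustration that assumes an already high-frame-rate stack of frames; the video frame interpolation step is omitted, and the averaging window is an assumed value):

```python
import numpy as np

def average_blur(frames, window=8):
    """Synthesize motion-blurred images by averaging consecutive frames of
    a high-frame-rate video. `frames` is an array of shape (T, H, W[, C])
    in [0, 1]; `window` is the number of consecutive frames fused into one
    blurred image (an assumed value, not the paper's setting)."""
    frames = np.asarray(frames, dtype=np.float32)
    n_blurred = frames.shape[0] // window
    blurred = [
        frames[i * window:(i + 1) * window].mean(axis=0)  # simulate exposure
        for i in range(n_blurred)
    ]
    return np.stack(blurred)
```

Each averaged window approximates the long exposure of a slower camera, so moving content smears while static content stays sharp.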
(2) Request for Our
Models and Algorithms
To gain access to our
datasets and pretrained models with algorithm, please sign and scan the request
form and email to Prof. Tuyen Danh Pham at phamdanhtuyen@gmail.com. Any work
that uses our models, algorithm, and databases must acknowledge the authors by
including the following reference.
Noi Quang Truong, Young Won Lee,
Muhammad Owais, Dat Tien Nguyen, Ganbayar Batchuluun, Tuyen Danh Pham*, and
Kang Ryoung Park, “SlimDeblurGAN-based
Motion Deblurring and Marker Detection for Autonomous Drone Landing,” Sensors, Vol. 20, Issue 14(3918), pp. 1-35, July
2020.
< Request Form
for Models and Databases >
Please
complete the following form to request access to our trained models and
databases. These should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
57. Dongguk X-RayNet Model with Algorithms (DXM)
(1) Introduction
In
this study, semantic segmentation-based automatic cardiothoracic ratio (CTR)
estimation is proposed. The CTR is important for diagnosing cardiac and related
diseases. The proposed method consists of two multiclass segmentation networks
(X-RayNet1 and X-RayNet2) that provide accurate boundaries of chest anatomical
structures such as the lungs, heart, and clavicle bones. The accurate boundary
segmentation of these anatomies helps to compute the CTR automatically, where
the CTR is considered a biomarker for cardiomegaly and other diseases. Three
publicly available datasets, the Japanese Society of Radiological Technology (JSRT),
Montgomery County (MC), and Shenzhen (SC) X-ray sets, were used to evaluate the
performance of the proposed networks. The experimental results show that our method
outperforms existing approaches and provides accurate boundaries for CTR
computation. We made our models publicly available for fair comparison and
reproduction of results. All experiments were implemented in MATLAB R2019a.
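Once the anatomical boundaries are segmented, the CTR itself reduces to a ratio of maximal horizontal widths. A minimal sketch on binary masks (illustrative only; the authors' implementation is in MATLAB):

```python
import numpy as np

def horizontal_width(mask):
    """Widest horizontal extent (in pixels) of a binary mask."""
    cols = np.any(mask, axis=0)      # columns containing the structure
    idx = np.flatnonzero(cols)
    return int(idx[-1] - idx[0] + 1) if idx.size else 0

def cardiothoracic_ratio(heart_mask, lungs_mask):
    """CTR = maximal horizontal cardiac width / maximal horizontal thoracic
    width, both measured from segmentation masks. A CTR above roughly 0.5
    is the conventional screening threshold for cardiomegaly."""
    return horizontal_width(heart_mask) / horizontal_width(lungs_mask)
```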
(2) Request for Our Models
and Algorithms
To gain access to our
databases and pretrained models with algorithms, please sign and scan the
request form and email to Mr. Muhammad Arsalan at arsal@dongguk.edu. Any work
that uses our models, algorithm, and databases must acknowledge the authors by
including the following reference.
Muhammad Arsalan,
Muhammad Owais, Tahir Mahmood, Jiho Choi, and Kang Ryoung Park,
"Artificial Intelligence-based Diagnosis of Cardiac and Related Diseases
", Journal of Clinical Medicine, Vol. 9, Issue 3(871), pp. 1-27, March
2020.
< Request Form
for Models and Algorithms >
Please
complete the following form to request access to our trained models and
algorithms. These should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
56. Dongguk Mitotic Cell Detection Models (DMM)
(1) Introduction
In
this study, we proposed a multistage mitosis detection method based on the Faster
region-based convolutional neural network (Faster R-CNN) and deep CNNs. Two open
datasets of breast cancer, known as ICPR 2012 and MITOS-ATYPIA-14, are used. Our
proposed technique outperforms the existing techniques. We made our models
publicly available to allow other researchers to regenerate our results and make
fair comparisons. All experiments were implemented in MATLAB R2019a.
(2) Request for Our
Models and Algorithms
To gain access to our
databases and pretrained models with algorithms, please sign and scan the
request form and email to Mr. Tahir Mahmood at tahirmahmood@dongguk.edu. Any
work that uses our models, algorithm, and databases must acknowledge the
authors by including the following reference.
Tahir Mahmood, Muhammad Arsalan,
Muhammad Owais, Min Beom Lee, and Kang Ryoung Park, "Artificial
Intelligence-based Mitosis Detection in Breast Cancer Histopathology Images
Using Faster R-CNN and Deep CNNs", Journal of Clinical Medicine, Vol. 9,
Issue 3(749), pp. 1-25, March 2020.
< Request Form
for Models and Algorithms >
Please
complete the following form to request access to our trained models and
algorithms. These should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
55. Dongguk CNN Models for Fake Banknote Image
Classification Using Visible-Light Images Captured by Smartphone Camera
(1) Introduction
In
this study, we proposed a fake banknote classification method using a CNN on
banknote images captured by smartphone cameras under visible-light conditions.
The fake banknote dataset used for the experiments in this study consists of
images of banknotes of three national currencies: EUR (EUR 5, EUR 10, EUR 20,
EUR 50, and EUR 100), USD (USD 1, USD 5, USD 10, USD 20, USD 50, and USD 100),
and KRW (KRW 1000, KRW 5000, KRW 10,000, and KRW 50,000). The fake banknotes were created
by capturing the original banknotes with a scanner and smartphone cameras and
printing them with a color printer to make the reproduced banknotes. We subsequently
captured banknote images with the same smartphones while holding
the fake and genuine banknotes in front of the cameras or placing them on tables.
The training process was conducted using the MATLAB implementation of CNN with
the AlexNet, ResNet-18, and GoogleNet architectures.
(2) Request for Our
Models and Algorithm
To
gain access to these files, download the following request form. Please scan
the request form and email to Dr. Tuyen Danh Pham (phamdanhtuyen@dongguk.edu).
Any work that uses these files with algorithm must acknowledge the authors by
including the following reference.
Tuyen
Danh Pham, Chanhum Park, Dat Tien Nguyen, Ganbayar Batchuluun, and Kang Ryoung
Park, “Deep Learning-Based Fake-Banknote Detection Using
Visible-Light Images Captured by Smartphone Cameras,” IEEE Access, Vol. 8, pp. 63144-63161, April 2020.
< Request Form for Models,
Algorithm, and Databases >
Please complete the following form to request access to our
trained models and algorithm. These should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
54. Dongguk mobile finger wrinkle database versions 1 and 2
(DMFW-DB1 and DMFW-DB2), and GAN with CNN models for motion deblurring
(1) Introduction
To evaluate performance
using images captured by a variety of smartphone cameras, DMFW-DB2 used the
rear camera of a Samsung Galaxy S8+. Frames were extracted from the
captured videos at 30 fps, and motion-blurred images were generated by
averaging consecutive frames. In addition, DMFW-DB1 (refer to “37”) was artificially blurred by a motion-blurring kernel.
This study used DeblurGAN to restore the motion-blurred images of DMFW-DB1 and
DMFW-DB2. The restored images obtained
by DeblurGAN are used as the input to a ResNet-101 to perform finger wrinkle
recognition.
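The kernel-based artificial blurring applied to DMFW-DB1 can be illustrated with a simple horizontal linear motion kernel (an assumed kernel; the actual blurring kernels used for DMFW-DB1 may differ):

```python
import numpy as np

def motion_blur(img, length=9):
    """Blur a 2-D grayscale image (float array) by convolving each row
    with a horizontal linear motion kernel of the given length (an
    assumed, illustrative value)."""
    kernel = np.ones(length, dtype=np.float32) / length
    # Convolve every row with the 1-D motion kernel (zero-padded edges).
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
```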
(2) Request for Our
Models, Algorithm, and Databases
To gain access to our
databases and pretrained models with algorithms, please sign and scan the
request form and email to Mr. Nam Sun Cho at diko93@dongguk.edu. Any work that
uses our models, algorithm, and databases must acknowledge the authors by
including the following reference.
Nam Sun Cho, Chan
Sik Kim, Chanhum Park, and Kang Ryoung Park, "GAN-based Blur Restoration for Finger Wrinkle
Biometrics System",
IEEE Access, Vol. 8, pp. 49857- 49872, March 2020.
< Request Form
for Models and Algorithm >
Please complete the
following form to request access to our trained models and algorithm. These
should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
53. Enhanced Ultrasound Thyroid Nodule Classification (US-TNC-V2)
Algorithm
(1) Introduction
In this study, we
enhance the classification performance of an ultrasound image-based thyroid nodule
classification system. The pretrained model was successfully trained using the TDID
dataset [1].
[1] Pedraza, L.;
Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero, E. An open access
thyroid ultrasound-image database. In Proceedings of the 10th International
Symposium on Medical Information Processing and Analysis, Colombia, 28 January,
2015 (in SPIE Proceedings, Vol. 9287, pp. 1-6).
(2) Request for Our
Models and Algorithm
To gain access to our
algorithm and pretrained models, please sign and scan the request form and
email to Prof. D. T. Nguyen at nguyentiendat@dongguk.edu. Any work that uses
our algorithm must acknowledge the authors by including the following
reference.
D. T. Nguyen, et al.
"Ultrasound Image-based Diagnosis of Malignant Thyroid Nodule Using
Artificial Intelligence", Sensors, Vol. 20, Issue 7(1822), pp. 1-23, March
2020.
< Request Form
for Models and Algorithm >
Please complete the
following form to request access to our trained models and algorithm. These
models should not be used for commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
52. Dongguk
generation model of presentation attack face images (DG_FACE_PAD_GEN)
(1) Introduction
We trained our generative
adversarial network (GAN)-based model to artificially generate presentation
attack (PA) face images to reduce the efforts of PA image acquisition.
(2) Request for obtaining DG_FACE_PAD_GEN
To obtain our pretrained model,
please fill in the request form below and send an email to Mr. Nguyen at
nguyentiendat@dongguk.edu. Any work that uses the provided pretrained network
must acknowledge the authors by including the following reference.
Dat Tien Nguyen, Tuyen Danh Pham,
Ganbayar Batchuluun, Kyoung Jun Noh, and Kang Ryoung Park, “Presentation Attack Face Image Generation Based
on Deep Generative Adversarial Network,” Sensors, Vol. 20, Issue 7(1810),
pp. 1-24, March 2020.
< Request Form for DG_FACE_PAD_GEN >
Please complete the following form to request
access to the DG_FACE_PAD_GEN. These files should not be used for commercial
use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name
(signature)
51. Dongguk Spatiotemporal Features-Based Classification Network
(DenseNet+LSTM) to Classify the Multiple Gastrointestinal Diseases with
Including the Video Indices of Experimental Endoscopy Videos
(1) Introduction
We
trained a spatiotemporal features-based classification model (named
DenseNet+LSTM) to classify multiple gastrointestinal
diseases using endoscopic videos. Moreover, after performing the
classification, the extracted features were further used to retrieve images of
similar medical conditions, such as normal and abnormal cases, from a large
endoscopic database.
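The retrieval step described above can be sketched as a nearest-neighbor search over the classifier's feature vectors (a generic cosine-similarity sketch, not the authors' implementation):

```python
import numpy as np

def retrieve_similar(query_feat, db_feats, top_k=5):
    """Return indices of the `top_k` database images whose feature
    vectors are most similar to the query, ranked by cosine similarity.
    `db_feats` has one feature vector per row."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity to each entry
    return np.argsort(-sims)[:top_k]    # highest similarity first
```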
(2) Request for Our
Algorithm and Dataset Indices
To obtain our trained
model together with the video indices of the experimental endoscopy videos, please
fill the request form below and send an email to Mr. Muhammad Owais at
malikowais266@gmail.com. Any work that uses the provided pretrained network
must acknowledge the authors by including the following reference.
Muhammad
Owais, Muhammad Arsalan, Tahir Mahmood, Jin Kyu Kang, and Kang Ryoung Park, “Automated
Diagnosis of Various Gastrointestinal Lesions Using a Deep Learning-Based
Classification and Retrieval Framework with a Large Endoscopic Database: Model
Development and Validation,” Journal of Medical
Internet Research, Vol. 22, Issue 11: e18563, pp. 1 –
21, November 2020.
< Request Form
for Models and Databases Indices>
Please complete the
following form to request access to our trained model together with the video
indices of experimental endoscopy videos. This model should not be used for
commercial use.
Name
:
Contact
: (Email)
(Telephone)
Organization Name
:
Organization Address
:
Purpose
:
Date :
Name
(signature)
50.
Dongguk Modified Conditional GAN & Deep CNN Models, and Generated Images
(1) Introduction
We trained our modified conditional GAN & Deep CNN
Models for finger-vein optical blur restoration and finger-vein recognition by
databases of PolyU-DB [1] and SDU-DB [2]. We made our
trained models and generated images open to other
researchers.
[1] Kumar, A.; Zhou, Y. Human
identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228–2244.
[2] SDUMLA-HMT Finger Vein Database.
Available online: http://mla.sdu.edu.cn/info/1006/1195.htm
(2) Request for Models
To gain access to our models and generated
images, download the following request form. Please fill the request form below
and send an email to Mr. Jiho Choi (choijh1027@dongguk.edu). Any work that uses our algorithm and models must
acknowledge the authors by including the following reference.
< Request Form for
Pretrained Models, Algorithm, and Images>
Please complete the following form to request
access to our pretrained models, algorithms, and images. These should not be
used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
49. Dongguk
Super-resolution Reconstruction & Age Estimation CNN Model (DSR&AE-CNN)
(1) Introduction
We
trained our models (DSR&AE-CNN) for facial image super-resolution
reconstruction and age estimation using the PAL [1] and MORPH [2]
databases. We made our trained models and generated images open to other researchers.
[1] PAL database
Available online: http://agingmind.utdallas.edu/download-stimuli/face-database/
(accessed on 17 May 2019).
[2] MORPH database
Available online:
https://ebill.uncw.edu/C20231_ustores/web/store_main.jsp?STOREID=4 (accessed on
17 May 2019).
(2) Request for Models
To gain
access to our models and images, download the following request form. Please
fill the request form below and send an email to Mr. Se Hyun Nam (nsh6473@dongguk.edu). Any work
that uses our algorithm and models must acknowledge the authors by including
the following reference.
Se Hyun Nam, Yu Hwan
Kim, Noi Quang Truong, Jiho
Choi, and Kang Ryoung Park, “Age
Estimation by Super-Resolution Reconstruction Based on Adversarial Networks,” IEEE Access, Vol. 8, pp. 17103-17120, January 2020.
< Request Form
for Pretrained Models and Algorithm>
Please
complete the following form to request access to our pretrained models. These
models should not be used for commercial use.
Name :
Contact :
(Email)
(Telephone)
Organization
Name :
Organization
Address :
Purpose :
Date :
Name (signature)
48. Dongguk ESSN
models and algorithm for Semantic Segmentation
(1) Introduction
We
propose new models (ESSN) for semantic segmentation. The proposed models were trained
with the open datasets SBD [1] and CamVid [2]. We made our
trained models and algorithm open to other
researchers.
[1] S. Gould, R. Fulton, and D. Koller, “Decomposing a Scene into
Geometric and Semantically Consistent Regions,” in
Proc. IEEE Int. Conf. Comput. Vis., Kyoto, Japan, 29 Sep.-2 Oct. 2009, pp. 1-8.
[2] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla, “Segmentation and Recognition Using Structure from Motion Point
Clouds,” in Proc. European Conf. Comput. Vis.,
Marseille, France, 12-18 Oct. 2008, pp. 44-57.
(2) Request for Models
To obtain
our pretrained model, please fill the request form below and send an email to
Mr. Dong Seop Kim (seob2@dongguk.edu). Any work
that uses our algorithm and models must acknowledge the authors by including
the following reference.
Dong Seop Kim, Muhammad
Arsalan, Muhammad Owais,
and Kang Ryoung Park, “ESSN:
Enhanced Semantic Segmentation Network by Residual Concatenation of Feature
Maps,”
IEEE Access, Vol. 8, pp. 21363-21379, February 2020.
< Request Form
for Pretrained Models and Algorithm>
Please
complete the following form to request access to our pretrained models. These
models should not be used for commercial use.
Name :
Contact :
(Email)
(Telephone)
Organization
Name :
Organization
Address :
Purpose :
Date :
Name (signature)
47. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections,
Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles
Database (DI&V-DB)
(1) Introduction
We trained the Mask R-CNN model with our thermal
image database for the purpose of elimination of thermal reflections. We made
the models, generated data with Dongguk thermal image database (DTh-DB), and
Dongguk items & vehicles database (DI&V-DB) open to other researchers.
(2) Request for Models, Generated Data, and
databases
To obtain our pretrained model, generated data,
and databases, please fill the request form below and send an email to Prof.
Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided
pretrained network must acknowledge the authors by including the following
reference.
Ganbayar Batchuluun, Hyo Sik Yoon, Dat Tien
Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, “A Study on the Elimination of Thermal Reflections,” IEEE Access, Vol. 7, pp. 174597-174611, December 2019.
< Request Form for Models,
Generated Data, and Databases >
Please complete the following form to request
access to our pretrained model, generated data, and database (All contents must
be completed). This model, data, and database should not be used for commercial
use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Name (signature)
46. Dongguk Ultrasound Thyroid Nodule
Classification (DUS-TNC) algorithm
(1) Introduction
In this study, we enhance the classification performance of an ultrasound
image-based thyroid nodule classification system by cascading classifiers using
FFT-based and CNN-based methods. The pretrained model was successfully trained
using the TDID dataset [1]. We made our trained model and algorithm open to other researchers.
[1] Pedraza, L.; Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero,
E. An open access thyroid ultrasound-image database. In Proceedings of the 10th
International Symposium on Medical Information Processing and Analysis,
Colombia, 28 January, 2015 (in SPIE Proceedings, Vol. 9287, pp. 1-6).
(2) Request for our algorithm
To gain access to our algorithm (code and pretrained models), please
sign and scan the request form and email to Prof. D. T. Nguyen at
nguyentiendat@dongguk.edu. Any work that uses our algorithm must acknowledge
the authors by including the following reference.
Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Hyo Sik Yoon, and
Kang Ryoung Park, “Artificial
Intelligence-based Thyroid Nodule Classification Using Information from Spatial
and Frequency Domains,” Journal of Clinical
Medicine, Vol. 8, Issue 11(1976), pp. 1-24, November 2019.
< Request Form for DUS-TNC algorithm >
Please complete the following form to request access to our algorithm (All
contents must be completed). These models should not be used for commercial
use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
45. Dongguk Modified CycleGAN for Age Estimation
(DMC4AE) and Generated Images
(1) Introduction
We trained our modified CycleGAN models for age
estimation with the heterogeneous MegaAge and MORPH databases [1,2]. We made our
trained models and the images generated by the modified CycleGAN open to other researchers.
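As background, the cycle-consistency term that gives CycleGAN its name can be sketched as follows. This is a generic illustration with toy feature vectors standing in for images; the generator names and the weight `lam` are assumptions, not the paper's exact training configuration.

```python
def l1_distance(a, b):
    """Mean absolute difference between two flattened 'images'."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """CycleGAN's cycle-consistency term: translating A -> B -> A (and
    B -> A -> B) should reproduce the original input.
    G maps domain A to B (e.g., one age database's style to the other's),
    F maps domain B back to A."""
    return lam * (l1_distance(F(G(x)), x) + l1_distance(G(F(y)), y))
```

With perfectly inverse generators the term is zero; during training it penalizes translations that lose identity-specific content, which is what makes the translated images usable for age estimation.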
(2) Request for our models and images
To gain access to our models and images,
download the following request form. Please sign and scan the request form and
email it to Mr. Yu Hwan Kim (taekkuon@dongguk.edu).
Any work that uses these models and images must
acknowledge the authors by including the following reference.
Yu Hwan Kim, Min Beom Lee, Se Hyun Nam, and Kang Ryoung Park, “Enhancing the
Accuracies of Age Estimation with Heterogeneous Databases Using Modified
CycleGAN,” IEEE Access, Vol. 7, pp. 163461-163477, November 2019.
< Request Form for DMC4AE and Generated Images >
Please complete the following form to request
access to these models with images (All contents must be completed). These
models should not be used for commercial use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
44. Dongguk Vess-Net Models with Algorithm
(1) Introduction
We trained our Vess-Net model, based on a
dual-stream feature empowerment scheme, for retinal vessel segmentation to aid
in diagnosing diseases such as diabetic and hypertensive retinopathy. In
our experiments, we validated the performance of our method on three
publicly available fundus image databases: DRIVE [1], CHASE-DB1 [2], and
STARE [3]. We made our trained models open to other researchers.
(2) Request for our Vess-Net models
To gain access to the Vess-Net trained models,
download the following request form. Please sign and scan the request form and
email it to Mr. Muhammad Arsalan (arsal@dongguk.edu).
Any work that uses these Vess-Net models must
acknowledge the authors by including the following reference.
Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Se Woon Cho, and Kang Ryoung
Park, “Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using
Artificial Intelligence-based Semantic Segmentation,” Journal of Clinical
Medicine, Vol. 8, Issue 9(1446), pp. 1-27, September 2019.
< Request Form for Vess-Net Models >
Please complete the following form to request
access to these models (All contents must be completed). These models should
not be used for commercial use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
43. Dongguk CNN for Detecting Road
Markings Based on Adaptive ROI with Algorithms
(1) Introduction
We created adaptive ROI
images before using them to train our convolutional neural network (CNN). In the first stage, a
vanishing point is detected in order to create the ROI image. The ROI image,
which covers the majority of the road region, is then used as the input to train
the CNN-based detector and classifier in the second stage. We made the models, generated data, and labeled
information of the database open to other researchers. Our CNN model was trained
with the Malaga urban dataset [1], the Daimler dataset [2], and the Cambridge dataset [3].
1. The Málaga Stereo
and Laser Urban Data Set – MRPT. Available online:
https://www.mrpt.org/MalagaUrbanDataset (accessed on 1 October 2018).
2. Daimler Urban
Segmentation Dataset.
Available online:
http://www.6d-vision.com/scene-labeling (accessed on 2 January 2019).
3. Cambridge-driving
Labeled Video Database (CamVid). Available online:
http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ (accessed on 1
October 2018).
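The first-stage cropping can be sketched as a simple geometric rule: keep everything from slightly above the detected vanishing-point row down to the bottom of the frame. The margin value and function name below are illustrative assumptions, not the paper's exact parameters.

```python
def adaptive_roi(image_height, image_width, vp_row, margin=20):
    """Crop the frame to the road region: the ROI spans from slightly
    above the detected vanishing-point row to the bottom edge.
    Returns (top, left, bottom, right) in pixel coordinates."""
    top = max(0, vp_row - margin)  # clamp so the ROI stays inside the frame
    return (top, 0, image_height, image_width)
```

For a 480x640 frame with the vanishing point at row 200 and the default margin, the crop starts at row 180; a vanishing point near the top of the frame simply yields the full image.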
(2) Request for models, generated data, and
labeled information
To obtain our pretrained model, generated data,
and labeled information, please fill in the request form below and send an email
to Dr. Toan Minh Hoang at hoangminhtoan@dongguk.edu. Any work that uses the
provided pretrained network must acknowledge the authors by including the
following reference.
Toan Minh Hoang, Se Hyun Nam, and Kang
Ryoung Park, “Enhanced Detection and Recognition of Road
Markings Based on Adaptive Region of Interest and Deep Learning,” IEEE Access, Vol. 7, pp. 109817-109832, August 2019.
< Request Form for Models,
Generated Data, and Labeled Information >
Please complete the following form to request
access to our pretrained model, generated data, and labeled information of the
database (All contents must be completed). This model, data, and database
should not be used for commercial use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
42. Dongguk CNN stacked LSTM and CycleGAN for Action Recognition,
Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)
(1) Introduction
We trained our convolutional neural network
(CNN), CNN stacked with long short-term memory (CNN-LSTM), and cycle-consistent
generative adversarial network (CycleGAN) models with our action database. We
made the models, generated data, and database open to other researchers.
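The temporal data flow of the CNN-LSTM can be sketched with a minimal recurrent update standing in for the LSTM cell. The real model uses learned LSTM gates over CNN features; the fixed weights and function names here are assumptions for illustration only.

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=0.5):
    """One tanh recurrent update, a stand-in for the LSTM cell."""
    return [math.tanh(w_h * hi + w_x * xi) for hi, xi in zip(h, x)]

def clip_feature(frame_features):
    """Fold one CNN feature vector per frame into a single clip-level
    descriptor; the final hidden state summarizes the whole sequence."""
    h = [0.0] * len(frame_features[0])  # initial hidden state
    for x in frame_features:            # one feature vector per frame
        h = rnn_step(h, x)
    return h
```

The point of the recurrence is that the clip descriptor depends on frame order, which is what lets the model distinguish actions that share the same per-frame appearance.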
(2) Request for Models, Generated Data, and
DA&A-DB2
To obtain our pretrained model, generated data,
and database, please fill in the request form below and send an email to Dr.
Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided
pretrained network must acknowledge the authors by including the following
reference.
Ganbayar Batchuluun, Dat Tien Nguyen, Tuyen Danh
Pham, Chanhum Park, and Kang Ryoung Park, “Action Recognition from
Thermal Videos,” IEEE Access, Vol. 7, pp. 103893-103917, August 2019.
< Request Form for Models,
Generated Data, and DA&A-DB2 >
Please complete the following form to request
access to our pretrained model, generated data, and database (All contents must
be completed). This model, data, and database should not be used for commercial
use.
Name :
Contact : (Email)
(Telephone)
Organization Name :
Organization Address :
Purpose :
Date :
Name (signature)
41. Label Information of Sun Yat-sen University Multiple Modality Re-ID
(SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN)
(1) Introduction
We collected gender information for the Sun Yat-sen University Multiple
Modality Re-ID (SYSU-MM01) database and trained a gender recognition system
based on ResNet-101 using two databases: SYSU-MM01 and the Dongguk
Body-based Gender Database (DBGender-DB2). We made the label information of the
SYSU-MM01 database and the Dongguk Gender Recognition CNN (DGR-CNN) open to
other researchers.
(2) Request for Label Information and DGR-CNN
To gain access to the label information and
DGR-CNN, download the following request form for the label information of
SYSU-MM01 and DGR-CNN. Please sign and scan the request form and email it to
Ms. Na Rae Baek (naris27@dongguk.edu).
Any work that uses the label information of SYSU-MM01
database or this CNN model must acknowledge the authors by including the
following reference.
Na Rae Baek, Se Woon Cho, Ja Hyung Koo, Noi Quang Truong,
and Kang Ryoung Park, “Multimodal Camera-based Gender Recognition Using Human-body Image
with Two-step Reconstruction Network,” IEEE Access,
Vol. 7, pp. 104025-104044, August 2019.
< Request Form for label information of SYSU-MM01 and
DGR-CNN >
Please complete the following form to request
access to the label information of SYSU-MM01 and DGR-CNN. These files should
not be used for commercial use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)
40. Dongguk cGAN-based Iris Image Generation Model
and Generated Images (DGIM&GI)
(1) Introduction
We trained generation models based on a conditional GAN (the pix2pix model)
using the NICE.II training dataset (selected from UBIRIS.v2) and the MICHE
database for the visible-light environment, and the CASIA-Iris-Distance
database for the NIR environment. Additionally, we generated iris images using
the trained generation models with each database. We made DGIM (the trained
generation models) and GI (the images generated from the trained models) open
to other researchers.
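The way such generated images are typically used for augmentation can be sketched as follows. The function name and the `ratio` parameter are illustrative assumptions, not the paper's exact augmentation protocol.

```python
def augmented_training_set(real_images, generated_images, ratio=1.0):
    """Append cGAN-generated iris images to the real training set.
    `ratio` controls how many generated samples are added relative to
    the number of real ones (1.0 roughly doubles the set)."""
    n_generated = min(len(generated_images), int(len(real_images) * ratio))
    return list(real_images) + list(generated_images[:n_generated])
```

The recognition network is then trained on the combined set, so the generated samples enlarge and diversify the training distribution without touching the test data.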
(2) Request for DGIM&GI
To gain access to the DGIM&GI, download the
following request form for DGIM&GI. Please sign and scan the request form
and email it to Mr. Min Beom Lee (smin6180@naver.com).
Any work that uses this DGIM&GI must
acknowledge the authors by including the following reference.
Min Beom Lee, Yu Hwan Kim, and Kang Ryoung Park, “Conditional Generative
Adversarial Network-Based Data Augmentation for Enhancement of Iris Recognition
Accuracy,” IEEE Access, Vol. 7, pp. 122134-122152, September 2019.
< Request Form for DGIM&GI >
Please complete the following form to request
access to the DGIM&GI. These files should not be used for commercial use.
Name:
Contact: (Email)
(Telephone)
Organization Name:
Organization Address:
Purpose:
Date:
Name (signature)