New iris recognition method for noisy iris images

  • Authors:
  • Kwang Yong Shin; Gi Pyo Nam; Dae Sik Jeong; Dal Ho Cho; Byung Jun Kang; Kang Ryoung Park; Jaihie Kim

  • Affiliations:
  • Division of Electronics and Electrical Engineering, Biometrics Engineering Research Center (BERC), Dongguk University, 26 Pil-dong 3-ga, Jung-gu, Seoul 100-715, Republic of Korea
  • Dept. of Computer Science, Sangmyung University, 7 Hongji-dong, Jongno-gu, Seoul 110-743, Republic of Korea
  • Technical Research Institute, Hyundai Mobis, 80-9, Mabuk-dong, Giheong-gu, Yongin-si, Gyeonggi-do 446-716, Republic of Korea
  • School of Electrical and Electronic Engineering, Biometrics Engineering Research Center (BERC), Yonsei University, 134 Shinchon-dong, Seodaemun-gu, Seoul 120-749, Republic of Korea

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2012

Abstract

When an iris image is captured under unconstrained conditions and without user cooperation, its quality can be highly degraded by poor focus, off-angle viewing, motion blur, specular reflection (SR), and other artifacts. Such noisy iris images increase intra-individual variation and thus markedly degrade recognition accuracy. To overcome these problems, we propose a new iris recognition algorithm for noisy iris images. This research is novel in three ways compared with previous work. First, we propose a 1st-step classification that discriminates between the left and right eye on the basis of the eyelash distribution and SR points. Because the iris pattern of the left eye differs from that of the right eye, this 1st-step classification enhances the accuracy of iris recognition. Second, the separability between intra- and inter-class comparisons is increased by a 2nd-step classification based on the color information of the iris region. Color dissimilarity is measured with the Euclidean distance (ED), chi-square distance (CSD), and Hamming distance (HD), computed in color spaces such as YIQ, YUV, YCbCr, HSI, and CMY. Third, the textural information of the iris region is used for the 3rd-step classification: a 1-D Gabor filter is applied to the red, green, and gray image channels to obtain three sets of iris codes from the iris texture and, consequently, three HD scores, which are combined by a weighted SUM rule to produce the final matching score. Experimental results on the NICE.II training dataset (selected from the UBIRIS.v2 database) showed that the decidability value (d') was 1.6398 (the fourth-highest rank).
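
At its core, the matching described in the 2nd and 3rd steps reduces to computing dissimilarity scores between feature vectors and then fusing them. The sketch below is a minimal illustration, not the authors' implementation: it assumes NumPy arrays for the color features and binary iris codes, and the function names, array shapes, and fusion weights are hypothetical placeholders. It shows how the ED, CSD, and HD measures could be computed and how three per-channel HD scores might be combined with a weighted SUM rule.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): dissimilarity measures
# for the 2nd-step color matching and weighted-SUM fusion for the 3rd-step
# texture matching.

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """ED between two color feature vectors (e.g., channel histograms)."""
    return float(np.linalg.norm(a - b))

def chi_square_distance(a: np.ndarray, b: np.ndarray, eps: float = 1e-12) -> float:
    """CSD between two histograms; eps guards against empty bins."""
    return float(0.5 * np.sum((a - b) ** 2 / (a + b + eps)))

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray, mask: np.ndarray) -> float:
    """Normalized HD between two binary iris codes, ignoring masked-out bits
    (e.g., eyelid, eyelash, or SR occlusions)."""
    valid = mask.astype(bool)
    if not valid.any():
        return 1.0  # no usable bits: treat as maximally dissimilar
    diff = np.count_nonzero(code_a[valid] != code_b[valid])
    return float(diff / np.count_nonzero(valid))

def weighted_sum_fusion(hd_red: float, hd_green: float, hd_gray: float,
                        weights=(0.4, 0.3, 0.3)) -> float:
    """Fuse the three per-channel HD scores into one matching score.
    The weights here are placeholders; the paper tunes its own."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w / w.sum(), [hd_red, hd_green, hd_gray]))

# Example: fuse hypothetical HD scores from the red, green, and gray channels.
score = weighted_sum_fusion(0.28, 0.31, 0.30)
```

In such a scheme, the fused score would be thresholded to accept or reject a match; the channel weights and threshold would be tuned on training data such as the NICE.II set.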