Fingerprint Identification System using Neural Networks

Biometric identification systems are widely used for uniquely identifying individuals, in both verification and identification tasks. There are many types of biometric systems, such as fingerprint recognition, voice recognition, face recognition, palm recognition, and iris recognition. Among these, fingerprint recognition is one of the best-known and most widely used biometric technologies. A fingerprint is the pattern of ridges and valleys on the surface of a fingertip. The endpoints and crossing points of ridges are called minutiae. It is a widely accepted assumption that the minutiae pattern of each finger is unique and does not change during one’s life. Ridge endings are the points where a ridge curve terminates, and bifurcations are the points where a ridge splits from a single path into two paths at a Y-junction.

Fingerprint identification technology extracts features from images of the distinct ridges on the fingertips. An image of the fingerprint is captured by a scanner, enhanced, and then converted into a template. Scanner technologies can be optical, silicon, or ultrasound; optical scanners are the most commonly used and the simplest. There are two kinds of approaches to fingerprint recognition. The first is minutia-based, which represents a fingerprint by its local features, such as terminations and bifurcations. The second is image-based, which performs matching using the global features of the whole fingerprint image.

In this post, we discuss a fingerprint identification system using neural networks. A neural network, also known as a parallel distributed processing network, is a computing solution loosely modeled on the cortical structures of the brain. It consists of interconnected processing elements, called nodes or neurons, that work together to produce an output function. The output of a neural network relies on the cooperation of the individual neurons within the network. Neural networks process information in parallel, rather than in series as in earlier binary computers or von Neumann machines.

Fundamental Steps of a Fingerprint Recognition System

Proposed Algorithm

It is well known that developing a reliable fingerprint recognition system requires image enhancement and feature extraction. The proposed algorithm is divided into three main stages: preprocessing, postprocessing, and a final minutiae-matching stage. The preprocessing stage enhances the image using histogram equalization, binarization, and morphological operations; after applying this enhancement algorithm, a binarized, thinned image is obtained. In the second stage, minutiae are extracted from the enhanced fingerprint using an optimization technique. The final stage is the recognition of the fingerprint, which is done with the help of a neural network.

Algorithm Used to Implement the Fingerprint Recognition System

The algorithm does the following:

I. Calculate the gradient values along the x-direction and y-direction for each pixel of the block. Two Sobel filters are used for this task.
II. For each block, use the following formula to obtain the least-squares approximation of the block direction.

tan(2β) = 2 ΣΣ(gx·gy) / ΣΣ(gx² − gy²),  summed over all the pixels in each block.
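Steps I and II can be sketched in Python with NumPy. The block size and the explicit convolution loop below are illustrative choices, not specified in the post:

```python
import numpy as np

def sobel_gradients(block):
    """Convolve a 2-D block with the two 3x3 Sobel kernels to get
    the gradients along the x- and y-directions."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = block.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    padded = np.pad(block.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return gx, gy

def block_direction(block):
    """Least-squares block orientation:
    tan(2*beta) = 2*sum(gx*gy) / sum(gx^2 - gy^2)."""
    gx, gy = sobel_gradients(block)
    num = 2.0 * np.sum(gx * gy)
    den = np.sum(gx ** 2 - gy ** 2)
    return 0.5 * np.arctan2(num, den)   # beta, in radians
```

Using `arctan2` rather than a plain division keeps the angle in the correct quadrant and avoids division by zero when the block is isotropic.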

Image Acquisition:

A number of methods are used to acquire fingerprints. Among them, the inked impression method remains the most popular. Inkless fingerprint scanners also exist, eliminating the intermediate digitization process. Fingerprint quality is very important, since it directly affects the minutiae extraction algorithm. The scanned fingerprints used in this research are 188×240 pixels; the images are taken at this size to ease the computational burden.

Fingerprint Image Processing:

Processing of fingerprint image is necessary to:

(i) improve the clarity of ridge structures of fingerprint images
(ii) maintain their integrity,
(iii) avoid introduction of spurious structures or artifacts, and
(iv) retain the connectivity of the ridges while maintaining separation between them. The fingerprint image processing operations are image enhancement, image normalization, and image binarization.

Fingerprint Image Enhancement:

The goal is to make the image clearer for further operations. Fingerprint images acquired from sensors do not have perfect quality, so enhancement is done to increase the contrast between ridges and furrows and to reconnect falsely broken ridge points. For enhancement, an FFT-based method is used: the image is divided into small processing blocks of 32×32 pixels and the Fourier transform is performed on each block.
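A minimal sketch of this block-wise FFT enhancement is below. The exponent k is a common choice in the FFT-enhancement literature and is an assumption here, as the post does not give a value:

```python
import numpy as np

def fft_enhance(image, block=32, k=0.45):
    """Block-wise FFT enhancement: multiply each block's spectrum by its
    own magnitude raised to a small power, which amplifies the dominant
    ridge frequency in the block relative to noise."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            tile = image[i:i + block, j:j + block].astype(float)
            F = np.fft.fft2(tile)
            enhanced = np.fft.ifft2(F * np.abs(F) ** k)
            out[i:i + block, j:j + block] = np.real(enhanced)
    return out
```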

Fingerprint Ridge Thinning:

          Ridge thinning eliminates the redundant pixels of ridges until the ridges are just one pixel wide. For this, an iterative parallel thinning algorithm is used: in each scan of the image, the algorithm marks redundant pixels in a small image window, then removes all the marked pixels together, repeating over several scans. The thinned ridge map is then filtered by further morphological operations to remove H-breaks, isolated points, and spikes. In this step, any single points, whether single-point ridges or single-point breaks in a ridge, are eliminated as processing noise. The ridge structures in poor-quality fingerprint images are not always well defined, so the orientation information cannot always be correctly detected, which greatly restricts the applicability of these techniques. A Gabor-filter-based technique can obtain a reliable orientation estimate even for corrupted images, but it is unsuitable for an online fingerprint recognition system such as AFIS because the algorithm is computationally expensive.
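The iterative parallel thinning step can be illustrated with the classic Zhang-Suen algorithm; the post does not name a specific algorithm, so this is one plausible choice:

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen iterative parallel thinning. Each sub-pass marks
    redundant boundary pixels of a binary ridge map (ridges = 1) and
    deletes them all at once, until the ridges are one pixel wide."""
    img = img.copy().astype(np.uint8)
    rows, cols = img.shape
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r, c] != 1:
                        continue
                    # 8 neighbors in circular order: N, NE, E, SE, S, SW, W, NW
                    p = [int(img[r-1, c]), int(img[r-1, c+1]), int(img[r, c+1]),
                         int(img[r+1, c+1]), int(img[r+1, c]), int(img[r+1, c-1]),
                         int(img[r, c-1]), int(img[r-1, c-1])]
                    B = sum(p)  # number of black (ridge) neighbors
                    A = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((r, c))
            for r, c in to_delete:  # parallel deletion after the scan
                img[r, c] = 0
            if to_delete:
                changed = True
    return img
```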

Fingerprint Image Binarization:

Because the quality of fingerprints can vary significantly, mainly due to skin condition and the pressure of the fingertip on the sensing device, some preprocessing is needed to achieve good minutiae extraction. This problem can be handled by applying an enhancing algorithm that separates and highlights the ridges from the background; this type of enhancement is also called linearization. The rationale for the binarization step is that the true information that can be extracted from a print is simply binary, 0 and 1: ridges vs. valleys. It is nevertheless an important step in ridge extraction, because prints are captured as grayscale images, so the ridges, even knowing that they are in fact ridges, still vary in intensity. Binarization therefore transforms the image from a 256-level image to a 2-level image that conveys the same information. Typically, an object pixel is given a value of “1” while a background pixel is given a value of “0.” Finally, a binary image is created by coloring each pixel black or white, depending on its label (black for 0, white for 1).
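As an illustration of the 256-level to 2-level transformation, here is a global Otsu threshold; the post does not fix a particular binarization method, so this is one possible sketch:

```python
import numpy as np

def otsu_binarize(gray):
    """Binarize a grayscale image (values 0..255) with Otsu's method:
    pick the threshold that maximizes the between-class variance, then
    label the dark ridge pixels 1 and the light background pixels 0."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w0 = 0.0   # cumulative weight of the dark class
    s0 = 0.0   # cumulative intensity sum of the dark class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        s0 += t * hist[t]
        m0 = s0 / w0
        m1 = (sum_all - s0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    # ridges are the darker pixels in a typical grayscale print
    return (gray <= best_t).astype(np.uint8)
```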

Fingerprint Identification System

Minutia Marking

After ridge thinning, marking the minutia points is a relatively easy job. The concept of the Crossing Number (CN) is widely used for extracting minutiae from the image. At this stage, the average inter-ridge width D is also estimated; this is the average distance between two neighboring ridges. Scan a row of the thinned ridge image and sum all the pixels in the row whose value is one, then divide the row length by this sum to get an inter-ridge width. For more accuracy, this row scan is performed on several other rows, column scans are also conducted, and finally all the inter-ridge widths are averaged to get D.
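A sketch of Crossing Number minutia marking on a one-pixel-wide ridge map: CN is half the sum of absolute differences between successive neighbors around a ridge pixel, with CN = 1 marking a ridge ending and CN = 3 a bifurcation.

```python
import numpy as np

def crossing_number_minutiae(thinned):
    """Mark minutiae on a thinned binary ridge map (ridges = 1) using
    the Crossing Number. Returns (ridge endings, bifurcations) as
    lists of (row, col) coordinates."""
    endings, bifurcations = [], []
    rows, cols = thinned.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if thinned[r, c] != 1:
                continue
            # 8 neighbors in circular order around (r, c)
            p = [int(thinned[r-1, c]), int(thinned[r-1, c+1]),
                 int(thinned[r, c+1]), int(thinned[r+1, c+1]),
                 int(thinned[r+1, c]), int(thinned[r+1, c-1]),
                 int(thinned[r, c-1]), int(thinned[r-1, c-1])]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```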

False Minutia Removal

Preprocessing does not totally eliminate false ridge breaks caused by an insufficient amount of ink, or ridge cross-connections caused by over-inking. The earlier stages themselves also occasionally introduce artifacts that lead to spurious minutiae. Such false minutiae would significantly affect matching accuracy if they were simply treated as genuine, so they must be removed to keep the fingerprint identification system effective.
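One simple false-minutia filter, given as an illustrative heuristic rather than the post's exact procedure, drops any pair of minutiae closer together than the average inter-ridge width D, since such pairs typically come from ridge breaks or spikes rather than true ridge structure:

```python
import numpy as np

def remove_false_minutiae(minutiae, d):
    """Drop every pair of minutiae whose distance is below the average
    inter-ridge width d; both members of such a pair are treated as
    spurious. `minutiae` is a list of (row, col) points."""
    pts = np.array(minutiae, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if keep[i] and keep[j] and np.linalg.norm(pts[i] - pts[j]) < d:
                keep[i] = keep[j] = False
    return [m for m, k in zip(minutiae, keep) if k]
```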

Minutia Match

After we obtain two sets of transformed minutia points, we use the neural network to count the matched minutia pairs, assuming that two minutiae with nearly the same position and direction are identical.
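The pair-counting idea can be sketched geometrically as follows; the tolerances are illustrative assumptions, and the post performs this step with a neural network rather than fixed thresholds:

```python
import numpy as np

def match_minutiae(set_a, set_b, pos_tol=10.0, ang_tol=np.pi / 12):
    """Count matched minutia pairs: two minutiae are considered
    identical when their positions and directions agree within the
    given tolerances. Each minutia is (x, y, theta)."""
    used = set()
    matches = 0
    for xa, ya, ta in set_a:
        for idx, (xb, yb, tb) in enumerate(set_b):
            if idx in used:
                continue
            dist = np.hypot(xa - xb, ya - yb)
            # smallest angular difference, wrapped into [-pi, pi]
            dtheta = abs((ta - tb + np.pi) % (2 * np.pi) - np.pi)
            if dist <= pos_tol and dtheta <= ang_tol:
                used.add(idx)   # each minutia in set_b matches at most once
                matches += 1
                break
    return matches
```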


Backpropagation Network (BPN)

Backpropagation is one of many learning algorithms that can be applied to neural network training, and it is the one used in this work. It belongs to the category of so-called learning with a teacher. For every input vector x presented to the neural network, there is a predefined desired response of the network in a vector t (the teacher). The desired output of the neural network is then compared with the actual output by computing an error e from the vector t and the network output vector y. The weights of the neural network are corrected by propagating the error e backward from the output layer towards the input layer, hence the name of the algorithm. The weight change in each layer is computed with the steepest-descent algorithm. The backpropagation algorithm is carried out in the following steps:

  1. Select a training pair from the training set; apply the input vector to the network input.
  2. Calculate the output of the network.
  3. Calculate the error between the network output and the desired output (the target vector from the training pair).
  4. Adjust the weights of the network in a way that minimizes the error.
  5. Repeat the steps 1 through 4 for each vector in the training set until the error for the entire set is acceptably low.

During the training phase, the training data is fed into the input layer, propagated to the hidden layer, and then to the output layer; this is the forward pass of the backpropagation algorithm. The output values of the output layer are compared with the target output values, which are the values we attempt to teach our network. The error between the actual and target outputs is calculated and propagated back towards the hidden layer; this is the backward pass. The error is used to update the connection strengths between nodes, i.e., the weight matrices between the input-hidden and hidden-output layers.
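The five steps above can be sketched as a minimal NumPy network. The architecture, learning rate, and the XOR toy task are illustrative assumptions, not the fingerprint-matching network itself:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# illustrative architecture: 2 inputs -> 4 hidden -> 1 output
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # toy targets (XOR)

lr = 1.0
for epoch in range(5000):
    for x, t in zip(X, T):          # step 1: pick a training pair
        x = x.reshape(1, 2)
        h = sigmoid(x @ W1)         # step 2: forward pass through the
        y = sigmoid(h @ W2)         #         hidden and output layers
        e = y - t                   # step 3: error vs. the target vector
        # step 4: propagate e backwards; steepest-descent weight updates
        delta2 = e * y * (1.0 - y)
        delta1 = (delta2 @ W2.T) * h * (1.0 - h)
        W2 -= lr * (h.T @ delta2)
        W1 -= lr * (x.T @ delta1)
# step 5 is the epoch loop: repeat until the error over the set is low
pred = sigmoid(sigmoid(X @ W1) @ W2)
```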

Cellular Neural Network (CNN):

A cellular neural network (CNN) is an artificial neural network consisting of separate neurons, or cells. It has several properties that can be advantageous compared with other neural networks. The dynamic range of a CNN, for example, is bounded. A CNN can easily be extended without re-adjusting the entire network, because a cell is connected not to every other cell but only to cells within a certain neighborhood. Despite its cellular structure, it still displays the complex dynamic behavior seen in other neural networks. Due to this complex behavior it can be used in image processing (e.g. noise removal, connected component detection (CCD), thinning, etc.), to simulate certain equations, or as an associative memory. The state and output vary in time, whereas the input is kept constant. Templates describe the interaction of a cell with its neighborhood and regulate the evolution of the CNN state and output vectors. Template connections can be realized by voltage-driven current generators. The adopted output characteristic is a sigmoid-type piecewise-linear function.

 Another advantage is that although the factor with which the output of a cell affects the behavior of other cells (a template parameter) may differ for spatially different neighboring cells, the template thus formed is translation-invariant (“cloning templates”). As stated above, cells interact only with other cells within a certain neighborhood; for example, in a rectangular two-dimensional CNN of 16 cells, every cell interacts only with its directly neighboring cells. CNNs are exploited for image processing by associating each pixel of the image with the input or initial state of a single cell. Both the state and the output of the CNN matrix then evolve to reach an equilibrium state. The evolution of the CNN is governed by the choice of template, and many templates have already been defined for basic image processing operations. Simple operations can be performed using only the basic templates A and B and the bias I, whereas more complicated processing requires nonlinear templates and the generalized nonlinear generator.
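A small sketch of the CNN state equation with the piecewise-linear output function described above; the Euler integration step and the simple threshold template are illustrative assumptions:

```python
import numpy as np

def pwl_output(x):
    """Sigmoid-type piecewise-linear output: y = 0.5*(|x + 1| - |x - 1|),
    which clamps the output into the bounded range [-1, 1]."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_run(u, A, B, bias, steps=400, dt=0.05):
    """Euler-integrate the CNN state equation
        dx/dt = -x + sum(A * y_neighborhood) + sum(B * u_neighborhood) + I
    over a 3x3 neighborhood. The input u is held constant while the
    state x and output y evolve toward an equilibrium."""
    x = u.astype(float).copy()       # initial state taken from the input
    rows, cols = u.shape
    up = np.pad(u.astype(float), 1)
    for _ in range(steps):
        y = pwl_output(x)
        yp = np.pad(y, 1)
        dx = -x + bias
        for di in range(3):
            for dj in range(3):
                dx = dx + A[di, dj] * yp[di:di + rows, dj:dj + cols]
                dx = dx + B[di, dj] * up[di:di + rows, dj:dj + cols]
        x = x + dt * dx
    return pwl_output(x)

# threshold template: self-feedback > 1 drives every cell to +1 or -1
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = np.zeros((3, 3))
out = cnn_run(np.array([[0.3, -0.3], [0.8, -0.8]]), A, B, bias=0.0)
```

With this template each cell settles at a saturated output of +1 or −1 according to the sign of its initial state, which illustrates both the bounded dynamic range and the template-driven evolution described above.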
