Overview of Facial Recognition
Advantages and Disadvantages of Facial Recognition
Advantages:
- Unlike other biometric technologies, the user does not have to make physical contact with the scanning equipment, so the process feels less strained and the user is not made uncomfortable or offended.
- In some cases it is the only available option. This has significant implications in legal settings, where a picture may be the only evidence available to identify a subject. It is almost impossible to read and test the fingerprints or the retina of a criminal from a picture, but with face recognition technology it is possible to find the subject from a picture alone.
- No expensive equipment is needed to enter the biometric information; only a regular video camera is required, which is very simple compared to the equipment for other biometric technologies. In particular, since PC cameras are now very popular, various kinds of application software for personal PC security are being released.

Disadvantages:
- It is difficult to develop a recognition system with excellent recognition rates in all environments, since a face image can change widely with external factors such as lighting, noise, facial expression, hairstyle, and pose, which can interfere with correct recognition.
Facial Recognition System Composition
* Figure 2.1 Five Stages of the General Face Recognition Process
- Image capturing: capture an image from a CCD camera and store it.
- Preprocessing: eliminate noise from the image and segment it.
- Face detection: detect the face area within the image.
- Normalization: extract feature points and standardize brightness and geometry.
- Recognition: compare the detected face with the images in the database.
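The stages above can be sketched end to end. The following is a minimal illustration, not a production pipeline: capture is assumed to have already produced a grayscale NumPy array, detection is omitted, normalization is reduced to zero-mean/unit-variance scaling, and recognition is a nearest-neighbour comparison. All function names here are my own.

```python
import numpy as np

def preprocess(image):
    """Stage 2 sketch: suppress noise with a simple 3x3 mean filter."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy: 1 + dy + image.shape[0],
                          1 + dx: 1 + dx + image.shape[1]]
    return out / 9.0

def normalize(face):
    """Stage 4 sketch: standardize brightness to zero mean, unit variance."""
    face = face.astype(float)
    return (face - face.mean()) / (face.std() + 1e-8)

def recognize(face, database):
    """Stage 5 sketch: nearest neighbour by Euclidean distance."""
    distances = {name: np.linalg.norm(normalize(face) - normalize(ref))
                 for name, ref in database.items()}
    return min(distances, key=distances.get)
```

A real system would insert a detection stage between preprocessing and normalization, as described in the next section.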
Facial Recognition System Algorithm
For a fully automated face recognition system, face detection from the input image must be done first. This process is called "face area detection", and the overall performance of the system largely depends on its detection performance. Detection algorithms in general find the area presumed to be a human face in a planar image. The major methods are shown in Table 2.1 below.
- Neural network (static image): detects a learned face shape in a black-and-white static image. It can detect more than two faces, but it is slow and the faces are difficult to learn.
- Neural network (image sequence): applies the neural network to successive black-and-white image frames in real time using a frequency-spatial algorithm. It can track 1-2 faces in real time, but is very difficult to train.
- Fuzzy + neural network: enters a fuzzy membership function value, instead of the pixel brightness value, into the neural network. It performs better than the neural network alone, but its processing speed is slower.
- Color: finds the largest skin-color area using only color information and probability. It can find one face region, but errors are possible when the background contains skin-like colors.
- Color + motion: combines color and movement information from consecutive images to obtain a threshold that is not sensitive to red colors or to the diffuse reflection of light.
- Fuzzy color: models the face color with a fuzzy membership function. It is strongly influenced by the membership function and the knowledge base.
- PCA (Principal Component Analysis): finds the area most similar to a face by using proper faces (eigenfaces) as basis vectors. It is used more often to extract feature points than to detect a face.
- Template matching: detects a face by calculating the correlation between a geometric face template and the image. It has difficulty responding to various changes in face shape or size.
* Table 2.1 Types of Detection Algorithms
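The colour-based row of Table 2.1 can be sketched in a few lines: classify pixels as skin with a simple RGB rule of thumb, then take the bounding box of the skin mask as the candidate face area. The thresholds below are illustrative assumptions, not values from the text, and the function names are my own.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of probable skin pixels (input: H x W x 3 uint8).
    The thresholds are a common heuristic, assumed here for illustration."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) & (np.abs(r - g) > 15))

def face_candidate(rgb):
    """Bounding box (top, left, bottom, right) of the skin area, or None."""
    ys, xs = np.nonzero(skin_mask(rgb))
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

As the table notes, this fails when the background itself contains skin-like colors, which is why color is usually combined with motion or a template.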
A static image is extensively used as the input for face extraction. Extracting a face from a single static image differs somewhat from extracting it from a sequence of static images photographed at regular intervals. With a single static image, it is easy to separate the face from the background using color information or a template when variables such as lighting and background are controlled, but if the image has a complex background, as in a public place, extraction is very difficult. In a sequence of images, on the other hand, it is relatively easy to extract a face using precise movement information, because the interval between frames is very short. The purpose of extraction is basically to find a face by utilizing all kinds of image-processing methods, and no algorithm is perfectly superior to the others. Therefore, two or three of the motion, color, template, and artificial-intelligence approaches are used together to maximize performance.
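The movement cue mentioned above can be as simple as frame differencing: because consecutive frames are close in time, the pixels that changed already outline the moving face region. The threshold value is an illustrative assumption.

```python
import numpy as np

def motion_mask(prev_frame, next_frame, threshold=20):
    """Boolean mask of pixels that changed between two grayscale frames.
    threshold=20 is an assumed value; real systems tune it per camera."""
    diff = np.abs(next_frame.astype(int) - prev_frame.astype(int))
    return diff > threshold
```

In practice this mask would be intersected with a color or template cue, following the combined-algorithm strategy described above.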
Standardization is the process of homogenizing the position, size, and brightness of a face image captured by the extractor according to the database standards, in order to enhance the overall recognition rate of the system. There are two main types of face standardization: geometric standardization and brightness normalization. A technique that extracts a face area using only color or movement information may include some background beyond the face, or cut off part of the face area, depending on the sensitivity of the color model or of the other cues. Geometric standardization therefore extracts feature points such as the eyes from the face and uses the eye size and positions to decide the final face area and standardize it geometrically. Brightness normalization, on the other hand, keeps the brightness of the input image at the same level regardless of changes in the environment. Like geometric standardization, it can increase recognition rates by standardizing the brightness of each pixel of the image. Almost all algorithms apply brightness normalization, so they become insensitive to changes in lighting. Figure 2.2 below shows original face images and the same images after brightness normalization.
* Figure 2.2 Original face images and the images after brightness normalization
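One common brightness-normalization technique is histogram equalization, which remaps gray levels so the output histogram is roughly uniform, making the image less sensitive to lighting. This is a minimal sketch under that assumption; other systems instead normalize each image to a fixed mean and variance.

```python
import numpy as np

def equalize(gray):
    """Histogram-equalize an 8-bit grayscale image (H x W uint8)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level so the output histogram is roughly uniform.
    lut = np.clip(np.round((cdf - cdf_min) /
                           (cdf[-1] - cdf_min + 1e-8) * 255), 0, 255)
    return lut.astype(np.uint8)[gray]
```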
Facial Recognition Algorithm
Face recognition is the method of comparing an input face image, standardized according to the database conventions, with all the faces in the database for verification. Table 2.2 shows the major algorithms for face recognition.
- Geometric feature matching: verifies identity by comparing the geometric feature points of an input face image. Each face image must be resized and put through a standardization process, and the positions of the feature points are critical. However, a face is three-dimensional, can be partly hidden, and shows various expressions, so this approach has inevitable limitations.
- PCA (eigenfaces): treats the bright and dark patterns of a planar image as a single vector, viewing a face image as a series of such vectors. If the face position or brightness changes, however, it can recognize one face as two different people.
- FLD, EFM, SVM, etc.: algorithms designed to boost performance by working on the drawbacks of PCA.
- Neural network: trains a multilayer perceptron and applies it to the face image. Learning has its own difficulties, and it is hard to compose the training data.
- Deformable matching: effective at processing changes in the position and expression of a face by deforming the model. Its computational volume is too high compared to its recognition rate.
* Table 2.2 Types of Recognition Algorithms
Eigenfaces was designed by Pentland in 1991. It applies PCA to extract feature points and uses the Euclidean distance to assess similarity. Its recognition rates are not high, and it responds sensitively to changes in lighting or environment. Nevertheless, it is one of the most important face recognition methods: it serves as the baseline against which other algorithms are compared, it is referred to in many dissertations, and its performance is well verified.
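The Eigenfaces recipe is compact enough to sketch directly: build a PCA basis from the training images with an SVD, project every face into that space, and recognize by nearest neighbour with the Euclidean distance, as the text describes. This sketch assumes the images are already cropped, aligned, and flattened to equal-length vectors; the function names are my own.

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) float array. Returns (mean, basis, coords)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal components ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]
    return mean, basis, centered @ basis.T

def identify(face, mean, basis, coords):
    """Index of the nearest training face in eigenface space."""
    q = (face - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - q, axis=1)))
```

The sensitivity to lighting noted above follows directly from this construction: a brightness change moves the query vector, and hence its projection, away from the correct neighbour.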
Fisherfaces was developed in 1997 based on PCA and uses FLD (Fisher Linear Discriminant) as its classification algorithm. It, too, measures similarity with the Euclidean distance. Because it learns the characteristics of individuals, it is more accurate and less sensitive to external changes. Off-line learning takes some time, but with on-line learning the characteristics can be applied to the system in real time.
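The core of FLD can be shown in the two-class case: project onto the direction w = Sw⁻¹(m₁ − m₂) that best separates the class means relative to the within-class scatter, then classify by which projected class mean is nearer. This is a sketch of the discriminant itself, not of the full Fisherfaces system, which applies it after a PCA reduction.

```python
import numpy as np

def fit_fld(x1, x2):
    """x1, x2: (n_i, d) samples of the two classes.
    Returns (w, projected mean 1, projected mean 2)."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    # Within-class scatter: sum of per-class scatter matrices.
    sw = (np.cov(x1.T, bias=True) * len(x1) +
          np.cov(x2.T, bias=True) * len(x2))
    w = np.linalg.solve(sw, m1 - m2)
    return w, m1 @ w, m2 @ w

def classify(x, w, p1, p2):
    """Return 1 if x projects nearer class 1's mean, else 2."""
    p = x @ w
    return 1 if abs(p - p1) <= abs(p - p2) else 2
```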
The ARENA recognition algorithm is relatively simple but effective for recognizing a face in a 2D image. It employs PCA and an SVM (Support Vector Machine) for face recognition. Its recognition rate is relatively high, but it consumes considerable time and memory when applied to a multi-class problem such as face recognition. The SVM algorithm is being studied intensively, however, so it is safe to say that it is one of the flagship algorithms in the face recognition area.
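To illustrate the SVM component the text mentions, here is a minimal binary linear SVM trained with Pegasos-style subgradient descent on the hinge loss (my choice of training method, not one named by the text). Multi-class face recognition would wrap such binary classifiers one-vs-rest, which is where the time and memory cost noted above comes from.

```python
import numpy as np

def train_svm(x, y, lam=0.01, epochs=200, seed=0):
    """x: (n, d) features, y: (n,) labels in {-1, +1}. Returns weights (d+1,)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    xb = np.hstack([x, np.ones((n, 1))])  # append a constant bias feature
    w = np.zeros(d + 1)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # Pegasos step size schedule
            if y[i] * (xb[i] @ w) < 1:  # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

def predict(x, w):
    xb = np.hstack([x, np.ones((len(x), 1))])
    return np.sign(xb @ w)
```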
The EFM-based face recognition method addresses the generalization problems of the FLD used in Fisherfaces by proposing the EFM (Enhanced Fisher Model) and applying it to face recognition. It applies PCA before the FLD-style processing to reduce dimensionality. Two variants, EFM-1 and EFM-2, have been proposed. EFM-1 reduces dimensionality by keeping the eigenvalues of the within-class variance matrix that retain most of the energy of the original data and selecting among them. EFM-2, like Fisherfaces, reduces dimensionality and processes the reduced within-class variance matrix; feature points are then selected among those values, and the small eigenvalues left unselected are included in the calculation of the variance matrix. According to the experimental results, the EFM-based method performs about 20 percent better than Fisherfaces, and the EFM-2 algorithm achieves slightly higher recognition rates than EFM-1.
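The PCA-before-FLD pipeline that EFM shares with Fisherfaces can be sketched for the two-class case: reduce dimensionality with PCA first so the within-class scatter matrix stays well conditioned, then compute a Fisher discriminant direction in the reduced space. EFM's actual contribution, the eigenvalue-selection rules of EFM-1 and EFM-2, is not reproduced here.

```python
import numpy as np

def pca_then_fisher(x1, x2, k):
    """x1, x2: (n_i, d) class samples. Returns a discriminant direction (d,)."""
    x = np.vstack([x1, x2])
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    p = vt[:k]                                # PCA basis, k x d
    z1, z2 = (x1 - mean) @ p.T, (x2 - mean) @ p.T
    # Within-class scatter in the reduced space.
    sw = (np.cov(z1.T, bias=True) * len(z1) +
          np.cov(z2.T, bias=True) * len(z2))
    w = np.linalg.solve(sw, z1.mean(axis=0) - z2.mean(axis=0))
    return p.T @ w                            # map direction back to pixel space
```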