Investigation of gait representations and partial body gait recognition

Wattanapanich, C. (2019) Investigation of gait representations and partial body gait recognition. PhD thesis, University of Reading


To link to this item, use the DOI: 10.48683/1926.00084979

Abstract/Summary

Recognising an individual by the way they walk has been one of the most popular research subjects in the field of soft biometrics over the last few decades. Advances in technology and equipment, such as Closed Circuit Television (CCTV), wireless internet and wearable sensors, make it easier than ever to obtain gait data. Gait biometrics can be applied widely in areas such as biomedicine, forensics and surveillance. However, gait recognition still faces many challenges and fundamental issues, and these motivate researchers to investigate various gait topics in order to overcome them and improve the field. Gait recognition currently achieves high performance only under very specific conditions, such as normal walking, particular clothing conditions and fixed camera view angles; when these conditions change, the classification rate drops dramatically. This study aims to address the problems of clothing, carried objects and camera view angles in an indoor environment with video-based data collection.

Two gait-related databases are used for testing in this study: CASIA dataset B and the OU-ISIR Large Population dataset with Bag (OU-LP-Bag). Three main tasks are tested with CASIA dataset B, while only gait recognition is tested with OU-LP-Bag. A gait recognition framework is developed to address the three main tasks: gait recognition by identical view, view classification and cross-view recognition. The framework takes a gait image sequence as input and generates a gait compact image. Gait features are then extracted with the optimal feature map by Principal Component Analysis (PCA), and a linear Support Vector Machine (SVM) is used as a one-against-all multiclass classifier. Four gait compact images, namely Gait Energy Image (GEI), Gait Entropy Image (GEnI), Gait Gaussian Image (GGI) and a novel gait image called Gait Gaussian Entropy Image (GGEnI), are used as basic gait representations. Three secondary gait representations are then generated from these basic representations: Gradient Histogram Gait Image (GHGI) and two novel gait representations called Convolutional Gait Image (CGI) and Convolutional Gradient Histogram Gait Image (CGHGI). All representations are tested on the three main tasks.

When people walk, each body part does not carry the same locomotion information; for example, there is much more motion in the legs than in the shoulders. Moreover, clothing and carried objects do not affect every part of the body to the same degree; a handbag, for example, does not generally affect leg motion. This study therefore divides the human body into fourteen different body parts based on height, and body parts and gait representations are combined to address the three main tasks. Three combined-part techniques, each using two different body parts, are created. The first is Part Scores Fusion (PSF), which sums the scores of two models, one per part, and selects the model with the highest summed score. The second is Part Image Fusion (PIF), which concatenates two parts into a single image with a 1:1 ratio and selects the highest-scoring model generated from the fused image. The third is Multi Region Duplication (MRD), which follows the same idea as PIF but increases the second part's ratio to 1:2, 1:3 and 1:4. These techniques are tested on gait recognition by identical view.
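As a rough illustration of the pipeline described above (averaging aligned silhouettes into a GEI, PCA feature extraction, a one-against-all linear SVM, and PIF-style 1:1 part concatenation), a minimal Python sketch using NumPy and scikit-learn is given below. The function names, image sizes, row-index part split and number of PCA components are assumptions for illustration only and are not taken from the thesis.

```python
# Minimal sketch of the pipeline in the abstract, not the thesis implementation.
# Assumes binary silhouettes are already cropped and aligned to a common size.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def gait_energy_image(silhouettes):
    """Average a sequence of binary silhouettes (T x H x W) into a GEI (H x W)."""
    return np.mean(np.asarray(silhouettes, dtype=np.float32), axis=0)

def part_image_fusion(gei, part_a_rows, part_b_rows):
    """PIF-style 1:1 concatenation of two height-based body parts (assumed row split)."""
    part_a = gei[part_a_rows[0]:part_a_rows[1], :]
    part_b = gei[part_b_rows[0]:part_b_rows[1], :]
    return np.vstack([part_a, part_b])

def train_classifier(X, y, n_components=100):
    """PCA feature reduction followed by a linear one-against-all (one-vs-rest) SVM.

    X: one flattened gait representation per walking sequence, shape (n_samples, H*W).
    y: subject identity labels.
    """
    model = make_pipeline(PCA(n_components=n_components), LinearSVC())
    model.fit(X, y)
    return model
```

In the thesis the feature dimensionality is selected via the optimal feature map and the body is divided into fourteen height-based parts; the fixed values above are placeholders.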
In conclusion, the general framework is effective for the three main tasks. GHGI-GEI, generated from the full silhouette, is the most effective representation for gait recognition by identical view and for cross-view recognition. GHGI-GGI with the lower knee region is the most effective representation for view angle classification. The GHGI-GEI CPI combination of the full body and the limb parts is the most effective combination on OU-LP-Bag. A more detailed description of each aspect is given in the following chapters.

Item Type: Thesis (PhD)
Thesis Supervisor: Wei, H. and Ferryman, J.
Thesis/Report Department: School of Mathematical, Physical and Computational Sciences
Identification Number/DOI: https://doi.org/10.48683/1926.00084979
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 84979
Date on Title Page: 2018
