# Hand gesture recognition (PCA) - Python

I am trying to implement hand gesture recognition using Principal Component Analysis (PCA) in Python. I am following the steps in this tutorial: http://onionesquereality.wordpress....genfaces-and-distance-classifiers-a-tutorial/

Here is my code:

Code:
```python
from PIL import Image
import numpy as np
import glob
import numpy.linalg as linalg

# Step 1: put the training images into a 2D array
filenames = glob.glob('C:\\Users\\Karim\\Desktop\\Training & Test images\\New folder\\Training\\*.png')
filenames.sort()
img = [Image.open(fn).convert('L').resize((90, 90)) for fn in filenames]
images = np.asarray([np.array(im).flatten() for im in img])

# Step 2: find the mean image and the mean-shifted input images
mean_image = images.mean(axis=0)
shifted_images = images - mean_image

# Step 3: covariance
c = np.asmatrix(shifted_images) * np.asmatrix(shifted_images.T)

# Step 4: sorted eigenvalues and eigenvectors
eigenvalues, eigenvectors = linalg.eig(c)
idx = np.argsort(-eigenvalues)
eigenvalues = eigenvalues[idx]
eigenvectors = eigenvectors[:, idx]

# Step 6: find the weights
w = eigenvectors.T * np.asmatrix(shifted_images)
w = np.asarray(w)

# Step 7: input (test) image
input_image = Image.open('C:\\Users\\Karim\\Desktop\\Training & Test images\\New folder\\Test\\31.png').convert('L').resize((90, 90))
input_image = np.asarray(input_image).flatten()

# Step 8: get the normalized image, covariance, eigenvalues and eigenvectors for the input image
shifted_in = input_image - mean_image
c = np.cov(input_image)
cmat = c.reshape(1, 1)
eigenvalues_in, eigenvectors_in = linalg.eig(cmat)

# Step 9: find the weights of the input image
w_in = eigenvectors_in.T * np.asmatrix(shifted_in)
w_in = np.asarray(w_in)

# Step 10: Euclidean distance
df = np.asarray(w - w_in)             # the difference between the images
dst = np.sqrt(np.sum(df**2, axis=1))  # their Euclidean distances
idx = np.argmin(dst)                  # index of the smallest value in 'dst', which should be the index of the most similar image in 'images'
print(idx)
```
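As a sanity check on steps 3–4, here is a toy version (random data, all names mine, not from the tutorial) of the small-covariance trick: the eigenvectors of the small N×N matrix A·Aᵀ live in image-index space, and only become pixel-space eigenfaces after being mapped back through Aᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10, 8100                       # 10 images of 90*90 = 8100 pixels
A = rng.standard_normal((N, d))       # stand-in for the mean-shifted images
A -= A.mean(axis=0)

# Small-matrix trick: eigendecompose the N x N matrix A A^T ...
vals, vecs = np.linalg.eigh(A @ A.T)  # eigh, since A A^T is symmetric

# ... then map each length-N eigenvector back through A^T to pixel space.
eigenfaces = A.T @ vecs               # shape (8100, 10)

# Each column is (up to scale) an eigenvector of the full 8100 x 8100
# covariance A^T A, i.e. it satisfies (A^T A) v = lambda v.
v = eigenfaces[:, -1]                 # column for the largest eigenvalue
# Compute (A^T A) v as A^T (A v) to avoid forming the 8100 x 8100 matrix:
print(np.allclose(A.T @ (A @ v), vals[-1] * v))   # True
```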

The detected image should be the training image nearest to the test image, but the result is a completely different one, even though the training set contains 10 similar images for each test image.

Can anyone help?
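To illustrate what I expect, here is a toy version on synthetic data (shapes and names are illustrative only, not my real code) where projecting the test image onto the same training eigenvectors does pick out the right training image:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 12, 64                          # 12 tiny synthetic "images", 64 pixels
train = rng.standard_normal((N, d))

mean_image = train.mean(axis=0)
A = train - mean_image

# Training basis via the small-matrix trick, mapped back to pixel space.
vals, vecs = np.linalg.eigh(A @ A.T)   # ascending eigenvalues
basis = A.T @ vecs[:, -8:]             # keep the top 8 eigenfaces
basis /= np.linalg.norm(basis, axis=0)

w = A @ basis                          # training weights, shape (12, 8)

# Test image: a noisy copy of training image 5, projected onto the SAME basis.
test = train[5] + 0.05 * rng.standard_normal(d)
w_in = (test - mean_image) @ basis     # shape (8,)

dst = np.sqrt(np.sum((w - w_in) ** 2, axis=1))
print(np.argmin(dst))                  # 5
```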