Over the last few decades, machine vision has become an important part of modern society. It has long been used in industrial fields such as manufacturing, but it is also becoming part of everyday life. Autonomous cars are a prominent example of machine vision used for the public good, and the systems are now developed enough that such cars can drive long distances without human intervention.
However, the development of this technology also poses new threats, especially to privacy. Hundreds of cameras on the road can accurately identify specific individuals and track their movements and actions. Vehicles can also be identified automatically from their license plates, exposing information about individuals that could be used to harm them. Ethical and legal concerns hold this back in much of the world, but some governments already use the technology to monitor their citizens. It is therefore important not only to learn the technology, but also to understand what its right applications are.
One encouraging aspect of machine vision is that the development of many tools has made the technology accessible to individuals. Even students like us can use frameworks such as PyTorch and TensorFlow to build a machine vision model for a specific goal in a fairly short amount of time. For this project, we focus on building a model that can be helpful to people while avoiding the harm the technology can cause.
A machine vision application that correctly recognizes sign language can help many deaf individuals. People who are born without hearing often struggle with reading and writing because of barriers to education; one study in Korea states that about 30% of deaf people in Korea are illiterate, which makes it hard for them to communicate with others and put their ideas into written form.
An accurate machine vision model that detects signs and translates (interprets) them into speech or written words could therefore greatly help communication with people who cannot hear. The stakeholders are people who want to communicate with deaf individuals, especially those who cannot speak or write, as well as people who do not know sign language.
This is not easy, because building a model that accurately detects sign language is essentially the same problem as building a program that translates between languages. Like any other translator, the model always carries the risk of translating a message incorrectly and causing miscommunication. To prevent this, the recognition model needs to be very accurate so that errors are minimized.
With the skill set we have, it is nearly impossible to create a machine vision model that perfectly translates sign language into English. Considering that even Google has not managed to build a perfect translator, this is not a project that can be carried out at the individual level. What we can do, however, is build a model that handles basic detection: mapping individual signs to alphabet letters.
Using the Sign Language MNIST dataset (https://www.kaggle.com/datamunge/sign-language-mnist), we will build and improve a CNN (Convolutional Neural Network) model in TensorFlow that accurately classifies which letter a person is signing. As stated earlier, our main goal is to make the model as accurate as possible, to reduce the errors and miscommunication that could occur in a real application.
The dataset includes 27,455 training images of signs representing alphabet letters, plus 7,172 test images for evaluating how well the model works after training. Both sets label the images from 0 (A) to 25 (Z) to cover all 26 letters. Each image is stored as a single CSV row of 784 grayscale pixel values (0 to 255), corresponding to a 28 x 28 image, the size and format commonly used in MNIST-style applications. Unfortunately, the dataset has no samples for the letters J and Z, because those signs involve motion, so labels 9 and 25 are missing. As this exception shows, many signs are not static, which limits the model even if its accuracy is high. Still, a model that handles the basics can contribute to future, better models and provide useful insight for improving an application.
A CNN is an effective way to build an efficient model for many machine vision applications, and it suits sign language recognition particularly well. A CNN breaks an image down into features: the image is turned into numerical feature maps that describe the patterns it contains and that the computer can interpret. Because each sign has a distinctive hand shape, a model trained to recognize those shapes well should reach high accuracy.
After building a working model, it could be evaluated on real-world data, possibly created by ourselves. Ideally, we could build real-time interpretation software that speaks or writes out the detected signs. Even just testing photos of our own signing would be a useful check of whether the model still translates correctly when the data style differs from the dataset. However, if none of these real-world tests turn out to be feasible, we believe that building a model with the highest accuracy possible is enough for this project.
# import tensorflow
from tensorflow.keras import layers, models
import tensorflow as tf
# import pandas, numpy, and matplotlib for data structures and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# check that Colab's GPU is available for faster training
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    # raise SystemError('GPU device not found')
    pass
# mount my own google drive to locate the training/testing data
from google.colab import drive
drive.mount('/content/drive/')
# load training/testing data from the google drive
train_data = pd.read_csv("/content/drive/MyDrive/SignLanguage/sign_mnist_train.csv")
test_data = pd.read_csv("/content/drive/MyDrive/SignLanguage/sign_mnist_test.csv")
# visualize an image to check that the data was loaded correctly
matrix = train_data.iloc[3].iloc[1:].values.reshape((28, 28))
plt.imshow(matrix, cmap='gray')
The dataset was downloaded in advance and uploaded to our Google Drive. After importing the necessary libraries for handling it (especially pandas), the dataset was loaded into Google Colab and a sample image was plotted to check that it was imported correctly.
# split the training and testing data into x and y elements
x_train = train_data.iloc[:, 1:].values
y_train = train_data.iloc[:, 0].values
x_test = test_data.iloc[:, 1:].values
y_test = test_data.iloc[:, 0].values
# divide by 255 to normalize the pixel data
x_train = x_train/255.0
x_test = x_test/255.0
# reshape the data so that it can be used for training
x_train = x_train.reshape(-1,28,28,1)
x_test = x_test.reshape(-1,28,28,1)
# plot histogram of labels to check the distribution
figure = plt.figure(figsize=(8, 6))
plt.hist(y_train, bins=np.arange(26)-0.5, edgecolor='black')
plt.xticks(list(range(0,26)))
plt.title("Distribution of Training Data")
plt.xlabel("Labels")
plt.ylabel("Count")
plt.show()
The training and testing datasets were split into x (pixel) and y (label) parts for training. The pixel values in x were normalized for later calculations, and x was reshaped back into 28 x 28 image form. The histogram shows that the dataset is missing labels 9 and 25 (letters J and Z), as mentioned earlier; other than that, the training data is evenly distributed across labels, which supports balanced training.
# build a simple CNN model
sign_model_first = models.Sequential([
    # add a convolutional layer
    layers.Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same', activation='relu', input_shape=(28, 28, 1)),
    # add a max-pooling layer
    layers.MaxPooling2D(pool_size=(2, 2)),
    # flatten the feature maps into a vector
    layers.Flatten(),
    # fully connected layer with 512 neurons
    layers.Dense(512, activation='relu'),
    # output a probability for each of the 25 possible labels
    layers.Dense(25, activation='softmax')
])
# compile the model
sign_model_first.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                         loss='sparse_categorical_crossentropy',
                         metrics=['accuracy'])
# train the model
history_first = sign_model_first.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
Epoch 1/5
858/858 [==============================] - 7s 8ms/step - loss: 0.6612 - accuracy: 0.8219 - val_loss: 0.5166 - val_accuracy: 0.8334
Epoch 2/5
858/858 [==============================] - 7s 8ms/step - loss: 0.0192 - accuracy: 0.9991 - val_loss: 0.5463 - val_accuracy: 0.8622
Epoch 3/5
858/858 [==============================] - 6s 7ms/step - loss: 0.0037 - accuracy: 1.0000 - val_loss: 0.5728 - val_accuracy: 0.8759
Epoch 4/5
858/858 [==============================] - 6s 7ms/step - loss: 0.0188 - accuracy: 0.9948 - val_loss: 0.6976 - val_accuracy: 0.8624
Epoch 5/5
858/858 [==============================] - 6s 7ms/step - loss: 4.5856e-04 - accuracy: 1.0000 - val_loss: 0.7205 - val_accuracy: 0.8674
# plot the accuracy graph for training/testing dataset for each epoch
plot_target = ['accuracy', 'val_accuracy']
figure = plt.figure(figsize=(8, 6))
for x in plot_target:
    plt.plot(history_first.history[x], label=x)
plt.legend()
plt.title("Accuracy For Each Epoch")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.grid()
plt.show()
The first model was very simple: one convolutional layer, one max-pooling layer, and two dense layers. It has all the basic pieces of a CNN, but the results show that it quickly overfits the training data, reaching 100% accuracy on the training set while staying around 85-88% on the test set. Because the model has fit the training data so tightly, it struggles to classify new images correctly. The accuracy is not bad for a first try, but a real application would call for more.
# create a model to check conv2d layer and maxpooling layer
layer_outputs = [x.output for x in sign_model_first.layers[:2]]
activation_model = models.Model(inputs=sign_model_first.input, outputs=layer_outputs)
activation = activation_model.predict(x_test)
# plot all the 32 channels of a image in the conv2d layer
rows = 4
columns = 8
figure = plt.figure(figsize=(13, 8))
plt.suptitle("Image After Convolutional Layer")
for i in range(0, columns*rows):
    figure.add_subplot(rows, columns, i+1)
    plt.imshow(activation[0][2][:, :, i].reshape(28, 28), cmap='viridis')
plt.show()
The most important layer in a CNN is the convolutional layer. Using kernels with different weights, the model tries to capture the relationships between the pixels of an image. As training continues, the convolutional layer adjusts the kernel weights so that it extracts the desired features from the images more effectively.
The figure above shows the 32 channels produced by applying the layer's 32 different kernels to one image. Each channel looks different because each kernel picks up a different aspect of the image. The kernel weights start as random values and, in our model, gradually settle on values that yield the highest accuracy.
Having more filters (kernels) can improve accuracy, but a larger number of filters also increases the risk of overfitting, so it is important to find a good value for training.
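To make this concrete, the learned kernels can be inspected directly. The short check below is not part of our original pipeline, and the exact patterns will differ from run to run, but it shows the 32 trained 5 x 5 kernels sitting inside the first model's convolutional layer.
# inspect the learned kernels of the first model's convolutional layer
conv_weights, conv_biases = sign_model_first.layers[0].get_weights()
print(conv_weights.shape)  # (5, 5, 1, 32): 32 kernels of size 5x5 on one grayscale channel
print(conv_biases.shape)   # (32,)
# plot a few kernels to see the patterns they have learned
figure = plt.figure(figsize=(8, 2))
for i in range(4):
    figure.add_subplot(1, 4, i + 1)
    plt.imshow(conv_weights[:, :, 0, i], cmap='viridis')
    plt.axis('off')
plt.show()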
# plot all the 32 channels of a image in the maxpooling layer
rows = 4
columns = 8
figure = plt.figure(figsize=(13, 8))
plt.suptitle("Image After Max Pooling Layer")
for i in range(0, columns*rows):
    figure.add_subplot(rows, columns, i+1)
    plt.imshow(activation[1][2][:, :, i].reshape(14, 14), cmap='viridis')
plt.show()
Max pooling is used in a CNN to reduce the number of parameters the model has to handle while maintaining its accuracy. We want fewer parameters so that training does not take forever, but we also want to keep the feature information gained from the convolutional layer. With a 2 x 2 pool, the resulting feature map is one-quarter the size of the original, so only 25% of the data has to be processed afterward. Reducing the data does not just speed up training; it also helps the model avoid overfitting.
The effectiveness of pooling is easiest to see by counting values. Each 2 x 2 max pool halves the height and width of every feature map, so the amount of data shrinks by a factor of 4 each time it is applied. If the model has three convolutional layers and we pool after each one, the feature maps reaching the dense layers are roughly 4 x 4 x 4 = 64 times smaller than they would be without any pooling, which directly cuts the number of parameters in the dense layers that follow.
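The sketch below makes this arithmetic concrete. It is an illustration only: the 3 x 3 kernels and 32 filters are chosen purely for the demonstration and are not taken from our model.
# compare the output size of three convolutional layers with and without 2x2 max pooling
no_pool = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
    layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
])
with_pool = models.Sequential([
    layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), padding='same', activation='relu'),
    layers.MaxPooling2D((2, 2)),
])
print(no_pool.output_shape)    # (None, 28, 28, 32) -> 25,088 values per image
print(with_pool.output_shape)  # (None, 3, 3, 32)   -> 288 values per image
The reduction here is even slightly more than 64x because the odd-sized 7 x 7 map pools down to 3 x 3.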
Max or average pooling is usually used: max pooling keeps the most distinct value inside each pool, which tends to preserve the feature information we want, while average pooling captures the general tendency inside the pool, which can also be helpful depending on the type of data.
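As a quick illustration of the difference, here is a hand-made 4 x 4 example (values chosen arbitrarily) passed through both pooling types.
# compare max and average pooling on a single hand-made 4x4 "image"
demo = tf.constant([[1., 2., 3., 0.],
                    [4., 5., 1., 1.],
                    [0., 1., 2., 2.],
                    [1., 0., 3., 4.]])
demo = tf.reshape(demo, (1, 4, 4, 1))       # batch, height, width, channels
max_out = layers.MaxPooling2D((2, 2))(demo)
avg_out = layers.AveragePooling2D((2, 2))(demo)
print(tf.reshape(max_out, (2, 2)).numpy())  # [[5. 3.] [1. 4.]]      strongest value in each 2x2 block
print(tf.reshape(avg_out, (2, 2)).numpy())  # [[3. 1.25] [0.5 2.75]] average of each 2x2 block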
We used max pooling for our model. Looking at each of the 32 channels after pooling, we can see that the resolution is lower than in the convolutional layer, but each channel still keeps its major shape, which shows why max pooling works well in CNN applications.
We chose Adam as our optimizer because it is efficient in most machine learning cases. After experimenting with swapping the optimizer, Adam trained in a reasonable amount of time while reaching high accuracy. For example, Adagrad on the first model gave only about 47% accuracy in evaluation, and RMSprop gave slightly lower accuracy than Adam while taking more time to compute. Tuning the other parameters would probably raise each optimizer's accuracy, but that did not seem like a good use of time when we already had a fast and accurate option.
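The comparison ran roughly along the lines of the sketch below. This is a rough sketch rather than our exact experiment code, the reported numbers will vary between runs, and clone_model is used here simply to give each optimizer a fresh, untrained copy of the first model's architecture.
# rough sketch of an optimizer comparison on the first model's architecture
for optimizer in [tf.keras.optimizers.Adam(learning_rate=0.001),
                  tf.keras.optimizers.Adagrad(learning_rate=0.001),
                  tf.keras.optimizers.RMSprop(learning_rate=0.001)]:
    model = tf.keras.models.clone_model(sign_model_first)  # fresh, untrained copy
    model.compile(optimizer=optimizer,
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5, verbose=0)
    print(type(optimizer).__name__, model.evaluate(x_test, y_test, verbose=0))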
Dropout is a simple way to reduce overfitting: it randomly discards nodes during training. As mentioned several times already, having many parameters always carries the risk of overfitting. Because a random set of nodes is dropped at every training step, the model cannot rely on any single path through the network, which lowers the chance of overfitting the training data.
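A tiny demonstration of what a Dropout layer actually does is shown below; the zeroed positions are random, so the exact output changes on every call.
# during training, Dropout zeroes a random fraction of activations and scales the rest up
drop = layers.Dropout(0.5)
sample = tf.ones((1, 10))
print(drop(sample, training=True).numpy())   # roughly half the values become 0, the rest become 2
print(drop(sample, training=False).numpy())  # at inference time the input passes through unchanged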
Adding a single Dropout(0.25) layer gave a 1-2% increase in evaluation accuracy. Accuracy tended to improve with a higher drop-out rate, but only up to a point around 50%; we assume that dropping too many nodes leaves the model too few parameters to learn the dataset.
Adding multiple dropout layers with different rates made the model fit the training data less tightly: training accuracy no longer hit 100%, and evaluation accuracy increased. After several experiments, we found that two 50% drop-out layers placed around the dense layers raised test accuracy by about 4-5%.
Adding more dense layers lowered the accuracy, but deleting one of the two existing dense layers lowered it too, so a single hidden dense layer appears to be right for this application.
The number of neurons also mattered: around 256-512 neurons gave the best accuracy. We assume that too few neurons cannot form a rich enough network to compute well, while too many overfit the model again.
We decided on ReLU as our activation function because it is easy to implement, efficient, and fast. ReLU is commonly used in machine learning, and especially in machine vision, because it adds the non-linearity the network needs while staying very cheap to compute.
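A one-line look at what ReLU does to a few sample values:
# ReLU clips negative values to zero and leaves positive values unchanged
values = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])
print(tf.nn.relu(values).numpy())  # [0.  0.  0.  0.5 2. ]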
Decreasing the learning rate from the default 0.001 to 0.0001 only made training slower and did not increase accuracy at all. Raising it to 0.01 did not help either: training got stuck at one particular loss value and could not improve the model any further. So we left the learning rate at the default.
Adding one more convolutional layer raised test accuracy to 94-95%, but it would not go above that; after the epoch that reached about 95%, accuracy started dropping again, which suggests the model begins overfitting the training data from that point. We then tried adding a third convolutional layer and found that test accuracy went up a little further.
We then tweaked the kernel size and found that a larger kernel (5, 5) gives better results in the first convolutional layer, while smaller kernels (3, 3) in the later layers help raise accuracy. A likely explanation is that a kernel that is too small relative to the full-size image produces overly specific channels and misses the broader patterns that distinguish the different signs.
The final change was the number of filters. A high filter count made training very slow but generally gave higher test accuracy. After some testing, gradually increasing the filter count from layer to layer had almost the same effect as using a high value everywhere, while needing much less training time.
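The trade-off is easy to see in the parameter counts: a Conv2D layer has kernel_height * kernel_width * input_channels * filters + filters parameters, which is exactly where the numbers in the model summary further below come from.
# parameter counts of the three convolutional layers in the final model
print(5 * 5 * 1 * 32 + 32)     # 832:   5x5 kernels, 1 input channel, 32 filters
print(3 * 3 * 32 * 64 + 64)    # 18496: 3x3 kernels, 32 input channels, 64 filters
print(3 * 3 * 64 * 128 + 128)  # 73856: 3x3 kernels, 64 input channels, 128 filters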
Changing the batch size, as long as it was not extremely small or large, did not change the maximum accuracy the model reached; different numbers of epochs were needed, but the models eventually ended up in the same place. To keep computation fast, we used a batch size of 100, which gave good accuracy and a fair training time.
# build the final model
sign_model = models.Sequential([
    # first convolutional layer
    layers.Conv2D(32, kernel_size=(5, 5), strides=(1, 1), padding='same', activation='relu', input_shape=(28, 28, 1)),
    # first max-pooling layer
    layers.MaxPooling2D(pool_size=(2, 2)),
    # second convolutional layer
    layers.Conv2D(64, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu'),
    # second max-pooling layer
    layers.MaxPooling2D(pool_size=(2, 2)),
    # third convolutional layer
    layers.Conv2D(128, kernel_size=(3, 3), strides=(1, 1), padding='same', activation='relu'),
    # third max-pooling layer
    layers.MaxPooling2D(pool_size=(2, 2)),
    # flatten the feature maps into a vector
    layers.Flatten(),
    # randomly drop half of the nodes to prevent overfitting
    layers.Dropout(0.5),
    # fully connected layer with 512 neurons
    layers.Dense(512, activation='relu'),
    # randomly drop half of the nodes to prevent overfitting
    layers.Dropout(0.5),
    # output a probability for each of the 25 possible labels
    layers.Dense(25, activation='softmax')
])
# print the summary of the model
sign_model.summary()
Model: "sequential_10" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_22 (Conv2D) (None, 28, 28, 32) 832 _________________________________________________________________ max_pooling2d_22 (MaxPooling (None, 14, 14, 32) 0 _________________________________________________________________ conv2d_23 (Conv2D) (None, 14, 14, 64) 18496 _________________________________________________________________ max_pooling2d_23 (MaxPooling (None, 7, 7, 64) 0 _________________________________________________________________ conv2d_24 (Conv2D) (None, 7, 7, 128) 73856 _________________________________________________________________ max_pooling2d_24 (MaxPooling (None, 3, 3, 128) 0 _________________________________________________________________ flatten_10 (Flatten) (None, 1152) 0 _________________________________________________________________ dropout_12 (Dropout) (None, 1152) 0 _________________________________________________________________ dense_20 (Dense) (None, 512) 590336 _________________________________________________________________ dropout_13 (Dropout) (None, 512) 0 _________________________________________________________________ dense_21 (Dense) (None, 25) 12825 ================================================================= Total params: 696,345 Trainable params: 696,345 Non-trainable params: 0 _________________________________________________________________
# compile the model
sign_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
# train and save the history for the possible future use
history = sign_model.fit(x_train, y_train, batch_size=100, epochs=10, validation_data=(x_test, y_test))
Epoch 1/10
275/275 [==============================] - 4s 13ms/step - loss: 1.7047 - accuracy: 0.4706 - val_loss: 0.4482 - val_accuracy: 0.8660
Epoch 2/10
275/275 [==============================] - 3s 12ms/step - loss: 0.3300 - accuracy: 0.8869 - val_loss: 0.1784 - val_accuracy: 0.9364
Epoch 3/10
275/275 [==============================] - 3s 12ms/step - loss: 0.1389 - accuracy: 0.9537 - val_loss: 0.1498 - val_accuracy: 0.9505
Epoch 4/10
275/275 [==============================] - 3s 12ms/step - loss: 0.0842 - accuracy: 0.9728 - val_loss: 0.1325 - val_accuracy: 0.9579
Epoch 5/10
275/275 [==============================] - 3s 12ms/step - loss: 0.0558 - accuracy: 0.9820 - val_loss: 0.0968 - val_accuracy: 0.9718
Epoch 6/10
275/275 [==============================] - 4s 13ms/step - loss: 0.0448 - accuracy: 0.9861 - val_loss: 0.0940 - val_accuracy: 0.9736
Epoch 7/10
275/275 [==============================] - 3s 12ms/step - loss: 0.0398 - accuracy: 0.9863 - val_loss: 0.0826 - val_accuracy: 0.9803
Epoch 8/10
275/275 [==============================] - 3s 12ms/step - loss: 0.0331 - accuracy: 0.9891 - val_loss: 0.0793 - val_accuracy: 0.9802
Epoch 9/10
275/275 [==============================] - 3s 12ms/step - loss: 0.0305 - accuracy: 0.9902 - val_loss: 0.0973 - val_accuracy: 0.9805
Epoch 10/10
275/275 [==============================] - 3s 12ms/step - loss: 0.0274 - accuracy: 0.9912 - val_loss: 0.1140 - val_accuracy: 0.9730
# plot the accuracy graph for training/testing dataset for each epoch
plot_target = ['accuracy', 'val_accuracy']
figure = plt.figure(figsize=(8, 6))
for x in plot_target:
    plt.plot(history.history[x], label=x)
plt.legend()
plt.title("Accuracy For Each Epoch")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.grid()
plt.show()
The final model was built from the layers shown in the code and summary above.
It is a little more complex than the first model, and the accuracy shows that the new model is better: even though training accuracy dropped slightly, test accuracy rose from 85-88% to 96-97%. Getting roughly 1 sign in 25 wrong instead of 1 in 8 shows that the model was effectively improved over multiple iterations.
One observation from the accuracy graph is that accuracy tends to fluctuate around its maximum (and the loss around its minimum). This is what kept us from improving the model further; more hyperparameter tuning might yield better results, but at this point we are unsure how to approach it.
# create prediction for the test data and get the predicted labels
prediction = sign_model.predict(x_test)
prediction_label = np.argmax(prediction, axis = 1)
from collections import defaultdict
import math
# group the misclassified test images by their true label,
# then sort the labels by how many images were missed
miss = defaultdict(list)
for i in range(len(y_test)):
    if prediction_label[i] != y_test[i]:
        miss[y_test[i]].append(i)
miss_order = sorted(miss.items(), key=lambda x: len(x[1]), reverse=True)
# count the total number of test images for each label, for percentage calculations
label_count = defaultdict(lambda: 0)
for x in y_test:
    label_count[x] += 1
# take the (up to) nine labels with the most misclassified images
most_missed = []
miss_max = 9
if len(miss_order) < miss_max:
    miss_max = len(miss_order)
for i in range(miss_max):
    most_missed.append(miss_order[i][0])
# create a plot that shows label number and wrong percentage
plt.figure(figsize=(12, 12))
plt.suptitle("Wrong Sign Languages By Missing Percentage")
for idx, n in enumerate(most_missed):
    index = miss[n][0]
    plt.subplot(3, 3, idx + 1)
    label = y_test[index]
    miss_count = len(miss[n])
    total_count = label_count[label]
    percentage = miss_count/total_count * 100
    plt.imshow(x_test[index].reshape(28,28), cmap='Greys', interpolation='nearest')
    plt.title('Label: ' + str(label) + ', Miss %: ' + str(math.floor(percentage)) + "%")
    plt.axis('off')
plt.show()
To look at which signs the model tends to get wrong, we visualized the nine labels with the highest miss rates. Looking at the percentage of missed images per label, we realized that the model struggles with certain signs in particular: the errors behind the 96-97% accuracy are not spread evenly across all signs but are concentrated on specific labels.
# get the first misclassified image for each of the top 2 most-missed labels
sample_index1 = miss[most_missed[0]][0]
sample_actual_label1 = y_test[sample_index1]
sample_prediction_label1 = prediction_label[sample_index1]
anti_sample_index1 = -1
sample_index2 = miss[most_missed[1]][0]
sample_actual_label2 = y_test[sample_index2]
sample_prediction_label2 = prediction_label[sample_index2]
anti_sample_index2 = -1
# find a test image whose actual label matches each predicted label, for comparison
for i in range(len(y_test)):
    if y_test[i] == sample_prediction_label1:
        anti_sample_index1 = i
        break
for i in range(len(y_test)):
    if y_test[i] == sample_prediction_label2:
        anti_sample_index2 = i
        break
# plot the comparison diagram
if anti_sample_index1 != -1 and anti_sample_index2 != -1:
    plt.figure(figsize=(8, 8))
    plt.suptitle("Actual Label and Prediction Comparison")
    plt.subplot(2, 2, 1)
    plt.imshow(x_test[sample_index1].reshape(28,28), cmap='Greys', interpolation='nearest')
    plt.title('Actual Label: ' + str(y_test[sample_index1]) + ', Predict: ' + str(prediction_label[sample_index1]))
    plt.subplot(2, 2, 2)
    plt.imshow(x_test[anti_sample_index1].reshape(28,28), cmap='Greys', interpolation='nearest')
    plt.title('Actual Label: ' + str(y_test[anti_sample_index1]))
    plt.subplot(2, 2, 3)
    plt.imshow(x_test[sample_index2].reshape(28,28), cmap='Greys', interpolation='nearest')
    plt.title('Actual Label: ' + str(y_test[sample_index2]) + ', Predict: ' + str(prediction_label[sample_index2]))
    plt.subplot(2, 2, 4)
    plt.imshow(x_test[anti_sample_index2].reshape(28,28), cmap='Greys', interpolation='nearest')
    plt.title('Actual Label: ' + str(y_test[anti_sample_index2]))
    plt.show()
We compared the top two most-missed labels against the labels the model predicted for them. The actual and predicted signs resemble each other in shape, for example both using just a fist, or pointing a similar number of fingers in the same direction. The model was doing a reasonable job, but some images share almost the same features, and because a CNN predicts based on the features an image contains, such pairs are especially challenging for it.
This investigation suggests a real limitation of the CNN model here. Because every image is reduced to a small resolution, some images are hard even for humans to interpret correctly, so it is not surprising that the machine cannot distinguish certain signs once they become harder to recognize during dataset preparation. One possible solution is higher-resolution images, but larger images mean longer training, which makes it much slower to evaluate whether a model is effective.
import seaborn as sns
# create confusion matrix with counts of result
confusion = tf.math.confusion_matrix(labels=y_test, predictions = prediction_label).numpy()
# normalize the confusion matrix row by row
# (the row for the missing label 9 has no samples, so its division by zero leaves it blank)
confusion_norm = np.around(confusion.astype('float') / confusion.sum(axis=1)[:, np.newaxis], decimals=2)
label_list = list(range(0,25))
confusion_df = pd.DataFrame(confusion_norm,index = label_list, columns = label_list)
# plot the confusion matrix with color
figure = plt.figure(figsize=(10, 10))
sns.heatmap(confusion_df, annot=True,cmap=plt.cm.Blues)
plt.tight_layout()
plt.title("Confusion Matrix")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
A confusion matrix was created to see the relationship between the classes. The matrix is normalized by row, since each sign does not have the same number of test images (which is also why the row for the missing label 9 stays blank). The solid blue diagonal looks great, but the lighter blue cells around it show where predictions went wrong: most labels have a value of 1, meaning every prediction was correct, some sit in the 0.9-1.0 range, and a few have noticeably lower accuracy, revealing some false negatives and false positives.
Overall, the model is effective for most signs, except for a few that it fails to recognize correctly.
The final model reaches 96-97% accuracy, meaning it interprets the sign language alphabet correctly most of the time. Because of our team's unusual situation of having only one member, we were not able to implement real-time interpretation of sign language or evaluate the model on data we created ourselves. Even so, building a fully working CNN model and improving its accuracy from about 85% to 97% was a valuable learning experience.
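For reference, testing one of our own photos would look roughly like the sketch below; the file name my_sign.jpg is hypothetical, and a real photo would need to be cropped, centered, and lit similarly to the dataset images for the prediction to mean much.
# rough sketch: run the trained model on a single external photo (file name is hypothetical)
img = tf.keras.preprocessing.image.load_img('my_sign.jpg', color_mode='grayscale', target_size=(28, 28))
img = tf.keras.preprocessing.image.img_to_array(img) / 255.0  # same normalization as the training data
prediction = sign_model.predict(img.reshape(1, 28, 28, 1))
print("Predicted label:", np.argmax(prediction))              # 0 = A, 1 = B, ...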
Given the nature of language, accuracy needs to be close to 100% to prevent misinterpretation. For example, even with 97% accuracy on each letter, the probability of interpreting a 5-letter message entirely correctly is only about 86%, and a 10-letter message only about 74%. In addition, the alphabet is not the only component of sign language; spelling words out letter by letter is actually rare in everyday signing. So the biggest limitations of this project are that, even with a CNN, it was hard to reach the very high accuracy a real-world application would need, and that the model cannot interpret signs that involve motion.
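The message-level numbers above come from simple repeated multiplication:
# probability of interpreting an entire message correctly at 97% per-letter accuracy
per_letter = 0.97
print(per_letter ** 5)   # ~0.86 for a 5-letter word
print(per_letter ** 10)  # ~0.74 for a 10-letter message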
These limitations make the model hard to use in real-world applications as it stands, but we believe it can be a basic step toward building a functional sign language interpreter. Other machine learning models might achieve higher accuracy, or a different approach could handle signs that involve motion. Either way, we believe it is important for engineers to keep trying so that the technology eventually serves society well without causing harm.