Playing Detective with CNNs


With the growing popularity of Convolutional Neural Networks (CNNs) in computer vision tasks, and their ability to model intricate patterns in data, we set out to extend the study of handwriting verification using CNNs. Handwriting is considered unique to each individual, which is why handwriting samples remain of considerable importance in legal documents. Even so, there is variation within a single writer's handwriting. We therefore seek to build a model that can successfully differentiate between writers while remaining invariant to within-writer variation, without the feature engineering used by previous models (Individuality of Handwriting, S. N. Srihari, 2002).

We use the CEDAR word-level "and" dataset to train and experiment with several CNN architectures, varying filter size, number of feature maps, activation functions, regularization techniques, cost functions, width and depth of the model, pooling operations, strides, and optimization techniques. While experimenting with an architecture, we found that the model's behavior followed certain trends; by modeling these trends we gain an intuitive understanding of the effect of each parameter on the model's performance, and we apply this understanding in each experiment to develop our final architecture. We further split the final architecture into two CNN branches with the same parameter tuning and with weight and variable sharing, and saw a marked jump in performance. The primary objective of our experiments was to test how handwriting verification performs without any feature engineering, using CNNs for feature extraction. Along the way, we also gained several insights into how different architectural choices behave on this dataset, and we arrived at a final combination of parameters that performed best. We hope these insights can be useful when tuning more complex models.
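The two-branch, weight-sharing idea can be illustrated with a minimal toy sketch. This is not our actual architecture: the 3×3 filter, single feature map, and global-average pooling below are simplifying assumptions chosen to make the weight-sharing mechanism visible — both samples pass through the *same* parameters, and the comparison happens on the resulting features.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def shared_branch(image, kernel):
    """One branch of the two-branch network: conv + ReLU + global average pool.
    Both branches call this with the SAME kernel, i.e. shared weights."""
    fmap = np.maximum(convolve2d(image, kernel), 0)  # ReLU non-linearity
    return fmap.mean()  # crude global pooling down to a single feature

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))   # the shared filter (size is hypothetical)
img_a = rng.standard_normal((8, 8))   # handwriting sample from writer A
img_b = rng.standard_normal((8, 8))   # handwriting sample to compare against

# Feature distance between the two samples; under this toy model, a small
# distance would suggest the samples come from the same writer.
distance = abs(shared_branch(img_a, kernel) - shared_branch(img_b, kernel))
```

In a real Siamese-style setup the branch would be a full CNN and the distance would feed a verification loss, but the key property is the same: because the weights are shared, both inputs are embedded by an identical function.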

We test the model's performance by measuring two types of error, defined by whether the model has seen the writer before: 1) samples from known writers, which tests variation within writers seen during training as well as between them; 2) samples from unknown writers, which tests the model's ability to generalize to writers it has never seen. By measuring both kinds of error, we arrive at an architecture that performs well on both.
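Measuring the second error type requires that the held-out writers never appear in training at all, i.e. a writer-disjoint split rather than a sample-level split. A minimal sketch, assuming writer IDs are available per sample (the function name and holdout fraction are illustrative, not from the paper):

```python
import random

def split_writers(writer_ids, holdout_frac=0.2, seed=42):
    """Partition writers into 'known' (usable for training) and 'unknown'
    (held out entirely), so the two error types can be measured separately."""
    ids = sorted(writer_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n_unknown = max(1, int(len(ids) * holdout_frac))
    unknown = set(ids[:n_unknown])
    known = set(ids[n_unknown:])
    return known, unknown

writers = [f"writer_{i:03d}" for i in range(100)]  # hypothetical writer IDs
known, unknown = split_writers(writers)
# Type-1 evaluation pairs are drawn only from `known` writers;
# type-2 pairs involve at least one `unknown` writer.
```

Because the split is over writers rather than samples, no image from an "unknown" writer can leak into training, which is what makes the generalization measurement meaningful.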


Sanjana started off by leveraging machine learning algorithms for political data, drawing inferences from expenditure data for the presidential election cycle. She received her Master's in Computer Science with a specialization in Artificial Intelligence and spent a year working on her thesis with Prof. Sargur N. Srihari and Prof. Wen Dong on learning to differentiate writers by their handwriting using neural networks. Sanjana now works on conversational AI, researching and developing NLP techniques for Mya Systems.
