What Do Neural Embedding Based Translation Algorithms Tell Us About Language Similarity?

Abstract: In this talk we review insights gained from training and then analyzing several neural-embedding-based machine translation algorithms. We use several methodologies to train the linear translation matrices that map neural word embeddings of one language's vocabulary onto those of a second language. Training is conducted so that the resulting translation matrices satisfy the mathematical properties required for a spectral decomposition, giving us (in effect) a representation vector of the inter-language relationships. Analysis of these translation "spectra" provides mathematical criteria for measuring similarity between languages. The results confirm several hypotheses from linguistic etymology about, for example, the similarity among Romance languages versus English or East Asian languages.
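The pipeline described above can be sketched in a few lines. This is a minimal toy illustration, not the talk's actual training procedure: it assumes random stand-in embeddings, a shared embedding dimension (so the translation matrix is square and has a spectrum), and a simple least-squares fit in the style of Mikolov et al.'s linear translation mapping.

```python
import numpy as np

# Hypothetical toy data; real use would load pretrained monolingual
# word embeddings for each language plus a bilingual dictionary.
rng = np.random.default_rng(0)
d = 50          # shared embedding dimension (W must be square to decompose)
n_pairs = 500   # number of known word-translation pairs

X = rng.normal(size=(n_pairs, d))   # source-language word vectors
Y = rng.normal(size=(n_pairs, d))   # target-language translation vectors

# Fit the linear translation matrix W minimizing ||X W - Y||_F
# (least-squares mapping; the talk's exact training method may differ).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Spectral decomposition: the eigenvalues of W form the translation
# "spectrum" that can be compared across language pairs.
eigvals = np.linalg.eigvals(W)
spectrum = np.sort(np.abs(eigvals))[::-1]

print(spectrum[:5])
```

Comparing two languages' spectra (e.g. by a vector distance between them) would then give a numeric similarity criterion; the specific metric used in the talk is not stated here.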

Bio: Mike serves as Takt's Chief Data Science Officer, UC Berkeley Data Science faculty member, and head of Skymind Labs, the machine learning research lab affiliated with DeepLearning4J. He has led teams of data scientists in the Bay Area as Chief Data Scientist for InterTrust, Director of Data Sciences for MetaScale/Sears, and CSO for Galvanize, where he founded the accredited galvanizeU-UNH Master of Science in Data Science degree and oversaw the company's transformation from a co-working space into a Data Science organization. Mike began his career in academia as a mathematics teaching fellow at Columbia University before teaching at the University of Pittsburgh.