The Power of Monotonicity to Make ML Make Sense


The key to machine learning is getting the right flexibility. For many ML problems, we have prior knowledge about global trends the model should capture, such as that predicted travel time should go up as traffic gets worse. But flexible models like DNNs and random forests can have a hard time capturing such global trends from noisy training data, which limits their ability to extrapolate well when you run a model on examples that differ from your training data. TensorFlow's new TensorFlow Lattice tools let you create flexible models that respect the global trends you request, producing easier-to-debug models that generalize well. TF Lattice provides new TF Estimators that make capturing your global trends easy, and we'll also explain the underlying new TF Lattice operators that you can use to build your own deeper lattice networks.
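To make the idea concrete, here is a minimal, framework-free sketch (not the TF Lattice API — all names below are illustrative assumptions) of the core trick behind monotonicity constraints: a piecewise-linear calibrator whose output keypoints are parameterized as a cumulative sum of non-negative increments, so the learned function is guaranteed non-decreasing no matter how noisy the fitted parameters are.

```python
def monotonic_calibrator(input_keypoints, raw_deltas, bias=0.0):
    """Build a monotone non-decreasing piecewise-linear function.

    raw_deltas may be any real numbers (e.g. noisy fitted parameters);
    clamping them at zero enforces the monotonicity constraint.
    Expects len(raw_deltas) == len(input_keypoints) - 1.
    """
    deltas = [max(0.0, d) for d in raw_deltas]  # project onto the constraint
    # Output keypoints: bias, bias + d0, bias + d0 + d1, ...
    outputs = [bias]
    for d in deltas:
        outputs.append(outputs[-1] + d)

    def f(x):
        # Clip outside the keypoint range, interpolate linearly inside it.
        if x <= input_keypoints[0]:
            return outputs[0]
        if x >= input_keypoints[-1]:
            return outputs[-1]
        for i in range(len(input_keypoints) - 1):
            lo, hi = input_keypoints[i], input_keypoints[i + 1]
            if lo <= x <= hi:
                t = (x - lo) / (hi - lo)
                return outputs[i] + t * (outputs[i + 1] - outputs[i])

    return f

# E.g. predicted travel time as a function of traffic level: even though one
# fitted delta came out negative (noise), the model stays non-decreasing.
travel_time = monotonic_calibrator(
    input_keypoints=[0.0, 1.0, 2.0, 3.0],
    raw_deltas=[5.0, -2.0, 3.0],  # the -2.0 is clamped to 0
    bias=10.0,
)
```

In TF Lattice, the same effect is achieved by constraining the model's parameters during training rather than by post-hoc clamping, but the guarantee to the caller is the same: the trained model cannot violate the requested trend.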


Maya Gupta leads Google's Glassbox Machine Learning R&D team, which focuses on designing and developing controllable and interpretable machine learning algorithms that solve Google product needs. Prior to Google, Gupta was an Associate Professor of Electrical Engineering at the University of Washington from 2003 to 2013. Her PhD is from Stanford, and she holds a BS in EE and a BA in Economics from Rice.

Open Data Science
One Broadway
Cambridge, MA 02142