Posted by Maya Gupta, Research Scientist, Jan Pfeifer, Software Engineer, and Seungil You, Software Engineer



Cross-posted on the Google Open Source Blog



















www.youtube.com/watch?v=kaP…



How does a lattice model help you?





A slice of the feature space in which all other inputs stay constant and only the distance varies. A flexible function (pink) fit precisely to the Tokyo training examples (purple) predicts that a coffee shop 10 km away will do better than an identical coffee shop 5 km away. The problem becomes even more apparent at test time if the data distribution shifts, as shown with the blue examples from Texas, where coffee shops are spread farther apart.
The monotonic flexible function (green) is accurate on the training examples and generalizes to the Texas examples, unlike the non-monotonic flexible function (pink) from the previous figure.
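At its core, a lattice model is a look-up table whose entries are learned and whose output is interpolated between the table's entries. Here is a minimal illustrative sketch in plain Python (our own toy code, not the TensorFlow Lattice API): a 2x2 lattice over two inputs, where monotonicity in the first input follows simply from ordering the vertex values.

```python
# Toy 2x2 lattice: one learned value per corner of the unit square.
# Rows index x0, columns index x1. Each column is non-decreasing along
# axis 0, so the interpolated function is monotonically increasing in x0.
params = [[0.0, 0.2],
          [0.7, 1.0]]

def lattice_interpolate(params, x0, x1):
    """Bilinear interpolation of the lattice vertex values at (x0, x1) in [0,1]^2."""
    top = params[0][0] * (1 - x1) + params[0][1] * x1      # edge at x0 = 0
    bottom = params[1][0] * (1 - x1) + params[1][1] * x1   # edge at x0 = 1
    return top * (1 - x0) + bottom * x0

# At the corners, the model reproduces the table entries exactly; in
# between, it interpolates, and increasing x0 never decreases the output.
```

Because the model's parameters are the look-up table values themselves, monotonicity can be imposed as simple linear inequality constraints on those values, which is what makes these shape constraints practical to train.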












Premade Estimators



TensorFlow Lattice's premade Estimators, built on TensorFlow Estimators: calibrated linear models, and calibrated lattice models that can capture nonlinear feature interactions.






Build your own model





An example of a 9-layer deep lattice network architecture [5], alternating layers of linear embeddings and ensembles of lattices with calibrator layers (which act like a sum of ReLUs in a neural network). The blue lines correspond to monotonic inputs, whose monotonicity is preserved layer by layer and hence for the entire model. You can build this and other arbitrary architectures with TensorFlow Lattice because each layer is differentiable.
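The caption notes that a calibrator layer acts like a sum of ReLUs. A one-dimensional piecewise-linear calibrator can indeed be written that way: a bias plus one ReLU hinge per keypoint, where each hinge's coefficient is the change in slope at that keypoint. A hedged sketch (names are ours, not the library's):

```python
def relu(z):
    return max(z, 0.0)

def pwl_calibrator(x, keypoints, slope_changes, bias):
    """Piecewise-linear calibrator expressed as bias + sum of ReLU hinges.

    Each (keypoint, slope_change) pair adds `slope_change` to the slope
    for all x beyond that keypoint.
    """
    return bias + sum(s * relu(x - k) for k, s in zip(keypoints, slope_changes))

# A calibrator that is flat below 0, rises with slope 2 on [0, 1],
# then with slope 0.5 (= 2 - 1.5) beyond 1.
keypoints = [0.0, 1.0]
slope_changes = [2.0, -1.5]
bias = 0.0
```

Constraining the calibrator to be monotonic amounts to requiring that the running sum of slope changes stays non-negative, which again is a set of linear constraints on the parameters.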

In addition to choosing your model's flexibility and using standard L1 and L2 regularization, TensorFlow Lattice offers new regularizers:


  • Monotonicity constraints [3] on your choice of inputs, as described above.
  • Laplacian regularization [3] on the lattices to make the learned function flatter.
  • Torsion regularization [3] to suppress unwanted nonlinear feature interactions.
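To make the Laplacian regularizer concrete: on a lattice, it penalizes the squared differences between the values of adjacent lattice vertices, so a flatter learned function incurs a smaller penalty. An illustrative sketch for a one-dimensional lattice (our own toy code, not the TensorFlow Lattice implementation):

```python
def laplacian_penalty(vertex_values):
    """Sum of squared differences between neighboring lattice vertex values.

    A constant (flat) function has zero penalty; the steeper or wigglier
    the function, the larger the penalty.
    """
    return sum((b - a) ** 2 for a, b in zip(vertex_values, vertex_values[1:]))
```

On a multi-dimensional lattice the same idea applies along each axis of the grid, and torsion regularization analogously penalizes how much the slope in one input changes as another input varies.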

To get started, check out the codebase and the tutorials.






Acknowledgements



Developing and open-sourcing TensorFlow Lattice was a huge team effort. We're very grateful to everyone involved: Andrew Cotter, Kevin Canini, David Ding, Mahdi Milani Fard, Yifei Feng, Josh Gordon, Kiril Gorovoy, Clemens Mewald, Taman Narayan, Alexandre Passos, Christine Robson, Serena Wang, Martin Wicke, Jarek Wilkiewicz, Sen Zhao, and Tao Zhu.






References



Lattice Regression
Eric Garcia, Maya Gupta, Advances in Neural Information Processing Systems (NIPS), 2009



Optimized Regression for Efficient Function Evaluation
Eric Garcia, Raman Arora, Maya R. Gupta, IEEE Transactions on Image Processing, 2012



Monotonic Calibrated Interpolated Look-Up Tables
Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, Alexander van Esbroeck, Journal of Machine Learning Research (JMLR), 2016



Fast and Flexible Monotonic Functions with Ensembles of Lattices
Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta, Advances in Neural Information Processing Systems (NIPS), 2016



Deep Lattice Networks and Partial Monotonic Functions
Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya R. Gupta, Advances in Neural Information Processing Systems (NIPS), 2017