AutoML at Scale: Integrating Data as Part of Hyperparameter Optimization


Neural networks dominate the modern machine learning landscape, but their training and performance remain sensitive to the empirical choice of training-task hyperparameters. Automated machine learning (AutoML) techniques aim to automate and optimize the population of training tasks during this empirical parameter search, in order to maximise performance under limited computational resources. Current approaches to AutoML focus on configurations common to deep learning models, such as architecture, loss function, learning rate and optimization algorithm. Nevertheless, the training data and its quality are treated as constant, in contrast to their importance in determining the quality of the trained model. In this work, we propose an integrated approach to AutoML that includes, in addition to an off-the-shelf hyperparameter optimizer, parameterization over the metadata population selected for training.
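The abstract does not include an implementation, but the core idea of folding data selection into the hyperparameter search can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' method: all function names and parameters (e.g. `data_fraction`, `min_label_quality`, the simulated scoring function) are hypothetical, and a plain random search stands in for whatever off-the-shelf optimizer would be used in practice. The point is simply that data-selection knobs are sampled and evaluated alongside conventional hyperparameters such as the learning rate.

```python
import random

# Hypothetical stand-in for a real training run: a deterministic score lets
# the sketch stay self-contained. In practice this would train a model on the
# metadata-selected subset and return a validation metric.
def train_and_evaluate(learning_rate, data_fraction, min_label_quality):
    lr_term = -abs(learning_rate - 0.01) * 10          # best near lr = 0.01
    data_term = data_fraction * (1.0 if min_label_quality >= 0.5 else 0.5)
    return lr_term + data_term

def sample_config(rng):
    # Data-selection parameters are sampled in the SAME search space as
    # conventional hyperparameters -- this is the integration the abstract
    # describes, in its simplest form.
    return {
        "learning_rate": 10 ** rng.uniform(-4, -1),
        "data_fraction": rng.uniform(0.1, 1.0),      # share of selected data used
        "min_label_quality": rng.uniform(0.0, 1.0),  # quality threshold on examples
    }

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        score = train_and_evaluate(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
```

Any population-based or Bayesian optimizer could replace the random search here; the design choice being illustrated is only that the search space includes the data dimension.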


