“BERT takes months of computation and is very expensive — say, $1 million to generate the model and repeat these processes,” Bahrami said. “So if everyone wants to do the same thing, it’s too expensive – it’s not energy efficient and it’s not good for the world.”
While the field shows promise, researchers are still looking for ways to make autoML techniques more computationally efficient. For example, methods like Neural Architecture Search currently build and test many different models to find the best fit, and the energy required to complete all these iterations can be enormous.
AutoML techniques can also be applied to machine learning algorithms that do not involve neural networks, such as creating random decision forests or support vector machines to classify data. Research in those areas is further along, and many coding libraries are already available to people who want to incorporate autoML techniques into their projects.
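The core idea behind such libraries can be sketched in miniature: define a search space of candidate models and hyperparameters, evaluate sampled candidates, and keep the best. The search space, configurations, and scoring function below are hypothetical placeholders, not any specific library's API; a real autoML system would actually train and validate each candidate.

```python
import random

# Hypothetical search space: two model families and their hyperparameters.
SEARCH_SPACE = {
    "random_forest": {"n_trees": [10, 50, 100], "max_depth": [3, 5, 10]},
    "svm": {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]},
}

def sample_config(rng):
    """Draw one random (model, hyperparameters) candidate."""
    model = rng.choice(list(SEARCH_SPACE))
    params = {name: rng.choice(values)
              for name, values in SEARCH_SPACE[model].items()}
    return model, params

def evaluate(model, params):
    """Toy stand-in for training and scoring a real model on validation
    data; here we just return a deterministic pseudo-random score."""
    score_rng = random.Random((model, tuple(sorted(params.items()))).__repr__())
    return score_rng.uniform(0.6, 0.95)

def auto_select(n_trials=20, seed=0):
    """Random search: try n_trials candidates, return the best scorer."""
    rng = random.Random(seed)
    candidates = [sample_config(rng) for _ in range(n_trials)]
    return max(candidates, key=lambda cand: evaluate(*cand))

best_model, best_params = auto_select()
print(best_model, best_params)
```

Random search is only the simplest strategy; practical systems replace it with smarter optimizers (e.g., Bayesian optimization), but the automate-the-selection loop is the same.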
Conference organizer Hutter said the next step is to use autoML to quantify uncertainty and address questions of trust and fairness in algorithms. In this vision, standards for trustworthiness and fairness would be treated like any other machine learning constraint, such as accuracy, and autoML could capture and automatically correct biases found in algorithms before they are released.
But for things like deep learning, autoML still has a long way to go. The data used to train deep learning models, such as images, documents, and recorded speech, is often dense and complex, and it requires enormous computing power to process. The cost and time needed to train these models can be prohibitive for anyone except researchers at private companies with deep pockets.
A competition at the conference asked participants to develop energy-efficient alternative algorithms for neural architecture search. This is a significant challenge because the technique has notorious computing demands: it automatically loops through countless deep learning models to help researchers choose the right one for their application, but the process can take months and cost over a million dollars.
The goal of these alternative algorithms, known as zero-cost neural architecture search proxies, is to make neural architecture search more accessible and environmentally friendly by dramatically reducing its computational requirements, returning results in seconds rather than months. These techniques are still in the early stages of development and are often unreliable, but machine learning researchers predict they have the potential to make the model selection process much more efficient.
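The basic shape of this zero-cost idea can be illustrated with a toy sketch: instead of training every candidate network, score each one with a cheap heuristic computed directly from the untrained architecture. The candidate generator and the proxy below (scoring by raw parameter count) are deliberately simplified stand-ins for illustration, not any published zero-cost method:

```python
import random

def random_architecture(rng):
    """Sample a hypothetical candidate: layer widths for a small MLP."""
    depth = rng.randint(2, 5)
    return [rng.choice([16, 32, 64, 128]) for _ in range(depth)]

def zero_cost_score(widths, n_inputs=8):
    """Cheap proxy computed without any training. Here: the total
    parameter count of the network, a toy stand-in for real proxies
    such as gradient norms at initialization."""
    params, prev = 0, n_inputs
    for width in widths:
        params += prev * width + width  # weights + biases for this layer
        prev = width
    return params

def zero_cost_search(n_candidates=100, seed=0):
    """Score every candidate in seconds instead of training each one."""
    rng = random.Random(seed)
    candidates = [random_architecture(rng) for _ in range(n_candidates)]
    return max(candidates, key=zero_cost_score)

best = zero_cost_search()
print("selected layer widths:", best)
```

The time savings come from replacing a full training run per candidate with a single cheap computation; the open research question, as the article notes, is whether such proxy scores reliably track how well each architecture would perform after real training.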