Google DeepMind has introduced JEST, a new training method for AI models that it claims significantly improves training speed and energy efficiency. Rather than selecting individual data points, JEST operates on whole batches: a smaller model first assesses data quality and picks out the most useful batches, which are then used to train a larger model. Because the approach depends on a carefully curated, high-quality training dataset, it may be difficult for amateur AI developers to implement.

The timing of the research is notable given growing concern about the environmental impact of AI data centers, with AI workloads already consuming a substantial amount of power. Whether major players in the AI space will adopt JEST-style methods remains uncertain, but the hope is that it could lower both power consumption and training costs. Ultimately, the method's impact on the industry may come down to whether companies use the efficiency gains to cut costs and energy use, or to push for ever-faster, larger-scale training.
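The batch-selection idea can be illustrated with a toy sketch. This is not DeepMind's code: the loss functions, pool sizes, and scoring rule below are illustrative stand-ins. The core intuition is that a batch is worth training on when the learner model still finds it hard but a smaller pretrained reference model finds it easy, so we score candidate batches by that gap and keep the best one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example losses; real JEST uses actual model losses.
def learner_loss(batch):
    # Loss of the large model being trained (stand-in linear function).
    return batch @ np.array([0.6, 0.4])

def reference_loss(batch):
    # Loss of the small pretrained reference model (stand-in).
    return batch @ np.array([0.2, 0.1])

# A pool of 1000 toy examples with 2 features, grouped into
# 50 candidate batches of 20 examples each.
pool = rng.random((1000, 2))
candidates = pool.reshape(50, 20, 2)

# Score each batch as a whole: high learner loss (still hard to learn)
# minus low reference loss (known to be learnable) = high "learnability".
learnability = learner_loss(candidates).sum(axis=1) - reference_loss(candidates).sum(axis=1)

# Train the large model only on the most learnable batch.
best_batch = candidates[np.argmax(learnability)]
print(best_batch.shape)  # (20, 2)
```

In a real training loop this selection would repeat every step, so the small model's scoring cost must stay well below the savings from skipping low-value data.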
