According to PANews, the Apple research team has released an advanced open language model, OpenELM. OpenELM uses a layer-wise scaling strategy to allocate parameters non-uniformly across the layers of the transformer model, thereby improving accuracy. With a parameter budget of approximately 1 billion, OpenELM achieves 2.36% higher accuracy than OLMo while requiring half as many pre-training tokens.
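The layer-wise scaling idea can be sketched as follows: instead of giving every transformer layer the same width, the attention and feed-forward dimensions grow with depth. The multiplier ranges, base width, and head size below are illustrative assumptions for this sketch, not Apple's published hyperparameters.

```python
def layer_wise_scaling(num_layers, model_dim=1280, head_dim=64,
                       alpha_range=(0.5, 1.0),   # attention-width multipliers (assumed)
                       beta_range=(2.0, 4.0)):   # FFN-width multipliers (assumed)
    """Return per-layer (num_heads, ffn_dim) pairs that grow linearly with
    depth, instead of the uniform allocation used by standard transformers."""
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)  # 0.0 at the first layer, 1.0 at the last
        alpha = alpha_range[0] + t * (alpha_range[1] - alpha_range[0])
        beta = beta_range[0] + t * (beta_range[1] - beta_range[0])
        num_heads = max(1, round(alpha * model_dim / head_dim))
        ffn_dim = round(beta * model_dim)
        configs.append((num_heads, ffn_dim))
    return configs

# Early layers get fewer heads and a narrower FFN; later layers get more,
# so the same parameter budget is spent where it helps accuracy most.
configs = layer_wise_scaling(num_layers=4)
```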

Unlike previous releases that provided only model weights and inference code, with pre-training done on private datasets, the OpenELM release includes a complete framework for training and evaluating language models on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. Apple also released code to convert the model to the MLX library for inference and fine-tuning on Apple devices.

As early as February this year, Apple CEO Tim Cook said that Apple's generative AI features would launch "later this year". It is reported that iOS 18, due to be unveiled in June, may be the "biggest" update in the history of Apple's iOS, and that the first AI-focused iPhone will follow in September.