Recent research from Harvard and the University of Michigan has uncovered hidden capabilities in modern AI models: skills that emerge early in training but remain concealed until elicited by specific prompts. These findings challenge standard methods of measuring AI capabilities, since models may possess sophisticated skills that surface only under certain conditions. The study underscores the importance of transparency in AI development and safety, as conventional tests may underestimate what these models can actually do. By adjusting how training data was presented and by using alternative prompting techniques, the researchers extracted hidden abilities long before those abilities were detectable through standard evaluations. The discovery has significant implications for AI evaluation and points to the need for more advanced testing protocols to fully understand and harness the capabilities of AI models.