Artificial intelligence enjoyed a banner year in 2024. The frontier technology captured awards, corralled investors, charmed Wall Street and showed that it could reason mathematically — even explaining differential equations.
It also drew the attention of global regulators, concerned about privacy and safety risks. Others worried that AI might soon evolve into artificial general intelligence (AGI) and then artificial superintelligence — surpassing human cognitive abilities. Catastrophic scenarios were posited and discussed: bioterrorism, autonomous weapons systems and even “extinction-level” events.
Here are 10 of 2024’s AI highlights.
#1 GenAI dominates
Generative artificial intelligence (GenAI), a subset of AI, is able to create something out of nothing (well, apart from its voluminous training data). Prompt it with a line of text, for instance, and it can generate a 500-word ghost story.
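To make that prompt-to-output flow concrete, here is a minimal sketch in Python using the OpenAI SDK; the model name and the ghost-story prompt are illustrative assumptions, not a reference to any specific product discussed in this article.

```python
# Minimal sketch: prompting a generative model to produce text from a one-line request.
# Assumes the OpenAI Python SDK is installed and an OPENAI_API_KEY is set in the
# environment; the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": "Write a 500-word ghost story set in an abandoned lighthouse.",
        }
    ],
)

print(response.choices[0].message.content)
```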
GenAI took center stage in 2024. And it wasn’t just ChatGPT, the AI-enabled chatbot developed by OpenAI. Google’s Gemini, Microsoft’s Copilot, Anthropic’s Claude, and Meta’s Llama 3 series also helped push the envelope, producing software that could read and generate not just text, but also audio, video and images.
Enterprises spent freely to put these advances to work. AI spending surged to $13.8 billion in 2024, more than six times the amount forked out in 2023, according to Menlo Ventures, in “a clear signal that enterprises are shifting from experimentation to execution, embedding AI at the core of their business strategies.”
#2 AI captures Nobel prizes for physics, chemistry
Further evidence that AI is here to stay was provided in October when the Royal Swedish Academy of Sciences announced the 2024 Nobel Prizes. Geoffrey Hinton and John Hopfield took the physics prize “for foundational discoveries and inventions that enable machine learning with artificial neural networks.” Neural networks are a core technology in today’s AI.
Hinton, a British-Canadian computer scientist and cognitive psychologist — i.e., not a physicist — has often been called the “Godfather of AI.” His pathbreaking work on neural networks dates back to the 1980s, when he used tools from statistical physics, such as the Boltzmann machine, to advance machine learning.
Elsewhere, Demis Hassabis — co-founder and CEO of Google DeepMind — and John Jumper were honored with the Nobel Prize for chemistry for developing an artificial intelligence model that can predict proteins’ complex structures.
Canada’s own wins the Nobel Prize for AI work. Source: Justin Trudeau
#3 Nvidia overtakes Apple as world’s most valuable company
It takes a special type of computer chip to train and run the massive large language models (LLMs) that were so dominant in 2024, and chipmaker Nvidia produced more of these special graphics processing units, or GPUs, than any other company in the world.
It isn’t surprising, then, that Nvidia also became the world’s most valuable company in 2024 — reaching $3.53 trillion in market capitalization in late October, eclipsing Apple’s $3.52 trillion.
“More companies are now embracing artificial intelligence in their everyday tasks and demand remains strong for Nvidia chips,” commented Russ Mould, investment director at AJ Bell.
Will Nvidia keep its market dominance in 2025 and beyond? Nvidia’s widely anticipated Blackwell GPUs, expected to launch in the fourth quarter, were reportedly delayed because of design flaws, but given Nvidia’s enormous lead in GPUs — it controlled 98% of the market in 2023 — few expect it to be outdueled any time soon.
#4 AI legislation in the EU
Everyone wants an artificial intelligence that is safe, secure, and beneficial for society at large, but passing laws and implementing rules to ensure a responsible AI is no easy matter. Still, in 2024, global regulatory authorities took some first steps.
The European Union’s Artificial Intelligence Act came into force in August, introducing safeguards for general-purpose AI systems and addressing some privacy concerns. The act sets strict rules on the use of AI for facial recognition, for example, but it also seeks to address broader risks like automating jobs, spreading misinformation online and endangering national security. The legislation will be implemented in phases, stretching out until 2027.
Regulating AI won’t be easy, however, as California found out in 2024 when its proposed SB 1047 legislation was vetoed by the state’s governor in September. Described as the “most sweeping effort yet to regulate artificial intelligence,” SB 1047 had support from some AI proponents, such as Geoffrey Hinton and Elon Musk, who argued that it provided badly needed guardrails for this rapidly evolving technology.
But it also drew criticism from other technologists, like Andrew Ng, founder of DeepLearning.AI, who argued that imposing liability on AI developers could stifle innovation.
#5 Emergence of small language models (SLMs)
Massive AI models trained on billions of data points became commonplace in 2024. ChatGPT’s underlying model, for instance, was reportedly trained on some 570 gigabytes of text data scraped from the internet — about 300 billion words.
But for many enterprises the AI future lies in smaller, industry-specific language models, some of which began to emerge in 2024.
In April, Microsoft rolled out its Phi-3 small language models, while Apple presented eight small language models for its handheld devices. Microsoft and Khan Academy are now using SLMs to improve math tutoring for students, for example.
“There is much more compute available at the edge because the models are getting smaller for specific workloads, [and] you can actually take a lot more advantage of that,” Yorke Rhodes, Microsoft’s director for digital transformation, blockchain and cloud supply chain, explained at a May conference.
SLMs require less training data and computational power to develop and run, and their capabilities “are really starting to approach some of the large language models,” he added.
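For a sense of how lightweight these models are to run, the sketch below loads Microsoft’s publicly released Phi-3-mini checkpoint with the Hugging Face Transformers library; the prompt and generation settings are assumptions for illustration, not Microsoft’s recommended configuration.

```python
# Sketch: running a small language model locally with Hugging Face Transformers.
# Assumes `transformers` and `torch` are installed; the checkpoint is a multi-gigabyte
# download, and older Transformers versions may need trust_remote_code=True.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",  # a ~3.8B-parameter small language model
)

prompt = "Explain, step by step, how to solve 3x + 5 = 20."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```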
#6 Agentic AI moves to the forefront
Chatbots like ChatGPT are all about asking questions and receiving answers on a broad range of topics — though they can also write software code, draft emails, generate reports, and even write poetry.
But AI agents go a step beyond chatbots and can actually make decisions for users, enabling them to achieve specific goals. In the healthcare industry, an AI agent could be used to monitor patient data, making recommendations when appropriate to modify a specific treatment, for instance.
Luna is an AI agent built on Virtuals. Source: X
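In code terms, what separates an agent from a chatbot is a loop: the system repeatedly observes, decides and acts toward a goal instead of answering a single prompt. The sketch below is a purely hypothetical patient-monitoring agent along the lines of the healthcare example above; every function, threshold and data field is invented for illustration.

```python
# Hypothetical sketch of an agentic loop: observe -> decide -> act toward a goal.
# None of these functions belong to a real library; they stand in for the integrations
# an actual patient-monitoring agent would need (data feeds, a model, alerting).
import time


def fetch_patient_vitals(patient_id: str) -> dict:
    """Placeholder for a real data feed such as an EHR API or bedside monitor."""
    return {"heart_rate": 72, "systolic_bp": 118}


def assess_vitals(vitals: dict) -> str:
    """Placeholder for a model call that interprets the readings."""
    if vitals["systolic_bp"] > 140:
        return "recommend treatment review for elevated blood pressure"
    return "no action"


def notify_clinician(patient_id: str, recommendation: str) -> None:
    """Placeholder for an alerting or ticketing integration."""
    print(f"[{patient_id}] {recommendation}")


def run_agent(patient_id: str, cycles: int = 3) -> None:
    # The loop is what makes this agentic: it keeps pursuing its goal
    # and only escalates to a human when a decision rule fires.
    for _ in range(cycles):
        vitals = fetch_patient_vitals(patient_id)
        decision = assess_vitals(vitals)
        if decision != "no action":
            notify_clinician(patient_id, decision)
        time.sleep(1)  # in a real deployment this would be minutes or hours


run_agent("patient-001")
```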
Looking ahead, tech consulting firm Gartner named agentic AI as one of its “Top Strategic Technology Trends for 2025.” Indeed, by 2028, as much as a third of enterprise software applications will include agentic AI, the firm predicts, up from less than 1% in 2024.
AI agents could even be used to write blockchain-based smart contracts (technically they can already do so, but the risks of an errant bug and a loss of funds are too high at present). Blockchain project Avalanche has already begun building a new virtual machine at the intersection of AI and blockchains to do this in a natural language. “You write your [smart contract] programs in English, German, French, Tagalog, Chinese [...] a natural language that your mother taught you in your mother’s tongue,” said Ava Labs founder Emin Gün Sirer.
Smart contract programming as it stands today is really hard, so an easy-to-use AI agent could potentially bring in “billions of new [blockchain] users,” Sirer predicted.
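As a rough, hypothetical sketch of the idea, an agent could translate a plain-English specification into draft contract code by prompting a model, with a human review step before anything touches a blockchain. This is not Avalanche’s virtual machine or any production system; the model name, prompt and helper function below are assumptions for illustration only.

```python
# Hypothetical sketch: natural-language description in, draft smart-contract code out.
# Not Avalanche's design or any real product; assumes the OpenAI Python SDK and an
# API key, and nothing produced here should be deployed without expert review.
from openai import OpenAI

client = OpenAI()


def draft_contract(description: str) -> str:
    """Ask a model to turn a plain-English spec into draft Solidity source."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": "You write minimal, well-commented Solidity."},
            {"role": "user", "content": f"Draft a smart contract that does the following: {description}"},
        ],
    )
    return response.choices[0].message.content


spec = "Hold deposits and release them to a named recipient after a fixed date."
print(draft_contract(spec))  # a human auditor still needs to review the output
```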
#7 Reasoning models for solving ‘hard problems’
Chatbots have other limitations. They can struggle with simple math problems and software coding tasks, for instance. They aren’t great at answering scientific questions.
OpenAI sought to remedy matters in September with the release of OpenAI o1, a new series of reasoning models “for solving hard problems,” like differential equations. The response was mostly positive.
"Finally, an AI model capable of handling all the complex science, coding and math problems I’m always feeding it," tweeted New York Times columnist Kevin Roose.
On tests, o1 performed as well as the top 500 students in the US in a qualifier for the USA Math Olympiad, for instance, and exceeded human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems, OpenAI reported.
#8 Zeroing in on AGI
Why do advances in structured problem solving, as described above, matter? They bring AI incrementally closer to providing human-like intelligence, i.e., artificial general intelligence, or AGI.
OpenAI’s o3 models, unveiled just before Christmas, performed even better than o1, especially on math and coding tests, while other projects like Google’s Gemini 2.0 also made progress in 2024 on structured problem solving — that is, breaking down complex tasks into manageable steps.
However, AGI remains a distant goal in the view of many experts. Today’s advanced models still lack an intuitive understanding of physical concepts like gravity or causality, for instance. Nor can current AI algorithms pose questions on their own, or learn when scenarios take an unexpected turn.
Overall, “AGI is a journey, not a destination — and we’re only at the beginning,” Brian Hopkins, the vice president for emerging technology at consulting firm Forrester, declared recently.
#9 Signs of a looming training data shortage
Unquestionably, 2024 was an exciting year for AI developers and users alike, and few expect AI innovation to subside any time soon. But there were also suggestions in 2024 that AI’s LLM era may have already peaked.
The reason is a looming data shortage. Companies like OpenAI and Google may soon run out of data, AI’s lifeblood, used to “train” massive artificial intelligence systems.
Only so much data can be scraped from the internet, after all. Moreover, LLM developers are finding they can’t always gather publicly available data with impunity. The New York Times, for one, has sued OpenAI for copyright infringement with regard to its news content. It isn’t likely to be the only major news organization to seek recourse from the courts.
“Everyone in the industry is seeing diminishing returns,” said Google’s Demis Hassabis.
One answer may be to train algorithms using synthetic data — artificially generated data that mimics real-world data. AI developer Anthropic’s Claude 3 LLM, for instance, was trained, at least in part, on synthetic data, i.e., “data we generate internally,” according to the company.
Even though the term “synthetic data” may sound like an oxymoron, scientists, including some medical experts, say creating new data from scratch holds promise. It could support medical AI by filling out incomplete data sets, for instance, which could help eliminate bias against certain ethnic groups.
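As a rough sketch of how synthetic data generation can work in practice, the snippet below asks a model to produce question-and-answer pairs and saves them in a simple JSONL file of the kind often used for fine-tuning; the model name, prompt and record schema are assumptions for illustration, not a description of how Anthropic or anyone else actually builds internal datasets.

```python
# Sketch: generating synthetic training examples with a model and writing them to a
# JSONL file. The model name, prompt and record schema are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()

topics = ["fractions", "photosynthesis", "supply and demand"]
examples = []

for topic in topics:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{
            "role": "user",
            "content": f"Write one short study question about {topic}, then answer it. "
                       "Label the lines 'Q:' and 'A:'.",
        }],
    )
    examples.append({"topic": topic, "text": response.choices[0].message.content})

# Each line becomes one synthetic training record.
with open("synthetic_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```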
Anthropic is trying to lead the way with ethical AI. Source: Anthropic
#10 Emergence of a more ethical AI
Interestingly, Anthropic explains in some detail how it obtains its training data. Of particular note, it operates its website crawling system “transparently,” which means that website content providers — like The New York Times, presumably — “can easily identify Anthropic visits and signal their preferences to Anthropic.”
The firm has gone to some lengths to prevent misuse of its technology, even appointing a responsible scaling officer, whose remit was broadened in 2024 in an effort to create a “safe” AI. The company’s efforts didn’t go unnoticed: Time magazine named it one of the 100 most influential companies of 2024, extolling it as the “AI Company Betting That Safety Can Be a Winning Strategy.”
Given the drift of AI development in 2024 and public concern about potential catastrophic risks from these new frontier systems, it seems likely that more developers will soon embrace a more transparent and responsible approach to AI.