Written by: Mario Gabriele

Compiled by: Block unicorn

The Holy War of Artificial Intelligence

I would rather live my life as if God exists, and find out upon death that God does not exist, than live my life as if God does not exist, and find out upon death that He does exist. — Blaise Pascal

Religion is a curious thing. Perhaps that is because it is completely unprovable in either direction, or perhaps, as my favorite saying goes, 'you cannot fight emotion with facts.'

A characteristic of religious belief is that, as faith rises, it gathers momentum at unbelievable speed, until doubting the existence of God becomes almost impossible. When those around you increasingly believe, how can you doubt a divine presence? When the world rearranges itself around a doctrine, where is there still room for heresy? When temples and cathedrals, laws and norms are all arranged according to a new, unshakeable gospel, where is the space for opposition?

When the Abrahamic religions first emerged and spread across continents, or when Buddhism spread from India across Asia, the sheer momentum of faith created a self-reinforcing cycle. As more people converted and complex theological systems and rituals grew up around these beliefs, it became increasingly difficult to question their fundamental premises. In a sea of credulity, becoming a heretic is not easy. Grand cathedrals, intricate scriptures, and thriving monasteries stand as physical evidence of the divine presence.

But the history of religion also tells us how easily such structures can collapse. As Christianity spread to the Scandinavian Peninsula, the ancient Nordic faith collapsed in just a few generations. The ancient Egyptian religious system lasted for thousands of years, only to vanish when new, more enduring beliefs arose within larger power structures. Even within the same religion, we have seen dramatic schisms — the Reformation tore apart Western Christianity, while the Great Schism led to the division of the Eastern and Western Churches. These schisms often begin with seemingly trivial doctrinal differences, gradually evolving into entirely different belief systems.

Scripture

God is a metaphor that transcends all levels of intellectual thought. It's that simple. — Joseph Campbell

In simple terms, believing in God is religion. Perhaps creating God is no different.

Since the field's inception, its most optimistic researchers have envisioned their work as an act of creation: the making of God. Over the past few years, the explosive development of large language models (LLMs) has only strengthened the believers' conviction that we are on a sacred path.

It has also vindicated a blog post written in 2019. Although people outside AI have only recently come across it, Canadian computer scientist Richard Sutton's 'The Bitter Lesson' has become an increasingly important text within the community, evolving from esoteric knowledge into a new, all-encompassing religious foundation.

In 1,113 words (every religion needs a sacred number), Sutton summarizes a technological observation: 'The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.' Progress in AI models has been driven by exponential increases in computational resources, riding the enormous wave of Moore's Law. Meanwhile, Sutton notes, much AI research has focused on squeezing out performance through specialized techniques: adding human knowledge or narrow tools. These optimizations may help in the short term, but in Sutton's view they are ultimately a waste of time and resources, like adjusting the fins of your surfboard or experimenting with new wax as a massive wave rolls in.

This is the foundation of what we call the 'bitter religion.' It has only one commandment, commonly referred to in the community as 'scaling laws': exponential growth in computation drives performance; everything else is folly.
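To make the commandment concrete, the empirical scaling-law literature usually writes it as a power law. The form and the exponent below are illustrative assumptions drawn from that literature, not figures from this essay:

```latex
% Illustrative form of a compute scaling law: loss L falls as a power law in compute C.
\[
  L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha} + L_{\infty}, \qquad \alpha \sim 0.05
\]
% Each order-of-magnitude increase in C buys a roughly constant multiplicative drop in loss;
% the irreducible term L_infinity is what the doubters argue we are now brushing against.
```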

The bitter religion is rapidly spreading beyond large language models (LLMs) to world models, and is now sweeping through the unconverted temples of biology, chemistry, and embodied intelligence (robotics and autonomous vehicles).

However, as Sutton's doctrine has spread, its definitions have begun to shift. This is the hallmark of every active, vibrant religion: debate, extension, and commentary. 'Scaling laws' no longer refer simply to scaling computation (the ark is not just a boat); they now cover a variety of methods for improving transformer and compute performance, with a few tricks folded in.

The canon now encompasses attempts to optimize every part of the AI stack, from techniques applied to the core models themselves (model merging, mixture of experts (MoE), and knowledge distillation) to generating synthetic data to feed these ever-hungry gods, along with a great deal of experimentation in between.
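To ground one of the techniques named above, here is a minimal, hypothetical sketch of knowledge distillation, assuming PyTorch and two arbitrary toy classifiers; it illustrates the general idea, not any particular lab's recipe.

```python
# Minimal knowledge-distillation sketch (assumes PyTorch; toy models and data).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft targets from the teacher with ordinary cross-entropy on labels."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy example: a larger, frozen "teacher" guiding a smaller "student".
teacher = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
student = torch.nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, y)
loss.backward()  # gradients flow only into the student
```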

Warring Sects

A question recently raised in the AI community, with the fervor of a holy war, is whether the 'bitter religion' still holds.

This week, researchers from Harvard, Stanford, and MIT ignited the conflict with a new paper, 'Scaling Laws for Precision.' It charts the diminishing efficiency gains from quantization, a family of techniques that has improved AI model performance and been a major boon to the open-source ecosystem. Tim Dettmers, a research scientist at the Allen Institute for AI, outlined its significance in a post, calling it 'the most important paper in a long time.' It extends an increasingly heated conversation of the past few weeks and reveals a notable trend: the increasing consolidation of two rival faiths.
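For readers unfamiliar with the term, here is a hypothetical illustration of what quantization means in this context: storing weights as low-precision integers plus a scale factor instead of 32-bit floats. The scheme below (symmetric per-tensor int8, in NumPy) is a generic textbook variant, not the specific setup studied in the paper.

```python
# Generic symmetric int8 quantization sketch (NumPy); illustrative only.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Approximate w as scale * q, with q stored as 8-bit integers in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
# Roughly 4x smaller storage than float32, at the cost of a small reconstruction
# error, whose behavior at scale is what precision scaling laws try to characterize.
print(np.abs(w - dequantize(q, scale)).max())
```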

OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei belong to the same sect. Both confidently state that we will achieve Artificial General Intelligence (AGI) in roughly two to three years. Altman and Amodei are arguably the two figures most dependent on the sanctity of the 'bitter religion.' All of their incentives push toward overpromising, generating maximum hype to accumulate capital in a game dominated almost entirely by economies of scale. If scaling laws are not the 'alpha and omega,' the beginning and the end, then what do you need $22 billion for?

Former OpenAI Chief Scientist Ilya Sutskever adheres to a different set of principles. He, along with other researchers (including, according to recent leaks, many inside OpenAI), believes that scaling is approaching its limits. This camp argues that new science and research will be needed to sustain progress and bring AGI into the real world.

Sutskever's faction reasonably points out that the continued-scaling philosophy of Altman's camp is economically untenable. As AI researcher Noam Brown asks, 'After all, do we really want to train models costing hundreds of billions or trillions of dollars?' And that is before accounting for the additional tens of billions needed for inference if computational scaling shifts from training to inference.

However, true believers know their opponents' arguments well. The missionaries at your doorstep have a ready answer for your Epicurean paradox. To the concerns of Brown and Sutskever, the answer offered is scaling 'test-time compute.' Rather than relying on ever-larger training runs to improve models, test-time compute allocates more resources at execution: when an AI model needs to answer a question or produce a piece of code or text, it can be given more time and computation to do so. It is akin to shifting your effort from cramming for a math exam to persuading the teacher to give you an extra hour and let you bring a calculator. For many in the ecosystem, this is the new frontier of the 'bitter religion,' as teams shift from orthodox pre-training to post-training and inference approaches.
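As a rough, hypothetical illustration of spending compute at execution time, the best-of-N sketch below samples many candidate answers and keeps the highest-scoring one. `generate` and `score` are placeholder stand-ins for a sampler and a verifier, not real APIs; the whole setup is an assumption for illustration.

```python
# Hypothetical best-of-N sketch of test-time compute: the model is unchanged,
# but more samples (plus a verifier) are spent on each query.
import random

def generate(prompt: str, seed: int) -> str:
    """Stand-in for drawing one candidate answer from a fixed model."""
    rng = random.Random(hash((prompt, seed)))
    return f"candidate-{seed}|quality={rng.random():.3f}"

def score(prompt: str, answer: str) -> float:
    """Stand-in for a verifier / reward model that rates a candidate."""
    return float(answer.split("quality=")[-1])

def answer_with_budget(prompt: str, n_samples: int) -> str:
    """A larger test-time budget means more candidates to choose among."""
    candidates = [generate(prompt, seed=i) for i in range(n_samples)]
    return max(candidates, key=lambda a: score(prompt, a))

print(answer_with_budget("prove the lemma", n_samples=1))   # small budget
print(answer_with_budget("prove the lemma", n_samples=64))  # larger budget, better pick
```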

It is easy to point out the flaws in other belief systems and criticize other doctrines without exposing one's own position. So what do I believe? First, I believe the current batch of models will yield a very high return on investment over time. As people learn to work around their limitations and build on existing APIs, we will see genuinely innovative product experiences emerge and succeed. We will move past the skeuomorphic, incremental stage of AI products. Rather than 'Artificial General Intelligence' (AGI), a framing I think is flawed, we should think in terms of a 'minimum viable intelligence' that can be tailored to each product and use case.

As for achieving Artificial Superintelligence (ASI), the discussion needs more structure. Clearer definitions and distinctions would help us weigh the trade-offs between the economic value and the economic cost each might bring. For instance, AGI may deliver economic value to a subset of users (merely a localized belief system), while ASI may exhibit unstoppable compounding effects that transform the world, our belief systems, and our social structures. I do not believe ASI can be reached by scaling transformers alone; but, sadly, some might say that is just my atheism talking.

Lost Faith

The AI community will not resolve this holy war in the short term; there are no facts to bring to an emotional struggle. Instead, we should turn our attention to what it would mean for the AI world to doubt its faith in scaling laws. A loss of faith could trigger a chain reaction extending beyond large language models (LLMs) into every industry and market.

It should be noted that in most areas of AI and machine learning we have not yet pushed scaling laws to their limits; there are surely more wonders to come. But if doubt creeps in, it will become harder for investors and builders to hold the same conviction about the eventual performance ceiling of 'earlier on the curve' categories such as biotech and robotics. In other words, if large language models begin to slow down and stray from the chosen path, the belief systems of many founders and investors in adjacent fields will collapse.

Whether this is fair is another question.

One view holds that general intelligence naturally requires greater scale, so the 'quality' of specialized models should emerge at smaller scales, making them less likely to hit bottlenecks before delivering real value. If a model in a specific domain ingests only a fraction of the data, and thus needs only a fraction of the compute to be viable, shouldn't it have plenty of headroom left? Intuitively this makes sense, but we repeatedly find the key lies elsewhere: adding adjacent, or even seemingly unrelated, data often improves performance on apparently unconnected tasks. Including programming data, for example, seems to enhance broader reasoning capabilities.

In the long run, the debate over specialized models may be moot. Anyone building ASI is likely aiming for an entity that can self-replicate and self-improve, exhibiting limitless creativity across fields. Holden Karnofsky, a former OpenAI board member and founder of Open Philanthropy, calls this creation 'PASTA' (the Process for Automating Scientific and Technological Advancement). Sam Altman's original monetization plan seemed to rest on similar logic: 'build AGI, then ask it how to generate a return.' This is apocalyptic AI, the ultimate destiny.

The success of large AI labs like OpenAI and Anthropic has fueled the capital markets' appetite for backing similar 'OpenAI of X' labs whose long-term goal is to build 'AGI' for a specific vertical or domain. If scaling breaks down, it stands to reason that the paradigm will shift away from imitating OpenAI and toward product-centric companies, a possibility I laid out at Compound's 2023 annual meeting.

Unlike the apocalyptic model, these companies must demonstrate steady, incremental progress. They will be companies built around large-scale engineering problems, with the ultimate goal of shipping products, rather than scientific organizations conducting applied research.

In science, if you know what you are doing, you shouldn't be doing it. In engineering, if you don't know what you are doing, you shouldn't be doing it. — Richard Hamming

Believers are unlikely to abandon their sacred faith in the short term. As noted earlier, as religions surge they codify scripts for living and worship, along with a set of heuristics. They build physical monuments and infrastructure that reinforce their power and wisdom, demonstrating that they 'know what they are doing.'

In a recent interview, Sam Altman said this when discussing AGI (emphasis on 'we'):

This is the first time I have felt that we really know what to do. There is still a lot of work between now and building AGI. We know there are known unknowns, but I think we basically know what to do; it will take time, it will be hard, but it is also very exciting.

Judgment

In questioning the 'bitter religion,' the scaling faithful are wrestling with one of the most profound questions of the past few years, one each of us has entertained in some form. What happens if we invent God? How quickly would that God appear? What happens if AGI truly, irreversibly arrives?

As with all unknowable, complex topics, each of us quickly files away a stock reaction: some despair at their impending irrelevance; most expect a mix of destruction and prosperity; and the rest anticipate that humanity will do what it does best, finding new problems to solve and fixing the ones it created, on the way to pure abundance.

Anyone with a meaningful stake wants to predict what the world will look like for them if scaling laws hold and AGI arrives within a few years. How will you serve this new God, and how will this new God serve you?

But what if a gospel of stagnation drives out the optimists? What if we begin to think that even God may be in decline? In a previous article, 'Robot FOMO, Scaling Laws, and Technological Predictions,' I wrote:

I sometimes wonder what would happen if scaling laws do not hold, whether it would resemble the impact that lost revenue, slowing growth, and rising interest rates had on much of the tech sector. I also sometimes wonder what would happen if scaling laws hold completely, whether it would resemble the commoditization curves of pioneers in many other fields and how much value they ultimately captured.

"The benefit of capitalism is that, regardless of the circumstances, we will spend vast amounts of money to find the answers."

For founders and investors, the question becomes: what will happen next? Candidates who could become great product builders in every vertical are gradually becoming known. There will be more such individuals in each industry, but this story has already begun to unfold. Where will new opportunities arise?

If scaling stagnates, I expect a wave of shutdowns and mergers. The companies that remain will shift their focus increasingly toward engineering, an evolution we should be able to anticipate by tracking talent flows. We have already seen signs that OpenAI is moving in this direction as it productizes itself. That shift will open space for the next generation of startups to overtake incumbents on the bend, betting on novel applied research and science rather than engineering to chart new paths.

Lessons of Religion

My view on technology is that anything that seems to compound in an obvious way rarely lasts long, and a commonly held view is that any business that appears to compound obviously tends, strangely, to grow at far below the expected speed and scale.

The early signs of religious schism tend to follow predictable patterns, which can serve as a framework for tracking the evolution of the 'bitter religion.'

It often begins with the emergence of competing interpretations, whether for capitalist or ideological reasons. In early Christianity, differing views on the divinity of Christ and the nature of the Trinity led to schisms, resulting in starkly different biblical interpretations. Beyond the AI schism we've mentioned, there are other emerging fractures. For instance, we see a segment of AI researchers rejecting core orthodoxies of transformers, turning instead to other architectures like State Space Models, Mamba, RWKV, Liquid Models, etc. While these are still soft signals, they indicate the budding of heretical thoughts and a willingness to rethink the field from foundational principles.

Over time, the impatient words of prophets can also lead to distrust. When the predictions of religious leaders do not come to fruition, or divine intervention does not materialize as promised, it sows the seeds of doubt.

The Millerite movement predicted Christ's return in 1844; when Jesus did not arrive on schedule, the movement collapsed. In the tech world, we tend to bury failed predictions quietly and let our prophets keep painting optimistic, long-horizon futures despite repeatedly missed deadlines (hi, Elon). But without the support of continually improving raw model performance, faith in scaling laws could face a similar collapse.

A corrupt, bloated, or unstable religion is vulnerable to heretics. The Protestant Reformation advanced not only because of Luther's theological views but also because it arrived during a period of decline and turmoil for the Catholic Church. When mainstream institutions show cracks, long-standing 'heretical' ideas suddenly find fertile ground.

In artificial intelligence, we might watch for smaller models or alternative approaches that achieve comparable results with far less compute or data, such as the work of various Chinese corporate labs and open-source teams (like Nous Research). Anyone who breaks through limits long thought insurmountable, such as those of biological intelligence, could also write a new narrative.

The most direct and timely way to spot the beginnings of a shift is to track the movements of practitioners. Before any formal schism, religious scholars and clergy often hold heretical views in private while appearing compliant in public. Today's equivalent may be AI researchers who outwardly adhere to scaling laws while quietly pursuing radically different approaches, waiting for the right moment to challenge the consensus or leave their labs in search of broader intellectual horizons.

The tricky thing about religions and technological orthodoxies is that they usually contain something true, just not as universally true as their most faithful adherents believe. Just as religions weave fundamental human truths into their metaphysical frameworks, scaling laws clearly describe something real about how neural networks learn. The question is whether that reality is as complete and immutable as the current enthusiasm suggests, and whether the religious institutions (the AI labs) are flexible and strategic enough to lead the zealots forward while also building the printing presses (chat interfaces and APIs) that let their knowledge spread.

Endgame

"Religion is true in the eyes of the common people, false in the eyes of the wise, and useful in the eyes of the rulers." — Lucius Annaeus Seneca

A possibly outdated view of religious institutions is that, once they reach a certain scale, they become driven by survival incentives, like most human-run organizations competing to endure, and in doing so they neglect truth and loftier motives (the two are not mutually exclusive).

I once wrote about how capital markets become narrative-driven echo chambers, with incentive structures that perpetuate those narratives. The consensus around scaling laws feels ominously familiar: a deeply entrenched belief system that is mathematically elegant and extremely useful for coordinating large-scale capital deployment. Like many religious frameworks, it may be more valuable as a coordination mechanism than as a fundamental truth.