In the field of artificial intelligence, the intersection of faith and technology has produced a fierce debate over the validity and future of the "scaling laws." This article traces the rise, schisms, and possible consequences of this "Bitter Religion," revealing the complicated relationship between faith and science. The AI community is locked in a doctrinal battle over its future, and over whether it will grow large enough to create a god. This article is based on an essay by Mario Gabriele, compiled and translated by Block unicorn.

The Holy War of Artificial Intelligence

"I would rather live my life as if there is a God and die to find out there isn't, than live my life as if there isn't and die to find out there is." — Blaise Pascal

Religion is a funny thing. Perhaps because it is entirely unprovable in either direction, or perhaps, as one of my favorite sayings goes, "You can't fight feelings with facts." The peculiar thing about religious belief is that as it spreads, it accelerates at a pace that makes doubting the existence of God nearly impossible. How can you doubt a divine being when everyone around you increasingly believes in it? When the world rearranges itself around a doctrine, where is the place for heresy? When temples and cathedrals, laws and norms are ordered according to a new, unshakeable gospel, where is the room for opposition? When the Abrahamic religions first emerged and spread across continents, or when Buddhism spread from India across Asia, the sheer momentum of faith created a self-reinforcing cycle.
As more people converted, and complex theologies and rituals were built around these beliefs, questioning their basic premises became increasingly difficult. It is not easy to be a heretic in a sea of believers. Magnificent churches, intricate religious texts, and thriving monasteries all stand as physical evidence of the divine presence. But the history of religion also shows how easily such structures can break down. As Christianity spread into Scandinavia, the old Norse faith collapsed within a few generations. The religious system of ancient Egypt endured for thousands of years before vanishing as newer, more durable faiths arose and larger power structures emerged. Even within a single religion we see dramatic schisms: the Reformation tore Western Christianity apart, and the Great Schism split the Eastern and Western churches. These splits often begin with seemingly trivial doctrinal differences and evolve into entirely different belief systems.

Sacred Scripture

"God is a metaphor for that which transcends all levels of intellectual thought. It's as simple as that." — Joseph Campbell

Put simply, belief in a god is religion. Perhaps creating one is no different. Since the field's inception, optimistic AI researchers have imagined their work as creationist: the making of a god. The explosive progress of large language models (LLMs) over the past few years has only strengthened believers' conviction that we are on a divine path. It has also vindicated a blog post written in 2019. Though little known outside AI circles until recently, Canadian computer scientist Richard Sutton's "The Bitter Lesson" has become an increasingly important text in the community, evolving from esoteric knowledge into the foundational scripture of a growing religion.
In 1,113 words (every religion needs its sacred numbers), Sutton distills a technical observation: "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin." Progress in AI models has been driven by exponential increases in computing resources, riding the great wave of Moore's Law. At the same time, Sutton noted, much of AI research has focused on squeezing out performance through specialized techniques: building in human knowledge, or narrowly tailored tools. While these optimizations may help in the short term, Sutton sees them as ultimately a waste of time and resources, like adjusting a surfboard's fins or trying a new wax while a giant wave is building. This is the foundation of what we call the "Bitter Religion." It has only one commandment, usually referred to in the community as the "scaling laws": exponential growth in compute drives performance; everything else is folly. From large language models (LLMs) to world models, the Bitter Religion is now spreading rapidly through the unconverted temples of biology, chemistry, and embodied intelligence (robotics and autonomous vehicles). However, as Sutton's doctrine has spread, its definition has begun to shift. This is the hallmark of every active, living religion: argument, extension, exegesis. "Scaling laws" no longer means merely scaling compute (the Ark was more than just a boat); it now refers to a variety of approaches for improving transformer and compute performance, with a few tricks thrown in.
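The one commandment above is usually written as a power law: loss falls smoothly as compute grows, roughly loss ≈ a · C^(−b). A minimal sketch of fitting such a curve in log space (the numbers here are illustrative placeholders, not measurements from any real model):

```python
import numpy as np

# Hypothetical (compute, loss) pairs -- illustrative values only,
# not data from any actual training run.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
loss = np.array([4.0, 3.2, 2.6, 2.1, 1.7])

# Scaling laws posit loss ~= a * C**(-b); in log space the power
# law becomes a straight line, so an ordinary linear fit suffices.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

print(f"loss ~= {a:.2f} * C^(-{b:.3f})")
# Extrapolate one order of magnitude further out.
print(f"predicted loss at C=1e23: {a * 1e23 ** -b:.2f}")
```

This extrapolation step is exactly what the faithful rely on: if the straight line in log-log space keeps holding, more compute buys predictably lower loss.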
The canon now encompasses attempts to optimize every part of the AI stack, from techniques applied to the core models themselves (model merging, mixture-of-experts (MoE), and knowledge distillation) to generating synthetic data to feed these ever-hungry gods, with a great deal of experimentation in between.

Warring Sects

One question recently roiling the AI community, with the air of a holy war, is whether the "Bitter Religion" still holds true. The conflict was sparked this week by a new paper from Harvard, Stanford, and MIT titled "Scaling Laws for Precision." The paper discusses the end of efficiency gains from quantization, a family of techniques that improve the performance of AI models and have greatly benefited the open-source ecosystem. Tim Dettmers, a research scientist at the Allen Institute for AI, laid out its significance in the post below, calling it "the most important paper in a long time." It continues a conversation that has been heating up over the past few weeks and reveals a noteworthy trend: the hardening of two rival sects.

OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei belong to the same sect. Both have confidently stated that we will achieve artificial general intelligence (AGI) in roughly 2-3 years. Altman and Amodei are arguably the two figures most dependent on the sanctity of the Bitter Religion. All their incentives point toward overpromising and generating maximum hype in order to accumulate capital in a game dominated almost entirely by economies of scale. If the scaling laws are not the "Alpha and Omega," the first and the last, the beginning and the end, then what do you need $22 billion for?

Former OpenAI chief scientist Ilya Sutskever adheres to a different set of principles.
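Before turning to the rival sect, it is worth making concrete what quantization, the technique at issue in "Scaling Laws for Precision," actually does: it compresses model weights into low-precision integers, trading a little accuracy for large savings in memory and compute. A minimal sketch of symmetric per-tensor int8 quantization (a generic illustration, not the specific method studied in the paper):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto [-127, 127]."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(seed=0)
w = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Rounding error is bounded by half a quantization step (scale / 2).
print("max abs error:", float(np.max(np.abs(w - w_hat))))
```

Storing weights in int8 takes a quarter of the memory of float32; the debate above is about whether the gains from such low-precision tricks are running out as models scale.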
He, along with other researchers (including, judging by recent leaks, many inside OpenAI), believes that scaling is approaching its limits. This camp holds that new science and research will be needed to sustain progress and bring AGI into the real world. The Sutskever faction argues, reasonably, that the Altman faction's vision of endlessly continued scaling is not economically viable. Just like artificial...