Next year, OpenAI will enter the era of AI systems.

What big moves is OpenAI planning for the year ahead after GPT-4? Where is OpenAI's moat? What is the value of AI Agents? And with a number of longtime employees departing, will OpenAI lean toward hiring younger people with more passion and energy?

On November 4, OpenAI CEO Sam Altman (hereinafter 'Altman') answered these questions on 'The Twenty Minute VC' (20VC) podcast, stating clearly that enhancing reasoning capabilities is the core of OpenAI's strategy.

When the podcast's host, 20VC founder Harry Stebbings (hereinafter 'Stebbings'), asked what opportunities OpenAI would leave for AI entrepreneurs, Altman argued that a business focused on patching model deficiencies will lose its competitive edge as OpenAI's models upgrade; entrepreneurs should instead build businesses that benefit from increasingly powerful models, which will be a huge opportunity.

In Altman's view, the way people discuss AI is a bit outdated: systems, rather than models, are the more worthwhile focus, and next year will be a key year in OpenAI's transition to AI systems.

Here are the highlights from the conversation between Stebbings and Altman:

OpenAI plans to build no-code tools.

Stebbings: I'll start today's interview with a question from the audience: is OpenAI's future direction to launch more reasoning models like o1, or to train larger and stronger models?

Altman: We will optimize the models comprehensively; enhancing reasoning capabilities is the core of our current strategy.

I believe that strong reasoning capabilities will unlock a range of features we look forward to, including allowing AI to make substantive contributions to scientific research and to write highly complex code, which will greatly propel society's development and progress.

Everyone can expect ongoing and rapid iteration and optimization of the GPT series of models, which will be the focus and priority of our future work.

Sam Altman was interviewed on the podcast by 20VC founder Harry Stebbings.

Stebbings: Will OpenAI develop no-code tools for non-technical users in the future, allowing them to easily build and scale AI applications?

Altman: There is no doubt that we are steadily moving towards this goal.

Our initial plan is to significantly enhance the efficiency of programmers, but in the long run, our goal is to create top-notch no-code tools. Although there are already some no-code solutions in the market, they currently cannot fully meet the needs of creating a complete startup in a no-code manner.

Stebbings: In the future, in which areas of the technology ecosystem will OpenAI expand? Given that OpenAI might dominate at the application level, if startups invest a lot of resources to optimize existing systems, is this a waste of resources? How should founders think about this issue?

Altman: Our goal is to continuously improve our models. If your business is just to address some minor shortcomings of the existing models, once our models become strong enough and those shortcomings no longer exist, your business model may become uncompetitive.

However, if you can build a business that benefits from the continuous improvement of models, this will be a huge opportunity.

Imagine someone revealed to you in advance that the next GPT model would be extraordinarily powerful, capable of accomplishing tasks that currently seem impossible; you would then be able to plan and build your business from a much longer-term perspective.

Stebbings: We have discussed with venture capitalist Brad Gerstner how OpenAI might affect certain niche markets. From a founder's perspective, which companies might be impacted by OpenAI and which might survive? As investors, how should we evaluate this issue?

Altman: Artificial intelligence will create trillions of dollars in value; it will give rise to entirely new products and services, making feasible what was previously impossible or impractical.

In some areas, we expect the models themselves to become powerful enough to accomplish the goal almost effortlessly; in other areas, the new technology will be amplified by the excellent products and services built on top of it.

In the early days, it surprised me that about 95% of startups seemed to be betting that models would not get better; it no longer surprises me now. When GPT-3.5 was released, we had already foreseen the potential of GPT-4 and knew it would be very powerful.

So, if the tools you’re building are merely to compensate for the model's shortcomings, as the models improve, those shortcomings will become increasingly irrelevant.

When models performed poorly in the past, people were more inclined to develop products to compensate for model flaws rather than building revolutionary products like 'AI teachers' or 'AI medical advisors.' I feel that at that time, 95% of people were betting that models would not improve, while only 5% believed that models would get better.

Now the situation has reversed; people understand the speed of improvement and also comprehend our direction of development.

This issue is no longer as prominent, but we were once very concerned because we foresaw that those companies working to address model flaws might face difficulties.

Stebbings: You mentioned that 'artificial intelligence will create trillions of dollars in value'; Masayoshi Son (founder and CEO of SoftBank Group) also predicted 'AI will generate $9 trillion in value each year', enough to offset what he sees as 'the necessary $9 trillion in capital expenditure.' What do you think about this?

Altman: I can't give an exact number. Obviously, substantial capital will be expended and enormous value created, as has happened in every major technological revolution, and AI is undoubtedly one of them.

Next year will be a pivotal year for us as we enter the era of the next generation of AI systems.

Take the no-code software agents you mentioned: I am not sure how long that will take, and it is not feasible yet. But if we could achieve it, allowing everyone to easily obtain the complete suite of enterprise software they need, how much economic value would that unlock for the world?

If you can maintain the same value output while making it more convenient and cost-effective, this will have a massive impact.

I believe we will see more similar examples, including in healthcare and education, representing trillions of dollars in markets.

If AI can drive new solutions in these areas, I think the specific numbers are not important; what matters is that it will indeed create incredible value.

Excellent AI Agents possess capabilities that surpass human abilities.

Stebbings: What role do you think open-source will play in the future development of artificial intelligence? Within OpenAI, how does the discussion around 'whether certain models should be open-sourced' take place?

Altman: Open-source models play a crucial role in the AI ecosystem.

There are already some outstanding open-source models available.

I believe it is also crucial to provide both high-quality services and APIs. In my view, offering these elements as a product portfolio makes sense, allowing people to choose the solution that best fits their needs.

Stebbings: Besides open-source, we can also provide services to customers through Agents. How do you define 'Agent'? What do you think it is and what it is not?

Altman: I believe an Agent is a program that can perform long-term tasks and requires almost no human supervision during the execution of those tasks.
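As a minimal illustration of that definition (a toy sketch; the stub functions are invented for the example and are not an OpenAI API), an agent is essentially a loop that works through a long-horizon task and escalates to a human only when it gets stuck:

```python
import random

def try_step(task: str, step: int) -> bool:
    """Stub for one unit of autonomous work; succeeds most of the time."""
    return random.random() > 0.05  # occasionally gets stuck

def ask_human(question: str) -> str:
    """The rare escalation point -- the 'almost no supervision' part."""
    return input(question)

def run_agent(task: str, total_steps: int = 100) -> None:
    # Work through a long task, consulting a human only when stuck.
    for step in range(total_steps):
        if not try_step(task, step):
            ask_human(f"Stuck at step {step} of '{task}'. Any guidance? ")
    print(f"Finished: {task}")

run_agent("research the market and draft a report")
```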

Stebbings: Do you think people misunderstand Agents?

Altman: Rather than misunderstanding, I would say we have not fully understood the role Agents will play in the world of the future.

A frequently cited example is having an AI Agent help make restaurant reservations, such as being able to use OpenTable or directly call the restaurant.

This can certainly save some time, but I think what’s more exciting is that Agents can do things that humans cannot, like contacting 300 restaurants at once to find the best dishes or restaurants that can offer special services.

This is a nearly impossible task for a human, but if the callers are AI agents, they can work in parallel, and the problem becomes solvable.

Although this example is simple, it demonstrates the capabilities of Agents that surpass human abilities. What’s even more interesting is that Agents can not only help you book restaurants but can also act like a very smart senior colleague, collaborating with you to complete a project; or it can independently handle a task that takes two days or even two weeks, only contacting you when encountering problems, and ultimately presenting an excellent result.
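To make the parallel-calling point concrete, here is a toy sketch (my illustration; `query_restaurant` is an invented stand-in for a real phone call or booking API): an agent can fan out 300 queries concurrently in a way no human caller could.

```python
import asyncio
import random

async def query_restaurant(name: str, request: str) -> tuple[str, bool]:
    """Stand-in for one call; returns whether the restaurant can accommodate."""
    await asyncio.sleep(random.uniform(0.1, 0.5))  # simulated call latency
    return name, random.random() < 0.1             # ~10% say yes

async def find_matches(request: str) -> list[str]:
    restaurants = [f"restaurant_{i}" for i in range(300)]
    # Fan out all 300 queries concurrently instead of calling one by one.
    results = await asyncio.gather(
        *(query_restaurant(r, request) for r in restaurants)
    )
    return [name for name, ok in results if ok]

matches = asyncio.run(find_matches("table for 10 with a tasting menu"))
print(f"{len(matches)} restaurants can accommodate the request")
```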

Stebbings: Will this Agent model affect the pricing of SaaS (Software as a Service)? Traditionally, SaaS charges based on user seats, but now Agents are effectively replacing human roles. How do you see the future pricing models changing, especially as AI Agents become a core part of corporate employees?

Altman: I can only speculate, because we really can’t be certain.

I can envision a scenario where the future pricing model will be determined by the computational resources you use, such as whether you need 1 GPU, 10 GPUs, or 100 GPUs to solve a problem.

In this case, pricing will no longer be based on the number of seats or even the number of Agents, but rather determined by the actual computational resources consumed.
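As a back-of-the-envelope illustration of that contrast (all prices here are invented for the example): per-seat pricing scales with headcount, while compute-based pricing scales with the GPUs a task actually consumes.

```python
# Toy comparison of the two pricing models contrasted above.
SEAT_PRICE_PER_MONTH = 30.0  # classic SaaS: dollars per seat per month
GPU_HOUR_PRICE = 2.50        # compute-based: dollars per GPU-hour

def saas_cost(seats: int, months: int = 1) -> float:
    return seats * SEAT_PRICE_PER_MONTH * months

def compute_cost(gpus: int, hours: float) -> float:
    return gpus * hours * GPU_HOUR_PRICE

# A 50-person team vs. an agent using 10 GPUs for 40 hours:
print(saas_cost(seats=50))               # 1500.0 per month, regardless of usage
print(compute_cost(gpus=10, hours=40))   # 1000.0, tied to resources consumed
```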

Stebbings: Do we need to build specific models for Agents?

Altman: Indeed, a large amount of infrastructure is required to support the operation of Agents, but I believe o1 has already pointed the way toward a general-purpose model capable of executing complex Agent tasks.

Models are depreciating assets, but the experience gained from training them is worth more than the cost.

Stebbings: Many believe that as the trend of model commodification becomes more pronounced, models are depreciating assets. How do you view this perspective? Currently, as the capital intensity of training models increases, does this mean only a few companies can bear such costs?

Altman: Indeed, models can be seen as depreciating assets, but it is completely wrong to conclude that their value is lower than their training cost.

In fact, during the training of models, we can achieve positive compounding effects, as the knowledge and experience we gain from training will help us train the next generation of models more efficiently.

I believe the actual revenue we earn from our models has already justified these investments. Of course, not all companies can achieve such results.

Currently, many companies may be training very similar models, but if you fall slightly behind, or do not have a product that can continuously attract users and provide value, obtaining investment returns may become even more difficult.

We are fortunate to have ChatGPT, which is used by hundreds of millions of users, so even if the costs are high, we can spread these costs through our large user base.

Stebbings: How will OpenAI's models maintain differentiation in the future? What aspects do you hope to expand differentiation in?

Altman: Reasoning capabilities are the area we are currently most focused on, and I believe this will be key to unlocking the next stage of large-scale value creation.

Moreover, we will also focus on the development of multimodal models and introduce new features that we believe are critical for users.

Stebbings: How will visual capabilities expand under the new o1 reasoning paradigm?

Altman: Without giving spoilers, I expect image models to develop rapidly.

Stebbings: Anthropic's models are sometimes considered to perform better on programming tasks. What do you think about this? Do you think this assessment is fair? How should developers choose between OpenAI and other providers?

Altman: Anthropic does indeed have a model that performs excellently in programming, and their work is truly impressive.

I believe developers will generally use multiple models at the same time, and I’m not sure how this will change as the field develops. But I believe AI will be ubiquitous in the future.

The way we currently discuss AI may be somewhat outdated; I predict we will shift from discussing 'models' to discussing 'systems', but this will take time to realize.

Stebbings: Regarding the issue of model scaling, how long do you think the scaling law for models can continue? In the past, people have thought it would not last long, but it seems to be more enduring than expected.

Altman: Without going into details, the core question is: will the trajectory of model capability improvement continue as it has? I believe it will, and for a considerable time yet.

Stebbings: Have you ever had doubts about this?

Altman: We have indeed encountered behaviors we could not understand, experienced failed training runs, and tried various new paradigms.

When we are close to reaching the limits of a paradigm, we must find the next breakthrough point.

Stebbings: What is the toughest challenge to deal with in this process?

Altman: We encountered some extremely tricky problems during the development of GPT-4, which left us feeling helpless at times, unsure how to break through.

Ultimately, we successfully overcame these challenges. But there was indeed a period when we felt lost about how to advance the development of the model.

Additionally, o1 and the shift to reasoning models represent a goal we had long dreamed of achieving, but the research path to it was full of challenges and twists.

Stebbings: In this long and winding process, how do you keep the team's morale high? How do you maintain morale when the training process may fail?

Altman: Our team members are all passionate about building AGI (Artificial General Intelligence), which is an incredibly motivating goal.

We all know this is not an easy path, and success will not come easily. As the saying goes: "I never ask God to stand on my side, but rather to help me stand on His side."

Diving into deep learning feels like a just cause; although there will inevitably be setbacks in the process, we seem to always make progress in the end. This steadfast belief is a tremendous help to us.

Stebbings: Regarding semiconductor supply chain issues, how concerned are you about the semiconductor supply chain and international tensions?

Altman: I cannot quantify the degree of this concern, but there is no doubt that I do feel worried.

While it may not be my biggest concern, it certainly ranks among the top 10% of everything I care about.

Stebbings: May I ask what your biggest concerns are?

Altman: Overall, what worries me most is the complexity of trying to accomplish all the work in the entire field.

While I believe everything will eventually be resolved, this is indeed an extremely complex system.

This complexity exists at every level, including within OpenAI and within each team. In semiconductors, for example, we need to balance power supply, make the right networking decisions, and ensure we get enough chips, while also weighing the risks and whether research progress can keep pace with these challenges, so that we are neither caught completely off guard nor wasting resources.

The entire supply chain seems like a straight pipeline, but the complexity of the ecosystem at every level exceeds what I have seen in any other industry. To some extent, this is precisely what I worry about most.

Stebbings: You mentioned unprecedented complexity; many people compare the current wave of AI with the dot-com bubble era, especially when mentioning excitement and enthusiasm. I think the difference lies in the scale of funding involved. Larry Ellison (co-founder of Oracle) once said that the entry cost for competing in foundational models is $100 billion. Do you agree with this view?

Altman: No, I don’t think the costs will be that high. But there is an interesting phenomenon here: people like to use past technological revolutions to draw parallels with new revolutions to make them seem more familiar.

I think overall this is not a good habit, but I understand why people do it. I also feel that the AI analogies people choose are particularly inappropriate; the internet is clearly very different from AI.

You mentioned cost as an example — whether it truly requires $10 billion or $100 billion to be competitive. That alone sets AI apart, because a hallmark of the internet revolution was that it was easy to get started.

Another feature similar to the internet is that for many companies, AI is just an extension of the internet — others will build these AI models, and you can leverage them to develop various outstanding products.

That view treats AI as simply a new way to build technology. But if you want to build AI itself, the situation is completely different.

Another common analogy is electricity, but I think this is not applicable in many ways.

Although I believe people should not lean too heavily on analogies, my favorite is the transistor: a new discovery in physics with incredible scalability that quickly permeated every field and benefited the whole tech industry. The products and services we use contain enormous numbers of transistors, yet you would not call the companies that make those products 'transistor companies.'

Transistors required a very complex and expensive industrial process, and a massive supply chain formed around them.

This simple physical discovery has led to long-term economic growth, even though most people are not aware of its existence most of the time; they just feel that 'this thing can help me process information.'

Maintain high standards for talent rather than leaning towards a specific age group.

Stebbings: In what ways do you think the world wastes people's talent?

Altman: There are many very talented people in the world who cannot fully realize their potential due to working at unsuitable companies, living in countries that do not support great companies, or various other reasons.

One of the things I’m most excited about with AI is that it could help us better unlock the potential of every individual, and we are currently far from doing enough in this regard. I believe there are many potential great AI researchers in the world whose life trajectories are just different.

Stebbings: Over the past year, you have experienced incredibly rapid growth. Looking back over the past decade, what has been the biggest change in how you lead?

Altman: For me, the most unusual thing in recent years has been the speed of change.

A normal company growing from zero to one hundred million in revenue, then from one hundred million to one billion, and finally from one billion to one hundred billion, usually takes a long time, while we have to complete this process in just two years.

We have transitioned from a purely research lab to a company that truly serves a large number of customers, and this rapid shift has left me with less time to learn.

Stebbings: What are you looking to spend more time learning?

Altman: How to guide the company to focus on achieving 10x growth rather than just 10% growth.

To grow from a company with billions in revenue to one with hundreds of billions requires profound changes, not just repeating last week's work.

But the challenge of rapid growth is that we do not have enough time to solidify the foundations.

I had underestimated the effort required to catch up and continue advancing in such a rapidly growing environment.

Internal communication, information sharing, structured management, and how to balance short-term needs with long-term development are all extremely important.

For example, to ensure that the company can execute in the next year or two, we need to prepare computational resources, office space, and more in advance. Effective planning is very challenging in such a rapidly growing environment.

Stebbings: Keith Rabois (venture capitalist) once said that he learned from Peter Thiel (co-founder of PayPal) that hiring people under 30 is the secret to building great companies. What do you think of that advice — is hiring very energetic, ambitious young people the only way to build a company?

Altman: I was about 30 years old when I started OpenAI — not that young, but it seems to have worked out fine (laughs).

So, this is indeed a path worth trying.

Stebbings: However, while young people are full of energy and ambition, they may lack experience; or should we choose those with experience who have already proven themselves?

Altman: The obvious answer is that both types of talent hiring can be successful, as we have done at OpenAI.

Just before today’s interview, I was discussing a young person who just joined our team, probably in their twenties, but their work performance is outstanding.

I am thinking about how we can find more talent like that, as these young people bring new perspectives and energy.

However, on the other hand, if you are designing one of the most complex and most expensive computing systems in human history, I would not lightly entrust that responsibility to someone just starting out.

Therefore, we need a combination of both types of talent. I believe the key is to maintain high standards for talent rather than simply favoring a certain age group.

I am particularly grateful for Y Combinator (startup incubator) because it made me realize that a lack of experience does not mean a lack of value.

There are many high-potential talents early in their careers who can create immense value, and our society should invest in these talents, which is a very positive thing.

Stebbings: I recently heard a saying — the heaviest burden in life is not iron or gold, but the decisions not made. For you, which unmade decision has caused you the most stress?

Altman: The answer to that changes every day; no single unmade decision stands out.

Of course, we do face some significant decisions, such as which product direction to choose or how to design the next generation of computers, all of which are important and risky choices.

When faced with such situations, I might postpone the decision. But most of the time, the challenge is facing 51% vs. 49% dilemmas every day. These decisions come to me precisely because they are hard to resolve; I may be no more certain than others on the team of making the better choice, but I must decide anyway.

So, the core of the issue lies in the quantity of decisions, rather than any specific decision.

Stebbings: When faced with a 51% vs. 49% decision, do you have a go-to person to consult?

Altman: No, I don’t think relying on one person for everything is the right way.

For me, a better approach is to find 15 or 20 people with good intuition and background knowledge in specific areas and consult the best experts when needed, rather than relying on a single consultant.

Quick Q&A

Stebbings: Suppose you were a 23- or 24-year-old today; given the existing infrastructure, what would you choose to do?

Altman: I would choose an AI-supported vertical field, such as AI education, and develop the best AI educational products to enable people to learn knowledge in any field.

Similar examples could include AI lawyers, AI CAD engineers, and so on.

Stebbings: You mentioned writing a book; what would you name it?

Altman: I haven't thought of a name yet. I haven't given the book much thought; I just feel that it could unlock a lot of people's potential. It would probably relate to the theme of human potential.

Stebbings: In the AI field, what directions do you think people are overlooking that should receive more attention?

Altman: What I hope to see is an AI that can understand your entire life.

It doesn't require infinite context, but some way for an AI Agent to know all of your data and use it to assist you would be wonderful.

Stebbings: Has anything surprised you in the past month?

Altman: It's a research result I cannot reveal, but it is shocking.

Stebbings: Who is your most respected competitor? Why?

Altman: In fact, I respect everyone in this field; the entire field is filled with outstanding talents and excellent work.

I’m not intentionally avoiding the question; I just see talented people doing outstanding work everywhere.

Stebbings: Is there a specific one?

Altman: There isn't a specific one.

Stebbings: Which OpenAI API is your favorite?

Altman: The new Realtime API is fantastic. We now have a large API business with a lot of great things in it.

Stebbings: Who do you respect the most in the AI field today?

Altman: I want to specifically mention the Cursor team, they have brought truly magical experiences with AI and created a lot of value for people.

Many people have not pieced together all the elements, while they have. I deliberately did not mention anyone from OpenAI, otherwise, this list would be very long.

Stebbings: What is your take on the trade-off between latency and accuracy?

Altman: You need a knob to adjust between the two. Right now, for example, you want me to answer questions quickly, so I try not to spend minutes thinking; here, latency matters.

If you want me to make a significant discovery, you might be willing to wait a few years. The answer is that it should be user-controllable.
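For what it's worth, OpenAI's API later exposed exactly this kind of user-controllable knob on its reasoning models via a `reasoning_effort` parameter. A hedged sketch (assuming the `openai` Python client; model names and accepted values may differ by version):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(question: str, effort: str) -> str:
    # `reasoning_effort` trades latency for accuracy on reasoning models;
    # "o1" is used here as a placeholder model name.
    response = client.chat.completions.create(
        model="o1",
        reasoning_effort=effort,  # "low" = answer fast, "high" = think longer
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask("Quick sanity check: what is 17 * 24?", effort="low"))
print(ask("Propose a fault-tolerant job scheduler design.", effort="high"))
```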

Stebbings: When you think about insecurity in leadership, which areas do you think you need to improve the most, and what would you most like to enhance as a leader and CEO?

Altman: Recently, I feel more uncertain than before about what the details of our product strategy should be.

Overall, I feel that product is my weak point, and the company currently needs me to provide a clearer product vision.

We have a great product owner and team, but this is an area I wish I were better at, and I have felt this particularly strongly recently.

Stebbings: You hired Kevin Weil (OpenAI's Chief Product Officer), whom I have known for many years, and he is excellent. What qualities make Kevin a world-class product leader?

Altman: "Discipline" is the first word that comes to my mind.

Stebbings: What do you specifically mean?

Altman: He is very focused on priorities, knows what to say no to, and can think from the user's perspective about why to do or not do something — very rigorous, with no wishful thinking.

Stebbings: Looking ahead to the next five and ten years, if you had a magic wand to paint the vision of OpenAI in five and ten years, what would it look like?

Altman: I can sketch out the next two years fairly easily. But if our bets are right and we start producing some extremely strong systems — ones that accelerate science, for example — it could lead to incredible technological progress.

I believe that in five years, we will see a staggering pace of technological advancement, even surpassing everyone's expectations. Society may feel that 'the moment for AGI has come and gone'; we will discover many new things, not only in AI research but also in other scientific fields.

On the other hand, I think the changes brought to society by technological progress are relatively limited.

For example, if you had asked people five years ago: Will computers pass the Turing test?

They would probably have said no. And if you had told them yes, they would have assumed it would bring tremendous changes to society.

Now you see, we have indeed broadly passed the Turing test, but social changes have not been that dramatic.

This is my expectation for the future, that technological advances will continually surpass all expectations, while social changes will be relatively slow.

I believe this is a good and healthy state. In the long term, technological advances will certainly bring about tremendous changes in society, but these changes will not be reflected so quickly within five to ten years.

This article is reprinted in collaboration with Deep Tide.
