A 17-year-old high school student has designed a "god-tier prompt" that significantly enhances the reasoning ability of Claude 3.5, bringing it close to OpenAI's o1 model and sparking heated discussion in the community. Several developers who tested it found that the prompt not only let them recreate Flappy Bird and Texas Hold'em, but also performed well on Google Gemini after adaptation.

Recently, 17-year-old high school student Tu Jinhao wrote an instruction now known as a "god-tier prompt," which markedly improves the reasoning and thinking abilities of the AI model Claude 3.5 and has drawn widespread attention. The prompt simulates a human thought process, allowing Claude 3.5 to demonstrate complex reasoning and problem-solving comparable to OpenAI's o1 model. Tu Jinhao previously took first place globally in the AI track of the Alibaba Global Mathematics Competition. His prompt, called "Thinking Claude," is designed to have the Claude model work through a comprehensive thinking process before answering, bringing its reasoning closer to that of a human.

Thinking Claude principles

Python engineer Jie Ge analyzed the prompt and found that it guides Claude to first restate the question, analyze the background, break down the task, generate multiple hypotheses, and finally arrive at a coherent, in-depth answer through self-correction and verification. In detail, he identified the following steps:

1. Initial understanding: restate the question, understand its background, and identify known and unknown elements.
2. Problem space exploration: break the problem into parts and understand its requirements and constraints.
3. Hypothesis generation: propose multiple hypotheses and analyze the problem from different perspectives before selecting a solution.
4. Natural discovery process: dig deeper step by step, like a detective, to reach more insightful conclusions.
5. Verification and testing: self-examine, checking the logic for consistency and the analysis for comprehensiveness.
6. Error identification and correction: find deficiencies in the thinking and further improve and optimize it.
7. Knowledge integration: connect information from different sources to build a more comprehensive cognitive framework.
8. Pattern recognition and analysis: look for recurring patterns in the information and apply them to deeper research and reasoning.

Users develop classic games with Thinking Claude

The prompt's emergence has sparked wide attention and discussion in the AI and programming communities. Many developers found in practice that it noticeably improves the Claude model's performance, giving it stronger logic and more human-like thinking when handling complex tasks. One user built the classic game Flappy Bird with it, and others created a Texas Hold'em game on top of it, embedding an AI player for a more intelligent opponent. YouTuber "AI Turn Turn Turn" also adapted the prompt into a Gemini version and tested it in Google AI Studio, with good results.
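To make the scaffold described under "Thinking Claude principles" concrete, here is a minimal sketch of how such a thinking-style system prompt could be supplied through the Anthropic Python SDK. The scaffold string below is a heavily condensed paraphrase for illustration only, not the actual Thinking Claude instruction, and the model id is an assumption; in practice, copy the real prompt from the repository.

```python
# Minimal sketch: supplying a "Thinking Claude"-style scaffold as a system prompt.
# The scaffold below is a heavily condensed paraphrase for illustration only;
# the real Thinking Claude instruction should be copied from its repository.
import anthropic

THINKING_SCAFFOLD = """Before answering, think through the problem step by step:
1. Restate the question and identify known and unknown elements.
2. Break the problem into parts; note requirements and constraints.
3. Generate several hypotheses and compare them before choosing one.
4. Verify your reasoning, correct errors, and integrate what you know.
Only then write the final answer."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model id; use whichever 3.5 snapshot you have
    max_tokens=2048,
    system=THINKING_SCAFFOLD,            # the scaffold goes in the system field
    messages=[{"role": "user", "content": "Write a minimal Flappy Bird clone in Python."}],
)
print(response.content[0].text)
```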
Application methods

Using the prompt is simple: copy it from the model_instruction of Thinking Claude or Thinking-gemini (choosing the latest version) and paste it into the system instruction box of Claude or Google AI Studio, then test and develop through dialogue with the AI; for the API route, see the sketch at the end of this article. Interested readers can try it out and realize their own ideas!

Overall, Tu Jinhao's work demonstrates the potential of prompt engineering in AI and has stimulated interest in how prompts can be optimized to improve model performance. However, some experts have also pointed out that enhancing a model's capabilities through prompting has its limits: the model's foundational ability remains the key factor determining its performance.
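As a closing illustration of the Gemini route mentioned under "Application methods," here is a minimal sketch using the google-generativeai Python SDK, whose system_instruction parameter plays the same role as AI Studio's system instruction box. The model id and the condensed scaffold string are assumptions; in practice, paste the actual Thinking-gemini prompt.

```python
# Minimal sketch: the Gemini-side equivalent of AI Studio's system instruction box,
# using the google-generativeai SDK. The scaffold below is an illustrative
# paraphrase; paste the real Thinking-gemini instruction in its place.
import google.generativeai as genai

THINKING_SCAFFOLD = """Before answering, think step by step: restate the question,
break it down, generate and compare hypotheses, verify, then answer."""

genai.configure(api_key="YOUR_API_KEY")  # or load the key from an environment variable

model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",           # assumed model id
    system_instruction=THINKING_SCAFFOLD,  # counterpart of AI Studio's system instruction box
)

response = model.generate_content("Write a minimal Texas Hold'em dealer loop in Python.")
print(response.text)
```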