Appearing at the NVIDIA conference, why did NEAR suddenly become the leading AI public chain?

The news that NEAR founder @ilblackdragon will appear at NVIDIA's AI conference has recently drawn a lot of attention to the NEAR chain, and the price action has been encouraging as well. Many friends are puzzled: isn't NEAR all in on chain abstraction, so how did it quietly become a leading AI public chain? Below I share my observations, along with some background on how AI models are trained:

1) NEAR founder Illia Polosukhin has a long AI background and was a co-creator of the Transformer architecture, the foundation of ChatGPT and today's large language models (LLMs). That alone shows that before founding NEAR, Illia had real experience building and leading large AI systems.

2) NEAR launched NEAR Tasks at NEARCON 2023, a platform for training and improving AI models. Simply put, parties who need model training (Vendors) post task requests on the platform and upload raw data, while users (Taskers) take on those tasks and do manual work such as text annotation and image recognition on that data. Once a task is completed, the platform rewards the Taskers with NEAR tokens, and the manually annotated data is used to train the corresponding AI models.

For example, suppose an AI model needs to get better at recognizing objects in pictures. A Vendor uploads a large batch of raw images containing different objects to the Tasks platform, Taskers manually annotate the positions of the objects in each image, and the result is a large volume of "image - object position" data that the model can learn from on its own to improve its image-recognition capability (a sketch of what one such record might look like is below).

At first glance, isn't NEAR Tasks just crowdsourcing manual work to provide a basic service to AI models? Is that really so important? Here is some background on AI models. Broadly speaking, a complete AI training pipeline covers data collection, data preprocessing and labeling, model design and training, tuning and fine-tuning, validation and testing, deployment, and monitoring and updating. Data labeling and preprocessing are the manual part, while model training and optimization are the machine part. Most people assume the machine part matters far more than the manual part because it looks more high-tech, but in reality manual labeling is critical to the entire training process.

Manual annotation adds labels to the objects (people, places, things) in images so that computer vision models can learn from them; it turns speech into text and marks specific syllables, words, and phrases to help train speech-recognition models; and it attaches emotion labels such as happiness, sadness, or anger to text so that models can learn sentiment analysis, and so on. It is not hard to see that manual annotation is the foundation on which machines do deep learning. Without high-quality annotated data, a model cannot learn efficiently, and if the volume of annotated data is too small, its performance will also be limited.
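To make the "image - object position" data from point 2 concrete, here is a minimal Python sketch of what one manually annotated record might look like once a Tasker finishes labeling an image. The field names and the bounding-box convention are my own illustrative assumptions, not NEAR Tasks' actual data format.

```python
# Hypothetical shape of manually annotated data produced by Taskers.
# Field names and formats are illustrative assumptions, not NEAR Tasks' real schema.
from dataclasses import dataclass
from typing import List

@dataclass
class BoundingBox:
    label: str      # what the object is, e.g. "car"
    x: float        # top-left corner, normalized to [0, 1]
    y: float
    width: float
    height: float

@dataclass
class ImageAnnotation:
    image_id: str       # reference to the Vendor's raw image
    annotator_id: str   # the Tasker who did the labeling
    boxes: List[BoundingBox]

# One completed "image - object position" record a vision model could train on.
sample = ImageAnnotation(
    image_id="img_000123.jpg",
    annotator_id="tasker_42",
    boxes=[
        BoundingBox(label="car", x=0.12, y=0.40, width=0.30, height=0.25),
        BoundingBox(label="person", x=0.55, y=0.35, width=0.10, height=0.45),
    ],
)
print(sample)
```

Thousands of records like this, collected across many Taskers, are exactly the kind of high-quality labeled data a vision model needs to learn from.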
At present, many vertical AI projects do secondary fine-tuning or specialized training on top of a large model such as ChatGPT. In essence, they build on OpenAI's data and add new data sources, especially manually annotated data, to train the model further.

For example, if a medical company wants to build a medical-imaging AI and offer hospitals an online AI consultation service, it only needs to upload a large amount of raw medical imaging data to the Tasks platform and let users annotate it through tasks, producing manually annotated data. That data is then used to fine-tune and optimize the general large model (sketched below), turning a general-purpose AI tool into an expert in that vertical field.

However, the Tasks platform alone is obviously not enough for NEAR to become the leader among AI public chains. NEAR also provides AI Agent services in its ecosystem that automatically execute users' on-chain actions: users only need to grant authorization and can then freely buy and sell assets in the market. This is somewhat like the Intent-centric approach, using AI-driven automated execution to improve the on-chain interaction experience. In addition, NEAR's strong DA (data availability) capabilities can be used to trace the sources of AI data and verify the validity and authenticity of the data used to train models.

In short, backed by a high-performance chain, NEAR's technical extensions and narrative in the AI direction look a lot more compelling than pure chain abstraction. $NEAR
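As a rough illustration of the "secondary fine-tuning" step described above, here is a minimal Python sketch that adapts a general pretrained model to a vertical domain using manually annotated text records. The base model name, file path, and label count are assumptions for illustration only, and it uses the open-source Hugging Face stack as a stand-in rather than ChatGPT itself.

```python
# Minimal sketch: fine-tune a general pretrained model on manually annotated
# domain data. Model name, file path and label count are illustrative assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

BASE_MODEL = "bert-base-uncased"  # stand-in for any general-purpose base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL, num_labels=3)

# Each record is one manually annotated example, e.g.
# {"text": "findings consistent with pneumonia ...", "label": 2}
dataset = load_dataset("json", data_files={"train": "annotated_train.jsonl"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

tokenized = dataset["train"].map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="domain-expert-model",
        num_train_epochs=3,
        per_device_train_batch_size=8,
    ),
    train_dataset=tokenized,
)
trainer.train()  # the generalist model becomes a vertical-domain specialist
```

The key point is that the model architecture stays generic; it is the manually annotated domain data that turns it into a specialist.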
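On the data-traceability angle, one simple way a chain or DA layer could help verify training data is to anchor a deterministic fingerprint of the annotated dataset on-chain. The sketch below only shows the hashing side; `publish_blob` is a hypothetical placeholder for whatever submission interface would actually be used, not a real NEAR API.

```python
# Conceptual sketch of anchoring a training-data fingerprint for provenance.
# Only the hashing logic is concrete; the on-chain submission is a placeholder.
import hashlib
import json

def dataset_fingerprint(records) -> str:
    """Deterministically hash a list of annotated records."""
    canonical = json.dumps(records, sort_keys=True, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

annotated = [
    {"image": "scan_001.png", "label": "pneumonia", "annotator": "tasker_42"},
    {"image": "scan_002.png", "label": "normal", "annotator": "tasker_17"},
]

fingerprint = dataset_fingerprint(annotated)
print(f"dataset fingerprint to anchor on-chain: {fingerprint}")
# publish_blob(fingerprint)  # hypothetical DA-layer submission, not a real API
```

Anyone who later receives the dataset can recompute the hash and compare it against the on-chain record, which is the basic mechanism behind tracing the validity and authenticity of training data.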