OpenAI released the 'Reinforcement Fine-Tuning' (RFT) feature on Day 2 of its new product launch event, allowing users to strengthen a customized model's reasoning capabilities in specific fields and thereby improve its accuracy. The feature can be applied across a wide range of domains, including scientific research, law, finance, insurance, and engineering.

(Background: OpenAI launched the full version of the o1 model and the new ChatGPT Pro subscription plan; is the $200 monthly fee worth it?)

(Additional context: OpenAI announced a 12-day live stream introducing many new features; could AI concept coins be a hidden opportunity?)

OpenAI, the developer behind ChatGPT, announced on the 5th that a 12-day series of new product launches would begin at 2 AM on the 6th. Day 1 featured the full reasoning model o1, which replaces the earlier preview version and is now available to ChatGPT Plus, Team, and Pro users, along with a new subscription plan that allows unlimited use of the o1 model: ChatGPT Pro.

Further reading: OpenAI launched the full version of the o1 model and the new ChatGPT Pro subscription plan; is the $200 monthly fee worth it?

Today (the 7th) is Day 2 of the event. What surprises does OpenAI have in store?

Release of the new 'Reinforcement Fine-Tuning' feature

On the second day of the launch event, OpenAI unveiled 'Reinforcement Fine-Tuning' (RFT), which lets users bring their own datasets and customize the training of the o1 model through reinforcement learning algorithms. The event hosts noted that only a few dozen high-quality examples are needed to quickly improve the model's reasoning ability in a given domain. Notably, the feature can be applied across fields such as scientific research, law, finance, insurance, and engineering. (For developers curious what such a job might look like in code, a hypothetical sketch appears at the end of this article.)

In the release video, Justin Reese, a computational biologist at Lawrence Berkeley National Laboratory, joined the live demonstration to share how Reinforcement Fine-Tuning helps with diagnosing rare diseases. The slides Reese presented showed that the base o1 model identified the correct answer 25% of the time and the base o1 mini 17% of the time; after reinforcement fine-tuning, the o1 mini's accuracy rose to 31%, a relative improvement of roughly 82% (since 31 ÷ 17 ≈ 1.82).

Although Reinforcement Fine-Tuning significantly improves a model's accuracy and domain expertise, the hosts also noted that the feature is still in preview and is planned for public release next year. In the meantime, OpenAI has opened an RFT alpha program, inviting universities, research institutions, and enterprises to apply to test the feature.

Related reports:
ChatGPT was found to 'refuse to respond to David Mayer' and other names, raising suspicions that OpenAI intentionally blocks certain information
OpenAI announced a 12-day live stream introducing many new features; could AI concept coins be a hidden opportunity?
OpenAI launched the full version of the o1 model and the new ChatGPT Pro subscription plan; is the $200 monthly fee worth it?

"OpenAI releases (Day 2) the game-changing 'Reinforcement Fine-Tuning' feature, boosting AI's accuracy in specialized domains" - this article was first published on BlockTempo (Dynamic Zone - the most influential blockchain news media).
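Appendix: for developers wondering what submitting an RFT job might eventually look like, below is a minimal, hypothetical sketch. It assumes the job is created through OpenAI's existing fine-tuning endpoint in the Python SDK; the `method` payload, grader schema, dataset file name, and model snapshot used here are illustrative assumptions, not a confirmed public interface, since the feature was still in alpha at the time of the announcement.

```python
# Hypothetical sketch of launching a Reinforcement Fine-Tuning (RFT) job.
# NOTE: RFT was announced in preview only; the "method" payload, grader
# schema, model snapshot, and file name below are assumptions for
# illustration, not a documented public interface.
from openai import OpenAI

client = OpenAI()

# A few dozen high-quality, domain-specific examples in JSONL form, e.g.
# rare-disease case descriptions paired with the expected diagnosis.
train_file = client.files.create(
    file=open("rare_disease_cases.jsonl", "rb"),  # assumed file name
    purpose="fine-tune",
)

# Create the fine-tuning job, requesting reinforcement-style training with a
# grader that scores each model answer against the reference answer, so the
# reinforcement learning signal rewards correct domain reasoning.
job = client.fine_tuning.jobs.create(
    model="o1-mini-2024-09-12",      # assumed snapshot name
    training_file=train_file.id,
    method={
        "type": "reinforcement",     # assumed; regular jobs use supervised methods
        "reinforcement": {
            "grader": {              # assumed grader schema
                "name": "exact_match",
                "type": "string_check",
                "input": "{{sample.output_text}}",
                "reference": "{{item.correct_answer}}",
                "operation": "eq",
            }
        },
    },
)
print(job.id, job.status)
```

The key difference from ordinary supervised fine-tuning, as described in the event, is that the model is not simply shown correct answers to imitate; a grader scores its attempts and reinforcement learning updates the model toward higher-scoring reasoning, which is why a few dozen well-chosen examples can already move domain accuracy.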