At AAAI 2024, the Xiaohongshu search algorithm team presented research aimed at two obstacles to applying large language models to reasoning tasks: their black-box nature and their enormous parameter counts. The team proposed a new framework that improves the reasoning capabilities of large language models by exploiting the knowledge contained in negative samples. The framework comprises two key steps, Negative Assisted Training (NAT) and Negative Calibration Enhancement (NCE), delivers notable gains in reasoning performance, and offers the industry a new research direction.
The article focuses on:
At AAAI 2024, the Xiaohongshu search algorithm team introduced an innovative framework that addresses the black-box nature and enormous parameter counts of large language models in reasoning tasks. The framework centers on using negative sample knowledge to strengthen reasoning, through sequential steps such as Negative Assisted Training (NAT) and Negative Calibration Enhancement (NCE). This work opens a new direction for improving large language model reasoning: the negative-sample utilization strategy and the NAT and NCE methods merit further study and application, and the results mark meaningful progress toward applying large language models to more complex tasks in the future.
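The article names NAT and NCE but does not describe their mechanics. As one hypothetical illustration of the general idea of exploiting negative samples, the toy sketch below weights each training sample by comparing the confidence of a model trained on correct (positive) reasoning against one trained on flawed (negative) reasoning, downweighting samples where the negative model is more confident. The function names and the sigmoid weighting scheme are assumptions for illustration only, not the paper's actual NAT/NCE formulation.

```python
import math

def calibration_weight(pos_logprob, neg_logprob):
    """Hypothetical calibration weight: a sigmoid of the gap between the
    positive-trained model's log-probability and the negative-trained
    model's. Samples the negative model 'prefers' are downweighted."""
    return 1.0 / (1.0 + math.exp(-(pos_logprob - neg_logprob)))

def calibrated_loss(samples):
    """Average of per-sample student losses, each scaled by its
    calibration weight. `samples` is a list of tuples:
    (student_loss, pos_logprob, neg_logprob)."""
    total = 0.0
    for loss, pos_lp, neg_lp in samples:
        total += calibration_weight(pos_lp, neg_lp) * loss
    return total / len(samples)

# When both models are equally confident, the weight is neutral (0.5);
# a larger positive-vs-negative gap pushes the weight toward 1.
w_neutral = calibration_weight(0.0, 0.0)
w_positive = calibration_weight(2.0, -1.0)
```

This kind of weighting is one plausible way "negative calibration" could temper a distillation loss; the paper itself should be consulted for the real objective.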