Google AI chief Jeff Dean and OpenAI chief scientist Ilya Sutskever recently won the NeurIPS Test of Time Award for their word2vec paper, sparking heated discussion. This groundbreaking paper was unanimously rejected by the ICLR conference ten years ago. The episode highlights how difficult it is to assess a paper's future impact, and how academic judgment of innovative research can lag behind. This article examines the word2vec paper's journey and what it suggests about academic evaluation mechanisms.
Google AI chief Jeff Dean and OpenAI chief scientist Ilya Sutskever recently won the NeurIPS Test of Time Award for the word2vec paper, which introduced their groundbreaking word vector technique ten years ago. However, Tomas Mikolov, one of the paper's authors, revealed that it was unanimously rejected by the first ICLR conference in 2013. In fact, many papers and works that later proved enormously influential were rejected by top conferences on first submission, which shows just how hard it is to predict a paper's future impact. Researchers should not be discouraged by rejection; instead, they should use the reviewers' suggestions to improve the paper and resubmit it to other top-tier venues.

The experience of the word2vec paper offers a valuable lesson for researchers and reminds us once again that academic recognition is not won overnight: it takes time and the test of practice. Perseverance and continuous improvement will eventually earn recognition. Looking ahead, how to evaluate research results more effectively is a question worth deeper reflection.