The new "memory" function of the Google Gemini AI chatbot marks a big step forward for the development of AI assistants in a more personalized and intelligent direction. This feature allows Gemini to remember users’ life information, work content and personal preferences to provide more accurate and considerate services. Currently, this feature is only available to Google One AI Premium subscribers ($20 per month), and only supports web page and English input. iOS and Android applications are not yet available. Although convenient and fast, it also raises concerns about data security and privacy protection.
Google Gemini's "memory" function provides personalized services by remembering user preferences, such as favorite foods, work habits, etc., such as recommending restaurants based on the user's taste. Google provides a variety of examples of practical scenarios, such as using simple languages, specifying programming languages, including daily expenses in travel planning, etc. Users can turn off the memory function at any time, but stored information needs to be deleted manually. Google emphasizes that this memory information will not be used for model training, nor will it be shared with others.
Google's AI chatbot Gemini recently rolled out a notable update: a "memory" feature that lets the assistant remember details about a user's life, work, and personal preferences in order to deliver a more personalized experience.
The feature is currently available only to subscribers of the $20-per-month Google One AI Premium plan. It works only in the web interface and has not yet arrived in the iOS and Android apps, and for now only English input is supported.
In practice, Gemini's memory feature supports a range of everyday scenarios. For example, once a user tells Gemini which foods they like, the next request for restaurant recommendations can yield suggestions tailored to those tastes. Google also surfaces other sample memories in the interface, such as "use simple language and avoid jargon", "I only know JavaScript programming", and "include daily expenses when planning travel".
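To make the idea concrete, here is a minimal, purely illustrative Python sketch of how a preference-memory layer could work in principle: facts the user has stated are kept in a small store and prepended to each prompt before it reaches the model. The `MemoryStore` class, the `build_prompt` helper, and the sample preferences are all hypothetical names invented for this example; they do not reflect Gemini's actual API or implementation.

```python
# Purely illustrative sketch of a preference-memory layer (not Gemini's real API).
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Hypothetical store for user-stated preferences ("memories")."""
    memories: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        # Save a user preference, e.g. "I only know JavaScript programming".
        self.memories.append(fact)

    def forget_all(self) -> None:
        # Mirrors the manual-deletion behavior described in the article.
        self.memories.clear()


def build_prompt(store: MemoryStore, user_message: str) -> str:
    """Prepend remembered preferences so a model could tailor its answer."""
    if not store.memories:
        return user_message
    context = "\n".join(f"- {m}" for m in store.memories)
    return f"Known user preferences:\n{context}\n\nUser request: {user_message}"


if __name__ == "__main__":
    store = MemoryStore()
    store.remember("I love spicy Sichuan food")
    store.remember("Use simple language and avoid jargon")
    # The assembled prompt is what a model would actually receive.
    print(build_prompt(store, "Recommend a restaurant for dinner tonight."))
```

The sketch only illustrates the general pattern of injecting stored preferences into the model's context; how Google actually stores and applies memories is not described in detail in its announcement.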
Google emphasizes that users can switch the memory feature off at any time, although stored memories must be deleted manually before they disappear. A Google spokesperson also made clear that this memory data will not be used for model training and will not be shared with anyone else.
The security of this kind of memory feature nevertheless deserves scrutiny. Earlier this year, a security researcher demonstrated that attackers could plant "false" memories in ChatGPT and use them to continuously exfiltrate user data, a reminder that memory features in AI systems call for stronger safeguards.
The launch reflects the broader trend of AI assistants becoming more personalized and intelligent, but it also raises questions about user privacy and data security. Delivering convenience while keeping user data safe will remain an ongoing concern for AI service providers.
Although Gemini's "memory" function brings convenience, it also highlights the importance of privacy protection and security in the development of AI technology. In the future, how to balance personalized services and data security will become a key challenge in the AI field, requiring continuous exploration and improvement.