Google's AI chatbot Gemini recently received a notable update: a memory feature that lets the assistant remember details about a user's life, work, and personal preferences in order to deliver a more personalized experience.
For now, the feature is limited to subscribers of the $20-per-month Google One AI Premium plan, works only in the web interface (it has not yet rolled out to the iOS and Android apps), and supports English input only.
In practice, Gemini's memory feature enables a range of useful scenarios. For example, once a user tells Gemini which foods they like, the assistant can tailor its restaurant recommendations to those tastes the next time they ask. Google's interface also shows other sample memories, such as "use simple language and avoid jargon," "I only know JavaScript programming," and "include daily costs when planning trips."
Google emphasizes that users can turn the memory feature off at any time, though stored memories persist until they are manually deleted. More importantly, a Google spokesperson stated explicitly that this memory data will not be used to train models, nor will it be shared with anyone else.
The security of such memory features nevertheless deserves scrutiny. Earlier this year, a security researcher showed that attackers could plant false memories in ChatGPT to persistently exfiltrate user data. That finding is a reminder that memory features in AI systems demand stronger safeguards.
The launch of this feature reflects the broader trend of AI assistants becoming more personalized and intelligent, but it also raises questions about user privacy and data security. Balancing convenience with the protection of user data will remain a central challenge for AI service providers.
In short, Gemini's memory feature illustrates where personalized AI is headed while underscoring the privacy and security challenges that come with it. Google has promised that the data will not be used for training or shared, but its security measures will need to keep strengthening to reconcile convenience with safety.