Microsoft engineers have issued a warning to regulators and the company's board of directors, saying that the company's AI image generator, Designer, poses potential security risks and may produce harmful content. The warning has raised broader concerns about the safety of AI models. Designer is built on OpenAI's DALL-E model, and the engineers say the risks stem from flaws in that underlying model; they urged that the safety hazards be addressed and brought to public attention. The incident highlights the potential negative impact of large generative models and the importance of safety and ethics in deploying AI technology.
This incident is another reminder that, even as AI technology develops rapidly, its potential risks must be taken seriously, with stronger oversight and safety measures to ensure the technology develops responsibly and is not abused. Going forward, balancing AI innovation against security risks will remain a pressing challenge.