ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How
Published on: Dec. 1, 2025, 10:38 a.m. | Source: Times Now
Recent research from Italy's Icaro Lab has revealed significant weaknesses in AI models such as ChatGPT and Gemini: attackers can bypass built-in safety measures simply by framing harmful requests as poetry. The study rewrote 20 harmful prompts in poetic form and achieved a 62% success rate across AI systems from multiple providers, including Moonshot AI and Mistral AI.
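For context on where a figure like 62% comes from: evaluations of this kind typically send each prompt to each model, label every response as a refusal or a non-refusal (using human raters or an automated judge), and report the fraction of attempts the safety filter failed to block. The sketch below is a hypothetical illustration, not code from the study; the refusal-detection heuristic and the placeholder responses are assumptions made purely to show the arithmetic.

```python
# Hypothetical sketch of how an attack-success-rate figure is computed.
# Nothing here reproduces the study's methodology; the keyword heuristic
# and placeholder responses are illustrative assumptions only.

REFUSAL_MARKERS = (
    "i can't help",
    "i cannot assist",
    "i'm sorry, but",
)

def is_refusal(response: str) -> bool:
    """Crude keyword check; real evaluations use human raters or judge models."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def attack_success_rate(responses: list[str]) -> float:
    """Fraction of responses NOT refused, i.e. where the safety filter failed."""
    if not responses:
        return 0.0
    successes = sum(1 for r in responses if not is_refusal(r))
    return successes / len(responses)

# Placeholder outputs standing in for model responses to poetic prompts:
# 31 compliant answers and 19 refusals out of 50 attempts -> 62%.
sample_responses = (
    ["Here is the information you asked for ..."] * 31
    + ["I'm sorry, but I can't help with that."] * 19
)
print(f"Attack success rate: {attack_success_rate(sample_responses):.0%}")
```

A real study would aggregate such per-model rates over many prompts and providers, which is how a single headline number like 62% emerges from thousands of individual model responses.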
