ReferIndia News
ChatGPT And Gemini Can Give Harmful Answers If You Trick Them Via Poetry, Here Is How

Published on: Dec. 1, 2025, 10:38 a.m. | Source: Times Now

Recent research from Italy's Icaro Lab has revealed significant weaknesses in AI models such as ChatGPT and Gemini: attackers can bypass safety measures by framing harmful requests as poetry. The study tested 20 harmful prompts rewritten in poetic form and achieved a 62% success rate across a range of AI systems, including models from Moonshot AI and Mistral AI.
