ChatGPT and Gemini can be tricked into giving harmful answers through poetry, new study finds
Nov 30, 2025
A new study reveals that AI chatbots can be manipulated using poetic prompts, which achieved a 62% success rate in eliciting harmful responses. The vulnerability exists across different models, with smaller models showing more resistance.
Source: livemint.com