Study Finds That Using Rude Prompts May Boost ChatGPT Accuracy

Researchers discovered that ChatGPT provides more accurate responses when users employ harsh language, though they don’t recommend the practice. The study, posted as a preprint on arXiv, tested 50 multiple-choice questions covering math, history, and science using prompts of varying tone with ChatGPT-4o. Very polite prompts like “Would you be so kind as to…” achieved 80.8% accuracy, while very rude ones such as “I know you’re not smart, but try this” reached 84.8%. The researchers nevertheless warn against hostile interactions. “While this finding is of scientific interest, we do not advocate for the deployment of hostile or toxic interfaces in real-world applications,” they wrote. The team believes the results show AI models remain sensitive to superficial prompt cues, creating “unintended trade-offs.” (Story URL)
