Study Finds That Using Rude Prompts May Boost ChatGPT Accuracy

Researchers discovered that ChatGPT provides more accurate responses when users employ harsh language, though they don't recommend the practice. The study, posted as a preprint on arXiv, tested 50 multiple-choice questions covering math, history, and science, posed to ChatGPT-4o in different tones. Very polite prompts like "Would you be so kind as to…" achieved 80.8% accuracy, while very rude ones such as "I know you're not smart, but try this" reached 84.8%. However, the researchers warn against hostile interactions. "While this finding is of scientific interest, we do not advocate for the deployment of hostile or toxic interfaces in real-world applications," they wrote. The team believes the results show AI models remain sensitive to superficial prompt cues, creating "unintended trade-offs."
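The evaluation described in the story (the same multiple-choice questions posed with different tone prefixes, then scored for accuracy) can be sketched roughly as follows. This is a minimal illustration, not the researchers' actual code: the prefixes echo the quoted prompts, and `ask_model` is a hypothetical stub standing in for a real ChatGPT-4o API call.

```python
# Sketch of a tone-prefix accuracy comparison, assuming a hypothetical
# ask_model() that wraps an LLM API call (stubbed here for illustration).

TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to answer: ",
    "very_rude": "I know you're not smart, but try this: ",
}

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a ChatGPT-4o call; always answers 'A'."""
    return "A"

def accuracy(questions: list[tuple[str, str]], tone: str) -> float:
    """Fraction of questions the model answers correctly under one tone."""
    prefix = TONE_PREFIXES[tone]
    correct = sum(
        1 for question, answer in questions
        if ask_model(prefix + question) == answer
    )
    return correct / len(questions)

questions = [
    ("Which letter comes first in the alphabet? A) A  B) B", "A"),
    ("Which is a prime number? A) 2  B) 4", "A"),
]
for tone in TONE_PREFIXES:
    print(tone, accuracy(questions, tone))
```

In the study's real setting, each tone's accuracy would be averaged over the 50-question set against a live model rather than a fixed stub.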
