Study Finds That Using Rude Prompts May Boost ChatGPT Accuracy

Researchers have found that ChatGPT gives more accurate answers when users employ harsh language, though they do not recommend the practice. The study, a preprint posted to arXiv, put 50 multiple-choice questions covering math, history, and science to ChatGPT-4o in different tones. Very polite prompts such as "Would you be so kind as to…" achieved 80.8% accuracy, while very rude ones such as "I know you're not smart, but try this" reached 84.8%. The researchers nonetheless warn against hostile interactions: "While this finding is of scientific interest, we do not advocate for the deployment of hostile or toxic interfaces in real-world applications," they wrote. The team believes the results show that AI models remain sensitive to superficial prompt cues, creating "unintended trade-offs." (Story URL)
