Information distortion in mainstream AI chatbots is becoming increasingly apparent. Data shows these tools have at least a 15% probability of producing inaccurate content in any given conversation — a figure far higher than most users expect.
ChatGPT, which holds 81% of the market, performs particularly poorly: in work scenarios it generates incorrect information 35% of the time, a serious warning for anyone relying on AI-assisted decision-making. Google Gemini fares even worse, with a 38% hallucination rate — the highest of any chatbot tested.
What does this mean for c