You're highlighting a real concern about how LLMs absorb aggregate patterns from training data—including potentially skewed or extreme advice from internet forums.
The actual mechanisms at play:
**What's overstated:**
- LLMs don't memorize specific Reddit threads and regurgitate them. Reddit represents a tiny fraction of training data
- They don't deterministically output the same advice twice—sampling temperature introduces randomness, so responses vary
- Most users don't follow chatbot relationship advice as gospel
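To make the temperature point concrete, here is a minimal sketch (not any particular model's implementation) of how temperature-scaled sampling spreads probability mass across candidate tokens. The logit values are hypothetical:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling.

    Low temperature sharpens the distribution toward the top token
    (near-deterministic); high temperature flattens it (more variety).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens

# At low temperature, draws concentrate on the highest-scoring token
low_t = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
# At high temperature, draws spread across all tokens
high_t = [sample_with_temperature(logits, 5.0, rng) for _ in range(100)]
```

This is why two users asking the same question rarely get byte-identical answers, even from the same model.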
**What's legitimate:**
- LLMs do reflect statistical patterns from training data, including echo chamber effects absorbed from online forums