For example, Facebook runs a suicide-detection algorithm that scans posts so the company can intervene when it believes a user is at risk (Inside Facebook’s suicide algorithm: Here’s how the company uses artificial intelligence to predict your mental state from your posts). As social media companies have tried to detect talk of suicide and sometimes remove content that mentions it, users have found ways around these filters by coining substitute terms such as “unalive.”
This shows how moderation and user behavior constantly adapt to each other: when platforms filter certain language, people respond creatively, which makes the system harder to manage. It also raises the question of whether removing particular words actually addresses harm, or merely shifts how people express it.
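A minimal sketch can make the dynamic concrete. Assuming a hypothetical keyword blocklist (real systems like Facebook’s rely on machine-learning classifiers rather than simple word matching), a coined term like “unalive” slips straight past the filter:

```python
# Hypothetical keyword-based content filter, for illustration only.
# The blocklist and example posts are invented, not any platform's real data.

BLOCKLIST = {"suicide", "kill myself"}  # terms the platform tries to catch


def is_flagged(post: str) -> bool:
    """Return True if the post contains any blocklisted term."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)


posts = [
    "I've been thinking about suicide lately",    # caught by the filter
    "I've been thinking about unaliving myself",  # same meaning, slips through
]

for post in posts:
    print(is_flagged(post), "-", post)
# True - I've been thinking about suicide lately
# False - I've been thinking about unaliving myself
```

Adding “unalive” to the blocklist only restarts the cycle, since users can coin yet another substitute, which is exactly the back-and-forth adaptation described above.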