Oftentimes scams mutate and spread, and sometimes you can watch it happen in something close to real time.
In this case, the classic Discord/Steam scam wandered its way onto Twitter/X - from what I can tell, this started getting reported on r/scams in 2025. The outcome is the same - frightened person worries about getting banned, falls for the scam, loses their account - but in a new place.
Now, as a person who’s dealt with various forms of fraud over the years and who handles security incidents nowadays, here’s how this looks from my side of the desk:
If there’s an alert, either from our internal systems or a user report, we do some investigating first. I have access to a wide variety of tools for looking at someone’s account, including their activity history. In other words, I do not need an end user to do anything to help my investigation.
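To make that concrete, here’s a toy sketch of the responder-side view. Everything in it is made up for illustration - the `AUDIT_LOG` store, the event fields, and the `recent_activity` helper are placeholders, since real tooling varies wildly from shop to shop - but the shape is the point: the logs already have the answers, no user participation required.

```python
from datetime import datetime, timedelta

# Toy stand-in for an internal audit-log store; real tooling varies wildly.
AUDIT_LOG = [
    {"user": "alice", "action": "login", "ip": "203.0.113.7",
     "ts": datetime(2025, 5, 1, 9, 14)},
    {"user": "alice", "action": "password_change", "ip": "198.51.100.23",
     "ts": datetime(2025, 5, 1, 9, 16)},
    {"user": "bob", "action": "login", "ip": "203.0.113.9",
     "ts": datetime(2025, 5, 1, 10, 2)},
]

def recent_activity(user, days=30, now=None):
    """Return a user's audit events from the last `days` days, newest first.

    Nothing here needs the end user's involvement -- the responder can
    reconstruct what happened entirely from the logs.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    events = [e for e in AUDIT_LOG if e["user"] == user and e["ts"] >= cutoff]
    return sorted(events, key=lambda e: e["ts"], reverse=True)

for event in recent_activity("alice", now=datetime(2025, 5, 2)):
    print(event["ts"], event["action"], event["ip"])
```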
We tend to take action first and ask questions later. If a user accidentally entered their password into a phishing site, the first order of business is to get that sucker changed. If they went somewhere malicious, I’m isolating the machine and capturing forensic data. I am not going to wait for the end user to explain themselves before acting; I am going to remove the threat first (in the case of Discord/Steam/Twitter, they’re likely to banhammer first and ask later). Then the end user gets asked what in the heck they thought they were doing.
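The ordering matters enough to spell out. Here’s a minimal, hypothetical sketch of that “contain first, ask later” flow - every function in it (`force_password_reset`, `isolate_host`, and so on) is a stand-in for whatever your identity provider or EDR actually exposes, not a real API:

```python
# Placeholder containment hooks -- in real life these call your IdP / EDR.
def force_password_reset(user): print(f"[contain] forcing password reset for {user}")
def revoke_sessions(user): print(f"[contain] revoking all active sessions for {user}")
def isolate_host(host): print(f"[contain] isolating {host} from the network")
def capture_forensics(host): print(f"[contain] capturing forensic image from {host}")
def notify_user(user): print(f"[follow-up] asking {user} what in the heck happened")

def handle_phish(user, host, visited_malicious_site):
    # Containment comes first, unconditionally: the credential is assumed
    # burned the moment it touched the phishing page.
    force_password_reset(user)
    revoke_sessions(user)
    if visited_malicious_site:
        # If the machine itself may be compromised, pull it off the network
        # and preserve evidence before anything on it can change.
        isolate_host(host)
        capture_forensics(host)
    # Only after the threat is contained does the user get asked anything.
    notify_user(user)

handle_phish("alice", "alice-laptop", visited_malicious_site=True)
```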
Everything has to be documented. That means emails, forms, tickets - all of it - partly to cover our rear ends, but mostly because things need to be discoverable should this turn into something that requires legal action.
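If you’ve only ever seen “document everything” as a ticket queue, here’s a hedged sketch of the same idea in code: a structured record with a timestamped action log, so the timeline can be reconstructed later if lawyers come calling. The fields are illustrative, not any real schema.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    incident_id: str
    reported_by: str
    summary: str
    actions_taken: list = field(default_factory=list)
    opened_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def log_action(self, action):
        # Timestamp every action so the timeline is reconstructible later.
        self.actions_taken.append(
            f"{datetime.now(timezone.utc).isoformat()} {action}")

record = IncidentRecord("IR-2025-0142", "user_report",
                        "Credentials entered into phishing site")
record.log_action("Forced password reset; revoked sessions")
record.log_action("Isolated host alice-laptop; forensic image captured")
print(json.dumps(asdict(record), indent=2))
```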
Need-to-know is a thing! When security incidents are reported to us, we send a “thank you for reporting” but nothing else, because we can’t go telling people things they aren’t authorized to know (such as whether some other user has been in trouble before).