Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.
Two weeks ago, I got a direct message on Twitter from a fake account pretending to be my colleague Christina Ayiotis. It used @christinaayiot1 as the handle and had copied her profile photo. The account had zero followers, was following only six people, and the message was a simple “hey.” I flagged it in the app, tweeted it to the real Christina Ayiotis (@christinayiotas) and her more than 3,500 followers, and called it a day.
Christina also tweeted it at @twittersupport and asked for a blue check mark for her account to show that it was verified and legitimate. While Twitter removed the fake account, Christina is still without her blue check mark. She, and everyone else, should get one.
Twitter recently announced that, after pausing applications for its coveted blue check marks a few years ago, it would restart the process this quarter with a more equitable system. Previously, becoming verified on Twitter required extensive connections within the company. What is needed now is not a fairer program for determining the worthiness of accounts but an open and transparent process that lets most account holders who tweet under their real names become verified.
In a world in which social media creates its own celebrities overnight, the idea of choosing who is and is not a celebrity is ludicrous. As we saw in the 2016 election, a fake account pretending to be a private citizen can have enormous influence. Whoever created Christina’s fake account certainly had some scam they intended to use it for. In the grey area are fake accounts created for product promotion, which are all too good at steering potential consumers toward fad diets and dubious financial advice. Twitter users deserve to know whether an account—any account—is authentic.
Nobody else should be able to claim Christina Ayiotis’s name or likeness, particularly when the ability to validate identities at scale exists. Instead of adjudicating who should and should not be validated in a Twitter-owned process, individual Twitter users should be able to choose whether they are validated using what Gartner has dubbed “Bring Your Own Identity.”
Under this model, users who choose to be validated would go through an identity proofing process at an identity validation company, like id.me or SecureKey, and then use that verified identity to obtain a blue check mark that would confirm their real name, profile photo, and other relevant data points in the app. These companies have developed online processes to recreate the kind of in-person identity proofing that a government agency or a bank could do.
Several states have partnered with id.me to provide identity proofing for unemployment benefits during the pandemic in an effort to both reduce in-person contact and reduce fraud. If that is good enough to enable the exchange of dollars, it should be good enough to validate the origin of tweets.
Verification keeps fake accounts from taking over the personas of real people. Widespread verification would allow users to choose not to interact with, or to be suspicious of, unverified accounts. Twitter could also allow users to filter content based on verification status. On the backend, widespread verification would allow Twitter to more easily identify fraudulent accounts when new users attempt to impersonate existing verified accounts.
I’ve previously predicted that 2021 would be the year we kick the dogs off the internet. I’d hoped Twitter’s new policy would go further than it did, but the year is still young. I think it is possible that by year’s end, these could be the only dogs left on Twitter.