from Net Politics

Hey LinkedIn, Sean Brown Does Not Work at CFR: Identity, Fake Accounts, and Foreign Intelligence

A fake LinkedIn account for a Sean Brown claiming to work at CFR highlights the problem of fake accounts.

September 10, 2019

Blog Post
Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.

The fake LinkedIn profile of Sean Brown.

Despite what his profile says, Sean Brown does not work at the Council on Foreign Relations. I know this because I work there. But LinkedIn doesn’t actually know that I work there either.

Sean Brown in fact appears not to exist. The photo does not match any existing photos that Google’s reverse image search could identify. Analysis by FotoForensics, if I’m interpreting it correctly, suggests that the image has been heavily photoshopped. Sean Brown is likely one of the accounts created by foreign intelligence agencies for recruitment, as documented recently in the New York Times.

Figure 1: Analysis by fotoforensics (black is the unaltered image; white indicates a change).


More on:

Cybersecurity

Social Media

I reported the account as fake about a week ago, as did CFR’s CIO. But Sean Brown’s profile is still up, and he is slowly building connections. Right now, a lot of people are likely suspicious of his request to connect, given his low number of connections. But as a few people at a time accept, his profile will soon seem very real.

Of course, the challenge for LinkedIn is that they have no reason to believe me when I flag the account as fake, because they have no reason to believe that I actually work at CFR. My identity is totally unvalidated. Anyone can claim to work at CFR. What’s more, CFR has no way to control who claims to work at the organization.

Because LinkedIn doesn’t know whether they can trust my judgement on whether the account is fake, it is still up. When you report a fake account, there is no opportunity to provide context, no notes section. This suggests that the decision to disable the account is being made by an algorithm that tallies up the reports that the account is fake and, at a certain point, determines that the score is high enough to disable the account.
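If that guess is right, the logic may be as naive as the sketch below. This is purely hypothetical: the function name and the threshold are my inventions, and LinkedIn’s actual system is not public.

```python
# Hypothetical sketch of a naive report-tallying system.
# The threshold and names are invented for illustration;
# LinkedIn's real logic is not public.

DISABLE_THRESHOLD = 10  # invented number of reports that triggers disabling

def should_disable(report_count: int) -> bool:
    """Disable an account purely on the raw number of fake-account
    reports, with no weighting for who filed them and no context."""
    return report_count >= DISABLE_THRESHOLD

# Two reports (mine and CFR's CIO's) fall well short of an
# unweighted threshold, so the fake profile stays up.
print(should_disable(2))   # False
print(should_disable(12))  # True
```

Under a scheme like this, a report from a verified colleague of the impersonated organization counts for exactly as much as a report from a stranger, which is the core of the problem.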

If that’s how their fake account reporting works, Microsoft, which owns LinkedIn, doesn’t have its best data scientists working on this problem. Based on the data they already have, the assertion that I work at CFR is fairly strong given the number of connections I have to other people who work at CFR. Beyond the data they already hold, a fairly simple workflow would allow them to search the web (they could even use Bing…) and CFR’s website to see if Sean Brown exists.
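A slightly smarter scoring pass, weighting each report by the reporter’s connections to the organization the suspect profile claims, could look something like this. Again, this is a sketch under my own assumptions; the weighting function and caps are invented, not anything LinkedIn has described.

```python
# Hypothetical sketch: weight each fake-account report by how connected
# the reporter is to the organization the suspect profile claims.
# The weighting formula and cap are invented for illustration.

def report_weight(reporter_connections_at_org: int) -> float:
    """A reporter with many connections at the claimed employer is more
    likely to actually work there, so their report counts for more.
    Connections are capped so one super-connected user can't dominate."""
    return 1.0 + min(reporter_connections_at_org, 50) / 10.0

def weighted_score(reports: list[int]) -> float:
    """Sum the weighted reports; each list entry is one reporter's
    connection count at the organization in question."""
    return sum(report_weight(c) for c in reports)

# Two reports from people with 40 and 60 CFR connections carry far
# more weight than two reports from accounts with no ties to CFR.
insiders = weighted_score([40, 60])   # 5.0 + 6.0 = 11.0
outsiders = weighted_score([0, 0])    # 1.0 + 1.0 = 2.0
```

The point is not these particular numbers but the principle: the platform already holds the graph data needed to estimate whether a reporter plausibly works where they claim to.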

LinkedIn’s sometime rival Facebook has gotten a lot more heat for their fake account problem. And to their credit they have improved their ability to detect such accounts and are trying to be transparent about their efforts. By their own calculation, Facebook is catching 99.8% of fake accounts before users flag them.


The problem is that Facebook is stopping 2.2 billion fake accounts each quarter. At a 99.8 percent catch rate, that means about 4.4 million fake accounts are slipping past their algorithms each quarter. Given that it takes some effort to create a fake account, and that someone out there is making them at a rate of almost 9 billion per year, clearly enough are slipping past the gauntlet of AI and vigilant users to make it worthwhile.
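The arithmetic behind those figures, taking Facebook’s own 2.2 billion per quarter and 99.8 percent catch rate at face value:

```python
# Back-of-the-envelope math from Facebook's published numbers.
fake_accounts_per_quarter = 2_200_000_000  # 2.2 billion stopped per quarter
catch_rate = 0.998                          # share caught before users flag them

missed_per_quarter = fake_accounts_per_quarter * (1 - catch_rate)
created_per_year = fake_accounts_per_quarter * 4

print(f"{missed_per_quarter:,.0f}")  # 4,400,000 slip past the algorithms
print(f"{created_per_year:,.0f}")    # 8,800,000,000 -- almost 9 billion a year
```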

For starters, LinkedIn needs to give organizations the ability to control who claims to work for them. If an individual claims to work at an organization, LinkedIn should require them to submit an email address matching the domain associated with that organization and then send an email with a verification link. LinkedIn should also give organizations absolute control over who claims to work there, optionally even allowing an organization to approve all new affiliations before they are posted.
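The verification flow described above is straightforward to sketch. Everything here is illustrative: the org-to-domain registry, token scheme, and the stubbed email step are my assumptions about how such a system might be wired, not an existing LinkedIn feature.

```python
# Minimal sketch of domain-based employment verification.
# The registry, token store, and stubbed email step are invented
# for illustration; this is not an existing LinkedIn API.

import secrets

ORG_DOMAINS = {"Council on Foreign Relations": "cfr.org"}  # assumed registry
pending_tokens: dict[str, tuple[str, str]] = {}  # token -> (user, org)

def request_affiliation(user: str, org: str, email: str) -> bool:
    """Accept the claim only if the email matches the org's domain,
    then issue a one-time token to be emailed as a verification link."""
    domain = ORG_DOMAINS.get(org)
    if domain is None or not email.lower().endswith("@" + domain):
        return False
    token = secrets.token_urlsafe(16)
    pending_tokens[token] = (user, org)
    # send_email(email, f"https://example.com/verify?token={token}")  # stub
    return True

def confirm_affiliation(token: str):
    """Clicking the emailed link proves control of the org mailbox;
    returns the (user, org) pair, or None for an unknown token."""
    return pending_tokens.pop(token, None)
```

A “Sean Brown” with only a Gmail address would fail at the first step, while a real employee with a cfr.org mailbox could complete verification in under a minute.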

Beyond this simple step, it’s time to end the era when nobody knows you’re a dog on the Internet. Both Facebook and LinkedIn operate under “real name” policies. Unlike Twitter, where pseudonymous accounts are common and accepted, the presumption on these sites is that the people you are interacting with are real human beings engaging under their real names. It’s time that these sites, and others like them, actually verify that you are who you say you are.

Facebook already has a process for validating identities set up. Any user can go through identity proofing in order to place political ads. I did it; the process was smooth and painless.

For new users in the United States, creating an account should require identity proofing. For existing users, Facebook should promote an option for identity proofing, granting a status check mark (think Twitter’s verification symbol) to those who complete it. Before the 2020 election heats up, Facebook and LinkedIn should start flagging anyone engaged in a political discussion who hasn’t had their identity verified. The algorithms are simply not strong enough to create trust in these platforms, not when the 10,000 or so fake accounts that slip through the cracks may be enough to swing 10,000 votes in Michigan in 2020. That’s not a chance LinkedIn or Facebook should be willing to take.

Update/Correction: The profile was removed from LinkedIn as of September 11, 2019. In the analysis above, the author incorrectly interpreted the results from FotoForensics. The image was not a Photoshop creation but the actual image taken from Sean Silcoff’s profile page at the Globe & Mail.

Creative Commons: Some rights reserved.
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.