More than a thousand LinkedIn profiles were found to be using fake facial photos generated by artificial intelligence, according to researchers at the Stanford Internet Observatory.
Renée DiResta and Josh Goldstein of the Stanford Internet Observatory made the discovery after DiResta was approached on LinkedIn by someone calling herself Keenan Ramsey. At first glance the message looked like a typical software sales pitch, but on closer inspection DiResta noticed that the face in the profile image appeared to be fake, with the centered positioning of the eyes and hazy background that are telltale signs of AI-generated portraits. It soon became clear that Ramsey was a complete fabrication.
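The "centered eyes" tell mentioned above reflects how common face generators tend to place both eyes at nearly fixed, horizontally centered positions. As a purely illustrative sketch (not the researchers' actual method), one could flag suspicious geometry given eye landmark coordinates from any face-landmark detector; the function name and tolerance values below are hypothetical examples:

```python
# Illustrative heuristic only: AI face generators tend to place both eyes
# at nearly fixed, horizontally centered, level positions. Given eye
# landmark coordinates (from any face-landmark detector; detection itself
# is out of scope here), flag images whose eye geometry is suspiciously
# centered. Tolerances are arbitrary example values, not validated ones.

def eyes_suspiciously_centered(left_eye, right_eye, image_width,
                               center_tol=0.05, level_tol=0.02):
    """left_eye / right_eye are (x, y) pixel coordinates; tolerances are
    expressed as fractions of the image width."""
    midpoint_x = (left_eye[0] + right_eye[0]) / 2
    # How far the midpoint between the eyes sits from the horizontal center.
    center_offset = abs(midpoint_x - image_width / 2) / image_width
    # How far from level the two eyes are.
    tilt = abs(left_eye[1] - right_eye[1]) / image_width
    return center_offset <= center_tol and tilt <= level_tol

# Eyes dead-center and level, as in many AI-generated portraits:
print(eyes_suspiciously_centered((400, 512), (624, 512), 1024))  # True
# Off-center, tilted eyes, as in an ordinary candid photo:
print(eyes_suspiciously_centered((100, 300), (300, 420), 1024))  # False
```

A heuristic like this would only be one weak signal among many; real investigations also weigh background artifacts, mismatched earrings, and other rendering glitches.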
This motivated DiResta and her colleague Josh Goldstein to investigate how many computer-generated, or deepfake, photographs appear on the professional networking site. Deepfakes blend and superimpose existing photographs and videos to create fake images of people, or to make them appear to do or say something they never did.
Based on recent examples, this technology has made its way into the corporate sphere. Many of these profiles with AI-generated photographs appear to exist for marketing and sales purposes, according to NPR. When a user connects with one of these bogus identities, however, they are handed off to a real salesperson.
Many of the organizations listed as employers on LinkedIn accounts with AI-generated photos said they contact potential clients through third-party vendors. One such vendor, AirSales, says it engages independent contractors to deliver marketing services; those freelancers then design their own deepfake LinkedIn profiles.
Such phony profiles and entities violate LinkedIn's professional community policies. According to a community post on LinkedIn's transparency page, the network removed more than 15 million fake accounts in the first half of 2021, the majority of them caught by the company's automated defenses.