Malicious individuals are using stolen personally identifiable information (PII) and voice and video deepfakes to try to land remote IT, programming, database and software-related jobs, the FBI warned last week.
The increasing malicious use of deepfakes
Deepfakes are synthetic media – images, audio recordings, videos – that make it look like a person did or said things they never did or said. Attackers have also been known to attempt real-time deepfake attacks.
Deepfakes are created with deep (machine) learning algorithms and generative adversarial networks, and they are becoming more difficult to spot by the day. Case in point: it took over 15 minutes and a set of unexpected questions and statements for the mayor of Berlin to begin to suspect that a scheduled Webex video call with someone who looked and sounded like Kyiv mayor Vitali Klitschko was, in fact, a deepfake-fuelled “attack.”
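To make the “generative adversarial” part concrete, here is a minimal sketch of the training loop such networks use: a generator learns to produce images that a discriminator can no longer tell apart from real ones. All layer sizes, hyperparameters and the 28×28 image shape below are illustrative assumptions, not the architecture of any actual deepfake system.

```python
# Minimal GAN training-loop sketch (illustrative sizes, not a real deepfake model).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # hypothetical dimensions

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to separate real images from generated ones.
    noise = torch.randn(batch, latent_dim)
    fakes = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce images the discriminator accepts as real.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two networks improve in lockstep, which is exactly why the resulting fakes keep getting harder to spot.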
Deepfake videos of Ukrainian President Volodymyr Zelenskyy have previously been leveraged to sow distrust on the Ukrainian side of the ongoing conflict, but deepfakes can be a tool for disseminating disinformation with a wide variety of malicious goals in mind.
For example, they are used to create nonconsensual deepfake pornography (including so-called revenge porn) and hoaxes, to perpetrate document fraud and theft, to fool “know your customer” mechanisms, and more.
Researchers around the world are working on technologies that can reliably spot deepfake media, but as the quality of deepfakes improves, they are constantly playing catch-up. These technologies alone are also not enough to combat the spread of disinformation for ideological or political purposes; that side of the problem will have to be addressed via legislation, regulation and education, as well as action by tech giants.
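One idea from the detection research mentioned above is that GAN-generated images often carry unusual high-frequency artifacts. The toy check below illustrates that approach only in spirit: the band split and threshold are made-up placeholders, where a real detector would calibrate them on labelled data and use far richer features.

```python
# Toy spectral check: flag images with unusually high high-frequency energy.
# Cutoff and threshold are hypothetical, for illustration only.
import numpy as np

def high_freq_energy_ratio(gray_image: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    center = spectrum[h // 2 - ch : h // 2 + ch, w // 2 - cw : w // 2 + cw]
    total = spectrum.sum()
    return float((total - center.sum()) / total)

def looks_synthetic(gray_image: np.ndarray, threshold: float = 0.35) -> bool:
    # A real system would learn this threshold from real/fake training data.
    return high_freq_energy_ratio(gray_image) > threshold
```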
FBI’s warning
The FBI’s warning describes a recent increase in complaints the agency has been receiving about individuals using deepfakes and stolen PII to apply for a variety of remote jobs and work-at-home positions, some of which “include access to customer PII, financial data, corporate IT databases and/or proprietary information.”
These individuals are using stolen PII to try to bypass pre-employment background checks, and voice spoofing – or potentially voice deepfakes – during online interviews.
“In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually,” the FBI explained.
These discrepancies may be easy to notice, but also easy to dismiss given the occasional glitchiness of audio and video communications. Organizations looking for IT professionals, programmers, software developers and database administrators are advised to take extra precautions to ensure they are not ensnared by these and other attackers. A rough sketch of one such precaution follows below.
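The misalignment the FBI describes can, in principle, be checked programmatically: compare a speaker’s mouth-region motion with the audio loudness envelope and flag calls where the two barely correlate. The sketch below assumes per-frame grayscale mouth crops and per-frame audio energy are already extracted; those inputs and the 0.3 threshold are assumptions for illustration, not a vetted screening tool.

```python
# Rough audio-visual consistency check: correlate lip motion with audio energy.
# Inputs and threshold are assumed/hypothetical, for illustration only.
import numpy as np

def motion_energy(mouth_frames: np.ndarray) -> np.ndarray:
    """Mean absolute pixel change between consecutive mouth-region crops.

    mouth_frames: array of shape (num_frames, height, width), grayscale.
    """
    diffs = np.abs(np.diff(mouth_frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def av_sync_score(mouth_frames: np.ndarray, audio_energy: np.ndarray) -> float:
    """Pearson correlation between mouth motion and per-frame audio energy."""
    motion = motion_energy(mouth_frames)
    audio = audio_energy[: len(motion)]  # align lengths
    motion = (motion - motion.mean()) / (motion.std() + 1e-8)
    audio = (audio - audio.mean()) / (audio.std() + 1e-8)
    return float(np.mean(motion * audio))

def flag_possible_deepfake(mouth_frames, audio_energy, threshold=0.3) -> bool:
    # Low correlation between lip movement and speech loudness mirrors the
    # discrepancies the FBI describes; the threshold here is hypothetical.
    return av_sync_score(mouth_frames, audio_energy) < threshold
```

A low score would only be a prompt for closer human scrutiny – live interviews with unexpected questions, as in the Klitschko incident, remain the more reliable test.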