Dr. Rishabh Das
As AI algorithms become increasingly capable, we must treat our voice like a fingerprint and protect how we share it. In exclusive coverage by 10tv in Columbus, Ohio, Dr. Rishabh Das of the Scripps College of Communication discusses this emerging threat and outlines potential defensive measures for identifying deepfakes.
Artificial intelligence (AI) refers to computer programs that can imitate human intelligence. An algorithm studies large amounts of existing data to find patterns or develop specific cognitive capabilities. Once trained, the algorithm can accelerate human workflows, making us more efficient. These are all positives, so where is the problem?
What if the AI learns your voice? It can then impersonate you and deceive your friends. What if the AI learns how you move and talk on Zoom or Teams-style video conference calls? The algorithm can then execute a broader form of impersonation. These creations are called deepfakes: synthetic audio or video clips produced by generative AI. With only a few seconds of publicly available voice or imagery, criminals can clone a person's likeness and embed it in phone calls.
On such calls, the cloned voice sounds authentic, and the victim can be tricked into transferring money or disclosing confidential information to the criminal. These scams are becoming increasingly common. In a recent study, Starling Bank found that 28% of people had been targeted by an AI voice-cloning scam at least once in the past year. 46% of people did not even know this type of scam existed, and 8% said they would likely send whatever money was requested, even if the call from their loved one seemed strange. [1]
Direct link to TV report:
AI deepfakes are not just fooling individuals. They have already been used to infiltrate corporate environments and execute high-stakes fraud. Criminals impersonated a senior officer of a Hong Kong-based bank and duped the organization out of £20 million. The criminals are believed to have downloaded publicly available videos and used an AI algorithm to execute the scam.
Reference: -
The McClure School of Emerging Communication Technologies strives to offer the best academic programs in the information technology (IT), cybersecurity, game development, and virtual reality/augmented reality (VR/AR) industries. Our programs and certificates cover numerous aspects of the rapidly changing industries of information networking, cybersecurity operations, data privacy, game development, digital animation, and the academic side of esports.