The Canadian Security Intelligence Service (CSIS), Canada’s leading national intelligence agency, has expressed alarm over the spread of disinformation through artificial intelligence (AI)-generated deepfakes on the internet.
The agency is particularly concerned that deepfakes are becoming increasingly sophisticated and difficult to distinguish from genuine media, viewing them as a potential threat to Canadian citizens. CSIS’s report highlighted instances where deepfakes have been used to harm individuals, including a case covered by Cointelegraph involving deepfakes of Elon Musk targeting cryptocurrency investors. Since 2022, malicious actors have used such deepfake videos to trick crypto investors into handing over their funds. This concern intensified after a fake video circulated on X (formerly known as Twitter) showing Musk promoting a crypto platform that promised implausible returns.
CSIS also pointed out issues such as privacy breaches, societal manipulation, and inherent biases as additional challenges posed by AI. The agency emphasized the need for government policies, guidelines, and initiatives to keep pace with the evolving nature of deepfakes and synthetic media, warning:
“Should governments evaluate and respond to AI on their own and at their usual pace, their actions may soon become outdated.”
To combat the widespread dissemination of misleading information, CSIS advocates collaboration among partner governments, allies, and industry specialists. This commitment to international cooperation on AI-related issues was solidified on October 30, when the Group of Seven (G7) industrial nations agreed on an AI code of conduct for developers.
As Cointelegraph previously reported, this code comprises 11 principles aimed at fostering “safe, secure, and trustworthy AI globally.” It seeks to maximize the benefits of AI while simultaneously managing and mitigating its risks.