Cybersecurity leader deploys capabilities to stop deepfakes and protect business resilience

Trend Micro, a global leader in cybersecurity, warns through its Threat Research Group of a significant increase in AI-based tools available on the so-called dark web. In response to this trend, the company has just launched Trend Micro Deepfake Inspector, a new deepfake detection technology that is now available to consumers as a free download and will soon be included in the Trend Vision One™ platform.

Internet users can use Trend Micro Deepfake Inspector to protect themselves against scammers who use face-swapping technology during live video calls. In addition to applying a variety of advanced methods to detect AI-generated content, such as image noise and color analysis, the solution evaluates user behavioral elements, providing a more reliable defense against deepfake attacks.
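Trend Micro does not publish the internals of its detector, but the image-noise analysis it mentions rests on a well-known observation: regions synthesized or spliced by a generator often carry noise statistics that differ from genuine camera sensor noise. The following is a minimal, purely illustrative sketch of that idea (not the product's actual method): extract a high-pass noise residual and compare its energy between two patches.

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """High-pass residual: the image minus a local box-blur mean.
    Manipulated regions often show residual statistics that differ
    from the sensor noise of authentic regions."""
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):            # box blur via shifted sums
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

def noise_energy(img: np.ndarray) -> float:
    """Standard deviation of the high-pass residual."""
    return float(np.std(noise_residual(img.astype(float))))

# Toy demo: a smooth synthetic patch vs. the same patch with
# added Gaussian noise mimicking camera sensor noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth gradient
noisy = clean + rng.normal(0, 5, clean.shape)       # gradient + noise
print(noise_energy(clean) < noise_energy(noisy))    # True
```

A real detector combines many such signals (color statistics, facial-landmark consistency, behavioral cues) and feeds them to a trained classifier; a single residual statistic like this one is only the starting intuition.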

“These new deepfake tools make it easier for cybercriminals to carry out social engineering scams and attempts to circumvent security, which is why we are implementing new features to detect deepfakes and other forms of fraud that use artificial intelligence, aiming to protect our customers and consumers in general”, highlights Kevin Simzer, COO of Trend Micro.

Detecting and defeating AI-based attacks is essential to improving enterprise attack surface risk management and reducing digital risk for consumers. In a recent Trend Micro survey, 71% of respondents expressed a negative opinion about deepfakes, believing that one of their main uses is fraud. A deepfake that goes undetected can lead to financial losses, job loss, legal challenges, damage to a company's reputation, identity theft, and harm to mental and physical health.

The abuse of generative AI by cybercriminals

The commercialization of generative AI tools on the deep web is growing at the same rate as enterprise AI adoption – and these tools are becoming cheaper and more accessible every day, allowing criminals of all skill levels to launch large-scale attacks with greater ease, tricking victims into extortion, identity theft, fraud, or disinformation.

According to the researchers, just a few weeks after the release of the report on generative AI, more sophisticated deepfake-creation tools are already emerging. Threat actors are proliferating offerings of large language models (LLMs) that criminally use deep learning algorithms and other technologies to increase the volume and reach of their attacks.

Screenshots from Swapface's demo video showing how the software works

Apps called Deepfake AI, SwapFace, and AvatarAI VideoCallSpoofer have also been made available. If their ads are to be believed, these apps can replace a face in a video with an entirely different one taken from just one or a few photos, though there are key differences in how each offering works. While Deepfake AI supports recorded videos, SwapFace is implemented as a live video filter and even offers support for popular video streaming tools such as Open Broadcaster Software (OBS).

These criminal LLM bots are specifically trained on malicious data, including malicious source code, methods, techniques, and other criminal strategies. “The need for such capabilities arises from the fact that commercial LLMs are programmed to refuse a request if it is considered malicious. In addition, attackers are often reluctant to directly access services like ChatGPT for fear of being tracked and exposed,” explains Flávio Silva, cybersecurity expert at Trend Micro.

Flávio also points out that jailbreak-as-a-service front-ends can supposedly bypass this censorship by wrapping the user's malicious questions in a special prompt designed to trick a commercial LLM into giving an unfiltered answer.

The demand for criminal LLMs is slowly but steadily growing. Older criminal LLMs such as WormGPT and DarkBERT, which were reported to be discontinued last year, are making a comeback. What is notable about the recent releases of DarkBERT and WormGPT is that both are offered as apps with a “Siri kit” option. This is presumably a voice-enabled option, similar to the recently released Humane AI Pin and Rabbit R1 AI assistants. It also coincides with Apple announcing a similar feature in the next iPhone update.
Click HERE to read more details of the study “Surging Hype – An Update on the Rising Abuse of GenAI”.