Facebook’s parent company, Meta, has identified a growing trend of malware creators exploiting the popularity of ChatGPT to lure victims into downloading malicious applications and browser extensions. ChatGPT is an AI chatbot created by OpenAI and backed by Microsoft. Meta has found that malware creators are capitalizing on public interest in the chatbot by promoting malicious tools that feature it. Since March, the company has identified around 10 malware families and over 1,000 malicious links promoted as ChatGPT tools. In some cases, the malware delivered working ChatGPT functionality alongside abusive files, according to a report quoted by Reuters.
During a press conference on the report’s findings, Meta’s Chief Information Security Officer, Guy Rosen, remarked that ChatGPT is “the new crypto” for bad actors. Meta’s executives also said the company is preparing its defenses against a variety of potential abuses involving generative AI technologies like ChatGPT.
The rising popularity and rapid development of platforms like ChatGPT have raised concerns among governments and regulators around the world, who believe such tools are likely to make online disinformation campaigns easier to propagate. In a statement issued after their meeting in Japan at the end of April, digital ministers of the G7 countries agreed that their nations should adopt risk-based AI regulation while still enabling the development of AI technologies.
Rosen believes it is still too early to find examples of generative AI being used in information operations, but he expects some bad actors to employ such technologies to try to speed up, and perhaps scale up, their activities. Entrepreneur and investor Elon Musk has also accused OpenAI, the developer of ChatGPT which he helped found, of “training the AI to lie,” and has announced plans to create a rival to the offerings of the tech giants, which he called “TruthGPT.”
Malware creators have increasingly used AI-based chatbots as lures, and ChatGPT has become a particular target because of its widespread use across a range of platforms. Bundling working ChatGPT functionality with abusive files makes the threat harder for users to detect, since the tool appears to do what it promises.
Given the chatbot’s growing popularity and broad reach, the trend of malware actors leveraging public interest in ChatGPT is expected to continue. Further security measures will likely be put in place to prevent the exploitation of tools like ChatGPT, and governments and regulators will need to monitor the use of AI-based chatbots to curb abuse and the spread of disinformation campaigns.