ChatGPT, an AI-based chatbot developed by OpenAI, is the latest lure being used by malware creators to entice users into downloading malicious apps and browser extensions. Meta, the corporation behind Facebook, has identified around ten malware families and over 1,000 malicious links promoted as ChatGPT tools since March. According to Reuters, the company has warned of the rising trend and is now preparing its defenses against a variety of potential abuses related to generative AI technologies like ChatGPT. During a press briefing, Meta's Chief Information Security Officer, Guy Rosen, and other Meta executives pointed out that for bad actors, ChatGPT has become the "new crypto."
Some of the malware even delivered working ChatGPT functionality alongside its malicious payload, Meta noted. The trend of malware actors leveraging public interest in ChatGPT to lure victims is expected to keep growing, and it is a cause for concern among authorities around the world. Tools like ChatGPT, whose developer OpenAI is backed by Microsoft, have raised fears among authorities that such technology will make online disinformation campaigns easier to propagate.
At the G7 meeting in Japan at the end of April, digital ministers of developed nations agreed that they should adopt risk-based AI regulations while still enabling the development of AI technologies. Meanwhile, entrepreneur and investor Elon Musk, a co-founder of OpenAI, has accused the company of "training the AI to lie" and is planning to create a rival to current offerings, "TruthGPT."
In summary, the growing popularity and rapid development of ChatGPT and other generative AI technologies have given malware creators new opportunities to exploit public interest. Meta and the developers of AI technologies are being urged to support risk-based regulations to ensure that such tools are not abused.