On Wednesday, May 3, 2023, Meta (formerly Facebook) reported that it had discovered malicious actors exploiting public interest in ChatGPT, the AI-powered chatbot, to trick users into downloading harmful apps and browser extensions, a pattern the company likened to cryptocurrency scams. Since March, the social media giant has uncovered roughly 10 malware families and more than 1,000 malicious links promoting tools that claim ChatGPT features; some even offered working ChatGPT functionality alongside the abusive files.
During a press briefing, Guy Rosen, Meta's Chief Information Security Officer, said that for bad actors, ChatGPT is the new cryptocurrency. He and other Meta executives said the company is taking steps to prepare for potential abuse tied to generative AI technologies like ChatGPT, which can rapidly produce human-like writing, music, and art. Policymakers have raised concerns that such tools could facilitate online disinformation campaigns.
Although it is still early, the executives acknowledged that they expect bad actors to begin using generative AI to "accelerate and possibly expand" their activities.
Source Agencies
