The FBI has warned that hackers are running wild with generative artificial intelligence (AI) tools like ChatGPT, quickly creating malicious code and launching cybercrime sprees that would have taken far more effort in the past.
The FBI detailed its concerns on a call with journalists, explaining that AI chatbots have fueled all kinds of illicit activity, from scammers and fraudsters perfecting their techniques to terrorists consulting the tools on how to launch more damaging chemical attacks.
According to a senior FBI official (via Tom’s Hardware), “We expect over time as adoption and democratization of AI models continues, these trends will increase.” Bad actors are using AI to supplement their regular criminal activities, they continued, including using AI voice generators to impersonate trusted people in order to defraud loved ones or the elderly.
It’s not the first time we’ve seen hackers taking tools like ChatGPT and twisting them to create dangerous malware. In February 2023, researchers from security firm Check Point discovered that malicious actors had been able to manipulate a chatbot’s API, enabling it to generate malware code and putting virus creation at the fingertips of almost any would-be hacker.
The FBI takes a very different stance from some of the cyber experts we spoke to in May 2023. They told us that the threat from AI chatbots has been largely overblown, with most hackers finding better code exploits through more traditional data leaks and open-source research.
For instance, Martin Zugec, Technical Solutions Director at Bitdefender, explained that “The majority of novice malware writers are not likely to possess the skills required” to bypass chatbots’ anti-malware guardrails. On top of that, Zugec noted, “the quality of malware code produced by chatbots tends to be low.”
That offers a counterpoint to the FBI’s claims, and we’ll have to see which side proves to be correct. But with ChatGPT maker OpenAI discontinuing its own tool designed to detect chatbot-generated plagiarism, the news has not been encouraging lately. If the FBI is right, there could be tough times ahead in the battle against hackers and their attempts at chatbot-fueled malware.
There’s a lot of AI hype floating around, and it seems every brand wants to cram it into its products. But a few remarkably useful tools have emerged as well, though they tend to be expensive. ChatGPT’s Deep Research is one such feature, and it seems OpenAI is finally feeling a bit generous about it.
The company has created a lightweight version of Deep Research that is powered by its new o4-mini language model. OpenAI says this variant is “more cost-efficient while preserving high quality.” More importantly, it is available for free, no subscription required.
Everyone’s heard the expression, “Politeness costs nothing,” but with the advent of AI chatbots, it may have to be revised.
Just recently, someone on X wondered how much OpenAI spends on electricity at its data centers to process polite terms like “please” and “thank you” when people engage with its ChatGPT chatbot.
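To put that question in perspective, here’s a minimal back-of-envelope sketch in Python. Every figure in it (extra tokens per polite phrase, daily message volume, energy per token, and electricity price) is an illustrative assumption, not a number from OpenAI; the point is only to show how such an estimate would be structured.

```python
# Back-of-envelope estimate of the extra cost of polite phrases in chat prompts.
# All figures below are illustrative assumptions, not OpenAI data.

EXTRA_TOKENS_PER_POLITE_MESSAGE = 3      # assumed: "please" + "thank you" add a few tokens
POLITE_MESSAGES_PER_DAY = 100_000_000    # assumed share of daily messages that include them
ENERGY_PER_TOKEN_WH = 0.001              # assumed inference energy per token, in watt-hours
ELECTRICITY_PRICE_PER_KWH = 0.10         # assumed electricity price, in dollars per kWh

extra_tokens = EXTRA_TOKENS_PER_POLITE_MESSAGE * POLITE_MESSAGES_PER_DAY
energy_kwh = extra_tokens * ENERGY_PER_TOKEN_WH / 1000  # convert Wh to kWh
daily_cost = energy_kwh * ELECTRICITY_PRICE_PER_KWH

print(f"Extra tokens per day: {extra_tokens:,}")
print(f"Energy per day: {energy_kwh:,.0f} kWh")
print(f"Electricity cost per day: ${daily_cost:,.2f}")
```

Swap in your own assumptions and the estimate scales linearly, which is why even a few “wasted” tokens per message can add up at ChatGPT’s volume.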
Imagine a tech giant telling you that it wants your Instagram and Facebook posts to train its AI models, with no incentive offered in return. You could, according to the company, opt out. But when you try to use the official tools to back out and stop AI from gobbling up your social content, they simply don’t work.
That’s what Facebook and Instagram users are now reporting. Nate Hake, founder and publisher of Travel Lemming, shared that he got an email from Meta about using his social media content for AI training. However, the link to the opt-out form provided by Meta doesn’t work.