
Artificial? Yes, it may well be.

As AI explodes around the world, promising a raft of benefits, scammers are using the technology to develop ever more sophisticated scams that are becoming harder to spot. Know what to look out for.

What are Artificial Intelligence (AI) scams?

Artificial intelligence scams involve the misuse of AI technology or the exploitation of people's trust in AI to carry out fraudulent activities. These scams often leverage the perception that AI is advanced and capable of performing complex tasks, which can lead individuals to lower their guard and become more susceptible to manipulation.

Recent reports highlight how scammers are leveraging the promise of AI to trick individuals into sharing sensitive information or making financial transactions. Cybercriminals have created fake chatbots, voice assistants, and automated email systems that convincingly mimic legitimate communication. Unsuspecting victims are then manipulated into revealing personal details or even transferring money, under the false belief that they are interacting with a legitimate AI service.

Here are some common types of AI-related scams:

  • AI-generated content scams: Scammers use AI-generated content to create fake websites, articles, reviews, or social media posts that appear legitimate. This content can be used to spread misinformation, promote fake products or services, and manipulate public opinion. Some state actors also use bots to spread misinformation
  • Voice cloning and deepfake scams: AI technology can be used to clone voices or create convincing deepfake videos, where someone's likeness and voice are manipulated to say or do things they never actually did. Criminals are now able to use deepfakes to imitate trusted individuals, such as company executives or family members, to request fraudulent transactions. This adds a layer of complexity to the scams, making it harder for individuals to differentiate between real and fake requests
  • Impersonation scams: Scammers may impersonate individuals to deceive victims into performing actions like sending money or sharing sensitive information. Experts believe commonly available software can recreate a person's voice after learning from just 10 minutes of audio
  • Chatbot and virtual assistant scams: Fraudsters create fake chatbots or virtual assistants that mimic legitimate customer support services. These AI-powered bots may provide false information, attempt to steal personal data, or redirect victims to malicious websites
  • Phishing and social engineering with AI: Scammers can use AI to automate and personalise phishing attacks. AI algorithms analyse the target's online activity and create convincing phishing messages that mimic the writing style and interests of the victim
  • Tech support scams with AI twist: Scammers may use AI voice bots to impersonate technical support personnel, claiming to fix non-existent computer issues. They trick victims into granting remote access to their devices or paying for unnecessary services
  • AI investment scams: Fraudsters use AI jargon and sophisticated-sounding algorithms to promote fake investment opportunities. They claim their AI systems can predict market trends and offer guaranteed returns, but in reality, these scams result in financial losses
  • Online dating scams: AI-powered chatbots can be used by scammers on dating platforms to engage users in conversations and build trust. Eventually, the scammers ask for money or personal information under false pretences
  • Fraudulent job offers: Scammers create AI-assisted fake job postings that appear genuine. These scams may involve requests for payment to secure a job, promises of high salaries for minimal work, or attempts to collect personal information for identity theft
  • Automated robocall scams: AI-driven robocalls can impersonate legitimate institutions, such as banks or government agencies, to steal personal and financial information from recipients.

Deepfakes can be scary

AI has been used to impersonate children in order to extort money from their parents. In these terrifying calls, criminals use AI to clone the child's voice, convincing the parents that their child is in danger, or worse, has been kidnapped, and then demand a ransom.

In one reported case, a mother heard what she thought was her daughter shouting, "Help me, help me", followed by a man explaining that they had her daughter, that he would pump her stomach full of drugs, and that the parents would never see her again unless they paid a ransom.

They were told, "You need to pay a ransom if you want to see her again." The man said he would come and pick them up in a white van and put a bag over their heads so they couldn't see where they were going, and that if they didn't bring the money in cash, they'd be left for dead, along with their daughter.

It was only when the police were contacted, and told the mother that this was an AI scam and that they had seen many others like it, that the parents began to question what they had heard. Once it was confirmed that their daughter, who was away on a school trip, was safe and well, the parents realised it had in fact been a brutal, terrifying scam.

The police said the daughter's voice would have been AI-generated, while the man's voice was real. The parents remained convinced it had been her voice.

Other reports describe similar scams in which people receive fake distress calls from criminals impersonating their grandchildren, claiming to be in trouble and in need of money to get out of the situation.

Scams targeting businesses

Businesses are not immune to these AI-related scams either. The integration of AI into various business processes has made companies susceptible to attacks that target their data, financial assets, and intellectual property. Phishing emails and social engineering tactics that exploit AI-generated content are on the rise, fooling employees into giving away sensitive information or granting unauthorised access to systems.

Tips to Avoid AI-Related Scams:

  • Verify the source: Always verify the authenticity of any AI-powered communication you receive, especially if it asks for personal or financial information. Double-check email addresses, website URLs, and the sender's identity before responding or clicking on any links. Never rely on just one source
  • Safe word: Agree a "safe word" or code word with loved ones to use whenever someone asks for money, and always call the person back to verify the authenticity of the request
  • Set to private: Set all your social media accounts to private, as publicly available information can be easily used against individuals
  • Be sceptical: Whether it's an unexpected call, email, or message from an AI-powered system, be cautious when sharing sensitive details or making payments. Legitimate organisations won't ask for passwords, credit card numbers, or personal identification through unsolicited messages
  • Use multi-factor authentication: Enable multi-factor authentication (MFA) wherever possible, especially for sensitive accounts. Many companies offer this now and it adds an extra layer of security by requiring multiple forms of verification before granting access
  • Stay up to date: Keep yourself informed about the latest AI-related scams and cyber threats
  • Educate family and friends: Share your knowledge about AI scams with family and friends, especially those who might be more vulnerable to these types of scams. Sometimes a simple conversation can prevent a loved one from falling victim to a scam
  • Too-good-to-be-true: If something feels off or too good to be true, it probably is. Criminals often use high-pressure tactics to manipulate victims, so trust your instincts and don't let anyone rush you into making a decision
  • Secure your devices: Regularly update your devices' software, use reputable antivirus software, and avoid downloading apps or files from unknown sources. A well-protected device is less likely to be compromised
  • Beware of unsolicited messages: Be cautious when receiving unexpected messages, especially those that ask for personal information or money
  • Secure personal data: Avoid sharing sensitive information or granting access to your devices unless you are sure of the legitimacy of the request
  • Research investment opportunities: Investigate thoroughly before investing in any AI-related investment opportunity, and consult with financial professionals if needed

By staying vigilant and following these tips, you can reduce the risk of falling victim to AI-related scams and protect your personal information and finances.

If you encounter an AI-related scam or suspicious communication, report it.

Protect your data

Before a scammer can try to hook you into one of their sorry schemes, they have probably obtained some of your personal data from somewhere. They might have done this by scraping social media profiles using AI tools, or by buying stolen personal data on the dark web.

The best way to avoid having your data exposed in a breach, leaving you vulnerable to scams, whether AI-driven or otherwise, is to make sure it isn't stored where it can be stolen in the first place. You can have your data deleted from any company that no longer needs it by using our Rightly Protect service. It's quick, simple and free, and will tell you exactly who has your data and give you the chance to instruct them to completely erase it, if that's what you want to do.
