Tech Trends: Who’s calling?

Recent years have seen the development of Artificial Intelligence (AI) skyrocket, with many basic tools available online for free and more capable ones at low cost. The tasks AI can improve are only one side of the coin: when bad actors get their hands on these tools, they greatly increase the effectiveness of their scams and exploits.

Have you ever received a call from an organization urging you to make a payment to fix some urgent issue, or promising a large sum of money later? This was most likely a scam call centre trying to obtain your money or sensitive information.

When AI enters the mix, the scammer’s process becomes easier, faster and more widespread. One such possibility is the use of deepfake applications, such as AI voice cloning, to mimic a real person the victim knows.

Nearly everyone now has a sizeable digital footprint. Photos and videos posted across social media platforms and countless online accounts give potential attackers a rich collection of samples, which can be fed into an AI tool to clone a target’s voice and initiate a scripted call.

Imagine a loved one calls you one day saying they are in a life-or-death situation and needs money from you. Few would stop to think twice before finding the quickest way to send it. This is exactly what attackers count on: such AI-generated scam calls exploit fear to create a sense of urgency.

Security experts recommend the use of a safe word with loved ones in the event of a real emergency. If you receive a concerning call from a loved one in an emergency, call the person back at their regular number to verify the story.

Another use of AI deepfakes is to imitate an organization such as Amazon or a government agency. These calls are often AI chatbots with automated voice systems impersonating a real customer service agent.

The scammer provides incorrect details about the victim’s accounts, waiting for the victim to correct them and, in the process, give away personal information and credentials.

A report by BioCatch shows that 69 per cent of organizations in North America say criminals are more advanced at using AI for financial crime than banks are at using AI to fight it. This is why, globally, 91 per cent of financial services and banking organizations are rethinking the use of voice verification for clients.

The Office of the Superintendent of Financial Institutions predicts that by 2026, 70 per cent of financial institutions will be using AI, up from the 50 per cent recorded in 2023. The use cases include operational efficiency, customer engagement, documentation and fraud detection.

To combat these threats, individuals should learn how to identify scam calls, and organizations must invest in building robust security measures to counteract fraudsters’ exploits.
