Coronavirus is stumping security software.
Businesses have been trying to adapt to the coronavirus pandemic’s “new normal.” It turns out, criminal enterprises have been making the same transition—often in ways more nimble than those of legitimate companies.
Last month, the international law enforcement agencies Interpol and Europol both warned of a spike in fraud related to the pandemic, with tactics involving everything from COVID-19-themed phishing emails to sales of phony coronavirus test kits and fake personal protective equipment. Hackers have seen an opportunity too, attacking hospitals with ransomware just when they can least afford to have their computer systems fail.
“There has been a very high increase in attacks and, in fact, health care institutions have been attacked the most,” says Ajay Bhalla, president of cyber and intelligence solutions at payments giant Mastercard.
Earlier this month, Mastercard began offering health organizations free cybersecurity risk assessments through its RiskRecon subsidiary and a partnership with the Health Information Sharing and Analysis Center (H-ISAC), a consortium of key players in global health.
Max Heinemeyer, director of threat hunting at Darktrace, a cybersecurity company based in Cambridge, England, that uses artificial intelligence to guard against cyberattacks, says there has been an increase in “fearware”: phishing emails targeting hospitals that are made to appear official and purport to contain vital information about the COVID-19 pandemic, but whose attachments or links launch ransomware. Darktrace has been offering its software to British hospitals for free during the pandemic to help protect them from these attacks.
But the pandemic poses a particular challenge for security software designed to detect fraud, money laundering, and cybercrime, experts say.
That’s because many of these systems rely on machine-learning algorithms to spot deviations from normal patterns—but when everyone’s normal changes, the software struggles to cope. The result can be too many false positives, blocking legitimate commerce and overwhelming human analysts tasked with investigating the alerts such software generates.
Too many false positives and companies will just turn the security systems off, says Michal Pechoucek, chief technology officer at cybersecurity company Avast. Or, he says, they will raise the thresholds for the software to trigger an alert, potentially allowing more fraudulent transactions or cyberattacks to slip through.
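To make that failure mode concrete, consider a stripped-down anomaly detector fit to pre-pandemic spending and then confronted with pandemic-era behavior. The detector, thresholds, and dollar amounts below are invented for illustration and represent no vendor's actual system.

```python
# A minimal sketch of the failure mode, assuming a simple z-score
# anomaly detector; every number here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# The "normal" the model learned: pre-pandemic, mostly in-store baskets.
pre_pandemic = rng.normal(loc=60, scale=20, size=10_000)
mu, sigma = pre_pandemic.mean(), pre_pandemic.std()

def alert_rate(amounts, threshold):
    """Fraction of transactions whose deviation from the learned
    baseline exceeds the alert threshold."""
    z = np.abs((amounts - mu) / sigma)
    return (z > threshold).mean()

# The "new normal": fewer, larger online grocery orders.
pandemic = rng.normal(loc=140, scale=30, size=10_000)

print(f"alerts on pre-pandemic traffic @3 sigma: {alert_rate(pre_pandemic, 3):.1%}")
print(f"alerts on pandemic traffic     @3 sigma: {alert_rate(pandemic, 3):.1%}")

# Raising the threshold, as Pechoucek describes, silences the noise...
print(f"alerts on pandemic traffic     @8 sigma: {alert_rate(pandemic, 8):.1%}")

# ...but a $200 fraudulent charge the old rule would have caught
# now slips through unflagged.
fraud = np.array([200.0])
print(f"fraud flagged @3 sigma: {alert_rate(fraud, 3) > 0}")
print(f"fraud flagged @8 sigma: {alert_rate(fraud, 8) > 0}")
```

In this toy setup, a rule that flagged well under 1% of pre-pandemic transactions suddenly fires on the majority of pandemic-era ones, and the only quick fix, loosening the threshold, also waves through the fraud.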
In fact, Pechoucek says, sophisticated hackers may even incorporate this into their tactics and deliberately try to trigger false alarms in advance of an actual attack in order to condition their targets’ cybersecurity analysts to ignore the alerts or relax their defenses. Avast recently helped a hospital in the Czech city of Brno mitigate a ransomware attack that had temporarily shut down its IT systems.
One way to guard against this is to have multiple security systems working in concert, notes Martin Rehak, the cofounder and CEO of Resistant AI, a Prague startup that specializes in keeping criminals from gaming the automated systems many financial firms now use to register new customers and conduct required background checks.
Pechoucek and Rehak previously cofounded another company, Cognitive Security, which Cisco purchased for an undisclosed amount in 2013; Pechoucek is now an investor in Resistant AI. Today the startup announced it has received $2.75 million in seed funding from two venture capital firms, Index Ventures and Credo Ventures, with participation from technology incubator Seedcamp. Daniel Dines, the CEO of UiPath, the robotic process automation company, is also an angel investor in the startup.
Rehak says the company’s software is designed to work in conjunction with human fraud analysts. “Our view of A.I. is that it is a great way to scale human teams,” he says. Human review is essential, especially when the cost of a false alarm, such as blocking an important transaction from a valued customer, is high.
Another solution to the challenge that shifting consumer behavior poses for A.I.-based fraud detection systems is simply to retrain them on new data. That’s the approach that Feedzai—a San Mateo, Calif., company that uses machine learning to help protect businesses, including Citigroup and Lloyds Bank, against financial fraud—has taken, according to its cofounder and chief science officer, Pedro Bizarro.
Bizarro says just a few years ago, Feedzai would have required at least 12 months’ worth of data to train a good machine-learning–based fraud detection system. Now, thanks to advances in A.I. techniques, he says, Feedzai can achieve the same results with just a few months of data. And he says that for a large national bank, “even just five days of data is tens of millions of transactions, and it is easy to train a good model from that.”
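As a toy version of that retraining loop, the classifier below is fit only on the most recent days of a synthetic transaction stream. The features, labels, fraud rates, and five-day window are all invented for illustration and bear no relation to Feedzai's actual models.

```python
# A toy illustration of retraining a fraud model on a short, recent
# window of data; all features, labels, and volumes are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def daily_batch(n, fraud_shift):
    """One synthetic day of transactions: 2 features, roughly 1% fraud."""
    y = rng.random(n) < 0.01
    X = rng.normal(size=(n, 2))
    X[y] += fraud_shift          # fraud looks different from normal traffic
    return X, y

# Stream 30 days of data; behavior (and fraud tactics) drift over time.
days = [daily_batch(20_000, fraud_shift=np.array([2.0 + 0.05 * d, -1.5]))
        for d in range(30)]

# Retrain on only the 5 most recent completed days, then score "today."
X_train = np.vstack([X for X, _ in days[-6:-1]])
y_train = np.concatenate([y for _, y in days[-6:-1]])
model = LogisticRegression(class_weight="balanced").fit(X_train, y_train)

X_today, y_today = days[-1]
recall = model.predict(X_today)[y_today].mean()
print(f"fraud recall on today's traffic: {recall:.1%}")
```

The design choice mirrors Bizarro's point: a short window of recent data tracks the current "normal" far better than a year-old baseline, and at real transaction volumes even a few days yields plenty of training examples.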
Having a comprehensive view of the person attempting a transaction, as Mastercard’s Bhalla says his company’s card network often does, is a big advantage too. He says that while the pandemic may have forced people to shop online instead of in-store, the kinds of things they are buying, the brands they are buying from, and the amounts they are spending have not changed too dramatically. As a result, automated systems that examine multiple forms of identification, such as a card number, registered card address, delivery address, IP address, and device type, and then compare these against normal purchasing patterns, can easily cope.
There are exceptions, of course, Bhalla says. For instance, a business manager who is working from home on their personal laptop and making a one-off business purchase with a company card registered to a work address, but asking for the item to be delivered to a coworker’s home address, may find that a legitimate transaction is flagged as suspicious.
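In rough outline, a multi-signal check of the kind Bhalla describes might weigh several identity signals and escalate only when enough of them disagree at once. The sketch below, with invented field names, weights, and threshold, shows both why a single change (shopping online) clears and why the edge case above trips an alert; Mastercard's real scoring is far more sophisticated.

```python
# A schematic of a multi-signal identity check; field names, weights,
# and the review threshold are hypothetical, not Mastercard's logic.
SIGNAL_WEIGHTS = {
    "delivery_address": 1.5,
    "ip_address": 1.5,
    "device_type": 1.0,
    "merchant_brand": 1.0,
}
ESCALATE_ABOVE = 3.0  # hypothetical review threshold

def risk_score(profile, attempt):
    """Add up the weight of every signal that deviates from the
    cardholder's established pattern."""
    return sum(weight for signal, weight in SIGNAL_WEIGHTS.items()
               if attempt.get(signal) != profile.get(signal))

profile = {"delivery_address": "office", "ip_address": "office-net",
           "device_type": "desktop", "merchant_brand": "usual-supplier"}

# Same buyer, now ordering online from the usual supplier: one weak
# signal changes, so the transaction clears.
shift_online = dict(profile, device_type="laptop")
print(risk_score(profile, shift_online) > ESCALATE_ABOVE)  # False -> clears

# The edge case above: personal laptop on a home network, delivery to
# a coworker's house. Three signals disagree at once, and a legitimate
# purchase gets flagged for review.
edge_case = dict(profile, device_type="laptop", ip_address="home-net",
                 delivery_address="coworker-home")
print(risk_score(profile, edge_case) > ESCALATE_ABOVE)  # True -> flagged
```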
Of course, the pandemic has forced criminals to change their normal behaviors too. “Often fraudsters were trying to buy things with very high retail value,” Bizarro says. Now, some of those items—such as expensive airline tickets—aren’t available. Instead, he says, Feedzai is seeing more of what he calls “sugar and rice” fraud, in which stolen credit card numbers are used to shop for groceries or other everyday items that are less likely to stand out and be flagged by fraud detection software.
Rehak says that the increasing use of automated systems to register customers for financial services and conduct required know-your-customer and anti–money-laundering checks has created new avenues for fraudsters. Sophisticated criminals, he says, are discovering how to use machine learning themselves to produce fake documents, ID photos, and even biometric data, such as fingerprints, that can fool these automated onboarding tools. But, he says, these methods tend to leave traces of manipulation that other A.I. systems, such as the ones developed by Resistant AI, can be trained to identify.
Bizarro says that cybercriminals have also discovered ways to use A.I. systems that can identify objects in photographs to defeat the image-based captcha challenges that many businesses use to ensure those trying to complete transactions are “not a robot.”
He says such A.I.-based methods have lowered the barriers to entry to online financial crime. More sophisticated hackers and criminal groups now package and sell stolen credit card numbers and software that enables anyone to easily perpetrate cyberattacks or fraud attempts.
And he worries that with the crushing economic impact of the coronavirus—with millions of people out of work worldwide—a massive surge in fraud is likely imminent, as some people are tempted, out of desperation, to become at least part-time criminals. “There are 25 million people out of work in the U.S.,” he says. “Even if just 0.1% of them consider it, that’s 25,000 new fraudsters.”