Foiling Enterprise Payment Fraud Through Managed Learning
In our new white paper on enterprise payment fraud (EPF), we explore the fastest-growing area of banking fraud today and explain NetGuardians’ technology-led approach to combating this widespread threat. Enterprise payment fraud involves simple, low-tech scams that fool retail and corporate customers into authorizing fraudulent payments into criminals’ bank accounts.
This results in huge losses to individuals and organizations. In 2019, the US Federal Bureau of Investigation’s Internet Crime Complaint Center received over 467,000 complaints relating to online crimes and scams, including payment frauds – an average of nearly 1,300 every day – carried out by criminals who had gained access to personal and corporate email accounts. Total losses in these cases were put at almost $3.5bn. Figures from UK Finance, the British financial-services trade body, show that in 2019 its members reported losses of £456m ($549m) as a result of so-called “authorized push payment fraud”, up from £246m in 2018.
Frauds that the customer authorizes
A major reason why this type of fraud is growing so quickly is that such fraudulent payments involve everyday transactions that would not normally arouse suspicion. Because they have been directly authorized by the customer, they are extremely difficult for banks to spot. Moreover, because these frauds are low-tech deceptions that typically require no special expertise, large numbers of criminals are attempting EPF-type scams.
Enterprise payment frauds frequently involve an element of “social engineering”. Using this approach, fraudsters harvest the information they need to make a bill or payment request appear genuine from freely available sources such as the organization’s website or the individual’s social media accounts.
Here are some common examples of enterprise payment fraud from our white paper:
- Advance fee fraud: a caller posing as an official from a government department or the tax authorities tells the victim they face court unless they pay to settle an action against them
- Fake bill fraud: fraudsters email an individual a fake bill for building work or school fees, for example. The forgery closely resembles a genuine bill but includes different account details
- Doorstep fraud: a fraudster calls in person at the victim’s home, posing as an employee of a company that has carried out work for the victim and claiming to have come to collect payment
- Invoice fraud: fake but very convincing invoices are emailed to a company with different payment details. In smaller companies with few formal controls, relatively junior staff who have access to payment systems can be duped or pressured into making a payment by a caller posing as a senior executive or a customer
- “Safe account” scam: a fraudster posing as a bank employee calls the victim to tell them their account has been compromised, then requests personal login details to help “protect” their money, or asks the victim to transfer funds to a new “safe” account that has been set up for them
- Romance fraud: fraudsters target victims through online dating sites or social media, creating a fake romantic relationship and winning the victim’s trust with online messages before asking them to send money
Because these fraudulent payments have been directly authorized by the victims, customers are often held responsible for the losses and therefore receive no compensation. However, pressure is mounting on banks to protect customers from enterprise payment fraud and compensate them for their losses.
Banks need better anti-fraud tools
Banks, therefore, need far more effective tools to combat these low-tech, hard-to-spot frauds.
Today’s rules-based anti-fraud systems cannot detect or block enterprise payment frauds because they are too rigid: customers now have so much flexibility and choice in how to transact that everyone’s payment behavior is effectively unique. No rules-based system can accommodate this much variety.
The problem with mainstream AI
Many newer software systems that try to use Artificial Intelligence (AI) to identify and block fraudulent payments in real time also have drawbacks. An individual bank’s data sets are just not big enough to allow the effective training of AI algorithms. This leads to “overfitting”, where systems are trained using a limited number of fraud examples and as a result can detect only the limited range of frauds with which they are familiar.
To address this problem, at NetGuardians we use a technique called Managed Learning. This combines several supervised and unsupervised Machine Learning approaches within a consistent scoring model and employs two phases of analytics. The first phase searches for anomalous transactions by building a dynamic profile of each customer’s banking behavior as it evolves through time and flags anomalous transactions. In the second phase, the system is trained to recognize which of these anomalies are fraudulent transactions (and to disregard the legitimate ones) by learning from the feedback it receives. A key strength of Managed Learning is that it does this without unbalancing the scoring models in a way that would lead to overfitting.
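To make the two phases concrete, here is a minimal, purely illustrative sketch of such a pipeline. This is not NetGuardians’ actual implementation – the class names, the z-score profiling, the smoothed feedback counts, and the 50/50 blend are all assumptions chosen for simplicity – but it shows the shape of the idea: phase one scores how unusual a payment is for that particular customer, and phase two adjusts the score based on analyst feedback about which flagged anomalies turned out to be fraud.

```python
# Hypothetical two-phase scoring sketch, loosely modeled on the Managed
# Learning description above. All names and thresholds are illustrative.
from statistics import mean, stdev


class CustomerProfile:
    """Phase 1: a dynamic per-customer behavioral profile."""

    def __init__(self):
        self.amounts = []  # payment history for this customer

    def update(self, amount):
        self.amounts.append(amount)

    def anomaly_score(self, amount):
        """Score in [0, 1]: how unusual is this amount for this customer?"""
        if len(self.amounts) < 5:
            return 0.5  # too little history: treat as mildly anomalous
        mu, sigma = mean(self.amounts), stdev(self.amounts)
        if sigma == 0:
            return 0.0 if amount == mu else 1.0
        z = abs(amount - mu) / sigma
        return min(z / 4.0, 1.0)  # squash the z-score into [0, 1]


class FeedbackScorer:
    """Phase 2: learn from analyst feedback which anomalies are fraud."""

    def __init__(self):
        # Laplace-smoothed [fraud, legitimate] counts per payee account
        self.payee_stats = {}

    def record_feedback(self, payee, is_fraud):
        stats = self.payee_stats.setdefault(payee, [1, 1])
        stats[0 if is_fraud else 1] += 1

    def fraud_probability(self, payee):
        fraud, legit = self.payee_stats.get(payee, [1, 1])
        return fraud / (fraud + legit)  # unknown payees score a neutral 0.5


def combined_score(profile, scorer, amount, payee):
    """Blend both phases into one consistent score in [0, 1]."""
    return 0.5 * profile.anomaly_score(amount) + 0.5 * scorer.fraud_probability(payee)
```

In this toy version, a payment that is both out of character for the customer and destined for an account analysts have flagged before scores highest; the smoothing in the feedback scorer is one simple way to keep a handful of fraud labels from dominating the model, which is the overfitting risk the passage above describes.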
Managed Learning doubles detection rate
The results are compelling: the fraud detection rate of our software is more than double that of a rules-based system, and the number of false positives is reduced by more than 80 percent. As a result, the time spent by fraud teams investigating suspicious payments declines by more than 90 percent, delivering major operational gains as well as a better banking experience for customers.