
Fraud Capture: Identity Theft from a Data Science Perspective

Published: 2/9/2024
Categories: prove, fraud, datascience, developers
Author: ndewald

In 2022, 1.1 million incidents of identity theft were reported to the Federal Trade Commission, and consumers reported losing $8.8 billion to fraud (1). Not only does this cost individuals and businesses money, but it also erodes trust between people and the companies they interact with. So, how can we stop bad actors without adding friction for good users in the digital world? While our goal is to prevent fraud, we still need to ensure that good consumers have as seamless an experience as possible. The more extensive the sign-up process for a product or service, the higher the drop-off rate, so the process must be as simple as possible.

Prove helps combat fraud by performing digital identity authentication and fraud mitigation using phone risk signals and by associating identities with phone numbers to catch bad actors before they can commit fraud. Prove performs digital authentication and identity verification via a solution termed "PRO." Much of this occurs behind the scenes, allowing the user to have a seamless experience.

PRO


PRO stands for Possession, Reputation, and Ownership, each referring to a phone. We check whether the person claiming to be using the phone is actually in possession of it (Possession), what the recent activity on that phone looks like (Reputation), and whether the person SHOULD have access to the phone (Ownership).
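As a rough sketch, the three checks can be thought of as a single pass/fail gate. The names, the 600 reputation cutoff, and the combination logic below are illustrative assumptions, not Prove's actual API:

```python
from dataclasses import dataclass

@dataclass
class ProResult:
    possession: bool   # Possession: is the claimed user holding the phone right now?
    reputation: int    # Reputation: 0 (very risky) to 1000 (no identifiable risk)
    ownership: bool    # Ownership: is this identity associated with the phone number?

def passes_pro(result: ProResult, min_reputation: int = 600) -> bool:
    # All three checks must clear; the 600 cutoff is invented for illustration.
    return result.possession and result.ownership and result.reputation >= min_reputation

print(passes_pro(ProResult(possession=True, reputation=850, ownership=True)))   # True
print(passes_pro(ProResult(possession=False, reputation=850, ownership=True)))  # False
```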

ā€The first part of Proveā€™s solution is built on well-established cryptographic protocols SIM and FIDO. Prove utilizes a cryptographic bind-based authentication where the phone number is bound to the SIM with 128-bit encryption, allowing Prove to authenticate that the user is actually in possession of the phone at the time of the transaction. Iā€™ll go into the importance of this later on, but this is critical for ensuring that Iā€™m not using YOUR phone to open up accounts.

ā€The Possession check can be done in a few different manners and Prove has quite sophisticated methods of accomplishing this, but the one most people are familiar with is the SMS OTP. Your bank may send you a text with four to nine digits and ask you to submit that ā€˜one-time passwordā€™ to the website ā€“ this allows them to confirm with a relatively high confidence that you are actually holding your phone. Itā€™s important to note that an SMS can be shared through social engineered attacks. The outcome for Possession is True/False.

ā€The Reputation check looks at what kind of activity has been seen on the phone recently. Indicators of risky phone numbers include Nonfixed VoIPs, as data on these line types is limited and the barrier to acquiring an account is much lower than others, and recent SIM swaps, as a bad actor can have taken over the phone. Prove has created a heuristic model that uses a plethora of signals to assign each phone number a score from 0 to 1000, where 0 is very risky, and 1000 has no risk we can identify from a Reputation standpoint.

ā€The Ownership check confirms whether or not an individual is associated with a phone number. To do this, customers provide some PII (Personally Identifiable Information), that is used to confirm an association with the phone number. The outcome for Ownership is True/False.

Types of Fraud

While there are many different types of fraud, PRO focuses on third-party fraud, also known as identity theft. For example, say I want to open a credit line with my neighbor's information so that I'm not held responsible for paying the card back. In this scenario, I'll open an account using my neighbor's phone number, which has a good reputation because she hasn't been using her phone for risky activity, and I know my neighbor's PII, so I'm able to tie it to her phone number: ownership checks out.

ā€However, because Iā€™m not physically holding her phone, I wonā€™t receive the OTP thatā€™s sent to the phone to check for possession, and no cryptographic bind has been established ā€“ in this case, the possession check returns a ā€˜Falseā€™ value and stops the fraud. While there are ways for me to receive the OTP via SIM swapping and social engineering, we know the phone hasnā€™t been SIM swapped because it has a good reputation, and letā€™s assume my neighbor is knowledgeable about social engineering and doesnā€™t fall prey to my attempts to obtain the OTP she received.

PRO is NOT designed to catch first-party fraud, which is categorized as a person committing fraud as themselves. Consider that I, the fraudster, am perpetrating fraud in my own name. I'll open an account using a phone number that isn't used for risky activity but has been tied to me, so it passes the reputation and ownership checks.

Because it's my real phone, I'll be able to enter the OTP that's been sent, and a cryptographic bind has been established, ensuring I pass the possession check. While not designed to catch first-party fraud, Prove can provide a detailed evidence trail that will discourage (or help relying parties identify) first-party fraud, e.g., "This transaction took place on your iPhone 15 at this time and day": essentially a 'digital watermark'.


Risk Indicator Modeling

Although our products are heuristic-based, we use machine learning (ML) to build gains charts to highlight the potential fraud capture capabilities. In other words, if we look at the number of applications that Prove deems to be risky, we see that weā€™re able to correctly identify fraudulent transactions while minimally affecting good applicants.

ā€There are many metrics we can use to discuss fraud capture, but some are easier to understand than others, and we need to be conscious of this when presenting to a non-technical audience. Our team performs ML to help determine which signals may be most useful for a particular client and their fraud scenarios; we can then highlight why those signals were selected with visuals tailored to the audience (e.g., projected revenue or pass-through rates). More often than not, our audience consists of product stakeholders as opposed to data experts, so we have a few different tools at our disposal, depending on who weā€™re having conversations with.

ā€Using accuracy can be helpful, but the data sets we work with tend to be undersampled with fraud. Fraud tends to make up a very small portion of a data set, typically less than 3%. Say weā€™re working with a set that has a 2.6% fraud rate, and we find we have an accuracy of 80% ā€“ this means that 80% of the time, we can correctly classify whether a transaction was good or fraudulent. However, this doesnā€™t tell us much about how the model is doing because we could classify all fraudulent transactions as good and still retain an 80% accuracy rate.

ā€Precision and recall can be useful because they give us more information on how the model determines good vs. fraudulent, not simply whether the model is correct (as accuracy does). Precision tells us what percentage of total predicted fraud was classified correctly, and recall tells us what percentage of total actual fraud was caught by the model. These can be helpful but difficult to explain and can be easily misconstrued by those not familiar with the terminology. For the same population, we could have a precision rate of 0.07 and a recall rate of 0.9. While these numbers arenā€™t incorrect, they donā€™t quite tell a comprehensive story and can be misinterpreted as accuracy. Precision is low because the model is over-classifying good transactions as fraud and only correctly classifying 90% of the fraud. If clients take this at face value, they will see that we are adding friction to a lot of their good population but not catching a lot of the fraud ā€“ which is the worst-case scenario.

ā€Weā€™ve found that using gains charts is a great way to convey results in an easily digestible manner. A gains chart measures how much better a model performs than random guessing. For the same population with an 80% accuracy rate, we can say that we captured 30% of the fraud by adding friction to only 5% of the population. Random guessing would have us capturing 5% of the fraudulent population, so in this instance, we have a 6x multiplier capture rate.


Data Challenges

There are many challenges that come with working with fraud data, and they affect the industry at large; these are not unique to Prove.

  • Appropriately labeled fraud can be a pain point, as organizations and systems differ in how mature their fraud classifications are.
  • In order to study fraud, we actually need to go back in time to the moment the fraud was committed. Knowing what phone activity and identity associations look like today is essentially useless, so while we have the ability to look back at a moment in time, our retro capabilities are limited and decline with age.
  • Having as much knowledge as possible about our customer implementations is important, since fraud looks different depending on what channels are available, where Prove products sit in the flows, and what checks happen before PRO; however, we often don't have insight into this information.

Modeling Challenges

The goal of modeling is to find the most effective features and signals in Proveā€™s products to stop specific client fraud.

  • Logistic regression is our main approach to modeling; we prefer to keep it simple to allow easy translation for non-technical stakeholders, and we often need to be able to transition models into heuristic rules for clients.
  • Correct fraud tagging yields more accurate models.
  • Data is often imbalanced (<3% fraud), or a client may attempt to test us by throwing a large portion of untagged fraud into a file. Conversely, we've seen the opposite problem, where fraud is oversampled but the true production distribution is unclear.
  • The possession check aspect of a real-time transaction can't be simulated; we simply can't look back in time and test whether a transaction would've passed a possession check.
  • In production, PRO proactively stops fraud, so it's hard to measure fraud because we don't get to see it come to fruition.
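The point about translating a logistic model into heuristic rules can be sketched as follows. The coefficients and features below are invented for illustration, not fitted values from any real Prove model:

```python
import math

# Hypothetical fitted coefficients -- invented, not Prove's actual values.
INTERCEPT = -4.0
W_NONFIXED_VOIP = 2.2
W_RECENT_SIM_SWAP = 2.8

def fraud_probability(is_nonfixed_voip, recent_sim_swap):
    """Standard logistic regression: sigmoid of a weighted feature sum."""
    z = (INTERCEPT
         + W_NONFIXED_VOIP * int(is_nonfixed_voip)
         + W_RECENT_SIM_SWAP * int(recent_sim_swap))
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_review(is_nonfixed_voip, recent_sim_swap, threshold=0.5):
    # With binary features, the model collapses into a readable heuristic:
    # "flag when the predicted fraud probability crosses the review threshold."
    return fraud_probability(is_nonfixed_voip, recent_sim_swap) >= threshold

print(flag_for_review(False, False))  # False (sigmoid(-4.0) is about 0.018)
print(flag_for_review(True, True))    # True  (sigmoid(1.0) is about 0.731)
```

Because each weight attaches to one interpretable signal, the fitted model reads directly as a rule a client can implement without any ML infrastructure.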

Despite these limitations, Prove is typically able to show a significant improvement in fraud capture and/or pass rates over our clients' existing approaches. Moreover, we are able to indicate an additional lift in performance when going live in production, as possession checks will occur. Based on our experience with hundreds of client implementations, we can provide guidance on the range of expected lift that possession checks will yield. Finally, when we go live in production, we test a small sample population to validate that our PRO methodology is effective before deploying to the full user population.

Conclusion

Overall, Prove has been able to attain a 75% fraud reduction relative to the attack rate for our customers. There are quite a few different tools at our disposal, but we tend to utilize metrics that are easily understood by a broad audience. The types of fraud that occur, how our products are utilized, and any fraud controls our clients currently have in place can all impact the performance of Prove's solutions implementing the PRO methodology.

Bad actors are always coming up with new ways to commit fraud, so we're continuously learning and researching how best to prevent it, developing new products, and creating best practices. It's challenging to keep up with new attack vectors, but understanding how fraud is carried out puts Prove in a great position to stop it.

(1) "New FTC Data Show Consumers Reported Losing Nearly $8.8 Billion to Scams in 2022", FTC.gov, February 23, 2023, https://www.ftc.gov/news-events/news/press-releases/2023/02/new-ftc-data-show-consumers-reported-losing-nearly-88-billion-scams-2022
