Using the Analyze Page

The Sift Science Console's Analyze Graph is a powerful tool that can help your team assess the health of your fraud-fighting practices. As your business scales, you may be left pondering questions like, “Should I be blocking lower?” or “Can I review fewer users without affecting the bottom line?”

Note:

Before diving in: Sift Science's ability to detect and prevent fraud relies on your team's input in the form of Feedback. The more information you send, the better our Machine Learning model gets, so please send Feedback for a few weeks before digging into Analyze. Refer to this Sift EDU document for more details.

Review Your Existing Fraud Practices

Using Analyze, you can review your existing policies or experiment with thresholds to find the optimum risk acceptance for your organization. As a Sift Science Console Admin, navigate to the Analyze tab to begin:

 

First, choose a date range that you want to review activity in. A good starting point is the last 30 days of activity:

A graph will be created displaying all events that happened within the selected date range. You can choose to display Transaction, Create Order, Create Account, Create Content, or Add Promo events that you have sent to Sift. It's most meaningful to choose the event where you automate, or wish to automate, some or all of your fraud decisions. For the purpose of this example, the following graph is based on the '$transaction' event sent to Sift Science via the Events API. A few points of interest to dissect:
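For reference, here is a minimal sketch of what a '$transaction' event body might look like. Field names follow the Sift Events API's '$'-prefixed reserved-field convention; the exact set of required fields depends on your integration, and the key and IDs below are placeholders, not real values.

```python
import json

def build_transaction_event(api_key, user_id, amount_micros, currency_code):
    """Assemble a minimal $transaction event body (illustrative, not a full schema)."""
    return {
        "$type": "$transaction",         # the event name that appears in Analyze
        "$api_key": api_key,             # your Sift REST API key (placeholder here)
        "$user_id": user_id,             # the user this event belongs to
        "$amount": amount_micros,        # Sift amounts are in micros: 1 USD = 1,000,000
        "$currency_code": currency_code,
    }

event = build_transaction_event("YOUR_API_KEY", "user_123", 50_000_000, "USD")
print(json.dumps(event, indent=2))       # this JSON body would be POSTed to the Events API
```

Each such event contributes one point to the graph for its date and score.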

The blue path represents all events of the given type from users who have not been labeled, or who have been labeled 'Not Bad'. The red path represents all events of the given type from users who have been labeled 'Bad' by your team or through the Labels API. Note that the two y-axis scales differ because the number of Bad users is typically much smaller than the number of good users, and you're unlikely to have time to label every Bad user.

 

Displayed below your Analyze Graph are three pie charts: Accept, Risky, and Reject. Moving the score sliders on the Analyze Graph updates the statistics within each chart; the large percentage in each chart is that bucket's share of all events, while the small percentage shows what percent of those events were labeled bad.

In the example above, the sliders are set at thresholds of 30 and 60. The 'Accept' pie chart shows statistics for all events with a score below 30, the 'Risky' pie chart shows statistics for all events with a score between 30 and 60, and the 'Reject' pie chart shows statistics for events with a score above 60.
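The slider logic amounts to a simple three-way bucketing by score. A minimal sketch using the thresholds from the example (30 and 60); how exact boundary scores are bucketed is an assumption here, so adjust to match your own policy:

```python
def bucket(score, risky_low=30, reject_high=60):
    """Map a Sift Score (0-100) to one of the three Analyze buckets."""
    if score < risky_low:
        return "Accept"      # below the lower slider
    if score <= reject_high:
        return "Risky"       # between the sliders
    return "Reject"          # above the upper slider

# e.g. bucket(10) -> "Accept", bucket(45) -> "Risky", bucket(75) -> "Reject"
```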

Hovering over each pie chart gives even greater detail. In this case, events with a score of 30 to 60 represented 9.4% (21,947) of all events, and 1.52% (334) of those 21,947 came from users labeled 'Bad' by your Fraud Team or through the Labels API.

Note that 1.52% is likely not the total amount of fraud in this bucket unless your team has reviewed all the users. In the example above, if a review team reviewed one out of every ten transactions in this range and ultimately labeled 15.2% of the reviewed users Bad, that would produce the 1.52% figure (15.2% * 1/10 of users reviewed = 1.52%). So the team could conclude that, had they reviewed every transaction in the score range, roughly 15.2% would likely have come from fraudulent users.
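That arithmetic can be made concrete. A small sketch reproducing the numbers above (1-in-10 sampling, 15.2% of sampled users labeled Bad):

```python
sample_fraction = 1 / 10        # review team samples one in ten transactions
bad_rate_in_sample = 0.152      # 15.2% of reviewed users ultimately labeled Bad

# What Analyze displays: labels only exist for the sampled users,
# so the labeled-bad share of the whole bucket is diluted by the sample rate.
observed_labeled_bad = bad_rate_in_sample * sample_fraction        # 1.52%

# Extrapolating back: divide by the sample fraction to estimate
# the true fraud rate had every transaction been reviewed.
estimated_true_bad_rate = observed_labeled_bad / sample_fraction   # 15.2%
```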

Getting the most out of this page also requires understanding your integration. For example, if you authorize a credit card when a user adds one to their account, the chart above does not distinguish those authorizations from auth-and-capture transactions at order time. If you want to decide where to block orders, looking at '$create_order' makes more sense.

Move the sliders to see where the fraud is happening - what scores are typically fraudulent for your business?

Understanding What's Right for You

When you start battling Fraudsters using Sift Science, the first step is to send Sift feedback on which users are good and which are fraudulent (discussed above). The Analyze Graph will show how your feedback is helping Sift Science better predict, over time, which users are Fraudsters.
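For reference, that feedback travels through the Labels API mentioned above. A minimal sketch of a label body; '$is_bad' is the core field, while the '$abuse_type' value shown is an assumed example, so check Sift's Labels API documentation for the fields your integration needs:

```python
def build_label(api_key, is_bad, abuse_type="payment_abuse"):
    """Assemble a minimal Labels API body (illustrative, not a full schema)."""
    return {
        "$api_key": api_key,         # your Sift REST API key (placeholder)
        "$is_bad": is_bad,           # True labels the user Bad, False Not Bad
        "$abuse_type": abuse_type,   # the kind of abuse this label describes
    }

label = build_label("YOUR_API_KEY", is_bad=True)
# This body would be POSTed to the Labels API endpoint for the user being labeled.
```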

Over time your Fraud Team will get a sense of which score thresholds indicate fraudulent activity. For example - you may notice that all users with Sift Scores above 80 are Fraudsters, which is a sign that the Sift Science model has learned what signals and behavior are associated with Fraudsters specifically for your business. Find more information about the Sift Score and Signals here and here.

Ultimately, it's important to answer the following questions:

  • What percentage of events or users do you believe is fraudulent?
  • What percentage of events or users is your team comfortable blocking using Automation?

Looking at the example, setting an automated Decision to block all users above a score of 60 means blocking 4.88% of total transactions for the selected time period. Whether this percentage is too high or too low depends on your specific business: how much fraud you see, how much an instance of fraud hurts your business compared to an instance of a good user wrongfully blocked, and so on.
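One way to reason about "how much an instance of fraud hurts compared to a wrongly blocked good user" is a simple expected-cost comparison. All the counts, rates, and dollar figures below are hypothetical placeholders; plug in numbers from your own Analyze pie charts:

```python
def net_benefit_of_blocking(blocked_events, est_bad_rate, fraud_loss, false_positive_cost):
    """Estimated net benefit of auto-blocking a score bucket:
    fraud losses avoided minus the cost of good users wrongly blocked."""
    fraud_blocked = blocked_events * est_bad_rate
    good_blocked = blocked_events * (1 - est_bad_rate)
    return fraud_blocked * fraud_loss - good_blocked * false_positive_cost

# Hypothetical: 11,000 events above the slider, est. 40% fraudulent,
# $80 average loss per fraud vs. $5 cost per wrongly blocked good user.
net = net_benefit_of_blocking(11_000, 0.40, 80, 5)
# net > 0 suggests blocking this bucket pays off under these assumptions
```

Lowering the threshold raises both the fraud caught and the good users blocked, so recomputing this at each candidate threshold shows where the trade-off turns against you.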

If you're not sure what percentage of your traffic is fraudulent, you can start with a very high threshold to be conservative. Then, use Analyze and Review Queues or Lists to gauge whether this strategy is effective for your business.

Conclusion

The Analyze Graph can be invaluable for understanding how effective your automation strategy is. Seeing how the Sift Score predicts fraud over time lets your team spend less time on manual review and more time on your business. The combination of sending Feedback and reviewing your automation using Analyze will help get you there.

 

 
