
Mastering A/B Testing in iGaming with GR8 Tech

15.02.24

10 minutes to read

Anton Shmerkin

Digital creator specializing in iGaming

Who would have thought that A/B testing would start with beer? And yet it did: at the dawn of the 20th century, William Sealy Gosset, a pioneering statistician, ran experiments on Guinness's renowned stout production line, making minor, measured modifications and comparing the outcomes between two distinct groups, thereby laying the groundwork for modern A/B testing techniques.

It took almost a hundred years before Google engineers ran the first modern A/B test in 2000, to determine the optimal number of links on a search results page. Fast-forward to today: A/B testing has become paramount in virtually every industry, but it holds a special position in iGaming. By comparing different gaming elements, A/B testing supports data-driven decisions, helping to refine user experience, increase engagement, and optimize game features to meet the evolving tastes of players.

We asked Oleksandr Fialkovskyi, the Data Analysis Team Lead at GR8 Tech, to delve into the intricacies of A/B testing in the iGaming industry, uncovering how this powerful tool’s performance reshapes game development and player experience. Oleksandr brings a wealth of knowledge from his extensive experience, providing an in-depth look at the cutting-edge techniques and innovative approaches to A/B testing GR8 Tech employs to refine and perfect its product.

So, what is A/B testing all about?

Basically, A/B testing, or split testing, is how we compare two versions of a webpage, app, or other digital asset to determine which performs better for our target audience. Think of it as an experiment in which two or more variants are shown to users at random, and statistical analysis determines which variation achieves a specific goal more effectively. It's like having two different designs for a casino game interface: we test to see which one users enjoy more or which leads to more engagement. This method provides valuable insights in iGaming because it helps us tailor our offerings to what players prefer, improving user acquisition, player experience, and profitability.


Oleksandr Fialkovskyi

Data Analysis Team Lead

Can you provide us with an example of an A/B test in iGaming?

In general, “A” and “B” represent two versions of a webpage, product, or feature. In the context of iGaming, ‘A’ could be one version of a game’s interface, while ‘B’ is an alternative version. These versions are tested simultaneously to see which performs better according to predefined metrics. The key here is that each user, based on predefined settings, is randomly assigned once to either version A or version B, rather than being shown a random version on each visit. This ensures that the test group sees the new design in version B for the entire duration of the test, while the control group in version A continuously experiences the old design, allowing for an unbiased comparison of their responses to each version without them knowing there’s a test underway.
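The "assigned once, for the whole test" property is usually achieved by hashing a stable user ID rather than rolling the dice per visit. The sketch below is a minimal illustration of that idea, not GR8 Tech's actual implementation; the function and experiment names are invented for the example.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to a test variant.

    Hashing the user ID together with the experiment name yields a
    stable, pseudo-random bucket: the same user always gets the same
    variant for the whole duration of the test, across every channel
    that shares the same user ID.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the experiment name is part of the hash, each new test reshuffles users independently, so being in group B of one test says nothing about a user's group in the next.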


Could you share some best practices for A/B testing in the gambling industry?

Because our A/B testing operates through our proprietary back-end developments, we have the unique ability to conduct A/B tests across multiple channels. For instance, if a user falls into a test group with video broadcasting disabled, we can implement this change simultaneously in the app and on the website. Beyond this, we also adhere to best practices, such as creating new segments or utilizing existing ones for test splitting to ensure precise targeting. This includes conducting degradation tests to assess the impact of removing features, scheduling tests on a calendar or within a single instance to prevent user crossover, and determining the test duration and group sizes in advance to optimize the testing environment. These practices allow us to refine our approach continuously and ensure that our testing is both effective and efficient, tailored to the specific needs of the gambling industry.


Doesn’t Google do that with Optimize and other free tools? 

Unlike Google's free tools, which fail to recognize the same user across different platforms and therefore have to place them in separate test groups for each tested channel, we can launch A/B tests across various channels simultaneously using our custom workups in Growthbook and some other proprietary solutions. This is crucial because, without this capability, accurate experiment results can't be achieved; the test wouldn't be "clean".

For example, if we're running a degradation test to see how the presence or absence of certain important features affects retention, and this is tested only on the website while the user spends most of their time in the app, the final metrics would be meaningless; the test results aren't "clean" across all channels.

We use a mix of binomial and non-binomial metrics, which means we're looking at both simple yes/no outcomes and more complex measurements.

Then there's the selection of the test type. It's not always a straightforward A/B test; sometimes it's A/B/C, A/B/C/D, or even A/A/B. Each type has its unique application and is chosen based on the specific scenario we're testing. This all explains why an operator working with a platform other than GR8 Tech, one that hasn't invested years of R&D in these capabilities, would not be able to conduct multi-channel A/B tests on a large audience with conversion metrics.
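Multi-variant setups like A/B/C or A/A/B are a small generalization of two-way bucketing: each variant gets a weight, and the user's hash falls into one weighted slot. An A/A/B test deliberately runs two identical control groups, so any systematic difference between them flags a broken split. A minimal sketch under those assumptions (names are illustrative, not GR8 Tech's code):

```python
import hashlib

def assign_weighted(user_id: str, experiment: str, variants) -> str:
    """Bucket a user into one of several weighted variants.

    `variants` is a list of (name, integer_weight) pairs. An A/A/B
    test, for instance, uses two identical control groups plus one
    test group: [("A1", 1), ("A2", 1), ("B", 1)].
    """
    total = sum(weight for _, weight in variants)
    point = int(hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest(), 16) % total
    cumulative = 0
    for name, weight in variants:
        cumulative += weight
        if point < cumulative:
            return name
    return variants[-1][0]  # not reached: the weights always cover `point`
```

The same mechanism covers uneven splits, e.g. routing only 10% of traffic to a risky variant by giving it a small weight.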


Do you leverage any specific platforms or services to enhance your CRM applications and segmentation capabilities?

Indeed, our approach to segmentation and A/B testing involves selecting a base of users with precise identifiers, which is not directly tied to any Amazon Marketing Services. After identifying users who meet our specific criteria, we transfer this information into our CRM system. Our CRM is robust enough to assign these users to appropriate segments, enabling us to conduct highly targeted and flexible A/B tests. For example, we can identify two groups of 1,000 users each, characterized by having made 10 deposits, placed 100 bets, and having no casino bets. These users are then formed into segments within our CRM, facilitating the launch of nuanced tests specifically designed for them. This strategy is particularly effective because it circumvents limitations in platforms like Growthbook, which may not support such direct, detailed segmentation and testing.
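The segment described in that example boils down to a filter over per-user aggregates. A minimal sketch of such a filter, with field names invented for illustration (the real CRM schema is not public):

```python
def eligible(user: dict) -> bool:
    """Criteria from the example above: at least 10 deposits,
    at least 100 sports bets, and no casino bets."""
    return (user["deposits"] >= 10
            and user["sport_bets"] >= 100
            and user["casino_bets"] == 0)

# Toy user aggregates standing in for CRM data.
users = [
    {"id": 1, "deposits": 12, "sport_bets": 150, "casino_bets": 0},
    {"id": 2, "deposits": 3,  "sport_bets": 200, "casino_bets": 0},
    {"id": 3, "deposits": 15, "sport_bets": 120, "casino_bets": 4},
]
segment = [u["id"] for u in users if eligible(u)]  # only user 1 qualifies
```

In practice this filter would run as a query over the analytics warehouse, and the resulting ID list would be pushed into the CRM as a named segment.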

What are some typical aspects of GR8 Tech software that you might choose for A/B testing, and why are they significant?

The pool here is quite wide and deep. For example, we might test a new design layout to see if it enhances player engagement or retention. Changes to the betslip functionality are also a common focus, where we assess how modifications might impact betting behavior or user satisfaction.

Other key areas for A/B testing we perform include recommendation models, which personalize game suggestions for each player, and alterations in our bonus program, where we evaluate different reward structures and their effects on player loyalty and activity.

Players' onboarding process is another critical element. Here, we can test different approaches to see which is more effective in introducing new players to our platform and encouraging them to start playing.

Custom margin adjustments are also frequently tested. These modifications in betting odds or payouts can significantly impact betting patterns and overall profitability, making them vital components for A/B testing.

One crucial aspect we're always mindful of is avoiding test intersections. This means ensuring that our tests don't overlap in a way that could skew the results. If we're testing different aspects of the same product, we use distinct user groups or test spaces.

Now, on the technical side, we calculate the minimum percentage of users needed and determine how long each test should run, given the statistical significance required for each metric. This is a delicate balance: we need enough accurate data without overextending the test duration.
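For a conversion-style (binomial) metric, the minimum group size can be estimated with the standard two-proportion z-test power formula. This sketch uses only the Python standard library and is a textbook approximation, not the exact procedure GR8 Tech uses:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p_base: float, mde: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Minimum users per group to detect an absolute lift `mde` on a
    baseline conversion rate `p_base`, via the two-sided z-test
    formula for comparing two proportions."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the significance level
    z_beta = z.inv_cdf(power)            # critical value for the desired power
    p1, p2 = p_base, p_base + mde
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / mde ** 2)
```

Detecting a 2-point lift on a 10% baseline needs a few thousand users per group, and halving the detectable lift roughly quadruples the requirement, which is why duration and group sizes have to be fixed before launch.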

Customizing test conditions is also vital. We tailor tests based on various factors like operator, brand, and user segments. Plus, we make sure to test on specific app versions or during particular bonus campaigns. Before we launch, we verify everything with test data to ensure system accuracy.


What models do you use to segment users in the context of A/B tests to help randomize samples based on different criteria?

It's advisable to minimize the impact of anything new on the VIP segment unless a feature is specifically designed for them. Additionally, ‘bad’ segments (fraudsters, ‘bonus hunters’, and others) can negatively affect our financial metrics, so we recommend excluding them as well. There's also the option of removing outliers to minimize their impact on test groups. Typically, we cut at the 95th percentile, meaning the 5% of users who generate the highest turnover and are included in the test are excluded for a more objective evaluation.
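The 95th-percentile cut can be sketched in a few lines of standard-library Python. This uses the simple nearest-rank percentile definition; the exact percentile method in production tooling may differ:

```python
import math

def percentile(values, q):
    """Nearest-rank percentile; q is in (0, 100]."""
    ranked = sorted(values)
    k = math.ceil(q / 100 * len(ranked)) - 1
    return ranked[k]

def trim_top(values, q=95):
    """Drop every value above the q-th percentile, i.e. the top-turnover
    tail that would otherwise skew the test groups' means."""
    cutoff = percentile(values, q)
    return [v for v in values if v <= cutoff]
```

On 100 users with turnovers 1 to 100, the cut keeps users up to turnover 95 and drops the five heaviest, exactly the "exclude the top 5%" rule described above.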


How do you determine statistical significance during A/B testing, and what methodology do you use?

Surely, you can’t use all the data you collect; it’d be a mess. What do you keep, and what goes in the bin?

True. We primarily use the bootstrap method, alongside a number of other tools, including some proprietary software modules, to focus on what’s important. Here's how it works: first, we select a specific metric as our focal point. This could be anything from click-through rates to player retention rates; the choice depends on what aspect of the game or platform we are testing and what outcome we aim to measure.

Once we've defined our metric and made sure enough data has been collected from the test, we run a specialized script, typically written in Python. This script provides further data and critical indicators, such as the win chance and the odds for the control and test variants.

We usually look for a win chance that exceeds 90% to consider a result statistically significant. However, reaching this threshold doesn't automatically mean we implement the change. The final decision rests with the product manager, who will weigh the statistical data against other factors like user feedback, market trends, and business objectives.
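A "win chance" of this kind can be estimated by bootstrap resampling: resample both groups with replacement many times and count how often the variant's mean beats the control's. This is a generic sketch of the technique, with synthetic data, not GR8 Tech's proprietary script:

```python
import random

def bootstrap_win_chance(control, variant, n_boot=2000, seed=7):
    """Share of bootstrap resamples in which the variant's mean beats
    the control's mean: the 'win chance' referred to above."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        c = [rng.choice(control) for _ in control]
        v = [rng.choice(variant) for _ in variant]
        if sum(v) / len(v) > sum(c) / len(c):
            wins += 1
    return wins / n_boot

# Synthetic binary retention outcomes: 1 = retained, 0 = churned.
control = [1] * 40 + [0] * 60   # 40% retention
variant = [1] * 52 + [0] * 48   # 52% retention
win_chance = bootstrap_win_chance(control, variant)
```

The same code handles non-binomial metrics (turnover, session length) unchanged, since it only compares group means, which is one reason bootstrap methods are convenient for mixed metric sets.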

In essence, while statistical significance is a vital indicator in our A/B testing process, it's part of a larger decision-making framework that considers a range of qualitative and quantitative factors. Seasonality, for example, is crucial when testing our sportsbook, because player behavior can vary greatly depending on the sports season. This approach ensures that our modifications and updates are not just statistically valid but also aligned with our broader business goals and user expectations.


A/B test example from GR8 Tech's Growthbook

Could you clarify how AI contributes to your testing processes and how it differs from other A/B testing tools?

Certainly; it's important to distinguish between the roles AI and other tools play in our A/B testing. At GR8 Tech, AI is primarily used for developing predictive models and recommendation engines and for segmenting users based on machine learning algorithms. These AI-driven methodologies help us understand our users better and tailor our offerings more precisely.

On the other hand, for the execution and analysis of A/B tests, we rely on tools such as Growthbook, Bootstrap, Tableau, and certain proprietary techniques. These tools are not AI-based but are crucial for the practical aspects of running tests, such as setting up test parameters, collecting data, and analyzing results. While AI plays a significant role in informing our strategies and enhancing user experiences, the actual A/B testing process is handled by these specialized tools under careful human oversight to mitigate the limitations and potential errors inherent in any automated system.


Let’s Make Your iGaming Business Great

A clear understanding of the role of A/B testing in optimizing your iGaming platform is critical. We at GR8 Tech offer comprehensive solutions that allow exhaustive testing across all facets of your business and unlock your platform's full potential.

Contact the GR8 team