Dropouts’ usage of a responsible gambling tool and subsequent gambling patterns

Abstract: Responsible gambling measures are mainly implemented by the gambling industry to reduce excessive gambling and gambling-related harm. These measures include responsible gambling tools that target online gamblers, typically through behavior tracking, feedback, and, in some cases, advice on how to reduce gambling. Playscan is a responsible gambling tool implemented at gambling sites in several countries, with many users in Norway and Sweden. Previous studies have indicated that these tools see limited repeated use and have a low effect on decreasing gambling behavior. Our aim was to investigate the usage and effect of Playscan among Norwegian gamblers (N = 835) who began to use Playscan and then opted out. These gamblers showed high initial use, but an extensive lack of repeated use, of the functions included in the tool (secondary data were used). The majority of the gamblers used Playscan for a short period of time. The results indicate that the participants did not gamble less after using Playscan (gambling data analyzed using ANOVA). One hypothesis is that short-term use of Playscan does not decrease the level of gambling for this sample. Also, low-risk gamblers seem to have increased their gambling after using Playscan. The results imply that level of use and length of use need to be taken into account when evaluating the effect of responsible gambling tools. The low level of use in this sample and in other studies implies that strategies to increase use are needed.

ABOUT THE AUTHOR
David Forsström is a psychologist and has a PhD in clinical psychology. His research focuses on gambling from different perspectives. He has carried out studies investigating the use of responsible gambling tools and also studies focusing on gambling prevention in general. He is also involved in a project investigating the relationship between gambling and crime, and he is the principal investigator in a project about betting on e-sports. The current study is related to David Forsström's previous work on the use of responsible gambling tools.

PUBLIC INTEREST STATEMENT
Different strategies are available for online gamblers to decrease gambling, and some are supplied by gambling companies. Responsible gambling tools are one feature available to gamblers. Their goal is to decrease gambling and limit harm. Playscan is one of the tools available. It makes a risk assessment based on gambling data and self-assessed negative consequences of gambling, supplies feedback based on the assessment, and provides advice on how to decrease gambling. Previous investigations have shown that these tools do not decrease gambling behavior to a high degree and have a low degree of use. The aim of the study was to investigate use and effect among short-time users, to explore how length of use, overall use, and effect were connected. The results showed a low degree of use and no decrease in gambling behavior, implying that use and effect are connected. The main implication is of a methodological nature: the level and length of use need to be taken into account when investigating whether the tools decrease gambling.

Introduction
Responsible gambling (RG) has been defined as policies and practices that reduce the potential harmfulness of gambling (Blaszczynski et al., 2011; Blaszczynski, Ladouceur, & Nower, 2008; Blaszczynski, Ladouceur, & Shaffer, 2004). Several types of measures are available to prevent gamblers from excessive gambling, such as self-exclusion, limit-setting, RG tools (e.g. Playscan and Mentor), and pop-up messages. Most of these features are provided by gambling companies on online gambling sites. An umbrella review found no or low effect for many of the available RG measures. However, the review concluded that there was a lack of empirical studies (McMahon, Thomson, Kaner, & Bambra, 2019). Alderson (2004) argues that "the absence of evidence does not mean the evidence of absence". In the context of research on RG measures, much needs to be done to provide information about the effect of the tools and their ability to limit harm (especially negative economic consequences).
Our study attempts to increase knowledge about the effect of RG tools and, in doing so, address the lack of studies in the field of RG research and, in particular, RG tools. The main idea behind RG tools is that a risk assessment, feedback, and advice on how to reduce gambling behavior will result in decreased gambling and, thus, decreased harm. The risk assessment is based on gambling data. Studies have shown that it is possible to adequately determine risk and predict other behaviors, such as self-exclusion (Adami et al., 2013; Braverman & Shaffer, 2012; Dragicevic, Percy, Kudic, & Parke, 2013; Dragicevic, Tsogas, & Kudic, 2011; Haefeli, Lischer, & Haeusler, 2014; Haefeli, Lischer, & Schwarz, 2011; Haeusler, 2016; Percy, França, Dragičević, & d'Avila Garcez, 2016; Philander, 2013). RG tools are similar to personalized feedback interventions (PFI) aimed at helping gamblers. However, the main difference is that RG tools are usually supplied by gambling companies and used online at gambling sites, while PFI in gambling is based on self-report data and supplied in the context of gambling studies. Thus, the two interventions are not directly comparable.
There are currently several RG tools (e.g. Playscan and Mentor) in use and connected to gambling sites. Research by Wood and Wohl (2015) focusing on Playscan, and by Auer and Griffiths (2015) focusing on Mentor, has investigated the effect of RG tools on gambling behavior. These studies found that RG tools decrease gambling behavior for certain risk groups, but the effect was low according to the benchmarks by Cohen (1988). Even though the individual studies report a positive effect, the umbrella review mentioned earlier did not conclude that the intervention studies had an effect on an overall level (McMahon et al., 2019). There are several possible explanations for the lack of outcomes in the studies by Wood and Wohl (2015) and Auer and Griffiths (2015). These studies do not stipulate how much a gambler needs to use a tool for it to be effective, nor did they investigate how long a gambler has to use the tool for it to affect their gambling. Also, the matching between the intervention group and the control group was not based on how long the gamblers had been members of the gambling site.
Studies that have investigated motivations for using RG tools, and their actual use, might shed some light on why the tools lack effect. Griffiths, Wood, and Parke (2009) carried out a survey among gamblers at the Svenska Spel (the Swedish state-owned gambling company) gambling site and also asked questions about the use of Playscan. They found that those who used Playscan had signed up out of curiosity. Most of the users were pleased with the tool, but did not report any changes in their gambling after joining the program. Also, Forsström, Hesser, and Carlbring (2016) found that Playscan had a high initial usage rate of its different functions, but the users did not utilize the functions of the tool repeatedly. The most widely used feature was the self-test GamTest (Jonsson, Munck, Volberg, & Carlbring, 2017), which establishes the level of negative consequences that a user experiences from gambling. Forsström et al. (2016) also identified five different user classes. According to a latent class analysis, two classes used the tool to a greater extent: "multifunctional users" (who used all the functions available in the tool) and "self-testers" (who answered multiple self-tests). These classes had a higher risk of developing gambling problems before using the tool than the groups that used the tool to a lesser extent. The participants in the study had volunteered to use Playscan, but 7.9% did not use the tool at all. A qualitative study by Forsström, Jansson-Fröjmark, Hesser, and Carlbring (2017) revealed that most users were satisfied with the risk assessment provided by the tool. The self-test (GamTest) was perceived by the interviewees as an important function and as the basis for the risk assessment. The self-test served as a gateway to using the tool and was one of the features used by a majority of the participants. However, the participants reported a low frequency of repeated use (Forsström et al., 2017), as did Forsström et al. (2016).
The relatively low effect of the tool Playscan might be partially explained by the low use among gamblers. For example, a study by Caillon et al. (2019) found that short-term (one week) self-exclusion did not change gambling behavior, indicating the need to empirically investigate both short-term and long-term use of all RG measures and, in particular, RG tools, because these tools require a larger investment of time than, for example, self-exclusion. In short, there might be a dose-effect relationship that needs to be explored: increased use may increase the effect of the tool and, as a consequence, low use may result in a low effect. There is a possibility that a threshold of use must be passed to achieve an effect from RG tools. Our study tries to contribute a perspective that might be a small step in an attempt to figure out if and how RG tools can decrease harm for gamblers.
Furthermore, Playscan (and other RG tools) do not come with guidelines for use on the gambling sites where they are available. With over a million users of Playscan, and the fact that Playscan is mandatory on some gambling sites and voluntary on others, it is important to investigate the different types of populations that use RG tools in order to understand how gamblers use them under different conditions. Our results, combined with those of other studies, can be used to inform gambling companies about how to promote the use of RG tools in an effective way. Hopefully, our study might contribute knowledge that will improve the effect and increase the use of RG tools, which in turn would benefit a large number of Nordic gamblers and subsequently decrease harm.

Aim
This study had a twofold aim. As a first step, we described the level of use of Playscan among Norwegian gamblers (customers at Norsk Tipping) who started and then stopped using Playscan. This is a partial replication of the study by Forsström et al. (2016). We aimed to explore these users' utilization of the different functions included in the tool; to analyze the number of members who answered and completed the self-test; and to establish the extent to which the users utilized the advice function supplied by Playscan. We also investigated the distribution of risk ratings and the time elapsed before the user left the Playscan program. However, in Forsström et al. (2016) the different user classes were linked with risk level ratings; this part is not replicated in our study. The second part of the aim was to explore whether gamblers decreased their losses and time spent on gambling after using Playscan for a short period of time.

Description of the responsible gambling tool Playscan
Playscan is intended to decrease gambling for at-risk gamblers at gambling sites affiliated with the tool. Motivational Interviewing (Miller & Rollnick, 2002) and the Stages of Change model (Prochaska, DiClemente, & Norcross, 1993) constitute the theoretical background for the tool. Playscan has three components to promote behavioral change. The first component is a risk assessment, based on the user's gambling history and different markers of excessive gambling, such as night owling (staying up late at night to gamble) and chasing losses (trying to win back losses in an irrational manner). How much time and money are spent on gambling is also part of the overall assessment. The assessment also factors in the overall result of a 16-item instrument (GamTest) that covers different negative consequences of gambling. Some of the questions in the instrument are similar to those in the Problem Gambling Severity Index (Ferris & Wynne, 2001), but GamTest also contains several questions about time and money spent on gambling (self-reported). Gambling history and the self-test results carry equal weight in the overall result of the risk assessment. The assessment has three different risk levels: green, yellow, and red. The green level means that the user has a low risk of developing gambling problems, the yellow level signifies moderate risk, and the red level equals high risk. The second component is feedback on the risk assessment to the gambler, communicated via a messaging service built into the tool. The risk assessment is updated every week, so the users can track any change in their risk level on a weekly basis. The user has to log in to the tool to access the assessment, but can see whether there is a message, indicating a change in risk level, in the inbox when logging in to the gambling site. Importantly, if the users have not gambled on the site during the period for the risk assessment, they will not be assigned a risk level (color).
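As a toy illustration of the equal-weight combination and the three color levels described above: the actual Playscan scoring model is proprietary, so the scores, thresholds, and function below are purely hypothetical.

```python
# Illustrative sketch only: the real Playscan risk model is proprietary.
# It shows how a gambling-history score and a GamTest (self-test) score,
# each normalized to 0-1, could be combined with equal weight and mapped
# to the three risk colors. The 0.33/0.66 cut-offs are invented.

def risk_level(history_score: float, gamtest_score: float) -> str:
    """Combine two 0-1 risk scores with equal weight and map to a color."""
    combined = 0.5 * history_score + 0.5 * gamtest_score
    if combined < 0.33:
        return "green"   # low risk of developing gambling problems
    if combined < 0.66:
        return "yellow"  # moderate risk
    return "red"         # high risk

print(risk_level(0.2, 0.1))  # low on both scores -> "green"
print(risk_level(0.9, 0.8))  # high on both scores -> "red"
```

Note that in the study's own analysis (see the Variables section) the risk assessment was based solely on gambling history, without the self-test component.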
After receiving the feedback, users can choose to receive advice (the third component) on how to limit their gambling. The advice covers different ways of reducing gambling behavior such as setting a budget or taking a break from gambling via self-exclusion.
Playscan is currently in use at the Swedish and Norwegian state-owned gambling companies, available on Svenska Spel's gambling site and at Norsk Tipping. It is also available to users of Miljonlotteriet (a Swedish lottery company) and La Française des Jeux (a French gambling company). Some of the gambling sites have implemented the tool as a mandatory feature, while others have decided to make the tool available, but not mandatory. Additional information about the tool is available in Forsström et al. (2016) and on the Playscan website www.playscan.com.

Norsk Tipping
Norsk Tipping is a state-owned Norwegian gambling company, which holds the monopoly on gambling in Norway. It supplies land-based and online gambling opportunities to customers based in Norway. Norsk Tipping has used Playscan as an RG measure on its gambling site since 14 January 2014. To begin with, Playscan was not mandatory for Norsk Tipping's online customers, but it has been mandatory for all customers since March 2015.

Procedure
Data were collected from Playscan dropouts (users who had started to use the tool and then stopped using it) during the time that participation in Playscan was voluntary for Norwegian gamblers. The data collection period was from 14 January 2014 to 19 March 2015 (when Playscan membership became mandatory for users of the Norsk Tipping gambling site). To join Playscan during the inclusion period, the gamblers had to sign up and were then asked to take the GamTest. To leave Playscan and opt out, the gamblers had to log on to Playscan, visit a separate web page in the tool's interface, and press a button to leave Playscan. A "dropout" was defined as a gambler who had voluntarily signed up to use the tool and then left Playscan, regardless of how long the user was part of the tool during the inclusion period. Activation time was defined as the time period from when the gambler joined to the time point when he/she chose to leave the tool. Actively joining and leaving the tool was only possible before membership became mandatory for users of the Norsk Tipping site. Some users joined and left Playscan several times during the inclusion period. These users were included in the study, but for them the period between their first and last use of the tool counted as their active user period. All gambling data and risk-level data were collected for the two weeks before the gambler started using Playscan and the two weeks after leaving Playscan. We also collected secondary data on how the gamblers used Playscan during the activation time and for how long they used it. The primary and secondary data were supplied by Norsk Tipping and Playscan. A control group was not included in the study: several previous studies investigating RG tools found that the intervention group had a significant decrease in gambling behavior compared to the controls, and that the change in gambling behavior for the control group was small. Therefore, a control group was not deemed necessary in our study.

Variables
We used both primary and secondary data. Primary data refers to the gambling history (including all the gambling activities carried out by the online gamblers at Norsk Tipping) and to the risk level, while secondary data pertains to the use of features in Playscan (including answering the self-test and use of the advice function).
The length of time (activation time) the gamblers had been part of Playscan was defined as the time between when a user started to use Playscan and when the user chose to leave the tool. The number of times the users had started and completed the self-test in the tool was also used as a variable, as was the number of times advice was requested by the users. The risk assessment used in the study was based solely on the users' gambling history; it was not the combination of the users' gambling history and self-test results. Another variable was the total amount of money that the users spent on gambling (net loss) in the 14 days before they joined Playscan and the 14 days after they had elected to leave Playscan. The number of days gambled was counted using the same time frame.
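The two gambling-intensity variables can be sketched as a simple aggregation over transaction records. This is a hypothetical illustration, not Norsk Tipping's actual data schema: the record layout and identifiers are invented, and only the definitions (net loss = stakes minus winnings; days gambled = distinct calendar days with activity in the 14-day window) come from the text.

```python
# Hypothetical sketch of the outcome variables: per-user net loss and
# distinct days gambled within one 14-day window. Field names and user
# IDs are illustrative only.
from collections import defaultdict
from datetime import date

# (user_id, day, amount_wagered, amount_won)
transactions = [
    ("u1", date(2014, 2, 1), 100.0, 40.0),
    ("u1", date(2014, 2, 1), 50.0, 0.0),
    ("u1", date(2014, 2, 3), 20.0, 60.0),   # a winning day
    ("u2", date(2014, 2, 2), 10.0, 0.0),
]

net_loss = defaultdict(float)
days_gambled = defaultdict(set)
for user, day, wagered, won in transactions:
    net_loss[user] += wagered - won    # net loss = stakes minus winnings
    days_gambled[user].add(day)        # distinct calendar days gambled

print(net_loss["u1"], len(days_gambled["u1"]))  # 70.0 2
```

In the study, these two quantities were computed once for the 14 days pre-activation and once for the 14 days post-deactivation, per user.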
Fourteen days was chosen as a time frame because it had been successfully used in a previous study that investigated the effect of the RG tool Mentor (Auer & Griffiths, 2015). It is also plausible that the effect of Playscan wears off if one does not use it for a period of time. It was therefore important to choose a time frame in which the tool was likely to have an effect on gambling behavior.

Participants
The participants/dropouts (N = 835) were users who chose to opt out of Playscan during the inclusion period. There were 652 males (78.1%) and 183 females (21.9%) in the sample. The mean age of the 835 participants was 45.7 years (SD = 12.8). The mean age of the men in the sample was 45.8 years (SD = 13.2), and the mean age of the women was 45.5 years (SD = 11.5), which is similar to the participants in Forsström et al. (2016). The majority of the participants played lotto or bet on sports (see Table 1).

Data analysis
We used IBM SPSS V.25 and R V.3.4 (R Development Core Team, 2018) to carry out the analysis. As a first step, the secondary data were structured on the basis of how many times each function was used by a participant. The outcome measures were the frequency of usage (number of occasions that advice had been sought, and self-tests both started and completed); the number of minutes that a dropout actively used Playscan; risk level; and gambling intensity. Gambling intensity was measured using the dropouts' net loss, which was calculated by subtracting the users' winnings from the amount they wagered. As mentioned, we also measured gambling intensity by comparing the number of days played before activation and after deactivation.
To test for changes in gambling behavior between the 14 days pre-activation and the 14 days post-deactivation for the different user categories, we conducted a series of within-subject analyses of variance (ANOVA). The Shapiro-Wilk test of normality (Shapiro & Wilk, 1965) showed that the dependent variable (gambling data) had a significantly non-normal distribution (W ranging between 0.84 and 0.95, all p < .01) for all risk categories, both pre-activation and post-deactivation. Some players spent disproportionately large amounts of money compared to the mean, which positively skewed the distributions. However, recent investigations (Schmider, Ziegler, Danay, Beyer, & Bühner, 2010) have shown strong support for the continued application of ANOVA to non-normally distributed data. Also, several recent studies (Celio & Lisman, 2014; Wood & Wohl, 2015) investigating personalized feedback aimed at gamblers have used ANOVA as an analytic strategy. Based on this, the decision was made to use ANOVA for the analysis.
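The analytic pipeline can be sketched as follows. This is an illustrative re-implementation on simulated data, not the study's SPSS/R code: the right-skewed losses are generated with a lognormal distribution, and with only two repeated measurements a within-subject ANOVA is equivalent to a paired t-test (F = t²), which the sketch exploits.

```python
# Sketch of the pre/post analysis described above, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated net losses for one risk group: 14-day pre-activation and
# 14-day post-deactivation totals (lognormal, i.e. positively skewed,
# mimicking the skew reported for the real spending data).
pre = rng.lognormal(mean=3.0, sigma=1.0, size=200)
post = rng.lognormal(mean=3.1, sigma=1.0, size=200)

# Shapiro-Wilk test of normality, as reported for the gambling data.
w_pre, p_pre = stats.shapiro(pre)

# Two-timepoint within-subject ANOVA is equivalent to a paired t-test.
t, p = stats.ttest_rel(pre, post)
F = t ** 2

# Partial eta-squared effect size from F and its degrees of freedom.
df_effect, df_error = 1, len(pre) - 1
eta_sq = (F * df_effect) / (F * df_effect + df_error)
print(f"W={w_pre:.3f}, F={F:.2f}, p={p:.3f}, eta^2={eta_sq:.4f}")
```

An eta-squared around 0.01, like the 0.0098 reported for the green group in the Results, would count as a small effect by Cohen's (1988) benchmarks.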

Ethical considerations
The data files were based on ID tags created at random, guaranteeing the anonymity of the participants. When joining the tool, the participants consented to having their anonymized gambling and behavioral data analyzed. The researchers had no possibility of identifying or contacting the players, which is in line with previous work that used the same procedure to avoid violating the players' privacy (Gainsbury, 2011). Also, before the release of this particular dataset, Datatilsynet [The Norwegian Data Protection Authority] gave clearance that de-identified data could be used for research purposes.

Usage of the different functions

Table 2 displays the usage frequencies for the advice function, started self-tests, and completed self-tests. Answering a self-test was the most widely used feature, but almost no one in the sample requested advice on how to reduce their gambling.

Time spent as a user of the tool
The mean time spent as a user of the tool for the entire group was 10,484.0 minutes (SD = 28,834.5), which is approximately 7.3 days.

Risk level before participation
The distribution of risk level before joining Playscan was as follows: 576 green users (69.1%), 83 yellow users (9.9%), and 77 red users (9.2%). The number of users who had not gambled enough to be eligible for a risk assessment was 99 (11.8%). For users who were members of Playscan for more than one hour, the distribution of risk levels was: 106 green users (58.9%), 35 yellow users (19.4%), and 31 red users (17.2%). In this group, there were also 8 users (4.4%) who were not eligible for a risk assessment. In total, 180 participants used the tool for more than one hour.

Net loss before and after using the tool

Table 3 displays the means and SDs of the actual losses during the two weeks before Playscan activation and during the two weeks after Playscan deactivation for each of the user groups.
Significant changes in net loss were found for the green users: the green group gambled more money after using Playscan. However, the effect size was only 0.0098. For the other risk groups there were no significant changes. The details of the conducted ANOVAs are presented in Table 4. Note. n = 99 users without a color category, n = 576 green users, n = 83 yellow users, n = 77 red users. SD: standard deviation.

Number of days gambled before and after the use of Playscan
Yellow users did not reduce the number of days played. The users that had gambled too little to receive a risk assessment did not show an increase in the days gambled, while the green users did. The red users had a decrease in the number of days played (see Tables 5 and 6).

Discussion
This study set out to explore Norwegian Playscan dropouts' use of the tool and potential changes in their gambling behavior. The dropouts almost exclusively used the self-test feature of the tool; the advice function was used by only a few in the sample. One finding was that there was no decrease in gambling losses or in the number of days played for most risk groups when comparing the 14 days before joining with the 14 days after leaving the tool. Furthermore, gamblers with a low risk rating (green) gambled more after having used the tool. The results can be interpreted as indicating that short-term, low-frequency use of an RG tool does not decrease gambling behavior. While this might only be true for this particular sample, it might serve as a starting point for future studies.

Use of the tool
Many users started (90.4%) and completed (79.1%) the first self-test they answered. This is in line with the results obtained by Forsström et al. (2016), where 80.7% started and 65.4% completed the first self-test. That more users completed the self-test in our study can be an indication that the users wanted an assessment of how much they had gambled and their level of risk, and nothing more. This might partially explain why they opted out after a short period of time. After the initial self-test, we found a significant drop in started and completed self-tests. By way of contrast, the levels of use remained more stable for subsequent self-tests in Forsström et al. (2016). The high completion rate for the first self-test could explain the sharp drop in subsequent use of this function in our study: the users may only have wanted an assessment of the level of negative consequences associated with their gambling and, once they had received it, might not have felt inclined to use the function again. This also puts the results from Forsström et al. (2016) in a new light: perhaps the Swedish users in that study continued to be part of the tool and simply did not opt out. Also, Griffiths et al. (2009) found that curiosity was the most prevalent reason for joining Playscan. This, in combination with our results and those of Forsström et al. (2016), suggests that the tool might not have much utility beyond the initial assessment. Also, the majority of the participants had a green risk rating before signing up to the tool, which indicates that they did not need more help than information about their risk level. These results from our study and from Forsström et al. (2016) indicate that the tool needs to communicate with the users in a different way to maintain their interest. A more pedagogic approach is perhaps needed to ensure the users' active participation in the tool.
Targeted communication to users in the different risk categories might be a way of retaining active use of the tool. In a Swedish context, communicating in a more comprehensive manner is perhaps even more important, as the tool is not mandatory. Ivanova, Rafi, Lindner, and Carlbring (2019) found that gamblers at different levels of gambling problems were not disturbed by the presence of RG measures at gambling sites. Their study supports our assumption as regards to communicating with online gamblers.
The advice function was not used by many of the dropouts. Their needs might not have been the same as those of the participants in Forsström et al. (2016), in which the use of the advice function was higher; they might just have wanted feedback on their gambling. One plausible reason for the users not engaging with the advice function is that they may not have felt the need to use the tool further after reviewing the results of the self-test and receiving an initial risk assessment. Therefore, the dropouts seemed to engage with the tool in a slightly different manner than the continuous users in Forsström et al. (2016).

Possible effect on gambling behavior
On the basis of the results from our study, the length of use might affect the size of the effect. In terms of money and time spent on gambling after using Playscan, the results of our study were not in line with those of Auer and Griffiths (2015) and Wood and Wohl (2015). Our results indicate that the dropouts did not gamble less after being members of Playscan. For one group (green users), the level of loss seems to have increased, as did the number of days gambled. However, the increase was associated with a low effect size, which suggests that it did not have any practical implications. Perhaps the green users increased their gambling because they had overestimated their risk or had a different view of their gambling than what was communicated to them. After receiving a low risk level, they might have felt more inclined to gamble than before using Playscan. The feedback might have given them a false sense of security, which led them to increase their gambling. It is worth exploring whether feedback and the use of RG tools are counterproductive for low-risk gamblers (our study does not answer this, but it is an avenue worth exploring). The increase in spending can also have alternative explanations, such as normal fluctuations in the gambling patterns among green users. However, this can only partially explain the result; the users utilized the tool at different time points, and it is therefore not possible to attribute the changes in gambling to special jackpots or big sporting events that would make individuals gamble more.
The users who did not have a risk rating when they joined also increased their spending. This is probably an effect of regression to the mean, because their spending before Playscan was close to zero. Small increases in spending among some of the users who did not receive a risk rating before joining would result in a quite large increase in the mean. This result can thus be seen as having no practical implications, and these users spent approximately $20 per week, which indicates that their level of harm from gambling would be low.
For the yellow and red users, the results suggest no significant drop in loss before and after using Playscan. As mentioned, this contradicts the findings of previous studies. There might be several explanations for this inconsistency. Even though Auer and Griffiths (2015) and Wood and Wohl (2015) did not investigate the use and length of use of Mentor and Playscan, it is possible to assume that the users in those studies utilized the tools more, and for a longer time, than in our study, which resulted in a small effect. However, one important argument that has to be made in relation to the results from our study is that the absence of a significant drop in loss and days gambled does not necessarily mean that there is an absence of effect from short-term use. This is in line with the argument made by Alderson (2004) that the "absence of evidence is not the evidence of absence". The lack of significant results for the yellow and red users could be due to the fact that the study might have been underpowered. The results for the green group should be valid, since it consists of over 500 participants; few studies require a sample larger than that. The yellow and red groups were smaller and might have been underpowered, but what contradicts this is that the red group, with 77 participants, still had a significant decrease in number of days gambled. Furthermore, in Ivanova, Magnusson, and Carlbring (2019), there was no effect on gambling in a large sample after gamblers had been nudged to set limits, indicating that it is hard to achieve an effect when investigating RG measures.
The main contribution of our study is that it indicates that there might be a dose-response relationship between the use of RG tools and gambling behavior (loss and days gambled). Since the users in our study almost exclusively used the self-test feature, being informed about one's risk level does not seem to be enough to decrease gambling behavior. One argument that can be made on the basis of the results is that there might be a threshold of use that gamblers need to pass in order to achieve an effect. However, this could be a result of the structure of the tool. The lack of effect could be because the users only received feedback when logging in to Playscan. After answering a self-test and receiving feedback, the users might have expected more feedback via mail or through other channels, and when that did not occur, they left the tool. The results from our study suggest that these types of RG tools should be mandatory for users of gambling sites. Voluntary use might result in dropouts, who will then receive no effect from the tool.
The results also suggest that dropouts with a high-risk level (red) decreased their gambling frequency (number of days), but that this did not result in a decrease in net loss. One possible explanation is that the change merely reflects fluctuations in gambling over time; the low effect size also indicates that the decrease had no practical consequence.
Furthermore, it is possible to argue that yellow and red users did not increase their gambling after using the tool. The tool might have stopped yellow and red users from increasing their spending on gambling. It is, however, not possible to confirm this scenario with the dataset used in this study.
One line of argument based on the results is that gambling companies need to adopt a more proactive stance (Hancock & Smith, 2017a, 2017b). A recent study (Jonsson, Hodgins, Munck, & Carlbring, 2019) tested whether big losers at the Norsk Tipping gambling site decreased their gambling after a telephone call informing them about their gambling. The researchers found that the gamblers substantially decreased their spending compared to the control group. This suggests that a more pedagogic and proactive approach is required to facilitate behavior change by means of RG measures. Also, gamblers seem to have a positive attitude toward RG measures and toward using them (Engebø, Torsheim, Mentzoni, Molde, & Pallesen, 2019). This suggests that gambling companies can offer more measures and be more proactive.

Limitations
Two general limitations are present when using account-based data: a gambler could gamble at several sites at the same time, and several gamblers can use one account. This limits the inferences that can be drawn from these types of studies, and the results should be interpreted with caution.
Using a sample consisting of dropouts carries with it a limitation. There is no available data on why the participants chose to leave the tool. What we know from the results is that a majority of the participants used the tool for a very short time and that they answered one self-test. This suggests that the dropouts used the tool in a similar manner as the Swedish users in Forsström et al. (2016) who remained users of the tool. Perhaps the users in our study wanted to get an assessment and then dropped out. Another possible explanation for users dropping out is that they may not have found the RG tool to be useful and stopped using it after surveying its features. The reasons for joining Playscan are also unknown, even though it seems, based on how they used the tool, that the users joined to have their risk level assessed. Also, it is not known whether the dropout group were new customers on the gambling site or old customers who had used the site for a longer period of time. Using this sample means that it is not possible to generalize to other populations that use the tool, but like Caillon et al. (2019), the results contribute information about the short-term use of an RG measure and can contribute knowledge on how to investigate RG tools in future studies. It could be argued that our study does not make a contribution to the research field due to the biased selection, but several studies with a biased selection have been published before. Two studies (Haefeli et al., 2011, 2014) used a sample of gamblers who had contacted the customer service of a gambling company and investigated whether these gamblers self-excluded later on. Both studies had a problematic sampling procedure, but still contributed important information about gamblers' behavior.
Similarly, while there might be a problem with our sample, it might nonetheless be relevant to investigate not least because of the problems of setting up experiments that examine long-term and short-term use of RG measures.
Another limitation is the lack of a control group that did not use Playscan against which to compare the change in gambling behavior present in the dropout group. However, in the study by Wood and Wohl (2015), non-users (control group) of Playscan did not experience a behavior change; they only had a decrease of 30 Swedish kronor (approximately 4 US dollars). The gamblers in that control group would likely have been similar to those in our study. Celio and Lisman (2014) found no significant change for the control group, and Auer and Griffiths (2015) found that the control group had a slight decrease in theoretical loss. We can therefore assume that a control group made up of Norwegian non-Playscan users would not have experienced a change in gambling behavior. However, the lack of a control group, alongside the non-significant results that might be due to an underpowered study, means that the results from our study need to be interpreted with caution.
The users who dropped out of Playscan might not be representative of all users who started to use the tool and subsequently stopped. Swedish users, for example, might have followed different pathways in using and abandoning the tool. However, the pathways should be similar, as the results of Forsström et al. (2016) are similar to those obtained in our study.
It might also be a limitation that the study is based on gambling data from 14 days before and after the activation and deactivation of Playscan. Including a more long-term (three- or six-month) follow-up might have produced a different outcome. In any case, including a long-term follow-up would have made the study more similar to previous studies investigating RG measures.
It is also worth noting that the gambling data used to calculate net loss in our study was skewed. Hence, the conclusions that can be drawn from the ANOVAs are limited. However, the sample size and the amount of data available should have diminished the effects of the skewed data. If a non-parametric test had been used, it would probably have yielded non-significant results, since these tests are more conservative. Hence, the subsequent discussion would have been similar if a non-parametric test had been used.

Future research
Future research should use qualitative methods to investigate the reasons behind joining and leaving RG tools. Do gamblers leave the tool because they do not agree with the assessments made, or because they have received the information that they needed? When evaluating the effect of RG tools, the levels of use of the tool need to be taken into account, since this study again highlights the differences in use among gamblers who use Playscan. The dose-response relationship also needs to be explored in future studies.
Future studies could also focus on investigating RG tools and the stages of change by adding questions about this. Linking the propensity to change with feedback could provide key insights when it comes to tailoring effective feedback.

Conclusions
The results have shown that repeated use of the functions in Playscan is rare; users leave the tool after a short time. The results also suggest that the users do not gamble less after using the tool. Two tentative conclusions can therefore be drawn. First, users seem to screen themselves to get an assessment and feedback on their risk level, and then leave the tool. Second, since the dropouts did not gamble less after using Playscan, it seems that one needs to remain a user of an RG tool for a longer period of time and/or use its different functions several times in order for any positive effects to be seen. A final conclusion is of a methodological nature: use and effect should be investigated together to understand how to decrease harmful gambling.

Correction
This article has been republished with minor changes. These changes do not impact the academic content of the article.