Pranks, Obscene Chatters, and Ambiguous Content: Exploring the Identification and Navigation of Inappropriate Messages to a Web-Based Sexual Assault Hotline

ABSTRACT Sexual assault crisis hotlines provide crucial support for survivors. Though some hotline users engage in inappropriate conduct (e.g. prank or obscene calls), few studies explore these interactions. To address the lack of literature exploring inappropriate hotline interactions, we conducted a secondary data analysis of chat transcripts (n = 233) shared with the research team as part of the formative evaluation of a university-based sexual assault program’s web-based crisis hotline. From those transcripts, we analyzed potentially inappropriate interactions (n = 38), most of which (n = 28) hotline responders flagged as inappropriate in post-chat log forms. We used codebook thematic analysis to explore how hotline responders identified and navigated these interactions. Our analysis generated three themes describing the processes through which responders seemed to identify potentially inappropriate chats – detecting implausibly graphic and abusive content, identifying patterns of presumably inauthentic chat topics, and interpreting ambiguous content. Hotline responders seemed to navigate ambiguous and less egregious boundary violations by gently redirecting conversations, and addressed clearer violations by setting firm, direct boundaries. Chatters responded to boundary setting by desisting and disconnecting or attempting to reengage responders. Findings highlight ambiguities and challenges web-based sexual assault hotline responders face and suggest a need for additional responder support, training, and debriefing options.

health treatment, legal advocacy, and emergency housing (Macy et al., 2009). Many survivors use crisis counseling hotlines, which aim to provide immediate support and resource referrals (Macy et al., 2009), via phone call, text message, or web-based chat.
Though responders support survivors in most hotline interactions, some calls and chats are considered inappropriate; in previous evaluations, responders considered 7-20% of calls to counseling hotlines (Evans et al., 2013; Pollock et al., 2013) and six percent of calls and chats to a sexual assault hotline (Moylan et al., 2022) to be inappropriate. Bloch and Leydon (2019) argue that inappropriate hotline interactions are marked by a caller's lack of desire to discuss and receive guidance for a problem. More specifically, we define inappropriate hotline interactions as those that are pranks, obscene, inauthentic, possessing a hidden motive, or otherwise perceived to be suspicious. These interactions create challenges in which responders must reconcile the need to support callers while protecting themselves and the service itself (Pollock et al., 2013). While few hotline interactions may be inappropriate, these calls, texts, and chats consume valuable resources and may disturb responders. Though a better understanding of these interactions would help hotlines preserve responder wellbeing and hotline accessibility, little research has examined inappropriate chats to sexual assault hotlines or how responders navigate these interactions, leaving hotline responders unsure of how to identify and respond to these calls and chats.

Web and text-based sexual assault crisis hotlines
Survivor difficulties accessing appropriate survivor-centered services led agencies to develop sexual assault crisis hotlines that aim to provide crisis intervention and referrals to outside resources (Macy et al., 2009; Wood et al., 2022). Though agencies traditionally operated hotlines via phone, the growth of the internet and the popularity of texting among young people helped inspire web-based hotlines (Moylan et al., 2022). Preliminary studies of web-based hotlines suggest that they are effective and may increase help-seeking because they are free, private, and accessible (Evans et al., 2013; Moylan et al., 2022; Wood et al., 2022).
Survivors often contact hotlines before other formal support systems, so web-based hotline responders must effectively navigate challenges, such as difficulties understanding and showing emotions in writing, because a poor response may reduce future help-seeking (Evans et al., 2013; Moylan et al., 2022; Wood et al., 2022). Despite a growth in web and text hotlines, the lack of research or technical assistance specific to their operation suggests informal and practice-based knowledge continues to shape responder training (Moylan et al., 2022; Wood et al., 2022). Expanded research on web-based hotlines could support providers in navigating modality-specific challenges to crisis intervention.

Pranking and abusing hotlines: identification, navigation, and impact
Though inappropriate calls and chats create challenges for hotlines, we know of no studies of inappropriate calls or chats to a sexual assault hotline or their identification. The few published studies of inappropriate interactions used data from the Australian Kids Helpline (Emmison & Danby, 2007; Weatherall et al., 2016) or YouTube videos (Weatherall et al., 2016). These pranks often began like genuine calls, but then callers tried to disrupt a responder's expectations for the call by using provocative phrases and avoiding emotions (Emmison & Danby, 2007; Weatherall et al., 2016). Work with responders to hotlines that primarily serve adults suggests inappropriate callers may contact hotlines to seek sexual gratification, and often hope to speak and masturbate to women's voices (Brockopp & Lester, 2012; Pollock et al., 2013). These callers revealed their intention explicitly, by admitting to masturbating, or implicitly, through heavy breathing and sexualized language (Brockopp & Lester, 2012).
Previous work provides insight into the process through which callers and responders navigated inappropriate hotline calls. The Australian Kids Helpline treated inappropriate callers, whom they describe as "testing" the hotline (Barton 1999, as cited in Weatherall et al., 2016), with dignity in case they called back with a genuine concern (Emmison & Danby, 2007; Weatherall et al., 2016), which may be common for hotlines (Evans et al., 2013). Treating these interactions as "tests" lets hotlines demonstrate their safety to those who may be learning how to seek help or seeking rejection that confirms their negative self-image (Brockopp & Lester, 2012; Weatherall et al., 2016). However, navigating testing calls creates challenges, as responders must avoid being abused by a caller while not prematurely dismissing a call as inappropriate (Emmison & Danby, 2007; Pollock et al., 2013; Weatherall et al., 2016). The need to avoid dismissing genuine callers is compounded by the incompatibility between labeling a call inauthentic and some hotlines' emphasis on being nonjudgmental (e.g., Pollock et al., 2013). Indeed, some hotlines give no clear guidance on navigating these interactions (Pollock et al., 2013), and existing recommendations for generalized counseling hotlines, such as the suggestion that responders let telephone masturbators use them for sexual gratification (Brockopp & Lester, 2012), may limit responder agency and be inappropriate for sexual assault hotlines. As a result, in a study of a single hotline, responders described navigating these interactions in various ways, such as by directly telling a caller they recognized them, redirecting a conversation to emotions, asking why a caller called, or ending a call (Pollock et al., 2013).
Though little work has focused on the impact of callers' abuse on responders, these calls may make hotline responders feel sad, stressed, angry, manipulated, or disgusted (Brockopp & Lester, 2012; Dihenia, 2022; Pollock et al., 2013). Pollock et al. (2013) found that these interactions were particularly distressing when deemed "manipulative and disingenuous" (p. 119). These feelings and related emotional activation may manifest in harmful ways, for example by leading responders to be suspicious of callers from certain groups (e.g., young women, Pollock et al., 2013; or men, Brockopp & Lester, 2012) or to experience burnout (Dihenia, 2022). Hotlines may, then, become less accessible to certain callers and provide lower quality services if responders are burnt out and less able to support callers (Dihenia, 2022).

The current study
Disagreements and inconsistencies on how to navigate inappropriate calls or chats, particularly on hotlines with a client-centered, nonjudgmental approach (Brockopp & Lester, 2012; Pollock et al., 2013), have led to a lack of uniform guidelines for responder trainings. The lack of clear guidelines is compounded by the absence of studies of inappropriate interactions on web-based hotlines or services for survivors of sexual assault. Our analysis aims to fill these gaps by exploring the process through which responders for a web-based sexual assault crisis hotline identify and navigate potentially inappropriate chats. We believe exploring these interactions will provide guidance for hotline staff in ways that improve training related to these interactions, reduce burden on responders, and ensure survivors have access to the essential support of crisis hotlines. We aimed to explore the following research questions: 1) How do responders identify inappropriate chats on a web-based sexual assault crisis hotline? and 2) How do responders and chatters navigate inappropriate chats?

Methods
The data for this study came from the formative evaluation of a web-based crisis hotline operated by a campus sexual assault victim service provider at a large, midwestern, public university. The primary goal of the formative evaluation project was to identify the core elements of digital crisis intervention to inform further evaluation and contribute to the literature supporting the effectiveness of text-based crisis intervention (see Moylan et al., 2022). The hotline, staffed by volunteer responders who work remotely but may check in with a backup staff member for support, is available to members of the university and surrounding community. While most volunteers are students, they may also be faculty, staff, and community members. Responders receive 30 hours of training before they can start volunteering and then participate in regular supervision and continuing education opportunities. Trainings are delivered by campus sexual assault victim service provider staff and focus on core skills like being client-centered. Responders are trained to be nonjudgmental and assertive when confronting difficult chatters or setting boundaries as needed (e.g., when a caller or chatter is yelling), though trainings focus on a chatter's emotional activation rather than inappropriate conduct. If a caller or chatter continues inappropriate behavior after two warnings, responders can end the interaction and tell the caller or chatter that they can call back when able to respect boundaries. As part of the evaluation, the program gave the research team access to deidentified transcripts (n = 233) of chat conversations received from Spring 2019 to Fall 2021. We secured IRB approval from the Michigan State University human subjects review board to conduct this study using anonymized (chatter and responder) transcripts, so we lacked access to chatter and responder demographics.
We analyzed transcripts (n = 38) from suspected inappropriate chatters, including those that seemed to be pranks, obscene, inauthentic, possessing a hidden motive, or otherwise suspicious. We identified these transcripts in two ways. First, we reviewed 35 transcripts flagged by responders as from prank/obscene chatters in a chat log form. When a responder answered a chat, they completed a form to log interaction details. The form included a box to indicate if they thought a chat was a prank/obscene and an open-ended field to give details about the interaction. Of the 35 transcripts flagged by responders, 28 met our definition of inappropriate. The chats that did not meet our definition were flagged for being outside of the hotline's scope (e.g., a chatter wanting to discuss challenges maintaining a friendship). Second, to help identify factors that led responders to suspect chats were pranks/obscene, the first two authors reviewed all transcripts (n = 233) to identify those that shared features with inappropriate chats but were not flagged (n = 5) or lacked a debrief form but otherwise met the criteria for inclusion (n = 5). These 10 transcripts plus the 28 identified by responders left us with a sample of 38 chats.
Inappropriate chats were given IDs from 1-38 and uploaded to Dedoose (2021). A team of three researchers used codebook thematic analysis (TA), a qualitative analysis method, to interpret patterns in the data (Braun & Clarke, 2021, 2022). Braun and Clarke (2021) encourage using codebook TA for translational research, which aligns with our goal of informing hotline practice. Braun and Clarke (2022) note that TA codebooks are not used to mitigate positivist concerns about bias in qualitative inquiry or to "facilitate the measurement of an intercoder agreement" (p. 7). Therefore, our approach led us to conceptualize our analysis as innately subjective and reflexive (Braun & Clarke, 2021), so we considered our emotional reactions to help us identify and understand our subjective responses to particularly distressing content. For example, many chats made us uncomfortable, and throughout our analysis we discussed how being able to see chats in full meant our reactions differed from those of responders who had to reply in the moment, which we tried to keep in mind when framing and discussing results. Using preexisting data meant our sample size was guided by inclusion criteria rather than data saturation, which fits the epistemology of codebook TA described by Braun and Clarke (2021, 2022).
First, the first three authors reviewed the 38 transcripts and developed 10 codes used to code the data. Early codes were descriptive and captured conversation progression (e.g., "chatter leaves conversation at suspicious point" or "responder recommends chatter call 9-1-1"). Per Braun and Clarke (2021, 2022), we developed codes early in the analysis process and used them to create a common language for the research team. After independently coding the dataset, the first three authors met to develop new codes and refine existing codes, leading us to create 13 codes describing the chats and their structure. After recoding the data with the 13 codes, the first three authors reviewed associated excerpts and grouped those with shared meaning. We met to discuss grouped excerpts and generate themes in a process guided by Braun and Clarke's (2021) description of themes as "understandings of patterns of thematic-meaning" (p. 342). For example, we synthesized codes to generate the higher-order theme "recognizing presumably inauthentic chat patterns." After reviewing data independently, the first three authors met to discuss and refine themes until agreeing we had generated a final set of themes supported by our reflexive understanding of the data. We then wrote up a definition of each theme and identified quotes to illustrate them. As appropriate, we considered negative case examples to refine theme definitions. Finally, we used "evaluation questions" (Braun & Clarke, 2021) to guide our explanation of methods (e.g., to ensure conceptual fit between our methods, aims, and research question) and analysis (e.g., to ensure themes were fully formed and justified). Throughout the analytical process, the first three authors met to discuss and resolve any disagreements.

Reflexivity
Given codebook TA's embrace of reflexivity, our analysis and findings are deeply influenced by our identities and experiences. This project was conducted by an interdisciplinary team (community psychology, nursing, social work) of six individuals, including gender-based violence researchers and the director of a university-based program that provides services to those impacted by sexual violence. The paper's first three authors, who coded the data, all have multiple years of experience conducting qualitative research (ranging from four to 15 years) and working as gender-based violence practitioners/researchers (ranging from four years to more than two decades). As the team consisted of gender-based violence practitioners and researchers, most of whom have experience answering hotline calls and all of whom have direct practice experience working with survivors in some way, we were able to draw on previous experiences in our analysis and interpretation. We used these experiences to help us understand the patterns of shared meaning that we observed in the transcripts, for example, by using our hotline experiences to identify language in transcripts that responders may have viewed as red flags.

Results
The research team generated six themes describing the process of identifying and navigating inappropriate chats. Theme presentation begins with inappropriate chat identification and transitions to themes relating to interaction navigation. As appropriate, we share transcripts verbatim, retaining typos. However, the research team suspected some chatters took pleasure in exposing responders to extremely graphic imagery, so we sometimes paraphrased chatters' most disturbing language rather than share their words directly and risk being complicit in their abuse by helping them reach more people. Campbell (2022) discusses the need for researchers to limit their exposure to traumatizing qualitative data; with that in mind, at points we describe, rather than paraphrase, chatters' most disturbing language. We recognize that doing so diminishes the trustworthiness of our analysis and limits the extent to which readers can understand the chats these responders navigated, but we are willing to make this tradeoff given concerns related to sharing content verbatim. In our analysis and discussion, we use "graphic" to describe only content that goes beyond a survivor recounting their experience of victimization; the use of slang or frank descriptions of sexual violence is not enough for us to refer to language as graphic or violent.

Process of identifying inappropriate chats
We generated three themes to describe the process through which responders identified inappropriate chats. Responders seemed to identify these interactions through a process of 1) detecting implausibly graphic and abusive content, 2) identifying patterns of presumably inauthentic chat topics, and 3) interpreting ambiguous content.

Detecting implausibly graphic and abusive content
Though one would expect to encounter details about sexual assault incidents or topics such as suicide on a sexual assault crisis hotline, responders seemed to quickly detect that some chats contained implausibly graphic or violent language. These chats included explicit, disturbing details about sexual violence and other forms of violence. They also included no discussion of emotional impact, support needs, or any other reason for contacting the hotline.
When responders detected graphic and abusive content, they often seemed to do so by noticing the violent, disturbing, and implausible nature of the chatter's language. For instance, a chatter began a conversation by describing suicidal ideation; when the responder began a safety assessment, the chatter crassly propositioned them for sex (18). Throughout the interaction, this chatter also described sexual assault, female genital mutilation, being "horny," their water breaking, and giving birth on "railroad tracks" (18). Other chatters who discussed pregnancy also described giving birth during the chat in ways that seemed increasingly implausible, logically inconsistent, disturbing, and violent (17, 24). Even after responders encouraged these chatters to call emergency services (e.g., 9-1-1), they often continued to describe their situations in graphic detail, again suggesting an intent other than help-seeking.
Responders seemed to identify some chatters' graphic language as an attempt to seek sexual gratification. For example, a chatter wrote "see seen what I'm doing on the internet right now" before describing degrading sexual behaviors in a way that suggested that they might be narrating content (e.g., porn) as they watched (17). Another chatter discussed experiencing childhood sexual assault but then used the present tense to share biological reactions that implied feelings of sexual gratification as they retold the story (23). Rather than focusing on feelings or needs, some chatters' explanations of bodily sensations seemed to move conversations toward sexual gratification and away from help-seeking or other appropriate uses of the hotline.

Identifying patterns of presumably inauthentic chat topics
While some topics may be likely to appear in many hotline chats, responders seemed to observe that some patterns of presumably inappropriate chat topics diverged from typical help-seeking. These chats, which included a resistance to discussing resources or emotions, consisted of nearly or completely identical phrasing. The nearly identical nature of subject matter, syntax, and diction aroused suspicion that the chats originated from a single person, though there was no way to confirm this. We observed a two-phase process of identifying patterns of inappropriate chat topics in which responders initially treated a chat as authentic before recognizing that it may not be, leading them to change how they engaged with chatters.
Over several months, nine chats began with a chatter writing "I'm tied up." In the first several chats that fit this pattern, the chatter usually wrote only the single phrase "I'm tied up" before leaving the conversation (7, 9, 14, 15). Though responders described these chats as "suspicious" and possibly "intended as a prank" (9), they engaged with chatters as if they were authentically seeking help by assessing safety and making referrals to emergency services (e.g., 9-1-1). In the first few chats where 9-1-1 was mentioned, the chatter left the conversation, though the chatter eventually engaged. The implausibility of someone being able to text while tied up, and of that person choosing to contact an anonymous hotline rather than someone better positioned to provide immediate help escaping, might have been initial clues that this chatter was not authentically seeking services. The question, "Can I tell you how much rope," after the mention of 9-1-1, and their eventual admission that they could untie themselves, seemed to solidify the interpretation that the chatter was not actually in danger or seeking help. The request to describe the rope and the unsolicited description of their clothing may indicate the chatter was describing a sexual fantasy.
Chats following this pattern occurred numerous times over a few months. Similar chats in subsequent months (e.g., 20, 21) received minimal engagement, likely because hotline staff had detected the repeat chatter's disingenuous intent and directed responders not to engage. However, after a ten-month gap, the hotline received a chat during the next academic year that began, "I'm tied up." The responder, likely unaware of the pattern, asked, "Do you want to talk more about what you're feeling tied up about," interpreting the phrase as a metaphor for emotional distress (30). The chatter then asked to describe the rope with which they were bound, again suggesting efforts to seek sexual gratification (30). We highlight this chat because it illustrates the sustained pattern over an extended period and how recognition of a chat pattern might fade as new volunteers train and time passes. Across nearly identical chats, responders interpreted "I'm tied up" as a genuine plea for help from someone in danger (e.g., "Would you like to say more about your situation? If this is an emergency please call 911" (9)), a known repeat and inappropriate chatter (e.g., referring a chatter to 911 with no further engagement (15, 20, 21)), and someone in a state of emotional distress and confusion (e.g., "Do you want to talk more about what you're feeling tied up about" (30)). In other words, depending on a responder's knowledge of a chat pattern, their engagement with nearly identical chats varied significantly.

Interpreting ambiguous content
While some chats contained graphic and clearly violent imagery, others were ambiguous, necessitating more interpretation and sustained interaction to assess their legitimacy and the appropriate response. Like typical chats from survivors, these chatters shared their experiences of unwanted sexual contact; responders seemed to flag these chats as inappropriate based on their use of language, lack of detail, unexpected conversational segues, or similarity to other chats received by the hotline. Unlike graphic and abusive chats characterized by violent imagery, implausible details, and extremely graphic content, these chatters' language included common slang terms for genitalia and blunt references to sexual acts. For example, a chatter wrote, "he offered to suck my dick" (5). The chat's corresponding log form suggested that the responder saw this chatter as "possibly obscene" based on the use of "dick," the frank description of sexual behavior, and the responder's recognition that the chat came shortly after and resembled other ambiguous chats (5). While the language was not violent or graphic, the context of recent chats shaped the responder's interpretation, illustrating how seemingly unproblematic content can be seen as obscene when considered in the context of potential patterns of inappropriate behavior.
Some interactions initially aligned with a typical chat and only aroused suspicion when a chatter's conversational segue seemed odd or incongruent with the rest of a conversation. These odd statements seemed to shift the responder's interpretation of a chat's inappropriateness. For example, an interaction that began with a chatter describing an assault concluded as follows:

Responder: I am sorry that happened to you.
Chatter: His dick looks different from mine.

The responder described this as an "obscene chatter" who "initially chatted vaguely about something that happened but then quickly became obscene" in the accompanying chat log form (3). The chat log's reference to the chatter's obscenity seemed to be a reference to the comment, "His dick looks different from mine" (3). Though unclear, the responder may have suspected the chatter was trying to get them to invite the chatter to describe their own, and someone else's, genitals. In contrast, another chatter described an assault in greater detail, eventually asking "why would I stay hard throughout the whole thing?" (1). While this chat's language was similar to the previous chat in its level of detail and use of slang, the responder seemed to interpret it as consistent with a survivor seeking help and responded with empathy and support. These chats' ambiguity prompted interpretation from the responder that shaped their response. As researchers who had previously worked on sexual violence hotlines, drawing on our experiences often left us unsure whether these chats were inappropriate or from survivors struggling with aspects of their experience, such as physiological arousal. Of course, we lacked context (e.g., patterns of phone calls received by the service) that may have shaped responders' choices, supporting our conclusion that these ambiguous chats required responder interpretation and ensuing judgment calls.

Navigating inappropriate chats
We identified two processes responders used to navigate potentially inappropriate chats: 1) gently attempting to redirect the chatter and 2) firm, direct boundary setting. The process a responder chose seemed to result from their evaluation of the severity of a chatter's inappropriate conduct. We also explored chatter responses to boundary setting, as we thought these responses might reveal something about chatters' intentions and the success of boundary-setting efforts. These responses typically took the form of desist-and-disconnect responses (the chatter leaving the chat) or attempts by the chatter to reengage the responder using various methods.

Gently attempting to redirect
When responding to ambiguous content and mild boundary violations (e.g., describing a need for emergency or medical services the hotline could not provide), responders gently attempted to redirect chatters. In doing so, responders used techniques to gently steer a conversation toward more appropriate topics or made referrals to more appropriate services while trying to preserve rapport and inviting the chatter to continue with appropriate discussion. When chatters' needs were beyond the hotline's scope of work, responders used gentle redirects to refer them to other resources. For example, a responder wrote, "Unfortunately, we cannot help with that. Please call 911 if you need help" (18) in response to a chatter who claimed to be giving birth. As the chatter's description of giving birth became increasingly implausible and disturbing, the responder became firmer in encouraging the chatter to contact emergency services. The responder's tactic maintained rapport in case the chatter needed assistance, informed the chatter that the hotline could not meet their needs, and referred them to a more appropriate resource. When chatters discussed topics that seemed to make responders uncomfortable, responders used gentle redirects to move focus away from inappropriate content, shifting discussions from graphic details of an incident toward potentially more helpful topics such as emotions or problem solving. For example, a responder wrote, "We don't have to go in depth, but I'm here to listen and support you if you wanna talk about how you're feeling" (30). When another chatter shared graphic details, a responder replied, "how can I help you in this moment after sharing this with me" (34)? In each case, the responder encouraged the chatter to focus on immediate support needs the hotline could address rather than details of their victimization. Gentle redirects, however, often led chatters to end the conversation, suggesting that the context in which they occurred affected their effectiveness.

Firm, direct boundary setting
Responders set firm, direct boundaries when chatters violated the service's user expectations (e.g., by ignoring previous attempts to set boundaries or using implausibly graphic and abusive language). When setting direct boundaries, responders reiterated the chat's purpose, named the chatter's behavior as unacceptable, and/or informed the chatter of consequences for inappropriate conduct. Descriptions of consequences sometimes took the form of firm warnings that the responder would end the chat if abusive behavior continued. For example: "This chat is for people who are in crisis and needs to remain available for those who need it. If you are not in need of services and continue this type of language or behavior, you will be blocked from the chat" (7). When chatters ignored warnings or their conduct was so egregious that there was no evidence they had a genuine reason for accessing services, responders sometimes ended the chat or stated an intent to do so via statements like, "This is an inappropriate message . . . this is for people in crisis with sexual assault, I'm ending the chat now" (11). In setting firm boundaries, responders often also reasserted the purpose of the chat and sometimes directly confronted a chatter about particularly inappropriate conduct. These confrontations were sometimes the result of a chatter's particularly violent language, with comments such as "[this] behavior will not be condoned. If you continue, we can trace this number" (18) communicating an attempt to prevent the recurrence of harmful behavior. At other times, these confrontations were explicit allusions to a chatter's pattern of behavior, for example, "I know you have been repeatedly reaching out to [NAME OF SERVICE]. This service needs to remain available for people who are in crisis. I am ending this chat" (38). In each case, responders' choice to confront the chatter seemed to be an attempt to erode the sense of anonymity and deter the chatter from future contact attempts.

Chatter responses
Responder boundary setting elicited various responses. Some chatters used a desist and disconnect response, ending the interaction by leaving the chat platform. A desist and disconnect response may suggest boundary setting deterred further inappropriate conduct. However, for ambiguous chats, gentle attempts to redirect the conversation sometimes led to a disconnection. In these cases, it was unclear whether the intervention deterred an inappropriate chatter or a survivor's help seeking. For example, when a chatter who expressed disinterest in resources referenced specific sexual acts, the responder wrote, "let's try to talk more about how this is affecting you opposed to the details of the event" (8). The chatter then left the chat, suggesting even gentle boundary setting may dissuade further engagement. The appropriateness of responses in which responders discouraged chatters from discussing their victimization remains unclear, particularly in the context of chats with ambiguous content. If the interaction came from an obscene chatter, as the responder believed, the response was effective. However, the chatter may have been a survivor who hoped to discuss the circumstances of their victimization using words and language that felt comfortable to them, in which case the gentle redirect may have caused harm.
Other chatters responded to boundary setting by attempting to re-engage the responder, either by trying to prolong the responder's attention or by escalating their inappropriate behavior. For example, in a chat featuring unambiguous, extremely graphic language, the responder reasserted the chat service's focus on crisis intervention and announced an intention to end the chat. The chatter responded, "are you a girl im in crisis going to end myself cause rape" (17), which seemed to be an attempt to reengage the responder by referencing rape and indicating suicidal intent (for the first and only time in the interaction) to evoke empathy, instill doubt about the chatter's (il)legitimacy, or otherwise require the responder to engage in a suicide risk assessment and continue the interaction. Other chatters seemingly tried to reengage the responder by escalating their inappropriate behavior. For example, when a chat began with the chatter referencing suicidal intent, the responder replied with empathy (18). The chatter responded by propositioning the responder for sex using slang terms for sex and female genitalia, without any further reference to their suicidal intent. The responder then set a firm boundary; in response, the chatter continued with disturbing and increasingly graphic language, resisting all attempts to end the chat (18). Another chatter seemed to taunt a responder, writing "have a nice day who ever it is just remember be safe, plus I'm not even in the US, I'm far away, Byeeeee" (19). This chatter's escalation seemed to be an attempt to have the last word and highlighted their ability to retreat to anonymity without fear of repercussions. In these chats, the chatter's escalation in response to boundary setting seemed consistent with the assessment that the chatter was not using the hotline for its intended purpose of supporting survivors.

Discussion
This study examined the process through which hotline responders identified and navigated potentially inappropriate chats on a web-based sexual assault hotline. We observed three processes through which responders seemed to identify inappropriate chats: detecting implausibly graphic and abusive content, identifying patterns of presumably inauthentic chat topics, and interpreting ambiguous content. The perceived severity of a chatter's boundary violation led responders to use gentle redirects or set firm boundaries, both of which often led chatters to disconnect from the service. Other chatters ignored boundary setting efforts, attempting to reengage the responder, sometimes by escalating their inappropriate behavior.
Our results extend descriptions of inappropriate callers as seeking sexual gratification (Brockopp & Lester, 2012, Pollock et al., 2013) while highlighting that the processes of detecting graphic and abusive content and recognizing presumably inauthentic chat patterns differ on phone versus text-based hotlines. While we observed content similar to descriptions of inappropriate callers discussed by telephone responders (Brockopp & Lester, 2012, Pollock et al., 2013), we also observed chatters discussing committing homicide, sexualizing rape, and perpetrating the sexual abuse of children. These chats contained elements other than the use of slang or frank descriptions of sexual violence (e.g., implausibly graphic and abusive content or abusive attempts to reengage responders after boundary setting), suggesting an orientation toward sexual gratification rather than help seeking. The abusive nature of some graphic content also suggests responders could perceive some chats as a form of violence or a betrayal. In these instances, there may be a need for services and policies that support responders. Of course, this does not apply to ambiguous content or to chatters who used slang or frank language to describe sexual victimization. We suspect other hotlines receive similar chats, and our work contributes to the field by providing insight into the content of these interactions and the process through which responders identified them in a text-based modality. Previous work suggests that the skills needed to provide crisis counseling differ via phone and text (Moylan et al., 2022), and these differences likely also apply to identifying and navigating inappropriate chats. Without the audible context clues associated with identifying telephone masturbators (e.g., heavy breathing; Brockopp & Lester, 2012), responders were most able to identify chats as abusive when chatters directly and inappropriately invoked sexuality, for example by typing a moan, asking responders for sex, or sharing that they had an erection. In other cases, identification of a chat's potential sexual motivation (e.g., those with graphic, disturbing violent imagery and "I'm tied up" chats) seemingly stemmed from responders' recognition of presumably inauthentic chat patterns and content observed in violent and rape pornography (e.g., bondage; Carrotte et al., 2020). Regardless, we recommend further study of whether other text- or web-based survivor-serving crisis hotlines encounter dynamics like those we observed.
Difficulties navigating ambiguous content seemed to be compounded by the text-based nature of the hotline; the ambiguity of many analyzed chats meant responders had to interpret a chatter's meaning and motivation before responding or choosing between gently attempting to redirect the chatter and firm, direct boundary setting. Without context clues indicating an interaction is inappropriate (e.g., friends giggling in the background, heavy breathing) that phone-based hotline responders can use to determine a call's legitimacy (Brockopp & Lester, 2012, Emmison & Danby, 2007, Weatherall et al., 2016), text-based chat responders must rely on syntax, diction, patterns of chats received by the service, and their expectations of survivors to guide responses. Responder interpretations matter because these interactions appear fragile; even gently attempting to redirect the chatter often led ambiguous chatters to use a desist and disconnect response. While such a response may suggest the successful deterrence of an abusive chatter, it may also represent a survivor feeling like a sexual assault crisis hotline dismissed their experience. Responders must be careful that neither gentle redirects nor firm, direct boundary setting dissuades survivors from seeking help, especially because chats did not begin with a discussion of hotline guidelines, so chatters with no previous knowledge of hotline policies may be surprised by boundaries related to discussions of victimization. Notably, chatters can access chat guidelines on the website where the chat is accessed, but chatters have to seek out and read those guidelines if interested. These challenges suggest responders must use caution when a chatter's use of slang or frank depictions of sexual violence leads to doubts about a chat's legitimacy, for example by reflecting and re-reading a chat before responding or asking follow-up questions to give chatters additional opportunities before responders set boundaries.
Though some of the chats flagged as inappropriate may have genuinely been abusive, they also could have been testing chats from survivors or perpetrators trying to learn how to frame their experience to ensure they receive an empathetic response or a referral to a more appropriate service. Many survivors are hesitant to contact formal support systems, particularly when their experience deviates from that of the stereotypical white, heterosexual, cisgender woman (e.g., Calton et al., 2016, Huntley et al., 2019), and some survivors may therefore be particularly likely to test a resource's safety. However, a survivor who reaches out to a chatline and then feels their narrative has been dismissed may be less likely to seek help in the future (Ahrens, 2006, Ahrens et al., 2007). Responders, therefore, may wish to be particularly empathetic when setting boundaries and inviting further conversation during ambiguous chats to minimize the likelihood of a survivor's desist and disconnect response.

Policy and practice
Our findings about the process of identifying and navigating inappropriate chats have implications for practice and policy. The processes of detecting implausibly graphic and abusive content or recognizing presumably inauthentic chat patterns could traumatize or disturb responders. The research team found many analyzed chats very disturbing despite having hotline training and experience, being removed from the interaction, and being able to prepare ourselves for the transcripts' content. During the analytic process, we discussed our emotional reactions and how they shaped our understanding of and engagement with the data and phenomenon we studied (Campbell, 2022). Responders, however, lack the privilege of observing these interactions after the fact and must respond empathetically, respectfully, and quickly to all chatters, even those who may be deliberately causing discomfort or harm.
When chatters abuse hotline responders, responders may feel tricked or betrayed (Pollock et al., 2013). Because these chats may emulate the dynamics of sexual violence by making a responder a non-consensual participant in a chatter's sexual fantasy, responders may experience these interactions as a victimization, highlighting the need for hotline protocols that empower responders to set boundaries when they detect implausibly graphic and abusive content or recognize presumably inauthentic chat patterns. Therefore, agencies should give space for responders to debrief and process disturbing material after abusive interactions, particularly because many hotline responders are survivors of sexual violence (Slattery & Goodman, 2009, Wood, 2017) and may be triggered by chatters' abuse. Staff and volunteers may fear burdening their supervisor, worry about over-reacting, or be concerned that admitting a chat was troubling might jeopardize their role on a hotline if a supervisor decides they are not fit for the work. We, therefore, recommend supervisors check in with responders after disturbing interactions to offer support. Hotline supervisors should normalize a range of responses to inappropriate chats, including disgust, anger, resentment, distrust, disbelief, and empathy. We recommend trainers be transparent with new staff and volunteers about the possibility of inappropriate and disturbing hotline interactions, and encourage responders to have self-care plans to follow after challenging interactions. Hotlines may also incorporate simulations of inappropriate or ambiguous chats during training. Helping responders explore their instincts on how to respond to these interactions may empower them during live encounters with inappropriate or ambiguous chatters.
Reaching out to a hotline is a brave step for survivors, and responders should avoid prematurely ending communication, as negative responses may discourage future help seeking (Ahrens, 2006, Ahrens et al., 2007). Challenges communicating via text, such as a lack of vocal cues, magnify the potential to misunderstand ambiguous chats. For example, understanding "I'm tied up" to be metaphorical prompted a very different response than understanding the language to be literal. The responder's misunderstanding and follow-up inquiries regarding what the chatter was tied up about (emotionally) highlight an important point: responders had to interpret chats based on limited information. These complexities support hotline policies that treat all interactions as legitimate, giving hotline users the chance to establish the hotline as safe (e.g., Emmison & Danby, 2007, Weatherall et al., 2016). We recommend trainings include discussions of gentle redirects and firm, direct boundary setting in ambiguous chats, with a focus on clarifying the chatter's intention and maintaining connection when possible.
Further research and training on diverse survivors' help-seeking, including help-seeking by men, non-binary, gender diverse, and transgender survivors, and their use of crisis hotlines might help increase understanding of how to identify inappropriate chatters. Many ambiguous chats seemed to be flagged as such when chatters referred to their genitals or described experiencing arousal. If these ambiguous chats came from survivors, suspicion based on discussion of physiological responses or genitals may harm and discourage survivors whose gender presentation challenges expectations of survivor help-seeking (PettyJohn et al., 2022). For example, men who have experienced sexual victimization may feel guilt and struggle to process their victimization after experiencing arousal non-concordance, a disconnect between their physiological and emotional response to stimulation, during an assault (Nagoski, 2015, PettyJohn et al., 2022). A responder gently attempting to redirect the chatter or engaging in firm, direct boundary setting when a chatter references physiological arousal may unintentionally validate a survivor's guilt and contribute to difficulty understanding their experience. Instead, when discussions of arousal during victimization inspire a responder's suspicion, normalizing associated shame and asking follow-up questions may be more appropriate than boundary setting. While responders need to protect themselves from inappropriate interactions, prematurely dismissing ambiguous chatters as inappropriate could retraumatize survivors. Responders must be particularly mindful of the possibility of causing this form of harm, especially because the anonymous, web-based nature of the hotline limits responders' ability to assess how their responses impact the chatter.

Future research
Despite recognition that inappropriate interactions on sexual assault hotlines occur, little research explores these interactions. Our analysis provides insight into the processes through which inappropriate chats are identified and navigated, but we recommend further work to confirm and extend our findings. Given evidence that obscene and manipulative or deceitful calls negatively impacted hotline responders (Brockopp & Lester, 2012, Dihenia, 2022, Pollock et al., 2013), future research should explore how inappropriate hotline calls and chats, particularly those that emulate the dynamics of sexual assault, impact sexual assault hotline responders (many of whom are volunteers) and, in turn, a service as a whole (e.g., due to burnout; Dihenia, 2022). Such research could provide valuable information for hotline trainers, supervisors, and volunteers about risks and strategies for mitigating harm. We recommend scholars engage hotline responders to understand their perspective on navigating ambiguous and inappropriate chats and the effect of these interactions. Post-chat debriefs could explore responder emotions, reactions to specific content, and the impact of these chats. Finally, our data were collected as part of the formative evaluation of one service provider and included no demographic data, so we felt we lacked sufficient data to comment on the role of gender in inappropriate chats.

Limitations
This exploratory study used a small sample of chats from one agency's web-based sexual assault hotline and may not represent chats received at other hotlines. Our discomfort sharing especially graphic content verbatim is a limitation of the present work; qualitative researchers often provide quotes to illustrate themes in data and demonstrate trustworthiness in their analysis. Some language used was deeply disturbing, and we decided the risk of exposing people to incredibly graphic language sometimes outweighed the benefit of sharing direct quotes. While we considered redacting or censoring some chats as an alternative to paraphrasing, the amount of content that would need to be removed from these interactions made doing so impractical. Paraphrasing in lieu of using direct quotes limits our analysis in that the meaning and severity of some quotes may have been lost in our efforts to mask graphic content. The hotline's anonymity [...]

Responder: How are you doing?
Chatter: I am embarrassed by what happened.
Chatter: I feel dirty and nasty
Responder: It is very normal to feel that way but it is not your fault at all that it happened
Chatter: I don't know why he did that
Chatter: His dick looks different from mine
Responder: How can I help you today after this happened? (3)