The (In)visibility of Misdiagnosis in Point-of-Care HIV Testing in Zimbabwe

ABSTRACT There is a global trend to introduce point-of-care diagnostic tests, enabling healthcare workers at any level to test, provide results, and initiate immediate treatment if necessary. This article explores how healthcare workers conducting rapid HIV tests – in contexts of limited external quality assurance mechanisms – ascertain the accuracy of their test results. Drawing on interview data and participant observations from health facilities in Zimbabwe, we open the black box of misdiagnosis (in)visibility and reveal a range of proxies and markers that HIV testers draw on to develop certainty in, or to question, the reliability of their diagnostic classifications.


Introduction
Technological advancements, strong advocacy, ambitious global health targets and a shortage of laboratory testing services in many low-income countries have spurred a global trend to introduce point-of-care diagnostic tests (Drain et al. 2014). For instance, new and efficacious diagnostic test-kits and devices have made it possible for nurses and counselors in rural clinics to test for the human immunodeficiency virus (HIV) at the point-of-care, supporting a global campaign to ensure that 90% of all people living with HIV know their status. Transformative as these rapid HIV test-kits may be, disconcerting levels of misdiagnosis (Johnson et al. 2017b) call for critical reflection on how rapid HIV testers, when applying the test-kits, are able to recognize and reduce user errors and misdiagnoses. Here, we open the "black box" of misdiagnosis (in)visibility in rapid HIV testing, a hitherto unexplored area of investigation, and examine the opportunities and challenges rapid HIV testers in Zimbabwe face in determining diagnostic or misclassification errors – a prerequisite for reducing misdiagnosis.
HIV is, for the most part, asymptomatic, and diagnostic screening is necessary to ascertain infection status (Pitts 2020). The enzyme-linked immunosorbent assay (ELISA) test is conventionally used to determine HIV status. However, as ELISA tests cannot be performed by health workers, or even doctors, at the point-of-care, and require laboratory technicians and infrastructure (Pai et al. 2012), ELISA HIV testing is in many low-income countries restricted to the domain of high-level hospitals. Consequently, and prior to the availability of rapid HIV test-kits, smaller clinics offering HIV testing services had to transport blood samples to facilities with ELISA testing capabilities, often at great cost. Results then had to be communicated back to the HIV testing facility and clients, often with high risks of delay. This not only limited access to HIV testing services, but also contributed to high rates of loss to HIV treatment initiation, with clients failing to return for their results. These early HIV testing experiences prompted strong advocacy networks to support the development of, and access to, rapid HIV test-kits that make it possible for local healthcare providers to test patients where they are, immediately inform them of their test results, and make a plan for patient care (Pai et al. 2012; Stevens et al. 2014). While rapid HIV test-kits have played an instrumental role in catalyzing the HIV testing capacity required for the successful expansion of antiretroviral treatment programs, rapid HIV testing does not come without challenges.
Despite WHO (2019) guidelines and recommendations for improving the reliability and accuracy of test results of HIV-related point-of-care testing, research from different parts of sub-Saharan Africa indicates that large numbers of people may receive incorrect information about their HIV status due to diagnostic or misclassification errors in point-of-care HIV testing programs (Kufa et al. 2017; Young et al. 2013). In 2012, a national HIV surveillance survey of pregnant women in Zimbabwe revealed worrying levels of misdiagnosis. The survey revealed that 8.8% of women were incorrectly classified as negative when they were in fact positive, and 1.3% of women were erroneously classified as positive when they were negative (ZMoHCC 2013). We have recently repeated the study in Zimbabwe and found that 0.1% of adults tested were told they were positive when they were in fact negative, and that 10% of those given a negative result were in fact positive (Gregson et al. 2021). While new false-positives were rare in 2017, we identified a significant number of known-positive people (3.5%) on antiretroviral therapy who were HIV negative (Gregson et al. 2021). Misdiagnosis can be devastating for health and survival. People living with HIV, but classified as HIV negative, may experience significant and potentially life-threatening delays in accessing treatment, whilst people erroneously classified as positive, in an era of test-and-treat, may start lifelong treatment needlessly, and risk experiencing stigma and discrimination by their partners, family, or friends (Shanks, Klarkowski, and O'Brien 2013).
A small but growing body of literature is beginning to illuminate the circumstances that may lead to misdiagnosis. In a review of this literature, Johnson et al. (2017a) note that misdiagnosis results from a myriad of factors at different stages of the HIV testing continuum, from the training of rapid HIV testers and supply of test-kits through to the actual testing practices. Our research in Zimbabwe identified three potential pathways to misdiagnosis (Skovdal et al. 2020). First, inadequate training, coupled with frequent changes in testing algorithms (explained further below) and test-kits, meant that some rapid HIV testers lacked confidence in using certain test-kits, which introduced uncertainties into testing practices. Second, difficult work conditions compounded these uncertainties. Rapid HIV testers reported high workloads and resource-depleted facilities, which meant they occasionally were forced to take short-cuts. Finally, we noted power struggles between different types of rapid HIV testers (nurses vs HIV counselors) and specific client-tester encounters, which created social interactions that contributed to interpretation biases (e.g., if a rapid HIV tester feared the reaction of a client with a positive test result). These pathways to misdiagnosis not only help explain deviances from official and recommended testing procedures but also provide important context for understanding how rapid HIV testers operate and seek to authenticate test results.

Opening the black box of misdiagnosis (in)visibility

We take inspiration from Science and Technology Studies, and Latour (2005) in particular, to open up the "black box" of misdiagnosis (in)visibility. The black box metaphor refers to the invisible workings of technologies, such as rapid HIV test-kits, that have become so embedded in everyday practice that users and science are blind to the events and systems that govern their successes or failures (Latour 1999). Because rapid HIV tests have been so successful in achieving the goal of testing more people, little has been done to critically examine how rapid HIV testers either develop or question their confidence and trust in test results. The internal workings through which the certainty (or otherwise) of test results is developed constitute a black box which is key to understanding point-of-care diagnostic error.
HIV is invisible to the naked eye. Although the rapid tests do not make HIV visible, they detect antibodies, the body's reaction to HIV. In practice, seroconversion becomes a proxy for HIV visibility, a visibility that is dependent on the quality and accuracy of the classification produced by the testing technologies, their users, and the testing algorithm in place. From a Foucauldian perspective, rapid HIV tests provide local healthcare workers with a "medical gaze" (Foucault 2002) into the body of their clients, and thus the power to produce what is, for some, a life-changing classification of being either positive or negative. Much work has therefore gone into developing training programs and guidelines for rapid HIV testing at the point-of-care, including standard operating procedures (SOPs) for individual test-kits, and elaborate testing algorithms to minimize errors. Figure 1 depicts the national HIV testing strategy (not an algorithm, as we have anonymized the tests) that was operational in Zimbabwe during the time of our study (2018–2019). The testing strategy specifies that if an HIV test is positive, a second test needs to be run. The second test cannot be of the same type as the first. If the second test agrees with the first positive test result, this confirms the positive test result. However, if the first and second tests are in disagreement, the tests must be repeated in parallel. Only if the test results remain discordant will a third, tie-breaker test be done.
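The decision logic of this serial testing strategy can be sketched in code. The following is a minimal illustration only, not part of the national guidance; we assume, as in standard serial algorithms, that a non-reactive first test is reported as negative, and that agreement between the two repeated tests is reported as the result. The function names are ours.

```python
def hiv_testing_strategy(run_test_1, run_test_2, run_test_3):
    """Sketch of the serial rapid HIV testing strategy described above.

    Each run_test_* argument is a callable returning True (reactive)
    or False (non-reactive); the three callables stand in for three
    different test types, as the strategy requires. Returns the final
    classification as a string.
    """
    if not run_test_1():
        return "negative"            # non-reactive first test: report negative (assumed)
    if run_test_2():
        return "positive"            # second, different test agrees: confirmed positive
    # Discordant results: repeat both tests in parallel
    repeat_1, repeat_2 = run_test_1(), run_test_2()
    if repeat_1 == repeat_2:
        return "positive" if repeat_1 else "negative"  # repeats agree (assumed reported)
    # Results remain discordant: a third, tie-breaker test decides
    return "positive" if run_test_3() else "negative"
```

The sketch makes visible how much of the script's authentication work rests on the second, confirmatory test: a positive classification is never issued on the basis of a single reactive result.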
However, despite having such testing standards in place, errors in classification do occur. If an error is the difference between what is classified and a real value (Kitchin 2014), then the mistakes or misunderstandings leading to misdiagnoses, or misclassifications of a diagnosis, can only become visible, and authenticated, if the results of different testing technologies or testers are matched and compared, and if one technology is considered superior (i.e., able to show the "real" value). The ELISA lab-test is considered the gold standard of HIV testing and surveillance, and studies reporting on rapid HIV test misdiagnoses often compare and match the test results of rapid HIV tests with ELISA lab-tests. While misdiagnoses and misclassifications can – and indeed do – occur in ELISA lab-tests, the testing procedures surrounding laboratory testing are often covered by conventional laboratory quality assurance plans and accreditation statuses – ensuring a broad range of checks and balances are in place to avoid misdiagnoses (Stevens et al. 2014).
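This comparison logic can be illustrated with a short sketch using hypothetical paired data (the function and labels are ours, not drawn from any surveillance protocol): error rates only exist relative to a reference test that is assumed to show the "real" value, here the ELISA result.

```python
def misclassification_rates(paired_results):
    """Compute false-negative and false-positive rates of rapid HIV tests,
    treating the matched laboratory ELISA result as the reference value.

    paired_results: list of (rapid_result, elisa_result) tuples, where
    each entry is the string "positive" or "negative".
    """
    # Partition rapid-test results by the reference (ELISA) classification
    rapid_when_elisa_pos = [r for r, e in paired_results if e == "positive"]
    rapid_when_elisa_neg = [r for r, e in paired_results if e == "negative"]
    # An error is a rapid result that differs from the reference value
    false_neg_rate = rapid_when_elisa_pos.count("negative") / len(rapid_when_elisa_pos)
    false_pos_rate = rapid_when_elisa_neg.count("positive") / len(rapid_when_elisa_neg)
    return false_neg_rate, false_pos_rate
```

The point of the sketch is that without the second column of matched reference results, no error rate can be computed at all; this is precisely the data that rapid HIV testers at the point-of-care lack.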
Research and laboratory testing offer very different contexts from the point-of-care settings in which healthcare providers perform rapid HIV testing. Rapid HIV testers do not have access to data with which to compare and authenticate the accuracy of their classifications (like researchers), nor do they work in a controlled and highly governed environment (like lab technicians). While there were external quality assurance (EQA) controls in place at the time of the study, including training requirements and monthly proficiency testing of random samples, not all testing sites were subject to quality controls due to funding constraints (Zimuto et al. 2016). Routine EQA was implemented in a quarter of cases, but with little effect on levels of diagnostic misclassification (Gregson et al. 2021). None of our participants mentioned receiving feedback from the EQA programme, which suggests limited or dysfunctional feedback loops. From this perspective, there is effectively no (quantitative) way for our participants to know whether they have misdiagnosed a client when recording the results of a rapid HIV test. Their experiences, perceptions, and explanations of misdiagnoses therefore depend (qualitatively) on proxies or markers. Thus, we seek to explore a less-visible ethical issue pertaining to point-of-care testing, namely how rapid HIV testers, in the absence of the kind of data that researchers use and the highly controlled environment of laboratory testing, determine the reliability of their classifications and make potential mistakes visible. We explore how rapid HIV testers encounter, respond to, and negotiate certainty about, or put into question, the reliability of their diagnoses and classifications, and discuss the implications of this for future point-of-care testing.
To do this we draw on two concepts – actor-network theory and "scripts" – that have previously been used to study diagnostic certainty and ambiguity amongst people who have undergone testing for HIV (Corbett 2009). Actor-network theory, or ANT, recognizes, and gives equal importance to, the agential capabilities of non-human actors, such as test-kits and testing strategies, within a network of human actors (Latour 2005). We posit that rapid HIV testers, and the test-kits themselves, form part of a heterogeneous network of other staff and test-kits, clients, standard operating procedures, and HIV testing algorithms that mediate and shape each other in complex ways. Our findings suggest that the misdiagnosis experiences of HIV testers center on whether and how their testing practices follow or deviate from the instructions and assumptions guiding the national HIV testing strategy and associated test-kit SOPs. To interrogate this, we analyze how these "scripted" non-human objects act to help HIV testers determine the reliability of test results. The concept of "scripts" was introduced by Akrich (1992) to elucidate how all technical objects are inscribed by their designers' visions and assumptions about the relationships that exist between a technical object, its users and the accompanying context of implementation (see also Timmermans 1996). These scripts are often explicated in protocols on how to use the technology. Engel and Wolffs (2020) provide a detailed ethnographic account of the range of actors involved in developing HIV-tests for point-of-care diagnostics, and the opportunities and challenges they face in aligning test-kit designs, or scripts, to the many different contexts in which they are used.
By approaching the HIV testing strategy and associated test-kits as scripted, we are able to examine the "geography of responsibilities" (Akrich 1992) that is implicitly inscribed into the technologies and that determines the relationships between testers and clients, the spaces in which rapid HIV test-kits are stored and administered, the temporality of each test, testing practices, and the values surrounding the mode of doing "accurate HIV testing." The HIV testing strategy and test-kits thus create scenarios for accurate or inaccurate rapid HIV testing practices, which in turn may affect how rapid HIV testers develop certainty in, or put into question, the reliability of their test results.

Empirical context
Ironically, this article arose from our own uncertainties as researchers on a larger mixed-methods study ("the misclassification study"), which was conceived to examine the scale, sources, and consequences of misclassification errors in rapid HIV test algorithms. Aside from including a qualitative arm, which we partly report on in this article, the misclassification study draws on quantitative data embedded within a national HIV surveillance survey involving 62 purposefully selected health facilities across Zimbabwe. A primary objective of the misclassification study was to determine the reliability of the diagnosis classifications made by rapid HIV testers at the point-of-care. This inevitably involved comparing the rapid HIV test results with laboratory ELISA (4th-generation) tests, as discussed above. However, the supposed gold-standard laboratory test revealed inconsistencies when INNO-LIA™ HIV-I/II (INNO-LIA) antibody tests were introduced to confirm ELISA test results. The reliability of the confirmatory ELISA test results varied substantially between the two laboratories in Harare where they were carried out, and so a GeneXpert proviral DNA test was introduced to further establish the reliability of discrepant results. Occasionally this agreed with the rapid HIV test results and disagreed with the ELISA and INNO-LIA test results (Gregson et al. 2021). This apparent lack of an absolute "gold standard" test that would conclusively provide accurate test results heightened our focus on diagnosis uncertainty and took us on a path to identify other ways of authenticating test results. As we analyzed the qualitative material, it became clear that there were parallels to be drawn between our own initial trust in the testing script, our emerging uncertainties, and our quest to authenticate test results, and those of rapid HIV testers at the point-of-care.
Thus, while the quantitative arm of the misclassification study was concerned with pursuing reliable test results and employed gold standard tests of ever higher orders, none of these forms of comparing and cross-checking results were available to the rapid HIV testers, who had to rely on other strategies to feel confident that they were issuing the correct results. These included first and foremost adhering to the SOPs and algorithm guidelines they had learned in their training. However, sticking to these scripts was not always achievable in their clinical practice, which encouraged us to interrogate in detail the phenomenon of HIV misdiagnosis (in)visibility from the perspectives of healthcare workers performing rapid HIV testing daily.
In this article we draw on material from the qualitative arm of the study, which sought to explore the experiences and practices of rapid HIV testers, with particular attention paid to the conditions and contextual factors that may affect user/clerical errors. Permission to conduct the study was granted by the ethical review boards of Imperial College London (15IC2797) and the Medical Research Council of Zimbabwe (MRCZ/A/1865). Written informed consent was obtained from all participants, with the promise that we would ensure their anonymity. Pseudonyms are therefore used throughout.
Our findings draw on the results of a rapid ethnographic assessment carried out in May 2018 in 11 health facilities, located in Chipinge, Gutu, Buhera, Harare, Mudzi, and Rushinga districts. As further elaborated in Skovdal et al. (2020), the health facilities were sampled to cover a mix of health facility settings (e.g., hospitals vs clinics, rural vs urban, low- vs high-performing when it comes to misclassification errors). Two or three rapid HIV testers from each facility were invited to participate in an interview. Twenty-eight rapid HIV testers (20 women and eight men) agreed to do so and received a t-shirt as a token of our appreciation. Seventeen of the testers were employed as HIV counselors, whilst 10 were registered nurses. One participant worked as a lab technician. Eighteen of the testers worked in hospital settings, whilst 10 worked in smaller decentralized health clinics. Twenty participants worked in what we had classified as low-performing health facilities, whilst six worked in high-performing health facilities. We also carried out participant observations in four of the health facilities to observe how rapid HIV testing was carried out in the day-to-day clinical context. One week was spent at each health facility, with observations carried out from Monday through Saturday, 9am to 4pm. This amounted to 24 observational visits.
The rapid ethnographic assessment was led by the first and second author and carried out by a team of five experienced qualitative local researchers who are part of the Manicaland Center for Public Health Research, a long-standing HIV research initiative based in eastern Zimbabwe, which conducted the misclassification study. Having worked on numerous projects in the area for years, the local researchers were already familiar with several of the study sites and have excellent skills in building rapport with clinic staff and patients. We knew our presence might lead to some unease among clinic staff, especially since performance-based testing had been introduced in some sites in Zimbabwe, which could significantly influence testers' interview responses and behavior during observations. We therefore carefully planned the study by discussing an appropriate set of methodologies in a national stakeholders planning workshop for the Zimbabwe HIV surveillance survey and with the Manicaland Center's fieldwork manager and qualitative research team. We then held an intensive training course for the researchers with particular emphasis on discussing strategies for putting our research participants at ease and building rapport. These involved carefully explaining that we were part of an independent research center and not connected to the government, and that our research was not concerned with assessing performance, but with testers' practices and experience of administering rapid HIV tests. We also decided against mentioning that we had selected our research sites based on their high or low performance and instead highlighted that we were interested in learning about testers' experience of doing this difficult and important work and signaled our understanding of the challenges of working in high-pressured environments. 
The research team was largely based on site, in hospital or local bed and breakfast accommodation, and took their tea and lunch breaks together with the clinic staff, thus providing spaces for socializing with research participants and making sure that their conversations went beyond workrelated matters. The second author participated in the first two days of fieldwork and debriefed and discussed challenges with the research team.
In our training we also highlighted the importance of identifying tensions and hierarchies among HIV testing staff and between testers and their superiors. On one occasion, a clinic's matron who oversaw the HIV testers initially appeared reluctant to accommodate the researchers in her clinic, but our careful introductions, combined with presenting multi-level authorizations and support letters for our study, convinced her to welcome us; she then facilitated access to all departments and rallied her staff to support the study. Her major concern about the researchers' presence in the clinic seemed to be the strain our study would place on staff time. Importantly, her initial concerns did not appear to have filtered down to the HIV testers, who were keen to talk about the challenges they encountered in their work life. Our topic guides were designed to encourage discussion about the entire realm of rapid HIV testing, starting from general information about the clinic, procurement, transport, storage, and dispensing of test-kits, to quality assurance checks and the practice of testing, and to relationships with colleagues, superiors and clients. Rather than focusing on individual testers' errors, we asked them to tell us about what they thought might be the main reasons for misclassification of results, and only then asked whether they knew of any examples where this had occurred. While testers were clearly reluctant to talk about specific mistakes they had made, they often talked about instances where their work contexts made it impossible to follow the testing script. This was confirmed through our observations, which showed clearly how testers initially tried to explain every step they took in the testing process, yet often had to pause their explanation as they were drawn into responding to other time-sensitive issues, issues they would also likely face during their rapid HIV testing practice.
We have reported on these time pressures elsewhere (Skovdal et al. 2020).
Interviews were conducted primarily in Shona, except when participants preferred to be interviewed in English. Interviews were conducted at the health facilities and took an average of 60 minutes. While interviews were largely carried out one-to-one, the researchers often met during the day and debriefed and triangulated their findings at the end of each day, thus allowing for an iterative approach to data collection. The researchers also wrote detailed field notes on the interview set up, process, emerging themes, and questions to be followed up on in the next interview. The interviews were digitally recorded and transcribed into English.
The participant observations were steered by an observation guide focusing on the HIV testing practice, with attention paid to actors, objects, communication, as well as the spatial and temporal context of rapid HIV testing. All observations began with a tour of the facilities, with a particular emphasis on the testing rooms, the pharmacy, and the drug storage room. This was followed by a week of observing HIV testing practices in different departments, including the antenatal clinic (ANC), the HIV testing and counseling department (HTC), and the opportunistic infection/antiretroviral treatment department (OI/ART). Several different testers were observed in each health facility, including nurses and primary counselors who had been trained in applying rapid HIV tests.1

All interview transcripts, field notes and researchers' interview and observation reflections were stored in secure password-protected locations. The material was anonymized and coded thematically, based on our interest in understanding misdiagnosis (in)visibility and how healthcare workers determine authenticity and errors. Our analysis produced the following three themes: the invisibility of misdiagnosis; proxies for a reliable HIV diagnosis; and proxies for the visibility of HIV misdiagnosis (see Table 1). To elaborate on the themes and to further understand what is at stake, we draw on our theoretical framework in our analysis of our findings.

Uncertainties in point-of-care (mis)diagnosis
Before presenting the way in which HIV testers discuss misdiagnosis and determine the reliability or errors of their HIV testing practices, we note that rapid HIV testing is a carefully orchestrated practice, which is performed without any difficulties most of the time. During our clinic visits we repeatedly observed HIV testers who appeared experienced, and whose testing practices were aligned with the test-kits' SOPs and the national HIV testing strategy (the script):

On the first day, I was in the testing room where three HIV counsellors work together. At this particular moment, there was only one counsellor in the room, and we talked about life and work. She expressed that there are some days, like Mondays and Tuesdays, which are busier than others. I noticed that the testing area was set up in a corner; there was a table where there were different HIV tests, test 1A and 2A. There was also hand sanitizer, cotton swabs, a box of gloves, and so forth. The testing area was strategically positioned in a corner to avoid direct sunlight. On the walls right above the testing area, there were some charts on standard operating procedures and algorithms. From mid-morning up until the afternoon, it was quite busy and there were many patients who were coming in for testing. During the observations, I noted that the tester started by recording the patient's details in the register and then went on to write the patient's number on the test-kit that she would use. She would then clean the middle finger or the ring finger and prick it for blood collection. After this, she would collect a sample of blood and put it on the test-kit before adding one drop of buffer and setting it aside to wait for the results to come out. After 15 minutes she would read the results, immediately record them in the register and then issue them to the client. She repeated the same process for all the clients that she attended to during this period. (Field notes)

This field note observation details the HIV testing process and illustrates how the script (Akrich 1992) for using rapid HIV tests facilitates a standardized process, which, for the vast majority of HIV testing scenarios, contributes to accurate diagnoses. Healthcare workers are reminded of the script all the time, as both SOPs and the national HIV testing strategy (see Figure 1) appear on the walls where tests are carried out.

Table 1. Organising and basic themes.
A) The invisibility of misdiagnosis: 1. Never witnessed a case of misdiagnosis; 2. It does not happen here.
B) Proxies for a reliable HIV diagnosis: 3. Trust in the script; 4. Testers' care in administering the test; 5. Clinical observations and client reports.
C) Proxies for the visibility of HIV misdiagnosis: 6. When deviance from the "script" is noticed; 7. When clients come back for re-testing; 8. When clients get tested at other clinics with different outcomes.

The invisibility of misdiagnosis
When we asked the HIV testers about their experiences of issuing a wrong diagnosis, more often than not the question had to be repeated or explained further, leading to an almost uniform response: "Haa I have never faced that," "Haa I think here we haven't come across that," "I have never come across such a scenario," "Uhm so far I have never seen that," "unless if I did it unknowingly. I am not aware of anything like that." Many of these responses came from HIV testers at health facilities with worrying levels of misdiagnosis, illustrating their unawareness of the problem. According to our informants, misdiagnosis is a rarity. One respondent from a clinic with relatively high rates of misclassification of HIV status was adamant that it no longer happens; if it happens, it happens to other people, at other health facilities, perhaps because they are busier:

I would be lying if I said that misdiagnoses are still happening. I don't know, but they are no longer happening here, maybe if there's an institution where they get many people testing positive at the same time maybe, but here no.

This certainty that misdiagnoses do not happen to them, or at their health facility, may of course reflect a social desirability bias in their responses, but it is also, as we argue in this article, a reflection of their trust in the test-kits and associated scripts and of the fact that misclassifications are invisible to them. We, as researchers, know that mistakes happen in their clinics, yet they appear wholly unaware. Only a handful of our respondents could give concrete examples of misdiagnosis they had witnessed or heard about. Nonetheless, all our respondents, when asked to reflect on their HIV testing practices, were able to offer detailed accounts of their experiences, and in so doing, articulated proxies of certainty or uncertainty, which helped them determine the reliability of test results.

Proxies for a reliable HIV diagnosis
We identified three pathways by which healthcare workers authenticate the reliability of their HIV classifications, namely i) trust in the script; ii) testers' care in administering the test; and iii) clinical observations and client reports. The first relates to their trust in the script itself and the authentication processes embedded within it. Our participants unanimously spoke about how errors cannot occur if they follow the SOPs and the HIV testing algorithm. A counselor from a mission hospital with a relatively high number of misclassifications explained that mistakes cannot happen "if we are using the kits as we are supposed to and label those kits with patient names." Two questions generated detailed accounts of their trust in the SOPs and HIV testing algorithms. When asked about their experiences of misdiagnosing, participants often responded by detailing how their testing practices were aligned with the SOPs or the testing algorithm, thus making mistakes impossible.
Haa I have never experienced that because I will be dealing with one patient at a time; and the moment that I put their blood sample on the test kit, I mark it with a number. When another patient comes in you will know that you have allocated this number to that other person, and that it must tally with what I have written in the register. (Male counsellor, age 50, high-performing mission hospital)

Is there a time when a client received a wrong test result? No. What's important is to wait for the right time to read the results. That is what's important. If you read it too early, the result will not be out yet. So, if you wait for the right time, you will get the correct result. (Female nurse, age 40, clinic)

We also asked our participants about their confidence in the test-kits and the testing process. While the participants had more confidence in some test-kits than others (determined by training levels, frequency or ease of use), the scripts surrounding the test-kits provided them with confidence. Some explained that they trusted the test results because they conducted quality controls as stipulated in the SOP, whilst others referred to the testing algorithm and the certainty it produced by allowing the HIV testers to run a second test with an alternate test-kit to confirm and authenticate the test results.
We do quality control tests every morning. First, we look at the expiry date of the test kits to make sure that they are still valid, then we conduct our quality control test, also we keep the kits safely secured in boxes away from extreme hot or cold temperatures.
Female counsellor, age 42, low performing mission hospital

If I'm not confident that is when I must follow the confirmatory test. If it confirms the results from the first test, then I should be confident.
Male counsellor, age 33, low performing hospital

A laboratory technician, working at a rural hospital, also praised the testing strategy, but underlined the importance of the individuals performing the test. He found confidence in himself, explaining, "I have confidence in me [. . .] I follow procedures each time, even if I'm busy." It is evident that for many of our participants, the reliability of the test results stems from the certainty that comes with their perceived compliance with the script.
While compliance with the testing strategy provides our participants with a level of certainty and trust in the test results, these results were occasionally corroborated by other proxies, such as their clinical observations. We observed a counselor making small changes to her testing practice, such as putting on gloves (a practice listed as necessary in test-kit SOPs, but often not adhered to), the moment a client with bodily manifestations of HIV entered the testing room.
The next client came in and sat down. He looked very sick, so the counsellor started by asking him what was wrong. He told her that he had a terrible headache. He had just received treatment at the outpatient department, but they had referred him for HIV testing as well. She offered him pre-test counselling and administered the test. This time she used gloves. She repeated the same procedure as with the previous client and as she waited for the result, she recorded the client's details in the register. The client asked if he could go to the toilet as they waited for the result, and he walked out. After about 15 minutes, he was still not back, and the counsellor read the result. When he returned, she informed him that his test result was positive and that she wanted to run a second test. She pricked the client again and collected some blood. This time she used test 2A. The client just sat in silence and appeared to be in deep thought. After about 10 minutes, she read the result and informed the client that it was positive as well. The client was silent for a while. He received post-test counselling and was referred to the next room, where he was initiated on ART.
Observation notes

It is evident that the counselor, based on her experience, made a split-second judgment about the likelihood of this person living with HIV. In this scenario, the test-kits either authenticated the HIV tester's instinct based on the bodily manifestations of HIV or vice versa. Similarly, reports from clients about their suspicions and fears of having acquired HIV also corroborated positive test results. However, if clients appeared healthy and challenged the positive test result, the technological perspective of the rapid HIV test appeared to trump the subjective lived experience of being HIV negative.
Our findings suggest that the national HIV testing strategy (Figure 1) – as a scripted technology – is not a neutral actor in rapid HIV testing practice. HIV testers showed significant levels of trust in test results when following the script, a trust that was heightened by the presence of other actors, such as clients who exhibited physical manifestations of HIV. This unquestioning trust – that "as long as we follow the testing strategy, the test result will be true" – risks hiding potential pathways to misdiagnosis (see Skovdal et al. 2020).

How misdiagnosis becomes visible
As highlighted above, misdiagnosis is largely invisible to our participants. However, just as compliance with a script – whether the testing strategy or test-kit SOPs – supports the perceived reliability of test results, so does a lack of compliance become a warning signal of misdiagnosis. Most of our participants could detail situations where an inscribed protocol could not be followed, and where uncertainty about diagnostic reliability ensued.

I think it is the test because one would have followed the SOP on the kit to the dot and still get the faint line detected. However, at times it is about the client and the antibodies in them and then in some cases when one uses too small a blood sample, the test takes forever to read. So, in such cases if one doesn't pay close attention to the time it takes to read the result, one might report a negative result when in fact it will be a positive.
Male counsellor, age 33, low performing hospital

The counselor alludes to the complex network of actors who can cause instability to the scripted practice of HIV testing: the test-kit itself, the client getting tested, and the HIV-tester. While this HIV-tester is not aware of a misdiagnosis, it is evident that uncertainty arising from instabilities in the script sharpens his gaze on the possibility of misdiagnosis. Only a few of our participants could give concrete examples of misdiagnosis. One HIV-tester, for example, noticed a novice tester deviating from the script by classifying and informing a client about their HIV negative status before the test result could be read. The test result turned out to be positive, and the misdiagnosed client had to be traced and re-tested.
There was a student tester who was on attachment, and we were in the same room; I managed to notice that there was something wrong with the results the client had been given because the time that was taken recording whilst the test was on the table and discussions happening did not tally. I then managed to ask if things were done properly and at the time the client had already gone. There was a person who was positive but was given a negative result and if I remember correctly the positive result was not given to anyone. So that meant we had to follow up on that client that was given a negative result to have them re-tested. We had to test them again and asked for forgiveness and luckily the client was understanding.
Female counsellor, age 36, low performing clinic

Again, deviance from the script was a key marker for a sharpened gaze, this time by an onlooker, underlining how the presence of more than one person can help "catch" potential misdiagnoses. While HIV-testers can catch their own potential mistakes, or the mistakes of peers, by juxtaposing their HIV-testing practice with the script, clients could also make misdiagnoses visible. For instance, misdiagnosis became visible when clients, doubtful about the reliability of the test result, sought HIV testing elsewhere and returned to inform the HIV testers about discordant test results. One HIV tester recounted a story about a couple who sought a second opinion on their test results by getting tested elsewhere, and again at their clinic, and how this revealed misdiagnosis:

That [misdiagnosis] happened a long time ago in this hospital but myself personally I haven't had such a case. However, it happened to a particular tester because they had not labelled their test kits and the clients were given the wrong result, they were given a positive result. [. . .] the couple went and got tested at a different facility and they tested negative, they then came back here after three months and got tested again and the result remained negative. The woman then pointed out that they once been tested here and produced the card they used at the time, which was indicating a positive. I think this is caused by not labelling the test kits.
Female counsellor, age 32, low performing mission hospital

The initiative of the couple to get re-tested, as well as the physical card demonstrating previous diagnostic classifications, not only made misdiagnosis indisputably visible to the HIV tester, but also illustrates some of the informal procedures that clients adopt to ascertain the reliability of test results. Within the account runs a parallel story, namely her explanation for the misdiagnosis, which centers on deviance from the script. The next quote corroborates these observations but adds that, in some situations when rapid HIV test results are contested, laboratory tests may be organized to confirm them.
I'm not very sure. I wasn't the one who was testing that day but somebody else was testing that day. I don't know what happened. I don't know but from the way that I was saying earlier on that we lay our kits for kit number one, we write the name and the number and so on haa so I don't know in terms of reading the kits and interpreting the results what exactly took place for someone to be given a wrong test result. But it was corrected because we had to make a follow up on those patients and it was observed that the patients that had gotten tested one after the other had their results swapped. Yes, the one, aah not the one who had been given a positive result didn't accept it then we had not yet started this retesting process before initiation. They didn't accept it and so they went to get tested somewhere else, after going to get somewhere else they went to get tested at the general ward and tested negative. And then that patient told them that they had been tested and result was positive and had their blood sample taken even to the laboratory and the laboratory result was negative. Then the laboratory people had to make a follow-up that's when we noticed that two patients had been given a wrong test result and we had to contact the next person who followed this one and noticed that it was them who had tested positive then we asked for forgiveness before we initiated the other person. The good thing is I think this scenario happened within a week, so it didn't really give us a challenge because at least it happened within a week and the mistake had been rectified.
Female nurse, age 45, low performing clinic

The determination of one client to get re-tested not only made his misdiagnosis visible but allowed healthcare workers to trace the error. The tracing revealed that test results were erroneously swapped, and another misdiagnosis was uncovered in the process.
Sometimes it was not deviance from the script that served as a proxy for misdiagnosis, but clients' circumstances. For instance, couples with discordant test results raised suspicion. Because of the assumption that couples share the same HIV status, discordant test results were a marker of potential misdiagnosis. One HIV-tester described how discordant test results amongst couples immediately resulted in laboratory testing:

Some clients like a married couple: like one may test positive then the other tests negative, it happens. That's where we will draw blood for DNA and everything because there is nothing that we can do and if you use RDT the same result will still come so we say the next laboratory will do it.
Male lab technician, age 58, low performing hospital

It is evident that HIV-testers draw on many kinds of knowledge and experience to determine diagnostic reliability, but compliance with, and deviance from, the national testing strategy or test-kit SOPs (the scripts) were key markers, either instigating further testing or providing explanations for misclassifications.

Point-of-care HIV testing as a scripted process
Given our own difficulties in ascertaining the reliability of ELISA test results, we set out to explore how rapid HIV testers – in the absence of the kind of feedback loops we could pursue – develop certainty about, or put into question, the authenticity of their diagnoses and HIV status classifications. We did this with an interest in making visible the invisibility of misdiagnosis and contributing to the evolving sociology of diagnosis (Armstrong and Hilton 2014). In the process, we learned how this invisibility is intrinsically linked to the (in)stability of the network that forms part of the scripted rapid HIV test practice, shaping trust or uncertainty in test results. Schubert (2011:853) argues that diagnostic technologies come with large degrees of ambiguity and uncertainty and notes: 'Diagnostic procedures then cannot create certainty by themselves, but certainty must be created through practices of "making sure".' Many of our interviewees were by and large convinced that misdiagnoses and misclassifications do not happen, or are at most rare occurrences. This certainty arose from the absence of feedback loops that could identify errors or authenticate test results: no systems were in place to detect potential misdiagnoses and inform testers about them. Because misdiagnoses are inherently invisible to them, they can get on with their rapid HIV testing in good faith that the HIV tests are stable technologies. The invisibility of misdiagnosis, coupled with the comfort and strong sense of security offered by the scripts surrounding the test-kits, heightened their trust and certainty in the authenticity of test results. The dominant narrative was that if the script was followed, mistakes could not happen. Reflecting their trust in the scripted HIV testing practice, following the script was a key proxy for authenticating results.
This echoes recent observations by Pienaar, Petersen, and Bowman (2020), who found that the process of producing evidence for a test result helped some Australian healthcare workers cope with anxieties related to diagnostic dilemmas. Nonetheless, uncertainties did emerge from our interview material when HIV testers were invited to speculate on potential sources of misclassification, and when concrete examples of misdiagnosis were shared with us, as elaborated in Skovdal et al. (2020). However, as evidenced in this article, these accounts also revealed different proxies and pathways to put into question the reliability of diagnoses and HIV status classifications, or to develop certainty by "making sure."

Reliance on non-lab "data" for "making sure" or questioning test results
Clients whose bodies carried surrogate markers of HIV provided HIV testers with further clues and trust in the authenticity of a positive test result. Clients' circumstances, or their suspicions about the risk of having acquired HIV, also corroborated positive test results. While such markers appeared to help testers authenticate the test results, they may also inadvertently contribute to diagnostic biases, if for example a healthcare worker assumes a first positive test result to be correct (because of other proxies) and refrains from running a confirmatory test (Skovdal et al. 2020). Similarly, not following the script was a key proxy for questioning results and explaining or accounting for misdiagnoses. Some of our interviewees explained that they had observed others deviate from the script, which made them question the authenticity of results. The data also revealed instances where clients who were skeptical about their test results had sought re-testing and confronted the rapid HIV testers with the discrepant test results. This mirrors the findings of Corbett (2009) that consumers of HIV tests may also experience ambiguous outcomes. Clients who got re-tested either did so at another clinic or returned to the same clinic another day. When clients returned with reporting cards to prove discrepant results, HIV testers often ran another test, confirming that their initial test was erroneous and revealing the instability of the rapid HIV tests to the testers. Timmermans (2015), in his research on how standards are used in clinical exome sequencing for patient diagnosis, also notes clinicians' reliance on scripted standards, and the need for trust therein. He too finds that clinicians draw on different sources of information or seek out solutions when diagnostic technologies produce anomalous or unexpected results.
He coins this process reflexive standardization, stipulating that trust and certainty in a diagnostic result are developed not through a single standard, but by reflexively considering multiple standards as well as local practices and observations (Timmermans 2015). The HIV testers in our study arguably engage in reflexive standardization to determine their trust and certainty in test results when they consider their compliance with, or deviance from, scripted standards, such as the national HIV testing strategy or test-kit SOPs, as well as the other, more client-focused markers and proxies discussed above. Our findings thus corroborate previous studies noting that diagnosis is a process that involves ongoing judgment by the involved healthcare workers to overcome uncertainty (Pienaar et al. 2020; Schubert 2011).
Our observations are constrained by a couple of methodological limitations. First, our cross-sectional design and rapid ethnographic approach may only have captured the experiences and (performative) practices of our participants at one particular moment in time. Studying their experiences and practices in more depth, and capturing change over time, would be useful, particularly as the EQA program in Zimbabwe expands. Second, and relatedly, the design left room for social desirability bias. Some participants may have acted or represented themselves in a particularly positive way to protect either their own reputation or that of their organization.

Conclusion
Our results illustrate that rapid HIV testers in Zimbabwe have only limited opportunities to develop certainty about, or question, the reliability of diagnoses above and beyond their compliance with, or deviance from, the testing script and a few experiential proxies. This observation is no idle concern; it is, rather, an ethical issue that lies at the heart of much ongoing debate about who and what is to blame for issuing wrong test results, and it opens a discussion of potential ways to tackle the problem. For instance, feedback loops for point-of-care testers could be introduced, whereby samples of tests (both negative and positive) are sent off for laboratory testing, with results about potential misclassifications fed back to the testers to make them aware of potential misdiagnoses. Existing EQA procedures could be used in this way: already, some facilities send a small number of randomly selected samples to the lab for re-testing at regular intervals. This procedure could be expanded and a feedback mechanism created whereby testers would be informed if any of their test results diverge from the lab results. However, such procedures are not easily implemented on a large scale. They require significant added investment and administrative labor: in the case of EQAs, more laboratory tests need to be carried out, and reports created and passed on to line managers, who will then need to discuss them with individual testers. This raises questions about feasibility in a context that is already resource-strapped.

Note
1. All point-of-care HIV testers in Zimbabwe have passed a two-day training course (covering 14 modules, including personal safety, specimen collection, rapid testing algorithms, quality assurance, proficiency testing, stock management, documentation and reporting, and professional ethics) and have been accredited with a certificate of competency. However, not all HIV testers receive up-to-date training when a new national testing strategy is being implemented or when new test-kits are introduced, an issue we discuss elsewhere (Skovdal et al. 2020).