Reimagining the White surveillance gaze: A practical theological proposal for repentant and solidaristic engagement

ABSTRACT This surveillance gaze is White in the sense that while it may, on occasion, intentionally target people who are non-White for special scrutiny, this gaze comprises operations and experiences of surveillance that assume Whiteness as normative and unremarkable. It raises barriers which are of no, or minimal, hindrance to White people. This article draws on examples and theories of racial bias within Artificial Intelligence technologies to identify dataism (a naïve trust in data) as a significant dimension of the problematic gaze. Using Stoddart’s cruciform Christian theology of surveillance and the preferential optic for the (digitally) poor, in conjunction with Trozzo’s theological critique of data and Pattison’s theological notion of enfacement, proposals are made for dismantling the White surveillance gaze. These include acknowledging biases in White theology and imagining a redeemed White gaze as a paradigm of repentant and solidaristic surveillance. The article concludes with recommendations for enhanced practice by Christians who are developers, users, and subjects of surveillance technologies.


Introduction
Before the ubiquity of digital technologies, when we learned in a news report that suspects had been 'under surveillance', we broadly understood the practice of covert observation by the police of targeted individuals. Today, surveillance is widespread in everyday life and directed towards whole populations about which there is not necessarily suspicion of nefarious activities. As David Lyon's definition ably captures, surveillance encompasses, 'the operations and experiences of gathering and analysing personal data for influence, entitlement and management' (Lyon 2018, 6).
The White surveillance gaze is a term that names those operations and experiences that are racially-inflected. Although 'race' is a socially-constructed, not a biological, category, it is deployed and encountered as a powerful categorisation that impacts people's lives. For this reason, I will refer to 'race' in this article. This surveillance gaze is White in the sense that while it may, on occasion, intentionally target people who are non-White for special scrutiny, this gaze comprises operations and experiences of surveillance that assume Whiteness as normative and unremarkable. It raises barriers which are of no, or minimal, hindrance to White people. This is not at all, however, to suggest that race is the only way in which surveillance is differentially distributed across society; economic status, gender, religion, sexuality, and citizenship are often in play when it comes to how individuals and communities experience surveillance.
In this article, I describe racial bias in Artificial Intelligence (AI) surveillance technologies, giving most attention here to images of people's faces. (In practical theological terms, I name experience through attending to the experiences of people who are disadvantaged by a White surveillance gaze). I use theories of the gaze, and the White gaze in particular, in conjunction with a socio-technological model to identify the crucial part that naive trust in data (dataism) plays in accentuating racial bias in this field. I confront dataism by framing a theological critique of surveillance that proposes cruciform rather than dominating paradigms of God's watching. Stephen Pattison's theological notion of enfacement enables me to push towards practical steps for dismantling and then redeeming the White surveillance gaze as modest, repentant, and in solidarity with those that gaze has hitherto oppressed.

Describing the White surveillance gaze
Privileging of White-normativity is evidenced in specific devices. The early days of colour photography showed a 'positive bias toward lighter skin tones' despite colour-balancing being presented as neutral (Benjamin 2019, 100). More recently, a 2009 webcam could pan to follow a White face but struggled to do so for individuals with dark skin (Benjamin 2019, 108). In 2017, African-American users of the online photo-enhancing FaceApp found it lightening their skin tones because the underlying training data was dominated by White European faces (Morse 2017). Safiya Umoja Noble tested Google's system of generated images based on search terms. It was the heavy predominance of images of White Americans, and the prevalence of pornified images returned on a simple 'Black girls' search, that convinced Noble that this was 'reinforc[ing] the superiority and mainstream acceptability of Whiteness as the default "good" to which all others are made invisible' (Noble 2018, 82).
Research by Joy Buolamwini and her colleagues has found racial bias in commercial gender classifiers, where lighter-skinned people were more accurately classified than were darker-skinned individuals (Buolamwini and Gebru 2018, 8). A different form of discrimination in facial recognition technologies arises when, aware of bias in the datasets used to 'train' AI systems, designers have made efforts to populate them with supplemental or highlighted data from under-represented categories of people. The methods can be ethically questionable: in one case, a company signed a deal with the government of Zimbabwe to harvest millions of Black people's faces using access to the state's CCTV and other facial database systems without, it is claimed, seeking the active consent of the individuals being exploited for the improvement of a commercial technology (Raji et al. 2020, 148). In 2019, the US Department of Commerce's National Institute of Standards and Technology reviewed some facial recognition technologies and found that false positive identification rates 'were highest in West and East African and East Asian people, and lowest in Eastern European individuals. This effect is generally large, with a factor of 100 more false positives between countries. However, with a number of algorithms developed in China this effect is reversed, with low false positive rates on East Asian faces' (Grother, Ngan, and Hanaoka 2019, 2). The context in which technology is developed really does seem to matter; the predominant racial features generate lower false positive rates. The Algorithmic Justice League, of which Buolamwini is a founding researcher, campaigns in the US for much greater regulation of facial recognition technologies, acknowledging not only the legal and technical but also the societal challenges that need to be addressed (Learned-Miller et al. 2020).
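The disparities reported in these audits only become visible when error rates are computed separately for each demographic group rather than as a single aggregate figure. The following minimal sketch (with invented group labels and toy data, not drawn from any of the studies cited) illustrates that kind of disaggregated false positive audit:

```python
# A minimal, hypothetical sketch of a disaggregated audit: false positive
# rates are computed per group instead of as one aggregate accuracy score.
# Group names and records are invented for illustration only.

from collections import defaultdict

def false_positive_rates(records):
    """Compute the false positive rate per demographic group.

    Each record is (group, actual_match, predicted_match); a false
    positive is a predicted match where no true match exists.
    """
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # true non-matches per group
    for group, actual, predicted in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

# Toy data: a system that looks accurate overall can still show a
# tenfold disparity once results are broken down by group.
records = (
    [("group_a", False, False)] * 98 + [("group_a", False, True)] * 2
    + [("group_b", False, False)] * 80 + [("group_b", False, True)] * 20
)
rates = false_positive_rates(records)
print(rates)  # group_b's false positive rate is ten times group_a's
```

A single aggregate error rate over this toy data would conceal the gap; the per-group breakdown is what makes the bias nameable, which is precisely the methodological point of the audits discussed above.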
Racial bias in facial recognition systems is profoundly concerning, but the White surveillance gaze is not limited to that domain. There are prenatal health care systems in the USA in which surveillance of women's bodies reproduces racialisation (Bridges 2011). In January 2021, the Dutch government resigned after it was discovered to have used an algorithm to predict who was more likely than others to wrongly claim child welfare benefits; a racially-discriminatory process that, without evidence of fraud or right to appeal, targeted parents of dual nationalities and ethnic minorities (Heikkilä 2021). People of the Windrush generation in the UK were failed by a lack of adequate record keeping and documentation on the part of the Home Office; a White surveillance gaze that rendered them invisible (House of Commons 2020). A culture of fear in the aftermath of the 2001 attacks on the Twin Towers in New York and on the Pentagon intensified a racially-inflected gaze of suspicion towards people stereotyped as 'terrorists'. In parallel to the trope of being caught 'driving while black', people of particular ethnicities had to contend with accusations of 'flying while brown' (Chandrasekhar 2003).
Providing one's data, particularly one's name, can be a challenge that those who conform to Anglo-Saxon naming conventions need never consider. Online systems assume that a person has only one name, although Chinese people may have different names in different circumstances (FBIIC 2006, 56), and Sikhs do not have a family name or surname. White speakers benefit from automatic speech recognition technologies that can be more accurate for them than for speakers who are Black (Koenecke et al. 2020).
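These naming assumptions are typically encoded in quite ordinary validation logic. The following hypothetical sketch (the validators and name are illustrative, not taken from any real system) contrasts a form check that presumes everyone has a separate surname with one that does not:

```python
# A hypothetical sketch of how an online form can encode Anglo-Saxon
# naming assumptions. The strict validator rejects anyone without a
# separate surname; the inclusive one accepts a single free-text name.

def strict_validator(first_name, surname):
    """Assumes everyone has exactly a first name and a family name."""
    return bool(first_name.strip()) and bool(surname.strip())

def inclusive_validator(full_name):
    """Accepts any non-empty name, however it is structured."""
    return bool(full_name.strip())

# A person without a family surname fails the strict check but passes
# the inclusive one; the barrier exists only for some users.
print(strict_validator("Harpreet", ""))  # False: the user is locked out
print(inclusive_validator("Harpreet"))   # True
```

The design choice here is the point: the barrier is invisible to anyone whose name happens to fit the form's assumptions, which is exactly the unremarked privileging the article describes.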
To reiterate, the White surveillance gaze is not always intentionally directed against other ethnicities but can be neglectful of the assumptions built into its data sets, thereby privileging people who are White.

Understanding the White surveillance gaze
The White gaze
The gaze is a looking practice and, as such, is 'a complex interaction that often involves a technology on or through which we look' (Sturken and Cartwright 2009, 103). In visual studies, the gaze is a way of referring to how a viewer 'is situated in a field of meaning production that involves recognizing oneself as a member of that world' (Sturken and Cartwright 2009, 104). This means that the person who gazes is constituted by being addressed by that upon which they are gazing. To be addressed by an image is to encounter the responses that the image invites from a particular type of viewer (Sturken and Cartwright 2009, 105). The person who gazes responds as well as being addressed by an image (or, more strictly, by the field of the gaze; the historical and social contexts in which an image sits).
When it is another person upon whom we gaze (not merely a cultural object), the gaze is yet more complex. We are constituting one another through that gaze, which may be anything but equal. When I am gazed at, I constitute myself in relation to that gaze. I might accept the subordination to the more powerful gazer and render my own subjectivity as one of submission. Alternatively, I might resist and develop my subjectivity as a rejection of the submission. Rarely will this be pure submission or rejection, but rather a dynamic interplay within the constraints and opportunities available. As Marita Sturken and Lisa Cartwright conclude: 'It is by now widely agreed that identification and power in any field of the gaze is always multiple, complex, and fluid and does not necessarily follow from one's identity, given or assumed. Just as human subjectivity is complex, fragmentary, and subject to multiple forces, so too are identification and power in looking' (Sturken and Cartwright 2009, 123). Frantz Fanon conceptualised the White gaze in the 1950s as one that cuts open the everyday experience of Black people in an exercise of dominance that sets up two worlds of behaviours. Black people are excluded from the White world, and White people demand that Black people conform to White constructions of Blackness (Fanon [1952] 2008, 94). The Black man, in Fanon's analysis and in the language of his time, 'is unaware of it as long as he lives among his own people; but at the first white gaze, he feels the weight of his melanin' (Fanon [1952] 2008, 128). Whiteness and the White gaze have been complexified in the past fifty or so years to offer a thicker account such as that of Ruth Frankenberg. She identifies Whiteness as, variously, a 'location of structural advantage' and 'standpoint' that is 'often unmarked and unnamed, or named as national or "normative" rather than specifically racial' (Frankenberg 2001, 76).
Frankenberg makes the important caveat that this site of advantage or privilege is inter-sectional in the sense that factors such as socio-economic status, gender, sexuality, and religion will also be significant for people who are White and those who are not White. However, as she correctly observes, these identities and factors 'do not erase or render irrelevant race privilege, but rather inflect or modify it' (Frankenberg 2001, 76). People who are, for example, gay may experience discrimination from people of all races, but Black people within a gay community may experience a gay White gaze that further oppresses as well as a straight White gaze that is intensified because the gay person is Black.
George Yancy talks of 'white lived space … structured by whiteness' in which white identity is formed, 'shaping how one sees and how one does not see the world' (Yancy 2012, 134) (emphasis in original). Such is one of the major challenges for those who are White: the White gaze looks both at and through people who are not White. To not be seen can be as discriminatory as being seen in a wrong way. At the very same time, as Richard Dyer argues, Whiteness largely operates as the power that 'is maintained by being unseen' (Dyer 2017, 45). Whiteness is very much seen by those who are not White, but Dyer's point is that Whiteness 'needs to be made strange' and thereby named and noticeable. By this, he contends, Whiteness can be 'dislodged' from its position of 'centrality and authority' (Dyer 2017, 10). In Cheryl Matias's terms, White people need to decolonise their minds rather than retreating to the familiar ground that comforts a colonial, racist construction of reality and their identity (Matias 2016, 164). This is what Robin DiAngelo calls a 'White equilibrium' that serves as a 'cocoon of racial comfort' (DiAngelo 2019, 105). William Aal's distinction between impact and intention, whilst not condoning White people's reluctance to engage with Whiteness, creates emotional as well as intellectual space for someone to accept responsibility for the damaging consequences of their identity even if they are not culpable (Aal 2001, 306).
Racial bias in technology is understood in different ways. To Yarden Katz, AI 'serves technology by advancing its imperial and capitalist projects' (Katz 2020, 9). In this view, neoliberal economics and their accumulation of land through dispossession are interwoven with racially-inflected political strategies of mass incarceration and, interestingly for our discussion, surveillance technologies. AI, in Katz's understanding, serves powerful interests by making statements about what constitutes human intelligence, particularly its limitations, and by doing so AI reinforces 'racialised and gendered models of the self that are falsely presented as universal' (Katz 2020, 10). In other words, the 'intelligence' which is rendered 'artificial' is not only an attenuated understanding of what it means to be 'intelligent', but this form of decision-making is, to again use Dyer's terms, 'unmarked, unspecific, universal' (Dyer 2017). Furthermore, in Katz's view, the Whiteness of the institutions and experts that shape AI re-inscribes privilege, and thus: 'AI cannot simply be lifted from its historical and institutional context and claimed as a liberatory technology' (Katz 2020, 11).

Surveillance as socio-technological systems
Surveillance ought never to be treated merely as a technology, as if, in popular parlance, 'it is just a tool'. This repeats spurious arguments such as 'guns do not kill people, people kill people'. The mistaken assumption is that people and devices are not mutually related. People shape the design of tools and tools shape their designers. In other words, having access to surveillance devices gives more opportunities for surveillance, but, most significantly, that access changes people's moral and spiritual challenges and opportunities. Human relationships are changed; relationships with others and with the self. Devices are always socio-technological, never merely technological. It is not simply that people are faced with more, and sometimes more complex, decisions around technologies but that the person decides within frameworks, dispositions, attitudes, and assumptions that develop within a technological paradigm. David Lyon talks about a 'culture of surveillance' (Lyon 2018) and Jacques Ellul challenges la technique (as prioritising efficiency over other moral criteria) (Ellul 1965).
An influential, although not necessarily integral, element of a socio-technological perspective is dataism; a belief in the objectivity of quantification and uncritical trust in the institutions that collect and analyse data (van Dijck 2014, 198). Popularly, dataism manifests as the view that, 'it's data so it must be true', an outlook closely aligned to the populist mantra 'we don't need experts' because 'the data speaks for itself'. Dataism is most definitely not the same as valuing evidence-based practice. Dataism is an uncritical approach to data, how data is gathered, who collects data, and how and who processes data. Dataism is, therefore, a partner in crime to racial bias in technology.

Dataism and hope in God
We can break down dataism into a number of perspectives, not all of which are held by someone who is a dataist: (i) 'life is data', a position in which human experience can be rendered by the measurement of biological algorithmic processes; (ii) 'we have the data', a view that data is both necessary and sufficient for decisions to be made about responses to human action; (iii) 'data is data', a stance of trust that the systems used to gather and analyse data are unbiased; (iv) 'the data analyst says …', a stance of trust that the institutions that deploy data-gathering systems and analyse data are trustworthy in the sense of being unbiased; and (v) 'the data speaks for itself', a populist belief that data analysis is simple.
Perspective (i) is a strong dataism that is held by many transhumanists (see, for example, Harari 2017), who might well reject stances (iii), (iv), and (v). Perspective (ii) could follow in a strong form as a consequence of (i). In this strong form of both necessity and sufficiency, transhumanists may advocate that artificial intelligence systems, given the complexities of algorithmic correlations (and some causations), can make superior decisions to those of much more limited (in terms of computing capacity) human brains.
Stances (iii) (data is data) and (iv) (the data analyst says …) are naive given what we know of the corruptibility of institutions where discrimination can become embedded in the ethos, narratives, and unspoken assumptions of behaviour expected of employees (Pattison 1997). A good scholar and data-interpreter must be a sceptic by thinking critically and testing different ideas. A cynic will jump to the conclusion that no-one can be trusted to bring truth to the table, so, as a result, she falls back on her gut reaction. Here is where the slide into populism, stance (v) (the data speaks for itself), takes effect. A cynic has, it appears, lost trust in institutions to ever tell the truth.
Christian hope in God confronts both dataism and a more critical approach to data. Eric Trozzo challenges the notion of 'calculable identity' (Trozzo 2019, 102), in which big data reduces consistence to mere existence (Trozzo 2019, 132). Calculating probabilities and segmenting individuals into categories of anticipated characteristics, attitudes, or behaviours is an attenuated approach to life. Probabilistic or algorithmic reasoning 'holds no space for hope in the future, for an unforeseeable possibility to rupture the current trajectory of probability' (Trozzo 2019, 131). Trozzo compares this reductionism with Jürgen Moltmann's distinction between adventus and futurum. Adventus is hope in the kingdom of God that is coming, whereas futurum merely projects from the past into the future (Trozzo 2019, 133). I had made a similar point some years before in arguing that Moltmann's image of the cross casting a shadow backwards from the future demands that no-one forecloses on another's identity (Moltmann 1974, 163). Whilst probabilistic algorithmic reasoning need not descend into dataism, the two go hand-in-hand rather too regularly. Trozzo is correct to identify the importance of 'resistance to the attempts of data mining to define us solely by our actions and instead rooting our identity in the infinity of Otherness' (Trozzo 2019, 134-135). I would emphasise 'the infinity of Otherness' in more explicitly cruciform terms lest pantocratic, imperial, colonising tendencies go insufficiently checked.

Cruciform surveillance
For a number of years, I have argued that a theology of surveillance needs to build on the notion of God watching, but on a critical perspective on God's watching (Stoddart 2011). Too often, this metaphor has operated in the mode of God watching from on high; exemplified in the iconography of Christ Pantocrator, ruler of the world, familiar from the domed ceilings of many basilicas. Granted, the claim is that it is Christ, not the emperor, who is the ruler of the cosmos and under whose judgement the imperial powers of the earth will be held. However, the paradigm of dominance has been found wanting in its underpinning of patriarchy and kyriarchy (lord/master-servant) by feminist theologians (Fiorenza 1998, 131) and by postcolonial theologians for its shaping of readers' representations of themselves and thus of biblical texts under the pressures of colonial powers (Segovia 2006). More specifically, a Black critique contends with the singular failure on the part of White interpreters to recognise that the way in which they handle the text 'reflects a fetishization of the domination world that the text helped create' (Wimbush 2000, 10). This has direct relevance to how Black bodies are gazed upon. Kelly Brown Douglas unpacks the construction of Blackness itself as sin (Douglas 2015, 66). In her view, this is a racialized version of natural law that provides, 'the sacred canopy for white mistreatment of black bodies' (Douglas 2015, 50). The black body is chattel. The way things are is falsely taken to be the way things are supposed to be; a fatal problem of natural law reasoning of this form. Black bodies are therefore constructed and perceived as a threat, not just as inferior (Douglas 2015, 68). Black people are thus guilty of trespassing into what is, in effect, White space (Douglas 2015, 86).
Within Christian families and congregations, 'God is watching you' can be deployed as a disciplinary threat over children (and some adults). Whilst I recognise that a belief that 'God is watching over me' has been immensely comforting for many people, I have advocated a more cruciform paradigm to better express Christ's watching over the world from the cross. Chiming with David Hollenbach, who extols the sign of the cross as opening 'the possibility of an ethics of compassionate solidarity' (Hollenbach 1996, 13), I have adopted an approach that emphasises Christ's watching over, caring for, or, I would say, surveillance, as his being in solidarity with all those who are subject to unjust monitoring and categorisation or are otherwise impacted by systems such as the White surveillance gaze.
Most recently, in The Common Gaze, I argue that asking what surveillance is for, and even more importantly, who surveillance is for, engages us in crucial questions of social justice (Stoddart 2021, 36-41). Drawing on the familiar foundational concept of liberation theology, I have argued that we interpret and evaluate surveillance in the light of God's preferential optic for those who are (digitally) poor. By this, I mean that in considering the potential (as well as actual) effects of surveillance, it is vital to put the effects on those who are already marginalised at the front of the queue for our consideration. In other words, we ask first how a system will affect the most vulnerable in society: how will it be to their advantage and disadvantage? The effects on others in society remain important, but they are not the most important criteria.
The contrast between pantocratic and cruciform surveillance is crucially important in confronting the White surveillance gaze. Within surveillance studies, it was Michel Foucault's critique of panoptic surveillance that prevailed as the primary analytical device (Foucault 1979). The panopticon was Jeremy Bentham's model prison in a doughnut configuration; guards at the centre and prisoners around the perimeter. Each prisoner could only look towards the central guard tower (enclosed as they were in partitioned cells). With the central tower in darkness and prisoners lit, a prisoner would not know when precisely he (rather than one of his fellow inmates) was being watched. For Foucault, this model described institutional surveillance in, for example, schools or military institutions (as well as prisons). Knowing that one might be under surveillance brought a disciplinary force far more efficient than constant observation. The one watching the many could, in effect, be one person when the discipline is internalised by those being watched.
A White surveillance gaze has panoptic elements, but it is distributed through non-centralised systems as a 'surveillant assemblage' (Haggerty and Ericson 2000), akin to a rhizomatic plant that has no single trunk from which branches grow. However, the White surveillance gaze bears similarities to pantocratic polity in the imperial figure at the heart of an empire. Colonisation, exploitation, and dismissal of the intrinsic value of difference are all expressed in surveillance that is racially-biased to favour White people (or at least to not disadvantage White people). As much as Christ Pantocrator features in traditional iconography and imagination, cruciform surveillance points towards a radically different model. Cruciform surveillance emphasises solidarity with those who are disproportionately and unjustly subject to monitoring and data analysis. It confronts those who benefit from the direct privileges of surveillance turned towards those who are perceived as threats. Whilst there are people who are dangerous, the White surveillance gaze is largely one of oppression by those with more power over the levers of design and deployment of advanced technological systems. In terms of the Magnificat (Luke 1:46-55), it is they whose thrones are imperilled by the in-breaking of God's kingdom. A preferential optic for those who are poor or otherwise oppressed puts surveillance justice centre stage against the White surveillance gaze.

Enfacement
It is important to appreciate that one problem with the White surveillance gaze is that it is generally non-reciprocal; the gaze cannot often be returned upon the gazer. A value in a liberation-based preferential optic lies in emphasising that acts of recognition and gaze take place within systems of structural sin and systemic injustice. Gazing is not an abstract activity, but an activity within a context, and that context includes the structures and biases which shape how we gaze at one another, and how we respond to those gazes. Stephen Pattison argues for the importance of faces in terms of 'enfacement', which is an action of 'discovering and saving' faces (Pattison 2013, 154). It is a relational activity between God and humans, and between humans. Enfacement highlights how faces are 'continually being created, sustained or violated' and that we find our own face and the face of God not in looking in the mirror but into the communities in which we encounter one another (Pattison 2013, 154). Faces can be recognised, 'tended and valued' or can be 'disregarded and lost' (Pattison 2013, 154-155). As Pattison reminds us, such seeing or failing to see the faces of others 'is a socially and culturally inflected phenomenon involving issues of power' (Pattison 2013, 165). In this way, talk about seeing the face of God is inseparable from the enfacement we offer to others. Concomitantly, disregarding the face of others diminishes, if not comes close to extinguishing, one's gaze upon God.
Where Pattison attends to the face, Brian Brock attends to the gaze. Drawing on Ola Sigurdson, Brock acknowledges the 'unavoidably reciprocal' nature of sight (Brock 2018, 537) and locates ways of seeing in participation in traditioned communities. For Brock, this is more than even learning to gaze in a Christian way or, as he puts it, to be 'traditioned in the gaze of the incarnate Christ' (Brock 2018, 543). His theology of the gaze confronts digital imaging that may enable a form of vision but one that cannot offer 'us the presence to one another on which the Christian life depends' (Brock 2018, 542). Surveillance technologies enable forms of monitoring, watching, or observing, and predicting through algorithmic processes, but do not offer the capability of relational and, for Brock crucially, Christ-formed perception: 'Ultimately the demands of Christian discernment and discipleship direct the attention of Christians to what sensors cannot capture: acts that display hardness of heart, growth and decline in sensitivity to the joys and pains of others, the signs of strained relations – the telltale signs that signal the fraying of trusting communion' (Brock 2018, 542). For Trozzo, dataism reduces consistence to existence, leaving no space for the breaking-in of God's kingdom to the trajectory of human life. To Pattison, enfacement is at stake where social and political ways of looking at an other's face, and thereby the face of God, are actions of disregard. For Brock, digital imaging is similarly reductionist in terms of how, rather than who or even what, is gazed upon.

Dismantling the White surveillance gaze
To dismantle the White surveillance gaze means first acknowledging the biases of White theology itself. As Norris observes, 'White theology has never had to consider itself a contextual theology' (Norris 2020, 65). It is significant that Norris identifies an inherent 'antagonism' towards other contextual theologies, where White theology so emphasises the role of being part of traditioned communities that perspectives from others are a threat (Norris 2020, 79). The need to appreciate just how contextual are the dominant White theologies is acute when it is the White surveillance gaze that is being scrutinised. Fear is so much part of this gaze; justifying and amplifying calls for greater monitoring of the dangerous other. Where fear itself is endemic to White theology, the task of confrontation is made more difficult. Fear may be of torment in hell, of falling away from the faith, of failing God, of dubious theological developments, of any limitation of personal freedoms by the state, or of social censure by a particular Christian community. To the extent that any particular White theology is embroiled in fear, finding the resources to name, let alone confront, a White gaze is doubly challenging.
Second, it is important to name the vantage point from which we experience any surveillance gaze. Living in Edinburgh, I cannot remember ever having seen young Black men being stopped and searched by Police Scotland. I am spared the work of having to resist the ways in which my social attitudes are shaped by what other White people observe in many other urban areas in the UK and reported, often prejudicially, in local media. White women in cities much more ethnically diverse than Scotland's capital have to contend with concerns for their safety in contexts of male violence, of which I know nothing. Media stereotyping and racism mould White women's perception of danger in ways that are different from that faced by White men.
Third, it is vital to imagine a surveillance gaze that is White in a redeemed sense so as not to collapse a critique into inappropriate White guilt. A cruciform theology of surveillance with its preferential optic for those who are (digitally) poor offers hope for solidaristic Whiteness. The gospels narrate Jesus as often sitting amidst sinners, tax collectors, and others with bad reputations (Mark 2:15-17). From such a paradigm, Jesus would be found sitting amongst people steeped in their Whiteness and facing criticism for his associations. The parable of the forgiving father who welcomes a wayward son who, in a distant pig-sty, has come to himself unsettles an audience for whom mercy to White prodigals is extravagant, yet indicative of the grace of God (Luke 15:11-32). Similarly, the rhetorical power of the parable about the rescuer coming to the aid of the man attacked on the road from Jerusalem to Jericho is retained if the unexpectedly compassionate 'Samaritan' is a White person (Luke 10:25-37). Likewise, there will be more rejoicing in heaven over one lost White person who repents than over ninety-nine Black persons who do not (in respect of Whiteness) need to repent (Lk 15:7). If redeemed Whiteness could become paradigmatic of modesty (as well as of repentance from superiority and exploitation), then White guests taking the 'last places' at a table (Luke 14:10) would have learned the possibilities of being honoured in futures not foreclosed, as is done to others in the White surveillance gaze. This is not a case of reclaiming but of reimagining Whiteness as a reparative standpoint that acknowledges the impact of Whiteness and the privileges it has embodied, whether or not individual people who are White have ever intended oppression and dominance.
White-majority churches need to model redeemed Whiteness, and there is no good reason that this should be top-down, modelled first by institutional leaders. In fact, a bottom-up demonstration of reimagined Whiteness by members of congregations would also further dismantle residual patriarchy and kyriarchy as other expressions of superiority and domination. Churches naming Whiteness is a step towards dismantling people's representations of themselves: the postcolonial summons turned specifically upon the colonial powers. Identifying White space (in both its concrete and symbolic senses) is a precursor to repenting of its restrictive boundaries. Parish churches and gathered congregations face different challenges, given that their members inhabit, respectively, local and distributed residential spaces. Seeking, and listening to, the experiences of those excluded from White space can be both a personal and a congregational commitment.
Fourth, there are very practical steps available. Some depend on seniority in the workplace, but others involve contributing to debates within organisations. Michael Walzer's reiterative process is significant in contexts where people do not share the same religious or philosophical commitments but still engage in fruitful discussion. He argues that we find confirmation of our community's truths when we see others reiterating them in their own social criticisms and articulations of what they hold to be true (Walzer 1990, 533). A Christian who is an IT professional may want to query their company's recruitment policies in order to identify and lower barriers to hiring for a workplace in which diverse racial sensibilities are more adequately represented. This will help tackle racial bias in technological design. People involved in IT development can seek out the cleanest possible data sources, ones that do not include identifiers carrying inherent bias. Such identifiers, as Matthew Nolan notes, include 'first language/language of choice, level of education, employment activity/occupation, credit score, marital status, number of social followers, among others. Ultimately, bad data creates bad models and results in biased AI' (Nolan 2020). In addition, Suresh Venkatasubramanian proposes using more diverse datasets to train algorithms and including explanations of how such algorithms make decisions so that bias can be understood (Kleinman 2017). Christian users of surveillance systems, for example police officers and educators, could usefully advocate for audits to identify and address racially-biased aspects of their systems. Christian subjects of surveillance can play a part in naming discrimination they experience on the grounds of racial identity with a view to acting in solidarity with others.
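Nolan's list of bias-carrying identifiers can be operationalised as a simple pre-training audit. The following Python sketch is purely illustrative and not drawn from the article or any real system: the field names, the applicant records, and the helper functions are hypothetical, and a production audit would also check for proxy variables that correlate with the removed fields.

```python
# Bias-carrying identifier fields, paraphrasing Nolan's examples.
# These names are hypothetical stand-ins, not a real schema.
BIAS_CARRYING_FIELDS = {
    "first_language", "education_level", "occupation",
    "credit_score", "marital_status", "social_followers",
}

def audit_fields(rows):
    """Return the set of bias-carrying fields present in the dataset."""
    present = set()
    for row in rows:
        present |= BIAS_CARRYING_FIELDS & row.keys()
    return present

def strip_fields(rows, fields):
    """Return copies of the rows with the flagged fields removed."""
    return [{k: v for k, v in row.items() if k not in fields} for row in rows]

# Hypothetical records as they might reach a hiring or scoring model.
applicants = [
    {"name": "A", "skills_score": 78, "credit_score": 640,
     "marital_status": "single"},
    {"name": "B", "skills_score": 82, "first_language": "Yoruba"},
]

flagged = audit_fields(applicants)   # fields that should not feed the model
cleaned = strip_fields(applicants, flagged)
```

The design choice here is deliberately modest: flagging and removing fields is only a first step, since bias can re-enter through correlated features, which is one reason Venkatasubramanian's call for explanations of algorithmic decisions matters alongside data cleaning.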
Dismantling the White surveillance gaze will require a hermeneutic of suspicion but not of cynicism. Any gaze traditioned by a Christian community will have to contend with the grave danger of a pandemic age in which a retreat into digital encounters exerts a strong pull. It will require effort to ensure that any gaze is not partial but encompasses the richness of encounters that are enfacing (in Pattison's terms) and comprehensive (in Brock's terms). Instead of a surveillance gaze that sees only what a person is, a dismantled and redeemed White surveillance gaze will both see who is in a device's sights and understand how those gazing are being shaped by their devices.