From object obfuscation to contextually-dependent identification: enhancing automated privacy protection in street-level image platforms (SLIPs)

ABSTRACT Street-level image platforms (SLIPs) employ indiscriminate forms of data collection that include potentially privacy-invasive images. Both the scale and the indiscriminate nature of this data collection mean that significant privacy management is required. Legal risk is currently managed through obfuscation techniques applied to certain image objects. Current SLIP object obfuscation solutions are an indiscriminate and blunt answer to a similarly indiscriminate data collection concern. A new contextual approach to obfuscation is required that goes beyond object obfuscation. Contextually-dependent identification would seek to identify the contexts, including captured objects, which can give rise to privacy concerns. It is technically more challenging for automated solutions as it requires an assessment of the contextual situation to understand privacy risk. Context-sensitive privacy detection, combined with context-sensitive privacy-by-design processes, potentially offers a risk management solution that better situates and addresses the concerns arising from SLIP data collections.


I. Introduction
Obfuscation, in its humble, provisional, better-than-nothing, socially contingent way, is deeply entangled with the context of use.1 Brunton and Nissenbaum's quote highlights the challenges that arise for street-level image platforms (SLIPs), which have emerged over the last two decades.2 SLIPs combine online mapping technologies with 360-degree, panoramic street-level views that feature inbuilt track, pan and zoom capabilities.3 While data collection techniques vary, imagery for these products is typically gathered using a fleet of vehicles equipped with specialised cameras that capture images to be 'sewn' together into a navigable interface.4 Image data collection is thus largely indiscriminate because SLIP cameras capture street-level photography, at a given point in time, including persons and other common objects found in the everyday life of global human societies.
Such indiscriminate collections of street-level imagery give rise to significant privacy concerns that require SLIPs to implement privacy risk management strategies across a vast trove of image data. Legal risks are somewhat mitigated by the 'public' nature of image data, which privacy law has traditionally accorded a lower degree of protection.5 However, it is still possible for SLIPs to give rise to legal issues under privacy torts and data protection law frameworks. Not surprisingly, then, advances in street-level image capture have attracted privacy concerns across many jurisdictions, particularly in relation to the depiction of identifiable individuals, their facial features,6 vehicle registration plates7 and residential homes.8 These complaints gave rise to a new form of automated legal risk management solution: obfuscation of partial or complete object imagery, which provides privacy protection by blurring features to make them unrecognisable online.
Obfuscation of object imagery has become the primary privacy risk management tool for SLIPs. It thus provides the type of 'better-than-nothing, socially contingent way' of resolving the complex privacy issues that arise from the indiscriminate collection of global street imagery. However, as Brunton and Nissenbaum highlight, obfuscation is still 'deeply entangled in the context of use.'9 Context, in relation to SLIP data collections and obfuscation, has two connotations. First, acts of obfuscation arise within a 'context of unavoidable relationships between people and institutions with large informational and power asymmetries.'10 In other words, context can render obfuscated individuals identifiable. The second connotation is that privacy is a 'multi-faceted concept' with a 'wide range of structures, mechanisms, rules and practices … to produce and defend it.'11 The types of privacy rights and expectations enlivened by a particular technology will be shaped by the context of use. Obfuscation is therefore but one tool in a considerably complex toolbox that is designed to provide a legal risk management solution to meet many different privacy rules and contexts.
Our paper argues that SLIPs are currently not using all the tools available in the privacy toolbox due to a restricted understanding of privacy concerns and the favouring of one convenient tool, object obfuscation, at the expense of the more complex, but comprehensive, consideration of contextually dependent identification. Part II identifies and categorises the key failings of object obfuscation that arise from the capture, aggregation, and disclosure of digital mapping imagery. Part III provides an overview of how privacy issues are generally treated under two key areas of law: privacy torts and data protection legislation. Part IV then develops contextually-dependent technical and legal solutions to shift SLIP obfuscation strategies from a 'humble, provisional, better-than-nothing' solution to one that is appropriately 'deeply entangled with the context of use.'

II. Privacy concerns and obfuscation problems
Advances in street-level mapping have attracted privacy concerns across many jurisdictions. Google Street View, the earliest and most comprehensive of SLIPs,12 has received the most attention and criticism. Following its initial launch in the US in 2007, Google was confronted with a raft of complaints from individuals, government agencies, and advocacy groups objecting to the depiction of residential homes, vehicle registration plates and identifiable individuals.13 Such concerns prompted various inquiries by government agencies and regulators, sometimes resulting in temporary bans.14 Resistance on the basis of privacy concerns was particularly staunch in Germany, where a swell of complaints and regulatory actions led to Google's complete cessation of Street View recording activities;15 a situation that persisted until recently.16 Legal grounds for objection and regulatory responses to Street View varied across jurisdictions. In the US, a spate of civil actions based on the application of different privacy torts were pursued unsuccessfully.17 In other parts of the world, regulators brought actions based on infringements of data protection law.18 Notwithstanding the activation of diverse types of privacy law, legal actions sprang from a common set of privacy concerns regarding the scale and novelty of Google Street View. Street View's global data collection programme was enormous and initially conducted without any notice to, or permission from, either individuals or governments.19 While Street View's collection of public street imagery was novel, due to its scale and ambition, the collection process itself involved variations of old practices and technologies which, in isolation, implicate privacy interests long contemplated by privacy laws, such as street photography20 and public surveillance.21
However, the unprecedented scale of Street View,22 the sheer volume of imagery captured, and the scope of its availability rendered the perceived privacy impact of the project greater than that of its constituent processes, and greater also than that contemplated by the relevant privacy laws developed to govern those processes.23 The widespread criticism prompted Google to implement certain technical and organisational privacy protections into the product.24 Google's main response to early objections involved image pixelation of collected Street View imagery to obfuscate images of certain objects, including faces and vehicle licence plates, that could give rise to recognised privacy law risks, particularly those involving data protection laws.25 Google's initial deployment was also selective rather than comprehensive across the whole Street View project, as it focussed on object obfuscation strategies in select jurisdictions to conform with local privacy laws.26 Google later acquiesced to further pressure from legislators, privacy regulators and individual advocates to adopt an automated, pre-emptive approach to the obfuscation of faces and vehicle licence plates across the globe.27 According to Google's own description of its obfuscation measures in 2009, the new system was a 'completely automatic system' that could 'sufficiently blur more than 89% of faces and 94-96% of license plates in evaluation sets sampled from Google Street View imagery.'28 The obfuscation approach involved a combination of noise and aggressive Gaussian blur which blended with surrounding background features to obfuscate targeted objects in images.29
The automated process of pre-emptive object obfuscation was not, however, failsafe. Google conceded that users were required to 'narrow the gap between automatic performance and 100% recall' by requiring complainants to self-report obfuscation errors in Street View imagery via a contact link provided in the live product.30 Jane Horvath, Google's then Senior Privacy Counsel, summed up the shortcomings of the approach, stating that 'our blurring technology is not perfect – we occasionally miss a face or license plate' and 'for the few that we miss, the tools within the product make it easy for users to report a face or license plate for extra blurring.'31 Instead of pre-emptive blurring, residential properties were blurred only after receipt of an individual privacy complaint or for national security purposes.32 Property obfuscation thus takes place post publication, after the threat to privacy has materialised.
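The noise-plus-Gaussian-blur technique described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general approach, not Google's actual pipeline; the function name, the box convention and the parameter values are our own assumptions.

```python
import numpy as np

def obfuscate_region(image, box, sigma=4.0, noise_std=10.0, seed=0):
    """Obfuscate a detected region (e.g. a face or licence plate) by
    adding random noise and then applying an aggressive Gaussian blur,
    loosely following the approach described for Street View imagery.
    `box` is (row_start, row_end, col_start, col_end); all names here
    are illustrative, not any platform's real API."""
    r0, r1, c0, c1 = box
    region = image[r0:r1, c0:c1].astype(float)

    # Add noise first, so the blur cannot simply be deconvolved away.
    rng = np.random.default_rng(seed)
    region += rng.normal(0.0, noise_std, region.shape)

    # Separable Gaussian blur: a 1-D kernel applied to rows, then columns.
    # The kernel radius is capped so it never exceeds the region's extent.
    radius = max(0, min(int(3 * sigma), min(region.shape) // 2 - 1))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    for axis in (0, 1):
        region = np.apply_along_axis(
            lambda line: np.convolve(line, kernel, mode="same"), axis, region)

    out = image.copy()
    out[r0:r1, c0:c1] = np.clip(region, 0, 255).astype(image.dtype)
    return out
```

Blending with the surrounding background, as the original system reportedly did, would additionally feather the box edges; the sketch blurs only within the detected box.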
This standard approach suffers from several weaknesses. We identify four different modes of automated obfuscation failure that have given rise to different types and levels of privacy harm (Table 1).
The first type of obfuscation failure, false negatives, occurs where the relevant object, e.g. a person, property or number plate, has not been sufficiently obfuscated to render the object unidentifiable, or has been missed in the obfuscation process. Such failures are most often the result of the identification algorithm not recognising a face or licence plate, such as when a face is captured at a sharp angle or from a distance.33 In these instances, the face or the number plate is not recognised as an image that requires obfuscation. The absence of obfuscation in an environment where every other object of the same type is blurred could be sufficient to generate a privacy claim based on reasonable expectations of privacy. The reasonable expectation is generated by the SLIP's object obfuscation itself and arises from its own failure. A comparable situation can arise in the case of partial obfuscations, particularly of facial image objects, where the face is somewhat obfuscated and the obfuscation can be removed to reveal the individual's face.34
The second type of failure, false positives, may occur where an object that was not required to be blurred has been obfuscated. Examples include blurring of faces on billboards,35 statues,36 as well as animal facial features where the identification algorithm identifies a non-human face as a human one.37 Though these instances may degrade the quality of the product, and reflect a failure of the overall process of automated object obfuscation blurring, they are unlikely to raise privacy concerns or compliance issues as they do not involve the revealment of individual identity in a data protection context or a private activity in the tortious context.
The third failure involves obfuscation that has been completed successfully but, paradoxically, the act of obfuscation draws attention to a person, property or other image object which is intended to be concealed by blurring. In this situation, obfuscation can add a 'sense of suspicion to otherwise profoundly banal imagery.'38 The third type of failure is typically problematic with the addition of extra contextual information. For example, this form of obfuscation failure can be seen as a variation of the 'Streisand Effect',39 where an attempt to conceal a thing unintentionally confirms or heightens awareness of it. The 'Streisand Effect' itself derives from a legal action undertaken by the celebrity performer, Barbra Streisand, who sued several parties over the online publication of her residence. Prior to news of the legal action, only six attempts were made to view the property. In the first month following news of Streisand's action, over 420,000 attempts were made to view the property online.40 A similar situation arose recently involving the obfuscation of US Supreme Court Justice Brett Kavanaugh's home, following the leaked publication of the controversial Dobbs judgment.41 Google's obfuscation of Kavanaugh's home on Street View had the effect of drawing attention to the property as the only blurred façade on an otherwise non-blurred street.42 In this case, object obfuscation served to confirm that the house belonged to Kavanaugh.43
Both situations show that the obfuscation noise added has the perverse effect of drawing attention to the object that has been obfuscated, thus encouraging investigation into it. Obfuscation in this context is a signal to the curious rather than a noise that obscures.
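The first two failure modes correspond to familiar detection-evaluation terms, and the distinction can be made explicit with a toy per-object check (the function and its labels are ours, purely for illustration). Notably, the third and fourth modes cannot be expressed this way at all, which is precisely the limitation of per-object approaches:

```python
def classify_outcome(is_private_object: bool, was_blurred: bool) -> str:
    """Map one detected object to the first two failure modes.
    The attention-drawing (third) and contextual-identification (fourth)
    modes depend on the surrounding scene, not on the object alone, so
    they fall outside any per-object check like this one."""
    if is_private_object and not was_blurred:
        return "false negative"  # privacy risk: object left recognisable
    if not is_private_object and was_blurred:
        return "false positive"  # product degradation, rarely a privacy risk
    return "correct"
```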
The final type of failure encapsulates the central challenge that context poses to the efficacy of automated obfuscation processes. The blurring of faces and licence plates is not a definitive means of obfuscating the situational contexts in which privacy infringements can arise. Moreover, the nature and scale of Street View image collection means that identification of individuals based on environmental or other types of contextual elements, notwithstanding the blurring of faces and number plates, is a likely and common occurrence. As Teresa Scassa observes, 'it is entirely possible to recognise individuals from attributes other than their faces – and their geographical location may combine with these attributes to reinforce the identification'.44 Accordingly, even though an individual's face may be blurred, the combination of information about their location, vehicle model and home exterior may lead to identification from broader contextual and environmental factors.
The Canadian case of Pia Grillo v Google45 is an example of how context or ancillary objects may be sufficient to render an individual identifiable, even in situations of successful object obfuscation. In that case, the plaintiff brought a claim for invasion of privacy against Google for capturing and publishing an image of her in front of her Quebec home. Importantly for the decision, she was wearing a sleeveless tank-top with her breasts partially exposed.46 Though the plaintiff's face had been blurred, the licence plate of her vehicle and the postal address of her home had not.47 The court held that the failure to blur information such as her licence plate and residential address may have led to personal identification.48
Context also shapes the scope and nature of the privacy interests held by the individual. As discussed below, courts have traditionally viewed activities carried out in certain places, such as the home, as attracting greater expectations of privacy. In the Grillo case, questions arose as to whether the plaintiff had tacitly waived her right to privacy because she was seated outside her home and thus visible from the street. The court rejected the argument, but the episode still highlights the pivotal role of context in shaping legal conceptions of privacy invasion.49
In sum, context is core to the establishment of privacy interests emanating from SLIP data collections in two ways. First, context may render an otherwise obfuscated individual identifiable. Second, the privacy rights and expectations of an individual will generally turn on the context of an activity. These two functions of context are reflected in data protection legislation and the law of privacy torts, respectively. A key limitation of automated obfuscation systems is the failure to adequately account for contextual nuances. However, these limitations are not merely technical. Delineating the reasonable expectations of privacy in a given context is a social and political process,50 which is the subject of ongoing contestation in courts, parliaments and public discourses.51 In the next section, we explore some of the complexities of legally conceptualising the context-sensitive nature of privacy interests implicated by SLIPs, which in turn have implications for designing technical measures for privacy protection.

III. Relevant legal frameworks
The act of capturing, aggregating, and disseminating street-level mapping imagery can implicate a range of privacy interests and rights protected by law. Capturing imagery by entering property without the permission of the owner may amount to trespass. The covert deployment of optical or data surveillance devices may breach surveillance device laws in some jurisdictions.52 This section focuses on two main areas of law which provided the basis for actions for invasion of privacy arising out of street-level mapping incidents: tortious invasions of privacy and data protection statutes. Despite a few instances of successful litigation and government pressure, a survey of the existing legal frameworks reveals a gap between public perceptions of privacy invasion and the interests protected by the law.

A. Privacy torts
Street-level mapping projects may give rise to claims for tortious invasion of privacy.53 Privacy torts are recognised in several common law jurisdictions.54 Two causes of action, intrusion upon seclusion and publicity given to private life,55 are of relevance to SLIPs.56 The scope and elements of these torts have developed differently across jurisdictions. Nonetheless, it is possible to glean some common principles and themes which have emerged from attempts by courts in various jurisdictions to define what merits protection from intrusion or publication. Intrusion upon seclusion involves intentional and unwanted intrusions upon the solitude or seclusion of a person.57 An intrusion may occur by way of physical entry, sensory or electronic observation, or search or inspection.58 In order to be actionable, the intrusion must be upon the intimate or private affairs of the person and be highly offensive to the hypothetical reasonable person.59 The second category, publicity given to private life, is concerned with the public disclosure of private information about a person.60 In some jurisdictions, it is a requirement that the intrusion or publication must be 'highly offensive to the reasonable person'.61 Despite overlap,62 these categories are generally recognised by courts as separate torts.63 Broadly speaking, the former tends to concern physical access to, or observation of, a person, activity, or space. The latter involves the publication of private information about a person.
Whether the process and outputs of digital mapping are actionable will depend upon various contextual factors. Location often factors heavily into court assessments of what is 'private' and merits protection from intrusion or publication. Traditionally, expectations of privacy are highest in the home,64 and significantly diminished once individuals venture into public places.65 However, courts in various jurisdictions have shown a willingness to treat as private certain states or activities carried out in public places. US courts have taken the view that photographs taken of an individual as part of a public scene in their 'ordinary status' or involved in incidents 'seen almost daily in ordinary life' will not violate their privacy.66 Capturing a person in what courts considered an 'embarrassing' state, such as where private body parts are inadvertently exposed, might,67 though not where the publication is 'newsworthy'.68 The Grillo case is one example of this, though the plaintiff's location on private property may also have played a role in the finding of privacy interests. English courts have recognised images of a well-known person seeking treatment for drug addiction,69 and of the children of well-known parents in a public place, as private.70 The publication of details of an extra-marital relationship to a public audience,71 and images and details of sexual encounters,72 have also attracted privacy protection.
While these fact-sensitive decisions cannot be neatly reduced to general categories, it is safe to conclude that the location and state in which an individual is captured, and the nature of their activities, will factor heavily into whether mapping imagery capturing individuals outside of their homes is considered private.73 Generally, the incidental capture and publication of images of people going about quotidian activities in public places is unlikely to be protected. There are, however, few bright lines which distinguish an ordinary status or activity on one hand from a private situation on the other.
The 'highly offensive' threshold adopted in some jurisdictions is a key obstacle to establishing that the capture or publication of mapping imagery constitutes a tortious invasion of privacy.74 To be highly offensive, courts may ask whether the act would cause a person of 'ordinary sensibilities' distress, humiliation, or anguish.75 In Boring v Google, a US court was not convinced that the act of entering a driveway accessible from a road marked 'private road, no trespassing' and photographing the plaintiff's residence and swimming pool was highly offensive to a reasonable person.76 Accordingly, the plaintiffs' claims of privacy invasion failed.
The way this information is collected has sometimes been material. For instance, the English courts have factored the surreptitious nature of information collection into decisions about invasion of privacy.77 As noted, the physical dimension of an invasion is recognised in some jurisdictions as a separate cause of action for intrusion upon seclusion.78 Expectations regarding acceptable modes of collection are also context-dependent and evolving, as communities habituate to different practices over time. Take, for instance, the recent re-entry of Google Street View vehicles into Germany.
Overall, tort law has played a limited role in resolving the tensions triggered by street-level mapping technologies. With some exceptions, tort law's adherence to the notion of 'public-presence-as-consent'79 provides limited guidance for locating privacy interests beyond a restricted set of sites and activities. The limits of US privacy torts in addressing the privacy concerns which attended the initial rollout of Google Street View led several academics to argue for reform.80 However, common law courts have been given few opportunities to resolve perceived inadequacies in the law and thus no major legal developments have taken place.81 Instead, the easing of friction between community expectations and street-level mapping practices in the decade or so since Google Street View's debut has largely been a product of compromise reached through changes to Google's policies and the blurring and takedown mechanisms discussed above. Both were enacted in response to public outcry and pressure from regulators, levelled under the auspices of data protection law.82 As other authors point out, what distinguishes SLIPs from earlier systems of recording street imagery is the ubiquity of collection, the breadth of distribution,83 and the potential lifespan of the image – considerations more explicitly dealt with under statutory regimes for data collection, use and disclosure.

B. Data protection legislation
Tortious protections of privacy are predicated on legal mechanisms that establish spaces of non-intrusion. Such protections recognise a value of privacy that regards the ability of an individual to limit access to the private elements of their lives. The more one can limit access, the greater protection one has against potentially intrusive behaviour from others. The cases outlined above are emblematic of access-based conceptions of privacy in their focus on shielding a private context, whether it be a specific private space or property,84 or a private aspect of individual life, whether conducted in a private or a public space.85 The cases also recognise the prospect of amplification to a broader audience as a form of access intrusion, especially in situations that draw attention to physical features that would normally be classed as private.86
All these issues clearly give rise to legal risk management considerations for SLIPs, which are built into technical solutions based on obfuscation of risk-generating captured imagery. The scale of global mapping platforms is such that managing the tortious risks that arise from intrusion-type infringements is complex. However, mapping platforms need to manage this complexity in tandem with a different set of privacy requirements, namely, data protection, albeit from a perspective that is intended to apply in a limited, 'common sense' way.87 In doing so, the privacy focus shifts from the preservation of access to limit private intrusions to establishing and maintaining individual control of personal data, particularly imagery. A different form of privacy legal analysis is now at play that focuses more significantly on whether collected image data is classifiable as personal data. If that is the case, then data collectors, including the harnessers of public geographical images, may be subject to stringent data protection requirements.
The intention of data protection law is to establish processes of control for individuals regarding the handling of their personal information.88 A range of legal obligations are placed on data collectors, such as SLIPs, that begin at the point of data collection and end with destruction or de-identification of data that is no longer required.89 The guiding control mechanisms are called 'privacy principles'90 or, in the US context, 'fair information principles',91 which govern data handling obligations for data collectors and provide a range of interaction points for individuals. In the interim, data collection organisations have a range of obligations to fulfil. The individual must be notified about the purposes of collection so they can meaningfully consent to subsequent uses.92 Personal data can generally only be used for a defined purpose about which the individual is adequately informed.93 Individuals have a range of interaction mechanisms that seek to ensure the maintenance of control by being able to affirm the accuracy and currency of collected personal information.94 Personal data, once collected and stored, must be kept secure.95
Data protection law thus seeks to provide individuals with varying degrees of control and involvement in personal information exchange processes. However, the application of data protection law, whilst predicated on underlying notions of individual control,96 is also cognisant of the requirements of data collecting organisations and the flow-on benefits of personal information use for society.97 Balance between individual protections and organisational requirements is a key component of data protection law, including as it applies to SLIP collections. It is this balancing requirement, the balance between individual privacy infringement and broader societal benefit, that is at the heart of some jurisdictions' 'common sense' approaches to SLIP data protection issues, while others have taken more stringent regulatory perspectives.98 Some member countries of the EU have been the strictest.
The European Union model of data protection places greater rights-based protections for individuals because data protection is a fundamental right of EU citizenship.99 The US model of information privacy places greater emphasis on market-based activities and therefore provides a lesser degree of protection for individuals.100 The OECD-based systems place greater emphasis on balancing interests to facilitate data exchange processes, and any notion of rights-based protection exists in statutory rather than fundamental forms. These geo-political issues are of obvious importance to SLIPs because different jurisdictions will have different data protection expectations. As such, whilst core conceptual constructs are similar across the three systems, they are different in application, which becomes problematic for SLIPs which operate across all three frameworks.
Nevertheless, the first question to consider is the same, namely, whether the image of an individual is classable as a type of information that can trigger data protection requirements.In this situation the basic question to ask is whether the image of an individual captured as part of SLIP image capture would be personal data or personal information, depending on the jurisdictional context. 101It is useful to note, at this point, the different legal approaches compared to privacy torts.A key consideration in relation to privacy torts regarded whether a realm of privateness had been captured and whether an intrusion into the private was likely to infringe a reasonable expectation of privacy.Both are foundational issues in tort that are much less relevant in data protection.The public/ private distinction is not a requirement of data protection obligations and reasonable expectations is not the foundational legal test.Consequently, images captured in public spaces under a data protection perspective can give rise to legal obligations on account of the identifiability of an individual rather than an intrusion into an individual's private life, as noted above. 102The question of whether SLIP image collection is personal data or information, including sensitive information, is thus a threshold test. 
103he classificatory basis of regulated information is therefore important because it belies many of the political considerations inherent to the application of data protection law.The US situation, in comparison to the EU and Australia, is an important case in point.Each jurisdiction has a different method of classification which has an impact on the scope of application, including for SLIPs.For example, both EU and OECD jurisdictions have an intentionally flexible definition of personal data or personal information that regards information relating to 104 or about 105 an identified or reasonably identifiable individual.The use of 'relating to' and 'about' define the connective process that links an individual to a recognisable and applicable process of identity. 106Two states of identity then flow, namely, identified and identifiable, or reasonably identifiable, in the Australian context.There is general agreement about how the two states of identity are conceptualised in both jurisdictions which entail context independent and context dependent approaches. 107 context independent approach enables the categorisation of personal information without recourse to the social context within which the information is used.It represents the 'identified' state of both definitions.The removal of social context simplifies the categorisation of personal information because it is possible to make a definitive prediction of what information is always likely to be classified as personal information.For example, a name is always likely to reveal identification so therefore a name will always be personal data or information.Similarly, a photograph of an individual who is clearly recognisable from the image will be personal data or information.If that is the case, then collections under some jurisdictions, may require the individual to consent to image collection before it takes place. 
108 This is, of course, an extremely difficult task for SLIP image collections, which are indiscriminate by nature. The issue of consent acquisition is obviated by object obfuscation of faces, which makes individuals unidentifiable, and by post-publication reporting options when automated errors occur.
However, that is not the end of the story. A context dependent approach, as defined by the 'identifiable' or 'reasonably identifiable' state, deems that personal information can only be identified by examining the social context within which a piece of information is used.109 This makes definitive prediction virtually impossible because all information could be classed as personal information in certain circumstances, which are likely to be inherently subjective. For example, as above, a home address does not automatically reveal identity, but it can in certain circumstances through cross-referencing with other information.110 The same can also be said for SLIP image collections, as exemplified by the Pia Grillo decision highlighted above. Even though the plaintiff was not facially identifiable, the court decided that her identity could be ascertained in relation to her location and other bodily factors. Object obfuscation of faces, if done completely, unlike the examples of false positives highlighted in Section II, might be an automated solution for identified types of imagery that could be personal data or information, but it is not a guaranteed solution to the more complex context dependent types of identifiability.
Most jurisdictions also recognise that different forms of personal information can have greater levels of sensitivity attached to them, such as data about racial origins, political or philosophical beliefs, sexual orientation or biometric data.111 The issue of whether SLIP facial image data collection is biometric data, and therefore sensitive, is not of direct relevance to our paper. However, it is important to consider recent regulatory actions against Clearview AI, which appear to show that the types of collection pertinent to SLIPs are increasingly being considered in relation to their scale and application. The action brought by the Office of the Australian Information Commissioner, in conjunction with the UK's Information Commissioner's Office, is a case in point.112 Clearview AI's business model was to provide a facial recognition tool, principally for law enforcement agencies throughout the world.113 To do so, it required a massive database of facial imagery, which Clearview AI achieved through web scraping activities.114 Much like the initial advent of Google Street View, Clearview AI's business gave rise to several regulatory investigations based on the application of data protection law.
A key procedural question in the Australian investigation was whether Clearview AI was carrying on a business in Australia under s 5B(3)(b) of the Privacy Act 1988 (Cth). The Commissioner found that Clearview AI was carrying on business in Australia when it employed its web crawling technology to scrape Australian websites.115 More importantly for this paper, the Commissioner also made clear that Clearview's scraping was of an 'indiscriminate nature' and that the scale of its database, which contained 'at least 3 billion images', meant that it must have collected Australian facial imagery for services intended for the domestic market.116 Moreover, because of Clearview's use of imagery for biometric profiling, it could not rely on forms of implied consent when explicit consent was required, especially in relation to collections where individuals were not adequately informed.117 The indiscriminate nature of collection, combined with the sensitivity of use, meant that Clearview's opt-out mechanism was not sufficient, especially when balanced against the 'serious consequences for the individual'.
118 The Clearview decision, while not directly relevant to our SLIP privacy considerations, is still helpful because it demonstrates the type of balancing exercise that regulators and courts engage in when considering massive and indiscriminate types of image data collection. Individuals are unaware of data collections in both types of situation and cannot provide meaningful or valid consent. As a result, individuals do not have the types of control over personal data that data protection laws seek to instil. Moreover, even though Clearview involved sensitive information for biometric use, which is beyond the scope of our paper, it is nonetheless important because it highlights that the combination of indiscriminate collections to produce massive facial image databases will give rise to specific types of adjudicatory reasoning. Scale is becoming important, and it is therefore an issue that SLIPs need to be more attuned to, given their size and indiscriminate forms of collection. With that in mind, we now set out a different way of thinking about automated privacy protection for SLIP data collections. Our new approach moves away from object obfuscation as the sole regulatory solution towards a more holistic understanding of privacy concepts, in tandem with more sophisticated forms of automated response involving contextually-dependent identification.

IV. Enhancing automated privacy protection
The above analysis reveals the complex privacy considerations at play regarding SLIP data collections and usages. The indiscriminate nature of SLIP image collections, and the equally indiscriminate process of object obfuscation as a general privacy protection, gives rise to questions about whether SLIPs are doing enough to sufficiently protect privacy. To think about this sufficiency question, we go back to Brunton and Nissenbaum's two contextual connotations, namely, that acts of obfuscation must be considered within the context of power asymmetries and that privacy is a 'multi-faceted concept' with a range of different ways in which it can be protected. We finish our substantive considerations by examining technical alternatives to the 'better-than-nothing' protective processes of SLIP object obfuscation. In doing so, we ask whether it is possible to adopt processes of automated contextually-dependent identification that are more attuned to the complex privacy scenarios identified in this paper. Doing so requires machine learning processes to holistically identify individual and environmental contexts arising from the principled analysis of tortious and data protection laws across four areas of potential object obfuscation failure. They are where: (1) obfuscation failed at the individual level, e.g. where a person's image was ineffectively pixelated and the person was re-identified despite the obfuscation applied to facial features (a false negative example of facial obfuscation); (2) obfuscation failed at the environmental level, e.g. where a property or some other thing was ineffectively pixelated and a person was re-identified despite the obfuscation applied to the property or thing (a failure to obfuscate a licence plate or a home name plaque); (3) obfuscation did not fail at the individual level, but the un-pixelated environment nevertheless still leads to the identification of the individual (the type of contextual identification in the Pia Grillo case); and (4) obfuscation did not fail at the environmental level, but the pixelation nevertheless still draws attention to an individual in combination with other data ('the Streisand Effect').
Incorporating these four areas of potential obfuscation failure gives rise to new complexities for automated forms of analysis that require both individual and environmental contexts to be assessed. As noted immediately below, while it is technically possible to identify context from images, it remains a non-trivial task, particularly when conducted at the scale at which SLIPs operate. We therefore contend that further research into automated contextually-dependent privacy detection is required to ensure the true complexity of privacy risks is sufficiently contemplated in SLIP processes of ameliorative protection.
However, whilst the development of new processes of automated contextually-dependent privacy protection would provide a more sufficient technical process to better handle the 'multi-faceted' elements of privacy law, it does not alone address Brunton and Nissenbaum's observation that obfuscation processes must be considered within the context of power asymmetries. We consequently also consider new types of legal risk management process that could be adopted, based on existing processes of privacy-by-design, that are more cognisant of the power contexts relevant to SLIPs. Key to the power context is the scale of SLIPs and thus their ability to establish the privacy expectations of the broader populace. We therefore conclude the paper by examining whether automated obfuscation is a sufficient protection, considering the technical enhancements possible, and in conjunction with the scale, indiscriminate nature and power of SLIP activities.

A. Contextually-dependent automated and semi-automated privacy detection
As noted above, the current approach to privacy detection and intervention tends towards literal interpretations of privacy-sensitive features. Objects that are understood to contain personal data or information, such as faces or number plates, are detected through algorithms, bounded and blurred. However, context-dependent detection and intervention requires more sophisticated scene understanding. In this circumstance, an object may constitute personal information only when accompanied by specific contextual elements. Features that are context dependent, partially obscured or outside of recognised 'privacy sensitive' categories are often not captured by existing methods. Computer vision systems optimise for such tasks by using methods such as object detection, instance segmentation and semantic segmentation. However, they fail to address scene dependent privacy issues as they falsely assume a binary categorisation of objects as either private or non-private.119 As noted above, whilst binary differentiation may be used to enhance technical functionality, the idea that there is a distinct public and private sphere which can be used to delineate legal protections is seen as increasingly problematic.120 Accordingly, there is a need to develop context-dependent object detection methods that support more nuanced privacy protections based on a deeper contextual understanding of privacy torts and data protection law.
The technical components of a more sophisticated contextual analysis already exist but are complex, particularly in operation at scale. There are continuous grading options, but they assume a type of monotonic scale that can tend to oversimplify context. For example, object detection is an important task for autonomous driving systems because it allows vehicles to actively detect roads, pedestrians and other vehicles.121 The existing approach to object detection is underpinned by the development of deep learning and neural networks.122 It combines the two tasks of object localisation and image classification to determine where an object is within an image and assign it a class label.123 The complexity of traffic scenarios and real-time decision making also requires instance and semantic segmentation.124 There will often be more than one vehicle within an image frame, each of which will need to be detected and its exact location known. These actions require the detection of individual object instances, pixel-by-pixel segmentation and assignment of class labels.125 The three most common methods for object detection are R-CNN ('Region-Based Convolutional Neural Network'),126 YOLO ('You Only Look Once')127 and SSD ('Single Shot Detection').128 R-CNN models make use of a two-step process, proposing multiple regions within an image and using a CNN to classify each of the proposed regions.129 This has been improved by Faster R-CNN, which integrates Region Proposal Networks (RPNs) to effectively combine the two steps while maintaining high accuracy.130 In contrast, YOLO is an end-to-end method which offers faster performance by predicting bounding boxes and class labels in one simultaneous forward pass.131 Similarly, SSDs balance speed and detection accuracy by avoiding a dedicated region proposal network, instead predicting bounding boxes and class categorisations directly from feature maps in a single pass.
132 The choice of object detection method for an autonomous driving system is consequently a trade-off between speed, detection accuracy and detection granularity. However, while these methods improve object detection, none of them makes use of contextual features beyond individual grids of region proposals. Contextuality is improved by having a better, closer to real-time understanding of specific objects, which is obviously a limitation when considering the more complex forms of context-dependent analysis we argue for above.
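To make the bounding-box pipeline concrete, the following is a minimal sketch of intersection-over-union (IoU) and greedy non-maximum suppression, the standard post-processing step that detectors in the R-CNN, YOLO and SSD families use to prune duplicate boxes over the same object. The boxes and scores below are invented for illustration, not drawn from any real model.

```python
# Boxes are (x1, y1, x2, y2) tuples; scores are detector confidences.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop overlapping rivals, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    while order:
        best = order.pop(0)
        kept.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return kept

# Two overlapping detections of one vehicle plus one distinct pedestrian:
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 160)]
scores = [0.9, 0.75, 0.8]
print(nms(boxes, scores))  # the duplicate vehicle box is suppressed: [0, 2]
```

The `iou_threshold` parameter is the same kind of speed/granularity trade-off discussed above: a lower threshold merges more aggressively but risks suppressing genuinely distinct nearby objects.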
The concept of 'context' in computer vision includes any information that could influence how a scene and its objects are perceived.133 This often includes local pixel information, which assists in image segmentation and boundary extraction.134 For example, Dalal and Triggs found that incorporating background information into the detection of pedestrians improved accuracy.135 However, contemporary object detection methods have progressed beyond these techniques to achieve more complete scene understanding. Enlarging the receptive field of object detection networks has been shown to improve overall accuracy.136 The advent of transformers and attention-based mechanisms has allowed detection techniques which utilise full-image receptive fields.137 This enables scene configuration to be exploited as an additional source of information for object detection.138 In particular, an object's presence, appearance and location within an image can be more accurately detected with scene understanding.139 The likelihood of an object's presence can be predicted using the environment, the presence of other objects and the scene layout.140 Again, this development is particularly significant for autonomous driving systems because it has increased accuracy by identifying objects that are not on the road.141 The extraction of context features was categorised by Divvala and others into three distinct categories: semantic, scale and spatial context.142 Semantic context describes the likelihood of an object appearing in a particular type of scene. Scale context describes the relative size of an object to surrounding objects, and spatial context considers the objects which are likely to surround a particular object.
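The idea of semantic context can be sketched in a few lines: a detector's raw confidence for a label is blended with a prior describing how likely that object is in the current scene type, so an ambiguous detection is promoted in a compatible scene and demoted in an incompatible one. The scene labels, prior values, scores and blending weight below are all hypothetical, chosen only to illustrate the mechanism.

```python
# Hypothetical P(object present | scene type) values, for illustration only.
SCENE_PRIOR = {
    ("street", "pedestrian"): 0.8,
    ("street", "boat"): 0.02,
    ("harbour", "pedestrian"): 0.4,
    ("harbour", "boat"): 0.9,
}

def rescore(scene, label, detector_score, weight=0.5):
    """Blend detector confidence with a semantic-context prior."""
    prior = SCENE_PRIOR.get((scene, label), 0.1)  # fall back to a weak prior
    return (1 - weight) * detector_score + weight * prior

# An ambiguous detection scored 0.5 by the detector:
print(round(rescore("street", "pedestrian", 0.5), 2))  # promoted: 0.65
print(round(rescore("street", "boat", 0.5), 2))        # demoted: 0.26
```

A contextually-dependent privacy detector would need something analogous but far richer: priors over privacy-relevant configurations (a person near a home entrance, a distinctive vehicle outside a clinic) rather than simple object-in-scene likelihoods.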
143 These attempts to move towards greater scene understanding and beyond literal detection will increase the accuracy of computer vision systems. However, increasing accuracy, as noted above, will also necessitate more effective privacy-focused intervention techniques, particularly obfuscation.144 As noted above, the most common methods of image obfuscation are pixelation and blurring.145 Pixelation, or mosaicking, obfuscates part of an image by dividing the targeted area into a square grid and computing the average colour of the pixels within each grid square.146 The entire square (the 'pixel box') is set to that colour, with the size of the boxes controlling the granularity of the pixelation.147 Blurring removes detail from an image by applying a Gaussian kernel which smooths the targeted area.148 Section II outlines that object obfuscation techniques are not a silver bullet: they do not remove all of the information from an image, but only limit a human's ability to interpret the obfuscated information. The preservation of some information, though more visually pleasing than complete redaction, is vulnerable to extraction by advanced image recognition models,149 particularly in relation to contexts that can give rise to identification.
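The pixelation procedure just described reduces to a short routine: each box-by-box cell of the targeted area is replaced by the mean of its pixels, with larger boxes giving coarser obfuscation. This is a pure-Python sketch over a greyscale image stored as a list of rows; a real pipeline would operate on colour channels with a library such as NumPy or OpenCV.

```python
def pixelate(image, box):
    """Replace each `box` x `box` cell with the mean of its pixels."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y0 in range(0, h, box):
        for x0 in range(0, w, box):
            cell = [image[y][x]
                    for y in range(y0, min(y0 + box, h))
                    for x in range(x0, min(x0 + box, w))]
            mean = sum(cell) // len(cell)  # average colour of the cell
            for y in range(y0, min(y0 + box, h)):
                for x in range(x0, min(x0 + box, w)):
                    out[y][x] = mean
    return out

image = [
    [0, 10, 100, 110],
    [20, 30, 120, 130],
]
print(pixelate(image, 2))
# each 2x2 cell collapses to its mean:
# [[15, 15, 115, 115], [15, 15, 115, 115]]
```

Note that the means preserve real information about the underlying region, which is precisely why obfuscated imagery remains vulnerable to model-based reconstruction.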
Two common non-standard methods of obfuscation are k-same and GAN obfuscation.150 K-same obfuscation has been used for facial blurring by clustering faces based on non-identifiable information, such as expression, and then generating a surrogate face for each cluster.151 This ensures that a re-identification system cannot be more accurate than 1/k in assigning a face to an individual.152 GAN methods, popularised by 'deep fakes', can provide more realistic faces whilst still removing identifiable information. These models use contrastive loss and conditional GANs to ensure faces are obscured from the source input.153 In addition, other more esoteric methods of image obfuscation have also been explored, for example, black boxes,154 cartooning,155 full body masking,156 and inpainting.157 These methods each seek to reach an optimal balance between obscuring human recognition, obscuring machine recognition and visual attractiveness.
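The k-same guarantee can be illustrated with a toy sketch: every face (here reduced to a short feature vector) in a cluster of at least k members is mapped to the same surrogate (here the cluster centroid, standing in for a generated face), so a re-identification attempt can do no better than guessing among k individuals. The names, vectors and clusters below are invented for illustration.

```python
def k_same(clusters):
    """Map each face to its cluster's shared surrogate (the centroid)."""
    surrogates = {}
    for members in clusters:  # each cluster: list of (name, feature_vector)
        dim = len(members[0][1])
        centroid = tuple(
            sum(vec[d] for _, vec in members) / len(members)
            for d in range(dim)
        )
        for name, _ in members:
            surrogates[name] = centroid
    return surrogates

# Two clusters of k = 2 faces each, grouped by (say) similar expression.
clusters = [
    [("alice", (1.0, 2.0)), ("bob", (3.0, 4.0))],
    [("carol", (10.0, 10.0)), ("dan", (12.0, 14.0))],
]
out = k_same(clusters)
# Alice and Bob now share one surrogate, so a matcher cannot separate them:
assert out["alice"] == out["bob"] == (2.0, 3.0)
```

Published k-same methods generate a realistic surrogate face rather than a raw centroid, but the privacy property, indistinguishability within the cluster, is the same.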
The outlined obfuscation methods all make object detection more difficult. For example, pixelation can remove edge details which may be critical for object localisation, whilst blurring may diffuse contrast features necessary for object classification. However, autonomous driving systems are increasingly trained on datasets which already have sensitive information obscured.158 Such systems are consequently more robust in detecting and processing obfuscated imagery.
Furthermore, deep learning techniques have been developed which are able to circumvent these methods and access the underlying information. Tekli and others describe techniques which identify obscured information through either 'recognition-based' or 'restoration-based' attacks.159 Recognition-based algorithms are trained to recognise information from within obscured images, whilst restoration-based methods attempt to reconstruct the original features that have been obscured.160 The introduction of automated or semi-automated context dependent detection of privacy features may introduce a third class: one which negates the requirement for recognition or restoration, instead using extraneous information within an image to infer obscured detail. This evolving interplay between obfuscation and detection techniques underscores the need to regularly revisit nuanced privacy considerations regarding computer vision technologies.
The above analysis of technical developments highlights that the type and level of object obfuscation employed by SLIPs could be augmented. First, it is possible, by applying techniques developed for autonomous vehicles, to retain a focus on object identification but to do so in a more concentrated fashion that better identifies captured image objects and, potentially, the relationships between objects. Second, obfuscation methods can also be employed with different types of object detection technique to provide a closer identification of certain objects to be obfuscated (e.g. faces) and to provide enhanced obfuscation methods that are more comprehensive in coverage and more difficult to reverse engineer. These developments highlight that there are possibilities for improving object obfuscation that would better understand and respond to the image environment of captured objects.
The two improvements highlighted above would increase the scope of contextually-dependent forms of identification, but they would not provide automated processes through which potentially privacy infringing contexts could be identified or detected. Accurately detecting the types of privacy infringing context highlighted above through machine learning, for legal risk management purposes, still appears to be some distance away, despite the improvements to object obfuscation that could be made. This leads us back to Brunton and Nissenbaum's second context connotation and the need for SLIPs to provide more contextually-dependent risk management processes that better understand the broader context of power asymmetries and the establishment of expectations involved in privacy-by-design processes.

B. Contextually-dependent privacy by design
In an effort to grapple with the threats to privacy posed by the types of ubiquitous digital data collection characterised by SLIPs, legislators have increasingly turned to a 'privacy-by-design' paradigm.161 The foundations of privacy-by-design were laid several decades ago, most notably in the work of the former Information and Privacy Commissioner for Ontario, Ann Cavoukian,162 and embraced by policymakers across many jurisdictions.163 The principle gained a new statutory emphasis and articulation with the adoption of the General Data Protection Regulation. Article 25 of the Regulation requires that firms: '… implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.'164 In essence, the provision broadens regulatory focus beyond individual rights and post facto remedies to systemic, ex ante and lifecycle protections, by requiring organisations to build technical and organisational safeguards into system architecture that ensure compliance with data protection obligations and rights.
While Article 25 represents the clearest legislative articulation of a privacy-by-design obligation to date, it is nonetheless ambiguous in scope and operation.165 The Regulation supplies a few examples of the types of technical and organisational measures which might be required but refrains from further detail, leaving significant room for interpretation as to the concrete requirements in a given context. Indeed, the context-dependent nature of the obligation is explicit in the provision.166 What constitutes an 'appropriate' technical and organisational measure and a 'necessary' safeguard is, according to Article 25, dependent upon a range of factors, including the context of data processing and the severity of the privacy risk it poses. As noted in Section III, and throughout the paper, context is central to legal conceptions of privacy invasion across jurisdictions and bodies of law. Identifying the features of a context that heighten or ameliorate the risk of privacy invasion is also a notoriously difficult and ongoing social and legal project. Another factor which must be considered in determining appropriate technical measures under Article 25 is the 'state of the art'. As Bygrave observes, the provision assumes the existence or emergence of growing markets for privacy-enhancing technologies (PETs), propelling advancements in the 'state of the art'.167 In reality, however, the innovation and uptake of PETs across various domains has stagnated,168 a situation we observe in the street-level mapping domain.
We argue that while the largest SLIPs have long implemented technical safeguards for privacy into their products, the approach taken by key members of the industry falls short of privacy-by-design ideals in some key respects. The implementation of technical and organisational safeguards in major SLIPs was largely a reactive process, 'bolted on as an add on, after the fact … ',169 rather than embedded into the architecture of a system.170 Further, the initial build and engineering of SLIPs relied upon indiscriminate data collection, as opposed to data minimisation, which Seda Gurses and others rightly cast as a 'necessary and foundational first step to engineer systems in line with the principles of privacy by design'.171 Moreover, we argue that the standard approach to privacy-by-design, with its predominant focus on data protection, is insufficient for SLIPs because it fails to sufficiently account for the contextual nature of privacy interests. A more contextually-dependent form of privacy-by-design is required that can accommodate Brunton and Nissenbaum's 'multi-faceted' nature of privacy and the different types of legal issue that could arise.
Section III highlighted the different legal frameworks of tortious privacy and data protection that are implicated in SLIP data collections and usages. Both frameworks provide core legal protections and do so in separate ways with different foci. Privacy torts engender a focus on reasonable expectations of privacy, predicated historically on a clear distinction between private and public realms. The traditional (and highly contested) distinction between private and public has of course been further disrupted by novel technologies such as SLIPs. Nevertheless, that distinction is still a prime theme that emerges from some of the key cases, such as Boring, which still seek to differentiate reasonable expectations of privacy across differing privacy contexts, especially those related to the private realm. Data protection law focuses on providing designated levels of assurance and control for individuals to ensure that data collectors handle personal data in accordance with a range of legal boundaries, including legal authority, contractual obligations and consent. The law's guiding focus is the balancing of individual control with the organisational exigencies of data collectors. Privacy-by-design methods that focus exclusively on data protection, such as Article 25 of the GDPR, albeit with scope for implementation and development, do not fully encapsulate the requirement for a reasonable expectations analysis that is an essential part of SLIP pre-emptive privacy considerations.

167 Bygrave (n 165) 119. 168 Ibid. 169 Seda Gurses and others, 'Engineering Privacy by Design' (2011) 14 Computers, Privacy & Data Protection 25 <https://software.imdea.org/~carmela.troncoso/papers/Gurses-CPDP11.pdf> accessed 4 September 2023. 170 Koops and Leenes (n 165). 171 Gurses and others (n 169).
We contend that a reasonable expectations analysis does not simply entail the layering of one legal framework over another as part of a data protection driven privacy-by-design bolt-on.172 Rather, our inspiration for the type of deeper, contextually-dependent analysis is again drawn from Brunton and Nissenbaum's consideration of power asymmetries. Brunton and Nissenbaum rightly contend that there is a foundational link between obfuscation as a privacy strategy and its use as an ameliorator of power.173 In that sense, throughout their work, obfuscation is considered a tool of the repressed to equalise power asymmetries in online environments.
Our use of their obfuscation construct has been different throughout our paper. We have used obfuscation techniques as a point of critique of the powerful rather than a tool for the powerless. We consider obfuscation as a method for organisations to build privacy into data collection systems by design rather than as a form of deliberate resistance to surveillance and data collection.174 In this regard, the noise that obfuscation strategies produce is not an individual protection. Rather, it is representative of an exercise of power that seeks to enshrine a normative understanding of reasonable expectations of privacy. In the case of SLIPs, one that is based on the 'better-than-nothing' provision of automated object obfuscation, with the dubious fallback of online individual reporting of published obfuscation errors or contextual complexities that are beyond machine learning techniques.
We highlight above the scope of Article 25 and its inherent contextual focus on 'appropriate' technical and organisational measures that must consider the 'state of the art'. The immediately preceding sub-section highlights that identification and obfuscation techniques already exist that could potentially be used to develop a truly 'state of the art' automated system for object obfuscation, one that would better marry the systemic concerns to which both privacy torts and data protection speak. However, while the development of improved technical forms of object obfuscation for SLIPs could provide enhancements, it would still not deal with the more complex question of contextual identification of potentially privacy infringing imagery that is undertaken post collection and publication. The question therefore arises whether SLIPs should be required to implement a broader, contextually-dependent type of privacy-by-design that considers 'state of the art' technical measures and whether they are appropriate for the publication of global street level imagery.
We contend they should, given the indiscriminate nature of SLIP data collections and the unprecedented global scale at which data collection takes place, often without the knowledge or express consent of the individuals who are inadvertently captured by image collection. The combination of contextually-dependent automated privacy detection techniques with contextually-dependent privacy-by-design legal risk management processes will thus shift obfuscation from a 'humble, provisional, better-than-nothing'175 solution to one that is appropriately 'deeply entangled with the context of use'176 and thus gives fuller effect to Brunton and Nissenbaum's warnings about the relationship between obfuscation and power.

172 Rubinstein and Good (n 18) 1358 and the focus of 'privacy engineering' and 'on what companies can do to build privacy protections into their own systems.' 173 Brunton and Nissenbaum (n 1) 50, stating '… we can better understand acts of obfuscation within a context of unavoidable relationships between people and institutions with large informational and power asymmetries.' 174 Ibid 1: 'Obfuscation is the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection.' 175 Brunton and Nissenbaum (n 1) 95. 176 Ibid 95.

V. Conclusion
Our paper highlights the contextual complexities inherent in SLIP data collections and publications. The indiscriminate nature of SLIP data collections is partially offset by an equally blunt protective privacy measure, object obfuscation. Common objects that are obfuscated, based on prior legal or regulatory actions, include facial features and vehicle licence plates. Nevertheless, we identify four types of obfuscation failure that emerge from SLIPs: false negatives, false positives, the Streisand Effect and contextual identification. The four failures demonstrate the complex legal requirements that emerge for SLIPs, emanating from both privacy torts and data protection law. The complex legal issues that arise are due to the 'multi-faceted nature' of privacy, in which object obfuscation is but one tool in a much larger toolbox. However, SLIPs tend to portray object obfuscation as the key tool available, which in turn attempts to shape broader expectations about the level of privacy protection that is attainable. Based on the work of Brunton and Nissenbaum, we contend that contextually-dependent privacy detection and privacy-by-design processes are required in SLIPs to ensure that generated privacy expectations are indeed reasonable and in keeping with the types of appropriate, state of the art technical measures necessary to safeguard against the contextual risks that emanate from SLIP data collections.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Table 1 .
Overview of object obfuscation by mode of failure.