Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks

The benefits of autonomous vehicles (AVs) are widely acknowledged, but there are concerns about the extent of these benefits and about the risks and unintended consequences of AVs. In this article, we first examine AVs and the different categories of technological risk associated with them. We then explore strategies that can be adopted to address these risks and examine emerging government responses for addressing them. Our analyses reveal that, thus far, governments have in most instances avoided stringent measures in order to promote AV development, and that the majority of responses are non-binding, focusing on creating councils or working groups to better explore AV implications. The US has been active in introducing legislation to address issues related to privacy and cybersecurity. The UK and Germany, in particular, have enacted laws to address liability issues; other countries mostly acknowledge these issues but have yet to implement specific strategies. To address privacy and cybersecurity risks, strategies ranging from the introduction or amendment of non-AV-specific legislation to the creation of working groups have been adopted. Much less attention has been paid to issues such as environmental and employment risks, although a few governments have begun programmes to retrain workers who might be negatively affected.


Introduction
Autonomous vehicles (AVs) open new paths for mobility and are acknowledged to have economic and societal benefits, but there are concerns regarding the extent of these benefits and their unintended consequences. As with all new technologies, appropriate governance strategies can help maximise the potential benefits associated with the rapid development of AVs and minimise the risks often associated with technological disruption and negative and/or unintended consequences. Concerns remain, however, about the capacity of governments to manage the wider societal implications in a timely manner.
Since Google released its first fleet of AVs in 2010 (Teoh & Kidd, 2017), developments in AV technology have accelerated significantly. Hillier, Wright, and Damen (2015) estimate that auto companies will roll out AVs in the market by 2020 and AVs are expected to occupy 25% of the global market by 2040 (West, 2016). Most scholarly work has been directed towards the effects of AVs. For instance, Milakis, Snelder, van Arem, Homem de Almeida Correia, and van Wee (2017) and Wadud, MacKenzie, and Leiby (2016) estimate the impact of AVs on transport demand and energy consumption respectively, while Collingwood (2017) and Glancy (2012) explore the impact of AVs on privacy issues.
There is limited country-specific literature regarding the policy implications of AVs and governance responses to them. Literature reviews have been conducted for Australia (Hillier et al., 2015; Sun, Olaru, Smith, Greaves, & Collins, 2016) and the United Kingdom (UK) (Clark, Parkhurst, & Ricci, 2016). Kalra (2017) has identified regulatory gaps in the United States (US) federal government's approach to AV-related safety risks, proposing possible risk management strategies. These studies, however, do not explicitly analyse government strategies and efforts as part of a broader framework. This article addresses the following questions: (a) what are the different kinds of risks associated with AVs? (b) what are the emerging government responses to these risks, and how can these different emerging strategies be categorised and compared? To address these two questions, we focus on efforts at the national level and consider the broader developments in the European Union (EU) as well.
The next section briefly introduces the methodology used for the selection of articles and reports on the governance of AVs. We provide the necessary background information about AVs before discussing the risks associated with them. We present a theoretical framework for examining responses to risk associated with AVs and identify and discuss the various emerging strategies applied by governments to address these risks before the concluding remarks.

Methodology
Our methodology involved two steps. Firstly, we identified AV-related implications through a preliminary review, exploring the key factors highlighted as the most prominent in the current literature. We searched for possible risks associated with AVs using the keywords "autonomous vehicle(s)", "driverless" or "driverless vehicle(s)" in combination with one of the keywords representing an AV-related implication (Table 1). Boolean operators such as "AND", "OR" and "NOT" were also used. To identify the lesser-known risks of AVs, we searched for AVs in conjunction with "risk(s)" and its synonyms, such as "effect(s)", "impact(s)" and "consequence(s)". Secondly, we identified existing government efforts to manage AV-related risks. We searched for words relating to government regulation, such as "regulation(s)", "legislation(s)", "rule(s)", "bill(s)" and "law(s)", together with AVs and the names of the countries and regions of study. These include Australia, China, the EU, Germany, Japan, South Korea, Singapore, the US, and the UK, as most AV-related developments have occurred in these countries and regions.
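As an illustrative sketch (not part of the original study's tooling), the combinations of AV keywords and risk synonyms described above could be generated programmatically before being entered into a database search interface; the term lists below are abridged and hypothetical:

```python
from itertools import product

# Abridged keyword lists drawn from the methodology description above.
AV_TERMS = ['"autonomous vehicle(s)"', '"driverless"', '"driverless vehicle(s)"']
RISK_TERMS = ['"risk(s)"', '"effect(s)"', '"impact(s)"', '"consequence(s)"']

def build_queries(av_terms, topic_terms):
    """Pair every AV keyword with every topic keyword using the AND operator."""
    return [f"{av} AND {topic}" for av, topic in product(av_terms, topic_terms)]

queries = build_queries(AV_TERMS, RISK_TERMS)
# e.g. the first query is '"autonomous vehicle(s)" AND "risk(s)"'
```

A similar pairing with country names and regulation-related terms ("regulation(s)", "bill(s)", etc.) would cover the second step.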
Research published from 2000 onwards was obtained from well-known academic databases (Scopus, ScienceDirect, Web of Science and Springer). Google Scholar was used as the search engine only when these databases produced limited or no results for specific implications of AVs. News articles from outlets such as The New York Times, The Guardian, and Reuters were also used to supplement background research on AVs. Government reports and policy documents were also included to identify previous, current, and planned government measures to address AV-related risks.

Background to AVs
Autonomous systems are characterised as systems capable of making decisions independently of human interference (Brodsky, 2016; Collingwood, 2017), but, unlike mere automation, they can make these decisions while facing uncertainty (Danks & London, 2017). Autonomous systems have been developed in different domains, including warfare, personal care (Arkin, 2013; Pineau, Montemerlo, Pollack, Roy, & Thrun, 2003; Stahl & Coeckelbergh, 2016; Sukman, 2015) and transport. AVs rely on artificial intelligence (AI), sensors and big data to analyse information, adapt to changing circumstances and handle complex situations as a substitute for human judgement, which would no longer be needed for conventional vehicle operations such as lane-changing, parking, collision avoidance and braking (Long, Hanford, Janrathitikarn, Sinsley, & Miller, 2007; West, 2016). 1 This perceived superiority to human drivers is attributed to high-performance computing, which allows AVs to process, learn from and adjust their guidance systems according to changes in external conditions at much faster rates than the typical human driver, and is supplemented by vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication, allowing AVs to learn from other vehicles (West, 2016).
Companies are racing to secure their share of the emerging AV market by investing in software development, teaming up with leading university research centres and testing on roads. Most governments recognise the need to adapt to these rapid technological advancements but face challenges in balancing the strategic desirability of AVs against the issues accompanying this technology. AVs entail enormous social and economic benefits. Countries committed to developing AVs, including the UK, US, China, and Japan, seek greater mobility for the elderly and handicapped, as well as improved safety and competitiveness in the automotive industry (Nikkei, 2017; West, 2016). China aims to lead the world in electric vehicles and AVs by 2030 (Dunne, 2016). Using AVs can boost productivity in countries facing labour shortages in the transport sector, such as Singapore (Lim, 2017) and Japan (Bloomberg, 2016b). AVs can also help meet other national objectives such as improving fuel economy (Dunne, 2016) and reducing congestion and pollution (Hanai, 2018). However, AVs entail various risks. In this article, we focus on the governance of such risks. 2 More specifically, we examine the governance of technological risks, as this category broadly captures the unintended consequences arising from the technology. 3

AVs are classified into different categories based on their features. The Society of Automotive Engineers (SAE) defines six levels of driving automation, from level 0 (no automation) to level 5 (full automation) (SAE, 2014). At level 1 (assisted automation) and level 2 (partial automation), the human driver performs the operational and tactical aspects of the dynamic driving task. From level 3 upwards, the automated driving system performs all dynamic driving tasks, although at level 3 (conditional automation) the human driver is still expected to take control of the vehicle occasionally.
A vehicle is classified as fully autonomous at level 4 (high automation) and level 5 (full automation), but only at level 5 is the vehicle expected to drive itself under all environmental conditions. This definition is adopted by various national and international bodies, such as Australia's National Transport Commission (NTC) (Hillier et al., 2015; Sun et al., 2016), the UK's Department for Transport (DfT) (Clark et al., 2016), the US National Highway Traffic Safety Administration (NHTSA, 2017), the Government of Ontario, Canada (Ticoll, 2015) and the European Road Transport Research Advisory Council (ERTRAC, 2017). This study focuses on AVs at SAE levels 4-5, as they represent a more fundamental shift in society.
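The SAE classification summarised above can be encoded as a simple lookup, which is a minimal illustrative sketch (the level names follow the article's wording, and the helper function merely reflects the study's levels 4-5 focus):

```python
# Mapping of SAE automation levels to the names used in this article.
SAE_LEVELS = {
    0: "no automation",
    1: "assisted automation",
    2: "partial automation",
    3: "conditional automation",
    4: "high automation",
    5: "full automation",
}

def is_fully_autonomous(level: int) -> bool:
    """Levels 4 and 5 are treated as fully autonomous, the focus of this study."""
    return level >= 4
```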

AV risks and governance strategies
Innovative technologies such as AVs create risks and unintended consequences that may decrease society's acceptance of them; these include environmental, market, social, organisational, political, financial, technological, and turbulence risks (Li, Taeihagh, & de Jong, 2018). This article focuses on technological risks, described as the potentially negative social, physical, and economic consequences related to citizens' concerns in the adoption of novel technologies (Renn & Benighaus, 2013). Five types of technological risk are associated with AVs: safety, liability, privacy, cybersecurity, and industry influence. 4 To ensure that society reaps the maximum gains from the emerging AV market, it is paramount for governments to introduce new measures and regulations to manage the risks associated with AVs. In this section, we explore the types of strategies adopted by various governments to govern the technological risks brought about by AVs. We employ a framework for identifying governance strategies for addressing these risks and categorise them as no-response, prevention-oriented, control-oriented, toleration-oriented, or adaptation-oriented strategies, based on the work of Li et al. (2018) and Li, Taeihagh, de Jong, and Klinke (forthcoming) (Table 2).

Safety
At least 90% of vehicle accidents are estimated to be the result of human error (NHTSA, 2015; Smith, 2013; Sun et al., 2016). Adopting AVs can potentially reduce or eliminate the largest cause of car accidents while also outperforming human drivers in perception, decision-making and execution. However, AVs introduce new safety issues. Collingwood (2017) and Litman (2017) highlight that vehicle occupants may reduce seatbelt use and pedestrians may become less cautious because they feel safer. Moreover, the elimination of human error does not imply the elimination of machine error. As the technology grows in complexity, so does the probability of technical errors compromising vehicle safety. The fatal crash of a Tesla operating on Autopilot in 2016 reveals the uncertainty of machine perception (Banks, Plant, & Stanton, 2018) and highlights the technology's inability to avoid accidents in certain scenarios. Concerns also arise regarding how AVs should be programmed by "crash algorithms" to respond during unavoidable accidents (Coca-Vila, 2018; de Sio, 2017; Nyholm & Smids, 2016). Due to the "lack of blame", the damage caused by AVs in accidents cannot be assessed subjectively, which necessitates rules to regulate AVs' reactions to moral dilemmas (Coca-Vila, 2018). However, it is unclear how to arrive at these rules. Algorithms may be programmed to prioritise the safety of the AVs' occupants

Table 2. Strategy definitions and AV examples (based on Li, Taeihagh & de Jong (2018) and Li et al. (forthcoming)).
No-response

Policy-makers do not take any specific actions to address risks and may delay decisions due to their uncertain nature. In this scenario, policy-makers may not have any back-up plans or robust institutional frameworks to address impending threats. An example of this strategy in response to AV safety risks is when a government has neither established nor indicated its intention to establish safety standards for AV manufacturers to follow during the testing of AVs. Another example is the US federal government not establishing any nation-wide rules regarding the allocation of liability and motor vehicle insurance. This strategy corresponds to the fragile strategy (Duit & Galaz, 2008). No-response might also imply that policy-makers are ignorant of the potential negative consequences of risks.

Prevention-oriented
The main aim of this strategy is to avoid risks by taking preventive action. Prohibiting the adoption of innovative technologies is one such display of risk avoidance, as it seeks to prevent the existence of risk. One example is to temporarily prohibit or restrict AV testing on certain routes if a safety concern is identified (PennDOT, 2016). This strategy corresponds to the risk minimisation strategy (Brown & Osborne, 2013) and is suitable for addressing risks of a more predictable nature, but is ineffective when risks are unexpected (Wildavsky, 1991).

Control-oriented
Policy-makers allow for the existence of risks, but take steps to control them by implementing formal policies and regulations (Osipova & Eriksson, 2013). Traditional methods of risk assessment are adopted to predict and regulate risks. One example of a control-oriented strategy is the Singapore government's response to AV safety risks. In 2017, amendments were made to the Road Traffic Act which now requires AV testers to pass safety assessments and developers to have robust accident mitigation plans before testing on roads (Road Traffic (Amendment) Bill, 2017).

Toleration-oriented
Policy-makers take action to ensure that the system or organisation's performance is robust to risks in a wide range of situations. One example of this strategy in response to AV safety risks is when the government introduces new legislation that requires all AV manufacturers to develop a comprehensive list of contingency plans that outline and justify the AV's responses to a diverse range of accident scenarios. Another example is the UK government's Vehicle Technology and Aviation Bill (HC Bill 143, 2017) that lays out a comprehensive list clarifying the liability of insurers and AV owners in the event of an accident and under a wide range of circumstances. This strategy corresponds to robustness and resistance strategies proposed by Nair and Howlett (2016) and Walker, Lempert, and Kwakkel (2013) respectively. Policy-makers also make forward-looking plans to mitigate potential consequences, such as by developing alternative solutions.

Adaptation-oriented
This strategy aims to improve the adaptive capability of the system or organisation. It emphasises embracing uncertainty and improving performance in response to shocks. Features of this strategy also include aspects of "forward-looking planning, joint responsibility" and "co-deciding" (Li et al., 2018). This strategy corresponds to the adaptive resilience and resilience strategies proposed by Nair and Howlett (2016) and Walker et al. (2013) respectively. For instance, Australia's National Transport Commission is seeking feedback from various stakeholders to decide on one of four options to regulate AV safety. Here, policy-makers view risk as an opportunity to change the system for the better, rather than as a threat that should be ignored, suppressed, controlled, or tolerated.
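The five-strategy framework in Table 2 can be summarised as a small taxonomy. The sketch below is purely illustrative: the enum names and one-line glosses paraphrase the table, and the example tags are hypothetical assignments drawn from responses discussed later in the article:

```python
from enum import Enum

# The five governance strategies of Table 2 (based on Li et al., 2018),
# each with a one-line gloss paraphrasing the table's definitions.
class RiskStrategy(Enum):
    NO_RESPONSE = "no specific action; decisions may be delayed"
    PREVENTION = "avoid risk, e.g. prohibit or restrict AV testing"
    CONTROL = "permit risk but regulate it, e.g. mandatory safety assessments"
    TOLERATION = "make the system robust across a wide range of scenarios"
    ADAPTATION = "build adaptive capacity, e.g. stakeholder co-decision"

# Hypothetical tagging of two government responses discussed in this article.
examples = {
    "Singapore Road Traffic Act amendments (2017)": RiskStrategy.CONTROL,
    "UK Vehicle Technology and Aviation Bill (2017)": RiskStrategy.TOLERATION,
}
```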
"over anything else", which ensures the economic viability of developing AVs, but using the individual self-interest of AV occupants as a basis to justify the harm inflicted on others undermines the functions of law itself (Coca-Vila, 2018). In contrast, algorithms may be programmed to achieve the most socially beneficial decision based on a range of factors, but how to arrive at these factors is still unclear (Coca-Vila, 2018). Also, regulators have yet to agree on an acceptable level of safety or define legitimate methods of determining the safety of AVs (Kalra, 2017). AVs' performance could improve over time with real-world driving experience, but this is only possible if the public accepts the technology (Bansal, Kockelman, & Singh, 2016;Kalra, 2017). In the US, the federal government traditionally sets the "national safety standards", and the state governments issue licences and regulate drivers' behaviour (Halsey, 2018). NHTSA outlined a Vehicle Performance Guidance for all entities involved in "manufacturing, designing, supplying, testing, selling, operating, or deploying" AVs in the US (NHTSA, 2017). While NHTSA has intentions to enforce these recommendations in future, now it requests these entities to provide a Voluntary Safety Assessment that outlines the compliance to the guidance, which includes specifications on systems safety such as describing safety strategies and design redundancies for addressing AV malfunctions (NHTSA, 2017). The responsibilities of the federal and state governments were clarified in the "Self Drive Act" in late 2017, which establishes NHTSA as the "preeminent regulating body" (Stone, 2018) and allows states to enforce new standards on AVs only if they are "identical" to what is prescribed by federal law (H.R.3388, 2017). 
It seems that, with AVs, the legal competence of the federal government will grow while that of state governments shrinks (Halsey, 2018), as the latter's role in regulating driver behaviour becomes increasingly redundant. The federal government is not interested in imposing strict regulations on AVs; in the words of the Transportation Secretary, it is "not in the business … to pick the best technology" and prefers a market-oriented approach (Halsey, 2018).
Similarly, the UK's DfT published an AV testing code of practice, which also has no legal status, to guide manufacturers in ensuring AV safety in various situations throughout the vehicles' service life (DfT, 2015). It encourages and allows testing on any public road in the UK without requiring the approval of authorities or a surety bond (CCAV, 2016). However, frameworks for minimising risk during public testing have not yet been established. This laid-back approach stems from the UK's plans to create a national "cluster of excellence" in AV testing as part of its Industrial Strategy to grow human capital, attract foreign investment and develop "high-skill, well-paying jobs" to enhance the economy, achieve greener economic growth and greater mobility, and meet the needs of an ageing society (DBEIS, 2017a, 2017b). Both the US and the UK are careful not to impose regulations that are too stringent, nor to take an excessively lenient stance on AV safety, so as to provide sufficient room for innovation (CCAV, 2016; Kang, 2016). Their attempts to establish and align expectations regarding safety standards without imposing overly restrictive barriers to innovation represent a light control-oriented strategy.
Likewise, Australia's NTC has published non-mandatory guidelines for safe AV testing that also constitute a light control-oriented strategy (NTC, 2017b). In 2016, the Transport and Infrastructure Council approved the NTC's suggestion to create a national safety assurance system to assess the level of safety of AVs (NTC, 2016; NTC, 2017c). Emphasis is placed on controlling access to AVs, and the NTC supports the commercial deployment of AVs as a long-term goal; however, no regulations have yet been established to approve deployment, which will still be considered case by case (NTC, 2016). The NTC has developed four regulatory options for regulating safety, on which it is seeking feedback from various stakeholders (NTC, 2017c). This step represents an attempt at consensus-building and public participation among various actors and may thus reflect a move towards an adaptation-oriented strategy.
China's government also adopts a light control-oriented strategy to address safety risks, while taking some preventive measures to avoid exposing AVs to realistic road conditions. Human drivers are required to be in the vehicle with their hands kept on the steering wheel, and AVs cannot be tested under actual road conditions until the government devises a framework for granting road test exemptions (KPMG, 2018; West, 2016). While the government has developed draft rules to regulate AV testing on public roads, AV testing has remained slow, as existing laws have yet to be revised (The Straits Times, 2018). In 2016, the National Technical Committee of Auto Standardisation started reviewing China's vehicle standards and regulations to identify appropriate regulatory adjustments. In 2017, the China-New Car Assessment Programme was initiated to ensure that safety measures are well incorporated into the assessment system, and research has begun on the industry policy and stakeholder engagement of AVs to assist authorities (ERTRAC, 2017). AVs have been identified as a key sector in the government's plans to become a leader in artificial intelligence by 2025 and to compete with the US' core AI industries. Thus, China seeks to create a "friendly policy environment" to accelerate AV development (Cadell & Jourdan, 2017; Dai, 2018).
In Europe, AV testing is legally permitted, but the EU is stricter than the US, a difference attributed to cultural factors: Europe places greater emphasis on protecting citizens from technological risks, while the US focuses on the "race for innovation and progress" (Nicola, Behrmann, & Mawad, 2018). AV testing in the US is allowed on public roads without any mandatory standards to follow, while in Europe AV testing is typically "confined to private streets" and "pre-defined routes" or "restricted to very low speeds" (Nicola et al., 2018). Amendments to the 1968 Vienna Convention on Road Traffic took effect in 2016 to legalise the use of automated driving technologies, which the German government incorporated into its national law in December 2016. The amended convention, however, still requires every vehicle to have a driver who is always ready to take control of the AV. The European Parliamentary Research Service (EPRS) highlights that this is incompatible with most highly or fully automated systems, which may not require a driver; thus, the EPRS recommends further amending the convention (Pillath, 2016). The German government has started experimenting with safety standards through its PEGASUS project (FMEAE, 2017). At both the EU and national levels, European governments are still evaluating the implications of AVs before establishing permanent regulations. The aim is to develop a unified strategy for regulating AVs, marked by the Declaration of Amsterdam in 2016, in which member states agreed to meet twice a year to share best practices, monitor progress and collaborate on all levels of regulation (ERTRAC, 2017).
Singapore and Japan have begun amending their laws to regulate safety in AV testing. The Singapore Road Traffic Act (RTA) was amended in February 2017, demonstrating a control-oriented strategy. The law now recognises that a motor vehicle need not have a human driver (RTAB, 2017), and the Minister for Transport can create new rules on AV trials, set standards for AV designs, and acquire data from AV trials. A five-year regulatory sandbox was created to ensure that innovation is not stifled, and the government intends to enact further legislation in the future. Meanwhile, AVs must pass safety assessments, robust plans for accident mitigation must be developed before road testing, and the default requirement for a human driver can be waived once the AV demonstrates sufficient competency to the Land Transport Authority (LTA). After displaying higher competencies, AVs can be trialled on increasingly complex roads (CNA, 2017). Similarly, Japan drafted rules for AV testing in early 2017 that require a human driver with a driver's licence in the vehicle, police approval, clear labelling on AV test vehicles and testers who are always prepared to apply the brakes (Kyodo, 2017). Furthermore, police officers will "ride the test vehicles" to ensure their proper functioning (Jiji, 2017). The emphasis on human control of the AV demonstrates a prevention-oriented strategy, as the Japanese government is actively using human oversight to avoid the risk of accidents resulting from technical faults. In South Korea, a Smart Car Council has been established to coordinate actions across ministries (West, 2016).

Liability
In most conventional car accidents, the driver retains some control over the vehicle and thus assumes primary liability for the vehicle's fate; however, persons in an AV are no longer in control (Collingwood, 2017; Douma & Palodichuk, 2012). Part or all of the responsibility will shift onto the AV as accidents become more an issue of product safety or efficacy; thus, third parties involved in the design of safety systems in AVs will face greater vulnerability to lawsuits involving product liability (Marchant & Lindor, 2012; Pinsent Masons, 2016). It is unclear how liability will be apportioned between the AV's autonomous system and the human driver. Will the human bear part of the responsibility for a crash if there is a manual override function they failed to use (Collingwood, 2017)? At the expense of privacy, black box data (from event data recorders (EDRs)) can be utilised to determine liability more accurately (Dhar, 2016). Moreover, no clear legal framework exists that outlines how liability is apportioned between the third parties responsible for designing AV systems (the manufacturer, supplier, software provider or software operator), making the identification and separation of the various components that caused a malfunction difficult (Collingwood, 2017; Pinsent Masons, 2016).
Manufacturers are increasingly vulnerable to the reputational risks imposed by accidents associated with failures in design and manufacturing (Hevelke & Nida-Rümelin, 2015; Tien, 2017). Current legal frameworks also do not define the practical and moral responsibilities of software programmers in designing "crash algorithms" that determine life-or-death decisions, raising numerous concerns over AVs' implications for public ethics (Fleetwood, 2017; Pinsent Masons, 2016). Governments have yet to address whether algorithms' decision-making criteria during accidents should be standardised. For instance, should decisions be prioritised by the likelihood, severity, and quality-of-life effects of the type of injury, or by the number of people injured (Fleetwood, 2017)? No government except the UK has yet amended its legal framework to incorporate these new complexities into the liability of drivers, manufacturers, software designers and other third parties (Duffy & Hopkins, 2013; HC 143, 2017).
The assignment of liability and the corresponding effects on insurance costs are currently unknown (Abdullah, 2016b). Injured third parties may resort to suing the manufacturer or software provider if responsibility belongs to the autonomous system. In the long run, high liability risks may weaken the incentive for manufacturers to innovate, slowing down further safety improvements for AV users (Gurney, 2013;Hevelke & Nida-Rümelin, 2015).
In the US, the federal government delegates most of the responsibility for determining liability rules to state governments (NHTSA, 2017). Currently, the Department of Transportation has shown no intention of establishing nation-wide rules for liability and insurance in the short run. NHTSA urges states to consider liability allocation, to determine who must carry motor vehicle insurance and to consider rules allocating tort liability. So far, most states have taken the first step towards a control-oriented strategy to address liability risks by revising their definitions of AVs (NHTSA, 2017).
At the moment, the UK is the only country that has adopted a toleration-oriented strategy to address liability and insurance risks; other countries have adopted either no-response or control-oriented strategies. At the end of 2016, the Centre for Connected & Autonomous Vehicles (CCAV, 2016) highlighted the legal gaps involving liability and insurance and proposed regulatory changes to the DfT. In response, Bill HC 143 (2017) was passed. The bill lays out a comprehensive list clarifying the liability of insurers and AV owners if an accident occurs, under a wide range of circumstances. Insurers are automatically liable for death or damages due to accidents caused by insured AVs (HC 143, 2017). An insurer's liability can, however, be limited in situations where the owner is deemed at fault. The bill thus resolves ambiguity regarding the apportioning of liability between insurers and the insured in AV accidents. Specifically, the bill ensures that liability for accidents involving AVs remains under the existing motor vehicle insurance scheme, providing accident victims faster access to compensation (CCAV, 2016; DfT, 2017b). Manufacturers are also protected under the Consumer Protection Act if they demonstrate that the vehicle was not defective at the time it was supplied and that the defect was only detected later due to scientific advancements (Coates, 2017).
Governments in Singapore and Australia have acknowledged the need to update liability laws. The Singapore government amended the RTA in 2017 to exempt AVs, their operators and those involved in AV trials from existing provisions of the RTA, which hold a human driver responsible for the use of vehicles on public roads (CNA, 2017). There is clear acknowledgement that the vehicle is now in the control of the AV system, and that AVs challenge the notion of human responsibility at the core of current road and criminal laws in Singapore (MOT, 2017). In Australia, the government plans to follow a stated timeframe for amending liability and insurance laws. The NTC plans to develop guidelines clarifying the different definitions of control for AVs by November 2017. After this, it is committed to reviewing current driving laws, establishing specific legal obligations for AV driving entities, and, if necessary, amending compulsory injury insurance schemes to identify potential barriers to the eligibility of occupants and accident victims by 2018 (NTC, 2017a). Overall, government efforts in both Singapore and Australia reflect a gradual approach towards regulatory reform and, thus, a movement towards a light control-oriented strategy to manage liability and insurance risks.
Currently, governments in China and South Korea have not indicated their regulatory stance towards liability and insurance risks, representing a no-response strategy. Notably, Baidu Inc. and automaker Zhejiang Geely Holding Co. have urged the Chinese government to speed up the drafting of regulations for AV testing (Bloomberg, 2016a).
The government in South Korea has mentioned that the lack of international standards is hindering the creation of domestic rules for AVs, as South Korea is a major importer and exporter of cars, requiring manufacturers to incorporate international standards for AVs (Ramirez, 2017).
The EU has not amended its legal framework to incorporate AV-related liability and insurance risks but is exploring solutions to liability issues. The European Commission (EC) launched GEAR 2030 in 2016 to explore solutions to AV-related issues, and in February 2017 the group made recommendations on the use of EDRs. In May 2016, Members of the European Parliament recommended that the EC create a mandatory insurance scheme and an accompanying fund to safeguard full compensation for victims of AV accidents, and that a legal status be created for all robots to determine liability in accidents (EP, 2017; EPCLA, 2016).
Like the UK, the government of Germany enacted permanent legislation in June 2017 to address AV-related liability risks. According to the law, AVs must be fitted with a black box that records the entire journey to determine liability during collisions (JDSUPRA, 2017; Wacket, Escritt, & Davis, 2017). The law also doubles the maximum liability limits imposed by the existing RTA and attempts to apportion liability between the manufacturer and the driver: the former is made responsible for accidents where the AV system is in charge and a system failure is the main culprit (Wacket et al., 2017). However, the law lacks clarification on what is considered an "adequate time reserve" that drivers are permitted before taking control when necessary, and on what grounds third parties own the data collected in the black box (JDSUPRA, 2017). Germany's new Ethics Commission has also published the world's first ethical guidelines for AVs. The guidelines recommend that there must always be clarity regarding who is considered the driver, which must be documented for determining liability. Moreover, they state that it is unethical for algorithms in the AV system to use an individual's data (such as their age or gender) as criteria for decision-making during unavoidable accident scenarios (FMTDI, 2017a). Although the guidelines are not mandatory, they are a first step towards resolving the ethical issues surrounding AVs. There has yet, however, to be an open discussion regarding the responsibility of persons designing such algorithms.
Japan's strategy towards liability and insurance risks can be classified as light control-oriented. The National Police Agency makes recommendations on actions to avoid liability risks but has not made them mandatory. For instance, it urges companies to install black boxes on AVs under testing to help ascertain the causes of accidents and take preventive measures (Nikkei, 2018). Also, testers of AVs are required to submit documents detailing the structures of vehicles and accident mitigation plans to the authorities. The operators or monitors of AVs through remote systems must have a driver's licence and bear responsibility for operational mistakes (Jiji, 2017; Japan Bullet, 2017). Manufacturers will be liable for defects in the system, but this does not include the software designer or other third parties involved in the initial design of the vehicle (Japan Bullet, 2017).

Privacy
AVs rely on sensors, high-definition maps and other instruments, from which information is collected and optimised to ensure the vehicle's safe operation (West, 2016; Dhar, 2016). However, concerns arise regarding who controls this information and how it is used (Anderson et al., 2014; Boeglin, 2015). Multiple issues regarding informational privacy remain unclear: the exact reasons why information is being collected, the types of information being collected, accessibility to the information and the permissible duration of information storage have not been clarified (Glancy, 2012). V2V and V2I communications allow information to be transmitted between AVs for safety reasons, but they also expose the vehicle's movements and geographical location to external networks, which people can access to locate an AV user (Glancy, 2012). Schoonmaker (2016) highlights the inadequacy of protecting location-based data through customer consent, as customers accept terms and conditions without fully understanding them. Another issue is the use of EDRs for ascertaining the exact causes of accidents, as this data may be sold to third parties such as insurance companies and used against drivers (Dhar, 2016; Pinsent Masons, 2016; Schoonmaker, 2016).
Other cited risks to informational privacy are the possibility of using this information to harass AV users through marketing and advertising, to steal users' identities, and to profile users and predict their actions, thereby concentrating information and power over large numbers of individuals (Glancy, 2012). While it is possible to anonymise the information collected, this can be reversed through deanonymisation.5 Deanonymisation algorithms can re-identify anonymised microdata with high probability, demonstrating that anonymisation is insufficient for data privacy (Gambs, Killijian, & del Prado Cortez, 2014; Narayanan & Shmatikov, 2008). This is a serious problem for location-based data, as human traces are unique, enabling an adversary to trace movements even with limited side information (Gambs et al., 2014; Gillespie, 2016). Also, access to interconnected6 AVs' wireless networks enables public and private agencies to conduct remote surveillance of AV users, which can undermine individual autonomy through psychological manipulation and intimidation (Glancy, 2012). Another emerging issue is the use of video surveillance in AVs that are used as a transportation service, such as autonomous taxis. As users do not own these AVs, it is unclear whether the vehicle is considered a "public space" where surveillance can be considered acceptable (Schoonmaker, 2016).
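The uniqueness of mobility traces that makes such deanonymisation feasible can be illustrated with a minimal sketch. All data and names below are synthetic and hypothetical; real attacks, such as those analysed by Gambs et al. (2014), use far richer probabilistic models, but the core idea is the same: a handful of coarse (time, place) observations is often enough to single out one pseudonymous trace.

```python
# Synthetic anonymised mobility traces: pseudonym -> set of (hour, cell) points.
# Names and cell identifiers are hypothetical, for illustration only.
anonymised_traces = {
    "user_a": {(8, "cell_12"), (9, "cell_30"), (18, "cell_12")},
    "user_b": {(8, "cell_12"), (9, "cell_44"), (18, "cell_07")},
    "user_c": {(8, "cell_05"), (9, "cell_30"), (18, "cell_13")},
}

def reidentify(side_information, traces):
    """Return pseudonyms whose traces contain every observed point."""
    return [uid for uid, trace in traces.items()
            if side_information <= trace]  # subset test on (hour, cell) pairs

# An adversary who merely saw the target at cell_30 at 9am and at
# cell_12 at 6pm narrows the anonymised set down to a single pseudonym:
matches = reidentify({(9, "cell_30"), (18, "cell_12")}, anonymised_traces)
print(matches)  # → ['user_a']
```

With only one observation the candidate set is larger, but each additional point of side information shrinks it rapidly, which is why anonymisation alone is a weak protection for location data.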
The governments in the US and South Korea have enacted new legislation on data privacy that applies to all vehicles (both AVs and conventional vehicles). In the US, the new SPY Car Act gives NHTSA the authority to protect the use of (and access to) driving data in all vehicles manufactured for sale in the US (SCA, 2017). All vehicles must provide owners or lessees the ability to stop data collection, except for data essential for safety and post-incident investigations, and manufacturers are prohibited from using the collected data for marketing or advertising without consent from the owners or lessees. Similarly, the South Korean government amended its Vehicle Management Act, effective February 2016, which establishes conditions for the issuance of temporary licences to test AVs and sets requirements on data collection for all vehicles. Any individual must obtain approval from the Minister of Land, Infrastructure and Transport (LIT) before using collected data. The Act does not, however, specify the extent of information sharing in different conditions. It mentions that approval will be granted in a way that does not violate the privacy of vehicle owners, and that the standards for approval will be determined by the Minister of LIT (MVMA, 2017).
The EU has taken steps to manage privacy and cybersecurity risks applicable to all data in the region, demonstrating a control-oriented strategy. The European Parliament's Intelligent Transport Systems (ITS) Action Plan in 2009 (EP, 2009) emphasised the need to protect personal privacy from the early stages of designing ITS, and the EC released a study in 2012 assessing possible methods to ensure data protection in ITS (Pillath, 2016). These efforts were consolidated through the Declaration of Amsterdam (MIE, 2017). The Data Protection Directive 95/46/EC of 1995 was then updated through the EU General Data Protection Regulation (EU GDPR), which was ratified in 2016 and will become effective in May 2018.7 The regulation will apply to all companies processing data from subjects residing in the EU, regardless of the location of the company, extending control of data beyond geographical borders (EU, 2016). It also strengthens the conditions for consent, increases penalties to a maximum fine of 4% of a company's global revenue, and protects the right to be forgotten and the "right to explanation", which allows citizens to review particular algorithmic decisions (Metz, 2016). The EU has already fined Google on several occasions, demonstrating its commitment to privacy (Eben, 2018; West, 2016). However, stringent application of these rules may impede AV developments; for instance, high-definition mapping requires geo-coded data to improve AVs' navigational abilities. Excessive regulation of data usage may also disadvantage European manufacturers, and it may be difficult to enforce the GDPR on non-European manufacturers (Pinsent Masons, 2016).
China and Japan have also taken legislative action to control privacy and cybersecurity risks applying to all personal data, demonstrating a control-oriented strategy. For instance, Japan amended its Privacy Protection Law in 2017 (The Japan Times, 2017). China, too, has enacted a new Cybersecurity Law requiring the anonymisation of all forms of personal information. It emphasises customer consent and requires network operators to be transparent regarding the purpose, method, and scope of data collection and use (KPMG, 2017). Overall, the law establishes many controls on the collection, use and sharing of personal data but does not include AV-specific provisions.
The Singapore government has adopted a control-oriented strategy to address privacy risks, both in general and specifically for data shared between public sector agencies. The government is in the process of amending the Personal Data Protection Act (PDPA). A public consultation issued in July 2017 proposes amendments to the PDPA, such as increasing transparency regarding the collection and use of personal information and providing individuals with the option to withdraw their consent to these data-collection activities (PDPC, 2018). The government has also enacted the Public Sector (Governance) Bill, which prohibits the unauthorised use and sharing of data between public sector agencies. The bill is designed to improve the delivery of public services in Singapore, particularly in the aspects of efficiency and "programme management" (PSGB, 2017).
Germany and Australia have not amended existing legislation to address AV-specific privacy risks. Germany's new AV bill addresses safety and liability risks but does not include provisions for data privacy. The German government has, however, indicated its intention to incorporate privacy concerns when the bill is revised in two years (Wacket et al., 2017). Australia's NTC has also released privacy recommendations, such as adopting a "privacy by design" approach and refraining from generating personal information "wherever possible"; this last phrase, however, may suggest that these recommendations are rhetorical overtures (Daly, 2017). Thus, these principles may represent a formal commitment to risk control rather than specific steps to control AV privacy risks. The NTC also recommended that the upcoming national safety assurance system incorporate elements of privacy protection at the highest possible level (NTC, 2017a). More recently, the House of Representatives Standing Committee on Industry, Innovation, Science and Resources (SCIISR) encouraged public participation in an inquiry into the social implications of AVs and recommended further investigating the data rights of consumers, insurers, government agencies, and manufacturers (NTC, 2017a). This engagement with the public to build consensus in addressing privacy risks reflects an adaptation-oriented strategy.
The UK's DfT, in collaboration with the Centre for the Protection of National Infrastructure (CPNI), has created key principles for privacy and cybersecurity. The guidelines recommend that manufacturers follow ISO standards, such as the Privacy Architecture framework outlined by ISO 29101 (DfT, 2017a), demonstrating a light control-oriented strategy. The principles state that personal information must be "managed properly" with respect to what is stored and transmitted, how it is used, the data owner's control over these processes, and AV users' ability to delete "sensitive data". However, what constitutes "proper" management of personal information or "sensitive" data is not defined. These efforts indicate the government's awareness of AV-specific privacy risks, and the non-binding nature of the guidelines supports the government's aspiration to become a world-leading hub for AV research and development by avoiding actions that may impede this aim (DfT, 2017a).
In Germany, 13 voluntary recommendations for AVs have been released, and notably, it is recommended that specific rules clarify the data that businesses can process without "explicit consent" from AV users (FMTDI, 2017b). Similar to the EU's GDPR, these recommendations apply to all data and emphasise complete transparency and drivers' full authority over the use of personal data collected from the AV. Germany's current data protection laws define personal data strictly, covering any information with even the slightest link to an individual, and it is likely that most connected-AV data will be considered personal data unless data-generating items have been designed to anonymise it (Pinsent Masons, 2016).

Cybersecurity
Cybersecurity threats to conventional vehicles with automated features already exist. In their survey of 5000 respondents across 109 countries, Kyriakidis, Happee, and de Winter (2015) found that people were most concerned about software hacking and misuse of vehicles at all levels of automation. Hackers could take control of a vehicle through wireless networks (such as Bluetooth, keyless entry systems, cellular or other connections) as the car connects with its environment (Lee, 2017). With their ability to store and transmit transaction and lifestyle data, AVs are attractive targets for hackers, as such information can be sold for financial gain, or these systems can be used to inflict physical harm by extremists or for illegal purposes by drug traffickers (König & Neumayr, 2017; Lee, 2017). For instance, Miller and Valasek demonstrated in 2013 that malicious attacks on AVs are a near-term possibility, hacking a Chrysler Jeep through its internet connection and taking control of its engine and brakes (Schellekens, 2016).
Various studies have analysed the possible cybersecurity threats to AVs: as computers possess greater control over the movements of an AV, AVs are more vulnerable to hacking than conventional vehicles, and the driver is less able to intervene during an attack (Hern, 2016; Lee, 2017). Without sufficient security, V2V and V2I communication channels can be hacked, which can lead to serious accidents (Dominic, Chhawri, Eustice, Ma, & Weimerskirch, 2016; Pinsent Masons, 2016). Injection of fake messages and spoofing of global navigation satellite systems (GNSS) are some of the major threats that AVs will face, as GNSS data can be manipulated to undermine AVs' safety-critical functions (Bagloee, Tavana, Asadi, & Oliver, 2016). Other threats include sensor manipulation to disorient the AV's systems, bright lights to blind cameras, and ultrasound or radar interference to blind an AV to incoming obstacles (Page & Krayem, 2017; West, 2016). While systems may be installed to detect such malfunctions, these require software updates as well as changes to existing standardised security architectures (Bagloee et al., 2016).
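Why message injection is so central a threat, and why authenticated messaging is the standard countermeasure, can be sketched in a few lines. This is a deliberately simplified illustration: deployed V2X security (e.g. under the IEEE 1609.2 standard) uses certificate-based digital signatures rather than the hypothetical pre-shared HMAC key assumed below, but the principle is the same — a receiver rejects any message that lacks a valid cryptographic tag.

```python
import hashlib
import hmac

# Hypothetical pre-shared key, for illustration only; real V2X systems
# use per-vehicle certificates and digital signatures instead.
SHARED_KEY = b"fleet-demo-key"

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Compute an HMAC-SHA256 authentication tag for a V2V message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Check a received message against its tag (constant-time compare)."""
    return hmac.compare_digest(sign(message, key), tag)

genuine = b"v2v:brake-ahead;lat=1.3521;lon=103.8198"
tag = sign(genuine)

# An attacker who injects a fabricated "all clear" message cannot
# produce a valid tag without the key, so the receiver discards it.
forged = b"v2v:all-clear;lat=1.3521;lon=103.8198"

accepted = verify(genuine, tag)       # genuine message is accepted
rejected = not verify(forged, tag)    # injected message is rejected
```

Authentication of this kind addresses message injection, but not GNSS spoofing or sensor blinding, which attack the vehicle's inputs rather than its communication channel; those require separate detection mechanisms, as the studies above note.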
Most governments have developed non-mandatory guidelines on cybersecurity best practices and conducted research to explore the implications of AVs for cybersecurity. Governments in the US, China, the EU, and Singapore have adopted a control-oriented strategy and have introduced or enacted new legislation to address cybersecurity risks.
In the US, NHTSA's voluntary guidelines recommend that manufacturers and software companies design AV systems according to existing international standards, such as those published by the National Institute of Standards and Technology, NHTSA, SAE, and the Alliance of Automobile Manufacturers (NHTSA, 2017). A new electronics systems safety research department has been set up to evaluate and monitor potential cyber vulnerabilities, and an internal agency working group, the Electronics Council, has been established to enhance collaboration on electronics and cybersecurity research (NHTSA, 2018). These changes represent attempts to gain more awareness of cybersecurity risks and to raise that awareness among automakers and software companies. The SPY Car Act was also introduced to enhance controls on cybersecurity and privacy for all vehicles (SCA, 2017).8 According to this law, critical and non-critical software systems in every vehicle must be separated, and all vehicles will be evaluated using best practices. It introduces specifications to ensure the security of collected information in vehicle electronic systems while the data is on the vehicle, in transit from the vehicle to a different location, or in any off-board storage. It also requires vehicles to be able to instantaneously detect, stop and report attempts to capture driving data or take control of the vehicle, and to display the extent to which they protect the privacy and cybersecurity of consumers.
Cybersecurity is not a new concern in the EU. It has taken incremental steps to control cybersecurity risks over the last few years, although they are not AV-specific. The EU Cybersecurity strategy was introduced in 2013, followed by the Directive on the security of network and information systems in 2016 (EC, 2017). The latter was the first EU-wide legislation on cybersecurity. Further efforts have been taken by various EU organisations to raise awareness and provide recommendations on how to address cybersecurity issues. In 2016, the EU's independent advisory body on data protection and privacy, the Data Protection Working Party, published its views to raise awareness about developments in the IoT and its associated security issues (Pillath, 2016).
Like Europe's GDPR, China's latest Cybersecurity Law represents a control-oriented strategy. Key provisions of the law cover personal information protection, critical information infrastructure protection, network operators' responsibilities to ensure security, preservation of sensitive information within China, certification of security products, and penalties for violations (KPMG, 2017). One example of network operators' responsibilities is the requirement for critical information infrastructure operators to store personal data within China and for companies to gain approval and pass national reviews before moving data overseas (He, 2018). Critical cyber equipment and special cybersecurity products can only be sold after receiving security certifications (KPMG, 2017). The government in Singapore has also amended existing legislation to control different aspects of cybersecurity risks. Singapore's Computer Misuse and Cybersecurity Act was amended in April 2017 to strengthen businesses' responses to computer-related offences (Kwang, 2017). Other steps have been taken to raise awareness of cybersecurity, such as through local institutes of higher learning and partnerships between academia and the private sector. By doing so, the government aims to turn this into an opportunity for Singapore to become a leading cybersecurity service provider, demonstrating an adaptation-oriented strategy, and there are plans to set up a national Defence Cyber Organisation (Srikanthan, 2017).
The UK government has not yet exerted legal control over cybersecurity risks in AVs but is taking steps to increase awareness and strengthen the resilience of AVs against such risks. It has implemented two cybersecurity strategies applying to all cyber systems in the UK. The National Cyber Security Strategy 2016-2021 focuses on promoting further research into cybersecurity for all systems, to produce successful products and services and strengthen the UK's position as a world leader in cybersecurity by 2021 (Cabinet Office, 2016). A National Cyber Security Centre (NCSC) was established in 2016 to analyse and detect cyber threats. The strategy also targets autonomous systems, which may receive funding for research under the upcoming Cyber Science and Technology Strategy (Cabinet Office, 2016). In addition, the strategy aims to stimulate growth in the cybersecurity sector and to enhance citizens' responses to these threats, which represents an effort to enhance the country's adaptive capacity. The government's adaptation-oriented approach is also reflected in the DfT and CPNI's key principles for privacy and cybersecurity, which recommend designing the AV system to be resilient to attacks and to respond appropriately when its defences or sensors fail (DfT, 2017a).
South Korea has amended its Vehicle Management Act, but it does not include provisions related to AV cybersecurity (MVMA, 2017). Australia and Germany have not amended legislation on cybersecurity but are exploring the security risks arising from AVs. The government in Germany has set up five working groups to research AV-related issues such as cybersecurity and data protection (ERTRAC, 2017). More recently, in Australia, it was recommended that the National Cybersecurity Strategy investigate AVs and associated systems to address potential vulnerabilities, and that a national taskforce be established to coordinate the introduction of AVs (SCIISR, 2017). The Japanese government appears to have adopted a no-response strategy, as it has neither amended its RTA nor provided recommendations on either general or AV-specific cybersecurity risks. The government has, however, indicated its intention to gain more awareness and revise laws on liability and cybersecurity issues (Nikkei, 2015).

Industry influence
The literature suggests that technological advancements pose a threat to many existing low-skilled, manual jobs, as these are easily automated (Brynjolfsson & McAfee, 2011; Frey & Osborne, 2017). Drivers and mechanics are especially at risk, as their value-added is derived from the driving task and they tend to be older and less educated (Alonso Raposo et al., 2018). If the regulatory environment favours widespread adoption, AVs will have immense employment implications. Simulation studies suggest that taxi fleets could be reduced to 10% of their current size in Berlin, and to one third in Singapore if autonomous taxi services also replaced traditional public transport (Bischoff & Maciejewski, 2016; Spieser et al., 2014). In Singapore, where the start-up nuTonomy launched the world's first driverless taxis, nearly half of privately-owned cars may become redundant in future (Liang & Durbin, 2016). Truck drivers and bus drivers are also at risk owing to the massive cost savings from eliminating labour (Anderson et al., 2014; Clements & Kockelman, 2017; Frey & Osborne, 2017). It is estimated that the trucking and delivery industries will gain $100-$500 billion from AVs by 2025, most of which will come from eliminating drivers' wages, while shifting truck drivers to more technical roles, such as monitoring AV systems, will barely make up for the millions of jobs lost (Clements & Kockelman, 2017). Overall, the net economic effects of introducing AVs are estimated to be positive, but the redistribution of employment will negatively affect lower-skilled workers the most, as displaced workers may spill over into other low-skilled occupations, creating downward pressure on wages, which can exacerbate inequality (Alonso Raposo et al., 2018).
A few countries recognise the threat AVs pose to employment, although they have yet to formulate detailed strategies to address it. The US Transportation Secretary has voiced her concerns over the impact of AVs on employment (Reuters, 2017). In Australia, the SCIISR (2017) noted concerns about the negative implications of automation for professional drivers and acknowledged the impact of AVs on other sectors, such as the motor trades sector, insurers, repairers, and road enforcement officers. To minimise these potential negative effects, it urged transitioning the workforce as soon as possible. Singapore's government has conveyed its intention to retrain future displaced workers progressively through programmes that help them acquire new skills and move into higher value-added jobs (CNA, 2017). Much emphasis is placed on helping the Singaporean workforce cope with and adapt to inevitable disruption. Autonomous buses can help fill the shortage of bus drivers (CNA, 2017), and AVs can be used for street-cleaning purposes (Abdullah, 2016a); thus, the risk of disruption to employment in the public transportation sector is low relative to countries without manpower constraints. These public statements signal the Singapore government's intention to transform AV-specific employment risks into a beneficial opportunity for the nation's economy, demonstrating an adaptation-oriented strategy.

Conclusion
This study aimed to obtain an overview of the governance strategies adopted so far in various countries in response to AV developments. As the basis of our analysis of government responses, we identified different technological risks associated with AVs and focused on five categories of risks: safety, liability, privacy, cybersecurity, and industry influence. Table 3 highlights the strategies adopted by different countries for addressing these AV risks.
Research shows that AV-related safety risks may arise from less cautious behaviour by vehicle occupants and road users, system errors, and the lack of regulation of crash algorithms that determine life-or-death outcomes during unavoidable accidents. Safety performance may improve over time if the public accepts mass deployment, which would allow AVs to gain more real-world driving experience. In response, most national governments have avoided overly stringent measures to manage safety risks and have adopted light control-oriented strategies in the form of non-mandatory AV testing guidelines, with the aim of promoting AV development. Given that AV development is at an early stage, councils or working groups have been created to explore the implications of the technology. Germany and Singapore have gone further and implemented new regulations, whereas China and Japan are currently developing regulations to govern the safety of AV testing. Australia has sought public consensus to address AV safety risks, demonstrating a move towards an adaptation-oriented strategy.
Lack of clarity regarding how liability is apportioned between AV occupants, AV manufacturers and other third parties along the supply chain may increase liability and reputational risks for manufacturers during accidents. To address liability and insurance risks, most governments have either displayed no response or adopted light control-oriented strategies in the form of voluntary guidelines, exploring possible options before enacting legislation. The UK's new law resolves significant ambiguity regarding the liability and insurance implications of AVs under various accident scenarios, reflecting the government's toleration-oriented approach. Germany has enacted a similar law that provides less clarity than the UK's regarding the responsibilities of drivers and data ownership permissions and thus reflects a control-oriented strategy.
The literature also highlights the privacy risks that emerge alongside AVs. Data storage and transmission capabilities allow third parties to gain access to customers' personal information and use it for advertising, user profiling, and location tracking. Responses to privacy risks range from enacting new data privacy laws and relying on existing data privacy laws to making recommendations on privacy principles. The EU and the governments of most surveyed countries have developed new regulations, not specific to AVs, to control the access to, use and sharing of personal data; these provisions vary in scope and in the extent of control given to consumers, among other aspects. Exceptions are the governments of Australia and the UK, which have made privacy recommendations. Countries that adopt light control-oriented strategies intend to regulate AV privacy risks in future, reflecting a dominant pattern towards control-oriented strategies. Australia's government has also pursued the less common strategy of building consensus with the public to address privacy risks.
AV communication networks are vulnerable to malicious attacks that undermine cyber and physical security. Responses to cybersecurity issues vary considerably among the surveyed countries, ranging from amending or introducing non-AV-specific legislation to creating working groups to explore these issues, funding cybersecurity-related research in the private sector, and providing cybersecurity principles to manufacturers. The release of cybersecurity principles reflects governments' intentions to gradually shape AV developments alongside technological progression before making any hasty policy decisions. The US, China and Singapore have enacted cybersecurity laws that are not specific to AVs; Germany and Australia are still gaining awareness of AV cybersecurity risks; and the UK and Singapore have displayed intentions to treat cybersecurity risks as an opportunity to improve their nations' adaptive capacity. Overall, the strategies taken by most countries to address cybersecurity risks encompass all systems in general rather than being specific to AVs.
Our research shows that AVs can also disrupt the public transportation and trucking industries, as AVs can displace workers from jobs that are easily automated. Most governments have not responded to these risks, but Singapore has begun programmes to retrain workers who might be negatively affected, while some governments have begun studying and regulating other risks, such as those AVs pose to the environment, congestion, and government revenues.