Unpacking the evidence elasticity of digital traces

Abstract Digital traces are sought in most criminal investigations, due to their usefulness in shedding light on the case circumstances and evidential themes. They are often trusted to be value-neutral and credible. However, a recent study on digital forensic (DF) decision-making revealed that the human factor greatly influences the construction of digital evidence. The study found that DF practitioners were biased by contextual information and produced inconsistent results during DF casework. This article applies a qualitative lens to explore how the statistically determined variance materialises in DF reports. The article examines the role of interpretative flexibility when the DF practitioner constructs the digital evidence. It develops “evidence elasticity” as a concept for describing the mutability of digital traces as knowledge objects. The article explores the role of evidence elasticity for constructing narratives involving digital evidence and how this sometimes may result in misinformation with the propensity to mislead actors in the criminal justice chain, such as the investigators, prosecutors or judges.


Introduction
Digital information from various sources such as mobile phones and laptops is frequently seized in criminal investigations. According to the Digital Forensic Strategy issued by the UK National Police Chiefs' Council, over 90% of all recorded crimes have a digital element (NPCC, 2020). Digital traces are transformed into evidence through the DF process, an iterative multistage process for the identification, collection, examination, analysis and presentation of the evidence (Flaglien, 2018). The DF process is primarily conducted by personnel with adequate expertise, the DF practitioners, to ensure a forensically sound transformation (McKemmish, 2008).
Despite the available knowledge about the technical sources of errors in digital traces and the necessity of error mitigation (e.g., Casey, 2002; SWGDE, 2018), digital evidence is often trusted to provide credible and value-neutral representations of the truth (e.g., Erlandsen, 2019; Kaufmann, 2017; Reedy, 2020). According to Brookman et al. (2020, p. 23), "the 'voice' of science and technology seems to carry greater weight than other forms of information (in the minds of criminal justice actors at least) in criminal justice narratives of this kind". Such presumptions might be related to common techno-fallacies: beliefs that technology is neutral, that "facts" speak for themselves, or the belief in completely fail-safe systems (Marx & Guzik, 2017). These beliefs may rest on assumptions of "mechanical objectivity" (Daston, 1992; Daston & Galison, 2007), which implies that machines produce richer, better, and truer evidence than humans. The trust in digital evidence is even reflected in common law, for example, in England and Wales, where the rule that governs the admissibility of electronic evidence states a presumption that computer systems are reliable: "in the absence of evidence to the contrary, the courts will presume that mechanical instruments were in order at the material time" (The Law Commission, 1997, para 13.13).

ABOUT THE AUTHOR: Nina Sunde (MSc) is a Police Superintendent and a lecturer at the Norwegian Police University College. She is also a PhD student at the University of Oslo, Department of Criminology and Sociology of Law, with the project "The digital forensic practitioner's role in constructing digital evidence in a criminal investigation". She has more than 20 years of experience from the Norwegian police, mainly in conducting and teaching criminal investigation and digital forensics.
The problem, however, is that if digital evidence is believed to be a value-neutral, objective and credible representation of the truth, it may also be assumed to be free from technical and human error. The consequence of such a misconception may be less scrutiny and quality control. Flawed or misleading digital evidence may have major consequences for people's lives. It may lead to erroneous charges and arrests in the pre-trial phase, as well as wrongful convictions (e.g., Smit et al., 2018).
The DF discipline has largely focused its attention on technical sources of error (e.g., Casey, 2002; SWGDE, 2018) in order to prevent evidence dynamics, described as "any influence that changes, relocates, obscures, or obliterates evidence, regardless of intent between the time evidence is transferred and the time the case is resolved" (Casey, 2011, p. 27). Evidence dynamics relates to material changes in the evidence (Casey, 2011), and the concept does not encompass factors influencing the digital trace as a knowledge object. As this study will show, even if the evidence integrity is safeguarded and maintained from start to end, the evidence may still be constructed differently due to the interpretative flexibility (Collins, 1981; Doherty et al., 2006) of digital traces. Kruse (2021) argues that, even if objects such as trace evidence are stable enough to be moved between epistemic cultures (Knorr-Cetina, 1999) in the criminal justice system, the knowledge they are meant to move does not necessarily remain stable, and that this mutability is problematic in a legal context. Ask et al. (2008, p. 1247) use the term elasticity to describe "the level of ambiguity associated with a piece of information, which leaves room for subjective interpretations". Since the article explores the mutability of digital traces resulting from the subjective interpretation by DF practitioners, this phenomenon is further referred to as evidence elasticity. Due to the subjective nature of the DF investigative process, it is of particular value to gain insight into how digital traces may be turned into misinformation, which, in a worst-case scenario, may mislead the legal decision-makers and result in miscarriages of justice. The article draws on Søe's (2019) unified account of information, misinformation and disinformation, which considers information to be misleading when it has the propensity to cause false beliefs (Søe, 2019).
The article centres on misinformation, which entails that the information's misleading nature is unintended and the result of an honest mistake, such as an unconscious bias, as opposed to disinformation, which is intended to mislead the recipient (Søe, 2019, p. 5946). Evidence elasticity and misinformation are discussed in relation to legal concepts for assessing evidential value, as described by Anderson et al. (2005). The aim is to explore whether and how the evidential value is crafted by the DF practitioner and to shed light on how the evidence elasticity enables the DF practitioner to turn the traces into misinformation with the propensity to mislead the legal decision-maker's assessment of evidential value.
The article is structured as follows: First, related research and the theoretical perspectives are outlined, followed by a description of the applied methods. The results and discussion section presents and discusses the elasticity of digital traces and explores its range and function for constructing digital evidence and the narratives involving such evidence. Then, evidence elasticity is discussed in relation to the legal concepts for assessing evidential value to elucidate how misinformation may influence the legal decision-maker's assessments concerning the relevance, credibility and inferential/probative force or weight of digital evidence. Finally, some concluding remarks are offered.

Related research and theoretical perspectives
Scientific practice and knowledge construction have been the subject of several empirical studies (e.g., Knorr-Cetina, 1982; Latour & Woolgar, 1979), and several studies within the Science and Technology Studies (STS) scholarly tradition have explored the construction of forensic evidence (e.g., Cole, 2001; Dahl, 2009; Kruse, 2016). There is a sparse but growing body of empirical research on various aspects of DF practice, such as acquisition tasks (Carlton, 2007; Hewling, 2013), preview and triage (James & Gladyshev, 2013; Rappert et al., 2021; Wilson-Kovacs, 2019), investigative strategies for content analysis (Haraldseid, 2021), reporting practices (Stoykova et al., 2022; Sunde, 2021), quality procedures (Jahren, 2020; Tully et al., 2020) and collaboration practices and professional dynamics between investigation officers and DF practitioners (Hansen et al., 2017; Wilson-Kovacs, 2021). Ask et al. (2008) explored whether different evidence types varied in their elasticity. They found that asymmetrical scepticism among the participating police trainees was stronger when judging the reliability of witness evidence compared to DNA and CCTV footage. Brookman et al. (2020) studied how digital evidence such as CCTV footage contributed to identifying and charging suspects in homicide investigations and the risks associated with such evidence. They identified packaging for court as a highly constructive process, which involved decisions about which images to include and exclude for court, ordering and highlighting important information (Brookman & Jones, 2021). This article advances these insights by exploring the elasticity of digital traces mediated by DF practitioners.
Former studies of criminal justice narratives have followed the evidence and the developing narratives through the justice chain (e.g., Brookman et al., 2020; Costa & Santos, 2019; Dahl, 2009; Kruse, 2016; Lam, 2016; Offit, 2019; Santos, 2014). The novelty of the current research relates to exploring the multiple narratives constructed and "framed" (Goffman, 1975) based on the same evidence file.
The analysis draws on concepts from Actor-network theory (ANT), which views both human and nonhuman entities as actors with agency (Latour, 2005). The DF practitioner is at the focal point of the article. Central non-human actors in this study are the assignments provided to the DF practitioners and the reports produced as a result of the DF process, where the findings are represented in text or visualisations. In ANT, the concept of inscription entails turning matter into symbolic forms, such as text, graphs, diagrams and lists (Kruse, 2016). This article addresses the shortcomings of the evidence dynamics concept by drawing particular attention to the function of evidence elasticity when DF practitioners turn digital traces into inscriptions in DF reports.

Material and methods
The article is based on empirical data from DF reports obtained in research involving a quasi-experimental design (further referred to as "the DF experiment") with 53 DF practitioners from eight countries (Sunde & Dror, 2021). The primary purpose of the DF experiment was to explore contextual bias and inter-practitioner reliability (consistency) in DF decision-making. In short, the DF practitioners were given a vignette with a scenario description, an assignment, and a mock evidence file (see Appendix 1 in Sunde & Dror, 2021). They were asked individually to analyse the evidence file and document their findings in a report. According to the vignette, the evidence file was secured from the laptop of Jean Jones, CFO of the firm M57.biz. There had been an information leakage from M57.biz, which involved the posting of confidential information about the employees on a competitor's web forum. The information allegedly originated from Jean Jones' laptop. The participants were asked to find out what had happened, clarify Jean Jones' involvement in the incident and write a report on the result.
To explore whether they were prone to contextual bias, the participants received different contextual information indicating guilt and innocence in the vignette. A control group received only the basic scenario. The inter-practitioner reliability was also explored for each experiment group. For details about the design and methods applied in the DF experiment, see Sunde and Dror (2021). The results of the DF experiment indicated that contextual information led to biased observations, i.e. systematic differences in how many traces each of the DF practitioners discovered and deemed relevant to include in the report. Further, the results showed low reliability in inter-practitioner observations, interpretations and conclusions (Sunde & Dror, 2021).
This article applies a critical qualitative lens to explore how the variation and inconsistency in the results came about in the DF practitioners' reports from the DF experiment. A thematic analysis method (Braun & Clarke, 2006) was applied, through a combined deductive and inductive approach in QSR International NVivo Pro Edition, Version 12.1.1.256 (64-bit). Some themes were already identified during the initial coding for the statistical analysis of the DF experiment, which guided the deductive analysis approach. The inductive analysis focused on additional variance elements not directly related to the statistical measurements. Table 1 presents the codes and themes emerging from the analysis, which also form the structure of the Results and discussion section. The analysis centred on the diversity of the descriptions, and the frequency of traces is only mentioned when it is considered relevant for the elasticity or quality aspects (e.g., false positives). Quotes from the DF practitioners' reports are marked with "D" and a number (e.g., D56). An English translation is offered when the original quote was in Norwegian.

Results and discussion
The DF process begins with identifying and collecting devices with data, typically smartphones or laptops. Due to established principles for forensically sound handling, the device is not accessed immediately. Instead, a DF practitioner acquires a copy of the device's content (referred to as an evidence file) and performs the subsequent analysis on the evidence file. Software is used to process the evidence file, enabling the DF practitioner to explore the content as meaningful information. After conducting the analysis, the DF practitioner writes a report on how the DF investigation was performed and their findings.
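The forensic soundness of this copy-then-analyse workflow rests on being able to demonstrate that the evidence file has not changed between acquisition and analysis, which is conventionally done by comparing cryptographic hashes. The following is a minimal illustrative sketch of that principle only, not of any specific forensic tool (real acquisition formats such as E01 embed their own checksums):

```python
import hashlib

def file_sha256(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_integrity(path, acquisition_hash):
    """Re-hash the evidence file and compare with the hash recorded at acquisition."""
    return file_sha256(path) == acquisition_hash
```

A matching hash shows that the bits are unchanged; as the article argues, it says nothing about whether the interpretation of those bits is sound.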
This study follows the digital traces through the analysis and presentation stages of the DF process (Flaglien, 2018). It starts at the point when the DF practitioner receives the assignment and the copy of Jean Jones' laptop (the evidence file). Key objects and names from the DF experiment are presented in Table 2.
First, the DF practitioner's reproduction of the assignment is explored, followed by an examination of their descriptions of central digital traces and events found on the evidence file. Finally, their reported conclusions are analysed and discussed.

Interpretations of the assignment
The DF practitioner typically receives an assignment from the criminal investigation team, which guides the scope of the DF analysis (Sunde & Dror, 2021). In the DF experiment, all participants received the same assignment in writing: "What has happened, and what was Jean's involvement in the reported incident?". Despite receiving the same written assignment, a correct account of the assignment was found in approximately half of the DF reports, and almost one-fifth of the reported assignments diverged substantially from the original assignment (Sunde, 2021, p. 591). Examples of the varying assignments stated in reports are included in Table 3. Kruse (2016) points out that, although science and crime scene investigations have much in common, there is a fundamental difference in their purposes: Research aims to contribute to the growing body of generalised knowledge, while forensic science aims to produce knowledge about the particular. The assignment defines what in particular the DF practitioner should produce knowledge about, and the deviant accounts of the assignment demonstrate that a written assignment does not guarantee that the sender and the receiver read the same meaning into it. The deviating accounts of assignments are an important finding. They demonstrate that it is not the assignment itself but the DF practitioners' translation of it that guides their further discovery process and assessments of what traces are relevant or irrelevant to the investigation. This shows that the DF practitioner may start the analysis with a different assignment in mind than the one they originally received, highlighting the importance of DF practitioners documenting the assignment in DF reports, to enable scrutiny and detection of misinterpretations that may have led the DF investigation astray.

Table 2. Names and objects

The evidence file: The accurate copy of the information on the suspect's computer. File name: nps-2008-jean.E01
Jean Jones: The CFO of the company M57.biz and user of the seized computer, which is referred to as "the suspect's computer" or "the evidence file"

Reconstructing what had happened
Digital evidence is co-constructed within what may be referred to as a socio-technical expert system, where highly skilled DF practitioners perform physical and cognitive tasks in interaction with powerful hardware and custom-designed software. The technology extends their cognitive abilities and helps them make sense of digital information collected during an investigation (see, Brey, 2017). They operate within a cultural environment, with domain-specific processes, procedures and defined goals. The DF expertise, combined with access to the necessary technology, makes the DF practitioner an obligatory passage point (Callon, 1986) for digital evidence in a criminal investigation.
The DF practitioner plays an active role in the DF process and makes numerous judgements and decisions that impact which findings are relevant to report, how to assess, control and document their credibility, and how to evaluate their evidential value (Sunde, 2021; Sunde & Dror, 2019). In their reports, the DF practitioners present findings, which are an overview of the information they have deemed relevant to the investigation. They also provide opinions, which are statements about what the traces are, what they mean and their evidential value. The original data (the input) and the reported results are observable by others. However, the cognitive processes involved in the judgements and decisions are invisible and may thus be perceived as a black box. The experimental setting provided a unique opportunity to observe the consequences of diverse interpretations of the same evidence file, through the study of the reports produced by the 53 DF practitioners participating in the DF experiment.
The DF experiment vignette indicated that the allegedly leaked spreadsheet with confidential information was of particular interest. The DF practitioners were informed of the file name "m57plan.xls", and that the file had been discovered on the competitor's web forum. A snapshot of the leaked spreadsheet was also enclosed. To determine whether the file was leaked from the suspect's computer, an essential investigative step would be to establish whether there were traces of a similar spreadsheet on her computer. The DF practitioners could deploy several strategies to find the file, for example, a text string search for the filename or for text/content in the spreadsheet, or a search for all spreadsheets in case the spreadsheet had been renamed. The results related to this issue are discussed below.

Table 3. Examples of reported assignments

Original assignment: What has happened, and what was Jean's involvement in the reported incident?

Reported assignments:
• "to find out how internal business information was given to a competing business corporation" (D2).
• "to determine whether there were files/material on the evidence file that could provide information about the suspect's role in this matter" (D6).
• "whether there was a data breach at the firm M57.biz and Jean Jones involvement in the data breach" (D31).
• "to perform a content analysis of the evidence file, and to document any findings that may shed light on the case" (D48).
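These search strategies can be sketched in outline. Real DF tools search the raw evidence file itself, including deleted files and unallocated space; the minimal sketch below only walks an extracted or mounted file tree, and all names and strings are hypothetical. Note that an empty result supports, but does not prove, a file's absence:

```python
import os

def find_by_name(root, filename):
    """Return paths under `root` whose file name matches exactly (case-insensitive)."""
    return [os.path.join(dirpath, name)
            for dirpath, _dirnames, filenames in os.walk(root)
            for name in filenames
            if name.lower() == filename.lower()]

def find_spreadsheets(root, extensions=(".xls", ".xlsx")):
    """List all spreadsheet files, in case the file of interest was renamed."""
    return [os.path.join(dirpath, name)
            for dirpath, _dirnames, filenames in os.walk(root)
            for name in filenames
            if name.lower().endswith(extensions)]

def find_by_content(root, needle):
    """Naive text-string search over raw file contents."""
    needle_bytes = needle.encode()
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                if needle_bytes in f.read():
                    hits.append(path)
    return hits
```

Which of these strategies a practitioner chooses, and how its empty or non-empty result is then worded in the report, is exactly where the elasticity discussed below enters.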

The leaked spreadsheet (m57plan.xls) was/was not on the suspect's computer
Since a mock evidence file was used, the ground truth was known, although not to the DF practitioners participating in the DF experiment. The leaked spreadsheet was, in fact, not on the suspect's computer, and a correct approach would therefore be to document the search strategy and result: that m57plan.xls (the leaked spreadsheet) was not found on the computer. Approximately a quarter of the DF practitioners provided a report corresponding with the ground truth and stated explicitly that a spreadsheet named M57plan.xls was not found on the suspect's computer (Sunde & Dror, 2021). However, even though the leaked file was not on the suspect's computer, some DF practitioners (n = 3) stated that it was present and related it to the information leakage, for example: "The spreadsheet m57plan.xls was identified as the allegedly leaked spreadsheet. The evidence file did not contain any other relevant spreadsheets" (D29). Such a statement may be characterised as misinformation with the potential to mislead the legal decision-maker.
Those who commented that they did not find the leaked spreadsheet stated this in two ways, which illustrates the DF practitioners' different interpretations of similar information. Some stated that the spreadsheet was searched for, but not found, such as "In the task, a file named 'm57plan.xls' was mentioned. This file was not found on the evidence file" (D9) or "A text string search was performed on 'm57plan.xls'. This search gave 0 hits" (D32). These descriptions did not exclude the possibility that the leaked spreadsheet could still be present, in line with the well-known principle that absence of evidence does not entail evidence of absence. Nevertheless, others excluded the presence of the leaked spreadsheet on the computer in their description, for example: "The file as mentioned in the report, i.e. m57plan.xls, was not available in the image file nps-2008-jean.E01" (D53). Based on the scope of the analysis performed in the DF experiment, this is an overstatement beyond justification, which would fall into the category of misinformation.
The examination and analysis of the information obtained from a digital device may have many similarities with the crime scene investigation of a physical crime scene. Kruse (2016) found that the main contribution to the pre-trial investigation from crime scene investigators was written crime scene reports. Kruse describes:

When a report replaces a crime scene, the crime scene becomes more understandable through the interpretation the crime scene technicians have made, but it also becomes more limited. The details that have not been documented are lost with the crime scene, and the report shapes how the readers understand the crime scene and thus the forensic evidence. (Kruse, 2016, pp. 95-96)
This description is transferable to the digital crime scene and the analysis report's function. The DF practitioners turn the digital crime scenes into inscriptions, by describing or visualising the relevant digital traces into reports and documenting their credibility. A mediated picture of the digital crime scene is painted in DF reports based on the traces considered relevant to the case.
However, compared to a physical crime scene, there is a fundamental difference. Under optimal conditions, an accurate copy of the digital crime scene can be secured, stored, and re-examined later, which is a great advantage from a quality control perspective. Invalid results, such as false positives and overstated opinions, could be corrected during quality measures such as a verification review or re-examination (Horsman & Sunde, 2020). Unfortunately, research has shown that quality measures such as peer review are rarely undertaken as part of the DF process (Jahren, 2020; Page et al., 2019). The misinformation should thus be of concern, since, in actual casework, it could lead to a stronger (but invalid) underpinning of the suspicion against the suspect and potentially result in an unfair administration of justice.

Comparing the leaked spreadsheet (m57plan.xls) with the file on the suspect's computer (m57biz.xls)
As mentioned, the leaked spreadsheet was not on the suspect's computer. There were, however, two identical spreadsheets named m57biz.xls on the suspect's computer, which had many similarities with the leaked spreadsheet. One was stored in the "Desktop" folder and the other in the email container file (outlook.pst). They contained much of the same information as the leaked file (m57plan.xls), but there were three distinct differences (see Figure 1):
• The file names were different.
• The file m57biz.xls had two extra rows of information, compared to the leaked spreadsheet.
• The file m57biz.xls contained a picture of soldiers holding a flag, while this picture was not visually present in the leaked spreadsheet.
According to the statistical analysis, almost all (94%) of the DF practitioners found at least one of these files (Sunde & Dror, 2021). An appropriate next step would be to compare the file(s) with the leaked file. Corresponding files would support a hypothesis stating that the seized computer was the source of the leaked information. As shown below, the reports included a wide range of descriptions, which illustrate the elasticity of the findings relevant to this evidential issue.

The spreadsheets were different
After comparing the files, some explicated (correctly) that the files were different and described the dissimilarities, for example: "The file of interest, with another filename than provided in the incident report, was found on the unit: M57.biz.xls" (D57) or: Then I examined all the xls files on the evidence file and found an xls file named 'm57biz.xls'. This file was copied and opened in excel. The file was similar to the one described as

The spreadsheets were similar
Even true information may sometimes also become misinformation: "Through unintended omittance it is possible to have instances of true misinformation-i.e., what is said is true, but by unintendedly omitting other crucial parts of the picture it becomes misleading" (Søe, 2019, p. 5939). Common to many of the descriptions was the focus on similarities and correspondence with the leaked file, while downplaying or excluding the differences, for example, "An Excel document was discovered, which largely corresponds with the document referred to in the incident report, ref 5.12 Documents" (D1).
Others focused on how the file content corresponded, such as "The same information as the information that had gone astray was found on Jean's computer with the filename 'm57biz.xls'" (D18) or "The spreadsheet 'm57biz.xls' with corresponding content as what was posted was found on the computer in the user folder of user 'Jean'" (D37). Others described whether the spreadsheets visually looked similar:

A document named 'm57biz.xls' has the same content as the document 'm57plan.xls'. [. . .] The document is stored on Jean's computer with file path '\Documents and Settings\Jean\Desktop\m57biz.xls' and has the md5 hash of 'e23a4eb7f2562f53e88c9dca8b26a153'. The document corresponds visually with the document shared on the internet. (D38)

Since the descriptions present only some of the relevant information, they are inaccurate and incomplete and can potentially mislead the legal decision-maker.

The spreadsheets were identical
Some stretched their accounts of the similarities even further and described the content as identical, which is an invalid interpretation, as shown in Figure 1. For example, "I did not find the file M57plan.xls on the evidence file, but I found M57biz.xls with identical content" (D5) or "20.07.2008 01:28:00, Jean responds to the email and encloses the file m57.biz.xls. Through the search for the exact filename on the evidence file, the original file was discovered, which is identical to the file that was leaked on the internet" (D31).
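The distance between "similar" and "identical" is, in principle, empirically checkable: "identical content" implies byte-for-byte equality (matching cryptographic hashes), while similar-but-different files can be characterised by enumerating exactly what differs. A minimal sketch, with hypothetical row data standing in for the two spreadsheets (the real files differed by file name, two extra rows and an embedded picture):

```python
import hashlib

def same_content(data_a: bytes, data_b: bytes) -> bool:
    """'Identical' is a strong claim: byte-for-byte equality, i.e. matching hashes."""
    return hashlib.md5(data_a).hexdigest() == hashlib.md5(data_b).hexdigest()

def row_differences(rows_a, rows_b):
    """Rows present in one (simplified) spreadsheet version but not the other."""
    return [r for r in rows_a if r not in rows_b] + [r for r in rows_b if r not in rows_a]

# Hypothetical stand-ins for the leaked file (m57plan.xls) and the file found
# on the computer (m57biz.xls):
leaked = ["name;salary", "alison;110000"]
found = ["name;salary", "alison;110000", "extra row 1", "extra row 2"]

assert not same_content("\n".join(leaked).encode(), "\n".join(found).encode())
assert row_differences(leaked, found) == ["extra row 1", "extra row 2"]
```

A report grounded in such an explicit comparison could state the similarities while also documenting the differences, rather than collapsing both into "identical".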
The variation in descriptions of the two spreadsheets, from different to identical (see Figure 2), demonstrates the range of elasticity of the digital traces. Kruse (2016) underlines that "constructed" does not entail that the traces are not real, but rather that they could have been constructed differently, and the 53 reports clearly demonstrate how the elasticity becomes a construction tool for crafting and customising coherent building blocks for an emerging narrative. By describing the spreadsheets as similar or identical, without commenting on the differences, the DF reports can mislead legal decision-makers into believing that the leaked document was found on the suspect's computer or that a causal relationship between the two documents was established. Such descriptions may fuel an illusion of incriminating evidence against the suspect.

Determining who was involved
Digital traces obtained from computers and mobile phones are vital for reconstructing events and activities. However, they have little value if not linked to a person. Since multiple users may access a computer, attributing activities on the device to a particular person with the required level of certainty (source level issues) remains a recurring challenge in DF investigations (see, e.g., Cohen, 2013). In other words, proving who had their fingers on the keyboard and performed certain activities that left traces on the digital device can be complicated. Another complicating factor is that, in addition to excluding other possible suspects as sources of the traces, the DF practitioner must determine whether human activity or a machine/system generated the traces. For example, suppose illegal imagery is found on a computer. In that case, the DF practitioner must justify that it was downloaded by a particular person and refute that it was the result of an automated procedure happening in the background when the user visited a webpage.
Similar to a typical DF investigation, the participants in the DF experiment received limited and incomplete case information before starting the analysis. Although they were informed that the evidence file was secured from Jean's computer, it could not be stated with certainty who (person) had conducted the activities on the computer. There were several active user accounts on the computer, with traces of activity around the time of the alleged information leakage and traces indicating that the computer could have been compromised through, for example, hacking.
Due to the insufficient information and the uncertainty concerning who caused the traces, the DF practitioners ought to describe entities (e.g., user accounts, email addresses) rather than persons related to the alleged activities or events (e.g., sending email, searching the internet). For example: "According to the internal metadata, the document was created on 12.06.2008 17:13:51 by a user named Alison Smith and last saved on 20.07.2008 at 03:28:03" (D1) or "An email sent to jean@m57.biz on 21.07.2008 at 01:43:19 from bob@m57.biz discusses the publishing of the attachment which is reported" (D49). Such an approach does not implicitly assume who performed the activity. In contrast, many described activities performed by a particular person, for example, stating that Jean replied to emails: "Jean Jones replies to the last inquiry and encloses the document m57.biz.xls (item # 46654)" (D15) or that she created the document: "Jean creates this document and sends it to the malicious email address. She is not aware that the recipient is not Alison, and it seems like she has no intentions of sharing this information on purpose" (D47). Another described that Jean sent the spreadsheet and altered the document: "It is proven that Jean Jones sent the file to an email address outside their own firm domain."

The analysis shows that the DF practitioners' mediations relate not only to the digital traces but also to who was involved. Here, the elasticity concerns how far the DF practitioner goes in the trace-linking process, that is, whether they link the trace to the entity or make an additional inferential leap and link from trace to person. When the reasoning and justification of the relationship between trace and entity/person lacks transparency, the legal decision-maker may be misled into believing that the link between a suspect and a criminal activity is more certain than it really is.

Constructing conclusions
When zooming in on a digital trace, it is often an assembly of several components. The email sent from Jean's computer with the enclosed spreadsheet, m57biz.xls, consists of content and metadata. However, when zooming in on the metadata, another assembly of temporal and spatial traces of transportation of the information from sender to receiver via several servers emerges. The observed traces are rarely viewed in isolation; they are ordered, linked, and related to the context of the case under investigation. When traces are pieced together during the abductive reasoning process, they may enable several plausible explanations (Anderson et al. 2005). These explanations form sub-narratives, which may be further assembled into narratives of what has happened, how it was done, who was involved, why they did it, when and where. Anderson et al. (2005) describe this process as "connecting the dots". The number of traces and factors such as ambiguity and completeness influence the forming of explanations and narratives (Tecuci et al., 2016, pp. 159, 163).
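The temporal and spatial transport traces mentioned above reside mainly in an email's Received headers, one prepended by each relaying server. The following sketch, using Python's standard email module and hypothetical header values (not data from the case file), shows how the relay chain can be reconstructed:

```python
from email import message_from_string

# Hypothetical raw message source; the hosts, addresses and timestamps
# are invented for illustration and are not taken from the case file.
RAW = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
\tby mx.m57.biz; Mon, 21 Jul 2008 01:43:19 +0200
Received: from sender-host (sender-host [198.51.100.4])
\tby mail.example.net; Mon, 21 Jul 2008 01:43:12 +0200
From: bob@m57.biz
To: jean@m57.biz
Subject: background checks

Please see the attachment.
"""

msg = message_from_string(RAW)
# Each relaying server prepends its own Received header, so the list reads
# newest-first; reversing it yields the path from sender towards recipient.
hops = list(reversed(msg.get_all("Received")))
for i, hop in enumerate(hops, 1):
    # The text after the last ';' is the timestamp stamped by that server.
    print(f"hop {i}: {hop.split(';')[-1].strip()}")
```

Each hop thus contributes both a spatial trace (the relaying host) and a temporal trace (the server's timestamp), which is what allows the transport of a message to be ordered in time and space.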
Several plausible scenarios could be developed based on the contextual information and the digital traces on the suspect's computer; for example, the confidential information could have been leaked by
• copying it onto a thumb drive
• uploading it to the internet
• breaking into the system and stealing it
• tricking Jean into sending it by email (phishing)
There were several possible perpetrators, such as Jean, another insider, an outsider, or an insider and an outsider in collaboration. The evidence file from Jean Jones' laptop contained traces consistent with all these scenarios. Yet, none could be refuted on the basis of the information on the evidence file obtained from the suspect's computer.
The different observations and interpretations in the M57 case led to many different conclusions or summaries of the results. Some DF practitioners concluded that Jean had caused or contributed to the information leakage. For example, "If the user account 'Jean' on the analysed system is Jean Jones, the findings mentioned here indicate that Jean may have sent the file 'm57biz.xls' per email to tuckergorge@gmail.com or alison@m57.biz on 2008-07-20 03:28:00 CEST" (D42) or "The employees involved are Alison SMITH, manager of M57.biz, and Jean JONES, CFO of M57.biz. Additionally, email communication would also suggest that 'tuckgorge@gmail.com' also is involved in the leakage of confidential information" (D62). Others concluded, on the contrary, that the spreadsheet was not sent/shared from the computer, for example, "Based on the analysis referred to under section 5, no information was found indicating that the spreadsheet, m57biz.xls, or other documents with the same content were shared from the unit" (D28).
Several indicated that the leaked spreadsheet could be associated with Jean sending the spreadsheet outside the firm, but they highlighted that she was deceived, which indicates innocence, for example, "The examination has uncovered that Jean Jones appears to have been the victim of a spear-phishing attack, where she has sent the leaked document to an attacker, who has pretended to be her colleague Alison Smith" (D15).
Some suggested other methods for how the information was leaked, such as the mounting of a USB device and the installation of VMWare, for example, "There are traces of a thumb drive being connected to the machine just before the above-mentioned email was sent. Further examinations should be undertaken to determine whether the spreadsheet might also have been shared from this thumb drive" (D22) or: "The software VMWare was found, which makes it possible to run a virtual machine on the computer. The programming tool C++ was also discovered. Both were installed just before the email was sent, and both programs were uninstalled. It is therefore not possible to eliminate that Jean's computer has been compromised in some way" (D34). While most of the conclusions associated the uncovered traces with Jean being tricked in a phishing attack, others proposed the insider theory to explain the information leakage, suggesting one of the programmers or the manager as possible perpetrators. For example, "It should also be mentioned that it was Alison Smith who reported the incident. It should be investigated with the same strength whether Alison Smith may have an agenda to blame Jean Jones" (D2).
The analysis of conclusions shows that sense-making and linking of traces may lead to many different explanations of what happened and who was involved, which may be referred to as a "story", "narrative" or "scenario". The story model of Pennington and Hastie (1993) offers descriptions of how people reason when they are exposed to a lot of information or evidence and how they create, evaluate and select scenarios. A scenario is anchored in common sense generalisations (Crombag et al., 1994) and consists of at least a central action and a scene that makes the central action understandable (Van Koppen & Mackor, 2020). Scenarios may also have elements such as a scene, motive, actor and consequences (Pennington & Hastie, 1993). The DF practitioner contributes to constructing the narrative of the crime under investigation by linking the digital traces and presenting narratives or narrative sequences inferred from the evidence and general world knowledge. The narrative is essential for making sense of information and deciding whether it is relevant as evidence (Stubbins & Stubbins, 2008).
At the same time, narratives-even those with dubious validity-may appear attractive and appealing (Stubbins & Stubbins, 2008). Since many of the conclusions in the DF experiment are articulated as narratives or sub-narratives, they may have the potential to lead (or mislead) an investigation in many different directions.
Although Goffman's (1975) framing theory concerns face-to-face interactions, it is helpful for understanding the power associated with narrative construction. The DF practitioners influence the report reader's attention by highlighting which traces or issues are relevant (Goffman's in-frame) and downplaying or omitting what is irrelevant (Goffman's out-of-frame). The DF practitioners order, describe and mediate the narrative surrounding the traces. Present in the report are not the original traces but material-semiotic objects of knowledge, which are an "active, meaning-generating axis of the apparatus of bodily production" (Haraway, 1991, p. 200). The text thus becomes an active and transformational entity. As Asdal (2015) describes, turning something into an issue might also entail becoming a non-issue, a question to be handled by certain issue experts. In the context of DF casework, the expert describing the issue may be one of the few with the technical expertise to fully understand the traces and their context. Since other actors in the criminal justice chain may lack the expertise to challenge the findings in a meaningful way, even flawed descriptions of traces and invalid opinions concerning these may become a non-controversy or a non-issue and be assigned factual status.
The variable conclusions from the DF experiment may, on the one hand, be viewed as a challenge to investigation quality. Since some of the trace interpretations and conclusions observed in the current study were erroneous or one-sided, they could, in an actual investigation, have a "misframing" potential (Goffman, 1975, p. 308), introducing contextual bias into subsequent investigative tasks and leading the investigation astray. Under such conditions, the trace and conclusion elasticity would challenge the fair administration of justice. On the other hand, the various findings and conclusions may be perceived as a means to broaden the scope of the investigation by illuminating the multiple explanations of the traces and the potential lines of inquiry. However, in an actual criminal investigation, only one DF practitioner, and occasionally a few DF practitioners, would perform the analysis and write a report (e.g., Jahren, 2020). The elasticity would thus not be visible. Mol (2002) suggests that there are multiple versions of the same body in medicine: a "body multiple", where the same diagnosis is enacted differently by various medical disciplines. The DF reports provide insight into the DF practitioner's construction process of a new object-the digital evidence-which is an assemblage of material-semiotic representations of the traces, disseminating what they are, what they mean and what story they tell concerning the case under investigation. The traces in themselves are meaningless, but, ordered, placed in a context, explained and evaluated, they become meaningful pieces of information (Edmond, 1999). The transparency of the DF practitioners' enactment and mediation depends on how accurately they document the applied procedures; these processes are often black-boxed when the report is finalised.
For digital evidence, plurality involves not only various disciplines enacting the same "diagnosis" differently but also representatives of the same discipline enacting, translating and inscribing the digital evidence differently into DF reports, using elasticity as a construction tool, which enables a digital evidence multiple.
The empirical data collected from the DF experiment displays aspects that are rarely visible in actual casework. It provides a unique opportunity to identify which components of the digital evidence are elastic, the range of the elasticity and how the elasticity functions as a construction tool to develop plausible and coherent explanations and narratives. The article has demonstrated how evidence elasticity enables the construction of multiple versions of the same traces, interpretations of traces and various conclusions based on these. The DF discipline has centred attention on preventing evidence dynamics (Casey, 2011) but has lacked a concept such as evidence elasticity encompassing possible misleading aspects related to the digital trace as a knowledge object. Together, evidence dynamics and evidence elasticity can explain the misinformation and flaws that may relate to the physical/digital aspects of the evidence and those that relate to its interpretation as a knowledge object.

Evidence elasticity and the legal criteria for evidential value
A piece of evidence has no value in isolation-it must be considered in relation to a questioned matter and conditioning information. The DF practitioners' position as experts provides them with definition power, and, when reporting the findings, they also mediate how the traces should be understood in relation to the case under investigation. An important issue explored below is whether and how the mediation encompasses the evidential value, and whether the elasticity may be problematic in such a context. The discussion centres on the assessment of evidential value, performed, for example, during the investigation or in court, and does not address the issue of admissibility, the legal threshold for admitting forensic evidence to the trial. Anderson et al. (2005) relate the evidential value of tangible (as opposed to testimonial) evidence to three components: relevance, credibility and probative/inferential force or weight (see also Tecuci et al., 2016).
Relevance concerns whether the evidence makes the evidential theme (the matter to be proved) more or less probable (Anderson et al. 2005). The relevance of a trace often depends on other obtained traces (Tecuci et al., 2016). In the M57 case, the spreadsheet on the suspect's computer was assessed in light of the allegedly leaked spreadsheet found on the competitor's website. The DF practitioners mediated the relevance by establishing a strong or weak link between findings on the suspect's computer and the allegedly leaked spreadsheet, or between inculpatory information and the suspect.
Credibility refers to three components: authenticity, accuracy/sensitivity and reliability (Anderson et al. 2005). Authenticity relates to whether the evidence is a genuine representation of what it appears to be (Anderson et al. 2005;Tecuci et al., 2016). It is closely related to accuracy or sensitivity, which is about whether the evidence provides sufficient resolution to discriminate between possible events/explanations (Anderson et al. 2005;Tecuci et al., 2016). The assessment is concerned with having enough information to infer or evaluate safely. For example, Brookman and Jones (2021) found that poor-quality CCTV footage could cause misinterpretation when observing events or attempting to identify a suspect in a homicide investigation, and was used to explain away contradictory evidence, such as the discrepancy between a suspect's coat and the colour seen on the CCTV footage. In the DF experiment, the spreadsheet from the competitor's website had been secured through the "cut and paste" technique, which entailed the file itself and associated metadata (data about data) being unavailable to the DF practitioners. Hence, they lacked crucial and accurate information for drawing inferences about its authenticity, for answering many other investigative questions, such as who created the file or when it was created, and for computing a digital fingerprint of the file. In such situations, there is a risk that the DF practitioner unintentionally and unconsciously fills in the gaps with non-evidenced assumptions. For example, the DF practitioners consistently inferred that the spreadsheet had appeared on the competitor's forum after a spreadsheet was sent from the suspect's email; yet, there was no information underpinning this order of events.
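The "digital fingerprint" referred to here is a cryptographic hash of the file's bytes. Had the original file been preserved, its hash could have been compared with the copy on the suspect's computer to establish whether the two were bit-for-bit identical; the "cut and paste" copy made this impossible. A minimal sketch with placeholder byte strings (not case data):

```python
import hashlib

def file_fingerprint(data: bytes, algorithm: str = "sha256") -> str:
    """Return the hexadecimal digest ("digital fingerprint") of file contents."""
    h = hashlib.new(algorithm)
    h.update(data)
    return h.hexdigest()

# Byte-identical copies produce identical fingerprints...
original = file_fingerprint(b"leaked spreadsheet contents")
seized_copy = file_fingerprint(b"leaked spreadsheet contents")

# ...while even a one-byte difference yields an unrelated digest,
# so a matching digest is strong evidence that two files are the same object.
altered = file_fingerprint(b"leaked spreadsheet contents!")

print(original == seized_copy, original == altered)
```

Because the hash is computed over the exact bytes, a copy secured by re-typing or "cut and paste" has no meaningful fingerprint to compare against the original.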
Reliability relates to whether the evidence was produced in a repeatable, dependable and consistent manner (Anderson et al. 2005;Tecuci et al., 2016). While Anderson et al. (2005) relate reliability to the operating characteristics of the device used to generate the trace, multiple devices must be considered when establishing reliability in a DF context. The reliability assessment must encompass the device involved in generating the trace as well as the devices and technology used for collecting and examining the data during the DF process. However, the research shows that reliability hinges not only on the technical instruments generating or analysing the evidence: the results show that the "human instrument" has low reliability and can produce varying descriptions of particular traces, suggest different meanings when the traces are considered together with other evidence and construct substantially divergent conclusions.
Inferential/probative force or weight of evidence is about how strongly the evidence favours or disfavours particular hypotheses or propositions (Tecuci et al., 2016). This issue relates largely to how the conclusions were articulated, and the analysis showed that narratives played an essential role. During narrative construction, the traces are put in a context that gives them meaning and value for the case under investigation. The conclusions in the DF reports were typically crafted as a single one-sided narrative based on the practitioner's findings (see also Sunde, 2021). They presented various accounts of whether and how an information leakage had happened. While some constructed the narrative with the suspect as the perpetrator, others presented her as a victim of fraud. The analysis shows that, although divergent, none of the narratives stood out as implausible. Granhag and Ask (2021) outline several essential factors for a narrative to be convincing. The narrative must be plausible and able to explain all or most of the evidence. It must be cohesive and not contradict any evidence. The narrative must be complete, cover all essential parts of the crime and be arranged in chronological order. Moreover, it must be unique, and there should be no alternative narrative with an equal or higher capacity to convince. The evidence elasticity observed in the DF experiment conclusions seems to facilitate necessary adjustments that enable the traces to fit into various plausible explanations and narratives. Costa and Santos (2019) add the presence of forensic evidence to this list of convincing factors. They found that forensic evidence made the narrative appear stronger, even if the evidence was weak (Costa & Santos, 2019). Due to the perception of digital evidence as facts, it may create an illusion of unquestionable anchoring points.
To summarise, the discussion shows that DF practitioners are in a unique position to mediate not only what the digital traces are and their meaning-but also their evidential value in a legal context.

Concluding remarks
The study explored the implications of biased observations and low reliability in DF reporting practices by following the translation and transformation of the digital evidence through one of several obligatory passage points in the justice system, the DF practitioner. The article explored the interpretative flexibility of digital traces and developed the concept of evidence elasticity for the mutability of digital evidence as a knowledge object. Evidence elasticity adds to the concept of evidence dynamics, which relates to the physical and digital aspects that may reduce the value of the evidence. Together, these concepts provide a more complete overview of the possible flaws and misleading aspects of digital evidence. However, some possible limitations warrant comment.
Although the DF experiment provided a unique insight into the variation between the DF participants' reported results, it was an experimental situation with time constraints. To mitigate the limitations, the experiment was designed to have as high ecological validity as possible-allowing the DF participants to work on their regular workstations with their usual methods and tools and to report the results as they would in an actual DF investigation. Although a mock evidence file was used, it was considered realistic and fit for purpose. The participants were asked to reserve 4-5 hours for conducting the DF experiment, which is probably less time than they would spend on a case in an actual DF investigation. This may have influenced how much evidence they discovered and how well the reports were written. On the other hand, time constraints are also a typical challenge in real casework and may thus have strengthened the ecological validity.
Although there were limitations, the most essential knowledge to take away from the study of DF reports is, first, that the digital evidence transformation starts as soon as the DF practitioner receives the assignment. It is the interpretation of the assignment that guides the DF practitioner's decision-making about what traces are relevant to the case under investigation, and the selected traces condition the subsequent forming of narratives. Second, the same traces may be understood differently and assigned different evidential values. When inscribed into reports, they may sometimes be turned into misinformation, with the potential to mislead the legal decision-makers. Third, the analysis has shown that the narrative-forming process is a way of "connecting the dots" about what has happened, how, and by whom, and that elasticity enables the same trace to contribute to very different narratives-from one extreme to another: from a narrative where the suspect is innocent to a narrative where the suspect is guilty. The analysis also shows how the DF practitioners make invalid or unjustified inferential leaps (e.g., about who did what) to create coherent and complete narratives in their conclusions.
In everyday life, framing and "stretching" of information may be necessary for social interaction. In the context of a criminal investigation, these mechanisms may shape what the information means and its value in a legal context: its relevance, credibility and probative/inferential force or weight. As shown in this study, the power to frame also entails the power to misframe. Assigning multiple DF practitioners per case, as done in the DF experiment, is not typical. Relying on a single DF practitioner's results may, on the one hand, constitute a lost opportunity to uncover the full scope of relevant traces and plausible narratives. On the other hand, such an approach may introduce the risk of one-sided or misleading conclusions and erroneous results remaining undetected and uncorrected, which, in a worst-case scenario, may lead to wrongful convictions of innocent people.
The DF experiment provided a unique insight into evidence elasticity that would otherwise be invisible. Opening the black box and gaining insight into how digital evidence is constructed empowers other actors involved in the criminal justice chain, such as prosecutors, defence lawyers, judges and jurors. These insights enhance their ability to scrutinise the DF process and any misinformation resulting from it, such as invalid inferential leaps and misleading opinions regarding the evidential weight of digital evidence conveyed in DF reports. Future research should follow the digital evidence further into the justice chain to explore whether and how other actors negotiate the evidence elasticity of the digital traces.
Knowledge about the evidence elasticity associated with digital traces is vital for countering the fallacies about technology and digital evidence and designing appropriate quality control procedures, which may prevent misleading information from cascading further into the investigation process. Knowledge is power, and knowledge about the elasticity of digital evidence may thus be an essential means for safeguarding the rule of law.