Human factors and ergonomics methods in practice: three fundamental constraints

ABSTRACT Human factors and ergonomics needs to ensure that its methods are available, usable and used in practice. The majority of our methods tend to be developed by researchers situated in academic institutions, and published in scientific journals, books and conference proceedings. The intended or assumed end-users of HF/E methods, on the other hand, are often practitioners embedded in consultancies, producers, manufacturers, service providers, government departments and so on. The difference in context contributes to a research-practice gap, resulting in a number of issues such as reliance on old methods, low uptake of new methods and application problems. This commentary article outlines three key constraints from our own experience as practitioners and researchers – as tool users and developers – that affect the application of methods in practice. We suggest several implications and ways to improve the availability and actual use of HF/E methods.


Introduction and context
We are practitioners situated in an intergovernmental organisation and a consultancy, respectively, but with experience also in academia (both previous full-time roles and current visiting/adjunct roles) and in other settings – commercial and governmental. We have experience of the practical application of human factors and ergonomics (HF/E) methods for physical, cognitive and social/organisational aspects of HF/E, in a variety of domains (including aviation/air traffic management, rail, border security, chemical manufacturing, water processing, hydrocarbon storage, food production, pharmaceuticals, retail, logistics and distribution). For this article, we draw on this experience, as external consultant practitioners, internal practitioners and researchers in universities. We have found that the various contexts of HF/E are important, since the implications for method development and application can be very different depending on the degree of 'embeddedness' – the way that 'ergonomics fits within the organisational system and is embedded within practice' (Wilson 2014, 9).
We have been involved in the development of methods for a variety of purposes, including incident and human error analysis (e.g. Shorrock and Kirwan 2002; Isaac, Shorrock, and Kirwan 2002; Isaac et al. 2003; Shorrock 2005); safety assessment (Clark, Shorrock, and Turley 2008; Williams, Haslam, and Weiss 2008); human factors integration (Shorrock, Woldring, and Hughes 2004; Shorrock and Woldring 2006); human factors assessment (Jones et al. 2003); managing system disturbances (Shorrock and Sträter 2004; Shorrock and Straeter 2006); and safety culture (Shorrock 2012; Reader et al. 2015; Noort et al. 2016). One of the authors also has experience in the formal evaluation of HF/E methods (Olsen and Shorrock 2010; Shorrock 2003), and in research on the research-practice relationship (Chung and Shorrock 2011). As such, and in combination with our experience of teaching university students to use HF/E methods, we have encountered issues from various perspectives, and have seen what has worked and what has not, concerning methods developed and used by ourselves and others. We have noticed that a number of constraints limit the use of HF/E methods, but these constraints may not be understood by tool developers (or, in some cases, users).
The article is written from our experiences as practitioners and researchers, and as method developers and users. The first author (Shorrock) has more experience in safety-critical industries, having been embedded more in transportation organisations, while the second author (Williams) has more experience in an occupational health and safety context, as an external consultant to a wide variety of organisations. The article is directed at those involved in the development, selection and use of methods, and aims to help in the development of HF/E methods that are actually used in practice.
By 'method' we mean 'a particular procedure for accomplishing or approaching something, especially a systematic or established one' (www.oxforddictionaries.com), whether for data collection, analysis or synthesis, used to design and evaluate 'tasks, jobs, products, environments and systems in order to make them compatible with the needs, abilities and limitations of people' (International Ergonomics Association 2016). Specific examples of what we mean by 'methods' are detailed in Stanton et al. (2013), but there are hundreds of others, described in journal articles, conference papers and books. Even looking through the methods described in Stanton et al., it is interesting to note that some of these – selected 'based upon a survey by the authors of standard ergonomics textbooks, relevant scientific journals and existing HF method reviews' (p. 8) – are not in common use (or any known use by practitioners). The screening process used by Stanton et al. also excludes methods that are not freely available in the public domain; anthropometric, physiological and biomechanical methods; methods similar to others (reiterations, new formats); and methods of limited use or not applied in an analysis of some sort. Stanton et al. add here that 'the field of HF is not short of methods and quite often a method is developed and not used by anyone other than the developer' (p. 9). Their screening suggests that the methods covered are just a selection of those available (probably a minority). A great many methods retain prototype status: published and largely forgotten. This is a significant source of waste in HF/E and hints at a research-practice divide.
In our experience as practitioners and researchers, a common remark in academia is that practice is lagging behind research; that practitioners stick with old methods that may have outlived their usefulness, and fail to innovate or use novel methods. Meanwhile, a common remark in industry is that methods developed in academia are not accessible to practitioners, or not realistic for use in real environments. Both of these arguments have merit, but there can be a lack of empathy from both sides regarding the reasons why things are the way they are, and perhaps a lack of understanding of our own blind-spots, biases and prejudices.
In this article, we outline three fundamental constraints, from our own experience (including our mistakes), that affect the use of HF/E methods in practice. We hope that this might help in some small way to reduce the waste and other problems that blight HF/E tool development and application. The constraints affect many (but of course not all) HF/E methods – established and novel. For each constraint, we detail some example practical issues.

Accessibility constraints
The first constraint on the use of HF/E methods in practice relates to the accessibility of journal articles, one of the primary means of reporting on methods: methods must obviously be accessible if people are to use them. This constraint refers to how easy it is to access methods, and so also relates to usability (Constraint 2), but we include it as a special and fundamental constraint here. Many methods, certainly those developed in research environments, are reported in scientific journals. Access to academic journals has improved in the last few years (e.g. for members of HF/E associations such as the Human Factors and Ergonomics Society [HFES] and the Chartered Institute of Ergonomics and Human Factors [CIEHF]), but several problems remain with journals from a practitioner viewpoint. Access to journals remains limited and difficult, except for journals that form part of a membership package (usually a small number of journals). Buying individual journal articles is often prohibitively expensive, especially when the usefulness of any given article is evident only after purchase. This situation is improving, as most research funding organisations now either provide funding for researchers to publish in appropriate open access journals or insist that the research data and findings be made available to everyone. Even so, the process of searching for and finding articles is laborious, partly because journal databases are often inaccessible to practitioners. Additionally, the style of journal reporting is often not suited to the needs of practitioners. Researchers usually write for other researchers (including editors and imagined reviewers), not for practitioners. Relatively few HF/E journal authors, reviewers and editors are employed outside academic institutions. The context of reporting may also differ significantly from the context of practice.
The key point is that while researchers do have to publish in journals (as their performance is assessed based on what is measurable: publications in peer-reviewed journals), journals are probably not the primary means by which HF/E practitioners acquire knowledge.
A second accessibility issue concerns software. While many methods can be used with common software platforms and applications, others require software that is not widely used or included in standard computer configurations in organisations; is proprietary; is rendered defunct by updates to operating systems; or requires significant training investment. Many postural, biomechanical and physiological data-acquisition methods fall into this category, and these methods are made even less accessible by the requirement for expensive hardware accessories for data capture. That said, more smartphone apps are coming onto the market daily, which provide access to functionality previously the preserve of more expensive, PC-based software (and more functionality besides). Even in the last few years, rapid changes in our ability to measure and process data such as heart rates, walking distances, calorie intake, postural data and body part discomfort using cheaper hardware and app-based software have been revolutionary. Also, of course, standard word processing, presentation and spreadsheet software can be used for many methods, including those for task analysis, cognitive task analysis, process charting, human error identification and accident analysis (though this can be tedious).
A third accessibility issue concerns sharing and intellectual property. Some organisations (e.g. consultancies and governments) perceive a need to protect intellectual property rights, for instance via restrictive protective markings such as trademarks. Particular names, words, phrases, etc. may be perceived as giving a market advantage, and perhaps help to prevent misuse of a method and protect the organisation from reputational damage. Such protection can have downsides. Clients and practitioners (and sometimes those using such restrictive markings) often do not properly understand the legal issues, e.g. whether and to what extent questionnaires published in journal articles can be reproduced, or whether and how trademarks can be used. Restrictions obviously make wider usage of a method difficult, and this has consequences. If the only way to use a tool is by going to the holder of the trademark, or paying a licence fee, then practitioners may seek instead to use an alternative method. This means that the trademarked or licensed tool is not used and the originator is not credited. In one example from our experience (Williams), a client specified the use of a method with a trademarked name for a particular piece of consultancy, though other methods – not trademarked and with a better published evidence base – were available. This made the project difficult to undertake without involving the original developer of the trademarked tool and without locking the client into potentially unnecessary long-term licensing arrangements. Another issue, with implications for the discipline of HF/E, is that methods that are not freely shared are difficult, or impossible, to evaluate independently (perhaps deliberately so).

Implications
Make methods available to users: Make articles on methods available as preprints on university servers or other websites that are easily accessed by practitioners without journal subscriptions. Consider creating blogs for methods, including the method and associated articles (preprints, summaries, etc.). Consider the benefits of Creative Commons licensing for methods, and other arrangements that may help independent evaluation and wider-scale use of methods, with proper citation. In some cases, it may be preferable to be associated with a widely used tool that is not licensed or commercialised than with an unknown or non-evaluated commercialised tool. A Creative Commons licence can clarify the degree of modification that is permissible.

Minimise restrictive software requirements: Use common software packages included in standard organisational configurations in favour of bespoke, expensive or hard-to-use software packages where possible. Keep up to date with the latest smartphone apps to support data collection and analysis.

Actively promote methods: Understand that publication in a journal is not the end of the method development process from a user's perspective. Promote methods on social media such as Twitter, LinkedIn, blogs and practitioner magazines.

Make relevance and implications clear: Work with practitioners to clarify the relevance and implications of methods. Make the practitioner implications clear in articles reporting new methods.

Usability constraints
Assuming that a method is accessible, it must also be usable and useful. Some methods, especially those developed without significant involvement from potential users, may take account of theory but may be hard to use in practice. While user-centred design is core to HF/E, we perhaps need to pay more attention to it in the development of our methods. According to ISO, usability is 'the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use'. In order to practise what we preach, methods must be easy to learn, efficient to use, memorable, resistant to unintended use and satisfying. There is little research showing how user needs are considered, e.g. via personas, cognitive modelling, prototyping, ethnographic analysis, and iterative design and testing with users throughout the method development process. For practitioners, it is important that methods: are only as complicated as necessary (work situations are complex, but methods should only be as complicated as the purpose demands); require a reasonable amount of time (the time needed to use a method is affected by several factors, e.g. the software); use language that is appropriate in the real environment (concepts used in research do not always translate well to real environments, and sometimes need to be translated into more concrete terms); and require data that can feasibly be gathered (the types of data that can be gathered in real environments, under conditions characteristic of a contract for instance, are limited).
The user group for HF/E methods may also change over time, to include not only HF/E specialists but also allied professions. This reflects a wider trend in organisations; increasingly, HF/E is not just the preserve of HF/E professionals (certified, chartered, registered, etc.). As an example, the technique for the retrospective analysis of cognitive errors (TRACEr; Shorrock and Kirwan 2002) was developed initially for HF/E specialists (originating in an M.Sc. dissertation (1997) and then a Ph.D. thesis (2003) by Shorrock). Over time, the technique was adapted for wider European use among air traffic management safety investigators (HERA-JANUS; Isaac, Shorrock, and Kirwan 2002; Isaac et al. 2003). In both methods, a taxonomy of psychological terms was used to classify underlying features of errors, such as 'expectation bias' for perceptual errors and 'incorrect assumption' for decision errors. This taxonomy was termed (in TRACEr) 'psychological error mechanisms'. Experience and initial testing suggested that this particular taxonomy was useful for exploring issues from a psychological perspective, helping to form a bridge between the more obvious aspects of so-called 'human error' ('internal error mechanisms', e.g. 'no detection [visual]') and so-called 'performance shaping factors' (e.g. aspects of display design). However, these intermediary 'psychological error mechanisms' were not coded sufficiently reliably, while other taxonomies were (Shorrock 2003).
This was partly because at other levels of the taxonomy there is a clearer difference between the codes. For instance, at the level of internal error mechanisms, 'no detection [auditory]' can be readily differentiated from 'no detection [visual]', even with limited data gathering. Similarly, the task error code 'radar monitoring error' can be readily differentiated from 'strip use error'. At the psychological error level, 'failure to consider side effects' (a psychological error mechanism) requires deeper probing, and probably conjecture, to differentiate it from 'assumption'.
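As a rough illustration of the kind of multi-level structure involved, the following sketch uses only the example codes mentioned above. The data structure and helper function are ours, for illustration only; the actual TRACEr/HERA-JANUS taxonomies are far larger and are not defined in code.

```python
# Illustrative sketch only: a minimal multi-level error taxonomy built from
# the example codes discussed in the text. Not part of TRACEr or HERA-JANUS.
ERROR_TAXONOMY = {
    "task error": {"radar monitoring error", "strip use error"},
    "internal error mechanism": {"no detection [visual]", "no detection [auditory]"},
    "psychological error mechanism": {"expectation bias", "incorrect assumption",
                                      "failure to consider side effects"},
    "performance shaping factor": {"display design"},
}

def is_valid_code(level, code):
    """Check whether a code belongs to the given taxonomy level."""
    return code in ERROR_TAXONOMY.get(level, set())

# Codes at the task and internal-error levels can be assigned from readily
# observable data; psychological error mechanisms need deeper probing.
print(is_valid_code("task error", "radar monitoring error"))            # True
print(is_valid_code("task error", "failure to consider side effects"))  # False
```

The point of the layered structure is that reliability problems can be confined to one level: an investigator may code the task error and internal error mechanism consistently while still struggling to choose reliably between psychological error mechanisms.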
The European variant of the technique, including this particular taxonomy, was subsequently adopted for use in a variety of countries, but after a few years largely fell into disuse, except for one organisation which continued to use the published (HERA-JANUS) method, and another which used a lighter version. The continued use by a very small number of influential organisations was critical to subsequent re-uptake.
The European variant was revised in recent years by a working group comprising safety specialists from several European countries (independent of the developer) and incorporated into a toolkit for air traffic management occurrence investigation, which provides dedicated software tools for each of the major steps of the investigation process (see http://www.eurocontrol.int/services/tokai). This toolkit – TOKAI – helps air navigation service providers in over 40 countries to manage their recording, analysis and sharing of ATM incident report data according to European requirements. According to EUROCONTROL (2016): 'TOKAI provides a means for occurrence notification and enables a harmonised application of relevant safety regulations (ICAO Annex 13, ESARRs and relevant EC Directives and Regulations). Furthermore, it enables the user to transfer data to an ECCAIRS system (European Coordination Centre for Accidents and Incidents Reporting System), or to produce reports in different formats, including the AST (Annual Summary Template), needed for the exchange of safety information with EUROCONTROL'.
The revisions, compared to the original TRACEr, included the removal of several of the original taxonomies (including the original 'psychological error mechanisms' taxonomy) and simplification (grouping) of codes. Unfortunately, some useful contextual taxonomies, which were not fully understood, were also omitted.
Work is now ongoing to reintroduce better versions of these contextual taxonomies and to produce investigation guidance (but not codes) on the former 'psychological error mechanisms' (e.g. expectation). This work has also involved neutralising negative codes to make them useful not only for 'failures' but more generally to help to understand normal work (in light of Hollnagel et al. 2013).
The point is that if methods do not meet users' needs, then they will not be used, or will fall into disuse, or users will adapt them however they see fit, and the result may look very different to the original method. In this case, it took over a decade of experience – involving many discussions, iteration of different versions by various working groups, and gradual disuse and re-uptake – to arrive at a version that meets users' needs, in a highly conservative environment that is slow to take up new methods (and, once introduced, slow to let go of existing ones).
Ultimately, for practitioners, a key constraint on the approach taken to a piece of work is the time and money available. Project costs, duration and time with stakeholders for data collection affect tool choices and the thoroughness with which tools can be used. Methods that require extensive time on activities that offer diminishing returns (especially 'invisible', desk-based, non-client/user-facing activities, such as formatting, drawing diagrams and conducting complex statistical analysis) are harder to sell or justify to clients.
Another issue is testing. There are hundreds of HF/E methods, as reported in journals, but many are not tested for usability, suitability for a particular domain or application, time to use, or aspects of reliability and validity. These measures will vary between applications and hence cannot be judged from a simple test case. Often, methods that are tested have not been tested by researchers other than the developers (and their students or affiliates), and have not been tested by practitioners in real environments, for real projects, in real organisational conditions. When tested by researchers and practitioners not associated with the technique, empirical data can be remarkably different. Olsen and Shorrock (2010) tested an adaptation of the human factors analysis and classification system (HFACS) (Wiegmann and Shappell 2003) used in the Australian Defence Force. Three field studies of inter-coder consensus and intra-coder consistency found the method was unreliable for incident analysis. A further study on the original HFACS (Olsen 2011) found low consensus, both for human factors specialists and for air traffic controllers, all given standardised training. Olsen (2011) noted that 'Many studies have reported a successful level of reliability for HFACS with "success" ranging from a percentage agreement (or Cohen's Kappa in many instances) of 60%–85%. However, many of these studies are based on unpublished graduate research data and are reported using potentially inappropriate statistics for taxonomic reliability studies.' Even where methods are independently tested, few standardised testing protocols have been developed for HF/E methods. Olsen (2012) found a very wide variety of testing methods, and differences in reporting, for human factors safety taxonomies, such that comparison of reliability statistics is impossible for most published studies.
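To illustrate why percentage agreement can be a 'potentially inappropriate' statistic for taxonomic reliability, the following sketch (ours, with made-up data) shows two coders agreeing on 80% of ten incidents yet achieving only a moderate Cohen's kappa, because one code dominates and much of the raw agreement is expected by chance:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of incidents to which both coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    # Expected chance agreement from each coder's marginal code frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: two coders classifying ten incidents, one code dominant.
coder_a = ["slip"] * 8 + ["lapse", "mistake"]
coder_b = ["slip"] * 8 + ["mistake", "lapse"]

print(percent_agreement(coder_a, coder_b))        # 0.8 -- looks 'successful'
print(round(cohens_kappa(coder_a, coder_b), 2))   # 0.41 -- only moderate
```

With a dominant code, 66% agreement is expected by chance alone here, so 80% raw agreement corresponds to kappa of only 0.41; this is one reason why studies reporting percentage agreement alone can overstate taxonomic reliability.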
The main problem is that practitioners often do not know whether a method or technique will be suitable for their application, or how to convince clients of the value of a method. Sometimes this is a problem of evaluation and reporting, though in many cases the methods and context of use are not amenable to traditional scientific evaluation (see Dempsey 2007; Wilson 2000).

Implications
Apply a user-centred design strategy to methods: Develop methods with practitioners and field experts who are embedded in the domain of application to ensure that methods are fit for purpose. Use representatives of the possible intended user groups, and avoid the use of students as test participants. Involve practitioners in iterative testing from the early stages of design, in the real environment (this will lengthen the development cycle). At the outset of tool development, analyse intended and possible user requirements, and understand the required levels of competence. Where HF/E methods must be taught to and used by non-specialists, an appropriate level of simplicity needs to be worked out, with signposting of the need for further expertise.
Discuss with practitioners what 'good enough' looks like – what is the right efficiency-thoroughness trade-off? This could be done in conference workshops, for example. Remember that good methods take time to develop, test and refine. It is hard to go at a faster pace than key stakeholders in the process of development and implementation. Do not expect a tool to be created, developed and fit for purpose in a short time frame. Consider developing more 'lite' versions of methods, e.g. TRACEr-lite (Shorrock 2005, 2006).

Contextual constraints
The third fundamental constraint that we would like to discuss is one of the most powerful, and concerns the nature of the organisations and influential stakeholders within organisations and other environments in which practitioners work. The importance of this constraint is highlighted in a survey of 587 HF/E professionals by Chung and Shorrock (2011). Participants were asked to rate 29 statements concerning current barriers to their application of HF/E research findings, as practitioners. The statement 'The research is not relevant to my practice' was the fourth strongest barrier overall, as rated by the participants. This of course refers to all research, not just methods, but perhaps illustrates the importance of the issue. Other statements in the top 10, which might refer to realism, included 'Implications for practice are not made clear in the article' (second strongest barrier), 'I feel results are not generalisable to the organisational environment' (sixth strongest barrier) and 'The organisational environment is not adequate for application of research findings' (ninth strongest barrier).
One related constraint is organisational history, inertia and readiness. Practitioners do not have the freedom to use whatever HF/E method they wish. Organisations can be conservative, and of course may have a history of using certain methods. Many organisations have established HF/E-related methods for particular purposes, which are rooted in history and carry considerable associated personal, organisational, financial and time investments. This makes it difficult to update or improve methods, to replace them, or simply to remove them from use. Again, an example is safety-related taxonomies and classification schemes and associated methods, for incidents and accidents. Organisations often have legacy taxonomies that are coded into databases, established in training and used for many years, with much associated data that cannot easily be recoded using new taxonomies. Several such taxonomies are in use, including those developed by national and international/intergovernmental organisations (e.g. the International Civil Aviation Organization's accident/incident data reporting (ADREP) taxonomy [ICAO 2016], which includes a vast array of HF/E issues). In such cases, new taxonomies, perhaps with better empirical validity and reliability data, will struggle to get a foothold (though many taxonomies have never been tested, or have limited data). Other emergent approaches (such as intelligent text search) are unlikely to get a foothold at all.
Changes to long-established approaches tend to be incremental and even then, changes may involve very long periods of negotiation in working groups involving multiple parties, differentiated along professional, organisational and national lines.
Even when changes are free of lengthy negotiations, there can be difficult compromises and trade-offs. The EUROCONTROL safety culture questionnaire (Reader et al. 2015; Noort et al. 2016), which has been applied to over 30 air navigation service providers (in over 30 countries) throughout Europe, has undergone several iterations. Each of these has improved the validity of the questionnaire, but each makes comparison with past data more difficult.
Additionally, there may be more than one method used in one or more departments, even for broadly the same purpose. This is common in commercial and government organisations. For instance, the EUROCONTROL questionnaire is used by most air navigation service providers in Europe, but other questionnaires have also been used, both separately and in conjunction with the EUROCONTROL questionnaire; a client-side compromise with possible implications for validity, reliability and acceptability (to participants).
Another constraint relating to the organisational context is client method selection and endorsement from enforcing bodies. Calls for tender for consultancy projects often specify particular methods or approaches, which can affect the opportunity to introduce new or different methods. In one author's experience (Williams), a call for tender for safety culture support specified not only that the approach would involve only focus groups, but also the number, duration and make-up of the groups. In another example (Shorrock), the use of the EUROCONTROL safety culture questionnaire has been specified in several calls for tender to support the European Safety Culture Programme, as part of a multi-methods approach also including qualitative methods (interviews and focus groups). In many cases, client selection of methods is necessary and appropriate. In others it is not. In all cases, the bidding process should ideally concern not only the scoping and pricing of a project, but also innovation and proposal of the best methods for the job.
Some of the issues already raised in this article come together where a method becomes widely reported and available, is requested internally or by funders, is used by novices, and has some endorsement by, or provenance in, an enforcing authority. Regulators, supervising authorities and other enforcing bodies sometimes require particular methods, mandate certain methods as acceptable means of compliance, or have simply generated them as useful aids.
In one author's experience (Williams), the Health and Safety Executive's (HSE's) manual handling risk assessment filter embedded in L23 (HSE 2004) is one such method that has been misused repeatedly. Instead of serving as a guide to highlight the need for further risk assessment, it has become a de facto chart of safe lifting weights. Similarly, the manual handling assessment chart (MAC) developed by the HSE (HSE 2014) is seen as a gold-standard assessment tool for all manual handling situations, due to its provenance, in spite of its clearly stated limitations.
A final constraint concerning the organisational context is user competence (an issue that obviously relates to usability). Many HF/E methods assume, at least implicitly, a level of HF/E competence, perhaps in line with qualifying programmes. Some HF/E methods, however, are used by allied professions, such as safety engineering, psychology and physiotherapy, and in industrial or organisational contexts more generally (e.g. healthcare) where competence in HF/E is limited. Increasingly in organisations, individuals are given a 'human factors specialist' role without formal training in HF/E (e.g. to the level that would confer formal recognition, such as certification, registration or chartership). The application of HF/E methods by those without formal training in HF/E as a discipline therefore becomes an issue (e.g. see Stanton and Young 2003). Methods may be used beyond their design intent, users may not be up to date with newer methods, and unsuitable methods may be used or remain in use.
One of the authors (Shorrock) consulted to a chemical manufacturing plant in which selected operations staff members had, sometime in the past, been taught to use hierarchical task analysis (HTA) as a means of constructing procedures and supporting safety-related assessments. The original HTA process was translated into templates of pre-drawn tree diagrams, and analyses were done in a simplistic fashion without progressive redescription of goals and without 'plans' – a critically important aspect of the HTA method that addresses task sequencing, conditionality and so on. This example (along with the TRACEr example described above) illustrates how critical aspects of particular methods may not survive the transition from HF/E professionals to more general use.
At the more physical end of HF/E, much work has been done to provide methods, or versions of methods, that are accessible to the non-specialist. Examples include the MAC tool (HSE 2014); rapid upper limb assessment (RULA), originally developed by McAtamney and Corlett (1993) and provided as software at http://www.ergonomics.co.uk/rula.html or as easily scored worksheets at e.g. http://ergo.human.cornell.edu/ahrula.html; and the Health and Safety Executive's Fatigue and Risk Index (HSE 2006). In our experience of these and other tools, non-specialists often require support to do something useful with their findings. As with any tool, the magic comes from the user of the tool as much as, or more than, the tool itself (Williams and Haslam 2011); the two are part of an adaptive system.
There are other organisational constraints, many of which neither method developers nor users have much power over (e.g. access to staff, environment and other client-side resources). But the above hopefully give a flavour of some of the constraints that can affect the use of methods in practice.

Implications
Understand the organisation: In new project proposals, build in a survey of existing methods and adaptation of organisational systems to incorporate new methods. Take time to understand the implications of a change to the wider system (e.g. the knock-on effect on reporting and performance metrics). Methods such as Seven Samurai (Martin 2004) can be useful for this.

Create a user community: Consider setting up user groups. For instance, the functional resonance analysis method (FRAM) (Hollnagel 2012) has an associated 'FRAMily', which meets regularly to discuss applications, and there are regular conferences relating to the systems-theoretic accident model and processes (STAMP) (Leveson 2004).

Work on relationships: Understand that the introduction of a method to non-HF/E specialists in an organisation is often more political than technical, and thus requires more interpersonal than technical skill. Involve industry associations and other influential organisations during the development process. Develop links with those who commission work, to introduce improved HF/E methods in calls for tender or to leave scope for such methods. Befriend IT systems owners and developers to understand how databases are coded and how changes can be made. Link methods with sound models for intervention (e.g. Michie, Atkins, and West 2014) to provide a compelling case for change.

Build methods into tenders: Discuss with clients the scope for interpretation prior to a formal response to a call for tender or other request. Build into projects a phase for methods comparison or development, to evaluate available methods and show that another is better suited to the desired work. Alternatively, if the funder is set on a particular method, propose to redevelop it through the work. Include in tenders an optional approach to allow for negotiation (even after winning the bid).
Be careful when mandating or endorsing methods: Enforcing bodies should take particular care when endorsing or requiring the use of certain methods. In such cases, it must be emphasised that 'good practice' now may not be good in the future, or in all cases, and there must be provision to account for changes. Be specific about the application and scope of endorsed methods.

Moving forward
In order to move forward, and even survive, HF/E as a discipline and profession needs to ensure that its methods are fit for purpose and implemented in a way that ensures successful use. Without combined effort from researchers and practitioners, we are at risk of retaining methods that are unfit for purpose (e.g. due to a fast changing environment), and developing new methods that are unfit for purpose (e.g. due to unmet stakeholder needs). This final section outlines some potential ways to move forward.
The key recommendation for developers is to use our own process of design: human-centred design. The ISO standard 'human-centred design for interactive systems' (ISO 9241-210 2010) describes six principles for human-centred design (in this case, the design of a method):
(1) The design is based upon an explicit understanding of users, tasks and environments.
(2) Users are involved throughout design and development.
(3) The design is driven and refined by user-centred evaluation.
(4) The process is iterative.
(5) The design addresses the whole user experience.
(6) The design team includes multidisciplinary skills and perspectives.

HF/E methods therefore need a thorough stakeholder analysis. Stakeholders may include HF/E practitioners and clients, and, for example, influential organisations such as industry groups and associations, enforcing bodies and intergovernmental organisations. The stakeholders will typically have varying levels of competence.
Part of this user-centred design activity should ensure that it is easy for people to find and use the method. This might include avoiding restrictive markings; using commonly used and easy-to-use software; using open access publications or making pre-prints freely available; using Creative Commons licensing; avoiding the need for single-source, in-house training; using social media to promote the method; considering smartphone apps where feasible; and linking methods with intervention models, considering the capability, opportunity and motivation of potential method users.

The key recommendation for users is to take bolder steps to introduce new or different methods. There can be a tendency to use familiar methods, accept the status quo, or perhaps acquiesce too quickly to the perceived demands of clients. However, some common methods are becoming unfit for purpose, for instance those that assume linear cause-effect relations or 'root causes', and those that do not take a systems approach more generally. Many clients are receptive to suggestions for innovation, even though they are unlikely to be aware of many HF/E methods. Our experience is that by working on relationships in an organisation, many things become possible. Steps can also be taken to build methods into tenders.
For both developers and users, there is a need for greater collaboration, not just in the context of user-centred design for specific methods, but more generally, to ensure that the right constructs are being targeted for method development (noting the plethora of methods for notions such as 'human error', 'situation awareness' and 'mental workload', which are hardly systems constructs).
These recommendations align with the most frequent suggestions to HF/E researchers and practitioners made by HF/E specialists in the survey by Chung and Shorrock (2011), namely: (1) increasing collaboration, communication and networking between researchers and practitioners; (2) ensuring that research focus and methodology are relevant to the organisational environment; (3) providing clearer implications/applications and more definitive conclusions in articles; (4) increasing awareness of research and time to read research at work; (5) seeking support from decision makers and stakeholders; and (6) increasing application of research findings to real problems and organisational experience.
The first of these is the key to the rest. As researchers and practitioners, as developers and users, we must work together to ensure that our HF/E methods are useful, usable and used in practice.