Applying a Capability Maturity Model (CMM) to evaluate global health security-related research programmes in under-resourced areas

Abstract: Organisations in under-resourced areas that achieve long-term research sustainability by successfully competing for research funding not only build their reputation for conducting quality science but also develop their human resources in a manner that reduces the risk of becoming a future security threat. Major challenges for these organisations include identifying and prioritising funding opportunities, securing and administering external grant awards and publishing both the outcomes of research and relevant surveillance data. The lack of a standardised evaluation technique for assessing institutional research capabilities poses challenges for identifying and targeting the specific, repeatable processes that lead to organisational improvements. Short- and long-term goals, which are challenged by research quality, funding and human resources, need to be established in order to achieve complex missions such as reducing global health security threats. Once baseline capabilities are established, a consistent evaluation technique provides an objective view to complement other steps that enhance capabilities. The capability maturity model (CMM), often used in the business and technology sectors for establishing life cycles and planning sustainment, is a technique that enhances performance by defining three levels of capability (initial, managed and defined). An organisation can assess its current state of capability ('as is') and develop an actionable strategy for its next progression ('to be'). In addition, application of a CMM aids the creation of a strategy for realising a more repeatable and optimised process. Research programmes frequently rely on basic metrics, such as the number of peer-reviewed publications and grant funding awards, to measure their quality. Our analysis suggests an approach that includes references and tools, especially risk-based ones, which can be used to establish initial best practices, define metrics and measure outputs and rates of success in a stepwise manner. In addition, we provide a pilot example from a survey of research institutes in under-resourced areas.

Introduction

Organisations that achieve long-term research sustainment by successfully competing for research funding not only build their reputation for conducting quality science but also develop their human resources in a manner that reduces the risk of becoming a future security threat. In assessing and evaluating research capabilities, especially those related to global health security programmes, methods must focus on the success of securing and administering external grant funding by the institute. Without external grant funding, the infrastructure underlying a research programme will begin to degrade over time, including the loss of quality personnel (brain drain), the inability to maintain equipment through service contracts and difficulty in obtaining new state-of-the-art equipment, research kits and technologies. Lacking these resources, research quality will stagnate and then regress, and thus the ability to apply competitively for grant funding will diminish, exacerbating the problem. To maintain an upward cycle, a mechanism to assess and plan for success in obtaining and administering grant funding must be instituted early on, so that mitigation processes can be initiated as challenges are identified.

The current lack of an overall strategy and programme structure to assess and evaluate the success of securing and administering research funding will hinder competitiveness among global health security-related research programmes in under-resourced areas, resulting in their inability to maintain an independent institutional research effort. Biomedical institutes in under-resourced areas of the world face a number of challenges in building global health security-related research programmes, in addition to those commonly encountered in developed countries. For example, many institutes capable of and/or charged with handling pathogens of global security concern may be officially commissioned solely as clinical diagnostic centres and may not have a clear or specific mandate to conduct and publish novel scientific research. In such cases, the institutional infrastructure required to identify and obtain funding for research programmes, such as staff support, recognised ethics and review boards and accounting procedures to receive and manage external funds, may not be available or may not be prioritised for research activities. Scientists also may not have explicit permission to conduct research and to publish results in scientific media. In cases where institute personnel are allowed to conduct research, diagnostic procedures often take priority over research activities and may result in delays or interruptions of research projects.

Two critical needs for identifying and applying for relevant research opportunities, as well as for publishing research results, are English language skills and reliable Internet access. English is the dominant language of international scientific communications; publication in other languages and in non-English national or regional journals has declined in relative terms over the past several decades (Bordons & Gomez, 2004; Kirchik, Gingras, & Lariviere, 2012; Tardy, 2004; van Weijen, 2012; Zitt, Perrot, & Barre, 1998). At many biomedical institutions in under-resourced countries, scientists and institute personnel may require additional English language skills to read and respond to opportunities for research funding and to develop research articles for peer-reviewed publication (Bortolus, 2012). Where funding applications and publications are written in a native language, funds and human resources for the translation of written materials, or alternative translation models (Root-Bernstein & Ladle, 2014; Smith, Chen, & Liu, 2008), need to be found in order to submit materials to the English-speaking scientific audience.

As the Internet has become a powerful medium for scientific communications worldwide, a reliable Internet connection is also necessary to connect scientists in under-resourced areas with the global scientific community. Global Internet penetration varies significantly, with the least developed countries especially lacking in web access; broadband Internet service is even rarer in most of these places (Pew Research Center, 2015). Fortunately, individuals with greater education and English language skills often have greater-than-average access to the Internet in their countries (Pew Research Center, 2015), suggesting that scientists may not be as limited in access as many of their compatriots. As evaluations of scientific impact now incorporate more than just publications (Priem, 2013), Internet access will become ever more critical in developing the characteristics of a successful and impactful research programme.

Recognition of relevant opportunities to apply for funding and to publish results also requires sufficient familiarity with funding agencies, notification platforms, journals, conferences and other sources of pertinent information. Scientists in under-resourced areas may require greater opportunities to engage with international scientific organisations, attend conferences and access journals or other scientific media in the quest to develop an impactful scientific research programme. Where suitable conferences for presenting results are identified, funds for attendance may be scarce, resulting in a lack of incentive for scientists to submit their research abstracts for conference presentation.

Another potential challenge to the advancement of research programmes at these institutes is a lack of published, hypothesis-driven investigations upon which to build a competitive research programme. For the reasons mentioned previously, institutes aimed primarily at providing diagnostic services may not have significant experience with conducting hypothesis-driven research. Similarly, scientists in non-English-speaking countries may not have had the experience of publishing their results in internationally recognised journals of high scientific impact (Badrane & Alaoui-e-Azher, 2003; Benamer & Bakoush, 2009; Rahman & Fukui, 2003; Uthman & Uthman, 2007). The lack of such publication records hinders the ability to obtain funding for proposed research projects. Developing local national scientists' skills in conducting and reporting on independent research is key to furthering their representation in high-impact scientific reporting platforms (Rohra, 2011; Yousefi-Nooraie, Shakiba, & Mortaz-Hejri, 2006).

One mechanism for enhancing scientific skills, particularly in conducting quality hypothesis-driven research and reporting in international peer-reviewed journals with good impact factors, is collaborating with peer scientists at national and international institutes. Collaborations with experienced, successful scientists strengthen the development and testing of hypotheses; facilitate the sharing of equipment, supplies, reagents and assays (e.g. commercial and home-made) and standard operating procedures; and improve writing skills for scientific presentations, reports and manuscripts. Success from this peer coaching and mentoring, as well as from collaborative studies, leads to scientific publications and to confidence in writing competitive research proposals for national and international funding. A biomedical institute's capability and recognition, especially in under-resourced areas, will be enhanced greatly by increased numbers of collaborations; in contrast, institutes that have few collaborations will likely have less success in biomedical research. Evaluations of collaborations should be one of the more important metrics used in determining the research capability of biomedical institutes.

The assessment and evaluation of research capabilities is a complex process that suffers from the current lack of standardised methods and from the wide variation in institutional capabilities. In particular, there is no standard methodology to evaluate research capabilities at the working level of an institute, programme or department. Reports exist which provide statistics-intensive evaluation frameworks and tools, but there is no 'one size fits all'. The Frascati Manual (FM), for example, which has been used for over 50 years, is a standard, accepted method of producing research and development (R&D) statistics that industrialised and developing countries use to benchmark their national policies for science and technology innovation. However, the FM recognises the difficulty of identifying and measuring research outputs in a consistent way. In general, R&D statistics from developing countries and under-resourced areas are lacking, due to the complexity of coordinating and compiling data and the lack of defined indicators for science, technology and innovation across different institutions (van der Pol, 2010).

RAND developed a guide to evaluating research using frameworks and tools, including a decision tree that matches the type of evaluation tool with the research purpose: analysis, accountability, advocacy and allocation. They categorised tools into Group 1, including case studies, peer review and site visits, and Group 2, which includes bibliometrics, surveys and data mining. Both Groups 1 and 2 are used for analysis and accountability (Guthrie, Wamae, Diepeveen, Wooding, & Grant, 2013). RAND also developed a document to measure the performance of a U.S. Department of Defense (DoD) programme, which included metrics for measuring research capabilities (Young, Willis, Moore, & Engstrom, 2014).

In summary, competitiveness among global health security-related research programmes in under-resourced areas is impeded by the absence of consistent tools for evaluating success in the areas of funding, administering, conducting and publishing scientific research. One important mitigation process is to focus on hypothesis-driven and achievable proposals to garner funds for generating quality research that successfully addresses the hypothesis. Successful research then leads to conference presentations and peer-reviewed publications, which raise an institute's profile and international recognition. This, in turn, enhances subsequent applications for grant funding and collaboration with international institutions, resulting in an upward cycle of successes in improving research sustainability.

Capability Maturity Model (CMM)
The capability maturity model (CMM) originated as a framework developed by Carnegie Mellon University to improve its process for developing computer software. The model includes a self-assessment that captures the organisation's best practices in key process areas (i.e. capabilities) and then shows how the organisation can redefine its capabilities as it evolves into a more mature state (Paulk, 2009; Paulk, Curtis, Chrissis, & Weber, 1993). As an organisation performs its self-assessment to define its current 'as is' capabilities, tools such as those described by RAND Groups 1 and 2 can be employed to collect the needed information. These tools, coupled with a thorough analysis by the organisation's executive leadership and performers, are essential to developing the desired 'to be' capabilities. Beyond computer software, CMM has been adapted to various product development applications that require repeatable decision-making, risk and mitigation strategies, such as medical devices (Medical Device Innovation Consortium [MDIC], 2015) and business development practices. Carnegie Mellon University developed both the Capability Maturity Model (CMM®) and Capability Maturity Model Integration (CMMI®), while Shipley Associates developed the Business Development Capability Maturity Model (BD-CMM®). In principle, the fundamental concepts of the CMM can be applied and scaled to enhance any institute's research capability.
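To make the 'as is'/'to be' comparison concrete, the five-level scale can be sketched in code. This is an illustrative sketch only, assuming a per-capability gap analysis: the level names follow the CMMI convention discussed in this section, while the capability areas and assigned levels are hypothetical.

```python
# Sketch: representing CMMI maturity levels and an 'as is' vs 'to be' gap
# analysis. The capability areas and scores below are hypothetical examples,
# not data from any surveyed institute.
from enum import IntEnum

class Maturity(IntEnum):
    INITIAL = 1       # ad hoc; success depends on individuals
    MANAGED = 2       # partially consistent, repeatable processes
    DEFINED = 3       # standard, documented processes
    QUANTITATIVE = 4  # quantitatively managed
    OPTIMISED = 5     # continuously improving

def gap_analysis(as_is: dict, to_be: dict) -> dict:
    """Return, for each capability area, the number of levels to progress."""
    return {area: int(to_be[area]) - int(as_is[area]) for area in as_is}

as_is = {"grant administration": Maturity.INITIAL,
         "publication tracking": Maturity.INITIAL,
         "collaboration network": Maturity.MANAGED}
to_be = {"grant administration": Maturity.MANAGED,
         "publication tracking": Maturity.DEFINED,
         "collaboration network": Maturity.DEFINED}

for area, gap in gap_analysis(as_is, to_be).items():
    print(f"{area}: advance {gap} level(s)")
```

Keeping the levels as an ordered scale, rather than free-text labels, makes the 'to be' gap directly computable for each capability area.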
CMM, CMMI and BD-CMM are structured into five maturity levels (Newman, 2011a; Paulk et al., 1993). An organisation first performs a self-assessment to establish its current maturity level and then establishes the next steps and goals needed to achieve the subsequent levels. At CMMI Level 1 (initial), the organisational capability process is unstructured, undefined and inconsistent; it relies largely on the successes of individuals. At Level 2 (managed), the organisation has partially consistent, successful processes; at Level 3 (defined), it has standard, documented processes. At Level 4, the process is quantitatively managed, while at Level 5 it is optimised. Within the five maturity levels of CMMI, there are three capability levels that correspond to the first three levels of maturity: initial, managed and defined. Our focus will be on these three capability levels as they relate to research capacities. For organisations performing global health security-related research in under-resourced areas, understanding the challenges they face, together with ownership (such as through a CMM or related assessment) and sustainment of these capabilities, are important factors that need to be prioritised. Success will also depend on the associated networks and partnerships that an organisation maintains (Marjanovic, Hanlin, Diepeveen, & Chataway, 2012).

Though the initial capability assessment is usually an internal activity, regular assessments by independent peer, technical and scientific experts, whose knowledge and expertise enable them to make credible and objective judgements about the institute, should be performed as capabilities mature. When examining issues of research, an independent assessor should pay particular attention to current and past research, as well as to the capability for additional growth. The laboratory assessment process is designed to enhance laboratory performance and quality by providing feedback to laboratory managers and staff regarding their work. It provides a straightforward appraisal that further improves stakeholder confidence in the value of the work performed and the outcomes produced. It also serves as an opportunity for technical experts, customers and stakeholders to exchange views with laboratory managers and directors.

Metrics related to research capability can be measured as general outputs that may also include indicators and milestones. For example, measures of general research capability include annual counts of conference presentations (oral and poster), peer-reviewed publications, responses to proposals or other requests, the number of agencies from which funding is actively received and the proportion of senior scientists acting as lead author on one or more papers (Young et al., 2014). Additional metrics comparing the number of submissions with those accepted, as well as the frequency with which a particular scientist appears as lead author on publications, provide an indication of individual and general performance, with success graded by quality, funding amounts and performance sustainability.

Bibliometrics is the measurement of the quality and quantity of publications. More recently, the term 'scientometrics' has been applied to the bibliometric analysis of scientific publications. Both are based on various types of citation analysis, in which the citing of a particular publication is followed over time. In addition to the obvious value of such information to individual researchers or institutions as a measure of success (or the lack of it), citation analysis can also be a valuable tool for planners gauging the directions in which research is moving in a particular field. The quality of publications can be assessed through data on how often a particular reference is cited, as well as through the 'impact factor' of the journals in which the work appears; this impact factor is, in turn, usually a measure of how often a journal's articles are cited. Citation indexes have existed in various forms for over a century. A common present-day index is the Science Citation Index, and free-access services such as CiteSeer and Google Scholar are also available. These methods are well developed for traditional publication in journals but less well developed for the digital dissemination of data in places such as websites and datasets, since URLs for these collections are often ignored in journal citations.
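Once submissions, publications and authorship are tracked, simple output metrics of the kind described above can be computed directly. The sketch below is illustrative only: the author names, counts and citation figures are invented, not data from any surveyed institute.

```python
# Sketch: computing simple research-output metrics discussed in the text.
# All records below are invented for illustration.
from collections import Counter

papers = [  # (lead_author, citations_to_date)
    ("Asha", 12), ("Asha", 3), ("Omar", 0), ("Lin", 7), ("Lin", 2), ("Lin", 1),
]
submissions, acceptances = 9, 6  # hypothetical annual proposal counts

# Proposal acceptance rate (submissions vs. those accepted).
acceptance_rate = acceptances / submissions

# Proportion of senior scientists with at least one lead-author paper.
lead_counts = Counter(author for author, _ in papers)
senior_scientists = ["Asha", "Omar", "Lin", "Mira"]  # hypothetical roster
share_with_lead_paper = (
    sum(1 for s in senior_scientists if lead_counts[s] >= 1) / len(senior_scientists)
)

# A crude citation-based quality indicator.
mean_citations = sum(c for _, c in papers) / len(papers)

print(f"acceptance rate: {acceptance_rate:.0%}")
print(f"senior scientists with a lead-author paper: {share_with_lead_paper:.0%}")
print(f"mean citations per paper: {mean_citations:.1f}")
```

Even a minimal record like this, kept year on year, yields the submission/acceptance and lead-author metrics the text recommends; the same structure extends naturally to per-journal impact factors or per-grant funding amounts.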

Approach to assessing initial capabilities
Initial laboratory capability assessments should be undertaken internally by a laboratory manager or quality officer to produce a baseline and to identify target areas for improvement projects. While these assessments and improvement projects are likely to focus upon the general development of laboratory services in under-resourced settings, these data should also be incorporated into any assessment of research capabilities. Such assessments, however, can be time-consuming, particularly as they can encompass a huge array of areas including staffing and technical skills, biosafety/biosecurity, quality management, equipment, supply chain and laboratory management. Thus, selecting the most appropriate assessment approach from the outset minimises the risk of wasted or duplicative effort. A well-defined mission statement is critical to allow institutes to properly evaluate their capability and, thus, their maturity. In under-resourced areas, laboratories are often engaged by the international community seeking to assist in their growth and development and, as described elsewhere (Yeh et al., 2016), lack of alignment of expectations or goals can hinder these collaborative relationships. A formally adopted mission statement makes clear the planned end state of an institute. While initial baseline assessments may reveal gaps which prevent these aims from being achieved, knowing the goal(s) simplifies the definition of developmental milestones as the institute's research capacity matures. Understanding the mission and role of an institute in the health and research communities, and capturing these in a simple statement, allows for the correct assessment technique to be selected, as well as enabling useful goals and milestones to be developed. These procedures allow resources to be properly focused on achieving the aims of the institute rather than addressing gaps of lesser importance to the overall mission, which may be the result of measuring inappropriate metrics.
A multitude of laboratory assessment tools have been published and are freely available online, but the checklist used should be selected based upon the goals and needs of a given facility. For data related to research capability, most selected tools fall into the Group 1 category under the RAND framework (Guthrie et al., 2013), as they include an element of judgement or interpretation. To give a clear view of research capacity, some Group 2 tools, for example bibliometrics or data mining, should also be considered for a more complete picture.
However, tools for examining the impact of such data, such as the 'Toolkit for the Impact of Digitised Scholarly Resources' (TIDSR; microsites.oii.ox.ac.uk), have more recently been developed. In addition, analytical methods, often knowledge-based models, are available to improve the reliability of capability assessments (Rauffet, Da Cunha, & Bernard, 2010). In contrast to traditional journal publications, scientific networks can also be analysed to better understand the dynamics and process by which research collaborations are started and sustained. These networks and their outcomes can be modelled to show relationships among collaborative activities and events such as conferences, meetings, white papers and presentations (Fair, Stokes, Pennington, & Mendenhall, 2016). Scientific networks, especially face-to-face interactions, combined with the English language skills and reliable Internet access mentioned earlier, are critically important for scientists working in under-resourced areas. Overall, organisations with more mature research capabilities, in concert with government and stakeholders, will help create 'innovation ecosystems' that catalyse economic growth, especially in under-resourced areas (Moser, 2016).

Initial pilot example: applying CMM methodology to a hypothetical research institute in an under-resourced area

Many tools are available that can be used to conduct a self-assessment. In this case, we conducted electronic surveys in two regions, which included specific questions to determine applicable metrics. Online surveys are appealing, since they can be designed quickly and distributed widely to collect and measure data and, if desired, to ensure respondent anonymity.

One of the surveyed institutes relied only on local government sources for research funding and had a high rate of success (around 80%, albeit often with reduced budgets). The other, which was much more global in its attempts to fund research, had no success in the past five years, despite many submissions. However, this institute had previously received external grants in the five-year period preceding the surveyed one, implying that the international grant funding arena has perhaps become more competitive.

Surveyed institutes had many of the features in place that are predictors of potential funding success, such as defined programmes and objectives; identification and prioritisation of opportunities; grantsmanship training; post-submission debriefing; and frequent attendance at scientific meetings. Noticeably absent from reported research programme features were publications in international journals, which did not seem to be a high priority for the organisations that responded.

Based on the responses we solicited to specific questions from the two laboratories, one seemed to be in better shape than the other, but neither was functioning as well as analogous organisations in effectively resourced areas. As a substrate for the potential application of CMM approaches, we list below some of the points made by the laboratories in their replies, as well as some suggestions on how CMM could be applied to improve their performance.

Problem issues

(a) Engagement and ownership of research programmes by staff scientists: This occurs less frequently where top-down management is practised; that is, the administrator makes all key decisions, including scientific directions, which funding to apply for and, presumably, how the funding is applied upon an award. This unilateral approach fails to utilise the strengths of subordinate resources in the institution and diminishes the constructive potential of the scientists involved. In our experience, this is very common in under-resourced countries and in cultures that exhibit greater inequality between superiors and subordinates. High 'power distance' values are typical within an organisational structure where the relationship between authority and subordinates is defined, rigid and accepted (Waldman et al., 2006).

(b) Restricted funding sources: In one of the two laboratories we surveyed, the only source of finance for research appeared to be the national government. Whether this was mandated by the government was unclear, but it is an obvious problem for a research programme that wishes to develop a more global strategic vision. The other laboratory appeared to be more forward-leaning and obtained support from national and foreign governments, as well as from national and international collaborators through public and private sources.

(c) Importance of funding source: Not surprisingly, one laboratory was interested only in the actual source of funding, in line with its comments in the prior section. The other laboratory saw funding importance in terms of source, scope, likelihood of success, importance of topic and its ability to allow expansion of the laboratory's capabilities, which is a more desirable situation.

(d) Application process: Both institutions used 'standard procedures' for the application process, and both had training in place to optimise the effort involved. One telling comment, however, was that if a scientist goes out to another, potentially foreign, laboratory for training, he or she may not be able to fully utilise that training when returning home. The reasons were not specified; however, experience in such situations suggests that either the resources were not available and/or institute policy (via the administrator) precluded work in specific areas. Also, neither laboratory was very active (relative to a developed country laboratory) in grant applications: one admitted to 3-4 annually, while the other mentioned only one. Both institutions had scientific staffs of around 40 people.

(e) Presenting work done: Conference attendance was an area in which both institutions did quite well. On an annual basis, both sent approximately 10% of their scientists to international conferences and 20-30% to national and local events. Funding was supplied from each grant, and everyone who attended presented a poster or gave a talk. One outcome of conferences that did not seem to be valued, however, was their use as an introduction to collaborations. In general, the generation of publications appeared to be given less importance than making presentations; one response mentioned approximately 5 publications annually (for 40 scientists), while the other was able to come up with only 'many' publications. The latter suggests that publications are not tracked formally by this institution and thus are likely not part of the perceived impact of the individual or the institution.

(f) Project scope: In the one case in which we had a response to this question, the scope of the research funded from grants was very wide, from animal genetics through chemical toxicology to microbial genetics. This may be because certain government tasks have been mandated; however, such a wide scope may well prevent the formation of a specialised, institutional focus. Such a focus could make the laboratory more attractive for funding, particularly from international sources.

Potential solutions

Through CMM approaches, as discussed earlier, some or all of the above issues can be addressed.

(a) Increasing engagement and ownership of research programmes by staff scientists is an integral part of assessing initial capabilities. To begin, a mission statement is essential, and the institute director should gather a group of his or her colleagues together to draft such a document. As pointed out earlier, there are many tools available through which this can be achieved. A critical point is that the director needs to be willing to alter the existing management structure, so that opinions other than the director's can inform the future progress of the institution.

(b) Another feature of assessing initial capabilities is that output from the institution has to be tracked, starting with publications and funding sources. In at least one case, a restricted ability to acquire financial support for research needs to be noted as a problem and steps taken to institute changes. The capability assessment should, therefore, also include the institute's capacity to apply for and manage additional research support and specify areas in which this could be most fruitfully utilised.

(c) Further assessment should encompass the broad issue of training. In our questionnaire, we focused on training in grant submission; however, an equally important part of an institute's capabilities is the extent to which staff members are encouraged to seek additional research training, very often in other laboratories. This activity should be planned as an extension of the initial assessment, such that the areas pinpointed can be favoured when choices are made about which scientists are best suited for training and the type of training required.

(d) Once the initial assessment has been completed and the future direction of the institution has been decided upon, capacity-building activities need to be put in place. These, in keeping with what we have discussed above, should address issues such as improving decision-making, making the day-to-day running of the operation more inclusive and relaxed and fostering transparency in all activities. An appropriate focus needs to be developed in the area of training, which should involve not just the senior members of the scientific staff but all those whose expertise forms part of the strategic plan for the future. Again, as outlined earlier, the development of collaborations, for example as a result of contacts made during training or at scientific meetings, will be a necessary part of the capacity-building efforts of the institution.

(e) The goals, then, are to assess capabilities, build additional capacity through the initial planning exercise and become involved in ongoing assessment of progress through the CMM approach. The end result is to strengthen the overall competitiveness of the institution, both internationally and nationally.
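As a closing illustration of how such assessments might be operationalised, survey answers of the kind gathered in a pilot could be mapped onto the three capability levels (initial, managed, defined) with a simple rubric. This is a sketch under stated assumptions: the question keys and thresholds below are hypothetical and would need calibration against real institutes before any use.

```python
# Sketch: a first-pass rubric mapping yes/no survey answers to the three
# capability levels discussed in the text (initial, managed, defined).
# The question keys and thresholds are hypothetical examples.

def rate_capability(answers: dict) -> str:
    """Classify an institute's research capability from yes/no survey answers."""
    managed_criteria = [
        answers.get("has_mission_statement", False),
        answers.get("tracks_publications", False),
        answers.get("standard_application_process", False),
    ]
    defined_criteria = [
        answers.get("multiple_funding_sources", False),
        answers.get("grantsmanship_training", False),
        answers.get("active_collaborations", False),
    ]
    if all(managed_criteria) and all(defined_criteria):
        return "defined"
    if all(managed_criteria):
        return "managed"
    return "initial"

survey = {"has_mission_statement": True, "tracks_publications": True,
          "standard_application_process": True, "multiple_funding_sources": False,
          "grantsmanship_training": True, "active_collaborations": True}
print(rate_capability(survey))  # -> managed
```

A rubric like this makes the rating repeatable across institutes and survey rounds, which is the property the CMM approach seeks, although the choice and weighting of criteria remain a matter for expert judgement.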

Discussion

We hypothesise that an analysis of institutional research capabilities which combines an objective self-assessment with historical metrics, and applies CMM benchmarks designed for business and industry, provides a positive direction for strategic institutional development. Setting expectations and goals that reflect expected institutional performance is part of defining what are considered initial, managed and defined capabilities. Given the general lack of information available from 'real-life' institutes in under-resourced areas on the issues we have discussed, we contacted two of them to gather comments on a range of relevant subjects, including past performance and future plans for research activities. It is important to note that the type of research many institutes perform in global health security is applied, rather than basic or experimental, research as defined by the FM. Regarding the feedback from our initial pilot example, the lack of international funding presents a major challenge to carrying out the type of 'cutting edge' research that would be accepted in the global scientific press, setting up a classic 'catch-22' situation in which lack of publication leads to lack of funding and vice versa. Nevertheless, our initial pilot example did show that some of the measures included in the survey that we deemed important in a CMM approach to research capacity improvement were in place, at least in our very small sample. A next step would be to redesign a more detailed and wide-ranging survey and use the results to give feedback to institutes in low-resource areas, so that a CMM-based plan might be put in place with improved confidence.

Overall, we argue that the analytical and quantitative practice of integrating the CMM is an excellent technique for improving low-resource institutional capabilities and performance to a level that is repeatable and sustainable. The awareness, knowledge, data and partnerships created through implementation of the CMM also help foster, reinforce and strengthen a culture of scientific responsibility. Transparency, fostered through discussions shared in face-to-face engagements, is further enhanced through sharing data, publishing results and discussing capabilities. Finally, what is perhaps most clear is that global health security interests need to build bridges with institutes in low-resource areas and, in instances where those institutes seem to be less than successful, make serious attempts to bring them into the international research environment.

Disclaimers

The views expressed in this presentation are those of the authors and do not necessarily represent the official policy or position of the Department of the Navy, Department of Defense or the U.S. Government. This is the work of a U.S. Government employee (ALR) and may not be copyrighted (17 USC 105). No copyright notice may be placed on this work.