A KSA system for competency-based assessment of clinicians’ professional development in China and quality gap analysis

ABSTRACT Background We aim to create a holistic competency-based assessment system to measure competency evolution over time - one of the first such systems in China. Method Two rounds of self-reported surveys were fielded among graduates of Shantou University Medical College: June through December 2017, and May through August 2018. Responses from three cohorts of graduates specializing in clinical medicine - new graduates, resident physicians, and senior physicians - were analyzed. Gaps between respondents' expected and existing levels of competencies were examined using a modified service quality model, SERVQUAL. Results A total of 605 questionnaires were collected in 2017 for the construction of competency indicators and a 5-level proficiency rating scale, and 407 in 2018 for confirmatory factor and competency gap analysis. Reliability coefficients of all 36 competency indicators were greater than 0.9. Three competency domains were identified through exploratory factor analysis: knowledge (K), skills (S), and attitude (A). The confirmatory factor analysis confirmed the fit of the scale (CMIN/DF < 4; CFI > 0.9; IFI > 0.9; RMSEA ≤ 0.08). Within the cohorts of resident and senior physicians, the largest competency gap was seen in the domain of knowledge (K): −1.84 and −1.41, respectively. Among new graduates, the largest gap was found in the domain of skills (S) (−1.92), with the gap in knowledge (−1.91) trailing closely behind. Conclusions A competency-based assessment system is proposed to evaluate clinicians' competency development in three domains: knowledge (K), skills (S), and attitude (A). The system consists of 36 competency indicators, a rating scale of 5 proficiency levels, and a gap analysis to measure competency evolution through 3 key milestones in a clinician's professional career: new graduate, resident physician, and senior physician.
The competency gaps identified can provide evidence-based guidance for clinicians' own continuous development as well as future medical curriculum improvements.


Introduction
Epstein and Hundert [1,2] defined systems-based competencies for health professionals as follows: 'the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.' It was further advocated in the report entitled 'Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World', published by the Lancet Commissions in 2010, that 'a 3rd generation [of medical education] is now needed that should be systems based to improve the performance of health systems by adapting core professional competencies to specific contexts while drawing on global knowledge.' [3] The earliest literature on physician core competencies can be traced as far back as the 1970s. Government agencies and organizations worldwide have since been continuously updating these competencies. In 1998, the Accreditation Council for Graduate Medical Education (ACGME) in the U.S. defined core competencies in 6 areas for health practitioners [4]. In 2005, the Royal College of Physicians and Surgeons of Canada published the CanMEDS 2005 Physician Competency Framework as an update to the previous version (published in 1996 and entitled 'Skills for the New Millennium'), outlining 7 physician roles [5]. In 2013, the General Medical Council in the UK issued a guidance document entitled 'Good Medical Practice' to delineate the duties of doctors [6].
In order to transform medical education for the twenty-first century, it is essential that educational institutes (medical colleges and schools, teaching hospitals, etc.) strengthen their faculty teams and promote curriculum reforms to elevate a broad range of capabilities of the medical personnel [3,7]. Graduate surveys that collect feedback from the recipients of medical education have been relied upon as one of the effective tools to gauge the teaching quality at medical institutes, and can provide valuable input to help direct plans to improve medical curricula [8,9].
Since 1998, China has implemented the largest reform of medical education in the world by incorporating professional training into college education. This has significantly boosted the enrollment of health-care professionals at medical institutes [7]. In 2015, standardized resident training was also introduced in China [10]. There are three main tracks of formal medical education in China, which aspiring high-school graduates can pursue: the 5-year, the 5 + 3, and the 8-year programs. For the 5-year track, high-school graduates enroll in the undergraduate medical program and receive a bachelor's degree at the end of their 5 years of study ('new graduate'). These new graduates are eligible for standardized resident training, which lasts another 3 years. For the 5 + 3 track, after students complete their initial 5-year training (equivalent to that of the 5-year track), they attend a 3-year standardized resident training and are awarded a master's degree together with a standardized resident training certificate ('resident physicians') when completing the program. For the 8-year track, high-school graduates attend a broader training program that spans basic and clinical medicine as well as liberal arts, and receive the degree of MD (Doctor of Medicine/Medical Doctor) at the end of their 8 years of study. Like new graduates, MDs can take up additional standardized resident training lasting 2 to 3 years. The bachelor's degree prepares new graduates for a career in clinical medicine, if they so choose, or related professions. The goal of the 5 + 3 training program is to cultivate a pipeline of clinical physicians, while the 8-year program aims to incubate medical talents with more versatility.
From 2012 to 2014, Dr. Baozhi Sun, former Vice President of the China Medical University, joined efforts with a team of scholars to conduct a large-scale, cross-sectional survey among clinicians in 31 provinces and cities across the country. They constructed the 'Chinese Doctors' Common Competency Model', which consists of 76 indicators and covers 3 key dimensions of competency: knowledge, skills, and attitude (KSA). The model encompasses the following aspects of medicine: clinical skills and patient care, disease prevention and health promotion, information and management, medical knowledge and life-long learning, interpersonal communication, teamwork and scientific research, core values, and professionalism. However, the model developed by Sun et al. mainly targets senior physicians with more extensive clinical experience as practitioners.
Nevertheless, holistic systems to assess medical graduates' professional development as they progress through different phases of their career remain few and far between in China. What is also lacking is a keen appreciation of professional development as a continuous and dynamic process, as well as investigations to assess this process that are reproducible. Therefore, the objective of the current study is to create a holistic competency-based assessment system comprising 3 components: competency indicators suitable for clinicians in different phases of their career, a rating scale aligned with the progression of skill acquisition, and an analytical tool to measure competency evolution over time - one of the first such systems in China.

Method
A competency-based KSA assessment system was designed by drawing from the conceptual framework of Norcini, who espoused that an effective assessment system should include three segments: competency (defined by indicators), level of assessment (degree of mastery), and assessment of progression (skill acquisition through stages) [2,11,12]. The Dreyfus model - which holds that any skill acquisition spans 5 stages: novice, advanced beginner, competent, proficient, and expert [13] - was also consulted to create a more nuanced scheme for assessing the mastery level of competency. This study was approved by the Ethics Committee of the Shantou University Medical College (SUMC).

KSA-based competency indicators and a rating scale
Thirty-six indicators (Figure 1) were derived by combining and simplifying closely related indicators from the model created by Sun et al. [2] so that the scale would be applicable to a more diverse group of clinicians - that is, new graduates, resident physicians, and senior physicians - who were selected to represent 3 key milestones in a clinician's professional career. A more succinct scale also rendered the survey less cumbersome to administer and more enticing for respondents to complete, thereby allowing the collection of more meaningful data.
The questionnaire developed based on this scale includes two sections: basic information, and self-assessment of competencies. The self-assessment is based on a 5-point Likert scale defined as follows: '0' represents 'do not know'; '1' represents 'beginner' (having acquired cognitive understanding of the relevant basics); '2' represents 'application' (being able to practice or simulate under the guidance of others); '3' represents 'competent' (being able to practice independently in the real world according to standards); '4' represents 'proficient' (being able to practice independently and deliver top-quality outcomes); and '5' represents 'expert' (being able to serve as an example for peers and in an advisory capacity, as well as participate in the development of standards).
Respondents were asked to rate both their existing and expected levels of competencies.
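For illustration, the proficiency rating scale above can be encoded as a simple lookup table (a minimal sketch; the names `PROFICIENCY_LEVELS` and `label_for` are illustrative, and the label strings are paraphrased from the definitions in the text):

```python
# Proficiency rating scale used for self-assessment. A rating of 0
# ("do not know") is later treated as missing data during analysis.
PROFICIENCY_LEVELS = {
    0: "do not know",
    1: "beginner",     # cognitive understanding of the relevant basics
    2: "application",  # can practice or simulate under guidance
    3: "competent",    # practices independently according to standards
    4: "proficient",   # practices independently with top-quality outcomes
    5: "expert",       # serves as an example/advisor; helps set standards
}

def label_for(rating):
    """Map a raw questionnaire rating to its proficiency label."""
    return PROFICIENCY_LEVELS[rating]
```

Because respondents rate both existing and expected competency on this same scale, the two ratings are directly comparable in the gap analysis described below.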
The anonymous questionnaire was made available on the graduate survey platform (http://bysczdc.med.stu.edu.cn/) at the SUMC from June through December 2017. Respondents were informed that their answers would be kept strictly confidential and that they could withdraw from the survey at will. All participants completed and submitted the questionnaires electronically or on paper.
Questionnaires collected were excluded from analysis if they met any of the following criteria: from graduates who earned their degrees outside the 3 time points specified; from respondents who no longer worked in the field of clinical medicine; from those who populated the answers mechanically (e.g., filled each question with identical answers); or from respondents who submitted multiple questionnaires using the same Internet Protocol (IP) address (in this case, the last questionnaire submitted was treated as valid input and the rest were discarded). SPSS Statistics 21.0 for Windows (IBM Corp., Armonk, New York) was used to analyze reliability and validity. A competency rating of '0' was treated as missing data and substituted with the mean score ('mean imputation') [14]. The Cronbach's alpha value was used to evaluate internal consistency. A Kaiser-Meyer-Olkin (KMO) measure greater than 0.9 and a significance level of Bartlett's test of sphericity less than 0.05 would indicate that the data were suitable for exploratory factor analysis (EFA) [15]. Factors with eigenvalues greater than 1 and factor loadings greater than 0.45 would be extracted after orthogonal rotation with Kaiser normalization. If multiple factor loadings were greater than 0.45, the factor with the highest loading would be selected [16].
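The mean-imputation and internal-consistency steps above can be sketched as follows, assuming the survey responses are held in a respondents-by-indicators NumPy array (the function names are illustrative, not from the original SPSS analysis):

```python
import numpy as np

def impute_zeros_with_mean(scores):
    """Treat '0' ('do not know') ratings as missing data and replace
    them with the mean of the valid responses for that indicator."""
    scores = scores.astype(float).copy()
    for j in range(scores.shape[1]):
        col = scores[:, j]
        missing = col == 0
        if missing.any():
            col[missing] = col[~missing].mean()
    return scores

def cronbach_alpha(scores):
    """Cronbach's alpha (internal consistency) for an
    (n_respondents x n_indicators) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

In the study, indicators were retained when alpha exceeded 0.9; the KMO measure and Bartlett's test (available in specialized factor-analysis packages) were then applied before the EFA.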
For the confirmatory factor analysis, a separate random survey (using the same questionnaire) was fielded from May through August 2018 among graduates who enrolled in the clinical medicine department at the SUMC in 2013 ('new graduates'), 2010 ('resident physicians'), or 2007 ('senior physicians'). Confirmatory factor analysis using the software Amos 21.0 for Windows (IBM Corp., Armonk, New York) was carried out to test the fit of the scale. A reasonable fit of the scale would be determined based on the following: chi-square to degrees of freedom ratio (CMIN/DF) < 4; comparative fit index (CFI) > 0.9; incremental fit index (IFI) > 0.9; and root mean square error of approximation (RMSEA) ≤ 0.08 [17].
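The fit criteria above can be condensed into a small acceptance check (an illustrative helper with a hypothetical name, `scale_fit_is_reasonable`; not part of the Amos 21.0 workflow itself):

```python
def scale_fit_is_reasonable(cmin_df, cfi, ifi, rmsea):
    """Apply the fit thresholds used in the study:
    CMIN/DF < 4, CFI > 0.9, IFI > 0.9, RMSEA <= 0.08."""
    return cmin_df < 4 and cfi > 0.9 and ifi > 0.9 and rmsea <= 0.08
```

Applied to the values reported in the Results (CMIN/DF = 3.596, CFI = 0.905, IFI = 0.905, RMSEA = 0.080), all four criteria are satisfied.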

Gap analysis of competencies and perceived quality of medical education by graduates
A revised service quality model, SERVQUAL - which was originally designed for commercial applications to business services [17] - was employed for the competency gap analysis based on the same survey responses collected in 2018. The quality of medical education (as measured by the gap between the existing competency level and the expected level) for the i th indicator is represented by Q i = P i − E i, where P i indicates the perceived existing level of competency for the i th indicator, and E i, the expected level of competency [18]. The quality of medical education for each of the KSA domains is Q = (Σ Q i)/m, where m represents the number of indicators in the domain and the sum runs over those indicators. When m = 36, Q indicates the overall quality of medical education. The Kruskal-Wallis test was used to analyze the differences in perceived quality among the three groups of respondents.
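The gap computation and the cohort comparison described above can be sketched as follows (a minimal illustration assuming SciPy is available; the array shapes, sample values, and function names are hypothetical):

```python
import numpy as np
from scipy.stats import kruskal

def indicator_gaps(perceived, expected):
    """Per-respondent, per-indicator gap Q_i = P_i - E_i.
    Both inputs are (n_respondents x n_indicators) arrays."""
    return np.asarray(perceived, dtype=float) - np.asarray(expected, dtype=float)

def domain_quality(gaps, indicator_columns):
    """Mean gap Q over the m indicators belonging to one KSA domain
    (with all 36 columns, this is the overall quality of education)."""
    return gaps[:, indicator_columns].mean()

# Hypothetical mini-example: 2 respondents, 2 indicators.
gaps = indicator_gaps([[3, 4], [2, 3]], [[4, 5], [4, 4]])
q = domain_quality(gaps, [0, 1])  # mean of -1, -1, -2, -1 -> -1.25

# Kruskal-Wallis test on per-respondent overall gaps from 3 cohorts.
stat, p = kruskal([-2.0, -2.1, -1.9], [-1.0, -1.1, -0.9], [0.1, 0.0, -0.1])
```

A negative Q thus indicates that expectations outstrip perceived competency, which is the pattern observed for all 36 indicators in this study.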

Results

The KSA-based competency indicators
Reliability and validity. There were 226, 193, and 186 questionnaires collected from new graduates, resident physicians, and senior physicians, respectively, which were included according to the established criteria (Table 1). The Cronbach's alpha values (reliability coefficients) for each item in the questionnaire and for the questionnaire as a whole were all greater than 0.9. Therefore, all 36 core competency indicators were retained. The KMO values associated with the 3 groups of respondents were 0.967, 0.964, and 0.943, respectively. The p values of the Bartlett's sphericity test were less than 0.001. The indicators were thus suitable for factor analysis.
Based on the exploratory factor analysis, 3, 3, and 5 factors were extracted from the groups of new graduates, resident physicians, and senior physicians, respectively (Table 2). Three of the five factors extracted from senior physicians shared the same constructs and were combined into a single factor (i.e., 'knowledge'). The Cronbach's alpha values for all factors associated with each group were greater than 0.9, indicating high internal consistency. The factors extracted were analyzed further, and three domains emerged with which the competency indicators measured were aligned: knowledge (K), skills (S), and attitude (A).
Confirmatory factor analysis. There were 159, 126, and 122 questionnaires collected from the 3 cohorts of respondents, respectively, that were included according to the established criteria (Table 1). The reasonable fit of the scale was confirmed based on the following: CMIN/DF = 3.596; CFI = 0.905; IFI = 0.905; RMSEA = 0.080.

Gap analysis of competencies and perceived quality of medical education by graduates
As shown in Table 3

Discussion
Unlike previous research that focused on such parameters as tangibility, reliability, responsiveness, assurance of services as well as empathy of the faculty and staff [19,20], our study aimed to evaluate the evolution of medical graduates' core competencies in 3 domains: knowledge (K), skills (S), and attitude (A). We designed a competency-based assessment system that is holistic and implementable to examine how clinicians' competencies have evolved from when they were new medical graduates, through residency, to becoming seasoned practicing physicians. The gap analysis, another component of our system, yielded uniquely valuable insights about the quality of medical education as perceived by the participating graduates.
In Table 3, the negative Q values for all 36 competency indicators among the 3 cohorts of graduates suggest a higher expected level of competency than participants' perceived existing level. Based on the total Q values, the largest overall competency gap is seen among new graduates, followed by residents and senior physicians, in that order. In terms of domains, distinct gaps are found in the domains of skills (S) and knowledge (K) in all 3 cohorts. Hence, there appear to be cohort-specific and domain-specific contributors to these gaps, and targeted remedial measures will be needed to bridge them. For example, at the indicator level, the biggest gap among new graduates is associated with 'conducting emergency rescue', followed by 'formulating the treatment plan' - both indicators fall within the domain of skills (S). To bridge the gap, additional class hours - as part of the clinical skill training series at the SUMC - can be devoted to scenario-based simulation training. At the domain level, the biggest gap is found in the domain of skills (S) among new graduates. As required by laws and clinical practice standards in China, all medical activities shall be conducted under the supervision of senior physicians to ensure the safety of patients and the learning environment of medical students. New graduates can thus only reach the level where they can 'apply' the knowledge learned, but cannot reach the 'competent' level where they follow standard guidelines and practice independently. The 'competent' level of competency is now a requirement for standardized resident training in China. Therefore, there is a more urgent need to ramp up new graduates' clinical skills, so they can be better prepared as they transition to the residency phase, where more emphasis is placed on clinical practice. Methods such as simulation techniques, standardized patients, and enhanced clinical exposure can all help elevate new graduates' clinical skills [21,22].
Different levels of expectation were also found between new graduates and residents/senior physicians. While new graduates hoped to reach the level of 'competent' (competency level = 3) for the great majority of indicators when they graduated, resident and senior physicians aspired to become 'proficient' (competency level = 4) for more indicators. This difference is not a total surprise, given the different professional development phases that these graduates find themselves in. However, upon closer examination, the expectation of 'being proficient' appeared predominantly associated with the domain of skills (S) among residents (9 out of 10 skill-related indicators) and senior physicians (8 out of 10 skill-related indicators), and, to a lesser degree, among new graduates (2 out of 10 skill-related indicators). Interestingly, this strong association was not seen with the domains of attitude (A) or knowledge (K). In other words, the study participants did not demonstrate a similar degree of expectation for attitude- and knowledge-related competencies. This gravitation toward skill-defined competencies may reflect a paradigmatic orientation among medical graduates from the SUMC as a whole - one that places higher emphasis on 'skill acquisition' than on the development of competencies in the domains of attitude and knowledge. This finding highlights the need to drive home not only the ultimate goal of nurturing well-rounded health-care professionals but also the importance of operationalizing this aim, so that professional expectations can be raised accordingly and training courses/programs fit to deliver on this goal can be created and propagated. Fulfilling this objective also underscores the value of a multicomponent assessment system, as proposed by this study, to measure the multiple dimensions of medical competency.

Implications
The competency-based assessment system that we propose can be completed not only by 'receivers' of medical education/training (e.g., medical graduates, as in the current study), but also by 'administrators' (e.g., instructors, supervisors) and 'beneficiaries' (e.g., patients) of this education/training (although some modification of the indicators may be needed for the survey to be more meaningful to the latter group of stakeholders). This broad set of potential applications can facilitate the creation of a 360-degree survey of clinicians' core competencies, which echoes the systems-oriented characterization of the competencies that physicians need to demonstrate in order to serve the health-care needs of a society - a view elucidated by Epstein and Hundert [1,2] and referenced in the Introduction of this report.
Unlike assessment systems that simply rate clinicians' competency level at particular time points, the gap analysis incorporated into our system - which compares existing with expected levels of competencies - empowers the receivers of medical education by acknowledging the value of their feedback. Insights culled from this group of stakeholders can inform policy-makers and administrators of medical education as these decision-makers endeavor to instigate on-target improvements to bridge the pedagogical gaps. On the other hand, gap analysis facilitates the establishment of a personal benchmark for clinicians, which allows them to take stock of the progress they have made and titrate their goals and expectations as they continue evolving professionally [23-25]. Additionally, the competency-based KSA scale proposed in our study can serve as a reference to guide the reform of medical licensing examinations. In the past, these examinations focused on knowledge. Today, more emphasis is placed on physicians' professionalism and clinical skills. Pursuant to additional investigations of feasibility, the 36 indicators contained in the scale can be developed into an expanded set of criteria to assist the redesign of medical licensing examinations.

Limitations of the study
As constrained by time and resources, assessments rated by the 3 cohorts of graduates from the SUMC - new graduates, resident physicians, and senior physicians - were used as a proxy to gauge clinicians' professional development over time. Hence, the development trends found in this research may diverge from those in a longitudinal study that monitors the competency evolution of a single group of graduates. Secondly, the analysis of competency gaps in the study was based on participants' self-assessment, which might not fully align with assessments based on more objective measures or furnished by other key stakeholders such as patients, supervising physicians, peers, and nurses. Interpretations and extrapolations of the study findings thus need to be pursued with caution. Last but not least, the KSA-based assessment system proposed in our study was tested only among graduates from one medical university, and needs to be further validated at additional medical institutes and in different parts of the country.

Conclusion
A competency-based assessment system is proposed to evaluate clinicians' competency development in 3 domains: knowledge (K), skills (S), and attitude (A). The system consists of 36 competency indicators, a rating scale of 5 proficiency levels, and a gap analysis to measure competency evolution through 3 key milestones in a clinician's professional career: new graduate, resident physician, and senior physician. The competency gaps identified can provide evidence-based guidance for clinicians' own continuous development as well as future medical curriculum improvements.

Acknowledgements

The authors are grateful to the late Dr. Ting Long (Shantou University Medical College), who was involved in the study design as well as the creation of the questionnaire and 36 core competency indicators. The authors also appreciate the guidance and suggestions furnished by Prof. Junhui Bian (Former Dean of the Shantou University Medical College) and Mianhua Yang (Shantou University Medical College).

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This work was sponsored, guided, and assisted (for its implementation) by the National Medical Examination Center, the Ministry of Education Humanities and Social

Data availability
Data are available from the corresponding author upon reasonable request.