Which professional skills do students learn in engineering team-based projects?

ABSTRACT Engineering accreditation bodies express a strong consensus that in addition to technical and scientific skills, engineering education also needs to promote the development of professional skills. In general, team-based projects are considered to be valuable approaches to develop such skills and have been extensively added to engineering curricula. Yet, it remains unclear which skills and to what extent students learn from these interventions. The challenge of assessing the development of those skills is an important factor in this gap. In this paper, we used a standardised self-reporting questionnaire to evaluate the development of students' self-efficacy beliefs through in-course and capstone projects. Results suggest that students only marginally develop these skills when they are not explicitly addressed as part of the project, showing ways to more effectively support student learning of professional skills. The questionnaire also proved to be an effective and scalable way to assess large classes.


Introduction
Most national and cross-national engineering accreditation bodies (e.g. ABET in the United States, ENAEE at the European level and the IEA at the international level) have for years included 'professional skills' in their requirements for accrediting engineering programs (Shuman, Besterfield-Sacre, and McGourty, 2005; Winberg et al., 2020). Although there is no clear consensus on the exact set of skills covered by the term 'professional skills' (also called 'soft', 'transferable', 'transversal' or even '21st Century' skills), there seems to be overall agreement that engineering education should address not only science and engineering but also the social, ethical, and organisational aspects of engineers' practices and responsibilities (Kolmos and Holgaard, 2019; Winberg et al., 2020). As an example, the 2013 IEA 'Graduate Attributes and Professional Competencies' report highlights skills related to societal responsibility, ethics, teamwork, communication, and project management (IEA, 2013). Communication, teamwork, and organisational abilities also feature prominently in sources examining labour-market needs, such as employer and alumni surveys (Carter, 2011; Itani and Srour, 2016; Craps et al., 2017), or in occupational databases such as O*NET in the US, which provide information on the skills considered important for current engineering positions. With growing concerns regarding the automation of jobs and the rise of outsourcing and globalisation, studies have attempted to predict the evolution of those needs. Frey and Osborne (2017) investigated the susceptibility of jobs to computerisation and found that jobs requiring more interpersonal and interprofessional skills, such as perceptiveness, negotiation, and persuasion, were less likely to be affected.
In response, the last decades have seen important institutional efforts to introduce these skills at different levels of the curriculum. These efforts have taken various forms, from the creation of specialised courses (e.g. Mohan et al., 2010) to the complete redesign of curricula (e.g. Moore and Voltmer, 2003; Graham, 2018), including the introduction of cornerstone and capstone courses with interdisciplinary team-based projects (e.g. Howe, Rosenbauer, and Poulos, 2017; Torres, Sousa, and Torres, 2018; Grimheden, 2007), problem-based learning (PBL) and the conceive-design-implement-operate (CDIO) approach (e.g. Edström and Kolmos, 2014; Crawley et al., 2014). The pedagogical approaches used in these interventions vary widely (Winberg et al., 2020), with an overall emphasis, however, on active learning and 'learning by doing' strategies, particularly those involving projects.
Despite these efforts, multiple reports suggest that a gap between employers' expectations and the reality of engineering graduates' skill-sets in the 'professional' domain still persists (Ramadi, Ramadi, and Nasr, 2016; Itani and Srour, 2016; Craps et al., 2017; Kolmos and Holgaard, 2019). This raises a number of questions about the professional skills that students actually learn from such interventions. Previous work in the domain has highlighted that teaching professional skills is difficult, and that assessing these skills is even more challenging (Shuman, Besterfield-Sacre, and McGourty, 2005; Winberg et al., 2020). In this paper, we look at both issues by focusing on the following research questions:
- Which professional skills do students learn from projects?
- How can we assess those skills and their evolution while taking into account the scale of large engineering classes?
- Which characteristics of projects lead students to develop professional skills?
We present the results of a study in the context of Bachelor (BA) and Master (MA) courses involving team-based engineering design projects in mechanical engineering at a European technical university. Using a previously validated instrument (Laperrouza and Tormey, 2019), we assessed students' self-efficacy beliefs regarding their skills in five different areas: planning, risk assessment, ethical sensitivity, communication, and interprofessional competence. This instrument allowed us to draw comparisons across different levels of study and career stages, as well as to assess the progress students made over the course of one specific project. Further, an analysis of students' answers to open-ended questions in a feedback survey shows which non-technical skills students feel they acquired from the projects and which ones they found most challenging. Finally, comparing the formats of the courses led to the identification of project characteristics that might improve students' development of professional skills.
The remainder of this paper first reviews the existing literature in the field, then presents the study context and the methodology used, and finally reports and discusses the results before concluding.

Related work
As far back as 2005, while reflecting on the importance of the professional skills included in the ABET 'Engineering Criteria 2000', Shuman and his colleagues acknowledged the range of methods applicable to teaching professional skills (Shuman, Besterfield-Sacre, and McGourty, 2005). These methods included decision-making exercises, project management or business simulations, project-driven classes, case studies, as well as embedded modules. More recently, Winberg et al. (2020) have attempted to classify these approaches in a systematic review of engineering employability studies. While reporting important variations in what the reviewed studies called 'professional skills', the authors defend the idea that professional skills cannot be considered as generic but are instead linked to disciplinary practices. Therefore, they argue that engineering knowledge and professional skills should be better integrated.

Teaching professional skills through projects
As Prince and Felder (2006) highlight in their review, projects used in teaching may vary significantly in scope and scale. From in-course to semester-long to BA or MA projects, the complexity of the problem, the duration, and the size of the student group vary considerably. There are also variations in the ways in which projects are embedded in engineering curricula. This can be conceptualised as a continuum ranging from projects embedded in traditional engineering programs in parallel to traditional courses, to projects seen as central to a re-imagining of the whole engineering curriculum. Some examples of the latter are the conceive-design-implement-operate (CDIO) approach and some variants of project/problem-based learning (PBL) (Savery, 2006; Kolmos, de Graaff, and Du, 2009; Crawley et al., 2014; Edström and Kolmos, 2014; Chen, Kolmos, and Du, 2021). In traditional program designs, projects frequently come at or near the end of a program through 'capstone projects' or theses (Crawley et al., 2014), as a way to integrate learning and prepare students for the professional world. They are also increasingly being introduced earlier in the curriculum ('cornerstone' projects) as a way to scaffold students' understanding of real-life engineering practice (Grimheden, 2007). Although 'whole curriculum approaches' are influential within the literature and in the engineering education research community, they are perhaps less widely practiced than more traditional curricular approaches. Chen, Kolmos, and Du (2021), for example, found that two-thirds of the reviewed studies on project-based learning focused on projects within courses (rather than across courses or across whole curricula). Similarly, the large-scale cross-institutional survey of capstone design courses in the United States by Howe, Rosenbauer, and Poulos (2017) shows that most capstone projects are run with traditional lectures in parallel or beforehand.
There is evidence that the inclusion of projects has a positive impact on some aspects of student learning. The presence of capstone-type projects has been found to generally improve employer satisfaction and employment ratios, possibly indicating an indirect effect on professional skills, as well as student motivation and retention (Howe, 2010). Studies within the PBL framework also point to an impact on some types of learning: Gijbels et al. (2005) and Newman (2004), for example, found that while problem-based learning had little impact on the learning of knowledge and facts, it did have a notable and positive impact on the understanding of principles and on the ability to apply what is learned. Hattie's (2009) review of meta-analyses of learning provides a useful benchmark for evaluating such impacts: he argues that effect sizes (i.e. Cohen's d statistic) of less than 0.15 should be regarded as indicating effectively an absence of teaching, effects between 0.15 and 0.4 reflect normal teaching effects, and effects greater than 0.4 should be regarded as being in the desired zone. He noted that few effects were greater than d=0.7. The effect of PBL on the ability to apply knowledge has been estimated at d=0.4, and on the ability to use principles at d=0.75 (Gijbels et al., 2005).
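For readers less familiar with the effect-size statistic used in these benchmarks, Cohen's d expresses the difference between two group means in units of their pooled standard deviation. A common pooled-variance form (one of several variants in use) is:

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

so that d = 0.4, for instance, means the two group means differ by 0.4 pooled standard deviations.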
This highlights that one of the challenges with project-based learning is the validity of assessment: that is, are we assessing the things that we actually want to teach through project-based learning? There are numerous lists of professional or transversal skills to be learned by engineers. Accreditation bodies, for example, typically identify skills including the ability to scope, plan, and design solutions to complex and ill-defined problems, to communicate, to work in teams, and to apply ethical reflection in one's work (see, for example, ENAEE, 2017). In their review of assessment tools for professional skills, Cruz, Saunders-Smits, and Groen (2020) focused on a similar set of skills: communication, lifelong learning, innovation/creativity and teamwork. Given that engineers face complex problems that often cannot be solved by any one discipline, interdisciplinary and interprofessional work has also been increasingly identified as an important professional competence (Lattuca et al., 2017; Klaassen, 2018).

Conditions for learning professional skills during projects
One specific difficulty in projects comes from the tension between two different outcomes: the final product or solution on the one hand, and student learning on the other. As Shuman, Besterfield-Sacre, and McGourty (2005) highlight, professional skills have a strong 'process' component, meaning that students need to learn about the processes behind communication, teamwork, project management, etc. Projects should therefore be an opportunity for students to focus their attention on such processes. This has not only been shown to be inherently difficult when 'learning from doing' (Barron et al., 1998; Hmelo, Holton, and Kolodner, 2000) but can become even more challenging when projects are expected to lead to usable results, such as in industry-sponsored projects. Howe, Rosenbauer, and Poulos (2017) show that instructors are conscious of this issue, since more than two thirds of respondents reported finding the process equally or more important than the product. In the same survey, however, the final written report, the final oral presentation, and the final product play the biggest role in evaluating students' work on projects. Whether such sources allow assessing how much students have developed process skills is a key question that we will discuss in the next section. Another question is the type of support that can help students focus their attention on the process during projects, thus fostering the development of professional skills.
In their review, Shuman, Besterfield-Sacre, and McGourty (2005) argue that a condition for the effective development of process skills is that students should not be 'thrown into team projects without support', but they do not provide details about the instructional techniques behind such support. The same holds for the review by Winberg et al. (2020), which underlines the importance of using adequate pedagogical approaches when integrating professional skills with engineering knowledge and skills; unfortunately, the authors do not specifically review the associated instructional techniques. Overall, we were able to find much advice but very few studies looking at evidence on how professional skills development can best be supported during student projects. One exception is a large-scale study by Cabrera, Colbeck, and Terenzini (2001), which examined how teaching practices in team-based and hands-on design projects relate to students' gains in professional competencies. Using self-reported measures of both teaching practices and professional competencies as perceived by undergraduate engineering students, this multi-institutional study showed that the practices used in class contributed more to the learning gains perceived by students than their background, demographics, or motivation. Interaction with and feedback from instructors, as well as collaborative learning, were the two instructional practices shown to predict gains in professional competencies, defined in this case as group skills, problem-solving skills and professional awareness. While notable for its scale, this study dates back almost twenty years and uses students as the only source of information, in a post-only survey. In a more recent but smaller qualitative study, Costa and her colleagues used focus groups to investigate the professional skills students considered they had developed during an interdisciplinary engineering project.
The teaching techniques involved in the project were not the focus of the study, but the authors do summarise a set of best teaching practices and conclude by emphasising the critical importance of supervising and guiding students (Costa et al., 2019).
It was noted above that teamwork is an important engineering skill. There is evidence that the relatively homogeneous environment of engineering education can make student teams an unpleasant environment for those who are not part of the majority group (e.g. Aeby et al., 2019). The importance of guidance has been explored in studies focusing more specifically on teamwork skills (Oakley et al., 2004; Planas-Lladó et al., 2020). Because group work has the potential to lead to challenging experiences and outcomes for students (e.g. Isaac and Tormey, 2015), the explicit steps taken by instructors have been shown to be essential to ensure learning; reflection and self-assessment, notably, were shown to be effective techniques. Other studies focusing on engineering design projects, though not necessarily on professional skills, also underline the importance of guidance and supervision for student learning. Feedback, in particular, has been identified as a key factor in helping students learn by making them reflect on the process (Barron et al., 1998; Hmelo, Holton, and Kolodner, 2000).

The challenge of assessing professional skills
As introduced above, appropriate assessment methods are key to determining whether students do or do not develop professional skills in engineering design projects. However, several studies report that assessing professional skills is a challenge. Shuman, Besterfield-Sacre, and McGourty (2005) state that efforts have been made to develop assessment tools for these skills but that 'the literature remains sparse with respect to robust, effective measures for these outcomes'. Some of the issues they identify are the lack of consensus on the definition of the skills, the difficulty of identifying when the skills should be assessed if learning is distributed over the curriculum, the very nature of the skills to be assessed, and the cost in time and effort of assessing those skills in large classes. These difficulties continue to be acknowledged in other studies, such as Cabrera, Colbeck, and Terenzini (2001), Planas-Lladó et al. (2020), Hernandez-Linares et al. (2015) and Reynders et al. (2020), suggesting that this is a lingering concern.
Although there is a diversity of assessment methods (Howe, Rosenbauer, and Poulos, 2017; Chen, Kolmos, and Du, 2021), the most widely used methods for assessing projects are final reports, presentations, and/or products. It is doubtful that these allow valid assessment of students' professional skills. Project management skills such as planning illustrate this issue well: besides splitting the work into defined tasks, estimating the time necessary to accomplish them, and organising them along a timeline, appropriate planning requires continuous adjustment, taking into account the evolution of the work as well as imponderables. A final project report alone is not likely to provide reliable information on the process through which students have made their plan evolve over time. More generally, the nature of professional skills makes it challenging to choose the type of learning traces to collect, when to collect them, and the criteria and scales with which to evaluate them. In the context of projects, the adequate alignment between assessment methods, learning objectives and teaching methods is an additional difficulty (Hernandez-Linares et al., 2015; Winberg et al., 2020). As a result, various other assessment formats have been explored in the literature, among which multi-source feedback (mixing peer, self-, and instructor feedback [McGourty, 2000]), portfolios, often in combination with rubrics (Winberg et al., 2020; Reynders et al., 2020), and self-assessment questionnaires (Hernandez-Linares et al., 2015). Cruz, Saunders-Smits, and Groen (2020) have recently mapped the range of assessment tools that can be used to assess some of the professional skills at issue in PBL: communication, lifelong learning, innovation/creativity and teamwork. They note that the kind of assessment used depends in part on the purpose of the assessment: where the goal is to assess students' learning, rubrics were the preferred assessment tool.
However, where the goal is to assess the effectiveness of the course, questionnaires were commonly used. Such questionnaires typically rely on student self-reports of their own beliefs about their skills or their attitudes. Although Cruz, Saunders-Smits, and Groen (2020) mention in passing their belief that self-report questionnaires are open to bias, this question has been subject to more detailed analysis in both the psychological and statistical domains. From a psychometric point of view, it has been speculated that self-reports are biased because those with low skills in a domain are poor at assessing their own skill level in that domain (known as the Dunning-Kruger effect). More recent explorations suggest that this apparent bias is actually an artefact of the experiments designed to test it (Nuhfer et al., 2016; Nuhfer et al., 2017). They suggest that while self-reports are susceptible to measurement error (as with any measure), these measurement errors are not biased and are already taken into account in standard measures of reliability. From a psychological perspective, self-report measures are widely used in areas such as personality, metacognition, and self-efficacy. As Paulhus and Vazire (2007) note, self-reports are popular in some domains of psychology not just because they are cheap and obtain a good response rate, but also because they can provide access to information on the internal workings of a person's psychological processes that would not otherwise be available, and because they assess how a person perceives something, and this, rather than being simply a source of bias, is actually a meaningful thing to measure in itself. This is particularly relevant in the case of self-efficacy beliefs, defined as a person's judgments of their own capabilities to undertake the actions needed to attain a designated goal.
Students with higher self-efficacy beliefs have been found to choose more challenging tasks in the domain, to put in greater effort, to persist longer in the face of challenges, and to suffer less anxiety and stress. Self-efficacy beliefs also show a moderate to strong correlation with performance (Zimmerman, 2000). Overall then, there are good grounds for saying that self-report questionnaires may be a valuable assessment tool, if they are psychometrically assessed as valid and reliable (see also Cabrera, Colbeck, and Terenzini, 2001). Where self-assessment tools are valid and reliable, they can also have other benefits. Chen, Kolmos, and Du (2021) have highlighted that the challenges of assessing such learning in PBL are addressed both through more traditional assessment modes (such as quizzes and exams) and less traditional ones (self-assessment, peer review, portfolios, etc.). This, in turn, raises problems for comparing learning from different PBL approaches: if all assessments are different (assessing different learning goals and with different psychometric characteristics), it is hard to draw meaningful conclusions about the impact of different ways of organising PBL on student learning. The use of standard instruments makes it possible to compare the impact of one type of pedagogy to another, which can provide valuable information to teachers seeking to enact pedagogies with demonstrated effectiveness. It is worth noting that self-assessment questionnaires also have pedagogical advantages. In particular, responding to self-assessment questions can trigger students' self-reflection and draw their attention to what they learn, a key to 'learning from doing' (Barron et al., 1998; Hmelo, Holton, and Kolodner, 2000). Previous studies have shown that such tools can enhance learning and help students develop the self-monitoring habits that are essential for the self-regulation of learning (Schmitz and Perels, 2011).
Panadero, Klug, and Järvelä (2016) even propose that self-assessment tools originally designed for measurement could be considered as pedagogical interventions because of these effects.
Overall then, it is clear that professional skills are an important component of engineering education, and that these include the kinds of skills needed when working on complex and open-ended projects: scoping, planning, and designing solutions to complex problems, integrating ethical considerations into one's work, communicating effectively in teams, and working within interdisciplinary and interprofessional contexts. It is also clear that in engineering education, these skills are often targeted through projects, typically projects within courses. These seem to be organised in a wide variety of ways, even though there is evidence that coaching, direction and feedback are all likely to be important for student learning. The assessment of different ways of organising these projects in courses is made more complex by the use of a wide range of assessment instruments, which may not actually address the competencies in question. This paper aims to contribute to this literature by exploring the differential impact of different approaches to organising projects within a traditional engineering curriculum, using a psychometrically validated and reliable standardised assessment.

Context of the study
This study was conducted on three courses in mechanical engineering at a major European technical university: two mandatory Bachelor courses and one optional Master course, all taught by the same teacher. The three courses feature projects representative of the types of projects integrated in traditional curricula, as identified by Howe, Rosenbauer, and Poulos (2017).
The two BA courses run for 14 weeks and cover mechanical engineering fundamentals ('Mechanical Systems' and 'Dynamics of Mechanical Systems'). These are ex cathedra courses with exercise sessions. In addition, students work in groups of 4 or 5 on a structured in-course engineering project. Work on the projects starts in week 10 and ends in week 14 with the submission of a final report (one per group). The evaluation of their work is based solely on technical contributions. Each week, the groups can reach out to teaching assistants (TAs) with questions. The project contributes 20% of the students' final grade and is graded on the final report (one grade per group). The workload for the project is estimated at 1 ECTS (28 hours) per student. Both courses are built around the same model, and they were grouped in the study.
The MA course is a capstone-like design project course called 'Applied Mechanical Design'. The students work in groups of 3 or 4 for the whole course (17 weeks) on a common open-ended product design task, and each group needs to come up with its own design. The workload of the project is estimated at 4 ECTS (112 h) per student. For support, 6 theoretical classes are given in parallel, introducing and discussing the design process (3 classes), project management, and fostering creativity. In addition, a single TA is assigned to each group and weekly follow-ups are scheduled. The students are graded through 3 written reports (weeks 3, 10 and 17) and a final oral presentation, weighted 10%, 40%, 40% and 10% respectively. Through this repeated assessment, the teaching team gets multiple viewpoints on similar topics, which enables evaluation of the progress made and the process applied. Besides the final concept, topics that are repeatedly evaluated include project planning, risk assessment, and the design process.
Through these courses and the use of a pre-post study design, the effects of in-course and capstone projects on professional skills can be investigated. In detail, the in-course projects considered here follow primarily technical objectives and span a short period of time (5 weeks), while the capstone project studied spans a whole semester (17 weeks, including 2 weeks of break) and focuses on process skills. In both cases, students are put into situations where they need to practice various professional skills in order to handle the work, but the MA course, in particular, includes specific lectures on planning and risk assessment, and these topics are evaluated through several reports.
In order to better interpret the student data, students' responses have been benchmarked against those of experienced industry professionals.

Participants
There were 306 participants in the BA courses of the 2019 spring semester and 33 participants in the fall 2018 MA course. All students were offered the same course material and support independently of their participation in the study. Their informed consent for data collection and research usage was collected through a third-party body unrelated to the teaching staff, and their identities were transmitted to the research staff only after the course was completed, in full agreement with the protocol approved by the university's ethics committee.
Cross-referencing the collected consents and questionnaires, 168 pre- and 47 post-questionnaires from BA students and 32 pre- and 29 post-questionnaires from MA students were retained. In addition, 253 anonymous student feedback forms were collected from the BA students. While the low number of respondents to the post-questionnaire could imply biases in the data, comparing the pre-questionnaire answers of students who answered the post-questionnaire against those who did not returned no statistical difference (two-sided independent t-test, p=0.27). As such, it seems reasonable to hypothesise that this sub-group is representative of the wider class.
Answers from 51 professionals to a single questionnaire were collected through anonymous links. The links were sent by email to professionals working in engineering-related activities at companies collaborating with the university through research projects, between Oct. 2019 and Jan. 2020. Based on the reported years of experience, professionals with less than 4 years of experience were discarded (11 answers) to retain only experienced respondents.
The breakdown of participants is summarised in Table 1.

Instruments
The Interprofessional Project Management Questionnaire (IPMQ) was used as the main quantitative instrument for this research. The IPMQ is a 24-item questionnaire, available in two languages, designed to assess one's self-efficacy beliefs in five domains: (1) planning, (2) risk assessment, (3) ethical sensitivity, (4) communication, and (5) interprofessional competence (see Table A1). Each item is rated on a 5-point Likert scale. Processing the IPMQ involves computing per-factor scores (the average over the items related to the factor) and a total score (the average over all items). After completing the questionnaire, the students received a feedback sheet summarising their scores, with instructions on how to interpret them and tips for further improving their skills. As such, the IPMQ can be considered an educational intervention (Schmitz and Perels, 2011). In addition to the IPMQ, anonymous student feedback was collected from BA students. They were asked to report:
- 'The 3 most important non-technical things I learned while carrying out the team project are' (open answer)
- 'The 2 biggest challenges I encountered during the realisation of the team project:' (open answer)
- 'For the [first/second] of these challenges, I learned the skills to better manage a similar problem in the future' (two Likert questions associated with the previous question)
These questions are used in feedback forms throughout the university for evaluating courses with team projects and are intentionally short to maximise the response rate. In our context, they give additional insight into the validity and sensitivity of the IPMQ.
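The scoring rule described above (per-factor averages plus a total average over all 24 items) can be sketched as follows. Note that the item-to-factor assignment used here is purely illustrative; the actual assignment of the 24 IPMQ items to the five factors is the one given in Table A1, not the grouping assumed below.

```python
# Sketch of IPMQ scoring: per-factor means and a total mean over all items.
# The item-to-factor mapping below is HYPOTHETICAL (for illustration only);
# the real 24-item assignment is defined in the paper's Table A1.
import numpy as np

FACTORS = {
    "planning": [0, 1, 2, 3, 4],
    "risk_assessment": [5, 6, 7, 8],
    "ethical_sensitivity": [9, 10, 11, 12, 13],
    "communication": [14, 15, 16, 17, 18],
    "interprofessional": [19, 20, 21, 22, 23],
}

def ipmq_scores(answers):
    """answers: sequence of 24 Likert responses (1-5).
    Returns a dict of per-factor scores plus the total score."""
    a = np.asarray(answers, dtype=float)
    assert a.shape == (24,), "the IPMQ has 24 items"
    assert ((a >= 1) & (a <= 5)).all(), "responses are on a 5-point scale"
    scores = {name: a[idx].mean() for name, idx in FACTORS.items()}
    scores["total"] = a.mean()  # total score = average over all 24 items
    return scores
```

For example, a student answering 5 on every planning item and 3 everywhere else would get a planning score of 5.0 and a correspondingly lower total.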

Research design and analysis
The students were asked to complete the IPMQ twice in a pre-post design. BA students could fill out the pre-questionnaire from week 7 until week 10 (the start of the project). The post-questionnaire opened after week 14 (end of the project and of the semester's courses) and stayed open for about five weeks, up until the exam of the respective course. Additional questions were added to the questionnaire asking students to report their prior experience (pre) or gained experience (post): the number of projects they had worked on, the number of project management courses followed, and the number of risk assessment courses followed. MA students were asked to fill out the pre-questionnaire between weeks 3 and 5 (the project started in week 1). The post-questionnaire opened after the end of the project in week 17 and stayed open for two weeks. For the pre/post analysis, only the answers of students who responded to both questionnaires were considered. Using the single questionnaire filled out by professionals, a cross-sectional study comparing students' pre-scores against professionals was performed. Since only professionals with at least four years of experience were included, we expect them to have a certain level of expertise in the professional skills considered, associated with high self-efficacy beliefs (Dreyfus, 2004).
The statistical difference between two sets of answers is evaluated using independent (between population) or dependent (within population) t-tests. T-tests have been shown to be appropriate and robust for data from Likert scales even with moderate violation of normality (Winter and Dodou, 2010). When reporting the significance level of statistical tests, the APA star notation is used: * when p<0.05, ** when p<0.01 and *** when p<0.001.
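The tests described above can be sketched as follows. The data here are simulated (all sample sizes, means, and variable names are assumptions for illustration), using scipy for the t-tests.

```python
# Sketch of the statistical analysis: independent t-tests between
# populations, paired t-tests within a population (pre vs. post), and
# the APA star notation for significance levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(3.4, 0.5, size=47)            # e.g. BA pre-scores
post = pre + rng.normal(0.2, 0.4, size=47)     # matched post-scores
professionals = rng.normal(3.9, 0.5, size=30)  # professionals' scores

def stars(p: float) -> str:
    """APA star notation: * p<0.05, ** p<0.01, *** p<0.001."""
    return "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else ""

# Between populations (cross-sectional comparison):
t_ind, p_ind = stats.ttest_ind(professionals, pre)
# Within population (pre/post), one-sided: post expected to be higher:
t_rel, p_rel = stats.ttest_rel(post, pre, alternative="greater")
print(f"independent: t={t_ind:.2f}, p={p_ind:.4f} {stars(p_ind)}")
print(f"paired:      t={t_rel:.2f}, p={p_rel:.4f} {stars(p_rel)}")
```

In the study, the paired variant is applied per factor to the subset of students who answered both questionnaires.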
The open-ended survey questions were coded in two steps. In the first step, an inductive approach was used: a codebook was created in two iterations based on the students' answers only (the answers from the two open questions were pooled). The orthogonality of the resulting categories was checked and the number of categories reduced. In the second step, the resulting categories were compared to the IPMQ factors and aligned where meaningful, to allow for comparison.

Cross-sectional view of professional skills
As a first step, the evolution of IPMQ scores from student levels to professionals is analysed. For this analysis, only the pre-scores of the students are considered. Figure 1 is a box plot of the reported scores by population and factor, and Table 2 reports the mean and standard deviation values.
Looking at the average scores per factor, there are differences between students and professionals in which factors they score higher or lower on. For students, risk assessment has the lowest score, while communication has the highest. For professionals, interprofessional competence has the highest score and ethical sensitivity the lowest.
More than the ranking of factors, it is the changes within factors from Bachelor students to professionals that are of interest. On the total IPMQ score, the mean value gradually increases, suggesting a progression from the first years of engineering education to professional practice. A similar trend is visible on planning, risk assessment, and interprofessional competence. Further, the lowest individual scores are found among Bachelor students.

Figure 1. Cross-sectional comparison of pre-scores of students and professionals per factor and of the total score of the IPMQ. The 1.5 interquartile range (IQR) convention is used for the whiskers and the outlier identification. In addition, the mean value of the distributions is shown with a triangle. Statistically significant differences between populations within factors are shown using the star notation, see Table 3.
In order to confirm this trend, a statistical pairwise comparison between populations within factors is conducted; its results are reported in Table 3. This analysis confirms that there is a statistically significant increase between Bachelor students and professionals on planning (p<0.05), risk assessment (p<0.001), interprofessional competence (p<0.01), and on the total IPMQ score (p<0.01). There is also a significant increase between Master students and professionals on planning (p<0.05) and risk assessment (p<0.05). The difference between Bachelor and Master students is less clear. Statistically, they differ only on ethical sensitivity (p<0.05).

Evaluation of an in-course project
Focusing first on in-course projects, their effect on the development of professional skills is assessed by a pre-post comparison of IPMQ scores and by an analysis of course evaluation survey data. Figure 2 shows the pre/post score distributions for the BA courses. The differences within factors between pre and post have been statistically assessed; the analysis is summarised in Table 4, along with the mean and standard deviation for each factor.
The results show a positive trend on the total IPMQ score and a significant increase over the course of the project on interprofessional competence (p<0.05) and communication (p<0.05). However, with Cohen's d of 0.379 and 0.371, respectively, the effect sizes lie slightly below average for educational interventions. The other factors show no statistical difference.
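For reference, a paired effect size can be computed as the mean pre-to-post difference divided by the standard deviation of the differences; this is one common convention for Cohen's d in pre-post designs (the paper does not state its exact formula), and the numbers below are made up for illustration.

```python
# Hedged sketch of the effect-size computation for a pre/post design:
# Cohen's d as mean(differences) / stdev(differences).
from statistics import mean, stdev

def cohens_d(pre: list[float], post: list[float]) -> float:
    """Paired Cohen's d: mean difference over SD of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / stdev(diffs)

# Illustrative scores for five (hypothetical) students:
pre = [3.2, 3.6, 3.0, 3.8, 3.4]
post = [3.6, 3.9, 3.4, 4.0, 3.5]
print(round(cohens_d(pre, post), 3))
```

Against the commonly used benchmark of d=0.4 for an average educational intervention, the observed values of 0.379 and 0.371 sit just below the threshold.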
While the primary objective of in-course projects is the development of technical skills, these projects are considered an important contribution to the development of professional skills. Yet, the results suggest little or no impact of these projects on the learning of professional skills, even though they reflect the combined effect of the several projects students participated in during the same time frame (median 2, IQR = 1-4, see Table 5). During the same period, the students also reported that they had no courses addressing project management or risk assessment. As a consequence, the lack of explicit instruction and guidance is hypothesised to be an important reason for the limited reported learning.
In complement to the IPMQ, the open-ended questions from the anonymous feedback surveys were processed. Since the survey is anonymous, the answers cannot be matched with the IPMQ, but they nevertheless offer additional insights into similar topics. The coded items reported by students in their top 3 of important non-technical things they learned are presented in Table 6, sorted by descending number of occurrences. Among the most commonly reported topics are communication (49), organisation and coordination (46), time management (41) and task distribution (41). Although less frequent (29), team-up challenges are also reported, in line with existing literature on the difficulties of students in team projects. With communication topics being reported first, the survey data are consistent with the significant increase and medium effect size found with the IPMQ on the communication factor. The results of the second open question, on challenges that students encountered and whether they feel they learned how to address them, are reported in Table 7. There, the same aspects of planning are frequently reported as important challenges and have a relatively higher share of students disagreeing that they have learned how to address them.

Table 2. Mean and standard deviation (SD) of students' pre-scores and professionals' scores per factor of the IPMQ. Statistically significant p-values are indicated using the star notation.
Finally, in line with the learning goals of the project, mechanical engineering design methodology aspects are the most reported challenges, but have the highest share of students positive about their learning, confirming that the project achieves its primary goal.

Figure 2. Box plot comparing pre/post scores of BA students per factor (mean value indicated by a triangle). Only students that answered both questionnaires are considered (N=47). Statistically significant differences within factors are shown using the star notation, see also Table 4.

Table 4. Mean and standard deviation, and results of the paired T-tests between pre/post scores per factor for BA students, with the alternative hypothesis that post scores are better (one-sided).

Statistically significant p-values are indicated using the star notation. Only students that answered both questionnaires are considered (N=47).

Table 5. Reported prior experience and training by BA students and evolution during the semester (prior to vs. during the semester).

Evaluation of a capstone project
Regarding the effect of capstone projects, the analysis is performed using a pre-post comparison of IPMQ scores. The results of the comparison are shown in Figure 3. The associated statistical tests, including the mean and standard deviation, are summarised in Table 8. Visually, there seems to be a clear increase in the reported scores for almost all factors. Statistically, the planning (p<0.001), risk assessment (p<0.05), and interprofessional competence (p<0.05) factors as well as the broader total IPMQ score (p<0.01) show a significant increase between the pre and post questionnaires. The gains on the planning and risk assessment factors and on the total IPMQ score show an effect size (d=0.538, d=0.453 and d=0.465, respectively) above the 0.4 threshold, suggesting an above-average effect of the educational interventions on professional skills.

Table 6. Coded non-technical learnings reported by students. The table presents only answers with more than 4 occurrences, grouped by topic with verbatim examples and ordered by decreasing frequency.
- Communicate, share information (49): 'Communication with other members of the group', 'Good discussion in the group'
- Organize, coordinate, manage work (46): 'Organize work', 'Coordination'
- Manage time and workload (41): 'Do a job with a deadline', 'Plan', 'Work regularly'
- Split and distribute tasks (41): 'Share work', 'Distribute tasks'
- Interpersonal attitude (35): 'Trust others', 'Be patient', 'Listen to others', 'Cope with others' motivation'
- Collaborate, cooperate, work in group (29): 'Cooperate', 'Work together', 'Teamwork'
- Team up, get along (29): 'Work with unknown people', 'Get to know people', 'Highlighting the qualities of each person is important'
- Mechanical engineering design methodology (10): 'Check results', 'Work specifications out'
- Anticipate and solve issues (5): 'Learn to prevent issues', 'Issues need to be addressed as soon as they occur'
A closer analysis reveals that these results are notably affected by a single outlier who scored extremely low on the post-test (e.g. 1.4 on risk assessment). If this student is excluded, the effect size for the risk assessment factor, for example, rises from a moderate effect (d=0.453) to a very strong effect (d=0.718) among educational interventions.

Discussion
Reverting to our research questions, we first look at our results in terms of professional skills development. In general, professional skills tend to be treated implicitly rather than explicitly in traditional curricula (Howe, Rosenbauer, and Poulos, 2017). For example, project planning skills are seen as a prerequisite for higher education (e.g. by ABET), but are nevertheless only addressed in ad hoc classes. Indeed, both BA and MA students in our study showed a significantly lower score compared to professionals. So while planning is required to carry out the in-class project, the students do not seem to develop this skill through their studies in general. Further, neither students nor professionals seem to learn certain skills, such as ethical sensitivity, which has also been reported by Cech (2014) or Tormey et al. (2015). This calls for a broader use of professional skills assessment tools, both as reflective devices for teachers about the content of their courses and as reflective devices for students to make the learning goals of projects explicit.

Figure 3. Box plot comparing pre/post scores of MA students per factor (mean as a triangle). Only students that answered both questionnaires are considered (N=29). Statistically significant differences within factors are shown using the star notation, see also Table 8.

Table 8. Mean and standard deviation (SD), and results of the paired T-test statistical comparison of pre/post IPMQ scores per factor for MA students, with the alternative hypothesis that post scores are better (one-sided). Only students that answered both questionnaires are considered (N=29). Statistically significant p-values are indicated using the star notation.

There has been a growing focus on the methods used to assess professional skills, and on their quality (Cruz, Saunders-Smits, and Groen, 2020), which echoes our second question about assessment methods. In this study, we have evaluated the use of a standardised questionnaire: the IPMQ. The usefulness of the IPMQ as a measure of students' learning seems confirmed by the benchmark against experienced professionals. The latter indeed scored higher than students, except on ethical sensitivity and communication. Their relatively low score on ethical sensitivity may actually reflect the reality of industry, where such questions are not necessarily the highest priority. Within courses, the IPMQ was able to capture the different gains between in-course and capstone projects. Students somewhat increase their self-efficacy beliefs in courses where professional skills can be learned, and more so on topics that are explicitly taught.
While the importance of the skills assessed by the IPMQ seems confirmed by the large overlap with the students' answers to the open questions, the IPMQ may not be sensitive to all learning. In the BA courses, students reported through the survey that they learned coordination, time management and task splitting, which correspond mostly to items from the planning factor of the IPMQ (define work plan, Q2; breaking work into tasks, Q3; keep track of tasks, Q5), on which there was no significant difference. This could be explained by the individual framing of the IPMQ questions, whereas students experienced those as group activities, or by students feeling that their learning is not sufficient to state that they 'are good at' them. Furthermore, context-specific topics, such as team-up challenges, are, as expected, not captured by this standard instrument.
While there have been recent publications discussing the characteristics of projects that best support students' development of professional skills (Costa et al., 2019; Winberg et al., 2020), these are mostly prescriptive studies. With respect to that, our data specifically suggest that:
(1) Broader learning is obtained from bigger, longer, and more complex projects than from smaller in-course projects, as shown by the significant gain on the total IPMQ score by MA students.
(2) Stronger learning is obtained when there is explicit teaching and regular formative and summative feedback on process skills, as highlighted by the strong gains in planning and risk assessment by MA students.
There are, however, a few open questions and limitations to our study. The reported gains in self-efficacy beliefs of interprofessional competence are unexpected and cannot be related to the settings of the projects (all students were from the same department), nor was this part of the teaching. Since this could be due to a transfer by the students from interprofessional to intra-group, interindividual concepts, it also highlights the challenges of self-assessment. Indeed, while self-efficacy beliefs have been shown to correlate with performance (Zimmerman, 2000), it is still unknown whether this correlation holds for professional skills. The use of objective measures would limit this issue, but objective measures of performance in these domains that can be standardised across disciplines and courses still need to be developed. In addition, there are potential biases resulting from the low response rate of BA students to the post IPMQ. However, the statistical analysis of the larger population from the pre-questionnaire and the general overlap with the open questions, for which substantially more answers were collected, suggest that this bias is probably limited. The low response rate also highlights that while Likert-scale-based self-assessment tools are simple to apply and score, motivating large classes to participate remains a challenge of its own, especially once the teaching is over. This is even more the case if the questionnaires are not well aligned with the explicit learning objectives of the courses, making it difficult for students to see the benefits.

Conclusion
In this paper, the development of professional skills by students through their work on in-course and capstone projects has been investigated. An easy-to-use 24-item questionnaire has been applied in a pre-post design to measure self-efficacy belief changes due to these interventions. In addition, answers to open-ended questions from feedback surveys have been collected to complement the evaluation of in-course projects.
Through the various results presented, the effects on the learning of professional skills through the curriculum in general and by in-course or capstone projects have been highlighted.
Regarding what professional skills students develop during projects, the results suggest that the in-course projects promote communication and interprofessional skills, while through the capstone project, students develop a broader set of professional skills and a strong gain is visible in particular on explicitly taught competences: planning and risk assessment. Yet, there seems to be a gap with respect to professionals suggesting that these skills should probably be taught and practiced even more during the curriculum.
The professional skills were assessed using the IPMQ, a standardised self-efficacy belief questionnaire, which showed good validity in our context. Using this tool in a pre-post comparison, we were able to measure differences. Reports sent out to students gave them feedback about their scores. The whole process scales well and is practical even for larger classes, since we could easily process more than 300 questionnaires. While the tool works for large numbers, it remains challenging to get many students in such classes to participate. In addition, the IPMQ does not evaluate context-specific aspects of professional skills, such as the organisational and team-up challenges highlighted by the feedback surveys.
Projects are increasingly important, but it may not always be possible to change whole curricula to organise them around projects, following PBL or CDIO approaches. When projects are used in traditional classes, students are often expected to learn professional skills alongside technical skills, but only the technical skills are explicitly addressed and discussed. Our data confirm that this expectation is not realistic. Indeed, the factors of the IPMQ with strong gains correspond to topics that were addressed by specific lectures, and that students could practice and get feedback on, including through multiple assessments. This is a notable finding that contributes to understanding how to effectively integrate projects into traditional classes. Future work should further investigate the exact teaching practices that are most effective at supporting the development of professional skills, for example by looking at the actions of the teaching staff through interviews or focus groups.
Appendix. Interprofessional Project Management Questionnaire (IPMQ)

Table A1. List of the questions of the IPMQ along with the factors they are linked to (A=planning, B=risk assessment, C=ethical sensitivity, D=communication, E=interprofessional competence).

Q1 (A) I am good at making a clear problem statement to clarify the goals when I start working on a project.
Q2 (A) I am good at defining a clear work plan early in a project.
Q3 (A) I am good at breaking a large project into a number of smaller work packages.
Q4 (A) I am good at analyzing a project work plan to identify the order, priority and importance of work tasks.
Q5 (A) I am good at identifying how to keep track of which tasks have been completed and how a project is progressing.
Q6 (B) I am good at clarifying how likely it is that something will go wrong with a project.
Q7 (B) I am good at identifying how much damage or trouble may be caused by something going wrong with a project.
Q8 (B) When working on a project, I am good at estimating the likelihood and potential impact of something going wrong with a project.
[...]
Q13 (C) I am good at putting myself in the shoes of someone whose life could be affected by a project's results.
Q14 (C) I am good at identifying all the people who could be impacted by a project, no matter how directly or indirectly.
Q15 (D) I am good at trying to understand the perspective of other team members.
Q16 (D) I am good at making sure that all the necessary information is shared with other team members.
Q17 (D) I am good at explaining my ideas in ways that other people can understand.
Q18 (D) When someone disagrees with me, I am good at paying close attention to see if I can learn something from their alternative perspective.
[...]