Channelizing Thinking for Practical Innovations and Measurable Impact

Abstract
Individuals and groups of people have the potential to find creative and innovative solutions to the problems they face. However, this potential usually goes unrealized. Realizing it requires channelization of thought through a focused process. This paper introduces a set of seven thinking tools for systematic innovation and describes the evolution of a training workshop, called Think!, in which the tools were used. The tools are shown to be useful for solving ill-defined or ambiguous problems. They are divided into primary and supporting categories to aid the conduct of effective workshops ranging in duration from two hours to two days. It is concluded that the tools provide a strong head start to innovation; however, implementation of the generated solutions requires effective post-workshop follow-up.


Introduction
Constant thinking is what propelled humans from being easy prey, a species that nearly went extinct, to becoming the most accomplished life form on Earth (Gordon, 2012). Our cognitive abilities will continue to evolve, but the basic disposition towards thinking and continuous problem solving remains unchanged. Yet, in the 21st century, where problem solving tasks abound, only one in four people feel that they are living up to their creative potential (Adobe, 2012). This is especially remarkable because the call for creativity and innovation is stronger than ever before in organizational contexts.
It is believed that creative potential abounds in all humans and that it can be realized via deliberate processes and thinking tools (Isaksen, Treffinger, 2005). This is also reflected in our experience of conducting workshops to build the innovation capabilities of individuals. This paper presents the iterative development of a workshop called Think!, aimed at training people in problem solving and systematic innovation. Three development iterations are presented. A total of thirteen workshop instances involving three hundred and two participants were conducted over a period of two years.
The primary contribution of this paper is a set of thinking tools for systematic innovation. The tools guide the thought process through a set of tasks, from problem formulation through solution generation to evaluation of the effectiveness of selected solutions. The tools are intended to be used in training workshops whose duration ranges from a couple of hours to a couple of days. Secondary contributions of this paper take the form of insights into effective facilitation of workshops, a description of templates for idea evaluation and for reducing facilitator workload, and means for sustaining workshop learning through to the implementation of solutions in mainstream work. This paper is organized as follows: Section 2 introduces the workshop design, considerations for its facilitation, and evaluation criteria. Section 3 describes experiences with conducting the workshops in a real-world setting, and subsequent revisions to improve the content and delivery based on a set of defined metrics. Section 4 discusses insights from the workshops and methodologies that will be useful for organizations, trainers, and teachers in their quest for innovation.

Workshop design
This section presents the main thinking tools used during the workshops, how the workshops were conducted (i.e. their facilitation), and how each workshop's effectiveness was evaluated.
2.1 Think! 1.0 - A framework for thinking
Design of the workshop training material was initiated with the objective of providing a systematic thinking framework for participants to transform real-life problems or opportunities into practical innovations. Different methods and models, such as Creative Problem Solving (CPS) (Parnes, 1967) (Isaksen, 1995), Classical TRIZ (Sheu, 2007), Six Thinking Hats (de Bono, 1999), and Design Thinking (Liedtka, Ogilvie, 2011), were studied. We selected CPS as the basis for our training because it takes a cognitive approach, presents principles that are demonstrated through real-world use cases or cooperative experiences, and engages students in instruction that is followed by opportunities to apply and practice on realistic challenges (Puccio, Murdock, Mance, 2005). It has also been proven an effective method for over sixty years (Scott, Leritz, Mumford, 2004). The five-stage CPS (Parnes, 1967) was commonly treated as a process in which each session required fixed, linear, sequential application of all the stages (Isaksen, Treffinger, 2005, p346). Six Thinking Hats, on the other hand, offers flexibility and focuses an individual's thinking in a specific direction for a specific period of time (de Bono, 1999). In our training, we chose to adopt a combination of the two methods. Inspired by Six Thinking Hats and based on CPS and the author's previous work (Khodke, 2013), seven thinking tools were designed to provide a systematic framework, and the module was named "Think!". Table 1 lists the seven thinking tools and their respective functions for the first iteration of the workshop, Think! 1.0. The tools are used as external stimuli to channelize a participant's thoughts in line with the function of the tool. We pose the first hypothesis:
H1: The tools will help participants to solve the type of problems identified by Greeno (1980), which are fuzzy, ill-defined, and ambiguous. Such problems have no known or predetermined solution method, and the needed outcome is not readily available but must be actively constructed.

Facilitation
Workshop facilitation is based on the four-factor theory comprising vision, participative safety, task orientation, and support for innovation (Anderson, West, 1998). The participants are divided into groups. Each group attempts to solve a specific problem in a one day 'think and do' workshop. Using the Pencil tool, individuals from the groups identify various problems in their area of work. Each group selects and refines one problem (vision) for solving, based on a common frame of reference (Renzulli, 1982) between the individuals. The group then uses the other thinking tools from Table 1 to build and organize thinking, from generating ideas through to making an action plan for implementation (task orientation). The tools offer flexibility and need not follow a sequence. However, for beginners we use a sequential method, starting from the Pencil tool (problem identification) through to the Piggy Bank tool (cost considerations). At the end of the session, each group presents its solution and action plan for implementation to all the participants.
The principle of deferred judgment during ideation (Osborn, 1953) is used to ensure participative safety. Case studies along with activities are used to make the exploration enjoyable and pragmatic. In the case of multilingual participants, we welcome them to think, write, and discuss in their preferred language and then translate their thoughts into the commonly accepted language.

Evaluation
The purpose of workshop evaluation was to capture insights and make necessary changes in the content and method of facilitation to improve overall learning and outcomes from the workshops. The Kirkpatrick Model of training evaluation (Kirkpatrick, 1993) was used to evaluate the workshop outcomes on four levels, as follows: 1. Reaction: This level measures the extent to which participants react favourably to the learning event. To gauge this, quantitative and qualitative feedback along with suggestions for improvement were collected from the participants through feedback forms. The participants were asked to rate three items: a) the overall learning experience during the workshop, b) usefulness of the tools in problem solving and systematic innovation, and c) facilitation of the workshop. The ratings used a 5-point scale, where 1 meant strong disagreement and 5 meant strong agreement.

2. Learning: This level examines the extent to which the participants acquire the intended knowledge, based on their participation in the learning event. This was rated both by the participants and by the facilitator, based on the presentations made by each group before the other groups. The ratings used a five-point scale along three parameters: a) importance of the identified problem, b) usefulness of the idea(s), and c) clarity of the action plan for implementation.

3. Behavior: This level measures the extent to which participants apply their learning from the workshop in their everyday jobs after the training. This was measured by taking qualitative and quantitative feedback from each participant's supervisor after one week. The supervisor was asked to assess, on a five-point scale, the extent of perceived improvement in a participant's behavior based on two parameters (Zhou, George, 2001, p8), viz. the ability to a) identify problems and b) generate ideas.

4. Results: This level measures the extent to which the desired outcomes occur as a result of the learning event and subsequent reinforcement. This was measured based on the percentage of solutions implemented (Malinoski, Perry, 2011) in mainstream work after four weeks, on a five-point scale ranging from 1 = 0% of solutions implemented to 5 = 100% of solutions implemented.
A score sheet was prepared to tabulate the quantitative data from workshop evaluation. Maximum, minimum, and average ratings for the workshops were recorded for each of the four levels. Table 2 shows the score sheet template.
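The score sheet tabulation described above can be sketched in a few lines of code. The level names follow the Kirkpatrick model used in the paper; the rating values are illustrative placeholders, not the study's actual data.

```python
from statistics import mean

# Hypothetical 1-5 ratings per Kirkpatrick level across several workshops.
# The values are illustrative only; see Table 3 for the paper's actual scores.
ratings = {
    "Reaction": [4.3, 4.1, 4.5],
    "Learning": [3.1, 3.4, 3.0],
    "Behavior": [1.4, 2.0, 1.8],
    "Results":  [1.0, 1.0, 3.0],
}

def summarize(scores):
    """Return the (maximum, minimum, average) triple recorded on the score sheet."""
    return max(scores), min(scores), round(mean(scores), 1)

for level, scores in ratings.items():
    hi, lo, avg = summarize(scores)
    print(f"{level}: max={hi} min={lo} avg={avg}")
```

Recording maximum and minimum alongside the average, as the score sheet does, preserves the spread across workshops that a lone mean would hide.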

Workshop development
This section presents the iterative development of the training module based on observations, insights, and experiences from conducting the workshops.

Think! 1.0: Observations and Evaluation
Think! 1.0 was conducted as a one day workshop. Twenty-six participants, comprising designers, engineers, and project managers from the automotive industry working in the design and development of vehicle interiors, participated in the workshop. The participants' work experience ranged from one to thirteen years.
The participants were divided into three groups, each with multidisciplinary team members. Using the Pencil tool, the groups identified three problems to solve: a) wastage of resources, b) lack of creative work space, and c) making a particular seat design energy efficient. A considerable amount of time was spent defining 'creative work space'. Next, still using Pencil, the groups generated ideas to address the identified problems. Restating that the participants could use their preferred language increased participation, leading to more ideas in a shorter time. Sixty-five ideas were generated, which were evaluated using the Magnifying Glass tool. While using the Magnifying Glass tool, some individuals from a group evaluated the ideas while others from the same group prepared the action plan. Ideas that required investment or a change of company policy were dropped, and qualifying ideas were further improved using the remaining tools. Each group presented its solution and action plan for implementation to the other groups.
All twenty-six participants thought that solving real-life problems with the tools made the process easy to understand and apply. Designers thought all seven tools were important, relevant, and useful for their job. Engineers thought three tools, Pencil (identify and ideate), Magnifying Glass (evaluation and action plan), and Piggy Bank (cost), were more useful, whereas project managers voted for Clock in addition to the three tools preferred by the engineers. 78% of the participants would have preferred an additional workshop day to explore their ideas in greater detail. The average participant scores for the Reaction and Learning metrics were 4.3 and 3.1 points respectively.
After one week, the supervisor of seven of the participants was interviewed. Three out of seven participants showed marginal improvement in identifying problems and thinking about new ways to improve the design of their product, whereas no improvement was perceived by the supervisor for the remaining four. The average score for the Behavior metric was 1.4 points. None of the solutions presented during the workshop were implemented within four weeks. Thus the Results metric for the workshop was rated at 1 point. Detailed ratings are tabulated in Table 3.
3.2 From Think! 1.0 to Think! 1.1: Making Ideas Tangible
Three important insights were captured from the workshop based on Think! 1.0: 1) participants need to actually try out their ideas; 2) the Magnifying Glass tool needs a single function, in contrast to the two functions assigned to it; and 3) participants showed a preference for a subset of tools based on their work profile, making evident the need to revise our tools and methods to best suit the users and their context.
To assist participants in trying out their ideas, Design Thinking principles (Meinel, Leifer, 2011) were studied and prototyping was introduced in the training as a function of the Spanner tool. This augmented the previous function of the Spanner tool, which was "Improvise".
The function of the Magnifying Glass tool was revised to idea evaluation alone. Its former Action Plan function for implementation of the identified solution was added as a separate part of the training, which could be included or excluded based on user needs.
The seven thinking tools were divided into three primary tools and four supporting tools, as shown in Table 5. The Pencil tool (problem identification and ideation), the Magnifying Glass tool (idea evaluation), and the Spanner tool (prototyping) were categorized as primary tools, and the remaining four were supporting tools. These revisions in the training module led to two more hypotheses:
H2: The three primary tools will be universally relevant and useful. The supporting tools ought to be introduced and used only where necessary.

H3: With the primary tools, the problem solving process can be effectively delivered in workshops spanning a couple of hours to two days, depending on the participants and context.
The new revision was named Think! 1.1.

Think! 1.1: Observations and Evaluation
For Think! 1.1, nine workshops were conducted: four with a duration of two days and five with a duration of three hours. Two hundred and four participants, comprising students and professionals between the ages of ten and sixty-two years and spanning eleven disciplines, attended the workshops.
Participants from the two day workshops used the primary tools identified in Section 3.2. Example problems identified were 'City redesign for problem free life' by school students, 'heavy rejection in manufactured components during material handling' by engineers at a manufacturing company, and 'design of an efficient process for college enrollment' by students of an undergraduate course in Communication Design. Over 50% of participants needed the facilitator's support to understand how to evaluate ideas. Ideas that needed fewer resources were selected for prototyping. Prototypes made during these workshops were physical working models of the solutions. The school students made a model of a city using wooden building blocks and walked their parents through the city for feedback, engineers made trays with partitions using corrugated boxes and tried moving components, while schematic diagrams and role play with the college staff helped to prototype methods for the new enrollment process. Almost 25% of the solutions experienced early failures but were quickly redesigned and tested. For one of the workshops (workshop 4), whose participants were from a small company, thirty minute follow-up sessions were conducted with the participants each week, for four weeks.
Participants from the three hour workshops used the primary tools identified in Section 3.2.
Examples of identified problems were 'Redesign of a company balance sheet for easy understanding' by accountants from a small business, and 'Effective time management for company administration staff' by administrators from a small business. As in the two day workshops, facilitator intervention was needed for idea evaluation. Shortlisted ideas were prototyped mainly in the form of sketches or schematic diagrams.
All participants thought the three primary tools were easy to understand and apply in their everyday work. They enjoyed prototyping, testing, and improving their solutions, and experienced increased creative confidence. All professionals thought that a prototype brings clarity to an abstract idea and can help in communicating with decision makers about implementation. The quantitative scores for the workshops are tabulated in Table 3.
A comparison of the ratings for the two day workshops of Think! 1.1 with Think! 1.0 shows an improvement in both the Reaction and Learning metrics. The ratings of the three hour workshops of Think! 1.1 are on par with those of Think! 1.0 for the Reaction metric (with the exception of Workshop 10); however, there is a reduction in the Learning metric. For the Behavior and Results metrics, improvement was observed only in the participants of workshop 4.
Workshop 4 was rated at 3 points for the Results metric owing to the implementation of two of the four solutions generated during the workshop.

From Think! 1.1 to Think! 1.2: Metrics for Impact of New Ideas
The demarcation of primary tools, prototyping of ideas, and increased duration of the workshop improved the learning process in Think! 1.1. However, evaluation of workshop 10 raised concerns. It showed the lowest ratings on each of the four Kirkpatrick levels used for workshop evaluation. On further analysis, we found that there was no common thread amongst the group members, and hence not everyone could relate to the problem identified by another member of the group. Thus, a different approach to facilitating diverse groups was needed. A new method for the facilitation of diverse groups was designed, in which participants would identify and solve problems individually instead of as a group.
Another concern was the participants' increased cognitive load during idea evaluation and the resulting need for facilitator support. A template was designed for idea evaluation, as shown in Figure 1. The template guides the participants in categorizing their ideas into four categories based on investment and impact, which are then color coded. As our workshops focus on incremental innovations, the ideas falling into the green zone (high impact, low investment) of the template are selected for further development during the workshop.

Figure 1. Template for idea evaluation using the Magnifying Glass tool
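The template's categorization logic can be expressed as a small classification routine. This is a minimal sketch of the Figure 1 quadrants: the paper specifies only the green zone (high impact, low investment) by name, so the other zone labels, colours, and the rating threshold are assumptions for illustration.

```python
def classify(impact, investment, threshold=3):
    """Place an idea, rated 1-5 on impact and investment, into a colour-coded zone.

    Only the green zone is named in the paper; the other labels are assumed.
    """
    high_impact = impact >= threshold
    low_investment = investment < threshold
    if high_impact and low_investment:
        return "green"   # high impact, low investment: develop during the workshop
    if high_impact:
        return "yellow"  # high impact, high investment: assumed label, park for later
    if low_investment:
        return "blue"    # low impact, low investment: assumed label, quick wins
    return "red"         # low impact, high investment: assumed label, drop

# Hypothetical ideas with (impact, investment) ratings.
ideas = {"partitioned trays": (5, 1), "new conveyor line": (5, 5), "label printer": (2, 1)}
selected = [name for name, (imp, inv) in ideas.items() if classify(imp, inv) == "green"]
print(selected)  # ideas in the green zone, shortlisted for further development
```

Having participants apply a fixed rule like this, rather than debating each idea from scratch, is what reduces both their cognitive load and the facilitator's workload.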
Yet another insight surfaced, as business owners seemed reluctant to send their personnel to attend workshops on creative and innovative thinking. Fifteen business owners were interviewed and Customer Profiles (Osterwalder, Pigneur, Smith, Bernarda, 2014) were charted. It became apparent that the intangible and unmeasurable nature of the knowledge gained during the workshop caused the business owners to prefer skill based training programs ("thinking" was not considered to be a skill). This was also reflected in the lack of support for implementation of some of the breakthrough solutions prototyped during workshops 7 and 8.
It became clear that to increase the engagement of business owners, as well as the Results metric, it was necessary to better communicate the value of the innovative ideas generated during the workshops. Prior research shows that the impact of innovation is often measured as return on investment, employee behaviors such as the number of ideas generated, etc. (Malinoski, Perry, 2011). However, the solutions designed during Think! 1.0 and Think! 1.1 were not just about financial return on investment but also related to safety, potential intellectual property, carbon footprint, and social impact. Hence, a context-specific "idea impact calculator" was designed by the author to tangibly assess the value of generated ideas and their worth for the business. This led to the next hypothesis:
H4: A physical prototype, along with estimated impact in terms of business value generated, will drive the implementation of solutions in mainstream work and the subsequent Results metric of workshop evaluation.
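One plausible shape for such a calculator is a weighted score across the dimensions the text names. The paper does not publish its formula, so the weights, the 0-10 scoring scale, and the normalisation below are all assumptions for illustration; only the dimension list (financial, safety, intellectual property, carbon footprint, social impact) comes from the text.

```python
# Dimensions taken from the paper; everything else in this sketch is assumed.
DIMENSIONS = ["financial", "safety", "ip", "carbon", "social"]

def impact_score(scores, weights):
    """Weighted sum of 0-10 scores per dimension, normalised to a 0-100 scale."""
    total_weight = sum(weights[d] for d in DIMENSIONS)
    raw = sum(scores[d] * weights[d] for d in DIMENSIONS)
    return round(100 * raw / (10 * total_weight))

# Hypothetical manufacturing context: financial return and safety weighted highest.
weights = {"financial": 4, "safety": 3, "ip": 1, "carbon": 1, "social": 1}
idea = {"financial": 8, "safety": 6, "ip": 2, "carbon": 5, "social": 3}
print(impact_score(idea, weights))
```

Making the calculator context-specific amounts to choosing the weights per organization, which lets the same idea score differently for, say, a manufacturer versus a school.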
Different methods of facilitation for cohesive and diverse groups, a template for idea evaluation, and the impact calculator were incorporated, and Think! 1.1 was revised to Think! 1.2.

Think! 1.2: Observations and Evaluation
Three Think! 1.2 workshops were conducted: workshops 11, 12, and 13. Workshops 11 and 12 had a duration of one day, while workshop 13 lasted two days. Seventy-two participants, comprising forty-seven professionals and twenty-five students, attended the workshops.
The new method of facilitation was used for workshop 11, as the participants were from diverse disciplines and organizations. The old method of facilitation was retained for workshops 12 and 13, because those groups were more cohesive. The participants used the primary tools identified in Section 3.2. Example problems identified were "redesigning an industrial water heater for efficiency", "reducing development time for investment casting components", and "making a college campus eco-friendly". Each participant was given a target of at least 25 ideas. Over 1700 ideas were generated in the three workshops. All participants were able to evaluate ideas without additional help from the facilitator. A total of twenty-two prototypes were made. Collectively, the solutions generated during these workshops were estimated to save approximately three million dollars annually.
All participants felt that the use of the primary tools, along with prototyping and impact calculation, made them more confident in dealing with ambiguity. Calculating the impact of the generated ideas strengthened the resolve for implementation. Quantitative data show improvement in ratings for all four levels of the Kirkpatrick model as compared to the workshops based on Think! 1.1 (excluding the Results metric of workshop 4) and Think! 1.0. The detailed scores are tabulated in Table 3.

Discussion
Organizations and other groups of people already hold the potential to find creative and innovative solutions to the problems they face. However, this potential is usually latent; it needs to be systematically drawn out and applied. One way to do this is to channelize thinking in a series of distinct steps, which aid in idea discovery, evaluation, and realization. Our workshops attempt to facilitate this channelization of thoughts by means of a set of thinking tools. Over repeated application, we have refined the workshop content, as well as the way in which that content is delivered to the workshop participants. During this process, we gained insights into practices and strategies that aid (and hamper) effective channelization of thought. These insights are presented in this section.
In our first hypothesis (H1, Section 2.1), we expressed that the use of a set of thinking tools would aid participants in solving problems which are fuzzy, ill-defined, or ambiguous. After conducting thirteen workshops involving three hundred and two participants, we believe that the hypothesis is valid. We found that the set of thinking tools presented in Table 1 provided a template to systematically analyze different aspects of a problem under investigation. The tools guided the users in problem formulation, ideation of potential solutions, and evaluation of each solution, as well as consideration of cost, time, resource, and environmental factors. When working with one particular tool, the users could focus exclusively on the scope of that tool, without worrying about other concerns. By the time the last tool was used, the users were confident that no critical areas were left unexamined. The tools described in Table 1 are sufficiently generic to be applicable to a broad variety of problems. They may be tailored to a specific problem domain to increase their effectiveness.
When transitioning from Think! 1.0 to Think! 1.1, we introduced two more hypotheses (H2 and H3, Section 3.2), in which we expressed that the set of thinking tools would benefit from being split into primary and supporting categories. The primary category would be most relevant and useful for most users and, furthermore, would enable effective training lasting anywhere between a couple of hours and two days. The split into primary and supporting categories was driven by feedback and observations that users in different roles (for example, engineers and project managers) found some tools more useful or relevant than others. At the same time, a basic set of tools was considered useful by everyone. The latter set became our primary category, which was used to train two hundred and four participants across eleven disciplines during workshops two through ten. These participants ranged in age between ten and sixty-two years. Their feedback confirms hypothesis H2, that the primary tools we selected are universally useful. Since the primary tools are fewer in number, it became possible to use them comprehensively in workshops of shorter duration, as short as a couple of hours. With the primary tools, three hour workshops had a Reaction metric on par with two day workshops. This confirms hypothesis H3. It is interesting to note, however, that the Learning metric for three hour workshops suffered in comparison with the two day workshops. We propose that workshops of shorter duration be used as introductory sessions for training in problem solving, rather than as sessions where real solutions are generated.
In our final hypothesis (H4, Section 3.4) we expressed that the creation of a physical prototype, together with measuring the impact of the idea in terms of generated business value, would drive the implementation of the generated solution. An implemented solution provides clear evidence of workshop effectiveness, as captured in Kirkpatrick's Results metric. While we believe that the hypothesis is logically sound, there isn't sufficient evidence to confirm it just yet. The three Think! 1.2 workshops, which introduced prototyping and impact measurement, showed only a marginal improvement in the Results metric compared to Think! 1.1. This could be attributed to insufficient statistical data, but we feel that there is another explanation. It has been reported (Brinkerhoff, 2003, p219) that only 9% of learners actually apply what they learn with positive results. The training activities that lead to the most on-the-job application are, in fact, pre- and post-training follow-up (Brinkerhoff, 2006). This corroborates our experience. Workshop 4 scored highest on the Results metric, and this was indeed the workshop for which follow-up sessions were conducted to review and foster implementation of the workshop ideas. Thus, while prototyping and impact measurement may still be useful, we feel confident in asserting that follow-up sessions are likely to prove most beneficial for idea implementation. That said, impact measurement remains useful in developing creative confidence amongst participants and in conveying to business owners the business value of investing in workshops on thinking for employee education programs.
Facilitation plays an important role in the generation of ideas and their evaluation. One of its aims is to increase participant engagement during the ideation phase. We found that the simple "trick" of encouraging participants to use their preferred language increased their comfort levels and made the cognitive process easy and enjoyable. This markedly increased participant engagement and the corresponding generation of ideas. The latter is also facilitated by specifying a target number of ideas that each participant should generate. While generation of many ideas is a positive outcome, their evaluation may require greater engagement by the facilitator, thus leading to increased workload for the facilitator. This can be mitigated, as demonstrated in workshops 11-13, through the use of templates for idea evaluation and impact calculation. Workshop 11 also showed remarkable improvement in the Reaction and Learning metrics after a new approach to facilitation of diverse groups was incorporated. In diverse groups, the participants come from widely varying backgrounds and do not have enough common ground to appreciate (or even understand) the problems and solutions posed by other group members. In such groups, better results were obtained when each individual was asked to think of their own problems and solutions, rather than participating in a group effort.
The workshops have demonstrated that channelized thinking can help people in creating breakthrough solutions that are tangible and measurable. They provide a strong head start for practical innovation, but cannot sustain the momentum to ensure that solutions get implemented.
The thinking tools introduced are easy to understand; despite this, it takes time and effort to make a habit of using them in everyday life. During the ideation phase, a large number of ideas are generated. When just one idea is selected for follow-up, the rest can easily be lost.
Documenting and managing those ideas is a crucial activity. For large organizations with existing knowledge management procedures, this may pose less of a problem. For small and medium businesses, we propose that this be taken up by program management. In all cases, there is a need for post-training follow-up to maximize the impact of the training and establish a culture of innovation.