Methodological pragmatism in educational research: from qualitative-quantitative to exploratory-confirmatory distinctions

ABSTRACT Educational researchers continue to polarize into ‘qualitative’ and ‘quantitative’ camps, with these terms often functioning as global identity markers, rather than as styles of research that are available to anyone. Many scholars have lamented the drawbacks of researchers being siloed into opposing, apparently incommensurable research paradigms, and have advocated for more inclusive and mixed-methods approaches. However, mixed methods research is not necessarily a good fit for every researcher, research study or research question. In this theoretical paper, I argue that the distinctions commonly made between qualitative and quantitative research are fundamentally incoherent and that the challenges researchers face across both styles of research are essentially analogous. I present methodological pragmatism as an accessible, convenient and compatibilist framework for making research design choices which cut across qualitative-quantitative divides. I propose that the exploratory-(dis)confirmatory distinction is of considerably more practical relevance to educational researchers than the qualitative-quantitative one, and I outline how methodological pragmatism, while consistent with a degree of methodological specialism, recognizes the availability of all research methods for all researchers. Methodological pragmatism liberates researchers in education to conduct the most rigorous research possible by drawing on any methods from any tradition that will further their research goals.


Introduction
Many scholars have lamented the tendency for empirical researchers in education, as well as in the social sciences more broadly, to polarize into distinct methodological camps, labelling their research and even themselves as 'qualitative' or 'quantitative' (e.g. Gorard and Taylor 2004; Ercikan and Roth 2006). Not only does this division impede inter- as well as intra-disciplinary collaboration, but the terms 'qualitative' and 'quantitative' are often poor descriptors of the intended distinction. The qualitative-quantitative divide rests on philosophical and ideological differences and is much more than a simple matter of whether 'numbers' are present in the generated or collected data (Lund 2005). A 'qualitative' educational researcher conducting a semi-structured interview with a teacher might ask them how many years they have been teaching, and write down a number, but they would not consider this to be 'quantitative data' that required a fundamentally different, say 'positivist', analytic approach or epistemological/ontological framework (Dienes 2008). In a similar way, a researcher undertaking a statistical analysis ought to think hard about how to meaningfully interpret their findings in context, and how to construct a persuasive argument based on them (Abelson 2012). But this highly 'interpretive' work does not mean that they have unwittingly blundered into an 'interpretivist paradigm' and must deny the existence of an objective reality (Denzin and Lincoln 2011).
This paper argues that the similarities between qualitative and quantitative research in education are much stronger than the differences, and that much is to be gained by de-emphasizing the qualitative-quantitative distinction. It is a truism to say that all research in education, in whatever style, is challenging (Picho and Artino Jr 2016); however, what may frequently be missed is that the challenges within qualitative and quantitative research are often highly similar. Specifically, I will argue that research challenges associated with definitions of constructs, operationalization and measurement, data reduction and synthesis, analytic interpretation and threats to validity/trustworthiness are all essentially analogous across qualitative and quantitative research. Better communication across the qualitative-quantitative divide could lead not just to more mixed-methods research, where this is understood as research that draws on a mixture of both quantitative and qualitative methods (Johnson and Onwuegbuzie 2004). It could also facilitate a deeper, more unified understanding, derived from both of these research styles and traditions, of what it means to conduct high-quality educational research.
The value and validity of educational research have long been questioned: the field has been criticized for its perceived lack of rigour (e.g. Hargreaves 1996; Tooley and Darby 1998; Whitty and Wisby 2009), for being 'not very influential [or] useful' (Burkhardt and Schoenfeld 2003, 3) and for being 'often inaccessible, irrelevant, or impenetrable' (Rycroft-Smith and Macey 2021, 1). Hammersley (2005) argued that high levels of government interference in educational research had weakened the enterprise. The notion of practice in schools and other learning settings being 'research-informed' or 'evidence-informed' is polysemous, and 'evidence' in education continues to be a contested term, with questions, for instance, over the priority given to randomized controlled trials and a 'what works' agenda (see Davis 2018; Thomas 2021). Challenges to the validity and generalisability of qualitative studies are mirrored by a crisis of trust in quantitative studies, in the context of recent concerns about replication crises across related fields and a perceived need to do better, more reproducible, open science (Goldacre 2010; Chambers 2019; Ritchie 2020).
In this paper, I propose methodological pragmatism as an accessible, convenient paradigm for conducting educational research, in which all methods are available to all researchers, and the commonalities in the challenges of applying these methods are stressed. The overarching principle of methodological pragmatism is: there are no intrinsically good or bad methods; in each circumstance, there will be methods that are more or less well suited to particular research questions. Methodological pragmatism might well lead in certain situations to mixed methods, in the sense of a combination of qualitative and quantitative methods, a combination of different qualitative methods, or a combination of different quantitative methods (Johnson and Onwuegbuzie 2004), but this is by no means inevitable. Methodological pragmatism makes no prior commitment to the superiority of mixed methods; rather, whether or not to use qualitative, quantitative or mixed methods is a matter to be determined purely pragmatically, based solely on the needs of the research. Consequently, for particular studies, methodological pragmatism would be equally comfortable with an entirely qualitative approach or an entirely quantitative approach, and would not see either of these as being in any way inferior to mixed-methods research. Defining methodological pragmatism in this way of course presupposes well-defined research questions, which themselves derive from researchers' perspectives and what they find 'interesting' (Davis 1971); methodological pragmatism cannot in any sense be portrayed as a 'neutral' paradigm. However, I will argue that any individual methods of data collection/generation and analysis, deriving from any tradition, might be incorporated into an educational study in a relatively unproblematic fashion. Methodological pragmatism, I will argue, offers a positive way to navigate the challenges of conducting educational research, in the context of the systemic constraints placed on researchers and what is valued in public and policy discourse (Evans 2015), so as to support a more unified and coherent field of research to address these challenges.
In the remainder of this paper, I first (in Section 2) question the qualitative-quantitative distinctions that are commonly made and argue that these are much less stark on close inspection. In Section 3, I go on to argue that the challenges of doing research in education are broadly similar across qualitative and quantitative styles of research, taking examples from across the different stages of conducting a research study. In Section 4, I detail the nature of methodological pragmatism, arguing that the exploratory-confirmatory distinction is of far more relevance to researchers than the qualitative-quantitative one. In Section 5, I respond to some possible criticisms, and in Section 6 I conclude with some suggestions on how unhelpful polarisations of research and researchers might be obviated.

Questionable distinctions
This paper argues that attempts to differentiate qualitative from quantitative research lack coherence, and that, in practice, research in education is often messy and fails to fit neatly into qualitative/quantitative categories. Below, I problematize some of the ways in which the qualitative-quantitative distinction may often be made.

Numbers versus words
There is no straightforward distinction between qualities and quantities. As Gorard and Taylor (2004, 150) have pointed out: 'Most methods of analysis use some form of number, such as "tend", "most", "some", "all", "none", "few" and so on … The patterns in qualitative analysis are, by definition, numeric, and the things that are traditionally numbered are qualities … The measurement of all useful quantities requires a prior consideration of theory leading to the identification of a quality to be measured.' To distinguish data according to whether it contains numerical quantities is often hard in practice, and not very meaningful. Is a statement that 'Most interview participants expressed X' a quantitative statement because it involves counting the frequency of such statements? It does not seem clear that it assists the conduct of educational research to make this distinction.

Use of particular methods
In methods training courses, certain data collection/generation methods may often come to be regarded as associated with either qualitative or quantitative studies, but this frequently seems to make little sense. A survey, for instance, might contain both items that require a numerical answer and items that require extended written responses, but it does not necessarily follow that this would require a 'mixed methods' approach to analysis. The written responses might be coded and categorized, and the frequency of occurrences of various themes might be counted. During an interview, a student might be asked to order some cards containing statements in an order of preference; this might not involve any numbers, but the rankings of the cards might subsequently be handled statistically. It seems a category error to try to assign data collection methods or analysis methods such as 'interview' or 'survey' to 'qualitative' or 'quantitative' categories, and the attempt does not seem to serve any useful purpose.
Whether statistical techniques are ultimately involved in an analysis might depend on many factors, such as the size and properties of the eventually-acquired data set. A judgment might need to be made at some stage, based on the properties of the data, as to whether it would be meaningful to conduct statistical tests, or even use descriptive statistics, or whether this might be inappropriate or unnecessary. A chi-squared test, for instance, might be conducted on categories of qualitative data, but does this mean that frequency or ranked data must be considered quantitative? If the chi-squared test is not in the end carried out, due perhaps to some cell frequencies being too small, does the study then remain qualitative?
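The kind of judgment just described can be made concrete in a short sketch. The following Python fragment uses invented frequencies, purely for illustration (not data from any real study), and applies the standard expected-count rule of thumb to decide whether a chi-squared test on coded interview data would even be meaningful:

```python
# A minimal sketch of the judgment described above, using invented
# frequencies purely for illustration (not data from any real study).
# Rows: two interviewee groups; columns: coded theme present / absent.
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

# Expected counts if group and theme were independent
expected = [[r * c / total for c in col_totals] for r in row_totals]

# A common rule of thumb: the chi-squared approximation is unreliable
# if any expected count falls below 5, so check before testing.
if all(e >= 5 for row in expected for e in row):
    chi2 = sum((o - e) ** 2 / e
               for o_row, e_row in zip(observed, expected)
               for o, e in zip(o_row, e_row))
    print(f"chi-squared = {chi2:.2f}")  # 16.67 for these invented counts
else:
    print("Expected counts too small; report the frequencies descriptively.")
```

On these invented numbers the test runs; with a smaller sample, the very same study design might sensibly remain a purely qualitative report of themes, which is exactly the point.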
Sometimes entire research methodologies, such as 'action research' or 'design research', may be designated qualitative, but this seems not to accord with the variety of ways in which those approaches may be manifested in practice (see Burkhardt 2013). The categorization seems unhelpful.

Philosophical and theoretical distinctions
Behind the other differences mentioned above lies a deeper one, in which qualitative and quantitative research are distinguished according to philosophical and theoretical values and positions. Conflation of research style and ideological standpoint may lead researchers to feel a need to 'pick a side', and this can result in the propagation of simplistic stereotypes. Quantitative researchers are stereotyped as taking an apolitical, 'positivist' stance, and pursuing the impossible goal of eliminating all sources of bias in order to arrive at objective, neutral, value-free truths. But it seems unlikely that many experienced researchers within the social sciences would take this kind of absolutist stance. Such a portrayal suggests a straw person, used to ridicule quantitative approaches as simplistic, naïve, untheoretical and uncritical.
On the other hand, qualitative researchers may be stereotyped as vague, anecdotal and 'merely' journalistic, writing emotively but unrigorous in their approaches. They may be characterized as more interested in changing the world in personal ideological directions than in finding things out, failing to account for their confirmation biases, unwilling to have their views challenged, and cherry-picking evidence to fit a pre-determined narrative. Again, such a description seems a straw person, and does not reflect the care and consideration with which high-quality qualitative research is undertaken and the respect and primacy given to the data. It would seem that any of these criticisms could potentially also be applied to quantitative research; either style of research can be conducted more rigorously or less rigorously according to different notions of rigour.
The language of 'paradigms' is potentially problematic here (Trifonas 2009), because of assumptions 'that adopting a method automatically means adopting an entire "paradigm"' (Gorard and Taylor 2004, 9). Researchers, especially early-career researchers, may feel pressured to 'join a tribe' and even 'exclaim, before deciding on a topic and research questions, that they intend, for example, to use "qualitative" methods of data collection or analysis' (Gorard and Taylor 2004, 9). Alternatively, they are 'encouraged to count or measure everything, even where this is not necessarily appropriate' (Gorard and Taylor 2004, 9), so that qualitative evidence is derided and quantitative evidence is accepted uncritically. As Ercikan and Roth (2006, 14) argued, 'polarization is confusing to many and tends to limit research inquiry, often resulting in incomplete answers to research questions and potentially inappropriate inferences based on findings.' Many beginning researchers in education are former schoolteachers, or otherwise bring experience working with children and young people. It is possible that they may be inclined to see the kinds of skills needed for qualitative research, involving methods such as interviewing and classroom observation, as being more closely aligned to their existing skill sets (Gorard and Taylor 2004, 146). Studying phenomena in their natural settings, listening and trying to make sense of what people say, feel and experience, and co-constructing knowledge alongside participants (see Denzin and Lincoln 2011) may seem more closely aligned with their previous roles in classrooms than more quantitative methods, such as analyzing numerical data sets.
Researchers' negative relationships with their own learning of mathematics may also contribute to an aversion to statistics, which may (consciously or unconsciously) support biases against working 'quantitatively'. On the other hand, researchers with a talent for mathematics, and perhaps less inclined to spend time interacting with people, might feel more drawn to the spreadsheets and computer coding of quantitative research. In such ways, researchers might readily polarize on the basis of features associated with their background or personality, which should perhaps be irrelevant from the point of view of finding the best ways to address the particular research questions in a specific study. Over time, and supported by philosophical theorizing, it is possible to see how these divisions might develop into supposedly 'incommensurable paradigms'.
Having once 'picked a side', researchers might then set about laying out their epistemological and ontological positions. However, this can sometimes be done with considerable philosophical naivety, with stereotypical positions passed on uncritically from older to younger researchers. Philosophical problems around objectivity/subjectivity, realism/idealism, relativism/constructivism, etc., when taken in their strongest and most radical forms, and presented simplistically, could seem to threaten the rationale for all styles of research. Choosing methods based on the researcher's judgment of which position they align more closely with in centuries-old, unresolved (and possibly unresolvable) philosophical debates seems precarious, and researchers teaching methods courses in education departments are unlikely to possess the kind of strong background in philosophy that could make such discussions enlightening.
Clearly, all researchers in education need to be as philosophically well-informed as possible (e.g. Dienes 2008), with awareness of the challenges of 'true objectivity' in science (e.g. see Egan 2002), the theory-ladenness of facts and the underdetermination of theory by evidence. All researchers in education also need to understand that theories can only ever be provisional, that causation in the social sciences is probabilistic, not deterministic, and that critical scepticism (but not cynicism) is fundamental to research. But none of this necessitates 'picking a side'. Epistemological and ontological assumptions must be acknowledged, and may be more important to some researchers than to others. Inescapably, this will impact on the study's design, analysis and interpretation. However, the methodological pragmatist may view the weight given to these positionings as often excessive, and the tendency to assume that they must represent fixed 'positions' over time for any particular researcher as unrealistic.
Theoretical/conceptual/analytic frameworks (see de Vaus 2002; Cai and Hwang 2019; Crotty 2020) should be equally important to qualitative and quantitative research, so 'theory' should not be viewed as more the property of qualitative than of quantitative research. It also seems unhelpful to make a strong distinction between 'grand Theories' (with a capital 'T'), which operate as worldviews or meta-narratives (and tend to be more strongly associated with qualitative and theoretical research), and more local theories that build up from attempts to capture the ways in which people behave in certain educationally-relevant situations, when either qualitative or quantitative research might be well placed to contribute to either kind. Ultimately, the overriding consideration for the methodological pragmatist is that if setting out one's various epistemological and ontological positions does not seem to have much or any impact on the details of a study's design, analysis and interpretation, then it should be de-emphasized and perhaps not even referred to at all.

Analogous challenges
It would be uncontroversial to note here that qualitative and quantitative research each possess different advantages and disadvantages, and that each might be appropriate for different purposes. This may be typically the kind of message that young researchers receive on balanced research methods courses and in sensible research methods books. However, the argument of this paper goes much further than this. It is not just that qualitative and quantitative research each have their own difficulties; frequently, they present analogous challenges to the researcher. This has occasionally been noted; a recent tweet stated, 'Being an applied statistician is a lot like being an ethnographer' (Women in Statistics and Data Science 2021). If the qualitative-quantitative distinction is, as argued here, fundamentally spurious, then we would expect that the challenges associated with qualitative and quantitative research would often be highly similar. The sections below argue that this is indeed the case, taking examples from defining, operationalizing and measuring constructs, data reduction and synthesis, analytic interpretation and threats to validity/trustworthiness.

Definitions of constructs, operationalization and measurement
It is often assumed that the hard sciences have it easy when it comes to defining constructs, and even that science is inherently more straightforward than social science (e.g. 'Imagine how hard physics would be if electrons could think', attributed to Murray Gell-Mann). Everyone knows what something like 'energy' is, for instance, and so it can be easily and precisely measured. But this really fails to do justice to the challenges within the quantitative disciplines, and would seem to be a social scientist's perspective on how they imagine science to be ('the grass is greener'). On the specific example of energy, the Nobel-prize-winning physicist Richard Feynman noted that 'It is important to realize that in physics today, we have no knowledge of what energy is. … [T]here are formulas for calculating some numerical quantity, and when we add it all together it gives … always the same number. It is an abstract thing in that it does not tell us the mechanism or the reasons for the various formulas' (Leighton and Sands 1965, 4-1). This would seem potentially very similar to the kind of difficulty that an education researcher might have in defining and operationalizing a fuzzy construct, such as 'learning', 'understanding', 'problem solving' or 'creativity', where it is also difficult to say 'what exactly it is'.
Measurement is another issue often presumed to be straightforward for quantitative researchers. However, even in a subject like physics, and with supposedly 'everyday' constructs, measurement can be highly challenging (Vincent 2022). Consider temperature as an example. It might seem straightforward to establish a temperature scale, calibrated with melting ice at 0°C and boiling water at 100°C, but of course these values are context-dependent (varying with air pressure, purity of the water, etc.). Even disregarding that, how might a scientist decide what should be meant by, say, 50°C? What does 'halfway between' mean? If we use the electrical resistance of a metal to calibrate our scale with melting ice and boiling water, then 50°C will be halfway in electrical resistance for that particular metal; if we use the expansion of a column of liquid, as in a glass thermometer, 50°C will be halfway along the column. But these two temperatures will not be precisely the same in terms of their hotness: heat will flow from one of them to the other, demonstrating one to be of higher temperature than the other. So, even with something as apparently objective as 'the temperature', important choices have to be made, and the exact definitions and conditions used must be reported clearly. This does not seem fundamentally so different in principle from the challenges of devising a scale for something like 'self-esteem', and triangulating it with other measures.
Social scientists sometimes assume that context is not an important or problematic issue in the hard sciences, but the course that a chemical reaction, for instance, will take might depend on many external factors (e.g. ambient temperature, pressure, presence of catalysts or other impurities, how old the reagents are and where they have been purchased from, how they have been prepared and purified, and whether they have been kept refrigerated, and for how long). Control is not at all a straightforward matter in science, and the challenges can be viewed as similar to those in the social sciences. The challenges of conducting qualitative research would all seem to apply to conducting quantitative research, and good quantitative researchers are very aware of this. The need to carefully set up the conditions and measures before collecting statistical data is well captured in the statement attributed to Ronald Fisher: 'To consult the statistician after an experiment is finished is often merely to ask [them] to conduct a postmortem examination. [They] can perhaps say what the experiment died of.'
A qualitative researcher might object that qualitative research does not involve measurement. But a broad view of 'measurement' would challenge this. An extreme perspective is that 'Anything that exists, exists in some amount. And all amounts can be measured' (attributed to Edward Thorndike). Conducting semi-structured interviews to ascertain teachers' views on some matter, for instance, may not seem like 'measurement', but the findings that the researcher wishes to report are likely to contain statements that attempt to capture the degree to which participants expressed this or that opinion or related to this or that experience. Nuance in qualitative reporting is often about scaling (i.e. measuring) qualities. A great deal of quantitative language is often needed in order to summarize the findings from even the most qualitative piece of research, and this leads us on to the issue of data reduction and synthesis.

Data reduction and synthesis
Data reduction is an equally important issue for both qualitative and quantitative research ('To think is to forget details, generalize, make abstractions', Jorge Luis Borges). Quantitative data might consist of a large spreadsheet of numbers: in order to draw a conclusion from these that is meaningful, informative and useful, they must be processed and packaged into an overall message for the reader. This might be done by calculating summary statistics, such as the mean and standard deviation, and/or by conducting inferential statistical tests to establish whether meaningful differences or correlations can be detected that could tell us something about the wider population from which the sample is taken. Qualitative data, on the other hand, might consist of large data files of transcribed text, perhaps from interviews, but again, in order to draw a useful conclusion from these, this data must be 'reduced' into a condensed message for the reader. In both cases, information, detail and nuance must be discarded, and this sacrifice is necessary and acceptable in order to derive conclusions that will be informative to the researcher and be digestible and communicable to others.
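The lossiness of this reduction can be illustrated with a trivial sketch (in Python, using invented scores purely for illustration): two very different data sets collapse to the same mean, and even adding the standard deviation recovers only some of the discarded detail.

```python
import statistics

# Two invented 'spreadsheets' of scores (illustrative only): very
# different distributions that reduce to exactly the same mean.
a = [50, 50, 50, 50, 50, 50]
b = [0, 100, 25, 75, 40, 60]

print(statistics.mean(a), statistics.mean(b))    # 50 50  (identical)
print(statistics.stdev(a), statistics.stdev(b))  # 0.0 vs about 35.9

# Reducing either list to its mean discards the spread entirely; the
# standard deviation restores some information, but the shape of the
# distribution, and every individual response, is still lost.
```

The same trade-off holds when interview transcripts are reduced to a handful of themes: the summary is communicable precisely because most of the detail has been given up.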
Obviously, in neither case would a researcher merely present their raw data. The reduction of quantitative data to, say, a mean and a standard deviation, can be viewed as being in some ways analogous to the reduction of qualitative data to, say, a handful of themes. In both cases, researchers recognize that this is a 'lossy' process, which must be handled with the greatest of care if the conclusions arrived at are to be valid, reliable and trustworthy. In both cases, the researcher must guard against bias and be open and transparent about the details of the process, so that the consumer of the research can have confidence in the findings. But to regard either process as inherently more or less 'rigorous' or 'objective' seems unjustified. To suggest that the outcome of quantitative research is 'what', whereas the outcome of qualitative research is 'why', seems not to match the details of what researchers engage in. The two approaches would seem to have much more in common than is generally acknowledged, and researchers from both traditions might have much to learn from the experiences of the other. The essential challenges of doing research would seem to be largely the same.
Portraying qualitative research as staying with the richness, subtlety and detail is not necessarily accurate. As King, Keohane and Verba (1994) pointed out: 'Even the most comprehensive description done by the best cultural interpreters with the most detailed contextual understanding will drastically simplify, reify, and reduce the reality that has been observed. Indeed, the difference between the amount of complexity in the world and that in the thickest of descriptions is still vastly greater than the difference between this thickest of descriptions and the most abstract quantitative or formal analysis. No description, no matter how thick, and no explanation, no matter how many explanatory factors go into it, comes close to capturing the full "blooming and buzzing" reality of the world. There is no choice but to simplify' (King, Keohane, and Verba 1994, 43, original emphasis). Raw data, of whatever kind, must always be reduced during analysis and synthesized, and no style of research can escape the challenges that this brings.

Analytic interpretation
Some qualitative research styles are commonly described as 'interpretivist', and yet all research findings require interpretation. This seems to be another example where qualitative and quantitative researchers face essentially the same challenge. Interpretation is often perceived to be the most challenging aspect of quantitative research (Abelson 2012), and interpretation of quantitative data involves much more than merely distinguishing correlation from causation (see Rohrer 2018). As argued above, it is a mistake to think of qualitative research as inherently more nuanced. The tools of statistics sometimes facilitate making highly nuanced statements, which can be hard to express clearly in words. For example, perhaps two quantities have both increased, but one has increased more than the other, or the gap between two quantities has decreased, although both have increased. More complicated findings, such as a 3-way interaction or the results of a mediation analysis or analysis of covariance, can be even harder to translate precisely into text, and this is surely as challenging as attempting to capture and summarize subtle and nuanced statements made by interview participants in qualitative research.
Quantitative science is often portrayed in terms of certainty, universality and 'laws', and yet scientists and statisticians tend to talk more in terms of 'models' of the real world, which are acknowledged to be deliberately simplified and imperfect representations of 'reality' (e.g. see Christensen, Johnson, Turner and Christensen 2011). Models do not intend to capture all the details; they are intentionally slimmed-down, 'reduced' versions, that are therefore efficient to work with. They are always provisional and open to being improved with more data and theory: 'All models are wrong but some are useful' (attributed to the statistician George Box, see Wasserstein 2010). In particular, this means that:
1. Models are descriptive, not prescriptive. They don't say what will or must happen in the future; they summarize patterns in what has been observed to happen in the past.
2. Models apply only approximately, and within certain, specified domains. No model can make predictions with 100% accuracy and there will always be exceptions.
3. Models are probabilistic, not deterministic; i.e. they are 'good bets' that work on average, other things being equal, but not every time or for every case.
Such models, derived from quantitative data, do not seem epistemologically very different from a speculative theory or framework derived from qualitative data. The challenges and limitations are essentially the same in handling ideas that are tentative conjectures or hypotheses and attempting to ensure that they are understood in that way.

Threats to validity/trustworthiness
Finally, we consider the issue of threats to validity, and again conclude that the threats to validity of qualitative and quantitative research in education are essentially the same. In qualitative research, words like 'transparency' or 'authenticity' may be preferred to 'validity' and 'generalisability' (Denzin and Lincoln 2011), but, even so, similar issues remain (Smith 2018). Researchers interested in a particular topic often need to synthesize both qualitative and quantitative research in the same literature review, so common ways of evaluating how much confidence ought to be placed in each finding are helpful. Schoenfeld (2007, 81) posed three questions to ask about any study in education:
1. Why should one believe what the author says? (the issue of trustworthiness)
2. What situations or contexts does the research really apply to? (the issue of generality, or scope)
3. Why should one care? (the issue of importance)
These would seem to be readily applicable and relevant across both qualitative and quantitative studies, since, within all research in education, it would seem fair to say that 'The function of a research design is to ensure that the evidence obtained enables us to answer the initial question as unambiguously as possible' (de Vaus 2002, 9, original emphasis). All research claims, derived from whatever tradition, need severe testing before acceptance. Many (even opposing) claims may seem 'obvious' to someone, and one purpose of research is to distinguish statements that are 'obvious and true' from those that are 'obvious and false' (see Gage 1991).
Qualitative studies are often criticized for being small-scale and not generalizable, but sample size may not be the most important factor in judging generalisability, and generalisability may not always be the intention anyway (Smith 2018). A large quantitative study might equally be judged to have low generalisability due to the 'artificial' nature of its design (e.g. a laboratory-based rather than classroom-based setting, using researcher-devised materials, across a short timescale, etc.). However, the 'external invalidity' (lack of 'ecological validity') of such studies may be considered a feature, and not a bug, because it may facilitate theoretical conclusions being drawn with much greater confidence (Mook 1983), in the same way that a qualitative interview situation would not be criticized for being 'artificial'. A plural, but not dualistic, approach to generalisability is helpful (Larsson 2009).
When evaluating research, Christian and Griffiths (2016, 223) criticized both 'cherry-picked personal anecdotes and aggregated summary statistics. The anecdotes, of course, are rich and vivid, but they're unrepresentative. … Aggregate statistics, on the other hand, are the reverse: comprehensive but thin.' However, these different criticisms seem to have more to do with questions of scale (a small amount of rich data versus a large amount of superficial data) than with whether the research should be considered qualitative or quantitative. Both qualitative and quantitative researchers can inappropriately 'stack the deck' in their literature reviews by cherry-picking sources in an unsystematic review. The replication crisis has highlighted quantitative researchers' excessive 'researcher degrees of freedom' at the analytic stage. This seems similar in character to allegations of bias and subjective idiosyncrasy made against qualitative researchers when conducting their analyses. It would seem that similar considerations, involving acknowledging the researchers' positions and interests/biases, as well as transparent reporting throughout, are important, along with preregistering analyses (of both kinds) in advance (Chambers 2019). It is often acknowledged that, even in qualitative research, being an 'outsider' and aspiring to some level of objectivity can be valuable, though difficult (Thapar-Björkert and Henry 2004).
In both styles of research, bias is a considerable source of concern, with the researcher's preconceptions always at risk of exerting undue influence. But it does not seem clear that either qualitative or quantitative research has a greater problem in this area. As for the other aspects considered above, the challenges seem essentially the same, perhaps because all researchers bring the same strengths and limitations of being human.

Methodological pragmatism
There have been many previous attempts to address the qualitative-quantitative disconnect (e.g. Scott 2007) and to advocate better ways to teach research methods courses within the social sciences (e.g. Nind and Lewthwaite 2020; Sarafoglou, Hoogeveen, Matzke, and Wagenmakers 2020). In particular, Johnson and Onwuegbuzie (2004) argued for mixed methods research to be the third research paradigm, which can bridge the chasm between qualitative and quantitative research. However, methodological pragmatism does not position mixed methods as always the optimal research style, superior to mono-method approaches (see Creswell and Creswell 2018; Ramlo 2020). Methodological pragmatism is an approach to making research design choices, rather than an endpoint; the endpoint might be qualitative, quantitative or mixed, and each of these might be optimal for any particular study. For the methodological pragmatist, mixed methods is just one option.
Drawing on the thinking of the classical pragmatists (e.g. Charles Sanders Peirce, William James), methodological pragmatism seeks to adopt a pluralist, compatibilist and open-minded, balanced methodological position, considering all possibilities and seeking, as much as possible, to set to one side any prior ideological commitments for or against any particular methods. As stated in Section 1, the overarching principle of methodological pragmatism is: there are no intrinsically good or bad methods; in each circumstance, there will be methods that are more or less well suited to particular research questions. The choice of research questions will of course be influenced by the personal philosophy of the researcher, and what they regard as interesting and important. However, once a research question is arrived at, methodological pragmatism dictates that all methods are on the table; nothing is off limits.
None of this is to say that epistemological and ontological positions are of no relevance to educational research. It is important to acknowledge the illusory nature of 'the view from nowhere'. All researchers are constantly positioned in relation to their ontological and epistemological assumptions, and they need to be aware of what those are if they are to understand the warrants for the methods that they use, the questions that they frame and the kind of 'knowledge' that they expect their research to produce. Any researcher using any research design must be mindful of the knowledge claims being made and how these can be substantiated. However, philosophical pragmatism in all its forms seeks to question the overriding importance of such positionings. Baggini (2018, 82, original emphasis) commented, 'One consequence of adopting the pragmatist viewpoint is that many philosophical problems are not so much solved as dissolved'. As Dewey (1910, 19) wrote of philosophical questions, 'We do not solve them: we get over them.' While this does not give researchers carte blanche for a 'sloppy mishmash' of methods, and any methods deployed need to be 'sympathetic' to each other, it does open up researchers to consider absolutely any method, from whatever source. While different positionings will affect how a method might be understood and used, and our methodological biases are important to acknowledge, in order to try to work against them, on the pragmatist view they should not rule out any methods a priori.
It is clearly challenging to navigate the many choices associated with a pragmatic approach (Clarke and Visser 2019), but methodological pragmatism would seem to offer a way towards greater methodological diversity and coherence within the field of educational research. It also seems likely to lead to higher-quality research: research is hard enough without throwing away half of the toolbox of available methods. Seeing research design as a logical problem, rather than a logistical problem, the methodological pragmatist seeks to identify whatever approaches seem most appropriate to achieve the research goals. As de Vaus (2002, 9) put it, 'issues of sampling, method of data collection (e.g. questionnaire, observation, document analysis), design of questions are all subsidiary to the matter of "What evidence do I need to collect?"' Crotty (2020, 15) pointed out that to solve problems in everyday life we all tend to use any method that will achieve our desired purposes: 'We may consider ourselves utterly devoted to qualitative research methods. Yet when we think about investigations carried out in the normal course of our daily lives, how often do measuring and counting turn out to be essential to our purposes?' Constraining our research questions and methods to accommodate our beliefs (prejudices, even) about research styles, or limitations in our repertoire of known research methods, seems like the tail wagging the dog.
However, this is not to say that every researcher in education must have facility in all possible methods, and that methodological specialism is never defensible. The complexity of many methods, and the years of experience that may be needed to develop expertise in, say, multilevel Bayesian modelling, or conversation analysis, mean that not all researchers can be highly skilled in every possible method. The typical, stylized doctoral journey of 'Find a research interest, Formulate research questions, Find methods to address those questions, Learn how to do those methods, Carry out the study and Write it up' becomes perhaps less practicable for the early-career researcher, who, due to the systemic constraints they experience as an academic (see Evans 2015), needs to publish research rapidly. In such a situation, capitalizing on already-known methods (and using locally available expertise and equipment) is clearly an efficient and perfectly valid, 'pragmatic' approach. Being methods-led to a certain degree can be entirely consistent with methodological pragmatism. However, retreating into a tribe (see Becher and Trowler 2001) that prides itself on not engaging with certain methods would seem to do a disservice to the field. Ignoring half of the relevant literature because the researcher 'doesn't do statistics' or 'doesn't do Theory' is not productive. At the very least, all researchers should aspire to be equipped to critically read any style of research that falls within their area of interest.

Exploratory and confirmatory research
A seemingly more productive distinction, which cuts across the qualitative-quantitative divide, is that between exploratory and confirmatory research. This distinction should not divide people from one another, since it would seem highly unlikely that any researcher would want to specialize in only one or other of these. Exploratory and confirmatory research naturally work in partnership, often in the hands of the same researcher, and largely do not draw on different, specialized skills. They are in no sense in competition with one another.
Exploratory research (called 'night science' by Yanai and Lercher 2020) involves open-minded investigation, in which strenuous attempts are made to acknowledge and set aside preconceptions and avoid confirmation bias. This kind of research is necessarily descriptive and limited to particular situations and contexts. It may lead to thick, rich qualitative descriptions, which may be coded and organized to generate descriptive themes. Alternatively, it may involve collection of statistical data (e.g. on populations, from n = 1 up to very large), with a view to using descriptive statistics (as opposed to inferential statistics, such as statistical tests), and could be the first stage in a cross-validation design. Such research can provide proof of concept, existence proofs for a feasibility study, and/or raise pertinent questions for discussion, and generate hypotheses and theories to be tested out (Yarkoni 2022). In this way, exploratory research, whether qualitative or quantitative, is a crucial source of ideas and discussion. The challenges of exploratory research, whether qualitative or quantitative, are for the researcher to be creative and open-minded, honest and open to all possibilities, and to 'listen attentively' to the data, wherever they lead.
In sharp contrast to this is (dis)confirmatory research ('day science' in Yanai and Lercher's [2020] terms), which tests hypotheses/theories/conjectures and seeks to generalize beyond the particular sample or group studied to some wider population of interest. In qualitative terms, this might entail scaling up an earlier, exploratory study, perhaps conducting semi-structured interviews to examine to what extent findings from an exploratory survey might generalize to, or surface again in, a wider group. Alternatively, quantitative confirmatory research could entail conducting a survey built around issues emerging from previous exploratory interviews, and seeking to examine evidence for certain claims in a larger and more diverse sample. There is nothing inherently 'statistical' or 'quantitative' about the notion of 'hypothesis testing' or 'conjecture testing'; it does not depend inexorably on a 'positivist' worldview or entail absolutes, such as 'proof' and establishing 'laws'. Confirmatory quantitative research might operate within the hypothetico-deductive tradition, whether frequentist or Bayesian, making predictions and consequently supporting or rejecting/challenging/falsifying theories, and making generalizable claims with statistical significance. But none of this need presuppose any particular worldview on the part of the researchers involved, who might be equally happy another day interpreting interview transcripts. The challenges of (dis)confirmatory research are in stating beforehand (e.g. by preregistration, see Chambers 2019) what is being looked for and being transparent and reproducible in the search.
Neither exploratory nor confirmatory research should be viewed as 'better'; both are needed. But, unlike the qualitative-quantitative distinction, the exploratory-confirmatory distinction matters for practical reasons of rigour (see Chambers 2019). Every researcher should be clear which kind of research they are engaged in at that moment. Failure to make this distinction has led to questionable research practices in the quantitative world, including 'p-hacking' and HARKing (Hypothesizing After the Results are Known, see Chambers 2019), where inferential statistics have been used in studies that would be better characterized as exploratory. This means that the resulting p-values are not corrected for multiple comparisons, and are therefore misinterpreted. Reported findings thus give a false impression of statistical significance when seen divorced from the wider context of the other tests that were also carried out, and p-hacked findings should not be trusted or expected to replicate.
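The inflation caused by uncorrected multiple comparisons can be demonstrated with a minimal simulation (a sketch, not taken from the paper; it assumes only the standard result that, under a true null hypothesis, a well-calibrated p-value is uniformly distributed on [0, 1]):

```python
import random

random.seed(1)

ALPHA = 0.05
N_TESTS = 20          # uncorrected tests run within one 'study'
N_STUDIES = 100_000   # simulated studies, none with any true effect

# Simulate each 'study' as N_TESTS independent null p-values and
# record whether at least one falls below ALPHA (a false positive
# that a selective report could present as a 'significant finding').
false_positive_studies = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(N_STUDIES)
)
rate = false_positive_studies / N_STUDIES

# Analytically, P(at least one p < ALPHA) = 1 - (1 - ALPHA) ** N_TESTS,
# about 0.64 for 20 tests, even though every null hypothesis is true.
expected = 1 - (1 - ALPHA) ** N_TESTS

print(f"simulated family-wise error rate: {rate:.3f}")
print(f"analytic family-wise error rate: {expected:.3f}")
```

With 20 uncorrected tests, roughly two-thirds of such studies yield at least one nominally 'significant' result by chance alone, which is why reporting only the tests that 'worked' cannot be trusted to replicate.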
Within the qualitative world, confusing exploratory and confirmatory research is equally problematic. The qualitative analogue of p-hacking/HARKing is the narrative fallacy, in which post-hoc explanations that sound plausible are arrived at after examining the data. Such accounts should only ever be presented as hypotheses/conjectures, to be tested out further, separately, with new data from new participants. Sometimes this distinction is blurred, with researchers perhaps reporting in the 'Method' section that their study does not seek to generalize from their sample, but then in the 'Discussion' section giving themselves licence to make wild, unsupported generalizations. These are issues for both qualitative and quantitative research, and both styles of research would benefit from greater clarity regarding the exploratory-(dis)confirmatory distinction (Figure 1). Exploratory research, of whatever style, is always provisional until confirmatory research has either backed it up or challenged it.

Possible objections
In this Section, we briefly consider three possible objections that might be made to the argument for methodological pragmatism presented in this paper. The points below build on the discussion above.

An impoverished epistemology
Pragmatism, in all its flavours, has been attacked for being intellectually 'light' (e.g. see Madden 1980; Slaney 2015), and methodological pragmatism may appear to be attempting to be atheoretical and to sweep inescapable philosophical issues under the carpet. Without adequate attention to theory of knowledge, how can we know whether any of this 'pragmatic' research is valid? Methodological pragmatism, in its apparently inclusive acceptance of all methods, does not seem to provide the intellectual machinery needed to assess or criticize any method.
In response to this, we might say that, on the contrary, methodological pragmatism provides the most severe level of criticality possible: the pragmatist's ultimate test for any method is whether it 'works', here in the sense of being able to answer real questions posed by real researchers in practice. It is not so much that methodological pragmatism sees all methods as equally valid; it is that it does not allow prior reservations, based on crude categorisations of methods, to exert an influence. As long as a method is experienced as useful in achieving a particular set of research goals, then it justifies its presence. Within methodological pragmatism, a method can suffer the most robust criticism, but only on the grounds that it does not adequately contribute to answering a research question. This would appear to place the burden on methods exactly where it properly belongs.

A short-termist orientation
A pragmatic concern for 'results' could be seen to prioritize short-term gains over longer-term goals by addressing immediate research problems at the expense of the bigger picture. In this way, it might limit creativity and innovation, preferring to depend on practical solutions that have been shown to work in the past, rather than exploring novel, innovative methods that might bring fresh benefits in the future.
In response, we might say that short-termism would seem to be a danger with all research, not just that conducted from a methodological pragmatist perspective. Any researcher can become comfortable in what they know and feel disinclined to consider options outside of their 'bubble'. However, this danger should be at its lowest within methodological pragmatism, given the imperative within this approach to be as inclusive as possible in considering all existing (and potential) research methods from all possible traditions. Any narrowness of focusing on immediate issues at the expense of broader concerns is perhaps most likely to enter in the planning and devising of research questions, rather than in the selection of suitable methods to address them, and in that case methodological pragmatism does not seem to be to blame.

Ethical questions
The most serious of the three concerns considered in this Section relates to ethics, and the responsible researcher should certainly ensure that any potential ethical concerns with their research approaches attract their fullest attention. A critic might argue that the openness of methodological pragmatism to all possible methods is ethically suspect. A principled researcher, who eschews certain methods on personal or ideological grounds, may be seen as holding themselves to a higher ethical standard than the methodological pragmatist, who seemingly cares only about 'getting results', regardless of the consequences.
In response, we note that, clearly, ethical vigilance must pervade all aspects of conducting any educational research (BERA 2018). It is important to acknowledge that one aspect of this is making productive use of available resources, including research funding, and especially the time and energies of all of the participants involved. Methodological pragmatism encourages us to ask hard questions about whether the chosen methods are worth everyone's time, effort and money, in terms of what they stand to reveal about the situation. If alternative methods could gain the same or better information at lower 'cost' (interpreted broadly), then they should be preferred on ethical, as well as pragmatic, grounds.
The pragmatist's quest for 'utility' requires us to consider what is valued, and 'the findings' are only one aspect of a research study. The study's effects (both long-term and short-term) on participants and researchers are of central importance, and a method cannot be regarded as pragmatically useful if it risks a negative impact on those involved. Methods which risk harming anyone are not compatible with methodological pragmatism if our desired outcomes include, as they must, the wellbeing of all concerned.

Conclusion
This paper has argued that the qualitative-quantitative distinction creates unnecessary divisions among researchers in education who have otherwise similar goals, and inhibits intra- and inter-disciplinary collaboration. Collaboration is a value held strongly by many educational researchers, and it seems something of a contradiction that our field continues to be so divided in this way. Researchers committed to opposing, incommensurable paradigms operate from within different silos, even when their substantive interests may seem to be extremely close (Wallace and Kuo 2020). Qualitative and quantitative researchers focused on the same topic might nonetheless read quite distinct literatures and conduct studies that fail to speak to each other or take advantage of all of the insights gained. Gorard and Taylor (2004, 149) described this separation as 'divisive and conservative in nature' and unlikely to lead to best progress in research and practice. Discarding half of the research methods at one's disposal, or half of one's potential colleagues and collaborators, seems extremely unwise, and a disservice to the field.
Although there are certainly divisions within both qualitative and quantitative research, polarization into differing, at times even 'opposing', methodological camps is likely to entrench misapprehensions on both sides about what characterizes the 'other', which can then become mutually self-reinforcing. In this way, a polarized division, once created, can easily become self-sustaining. If 'qualitative' researchers derive their understanding of what 'quantitative' researchers do and believe largely from other 'qualitative' researchers, with the reverse being true for 'quantitative' researchers, then it is hard to see how the situation will improve (Lund 2005).
Having worked across the sciences and the social sciences, Chomsky (1979) described how he experienced being judged within mathematics, not on the basis of his credentials and academic background, but purely on the basis of the value of what he had to say. In contrast, he found that within the social sciences he was constantly challenged about his right to speak on such matters. He noted: 'In mathematics, in physics, people are concerned with what you say, not with your certification. But in order to speak about social reality, you must have the proper credentials, particularly if you depart from the accepted framework of thinking. Generally speaking, it seems fair to say that the richer the intellectual substance of a field, the less there is a concern for credentials, and the greater is concern for content' (6-7). Judging the content rather than the person would seem to be an inclusive and equitable stance with everything to commend it, and might enable educational researchers to be more accepting of other researchers' 'right' to draw pragmatically on any method that they consider of possible use. No one should have to say whether they are 'qualitative' or 'quantitative' in order to get a hearing. Research methods that have been developed should be seen as the property of the entire community, and research-capacity building should aim to lead all researchers into that shared inheritance (Rees, Baron, Boyask, and Taylor 2007).
The essential skills of doing research well and reading research intelligently are not specific to either qualitative or quantitative research styles, and each has much to learn from the other. Along with other unhelpful and simplistic binaries/dichotomies (e.g. objective/subjective, positivist/interpretivist, inductive/deductive, science/social science), the qualitative-quantitative distinction has outlived any usefulness that it may once have had. Instead, researchers should be liberated to draw pragmatically on any methods and research styles that will progress their research, so as to make educational research more robust and useful to as many people as possible (Alvesson, Gabriel, and Paulsen 2017).