Lynda Cheshire
Associate Professor in Sociology
School of Social Science, The University of Queensland
Email: [email protected]
Keynote address at the Australian Consortium of Social and Political Research (ACSPRI) annual
conference, The University of Sydney, 17th-20th July 2016.
Introduction
I would like to thank the team from ACSPRI for the kind invitation to speak here today. When
Betsy first contacted me about giving a keynote address, the brief was to have a past, present and
future orientation in social science methods. As a qualitative researcher, my task was to consider
developments in qualitative research with an eye to the future. In initially thinking about this
brief, I considered some of the key trends we are witnessing in qualitative research with a view
to reflecting upon where qualitative research sits in relation to them and how they inform the
way we undertake our practice as qualitative researchers. I identified three immediately. First,
the emergence of the ‘big data’ era; second, the debates around data archiving and sharing and the responses of qualitative researchers to these new practices; and third, new technological
innovations in the collection and analysis of qualitative data, notably the use of online sources.
But when I reflected and read further, I realised that what I was focussing on here was qualitative
data, not qualitative inquiry or qualitative research. And I also reflected that focussing on
qualitative data rather than qualitative inquiry was problematic for three reasons. The first is that
it creates the impression that research methods and data can be deployed, generated and analysed in isolation from the theoretical and philosophical context in which they are embedded, and are thus neutral and objective rather than socially and politically constituted. I return to this
point later in the presentation. Second is that the new kinds of data and analysis that we are
currently getting excited about are only rendered problematic when we think about them in the
context of broader issues of methodology, rather than straightforward method. If we subject
qualitative research to critique on its own terms – most specifically its hermeneutic foundations
of understanding and interpretation – we begin to see the flaws and inconsistencies in the way
qualitative methods are designed and deployed, particularly within a research culture dominated
by big data, metrics and the scientific method. And third, that when one starts thinking about
these issues, it is the status and future of qualitative inquiry, rather than the qualitative method,
which is more intricate, complex and interesting.
What I plan to do in this address today is to spend a little time examining current debates within
qualitative research, particularly as they relate to the contemporary research climate that
celebrates big data and privileges the measurement of research quality and outcomes through
metrics that arguably separate research methods from research methodology and diminish the
contribution of qualitative research as a humanistic endeavour. In order to understand the
salience of these concerns, I want to locate them within the longstanding awkward relationship
that qualitative research has with its rejection of positivism on the one hand, and its retention of
the ‘structuring concepts’ (St Pierre, 2014) and processes of positivist scientific inquiry on the
other. For postmodern feminist scholars, the only way out of this is a future that is anti-
methodological or post-qualitative, but for those of us more comfortable with our role as social
scientists, even while we strive to remain reflexive and critical of our practice, there are ways we
can engage with big data scientists and demonstrate the importance of our work without letting
go of either our values or our practices.
Preparing for this address also forced me to think once more about what I mean by the term
qualitative research. The position I take with qualitative research is that it cannot be defined
solely by the methods through which data are generated – traditionally the interview,
observation, focus group, diaries and documents but also, and increasingly, through online
versions of these techniques as well as new methods such as blogging, cyber-ethnography and
online content analysis. Nor does it refer to the kinds of data generated through these methods,
which are qualitative in form – textual data in the form of interview transcripts, field diaries,
document or online posts; or visual data in the form of images, video, objects, the built
environment and so on. Rather, what defines qualitative research are the philosophical
assumptions that underlie it and its provenance as the product of a rejection of the scientific
method for the study of society in favour of a more humanistic approach which emphasises the
actors’ perspective (Alasuutari, 2010).
Qualitative research can be understood in terms of the properties of the social world that we are
interested in – the meanings, experiences and perspectives of the people under study – which we
understand to be socially constructed within particular cultural, political, social, economic and
historic contexts. Because of the requirement to see through the eyes of the other, a degree of
closeness to or immersion in the research site is advocated so that behaviour is understood in the
context of the meaning systems through which it takes place. Qualitative research also draws on
an interpretivist epistemology, rejecting claims that the social scientist can properly know and
explain how the world works as if there is a single, unitary world that we can grasp with careful
analysis. Instead it recognises that the best we can do is to interpret or understand the actions
and perspectives of different social actors. And since we can never know how they really
think or feel, nor step outside the social world to study it as a scientist might study an organism
under a microscope, our understanding is always partial, contingent and influenced by our own
subjectivities.
In providing an historical overview of qualitative research, Brinkmann et al. (2014) identify three
paradigms that have been most influential to its development. The first is hermeneutics and the
art of interpretation. While the natural sciences can explain nature, the social world can only be
understood. Second is phenomenology and the goal of understanding phenomena as they are
experienced by thinking and acting human beings. Employing the technique of reduction –
meaning ‘bracketing’ out any rationality or scientific foundation to the phenomena in question –
phenomenology gives us the tools to understand the actor’s perspective of a given phenomenon,
regardless of whether that perspective is logical, ethical, legitimate or rational. The final set of
influences come from microsociology, which emanates from the Chicago School of Sociology and
its pursuit of sociology and the social sciences as an empirical rather than scholarly endeavour.
In the tradition of Garfinkel, Goffman and others, this branch of qualitative inquiry argues that
the primordial object of social inquiry should be what people do (rather than what they say or
indeed what we think they do) and the meanings they give to their everyday lives through
interaction with others. While qualitative research is a diverse form of social inquiry containing
many theories and approaches, it is these three strands of humanistic inquiry that help define
what we mean by qualitative research.
What we are all generally aware of is the way qualitative research stands in relation to
quantitative research. As undergraduate students, we were most likely subjected to the
dichotomies of objectivism versus constructivism, and positivism versus interpretivism. We
would also have had the odd lecture on critical social science, feminism and poststructuralism,
although these paradigms remain less dominant in methods teaching. For qualitative
researchers, positivism has long been a pejorative term – one that almost causes us to sneer as
we say it. Yet the distinction between quantitative and qualitative approaches appears more
important to qualitative researchers than it does to quantitative ones, primarily – I would argue –
because the relationship between interpretivist humanistic inquiry and the hegemony of the
natural scientific method has been, and continues to be, problematic for qualitative researchers.
On the one hand, there has been an explicit rejection among qualitative researchers of what
Bryman (1984) calls the ‘paraphernalia’ of positivism and its preoccupation with truth through
objectivity, detachment, replicability, generalizability and causality. On the other hand,
qualitative researchers have not only had to contend with internal debates about how the quality
of qualitative research can be asserted in the absence of these (albeit inappropriate) criteria, but
they have also felt obliged to defend and justify the importance of their work to funding agencies
and policymakers who have a clear preference for measurement, causality, large populations and
statistical significance.
The rather problematic relationship between positivism and qualitative research is reflected in
Patti Lather’s account of the various ‘turns’ that have taken place within qualitative research in
a paper entitled Methodology 21: what do we do in the afterward? (2013). She captures this
movement with what she describes as 1.0, 2.0 and 3.0 methodologies.
According to Lather, Qual 1.0 is the conventional interpretivist inquiry that emerged from
sociology and cultural anthropology with its emphasis on the ‘humanistic subject who has an
authentic voice, transparent descriptions of lived experience, and the generally untroubled belief
that better methods and richer descriptions can get closer to the truth’ (2013: 635).
In Qual 2.0, she argues, qualitative research begins to acknowledge messy texts; the presence of
multiple voices, some of which are being subjugated by those with more power; reflexivity; and
discourse. Nevertheless, it ‘remains within the humanistic enclosure, grounded in humanist concepts of language, reality, knowledge, power [and] truth …’.
Qual 2.0 is epitomised by the precarious relationship between science and humanism that I
alluded to earlier. Elizabeth St Pierre (2014), for example, points to the way qualitative research claims to be interpretivist or humanist in its epistemology, but then retains positivist structuring concepts such as ‘bias’, ‘data coding’ and ‘triangulation’ even while introducing phenomenological concepts like ‘voice’, ‘lived experience’ and ‘authenticity’ (p.6). The effect, she argues, is that qualitative inquiry ends up being governed by a set of quasi-scientific rules even while claiming an interpretivist stance.
These rules of qualitative research basically revolve around three dualisms, as Schwandt (1991)
described in his account of attempts to deal with ‘unresolved tensions’ in qualitative research.
The first is an attempt to maintain the opposition of subjectivity and objectivity, which
researchers seek to achieve by acknowledging the world is made up of subjective meaning but
believing they can disengage from that world and study it in an objective manner. Second, and
related to this, are attempts to sustain a separation between the object of investigation and the
investigator as if the researcher can stand outside the world of which he or she is part to conduct
an objective study of it. Finally, there is the distinction between facts and values and the
erroneous belief that researchers can investigate meanings and values, but do so in a value free
manner.
One of the most profound manifestations of this approach is the set of ‘Basic Rules for “Scientific”
Qualitative Research’ produced by King, Keohane and Verba in 1994. They argued that qualitative
researchers need to approach their work in a more rigorous and scientific manner, which basically means adhering as closely as possible to the conventions of quantitative research. They
suggest that qualitative research shares the same ‘logic of inference’ as quantitative research and
they set out a series of rules for rigorous scientific qualitative research:
1. Construct falsifiable theories (choose theories that could be wrong then test them)
2. Build theories that are internally consistent (if two hypotheses in your theory contradict one
another, it can’t be upheld)
3. Dependent variables should be dependent
4. Do not select observations based on the dependent variable in such a way that the dependent variable is constant (i.e. does not vary)
5. Maximise concreteness (avoid unobservable concepts)
6. To make theory falsifiable, choose one that has as many observable implications as possible
7. To better evaluate theory, collect data on as many observable implications as possible
8. The more evidence we can find in varied contexts, the more powerful our explanation
becomes
9. State theories that are as encompassing as possible
10. All data and analyses should, insofar as possible, be replicable.
King, Keohane and Verba (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research.
This ‘qualitative template’ has been countered by researchers who call out this ‘“quantitative
imperialism” for what it is’ (Lather, 2013: 636). Aside from the fact that this advice is difficult to
comprehend for most qualitative researchers who describe their work using a different language,
it encapsulates the major issues associated with assessing the quality of qualitative research
according to quantitative criteria. These problems arise from the double hermeneutic in qualitative research: if researchers are a part of the social world that they study, and if meanings about that world are socially constructed and contextually specific – as interpretivist social scientists claim – then how is it possible for a researcher to stand outside of that world and study it objectively? What enables a researcher to transcend the
subjectivity that we attribute to research participants – and indeed continue to emphasise in our
work? Such claims of ‘epistemic sovereignty’ (Rouse, 1994: 103) amount to what Donna Haraway
terms ‘God tricks’ of seeing everywhere from nowhere (1988: 581).
The challenge facing qualitative researchers, then, becomes this: we can reject scientific criteria
for assessing the quality of qualitative research because they are inconsistent with our
philosophical stance. But if we do that, how can we demonstrate the methodological rigor of our
work? Is it important that we do so? As Denzin and Lincoln pointed out more than 20 years ago
(1994), these dilemmas have created something of a legitimacy crisis in qualitative research. If
we cannot distinguish between good and bad qualitative research, we risk all
qualitative research being labelled fictitious, subjective, false, theoretically and methodologically
weak, ‘soft’ and engaged in navel gazing.
I encounter this a lot in students’ work: a student completes a wonderful thesis and then undermines the whole value of what they have just done by arguing that their work is not generalisable because it is based on too small a population and/or only one – often atypical – case study. I
have provided some examples below, but slightly changed some of the wording to preserve the
identity of the authors:
… that the research was only undertaken in Sydney limits the generalisability of the findings
to Sydney, and not the rest of Australia, but further research could be conducted in other
states in the future.
The ‘uniqueness and non-typicality’ of the three case study sites, along with the qualitative
methodology, limit the application of the research findings. To address this, I recommend
the use of survey research in the future to generate research data that has broader
applicability.
I think this is a misunderstanding of what qualitative researchers have been saying about the
limits to knowing in their work and one only needs to read a good qualitative methods textbook
to see that there are clear guidelines for arguing the importance and contribution of a qualitative
study beyond the individual case. However, it illustrates the ambivalence that still exists among
qualitative researchers about where their work sits in relation to the science of quantitative
research and about the appropriate indicators of research quality for qualitative research. This is
something I return to later.
So we turn now to more recent developments in qualitative research, which Lather positions as
Qual 3.0. Qual 3.0 is an attempt to deal with the methodological tensions in qualitative research
using poststructuralist theories. First off, it rejects outright what poststructuralist researchers
see as positivist indicators of quality. The most coherent articulation of this new position was The
Handbook of Qualitative Research, first published in 1994. Through this collection, as a
Doctoral student, I learnt to deconstruct taken for granted qualitative tools and concepts on the
basis of their problematic assumptions about the fixity of the world and the ability to know it,
even while they were viewed as consistent with an interpretivist paradigm. From Laurel
Richardson, I learnt that triangulation assumes there is a fixed point of reality at which all
perspectives meet. And that the crystal is a much better metaphor than the triangle for conceiving
how different methods simply give us different perspectives that, in Richardson’s words, ‘reflect
externalities and refract within themselves, creating different colours, patterns, arrays, casting
off in different directions’ (1994: 522). I also learnt from Richardson that writing is not a neutral
process of recording the facts, but that our writing contains complex and hidden political and
ideological agendas. Even the concept of ‘field site’ becomes a complex terrain that cannot exist,
nor be presented, as an objective space that can be defined, entered, and written about as if it existed independently of the researcher. And transcription has also been revealed to be a ‘critical step in the social production of scientific knowledge’ rather than an accurate record of the interaction that took place.
More than this, I also learnt how power is bound up in subjectivity and language, such that
researchers who claim it is possible to stand above disputes of competing truths – in a position of sovereignty over what is being observed – are suppressing alternative forms of knowledge by investing their own interpretation with the privilege of truth and power that is traditionally attributed to the natural sciences (Rouse, 1994: 103). Foucault (1994: 43) expressed this
beautifully in his critique of Western Marxism when he asked:
What types of knowledge do you want to disqualify in the very instant of your demand: “is
it a science”? Which speaking, discoursing subjects - which subjects of experience and
knowledge - do you then want to “diminish” when you say: “I who conduct this discourse
am conducting a scientific discourse, and I am a scientist”? … When I see you straining to
establish the scientificity of Marxism … you are investing Marxist discourses and those who
uphold them with the effects of a power which the West, since Medieval times has
attributed to science and has reserved for those engaged in scientific discourse (1994: 43).
For Lather, the development of this postmodern agenda for qualitative research stalled as
qualitative researchers moved to defend their work and the theories underpinning it in the wake of tighter funding regimes; an increased reliance upon metrics as a driver of individual and university performance; an audit culture among universities and funding agencies that increases accountability for publicly funded research money; the requirement that research should have a
measurable policy outcome or ‘impact’; and the fanfare over big data (Duberley, 2015). I want to
look at two of these trends and their effects upon qualitative research in turn.
There is no disputing the tightening climate that researchers work in, the demands placed upon
us, or the very real impact that this has on the way we conduct our research. For example, we
have experienced the former Minister Jamie Briggs, Chair of the beautifully entitled ‘Scrutiny of Government Waste Committee’, threatening to audit – and I quote – ‘increasingly ridiculous research grants and reprioritising funding through the ARC to deliver funds where they are really needed’ (Briggs, 2013). Briggs identified a number of projects that, in his opinion, ‘do little, if
anything, to advance Australia’s research needs’. We have also seen attempts by the ARC to
impose a journal ranking system that assigns a grade to journals based on their perceived quality.
The ERA ranking system was withdrawn after scholars complained about its ‘positivistic logic’ (St
Pierre, 2012: 484). However, the University of Queensland continues to use a journal ranking system of its own to assess staff publishing performance, although no one seems able to gain access to either the full list or the logic behind the assessment of individual journals.
In addition, UQ has introduced the ‘q-index’, which reduces our scholarly work to a single number based on a complex formula that assigns a value to our grants, publications and Research Higher Degree supervisions and completions. With research output points, we are awarded extra marks if the journal is ranked ERA A*, even though ERA rankings no longer exist. So, for example, a three-authored paper in the A*-ranked Journal of Rural Studies gives me one research output point, but a similar three-authored paper in the journal Mobilities – ranked C despite being a highly esteemed journal – gives me 0.17 points. The message for anyone needing to care about
this for their promotion prospects is this: only target A* journals.
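To make the arithmetic concrete, here is a minimal sketch, in Python, of how a weighted research output score of this kind might be computed. The journal weights and the function name are purely illustrative assumptions, chosen only so that the two figures above (1.0 and 0.17 for three-authored papers) fall out; this is not UQ’s actual q-index formula, which is not set out here.

    # Hypothetical illustration only: weights chosen so that a three-authored A* paper
    # yields 1.0 points and a three-authored C-ranked paper roughly 0.17 points.
    JOURNAL_WEIGHTS = {"A*": 3.0, "A": 2.0, "B": 1.0, "C": 0.5}

    def output_points(journal_rank: str, n_authors: int) -> float:
        """Per-author share of a paper's (assumed) journal-rank weight."""
        return JOURNAL_WEIGHTS[journal_rank] / n_authors

    print(round(output_points("A*", 3), 2))  # 1.0  (e.g. Journal of Rural Studies)
    print(round(output_points("C", 3), 2))   # 0.17 (e.g. Mobilities)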
Qualitative researchers have expressed concern about the particular challenges this places on
their research as they are driven by institutionalised norms to secure publications in prestigious
journals, many of which often do not accept qualitative papers. In an article reflecting on the
future of qualitative research, British scholar Joanne Duberley says this has led to an increase in
the ‘standardisation’ of qualitative research and the formulation of scientific criteria for
evaluating its quality which tend to adhere to post-positive frameworks in order to make
qualitative research ‘acceptable to those with more quantitative leanings’ (2015: 340).
Nancarrow and colleagues (2005) go further to suggest that qualitative research is being
‘McDonaldized’, while Donnelly et al. (2013) believe it has been ‘sanitised’. Lather (2013: 636)
speaks of the ‘disciplining of qualitative research’, somewhat problematically – we might argue –
through new technologies of the self that induce qualitative researchers to voluntarily adhere to
scientific conventions of linearity, deductivity and objectivity in the desire to maintain legitimacy
for their work. Such a practice, Pratt (2009) argues, ‘serves to obscure the fluid and emergent
character of qualitative writing, creating what he sees as “the worst of all worlds”’ (cited in
Donnelly et al. 2013: 7).
My own position is less pessimistic, or at least less selective about the effects of the neoliberal
university on everyday research practices. While there certainly are some prestigious journals
that rarely accept papers based on qualitative work, they are equally unlikely to publish work
based on smaller scale quantitative studies, preferring instead large-scale and longitudinal
datasets and highly sophisticated analytic techniques that require high-level statistical skills.
Most good journals regularly publish qualitative work, providing it tells an important story and,
from experience, I do not believe that qualitative research does any worse with ARC funding than
other approaches, although others may have a different view on this. But I do agree that
qualitative research is taken most seriously when it presents itself as a scientific method,
including as part of a mixed methods approach; and that the rise of large datasets – or big data –
has implications for qualitative research, many of which are positive and exciting. I turn to this
second point below.
This brings me to one of the main themes of today’s presentation. In thinking about the future of
qualitative research one cannot look past the emergence of big data as the new ‘big thing’ in
research. Just to ensure we are all on the same page, let us be clear on what is meant by big data.
boyd and Crawford (2012: 663) define big data as a ‘cultural, technological and scholarly
phenomenon’ that rests on the interplay of three things. The first is technology: ‘maximising
computational power and algorithmic accuracy to gather, analyze, link, and compare large data
sets’ (ibid). These datasets are huge in volume, consisting now of terabytes or petabytes of data.
The second is the use of new, and increasingly sophisticated, techniques in data analysis. As boyd
and Crawford (2012) note, the main point of big data is not the size of the dataset since
researchers have long been working with large volumes of data such as census data. Rather, the newness of big data comes from the evolving capacity of researchers to ‘search, aggregate
and cross reference large datasets’ (boyd and Crawford, 2012: 663). Finally, they point to the
mythology surrounding big datasets: ‘the widespread belief that large datasets offer a higher form
of intelligence and knowledge that can generate insights that were previously impossible, and
which are imbued with an aura of truth, objectivity and accuracy’ (ibid).
Big data in the social sciences typically takes the form of social media interactions such as
Facebook records, Twitter feeds, YouTube or Expedia posts and comments; health records and
other government and administrative data on clients and users; digital traces left by people, such
as smart phones that record the history of their use or data that tracks navigation through
websites. Market researchers are making use of databases of credit card sales and other
information to make predictions about your purchasing habits and target their marketing
strategies. While big data has been embraced enthusiastically by many researchers, qualitative
analysts are more circumspect about its ability to answer the kinds of questions traditionally posed
by interpretive social scientists and sceptical of the power imbued in the truth claims that emerge
from big data, seemingly without the contamination of researcher interventions or agendas.
Let us turn first to the challenges posed by big data to the way we conduct qualitative research.
According to critical analysts, there is plenty to be concerned about. The first concern revolves
around the mythology of big data that boyd and Crawford speak of and the belief that large data
and computationally derived correlations can speak for themselves devoid of human bias and
subjectivity (Kitchin, 2013). There is an assumption that the data generated and the correlations
between them are inherently meaningful; that we no longer need a priori theories or hypotheses
to drive our analysis. Summing up this position, Kitchin suggests that big data encourages naïve empiricism and pseudo-positivism (Kitchin, 2013). Such concerns have not been helped by
provocative writings heralding the so-called end of theory, as outlined in the following excerpt
from Chris Anderson, then Editor of the magazine Wired:
This is a world where massive amounts of data and applied mathematics replace every
other tool that might be brought to bear. Out with every theory of human behavior, from
linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why
people do what they do? The point is they do it, and we can track and measure it with
unprecedented fidelity. With enough data, the numbers speak for themselves (Anderson,
2008).
boyd and Crawford take issue with what they see as Anderson’s ‘sweeping dismissal of all …
theories and disciplines’ (2012: 666). They say:
… it reveals an arrogant undercurrent in many big data debates where other forms of
analysis are too easily sidelined. Methods for ascertaining why people do things, write
things, or make things are lost in the sheer volume of numbers. This is not a space that
has been welcoming to older forms of intellectual craft’ (2012: 666).
Qualitative researchers complain that big data has a positivist bias by virtue of the statistical techniques often deployed to manage large datasets. This means that even greater value is placed upon quantitative data while qualitative data are increasingly devalued. Further, when qualitative
research is considered within the big data project, it is often decoupled from its disciplinary roots
in the way Anderson suggests, such that there is a separation of method from methodology. As
Smith (2014) observes, the result is that methods such as interviews and field observation are
taken out of the interpretive context in which they are embedded and given the appearance of
being neutral tools that can be adopted in any context for any purpose. It is only by paying
attention to the way methods are informed by particular paradigms or traditions, Smith argues,
that ‘deeper and less readily avoided difficulties’ become more apparent (p.190).
Smith also points out that big data is analytically distant from what ‘actual people actually do’
(2014: 191) and the meanings they give to their actions. In other words, he suggests, ‘precisely
that which is social in the first instance – the practice, rather than the aggregate – is missed or
lost’ (Smith, 2014: 191). What we get a picture of is the product or outcome of social action, but
not the work that goes into producing it (ten Have, 2004). There is a worry, then, that humanist
conceptions of society are being eclipsed and that the expertise of the social scientist as
interpreter of human action and human knowledge is being devalued (Smith, 2014). Added to
this, the hype over big data can marginalise small data studies: a risk that applies particularly to
qualitative research with its focus on richness and depth rather than scale and breadth. And
finally, the marginalisation of qualitative research within the big data agenda – and the critique
and rejection of big data by the qualitative community – runs the risk of solidifying the
quantitative-qualitative divide that we have all worked to overcome. As boyd and Crawford put
it (2012: 669): ‘Big data risk re-inscribing established divisions in … long-running debates about
scientific method and the legitimacy of social science and humanistic inquiry’.
Nevertheless, there are opportunities for qualitative researchers to engage with big data and to
make an important contribution on their own terms, either through harnessing big data to their
own research agendas or by highlighting the benefits of ‘small data’ that are complementary to
the patterns observed. To begin with, big data contains considerable qualitative material that can
be utilised by qualitative researchers to answer questions about the routines, practices, patterns
and discourses of social action. Through social media such as Facebook and online fora, we can
learn about the way relationships with others and self are reflexively formed and practiced in
new ways, as Smith (2015) revealed in a recent PhD thesis. Through administrative records, we
can gain access to naturally occurring or user-generated data which is unmediated by the
interventions of the researcher. We get to see the work done in the production of data by
unreflexive practitioners who often have no expectation that their words and practices will be
scrutinised by a researcher. There is a richness to the data in terms of the insights gained about the categories and concepts used by administrators to describe the phenomena and the people they work with – insights which may not otherwise be revealed to researchers through an interview.
Of course, there are ethical dilemmas that arise from the use of administrative records containing
personal information about clients and the unsolicited observations of administrative staff. This
is particularly salient with qualitative research given that it does not aggregate records in the way
quantitative research does, but maintains the detail and complexity of individual stories. There is
also the issue of gaining access to such data given the ethical sensitivities involved, and also the
fact that these datasets require significant cleaning in order to be suitable for sharing with a
researcher. In my own work, I spent more than three years negotiating access to data from the
Queensland Government’s Dispute Resolution Branch (DRB) from which we now have 15 years
of recorded neighbour disputes written up by DRB staff as they hear the accounts provided by
disputing neighbours. The dataset had to be de-identified by DRB staff before it could be handed
over because of privacy requirements, and this was an extremely lengthy process which DRB only
considered worthwhile because of the potential insights they might obtain from letting a research
team interrogate their material. There is also the challenge of analysing large volumes of
qualitative data which, while assisted by computer software such as NVivo, remains an interpretive process that cannot be conducted by computational algorithms.
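As a purely hypothetical illustration of what a de-identification step involves – this is not the DRB’s actual procedure, and the record fields and names below are invented – a crude first pass over a free-text dispute record might look something like the following sketch in Python. Real de-identification must also handle addresses, dates and indirect identifiers, which is partly why the process was so lengthy.

    import re

    # Illustrative only: naively redact the named parties from a dispute narrative.
    def redact(record: dict) -> dict:
        text = record["narrative"]
        for name in record["parties"]:  # names of the disputing neighbours
            text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
        return {"case_id": record["case_id"], "narrative": text}

    example = {
        "case_id": "C-001",
        "parties": ["Jane Citizen", "John Neighbour"],
        "narrative": "Jane Citizen says John Neighbour's dog barks at night.",
    }
    print(redact(example)["narrative"])
    # -> "[REDACTED] says [REDACTED]'s dog barks at night."

Anything more realistic would, of course, still require manual checking, which is one reason the detail and complexity of such records make sharing them ethically sensitive.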
Smith (2014) says that the greatest contribution that can be made by qualitative researchers to
the interrogation of big data is in its ‘attention to the practices, routines and activities in and
through which ‘patterns’, ‘traces’ and ‘correlations’ (for example) are brought into being, in actual
settings by actual people’s actions’ (p.183). The value of qualitative research, then, is not so much
the way it can be brought in as a handmaiden to big data analysis, but rather what is distinct about
it. Smith’s approach is heavily informed by the ethnomethodological tradition and he identifies
two examples of the sorts of contributions that qualitative researchers might make. The first
would be studies of the actual activities of actual people working with big data – the everyday
practices of air traffic controllers or CCTV operators, focusing on their professions and the
situational nature of how they see, do and talk about their activities. Alternatively, researchers
might ‘follow the data’ by observing and describing what is known, and constructed as knowable,
about a population and how those data are created, used and acted upon.
Further, qualitative research can complement big data by giving rich detail and context to the
quantitative findings by placing social relations and social action within their specific historical
and cultural contexts. Tricia Wang (2013), in a blog on big data and ethnography, draws on Clifford Geertz’s notion of thick description to describe ‘thick data’ – qualitative data that can complement big data by capturing the context that renders individual events and actions
meaningful. She argues that:
… thick data analysis primarily relies on human brain power to process a small ‘N’ while
big data analysis requires computational power … to process a large ‘N’. Big data reveals
insights with a particular range of data points, while thick data reveals the social context
of, and connections between, data points. Big data delivers numbers; thick data delivers
stories (Wang, 2013).
Qual 4.0 – Post-qualitative Research
I said at the beginning of this presentation that Patti Lather identified moments in the
development of qualitative research, and we got as far as 3.0: the acknowledgement of a
constructivist ontology, but a slippage away from interpretivism as we were pulled in the
direction of big datasets, metrics and evidence-based policy. In these closing minutes of my
presentation, I want to draw attention to Lather’s reflections on what she terms Qual 4.0 and add
a few of my own about where qualitative research might be headed next.
For Lather, a postmodern feminist scholar, the future is once again postmodern as qualitative
researchers grow tired of defending their art and work towards the production of different forms
of knowledge in ways that are consistent with a decentred ontology. For Lather, this involves an
explicit rejection of the scientific method in qualitative research and a new form of post-qualitative inquiry: post-method. Elizabeth St Pierre argues that the task of ‘post’ critiques is to
reject the structures and narratives, such as those of science, that fix, totalise and exclude, and ‘to
examine them so seriously that they deconstruct themselves, reveal their disciplinary goals and
lose their innocence’ (2012: 496-7).
The question this raises for me is this: how do we know we are doing science if we reject scientific
indicators to measure the quality of our work? And what makes it research if we abandon the
procedural logics of qualitative research, such as research questions, research design and data
collection? How do we feel about letting go of the science of social science inquiry? And what
distinguishes our research from literature or fiction if we drop the conventions of research from
our work? Does it matter? Depending on how you answer these questions, there are three
possible directions researchers might take in reference to this post-methodological approach to
qualitative research.
The first is the deployment of methods that embrace subjectivity, reject linearity of research
design and process, and emphasise lived experience. These include non-traditional techniques
such as dance, poetry, drama and autoethnography that are rarely, if ever, found in qualitative
textbooks, but which explicitly embrace the absence of science in the quest to convey the actor’s
point of view, including that of the researcher. During conference sessions here at ACSPRI
yesterday, I learnt of new approaches to research, such as written portraiture and dramatic
tableaux that have their origins within the arts rather than the social sciences. But the researchers
also spoke of the fear and judgement they encountered among academic colleagues in response to the unconventionality of their methods; of a sense of isolation in being the only ones working with these methods; and of the absence of a textbook that clearly explicated the methods and analysis. ‘Tell me what to do’, one speaker had urged her advisory team.
A second, rather more conventional, approach involves the adoption of a more political, activist
stance in research, emphasising ethical values of care, community and social justice (Brinkmann
et al. 2014). This harks back to a more critical social science informed by Marx’s question of why
we should be satisfied with understanding the world instead of trying to change it. Critical social
science is critical of interpretivism because it believes that understanding and explaining the
world is not enough. Instead, it argues that we have an ethical obligation to speak out and act
when we uncover unequal power structures and to emancipate marginalised groups from
oppression (St Pierre, 2012). We are accountable for what we see and learn, and the values of the
research are not shrouded from view but drive the research process. Alongside this is an ethical
commitment to work collaboratively with those we study through forms of action research that
avoid the appropriation of others’ voices for the professional gain of the researcher and position
research participants as co-investigators. ‘Nothing about us without us’ is the cry of those who
have traditionally been objectified through our work and now wish to regain control over what
is said about and done to them.
Finally, for those who appreciate and understand the positivist dilemma in qualitative research,
but still want to position themselves as social scientists – even if they hold a healthy cynicism
towards science as a hegemonic project – there has to be something in our well-learned
approaches that we can hold onto and not have dissolve under the postmodern gaze. This entails
revisiting our practice collectively so that our work is conducted and judged by concepts and
standards that come from our own paradigm and not someone else’s. There have been plenty of
attempts to do this. Perhaps the most fruitful I have encountered so far is that of Tracy (2010).
She compares qualitative work to another artistic craft – that of cheese making – to illustrate the
feasibility of adopting universal hallmarks of high quality without tying those hallmarks to specific processing techniques or flavours. In cheese-making, she says, there is near universal acceptance
that indicators such as a pleasing texture, appearance, flavour and nutrition are signs of a good
quality cheese, which allows cheesemakers in different traditions to ‘learn from, admire, and
dialogue with each other’ (2010: 839). In the same vein, she says, it is possible to formulate criteria for good quality qualitative research that are not explicitly tied to any epistemological position, never mind a scientific one. Tracy’s eight ‘big-tent’ criteria are a worthy topic, rich rigor, sincerity, credibility, resonance, significant contribution, ethical practice and meaningful coherence.
The problem we continue to face, however, is that there is a lack of agreement among qualitative
researchers around the formulation of quality criteria. While this is reflective of a healthy
diversity and debate within qualitative research, it also undermines our efforts to have universal
standards that are accepted by those who sit outside our trade. How can we convince
policymakers and funding agencies that our work is valid, important and rigorous when there is
so much division in our work?
Conclusion
One of the things I appreciate most about qualitative research is the capacity and willingness of
researchers to scrutinise and reflect on their own assumptions and practices. This is not intended
as an exercise in introspection or self-indulgence, but reflects a desire to continually improve upon what we do and how we do it. Sometimes, when we are caught up with teaching, grant writing, publishing and the myriad other activities we do as researchers and academics, we fail to find
the time for this reflexive practice. Certainly for me, preparing for this keynote has given me a
rare period to stop and take note of where we have reached in qualitative research; the challenges
we face and the opportunities ahead of us. Rather than focus purely on the exciting array of
research methods that are now available to us, I have sought to emphasise the importance of
methodological reflection. In doing so, I have identified the ongoing challenge that qualitative research has with itself, which we often frame in terms of, and in relation to, its other – positivistic, objectivist, scientific research. I hope that some of what I have said today resonates with you, or
at least encourages you to reflect upon your own practice and assumptions so that we can
continue to progress as a community in terms of our scholarly practice, our relevance and our
skills and knowledge.
References
Alasuutari, P. (2010) ‘The rise and relevance of qualitative research’, International Journal of
Social Research Methodology, 13(2), 139-155.
Anderson, C. (2008) ‘The end of theory: the data deluge makes the scientific method obsolete’,
Wired, 23rd June 2008.
Brinkmann, S., Jacobsen, M. and Kristiansen, S. (2014) ‘Historical overview of qualitative research
in the social sciences’, in P. Leavy (ed.) The Oxford Handbook of Qualitative Research,
London, Oxford University Press, 17-42.
Briggs, J. Hon MP. (2013) ‘Ending more of Labor’s Waste’, [Online]
https://members.nsw.liberal.org.au/ending-more-labor%E2%80%99s-waste [Accessed 25
February 2016].
Bryman, A. (1984) ‘The debate about quantitative and qualitative research: a question of method
or epistemology?’, British Journal of Sociology, 35(1), 75-92.
boyd, d. and Crawford, K. (2012) ‘Critical questions for big data’, Information, Communication and
Society, 15(5), 662-79.
Davidson, J., Paulus, T., Jackson, K. (2016) ‘Speculating on the future of digital tools for qualitative
research’, Qualitative Inquiry, DOI: 10.1177/1077800415622505.
Denzin, N. and Lincoln, Y. (1994) ‘Introduction: entering the field of qualitative research’, in N.
Denzin and Y. Lincoln (eds) A Handbook of Qualitative Research, Thousand Oaks, CA, Sage, 1-
17.
Donnelly, P. F., Gabriel, Y. and Özkazanç-Pan, B. (2013) ‘Untold stories of the field and beyond:
narrating the chaos’, Qualitative Research in Organizations and Management: An International
Journal, 8(1), 4-15.
Duberley, J. (2015) ‘The future of qualitative research: unity, fragmentation or pluralism?’,
Qualitative Research in Organizations and Management: An International Journal, 10(4), 340-
43.
Foucault, M. (1994) ‘Genealogy and social criticism’, in S. Seidman (ed) The Postmodern Turn: New
Perspectives on Social Theory, Cambridge, Cambridge University Press, 39-45.
Geertz, C. (1973) The Interpretation of Cultures: Selected Essays, New York, NY: Basic Books.
Haraway, D. (1988) ‘Situated knowledges: the science question in feminism and the privilege of
partial perspective’, Feminist Studies, 14(3), 575-599.
Johnson, P. (2015) ‘Evaluating qualitative research: past, present and future’, Qualitative Research
in Organizations and Management: An International Journal, 10(4), 320-24.
King, G., Keohane, R. and Verba, S. (1994) Designing Social Inquiry: Scientific Inference in Qualitative Research, Princeton, NJ, Princeton University Press.
Kitchin, R. (2013) ‘Big data and human geography: opportunities, challenges and risks’, Dialogues
in Human Geography, 3(3), 262-67.
Lather, P. (2013) ‘Methodology 21: what do we do in the afterward?’, International Journal of
Qualitative Studies in Education, 26(6), 634-45.
Lather, P. and St Pierre, E. (2013) ‘Post-qualitative research’, International Journal of Qualitative
Studies in Education, 26(6), 629-33.
Nancarrow, C., Vir, J. and Barker, A. (2005) ‘Ritzer's McDonaldization and applied qualitative
marketing research’, Qualitative Market Research: An International Journal, 8(3), 296-311.
Pratt, M.G. (2009), ‘For the lack of a boilerplate: tips on writing up (and reviewing) qualitative
research’, Academy of Management Journal, 52(5), 856-862.
Richardson, L. (1994) ‘Writing: a method of inquiry’, in N. Denzin and Y. Lincoln (eds) A Handbook
of Qualitative Research, Thousand Oaks, CA, Sage, 516-29.
Rouse, J. (1994) ‘Power/knowledge’, in G. Gutting (ed) The Cambridge Companion to Foucault,
Cambridge, Cambridge University Press, 92-114.
St Pierre, E. (2012) ‘Another postmodern report on knowledge: positivism and its others’, International Journal of Leadership in Education, 15(4), 483-503.
St Pierre, E. (2013) ‘The posts continue: becoming’, International Journal of Qualitative Studies in Education, 26(6), 646-57.
St Pierre, E. (2014) ‘A brief and personal history of post qualitative research’, Journal of Curriculum Theorizing, 30(2), 1-18.
Smith, R. (2014) ‘Missed miracles and mystical connections: qualitative research, digital social
science and big data’, in Big Data? Qualitative Approaches to Digital Research (Studies in Qualitative Methodology, Vol. 13), 181-204.
Smith, N. (2015) Constructing Facebook: Constituting Social Space Online, Unpublished PhD
Thesis, The University of Queensland, Brisbane, Australia.
ten Have, P. (2004) Understanding Ethnomethodology and Qualitative Research, London, Sage.
Tracy, S. J. (2010) ‘Qualitative quality: Eight “big-tent” criteria for excellent qualitative research’,
Qualitative Inquiry, 16(10), 837-851.
Wang, T. (2013) ‘Big data needs thick data’, Ethnography Matters, [Online] http://ethnographymatters.net/blog/2013/05/13/big-data-needs-thick-data/ [Accessed 6th July 2016].