
PAF600 FALL, 2005, ASUDC, Wednesday

RESEARCH DESIGN AND METHODS


Ron Perry
[email protected]
480-965-3978 (ASU-Main)          602-673-7280 (pager)
ASUDC Room 380E [office hours: TW 4:30-5:30]
Wilson Hall 226, ASU-Main [office hours: M 8:00-9:00]
---also by appointment---

This class aims to expose students to doctoral-level issues in research design
and analysis. We will have only a limited concern with questions of technique; it is
assumed that you acquired these skills before this class or that you will follow up with
outside reading on such issues. Reading is a way of life for doctoral students and
professionals in our field. Thus, I have appended a bibliography of “technique” books to
this syllabus; if you feel a need to review any such material, these are a good place
to start. You will find that many current trends and fads have their origins in the
classics; doctoral-level scholars are expected to understand origins and context. I will
provide references to classic works in methods, statistics and methodology as we move
through the semester. We will be concerned with three primary thrusts in this class: first,
the philosophical underpinnings of research; second, the concept of research
design; and finally, the bases of causal analysis.

Under the rubric of philosophy of science, we will critically examine the context of
research and analysis. Our goal is to review what may be called “big picture” issues:
Why do we do research and create theory? What is social science, and what is the
scientific method? What rules of evidence and what logics are acceptable in science?
What are the building blocks of theories? What are the bases for the various processes
and techniques associated with doing research? These discussions are designed to facilitate
understanding of the framework within which knowledge is created, stored and changed.

With respect to research designs, concern focuses upon understanding the types of
study objectives and milestones associated with different designs. We will also examine
the components of the principal designs used in public administration, including sample
surveys, experimental designs, observation, and documents and archives. The
objective is not to learn how to execute each of these, but to understand when a design is
appropriate and what kinds of knowledge different designs produce.

The third area of concern is analysis, but at a general level. These discussions
will elucidate the fundamentals of causal analysis and examine the bases of causal
inference. We will review classical and modern causal modeling practice.

TEXTBOOK and READING

Paul Davidson Reynolds, 1971. A Primer on Theory Construction. Boston: Allyn-Bacon Publishers.

In addition to the texts, I will assign reading from chapters and articles relevant to our
topics of discussion. I will provide a collection of articles, book chapters and conference
papers specifically selected to complement and form a basis for class discussion.
I will send these to you as .pdf files; full references are below:

Neil Agnew and S. Pyke, 1994. The Science Game. Prentice-Hall. Chapter 1, This Thing Called Science.

George C. Homans, 1967. The Nature of Social Science. New York: Harcourt Brace Jovanovich, Part I.

Earl Babbie, 2001. The Basics of Social Research, Second Edition. Chapter 15, The Elaboration Model.

Clarence Schrag, 1968. Elements of theoretical analysis in sociology, pp. 220-253 in L. Gross (Ed),
Sociological Theory: Inquiries & Paradigms. New York: Harper & Row.

Herbert Costner, 1971. Theory, Deduction and Rules of Correspondence, pp. 299-319 in Hubert Blalock,
Causal Models in the Social Sciences. Chicago: Aldine.

Abraham Kaplan, 1964. The Conduct of Inquiry. San Francisco: Chandler. Chapter 1, Methodology.

Don Martindale, 1979. Ideologies, paradigms and theories, pp. 7-24 in William Snizek, Ellsworth Fuhrman &
Michael Miller (Eds), Contemporary Issues in Theory & Research: A methodological perspective.
Westport, CT: Greenwood Press.

Z. Lan and K. Anders, 2000. Paradigmatic view of contemporary public administration research,
Administration and Society 32: 138-165.

Julian Simon and Paul Burstein, 1985. Basic Research Methods in Social Science. McGraw-Hill. Chapter
7, Steps in an empirical study.

James Davis, 1977. Are Surveys any good, and if so, for what? Paper read at the annual meetings of the
American Sociological Association, Chicago. [Davis is still at the National Opinion Research
Center (NORC), University of Chicago.]

Morris Zelditch, 1969. Can you really study an army in the laboratory? in Amitai Etzioni, Sociological
Reader on Complex Organizations. Holt, Rinehart and Winston.

Travis Hirschi and Hanan Selvin, 1973. Principles of Survey Analysis. Free Press. Chapter 3, Principles of
causal analysis.

Jeffrey Alexander, 1991. Sociological theory and the claim to reason, Sociological Theory 9 (fall): 148-153.

Stephen Turner, 1987. Cause, Law and Probability, Sociological Theory 5 (spring): 15-27.

Julian Simon and Paul Burstein, 1985. Basic Research Methods in Social Science. McGraw-Hill. Chapter
14, Studies using other people’s data.

Thomas Kuhn, 1970. The Structure of Scientific Revolutions, 2nd Edition. Chicago: University of Chicago
Press. Chapter 14: Postscript-1969.

George Ritzer, 1979. Toward an integrated sociological paradigm, pp. 25-46 in William Snizek, Ellsworth Fuhrman &
Michael Miller (Eds), Contemporary Issues in Theory & Research: A methodological perspective.
Westport, CT: Greenwood Press.

STUDENT ASSIGNMENTS

There are two assignments for this class. The first is to construct a research design.
The other is to participate in a group assignment, regarding which you and your group
will make an oral presentation. Each of these assignments is described below.

RESEARCH DESIGN

Each student will complete a research design as part of the class. A research design is
simply a very detailed plan to do research; you will not actually collect data. The
purpose here is to allow you to practice thinking skills to define and refine a problem,
and to review technique skills for gathering data and doing analysis. The research
design is also the format you will use, later in your doctoral career, to create a
dissertation proposal. Hence, the experience should be immediately useful and apply
as well to work you will do throughout your career.

This paper should not exceed 15 pages. In research design, it is important that you
understand the problem well enough to be both precise and concise. I am not looking
for an exhaustive literature review or the tome of the century. If you are immersed in
the problem, you should be able to communicate it well and briefly. Furthermore, all
research designs are basically structured documents--there are specific topics to
address. It is traditional to address them and move on. If you work outside academe,
particularly in a policy environment, you will find that few policy managers, policy
makers and administrators have time to read more than a few pages. If you can’t say it
concisely, it is unlikely that anyone will ever hear what you have to say. This
assignment is due November 30, 2005.

Below, I provide a basic topical structure I want you to follow in developing your
research design for class. You can follow the assigned paper from Simon and Burstein
on elements of research design if you like it better, but I like mine. Please be sure to
write a paper; the outline below provides headings for issues you should address.
The paper should flow and read as a complete story, not a batch of bullet points.

I. What is the problem?


A. Identify the principal variable in conceptual (not operational) terms.
B. Are you interested in what causes this variable, or in its effects? If you want to
explain what causes it, sketch the principal causal concepts. If you want to
know about its effects, talk about WHICH effects you are most interested in.
C. Significance: Why should anyone care about this problem? The fact that it
could be a dissertation that gets you a degree is not necessarily a primary
dimension of importance.

II. Assemble Research Questions or Hypotheses.


A. If you want to explain your principal variable, it is dependent (it is caused by
other variables that you will identify). Provide a nominal and an
operational definition.
B. If you want to address the consequences of your principal variable, it is
independent; that is, it causes something else. [In program evaluations,
the program is seen as the “cause” of—hopefully—the good outcomes
that accrue as a function of operation of the program.] Provide nominal
and operational definitions.
C. If you are explaining a dependent variable, elaborate the primary independent
variables that you believe are causal. Provide nominal and operational
definitions for each independent variable. Also create propositions that
link each independent variable with the dependent variable. Some call
these research hypotheses, some prefer the term research expectations.
Either way, they are assertions that link independent variables with the
dependent variable. [A hypothetical illustration of items A-C appears
after this outline section.]
D. If you are documenting the consequences of an independent variable, identify
the dependent variables which you believe are linked to that independent
variable. Provide nominal and operational definitions of these variables.
Create propositions that link the independent variable with each
dependent variable.
E. Explicitly identify your unit of analysis [what or who will you collect data from
or about to test the hypotheses or expectations or answer the research
questions?]
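
To make items A-C concrete, here is a purely hypothetical illustration recorded as a
small Python sketch. The problem, variables, definitions and proposition are invented
(they echo the warning-compliance example in the Models issue brief later in this
syllabus) and do not constitute an assigned problem:

    # Hypothetical sketch only: the variables, definitions and proposition below
    # are invented to illustrate the elements requested in Section II.
    design = {
        "dependent_variable": {
            "name": "warning compliance",
            "nominal": "the degree to which a household adopts the protective "
                       "action recommended in an official warning",
            "operational": "self-reported evacuation (yes/no) within 12 hours "
                           "of warning receipt",
        },
        "independent_variables": [
            {
                "name": "risk perception",
                "nominal": "a household's belief that the hazard personally "
                           "threatens it",
                "operational": "mean of three 5-point survey items on "
                               "perceived personal danger",
            },
        ],
        # A proposition (research hypothesis / research expectation) linking
        # the independent variable to the dependent variable:
        "propositions": [
            "The higher a household's risk perception, the more likely the "
            "household is to comply with an official warning.",
        ],
        "unit_of_analysis": "the household",
    }

    for proposition in design["propositions"]:
        print(proposition)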

III. Perform a literature review.


A. Because this is an exercise in research design, the literature review can be
abbreviated. Ordinarily, you would take the “model” you have assembled
above and thoroughly review as much of what is written about it as
possible. This would include empirical studies, theoretical work, and even
scholarly speculation. You immerse yourself in the problem. For this
paper, just select a few studies you think are critical in understanding the
relationships you sketched in the research
questions/hypotheses/expectations.
B. For this exercise, focus on empirical studies (those with data, not speculation).
The goal is to examine the results of other people’s research that looked
at the relationships posed in the propositions you developed (hypotheses,
research expectations). You determine whether they support your
expectations, fail to support or perhaps are indeterminate. Of special
interest are those that do not support your expectations. If the
methodology is sound, you may choose to amend your working
hypotheses. If the methodology is not sound or there are other
extenuating circumstances (the author looked at a ‘peculiar’ population, or
perhaps used a different operational definition than you), you should
explain this to the reader and retain your hypotheses. Remember, your
review focuses upon studies that addressed the same or similar
relationships proposed in your model. A literature review is not a license
to tiptoe through the general knowledge base. You must walk with
purpose and direction.
C. If on the basis of your literature review, you choose to change your model,
explain what changes you make, why, and restate the linking propositions.

IV. Data collection method


A. What technique will you use to produce data? What is the reasoning behind
your selection of this method? [Hint: it should be related to the nature of
your research question.] What is the population in which you are
interested and to which you would like your research results to apply?
How will you gain access to this population? How will you handle informed
consent? If your study design poses ethical issues, address them here.
B. Explain how you will select cases to include in your study. If you are using a
probability sample, provide the standard information regarding sample
frame, size and standard error (a worked sketch of a standard-error
calculation follows this section). If you use a nonprobability sample,
explain your selection rules in detail. What are the consequences of your
selection technique for external validity?
C. How will you gather the information? If a questionnaire, how will it be
administered? If observers/raters are used, how will you tell them what to
observe, how will they record the information and how will inter-rater
reliability be handled? If you are using documents or archives, where are
they, what form are they in, how will you record the information relevant to
your variables, what is the reliability of the sources? Please be specific in
this section about your data collection activities, but there is no need to
create a questionnaire or observation checklist; you have already given
me your operational definitions. If you are not sure about what issues
need to be addressed, see one of the relevant references on the
“technique” oriented reading list. An overview of methods is provided in
Julian Simon and Paul Burstein, Basic Research Methods in Social
Science (any of the three editions is acceptable).
D. If you use any data collection technique other than archives/documents, what
measures will you employ to ensure that an adequate number of cases
actually participate in your data collection effort? These measures vary by
technique (survey, experiment, etc.) and usually include research design
features and sometimes special incentives.
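
As referenced in item B above, here is a minimal worked sketch of the standard-error
arithmetic for a sample proportion. It assumes simple random sampling, and the
proportion and sample size are invented for illustration:

    import math

    def proportion_standard_error(p: float, n: int) -> float:
        """Standard error of a sample proportion under simple random sampling."""
        return math.sqrt(p * (1.0 - p) / n)

    # Invented illustration: a survey of n = 400 cases in which 60% comply.
    se = proportion_standard_error(p=0.60, n=400)
    print(f"standard error = {se:.4f}")                          # about 0.0245
    print(f"approximate 95% margin of error = {1.96 * se:.4f}")  # about 0.0480

Reporting the frame, the sample size and an error figure of this kind is the “standard
information” item B asks for.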

V. Data analysis
A. What technique (statistics or statistical system) will you use to test the
relationships posed in your hypotheses or research expectations? Why is
this technique appropriate to your problem? [Hint: this is usually a function
of levels of measurement and the data collection technique; a rough sketch
of that mapping follows this section.]
B. There is no need to create hypothetical data to analyze. The purpose of this
short discussion is to ensure you are comfortable in selecting analysis
techniques. Note that many different techniques might be acceptable for a
given problem.
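
As a rough aid to item A, the sketch below records one common rule of thumb mapping
levels of measurement to candidate techniques. The mapping is an illustrative
assumption, not a prescription from the course readings, and as item B notes, several
techniques may be defensible for the same problem:

    # Rule-of-thumb mapping, for illustration only; many problems admit
    # several defensible techniques.
    CANDIDATE_TECHNIQUES = {
        ("nominal DV", "nominal IVs"): ["crosstabulation with chi-square"],
        ("nominal DV", "mixed IVs"): ["logistic regression"],
        ("interval DV", "nominal IVs"): ["t-test", "analysis of variance"],
        ("interval DV", "interval IVs"): ["correlation", "OLS regression"],
        ("interval DV", "mixed IVs"): ["multiple regression", "path analysis"],
    }

    def suggest(dv_level, iv_level):
        """Return candidate analysis techniques for a DV/IV measurement pairing."""
        return CANDIDATE_TECHNIQUES.get((dv_level, iv_level),
                                        ["consult a methods text"])

    print(suggest("interval DV", "mixed IVs"))
    # ['multiple regression', 'path analysis']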

GROUP PRESENTATION ASSIGNMENT

I have selected four issues that have been the subject of controversy and interest in
methodology for some time. These issues address (1) the question of distinguishing
“Qualitative” versus “Quantitative” research, (2) the relationship between models and
theories, (3) the question of how scientific knowledge is accumulated, and (4) the nature
and primary assumptions of doing social science and its relationship to public
administration.

I will divide the class into groups and assign an issue to each group. Each group will
review the assigned problem, organize itself, and devise a scheme to break up the
problem into component issues. Each group member will select a component issue,
review literature regarding the issue and develop an oral presentation on their
component. I would like a copy of each individual’s oral presentation or a simple write-
up of what was said. The idea behind doing the write-up is to provide fellow students
(not in your group) with a reference for the issues you addressed. Each group will have
90 minutes to present. The presentation should include an overview that ties all
presentations together. Presentations will be made on the nights of October 19 and
26, 2005.

I have appended an issue brief for each of the four issues to this syllabus. The issue
brief presents the problem, suggests some issues, and gives some starter references.

LECTURE SCHEDULE AND READINGS

Meeting   Topic                                                    Read

Aug 24    Class Introduction

Aug 31    Social science & public administration                  Agnew & Pyke
          Scientific method; Epistemology                         Homans
          Postulates of science                                   Reynolds 1

Sept 7    Methodology
          Logic-in-use / Reconstructed logic                      Kaplan
          Goals of science                                        Robinson

Sept 14   Theories                                                Reynolds 3, 4, 5, 6
          Concepts / Variables
          Measurement
          Rules of Correspondence
          Validity / Reliability

Sept 21   Multiple Indicators, Validity & Reliability             Costner
          Propositions
          Eliminating False Claims
          External Validity of Conclusions

Sept 28   Theory form and content                                 Schrag
          Theory Examples                                         Lan & Anders
          Meta-Theory, abduction, retroduction                    Alexander
          Logics of discovery & proof                             Reynolds 7
          Paradigms

Oct 5     Meta-theory & scientific knowledge accumulation         Kuhn
          Paradigm shifts and scientific revolutions              Ritzer
          Evaluating Theory                                       Martindale
          Communities in Social Science                           Reynolds 2
          & Public Administration

Oct 12    Exercises; Philosophy of Social Science Review          Groups

Oct 19    PRESENTATIONS: Models & Qualitative/Quantitative        Groups

Oct 26    PRESENTATIONS: Knowledge Accumulation & Intellectual
          Craftsmanship

Nov 2     Types of Research Design                                Handouts
          Elements of Research Design                             Simon & Burstein 7
          Review Paper Assignments

Nov 9     Sample Survey Designs                                   Davis
          Experimental & Quasi-Experimental Designs               Zelditch

Nov 16    Structured & Unstructured Observation
          Documents & Archives                                    Simon & Burstein 14

Nov 23    Thanksgiving Break

Nov 30    Principles of Causal Analysis                           Hirschi & Selvin
          Criteria for causal inference                           Turner
          Classical & modern causal modeling                      Babbie
          Types of causality

ASU Fall Semester Instruction Ends December 6, 2005.

MODELS IN RESEARCH

The concept of Model has been used in the physical, natural and social sciences for
generations. In 1966, Mary Hesse [Models and Analogies in Science, University of Notre Dame
Press] created a catalog of different types or conceptions of model. Yet the term is still used
rather loosely by researchers. While philosophers of science make many distinctions or identify
many types of models, a large subset of social science researchers typically identify a model with a
collection of propositions amenable to some sort of structural equation based analysis. This
particular usage was popularized (not invented) by Hubert M. Blalock in the early 1960s. Two
kinds of controversy have persisted about models in social science. First, there is disagreement
about the relationship of models and theories. Second, there is disagreement about the goal of
modeling.
Throughout his life, Blalock contended that a causal model was a collection of
propositions that interrelate a network of variables with the purpose of explaining one or more
dependent variables. He also intertwined the (conceptual/theoretical) model with its testing, first
by linking them with partial correlation, then simple regression, then path analysis and finally
with structural equation approaches. He steadfastly maintained that causal models are theories.
With the notion of partitioning through successive solution of structural equations, single
stage models (a batch of independent variables acting simultaneously on a single dependent
variable) gave way to multi-stage models. A multi-stage model contains a network of variables
in which the same variable may be both independent and dependent, depending upon the stage of
analysis. Thus, in the very simple model below, at the first stage risk perception is one
independent variable among three that explains warning compliance. At another stage, risk is a
dependent variable explained by past experience, warning content and warning credibility.
This is not an elaborate model; many in the literature have several stages and dozens of
variables.
The advantage is that complex models reflect complex thinking and theory.

[Figure: a simple two-stage path model. Experience, warning content and warning
credibility point to risk perception; risk perception, warning belief and family
context point to warning compliance.]

Stanley Lieberson [1981, Making It Count, University of California Press] has argued
that over decades, social scientists have tended to become too enamored with complex
multistage models and forgotten the original objective: to explain a particular
dependent variable. This argument was not embraced by causal modelers (putting it
mildly), whom Lieberson felt had turned to the dark side (focusing on the statistics of
modeling) instead of focusing on the conceptual challenge of explaining some
phenomenon (variable).
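
For concreteness, the two stages described above can be written as a pair of linear
structural equations. This is a generic sketch: the linear form, the coefficients and
the error terms are illustrative assumptions drawn from the prose description of the
diagram, not a specification from the readings.

    \begin{aligned}
    \text{RiskPerception} &= \beta_1\,\text{Experience}
        + \beta_2\,\text{Content} + \beta_3\,\text{Credibility} + \varepsilon_1 \\
    \text{Compliance} &= \gamma_1\,\text{RiskPerception}
        + \gamma_2\,\text{WarningBelief} + \gamma_3\,\text{FamilyContext}
        + \varepsilon_2
    \end{aligned}

Solving the stages successively is what allows the same variable (here, risk
perception) to appear as dependent in the first equation and independent in the second.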

Please address the questions below in your presentation.

1. What exactly is a model? A causal model? How are models used?


2. Is a causal model the same as a theory? [I’m particularly interested in why or why not.]
3. Conceptually, is there a problem with multi-stage models? Does it appear that building such
models really diverts our attention from serious explanation of a given dependent
variable as Lieberson suggests?
4. Are there disadvantages to using models? [What, if any, alternatives are available?]
5. Where do models fit into metatheory? Are they only part of the logic of proof?

STARTER REFERENCES

Nicholas Mullins, 1971. The Art of Theory. New York: Harper & Row. Chapter 8 addresses
models.
Hubert Blalock, 1979. Dilemmas and strategies of theory construction, pp. 119-136 In William
Snizek, Ellsworth Fuhrman & Michael Miller (Eds), Contemporary Issues in Theory &
Research: A methodological perspective. Westport, CT: Greenwood Press.
Hubert Blalock, 1968. Theory Building and Causal Inferences, pp. 155-198 in Hubert Blalock
and Ann Blalock (Eds), Methodology in Social Research. New York: McGraw-Hill.
Abraham Kaplan, 1964. The Conduct of Inquiry. San Francisco: Chandler Press. Section on
Models, chapters 30-33.
Hubert Blalock, 1961. Causal Inferences in Nonexperimental Research. Chapel Hill, NC:
University of North Carolina Press. Chapter 1.
Eugene Meehan, 1994. Modeling, chapter 4 in Social Inquiry. Chatham, NJ: Chatham House
Publishers.
George Homans, 1964. “Theory” in Robert Faris (ed), Handbook of Modern Sociology.
Chicago: Rand McNally.
G. Arminger and G. Bohrnstedt, 1987. Making it count even more: review and critique of
Lieberson, pp. 363-372 in Clifford Clogg, Sociological Methodology 1987. Washington,
DC: American Sociological Association.
Herbert Costner, 1986. Research Methodology: Pedagogy, criticism and exhortation,
Contemporary Sociology 15:537-40. [Another comment on Lieberson]
Stanley Lieberson, 1992. Einstein, Renoir and Greeley, American Sociological Review 57: 1-15.
Alden Miller, 1985. The Logic of Causal Analysis, pages 7-28 in Hubert Blalock (Ed.), Causal
Models in Panel and Experimental Designs. Chicago: Aldine.
Jack Gibbs, 1972. Sociological Theory Construction. Hinsdale, IL: Dryden Press.
Robert Dubin, 1978. Theory Building. New York: Free Press.
Neil Smelser (ed), 1988. The Handbook of Sociology. Newbury Park: Sage Publications.

Bernard Cohen, 1989. Developing Sociological Knowledge. Chicago: Nelson Hall Publishers.
David Freedman, 1991. Statistical Models and Shoe Leather, Chapter 10 in Peter Marsden (ed),
Sociological Methodology 1991. Washington DC: American Sociological Association.

QUALITATIVE AND QUANTITATIVE RESEARCH

Among social scientists, there has long been a kind of combat between advocates of
qualitative and quantitative research methods. At worst, the frenzy seems to involve camps,
each of which believes its tools are superior. Certainly the notion that any single approach (or
small collection of approaches) is appropriate to every research question begs serious critical
examination. There are distinctions made about method in the literature of the philosophy of
science; principally these deal with techniques that are more appropriate for inductive versus
deductive approaches. Unfortunately, the quantitative/qualitative dialogue does not appear to be
rooted in deep thought. Lieberson’s 1992 article cited in the Modeling reading above [especially
page 3] makes an interesting point about quantitative/qualitative distinctions being difficult to
maintain in practice. The purpose of this exercise is to clarify what is being talked about and
how reasonable it is to talk of such things.

I’ve included some starter readings below. Please examine the issue yourselves and address the
questions below in your presentation.

1. In the literature, how do authors distinguish between qualitative and quantitative techniques?
On what bases (factors) does this distinction depend? That is, what are the defining
characteristics of each?
2. From the standpoint of meta-theory, in terms of approaches to explanation, how would you
classify the apparent use of qualitative and quantitative techniques? That is, some have
argued that qualitative techniques are useful in inductive context, while quantitative
techniques appear in deductive contexts (See Schrag below). Is this true? Why or why
not?
3. A main goal of social science is explanation and that is achieved through theories and models.
Can one build theory using only qualitative or only quantitative techniques? What
challenges arise when one focuses only on a single family of approaches?
4. In the scheme of building a knowledge base for a discipline, is there a meaningful distinction
between qualitative and quantitative techniques? Please make one argument that says
there is a meaningful distinction and one that concludes there is not a meaningful
distinction.

STARTER REFERENCES

J. Gubrium and J. Holstein, 1992. Qualitative Methods, pp. 1577-82 in The Encyclopedia of
Sociology, Volume 3, Edgar Borgatta and Marie Borgatta (eds), New York: MacMillan.

D. Heise and A. Durig, 1992. Qualitative Models, pp. 1582-86 in The Encyclopedia of Sociology,
Volume 3, Edgar Borgatta and Marie Borgatta (eds), New York: MacMillan.

Hubert Blalock, 1992. Causal Inference Models, pp. 176-88 in The Encyclopedia of Sociology,
Volume 1, Edgar Borgatta and Marie Borgatta (eds), New York: MacMillan.
Earl Babbie, 1986. Quantitative or Qualitative?, in E. Babbie, Observing Ourselves. Belmont,
CA: Wadsworth.
Earl Babbie, 1998. Observing Ourselves: Essays in Social Research. Prospect Heights, IL:
Waveland Press.
Walter Wallace, 1987. Causal Images in Sociology, Sociological Theory 5:41-46.
Randall Collins, 1989. Sociology: Proscience or antiscience, American Sociological Review 54:
124-139.
Clarence Schrag, 1967. Elements of theoretical analysis in sociology in L. Gross, Sociological
Theory: Inquiries and Paradigms. New York: Harper and Row.
Donald Campbell, 1988. Qualitative Knowing in Action Research, pages 360-376 in E. Samuel
Overman (Ed.), Methodology and Epistemology for Social Science: Selected Papers of
Donald T. Campbell. Chicago: University of Chicago Press.
Norman Denzin and Yvonna Lincoln (Eds), Handbook of Qualitative Research. Thousand Oaks:
Sage. Chapters 1,2,6,40.
Michael Patton, 2002. Qualitative Research and Evaluation Methods, 3rd Edition. Thousand
Oaks: Sage. Chapters 1,8 &9.

KNOWLEDGE GROWTH

For decades, social scientists have puzzled over how social scientific or just scientific knowledge
bases form. Whether one subscribes to a positivist, post-positivist, logical positivist or less
positivist epistemology, it is generally agreed that the key to a science or a discipline rests with
the knowledge that it creates and stores. This knowledge, whether in the form of formal or less
formal theory, is the basis upon which the science or discipline describes, explains, predicts and
controls the world of experience. At least since the mid-1960s, there have been (when people
bothered to think about this at all) two conceptions of knowledge accumulation: cumulative,
incremental growth and history-rewriting revolutionary growth. One issue in this debate is
whether we see science as the people who practice it, or the knowledge that composes it.
Although he wouldn’t agree, it has been suggested that Kuhn’s view of revolutions focuses upon
people and politics. The purpose of these presentations will be to clarify this issue.

Questions to be addressed

1. What is the basis of the contention that knowledge accumulates incrementally? How does
this relate to the scientific method? Does this model fit within the meta-theory
approach described in class?
2. Explain the rudiments of the notion that revolutions best describe the growth of scientific
knowledge. Include critiques of the original Kuhn work.
3. Robert Merton described two broad approaches to knowledge formation: theory then research
and research then theory. Please explain each approach, how they relate to inductive and
deductive reasoning strategies, and comment on whether a knowledge base could be
built using only one of these approaches [by discussing them separately, Merton implies,
but doesn’t say exactly, that they are alternatives].
4. Should any of us care whether knowledge accumulates incrementally or in revolutions? How
does this impact the way theory is constructed or research is done?

STARTER REFERENCES

Thomas Kuhn, 1970. Reflections on my critics. In I. Lakatos & A. Musgrave (Eds.), Criticism
and the growth of knowledge (pp. 231-278). London: Cambridge University Press.
Jennifer Greene, 1990. Three views on the nature and role of knowledge in social science. Pp.
227-245 in Egon Guba (Ed), The Paradigm Dialog. Thousand Oaks: Sage.
Margaret LeCompte, 1990. Emergent paradigms. Pp. 246-256 in Egon Guba (Ed), The Paradigm
Dialog. Thousand Oaks: Sage.
Judith Goetz, 1990. Discussion on knowledge accumulation. Pp. 156-158 in Egon Guba (Ed), The
Paradigm Dialog. Thousand Oaks: Sage.
Don Martindale, 1979. Ideologies, Paradigms and Theories, pp. 7-24 In William Snizek,
Ellsworth Fuhrman & Michael Miller (Eds), Contemporary Issues in Theory & Research:
A methodological perspective. Westport, CT: Greenwood Press.
Robert Merton, 1967. On Theoretical Sociology. New York: Free Press. [this is the
research-then-theory, theory-then-research discussion]

Robert Merton, 1957. Social Theory and Social Structure. New York: Free Press. [the
research-then-theory discussion is in here too.]
Allen Bluedorn and Roger Evered, 1980. Middle range theory and the strategies of theory
construction, chapter 2 in Craig Pinder and Larry Moore, Middle Range Theory and the
Study of Organizations. Boston: Martinus Nijhoff Publishing.
Craig Pinder and Larry Moore, 1980. The inevitability of multiple paradigms and the resultant
need for middle range analysis in organization theory, chapter 7 in Craig Pinder and
Larry Moore, Middle Range Theory and the Study of Organizations. Boston: Martinus
Nijhoff Publishing.
Thomas Kuhn, 1970. The Structure of Scientific Revolutions, 2nd Edition. Chicago: University of
Chicago Press.
Jerald Hage, 1972. Techniques and Problems of Theory Construction in Sociology. New York:
John Wiley.
Robert Dubin, 1978. Theory Building. New York: Free Press.
Bernard P. Cohen, 1989. Developing Sociological Knowledge. Chicago: Nelson-Hall.
Egon G. Guba, 1990. The Paradigm Dialog. Newbury Park: Sage Publications. See particularly
chapters 1, 2, 5.
Jay D. White and Guy B. Adams, 1994. Research in Public Administration. Thousand Oaks:
Sage. See chapters 1, 2, 3.
Jonathan H. Turner, 1989. Theory Building in Sociology: Assessing Theoretical Cumulation.
Newbury Park: Sage. Chapters 1 and 8 make the argument for cumulation.
Philip Von Bretzel and Richard Nagasawa, 1977. Logic, Theory and Confirmation in Sociology.
Washington DC: University Press of America.

INTELLECTUAL CRAFTSMANSHIP

Asked why he did research and published, an Assistant Professor at the University of
Washington replied: “It’s necessary to get tenure.” Ignoring the narrow
institutional view expressed here, it is reasonable to reflect on why we do what we do.
As a discipline, public administration also gets immersed in these issues. There have
long been arguments about whether PA is a science or not, what that means for the
field, and why we have to “borrow” theories rather than generate them within our
disciplinary actions. At the root of all this, there appear to lie several issues that are
important. One issue rests in understanding what social science and PA are and how
they relate to each other. Another is what is it that social science and PA are supposed
to do? George Homans (1967) suggests that the definition of a science lies in its goals
and aims rather than its results. This issue can be seen as revolving around the
question of facts [that is, social facts in the way Emile Durkheim used the term]: do we
just gather them up or are we supposed to do something about them? Blalock (1984)
weighed in on this issue arguing that facts are what we explain (not necessarily
important in themselves) and that is what science is about.
Referring back to Homans’ claims raises the issue of what underlies the conduct
of science. There remains much discussion of positivist approaches, interpretive
approaches and the whole mystique of post-modernism. Lundberg (1949) worried
about problems with non-positivist epistemologies long ago, but many others have
recently attempted to elaborate such epistemologies (or at least critique positivism)
[see, for example, Sheldon Goldberg, 1992]. The philosophical side of this debate has
been effectively reviewed by Mario Bunge (1999) and even appears in the popular
writings of Carl Sagan (1996).
At the risk of oversimplification, these folks are talking about the use of
theoretical analysis in social science. Not collecting data, but approaches to making
sense of data, hunches, ideas, and intuitions. Charles Wright Mills (1959) called this
task intellectual craftsmanship and subsequently Sjoberg and Nett (1968) spoke of
theoretical analysis. In a sense, it is the challenge of getting beyond technique and
assemblies of facts to create order, explanation and meaning. The purpose of this
group activity is to elucidate and explore these issues.

Please address the questions below:

1. What is social science? What is public administration? How do they relate to each other?
2. What is postmodernism? What impact does it have on research and theory?
3. What is the relationship of facts to propositions and theories?
4. What is positivism? What are interpretive perspectives? Critique these approaches and
discuss how they are related.
5. What is theoretical analysis--a style of thinking or a technique?

STARTER REFERENCES

Bernard P. Cohen, 1980. Developing Sociological Knowledge. Englewood Cliffs, NJ: Prentice-
Hall. Chapters 10,11 & 12 deal with logical analysis of theories and knowledge.
Michael Crotty, 1998. The Foundations of Social Research. Thousand Oaks: Sage. See chapters
4 and 5 on INTERPRETIVISM.
Norman Denzin, 1986. Postmodern Social Theory, Sociological Theory 4 (fall): 194-204.
C. Wright Mills, 1959. The Sociological Imagination. New York: Oxford. Appendix: On
Intellectual Craftsmanship.
Michael Crotty, 1998. Postmodernism. Pp. 183-214 in Crotty, The Foundations of Social
Research. Thousand Oaks: Sage.
Hubert Blalock, 1984. Basic Dilemmas in the Social Sciences. Beverly Hills: Sage. Chapter 4,
Can we move from many facts to fewer law-like propositions?
Robert S. Lynd, 1939. Knowledge for What? Princeton: Princeton University Press.
Gideon Sjoberg and Roger Nett, 1968. A Methodology for Social Research. New York: Harper,
Chapter 9, Theoretical Analysis.
George C. Homans, 1967. The Nature of Social Science. New York: Harcourt Brace
Jovanovich.
Neil Agnew and Sandra Pyke, 1994. The Science Game. Englewood Cliffs, NJ: Prentice-Hall,
Chapter 16, The Truth Spinners.
Sheldon Goldberg, 1992. Thinking Methodologically. New York: Harper Collins Publishers,
Chapter 19 Final Critiques and Comparative Assessment [of Positivist & Interpretivist
views].
James Davis, 1964. Great Books and Small Groups in Phillip E. Hammond, Sociologists at
Work. New York: Basic Books. [also reprinted in George H. Lewis, 1975, Fist-Fights in
the Kitchen. Pacific Palisades, CA: Goodyear Publishers.]
George Lundberg, 1949. Can Science Save Us? New York: Longmans, Green.
Mario Bunge, 1999. Social Science Under Debate. Toronto: University of Toronto Press.
Chapter 1 From natural science to social science.
Mario Bunge, 1996. Finding Philosophy in Social Science. New Haven, CT: Yale University
Press.
Carl Sagan, 1996. The Demon-Haunted World: Science as a Candle in the Dark. New York:
Ballantine Books. Chapters 1 and 2.
Gerald Holton, 1993. Science and Antiscience. Cambridge, MA: Harvard University Press.
Max Perutz, 1991. Is Science Necessary? Oxford: Oxford University Press.

Technique Oriented Research Books
Please note that these books generally do not talk about when or why you would use a particular
research or analysis approach. They focus on how to do something. They are useful as
refreshers or for remembering step-by-step procedures when one needs to design a particular
type of study. Some are old, some new, but all represent fairly clear statements of how to do
“it”.

SURVEY WORK

Arlene Fink, 1995. The survey handbook. Thousand Oaks: Sage.

Linda Bourque and Eve Fiedler, 1995. How to conduct self-administered and mail surveys.
Thousand Oaks: Sage.

Arlene Fink and Jacqueline Kosecoff, 1998. How to conduct surveys, Second Edition. Thousand
Oaks: Sage.

James Frey and S.M. Oishi, 1995. How to conduct interviews by telephone and in person.
Thousand Oaks: Sage.

Arlene Fink, 1995. How to sample in surveys. Thousand Oaks: Sage.

EXPERIMENTS

Steven Brown and Lawrence Melamed, 1990. Experimental design and analysis. Newbury
Park: Sage.

Roger Kirk, 1982. Experimental Design: Procedures. Monterey CA: Brooks/Cole.

Geoffrey Keppel, 1991. Design and Analysis: A researcher’s handbook. Englewood Cliffs, NJ:
Prentice-Hall.

Geoffrey Keppel, 1992. Introduction to design and analysis: a student’s handbook. New York:
W.H. Freeman Company.

CASE STUDIES

George Huber and Andrew Van de Ven, 1995. Longitudinal Field Research methods. Thousand
Oaks: Sage.

Robert Yin, 1994. Case study research: design and methods. Thousand Oaks: Sage.

John Creswell, 1998. Qualitative Inquiry and Research Design. Thousand Oaks: Sage.

STRUCTURED AND UNSTRUCTURED OBSERVATION

Leonard Schatzman and Anselm Strauss, 1973. Field Research. Englewood Cliffs, NJ: Prentice-Hall.

Jaber Gubrium, 1988. Analyzing Field Reality. Thousand Oaks: Sage.

John Lofland and Lyn Lofland, 1984. Analyzing Social Settings. Belmont, CA: Wadsworth.

Eugene Webb, Don Campbell, Richard Schwartz and Lee Sechrest, 1981. Unobtrusive
Measures: Nonreactive Research in the Social Sciences. Boston: Houghton-Mifflin.

DOCUMENTS AND ARCHIVES

Richard Light and David Pillemer, 1984. Summing Up: The Science of Reviewing Research.
Cambridge: Harvard University Press.

Thomas Mann, 1987. A Guide to Library Research Methods. New York: Oxford University
Press.

Edward Kardas and T.M. Milford, 1996. Using the Internet for Social Science Research and
Practice. Belmont, CA: Wadsworth.

Robert Weber, 1985. Basic Content Analysis. Thousand Oaks: Sage.

PROGRAM EVALUATION

Michael Patton, 1987. How to use qualitative methods in evaluation. Newbury Park: Sage.

William Trochim, 1984. Research design for program evaluation. Beverly Hills: Sage.

Joan Herman, Lynn Morris and Carol Fitz-Gibbon, 1987. Evaluator’s Handbook. Newbury
Park: Sage.

Eugene Bardach, 2000. A Practical Guide for Policy Analysis. New York: Chatham House.

MEASUREMENT

Mark S. Litwin, 1995. How to measure survey reliability and validity. Thousand Oaks: Sage.

Marlene Henerson, Lynn Morris and Carol Fitz-Gibbon, 1987. How to measure attitudes.
Newbury Park: Sage.

Lynn Morris, 1978. How to Measure Program Implementation. Beverly Hills: Sage.

Allen Edwards, 1957. Techniques of Attitude Scale Construction. New York: Appleton-
Century-Crofts. [this is THE classic].

Edward Carmines and Richard Zeller, 1979. Reliability and Validity Assessment. Thousand
Oaks: Sage.

STATISTICAL INFERENCE

Gudmund Iversen, 1984. Bayesian Statistical Inference. Beverly Hills: Sage.

Christopher Mooney and Robert Duval, 1993. Bootstrapping: a nonparametric approach to
statistical inference. Newbury Park: Sage.

Denton Morrison and Ramon Henkel, 1973. The Significance Test Controversy. Chicago:
Aldine.

Michael Oakes, 1986. Statistical Inference. New York: Wiley.

STATISTICAL ANALYSIS

Andy Field, 2000. Discovering Statistics using SPSS for Windows: Advanced techniques.
Thousand Oaks: Sage.

Rae Newton and Kjell Rudestam, 1999. Your Statistical Consultant: Answers to questions.
Thousand Oaks: Sage.

Geoffrey Maruyama, 1997. Basics of Structural Equation Modeling. Thousand Oaks: Sage.

Adamantios Diamantopoulos and J.A. Siguaw, 2000. Introducing LISREL: a guide for the
uninitiated. Thousand Oaks: Sage.

G. R. Iversen and Helmut Norpoth, 1976. Analysis of Variance. Thousand Oaks: Sage.

James Bray and Scott Maxwell, 1985. Multivariate Analysis of Variance. Thousand
Sage.

Christopher Achen, 1982. Interpreting and Using Regression. Thousand Oaks: Sage.

Larry Schroeder, David Sjoquist and Paula Stephan, 1986. Understanding Regression Analysis.
Thousand Oaks: Sage.

William Berry and Stanley Feldman, 1985. Multiple Regression in Practice. Thousand Oaks:
Sage.

Herbert Asher, 1976. Causal Modeling. Thousand Oaks: Sage.

Charles Ostrom, 1978. Time Series Analysis. Thousand Oaks: Sage.

Jae-on Kim and Charles Mueller, 1978. Introduction to Factor Analysis. Thousand Oaks: Sage.

Geoffrey Keppel, 1989. Data analysis for research designs: analysis of variance and multiple
regression approaches. New York: W.H. Freeman Company.

Arlene Fink, 1995. How to analyze survey data. Thousand Oaks: Sage.

Jacob Cohen, 1987. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ:
Lawrence Erlbaum Associates.
