
1. Discuss the nature, types and steps of case study. Describe the criteria and misconceptions of case studies. (1000 words, 15 marks)

Introduction
Case study is a research methodology that involves the in-depth study of a
specific subject, which can be a person, an event, a place or a group. It is
widely used for research in academic, business, medical and social
domains. A case study examines nearly every aspect of the subject's past
and present in order to learn as much as possible about the subject, so that
the information can be generalized. Examples of subjects might be an event
like demonetization in India (2016) or an individual like Queen Elizabeth II.

Nature of Case Study


A case study can be defined in a number of ways, the core idea being
the intensive study of a subject in its natural context. For this reason, it is
sometimes referred to as a "naturalistic" design, in contrast to an
"experimental" design, where the researcher controls and manipulates
variables.
A case study can follow a single-case or a multiple-case design, in which two
or more cases, or replications across cases, are studied to investigate the
same phenomenon. For example, to study trends among startups in the fashion
industry, two or more fashion startups can be taken and analyzed.
Case study research usually involves qualitative methods like observation
and interviews, but quantitative methods like surveys are also used
sometimes. It relies on multiple sources of evidence, such as documentation
and archival records. It also benefits from previously proposed theories,
in order to gain a holistic understanding of the subject.

Types
Case studies are broadly categorized into the following four types:
a. Illustrative: In this type, usually one or two instances are analyzed to
explain what a situation is like. It is primarily descriptive, e.g.
studying two or three tribal people to highlight their issues.
b. Exploratory (pilot): A smaller-scale study is done to identify situations
for future large-scale research, e.g. studying the effects of a herb on
fever in two or three family members before conducting medical
experiments on a larger scale.
c. Cumulative: This type of study analyzes multiple past studies, done at
different times and locations, which allows for better generalization
without the effort of doing those studies separately, e.g. aggregating
various studies done at different locations and years to understand
the gender pay gap.
d. Critical instance: In this type, a subject or event is studied for its
uniqueness, with no intention of generalizing the phenomenon to
understand a causal relationship, e.g. studying a patient with an
extreme level of PTSD.

Steps
In general, case study research follows the four steps below:
a) Defining the case: Careful formulation of the research question and a
review of the existing literature are important to define the case.
b) Choosing a case: Depending upon the type of case study, it may be an
extreme case or a typical case (when trying to understand a
phenomenon).
c) Collecting data: Different qualitative methods, e.g. interviews and
observations, are used to collect data. Multiple sources of data,
like documents, past theories etc., should also be utilized.
d) Analyzing data and reporting the case: Repeated reviews and analysis
are often required to make sense of the not-so-straightforward data
gathered. This might lead to the development of a new theory or
challenge an existing one. The findings are then presented
in a context-rich case-study report, which also contains details of the
process.
Sometimes, depending upon the exact type of study, a slight variation of these
steps (below) might be followed:
a) Understanding the present status of the case, e.g. why a child
behaves normally in one class but is very defiant in another.
b) Formulating the hypothesis, which might include hypothesizing the
probable cause(s), e.g. our hypothesis may be that the student is
defiant due to the difficulty of the subject matter and the not-so-warm
attitude of the subject teacher.
c) Verifying the hypothesis by checking for the presence or absence of the
hypothesized causes, e.g. interviewing the child about his
experiences or observing him in the classroom.
d) Diagnosing the causes in detail, so that remedial
suggestions can be provided, e.g. suggesting a change of subject
teacher or help with the subject matter.
e) Follow-up of the case is done to understand if the diagnosis was
correct and the remedial plan was successful, e.g. - the child may be
assessed after a few weeks, to check if the defiant behavior has
reduced.
f) The case study must be reported with precise, accurate and detailed
information.

Criteria
Selecting a case for a case study depends on the type of case study. For an
intrinsic case study (to learn about a unique phenomenon), the researcher
may choose an extreme or unique case, as the uniqueness itself is of interest to
the researcher. On the other hand, for an instrumental case study (to get a
broader picture of a phenomenon), a typical or general case may be
chosen. For a collective or multiple-case study, the cases must be chosen
carefully so that comparisons can be made among them.

Misconceptions
Flyvbjerg (2006) identified five common misunderstandings about case-
study research mentioned below:
a) General, theoretical knowledge is more valuable than concrete,
practical knowledge.
b) One cannot generalize on the basis of an individual case, making case
study ineffective for scientific development.
c) The case study is most useful for generating hypotheses, while other
methods are more suitable for hypotheses testing and theory
building.
d) The case study contains a bias or a tendency to confirm the
researcher’s preconceived notions.
e) It is often difficult to summarize and develop general propositions
and theories on the basis of specific case studies.

Summary
Case study is a very useful research tool for in-depth analysis of an
event or a subject in its real-world context. The case study needs to be
done in a systematic way to generate holistic understanding, and the
selection of the case is critical to the research. It is widely used in
academic, business and clinical domains. Case study is very useful in
clinical psychology for analyzing an individual's mental health. It can also be
used to understand rare mental phenomena, e.g. the famous study of
patient H.M. in cognitive neuroscience.

2. Explain the method, steps, relevance and implications of grounded theory. Describe the types of coding in grounded theory.
Introduction
Grounded theory (GT), developed by Glaser and Strauss (1967), is a
qualitative research method which enables the researcher to generate a
new theory or modify an existing one by collecting and analyzing data
around a particular phenomenon. The theory is 'grounded' in data, i.e. the
theory generation follows data collection. It is used to uncover such things
as the behavior or social relationships of a group. For example, a researcher
might gather data about Himalayan monks and their issues in daily life,
and then develop a theory about them.
Unlike traditional research methods, which are hypothetico-deductive, i.e.
you come up with a hypothesis first and then try to prove or disprove it
through data, grounded theory is an inductive research method, where
data is collected first and theories are built upon it. The process of
data collection, analysis and formulation of concepts goes through multiple
iterations until it reaches theoretical saturation, the point at which
additional data adds no additional insights.

Method
The idea that 'all is data' is a fundamental property of grounded theory,
which means that everything the researcher comes across while
studying the event, e.g. an article, a conversation or a TV show, is
data. Short field notes created through informal conversations or gathered
from online articles, videos etc. form the core data. Sometimes a
researcher who is an expert in the subject being studied might do a self-
assessment to accumulate ideas.
Most grounded theorists do not look for statistical significance; rather,
concerns about the fitness and relevance of the theory are considered
important. The researchers are not really looking for the truth, but for
theories that explain the relationships between concepts developed from
the data.

Steps
In carrying out a GT study, the general practice is to avoid discussing the
topic or reviewing the relevant literature beforehand, so that no
preconceptions are generated. At the beginning, the researcher should only
identify a general topic of interest. In grounded theory, data collection and
analysis happen simultaneously and iteratively. The data is collected and
then analyzed to generate concepts; based on the analysis, more data is
collected and further analysis takes place, and so on. Theorizing, i.e. the
building and testing of theory, happens throughout the project.
The processes involved are:
I. Memoing: Memos are a form of short notes prepared while
exploring data and identifying concepts. They are a kind of self-note
by the researcher to capture her train of thought. They can be
prepared in three ways:
a. Theoretical note: These are written about evolving concepts
and their (potential) relationships with other concepts.
b. Field note: These are prepared when the researcher is actively
in the field, interacting with participants in the study area. They
might also come from other sources like newspapers, informal
talks etc.
c. Code note: The researcher might name, categorize and label
various data items while analyzing data. The code note
describes the codes for these labels.
II. Coding: Coding is the process of generating the code notes by
analyzing and categorizing data. It can be done in
three ways – open, axial and selective.
III. Sorting: In this stage, the memos are sorted and the connections
among concepts are clarified to form a concrete understanding of
the subject. This is key to formulating the theory.
IV. Writing: At this stage, the written theory is shaped. The researcher
applies her interpretations to the generated categories and ideas.
The existing relevant literature is then related to the findings to
present the theory in a scholarly context.

Relevance
Grounded theory often provides new and fresh theories and is of high
significance for the reasons below:
I. Collection of data from multiple sources makes grounded theories
more valid and reliable. They are considered ecologically valid, i.e.
closely representing real-world settings.
II. It provides ways to analyze the cause in a qualitative manner i.e.,
without the need to prove statistical significance.
III. It is an inductive approach where the ground or the basis is ‘data’
and theory is the outcome of the data analysis.
IV. It provides ways to modify the knowledge base in the light of new
information, making it quite informative and context-rich.
V. It provides strategies for categorization and organization of data, for
theory generation.

Implications
Although developed by sociologists to understand social and psychological
processes, grounded theory is not restricted to any school or domain. Some
of its widespread implications are:
I. In social science and politics, it is used to formulate policies and in
program evaluation research.
II. In marketing, it can be used to analyze consumers’ preferences and
advertising opportunities.
III. In psychology, it can be used to analyze community practices and
shaping of human behavior.
IV. In education, it can be used to analyze special needs children and
devising programs to empower them.
V. In sociology, it can be used to analyze how people's beliefs lead them
to lives with different levels of satisfaction.
Types of Coding
The collected data is analyzed and categories are formed through the
process of coding. It is done in the following ways:
I. Open coding: This is the first and quite tedious phase, where all the field
notes are labeled and categorized by going over them line by line. It
is the first level of abstraction, which generates multiple discrete
categories or codes.
II. Axial coding: This phase follows open coding; the discrete
categories generated in the previous phase are analyzed and the
relationships between the categories are formed. The researcher is
often interested in finding the 'cause' and 'context' codes.
III. Selective coding: In this phase, the researcher tries to find the main
or central category among the various categories formed and then
tries to relate this selected category to the other categories (a toy
sketch of these three phases follows below).
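Below is a toy Python sketch of how coded material might be organized across the open, axial and selective phases. The excerpts, code labels and category names are invented for illustration; real grounded-theory coding is an interpretive process, not a mechanical one.

```python
# Toy illustration only: organizing qualitative codes across the three coding phases.
# All excerpts, code labels and categories below are hypothetical.
from collections import defaultdict

# Open coding: label raw excerpts with discrete codes, line by line.
open_codes = {
    "We meditate before sunrise every day": ["routine", "meditation"],
    "Villagers bring us food in winter": ["community support", "seasonal hardship"],
    "Snow blocks the paths for weeks": ["isolation", "seasonal hardship"],
}

# Axial coding: relate discrete codes to broader categories (cause, context, etc.).
category_of = {
    "routine": "daily practice",
    "meditation": "daily practice",
    "community support": "coping resources",
    "seasonal hardship": "environmental pressure",
    "isolation": "environmental pressure",
}
axial = defaultdict(list)
for excerpt, codes in open_codes.items():
    for code in codes:
        axial[category_of[code]].append((code, excerpt))

# Selective coding: choose a central category and relate the others to it.
core_category = "environmental pressure"
print("Core category:", core_category)
for category, items in axial.items():
    relation = "core" if category == core_category else "related to core"
    print(f"- {category} ({relation}): {sorted({code for code, _ in items})}")
```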

Summary
Grounded theory provides ways to generate fresh theories that are
grounded in data. It is, basically, the discovery of emerging patterns in data.
It specifies ways to conceptualize not-so-obvious information through the
processes of memoing, coding and sorting. Grounded theory is considered
to represent real-world situations very closely. It is widely used in multiple
domains like the social sciences, health and education to carry out
meaningful research and generate policies that address real-life issues.

3. Elaborate the assumptions, approaches, steps, issues and implications of discourse analysis.

Introduction
Let's begin with the word discourse. In simple terms, discourse is any form
of communication (spoken, written or signed) that goes beyond a sentence. It is
not just language in use, but the overall meaning conveyed by language in
context. Context here can be the social, cultural or historical background, which
is a very important factor to consider in order to get the essence or true meaning
of the discourse. Discourse can refer to verbal or written conversations,
objects, social practices or even belief systems. Michel Foucault, one of the
key theorists, defined discourse as "systems of thoughts composed of ideas,
attitudes, courses of action, beliefs and practices that systematically
construct the subjects and the worlds of which they speak."

Discourse analysis is a qualitative research method used to study how
language functions in relation to its context. Traditional language studies
focused on the structural part of language, whereas discourse analysis
focuses on its functional part and its use in shaping a society. Critical
discourse analysis is a form of discourse analysis that studies socio-
political discourse and how power influences social relationships. Content
discourse analysis is another form, which analyses any form of content
objectively to derive its true meaning.

Assumptions
Discourse analysis, as an interdisciplinary research approach, carries a few
underlying assumptions:
I. The most fundamental assumption is that reality is constructed by
social and cultural elements. What is real depends on what is socially
acceptable. An individual's sense of reality, i.e. elements like ideas or
concepts, is determined by language and its usage. Since language is a
social and cultural construct, reality in a way becomes a product of
society and culture.
II. In line with the above assumption, another is that people are a
product of their social interactions. The use of language in social
communication plays a major role in shaping an individual. The sense
of self or personality, assumed to be part of our inner sense in scientific
approaches, is actually a product of social interactions.
III. Another assumption is that the results of any study will always
include some amount of subjectivity. Researchers are very likely to
carry some biases or subjectivity, which are reflected in their interpretations.

Approaches
There are numerous definitions of discourse analysis and hence numerous
approaches to it. Some of them are:
I. Modernism: Modernists focused on achievement and sought
universal social laws to gain a better understanding of society. They
considered discourse to be functional, and hence transformations
were needed to develop new or more accurate words to describe
new innovations and understandings. Modernism considered
language and discourse a "natural" product of common sense,
not influenced by power.
II. Structuralism: Structuralists believed that all human activity and
social formations are constructed (not natural). They are related to
the language and discourse in use, and form a system of related
elements. This means that an individual element holds significance or
meaning only in relation to the structure as a whole, where the
structure is considered to be self-contained. Language in itself is
meaningless; the system in which it is used gives it meaning.
III. Postmodernism: Postmodernists rejected the modernist claim that
universal social laws would lead to a better understanding of
society. Instead, they focused on the differences and openness of
meaning. They insisted on analyzing discourses as texts,
speeches, language and practices. Michel Foucault discussed the role
of power in discourse.
IV. Feminism: Feminists emphasized the concept of 'performing
gender', the idea that gender is not tied to biological sex but is
a set of practices learned and performed based on cultural norms, i.e.
we act, walk and talk in ways that consolidate the idea of "being
a man" or "being a woman". Feminists described discourse as events
of such social practices, and investigated how power, ideology,
language and discourse are related.

Steps
The following steps can be taken to conduct a discourse analysis:
I. Target orientation: First, the researcher needs to find the target or
area of study. Once the research question is finalized, the researcher
needs to figure out the materials to be used as sources of data.
II. Significance of data: Different sources, e.g. newspapers, brochures,
online forums etc., can be used to gather data. After gathering, the
significance or value of the data needs to be examined.
III. Interpretation of data: The data must be investigated in terms of its
social and historical context to understand the topic.
Questions like when, why and how must be assessed, so that all aspects
of the discourse context are included.
IV. Analysis of the findings: This final step closely examines various
elements of the material, such as words, sentences and paragraphs,
and relates them to relevant patterns or themes. Software is often
used to find patterns (a toy sketch follows below), but the final
conclusion depends on the researcher's rationale.
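As a rough illustration of the kind of pattern-finding that software can assist with, the Python sketch below simply counts recurring terms across a few invented excerpts; real discourse analysis goes far beyond word frequencies and rests on interpretation in context.

```python
# Toy sketch: surface recurring terms across hypothetical discourse excerpts.
# This only illustrates the tooling step; it is not discourse analysis itself.
from collections import Counter
import re

excerpts = [
    "The nation must protect its traditional values.",
    "Our values define who we are as a nation.",
    "Protecting the family is protecting the nation.",
]

tokens = []
for text in excerpts:
    tokens.extend(re.findall(r"[a-z]+", text.lower()))

# Drop common function words so that content words stand out.
stopwords = {"the", "is", "as", "its", "our", "we", "who", "are", "must", "a"}
recurring = Counter(t for t in tokens if t not in stopwords)

# Recurring terms hint at themes worth a closer interpretive reading.
print(recurring.most_common(5))
```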

Issues
Though highly useful, this methodology raises some concerns about its
reliability and validity:
I. Since the interpretation is made by the researcher (who is likely to have
certain biases), it may involve some subjectivity.
II. The lack of proper formats and formal guidelines makes this approach
controversial.
III. The reliability and validity of the interpretation are highly dependent
upon the quality of the researcher's logic and reasoning.
IV. Since it doesn't provide hard or proven data, the validity of the
interpretation depends upon the researcher's rhetorical
argument.

Implications
Discourse analysis has been taken up in varied disciplines like sociology,
anthropology, psychology, to name a few. Some of the implications are:
I. It helps to reveal the hidden motivations behind a problem, so that it
can be solved more effectively.
II. It helps to get a comprehensive view of the problem, so that we can
relate ourselves to it.
III. It helps in deconstructing norms or generally held social beliefs.
IV. It is used to understand how language works and how discourse can
be used to foster social changes.

Summary
Discourse analysis is a powerful research approach that explores the
relationship between language and reality. It has proven to be very helpful
in bringing about paradigm shifts in society, by challenging the "taken-for-
granted nature of language". Theories like modernism and structuralism,
among others, have provided different approaches to discourse analysis. Despite
concerns with the reliability and validity of its interpretations, discourse
analysis has been widely used in numerous disciplines like sociology,
the humanities, philosophy and many more.

4. Explain the methods of estimating reliability. (400 words, 5 marks)


Reliability is a measure of consistency, i.e., if a test, repeated multiple
times under the same conditions, generates the same results, it is considered
reliable. This is needed for a hypothesis to be accepted as a scientific
truth. Reliability is not measured directly but estimated, and there are multiple
ways of calculating it. These can be classified into the two groups below:
I. External consistency procedures: This approach compares findings
from two sets of data collected by the same test. The methods are:
a. Test-retest reliability: This is the most commonly used approach,
which compares the findings of administering the same test to the
same sample at two different points in time. The reliability
coefficient is the correlation between the scores obtained by
the same person on the two administrations. Clearly, time is of
critical importance, as it may impact the scores due to either the
memory effect, where during the re-test people might answer
from memory without really thinking about the test item, or the
practice effect, where the re-test often shows better scores due to
practice.
b. Parallel forms reliability: In this approach, two forms of a test,
having different items but measuring the same construct, are
administered to the same sample and the resulting scores are
compared to estimate reliability. Despite being one of the most
rigorous assessments, this method is not much used in
practice, as researchers find it burdensome to generate two
forms of a test.
c. Inter-rater reliability: This approach involves multiple
researchers or raters observing the same sample and
comparing their results. Reliability is estimated by the degree
of agreement among the raters. Having multiple raters also
helps eliminate biases or mood effects.
II. Internal consistency procedures: This approach assesses the
correlation between multiple items in a test that are intended to
measure the same construct. Ways to measure are:
a. Split-half reliability: The test items are divided into two sets,
and the scores obtained for each half are compared to find the
correlation, i.e. the split-half reliability. This approach is
problematic for short tests, and to rectify the defect of
shortness, the Spearman-Brown formula is employed:
r = 2r_hh / (1 + r_hh), where r_hh is the correlation between the two halves.
b. Cronbach's alpha: Cronbach's alpha can be considered as the
mean of all possible split-half coefficients, corrected by the
Spearman-Brown formula. It can be calculated as:
alpha = [k / (k - 1)] x [1 - (sum of item variances / variance of total scores)],
where k is the number of items in the test.
c. Kuder-Richardson estimate of reliability: This is a special case of
Cronbach's alpha, computed for dichotomous scores (0 or 1).
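As a minimal sketch of these internal-consistency estimates, the Python snippet below computes split-half reliability with the Spearman-Brown correction and Cronbach's alpha on a small set of made-up item scores (the data and sample size are purely illustrative).

```python
# Minimal sketch with made-up scores: split-half reliability (Spearman-Brown
# corrected) and Cronbach's alpha. Not a substitute for a full psychometric analysis.
import numpy as np

# Rows = 6 respondents, columns = 4 test items (hypothetical data).
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 4, 5, 5],
])

# Split-half: sum odd vs. even items, correlate the halves, apply Spearman-Brown.
half_a = scores[:, ::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)
r_hh = np.corrcoef(half_a, half_b)[0, 1]
r_split = 2 * r_hh / (1 + r_hh)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1).sum()
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)

print(f"Split-half reliability (Spearman-Brown): {r_split:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")
```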

5. Explain the different types of variables. (400 words, 5 marks)


Simply speaking, a variable is anything that takes on different values. In research,
the variable must additionally be observable. Variables tie theories to the
real world, and hence are considered the core of theoretical research. Variables
are of different types, mentioned below:
I. Stimulus, Organism and Response: Many psychologists have adopted the
S-O-R model to understand human behavior.
• S represents the stimulus variables, which are variables of the
environment that elicit responses from the organism, e.g. light or
sound.
• O is for organism variables, that is, the changeable characteristics
of the organism being observed, e.g. heart rate, blood pressure
etc.
• R is for response variables, which refer to some behavior or
action of the organism, e.g. salivating or punching.
II. Independent and Dependent:
• The independent variable is the one that is manipulated by the
researcher to ascertain its relationship to the observed
phenomenon.
• The dependent variable is the one measured by the researcher. It is
impacted by changes in the independent variable.
For example, in an experiment studying the effect of sleep on
digestion, sleep is the independent variable and digestion is the
dependent variable.
III. Extraneous and Confounding:
• An extraneous variable is any other variable that affects the
dependent variable directly or in combination with the
independent variable. It may mask the relationship between the
dependent and independent variables and hence must be
controlled in an experiment. In the above example, diet can be an
extraneous variable impacting digestion with or without sleep.
• A confounding variable is one which varies with the independent
variable and cannot be controlled. It is important to unconfound
these variables, otherwise the results might not reflect the correct
relationship. In the above example, stress level, which varies with
sleep and impacts digestion, can be considered a confounding
variable.
IV. Active and Attribute:
• Any variable that can be manipulated is an active or experimental
variable, e.g. teaching method, training courses etc.
• An attribute variable cannot be manipulated, but is measured by
the experimenter, e.g. intelligence, creativity.
V. Quantitative and Categorical:
• Variables that vary in amount are quantitative, e.g. noise level.
• Variables that vary in kind are categorical, e.g. gender, religion.
Categorical variables can be of three types: constant (only one
category, e.g. air), dichotomous (two possible categories, e.g.
working status) or polytomous (multiple possible categories, e.g. religion).
VI. Continuous and Discrete:
• Continuous variables are those which can take infinite values
within a given interval. They can be measured to any arbitrary
degree of exactness, e.g. height, weight.
• Discrete variables are those which can take only a fixed number of
values between two values, with clear gaps in between, e.g.
number of kids in a family.
6. Discuss the various types of validity.
The validity of a test refers to the extent to which it measures what it claims to
measure. Being valid is extremely important for a test, for its precise
administration and accurate interpretation of results. For example, if a test
claims to measure creativity but instead measures analytical reasoning skills, it
is not considered valid.
Validity has various types, mentioned below:
I. Content validity: A test has content validity when its content or items
represent all the aspects the test should cover, e.g. a test to
measure fluency in a language should include all facets of fluency, like the
ability to speak, or speak and write, or understand, or any possible
combination of these.
II. Criterion-related validity: A test has criterion-related validity when its
results closely relate to those of another standard test of the same concept. For
example, a specific IQ test must have results highly correlated with the
results of a standard IQ test. It has two types:
a. Concurrent: When the criterion measures are obtained at the
same time as the test scores, such that the test scores estimate the
individual's current state, the test is said to have concurrent validity,
e.g. a test to measure depression has concurrent validity if it
measures the current level of depression at the time the test is
performed.
b. Predictive: When the criterion measures are obtained later than
the time of the test, e.g. an IQ test which successfully
predicts future income has predictive validity.
III. Construct validity: A test has construct validity if it actually measures
the construct it claims to measure, e.g. a creativity test should have
construct validity so that it does not measure other related constructs
like education level. There are two types (illustrated in the sketch after this list):
a. Convergent: A test is said to have convergent validity if its results
correlate with the results of other tests they are supposed to be
correlated with, e.g. if a test to measure anxiety produces results
correlated with those of a test to measure stress levels, it is said
to have convergent validity.
b. Divergent: Conversely, a test which does not correlate with
another test, when it is not supposed to, is said to have
divergent validity, e.g. if a test to measure IQ produces results
uncorrelated with those of a test to measure anxiety, it has
divergent validity.
IV. Face validity: A test has face validity if a researcher, looking at it on its
face, feels that it measures the construct it claims to measure, e.g. a test
to measure IQ having questions related to analytical skills.
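As a minimal illustration of convergent and divergent validity as correlation patterns, the sketch below correlates some invented test scores; the variable names and values are assumptions, not real scales or norms.

```python
# Hypothetical scores only: convergent vs. divergent validity as correlation patterns.
import numpy as np

anxiety = np.array([12, 18, 25, 30, 22, 15, 28, 20])
stress = np.array([14, 20, 27, 33, 21, 16, 30, 19])    # expected to correlate with anxiety
iq = np.array([110, 95, 102, 99, 120, 105, 98, 115])   # expected not to correlate with anxiety

r_convergent = np.corrcoef(anxiety, stress)[0, 1]
r_divergent = np.corrcoef(anxiety, iq)[0, 1]

print(f"Anxiety vs. stress (convergent, expect high r): {r_convergent:.2f}")
print(f"Anxiety vs. IQ (divergent, expect low r): {r_divergent:.2f}")
```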

7. Describe the importance and types of hypotheses.


In research, a hypothesis is a statement of expectation or assumption about
the outcome, which will be tested by the research. It can be thought of as a
calculated and educated guess about the research findings, based on existing
knowledge. An example of a hypothesis can be: "long working hours impact
physician visits". However, a good research hypothesis must be stated in a
way that allows it to be subjected to empirical testing. A good research
hypothesis must be specific, clear and testable.
Importance
A hypothesis is the basic element of any research activity. It helps to link the
underlying theory with the research question. If it has been formulated in a
precise and clear way, designing the research becomes much easier. According to
Goode and Hatt, without hypothesis formulation, research is unfocused.
Testing the hypothesis helps in deciding whether to accept or reject it,
and thereby in concluding the findings of the research.

Types
Ideally there should be just one type of hypothesis. But by convention, the
research hypothesis is not proved directly; rather, its opposite is disproved
in order to support it. Hence there are two major categories of
hypothesis, mentioned below:
I. Null hypothesis (H0): It is a statement that there is no
relationship between the variables defined in the hypothesis. In
statistics, a null hypothesis asserts that there is no difference between
two population means; any difference found is due to random chance.
It is a very important tool in decision making. If a
significant difference is found between the two means, the null
hypothesis is rejected.
II. Alternate hypothesis (H1): It is the opposite of the null hypothesis and
reflects the problem statement that the researcher is trying to address. It
asserts that a relationship exists between the defined variables. In
statistics, it asserts that there is a difference between the two
population means and that this is not due to chance. If the null hypothesis
is rejected, the alternate hypothesis is supported. It can be either
directional, i.e. the difference exists and is positive or negative, or
non-directional, i.e. the difference exists but its direction is not known or
does not matter.

For example, to study the relationship between long working hours and visits
to the physician, these can be written as:
H0: Working 8 or more hours does not increase visits to the physician.
H1: Working 8 or more hours increases visits to the physician. (directional)
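A sketch of how such a directional hypothesis might be tested appears below; it uses an independent-samples t-test from scipy on fabricated illustrative numbers, so the figures and the 0.05 cut-off are assumptions for demonstration only.

```python
# Sketch only: testing H0 vs. the directional H1 above with fabricated numbers.
from scipy import stats

visits_under_8h = [1, 2, 0, 3, 1, 2, 1, 0, 2, 1]  # physician visits per year (made up)
visits_8h_plus = [3, 4, 2, 5, 3, 4, 2, 3, 5, 4]

# Independent-samples t-test; halve the two-tailed p for the hypothesized direction.
t_stat, p_two_tailed = stats.ttest_ind(visits_8h_plus, visits_under_8h)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

alpha = 0.05
decision = "reject H0 (H1 supported)" if p_one_tailed < alpha else "fail to reject H0"
print(f"t = {t_stat:.2f}, one-tailed p = {p_one_tailed:.4f} -> {decision}")
```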

8. Discuss the types, advantages and limitations of factorial research design. (400 words, 5 marks)

Factorial research design is the most common approach to including multiple
independent variables or factors in an experiment. In a factorial design, each
level of one independent variable is combined with each level of the others to
generate all possible combinations. Each of these combinations is a condition
in the experiment, so that the main effects of all independent variables as well
as their combined or interaction effects can be studied.

For example, to study the effect of food (A), with 2 levels (veg/non-veg), and
drink (B), with 3 levels (alcohol/cocktail/mocktail), on glucose level, we will
have 2 x 3 = 6 conditions to check. The factorial design table will look like:

              Alcohol    Cocktail    Mocktail
Veg food      VA         VC          VM
Non-veg food  NA         NC          NM
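The short sketch below simply enumerates the 2 x 3 conditions from this example with itertools.product; the labels mirror the table and are illustrative only.

```python
# Enumerate the 2 x 3 factorial conditions from the food/drink example above.
from itertools import product

food = ["Veg", "Non-veg"]                    # factor A, 2 levels
drink = ["Alcohol", "Cocktail", "Mocktail"]  # factor B, 3 levels

conditions = list(product(food, drink))
print(f"Number of conditions: {len(conditions)}")  # 2 * 3 = 6
for a, b in conditions:
    print(f"{a} food + {b}")
```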

Types:
Below are the three types of factorial design:
I. Within-subjects design (repeated measures): Each participant is tested at
every level of all of the independent variables, i.e. in all conditions. In the
above example, if A and B are within-subjects, each participant will be
tested in all 6 conditions, i.e. VA + NA + VC + NC + VM + NM.
II. Between-subjects design (non-repeated measures): Each participant is
tested at only one level of each independent variable, i.e.
in only one condition. In the above example, if A and B are between-
subjects, each participant is subjected to one of the 6 conditions, i.e. VA
or NA or VC or NC or VM or NM.
III. Mixed design: In this design, some variables are within-subjects and some
are between-subjects. In the example above, if A is within-subjects and B
is between-subjects, a participant can be subjected to either VA + NA, or
VC + NC, or VM + NM.
Advantages:
I. Manipulation and control of two or more independent variables at the
same time is possible with factorial design.
II. The main effects as well as the interaction effects of multiple independent
variables can be studied.
III. It is more precise than a single-factor design.
IV. It has more practical implications, as in real life there is usually
more than one independent variable in play.
Limitations
I. It becomes increasingly complex as the number of independent variables
increases.
II. With the increased number of conditions to be tested, it becomes difficult
for the experimenter to select a homogeneous group for each.

9. Content of research report.


A research report contains a detailed summary of the overall findings and
serves as a guide for future research. Its content generally includes the items
below, in order:
• Title (research topic)
• Table of contents
• Executive summary
• Purpose
• Background of study subjects
• Methodology followed for conducting the experiment
• Results and findings
• Interpretation and conclusion
• Limitations
• Recommendations and implications
• References
• Appendices

10. Objectivity safeguards in research process.


Objectivity safeguards minimize subjectivity in the research process. They are
maintained through:
• Procedural safeguards, i.e. keeping complete records of
observations and data analysis.
• Standardization of procedures, i.e. treating all participants in the
same standard way.
• Operationalization of concepts, i.e. defining concepts in terms of
how they are measured.
• Control procedures to avoid biases.

11. Types of constructs


A construct is a form of concept, invented to study a topic. It has two types:
I. Intervening (mediator) variables: A theoretical (non-observable) variable
explaining the causal link between other variables, e.g. occupation can be
an intervening variable between education and income.
II. Hypothetical constructs: A theoretical construct describing something
real, but not directly observable, e.g. motivation.

12. Quota sampling.


It is a form of non-probability sampling, i.e. the sample is selected from the
population using a subjective (non-random) method. In quota sampling, the
researcher identifies relevant categories of the population and then recruits
respondents to fill a predetermined quota (size) for each category, e.g. a
sample of 5 male and 5 female students of grade I.
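A minimal sketch of the idea, assuming hypothetical student records and a 5/5 gender quota, is given below: respondents are recruited non-randomly (first come, first served) until each category's quota is filled.

```python
# Sketch of quota sampling with hypothetical records: fill a fixed quota per
# category using a non-random, first-come recruitment rule.
import random

students = [{"id": i, "gender": random.choice(["male", "female"])} for i in range(100)]

quota = {"male": 5, "female": 5}
counts = {"male": 0, "female": 0}
sample = []

for student in students:
    g = student["gender"]
    if counts[g] < quota[g]:
        sample.append(student)
        counts[g] += 1
    if all(counts[c] >= quota[c] for c in quota):
        break

print("Recruited per category:", counts, "| sample size:", len(sample))
```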

13. Advantages and disadvantages of survey research.


Survey research collects data by asking subjects questions using non-
experimental methods (e.g., a pen-and-pencil feedback form). Its advantages
include being convenient, less time consuming and economical. The researcher
also gets a chance to fully explain the reasons behind the study, improving the
likelihood of honest responses. Disadvantages include difficulty in maintaining
privacy and a high attrition rate among respondents.
14. Differences between Ex-post Facto and Experimental research.
• In experimental research, variables are manipulated to study their effect,
whereas in ex-post facto research (done after the event's occurrence),
manipulation of variables is not possible.
• In experimental research, interpretation is easier as variables can be
manipulated, whereas in ex-post facto research, interpretation is difficult
as the effect might be due to other variables too.
15. Criteria for a good research design.
Criteria for a good research design are:
• A clear research objective and operational definitions of concepts
• Explicit planning and description of the research procedure for further
advancement
• A standard research design to minimize subjectivity
• Appropriate and adequate data analysis to reveal the significance of the findings
• Verification of the validity and reliability of the data
• Precise reporting of findings

16. Quasi-experimental research design.


Quasi-experimental design resembles (quasi means resembling) a true
experimental design, but the subjects are assigned to groups non-randomly.
The experiment is conducted after the independent variable has already
occurred, and hence the researcher studies the effects after the occurrence of
the variable. It is useful when true experiments cannot be used for ethical or
practical reasons.

17. Advantages of correlational research design


A correlational research design investigates relationships between variables
without manipulating them or establishing a causal relationship. Advantages include:
• The study of correlations between variables helps in the generation of new
hypotheses.
• Easy availability of data makes it usable for a wide variety of cases.
• No repeated administration of the test is needed, which avoids the impact
on responses of a pretest administration.

18. Ethnography.
Ethnography is a research process focused on a particular community. It
basically intends to study culture through close observation and active
participation, focusing on the socio-cultural aspects of the community. For
example, studying the religious or social practices of Kashmiri Pandits.
Ethnography can also be used for comparative analysis of cultural groups.
