Outline Research Methods
30.03.2012
Social Constructionism:
Panpsychistic: the (social) world exists only in the cognition of the individual, which is itself a linguistic construction. The world is explained through interpretations and narrative analyses of everyday life. The deductivist approach suffices, which leads to probabilistic assertions (A is more likely than B). If science should be defined through experiments that can be exactly duplicated (Gergen, 1995), then geology would not be a scientific discipline.
Contextualism/Perspectivism:
Both started as a reaction against a mechanistic model of behavior. Pure knowledge in itself does not exist. Circumstances acquire meaning only through context, such as societal systems and culture, and are true only within those boundaries (contextualism).
Therefore, all hypotheses and theories are true (and false) depending on the perspective. Thus, change (which is constrained by nature) must be intrinsic, and no single theory can account for everything. This doctrine is called methodological pluralism and theoretical ecumenism (no universal laws without borders).
Evolutionary Epistemology:
The core idea is that successful theories and knowledge in science evolve in a competition for survival. Theories survive because of their usefulness, independently of context (contrast with contextualism). Ultimately there is an a priori reality waiting to be discovered.
Aesthetics:
Perceptible images are evaluated on their beauty, probably a basic psychological component (theories in science can also be beautiful or elegant).
Neuroscience: Most micro; biological and biochemical factors.
Cognition: More micro; thinking and reasoning.
Social Psychology: More macro; interpersonal and group factors.
Sociology: Most macro; societal systems.
Chapter 2
Contexts of Discovery and Justification
Inspiration and Exploration:
Discovery refers to the origin of ideas or the genesis of theories and hypotheses. Justification refers to the processes by which hypotheses are empirically adjudicated. Null hypothesis significance testing is a dichotomous decision-making paradigm (either/or).
Theoretical definitions do not attempt to force our thinking into a rigidly empirical mold. A typology is a systematic classification of types used to condense the definition of a psychological concept. Facet analysis formulates a classification system based on assumed structural patterns or dimensions (cf. factor analysis). Coherence means whether the precise statement of the hypothesis fits together logically. Parsimony describes the simplicity of a statement (Occam's razor).
Positivism:
Embracing the positive, observational stance (as opposed to negativism). Statements authenticated by sensory experience are more likely to be true (see the Vienna Circle). Hume: all knowledge resolves itself into probability, and thus it is impossible to prove beyond doubt that a generalization is incontrovertibly true.
Falsificationism:
An antipositivist view resting on inescapable conclusions (a single falsifying observation refutes a theory conclusively, whereas no amount of confirmation proves it). It is argued that theories might have boundaries that will never be explored.
Conventionalism:
The Duhem-Quine thesis plays on the role of language, meaning that theories evolve on the basis of certain linguistic conventions (such as simplicity).
An Amalgamation of Ideas:
The views about the requirements of scientific theories and hypotheses now seem a mixture of falsificationism, conventionalism, and practicality:
i. Finite testability.
ii. Falsifiability.
iii. Theories can add to or replace outmoded models.
iv. If a theory is not supported, it may not be right.
v. Though, if there is support, it may not be true.
vi. There are always alternative explanations.
A Type I error is the mistake of rejecting H0 when it is true; a Type II error is the mistake of not rejecting H0 when it is false. The significance level, against which the p-value is compared, indicates the probability of a Type I error and is denoted as alpha. The probability of a Type II error is symbolized as beta. Confidence, 1 - alpha, is the probability of not making a Type I error. Power, 1 - beta, is the probability of not making a Type II error (sensitivity).
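A minimal Python sketch to make the alpha/beta/power relationships concrete (the effect size, sample size, and the one-sided one-sample z-test setup are my own illustrative assumptions, not from the text):

```python
from scipy.stats import norm

alpha = 0.05            # Type I error rate; confidence = 1 - alpha
effect_size = 0.5       # assumed standardized effect under H1 (illustrative)
n = 30                  # assumed sample size (illustrative)

z_crit = norm.ppf(1 - alpha)          # critical z for rejecting H0
z_shift = effect_size * n ** 0.5      # expected z under H1
beta = norm.cdf(z_crit - z_shift)     # Type II error rate
power = 1 - beta                      # sensitivity
print(f"beta = {beta:.3f}, power = {power:.3f}")
```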
Chapter 3
Ethical Considerations, Dilemmas, and Guidelines
Puzzles and Problems:
A controversy between Wittgenstein and Popper about their views of philosophy: there are no problems in philosophy, only linguistic puzzles revealing the misuse of language (Wittgenstein). Ethics has to do with the values by which the conduct of individuals is morally evaluated.
Doing good (beneficence) or doing no harm (nonmaleficence). Debriefing is required if deception is used; this is also referred to as dehoaxing. It can also be useful for disclosing information that was not revealed before.
Principle IV Trust:
Confidentiality is intended to ensure the subjects' privacy through procedures for protecting the data (see Certificate of Confidentiality).
If we judge the internal-consistency reliability of a test to be too low, we can increase the value of R by increasing the number of items, as long as the items remain reasonably homogeneous. The Spearman-Brown equation is particularly useful for estimating the total reliability of a test after increasing the number of test items.
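A minimal Python sketch of the Spearman-Brown prophecy formula (the function name and the example numbers are my own, for illustration only):

```python
def spearman_brown(r_original, length_factor):
    """Predicted reliability when test length is multiplied by length_factor,
    assuming the added items stay reasonably homogeneous with the originals."""
    return (length_factor * r_original) / (1 + (length_factor - 1) * r_original)

# e.g. doubling a test whose current reliability is .60 (illustrative numbers)
print(spearman_brown(0.60, 2))  # -> 0.75
```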
Cohen's Kappa:
Cohen's kappa adjusts percentage agreement for agreement expected by chance (e.g., from simple lack of variability), improving the measurement of interjudge agreement: observed agreements minus expected agreements, divided by the total number of cases minus the expected agreements. Omnibus statistical procedures have the problem that it is difficult to tell which statements are reliable and which are not; focused statistical procedures test a specific statement.
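A minimal sketch of the kappa computation described above (the function name and the example counts are hypothetical):

```python
def cohens_kappa(observed, expected, n_cases):
    """Kappa = (O - E) / (N - E): agreement corrected for chance-expected agreement."""
    return (observed - expected) / (n_cases - expected)

# Illustrative: 80 observed agreements, 50 expected by chance, 100 cases
print(cohens_kappa(80, 50, 100))  # -> 0.6
```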
Replication in Research:
Replication (repeatability) can only be relative; no exact replication will ever be possible, and several factors affect the utility of a replication:
i. When the replication is conducted: early replications are typically more useful.
ii. How the replication is conducted: a precise replication is intended to be as close as possible to the original, whereas a varied replication intentionally changes an aspect of it.
iii. Who conducted the replication, because of the problem of correlated replicators (the same person replicating a finding over and over again). Unfortunately and fortunately, some researchers are precorrelated by virtue of their common interests.
Chapter 5
Observations, Judgments, and Composite Variables
Observing, Classifying, and Evaluating:
Qualitative data refer to data in written form, records, etc.; quantitative data consist of numerical data.
Some influences on the research situation are not controlled for and have a direct impact on the reactions of research participants. Reactive measures affect the behavior that is being measured, whereas nonreactive measures do not.
Archives:
Two subcategories exist in archival research: running records, such as actuarial data (birth, marriage, a Facebook timeline), and personal documents.
Physical Traces:
Simple unobtrusive observations are observations that are not apparent to the person being observed.
Unobtrusive Observation:
Contrived unobtrusive observations are unobtrusive measures in manipulated situations.
Forced-Choice Judgments:
Forced-choice judgment is a procedure used to overcome the halo error. The halo error refers to a type of response set in which the person being evaluated is judged in terms of a general impression.
Numerical Formats:
A numerical format has numbers as anchors, followed by a description that gives those numbers meaning and a specific context for evaluation.
Graphic Formats:
Graphic formats are simply straight lines (often implicitly divided into different segments) on which the judge or participant indicates his or her position (attitude) with regard to the construct in question, relative to the anchors.
Magnitude Scaling:
Magnitude scaling is a format in which the upper range of the score is not defined but is left to the interpretation of the judge (open-ended).
If variables are highly correlated with each other, it is hard to treat them as being different; thus, forming composite variables is conceptually beneficial. If you combine variables, you obtain more accurate estimates of the relationships with other composite variables and you reduce the number of predictors.
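A minimal sketch of forming a composite (the data are simulated, and z-scoring before averaging is one common convention, not something prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=50)
# three highly correlated predictors (made up for illustration)
items = np.column_stack([base + rng.normal(scale=0.3, size=50) for _ in range(3)])

z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)  # standardize each variable
composite = z.mean(axis=1)                                     # one composite replaces three predictors
```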
The Intra/Intermatrix:
The intra average is the average correlation between variables within a single composite; the inter average characterizes the level of relationship between one composite variable and another.
The r Method:
In this method the point-biserial correlation is computed. This is the Pearson r where one of the variables is continuous and the other is dichotomous. The rpb is computed between the mean correlations of the intra/inter matrix (continuous) and their dichotomously coded location on versus off the principal diagonal. The more positive the correlation, the higher the intra correlations are relative to the inter correlations.
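A minimal sketch of the r method (the matrix of mean rs and its 2x2 layout are invented for illustration; scipy's point-biserial function supplies the correlation):

```python
from scipy.stats import pointbiserialr

# Mean correlations from a hypothetical 2x2 intra/inter matrix, flattened:
# composite A with A, A with B, B with A, B with B
mean_rs = [0.55, 0.20, 0.20, 0.60]
on_diagonal = [1, 0, 0, 1]   # 1 = intra (on the principal diagonal), 0 = inter

r_pb, p = pointbiserialr(on_diagonal, mean_rs)
print(r_pb)  # more positive r_pb -> intra correlations exceed inter correlations
```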
The g Method:
g is the difference between the mean of the mean rs on the diagonal and the mean of the mean rs off the diagonal, divided by the weighted s combined from the on-diagonal (intra) and off-diagonal (inter) values of r.
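A minimal sketch of the g method (illustrative numbers; pooling the s weighted by degrees of freedom is one reasonable reading of "weighted s"):

```python
import math
import statistics

def g_method(intra_rs, inter_rs):
    """Mean on-diagonal r minus mean off-diagonal r, divided by a pooled s."""
    diff = statistics.mean(intra_rs) - statistics.mean(inter_rs)
    n1, n2 = len(intra_rs), len(inter_rs)
    s1, s2 = statistics.stdev(intra_rs), statistics.stdev(inter_rs)
    pooled_s = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return diff / pooled_s

print(g_method([0.55, 0.60], [0.20, 0.25]))  # illustrative mean rs
```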
Characteristics of Randomization:
The rare instances in which very large differences between conditions exist even before the treatments are administered are sometimes referred to as failures of randomization. Another variation on the randomized designs noted before is to use pretest measurements to establish baseline scores for all subjects.
Statistically, randomness means that each participant has the same probability of being chosen for a particular group.
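A minimal sketch of random assignment with equal probabilities (the participant IDs and group sizes are hypothetical):

```python
import random

participants = list(range(20))      # hypothetical participant IDs
random.shuffle(participants)        # every participant has the same chance of each slot
treatment, control = participants[:10], participants[10:]
```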
As an example, the barometer falls before it rains, but a falling barometer doesn't cause the rain. Similarly, Monday precedes Tuesday, but it is absurd to say that Monday causes Tuesday. Therefore, temporal precedence alone is not enough; there is a missing ingredient for causality.
The group given the treatment (experimental condition) resembles the method of agreement, whereas the group not given the drug (control condition) resembles the method of difference. The method described above is referred to as the joint method of agreement and difference.
v. Selection is a potential threat when there are unsuspected differences between the participants in each condition; random allocation is not a guarantee of comparability between groups.
Representative research design describes an idealized experimental model in which everything is perfect. Ecologically valid experiments are those that satisfy this criterion of representativeness. A single stimulus design (e.g., use of experimenters of one sex) has two major limitations.
i. If there are differences in results after using a second stimulus (e.g., a female experimenter), we cannot conclude whether the original results are still valid or were confounded, which is a threat to internal validity.
ii. If we fail to find differences, this could be due to the presence of an uncontrolled stimulus variable operating either to counteract an effect or to increase it artificially to a ceiling value.
The use of convenience samples is a topic of major debate (see the convenience sample of mice, Hull/Tolman).
The Hawthorne effect describes how human subjects behave in a special way because they know they are subjects and under investigation. Saul Rosenzweig (one badass name!) describes three artifacts:
i. Observational attitude of the experimenter.
ii. Motivational attitude.
iii. Errors of personality influence (e.g., warmth or coolness of the experimenter).
The dustbowl empiricist view emphasizes only observable responses as acceptable data in science, leaving out all cognitive accounts.
The use of replications is among the most powerful tools available to control for these kinds of artifacts. Experimenter expectancy is a virtual constant in science and may lead to a self-fulfilling prophecy.
Chapter 8
Nonrandomized Research and Functional Relationships
Nonrandomized and Quasi-Experimental Studies:
Quasi-experimental refers to experiments that lack the full control over the scheduling of experimental stimuli that makes randomized experiments possible. Association implies covariation to some degree. Methodological pluralism is necessary because all research designs are limited in some way (it depends). There are four types of nonrandomized strategies.
i. Nonequivalent-groups designs, in which the researchers do not have any control over the assignment to groups (historical control trials).
ii. Interrupted time-series designs use large numbers of consecutive outcome measures that are interrupted by a critical intervention; the objective is to assess the causal impact of the intervention by comparing before and after measurements.
iii. Single-case studies, primarily used as detection experiments and frequently in neuroscience.
iv. Correlational designs, characterized by the simultaneous observation of interventions and their possible outcomes (retrospective covariation of X and Y).
Diachronic research is the tracking of the variable of interest over successive periods of time. Synchronic research is the name for studies that take a slice of time and examine behavior only at one point.
Correlational designs and cross-lagged panel designs are frequently used in the behavioral sciences. Cross-lagged implies that some data points are treated as temporally lagged values of the outcome measures. Panel design is another name for longitudinal research (increased precision of treatment and an added time component in order to detect temporary changes). Observed bivariate correlations can be too high, too low, spurious, or accurate (causation?) depending on the pattern of relationships among the variables in the structure that actually generated the data. The absence of correlation in cross-lagged designs is not proof of the absence of causation. Three sets of paired correlations are represented (see the sketch below):
i. Test-retest correlations (rA1A2, rB1B2)
ii. Synchronous correlations (rA1B1, rA2B2)
iii. Cross-lagged correlations (rA1B2, rB1A2)
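A minimal sketch of the three sets of paired correlations, using simulated panel data (a1, b1, a2, b2 stand for variables A and B at times 1 and 2; the numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
a1, b1 = rng.normal(size=100), rng.normal(size=100)
a2 = 0.6 * a1 + rng.normal(size=100)   # A at time 2 depends partly on A at time 1
b2 = 0.6 * b1 + rng.normal(size=100)

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

test_retest  = r(a1, a2), r(b1, b2)    # rA1A2, rB1B2
synchronous  = r(a1, b1), r(a2, b2)    # rA1B1, rA2B2
cross_lagged = r(a1, b2), r(b1, a2)    # rA1B2, rB1A2
```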
In fact, relationships are seldom stationary but are usually lower over longer lapses of time, described as temporal erosion; the reduction is also called attenuation, and the leftover is referred to as the residual.
Longitudinal data are usually collected prospectively, but the data can also be obtained retrospectively from historical records.
Chapter 9
Randomly and Nonrandomly Selected Sampling Units
Sampling a Small Part of the Whole:
Probability sampling, in which:
i. Every sampling unit has a known nonzero probability of being selected.
ii. The units are randomly drawn.
iii. The probabilities are taken into account in making estimates from the sample.
Convenience samples raise the concern about generalizability (e.g., student samples). The paradox of sampling implies that a sample is of no use if it is not representative of the population; but in order to know that it is representative, you would need to know the characteristics of the whole population, and then you would not need the sample in the first place. A sampling plan specifies how the respondents will be selected; this is one way to overcome the paradox of sampling, through careful consideration of the procedure by which the sample is obtained.
Random sampling without replacement describes the procedure in which a selected unit cannot be reselected and must be disregarded on any later draw; random sampling with replacement refers to the possibility of reselecting a unit.
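A minimal sketch contrasting the two procedures (the sampling frame of 100 units is hypothetical):

```python
import random

population = list(range(100))                           # hypothetical sampling frame
without_replacement = random.sample(population, k=10)   # a unit can be drawn at most once
with_replacement = random.choices(population, k=10)     # the same unit may be drawn repeatedly
```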
The volunteer subject problem can be understood as a variant of the problem of nonresponse bias. Approaches to comparing volunteers and nonvolunteers range from looking through archives and recruiting volunteers to comparisons with second-stage volunteers, etc. Volunteering might have both general and specific predictors.
Explaining to participants the significance of the research (and giving up trivial research) increases the likelihood of authentic participation.
MNAR (missing not at random): the missingness is related to variables of substantive interest and cannot be fully accounted for by other variables.
In multiple imputation, each missing observation is replaced not by a single estimate but by a set of m reasonable estimates, yielding m pseudocomplete data sets; these are later combined to obtain a more accurate estimate of variability than is possible with single-imputation techniques. These procedures tend to be much simpler computationally than Bayesian or maximum likelihood estimation and are very useful.
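A minimal sketch of the multiple-imputation idea (a deliberately simplistic imputation model with made-up data, only to show the m pseudocomplete data sets and their combination):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([4.0, 6.0, np.nan, 5.0, np.nan, 7.0])   # hypothetical variable with missing values
m = 5                                                # number of imputations

estimates = []
for _ in range(m):
    filled = x.copy()
    observed = x[~np.isnan(x)]
    # draw plausible values from the observed distribution (simplistic assumption)
    filled[np.isnan(filled)] = rng.normal(observed.mean(), observed.std(ddof=1),
                                          np.isnan(x).sum())
    estimates.append(filled.mean())                  # estimate from each pseudocomplete data set

pooled_mean = np.mean(estimates)                     # combined point estimate
between_imputation_var = np.var(estimates, ddof=1)   # extra variability due to missingness
```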