Statistical inference
Statistical inference is the process of using data analysis to deduce properties of an underlying probability
distribution.[1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and
deriving estimates. It is assumed that the observed data set is sampled from a larger population.
Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties
of the observed data, and it does not rest on the assumption that the data come from a larger population.
Contents
Introduction
Models and assumptions
  Degree of models/assumptions
  Importance of valid models/assumptions
Approximate distributions
Randomization-based models
Model-based analysis of randomized experiments
Paradigms for inference
  Frequentist inference
  Bayesian inference
  Likelihood-based inference
  AIC-based inference
  Minimum description length
  Fiducial inference
  Structural inference
Inference topics
See also
Notes
References
  Citations
  Sources
Further reading
External links
Introduction
Statistical inference makes propositions about a population, using data drawn from the population with some form of
sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of
(first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the
model.
Konishi & Kitagawa state, "The majority of the problems in statistical inference can be considered to be problems related
to statistical modeling".[2] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to
statistical model is done is often the most critical part of an analysis".[3]
The conclusion of a statistical inference is a statistical proposition.[4] Some common forms of statistical proposition are
the following:
a point estimate, i.e. a particular value that best approximates some parameter of interest (a point and an interval estimate are sketched in code after this list);
an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn
from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter
value with the probability at the stated confidence level;
a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
rejection of a hypothesis;[note 1]
clustering or classification of data points into groups.
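For illustration, a minimal Python sketch of the first two forms (assuming NumPy and SciPy are available; the data are simulated rather than drawn from any real population):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(loc=5.0, scale=2.0, size=100)  # simulated sample

    # Point estimate: the sample mean approximates the population mean.
    point = data.mean()

    # 95% confidence interval: under repeated sampling of such datasets,
    # about 95% of intervals constructed this way would contain the true mean.
    se = data.std(ddof=1) / np.sqrt(len(data))
    low, high = stats.t.interval(0.95, df=len(data) - 1, loc=point, scale=se)
    print(point, (low, high))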
Models and assumptions
Degree of models/assumptions
Statisticians distinguish between three levels of modeling assumptions:
Fully parametric: The probability distributions describing the data-generation process are assumed to be fully
described by a family of probability distributions involving only a finite number of unknown parameters.[5] For
example, one may assume that the distribution of population values is truly Normal, with unknown mean and
variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a
widely used and flexible class of parametric models.
Non-parametric: The assumptions made about the process generating the data are much less than in parametric
statistics and may be minimal.[7] For example, every continuous probability distribution has a median, which may be
estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the
data arise from simple random sampling. (Both estimators are sketched in code after this list.)
Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For
example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the
mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption)
but not make any parametric assumption describing the variance around that mean (i.e. about the presence or
possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into
'structural' and 'random variation' components. One component is treated parametrically and the other non-
parametrically. The well-known Cox model is a set of semi-parametric assumptions.
Importance of valid models/assumptions
Incorrect assumptions of 'simple' random sampling can invalidate statistical inference.[8] More complex semi- and fully
parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases
lead to faulty conclusions.[9] Incorrect assumptions of Normality in the population also invalidate some forms of
regression-based inference.[10] The use of any parametric model is viewed skeptically by most experts in sampling human
populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements
about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have
distributions that are nearly normal."[11] In particular, a normal distribution "would be a totally unrealistic and
catastrophically unwise assumption to make if we were dealing with any kind of economic population."[11] Here, the
central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally
distributed, if the distribution is not heavy-tailed.
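A small simulation makes this caveat concrete. The Python sketch below (with illustrative choices of distribution and sample size) shows that sample means from a skewed but light-tailed population are close to normal, while sample means from a heavy-tailed Cauchy population are not:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def sample_means(draw, n=50, reps=10_000):
        # Empirical distribution of the sample mean over repeated samples.
        return draw((reps, n)).mean(axis=1)

    # Skewed but light-tailed population: the CLT approximation works.
    exp_means = sample_means(lambda size: rng.exponential(1.0, size=size))
    print(stats.skew(exp_means))  # close to 0: roughly normal

    # Heavy-tailed population: the sample mean never settles down.
    cauchy_means = sample_means(lambda size: rng.standard_cauchy(size=size))
    print(np.percentile(cauchy_means, [1, 99]))  # still extremely dispersed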
Approximate distributions
Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for
approximating these.
With finite samples, approximation results measure how closely a limiting distribution approaches the statistic's sampling
distribution: for example, with 10,000 independent samples the normal distribution approximates (to two digits of
accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem.[12] Yet for
many practical purposes, the normal approximation provides a good approximation to the sample mean's distribution
when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience.[12]
Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to
quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this
approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and
the Hellinger distance.[13][14][15]
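Between two univariate normal distributions, for instance, these quantities have closed forms; a Python sketch (NumPy only):

    import numpy as np

    def kl_normal(m1, s1, m2, s2):
        # Kullback-Leibler divergence D(N(m1, s1^2) || N(m2, s2^2)).
        return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

    def hellinger_normal(m1, s1, m2, s2):
        # Hellinger distance between N(m1, s1^2) and N(m2, s2^2).
        bc = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * np.exp(
            -((m1 - m2) ** 2) / (4 * (s1**2 + s2**2)))
        return np.sqrt(1 - bc)

    print(kl_normal(0, 1, 0.5, 1.2))         # asymmetric in its arguments
    print(hellinger_normal(0, 1, 0.5, 1.2))  # a genuine metric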
With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting
distribution, if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite
samples.[16][17][18] However, the asymptotic theory of limiting distributions is often invoked for work with finite samples.
For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized
estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the
limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using
simulation.[19] The heuristic application of limiting results to finite samples is common practice in many applications,
especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families).
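For example, the error of invoking a limiting distribution can be assessed by simulating the actual coverage of a nominal 95% interval when the population is skewed and the sample is small. A Python sketch with illustrative parameters:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, reps, true_mean = 10, 20_000, 1.0

    covered = 0
    for _ in range(reps):
        x = rng.exponential(true_mean, size=n)  # skewed population
        se = x.std(ddof=1) / np.sqrt(n)
        low, high = stats.t.interval(0.95, df=n - 1, loc=x.mean(), scale=se)
        covered += low <= true_mean <= high

    print(covered / reps)  # actual coverage, to compare with the nominal 0.95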
Randomization-based models
For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the
null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the
randomization design. In frequentist inference, randomization allows inferences to be based on the randomization
distribution rather than a subjective model, and this is important especially in survey sampling and design of
experiments.[20][21] Statistical inference from randomized studies is also more straightforward than many other
situations.[22][23][24] In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling
without replacement ensures the exchangeability of the sample with the population; in randomized experiments,
randomization warrants a missing at random assumption for covariate information.[25]
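The simplest instance is a randomization (permutation) test. The Python sketch below re-computes a difference in means over random re-assignments of illustrative treatment labels, approximating the randomization distribution by Monte Carlo sampling rather than full enumeration:

    import numpy as np

    rng = np.random.default_rng(4)
    treatment = np.array([5.1, 4.8, 6.0, 5.6, 5.9])  # illustrative outcomes
    control = np.array([4.2, 4.9, 4.4, 5.0, 4.6])

    observed = treatment.mean() - control.mean()
    pooled = np.concatenate([treatment, control])
    n_t = len(treatment)

    # Evaluate the statistic under re-assignments the design could have made.
    diffs = np.empty(10_000)
    for i in range(diffs.size):
        perm = rng.permutation(pooled)
        diffs[i] = perm[:n_t].mean() - perm[n_t:].mean()

    # Two-sided p-value from the (approximated) randomization distribution.
    print(np.mean(np.abs(diffs) >= abs(observed)))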
Objective randomization allows properly inductive procedures.[26][27][28][29] Many statisticians prefer randomization-
based analysis of data that was generated by well-defined randomization procedures.[30] (However, it is true that in fields
of science with developed theoretical knowledge and experimental control, randomized experiments may increase the
costs of experimentation without improving the quality of inferences.[31][32]) Similarly, results from randomized
experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do
observational studies of the same phenomena.[33] However, a good observational study may be better than a bad
randomized experiment.
Model-based analysis of randomized experiments
The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental
protocol and does not need a subjective model.[34][35]
However, some hypotheses cannot be tested using objective statistical models that accurately describe randomized
experiments or random samples. In some cases, such randomized studies are uneconomical or unethical.
Paradigms for inference
Bandyopadhyay & Forster[37] describe four paradigms: "(i) classical statistics or error statistics, (ii) Bayesian statistics, (iii)
likelihood-based statistics, and (iv) the Akaikean-Information Criterion-based statistics". The classical (or frequentist)
paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the AIC-based paradigm are summarized below.
Frequentist inference
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population
distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated
sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification
may be challenging.
One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency
probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman[38] develops
these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a
rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability
need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of
conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned
on unknown parameters) probabilities used in the frequentist approach.
The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility
functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility
functions. In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators,
or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss
functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality
property.[39] However, loss-functions are often useful for stating optimality properties: for example, median-unbiased
estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares
estimators are optimal under squared error loss functions, in that they minimize expected loss.
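The last claim is easy to check numerically. In the Python sketch below (simulated, skewed data), the value minimizing mean absolute loss is essentially the sample median, and the value minimizing mean squared loss is essentially the sample mean:

    import numpy as np

    rng = np.random.default_rng(5)
    x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # skewed sample

    grid = np.linspace(0.1, 3.0, 300)
    abs_loss = [np.mean(np.abs(x - c)) for c in grid]
    sq_loss = [np.mean((x - c) ** 2) for c in grid]

    print(grid[np.argmin(abs_loss)], np.median(x))  # nearly equal
    print(grid[np.argmin(sq_loss)], x.mean())       # nearly equal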
While statisticians using frequentist inference must choose for themselves the parameters of interest and the
estimators/test statistics to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist
procedures to become widely viewed as 'objective'.[40]
Bayesian inference
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one,
and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical
propositions. There are several different justifications for using the Bayesian approach.
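A minimal example of posterior computation is conjugate Beta–Binomial updating, sketched in Python (with an illustrative prior and illustrative data; SciPy assumed available):

    from scipy import stats

    # Beta(1, 1) prior on a success probability; 7 successes in 20 trials.
    prior_a, prior_b = 1, 1
    successes, trials = 7, 20

    # Conjugacy: the posterior is again a Beta distribution.
    posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

    print(posterior.mean())          # a posterior point summary
    print(posterior.interval(0.95))  # a 95% credible interval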
Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is
the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore
automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian
inference can be made for essentially any problem, although not every statistical inference need have a Bayesian
interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures
which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of
Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference
should not conclude with the evaluation and summarization of posterior beliefs.
Likelihood-based inference
Likelihoodism approaches statistics by using the likelihood function. Some likelihoodists reject inference, considering
statistics as only computing support from evidence. Others, however, propose inference based on the likelihood function,
of which the best-known is maximum likelihood estimation.
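A Python sketch of maximum likelihood estimation for an exponential model (simulated data), comparing a numerical optimum with the closed-form answer:

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(6)
    x = rng.exponential(scale=2.0, size=500)  # true rate = 0.5

    def neg_log_likelihood(rate):
        # Exponential log-likelihood: n*log(rate) - rate*sum(x), negated.
        return -(len(x) * np.log(rate) - rate * x.sum())

    result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0),
                             method="bounded")
    print(result.x, 1 / x.mean())  # numerical MLE vs closed-form MLE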
AIC-based inference
The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data.
Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models.
Thus, AIC provides a means for model selection.
AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to
represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the
model and the simplicity of the model.)
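A Python sketch of AIC-based selection between two candidate models fitted to the same simulated data, using AIC = 2k − 2·(maximized log-likelihood):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    x = rng.standard_t(df=5, size=300)  # data from a heavy-tailed source

    def aic(log_likelihood, k):
        # AIC = 2k - 2 * maximized log-likelihood; lower is better.
        return 2 * k - 2 * log_likelihood

    # Candidate 1: normal model (2 fitted parameters).
    mu, sigma = stats.norm.fit(x)
    aic_normal = aic(stats.norm.logpdf(x, mu, sigma).sum(), k=2)

    # Candidate 2: Student-t model (3 fitted parameters).
    df_, loc, scale = stats.t.fit(x)
    aic_t = aic(stats.t.logpdf(x, df_, loc, scale).sum(), k=3)

    print(aic_normal, aic_t)  # prefer the model with the smaller AIC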
However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it
provides the MDL description of the data, on average and asymptotically.[43] In minimizing description length (or
descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori
estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability
model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent
sampling.[43][44]
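As a rough illustration only, a crude two-part code assigns each model the cost of describing its fitted parameters plus the cost of describing the data given the model. The Python sketch below uses the common (k/2)·log n parameter cost, a considerable simplification of the full MDL machinery:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    x = rng.normal(0.0, 1.0, size=200)

    def description_length(log_likelihood, k, n):
        # Crude two-part code: parameter cost plus data cost (in nats).
        return 0.5 * k * np.log(n) - log_likelihood

    mu, sigma = stats.norm.fit(x)
    dl_normal = description_length(stats.norm.logpdf(x, mu, sigma).sum(),
                                   k=2, n=len(x))

    df_, loc, scale = stats.t.fit(x)
    dl_t = description_length(stats.t.logpdf(x, df_, loc, scale).sum(),
                              k=3, n=len(x))

    print(dl_normal, dl_t)  # prefer the model with the shorter description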
The MDL principle has been applied in communication-coding theory in information theory, in linear regression,[44] and
in data mining.[42]
The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity
theory.[45]
Fiducial inference
Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial
distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even
fallacious.[46][47] However, this argument is the same as that which shows[48] that a so-called confidence distribution is not
a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not
necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of
Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities.[49]
Structural inference
Developing ideas of Fisher and of Pitman from 1938 to 1939,[50] George A. Barnard developed "structural inference" or
"pivotal inference",[51] an approach using invariant probabilities on group families. Barnard reformulated the arguments
behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful.
Inference topics
The topics below are usually included in the area of statistical inference.
1. Statistical assumptions
2. Statistical decision theory
3. Estimation theory
4. Statistical hypothesis testing
5. Revising opinions in statistics
6. Design of experiments, the analysis of variance, and regression
7. Survey sampling
8. Summarizing statistical data
See also
Algorithmic inference
Induction (philosophy)
Informal inferential reasoning
Population proportion
Philosophy of statistics
Predictive inference
Information field theory
Notes
1. According to Peirce, acceptance means that inquiry on this question ceases for the time being. In science, all
scientific theories are revisable.
References
Citations
1. Upton, G., Cook, I. (2008) Oxford Dictionary of Statistics, OUP. ISBN 978-0-19-954145-4.
2. Konishi & Kitagawa (2008), p. 75.
3. Cox (2006), p. 197.
4. "Statistical inference - Encyclopedia of Mathematics" (https://www.encyclopediaofmath.org/index.php/Statistical_infer
ence). www.encyclopediaofmath.org. Retrieved 2019-01-23.
5. Cox (2006), p. 2.
6. Evans, Michael; et al. (2004). Probability and Statistics: The Science of Uncertainty (https://books.google.com/books?
id=hkWK8kFzXWIC&printsec=frontcover#v=onepage&q=%22descriptive%20statistics%22&f=false). Freeman and
Company. p. 267. ISBN 9780716747420.
7. van der Vaart, A.W. (1998) Asymptotic Statistics Cambridge University Press. ISBN 0-521-78450-6 (page 341)
8. Kruskal 1988
9. Freedman, D.A. (2008) "Survival analysis: An Epidemiological hazard?". The American Statistician (2008) 62: 110-
119. (Reprinted as Chapter 11 (pages 169–192) of Freedman (2010)).
10. Berk, R. (2003) Regression Analysis: A Constructive Critique (Advanced Quantitative Techniques in the Social
Sciences) (v. 11) Sage Publications. ISBN 0-7619-2904-5
11. Brewer, Ken (2002). Combined Survey Sampling Inference: Weighing of Basu's Elephants. Hodder Arnold. p. 6.
ISBN 978-0340692295.
12. Jörgen Hoffman-Jörgensen's Probability With a View Towards Statistics, Volume I. Page 399
13. Le Cam (1986)
14. Erik Torgerson (1991) Comparison of Statistical Experiments, volume 36 of Encyclopedia of Mathematics. Cambridge
University Press.
15. Liese, Friedrich & Miescke, Klaus-J. (2008). Statistical Decision Theory: Estimation, Testing, and Selection. Springer.
ISBN 978-0-387-73193-3.
16. Kolmogorov (1963, p.369): "The frequency concept, based on the notion of limiting frequency as the number of trials
increases to infinity, does not contribute anything to substantiate the applicability of the results of probability theory to
real practical problems where we have always to deal with a finite number of trials".
17. "Indeed, limit theorems 'as tends to infinity' are logically devoid of content about what happens at any particular .
All they can do is suggest certain approaches whose performance must then be checked on the case at hand." — Le
Cam (1986) (page xiv)
18. Pfanzagl (1994): "The crucial drawback of asymptotic theory: What we expect from asymptotic theory are results
which hold approximately . . . . What asymptotic theory has to offer are limit theorems."(page ix) "What counts for
applications are approximations, not limits." (page 188)
19. Pfanzagl (1994) : "By taking a limit theorem as being approximately true for large sample sizes, we commit an error
the size of which is unknown. [. . .] Realistic information about the remaining errors may be obtained by simulations."
(page ix)
20. Neyman, J.(1934) "On the two different aspects of the representative method: The method of stratified sampling and
the method of purposive selection", Journal of the Royal Statistical Society, 97 (4), 557–625 JSTOR 2342192 (https://
www.jstor.org/stable/2342192)
21. Hinkelmann and Kempthorne(2008)
22. ASA Guidelines for a first course in statistics for non-statisticians. (available at the ASA website)
23. David A. Freedman et alia's Statistics.
24. Moore et al. (2015).
25. Gelman A. et al. (2013). Bayesian Data Analysis (Chapman & Hall).
26. Peirce (1877-1878)
27. Peirce (1883)
28. David Freedman et alia Statistics and David A. Freedman Statistical Models.
29. Rao, C.R. (1997) Statistics and Truth: Putting Chance to Work, World Scientific. ISBN 981-02-3111-3
30. Peirce; Freedman; Moore et al. (2015).
31. Box, G.E.P. and Friends (2006) Improving Almost Anything: Ideas and Essays, Revised Edition, Wiley. ISBN 978-0-
471-72755-2
32. Cox (2006), p. 196.
33. ASA Guidelines for a first course in statistics for non-statisticians. (available at the ASA website)
37. Bandyopadhyay & Forster (2011). The quote is taken from the book's Introduction (p.3). See also "Section III: Four
Paradigms of Statistics".
38. Neyman, J. (1937). "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability".
Philosophical Transactions of the Royal Society of London A. 236 (767): 333–380. doi:10.1098/rsta.1937.0005 (http
s://doi.org/10.1098%2Frsta.1937.0005). JSTOR 91337 (https://www.jstor.org/stable/91337).
39. Preface to Pfanzagl.
40. Little, Roderick J. (2006). "Calibrated Bayes: A Bayes/Frequentist Roadmap". The American Statistician. 60 (3): 213–
223. doi:10.1198/000313006X117837 (https://doi.org/10.1198%2F000313006X117837). ISSN 0003-1305 (https://ww
w.worldcat.org/issn/0003-1305). JSTOR 27643780 (https://www.jstor.org/stable/27643780).
41. Soofi (2000)
42. Hansen & Yu (2001)
43. Hansen and Yu (2001), page 747.
44. Rissanen (1989), page 84
45. Joseph F. Traub, G. W. Wasilkowski, and H. Wozniakowski. (1988)
46. Neyman (1956)
47. Zabell (1992)
48. Cox (2006), p. 66.
49. Hampel 2003.
50. Davison, page 12.
51. Barnard, G.A. (1995) "Pivotal Models and the Fiducial Argument", International Statistical Review, 63 (3), 309–323.
JSTOR 1403482 (https://www.jstor.org/stable/1403482)
Sources
Bandyopadhyay, P. S.; Forster, M. R., eds. (2011), Philosophy of Statistics, Elsevier.
Bickel, Peter J.; Doksum, Kjell A. (2001). Mathematical statistics: Basic and selected topics. 1 (Second (updated
printing 2007) ed.). Prentice Hall. ISBN 978-0-13-850363-5. MR 0443141 (https://www.ams.org/mathscinet-getitem?m
r=0443141).
Cox, D. R. (2006). Principles of Statistical Inference, Cambridge University Press. ISBN 0-521-68567-2.
Fisher, R. A. (1955), "Statistical methods and scientific induction", Journal of the Royal Statistical Society, Series B,
17, 69—78. (criticism of statistical theories of Jerzy Neyman and Abraham Wald)
Freedman, D. A. (2009). Statistical Models: Theory and practice (revised ed.). Cambridge University Press.
pp. xiv+442 pp. ISBN 978-0-521-74385-3. MR 2489600 (https://www.ams.org/mathscinet-getitem?mr=2489600).
Freedman, D. A. (2010). Statistical Models and Causal Inferences: A Dialogue with the Social Sciences (Edited by
David Collier, Jasjeet S. Sekhon, and Philip B. Stark), Cambridge University Press.
Hampel, Frank (Feb 2003). "The proper fiducial argument" (http://e-collection.library.ethz.ch/eserv/eth:26403/eth-2640
3-01.pdf) (PDF) (Research Report No. 114). Retrieved 29 March 2016.
Hansen, Mark H.; Yu, Bin (June 2001). "Model Selection and the Principle of Minimum Description Length: Review
paper" (https://web.archive.org/web/20041116080440/http://www.stat.berkeley.edu/webmastr/users/binyu/ps/mdl.ps).
Journal of the American Statistical Association. 96 (454): 746–774. CiteSeerX 10.1.1.43.6581 (https://citeseerx.ist.ps
u.edu/viewdoc/summary?doi=10.1.1.43.6581). doi:10.1198/016214501753168398 (https://doi.org/10.1198%2F01621
4501753168398). JSTOR 2670311 (https://www.jstor.org/stable/2670311). MR 1939352 (https://www.ams.org/mathsci
net-getitem?mr=1939352). Archived from the original (http://www.stat.berkeley.edu/webmastr/users/binyu/ps/mdl.ps)
on 2004-11-16.
Hinkelmann, Klaus; Kempthorne, Oscar (2008). Introduction to Experimental Design (https://books.google.com/?id=T
3wWj2kVYZgC&printsec=frontcover) (Second ed.). Wiley. ISBN 978-0-471-72756-9.
Kolmogorov, Andrei N. (1963). "On tables of random numbers". Sankhyā Ser. A. 25: 369–375. MR 0178484 (https://w
ww.ams.org/mathscinet-getitem?mr=0178484). Reprinted as Kolmogorov, Andrei N. (1998). "On tables of random
numbers".
(1878 March), "The Doctrine of Chances", Popular Science Monthly, v. 12, March issue, pp. 604 (https://books.go
ogle.com/books?id=ZKMVAAAAYAAJ&jtp=604)–615. Internet Archive Eprint (https://archive.org/stream/popscimo
nthly12yoummiss#page/612/mode/1up).
(1878 April), "The Probability of Induction", Popular Science Monthly, v. 12, pp. 705 (https://books.google.com/boo
ks?id=ZKMVAAAAYAAJ&jtp=705)–718. Internet Archive Eprint (https://archive.org/stream/popscimonthly12youm
miss#page/715/mode/1up).
(1878 June), "The Order of Nature", Popular Science Monthly, v. 13, pp. 203 (https://books.google.com/books?id=
u8sWAQAAIAAJ&jtp=203)–217.Internet Archive Eprint (https://archive.org/stream/popularsciencemo13newy#pag
e/203/mode/1up).
(1878 August), "Deduction, Induction, and Hypothesis", Popular Science Monthly, v. 13, pp. 470 (https://books.go
ogle.com/books?id=u8sWAQAAIAAJ&jtp=470)–482. Internet Archive Eprint (https://archive.org/stream/popularsci
encemo13newy#page/470/mode/1up).
Peirce, C. S. (1883), "A Theory of probable inference", Studies in Logic, pp. 126-181 (https://books.google.com/book
s?id=V7oIAAAAQAAJ&pg=PA126), Little, Brown, and Company. (Reprinted 1983, John Benjamins Publishing
Company, ISBN 90-272-3271-7)
Pfanzagl, Johann; with the assistance of R. Hamböker (1994). Parametric Statistical Theory. Berlin: Walter de
Gruyter. ISBN 978-3-11-013863-4. MR 1291393 (https://www.ams.org/mathscinet-getitem?mr=1291393).
Rissanen, Jorma (1989). Stochastic Complexity in Statistical Inquiry. Series in Computer Science. 15. Singapore:
World Scientific. ISBN 978-9971-5-0859-3. MR 1082556 (https://www.ams.org/mathscinet-getitem?mr=1082556).
Soofi, Ehsan S. (December 2000). "Principal information-theoretic approaches (Vignettes for the Year 2000: Theory
and Methods, ed. by George Casella)". Journal of the American Statistical Association. 95 (452): 1349–1353.
doi:10.1080/01621459.2000.10474346 (https://doi.org/10.1080%2F01621459.2000.10474346). JSTOR 2669786 (http
s://www.jstor.org/stable/2669786). MR 1825292 (https://www.ams.org/mathscinet-getitem?mr=1825292).
Traub, Joseph F.; Wasilkowski, G. W.; Wozniakowski, H. (1988). Information-Based Complexity. Academic Press.
ISBN 978-0-12-697545-1.
Zabell, S. L. (Aug 1992). "R. A. Fisher and Fiducial Argument". Statistical Science. 7 (3): 369–387.
doi:10.1214/ss/1177011233 (https://doi.org/10.1214%2Fss%2F1177011233). JSTOR 2246073 (https://www.jstor.org/s
table/2246073).
Further reading
Casella, G., Berger, R. L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6
Freedman, D.A. (1991). "Statistical models and shoe leather". Sociological Methodology. 21: 291–313.
doi:10.2307/270939 (https://doi.org/10.2307%2F270939). JSTOR 270939 (https://www.jstor.org/stable/270939).
Held L., Bové D.S. (2014). Applied Statistical Inference—Likelihood and Bayes (Springer).
Lenhard, Johannes (2006). "Models and Statistical Inference: the controversy between Fisher and Neyman–Pearson"
(http://www.stats.org.uk/statistical-inference/Lenhard2006.pdf) (PDF). British Journal for the Philosophy of Science.
57: 69–91. doi:10.1093/bjps/axi152 (https://doi.org/10.1093%2Fbjps%2Faxi152).
Lindley, D (1958). "Fiducial distribution and Bayes' theorem". Journal of the Royal Statistical Society, Series B. 20:
102–7.
Rahlf, Thomas (2014). "Statistical Inference", in Claude Diebolt, and Michael Haupert (eds.), "Handbook of
Cliometrics (Springer Reference Series)", Berlin/Heidelberg: Springer.
http://www.springerreference.com/docs/html/chapterdbid/372458.html
Reid, N.; Cox, D. R. (2014). "On Some Principles of Statistical Inference". International Statistical Review. 83 (2):
293–308. doi:10.1111/insr.12067 (https://doi.org/10.1111%2Finsr.12067).
Young, G.A., Smith, R.L. (2005). Essentials of Statistical Inference, CUP. ISBN 0-521-83971-8
External links
MIT OpenCourseWare (http://dspace.mit.edu/handle/1721.1/45587): Statistical Inference
NPTEL Statistical Inference (http://www.nptel.ac.in/courses/111105043/), youtube link (https://www.youtube.com/playli
st?list=PLbMVogVj5nJRkNUH5v9qNEJvW7r2A7rEY)
Statistical induction and prediction (https://www.academia.edu/3247833/)