Assignment - Probability Distributions and Data Modeling


 Bernoulli distribution

In experiments and clinical trials, the Bernoulli distribution is often used to model whether a single
individual experiences an event such as death, a disease, or disease exposure. The model captures the
probability that the person experiences the event in question.
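As a sketch, a single Bernoulli trial can be simulated in Python; the event probability p = 0.3 here is a hypothetical value, not one from the text:

```python
import random

def bernoulli_trial(p, rng=random.Random(0)):
    """Return 1 (event occurs) with probability p, else 0."""
    return 1 if rng.random() < p else 0

# Estimate the event probability from many simulated individuals
# (p = 0.3 is an illustrative value)
p = 0.3
trials = [bernoulli_trial(p) for _ in range(100_000)]
estimate = sum(trials) / len(trials)
```

Averaging many simulated trials recovers the underlying probability, which is the sense in which the model "indicates" the chance of the event.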

 Binomial distribution

The binomial distribution is a probability distribution that summarizes the number of successes in a
fixed number of independent trials, where each trial has the same two possible outcomes (success or
failure) and the same probability of success.
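A minimal sketch of the binomial probability mass function, using only the standard library (the coin-flip numbers are illustrative):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of exactly 3 heads in 10 fair coin flips
prob = binomial_pmf(3, 10, 0.5)
```

Summing the pmf over k = 0..n gives 1, as any valid probability distribution must.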

 Complement

The complement of an event A consists of all outcomes in the sample space that are not in A. An event
and its complement are mutually exclusive and together make up the entire sample space, so their
probabilities sum to 1.

 Conditional probability

The conditional probability of an event B is the probability that the event will occur given the
knowledge that an event A has already occurred. It is written P(B | A).
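As an illustration, conditional probability can be computed by counting outcomes in a small sample space; the two-dice events below are hypothetical examples:

```python
# Sample space: all ordered rolls of two fair dice
space = [(i, j) for i in range(1, 7) for j in range(1, 7)]

A = {(i, j) for (i, j) in space if i + j >= 10}   # sum is at least 10
B = {(i, j) for (i, j) in space if i == 6}        # first die shows 6

# P(B | A) = P(A and B) / P(A)
p_A = len(A) / len(space)
p_A_and_B = len(A & B) / len(space)
p_B_given_A = p_A_and_B / p_A
```

Knowing the sum is at least 10 raises the chance that the first die shows 6 from 1/6 to 1/2.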

 Continuous random variable

A continuous random variable is a random variable where the data can take infinitely many values.
For example, a random variable measuring the time taken for something to be done is continuous since
there are an infinite number of possible times that can be taken.

 Cumulative distribution function

The cumulative distribution function (cdf) gives the probability that the random variable X is less
than or equal to x and is usually denoted F(x). The cumulative distribution function of a random variable
X is the function given by F(x)=P[X≤x].
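A small sketch of F(x) = P[X ≤ x] for a discrete case, using a fair six-sided die as an illustrative example:

```python
# CDF of a fair six-sided die: F(x) = P(X <= x)
pmf = {face: 1 / 6 for face in range(1, 7)}

def cdf(x):
    """Sum the probabilities of all values at or below x."""
    return sum(p for value, p in pmf.items() if value <= x)
```

The cdf rises from 0 (below the smallest value) to 1 (at or above the largest), in steps for a discrete variable.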

 Discrete random variable

A discrete variable is a variable whose value is obtained by counting. A discrete random variable X
has a countable number of possible values.

 Discrete uniform distribution

In statistics, a uniform distribution is a probability distribution in which every possible outcome has
an equal likelihood of occurring. The probability is constant because each outcome has the same chance
of being observed.

 Empirical probability distribution

Empirical probability, also known as experimental probability, is a probability based on historical
data: it estimates the likelihood of an event as the relative frequency with which the event has
actually occurred.

 Event

An event is any subset of the sample space of an experiment, that is, a set of outcomes to which a
probability can be assigned. For example, when rolling a die, "the roll is even" is the event {2, 4, 6}.

 Expected value

Expected value is the long-run average of a random variable: the return you can expect, on average,
from some kind of action, like how many questions you might get right by guessing on a multiple-choice test.
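The guessing example can be sketched numerically; the 20-question, four-choice test is a hypothetical setup:

```python
from math import comb

def expected_value(pmf):
    """E[X]: sum of each value times its probability."""
    return sum(x * p for x, p in pmf.items())

# Guessing on a 20-question test with 4 choices each: X ~ Binomial(20, 0.25)
pmf = {k: comb(20, k) * 0.25**k * 0.75**(20 - k) for k in range(21)}
guesses_expected = expected_value(pmf)
```

The result matches the shortcut n × p = 20 × 0.25 = 5 questions right, on average.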

 Experiment

Probability theory is concerned with random phenomena, or random experiments, whose outcomes cannot be
predicted with certainty. The set of all possible outcomes of a random experiment is called the sample
space of the experiment and is usually denoted by S. Any subset E of the sample space S is called an event.

 Exponential distribution

The exponential distribution is often concerned with the amount of time until some specific event
occurs. For example, the amount of time (beginning now) until an earthquake occurs has an exponential
distribution.
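The waiting-time idea can be sketched with the exponential cdf, P(T ≤ t) = 1 − e^(−λt); the earthquake rate used here is an illustrative assumption:

```python
import math

def exponential_cdf(t, rate):
    """P(waiting time <= t) when events occur at the given average rate."""
    return 1 - math.exp(-rate * t)

# If earthquakes occur on average once every 100 days (rate = 0.01 per day),
# the probability one occurs within the next 100 days:
p_within_mean = exponential_cdf(100, 0.01)
```

Waiting one mean interval gives a probability of about 63%, a characteristic property of the exponential distribution.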

 Goodness of fit

The goodness of fit test is a statistical hypothesis test of how well sample data fit a hypothesized
distribution. A common application is testing whether a sample comes from a population with a normal
distribution.

 Independent Events

Two events A and B are said to be independent if the fact that one event has occurred does not
affect the probability that the other event will occur. If whether or not one event occurs does affect the
probability that the other event will occur, then the two events are said to be dependent.

 Multiplication law of probability

When we calculate the probability of one event AND another event occurring, we multiply: for
independent events, P(A and B) = P(A) × P(B); in general, P(A and B) = P(A) × P(B | A).
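Both forms of the multiplication law can be sketched with textbook-style examples (a coin and die, and drawing aces from a deck, both illustrative):

```python
# Independent events: P(A and B) = P(A) * P(B)
p_heads = 0.5          # fair coin shows heads
p_six = 1 / 6          # fair die shows six
p_heads_and_six = p_heads * p_six

# Dependent events use the conditional form: P(A and B) = P(A) * P(B | A)
# Drawing two aces from a 52-card deck without replacement:
p_two_aces = (4 / 52) * (3 / 51)
```

In the second case the deck changes after the first draw, so the second factor is a conditional probability rather than 4/52 again.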

 Mutually exclusive

Mutually exclusive is a statistical term describing two or more events that cannot happen
simultaneously: the occurrence of one outcome precludes the occurrence of the others.

 Normal distribution

The normal distribution is a continuous probability distribution that is symmetrical on both sides of
the mean, so the right side of the center is a mirror image of the left side. The normal distribution is
often called the bell curve because the graph of its probability density looks like a bell.
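A sketch of the bell-curve density and its symmetry, using the standard density formula:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Bell-curve density of a normal distribution with mean mu,
    standard deviation sigma."""
    coeff = 1 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Symmetry about the mean: density at mu + d equals density at mu - d
left, right = normal_pdf(-1.5), normal_pdf(1.5)
```

The mirror-image property in the definition shows up as equal density values at equal distances from the mean.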

 Outcome

An outcome is a possible result of an experiment or trial. Each possible outcome of a particular
experiment is unique, and different outcomes are mutually exclusive.

 Poisson distribution

The Poisson distribution is used to model the number of events occurring within a given time interval.
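As an illustration, the Poisson probability mass function can be written directly from its formula; the help-desk call rate is a hypothetical example:

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events in an interval when lam events
    occur on average."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Calls to a help desk average 4 per hour: chance of exactly 2 calls
# in the next hour
p_two_calls = poisson_pmf(2, 4.0)
```

The single parameter lam is both the mean and the variance of the count of events.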

 Probability

Probabilities may be defined from one of three perspectives. Classical definition: probabilities can be
deduced from theoretical arguments. Relative frequency definition: probabilities are based on empirical
data. Subjective definition: probabilities are based on judgment and experience.

 Probability density function

A continuous random variable takes on an uncountably infinite number of possible values, so the
probability that it equals any single value is zero. The probability density function (pdf) f(x)
describes the relative likelihood of each value; the probability that the variable falls in an interval
is the area under f(x) over that interval.

 Probability distribution

A probability distribution is a statistical function that describes all the possible values and likelihoods
that a random variable can take within a given range. This range will be bounded between the minimum
and maximum possible values, but precisely where the possible value is likely to be plotted on the
probability distribution depends on a number of factors. These factors include the distribution's mean
(average), standard deviation, skewness, and kurtosis.

 Probability mass function

A probability mass function is a function that gives the probability that a discrete random variable is
exactly equal to some value. Sometimes it is also known as the discrete density function.
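A small sketch of a pmf built by counting: the distribution of the sum of two fair dice, used here as an illustrative example:

```python
from collections import Counter

# PMF of the sum of two fair dice: maps each value to its exact probability
counts = Counter(i + j for i in range(1, 7) for j in range(1, 7))
pmf = {total: c / 36 for total, c in counts.items()}

p_seven = pmf[7]  # most likely sum
```

Each value maps to the probability of hitting it exactly, and all the probabilities sum to 1.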

 Random number

A random number is a number chosen as if by chance from some specified distribution such that
selection of a large set of these numbers reproduces the underlying distribution. Almost always, such
numbers are also required to be independent, so that there are no correlations between successive
numbers.

 Random number seed

A random seed is a starting point in generating random numbers. A random seed specifies the start
point when a computer generates a random number sequence.
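This is easy to demonstrate with Python's standard library generator; the seed value 42 is arbitrary:

```python
import random

# The same seed reproduces the same "random" sequence
rng_a = random.Random(42)
rng_b = random.Random(42)

seq_a = [rng_a.randint(1, 6) for _ in range(5)]
seq_b = [rng_b.randint(1, 6) for _ in range(5)]
```

Fixing the seed is how simulations are made reproducible: rerunning the program yields the same sequence of draws.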

 Random variable

A random variable is a variable whose value is unknown or a function that assigns values to each of
an experiment's outcomes.

 Random variate

A random variate is a particular realized outcome of a random variable; other variates drawn from the
same random variable may take different values.

 Sample space

The sample space of an experiment or random trial is the set of all possible outcomes or results of
that experiment. A sample space is usually denoted using set notation, and the possible ordered
outcomes are listed as elements in the set.

 Standard normal distribution

The standard normal distribution is a normal distribution of standardized values called z-scores. A z-
score is measured in units of the standard deviation.
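A sketch of the standardization step that produces z-scores; the exam-score mean and standard deviation are hypothetical values:

```python
def z_score(x, mu, sigma):
    """Distance of x from the mean, measured in standard deviations."""
    return (x - mu) / sigma

# Hypothetical exam scores with mean 70 and standard deviation 10:
# a score of 85 is 1.5 standard deviations above the mean
z = z_score(85, 70, 10)
```

Standardizing every value this way converts any normal distribution into the standard normal, with mean 0 and standard deviation 1.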

 Uniform distribution

The uniform distribution (or rectangular distribution) is a family of symmetric probability
distributions. It describes an experiment with an arbitrary outcome that lies between certain bounds,
where every value in that range is equally likely.
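The "rectangular" name comes from the flat density between the bounds, sketched here with illustrative bounds:

```python
def uniform_pdf(x, a, b):
    """Constant density 1/(b-a) for outcomes between bounds a and b,
    zero outside them."""
    return 1 / (b - a) if a <= x <= b else 0.0

# Density of an outcome uniformly distributed between 0 and 4
density_inside = uniform_pdf(2, 0, 4)
density_outside = uniform_pdf(5, 0, 4)
```

The constant height 1/(b − a) makes the total area under the rectangle equal to 1.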

 Union

The union of two events contains all outcomes that belong to either of the two events.
