Probability Theory and Random Processes (18B11MA314) : Unit-1 Basic Probability (CO-1)
The document provides an introduction to probability theory and random processes. It discusses key concepts such as random experiments, sample spaces, events, mutually exclusive and exhaustive events, classical, relative frequency, and axiomatic definitions of probability. Examples are provided to illustrate random experiments involving coin tosses, dice rolls, and component lifetimes. The three main approaches to defining probability - classical, relative frequency, and axiomatic - are described along with their limitations.
Lecture-1 Contents:
• Introduction
• Random Experiments
• Three Basic Approaches to Probability

INTRODUCTION
• Probability deals with unpredictability and randomness, and probability theory is the branch of mathematics concerned with the study of random phenomena.
• A random phenomenon is one that, under repeated observation, yields different outcomes that are not deterministically predictable.
• Examples of random phenomena include the number of electronic mail (e-mail) messages received by all employees of a company in one day, the number of phone calls arriving at the university's switchboard over a given period, the number of components of a system that fail within a given interval, and the number of A's that a student can receive in one academic year.

APPLICATIONS
• Probability provides mathematical models for random phenomena and experiments such as gambling, the stock market, packet transmission in networks, electron emission, signal processing, communication, communication networks, reliability of systems, noise in circuits, artificial intelligence, statistical mechanics, etc.

HISTORICAL DEVELOPMENT
• Probability has an amazing history. A practical gambling problem faced by the French nobleman Chevalier de Méré sparked the idea of probability in the mind of Blaise Pascal (1623-1662), the famous French mathematician. Pascal's correspondence with Pierre de Fermat (1601-1665), another French mathematician, in the form of seven letters in 1654, is regarded as the genesis of probability.
• Early mathematicians such as Jakob Bernoulli (1654-1705), Abraham de Moivre (1667-1754), Thomas Bayes (1702-1761) and Pierre Simon de Laplace (1749-1827) contributed to the development of probability.
• Later mathematicians such as Chebyshev (1821-1894), Markov (1856-1922), Von Mises (1883-1953), Norbert Wiener (1894-1964) and Kolmogorov (1903-1987) contributed new developments.

RANDOM EXPERIMENTS
• A phenomenon or process of observation that can be repeated any number of times under essentially the same conditions is called an experiment.
• The results of an observation are called the outcomes of the experiment.
• Each performance of an experiment is called a trial.
• A deterministic experiment is an experiment whose outcome or result is known with certainty (predictable), i.e., the result is unique. e.g. Ohm's law I = E/R determines the current uniquely (with certainty).
• A probabilistic (non-deterministic or random) experiment is an experiment whose outcome or result is not unique and therefore cannot be predicted with certainty. e.g. in tossing a coin, a head or a tail may occur; in throwing a die, 1, 2, 3, 4, 5, or 6 may appear; the lifetime of a computer system, etc.

RANDOM EXPERIMENTS: DEFINITION
• An experiment is a process of measurement or observation, in a laboratory, in a factory, on the street, in nature, or wherever; so "experiment" is used in a rather general sense. Our interest is in experiments that involve randomness or chance effects, so that we cannot predict a result exactly.
• Def.: "If each trial has more than one possible outcome or result, then the experiment is called a random experiment."

RANDOM EXPERIMENTS
• The set of all possible outcomes of a random experiment is called the sample space or universal set. It is denoted by S.
• An element of S is called a sample point. Each outcome of a random experiment corresponds to a sample point. e.g. Tossing a coin is a random experiment; the sample space is S = {H, T}, and the sample points are H and T.
e.g. Throwing a die is a random experiment; the sample space is S = {1, 2, 3, 4, 5, 6} and the sample points are 1, 2, 3, 4, 5 and 6.

RANDOM EXPERIMENTS
• An event is a subset of a sample space. e.g. In the random experiment of rolling a die, some events are as follows:
E1 = {odd number} = {1, 3, 5}
E2 = {even number} = {2, 4, 6}
E3 = {prime number} = {2, 3, 5}
E4 = {number greater than 2} = {3, 4, 5, 6}
• Mutually Exclusive Events: Two events A and B are said to be mutually exclusive if A and B cannot happen (occur) simultaneously, i.e., A ∩ B = φ, i.e., A and B are disjoint. e.g. The events of getting an even number and an odd number in rolling a fair die are mutually exclusive.
• Exhaustive Events: A list of events A1, A2, ..., An is said to be collectively exhaustive if A1 ∪ A2 ∪ ... ∪ An = S; if in addition Ai ∩ Aj = φ for i ≠ j, the events are mutually exclusive and collectively exhaustive. e.g. The events of getting an even number and an odd number in rolling a fair die are mutually exclusive and collectively exhaustive.
• Universal Event: The entire sample space S is called the universal (or certain or sure) event. e.g. getting a tail or a head in tossing a fair coin.
• Impossible Event: The null set φ is called the null or impossible event. e.g. getting 7 in rolling a fair die.

EXAMPLE: 1
• Consider a random experiment of tossing a coin three times.
• The outcomes are Head (H) and Tail (T).
• The sample space resulting from three successive tosses is S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.
• The event "two heads and one tail" is E = {HHT, HTH, THH}.
• The event "at least one head" is E = {HHH, HHT, HTH, HTT, THH, THT, TTH}.

EXAMPLE: 2
• Consider a random experiment of rolling a die twice (or rolling two dice together).
• The sample space is:
S = { (1,1), (1,2), (1,3), (1,4), (1,5), (1,6),
      (2,1), (2,2), (2,3), (2,4), (2,5), (2,6),
      (3,1), (3,2), (3,3), (3,4), (3,5), (3,6),
      (4,1), (4,2), (4,3), (4,4), (4,5), (4,6),
      (5,1), (5,2), (5,3), (5,4), (5,5), (5,6),
      (6,1), (6,2), (6,3), (6,4), (6,5), (6,6) }
• Event “sum of two numbers is 9” is E = {(3,6), (4,5), (5,4), (6,3)}
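As a quick illustration, the following sketch (Python; the variable names are my own, chosen for illustration) enumerates the sample spaces of Examples 1 and 2 as sets, forms some of the events listed above, and checks that the "even" and "odd" events on a single die are mutually exclusive and collectively exhaustive.

```python
from itertools import product

# Example 1: three successive tosses of a coin
S_coins = {"".join(t) for t in product("HT", repeat=3)}          # {'HHH', 'HHT', ..., 'TTT'}
two_heads_one_tail = {s for s in S_coins if s.count("H") == 2}   # {'HHT', 'HTH', 'THH'}
at_least_one_head = {s for s in S_coins if "H" in s}             # S minus {'TTT'}

# Example 2: rolling a die twice
S_dice = set(product(range(1, 7), repeat=2))                     # 36 ordered pairs
sum_is_9 = {(a, b) for (a, b) in S_dice if a + b == 9}           # {(3,6), (4,5), (5,4), (6,3)}

# Mutually exclusive and collectively exhaustive events on one die
S_die = set(range(1, 7))
odd, even = {1, 3, 5}, {2, 4, 6}
assert odd & even == set()       # mutually exclusive: empty intersection
assert odd | even == S_die       # collectively exhaustive: union is the sample space

print(len(S_coins), len(S_dice), sorted(sum_is_9))
```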
EXAMPLE: 3
• Life span of a component in an electronic device.
– Sample space: S = {x | 0 ≤ x < ∞}
– Event "not more than 10 hours": E = {x | 0 ≤ x ≤ 10 (hours)}

THREE BASIC APPROACHES TO PROBABILITY
• Different approaches:
– Objective: based on experimental data
– Subjective: based on experience
• Probability is defined in three ways:
– Classical definition
– Relative frequency definition
– Axiomatic definition

CLASSICAL DEFINITION
• Mathematical or classical or a priori probability: If an event E can happen in m ways out of n possible mutually exclusive, exhaustive and equally likely ways, then the probability of the event E, denoted by P(E), is defined as
P(E) = p = m/n.
• The probability of non-occurrence of the event E (called its failure), denoted by P(not E) or q, is defined as
P(not E) = q = 1 - p = 1 - m/n.
• Thus, p + q = 1, 0 ≤ p ≤ 1 and 0 ≤ q ≤ 1.
• The probability of a certain (sure) event is n/n = 1.
• The probability of an impossible (null) event is 0/n = 0.
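As a minimal sketch of the classical definition (Python; the helper name classical_probability is mine, not from the notes), the calculation amounts to counting the m favourable outcomes and dividing by the total number n of equally likely outcomes:

```python
from fractions import Fraction

def classical_probability(event, sample_space):
    """Classical (a priori) probability: m favourable outcomes out of n
    equally likely outcomes, returned as an exact fraction m/n."""
    m = len(set(event) & set(sample_space))   # favourable outcomes, m
    n = len(set(sample_space))                # total equally likely outcomes, n
    return Fraction(m, n)

S = range(1, 7)                                # a fair die
p_even = classical_probability({2, 4, 6}, S)   # P(even) = 3/6 = 1/2
q_even = 1 - p_even                            # probability of failure, q = 1 - p
print(p_even, q_even)                          # 1/2 1/2
```

Note that p + q = 1 falls out automatically, in line with the definition above.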
CLASSICAL DEFINITION: FAILURE CONDITIONS
• This definition fails when
(i) the outcomes are not equally likely, i.e., the outcome of the random experiment can be predicted in advance, or
(ii) the number of possible outcomes is infinite (not exhaustive).

RELATIVE FREQUENCY DEFINITION
• Statistical or Empirical or Estimated (Von Mises) probability: Suppose a random experiment is performed n times and an event A occurs nA times. Then the relative frequency of A is nA/n, and
P(A) = lim (n→∞) nA/n.
e.g. Suppose 512 heads appear in 1000 tosses of an unbiased coin in a first experiment; then
p1 = 512/1000 = 0.512.
If, in a second experiment, 499 heads appear in another 1000 tosses, then
p2 = (512 + 499)/(1000 + 1000) = 1011/2000 = 0.5055.

RELATIVE FREQUENCY DEFINITION: FAILURE CONDITIONS
• This definition fails when (i) the limit does not exist, or (ii) the limit is not unique.
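A short simulation sketch (Python; the random seed and variable names are my own choices, not from the notes) illustrates how the relative frequency nA/n of "heads" settles near 0.5 as the number of tosses n grows:

```python
import random

random.seed(0)                        # fixed seed so the run is reproducible

checkpoints = [10, 100, 1_000, 10_000, 100_000]
n_heads = 0
tosses_done = 0
for n in checkpoints:
    # toss the coin enough extra times to reach n tosses in total
    for _ in range(n - tosses_done):
        n_heads += random.random() < 0.5   # 1 for a head, 0 for a tail
    tosses_done = n
    print(f"n = {n:>7}: relative frequency of heads = {n_heads / n:.4f}")
```

The printed ratios fluctuate for small n but approach 0.5 as n grows, which is the limiting relative frequency the Von Mises definition takes as P(heads).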
AXIOMATIC DEFINITION OF PROBABILITY
• If a random experiment has a sample space S, then for each event A of S the probability of occurrence of A is a real number P(A) such that the following conditions hold:
– Axiom 1: 0 ≤ P(A) ≤ 1
– Axiom 2: P(S) = 1
– Axiom 3: For any set of n mutually exclusive events A1, A2, A3, ..., An of the same sample space,
P(A1 ∪ A2 ∪ A3 ∪ ... ∪ An) = P(A1) + P(A2) + P(A3) + ... + P(An)
• This definition was given by A. N. Kolmogorov in 1933, who developed the axiomatic theory of probability.

EXAMPLE
Two fair dice are tossed. Find the probability of each of the following:
a) The sum of the outcomes of the two dice is 5.
b) The sum of the outcomes of the two dice is 7 or 11.
c) The outcome of the second die is less than that of the first die.
d) The outcomes of both dice are odd.

Solution: The sample space has N = 36 outcomes:
S = { (1,1), (1,2), (1,3), (1,4), (1,5), (1,6),
      (2,1), (2,2), (2,3), (2,4), (2,5), (2,6),
      (3,1), (3,2), (3,3), (3,4), (3,5), (3,6),
      (4,1), (4,2), (4,3), (4,4), (4,5), (4,6),
      (5,1), (5,2), (5,3), (5,4), (5,5), (5,6),
      (6,1), (6,2), (6,3), (6,4), (6,5), (6,6) }

a) Event A: "sum of the outcomes of the two dice is 5"
A = { (1,4), (2,3), (3,2), (4,1) }, NA = 4, P(A) = 4/36 = 1/9

b) Event B: "sum of the outcomes of the two dice is 7 or 11"
B = { (1,6), (2,5), (3,4), (4,3), (5,2), (6,1), (5,6), (6,5) }, NB = 8, P(B) = 8/36 = 2/9

c) Event C: "outcome of the second die is less than that of the first die"
C = { (2,1), (3,1), (3,2), (4,1), (4,2), (4,3), (5,1), (5,2), (5,3), (5,4), (6,1), (6,2), (6,3), (6,4), (6,5) }, NC = 15, P(C) = 15/36 = 5/12

d) Event D: "outcomes of both dice are odd"
D = { (1,1), (1,3), (1,5), (3,1), (3,3), (3,5), (5,1), (5,3), (5,5) }, ND = 9, P(D) = 9/36 = 1/4

This problem can also be solved by considering a two-dimensional display of the sample space.
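The counts above can also be checked mechanically. A minimal sketch (Python; purely illustrative, not part of the original notes) enumerates the 36 equally likely outcomes, computes the four probabilities, and confirms Axiom 3 for the mutually exclusive events "sum is 7" and "sum is 11":

```python
from fractions import Fraction
from itertools import product

S = list(product(range(1, 7), repeat=2))            # 36 equally likely outcomes

def prob(event):
    """Probability of an event (a set of outcomes) under equally likely outcomes."""
    return Fraction(len(event), len(S))

A = {(a, b) for a, b in S if a + b == 5}                   # sum is 5
B = {(a, b) for a, b in S if a + b in (7, 11)}             # sum is 7 or 11
C = {(a, b) for a, b in S if b < a}                        # second die less than first
D = {(a, b) for a, b in S if a % 2 == 1 and b % 2 == 1}    # both outcomes odd

print(prob(A), prob(B), prob(C), prob(D))           # 1/9 2/9 5/12 1/4

# Axiom 3 check: "sum is 7" and "sum is 11" are mutually exclusive,
# so P(sum is 7 or 11) = P(sum is 7) + P(sum is 11).
sum7 = {(a, b) for a, b in S if a + b == 7}
sum11 = {(a, b) for a, b in S if a + b == 11}
assert sum7 & sum11 == set()
assert prob(sum7 | sum11) == prob(sum7) + prob(sum11)
```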