
04MA246L1A

1. A random experiment is a process with outcomes determined by chance that can be repeated. It has a sample space of all possible outcomes.
2. The sample space can be finite, countably infinite, or uncountably infinite. Events are subsets of outcomes in the sample space.
3. Probability is defined by the relative frequency of outcomes in many trials. It must satisfy axioms such as probabilities being between 0 and 1 and the probability of the sample space being 1.

Uploaded by

Jacky Po
Copyright
© Attribution Non-Commercial (BY-NC)


Random experiments

A random experiment is a process characterized by the following properties: (i) it is performed according to some set of rules; (ii) it can be repeated arbitrarily often; (iii) the result of each performance depends on chance and cannot be predicted uniquely. Example: tossing of a coin. The outcome of a trial is either a head or a tail showing up.

A sequential random experiment performs a sequence of simple random sub-experiments, e.g. first toss a coin, then throw a die. Sometimes the second sub-experiment depends on the outcome of the first, e.g. toss a coin first; if it shows a head, then throw a die.
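A sequential experiment of this kind can be sketched in Python (an illustrative simulation, not part of the original notes; the function name `sequential_trial` is ours):

```python
import random

def sequential_trial(rng):
    """One trial: toss a coin; only if it shows a head, throw a die."""
    coin = rng.choice(["H", "T"])
    if coin == "H":
        return (coin, rng.randint(1, 6))  # outcome is a (coin, die) pair
    return (coin,)                        # experiment stops after a tail

rng = random.Random(0)
outcomes = {sequential_trial(rng) for _ in range(10_000)}
# The possible outcomes are ("T",) plus ("H", k) for k = 1..6 -- seven in all.
```

Note that the sample space of the combined experiment is not a simple Cartesian product, because the die is only thrown on a head.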

A random experiment may involve a continuum of measurements. Say, the height of a student takes some value between 1.4 m and 2 m.

Sample space S of a random experiment is defined as the set of all possible outcomes. Outcomes are mutually exclusive in the sense that they cannot occur simultaneously.

A sample space can be finite, countably infinite or uncountably infinite.

1. Toss a coin two times: S1 = {(H, H), (H, T), (T, H), (T, T)}. S1 is countable, and is called a discrete sample space. Define B = {H, T}; then S1 = B × B.

2. Toss a die until a six appears and count the number of times the die was tossed: S2 = {1, 2, 3, …}. S2 is discrete and countably infinite (one-to-one correspondence with the positive integers).

3. Pick a number X at random between zero and one, then pick a number Y at random between zero and X: S3 = {(x, y): 0 ≤ y ≤ x ≤ 1}. S3 is a continuous sample space.
[Figure: the triangular region {(x, y): 0 ≤ y ≤ x ≤ 1} in the (x, y)-plane, with vertex at (1, 1).]
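Experiment 3 can be simulated directly (an illustrative Python sketch, not part of the original notes): every sampled point lands in the triangular region.

```python
import random

def pick_point(rng):
    """Pick X uniformly in [0, 1], then Y uniformly in [0, X]."""
    x = rng.random()
    y = rng.uniform(0.0, x)
    return x, y

rng = random.Random(1)
points = [pick_point(rng) for _ in range(1_000)]
# Every sampled point lies in S3 = {(x, y): 0 <= y <= x <= 1}.
assert all(0.0 <= y <= x <= 1.0 for x, y in points)
```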

An event or event set is a set of possible outcomes of an experiment, so an event is a subset of sample space S. The whole sample space is an event and is called the sure event. The empty set is called the impossible event.

Example: Tossing of a die

Event E: the die turns up an even number; E = {2, 4, 6}, which is a subset of the sample space S = {1, 2, 3, 4, 5, 6}.

E^c, the complement of E in S, is defined as the set of elements of S not in E. Here E^c = {1, 3, 5}: the die turns up an odd number. Note that E^c ∪ E = S.

Suppose A and B are events in S. The following events are called derived events: (i) A ∪ B (either A or B or both); (ii) A ∩ B (both A and B); (iii) A − B (A but not B).

Two events A and B are mutually exclusive if both cannot occur simultaneously, that is, A ∩ B = ∅.
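These event operations map directly onto Python's built-in set operations (an illustrative sketch, not part of the original notes), using the die example above:

```python
S = {1, 2, 3, 4, 5, 6}          # sample space: faces of a die
E = {2, 4, 6}                   # event: die turns up an even number
Ec = S - E                      # complement of E in S

union = E | Ec                  # E or Ec: here the sure event S
inter = E & Ec                  # E and Ec: here the impossible event
diff = E - {2}                  # E but not {2}

assert union == S and inter == set() and diff == {4, 6}
assert E <= S                   # E is a subset of S
```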

A ⊂ B: event A is a subset of event B; then event B will occur whenever event A occurs.
(i) A ∩ B ⊂ A and A ∩ B ⊂ B; (ii) A ⊂ A ∪ B and B ⊂ A ∪ B.
A = B: two events are equal if they contain the same set of outcomes.

Notation:

⋃_{k=1}^{n} A_k = A_1 ∪ A_2 ∪ … ∪ A_n and ⋂_{k=1}^{n} A_k = A_1 ∩ A_2 ∩ … ∩ A_n.

For a countably infinite sequence of events, we have

⋃_{k=1}^{∞} A_k and ⋂_{k=1}^{∞} A_k.

De Morgan's rules: (A ∩ B)^c = A^c ∪ B^c and (A ∪ B)^c = A^c ∩ B^c.

Proof of the second rule: x ∈ (A ∪ B)^c ⟺ x is not contained in either of the events A and B ⟺ x is contained in both A^c and B^c ⟺ x ∈ A^c ∩ B^c.

Proof of the first rule: based on the second rule, replace A by A^c and B by B^c; we then have (A^c ∪ B^c)^c = A ∩ B. Taking complements on both sides, we obtain the first rule, (A ∩ B)^c = A^c ∪ B^c.
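De Morgan's rules can be checked exhaustively on random events with Python sets (an illustrative verification, not part of the original notes):

```python
import random

rng = random.Random(2)
S = set(range(20))                      # a small finite sample space
for _ in range(100):
    A = {x for x in S if rng.random() < 0.5}
    B = {x for x in S if rng.random() < 0.5}
    # (A union B)^c == A^c intersect B^c
    assert S - (A | B) == (S - A) & (S - B)
    # (A intersect B)^c == A^c union B^c
    assert S - (A & B) == (S - A) | (S - B)
```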

What do we mean by the probability P[E] of an event E? For example, what is the probability of getting a head in the toss of a coin? Statistically, the probability P[E] is defined as

P[E] = lim_{n→∞} f_n[E]/n,

where n is the number of trials, f_n[E] is the number of occurrences of the event E in those n trials, and f_n[E]/n is the relative frequency of occurrence of E. This is the frequency approach.

Statistical regularity

Averages obtained in long sequences of trials of random experiments consistently yield approximately the same value.
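Statistical regularity is easy to observe numerically (an illustrative Python sketch, not part of the original notes): the relative frequency of heads in repeated fair-coin tosses settles near 0.5 as the number of trials grows.

```python
import random

def relative_frequency(n_trials, seed=0):
    """Relative frequency f_n[E]/n of heads in n_trials fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

freqs = [relative_frequency(n) for n in (100, 10_000, 1_000_000)]
# The frequencies approach 0.5 as n grows (statistical regularity).
```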

Can we estimate (calculate) the probability from the knowledge of the nature of the experiment?
Theory and the Real World

[Diagram: in the mathematical world, probability theory derives a probabilistic model from experiments or knowledge in the physical world; the model makes predictions about experiments or actions in the physical world, which in turn feed back into the model.]

Axioms of probability

Let E be a random experiment with sample space S. A probability law for the experiment E is a rule that assigns to each event A a number P[A], called the probability of A, that satisfies the following axioms:

Axiom I: 0 ≤ P[A]
Axiom II: P[S] = 1
Axiom III: If A ∩ B = ∅ (A and B are mutually exclusive events), then P[A ∪ B] = P[A] + P[B].
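As a quick illustration (our sketch, not part of the original notes), the equiprobable law on a finite sample space satisfies all three axioms:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                   # sample space of a fair die
P = lambda A: Fraction(len(A), len(S))   # equiprobable probability law

# Axiom I: probabilities are non-negative
assert all(P(A) >= 0 for A in ({1}, {2, 4, 6}, S))
# Axiom II: the sure event has probability 1
assert P(S) == 1
# Axiom III: additivity for two disjoint events
A, B = {1, 2}, {5, 6}
assert A & B == set() and P(A | B) == P(A) + P(B)
```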

Corollary 1: P[A^c] = 1 − P[A]

Since A ∩ A^c = ∅, from Axiom III, P[A ∪ A^c] = P[A] + P[A^c]. Since S = A ∪ A^c, by Axiom II, 1 = P[S] = P[A ∪ A^c] = P[A] + P[A^c].

Corollary 2: P[A] ≤ 1

From Corollary 1, P[A] = 1 − P[A^c] ≤ 1 since P[A^c] ≥ 0.

Corollary 3: P[∅] = 0

Let A = S, so that A^c = ∅; then P[∅] = 1 − P[S] = 0.

Corollary 4: If A_1, A_2, …, A_n are pairwise mutually exclusive, then

P[⋃_{k=1}^{n} A_k] = Σ_{k=1}^{n} P[A_k],  n ≥ 2.

Proof by mathematical induction. From Axiom III, it is valid for n = 2. The trick is to observe that if A_{n+1} and A_j, j = 1, …, n, are pairwise mutually exclusive, then

(⋃_{k=1}^{n} A_k) ∩ A_{n+1} = ⋃_{k=1}^{n} (A_k ∩ A_{n+1}) = ⋃_{k=1}^{n} ∅ = ∅,

and we then have

P[⋃_{k=1}^{n+1} A_k] = P[(⋃_{k=1}^{n} A_k) ∪ A_{n+1}] = P[⋃_{k=1}^{n} A_k] + P[A_{n+1}].

Corollary 5: P[A ∪ B] = P[A] + P[B] − P[A ∩ B], hence P[A ∪ B] ≤ P[A] + P[B].

[Venn diagram: S containing the disjoint regions A ∩ B^c, A ∩ B and A^c ∩ B, which together make up A ∪ B.]

Since A ∩ B^c, A ∩ B and A^c ∩ B are disjoint events, we have
P[A ∪ B] = P[A ∩ B^c] + P[A^c ∩ B] + P[A ∩ B],
P[A] = P[A ∩ B^c] + P[A ∩ B],
P[B] = P[A^c ∩ B] + P[A ∩ B].
Combining these three equalities gives the corollary.

Generalization:

P[A ∪ B ∪ C] = P[A] + P[B] + P[C] − P[A ∩ B] − P[A ∩ C] − P[B ∩ C] + P[A ∩ B ∩ C].

For n events, we have

P[⋃_{k=1}^{n} A_k] = Σ_{j=1}^{n} P[A_j] − Σ_{j<k} P[A_j ∩ A_k] + … + (−1)^{n+1} P[A_1 ∩ … ∩ A_n].
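The inclusion-exclusion formula can be verified numerically on random events in a finite sample space (an illustrative Python sketch, not part of the original notes):

```python
from itertools import combinations
import random

rng = random.Random(3)
S = list(range(30))                                   # finite sample space
events = [{x for x in S if rng.random() < 0.4} for _ in range(4)]
p = lambda A: len(A) / len(S)                         # equiprobable law

# Inclusion-exclusion: alternating sum over intersections of m events
union_prob = 0.0
for m in range(1, len(events) + 1):
    for J in combinations(events, m):
        union_prob += (-1) ** (m + 1) * p(set.intersection(*J))

# It agrees with the probability of the union computed directly.
assert abs(union_prob - p(set.union(*events))) < 1e-9
```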

Corollary 6: If A ⊂ B, then P[A] ≤ P[B].

Since B = A ∪ (A^c ∩ B), and A and A^c ∩ B are mutually exclusive, P[B] = P[A] + P[A^c ∩ B] ≥ P[A].

[Venn diagram: B as the disjoint union of A and A^c ∩ B.]

Example: Toss a coin three times and observe the sequence of heads and tails. There are 8 possible outcomes: S3 = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. For a fair coin, the outcomes of S3 are equiprobable. The outcomes are mutually exclusive, so the probability of each of the above 8 elementary events is 1/8.

P[2 heads in 3 tosses] = P[{HHT, HTH, THH}] = P[{HHT}] + P[{HTH}] + P[{THH}] = 3/8.
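This enumeration is easy to reproduce by brute force (an illustrative Python sketch, not part of the original notes):

```python
from itertools import product
from fractions import Fraction

S3 = list(product("HT", repeat=3))        # the 8 equiprobable outcomes
assert len(S3) == 8

two_heads = [w for w in S3 if w.count("H") == 2]
prob = Fraction(len(two_heads), len(S3))  # favourable / total
assert prob == Fraction(3, 8)
```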

Suppose we count the number of heads in the 3 tosses. The sample space is now S4 = {0, 1, 2, 3}.

Are the above outcomes equiprobable?


1 If yes, then P[2 heads in 3 tosses] = P[{2}] = , a result contradicting 4

to that of the above.

Similar question: Toss two dice and record the sum of the face values. Is the chance of getting sum = 2 the same as that of sum = 3?
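Enumerating the 36 equiprobable ordered pairs answers the question (an illustrative Python sketch, not part of the original notes): the two sums are not equally likely.

```python
from itertools import product
from fractions import Fraction

# Equiprobable sample space: ordered pairs of face values of two dice
S = list(product(range(1, 7), repeat=2))      # 36 outcomes

def p_sum(s):
    """Probability that the two face values add up to s."""
    return Fraction(sum(1 for a, b in S if a + b == s), len(S))

assert p_sum(2) == Fraction(1, 36)            # only (1, 1)
assert p_sum(3) == Fraction(2, 36)            # (1, 2) and (2, 1)
```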
