Two events E and F are said to be independent if P(EF) = P(E)P(F)

By Equation (1.5) this implies that E and F are independent if P(E|F) = P(E)

(which also implies that P(F|E) = P(F)). That is, E and F are independent if knowledge that F has occurred does not affect the probability that E occurs; in other words, the occurrence of E is independent of whether or not F occurs.

Two events E and F that are not independent are said to be dependent.

Example 1.9 Suppose we toss two fair dice. Let E1 denote the event that the sum of the dice is six and F denote the event that the first die equals four. Then

P(E1F) = P({(4, 2)}) = 1/36

while

P(E1)P(F) = (5/36)(1/6) = 5/216

and hence E1 and F are not independent. Intuitively, the reason for this is clear, for if we are interested in the possibility of throwing a six (with two dice), then we will be quite happy if the first die lands four (or any of the numbers 1, 2, 3, 4, 5) because then we still have a possibility of getting a total of six. On the other hand, if the first die landed six, then we would be unhappy as we would no longer have a chance of getting a total of six. In other words, our chance of getting a total of six depends on the outcome of the first die, and hence E1 and F cannot be independent.

Let E2 be the event that the sum of the dice equals seven. Is E2 independent of F? The answer is yes, since

P(E2F) = P({(4, 3)}) = 1/36

while

P(E2)P(F) = (1/6)(1/6) = 1/36

We leave it for you to present the intuitive argument why the event that the sum of the dice equals seven is independent of the outcome on the first die. ■
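As a quick sanity check (not part of the text), both comparisons in Example 1.9 can be verified by enumerating the 36 equally likely outcomes; the function and event names below are our own:

```python
from fractions import Fraction

# Enumerate the 36 equally likely outcomes of two fair dice and test
# independence by comparing P(E and F) against P(E) * P(F).
outcomes = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

def prob(event):
    """Probability of an event (a predicate on outcomes) under the uniform model."""
    hits = sum(1 for o in outcomes if event(o))
    return Fraction(hits, len(outcomes))

sum_is_six = lambda o: o[0] + o[1] == 6      # E1
sum_is_seven = lambda o: o[0] + o[1] == 7    # E2
first_is_four = lambda o: o[0] == 4          # F

def both(a, b):
    return lambda o: a(o) and b(o)

# E1 and F are dependent: P(E1F) = 1/36 but P(E1)P(F) = 5/216.
print(prob(both(sum_is_six, first_is_four)), prob(sum_is_six) * prob(first_is_four))
# E2 and F are independent: both sides equal 1/36.
print(prob(both(sum_is_seven, first_is_four)), prob(sum_is_seven) * prob(first_is_four))
```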

The definition of independence can be extended to more than two events. The events E1, E2, . . . , En are said to be independent if for every subset E1′, E2′, . . . , Er′, r ≤ n, of these events

P(E1′E2′ · · · Er′) = P(E1′)P(E2′) · · · P(Er′)
Intuitively, the events E1, E2 , . . . , En are independent if knowledge of the occurrence of any of these events has no effect on the probability of any other event.

Example 1.10 (Pairwise Independent Events That Are Not Independent) Let a ball be drawn from an urn containing four balls, numbered 1, 2, 3, 4. Let E = {1, 2}, F = {1, 3}, G = {1, 4}. If all four outcomes are assumed equally likely, then

P(EF) = P(E)P(F) = 1/4,
P(EG) = P(E)P(G) = 1/4,
P(FG) = P(F)P(G) = 1/4

However,

1/4 = P(EFG) ≠ P(E)P(F)P(G) = 1/8

Hence, even though the events E, F, G are pairwise independent, they are not jointly independent. ■
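The urn example lends itself to the same kind of direct check; the sketch below (our own, not from the text) represents the events as subsets of the sample space:

```python
from fractions import Fraction

# The urn model of Example 1.10: four equally likely outcomes, with the
# events E, F, G represented as subsets of {1, 2, 3, 4}.
omega = {1, 2, 3, 4}
E, F, G = {1, 2}, {1, 3}, {1, 4}

def p(event):
    """Probability of a subset of omega under the uniform distribution."""
    return Fraction(len(event), len(omega))

# Every pair satisfies the product rule...
pairwise = all(p(a & b) == p(a) * p(b) for a, b in [(E, F), (E, G), (F, G)])
# ...but the triple does not: P(EFG) = 1/4 while P(E)P(F)P(G) = 1/8.
jointly = p(E & F & G) == p(E) * p(F) * p(G)
print(pairwise, jointly)   # True False
```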

Example 1.11 There are r players, with player i initially having ni units, i = 1, . . . , r. At each stage, two of the players are chosen to play a game, with the winner of the game receiving 1 unit from the loser. Any player whose fortune drops to 0 is eliminated, and this continues until a single player has all ∑_{i=1}^{r} ni units, with that player designated as the victor. Assuming that the results of successive games are independent, and that each game is equally likely to be won by either of its two players, find the probability that player i is the victor.

Solution: To begin, suppose that there are n players, with each player initially having 1 unit. Consider player i. Each stage she plays will be equally likely to result in her either winning or losing 1 unit, with the results from each stage being independent. In addition, she will continue to play stages until her fortune becomes either 0 or n. Because this is the same for all players, it follows that each player has the same chance of being the victor. Consequently, each player has probability 1/n of being the victor. Now, suppose these n players are divided into r teams, with team i containing ni players, i = 1, . . . , r. That is, suppose players 1, . . . , n1 constitute team 1, players n1 + 1, . . . , n1 + n2 constitute team 2, and so on. Then the probability that the victor is a member of team i is ni/n. But because team i initially has a total fortune of ni units, i = 1, . . . , r, and each game played by members of different teams results in the fortune of the winner’s team increasing by 1 and that of the loser’s team decreasing by 1, it is easy to see that the probability that the victor is from team i is exactly the desired probability. Moreover, our argument also shows that the result is true no matter how the choices of the players in each stage are made. ■
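The ni/n answer can also be checked empirically. The simulation below is our own sketch (the pairing rule, seed, and trial count are arbitrary choices, which is consistent with the observation that the result holds for any pairing rule):

```python
import random

# Monte Carlo check that player i wins with probability ni / n.
# Each stage: pick two solvent players uniformly at random, flip a fair
# coin for the winner, and transfer 1 unit; repeat until one player remains.
def play(fortunes, rng):
    fortunes = list(fortunes)
    alive = [i for i, f in enumerate(fortunes) if f > 0]
    while len(alive) > 1:
        a, b = rng.sample(alive, 2)
        winner, loser = (a, b) if rng.random() < 0.5 else (b, a)
        fortunes[winner] += 1
        fortunes[loser] -= 1
        if fortunes[loser] == 0:
            alive.remove(loser)
    return alive[0]

rng = random.Random(0)
fortunes = [1, 2, 3]                 # n = 6 units split among r = 3 players
trials = 20000
wins = [0, 0, 0]
for _ in range(trials):
    wins[play(fortunes, rng)] += 1
print([w / trials for w in wins])    # should be close to [1/6, 2/6, 3/6]
```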

Suppose that a sequence of experiments, each of which results in either a “success” or a “failure,” is to be performed. Let Ei, i ≥ 1, denote the event that the ith experiment results in a success. If, for all i1, i2, . . . , in,

P(Ei1 Ei2 · · · Ein) = ∏_{j=1}^{n} P(Eij)

we say that the sequence of experiments consists of independent trials.
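To make the product condition concrete, one can model n independent trials as a product space and confirm that the probability of success on any chosen set of trials factors; the sketch below is our own, with an arbitrary per-trial success probability:

```python
from fractions import Fraction
from itertools import product

# Model n independent trials, each succeeding with probability p, as the
# product space of 0/1 tuples, and verify the independent-trials identity.
n = 4
p = Fraction(1, 3)   # success probability per trial (an arbitrary choice)

def outcome_prob(outcome):
    """Probability of one outcome tuple: a product of per-trial factors."""
    prob = Fraction(1)
    for r in outcome:
        prob *= p if r == 1 else 1 - p
    return prob

def prob_success_on(indices):
    """P(success on every trial in `indices`), summed over the product space."""
    return sum(outcome_prob(o) for o in product((0, 1), repeat=n)
               if all(o[i] == 1 for i in indices))

# P(E1 E3) = P(E1) P(E3) = p * p = 1/9 for trials indexed 0 and 2.
print(prob_success_on([0, 2]), p * p)
```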