# Elementary Probability for Applications

Language: English

Pages: 254

ISBN: 0521867568

Format: PDF / Kindle (mobi) / ePub

This clear and lively introduction to probability theory concentrates on the results that are the most useful for applications, including combinatorial probability and Markov chains. Concise and focused, it is designed for a one-semester introductory course in probability for students who have some familiarity with basic calculus. Reflecting the author's philosophy that the best way to learn probability is to see it in action, there are more than 350 problems and 200 examples. The examples contain all the old standards such as the birthday problem and Monty Hall, but also include a number of applications not found in other books, from areas as wide-ranging as genetics, sports, finance, and inventory management.

an infinite sequence of pairwise disjoint events (that is, Aᵢ ∩ Aⱼ = ∅ when i ≠ j), then

P(∪ᵢ₌₁^∞ Aᵢ) = Σᵢ₌₁^∞ P(Aᵢ)

These assumptions are motivated by the frequency interpretation of probability, which states that if we repeat an experiment a large number of times, then the fraction of times the event A occurs will be close to P(A). To be precise, if we let N(A, n) be the number of times A occurs in the first n trials, then

N(A, n)/n → P(A) as n → ∞   (1.1)

In Chapter 6, we see that this result is a
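The frequency interpretation in (1.1) is easy to check by simulation. Here is a minimal sketch in Python; the function name `relative_frequency` and the choice P(A) = 0.3 are illustrative, not from the book.

```python
import random

def relative_frequency(p, n, seed=0):
    """Simulate n independent trials of an event A with P(A) = p
    and return N(A, n) / n, the fraction of trials in which A occurs."""
    rng = random.Random(seed)
    count = sum(1 for _ in range(n) if rng.random() < p)
    return count / n

# As n grows, the relative frequency approaches P(A) = 0.3.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(0.3, n))
```

For n = 1,000,000 the result typically lands within a few thousandths of 0.3, in line with the law of large numbers mentioned for Chapter 6.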

basket.

33. Change the second and third probabilities in the last problem so that each boy has an equal chance of winning.

3.5 Exercises: Bayes' formula

34. 5% of men and 0.25% of women are color blind. Assuming that there are equal numbers of men and women, what is the probability that a color-blind person is a man?

35. The alpha-fetoprotein test is meant to detect spina bifida in unborn
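Exercise 34 is a direct application of Bayes' formula for two exclusive, exhaustive causes. A minimal sketch (the helper name `bayes_posterior` is made up for illustration):

```python
def bayes_posterior(p_e_given_a, p_a, p_e_given_b, p_b):
    """Bayes' formula with two exclusive, exhaustive causes A and B:
    P(A | E) = P(E|A)P(A) / (P(E|A)P(A) + P(E|B)P(B))."""
    numerator = p_e_given_a * p_a
    denominator = numerator + p_e_given_b * p_b
    return numerator / denominator

# Exercise 34: A = man, B = woman, E = color blind,
# with equal numbers of men and women (P(A) = P(B) = 0.5).
p = bayes_posterior(0.05, 0.5, 0.0025, 0.5)
print(round(p, 4))  # 0.025 / 0.02625 ≈ 0.9524
```

So a color-blind person is a man with probability about 95%, even though only 5% of men are color blind: the tiny female rate makes the male cause dominate the denominator.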

two endpoints. To see what distribution functions look like and to explain the use of (5.8), we return to our examples.

Example 5.7 Uniform distribution. f(x) = 1/(b − a) for a ≤ x ≤ b.

F(x) = 0 for x ≤ a;  F(x) = (x − a)/(b − a) for a ≤ x ≤ b;  F(x) = 1 for x ≥ b

To check this, note that P(a < X < b) = 1, so P(X ≤ x) = 1 when x ≥ b and P(X ≤ x) = 0 when x ≤ a. For a ≤ x ≤ b, we compute

P(X ≤ x) = ∫₋∞ˣ f(y) dy = ∫ₐˣ 1/(b − a) dy = (x − a)/(b − a)

In the most important special case, when a = 0 and b = 1, we have F
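The three-piece definition of F translates directly into code. A minimal sketch, with the function name `uniform_cdf` chosen for illustration:

```python
def uniform_cdf(x, a, b):
    """Distribution function F(x) of the uniform distribution on [a, b]:
    0 for x <= a, (x - a)/(b - a) for a <= x <= b, 1 for x >= b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# In the special case a = 0, b = 1, F(x) = x on [0, 1].
print(uniform_cdf(0.25, 0.0, 1.0))  # 0.25
print(uniform_cdf(-1.0, 0.0, 1.0))  # 0.0
print(uniform_cdf(2.0, 0.0, 1.0))   # 1.0
```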

5.2 Distribution functions

Using this definition of F⁻¹, we have

P(F(X) < y) = P(X < F⁻¹(y)) = F(F⁻¹(y)) = y

the last equality holding since F is continuous. This is the key to many results in nonparametric statistics. For example, suppose we have a sample of 10 men's heights and 10 women's heights. To test the hypothesis that men and women have the same height distribution, we can look at the ranks of the men's heights in the overall sample of size 20. For example, these might be 1,
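The identity P(F(X) < y) = y says that F(X) is uniform on (0, 1) whenever F is continuous (the probability integral transform). A quick simulation sketch, using the exponential(1) distribution as an illustrative choice of X, so F(x) = 1 − e⁻ˣ:

```python
import math
import random

def transformed_fraction_below(y, n, seed=1):
    """Draw n samples of X ~ exponential(1), apply F(x) = 1 - exp(-x),
    and return the fraction of transformed values below y.
    If F(X) is uniform on (0, 1), this fraction should be close to y."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if 1.0 - math.exp(-rng.expovariate(1.0)) < y)
    return hits / n

# Each fraction should be near y itself.
for y in (0.1, 0.5, 0.9):
    print(y, transformed_fraction_below(y, 100_000))
```

This is why ranks carry all the usable information in the height example: after transforming by the common F, both samples look like uniforms, and only their relative ordering matters.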

of the expected value we write (EX)². This convention is designed to cut down on parentheses.

1.6 Moments and variance

The variance measures how spread out the distribution of X is. To begin to explain this statement, we show that

var(X + b) = var(X),  var(aX) = a² var(X)   (1.12)

In words, the variance is not changed by adding a constant to X, but multiplying X by a multiplies the variance by a²
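Both halves of (1.12) can be checked numerically on any finite sample. A minimal sketch (the sample values and the constants a, b are arbitrary illustrative choices):

```python
def variance(xs):
    """Population variance of a finite sample: E[(X - EX)^2]."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

data = [1.0, 2.0, 4.0, 7.0]
a, b = 3.0, 5.0

# var(X + b) = var(X): shifting every value leaves the spread unchanged.
shifted = [x + b for x in data]
# var(aX) = a^2 var(X): scaling every value scales the spread by a^2.
scaled = [a * x for x in data]

print(variance(data))                       # 5.25
print(variance(shifted))                    # 5.25
print(variance(scaled), a**2 * variance(data))  # both 47.25
```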