Methods in Algorithmic Analysis (Chapman & Hall/CRC Computer and Information Science Series)
Explores the Impact of the Analysis of Algorithms on Many Areas within and beyond Computer Science
A flexible, interactive teaching format enhanced by a large selection of examples and exercises
Developed from the author’s own graduate-level course, Methods in Algorithmic Analysis presents numerous theories, techniques, and methods used for analyzing algorithms. It exposes students to mathematical techniques and methods that are practical and relevant to theoretical aspects of computer science.
After introducing basic mathematical and combinatorial methods, the text focuses on various aspects of probability, including finite sets, random variables, distributions, Bayes’ theorem, and the Chebyshev inequality. It explores the role of recurrences in computer science, numerical analysis, engineering, and discrete mathematics applications. The author then describes the powerful tool of generating functions, which is demonstrated in enumeration problems such as probabilistic algorithms, compositions and partitions of integers, and shuffling. He also discusses the symbolic method, the principle of inclusion and exclusion, and its applications. The book goes on to show how strings can be manipulated and counted, how the finite state machine and Markov chains can help solve probabilistic and combinatorial problems, how to derive asymptotic results, and how convergence and singularities play leading roles in deducing asymptotic information from generating functions. The final chapter presents the definitions and properties of the mathematical infrastructure needed to accommodate generating functions.
Accompanied by more than 1,000 examples and exercises, this comprehensive, classroom-tested text develops students’ understanding of the mathematical methodology behind the analysis of algorithms. It emphasizes the important relation between continuous (classical) mathematics and discrete mathematics, which is the basis of computer science.
showing them (and a large number of other relations) is via the methods we develop in Chapter 6.

Example 2.134 We are going to show the following convolution-like formula:
$$\sum_i \binom{p}{m+i}\binom{d}{n+i} = \binom{p+d}{p-m+n}, \qquad p \in \mathbb{N},\ m, n \in \mathbb{Z}.$$
From the identity (I–1), $\binom{p}{m+i} = \binom{p}{p-m-i}$, and the Vandermonde convolution, it follows:
$$\sum_i \binom{p}{m+i}\binom{d}{n+i} = \sum_i \binom{p}{p-m-i}\binom{d}{n+i} = \binom{p+d}{n+p-m}.$$

Exercise 2.135 For positive integers p and d, show the following four binomial sums, which are
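As a quick numeric sanity check (our own, not from the text), the convolution-like formula can be verified with Python's `math.comb`. The helper names `lhs` and `rhs` and the parameter grid are ours; binomial coefficients with an out-of-range lower index are treated as zero.

```python
from math import comb

def lhs(p, d, m, n):
    # sum over the i for which both lower indices are valid:
    # C(p, m+i) needs 0 <= m+i <= p, C(d, n+i) needs 0 <= n+i <= d
    lo = max(-m, -n)
    hi = min(p - m, d - n)
    return sum(comb(p, m + i) * comb(d, n + i) for i in range(lo, hi + 1))

def rhs(p, d, m, n):
    # C(p+d, p-m+n), taken as 0 when the lower index is out of range
    k = p - m + n
    return comb(p + d, k) if 0 <= k <= p + d else 0

# spot-check the identity on a grid of small parameters
ok = all(lhs(p, d, m, n) == rhs(p, d, m, n)
         for p in range(8) for d in range(8)
         for m in range(-3, 4) for n in range(-3, 4))
print(ok)  # True
```

This is only a finite spot check, of course; the general statement is what the Vandermonde convolution argument above proves.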
$q^{n-j}, \qquad q = 1 - p, \qquad (4.2)$

where
$n$ is the total number of trials,
$j$ is the number of successes in n trials,
$n - j$ is the number of failures in n trials,
$p$ is the probability of success,
$q = 1 - p$ is the probability of failure, and
$\binom{n}{j} = \frac{n!}{j!\,(n-j)!}$.

A random variable is said to be a degenerate variable if it has only one possible value with probability 1. The expression in the right-hand side is traditionally called the Bernstein polynomial and denoted by $B_{j,n}(t) = \binom{n}{j} t^j (1-t)^{n-j}$.
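A short sketch (our own illustration, not the book's code) of the binomial probabilities built from these quantities; note that substituting $p = t$ in the same expression gives the Bernstein polynomial $B_{j,n}(t)$.

```python
from math import comb

def binomial_pmf(j, n, p):
    """Pr[exactly j successes in n independent trials],
    success probability p, failure probability q = 1 - p."""
    q = 1.0 - p
    return comb(n, j) * p**j * q**(n - j)

# sanity check: the probabilities over j = 0..n form a distribution
n, p = 10, 0.3
total = sum(binomial_pmf(j, n, p) for j in range(n + 1))
print(round(total, 12))  # 1.0
```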
and then winning the last 2 points, with probability $p^2$. So we get a geometrically distributed random variable. The expected number of serves, N, that player A will make is
$$N = 4(p^4 + q^4) + 5\binom{4}{1}\bigl[p^4 q + p q^4\bigr] + 6\binom{5}{2}\bigl[p^4 q^2 + p^2 q^4\bigr] + 8\binom{6}{3} p^3 q^3 (p^2 + q^2) + \cdots$$
$$= 4(p^4 + q^4) + 20pq(p^3 + q^3) + 60p^2 q^2 (p^2 + q^2) + 20 p^3 q^3 (p^2 + q^2) \sum_{k \geqslant 0} (8 + 2k)(2pq)^k$$
$$= 4(p^4 + q^4) + 20pq(p^3 + q^3) + 60p^2 q^2 (p^2 + q^2) + 20 p^3 q^3 (p^2 + q^2)\, \frac{8 - 12pq}{(1 - 2pq)^2}.$$
If p = q = 1/2, the expected length of a game is $6\frac{3}{4}$ serves.
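The closed form can be cross-checked numerically. The sketch below is our own, assuming the usual scoring (first to 4 points, win by 2, one player serving throughout); it compares the formula with a direct recursion on the score.

```python
from functools import lru_cache

def expected_serves_formula(p):
    """Closed form from the text for the expected serves in one game."""
    q = 1.0 - p
    return (4*(p**4 + q**4)
            + 20*p*q*(p**3 + q**3)
            + 60*p**2*q**2*(p**2 + q**2)
            + 20*p**3*q**3*(p**2 + q**2) * (8 - 12*p*q) / (1 - 2*p*q)**2)

def expected_serves_recursive(p):
    """Independent check: recurse on the score (first to 4, win by 2)."""
    q = 1.0 - p
    # from deuce (a 3-3 or later tie), each pair of serves ends the game
    # with probability p^2 + q^2, so E[extra serves] = 2/(1 - 2pq)
    deuce = 2.0 / (1.0 - 2.0*p*q)

    @lru_cache(maxsize=None)
    def e(a, b):
        if (a >= 4 or b >= 4) and abs(a - b) >= 2:  # game over
            return 0.0
        if a == 3 and b == 3:                        # deuce reached
            return deuce
        return 1.0 + p * e(a + 1, b) + q * e(a, b + 1)

    return e(0, 0)

print(expected_serves_formula(0.5))  # 6.75
```

The two agree for every p, which is a reassuring check on the coefficients 20, 60, and $(8-12pq)/(1-2pq)^2$.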
deferred to Example 8.84, page 477. The situation is simplest if the coupon enclosed by the candy manufacturer is drawn with uniform probability from all types, and there is no dependence between different candy bars. With uniform distribution, if the collector already has i types, the probability of getting a new one with a purchase is $p_i \stackrel{\text{def}}{=} (n-i)/n$, and therefore the number of purchases until the next new, (i + 1)st coupon appears is a geometric random variable, $G\bigl(\frac{n-i}{n}\bigr)$. Using Eq. (4.6) we
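Since the geometric stage with success probability $p_i = (n-i)/n$ has mean $n/(n-i)$, summing over $i = 0, \dots, n-1$ gives the expected total number of purchases. A small sketch under this uniform-coupon assumption (the function name is ours), using exact rationals:

```python
from fractions import Fraction

def expected_purchases(n):
    """Expected candy bars needed to collect all n coupon types:
    with i distinct types in hand, the wait for a new type is geometric
    with success probability (n - i)/n, whose mean is n/(n - i)."""
    return sum(Fraction(n, n - i) for i in range(n))

print(expected_purchases(4))  # 25/3
```

The sum is $n H_n$, the familiar coupon-collector answer, here about 8.33 purchases for n = 4 types.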
theorem of total probability. Randomization on X provides:
$$\Pr[W = k] = \sum_{i \in \mathbb{N}} \Pr[W = k \mid X = i] \Pr[X = i] = \sum_i \Pr[X + Y = k \mid X = i] \Pr[X = i] = \sum_i \Pr[Y = k - i \mid X = i] \Pr[X = i].$$
Using Eq. (3.33) on page 123 and the claim of independence of X and Y, we find
$$\Pr[Y = k - i \mid X = i] = \Pr[Y = k - i] \implies \Pr[W = k] = \sum_i \Pr[X = i] \Pr[Y = k - i].$$
Let $p_X$ and $p_Y$ be the probability mass functions of the discrete random variables X and Y, respectively. Then the above sum can be
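The final sum is a discrete convolution of the two mass functions, and it is easy to compute directly. The sketch below is our own (not the book's): it convolves two pmfs given as dictionaries and checks the result on a pair of fair dice.

```python
def convolve_pmf(px, py):
    """pmf of W = X + Y for independent discrete X, Y given as
    {value: probability} dicts: Pr[W = k] = sum_i Pr[X = i] Pr[Y = k - i]."""
    pw = {}
    for i, pxi in px.items():
        for j, pyj in py.items():
            pw[i + j] = pw.get(i + j, 0.0) + pxi * pyj
    return pw

# two fair dice: the classical triangular distribution on 2..12
die = {v: 1/6 for v in range(1, 7)}
two_dice = convolve_pmf(die, die)
print(round(two_dice[7], 6))  # 0.166667
```

The double loop is exactly the randomization on X above: each pair (i, j) contributes its probability mass to the outcome i + j.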