Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World

Leslie Valiant

Language: English

Pages: 208

ISBN: 0465060722

Format: PDF / Kindle (mobi) / ePub

From a leading computer scientist, a unifying theory that will revolutionize our understanding of how life evolves and learns.

How does life prosper in a complex and erratic world? While we know that nature follows patterns—such as the law of gravity—our everyday lives are beyond what known science can predict. We nevertheless muddle through even in the absence of theories of how to act. But how do we do it?

In Probably Approximately Correct, computer scientist Leslie Valiant presents a masterful synthesis of learning and evolution to show how both individually and collectively we not only survive, but prosper in a world as complex as our own. The key is “probably approximately correct” algorithms, a concept Valiant developed to explain how effective behavior can be learned. The model shows that pragmatically coping with a problem can provide a satisfactory solution in the absence of any theory of the problem. After all, finding a mate does not require a theory of mating. Valiant’s theory reveals the shared computational nature of evolution and learning, and sheds light on perennial questions such as nature versus nurture and the limits of artificial intelligence.

Offering a powerful and elegant model that encompasses life’s complexity, Probably Approximately Correct has profound implications for how we think about behavior, cognition, biological evolution, and the possibilities and limits of human and machine intelligence.

Robot Motion and Control: Recent Developments (Lecture Notes in Control and Information Sciences)

Testing Computer Software (2nd Edition)

PIC Robotics: A Beginner's Guide to Robotics Projects Using the PIC Micro

Genetic Programming Theory and Practice II (Genetic Programming, Volume 8)

Algorithms in a Nutshell

Practical Maya Programming with Python

level of confidence in a given rule is justified, even with fewer than 100 examples, and even when the rule predicts not all but only most examples correctly. What we have shown is that we can depend on the predictive power of a rule someone has given us if we are sure of three conditions. First, we need to be given a data set of past examples that have high agreement with the rule. Second, we need to know that the rule is from some small class of rules that was fixed before the examples were
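The excerpt above states Valiant's conditions for trusting a rule: many past examples agreeing with it, and a rule drawn from a small class fixed in advance. As a minimal illustration (my own sketch, not from the book), the standard Hoeffding-style bound for a finite rule class shows why fewer than 100 examples can suffice:

```python
import math

def sample_bound(num_rules, epsilon, delta):
    """Number of examples sufficient so that, with probability at least
    1 - delta, every rule in a fixed class of `num_rules` rules has
    observed agreement within epsilon of its true accuracy
    (Hoeffding bound plus a union bound over the class)."""
    return math.ceil(
        (math.log(num_rules) + math.log(2.0 / delta)) / (2.0 * epsilon ** 2)
    )

# A fixed class of 20 rules, accuracy estimated to within 0.2,
# with 95% confidence:
print(sample_bound(20, 0.2, 0.05))  # 84 -- fewer than 100 examples
```

The bound grows only logarithmically with the size of the rule class, which is why fixing a small class before seeing the data matters: the smaller the class, the fewer examples are needed for the observed agreement to be trustworthy.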

gene transfer can be regarded as computations on the sum total of the genomes of a population, as can the mutations of individual genes. The one-genome model can embrace all of these by having different internal mechanisms for producing variation. There are many aspects of evolution that we do not address at all. Diversity in the gene pool may be a good defense mechanism against the unexpected and hence be critically important for survival. Survival is no doubt indispensable, but by itself

the system being programmed than a teacher has to the learner. This is particularly important when the only available teacher is a passive environment. An advantage of learning therefore is that it interfaces directly between the possibly complex current state of knowledge of the learner and the invariably complex outside world. Learning can be accomplished without needing anyone to explicitly understand either the state of the learner or the complexities of the world. A programmer would need to

the gravitational pull exerted on it by another object. This seeming coincidence is a central part of his theory of mechanics, but one for which he knew he had no explanation. Not until Einstein’s general theory of relativity was this removed from the realm of the mysterious. Earlier I quoted Eugene Wigner’s comments on the effectiveness of mathematics in the physical sciences. Even as we have exploited mathematics to gain an ever more accurately predictive understanding of the physical world,

the Uncertainty in Artificial Intelligence conference series. It has been found empirically that in applications where large amounts of general knowledge need to be modeled, some learning component is essential, as, for example, in IBM’s Watson system for the Jeopardy! contest. 5. D. Angluin and P. Laird, “Learning from Noisy Examples,” Machine Learning 2 (1987): 343–370. A generic approach to making learning algorithms resistant to one kind of noise is given in Michael J. Kearns, “Efficient
