The Art and Theory of Dynamic Programming, Volume 130 (Mathematics in Science and Engineering)

Stuart E. Dreyfus, Averill M. Law

Language: English

Pages: 284

ISBN: 0122218604

Format: PDF / Kindle (mobi) / ePub


Face Processing: Advanced Modeling and Methods

Understanding Operating Systems (7th Edition)

Physically Based Rendering: From Theory to Implementation

Bayesian Reasoning and Machine Learning

for a machine that has just turned age i at the end of year N. The problem is to decide when (or if) to replace the incumbent machine, when (or if) to replace its replacement, etc., so as to minimize the total cost during the next N years.

2. Dynamic-Programming Formulation

We now have the opportunity to test what we learned in Chapter 1 on a “real” problem. Recall that first an optimal value function appropriate to the problem must be defined and that the arguments of the function should
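
To make the formulation concrete, here is a minimal dynamic-programming sketch of such a replacement problem. It is an illustration only, not the book's own code: the horizon, the maximum machine age, and the cost functions `upkeep`, `price`, and `trade_in` are all hypothetical placeholders.

```python
from functools import lru_cache

N = 10         # planning horizon in years (assumed)
MAX_AGE = 6    # age at which replacement is forced (assumed)

def upkeep(age):    # operating cost for a year begun at this age (hypothetical)
    return 10 + 8 * age

def price():        # purchase price of a new machine (hypothetical)
    return 100

def trade_in(age):  # trade-in value of a machine of this age (hypothetical)
    return max(0, 60 - 12 * age)

@lru_cache(maxsize=None)
def f(t, age):
    """Minimum cost from the start of year t through year N,
    given a machine that has just turned `age`."""
    if t == N:
        return -trade_in(age)          # salvage the final machine
    keep = float("inf")
    if age < MAX_AGE:
        keep = upkeep(age) + f(t + 1, age + 1)
    replace = price() - trade_in(age) + upkeep(0) + f(t + 1, 1)
    return min(keep, replace)

print(f(0, 3))  # e.g., optimal cost starting with a 3-year-old machine
```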

only the sum already allocated, and hence the amount left, is essential to our optimization of the remaining process. Hence we define the optimal value function by

f_k(x) = the maximum return obtainable from activities k through N, given x units of resource remaining to be allocated.  (3.3)

As is frequently done in dynamic programming, we have this time chosen to write the stage variable k as a subscript to distinguish it from the other (state) variable. This is a quite arbitrary matter and
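
Definition (3.3) leads directly to the recurrence f_k(x) = max over a of [r_k(a) + f_{k+1}(x − a)], which the sketch below transcribes for integer allocations. The number of activities and the return function r are hypothetical stand-ins, not data from the text.

```python
from functools import lru_cache

N = 4    # number of activities (assumed)
X = 10   # total units of resource to allocate (assumed)

def r(k, a):
    """Return from giving a units to activity k (hypothetical, concave)."""
    return (k + 1) * a ** 0.5

@lru_cache(maxsize=None)
def f(k, x):
    """Maximum return from activities k through N with x units remaining,
    i.e., the optimal value function of definition (3.3)."""
    if k > N:
        return 0.0
    # try every feasible integer allocation a to activity k
    return max(r(k, a) + f(k + 1, x - a) for a in range(x + 1))

print(round(f(1, X), 3))
```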

for any relative minimum or relative maximum, or even for other types of stationary solutions. A further necessary condition for a relative minimum, easily obtained and used, is that ∂²J/∂x(i)² > 0, and a stronger condition is that the Hessian matrix is nonnegative definite (see any calculus text on the minimization of a function of N variables). Just as with the corresponding calculus problem, there is no way of distinguishing the absolute minimum from other relative minima. This is the
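
In summary, the conditions mentioned in this passage can be written as follows; the notation J(x(1), ..., x(N)) follows the excerpt:

```latex
% Stationarity and second-order conditions for minimizing J(x(1),...,x(N))
\frac{\partial J}{\partial x(i)} = 0, \quad i = 1,\dots,N
  \qquad \text{(necessary for any relative extremum)}
\frac{\partial^{2} J}{\partial x(i)^{2}} > 0
  \qquad \text{(further necessary condition for a relative minimum)}
H = \left[ \frac{\partial^{2} J}{\partial x(i)\,\partial x(j)} \right] \succeq 0
  \qquad \text{(stronger condition: Hessian nonnegative definite)}
```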

such as developed in this section, and second-order, Newton-Raphson procedures, such as given in Problem 6.6, is beyond the scope of this text. Briefly, a gradient procedure requires much less computation per iteration and will converge for initial guesses farther from the optimum than second-order procedures. Second-order procedures converge rapidly once close to the solution, while gradient procedures encounter difficulties (both the
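
The trade-off described here is easy to see on a toy problem. The following sketch compares repeated gradient steps against Newton-Raphson steps on a hypothetical one-dimensional objective; the function and the step size eta are illustrative assumptions, not from the text.

```python
def dJ(x):   # first derivative of the toy objective J(x) = x**4 - 3x**2 + x
    return 4 * x**3 - 6 * x + 1

def d2J(x):  # second derivative, needed only by the Newton-Raphson step
    return 12 * x**2 - 6

def gradient_step(x, eta=0.02):
    # cheap per iteration; tolerant of starting guesses far from the optimum
    return x - eta * dJ(x)

def newton_step(x):
    # more work per iteration, but rapid convergence near the solution
    return x - dJ(x) / d2J(x)

x_g = x_n = 2.0
for _ in range(25):
    x_g = gradient_step(x_g)
    x_n = newton_step(x_n)
print(f"gradient: {x_g:.6f}   newton-raphson: {x_n:.6f}")
```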

warehouse rental, insurance, taxes, and maintenance. It also includes the cost of having capital tied up in inventory rather than having it invested elsewhere. If w_i − d_i is negative (there is unfilled demand at the end of period i), then there is a shortage (or penalty) cost π_i(d_i − w_i) and a minimal holding cost h_i(0). It is assumed that π_i(0) = 0. The shortage cost includes the cost due to extra record keeping in the backlogged case and the cost due to loss of revenue in the lost sales
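
The period cost just described splits cleanly into a holding part and a shortage part. A minimal sketch, assuming linear unit costs in place of the general functions h_i and π_i:

```python
def holding_cost(excess, h_unit=1.0):
    """h_i: cost of carrying `excess` units into the next period (linear stand-in)."""
    return h_unit * max(excess, 0)

def shortage_cost(shortfall, p_unit=5.0):
    """pi_i: penalty for unfilled demand; pi_i(0) = 0, as assumed in the text."""
    return p_unit * max(shortfall, 0)

def period_cost(w, d):
    """End-of-period cost with w units on hand and demand d.
    When w - d < 0 the shortage cost applies, plus the minimal holding cost h(0)."""
    if w >= d:
        return holding_cost(w - d)
    return shortage_cost(d - w) + holding_cost(0)

print(period_cost(w=8, d=5))  # carrying 3 units -> holding cost only
print(period_cost(w=3, d=5))  # 2 units short  -> shortage cost (+ h(0) = 0 here)
```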
