Probabilistic Search for Tracking Targets: Theory and Modern Applications

Irad Ben-Gal, Eugene Kagan

Language: English

Pages: 348

ISBN: 2:00320018

Format: PDF / Kindle (mobi) / ePub


Presents a probabilistic and information-theoretic framework for searching for static or moving targets in discrete time and space.

Probabilistic Search for Tracking Targets uses an information-theoretic scheme to present a unified approach to known search methods and to support the development of new search algorithms. The book addresses search methods under different constraints and assumptions, such as search uncertainty under incomplete information, probabilistic search schemes, observation errors, group testing, search games, distribution of search efforts, single and multiple targets and search agents, and online or offline search schemes. The proposed approach is related to path-planning techniques, optimal search algorithms, Markov decision models, decision trees, stochastic local search, artificial intelligence and heuristic information-seeking methods. Furthermore, this book presents novel methods of search for static and moving targets, along with practical algorithms for partitioning and for search and screening.

Probabilistic Search for Tracking Targets includes complete material for undergraduate and graduate courses in modern applications of probabilistic search, decision making and group testing, and provides several directions for further research in search theory.

The authors:

• Provide a generalized information-theoretic approach to the problem of real-time search for both static and moving targets over a discrete space.
• Present a theoretical framework that covers known information-theoretic search algorithms and forms a basis for the development and analysis of different search algorithms over a probabilistic space.
• Use numerous examples of group testing, search and path planning algorithms to illustrate direct implementation in the form of running routines.
• Relate the suggested approach to known search theories and methods, such as search and screening theory, search games, Markov decision process models of search, data mining methods, coding theory and decision trees.
• Discuss relevant search applications, such as quality-control search for nonconforming units in a batch or a military search for a hidden target.
• Provide an accompanying website featuring the algorithms discussed throughout the book, along with practical implementation procedures.

Practical Text Mining with Perl (Wiley Series on Methods and Applications in Data Mining)

Randomized Algorithms

The Algorithm Design Manual (2nd Edition), Corrected printing 2012

The Intelligent Web: Search, Smart Algorithms, and Big Data

set X. Since usually the search agent is not informed of the target's selections, such a rule can be represented by a specific stochastic process that defines the target location probabilities at time t + 1 given the location probabilities at previous times, p_{t+1}(x^{t+1} = x_i | x^t = x_j, x^{t-1} = x_k, ..., x^1 = x_0), x_i ∈ X. For example, assume that the target's movement is governed by a Markov process with known transition probability matrix ρ = [ρ_ij]_{n×n}, where ρ_ij = Pr{x^{t+1} = x_j | x^t = x_i}.
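Under this Markov assumption, the one-step update of the target location probabilities is just a vector-matrix product. A minimal sketch, assuming a small three-point sample space with an invented transition matrix (not taken from the book):

```python
import numpy as np

# Hypothetical 3-point sample space X with an assumed transition matrix rho,
# where rho[i][j] = Pr{x^{t+1} = x_j | x^t = x_i}; each row sums to 1.
rho = np.array([
    [0.8, 0.2, 0.0],
    [0.1, 0.7, 0.2],
    [0.0, 0.3, 0.7],
])

# Target is known to start at point x_1.
p_t = np.array([1.0, 0.0, 0.0])

# One-step update: p_{t+1} = p_t @ rho (row vector times transition matrix).
p_next = p_t @ rho
print(p_next)  # -> [0.8 0.2 0.]
```

Iterating this product propagates the location probabilities forward in time, which is exactly the information the search agent has in place of the target's actual selections.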

x^{t=0} = (3, 3), and up to time t = 100 it follows the trajectory shown in Figure 2.8a. Then the search density u^{t=100} obtained for time t = 100 is as shown in Figure 2.8b. The agent starts at the point x = (3, 3) and then follows the trajectory a^t, which includes returns to previously visited points. The search density accumulates at the visited points and increases at each point according to the number of returns of the search agent. [Figure 2.8: (a) the agent's trajectory over the grid; (b) the accumulated search density over the x-y plane.]
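The accumulation of search density over a trajectory with revisits can be sketched as a simple visit-count histogram; the short trajectory below is an invented example, not the one in Figure 2.8:

```python
from collections import Counter

# Hypothetical trajectory on a grid, starting at (3, 3) and revisiting points.
trajectory = [(3, 3), (3, 4), (3, 3), (4, 3), (3, 3), (4, 3)]

# Search density: the fraction of total search effort spent at each point,
# so points with more returns accumulate proportionally more density.
visits = Counter(trajectory)
total = len(trajectory)
density = {point: count / total for point, count in visits.items()}

print(density[(3, 3)])  # 3 visits out of 6 steps -> 0.5
```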

the expected depth is E(D | T[n-1]) + (p_1 + p_2) ≤ E(D | T[n-1]) + (p_1 + p_3), and the combination of elements 1 and 2 is at least as good as the combination of elements 1 and 3. The procedure that constructs an optimal search tree as specified by Theorem 2.7 is illustrated by the following example.

Example 2.16. As in the previous example, let sample space X include six points with location probabilities p_i = p(x_i), i = 1, ..., n, defined similarly as above:

  x_i : x_1  x_2  x_3  x_4  x_5  x_6
  p_i : 0.2
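The merge step behind this argument, repeatedly combining the two least probable elements, is the familiar Huffman-style construction. A minimal sketch computing the expected depth E(D); the six probability values below are illustrative, not those of Example 2.16:

```python
import heapq

def expected_depth(probs):
    """Build a Huffman-style search tree by repeatedly merging the two
    least probable elements; the sum of all merged weights equals the
    expected depth E(D) of the resulting tree."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)  # the two smallest remaining probabilities
        total += a + b           # each merge adds one level for its leaves
        heapq.heappush(heap, a + b)
    return total

# Illustrative location probabilities for six sample points.
print(expected_depth([0.3, 0.2, 0.2, 0.1, 0.1, 0.1]))  # -> 2.5
```

Merging the two smallest probabilities at each step is exactly why the combination of elements 1 and 2 (the inequality above) is at least as good as combining elements 1 and 3.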

partition α such that β ≺ α and |β| + 1 ≤ |α| ≤ 2·|β|, where the size of the trivial partition β = {X, Ø} is |β| = 1:

next_partition_GOTA(β, {T_j | j = 1, ..., |β|}):
  1. Set α = Ø.
  2. For j = 1, ..., |β| do:
     2.1 Set α_j = T_j(B_j), B_j ∈ β: create a partition of the area B_j using the rule T_j.
     2.2 Set α = α ∪ α_j: update the resulting partition α with α_j.
  3. Return α.

In addition to the functions over the tree that are used in Algorithm 2.9, we apply the following functions defined for the tree
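The refinement step above can be transcribed directly into Python; the halving rule below is a hypothetical stand-in for the rules T_j, used only for illustration:

```python
def next_partition(beta, rules):
    """Refine partition beta by applying rule rules[j] to each block B_j.
    The result alpha satisfies |beta| <= |alpha| <= 2*|beta| whenever every
    rule splits a block into at most two sub-blocks."""
    alpha = []                              # step 1: alpha starts empty
    for block, rule in zip(beta, rules):    # step 2: for each B_j in beta
        alpha.extend(rule(block))           # steps 2.1-2.2: partition and merge
    return alpha                            # step 3

def halve(block):
    """Hypothetical rule T_j: split a block of sample points in half."""
    if len(block) <= 1:
        return [block]  # a singleton block cannot be refined further
    mid = len(block) // 2
    return [block[:mid], block[mid:]]

# Trivial partition of a six-point sample space X: beta = {X}, |beta| = 1.
beta = [["x1", "x2", "x3", "x4", "x5", "x6"]]
alpha = next_partition(beta, [halve] * len(beta))
print(alpha)  # -> [['x1', 'x2', 'x3'], ['x4', 'x5', 'x6']]
```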

before c∗, which contradicts the assumption. Finally, let us demonstrate that, when applied over a finite graph G, the BF* algorithm (Procedure 2.5) terminates.

Theorem 2.9 [79]. If there exists a path from the initial state s_init to the final state s_fin, then Procedure 2.5 (the BF* algorithm) terminates with a resulting path.

Proof. Since graph G is finite, each path between the initial state s_init and the final state s_fin has a finite length. Assume, by contradiction, that Procedure 2.5 does not
