Architectures for Computer Vision: From Algorithm to Chip with Verilog

Hong Jeong

Language: English

Pages: 450

ISBN: 111865918X

Format: PDF / Kindle (mobi) / ePub

This book provides comprehensive coverage of 3D vision systems, from vision models and state-of-the-art algorithms to their hardware architectures for implementation on DSPs, FPGAs, ASICs, and GPUs. It aims to bridge the gap between computer vision algorithms and real-time digital circuit implementations, especially with Verilog HDL design. The book is organized around vision and hardware modules: Verilog vision modules, 3D vision modules, parallel vision architectures, and Verilog designs for a stereo matching system with various parallel architectures.

  • Provides Verilog vision simulators, tailored to the design and testing of general vision chips
  • Bridges the differences between C/C++ and HDL to encompass both software realization and chip implementation; includes numerous examples that realize vision algorithms and general vision processing in HDL
  • Unique in providing an organized and complete overview of how a real-time 3D vision system-on-chip can be designed
  • Focuses on the digital VLSI aspects of 3D vision systems and the implementation of digital signal processing tasks on hardware platforms such as ASICs and FPGAs, topics that have not previously been covered comprehensively in a single book
  • Provides a timely view of the pervasive use of vision systems and the challenges of fusing information from different vision modules
  • Accompanying website includes software and HDL code packages to enhance further learning and develop advanced systems
  • A solution set and lecture slides are provided on the book's companion website

The book is aimed at graduate students and researchers in computer vision and embedded systems, as well as chip and FPGA designers. Senior undergraduate students specializing in VLSI design or computer vision will also find the book helpful for understanding advanced applications.

… SystemVerilog). Considering the complexity of vision algorithms, we will follow the behavioral description method most of the time in this book. As the next step, let us design a simple adder, a four-bit adder, using the behavioral model:

c = a + b,    (2.1)

where a, b, and c are all four-bit integers. The source code is included in the file adder.v.

Listing 2.1 A 4-bit adder: adder.v

`timescale 1ns/1ps        // unit time / precision
module adder(             // ports
    input  [3:0] a, b,    // input
    output [3:0] c
);
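
The excerpt breaks off before the body of the module. A minimal completion, assuming a purely combinational adder that implements Equation (2.1) with a continuous assignment (the assignment and the small testbench below are illustrative sketches, not the book's exact listing):

`timescale 1ns/1ps
module adder(
    input  [3:0] a, b,
    output [3:0] c
);
    assign c = a + b;           // 4-bit sum; the carry-out is discarded
endmodule

module adder_tb;                // hypothetical testbench, not from the book
    reg  [3:0] a, b;
    wire [3:0] c;
    adder uut(.a(a), .b(b), .c(c));
    initial begin
        a = 4'd3; b = 4'd5; #10;
        $display("c = %0d", c); // expect 8
        a = 4'd9; b = 4'd8; #10;
        $display("c = %0d", c); // expect 1 (the sum wraps in 4 bits)
        $finish;
    end
endmodule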

… executed in different locations in a program. In addition, they are building blocks of large procedures, and thus the source descriptions can be easily built and debugged. However, the two procedures have many different characteristics. Unlike those in C, a function must execute in one simulation time unit, but a task can execute over multiple time units, according to time-controlling statements. Another difference is that a function cannot enable a task, whereas a task can enable both tasks and functions.
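
The contrast can be seen in a small sketch such as the following (the module and names are hypothetical, not from the book): the function evaluates with no timing controls, while the task contains a delay and calls the function.

`timescale 1ns/1ps
module func_task_demo;
    reg [3:0] x, y;
    reg [3:0] s;

    // Function: no timing controls allowed; evaluates without advancing time.
    function [3:0] add4(input [3:0] a, b);
        add4 = a + b;
    endfunction

    // Task: may contain timing controls and may enable (call) a function.
    task add_after_delay(input [3:0] a, b, output [3:0] r);
        begin
            #5;                 // a delay is legal only inside a task
            r = add4(a, b);     // a task calling a function is allowed
        end
    endtask

    initial begin
        x = 4'd2; y = 4'd7;
        add_after_delay(x, y, s);
        $display("s = %0d at t = %0t", s, $time);   // s = 9 at t = 5
    end
endmodule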

… The reading block then begins by inhibiting the processing block using the semaphore do_ip. The major operation is to provide a pair of new images, I1(t) and I2(t), by copying the contents of the external memories, RAM1 and RAM2, into the internal registers, img1 and img2. When the operation is finished, the reading block activates the processing block using the semaphore do_ip. In return, the processing block, when activated, begins by deactivating the reading block using the semaphore do_load. This …
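
A minimal sketch of this handshake, assuming single-bit register semaphores and omitting the memory interfaces and the image-processing body (the module and clocking structure are assumptions layered on the names do_load, do_ip, img1, and img2 from the text):

`timescale 1ns/1ps
module read_process_handshake(
    input clk,
    input reset
);
    reg do_load;   // semaphore: reading block may run
    reg do_ip;     // semaphore: processing block may run

    // Reading block: when enabled, load a new image pair (omitted) and
    // then activate the processing block via do_ip.
    always @(posedge clk) begin
        if (reset)
            do_ip <= 1'b0;          // keep the processing block inhibited
        else if (do_load) begin
            // ... copy RAM1 -> img1 and RAM2 -> img2 here ...
            do_ip <= 1'b1;          // activate the processing block
        end else
            do_ip <= 1'b0;
    end

    // Processing block: when enabled, process the images (omitted) and
    // then reactivate the reading block via do_load.
    always @(posedge clk) begin
        if (reset)
            do_load <= 1'b1;        // the reading block runs first
        else if (do_ip) begin
            // ... image-processing step on img1 and img2 here ...
            do_load <= 1'b1;        // reactivate the reading block
        end else
            do_load <= 1'b0;
    end
endmodule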

… generally smooth. The smoothness measure assumes that if the surface is smooth, so are the disparity values:

∇Z(p) = 0 ←→ ∇d(p) = 0.    (6.72)

This premise is also very difficult to justify, because the disparity is the result of two different views of the same surface, and thus the slope and the boundaries may affect the disparity in a very complicated manner. There are many variations of this form in terms of the derivative order, neighborhood size, and truncation values. Among many others, the linear …
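
One common variation, shown here only as an illustrative sketch (the truncation threshold, word widths, and module interface are assumptions, not the book's design), is a truncated linear penalty between neighboring disparities, which is simple to evaluate in hardware:

`timescale 1ns/1ps
module smoothness_cost #(
    parameter DISP_BITS = 6,          // disparity word width (assumed)
    parameter TAU       = 4           // truncation threshold (assumed)
)(
    input  [DISP_BITS-1:0] d_p,       // disparity at pixel p
    input  [DISP_BITS-1:0] d_q,       // disparity at a neighboring pixel q
    output [DISP_BITS-1:0] cost       // min(|d_p - d_q|, TAU)
);
    wire [DISP_BITS-1:0] tau  = TAU;
    wire [DISP_BITS-1:0] diff = (d_p > d_q) ? (d_p - d_q) : (d_q - d_p);
    assign cost = (diff > tau) ? tau : diff;
endmodule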

… boundaries of objects, and thus the disparity boundaries. The segment boundary affects the disparity boundary, as a modified version of the smoothness constraint:

∇S(p) ≠ 0 ←→ ∇d(p) ≠ 0.    (6.78)

This constraint can also be applied to either the left or the right disparity, depending on which image plane is used as the reference. Next comes the planar constraint, which focuses on the surfaces instead of the boundaries. The underlying concept is that there exist some …
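
As an illustration of how such a boundary constraint can be checked pointwise in hardware (a sketch under assumed signal names and widths, not the book's module), a disparity jump between neighbors can be flagged as a violation whenever it occurs inside a single segment:

`timescale 1ns/1ps
module segment_gate #(
    parameter SEG_BITS  = 8,           // segment-label width (assumed)
    parameter DISP_BITS = 6            // disparity width (assumed)
)(
    input  [SEG_BITS-1:0]  s_p, s_q,   // segment labels at p and neighbor q
    input  [DISP_BITS-1:0] d_p, d_q,   // disparities at p and q
    output                 violation   // disparity jump inside one segment
);
    wire seg_edge  = (s_p != s_q);     // nonzero segment gradient
    wire disp_edge = (d_p != d_q);     // nonzero disparity gradient
    assign violation = disp_edge & ~seg_edge;
endmodule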
