Mesh algorithms for problems in image processing

Lynn Ellen Te Winkel, Purdue University

Abstract

Computer vision, image processing, and parallel processing are important areas in computer science. The use of parallelism is crucial in solving the computationally intensive problems arising in computer vision and image processing. A natural architecture for problems on images is the mesh architecture. In this thesis we concentrate on mesh algorithms for two problems in the area of image processing and present a number of algorithms based on different problem solving approaches. Let I be an n × n binary image stored in an n × n array of processors. The first problem we study is that of labeling the connected components of a binary image I. We describe two image-dependent algorithms (i.e., algorithms whose running time depends on the "difficulty" of the image) and an image-independent algorithm, all of which we implemented on NASA's Massively Parallel Processor (MPP). The image-dependent algorithms are relaxation algorithms and the image-independent algorithm is a divide-and-conquer algorithm. We also discuss how special hardware features of the MPP influence the design of parallel algorithms tailored towards that machine. In addition, we address the issue of how to handle images that are larger than the available processor array. Our work in analyzing different problem solving approaches for the connected component labeling problem is not only relevant to that problem, but also provides insight into how to design efficient mesh algorithms for problems with communication requirements similar to those of component labeling. We then consider the problem of determining a minimum-cost rectilinear Steiner tree of a given binary image. Our main contributions for this problem are two conceptually different methods for connecting components in an image and a method for improving subsolutions by making horizontal and vertical shortcuts. Using these methods, we present three heuristic mesh algorithms for this NP-hard problem.
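The thesis implements the labeling algorithms on the MPP's mesh of processors; for readers unfamiliar with the problem itself, a minimal sequential sketch of connected component labeling of a binary image (a breadth-first flood fill with 4-connectivity, not the thesis's parallel relaxation or divide-and-conquer algorithms) might look like this:

```python
from collections import deque

def label_components(image):
    """Label the 4-connected components of a binary image.

    `image` is a list of rows of 0/1 values. Returns (labels, k), where
    `labels` is a same-shaped grid assigning each '1' pixel a component
    id in 1..k, with '0' pixels left as 0.
    """
    labels = [[0] * len(row) for row in image]
    k = 0
    for i in range(len(image)):
        for j in range(len(image[i])):
            if image[i][j] == 1 and labels[i][j] == 0:
                # Unlabeled '1' pixel: start a new component and flood it.
                k += 1
                labels[i][j] = k
                queue = deque([(i, j)])
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < len(image) and 0 <= nc < len(image[nr])
                                and image[nr][nc] == 1 and labels[nr][nc] == 0):
                            labels[nr][nc] = k
                            queue.append((nr, nc))
    return labels, k
```

On the mesh, the analogous relaxation step is local: each processor repeatedly exchanges labels with its four neighbors and keeps the minimum, so the running time depends on how far labels must travel — the "difficulty" of the image mentioned above.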
All of our algorithms have an O(n log k) worst-case running time, where k is the number of connected components formed by the image entries of value '1'. Our algorithms are implemented by simulation and the results are compared to the cost of a minimum spanning tree. For random images, the costs of our solutions are, on average, 91% of the costs of the corresponding minimum spanning tree solutions.
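The minimum spanning tree serves as the cost baseline for the Steiner-tree heuristics. A minimal sketch of such a baseline under rectilinear (Manhattan) distance, using Prim's algorithm over a set of terminal points (the function name and the Prim's-algorithm choice are illustrative, not taken from the thesis):

```python
def rectilinear_mst_cost(points):
    """Total edge cost of a minimum spanning tree over `points`
    under rectilinear (Manhattan) distance, via Prim's algorithm.

    Runs in O(m^2) time for m points, which suffices for a baseline.
    """
    if not points:
        return 0
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    in_tree = {points[0]}
    rest = set(points[1:])
    total = 0
    while rest:
        # Attach the outside point closest to the current tree.
        d, p = min((min(dist(p, q) for q in in_tree), p) for p in rest)
        total += d
        in_tree.add(p)
        rest.remove(p)
    return total
```

A rectilinear Steiner tree may introduce extra junction points and can therefore be cheaper than the MST over the terminals alone, which is why the heuristics' solutions average 91% of the MST cost.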

Degree

Ph.D.

Advisors

Hambrusch, Purdue University.

Subject Area

Computer science
