Reconstruction of High-Speed Event-Based Video Using Plug and Play

Trevor D Moore, Purdue University

Abstract

Event-based cameras, also known as neuromorphic cameras or dynamic vision sensors, are an imaging modality that attempts to mimic the human eye by asynchronously measuring contrast changes over time. If the contrast at a pixel changes sufficiently, a 1-bit event is output indicating whether the contrast increased or decreased. The resulting event stream is sparse, and its asynchronous nature gives the pixels high dynamic range and high temporal resolution. However, these events do not encode the intensity of the scene, so estimating intensity images from the event stream is an inverse problem. Hybrid event-based cameras, such as the DAVIS camera, provide a reference intensity image that can be leveraged when estimating the intensity at each pixel during an event. Inverse problems are typically solved by formulating a forward model and a prior model and minimizing the associated cost; here, we instead use the Plug and Play (P&P) algorithm. P&P replaces the prior-model subproblem with a denoiser, making the algorithm modular and easier to implement. To simplify the problem, we propose an idealized forward model that assumes the contrast steps measured by the DAVIS camera are uniform in size. We show that the algorithm can quickly reconstruct the scene intensity at a user-specified frame rate, with runtime depending on the chosen denoiser's computational complexity and the selected frame rate.
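To make the approach concrete, the sketch below is a minimal Plug and Play ADMM loop in Python, assuming the idealized uniform-contrast-step forward model and a simple quadratic data-fidelity term, with a Gaussian blur standing in for the plug-in denoiser. The threshold `c`, the helper `integrate_events`, and all parameter values are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def integrate_events(log_ref, events, c=0.1):
    """Idealized forward model: every event changes log intensity by +/- c.

    `events` holds (row, col, polarity) triples accumulated over one output
    frame interval; `log_ref` is the log of the DAVIS reference image.
    (Illustrative assumption, not the thesis's exact model.)
    """
    log_img = log_ref.copy()
    for r, col, pol in events:
        log_img[int(r), int(col)] += c * pol
    return log_img

def pnp_admm(y, x0, denoise, rho=1.0, n_iters=30):
    """Plug and Play ADMM with a quadratic data term 0.5*||x - y||^2.

    The prior-model subproblem is replaced by a call to `denoise`, which is
    what makes the algorithm modular: any denoiser can be plugged in.
    """
    x, v, u = x0.copy(), x0.copy(), np.zeros_like(x0)
    for _ in range(n_iters):
        # Forward-model subproblem: closed form for this quadratic data term.
        x = (y + rho * (v - u)) / (1.0 + rho)
        # Prior-model subproblem: denoise the current estimate plus the dual.
        v = denoise(x + u)
        # Scaled dual (Lagrange multiplier) update.
        u = u + x - v
    return x

# Toy usage: reconstruct one frame from a synthetic reference image and a
# burst of random events, then map back to linear intensity.
rng = np.random.default_rng(0)
ref = np.full((64, 64), 0.5)
events = np.column_stack([rng.integers(0, 64, 200),
                          rng.integers(0, 64, 200),
                          rng.choice([-1, 1], 200)])
y = integrate_events(np.log(ref), events, c=0.1)
x_hat = pnp_admm(y, x0=y, denoise=lambda z: gaussian_filter(z, sigma=1.0))
frame = np.exp(x_hat)
```

In this sketch the frame rate is set simply by how many events are accumulated into each call to `integrate_events`; a heavier denoiser (e.g., a learned network) could be swapped in for the Gaussian filter at the cost of more computation per frame.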

Degree

M.Eng.

Advisors

Bouman, Purdue University.

Subject Area

Artificial intelligence|Mathematics
