Entropy and analog computation

Sho Kuwamoto, Purdue University

Abstract

As computers become faster, smaller, and more efficient, it is natural to ask what physical limits, if any, restrict our search for better computing machinery. Entropy arguments place a lower bound on the energy that must be expended in a given digital computation (roughly kT ln 2 per bit of information destroyed), but any such computation may be carried out as part of a larger, dissipationless computation. In this thesis, we discuss the implications of these arguments for analog computation. We attempt to define and solve the problems that arise in extending our understanding of the physics of computation to the analog domain. To this end, we use thought experiments to clarify and probe some of the obstacles unique to the physics of analog systems. We also design a computational system intended to minimize the dissipative losses arising from these obstacles. This system, loosely based on the Fredkin gate, performs a neural computation in an invertible way. We simulate the system, test its computational power, and examine its reversibility properties. The model is purely mathematical; we do not specify how such a system is to be built.
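
The abstract rests on two standard facts that a short sketch can make concrete: the Landauer bound of kT ln 2 joules per erased bit, and the self-inverse (hence dissipationless, in principle) character of the Fredkin gate. The Python below is our illustration, not material from the thesis; the room temperature T = 300 K is an assumed value chosen only to put a number on the bound.

    # Illustrative sketch (not from the thesis): the Landauer bound and
    # the reversibility of the Fredkin (controlled-swap) gate.

    import math

    # Landauer bound at an assumed room temperature of T = 300 K.
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # assumed temperature, K
    landauer = k_B * T * math.log(2)
    print(f"Landauer bound at {T} K: {landauer:.3e} J per erased bit")

    def fredkin(c, a, b):
        """Fredkin gate: swap a and b iff the control bit c is 1."""
        return (c, b, a) if c else (c, a, b)

    # The gate is its own inverse, so applying it twice restores every
    # input triple: no bits are destroyed, hence no Landauer cost.
    for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
        assert fredkin(*fredkin(*bits)) == bits
        # It also conserves the number of 1s, a conservation law often
        # pictured as billiard balls in reversible-computing models.
        assert sum(fredkin(*bits)) == sum(bits)
    print("Fredkin gate verified: self-inverse and 1-conserving on all 8 inputs")

Because the Fredkin gate destroys no information, circuits built from it sit at the zero-erasure end of the Landauer bound, which is what makes it a natural starting point for the invertible neural computation described above.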

Degree

Ph.D.

Advisors

Tubis, Purdue University.

Subject Area

Electromagnetism, Computer science, Physics
