Signal Processing for Caching Networks and Non-volatile Memories

Tianqiong Luo, Purdue University

Abstract

The recent information explosion has created a pressing need for faster and more reliable data storage and transmission schemes. This thesis focuses on two systems: caching networks and non-volatile storage systems. It proposes network protocols to improve the efficiency of information delivery, as well as signal processing schemes to reduce errors at the physical layer.

The thesis first investigates caching and delivery strategies for content delivery networks. Caching is a well-established technique for reducing the network burden by prefetching some content during off-peak hours. The coded caching scheme proposed by Maddah-Ali and Niesen, which forms the foundation of our algorithms, reduces peak traffic rates by encoding transmissions so that different users can extract different information from the same packet. Content delivery networks store information distributed across multiple servers to balance the load and avoid unrecoverable losses in case of node or disk failures. On the one hand, distributed storage limits the ability to combine content from different servers into a single message, causing performance losses in coded caching schemes; on the other hand, the inherent redundancy in distributed storage systems can be exploited to improve the performance of those schemes through parallelism. This thesis proposes a scheme that combines distributed storage of the content across multiple servers with an efficient coded caching algorithm for delivery to the users. This scheme is shown to reduce the peak transmission rate below that of state-of-the-art algorithms.

We then study the trade-off between network traffic load and disk I/O in caching networks. Coded caching can reduce the traffic load by broadcasting coded messages that benefit multiple users but, when requests are redundant, it requires reading some data segments multiple times to compose different coded messages.
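The coded broadcast idea can be illustrated with a toy sketch of the Maddah-Ali and Niesen scheme; the parameters below (two users, two files, cache size of one file) are hypothetical and chosen only to show how a single XOR-coded transmission serves two different requests at once.

```python
# Toy Maddah-Ali–Niesen coded caching sketch (illustrative parameters:
# K=2 users, N=2 files, each user caches M=1 file's worth of segments).
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Two files, each split into K=2 equal segments.
A = [b"A1", b"A2"]
B = [b"B1", b"B2"]

# Placement (off-peak): user i caches segment i of every file.
cache = {1: {"A": A[0], "B": B[0]},
         2: {"A": A[1], "B": B[1]}}

# Delivery: user 1 requests file A, user 2 requests file B.
# A single coded broadcast serves both users simultaneously.
broadcast = xor(A[1], B[0])  # user 1 is missing A2, user 2 is missing B1

# Each user cancels the interfering segment using its own cache.
a2 = xor(broadcast, cache[1]["B"])  # user 1 recovers A2
b1 = xor(broadcast, cache[2]["A"])  # user 2 recovers B1

assert a2 == A[1] and b1 == B[0]
```

One broadcast replaces two separate unicast transmissions, which is the source of the peak-rate reduction; note, however, that composing such coded messages can require reading the same segment from disk multiple times, which motivates the disk I/O trade-off studied in the thesis.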
Hence, coded caching requires more disk I/Os than uncoded transmission. This thesis proposes caching and delivery algorithms that combine coded and uncoded transmission to strike a balance between traffic load and disk I/Os. Our algorithms improve both the average-case and worst-case performance over the user requests.

Finally, we broaden our perspective to the storage hardware. Two methods suitable for NAND flash technology are proposed: multi-page read and spreading modulation. The first reads multiple wordlines simultaneously and returns a combination of their stored information. This multi-page read method is shown to be useful for equalizing inter-cell interference, reducing the damage caused by erase operations, and speeding up the decoding of some codes, such as WOM codes. We then propose a new data representation scheme that increases endurance and significantly reduces the probability of error caused by inter-cell interference. This scheme uses an orthogonal code to spread each bit across multiple cells, resulting in a lower variance for the programmed voltages. We also study ReRAM, an up-and-coming memory technology with a different set of challenges. Specifically, we build a simple analytic model of the voltage drop and sneak currents in MLC-ReRAM arrays as a form of inter-cell interference, and we propose two techniques to minimize the resulting bit error rate (BER): distribution shaping and spreading modulation, the latter extended from the NAND flash scheme.
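The spreading idea can be sketched with a small orthogonal (Hadamard) code; the spreading length and bit pattern below are hypothetical illustrations, not the thesis's actual modulation parameters, but they show how each bit is spread across several cells and recovered by correlation.

```python
# Illustrative spreading-modulation sketch: each data bit modulates one
# row of a Hadamard matrix, and the sums are stored across L=4 cells.
# Orthogonality of the rows lets each bit be recovered by correlation.
import numpy as np

H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])  # 4x4 Hadamard matrix; rows are orthogonal

bits = np.array([1, 0, 1, 1])    # data bits to store (hypothetical)
symbols = 2 * bits - 1           # map {0, 1} -> {-1, +1}

# Spread: each cell stores a superposition of all four spread bits,
# so individual cell levels average out across the codeword.
cells = symbols @ H              # values written across the 4 cells

# Read back: correlate with each Hadamard row, then threshold.
recovered = (cells @ H.T // H.shape[1] + 1) // 2
assert np.array_equal(recovered, bits)
```

Because every cell holds a sum over several spread bits rather than a single bit's full amplitude, the programmed levels concentrate around their mean, which is the intuition behind the lower voltage variance and reduced inter-cell interference claimed for the scheme.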

Degree

Ph.D.

Advisors

Peleato-Inarrea, Purdue University.

Subject Area

Electrical engineering
