Data consistency management in wireless client-server information systems

Jin Jing, Purdue University

Abstract

The emerging mobile computing environment no longer requires a user to maintain a fixed position in the network and thus allows for almost unrestricted user mobility. In the near future, users carrying portable devices will have access to information systems independent of their physical locations. This thesis proposes and investigates new techniques to provide high performance and scalability for these information systems while maintaining data consistency semantics in wireless and mobile computing environments. The common theme of the techniques developed is the utilization of mobile and fixed host resources through data replication (or caching) and partitioning. The initial chapters motivate and describe an indirect interaction architecture for wireless client-server information systems and present the arguments for using data replication, partitioning, and caching as the basis for constructing such systems. The rest of the thesis then focuses on the development and performance analysis of algorithms for replicated and partitioned data management on fixed data servers and cached data management on mobile clients. A new algorithm that uses a "deferred log update" technique is developed for replicated data management. A performance analysis shows that the algorithm can provide improved performance over traditional replicated data management algorithms in mobile environments. The "deferred log update" technique is further applied in the development of a partitioned data management algorithm. The algorithm is compared with other conventional protocols under different workload conditions, and the reliability issues in applying the technique are examined. For cached data management, a broadcast-based cache invalidation algorithm is presented. The algorithm uses "update aggregation" and "bit-sequence naming" techniques to reduce the broadcast message size, trading the precision of invalidation for the speed of invalidation. Two extensions of the algorithm are designed for large databases. A simulation study of the proposed algorithm and its extensions is then presented. The study shows that the proposed algorithm performs consistently well under variable update rates/patterns and client disconnection times, and that the two extensions scale well to large databases for the "information feed" application domain with skewed access patterns.
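To give a sense of the broadcast-based invalidation idea summarized above, the Python sketch below shows a server that aggregates updates into periodic invalidation reports and clients that drop cached items named in a report. This is a minimal illustration of the general approach, not the dissertation's bit-sequence protocol: the class names, the aggregation window, and the report structure are assumptions made for the example, and the whole-cache drop after a long disconnection stands in for the finer-grained recovery that the bit-sequence technique provides.

```python
import time

AGGREGATION_WINDOW = 10.0  # seconds of update history covered by each report (assumed value)


class Server:
    """Aggregates item updates and periodically broadcasts an invalidation report."""

    def __init__(self):
        self.update_log = []  # list of (timestamp, item_id)

    def record_update(self, item_id):
        self.update_log.append((time.time(), item_id))

    def broadcast_report(self):
        now = time.time()
        # Update aggregation: only items updated within the window are named in the report,
        # which bounds the broadcast message size.
        recent = [(ts, item) for ts, item in self.update_log if now - ts <= AGGREGATION_WINDOW]
        self.update_log = recent
        return {"timestamp": now, "updated": {item for _, item in recent}}


class Client:
    """Caches items and invalidates them based on broadcast reports."""

    def __init__(self):
        self.cache = {}          # item_id -> value
        self.last_report_ts = None

    def on_report(self, report):
        now = report["timestamp"]
        if self.last_report_ts is None or now - self.last_report_ts > AGGREGATION_WINDOW:
            # Disconnected longer than the report covers: the report cannot prove
            # cached items are still valid, so the whole cache is dropped
            # (precision of invalidation traded for bounded report size).
            self.cache.clear()
        else:
            for item in report["updated"]:
                self.cache.pop(item, None)
        self.last_report_ts = now
```

In this simplified scheme, a client that misses more than one window of reports must discard its entire cache; the bit-sequence naming technique described in the thesis is aimed precisely at letting such clients salvage most of their cache after long disconnections.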

Degree

Ph.D.

Advisors

Ahmed K. Elmagarmid, Purdue University.

Subject Area

Computer science|Electrical engineering
