Privacy-Preserving Analysis with Applications to Textual Data
Textual data plays an important role in decision making and scientific research, but it cannot be shared freely when it contains personally identifiable information. In this dissertation, we consider the problem of privacy-preserving text analysis under the strong privacy definition of differential privacy. We first show how to build a two-party differentially private secure protocol for computing the similarity of text in the presence of malicious adversaries. We then relax the utility requirement of computational differential privacy to reduce computational cost, while still providing security against rational adversaries. Next, we consider the problem of building a data-oblivious algorithm for minimum weighted matching in bipartite graphs, which has applications to computing the secure semantic similarity of documents. We also propose a secure protocol for detecting articulation points in graphs. We then relax the strong data-obliviousness definition to ϵ-data-obliviousness, based on the notion of indistinguishability, which enables efficient protocols for data-dependent algorithms such as frequent itemset mining. Finally, we consider the problem of privacy-preserving classification of text. A central obstacle in developing private protocols for unstructured data is high dimensionality. This dissertation tackles high dimensionality by means of differentially private feature selection. We show that some well-known feature selection techniques perform poorly due to high sensitivity, and we propose techniques that perform well in a differentially private setting. The feature selection techniques are empirically evaluated using differentially private classifiers: naïve Bayes, support vector machines, and decision trees.
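To illustrate why sensitivity matters for differentially private feature selection, the sketch below scores features by noisy document counts. This is a generic illustration, not the dissertation's own method: adding or removing one document changes each feature's count by at most 1 (sensitivity 1), so Laplace noise with scale 1/ε suffices per released score. All names here (`dp_feature_scores`, `select_top_k`) are hypothetical.

```python
import math
import random

def dp_feature_scores(counts, epsilon):
    """Add Laplace(1/epsilon) noise to per-feature document counts.

    Counting queries have sensitivity 1 (one document changes each
    feature's count by at most 1), so Laplace noise with scale
    1/epsilon protects each released score.
    """
    scale = 1.0 / epsilon
    noisy = {}
    for feature, count in counts.items():
        # Sample Laplace noise via the inverse CDF of a uniform draw.
        u = random.random() - 0.5
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy[feature] = count + noise
    return noisy

def select_top_k(counts, epsilon, k):
    """Keep the k features with the largest noisy counts."""
    noisy = dp_feature_scores(counts, epsilon)
    return sorted(noisy, key=noisy.get, reverse=True)[:k]
```

A score function with higher sensitivity (e.g., one driven by a single rare term) would need proportionally more noise, which is the intuition behind the abstract's claim that some classical feature selectors degrade badly under differential privacy.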
Clifton, Purdue University.