Video annotation by crowd workers with privacy-preserving local disclosure
Computer vision is still not reliable enough for detecting video content involving humans and their actions. Microtask crowdsourcing on task markets such as Amazon Mechanical Turk and Upwork can bring humans into the loop. However, engaging crowd workers to annotate non-public video footage risks revealing the identities of people in the video who may have a right to anonymity. This thesis demonstrates how we can engage untrusted crowd workers to detect behaviors and objects while robustly concealing the identities of all faces. We developed a web-based system that presents obfuscated videos to crowd workers and provides them with a mechanism to test their hypotheses about what behaviors and/or objects might be present in the videos. Our system, called Fovea, works by initially applying a heavy median blur to the videos. This guarantees privacy but impedes recognition of other content of interest. As part of this thesis, an algorithm was developed to calculate the radius of a safe-to-reveal region around a pixel. It was implemented in an interactive system that allows workers watching the blurred videos to selectively reveal small regions by clicking. We compared two approaches for local disclosure of information, foveated mode and keyhole mode, together with a non-interactive blur-only mode as a control. The results showed that both interactive modes led to recognition of actions superior to the control while keeping the odds of correct face recognition close to those of the control.
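The pipeline the abstract describes, blurring every frame and then revealing a small region around a clicked pixel, can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the function names, the kernel size, and the use of a circular disk as the reveal shape are assumptions, and the safe-to-reveal radius here is passed in as a parameter rather than computed by the thesis's algorithm.

```python
import numpy as np

def median_blur(img, k):
    """Heavy median blur: each pixel becomes the median of its k x k
    neighborhood. (Illustrative; a real system would use an optimized
    filter such as OpenCV's medianBlur.)"""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def reveal_disk(blurred, original, cy, cx, radius):
    """Keyhole-style local disclosure: copy original pixels back into the
    blurred frame inside a disk of the given (assumed safe) radius around
    the clicked point (cy, cx)."""
    out = blurred.copy()
    ys, xs = np.ogrid[:blurred.shape[0], :blurred.shape[1]]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    out[mask] = original[mask]
    return out
```

A heavy median blur suppresses small, high-frequency detail (such as facial features) while preserving coarse shape and motion, which is why it impedes face recognition more than action recognition; the click-to-reveal step then restores full detail only inside a region deemed safe.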
Quinn, Purdue University.
Computer Engineering|Computer science