Touch Event Detection and Texture Analysis for Video Compression

Qingshuang Chen, Purdue University

Abstract

Touch event detection investigates the interaction between two people in video recordings. We are interested in a particular type of interaction that occurs between a caregiver and an infant, as touch is a key social and emotional signal used by caregivers when interacting with their children. We propose an automatic touch event detection and recognition method that determines when the caregiver is likely touching the infant and classifies each event into one of six touch types based on which part of the infant's body has been touched. We leverage deep learning based human pose estimation and person segmentation to analyze the spatial relationship between the caregiver's hands and the infant. We demonstrate promising performance on touch event detection and classification, showing great potential for reducing the human effort required to generate ground truth annotations.

Recently, artificial intelligence powered techniques have shown great potential to increase the efficiency of video compression. In this thesis, we describe a texture analysis pre-processing method that leverages deep learning based scene understanding to extract semantic areas and improve the subsequent video coder. Our proposed method generates a pixel-level texture mask by combining semantic segmentation with a simple post-processing strategy. The approach is integrated into a switchable texture-based video coding method. We demonstrate that, for many standard and user-generated test sequences, the proposed method achieves significant data rate reduction without noticeable visual artifacts.
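To illustrate the first idea, the following is a minimal sketch, not the dissertation's actual pipeline: it assumes an off-the-shelf pose estimator and person-segmentation model have already produced the caregiver's hand keypoints and a per-pixel label map of the infant's body parts. All names, labels, and thresholds below are hypothetical.

    # Hedged sketch: flag a touch event when a caregiver hand keypoint lies
    # on (or near) the infant's segmentation mask, and classify the event by
    # the body-part label at that location. Names and IDs are assumptions.
    import numpy as np

    # Hypothetical part map: 0 = background, 1..6 = six infant body regions.
    TOUCH_TYPES = {1: "head", 2: "torso", 3: "left arm",
                   4: "right arm", 5: "left leg", 6: "right leg"}

    def detect_touch(hand_keypoints, infant_part_map, radius=5):
        h, w = infant_part_map.shape
        events = []
        for (x, y) in hand_keypoints:
            x0, x1 = max(0, int(x) - radius), min(w, int(x) + radius + 1)
            y0, y1 = max(0, int(y) - radius), min(h, int(y) + radius + 1)
            labels = infant_part_map[y0:y1, x0:x1]
            labels = labels[labels > 0]
            if labels.size:                          # hand overlaps the infant
                part = np.bincount(labels).argmax()  # most frequent part label
                events.append(TOUCH_TYPES[int(part)])
        return events  # empty list means no touch detected in this frame

The second idea can be sketched in a similarly hedged way, assuming a semantic segmentation label map is available and that certain class IDs (hypothetical here) correspond to texture-like regions such as grass or water; a simple morphological post-processing step then cleans the pixel-level mask handed to the video coder.

    # Hedged sketch of a pixel-level texture mask: select assumed "texture"
    # classes, then apply simple morphological post-processing.
    import numpy as np
    from scipy import ndimage

    TEXTURE_CLASS_IDS = [9, 21, 26]   # hypothetical IDs, e.g. grass, water, sand

    def texture_mask(seg_labels, min_region=256):
        mask = np.isin(seg_labels, TEXTURE_CLASS_IDS)
        mask = ndimage.binary_opening(mask, iterations=2)   # drop speckle
        mask = ndimage.binary_closing(mask, iterations=2)   # fill small holes
        labeled, n = ndimage.label(mask)                     # drop tiny regions
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        for i, s in enumerate(sizes, start=1):
            if s < min_region:
                mask[labeled == i] = False
        return mask  # boolean per-pixel texture mask for the video coder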

Degree

Ph.D.

Advisors

Zhu, Purdue University.

Subject Area

Artificial intelligence|Logic

