Perception of prosody in American Sign Language
Abstract
The main goal of this dissertation is to examine the ability of four different populations, i.e. American Sign Language (ASL) signers, Hong Kong Sign Language (HKSL) signers, Non-Signers, and second language (L2) signers of ASL, to use prosodic cues when parsing Intonational Phrases (IPs) in ASL. ASL signers, Non-Signers, and L2 signers showed similar accuracy when identifying prosodic boundaries. Although hearing participants (i.e. Non-Signers and L2 signers) do not have a phonology for gesture, they are able to use superficial visual cues to mark prosodic boundaries on the basis of their “gestural competence,” that is, their experience using co-speech gesture. Native HKSL signers performed best overall, parsing ASL clauses both more accurately and more quickly than even native ASL signers; I argue that the ASL signers’ performance may be affected by lexical or semantic interference. Long reaction times and low accuracy levels revealed the complexity of the mental process involved in this task. When the data were analyzed within each group, Non-Signers and L2 signers performed better when stimuli contained more cues; unlike the hearing participants, the native signer groups showed no change in accuracy as a function of the number of cues. The data show that three to four cues are sufficient for HKSL and ASL signers to parse accurately, whereas Non-Signers and L2 signers needed more. HKSL signers, like Non-Signers and L2 signers, parse ASL clauses on the basis of a broad range of cues, whereas ASL signers rely on just a few; that is, ASL signers are sensitive to fewer cues.
Degree
Ph.D.
Advisors
Diane Brentari, Purdue University
Subject Area
Linguistics