SST 2010 Plenary Speakers

Speakers

Professor Bob Ladd, Department of Linguistics and English Language, University of Edinburgh

Professor Hugh McDermott, The Bionic Ear Institute and The University of Melbourne

Professor Michael Robb, Department of Communication Disorders, University of Canterbury

Abstracts

Professor Bob Ladd, Department of Linguistics and English Language, University of Edinburgh

Segmental analogies for intonational gradience

It has long been recognised that certain aspects of intonation involve "gradience" (e.g. increasing pitch range for emphasis), but applying this notion in practice has always been a source of disagreement. There are still many specific cases, such as the difference between H* and L+H* in ToBI transcriptions of English, that are analysed in some descriptions as involving two categories and in others as involving a single category that is gradiently variable. Experimental phonetic evidence is usually compatible with either interpretation: proponents of a categorical distinction can argue that an apparent phonetic continuum simply reflects the overlapping phonetic realisation of phonologically distinct categories represented by the continuum's extremes. I propose that we can investigate this question indirectly based on segmental analogues.

Although most segmental distinctions are categorical and clearly distinct, cases do exist in which no clear phonetic distinction is made and perceptual discrimination is difficult. Examples in English include junctural distinctions (e.g. Norman Elson vs Norma Nelson), morphologically distinct homophones (e.g. band vs banned), and distinctions involving underlying and epenthetic stops (e.g. prince vs prints or lens vs lends). Intuitively, the first case involves clearly distinct forms whose phonetic realisations nevertheless overlap; the second involves a single phonological form that may nevertheless be realised in gradiently different ways (e.g. with differences of duration); and in the third case the intuitions are less certain. If these intuitive distinctions could be put on a firmer basis (e.g. if we could show that the statistical distribution of phonetic variability differs across the different types of case), we might be able to identify an empirical criterion for distinguishing "gradience" from mere variability that could also be applied to intonation.
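One way to picture the proposed criterion: a two-category contrast whose realisations overlap should surface as a bimodal distribution of some phonetic measure, while gradient variability within a single category should surface as one broad unimodal distribution. The following minimal sketch illustrates this with invented closure-duration values and Sarle's bimodality coefficient; the simulated parameters and the 0.555 benchmark are illustrative assumptions, not part of the abstract.

```python
import random

random.seed(1)

# Invented closure durations (ms). A categorical contrast with overlapping
# realisations is modelled as a mixture of two Gaussians; gradient
# variability as a single broad Gaussian with the same overall mean.
categorical = ([random.gauss(55, 7) for _ in range(1000)] +
               [random.gauss(85, 7) for _ in range(1000)])
gradient = [random.gauss(70, 16) for _ in range(2000)]

def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient; values above ~0.555 (the value
    for a uniform distribution) are conventionally taken to suggest
    bimodality, i.e. two underlying categories."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    z = [(x - mean) / sd for x in xs]
    skew = sum(t ** 3 for t in z) / n
    ex_kurt = sum(t ** 4 for t in z) / n - 3  # excess kurtosis
    return (skew ** 2 + 1) / (ex_kurt + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

print("categorical:", bimodality_coefficient(categorical))  # above 0.555
print("gradient:   ", bimodality_coefficient(gradient))     # near 0.33
```

Real data would of course be far messier than this simulation, but the same statistic (or a formal test such as Hartigan's dip test) could in principle be applied to measured realisations of the contrasts above.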

At the same time, exploring such segmental cases reminds us that there are problems with the idealised categories of standard phonological descriptions as well.  Examples include incomplete neutralisation (lens/lends or prince/prints) and "quasi-phonemes" (e.g. Scottish English side vs. sighed), which violate long-standing assumptions about contrast and complementary distribution.  Seeking clearer answers about the phonology of intonation may therefore lead to firmer theoretical foundations for phonology in general.

Professor Hugh McDermott, The Bionic Ear Institute and The University of Melbourne

The perception, processing, and production of high-frequency speech sounds

The perception of high-frequency sound signals is crucial not only for good understanding of speech but also for its accurate production. Unfortunately, however, hearing loss affecting mainly the high frequencies is a common condition. When such impairment is severe, amplification with conventional hearing aids may not be sufficient to restore satisfactory perception. Recently, several innovative techniques have been developed specifically to address communication problems associated with high-frequency hearing impairment. These include frequency-lowering schemes for acoustic hearing aids which shift high-frequency sound signals to lower frequencies where they can be made more audible and easier to recognise. In cases where such sound-processing techniques are inadequate, cochlear implants with special electrode arrays are now available. These devices provide information about high-frequency sounds by electrical stimulation while preserving, as far as possible, each recipient’s low-frequency acoustic hearing. In many instances, the combined use of acoustic and electric stimulation provides better outcomes than the use of either stimulation mode alone. The technical function, clinical challenges, and potential benefits of these new solutions for high-frequency hearing loss will be discussed.
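The frequency-lowering idea mentioned above can be illustrated with a toy frequency map. This sketch assumes a simple linear-compression scheme in which frequencies above a cutoff are compressed toward it; the cutoff and compression ratio here are invented for illustration, and real hearing-aid schemes involve considerably more sophisticated signal processing.

```python
def lower_frequency(f_hz, cutoff_hz=1600.0, ratio=2.0):
    """Map an input frequency to an output frequency: components below
    the cutoff are left unchanged, while components above it are
    compressed toward the cutoff by the given ratio (hypothetical
    parameter values)."""
    if f_hz <= cutoff_hz:
        return f_hz
    return cutoff_hz + (f_hz - cutoff_hz) / ratio

# A fricative like /s/, with energy around 6 kHz, is remapped to 3.8 kHz,
# where a listener with sloping high-frequency loss may have usable hearing.
print(lower_frequency(6000.0))
# Low-frequency vowel energy passes through unchanged.
print(lower_frequency(500.0))
```

The design trade-off is that lowering makes high-frequency cues audible at the cost of distorting their spectral relationships, which is one reason cochlear implants become the preferred option when such processing no longer helps.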

Professor Michael Robb, Department of Communication Disorders, University of Canterbury

Speech Science Applications in Speech-Language Pathology

Speech science entails the study of the production, transmission and perception of speech. Speech science research has led to a wealth of information describing the patterns and complexities of speech exhibited by individuals of various ages and language backgrounds. The science and technology developed to study normal speaker groups have played an important role in evaluating individuals with suspected or known speech disabilities. In particular, members of the profession of speech-language pathology are actively involved in speech science research and practice. This plenary address will illustrate how the various branches of speech science, including acoustic and physiological phonetics, as well as perceptual and transcriptional phonetics, can be applied to evaluate speech disorders. Examples will be drawn from the presenter’s career as a researcher in the field of speech-language pathology.

Contact ASSTA: email The ASSTA Secretary, or write to

G.P.O. Box 143, Canberra City, ACT 2601.

Copyright © ASSTA