SPRING 2014 EVENTS
Lecture: Evelina Fedorenko, MIT
“The Language Network and Its Place within the Broader Architecture of the Human Mind and Brain”
Wednesday, April 23, 2014, 4:30pm
A32 Lecture Hall, PNI
Lecture: Edward Gibson, MIT
Thursday, February 27, 2014, 4:30pm
“Language for Communication: Language as Rational Inference”
Room 16, Joseph Henry House
Perhaps the most obvious hypothesis for the evolutionary function of human language is for use in communication. Chomsky has famously argued that this is a flawed hypothesis, because of the existence of such phenomena as ambiguity. Furthermore, he argues that the kinds of things that people tend to say are not short and simple, as would be predicted by communication theory. Contrary to Chomsky, my group applies information theory and communication theory from Shannon (1948) in order to attempt to explain the typical usage of language in comprehension and production, together with the structure of languages themselves. First, we show that ambiguity out of context is not only not a problem for an information-theoretic approach to language, it is a feature. Second, we show that language comprehension appears to function as a noisy channel process, in line with communication theory. Given si, the intended sentence, and sp, the perceived sentence, we propose that people maximize P(si | sp), which is equivalent to maximizing the product of the prior P(si) and the probability of the noise process transforming si into sp, P(si → sp). We show that several predictions of this way of thinking about language are true: (1) the more noise needed to edit one alternative into another, the lower the likelihood that the alternative will be considered; (2) in the noise process, deletions are more likely than insertions; (3) increasing the noise increases the reliance on the prior (semantics); and (4) increasing the likelihood of implausible events decreases the reliance on the prior. Third, we show that this way of thinking about language leads to a simple re-thinking of the P600 from the ERP literature. The P600 wave was originally proposed to be due to people's sensitivity to syntactic violations, but there have been many instances of problematic data in the literature for this interpretation.
We show that the P600 can best be interpreted as sensitivity to an edit in the signal, in order to make it more easily interpretable. Finally, we discuss how thinking of language as communication can explain aspects of the origin of word order. Some recent evidence suggests that subject-object-verb (SOV) may be the default word order for human language. For example, SOV is the preferred word order in a task where participants gesture event meanings (Goldin-Meadow et al. 2008). Critically, SOV gesture production occurs not only for speakers of SOV languages, but also for speakers of SVO languages, such as English, Chinese, Spanish (Goldin-Meadow et al. 2008) and Italian (Langus & Nespor, 2010). The gesture-production task therefore plausibly reflects default word order independent of native language. However, this leaves open the question of why there are so many SVO languages (41.2% of languages; Dryer, 2005). We propose that the high percentage of SVO languages cross-linguistically is due to communication pressures over a noisy channel. We provide several gesture experiments consistent with this hypothesis, and we speculate how a noisy channel approach might explain several typical word order patterns that occur in the world's languages.
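The noisy-channel computation described in this abstract can be made concrete with a toy model. The sketch below is only an illustration of the general idea: the candidate sentences, prior probabilities, and edit costs are all hypothetical, and the edit model is deliberately crude (it compares only word counts), not the model used in the talk.

```python
# Toy noisy-channel comprehension model: rank candidate intended sentences s_i
# for a perceived sentence s_p by P(s_i | s_p) ∝ P(s_i) * P(s_i -> s_p).
# All sentences and numbers here are hypothetical illustrations.

# Hypothetical priors P(s_i) over candidate intended sentences.
prior = {
    "the ball was kicked by the boy": 0.6,  # semantically plausible reading
    "the ball kicked by the boy": 0.1,      # literal but implausible parse
}

DELETION_P = 0.1    # assumed per-word deletion probability
INSERTION_P = 0.02  # assumed per-word insertion probability (rarer, per prediction 2)

def noise_likelihood(intended, perceived):
    """Crude edit model P(intended -> perceived): only word counts are
    compared; each deleted word costs DELETION_P, each inserted word
    costs INSERTION_P, and equal lengths count as faithful transmission."""
    n_i, n_p = len(intended.split()), len(perceived.split())
    if n_p < n_i:
        return DELETION_P ** (n_i - n_p)   # deletions in the channel
    if n_p > n_i:
        return INSERTION_P ** (n_p - n_i)  # insertions in the channel
    return 1.0

def posterior(perceived):
    """Normalized P(s_i | s_p) over the candidate set."""
    scores = {s: p * noise_likelihood(s, perceived) for s, p in prior.items()}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

post = posterior("the ball kicked by the boy")
```

Because deletions are assumed to be more probable than insertions, the high-prior candidate that requires one deletion to reach the perceived string retains substantial posterior probability even though it does not match that string exactly, mirroring predictions (1) and (2) above.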
CANCELLED: Friday, February 21, 2014, 4:30pm
Lecture: Michael Wagner, McGill University
“Focus Sensitivity and What It Tells Us About Grammar”
209 Scheide Caldwell House
Performance: "The Language Archive"
Written by Julia Cho, Directed by Annika Bennett
February 20th-22nd and February 27th-March 1st
Lecture: Sali Tagliamonte, University of Toronto
Wednesday, February 19, 2014, 4:30pm
"The Sociolinguistic Puzzle of Language"
Room 16, Joseph Henry House
Language is inherently variable. People alternate between two or more ways of saying the same thing in every conversation and in all communities. This variation exists at all levels of grammar from lexical choices (e.g. couch vs. sofa) to pronunciation differences (e.g. talking vs. talkin’) to morphological alternations (e.g. go slow vs. go slowly) to discourse-pragmatic phenomena (e.g. I love pizza vs. I like love pizza.). Why do people do this?
In this presentation, I will encapsulate a subfield of Linguistics that studies this variation and analyses it statistically, comparatively and in reference to the social context in which it occurs (e.g. Tagliamonte, 2012). The explanation for this behavior necessarily lies in the linguistic system, but it is also highly influenced by external aspects of its use (Labov, 1970; Sankoff, 1980). In order to tap the system underlying this variation, analyses must be capable of modelling the simultaneous application of social and linguistic predictors and their interaction (Cedergren & Sankoff, 1974; Labov, 1994:3). This type of behavior in language may be stable, but it may also be changing, often rapidly (Labov, 2001). This means that historical, cultural and regional information may be required to interpret its use. Comparative techniques assist the analyst in evaluating similarities and differences across relevant categorizations of the data (e.g. age, sex, ethnicity, social network) (Tagliamonte, 2002). Taken together, the methodological procedures and statistical techniques of Variationist Sociolinguistics — as I will exemplify in this presentation from several of my research projects — provide useful and transformative insights into the grammatical system. Further, understanding the origins and social embedding of language in use offers creative and useful means for understanding and explaining the behavior of human populations.
Lecture: Anastasia Giannakidou, University of Chicago
Friday, February 7, 2014, 4:30pm
"Nonveridicality in Natural Language: Negation, Affirmation and the Logical Space in Between"
Room 16, Joseph Henry House
Whether a sentence presents the epistemic agent with one or more possibilities about the world, i.e. whether it reflects a homogeneous or non-homogeneous epistemic space, seems to matter for a number of phenomena in natural language, such as negative polarity items, free choice items, modality, and mood choice (subjunctive-indicative). The polar opposition between affirmation (p) and negation (not p) drives the licensing of polarity items, but many polarity items also appear in nonveridical contexts, e.g. in questions, with modal verbs, and with imperatives. These are also environments that license non-indicative mood. In my presentation, I will discuss representative cases of such phenomena in Modern Greek, aiming to show that nonveridicality, i.e. an epistemic domain where both p and not p are logical options, is an essential component of the logic of human language.
Lecture: Morten Christiansen, Cornell University
"Creating Language: Integrating Processing, Acquisition and Evolution"
Thursday, January 23, 2014, 4:30pm
Room 16, Joseph Henry House
The study of human language has frequently treated questions concerning the processing, acquisition and evolution of language as separate topics that can be addressed more or less independently. However, this tendency is misguided; there are strong constraints between these domains of inquiry, allowing each to shed light on the others. In this talk, I outline an integrated perspective on how language is ‘created’ across multiple timescales: the timescale of seconds in which particular utterances are spoken and understood; the timescale of years over which children acquire the language of their community; and the timescale of thousands of years over which languages themselves evolve. To illustrate this integrated perspective, I focus on three lines of my research that cut across levels of linguistic representation, from multiple-cue integration in word learning to the processing of multiword constructions to linear order constraints on long-distance dependencies. The results provide new insights into key issues in linguistics, including the arbitrariness of the sign, the building blocks of language acquisition, and the processing complexity of embedded syntactic structure. More generally, my research highlights the importance of an integrated view of language processing, acquisition and evolution for understanding the nature of language and how it works.
Lecture: Jason Merchant, University of Chicago
Thursday, January 16, 2014, 4:30pm
“How Abstract Does Our Syntax Have to Be? Evidence From Ellipsis”
Room 16, Joseph Henry House
Forty years of research on the identity condition that governs elliptical constructions has turned up two sets of conflicting data: ellipses that seem to require that their antecedent match them exactly in form and meaning, and ellipses that seem to allow for syntactic mismatches (while still requiring identity of meanings). In this talk, I review some of the evidence on both sides, and argue that a uniform theory of ellipsis is possible if we countenance certain kinds of abstractness in syntactic representations: all ellipsis requires identity of syntax and semantics between the missing material and its antecedent. I show that such a strict identity condition can make sense of recently discovered facts from Spanish-German code-switching in sluicing, as well as an asymmetry in the voice (active/passive) mismatches that are permitted in English VP-ellipsis vs. sluicing. It sheds light as well on recently collected spontaneous data from code-switching in Greek-English bilinguals, and is also consistent with a new experimental result that VP-ellipsis sites induce syntactic priming effects. These results argue that syntactic structures cannot be adequately modeled without a theory that posits "abstract" structures: both hierarchical structure and phonologically inert structure are required in any adequate model of the syntactic component of human language.
Lecture: T. Florian Jaeger, University of Rochester
Wednesday, January 8, 2014, 4:30pm
"Bias for Robust Information Transmission Shapes Language Production and Language: How Mathematical Theories of Communication Can Inform the Linguistic Sciences"
Room 16, Joseph Henry House
Research in my lab seeks to understand how language production and comprehension are shaped by the competing pressures inherent to communication, and how this in turn affects the development of language over generations. We approach these questions by drawing on mathematical theories of communication and inference to develop computational models that are evaluated against behavioral data (e.g., lab- and crowdsourcing-based experiments; spoken corpus studies; typological data).
A sometimes under-appreciated property of human communication is that the speech signal is both perturbed by noise and subject to systematic variability: the statistics of the speech signal are dependent on context (e.g., linguistic, social, visual). Critically, this includes context types of which even an adult speaker will continue to frequently encounter novel instances (e.g., novel speakers). During my 2012 visit to Princeton, I presented my lab's efforts to understand how comprehenders typically overcome this noise and variability through hierarchical inference and adaptation. Efficient prediction of the signal (language understanding) is made possible by adapting expectations (or, in Bayesian terms, beliefs) about not only low-level statistics (phonetic realizations of sound classes), but also higher-level statistics affecting lexical, semantic, and syntactic inferences during incremental language understanding. I presented evidence that brief exposure to a novel environment (e.g., a novel speaker) is sufficient to override the effects of life-long experience for that environment, suggesting that we maintain and adapt environment-specific beliefs about linguistic distributions (Fine and Jaeger, 2013; Fine et al., 2010, 2013; Kleinschmidt & Jaeger, 2011, 2012; Jaeger & Snider, 2013; Yildirim et al., 2013).
In this talk, I'll focus on production. This work investigates whether the systems underlying language production are organized so as to balance the demands inherent to production (e.g., sequential planning) and the goal of efficient information transfer (i.e., fast and robust inference of the intended message, including, but not limited to, propositional, pragmatic, and social information). As would be expected if speakers contribute to efficient information transfer (Jaeger, 2006, 2013; Levy & Jaeger, 2007), production preferences reflect a trade-off between prior inferrability and the quality of the speech signal: more predictable elements tend to be more likely to be reduced or omitted. As evidenced in both conversational speech corpora and production experiments, this tendency seems to hold at all levels of linguistic production (e.g., phonetics: Aylett & Turk, 2004; Buz & Jaeger, 2013; Bell et al., 2009; Pellegrino et al., 2011; morphology: Frank & Jaeger, 2008; Kurumada & Jaeger, 2013; syntax: Jaeger, 2010, 2011; Resnik 1996; Wasow et al., 2011).
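The predictability side of this trade-off is standardly quantified as surprisal, -log2 P(word | context). The toy sketch below illustrates the idea with an invented bigram table (the counts and word pairs are hypothetical, not data from this research); the empirical claim in the abstract is only that lower-surprisal, more predictable material is more often reduced or omitted.

```python
import math

# Surprisal -log2 P(word | previous word) from a toy bigram table.
# Counts are invented for illustration only.

bigram_counts = {
    ("think", "that"): 30,  # "think that ..." with overt complementizer
    ("think", "the"): 70,   # "think the ..." (more frequent continuation here)
}

def surprisal(prev, word):
    """-log2 of the conditional bigram probability P(word | prev)."""
    total = sum(c for (p, _), c in bigram_counts.items() if p == prev)
    return -math.log2(bigram_counts[(prev, word)] / total)

s_that = surprisal("think", "that")  # -log2(0.3) ≈ 1.74 bits
s_the = surprisal("think", "the")    # -log2(0.7) ≈ 0.51 bits
```

Under the efficiency account sketched in the abstract, the lower-surprisal (more predictable) continuation is the better candidate for phonetic reduction or omission.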
Interestingly, more recent work has confirmed that the same preferences are reflected in the linguistic code (i.e., the lexicon and grammar) of languages across the world (e.g., Graff and Jaeger, 2009; Maurits et al., 2010; Piantadosi et al., 2011, 2012). I close by asking precisely how these biases enter languages. In a series of artificial language learning experiments, we investigated one potential answer -- that biases enter language during acquisition (Fedzechkina et al., 2012, 2013). We found that the same biases observed in native production make learners of a new language reshape that language towards greater communicative efficiency. Critically, this happens even with regard to features that are *not* present in the learners' native language. This suggests that at least *some* properties of languages across the world are a consequence of the *goals* of language use: the transfer of information.