SyntaxFest 2019
Invited talks

Ramon Ferrer i Cancho (Universitat Politècnica de Catalunya)
Monday 26th 9:40 Title: Dependency distance minimization: facts, theory and predictions

Abstract: Quantitative linguistics is a branch of linguistics concerned with the study of statistical facts about languages and with their explanation, aiming at a general theory of language. The quantitative study of syntax has become central to this branch of linguistics. The fact that the distance between syntactically related words is smaller than expected by chance in many languages has led to the formulation of a dependency distance minimization (DDm) principle.

From a theoretical standpoint, DDm is in conflict with another word order principle: surprisal minimization (Sm). In single-head structures, DDm predicts that the head should be placed at the center of the linear arrangement, while Sm predicts that it should be placed at one of the ends. In spite of the massive evidence for the action of DDm and the trendy claim that languages are optimized, attempts to quantify the degree of optimization of languages with respect to DDm have been rather scarce. Here we present a new optimality measure indicating that languages are optimized to about 70% on average. We confirm two old theoretical predictions: that the action of DDm is stronger in longer sentences, and that DDm is more likely to be beaten by Sm in short sequences (resulting in an anti-DDm effect), while shedding new light on the kinds of tree structures in which DDm is more likely to be overshadowed. Finally, we review various theoretical predictions of DDm, focusing on the scarcity of crossing dependencies. We challenge the belief that formal constraints on dependency trees (e.g., projectivity or relaxed versions of it) are real rather than epiphenomenal.
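A minimal, purely illustrative sketch (not part of the talk materials) of the quantities at stake: for a single-head ("star") structure, the total dependency distance is lower when the head sits at the center of the linear arrangement (the placement favoured by DDm) than when it sits at one of the ends. The example structure and all names below are invented for illustration.

```python
# Minimal sketch (invented example, not from the talk): total dependency
# distance of a single-head structure under two linearizations.

def total_dependency_distance(head_position, dependent_positions):
    """Sum of |position(head) - position(dependent)| over all dependents."""
    return sum(abs(head_position - d) for d in dependent_positions)

# One head with four dependents, word positions 1..5.
center = total_dependency_distance(3, [1, 2, 4, 5])  # head central: 2+1+1+2 = 6
end = total_dependency_distance(1, [2, 3, 4, 5])      # head at an end: 1+2+3+4 = 10

print(f"head at center: {center}, head at one end: {end}")
```

For this toy structure, the gap between the two placements grows as dependents are added, consistent with the prediction that the pressure from DDm is stronger in longer sentences.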

The talk is a summary of joint work with Carlos Gomez-Rodriguez, Juan Luis Esteban, Morten Christiansen, Lluis Alemany-Puig and Xinying Chen.

Short bio: Ramon Ferrer-i-Cancho is an associate professor at the Universitat Politècnica de Catalunya and the head of the Complexity and Quantitative Linguistics Lab. He is a language researcher in a broad sense. His research covers different levels of the organization of life: from human language to animal behavior and further down to the molecular level. One of his main research objectives is the development of a parsimonious but general theory of language and communication, integrating insights from probability theory, information theory and the theory of spatial networks. In the context of syntax, he pioneered the study of dependency lengths from a statistical standpoint, putting forward the first baselines and the principle of dependency distance minimization. He also introduced the hypothesis that projectivity, the scarcity of crossing dependencies and consistent branching are epiphenomena of that principle.
Emmanuel Dupoux (ENS/CNRS/EHESS/INRIA/PSL Research University, Paris)
Tuesday 27th 9:00 Title: Inductive biases and language emergence in communicative agents

Abstract: Despite spectacular progress in language modeling tasks, neural networks still fall short of the performance of human infants when it comes to learning a language from scarce and noisy data. Such performance presumably stems from human-specific inductive biases in the neural networks sustaining language acquisition in the child. Here, we use two paradigms to study such inductive biases experimentally in artificial neural networks. The first relies on iterative learning, where a sequence of agents learn from each other, simulating historical linguistic transmission. We find evidence that sequence-to-sequence neural models have some of the human inductive biases (like the preference for local dependencies), but lack others (like the preference for non-redundant markers of argument structure). The second paradigm relies on language emergence, where two agents engage in a communicative game. Here we find that sequence-to-sequence networks lack the preference for efficient communication found in humans, and in fact display an anti-Zipfian law of abbreviation. We conclude that the study of the inductive biases of neural networks is an important topic for improving the data efficiency of current systems.
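For readers unfamiliar with the law of abbreviation mentioned above (the tendency for more frequent words to be shorter), here is a minimal sketch, with an invented toy corpus, of how that tendency can be measured as a correlation between word frequency and word length; a positive rather than negative correlation would be "anti-Zipfian". This is an illustration only, not the methodology used in the talk.

```python
# Minimal sketch (toy data, not from the talk): Zipf's law of abbreviation
# predicts a negative correlation between word frequency and word length.

from collections import Counter

corpus = "the cat sat on the mat and the dog lay on the mat".split()  # invented
freq = Counter(corpus)

frequencies = [freq[w] for w in freq]
lengths = [len(w) for w in freq]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Negative: Zipfian (frequent words are shorter); positive: anti-Zipfian.
print(pearson(frequencies, lengths))
```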

Short bio: E. Dupoux directs the Cognitive Machine Learning team at the Ecole Normale Supérieure (ENS) in Paris and INRIA (www.syntheticlearner.com). His education includes a PhD in Cognitive Science (EHESS), an MA in Computer Science (Orsay University) and a BA in Applied Mathematics (Pierre & Marie Curie University, ENS). His research mixes developmental science, cognitive neuroscience, and machine learning, with a focus on the reverse engineering of infant language and cognitive development using unsupervised or weakly supervised learning. He is the recipient of an Advanced ERC grant, the organizer of the Zero Resource Speech Challenge (2015, 2017, 2019) and the Intuitive Physics Benchmark (2019), and in 2017 led a Jelinek Summer Workshop at CMU on multimodal speech learning. He has authored 150 articles in peer-reviewed outlets in both cognitive science and language technology.
Barbara Plank (IT University of Copenhagen)

Wednesday 28th 9:00 slides
Title: Transferring NLP models across languages and domains

Abstract: How can we build Natural Language Processing models for new domains and new languages? In this talk I will survey some recent advances for addressing this ubiquitous challenge, from cross-lingual transfer to learning models under distant supervision from disparate sources, multi-task learning and data selection.

Short bio: Barbara Plank is Associate Professor in Natural Language Processing at IT University of Copenhagen. She has previously held positions as assistant professor at the University of Groningen and the University of Copenhagen, and a postdoc position at the University of Trento. Her research interests within NLP are broad and include learning under sample selection bias (domain adaptation, transfer learning), learning from signals beyond the text and from multimodal inputs, and, in general, learning under limited supervision for cross-domain and cross-lingual NLP, applied to applications ranging from author profiling and syntactic language understanding to information extraction and visual question answering.
Barbara is a member of the advisory board of the EACL (European Association for Computational Linguistics) and publicity director of the Association for Computational Linguistics.
Paola Merlo (University of Geneva)
Thursday 29th 9:00 slides
Title: Quantitative Computational Syntax: dependencies, intervention effects and word embeddings

Abstract: In the computational study of intelligent behaviour, the domain of language is distinguished by the complexity of the representations and the vast amounts of quantitative text-driven data. In this talk, I will let these two aspects of the study of language inform each other and will discuss current work investigating whether the notion of similarity in the intervention theory of locality is related to current notions of similarity in word embedding space.

Despite their practical success and impressive performance, neural-network-based and distributed semantics techniques have often been criticized because they remain fundamentally opaque and difficult to interpret. Several recent pieces of work have investigated the linguistic abilities of these representations and shown that they can capture long-distance agreement, and thus hierarchical notions. In this vein, we study another core, defining and more challenging property of language: the ability to construe long-distance dependencies. We present results showing that word embeddings and the similarity spaces they define do not correlate with experimental results on intervention similarity in long-distance dependencies. These results show that the linguistic encoding in distributed representations does not appear to be human-like, and they also bear on the debate between narrow and broad definitions of similarity in syntax and sentence processing.
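As a point of reference for what "similarity in word embedding space" standardly means (my illustration, not material from the talk): it is usually operationalized as the cosine of the angle between word vectors, as in the sketch below with made-up three-dimensional embeddings.

```python
# Minimal sketch (invented vectors, not from the talk): cosine similarity,
# the usual notion of similarity in word embedding space.

import math

embeddings = {                      # toy 3-dimensional "embeddings"
    "student": [0.8, 0.1, 0.3],
    "pupil":   [0.7, 0.2, 0.4],
    "table":   [0.1, 0.9, 0.2],
}

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(embeddings["student"], embeddings["pupil"]))  # high
print(cosine_similarity(embeddings["student"], embeddings["table"]))  # lower
```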

Short bio: Paola Merlo is associate professor in the Linguistics department of the University of Geneva. She is the head of the interdisciplinary research group Computational Learning and Computational Linguistics (CLCL), which is concerned with combining linguistic modelling with machine learning techniques. Prof. Merlo has been editor of Computational Linguistics, published by MIT Press, and a member of the executive committee of the ACL. Prof. Merlo holds a doctorate in Computational Linguistics from the University of Maryland, and has been associate research fellow at the University of Pennsylvania, and visiting scholar at Rutgers, Edinburgh, Stanford and Uppsala.
Adam Przepiórkowski (University of Warsaw / Polish Academy of Sciences / University of Oxford)
Friday 30th 9:00 slides
Title: Arguments and adjuncts

Abstract: Linguists agree that the phrase “two hours” is an argument in “John only lost two hours” but an adjunct in “John only slept two hours”, and similarly for “well” in “John behaved well” (an argument) and “John played well” (an adjunct). While the argument/adjunct distinction is hard-wired in major linguistic theories, Universal Dependencies eschews this dichotomy and replaces it with the core/non-core distinction. The aim of this talk is to add support to the UD approach by critically examining the argument/adjunct distinction. I will suggest that not much progress has been made during the last 60 years, since Tesnière used three pairwise-incompatible criteria to distinguish arguments from adjuncts. This justifies doubts about the linguistic reality of this purported dichotomy. But – given that this distinction is built into the internal machinery and/or resulting representations of perhaps all popular linguistic theories – what would a linguistic theory not making such an argument–adjunct distinction look like? I will briefly sketch the main components of such an approach, based on ideas from diverse corners of linguistic and lexicographic theory and practice.

Short bio: Adam Przepiórkowski is a full professor at the University of Warsaw (Institute of Philosophy) and at the Polish Academy of Sciences (Institute of Computer Science). As a computational and corpus linguist, he has led NLP projects resulting in the development of various tools and resources for Polish, including the National Corpus of Polish and tools for its manual and automatic annotation, and has worked on topics ranging from deep and shallow syntactic parsing to corpus search engines and valency dictionaries. As a theoretical linguist, he has worked on the syntax and morphosyntax of Polish (within Head-driven Phrase Structure Grammar and within Lexical-Functional Grammar), on dependency representations of various syntactic phenomena (within Universal Dependencies), and on the semantics of negation, coordination and adverbial modification (at different periods, within Glue Semantics, Situation Semantics and Truthmaker Semantics). He is currently a visiting scholar at the University of Oxford.