Keynote speakers

Tim Baldwin
The University of Melbourne, Australia
"Reevaluating Summarisation Evaluation "

Abstract: Document summarisation is a well-established task in NLP, with both a rich literature on evaluation and a strong de facto culture of evaluating using variants of the ROUGE metric. In this talk, I will first briefly review past work on document summarisation and summarisation evaluation, and then discuss recent work on different dimensions of summary evaluation, including: faithfulness (degree of factual consistency with the source), focus (precision of summary content relative to the reference), coverage (recall of summary content relative to the reference), and summary coherence.

Bio: Tim Baldwin is a Melbourne Laureate Professor in the School of Computing and Information Systems, The University of Melbourne, and also Director of the ARC Centre for Cognitive Computing in Medical Technologies and Vice President of the Association for Computational Linguistics. His primary research focus is on natural language processing (NLP), including social media analytics, deep learning, and computational social science.

Tim completed a BSc(CS/Maths) and BA(Linguistics/Japanese) at The University of Melbourne in 1995, and an MEng(CS) and PhD(CS) at the Tokyo Institute of Technology in 1998 and 2001, respectively. Prior to joining The University of Melbourne in 2004, he was a Senior Research Engineer at the Center for the Study of Language and Information, Stanford University (2001-2004). His research has been funded by organisations including the Australian Research Council, Google, Microsoft, Xerox, ByteDance, SEEK, NTT, and Fujitsu, and has been featured in MIT Technology Review, IEEE Spectrum, The Times, ABC News, The Age/Sydney Morning Herald, Australian Financial Review, and The Australian. He is the author of over 400 peer-reviewed publications across diverse topics in natural language processing and AI, with over 16,000 citations and an h-index of 60 (Google Scholar), in addition to being an IBM Fellow, ARC Future Fellow, and the recipient of a number of best paper awards at top conferences.

 
Josef van Genabith and Nico Herbig
DFKI and Saarland University, Germany
"MMPE: A Multi-Modal Interface for Post-Editing Machine Translation"


Abstract: To ensure translation results of professional human quality, in most cases the output of Machine Translation (MT) systems has to be manually post-edited (PE) by human experts. The post-editing process consists of identifying and correcting mistakes, as well as selecting, manipulating, adapting, and recombining good segments. To date, PE user interfaces mostly rely on keyboard and mouse as input devices to support these tasks. Since PE requires significantly less keyboard input but more manipulation of text, we asked ourselves whether this traditional setup, designed for translation from scratch, is still the best fit for PE. In the MMPE project, we explored the design, development, implementation, and evaluation of novel multi-modal post-editing support for machine translation, which extends the traditional input techniques of a PE system with touch, pen, and gesture input, as well as speech and gaze modalities (and their combinations). Users can use pen, touch, or gesture to identify and edit text, drag and drop words for reordering, or issue spoken commands to update the text.

The talk will cover:
Jamara et al. Mid-Air Hand Gestures for Post-Editing of Machine Translation. ACL–IJCNLP 2021.
Herbig et al. MMPE: A Multi-Modal Interface for Post-Editing Machine Translation. ACL 2020.
Herbig et al. MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering and Speech Commands for Post-Editing Machine Translation. ACL 2020.
Herbig et al. Multi-modal Approaches for Post-editing Machine Translation. CHI 2019.

Bio: Josef van Genabith is one of the Scientific Directors of the DFKI, the German Research Centre for Artificial Intelligence, where he heads the Multilingual Language Technologies (MLT) Lab. He is a Full Professor at Saarland University, where he holds the Chair of Translation-Oriented Language Technologies. He was the founding Director of CNGL, the Centre for Next Generation Localisation (now ADAPT), Director of the National Centre for Language Technology (NCLT), and an Associate Professor in the School of Computing at Dublin City University (DCU), Ireland. He worked as a postdoctoral researcher at IMS, University of Stuttgart, Germany, and obtained an MA and a PhD from the University of Essex, U.K. His first degree is in Electronic Engineering and English from RWTH Aachen, Germany.

After studying Computer Science at Saarland University, Nico Herbig worked for five years at the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken, Germany. Apart from projects in the retail and Industry 4.0 domains, he mainly focused on interdisciplinary research at the intersection of Machine Translation and Human-Computer Interaction: his PhD topic, “Multi-Modal Post-Editing of Machine Translation”, places the human translator at the center and explores how their interaction with MT output can be improved. For this, three research avenues are pursued: (a) improving CAT tools to support handwritten input, touch reordering, speech commands, eye tracking, and mid-air hand gestures; (b) considering the cognitive load of translators during post-editing; and (c) avoiding repetitive mistakes of the MT system through automatic post-editing.

 
He He
New York University, USA
"Towards Reliable Neural Text Generation"

Abstract: Recent advances in large-scale neural language models have transformed the field of text generation, including applications like dialogue and document summarization. Despite human-like fluency, the generated text tends to contain incorrect, inconsistent, or hallucinated information, which hinders the deployment of text generation models in real applications. I will review observations of such errors in current generation tasks, explain the challenges in evaluating and mitigating factual errors, and describe our recent attempts at addressing these problems. I will conclude with a discussion of future challenges and directions.

Bio: He He is an assistant professor at the Center for Data Science and the Department of Computer Science at New York University. Before joining NYU, she spent a year at Amazon Web Services and was a postdoc at Stanford University. She received her PhD from the University of Maryland, College Park. She is broadly interested in machine learning and natural language processing. Her current research interests include text generation, dialogue systems, and robust language understanding.

 
Eduard Hovy
Carnegie Mellon University, USA
"NLP: The Past and 3½ Futures"

Abstract: Natural Language Processing (NLP) of text and speech (also called Computational Linguistics) is just over 60 years old and is continuously evolving - not only its technical subject matter, but also the basic questions being asked and the style and methodology used to answer them. Unification followed finite-state technology in the 1980s, moving in the 1990s to statistical processing and machine learning as the dominant paradigm; since about 2015, deep neural methods have taken over. Large-scale processing over diverse data has brought general-level performance to a list of applications that includes speech recognition, information retrieval from the web, machine translation, information extraction, question answering, text summarization, sentiment detection, and dialogue processing. In all this work, three main complementary types of research and foci of interest have emerged, each with its own goals, evaluation paradigm, and methodology: (1) the resource creators focus on the nature of language and the representations required for language processing; (2) the learning researchers focus on algorithms to effect the transformation of representations required in NLP; and (3) the large-scale system builders produce engines that win the NLP competitions and build companies. Though the latter two have fairly well-established research methodologies, the first doesn’t, and consequently suffers in recognition and funding. However, I believe the main theoretical advances of NLP will occur here. In the talk, I describe the three trends of NLP research and pose some general questions, including: What is NLP, as a field? What is the nature of the work performed in each stream? What, if any, are the theoretical contributions of each stream? What is the likely future of each stream, and what kind of work should one choose to do if one is a grad student today?

Bio: Eduard Hovy is a research professor at the Language Technologies Institute in the School of Computer Science at Carnegie Mellon University. Starting in 2020 he served a term as Program Manager in DARPA’s Information Innovation Office (I2O), where he managed programs in Natural Language Technology and Data Analytics totaling over $30M per year. Dr. Hovy holds adjunct professorships in CMU’s Machine Learning Department and at USC (Los Angeles). Dr. Hovy completed a Ph.D. in Computer Science (Artificial Intelligence) at Yale University in 1987 and was awarded honorary doctorates from the National Distance Education University (UNED) in Madrid in 2013 and the University of Antwerp in 2015. He is one of the initial 17 Fellows of the Association for Computational Linguistics (ACL) and is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). Dr. Hovy’s research focuses on the computational semantics of language and addresses various areas in Natural Language Processing and Data Analytics, including in-depth machine reading of text, information extraction, automated text summarization, question answering, the semi-automated construction of large lexicons and ontologies, and machine translation. In early 2021 his Google h-index was 93, with over 48,000 citations. Dr. Hovy is the author or co-editor of eight books and over 450 technical articles and is a popular invited speaker. From 2003 to 2015 he was co-Director of Research for the Department of Homeland Security’s Center of Excellence for Command, Control, and Interoperability Data Analytics, a distributed cooperation of 17 universities. In 2001 Dr. Hovy served as President of the Association for Computational Linguistics (ACL), in 2001–03 as President of the International Association for Machine Translation (IAMT), and in 2010–11 as President of the Digital Government Society (DGS). Dr. Hovy regularly co-teaches Ph.D.-level courses and has served on Advisory and Review Boards for both research institutes and funding organisations in Germany, Italy, the Netherlands, Ireland, Singapore, and the USA.

 
Jing Jiang
Singapore Management University, Singapore
"Challenges and Solutions for Answering Complex Questions and Conversational Questions from Knowledge Bases"


Abstract: Knowledge base question answering (KBQA) is the task of answering questions using knowledge stored in a structured knowledge base. The task has gained much attention in recent years. Unlike QA over unstructured text, such as machine reading comprehension, KBQA can leverage the entity-relation structures embedded in the knowledge graph to narrow down its candidate answer space. However, KBQA also poses different challenges from document-based QA. In this talk, I will focus on two tasks in KBQA: answering complex questions and answering conversational questions. The former requires good strategies to identify multi-hop paths in the KB that lead to candidate answers without blowing up the search space. The latter can benefit from an entity-centric approach to modeling the conversation flow. I will share our recent work on these two topics and also discuss some future challenges.

Bio: Jing Jiang is a professor of computer science in the School of Computing and Information Systems at Singapore Management University. Her research interests include question answering, topic modeling, social media analysis, sentiment analysis, information extraction and domain adaptation. She was program co-chair of EMNLP 2019 and currently serves as an action editor of TACL. She holds a PhD degree in computer science from UIUC.

 
Alessandro Moschitti
Amazon Alexa, USA
" Challenges and Achievements of Question Answering Research for Personal Assistants "

Abstract: Personal assistants, e.g., Amazon Alexa, Google Home, and Apple Siri, provide interesting challenges for Question Answering research. These systems operate in an open domain, where the question complexity and variability are set by the information needs of millions of customers. Meeting these challenges requires the use of web content in the form of unstructured text.

In this talk, we will describe how current NLP breakthroughs, i.e., neural architectures, pre-training, new datasets, and answer generation-based models, can be used to build web-based QA systems of impressive accuracy. In particular, we will (i) describe components to design state-of-the-art QA systems, (ii) provide an interpretation of why Transformer models are very effective for QA, (iii) illustrate our transfer-and-adapt approach to improve Transformer models, (iv) show effective solutions, e.g., our Cascade Transformer, to make such technology efficient, and (v) present the latest research on generating conversational answers from web data.

Finally, we will provide a practical demonstration by directly asking complex questions to a standard Alexa device.

Bio: Alessandro Moschitti is a Principal Applied Research Scientist at Amazon, where he has been leading the science of web-based Question Answering for the Alexa Information service since 2018. He has also been a professor in the CS Department of the University of Trento, Italy, since 2007. He obtained his Ph.D. in CS from the University of Rome in 2003. He was a Principal Scientist at the Qatar Computing Research Institute (QCRI) for 5 years (2013-2018), and worked as a research fellow at The University of Texas at Dallas for 2 years (2002-2004). He was (i) a visiting professor at Columbia University, the University of Colorado, Johns Hopkins University, and MIT (CSAIL); and (ii) a visiting researcher at the IBM Watson Research Center (participating in the Jeopardy! Challenge, 2009-2011). His expertise concerns theoretical and applied machine learning in the areas of NLP, IR and Data Mining. He has devised innovative structural kernels and neural networks for advanced syntactic/semantic processing and inference over text, documented by more than 300 scientific articles. He received four IBM Faculty Awards, one Google Faculty Award, and five best paper awards. He has led about 25 projects, e.g., joint MIT CSAIL and QCRI projects, and European projects. He was the General Chair of EMNLP 2014, a PC co-chair of CoNLL 2015, and has had a chair role in more than 60 conferences and workshops. He was an action editor of TACL, and currently serves as an action/associate editor for the ACM Computing Surveys journal and JAIR.

 
Hwee Tou Ng
National University of Singapore, Singapore
"Grammatical Error Correction: Where Have We Been? Where Are We Going?"

Abstract: Grammatical error correction is the task of correcting writing errors in texts, encompassing a wide variety of errors including spelling, grammar, collocation, and word choice errors. This field has a long history within the natural language processing research community, but has gained more prominence in the past decade. In this talk, I will review past research on grammatical error correction, including recent advances such as formulating grammatical error correction as sequence-to-sequence generation using Transformers, employing large-scale synthetic parallel data, etc. I will end with some suggestions for future research directions.

Bio: Professor Hwee Tou Ng is Provost's Chair Professor of Computer Science at the National University of Singapore. He received a PhD in Computer Science from the University of Texas at Austin, USA. His research focuses on natural language processing. He is a Fellow of the Association for Computational Linguistics (ACL). His papers received the Best Paper Award at EMNLP 2011 and SIGIR 1997. He is the editor-in-chief of Computational Linguistics, an editorial board member of Natural Language Engineering, and a steering committee member of ACL SIGNLL. He was program co-chair of the EMNLP 2008 and ACL 2005 conferences.

 
Constantin Orasan
University of Surrey, UK
"Challenges in Translation of Sentiments and Emotions in User Generated Content"

Abstract: Recent translation engines are able to produce very fluent translations, but it is not unusual for them to be less than faithful to the source, distorting the original message. One situation where this can prove particularly problematic is when these engines are used to translate user-generated content that expresses sentiments and emotions. Translation of such content is often carried out fully automatically, without a human post-editing or reviewing the translation. For example, websites like Amazon, eBay and Booking.com automatically translate user reviews, but machine translation can easily be confused by sarcastic language or by expressions containing emotions and sentiments. This talk will discuss a number of problems introduced by machine translation when translating sentiments and emotions, and ways to address some of them.

Bio: Constantin Orasan is Professor of Language and Translation Technologies at the Centre for Translation Studies, University of Surrey. He has over 20 years' experience of working in the fields of (applied) Natural Language Processing, Artificial Intelligence and Machine Learning for language processing. His research interests are largely focused on facilitating information access and include translation technology, sentiment analysis, question answering, text summarisation, anaphora and coreference resolution, and the building, annotation and exploitation of corpora. More information is available at https://dinel.org.uk/.

 
Sebastian Riedel
University College London and Facebook AI Research, UK
"Parametric vs Nonparametric Knowledge, and What We Can Learn from Knowledge Bases"

Abstract: Traditionally, the AI and Machine Learning communities have considered knowledge from the perspective of discrete vs continuous representations, knowledge bases (KBs) vs dense vectors, or logic vs algebra. While these are important dichotomies, in this talk I will argue that we should put more focus on another: parametric vs non-parametric modelling. Roughly, in the former a fixed set of parameters is used, while in the latter the number of parameters grows with the data. I will explain recent advances in knowledge-intensive NLP from this perspective, show the benefit of hybrid approaches, and discuss KBs as non-parametric approaches with relatively crude assumptions about what future information needs will be. By replacing these assumptions with a learnt model, we show that such “modern KBs” are a very attractive alternative or complement to current approaches.

Bio: Sebastian Riedel is a researcher at Facebook AI Research, a professor in Natural Language Processing and Machine Learning at University College London (UCL), and an Allen Distinguished Investigator. He works at the intersection of Natural Language Processing and Machine Learning, focusing on teaching machines how to read and reason. He was educated in Hamburg-Harburg (Dipl.-Ing.) and Edinburgh (MSc, PhD), and worked at the University of Massachusetts Amherst and the University of Tokyo before joining UCL.