The third annual WeCNLP (West Coast NLP) Summit is an opportunity to foster discussion and collaboration between NLP researchers in academia and industry. The event will include talks and a panel from research leaders on the latest advances in NLP technologies.
In light of the global health safety recommendations related to COVID-19, WeCNLP will be a virtual event this year in lieu of an in-person conference. WeCNLP 2020 will be composed of invited and lightning talks, poster & demo sessions, a panel discussion on NLP research during and after the COVID-19 outbreak, and a virtual happy hour.
September 4, 2020 – Abstract & paper submission deadline (11:59PM PST).
October 8, 2020 – Notification of submission acceptance
October 30, 2020 – WeCNLP Summit
Asli Celikyilmaz is a Principal Researcher at Microsoft Research in Redmond, Washington, and an affiliate faculty member in the Computer Science Department at the University of Washington. She received her Ph.D. from the University of Toronto, Canada, and completed postdoctoral research at the University of California, Berkeley. Her research interests lie mainly in deep learning and natural language, focusing on narrative coherence in long-range generation, conversational grounded navigation, and language understanding and interaction. She serves on the editorial boards of the Transactions of the ACL (TACL) as an area editor and the Open Journal of Signal Processing (OJSP) as an associate editor. She has received several “best of” awards, including at NAFIPS 2007, Semantic Computing 2009, and CVPR 2019.
Mei-Yuh received her PhD in Computer Science from Carnegie Mellon University in 1993, where she was one of the main contributors to SPHINX-II, a continuous speech recognition system. She worked at Microsoft in the U.S. and in China for 18 years, and at the University of Washington for 4 years, publishing numerous conference and journal papers and delivering industry products in speech recognition, machine translation, and language understanding. She is an IEEE Fellow who is passionate about bridging the gap between academia and industry. Following that passion, she spent 4 years at a startup company whose main focus was personal assistants for automobiles and smart watches. Having returned to Microsoft a few months ago, she now focuses on personal assistants for productivity.
Hanna Hajishirzi is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Research Fellow at the Allen Institute for AI. Her research spans different areas of NLP and AI, focusing on developing machine learning algorithms that represent, comprehend, and reason about diverse forms of data at large scale. Applications of these algorithms include question answering, reading comprehension, representation learning, knowledge extraction, and conversational dialogue. Her honors include the Sloan Fellowship, the Allen Distinguished Investigator Award, the Intel Rising Star Faculty Award, multiple best paper and honorable mention awards, and several industry research faculty awards. Hanna received her PhD from the University of Illinois and spent a year as a postdoc at Disney Research and CMU.
Y-Lan Boureau is a research scientist at Facebook Artificial Intelligence Research, where she focuses on building more helpful conversations and understanding dialogue. She received her PhD from New York University and École Normale Supérieure (within the INRIA Willow project team), working in machine learning and computer vision under the supervision of Yann LeCun and Jean Ponce. She went on to do postdoctoral research in experimental psychology and neuroscience at New York University with Nathaniel Daw, investigating self-control and meta-decision making. Her research strives to foster a stronger people orientation in AI.
Dhruv Batra is a Research Scientist at Facebook AI Research and an Associate Professor in the School of Interactive Computing at Georgia Tech. His research interests lie at the intersection of machine learning, computer vision, and AI. He is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) 2019, a number of early career awards (ECASE-Army 2018, ONR YIP 2017, NSF CAREER 2014, ARO YIP 2014), and several best paper awards and nominations. Research from his lab has been extensively covered in the media (with varying levels of accuracy) by CNN, BBC, CNBC, Bloomberg Business, The Boston Globe, MIT Technology Review, Newsweek, The Verge, New Scientist, and NPR.
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine (UCI). He works primarily on the robustness and interpretability of machine learning algorithms, along with models that reason with text and structure for natural language processing. Sameer was a postdoctoral researcher at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs. He was selected as a DARPA Riser and has been awarded the grand prize in the Yelp dataset challenge, the Yahoo! Key Scientific Challenges award, the UCI Mid-Career Excellence in Research Award, and, recently, the Hellman Fellowship. His group has received funding from the Allen Institute for AI, Amazon, NSF, DARPA, Adobe Research, Base 11, and FICO. Sameer has published extensively at machine learning and natural language processing conferences and workshops, including paper awards at KDD 2016, ACL 2018, EMNLP 2019, AKBC 2020, and ACL 2020.
Jason Williams manages the language understanding group for Siri at Apple. Before joining Apple, he was a Research Manager at Microsoft Research, where he led the Conversational Systems Research Group and the Redmond Reinforcement Learning Group. Prior to Microsoft, he was a Principal Researcher at AT&T Labs – Research. He has published about 60 peer-reviewed papers on dialog systems and related areas, and has received five best paper/presentation awards for work on statistical approaches to dialog systems, including the use of POMDPs (partially observable Markov decision processes), reinforcement learning, turn-taking, and empirical user studies. In 2012 he initiated the Dialog State Tracking Challenge series; in 2014 he shipped components of the first release of Microsoft Cortana; in 2015 he launched Microsoft’s Language Understanding Service; and in 2018 he launched Microsoft’s Conversation Learner Service. He is President of SIGDIAL and an elected member of the IEEE Speech and Language Technical Committee (SLTC) in the area of spoken dialogue systems.
Ruhi Sarikaya has been a Director at Amazon Alexa since 2016, where he leads the Intelligent Decisions organization, one of the three pillars of Alexa AI. With his team, he has been building core AI capabilities for Alexa in ranking, relevance, natural language understanding, dialog management, contextual understanding, personalization, self-learning, and end-to-end offline/online metrics and learning. Prior to that, he was a principal science manager and the founder of the language understanding and dialog systems group at Microsoft from 2011 to 2016. His group built the language understanding and dialog management capabilities of Cortana and Xbox One, as well as the underlying platform supporting both first- and third-party developers. Before Microsoft, he was a research staff member and team lead in the Human Language Technologies Group at the IBM T.J. Watson Research Center for ten years. Prior to IBM, he worked as a researcher at the Center for Spoken Language Research (CSLR) at the University of Colorado at Boulder for two years. He received his Ph.D. in electrical and computer engineering from Duke University in 2001. He has published over 120 technical papers in refereed journal and conference proceedings and is an inventor on 75 issued or pending patents. Dr. Sarikaya has served on the IEEE SLTC, as general co-chair of IEEE SLT’12, as publicity chair of IEEE ASRU’05, and as an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing and IEEE Signal Processing Letters. He also gave a tutorial at Interspeech 2007, and he has given keynotes at major AI, web, and language technology conferences.
Dr. Volkova is a recognized leader in the field of social media analytics and computational linguistics. Her scientific contributions and outstanding publication profile cover a range of topics in social media analytics, natural language processing (NLP), applied machine learning (ML), and deep learning (DL). More specifically, her research focuses on developing novel models for predicting and forecasting real-world events and human behavior from social data. Approaches developed by Svitlana and her team advance understanding, analysis, and effective reasoning about extreme volumes of dynamic, multilingual, and diverse real-world social data. Since joining PNNL in October 2015, Dr. Volkova has been a Principal Investigator (PI), co-PI, and Project Manager (PM) on more than ten internally and externally funded projects, including two DARPA projects. Svitlana has authored more than 50 peer-reviewed conference and journal publications. For her outstanding publication record in 2016, she received the prestigious NSD Author of the Year award. In 2019, Dr. Volkova received the Ronald L. Brodzinski Early Career Exceptional Achievement Award for her leadership in the field of computational social science and computational linguistics.
Dr. Rudnicky's research has spanned many aspects of spoken language, including knowledge-based recognition systems, language modeling, architectures for spoken language systems, multi-modal interaction, the design of speech interfaces, and the rapid prototyping of speech-to-speech translation systems. Dr. Rudnicky has made contributions to dialog management, language generation, and the computation of confidence metrics for recognition and understanding. His other work includes the automatic creation of summaries from event streams, automated meeting understanding and summarization, and language-based human-robot communication. He and his students have worked on open-domain conversational agents and on blended (social and task) conversation. He is active in research on spoken language understanding, multi-modal interfaces, and emotion detection from speech. Dr. Rudnicky has published over 100 refereed papers and is a recipient of the Allen Newell Award for Research Excellence. He is currently Professor Emeritus in the School of Computer Science at Carnegie Mellon University and is on the faculty of its Language Technologies Institute.