Commonsense knowledge, such as knowing that "bumping into people annoys them" or "rain makes the road slippery", helps humans navigate everyday situations seamlessly. Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades. This tutorial will discuss various challenges related to commonsense reasoning for AI, including how to represent and measure it, as well as incorporate it into downstream tasks.
This tutorial was presented (and recorded) at ACL 2020 on July 5th, 3:00-6:30pm. See below for the slide deck for each section.
| Section | Slides | Description | Presenter |
|---|---|---|---|
| 1. Introduction | [.pdf] [.key] | Why commonsense reasoning is the new frontier of artificial intelligence. | Yejin Choi |
| 2. Knowledge in LMs | [.pdf] [.gslides] | The types of commonsense knowledge captured during the pre-training of language models, and what is still missing. | Vered Shwartz |
| 3. Commonsense resources | [.pdf] [.pptx] | How to gather and represent commonsense knowledge of different types (e.g., social, physical, taxonomic). | Maarten Sap |
| 4. Integration into NNs, part 1 | [.pdf] [.gslides] | How to enhance neural models for commonsense reasoning tasks with symbolic knowledge. | Vered Shwartz |
| 5. Integration into NNs, part 2 | [.pdf] [.key] | How language models can be converted into commonsense knowledge bases, and the downstream effects of these new tools. | Antoine Bosselut |
| 6. Commonsense benchmarks | [.pdf] [.pptx] | How to create benchmarks that measure whether, and how well, a model can do commonsense reasoning. | Maarten Sap |
| 7. Temporal commonsense | [.pdf] | How to discover the temporal commonsense implications of events described in text. | Dan Roth |