Learning to talk about events from narrated video in a construction grammar framework

Journal: Artificial Intelligence (ISSN 0004-3702)
Publisher: Elsevier
Volume: 167
DOI: 10.1016/j.artint.2005.06.007
Keywords
  • Grammatical Construction
  • Language Acquisition
  • Event Recognition
  • Language Technology
Disciplines
  • Linguistics
  • Logic

Abstract

The current research presents a system that learns to understand object names, spatial relation terms, and event descriptions from observing narrated action sequences. The system extracts meaning from observed visual scenes by exploiting perceptual primitives related to motion and contact in order to represent events and spatial relations as predicate-argument structures. Learning the mapping between sentences and the predicate-argument representations of the situations they describe results in the development of a small lexicon and a structured set of sentence form-to-meaning mappings, or simplified grammatical constructions. The acquired grammatical construction knowledge generalizes, allowing the system to correctly understand new sentences not used in training. In the context of discourse, the grammatical constructions are used in the inverse sense to generate sentences from meanings, allowing the system to describe visual scenes that it perceives. In question-and-answer dialogs with naïve users, the system exploits pragmatic cues in order to select the grammatical constructions that are most relevant in the discourse structure. While the system embodies a number of limitations that are discussed, this research demonstrates how concepts borrowed from the construction grammar framework can aid in taking initial steps towards building systems that can acquire and produce event language through interaction with the world.
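To make the abstract's central idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of what a "simplified grammatical construction" might look like: a sentence form paired with a mapping from word slots to argument positions in a predicate-argument meaning, usable in both directions, for comprehension (sentence to meaning) and production (meaning to sentence). All names and the template format are illustrative assumptions.

```python
# Hypothetical sketch of a simplified grammatical construction:
# a sentence template whose open slots map onto argument positions
# in a predicate-argument representation of an observed event.

# An observed event as a predicate-argument structure: moved(block, triangle).
event = ("moved", "block", "triangle")

# The construction pairs a sentence form with a slot-to-argument mapping.
construction = {
    "form": ["<arg1>", "moved", "to", "the", "<arg2>"],
    "slot_to_role": {"<arg1>": 1, "<arg2>": 2},  # slot -> index in event tuple
}

def comprehend(sentence, construction, predicate="moved"):
    """Map a sentence onto a predicate-argument meaning via the construction."""
    args = {}
    for template_word, word in zip(construction["form"], sentence.split()):
        if template_word in construction["slot_to_role"]:
            args[construction["slot_to_role"][template_word]] = word
    return (predicate, args[1], args[2])

def produce(meaning, construction):
    """Use the construction in the inverse sense: generate a sentence."""
    role_to_word = {i: meaning[i] for i in construction["slot_to_role"].values()}
    words = []
    for template_word in construction["form"]:
        if template_word in construction["slot_to_role"]:
            words.append(role_to_word[construction["slot_to_role"][template_word]])
        else:
            words.append(template_word)
    return " ".join(words)

print(comprehend("block moved to the triangle", construction))
# -> ('moved', 'block', 'triangle')
print(produce(event, construction))
# -> block moved to the triangle
```

Because the mapping is defined over slots rather than specific words, the same construction generalizes to new sentences of the same form ("ball moved to the box"), which is the sense in which the abstract says the acquired construction knowledge extends beyond the training sentences.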
