
Modeling belief in dynamic systems, part I: Foundations

Authors
Journal: Artificial Intelligence
ISSN: 0004-3702
Publisher: Elsevier
Publication Date
Volume: 95
Issue: 2
Identifiers
DOI: 10.1016/S0004-3702(97)00040-4
Keywords
  • Belief Change
  • Belief Revision
  • Minimal Change
  • Logic Of Knowledge
  • Logic Of Belief
  • Logic Of Time
  • Plausibility Measure
  • AGM Postulates
Disciplines
  • Linguistics
  • Mathematics

Abstract

Belief change is a fundamental problem in AI: Agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes Φ if he knows that Φ is more plausible than ¬Φ. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin for modeling knowledge in multi-agent systems. We then examine the problem of “minimal change”. This notion can be captured by using prior plausibilities, an analogue to prior probabilities, which can be updated by “conditioning”. We show by example that conditioning on a plausibility measure can capture many scenarios of interest. In a companion paper, we show how the two best-studied scenarios of belief change, belief revision and belief update, fit into our framework.
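The abstract's core definition, belief as "Φ is more plausible than ¬Φ" with priors updated by conditioning, can be illustrated with a minimal sketch. The sketch below uses a Spohn-style ranking (lower rank = more plausible) as one concrete instance of a plausibility measure; all names and the toy weather example are hypothetical, not taken from the paper, and the paper's plausibility measures are strictly more general.

```python
# Illustrative sketch only: a qualitative plausibility measure realized as a
# ranking over worlds, where a lower rank means "more plausible".

def believes(worlds, rank, phi):
    """Agent believes phi iff phi is more plausible than not-phi:
    the most plausible phi-world has strictly lower rank than the
    most plausible not-phi-world."""
    inf = float("inf")
    r_phi = min((rank[w] for w in worlds if phi(w)), default=inf)
    r_not = min((rank[w] for w in worlds if not phi(w)), default=inf)
    return r_phi < r_not

def condition(worlds, rank, obs):
    """Condition the prior plausibility on an observation: keep only
    obs-worlds and shift ranks so the most plausible survivor is at 0."""
    kept = [w for w in worlds if obs(w)]
    base = min(rank[w] for w in kept)
    return kept, {w: rank[w] - base for w in kept}

# Toy example: three weather worlds, with sun a priori most plausible.
worlds = ["sun", "rain", "snow"]
rank = {"sun": 0, "rain": 1, "snow": 2}

wet = lambda w: w in ("rain", "snow")
print(believes(worlds, rank, wet))  # False: the sun-world is most plausible

# Observe "not sunny", condition the prior, and re-evaluate beliefs.
worlds2, rank2 = condition(worlds, rank, lambda w: w != "sun")
print(believes(worlds2, rank2, lambda w: w == "rain"))  # True: rain now wins
```

The point of the sketch is that minimal change falls out of conditioning: the relative plausibility of the surviving worlds is untouched, so the agent's new beliefs are the most plausible worlds consistent with the observation.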
