
BCIT Citations Collection

Arbitrary announcements in propositional belief revision
Proceedings of the International Workshop on Defeasible and Ampliative Reasoning (DARe-15) in Buenos Aires, Argentina, July 27, 2015. Public announcements cause each agent in a group to modify their beliefs to incorporate some new piece of information, while simultaneously being aware that all other agents are doing the same. Given some fixed goal formula, it is natural to ask if there exists an announcement that will make the formula true in a multi-agent context. This problem is known to be undecidable in a general modal setting, where the presence of nested beliefs can lead to complex dynamics. In this paper, we consider not necessarily truthful public announcements in the setting of propositional belief revision. We are given a goal formula for each agent, and we are interested in finding a single announcement that will make each agent believe the corresponding goal following AGM-style belief revision. If the goals are inconsistent, then this can be seen as a form of ampliative reasoning. We prove that determining whether a suitable public announcement exists in this setting is not only decidable, but simpler than the corresponding problem in even the simplest modal logics. Moreover, we argue that propositional announcements and beliefs are sufficient for modelling many practical problems, including simple robot controllers., Conference paper, Published.
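The decision problem described here lends itself to a small illustration. The following sketch, with invented names and Dalal-style (minimal Hamming distance) revision standing in for a generic AGM operator, brute-forces the state space for a single announcement that makes every agent believe its goal; it is a toy search over a tiny propositional domain, not the paper's algorithm.

```python
# Beliefs, goals, and announcements are all sets of states (bit tuples).
from itertools import product, combinations

def all_states(n):
    return list(product([0, 1], repeat=n))

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def revise(beliefs, formula):
    # Dalal-style AGM revision: among the formula's states, keep those
    # closest to the current belief set.
    dist = lambda s: min(hamming(s, b) for b in beliefs)
    d = min(dist(s) for s in formula)
    return {s for s in formula if dist(s) == d}

def find_announcement(beliefs_per_agent, goal_per_agent, n):
    # Try every non-empty set of states as a candidate (not necessarily
    # truthful) public announcement; succeed when every agent's revised
    # beliefs entail that agent's goal.
    universe = all_states(n)
    for r in range(1, len(universe) + 1):
        for cand in map(set, combinations(universe, r)):
            if all(revise(K, cand) <= G
                   for K, G in zip(beliefs_per_agent, goal_per_agent)):
                return cand
    return None

# Two agents over variables (p, q): agent 1 should come to believe p,
# agent 2 should come to believe q; announcing {(1, 1)} achieves both.
beliefs = [{(0, 0)}, {(1, 0)}]
goals = [{(1, 0), (1, 1)}, {(0, 1), (1, 1)}]
print(find_announcement(beliefs, goals, 2))   # -> {(1, 1)}
```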
Belief change and cryptographic protocol verification
Proceedings of the 22nd Conference on Artificial Intelligence (AAAI-07) in Vancouver, BC, July 22–26, 2007. Cryptographic protocols are structured sequences of messages that are used for exchanging information in a hostile environment. Many protocols have epistemic goals: a successful run of the protocol is intended to cause a participant to hold certain beliefs. As such, epistemic logics have been employed for the verification of cryptographic protocols. Although this approach to verification is explicitly concerned with changing beliefs, formal belief change operators have not been incorporated in previous work. In this paper, we introduce a new approach to protocol verification by combining a monotonic logic with a non-monotonic belief change operator. In this context, a protocol participant is able both to retract beliefs in response to new information and to postulate the most plausible event explaining that information. We illustrate that this kind of reasoning is particularly important when protocol participants have incorrect beliefs., Conference paper, Published.
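As a rough illustration of the two reasoning steps just described, the sketch below retracts contradicted beliefs on receipt of a message and abduces the most plausible event explaining it from a ranked list of candidates. The events, fluents, and ranks are invented for illustration and do not reflect the paper's formal machinery.

```python
# Candidate events with plausibility ranks (0 = most plausible) and the
# facts each event would explain. "not_x" marks the negation of x.
EVENTS = [
    ("honest_run",     0, {"key_fresh", "peer_authentic"}),
    ("replay_attack",  1, {"not_key_fresh"}),
    ("key_compromise", 2, {"not_key_fresh", "not_peer_authentic"}),
]

def explain(observation):
    # Abduction: the most plausible event whose effects entail the
    # observed facts.
    candidates = [(rank, name) for name, rank, effects in EVENTS
                  if observation <= effects]
    return min(candidates)[1] if candidates else None

def receive(beliefs, observation):
    # Retract any belief whose negation was observed, then add the
    # observation itself.
    contradicted = {f[4:] if f.startswith("not_") else "not_" + f
                    for f in observation}
    return (beliefs - contradicted) | observation

beliefs = {"key_fresh", "peer_authentic"}
obs = {"not_key_fresh"}
print(receive(beliefs, obs))   # -> {'peer_authentic', 'not_key_fresh'}
print(explain(obs))            # -> 'replay_attack'
```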
Belief modeling for maritime surveillance
Proceedings of the 12th International Conference on Information Fusion (FUSION '09) in Seattle, WA, USA, 6–9 July 2009. In maritime surveillance, the volume of information to be processed is very large and there is a great deal of uncertainty about the data. There are many vessels at sea at any given time, and the vast majority of them pose no threat to security. Sifting through all of the benign activity to find unusual activities is a difficult problem. The problem is made even more difficult by the fact that the available data about vessel activities is both incomplete and inconsistent. In order to manage this uncertainty, automated anomaly detection software can be very useful in the early detection of threats to security. This paper introduces a high-level architecture for an anomaly detection system based on a formal model of beliefs with respect to each entity in some domain of interest. In this framework, the system has beliefs about the intentions of each vessel in the maritime domain. If the vessel behaves in an unexpected manner, these intentions are revised and a human operations centre worker is notified. This approach is flexible, scalable, and easily manages inconsistent information. Moreover, the approach has the pragmatic advantage that it uses expert information to inform decision making, but the required information is easily obtained through simple ranking exercises., Conference paper, Published.
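The following toy sketch illustrates the monitoring loop described above: the system keeps a plausibility ranking over a vessel's possible intentions, revises it as behaviours are observed, and notifies an operator when a threatening intention becomes most plausible. The intentions, behaviours, and penalty values are all invented.

```python
PREDICTS = {                      # behaviours each intention makes expected
    "fishing":   {"loiter", "slow_transit"},
    "transit":   {"slow_transit", "fast_transit"},
    "smuggling": {"loiter", "dark_rendezvous"},
}
THREATS = {"smuggling"}

def revise(ranking, behaviour, penalty=3):
    # Penalise intentions that do not predict the behaviour, then shift
    # so the most plausible intention is back at rank 0.
    r = {i: k + (0 if behaviour in PREDICTS[i] else penalty)
         for i, k in ranking.items()}
    m = min(r.values())
    return {i: k - m for i, k in r.items()}

def monitor(ranking, behaviours):
    for b in behaviours:
        ranking = revise(ranking, b)
        plausible = {i for i, k in ranking.items() if k == 0}
        if plausible & THREATS:
            print(f"alert operator: after '{b}', plausible intentions {plausible}")
    return ranking

# A vessel initially presumed benign drifts into anomalous behaviour;
# the alert fires on the second dark rendezvous.
ranking = {"fishing": 0, "transit": 0, "smuggling": 2}
monitor(ranking, ["slow_transit", "dark_rendezvous", "dark_rendezvous"])
```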
Belief revision on modal accessibility relations
Proceedings of the 6th International Conference on Agents and Artificial Intelligence in Angers, France, 2014. In order to model the changing beliefs of an agent, one must actually address two distinct issues. First, one must devise a model of static beliefs that accurately captures the appropriate notions of incompleteness and uncertainty. Second, one must define appropriate operations to model the way beliefs are modified in response to different events. Historically, the former is addressed through the use of modal logics and the latter is addressed through belief change operators. However, these two formal approaches are not particularly complementary; the normal representation of belief in a modal logic is not suitable for revision using standard belief change operators. In this paper, we introduce a new modal logic that uses the accessibility relation to encode epistemic entrenchment, and we demonstrate that this logic captures AGM revision. We consider the suitability of our new representation of belief, and we discuss potential advantages to be exploited in future work., Conference paper, Published.
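A minimal sketch of the central idea, under the assumption that the graded accessibility relation can be encoded as a rank for each world (lower is more accessible, hence more plausible): beliefs are read off the minimal worlds, and revision promotes the best worlds satisfying the new formula. The encoding is illustrative, not the paper's exact logic.

```python
def beliefs(ranked_worlds):
    # The agent believes whatever holds in all minimally-ranked worlds.
    m = min(ranked_worlds.values())
    return {w for w, r in ranked_worlds.items() if r == m}

def revise(ranked_worlds, formula):
    # AGM-style revision: worlds satisfying the formula are promoted so
    # the best of them become the new minimal (most accessible) worlds.
    m = min(r for w, r in ranked_worlds.items() if w in formula)
    return {w: (r - m if w in formula else r + 1)
            for w, r in ranked_worlds.items()}

# Worlds named by the literals they satisfy, with an initial entrenchment.
state = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}
state = revise(state, {"~pq", "~p~q"})     # revise by ~p
print(beliefs(state))                      # -> {'~pq'}
```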
Iterated belief change
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-05) in Edinburgh, Scotland, 2005. We use a transition system approach to reason about the evolution of an agent’s beliefs as actions are executed. Some actions cause an agent to perform belief revision and some actions cause an agent to perform belief update, but the interaction between revision and update can be nonelementary. We present a set of basic postulates describing the interaction of revision and update, and we introduce a new belief evolution operator that gives a plausible interpretation to alternating sequences of revisions and updates., Conference paper, Published.
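The contrast that drives the paper can be seen in a few lines. In the sketch below, revision selects the formula-states closest to the belief set as a whole, while update progresses each state individually, so the same input can yield different results. The Hamming distance measure and the example are illustrative choices, not the paper's definitions.

```python
def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def revise(beliefs, formula):
    # Belief revision (new information about a static world): keep the
    # formula-states closest to the belief set overall.
    dist = lambda s: min(hamming(s, b) for b in beliefs)
    d = min(dist(s) for s in formula)
    return {s for s in formula if dist(s) == d}

def update(beliefs, formula):
    # Belief update (the world itself changed): move each state to its
    # own closest formula-state(s).
    out = set()
    for b in beliefs:
        d = min(hamming(s, b) for s in formula)
        out |= {s for s in formula if hamming(s, b) == d}
    return out

K = {(0, 0), (1, 1)}           # two equally plausible states
F = {(1, 0), (1, 1)}           # learn / make true: first bit is 1
print(revise(K, F))            # -> {(1, 1)}           (closest to K overall)
print(update(K, F))            # -> {(1, 0), (1, 1)}   (per-state change)
```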
Ranking functions for belief change
Proceedings of the 6th International Conference on Agents and Artificial Intelligence in Angers, France, 2014. In this paper, we explore the use of ranking functions in reasoning about belief change. It is well-known that the semantics of belief revision can be defined either through total pre-orders or through ranking functions over states. While both approaches have similar expressive power with respect to single-shot belief revision, we argue that ranking functions provide distinct advantages at both the theoretical level and the practical level, particularly when actions are introduced. We demonstrate that belief revision induces a natural algebra over ranking functions, which treats belief states and observations in the same manner. When we introduce belief progression due to actions, we show that many natural domains can be easily represented with suitable ranking functions. Our formal framework uses ranking functions to represent belief revision and belief progression in a uniform manner; we demonstrate the power of our approach through formal results, as well as a series of natural problems in commonsense reasoning., Conference paper, Published.
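As a rough illustration of treating belief states and observations uniformly, the sketch below encodes both as ranking functions (degrees of implausibility, with 0 fully plausible) and revises by pointwise combination followed by renormalisation. The states and strengths are invented, and the combination rule is one simple choice, not necessarily the paper's algebra.

```python
def combine(kappa, observation):
    # Pointwise addition of two rankings, shifted so the minimum is 0.
    # Belief states and observations are treated in the same manner.
    raw = {s: kappa[s] + observation.get(s, 0) for s in kappa}
    m = min(raw.values())
    return {s: r - m for s, r in raw.items()}

def observe(formula_states, strength, states):
    # Encode "the formula holds, with this strength" as a ranking that
    # penalises states outside the formula.
    return {s: (0 if s in formula_states else strength) for s in states}

STATES = ["rain", "dry"]
kappa = {"rain": 2, "dry": 0}                  # initially believes dry
obs = observe({"rain"}, strength=3, states=STATES)
print(combine(kappa, obs))                     # -> {'rain': 0, 'dry': 1}
```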
Trust as a precursor to belief revision
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information., Article, Published.
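The partition mechanism can be illustrated in a few lines. In the sketch below, a report is weakened to the union of partition cells it intersects before an AGM-style revision is applied; the states, ranks, and partition are invented for illustration.

```python
def relativize(report, partition):
    # Weaken the report to everything the trusted agent could have
    # distinguished: the union of cells meeting the report.
    return set().union(*(cell for cell in partition if cell & report))

def revise(ranking, formula):
    # AGM-style revision: believe the most plausible formula-states
    # (lower rank = more plausible).
    m = min(ranking[s] for s in formula)
    return {s for s in formula if ranking[s] == m}

# A doctor is trusted on sick/well, but not on the patient's eye colour.
doctor = [{"sick_blue", "sick_brown"}, {"well_blue", "well_brown"}]
ranking = {"well_brown": 0, "well_blue": 1, "sick_brown": 2, "sick_blue": 3}
report = {"sick_blue"}                       # doctor reports: sick and blue
print(revise(ranking, relativize(report, doctor)))   # -> {'sick_brown'}
```

Here the agent accepts the part of the report inside the doctor's expertise (the patient is sick) while retaining its own view on the part the doctor has no authority over (eye colour).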
Trust-sensitive belief revision
Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, 25–31 July 2015. Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show how trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust; this ensures that the most trusted reports will be believed., Conference paper, Published.
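For the multi-reporter case, one way to realise the distance-based idea, not necessarily the paper's exact construction, is to charge each state a trust-weighted penalty for every report it violates and to believe the minimum-cost states, so that the most trusted reports dominate conflicting, less trusted ones. The weights and states below are invented.

```python
def believe(states, reports):
    # reports: list of (trust_weight, set_of_states_the_report_allows).
    # A state's cost is the total trust weight of the reports it violates.
    cost = {s: sum(w for w, allowed in reports if s not in allowed)
            for s in states}
    m = min(cost.values())
    return {s for s, c in cost.items() if c == m}

states = {"pq", "p~q", "~pq", "~p~q"}
reports = [
    (5, {"pq", "p~q"}),        # highly trusted source: p holds
    (1, {"~pq", "~p~q"}),      # weakly trusted source: p fails
    (3, {"pq", "~pq"}),        # moderately trusted source: q holds
]
print(believe(states, reports))   # -> {'pq'}
```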