
BCIT Citations Collection

Arbitrary announcements in propositional belief revision
Proceedings of the International Workshop on Defeasible and Ampliative Reasoning (DARe-15) in Buenos Aires, Argentina, July 27, 2015. Public announcements cause each agent in a group to modify their beliefs to incorporate some new piece of information, while simultaneously being aware that all other agents are doing the same. Given some fixed goal formula, it is natural to ask if there exists an announcement that will make the formula true in a multi-agent context. This problem is known to be undecidable in a general modal setting, where the presence of nested beliefs can lead to complex dynamics. In this paper, we consider not necessarily truthful public announcements in the setting of propositional belief revision. We are given a goal formula for each agent, and we are interested in finding a single announcement that will make each agent believe the corresponding goal following AGM-style belief revision. If the goals are inconsistent, then this can be seen as a form of ampliative reasoning. We prove that determining whether such an announcement exists in this setting is not only decidable, but also simpler than the corresponding problem in even the simplest modal logics. Moreover, we argue that propositional announcements and beliefs are sufficient for modelling many practical problems, including simple robot controllers., Conference paper, Published.
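To make the setting concrete, here is a small brute-force sketch in Python, not the decision procedure from the paper: it identifies an announcement with a non-empty set of propositional states, uses Hamming-distance (Dalal) revision as a stand-in for a generic AGM operator, and simply searches for an announcement that makes every agent believe its goal. The two-variable vocabulary, the belief and goal sets, and the function names are illustrative assumptions.

```python
# Brute-force search for a public announcement (sketch only).
# Assumptions: two propositional variables, beliefs/goals given as sets of
# states (bit tuples), Hamming-distance (Dalal) revision standing in for a
# generic AGM revision operator.
from itertools import product, combinations

VARS = 2
STATES = list(product((0, 1), repeat=VARS))        # all propositional states

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def revise(belief, announcement):
    """Keep the announcement-states closest (Hamming) to the prior beliefs."""
    best = min(min(hamming(s, b) for b in belief) for s in announcement)
    return {s for s in announcement if min(hamming(s, b) for b in belief) == best}

def find_announcement(beliefs, goals):
    """Return a set of states making every agent i believe goals[i], if any."""
    for k in range(1, len(STATES) + 1):
        for cand in combinations(STATES, k):       # candidate = set of states
            if all(revise(beliefs[i], set(cand)) <= goals[i]
                   for i in range(len(beliefs))):
                return set(cand)
    return None

# Example: agent 0 should come to believe p, agent 1 should come to believe q.
beliefs = [{(0, 0)}, {(0, 0)}]
goals = [{s for s in STATES if s[0] == 1}, {s for s in STATES if s[1] == 1}]
print(find_announcement(beliefs, goals))           # {(1, 1)}, i.e. announce p and q
```

Because the state space is finite, the search is exhaustive; the point is only that existence of a suitable announcement is a finite, checkable question in the propositional setting.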
Belief change and cryptographic protocol verification
Proceedings of the 22nd AAAI Conference on Artificial Intelligence (AAAI-07) in Vancouver, BC, July 22–26, 2007. Cryptographic protocols are structured sequences of messages that are used for exchanging information in a hostile environment. Many protocols have epistemic goals: a successful run of the protocol is intended to cause a participant to hold certain beliefs. As such, epistemic logics have been employed for the verification of cryptographic protocols. Although this approach to verification is explicitly concerned with changing beliefs, formal belief change operators have not been incorporated in previous work. In this paper, we introduce a new approach to protocol verification by combining a monotonic logic with a non-monotonic belief change operator. In this context, a protocol participant is able to retract beliefs in response to new information, and to postulate the most plausible event explaining that information. We illustrate that this kind of reasoning is particularly important when protocol participants have incorrect beliefs., Conference paper, Published.
Belief modeling for maritime surveillance
Proceedings of the 12th International Conference on Information Fusion (FUSION '09) in Seattle, WA, USA, 6–9 July 2009. In maritime surveillance, the volume of information to be processed is very large and there is a great deal of uncertainty about the data. There are many vessels at sea at every point in time, and the vast majority of them pose no threat to security. Sifting through all of the benign activity to find unusual activities is a difficult problem. The problem is made even more difficult by the fact that the available data about vessel activities is both incomplete and inconsistent. In order to manage this uncertainty, automated anomaly detection software can be very useful in the early detection of threats to security. This paper introduces a high-level architecture for an anomaly detection system based on a formal model of beliefs with respect to each entity in some domain of interest. In this framework, the system has beliefs about the intentions of each vessel in the maritime domain. If the vessel behaves in an unexpected manner, these intentions are revised and a human operations centre worker is notified. This approach is flexible, scalable, and easily manages inconsistent information. Moreover, the approach has the pragmatic advantage that it uses expert information to inform decision making, but the required information is easily obtained through simple ranking exercises., Conference paper, Published.
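The following toy loop is only meant to illustrate the idea of revising a vessel's believed intentions when an observation contradicts them and alerting an operator; the intention labels and the VesselModel class are made up for the example and are not the architecture described in the paper.

```python
# Toy belief-revision loop for vessel intentions (illustrative assumptions only).
INTENTIONS = {"fishing", "transit", "smuggling"}

class VesselModel:
    def __init__(self, vessel_id):
        self.vessel_id = vessel_id
        self.believed = {"fishing", "transit"}     # benign intentions by default

    def observe(self, ruled_out):
        """Revise beliefs with an observation that rules out some intentions."""
        revised = self.believed - ruled_out
        if not revised:                            # observation contradicts beliefs:
            revised = INTENTIONS - ruled_out       # fall back to what remains possible
        if revised != self.believed:
            self.believed = revised
            self.notify_operator()

    def notify_operator(self):
        print(f"ALERT {self.vessel_id}: intentions now {sorted(self.believed)}")

v = VesselModel("MV-042")
v.observe({"transit"})     # loitering in one area: rules out transit
v.observe({"fishing"})     # no fishing gear deployed: only smuggling remains
```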
Iterated belief change
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-05) in Edinburgh, Scotland, 2005. We use a transition system approach to reason about the evolution of an agent’s beliefs as actions are executed. Some actions cause an agent to perform belief revision and some actions cause an agent to perform belief update, but the interaction between revision and update can be nonelementary. We present a set of basic postulates describing the interaction of revision and update, and we introduce a new belief evolution operator that gives a plausible interpretation to alternating sequences of revisions and updates., Conference paper, Published.
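As a rough illustration of the two operations being combined, the sketch below implements revision as Hamming-distance (Dalal) minimisation and update as pointwise application of a deterministic action over a two-variable state space; both choices, and the toggle_p action, are assumptions for the example rather than the paper's belief evolution operator.

```python
# Sketch of revision vs. update over a finite propositional state space.
from itertools import product

STATES = list(product((0, 1), repeat=2))           # states over {p, q}

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def revise(belief, observation):
    """Revision: keep the observation-states closest to the prior belief set."""
    best = min(min(hamming(s, b) for b in belief) for s in observation)
    return {s for s in observation if min(hamming(s, b) for b in belief) == best}

def update(belief, action):
    """Update: apply the action to each believed state individually."""
    return {action(s) for s in belief}

toggle_p = lambda s: (1 - s[0], s[1])              # example action: flip p

belief = {(0, 0), (0, 1)}                          # initially believe not-p
belief = update(belief, toggle_p)                  # act: p is now believed true
belief = revise(belief, {s for s in STATES if s[1] == 1})   # then observe q
print(belief)                                      # {(1, 1)}
```

The interesting questions in the paper concern longer alternating sequences of such steps, where the order of revisions and updates matters.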
Trust as a precursor to belief revision
Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we prove a representation result that characterizes the class of trust-sensitive revision operators in terms of a set of postulates. We also show that trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information., Article, Published.
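A minimal sketch of the pre-processing idea, under assumed details: states are pairs over {p, q}, the reporting agent is trusted only on p (so its partition groups states by the value of p), a report is weakened to the union of partition cells it intersects, and Hamming-distance revision stands in for an arbitrary AGM operator.

```python
# Trust as a pre-processing step before revision (sketch with assumed details).
from itertools import product

STATES = set(product((0, 1), repeat=2))

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def revise(belief, formula):
    """Hamming-distance (Dalal) revision, standing in for a generic AGM operator."""
    best = min(min(hamming(s, b) for b in belief) for s in formula)
    return {s for s in formula if min(hamming(s, b) for b in belief) == best}

def relativize(report, partition):
    """Weaken the report to the union of partition cells it intersects."""
    return set().union(*(cell for cell in partition if cell & report))

# The reporter's partition groups states by the value of p only.
partition = [{s for s in STATES if s[0] == v} for v in (0, 1)]

belief = {(0, 0)}                                  # currently believe not-p and not-q
report = {(1, 1)}                                  # reporter says p and q
trusted = relativize(report, partition)            # weakened to just "p"
print(revise(belief, trusted))                     # {(1, 0)}: accept p, keep not-q
```

The q-part of the report is discarded because the reporter's partition cannot distinguish q-values; only the information the reporter is trusted on survives revision.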
Trust-sensitive belief revision
Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, 25–31 July 2015. Belief revision is concerned with incorporating new information into a pre-existing set of beliefs. When the new information comes from another agent, we must first determine if that agent should be trusted. In this paper, we define trust as a pre-processing step before revision. We emphasize that trust in an agent is often restricted to a particular domain of expertise. We demonstrate that this form of trust can be captured by associating a state partition with each agent, then relativizing all reports to this partition before revising. We position the resulting family of trust-sensitive revision operators within the class of selective revision operators of Fermé and Hansson, and we examine its properties. In particular, we show how trust-sensitive revision is manipulable, in the sense that agents can sometimes have incentive to pass on misleading information. When multiple reporting agents are involved, we use a distance function over states to represent differing degrees of trust; this ensures that the most trusted reports will be believed., Conference paper, Published.