BCIT Citations Collection | BCIT Institutional Repository


An action description language for iterated belief change
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, 2007. We are interested in the belief change that occurs due to a sequence of ontic actions and epistemic actions. In order to represent such problems, we extend an existing epistemic action language to allow erroneous initial beliefs. We define a non-Markovian semantics for our action language that explicitly respects the interaction between ontic actions and epistemic actions. Further, we illustrate how to solve epistemic projection problems in our new language by translating action descriptions into extended logic programs. We conclude with some remarks about a prototype implementation of our work., Conference paper, Published.
Belief change and cryptographic protocol verification
Proceedings of the 22nd Conference on Artificial Intelligence (AAAI-07). Vancouver, BC, July 22–26, 2007. Cryptographic protocols are structured sequences of messages that are used for exchanging information in a hostile environment. Many protocols have epistemic goals: a successful run of the protocol is intended to cause a participant to hold certain beliefs. As such, epistemic logics have been employed for the verification of cryptographic protocols. Although this approach to verification is explicitly concerned with changing beliefs, formal belief change operators have not been incorporated in previous work. In this paper, we introduce a new approach to protocol verification by combining a monotonic logic with a non-monotonic belief change operator. In this context, a protocol participant is able to retract beliefs in response to new information, and a protocol participant is able to postulate the most plausible event explaining new information. We illustrate that this kind of reasoning is particularly important when protocol participants have incorrect beliefs., Conference paper, Published.
Belief change in the context of fallible actions and observations
Proceedings of the 21st Conference on Artificial Intelligence (AAAI-06). Boston, MA, July 16–20, 2006. We consider the iterated belief change that occurs following an alternating sequence of actions and observations. At each instant, an agent has some beliefs about the action that occurs as well as beliefs about the resulting state of the world. We represent such problems by a sequence of ranking functions, so an agent assigns a quantitative plausibility value to every action and every state at each point in time. The resulting formalism is able to represent fallible knowledge, erroneous perception, exogenous actions, and failed actions. We illustrate that our framework is a generalization of several existing approaches to belief change, and it appropriately captures the non-elementary interaction between belief update and belief revision., Conference paper, Published.
Belief change with uncertain action histories
We consider the iterated belief change that occurs following an alternating sequence of actions and observations. At each instant, an agent has beliefs about the actions that have occurred as well as beliefs about the resulting state of the world. We represent such problems by a sequence of ranking functions, so an agent assigns a quantitative plausibility value to every action and every state at each point in time. The resulting formalism is able to represent fallible belief, erroneous perception, exogenous actions, and failed actions. We illustrate that our framework is a generalization of several existing approaches to belief change, and it appropriately captures the non-elementary interaction between belief update and belief revision., Peer-reviewed article, Published.
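The ranking-function idea in the abstract above can be illustrated with a minimal sketch: a ranking function assigns each state a non-negative plausibility rank, the agent believes exactly the rank-0 states, and an observation conditions the ranking. The toy domain, function names, and the simplification of dropping refuted states outright are all illustrative assumptions, not the paper's actual formalism.

```python
# Illustrative sketch of belief change with ranking functions.
# Rank 0 = most plausible; the agent believes exactly the rank-0 states.

def normalize(ranks):
    """Shift ranks so the most plausible state has rank 0."""
    m = min(ranks.values())
    return {s: r - m for s, r in ranks.items()}

def revise(ranks, observation):
    """Condition a ranking on an observation (the set of states it permits).
    For simplicity, refuted states are dropped and the rest renormalized."""
    kept = {s: r for s, r in ranks.items() if s in observation}
    return normalize(kept)

def beliefs(ranks):
    """The agent believes exactly the minimally ranked (rank-0) states."""
    return {s for s, r in ranks.items() if r == 0}

# Toy domain: states are (door, light) pairs with invented ranks.
ranks = {("open", "on"): 1, ("open", "off"): 2,
         ("closed", "on"): 0, ("closed", "off"): 1}

print(beliefs(ranks))                    # {("closed", "on")}
obs = {("open", "on"), ("open", "off")}  # observe: the door is open
print(beliefs(revise(ranks, obs)))       # {("open", "on")}
```

Because ranks are quantitative, the agent can compare the plausibility of competing explanations, which is what allows the formalism to handle erroneous perception and failed actions.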
Belief manipulation and message meaning for protocol analysis
Agents often try to convince others to hold certain beliefs. In fact, many network security attacks can be framed in terms of a dishonest agent trying to get an honest agent to believe some particular, untrue claims. While the study of belief change is an established area of research in Artificial Intelligence, there has been comparatively little exploration of the way one agent can explicitly manipulate the beliefs of another. In this paper, we introduce a precise, formal notion of a belief manipulation problem. We also illustrate that the meaning of a message can be parsed into different communicative acts, as defined in discourse analysis theory. Specifically, we suggest that each message can be understood in terms of what it says about the world, what it says about the message history, and what it says about future actions. We demonstrate that this kind of dissection can be used to discover the goals of an intruder in a communication session, which is important when determining how an adversary is trying to manipulate the beliefs of an honest agent. This information will then help prevent future attacks. We frame the discussion of belief manipulation primarily in the context of cryptographic protocol analysis., Peer-reviewed article, Published. Received: 17 January 2014; Accepted: 29 September 2014; Published: 10 October 2014.
An explicit model of belief change for cryptographic protocol verification
Proceedings of the 8th International Symposium on Logical Formalizations of Commonsense Reasoning. Stanford, CA, 2007. Cryptographic protocols are structured sequences of messages that are used for exchanging information in a hostile environment. Many protocols have epistemic goals: a successful run of the protocol is intended to cause a participant to hold certain beliefs. As such, epistemic logics have been employed for the verification of cryptographic protocols. Although this approach to verification is explicitly concerned with changing beliefs, formal belief change operators have not been incorporated in previous work. In this preliminary paper, we introduce a new approach to protocol verification by combining a monotonic logic with a non-monotonic belief change operator. In this context, a protocol participant is able to retract beliefs in response to new information and a protocol participant is able to postulate the most plausible event explaining new information. Hence, protocol participants may draw conclusions from received messages in the same manner conclusions are drawn in formalizations of commonsense reasoning. We illustrate that this kind of reasoning is particularly important when protocol participants have incorrect beliefs., Conference paper, Published.
Iterated belief change due to actions and observations
In action domains where agents may have erroneous beliefs, reasoning about the effects of actions involves reasoning about belief change. In this paper, we use a transition system approach to reason about the evolution of an agent's beliefs as actions are executed. Some actions cause an agent to perform belief revision while others cause an agent to perform belief update, but the interaction between revision and update can be non-elementary. We present a set of rationality properties describing the interaction between revision and update, and we introduce a new class of belief change operators for reasoning about alternating sequences of revisions and updates. Our belief change operators can be characterized in terms of a natural shifting operation on total pre-orderings over interpretations. We compare our approach with related work on iterated belief change due to action, and we conclude with some directions for future research., Peer-reviewed article, Published.
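The revision/update distinction in the abstract above can be sketched in miniature: update progresses each possible state through an action's transition function individually, rather than globally reordering beliefs the way revision does. The transition function, states, and ranks below are invented for illustration and are not the paper's operators.

```python
# Illustrative sketch of belief update over a ranked set of states.
# Each possible state is moved through the action's transition function;
# when two states map to the same successor, the better (lower) rank wins.

def update(ranks, transition):
    """Progress every state through the action, keeping the minimum rank
    among predecessors that share a successor."""
    new = {}
    for s, r in ranks.items():
        t = transition[s]
        new[t] = min(r, new.get(t, r))
    return new

# Toy domain: the agent considers the door open (rank 0) or closed (rank 1),
# then the action "close the door" is executed.
ranks = {"door_open": 0, "door_closed": 1}
close_door = {"door_open": "door_closed", "door_closed": "door_closed"}

print(update(ranks, close_door))  # {"door_closed": 0}
```

Revision, by contrast, reorders the agent's plausibility ordering in light of new information about a static world; the abstract's point is that interleaving the two is non-elementary, so neither operator alone captures alternating sequences of actions and observations.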
Using ranking functions to determine plausible action histories
Proceedings of the Sixth Workshop on Nonmonotonic Reasoning, Action, and Change (NRAC-05), Edinburgh, Scotland, 2005. We use ranking functions to reason about belief change following an alternating sequence of actions and observations. At each instant, an agent assigns a plausibility value to every action and every state; the most plausible world histories are obtained by minimizing the sum of these values. Since plausibility is given a quantitative rank, an agent is able to compare the plausibility of actions and observations. This allows action occurrences to be postulated or refuted in response to new observations. We demonstrate that our formalism is a generalization of our previous work on the interaction of revision and update., Conference paper, Published.
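The history-selection step described in the abstract above, minimizing the sum of plausibility values over a world history, can be sketched directly. The action ranks, per-step state ranks, and two-step domain are invented for illustration; the paper's actual semantics is richer.

```python
# Illustrative sketch: find the most plausible world histories by
# minimizing the sum of action and state plausibility ranks (0 = most
# plausible), as in the abstract. All ranks here are invented.

from itertools import product

action_rank = {"flip": 0, "nothing": 1}
state_rank = [{"on": 0, "off": 1},   # ranks over states after step 1
              {"on": 1, "off": 0}]   # ranks over states after step 2

def history_rank(history):
    """Total implausibility of a history: the sum of the ranks of its
    actions and resulting states."""
    return sum(action_rank[a] + state_rank[t][s]
               for t, (a, s) in enumerate(history))

# Enumerate every (action, state) pair available at each step, then every
# length-2 history, and keep those of minimal total rank.
steps = [[(a, s) for a in action_rank for s in ranks_t]
         for ranks_t in state_rank]
histories = list(product(*steps))
best = min(map(history_rank, histories))
plausible = [h for h in histories if history_rank(h) == best]

print(plausible)  # [(("flip", "on"), ("flip", "off"))]
```

Because ranks are quantitative, an observation that refutes the current best history simply promotes the next-cheapest one, which is how action occurrences get postulated or refuted in response to new observations.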