Making argumentation more believable.
Proceedings of the Nineteenth National Conference on Artificial Intelligence and the Sixteenth Conference on Innovative Applications of Artificial Intelligence (pp. 269-274).
American Association for Artificial Intelligence.
There are a number of frameworks for modelling argumentation in logic. They incorporate a formal representation of individual arguments and techniques for comparing conflicting arguments. A problem with these proposals is that they do not consider the believability of the arguments from the perspective of the intended audience. In this paper, we start by reviewing a logic-based framework for argumentation based on argument trees which provide a way of exhaustively collating arguments and counter-arguments. We then extend this framework to a model-theoretic evaluation of the believability of arguments. This extension assumes that the beliefs of a typical member of the audience for argumentation can be represented by a set of classical formulae (a beliefbase). We compare a beliefbase with each argument to evaluate the empathy (or similarly the antipathy) that an agent has for the argument. We show how we can use empathy and antipathy to define a pre-ordering relation over argument trees that captures how one argument tree is "more believable" than another. We also use these to define criteria for deciding whether an argument at the root of an argument tree is defeated or undefeated given the other arguments in the tree.
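The abstract's idea of comparing a beliefbase with an argument can be sketched concretely. The following is a minimal illustrative sketch, not the paper's formal definitions: it assumes empathy counts how many formulae in an argument's support are entailed by the beliefbase, and antipathy counts how many are entailed to be false, with entailment checked by brute-force truth-table enumeration. All function and variable names here are our own.

```python
# Sketch only: propositional formulae are encoded as atom strings or nested
# tuples ('not', f), ('and', f, g), ('or', f, g). Entailment is decided by
# enumerating all valuations of the atoms involved (fine for tiny examples).
from itertools import product

def atoms(formula):
    """Collect the atom names occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(atoms(f) for f in formula[1:]))

def satisfies(formula, valuation):
    """Evaluate a formula under a dict mapping atom -> bool."""
    if isinstance(formula, str):
        return valuation[formula]
    op = formula[0]
    if op == 'not':
        return not satisfies(formula[1], valuation)
    if op == 'and':
        return satisfies(formula[1], valuation) and satisfies(formula[2], valuation)
    if op == 'or':
        return satisfies(formula[1], valuation) or satisfies(formula[2], valuation)
    raise ValueError(f"unknown connective: {op}")

def entails(base, formula):
    """True iff every model of the set `base` also satisfies `formula`."""
    alphabet = sorted(set().union(atoms(formula), *(atoms(f) for f in base)))
    for bits in product([True, False], repeat=len(alphabet)):
        v = dict(zip(alphabet, bits))
        if all(satisfies(f, v) for f in base) and not satisfies(formula, v):
            return False
    return True

def empathy(beliefbase, support):
    """Count the support formulae the beliefbase entails (assumed measure)."""
    return sum(1 for f in support if entails(beliefbase, f))

def antipathy(beliefbase, support):
    """Count the support formulae whose negation the beliefbase entails."""
    return sum(1 for f in support if entails(beliefbase, ('not', f)))
```

For example, with beliefbase `{p, not q}` and an argument whose support is `[p, q, r]`, this sketch gives empathy 1 (only `p` is entailed) and antipathy 1 (only `not q` is entailed); such scores could then be aggregated over the arguments in a tree to compare trees, in the spirit of the pre-ordering described above.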
Title: Making argumentation more believable
Event: 19th National Conference on Artificial Intelligence / 16th Conference on Innovative Applications of Artificial Intelligence
Location: San Jose, CA
Dates: 2004-07-25 to 2004-07-29
UCL classification:
  UCL > School of BEAMS > Faculty of Engineering Science
  UCL > School of BEAMS > Faculty of Engineering Science > Computer Science