ics 806-Week 2


Transcript of ics 806-Week 2

  • 8/14/2019 ics 806-Week 2

    1/33

September 2009 UONBI, School of Computing and Informatics

    MSC COMPUTER SCIENCE

    AGENT ARCHITECTURES AND AGENT MODELLING

Motivation: How are agents internally modeled and constructed?

Session Topics
1. Defining Agent Architecture
2. Abstract Agent Architecture
3. Perception
4. Agents with States
5. Agent Control Loop
6. Utility Functions over States
7. Reasoning Agents
8. Problems with Symbolic Agents
9. Practical Reasoning Agents
10. Implementation of Practical Reasoning Agents
11. Reactive and Hybrid Agents
12. Agent Modeling

2/33

    AGENT ARCHITECTURES

Defining agent architectures

Maes defines an agent architecture as:

A particular methodology for building agents. It specifies how ... the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. An architecture encompasses techniques and algorithms that support this methodology.

Kaelbling considers an agent architecture to be:

A specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks.

3/33

    ICS 806 - MULTI-AGENT SYSTEMS

Agent architectures

There are three types of agent architecture:

1. Symbolic/logical
2. Reactive
3. Hybrid

Agents that are built should have features such as autonomy, reactiveness, pro-activeness, social ability, etc., that were mentioned earlier.

4/33


Abstract Architecture for Agents

Let the environment be a set E of discrete, instantaneous states:

E = {e, e', ...}

e.g. {[TV-on, Radio-on], [TV-on, Radio-off], [TV-off, Radio-on], [TV-off, Radio-off]}

Let Ac be the set of actions of an agent that transform the environment state:

Ac = {a, a', ...}

e.g. {switch TV on, switch TV off, switch Radio on, switch Radio off}

A run, r, of an agent in an environment is a sequence of interleaved environment states and actions:

r: e0 --a0--> e1 --a1--> e2 --a2--> e3 --a3--> ... --a(u-1)--> eu

Let:
R be the set of all such possible finite sequences (over E and Ac);
R^Ac be the subset of these that end with an action; and
R^E be the subset of these that end with an environment state.
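The definitions above can be sketched in code. This is a minimal illustration using the TV/Radio example, not part of the original slides: states are frozensets of propositions, actions are strings, and a run is an alternating list e0, a0, e1, a1, ..., eu.

```python
# States E of the TV/Radio environment, each a frozenset of propositions.
E = [frozenset(s) for s in (
    {"TV-on", "Radio-on"}, {"TV-on", "Radio-off"},
    {"TV-off", "Radio-on"}, {"TV-off", "Radio-off"},
)]

# The actions Ac available to the agent.
Ac = ["switch TV on", "switch TV off", "switch Radio on", "switch Radio off"]

def ends_with_state(run):
    """True if the interleaved run belongs to R^E (ends with a state);
    a run of odd length e0, a0, e1, ..., eu ends with a state."""
    return len(run) % 2 == 1

# A run r: e0 --a0--> e1 --a1--> e2, starting with everything off.
run = [E[3], "switch TV on", E[1], "switch Radio on", E[0]]
```

Dropping the final state yields a member of R^Ac instead, which is exactly the kind of run the state transformer function of the next slide consumes.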

5/33


State Transformer Functions

A state transformer function represents the behaviour of the environment:

τ : R^Ac → ℘(E)

where ℘(E) is the set of possible environment states (outcomes). Environments can be history dependent, or non-deterministic.

If τ(r) = ∅, then there are no possible successor states to r. In this case, we say that the system has ended (terminated) its run.

An environment Env is a triple:

Env = ⟨E, e0, τ⟩

where E is a set of environment states, e0 ∈ E is the initial state, and τ is a state transformer function.

6/33

Agents

An agent is a function which maps runs (ending in an environment state) to actions:

Ag : R^E → Ac

An agent makes a decision about what action to perform based on the history of the system that it has witnessed to date.

Systems

A system is a pair containing an agent and an environment.

Any system will have associated with it a set of possible runs; we denote the set of runs of agent Ag in environment Env by R(Ag, Env).

A sequence (e0, a0, e1, a1, e2, a2, ...) represents a run of an agent Ag in environment Env = ⟨E, e0, τ⟩ if:

1. e0 is the initial state of Env;
2. a0 = Ag(e0); and
3. for u > 0, eu ∈ τ((e0, a0, ..., a(u-1))), where au = Ag((e0, a0, ..., eu)).
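The three conditions above amount to an unfolding procedure: the agent chooses each action from the history so far, and the environment supplies the next state. The sketch below (illustrative names, not from the slides) makes the transformer deterministic for simplicity, so τ returns a single successor state rather than a set.

```python
def generate_run(agent, e0, tau, steps):
    """Unfold a run (e0, a0, e1, a1, ...) of `agent` in the environment
    described by `tau` (deterministic here: one successor per run)."""
    run = [e0]
    for _ in range(steps):
        a = agent(tuple(run))       # au = Ag(r): decision from the history
        e = tau(tuple(run) + (a,))  # successor state for the extended run
        run += [a, e]
    return run

# Toy instance: the agent always toggles a switch; the environment flips
# the state "off" <-> "on" in response.
agent = lambda r: "toggle"
tau = lambda r: "on" if r[-2] == "off" else "off"
```

For example, `generate_run(agent, "off", tau, 2)` produces the five-element run `["off", "toggle", "on", "toggle", "off"]`.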

7/33

Purely Reactive Agents

Purely reactive agents make no reference to their history: they base their decision making entirely on the present, without any reference to the past:

action : E → Ac

A thermostat is a purely reactive agent:

action(e) = off, if e = temperature OK
            on, otherwise.
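The thermostat above translates directly into code; this one-liner is an illustration of the definition, with the state encoded as a plain string.

```python
def action(e):
    """Purely reactive thermostat: the decision depends only on the
    current environment state e, never on the history of states."""
    return "off" if e == "temperature OK" else "on"
```

Because the function's domain is E rather than the set of runs R^E, no memory of past states can influence the choice, which is precisely what "purely reactive" means.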

8/33

Perception

Perception enables sensing in the system and is realized using the see function.

The see function captures the agent's ability to observe its environment; the action function represents the agent's decision-making process.

The output of the see function is a percept:

see : E → Per

which maps environment states to percepts. action is now a function

action : Per* → Ac

which maps sequences of percepts to actions.

(Diagram: the see and action subsystems within the agent, connected to the environment.)
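A small sketch of the see/action split (all names illustrative): the environment state carries more detail than the agent can perceive, see projects it down to a percept, and action decides over the sequence of percepts gathered so far (here it happens to inspect only the latest one).

```python
def see(e):
    """Map an environment state to a percept; this agent can only
    perceive whether the temperature is acceptable (an assumption)."""
    return "ok" if e["temp"] >= 20 else "cold"

def action(percepts):
    """Decision over the sequence of percepts seen so far (Per*)."""
    return "heater off" if percepts[-1] == "ok" else "heater on"

# Feed two successive environment states through perception.
percepts = []
for e in ({"temp": 18}, {"temp": 21}):
    percepts.append(see(e))
```

After the loop, `percepts` is `["cold", "ok"]` and `action(percepts)` selects `"heater off"`.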

9/33

Agents with State

These agents have some internal data structure, which is typically used to record information about the environment state and history.

Let I be the set of all internal states of the agent.

The perception function see for a state-based agent is unchanged:

see : E → Per

The action-selection function action is now defined as a mapping

action : I → Ac

from internal states to actions. An additional function next is introduced, which maps an internal state and percept to an internal state:

next : I × Per → I

10/33

Agent control loop

1. The agent starts in some initial internal state i0.
2. It observes its environment state e, and generates a percept see(e).
3. The internal state of the agent is then updated via the next function, becoming next(i0, see(e)).
4. The action selected by the agent is action(next(i0, see(e))). This action is then performed.
5. Go to (2).
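The loop above can be sketched as a function; this version runs steps 2-5 over a finite stream of environment states rather than forever, and uses `next_fn` as the name for next (which is a Python builtin). The toy instance is an assumption for illustration.

```python
def run_agent(i0, see, next_fn, action, states):
    """One pass of the agent control loop (steps 2-5, repeated) over a
    finite sequence of environment states."""
    i = i0
    actions = []
    for e in states:
        i = next_fn(i, see(e))     # step 3: update internal state
        actions.append(action(i))  # step 4: select and perform action
    return actions

# Toy instance: the internal state counts "dirty" percepts seen so far,
# and the agent starts cleaning once it has seen any dirt.
acts = run_agent(
    i0=0,
    see=lambda e: e,  # percept = raw state, for simplicity
    next_fn=lambda i, p: i + (p == "dirty"),
    action=lambda i: "clean" if i > 0 else "wait",
    states=["clean", "dirty", "clean"],
)
```

Note how the third action is still "clean" even though the current state is clean: the internal state carries history, which a purely reactive agent could not do.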

Tasks for Agents

We build agents in order to carry out tasks for us. The task must be specified by us...

But we want to tell agents what to do without telling them how to do it.

11/33

Utility Functions over States

One possibility: associate utilities with individual states; the task of the agent is then to bring about states that maximize utility.

A utility is a function u : E → ℝ which associates a real number with every environment state.

Utility in the Tileworld

The Tileworld is a simulated two-dimensional grid environment on which there are agents, tiles, obstacles, and holes. An agent can move in four directions (up, down, left, or right), and if it is located next to a tile, it can push it. Holes have to be filled up with tiles by the agent. An agent scores points by filling holes with tiles, with the aim being to fill as many holes as possible.

The Tileworld changes with the random appearance and disappearance of holes. A utility function over runs can be defined as follows:

           number of holes filled in r
u(r) = ------------------------------------
        number of holes that appeared in r
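The Tileworld utility is a simple ratio, which the sketch below encodes directly; guarding against the zero-denominator case (no holes appeared during the run) is my addition, since the slide leaves it unspecified.

```python
def u(holes_filled, holes_appeared):
    """Utility of a run r: the fraction of holes that appeared during r
    which the agent managed to fill (0.0 when no holes appeared)."""
    return holes_filled / holes_appeared if holes_appeared else 0.0
```

So an agent that fills 3 of the 4 holes that appeared in a run scores 0.75, and a perfect run scores 1.0 regardless of how many holes appeared.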

12/33

AGENT ARCHITECTURES (continued)

REASONING AGENTS (deductive)

1956-1985: agents designed within AI were symbolic reasoning agents, which use explicit logical reasoning in order to decide what to do.

Issues: such agents proved difficult to build, leading to the emergence of the reactive agents movement, from 1985 to the present.

1990-present: a number of alternatives were proposed, especially hybrid architectures, which attempt to combine the best of reasoning and reactive architectures.

Symbolic Reasoning Agents

These build agents based on the knowledge-based system approach.

Deliberative agent architecture

One that contains an explicitly represented, symbolic model of the world, and makes decisions (for example, about what actions to perform) via symbolic reasoning.

13/33

Issues with building deliberative agents

The transduction problem: translating the real world into an accurate, adequate symbolic description, in time for that description to be useful (... vision, speech understanding, learning).

The representation/reasoning problem: how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful (... knowledge representation, automated reasoning, automatic planning).

None of the problems above is anywhere near solved. Symbol manipulation algorithms in general are complex: many search-based symbol manipulation algorithms of interest are highly intractable. Alternative techniques emerged (seen later).

14/33

Deductive Reasoning in Agents

Theorem proving is used to model an agent's decision making: logic encodes a theory stating the best action to perform in any given situation.

Let:
Δ be a theory (typically a set of rules);
ρ be a logical database that describes the current state of the world;
Ac be the set of actions the agent can perform;
ρ |-Δ φ mean that φ can be proved from ρ using Δ.

/* try to find an action explicitly prescribed */
for each a ∈ Ac do
    if ρ |-Δ Do(a) then
        return a
    end-if
end-for
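The action-selection loop above can be sketched as follows; the `prove` helper here is a crude backward-chaining stand-in for a real theorem prover, and the vacuum-world-flavoured rules and database are illustrative assumptions, not from the slides. Atoms are encoded as tuples, so `("Do", "clean")` stands for Do(clean).

```python
def deductive_action(rules, db, Ac):
    """Return the first a in Ac such that Do(a) can be proved from the
    database db using the rules (each rule is (head, [body atoms]))."""
    def prove(goal):
        if goal in db:
            return True
        return any(all(prove(p) for p in body)
                   for head, body in rules if head == goal)
    for a in Ac:                     # try to find a prescribed action
        if prove(("Do", a)):
            return a
    return None                      # no action explicitly prescribed

# Toy theory: if Dirt is believed, cleaning is prescribed; if the way is
# Clear, moving forward is prescribed.
rules = [(("Do", "clean"), [("Dirt",)]),
         (("Do", "forward"), [("Clear",)])]
db = {("Dirt",)}
```

With this database the agent returns "clean"; with an empty database nothing is prescribed and the loop falls through, which is exactly the failure mode the next slide's problems (undecidability, intractability) generalize.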

15/33

General problems with symbolic architectures

How do we convert real-world inputs to fit the ontology? E.g. video camera input to Dirt?

Decision making assumes a static environment: calculative rationality.

Decision making using first-order logic is undecidable! Even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems.

Typical solutions to the problems above:
1. weaken the logic;
2. use symbolic, non-logical representations;
3. shift the emphasis of reasoning from run time to design time.

16/33

PRACTICAL REASONING AGENTS

Practical reasoning is the process of figuring out what to do, i.e. which action to take: conflicting considerations are weighed for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about/believes (Bratman).

Components of practical reasoning:
deliberation: deciding what state of affairs we want to achieve;
means-ends reasoning: deciding how to achieve these states of affairs.

The outputs of deliberation are intentions.

17/33

Intentions in Practical Reasoning

Intentions pose problems for agents, who need to determine ways of achieving them.

1. If I have an intention to achieve φ, you would expect me to devote resources to deciding how to bring about φ.
2. Intentions provide a filter for adopting other intentions, which must not conflict. If I have an intention to achieve φ, you would not expect me to adopt an intention ψ such that φ and ψ are mutually exclusive.
3. Agents track the success of their intentions, and are inclined to try again if their attempts fail. If an agent's first attempt to achieve φ fails, then all other things being equal, it will try an alternative plan to achieve φ.
4. Agents believe their intentions are possible. That is, they believe there is at least some way that the intentions could be brought about.
5. Agents do not believe they will not bring about their intentions. It would not be rational of me to adopt an intention to achieve φ if I believed φ was not possible.
6. Under normal circumstances, agents believe they will bring about their intentions. It would not normally be rational of me to believe that I would achieve my intentions all the time; intentions can fail. Moreover, it does not make sense that if one believes φ is inevitable one would adopt it as an intention.
7. Agents need not intend all the expected side effects of their intentions. If I believe φ ⊃ ψ and I intend φ, I do not necessarily intend ψ also. (Intentions are not closed under implication.) E.g. going to the dentist does not mean one wants tooth pain.

18/33

PLANNING AGENTS (part of practical reasoning agents)

Building largely on the early work of Fikes & Nilsson, many planning algorithms have been proposed, and the theory of planning has been well-developed.

What is Means-Ends Reasoning?

The basic idea is to give an agent:
a representation of the goal/intention to achieve;
a representation of the actions it can perform; and
a representation of the environment;
and have it generate a plan to achieve the goal.

Question: How do we represent:
1. a goal to be achieved;
2. the state of the environment;
3. the actions available to the agent;
4. the plan itself?

19/33

Example

The blocks world contains a robot arm, blocks (A, B, and C) of equal size, and a table-top. To represent this environment, we need an ontology:

On(x, y)     obj x on top of obj y
OnTable(x)   obj x is on the table
Clear(x)     nothing is on top of obj x
Holding(x)   arm is holding x

Here is a representation of the blocks world described above:
Clear(A)
On(A, B)
OnTable(B)
OnTable(C)

We use the closed world assumption: anything not stated is assumed to be false.

A goal is represented as a set of formulae. Here is a goal:
{OnTable(A), OnTable(B), OnTable(C)}

Actions are represented using a technique that was developed in the STRIPS planner. Each action has:
1. a name, which may have arguments;
2. a pre-condition list: a list of facts which must be true for the action to be executed;
3. a delete list: a list of facts that are no longer true after the action is performed;
4. an add list: a list of facts made true by executing the action.
Each of these may contain variables.
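The state and goal representations above can be sketched as sets of ground atoms (tuples), with the closed world assumption built in: an atom is false exactly when it is absent from the set. The state below is the one listed on the slide.

```python
# Blocks-world state under the closed world assumption: atoms are
# (predicate, *args) tuples, and anything not in the set is false.
state = {("Clear", "A"), ("On", "A", "B"),
         ("OnTable", "B"), ("OnTable", "C")}

goal = {("OnTable", "A"), ("OnTable", "B"), ("OnTable", "C")}

def satisfies(state, goal):
    """A goal (a set of formulae) holds when every atom is in the state."""
    return goal <= state
```

As expected, the slide's initial state does not satisfy the goal (A is still on B), while adding OnTable(A) would.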

20/33

Example 1: The stack action occurs when the robot arm places the object x it is holding on top of object y.

Stack(x, y)
pre  Clear(y) ∧ Holding(x)
del  Clear(y), Holding(x)
add  ArmEmpty, On(x, y)

Example 2: The unstack action occurs when the robot arm picks an object x up from on top of another object y.

UnStack(x, y)
pre  On(x, y) ∧ Clear(x) ∧ ArmEmpty
del  On(x, y), ArmEmpty
add  Holding(x), Clear(y)

Stack and UnStack are inverses of one another.

Example 3: The pickup action occurs when the arm picks up an object x from the table.

Pickup(x)
pre  Clear(x) ∧ OnTable(x) ∧ ArmEmpty
del  OnTable(x), ArmEmpty
add  Holding(x)

What is a plan? A sequence (list) of actions, with variables replaced by constants.
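A STRIPS action and its execution semantics can be sketched directly: check the pre-condition list, remove the delete list, union in the add list. The ground instance below is UnStack(A, B); note that ArmEmpty is added to the initial state here so the pre-condition holds (the slide's state listing omits it).

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    pre: frozenset     # facts that must hold before execution
    delete: frozenset  # facts made false by execution
    add: frozenset     # facts made true by execution

def apply(state, act):
    """Execute a STRIPS action: verify the pre-condition list, then
    remove the delete list and union in the add list."""
    assert act.pre <= state, f"{act.name}: pre-condition not satisfied"
    return (state - act.delete) | act.add

# Ground instance UnStack(A, B).
unstack_AB = Action(
    "UnStack(A,B)",
    pre=frozenset({("On", "A", "B"), ("Clear", "A"), ("ArmEmpty",)}),
    delete=frozenset({("On", "A", "B"), ("ArmEmpty",)}),
    add=frozenset({("Holding", "A"), ("Clear", "B")}),
)

s0 = frozenset({("Clear", "A"), ("On", "A", "B"), ("OnTable", "B"),
                ("OnTable", "C"), ("ArmEmpty",)})
s1 = apply(s0, unstack_AB)
```

After the action, the arm holds A and B is clear; a plan is then just a list of such ground actions applied in order.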


21/33

IMPLEMENTING PRACTICAL REASONING AGENTS

Agent Control Loop Version 1

1. while true
2.     observe the world;
3.     update internal world model;
4.     deliberate about what intention to achieve next;
5.     use means-ends reasoning to get a plan for the intention;
6.     execute the plan
7. end while

Agent Control Loop Version 2

1. B := B0; /* initial beliefs */
2. while true do
3.     get next percept p;
4.     B := beliefRevisionF(B, p);
5.     I := deliberate(B);
6.     P := plan(B, I);
7.     execute(P);
8. end while
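One iteration of the Version 2 loop body can be sketched as a function (the toy belief-revision, deliberation, and planning functions below are illustrative assumptions; a real agent would plug in its own).

```python
def bdi_step(B, p, belief_revision, deliberate, plan):
    """One pass through the body of Control Loop Version 2: revise
    beliefs with the new percept, deliberate to fix an intention,
    then plan for that intention."""
    B = belief_revision(B, p)   # B := beliefRevisionF(B, p)
    I = deliberate(B)           # I := deliberate(B)
    P = plan(B, I)              # P := plan(B, I)
    return B, I, P

# Toy instance: perceiving hunger makes eating the intention.
B, I, P = bdi_step(
    B=set(), p="hungry",
    belief_revision=lambda B, p: B | {p},
    deliberate=lambda B: "eat" if "hungry" in B else "rest",
    plan=lambda B, I: ["find food", "eat"] if I == "eat" else [],
)
```

Wrapping `bdi_step` in `while True: execute(P)` recovers the full loop; Version 6 below refines it by filtering intentions and interleaving execution with replanning.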


22/33

Agent Control Loop Version 6 (filter, intentions)

B := B0; I := I0;
while true do
    get next percept p;
    B := beliefRevisionF(B, p);
    D := options(B, I);
    I := filter(B, D, I);
    P := plan(B, I);
    while not (empty(P) or succeeded(I, B) or impossible(I, B)) do
        a := head(P);
        execute(a);
        P := tail(P);
        get next percept p;
        B := beliefRevisionF(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        if not sound(P, I, B) then
            P := plan(B, I);
        end-if
    end-while
end-while

23/33

BELIEF-DESIRE-INTENTION (BDI) THEORY & PRACTICE

Rao & Georgeff have developed BDI logics: non-classical logics with modal connectives for representing beliefs, desires, and intentions.

BDI Logic

From classical logic: ∧, ∨, ¬, ...

Path quantifiers:
Aφ  'on all paths, φ'
Eφ  'on some paths, φ'

The BDI connectives:
(Bel i φ)  i believes φ
(Des i φ)  i desires φ
(Int i φ)  i intends φ

24/33

    REACTIVE AND HYBRID ARCHITECTURES

25/33

REACTIVE AND HYBRID ARCHITECTURES

Reactive Architectures

The many unsolved problems associated with symbolic AI led to the development of reactive architectures.

Brooks (criticism of mainstream AI): behaviour languages. Brooks has put forward three theses:
1. Intelligent behaviour can be generated without explicit representations of the kind that symbolic AI proposes.
2. Intelligent behaviour can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.
3. Intelligence is an emergent property of certain complex systems.

He identifies two key ideas that have informed his research:

Situatedness and embodiment: real intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems.

Intelligence and emergence: intelligent behaviour arises as a result of an agent's interaction with its environment. Also, intelligence is 'in the eye of the beholder'; it is not an innate, isolated property.

Brooks built some of his ideas into his subsumption architecture: a hierarchy of task-accomplishing behaviours.


26/33

Situated Automata

The approach of Rosenschein and Kaelbling.

In their situated automata paradigm, an agent is specified in a rule-like (declarative) language, and this specification is then compiled down to a digital machine which satisfies the declarative specification. This digital machine can operate in a provable time bound: reasoning is done offline, at compile time, rather than online at run time.

The theoretical limitations of the approach are not well understood. Compilation (with propositional specifications) is equivalent to an NP-complete problem, and the more expressive the agent specification language, the harder it is to compile. (There are some deep theoretical results which say that beyond a certain expressiveness, the compilation simply can't be done.)


27/33

HYBRID ARCHITECTURES

Many researchers have argued that neither a completely deliberative nor a completely reactive approach is suitable for building agents. They propose hybrid systems, which attempt to marry classical and alternative approaches.

Build an agent out of two subsystems:
a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI; and
a reactive one, which is capable of reacting to events without complex reasoning.
Often, the reactive component is given some kind of precedence over the deliberative one.

This kind of structuring leads naturally to the idea of a layered architecture: an agent's control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction.

Horizontal layering: each layer is directly connected to the sensory input and action output.
Vertical layering: sensory input and action output are each dealt with by at most one layer.
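A minimal sketch of horizontal layering, assuming a simple mediation scheme: every layer sees the raw percept and may propose an action, and a fixed priority order (reactive layer first, matching the precedence remark above) picks one proposal. The layer functions are illustrative.

```python
def horizontally_layered(layers):
    """Build an agent from horizontally arranged layers: each layer maps
    a percept to a proposed action or None; the first proposal in
    priority order wins (a very simple mediator)."""
    def agent(percept):
        for layer in layers:  # earlier layers take precedence
            a = layer(percept)
            if a is not None:
                return a
        return "noop"
    return agent

# Reactive layer: handle obstacles immediately; deliberative layer:
# otherwise carry on with the current plan.
reactive = lambda p: "avoid" if p.get("obstacle") else None
deliberative = lambda p: "follow plan"

agent = horizontally_layered([reactive, deliberative])
```

A vertically layered design would instead pass the percept through one layer at a time (each transforming it for the next), so that input and output each touch only one layer.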


28/33

    HORIZONTAL LAYERING


29/33

    VERTICAL LAYERING


30/33

AGENT MODELING

Modeling is a means to capture ideas, relationships, decisions, and requirements in a well-defined notation that can be applied to many different domains. 'Modeling not only means different things to different people, but also it can use different aspects of a tool such as UML depending on what you are trying to convey' (Pilone, Dan and Neil Pitman (2005). UML 2.0 in a Nutshell. O'Reilly Media, Inc.).

Modeling languages are used to specify, visualize, construct, and document systems.

Modeling is part of the process of constructing multi-agent systems: the conceptual structures are formulated and decisions related to the overall framework are considered. The modeling process, however, depends on a number of things, including the architecture, development framework, and application area.


31/33

ITEMS TO MODEL

AGENTS
- BASIC CHARACTERISTICS: AUTONOMY, MOBILITY, INTELLIGENCE, PERSONALITY, VERACITY, BENEVOLENCE, SOCIAL ABILITY, REACTIVENESS, PROACTIVITY
- BEHAVIOUR

INTERACTIONS
- COMMUNICATION
- COOPERATION, NEGOTIATION, AGREEMENTS

DECISION MAKING
- INDIVIDUAL, GROUP

AGENT SOCIAL SYSTEMS / MULTI-AGENT SYSTEMS

32/33

MODELLING RESOURCES AND TOOLS

- NONE DEDICATED TO MULTI-AGENT SYSTEMS
- VARIATIONS OF:
  OBJECT-ORIENTED DESIGN RESOURCES + UML
  MATHEMATICAL MODELLING
  SYMBOLIC AI MODELLING (LOGIC)
  GAME THEORY AND ECONOMICS
  EMERGING SUGGESTIONS SUCH AS MULTI-AGENT SYSTEMS NETWORK INFLUENCE DIAGRAMS
  VARIATIONS OF BAYESIAN REPRESENTATIONS
  PSYCHOLOGY (COGNITION)
  SOCIOLOGY (SOCIAL PROCESSES)

33/33

WEEK 2 EXERCISE

1. Select a task that can be done using agents. Select an agent that can do the task or some aspect of the task. Give a full description of how the agent should be structured and how it should operate.
2. Model your agent.
3. Implement your agent.