
Page 1: Università di Milano-Bicocca Laurea Magistrale in Informatica

Università di Milano-Bicocca, Laurea Magistrale in Informatica

Course:

APPRENDIMENTO E APPROSSIMAZIONE (Learning and Approximation)

Lecture 6 - Reinforcement Learning

Prof. Giancarlo Mauri

Page 2: Università di Milano-Bicocca Laurea Magistrale in Informatica

Agenda

Reinforcement Learning Scenario
Dynamic Programming
Monte-Carlo Methods
Temporal Difference Learning

Page 3: Università di Milano-Bicocca Laurea Magistrale in Informatica

Reinforcement Learning Scenario

[Figure: agent-environment interaction loop. At each step the agent observes the state s_t and reward r_t and chooses an action a_t; the environment returns the next state s_{t+1} and reward r_{t+1}, yielding a trajectory s_0, a_0, r_1, s_1, a_1, r_2, s_2, a_2, r_3, s_3, ...]

Goal: learn to choose actions a_t that maximize the future rewards

r_1 + γ r_2 + γ² r_3 + ...

where 0 < γ < 1 is a discount factor.
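As a tiny illustration of this objective (not from the slides), the discounted sum of a reward sequence can be computed directly:

```python
# Discounted return r_1 + gamma*r_2 + gamma^2*r_3 + ... for a finite episode.
def discounted_return(rewards, gamma=0.9):
    return sum(gamma ** i * r for i, r in enumerate(rewards))

print(discounted_return([0, 0, 100], gamma=0.9))   # 0 + 0 + 0.81*100 = 81.0
```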

Page 4: Università di Milano-Bicocca Laurea Magistrale in Informatica

Reinforcement Learning Scenario

Some application domains (learning to choose actions):
A robot learning to dock to a battery station
Learning to choose actions that optimize a factory's output
Learning to play Backgammon

Characteristics:
Delayed reward: no direct feedback (error signal) for good and bad actions
Opportunity for active exploration
Possibility that the state is only partially observable

Page 5: Università di Milano-Bicocca Laurea Magistrale in Informatica

Markov Decision Process (MDP)

Finite set of states S, finite set of actions A.
At each time step the agent observes the state s_t ∈ S and chooses an action a_t ∈ A(s_t).
It then receives the immediate reward r_{t+1}, and the state changes to s_{t+1}.

Markov assumption: s_{t+1} = δ(s_t, a_t) and r_{t+1} = r(s_t, a_t), i.e. the next state and reward depend only on the current state s_t and action a_t.

The functions δ(s_t, a_t) and r(s_t, a_t) may be non-deterministic.
The functions δ(s_t, a_t) and r(s_t, a_t) are not necessarily known to the agent.

Page 6: Università di Milano-Bicocca Laurea Magistrale in Informatica

Learning Task

Execute actions in the environment, observe the results, and learn a policy

π : S → A

from states s_t ∈ S to actions a_t ∈ A that maximizes the expected reward

E[r_t + γ r_{t+1} + γ² r_{t+2} + ...]

from any starting state s_t.

NOTE: 0 < γ < 1 is the discount factor for future rewards.
The target function is π : S → A, but there are no direct training examples of the form <s, a>; training examples are of the form <<s, a>, r>.

Page 7: Università di Milano-Bicocca Laurea Magistrale in Informatica

State Value Function

Consider deterministic environments, namely δ(s,a) and r(s,a) are deterministic functions of s and a.

For each policy π : S → A the agent might adopt, we define an evaluation function:

V^π(s) = r_t + γ r_{t+1} + γ² r_{t+2} + ... = Σ_{i=0}^∞ γ^i r_{t+i}

where r_t, r_{t+1}, ... are generated by following the policy π from the start state s.

Task: learn the optimal policy π* that maximizes V^π(s):

π* = argmax_π V^π(s), ∀s

Page 8: Università di Milano-Bicocca Laurea Magistrale in Informatica

Action Value Function

The state value function denotes the reward for starting in state s and following policy π:

V^π(s) = r_t + γ r_{t+1} + γ² r_{t+2} + ... = Σ_{i=0}^∞ γ^i r_{t+i}

The action value function denotes the reward for starting in state s, taking action a, and following policy π afterwards:

Q^π(s,a) = r(s,a) + γ r_{t+1} + γ² r_{t+2} + ... = r(s,a) + γ V^π(δ(s,a))

Page 9: Università di Milano-Bicocca Laurea Magistrale in Informatica

Bellman Equation (Deterministic Case)

V^π(s) = r_t + γ r_{t+1} + γ² r_{t+2} + ...
       = Σ_a π(s,a) (r(s,a) + γ V^π(δ(s,a)))

[Figure: backup diagram for V^π(s): from state s, action a1 leads to s' = δ(s,a1) with reward r = r(s,a1) and value V^π(s'), while action a2 leads to s'' = δ(s,a2) with reward r = r(s,a2) and value V^π(s'').]

This is a set of |S| linear equations: solve it directly or by iterative policy evaluation.

Page 10: Università di Milano-Bicocca Laurea Magistrale in Informatica

Example

G: terminal state, upon entering G agent obtains a reward of +100, remains in G forever and obtains no further rewards

Compute V(s) for equi-probable policy (s,a) = 1/|a|:

V(s3) = ½ V(s2) + ½ (100 + V(s6))

G

+100

+100s3s2

s6

Page 11: Università di Milano-Bicocca Laurea Magistrale in Informatica

Iterative Policy Evaluation

Instead of solving the Bellman equation directly, one can use iterative policy evaluation, which applies the Bellman equation as an update rule:

V_{k+1}(s) = Σ_a π(s,a) (r(s,a) + γ V_k(δ(s,a)))

The sequence V_k is guaranteed to converge to V^π.

For the grid world of the example, with γ = 0.9 and the equi-probable policy (values listed as [top row; bottom row], G being the terminal top-right cell with value 0):

k = 0:  [0, 0, 0; 0, 0, 0]
k = 1:  [0, 33, 0; 0, 0, 50]
k = 2:  [15, 33, 0; 0, 25, 50]
k = 3:  [15, 45, 0; 18, 25, 61]
k = 4:  [29, 45, 0; 18, 38, 61]
...
k = 50: [52, 66, 0; 49, 57, 76]
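The update rule is straightforward to run in code. Below is a minimal Python sketch of iterative policy evaluation for the equi-probable policy on a deterministic grid world like the one in the example; the state names and the transition and reward tables are a reconstruction made for illustration, not given on the slide.

```python
# Sketch of iterative policy evaluation (deterministic case):
# V_{k+1}(s) = sum_a pi(s,a) * (r(s,a) + gamma * V_k(delta(s,a))).
# The 2x3 grid (states s1..s6, terminal G, +100 for entering G) is an
# assumed reconstruction of the example, used only for illustration.
gamma = 0.9
delta = {("s1", "right"): "s2", ("s1", "down"): "s4",
         ("s2", "left"): "s1", ("s2", "right"): "G", ("s2", "down"): "s5",
         ("s4", "up"): "s1", ("s4", "right"): "s5",
         ("s5", "left"): "s4", ("s5", "up"): "s2", ("s5", "right"): "s6",
         ("s6", "left"): "s5", ("s6", "up"): "G"}
reward = {sa: (100.0 if nxt == "G" else 0.0) for sa, nxt in delta.items()}

actions = {}                                  # actions available in each state
for (s, a) in delta:
    actions.setdefault(s, []).append(a)

V = {s: 0.0 for s in list(actions) + ["G"]}   # V_0 = 0 everywhere
for k in range(50):                           # 50 synchronous sweeps
    V_new = dict(V)                           # G stays at 0 (terminal)
    for s, acts in actions.items():
        V_new[s] = sum(reward[(s, a)] + gamma * V[delta[(s, a)]]
                       for a in acts) / len(acts)
    V = V_new
print({s: round(v) for s, v in V.items()})    # roughly matches the table above
```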

Page 12: Università di Milano-Bicocca Laurea Magistrale in Informatica

Bellman Equation (Deterministic Case)

Q^π(s,a) = r(s,a) + γ Σ_{a'} π(δ(s,a), a') Q^π(δ(s,a), a')

[Figure: backup diagram for Q^π(s,a): taking action a in state s yields reward r = r(s,a) and leads to s' = δ(s,a), from which the available actions a1, a2, a3 have values Q^π(δ(s,a), a1), Q^π(δ(s,a), a2), Q^π(δ(s,a), a3).]

Page 13: Università di Milano-Bicocca Laurea Magistrale in Informatica

Iterative Policy Evaluation

Bellman equation as an update rule for the action-value function:

Q_{k+1}(s,a) = r(s,a) + γ Σ_{a'} π(δ(s,a), a') Q_k(δ(s,a), a')

[Figure: a sequence of grid-world snapshots (γ = 0.9) showing the action values Q_k(s,a) written on the arrows for successive iterations k; the two actions that enter G keep Q = 100, while the remaining values grow over the iterations (the last snapshot shown has 100, 100, 69, 60, 60, 52, 52, 52, 47, 47, 44, 44 on the twelve arrows).]

Page 14: Università di Milano-Bicocca Laurea Magistrale in Informatica

Optimal Value Functions

V*(s) = max_π V^π(s)

Q*(s,a) = max_π Q^π(s,a)

Bellman optimality equations:

V*(s) = max_a Q*(s,a) = max_a (r(s,a) + γ V*(δ(s,a)))

Q*(s,a) = r(s,a) + γ V*(δ(s,a)) = r(s,a) + γ max_{a'} Q*(δ(s,a), a')

Page 15: Università di Milano-Bicocca Laurea Magistrale in Informatica

Policy Improvement

Suppose we have determined the value function V^π for an arbitrary deterministic policy π.

For some state s we would like to know whether it is better to choose an action a ≠ π(s).

Selecting a and following the existing policy π afterwards gives us the reward Q^π(s,a).

If Q^π(s,a) > V^π(s), then a is obviously better than π(s).

Therefore choose the new policy π' as

π'(s) = argmax_a Q^π(s,a) = argmax_a [r(s,a) + γ V^π(δ(s,a))]

Page 16: Università di Milano-Bicocca Laurea Magistrale in Informatica

Example

π'(s) = argmax_a [r(s,a) + γ V^π(δ(s,a))]

[Figure: the grid world with the two r = 100 transitions into G. For the equi-probable policy π(s,a) = 1/|A(s)| the state values are V^π = [63, 71, 0; 56, 61, 78] (top row; bottom row, G in the top-right corner). The improved policy π' obtained from the rule above achieves V^π' = [90, 100, 0; 81, 90, 100].]

Page 17: Università di Milano-Bicocca Laurea Magistrale in Informatica

Example

π'(s) = argmax_a Q^π(s,a)

[Figure: the grid world annotated with the Q^π(s,a) values from the last iteration shown earlier (the two actions entering G have Q = 100); in each state the improved policy π' picks the action with the largest Q^π value.]

Page 18: Università di Milano-Bicocca Laurea Magistrale in Informatica

Generalized Policy Iteration

Intertwine policy evaluation with policy improvement:

π_0 →(E) V^{π_0} →(I) π_1 →(E) V^{π_1} →(I) π_2 →(E) ... →(I) π* →(E) V*

where E denotes an evaluation step and I an improvement step.

[Figure: the generalized policy iteration loop: evaluation pushes V toward V^π, improvement makes π greedy with respect to V (π → greedy(V)), and the two interacting processes converge to π* and V*.]

Page 19: Università di Milano-Bicocca Laurea Magistrale in Informatica

Value Iteration (Q-Learning)

Idea: do not wait for policy evaluation to converge, but improve the policy after each iteration:

V_{k+1}(s) = max_a (r(s,a) + γ V_k(δ(s,a)))

or

Q_{k+1}(s,a) = r(s,a) + γ max_{a'} Q_k(δ(s,a), a')

Stop when ∀s: |V_{k+1}(s) - V_k(s)| < ε, or ∀s,a: |Q_{k+1}(s,a) - Q_k(s,a)| < ε.
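As an illustration, here is a small Python sketch of the Q-form of value iteration for a deterministic MDP. It assumes the same dictionary representation as the earlier policy-evaluation sketch (delta[(s, a)] giving the next state, reward[(s, a)] the immediate reward); terminal states simply have no outgoing entries.

```python
# Sketch of value iteration in the deterministic case (Q-form):
# Q_{k+1}(s,a) = r(s,a) + gamma * max_a' Q_k(delta(s,a), a').
def value_iteration_q(delta, reward, gamma=0.9, eps=1e-6):
    Q = {sa: 0.0 for sa in delta}                      # Q_0 = 0
    while True:
        Q_new, diff = {}, 0.0
        for (s, a), s_next in delta.items():
            # max over the actions available in the successor state;
            # a terminal state has no entries, so its value is 0
            best_next = max((Q[(s2, a2)] for (s2, a2) in delta if s2 == s_next),
                            default=0.0)
            Q_new[(s, a)] = reward[(s, a)] + gamma * best_next
            diff = max(diff, abs(Q_new[(s, a)] - Q[(s, a)]))
        Q = Q_new
        if diff < eps:                                 # stop criterion as above
            return Q
```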

Page 20: Università di Milano-Bicocca Laurea Magistrale in Informatica

Non-Deterministic Case

The state transition function δ(s,a) is no longer deterministic but probabilistic, given by

P(s'|s,a) = Pr{s_{t+1} = s' | s_t = s, a_t = a},

the probability that, given the current state s and action a, the next state is s'.

The reward function r(s,a) is no longer deterministic but probabilistic, given by

R(s',s,a) = E{r_{t+1} | s_t = s, a_t = a, s_{t+1} = s'}.

P(s'|s,a) and R(s',s,a) completely specify the MDP.

Page 21: Università di Milano-Bicocca Laurea Magistrale in Informatica

Bellman Equation (Non-Deterministic Case)

Q^π(s,a) = Σ_{s'} P(s'|s,a) [R(s',s,a) + γ V^π(s')]

V^π(s) = Σ_a π(s,a) Σ_{s'} P(s'|s,a) [R(s',s,a) + γ V^π(s')]

Q^π(s,a) = Σ_{s'} P(s'|s,a) [R(s',s,a) + γ Σ_{a'} π(s',a') Q^π(s',a')]

Bellman optimality equations:

V*(s) = max_a Σ_{s'} P(s'|s,a) [R(s',s,a) + γ V*(s')]

Q*(s,a) = Σ_{s'} P(s'|s,a) [R(s',s,a) + γ max_{a'} Q*(s',a')]

Page 22: Università di Milano-Bicocca Laurea Magistrale in Informatica

Value Iteration (Q-Learning)

V_{k+1}(s) = max_a Σ_{s'} P(s'|s,a) [R(s',s,a) + γ V_k(s')]

or

Q_{k+1}(s,a) = Σ_{s'} P(s'|s,a) [R(s',s,a) + γ max_{a'} Q_k(s',a')]

Stop when ∀s: |V_{k+1}(s) - V_k(s)| < ε, or ∀s,a: |Q_{k+1}(s,a) - Q_k(s,a)| < ε.
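A corresponding sketch for the non-deterministic case, assuming the model is given as P[s][a] = list of (s', probability) pairs and R[(s, a, s')] = expected reward, mirroring P(s'|s,a) and R(s',s,a); these container shapes are an assumption made for the example.

```python
# Sketch of value iteration with a probabilistic transition model:
# V_{k+1}(s) = max_a sum_{s'} P(s'|s,a) * [R(s',s,a) + gamma * V_k(s')].
def value_iteration(P, R, gamma=0.9, eps=1e-6):
    # every state (including terminals, which get an empty action dict)
    # must appear as a key of P
    V = {s: 0.0 for s in P}
    while True:
        diff = 0.0
        for s in P:
            if not P[s]:                    # terminal state: value stays 0
                continue
            v_new = max(sum(prob * (R[(s, a, s2)] + gamma * V[s2])
                            for s2, prob in P[s][a])
                        for a in P[s])
            diff = max(diff, abs(v_new - V[s]))
            V[s] = v_new                    # in-place (Gauss-Seidel) sweep
        if diff < eps:
            return V
```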

Page 23: Università di Milano-Bicocca Laurea Magistrale in Informatica

Example

Now assume that the actions a are non-deterministic: with probability p the agent moves to the correct square indicated by a, and with probability (1-p) it moves to a random neighboring square.

[Figure: transition probabilities from a state s for a given action: the intended neighbor has P(s'|s,a) = p + (1-p)/3, each of the other neighbors has P(s'|s,a) = (1-p)/3 (the state shown has three neighbors), and non-adjacent squares have P(s'|s,a) = 0.]

Page 24: Università di Milano-Bicocca Laurea Magistrale in Informatica

Example

[Figure: optimal value functions for the grid world, values listed as [top row; bottom row] with G in the top-right corner.]

Deterministic optimal value function: V* = [90, 100, 0; 81, 90, 100]

Non-deterministic optimal value function with p = 0.5: V* = [77, 90, 0; 71, 80, 93]

Page 25: Università di Milano-Bicocca Laurea Magistrale in Informatica

Example

[Figure: grid world with the special cells A, A', B, B'; the arrows A→A' and B→B' carry rewards +10 and +5.]

Reward: +10 for A→A', +5 for B→B', -1 for falling off the grid, 0 otherwise.
Infinite horizon, no terminal state.

1. Compute V^π for the equi-probable policy.
2. Compute V* and π* using policy or value iteration.
3. Compute V* and π* using policy or value iteration, but assume that with probability 1-p = 0.3 the agent moves to a random neighboring state.

Page 26: Università di Milano-Bicocca Laurea Magistrale in Informatica

Reinforcement Learning

What if the transition probabilities P(s'|s,a) and the reward function R(s',s,a) are unknown?

Can we still learn V(s), Q(s,a) and identify an optimal policy π(s,a)?

The answer is yes: consider the observed rewards r_t and state transitions s_{t+1} as training samples drawn from the true underlying probability functions R(s',s,a) and P(s'|s,a).

Use approximate state value V(s) and action value Q(s,a) functions.

Page 27: Università di Milano-Bicocca Laurea Magistrale in Informatica

Monte Carlo Method

Initialize:
π: the policy to be evaluated
V(s): an arbitrary state-value function
Returns(s): an empty list, for all s ∈ S

Repeat forever:
Generate an episode using π
For each state s appearing in the episode:
R ← the return following the first occurrence of s
Append R to Returns(s)
V(s) ← average(Returns(s))
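A minimal Python sketch of this first-visit Monte-Carlo procedure. The helper generate_episode(pi) is assumed to exist and to return a list of (state, reward) pairs [(s_0, r_1), (s_1, r_2), ...] produced by following π; it is not defined on the slide.

```python
# Sketch of first-visit Monte-Carlo policy evaluation.
from collections import defaultdict

def mc_policy_evaluation(generate_episode, pi, gamma=0.9, n_episodes=1000):
    returns = defaultdict(list)                 # Returns(s): observed returns
    V = defaultdict(float)                      # arbitrary initial V(s)
    for _ in range(n_episodes):
        episode = generate_episode(pi)          # [(s_t, r_{t+1}), ...]
        G = 0.0
        first_return = {}
        # walk backwards so G is the discounted return following each time step;
        # overwriting keeps the return of the FIRST occurrence of each state
        for s, r in reversed(episode):
            G = r + gamma * G
            first_return[s] = G
        for s, G_s in first_return.items():
            returns[s].append(G_s)
            V[s] = sum(returns[s]) / len(returns[s])
    return V
```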

Page 28: Università di Milano-Bicocca Laurea Magistrale in Informatica

Monte Carlo Method

V(s_t) ← V(s_t) + α [R_t - V(s_t)]

where R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + ...

is the observed return after time t and α is a constant step-size parameter.

[Figure: grid world with the current value estimates V = [30, 60, 0; 30, 40, 70] (top row; bottom row, G in the top-right corner).]

Example update with α = 0.1 and γ = 0.9:

V(s_t) ← 30 + 0.1 [0 + 0.9·0 + 0.9²·100 - 30] = 35.1

Page 29: Università di Milano-Bicocca Laurea Magistrale in Informatica

Temporal Difference Learning

Monte-Carlo: V(s_t) ← V(s_t) + α [R_t - V(s_t)]

target for V(s): E{R_t | s_t = s}

One must wait until the end of the episode to update V.

Temporal Difference (TD): V(s_t) ← V(s_t) + α [r_{t+1} + γ V(s_{t+1}) - V(s_t)]

target for V(s): E{r_{t+1} + γ V(s_{t+1}) | s_t = s}

The TD method bootstraps: it uses the existing estimate of the next state, V(s_{t+1}), to update V(s_t).

Page 30: Università di Milano-Bicocca Laurea Magistrale in Informatica

TD(0) : Policy Evaluation

Initialize:
π: the policy to be evaluated
V(s): an arbitrary state-value function

Repeat for each episode:
Initialize s
Repeat for each step of the episode:
a ← action given by π for s
Take action a, observe reward r and next state s'
V(s) ← V(s) + α [r + γ V(s') - V(s)]
s ← s'
Until s is terminal
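A Python sketch of TD(0) evaluation. The environment interface (env.reset() returning a state, env.step(a) returning (s', r, done)) and the callable policy pi(s) are assumptions made for the example, not part of the slide.

```python
# Sketch of TD(0) policy evaluation: V(s) <- V(s) + alpha*[r + gamma*V(s') - V(s)].
from collections import defaultdict

def td0_evaluation(env, pi, alpha=0.1, gamma=0.9, n_episodes=1000):
    V = defaultdict(float)                      # arbitrary initial V(s)
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            a = pi(s)                           # action given by pi for s
            s_next, r, done = env.step(a)       # take action, observe r, s'
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V
```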

Page 31: Università di Milano-Bicocca Laurea Magistrale in Informatica

TD(0): Policy Iteration

Q(s_t,a_t) ← Q(s_t,a_t) + α [r_{t+1} + γ Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)]

[Figure: a trajectory ..., (s_t,a_t), r_{t+1}, (s_{t+1},a_{t+1}), r_{t+2}, (s_{t+2},a_{t+2}), ...; each transition provides one update of the form above.]

The update rule uses a quintuple of events (s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}), and is therefore called SARSA.

Problem: unlike in the deterministic case we cannot choose a completely greedy policy π(s) = argmax_a Q(s,a), because the transition and reward functions δ(s,a) and r(s,a) are unknown, so we cannot be sure that another action might not eventually turn out to be better.

Page 32: Università di Milano-Bicocca Laurea Magistrale in Informatica

ε-greedy policy

Soft policy: π(s,a) > 0 for all s ∈ S, a ∈ A(s), i.e. a non-zero probability of choosing every possible action.

ε-greedy policy: most of the time, with probability (1-ε), follow the greedy policy

π(s) = argmax_a Q(s,a)

but with probability ε pick a random action, so that π(s,a) ≥ ε/|A(s)|.

Let ε go to zero as t → ∞ (for example ε = 1/t) so that the ε-greedy policy converges to the optimal deterministic policy.
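A small sketch of ε-greedy action selection over a tabular Q, with ε = 1/t as suggested above; Q is assumed to be a dictionary keyed by (state, action).

```python
# Sketch of epsilon-greedy selection with a 1/t decay of epsilon.
import random

def epsilon_greedy(Q, s, actions, t):
    eps = 1.0 / max(t, 1)                       # epsilon = 1/t
    if random.random() < eps:
        return random.choice(actions)           # explore: random action
    return max(actions, key=lambda a: Q.get((s, a), 0.0))   # exploit: argmax_a Q(s,a)
```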

Page 33: Università di Milano-Bicocca Laurea Magistrale in Informatica

SARSA Policy Iteration

Initialize Q(s,a) arbitrarily.
Repeat for each episode:
Initialize s
Choose a from s using the ε-greedy policy derived from Q
Repeat for each step of the episode:
Take action a, observe reward r and next state s'
Choose a' from s' using the ε-greedy policy derived from Q
Q(s,a) ← Q(s,a) + α [r + γ Q(s',a') - Q(s,a)]
s ← s', a ← a'
Until s is terminal
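A Python sketch of tabular SARSA following the pseudocode above. The environment interface (env.reset(), env.step(a) returning (s', r, done)) and the actions(s) helper are assumptions made for the example.

```python
# Sketch of tabular SARSA with a fixed epsilon-greedy behaviour policy.
import random
from collections import defaultdict

def sarsa(env, actions, alpha=0.1, gamma=0.9, epsilon=0.1, n_episodes=5000):
    Q = defaultdict(float)

    def eps_greedy(s):
        if random.random() < epsilon:
            return random.choice(actions(s))
        return max(actions(s), key=lambda a: Q[(s, a)])

    for _ in range(n_episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next) if not done else None
            target = r if done else r + gamma * Q[(s_next, a_next)]
            # Q(s,a) <- Q(s,a) + alpha*[r + gamma*Q(s',a') - Q(s,a)]
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s_next, a_next
    return Q
```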

Page 34: Università di Milano-Bicocca Laurea Magistrale in Informatica

SARSA Example

[Figure: the grid world with the special cells A, A', B, B' and the A→A' (+10) and B→B' (+5) transitions, used as the SARSA test problem.]

Page 35: Università di Milano-Bicocca Laurea Magistrale in Informatica

SARSA Example V(s)

[Figure: V(s) after 100, 200, 1000, 2000, 5000 and 10000 SARSA steps.]

Page 36: Università di Milano-Bicocca Laurea Magistrale in Informatica

SARSA Example Q(s,a)

[Figure: Q(s,a) after 100, 200, 1000, 2000, 5000 and 10000 SARSA steps.]

Page 37: Università di Milano-Bicocca Laurea Magistrale in Informatica

Q-Learning (Off-Policy TD)

Q-learning approximates the optimal value functions V*(s) or Q*(s,a) independently of the policy being followed.

The policy only determines which state-action pairs are visited and updated.

Q(s_t,a_t) ← Q(s_t,a_t) + α [r_{t+1} + γ max_{a'} Q(s_{t+1},a') - Q(s_t,a_t)]

Page 38: Università di Milano-Bicocca Laurea Magistrale in Informatica

Q-Learning Off-Policy Iteration

Initialize Q(s,a) arbitrarily.
Repeat for each episode:
Initialize s
Repeat for each step of the episode:
Choose a from s using the ε-greedy policy derived from Q
Take action a, observe reward r and next state s'
Q(s,a) ← Q(s,a) + α [r + γ max_{a'} Q(s',a') - Q(s,a)]
s ← s'
Until s is terminal
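The same kind of sketch with the Q-learning target: the only change with respect to the SARSA sketch is the max over a' in the update. The env/actions interface is again an assumption made for the example.

```python
# Sketch of tabular Q-learning (off-policy TD control).
import random
from collections import defaultdict

def q_learning(env, actions, alpha=0.1, gamma=0.9, epsilon=0.1, n_episodes=5000):
    Q = defaultdict(float)
    for _ in range(n_episodes):
        s = env.reset()
        done = False
        while not done:
            # behaviour policy: epsilon-greedy with respect to the current Q
            if random.random() < epsilon:
                a = random.choice(actions(s))
            else:
                a = max(actions(s), key=lambda x: Q[(s, x)])
            s_next, r, done = env.step(a)
            best_next = 0.0 if done else max(Q[(s_next, x)] for x in actions(s_next))
            # Q(s,a) <- Q(s,a) + alpha*[r + gamma*max_a' Q(s',a') - Q(s,a)]
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q
```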

Page 39: Università di Milano-Bicocca Laurea Magistrale in Informatica

TD versus Monte-Carlo

So far TD uses a one-step look-ahead, but why not use a 2-step or n-step look-ahead to update Q(s,a)?

2-step look-ahead:

Q(s_t,a_t) ← Q(s_t,a_t) + α [r_{t+1} + γ r_{t+2} + γ² Q(s_{t+2},a_{t+2}) - Q(s_t,a_t)]

n-step look-ahead:

Q(s_t,a_t) ← Q(s_t,a_t) + α [r_{t+1} + γ r_{t+2} + ... + γ^{n-1} r_{t+n} + γ^n Q(s_{t+n},a_{t+n}) - Q(s_t,a_t)]

Monte-Carlo method:

Q(s_t,a_t) ← Q(s_t,a_t) + α [r_{t+1} + γ r_{t+2} + ... + γ^{N-1} r_{t+N} - Q(s_t,a_t)]

(compute the total return R_t at the end of the episode)

Page 40: Università di Milano-Bicocca Laurea Magistrale in Informatica

Temporal Difference Learning

Drawback of the one-step look-ahead: the reward is only propagated back to the immediately preceding state, so it takes a long time to finally propagate to the start state.

[Figure: an episode s_t → s_{t+1} → s_{t+2} → ... → s_{T-1} → s_T with reward r = 0 on every transition except the last one into s_T, which yields r = 100.]

Initial V: V(s) = 0 for all states.
After the first episode: all values are still 0, except V(s_{T-1}) = α·100.
After the second episode: V(s_{T-2}) = αγ·α·100 and V(s_{T-1}) = α·100 + ..., while all earlier states are still 0.

Page 41: Università di Milano-Bicocca Laurea Magistrale in Informatica

Monte-Carlo Method

Drawbacks of the Monte-Carlo method:
Learning only takes place after an episode has terminated.
The agent performs a random walk until the goal state is discovered for the first time, since all state-action values initially seem equal.
It might take a long time to find the goal state by random walk.
TD-learning, by contrast, actively explores the state space if each action receives a small default penalty.

Page 42: Università di Milano-Bicocca Laurea Magistrale in Informatica

N-Step Return

Idea: blend TD learning with the Monte-Carlo method.

Define:

R_t^(1) = r_{t+1} + γ V_t(s_{t+1})

R_t^(2) = r_{t+1} + γ r_{t+2} + γ² V_t(s_{t+2})

R_t^(n) = r_{t+1} + γ r_{t+2} + ... + γ^{n-1} r_{t+n} + γ^n V_t(s_{t+n})

The quantity R_t^(n) is called the n-step return at time t.

Page 43: Università di Milano-Bicocca Laurea Magistrale in Informatica

TD(λ)

n-step backup: V_t(s_t) ← V_t(s_t) + α [R_t^(n) - V_t(s_t)]

TD(λ): use a weighted average of the n-step returns for the backup:

R_t^λ = (1-λ) Σ_{n=1}^∞ λ^{n-1} R_t^(n)

R_t^λ = (1-λ) Σ_{n=1}^{T-t-1} λ^{n-1} R_t^(n) + λ^{T-t-1} R_t   (if s_T is a terminal state)

The weight of the n-step return decreases by a factor of λ with each additional step.

TD(0): the one-step temporal difference method.
TD(1): the Monte-Carlo method.

Page 44: Università di Milano-Bicocca Laurea Magistrale in Informatica

Eligibility Traces

Practical implementation of TD(λ): with each state-action pair associate an eligibility trace e_t(s,a).

On each step, the eligibility traces of all state-action pairs decay by a factor γλ, and the eligibility trace of the one state-action pair visited on that step is additionally incremented by 1:

e_t(s,a) = γλ e_{t-1}(s,a) + 1   if s = s_t and a = a_t
e_t(s,a) = γλ e_{t-1}(s,a)       otherwise

Page 45: Università di Milano-Bicocca Laurea Magistrale in Informatica

On-line TD(λ)

Initialize Q(s,a) arbitrarily and e(s,a) = 0 for all s, a.
Repeat for each episode:
Initialize s, a
Repeat for each step of the episode:
Take action a, observe r and s'
Choose a' from s' using the policy derived from Q (ε-greedy)
δ ← r + γ Q(s',a') - Q(s,a)
e(s,a) ← e(s,a) + 1
For all s, a:
Q(s,a) ← Q(s,a) + α δ e(s,a)
e(s,a) ← γλ e(s,a)
s ← s', a ← a'
Until s is terminal
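A Python sketch of this on-line TD(λ) (SARSA(λ)) procedure with accumulating eligibility traces; the env/actions interface is the same assumption as in the previous sketches.

```python
# Sketch of SARSA(lambda) with accumulating eligibility traces.
import random
from collections import defaultdict

def sarsa_lambda(env, actions, alpha=0.1, gamma=0.9, lam=0.8,
                 epsilon=0.1, n_episodes=2000):
    Q = defaultdict(float)

    def eps_greedy(s):
        if random.random() < epsilon:
            return random.choice(actions(s))
        return max(actions(s), key=lambda a: Q[(s, a)])

    for _ in range(n_episodes):
        e = defaultdict(float)                   # eligibility traces e(s,a) = 0
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)
            a_next = eps_greedy(s_next) if not done else None
            q_next = 0.0 if done else Q[(s_next, a_next)]
            td_error = r + gamma * q_next - Q[(s, a)]
            e[(s, a)] += 1.0                     # increment trace of visited pair
            for sa in list(e):
                Q[sa] += alpha * td_error * e[sa]   # update all traced pairs
                e[sa] *= gamma * lam                # decay traces by gamma*lambda
            s, a = s_next, a_next
    return Q
```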

Page 46: Università di Milano-Bicocca Laurea Magistrale in Informatica

Function Approximation

So far we have assumed that the action value function Q(s,a) is represented as a table.

This limits us to problems with a small number of states and actions: for large state spaces a table-based representation requires a lot of memory and large data sets to fill the entries accurately.

Generalization: use any supervised learning algorithm to estimate Q(s,a) from a limited set of action-value examples, for instance:
Neural networks (neuro-dynamic programming)
Linear regression
Nearest neighbors

Page 47: Università di Milano-Bicocca Laurea Magistrale in Informatica

Function Approximation

Minimize the mean-squared error between the training examples Q_t(s_t,a_t) and the true value function Q^π(s,a):

Σ_{s∈S} P(s) [Q^π(s,a) - Q_t(s,a)]²

Notice that during policy iteration both P(s) and Q^π(s,a) change over time.

Parameterize Q_t(s,a) by a vector θ = (θ_1, ..., θ_n), for example the weights of a feed-forward neural network.

Page 48: Università di Milano-Bicocca Laurea Magistrale in Informatica

Stochastic Gradient Descent

Use gradient descent to adjust the parameter vector θ in the direction that reduces the error on the current example:

θ_{t+1} = θ_t + α [Q^π(s_t,a_t) - Q_t(s_t,a_t)] ∇_θ Q_t(s_t,a_t)

The target output v_t of the t-th training example is not the true value of Q^π but only some approximation of it:

θ_{t+1} = θ_t + α [v_t - Q_t(s_t,a_t)] ∇_θ Q_t(s_t,a_t)

This still converges to a local optimum if E{v_t} = Q^π(s_t,a_t) and α → 0 as t → ∞.
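For the special case of a linear approximator Q_θ(s,a) = θ·φ(s,a), the gradient ∇_θ Q_θ(s,a) is just the feature vector φ(s,a), so the update above becomes a one-liner. The feature function φ and the target v_t in the sketch below are placeholders assumed for the example.

```python
# Sketch of the stochastic gradient update with a linear approximator:
# theta <- theta + alpha * [v_t - Q_theta(s,a)] * grad_theta Q_theta(s,a),
# where grad_theta Q_theta(s,a) = phi(s,a) in the linear case.
import numpy as np

def sgd_update(theta, phi_sa, v_t, alpha=0.01):
    q_sa = theta @ phi_sa                      # Q_theta(s,a) = theta . phi(s,a)
    return theta + alpha * (v_t - q_sa) * phi_sa

# hypothetical usage with an assumed 4-dimensional feature vector and target
theta = np.zeros(4)
phi_sa = np.array([1.0, 0.0, 0.5, 0.0])        # phi(s_t, a_t)
v_t = 10.0                                     # e.g. r + gamma * max_a' Q(s', a')
theta = sgd_update(theta, phi_sa, v_t)
```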