
Robots and avatars as educational language learning tools?

Dr.-Ing. Kirsten Bergmann, Bielefeld University

ACTIONS IN SUPPORT OF THE “TRENTINO TRILINGUE” PLAN: Development of professional resources and preparation of learning and assessment tools (Code: 2015_3_1034_IP.01)

This initiative is carried out within the ESF Operational Programme 2014-2020 of the Autonomous Province of Trento, thanks to the financial support of the European Social Fund, the Italian State and the Autonomous Province of Trento. The European Commission and the Autonomous Province of Trento decline all responsibility for any use that may be made of the information contained in these materials.

One-to-one tutoring

Classroom education

One-to-one tutoring vs. group education

Distinct advantages shown for one-to-one tutoring over classroom teaching

• Improvement of 2 standard deviations (Bloom 1984, p. 4): “the average tutored student was above 98% of the students in the control class” (see the short check after this list)

• More recent research has shown smaller effects (VanLehn, 2011)
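For reference, the “98%” figure follows from the 2-standard-deviation claim if test scores are assumed to be roughly normally distributed (an assumption added here, not stated on the slide). A minimal check in Python:

```python
# A score 2 SDs above the control-class mean lies at about the
# 98th percentile of a normal distribution.
from scipy.stats import norm

print(round(norm.cdf(2.0), 3))  # ~0.977, i.e. above ~98% of the control class
```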

But one-to-one tutoring is not feasible in traditional classroom arrangements

Can we transfer the benefits of human one-to-one tutoring to digital learning technology?

Progress: Humanoid robots and virtual humans

• Increasingly realistic and expressive appearance

• Advances in input recognition/interpretation (speech, attention, affect/emotion, …)

• Effects in interaction with humans: Credibility, competence, trust, communicative behavior

• Application domains: Health care, elderly care, entertainment, and education

Humanoid robots and virtual humans as learning partners?

Transferring the benefits of human-human tutoring to instructional communication and computer-supported learning

+ personalization

+ flexible availability

+ patient

+ can address children from migrant backgrounds in their L1

Research tool to optimize human tutoring

• Studies: Manipulating subtle aspects of communicative behavior and observing cognitive and social outcomes

• Inferring advice for human teachers

Virtual humans and robots for education— what we know so far…

Saerbeck et al. (2010) The role of social supportiveness

• Children learned an artificial language (“Toki Pona”)

• Level of robot’s social supportiveness was manipulated (e.g. non-verbal feedback, attention guiding, smiling)

• Higher social supportiveness had a positive effect on learning outcomes (vocabulary, grammar, pronunciation) and students’ motivation

Virtual humans and robots for education— what we know so far…

Alemi et al. (2014, 2015): Effects of a social robot on learners’ anxiety and attitude

• Teacher was accompanied by a robot assistant (vs. no robot)

• Students in the robot group had great fun in the learning process and believed they were learning more effectively

• Robot helped to boost their motivation in the long run

Virtual humans and robots for education— what we know so far…

Herberg et al. (2015): Robot watchfulness hinders learning performance

• Children learned French and Latin rules from a robot tutor and filled in worksheets applying the rules to translate phrases

• Robot watched children as they filled in worksheet items (or not)

• Better performance when the robot looked away

Virtual humans and robots for education—what we know so far…

Tanaka & Matsuzoe (2012, 2015): Learning by teaching a care-receiving robot

• Japanese children teach word meaning to a robot — can this promote learning English words?

• Learning outcome for verbs was improved when the robot was present (vs. a no-robot control)

Moriguchi et al. (2011): Can 4/5-year-olds learn vocabulary from a robot?

• 4-year-olds learned better from a human than from a robot stimulus (both presented on video)

• No difference between human and robot stimulus at age 5

Virtual humans and robots for education— what we know so far…

Is tutoring with humanoid robots effective for language education?

• The potential of robots for language learning seems to be considerable

• But still many unanswered and underexplored scientific and technological issues

Designing a child-friendly tutor robot that can be used to support teaching preschool children a second language (L2) by interacting with them

Target language pairs:

• L1 German, Dutch, or Turkish → L2 English
• L1 Turkish → L2 Dutch or German

Methodology

Observations → Realization → Evaluation

Major challenges

• Perceive and recognize the child’s verbal and nonverbal signals and input provided over the tablet

• Monitor the child’s learning progress and behavior

• Respond adequately to the child via robot (speech, gesture, …) and tablet output while considering learning outcome and motivation, engagement, and fun

Technical architecture (components): ASR, vision recognition, and tablet input; multimodal input interpretation and context interpretation; child model; interaction model; interaction management; multimodal output generation; TTS/prosody and nonverbal synthesis
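To make the component list concrete, here is an illustrative sketch of how one interaction cycle could flow through these components. All class and method names are hypothetical placeholders derived from the architecture sketch, not the project’s actual code.

```python
# Illustrative sketch of one interaction cycle through the architecture
# components. Every class and method name here is a hypothetical placeholder.

class TutoringSystem:
    def __init__(self, asr, vision, tablet, interpreter, child_model,
                 interaction_model, manager, output_gen, tts, nonverbal):
        self.asr = asr                          # speech recognition
        self.vision = vision                    # vision recognition (gaze, gestures)
        self.tablet = tablet                    # touch input from the tablet
        self.interpreter = interpreter          # multimodal input + context interpretation
        self.child_model = child_model          # learning progress, engagement
        self.interaction_model = interaction_model
        self.manager = manager                  # interaction management
        self.output_gen = output_gen            # multimodal output generation
        self.tts = tts                          # speech synthesis / prosody
        self.nonverbal = nonverbal              # gesture / gaze synthesis

    def step(self):
        # 1. Perceive the child's verbal and nonverbal signals plus tablet input
        percepts = self.interpreter.fuse(self.asr.listen(),
                                         self.vision.observe(),
                                         self.tablet.poll())
        # 2. Monitor the child's learning progress and behaviour
        self.child_model.update(percepts)
        # 3. Decide on the next tutoring action
        action = self.manager.select_action(self.child_model,
                                            self.interaction_model)
        # 4. Respond via robot speech/gesture and tablet output
        speech, gesture, screen = self.output_gen.realize(action)
        self.tts.say(speech)
        self.nonverbal.perform(gesture)
        self.tablet.show(screen)
```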

1. How can we support vocabulary learning by means of embodiment?

2. How can we make the interactions adaptive towards children’s individual needs?

Research in cooperation with

Manuela Macedonia

Johannes Kepler University, Linz; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig

Astrid Rosenthal-von der Pütten

University of Duisburg-Essen

Enactment effect

Performing gestures when encoding verbal information enhances memory

• “Enactment effect” (Engelkamp & Krumnacker, 1980)

• “Subject performed task effect” (Cohen, 1981)

- for different materials: verbs, phrases, actions with real objects, common and bizarre actions

- for different subjects: healthy, mentally impaired, memory impaired, young and elderly participants, students, children

- with different tests: recognition, free recall, cued recall

Enactment as a tool to enhance learning: Application to second language learning

Higher memory performance when using gestures while learning novel language materials

• English-French expressions; short- and long-term effects (Quinn-Allen 1995)

• Artificial language; short- and long-term effects (Macedonia 2003; Macedonia et al. 2010; Macedonia et al. 2011)

• French-English: words + gestures vs. pictures; children (Tellier 2008)

• English-Japanese action verbs: speech + gestures vs. repeated speech (Kelly et al. 2009)

Is the enactment effect for foreign language materials reproducible with virtual characters?

Study

• 45 word pairs (concrete nouns): German - Vimmi

• Vimmi: Artificial corpus following Italian phonotactic rules, developed for experimental purposes (Macedonia et al. 2010; Macedonia et al. 2011)

• Items controlled for length in Vimmi and frequency (i.e., concept familiarity) in German

Vimmi - German:
puneri - Handtuch (towel)
giketa - Blume (flower)
lamube - Stuhl (chair)
gaboki - Spiegel (mirror)
...

Study

Within-subjects design with 3 training conditions

• Human (15 items): Spoken and written words + gesture stimuli by actress; imitation by participants

• Agent (15 items): Spoken and written words + gesture stimuli by agent; imitation by participants

• Control (15 items): Spoken and written words; imitation by participants

Items trained in 9 blocks of 5 word pairs each, 45 min daily over 3 days
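As an illustration of the design described above, the following sketch assigns 45 placeholder word pairs to the three training conditions and groups them into 9 blocks of 5 pairs. Item names and the exact assignment procedure are hypothetical, not taken from the study.

```python
# Illustrative reconstruction of the training schedule: 45 word pairs split
# over three conditions (15 items each) and presented in 9 blocks of 5 pairs.
import random

word_pairs = [f"pair_{i:02d}" for i in range(45)]   # placeholder items
random.shuffle(word_pairs)

conditions = {
    "human":   word_pairs[0:15],
    "agent":   word_pairs[15:30],
    "control": word_pairs[30:45],
}

# 9 blocks of 5 word pairs each (here: 3 blocks per condition, an assumption)
blocks = []
for name, items in conditions.items():
    for b in range(3):
        blocks.append((name, items[b * 5:(b + 1) * 5]))
random.shuffle(blocks)

for condition, items in blocks:
    print(condition, items)
```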

Agent stimulus example: Ohrring — gelori

Human stimulus example: Ohrring — gelori

Results (Bergmann & Macedonia 2013; Macedonia, Bergmann et al. 2014)

[Figure: free recall (%) on Days 1-3 and on Day 30, by training condition (Control, Actress, Virtual character)]

Days 1-3: effects of time (F(2,56) = 187.26, p < .001) and stimulus type (F(2,56) = 4.24, p = .019)

Day 30: effect of stimulus type (F(2,56) = 3.68, p = .032)
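The F values above point to a repeated-measures analysis over the within-subject factors stimulus type and day. The sketch below shows how such an analysis can be run with statsmodels; the data frame is randomly generated placeholder data, not the study’s data.

```python
# Minimal sketch of a repeated-measures ANOVA over stimulus type and day.
# The data below are random placeholders; only the analysis call is the point.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(29):                       # 29 participants (16 + 13, see below)
    for stimulus in ["control", "actress", "virtual_character"]:
        for day in [1, 2, 3]:
            rows.append({"subject": subject,
                         "stimulus": stimulus,
                         "day": day,
                         "recall": rng.uniform(0, 50)})   # placeholder scores
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="recall", subject="subject",
              within=["stimulus", "day"]).fit()
print(res)   # F and p values for stimulus, day, and their interaction
```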

[Figure: free recall by training condition (Human, Agent, Control) for low- vs. high-performing participants]

High- vs. low-performing participants (Bergmann & Macedonia 2013; Macedonia, Bergmann et al. 2014)

Median split according to overall performance

• 16 high-performing participants (mean performance: 45.4%)
• 13 low-performing participants (mean performance: 21.2%)

Significant interaction effect: stimulus type × performance (F(2,56) = 4.37, p = .017)
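The median split itself is straightforward; a small sketch with placeholder performance scores (not the study’s data):

```python
# Median split: divide participants into high and low performers
# by their overall free-recall performance (placeholder scores).
import numpy as np

overall_performance = np.random.default_rng(1).uniform(0.1, 0.6, size=29)

median = np.median(overall_performance)
high = overall_performance > median
low = ~high

print(f"median = {median:.3f}, {high.sum()} high vs. {low.sum()} low performers")
```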


1. How can we support vocabulary learning by means of embodiment?

2. How can we make the interactions adaptive towards children’s individual needs?

Using a virtual character in gesture-supported vocabulary training

• Can increase learning outcome just as well as input from a human actress

• Learner dependency: Especially high-performers can profit from the virtual character

1. How can we support vocabulary learning by means of embodiment?

2. How can we make the interactions adaptive towards children’s individual needs?

Research in cooperation with

Thorsten Schodde

Bielefeld University

Stefan Kopp

Bielefeld University

Interaction management

‣ Content: Next item to be learned/repeated

‣ Tutoring strategy: How to present the content?

‣ Integrating breaks, games etc.

‣ Feedback

‣ …


Example of adaptive item selection

Words to be learned, with the system’s initial predictions about the learner’s knowledge (range 0.0-1.0):
‣ hippo [0.5]
‣ horse [0.5]
‣ monkey [0.5]
‣ ladybug [0.5]
‣ chicken [0.5]
‣ bird [0.5]

Robot (in the child’s L1, German): “Ich sehe was, was du nicht siehst, und das heißt auf Englisch ‘hippo’” (“I spy with my little eye something that is called ‘hippo’ in English”)

The child’s task is to select the correct picture from a set of pictures (task difficulty can vary via the number of distractor images)

After the child’s response, the system updates its knowledge predictions (the bracketed values change, e.g. to [0.7] or [0.3]); depending on the predicted knowledge state, task difficulty can be adapted

[Animation: the predicted knowledge values are updated trial by trial, e.g. hippo 0.5 → 0.29 → 0.33 → 0.19 → 0.22 while the other words stay at 0.5, until horse is trained and rises to 0.66]
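The adaptive selection illustrated above can be sketched in a few lines. Below is a minimal illustration assuming a Bayesian-knowledge-tracing-style update of the bracketed knowledge estimates; the guess/slip/learning parameters, the “pick the least-known word” heuristic, and the difficulty rule are assumptions for illustration, not the project’s actual model.

```python
# Minimal sketch: knowledge estimates per word, a BKT-style update after each
# answer, item selection, and difficulty adaptation. All parameters are assumed.

# Initial predictions: the system knows nothing about the child yet.
knowledge = {w: 0.5 for w in ["hippo", "horse", "monkey",
                              "ladybug", "chicken", "bird"]}

# Illustrative guess/slip/learning parameters (not from the slides)
P_GUESS, P_SLIP, P_LEARN = 0.25, 0.10, 0.15

def update(p_known, correct):
    """Update the estimate that the child knows a word after one answer."""
    if correct:
        evidence = p_known * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_known) * P_GUESS)
    else:
        evidence = p_known * P_SLIP
        posterior = evidence / (evidence + (1 - p_known) * (1 - P_GUESS))
    # Some learning may also result from the robot's feedback
    return posterior + (1 - posterior) * P_LEARN

def next_item(knowledge):
    """Pick the word the child is least likely to know already."""
    return min(knowledge, key=knowledge.get)

def n_distractors(p_known, low=1, high=4):
    """Adapt task difficulty: more distractor pictures for better-known words."""
    return low + round(p_known * (high - low))

word = next_item(knowledge)                    # "hippo" on the first trial
print(word, n_distractors(knowledge[word]))    # chosen word and its difficulty
knowledge[word] = update(knowledge[word], correct=False)
print(f"{word}: {knowledge[word]:.2f}")        # estimate drops after a wrong answer
```

In the evaluation described next, selecting items on the basis of such predicted knowledge states was compared against random item selection.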

First evaluation with adult learners

• Learning vocabulary in a game-like scenario: “I spy with my little eye …”:

- Robot names object features (color, shape, …) in L2

- 10 words from artificial language “Vimmi” (Macedonia et al. 2010)

- Participants respond by selecting an object on the tablet

• Between-subjects design with 2 experimental conditions (N=20 per condition):

- Selection of vocabulary items based on the predicted knowledge state

- Random selection of vocabulary items

Example runs

Adaptive condition vs. random condition

Evaluation results

1. How can we support vocabulary learning by means of embodiment?

2. How can we make the interactions adaptive towards children’s individual needs?

Predicting learners’ skills on the basis of observations and taking action accordingly

• Learners’ performance can be increased by personalized robot tutoring

1. How can we support vocabulary learning by means of embodiment?

2. How can we make the interactions adaptive towards children’s individual needs?

• Taking further social signals from the child into account (e.g. emotional state, attention)

• Predicting the most adequate tutoring strategy (gestures vs. no gestures) on the basis of the child model

Robots and avatars as educational language learning tools?

Questions& Discussion