Recent experiments in Human-Robot Interaction
have explored the conversational style and attitude that a social robot employs to instruct
humans. Studies highlight the importance of adapting instructions to the user at hand,
in other words, designing for the recipient in the interaction. The process of grounding is incremental. If
the human does not understand an instruction, subsequent elaborations may be necessary.
Humans follow the principle of least-collaborative-effort: rather than conveying all the information at their disposal at once, they add information incrementally (e.g., "So the first one you should take is the frame", "but the one with the stripes", "the black one with the stripes", "perfect"). A speaker who provides a very detailed message all at once may have put in more effort than necessary. Until a sufficient
level of understanding is reached, the speaker will produce more utterances, reformulating
previous turns or adding new information. How can a social robot design utterances with
the right amount of information? Should a social robot consider withholding information
in its first attempts at instruction, even if doing so appears to display no concern for the user's
understanding? Or would such withholding in fact convey collaborative and adaptive behaviour? In this direction, we identify three challenges:
(1) How much information should a robot transmit in its instructions?
(2) When and how should the robot elaborate on or repair previous utterances?
(3) Which user social cues should the robot attend to in order to make decisions on repair?
We believe that robots instructing humans according to the principle of least-collaborative-effort are likely to create more satisfactory interactions.
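The incremental elaboration strategy described above can be sketched as a simple policy: the robot starts with a coarse instruction and only elaborates when the user signals non-understanding. The class name, method, and example utterances below are illustrative assumptions, not an established HRI API or the authors' implementation.

```python
# A minimal sketch of an instruction policy following the principle of
# least-collaborative-effort. Elaborations are ordered from least to
# most informative; the robot emits the next one only while the user
# has not yet signalled understanding. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class IncrementalInstructor:
    elaborations: list            # ordered from minimal to most detailed
    history: list = field(default_factory=list)

    def next_utterance(self, user_understood: bool):
        """Return the next elaboration, or None once grounding succeeds
        or no further elaborations remain."""
        if user_understood:
            return None  # sufficient understanding reached; stop instructing
        level = len(self.history)
        if level >= len(self.elaborations):
            return None  # nothing left to add; repair must take another form
        utterance = self.elaborations[level]
        self.history.append(utterance)
        return utterance


instructor = IncrementalInstructor(elaborations=[
    "Take the frame.",
    "The one with the stripes.",
    "The black one with the stripes.",
])

# Simulated exchange: the user signals non-understanding twice, then understands.
print(instructor.next_utterance(user_understood=False))  # Take the frame.
print(instructor.next_utterance(user_understood=False))  # The one with the stripes.
print(instructor.next_utterance(user_understood=True))   # None
```

The design choice mirrors the grounding process in the text: each turn either confirms understanding or triggers one further elaboration, so the total effort scales with the recipient's actual need rather than being spent up front.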