Authors
Marco Gillies,
Xueni Pan,
Mel Slater
Publication date
2008
Publisher
John Wiley & Sons, Ltd.
Description
Humans use their bodies in a highly expressive way during conversation, and animated characters that lack this form of non‐verbal expression can seem stiff and unemotional. An important aspect of non‐verbal expression is that people respond to each other's behavior and are highly attuned to picking up this type of response. This is particularly important for the feedback given while listening to someone speak. However, automatically generating this type of behavior is difficult, as it is highly complex and subtle. This paper takes a data-driven approach to generating interactive social behavior. Listening behavior is motion captured, together with the audio being listened to. These data are used to learn an animation model of the responses of one person to the other. This allows us to create characters that respond in real‐time during a conversation with a real human. Copyright © 2008 John Wiley & Sons, Ltd.
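The abstract describes learning a mapping from a speaker's audio to a listener's captured motion and using it to drive responses in real time. The sketch below illustrates that general idea only; it is not the paper's model. The feature choice (per-frame speech energy), the fixed context window, and the nearest-neighbour retrieval of captured responses are all assumptions made for illustration.

```python
import numpy as np

WINDOW = 5  # number of past audio frames used as context (an assumption)

def make_windows(audio_feats, motion):
    """Pair each WINDOW-frame audio context with the listener motion that followed."""
    X, y = [], []
    for t in range(WINDOW, len(audio_feats)):
        X.append(audio_feats[t - WINDOW:t])
        y.append(motion[t])
    return np.array(X), np.array(y)

class NearestNeighbourListener:
    """Plays back the captured response whose audio context best matches the live input.

    A toy stand-in for the learned animation model described in the abstract.
    """
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def respond(self, context):
        # Retrieve the captured listener motion for the closest stored audio context.
        d = np.linalg.norm(self.X - context, axis=1)
        return self.y[np.argmin(d)]

# Toy "captured" session: random per-frame speech energies, with a smoothed
# version standing in for the motion-captured listener signal (e.g. nod amplitude).
rng = np.random.default_rng(0)
audio = rng.random(200)
motion = np.convolve(audio, np.ones(WINDOW) / WINDOW, mode="same")

X, y = make_windows(audio, motion)
model = NearestNeighbourListener().fit(X, y)

# At runtime, each incoming window of live audio yields a listener response frame.
live_context = audio[50:55]
print(model.respond(live_context))
```

In this sketch, generation is a lookup into the captured data; a real system would interpolate or generalize beyond the recorded session, but the data flow (capture listener motion with speaker audio, learn a mapping, query it with live audio) follows the abstract's description.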