Abstract
In intelligent virtual environments (IVEs), providing intelligent virtual actors (or avatars) with the ability to visually perceive and respond rapidly to events in the virtual world is a challenging research issue. Modeling an avatar’s cognitive and synthetic behavior appropriately is of paramount importance in IVEs. We propose a new cognitive and behavior modeling methodology that integrates two previously developed, complementary approaches. We present expression cloning, synthetic walking-behavior modeling, and an autonomous-agent cognitive model for driving an avatar’s behavior. Facial expressions are generated using a rule-based state transition system of our own design, and are further personalized for individual avatars by expression cloning. An avatar’s walking behavior is modeled using a skeleton model implemented with seven motion sequences and finite state machines (FSMs). We discuss experimental results demonstrating the benefits of our approach.
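The abstract does not enumerate the seven motion sequences or the events that drive transitions between them, so the following is only a minimal sketch of how an FSM can sequence motion clips for walking behavior; all state and event names here are hypothetical, not taken from the paper.

```python
from enum import Enum, auto

class WalkState(Enum):
    # Hypothetical states standing in for the paper's seven motion sequences.
    IDLE = auto()
    START_WALK = auto()
    WALK_CYCLE = auto()
    TURN_LEFT = auto()
    TURN_RIGHT = auto()
    AVOID_OBSTACLE = auto()
    STOP_WALK = auto()

# Transition table: (current state, event) -> next state.
# Each state would correspond to one motion sequence played on the skeleton.
TRANSITIONS = {
    (WalkState.IDLE, "go"): WalkState.START_WALK,
    (WalkState.START_WALK, "clip_done"): WalkState.WALK_CYCLE,
    (WalkState.WALK_CYCLE, "obstacle"): WalkState.AVOID_OBSTACLE,
    (WalkState.AVOID_OBSTACLE, "clear"): WalkState.WALK_CYCLE,
    (WalkState.WALK_CYCLE, "turn_left"): WalkState.TURN_LEFT,
    (WalkState.WALK_CYCLE, "turn_right"): WalkState.TURN_RIGHT,
    (WalkState.TURN_LEFT, "clip_done"): WalkState.WALK_CYCLE,
    (WalkState.TURN_RIGHT, "clip_done"): WalkState.WALK_CYCLE,
    (WalkState.WALK_CYCLE, "stop"): WalkState.STOP_WALK,
    (WalkState.STOP_WALK, "clip_done"): WalkState.IDLE,
}

def step(state: WalkState, event: str) -> WalkState:
    """Advance the FSM; events with no matching rule leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

if __name__ == "__main__":
    state = WalkState.IDLE
    for event in ["go", "clip_done", "obstacle", "clear", "stop", "clip_done"]:
        state = step(state, event)
        print(f"{event:>9} -> {state.name}")
```

A table-driven FSM like this keeps the perception-to-action mapping declarative: a cognitive layer emits events, and the FSM selects which motion sequence plays next, which matches the division of labor the abstract describes between the cognitive model and the synthetic walking behavior.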
Original language | English |
---|---|
Pages (from-to) | 47-54 |
Number of pages | 8 |
Journal | Virtual Reality |
Volume | 12 |
Issue number | 1 |
Publication status | Published - 2008 |