Motion Capture allows a physical body to animate a 3D computer-generated virtual body that performs in computer space or cyberspace. This is usually done by placing markers on the body, which are tracked by cameras; their motions are analysed by a computer and mapped onto the virtual actor.
Or it can be done using electromagnetic sensors (such as Polhemus or Flock of Birds trackers), which indicate the position and orientation of the limbs and head.
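The tracking-and-mapping pipeline described above can be sketched in a few lines. This is a minimal illustrative sketch, not any real capture system's API: the `Marker` and `VirtualActor` names, and the one-to-one marker-to-joint mapping, are all assumptions made for clarity.

```python
# Minimal sketch of motion-capture retargeting: marker positions tracked
# per frame are mapped onto the joints of a virtual actor.
# All names here are illustrative, not a real capture system's API.

from dataclasses import dataclass

@dataclass
class Marker:
    name: str   # e.g. "left_elbow" -- assumed naming convention
    x: float
    y: float
    z: float

class VirtualActor:
    """A virtual body whose joints follow tracked physical markers."""
    def __init__(self):
        self.joints = {}  # joint name -> (x, y, z)

    def apply_frame(self, markers):
        # Map each tracked marker directly onto the matching joint.
        for m in markers:
            self.joints[m.name] = (m.x, m.y, m.z)

# One captured frame from the camera system (illustrative values).
frame = [Marker("head", 0.0, 1.7, 0.0), Marker("left_elbow", -0.4, 1.2, 0.1)]
actor = VirtualActor()
actor.apply_frame(frame)
print(actor.joints["head"])  # the virtual head now mirrors the physical one
```

A real system would additionally filter sensor noise and solve for joint rotations rather than copying raw positions, but the direction of the data flow is the point: physical body in, virtual body out. The Movatar inverts it.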
Consider, though, a virtual body or an avatar that can access a physical body, actuating its performance in the real world. If the avatar were imbued with an artificial intelligence, becoming increasingly autonomous and unpredictable, it would become more of an Artificial Life (AL) entity performing with a human body in physical space.
With an appropriate visual software interface and muscle-stimulation system this would be possible. The avatar would become a Movatar. Its repertoire of behaviours could be modulated continuously by Ping signals and might evolve using genetic algorithms. With appropriate feedback loops from the real world it would be able to respond and perform in more complex and compelling ways. The Movatar would be able not only to act, but also to express its emotions by appropriating the facial muscles of its physical body.
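The evolution of a behaviour repertoire by genetic algorithm, as imagined above, can be sketched as follows. Everything here is a hypothetical stand-in: the behaviour names, the repertoire length, and especially the fitness function, which in a real Movatar would be driven by feedback from the physical world rather than the toy variety measure used here.

```python
# Hedged sketch: evolving a behaviour repertoire with a genetic algorithm.
# Behaviours, fitness, and all names are hypothetical stand-ins.
import random

BEHAVIOURS = ["raise_arm", "turn_head", "flex_wrist", "smile"]

def random_repertoire(length=6):
    return [random.choice(BEHAVIOURS) for _ in range(length)]

def fitness(repertoire):
    # Placeholder for real-world feedback (audience response, sensor
    # readings); here we simply reward behavioural variety.
    return len(set(repertoire))

def crossover(a, b):
    # Splice two parent repertoires at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(rep, rate=0.1):
    # Occasionally swap a behaviour for a random one.
    return [random.choice(BEHAVIOURS) if random.random() < rate else g
            for g in rep]

def evolve(generations=20, pop_size=10):
    pop = [random_repertoire() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # the repertoire the Movatar would currently perform
```

The feedback loop closes where `fitness` is computed: whatever the physical world returns to the avatar becomes the selection pressure shaping its next generation of behaviours.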
As a VRML entity it could be logged into from anywhere - allowing a body to be accessed and acted upon. Or, from its perspective, the Movatar could perform anywhere in the real world, at any time, with many physical bodies in diverse and spatially separated locations...