Computer Animation, 1999. Proceedings

Date: 26-29 May 1999

  • Proceedings Computer Animation 1999

  • Index of authors

    Page(s): 235
  • A behavioral interface to simulate agent-object interactions in real time

    Page(s): 138 - 146

    The paper presents a novel approach to modeling and controlling interactive objects for simulations with virtual human agents when real-time interactivity is essential. A general conceptualization is made to model objects with behaviors that can provide: information about their functionality, changes in appearance from parameterized deformations, and a complete plan for each possible interaction with a virtual human. Such behaviors are described with simple primitive commands, following the current trend of many standard scene graph file formats that connect language with movements and events to create interactive animations. In our case, special attention is given to correctly interpreting object behaviors in parallel, a situation that arises when many human agents interact at the same time with the same object.
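
    As a rough illustration (not the paper's actual system), the sketch below shows how object behaviors made of primitive commands might be interpreted per agent, so that several agents can run a behavior on the same object in parallel; all names are hypothetical.

    ```python
    # Minimal sketch (hypothetical names): an object behavior is a list of
    # primitive commands; each interacting agent advances its own cursor, so
    # several agents can run behaviors on the same object in parallel.
    class SmartObject:
        def __init__(self, behaviors):
            self.behaviors = behaviors          # name -> list of (command, args)
            self.sessions = {}                  # agent id -> (behavior, step)

        def start(self, agent, behavior):
            self.sessions[agent] = (behavior, 0)

        def step(self, agent):
            """Execute the next primitive command for one agent; True if done."""
            behavior, i = self.sessions[agent]
            plan = self.behaviors[behavior]
            if i >= len(plan):
                return True
            command, args = plan[i]
            print(f"{agent}: {command}{args}")  # stand-in for the real primitive
            self.sessions[agent] = (behavior, i + 1)
            return False

    door = SmartObject({"open": [("goto", ("handle",)), ("grasp", ("handle",)),
                                 ("deform", ("angle", 90))]})
    door.start("agent1", "open"); door.start("agent2", "open")
    while not (door.step("agent1") & door.step("agent2")):  # interleaved steps
        pass
    ```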

  • Real-time collision detection for virtual surgery

    Page(s): 82 - 90

    We present a simple method for performing real-time collision detection in a virtual surgery environment. The method relies on the graphics hardware to test the interpenetration between a virtual deformable organ and a rigid tool controlled by the user, and it takes into account the motion of the tool between two consecutive time steps. For our specific application, the new method runs about a hundred times faster than the well-known oriented-bounding-box tree method.
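
    The paper's interpenetration test runs on graphics hardware, which is not reproduced here; the sketch below only illustrates the time-step aspect with a purely geometric stand-in, testing the segment swept by the tool tip between two frames against an organ triangle (the standard Moller-Trumbore test).

    ```python
    import numpy as np

    def segment_hits_triangle(p0, p1, a, b, c, eps=1e-9):
        """Moller-Trumbore intersection of segment p0->p1 (tool-tip positions
        at two consecutive time steps) with triangle (a, b, c)."""
        d = p1 - p0
        e1, e2 = b - a, c - a
        h = np.cross(d, e2)
        det = np.dot(e1, h)
        if abs(det) < eps:                  # segment parallel to triangle plane
            return False
        f = 1.0 / det
        s = p0 - a
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:
            return False
        q = np.cross(s, e1)
        v = f * np.dot(d, q)
        if v < 0.0 or u + v > 1.0:
            return False
        t = f * np.dot(e2, q)               # hit parameter along the segment
        return 0.0 <= t <= 1.0

    # Tool tip moved across a triangle between two consecutive frames:
    tip_prev, tip_now = np.array([0.2, 0.2, -1.0]), np.array([0.2, 0.2, 1.0])
    tri = [np.array(v, float) for v in ([0, 0, 0], [1, 0, 0], [0, 1, 0])]
    print(segment_hits_triangle(tip_prev, tip_now, *tri))   # True
    ```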

  • Animation of human walking in virtual environments

    Page(s): 4 - 15

    This paper presents an interactive hierarchical motion control system dedicated to the animation of human figure locomotion in virtual environments. As observed in gait experiments, controlling the trajectories of the feet during gait is a precise end-point control task. Inverse kinematics with optimization-based approaches is used to control the complex relationships between the motion of the body and the coordination of its legs. For each step, the simulation of the support leg is executed first, followed by the swing leg, which incorporates the position of the pelvis from the support leg. That is, the foot placement of the support leg serves as the kinematic constraint, while the position of the pelvis is determined through the optimization of a control criterion. The swing leg movement is then defined to satisfy two criteria in order: collision avoidance and control-criterion optimization. Finally, animation attributes, such as controlling parameters and pre-processed motion modules, are applied to achieve a variety of personalities and walking styles.
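
    A toy version of the per-step ordering described above, using a textbook analytic two-link inverse kinematics solver; all lengths and targets are made up and the optimization/collision-avoidance stages are omitted.

    ```python
    import numpy as np

    def two_link_ik(target, l1, l2):
        """Analytic IK for a planar 2-link leg: hip and knee angles reaching target."""
        x, y = target
        d2 = x * x + y * y
        cos_knee = np.clip((d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2), -1.0, 1.0)
        knee = np.arccos(cos_knee)
        hip = np.arctan2(y, x) - np.arctan2(l2 * np.sin(knee),
                                            l1 + l2 * np.cos(knee))
        return hip, knee

    # One gait step, sketched in the order the paper describes (made-up data):
    pelvis = np.array([0.0, 0.8])
    support_foot = np.array([0.0, 0.0])       # kinematic constraint of the step
    hip_s, knee_s = two_link_ik(support_foot - pelvis, 0.45, 0.45)  # support leg
    # The support-leg solution fixes the pelvis; the swing leg then reaches its
    # next foothold (collision avoidance would perturb this target):
    swing_target = np.array([0.4, 0.0])
    hip_w, knee_w = two_link_ik(swing_target - pelvis, 0.45, 0.45)
    print(hip_s, knee_s, hip_w, knee_w)
    ```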

  • Group interaction in a surround screen environment

    Page(s): 92 - 98

    We describe a setup using a surround screen environment (the Extended Virtual Environment, or EVE, dome) that we used to explore group interaction in real and virtual space. We have created a prototype framework to explore different modes of group interaction: the position and motion of users in real space are tracked using a vision-based interface that allows the activities of real crowds to be monitored. In the virtual space, we use a simple behavioural animation system that serves as a testbed for generating virtual group and crowd behaviour. Exploring different kinds of dynamic relationships between real and virtual groups gives insight into possible directions of group interaction.
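
    A minimal behavioural testbed of the kind mentioned can be sketched as a classic cohesion/separation/alignment crowd rule; parameters are hypothetical, not the authors' system, and tracked real-crowd positions could be mixed in as extra neighbours.

    ```python
    import numpy as np

    def crowd_step(pos, vel, dt=0.05, coh=0.5, sep=0.02, align=0.3):
        """One update of a simple crowd rule set: members are pulled toward
        the group center, match the mean velocity, and repel close neighbours."""
        center, mean_vel = pos.mean(axis=0), vel.mean(axis=0)
        acc = coh * (center - pos) + align * (mean_vel - vel)
        for i in range(len(pos)):
            d = pos[i] - pos
            dist2 = (d ** 2).sum(axis=1)
            near = (dist2 < 0.25) & (dist2 > 0)      # neighbours within 0.5
            if near.any():
                acc[i] += sep * (d[near] / dist2[near, None]).sum(axis=0)
        vel = vel + acc * dt
        return pos + vel * dt, vel

    pos = np.random.rand(12, 2) * 4.0          # a dozen virtual crowd members
    vel = np.zeros((12, 2))
    for _ in range(100):
        pos, vel = crowd_step(pos, vel)
    print(pos.std(axis=0))                     # group contracts to a loose cluster
    ```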

  • MPEG-4 compatible faces from orthogonal photos

    Page(s): 186 - 194

    MPEG-4 is scheduled to become an international standard in March 1999. The paper demonstrates a virtual cloning method and animation system that is compatible with the MPEG-4 standard facial object specification. Our method uses orthogonal photos (front and side view) as input and reconstructs a 3D facial model by extracting MPEG-4 face definition parameters (FDP) from the photos, which initialize a custom face, and deforming a generic model accordingly. Texture mapping is performed completely automatically, using an image composed of the two orthogonal images. A reconstructed head can be animated immediately inside our animation system, which conforms to the MPEG-4 specification of face animation parameters (FAP). The result is integrated into our virtual human director (VHD) system.
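
    One plausible way to "deform a generic model" from extracted feature points is radial-basis interpolation; the sketch below is that generic technique under invented data, not necessarily the paper's actual deformation.

    ```python
    import numpy as np

    def rbf_warp(vertices, src_pts, dst_pts):
        """Deform generic-model vertices so that feature points src_pts move
        to dst_pts, using radial basis interpolation with kernel phi(r) = r."""
        n = len(src_pts)
        d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=2)
        weights = np.linalg.solve(d + 1e-9 * np.eye(n), dst_pts - src_pts)
        r = np.linalg.norm(vertices[:, None, :] - src_pts[None, :, :], axis=2)
        return vertices + r @ weights          # features land on their targets

    # Hypothetical: three feature points located in the photos pull the mesh.
    generic = np.random.rand(500, 3)
    src = generic[[10, 200, 420]]
    dst = src + np.array([[0.02, 0, 0], [0, 0.01, 0], [0, 0, -0.015]])
    personalized = rbf_warp(generic, src, dst)
    ```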

  • A software system to carry out virtual experiments on human motion

    Page(s): 16 - 23

    This work presents a simulation system designed to carry out virtual experiments on human motion. 3D visualization, automatic code generation and generic control design patterns provide biomechanicians and physicians with dynamic simulation tools. The paper first deals with the design of mechanical models of human beings. It then presents design patterns for controllers of an upper-limb model with 11 degrees of freedom, illustrated by two example controllers. The paper also presents a user-friendly interface, dedicated to medical users, that makes it possible to enter commands in natural language.
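
    A minimal stand-in for one such controller, sketched as a PD servo on a single joint of the upper-limb model; the gains, inertia and time step are invented, not taken from the paper.

    ```python
    import numpy as np

    def pd_step(theta, omega, target, kp=60.0, kd=8.0, inertia=0.05, dt=0.002):
        """One semi-implicit Euler step of a PD controller driving a single
        joint toward a target angle (toy stand-in for a controller pattern)."""
        torque = kp * (target - theta) - kd * omega
        omega += torque / inertia * dt
        theta += omega * dt
        return theta, omega

    theta, omega = 0.0, 0.0
    for _ in range(2000):                      # 4 s of simulation at 500 Hz
        theta, omega = pd_step(theta, omega, target=np.pi / 3)
    print(round(theta, 3))                     # converges near pi/3 ~ 1.047
    ```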

  • Realistic articulated character positioning and balance control in interactive environments

    Page(s): 160 - 168

    The paper addresses the problem of articulated virtual character positioning using dynamic and kinematic constraints, with an emphasis on balancing. The balance of the figure is controlled using a new technique based on the precise manipulation of joint torques: each joint contributes to the control in proportion to its influence on the balance. The method allows one to control the balance either through direct adjustment of the position of the center of mass, when the environmental force interaction is negligible, or through adjustment of the root joint torque, when the center of mass does not provide any useful information. The control of postures is reduced to a few intuitive parameters, greatly simplifying the animator's work. Results are quite realistic, and some are presented.
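
    A reduced sketch of the stated principle, torque in proportion to influence on balance, on a planar ankle-knee-hip chain: each joint's corrective torque is weighted by the numerically estimated sensitivity of the horizontal center of mass to that joint. All constants are hypothetical.

    ```python
    import numpy as np

    LENGTHS = np.array([0.5, 0.45, 0.5])       # shank, thigh, trunk (planar)
    MASSES = np.array([4.0, 8.0, 40.0])

    def com_x(angles):
        """Horizontal center of mass of a planar ankle-knee-hip chain."""
        x = 0.0; phi = 0.0; com = 0.0
        for l, m, a in zip(LENGTHS, MASSES, angles):
            phi += a
            com += m * (x + 0.5 * l * np.sin(phi))   # segment midpoint mass
            x += l * np.sin(phi)
        return com / MASSES.sum()

    def balance_torques(angles, target_x=0.0, gain=200.0, h=1e-5):
        """Each joint contributes in proportion to its influence on balance:
        torque_i ~ -(dCOMx/dtheta_i) * (COMx - target)."""
        err = com_x(angles) - target_x
        grad = np.array([(com_x(angles + h * e) - com_x(angles - h * e)) / (2 * h)
                         for e in np.eye(len(angles))])
        return -gain * err * grad

    print(balance_torques(np.array([0.05, -0.1, 0.08])))
    ```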

  • Fast synthetic vision, memory, and learning models for virtual humans

    Page(s): 118 - 127

    The paper presents a simple and efficient method of modeling synthetic vision, memory and learning for autonomous animated characters in real-time virtual environments. The model is efficient in terms of both storage requirements and update times, and can be flexibly combined with a variety of higher-level reasoning modules or complex memory rules. The design is inspired by research in motion planning, control and sensing for autonomous mobile robots. We apply this framework to the problem of quickly synthesizing collision-free motions for animated human figures in changing virtual environments, given navigation goals. We combine a low-level path planner, a path-following controller and cyclic motion capture data to generate the underlying animation. Graphics rendering hardware is used to simulate the visual perception of a character, providing a feedback loop to the overall navigation strategy. The synthetic vision and memory update rules can handle dynamic environments where objects appear, disappear, or move around unpredictably. The resulting model is suitable for a variety of real-time applications involving autonomous animated characters.
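
    A toy version of such memory update rules, assuming the rendered-visibility step is given: per-cell occupancy and last-seen times are overwritten on each observation, and a staleness query flags information old enough to distrust in a changing world.

    ```python
    import numpy as np

    class GridMemory:
        """Sketch of a character's spatial memory: per cell, remember when it
        was last seen and whether it looked occupied; stale cells may have
        changed since, because objects move around unpredictably."""
        def __init__(self, shape):
            self.occupied = np.zeros(shape, bool)
            self.last_seen = np.full(shape, -1)

        def observe(self, t, visible_cells, occupied_flags):
            for cell, occ in zip(visible_cells, occupied_flags):
                self.occupied[cell] = occ   # overwrite: world may have changed
                self.last_seen[cell] = t

        def stale(self, t, horizon):
            """Cells whose information is older than the memory horizon."""
            return (self.last_seen >= 0) & (t - self.last_seen > horizon)

    mem = GridMemory((8, 8))
    mem.observe(t=0, visible_cells=[(1, 2), (3, 4)], occupied_flags=[True, False])
    print(mem.stale(t=20, horizon=10).sum())   # both observations are now stale
    ```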

  • Collaborative animation over the network

    Page(s): 107 - 116

    The continuously increasing complexity of computer animations makes it necessary to rely on the knowledge of various experts to cover the different areas of computer graphics and animation. This has led to increasing research effort concerning cooperative working over the Internet. However, it still requires substantial effort and time to combine different animation techniques in a common virtual environment. When trying to perform collaborative animation over a network, we often face the problem of having to combine animation systems and applications based on different software and hardware and using incompatible data structures. We present an approach, based on a client-server architecture and employing a VRML-based language as a common interchange format, that allows inhomogeneous systems to be easily incorporated into a collaborative animation. The applications are freed from employing plug-ins or libraries to link into a common animation platform; they keep a local copy of the global scene and only need the ability to export their internal data representation into the so-called "PaVRML" language, which is used to exchange data and synchronize clients. This approach allows a number of practitioners to share their know-how within a common animation without the huge amount of work needed to port their applications to a common platform, and it is what makes it possible to combine the capabilities of different animation systems into a single complex animation in the first place. We also investigate solutions for optimizing the network load for real-time applications. We present preliminary results and discuss future developments of this ongoing work.
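
    The synchronization idea reduces to something like the sketch below: each client keeps a local scene copy and the server merely relays updates expressed in a common format. This is an in-process stand-in with invented names; the real system exchanges PaVRML over the network.

    ```python
    class AnimationServer:
        """Relays scene updates between heterogeneous clients."""
        def __init__(self):
            self.clients = []

        def register(self, client):
            self.clients.append(client)

        def submit(self, sender, update):
            for client in self.clients:
                if client is not sender:
                    client.apply(update)

    class AnimationClient:
        def __init__(self, server):
            self.scene = {}                    # local copy of the global scene
            self.server = server
            server.register(self)

        def set_transform(self, node, transform):
            self.scene[node] = transform       # change own copy...
            self.server.submit(self, (node, transform))  # ...then synchronize

        def apply(self, update):
            node, transform = update
            self.scene[node] = transform

    server = AnimationServer()
    modeler, renderer = AnimationClient(server), AnimationClient(server)
    modeler.set_transform("arm", (0.0, 1.2, 0.0))
    print(renderer.scene["arm"])               # (0.0, 1.2, 0.0)
    ```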

  • Visible volume buffer for efficient hair expression and shadow generation

    Page(s): 58 - 65

    Much research has been conducted on hair modeling and hair rendering, with considerable success. However, the immense number of hair strands imposes severe memory and CPU time requirements. To reduce the memory and time needed for hair modeling and rendering, a visible volume buffer is proposed. Instead of using thousands of thin hairs, memory usage and hair modeling time can be reduced by using coarse background hairs and fine surface hairs. The background hairs are constructed from thick hairs. To improve the look of the hair model, the background hair near the surface is broken down into numerous thin hairs and rendered. The visible volume buffer is used to determine the surface hairs. Rendering the background and surface hairs is found to be more than four times faster than the conventional hair model, with little loss in image quality. The visible volume buffer is also used to produce shadows for the hair model.
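
    A reduced sketch of the visibility decision, assuming an axis-aligned view and a precomputed hair-density voxel grid: marching each view ray front to back and keeping only the first occupied voxel identifies where background hairs are worth refining into surface hairs.

    ```python
    import numpy as np

    def visible_volume_buffer(density, threshold=0.0):
        """Viewing down the z axis, only the first occupied voxel along each
        ray is visible; only hairs in those voxels need fine refinement."""
        visible = np.zeros(density.shape, bool)
        nx, ny, nz = density.shape
        for i in range(nx):
            for j in range(ny):
                for k in range(nz):            # march the ray front to back
                    if density[i, j, k] > threshold:
                        visible[i, j, k] = True
                        break                  # everything behind is hidden
        return visible

    hair = np.zeros((4, 4, 4))
    hair[:, :, 1] = 1.0                        # a slab of background hair
    hair[:, :, 2] = 1.0                        # hidden behind the slab
    vis = visible_volume_buffer(hair)
    print(vis[:, :, 1].all(), vis[:, :, 2].any())   # True False
    ```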

  • Automatically generating virtual guided tours

    Page(s): 99 - 106

    Since the introduction of VRML, 3D Web browsing has become a popular form of networked virtual reality. However, it is still a great challenge for a novice user equipped with a regular desktop PC to navigate in most virtual worlds of moderate complexity. The main problem is that the user typically provides low-level navigation control with a 2D mouse, while the display frame rate is not high enough to close this servo loop. We consider an alternative metaphor: the user specifies locations of interest on a 2D layout map, and the system automatically generates an animated guided tour of the virtual architectural environment. Specifically, we aim to generate animations of customizable tour paths and their associated human/camera motions in an on-line manner, according to high-level user inputs. We describe an auto-navigation system in which several efficient path-planning algorithms adapted from robotics are used. The system has been implemented in Java and adopts common VRML browsers as its 3D interface. We use the geometric model of our departmental building as an example to demonstrate the efficiency and effectiveness of the system.
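
    A stand-in for the planners used: breadth-first search on the 2D layout map. The paper's algorithms are more refined, but the interface is the same idea, cells of the map in, a tour path out.

    ```python
    from collections import deque

    def plan_path(grid, start, goal):
        """Shortest 4-connected path on a 2D layout map (BFS), or None."""
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell); cell = prev[cell]
                return path[::-1]
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                i, j = nxt
                if (0 <= i < len(grid) and 0 <= j < len(grid[0])
                        and grid[i][j] == 0 and nxt not in prev):
                    prev[nxt] = cell
                    queue.append(nxt)
        return None

    floor = [[0, 0, 0, 0],
             [1, 1, 0, 1],
             [0, 0, 0, 0]]                     # 1 = wall in the 2D layout map
    print(plan_path(floor, (0, 0), (2, 0)))
    ```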

  • Emotionally expressive agents

    Page(s): 48 - 57

    The ability to express emotions is important for creating believable interactive characters. To simulate emotional expressions in an interactive environment, an intelligent agent needs both an adaptive model for generating believable responses and a visualization model for mapping emotions into facial expressions. Recent advances in intelligent agents and in facial modeling have produced effective algorithms for these tasks independently. We describe a method for integrating these algorithms to create an interactive simulation of an agent that produces appropriate facial expressions in a dynamic environment. Our approach to combining a model of emotions with a facial model represents a first step towards developing the technology of a truly believable interactive agent, which has a wide range of applications, from intelligent training systems to video games and animation tools.
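
    A minimal sketch of the visualization side: emotion intensities are blended through a fixed basis into a few facial-expression parameters. All values and parameter names here are hypothetical.

    ```python
    import numpy as np

    # Hypothetical mapping: each basic emotion contributes a preset displacement
    # of a few facial control parameters (brow raise, mouth corner, eyelid).
    BASIS = {"joy":     np.array([0.1, 0.8, 0.0]),
             "sadness": np.array([0.3, -0.6, 0.2]),
             "anger":   np.array([-0.7, -0.3, 0.4])}

    def expression(emotions):
        """Blend current emotion intensities (0..1) into one set of
        facial-expression parameters, as a visualization model must."""
        params = np.zeros(3)
        for name, intensity in emotions.items():
            params += intensity * BASIS[name]
        return np.clip(params, -1.0, 1.0)

    print(expression({"joy": 0.7, "anger": 0.2}))
    ```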

  • Virtual reality simulation modeling for a haptic glove

    Page(s): 195 - 200

    The recent addition of force and touch feedback to virtual reality simulations has enhanced their realism. Research on haptic interfaces now extends to the physical modeling of contact surfaces, object hardness, surface deformation, etc., which is especially needed when dextrous manipulation of virtual objects is concerned. The paper describes a VR system using a haptic glove (the Rutgers Master II) connected to a PC workstation, and a new method for modeling virtual hand haptic interactions. The application example presented here is an orthopedic rehabilitation library, whose exercises involve interactions with dynamic objects and physical modeling of plasticity.
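
    A common reduced form of such hand-object contact modeling is a penalty-based force; the sketch below computes a fingertip response against a virtual ball. The stiffness and geometry are invented, not the paper's model.

    ```python
    import numpy as np

    def contact_force(fingertip, center, radius, stiffness=400.0):
        """Penalty-based haptic response for a fingertip against a virtual
        ball: force proportional to penetration depth, along the normal."""
        offset = fingertip - center
        dist = np.linalg.norm(offset)
        depth = radius - dist
        if depth <= 0.0:
            return np.zeros(3)                 # no contact, no feedback
        normal = offset / dist
        return stiffness * depth * normal      # sent to the glove's actuators

    print(contact_force(np.array([0.0, 0.048, 0.0]), np.zeros(3), radius=0.05))
    ```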

  • Recursive dynamics and optimal control techniques for human motion planning

    Page(s): 220 - 234

    We present an efficient optimal-control-based approach to simulating dynamically correct human movements. We model virtual humans as kinematic chains consisting of serial, closed-loop and tree structures. To overcome the complexity limitations of the classical Lagrangian formulation and to include knowledge from biomechanical studies, we have developed a minimum-torque motion planning method based on optimal control theory within a recursive dynamics framework. Our dynamic motion planning methodology achieves high efficiency regardless of the figure topology; as opposed to a Lagrangian formulation, it obviates the need to reformulate the dynamic equations for differently structured articulated figures. We then use a quasi-Newton-based nonlinear programming technique, which achieves superlinear convergence, to solve the minimum-torque human motion planning problem. We use the screw-theoretical method to compute analytically the necessary gradients of motion and force, which gives a better-conditioned optimization computation and allows a robust and efficient implementation of our method. Cubic spline functions are used to make the search space for an optimal solution finite. We demonstrate the efficacy of our proposed method on a variety of human motion tasks involving open and closed-loop kinematic chains. Our models are built using parameters chosen from an anthropomorphic database. The results demonstrate that our approach generates natural-looking and physically correct human motions.
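
    A single-link toy version of the optimization setup: a cubic-spline-parameterized joint trajectory, integrated squared torque as cost, and a quasi-Newton (BFGS) search via scipy with numerical gradients. The paper itself uses recursive dynamics and analytic screw-theory gradients on full figures; all constants below are invented.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize

    # Toy single-link arm: choose interior spline knots of a joint trajectory
    # that minimize integrated squared torque between two rest postures.
    I, M, G, L = 0.3, 2.0, 9.81, 0.4           # inertia, mass, gravity, arm
    T = np.linspace(0.0, 1.0, 101)             # evaluation grid over 1 s
    KNOT_T = np.linspace(0.0, 1.0, 6)          # fixed knot times, free values

    def torque_cost(interior):
        knots = np.concatenate(([0.0], interior, [np.pi / 2]))  # boundary angles
        q = CubicSpline(KNOT_T, knots, bc_type="clamped")       # rest to rest
        tau = I * q(T, 2) + M * G * L * np.sin(q(T))            # inverse dynamics
        return np.trapz(tau ** 2, T)

    x0 = np.linspace(0.0, np.pi / 2, 6)[1:-1]  # straight-line initial guess
    res = minimize(torque_cost, x0, method="BFGS")  # quasi-Newton search
    print(res.fun <= torque_cost(x0))          # optimized cost is no worse
    ```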

  • Alhambra: a system for producing 2D animation

    Page(s): 38 - 47

    There is great interest in producing computer animation that looks like classic 2D animation. Flat shading, silhouettes and inside contour lines are visual characteristics that, joined with flexible expressiveness, constitute the basic elements of 2D animation. We have developed methods for obtaining the silhouettes and interior curves of polygonal models. "Virtual lights" is a new method for modeling the visualization of inside curves. The required flexibility of the model is achieved through hierarchical nonlinear transformations.
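
    The classic silhouette test behind this kind of rendering can be sketched directly: an edge is on the silhouette when its two adjacent faces disagree on whether they face the viewer. This is the textbook criterion, not necessarily the paper's exact implementation.

    ```python
    import numpy as np

    def silhouette_edges(vertices, faces, view_dir):
        """Edges shared by one front-facing and one back-facing triangle."""
        facing = {}
        edge_faces = {}
        for f, (a, b, c) in enumerate(faces):
            n = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
            facing[f] = np.dot(n, view_dir) < 0.0      # front-facing test
            for e in ((a, b), (b, c), (c, a)):
                edge_faces.setdefault(tuple(sorted(e)), []).append(f)
        return [e for e, fs in edge_faces.items()
                if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]

    # A tetrahedron with outward-oriented faces, viewed along -z:
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    tets = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (0, 3, 2)]
    print(silhouette_edges(verts, tets, view_dir=np.array([0.0, 0.0, -1.0])))
    ```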

  • Skin aging estimation by facial simulation

    Page(s): 210 - 219

    We propose a layered facial simulation model for skin aging with wrinkles, which includes muscle, connective tissue and skin layers. Our aim is to simulate relevant facial animation and aging under the guidance of general facial tissue anatomy, so that the model can be extended to medical and cosmetic applications. B-spline muscle patches are automatically adapted to each individual face by mapping an anatomical facial muscle image. Connective tissues are simulated as simple springs, with rest lengths equal to the hypodermis thickness, that constrain skin movement. Facial skin deformation and aging are estimated with an elaborated biomechanical model that accounts for large-strain deformation and wrinkle formation. Finally, multi-layered color and bump texture mapping is used to represent wrinkle forms and to render an aged face.
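
    The connective-tissue springs reduce to a standard spring force with a hypodermis-thickness rest length; a minimal sketch with invented constants.

    ```python
    import numpy as np

    def connective_force(skin_pt, anchor_pt, rest_length, stiffness=30.0):
        """Spring anchored to the muscle layer with hypodermis-thickness rest
        length, constraining skin movement (a reduced piece of the model)."""
        d = skin_pt - anchor_pt
        length = np.linalg.norm(d)
        if length < 1e-12:
            return np.zeros(3)
        return -stiffness * (length - rest_length) * d / length

    # Skin point pulled back toward its anchor when stretched past 4 mm:
    print(connective_force(np.array([0.0, 0.0, 0.006]), np.zeros(3), 0.004))
    ```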

  • Virtual people: capturing human models to populate virtual worlds

    Page(s): 174 - 185

    A new technique is introduced for automatically building recognisable moving 3D models of individual people. Realistic modelling of people is essential for advanced multimedia, augmented reality and immersive virtual reality. Current systems for whole-body model capture are based on active 3D sensing to measure the shape of the body surface; such systems are prohibitively expensive and do not capture high-quality photo-realistic colour, resulting in geometrically accurate but unrealistic human models. The goal of this research is automatic low-cost modelling of people, suitable for personalised avatars to populate virtual worlds. A model-based approach is presented for the automatic reconstruction of recognisable avatars from a set of low-cost colour images of a person taken from four orthogonal views. A generic 3D human model represents both the human shape and the kinematic joint structure. The shape of a specific person is captured by mapping 2D silhouette information from the orthogonal-view colour images onto the generic 3D model. Colour texture mapping is achieved by projecting the set of images onto the deformed 3D model. The result is a recognisable 3D facsimile of an individual person, suitable for articulated movement in a virtual world. The system is low cost, requires single-shot capture, is reliable for large variations in shape and size, and can cope with clothing of moderate complexity.
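
    A drastically reduced sketch of silhouette-driven shaping, front view only: scale each generic vertex's lateral offset by the ratio of the person's silhouette width to the generic model's width at that height. The paper's mapping is far more complete; all data below is invented.

    ```python
    import numpy as np

    def fit_to_silhouette(vertices, widths_person, widths_generic, heights):
        """Scale lateral offsets (x) of generic-model vertices by the ratio of
        silhouette widths at each height (y), interpolated between samples."""
        out = vertices.copy()
        ratio = np.interp(vertices[:, 1], heights,
                          np.asarray(widths_person) / np.asarray(widths_generic))
        out[:, 0] *= ratio
        return out

    generic = np.array([[0.20, 0.0, 0.0], [0.25, 1.0, 0.0], [0.15, 1.7, 0.0]])
    heights = [0.0, 1.0, 1.7]                  # hip, chest, shoulder levels
    fitted = fit_to_silhouette(generic, [0.22, 0.30, 0.14],
                               [0.20, 0.25, 0.15], heights)
    print(fitted)
    ```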

  • High level specification and control of communication gestures: the GESSYCA system

    Page(s): 24 - 35

    This paper describes a complete system for the specification and generation of communication gestures. A high-level language for the specification of hand-arm communication gestures has been developed. This language is based both on a discrete description of space and on a movement decomposition inspired by sign language gestures. Communication gestures are represented through symbolic commands, which can be described by qualitative data and translated into spatiotemporal targets driving a generation system. Such an approach is possible for the class of generation models controlled through key-point information. The generation model used in our approach is composed of a set of sensory-motor servo loops. Each of these models resolves in real time the inversion of the servo loop from the direct specification of location targets, while satisfying psycho-motor laws of biological movement. The whole control system is applied to synthesis, and a validation of the synthesized movements is presented.
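
    The psycho-motor law most often cited for such point-to-point movements is the minimum-jerk profile; below is a sketch of one segment driven by a spatiotemporal target. The paper's servo-loops are more elaborate; timings and positions here are invented.

    ```python
    import numpy as np

    def minimum_jerk(x0, xf, duration, t):
        """Classic minimum-jerk point-to-point profile: smooth bell-shaped
        velocity, zero velocity and acceleration at both endpoints."""
        s = np.clip(t / duration, 0.0, 1.0)
        return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

    # Hand moves to a target location of the discretized gesture space in 0.4 s:
    for t in (0.0, 0.1, 0.2, 0.3, 0.4):
        print(t, minimum_jerk(np.zeros(3), np.array([0.3, 0.1, 0.2]), 0.4, t))
    ```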

  • Virtual input devices based on motion capture and collision detection

    Page(s): 201 - 209

    The paper proposes virtual input devices based on collision detection for the easy construction of interactive 3D graphics applications that use a motion capture system as a real-time input device. Each virtual input device is composed of several collision sensor objects and an actuator object. These objects are software components represented as visible objects that users can manipulate on a computer screen. Each virtual input device has a metaphor associated with its role, determined by the location and composition structure of its components. It is therefore possible to define various virtual input devices simply by combining several sensor objects and an actuator object through direct manipulation on a computer screen. The paper presents a realization mechanism and actual examples of virtual input devices.
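
    The composition idea can be sketched directly: sensor spheres plus an actuator callback form a "button" that motion-capture markers can press. All names and sizes are hypothetical.

    ```python
    import numpy as np

    class CollisionSensor:
        """Sphere that fires when a captured body part enters it."""
        def __init__(self, center, radius):
            self.center, self.radius = np.asarray(center, float), radius

        def hit(self, point):
            return np.linalg.norm(np.asarray(point) - self.center) < self.radius

    class VirtualInputDevice:
        """Sensors plus an actuator callback, built by composition rather
        than by writing device-specific code."""
        def __init__(self, sensors, actuator):
            self.sensors, self.actuator = sensors, actuator

        def update(self, markers):
            """Feed one frame of motion-capture marker positions."""
            if any(s.hit(m) for s in self.sensors for m in markers):
                self.actuator()

    button = VirtualInputDevice([CollisionSensor([0.0, 1.0, 0.5], 0.1)],
                                actuator=lambda: print("button pressed"))
    button.update(markers=[[0.02, 1.05, 0.48]])  # hand marker inside the sensor
    ```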

  • Human motion coordination: example of a juggler

    Page(s): 148 - 159

    The paper introduces a novel method for the coordination of human motion based on planning and AI techniques. Motions are considered as black boxes that are activated according to pre-conditions and produce post-conditions in a hybrid continuous and discrete world. Each part of the body is an autonomous entity that cooperates with the others depending on global criteria, such as occupation rate and distance to a goal (common to all the entities). This technique makes it possible to easily specify and solve the motion coordination problem of a juggler dealing with a varying number of balls in real time.
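
    A toy version of the pre-/post-condition activation scheme, in the style of a tiny STRIPS planner; condition and motion names are illustrative, not the paper's.

    ```python
    # Motions as black boxes with pre- and post-conditions: an entity fires
    # any motion whose preconditions currently hold in the world state.
    MOTIONS = {
        "throw_right": {"pre": {"ball_in_right"}, "add": {"ball_in_air"},
                        "delete": {"ball_in_right"}},
        "catch_left":  {"pre": {"ball_in_air", "left_free"},
                        "add": {"ball_in_left"},
                        "delete": {"ball_in_air", "left_free"}},
    }

    def tick(state):
        """Fire every motion whose preconditions hold; motions activated
        earlier in the tick can enable later ones."""
        for name, m in MOTIONS.items():
            if m["pre"] <= state:
                state = (state - m["delete"]) | m["add"]
                print("activated:", name)
        return state

    state = {"ball_in_right", "left_free"}
    state = tick(state)        # throw_right, then catch_left become applicable
    print(state)               # {'ball_in_left'}
    ```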

  • A hybrid elastic model allowing real-time cutting, deformations and force-feedback for surgery training and simulation

    Page(s): 70 - 81

    We describe the basic components of a surgery simulator prototype developed at INRIA. After a short presentation of the geometric modeling of anatomical structures from medical images, we focus on the physical modeling components, which must allow realistic interaction with surgical instruments. We present three physical models that are well suited to surgery simulation, all based on linear elasticity theory and finite element modeling. The first model pre-computes the deformations and forces applied on a finite element model, thereby allowing the deformation of large structures in real time. Unfortunately, it does not allow any topology change of the mesh, and therefore forbids the simulation of cutting during surgery. The second physical model is based on a dynamic law of motion and makes it possible to simulate cutting and tearing. We call this model "tensor-mass", since it is analogous to spring-mass models for linear elasticity. This model allows volumetric deformation and cutting, but has to be applied to a limited number of nodes to run in real time. Finally, we propose a method for combining these two approaches into a hybrid model that allows real-time deformation and cutting of sufficiently large anatomical structures. This model has been implemented in a simulation system, and real-time experiments are described and illustrated.
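
    The first model's key property, that linear elasticity makes the run-time response a precomputed linear map, can be sketched on a 1D chain of nodes standing in for the 3D finite element model; all constants are invented.

    ```python
    import numpy as np

    # For linear elasticity the response to any force is a precomputed linear
    # map, so run-time deformation is just a matrix-vector product.
    n, k = 5, 100.0                            # free nodes, spring stiffness
    K = np.zeros((n, n))                       # assembled stiffness matrix
    for i in range(n):
        K[i, i] = 2 * k
        if i > 0:
            K[i, i - 1] = K[i - 1, i] = -k
    COMPLIANCE = np.linalg.inv(K)              # precomputed once, offline

    def deform(forces):
        """Real-time phase: displacement = compliance @ force. No topology
        change is possible, hence the paper's tensor-mass model for cuts."""
        return COMPLIANCE @ forces

    tool_force = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # instrument pushes node 2
    print(deform(tool_force))
    ```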

  • Virtual human animation based on movement observation and cognitive behavior models

    Page(s): 128 - 137

    Automatically animating virtual humans with actions that reflect real human motions is still a challenge. We present a framework for animation that is based on utilizing empirical and validated data from movement observation and cognitive psychology. To illustrate it, we demonstrate a mapping from effort motion factors onto expressive arm movements, and from cognitive data to autonomous attention behaviors. We conclude with a discussion of the implications of this approach for the future of real-time virtual human animation.
