ACM SIGGRAPH ASIA 2011 - COURSE MATERIAL
Perceptually Inspired Methods for Naturally Navigating Virtual Worlds
Frank Steinicke
Immersive Media Group
University of Würzburg, Germany
Anatole Lécuyer†
BUNRAKU Research Team
INRIA Rennes, France
Betty Mohler ‡
Perception and Action in Virtual Environments
MPI for Biological Cybernetics, Germany
Mary C. Whitton§
Department of Computer Science
University of North Carolina at Chapel Hill, USA
ABSTRACT
In recent years, many advances have enabled users to navigate
large-scale graphical worlds in increasingly natural ways. The
entertainment industry is increasingly providing visual and
body-based cues to its users to increase the naturalness of their
navigational experience. However, so far none of the existing
solutions fully supports the most natural ways of locomotion through
virtual worlds, and thus techniques and technologies must be
considered that take advantage of insights into human perceptual
sensitivity.
In this context, by far the most natural way to move through the
real world is via a full body experience where we receive sensory
stimulation to all of our senses, i.e., when walking, running, biking
or driving. With some exciting technological advances, people
are now beginning to get this same full body sensory experience
when navigating computer generated three-dimensional environments
[11]. Enabling such an active and dynamic ability to navigate
through large-scale virtual scenes is of great interest for many 3D
applications demanding locomotion, such as video games, edutainment,
simulation, rehabilitation, military, tourism or architecture.
Today it is still mostly impossible to freely navigate through
computer-generated environments in exactly the same way as in the
real world [1]; instead, rather unnatural and artificial approaches
are usually applied, which provide only a visual sensation of
self-motion.
However, while moving in the real world, sensory information such as
vestibular, proprioceptive, and visual information creates consistent
multi-sensory cues that indicate one's own motion, i.e., acceleration,
speed, and direction of travel [5]. Computer graphics environments
were initially restricted to visual displays combined with interaction
devices, e.g., joystick or mouse, for providing (often unnatural)
inputs to generate self-motion [2]. Today, more and more interaction
devices, e.g., Nintendo's Wii, Microsoft's Kinect, or Sony's EyeToy,
enable intuitive and natural interaction. In this context, many
research groups are investigating natural, multimodal methods of
generating self-motion in virtual worlds based on this consumer
hardware.
e-mail: frank.steinicke@uni-wuerzburg.de
†e-mail: anatole.lecuyer@irisa.fr
‡e-mail: betty.mohler@tuebingen.mpg.de
§e-mail: whitton@cs.unc.edu

An obvious approach is to transfer the user's tracked head movements
to changes of the camera in the virtual world by means of a
one-to-one mapping. Then, one meter of movement in the real world
is mapped to one meter of movement of the virtual camera in the
corresponding direction in the virtual environment (VE). This
technique has the drawback that the user's movements are restricted
by the limited range of the tracking sensors, e.g., optical cameras,
and usually by a rather small workspace in the real world [2]. The size of the virtual
world often differs from the size of the tracked space so that a
straightforward implementation of omni-directional and unlimited
walking is not possible [12, 2]. Thus, creative virtual locomotion
methods (e.g., redirected walking, walking-in-place, chairs as
joysticks, visual indications of natural movement) have been used that
enable the experience of traveling over large distances in the virtual
world while remaining within a relatively small space in the real
world [7, 6].
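The one-to-one mapping described above, and its gain-based generalization as used by redirected walking, can be sketched in a few lines. This is a minimal illustration, not code from the course material; the function name and list-based vector representation are our own.

```python
def map_tracked_to_virtual(real_pos, real_prev, virt_prev, translation_gain=1.0):
    # Displacement of the tracked head since the previous frame (meters).
    real_delta = [c - p for c, p in zip(real_pos, real_prev)]
    # With translation_gain = 1.0 this is the one-to-one mapping:
    # one meter of real movement yields one meter of virtual camera
    # movement in the corresponding direction. Redirected walking
    # applies gains != 1.0 so that large virtual distances can be
    # covered inside a small tracked workspace.
    return [v + translation_gain * d for v, d in zip(virt_prev, real_delta)]
```

Each frame, the previous and current tracked head positions are fed in together with the previous virtual camera position; choosing the gain per frame (within perceptually undetectable bounds) is what distinguishes redirected walking from the plain one-to-one mapping.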
In recent years, two omni-directional treadmills have been built and
are used in the research community (at the University of Louisiana and
the Max Planck Institute for Biological Cybernetics). Researchers can
now explore infinite virtual worlds while choosing to navigate in any
direction. Using these treadmills, scientists can determine which
aspects of the body-based senses are important for different aspects
of entertainment, training, and learning.
In this course we will present an overview of the development of
locomotion interfaces for computer-generated virtual environments,
ranging from desktop-based camera manipulations simulating walking
and different walking metaphors for virtual reality (VR)-based
environments, to state-of-the-art hardware-based solutions that
enable omni-directional and unlimited real locomotion through virtual
worlds. As the computer graphics industry advances toward
increasingly natural interaction, computer graphics researchers and
professionals will benefit from this course by deepening their
understanding of human perception and of how this knowledge can be
applied to enable the most natural interaction technique of all:
navigating through the world.
1 COURSE DURATION
Half-day course (4×40 minutes presentations + 4×10 minutes
discussions)
2 COURSE OUTLINE
We will present an in-depth course on locomotion techniques and
approaches that allow users to virtually travel through VEs. The
course will cover early mouse-/keyboard-based techniques and advanced
camera motion approaches that support the sensation of walking in
desktop environments [10]. Furthermore, we will present different
VR-based walking metaphors and advanced multimodal omni-directional
hardware devices supporting walking through the virtual world.
Participants will learn how the sensation of walking in VEs has
evolved and which approaches for natural walking are currently
available [1]. In the course we will summarize