1 Introduction
Novel user interfaces such as the Microsoft Kinect allow users to
actively move their bodies in order to interact with immersive video
games. Hence, users may navigate by natural, multimodal methods
of generating self-motion. For instance, [LaViola and Katzourin 2007]
developed several body- and foot-based metaphors for hands-free
navigation in IVEs, including a leaning technique for traveling short
and medium distances and a floor-based world-in-miniature for
traveling large distances. However, real walking is the most basic
and intuitive way of moving and, therefore, preserving this ability
is of great interest.
An obvious approach to supporting real walking in such setups is to
use a one-to-one mapping of the walked trajectory to the virtual
camera. Since the interaction space is restricted by the limited range
of the tracking sensors, virtual locomotion methods are needed that
enable walking over large distances in the video game while the user
remains within a relatively small space in the real world. Several
approaches scale translational motions and thereby avoid discomfort
due to lateral bumping. Other approaches, such as teleportation, use
viewpoint transitions between two locations. However, [Bowman et al.
1997] found that an abrupt change of view is disorienting and
suggested using smooth transitions instead.
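
To make the difference between the one-to-one mapping and scaled translations concrete, the following is a minimal sketch (the function and parameter names are hypothetical, not from the paper): the tracked head position drives the virtual camera directly when the gain is 1.0, while a gain greater than 1.0 lets the user cover larger virtual distances within the limited tracking space.

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def map_tracked_to_virtual(tracked: Vec3, origin: Vec3, gain: float = 1.0) -> Vec3:
    """Map a tracked head position to a virtual camera position.

    gain = 1.0 reproduces the one-to-one mapping described above;
    gain > 1.0 scales translational motions (a hypothetical illustration
    of the translation-scaling approaches mentioned in the text).
    """
    return Vec3(
        origin.x + gain * tracked.x,
        origin.y + tracked.y,          # keep the real head height unscaled
        origin.z + gain * tracked.z,
    )

# Example: a 3 m real-world step forward becomes 6 m in the virtual scene.
camera = map_tracked_to_virtual(Vec3(0.0, 1.7, 3.0), Vec3(0.0, 0.0, 0.0), gain=2.0)
print(camera)
```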