VR

[IROS19] Learning Virtual Grasp with Failed Demonstrations via Bayesian Inverse Reinforcement Learning

We propose Bayesian Inverse Reinforcement Learning with Failure (BIRLF), which makes use of failed demonstrations that are often ignored or filtered out by previous methods due to the difficulty of incorporating them alongside the successful ones. …
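As a rough, hedged illustration of how failed demonstrations can enter a Bayesian IRL posterior (a generic sketch, not necessarily the BIRLF formulation), successful trajectories can pull the inferred reward toward the behavior they exhibit while failed trajectories push it away:

$$
P(R \mid \mathcal{D}^{+}, \mathcal{D}^{-}) \;\propto\; P(R)\,\prod_{\xi \in \mathcal{D}^{+}} \exp\!\Big(\beta \sum_{(s,a)\in\xi} R(s,a)\Big)\,\prod_{\xi \in \mathcal{D}^{-}} \exp\!\Big(-\beta \sum_{(s,a)\in\xi} R(s,a)\Big)
$$

Here $\mathcal{D}^{+}$ and $\mathcal{D}^{-}$ denote successful and failed demonstrations, $\beta$ is a confidence (inverse-temperature) parameter, and $P(R)$ is the reward prior; the paper's exact likelihood may differ, so this only sketches the general idea of using both demonstration types.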

[TURC19] VRGym: A Virtual Testbed for Physical and Interactive AI

We propose VRGym, a virtual reality testbed for realistic human-robot interaction. Unlike existing toolkits and virtual reality environments, VRGym emphasizes building and training both physical and interactive agents for robotics, …

[VR2018] Spatially Perturbed Collision Sounds Attenuate Perceived Causality in 3D Launching Events

When a moving object collides with an object at rest, people immediately perceive a causal event, i.e., the first object has launched the second object forward. However, when the second object's motion is delayed, or is accompanied by a collision …

[TVCG16] The Martian: Examining Human Physical Judgments Across Virtual Gravity Fields

This paper examines how humans adapt to novel physical situations with unknown gravitational acceleration in immersive virtual environments. We designed four virtual reality experiments with different tasks for participants to complete: strike a ball …

[SIGGRAPHAsia16Workshop] A Virtual Reality Platform for Dynamic Human-Scene Interaction

Both synthetic static and simulated dynamic 3D scene data are highly useful in the fields of computer vision and robot task planning. Yet their virtual nature makes it difficult for real agents to interact with such data in an intuitive way. Thus …