Wednesday, 2 December

Opening Remarks (Presented live via Zoom)
Chaired by Gerd Bruder (University of Central Florida)

Keynote (Presented live via Zoom)

Short Bio
Benjamin Lok
Benjamin Lok is a Professor in the Computer and Information Science and Engineering Department at the University of Florida and co-founder of Shadow Health, an education company. Professor Lok's research focuses on using virtual humans and mixed reality to train communication skills within the areas of virtual environments, human-computer interaction, and computer graphics. He received a Ph.D. (2002, advisor: Dr. Frederick P. Brooks, Jr.) and an M.S. (1999) from the University of North Carolina at Chapel Hill, and a B.S. in Computer Science (1997) from the University of Tulsa. He completed a postdoctoral fellowship (2003) under Dr. Larry F. Hodges. Professor Lok received the UF Innovator of the Year Award (2019), a UF Term Professorship (2017-2020), the Herbert Wertheim College of Engineering Faculty Mentoring Award (2016), an NSF CAREER Award (2007-2012), and the UF ACM CISE Teacher of the Year Award (2005-2006).
Break

Paper Session I (Presentations and Q&A performed on Zoom)
Haptic and Visual Perception
Chaired by Jean-Marie Normand (Ecole Centrale de Nantes)

Abstract
In virtual environments, we are able to have an augmented embodiment with various virtual avatars. In physical environments, we can likewise extend the embodiment experience using Supernumerary Robotic Limbs (SRLs) attached to a person's body. It is also important to consider the feedback provided to the operator who controls the avatar (virtual) and the SRLs (physical). In this work, we use a servo motor and Galvanic Vestibular Stimulation to provide feedback from a virtual interaction that simulates remotely controlling SRLs. Our technique transforms information about the virtual objects into haptic and proprioceptive feedback that provides different sensations to the operator.
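As a rough illustration of the mapping described above, the following sketch converts a virtual contact event on an SRL into a servo angle and a GVS current. This is a hedged example: the function name, units, and scaling constants are assumptions for illustration, not the authors' calibration.

```python
# Hypothetical mapping from a virtual contact on a supernumerary limb to
# two feedback channels: a servo angle (haptic pressure on the skin) and a
# galvanic vestibular stimulation current (a proprioceptive tilt sensation).
def feedback_from_contact(contact_force, contact_side):
    """contact_force in newtons; contact_side is -1 (left) or +1 (right)."""
    servo_angle = min(90.0, 30.0 * contact_force)               # degrees, capped
    gvs_current = contact_side * min(1.0, 0.2 * contact_force)  # mA, bipolar
    return servo_angle, gvs_current
```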
Abstract
An electrostatic tactile display can render the tactile feeling of different textured surfaces by generating frictional force through voltage modulation while a finger slides on the display surface. However, it is challenging to prepare and fine-tune appropriate frictional signals for haptic design and texture simulation. We present FrictGAN, a deep-learning-based framework that synthesizes frictional signals for electrostatic tactile displays from fabric texture images. Leveraging Generative Adversarial Networks (GANs), FrictGAN generates displacement-series data of frictional coefficients for the electrostatic tactile display to simulate the tactile feedback of fabric material. Our preliminary experimental results show that FrictGAN achieves considerable performance in frictional signal generation from input images of fabric textures.
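As a hedged illustration of the kind of image-conditioned generator described (not the authors' architecture; all layer sizes and names below are assumptions), a minimal PyTorch sketch might look like this:

```python
# Minimal sketch: encode a fabric-texture image, concatenate a noise vector,
# and decode into a 1-D displacement series of frictional coefficients.
import torch
import torch.nn as nn

class FrictionGenerator(nn.Module):
    def __init__(self, noise_dim=64, seq_len=256):
        super().__init__()
        # Encode a 64x64 grayscale texture image into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # Decode image features + noise into a friction-coefficient series.
        self.decoder = nn.Sequential(
            nn.Linear(128 + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, seq_len), nn.Sigmoid(),  # coefficients in (0, 1)
        )

    def forward(self, image, noise):
        feat = self.encoder(image)
        return self.decoder(torch.cat([feat, noise], dim=1))

gen = FrictionGenerator()
img = torch.randn(8, 1, 64, 64)   # batch of texture images
z = torch.randn(8, 64)            # noise vectors
friction = gen(img, z)            # shape (8, 256): one series per image
```

In a GAN setting, a discriminator would be trained to distinguish generated series from measured ones; only the generator side is sketched here.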
Abstract
The current approach to training in Cardiopulmonary Resuscitation (CPR) is to employ a mannequin that replicates the physical properties of a real human head and torso. This aims to ensure a correct transfer of cardiac massage location, amplitude, and frequency to a real situation. However, this type of training does not replicate the stress that may be elicited in the presence of a real victim; this may result in reduced CPR performance or even errors. Virtual Reality (VR) may alleviate this shortcoming by adding visual immersion with a Head-Mounted Display (HMD), so that trainees are cut off from the potential distractions of their real surroundings and can fully engage in a more faithful training scenario. However, one must first ensure that using this technology maintains the quality of the CPR. Hence, we conducted an experimental study to evaluate the potential of visual immersion in such a training context (limited to the cardiac massage). One important requirement was to ensure correct hand tracking while executing the standard two-handed CPR pose. In this paper we first describe how we assessed a simple approach using two HTC Vive trackers. Results show that the proposed minimal setup based on single-hand tracking is valid for frequency and, with correction, for amplitude. Then, to assess the quality of the training, we performed an evaluation study considering two factors: haptic feedback with the mannequin (with/without) and real-time performance feedback (with/without) in the HMD. We observed that the visually immersive experience proposed in this paper delivers a sufficient level of spatial presence, involvement, and agency. Integrating the real CPR mannequin in VR has a significantly positive impact on massage performance quality, whereas displaying real-time performance in the virtual environment tends to be useful only for frequency when no mannequin is used.
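For illustration, compression frequency and amplitude could be estimated from a single tracker's vertical position roughly as follows. This is a sketch under assumed sampling parameters, not the authors' pipeline:

```python
# Estimate chest-compression amplitude and rate from the vertical position
# of one hand-mounted tracker, via peak/trough detection on the signal.
import numpy as np
from scipy.signal import find_peaks

def compression_metrics(z, fs=90.0):
    """z: vertical tracker positions in metres, sampled at fs Hz."""
    z = z - np.mean(z)                                    # remove offset
    peaks, _ = find_peaks(z, distance=int(fs * 0.3))      # cap at ~200 cpm
    troughs, _ = find_peaks(-z, distance=int(fs * 0.3))
    amplitude = np.mean(z[peaks]) - np.mean(z[troughs])   # metres
    rate = 60.0 * fs / np.mean(np.diff(peaks))            # compressions/min
    return amplitude, rate
```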
Abstract
In the field of augmented reality (AR), many applications involve user interfaces (UIs) that overlay visual information on the user's view of their physical environment, e.g., as text, images, or three-dimensional scene elements. In this scope, optical see-through head-mounted displays (OST-HMDs) are particularly interesting, as they typically use an additive light model: the perception of the displayed virtual imagery is a composite of the lighting conditions of one's environment, the coloration of the objects that make up the virtual imagery, and the coloration of the physical objects that lie behind them. While a large body of literature has investigated the visual perception of UI elements on immersive and flat-panel displays, comparatively less effort has been spent on OST-HMDs. Due to the unique visual effects of OST-HMDs, we believe it is important to review the field to understand the perceptual challenges, research trends, and future directions. In this paper, we present a systematic survey of the literature in the IEEE and ACM digital libraries that explores users' perception of text-based information displayed on OST-HMDs, and we aim to provide relevant design suggestions based on the meta-analysis results. We carefully review 14 key papers relevant to visual perception research on OST-HMDs with UI elements, and present the current state of the research field, associated trends, noticeable research gaps in the literature, and recommendations for potential future research in this domain.
Break

Panel (Presented live via Zoom)

In 2001, the first research paper on redirected walking was published. This "new interactive locomotion technique for virtual environments captures the benefits of real walking while extending the possible size of the VE. Real walking, although natural and producing a high subjective sense of presence, limits virtual environments to the size of the tracked space." (Razzaque et al., 2001) Over the years, this technique has grown into a research branch with contributions from many scientists across many disciplines. Twenty years later, it is time to take stock of the current state and discuss the future of redirecting VR users.
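The core mechanism being discussed can be sketched in a few lines: each frame, the virtual camera is rotated by slightly more or less than the user's real head rotation, imperceptibly steering the user's real-world path. This is a hedged illustration only; the gain value below is an assumption, and published detection thresholds vary.

```python
# Rotation gain: scale the user's real yaw change before applying it to the
# virtual camera, so the real and virtual walking paths slowly diverge.
def redirected_yaw(real_yaw_delta, gain=1.1):
    """Return the virtual yaw change (radians) for a real head-yaw change."""
    return gain * real_yaw_delta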

Panelists

  • Eric Hodgson (Miami University)
  • Keigo Matsumoto (University of Tokyo)
  • Evan Suma Rosenberg (University of Minnesota)
  • Frank Steinicke (Universität Hamburg)
  • Mary C. Whitton (University of North Carolina)
Break

Virtual Open Labs & Reception (Presented live via Discord)

Thursday, 3 December

Keynote (Presented live via Zoom)

Short Bio
Alvaro Cassinelli
Alvaro Cassinelli is an equilibrist walking the thin line between Art and Science. Born in Uruguay, he obtained an engineering degree and a Ph.D. in Physics in France before moving to Japan, where he founded and led the Meta-Perception group at the Ishikawa-Oku Laboratory, University of Tokyo, a research group specializing in interfaces for enhancing human communication and expression and expanding the vocabulary of HCI and media arts. He is presently Associate Professor and co-founder of the Extended Reality Laboratory (XRL) at the School of Creative Media in Hong Kong. Awards include the Grand Prize [Art Division] (9th Japan Media Art Festival), the Excellence Prize [Entertainment Division] (13th Japan Media Art Festival), an Honorary Mention (Ars Electronica), the NISSAN Innovative Award (2010), and the Jury Grand Prize at Laval Virtual (2011).
Break

Poster & Demo Session A (Presented live via Discord)

Schedule for posters and demos

Break

Paper Session II (Presentations and Q&A performed on Zoom)
Mixed Reality Applications
Chaired by Dirk Reiners (University of Central Florida)

Abstract
Amblyopia is a developmental disorder in which children may experience neurological impairment and insufficient visual input. Recent studies have investigated the ability to improve amblyopic children's visual acuity. However, according to established theories, visual acuity is the foundation of a well-functioning visual system, yet more lies beyond acuity improvements, such as visual cognition and visual perception. This paper provides a pediatric-centered practical approach for amblyopic children, with Virtual Reality acting as a serious-game medium for actively training visual perception once a solid improvement in visual acuity has been achieved. We designed a system based on the classical hierarchical pyramid of visual perception. Its bottom-up setup practices each stage independently while providing extra practice for the layers below the current stage, so that children not only train the skills of the current visual perception stage but also revisit previously learned ones. The end goal is to give amblyopic children solid practice towards achieving good visual acuity, building a sound understanding of the world, and interacting appropriately with their surroundings.
Abstract
Projective Augmented Reality (AR) offers exciting new ways of interacting with a museum exhibition. This paper presents such a projective AR application, developed in cooperation with a museum (anonymized). It allows visitors to digitally paint a sculpture using a tablet while the result is simultaneously projected onto the real object. A first prototype of the application was tested with regard to usability, integration into the exhibition, and its ability to transfer knowledge. The prototype was then improved based on the findings of this first user study and further evaluated in a second, comparative study, this time with a stronger focus on knowledge transfer. Applying regression and the bootstrap method yields a statistically significant increase in learning when using the developed application compared to the exhibition method traditionally used by the museum.
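As a hedged illustration of the bootstrap analysis mentioned (the function and parameters below are assumptions, not the study's exact procedure), one could resample per-participant learning gains and check whether the confidence interval of the mean excludes zero:

```python
# Bootstrap confidence interval for the mean learning gain: resample with
# replacement, recompute the mean, and take the empirical quantiles.
import numpy as np

def bootstrap_ci(gains, n_boot=10000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    means = [rng.choice(gains, size=len(gains), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi   # a significant increase if lo > 0
```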
Abstract
We study student experiences of VR-based remote lectures using a social VR platform, evaluating both desktop and headset-based viewing in a real-world setting. Student ratings varied widely. Headset viewing produced higher presence overall. Strong negative correlations between headset simulator sickness and ratings of co-presence, overall experience, and some other factors suggest that comfortable users experienced additional benefits of headset VR, but other users did not. In contrast, correlations were not strong for desktop viewing, and it appears to be a good alternative in case of headset problems. We can expect that future headsets will bring benefits to more students, as visual stability and comfort are improving. Most students reported preferring a mix of headset and desktop viewing. We additionally report student opinions comparing VR to other class technologies, identifying difficulties and distractions, evaluating avatar features and factors of avatar movement, and identifying positive and negative aspects of the VR approaches. This provides a foundation for future development of VR-based remote instruction.
Abstract
Autonomous vehicles promise a driverless future; however, despite the rapid progress in ubiquitous technologies, human situational assessment continues to be required. For example, upon recognizing an obstacle on the road, a request might be routed to a teleoperator, who can assess and manage the situation with the help of a dedicated workspace. A common solution to this problem is direct remote steering. A key problem in teleoperation, however, is time latency and low remote situational awareness. To address this issue we present the Predictive Corridor (PC), a virtual augmented driving assistance system for teleoperated autonomous vehicles. In a user study (N = 32), we evaluated the PC using three types of measures: performance, subjective, and physiological. The results demonstrate that driving with the PC is less cognitively demanding, improves operational performance, and can visually compensate for the effect of the time delay between the teleoperator and the vehicle. This technology is therefore promising for future teleoperation applications.
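The latency-compensation idea behind such a predictive visualization can be sketched with simple dead reckoning (a hedged illustration, not the authors' implementation): extrapolate the last received vehicle state by the measured latency, and draw the corridor at the predicted pose.

```python
# Constant-turn-rate extrapolation of a vehicle pose by `latency` seconds,
# so the corridor can be rendered where the vehicle will be when the
# operator's command actually takes effect.
import math

def predict_pose(x, y, heading, speed, yaw_rate, latency):
    """All angles in radians; speed in m/s; yaw_rate in rad/s."""
    if abs(yaw_rate) < 1e-6:                       # straight-line case
        return (x + speed * latency * math.cos(heading),
                y + speed * latency * math.sin(heading),
                heading)
    r = speed / yaw_rate                           # turn radius
    h = heading + yaw_rate * latency               # predicted heading
    return (x + r * (math.sin(h) - math.sin(heading)),
            y - r * (math.cos(h) - math.cos(heading)),
            h)
```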
Break

Paper Session III (Presentations and Q&A performed on Zoom)
Avatars in Single and Multi User Experiences
Chaired by Gudrun Klinker (Technical University of Munich)

Abstract
Effective collaboration in immersive virtual environments requires the ability to communicate flawlessly using both verbal and non-verbal communication. We present an experiment investigating the impact of anthropomorphism on the sense of body ownership, avatar attractiveness, and performance in an asymmetric collaborative task. Using three avatars presenting different facial properties, participants had to solve a construction game according to their partner's instructions. Results reveal no significant difference in terms of body ownership, but demonstrate significant differences concerning attractiveness and completion time of the collaborative task. However, the relative duration of verbal interaction does not seem to be impacted by the anthropomorphism level of the characters, meaning that participants were able to interact verbally regardless of how their character physically expressed their words in the virtual environment. Unexpectedly, correlation analyses also reveal a link between attractiveness and performance: the more attractive the avatar, the shorter the completion time of the game. One could argue that, in the context of this experiment, avatar attractiveness could have led to an improvement in non-verbal communication, as users could be more prone to observe their partner, which translates into better performance in collaborative tasks. Further experiments must be conducted using gaze tracking to support this new hypothesis.
Abstract
Does virtual threat harm the Virtual Reality (VR) experience? In this paper, we explore the potential impact of threat occurrence and repetition on users' Sense of Embodiment (SoE) and threat response. The main findings of our experiment are that the introduction of a threat does not alter users' SoE but might change their behaviour while performing a task after the threat occurrence. In addition, threat repetitions did not show any effect on users' subjective SoE, or on subjective and objective responses to threat. Taken together, our results suggest that embodiment studies should expect a potential change in participants' behaviour while performing a task after a threat has been introduced, but that threat introduction and repetition do not seem to impact either the subjective measure of the SoE (user responses to questionnaires) or the objective measure of the SoE (behavioural response to threat towards the virtual body).
Abstract
Virtual Reality (VR) has the potential to become a game changer in education, with studies showing that VR can lead to better quality of and access to education. One promising area, especially for young children, is the use of Virtual Companions that act as teaching assistants and support the learner's educational journey in the virtual environment. However, as is the case in real life, the appearance of the virtual companion can be critical for the learning experience. This paper studies the impact of the age, gender, and general appearance (human- or robot-like) of virtual companions on 9- to 12-year-old children. Our results over two experiments (n=24 and n=13) tend to show that children have a greater sense of Spatial Presence, Engagement, and Ecological Validity when interacting with a human-like Virtual Companion of the same age and of a different gender.
Abstract
Embodied agents, i.e., computer-controlled characters, have proven useful for various applications across a multitude of display setups and modalities. While most traditional work focused on embodied agents presented on a screen or projector, and a growing number of works focus on agents in virtual reality, a comparatively small number of publications have looked at such agents in augmented reality (AR). Such AR agents, specifically when using see-through head-mounted displays (HMDs) as the display medium, show multiple critical differences from other forms of agents, including their appearance, behavior, and physical-virtual interactivity. Due to the unique challenges in this specific field, and due to the comparatively limited attention from the research community so far, we believe that it is important to map the field to understand the current trends, challenges, and future research. In this paper, we present a systematic review of the research performed on interactive, embodied AR agents using HMDs. Starting with 1261 broadly related papers, we conducted an in-depth review of 50 directly related papers from 2000 to 2020, focusing on papers that reported on user studies aiming to improve our understanding of interactive agents in AR HMD environments or their utilization in specific applications. We identified common research and application areas of AR agents through a structured iterative process, present research trends and gaps, and share insights on future directions.
Friday, 4 December

Keynote (Presented live via Zoom)

Short Bio
Frank Steinicke
Frank Steinicke is a Professor of Human-Computer Interaction at the Department of Informatics at the University of Hamburg. His research is driven by understanding the human perceptual, cognitive, and motor abilities and limitations in order to reform the interaction as well as the experience in computer-mediated realities. Frank Steinicke regularly serves as panelist and speaker at major events in the area of virtual reality and human-computer interaction and is on the IPC of various national and international conferences. He served as program chair for IEEE VR 2017/2018, the most renowned scientific conference in the area of VR/AR. Furthermore, he is a member of the steering committees of the ACM SUI Symposium and the GI SIG VR/AR, and is currently editor of the IEEE Computer Graphics & Applications department on Spatial Interfaces.
Break

Poster & Demo Session B (Presented live via Discord)

Schedule for posters and demos

Break

Paper Session IV (Presentations and Q&A performed on Zoom)
Navigation in Virtual Environments
Chaired by Kening Zhu (City University of Hong Kong)

Abstract
Virtual environments can be infinitely large, but users only have a limited amount of space in the physical world. One way to navigate within large virtual environments is teleportation. Teleportation requires two steps: targeting a destination and an instantaneous shift. Conventional teleportation uses a controller to point at a target position and a button press or release to immediately teleport the user to that position. Since teleportation does not require physical movement, the user can explore the entire virtual environment. However, as this is supernatural and can lead to momentary disorientation, it can break the sense of presence and thus degrade the overall virtual reality experience. To compensate for this downside, we explore the effects of a jumping gesture as a teleportation trigger. We conducted a study with two factors: 1) triggering method (Jumping and Standing), and 2) targeting method (Head-direction and Controller). We found that the conventional way of using a controller while standing showed better efficiency, the highest usability, and lower cybersickness. Nevertheless, Jumping+Controller invoked a high sense of engagement and fun, and therefore provides an interesting new technique, especially for VR games.
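A jump trigger of the kind studied could be detected from the HMD's vertical velocity, roughly as in the following sketch. Thresholds and names are illustrative assumptions, not the paper's implementation.

```python
# Fire a teleport when the headset's upward velocity exceeds a take-off
# threshold; the target is whatever position the user is currently aiming at.
class JumpTeleport:
    TAKEOFF_VELOCITY = 0.9  # m/s upward, assumed threshold

    def __init__(self):
        self.prev_height = None

    def update(self, hmd_height, dt, target_position, teleport):
        """Call once per frame; `teleport` shifts the user to a position."""
        if self.prev_height is not None:
            v_up = (hmd_height - self.prev_height) / dt
            if v_up > self.TAKEOFF_VELOCITY and target_position is not None:
                teleport(target_position)   # instantaneous shift to target
        self.prev_height = hmd_height
```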
Abstract
Collaborative virtual environments provide the ability for collocated and remote participants to communicate and share information with each other. For example, immersive technologies can be used to facilitate collaborative guidance during navigation of an unfamiliar environment. However, the design space of 3D user interfaces for supporting collaborative guidance tasks, along with the advantages and disadvantages of different immersive communication modalities to support these tasks, are not well understood. In this paper, we investigate three different methods for providing assistance (visual-only, audio-only, and combined audio/visual cues) using an asymmetric collaborative guidance task. We developed a novel experimental design and virtual reality scenario to evaluate task performance during navigation of a complex and dynamic environment while simultaneously avoiding observation by patrolling sentries. Two experiments were conducted: a dyadic study conducted at a large public event and a controlled lab study using a confederate. Combined audio/visual guidance cues were rated easier to use and more effectively facilitated the avoidance of sentries compared with the audio-only condition. The presented work has the potential to inform the design of future experiments and applications that involve communication modalities to support collaborative guidance tasks with immersive technologies.
Abstract
This paper investigates the influence of the control law in virtual steering techniques, and in particular the speed update, on users' behaviour while navigating in virtual environments. To this end, we first propose a characterization of existing control laws. Then, we designed a user study to evaluate the impact of the control law on users' behaviour and performance in a navigation task. Participants had to perform a virtual slalom while wearing a head-mounted display. They followed three different sinusoidal-like trajectories (with low, medium, and high curvature) using a torso-steering navigation technique with three different control laws (constant, linear, and adaptive). The adaptive control law, based on the biomechanics of human walking, takes into account the relation between speed and curvature. We propose a spatial and temporal analysis of the trajectories performed in both the virtual and the real environment. The results show that users' trajectories and behaviour were significantly affected by the shape of the trajectory but also by the control law. In particular, users' angular velocity was higher with the constant and linear laws than with the adaptive law. The analysis of subjective feedback suggests that these differences might result in lower perceived physical demand and effort for the adaptive control law. The paper concludes by discussing the potential applications of these results to improve the design and evaluation of navigation control laws.
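The three control-law families compared can be sketched as speed-update functions. This is a hedged illustration: the gains and the specific power-law form below are assumptions based on the well-known speed-curvature power law of human walking, not the paper's exact laws.

```python
# Three ways to map a steering input (torso lean, normalized to [0, 1]) and
# the local path curvature (1/m) to a forward speed (m/s).
def _clamp(x, lo=0.0, hi=1.0):
    return min(max(x, lo), hi)

def constant_law(lean, curvature, v_max=1.4):
    return v_max if lean > 0.0 else 0.0   # fixed speed whenever leaning

def linear_law(lean, curvature, v_max=1.4):
    return v_max * _clamp(lean)           # speed proportional to lean

def adaptive_law(lean, curvature, k=0.8, v_max=1.4):
    # Slow down in tighter curves, following a speed-curvature power law.
    v_cap = v_max if curvature <= 0.0 else k * curvature ** (-1.0 / 3.0)
    return min(v_max, v_cap) * _clamp(lean)
```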
Abstract
Motion sickness while using virtual reality (VR) headsets affects 25-40% of users. It results from a disconnect between the user's physical movement and their experienced movement in the virtual environment. How to move and navigate in a virtual environment that is larger than the user's physical space is a well-studied problem. This project implements the three most common movement methods (teleportation, on-rails, and free movement) and adds modifications to two of them: natural acceleration and deceleration for on-rails movement, and acceleration/inertia-based motion for free movement. The goal of this project is to determine, in a user study, whether these modifications decrease motion sickness and increase preferability compared to their conventional counterparts. Users experienced less nausea with our novel on-rails movement method combined with acceleration/deceleration than with any other method. This method was also preferred as often as teleportation, currently the method most commonly used by developers. This study indicates that on-rails movement should be given more attention as a viable solution to the virtual reality movement problem.
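The acceleration/deceleration modification to on-rails movement could, for example, replace constant-speed interpolation with an ease-in/ease-out profile, as in this hedged sketch (the smoothstep choice is an assumption for illustration):

```python
# Ease along a rail segment: zero velocity at both ends instead of abrupt
# starts and stops, which is the kind of smoothing the modification targets.
def eased_progress(t):
    """Map linear progress t in [0, 1] to smooth ease-in/ease-out progress."""
    t = min(max(t, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)   # smoothstep

def rail_position(start, end, t):
    """Interpolate a 3-D position along the rail using eased progress."""
    s = eased_progress(t)
    return tuple(a + (b - a) * s for a, b in zip(start, end))
```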
Break

Panel (Presented live via Zoom)

The COVID-19 pandemic has made significant changes in our lives and redefined the way we have social interactions, e.g., a dramatic increase in the use of video-conferencing tools, and social distancing and face masks in face-to-face interactions. In this era, Social XR (AR/VR/MR) technology continues to show promise as a new interaction paradigm. This panel will capture the current state and issues of Social XR research and technology, and discuss the opportunities and future directions of the technology while envisioning more effective and efficient social interactions in the new-normal era.

Panelists

  • Sun Joo Ahn (University of Georgia)
  • Henry Fuchs (University of North Carolina at Chapel Hill)
  • Aleshia Hayes (University of North Texas)
  • Anthony Steed (University College London)
Break

Awards & Closing Remarks (Presented live via Zoom)
Chaired by Gerd Bruder (University of Central Florida)

A reminder for paper presenters: If you are presenting a long/short paper, you have 20 minutes for your presentation (15 min for your video presentation and 5 min for questions). Please be present in Zoom during the break before your session starts to coordinate the Q&A with your session chair.