KPE study explores why VR and AR technologies change how people interact with objects

Research associate Michael Wang fits a virtual reality (VR) head-mounted display onto a study participant, KPE undergraduate student Colin Dolynski (photo credit: Molly Brillinger)
10/04/2024

A new study from the University of Toronto Faculty of Kinesiology and Physical Education (KPE) examined the impact that virtual reality (VR) and augmented reality (AR) technologies have on how people interact with virtual objects. The researchers wanted to understand whether there are differences between moving in real and virtual environments, so that virtual environments can be better designed to enhance performance and the transfer of learning between real and virtual worlds.

“While virtual (VR) and augmented reality (AR) are often associated with gaming, these technologies are increasingly used in training for tasks that require hand-eye coordination through fine motor skills such as performing surgical procedures or pilot training,” says Xiaoye Michael Wang, a research associate at KPE and lead author of the study, recently published in Springer’s Virtual Reality journal.

“In this context, it is important to ascertain that the skills practiced during training in the virtual world are comparable to the skills relied on in practice in the physical environment, because otherwise one is not actually training the skills as intended.”

The study confirmed what scientists and engineers have long known: people move differently in real and virtual worlds. Specifically, people tend to undershoot, stopping their movements short of the target, when interacting with objects in VR and AR. The innovation of the new study is that it identified one of the key mechanisms contributing to this undershooting.

“VR and AR convey an immersive 3D environment via screens placed right in front of the users’ eyes,” says Wang, who works with Professor Timothy Welsh in KPE’s Action and Attention research lab. “This technological arrangement alters the natural visual geometry of depth perception, which, in turn, creates depth compression, where objects appear closer to the user than they actually are.

“This depth compression contributes to the undershooting errors in movement.”

Based on their understanding of how the eyes move and adjust when perceiving different distances, the researchers identified the underlying geometry that alters visual stimuli in VR and AR and leads to the distance compression. They then set out to develop a software solution that could alleviate the depth compression and make perception and movement more accurate in VR and AR.

The mismatch between the way the eyes converge and the way they focus in virtual environments, and the distance compression that results, is known as the vergence-accommodation conflict. This conflict has been a key technical challenge of VR since its conception in the 1990s. Many companies have spent millions, if not billions, of dollars trying to solve the issue with hardware, such as complex sets of lenses that change where the screens are perceived, or lenses made from a permeable material that can change shape to behave more like the human eye.
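The article does not reproduce the study's derivation, but the binocular geometry behind the conflict is standard: the angle at which the eyes converge on a target depends on the target's distance and the viewer's interpupillary distance (IPD). A minimal sketch of that geometry, for illustration only:

```latex
% Standard vergence geometry (illustrative; not the study's own derivation).
% The eyes converge on a target at distance d with vergence angle
\theta(d) = 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right)
% In a headset, vergence follows the rendered distance d_v, while accommodation
% (focus) stays locked to the display's fixed focal distance d_f; the two depth
% cues disagree whenever d_v \neq d_f, i.e. for most virtual objects.
```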

“However, all of these hardware changes require intensive research and development with contributions from different teams such as optical and mechanical engineering, human factors and even machine learning,” says Wang. “Our study shows this issue could be resolved using a simpler computer program without relying on complex and expensive hardware.

“The program systematically modifies how far away things are presented in the virtual world to offset the effect of the distance compression as a result of the vergence-accommodation conflict.

“This innovation could be a really exciting and cost-effective option for the VR/AR industry.”
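The article does not give the correction function itself, so the following is only a minimal sketch of the idea under a strong assumption: that perceived distance follows a linear compression model, perceived = a · rendered + b, with the gain a and offset b fitted empirically per user and headset. Both parameters and the helper names below are hypothetical, not taken from the study; inverting the model tells a renderer how far away to place an object so that the compressed percept lands on the intended distance.

```python
import math

# Hypothetical sketch: offset depth compression by pre-scaling rendered distance.
# Assumes a linear compression model  perceived = a * rendered + b,
# with `a` and `b` fitted per user/headset. Defaults are illustrative only.

def corrected_render_distance(intended_m: float, a: float = 0.85, b: float = 0.10) -> float:
    """Distance (metres) at which to render an object so that, after
    compression, it is *perceived* at `intended_m`.
    Inverts perceived = a * rendered + b."""
    if a <= 0:
        raise ValueError("compression gain must be positive")
    return (intended_m - b) / a

def reposition(eye, target, intended_m):
    """Move `target` along the eye-to-target ray to the corrected distance.
    `eye` and `target` are (x, y, z) world coordinates."""
    d = corrected_render_distance(intended_m)
    vec = [t - e for e, t in zip(eye, target)]
    norm = math.sqrt(sum(c * c for c in vec)) or 1.0
    return tuple(e + c / norm * d for e, c in zip(eye, vec))

# An object meant to be perceived 1.0 m away is rendered slightly farther:
print(corrected_render_distance(1.0))         # ~1.06 m
print(reposition((0, 0, 0), (0, 0, 1), 1.0))  # (0.0, 0.0, ~1.06)
```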

The researchers are currently validating the algorithm proposed in the study to evaluate its effectiveness in eliminating depth compression in VR. For this work, the team was also selected for U of T’s UTEST program to explore the technology’s commercialization potential.

To this end, the team is also interested in working with surgeons, medical professionals and VR companies to develop scenarios in which they can test the effectiveness of the algorithm in practice, for example in surgical training.

“With limited opportunities available for traditional, in-person training methods, using VR as an alternative for surgical training could greatly alleviate the need for access to physical resources,” says Wang, “but these benefits can only be achieved if the technology accurately represents how the trainee perceives and moves in the real world.”