Depth Judgments by Reaching and Matching in Near-Field Augmented Reality
Gurjot Singh, J. Edward Swan II, J. Adam Jones, and Stephen R. Ellis. Depth Judgments by Reaching and Matching in Near-Field Augmented Reality. In Poster Compendium, Proceedings of IEEE Virtual Reality 2012, pp. 165–166, March 2012. DOI: 10.1109/VR.2012.6180933.
Winner of an Honorable Mention award at IEEE Virtual Reality 2012.
Abstract
In this abstract we describe an experiment that measured depth judgments in optical see-through augmented reality (AR) at near-field reaching distances of ~24 to ~56 cm. The 2 × 2 experiment crossed two depth judgment tasks (perceptual matching and blind reaching) with two environments (real-world and augmented reality). We designed a task that used a direct reaching gesture at constant percentages of each participant's maximum reach; our task was inspired by previous work by Tresilian and Mon-Williams [6] that found very accurate blind reaching results in a real-world environment.
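To make the design concrete, the sketch below (Python; not from the poster) enumerates the 2 × 2 task-by-environment conditions and scales target distances by one participant's maximum reach. The specific percentage levels and the 62 cm example reach are hypothetical placeholders, since the abstract states only that constant percentages of each participant's maximum reach were used.

```python
from itertools import product

# Illustrative sketch of the 2 x 2 design described in the abstract.
# The percentage levels are hypothetical: the abstract says only
# "constant percentages of each participant's maximum reach" and does
# not list the values used. With an assumed ~62 cm maximum reach,
# these placeholder levels span roughly the stated ~24 to ~56 cm range.
TASKS = ("perceptual matching", "blind reaching")
ENVIRONMENTS = ("real world", "augmented reality")
REACH_PERCENTAGES = (0.4, 0.65, 0.9)  # assumed levels, illustration only

def target_distances_cm(max_reach_cm: float) -> dict:
    """Scale each assumed percentage level by one participant's maximum reach."""
    return {p: round(p * max_reach_cm, 1) for p in REACH_PERCENTAGES}

# Example: every task x environment condition uses the same fractional
# depths, so target distances in cm differ across participants but the
# fractions of maximum reach stay constant.
for task, env in product(TASKS, ENVIRONMENTS):
    print(f"{task} / {env}: {target_distances_cm(62.0)}")
```

Normalizing target depth to each participant's own maximum reach presumably keeps every target physically reachable, which is what allows a direct reaching gesture to serve as the depth judgment in both environments.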
BibTeX
@InProceedings{VR12-djt,
author = {Gurjot Singh and J. Edward {Swan~II} and J. Adam Jones and Stephen R. Ellis},
title = {Depth Judgments by Reaching and Matching in Near-Field Augmented Reality},
booktitle = {Poster Compendium, Proceedings of IEEE Virtual Reality 2012},
location = {Orange County, CA, USA},
date = {March 4--8},
month = {March},
year = 2012,
pages = {165--166},
note = {DOI: <a target="_blank" href="https://doi.org/10.1109/VR.2012.6180933">10.1109/VR.2012.6180933</a>.},
wwwnote = {<b>Winner of an Honorable Mention award at IEEE Virtual Reality 2012</b>.},
abstract = {
In this abstract we describe an experiment that measured depth
judgments in optical see-through augmented reality (AR) at near-field
reaching distances of $\sim$24 to $\sim$56 cm. The $2 \times 2$ experiment
crossed two depth judgment tasks (perceptual matching and blind
reaching) with two environments (real-world and augmented
reality). We designed a task
that used a direct reaching gesture at constant percentages of each
participant's maximum reach; our task was inspired by previous
work by Tresilian and Mon-Williams [6] that found very accurate
blind reaching results in a real-world environment.
},
}