Laboratory Manifesto
The Spatial Perception And Augmented Reality Lab (SPAAR Lab) studies the perception and technology required to give virtual objects definite spatial locations. Projects involve perception, measurement, calibration, and display technologies. Methods are interdisciplinary, drawing on computer science, empirical methods, psychology, cognitive science, optics, and engineering. The lab's culture emphasizes teamwork and the pursuit of beauty through scholarship and intellectual merit of the highest possible quality.
Current Research
My research can be most broadly described as human-factors investigations of computer-generated graphics. I have primarily focused on Augmented Reality, Virtual Reality, and Visualization. Problem domains have included surgical simulation and other medical applications, weather visualization, military applications, computer forensics, flow-field visualization, and driving simulation.
Perception in Augmented and Virtual Reality
At MSU, I founded the Spatial Perception And Augmented Reality Lab (SPAAR Lab), where we study perceptual aspects of Augmented and Virtual Reality, with an emphasis on the knowledge of, and methods for, presenting virtual objects that have definite spatial locations. This work encompasses studying how depth perception operates in augmented and virtual reality, as well as x-ray vision: using augmented reality to let users see objects located behind solid, opaque surfaces, allowing, for example, a doctor to see organs inside a patient's body. An important related area has been the augmented reality calibration methods that allow accurate virtual object positioning. This work has involved many collaborations. In addition to my students, for more than 10 years I have collaborated with Stephen R. Ellis (NASA Ames Research Center); other collaborators have come from the University of Southern California, the University of South Australia, the Nara Institute of Science and Technology, the University of Jyväskylä, the Naval Research Laboratory, NVIDIA Corporation, the University of California at Santa Barbara, and the University of Rostock. The work has been funded by the National Science Foundation, the National Aeronautics and Space Administration (NASA), the Department of Defense, the Naval Research Laboratory, and the Office of Naval Research.
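To make the calibration problem concrete, here is a minimal sketch of one textbook approach: estimating a 3x4 projection matrix from 2D-3D point correspondences with the Direct Linear Transform (DLT), the linear-least-squares core behind many see-through display calibration procedures. The function names and array shapes are my own for illustration; this is the generic technique, not the lab's specific procedure.

    import numpy as np

    def estimate_projection(world_pts, screen_pts):
        # Estimate a 3x4 projection matrix P from n >= 6 pairs of
        # 3D world points (n, 3) and 2D screen points (n, 2),
        # using the Direct Linear Transform (DLT).
        rows = []
        for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        A = np.asarray(rows, dtype=float)
        # The singular vector for A's smallest singular value gives
        # the 12 entries of P, defined up to an overall scale.
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 4)

    def project(P, point3d):
        # Apply P to a 3D point and divide out the homogeneous w,
        # yielding the 2D screen location of a virtual object.
        x = P @ np.append(point3d, 1.0)
        return x[:2] / x[2]

Once P has been estimated from measured correspondences, a virtual object anchored at a known 3D location can be drawn at project(P, location); errors in the correspondences translate directly into spatial misregistration, which is why careful calibration procedures matter.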
Data Science, Visualization, and Evaluation
I have worked with students and collaborators on a number of data science, visualization, and evaluation projects. These have involved computer vision image data set annotation, visualizing and interacting with ensembles of weather model simulation runs, visualizing computer forensics data, cognitively evaluating the effectiveness of forensics data visualizations, using parallel coordinates to visualize historical hurricane severity data, empirically evaluating additional weather data visualization techniques, empirically evaluating tensor visualization methods, and empirically evaluating flow-field visualization techniques. This work has involved collaborations with many MSU colleagues, in particular Song Zhang, T.J. Jankun-Kelly, Andrew Mercer, and Jamie Dyer, as well as a number of students. This work has been funded by the National Science Foundation, the Department of the Army, and the Naval Research Laboratory.
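As one concrete illustration of the parallel-coordinates technique mentioned above, the short Python sketch below plots a few invented hurricane records with pandas' built-in parallel_coordinates function. The column names and values are hypothetical; the actual study's data and tooling were of course different.

    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    # Hypothetical hurricane records; the columns and values are
    # invented for illustration, not the study's actual data.
    df = pd.DataFrame({
        "category":      ["Cat1", "Cat3", "Cat5", "Cat3", "Cat1"],
        "max_wind_kt":   [75, 110, 155, 100, 80],
        "min_press_mb":  [980, 955, 905, 960, 985],
        "duration_days": [4, 7, 10, 6, 3],
    })

    # Normalize each numeric column to [0, 1] so the parallel
    # axes share a comparable scale.
    num = df.columns.drop("category")
    df[num] = (df[num] - df[num].min()) / (df[num].max() - df[num].min())

    # Each storm becomes one polyline crossing the parallel axes,
    # colored by its category.
    parallel_coordinates(df, class_column="category")
    plt.ylabel("normalized value")
    plt.show()

The appeal of the technique for this kind of data is that each storm remains a single visual object (one polyline), so correlations between severity attributes show up as coherent bundles of lines across the axes.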
Human-Factors Aspects of Augmented and Virtual Reality
In addition to virtual object location, I have collaborated widely with colleagues and students on other human-factors aspects of augmented and virtual reality. Projects have involved the effect of switching context between augmented reality and the real world, how color and contrast perception operate in optical see-through augmented reality, how gait changes in virtual environments, driving simulator studies of cell phone distraction, and general reviews of perceptual issues. My longest collaboration, which has covered a number of these topics, has been with Joe Gabbard at Virginia Tech. In addition, with Joe and Deborah Hix, also from Virginia Tech, I helped articulate some of the first systematic usability engineering concepts for augmented reality (IEEE Transactions on Visualization and Computer Graphics, 2008; IEEE Computer Graphics and Applications, 1999). This work has been funded by the National Science Foundation, the Department of Defense, the Naval Research Laboratory, and the Office of Naval Research.
Previous Research Topics
I was employed by the Naval Research Lab (NRL) from 1997 through 2004. While there, I was a member of the Virtual Reality Laboratory. Our primary project during this time was researching and developing the Battlefield Augmented Reality System (BARS), an outdoor, mobile augmented reality system that allowed soldiers on foot to see heads-up, graphically-drawn battlefield information through optical see-through displays. Anyone who has worked in augmented and virtual reality can appreciate how outrageously challenging this project was, especially considering the technical maturity of displays and computers during that era. I had a great time and learned a huge amount. My experiences with BARS are what originally led me to become interested in the perceptual issues that arise in augmented and virtual reality. This work was a team effort with my fellow NRL scientists Larry Rosenblum (my boss), Simon Julier, Mark Livingston, Greg Schmidt, Yohan Baillot, and Dennis Brown. In addition, much of my work during this time involved collaborations with my Virginia Tech colleagues Joe Gabbard and Deborah Hix.
From 1994 through 1997 I was engaged in my PhD studies in the areas of Volume Rendering, Visualization, and Graphics; my PhD adviser was Roni Yagel. With the help of Roni and my fellow PhD students Klaus Mueller and Torsten Möller, I developed the first published algorithm (IEEE Visualization 1997) for anti-aliasing with Splatting. Since that time, many others have greatly advanced this early work (including Klaus and Torsten).
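The core idea behind anti-aliasing in splatting, summarized very loosely here, is to keep each splat's screen-space footprint from shrinking below the resolving limit of the pixel grid: distant splats get a widened, energy-renormalized kernel, which amounts to a low-pass filter. The sketch below illustrates that idea under a simple perspective model; the names and constants are invented for illustration and do not reproduce the published algorithm.

    def splat_footprint(kernel_radius, depth, focal_px, min_radius_px=1.0):
        # Project a splat's world-space kernel radius to screen space
        # under simple perspective: footprints shrink with depth.
        radius_px = kernel_radius * focal_px / depth
        if radius_px >= min_radius_px:
            # The kernel is still resolvable; render it unchanged.
            return radius_px, 1.0
        # The footprint has dropped below the pixel grid's limit:
        # widen the kernel to the minimum radius (a low-pass filter)
        # and scale its amplitude so the splat's total energy, i.e.
        # the integral of the 2D kernel, is preserved.
        amplitude = (radius_px / min_radius_px) ** 2
        return min_radius_px, amplitude

Without a step like this, splats that project to sub-pixel footprints alias badly, producing the familiar shimmering of distant volume structure.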
From 1992 through 1994 I worked with my MS adviser Gary Perlman in the area of Human-Computer Interaction. Under Gary's direction I conducted two perceptual user studies (Proceedings of the Human Factors and Ergonomics Society, 1993 and 1994). Although these were my very first research experiences, I find that I still often apply the lessons and knowledge I learned from Gary.