
Augmented reality (AR) has been used in a wide variety of scenarios, from assembly support and gaming to collaborative work. Teamwork is a specific type of collaborative work and emerges in numerous everyday contexts. Among other aspects, it is characterized by specific needs for time-related synchronization of team members to enable them to coordinate the team task.

This article compares two state-of-the-art text input techniques between non-stationary virtual reality (VR) and video see-through augmented reality (VST AR) use cases as XR display conditions. The developed contact-based mid-air virtual tap and word-gesture (swipe) keyboards provide established support functions for text correction, word suggestions, capitalization, and punctuation. A user evaluation with 64 participants revealed that XR displays and input techniques strongly affect text entry performance, while subjective measures are influenced only by the input techniques. We found significantly higher usability and user experience ratings for the tap keyboard compared to the swipe keyboard in both VR and VST AR; task load was also lower for the tap keyboard. In terms of performance, both input techniques were significantly faster in VR than in VST AR, and the tap keyboard was significantly faster than the swipe keyboard in VR. Participants showed a significant learning effect with only ten sentences typed per condition. Our results are consistent with previous work in VR and optical see-through (OST) AR, but additionally provide novel insights into the usability and performance of the selected text input techniques for VST AR. The significant differences in subjective and objective measures emphasize the importance of evaluating each possible combination of input technique and XR display to provide reusable, reliable, and high-quality text input solutions. With our work, we form a foundation for future research and XR workspaces. Our reference implementation is publicly available to encourage replicability and reuse in future XR workspaces.
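
The swipe keyboard mentioned above is described only at a high level; as a rough illustration of how a word-gesture keyboard can turn a swipe into word suggestions, the following sketch ranks dictionary candidates by how well their letters match the keys crossed by the swipe path. The key layout, dictionary, and scoring heuristic are simplified assumptions for illustration, not the article's reference implementation.

```python
# Minimal word-gesture (swipe) decoding sketch: illustrative only, not the
# article's reference implementation. Layout, scoring, and dictionary are
# simplified assumptions.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

# Map each letter to an approximate (x, y) key-center position.
KEY_POS = {
    ch: (col + 0.5 * row, float(row))
    for row, letters in enumerate(QWERTY_ROWS)
    for col, ch in enumerate(letters)
}

def path_keys(path, radius=0.6):
    """Convert a swipe path (list of (x, y) samples) into the sequence of
    keys it passes over, collapsing consecutive duplicates."""
    keys = []
    for x, y in path:
        nearest = min(KEY_POS, key=lambda k: (KEY_POS[k][0] - x) ** 2 + (KEY_POS[k][1] - y) ** 2)
        kx, ky = KEY_POS[nearest]
        if (kx - x) ** 2 + (ky - y) ** 2 <= radius ** 2 and (not keys or keys[-1] != nearest):
            keys.append(nearest)
    return keys

def score(word, keys):
    """Heuristic score: a candidate should start and end on the swiped keys,
    and its letters should appear in order among the crossed keys."""
    if not keys or word[0] != keys[0] or word[-1] != keys[-1]:
        return 0.0
    it = iter(keys)
    matched = sum(1 for ch in word if any(ch == k for k in it))
    return matched / len(word)

def suggest(path, dictionary, top_n=3):
    """Return the top-N dictionary words for a swipe path."""
    keys = path_keys(path)
    return sorted(dictionary, key=lambda w: score(w, keys), reverse=True)[:top_n]
```

A production word-gesture keyboard would use a proper gesture-matching model and a language model for ranking and correction; this sketch only conveys the overall structure of the decoding step.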

Text selection is a common and essential activity during text interaction in all interactive systems. As Augmented Reality (AR) head-mounted displays (HMDs) become more widespread, they will need to provide effective interaction techniques for text selection that ensure users can complete a range of text manipulation tasks (e.g., to highlight, copy, and paste text, send instant messages, and browse the web). As a relatively new platform, text selection in AR is largely unexplored, and the suitability of the interaction techniques supported by current AR HMDs for text selection tasks is unclear. This research aims to fill this gap and reports on an experiment with 12 participants, which compares the performance and usability (user experience and workload) of four possible techniques (Hand+Pinch, Hand+Dwell, Head+Pinch, and Head+Dwell). Our results suggest that Head+Dwell should be the default selection technique, as it is relatively fast, has the lowest error rate and workload, and has the highest-rated user experience and social acceptance.
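
Each of the four techniques pairs a pointing modality (hand ray or head gaze) with a confirmation mechanism (pinch gesture or dwell). As a rough sketch of how a dwell-based confirmation such as Head+Dwell can be implemented, the code below accumulates hover time on the currently gazed-at target and fires a selection once a threshold is reached; the 0.8 s threshold, target identifiers, and frame-loop structure are assumptions for illustration, not details from the study.

```python
# Minimal dwell-selection sketch (e.g., Head+Dwell): the hovered target is
# whatever the head-gaze ray currently hits; keeping the gaze on the same
# target for DWELL_TIME seconds triggers the selection. Threshold and loop
# structure are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional

DWELL_TIME = 0.8  # seconds of continuous hover required to select (assumed)

@dataclass
class DwellSelector:
    on_select: Callable[[str], None]   # callback invoked with the target id
    _current: Optional[str] = None     # target currently under the gaze ray
    _elapsed: float = 0.0              # accumulated hover time on _current
    _fired: bool = False               # avoid re-selecting without leaving

    def update(self, hovered: Optional[str], dt: float) -> None:
        """Call once per frame with the currently gazed-at target (or None)."""
        if hovered != self._current:
            # Gaze moved to a new target (or off all targets): reset the timer.
            self._current, self._elapsed, self._fired = hovered, 0.0, False
            return
        if hovered is None or self._fired:
            return
        self._elapsed += dt
        if self._elapsed >= DWELL_TIME:
            self._fired = True
            self.on_select(hovered)

# Example: a 60 Hz loop in which the gaze rests on the word "hello".
selector = DwellSelector(on_select=lambda target: print("selected:", target))
for _ in range(60):
    selector.update("hello", dt=1 / 60)  # fires once after ~0.8 s of dwell
```

A pinch-confirmed variant (Hand+Pinch or Head+Pinch) would replace the timer with a discrete gesture event, which trades the dwell technique's hands-free operation for faster, explicit confirmation.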

Figure 1: The four text selection techniques explored in this research: (a) Hand-based: Hand+Pinch and Hand+Dwell; (b) Head-based: Head+Pinch and Head+Dwell.
The experimental interface used in this research: (1) an 'interaction panel', (2) an 'instruction panel', (3) the 'start' button, and (4) the 'check' button.

