TUMSphere

Current Research

Integrating AI with MetaHuman Avatars in VR

This tutorial demonstrates how to integrate AI-driven characters with MetaHuman avatars in Unreal Engine 5 to create interactive, lifelike virtual agents. The objective is to guide developers through the complete workflow of linking large language models (LLMs) or dialogue systems with high-fidelity MetaHumans, enabling them to perceive user inputs, generate intelligent responses, and deliver them through natural voice and facial animations. By combining cutting-edge AI with real-time rendering, this approach unlocks powerful applications in virtual storytelling, immersive education, training simulations, and therapeutic environments.
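As an illustration of the dialogue step in this workflow, the sketch below shows how a UE5/MetaHuman client could forward a user's utterance to an LLM service and receive the reply text that is then voiced and lip-synced on the avatar. The endpoint URL, model name, and JSON layout follow the common OpenAI-style chat-completions format and are assumptions for illustration, not the tutorial's actual implementation.

```python
"""Minimal sketch of the dialogue backend a UE5/MetaHuman client could query
over HTTP. Endpoint URL, model name, and JSON shapes are illustrative
assumptions, not the tutorial's actual code."""
import requests

LLM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local LLM server

def ask_agent(user_text: str, persona: str = "You are a friendly virtual guide.") -> str:
    """Send the user's utterance to the LLM and return the agent's reply text.

    The UE5 client would pass this reply to a text-to-speech service and drive
    the MetaHuman's facial animation (e.g. viseme curves) from the audio.
    """
    payload = {
        "model": "local-llm",                        # placeholder model name
        "messages": [
            {"role": "system", "content": persona},  # fixed character persona
            {"role": "user", "content": user_text},  # what the user just said
        ],
    }
    response = requests.post(LLM_ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_agent("What can you show me in this scene?"))
```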

Virtual Reality Author: Süeda Özkaya

VR User Interface (UI) Design: Best Practices and Implementation

This research focuses on identifying best practices for designing user interfaces (UI) in virtual reality (VR) environments and developing a comprehensive framework to guide VR UI design. The study synthesizes existing research into 28 guidelines covering key aspects such as game design, learning, menus and interface elements, and cybersickness prevention. A VR application, FlUId, was developed to showcase these guidelines through good and bad design examples. The project aims to support the creation of more intuitive, comfortable, and accessible VR experiences.

UI Design Author: Esin Mehmedova

Gaze-Based Navigation and Interaction in Vision Pro: A Usability Study

This research aims to visualize users' gaze through heatmaps using the Apple Vision Pro's eye-tracking capabilities. The application includes several key functionalities: video eye tracking, spatial eye tracking, and averaged heatmap generation. In video eye tracking, users watch pre-recorded videos while their gaze is captured and then overlaid as a heatmap on top of the video. Spatial eye tracking records the user's real-world environment as they look around, then projects their gaze onto the captured footage. The averaged heatmap feature combines gaze data from multiple participants, producing a frame-by-frame averaged-heatmap visualization of collective viewing behavior. Key challenges included restricted access to raw gaze data due to Apple's privacy policies and the technical difficulty of recording the user's perspective for spatial eye tracking. The resulting heatmap videos provide meaningful visual insights into user attention and behavior across various contexts.
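For readers interested in the averaged heatmap step, the sketch below outlines one way to splat per-frame gaze points from several participants onto a single normalized heatmap that can be overlaid on the corresponding video frame. The frame size, Gaussian spread, and data layout are illustrative assumptions rather than the application's actual implementation.

```python
"""Sketch of per-frame averaged heatmap generation from multiple participants'
gaze samples. Frame size and sigma are illustrative assumptions."""
import numpy as np

def frame_heatmap(gaze_points, width, height, sigma=30.0):
    """Return a (height, width) heatmap for one video frame.

    gaze_points: list of (x, y) pixel coordinates, one per participant whose
    gaze was recorded on this frame.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=np.float32)
    for gx, gy in gaze_points:
        # Gaussian "splat" centered on this participant's gaze sample.
        heat += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    if gaze_points:
        heat /= len(gaze_points)   # average across participants
    if heat.max() > 0:
        heat /= heat.max()         # normalize to [0, 1] for overlay blending
    return heat

# Example: three participants looked at roughly the same region on this frame.
hm = frame_heatmap([(400, 300), (420, 310), (390, 295)], width=960, height=540)
```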

Eye Tracking Author: Esra Mehmedova

Real-time Sign Language Learning with VR

This research focuses on developing a VR-based serious game designed to support the learning of sign language in an engaging and interactive way. The system features a realistic MetaHuman avatar that demonstrates sign language gestures, which users then replicate using VR hand tracking. The game evaluates users' gestures in real time and provides immediate feedback to support effective learning. The goal is to lower the barrier to entry for sign language learners by offering an accessible and enjoyable training tool, particularly in educational settings where qualified sign language interpreters are in short supply. Key challenges include ensuring accurate gesture recognition, maintaining user engagement, and creating a scalable training experience for different sign languages. The final outcome aims to promote broader interest in learning sign language and expand access to inclusive communication tools.
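As a simplified picture of the real-time evaluation step, the sketch below compares the learner's tracked hand-joint positions against a reference pose captured from the avatar's demonstration and returns immediate feedback. The joint layout, distance threshold, and feedback messages are assumptions made for illustration, not the game's actual recognizer.

```python
"""Toy sketch of real-time gesture checking against a reference hand pose.
Joint layout, threshold, and messages are illustrative assumptions."""
import numpy as np

def gesture_error(user_joints: np.ndarray, reference_joints: np.ndarray) -> float:
    """Mean per-joint distance (meters) after removing the wrist offset, so the
    comparison reflects hand shape rather than where the hand is held."""
    user_rel = user_joints - user_joints[0]          # joint 0 = wrist
    ref_rel = reference_joints - reference_joints[0]
    return float(np.linalg.norm(user_rel - ref_rel, axis=1).mean())

def feedback(user_joints, reference_joints, threshold=0.02):
    err = gesture_error(user_joints, reference_joints)
    return "Correct sign!" if err < threshold else "Almost there - check your finger positions."

# Example with a toy 3-joint hand (real VR trackers report ~26 joints per hand).
ref = np.array([[0.00, 0.00, 0.00], [0.03, 0.05, 0.00], [0.05, 0.09, 0.01]])
usr = ref + np.random.normal(scale=0.005, size=ref.shape)
print(feedback(usr, ref))
```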

Accessibility Author: Jeremy Immanuel