[SIST Seminar] When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations

On: 2023-05-10 | Tag: ShanghaiTech University | Category: Lecture

Topic: When Data Meets Reality: Augmenting Dynamic Scenes with Visualizations

Speaker: Dr. CHEN Zhutian, Postdoctoral Fellow in Computer Science, Harvard University

Date and time: 10:00–11:30, May 12

Venue: Room 1A 200, SIST

Host: LI Quan

 

Abstract:

We live in a dynamic world that produces a growing volume of accessible data. Visualizing this data within its physical context can aid situational awareness, improve decision-making, enhance daily activities such as driving and watching sports, and even save lives in tasks such as performing surgery or navigating hazardous environments. Augmented Reality (AR) offers a unique opportunity to achieve this contextualization of data by overlaying digital content onto the physical world. However, visualizing data in its physical context using AR devices (e.g., headsets or smartphones) is challenging for users, owing to the complexities of creating visualizations and accurately placing them in the physical world. These challenges become even more pronounced in dynamic scenarios with temporal constraints.

In this talk, the speaker will introduce a novel approach that uses sports video streams as a testbed and proxy for dynamic scenes to explore the design, implementation, and evaluation of AR visualization systems that enable users to efficiently visualize data in dynamic scenes. He will first present three systems that allow users to visualize data in sports videos through touch, natural language, and gaze interactions, and then discuss how these interaction techniques can be generalized to other AR scenarios. The designs of these systems collectively form a unified framework that serves as a preliminary solution for helping users visualize data in dynamic scenes using AR. He will next share his latest progress in using Virtual Reality (VR) simulations as a more advanced testbed than videos for AR visualization research. Finally, building on this framework and these testbeds, he will describe his long-term vision and roadmap for using AR visualizations to make our world more connected, accessible, and efficient.


Biography:

Dr. CHEN Zhutian is a Postdoctoral Fellow in the Visual Computing Group at Harvard University. His research lies at the intersection of data visualization, human-computer interaction, and augmented reality, with a focus on advancing human-data interaction in everyday activities. His work has been published as full papers in top venues such as IEEE VIS, ACM CHI, and IEEE TVCG, and has received a best paper award at ACM CHI and three best paper nominations at IEEE VIS, the premier conference in data visualization. Before joining Harvard, he was a Postdoctoral Fellow in the Design Lab at UC San Diego. Dr. CHEN received his Ph.D. in Computer Science and Engineering from the Hong Kong University of Science and Technology.