Abstract: Information search is an activity that involves various techniques and methods for finding new data and insights; it is shaped by both the context and the information seeker (user). Physical and digital spaces, as different contexts, offer unique advantages for search activities: the physical environment provides spatial layout, interaction with tangible objects, and serendipitous discovery, while online information systems support fast information retrieval, browsing, and knowledge discovery. The mismatch between information search tasks and their interfaces challenges the role of physical places. Compared to physical collections, where each object is located in a single place, digital spaces allow items to be inspected from multiple locations; furthermore, a digital space can reveal multiple relationships between and among objects. Some information search tasks are more easily performed in physical libraries, while digital libraries support more efficient information retrieval. Information foraging, or "being open to new information," is a natural part of the information search process, yet it has been poorly supported in both the physical and electronic arenas due to the gap between these two environments.
In this talk, I will focus on the design methods used in several visualization systems I have been working on for the past three years. The systems aim to bridge the physical and digital arenas by using digital data associated with physically situated objects, then transforming and visualizing this data in relation to a given context. The systems take the form of web-based apps that serve as a visual "companion," recognizing objects and using them instantly to provide users with information or insight. After a user snaps a photo or views a scene through an AR headset, the applications generate object-related data or a visual dashboard for further exploration. Through this interplay between the real and digital worlds, such systems could open new avenues for adaptive visualizations.
Bio: Zona Kostic is a research, teaching, and innovation fellow at Harvard University. Her professional interests lie at the intersection of data visualization, machine learning, and digital realities, with her current research focused on semantic visual similarity and smart web-based interfaces. Previously, Zona was part of the Harvard Visual Computing Group, developing immersive visual analytics systems that support the overlay of digital information on physical objects. She has published six books and a number of research papers in scientific journals and conferences. Zona is also a co-founder of ArchSpike, a small venture that integrates data science and visualization with market modeling, allowing users to design buildings that better respond to future market demands.