Researchers at Carnegie Mellon University’s Human-Computer Interaction Institute (HCII) are working to make everyday objects anticipate our needs by combining AI with robotic mobility.
Imagine a stapler sliding across a desk to meet your waiting hand, or a knife moving out of the way just before you lean against a countertop. This may sound like magic, but it’s the innovative work being done at HCII. By using large language models (LLMs) and wheeled robotic platforms, researchers have transformed ordinary items like mugs, plates, and utensils into proactive assistants that can observe human behavior, predict interventions, and move across surfaces to help humans at just the right time.
The team presented its work on unobtrusive physical AI at the 2025 ACM Symposium on User Interface Software and Technology in Busan, South Korea. Led by HCII assistant professor Alexandra Ion, the Interactive Structures Lab aims to create adaptive systems for physical interaction that blend seamlessly into everyday life while responding dynamically to our needs.
This unobtrusive system utilizes computer vision and LLMs to understand a person’s goals, predicting their next actions or needs. A ceiling-mounted camera captures the environment and tracks object positions, translating this information into a text-based description of the scene. This technology allows everyday objects to become intuitive, helpful assistants without the need for explicit commands from the user.
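To make that pipeline concrete, here is a minimal sketch, not the CMU team’s code, of how tracked object positions from an overhead camera might be serialized into the kind of text-based scene description an LLM can reason over. The class and function names, coordinate units, and activity string are all hypothetical illustrations.

```python
# Hypothetical sketch: turning tracked object positions from an overhead camera
# into a plain-text scene description that can be sent to an LLM.
from dataclasses import dataclass


@dataclass
class TrackedObject:
    name: str      # e.g. "stapler", "mug"
    x_cm: float    # position on the table surface, in centimeters
    y_cm: float


def describe_scene(objects: list[TrackedObject], person_activity: str) -> str:
    """Build a plain-text description of the tabletop for the LLM prompt."""
    lines = [f"The person is currently: {person_activity}."]
    for obj in objects:
        lines.append(
            f"A {obj.name} is on the table at ({obj.x_cm:.0f} cm, {obj.y_cm:.0f} cm)."
        )
    lines.append("Which object, if any, should move to help the person, and where should it go?")
    return "\n".join(lines)


if __name__ == "__main__":
    scene = [TrackedObject("stapler", 80, 20), TrackedObject("mug", 30, 45)]
    prompt = describe_scene(scene, "stacking loose papers with one hand extended")
    print(prompt)  # this text would accompany task instructions in the LLM prompt
```

The key design idea the sketch illustrates is that the camera output is flattened into ordinary language, so the LLM can reason about the scene without any special perception interface.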
With the work being done at HCII, a world where objects are not just tools but proactive helpers is within reach. At the core of the system are the LLMs, which infer a person’s goals and likely next actions from the scene description in order to provide proactive assistance.
The process begins with the LLM interpreting the text description of the scene, inferring what the person’s goals may be and which actions would help them the most. The predicted actions are then carried out by everyday items, such as cooking utensils, office supplies, or organizational tools, mounted on the wheeled robotic platforms.
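A brief sketch of that prediction-to-motion step is shown below. It is an assumption-laden illustration, not the lab’s actual interface: it supposes the LLM is asked to reply with JSON naming an object and a target position, and that the platform under that object accepts a simple heading-and-distance command.

```python
# Hypothetical sketch of mapping an LLM's predicted assistive action to a
# motion command for the wheeled platform under an object.
import json
import math


def parse_llm_action(reply: str) -> tuple[str, float, float]:
    """Extract (object_name, target_x_cm, target_y_cm) from a JSON-formatted LLM reply."""
    data = json.loads(reply)
    return data["object"], float(data["target_x_cm"]), float(data["target_y_cm"])


def heading_and_distance(x: float, y: float, tx: float, ty: float) -> tuple[float, float]:
    """Compute the heading (degrees) and distance (cm) the platform must travel."""
    dx, dy = tx - x, ty - y
    return math.degrees(math.atan2(dy, dx)), math.hypot(dx, dy)


if __name__ == "__main__":
    # Example reply format assumed for this sketch.
    reply = '{"object": "stapler", "target_x_cm": 40, "target_y_cm": 25}'
    name, tx, ty = parse_llm_action(reply)
    heading, dist = heading_and_distance(80, 20, tx, ty)  # current stapler position
    print(f"Drive the {name} platform {dist:.0f} cm at heading {heading:.0f} degrees.")
```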
Violet Han, an HCII Ph.D. student working with Ion, emphasizes the importance of focusing AI assistance on the physical domain. By enhancing the capabilities of everyday objects that users already trust, the researchers hope to build on that trust and integrate AI assistance seamlessly into daily tasks.
The team led by Ion is exploring ways to expand the scope of unobtrusive physical AI in homes and offices. One example mentioned by Ion is the concept of a shelf that automatically folds out from the wall when a person comes home with groceries, providing a convenient place to set down bags while they remove their coat.
The vision of the team is to develop technology that seamlessly integrates into daily life, providing new functionalities while remaining almost invisible. This approach aims to bring safe and reliable physical assistance into various spaces, such as homes, hospitals, and factories.
The work of the Interactive Structures Lab, which focuses on creating intuitive physical interfaces, is paving the way for everyday objects to become proactive personal assistants through the power of artificial intelligence. By augmenting objects with intelligence and robotic movement, the researchers are working toward a future in which physical AI is an integral part of daily life.