AI Search Robot Uses 3D Maps and Internet Knowledge to Find Lost Items
A new robot developed at the Technical University of Munich (TUM) combines spatial mapping with internet-based knowledge to efficiently locate misplaced objects [1, 2]. The robot, created in Prof. Angela Schoellig's Learning Systems and Robotics Lab, uses a camera and a laptop to understand its surroundings and identify the most probable locations for a sought item.
How the Robot Works
To find a lost item, such as a pair of glasses, the robot first constructs a three-dimensional map of the environment using depth information from its camera [1]. This map is accurate to the centimeter and continuously updated. Simultaneously, a laptop provides the robot with information about objects in the image and their typical significance to humans.
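The article does not describe the mapping pipeline in detail, but the idea of fusing depth readings into a centimeter-resolution 3D map can be sketched as follows. This is a minimal illustration, not the TUM implementation; the camera intrinsics and voxel resolution are assumed values.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D points in the camera frame
    using the pinhole camera model with focal lengths fx, fy and center cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

class VoxelMap:
    """Occupancy map at centimeter resolution, updated from point clouds."""
    def __init__(self, resolution=0.01):  # 1 cm voxels
        self.resolution = resolution
        self.occupied = set()

    def integrate(self, points):
        # Quantize each 3D point to a voxel index and mark that voxel occupied;
        # repeated integration of new frames keeps the map continuously updated.
        idx = np.floor(points / self.resolution).astype(int)
        self.occupied.update(map(tuple, idx))
```

Each new camera frame would be back-projected and integrated, so the map grows and stays current as the robot moves.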
“We have taught the robot to understand its surroundings,” says Prof. Schoellig, head of the Robotics Lab at the TUM Chair of Safety, Performance and Reliability for Learning Systems [1]. The goal is to develop robots capable of independent navigation in any environment, a crucial capability for robots intended for use in factories or homes.
Leveraging Internet Knowledge
The robot doesn’t just see objects; it understands their relationships and typical placements. It knows, for example, that a table or windowsill is a likely spot for glasses, whereas a stovetop or sink is not [2]. “The language model captures the relationships between the objects and we convert this information into the robot’s language,” explains Prof. Schoellig [2].
The robot assigns numerical probabilities to different locations on its 3D map, indicating the likelihood of finding the target object there. Research indicates this approach increases search efficiency by nearly 30% compared to random searching [1]. The gain comes from combining AI-based image recognition with the semantic knowledge supplied by the language model.
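One common way to turn a language model's semantic relatedness scores into the kind of probability map described above is a softmax over candidate locations. This is a hedged sketch of that general technique; the location names and score values are invented for illustration and are not taken from the paper.

```python
import math

def location_probabilities(scores):
    """Convert relatedness scores (higher = more plausible location)
    into a normalized probability distribution via a numerically
    stable softmax."""
    m = max(scores.values())
    exps = {loc: math.exp(s - m) for loc, s in scores.items()}
    total = sum(exps.values())
    return {loc: e / total for loc, e in exps.items()}

# Illustrative scores for "glasses"; values are assumptions, not measured data.
scores = {"table": 2.5, "windowsill": 2.0, "stovetop": -1.0, "sink": -0.5}
probs = location_probabilities(scores)
search_order = sorted(probs, key=probs.get, reverse=True)
```

Searching locations in descending probability order is what lets the robot beat a random search.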
Memory and Adaptation
The robot also remembers previous images and compares them to new ones. If a new object appears, it recognizes the change with 95% certainty and flags those areas as potential search locations [1].
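The compare-and-flag step can be sketched as a simple change detector over per-region object detections. This is an assumed data layout (region name mapped to detected objects with confidences), not the paper's method; the 95% figure from the article is used as the flagging threshold.

```python
def changed_regions(prev, curr, threshold=0.95):
    """prev/curr map region -> {object_name: detection_confidence}.
    Flag regions where an object appears that was absent before and
    is detected at or above the confidence threshold."""
    flagged = []
    for region, detections in curr.items():
        before = prev.get(region, {})
        for obj, conf in detections.items():
            if obj not in before and conf >= threshold:
                flagged.append(region)  # candidate search location
                break
    return flagged
```

Regions returned by this function would then receive boosted probabilities in the search map.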
Future Developments
The next phase of research focuses on enabling the robot to search in enclosed spaces such as drawers and cupboards. This will require the robot to interact physically with its environment, using robotic arms and hands to open compartments and determine the best way to grasp handles [1].
This research was published in the journal IEEE Robotics and Automation Letters [3].
References
1. Technical University of Munich. “AI search robot uses 3D maps and internet knowledge to find lost items.” https://www.tum.de/en/news/ai-search-robot-uses-3d-maps-and-internet-knowledge-to-find-lost-items. Accessed March 13, 2026.
2. Dynamic Systems Lab. “Welcome to the Learning Systems and Robotics Lab.” https://www.dynsyslab.org/vision-news/. Accessed March 13, 2026.
3. Bogenberger, B., et al. “Where Did I Leave My Glasses? Open-Vocabulary Semantic Exploration in Real-World Semi-Static Environments.” IEEE Robotics and Automation Letters (2026). DOI: 10.1109/LRA.2026.3656790.