Ally SenseScape
Multi-Sensory Indoor Navigation for PBLV
Client: Envision Technologies BV
Scope: UX Research, Interaction, Accessibility, Inclusive Design
Year: 2025
Indoor navigation remains one of the most persistent barriers for People who are Blind or have Low Vision (PBLV), especially in workplaces and educational buildings where layouts are complex and conditions change daily. Indoor spaces often lack consistent cues and reliable positioning, and destinations are frequently non-fixed: meeting rooms change, entrances close, furniture moves, and temporary obstacles appear without warning. When navigation breaks down, the consequences are not minor: people miss meetings or classes, lose time and confidence, enter the wrong room at the wrong moment, or are forced to ask for help in ways that undermine privacy and independence. Because these environments are used repeatedly and are tightly tied to employment and education, the impact accumulates across many routine moments rather than rare events. This project matters because it targets that high-stakes, everyday gap: it asks how indoor navigation support can work in real motion, at real pace, and in coordination with the tools and collaborators people already rely on. That framing points to a solution that is not simply “more instructions,” but a practical way to extend an existing navigation ecology.
The proposed design solution treats indoor navigation not just as a route-planning problem to be automated, but as an enacted practice already distributed across the person, their cane or guide dog, and environmental cues. Instead of replacing that ecology with continuous turn-by-turn narration, the concept introduces an on-demand, hands-free, voice-based confirmation layer. The system is grounded in a lightweight building knowledge base: a floor plan plus a small set of “decision points” documented with photos and notes on non-visual cues such as flooring transitions or characteristic sounds.
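To make the idea of a lightweight knowledge base concrete, the sketch below models a floor plan reference, decision points with photos and non-visual cue notes, and non-fixed destinations. It is a minimal illustration only: all class names, fields, and example cues are assumptions, not the project's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a building knowledge base.
# All names and fields are hypothetical, not the project's schema.

@dataclass
class DecisionPoint:
    """A junction or landmark where navigators commonly want confirmation."""
    point_id: str
    photo_path: str              # reference photo documented for this point
    non_visual_cues: list[str]   # e.g. "carpet changes to tile", "vending machine hum"
    connects_to: list[str]       # ids of adjacent decision points

@dataclass
class Destination:
    """A non-fixed destination, such as a bookable meeting room."""
    name: str
    nearest_decision_point: str
    notes: str = ""              # e.g. "third door on the right after the junction"

@dataclass
class BuildingKnowledgeBase:
    floor_plan_path: str
    decision_points: dict[str, DecisionPoint] = field(default_factory=dict)
    destinations: dict[str, Destination] = field(default_factory=dict)

    def cues_near(self, point_id: str) -> list[str]:
        """Return the non-visual cues documented for a decision point."""
        point = self.decision_points.get(point_id)
        return point.non_visual_cues if point else []
```

Because the base is just a floor plan, a handful of photographed decision points, and short cue notes, it can be assembled and updated for a building without specialised infrastructure.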
While users move at their normal pace, the assistant stays quiet and available; it speaks in short, actionable steps only when asked, and it helps most in moments of uncertainty, such as confirming a junction, reading a sign, recovering when a route changes, or locating non-fixed destinations like meeting rooms. Because current “video” AI remains limited, the design avoids continuous live processing in the background and instead uses single, user-triggered snapshots for confirmation, with transparent fallback scripts when connectivity or confidence is low. In this way, the system supports independence by coordinating with existing navigation strategies, extending what already works and intervening only when it adds unique value.
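The interaction logic can be summarised as a single request-response loop: stay quiet until asked, take one snapshot, answer in one short step, and fall back transparently when connectivity or confidence is low. The sketch below illustrates that loop under assumed interfaces; the camera and scene-description services, the confidence threshold, and the spoken wording are all hypothetical.

```python
# Illustrative sketch of the on-demand confirmation loop.
# The camera, describe_scene service, threshold, and wording are assumptions.

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off below which the assistant admits uncertainty

def handle_user_request(question: str, camera, describe_scene, online: bool) -> str:
    """Answer one user-triggered confirmation request with one short, actionable step."""
    if not online:
        # Transparent fallback: say what is unavailable and hand control back to the user.
        return ("I can't reach the building service right now. "
                "You can keep following your usual route, or ask me again in a moment.")

    snapshot = camera.capture()                  # one snapshot, no continuous video
    result = describe_scene(snapshot, question)  # assumed to return .confidence and .short_answer

    if result.confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: be explicit and point to a documented non-visual cue rather than guessing.
        return ("I'm not sure from this angle. "
                "If you hear the vending machines on your right, you are at the main junction.")

    # Keep responses short and actionable: one step, then go quiet again.
    return result.short_answer
```

The key design choice reflected here is that the assistant never initiates speech: it responds once per request and then returns to silence, leaving pace and attention with the person, their cane or guide dog, and the environment.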
The concept was developed through an iterative, participatory process that treated PBLV as experts in their own navigation practices. Interviews and in-context observations in workplace and educational settings were conducted to surface everyday navigation challenges and to map recurring breakdown moments.
From an initial set of nineteen issues, priorities were established collaboratively based on perceived impact and feasibility, narrowing the scope to indoor navigation for non-fixed destinations in dynamic buildings. Co-design sessions then examined what assistance is helpful at walking pace and with hands occupied, and prototype AI-guided navigation flows were tested to evaluate timing, verbosity, and interaction burden. Early trials highlighted misalignments between continuous instruction and embodied navigation, which led to a shift away from automation toward on-demand support. In parallel, a lightweight approach to place knowledge was explored, resulting in a building knowledge base composed of floor plans, decision-point photos, and notes on non-visual sensory cues. Finally, breakdowns observed during testing (for example, missed turns, ambiguity, and connectivity limits) informed explicit fallback scripts and recovery interactions, so that assistance remained legible, interruptible, and aligned with the existing coordination among cane or guide dog, environment, and AI.

The project also received first prize in the KHMW Jong Talent Afstudeerprijs (Graduation Award), in the category Healthcare Innovation for Underserved Communities (with the Philips Foundation).