Browsing by Author "Hudson T.E."
Now showing 1 - 2 of 2
A Smart Service System for Spatial Intelligence and Onboard Navigation for Individuals with Visual Impairment (VIS4ION Thailand): study protocol of a randomized controlled trial of visually impaired students at the Ratchasuda College, Thailand (2023-12-01)
Beheshti M.; Naeimi T.; Hudson T.E.; Feng C.; Mongkolwat P.; Riewpaiboon W.; Seiple W.; Vedanthan R.; Rizzo J.R.; Mahidol University

Background: Blind/low vision (BLV) severely limits information about our three-dimensional world, leading to poor spatial cognition and impaired navigation. BLV engenders mobility losses, debility, illness, and premature mortality. These mobility losses have been associated with unemployment and severe compromises in quality of life. BLV not only eviscerates mobility and safety but also creates barriers to inclusive higher education. Although true in almost every high-income country, these startling facts are even more severe in low- and middle-income countries, such as Thailand. We aim to use VIS4ION (Visually Impaired Smart Service System for Spatial Intelligence and Onboard Navigation), an advanced wearable technology, to enable real-time access to microservices, providing a potential solution to close this gap and deliver consistent, reliable access to the critical spatial information needed for mobility and orientation during navigation.

Methods: We are leveraging 3D reconstruction and semantic segmentation techniques to create a digital twin of the campus that houses Mahidol University's disability college. Using a cross-over randomization design, two randomized groups of visually impaired students will deploy this augmented platform in two phases: a passive phase, during which the wearable will only record location, and an active phase, in which end users receive orientation cueing while location is recorded. One group will complete the active phase first and then the passive phase; the other group will proceed in the reverse order.
We will assess acceptability, appropriateness, and feasibility, focusing on user experiences with VIS4ION. In addition, we will test another cohort of students for navigational, health, and well-being improvements, comparing weeks 1 to 4. We will also conduct a process evaluation according to the Saunders Framework. Finally, we will extend our computer vision and digital-twinning technique to a 12-block spatial grid in Bangkok, providing aid in a more complex environment.

Discussion: Although electronic navigation aids seem like an attractive solution, there are several barriers to their use; chief among them is their dependence on environmental (sensor-based) infrastructure, Wi-Fi/cell connectivity infrastructure, or both. These barriers limit their widespread adoption, particularly in low- and middle-income countries. Here we propose a navigation solution that operates independently of both environmental and Wi-Fi/cell infrastructure. We predict that the proposed platform will support spatial cognition in BLV populations, augmenting personal freedom and agency, and promoting health and well-being.

Trial registration: ClinicalTrials.gov identifier NCT03174314, registered 2017-06-02.

UNav: An Infrastructure-Independent Vision-Based Navigation System for People with Blindness and Low Vision (2022-11-01)
Yang A.; Beheshti M.; Hudson T.E.; Vedanthan R.; Riewpaiboon W.; Mongkolwat P.; Feng C.; Rizzo J.R.; Mahidol University

Vision-based localization approaches now underpin newly emerging navigation pipelines for myriad use cases, from robotics to assistive technologies. Compared to sensor-based solutions, vision-based localization does not require pre-installed sensor infrastructure, which is costly, time-consuming, and often infeasible at scale. Herein, we propose a novel vision-based localization pipeline for a specific use case: navigation support for end users with blindness and low vision.
Given a query image taken by an end user on a mobile application, the pipeline leverages a visual place recognition (VPR) algorithm to find similar images in a reference image database of the target space. The geolocations of these similar images are used in a downstream task that employs a weighted-average method to estimate the end user's location. Another downstream task uses the perspective-n-point (PnP) algorithm to estimate the end user's direction by exploiting the 2D–3D point correspondences between the query image and the 3D environment, as extracted from matched images in the database. Additionally, the system implements Dijkstra's algorithm to calculate a shortest path over a navigable map that includes the trip origin and destination.

The topometric map used for localization and navigation is built with a customized graphical user interface that projects a 3D reconstructed sparse map, built from a sequence of images, onto the corresponding a priori 2D floor plan. Sequential images used for map construction can be collected in a pre-mapping step or scavenged from public databases/citizen science. The end-to-end system can be installed on any internet-accessible device with a camera that hosts the custom mobile application.

For evaluation purposes, mapping and localization were tested in a complex hospital environment. The evaluation results demonstrate that the system can achieve localization with an average error of less than 1 m without knowledge of the camera's intrinsic parameters, such as focal length.
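The weighted-average localization step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the UNav implementation: the coordinates, similarity scores, and function name are invented, and the weights are simply the normalized VPR similarity scores of the top-k matched reference images.

```python
def weighted_average_location(locations, scores):
    """Estimate the user's position as the similarity-weighted mean of the
    geolocations of reference images returned by place recognition.

    locations: list of (x, y) coordinates of matched reference images
    scores:    one VPR similarity score per matched image
    """
    total = sum(scores)
    x = sum(s * lx for (lx, _), s in zip(locations, scores)) / total
    y = sum(s * ly for (_, ly), s in zip(locations, scores)) / total
    return (x, y)

# Three hypothetical top-3 VPR matches near the user's true position,
# with higher scores pulling the estimate toward closer matches.
estimate = weighted_average_location(
    [(3.0, 4.0), (3.5, 4.2), (2.8, 3.9)],
    [0.9, 0.7, 0.5],
)
```

Because the weights are normalized, the estimate always lies inside the convex hull of the matched geolocations, which bounds the error when retrieval returns spatially consistent neighbours.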
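Route computation in the abstract relies on Dijkstra's algorithm over the navigable map. A minimal sketch follows, assuming the map is represented as an adjacency dict of node -> (neighbour, edge length) pairs; the junction names and distances are invented for illustration.

```python
import heapq

def shortest_path(graph, origin, destination):
    """Dijkstra's algorithm over a navigable map.

    graph: dict mapping node -> list of (neighbour, edge_length) pairs
    Returns the node sequence of a shortest path, or None if unreachable.
    """
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == destination:
            break
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if destination not in dist:
        return None
    # Walk predecessor links back from the destination to the origin.
    path = [destination]
    while path[-1] != origin:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Toy navigable map: corridor junctions with edge lengths in metres.
corridors = {
    "entrance": [("atrium", 10.0), ("stair_a", 25.0)],
    "atrium": [("stair_a", 8.0), ("clinic", 30.0)],
    "stair_a": [("clinic", 12.0)],
}
route = shortest_path(corridors, "entrance", "clinic")
```

In this toy map the route via the atrium and stair_a (10 + 8 + 12 = 30 m) beats both the direct atrium-to-clinic corridor (40 m) and the entrance-to-stair_a shortcut (37 m).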