A Systematic Review of Wearable Devices for Orientation and Mobility of Adults With Visual Impairment and Blindness

Wearable devices have been developed to improve the navigation of blind and visually impaired people. With technological advancements, the application of wearable devices has been increasing. This systematic review aimed to explore the existing literature on technologies used in wearable devices to provide independent and safe mobility for visually impaired people. Searches were conducted in six electronic databases (PubMed, Web of Science, Scopus, Cochrane, ACM Digital Library, and SciELO). Our systematic review included 61 studies. The results show that the majority of studies used audio information as a feedback interface and a combination of technologies for obstacle detection, especially the integration of sensor-based and computer vision-based technologies. The findings also showed the importance of including visually impaired individuals in prototype evaluation and the need for safety evaluations, which are currently lacking. These results have important implications for developing wearable devices for the safe mobility of visually impaired people.


I. INTRODUCTION
Visually impaired people face several challenges in accomplishing everyday tasks, especially when attempting to move safely and independently. The ability to detect hazards is reduced with visual impairment, which can result in accidents, collisions, and falls, with a negative impact on physical, psychological, and socio-economic development [1]-[3]. Visual impairment is often associated with mobility restrictions, leading to various health issues, including loss of independence, social isolation, reduced physical activity, and depression [4]. Improving the mobility skills of visually impaired people may improve their ability to participate in society, enhancing their productivity, self-maintenance, leisure, and overall quality of life [4].
According to the World Health Organization, one billion people have some degree of visual impairment worldwide, including blindness, moderate-to-severe visual impairment, and near visual impairment [5], [6]. The prevalence of visual impairment is notably higher in low- and middle-income countries (LMICs) [6]. In LMICs, factors such as ageing, infrastructure barriers, and difficulty accessing assistive technologies may increase the occurrence of falls among adults with visual impairment [3].
To achieve efficient and safe mobility, visually impaired individuals rely on assistive technologies such as white canes, guide dogs, or electronic devices [4]. Although white canes and guide dogs are the most commonly used assistive technologies, they only partially resolve safe and independent mobility [7], [8]. White canes have a short detection range, cover only obstacles at ground level, and cannot identify obstacles above waist level [9]. Therefore, electronic travel aids (ETAs) have been developed to overcome these limitations.

II. METHODS
This systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines [21].

A. SEARCH STRATEGY
Studies were identified through searches of six databases: PubMed, Web of Science, Scopus, Cochrane Central Register of Controlled Trials (CENTRAL), ACM Digital Library, and SciELO (Scientific Electronic Library Online). Additionally, to ensure the inclusion of recent articles in this review, the alert function was set in the databases that allowed this option, namely PubMed, Web of Science, Scopus, and ACM Digital Library. An additional study [79] was identified through a hand search of the reference list. The search was conducted in June 2020 and used MeSH headings and keywords associated with ''visual impairment'', ''wearable devices'', and ''mobility''. The search strategy, using PubMed, can be viewed in Appendix A.

B. ELIGIBILITY CRITERIA
Studies were included if they: (i) were developed for adults (18 years and older) with visual impairment (low vision or blindness); (ii) reported the development of wearable devices for the mobility and/or orientation of visually impaired people. Visual impairment was defined according to the International Classification of Diseases 11 (2018) [22], which classifies it into two groups: ''low vision'', with visual acuity worse than 6/18 up to 3/60 or visual field loss to less than 20°; and ''blindness'', with visual acuity worse than 3/60 or visual field loss to less than 10°. Studies were excluded if they: (i) were not written in English or Portuguese; (ii) were conference abstracts, book chapters, dissertations, or review articles; (iii) described technology that was not classified as wearable or was not developed for orientation and/or mobility purposes.

C. STUDY SELECTION
All titles and abstracts were independently screened by two review authors (ADPS, AHGZ) following the inclusion criteria. Full-text studies were then evaluated according to the eligibility criteria. The inclusion or exclusion of studies was discussed by the two review authors until consensus was reached; when necessary, a third and a fourth review author (FOM, AV) were consulted for a final decision.
A quality assessment of the studies was not attempted due to the wide range of study designs and the number of studies describing algorithms. Moreover, the focus of this review was on the technological characteristics of available wearables; thus, a quality assessment would not have provided additional information related to the objectives of this systematic review.

D. DATA EXTRACTION
Data were extracted from each eligible article by two authors (ADPS, AHGZ), independently and cross-checked, and organized in a spreadsheet. Data extraction included authors, year of publication, country of origin, technologies used and their objectives, type of feedback interface, study setting, sample characteristics and methods used for user evaluations, and summary of the findings.

III. RESULTS
A. STUDY SELECTION
A total of 2241 studies were identified through the database searches and the reference list. Of these, 61 studies (2.72%) met the inclusion criteria. Fig. 1 illustrates the process of searching and selecting the studies according to the PRISMA flow diagram.

B. STUDY CHARACTERISTICS
A summary of the demographic and methodological characteristics of the included studies is provided in Table 1. The 61 studies were conducted in China (n = 13, 21.31%), United States (n = 10, 16.39%), India (n = 8, 13.11%), Japan (n = 4, 6.56%), United Kingdom (n = 3, 4.92%), Republic of Korea (n = 2, 3.28%), Taiwan (n = 2, 3.28%), and Sri Lanka (n = 2, 3.28%). Other included studies were conducted in Brazil, Canada, France, Germany, Hungary, Iraq, Italy, Jordan, Malaysia, Mexico, Portugal, Romania, Spain, Sweden, Switzerland, and Turkey. According to the United Nations [23], countries are classified by their level of development as measured by per capita gross national income (GNI): low-income countries are those with a GNI per capita of $1,035 or less; lower-middle-income countries, between $1,036 and $4,085; upper-middle-income countries, between $4,086 and $12,615; and high-income countries, above $12,615 [23]. In our study, high-income countries were responsible for almost half (n = 30, 49.18%) of the included studies, while upper-middle and lower-middle-income countries were responsible for 34.43% (n = 21) and 16.39% (n = 10), respectively. No studies were conducted in low-income countries.
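Purely to make the thresholds quoted above concrete, the classification can be expressed as a simple rule. The Python sketch below is illustrative and uses only the GNI cut-offs cited from [23].

def income_group(gni_per_capita_usd: float) -> str:
    # UN income classification using the per capita GNI cut-offs cited above [23]
    if gni_per_capita_usd <= 1035:
        return "low-income"
    if gni_per_capita_usd <= 4085:
        return "lower-middle-income"
    if gni_per_capita_usd <= 12615:
        return "upper-middle-income"
    return "high-income"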
Overall, the included studies were recently published, with the first one published in 2001 and 91.80% published in the last ten years. Although the number of included studies may seem high, several of them were carried out by the same teams, with each study reporting improvements at a different stage of the same project.
Ross [59] and Ross and Blasch [60] developed and evaluated an orientation and wayfinding aid with 15 participants with visual impairments crossing streets using different interfaces in random order. Both studies reported the same methodology, sample size, and results. In [59], Ross focused on the design process, whereas in [60] they provided more details about the participants' performance and preferences. Their results indicated a significant decrease in participants' veering, with the tapping interface producing better results in both performance and participants' preferences.
Bai et al. [27], [28] described the development of smart glasses. Initially, the system, composed of an RGB-D camera and an ultrasonic sensor, worked only indoors [27]. Later, the authors extended the navigation capability to outdoor environments by adding a Convolutional Neural Network (CNN) object recognition module and fusing GPS and IMU data [28]. The system [28] was evaluated and demonstrated to be effective in navigation and recognition in both indoor and outdoor scenarios.
Silva and Wimalaratne [63], [64] developed a belt with ultrasonic sensors to assist indoor navigation. In [63], they initially focused on obstacle detection and on a fuzzy logic model to assess the safety level of the environment; in [64], they added an object recognition model by fusing sonar and vision sensors.
Zhang et al. [79], [80] proposed an ARCore-based navigation system. In [79], Zhang et al. focused on the device's functionality, whereas in [80] the focus was the user interface. Zhang et al. [79] evaluated the performance of an ARCore-supported smartphone for computer vision-based localization, as well as a hybrid interaction mechanism (audio and tactile) to provide better guidance. The vibration feedback performed well; however, participants highlighted that the device occupied one hand, which was inconvenient during daily activities. Therefore, they designed and prototyped a sliding wristband using 3D printing [80]. The efficiency and feasibility of the proposed design were evaluated through proof-of-concept experiments in virtual and real-world scenarios with eight participants (four blindfolded and four visually impaired) [80].
Ikeda et al. [40], [41] developed a visual aid to assist the mobility of patients with retinitis pigmentosa at night. Although both studies presented similar findings in darkened conditions, the device in [41] improved the view size and image quality compared to [40], which also had high production costs. In [41], Ikeda et al. used a high-performance see-through display and implemented a high-sensitivity camera with a complementary metal-oxide-semiconductor (CMOS) sensor, which reduced production costs and made the new device commercially available. User experiments included 8 [40] and 28 [41] patients.
Yang et al. [73]-[76] implemented several frameworks over the years to improve smart glasses until they were commercially available. In [73], the 3D-printed prototype focused on expanding the detection of traversable areas using the Intel RealSense R200. The approach was tested with visually impaired participants using mixed methods and showed a reduction of 78.60% in the number of collisions. Next, the framework proposed in [74] used a polarized RGB-D sensor to improve the traversable area detection proposed in [73], in addition to detecting water hazards. The approach was tested with blindfolded participants and achieved a detection rate of 94.40%, higher than previous works. The focus in [75] was to decrease the minimum detection range of the RealSense R200, from 650 mm to 60 mm, to enhance traversability awareness and avoid close obstacles. Experiments with visually impaired participants showed a reduction of the number of collisions by nearly half. Later, Yang et al. [76] enhanced the previous proof-of-concept using deep neural networks to contribute to terrain awareness. Unlike the previous works [73], [74], which used depth segmentation, [76] used a semantic mask to segment the traversable areas. In a closed-loop field test with visually impaired users, the results indicated an improvement in the safety and versatility of the navigation system.
Later, Long et al. [52] also used the Intel RealSense R200 and the non-semantic stereophonic interface proposed in [73], which was also employed in [74]-[76]. However, the difference between Long et al.'s study [52] and Yang et al. [73]-[76] is that, instead of smart glasses, the prototype in [52] is worn on the user's neck. In [52], Long et al. proposed a unified framework for target detection, recognition, and fusion based on the sensor fusion of a low-power millimeter-wave (MMW) radar and the RGB-D sensor. In addition to technical features, price, dimensions, weight, and energy consumption were also considered. The framework proposed by [52] expanded and enhanced the detection range while showing high accuracy and stability under diverse illumination conditions. Finally, Long et al. [53] proposed a low-power MMW radar system using the commercially available smart glasses improved by Yang et al. [73]-[76].
Although most of these studies published different stages of the same project, Zhang et al. [79], [80], Ross [59], and Ross and Blasch [60] divided their work into two approaches: one describing the device's functionality [60], [79] and the other focusing on the design process and the importance of including the user throughout the process [59], [80].

C. TECHNOLOGIES
1) SENSORS AND COMPUTER VISION
The 61 included studies reported a variety of technologies that were mainly used for obstacle detection. In addition to detection, some technologies also provided obstacle recognition, that is, the identification of different categories of obstacles (e.g., chair, car, stairs, or a moving person).
Among sensor-based technologies, ultrasonic sensors were the most used in the included studies (n = 24). The use of ultrasonic sensors has demonstrated several benefits for mobility performance, including a decrease in navigation time [13], [36], [63], and detection of complex obstacles such as stairs [13], [36] and moving obstacles [7].
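For context, ultrasonic ranging rests on a simple time-of-flight calculation: the sensor emits a pulse and the distance is half the round-trip echo time multiplied by the speed of sound. The Python sketch below is a minimal illustration under that assumption; the function names and the alert threshold are hypothetical, not taken from any included study.

SPEED_OF_SOUND_M_S = 343.0  # dry air at roughly 20 degrees Celsius

def distance_to_obstacle_m(echo_duration_s: float) -> float:
    # One-way distance from the round-trip echo time of an ultrasonic pulse
    return echo_duration_s * SPEED_OF_SOUND_M_S / 2.0

def obstacle_alert(echo_duration_s: float, threshold_m: float = 1.5) -> bool:
    # Hypothetical alert rule: flag obstacles closer than the threshold
    return distance_to_obstacle_m(echo_duration_s) < threshold_m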
Although ultrasonic sensors were the most commonly used individual sensor, our review showed that the majority of the studies used computer vision-based technologies in their approaches, either on their own or in combination with other sensors. Computer vision-based technologies use the camera as the primary source of information about the environment. In this review, the most popular vision-based technology was RGB-Depth (RGB-D), which combines stereo cameras, light-coding and time-of-flight (ToF) sensors, computing both RGB color and depth images in real time to interpret the environment and detect obstacles [25], [51], [73]. The use of RGB-D technologies was observed in 13 studies (21.31%) that reported using RGB-D cameras, RGB-D sensors, or the RealSense (developed by Intel, Santa Clara, CA, USA), which comprises a range of depth and tracking technologies. Other cameras used in the reviewed studies included stereo cameras (n = 8), USB cameras (n = 4), infrared cameras (n = 3), high-sensitivity cameras (n = 2), micro cameras (n = 2), and smartphone cameras (n = 2). Studies that used computer vision-based technologies reported 99% precision in detecting main structural elements [25] and an accuracy of 98% in detecting obstacles and 100% in avoiding them [35]. A decrease in navigation time was also reported in studies using high-sensitivity cameras [41], RGB-D cameras [49], and RealSense [73]. In addition, a reduction in the number of collisions was reported [35], [49], [57], [73], [75].
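As a rough illustration of how a depth frame from such a camera can be turned into an obstacle warning, the sketch below thresholds the central region of a depth image. It assumes a NumPy depth array in millimeters (as RealSense-style pipelines typically provide) and is not the algorithm of any specific included study; the range and pixel-count parameters are illustrative.

import numpy as np

def obstacle_in_path(depth_mm: np.ndarray,
                     max_range_mm: float = 1500.0,
                     min_pixels: int = 500) -> bool:
    # Report an obstacle if enough pixels in the central region of the
    # depth image are closer than max_range_mm (values are illustrative)
    h, w = depth_mm.shape
    roi = depth_mm[h // 3:2 * h // 3, w // 3:2 * w // 3]
    valid = roi[roi > 0]  # a depth of zero usually means no reading
    return int(np.count_nonzero(valid < max_range_mm)) >= min_pixels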
In the 36 studies (59.02%) that used a combination of technologies, 29 (47.54%) reported integrating computer vision and sensor-based technologies for obstacle detection. Combining computer vision and sensor-based technologies can improve obstacle detection, increase accuracy, and provide efficient and safer mobility in both indoor and outdoor environments [35]. Examples include Bai et al. [27] and Mocanu et al. [54], who added an ultrasonic sensor to compensate for the limitations of the camera (e.g., transparent objects and larger obstructions such as walls or doors).
Besides the combination of cameras and sensors for obstacle detection, some studies employed a combination of different types of sensors. Prattico et al. [56] used ultrasonic and infrared sensors, whereas Chen et al. [33] and Hossain et al. [39] combined both sensors with a camera. Ultrasonic sensors were also combined with temperature sensors [67], water sensors [70], [71], and wet-floor detector sensors [13].
We also observed a difference in the type of technology chosen according to the income of the country where the study was conducted. Studies published in lower-middle-income countries (n = 10), represented by India and Sri Lanka, mostly used ultrasonic sensors for obstacle detection (n = 8), whereas studies conducted in upper-middle (n = 21) and high-income countries (n = 30) used computer vision-based technologies (n = 33).
2) LOCATION TECHNOLOGIES AND SMARTPHONES
Location technologies were used to assist local and global navigation. Local navigation refers to orientation instructions that help the user avoid obstacles (e.g., ''turn left''), whereas global navigation refers to navigation instructions that help the user reach the desired destination.
The user interface output was provided either through voice commands warning the user, as observed in [1], [28], [62], [68], or through a smartphone application, as featured in [49], [63], [64], [70]. In Lee and Medioni [49], the user could choose where to go from a list of registered places or translate the names of places by speech recognition. In Bai et al. [28], a smartphone served several functions, including entering the navigation mode, obtaining the user's current position, running object recognition algorithms, and playing audio feedback to the user.
Regarding external communication, Ramadhan [58] and Sundaresan et al. [67] offered remote user monitoring and the option of contacting family or caregivers in emergencies. Zhang et al. [79] used the smartphone as the main carrier, running an augmented reality framework that can track the user's position and build a map of the environment in real time. In addition, the smartphone's integrated sensors (e.g., ambient light, gravity, proximity, and gyroscope compass) were also used in [79].

3) SMART CLOTHING
Three studies developed prototypes to be worn as a garment. Bahadir et al. [26] developed smart clothing that detected obstacles. Li et al. [50] used an antenna consisting of a smart radar running along a shoelace, based on on-chip sensing modules, for obstacle detection. Wang et al. [72] developed an all-textile flexible airflow sensor that could be integrated into clothing to alert blind people walking outdoors about nearby fast-moving objects.

D. FEEDBACK INTERFACES
Audio feedback was the most commonly used interface, adopted in 27 studies (44.26%). Alerts were emitted in the form of voice commands (n = 16) or sounds, including beeps, music, or instrument sounds, as observed in [34], [35], [73]-[76]. Hybrid feedback (i.e., auditory and vibrotactile) was adopted in 17 studies (27.87%), and users could choose the type of feedback according to their preferences in [48] and [55]. Tactile feedback was reported in 11 studies (18.03%).
Studies conducted in lower-middle-income countries employed both audio (n = 5) and hybrid feedback (n = 5). Upper-middle-income countries mostly used audio feedback (n = 12) and high-income countries adopted mostly auditory feedback (n = 10), followed by tactile feedback (n = 8) and hybrid feedback (n = 7). We also observed that high-income countries developed prototypes focused on visual enhancement for low vision users, with visual feedback, as observed in [30], [38], [40], [41], [77]. Table 2 provides a summary of the types of technology and feedback used in the reviewed studies.

E. USER EVALUATION
The reviewed studies used different study designs for the development and evaluation of wearable devices for the mobility of visually impaired people. Thirty-seven observational studies (60.66%) collected data by empirical means to evaluate the effectiveness of the wearable device in experimental settings. Eighteen studies (29.51%) focused on system analysis, using a quantitative approach to evaluate conceptual frameworks, models, and algorithms developed for the wearable device. Of these, four studies used mixed methods, that is, system analysis combined with a single-case evaluation of the wearable device (i.e., case studies) [25], [35], [45], [66]. Three studies (4.92%) used a participatory design approach for the development of the wearable device [12], [71], [77], and only one study reported a clinical investigation with a 2-week evaluation period [46]. One study reported only a conceptual system overview with no evaluation described [70], and one study did not report the method used [58].
A total of 47 studies (77.05%) included user evaluation, whereas the remaining studies either omitted it or reported only on the technical feasibility of their solutions.
Training sessions prior to the experimental tests were provided in 28 of the 61 studies, whereas two only mentioned giving a brief explanation of how to use the device [34], [65]. Training time varied from a minimum of 2 to 3 minutes in [38] up to 30 hours divided into four sessions in [12]. The training content also varied, ranging from learning how the device works to learning about the experiment and completing practice trials.
Twenty-three studies included qualitative user evaluations in the form of interviews (n = 8) or questionnaires (n = 15) addressing the experience using the device (satisfaction, comfort, feedback, usefulness, confidence, feasibility).
Mocanu et al. [54] interviewed 21 visually impaired people aged 27 to 67 years. Their results indicated that people of different ages reacted differently to the proposed innovation. While older visually impaired people showed more mistrust of innovations, preferring to rely on their senses instead of the acoustic signals, younger visually impaired people expressed more willingness to use the system in their daily routine. In addition, the authors highlighted that an ETA should be designed to complement the widely used white cane with additional functionalities, instead of replacing it.
Kiuru et al. [46] presented a complete user safety evaluation in their 2-week clinical investigation with 25 visually impaired people, combining qualitative (interviews) and quantitative (QUEST 2.0) measures to verify the prototype's safety and daily usability. On average, the prototype's safety and security scores were 4.0 (SD 0.7) on a 5-point scale. In addition to the QUEST 2.0, 92% of the users reported that the device increased their perception of the environment, and 80% responded that it improved their confidence in independent mobility.
Simões and de Lucena [65] conducted a survey with five users who tested the system and rated its reliability as excellent (55%), very good (20%), good (10%), and satisfactory (15%). In Yang et al. [76], six visually impaired people scored the prototype's reliability 7.33 (on a 10-point scale) in a maturity analysis based on the Dakopoulos and Bourbakis study [8]. Zhang et al. [79] conducted a survey with four visually impaired participants who scored the prototype's safety as 3.75 (on a 5-point scale).
Although the number of studies that included a user safety evaluation was low, some studies adopted measures to guarantee the participants' safety during the experiments. Safety measures included training sessions by Orientation and Mobility instructors [12], measures to minimize the risk of falls during the experiment [64], a researcher walking close to the participants to avoid falls or collisions [33], [57], [69], and careful selection of the environment [42], [69]. Katzschmann et al. [44] did not test an unassisted baseline condition due to safety concerns. In addition, some studies expressed concerns about user safety in the development of the prototype. Bai et al. [28] set the obstacle alert as their highest priority to ensure the user's safety. Kassim et al. [55] set a safety zone limit (the distance between user and obstacle) based on human walking speed, alerting the user whenever this distance falls below the established limit (see the sketch after this paragraph). Silva and Wimalaratne [63] built a hybrid fuzzy model to evaluate the safety level of a walking condition. The model was tested with five blindfolded and five visually impaired participants, and the results showed its effectiveness in increasing safety. Although Silva and Wimalaratne [63] included user evaluation with safety considerations, they did not evaluate safety from the user's perspective.
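One plausible reading of such a walking-speed-based safety zone is sketched below; the walking speed and reaction-time values are illustrative assumptions, not parameters reported in [55].

AVG_WALKING_SPEED_M_S = 1.4  # assumed typical adult walking speed
REACTION_TIME_S = 2.0        # assumed time to perceive the alert and stop

def safety_zone_limit_m() -> float:
    # Distance covered while reacting; here 1.4 * 2.0 = 2.8 m
    return AVG_WALKING_SPEED_M_S * REACTION_TIME_S

def should_alert(obstacle_distance_m: float) -> bool:
    # Alert as soon as the obstacle enters the safety zone
    return obstacle_distance_m < safety_zone_limit_m()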

IV. DISCUSSION
There is a growing interest in developing wearable devices to assist visually impaired people's mobility. Although recent studies are reporting the development and testing of wearable devices for the mobility of visually impaired people, there is a need for more robust evidence supporting the effectiveness and safety of such devices on the user's mobility. This review provides information about technologies and feedback interfaces implemented on wearable devices to improve the mobility of visually impaired adults.

A. TECHNOLOGIES
A variety of technologies have been used to identify a safe path for the user. Our findings show a wide range of studies using computer vision-based technologies. This may be explained by the higher level of scene interpretation that these technologies provide compared to sensor-based technologies [11], [81]. This review shows that studies using computer vision-based technologies reported high accuracy in detecting obstacles [25], [35], as well as decreases in navigation time [41], [49], [73] and in the number of collisions [35], [49], [57], [73], [75]. Another possible explanation for the wide use of these technologies is the advances in this field, which, according to Plikynas et al. [81], enable the development of solutions that can increase the mobility and quality of life of visually impaired people.
In accordance with [81], RGB-D cameras and sensors were the most popular choice among video-based systems. This review shows the use of these technologies in both indoor and outdoor environments, contrary to previous findings suggesting they were applied only indoors [81]. Furthermore, our results show that stereo cameras were a popular choice, as Lin and Han reported [51]. These results may be explained by the fact that these types of cameras can sense image depth information, an essential feature in object detection and scene interpretation [51], [73]. While stereo cameras compute image depth from data captured by two or more lenses, RGB-D cameras compute depth information along with RGB values using infrared and color sensors [11], [51], [81].
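The relation behind stereo depth sensing is the standard triangulation formula Z = f * B / d, where f is the focal length in pixels, B the baseline between the lenses, and d the pixel disparity between the two views. The sketch below illustrates it with assumed, rectified-camera parameters.

def depth_from_disparity_m(disparity_px: float,
                           focal_length_px: float = 700.0,
                           baseline_m: float = 0.06) -> float:
    # Standard stereo triangulation: Z = f * B / d (rectified images assumed)
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# Example: a 21-pixel disparity with these assumed parameters gives
# 700 * 0.06 / 21 = 2.0 m.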
Consistent with the literature, we also observed that ultrasonic sensors were the most common technology in sensor-based navigation systems [35], [81]-[83]. This result may be explained by the low cost of these sensors [15] or by the fact that they do not require light to work, while cameras do [39]. However, ultrasonic sensors can be affected by environmental conditions and/or other sensors [11], [82]. In addition, even though sensor-based systems have high accuracy in detecting obstacles, they are unable to identify and recognize objects [11], whereas computer vision-based systems provide this additional functionality [81]. These reasons may explain why the majority of the included studies used a combination of technologies. This result agrees with [81] and [84], which observed that combining different technologies, either as reinforcement or as complement, may increase functionality and offer a more reliable localization system that is available all the time.
Another interesting finding was the use of smartphones in navigation systems. They were used to capture information from the environment, process it, or communicate it to the user. Several reasons may explain this, including the fact that smartphones are widely used by people of different functional capabilities, which may make devices more user-friendly [82], and the portability and convenience that smartphones offer users [83]. Since they are discreet, Fernandes et al. [84] argue that using smartphones may help mitigate the stigma associated with traditional assistive devices.
Although this review included a large number of studies focused on the development of wearable devices for the mobility of visually impaired people, our results indicate a lack of smart clothing development, suggesting a potential gap for further research.

B. FEEDBACK INTERFACES
User interface and feedback modalities are essential features to consider during system development because they can enhance the accessibility and usability of a system [85]. This review demonstrated that audio was a common choice for feedback to the user, which corroborates the results found in [82] and [83]. A possible explanation is the simple, timely, and prompt cues that this interface provides about the position of an obstacle in the environment [84]. It may also be explained by several disadvantages of vibration feedback, including insufficient information perception and the direct contact with the user's skin that this type of interface requires, which can be invasive. In accordance with this result, Mocanu et al. [54] adopted acoustic alerts instead of vibrotactile ones because vibration requires direct contact with the user's skin, and visually impaired participants reported that vibration did not provide sufficient information about the environment.
In contrast, some studies reported that the exclusive use of audio information to alert the user is not recommended because it may interfere with the auditory sense, which visually impaired people rely on when navigating an environment [10], [12]. This result is also consistent with the Dakopoulos and Bourbakis study [8]. On the other hand, exclusive vibrotactile feedback is also not recommended, since many visually impaired persons have diabetes, which may damage the peripheral and autonomic nervous systems and compromise vibrotactile sensitivity and response [5], [86]. This might explain the low adoption of vibrotactile feedback in the included studies.
Although there is no consensus regarding which interface channel is best, we observed that, in general, studies providing a hybrid interface reported more positive evaluations regarding a user-friendly and intuitive interface [31], [33], [48], [64], [79]. This result reflects that of Jafri and Khan [87], who found that 70% of the visually impaired participants (n = 10) preferred hybrid feedback over audio-only or vibration-only feedback. Therefore, in future studies, it may be preferable for the system to provide both interfaces and allow users to choose the type of feedback that meets their demands and/or preferences, as observed in [48] and [55]. This finding is supported by Kuriakose et al. [83], who argue that a single channel may not be the best approach since different users may prefer different feedback methods.

C. USER EVALUATION
The findings point out the importance of including visually impaired users in the development of assistive devices. In accordance with our findings, several studies have reported that, to develop successful and acceptable assistive technologies, development must follow a human-centered design approach [88]. Therefore, it is essential to understand how visually impaired people move in unknown environments and what their needs and requirements are [84], [85], [89]. In agreement, Kuriakose et al. [83] argue that most solutions that may work in theory are not adopted in practice because they do not meet the user's requirements. These findings are also reflected in Katzschmann et al. [44], who highlighted that consulting visually impaired users improved the design and functionality of their prototype. Bhatlawande et al. [12] also reported positive outcomes from a survey with visually impaired people, their caregivers, and rehabilitation professionals to understand the users' needs and preferences (e.g., appearance, carrying method, user interface and feedback, cost, and safety).
Our findings show that the majority (77.05%) of the studies reported user evaluation; however, the number of studies that evaluated the prototype's safety was relatively low [46], [54], [65], [76], [79], indicating the need for more robust evidence supporting the safety of these devices for the user's mobility. The remaining studies, although they showed concern for user safety, lacked comprehensive safety evaluations.
Evaluating a prototype's safety is as important as evaluating its effectiveness. If users do not feel safe and confident with the device, usage may suffer, leading to device abandonment or even health problems associated with low mobility. The feeling of safety is a subjective matter, and Tapu et al. [11] suggest that interviews are the most appropriate resource to gather such information and better understand user requirements. Nonetheless, among the 23 studies in our review that included qualitative user evaluations, only 8 used interviews. Among the studies that included safety evaluation, surveys were in general the most common methodological approach, which may lack in-depth information about the user experience. This is an important consideration for future research: implementing interviews or other qualitative approaches could provide more information about the features preferred by users to enhance their safety when using a device.
Finally, we also observed a lack of standardized evaluation methods, which was also reported by Plikynas et al. [81], who stated that this limits the representativeness of the experiments.

D. LIMITATIONS
The findings of this systematic review should be interpreted carefully. The search did not include grey literature or results from reports, dissertations, books, papers, or studies that have not been completed or have not gone through a scientific peer-review process. Some studies did not report the method used for user evaluation or did not provide sufficient information [58], [67], [68], [71], [72], [77]. However, this review followed a systematic procedure and searched peer-reviewed references in six different databases, including the alert function, to ensure the inclusion of relevant papers.

E. FUTURE RESEARCH DIRECTIONS
The findings of this review highlight directions for future development and research. A major concern observed in our review was the size of the device, more specifically the need for miniaturization [7], [32], [44], [46], [48], [52], [61]. A similar concern was reported by Kuriakose et al. [83], who noted a relationship between a device's size and its adoption. In accordance, Kiuru et al. [46] pointed out that, with the optimization of components, the size and weight of the device will decrease, influencing comfort and wearability, which can increase usage.
Another interesting finding of this review concerns the cost of developing a wearable device. This finding is also reflected in Kuriakose et al. [83], who highlighted that cost is one of the factors influencing the use of assistive devices. In addition, since most visually impaired people live in low- and middle-income countries, low cost is a concern that needs to be taken into consideration. Several studies have suggested ways of reducing device costs, including the use of additive manufacturing technologies [7], [73], [80], the implementation of open-source programming [7], [28], [33], [37], [77], [79], and the use of cloud servers, which eliminates the need for an expensive high-performance processor [33]. An example is observed in Petsiuk and Pearce [7], whose prototype with 3D-printed components and open-source programming resulted in cost savings of 73.5% to 97% compared to available commercial products. Another suggestion may be the use of computer vision-based technologies such as RGB-Depth (RGB-D); the increased use of this technology in devices for visually impaired people supports this recommendation and might be explained by its versatility, portability, and low cost [25], [73]. Lowering the cost of a wearable device could particularly benefit LMICs, since it could provide access to more people.
This review also highlights the importance of including users in the development of assistive devices. However, our findings revealed a lack of participatory design approaches among the studies. In this context, future research could benefit from closer interaction with users through participatory design.
A wide range of studies focused on the development and evaluation of algorithms for improving obstacle detection was also observed; however, no standardized evaluation method was identified. Thus, further research may examine the current challenges and the complexity of algorithms offering different functionalities.
Recommendations for future development and evaluation of wearable devices for visually impaired people can be viewed in Appendix B.

V. CONCLUSION
This systematic review of wearable devices for the mobility of visually impaired people considered the technologies, feedback interfaces, and user evaluation methods used in the included studies. This study contributes to the improvement of existing recommendations and guidelines for assistive technology developers. This review also provides recommendations on reducing the costs of wearable devices to provide access to more people, especially in lower- and upper-middle-income countries, where more people live with visual impairment.
The findings show that the majority of studies featured a combination of technologies, especially integrating sensors (e.g., ultrasonic sensors) and computer vision (e.g., RGB-D and stereo cameras), to increase accuracy in obstacle detection. While audio feedback was the most commonly used interface (44.26% of the reviewed studies), there is no consensus on the best feedback channel. In fact, some studies recommend using hybrid feedback instead of audio-only or vibration-only interfaces.
Although studies including user evaluation reported several benefits to the user's mobility, there is great diversity in study designs and a lack of user safety evaluations and standardized evaluation methods, limiting the conclusions about the effectiveness and safety of the investigated technologies. Future research should focus on stronger evidence supporting the effectiveness and safety of wearable devices in the user's daily mobility.
Finally, the results suggest that including visually impaired users in the design and evaluation processes improves the design and functionality of wearable devices. This study also highlights the need for more research and data from low-income countries to ensure fair access to technology in those countries.

APPENDIX A
Example of the search strategy using PubMed (date: 04 June 2020).

APPENDIX B
Recommendations for the design and development of wearable devices for orientation and mobility of adults with visual impairment and blindness, extracted from the included literature.