A Qualitative and Quantitative Analysis of Research in Mobility Technologies for Visually Impaired People

Assistive technology in rehabilitation programs is vital for people with vision impairments worldwide. The term “blind assistive technology” refers to mobility devices specifically designed to provide position, orientation, and mobility assistance for visually impaired individuals during indoor and outdoor activities. This paper presents a comprehensive evaluation of 140 research articles published over the past 75 years (1946 to 2022). The research analyzes the evolution of assistive technology aids in depth, in terms of the sensing technique followed and the algorithms employed for obstacle detection, localization, object recognition, depth estimation, and scene understanding. It also covers the functional attributes of each aid, the feedback type, and the assistive solutions embedded in the aid, and evaluates the aids for their usability index, portability, battery life, feedback type, and aesthetics. The survey findings reveal that optical and sonic sensor-based aids prioritize speed, weight, and battery life but lack major functionalities, achieving an average performance score of 62%. Stereo, monocular, SLAM, and 3-D point cloud-based aids excel in obstacle distance estimation and avoidance but require greater memory resources, with a lower performance score of 41%. Artificial intelligence and cloud-based aids offer comprehensive scene details but demand complex computational capabilities, achieving a performance score of 44%. However, the most suitable technologies for developing state-of-the-art solutions for blind individuals are the multisensor fusion-based and guide robot-based aids, which provide a majority of the essential assistive functions with a performance score of 51%. The study highlights possible challenges associated with implementing assistive technology aids, emphasizes the importance of user acceptability, and stresses the need for real-time evaluation of blind aids. The paper lays a concrete foundation and direction for future development, emphasizing the critical challenges faced by blind users, including boarding trains, traveling on public transport, shopping in a supermarket, avoiding dynamic obstacles, and understanding the surrounding scene in real time. Addressing these key concerns is crucial for the continued development and improvement of assistive technology aids for the visually impaired, leading to enhanced independence, mobility, and ultimately, a higher quality of life.

I. INTRODUCTION
Visually impaired individuals rely on their heightened senses to perceive and understand their surroundings during both indoor and outdoor navigation. It is essential to acknowledge that the right to access one's environment is a fundamental human entitlement. This notion encompasses not only the freedom to traverse one's surroundings using the preferred mode of transportation but also the ability to identify landmarks, streets, or neighborhoods. Therefore, accessibility pertains to the capacity to comprehend, recognize, and interpret the spatial arrangement of environmental elements, while also facilitating ease of movement with minimal difficulty.
Braille and canes have been essential since the commencement of rehabilitation programs for the blind and visually impaired. Since the year 1900, developments in mobility aids have included both electronic and mechanical innovations. These sensory aids had the sole objective of helping blind people read printed documents and navigate through the environment safely and efficiently. The visually impaired perceive the world around them largely through their hearing. Based on sound cues and the perception map created with the help of the white walking cane, they decide on the direction to follow, and with constant cues and the mental maps thus created, they navigate in unknown environments. Advance information about possible sudden changes in the environment would often be beneficial. Assistive devices have helped visually impaired people to know their location, avoid obstacles on the walking path, and enjoy the experience of traveling independently.
Visually impaired people face significant challenges when it comes to navigating unfamiliar outdoor areas on their own [5], [6], [12], as well as identifying specific buildings, vehicles, and people. Obstacles encountered while navigating outdoors can be either stationary (as seen during regular walking) or moving (vehicles, persons walking towards them, and abnormalities in the walking path), presenting a wide variety of challenges. Outdoor navigational obstacles for the visually impaired include, but are not limited to, the following: i) crossing streets without accident; ii) avoiding static impediments such as trash cans, hawker stands, and parked cars; iii) avoiding dynamic obstacles such as pedestrians and moving vehicles; iv) avoiding beams or other overhead hanging objects; v) avoiding stray animals; and vi) avoiding uneven walking surfaces, manholes, potholes, etc.
These objects and the circumstances observed during outdoor navigation are considered troublesome [3]. With the advent of technological advancement, several assistive solutions have been conceived and put into practice with the intention of benefiting individuals who are blind or have some other form of visual impairment. The majority of these devices and prototypes have provided navigation assistance to some extent. However, they also have demerits such as low acceptance, high cost, difficulty of use, poor ergonomics, and limited portability. Features enabled by assistive aids for the visually impaired have been improved in response to user demand. More recently, advancements in computer vision algorithms, artificial intelligence-based technologies, and high-performance processors have enabled these assistive devices to attain greater precision and efficiency. There is an increasing need for more effective vision assistive solutions built with more sophisticated algorithms, smart sensors, and advanced computing units. This has motivated us to perform an in-depth survey of the state-of-the-art techniques developed by researchers and industries over the past 75 years. The paper focuses on a qualitative and quantitative analysis of the mobility technologies that have evolved for assisting blind and visually impaired people.

II. MOTIVATION
According to the WHO, there are 2.2 billion people worldwide [1] who, due to visual impairment or blindness, face difficulty in day-to-day activities. 75% of the global population who are blind are aged 50 years and older [2], and the world's population is expected to plateau at 11 billion by 2100. The global trends and projections of the number of visually impaired people for the period 1990-2050 [3] are depicted in Figure 1; around 600 million people will be affected by visual impairment [3]. Age-related blindness and visual impairment are on the decline, but as the world's population ages, the number of people affected will increase rapidly [3]. These findings underline the necessity of stepping up efforts to reduce vision impairment at all levels and of providing assistive solutions to people with vision impairment. The probability of visual impairment increases in older adults, which makes independent mobility more challenging. For the majority of visually impaired individuals, independent wayfinding is a serious and pressing challenge [4], [6]. The most frequent difficulties they experience are obstructions on the walking path [12], [13] and the presence of obstacles on sidewalks, including shops, parked cars, stray animals, potholes, and possible aerial barriers [12], [13]. They worry about finding a path that is walkable and are plagued by a persistent fear of falling [5]. Due to a greater reliance on perceptual and cognitive processes as well as their difficulty in quickly identifying unanticipated obstacles, their anxiety levels rise. One of the main causes of impairment in older adults is vision loss, which is linked to a lower quality of life and more severe depression and anxiety symptoms [7], [8], [9].
Vision impairment is frequently linked to a variety of adverse health outcomes and a poor quality of life, according to numerous studies [10], [11]. People with permanent vision impairment can improve their functioning by engaging in vision therapy. Several investigations have documented activity in the brain's putative visual regions when auditory or tactile input is provided by a sensory substitution device [12], [13]. Behavioral, sensorimotor, and neurophysiological sense information can be replaced with the aid of sensory substitution devices [3].
This study examines the evolution of assistive technologies from 1946 to 2022, analyzing 75 years of assistive technology solutions for blind people. Some solutions were developed using the conventional sensor-based approach; more recently, many assistive solutions have been developed using models based on artificial intelligence. The work presented in the paper includes: i) a detailed assessment of optical time-of-flight-based aids (14%), sonic triangulation-based aids (12%), monocular camera-based aids (9%), stereo camera-based aids (4%), artificial intelligence-based aids (14%), SLAM-based aids (9%), 3D point cloud-based aids (4%), multisensor fusion-based aids (10%), cloud processing-based aids (14%), and guide robot-based aids (10%); ii) a comparison of the algorithms developed, functionalities provided, feedback types employed, usability evaluations, challenges, and directions for possible future improvements.

III. MOBILITY AIDS-TECHNOLOGY EVOLUTION
The progression of the technology that has been used to create aids for the blind during the past seventy-five years is illustrated in Figure 2. Optical time-of-flight technology was the foundation for many of the earliest assistive devices, including the white cane. During this same period, obstacle-detecting devices based on the triangulation of sonic waves were developed. Because of advancements in camera and imaging technology, stereo image processing for identifying obstacles and analyzing their depth has become a useful navigational aid in recent years. Since 2005, numerous publications have appeared on assistive aids based on monocular imaging, simultaneous localization and mapping, and multisensor fusion. Subsequently, the image processing for a variety of other assistive technologies was migrated to the cloud. The 3D point cloud proved to be an efficient tool for locating obstructions.
In recent years, a substantial amount of research has been carried out on assistive solutions that are based on artificial intelligence. A taxonomy of the assistive technologies that evolved over the last 75 years for assisting blind and visually impaired people is illustrated in Figure 3. Based on the information compiled regarding assistive solutions for blind users, each of these approaches can be placed into one of six primary categories. Photocells, ultrasonic sensors, infrared sensors, lasers, lidar, and radar are all examples of technologies that fall within the first two categories. Many other methods have been developed, some based on computer vision technology, others on multi-sensor fusion, and yet others on guide robots and cloud processing.

A. OPTICAL TRIANGULATION-BASED AIDS FOR BLIND ASSISTANCE
The range-based solutions use sensors working on the principle of time of flight. These sensors can detect the presence of obstacles using the propagation of light waves, sound waves, radar waves, or electromagnetic waves. The time these waves take to travel to the obstacle and back, together with the velocity of the transmitted signal, is used to detect the presence of the obstacle and estimate its distance from the blind subject. The received signal is processed by a microcontroller-based unit that actuates safety feedback to the blind user, commonly through audio signals, vibration motors, or both. Beginning in 1944 and continuing until 1965, optical obstacle avoidance technology based on the travel of light was developed. From 1944 to 1946, Haskins Laboratories investigated aids for the visually impaired. During World War II, Lawrence Cranberg created the first such gadget at the United States Signal Corps Laboratory [14]. It resembled a purse and was a portable, single-channel optical sensory device. The Meeks Optical Device and Dr. Lashley's Device were other optical triangulation devices [14], [15], [16] invented around the same time. Audio beeps and variable-frequency haptic vibrations were used to indicate the presence of obstacles. In studies [17], [18], OPTAR's modulated-light optical ambient-light photocell was utilized for range detection. Farmer invented the G5 and C5 laser canes in 1950 and 1954 to detect impediments using reflected light. By adjusting the angle of the source lens-laser assembly, the maximum range of the laser could be set. The Bionic C4 cane [19], [20] had enhanced scanning capabilities in the forward and downward directions with a 5.48 m range. The FOA Swedish laser cane [19], [20], [21] was invented by Benjamin et al. using a single upward-directed laser beam, a lightweight design, and a single audible output. Due to its single channel, it was unable to identify obstacles in other forward or downward directions simultaneously. The infrared-based Mims' Seeing Aid [21] was a pair of glasses with an IR transmitter, receiver, and plastic ear tubes. It utilized an infrared LED transmitter with pulse modulation to generate a narrowband beam. The IR pulses reflected from obstacles were instantly converted into an audio-like output whose musical pitch changed with distance. Donnel Meeks' high-energy H1D1 laser cane could not detect side curbs [22]. Collins created tactile image projection [23] by placing 400 vibrators on the back of the user; the image on a television screen was turned into a vibrotactile skin display for identifying the shape and orientation of obstacles, although the apparatus was cumbersome.
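The ranging principle shared by these early time-of-flight aids (and the sonic aids of the next subsection) reduces to a single relation: distance is half the round-trip travel time multiplied by the wave velocity. The sketch below is purely illustrative; the timing values are chosen only to show the arithmetic and are not taken from any surveyed device.

```python
def tof_distance(round_trip_time_s: float, wave_velocity_m_s: float) -> float:
    """Estimate obstacle distance from a time-of-flight measurement.

    The emitted pulse travels to the obstacle and back, so the one-way
    distance is half the round-trip path length.
    """
    return wave_velocity_m_s * round_trip_time_s / 2.0

# Illustrative values: light in air ~3e8 m/s, sound in air ~343 m/s.
laser_range = tof_distance(round_trip_time_s=36.5e-9, wave_velocity_m_s=3.0e8)  # ~5.48 m
sonic_range = tof_distance(round_trip_time_s=10.6e-3, wave_velocity_m_s=343.0)  # ~1.82 m
print(f"laser: {laser_range:.2f} m, ultrasonic: {sonic_range:.2f} m")
```

Triangulation-based canes instead infer range geometrically from the angle of the source lens-laser assembly, but the downstream processing and feedback chain is the same.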
Morrissette's night vision aid was wide-angle [24]. Other laser canes, the Teletact [25], the Laser ETA [26], and a laser distance triangulation device (2013) [28], were developed in succession to aid blind people. MiniTact [27], Tom Pouce II [27], EyeCane [29], and the Infrared Smart Stick [30] utilized the infrared transmitter-receiver principle for navigational aid, obstacle avoidance [27], [29], and stairway detection [30]. The laser detects small components with low variations, while the IR detector identifies passageways that are broad enough for a person to pass through and are devoid of obstructions [27]. The feedback types for obstacle detection range from audio beeps and variable-frequency tactile vibrations [14], [15], [16] to single audio pitches and ranging auditory output [19], [20], [22]. Additionally, some systems utilize musical tones delivered through plastic ear tubes [21] or vibrotactile displays on the skin [23] for sensory feedback. While these systems represent impressive technological advancement, many have limitations such as a narrow detection range or single-direction detection, and bulky designs and difficulty in tracking the laser point [28] may pose challenges for certain systems.

B. SONIC TRIANGULATION-BASED AIDS FOR BLIND ASSISTANCE
Supersonic radiation was examined for obstacle detection during World War II, building on wartime advances in sonar, radar, and optical rangefinders. Stromberg-Carlson created the first sound-based obstacle detection device. Magnetostriction transducers produced pulses or clicks at intervals proportionate to the distance from the object reflecting the supersonic radiation. The Stromberg-Carlson Echo-Pulse System [YT-4ST] [14] combined a supersonic transmitter and receiver with magnetostriction transducers worn on the hand; vibrating handles alerted the user.
The 1946 YA-5BD Brush Supersonic Guidance Instrument [14] was a lightweight and portable device that could work at extremely high supersonic frequencies. Kay L.'s 1964 portable Ultrasonic Probe [31] had wired earphones and a rechargeable battery. Kay F. invented Ultrasonic Spectacles [32] in 1964. The nearest objects were indicated by a low-pitched alarm, and the object's surface texture was audible via its sound resonance. The neck-worn Pathsounder [33], [34] emitted an ultrasonic beam at 30 degrees and warned of objects up to 6 feet away using tactile and auditory feedback.
The Ultrasonic Sensing Aid [35], [36] was a binaural sensory aid. It used glasses with wide-angle transmitters and receivers on each side, providing directional cues by presenting separate signals to the left and right ears. The Nottingham Obstacle Detector [37] by Armstrong is a handheld pulsed sonar device with tactile output. It combines infrared and ultrasonic radiation to provide audible range feedback using eight musical notes with direction discrimination, requiring minimal instruction [38]. Orlowski's Pulsed Ultrasonic Binaural Aid [39] employed click-based sonification using a handheld ultrasonic emitter and stereo receivers on glasses with headphones. The varying pulse frequency indicated the distance to the nearest obstacle, while the stereo receivers provided directional alerts.
The Mowat Sensor [40] is a portable, wrist-strapped ultrasonic system that offers tactile feedback. It generates vibrations of different frequencies based on the target range and is roughly the size of a flashlight. The Sonic Pathfinder [41] is a head-mounted device featuring three sound receivers positioned in the center, left, and right. The addition of a third receiver addresses the issue of prior systems identifying central obstructions multiple times. Obstacles are represented using musical scales. The Navbelt [42] utilizes ultrasonic sensors to map the surrounding environment. It employs two acoustic sound patterns to assist with obstacle detection and, in imaging mode, generates a single acoustic pattern to aid in identifying obstacle-free directions. Commercially available ultrasonic-based devices include the GuideCane [43], UltraCane [44], [45], MiniGuide [46], and Bat 'K' Sonar Cane [47]. The K Sonar cane provides image information through binaural headphones. A wearable ultrasonic spectacles-and-waist-belt system for the visually impaired [48] accurately detected obstacles from waist height to head level in the front, left, and right directions.
The reviewed sonic devices, although innovative and groundbreaking in their time, present a range of limitations such as high cost, difficulty in accurate directional reception [14], ambient sound interference [14], bulky and heavy design [42], inability to detect certain types of obstacles, and lack of scene information [44], [45], [48]. Addressing these limitations is critical for creating assistive technologies that are practical, reliable, and effective in enhancing the mobility and independence of visually impaired individuals.

C. MONOCULAR IMAGING-BASED AIDS FOR BLIND ASSISTANCE
Monocular imaging-based aids capture images with a single camera and analyze them to extract valuable features that can be used to train a machine learning or deep learning model that assists in the recognition of a specific item. Object recognition is built on feature extractors and descriptors such as SIFT, HoG, BRISK, ORB, and SURF. The extracted features are classified using a machine learning approach, and finally the type of object or obstacle in front of the user is conveyed through an audio message. The monocular imaging-based blind assistive aids are compared in Table 1. Monocular imaging-based assistive aids are compact and lightweight, making them suitable for object identification. However, they have limitations in accurately calculating obstacle depth and differentiating between foreground and background objects. Without a stereo baseline, these cameras suffer scale drift in large-scale scenes. Additionally, handling shadows during object detection poses a significant challenge. These aids are useful for recognizing known objects and nearby locations. Deep learning algorithms can enhance depth analysis in monocular imaging, but this adds complexity to the system.
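As a concrete sketch of the feature-based recognition pipeline just described, the snippet below uses ORB (one of the descriptors named above) with OpenCV to match a stored object template against a camera frame. The file names, match threshold, and the simple decision rule are illustrative assumptions, not taken from any surveyed aid.

```python
import cv2

# Hypothetical inputs: a stored template of a known object and a frame
# captured by the aid's monocular camera.
query = cv2.imread("known_object.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# Hamming distance suits ORB's binary descriptors; Lowe's ratio test
# prunes ambiguous matches before any recognition decision.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_q, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Illustrative decision rule: enough consistent matches -> speak the label.
if len(good) > 25:
    print("Audio message: known object detected ahead")
```

In the surveyed aids, the matching stage is typically followed by a trained classifier rather than a fixed threshold, and the resulting label is routed to a text-to-speech engine.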

D. ARTIFICIAL INTELLIGENCE-BASED AIDS FOR BLIND ASSISTANCE
Artificial intelligence-based assistive aids recognize objects, traffic signs, pedestrians, and scenes, with deep CNN models identifying multi-class objects. Poggi et al. devised an assistive solution employing an RGBD camera and a basic four-layer multi-layer perceptron to recognize objects [62]. Tapu et al.'s DEEP-SEE framework [63] recognized static and moving objects during outdoor navigation using computer vision and an enhanced YOLO (You Only Look Once); it is based on two CNNs trained offline on visual and motion patterns. Saleh et al. performed RANSAC-based semantic segmentation using VGG16 CNN and FCN computational networks [64]. Blind people were able to determine where the path was walkable by using the information provided by the segmented path; obstacle detection was not possible with this methodology due to the inherent characteristics of the technique. Although objects could be recognized through the deformation of a structured light source [65], that technology was not well suited to settings with bright ambient light. The research carried out by Jiang et al. [66] utilized stereo-analyzed images obtained from binocular sensors and processed them using a cloud-based system. The CNN-based object identification cloud services were helpful in identifying 10 different categories of impediments, and the user received warning messages about them on their smartphone. The one necessity, an always-on connection to the cloud, drives up the cost.
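To make the detection stage of such AI-based aids concrete, the sketch below runs a pretrained detector and converts high-confidence detections into spoken-style warnings. It deliberately uses torchvision's off-the-shelf Faster R-CNN as a stand-in for the custom or enhanced-YOLO networks in the surveyed systems [62], [63]; the image path, confidence threshold, and label subset are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf detector used only for illustration; surveyed aids train
# their own networks on task-specific data.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("street_frame.jpg").convert("RGB"))  # hypothetical frame
with torch.no_grad():
    detections = model([frame])[0]

COCO_LABELS = {1: "person", 3: "car", 10: "traffic light"}  # small subset only
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.7 and int(label) in COCO_LABELS:
        # A deployed aid would route this string to text-to-speech.
        print(f"Audio message: {COCO_LABELS[int(label)]} ahead ({score:.0%})")
```

A deployed aid would feed camera frames continuously and speak the strings rather than printing them.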
Lin et al. [67] developed a CNN-based visual localizer framework for the visually impaired; after several tests, GoogleNet proved the best and easiest blind-person locator. Jayakanth et al. [68] addressed localization and navigation using two CNN networks: CAMNav for position recognition and QRNav for QR-code-based walking-path guidance. Jarraya et al. proposed a smartphone-based navigation obstacle classification aid [69]. The method uses features extracted from RGB, HSV, HoG, and LBP representations and performs semantic segmentation using a multilayer perceptron algorithm. It has been tested in a variety of challenging environments and achieved an accuracy of 90.2%. Alhichri et al. [70] developed a technique to identify indoor obstacles using a wide-angle camera, a laser range finder, and inertial measurement sensors. At the output stage, it deployed ordered weighted averaging over a combination of the VGG16 and SqueezeNet models. It had good overall performance, reliably recognizing obstacles at a rate of 80.69%.
Blind people often have difficulty correctly detecting the location of a crosswalk. In order to recognize crosswalks and provide a warning for pedestrian pathways that deviated off the planned path, the SSD-MobileNet-V1, SSD-Inception-V2, and Faster RCNN-Inception-V2 algorithms were used [72], [80]. These techniques made it possible to identify crosswalks [72], [80] as well as the location and walking deviation of the blind person [77]. Akilandeswari [78] constructed an indoor navigation aid implemented via CNN-based denoising autoencoders. It was suggested in [79] that employing 3D point cloud data could help blind persons avoid falling down stairs while walking. Safiya et al. [80] used CNN and LSTM networks in their implementation of a scene description algorithm; its Image CaptionBot provides the visually impaired user with information regarding the specifics of the scene in front of them. Table 2 shows a comparative analysis of artificial intelligence-based assistive solutions from 2016 to 2022.

E. STEREO IMAGING-BASED AIDS FOR BLIND ASSISTANCE
A stereo camera captures two images simultaneously using closely spaced image sensors. By comparing these images and exploiting the known distance between the sensors, it obtains depth information from the observed disparities of corresponding points. This enables accurate 3D perception and depth analysis. Table 3 compares stereo imaging-based blind aids. Stefano [81] designed a touch-based visual decoder employing the sum of absolute differences (SAD) between left and right images. The compact, portable, and hands-free Tyflos enhances reading and navigation [82]; it turns visual camera data into 2D vest vibrations or audio. Nikolaos proposed the Tyflos Petri-Nets concept in 2008, whose face recognition helped blind individuals socialize. Kai [83] designed a low-power wearable gadget using an FPGA (Field Programmable Gate Array). For obstacle avoidance, the ELAS (Efficient Large-Scale Stereo Matching) algorithm is employed to determine disparity at 3 fps; local disparity calculation favors the integrated device. An obstruction-detecting 3D vibration array was created using computer vision [84], in which vibration strengths model a small frontal space. Intriguing research [85] addressed highly reliable ego-motion estimation in a highly dynamic environment; the device was a head-mounted stereo system plus IMU that produced superior results in simulations and on actual data sets. Another solution makes use of a ZED stereo camera and a computer, which together calculate the distance separating the system from the various obstructions in its environment; the system achieved an accuracy of 83.16 percent and can reliably lead blind people along the walking path. A new deep learning-based Transformer for Transparency (Trans4Trans) model that can distinguish between opaque and transparent objects has been presented by Jiaming et al. [86]. It was implemented using an Intel RealSense RGBD sensor and performed obstacle segmentation, avoidance, and walkable path detection on the Stanford 2D3D data.
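The depth computation these stereo aids rely on follows Z = f · B / d, where f is the focal length in pixels, B the baseline between the two image sensors, and d the disparity of a matched point. The sketch below computes a dense disparity map with OpenCV's semi-global matcher and raises a proximity warning; the calibration values and thresholds are illustrative assumptions, not parameters of any surveyed device.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

focal_px, baseline_m = 700.0, 0.12  # assumed calibration values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d

# Illustrative rule: warn about anything closer than 2 m in the central third.
center = depth_m[:, depth_m.shape[1] // 3 : 2 * depth_m.shape[1] // 3]
if np.any((center > 0) & (center < 2.0)):
    print("Vibration feedback: obstacle within 2 m")
```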

F. SLAM-BASED AIDS FOR BLIND ASSISTANCE
Table 4 presents a comparison of blind assistive solutions based on SLAM technology. Saez [87] developed a SLAM-based assistive gadget for the blind, providing positional information through the integration of 2D and 3D features, entropy-based mapping error correction, and real-time optimization; however, detailed environmental information was limited. Vivek et al. [88] addressed this limitation with head-mounted stereo-vision odometry, enabling 3D point cloud traversability mapping and obstacle warnings through vibrotactile cues. Salem et al. [89] utilized a Kinect RGBD camera and visual mapping to navigate blind users to specific rooms. Hongsheng [90] estimated attitude and ego-motion using visual and inertial measurements, employing feature matching and a multi-rate extended Kalman filter. Integration with GPS for localization can further enhance system performance in case of user disorientation.
Nguyen [91] employed offline learning for generating travel routes and learned locations from labeled image data. Online matching with a labeled place database enabled localization on a map. Young et al. [92] proposed a feature-based method using FAST corner characteristics for indoor scene mapping, utilizing a global 2D traversability map and 3D point cloud generation for improved speed. Groce [93] introduced a system integrating navigational sensors and a camera into mobile phones for directional estimation. Vision-based SLAM, such as ORB-SLAM [96], utilized feature-based mapping. Deep SAFT [94] enhanced feature tracking reliability for dynamic moving objects. Combining bag of words and lines (BoPLW) [95] with ORB-SLAM improved loop closure accuracy. Text-based SLAM [96] achieved robust pose estimation in noisy environments, outperforming V-SLAM in hazy video sequences. Zaipeng et al. [97] used YOLO for object recognition and ORB-SLAM for obstacle identification. Addressing the challenges of highly dynamic environments, deep learning and geometry-based algorithms [98] were suggested. The OpenPose architecture provided precise position estimates for dynamically moving individuals.
2D SLAM has limitations in accurately perceiving and mapping the environment due to the lack of depth information. It struggles to represent spatial layout, distinguish objects at different heights, handle dynamic objects, and estimate scale accurately. It is also sensitive to changes in lighting and prone to error accumulation over time, resulting in inaccuracies in map and pose estimation.

G. 3D POINT CLOUD MAPPING-BASED AIDS FOR BLIND ASSISTANCE
A point cloud is a set of data points. It is a 3D representation of data points obtained from georeferencing (GPS), lidar, and cameras. Each point in the cloud encodes a 3D position, from which the distance, height, and width of an object can be derived. These 3D point cloud data are analyzed for the development of industrial, automotive, and assistive technologies. This section of the paper describes the 3D point cloud processing-based assistive technology aids for the blind.
Zhang et al. demonstrated a wheeled device [99] with a 3D camera, a projector, and two hyperbolic mirrors. After fusing the images and mapping the single-shot image pattern using epipolar geometry, the object depth was determined; this device was not suitable for operation under bright conditions. The Co-Robotic Cane [100] addresses the wayfinding limitations of the conventional white cane. SIFT features are utilized for visual range odometry together with iterative closest point optimization. Its time-of-flight camera captures the amplitude and range image as a 3D point cloud, which is segmented using RANSAC, followed by recognition using Gaussian mixture model-based clustering. Julian et al. [101] developed a jogging aid for the visually impaired, mapping color and depth information using the v-disparity approach with more robust roll-angle compensation. In contrast to images, which have high resolution and rich texture, the 3D point cloud is sparse, lacks color information, and is subject to sensor noise. The authors of [102] used subsampled ORB to map features from a 2D image onto a 3D point cloud, fusing the data in the process to detect objects in the 3D point cloud.
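Two steps recur in these systems: back-projecting a depth image into a 3D point cloud through the pinhole model, and RANSAC plane fitting (as in the Co-Robotic Cane [100]) to separate the walkable ground from obstacle points. The sketch below shows both using Open3D; the camera intrinsics and the depth file are assumptions for illustration only.

```python
import numpy as np
import open3d as o3d

# Assumed pinhole intrinsics and a depth frame in meters (hypothetical file).
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.load("depth_frame.npy")  # HxW array of depths

# Back-project every pixel (u, v, z) into camera coordinates (x, y, z).
v, u = np.indices(depth.shape)
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
xyz = np.dstack((x, y, z)).reshape(-1, 3)
xyz = xyz[xyz[:, 2] > 0]  # drop invalid zero-depth pixels

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(xyz)

# RANSAC fits the dominant plane, which for a body-worn sensor is
# typically the floor; everything off-plane is a potential obstacle.
plane, inliers = pcd.segment_plane(distance_threshold=0.03,
                                   ransac_n=3, num_iterations=1000)
ground = pcd.select_by_index(inliers)
obstacles = pcd.select_by_index(inliers, invert=True)
print(f"ground points: {len(ground.points)}, obstacle points: {len(obstacles.points)}")
```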
Global positioning systems are not suitable for blind people due to errors of up to 10 m. Cheng et al. proposed a visual localization system using images in RGB, depth, and infrared modalities [103]. It used a 2D local descriptor and a 3D point cloud for visual localization with the NetVLAD descriptor. Matsumura et al. [79] demonstrated a system for staircase detection using deep learning. The depth camera-generated 3D point cloud is processed using PointNet, which accepts raw point clouds as input rather than voxelized image layers; pre-processing was used to reduce the density of the point cloud. The comparative analysis of 3D point cloud mapping-based aids is presented in Table 5.

H. MULTI-SENSOR FUSION-BASED AIDS FOR BLIND ASSISTANCE
A wide variety of assistive aids for the visually impaired have been developed through the integration of vision sensors with other types of sensors, improving users' quality of life. Willis [104] presented an RFID-based walking shoe for indoor and outdoor navigation. It requires pre-installed RFID tags on the floor and offers a detection range of up to 20 meters at a low cost. Wilson developed the SWAN system [105], which integrates various sensors and employs audio signals for navigation guidance. Kim [106] designed a wearable jacket with sensors interfacing with a microcontroller and a PDA for obstacle alerts and GPS-based navigation using Bluetooth. Przemyslaw et al. [107] devised an electronic travel aid incorporating a camera, GPS receiver, GSM, and WiFi transmitter that analyzes smartphone camera footage to provide real-time navigation assistance over Wi-Fi with the help of a remote operator. Minseok [108] improved the sensor sampling rate using independent controllers and ZigBee communication. The SpikeNet recognition engine [109] was merged with sensor data to aid location and orientation. Adrien et al. proposed a modified GIS system combining visual landmarks and GPS coordinates for blind user positioning.
Bousbia et al. [110] developed a navigational device that allows blind people to navigate in a secure manner. The device consists of two vibrators, two ultrasonic sensors worn on the user's shoulders, and a third ultrasonic sensor built into the cane itself. Accelerometer double-integration drift errors would be much worse without the footswitch, and it is hoped that the proposed system will effectively assist blind people in navigating. Embedded vision, GPS, and dead reckoning were merged in the NAVIG project [111] for more accurate location estimation. Simoes et al. [114] integrated an RGB camera, ultrasonic sensor, magnetometer, gyroscope, and accelerometer. ALVU (Array of Lidars and Vibrotactile Units) [114] was designed as a wearable belt with seven infrared sensors and a haptic strap around the waist. A map of the environment is used to generate obstacle warnings and navigation guidance and to detect steps and curbs during daytime and night.
The spatial perception aid that Carolyn [115] designed was based on lidar technology and included variable-pitch stereo sound feedback. Laura [116] integrated an inertial measurement unit with ultra-wideband (UWB) anchor location information. The use of UWB for re-calibration of IMU data allows for more precise location estimation; in addition to boosting overall system performance, it can correct the heading direction. Some blind aids employ electromagnetic waves, transmitting a signal and analyzing its reflection to determine the target's location. Most such systems rely on radar technology, which employs frequency-modulated continuous-wave transmission. Compared to ultrasonic and optical systems, electromagnetic instruments are more compact, lighter, and more accurate, and they detect both holes and hanging impediments. Cardillo invented a 122 GHz millimeter-wave radar cane [117]. The LidSonic system [119] consists of an Arduino edge computing device and a Bluetooth-connected smartphone app. The Arduino collects data, operates sensors on smart glasses, detects obstacles using a combination of lidar point cloud data and ultrasonic data, and provides users with buzzer feedback. GPS and GSM systems [105], [106], [109], [110], [111], [118] track and locate lost blind users by using GPS for precise location determination and GSM for transmitting the information; they enable caregivers or operators to guide blind users back to a safe route when they become lost. Table 6 compares the multi-sensor fusion-based aids.
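The IMU-plus-UWB idea in [116] can be illustrated in a few lines: fast but drifting inertial dead reckoning is corrected by slow, drift-free UWB position fixes. A one-dimensional complementary filter is used below for brevity, whereas [90] and [116] rely on (extended) Kalman filtering; all numeric values are illustrative.

```python
from typing import Optional

ALPHA = 0.98  # trust placed in the IMU prediction between UWB fixes

def fuse(position_m: float, velocity_m_s: float, dt_s: float,
         uwb_fix_m: Optional[float]) -> float:
    """One 1-D update: predict with the IMU, correct with UWB when available."""
    predicted = position_m + velocity_m_s * dt_s  # dead reckoning (drifts)
    if uwb_fix_m is None:
        return predicted
    # Blend the drifting prediction toward the absolute UWB measurement.
    return ALPHA * predicted + (1.0 - ALPHA) * uwb_fix_m

pos = 0.0
# A walker moving at ~1 m/s; UWB anchors report an absolute fix every 4 s.
for step, uwb in enumerate([None, None, None, 3.9, None, None, None, 7.8]):
    pos = fuse(pos, velocity_m_s=1.0, dt_s=1.0, uwb_fix_m=uwb)
    print(f"t={step + 1} s  estimate={pos:.2f} m")
```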

I. CLOUD PROCESSING-BASED AIDS FOR BLIND ASSISTANCE
The advent of Industry 4.0 has led to a rise in the use of wireless networking and intelligent sensing. The cloud computing environment provides a variety of image-processing algorithms for use in various applications. Image processing algorithms necessitate enormous amounts of storage and high-configuration computing resources. Object recognition services have been utilized to produce blind assistive aids. This section describes the cloud-based assistive technologies implemented between 2010 and 2022. These solutions send high-quality images to a cloud server for feature extraction. Cloud-based systems for obstacle detection and recognition can employ either of two feature extraction methods: cloud-based feature extraction or feature extraction conducted locally on an edge computing device [120]. Table 7 presents a comparative analysis of cloud processing-based blind assistive devices.
Sam S. et al. developed a mobile visual solution using cloud-based image recognition that employed inverted index compression to reduce the file sizes of uploaded images. This resulted in image identification in about 5 seconds, compared to 14 seconds for uncompressed systems [121]; however, real-time assistance for blind individuals was not feasible with this solution. Ad-hoc P2P networks connected mobile devices, and Google's Android C2DM (Cloud to Device Messaging) service authenticated and registered devices [122]. Rosen et al. presented a Java-based mobile service that enabled blind users to interact with their environment using voice-augmented objects, utilizing a mobile phone with near-field communication and an accelerometer [123]. Cloud computing and internet resources were utilized in a navigation system that incorporated a traffic light detector and object identification [124]. An inexpensive and energy-efficient IoT-based navigation device utilizing ultrasonic sensors on a belt was designed for obstacle detection, with cloud processing of beacon identifiers for location information [125]. Bai et al. developed a wearable device for obstacle avoidance and route assistance, utilizing cloud-based vision and voice processing, as well as visual SLAM with stereo cameras [126]. Dutta et al. created a navigational aid using a smartphone and cloud geospatial data for localization, along with obstacle avoidance through audio and haptic alerts [127].
Gracia et al. [128] designed Uasisi, a system that helps blind users understand their surroundings by performing image recognition on a cloud server. A wearable ankle belt alerts blind people to obstacles through sonification. These solutions help them navigate in unknown environments. Zhu et al. [129] presented a prototype solution for processing voice commands using Amazon's Echo Dot and a speech-processing cloud API. Object recognition is performed using a Neural Compute Stick, a fog computing device that bypasses cloud processing to accelerate image processing. Goncalves et al. [130] demonstrated an assistive tool for visually impaired individuals that makes use of the Google Cloud Vision API. The system is capable of text recognition as well as logo and landmark recognition with an accuracy of approximately 98% in 2-3 seconds.
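The cloud-recognition pattern used by aids such as Uasisi [128] and the tool of Goncalves et al. [130] amounts to uploading a camera frame and speaking the returned annotations. The sketch below uses the Google Cloud Vision Python client (the API named above for [130]); it assumes valid application credentials are configured, and the file name is hypothetical.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

with open("camera_frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the cloud service for scene labels; other endpoints cover text,
# logo, and landmark detection as described above.
response = client.label_detection(image=image)
for label in response.label_annotations[:3]:
    # In a deployed aid these strings would be routed to text-to-speech.
    print(f"Audio message: {label.description} ({label.score:.0%})")
```

The round trip to the server is what makes this class of aids dependent on a stable internet connection.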
Be My Eyes [133], an additional free app, connects blind or low-vision individuals with sighted volunteers via live video calls; the volunteer then provides the blind individual with the appropriate support. TapTapSee [132] is a smartphone camera app that leverages the CloudSight image recognition API, which is designed to identify items for the visually impaired within a few seconds. Amazon's AWS Rekognition [134] is an object recognition service powered by deep learning that can identify objects in an image, and Microsoft Azure likewise includes text recognition and scene understanding capabilities for analyzing images and videos. Microsoft's Seeing AI software [135] and Google's Lookout [137] are able to identify objects, text, and people using speech descriptions. Google Lens [136] and the Envision AI [138] software assist blind individuals with shopping and reading as well.

J. GUIDE ROBOT-BASED AIDS FOR BLIND ASSISTANCE
The majority of blind people rely on inexpensive and effective white canes. Along with the white cane, guide dogs have been utilized for decades; they aid their owners in navigating crowded streets and avoiding other pedestrians. However, guide-dog companionship is often unaffordable due to the exorbitant cost of breeding and training. A guide robot perceives the environment so that it may navigate, localize, and avoid obstacles, and such robots are effective in transforming vision into support for visually impaired people. These robot assistants provide support with directional guidance, spatial alignment, and obstacle avoidance, and they have helped lower the cognitive effort required for navigation in unfamiliar areas. Numerous efforts have been made over the past two decades to construct navigation robots to assist BVI residents. A variety of robotic navigational aids are available, including robotic canes, suitcases, and edge computing platform-based solutions.
The Harunobu-6 autonomous mobile robot outdoor guidance system by Kotani et al. [139] uses camera and differential GPS data to find landmarks, while shaft encoders and optical fiber gyros support dead reckoning. Kulyukin et al. [140] proposed a wheeled mobile platform with RFID sensors and a laser rangefinder; deployed in RFID-instrumented rooms, the platform also carried cameras, sonar, differential GPS, and a portable GIS. Gharpure et al. [141] created a robotic store assistant for blind shoppers. EyeDog [142] uses a USB webcam and lidar. A mobile robotic guide uses a camera and two small laser rangefinders [143] and clusters stairs, ramps, and obstacles. CRC 3-D cameras [144] detected indoor objects. Chuang et al. [145] trained a deep convolutional neural network behavioral-reflex model that converts the input image into Turn Left, Go Straight, or Turn Right commands, guiding BVI users along pedestrian tactile guide paths. Li's Ballbot [146] guided users indoors with the help of a planning algorithm and a human-robot interface.
Zhu proposed edge computing based on an Intel Up Squared board and neural computation [147]; ASU VIPLE uses vision, voice, and control for obstacle avoidance, traffic sign recognition, and voice interaction. Kayukawa et al.'s suitcase navigation system BBeep predicts pedestrian movements [148], and blind pedestrians hear collision warnings generated from two RGB-depth cameras [149]. LiDAR SLAM predicted ego-motion. The technology alerts blind users at crossroads to slow down or stop without recalculating their path (the on-path mode), and the program suggests detours if pedestrians block the path (the off-path mode). Deep reinforcement learning helps robots avoid obstacles [150]. UWB localization systems overcome dynamic environmental impediments and improve on BLE accuracy. Xiao's leash-guided framework [151] runs on a Mini Cheetah quadruped robot. Table 8 details the comparative analysis of these solutions.

IV. RESULTS, DISCUSSIONS, AND DIRECTIONS FOR FUTURE DEVELOPMENT
In this study, the authors evaluated assistive aids based on 14 parameters, including impediment detection, localization, scene details, detection range, speed, security and privacy, field of coverage, functionality, adaptability, portability, real-time evaluation, battery life, cost, and user acceptance. The researchers conducted a thorough analysis of these parameters and present the results in this section, providing detailed insights into their findings.

A. DISCUSSION AND COMPARISON ON THE BASIS OF TYPE OF EVALUATIONS CONDUCTED
The assistive aids discussed have been analyzed for the type of evaluation carried out by the researchers. The effectiveness of assistive aids for the blind has been assessed through one of three types of evaluations used to validate the usability of the proposed systems: experimental simulations conducted in a controlled environment without participants, controlled trials involving sighted blindfolded participants, and real-time evaluations involving blind participants. The aim of these evaluations is to assess how well the aids help with obstacle avoidance, object recognition, localization, or scene recognition. The VenuCane aid [112] conducted real-time evaluations with 16 blind and low-vision individuals and is an effective vision rehabilitation aid. References [79], [114], and [139] also evaluated their aids with the help of blind people. The statistics regarding the evaluation methods followed are presented graphically in Figure 4, which shows that blind participants were involved in only 29% of these evaluations over the last 75 years. This raises concerns about the validity and reliability of evaluations conducted in controlled environments without involving actual blind participants. It is imperative that blind individuals take part in the evaluation process in order to guarantee the usefulness and appropriateness of assistive aids for the visually impaired; this can provide more accurate and reliable feedback on the usefulness and practicality of the aids in real-life situations. Additionally, including blind participants can help in identifying potential challenges or issues that may arise when using the aids, which can inform further improvements and developments. Figure 4 demonstrates that the early aids based on optical triangulation and sonic triangulation were evaluated involving blind or sighted blindfolded volunteers. It is concerning to note that the aids developed using monocular cameras [49], [50], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60], [61], artificial intelligence [62], [63], [64], [65], [66], [67], [68], [69], [70], [71], [72], [73], [74], [75], [76], [77], [78], [79], [80], stereo cameras [81], [82], [83], [84], [85], [86], and cloud processing-based technologies [118], [119], [120], [121], [122], [123], [124], [125], [126], [127], [128], [129], [130], [131], [132], [136] were evaluated only in controlled environments 54.34% of the time. The study highlights that the lack of real-world evaluation involving blind participants makes it challenging to determine the suitability and effectiveness of these aids for blind people. While controlled environments can provide valuable insights into the functioning of these aids, real-world situations present unique challenges that must be addressed for the aids to be practical and usable.

B. DISCUSSION AND COMPARISON ON THE BASIS OF ASSISTIVE SOLUTIONS PROVIDED
Cloud technology provides functionalities for shopping item recognition [120], [121], [122], obstacle detection [125], [126], [128], localization [124], [125], [126], [127], traffic sign and crosswalk recognition [123], and visual positioning using the Google Vision API [130]. Cloud-based aids require a stable and reliable internet connection to function properly; in areas with poor or unreliable connectivity, they may not work effectively or at all. Figure 5 summarizes the technologies versus the solutions they provide. SLAM and 3D point cloud mapping enable blind assistive solutions to provide advanced features such as dynamic obstacle detection and trajectory estimation [90], [92], [98], which allow visually impaired users to navigate safely in their environment. Additionally, these technologies enable 2D environmental change mapping [93], which helps users understand their surroundings better. Blind users' pose estimation [96], [100], object recognition [79], [88], [89], [90], [91], [92], [93], [94], [95], [96], [97], [98], [99], [100], [101], [102], [103], and indoor place recognition [91], [103] are also facilitated by SLAM and 3D point cloud mapping, enabling visually impaired users to identify objects in their surroundings and navigate indoor spaces confidently. The limitation of these methods is their high dependence on accurate and precise mapping of the environment, which can be difficult in complex or dynamic settings such as crowded urban areas or areas with rapidly changing conditions. These methods may also struggle to detect and recognize certain types of objects, particularly those that are small or have low contrast, which can limit their effectiveness in certain situations.
The selection of an assistive aid depends on the user's individual needs, preferences, and ability to interpret the information provided by the aid. Some users may prefer the autonomous navigation provided by guide robots, while others may benefit more from object and scene recognition functionalities offered by AI-based aids. Ultimately, it is essential to choose the right aid that best suits the user's needs and enhances their independence and quality of life.

C. COMPARISON ON THE BASIS OF DETECTION RANGE, FIELD OF COVERAGE, SECURITY, AND PRIVACY
The study evaluated the detection range and field of coverage of the assistive technology aids under investigation. Additionally, the potential vulnerability of security and privacy for blind users due to connected devices and cameras was analyzed. Figure 6 presents a graphical representation of the feature scores for the various assistive aids, ranging from 0 to 6. A score of 0 indicates poor performance, while a score of 6 indicates superior performance compared to other aids.
The optical time-of-flight (OToF) based and sonic triangulation devices offered a range of 3 m to 7 m. The stereo imaging-based aids can detect obstacles up to 6-7 m. Smartphone-based devices range up to 5 m. The maximum obstacle detection range is provided by the 3D point cloud mapping-based solution employing an Intel RealSense camera [79], at up to 10 m, while 3D point cloud mapping-based aids with 2D and 3D lidar can support a detection range of 10-30 m. The detection range, speed, and coverage of OToF sensors may be affected by environmental factors such as lighting conditions and reflective surfaces: bright sunlight or glare can interfere with the accuracy of sensor readings, and reflective surfaces can cause erroneous distance measurements. The field of view of ToF and sonic sensors is typically narrow, which may limit their ability to detect obstacles and provide situational awareness in complex environments; these sensors are mainly suitable for independent travel indoors or in known environments. All aids that do not involve camera input are inherently more secure. The field of view of smartphone camera-based cloud technology aids is also narrow, around 40-50 degrees, which may limit their ability to detect obstacles and provide situational awareness in complex environments. The multisensor fusion-based aids and guide robot technology-based aids offer a wider field of view by employing multiple cameras [144], [145], [151] or multiple range-based sensors [43].
Blind assistive aids must include privacy and security. Cloud-based aids may have privacy and security issues [120], [121], [122], [123], [124], [125], [126], [127], [128], [129], [130]. Cloud-based systems store data on third-party servers in multiple countries, which raises data sovereignty and local data protection problems. Cloud-based solutions may also have security vulnerabilities: cyberattacks or unauthorized access may compromise user location data. Privacy and security should therefore be considered when designing blind assistive aids, including encryption, authentication, and access controls. The design should also account for system risks and vulnerabilities and include security-incident detection and response mechanisms. The analysis shown in Figure 6 details that the guide robot-based and multisensor fusion-based aids are evaluated to be the most suitable in terms of longer range, wider coverage, slow-to-optimum recognition speed, and better security.

D. COMPARISON AND DISCUSSION ON FEEDBACK TO A BLIND USER
The various types of feedback used in blind assistive devices, such as audio beeps, tactile vibrations, varying-frequency audio pitch, and binaural feedback, provide blind users with important information about their surroundings, helping them to avoid obstacles, locate objects, and navigate unfamiliar spaces. The use of audio messages, text-to-speech, and bone-conducting headphones further enhances the experience, providing blind users with verbal or auditory information about their environment. Figure 7 shows the widely used feedback types in blind assistive aids. Audio beeps [14], [21], [28], [41], [52], [55], [57], [58], [60], [74] are a simple and affordable way to provide feedback to visually impaired individuals. They are commonly used in navigation systems and can be helpful in indicating changes in direction or distance to an object. However, audio beeps may not be ideal for individuals who have difficulty hearing or those who are sensitive to loud noises. Tactile feedback [15], [17], [18], [19], [22], [26], [27], [29], [34], [40], [44], [45], [74], [81], [82], [84], [88], [89], [92], [97], [110], [112], [113], [117], [124], [127], [141] through vibrations felt on the skin indicates the location or proximity of an object or obstacle. It can be used in devices such as wearable bands or gloves to indicate directions or objects. Tactile feedback is generally well accepted by users, but the intensity and frequency of vibrations need to be adjusted to individual preferences.
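Because tactile intensity must be tunable per user, a minimal sketch of a distance-to-vibration mapping is given below; the 3 m range, linear ramp, and user gain parameter are assumptions for illustration, not values from any surveyed device.

```python
def vibration_duty(distance_m: float, max_range_m: float = 3.0,
                   user_gain: float = 1.0) -> float:
    """Map obstacle distance to a vibration-motor duty cycle in [0, 1].

    Closer obstacles produce stronger vibration; 0.0 means motor off.
    The user_gain lets the wearer scale intensity to personal preference.
    """
    if distance_m >= max_range_m:
        return 0.0
    strength = 1.0 - distance_m / max_range_m  # linear ramp: 1.0 at contact
    return min(1.0, strength * user_gain)

for d in (0.5, 1.5, 2.5, 3.5):
    print(f"{d} m -> duty cycle {vibration_duty(d):.2f}")
```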
Varying-frequency audio pitch [16], [17], [18], [19], [20], [26], [29], [31], [32], [33], [35] can provide a more detailed feedback system. It can be used to indicate distance or direction and is generally well accepted by users, although it may require some training to distinguish between different pitch levels. Vibro-tactile feedback [14], [15], [17], [18], [19], [22], [26], [27], [29], [34], [40], [44], [45], [74], [81], [82], [84], [88], [89], [92], [97], [110], [112], [114], [117], [124], [127], [141] delivered through a wearable device can be helpful in providing spatial information; however, the accuracy of the feedback may depend on the placement and sensitivity of the sensors. Audio messages [30], [46], [53], [54], [68], [69], [72], [73], [83], [86], [89], [90], [97], [104], [105], [107], [108], [109], [111], [119], [120], [121], [122], [123], [126], [127], [128], [129], [130], [147], [150] can provide more detailed information, delivered through devices such as smartphones or navigation systems, and can be helpful in providing directions or identifying objects; however, users may find it difficult to navigate through a large number of audio messages. Bone-conducting headphones [56], [63], [98], [106], [113], [124] provide audio feedback without obstructing the user's hearing, using bone conduction technology to transmit sound signals through the skull bone. The sound quality may not be as clear as with other feedback types, and users may need to adjust to the sensation of vibrations on their bones. Binaural feedback [35], [36], [37], [39], [42], [63], [115] can provide a more immersive and realistic experience for blind users by creating 3D audio environments. Acoustic HRTF (head-related transfer function) feedback [50] is a more advanced form of binaural feedback used to provide spatial information in navigation and orientation tasks; binaural and HRTF feedback require specialized equipment and may not be accessible to all users. Text-to-speech feedback [49], [51], [61], [62], [125] can provide more detailed and specific information to visually impaired individuals through devices such as smartphones or navigation systems, helping with directions or object identification, although blind users may find it difficult to navigate through a large amount of text.

Overall, the choice of feedback type will depend on individual preferences and needs. Some users may prefer more detailed and specific information, while others may prefer a more immersive and engaging experience. It is important to consider the accessibility, affordability, and ease of use of each type of feedback before making a final decision. Additionally, it may be beneficial to consider the user's existing skills and abilities, as some types of feedback may require more training or experience to use effectively.

Figure 8 presents the comprehensive performance evaluation of the developed technologies and aids, assessing their feature scores for obstacle detection speed, portability, and battery life. These parameters are directly affected by the type and number of sensors used, the complexity of the computational requirements, and the type of functional assistance provided. An assistive aid needs to be portable so it can be carried easily throughout the day, and longer battery life is another requirement, to avoid interruption of service for extended periods.
Also, the aid should be cost-effective and easily affordable for the blind individual. The highest scores are observed with OToF and sonic aids, although their functionality is limited to obstacle detection; these aids are best suited for longer battery life, faster obstacle detection, and relatively low cost. The SLAM, 3D point cloud mapping, and artificial intelligence techniques require expensive hardware, such as LIDAR sensors, high-quality cameras, and a high-configuration processing unit, which leads to higher weight and cost, making them unaffordable and inaccessible to many blind users. These techniques can also be computationally intensive, requiring powerful processing capabilities that may not be available on the low-cost devices used by visually impaired individuals.
Based on the discussions presented in this section, the authors suggest that future blind assistive aids should consider the following features:
1. Minimum Information Overload: Prioritize essential feedback, allowing free use of hands and ears.
2. Wearable and Lightweight: Utilize wearable technologies for ease of carrying.
3. Technology: Range-based sensors for obstacle detection and vision-based sensors for obstacle and scene recognition.
4. Safe and Reliable: Instill user confidence to prevent anxiety and mental exhaustion.
5. Longer Battery Life: Balance integrated complexity with extended operational duration.
6. Short Training Time: Enable intuitive use with minimal training requirements.
The authors recommend considering the integration of a multisensory aid that includes obstacle detection using range-based sensors such as a laser, ultrasonic sensor, or lidar. The existing white cane can be augmented with these sensors to detect and avoid obstacles. However, these range-based sensors do not provide any information about the type of obstacle or the details of the surrounding scene. This can be achieved by integrating vision-based sensors such as a monocular camera, stereo camera, or RGBD camera, which can augment the white cane to assist in the recognition of obstacles, landmarks, faces, and the surrounding scene. This helps blind users perceive the environment as normally sighted humans do. The authors recommend the fusion of range-based and vision-based sensors to ensure complete assistance to blind people; however, a proper balance of weight, computational requirements, and battery life is also of prime importance.
As per the study conducted, state-of-the-art solutions such as the Split Grip Cane [152] and the VenuCane [112] are the most suitable augmented canes for obstacle detection and avoidance, being lightweight and affordable. The usability evaluations of these aids have been conducted with blind volunteers.

V. CONCLUSION
The goal of this review study is to help future researchers learn about the progress made in blind assistive aids and then apply those learnings to the creation of cutting-edge products and services for the visually impaired. It includes a comprehensive examination of the technological advances, the algorithms utilized, and the numerous approaches employed for the purpose of supporting visually impaired people. The safety and reliability of an assistive aid are crucial for the well-being of blind users. The survey details the evaluation of assistive technologies based on fourteen key parameters, namely impediment detection, localization, scene details, detection range, speed, security and privacy, field of coverage, functionality, adaptability, portability, real-time evaluation, battery life, cost, and user acceptance.
The results indicate that optical and acoustic sensor-based aids prioritize speed, weight, and battery life but lack significant functionality, with a 62% performance rating. In contrast, stereo, monocular, SLAM, and 3D point cloud mapping-based aids excel at obstacle distance estimation and avoidance but demand more memory resources, achieving a lower performance score of 41%. With a performance score of 44%, artificial intelligence and cloud-based aids provide scene information but require complex computational hardware. Multisensor fusion and guide robot-based aids provide the majority of the essential assistive functions with a performance score of 51%, making them the most promising technologies for advancing state-of-the-art solutions for blind individuals. While these findings emphasize the advantages and disadvantages of the various technologies, no complete system that provides a full assistive solution for blind people has been developed so far. Future advancements in blind assistive aids, integrating stereo, sonic, or lidar sensors with image (camera) sensors, will be able to provide improved performance and versatile assistive functionalities. A number of unresolved issues remain, namely boarding trains, traveling on public transport, shopping in a supermarket, avoiding dynamic obstacles, and understanding the surrounding scene. The need for increased memory resources, complex computational requirements, and the trade-off between functionality and battery life are some of the implementation challenges.
The presently available assistive aids provide a number of functions, but their real-time performance cannot be guaranteed. In order to enhance the assistive aids, it is crucial to engage blind users in the evaluation process. It is essential to prioritize user feedback on the developed aids, create intuitive user interfaces, and make assistive devices accessible and affordable. In addition, fostering interdisciplinary research is an essential step towards the development of effective and widely accepted assistive aids for blind individuals.