Map Making in Social Indoor Environment Through Robot Navigation Using Active SLAM

Robotics has come a long way from industrial robotic arms and is all set to enter our homes. The capability of a robot to navigate an unknown, human-populated environment with obstacles while simultaneously building a map is one of the significant characteristics in the domain of autonomous robotics. Further, the problem of a robot navigating a social environment while ensuring human safety and comfort through social norms needs to be addressed. This article presents a solution for mapping unknown terrains with dynamic obstacles using simultaneous localization in social environments through adaptive squashing function based artificial neural network training, which tracks the target orientation angles more efficiently than the conventional fixed-slope squashing function based backpropagation training algorithm. The performance of different state-of-the-art techniques has been compared with the proposed work through simulation models. Simulation results demonstrated the effectiveness of the proposed algorithm in complex environments, where it converged in less than 50% of the iterations taken by the exhaustive search algorithm and approximately 33% of the iterations taken by the random search algorithm. Further, the proposed approach was tested in real-world settings, wherein the robot was deployed to create a map of the Kalpana Chawla Center for Research in Space Science and Technology, Chandigarh University, with mobile humans present.


I. INTRODUCTION
One of the most important fields in robotics is the capacity of an autonomous robot to navigate in an unfamiliar environment while simultaneously generating a map and localizing itself. Extremely dependable technologies and solutions have emerged over decades of research and development on simultaneous localization and mapping (SLAM) [1]. The technique finds its place in many real-world robotic applications [2], evolving from simple manipulators to driverless cars, targeting almost similar objectives: finding where the robot is, what its surroundings look like, and how it can move around.
The associate editor coordinating the review of this manuscript and approving it for publication was Byoung Wook Choi.
SLAM tries to answer these questions by using LiDARs, RADARs, cameras, etc., often combining multiple sensors to improve accuracy. Vision sensors, in comparison to LiDARs and RADARs, are inexpensive and provide more information about location and surroundings, making visual SLAM (VSLAM) methods more popular [3] than SLAM methods using LiDARs or RADARs. In [4], the authors analyze state-of-the-art VSLAM techniques. Significant developments towards robust feature extraction contribute majorly to the success of SLAM. The references [5], [6], [7] present an evaluation of feature extractors directed towards VSLAM applications. Navigation is a crucial part of SLAM, where probabilistic approaches involving parametric (filter-based) and non-parametric (Monte Carlo) methods address the inherent uncertainty in robot orientation, measurement space, and errors [8]. Non-probabilistic methods [9], in contrast, formulate uncertainty without incorporating any statistical estimation, resulting in lower accuracy while decreasing the computational load substantially [10].
VOLUME 10, 2022. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
In many applications, the robot is required to define its trajectory in real time, for unknown environments [11].
However, with little a priori knowledge about the environment, performance degrades. The authors in [12] modify the cognitive-based adaptive optimization (CAO) approach [13] for navigation in unknown terrains while detecting obstacles and the free space of the environment using a stereo-vision camera. But in real-life situations, dynamic obstacles quickly and unexpectedly take arbitrary shapes, sizes, and orientations. This problem is resolved by the artificial neural network (ANN) based reactive algorithm developed in [14].
Unlike industrial applications, the behavior of robots in social environments needs to adapt to social expectations. Unfavorable social responses (e.g., delivery robots being kicked) are inevitable [15], [16] unless robot path planning includes elements of human-robot interaction (HRI). Socially Aware Robot Navigation (SARN) can be made effective if the robot understands factors like human behavior, social rules, and the environment, or infers such factors from context, resulting in Context-Aware Mapping (CAM). Abiding by physical and social constraints in a particular environment and context, the authors in [17] present a framework catering to reactive and deliberative components for effective SARN.
A Social Convention block in the framework specifically takes care of local planning, slightly ahead of real execution, while catering to social and societal norms, providing a solution in conflicting situations. The Human Motion block predicts human motion and accordingly adapts the velocity and geometry of the path to bring naturalness to the robot's motion. Social Convention in conjunction with Human Motion prediction and Context-Aware Mapping delivers social awareness to the robot, increasing the efficiency of SARN.
The knowledge of the environment and map should not be a limitation for the deployment of social robots in public spaces. Further, the element of human-robot interaction is often not considered while defining robotic path-planning problems. In this article we present a solution to simultaneous localization and mapping of unknown terrains with dynamic obstacles in social environments, addressing the following objectives:
• To conduct a systematic review and analysis of SLAM-based techniques for map making by a mobile robot.
• To propose a novel design of an adaptive squashing function based artificial neural network for optimum navigation in a social environment with dynamic obstacles for active SLAM.
• To evaluate the performance of the proposed approach in comparison to random and exhaustive search algorithms across different terrains in simulated and real-world environments, in terms of mean squared error and objective-function convergence.
Organization of Paper: The rest of the paper is organized as follows. Section II describes the related work. Section III focuses on the proposed system model, and Section IV describes the simulation as well as experimental result analysis. Section V concludes the paper with future scope.

II. RELATED WORK
Various methods have been developed for SLAM in recent years [18], [19], [20]. By maximizing the robot's entropic information received through two cameras and an inertial sensor, an extended Kalman Filter (EKF)-based algorithm [21] could successfully localize and map terrains at a theoretical level. SLAM for a robotic navigational aid utilizing a visual-range-odometry-based algorithm for pose estimation is proposed in [22]. The algorithm involves ego-motion estimation from the range and intensity data of the camera and state estimation through an EKF. In another SLAM approach [23], based on the covariance matrix from the EKF algorithm, simulations on a robot with ideal sensors proved that conventional exploration algorithms that apply an EKF to bearing-only information can update the filters in an ill-conditioned manner. As an alternative approach, a line-feature-based SLAM [24] that associates line features through a weighted Euclidean distance measured in Hough space was studied with infrared sensors for active exploration. But due to limitations of the embedded robotic system, it could not produce accurate maps.
While a few of these methods enabled reliable mapping and to a certain level solved real-time problems, their solutions were mostly theoretical, or the implementations required high-accuracy sensors. SLAM techniques based on visual data extract information from cameras; since visual data is used, such techniques are computationally very complex. A few techniques have been developed to reduce this computational burden, such as using corner detection to restrict the landmarking process [25] or using multiple cameras to reduce the time required for map construction [26]. The authors in [27] propose an approach for autonomous exploration and SLAM in indoor environments for mobile robots utilizing polygon map making and graph-based SLAM with directional end-point features. A sparse motion removal (SMR) model-based approach for robust visual localization is proposed in [28]. A Bayesian framework is employed for detecting the dynamic and static portions of an input frame. The detection uncertainty is reduced by utilizing the similarity between subsequent frames and the difference between the current frame and the reference frame. The work is extended to dynamic environments in [29].
Passive SLAM techniques focus on constructing the map while the robot explores the area on a predefined path. This type of technique is best suited for mapping empty spaces without obstructions. In contrast, active SLAM techniques create the navigation path of the robot in real time, and their efficiency in mapping the area depends largely on the unknown environment [7]. Since the trajectory of the robot in active SLAM is decided in real time, it requires strong knowledge of the environment and high-accuracy positioning of all the obstacles. Through a modified version of the cognitive-based adaptive optimization (CAO) approach [12], an autonomous robot with stereo cameras was developed to localize itself and map an unknown space with very little prior knowledge of the environment. The technique overcame the problems in SLAM presented by obstructions of random shape and orientation. However, this active SLAM technique considers only static obstacles and does not cater to future needs, where robots will share the social space with humans moving in the exploration environment. To meet this requirement, the exploration robot needs to be able to identify the humans and perform its exploration tasks without disturbing the humans around it, ensuring their comfort and safety.
The Convolutional Neural Network (CNN) based model You Only Look Once (YOLO) [30], while working in real time, allows different elements in the exploration environment to be detected and further classified as moving or static objects. Moreover, it can also classify moving objects as humans or non-humans (pets or machines). This paper utilizes a modified version of YOLO [31], offering the advantages of YOLO while requiring fewer computational resources, thereby addressing the computational restrictions associated with a mobile robotic system. The authors in [32] utilize the Artificial Fish Swarm Algorithm and a trial-based forward search for collision avoidance through alternating local and global trajectory planning. The proposed technique performs well even in the case of unforeseen obstacles. Differentiating humans from other moving objects enables path planning for the exploration robot according to human social conventions [33]. For example, while navigating, if the robot encounters a group of people and there is no free space available to move, the robot will wait for the group to disperse and make way; it will never cut through the group and invade their personal space. While exploring the environment through active SLAM, the navigation path also needs to be optimized to overcome the possible local minima causing deadlocks. A modified SLAM approach obtained by mathematically analyzing the CAO parameters has shown progress, as it could overcome deadlocks in a faster and more efficient way [12].
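As an illustration of how detector outputs can feed the social path planner, the sketch below partitions detections into the three groups the navigation stack treats differently: humans, other moving objects, and static obstacles. The class lists and function names are assumptions for illustration; the paper does not specify its label set.

```python
# Illustrative class partitions; the actual YOLO label set used by
# the paper is not specified, so these sets are assumptions.
HUMAN_CLASSES = {"person"}
MOVING_CLASSES = {"person", "dog", "cat", "robot"}

def classify_detections(detections):
    """Split detector outputs (label, bbox) into humans, other moving
    objects, and static obstacles, the three categories the social
    navigation rules treat differently."""
    humans, moving, static = [], [], []
    for label, bbox in detections:
        if label in HUMAN_CLASSES:
            humans.append(bbox)
        elif label in MOVING_CLASSES:
            moving.append(bbox)
        else:
            static.append(bbox)
    return humans, moving, static
```

Downstream, only the `humans` group triggers proxemics constraints, while both moving groups are re-checked at every time-step.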
However, the existing techniques only optimize the navigation path in the presence of static obstacles, where paths are mostly linear. Navigation in social environments (in the presence of humans) involves non-linear paths due to the consideration of social norms during path planning. The present article proposes a novel approach for active exploration in an unknown structured and social environment. The moving robot is required to explore the area, following the rules of social engagement with humans, while simultaneously constructing the map. A modified version of SLAM is presented that uses an adaptive squashing function [34] to optimize the nonlinearities in the navigation path of the robot in a social environment.

III. PROPOSED METHODOLOGY

A. SYSTEM MODEL
The schematic of the mobile robot housing the different sensors and actuators is shown in Fig. 1. The mobile robot houses six ultrasonic rangefinder sensors (S1 to S6 in Fig. 1) that cover 180 degrees in front of the robot, along with a stereo-vision camera for object detection, which is utilized for map making during navigation. The mobile robot is also fitted with two independent actuators (motors M1 and M2 in Fig. 1) attached to the two driving wheels for the motion and orientation of the robot in the human-populated environment. One of the prime objectives of the mobile robot is to build a map of an unknown terrain (including static obstacles and mobile humans) using the images captured from the stereo-vision camera, and to navigate the environment to reach the destination in the minimum possible number of motion steps, as depicted in Fig. 2. The flow diagram of the proposed methodology is depicted in Fig. 3. The navigation starts by detecting whether the complete area has been covered. If the whole area is not covered, semantic mapping takes place, wherein the robot identifies objects and humans in the field of view of the camera. In parallel, the robot evaluates proxemics to determine non-approachable positions, catering to social norms.
Subsequently, for the approachable positions, the robot evaluates the path for navigation based on the adaptive squashing function while utilizing active SLAM in the social environment. The robot then evaluates its next pose for navigation and ends navigation once the complete area is covered. The details of the proposed methodology are illustrated in the following subsections.
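The proxemics check described above can be sketched as a simple distance filter over candidate positions. This is a minimal illustration under assumed values: the personal-space radius and the function names are not taken from the paper.

```python
import math

# Assumed personal-space radius in metres (roughly Hall's personal
# zone); the exact threshold used by the robot is not specified.
PERSONAL_SPACE_RADIUS = 1.2

def is_approachable(candidate, humans, radius=PERSONAL_SPACE_RADIUS):
    """Return True if the candidate (x, y) position keeps at least
    `radius` metres from every detected human (x, y) position."""
    return all(math.dist(candidate, h) >= radius for h in humans)

def filter_candidates(candidates, humans):
    """Keep only the candidate positions that respect personal space."""
    return [c for c in candidates if is_approachable(c, humans)]
```

Positions that fail the check are simply never offered to the path planner, so social norms act as a hard constraint rather than a soft penalty.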

B. REAL-TIME ACTIVE SLAM
As the robot navigates the human-populated space, it is assumed that it maps an area M(t) out of the total area T(t) at each time-step t, which in turn reduces the unexplored area U(t). It is assumed that the robot has no information about the number of moving humans, the locations of static obstacles, or the terrain topology. Further, it is assumed that the terrain is square along the two axes, for analytical simplicity. The objective of the moving robot is to maximize the covered/mapped area by visiting each point in the terrain in the minimum number of motion steps. The objective function considering all the constraints for maximization is defined in (1), where x_min, x_max, y_min, y_max are the extremities of the terrain in which the robot is navigating. Ideally, the objective function should converge to 1, which means 100% area coverage by the robot. Considering the presence of stationary objects and mobile/sedentary humans in the same terrain, the robot will not be able to achieve the ideal value of 1 for the objective function defined in (1). In reality, objects and/or humans are located at O(x, y) and H(x, y, t), respectively, in the area being explored. As the objects also occupy some space, the modified objective function is defined in (2). Considering that the stereo-vision camera cannot capture the entire region at once, the capture depends on its viewing angle and maximum viewing distance. While capturing a part of U(t), obstacles (objects and/or humans) are detected along with the vacant space for navigation, which the robot remembers for efficient navigation design by recording the locations O_det(x, y) of stationary objects and H_det(x, y, t) of humans.
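Since the closed-form expressions (1) and (2) are not reproduced in this text, the following grid-based sketch illustrates, under stated assumptions, how a coverage-ratio objective of this kind can be computed: cells occupied by static objects or humans are excluded from the attainable area, mirroring the move from (1) to the modified objective.

```python
def coverage_objective(mapped_cells, occupied_cells, all_cells):
    """Coverage ratio J(t): fraction of reachable cells already mapped.

    The paper's exact closed forms (1)-(2) are not given here, so this
    discretized version is an assumption for illustration. Cells in
    `occupied_cells` (objects/humans) cannot be mapped and are removed
    from the attainable area before taking the ratio.
    """
    reachable = all_cells - occupied_cells
    if not reachable:
        return 1.0  # nothing attainable: coverage trivially complete
    return len(mapped_cells & reachable) / len(reachable)
```

With no obstacles, the ratio converges to the ideal value 1 exactly when every cell has been mapped, matching the ideal case described above.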
Since the locations of the detected obstacles have been identified, the objective function is modified as in (3). The basic obstacle avoidance is performed using the v-disparity algorithm [35], in which the basic geometric features of the captured scene are encoded based on the disparity values from a horizontal histogram. Every pixel in a v-disparity image has a positive integer value that represents the number of pixels on the same image line with a disparity value equal to the abscissa [36]. The resulting v-disparity image separates the ground-plane pixels from the corresponding obstacle pixels. The proposed technique uses a simplified form of v-disparity and combines low computational complexity with advanced data processing to keep the total computational cost low.
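The v-disparity construction just described, a per-row histogram of disparity values, can be sketched as follows. This is a straightforward rendition of the cited definition, not the paper's simplified variant; the function name and integer-disparity assumption are illustrative.

```python
import numpy as np

def v_disparity(disparity, max_disp):
    """Build a v-disparity image from an integer disparity map.

    Entry (row, d) counts how many pixels on image row `row` have
    disparity value d, i.e. a horizontal histogram per image line,
    as described for the obstacle-avoidance step.
    """
    rows, _ = disparity.shape
    vdisp = np.zeros((rows, max_disp + 1), dtype=np.int32)
    for r in range(rows):
        vals, counts = np.unique(disparity[r], return_counts=True)
        vdisp[r, vals] = counts
    return vdisp
```

In the resulting image, the ground plane appears as a slanted line (disparity decreasing with row), while vertical obstacles appear as near-vertical segments, which is what makes the separation cheap.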
The proposed method is then used to define the next optimal motion once the region T(t) has been mapped. At each time step, a new set of candidate positions for the robot is produced, with the proposed algorithm determining the most efficient one. It calculates the objective function J(t) that would result if the robot had moved to each candidate position, and the position corresponding to the maximum J(t) is chosen. An efficient navigation method must also satisfy physical restrictions: the robot's movement must remain inside the terrain's bounds, it cannot travel toward an object or outside of its field of view (FOV), and its movement must take its own dimensions into account.
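The candidate-evaluation step above amounts to a greedy one-step planner, which can be sketched as below. The callables `evaluate_j` and `is_feasible` are placeholders for the paper's objective (3) and its physical/social restrictions; their names are assumptions.

```python
def next_position(current, candidates, evaluate_j, is_feasible):
    """Greedy one-step planner: among feasible candidate poses, pick
    the one whose hypothetical objective value J(t) is largest.

    `evaluate_j(pose)` scores a candidate as if the robot had moved
    there; `is_feasible(pose)` encodes the physical restrictions
    (terrain bounds, field of view, clearance, robot dimensions).
    """
    feasible = [c for c in candidates if is_feasible(c)]
    if not feasible:
        return current  # no admissible move this step: stay put
    return max(feasible, key=evaluate_j)
```

Because feasibility is checked before scoring, a high-J candidate that violates the FOV or clearance constraints can never be selected.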
With the robot performing SLAM tasks for area mapping, it becomes necessary to optimize the objective function defined in (3). This criterion is also a function of the positions and orientations the robot assumes in the course of navigation. It is considered that the optimization criterion is a non-linear function of the robot's position/orientation, as in (4), where R_pos(t) is the position vector of the robot at time t. While planning a navigation path in a social environment, where humans play an important role, the robot has to avoid certain positions/orientations following social conventions. This imposes on the robot a set of constraints at each time-step t on the position vector R_pos(t), as in (5), where C(.) is a set of non-linear functions of the robot's poses while navigating. These functions depend on the environment in which the robot is navigating, which can be completely unknown because of the dynamic nature of humans. Proxemics and social conventions play a vital role in deciding the constraints on the robot for efficient navigation. As a result, the challenge of active SLAM may be characterized mathematically as the task of moving R_pos(t) through a collection of places with given orientations that satisfy the constrained optimization problem, that is, maximize (4) subject to (5).

C. ROBOT KINEMATICS AND NAVIGATION
The robot navigates the terrain using two independent motors (right and left) on the robot chassis. The resultant velocity of the robot depends upon the linear and angular velocities of the right and left wheels. The linear velocities of the two wheels are determined by the corresponding angular velocities, with l the distance between the two wheels of the robot and R the distance between the center of the wheel spindle and the instantaneous center of curvature (assuming that the dimensions of both wheels are the same). It is necessary to generate left and right wheel speeds that make the robot follow the required direction trajectories to ensure efficient and optimized map making. The position of the robot at any instantaneous time t can be denoted by the vector R_pos(t) = [R_x(t), R_y(t), R_θ(t)]^T, containing the planar coordinates and the orientation of the robot. Here, R_x and R_y correspond to the planar coordinates of the robot's center of gravity on the ground plane, and R_θ is the robot's orientation with respect to the horizontal axis. The position attained by the robot is obtained by integrating its velocity over time, assuming a zero initial value. The kinematics of the robot may be described as Ṙ_x = v cos R_θ, Ṙ_y = v sin R_θ, Ṙ_θ = ω, where [v, ω]^T is the velocity vector calculated from the output of the neural network, as described below.
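The differential-drive relations above can be made concrete with a short Euler-integration step. The body velocities follow the standard model v = (v_r + v_l)/2 and ω = (v_r − v_l)/l; the function name and time-step are illustrative, not from the paper.

```python
import math

def diff_drive_step(pose, v_left, v_right, wheel_base, dt):
    """One Euler-integration step of differential-drive kinematics.

    pose = (x, y, theta); v_left and v_right are the linear wheel
    speeds. Body velocities: v = (v_r + v_l) / 2 and
    omega = (v_r - v_l) / L, where L is the wheel separation.
    """
    x, y, theta = pose
    v = (v_right + v_left) / 2.0
    omega = (v_right - v_left) / wheel_base
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

Equal wheel speeds drive the robot straight along its heading, while opposite speeds spin it in place, which is how the planner realizes both translational and rotational moves.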
For navigation, the robot has to move across the terrain while avoiding objects. Since navigation is a non-linear function of the unknown terrain and the objects placed inside it, non-parametric estimation methods such as artificial neural networks (ANNs) can be utilized to determine the next position/orientation of the robot. ANNs are functional networks of non-linear units called neurons that are interconnected through weighted connections. Considering the underlying system conditions, a three-layer ANN architecture (shown in Fig. 4) is used to output the steering angle and velocity for the next time-step of robot motion [14]. The input layer consists of 8 neurons, of which 6 are based on the outputs of the distance sensors and 2 on the distance and angle detected by the stereo-vision camera for the next obstacle-avoiding position. The hidden layer utilizes 120 neurons fully connected with the input layer, which is further connected to 2 neurons in the output layer that produce the steering angle and velocity for the next motion step.
The input vector for the input layer can be defined as in (13), where s_i is the data received from the distance sensors, and R_x(t+1), R_y(t+1) are the position coordinates for the next motion step, as targeted from the stereo-vision camera image processing. The input-output relationship for the ANN structure is described as in [34]: v^(q)_ji signifies the synaptic weight of the connection between the j-th neuron of the q-th layer and the i-th output of the previous layer; net^(q)_j is the net induced input field of the j-th neuron in the q-th layer from the previous-layer neurons; and f(.) is the non-linear squashing function, considered here as a hyperbolic-tangent function with slope factor α^(q)_j. To attain the faster convergence of the neural network that is vital for navigation, an adaptive squashing function is used that adjusts the slope of squashing, in addition to the other network parameters, through backpropagation of errors into the network [34]. The update of network parameters, including the slope of the squashing function, starts by assigning small random values to the weights and unity slope to the squashing function of each neuron. Based on the error vector calculated from the output obtained with respect to the inputs specified in (13), the network parameters are updated, and the process continues until the maximum number of epochs is reached or an acceptably low error is achieved. The ANN training enables the robot to adapt to the dynamism of the terrain due to human movement and/or involvement. The forthcoming robot position remains within the field of view of the robot, which also helps in tackling the complexity of the problem.
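The adaptive-slope idea can be illustrated for a single neuron: the squashing function is f(net) = tanh(α·net), and backpropagation updates the slope α alongside the weight. This is a minimal sketch of the mechanism from [34], not the paper's full 8-120-2 network; the learning rate and update rule layout are assumptions.

```python
import math

def adaptive_tanh(net, alpha):
    """Squashing function with a trainable slope: f(net) = tanh(alpha * net)."""
    return math.tanh(alpha * net)

def train_step(w, alpha, x, target, lr=0.1):
    """One backprop step for a single neuron, updating the weight AND
    the slope of its squashing function (the adaptive element).

    For e = 0.5 * (y - target)^2 with y = tanh(alpha * w * x):
      de/dw     = (y - target) * alpha * x * (1 - y^2)
      de/dalpha = (y - target) * w * x * (1 - y^2)
    """
    net = w * x
    y = adaptive_tanh(net, alpha)
    err = y - target
    sech2 = 1.0 - y * y  # derivative of tanh at its output value
    w_new = w - lr * err * alpha * x * sech2
    alpha_new = alpha - lr * err * net * sech2
    return w_new, alpha_new, y
```

Because the slope itself descends the error surface, the effective gain α·w can grow or shrink faster than with a fixed-slope tanh, which is the intuition behind the faster convergence reported in the MSE comparison.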

IV. RESULTS AND DISCUSSION
In order to evaluate the effectiveness of the proposed method, different scenarios in simulation and the real world were taken into consideration. The programming, simulation, and modelling were performed in Webots along with a MATLAB-based controller. The proposed method used in the experiments was compared with a random approach and an exhaustive search algorithm. The proposed method utilized the stereo camera to digitize the visual space and searched over all candidate combinations to optimize the objective function.

A. SIMULATION EXPERIMENT
A simulated robot equipped with six ultrasonic rangefinder sensors covering 180 degrees and a stereo camera with an angle of view of 120 degrees and a monitoring distance of 3 meters was deployed in a rectangular terrain of length 15 meters and breadth 18 meters. The robot is 20 cm long, 17 cm tall, and 18 cm wide, exactly the same as the dimensions of the real robot, and it has no knowledge of the morphology of the terrain or the obstacles in the scenario. The position of the robot is depicted by the scattered dots in all maps during the complete mapping process. The blue color represents the obstacles in the terrain, i.e., two rectangular and one circular, as shown in Fig. 5. In order to assess the effectiveness of the training algorithm before applying it to the real scenario, the presented adaptive squashing function based backpropagation training algorithm is compared with the fixed-slope hyperbolic-tangent function based algorithm on the basis of mean squared error (MSE). It can be observed from Fig. 6(a) that the presented adaptive-slope squashing function based algorithm provides better results with lower MSE, owing to better adaptiveness due to the tuning of the slope parameter along with the other free parameters of the ANN. The algorithms are also compared while tracking the orientation target values for the simulation environment described above. Figs. 6(b) and 6(c) depict that the presented algorithm tracks the target orientation angles with better accuracy for the different simulation terrains, Sample Terrain-1 and Sample Terrain-2, shown in Figs. 5(a) and 5(b), respectively. Further, Fig. 7 represents the convergence of the objective functions for the proposed method in comparison to the random approach and the exhaustive search algorithm. For each sample terrain, 15 different scenarios were simulated with different initial positions, and the mean values of the objective functions for all methods are compared in Fig. 7.
The effectiveness of all the approaches can be assessed by observing both the number of positions the robot visited and the coverage of the terrain. The proposed method was able to cover 100% of the terrain in almost 1110 iterations, as shown in Fig. 7(a). The random approach was able to cover only 76% of the terrain, whereas the exhaustive search algorithm covered 100% in 850 iterations. When the complexity of the terrain was increased, as shown in Fig. 7(b), the proposed method took almost 1500 iterations to cover 100% of the terrain, compared to the exhaustive search algorithm, which covered only 93% in almost 3500 iterations, and the random algorithm, which took 4700 iterations to cover only 68% of the terrain.

B. EXPERIMENTAL RESULTS WITH A REAL OPERATIONAL ROBOT
The robot used for the experiments is equipped with a Raspberry Pi 3, a stereo camera, six ultrasonic rangefinder sensors, two servo motors, and an ATmega328p microcontroller, as depicted in Fig. 8(a). The robot was deployed at the Kalpana Chawla Centre for Research in Space Science and Technology (KCCRSST), Chandigarh University, along with five human subjects, to test the proposed approach in the real world while following social norms. First, the images captured by the robot are sent to a central computer, where the bitmaps are processed in order to execute the proposed protocol. Epipolar geometry is applied to the received images, and the sum of absolute differences is then used as a metric to find the correlation between blocks of the images while keeping the complexity low.
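The sum-of-absolute-differences matching along the epipolar line can be sketched as below. The function names and block layout are illustrative; the paper does not give its exact implementation.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally-sized blocks."""
    return int(np.abs(block_a.astype(np.int32)
                      - block_b.astype(np.int32)).sum())

def best_disparity(left, right, row, col, block, max_disp):
    """Return the disparity d in [0, max_disp] whose right-image block
    (shifted d pixels left along the epipolar line) minimizes the SAD
    against the left-image block at (row, col)."""
    ref = left[row:row + block, col:col + block]
    scores = []
    for d in range(max_disp + 1):
        c = col - d
        if c < 0:
            break  # shifted block would leave the image
        scores.append((sad(ref, right[row:row + block, c:c + block]), d))
    return min(scores)[1]
```

Because epipolar rectification reduces the search to a single row, only max_disp + 1 block comparisons are needed per pixel, which keeps the complexity low, as noted above.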
The next position of the robot is computed by processing the data from the six ultrasonic rangefinder sensors, after which the basic principles of odometry are applied to coordinate the execution of the robot's motion. The rotational and translational motion is executed by the robot using the servo motors. Stereo-vision odometry principles were also embedded to support continuous transitions. Using the proposed approach, the robot was able to avoid the obstacles while following the social norms in the presence of humans. The robot was also able to cover the whole terrain and finally generated the map of the terrain following the active SLAM technique, as shown in Fig. 8. The results of map making in the real world through deployment of the proposed approach are shown in Fig. 9. All sixteen snapshots in Fig. 9 depict different stages of robot exploration in the social space while avoiding all the obstacles and following all social norms. Fig. 10(a) represents the convergence of the objective functions for the proposed method in comparison to the random approach and the exhaustive search algorithm for the terrain used in the experimental work, and Fig. 10(b) represents the velocities (in cm/sec) of the left and right wheels of the robot while traversing the terrain.

V. CONCLUSION AND FUTURE SCOPE
Rapid developments in the fields of robotics and artificial intelligence have led to significant deployment of robots across a plethora of applications. A major challenge in the field of autonomous robotics is navigation and mapping in indoor environments that involve humans. Such scenarios require autonomous robots to ensure the safety and comfort of humans as they traverse. This article presented an approach to mapping unfamiliar terrains with mobile obstacles while addressing social norms.
The approach involved an autonomous robot housing a stereo-vision camera and ultrasonic rangefinder sensors to identify and avoid different elements in the environment. The robot utilized an artificial neural network trained with an adaptive squashing function to navigate the entire terrain while simultaneously creating a map of the environment. The proposed approach was compared with exhaustive search and a random approach over fifteen different scenarios with different initial positions. In the simple terrain, simulation results showed the exhaustive search algorithm ahead, covering 100% of the terrain in 850 iterations, while the proposed technique took more than 1000 iterations. However, in complex environments the proposed approach outperforms, taking 1500 iterations to cover the entire terrain, compared to the exhaustive search covering only 93% of the terrain in 3500 iterations and the random approach covering only 68% in 4700 iterations. Further, the proposed approach was tested in real-world settings to create maps in a dynamic environment populated with humans.
In future work, we plan to test the performance of the proposed approach in outdoor public spaces with enhanced hardware and to utilize different optimization algorithms to further improve performance.

DIVNEET S. KAPOOR received the B.E., M.E., and Ph.D. degrees in electronics and communication engineering from the Thapar Institute of Engineering & Technology, Patiala, Punjab, in 2009, 2011, and 2021, respectively. He is currently working as an Assistant Professor at Chandigarh University. He has published articles in various national and international journals, including peer-reviewed and impact-factor journals. He has also filed more than 12 patents in the Indian patent office. His current interests include the Internet of Things, embedded systems, robotics, and signal processing.
KHUSHAL THAKUR received the master's degree from Jay Pee University and the Doctorate degree from Chandigarh University, India. He is currently the Assistant Dean of Chandigarh University. His area of research during his Ph.D. work was MIMO precoding systems. He is also working on various funded projects. His research interests include wireless and the IoT.
ANSHUL SHARMA received the master's degree from Punjab University and the Doctorate degree from Chandigarh University, India. He is currently the Assistant Dean of Chandigarh University. His area of research during his Ph.D. work was fire management systems. His research interests include robotics and the IoT. He has filed multiple patents in the Indian patent office and is working on various funded projects.
ANAND NAYYAR received the Ph.D. degree in computer science from Desh Bhagat University, in 2017, in the area of wireless sensor networks, swarm intelligence, and network simulation. He is currently working with the School of Computer Science, Duy Tan University, Da Nang, Vietnam, as an Assistant Professor, a Scientist, the Vice Chairperson (Research), and the Director of the IoT and Intelligent Systems Laboratory. He is a certified professional with more than 125 professional certificates from CISCO, Microsoft, Amazon, EC-Council, Oracle, Google, Beingcert, EXIN, GAQM, and Cyberoam. He has published more than 140 research articles in various high-quality ISI-SCI/SCIE/SSCI impact-factor and Scopus/ESCI-indexed journals, more than 70 papers in international conferences indexed with Springer, IEEE, and the ACM digital library, and more than 40 book chapters in various Scopus and Web of Science indexed books with Springer, CRC Press, Wiley, IET, and Elsevier, with more than 7800 citations, an H-index of 46, and an I-index of 160. He is a member of more than 60 associations as a senior member and a life member, including IEEE and ACM. He has authored, coauthored, or edited more than 40 books in computer science. He has been associated with more than 500 international conferences as a programme committee member, chair, advisory board member, or review board member. He has 18 Australian patents, four German patents, two Japanese patents, 11 Indian design cum utility patents, one USA patent, three Indian copyrights, and two Canadian copyrights to his credit in the areas of wireless communications, artificial intelligence, cloud computing, IoT, and image processing. He has received 38 awards for teaching and research, including Young Scientist, Best Scientist, the Young Researcher Award, the Outstanding Researcher Award, Excellence in Teaching, and a Best Senior Scientist Award. He is listed in the top 2% of scientists as per Stanford