The Robotarium: A Remotely-Accessible, Multi-Robot Testbed for Control Research and Education

In robotics research and education, the cost in terms of money, expertise, and time required to instantiate and maintain robotic testbeds can prevent researchers and educators from including hardware-based experimentation in their laboratories and classrooms. As a result, robotic algorithms are often validated in low-fidelity simulation, since high-fidelity simulators are complex and computationally demanding. Unfortunately, these simulation environments often neglect real-world complexities, such as wheel slip, actuator dynamics, computation time, communication delays, and sensor noise. The Robotarium provides a solution to these problems by providing a state-of-the-art, multi-robot research facility to everyone around the world, free of charge, for academic and educational purposes. This paper discusses the remote usage of the testbed since its opening in 2017, details the testbed's design, and provides a brief tutorial on how to use it.


I. INTRODUCTION
The steady improvement of technology pertaining to computation, sensing, communication, actuation, and power storage has led to the creation of many relatively low-cost, small robotic platforms capable of acting in collectives [1], [2], [3]. Even with this trend, many multi-robot control algorithms are still validated in simulation instead of on hardware. This suggests that, beyond cost, factors like the expertise and time required to develop and maintain experimental testbeds hinder robotics researchers and educators from evaluating their algorithms on hardware. Even if all researchers could build their own multi-robot research facilities, the result would be a large duplication of effort and a variety of hardware differences that would make replication of research difficult, if not impossible.
As an alternative to hardware experimentation, high-fidelity simulators provide an inexpensive and accessible way to validate robotic algorithms [4], [5], [6], [7], [8]. These advanced simulation environments allow for the inclusion of many real-world complexities (e.g., actuator dynamics, sensor noise, collision, communication dynamics). However, mathematical models of these complexities must be developed and provided to the simulators. As George Box once said, "All models are wrong, but some are useful" [9]; that is, the simulated validation of a robotic algorithm is only as reliable as the models approximating the complex physics of the real world. Additionally, the computational requirements of high-fidelity simulation increase with the number of simulated agents and interactions. This computational burden can lead researchers and educators to validate their algorithms using simplified models or physics engines, which can neglect real-world complexities altogether (e.g., wheel slip, friction, computation time, actuator dynamics, sensor noise). Thus, while simulation provides an inexpensive way to validate algorithms, its reliance on accurate mathematical models and the high computational cost of approximating the complex physics of the real world make hardware experimentation a more desirable validation method.
Remotely-accessible testbeds like the Robotarium [10], [11], [12] bridge this gap, combining the availability of simulation with the realistic complexities of hardware experimentation. The Robotarium is a remotely-accessible, multi-robot testbed available to everyone around the world, free of charge, for academic research or educational purposes. The mission of the Robotarium is to democratize robotics by providing remote access to a state-of-the-art multi-robot research facility. Beyond democratization, the testbed serves as a starting point for standardized experimentation, where researchers can deploy algorithms on the same hardware to directly compare results, as well as hardware-based remote education, where students may enhance their education through assignments that are evaluated on a real robotic system rather than a simulated one.
Previously published work on the Robotarium discusses the original prototype design [12], the impact and implications of its usage [10], and the technical development required to automate the platform and reduce its failure modes to keep up with growing worldwide demand [11]. This paper, in contrast, fully details and justifies the design choices behind the components that make up the testbed and provides a detailed tutorial on how to use it, with the goal of providing a model from which future remote-access testbeds can take inspiration. The paper is organized as follows: Section II presents a brief discussion of the past and current usage statistics of the platform to support the design choices and justifications provided in Section III, before a tutorial on how to use the platform is given in Section IV.

II. THE DEMAND FOR AND DIVERSE USAGE OF THE ROBOTARIUM
The Robotarium opened to the world in August of 2017, allowing a global community to experimentally validate their algorithms remotely without developing or maintaining research hardware of their own. Fig. 1 shows the number of experiments executed per month by the Robotarium testbed over its operational lifetime. Initially, remote usage of the testbed was low compared to the present day. The increase in usage of the Robotarium is not due to any significant technological upgrades to the testbed over time; the simple reason for the low initial demand is that the world did not know the Robotarium existed when it first became accessible. Thus, in the first few months, it was necessary to advertise the testbed to the world at large in order to create a user base. After this advertising generated an initial user base, demand for the Robotarium has continued to grow organically. It is not clear whether this growth is driven by word of mouth, publications citing the testbed, or other means. In any case, as of July 2022, the Robotarium has 2,215 registered users who have submitted more than 10,000 successfully performed experiments.
The number of experiments run on the Robotarium per month has trended upward over time, with periodic spikes and dips. The spikes in demand occur before major robotics conference deadlines and when classes leverage the platform for homework assignments or projects, as indicated by the colored markers in Fig. 1. The dips often correspond with academic recess periods and major world events like the COVID-19 pandemic. This growing yet inconsistent demand could not be sustained by human operators. Each experiment takes approximately 10 minutes to execute, meaning every 100 experiments require approximately 16.7 hours of continuous operation. This estimate does not include the additional time required for robots to idle and charge after running for long durations or the time required when errors occur that need human intervention to resolve. Thus, the growing experimental demand has required automating the Robotarium so that it can run experiments without human supervision while minimizing failure modes, as described in [11].
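The operator-load estimate above is simple arithmetic; the sketch below (illustrative only, not part of any Robotarium software) makes the calculation explicit:

```python
# Back-of-the-envelope operator load for a batch of experiments.
# The 10-minute figure comes from the text; charging and
# error-recovery time are deliberately excluded.
MINUTES_PER_EXPERIMENT = 10

def hours_of_operation(n_experiments):
    """Continuous run time for a batch of experiments, in hours."""
    return n_experiments * MINUTES_PER_EXPERIMENT / 60
```

For 100 experiments this yields 1000 minutes of continuous operation, before any charging or error handling is accounted for.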
The submissions to the Robotarium come from a geographically diverse user base, as shown in Fig. 2. While the majority of users are from North America (with approximately 1300 of those users residing in the United States of America), there is still a substantial number of users from Europe and Asia. This is an encouraging sign, as it indicates that remote-access testbeds like the Robotarium can be leveraged globally to democratize access to hardware experimentation and reduce the unnecessary duplication of effort in creating such testbeds around the world. However, this diversity comes with additional challenges. Primarily, users from different areas around the globe have different native languages. As a minor problem, this sometimes causes experiments to error when submissions use characters that the Robotarium's local encoder cannot translate. More importantly, while many users have been able to use the Robotarium, the user demographics suggest an overwhelming majority of users come from English-speaking countries or countries where English is often a second language. This suggests effort is needed to produce guides and information about remote-access testbeds in multiple languages to promote global usage.
The Robotarium was originally designed as a multi-agent and swarm robotics research platform capable of performing experiments that support the diverse research application spaces of its users. This original design choice is partially responsible for its experimental usage being driven mainly by various multi-robot research topics, as shown in Fig. 3. These research submissions have resulted in 88 peer-reviewed publications, whose research areas are shown in Fig. 4, that use the Robotarium to experimentally validate their algorithms. Through its years of use, the testbed has supported research fields like multi-agent model predictive control [13] and networked control theory [14], as well as emerging fields like distributed, multi-agent machine learning [15]. An unexpected consequence of providing this type of remote-access research testbed to anyone has been demand from research fields that deal with similar multi-agent problems, such as entomologists testing insect-inspired algorithms that produce ensemble dynamics observed in nature. Beyond this, demand for educational support has also grown substantially, unveiling an opportunity to expand remotely-accessible hardware into classrooms and potentially enhance Massive Open Online Courses through the inclusion of remotely-accessible, hardware-based lab components.

III. THE ROBOTARIUM DESIGN
The Robotarium design is not focused on a specific use case, like autonomous driving or coverage control, but instead is built as a sandbox where users can deploy a wide variety of controllers and algorithms for a menagerie of applications. This section will describe the system architecture, hardware components, and software infrastructure that enables the Robotarium to support experiment submissions from a diverse user base with unknown goals.

A. SYSTEM ARCHITECTURE
A block diagram representation of the overall system architecture of the Robotarium is pictured in Fig. 5. The individual components of this architecture have been modified and upgraded over the lifetime of the Robotarium, but the overall structure has remained fairly consistent since its initial tabletop instantiation [12]. While this subsection focuses on the specific system architecture choices for the Robotarium, it should be noted that the general structure presented can be leveraged for other remotely-accessible testbeds using different hardware.
On the user side of the Robotarium system architecture are the front-end of the Robotarium website [16], the e-mail account remote users provide when creating a profile with the Robotarium, and a simulation application program interface (API) in MATLAB or Python [17], [18]. These user-facing elements of the system architecture satisfy three main objectives: 1) provide an intuitive interface for users to submit experiments and access the results of previously completed experiments; 2) enable the system to autonomously notify remote users about the status of their experiments and allow users to easily communicate with Robotarium operators; 3) coalesce information about the Robotarium and guides on how to use it into a centralized hub. The front-end of the Robotarium website is the main bridge between remote users and the testbed. It serves as a centralized location for potential remote users to learn about the Robotarium. Beyond the general information about the testbed presented on its website's landing page, pictured in Fig. 6, the web portal also provides guides on getting started with the platform, answers to frequently asked questions, tutorials, and general news updates. For current users, the website serves as an intuitive interface to submit new experiments, retrieve data from previously executed experiments, and review old submissions.
Straightforward channels of communication between remote users and the Robotarium are necessary to keep its user base informed about updates to the system, allow user questions to be asked and answered by Robotarium developers quickly, and provide an avenue for users to receive automated updates about their experiment's status. To do this, the Robotarium leverages e-mail (provided by the users when they create an account on the website) and dedicated pages on the website. Autonomous e-mail notifications about changes in the status of a user's submitted experiment (e.g. it has completed successfully, the verification returned an error, etc.) are sent directly to them. Users may also be contacted directly through e-mail by Robotarium developers to answer questions or alert them to specific experiment submission problems not well explained by autonomous messages. When questions are asked frequently or a status update about the Robotarium should be broadcast to the entire community, the information is posted to the website. Posting answers to questions and general status updates about the Robotarium to the website reduces e-mail spam to users and provides an archive of information that newer users can access at a later date.
The final user-facing piece of the Robotarium system architecture is the simulation API. This lightweight, low-fidelity simulator enables users to locally prototype their control algorithms and estimate their experiment's performance before submitting it to be run on the Robotarium's hardware. The simulation is purposefully designed to be simple and low-fidelity to lower the local computation needed to run it. In addition, users can rapidly adjust their submissions using feedback from experiments that operate in the reality (i.e., with the Robotarium hardware) that high-fidelity simulation engines like Gazebo [6], V-Rep [4], or NVIDIA Isaac Sim [5] attempt to emulate.
The system architecture for the local side of the Robotarium testbed is fairly similar to traditional multi-robot testbeds developed solely for local use. The central computational units (represented as a single server block in Fig. 5) receive robot global pose information from a tracking system, communicate with the robots on the testbed via Wi-Fi, and interface with virtualization hardware (e.g. projector) to create augmented reality environments.
Unlike local testbeds, the Robotarium also has to handle remote user data and experiment scheduling. Additionally, the nominal performance of remote users' submitted experiments is unknown and potentially hazardous. These complications require the expansion of the system architecture to include components that manage remote users' data, guarantee hardware safety (i.e., prevent robot collisions), and keep up with global demand by reducing the amount of human intervention required to diagnose and recover from failures.
To manage user account data as well as the associated experimental data, the Robotarium uses a database managed with MongoDB (https://www.mongodb.com/). Since submitted code cannot be inherently guaranteed to run safely or without software execution errors, a simulation-based code-validation layer is added to the system architecture as a filter to catch potentially dangerous or improperly formatted code. When experiments are run on the testbed, a local simulation API, different from the user-facing simulation API, is used to handle experimental conditions that are not applicable to the pure simulation environment (e.g., robot communication, error handling, visualizations). To guarantee safety when unknown experiments are deployed on the Robotarium hardware, the communication and computation topologies are centralized, meaning the central control unit is responsible for processing the tracking information, generating all the control commands, and routing communications to all the robots.
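As a concrete illustration of what a simulation-based validation layer can check, the sketch below flags inter-robot collisions and out-of-workspace excursions in a simulated trajectory. The function names, separation threshold, and workspace bounds are illustrative assumptions, not the Robotarium's actual implementation:

```python
import numpy as np

# Illustrative thresholds -- the Robotarium's actual limits may differ.
MIN_SEPARATION = 0.12          # meters, roughly a robot footprint
X_BOUND, Y_BOUND = 1.5, 1.0    # half-extents of a 3 x 2 m workspace

def violates_safety(poses):
    """Check one simulated time step of N planar robot positions (N x 2).

    Returns True if any pair of robots is closer than MIN_SEPARATION
    or any robot has left the workspace.
    """
    poses = np.asarray(poses, dtype=float)
    # Out-of-workspace check
    if np.any(np.abs(poses[:, 0]) > X_BOUND) or np.any(np.abs(poses[:, 1]) > Y_BOUND):
        return True
    # Pairwise inter-robot distance check
    diffs = poses[:, None, :] - poses[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore self-distance
    return bool(np.any(dists < MIN_SEPARATION))

def validate_trajectory(trajectory):
    """Reject a simulated experiment if any time step is unsafe."""
    return all(not violates_safety(step) for step in trajectory)
```

Here a trajectory is simply a sequence of time steps, each an N x 2 array of planar positions produced by the simulator; a rejection at any step would trigger the automated e-mail described above.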

B. HARDWARE INFRASTRUCTURE
The testbed and infrastructure supporting it, partially pictured in Fig. 7, is housed within a converted classroom at the Georgia Institute of Technology. The custom testbed is 4.25 × 3.65 meters with a matte, non-reflective white surface. This surface is chosen so that backscatter from the infrared light emitted by the motion capture system is minimized and images placed on the testbed by a projector are visible without reflections. A hollow cavity, slightly higher than the surface of the workspace, surrounds the testbed, extending 0.5 meters into the workspace. This cavity provides room for the necessary data and power cables to be routed within the testbed so they are not trampled or unplugged by operators or visitors in the room. Currently, the surrounding ledge contains power cables that supply the Qi inductive chargers affixed to the walls of the testbed. These chargers enable safe, autonomous power management of the robots on the Robotarium. The workspace where user-submitted scripts can control robots is a 3 × 2 meter subset of the remaining space. This leaves a 0.5-meter-wide buffer zone for idle robots to charge safely along the longer walls of the testbed while remote experiments execute.
Currently, the Robotarium has 22 custom differential drive robots, pictured in Fig. 8, that may be controlled by remote users. The custom robot, named the GRITSBot X, is a relatively small platform with a footprint of 11 × 8.5 × 7.5 cm (width, length, height). The platform is designed with a few criteria in mind: 1) it is constructed mainly from inexpensive, off-the-shelf components; 2) it is capable of autonomously managing its power consumption and safely recharging itself; 3) it is manufactured using standardized in-house techniques to minimize inter-robot variance; 4) actuators and sensors that are likely to fail from natural degradation caused by frequent use or unexpected collisions are easily and quickly replaced. Meeting these design requirements has reduced the time required by human operators to maintain the GRITSBot X robots that have been operating on the testbed over the past three years. Additionally, the criteria pertaining to standardized in-house manufacturing homogenize the produced platforms as much as possible. That is, the open-loop motion or sensor bias of two different robots is not so different that the same experiment executed on different robots will yield drastically different results. This enables faithful low-fidelity simulation of the robot behavior for remote users. Further details about the GRITSBot X can be found in [11]. For its tracking system, the Robotarium leverages a Vicon Vantage V16 tracking system (https://www.vicon.com/hardware/cameras/vantage/). Originally, the Robotarium used a visual tracking system consisting of ArUco tags [19] and a USB camera. However, as the number of agents deployed on the testbed increased, so did the computational requirements for visual tracking. Beyond this, the tracking camera would often require recalibration throughout the day due to changing ambient light conditions, or lose track of individual robots due to obstruction or reflections that would mask the tracking tags.
To alleviate these issues, the visual camera tracking system was replaced by the Vicon motion capture system to track the robots more reliably with a higher frequency and accuracy than the visual camera could.
Above the center of the Robotarium testbed is a 2-megapixel ELP mini-IP camera (a product that has since been discontinued), used for capturing overhead video of experiments. The camera's position requires a cable of about 10 meters to reach the workstation responsible for recording videos. With this length in mind, the camera needed to transfer data over an Ethernet cable, as USB cables are not designed to transfer data over long distances without data loss [20].
Beside the camera above the Robotarium testbed is an Optoma EH412ST Short Throw 1080P HDR Professional Projector (https://www.optomausa.com/productdetails/eh412st). This projector was chosen for two reasons: first, its short throw distance allows the entire user workspace of the testbed to be covered by the projector when it is only 8 feet above the testbed; second, its lumen rating enables the Robotarium to maintain vivid projections around the clock, whether the room is lit by the ambient light of the sun or in the middle of the night. This projector allows users to create augmented reality environments for the robots to operate in and to visually provide more context to their experiments, such as details about the experiment, labels near the robots, or any other information the user would like to visualize. A screenshot from the overhead camera of an example experiment displaying a multitude of information with the projector can be seen in Fig. 9.
Finally, the computational components of the system architecture are made up of a set of three Dell PowerEdge R730 servers and a Dell XPS 8750 desktop workstation (represented as a single server block in Fig. 5). These computational units interface with the testbed hardware, run the required simulation APIs, and host the user management software required to operate the testbed. The distribution of components across physically different computational units is due to the diversity of requirements of each functional component. One server is outfitted with a large array of RAID-partitioned hard drives running a Red Hat Linux operating system (OS) to store user data, associated experimental results, and recorded videos. The second server hosts the Vicon Tracker software, which requires a Windows Server OS to run. The third server runs the back-end of the Robotarium website. Finally, the desktop workstation acts as the central entity responsible for experiment execution and management, and for connecting the required components during the different phases of experiment execution. A desktop computer is chosen over a server for this centralized role to facilitate the developer interfacing that is often required to manage the testbed, update automation software, and resolve errors.

C. SOFTWARE ARCHITECTURE
The software architecture of the Robotarium is designed around reducing the human operator effort required to run experiments and increasing the reproducibility of the code-execution environment in which remote user submissions are deployed.
To accomplish this, nearly all the software operations of the Robotarium are performed within Docker (https://www.docker.com) containers. These containers, analogous to lightweight virtual machines, ensure the code-execution environment is the same for each user, as containers may be automatically removed and restored between submissions. This removal and restoration deletes any software modifications made by previous users and provides the original, unaltered execution environment once restored. Beyond this, Docker also allows containers to be ported between different computational units, including the robots themselves. This benefit has been leveraged to maintain and update the firmware for all the robots on the testbed easily and quickly.
At all times, a verification container is running to check for any problems with submitted user scripts before they are deployed on the Robotarium. First, a supervising container pulls the oldest submitted user experiment from the database, starts a container developed for simulating remote experiments, and copies the necessary files into that newly started simulation container. The purpose of the verification software is to check whether submitted user code compiles and/or has any problems that merit rejection before it is deployed on the robotic hardware (e.g., inter-robot collisions, long time delays, robots driving out of the workspace). When the verification is completed, the experiment's status is automatically updated in the database. If the user's experiment is rejected, they are sent an e-mail with the reason for the rejection; otherwise, the user's experiment is marked as validated in the database. The verification supervising container and simulation container are then removed and restarted automatically by a Python script, creating a fresh verification environment for the next user submission.
After a user's submission is validated, a series of Docker containers is used to execute the experiment on the hardware of the Robotarium as well. When the system has just been started or a previous experiment cycle has finished, a supervising Docker container is opened autonomously by a Python script. This container is responsible for monitoring the status of the robots on the testbed, managing the experiment data within the database, starting the other Docker containers required for the submitted experiment, and sending status messages to Robotarium users and developers. First, this container pulls the oldest validated experiment from the database. When no problems are detected with the system (e.g., enough charged robots, no robots in the workspace, tracking and communication are working), the supervising container opens a video recording container and a MATLAB or Python code-execution container (depending on the submitted script's language). The submitted experiment is executed; then the video file is copied from the video recording container to the supervisor container, and any data files saved by the user's script are copied from the code-execution container to the supervisor container. The video recording and code-execution containers are removed, and the supervising container uploads the experiment's video file and data to the database. If the file uploads complete successfully, the user is e-mailed that their experiment was completed successfully, the charging container is opened to move all the robots to the charging stations, and all containers, including the supervising container, are removed. A simple Python script monitoring the status of the supervisor container then restarts it, and the next experiment is run in a software environment unaffected by the previous submission.
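The experiment cycle described above can be summarized as a simple state machine. The sketch below is a heavily simplified stand-in: container start/stop, database, and e-mail operations are replaced by log entries, and all names are illustrative rather than the Robotarium's actual code:

```python
from collections import deque

def run_experiment_cycle(validated_queue, system_healthy, log):
    """One pass of the supervising container's cycle (simplified sketch)."""
    if not validated_queue:
        log.append("idle: no validated experiments")
        return None
    if not system_healthy():
        log.append("blocked: system health check failed")
        return None
    experiment = validated_queue.popleft()  # oldest validated submission first
    log.append(f"start video-recording and {experiment['language']} containers")
    log.append(f"execute {experiment['id']} on the testbed")
    log.append("copy video and saved data files to the supervisor")
    log.append("remove worker containers and upload results to the database")
    log.append(f"e-mail user: {experiment['id']} completed")
    log.append("open charging container, then remove the supervisor itself")
    return experiment["id"]
```

An external script would then restart a fresh supervisor, mirroring how each submission runs in an environment unaffected by the previous one.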
This compartmentalized architecture, while intricate, is chosen to ease development and maintenance of the Robotarium. While this experiment execution process could be handled in a single Docker container, creating a set of Docker containers for specific tasks (e.g., recording experimental videos or running MATLAB submissions) reduces the time to build an image when a change is made to a specific task. For example, if an update is made to the Python simulator, the entire system does not need to be rebuilt; only the containers responsible for Python simulation do. Beyond this, if a major component of the system has to be changed, a new container can be built and dropped into the system so long as it has the same expected inputs and outputs as the previous container. For example, if a new camera were introduced to the Robotarium, only the camera container would need to be replaced, and, so long as it creates a video file that can be accessed by the supervising container, no other changes would need to be made to the software architecture.

IV. USING THE ROBOTARIUM
To use the Robotarium, users do not connect to and control the robots on the Robotarium directly, but instead submit premade scripts that will be run without user supervision at a later time. The Robotarium allows control of 1 to 20 custom differential drive robots simultaneously for whatever academic or educational application users desire. The motion of these robots, with state x = [x y φ]^T defined in a global reference frame, is approximated by the unicycle kinematic model

ẋ = v cos(φ),  ẏ = v sin(φ),  φ̇ = w,

where v, w ∈ ℝ represent the unicycle control inputs of linear and angular velocity, respectively. No matter what application the user has in mind, to create an experiment for the Robotarium, an algorithm must be created with a controller that produces a linear velocity (v) and angular velocity (w) control command for each robot used at every time step.
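For intuition, the unicycle kinematic model can be stepped forward in time with simple forward-Euler integration, which is also how a low-fidelity simulator can propagate robot state; this sketch is illustrative and not part of the Robotarium API:

```python
import math

def unicycle_step(x, y, phi, v, w, dt):
    """One forward-Euler step of the unicycle model:
    x' = v cos(phi), y' = v sin(phi), phi' = w."""
    return (x + v * math.cos(phi) * dt,
            y + v * math.sin(phi) * dt,
            phi + w * dt)
```

For example, a robot at the origin facing along the x-axis and commanded with v = 0.1 m/s, w = 0 simply translates along x.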
Users have access to the global state of the robots at every time step to use as feedback for their control algorithms. In simulation, the state of each robot is obtained perfectly, while in reality it is queried from the tracking system. Currently, none of the robots' onboard sensors are available to users. The reason for this is twofold: 1) the onboard odometry sensors (e.g., accelerometer, gyroscope, magnetometer, and wheel encoders) were either never accessed by users (e.g., wheel encoders) or could not be used reliably due to testbed conditions (i.e., metal under the testbed and the various magnetic fluxes present in the Electrical and Computer Engineering building of Georgia Tech, where the Robotarium resides, influence the magnetometer output); 2) within the testbed, there are no physical obstacles besides other robots, making the onboard time-of-flight sensors of little use for sensing the environment. Beyond this, for a multi-robot testbed, the bias present in the inexpensive time-of-flight distance sensors on different robots makes the resulting feedback conditional on the robots used in each experiment. For example, if a user submitted two experiments where the first used robot 1 and the second used robot 5, the resulting behavior would often differ drastically due to the different sensor biases, which cannot be represented faithfully in simulation. Thus, users are required to generate the sensor feedback for their algorithms by degrading the near-perfect tracking information into whatever form they desire. For example, to perform obstacle avoidance, users may use the tracking information of two robots to make them avoid each other, or generate a simulated sensor originating from a robot's tracked position (e.g., LIDAR) that interacts with simulated obstacles also created by the user. A more detailed example of generating sensor feedback for a multi-agent rendezvous algorithm is presented in Section IV-C.
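As an illustration of degrading global tracking information into sensor-like feedback, the sketch below constructs delta-disk neighbor sets from global positions and uses them for one step of a rendezvous (consensus) controller in single-integrator coordinates. The function names, disk radius, and gain are illustrative assumptions, not the Robotarium API:

```python
import numpy as np

def delta_disk_neighbors(poses, delta):
    """Simulated 'sensor' feedback built from global positions:
    robot j is a neighbor of robot i iff their distance is at most delta."""
    poses = np.asarray(poses, dtype=float)
    n = len(poses)
    d = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
    return [np.where((d[i] <= delta) & (np.arange(n) != i))[0].tolist()
            for i in range(n)]

def rendezvous_step(poses, delta, gain=1.0):
    """One consensus step: each robot moves toward the positions of the
    neighbors its simulated delta-disk sensor can 'see'."""
    poses = np.asarray(poses, dtype=float)
    velocities = np.zeros_like(poses)
    for i, nbrs in enumerate(delta_disk_neighbors(poses, delta)):
        for j in nbrs:
            velocities[i] += gain * (poses[j] - poses[i])
    return velocities
```

A robot with no neighbors inside its disk receives a zero velocity command, so isolated robots stay put while connected robots converge toward one another.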
Scripts developed for the Robotarium must contain the connecting tissue that generates the motion of the robots. How many robots are used, what feedback (if any) is used, and the purpose of the algorithm are completely determined by the user.

A. GETTING STARTED WITH THE ROBOTARIUM
Remote users can access and interface with the Robotarium through its website (https://www.robotarium.gatech.edu). Links to the simulation APIs, general information about the testbed, and example tutorials may be accessed right away; however, users must create an account to submit experiments to run on the Robotarium's hardware. This can be done by clicking the "Sign In" button followed by the "Sign Up" tab and filling out the required information on the user profile form pictured in Fig. 10. The requested information aims to answer four questions: 1) Who are you? 2) What e-mail is best to contact you? 3) Where are you from? 4) Why are you using the Robotarium? These questions mainly help the Robotarium team understand user demographics and application areas, which can later inform the development directions of the platform.
After creating an account, users should download the simulation API in their preferred language; APIs are available in MATLAB and Python. The provided software contains a large number of example programs to help users get started. Additionally, a variety of assistive functions are provided that create unicycle control commands for the robots from various abstraction levels, as well as perform frequently desired behaviors like collision avoidance or robot condition checking. The most frequently used of these functions include:
- Transformation functions between single-integrator control inputs and unicycle control inputs.
- Initialization routines that position robots randomly or in a specific formation before an experiment begins.
- Barrier-certificate-based collision avoidance functions [10], [22] that avoid collisions with other robots or with simulated obstacles in the environment.
- Functions that create simulated connections between robots matching graph topologies such as fully connected, cyclic, delta disc, etc.
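As a rough sketch of what a graph-topology helper computes, the following hypothetical Python function (not part of the Robotarium API; name and data layout are assumptions) builds a delta-disc neighbor set, in which two robots are connected exactly when they are within a distance delta of each other:

```python
import numpy as np

def delta_disc_neighbors(positions, delta):
    """Delta-disc topology: robot j is a neighbor of robot i
    iff their planar distance is at most delta.

    positions: 2xN array of robot positions.
    Returns a dict mapping each robot index to a list of neighbor indices.
    """
    n = positions.shape[1]
    neighbors = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and np.linalg.norm(positions[:, i] - positions[:, j]) <= delta:
                neighbors[i].append(j)
    return neighbors
```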

B. THE ROBOTARIUM SUBMISSION PROCESS
After creating a user account and downloading the Robotarium API, users must translate the algorithm they would like to experimentally validate into a set of MATLAB or Python scripts. The performance of the algorithm should first be evaluated locally through the Robotarium simulator. When satisfied with the simulated performance of their algorithm, users upload their main script and any supporting scripts to the Robotarium website. To do this, the remote user logs into their previously created account, clicks the "Submit New Experiment" button, and fills out the submission form. The submitted user code is stored indefinitely in the Robotarium database. Through the web interface, users can delete their experiments at any time, access the code at a later date if they no longer have it locally, easily resubmit experiments without uploading scripts or filling out the submission form again, or request that a Robotarium developer access the code to better assist them. User-submitted code is never accessed or viewed by anyone other than the user without permission.
After the code is submitted, it is placed into a first-come, first-served queue in the Robotarium database. When the user's submission becomes the oldest in the queue and has been verified to execute safely in simulation, it is pulled and run on the Robotarium hardware. When the experiment completes, successfully or unsuccessfully, an overhead video and any data saved by the user's scripts are stored in the database with the submission. The user is alerted through e-mail that their experiment has completed, and they can access the returned data through the Robotarium website.
At this point, the submission cycle is complete. Based on the returned results, users can improve their algorithm and resubmit or move on to developing a new algorithm.

C. EXAMPLE SUBMISSION: RENDEZVOUS
To demonstrate the usage of the Robotarium more concretely, this subsection details the development and submission of a multi-robot rendezvous algorithm. The goal of this algorithm is to have a group of robots meet at the same point without direct communication, the use of global state information, or a priori information. The rendezvous algorithm is an implementation of a decentralized, nearest-neighbor averaging approach known as the consensus equation [23], [24], [25].
Consider a collective of point robots, each with state x = [x y]^T defined in a global reference frame, whose velocity can be directly controlled: ẋ = u. For consensus to be achieved, each agent's control input u_i should be generated by

u_i = Σ_{j ∈ N_i} (x_j − x_i),    (2)

where N_i is the set of robots whose position (relative or absolute) can be measured by robot i. To test the performance of this algorithm on the Robotarium, a script must be created with the downloaded API. Fig. 11 shows the rendezvous algorithm implemented in a script using the Robotarium MATLAB API. The controller described in (2) is implemented in lines 47-64. These lines loop through all the robots on the testbed, determine their neighbors, then calculate a desired single-integrator velocity for each robot based on the positions of its neighbors.
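The consensus controller of (2) fits in a few lines of code; the following is an illustrative NumPy version (the function name and data layout are assumptions), not the Fig. 11 script itself:

```python
import numpy as np

def consensus_step(positions, neighbors):
    """One evaluation of the consensus (rendezvous) controller (2):
    u_i = sum over j in N_i of (x_j - x_i).

    positions: 2xN array of robot positions.
    neighbors: dict mapping robot index i to the list of indices in N_i.
    Returns a 2xN array of single-integrator velocity commands.
    """
    u = np.zeros_like(positions)
    for i, nbrs in neighbors.items():
        for j in nbrs:
            # Each neighbor pulls robot i toward itself
            u[:, i] += positions[:, j] - positions[:, i]
    return u
```

Iterating this update drives all connected robots toward a common point, which is exactly why the raw controller is unsafe on hardware without collision avoidance.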
There are a few practical considerations and Robotarium functionalities worth noting in this script. First, the output of (2) is a single-integrator velocity, which cannot be directly provided to the robots of the Robotarium, which require a unicycle velocity input as described in (1). Line 23 creates a function that outputs unicycle control commands from single-integrator control commands using a near-identity diffeomorphism [10], [26], [27]. This function is used in line 78 to transform the single-integrator velocity command produced by (2) in line 62 into a unicycle velocity that approximates the single-integrator motion and can be executed by the robots. Now that the velocity is in a form usable by the Robotarium, the user should consider whether the proposed control inputs are guaranteed to be safe (i.e., the robots do not collide with each other or run off the testbed into the walls/charging robots). The proposed controller causes all the robots to collide by design (i.e., the algorithm succeeds when all robots meet at the same point) and thus is not safe to run on robotic hardware. However, the Robotarium provides users with barrier certificate functions that enable minimally invasive collision avoidance [22], [28], [29]. These functions, whose implementation is described in [10], find the minimum difference between a user's control command and a safe control command, where safety is defined as a function of the distances between robots. Line 25 creates the function that computes the safe control input to avoid inter-robot collisions and keep the robots within the testbed. Line 80 computes the safe inputs given the desired velocities and states of all the robots. This guaranteed-safe control input is then sent to the robots in line 85. Finally, line 102 calls a Robotarium function that outputs any detected errors that would cause rejection of the submission, such as simulated robots being too close or leaving the testbed.
Fig. 12 shows the results of the developed rendezvous script when run in the Robotarium simulation API. Fig. 12(a) and (b) show screenshots of the simulation visualization at the beginning and end of the simulation, where the simulated robots are represented by cartoon squares. Fig. 12(c) shows the paths the robots took in the simulation, while Fig. 15(a) shows the distance of each robot to its closest neighbor over the duration of the experiment. The red dotted line in Fig. 15(a) represents the minimum safe distance allowed by the barrier certificate functions used in the script. These results suggest the simulation is performing as expected: the robots cluster together, attempting to reach the same position while remaining a safe distance from each other. The next step is to submit this script to be run on the Robotarium. This is done by navigating to the website, logging in, and clicking the "Submit New Experiment" button, which brings the user to an experiment submission form. Fig. 13 shows a screenshot of the submission form filled out for the developed rendezvous experiment. After hitting submit, the user waits until they receive an automated e-mail from the testbed indicating that their experiment has been completed.
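For readers curious how the single-integrator-to-unicycle transformation works, the following is a minimal sketch of a near-identity diffeomorphism that steers a look-ahead point at distance l ahead of each robot (the value of l and the function name are assumptions; the Robotarium API provides its own implementation of the mapping described in [10], [26], [27]):

```python
import numpy as np

def si_to_uni(u_si, poses, l=0.05):
    """Map single-integrator velocities to unicycle (v, omega) commands
    by controlling a point a distance l ahead of each robot.

    u_si:  2xN desired planar velocities.
    poses: 3xN robot poses [x; y; theta].
    Returns a 2xN array of [v; omega] unicycle commands.
    """
    theta = poses[2, :]
    # Rotate the desired velocity into each robot's body frame
    v = np.cos(theta) * u_si[0, :] + np.sin(theta) * u_si[1, :]
    omega = (-np.sin(theta) * u_si[0, :] + np.cos(theta) * u_si[1, :]) / l
    return np.vstack((v, omega))
```

Smaller values of l make the unicycle track the single-integrator motion more aggressively, at the cost of larger angular velocity commands.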
After receiving the e-mail notification that their submission is complete, users should return to the Robotarium website to view the results of their experiment. Fig. 14 shows the returned results of the developed rendezvous script when deployed on the Robotarium testbed. Fig. 14(a) and (b) show screenshots from the returned video at the beginning and end of the experiment. Fig. 14(c) shows the paths the robots took in the experiment, while Fig. 15(c) shows the distance of each robot to its closest neighbor. The red dotted line in Fig. 15(c) represents the minimum safe distance allowed by the barrier certificate functions used in the script, just as in the simulation results.
The experimental results show the same trends as the simulation results, providing experimental validation of the proposed implementation of the rendezvous algorithm. However, when comparing the resulting paths (Figs. 12(c) and 14(c)) and inter-robot distances (Fig. 15), the impact of real-world complexities is apparent. The paths of the robots in simulation are smooth and continuous, while the paths in the experiment become more erratic as the robots approach each other, simultaneously attempting to remain at a safe distance and to get as close to each other as possible. Similarly, the nearest-neighbor plots (Fig. 15) are smooth and settle in simulation but are noisy and rattle within an envelope in the experiment. This is due to real-world intricacies, like motor dynamics, tracking error, and communication delay, that are not modeled in the low-fidelity simulation.
This example submission shows the power of the Robotarium: the low-fidelity simulation can be leveraged to quickly prototype ideas that are then easily deployed and validated on robotic hardware. In general, the Robotarium's simple simulation environment can be used to gather statistically significant results from a large number of numerical simulations (run faster than real time), which are then validated by a smaller number of experiments returned from the testbed.

V. CONCLUSION
The Robotarium represents a successful implementation of remote-access hardware that supports multi-robot research and robotic education. Its usage over the past five years indicates a desire for remotely accessible testbeds that support a diverse user base of researchers and educators. The platform is being used by control theorists, roboticists, and researchers from other, sometimes unexpected, disciplines, like entomology, to validate their proposed algorithms against often-overlooked real-world complications. The testbed is also used heavily in the classroom, supporting the education of future controls engineers and roboticists with hardware experience they might not otherwise have access to. This paper has discussed the usage statistics and design of the Robotarium and provided a brief tutorial on how to access and use the testbed. The Robotarium will continue to improve based on user feedback, potentially expanding to incorporate other platforms such as quadrotors and robotic arms. Hopefully, the Robotarium may serve as a model for the development of additional remote-access testbeds that promote global access to research-quality hardware, accelerating research and enhancing education.