A Multi-Arm Robotic Platform for Scientific Exploration

There are a large number of robotic platforms with two or more arms targeting surgical applications. Despite that, very few groups have employed such platforms for scientific exploration. Possible applications of a multi-arm platform in scientific exploration include the study of the mechanisms of intractable diseases using organoids (i.e., miniature human organs). The study of organoids requires the preparation of a cranial window, which is done by carefully removing an 8 mm patch of the skull of a mouse. In this work, we present the first prototype of our AI-robot science platform for scientific experimentation and its digital twins, and perform validation experiments under teleoperation. The experiments showcase the dexterity of the platform through peg transfer, gauze cutting, mock experiments using eggs, and the world's first four-hand teleoperated drilling for a cranial window. The digital twins and related control software are freely available for noncommercial use at https://AISciencePlatform.github.io.


I. INTRODUCTION
The collaboration among humans and robots is a reality in many fields of industry, science, and medicine. The robotics community has made much progress in the direction of making that collaboration safe and meaningful. We often rely on robots for tasks that might be beyond the limit of human accuracy or endurance, and on humans for decision making and higher-level recognition.
This development has been further empowered by the recent resurgence of data-driven deep learning [1] (usually referred to as artificial intelligence, AI), which has been undoubtedly influential in robotics research since at least 2015 [2]. The success of these technologies has sparked society's interest in the impact of future AI-embedded robots (henceforth AI-robots), further rekindled by the COVID-19 pandemic [3]. This includes ethical concerns regarding the accountability of these intelligent systems, clear definitions of levels of autonomy [4], and so on.
In line with the views of other groups regarding the next frontiers in robotics [2], [3], we see a future in which AI-robots conduct experiments in challenging environments while interacting with scientists as peer scientists. This is part of the ambitious Moonshot Research & Development Program of the Cabinet Office of Japan, which aims to have, by 2050 and beyond, AI-robots that autonomously learn, adapt to their environment, evolve in intelligence, and act alongside human beings.
To achieve that long-term goal, our project is divided into many fronts. In one of those, discussed herein, we are interested in developing an experimental platform that will serve to conduct robotic experiments and scientific discovery. Indeed, one can think of the AI as the brain, whereas the robotic platform is the body. In this work, we present the first iteration of the body of the AI-robot science platform we envision, together with elements that make up one layer of the brain, i.e., the kinematic-level control algorithms and human-machine interfaces. The multi-arm platform has several novel elements regarding hardware and software. We also developed two matching digital twins, which we make openly available.
Our novel robotic platform has been designed to study autonomous scientific experiments on animals and plants. Unlike the artificial objects for which industrial robots have commonly been used, organisms have considerable individual variation and heterogeneity of mechanical properties. This often implies that the experiments are composed of unpredictable and unique elements necessitating manual operation by skilled operators, and are consequently difficult to automate. This exposes personnel to health risks, such as infectious diseases. Therefore, one can expect that automation would enable safer applications and open up other challenging tasks in unwelcoming environments, e.g., space.
In this context, the manipulation targets can vary between several hundred micrometers and several millimeters in size. Because they often require high magnification for observation, the field of view is extremely narrow and conventional surgical robotic technologies cannot be applied as-is. In addition, conventional automation has often been studied with the aim of reproducing human movements. However, there can be a robotic configuration that makes it easier for a robot system to conduct the task, even if that configuration is difficult for a human to teleoperate. Therefore, in our prototype, the manipulator robots' bases are placed on a circular rail, allowing each robot arm to autonomously position its base as part of the kinematic structure.
In effect, we developed a platform for research in which the robot system can be used to learn skills through teleoperation performed by human operator(s) and gradually evolve into a more robot-centric approach, increasing its degree of autonomy. We do not believe that the robot's trajectory itself is directly related to skills; rather, skills can be found in the interaction between the robot and the object. Thus, the robots do not need to move in a similar way to humans, as long as the task is completed with sufficient quality.
The proposed system is equipped with four arms with different characteristics; the choice of the number of arms is tentative, and a main goal is for the system to have a sense of "self". In the future, we expect it to be able to autonomously choose the best combination of robot arms and tools for a given task. In this way, the platform has been designed to be scalable, to accelerate research into autonomous non-repetitive tasks on small natural objects.
In order to have a concrete aim, our initial prototype has been inspired by scientific discovery related to intravital imaging [5]. Intravital imaging refers to images taken while the organism is alive, through a type of window. One of the applications of this imaging technique is to study the growth of human cells [6]. This paper reports that the platform is now able to remotely operate on minute objects and collect training datasets to study skills, while automatically avoiding collisions outside of its extremely narrow field of view. As a proof-of-concept, we perform experiments under teleoperation, namely a Tokyo-Kyoto teleoperated mock experiment using eggs during the IROS'22 Exhibition, peg transfer, gauze cutting, and the world's first four-hand teleoperated drilling for a cranial window, by users with no prior surgical experience.

II. RELATED WORKS
In recent years, teleoperated robotic platforms have gained the scientific community's attention thanks to the emergence of new applications in several fields, especially in surgical robotics [7], where surgeons exploit the robot's dexterity to perform challenging medical procedures.
Despite the high level of accuracy of the robotic platforms, the quality of teleoperated robot-assisted surgery (RAS) often relies on the surgeon's skills and experience, which are subject to human imprecision [8].
Aiming for greater robot autonomy, several platforms have been developed for surgical robotics research, for instance the dVRK [9] and the RAVEN [10]. The former is based on first-generation da Vinci robot hardware and the latter is based on custom open hardware. Both systems are based on cable-driven mechanisms and, unlike serial manipulators, they often require additional strategies to compensate for the inaccuracies related to slack and stretch of the cables [11], [12]. Because of that, several platforms based on industrial serial manipulators have been proposed [13]-[15].
Schwaner et al. [16] presented MOPS, an open platform for surgical robotics research based on serial manipulators. MOPS has high modularity, uses commercially available robot arms, and allows scalability with custom tools. In addition to all these features, our platform includes a novel hardware design, which is oriented toward dexterous cooperative task manipulation and targets general experiments for scientific exploration.

III. ROBOTIC SYSTEM OVERVIEW
The proposed robotic system is shown in Fig. 1. The whole system is enclosed by a frame. Our proposed novel design is based on a circular rail to which each robotic branch is attached. Each branch has a servomotor (283867, MAXON, Switzerland) to independently rotate about the rail; there is no mechanical limitation for the motion of each branch about the rail. Serially, each branch has a linear actuator that moves back-and-forth in the direction of the center of the rail (RCP6, AIA, Japan).
Built upon this novel multi-arm mechanism, the system has four robot branches. Two branches are composed of a collaborative-type robot (CVR038, Densowave, Japan) attached serially. The remaining two branches are composed of the serial connection of an industrial-type robot (VS050, Densowave, Japan). By doing so, two robotic branches have 8 degrees of freedom and the remaining two branches have 9 degrees of freedom.
Aiming at a large set of possible applications, including the cranial window, we have four customized end-effectors. One is a customized micro-drill based on a commercial micro-drill (MD1200, Braintree Scientific, USA). Another is a customized grasper for cotton swabs. The last two are customized actuators based on a rotary gripper module (EHMD-40-RE-GE, FESTO, Germany) to operate tweezers (15-416, BRC, Japan) and scissors (S7, Tsubasa Kougyo, Japan) that are commonly used in cranial window procedures.
Vision of the workspace is provided by six 4K cameras (STC-HD853HDMI, Omron-Sentech, Japan). Four of these cameras are placed around the workspace, directed at the center, to provide an overall understanding of the state of the robotic system using wide field-of-view lenses (VS-LDA4, VS Technology, Japan). Another camera is used to provide vision for the cranial window task using small field-of-view lenses (VS-LDA75, VS Technology, Japan). The last camera provides a more balanced field of view, for tasks such as those included in the fundamentals of laparoscopic surgery (FLS).

A. The fundamentals of laparoscopic surgery
The fundamentals of laparoscopic surgery (FLS) is a program to educate and assess surgeons in laparoscopy. One part of it comprises a training/assessment kit containing a number of tasks that aim to teach surgeons how to perform surgical procedures using long, thin instruments, through a small incision (emulated by a box or plate with small holes), and under the indirect view of a laparoscopic camera. Among the existing tasks, peg transfer involves picking up blocks and transferring them between pegs. It is important to note that this task, per se, has no medical significance. Instead, it aims to train the surgeons' dexterity by abstracting away the task. In addition, it helps surgeons learn hand-eye coordination and task constraints. The FLS tasks are commonly used as a benchmark for robot dexterity and for the evaluation of AI algorithms. In robotics research, peg transfer is still a benchmark for robot dexterity and AI research [17], even though it is one of the easiest tasks for humans, given that the blocks are rigid and easy to visually discern. More complicated tasks, such as gauze cutting [18], are still an open problem in robotics research. Gauze cutting involves using one of the robotic arms to create tension in the gauze while another arm cuts it. Many groups are exploring the use of state-of-the-art simulators to address this challenge, by trying to learn flexible material dynamics [19]. Other tasks involve handling sutures, performing knot tying, and handling suture threads with varying degrees of difficulty.

B. Design novelty
The proposed platform is the culmination of long-standing know-how in our group in dexterous manipulation in constrained workspaces, starting from mechanical systems developed for a particular purpose, e.g., [20]. Aiming to cover a larger number of possible applications, we have moved on to using general-purpose manipulators with specialized end-effectors [15], effectively targeting a larger number of applications with arbitrary constraints [21].
With dual-arm systems with a movable base (e.g., on a cart with casters), we have identified two challenges, which also affect similar robotic systems. These inspired the rail-plus-linear-actuator design. First, for model-based (self) collision avoidance, i.e., the sense of "self", precise knowledge of the base position of each manipulator is paramount. In this context, a time-consuming high-accuracy (re-)calibration for each new base repositioning is a bottleneck for the research. Second, for arms with redundancy, e.g., each SmartArm with 9 degrees of freedom [21], workspace optimization using a performance metric, e.g., manipulability, is an open challenge. The redundancy, the complex nature of the tasks, and the large number of constraints make it impossible to optimize the placement of the bases of the robots using existing techniques. With the rail-plus-linear-actuator, the dexterous workspace of each manipulator becomes its original volume extruded along the area spanned by the rail-plus-linear-actuator, effectively solving that issue for the table centered in the platform. In addition, because all those additional degrees of freedom have an encoder, we do not need re-calibration after motion. It is important to note that, given the considerable overlap in the workspaces of all arms, our design is only feasible given the existence of state-of-the-art execution-time (self) collision-avoidance frameworks, e.g., [21].
In addition, the system being composed of two different types of arms is inspired by applications in which the larger robotic arms cannot be used owing to, e.g., size restrictions [22]. In such applications, the large payload of larger arms is unnecessary and their large volume becomes, instead, an issue. They can easily occlude the view of the target and require that the camera be set considerably farther from the manipulation target, further restricting the choice of suitable camera/lens systems.
A multi-arm system, such as the one described herein, addresses all these issues and enables the scientific exploration of a large number of applications without a mechanical change. Having such a structure allows us to learn tasks in this system and, when possible, transfer the learned aspects to systems with a lower number of arms.

IV. SYSTEM INTEGRATION & SOFTWARE
In the implementation described herein, we used general-purpose computational platforms to make the system feasible for experiments in scientific exploration. Our target at this stage was to develop a system with acceptable latency for a human operator or for an autonomous system with a similar level of performance. In this context, we did not focus on developing a system with hard real-time guarantees; hence, we have no strict guarantee of adherence to a predefined periodicity in any given computational thread. We took reasonable steps to reduce lag up to the reported level of responsiveness, but a missed thread deadline was not catastrophic in our target applications. If a system inspired by ours is to be used in a critical application that requires such guarantees, a suitable real-time kernel (e.g., RT_PREEMPT, used in [15]) along with the real-time capabilities provided by a suitable robotic system middleware might be good starting points. Nonetheless, such an adjustment is not expected to be trivial.
Our system has been designed to have flexibility in the control modes. This means that any of the four robots can be controlled through teleoperation or autonomously. Each robot can be independently controlled by its respective operator side, which can be anywhere in the world with a properly configured internet connection. In this light, our system can be divided into multiple operator sides (OS) and a follower side (FS). One example configuration is shown in Fig. 2.

A. Operator sides
The OS are enabled by the SmartArmMaster program, developed as part of this work and made available with an installer on Windows, free for noncommercial use. It is currently compatible with 3D Systems' haptic devices. We developed a similar program for Medicaroid's hinotori master manipulator device, used in Section VI-B1. Our master programs read the kinematic information of the devices through the vendor's API, encode that information into a binary stream, and send it to the FS through the user datagram protocol (UDP).
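As an illustration of this streaming pattern, the following is a minimal sketch of a master-side UDP sender. The packet layout, address, and the read_master_pose() placeholder are assumptions for illustration only and are not the actual SmartArmMaster protocol.

```python
import socket
import struct
import time

# Illustrative packet layout (not the actual SmartArmMaster format):
# 7 doubles for the master pose (translation + unit-quaternion rotation)
# plus 1 double for the gripper opening, little-endian.
FOLLOWER_ADDR = ("192.168.0.10", 20000)  # hypothetical follower IP/port


def read_master_pose():
    """Placeholder for the vendor API call that returns the device pose."""
    return (0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0)


def stream_master_state(rate_hz: float = 500.0) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / rate_hz
    while True:
        packet = struct.pack("<8d", *read_master_pose())
        sock.sendto(packet, FOLLOWER_ADDR)  # fire-and-forget, as usual with UDP
        time.sleep(period)


if __name__ == "__main__":
    stream_master_state()
```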

B. Follower side
Regarding the FS, the system's information is processed by two computers. The first computer, based on Windows 10 x64, is responsible for all tasks requiring GPU support, including running the digital twins described in Section V. In addition, it is used to capture and handle the information from the cameras and to generate the composed views used in the experiments described in Section VI. The second computer, based on Ubuntu 20.04 x64 and ROS Noetic, is responsible for monitoring and controlling all robotic devices in a centralized manner.
1) Low-level control: The low-level control pertains to commanding the controllers of the four rail motors (ESCON 50/5, MAXON, Switzerland), the four motors in total for the two rotary gripper modules (CMMO-ST-C5-1-LKP, FESTO, Germany), and the four linear actuators (PCON-CB, AIA, Japan). The manipulator robots have the same type of controller (RC8, DensoWave, Japan), which is controlled over bCAP Slave Mode through UDP.
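The gripper-module and linear-actuator controllers are driven over Modbus (via pymodbus, as noted in the next paragraph). The sketch below shows the general pattern under the assumptions of pymodbus 3.x and a Modbus/TCP transport; the register addresses, scaling, and function choices are purely illustrative and do not correspond to the vendor register maps.

```python
from pymodbus.client import ModbusTcpClient

# Hypothetical register map: the real PCON-CB / CMMO-ST maps differ and
# must be taken from the vendor manuals.
TARGET_POSITION_REG = 0x9900   # illustrative address
CURRENT_POSITION_REG = 0x9000  # illustrative address


def move_linear_axis(host: str, position_001mm: int) -> int:
    """Command a target position (in 0.01 mm units) and read back the current
    position over Modbus/TCP. This is only a sketch of the driver pattern."""
    client = ModbusTcpClient(host)
    client.connect()
    # Positions wider than 16 bits are typically split across two registers.
    client.write_registers(TARGET_POSITION_REG,
                           [(position_001mm >> 16) & 0xFFFF,
                            position_001mm & 0xFFFF])
    response = client.read_holding_registers(CURRENT_POSITION_REG, count=2)
    client.close()
    high, low = response.registers
    return (high << 16) | low
```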
This integration has been achieved through the development of a large number of custom ROS Noetic packages, all free for noncommercial use. Considering low-level control, sas_robot_driver_denso and sas_robot_driver_escon were re-designed in this work, based on [15]. The software packages sas_robot_driver_festo and sas_robot_driver_aia were newly developed using pymodbus.

2) Composition of robots: The composition of several devices into a single serial robotic system is enabled by an instance of sas_robot_driver_ros_composer. Using a configuration file, we specify which sas_robot_driver_ros are to be composed and in what order. In addition, we can add CoppeliaSim information for each composed robot. The abstraction of any number of systems into a single serial robot facilitates and modularizes the higher-level control modules, which become impervious to changes in the lower-level devices. In this implementation, we have a total of four branches, each abstracted by a sas_robot_driver_ros_composer. An illustrative configuration is sketched below.
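The sketch below illustrates the idea of composing one branch from an ordered list of lower-level devices plus optional CoppeliaSim information. The keys and device names are hypothetical; the actual sas_robot_driver_ros_composer configuration schema may differ.

```python
import yaml  # PyYAML; the actual composer uses its own loader

# Illustrative composer configuration: a branch is an ordered list of
# sas_robot_driver_ros devices, optionally mirrored by a CoppeliaSim model.
BRANCH_1_CONFIG = """
branch_name: branch_1
devices:
  - sas_robot_driver_escon/rail_motor_1
  - sas_robot_driver_aia/linear_actuator_1
  - sas_robot_driver_denso/cvr038_1
coppeliasim_robot_name: CVR038_branch_1
"""


def load_branch(config_text: str) -> list[str]:
    """Return the ordered device chain that forms one serial branch."""
    config = yaml.safe_load(config_text)
    return config["devices"]


print(load_branch(BRANCH_1_CONFIG))
```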
3) Centralized control: The centralized control module is the main contribution in terms of software. It is the implementation of a centralized kinematic control strategy based on quadratic optimization and inequality constraints [21], [23]. The centralized control can be configured to simultaneously handle any number of robot branches, in this case four, as follows

\begin{align}
\mathbf{u} \in \underset{\dot{\mathbf{q}}}{\arg\min}\; & \sum_{i=1}^{4} f_i\!\left(\dot{\mathbf{q}}_i, \mathbf{q}_i, \mathbf{x}_{d,i}\right) \nonumber \\
\text{subject to}\; & \mathbf{W}_{s}(\mathbf{q})\,\dot{\mathbf{q}} \preceq \mathbf{w}_{s}(\mathbf{q}) \nonumber \\
& \mathbf{W}_{p}(\mathbf{q})\,\dot{\mathbf{q}} \preceq \mathbf{w}_{p}(\mathbf{q}), \tag{1}
\end{align}

where f_i(·) is a convex objective function related to the task of each robotic branch [23], q_i are the joint values of the ith robot, q is the vector stacking all q_i in order, x_{d,i} is the task-space target for the ith robot, and u is the vector of joint velocities output by the solver that will be sent to the robotic system. With respect to the constraints, the first set contains the block-diagonal matrix W_s that encodes constraints related to a single robot branch. Those can be, for instance, joint limits, joint-velocity limits, and collision avoidance with static objects in the workspace using vector-field inequalities (VFIs) [21]. Lastly, the second set of constraints, with matrix W_p, encodes pairwise constraints to prevent collisions between robotic branches.
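For concreteness, the following sketch shows one way to realize a control step of (1) with an off-the-shelf QP solver, assuming a damped least-squares objective per branch and the qpsolvers package; the actual implementation relies on [21], [23] and may differ in both objective and solver.

```python
import numpy as np
from scipy.linalg import block_diag
from qpsolvers import solve_qp  # pip install qpsolvers quadprog


def centralized_joint_velocities(J_branches, xdot_d, W, w, damping=0.01):
    """Solve one control step in the spirit of (1): minimize the damped
    task-space error of all branches subject to the stacked inequality
    constraints W u <= w (joint limits, VFIs, pairwise collision avoidance)."""
    J = block_diag(*J_branches)          # block-diagonal task Jacobian
    n = J.shape[1]
    H = J.T @ J + damping * np.eye(n)    # convex quadratic term
    f = -J.T @ xdot_d                    # linear term from the targets x_{d,i}
    u = solve_qp(H, f, G=W, h=w, solver="quadprog")
    return u                             # joint velocities sent to the robots
```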
The inequality constraints are used to provide (self) collision avoidance, that is, a sense of self-awareness to the robotic system. That is very important in such a complex system, because a naive control strategy would certainly cause the robot to break. The self-awareness is provided through our VFI methodology. Each VFI is based on the signed distance between geometric primitives.
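As an example of how one such constraint row could be built, the sketch below considers the signed distance between two spheres attached to different branches. The gain and distances are illustrative; the complete derivations of the VFIs are given in [21].

```python
import numpy as np


def sphere_sphere_vfi(p_a, p_b, r_a, r_b, Jp_a, Jp_b, eta=2.0, d_safe=0.02):
    """Minimal sketch of one vector-field-inequality row for two spheres.
    p_a, p_b: 3-vectors of the sphere centers; Jp_a, Jp_b: their translation
    Jacobians w.r.t. the stacked joint vector q. Returns (W_row, w_scalar)
    such that W_row @ u <= w_scalar keeps the distance above d_safe, in the
    sense of the VFI framework of [21]."""
    diff = p_a - p_b
    dist = np.linalg.norm(diff) - r_a - r_b      # signed distance between spheres
    n_hat = diff / np.linalg.norm(diff)          # line-of-centers direction
    J_d = n_hat @ (Jp_a - Jp_b)                  # distance Jacobian (1 x n)
    # VFI: d_dot >= -eta * (dist - d_safe)  ->  -J_d u <= eta * (dist - d_safe)
    return -J_d, eta * (dist - d_safe)
```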
One of the biggest challenges we have had so far with our strategy was related to the cumbersome process of adding several primitive pairs. In this work, we partially solve this issue by providing a way to add each VFI through a configuration file (see Fig. 3).
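The fragment below illustrates what one such configuration entry might contain. The key names and values are hypothetical and are not the actual file format shown in Fig. 3.

```python
import yaml  # PyYAML; the actual loader may differ

# Hypothetical VFI entry: two geometric primitives, a safe distance, and a gain.
# The controller would turn each entry into one inequality row of (1).
VFI_CONFIG = """
vfis:
  - name: branch1_wrist_vs_branch2_wrist
    primitive_a: {type: sphere, robot: branch_1, link: 7, radius: 0.05}
    primitive_b: {type: sphere, robot: branch_2, link: 7, radius: 0.05}
    safe_distance: 0.02
    gain: 2.0
"""

for vfi in yaml.safe_load(VFI_CONFIG)["vfis"]:
    print(vfi["name"], vfi["safe_distance"])
```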

V. DIGITAL TWINS
The simulation of our robotic platform is addressed using two different environments, namely CoppeliaSim and Isaac Sim. The former is mainly used to define geometric primitives, useful to prevent (self-)collisions by means of the VFI framework. The latter implements a digital twin and is used to train operators before using the real platform. Both environments run on the (FS) Windows computer, which is equipped with an Intel i9-11900K with 64 GB RAM and an Nvidia RTX A6000 GPU.
The second object in the configuration file (Fig. 3) describes a forbidden zone between two robots. Its first entity is a sphere related to the second robotic branch's first joint, and its second entity is related to the first robotic branch's first joint. Using a total of 31 VFIs, the system is highly reactive and has collision-avoidance guarantees.
The kinematic controllers, which run on Ubuntu and ROS (FS), receive teleoperation commands from the haptic interfaces (OS) and set the joint positions of the CoppeliaSim simulation. Finally, Isaac Sim receives the joint position commands from CoppeliaSim by means of the DQ Robotics library [24] and the Isaac Sim Python API (standalone workflow). Fig. 4 shows the render of the scene using the path-tracing algorithm.
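A rough sketch of the CoppeliaSim side of this pipeline, using the DQ Robotics remote interface, is shown below. The joint names, address, and port are assumptions, and the forwarding step into the Isaac Sim articulation is omitted.

```python
from dqrobotics.interfaces.vrep import DQ_VrepInterface

# Hypothetical joint names; the actual scene uses its own naming convention.
BRANCH_1_JOINTS = ["VS050_joint_{}".format(i) for i in range(1, 7)]

vi = DQ_VrepInterface()
try:
    # Connect to the CoppeliaSim remote API (default port assumed).
    vi.connect("127.0.0.1", 19997, 100, 10)
    # Read the joint positions set by the kinematic controllers in CoppeliaSim...
    q = vi.get_joint_positions(BRANCH_1_JOINTS)
    # ...which would then be forwarded to the Isaac Sim articulation through
    # its Python API (standalone workflow, omitted here).
    print(q)
finally:
    vi.disconnect()
```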

VI. EXPERIMENTAL SHOWCASE

A. Digital Twin: Isaac Sim
The proposed digital twin aims mainly to accurately replicate the robot kinematics, while the robot dynamics needs only to be plausible. The rationale behind this is twofold. First, the inertial parameters (e.g., mass and moment of inertia) of the robots composing our system are not made available by the manufacturers. Furthermore, the robots are commanded by joint position inputs and operate under relatively low velocities and accelerations in the applications we describe. This allows the use of kinematic control strategies [21], as in (1), which do not take into account the robot dynamics. Therefore, to obtain a similar behavior between the real platform and the Isaac Sim digital twin, we set the joint damping and stiffness parameters to large values, and the masses of the links to small values. Both the center of mass and the inertia matrix of the links are computed automatically by the simulator according to the collision mesh of the body, which is an approximation of the body mesh by convex decomposition.

To validate the proposed approach, we performed two challenging task simulations. The first one is the peg transfer task, and the second one is cranial microsurgery on a mouse. Both tasks are performed via teleoperation. The main goal is to develop a training platform to improve the operator's skills before using the real robotic system.
Peg Transfer Task: To practice grasping tasks, we implemented a peg transfer task simulation using the kinematic branch equipped with the forceps. The goal is to transfer a block from its initial peg to a different one while avoiding collisions between the robot's forceps and the pegs, as shown in Fig. 5. Since grasping general objects is a challenging task, which usually relies on specialized simulation environments [25], we implemented a fake-grasping strategy. The idea is to constrain the relative pose between the forceps and the block when they are in contact below a minimum distance. Once the teleoperator opens the forceps, the block is released.
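A minimal sketch of this fake-grasping logic is given below, assuming position-only poses and an illustrative attachment distance; the simulator-specific attachment mechanism is abstracted away.

```python
import numpy as np


def fake_grasp_update(forceps_p, block_p, grip_closed, attached, offset,
                      min_distance=0.005):
    """One update of the fake-grasping state machine: attach the block to the
    forceps when the closed forceps are within min_distance of it, keep the
    relative offset frozen while attached, and release when the forceps open."""
    if not attached:
        if grip_closed and np.linalg.norm(forceps_p - block_p) < min_distance:
            attached = True
            offset = block_p - forceps_p          # freeze the relative position
    elif not grip_closed:
        attached = False                          # teleoperator opened the forceps
    if attached:
        block_p = forceps_p + offset              # block follows the forceps
    return block_p, attached, offset
```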
The block is a hollow triangular prism, composed of a rigid core covered with a deformable mesh, as shown in Fig. 5. The former enables static interactions with the forceps, whereas the latter allows for dynamic interactions with the pegs. Each peg is linked to a fixed base using a spherical joint set with custom damping and stiffness properties. This approach allows small movements of the peg when subjected to external forces and prevents numerical instabilities when the robot forceps touch the peg.
Cranial Microsurgery: We implemented a second simulation aimed at practicing cranial microsurgery experiments using the kinematic branch equipped with the drill. Instead of drilling specific meshes, which is not trivial in Isaac Sim, we adopted a place-based approach. The goal is to create and place small spheres on the target object when the contact force applied by the end-effector exceeds a threshold. By doing so, we include different behaviors for different force values. This feature allows setting different colors for the drawn spheres or different textures for the drilled object, as shown in Fig. 5.
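A sketch of this place-based feedback is shown below; the thresholds are illustrative, and the creation of the actual marker spheres and the texture switch are left to the simulator API.

```python
import numpy as np


def drill_marking(contact_force, contact_point, placed_points,
                  touch_threshold=0.05, break_threshold=0.5):
    """Place-based drilling feedback: above a light-touch threshold a small
    sphere is placed at the contact point; above a larger threshold the event
    is flagged so the target texture can be switched (e.g., to a 'broken'
    appearance). Thresholds are illustrative, not the platform's values."""
    broke_through = False
    if contact_force > touch_threshold:
        placed_points.append(np.asarray(contact_point))  # spawn a marker sphere here
        broke_through = contact_force > break_threshold
    return placed_points, broke_through
```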
Results and Discussion: Fig. 5 shows snapshots of the peg transfer simulation task. The block is transferred from its initial peg to a different one successfully. However, the lack of 3D vision makes it difficult for the operator to locate the end-effector in the workspace. To circumvent this, one could add more cameras at different positions in the scene. Nevertheless, each camera increases the GPU consumption considerably. Our scene, with one camera, runs at 30 FPS using a 1920x1080 render resolution. When two cameras are used, the overall performance drops to about 18 FPS. Another solution could be the use of a VR headset, which is currently supported by Isaac Sim. However, that is out of the scope of this work.
Figure 5 shows snapshots of the cranial microsurgery simulation task. The goal is to perform an oval trajectory on the skull surface of a mouse. This trajectory represents a small piece of skull tissue that is to be removed. In this case, the robot is not teleoperated. Instead, we define a position task to follow a closed trajectory. The height of the end-effector is independently controlled to keep a constant force applied on the skull. The task is performed successfully, as shown in Fig. 5. However, our current simulation does not take into account the deformable behavior of the mouse skull, which is a challenging feature in the real experiments.
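A minimal sketch of how such a closed trajectory and force-regulated height could be parameterized follows; the ellipse dimensions, gain, and force setpoint are illustrative and are not the values used in the simulation.

```python
import numpy as np


def oval_trajectory(center, a=0.004, b=0.003, n_points=200):
    """Closed drilling path: an ellipse of semi-axes a, b (metres) around the
    target center on the skull surface; dimensions are illustrative. The
    height (z) of the end-effector is regulated separately from force."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points)
    x = center[0] + a * np.cos(t)
    y = center[1] + b * np.sin(t)
    z = np.full_like(t, center[2])
    return np.stack([x, y, z], axis=1)


def height_correction(measured_force, desired_force, gain=1e-4):
    """Simple proportional height adjustment to keep a constant contact force
    (negative output lowers the end-effector toward the skull)."""
    return -gain * (desired_force - measured_force)
```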

B. Physical Platform
1) Tokyo-Kyoto Teleoperated Eggshell Drilling: One of the first and most delicate tasks in a cranial window procedure is to drill an 8 mm patch of the mouse skull without damaging the brain below. Such a task is challenging even for expert scientists, given that even light damage to the brain results in complete failure.
Before experimenting directly on mice, scientists train using raw chicken eggs. Chicken eggshells have a similar thickness to the mouse skull and an underlying membrane. The task then becomes to drill and remove a circular patch of the eggshell without damaging the membrane below.
In this work, to show the reliability of our robotic setup, we designed a Tokyo-Kyoto experiment as part of an exhibition at IROS'22. The robotic system was in Tokyo, while in Kyoto we had two Hinotori-type master devices, as shown in Fig. 6. The control signals between both sides were sent through suitable UDP protocols. As a showcase of what could be achieved, we performed one complete eggshell-drilling task, with snapshots shown in Fig. 7.
2) Peg transfer: The peg transfer task, part of the fundamentals of laparoscopic surgery (FLS) certification, is ubiquitous in surgical robotics automation and a common benchmark for human and robot dexterity. In this showcase experiment, our intention is to show the feasibility of using the robotic arm with tweezers to perform the peg transfer. The procedure was performed through teleoperation using the wide field-of-view lenses on the top, left, front, and right cameras.
With only the top view, it was extremely challenging to properly perform the task because of the difficulty in perceiving depth from a single camera. With the many simultaneous views, the user can naturally compensate for that difficulty. In this context, the user was able to perform the task without difficulties, as shown in Fig. 8. As a reference, the task was completed within 3 minutes.
Depending on usage, the system could also be retrofitted with two tweezer actuation units, allowing for two-arm peg transfer execution.
3) Gauze cutting: The gauze-cutting task, also part of the FLS certification, is extremely challenging to automate owing to the complexity of interacting with the gauze. In this experiment, our purpose was to show that the scissors attachment works properly from a mechanical standpoint and to verify whether such a complex object can be cut by our integrated system. The cameras used in this experiment were the same as those used for the peg transfer, namely the top, left, front, and right cameras.
This experiment was considerably more challenging than the peg transfer, and snapshots of the gauze cutting are shown in Fig. 9. In particular, a considerable amount of rotation change is required to make the task feasible. This sometimes causes the manipulator robots to perform large motions in the workspace. Despite that, the robots can safely move owing to the (self) collision avoidance described in Section IV-B3.

Fig. 5. A: The egg-drilling task. From left to right: the drill is near the egg, but there is no contact force; the drill applies a contact force to the egg, and small black spheres are placed on the egg's surface; the applied force is above a threshold and the egg texture is modified to imitate a broken egg. B: Snapshots of the cranial microsurgery simulation. The desired trajectory is denoted by the green dashed line. C: Interaction between the block and the peg. From left to right: the block is composed of a rigid core (orange hollow triangular prism) covered with a deformable blue mesh; a spherical joint links the peg with a static base; different behaviors of the block and the peg when subject to an external force. The blue arrow represents the applied external force vector. D: Snapshots of the peg transfer task simulation. The goal is to transfer the block from its initial peg to a different one while avoiding collisions between the robot's forceps and the pegs. The collision groups are denoted at the top. The forceps (including the force sensor) interact with the rigid core and the pegs. The interactions between the block and the pegs are handled by the deformable mesh only.
4) Two-person teleoperated cranial window drilling: In this experiment, our objective was to use the robotic platform to drill an 8 mm patch on the skull of a euthanized mouse without damaging the tissue below. The mouse used in this experiment was provided by a neighboring research laboratory and was otherwise going to be discarded, that is, it was not euthanized for this experiment.
The preparation of the mouse for the experiments was done manually by removing the skin on top of the cranium and fixating the mouse under camera vision. One operator was tasked with handling the micro-drill and the cotton swab. When the mouse is alive, the cotton swab and other tools are used to control and stop bleeding. In our case, the cotton swab was used to keep the cranium wet using tap water kept in a small plastic container. The user operated the robot to wet the cotton swab and then applied it to the cranial surface. After that, the user performed the drilling of the skull. These steps were repeated for about 5 minutes. Then, the second user, seated at another computer and having access to their own master interface, operated the tweezers. That operator could successfully remove the cranial patch. Snapshots of this procedure are shown in Fig. 10.

VII. CONCLUSIONS
In this work, we have introduced a multi-arm robot platform for scientific exploration. We described its design, based on a centralized rail system and customized end-effectors. After showing the scalable system integration, we presented two digital twins. One digital twin is based on CoppeliaSim and is ready to run on most computers. The other is based on Isaac Sim and, while more computationally demanding, provides more realistic graphics and physical accuracy.
To evaluate this platform, we have shown experiments involving the real platform and the digital twins. We have performed peg transfer, gauze cutting, mock experiments using egg models, and the drilling toward cranial window generation using three robot arms controlled by two operators. The digital twins and related control software are available free for noncommercial use and can be used by anyone interested in validating their algorithms on a state-of-the-art multi-arm platform.
There is ongoing work in using this platform for artificial intelligence research toward scientific exploration. The teleoperation of the system is paramount in this case, both for the understanding of the tasks to be performed and for the generation of data to understand the features that are relevant to the task. One active research direction based on this work is the automatic drilling of the mouse skull using AI-based image recognition [26], in which the completion level of the task is estimated through changes in the image. The mouse skull is more challenging than the eggshell given that it must be kept humid, making it difficult to discern where the drill has passed. In addition, the mouse cranium is considerably flexible owing to its thickness. Nonetheless, in the case of the mouse, anatomical structures in the cranium, such as the cranial sutures, can be important features for image recognition and AI-based control. In this case, an in-depth study of experiments in mock setups, such as bimanual handling in the peg-transfer task, the understanding of flexible-material handling in the gauze-cutting task, and the drilling experiments on eggshells, will provide insights into motions that can be more readily achieved by robotic systems without replicating human motion. We expect this platform to be the first step towards collaborative AI robots that will autonomously perform scientific experiments that otherwise could not be done by human scientists alone.

Fig. 6. Kyoto-side operator console. The operator had access to two control interfaces similar to those used by the Hinotori surgical robot system. The user had a view composed of the Tokyo-side microscopic image taking most of the screen and three smaller views on the right corresponding to the left, front, and right views of the Tokyo-side system. In addition, the operator could hear the drilling sounds through speakers.

Fig. 8. Peg transfer on the real system operated in teleoperation mode. The peg transfer experiment is ubiquitous in surgical robotics automation and a common benchmark for human and robot dexterity. This feasibility experiment serves to show that our system is capable of both incredibly fine tasks and dexterous tasks such as peg transfer.

Fig. 9. Gauze cutting using the real system operated in teleoperation mode. The gauze-cutting experiment is a considerably challenging task included in the FLS. This experiment serves to show that the tweezers-scissors pair can be used with high dexterity.

Fig. 10. The world's first reported teleoperated cranial window in a euthanized mouse. Two users and three tools were used. The first user handled the micro-drill and the cotton swab. The second user handled the tweezers that were used to remove the circular cranium patch. The only feedback available to the users was the camera images and the sound. This was a feasibility experiment in which time was of no particular essence; nonetheless, we were able to make a proper circular hole within 6 minutes.

Fig. 1. Our multi-arm robot platform. The four branches can be individually controlled. Two branches contain a collaborative-type manipulator robot and two branches contain an industrial-type manipulator robot. Each manipulator robot's base is attached to a linear actuator which is, in turn, attached to the circular rail using individual motors. The system has, in total, 34 active degrees of freedom in the configuration space, in addition to two degrees of freedom for actuating the tweezers and scissors.

Fig. 2. An example of the system architecture when two robots are teleoperated and two robots are autonomously controlled. The local devices are controlled from the centralized control computer. Each robotic branch is encapsulated in a sas_robot_driver_ros_composer. There are four branches, but only two are shown in the figure for simplicity. Those four robotic branches are controlled in joint space by the aisp_inverse_kinematics node, which takes into account joint limits, environmental constraints, and collision avoidance among robots. The aisp_inverse_kinematics node receives desired pose signals for each branch individually. The desired pose signals can be sent through a master device, a custom program, or a ROS-based program.

Fig. 4. On the left, a snapshot of the CoppeliaSim scene. The blue and red spheres are geometric primitives defined by the user. The kinematic controllers use these primitives to impose VFIs that prevent (self-)collisions. On the right, a snapshot of the Isaac Sim scene using the path-tracing renderer.

Fig. 7. A sample of what could be achieved in the Tokyo-Kyoto teleoperation experiment. With the right hand, the user carefully drilled the eggshell. With the left hand, the user could use a cotton swab to clear the eggshell dust. After about five minutes of drilling and cleaning, the shell was penetrated without damaging the underlying membrane. The task was, thus, successful.