
2021 AccelNet Surgical Robotics Challenge (online)

Automating surgical subtasks is an oft-mentioned research target for robot-assisted surgery. Certain subtasks, such as suturing and resection, have been automated on test bench setups. Yet current approaches are often limited in scope (i.e., based on very simplistic setups) and lack a standardized platform for reproducing results. In this challenge, we provide a simulation platform for participants to develop algorithms to address various questions in surgical subtask automation. The simulator contains two seven degrees-of-freedom (DOF) instrument arms based on the da Vinci Surgical System large needle driver, a controllable camera based on the da Vinci Endoscopic Camera Manipulator (ECM), a suturing phantom, and a needle with suture.


December 14, 2021: Announcing the GitHub Discussions forum for questions and comments. See the Community page for more information.


September 15, 2021: Challenge opens

February 1, 2022: Challenge closes (all Docker containers must be submitted)

March 1, 2022: Challenge results announced


Awards will be given for the winning entries in each challenge. The awards will consist of cash prizes and travel grants to a future (in-person) AccelNet Surgical Robotics Challenge. Details to be announced.

System Setup

The challenge is based on the Asynchronous Multi-Body Framework (AMBF) simulator. Participants will be required to install AMBF on a Linux system and download a Docker container. Algorithms must be implemented within a Docker container and submitted for testing. Additional details are provided here.

Challenge Tasks

The challenge is partitioned into three tasks summarized below. While these tasks naturally build on each other, it is possible to perform any subset of the tasks. All tasks will use a suturing phantom, a needle with suture, and up to two da Vinci large needle drivers. The view of the scene is provided by a simulated stereo endoscope (1080p, 30fps), with a camera baseline as in a real da Vinci. By default, there is one light attached to the endoscope, but lighting can vary up to twice this amount (i.e., equivalent to two lights attached to the endoscope).

The only requirement is that the developed algorithms perform the tasks autonomously. There is no requirement to use a particular type of machine learning, or even to use machine learning at all. Note that the videos below for Challenges 2 and 3 were created using teleoperation.

All entries will be tested under the same set of test conditions. Descriptions of the test conditions are provided in the detailed pages for each challenge.

Challenge 1: Finding the needle

Task: Develop algorithms to identify the pose (position and orientation) of the metallic suture needle, with respect to the current endoscope pose. The video shows two sample endoscope images. [More...].
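Expressing the needle pose "with respect to the current endoscope pose" is a standard change of reference frame. As a minimal sketch (the function and variable names here are illustrative, not part of the challenge API), if the world-frame poses of the camera and needle are available as 4x4 homogeneous transforms, the needle pose in the camera frame is the inverse camera pose composed with the needle pose:

```python
import numpy as np

def pose_in_camera_frame(T_world_cam, T_world_needle):
    """Express the needle pose in the endoscope (camera) frame.

    Both arguments are 4x4 homogeneous transforms. For a rigid
    transform, the inverse is built from R^T and -R^T t rather
    than a general matrix inverse.
    """
    R = T_world_cam[:3, :3]
    t = T_world_cam[:3, 3]
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3] = R.T
    T_cam_world[:3, 3] = -R.T @ t
    return T_cam_world @ T_world_needle

# Example: camera translated +0.1 m along world x, needle at the world origin.
T_world_cam = np.eye(4)
T_world_cam[0, 3] = 0.1
T_world_needle = np.eye(4)
T_cam_needle = pose_in_camera_frame(T_world_cam, T_world_needle)
# The needle then sits at x = -0.1 in the camera frame.
```

Challenge entries would typically estimate `T_world_needle` (or `T_cam_needle` directly) from the stereo endoscope images rather than read it from the simulator.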

Challenge 2: Grasp needle and drive through tissue

Task: Move the large needle driver to grasp the needle, then move the needle tip to the target and drive the needle through the tissue until the tip exits. The accuracy of the simulated robot will be comparable to that of a real robot, so visual feedback will be required to ensure accurate performance. [More...].
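One common way to close the loop with visual feedback is a simple proportional correction: estimate the needle-tip and target positions from the images, then command a small, bounded Cartesian step toward the target each iteration. The sketch below is one such scheme under stated assumptions (positions in a common frame, metric units); it is not the required approach, and the names are hypothetical:

```python
import numpy as np

def servo_step(tip_pos, target_pos, gain=0.5, max_step=0.002):
    """One proportional visual-servoing step (illustrative only).

    tip_pos, target_pos: 3-vectors in the same frame (e.g., the camera
    frame), estimated from the stereo images. Returns a clipped
    Cartesian correction to add to the instrument's commanded position.
    """
    error = np.asarray(target_pos, dtype=float) - np.asarray(tip_pos, dtype=float)
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm  # limit per-iteration motion
    return step
```

Clipping the step size guards against large jumps when the vision-based estimate is momentarily wrong, at the cost of slower convergence.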

Challenge 3: Suture the phantom

Task: Drive the needle through the phantom from the first entry point to the corresponding exit point. The left instrument should pull the needle through the phantom and hand the needle back to the right instrument. This completes one suture. The algorithm should repeat this process for each entry/exit pair of points. [More...].
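The grasp-drive-pull-handover cycle above can be sketched as a loop over entry/exit pairs. The arm objects and their methods (`grasp_needle`, `drive_needle`, and so on) are hypothetical placeholders standing in for a participant's own motion primitives, not the simulator's API:

```python
def suture_phantom(entry_exit_pairs, right_arm, left_arm):
    """Run one complete suture per (entry, exit) pair.

    Each iteration mirrors the task description: the right instrument
    grasps the needle and drives it through the tissue; the left
    instrument pulls it through and hands it back for the next stitch.
    """
    for entry, exit_point in entry_exit_pairs:
        right_arm.grasp_needle()
        right_arm.drive_needle(entry, exit_point)  # tip enters and exits tissue
        left_arm.pull_needle_through()
        left_arm.hand_back(right_arm)              # return needle for next stitch
```

Structuring the task this way lets each primitive (grasping, driving, handover) be developed and tested in isolation — which is also how Challenges 1 and 2 feed into Challenge 3.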


Please use the GitHub Discussions forum for questions and comments. See the Community page for more information.

To contact the organizers by email:


Development of this Surgical Robotics Challenge is supported by the United States National Science Foundation (NSF) via OISE-1927354 and OISE-1927275, AccelNet: International Collaboration to Accelerate Research in Robotic Surgery.