Researchers from Johns Hopkins University and Stanford University have advanced robotic surgery by integrating an AI-powered vision-language model (VLM) with the da Vinci robotic surgery system, enabling the robot to execute complex surgical tasks autonomously.
The development could change how surgeries are performed worldwide. The VLM was trained on extensive surgical video footage, enabling the robotic system to carry out critical tasks such as tissue manipulation, needle handling, and suturing on its own. Traditionally, robotic systems required detailed programming for each movement.
The new model instead uses imitation learning, which allows the robot to replicate actions observed in surgical videos. The researchers trained the model with NVIDIA GeForce RTX 4090 GPUs, PyTorch, and CUDA-X libraries, and presented the findings at the Conference on Robot Learning in Munich. The work builds on the da Vinci Surgical System, which is widely used for laparoscopic surgery worldwide.
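The article does not include the team's code, but the core idea of imitation learning (behavior cloning) can be sketched in PyTorch: a policy network is trained to map camera frames to the expert actions recorded alongside them. Everything below, from the network architecture to the tensor shapes, is a hypothetical illustration rather than the researchers' actual model.

```python
# Hypothetical behavior-cloning sketch: a policy maps RGB camera frames
# to robot joint commands and is trained to match expert actions recorded
# in demonstration videos. Architecture and dimensions are illustrative.
import torch
import torch.nn as nn

class FrameToActionPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):  # e.g. a 7-DoF arm command
        super().__init__()
        self.encoder = nn.Sequential(          # tiny CNN frame encoder
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)  # regress joint targets

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

policy = FrameToActionPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a (frames, expert_actions) batch drawn from
# recorded demonstrations; a real pipeline iterates over a DataLoader.
frames = torch.randn(8, 3, 224, 224)           # placeholder frame batch
expert_actions = torch.randn(8, 7)             # recorded kinematics
loss = loss_fn(policy(frames), expert_actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```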
For training, miniature cameras attached to the robotic arms captured over 20 hours of surgical procedures, pairing the video with the precise kinematic information needed to train the VLM. In experiments on animal tissue, the robot demonstrated near-flawless performance in a zero-shot setting and handled unforeseen situations on its own.
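As a rough illustration of how such demonstrations might be organized for training, the snippet below pairs each recorded frame with its synchronized kinematic reading. The class name and tensor layouts are assumptions made for the sketch, not the researchers' actual data pipeline.

```python
# Hypothetical sketch: pairing recorded video frames with synchronized
# kinematic readings, as the training data is described. Names and
# tensor layouts are illustrative assumptions.
import torch
from torch.utils.data import Dataset, DataLoader

class DemonstrationDataset(Dataset):
    """Each item is one (camera frame, kinematic state) pair."""

    def __init__(self, frames: torch.Tensor, kinematics: torch.Tensor):
        # Both streams must be time-aligned, one reading per frame.
        assert len(frames) == len(kinematics)
        self.frames = frames          # shape (N, 3, H, W)
        self.kinematics = kinematics  # shape (N, action_dim)

    def __len__(self) -> int:
        return len(self.frames)

    def __getitem__(self, idx: int):
        return self.frames[idx], self.kinematics[idx]

# Placeholder tensors standing in for ~20 hours of recorded procedures.
dataset = DemonstrationDataset(torch.randn(100, 3, 224, 224),
                               torch.randn(100, 7))
loader = DataLoader(dataset, batch_size=8, shuffle=True)
```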
The success of these experiments points toward a future in which autonomous robotic surgery could become commonplace. The researchers are already planning further experiments with animal cadavers and expanding the training data to broaden the system's capabilities. Such advances could improve surgical precision and reduce the risk of human error.