
Meeting Abstracts

Open Access

Artificial Intelligence and Robotics in Regional Anaesthesia: Do they have a role?

  • Eleni Moka1
  • James Bowness2

1Creta InterClinic Hospital, Hellenic Healthcare Group (HHG), Heraklion – Crete, Greece

2University of Oxford, UK; Aneurin Bevan University Health Board, UK

DOI: 10.22514/sv.2021.192 Vol. 17, Issue S1, September 2021, pp. 47–48

Submitted: 26 August 2021 Accepted: 06 September 2021

Published: 15 September 2021

*Corresponding Author(s): Eleni Moka E-mail: mokaeleni@hotmail.com

Abstract

“I expect it [Artificial Intelligence - AI] to play a foundational role in pretty much every aspect of our lives” Sundar Pichai, CEO Google, 2021

We are living in the fourth industrial revolution, characterised by the dominance of computers and technological advances including artificial intelligence (AI) and robotics [1]. Such developments are likely to have a profound impact on humanity, reforming our work environment and daily life.

Artificial intelligence refers to the simulation of human intelligence in machines [2]. Computers can be programmed to imitate neuronal activity and appear to think like humans, or to mimic their actions, in an attempt to solve complex problems in a variety of scientific domains, including medicine. Such programs can make calculations with greater accuracy and speed than humans, using large volumes of data. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning, planning, programming, creativity and problem–solving. AI renders machines capable of interpreting their environment in order to achieve a specific goal. Importantly, AI systems can adapt their behaviour (up to a point) to solve problems with relative autonomy, via analysis of previous actions and outcomes. Robots are machines with an abundance of sensors, designed to perceive the outer environment, interact with it and execute a series of programmed actions [3].

Artificial intelligence and robotics may provide extremely powerful advances with multiple applications across medical fields. The emphasis, for the moment, is on surgery and radiology, with the first related literature reports having appeared at the end of the previous century [3]. In anaesthesia, however, development was slower: the first attempt at automation was the introduction of computerised, pharmacokinetic model–driven continuous infusion pumps. These attempts resulted in the first target–controlled infusion (TCI) device for administering propofol.
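The principle behind a TCI pump can be illustrated with a toy simulation. The sketch below is purely hypothetical (a one-compartment pharmacokinetic model with invented parameters, not any clinical algorithm): the pump chooses an infusion rate to reach and then hold a target plasma concentration.

```python
# Hypothetical sketch of the TCI idea: a one-compartment pharmacokinetic
# model (dC/dt = rate/V - k*C) is stepped forward, and the infusion rate is
# set to hold a target concentration. V, k and the controller are invented
# for illustration; real TCI devices use validated multi-compartment models.
def simulate_tci(target, V=10.0, k=0.1, dt=0.1, steps=600):
    """Simulate plasma concentration under a simple target-holding controller."""
    C, history = 0.0, []
    for _ in range(steps):
        # rate that holds the target at steady state, plus a correction term
        rate = k * V * target + V * max(0.0, target - C)
        C += (rate / V - k * C) * dt   # Euler step of the one-compartment model
        history.append(C)
    return history

conc = simulate_tci(target=2.0)
```

At steady state the chosen rate exactly balances elimination (`rate = k*V*target`), so the simulated concentration settles at the target.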

More recently, research has demonstrated that AI may also be useful in Regional Anaesthesia (RA), by identifying key anatomical features and by facilitating Ultrasound–Guided Regional Anaesthesia (UGRA) [2, 4]. The initial challenge in UGRA is an understanding of the sono–anatomy, in order to acquire and interpret ultrasound images [2, 5]. This skill remains an under–explored area of research and is known to be imperfect amongst anaesthesiologists. While improvements in ultrasound technology provide greater image resolution, developments in AI may be employed to support the application of this technology in identifying the salient sono–anatomy. In this regard, a field of AI called “computer vision” has received particular attention, as it enables computers to interpret the visual world, most commonly using a technique called deep learning.

Artificial intelligence systems in RA are emerging [2, 6]. Among them, a deep learning–based system called ScanNav Anatomy Peripheral Nerve Block (Intelligent Ultrasound, Cardiff, UK) has recently received attention in the literature [2–4]. This system uses deep learning to identify anatomical structures on B-mode ultrasound and applies a colour overlay to those structures in real time (as summarised below, adapted from Bowness et al., 2021) [5]. The labelling is achieved using a convolutional neural network based on the U-Net architecture. Input data (greyscale ultrasound images) pass through a series of computational (neural) layers, each extracting specific information. In the initial “contracting” path, each down–sampling layer applies a series of convolutional filters to extract image features, then halves the resolution for the next layer. Through this down–sampling, the network better understands what is present in the image, but loses information about where some features are. In the subsequent “expanding” path, up–sampling layers apply further convolutional filters, doubling the resolution, until the final image is once again at the initial resolution. The up–sampling helps the network determine where the features are in the image. “Skip connections” allow the network to reuse information from the earlier, higher-resolution layers, so that it can learn to fine–tune the details of the output segmentation (recognition of a specific anatomical structure/area and application of a colour overlay).
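The resolution bookkeeping of this contracting/expanding data flow can be sketched in a few lines. The toy code below is not the ScanNav network: it replaces the learned convolutional filters with simple pooling and nearest-neighbour upsampling, and averages rather than concatenates the skip connections, so only the U-Net-style structure (halve, halve, double, double, reuse skips) is shown.

```python
import numpy as np

def downsample(x):
    """Halve resolution by 2x2 average pooling (the 'contracting' path)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Double resolution by nearest-neighbour repetition (the 'expanding' path)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(image, depth=2):
    """Toy encoder-decoder mirroring the U-Net data flow (no learned filters)."""
    skips = []
    x = image
    for _ in range(depth):           # contracting path: extract, then halve
        skips.append(x)              # keep a copy for the skip connection
        x = downsample(x)
    for _ in range(depth):           # expanding path: double, then reuse skips
        x = upsample(x)
        x = 0.5 * (x + skips.pop())  # skip connection restores the 'where'
    return x

img = np.arange(64.0).reshape(8, 8)  # stand-in for a greyscale ultrasound frame
out = unet_like(img)                 # output is back at the input resolution
```

In the real architecture each level also applies convolutions, and the final layer emits a per-pixel class map that becomes the colour overlay.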

During development of ScanNav Anatomy PNB, a separate network was created for each anatomical area of interest (the region scanned for each block). Ultrasound videos for each area were allocated at random to training (90%) or testing (10%), with training data for a region comprising pairs of images. In each pair, the first element is an unmodified still–frame image and the second a manually segmented colour overlay corresponding to that view. As still–frame image pairs were presented, the network learned to associate the area of the colour overlay with the corresponding area on the underlying B-mode ultrasound image, and thus learned to recreate the desired output colour overlay. The 10% of data reserved for testing was used to evaluate the network’s performance after training. This is a supervised machine learning process, in which learning is directed by human input at each stage. A typical training set consisted of 115,000 pairs of still–frame images per network, and over 800,000 images were ultimately labelled, evaluated and utilised.
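The supervised setup described above, paired frame/overlay examples with a random 90/10 split, can be sketched as follows. The filenames and pairing scheme are hypothetical stand-ins, not the actual ScanNav data pipeline.

```python
import random

def split_dataset(pairs, train_fraction=0.9, seed=0):
    """Randomly allocate (frame, overlay_mask) pairs to training or testing."""
    shuffled = pairs[:]                     # copy so the input is untouched
    random.Random(seed).shuffle(shuffled)   # seeded for reproducibility
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Toy stand-ins for still-frame/overlay pairs; real training sets held
# ~115,000 such pairs per network.
pairs = [(f"frame_{i}.png", f"mask_{i}.png") for i in range(100)]
train, test = split_dataset(pairs)          # 90 training pairs, 10 held out
```

The held-out 10% plays the role of the test videos: the network never sees those overlays during training, so they measure how well the learned segmentation generalises.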

The device has received approval for clinical use by the regulatory authorities in Europe and is currently being reviewed by the FDA in the USA. In addition, an objective and quantitative assessment of the system is under way, to frame its exact impact on training and clinical practice for both expert and non-expert anaesthesiologists. The goal is to highlight AI’s position in current clinical practice and to focus on its future role in education and training. AI technology in RA indeed has limitations and inaccuracies, but automated medical image interpretation systems already exist, with the future potential to surpass human performance in this process.

Regarding the application of robotics in RA, some preliminary studies have been published. In this context, researchers developed a robotic needle driver for spinal blocks (nerve root and facet blocks). Their equipment consisted of a robotic needle driver mounted on an interventional table and a joystick located in a control panel separate from the robot. A robot controller with safety features was installed on a computer and connected to the robot by cables. After application in cadavers and utilisation in humans, they concluded that robotic spinal blocks are as feasible as manual blocks. Subsequently, other researchers developed a more general device for the guidance of soft-tissue injections, such as RA [3].

Similarly, other researchers established a control algorithm given a predetermined needle trajectory. A robotic arm (C-arm) then drove a flexible spinal needle toward the target (an animal specimen) and performed the puncture under closed-loop control from software guided by real-time X-ray images. This system aimed to create a pathway for needle driving from the initial coordinates and to optimise the plan for minimal pressure on tissues, also taking possible obstacles into account.

Other applications of robotics in RA have focused on peripheral nerve blocks, for example the use of the Da Vinci Surgical System to perform a robotically assisted ultrasound-guided nerve block. Researchers demonstrated that robotically assisted RA is feasible; however, the cost and number of personnel needed to perform robotic RA are not currently practical. A system specifically designed to perform robot-assisted UGRA is the Magellan, designed and developed at McGill University in Canada. The Magellan has four components: a standard nerve block needle and syringe mounted via a custom clamp to a robotic arm (JACO arm, Kinova, Canada), an ultrasound machine, a joystick (Thrust Master, New York, USA), and control software. The system was designed to work with any ultrasound machine with a video output; the ultrasound video output is captured and displayed on the user interface of the control software. The system includes safety features so that errors or failures pose no risk to patients.

However, there is a potential danger of overreliance on robotic assistance during training. Although variability may be reduced among trainees, overall competence may be inadequate. Such deskilling would expose anaesthetists during emergencies and equipment failure. Therefore, it is important to carefully design robotic interventions in training as a feedback system that aids, and does not supersede, the learning process.
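The closed-loop, image-guided needle driving described above reduces, at its core, to a feedback loop: measure the tip position from imaging, compare it with the planned trajectory, and command a bounded correction. The sketch below is a deliberately simplified, hypothetical illustration of that loop (a 2-D proportional controller with invented gains); real systems add kinematic models, tissue-pressure optimisation and safety interlocks.

```python
# Hypothetical closed-loop needle-driving sketch: at each cycle the imaged
# tip position is compared with the target and a bounded proportional
# correction is applied. Gains and limits are invented for illustration.
def drive_needle(tip, target, gain=0.5, max_step=1.0, tol=0.1, max_iters=100):
    """Step a 2-D needle tip toward a target under proportional feedback."""
    x, y = tip
    for _ in range(max_iters):
        ex, ey = target[0] - x, target[1] - y        # error from imaging
        if (ex * ex + ey * ey) ** 0.5 < tol:         # within tolerance: stop
            break
        # bounded correction: never move more than max_step per cycle
        sx = max(-max_step, min(max_step, gain * ex))
        sy = max(-max_step, min(max_step, gain * ey))
        x, y = x + sx, y + sy
    return (x, y)

final = drive_needle((0.0, 0.0), (5.0, 3.0))  # converges close to the target
```

Bounding each step is the simplest form of the safety behaviour mentioned above: however wrong the image-derived error is on one cycle, the commanded motion stays small.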

In conclusion, the potential for utilisation of AI and robotics in UGRA is yet to be determined. Few applications are currently employed in daily practice, and those with limited scope. Anatomical knowledge and ultrasound image interpretation are of paramount importance in UGRA, yet human performance in, and teaching of, both are known to be fallible. Therefore, robust and reliable AI and robotic technologies could support clinicians to optimise performance, increase uptake of, and standardise practice in, UGRA. They will likely offer innovative solutions to change service provision and enhance education in the future. Despite their limitations, such innovative modalities should not be perceived with scepticism; rather, they should be embraced as an opportunity to promote the RA subspecialty in a modern, progressive manner.


Cite and Share

Eleni Moka, James Bowness. Artificial Intelligence and Robotics in Regional Anaesthesia: Do they have a role? Signa Vitae. 2021; 17(S1): 47–48.

References

[1] McKendrick M, Yang S, McLeod GA. The use of artificial intelligence and robotics in regional anaesthesia. Anaesthesia. 2021; 76: 171–181.

[2] Bowness J, El‐Boghdadly K, Burckett‐St Laurent D. Artificial intelligence for image interpretation in ultrasound‐guided regional anaesthesia. Anaesthesia. 2021; 76: 602–607.

[3] Wehbe M, Giacalone M, Hemmerling TM. Robotics and regional anesthesia. Current Opinion in Anesthesiology. 2014; 27: 544–548.

[4] Bowness J, Varsou O, Turbitt L, Burkett-St Laurent D. Identifying anatomical structures on ultrasound: assistive artificial intelligence in ultrasound-guided regional anesthesia. Clinical Anatomy. 2021; 34: 802–809.

[5] Bowness J, Macfarlane A, Noble A, Higham H, Burckett-St Laurent D. Anaesthesia, nerve blocks and artificial intelligence. Anaesthesia News. 2021; 408: 4–6.

[6] Gungor I, Gunaydin B, Oktar SO, Buyukgebiz B, Bagcaz S, Ozdemir MG, et al. A real-time anatomy identification via tool based on artificial intelligence for ultrasound-guided peripheral nerve block procedures: an accuracy study. Journal of Anesthesia. 2021; 35: 591–594.


