Assistant Professor in Robotics and Artificial Intelligence
UCL East Robotics and Wellcome/EPSRC Centre for Interventional and Surgical Sciences
Department of Computer Science
University College London
Email: sophia.bano-at-ucl.ac.uk
I have been an Assistant Professor in Robotics and Artificial Intelligence in the Department of Computer Science at University College London (UCL) since November 2022. I am also part of the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), the Surgical Robot Vision group, the Centre for Artificial Intelligence and the UCL Robotics Institute. Previously, I worked as a Senior Research Fellow at WEISS, where I contributed to the GIFT-Surg (Wellcome/EPSRC Guided Instrumentation for Fetal Therapy and Surgery) project.
My research interests lie in developing computer vision and AI techniques for context awareness, machine consciousness and navigation in minimally invasive and robot-assisted surgery. In particular, I am interested in context understanding in endoscopic procedures, surgical workflow analysis, field-of-view expansion, 3D reconstruction, and perception and navigation for surgical robotics.
I am the Principal Investigator on the UKRI EPSRC AI-enabled Decision Support in Pituitary Surgery project (2023-2025). During my previous roles at UCL, I was one of the leading researchers on the Wellcome/EPSRC Guided Instrumentation for Fetal Therapy and Surgery (GIFT-Surg) project (2014-2022) and a named researcher on the EPSRC Context-Aware Augmented Reality for Endonasal Surgery (CARES) project (2022-2025).
I received a BEng in Mechatronics Engineering and an MSc in Electrical Engineering from the National University of Sciences and Technology (NUST), Pakistan. In 2009, I was awarded an Erasmus Mundus Scholarship for the MSc in Computer Vision and Robotics (VIBOT) programme run jointly by Heriot-Watt University (UK), the University of Girona (Spain) and the University of Burgundy (France). This was followed by an internship at Imperial College London (UK), where I contributed to the ERC Soft Tissue Intervention Neurosurgical Guide (STING) project. In 2016, I received a joint PhD from Queen Mary University of London (UK) and the Technical University of Catalonia (Spain), funded by an Erasmus Mundus Fellowship. Before joining UCL-WEISS in 2018, I worked as a post-doctoral researcher at the University of Dundee, where I contributed to the EPSRC Augmenting Communication using Environmental data to drive Language Prediction (ACE-LP) project.
Computer Vision, Surgical Vision, Surgical Data Science, Medical Imaging, Geometric Understanding, Image Registration, Surgical Robotics, Computer-assisted Intervention, Computer-assisted Diagnosis.
For a complete list of publications, CLICK HERE.
AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound Planes
Automating Periodontal Bone Loss Measurement via Dental Landmark Localisation
FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset
A Fluidic Soft Robot for Needle Guidance and Motion Compensation in Intratympanic Steroid Injections
Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy
Deep Learning-based Fetoscopic Mosaicking for Field-of-View Expansion
Deep Human Activity Recognition with Localisation of Wearable Sensors
Deep Placental Vessel Segmentation for Fetoscopic Mosaicking
FetNet: A Recurrent Convolutional Network for Occlusion Identification in Fetoscopic Videos
Deep Learning Based Anatomical Site Classification for Upper Gastrointestinal Endoscopy
Deep Sequential Mosaicking of Fetoscopic Videos
Deep Human Activity Recognition using Wearable Sensors
XmoNet: A Fully Convolutional Network for Cross-Modality MR Image Inference
Multimodal Egocentric Analysis of Focused Interactions
Finding Time Together: Detection and Classification of Focused Interaction in Egocentric Video
ViComp: Composition of User-Generated Videos
Gyro-based Camera Motion Detection in User-Generated Videos
Discovery and Organization of Multi-Camera User-Generated Videos of the Same Event
Smooth Path Planning for a Biologically-Inspired Neurosurgical Probe
The Fetoscopy Placenta Dataset is associated with our MICCAI 2020 publication “Deep Placental Vessel Segmentation for Fetoscopic Mosaicking”. The dataset contains 483 frames with ground-truth vessel segmentation annotations taken from six different in vivo fetoscopic procedure videos. It also includes six unannotated in vivo continuous fetoscopic video clips (950 frames) with predicted vessel segmentation maps obtained from the leave-one-out cross-validation of our method. A minimal loading sketch is given below.
Visit the Fetoscopy Placenta Dataset website to access the data.
Deep Placental Vessel Segmentation for Fetoscopic Mosaicking
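As a rough guide, the sketch below shows how the annotated frames and their vessel segmentation masks might be iterated over in Python. The folder layout (a directory per procedure video with paired images/ and masks/ sub-folders), the file format and the root folder name are assumptions for illustration, not the dataset's documented structure.

```python
# Minimal sketch for iterating over the Fetoscopy Placenta Dataset.
# Assumed (hypothetical) layout: one folder per procedure video, each with
# paired `images/` and `masks/` sub-folders containing matching PNG names.
# Check the dataset website for the actual structure.
from pathlib import Path

import numpy as np
from PIL import Image


def load_video_frames(video_dir: Path):
    """Yield (RGB frame, binary vessel mask) pairs for one procedure video."""
    image_dir = video_dir / "images"   # assumed folder name
    mask_dir = video_dir / "masks"     # assumed folder name
    for image_path in sorted(image_dir.glob("*.png")):
        frame = np.asarray(Image.open(image_path).convert("RGB"))
        mask = np.asarray(Image.open(mask_dir / image_path.name).convert("L")) > 0
        yield frame, mask


if __name__ == "__main__":
    root = Path("fetoscopy_placenta_dataset")  # assumed root folder name
    for video_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        n_frames = sum(1 for _ in load_video_frames(video_dir))
        print(f"{video_dir.name}: {n_frames} annotated frame/mask pairs")
```

The same generator can feed a leave-one-video-out split by simply holding out one procedure folder at a time.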
The Focused Interaction Dataset contains 19 continuous egocentric videos (378 minutes) captured at high resolution (1080p) and 25 fps using a shoulder-mounted GoPro Hero4 camera together with a smartphone (for inertial and GPS data), at 18 different locations and with 16 different conversational partners. The dataset is annotated into periods of no focused interaction, focused interaction (non-walk) and focused interaction (walk). It was captured at several indoor and outdoor locations, at different times of the day and night, and in different environmental conditions. A short annotation-handling sketch is given below.
Visit the Focused Interaction Dataset website to access the data.
Multimodal Egocentric Analysis of Focused Interactions
Finding Time Together: Detection and Classification of Focused Interaction in Egocentric Video
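The interval-style annotations can be turned into per-frame labels at the 25 fps frame rate as in the small sketch below; the CSV format, column names (start_s, end_s, label) and the default label string are assumptions for illustration, not the dataset's actual annotation files.

```python
# Sketch: expand focused-interaction interval annotations into per-frame labels
# at 25 fps. The CSV columns (start_s, end_s, label) and the default
# "no_focused_interaction" label are assumptions for illustration.
import csv

FPS = 25


def per_frame_labels(annotation_csv: str, n_frames: int) -> list[str]:
    """Return one interaction label per video frame from interval annotations."""
    labels = ["no_focused_interaction"] * n_frames
    with open(annotation_csv, newline="") as f:
        for row in csv.DictReader(f):
            start = max(0, int(float(row["start_s"]) * FPS))
            end = min(int(float(row["end_s"]) * FPS), n_frames)
            labels[start:end] = [row["label"]] * max(0, end - start)
    return labels


# Example with a hypothetical file name: labels for a 10-minute clip.
# frame_labels = per_frame_labels("video01_annotations.csv", n_frames=10 * 60 * FPS)
```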
The multimodal dataset contains 24 user-generated videos (70 minutes) captured using handheld mobile phones in high-brightness and low-brightness scenarios (e.g. day and night-time). Audio-visual data along with inertial sensor (accelerometer, gyroscope, magnetometer) readings are provided for each recording. The recordings were captured with a single camera at distinct times and locations, under changing lighting and with varying camera motion. Each video was manually annotated with per-second camera motion labels (pan, tilt, shake), and these ground-truth labels are included in the dataset. A simple per-second labelling heuristic is sketched below.
Visit the UGV Dataset website to access the data.
Gyro-based Camera Motion Detection in User-Generated Videos
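To give a flavour of per-second camera-motion labelling from gyroscope data, the sketch below applies a simple threshold heuristic over one-second windows; this is not the method of the associated paper, and the axis conventions and thresholds are assumptions for illustration.

```python
# Heuristic sketch: label one second of gyroscope samples as pan, tilt,
# shake or static. NOT the method of the associated paper; the axis
# conventions (index 1 = yaw, index 0 = pitch) and thresholds are assumptions.
import numpy as np


def label_second(gyro: np.ndarray,
                 rate_thresh: float = 0.3,
                 shake_thresh: float = 1.0) -> str:
    """gyro: (N, 3) angular rates in rad/s covering one second of samples."""
    jitter = float(np.linalg.norm(gyro.std(axis=0)))   # high variance -> shake
    mean_rate = np.abs(gyro.mean(axis=0))              # sustained rotation -> pan/tilt
    if jitter > shake_thresh:
        return "shake"
    if mean_rate[1] > rate_thresh and mean_rate[1] >= mean_rate[0]:  # assumed yaw axis
        return "pan"
    if mean_rate[0] > rate_thresh:                                   # assumed pitch axis
        return "tilt"
    return "static"
```

In practice, per-second predictions from any such detector would be compared against the dataset's ground-truth labels for evaluation.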
Organising roles:
Peer Review roles:
Journal Reviewers:
Conference Reviewers: