Senior Research Fellow
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS)
Department of Computer Science, University College London
Charles Bell House, London, W1W 7TY, UK
Email: sophia.bano-at-ucl.ac.uk
I am a Senior Research Fellow with interests in Computer Vision and Artificial Intelligence applications in Healthcare. Currently, I am associated with the Surgical Robot Vision Group in the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), where I work on the Guided Instrumentation for Fetal Therapy and Surgery (GIFT-Surg) project. I am keen to establish international collaborations that bring Artificial Intelligence into clinical settings. Previously, I worked on the Augmenting Communication using Environmental Data to Drive Language Prediction (ACE-LP) project as a Postdoctoral Research Assistant at the University of Dundee. I received my Bachelor's degree in Mechatronics Engineering and my Master's degree in Electrical Engineering from the National University of Sciences and Technology (NUST), Pakistan. Following this, I secured an Erasmus Mundus scholarship for the highly competitive MSc in Computer Vision and Robotics (VIBOT) programme, run jointly by Heriot-Watt University (UK), the University of Girona (Spain) and the University of Burgundy (France), and received my MSc degree in 2011 following an internship at Imperial College London (UK) under the supervision of Prof. Ferdinando Rodriguez y Baena. In 2016, I received a joint PhD from Queen Mary University of London (UK) and the Technical University of Catalonia (Spain) through an Erasmus Mundus fellowship, under the supervision of Prof. Andrea Cavallaro.
Computer Vision, Surgical Vision, Surgical Data Science, Medical Imaging, Geometric Understanding, Image Registration, Robotics, Multimodal Content Analysis (Audio, Video, Inertial Sensor), Unconstrained Multimedia Analysis.
For a complete list of publications, CLICK HERE.
AutoFB: Automating Fetal Biometry Estimation from Standard Ultrasound Planes
Automating Periodontal Bone Loss Measurement via Dental Landmark Localisation
FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset
A Fluidic Soft Robot for Needle Guidance and Motion Compensation in Intratympanic Steroid Injections
Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy
Deep Learning-based Fetoscopic Mosaicking for Field-of-View Expansion
Deep Human Activity Recognition with Localisation of Wearable Sensors
Deep Placental Vessel Segmentation for Fetoscopic Mosaicking
FetNet: A Recurrent Convolutional Network for Occlusion Identification in Fetoscopic Videos
Deep Learning Based Anatomical Site Classification for Upper Gastrointestinal Endoscopy
Deep Sequential Mosaicking of Fetoscopic Videos
Deep Human Activity Recognition using Wearable Sensors
XmoNet: A Fully Convolutional Network for Cross-Modality MR Image Inference
Multimodal Egocentric Analysis of Focused Interactions
Finding Time Together: Detection and Classification of Focused Interaction in Egocentric Video
ViComp: Composition of User-Generated Videos
Gyro-based Camera Motion Detection in User-Generated Videos
Discovery and Organization of Multi-Camera User-Generated Videos of the Same Event
Smooth Path Planning for a Biologically-Inspired Neurosurgical Probe
For a complete list of publications, CLICK HERE.
The Fetoscopy Placenta Dataset is associated with our MICCAI 2020 publication, "Deep Placental Vessel Segmentation for Fetoscopic Mosaicking". The dataset contains 483 frames with ground-truth vessel segmentation annotations, taken from six different in vivo fetoscopic procedure videos. It also includes six unannotated in vivo continuous fetoscopic video clips (950 frames) with predicted vessel segmentation maps obtained from the leave-one-out cross-validation of our method. A minimal loading sketch is given below.
Visit the Fetoscopy Placenta Dataset website to access the data.
Deep Placental Vessel Segmentation for Fetoscopic Mosaicking
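As a rough illustration of how the released frames and annotations might be consumed, here is a minimal Python sketch. The `images/` and `masks/` folder layout, the matching filenames, and the `fetoscopy_placenta_dataset` path are assumptions for illustration only; the actual structure is documented on the dataset website.

```python
# Minimal sketch for iterating over frame/mask pairs, assuming a hypothetical
# layout of images/ and masks/ folders with matching filenames.
from pathlib import Path

import cv2  # OpenCV for image I/O

DATA_ROOT = Path("fetoscopy_placenta_dataset")  # hypothetical local path

def iter_frame_mask_pairs(root: Path):
    """Yield (frame, mask) pairs; masks are binary vessel segmentations."""
    for frame_path in sorted((root / "images").glob("*.png")):
        mask_path = root / "masks" / frame_path.name
        frame = cv2.imread(str(frame_path), cv2.IMREAD_COLOR)
        mask = cv2.imread(str(mask_path), cv2.IMREAD_GRAYSCALE)
        if frame is not None and mask is not None:
            yield frame, mask

for frame, mask in iter_frame_mask_pairs(DATA_ROOT):
    vessel_fraction = (mask > 0).mean()  # fraction of pixels labelled as vessel
    print(frame.shape, f"vessel pixels: {vessel_fraction:.1%}")
```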
The Focused Interaction Dataset contains 19 continuous egocentric videos (378 minutes) captured at 18 different locations and with 16 different conversational partners, using a shoulder-mounted GoPro Hero4 camera at high resolution (1080p) and 25 fps, together with a smartphone for inertial and GPS data. Each video is annotated into periods of no focused interaction, focused interaction (non-walk) and focused interaction (walk). The recordings span several indoor and outdoor locations, different times of day and night, and varying environmental conditions. A sketch of mapping frames to annotations is given below.
Visit the Focused Interaction Dataset website to access the data.
Multimodal Egocentric Analysis of Focused Interactions
Finding Time Together: Detection and Classification of Focused Interaction in Egocentric Video
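The interaction annotations pair naturally with the 25 fps video via timestamps. Below is a minimal sketch of that mapping, assuming a hypothetical CSV of (start_s, end_s, label) segments and hypothetical label names; the actual annotation format is described on the dataset website.

```python
# Minimal sketch: map a video frame index to its interaction label,
# assuming per-segment annotations stored as (start_s, end_s, label) rows.
import csv

FPS = 25  # the videos are recorded at 25 frames per second

def load_segments(csv_path):
    """Read (start_s, end_s, label) annotation segments from a CSV file."""
    with open(csv_path, newline="") as f:
        return [(float(r["start_s"]), float(r["end_s"]), r["label"])
                for r in csv.DictReader(f)]

def label_for_frame(frame_idx, segments, default="no_interaction"):
    """Map a frame index to its annotation label via the video timestamp."""
    t = frame_idx / FPS
    for start, end, label in segments:
        if start <= t < end:
            return label
    return default

# Usage with hand-written segments (hypothetical label names):
segments = [(0.0, 12.5, "no_interaction"), (12.5, 60.0, "focused_walk")]
print(label_for_frame(300, segments))  # frame 300 -> t = 12.0 s -> "no_interaction"
```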
The multimodal dataset contains 24 user-generated videos (70 minutes) captured using handheld mobile phones in high-brightness and low-brightness scenarios (e.g. daytime and night-time). For each video, both the audio-visual stream and the inertial sensor data (accelerometer, gyroscope, magnetometer) are provided. The recordings were captured with a single camera at distinct times and locations, under changing lighting and varying camera motions. Each video was manually annotated with camera-motion labels (pan, tilt, shake) for every second; these ground-truth labels are included in the dataset. An illustrative per-second motion baseline is sketched below.
Visit the UGV Dataset website to access the data.
Gyro-based Camera Motion Detection in User-Generated Videos
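To make the per-second camera-motion labels concrete, here is an illustrative baseline, not the method from the paper above: flag each second as moving when the mean gyroscope magnitude exceeds a threshold. The (timestamp, gx, gy, gz) sample layout and the 0.2 rad/s threshold are assumptions for illustration.

```python
# Illustrative baseline only (not the paper's method): flag per-second camera
# motion from gyroscope magnitude. Assumes a hypothetical array of
# (timestamp_s, gx, gy, gz) samples in rad/s.
import numpy as np

def per_second_motion(gyro: np.ndarray, threshold: float = 0.2):
    """Return a motion flag per second from mean angular speed.

    gyro: array of shape (N, 4) with columns (timestamp_s, gx, gy, gz).
    threshold: angular speed (rad/s) above which a second is flagged as moving.
    """
    seconds = gyro[:, 0].astype(int)             # bucket samples by whole second
    speed = np.linalg.norm(gyro[:, 1:], axis=1)  # angular speed per sample
    return {int(s): bool(speed[seconds == s].mean() > threshold)
            for s in np.unique(seconds)}

# Example with synthetic data: 10 s of samples at 100 Hz, motion in seconds 3-5.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
g = rng.normal(0, 0.02, size=(t.size, 3))  # sensor noise while static
g[(t >= 3) & (t < 6)] += 0.5               # simulate a pan
print(per_second_motion(np.column_stack([t, g])))
```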
Organising roles:
Peer Review roles:
Journal Reviewers:
Conference Reviewers: