Visual Augmentation for Virtual Environments in Surgical Training

PhD research

Abstract

Augmented reality is an important tool for surgical training and skills assessment. The use of computer simulation, and in particular the reliance on patient-specific data for building models that are realistic both in terms of biomechanical fidelity and photorealism, has attracted extensive interest in recent years. For example, by fusing real bronchoscopy video with 3D tomographic data from the same patient, it is possible to generate photorealistic models that allow high-fidelity, patient-specific bronchoscope simulation. In order to match video bronchoscope images to the geometry extracted from 3D reconstructions of the bronchi, however, robust registration techniques have to be developed. This is a challenging problem, as it entails 2D-3D registration in the presence of tissue deformation and differing physiological responses.

In this thesis, we propose a new pq-space based 2D-3D registration method for camera pose estimation in endoscope tracking. The proposed technique extracts surface normals for each pixel of the video images by using a linear, local shape-from-shading algorithm derived from the unique camera-lighting constraints of endoscopes. We demonstrate how the derived pq-space distribution can be matched to that of the 3D tomographic model. The registration algorithm is further enhanced by introducing temporal constraints based on particle filtering. For motion prediction, a second-order auto-regressive model is used to characterise camera motion in a bounded lumen, as encountered in bronchoscope examination. The proposed method provides a systematic learning procedure with modular training from ground-truth data, such that information from different subjects is integrated into a dynamic model that accommodates the learnt behaviour.
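As a rough illustration of the two ingredients above, the Python sketch below assumes the surface gradients (p, q) = (dz/dx, dz/dy) have already been recovered per pixel, from shape-from-shading on the video side and from the rendered depth of the tomographic model at a candidate camera pose. It scores the pose by comparing the resulting surface normals, and shows a second-order auto-regressive prediction step of the kind used to propagate pose particles. The function names, the cosine-based similarity and the Gaussian noise model are illustrative assumptions, not the exact formulation used in the thesis; the matrices A1, A2, B and the mean state would be learnt from ground-truth bronchoscope trajectories.

import numpy as np

def pq_to_normals(pq, eps=1e-8):
    # Convert per-pixel surface gradients (p, q) = (dz/dx, dz/dy), shape (H, W, 2),
    # into unit surface normals proportional to (-p, -q, 1).
    n = np.concatenate([-pq, np.ones(pq.shape[:2] + (1,))], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

def pq_similarity(pq_video, pq_model):
    # Mean cosine similarity between the pq-space map of the video frame
    # (from shape-from-shading) and that of the rendered 3D model at a
    # candidate camera pose; higher values indicate a better match.
    return float((pq_to_normals(pq_video) * pq_to_normals(pq_model)).sum(-1).mean())

def ar2_predict(x1, x2, mean, A1, A2, B, rng):
    # Second-order auto-regressive prediction of camera-pose particles:
    #   x_t = mean + A1 (x_{t-1} - mean) + A2 (x_{t-2} - mean) + B w_t,  w_t ~ N(0, I)
    # x1, x2 : (n_particles, dim) particle states at t-1 and t-2.
    w = rng.standard_normal(x1.shape)
    return mean + (x1 - mean) @ A1.T + (x2 - mean) @ A2.T + w @ B.T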

To cater for airway deformation, an active shape model (ASM) driven 2D-3D registration method is proposed. The ASM captures the intrinsic variability of the tracheo-bronchial tree during breathing and is specific to the class of motion it represents. The method reduces the number of parameters that control the deformation, and thus greatly simplifies the optimisation procedure. Subsequently, pq-based registration is performed to recover both the camera pose and the parameters of the ASM. Radial Basis Functions (RBFs) are employed to smoothly warp the 3D mesh based on the ASM point correspondences. The method also exploits the recent development of five degrees-of-freedom miniaturised catheter-tip electromagnetic trackers, such that the position and orientation of the bronchoscope can be accurately determined in the presence of disocclusion and bleeding artefacts. The accuracy of the proposed method has been assessed by using both a specially constructed airway phantom with an electromagnetic tracker and in vivo patient data.
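The sketch below is a minimal example of the RBF warping step, assuming the ASM has already supplied displaced landmark positions (mean shape plus a weighted sum of its principal modes of variation). A Gaussian kernel is used here purely for illustration, with a hypothetical width sigma; the thesis does not prescribe this particular kernel or parameter choice.

import numpy as np

def rbf_warp(mesh_vertices, control_src, control_dst, sigma=20.0):
    # Warp every vertex of a 3D airway mesh with a smooth displacement field
    # that interpolates the motion of the ASM landmarks.
    #   mesh_vertices : (n, 3) reference mesh vertices
    #   control_src   : (m, 3) ASM landmark positions on the reference mesh
    #   control_dst   : (m, 3) displaced landmark positions for the current shape
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K = kernel(control_src, control_src)                      # (m, m) RBF matrix
    weights = np.linalg.solve(K + 1e-6 * np.eye(len(K)),      # small regulariser
                              control_dst - control_src)      # (m, 3) coefficients
    return mesh_vertices + kernel(mesh_vertices, control_src) @ weights

Because the warp is driven only by the landmark correspondences, the optimisation over deformation reduces to the few ASM mode weights, while the RBF interpolation propagates their effect smoothly to the full mesh.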

My PhD was sponsored by EPSRC (GR-R56822-01). You can download my PhD thesis here (5.6 MB).