Ian is an Associate Professor in the Centre of Digital Media Technology (DMT) and the subject lead for image and video technology in the DMT Lab. He is an expert in image analysis, Mixed and Augmented Reality (MR/AR), image texture analysis, 3D image processing and user interaction.
Ian delivers internationally recognised research as leader of the Image and Mixed Reality group within the DMT Lab. As subject lead, he directs the module content for the Level 4 module on Digital Audio Technology, the Level 6 module on Digital Image Processing, the Level 7 module on Research Methods, and the MSc Digital Broadcast Technology projects. Alongside this academic delivery, Ian supervises PhD research across many fields of digital media technology; he is currently Director of Studies for four PhD students, all researching mixed reality, multidimensional image processing and interactive systems.
Ian joined the University in 2008, following a career as a communications engineer for the UK railway network and after successfully completing his PhD in Medical Image Processing.
Ian currently serves on the scientific and technical committee of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) and is a reviewer for several international conferences and journals, including IEEE Signal Processing Letters, the International Journal of Optics, IET Image Processing and Elsevier's Computer Vision and Image Understanding.
Areas of Expertise (7)
Image Feature Extraction
Mixed and Augmented Reality
Digital Image Processing
Digital Video Processing
Education (2)
Manchester Metropolitan University: Ph.D., Image Processing 2007
Manchester Metropolitan University: B.Sc., Media Technology 2004
Selected Articles (5)
This paper presents an assessment of the variability in freehand grasping of virtual objects in an exocentric mixed reality environment. We report on an experiment covering 480 grasp-based motions in the transition phase of interaction to determine the level of variation in the grasp aperture. Controlled laboratory conditions were used in which 30 right-handed participants were instructed to grasp and move an object (cube or sphere) using a single medium wrap grasp from a starting location (A) to a target location (B) in a controlled manner. We present a comprehensive statistical analysis of the results showing the variation in grasp change during this phase of interaction. In conclusion, we detail recommendations for freehand virtual object interaction design, notably that consideration should be given to the change in grasp aperture over the transition phase of interaction.
This article presents an analysis of the accuracy and problems of freehand grasping in exocentric Mixed Reality (MR). We report on two experiments (1710 grasps) which quantify the influence that different virtual object shapes, sizes and positions have on the most common physical grasp, the medium wrap. We propose two methods for grasp measurement, namely the Grasp Aperture (GAp) and Grasp Displacement (GDisp). Controlled laboratory conditions are used in which 30 right-handed participants attempt to recreate a medium wrap grasp. We present a comprehensive statistical analysis of the results, giving pairwise comparisons of all conditions under test. The results illustrate that user Grasp Aperture varies less than expected in comparison to the variation in virtual object size, with common aperture sizes found.
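The precise definitions of GAp and GDisp are given in the article itself; as an illustration of the kind of measurement involved, a grasp aperture is commonly computed as the distance between tracked thumb and index fingertip positions, and a displacement as the offset of the grasp from the object. A minimal sketch under those assumptions (the fingertip-distance definition is an assumption here, not quoted from the paper):

```python
import math

def grasp_aperture(thumb_tip, index_tip):
    """Euclidean distance between tracked thumb and index fingertip
    positions (e.g. in metres) -- one common notion of grasp aperture."""
    return math.dist(thumb_tip, index_tip)

def grasp_displacement(grasp_midpoint, object_centre):
    """Offset vector from the virtual object's centre to the midpoint
    of the grasp -- one way to express how far a grasp 'misses'."""
    return tuple(g - o for g, o in zip(grasp_midpoint, object_centre))
```

For example, fingertips 6 cm apart along one axis give an aperture of 0.06 m, and a grasp midpoint offset 10 cm above the object centre gives a displacement of (0, 0.1, 0).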
When human actors interact with virtual objects, the result is often not convincing to a third-party viewer, due to incongruities between the actor and object positions. In this study we aim to quantify the magnitude and impact of the errors that occur in a bimanual interaction, that is, when an actor attempts to move a virtual object by holding it between both hands. A three-stage framework is presented which first captures the magnitude of these interaction errors, then quantifies their effect on a third-party audience, and finally assesses methods to mitigate the impact of the errors. Findings from this work show that the degree of error was dependent on the size of the virtual object and on the axis of the hand placement with respect to the axis of the interactive motion.
This work presents an objective performance analysis of statistical tests for edge detection which are suitable for textured or cluttered images. The tests are subdivided into two-sample parametric and non-parametric tests and are applied using a dual-region based edge detector which analyses local image texture difference. Through a series of experimental tests, objective results are presented across a comprehensive dataset of images using a Pixel Correspondence Metric (PCM). The results show that statistical tests can, in many cases, outperform the Canny edge detection method, giving robust edge detection, accurate edge localisation and improved edge connectivity throughout. A visual comparison of the tests is also presented using representative images taken from typical textured histological data sets.
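As a flavour of the dual-region idea, one can score a candidate vertical edge at a pixel by applying a two-sample test to the pixel populations of two windows either side of it; a large test statistic suggests the two regions belong to different textures. The sketch below is an assumption for illustration, not the paper's implementation: it uses only Welch's t-test on rectangular half-windows, whereas the paper evaluates a range of parametric and non-parametric tests.

```python
import numpy as np
from scipy.stats import ttest_ind

def edge_strength(image, row, col, half=4):
    """Two-sample test between the pixel populations to the left and
    right of (row, col). A large |t| indicates the regions likely come
    from different distributions, i.e. a probable vertical edge here."""
    left = image[row - half:row + half + 1, col - half:col].ravel()
    right = image[row - half:row + half + 1, col + 1:col + half + 1].ravel()
    t, _ = ttest_ind(left, right, equal_var=False)  # Welch's t-test
    return abs(t)
```

Sweeping this score across columns and thresholding it yields an edge map; non-parametric alternatives (e.g. Mann-Whitney) can be substituted for the t-test when the texture statistics are far from Gaussian.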
Virtual studios typically use a layering method to achieve occlusion. A virtual object can be manually set in the foreground or background layer by a human controller, allowing it to appear in front of or behind an actor. Single point actor tracking systems have been used in virtual studios to automate occlusions. However, the suitability of single point tracking diminishes when considering more ambitious applications of an interactive virtual studio. As interaction often occurs at the extremities of the actor’s body, the automated occlusion offered by single point tracking is insufficient and multiple-point actor tracking is justified. We describe ongoing work towards an automatic occlusion system based on multiple-point skeletal tracking that is compatible with existing virtual studios. We define a set of occlusions required in the virtual studio; describe methods for achieving them; and present our preliminary results.
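To illustrate why richer actor tracking helps: if the compositor can estimate depth at every pixel for both the actor and the virtual object, occlusion resolves itself automatically with a per-pixel depth comparison instead of a manual foreground/background layer assignment. A minimal sketch, assuming dense per-pixel depth maps are available (an assumption; the abstract describes skeletal multiple-point tracking, not dense depth):

```python
import numpy as np

def composite(studio_rgb, actor_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion: draw the virtual object only where it lies
    nearer the camera than the actor (smaller depth = closer)."""
    mask = virtual_depth < actor_depth
    out = studio_rgb.copy()
    out[mask] = virtual_rgb[mask]
    return out
```

A single-point tracker effectively makes `actor_depth` constant across the frame, which is why it fails when interaction happens at the extremities of the actor's body; multiple-point skeletal tracking lets the depth estimate vary across the actor.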