My research training was obtained in High Energy Physics on an experiment at CERN. Since then I have spent 27 years as a computer vision and medical physics researcher. The first five years were spent at the Artificial Intelligence Vision Research Unit, where I was involved in the initial development of the TINA vision system. This has formed a key focus for subsequent research and has culminated in several advanced systems, one of which is a stereo vision system for robotic applications. I joined the Electronic Systems Group in 1992 and remained there for a further five years. While there, my research focused on the design of VLSI chips to support computer vision algorithms, and resulted in the fabrication of several working devices.


My work has addressed a wide range of topics, including neural networks, stereo vision and object recognition. A strong component of this has always been the use of statistical methodology. I have quite strong views on the need for algorithmic methodologies for design and testing (see the machine vision notes below). I believe that such approaches, commonly found in other scientific disciplines, could significantly improve the quantity and quality of useful results in this area.

My current position at the University of Manchester is Senior Lecturer in Neuroimage Analysis. In this capacity I was the first person at the University of Manchester to develop an fMRI analysis system for the investigation of brain activity using MR. This work continued for several years, and included experiments to investigate the functional processes underlying object constancy, in collaboration with researchers at the University of Bangor. Following this project I worked on a system for the automatic location and analysis of landmarks on biological structures (e.g. fly wings and mice). More recently I have been developing systems for the early detection of cancer using diffusion-weighted MRI.

I teach Quantitative Medical Image Analysis to MSc students. For many years I contributed to the EPSRC Machine Vision Summer School. I have been actively involved with the BMVA, holding the positions of Publicity Officer and Meetings and Company Secretary.

For five years I was chair of the steering committee for the annual Medical Image Understanding and Analysis conference.

I currently supervise 6 PhD students and 6 Research Associates on multiple projects in the areas of medical imaging, machine vision and the quantitative use of pattern recognition. These projects are funded by a variety of bodies including Leverhulme, CRUK and the European Commission. My research often involves assessing the basic principles on which the subjects of computer vision and image analysis are founded. Recent research has involved: developing a statistically self-consistent solution to the problem of analysing point-based shape models for genetics, the first fully quantitative pattern recognition system, and methods for the calibration of MRI-based diffusion measurement for clinical practice. All work is done using principles of the quantitative use of probability, backed up with Monte-Carlo testing, generally bootstrapped from real-world data samples.
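To illustrate the style of testing described above, here is a minimal sketch of a bootstrap Monte-Carlo estimate of a confidence interval, resampled with replacement from a real-world data sample. The function name, sample data and parameter choices are illustrative, not taken from any of the systems mentioned:

```python
import random
import statistics

def bootstrap_ci(data, statistic, n_resamples=2000, alpha=0.05, seed=0):
    """Bootstrap (1 - alpha) confidence interval for a statistic,
    resampling with replacement from the observed data."""
    rng = random.Random(seed)
    n = len(data)
    estimates = []
    for _ in range(n_resamples):
        # Draw a resample of the same size as the original data set.
        sample = [data[rng.randrange(n)] for _ in range(n)]
        estimates.append(statistic(sample))
    estimates.sort()
    # Percentile interval from the ordered bootstrap estimates.
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Example: interval for the mean of a small (hypothetical) measurement set.
data = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.4]
lo, hi = bootstrap_ci(data, statistics.mean)
```

The key point is that the sampling distribution is generated from the data itself, rather than from a parametric assumption about the measurement errors.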

- Do we believe that a good starting point for a theory of human vision is to assume it makes the best use of the data available?
- Does model selection (pattern recognition) actually require knowledge of the measurement errors for a meaningful solution?
- Do we have to accept that there are a multitude of ways of comparing probability density distributions, and can the Kullback-Leibler Divergence ever be used legitimately as the theoretical basis for an algorithm?
- Is Likelihood really required to be formulated in a space of uniform errors in order to remain quantitatively valid?
- Are algorithms based upon closed form solutions to sets of constraint equations apparently mathematically solid but in fact generally statistically inept?
- Are some theoretical mathematical approaches, such as Riemann geometry, incapable of supporting a valid statistical framework?
- Is it true that an algorithm based upon Bayes Inference can never be meaningfully tested?
- Do either Bayes Theory or Information Theory offer a quantitatively valid generalisation to the theory of Likelihood?
- Are estimation techniques based upon either Mutual Information or MAP simply a bad re-invention of Likelihood?
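As a concrete illustration of the question about comparing probability density distributions, the following sketch (distributions chosen purely for illustration) computes the Kullback-Leibler divergence for two discrete distributions in both directions, showing that it is not symmetric and so does not behave like a distance:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    given as lists of probabilities over the same outcomes."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# KL is asymmetric: D(p||q) != D(q||p) in general, one reason there is
# no unique way of comparing probability density distributions.
d_pq = kl_divergence(p, q)
d_qp = kl_divergence(q, p)
```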

To find answers and opinions, see the technical documents on our Tina web pages.

EPSRC Summerschool notes (Performance Characterisation)(postscript)

EPSRC Summerschool notes (transparencies)(postscript)

Journal Publications

Conference Publications

**E-Mail contact: neil.thacker (at) manchester.ac.uk**