The camera models introduced here are intended to be as general as possible. We adopt the conventional rotation-matrix and translation approach rather than the more popular homogeneous notation and ``essential matrices'' commonly found in the literature. There are two reasons for this. First, all cameras (that we are aware of) have orthogonal measurement axes and the real world is Cartesian; the ability to represent arbitrary skews is therefore unnecessary, and deliberately mixing it with spatial warping is unhelpful when attempting to parameterise a system. Second, although homogeneous notation may appear superficially simple, software implementations of conventional co-ordinate transformations are more easily interpreted by PhD-level researchers than the equivalent homogeneous form (which requires significant additional documentation to explain).
Single, binocular, trinocular, temporal and other more esoteric combinations of camera geometries can be represented. Camera structures hold both the physical properties of cameras (intrinsic parameters) and the information required to compute the mapping between standard iconic image coordinates and camera-based image coordinates (extrinsic parameters), from which actual measurements in the world can be derived (including the removal of any distortion known to be present as a result of the imaging process).
Enough information is maintained in the camera structure for its parameters to be rescaled to any image size. Hence, from the camera model associated with an image of one size it is possible to derive the camera models of all scaled variants of that image. These variants may result from either hardware or software quantization, or from image expansion (with interpolated pixel values).
Tina camera structure definitions and typedefs (amongst other things) can be included with
function declarations can be included with
and programs should be linked with the libraries
-ltinavision -ltinamath -ltinasys
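As a concrete illustration (assuming a Unix C toolchain and that the Tina headers and libraries are installed in the compiler's default search paths; the program name is hypothetical), a build line might look like:

```sh
cc -o myprog myprog.c -ltinavision -ltinamath -ltinasys
```

If the libraries live elsewhere, the usual `-I` and `-L` flags would be needed to point the compiler and linker at them.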