# Instrumental magnitude

Instrumental magnitude refers to an uncalibrated apparent magnitude. Like its counterpart, it describes the brightness of an astronomical object as seen by an observer on Earth, but unlike its counterpart, it is useful only for relative comparisons with other astronomical objects in the same image (assuming the photometric calibration does not vary spatially across the image; in images from the Palomar Transient Factory, for example, the absolute photometric calibration involves a zero point that varies across the image by up to 0.16 magnitudes to make a required illumination correction). Instrumental magnitude is defined in various ways, so when working with instrumental magnitudes it is important to know how they are defined. The most basic definition of instrumental magnitude, $m$ , is given by
$m=-2.5\log _{10}(f)$ where $f$ is the intensity of the source object in known physical units. For example, in the paper by Mighell, the data were assumed to be in units of electron counts (generated within the pixels of a charge-coupled device). The physical units of the source intensity are therefore part of the definition required for any instrumental magnitudes that are employed. The factor of 2.5 in the formula above stems from the established fact that the human eye can clearly distinguish the brightness of two objects only if one is at least roughly 2.5 times brighter than the other. The instrumental magnitude is defined such that two objects with a brightness ratio of exactly 100 differ by precisely 5 magnitudes. This follows Pogson's system, in which each successive magnitude is fainter by a factor of $100^{1/5}$ . We can now relate this ratio to the base-10 logarithmic function and the leading coefficient in the above formula:
$100^{1/5}=(10^{2})^{1/5}=10^{2/5}=10^{0.4}=2.51188643\cdots$ 
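The definition and the Pogson relation above can be checked numerically. The following is a minimal sketch (the function name `instrumental_magnitude` is illustrative, not from the source) showing that a flux ratio of exactly 100 yields a magnitude difference of exactly 5, and that one magnitude corresponds to a flux ratio of $100^{1/5}\approx 2.512$:

```python
import math

def instrumental_magnitude(flux):
    """Instrumental magnitude m = -2.5 * log10(f) for a source flux f
    in known physical units (e.g. electron counts from a CCD)."""
    return -2.5 * math.log10(flux)

# Two sources whose fluxes differ by a factor of exactly 100
# differ by precisely 5 magnitudes; the fainter source has the
# larger (more positive) magnitude.
m_bright = instrumental_magnitude(100000.0)  # 1e5 electrons -> -12.5
m_faint = instrumental_magnitude(1000.0)     # 1e3 electrons -> -7.5
print(m_faint - m_bright)                    # 5.0

# One magnitude corresponds to a flux ratio of 100**(1/5):
print(100 ** 0.2)                            # 2.5118864...
```

Note that only magnitude *differences* are meaningful here: adding a calibration zero point to both magnitudes would leave the difference unchanged, which is why instrumental magnitudes support relative comparisons within a single image.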