5.07  detectability index; d′. The square root of the ratio of the difference between the mean signal-plus-noise power and the mean noise power to the variance of the noise at the point where the detection decision is made. The detector may be a subject or any other decision-making device.

Annotation 1      As a consequence of this definition, d′ is the value of (2E/N₀)^(1/2) necessary for an ideal detector to achieve the specified level of performance if the signal is known exactly and the noise is white, for the special case in which both noise and signal plus noise have Gaussian distributions and equal variance, where E is the total signal energy and N₀ is the noise power per unit bandwidth. Under these conditions the detectability index is known as the “normal detectability index.” Contours of constant normal detectability index are often plotted as curves on a receiver-operating-characteristic graph, and this family is called the “normal receiver-operating-characteristic.” See 5.41 and 5.52.
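The relation in Annotation 1 is a direct computation; a minimal Python sketch (the function name is illustrative, not part of the standard):

```python
from math import sqrt

def normal_detectability_index(energy: float, n0: float) -> float:
    """d' = (2E/N0)^(1/2) for an ideal detector: signal known exactly, white noise.

    energy -- total signal energy, E
    n0     -- noise power per unit bandwidth, N0
    """
    return sqrt(2.0 * energy / n0)

# Example: E = 2, N0 = 1 gives d' = 2.0
```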

Annotation 2      For a yes-no experiment, the value of the normal detectability index can be computed from the conditional probability of detection, PD, and the conditional probability of false alarm, PFA. The equation is:

            d′ = ϕ⁻¹(PD) − ϕ⁻¹(PFA)

where ϕ⁻¹ is the inverse of the normal distribution function.
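The equation in Annotation 2 can be evaluated with the Python standard library, whose `statistics.NormalDist().inv_cdf` is the inverse of the normal distribution function; a minimal sketch (the function name is illustrative):

```python
from statistics import NormalDist

def d_prime_yes_no(p_d: float, p_fa: float) -> float:
    """Normal detectability index from a yes-no experiment.

    p_d  -- conditional probability of detection, PD
    p_fa -- conditional probability of false alarm, PFA
    """
    inv = NormalDist().inv_cdf  # inverse of the standard normal distribution function
    return inv(p_d) - inv(p_fa)

# Example: PD = 0.95, PFA = 0.05 gives d' ≈ 3.29
```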

Annotation 3      For a balanced two-alternative forced-choice experiment, the equation is:

            d′ = 2^(1/2) ϕ⁻¹(Pcorrect)
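The same standard-library routine covers the two-alternative forced-choice case of Annotation 3; a minimal sketch (the function name is illustrative):

```python
from math import sqrt
from statistics import NormalDist

def d_prime_2afc(p_correct: float) -> float:
    """Normal detectability index from a balanced two-alternative
    forced-choice experiment: d' = 2^(1/2) * inv_phi(Pcorrect)."""
    return sqrt(2.0) * NormalDist().inv_cdf(p_correct)

# Example: Pcorrect = 0.76 gives d' ≈ 1.0
```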

Annotation 4      See W. P. Tanner, Jr., and T. G. Birdsall, “Definitions of d′ and η as psychophysical measures,” J. Acoust. Soc. Am., vol. 30, no. 10, pp. 922–928, 1958.

Annotation 5      The detectability index, d′, is equivalent to the square root of the detection index, d, used in sonar performance modeling. See R. J. Urick, Principles of Underwater Sound, p. 382, McGraw-Hill, New York, NY, 1983.
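The conversion stated in Annotation 5 is a single square root; a minimal sketch (the function name is illustrative):

```python
from math import sqrt

def d_prime_from_detection_index(d: float) -> float:
    """Detectability index d' from the sonar detection index d: d' = d^(1/2)."""
    return sqrt(d)

# Example: a detection index d = 16 gives d' = 4.0
```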
