
Section 16.5 IEEE 754

Specific floating point formats involve trade-offs between resolution, round-off errors, size, and range. The most commonly used formats are those defined in the IEEE 754 standard. They range in size from four to sixteen bytes. The most common sizes used in C/C++ are floats (4 bytes) and doubles (8 bytes). The ARM architecture supports both sizes.

In the IEEE 754 4-byte format, one bit is used for the sign, eight for the exponent, and twenty-three for the significand. The IEEE 754 8-byte format specifies one bit for the sign, eleven for the exponent, and fifty-two for the significand.

The bit patterns for floats and doubles are arranged as shown in Figure 16.5.1.

Figure 16.5.1. IEEE 754 bit patterns. (a) float (b) double

In this section we use the 4-byte format in our examples in order to save ourselves (hand) computation effort. The goal is to get a feel for the limitations of floating point formats. The normalized form of a floating point number in binary is:

\begin{gather} N = (-1)^{s} \times 1.f \times 2^{e}\label{eq-ieee}\tag{16.5.1} \end{gather}

where: \(s\) is the sign bit, \(f\) is the 23-bit fractional part of the significand, and \(e\) is the 8-bit exponent.

As in decimal, the exponent is adjusted such that there is only one non-zero digit to the left of the binary point. In binary, though, this digit is always one. Since it is always one, it need not be stored. Only the fractional part of the normalized value needs to be stored as the significand. Storing only the fraction effectively gains one bit of precision in the significand. The integer part (one) that is not stored is sometimes called the hidden bit.

The sign bit, \(s\text{,}\) refers to the number. Another scheme is used to represent the sign of the exponent, \(e\text{.}\) Your first thought is probably to use two's complement. However, the IEEE format was developed in the 1970s, when floating point computations took a lot of CPU time. Many algorithms depend only on comparing two numbers, and the computer scientists of the day realized that a format that allowed the use of integer comparison instructions would result in faster execution times. So they decided to store a biased exponent as an unsigned integer. The amount of the bias is one-half the range allocated for the exponent. In the case of an 8-bit exponent, the bias amount is \(127\text{.}\)

The hidden bit scheme presents a problem—there is no way to represent zero. To address this issue, the IEEE 754 standard has several special cases:

Zero Value

Shown by setting all the bits in the exponent and significand to zero. Notice that this allows for \(-0.0\) and \(+0.0\text{,}\) although \((-0.0 == +0.0)\) computes to true.

Denormalized Value

Shown by setting all the bits in the exponent to zero. In this case there is no hidden bit. Zero can be thought of as a special case of denormalized.


Infinity

Shown by setting all the bits in the exponent to one and all the bits in the significand to zero. The sign bit allows for \(-\infty\) and \(+\infty\text{.}\)


Not a Number (NaN)

Shown by setting all the bits in the exponent to one, and the significand to non-zero. This is used when the results of an operation are undefined. For example, \(\pm{}nonzero \div 0.0\) yields infinity, but \(\pm{}0.0 \div \pm{}0.0\) yields NaN.

Example 16.5.2.

Show how \(97.8125\) is stored in 32-bit IEEE 754 binary format.

  1. Convert the number to binary.

    \begin{align*} 97.8125_{10} &= \binary{1100001.1101}_{2}\\ &= (-1)^0 \times \binary{1100001.1101} \times 2^{0} \end{align*}
  2. Adjust the exponent to obtain the normalized form.

    \begin{gather*} (-1)^0 \times \binary{1100001.1101} \times 2^{0} = (-1)^0 \times \binary{1.1000011101} \times 2^{6} \end{gather*}
  3. Compute \(s\text{,}\) \(e+127\text{,}\) and \(f\text{.}\)

    \begin{align*} s &= \binary{0}\\ e + 127 &= 6 + 127\\ &= 133\\ &= \binary{10000101}_{2}\\ f &= \binary{10000111010000000000000} \end{align*}
  4. Finally, use Figure 16.5.1 to place the bit patterns. (Remember that the hidden bit is not stored; it is understood to be there.)

    \begin{align*} 97.8125_{10} &= \binary{0 10000101 10000111010000000000000}_{2}\\ &= \hex{42c3a000}_{16} \end{align*}
Example 16.5.3.

Using IEEE 754 32-bit format, what decimal number does the bit pattern \(\hex{3e400000}_{16}\) represent?

  1. Convert the hexadecimal to binary, inserting the spaces suggested by Figure 16.5.1.

    \begin{gather*} \hex{3e400000}_{16} = \binary{0} \binary{01111100} \binary{10000000000000000000000}_{2} \end{gather*}
  2. Compute the values of \(s\text{,}\) \(e\text{,}\) and \(f\text{.}\)

    \begin{align*} s &= \binary{0}\\ e + 127 &= \binary{01111100}_{2}\\ &= 124_{10}\\ e &= -3_{10}\\ f &= \binary{10000000000000000000000} \end{align*}
  3. Finally, plug these values into Equation (16.5.1). (Remember to add the hidden bit.)

    \begin{align*} (-1)^0 \times \binary{1.100\dots 00} \times 2^{-3} &= (-1)^0 \times \binary{0.0011} \times 2^{0}\\ &= 0.1875_{10} \end{align*}