\(\newcommand{\doubler}[1]{2#1} \newcommand{\binary}{\mathtt} \newcommand{\hex}{\mathtt} \newcommand{\octal}{\mathtt} \newcommand{\prog}{\mathtt} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Section 2.5 Unsigned Decimal to Binary Conversion

Signed numbers can be either positive or negative, but unsigned numbers have no sign. Be careful to remember that a positive signed number is not unsigned.

Say we wish to convert an unsigned decimal integer, \(N\text{,}\) to binary. We set it equal to the expression in Equation (2.3.4), giving us:

\begin{equation} N = d_{n-1} \times 2^{n-1} + d_{n-2} \times 2^{n-2} + \ldots + d_{1} \times 2^{1} + d_{0} \times 2^{0} \tag{2.5.1} \end{equation}

where \(d_{i} = 0\) or \(1\text{.}\) Dividing both sides by \(2\text{,}\)

\begin{equation} N_{1} + \frac{r_0}{2} = d_{n-1} \times 2^{n-2} + d_{n-2} \times 2^{n-3} + \ldots + d_{1} \times 2^{0} + d_{0} \times 2^{-1} \tag{2.5.2} \end{equation}

where \(N_{1} = N/2\) (the integer div operation) and the remainder, \(r_0\text{,}\) is \(0\) or \(1\text{.}\) Since \(N_{1}\) is an integer and all the terms except the \(2^{-1}\) term on the right-hand side of Equation (2.5.2) are integers, we can see that \(d_{0} = r_{0}\text{.}\) Subtracting \(\frac{r_{0}}{2}\) from both sides gives,
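For example (a worked instance of this step, not from the text), let \(N = 13 = \binary{1101}\text{.}\) Dividing both sides by \(2\) gives

\begin{equation*} 6 + \frac{1}{2} = 1 \times 2^{2} + 1 \times 2^{1} + 0 \times 2^{0} + 1 \times 2^{-1} \end{equation*}

so \(r_{0} = 1 = d_{0}\text{,}\) the low-order bit of \(\binary{1101}\text{.}\)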

\begin{equation} N_{1} = d_{n-1} \times 2^{n-2} + d_{n-2} \times 2^{n-3} + \ldots + d_{1} \times 2^{0} \tag{2.5.3} \end{equation}

Dividing both sides of Equation (2.5.3) by two:

\begin{equation} N_{2} + \frac{r_1}{2} = d_{n-1} \times 2^{n-3} + d_{n-2} \times 2^{n-4} + \ldots + d_{1} \times 2^{-1} \tag{2.5.4} \end{equation}

where \(N_{2} = N_{1}/2\) and \(r_{1}\) is the remainder of that integer division. From Equation (2.5.4) we see that \(d_{1} = r_{1}\text{.}\) It follows that the binary representation of a number can be produced from right (low-order bit) to left (high-order bit) by applying the algorithm shown in Algorithm 2.5.1.

There are times in some programs when it is more natural to specify a bit pattern rather than a decimal number. We have seen that it is easy to convert between number bases, so you could convert the bit pattern to a decimal value and then use that. But it is usually much easier to think of the bits in groups of four and use hexadecimal to specify each group.

For example, if your algorithm required the use of zeros alternating with ones,

\begin{equation*} \binary{0101\;0101\;0101\;0101\;0101\;0101\;0101\;0101} \end{equation*}

this can be converted to the decimal value,

\begin{equation*} 431655765 \end{equation*}

or expressed in hexadecimal (shown here in C/C++ syntax)

0x55555555

Once you have memorized Table 2.1.1, it is clearly much easier to work with hexadecimal for bit patterns.

The discussion in these two sections has dealt only with unsigned integers. The representation of signed integers depends upon some architectural features of the CPU and will be discussed in Chapter 3 when we discuss computer arithmetic.