\(\newcommand{\doubler}[1]{2#1} \newcommand{\binary}{\mathtt} \newcommand{\hex}{\mathtt} \newcommand{\octal}{\mathtt} \newcommand{\prog}{\mathtt} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)

Chapter 14 Bit Operations; Multiplication and Division

We saw in Section 4.1 that input read from the keyboard and output written on the screen are in the ASCII code and that integers are stored in the binary number system. So if a program reads user input as, say, \(123_{10}\text{,}\) that input is read as the characters ‘1’, ‘2’, and ‘3’, but the value used in the program is represented by the bit pattern \(\hex{0000007b}_{16}\text{.}\) In this chapter, we return to the conversion algorithms between these two storage codes and look at the assembly language that is involved.

Character Storage

Some programs, notably those that perform few arithmetic operations, keep numbers in the character code. This requires more complex algorithms for performing arithmetic but simplifies character string processing.