Converting To Binary, Octal, and Hexadecimal


  1. Understanding Radix
  2. Converting To Binary, Octal, and Hexadecimal
  3. Radix Economy
  4. Binary (Base-2) And Its Operations
  5. Negative Numbers In Binary

This is the second article in a series intended to help the reader understand binary, octal, and hexadecimal: three radices of great importance to contemporary computer theory. By the end of this series, you should be able to read and convert integer values into binary, octal, and hexadecimal, perform arithmetic in all three representations, understand basic Boolean operations, and otherwise have a deeper appreciation of the power of binary.

This article builds upon the previous article by outlining three important radices (binary, octal, and hexadecimal) that are useful in the field of computer science. I start with arbitrary base conversion using two methods. Then, a bit of background is given for why these bases are important, particularly binary. We then show the processes of the two conversion methods by converting from decimal to each featured radix: binary, octal, and hexadecimal.

Arbitrary Base Conversion

In the following, we endeavor to convert from one radix representation to a different radix. We delineate these radices as the source base and the target base. We convert from the source base to the target base. This article does not cover converting the fractional part of a number in a given radix.

In conversion, the source radix matters less than the target radix. The source radix simply tells us what value is being represented, giving us the place values by which to multiply each digit.

The most straightforward method of converting from a source to a target base is to enumerate the place values of the target base (in order) that are less than the value being converted. From there, you can perform what is known as a Euclidean division, a fancy way of saying you compute a quotient and a remainder. You divide the source value (the dividend) by the highest place value (the divisor) and record the quotient as the digit, retaining the remainder. You then move to the next lowest place value and repeat, using the remainder as the new dividend. This method is shown below for the small value 190 decimal into base 3:
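
The place values of base 3 that do not exceed 190 are 81, 27, 9, 3, and 1 (243 is too large). Working down from the largest:

  190 ÷ 81 = 2 remainder 28  → digit 2
   28 ÷ 27 = 1 remainder 1   → digit 1
    1 ÷ 9  = 0 remainder 1   → digit 0
    1 ÷ 3  = 0 remainder 1   → digit 0
    1 ÷ 1  = 1 remainder 0   → digit 1

Reading the quotients in order gives 21001, so 190 in decimal is 21001 in base 3.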

This could be considered a naive conversion method, and it leaves the calculator room for improvement: calculating divisions by large place values is complicated and drawn out. Fortunately, there is a better way, though it is counterintuitive at first. We can “flip” the Euclidean division so that, instead of recording quotients, we record remainders. To do this, instead of dividing by each place value, we divide by the radix itself, storing the remainder as the digit and the quotient as the dividend of the next step. This method is shown below for the same value, 190 decimal into base 3:
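
This time we divide by 3 itself at every step:

  190 ÷ 3 = 63 remainder 1
   63 ÷ 3 = 21 remainder 0
   21 ÷ 3 = 7  remainder 0
    7 ÷ 3 = 2  remainder 1
    2 ÷ 3 = 0  remainder 2

Reading the remainders from last to first gives 21001, the same result as before. As a minimal sketch in Python (assuming a non-negative integer value and a target base of at least two), the procedure looks like this:

    # Repeated-division method: divide by the radix, keep the remainders.
    def to_base(value, base):
        if value == 0:
            return [0]
        digits = []
        while value > 0:
            value, remainder = divmod(value, base)   # one Euclidean division step
            digits.append(remainder)                 # the remainder becomes a digit
        return digits[::-1]                          # remainders arrive least-significant first

    print(to_base(190, 3))   # [2, 1, 0, 0, 1], i.e. 21001 in base 3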

This works because of the nature of the remainder when dividing by the radix. Every place value above the one’s place is a multiple of the radix, since each place value is the radix multiplied by itself a number of times given by its index. By dividing by the radix, we are essentially subsuming, or dividing out, every higher place value, and what cannot be divided, the one’s digit, remains as the remainder. This idea can be written mathematically using Euclid’s division lemma, where q is the quotient and r is the remainder, and a and b are the dividend and divisor respectively:
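
  a = b × q + r, with 0 ≤ r < b

Here a is the value (or the quotient carried over from the previous step), b is the radix, q becomes the value for the next step, and r is the digit we record.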

This method may make more sense if you look at the operations in reverse: multiplying by the radix and adding the remainder back in. This is illustrated by the table in the Horner’s Method section below, in which the second row is the previous running total from the third row multiplied by three, and the third row is the sum of the first two rows.

This method still requires the use of division, although a simpler division. For the calculator, there is still room for improvement, as division operations are more complicated and arduous to carry out than other operations. Addition and multiplication (repeated addition) can be carried out more easily than subtraction and division (repeated subtraction). Is there a way we could convert a value from a source base to a target base using a minimum of addition and multiplication?

Horner’s Method

The answer lies in an algorithm called Horner’s Method or Horner’s Scheme. Named after the mathematician William George Horner, but dating further back to Chinese and Persian mathematicians, Horner’s Method is an algorithm for efficiently evaluating polynomials, utilizing addition to simplify what would otherwise be extended multiplications.

It is based on Horner’s Rule, which, put succinctly, unwraps a polynomial such as a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ into a nested (recursive) expression a₀ + x(a₁ + x(a₂ + x(a₃ + x·a₄))). This allows the evaluation of a polynomial of degree n with only n multiplications and n additions.

When considering a value in a given base, we can substitute the radix for x and the digits for the coefficients (aₙ). So, for 190 expressed using a radix of 3 above (21001), we would have:
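
  2·3⁴ + 1·3³ + 0·3² + 0·3¹ + 1·3⁰ = 162 + 27 + 0 + 0 + 1 = 190

or, in the nested form of Horner’s Rule:

  1 + 3(0 + 3(0 + 3(1 + 3·2))) = 190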

We can then work through the process of multiplying and adding the digits using a table inspired by synthetic division (which is itself based on Horner’s Method):
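
  digit (of 21001, base 3):    2      1      0      0      1
  previous total × 3:                 6     21     63    189
  running total:               2      7     21     63    190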

We start with the left-most digit, multiply it by the radix, and then add the next digit to the right. We repeat this process until we run out of digits; the final running total is the result. If we perform the additions and multiplications in our target base, the end result will be a complete base conversion.
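
As a minimal sketch in Python (a counterpart to the to_base sketch above), Horner’s Method for turning a list of digits in a given base back into a plain integer might look like this:

    # Horner's Method: multiply the running total by the radix, then add the next digit.
    def from_base(digits, base):
        total = 0
        for digit in digits:        # left-most (most significant) digit first
            total = total * base + digit
        return total

    print(from_base([2, 1, 0, 0, 1], 3))   # 190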

This method, when done by hand or with the assistance of a calculator, is most useful for converting from arbitrary radices to decimal (a radix of ten). The observant reader will notice that this was a very formal (and roundabout) way to find the value of a numeral given in any radix: summing each digit multiplied by its respective power of the radix.

Binary

The standard numerical system, at least in science, is a positional notation using a radix of ten, known as the Indo-Arabic numeral system. This system uses the numerals/digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 to form numbers in increasing powers of ten, and it is the most common symbolic representation of numbers in the world. When used to represent integers and non-integers (fractions) alike, this system is called decimal notation. You will also find decimal used to refer purely to the fractional part of a number, that is, the digits after the decimal point. For a review of positional notation using a radix of ten, refer to the previous article in the series: Understanding Radix.

You might also see decimal referred to as denary or decanary, though not often. The common characteristic here is the dec- prefix, which traces back through the Latin decem to the same root as the Greek word for ten: δέκα, pronounced déka.

Binary, on the other hand, is a system of positional notation using a radix of two. This notation is most often used today when discussing numerical values in electronic settings such as computer programming. Counting in binary uses the numerals/digits 0 and 1 exclusively to form numbers in increasing powers of two. The etymology of binary traces to the Latin bini, which translates to “two-by-two.” The term binary can refer to anything made of two parts, such as a binary choice or a binary star, but here we use it to refer to the binary numeral system.

Why Binary?

The next article in the series deals exclusively with the binary number system, its historical roots, modern usage, and arithmetic. Here, however, we will explore the two dominating factors for using a binary numbering system in computation: electricity and elegance.

Electrical States

Computational mechanisms can be constructed from a variety of materials, including billiard balls (in an idealized fashion), as long as certain conditions are met. Modern computational mechanisms are built using electrical circuits (of increasingly smaller sizes) consisting of transistors. Transistors are semiconductor devices that can switch electrical signals depending on an input signal. This switch occurs in two states: on and off.

By mapping these two outputs, on and off, to the binary digits 0 and 1 (arbitrarily), we can construct devices that appear to operate in accordance with binary enumeration. By carefully aligning and stringing together collections of these switches, we can perform mathematical and logical functions on binary representations to form a miniature calculator. This construction of a miniature calculator is the essence of what is now known as the modern computer processor.

In 1937, Claude Shannon showed how such devices could be built without transistors, using electromechanical relays and switches, in his master’s thesis at the Massachusetts Institute of Technology, A Symbolic Analysis of Relay and Switching Circuits. The thesis demonstrated that relay circuits could carry out Boolean logic and binary arithmetic, and it went on to become the foundation for practical digital circuit design, enabling the creation of the modern computer.

Elegance Of Expression

In a further article in this series, I explore the concept of Radix Economy, being the efficiency of a given radix in expressing numbers. The general idea is to set up a count of materials necessary to express a number, such as faces on a die and the required number of dice. In that article, we arrive at the conclusion that three is the most efficient practical radix according to this measure, but two isn’t far behind.

As the radix climbs higher than three, the efficiency of the radix decreases, with a radix of 5 garnering approximately 3.10667 and a radix of 10 achieving 4.34294 (lower is better). This lines up with reality: an increase in possible digits leads to an increase in implementation complexity.

With binary, we must only track two clear states: on and off. With ternary, or any base higher than two, we would need to track multiple mutually exclusive states. For example, with a radix of three (ternary) we would need to track an off signal (minimum), an on signal (maximum), and something in between. The switches in our processor would need to select not from a simple on and off, but from three states. Building a reliable third state electrically is complex and increases the margin for error: if the electrical signal happens to fall outside the threshold of the intermediate state, it could be read as one of the others.

As mentioned in Radix Economy, radices larger than two in computing systems aren’t impossible. It’s not a foregone conclusion that binary will always remain the best answer. But as integrated circuit technology and transistors are currently used, binary is the most elegant representation in terms of complexity and margin for error.

Converting To Binary

Let’s convert the value 197 to binary following the division method from above:
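
  197 ÷ 2 = 98 remainder 1
   98 ÷ 2 = 49 remainder 0
   49 ÷ 2 = 24 remainder 1
   24 ÷ 2 = 12 remainder 0
   12 ÷ 2 = 6  remainder 0
    6 ÷ 2 = 3  remainder 0
    3 ÷ 2 = 1  remainder 1
    1 ÷ 2 = 0  remainder 1

Reading the remainders from last to first, 197 in decimal is 11000101 in binary.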

Now, let’s use the Horner Method to convert 11000101₂ back into decimal notation:
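
Starting with the left-most digit and working right (multiply by two, add the next digit):

  1
  1 × 2 + 1 = 3
  3 × 2 + 0 = 6
  6 × 2 + 0 = 12
  12 × 2 + 0 = 24
  24 × 2 + 1 = 49
  49 × 2 + 0 = 98
  98 × 2 + 1 = 197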

Octal

The term octal refers to a radix of eight. The prefix oct- comes from the Greek word for eight: οκτώ, pronounced októ.

But why use a radix of eight? It turns out that, much as hexadecimal (covered later) translates to groups of four bits, octal translates to groups of three bits.

What’s a bit? The next article, covering binary, delves further into the definition of binary numbers in relation to computer hardware, but a quick overview here is useful. In computer science, ‘bit’ is the term used for the smallest amount of information a conventional electronic computer can process or store. One bit is one binary digit, holding either of two states: off (0) or on (1). By stringing multiple bits together, you can represent numbers of varying ranges.

In the case of octal, we can string together three bits to represent a binary integer between zero and seven inclusive. That gives a total of eight distinct values, each representable by one octal digit (0, 1, 2, 3, 4, 5, 6, and 7). We can visualize this in the following table:
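
  000₂ = 0₈    001₂ = 1₈    010₂ = 2₈    011₂ = 3₈
  100₂ = 4₈    101₂ = 5₈    110₂ = 6₈    111₂ = 7₈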

Simplifying Binary Expressions

Binary is clean, elegant, and simple… for a computer. Unfortunately, to even a trained eye, binary representations can quickly become unwieldy. Consider the binary value 10001010011111010000₂, equal to 567,248 in decimal notation. If I were to change a single one of those digits (bits), what value would it then represent? You can see the issue.

Because the octal radix (8) is a clean power of two (2³), we can simplify binary expressions by grouping bits according to that exponent, in octal’s case in groups of three. We can reference the table above (and the value above) to turn 10001010011111010000₂ into 2123720₈. The following diagram visualizes this process:
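
  10 001 010 011 111 010 000    (pad the leading group with a zero: 010)
   2   1   2   3   7   2   0

Each group of three bits becomes one octal digit, giving 2123720₈.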

While octal is handy for simplifying binary, it’s not used as commonly today as the next notation, hexadecimal. Its most famous current usage is in file permissions on Unix-style operating systems, particularly Linux. In those systems, common permissions are expressed in three sets of three bits. The three bits in each set represent the read, write, and execute permissions, and each set denotes a context: owner, group, and public. In this regard, full permissions in every context for a file could be expressed as 777₈ (111111111₂).
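
For example, the common permission setting 755₈ breaks down into 111 101 101₂: read, write, and execute for the owner (111), but only read and execute for the group and the public (101).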

Converting To Octal

With a binary representation as the source base, conversion to octal becomes simply a matter of substituting three bits at a time with the corresponding octal digit. In other cases though, such as converting from base 10, we can use the methods already outlined. Let’s convert the value 197 to octal following the division method from above:
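
  197 ÷ 8 = 24 remainder 5
   24 ÷ 8 = 3  remainder 0
    3 ÷ 8 = 0  remainder 3

Reading the remainders from last to first, 197 in decimal is 305 in octal.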

Now, let’s use the Horner Method to convert 305₈ back into decimal notation:
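
  3
  3 × 8 + 0 = 24
  24 × 8 + 5 = 197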

Hexadecimal

Hexadecimal is a positional notation with a radix of sixteen. The word hexadecimal is built from two prefixes: hex- and dec-. Previously we said dec- is related to the Greek term for ten. Hex- is a similar prefix, related to the Greek word for six: έξι, pronounced éxi.

You might think hexadec- would refer to ten multiplied by six, and so to a radix of sixty. In the first article in the series, we encountered the Babylonian-era numeral system that used a radix of sixty; however, that system was called sexagesimal. Hexadecimal refers to a radix of sixteen, the fourth power of two.

In order to represent numbers in hexadecimal, we need to have sixteen different numerals. The decimal notation can provide the first ten (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9), but what about the remaining six? In computer science and mathematical literature, the first six letters of the Latin alphabet are substituted for numerals: A, B, C, D, E, and F. This gives us a complete set of digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F.

Just like octal, the hexadecimal radix is a clean power of two (16 = 2⁴), so we can simplify binary expressions by grouping bits, in hexadecimal’s case in groups of four. Just as one octal digit represents three successive bits, one hexadecimal digit represents four successive bits. We can build a similar reference table as above and use it to turn 10001010011111010000₂ into 8A7D0₁₆. The following diagram visualizes this process:
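
  0000₂ = 0    0001₂ = 1    0010₂ = 2    0011₂ = 3
  0100₂ = 4    0101₂ = 5    0110₂ = 6    0111₂ = 7
  1000₂ = 8    1001₂ = 9    1010₂ = A    1011₂ = B
  1100₂ = C    1101₂ = D    1110₂ = E    1111₂ = F

  1000 1010 0111 1101 0000
     8    A    7    D    0

Each group of four bits becomes one hexadecimal digit, giving 8A7D0₁₆.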

Because binary values of considerable length can be significantly shortened using hexadecimal, it is often used when working “close to the hardware.” Programming in machine code, or one layer of abstraction above that, Assembly, often requires dealing with large binary values as they are stored in registers and as they are used to address computer memory. In this context, hexadecimal becomes a valuable resource.

Another area where users most often encounter hexadecimal is in 24/32-bit color values, particularly on the internet. These color values usually have three color channels, red, green, and blue, with an optional alpha channel for transparency. Each channel has 256 shades, 0–255 (exactly one byte, or eight bits of information). In binary, 255 is represented by 11111111₂. This is eight ones in sequence, but if you break it up into two groups of four (1111), you can convert it to hexadecimal using the table above: FF₁₆. On the web (such as in the CSS standard), the format for specifying a 24-bit “web” color follows the hexadecimal triplet RRGGBB, where R stands for red, G for green, and B for blue. In this scheme, pure red becomes FF0000, pure green 00FF00, and pure blue 0000FF.

Converting to Hexadecimal

With a binary representation as the source base, conversion to hexadecimal becomes simply a matter of substituting four bits at a time with the corresponding hexadecimal digit. In other cases though, such as converting from base 10, we can use the methods already outlined. Let’s convert the above binary number (567,248 in decimal) to hexadecimal following the division method from above:
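
  567,248 ÷ 16 = 35,453 remainder 0
   35,453 ÷ 16 = 2,215  remainder 13 (D)
    2,215 ÷ 16 = 138    remainder 7
      138 ÷ 16 = 8      remainder 10 (A)
        8 ÷ 16 = 0      remainder 8

Reading the remainders from last to first, 567,248 in decimal is 8A7D0 in hexadecimal, matching the grouped binary result above.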

Now, let’s use the Horner Method to convert 8A7D0₁₆ into its decimal notation:
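
Remember that A is 10 and D is 13:

  8
  8 × 16 + 10 = 138
  138 × 16 + 7 = 2,215
  2,215 × 16 + 13 = 35,453
  35,453 × 16 + 0 = 567,248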

What’s To Come

Although hexadecimal and octal are useful when operating and observing values in relation to computers, the real power lies in binary. The other two radices simply provide methods for tracking large binary values with greater efficiency. In the next article, we’ll be exploring binary, its history, and how to perform arithmetic operations in binary. We’ll also briefly touch upon Boolean logic operations, and the shift operations found in many processors and their relation to binary.

