Converting To Binary, Octal, and Hexadecimal

This is the second article in a series intended to help the reader understand binary, octal, and hexadecimal, three radices of great importance to contemporary computer theory. This article builds on the previous one by introducing these three radices and how they are used in computer science. I start with arbitrary base conversion using two methods, then give a bit of background on why these bases are important, particularly binary, and finally work through conversions to each of them.
  1. Understanding Radix
  2. Converting To Binary, Octal, and Hexadecimal
  3. Radix Economy
  4. Binary (Base-2) And Its Operations
  5. Negative Binary Numbers

This is the second article in a series intended to help the reader understand binary, octal, and hexadecimal, three radices of great importance to contemporary computer theory. By the end of this series, you should be able to read and convert integer values into binary, octal, and hexadecimal, perform arithmetic on all three representations, understand basic Boolean operations, and otherwise have a further appreciation of the power of binary.

Arbitrary Base Conversion

In the following, we endeavor to convert a value from one radix representation to another. We delineate these radices as the source base and the target base: we convert from the source base to the target base. This article does not cover converting the fractional part of a number in a given radix.

The source radix doesn’t matter nearly as much in conversion as the target radix does. The source radix simply tells us what value is being represented, giving us the place values by which to multiply our digits.

The most straightforward method of converting from a source to a target base is to enumerate, in order, the place values of the target base that do not exceed the value being converted. From there, you can perform what is known as Euclidean division, a fancy way of saying you compute a quotient and a remainder. You divide the source value (the dividend) by the highest place value (the divisor) and record the quotient as the digit, retaining the remainder. You then move to the next lowest place value and repeat, using the remainder as the new dividend. This method is shown below for converting the small value 190 in decimal into base 3:

A naïve radix conversion of the decimal value 190 into base 3.
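
For readers who want to follow along in code, here is a minimal Python sketch of this place-value method (the function name to_base_naive and the digit-list output are my own illustrative choices, not part of the article's diagrams):

    def to_base_naive(value, base):
        """Convert a non-negative integer to a digit list using place-value division."""
        if value == 0:
            return [0]
        # Enumerate the place values of the target base that do not exceed the value.
        place = 1
        while place * base <= value:
            place *= base
        digits = []
        remainder = value
        while place >= 1:
            quotient, remainder = divmod(remainder, place)  # Euclidean division
            digits.append(quotient)                         # the quotient is this place's digit
            place //= base                                  # move to the next lower place value
        return digits

    print(to_base_naive(190, 3))  # [2, 1, 0, 0, 1]

Each pass divides by a potentially large place value, which is exactly the overhead the next method avoids.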

This could be considered a naive conversion implementation, however; it leaves the person doing the calculating room for improvement, since dividing large numbers by large place values is complicated and drawn out. Fortunately, there is a better way, though it is counterintuitive at first. We can “flip” the Euclidean division so that, instead of recording quotients, we record remainders. To do this, instead of dividing by each place value, we divide by the radix itself, storing the remainder as the digit and using the quotient as the dividend of the next step. This method is shown below for converting 190 in decimal into base 3:

Repeatedly dividing 190 by 3 and recording the remainders yields 21001 in base 3, with fewer calculations than the previous diagram.
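
Here is the same conversion sketched with the flipped approach, dividing by the radix itself (again, the function name to_base is illustrative):

    def to_base(value, base):
        """Convert a non-negative integer by repeatedly dividing by the radix."""
        if value == 0:
            return [0]
        digits = []
        while value > 0:
            value, remainder = divmod(value, base)  # the quotient feeds the next step
            digits.append(remainder)                # the remainder is the next digit
        digits.reverse()                            # remainders emerge least-significant first
        return digits

    print(to_base(190, 3))  # [2, 1, 0, 0, 1]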

This works because of the nature of the remainder when dividing by the radix. Every place value above the ones place is a multiple of the radix, since each place value is the radix raised to the power of its position index. When we divide by the radix, every one of those higher place values divides evenly, so the only thing left over, the remainder, is the ones digit; the quotient is the original value with every remaining digit shifted down one place. This idea can be written mathematically using Euclid’s Division Lemma, where q is the quotient, r is the remainder, and a and b are the dividend and divisor respectively:

a = bq + r, equivalently a - bq = r.

This method may make more sense if you look at the operations in reverse: multiplying by the radix and adding the remainder back. This can be illustrated via a diagram borrowed from the next section. In this diagram, each entry in the second row is the previous entry in the third row multiplied by three, and each entry in the third row is the sum of the two entries above it:

Digits (base 3):       2     1     0     0     1
Previous total × 3:          6    21    63   189
Running total:         2     7    21    63   190

Dividing by the radix still requires the use of division, although a simpler division. For the person doing the calculating, there is still room for improvement: division operations are more complicated and arduous to carry out than other operations. Addition and multiplication (repeated addition) can be carried out more easily than subtraction and division (repeated subtraction). Is there a way we could convert a value from a source base to a target base using only a minimal number of additions and multiplications?

Horner’s Method

The answer lies in an algorithm called Horner’s Method or Horner’s Scheme. The method is named after mathematician William George Horner but dates further back to Chinese and Persian mathematicians. Horner’s Method is an algorithm for efficiently evaluating polynomials, using addition to simplify extended multiplications.

It is based on Horner’s Rule which, put succinctly, unwraps a polynomial such as a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴ into the nested form a₀ + x(a₁ + x(a₂ + x(a₃ + xa₄))). This allows the evaluation of a polynomial of degree n with only n multiplications and n additions.

When considering a value in a given base, we can substitute the radix for x and the digits for the coefficients (aₙ). So, for 190 expressed using a radix of 3 above, we would have:

A diagram showing how 1 + 0×3 + 0×3² + 1×3³ + 2×3⁴ is equal to 1 + 3(0 + 3(0 + 3(1 + 3×2))).

We can then work through the process of multiplying and adding the coefficients with the digits using a table inspired by Synthetic Division (which is based on Horner’s Method):

Digits (base 3):       2     1     0     0     1
Previous total × 3:          6    21    63   189
Running total:         2     7    21    63   190

We start with the left-most digit, multiply it by the radix, and then add the next digit to the right. We repeat this process until we reach the end result. If we perform the additions and multiplications in our target base, the end result will be a complete base conversion.
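
As a sketch, Horner's Method condenses to a single loop; the Python below assumes the most-significant digit comes first, and the name from_digits is my own:

    def from_digits(digits, base):
        """Evaluate a digit list via Horner's Method: n multiplications and n additions."""
        value = 0
        for digit in digits:              # most-significant digit first
            value = value * base + digit  # multiply the running total by the radix, add the digit
        return value

    print(from_digits([2, 1, 0, 0, 1], 3))  # 190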

This method, when done by hand or with the assistance of a calculator, is most useful for converting from arbitrary radices to decimal (a radix of ten). The observant reader will notice that this is a very formal (and roundabout) way to find the value of a numeral given in any radix: sum each digit multiplied by its respective power of the radix.

Binary

The standard numerical system, at least in science, is a positional notation using the radix of ten known as the Indo-Arabic numeral system. This system uses the numerals/digits 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 to form numbers in increasing powers of ten. It is the most common symbolic representation of numbers in the world. When used to represent integers and non-integers (fractions) alike, this system is called decimal notation. You will also find decimal used to refer to just the fractional part of a number, that is, the digits after the decimal point. For a review of positional notation using a radix of ten, refer to the previous article in the series: Understanding Radix.

You might also see decimal referred to as denary or decanary, though not often. The common characteristic here is the de(c)- prefix. The dec(a)- prefix in decimal comes from the Greek for ten: δέκα, pronounced déka.

Binary, on the other hand, is a system of positional notation using a radix of two. This notation is most often used today when discussing numerical values in electronic settings such as computer programming. Counting in binary uses the numerals/digits 0 and 1 exclusively to form numbers in increasing powers of two. The etymology of binary traces to the Latin bini, which translates to “two-by-two.” The term binary can refer to anything made of two parts, such as a binary choice or a binary star, but here we use it to refer to the binary numeral system.

Why Binary?

An article later in the series titled Binary (Base-2) And Its Operations deals exclusively with the binary number system, its historical roots, modern usage, and arithmetic. However, here we will briefly explore two common dominating factors for using a binary numbering system in computation: electricity and elegance.

Electrical States

Computational mechanisms can be constructed from a variety of materials, even billiard balls (in an idealized fashion), as long as certain conditions are met. Modern computational mechanisms are built using electrical circuits (of increasingly smaller sizes) consisting of transistors. Transistors are semiconductor devices that can switch electrical signals depending on an input signal. This switch occurs in two states: on and off.

By mapping these two outputs, on and off, to the binary digits 0 and 1 (arbitrarily), we can construct devices that appear to operate in accordance with binary enumeration. By carefully aligning and stringing together collections of these switches, we can perform mathematical and logical functions on binary representations to form a miniature calculator. This construction of a miniature calculator is the essence of what is now known as the modern computer processor.

In 1937, Claude Shannon demonstrated such circuits, sans transistors, using electromechanical relays and switches for his master’s thesis at the Massachusetts Institute of Technology. The work, outlined in the paper A Symbolic Analysis of Relay and Switching Circuits, showed that relay circuits could carry out Boolean logic and binary arithmetic. This thesis went on to become the foundation for practical digital circuit design and enabled the creation of the modern computer.

Elegance Of Expression

In the next article in this series, I explore the concept of Radix Economy, being the efficiency of a given radix in expressing numbers. The general idea is to set up a count of materials necessary to express a number, such as faces on a die and the required number of dice. In that article, we arrive at the conclusion that three is the most efficient practical radix according to this measure, but two isn’t far behind.

As the radix climbs higher than three, the efficiency of the radix decreases, with a radix of 5 garnering approximately 3.10667 and a radix of 10 achieving 4.34294 (lower is better). This lines up with reality: an increase in possible digits leads to an increase in implementation complexity.

With binary, we must only track two clear states: on and off. With ternary, or any base higher than two, we would need to track multiple mutually exclusive states. For example, with a radix of three (ternary) we would need to track an off signal (minimum), an on signal (maximum), and something in between. The switches in our processor would need to select not from a simple on and off, but from three states. Building a reliable third state electrically is complex and shrinks the margin for error: if the electrical signal happens to fall outside the threshold of the intermediate state, it could be read as one of the others.

As mentioned in Radix Economy, radices larger than two in computing systems aren’t impossible, and it’s not a foregone conclusion that binary will always remain the best answer. But as integrated circuits and transistors are currently used, binary is the most elegant representation in terms of complexity and margin for error.

Converting To Binary

Let’s convert the value 197 to binary following the efficient division method from above:

This diagram shows the decimal value of 197 repeatedly divided by two to arrive at 11000101 in binary.

Now, let’s use Horner’s Method to convert 11000101₂ back into decimal notation:

We're converting base 2 to base 10 in the following sequence: 1 times 2 is 2, plus 1 is 3, times 2 is 6, then 12, 24, and 48, plus 1 is 49, times 2 is 98, then 196, plus 1 is 197.
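
If you would like to double-check this kind of work programmatically, Python's built-in bin and int functions handle both directions (shown purely as a cross-check, not as the hand method above):

    print(bin(197))            # '0b11000101'
    print(int("11000101", 2))  # 197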

Octal

The term octal refers to a radix of eight. The prefix oct- comes from the Greek word for eight: οκτώ, pronounced októ.

But why use a radix of eight? It turns out each octal digit translates directly to three bits, much as each hexadecimal digit, covered later, translates to four bits.

What’s a bit? The later article covering binary delves further into the definition of binary numbers in relation to computer hardware, but a quick overview here is useful. In computer science, “bit” is the term used for the smallest amount of information processable/storable by a conventional electronic computer. One bit is one binary digit: on or off, 1 or 0. By stringing multiple bits together, you can represent numbers of varying ranges.

In the case of octal, we can string together three bits to represent a binary integer between the values zero and seven inclusively. This is a total of eight independent values, each representable by one octal digit (0, 1, 2, 3, 4, 5, 6, and 7). We can visualize this in the following table:

Octal digit:    0     1     2     3     4     5     6     7
Binary:       000   001   010   011   100   101   110   111

Simplifying Binary Expressions

Binary is clean, elegant, and simple… for a computer. Unfortunately, to even a trained eye, binary representations can quickly become unwieldy. Consider the binary value 10001010011111010000₂, equal to 567,248 in decimal notation. If I were to change a single one of those digits (bits), what value would it then represent? You can see the issue.

Because the octal radix (8) is a clean power of two (2³), we can simplify binary expressions in groups of the exponent of that power; in octal’s case, groups of three bits. We can reference the table above (and the value above) to turn 10001010011111010000₂ into 2123720₈. The following diagram visualizes this process:

A long string of ones and zeros is grouped into threes and, using the table above, converted from 10001010011111010000₂ to 2123720₈.
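
Here is a small sketch of that grouping process, assuming the binary value is given as a string (binary_to_octal is an illustrative name; the string is padded on the left to a multiple of three bits):

    def binary_to_octal(bits):
        """Group a binary string into threes from the right and map each group to an octal digit."""
        bits = bits.zfill((len(bits) + 2) // 3 * 3)  # left-pad to a multiple of three
        return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

    print(binary_to_octal("10001010011111010000"))  # 2123720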

While octal is handy for simplifying binary, it’s not used as commonly today as the next notation, hexadecimal. Its most famous current usage is in file permissions in Unix-style operating systems, particularly Linux. In those systems, common user permissions are expressed in three sets of three bits. Each bit represents one of the permissions read, write, or execute, and each set denotes one of the contexts owner, group, or public (others). In this regard, full permissions in every context for a file can be expressed as 777₈ (111111111₂).
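
To make the permission example concrete, here is a small illustrative sketch (the describe function is hypothetical, not an operating system API; 755, a common permission value, is shown alongside the 777 above):

    def describe(octal_perms):
        """Expand an octal permission string (e.g. '777') into rwx/- triplets."""
        flags = ((4, "r"), (2, "w"), (1, "x"))
        out = []
        for digit in octal_perms:
            bits = int(digit, 8)  # each octal digit is three permission bits
            out.append("".join(ch if bits & mask else "-" for mask, ch in flags))
        return "".join(out)

    print(describe("777"))  # rwxrwxrwx
    print(describe("755"))  # rwxr-xr-x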

Converting To Octal

With a binary representation as the source base, conversion to octal becomes simply a matter of substituting three bits at a time with the corresponding octal digit. In other cases though, such as converting from base 10, we can use the methods already outlined. Let’s convert the value 197 to octal following the division method from above:

We convert 197 in base 10 to 305 in base 8 by repeatedly dividing by eight and recording the remainders.

Now, let’s use Horner’s Method to convert 305₈ back into decimal notation:

We convert 305 in base 8 back to base 10: 3 times 8 is 24, plus 0 is 24, times 8 is 192, plus 5 is 197.
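
As before, Python's built-ins can confirm the octal round trip:

    print(oct(197))       # '0o305'
    print(int("305", 8))  # 197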

Hexadecimal

Hexadecimal is a positional notation with a radix of sixteen. The term hexadecimal combines two prefixes: hex- and dec-. Previously we said dec- is related to the Greek term for ten. Hex- is a similar prefix and is related to the Greek word for six: έξι, pronounced éxi.

You might think hexadec- would refer to ten multiplied by six, and thus to a radix of sixty. In the first article in the series, we encountered the Babylonian-era numeral system that used a radix of sixty; however, that system was called sexagesimal. Hexadecimal instead combines six and ten to refer to a radix of sixteen, the fourth power of two.

In order to represent numbers in hexadecimal, we need to have sixteen different numerals. The decimal notation can provide the first ten (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9), but what about the remaining six? In computer science and mathematical literature, the first six letters of the Latin alphabet are substituted for numerals: A, B, C, D, E, and F. This gives us a complete set of digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F.

Just like octal, hexadecimal is a clean power of two (2⁴), so we can simplify binary expressions in groups of the exponent of that power; in hexadecimal’s case, groups of four bits. Where one octal digit represents three successive bits, one hexadecimal digit represents four. We can build a reference table similar to the one above and use it to turn 10001010011111010000₂ into 8A7D0₁₆. The following diagram visualizes this process:

A table showing hexadecimal digit to binary conversion, followed by the long binary string above converted using that table.
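
The earlier grouping sketch adapts to hexadecimal by taking four bits at a time (binary_to_hex is again an illustrative name):

    HEX_DIGITS = "0123456789ABCDEF"

    def binary_to_hex(bits):
        """Group a binary string into fours from the right and map each group to a hex digit."""
        bits = bits.zfill((len(bits) + 3) // 4 * 4)  # left-pad to a multiple of four
        return "".join(HEX_DIGITS[int(bits[i:i + 4], 2)] for i in range(0, len(bits), 4))

    print(binary_to_hex("10001010011111010000"))  # 8A7D0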

Because binary values of considerable length can be significantly shortened using hexadecimal, it is often used when working “close to the hardware.” Programming in machine code, or in Assembly (one layer of abstraction above it), often involves dealing with large binary values as they are stored in registers and as they are used to address computer memory. In this context, hexadecimal becomes a valuable resource.

Another area where users most often encounter hexadecimal is in 24/32-bit color values, particularly on the internet. These color values usually have three color channels, red, green, and blue, with an optional alpha channel for transparency. Each channel has 256 shades, 0 – 255 (exactly one byte, or eight bits of information). In binary, 255 is represented by 11111111₂. This is eight ones in sequence, but if you break it up into two groups of four (1111), you can convert it to hexadecimal using the above table: FF₁₆. On the web (such as in the CSS standard), the format for specifying a 24-bit “web” color follows the hexadecimal triplet RRGGBB, where R stands for red, G for green, and B for blue. In this scheme, pure red becomes FF0000, pure green 00FF00, and pure blue 0000FF.
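
As a small illustration of the triplet format, each channel is simply written as a two-digit hexadecimal number (to_hex_color is a hypothetical helper, not a CSS or browser API):

    def to_hex_color(red, green, blue):
        """Format three 0-255 channel values as an RRGGBB hex triplet."""
        return "#{:02X}{:02X}{:02X}".format(red, green, blue)

    print(to_hex_color(255, 0, 0))  # #FF0000  pure red
    print(to_hex_color(0, 255, 0))  # #00FF00  pure green
    print(to_hex_color(0, 0, 255))  # #0000FF  pure blue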

Converting to Hexadecimal

With a binary representation as the source base, conversion to hexadecimal becomes simply a matter of substituting four bits at a time with the corresponding hexadecimal digit. In other cases though, such as converting from base 10, we can use the methods already outlined. Let’s convert the above binary number (567,248 in decimal) to hexadecimal following the division method from above:

Here we divide 567,248 by 16 successively, recording the remainders, to arrive at 8A7D0₁₆.

Now, let’s use Horner’s Method to convert 8A7D0₁₆ back into decimal notation:

We convert 8A7D0 (really 8, 10, 7, 13, and 0) by multiplying and adding across to arrive at 567,248.
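
And once more, Python's built-ins confirm the hexadecimal round trip (note that hex produces lowercase digits):

    print(hex(567248))       # '0x8a7d0'
    print(int("8A7D0", 16))  # 567248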

What’s To Come

We’ve been presented with various bases so far in this series: binary, base-3, base-7, octal, and hexadecimal. However, we haven’t discussed which radix is “the best.” Is there a “most efficient” radix? Is binary (base-2) the most efficient radix to store information? This question is answered in the next article. In doing so, we’ll also discuss how to calculate how many digits an arbitrary value has in any base. This type of calculation is useful in information theory for determining how much data is required to encode an event of a given probability. Further in the series, we’ll explore binary, its history, and how to perform arithmetic operations in binary. We’ll also briefly touch upon Boolean logic operations, the shift operations found in many processors, and their relation to binary.

Image Based On A Photo by Fikri Rasyid on Unsplash
