Radix Economy

This article begins with a recap of where we are in the series with regard to the concept of counting. I review the definition of positional notation as outlined in the first article and then move on to show how we can calculate the number of digits a value will have in a given radix. In doing so I will go over two mathematical concepts relevant to this calculation: exponents and logarithms. I will then use logarithms to show how you can calculate the efficiency of a given radix, also called the radix economy, and answer the question, “What’s the most efficient radix?”
  1. Understanding Radix
  2. Converting To Binary, Octal, and Hexadecimal
  3. Radix Economy
  4. Binary (Base-2) And Its Operations
  5. Negative Binary Numbers

This is the third article in a series whose intention is to enable the reader to understand binary, octal, and hexadecimal: three radices of great importance to contemporary computer theory. By the end of this series, you should be able to read and convert integer values into binary, octal, and hexadecimal, perform arithmetic on all three representations, understand basic Boolean operations, and otherwise have a deeper appreciation of the power of binary.

Positional Notation

In a previous article, we defined a glossary of terms associated with positional notation using the following graphic:

The diagram shows the number 15,692 and labels each of its parts: digit, number, position, and index.

In this graphic:

  • Digit is a numeral in a given place indicating a corresponding value.
  • Number indicates the total value being represented.
  • Position is the singular placement of a given digit in relation to others.
  • Index is an enumeration starting from zero of each successive position.

Let’s expand this diagram to a full definition with an example:

Shows a mathematical definition of radix, with a specific instance below of the radix 5.

Here we show the definition of a positional notation system: a counting method in which the value of a given representation is a summation of each digit multiplied by a factor determined by the digit’s position. In the straightforward case, the factor is the given base b (radix) raised to the power of the position’s index, and so it increases exponentially as the index increases.

Here I introduce some symbols commonly associated with set theory to help with my formal definition. A set, as mentioned in the previous article, is a series of elements. I define a set D using curly braces {…} to denote the set of Digits available in the numbering system. The number of digits in this example, and in commonly used base systems, is equal to the base or radix being used (counting zero). Thus, in a base-5 numbering system, d_1, d_2, d_3, d_4, d_5 would correspond to 0, 1, 2, 3, 4, and D would be the set of all these digits.

Under Number, the symbols a_n … a_4 a_3 a_2 a_1 a_0 indicate placeholders for digits (d_1 through d_5). So, in the case of base 5, a_n could be any one of 0, 1, 2, 3, or 4. This is what is meant by ∀k: a_k ∈ D. The upside-down A can be read as “for all,” and the forking character as “is a member of,” as in the sentence “for all k, element a_k is a member of set D.”
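To make the definition concrete, here is a minimal Python sketch (my own illustration; the function name positional_value is an assumption, not something from the series) that evaluates a list of digits in an arbitrary base by summing each digit times base^index:

```python
def positional_value(digits, base):
    """Evaluate digits (most significant first) in the given base.

    Each digit is multiplied by base**index, where the index counts
    positions from the right starting at zero, and the products are
    summed -- the definition of positional notation given above.
    """
    value = 0
    for index, digit in enumerate(reversed(digits)):
        if digit not in range(base):  # every digit must be a member of the set D
            raise ValueError(f"{digit} is not a valid base-{base} digit")
        value += digit * base ** index
    return value

# The base-5 numeral 13204 is 1×625 + 3×125 + 2×25 + 0×5 + 4×1 = 1054
print(positional_value([1, 3, 2, 0, 4], 5))  # 1054
```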

For more information on how that system works with more examples, please refer to the previous article, Understanding Radix.

A Property Of Positional Notation

We can make some general observations on writing numbers using a straightforward positional notation. I use the term straightforward to refer to positional notation as I have outlined above, and do not include here unusual radices such as negative or imaginary radices.

We can observe that some numbers require more digits to represent them than others. In fact, the larger the number, the more digits required to write it down. However, this isn’t a direct relationship: the number of digits doesn’t increase in direct relation to the size of the number.

It takes one digit in decimal to represent the number five. Along those lines, you might expect it then to take a hundred times more digits to represent a hundred times that value. But it doesn’t; instead, we end up with three digits representing five hundred. The number of digits then, instead of growing linearly with the value of a number, increases in a logarithmic way.
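A quick Python illustration of this (nothing beyond the standard library): multiplying a value by 100 adds only two decimal digits each time.

```python
# The digit count grows logarithmically: every factor of 100 adds just 2 digits.
for n in [5, 500, 50_000, 5_000_000]:
    print(f"{n:>9,} needs {len(str(n))} digit(s)")
#         5 needs 1 digit(s)
#       500 needs 3 digit(s)
#    50,000 needs 5 digit(s)
# 5,000,000 needs 7 digit(s)
```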

Exponents And Logarithms

Up until now (in this series), I’ve assumed the reader is familiar with the concept of exponents and exponentiation when I’ve discussed powers of the radix. Here we shall actually define exponents as I have been using them, and then use that definition to illustrate their inverse: logarithms.

Exponents

For our use, an exponent is simply a shorthand way of indicating that a given value is multiplied by itself a number of times. It is usually shown as a superscript number to the right. The number it decorates is known as the base (not to be confused with radix), and the value of the exponent itself is known as the power or factor, like so: base^power. The term power can also refer to the end result of the multiplication process; for example, 256 is a power of 2 (the eighth power). Performing exponentiation can also be called raising a given base to a given power.

A base raised to the power of zero is defined as 1, and a base raised to the power of one is the base itself. You can see in the radix-5 diagram earlier that each successive power of 5 is simply the previous power of 5 multiplied by 5 again (here, 5 is the base). To illustrate: 5^1 = 5 × 1 = 5, 5^2 = 5 × 5 = 25, 5^3 = 5 × 5 × 5 = 125, 5^4 = 5 × 5 × 5 × 5 = 625, 5^5 = 5 × 5 × 5 × 5 × 5 = 3125, and so on.
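In code, exponentiation is just repeated multiplication. A tiny Python sketch (my own illustration) makes the point:

```python
def power(base, exponent):
    """Raise base to a non-negative integer exponent by repeated multiplication."""
    result = 1                # a power of zero is defined as 1
    for _ in range(exponent):
        result *= base        # one multiplication per unit of the exponent
    return result

# Matches Python's built-in ** operator for the powers of 5 above
assert power(5, 0) == 1
assert power(5, 3) == 5 ** 3 == 125
assert power(5, 5) == 3125
```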

Negative Exponents

What happens when the power of an exponent is negative? If we consider that a positive power indicates a series of multiplications, we might surmise intuitively that a negative power would be the inverse of that. We would be correct, the inverse of multiplication being division. Thus, we can define a negative power of b as b^-n = 1 / b^n:

This shows that b to the power of negative n is equal to one divided by b to the power of n.

Even though the right part of this definition is a single division (as opposed to multiple multiplications), it measures up to our standard. This is due to the fact that multiple divisions can be represented as a single division. For example, 10^-1 = 1/10, or one-tenth; 10^-2 is ten times smaller than that, being 1/100, or one-hundredth.

This can be illustrated by the multiplication of fractions. A fraction, as shown in the above diagram, represents a division of the numerator by the denominator. When multiplying by a fraction, the mathematician is actually performing multiplication and division by multiplying both the numerators and the denominators. Using this knowledge we can view the above equation as a series of multiplications (and divisions):

Outlines how b to the power of negative n is a repeated multiplication of one divided by b, yielding the same result as above.

The astute reader will notice that place-values progressing to the left in positional notation are increasing powers of the radix. If we were to draw a point, otherwise known as a radix point or in base-10 the decimal point, and then create place-values to the right we could continue the progression but rather than counting up, we count down into the negatives. This allows us to create a fractional part of a number, in exponentially increasing degrees of accuracy:

This diagram shows that digit positions to the right of the decimal point have place values equal to one divided by successive powers of the base.
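A short Python sketch of that progression (illustrative only):

```python
# Place values continue past the radix point with negative indices:
# indices 2, 1, 0 sit to the left of the point; -1 and -2 to the right.
base = 10
for index in [2, 1, 0, -1, -2]:
    print(f"index {index:>2} -> place value {base ** index}")
# index  2 -> place value 100
# index  1 -> place value 10
# index  0 -> place value 1
# index -1 -> place value 0.1
# index -2 -> place value 0.01
```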

Fractional Exponents

A fractional exponent is what is known as a root, the most widely recognized form of which is the square root. The root of a value is defined as the number that, when raised to the power of the denominator, equals the base. For example, an exponent of 1/2, the square root, asks what number multiplied by itself equals the base: 9^(1/2) = 3 because 3^2 = 9. In this example, 2 is the reciprocal, or inverse, of 1/2. An exponent of 1/3 would require us to raise the resulting number to a power of 3; that’s why 8^(1/3) = 2, because 2^3 = 8. In mathematical terms, roots can be written either as a fractional exponent or using the check-mark-like radical symbol shown in the diagram below:

This diagram shows how a fractional exponent can also be written as a root figure, with the denominator of the exponent on the left of the radical and the base inside.

When considering fractional exponents, one must remember the properties of multiplication. You might think that 10^0.1, or ten to the one-tenth power, would equal 1, as 1 is 1/10 of 10; however, that would be incorrect. Perhaps surprisingly, 10^0.1 equals approximately 1.258925411794167. It makes sense, though: multiplying 1 by itself 10 times equals 1, whereas multiplying 1.258925411794167 by itself 10 times equals 10, satisfying the definition of the exponent.
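These values are easy to verify numerically; here is a quick Python check (bearing ordinary floating-point error in mind):

```python
import math

print(9 ** 0.5)      # 3.0, because 3**2 == 9
print(8 ** (1 / 3))  # ~2.0 (within floating-point error), because 2**3 == 8

tenth_root = 10 ** 0.1                     # ~1.2589254117941673, not 1
print(math.isclose(tenth_root ** 10, 10))  # True: ten factors of it yield 10
```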

Logarithms

Logarithms, on the other hand, are the inverse functions of exponents. Rather than computing the power of a base as exponentiation does, a logarithm computes the power that a given base must be raised to in order to produce a given result. You can think of it as computing a root/power where you know the base and the final result beforehand. Exponents and logarithms are simply two sides of the same operation, as shown in the diagram:

This diagram shows that 10 multiplied by itself three times is equal to 1000. That means the logarithm, base 10, of 1000 is 3.

Logarithms were originally introduced by John Napier, a Scottish mathematician, physicist, and astronomer, in 1614 as a means of simplifying the process of multiplication and division. Calculating logarithms themselves is complex, involving finding roots combined with exponentiation (it is the reverse of the calculations featured in fractional exponents), but one can write these results into a logarithm table. Arduous multi-digit multiplication steps can be replaced by table lookups and addition because of a particular property of logarithms: the logarithm of a product is the sum of the logarithms of its factors:

log_10(a × b) = log_10(a) + log_10(b)

Thus, to multiply three large complicated numbers one would only need to look up each of their respective logarithms in the table, add them together, and then raise the logarithm base to the resulting number (most likely via reverse lookup):

Here we take base 10 logarithms of three numbers, add them, and raise the base to come to a correct result.
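In Python the same trick looks like this, with math.log10 standing in for the printed table (a sketch of the idea, not how anyone should multiply today):

```python
import math

# The logarithm of a product is the sum of the logarithms of its factors,
# so three table lookups plus additions replace two long multiplications.
a, b, c = 2.0, 30.0, 70.0
log_sum = math.log10(a) + math.log10(b) + math.log10(c)
product = 10 ** log_sum  # the "reverse lookup": raise the base to the sum
print(product)           # ~4200.0 (within floating-point error)
print(a * b * c)         # 4200.0
```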

Every logarithm calculation relies on a base. Remember that the base in logarithms is the value that is being raised to a given power; it does not mean radix, despite sharing a common synonym. The base is very important and cannot be ignored. However, in many texts you will find logarithms with no apparent base. These are often, depending on the context, what are known as common logarithms, which have base 10. Another base often seen in the wild is Euler’s number e, which makes up what are called natural logarithms. Common logarithms in text are often written log, while natural logarithms are written ln.

Calculating The Number Of Digits

Coming back full circle, earlier we showed how the number of digits grows logarithmically in relation to the scale of a number. This is easily observed because each place value in a straightforward positional notation increases by a power of the radix. If we count in base-10, each place value increases by a power of 10, likewise in base-2 each place value increases by a power of 2.

Review the positional notation definitions:

The diagram shows the number 15,692 and labels each of its parts: digit, number, position, and index.

You’ve most likely noticed by now that the index of a given position is equal to the logarithm, base radix, of the place value: log_10(1000) = 3. This is because the place value is the inverse of this, being the radix raised to the index: radix^index, as in 10^3 = 1000.

This curious relationship enables us to calculate the number of digits of a given number in a given base. By taking the logarithm, base radix, of a value, adding one, and dropping the fractional part, we find the number of places required. Remember that a logarithm is the power a given base must be raised to; in this instance, the radix is the logarithm base, as illustrated:

A diagram showing that the floor of the log, base radix, of a number plus one equals the number of digits: ⌊log_10(1234567) + 1⌋ = 7
In this diagram ⌊ … ⌋ means floor, or to round down and remove any fractional element.
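Translated into Python (math.log accepts the base as its second argument; note that floating-point rounding can misfire right at exact powers of the radix, so a production version would want an integer-based check):

```python
import math

def digit_count(n, radix):
    """Digits needed to write the positive integer n in the given radix."""
    return math.floor(math.log(n, radix) + 1)

print(digit_count(1234567, 10))  # 7
print(digit_count(255, 16))      # 2 (FF)
print(digit_count(255, 2))       # 8 (11111111)
```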

Radix Economy

With the ability to calculate how many digits a given value will have in an arbitrary base we can now calculate the efficiency of a given radix. This measure of efficiency is known as the radix economy and tells us the relative costs involved in implementing a given number system.

When implementing machines that can store values, such as computers, or number displays, it can be useful to know what resources are necessary for representing those numbers. Imagine that we are devising a display that will represent the number of likes on this article. We decide each place value will be represented by a wheel with the digits printed on different wheel positions. In base-10 each wheel will then have 10 faces. If we need to count up to 99,999 we will then need to manufacture and have on hand 5 wheels each with 10 faces for a total of 50 faces.

This scenario can be generalized so that for a given value n (here it was 99,999) we can determine how many faces are necessary (50) to represent it in an arbitrary base (10). You’ll notice that for this scenario we multiplied the radix (being the number of possible digits) by the number of places (being the wheels). Since we know how to calculate the number of places needed to represent a value in a given base (above), we can simply substitute that equation in the multiplication:

Here we show that the radix economy E is the radix times the floor of the log, base radix, of N plus one: E(r, N) = r × ⌊log_r(N) + 1⌋.

This equation measures a given radix’s efficiency E as a function of the parameters r (the radix) and N (the number). So, for example, the number 263 in base-10 would have a radix economy of E(10, 263) = 10 × ⌊log_10(263) + 1⌋ = 10 × 3 = 30. The number 3256 would come out as 10 × ⌊log_10(3256) + 1⌋ = 10 × 4 = 40. Notice that the exact value of N doesn’t matter much, as it gets ‘glossed over’ into the number of digits necessary to represent N. Through a process of approximation and algebraic rearrangement we can arrive at an equation that captures this simplification:

Here we show how the above definition, through algebraic manipulation (converting the logarithm to a ratio of natural logarithms), arrives at r divided by the natural log of r.

The diagram above uses a calculation we haven’t covered in this tutorial: the natural logarithm, that being the logarithm with base e, Euler’s number. In short, you can calculate the logarithm of any base by taking the natural logarithm of the value and dividing it by the natural logarithm of the base: log_r(N) = ln(N) / ln(r). We use that calculation here to separate the part of the equation involving the given number N from the part, on the right side, involving the radix r.

The last equation in the diagram can be read: “The radix economy [ E(r,N) ] divided by the natural logarithm of N is approximately equal to the radix r divided by the natural logarithm of r.” This means, working backward, we can approximate the radix economy of N in radix r by dividing r by ln(r) and multiplying the result by ln(N).
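Here is a sketch comparing the exact economy with this approximation in Python (the function names are my own):

```python
import math

def radix_economy(r, n):
    """Exact economy: faces per wheel (r) times wheels (digits of n)."""
    return r * math.floor(math.log(n, r) + 1)

def radix_economy_approx(r, n):
    """The approximation E(r, N) ≈ (r / ln r) × ln N derived above."""
    return (r / math.log(r)) * math.log(n)

n = 1_234_567
for r in (2, 3, 10):
    print(r, radix_economy(r, n), round(radix_economy_approx(r, n), 1))
# 2 42 40.5
# 3 39 38.3
# 10 70 60.9
```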

If we’re only interested in arriving at an efficiency value we can compare with other efficiency measures (without worrying about the actual value of N), we can skip the multiplication by ln(N) and just work with the part that is pertinent to the radix, r. With this we can focus purely on the efficiency of the radix itself, that being r / ln(r):

This diagram performs the calculation for a radix of 10 (4.343), 5 (3.106), 3 (2.731), and Euler’s number e (2.718).
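We can reproduce these figures in a few lines of Python, adding radix 2 for comparison (tiny differences from the diagram are just rounding):

```python
import math

# Relative cost of a radix: r / ln(r). Smaller means more economical.
for r in (2, math.e, 3, 5, 10):
    print(f"{r:>6.3f} -> {r / math.log(r):.3f}")
#  2.000 -> 2.885
#  2.718 -> 2.718
#  3.000 -> 2.731
#  5.000 -> 3.107
# 10.000 -> 4.343
```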

You can see here an interesting result: using this equation as a measure of the relative efficiency of a given radix, a radix of 3 (2.731) is actually more efficient (a smaller factor) than a radix of 2 (2.885), and, of course, the most efficient radix by this standard is Euler’s number e.

The equation used in this article to compute radix economy was lifted from Wikipedia (https://en.wikipedia.org/wiki/Radix_economy) and is used (and explained) here under the terms of the CC3.0 license.

Meaning

So what does this mean for us? It is well known that computers and other electronic devices almost universally use binary, or base 2, to represent values. However, we can see here that, allegedly, 3 is the most efficient radix (barring the ability to implement e). So what gives?

There are a number of reasons devices today continue to use binary rather than ternary (base 3). One is the existing investment in electronic materials (integrated circuit designs, transistors, etc.) that most naturally represent a binary system. Another is that binary is less error-prone than larger radices because it is the simplest representation: a value is either on or off, whereas in a ternary system we must also maintain a third state in between, which could fluctuate and give rise to errors.

Despite this con, there are several pros to a ternary system over a binary system. A non-standard positional notation known as balanced ternary (which we will explore later) allows integer representations to be added or subtracted more efficiently, and certain operations (such as comparisons) can yield more information at once than in binary. When comparing two integers, the computer could select from greater than, less than, or equal, whereas in binary one must test for each relationship separately in a true-false manner.

It isn’t as if ternary passed by unnoticed. Early computer scientists and engineers were intrigued by the ability to create a computer that could perform calculations more elegantly than binary allowed. In 1950, a survey conducted by Engineering Research Associates that was published in High-Speed Computing Devices concluded that a binary system was preferred, but that a ternary system under the mathematical assumptions given (similar to our above equation) was more economical.

In the early 1950s, the MIT Servomechanisms Laboratory developed the Whirlwind I computer for the U.S. Navy. Proposals for the architecture of this machine, the first of its kind to operate on 16 bits in parallel, included one from Herb Grosch, an early computer scientist best known for Grosch’s Law. He proposed that the Whirlwind computer operate using ternary, probably in an effort to squeeze out more performance. Whirlwind I morphed into the control system of the military radar network that remained in operation for much of the Cold War, but unfortunately, ternary was ruled out early on in its development.

Funnily enough, it was all the way on the other side of that war that a man named Nikolai P. Brusentsov and his colleagues at Moscow State University actually developed the first working ternary computer in 1958. It was named Setun, after a river that flowed near the university campus. In the Setun architecture, values were stored in a series of eighteen ternary digits, or trits. This allowed the machine to store in eighteen slots what a binary computer would have to store in twenty-nine slots. Multiplying the number of “faces” by the number of “wheels,” as discussed earlier, gives Setun a radix economy of 54 (3 × 18), as opposed to a binary radix economy of 58 (2 × 29).
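The arithmetic is easy to verify (a quick Python check):

```python
# 18 trits distinguish 3**18 = 387,420,489 values; 28 bits (2**28 =
# 268,435,456) fall short of that, so a binary machine needs 29 bits.
assert 2 ** 28 < 3 ** 18 < 2 ** 29

# Faces times wheels, as in the display example earlier:
print(3 * 18)  # 54 -- Setun's ternary word
print(2 * 29)  # 58 -- the equivalent binary word
```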

Unfortunately, this efficiency wasn’t actually realized with Setun. The hardware necessary to store a trit could have also easily stored two individual bits. In the same space a trit occupied, storing three states, there could have been a total of four states.

It is with this in mind that some criticize our measure of efficiency. Numerous posts and a few papers show that the overall cost of implementing ternary is more prohibitive than simply implementing binary. This is often shown using technology that is optimized for a binary system, so there is no surprise there. Likewise, our generalized equation for efficiency, r × ⌊log_r(N) + 1⌋, may not apply if technology is discovered that enables our metaphorical wheel to have more faces at no additional cost. This would mean the incremental cost of increasing the radix is not equivalent to the incremental cost of increasing the number of digits.

Ternary computing wasn’t contemplated and then wholly discarded. In the 1960s, experiments and proposals included building ternary memory cells and even ternary logic gates. This culminated two decades after Whirlwind I and Setun, when Gideon Frieder and his colleagues at the State University of New York at Buffalo designed a complete ternary machine they dubbed Ternac.

In 2022 the world is coming to grips with a post-pandemic (the novel coronavirus) economy and an evolving ecology of scarcity that is impacting the computing world. Microchips are becoming harder to manufacture, and computational demands (such as the metaverse and blockchain) are growing exponentially. It is possible that future work in computing technology might include systems that can perform calculations outside of the traditional transistor framework. Will these systems once again take up the ternary flag in pursuit of greater efficiency?

Only time will tell.

Image Based On A Photo by Christian Dubovan on Unsplash
