This is the third article in a series whose intention is to have the reader able to understand binary, octal, and hexadecimal; three radices of great importance to contemporary computer theory. By the end of this series, you should be able to read and convert integer values into binary, octal, and hexadecimal, perform arithmetic on all three representations, understand basic Boolean operations, and otherwise have a further appreciation of the power of binary.

This article begins with a recap of where we are in the series in regards to the concept of counting. I review the definition of positional notation as outlined in the first article and then move on to reveal how we can calculate the number of digits a value will have in a given radix. In doing so I will go over two mathematical concepts relevant to this calculation: exponents and logarithms. I will then use logarithms to show how you can calculate the efficiency of a given radix, also called the radix economy, and answer the question, “What’s the most efficient radix?”

### Positional Notation

In the previous article, we defined a glossary of terms associated with positional notation using the following graphic:

In this graphic:

- *Digit* is a numeral in a given place indicating a corresponding value.
- *Number* indicates the total value being represented.
- *Position* is the singular placement of a given digit in relation to others.
- *Index* is an enumeration, starting from zero, of each successive position.

Let’s expand this diagram to a full definition with example:

Here we show the definition of a positional notation system: a counting method in which the value of a given representation is a summation of each digit (*d _{b}*) multiplied by a factor determined by the digit’s position. In the most efficient of cases, the factor is determined by a given base *b* (radix) and increases exponentially as the index increases.

Here I introduce some symbols commonly associated with set theory to help with my formal definition. A set, as mentioned in the previous article, is a series of elements. I define a set *D* using curly braces {…} to denote the set of digits available in the numbering system. The number of digits in this example, and in common-use base systems, is equal to the base or radix being used (including zero). Thus, in a base-5 numbering system, *d _{1}*, *d _{2}*, *d _{3}*, *d _{4}*, *d _{5}* would correspond to *0*, *1*, *2*, *3*, *4*, and *D* would be the set of all these digits.

Under **Number**, the symbols *a _{n} … a_{4}a_{3}a_{2}a_{1}a_{0}* indicate placeholders for digits (*d _{1-5}*). So, in the case of base 5, *a _{n}* could be one of 0, 1, 2, 3, or 4. This is what is meant by *∀k: a _{k} ∈ D*. The upside-down A can be read as “for all,” and the forking character as “is a member of,” as in the sentence “for all k, element a _{k} is a member of set *D*.”

For more information on how that system works with more examples, please refer to the previous article, Understanding Radix.

### Properties Of Positional Notation

We can make some general observations on writing numbers using a straightforward positional notation. I use the term straightforward to refer to positional notation as I have outlined above, and do not include here unusual radices such as negative or imaginary radices.

We can observe that some numbers require more digits to represent them than others. In fact, the larger the number, the more digits are required to write it down. However, this isn’t a proportional relationship: the number of digits doesn’t grow at the same rate as the number itself.

It takes one digit in decimal to represent the number five. You might then expect a value a hundred times larger to take a hundred times as many digits, but it doesn’t: we end up with just three digits to represent five hundred. The number of digits, then, instead of growing linearly with the value of a number, increases logarithmically.
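We can watch this happen with a short sketch in Python (the choice of language is ours, not anything prescribed by the math): the digit count of a decimal number tracks its base-10 logarithm, not its magnitude.

```python
import math

# The number of decimal digits grows with the logarithm of the value,
# not with the value itself.
for n in (5, 500, 50000):
    print(n, len(str(n)), math.floor(math.log10(n)) + 1)
```

Multiplying the value by 100 adds only two digits each time.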

#### Exponents And Logarithms

Up until now, I’ve assumed the reader is familiar with the concept of exponents and exponentiation. Here we shall actually define exponents as I have been using them, and then use that definition to illustrate their inverse: logarithms.

##### Exponents

For our use, an exponent is simply a shorthand way of indicating that a given value is multiplied by itself a number of times. It is usually shown as a superscript number to the right. The number it decorates is known as the *base*, and the value of the *exponent* itself is known as the *power* or *factor* like so: base^{power}. The term *power* can also refer to the end result of the multiplication process, so for example 256 is a *power* of 2 (the 8th *power*). Performing exponentiation can also be called *raising* a given *base* by a given *power*.

Raising any base to the power of zero is defined as 1, and raising it to the power of one gives the base itself. You can see in the above diagram that each successive power of 5 is simply the previous power of 5 multiplied by 5 (here, 5 is the base). To illustrate, 5^{1} = 5×1 = 5, 5^{2} = 5×5 = 25, 5^{3} = 5×5×5 = 125, 5^{4} = 5×5×5×5 = 625, 5^{5} = 5×5×5×5×5 = 3125, and so on.
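A few lines of Python reproduce that table of powers, purely as illustration:

```python
# Each successive power of 5 is the previous power multiplied by the base.
base = 5
for power in range(6):
    print(f"{base}^{power} = {base ** power}")
```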

###### Negative Exponents

What happens when the power of an exponent is negative? If we consider that a positive power indicates a series of multiplications, we might surmise intuitively that a negative power would be the inverse of that. We would be correct and identify the inverse of multiplication as division. Thus, we can define a negative power of b as b^{-n} = 1 / b^{n}:

Even though the right side of this definition is a single division (as opposed to multiple multiplications), it is consistent with our definition, because multiple divisions can be collapsed into a single division. For example, 10^{-1} = 1 / 10, or one-tenth, and 10^{-2} is ten times smaller than that, being 1 / 100 or one-hundredth.
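The same identity can be checked directly in Python, which follows the b^{-n} = 1 / b^{n} convention:

```python
# A negative power is repeated division: b**-n == 1 / b**n.
print(10 ** -1)                    # 0.1  (one-tenth)
print(10 ** -2)                    # 0.01 (one-hundredth)
print(2 ** -3 == 1 / (2 * 2 * 2))  # True (exact, since floats are binary)
```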

This can be illustrated by the multiplication of fractions. A fraction, as shown in the above diagram, represents a division of the *numerator* by the *denominator*. When multiplying by a fraction, the mathematician is actually performing multiplication *and* division by multiplying both the numerators and the denominators. Using this knowledge we can view the above equation as a series of multiplications (and divisions):

The astute reader will notice that place-values progressing to the left in positional notation are increasing powers of the radix. If we were to draw a point, otherwise known as a radix point or in base-10 the decimal point, and then create place-values to the right we could continue the progression but rather than counting up, we count down into the negatives. This allows us to create a fractional part of a number, in exponentially increasing degrees of accuracy:
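As a small worked example of those fractional place values (sketched in Python): in base 2 the digits 0.11 mean one half plus one quarter.

```python
# Place values to the right of the radix point use negative powers of the base.
# In base 2, 0.11 means 1*2**-1 + 1*2**-2:
value = 1 * 2 ** -1 + 1 * 2 ** -2
print(value)  # 0.75
```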

###### Fractional Exponents

A fractional exponent is what is known as a *root*, the most widely recognized form of which is the *square root*. That is, an exponent of 1/2 (the square root) is defined as the number that, when raised to the reciprocal power, equals the base. For example, 9^{1/2} = 3 because 3^{2} = 9. In this example, 2 is the reciprocal of 1/2.

When considering fractional exponents one must remember the properties of multiplication. You might think that 10^{0.1}, or ten to the one-tenth power, would equal 1, since 1 is 1/10 of 10; however, that would be incorrect. 10^{0.1} actually equals approximately 1.258925411794167. This makes sense because multiplying 1 by itself 10 times equals 1, whereas raising 1.2589254… to the 10th power equals 10, satisfying the definition of the exponent.
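Both claims are easy to verify with Python’s exponent operator (floating-point rounding makes the round trip approximate, not exact):

```python
# An exponent of 1/2 is the square root: 9**(1/2) == 3 because 3**2 == 9.
print(9 ** 0.5)  # 3.0
# 10**0.1 is not 1; it is the number whose 10th power is 10.
tenth_root = 10 ** 0.1
print(round(tenth_root, 6))        # 1.258925
print(round(tenth_root ** 10, 6))  # 10.0
```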

##### Logarithms

Logarithms, on the other hand, are the inverse functions of exponents. Rather than compute the powers of a base like an exponent, a logarithm computes the *power* that a given *base* must be raised to in order to produce a result. Exponents and logarithms are simply two sides to the same operation as shown in the diagram:

Logarithms were originally introduced by John Napier, a Scottish mathematician, physicist, and astronomer, in 1614 as a means of simplifying the process of multiplication and division. Calculating logarithms themselves is complex, involving finding roots combined with exponentiation (it is the reverse of the calculations featured in *fractional exponents*), but one can write these results into a *logarithm table*. Arduous multi-digit multiplication steps can be replaced by table lookups and addition because of a particular property of logarithms: the logarithm of a product is the sum of the logarithms of its factors:

Thus, to multiply three large complicated numbers one would only need to look up each of their respective logarithms in the table, add them together, and then raise the logarithm *base* to the resulting number (most likely via reverse lookup):
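We can mimic that table-lookup procedure in Python, with `math.log10` standing in for the table and exponentiation standing in for the reverse lookup (the particular factors below are arbitrary examples of ours):

```python
import math

# Multiplication via logarithms: take each factor's logarithm,
# add them, then raise the base to the sum (the "reverse lookup").
a, b, c = 1234.0, 5678.0, 91011.0
log_sum = math.log10(a) + math.log10(b) + math.log10(c)
print(10 ** log_sum)  # approximately equal to the direct product:
print(a * b * c)
```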

Every logarithm calculation relies on a *base*. Remember that in logarithms the base is the value being raised to a given power; despite the shared term, it does not mean radix here. The base is very important and cannot be ignored. However, in many texts you will find logarithms with no apparent base. Depending on the context, these are often what are known as *common logarithms*, which have base ten. Another base often seen in the wild is the Euler constant *e*, which gives what are called *natural logarithms*. In text, *common logarithms* are often written *log* while *natural logarithms* are written *ln*.
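Python’s standard library happens to mirror this convention: `math.log10` is the common logarithm, while `math.log` (with one argument) is the natural logarithm.

```python
import math

# A bare "log" usually means the common (base-10) logarithm;
# "ln" means the natural (base-e) logarithm.
print(math.log10(1000))  # 3.0
print(math.log(math.e))  # 1.0
```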

#### Calculating The Number Of Digits

Coming back full circle, earlier we showed how the number of *digits* grows *logarithmically* in relation to the scale of a number. This is easily observed because each place value in a straightforward positional notation increases by a *power* of the *radix*. If we count in base-10, each place value increases by a power of 10, likewise in base-2 each place value increases by a power of 2.

Review the positional notation definitions:

You’ve most likely noticed by now that the *index* of a given *position* is equal to the logarithm, base radix (log_{radix}), of the place value. This is because the place value is the inverse: the radix raised to the index power, radix^{index}.

This curious relationship enables us to calculate the number of digits a given number has in a given base. By taking the logarithm, base radix, of the value, dropping the fractional part, and adding one, we find the number of places required. Remember that a logarithm tells us the power to which a given base must be raised; in this instance, the radix is the logarithm base, as illustrated:
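The calculation fits in one line of Python (`math.log(n, r)` computes log base r; note that floating-point logs can be off by a hair at exact powers of the radix, which is fine for these examples):

```python
import math

# Digits needed to write n in base r (for n >= 1): floor(log_r(n)) + 1.
def digit_count(n, r):
    return math.floor(math.log(n, r)) + 1

print(digit_count(500, 10))  # 3   (500)
print(digit_count(500, 2))   # 9   (111110100)
print(digit_count(500, 16))  # 3   (1F4)
```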

#### Radix Economy

With the ability to calculate how many digits a given value will have in an arbitrary base we can now calculate the efficiency of a given radix. This measure of efficiency is known as the Radix Economy and tells us the relative costs involved in implementing a given number system.

When implementing machines that can store values, such as computers, or number displays, it can be useful to know what resources are necessary for representing those numbers. Imagine that we are devising a display that will represent the number of likes on this article. We decide each place value will be represented by a wheel with the digits printed on different wheel positions. In base-10 each wheel will then have 10 faces. If we need to count up to 99,999 we will then need to manufacture and have on hand 5 wheels each with 10 faces for a total of 50 faces.

This scenario can be generalized so that for a given value n (here it was 99,999) we can determine how many faces are necessary to represent it in an arbitrary base. You’ll notice that for this scenario we multiplied the radix (being the number of possible digits) by the number of places (being the wheels). Since we know how to calculate the number of places needed to represent a value in a given base (above), we can simply substitute that equation in the multiplication:

This equation measures a given radix’s efficiency E as a function of the parameters *r* (the radix) and *N* (the number). So, for example, the number 263 in base-10 would have a radix economy of E(10, 263) = 10 × ⌊log(263) + 1⌋ = 30. The number 3256 would come out as 10 × ⌊log(3256) + 1⌋ = 40. Notice that the exact value of *N* doesn’t matter much, as it gets ‘glossed over’ into the number of digits necessary to represent *N*. Through a process of approximation and algebraic rearrangement we can arrive at an equation that captures this simplification:
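The wheels-and-faces calculation translates directly into a Python sketch, which reproduces the two examples above:

```python
import math

# Radix economy E(r, N) = r * (floor(log_r(N)) + 1):
# faces per wheel times the number of wheels needed to display N.
def radix_economy(r, n):
    return r * (math.floor(math.log(n, r)) + 1)

print(radix_economy(10, 263))   # 30
print(radix_economy(10, 3256))  # 40
print(radix_economy(2, 99999))  # 34 (17 binary digits at 2 faces each)
```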

The diagram above uses a calculation we haven’t covered in this tutorial: the *natural logarithm*, that being the logarithm with base *e*, the *Euler constant*. In short, you can calculate the logarithm of any given base by computing the *natural logarithm* of the value and dividing it by the *natural logarithm* of the base. We use that calculation here to factor out the logarithmic term involving the given number, isolating on the right side the term involving only the radix.

The last equation in the diagram can be read: “The *Radix Economy* [ *E(r,N)* ] divided by the natural logarithm of *N* is approximately equal to the radix *r* divided by the natural logarithm of *r*.” This means, working backward, we can find the Radix Economy of *N* by *r* by dividing *r* by log_{e}(*r*) and multiplying it by log_{e}(*N*).

With this we can focus purely on the efficiency of the radix itself, that being *r* / log_{e}(*r*):

You can see here an interesting result: using this equation as a measure of the relative efficiency of a given radix, a radix of 3 is actually more efficient (a smaller factor) than a radix of 2, and of course, the most efficient radix (by this standard) is the Euler constant *e*.
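Tabulating *r* / ln(*r*) for a few radices makes the ranking visible; the sketch below prints the factor for each (smaller is more efficient, and 3 narrowly beats 2):

```python
import math

# Relative efficiency of a radix: r / ln(r). Smaller is more efficient.
for r in (2, math.e, 3, 4, 10):
    print(f"r = {r:6.3f}  ->  {r / math.log(r):.4f}")
```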

The equation used in this article to compute Radix Economy was lifted from Wikipedia (https://en.wikipedia.org/wiki/Radix_economy) and is used (and explained) here under the terms of the CC3.0 license.

### Meaning

So what does this mean for us? It is well known that computers and other electronic devices almost universally use binary, or base 2, to represent values. However, we can see here that, allegedly, 3 is the most efficient radix (barring the ability to implement *e*). So what gives?

There are a number of reasons devices today continue to use binary rather than ternary (base 3). One is the existing investment in electronic materials (integrated circuit designs, transistors, etc.) that are best suited to a binary system. Another is that binary is less error-prone than larger radices because it is the simplest representation: a value is either on or off, whereas in a ternary system we must also maintain a third state in between, which could fluctuate, giving rise to errors.

Despite this con, there are several pros to a ternary system over a binary system. A non-standard positional notation known as *balanced ternary* allows integer representations to be added or subtracted more efficiently, and certain operations (such as comparisons) can yield more information at once than in binary. When comparing two integers, the computer could select from *greater than*, *less than*, or *equal*, whereas in binary one must test for each relationship separately in a true-false manner.

It isn’t as if ternary passed by unnoticed. Early computer scientists and engineers were intrigued by the ability to create a computer that could perform calculations more elegantly than binary allowed. In 1950, a survey conducted by Engineering Research Associates that was published in *High-Speed Computing Devices* concluded that a binary system was preferred, but that a ternary system under the mathematical assumptions given (similar to our above equation) was more economical.

In the early 1950s, the MIT Servomechanisms Laboratory developed the Whirlwind I computer for the U.S. Navy. Proposals for the architecture of this machine, the first of its kind to operate on 16 bits in parallel, included one from Herb Grosch, an early computer scientist best known for Grosch’s Law. He proposed that the Whirlwind computer operate using ternary, probably in an effort to squeeze out more performance. Whirlwind I morphed into the control system of the military radar network that remained in operation for much of the Cold War, but unfortunately, ternary was ruled out early on in its development.

Funnily enough, it was all the way on the other side of the war that a man named Nikolai P. Brusentsov and his colleagues at Moscow State University actually developed the first working ternary computer in 1958. It was named Setun, after a river that flowed near the working campus. In the Setun architecture values were stored in a series of eighteen ternary digits, or trits. This allowed the machine to store in eighteen slots what a binary computer could store in twenty-nine slots. Following our above equation, that gives Setun a Radix Economy of 54 (3 x 18) in opposition to a binary Radix Economy of 58 (2 x 29).
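The Setun arithmetic checks out in a few lines of Python: 18 trits span 3^18 states, which would take 29 bits to cover in binary.

```python
import math

# Setun stored numbers in 18 trits; a binary machine needs 29 bits
# to cover the same number of distinct states.
trit_states = 3 ** 18
bits_needed = math.ceil(math.log2(trit_states))
print(trit_states)                    # 387420489
print(bits_needed)                    # 29
print(3 * 18, "vs", 2 * bits_needed)  # radix economies: 54 vs 58
```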

Unfortunately, this efficiency wasn’t actually realized with Setun. The hardware necessary to store a trit could have also easily stored two individual bits. In the same space a trit occupied, storing three states, there could have been a total of four states.

It is with this in mind that some criticize our measure of efficiency. Numerous posts and a few papers show that the overall cost of implementing ternary is more prohibitive than simply implementing binary. This is often shown using technology that is optimized for a binary system, so there is no surprise there. Our equation for efficiency, being *r* × ⌊log_{r}(*N*) + 1⌋, may not necessarily apply if technology is discovered that enables our metaphorical wheel to have more faces at no additional cost. This would mean the incremental cost of increasing the radix is not equivalent to the incremental cost of increasing the number of digits.

Ternary computing wasn’t contemplated and then wholly discarded. In the 1960s, experiments and proposals included building ternary memory cells and even ternary logic gates. This culminated two decades after Whirlwind I and Setun when Gideon Frieder and his colleagues at the State University of New York at Buffalo designed a complete ternary machine they dubbed *ternac*.

In 2022 the world is coming to grips with a post-pandemic (the novel coronavirus) economy and an evolving ecology of scarcity that is impacting the computing world. Microchips are becoming harder to manufacture, and computational demands (such as the metaverse and blockchain) are growing exponentially. It is possible that future work in computing technology might include systems that can perform calculations outside of the traditional transistor framework. Will these systems once again take up the ternary flag in pursuit of greater efficiency?

Only time will tell.

Image Based On A Photo by Christian Dubovan on Unsplash