Binary (Base-2) And Its Operations

This article follows the pattern of the previous articles and begins with a history of binary. After that, I briefly reiterate why binary is used in modern electronic devices, as covered in the previous article, and go into more depth regarding binary “sizes” (bit, byte, kilobyte, etc.). Then I move on to important elements of binary arithmetic: the operations of addition, subtraction, multiplication, and division. I also cover two operations often found in computing processors, the shift operators, and their mathematical meaning. Finally, I briefly cover Boolean logic operations.
  1. Understanding Radix
  2. Converting To Binary, Octal, and Hexadecimal
  3. Radix Economy
  4. Binary (Base-2) And Its Operations
  5. Negative Binary Numbers

This is the fourth article in a series whose intention is to have the reader able to understand binary, octal, and hexadecimal; three radices of great importance to contemporary computer theory. By the end of this series, you should be able to read and convert integer values into binary, octal, and hexadecimal, perform arithmetic on all three representations, understand basic Boolean operations, and otherwise have a further appreciation of the power of binary.

A History Of Binary

As previously explained in the first two articles (Understanding Radix, and Converting To Binary, Octal, And Hexadecimal), the binary numeral system is a positional notation system based on a radix of 2. Because of this it is often also referred to as base-2. Since the radix is two, binary numbers use only two digits: 0, and 1.

The binary numeral system as we are familiar with it today was shaped by Gottfried Leibniz and other mathematicians in the 17th and early 18th centuries. However, the idea of binary traces back well before that, all the way to ancient Egypt.

Binary In Ancient Egypt

Early forms of a binary fractional system appeared in documents dated to the fifth dynasty of Egypt around 2400 BC and were later developed in full hieroglyphic form in the nineteenth dynasty of Egypt around 1200 BC. In this fractional system, one of two that the ancient Egyptian scholars used, one would measure portions of a heqat (a unit of volume) by adding together fractions whose denominator was a power of two (1/2, 1/4, 1/8, 1/16, 1/32, and 1/64). These are known as the Horus-Eye fractions to today’s mathematical historians. Some believe the symbols used in this system could be arranged to form the eye of Horus.

Besides the Horus-Eye fractions, which share binary’s powers of two, the ancient Egyptians also used binary for multiplication. The Rhind Mathematical Papyrus, dating to 1650 BC, outlines a multiplication method reminiscent of Horner’s method from the previous article. In this method, you multiply two numbers by repeatedly doubling the first number and then adding together the doublings that correspond to the 1s in the binary representation of the second number.
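The same doubling idea survives today in what is often called Russian peasant multiplication. As a rough illustration (a sketch of my own in Python, not a transcription of the papyrus’s procedure), the following function multiplies two numbers using nothing but doubling, halving, and addition:

```python
def egyptian_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers using only doubling and addition."""
    total = 0
    while b > 0:
        if b % 2 == 1:   # this doubling corresponds to a 1 in b's binary form
            total += a
        a = a * 2        # double the first factor
        b = b // 2       # move on to the next binary digit of the second factor
    return total

print(egyptian_multiply(13, 21))  # 273
```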

Binary In Ancient China

In the 9th century BC, China developed its own version of binary, though not necessarily for mathematical purposes. The I Ching (or Yi Jing, 易經), which translates to the Book Of Changes, or Classic Of Changes, serves as a text for divination and classical Chinese thought and cosmology. This text is based on the Taoist duality of yin and yang and uses a form of cleromancy based on eight trigrams (Bagua) and sixty-four hexagrams (Gua). If we treat those two figure systems as counts that are powers of two (and convert them to binary), we can see that, including zero, they map onto three-digit and six-digit binary numerals respectively.

In fact, during the Song Dynasty, the scholar Shao Yong (邵雍) rearranged the hexagrams into a format that can be read today, unintentionally on Shao’s part, as a sequence of binary representations. If you read solid lines as zero and broken lines as one, you can read from the bottom right to the top left of Shao Yong’s square a sequence of binary numerals counting from zero to sixty-three. Below is a reproduction of Shao Yong’s square:

Binary In West Africa And Europe

China wasn’t the only place where a divination system related to binary arose. There is also Ifá, a Yoruba religion and, in the sense used here, a system of divination. This system uses an organization of sixteen principles, each identified by four slots of binary values. The practice has its earliest origins in West Africa and is still observed today, including by the notable actor Xolo Maridueña.

On the Western side, there was the practice of geomancy, from the Late Greek geōmanteía, translating roughly as “foresight by the earth.” This system of divination was based on randomly generating one of sixteen geomantic figures, each consisting of four rows that contain either one or two dots (a binary value). Geomancy itself can be traced back further into Arabic traditions, with the original names of the figures traditionally given in Arabic.

Binary In Ancient India

In the 2nd century BC, binary numbers emerged in India. In studying and describing prosody (the study of meter in poetry and verse), the Indian scholar Pingala encoded short and long syllables as two different values. In his classic Chandaḥśāstra (“science of meters” in Sanskrit) he describes forming a matrix with these values to give a unique value to each meter. Unlike the binary we’ve been presenting, Pingala’s binary is read from left to right, increasing in value towards the right. His system also begins at one instead of zero, so that 0000₂ would equal 1₁₀.

Binary As We Know It Today

This brings us to the second most famous scholar of binary, Gottfried Leibniz. There were predecessors to Leibniz, such as Ramon Llull (who pursued a universal science he called the ars generalis), Francis Bacon (who used binary as a cipher), and John Napier (in his non-positional location arithmetic), but after George Boole, Leibniz is probably the most famous Western philosopher to have embraced binary.

Leibniz admired and studied Chinese culture and was fascinated by the relation of the hexagrams (in the Fuxi order above) to binary number representations zero through sixty-three. His paper Explication de l’Arithmétique Binaire, published in 1703 (decades after Juan Caramuel y Lobkowitz had published a description of a similar system), outlined the traditional binary positional notation we use today, with increasing powers of two extending to the left.

Influenced by the way Llull arranged his Ars Generalis around binary combinations, Leibniz made his concepts on binary central to his own ambitious version of Llull’s work, the Characteristica Universalis, a framework intended to express the whole of reality in intellectual terms. Leibniz believed the way Llull had constructed his alphabet was too limiting and proposed a broader alternative alphabet that used numbers rather than letters. These ideas would go on to influence his intellectual successors George Boole and Gottlob Frege in their development of modern symbolic logic.

Binary Values And Boolean Logic

One hundred and fifty-one years later in 1854 George Boole, a mostly autodidactic British mathematician, published The Laws of Thought. Within this landmark work were the principles of what is now known as Boolean algebra. Boole’s intention was to systematize Aristotelian logic in such a way that a mathematical foundation could be laid out for later extension.

In Boolean algebra the values of variables are one of two truth values: true or false, often marked as 1 or 0 (as in electronic computers). This is in contrast to elementary algebra, with which most students are more familiar, where variables can be any number. This leads to a constraint on possible meaningful operations. The main operations in Boolean algebra are truth-value based and include conjunction (and – ∧), disjunction (or – ∨), and negation (not – ¬). Because of this constraint Boolean algebra becomes a system that can describe logical operations much like those used by Aristotle and other philosopher-logicians.

The Birth Of The Modern Computer

Fast forward to 1937 and meet Claude Shannon, an American mathematician and engineer best known for being the father of information theory. Shannon became acquainted with George Boole’s work when he attended a philosophy class at the University of Michigan. Up until that point, Boole’s work seemed more like a pursuit of academic interest than something practical. However, Shannon brilliantly recognized that Boole’s work could be modeled by mechanical processes and thus could be brought to life in a practical way. This led to a twenty-one-year-old Shannon focusing the efforts of his master’s thesis on the optimization of electromechanical telephone relays using Boolean algebra.

Almost simultaneously, Victor Shestakov at Moscow State University proposed a theory of electric switches based on Boolean logic in 1935. He later presented an academic thesis in 1938, alongside the Soviet logicians and mathematicians Yanovskaya, Gaaze-Rapoport, Dobrushin, Lupanov, Medvedev, and Uspensky, and the work was finally published, in Russian, in 1941.

In November of 1937, George Stibitz completed the Model K (K for Kitchen) in his home kitchen. This machine utilized insights gleaned from Shannon and Shestakov about the practicalities of embodying Boolean algebra in hardware to perform binary addition. This led to his then-employer Bell Labs launching a research program in 1938 with Stibitz in charge, which in turn led to Stibitz demonstrating the Complex Number Computer at the American Mathematical Society conference at Dartmouth College on September 11, 1940. There were a number of extremely notable people in attendance, including computer scientist John von Neumann (of von Neumann architecture fame), John Mauchly (who later went on to design ENIAC), and Norbert Wiener (the originator of cybernetics).

Stibitz and Shannon weren’t entirely alone. Between 1935 and 1938, Konrad Zuse created a motor-driven mechanical computer called the Z1, financed entirely out of private funds. It too used binary as its basis for representation and calculation. Even though it was the first freely programmable computer to use Boolean logic (programmed via punched celluloid film), it was unreliable. Unfortunately, this computer was a casualty of World War II, destroyed in a bombardment of Berlin. Its successor, the Z3, was completed in 1941 and became known as the world’s first working programmable, fully automatic digital computer, with 2,600 relays and a clock frequency of 5-10 Hz. This model too was destroyed, in December of 1943, during an Allied bombardment of Berlin.

From there onwards, Boolean algebra and binary became the de facto standard foundation of practical digital circuit design, and likewise provided an intellectual framework for future developments of the Information Age.

Why Binary?

When observing modern digital computers one might wonder if binary holds some special qualities, or if it’s possible to recreate the same technology using other number bases. The answer is yes, it is possible and has been done.

One of the earliest calculating machines, the Difference Engine designed by Charles Babbage (whose later Analytical Engine Ada Lovelace famously wrote about), is not based on binary at all. This engine operated on ten discrete digits, making it a decimal machine. Likewise, in the case of such devices as Pascal’s calculator, binary wasn’t a factor.

In an earlier article in this series, Radix Economy, I cover explorations into three-valued logic, or ternary, computers. For example, in the late 1950s Soviet engineers built a ternary-based computer they called Setun. In the early 1970s, research in the United States came to fruition when Gideon Frieder and his colleagues at the State University of New York at Buffalo designed a complete ternary machine they dubbed Ternac.

In the previous article of the series, Converting To Binary, Octal, and Hexadecimal, I also go over why binary might be more apt than other numeral systems for use in electrical computing devices.

To summarize, when embodying a numeral system in a physical medium there exists a tradeoff between mathematical expressiveness and implementation complexity. From base-3 upward, the higher the radix used, the more complex the implementation becomes. This stands in stark contrast to binary, which has only two states.

Consider an electrical charge. It is much easier and more reliable to simply test for the presence or absence of a charge than it is to measure varying levels of charge. With varying levels of charge, any interference or noise can more easily shift the encoded information from one digit to another, whereas with a single full charge the signal only needs to be detected as present or absent.

As well, binary’s two states, on and off, lend an elegance to circuit construction. Here one can use Boolean logic to construct arbitrary combinational circuits, whereas in a ternary (or larger) computer the outcomes of logical operations can be less clear.

All these factors, combined with historical tradition, conspire to make binary the current de facto standard for electronic computers. As technology progresses into the realms of nanotechnology and quantum mechanics this may change, but for now, binary is here to stay.

Binary Sizes

When storing binary within a computer system (by means of charge, magnetism, or another medium) it is useful to define how much binary information is being stored. The smallest unit of binary, a single binary digit, is called a “bit.” The term, a portmanteau of “binary digit,” was coined by the American mathematician John Wilder Tukey while working with John von Neumann on early computer designs, and it first appeared in print in an article Claude Shannon published in 1948.

Historically, the next largest size up from a single bit is known as a nybble (or nibble) and comprises four bits. You might recognize this definition from the previous article as equivalent to the binary expansion of a hexadecimal digit. The nybble is also referred to as a half-byte (see below), tetrade, semi-octet (see below), quadbit, or quartet. Except for semi-octet, this nomenclature is more often found in the early literature of computer science. In some computer architectures, four bits are the fundamental unit of processing, and in this context each four-bit group is also known as a character. If you’ve been reading the series in order you’ll know that four bits (being a hexadecimal digit) can portray sixteen different digits (or states), from 0 to F.

Traditionally, the next largest size up from a nybble of four bits is the byte. I write traditionally because in early computer systems the size of a “byte” was hardware/system dependent and could range from 1 to 48 bits. Computer systems using 6-bit and 9-bit bytes were prevalent in the 1960s. However, the ISO/IEC 2382-1:1993 and IEC 80000-13 standards documents provide the modern standard definition of “byte” as a sequence of eight bits, able to store the values 0-255. The coining of the term “byte” is credited either to Werner Buchholz, while working on the IBM Stretch, or to Louis G. Dooley, while working on SAGE at MIT Lincoln Laboratory, both in 1956. In the former case, it is an intentional misspelling of the word bite in an effort to distinguish it from “bit.” Readers will note then, from the previous article, that a byte, being a sequence of two nybbles, can be represented using two hexadecimal digits.
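As a quick, hypothetical illustration in Python (the variable names are mine), you can see how a single byte splits into two nybbles, each corresponding to one hexadecimal digit. The >> and & operators used here are Python’s shift and bitwise-and, which correspond to operations covered later in this article:

```python
value = 0b10110101           # one byte: 181 in decimal, B5 in hexadecimal

high_nybble = value >> 4     # the upper four bits:  1011 -> hexadecimal B
low_nybble = value & 0b1111  # the lower four bits:  0101 -> hexadecimal 5

print(f"{value:08b} = 0x{high_nybble:X}{low_nybble:X} = {value}")
# prints: 10110101 = 0xB5 = 181
```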

Because the term “byte” still remains somewhat ambiguous due to its use in the early literature of computer science and in the continued use of early designs, a strictly 8-bit sequence is also referred to as an octet. Due to its specificity, the term octet is often found in modern communications protocol standards and international communications.

The unit symbol for the byte is specified in IEC 80000-13 and IEEE 1541 as the upper-case character B, while the lowercase letter o denotes an octet.

From here, it gets trickier than you might at first imagine. The confusion rests on the simple decision of whether to use powers of ten, like the decimal measuring system, or powers of two (as a computer does when specifying memory addresses), when describing ever-larger sizes. In theory, decimal measuring relies on the standard SI (International System of Units) prefixes for increasing sizes: kilo-, mega-, giga-, tera-, peta-, etc., abbreviated respectively as k, M, G, T, and P. Systems based on the more historical powers of two instead use binary prefixes for increasing powers of 1,024 (itself a power of two): kibi- (1024¹), mebi- (1024²), gibi- (1024³), tebi- (1024⁴), pebi- (1024⁵), etc., abbreviated respectively as KiB, MiB, GiB, TiB, and PiB.

In the non-ideal real world, however, due to historical usage and the medium, there is a third convention called the customary convention where the term kilobyte (using the SI prefix) actually refers to 1024 bytes, megabyte refers to 1024² bytes, and so on. This is a strange combination of the decimal measuring terms with the actual binary power values.

While kibibyte means 1024 bytes specifically, kilobyte can mean either 1024 bytes or 1000 bytes without proper disambiguation. This ambiguity is nothing to sneeze at and has actually led to lawsuits concerning the capacity of storage peripherals! Courts have held that the legal definition of a gigabyte is decimal, being 10⁹ bytes, rather than the binary definition of 2³⁰ bytes. This has also created a divide among popular computer operating systems: devices running Microsoft Windows report their storage capacities using the customary convention, while other popular systems such as Apple’s macOS (Darwin) and iOS (both BSD-derived), along with Ubuntu and Debian, have adopted the decimal-based SI standard.
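To see just how far the two conventions drift apart as sizes grow, here is a small illustrative Python loop (the unit lists are my own shorthand, not any standard’s API):

```python
SI_UNITS = ["kB", "MB", "GB", "TB", "PB"]            # powers of 1,000
BINARY_UNITS = ["KiB", "MiB", "GiB", "TiB", "PiB"]   # powers of 1,024

for power, (si, binary) in enumerate(zip(SI_UNITS, BINARY_UNITS), start=1):
    decimal_bytes = 1000 ** power
    binary_bytes = 1024 ** power
    shortfall = 100 * (1 - decimal_bytes / binary_bytes)
    print(f"1 {si:>2} = {decimal_bytes:>22,} bytes   "
          f"1 {binary} = {binary_bytes:>22,} bytes   "
          f"gap: {shortfall:4.1f}%")
```

The gap starts at about 2.3% for a kilobyte versus a kibibyte and grows to more than 11% at the petabyte scale, which is exactly the kind of discrepancy those lawsuits were about.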

Binary Representations

As you have read, the binary (base-2) numeral system consists of only two values: 0, and 1. Because each digit is simply one of two states, binary can be encoded in a variety of media, as long as the interpreter can differentiate between the two states. One could use a set of two-state switches, or, on paper, one could punch out holes; all that matters is that two mutually exclusive states exist. Please note that, when reading digital circuits, an “on” state is not necessarily equivalent to a value of 1. Thus, when examining any binary system, it’s important to understand the context.

As with any radix, you can represent any finite number using binary, particularly if using a radix point to indicate fractional values (see Radix Economy – Negative Exponents). With the exclusion of floating-point formats (which accommodate very large and very small values using scientific notation), most work in binary is done on integers that have no fractional component. When speaking or writing about binary values it is customary to use the numerals 0 and 1 in line with positional notation using a radix of two, such as 1011001₂.

As you’ve probably noticed, this site uses a subscript after the notation to indicate the radix used in that notation. For example, the number 5₁₀ (decimal) is represented in binary as 101₂. There are many other conventions besides this to be aware of:

1011001 in binary
In this convention, the base is explicitly stated.
1011001b or 1011001B
This convention uses a suffix of b/B to indicate binary. Using a lower-case b is also known as the Intel convention.
%1011001
Using a prefix of % to indicate binary is in line with Intel’s competitor Motorola’s convention.
0b1011001 or #b1011001
You’ll often see this notation used in programming languages; the latter form appears particularly in Lisp-based languages (see the short snippet below).
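In Python, for example, the 0b prefix creates a binary literal directly, and the built-in bin() and int() functions convert in both directions. A quick sketch:

```python
x = 0b1011001             # a binary literal: 89 in decimal

print(x)                  # 89
print(bin(x))             # 0b1011001
print(int("1011001", 2))  # parse a binary string with an explicit radix: 89
print(f"{x:b}")           # format an integer back into binary digits: 1011001
```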

Performing Arithmetic Operations

Because binary is simply a base-2 numeral system and can represent any given finite value you can perform the standard mathematical operations of addition, subtraction, multiplication, and division. At first, it can seem a little odd as we are used to using more than two digits, but with a little practice, you can easily perform at least addition in your head.

Addition In Binary

The most important thing to remember when performing binary addition is that the value 2₁₀ is represented in binary as 10₂. You’ll notice that upon reaching the value 2 (being two 1s) we immediately “carry” to the next place value. This gives rise to the disconcerting phenomenon where two 1s become a 0 and a carry of 1. This is visualized in the following diagram:

Two ones are added, producing a zero with a one carried over to the next place value.

Just like in decimal arithmetic, when the result of addition between two digits is larger than can be represented by one digit you “carry” the resulting second digit over to the next place value. This is true of any positional notation system of any radix, and so it also follows in binary:

You’ll notice here that when confronted with three 1s in an addition, we get the value 3₁₀, which is 11₂ in binary. The result is a 1 below, and a carry of 1 as well.

Binary representations often have long strings of 1s in them. You can perform a sort of shorthand operation in terms of the carry digits by recognizing that any carry operation begun at the beginning of a series of 1s will extend until the termination of that sequence. This is known as the Long Carry Method. The diagram below puts this observation in a better graphical format:

Image shows arrows denoting where you can continue to carry a one across the addition.

Using these methods one can perform addition on any two binary representations.
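If you want to check your hand-worked sums, the following Python sketch (the function name and approach are my own, purely for illustration) performs the same digit-by-digit carry procedure on binary strings:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings digit by digit, carrying exactly as you would by hand."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)       # pad the shorter number with leading 0s

    digits = []
    carry = 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # work right to left
        total = int(bit_a) + int(bit_b) + carry          # 0, 1, 2, or 3
        digits.append(str(total % 2))                    # the digit to write down
        carry = total // 2                               # the digit to carry left
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(add_binary("1011", "110"))    # 10001  (11 + 6 = 17)
```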

Subtraction

Subtraction is the inverse of addition. Instead of carrying an excess value over into a new place value, when subtracting two digits would produce a negative result you “borrow” from the next largest place value.

Diagram showing that subtracting 1 from 0 requires borrowing from a 1 to the left.

The need to borrow when a digit is too small to subtract from works just as it does in decimal (base-10). In decimal, you reduce the next place value to the left by 1 and carry a “ten” over to the current digit. The x in the above diagram indicates the digit of the next full place value. If the next place value is a 0, then you must borrow from the nearest place value to the left that contains a 1.

You might be wondering how, after borrowing, a digit that started as a 0 can have 1 subtracted from it and still produce a 1. Remember that, even in decimal, when you borrow you are adding the next place value to the upper digit. In decimal, you are adding 10₁₀ (the radix) to the digit in question, so it follows that in binary you are adding 2₁₀ (10₂ in binary) to the place value. 2 – 1 = 1, hence the result.

I believe that the best way to represent binary subtraction is to see it in action, rather than through words. Observe the following binary subtraction:

A diagram showing that if the place value you would borrow from is a zero, another borrow from further left is required.
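Here is a comparable Python sketch for subtraction (again, my own illustrative helper, assuming the first number is at least as large as the second), borrowing from the next place value exactly as described above:

```python
def subtract_binary(a: str, b: str) -> str:
    """Subtract binary string b from a (assumes a >= b), borrowing as you would by hand."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)

    digits = []
    borrow = 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):   # work right to left
        diff = int(bit_a) - int(bit_b) - borrow
        if diff < 0:
            diff += 2      # borrow 10 (two) from the next place value to the left
            borrow = 1
        else:
            borrow = 0
        digits.append(str(diff))
    return "".join(reversed(digits)).lstrip("0") or "0"

print(subtract_binary("110101", "1011"))   # 101010  (53 - 11 = 42)
```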

In mathematics, subtracting a positive number is equivalent to adding a negative number of equal absolute value. In the case of computers, you can do something similar by representing negative numbers in binary using a system known as Two’s Complement. With this non-standard notational system, you can perform subtractions utilizing the binary addition operation. Negative binary integer representation and Two’s Complement will be covered in the next article in this series.

Multiplication

Just as in decimal, multiplication in binary can be calculated by hand using partial products. Each partial product is shifted one place value to the left as the multiplication continues. Once all partial products are completed, you add them together to achieve the product.

In decimal, because the multiplication of two digits from 0 to 9 can result in a number of two digits, calculating a product often involves complicated carries of varying amounts. Binary, in contrast, is fortunately much simpler: only two 1s multiplied together will result in a 1. Because of this, each partial product is either a sequence of 0s or a copy of the multiplier, depending on the corresponding digit of the multiplicand. The diagram below visualizes this process:

Observe in the partial products (each addition) that the multiplier 10101₂ is repeated according to the multiplicand (on the side).
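A small Python sketch of the same partial-product process (the function and its argument names are mine, chosen to mirror the diagram):

```python
def multiply_binary(multiplier: str, multiplicand: str) -> str:
    """Multiply two binary strings by summing shifted partial products."""
    total = 0
    for position, digit in enumerate(reversed(multiplicand)):   # right to left
        if digit == "1":
            # Each 1 in the multiplicand contributes a copy of the multiplier,
            # shifted 'position' place values to the left; each 0 contributes nothing.
            total += int(multiplier, 2) * (2 ** position)
    return format(total, "b")

print(multiply_binary("10101", "1011"))   # 11100111  (21 x 11 = 231)
```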

Division

The division operation is the inverse of multiplication: here the calculator discovers how many times one value can fit “inside” another. Division as a mathematical algorithm is the most intricate of the four basic arithmetic operations, and because of that has many different methods available for achieving a quotient. There are enough methods, in fact, that I could write an entire series on them alone.

However, for this section, we’ll be employing long division in binary much as we might do in decimal. This method was introduced by the mathematician Henry Briggs around 1600 AD. The long division I’m going to be performing here follows the notation used in the United States (where this is being written), where the divisor is to the left of the dividend, and the quotient above the dividend.

Much like in decimal long division, you perform binary long division through a series of subtractions of the divisor from the dividend. As in multiplication, where the two-value aspect of binary produced either the multiplier or zero, here we always subtract just the divisor rather than a multiple of the divisor. This means that, as we move to the right in our division, we can immediately subtract the divisor as soon as our “partial remainder” is large enough and place a 1 in the quotient. A diagram may be better able to convey this sequence of actions:

We divide 1101011010 by 101, and at each step we form a large enough partial remainder, and subtract 101 from it.

Here we divide 1101011010₂ (858₁₀) by 101₂ (5₁₀) and end up with a quotient of 10101011₂ (171₁₀) and a remainder of 11₂ (3₁₀). You can see how, when the partial remainder isn’t large enough to subtract the divisor, we place a 0 in the quotient and move to the right. When the partial remainder is large enough, we place a 1 in the quotient and subtract the divisor (since the quotient digit can only ever be 1).
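The same procedure is easy to express in Python. This sketch (my own illustrative helper) brings dividend digits down one at a time and subtracts the divisor whenever the partial remainder is large enough:

```python
def divide_binary(dividend: str, divisor: str) -> tuple[str, str]:
    """Long-divide two binary strings, returning (quotient, remainder)."""
    d = int(divisor, 2)
    quotient = []
    remainder = 0
    for digit in dividend:                       # bring digits down left to right
        remainder = remainder * 2 + int(digit)   # append the next digit to the partial remainder
        if remainder >= d:                       # large enough to subtract the divisor?
            quotient.append("1")
            remainder -= d
        else:
            quotient.append("0")
    return "".join(quotient).lstrip("0") or "0", format(remainder, "b")

print(divide_binary("1101011010", "101"))   # ('10101011', '11')  i.e. 858 / 5 = 171 remainder 3
```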

Shift Operations

Many early and modern processors have a special instruction that allows the programmer to “shift” the contents of a register (a binary representation) to the left or right, adding in 0s where necessary. Depending on the processor the bit being shifted out of the binary representation (the overflow) may end up in the carry flag, or simply be truncated. In our examples here, when shifting to the right any bits shifted past the one’s place are eliminated and ignored.

Why would a programmer want to do this?

There are many reasons, but one of the chief reasons is to accomplish a quick multiplication or division by a power of two. Shifting all the bits in a representation to the left or right effectively multiplies or divides it by two, respectively. This works because shifting any single bit one place to the left moves it up to the next power of two, doubling its value, while shifting it one place to the right moves it down to the next lower power of two, halving its value.

When the binary representation being shifted to the right has a 1 in its least-significant digit (the one’s place), the division operation rounds down to the next lowest integer.

These operations are summarized in the following diagram:

Here we shift 01011 (11) to the left and get 10110 (22). We shift 1010 (10) to the right and get 101 (5). We shift 101 (5) to the right and get 10 (2).
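Python exposes these as the << and >> operators, so you can verify the diagram directly (a minimal sketch):

```python
x = 0b01011               # 11 in decimal

print(bin(x << 1))        # 0b10110 -> 22: shifting left multiplies by two
print(bin(x >> 1))        # 0b101   -> 5:  shifting right divides by two
print(bin(0b101 >> 1))    # 0b10    -> 2:  the 1 shifted off the end is lost, so 5 // 2 rounds down
```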

Boolean Operations

Earlier I talked about George Boole and his Boolean algebra. This algebra only has two truth values: true or false. This constraint enabled Boole to construct strict and specific methods of reasoning over logical propositions and provide a mathematical foundation to the field of logic.

Because values in Boolean algebra are one of two binary values, true or false, the traditional arithmetical operations of elementary algebra don’t really apply. Instead, Boolean algebra’s basic operations closely resemble the operations of set theory, including conjunction (set intersection) and disjunction (set union).

There are numerous logical operators, also called logical connectives, sentential connectives, or sentential operators. Because logical operators aren’t directly related to counting or arithmetic in binary I will cover only three here: and ∧ (conjunction), or ∨ (disjunction), and not ¬ (negation).

The first two logical connectives take a minimum of two parameters, or values, while the last takes one parameter. A parameter here is a true or false value. Depending on the input values, the connective is evaluated and assigned a new truth value. This evaluation can be summed up in what is known as a truth table. Below are the truth tables for the three above operators:

This diagram outlines the truth tables of the and, or, and not operators.

You can see with the and operator, a 1 (true) is produced only if both inputs are also true. In the or operator a 1 (true) is produced if either of the inputs is true. In the case of negation, any input becomes its opposite: true produces false, and false produces true.
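If you’d like to generate these truth tables yourself, a short Python sketch does the job; on single bits, Python’s bitwise operators behave like the Boolean connectives (the layout here is just my own):

```python
from itertools import product

print(" a  b | a AND b | a OR b")
for a, b in product((0, 1), repeat=2):          # every combination of two bits
    print(f" {a}  {b} |    {a & b}    |   {a | b}")

print()
print(" a | NOT a")
for a in (0, 1):
    print(f" {a} |   {a ^ 1}")                  # XOR with 1 flips a single bit
```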

Boolean logic is the backdrop for what are known as logic gates in digital circuit design. A logic gate in a digital circuit outputs a charge depending on the inputs it receives, much in the same fashion as its namesake Boolean logic operation. So, for example, an AND gate outputs a true value (presumably a charge) only if both inputs are also true (charged). By clever utilization of these gates, circuits can be constructed that perform arbitrary combinational logic, including adding binary representations together. Replicated in great numbers, circuits like these compose computer processors.
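As a hedged sketch of that idea (the gate functions and half_adder below are my own toy models, not real hardware), here is how the three basic operations can be combined into a circuit that adds two bits, producing a sum bit and a carry bit:

```python
def not_gate(a: int) -> int: return a ^ 1    # negation of a single bit
def and_gate(a: int, b: int) -> int: return a & b
def or_gate(a: int, b: int) -> int: return a | b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two single bits, returning (sum_bit, carry_bit)."""
    # Exclusive-or built from the three basic gates: (a OR b) AND NOT (a AND b)
    sum_bit = and_gate(or_gate(a, b), not_gate(and_gate(a, b)))
    carry_bit = and_gate(a, b)                # a carry happens only when both bits are 1
    return sum_bit, carry_bit

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # (sum, carry): 1 + 1 = (0, 1)
```

Chain a few of these together (with an extra gate stage to propagate carries) and you have the core of the binary addition described in the sections above.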

What’s To Come

We’ve covered not only the history of binary but also the most basic operations, both arithmetical and logical, that a programmer or engineer might want to perform on binary representations. I’ve covered a lot of ground, but there’s still more to binary than I’ve laid out so far. I mentioned in the subtraction section that it’s possible to represent negative numbers in binary in such a way that you can add them to positive binary representations and end up with the correct answer. This wizardry relies on a non-standard positional notation known as Two’s Complement.

In the next article, I’ll be covering the standard methods of representing negative numbers in binary and what impact those methods have on arithmetical operations, in particular subtraction.

Image Based On A Photo by AX!X on Unsplash
