Negative Numbers In Binary


  1. Understanding Radix
  2. Converting To Binary, Octal, and Hexadecimal
  3. Radix Economy
  4. Binary (Base-2) And Its Operations
  5. Negative Numbers In Binary

This is the fifth article in a series whose intention is to enable the reader to understand binary, octal, and hexadecimal: three radices of great importance to contemporary computer theory. Up to now I have covered reading and converting integer values into binary, octal, and hexadecimal, performing arithmetic, and basic Boolean operations. Now I shall delve into non-standard positional notations. In this article, I will examine systems that allow us to represent negative numbers in binary and use those negative values in computations.

A non-standard positional notation is one where the value of each position isn’t necessarily a straightforward power of the radix. I am also including systems where the radix is not a positive integer (such as -2), even though mathematically the representation is consistent with standard positional notation. By altering the interpretation of one or more of the place values (or the radix) of a binary representation, we are able to represent negative values. In this post I’ll be covering sign-magnitude, the most intuitive method, the radix complement methods (one’s complement and two’s complement), offset binary (also known as excess-k or biased), and base -2 (base negative two).

Significant Bits

In the following sections, the terms most significant bit (MSB) and least significant bit (LSB) pop up. These refer to the two ends of a binary representation. As you might guess, the most significant bit is the bit which, if toggled, has the greatest effect on the resulting interpretation. In our representations, it is the left-most bit, carrying the highest power of 2.

Likewise, the least significant bit is the bit which, if toggled, has the smallest effect on the resulting interpretation. In our representations this is the right-most bit, carrying a place value of only 1.

Sign Magnitude

In sign-magnitude representation, also called sign-and-magnitude or signed magnitude, one bit in a series of bits is assigned the role of the sign bit while the others play the part of magnitude. The sign bit is often the most significant bit in the sequence and is usually set to 0 for a positive value and 1 for a negative value. The magnitude is read as a traditional binary number and then multiplied by +1 (if the sign bit is 0, positive) or -1 (if the sign bit is 1, negative).

In our examples for this post, we’ll be using a byte, defined as eight bits, to illustrate the systems. In this setup, we are able to represent values in the range of -127₁₀ to 127₁₀. To change the sign of a value from a negative to a positive, or vice-versa, one simply has to toggle the sign bit. For example, 27₁₀ can be encoded as 00011011₂, and -27₁₀ as 10011011₂. Note how the most significant digit on the left was switched from a 0 to a 1.
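To make this concrete, here’s a minimal Python sketch of the scheme (sm_encode and sm_decode are names of my own invention, not any standard API):

```python
# A sketch of 8-bit sign-magnitude encoding and decoding.
def sm_encode(value: int) -> int:
    """Encode -127..127 as an 8-bit sign-magnitude pattern."""
    assert -127 <= value <= 127
    sign = 0x80 if value < 0 else 0x00   # the MSB is the sign bit
    return sign | abs(value)             # the lower 7 bits hold the magnitude

def sm_decode(bits: int) -> int:
    """Interpret an 8-bit pattern as a sign-magnitude value."""
    magnitude = bits & 0x7F
    return -magnitude if bits & 0x80 else magnitude

print(f"{sm_encode(27):08b}")   # 00011011
print(f"{sm_encode(-27):08b}")  # 10011011
print(sm_decode(0b10011011))    # -27
```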

IBM was an early supporter of sign-magnitude, utilizing it in their 704, 709, and 7090 mainframe models. Though this method is the easiest to reason about and most intuitive, it comes with some drawbacks, particularly in implementation. For example, you might have noticed there are two different representations for the value 0. In our example, 00000000₂ and 10000000₂ both represent 0, with the latter being -0. A consequence of two representations for 0 is that in comparing values to 0, you must perform two checks instead of one.

Another downside, in comparison to the next methods of sign representation, is that addition and subtraction require different circuitry logic depending on the sign bit. If a computer were to operate on sign-magnitude representations it would need to be equipped with logic circuitry for both addition and subtraction. The next methods of representation, using radix complements, avoid this issue.

Despite these shortcomings, sign-magnitude does find a place in the modern computing world. Floating-point values, those represented by a significand and an exponent component, often use a sign bit as the leading or most significant bit to indicate negative or positive values.

Method Of Complements

The next couple of methods for encoding negative values in binary operate on what is known as the method of complements technique. This technique allows for encoding both negative and positive values in such a way that the calculator only needs to use one algorithm to compute both addition and subtraction. This is ideal in terms of hardware, as extra circuitry means greater complexity and higher production costs.

The method of complements technique relies on the concept of additive inverses. The additive inverse of a given number x is a number that produces zero when added to x. This is also known as the opposite, sign change, or negation. In practice, the concept is simple: the additive inverse of 5 is -5, of 23 is -23, of 1 is -1, and so on.

To implement the method of complements technique, we halve the possible representations of a set number of digits in such a way that half of them represent positive integers, and the other half represent each positive integer’s additive inverse. Each pair, a positive number and its additive inverse, are called complements. The digit values are arranged in such a way that subtraction of any number is accomplished by adding its complement.

How is this accomplished? How does one arrange the digits to allow this to happen? To answer that, we must delve into the concept of radix complements.

Radix Complements

The radix complement of any value can be summed up in a simple equation: bⁿ - x. Here b is the radix, n is the number of digits, and x is the original value. Below is a diagram outlining the details of this equation:

The above decimal radix complement is often called ten’s complement. It is referred to as ten’s complement because, for a single digit, the complement is the difference between the digit and ten. A variation on the radix complement is the diminished radix complement, in which you subtract one from the computed power bⁿ before subtracting the value: (bⁿ - 1) - x. This can be seen in the diagram below:

The above diminished decimal radix complement is often called the nine’s complement because each digit of the complement is arrived at by subtracting the original digit from 9 (the base minus 1). As you might notice from the equation given, you can compute the ten’s complement by simply adding one to the nine’s complement.
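If you’d like to experiment with both formulas, here’s a small Python sketch (the function names are my own):

```python
# A sketch of the radix complement and the diminished radix complement.
def radix_complement(x: int, base: int, n_digits: int) -> int:
    """b**n - x; the ten's complement when base == 10."""
    return base ** n_digits - x

def diminished_radix_complement(x: int, base: int, n_digits: int) -> int:
    """(b**n - 1) - x; the nine's complement when base == 10."""
    return (base ** n_digits - 1) - x

print(diminished_radix_complement(347, 10, 3))  # 652 (each digit is 9 minus the original)
print(radix_complement(347, 10, 3))             # 653 (the nine's complement plus one)
```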

Performing Subtraction Using Radix Complements

Radix complements are useful because you can do subtraction using radix complements by actually performing addition. As noted previously, this is advantageous for hardware engineers as only one algorithm must be encoded into logic circuitry. But how does one do this?

There are two main methods of performing subtraction through addition using radix complements. The first method utilizes diminished radix complements, and the second uses a “plain” radix complement.

The First Method

The first method relies on the following mathematics. Suppose we are subtracting y from x. The diminished radix complement of x is bⁿ - 1 - x, as outlined above. If we add this to y we have bⁿ - 1 - x + y, or bⁿ - 1 - (x - y). This is the same as if we had taken the diminished radix complement of x - y, and so taking the diminished radix complement once more recovers x - y, since (bⁿ - 1) - [(bⁿ - 1) - (x - y)] = x - y. The diagram below illustrates:
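Here’s the first method worked out as a short Python sketch in decimal with three digits (a toy illustration, not production code):

```python
# The first method: complement the minuend, add the subtrahend,
# then complement the result.
def subtract_first_method(x: int, y: int, base: int = 10, n: int = 3) -> int:
    top = base ** n - 1            # b**n - 1, here 999
    partial = (top - x) + y        # equals (b**n - 1) - (x - y)
    return top - partial           # complementing again recovers x - y

print(subtract_first_method(873, 218))  # 655
```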

The Second Method

The second method is a little shorter and relies on the radix complement rather than the diminished radix complement. In this method, you add the radix complement of y to x, resulting in the amount x + (bⁿ - y). If y ≤ x then the resulting value will be greater than or equal to bⁿ (if x = y then x - y + bⁿ = 0 + bⁿ = bⁿ). You then subtract bⁿ (essentially remove the leading 1) to achieve x - y + bⁿ - bⁿ, or simply x - y. The below illustration lays out the process:

Another way to do the second method is to use the diminished radix complement in the addition and then subtract bⁿ - 1 rather than bⁿ (equivalently, subtract bⁿ and then add 1):

There’s one caveat with this method that can trip up newcomers. If the subtrahend y has fewer digits than the minuend x, the method won’t work unless you pad y with leading zeros to n digits before finding the radix complement. In the case of decimal, the nine’s complement of each leading 0 is a 9, so the complement gains leading 9 digits. You can see this in action in the following figure:
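The whole second method, padding included, fits in a few lines of Python (a sketch under the assumption that y ≤ x):

```python
# A sketch of the second method in decimal. Note that b**n - y
# automatically "pads" a short y: with n = 3, y = 7 is treated as 007
# and its ten's complement is 993.
def subtract_second_method(x: int, y: int, base: int = 10, n: int = 3) -> int:
    assert y <= x, "this sketch assumes y <= x"
    complement = base ** n - y          # the radix complement of y
    return x + complement - base ** n   # add, then strip the leading 1

print(subtract_second_method(416, 7))   # 409
```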

Radix Complements As Negative Numbers

Earlier I wrote that “half of [the representations would] represent positive integers, and the other half represent each positive integer’s additive inverse.” What did I mean by this?

Suppose that we were to treat radix complements as negative values rather than simply numbers arrived at by a calculation. To do so, we’d have to designate a set number of digits that we’re going to use (to establish the range). Let’s specify three digits: 000 - 999. We could split the range available to us (1000 representations) into two sections: positive integers (000 - 499) and negative numbers (their additive inverses) (500 - 999). This means that 000 and 999 would represent 0 and -0 respectively, 001 and 998 would represent 1 and -1, and so on. 998 is 001’s additive inverse in this respect because we are only allowed three digits: adding them together using normal decimal techniques (not our represented values) gives 999, which is -0, another representation of zero. If we were to place these representations on a number line it would look like this:

Don’t get tripped up by their decimal values! In this system 749 is a symbol that is actually equal to -250. Using this strange number line, and confining ourselves to three digits, we can represent negative values without using a negative sign. If we take a positive representation and a negative representation and add them together as normal decimal values, the result is the correct placement on the number line. The key is that when we add two representations from this number line we add them as if they were normal numbers (like in the above diagrams) rather than adding the values they represent. Somewhat magically, when we map the sum (by subtracting 999 where necessary, see above) back to the number line, it gives us the correct value. Let’s visualize this:

Notice how on the left calculation, where the result is a positive number that “overflows” past the maximum number of digits (3), we subtract 999 to achieve the correct result. We could have also simply stripped off the leading one and added one to accomplish the same thing. On the right-hand side, there was no overflow, meaning we didn’t have to subtract bⁿ - 1. The sum on our modified number line is the correct value. This mapping is known as nine’s complement.
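We can simulate this three-digit scheme in a few lines of Python (a sketch; value_of and add_reps are names I’ve made up for illustration):

```python
# Representations 000-499 stand for themselves, 500-999 for negatives.
def value_of(rep: int) -> int:
    return rep if rep <= 499 else -(999 - rep)

def add_reps(a: int, b: int) -> int:
    s = a + b
    if s > 999:        # "overflow": subtract 999 (the end-around carry)
        s -= 999
    return s

result = add_reps(300, 949)        # 300 plus the representation of -50
print(result, value_of(result))    # 250 250
```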

I have now demonstrated in decimal one of the most common methods of representing negative values in computer processors. If you change the radix from 10 to 2, you can accomplish the same thing in binary. Duplicating the above math in base-2 yields what is known as one’s complement.

One’s Complement

By altering the radix of the above process from ten to two we achieve in binary what is known as one’s complement. Because of the small number of digits in binary (0 and 1) one’s complement has at least one interesting property.

In one’s complement, negative binary numbers are simply the inverse (logical NOT) of their positive counterparts. This means, continuing our byte example, that 46₁₀, in binary 00101110₂, has a complement of 11010001₂. This looks like 209₁₀, but remember our number line (now in binary). It isn’t so much what its apparent value is, as it is what value it represents in our system:

The division of the representations along the number line is more elegant this time around. The leftmost bit (the most significant bit in our representation) clearly marks whether a number is considered positive or negative.
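In Python, one’s complement negation is just a bitwise NOT masked down to one byte (a sketch):

```python
# One's complement negation confined to 8 bits.
def ones_complement(bits: int) -> int:
    return ~bits & 0xFF

print(f"{ones_complement(0b00101110):08b}")  # 11010001, representing -46
```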

Many early computers implemented one’s complement in order to perform subtraction by using addition. These included the famous PDP-1, the LINC, the CDC 6600, and UNIVAC 1101.

It is interesting to observe the addition (and subtraction) of one’s complement representations. In observing an addition that “overflows” past our byte restriction, we can take the overflow digit (the carry) and add it to the rightmost digit (the least significant bit), much like we did in our decimal method. This is known as the end-around carry in logic circuits. You can observe this in action below:
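Here’s a sketch of the end-around carry in Python:

```python
# One's complement addition with the end-around carry (an 8-bit sketch).
def ones_add(a: int, b: int) -> int:
    s = a + b
    if s > 0xFF:              # a carry out of the byte...
        s = (s & 0xFF) + 1    # ...wraps around and is added to the LSB
    return s

minus_20 = ~0b00010100 & 0xFF                    # one's complement of 20
print(f"{ones_add(0b00101110, minus_20):08b}")   # 00011010, i.e. 46 - 20 = 26
```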

For background on how to perform binary addition or subtraction, refer to the previous article in the series, Binary (Base-2) And Its Operations. If we wanted to perform actual binary subtraction, we must do something similar with the presumed hanging borrow. This is known as the end-around borrow and comprises subtracting the borrow from the one’s place as such:
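And the matching end-around borrow (a sketch):

```python
# One's complement subtraction with the end-around borrow (an 8-bit sketch).
def ones_sub(a: int, b: int) -> int:
    d = a - b
    if d < 0:                 # a borrow past the MSB...
        d = (d & 0xFF) - 1    # ...wraps around and is subtracted from the LSB
    return d

print(f"{ones_sub(0b00011010, 0b00101110):08b}")  # 11101011, i.e. 26 - 46 = -20
```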

You can see that we have that dangling 1 that either needs to be added or subtracted depending on the values and operation. On top of that, there are two representations equaling 0 (-0 is essentially equivalent to 0 here) as happened before in sign-magnitude.

In one’s complement, you can avoid generating a -0₁₀ (11111111₂) by performing a subtraction rather than an addition, reversing the sign of the subtrahend. This method is known as a complementing subtractor.

However, having two versions of zero still requires making two tests in comparisons to zero, which can be a common operation. As well, that additional hanging 1 in the end-around carry or end-around borrow complicates implementation. These two factors are less than ideal, but fortunately, there is a remedy available.

Two’s Complement

One way we can circumvent having two representations of zero is to perform a sort of shift by one. If we add 1 to the binary inverse (the one’s complement negative) we circumvent the hanging 1 we were encountering before. But what about -0₁₀ (11111111₂)?

If we add 1 to 11111111₂ we end up with 00000000₂ with a carry of 1. Ignoring the carry bit, we have the representation for (+)0. This eliminates the two versions of 0, for when we try to compute -0 we end up with 00000000₂ again. But what happens to 11111111₂? That value ends up being the start of the negative numbers, being the negation of +1₁₀ (invert 00000001₂ to get 11111110₂, then add 1: 11111110₂ + 1₂ = 11111111₂).

Computing negative values as the positive value’s binary inverse plus one is called two’s complement. One’s complement is to two’s complement as nine’s complement is to ten’s complement, hence the name. This relationship can be shown mathematically:

The sum of a value and its one’s complement is all 1 bits, which is equal to 2ⁿ - 1 (see radix complements above; 2 is the radix in base-2). If we add 1 to that sum we get 2ⁿ (2ⁿ - 1 + 1 = 2ⁿ). In fact, the two’s complement of a given value x in binary is 2ⁿ - x (where n is the number of digits), which is consistent with the definition of the radix complement above. In an eight-bit example: 2⁸ = 256₁₀, and 256₁₀ - 75₁₀ = 181₁₀; in binary, 100000000₂ - 01001011₂ = 10110101₂. The following illustrates:
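In Python this is the familiar “invert and add one,” masked down to a byte (a sketch; twos_complement is my own helper name):

```python
# Two's complement negation confined to 8 bits.
def twos_complement(bits: int) -> int:
    return (~bits + 1) & 0xFF

print(f"{twos_complement(0b01001011):08b}")  # 10110101 (75 becomes -75)
print(2 ** 8 - 75)                           # 181, the same pattern read as unsigned
```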

Another way to think about two’s complement is to think about it in terms of place values. As the negative values decrease (move right) down our number line, the normal interpretation of the last seven bits goes from 127₁₀ to 0₁₀. If we consider the eighth bit to represent -128₁₀, adding the value presented in the next seven bits arrives at the two’s complement value.

We can sum up this process in an equation that is applicable to any number of binary digits. Given n digits aₙ₋₁ … a₁ a₀, the represented value is -aₙ₋₁ · 2ⁿ⁻¹ + Σ aᵢ · 2ⁱ (summing i from 0 to n - 2). The previous observations, this equation, and our new number line look like the following:

The equation used in this article to compute Two’s Complement was lifted from Wikipedia (https://en.wikipedia.org/wiki/Two%27s_complement) and is used (and explained) here under the terms of the CC3.0 license.

The above equation is a formal way of stating the process outlined above. Here n is the number of digits. We start at the leftmost digit (digit n-1) and multiply it by -1 and 2ⁿ⁻¹. From there we perform a summation (that’s the Σ) of each remaining digit times its place value, from digit n-2 onward.
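The equation translates directly into Python (a sketch; twos_value is my own name):

```python
# Evaluating the interpretation formula above for an n-bit pattern:
# value = -a[n-1] * 2**(n-1) + sum of a[i] * 2**i for i = 0 .. n-2
def twos_value(bits: int, n: int = 8) -> int:
    msb = (bits >> (n - 1)) & 1          # the leftmost digit, a[n-1]
    rest = bits & ((1 << (n - 1)) - 1)   # the remaining n-1 digits read normally
    return -msb * 2 ** (n - 1) + rest

print(twos_value(0b10110101))  # -75 (that is, -128 + 53)
```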

One algorithmic method for obtaining a value’s two’s complement is to survey its binary representation from the least significant digit to the most significant digit (in our representation, from right to left). Copy the zeros up to and including the first 1, then invert all of the remaining bits:
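Here’s that right-to-left trick as a Python sketch, using the classic bits & -bits idiom to isolate the lowest set bit:

```python
# Keep everything up through the lowest set bit, invert everything above it.
def twos_by_scan(bits: int, n: int = 8) -> int:
    if bits == 0:
        return 0
    lowest_one = bits & -bits               # isolates the rightmost 1
    keep_mask = (lowest_one << 1) - 1       # that 1 and the zeros below it
    full_mask = (1 << n) - 1
    return (bits & keep_mask) | (~bits & ~keep_mask & full_mask)

print(f"{twos_by_scan(0b01001100):08b}")  # 10110100 (76 becomes -76)
```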

The advantage of all this is that there is only one representation of zero, and addition and subtraction don’t require any special logic to perform, such as the hanging 1 of one’s complement above. All that is necessary is to strip the leading 1, if present. This is usually done by simply ignoring the final carry and keeping the result restricted to a set number of bits.

When restricting the number of processable digits, it’s possible to perform an addition or subtraction that results in a sum or difference larger than can be represented. This is called an arithmetic overflow. One interesting property of two’s complement is that you can verify whether an arithmetic overflow has occurred by examining the two leftmost carry bits (the carry into and the carry out of the most significant bit). When adding, if they are the same the result is valid, but if they are different an overflow has occurred. This is illustrated in the following figure:
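A Python sketch of that test, comparing the carry into and out of the most significant bit:

```python
# The carry-in/carry-out overflow test for n-bit two's complement addition.
def add_check_overflow(a: int, b: int, n: int = 8):
    low = (1 << (n - 1)) - 1                        # mask for the lower n-1 bits
    carry_in = ((a & low) + (b & low)) >> (n - 1)   # carry into the MSB
    carry_out = (a + b) >> n & 1                    # carry out of the MSB
    result = (a + b) & ((1 << n) - 1)
    return result, carry_in != carry_out            # True means overflow

print(add_check_overflow(100, 100))  # (200, True) -- 200 reads as -56, an overflow
```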

Sign Extension

What if we want to expand a two’s complement binary number from a smaller number of digits to a larger number of digits? If we simply add 0s as the most significant digits (leftmost in our representation), as we might do with an ordinary binary representation, we would end up transforming negative values into positive values. Remember that in two’s complement, a leading 1 indicates that the following bits are to be interpreted as negative (-128 + x).

The answer is to apply what is called a sign extension. A sign extension duplicates the most-significant digit (the leftmost here) of a binary representation to create a representation with more digits. Adding 1s to a negative value keeps it negative, and adding 0s to a positive value retains its sign.
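As a sketch, here’s how sign extension from eight to sixteen bits might look (sign_extend is my own name):

```python
# Sign-extending an 8-bit pattern to 16 bits: replicate the MSB.
def sign_extend(bits: int, frm: int = 8, to: int = 16) -> int:
    if bits >> (frm - 1) & 1:                     # negative: fill with 1s
        bits |= ((1 << (to - frm)) - 1) << frm
    return bits

print(f"{sign_extend(0b10011011):016b}")  # 1111111110011011 (-101 stays -101)
print(f"{sign_extend(0b00011011):016b}")  # 0000000000011011 (+27 stays +27)
```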

In the previous article, we also discussed the shift operation of most processors. As above, when shifting a two’s complement representation to the right, the most significant digit is copied as the digit that is added to the left side. This is only done when shifting to the right; shifting to the left adds a 0 to the right.
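And a sketch of the corresponding arithmetic right shift on a byte:

```python
# Arithmetic right shift of an 8-bit pattern: the MSB is copied back in.
def asr(bits: int, n: int = 8) -> int:
    msb = bits & (1 << (n - 1))
    return (bits >> 1) | msb

print(f"{asr(0b10011010):08b}")  # 11001101 (-102 shifted right is -51)
```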

The Most Negative Number

Two’s complement may at first seem like a panacea in terms of representing negative numbers (without a sign), but there is at least one tradeoff. One’s complement was plagued with two representations of the number 0. Two’s complement is plagued with a value that defies calculations: the most negative number.

With a restricted set of digits (here we are restricting ourselves to a byte, or eight bits), the minimum representable value in two’s complement has no additive inverse. For our byte, the minimum representable value is -128₁₀, represented as 10000000₂. Inverting it gives us 01111111₂, which is 127₁₀. Adding 1 to that gives us 10000000₂, or -128₁₀ again. Positive 128₁₀ can’t be represented with eight bits, and thus, -128₁₀ has no complement.

This means that the two’s complement of -128₁₀ in an eight-bit system is -128₁₀. Performing the above sequence produces an arithmetic overflow (see above), as a carry digit goes into the most significant bit, but no carry comes out of it. This causes the two most significant carry bits (leftmost here) to differ (0 and 1), indicating an invalid calculation.

This leads to the following invalid results for calculations (the first of which is demonstrated in the sketch after this list):

  • negating -128, or multiplying it by -1, results in -128, not +128
  • dividing -128 by -1 (the inverse of that multiplication) overflows; the true quotient, +128, is unrepresentable
  • the absolute value |-128| evaluates to -128
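Here’s a quick sketch of the first quirk: negating the most negative byte wraps right back to itself.

```python
# Negating -128 in eight bits wraps back to -128.
most_negative = 0b10000000              # -128 in two's complement
negated = (~most_negative + 1) & 0xFF   # invert and add one
print(f"{negated:08b}")                 # 10000000 -- still -128
```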

These complications must be kept in mind when performing arithmetic operations on two’s complement representations. In modern systems, this demand on the programmer’s attention has proven preferable to maintaining two representations of 0 internally. The first minicomputer, the PDP-8 in 1965, used two’s complement arithmetic and set the trend. Almost all subsequent home computer processors utilize two’s complement.

Offset Binary (Excess-K or Biased)

The offset binary method, also known as excess-k or biased representation, is another way of encoding negative values. In offset representation, the encoding of a value is the bit representation corresponding to that value plus an offset (the biasing value or excess). So, for example, if we use a biasing value of 128 (excess-128), then 0 is encoded as 0 + 128₁₀, making 10000000₂. A positive number such as 56 would be encoded as 56₁₀ + 128₁₀ = 184₁₀, making 10111000₂. An example of a negative value would be -45₁₀, which is encoded as -45₁₀ + 128₁₀ = 83₁₀, making 01010011₂. Computing the encoding like this sets up the following number line:
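Encoding and decoding excess-128 is just adding or subtracting the bias (a sketch; the function names are my own):

```python
# Excess-128 (biased) encoding and decoding for an 8-bit pattern.
def excess128_encode(value: int) -> int:
    assert -128 <= value <= 127
    return value + 128

def excess128_decode(bits: int) -> int:
    return bits - 128

print(f"{excess128_encode(-45):08b}")  # 01010011
print(excess128_decode(0b10111000))    # 56
```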

Notice that the number line for biased representations resembles the number line for two’s complement. The difference is in the most significant bit (leftmost here), which is the opposite of the corresponding two’s complement representation. Because of this, you cannot simply add or subtract biased representations the way you can two’s complement.

Excess-K representations are primarily used for the exponent component of floating-point representations. IEEE 754 defines the exponent component of a 32-bit single-precision number as an 8-bit excess-127 representation. The 64-bit double-precision exponent component is defined as an 11-bit excess-1023 representation. You will also see excess-k pop up again when we cover binary-coded decimal in Non-Standard Positional Notations In Binary And Otherwise.

Base -2 (Base Negative Two)

Back in Understanding Radix, I wrote that the radix of a number does not have to be a positive integer. Technically, the radix can be any value, including Euler’s number e. Here I’ll show you how you can use a radix of -2 to construct a representation that can encode both positive and negative values.

When employing a negative radix, we must remember that the product of a negative value multiplied by a negative value is a positive value. As the place value increases by powers of -2 you’ll notice the place value is positive and negative in turn:

At first, it may seem there are gaps in the integer values (positive and negative) that this system can represent, but surprisingly there are not. It seems that way because of the asymmetric nature of the range that a given number of bits can represent. With an even number of bits, this base can represent roughly twice as many negative numbers as positive; with an odd number of bits, roughly twice as many positive numbers as negative. For example, four bits cover -10 through 5, while five bits cover -10 through 21.

A consequence of using this base is that the binary representations are quite irregular and difficult to read. Observe counting from negative seven to positive seven using base -2 for instance:
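If you want to generate these representations yourself, here’s a Python sketch of repeated division by the radix, adapted to -2 by forcing each remainder to be 0 or 1 (to_negabinary is my own name):

```python
# Converting an integer to its base -2 digits.
def to_negabinary(value: int) -> str:
    if value == 0:
        return "0"
    digits = []
    while value != 0:
        value, rem = divmod(value, -2)
        if rem < 0:            # force a non-negative remainder
            value += 1
            rem += 2
        digits.append(str(rem))
    return "".join(reversed(digits))

for i in range(-7, 8):
    print(f"{i:3}: {to_negabinary(i)}")  # e.g. -7 -> 1001, 7 -> 11011
```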

What’s To Come

This concludes our survey of negative value representations in binary. We covered sign-magnitude, one’s and two’s complements, excess-k, and base -2. All of these are viable systems for representing negative numbers. However, some prove to be more useful in practice than others.

The most arithmetically versatile representation turns out to be two’s complement, despite the edge case of the most negative number. Since the release of mini- and microcomputers, mostly for home consumer use, most modern processors use two’s complement for representing negative numbers on the processor level (for non-floating-point operations).

Now that I’ve laid out the basic numerical uses and operations of binary, we can move on to material that “counts” in binary in a very different fashion. These are non-standard positional notation systems that don’t quite fit into the orderly box of normal numerical representation. In the next article of the series, I’ll be covering binary coded decimal, as well as signed-digit representations including balanced ternary and redundant binary representations. I will also provide an introduction to systems popularly known as Gray codes.
