Here lie posts (tutorials, news, profiles, series, and blogs) dedicated to pertinent core theory and fundamental knowledge. Core theories of programming are as varied as they are deep. The primary one, of course, is the theory of computation. This theory rests heavily on discrete mathematics and formal languages and includes automata, computability, and complexity theories.

Learning about these theories in conjunction with mathematical logic will help you gain a deep understanding. You’ll know what properties and behaviors are possible for a program. Mathematical logic includes, but isn’t limited to, set, model, recursion, and proof theories. These theories often build upon first-order logic (also known as predicate/quantificational logic and first-order predicate calculus).

In other words, what you’ll find on these pages will help you learn how to program just about anything. These fundamental concepts don’t really change much from one machine or language to the next. They are almost always applicable no matter what you’re doing. If you understand these concepts well enough, you may even create your own programming language or structures.

Image Based On A Photo by Jaredd Craig on Unsplash


## Negative Numbers In Binary

Now I shall delve into non-standard positional notations. In this article, I examine systems that allow us to represent negative numbers in binary and use those negative values in computations. By altering the interpretation of one or more of the place values (or of the radix) of a binary representation, we are able to represent negative values. In this post I’ll be covering sign-magnitude, the most intuitive method; the complement methods, ones’ complement (the diminished radix complement) and two’s complement (the radix complement); offset binary (also known as excess-k or biased representation); and base -2 (base negative two).
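As a quick taste of two of the schemes named above, here is a Python sketch for 8-bit values; the helper names `sign_magnitude` and `twos_complement` are my own, not from the article:

```python
def sign_magnitude(value, bits=8):
    """Encode value with the top bit as the sign and the rest as the magnitude."""
    sign = 1 if value < 0 else 0
    return (sign << (bits - 1)) | abs(value)

def twos_complement(value, bits=8):
    """Encode value modulo 2**bits, so negatives wrap around the top of the range."""
    return value & ((1 << bits) - 1)

print(format(sign_magnitude(-5), "08b"))   # 10000101
print(format(twos_complement(-5), "08b"))  # 11111011
```

Note how the two encodings of -5 differ: sign-magnitude only flips the sign bit, while two's complement is the value 256 - 5.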

## Binary (Base-2) And Its Operations

This article continues the trend of the previous articles and begins with a history of binary. After that, I briefly reiterate why binary is used in modern electronic devices, as covered in the previous article, and go into more depth regarding binary “sizes” (bit, byte, kilobyte, etc.). Then I move on to important elements of binary arithmetic and the operations of addition, subtraction, multiplication, and division. I cover two operations often found in computer processors, the shift operators, and their mathematical meaning. Finally, I briefly cover Boolean logic operations.
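For a flavor of the shift and Boolean operations mentioned above, a minimal Python illustration (the specific values are my own examples):

```python
x = 0b0110  # 6 in binary

# Shifts have arithmetic meaning: left shift multiplies by a power of two,
# right shift floor-divides by a power of two.
print(x << 2)  # 24, i.e. 6 * 2**2
print(x >> 1)  # 3,  i.e. 6 // 2**1

# Boolean (bitwise) operations combine bits position by position.
print(format(0b1100 & 0b1010, "04b"))  # 1000 (AND)
print(format(0b1100 | 0b1010, "04b"))  # 1110 (OR)
print(format(0b1100 ^ 0b1010, "04b"))  # 0110 (XOR)
```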

This article begins with a recap of where we are in the series with regard to the concept of counting. I review the definition of positional notation as outlined in the first article and then move on to show how we can calculate the number of digits a value will have in a given radix. In doing so, I go over two mathematical concepts relevant to this calculation: exponents and logarithms. I then use logarithms to show how you can calculate the efficiency of a given radix, also called the radix economy, and answer the question, “What’s the most efficient radix?”
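The digit count of a value in a given radix is floor(log_radix(n)) + 1 for positive integers, and that count feeds directly into the radix economy. A small Python sketch, assuming the standard formulation E(b, n) = b × digits(n); `digit_count` and `radix_economy` are names of my own choosing, and floating-point logarithms can be off by one right at exact powers of the base:

```python
import math

def digit_count(n, base):
    """Digits needed to write a positive integer n in the given base."""
    return math.floor(math.log(n, base)) + 1

def radix_economy(base, n):
    """Digits used times symbols available per digit position."""
    return base * digit_count(n, base)

print(digit_count(255, 2))   # 8 (11111111)
print(digit_count(255, 16))  # 2 (ff)
# Base 3 narrowly beats base 2 for this value, hinting at the series' question.
print(radix_economy(2, 10**6), radix_economy(3, 10**6))  # 40 39
```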

## Converting To Binary, Octal, and Hexadecimal

This is the second article in a series intended to help the reader understand binary, octal, and hexadecimal: three radices of great importance to contemporary computer theory. This article builds upon the previous one by outlining these three radices and why they are useful in the field of computer science. I start with arbitrary base conversion using two methods. Then a bit of background is given on why these bases, particularly binary, are important. Finally, we perform radix conversion.
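One common conversion technique, repeated division by the target radix while collecting remainders, can be sketched in Python. This may or may not be one of the two methods the article uses, and `to_base` and `DIGITS` are names I've chosen for illustration:

```python
DIGITS = "0123456789abcdef"

def to_base(n, base):
    """Convert a non-negative integer to a string in bases 2 through 16
    by repeatedly dividing and reading the remainders in reverse."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, remainder = divmod(n, base)
        out.append(DIGITS[remainder])
    return "".join(reversed(out))

print(to_base(255, 2))   # 11111111
print(to_base(255, 8))   # 377
print(to_base(255, 16))  # ff
```

The same value in all three radices also shows why octal and hexadecimal are convenient: each of their digits groups exactly three or four binary digits.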