The Periodic Table of 2-Input Logic Gates

For a two-input, one-output digital logic gate (assuming no memory or feedback), there are exactly sixteen possible behaviors: for each of the four input combinations (00, 01, 10, and 11), the gate must output either a zero or a one. Writing down these four outputs describes the gate as a four-bit number, giving sixteen possibilities (0000 through 1111).

Not all of these “gate types” are interesting, however. For example, the “Type 0” gate (whose descriptor is 0000) always outputs a zero, no matter what the inputs are. Similarly, the “Type 15” gate (descriptor 1111) will always output a one.
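The sixteen descriptors can be enumerated directly. Here is a minimal Python sketch; the bit ordering (most significant descriptor bit giving the output for inputs 00) is an assumed convention for illustration:

```python
# Enumerate all sixteen 2-input gate behaviors. Each type's 4-bit
# descriptor lists the outputs for inputs (A,B) = 00, 01, 10, 11,
# most significant bit first (an assumed convention).

def gate_output(gate_type, a, b):
    """Output (0 or 1) of the given gate type for inputs a and b."""
    index = 2 * a + b                      # 0..3: which input combination
    return (gate_type >> (3 - index)) & 1  # read that bit of the descriptor

for t in range(16):
    outputs = [gate_output(t, a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))]
    print(f"Type {t:2d} (descriptor {t:04b}): {outputs}")
```

Under this convention, Type 0 prints all zeros and Type 15 all ones, matching the two trivial gates described above.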

Here are some interesting properties a gate type may (or may not) have:

  • Symmetry: If you swap inputs A and B, will the gate act the same way?
  • Triviality: Is the “gate” so trivial that a simple wire will suffice?
  • Commonality: Is the gate a “standard” or commonly-used gate?
  • Inversion: Can the gate invert one or both inputs in at least some cases?
  • Two-input: Do both inputs affect the gate’s output?
  • Universality: Can copies of this gate alone be used to emulate any other gate?

Here is a chart showing the properties of all sixteen possible behaviors a simple two-input, one-output digital logic gate could exhibit. Some of these are not true “gates”; others are, but lack universality. Interestingly, six of the gate types are universal — meaning that, given a sufficient number of them, any other gate type can be created. An entire computer could be built using nothing but NOR gates, for instance.
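As a concrete illustration of universality, here is a sketch (in Python, modeling gates as functions) of NOT, OR, and AND built from nothing but two-input NORs:

```python
def NOR(a, b):
    """The only primitive: output 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def NOT(a):
    return NOR(a, a)            # tying both inputs together inverts

def OR(a, b):
    return NOT(NOR(a, b))       # OR is just an inverted NOR

def AND(a, b):
    return NOR(NOT(a), NOT(b))  # De Morgan: a AND b = NOT(NOT a OR NOT b)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}: NOT a={NOT(a)} OR={OR(a, b)} AND={AND(a, b)}")
```

Since {NOT, OR, AND} can express any Boolean function, a supply of NOR gates suffices to build anything.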

A chart of all possible simple two-input logic gates.

Logic gates of various types can sometimes show up in unexpected places. The humble 555 timer IC, for example, turns out to be capable of functioning as a Type 2 (or Type 4) gate. Since these are universal, a computer using only 555s as active logic is possible!


Posted in Digital, EET205, Electronics, Fundamentals, Math

Cool (literally)

Electronics is at its most interesting when you come across something that truly seems magical. Making a webserver out of an Arduino comes close, as does building a computer from scratch — but it’s especially fun to experience something that seems to violate fundamental laws of physics.

The Peltier effect is one such “magical” piece of technology. Generally speaking, when current is run through a component, that component becomes warmer, as some of the electrical energy is radiated as heat. (This is what resistors do best.) Once you have designed enough devices, it becomes second nature to think about the heat generated; if you put five watts of power into a chip, you had better make sure there’s a way to remove five watts of heat at a temperature that won’t destroy the chip.

Now for the magical part. It turns out that certain solid-state devices can act as heat pumps when current flows through them. Overall, the device does become warmer, since, like everything known to science, it operates at less than 100% efficiency. (A lot less than 100%, in this case.) However, when the current is passed through one of these Peltier devices, one side of the device cools noticeably. It’s science, yes — but science of a particularly magical flavor.

A medium-size Peltier device (without heatsink). With the red lead positive, the top is the cool side; reverse the polarity to reverse the heat flow.

Peltier devices are notoriously inefficient for cooling, but they have some unique advantages. As solid-state devices, they are completely silent, quite reliable, and fit into small spaces; unlike fan-based coolers, they can also cool below ambient temperature. They are used in some exotic CPU-cooling schemes (although air cooling is far more common, and even water cooling and phase-change cooling are more widely used). Peltier devices are especially useful for cooling CCD sensors for astrophotography, since CCD noise drops at lower temperatures, and low noise is essential when capturing long-exposure images of very faint objects.

Peltiers, given a good heatsink/fan combination, can even produce temperatures below freezing…

An attempt at producing ice crystals with a Peltier device and heatsink+fan.


Posted in Components, Electronics, Science

Cool math: Conway’s Game of Life

(This is the first of a planned series of articles on interesting topics in mathematics. Don’t worry, though; there won’t be a quiz. The whole idea is to show the fun side of math!)

One of the more interesting topics in math (for me, anyway) is emergent complexity. Often, very simple systems can lead to complex behavior. Fractals such as the Mandelbrot Set are a good example of this. Another example is Conway’s Game of Life.

Discovered (or invented) by Dr. John Conway in 1970, Life is a specific cellular automaton: a set of rules for updating the states (“alive” or “dead”) of a grid of square cells. (Conway apparently did some early investigations using a checkerboard.) Starting from a given “seed” configuration, the following two simple rules are repeatedly applied, to all of the cells at once:

  • If a “live” cell has exactly two or three live neighbors (out of the eight cells surrounding it), it remains “alive” in the next generation. Otherwise, it “dies” (potentially to be reborn later).
  • If a “dead” (or “empty”) cell has exactly three live neighbors, it becomes “alive” in the next generation. Otherwise, it remains empty.
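The two rules above translate almost directly into code. A minimal sketch in Python, storing the live cells as a set of (row, column) coordinates on an unbounded grid:

```python
from collections import Counter

def step(live):
    """Apply one generation of Conway's Life to a set of live cells."""
    # Count, for every cell, how many of its eight neighbors are alive.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1)
                     for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

blinker = {(0, 0), (0, 1), (0, 2)}     # three cells in a row
print(step(blinker))                    # flips to a vertical row of three
print(step(step(blinker)) == blinker)   # back again after two steps: True
```

Note that only cells adjacent to a live cell are ever counted, so the grid can be effectively infinite without any wasted work.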

These two simple rules lead to all sorts of interesting behavior. Some configurations quickly die out; others quickly evolve into stable “still life” configurations. The most interesting ones, though, can drift away across the plane, grow without bound, or play out in a chaotic manner for hundreds or thousands of generations before becoming stable. Here are some examples:


A “block” of four squares will remain exactly as it is — a classic example of a “still life”. Each live cell has three live neighbors; no empty cell has more than two — therefore, no cells ever die or are born.



Three cells in a row will oscillate between horizontal and vertical configurations — becoming a “blinker.”



A diagonal row of cells will evaporate from each end…



…but a diagonal row of cells “anchored” at both ends remains stable.



These are somewhat interesting — but the beauty of Conway’s Life is in the more complex behaviors it can produce:



Shortly after Life was invented, the “glider” was discovered: a pattern that moves diagonally. (There are four orientations, mirror images and rotations of one another, one for each of the four diagonal directions.)



An excellent example of emergent complexity is the “R-pentomino”: a set of five cells which explodes in a burst of activity, firing off several gliders before finally becoming stable over a thousand generations later(!)



There are many more interesting examples out there. In fact, it has even been proven that Conway’s Life can be used to build a universal (Turing-complete) computer — albeit a very inefficient one. I recommend Life32 as a good, easy-to-use program to start your investigations into Life. If you’re interested in finding out more about some of the many fascinating Life constructs discovered, Stephen A. Silver has compiled an excellent Life Lexicon.


Posted in Coding, Digital, Math, Science

The Digital Revolution

By now, there is no doubt that we’re in the “Digital Age.” Everywhere you look, computers and other digital electronic devices are doing everything from carrying our telephone conversations to controlling our car engines to running our electronic toothbrushes.

Is this “digital” thing just another fad? Is this whole “Digital Revolution” just a way for manufacturers to sell us “upgraded” products, when the old ones would have worked just as well? What is “digital” anyway — and what benefits do we get from switching to “digital” technology?

There are plenty of important benefits, and it is no overstatement to call digital technology a “revolution.” To understand why, it’s important to first understand the fundamental difference between “digital” technology and “analog” technology. Here is a brief explanation of what “digital” is, why it’s important, and (at a high level) how it makes the magic of the modern world possible. Sound recording is a good place to start.

In analog technology, signals (voltages or currents, in electronics) are continuously variable. An analog signal, for example, could range between zero and ten volts — or could take on any value in between. Zero volts is OK; ten volts is OK; so is 3.34523607 volts. None of these values necessarily have any special meaning in analog electronics — they generally correspond to, say, the audio waveform to be sent to a speaker. The sounds of a Beach Boys concert are picked up by a microphone and those “good vibrations” are transmitted, amplified but otherwise more or less unchanged, to a tape recorder.

Part of a typical analog waveform (about 30 milliseconds of "Good Vibrations")

Digital technology works differently. Instead of a continuous range of possible values, digital values are limited to a finite set of possible values. For the most basic circuits, this is the familiar “zero” and “one” of binary arithmetic, represented as two specific values (say, zero and five volts) in a circuit. Values near zero (for instance, values up to 0.5 volts) are considered “low” (or “zero”), and values near five volts (say, anything over 4.5 volts) are considered “high” (or “one”). Values between 0.5 and 4.5 volts are not guaranteed to be either value, and should be avoided.
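In code, this “forbidden zone” idea might look like the following sketch. The 0.5 V and 4.5 V thresholds are the illustrative figures from the text, not any particular logic family’s specification:

```python
def logic_level(volts, v_low_max=0.5, v_high_min=4.5):
    """Classify a voltage as logic 0, logic 1, or indeterminate (None)."""
    if volts <= v_low_max:
        return 0
    if volts >= v_high_min:
        return 1
    return None  # forbidden zone: neither a valid "low" nor "high"

for v in (0.2, 2.5, 4.9):
    print(f"{v} V -> {logic_level(v)}")
```

Real logic families (TTL, CMOS, etc.) publish their own input thresholds, with the gap between them providing noise immunity.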

TTL voltage levels

Digital signals, therefore, are always either zero or one. Information is passed not by directly copying a microphone’s movements into changes in voltage, but by describing those changes using zero and one. Such descriptions using a small set of values have been around for years — the dots and dashes of Morse Code have been used for over 150 years! Similarly, every wire carrying a digital signal switches between “high” and “low” values, producing a waveform that looks very different from an analog signal…

A (synthetic) digital signal, including a bit of noise.

At first, this seems very restrictive. How can you convey the delicate nuances of, say, a Chopin nocturne or the expansiveness and majesty of Tchaikovsky’s 1st piano concerto, if the music is broken up into little pieces like this?

The answer is that digital signals, although “zero or one” individually, can describe more complex signals. An analog signal from a microphone is “digitized”: measured at regular intervals, with each measurement rounded to the nearest of a fixed set of values. For music played by a CD player or typical mp3 player, sixteen bits per sample are used, meaning each sample is resolved to one of 2^16, or 65,536, possible values. A series of sixteen “bits,” each zero or one, is sent to represent each sample. Do this 44,100 times per second for each channel (left and right), and you have enough information to put the original signal back together almost exactly as it was. (If you want greater precision, use more bits and/or sample more often. DVD-Audio players can play back 24-bit music at up to 192,000 samples per second.)
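The arithmetic above can be sketched directly. Assuming signed 16-bit samples (the usual CD convention), digitizing one millisecond of a 1 kHz tone looks like this:

```python
import math

SAMPLE_RATE = 44100                 # samples per second, per channel
BITS = 16
FULL_SCALE = 2 ** (BITS - 1) - 1    # 32767: largest signed 16-bit value

def digitize(signal, n_samples):
    """Sample signal(t) (amplitude -1..1) and quantize to 16-bit integers."""
    return [round(signal(i / SAMPLE_RATE) * FULL_SCALE)
            for i in range(n_samples)]

# One millisecond (44 samples) of a 1 kHz sine wave:
samples = digitize(lambda t: math.sin(2 * math.pi * 1000 * t), 44)
print(len(samples), min(samples), max(samples))
```

Each sample is just a number, which is the whole point: once the waveform is a list of numbers, it can be stored, copied, and transmitted like any other data.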

This is much more accurate than any amplifier or speaker could ever hope to reproduce, by the way, despite what some audiophiles say. Discussions about the minutiae of amplifier design aside, there is no good reason to categorically dismiss digital recording technology. Use enough bits of precision and a fast enough sampling rate, and something else in the chain (the speakers, the wire, human ears, etc.) will become the weak link.

With this understanding of what digital technology is, the benefits are easier to describe. The central point of digital technology is that it can describe any information — music, temperatures, Shakespearean sonnets, pictures, videos, and so on — as bytes (standardized, 8-bit units of data). This seemingly trivial point is the key to all of the digital magic of the past few decades. Once a piece of information (say, a song) has been digitized, it can be treated like any other piece of information. It can be copied over and over, perfectly, without any degradation whatsoever. It can be emailed, stored for later use on a hard drive, archived, made searchable, analyzed, used as a ringtone, shared (I won’t get into legalities here), and played back over a network. All of this is possible because, to a computer, there is no difference between the bytes that make up this song and those that make up an email, spreadsheet, database, program, or picture.

If signals were stored in analog format, specific conversions would have to be performed on each type of signal (audio, video, etc), before they could be copied to another computer, sent over a network, etc. Once information has been digitized, however, it is all fundamentally a series of bytes — and can easily be stored, recalled, transported, encrypted, decrypted, combined, analyzed, and sorted. Without this functionality, the Internet wouldn’t be possible in anything resembling its current form.

That’s what makes the “Digital Revolution” revolutionary — and that’s why it’s so important.


Posted in Analog, Audio, Digital, Digital Citizenship, EET205, EET325, Electronics, Fundamentals, Math