Sunday, December 23, 2007

60 Years Since it all Started

As you can tell from the title of this post, today, December 23, 2007, marks the 60th anniversary of the most important invention of the 20th century (it was actually invented on December 16th, 1947, but not shown to anyone until December 23, 1947, because they wanted to have time to run tests on it). While there is sure to be much debate, this device is critical to nearly every product, of any kind, manufactured today. From its humble beginnings at Bell Labs in 1947, the transistor has led the march of electronics for the last 60 years. To really understand the transistor's impact, we need to go back a few decades and take a trip to science town. (I've tried to make this explanation as plain and simple as possible, but this stuff is pretty complicated when you get right down to it.)

The year is 1915. Researchers, scientists, and inventors worldwide are taking their first steps in the world of electronics since the initial discoveries surrounding DC (Direct Current, Thomas Edison) and AC (Alternating Current, Nikola Tesla) in the late 1800s. Around 1880, while working on the incandescent light bulb, Edison discovered a phenomenon called thermionic emission, now commonly known as the 'Edison Effect'. Thermionic emission is the flow of electrons, or charge carriers, off a heated surface (like a piece of metal).

In the case of the light bulb, an electric current passes through the filament (the little wire in the bulb), which resists the flow of electrons moving across it. That resistance heats the filament, and the heated filament gives off electromagnetic radiation in the form of light (radiation in the visible part of the spectrum); it also boils electrons off its surface, which is the thermionic emission Edison noticed.

Based on some of Edison's research on thermionic emission, Irving Langmuir built the first modern vacuum tube at the GE (General Electric) research lab in 1915. A vacuum tube also uses thermionic emission, but in a completely different manner. A vacuum tube is composed of three main parts: a cathode (a negatively charged metal element that gets heated), an anode or plate (a positively charged metal plate), and a control grid (a wire that sits between them, wrapped around the cathode). When the cathode is heated, it releases electrons (thermionic emission again), which flow across the tube to the anode; the grid controls how many of them get through. All this takes place in a glass tube containing a vacuum (I know, how in the world did they come up with the name vacuum tube for this device?).

Now that we know what a vacuum tube is, what does it do? Basically it is used to amplify (as in guitar amplifiers), switch (as in computers from the 50's and 60's), or otherwise modify electric currents. In terms of computers, a switch is a device that can flip between two states, a 1 or a 0, the two digits of binary code. The highly charged (high voltage/current) state of the switch is a 1, while the low state is the 0. Vacuum tubes were originally designed to replace mechanical switches (yes, that's right, computers actually used to be built from completely mechanical parts, like the Difference Engine designed by Charles Babbage in 1822), which were incredibly slow (at least by today's standards anyway) and prone to breaking down.
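
If the 1s-and-0s idea sounds abstract, here's a tiny illustrative sketch (my own toy Python example, nothing to do with real tube circuitry) showing how a row of on/off switches becomes a number:

```python
# Each "switch" is either on (1) or off (0) -- one binary digit (bit).
switches = [1, 0, 1, 1]  # four switches, read left to right as the bits 1011

# Combine the bits into a single number, most significant bit first.
value = 0
for bit in switches:
    value = value * 2 + bit

print(value)        # 11 -- the decimal value of binary 1011
print(bin(value))   # 0b1011
```

Whether the switch is a mechanical relay, a vacuum tube, or a transistor, the math built on top of it is exactly the same; only the speed and reliability change.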

One of the first computers to use vacuum tubes was the ENIAC (short for Electronic Numerical Integrator And Computer), begun in 1943. The ENIAC was designed at the University of Pennsylvania by John Mauchly and J. Presper Eckert to calculate artillery firing tables (charts used to work out the angle and powder charge of artillery) for the US Army. Calculating a single trajectory by hand (an incredibly complicated task involving hundreds of variables) could take upwards of 40 hours, while ENIAC could complete the same task in only 30 minutes. Without these charts it was very difficult to hit enemy targets with any real precision, so needless to say, the US government was throwing a lot of weight behind the project, which was happening in the middle of World War II, after the attack on Pearl Harbor had brought America into the war.
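
To get a feel for why those trajectories took so long, here's a rough sketch (in Python, with made-up numbers; the Army's real tables folded in wind, air density, shell type, and much more) of the step-by-step arithmetic that even one simplified trajectory requires:

```python
import math

# Toy trajectory with simple air drag, stepped forward in tiny time slices.
# All of the numbers below are assumptions for illustration only.
v = 500.0                 # muzzle velocity, m/s (assumed)
angle = math.radians(45)  # firing angle (assumed)
vx, vy = v * math.cos(angle), v * math.sin(angle)
x, y = 0.0, 0.0
dt = 0.01                 # time step, seconds
k = 0.0001                # crude drag coefficient (assumed)
g = 9.81                  # gravity, m/s^2
steps = 0

while y >= 0.0:           # keep stepping until the shell comes back down
    speed = math.hypot(vx, vy)
    ax = -k * speed * vx           # drag opposes horizontal motion
    ay = -g - k * speed * vy       # gravity plus drag on vertical motion
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    steps += 1

print(f"range ~ {x:.0f} m after {steps} time steps")
```

A human "computer" working with a desk calculator had to grind through that kind of multiply-and-add loop by hand, for every combination of angle, charge, and shell, which is why a single table took so long.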

The ENIAC was able to do these fantastic (by 40's standards) calculations by utilizing nearly 18,000 vacuum tubes to perform 385 calculations per second. By controlling which tubes were 1's and which were 0's, mathematical calculations could be performed. Due to the "newness" of the vacuum tubes built at that point, the significant issue of tubes burning out had to be factored into operating procedures. In fact, the tubes burnt out so often (several a day; remember that they are similar to light bulbs) that research was done to determine the main cause. They found that most of the failures happened during heat up (start up) or cool down (shut down), so they decided to just leave the machine on continuously to keep burnouts to a minimum (it never had to be rebooted because of operating system crashes either. Anybody miss those days?). The ENIAC ran nonstop from 1946 until it was retired in 1955.
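
Just to show how that math works out, here's a back-of-the-envelope sketch (the per-tube lifetime below is purely my assumption, not ENIAC's actual spec) of why thousands of individually reliable tubes still add up to daily failures:

```python
# Rough reliability arithmetic with assumed numbers, for illustration only.
tubes = 18000              # roughly the number of tubes in ENIAC
mean_life_hours = 100000   # assumed average life of a single tube, in hours
hours_per_day = 24

failures_per_day = tubes / mean_life_hours * hours_per_day
print(f"expected tube failures per day: {failures_per_day:.1f}")   # ~4.3
```

The exact numbers are guesses, but the shape of the problem is clear: multiply even a tiny per-tube failure rate by 18,000 tubes and you get failures every single day, which is why leaving the machine running was the sensible choice.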

Vacuum tubes were used in pretty much every computer manufactured from the early 1940's until transistorized machines took over in the late 1950's and 1960's. Nowadays over 10 million "true" vacuum tubes are still in use (mostly in radio stations, telecommunications, and amplifiers), and millions more survive in the form of "tube TVs" (ever wonder where the nickname "tube" came from? Well, now you know), CRT (cathode ray tube) computer monitors, and microwaves (which use a tube called a magnetron). So while it's on its way out, the vacuum tube still plays an important part in electronics. Coincidentally, the device that replaced the vacuum tube was invented less than two years after the ENIAC went into operation.

The basic idea behind the transistor was to be able to perform the same electronic switching as the vacuum tube, but in a more energy efficient manner. Not only that, but they were considerably smaller as well. In fact, the transistor was considered so revolutionary back then that in 1956 it netted its inventors, William Bradford Shockley, John Bardeen, and Walter Houser Brattain, the Nobel Prize in Physics.

In 1955, a tiny Japanese electronics company called Tokyo Tsushin Kogyo (fortunately they later changed their name to Sony, or Ridiculously Proprietary Incorporated as I like to call them) built some of the first commercial transistor-based devices: transistor radios. Since they no longer needed giant, power-hungry, unreliable vacuum tubes, these pocket-sized AM radios could be made very small and run off batteries. Originally sold for $50 ($364 in 2006 dollars), they quickly dropped to $10 ($73) in the early 60's as Chinese manufacturers entered the market. This device did two very important things: first, it brought the idea of affordable electronics to the average working man, and second, it made transistors very cheap because of the sheer volume being manufactured. (The more you make, the cheaper each one is, since most of the money it takes to build them goes into the factory, not the actual components themselves.)

However useful transistors are on their own, they really matter because of one device that uses them: the integrated circuit (IC). The integrated circuit, first proposed in 1952 and first built in 1958, uses transistors in the same fashion as the ENIAC used vacuum tubes. Basically, an IC chip is a collection of many smaller components (think of a whole computer full of vacuum tubes shrunk onto one sliver of silicon) that are controlled by a binary program (machine code, which programmers usually write in a human-readable form called assembly language). IC chips are used in nearly every single electronic device manufactured today and range from the simple chips used in a calculator to extremely complex microprocessor chips (or CPUs, Central Processing Units).
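
To make the "binary program" idea concrete, here's a toy sketch with an invented instruction format (not any real chip's, and not real assembly language) showing how patterns of bits can tell hardware which operation to perform on which piece of storage:

```python
# A toy "CPU" with two registers and an invented 8-bit instruction format:
# 2-bit opcode, 2-bit register number, 4-bit constant. Illustration only.
registers = [0, 0]          # two tiny storage locations, R0 and R1

LOAD, ADD = 0b00, 0b01      # opcodes: put a constant in a register / add to it

program = [
    0b00_00_0011,           # LOAD R0, 3
    0b00_01_0100,           # LOAD R1, 4
    0b01_00_0100,           # ADD  R0, 4   -> R0 becomes 7
]

for instr in program:
    opcode   = (instr >> 6) & 0b11     # top two bits say what to do
    reg      = (instr >> 4) & 0b11     # next two bits say where
    constant = instr & 0b1111          # bottom four bits carry the value
    if opcode == LOAD:
        registers[reg] = constant
    elif opcode == ADD:
        registers[reg] += constant

print(registers)            # [7, 4]
```

Real assembly language is just human-readable shorthand ("ADD R0, 4") for bit patterns like these; the chip itself only ever sees the bits.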

The first commercial microprocessor, the Intel 4004 (a 740 kHz processor), released in 1971, utilized 2,300 transistors on a single chip to perform 92,000 instructions (or basic commands like 'add these two numbers together') per second. So we went from a computer (the ENIAC) that took up 680 square feet, used 107,000 individual parts (including nearly 18,000 vacuum tubes), and could only perform 385 calculations per second in 1946, to a single chip an inch long, just 25 years later, able to perform 239 times as many calculations per second. It's easy to see why transistors are so important and why they were instrumental in pushing computing forward.
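
A quick sanity check on that speedup, using only the figures quoted above:

```python
# Comparing the two machines with the numbers from this post.
eniac_per_second = 385            # ENIAC, 1946
intel_4004_per_second = 92_000    # Intel 4004, 1971

print(intel_4004_per_second / eniac_per_second)   # ~239 times faster
```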

In 1975, the first commercially available "home" computer, the Altair 8800, was sold. It used the Intel 8080 microprocessor (hence its name) and was sold for the rock-bottom price (at that time only major companies could afford computers because they cost up to millions of dollars to build) of $250 ($1254 in 2006). The 8080 (a 2 MHz processor), released just three years after the 4004, was able to perform 500,000 calculations per second, over 5 times the computing power.

On a side note, the company behind the Altair was also the first customer of a soon-to-be-famous two-man outfit from Albuquerque, New Mexico, founded that same year (1975). Microsoft wrote a version of BASIC for the Altair that allowed people to actually "do" something with their new computers.

The Altair made computers cheap enough that ordinary people could afford to buy one, and within a couple of years there were dozens of companies making similar machines. The rest, as they say, is history. And what happened to our little friend, the transistor? Why, he is now so small that a modern microprocessor the size of your thumbnail, and just as thin, can hold over 1.7 billion transistors. Contrast that with the first transistor 60 years ago, which was roughly the size of a baseball. What a long way we have come.

Of course, the transistor isn't going to last forever, but unlike nearly everything else in electronics, it is getting replaced for a very peculiar reason. Moore's Law, named after Gordon Moore, one of the co-founders of Intel, is his 1965 observation that the number of transistors that can be squeezed onto a chip (and therefore the power of basically every piece of electronics, since most of them are built from transistors) doubles roughly every 18 to 24 months, while the cost per transistor keeps falling. In effect, what this means is that a processor will cost the same, but be twice as fast, about a year and a half to two years later.
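
Here's a rough check of that doubling rule against the numbers already quoted in this post, 2,300 transistors on the 4004 in 1971 and roughly 1.7 billion on a big chip today (the doubling periods below are just the commonly quoted range, nothing official):

```python
# Project transistor counts forward from the Intel 4004 under Moore's Law.
def project(start_count, start_year, end_year, years_per_doubling):
    doublings = (end_year - start_year) / years_per_doubling
    return start_count * 2 ** doublings

for period in (1.5, 2.0):
    count = project(2_300, 1971, 2007, period)
    print(f"doubling every {period} years -> ~{count:,.0f} transistors by 2007")
```

Doubling every two years predicts roughly 600 million transistors by now; doubling every 18 months predicts tens of billions. The real figure of about 1.7 billion falls in between, which is exactly why the law is usually quoted as "18 to 24 months."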

While this has held true (for more info see the blog post The 64-bit Question) for the last 42 years, within 20 years it will no longer be true. How can I be so sure? Simple. Within 20 years, in order to keep cutting the size of a transistor in half, it would have to be smaller than a molecule. Now, while I think that humans are smart, they aren't God. So knowing that we're going to hit this limit soon, we have to look in new directions, like molecular electronics and quantum computers. Researchers hope that one or both of these directions will provide us with the next "big" step in computers. By that point, our dear and faithful friend will be approaching his 80th birthday and will finally start entering into retirement. It just goes to show that sometimes the smallest things do turn out to be the most important...
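
And for anyone wondering where that "within 20 years" figure comes from, here's a rough, assumption-heavy illustration (the starting feature size and the atom size are my own ballpark numbers, not an industry roadmap): take roughly the feature size of a 2007-era chip and keep halving it every couple of years.

```python
# Halve an assumed 2007-era feature size every two years until it reaches
# atomic scale. All numbers are ballpark assumptions for illustration.
feature_nm = 65.0    # assumed: ~65 nanometre features on a 2007 chip
year = 2007
atom_nm = 0.2        # a silicon atom is on the order of 0.2 nm across

while feature_nm > atom_nm:
    feature_nm /= 2
    year += 2
    print(f"{year}: ~{feature_nm:.2f} nm")
```

After eight or nine halvings, a couple of decades at this pace, the "transistor" would have to be smaller than a single atom, which is where the conventional approach simply runs out of room.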

Think I'm right? Think I'm wrong? Click here to join the discussion about this post.