EET 267 Microprocessors
Spring 2013 Professor Alan C. Dixon
EET 267 is divided into three areas of study.
You've already had a semester using and studying digital circuitry and principles. Now we turn to more sophisticated circuitry - hardware we call a 'computer'.
Computers operate using the binary number system. Bits are grouped and stored as 4, 8, 16, 32, 64, or 128 bit quantities. These quantities represent instructions or data as determined by the computer's programming. As you are already aware, both the hardware and the software can contribute to the success of a modern computer, or its failure.
Electronic computing began with large mainframe computers such as ENIAC - a vacuum tube computer (17,468 tubes). Question: are there any vacuum tubes around today?
Mainframe computers designed using transistors were popular in the 1960s, and the main players were IBM and Digital Equipment Corporation; IBM's System/360 line, for example, used 32-bit words. It's quite a history.
Digital Equipment Corporation (DEC) was well known for its PDP line of 'mini' computers. Local firms such as Universal Instruments Corporation used the PDP-8, a 12-bit machine, in a lot of their component insertion equipment.
Simple transistor calculators appeared on the market and changed the way scientists and engineers did their work. The circuitry was basically dedicated to doing math. Someone had the bright idea that these programmed chips might be useful in other products.
The invention of the hand-held calculator, pioneered by Texas Instruments among others, brought us to the first microprocessor, the 4004 - a 4-bit 'machine'. Arithmetic could be done using 4-bit quantities with the digits stored and manipulated as either Binary Coded Decimal (BCD) or 'excess three' coding (XS3). See the TI-30 calculator - it replaced the slide rule. These calculators used multiplexed LED displays and a 9-volt battery - they often ran down and quit at the worst moments.
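As a quick illustration of the two digit codings just mentioned, here is a minimal Python sketch. The encodings themselves are standard; the function names are ours:

```python
def to_bcd(digit):
    """Encode a decimal digit 0-9 as its 4-bit BCD value."""
    assert 0 <= digit <= 9
    return digit          # BCD stores the digit's plain binary value

def to_xs3(digit):
    """Encode a decimal digit 0-9 in excess-three (XS3) code."""
    assert 0 <= digit <= 9
    return digit + 3      # XS3 adds a bias of 3 to each digit

for d in (0, 5, 9):
    print(f"digit {d}: BCD={to_bcd(d):04b}  XS3={to_xs3(d):04b}")
```

Notice that in XS3 the code for 9 (1100) is the complement of the code for 0 (0011) - one reason calculator designers liked it.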
The 4004 was improved upon with the 4040. These were known as the first microprocessors.
The 8080 required three power supply voltages - +5, -5, and +12 - an inconvenience. The 8085 was the first Intel microprocessor to require only a single +5-volt supply, hence the 5 in the part number. Here's a reference to the programming model.
When the first 16-bit processor was introduced as the 8086, it was a real challenge to designers used to doing 8-bit work. IBM's first personal computer (the "PC") used a version of the 8086 with an 8-bit external data bus, called the 8088 - it was felt that IBM's engineers could not deal with the 16-bit concept. This is one of those 'facts' that is often disputed. However, the PC could have been a faster machine at the start if the 8086 had been used. Food for thought: why would the 8086 be faster than the 8088?
The sequence of processors in the Intel line after that was:
80186, 80286, 80386, 80486, 80586 (Pentium), Pentium 2, Pentium 3, Pentium 4, ?
While all this development in speed, size, and capability was going on - as "Colossus" said, "There is another." This referred to a Russian computer - and the fight began.
From the beginning, Motorola was a significant competitor: their 6800, like the 8085, was an 8-bit microprocessor. We'll examine the architectural differences in class.
What's in a Computer?
The previous discussion is about processors and microprocessors. They are one part of a larger element called a 'computer'. There are four major hardware elements necessary in a computer: the processor, the memory, some input/output arrangement, and a clock to push the system along at its highest speed.
Is a processor a computer? Nope. It's a part of a computer. Many things work together to make a computer desirable. And please note that in the early 1970s, a machine running at even a few megahertz was very, very desirable. But things have advanced as the uses of computers have grown.
You already know some essential things about computers.
The organization of computers was first proposed by John von Neumann in 1945 and is often referred to as the von Neumann architecture. The Moore School of Electrical Engineering at the University of Pennsylvania was an early participant in all of this; faculty and students worked together on the first computers.
How Does a Computer Work?
Here's the short version. Instructions are stored in sequential memory locations. The processor 'fetches' the instructions one at a time. Each instruction is decoded to determine what is to be done. The processor then performs the instruction. The basic operation is known as Fetch, Decode, and Execute.
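The Fetch, Decode, Execute cycle can be sketched in a few lines of Python. This is a toy machine with invented opcodes - not the 8085's instruction set - just to show the shape of the loop:

```python
# Toy fetch-decode-execute machine. The two opcodes (0x01 = increment,
# 0xFF = halt) are invented for illustration; they are not 8085 codes.
memory = [0x01, 0x01, 0x01, 0xFF]   # three "increment" instructions, then halt
a = 0                               # accumulator
pc = 0                              # program counter

while True:
    opcode = memory[pc]             # FETCH the next instruction
    pc += 1
    if opcode == 0x01:              # DECODE it ...
        a += 1                      # ... and EXECUTE: increment A
    elif opcode == 0xFF:
        break                       # halt

print(a)                            # 3
```

Every real processor, from the 8085 to a Pentium, is at heart this same loop built in hardware.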
Various things can be 'done'. The processor may be directed to bring in, or INPUT, data from an external device. Or, the processor may be directed to OUTPUT data to a device. Possible devices include sensors, your printer, your modem, and many more things you deal with every day.
Devices such as your computer display, hard drive, and CD or DVD-ROM drives are usually treated as memory devices rather than input/output devices. In this case, we refer to 'memory mapped' input/output, or memory-mapped I/O.
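Here is a hedged Python sketch of the memory-mapped I/O idea - the address ranges and the 'display' device are invented for illustration:

```python
# Memory-mapped I/O sketch: the processor just reads or writes an address,
# and address-decoding hardware routes some ranges to a device instead of
# RAM. The ranges below are invented, not from any real machine.
RAM = bytearray(0x8000)       # addresses 0000H-7FFFH: ordinary memory
display = bytearray(0x1000)   # addresses 8000H-8FFFH: a display buffer

def write(addr, value):
    if addr < 0x8000:
        RAM[addr] = value               # plain memory write
    else:
        display[addr - 0x8000] = value  # same kind of write, but it lands
                                        # in the device, not in RAM

write(0x1234, 0x41)   # goes to RAM
write(0x8000, 0x41)   # a "memory" write that really drives the display
print(RAM[0x1234], display[0])
```

The processor uses identical instructions for both writes; only the decoding hardware knows the difference.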
Here are definitions of some computer terms.
Even a simple computer program can consist of thousands of simple instructions. A single instruction comes in two forms: machine language and assembly language. The machine language instruction is byte-sized for an 8085 and might be 3C, a hexadecimal pair. The assembly language equivalent for this is INR A, or increment the Accumulator.
All computers contain a place, or register, where numbers are added, subtracted, incremented, decremented, XORed, and put through other logical operations. This place is usually referred to as the Accumulator. On the 8085 it's also referred to as the A register.
An 8085 program would consist of a long string of hex pairs: 3C D3 FF C3 00 20
That sequence in assembly language reads, Increment A, Output to Port FF, Jump to address 2000H. This simple program represents a closed loop that sends a continuous count to a set of lights at port FF. We'll demonstrate this on an 8080 based computer known as the IMSAI.
A high level language such as Visual BASIC uses hundreds of these machine language codes or instructions to accomplish the same thing. The difference is that in VB you're dealing with words in English. In assembly language you are using short abbreviations known as mnemonics.
HANDOUT: Dixon/Antonakos Digital Electronics - Chapter 9 - Organization of Computers
The basic parts of a computer, the Processor, Memory, and I/O are connected together with parallel conductors or wires. In an 8 bit processor such as the 8085, there are 8 data wires running to/from the Processor, Memory, and I/O. This is called the 'data bus'. An 8 bit instruction might be placed on the data bus by memory and then picked up by the processor for decoding.
The memory section in an 8085 system can have as many as 65,536 memory locations. They're all numbered, from 0 to 65535. Actually, they're numbered in hexadecimal, from 0000 to FFFF. It takes 16 wires to transmit the binary equivalent from the processor to the memory. These 16 conductors are called the address bus. (Do 4 hex characters translate to 16 bits?)
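You can check the hex-to-bits arithmetic with a couple of lines of Python:

```python
# Each hex digit encodes 4 bits, so 4 hex characters encode 16 bits.
print(len(bin(0xFFFF)[2:]))    # number of bits in FFFF
print(0xFFFF)                  # the highest address, in decimal
print(2 ** 16)                 # distinct locations numbered 0000H-FFFFH

addr = 0x2000
print(f"{addr:016b}")          # the pattern the 16 address wires would carry
```

So yes: 4 hex characters translate to exactly 16 bits, and 16 address wires give 65,536 distinct locations.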
The wiring of a computer that connects the processor to the other sections must all be of the same length so that signals arrive at the same time. An electrical signal travels at about two thirds the speed of light - and in modern high speed computers, this has become a limiting factor. So the shorter the conductors, the faster a computer can operate.
There's a third bus in an 8085 system called the 'control bus' - this tells the various parts when to read and when to write information. Since the data bus might contain 8 bits going from the processor to memory, or perhaps in the other direction, this bus is referred to as 'bi-directional'. The control bus keeps signals from crashing into one another.
The parts of a computer are connected together with a bus structure - wiring that carries binary values, pulses if you look at them - around the computer.
We study the 8085 for a while because it's simple - at least compared to what came later. Believe me, you won't really consider even the 8085 'simple'. The 8085 gives us a basis for comparison with newer machines - it will help us learn.
The arrangement and connection of the parts of a processor are referred to as the computer's architecture. Here's an article related to the Intel 8086/Pentium architecture.
The Pentium architecture can be very dense - difficult to understand. Here's another attempt at explaining the internal workings.
There are many parts to a 'computer'. In the electronics industry, there is also a need for a compact computer, even a computer on a chip. These are available as 'microcontrollers'. Examples include the 8048 and 8051 families and the BASIC Stamp series. Software development tools are numerous.
Each microprocessor has its own language and software. A computer's software at the lowest level is called machine language, then assembly language. We'll be learning some of this. It's more convenient to write in a higher-level language such as:
BASIC, C, C++, C#, Pascal, Fortran, COBOL, ALGOL, APL, Visual BASIC, JAVA, HTML. When you use a higher-level language, it's necessary to convert a program to machine language so the processor will understand what is to be done. A single high-level instruction converts to many machine language instructions. A 'read' instruction in a high-level language like Fortran involves many complex machine-level instructions. Trying to accomplish a 'read' directly in machine language would be very difficult.
In this course: You'll use some assembly language to output ASCII characters to a computer terminal; you'll also be testing 7400 series integrated circuits. These are two practical applications of a microcomputer that relate to work going on in our local industries. Other applications include traffic light controllers, light displays (score boards), sound systems, automotive, games ... what can YOU think of?
Often we test the concept for a product well before it's actually built. This gets into the areas of simulation and emulation. Binghamton is famous for its flight simulators, and EET graduates have spent their careers building them over the years. An airplane pilot learns on a simulator far less expensively than by practicing in a real 747. Similarly, we have emulators for some of our equipment on campus. You'll use an emulator for the IMSAI microcomputer - located here.
What is the difference between a simulator and an emulator? This from Texas Instruments:
Simulation versus emulation
The roles of simulation and emulation in the development of DSP-based designs can be confusing, since at a glance they perform similar functions. In simplest terms, the main difference between simulation and emulation is that simulation is done all in software and emulation is done in hardware. Probe deeper, however, and the unique characteristics and compelling benefits of each tool are clear. Together they complement each other to deliver benefits that either one alone cannot provide. http://en.wikipedia.org/wiki/Emulator#Emulation_versus_simulation
There are interesting relationships between hardware and software. Over the years when a product fails in some way, it's always a challenge to determine if the problem was caused by the hardware or by the software. Sometimes both are at fault. In reality, both work together to create a useful product. When something goes haywire with your own personal computer - you know how difficult it is to pin down the culprit. Suppose your computer starts behaving strangely, maybe it reboots sometimes by itself. What's going on ? Is it the hard drive, or memory ? The display card ? A virus, a recent update ? In any computer system, there are millions of possible causes for problems.
When you purchase a product, you expect it to last. When the government purchases a product, they expect it to perform for many years as well. We expect product designs and the manufactured results to be " reliable ". The reliability of a product is a very important issue to the user, to the purchaser, to everyone. Smart and careful design is a key to any successful endeavor.
There's been a simple and early question related to the hardware vs. software issue. Performing math operations is essential for many applications. In the 8085, simple operations such as ADD and SUBTRACT were built into the processor. If you wanted to multiply, divide, or use higher-level operations, you had to write software to do the job. You also had to do some deep research to learn what taking the 'log' of a number meant, for example, or the SINE, or EXPonent. If you use software to do the math, processor time is used up - and the machine slows down. For years the fix was an optional math co-processor chip (the 8087, 80287, and 80387 for the 8086 through 80386) that plugged into the motherboard - a HUGE step forward, and also a big money maker. With the 80486DX, and then the Pentium, the math processor moved onboard. Phew!
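To see why software math eats processor time, here is the classic shift-and-add multiply in Python - the same algorithm a programmer would code by hand on a processor that only has ADD built in (a sketch of the technique, not actual 8085 code):

```python
def multiply(x, y):
    """Multiply two unsigned values using only shifts and adds -
    the way it must be done in software on a processor with no
    multiply instruction."""
    product = 0
    while y:
        if y & 1:            # if the low bit of y is set...
            product += x     # ...add the current shifted copy of x
        x <<= 1              # shift x left (multiply it by 2)
        y >>= 1              # shift y right to examine the next bit
    return product

print(multiply(12, 11))  # 132
```

Eight of these loop iterations for an 8-bit multiply, each costing several instructions - that's the processor time a hardware multiplier (or math co-processor) gives back.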
For an overview of how computers work, visit the How Stuff Works site.
This page serves as an introduction to our EET 267 Microprocessors course. It was written entirely for you - please read and follow all the links, take notes as you go. If you're reading this before the course starts - email me to let me know how it's going. If you're reading this while the course is running - email me to let me know how it's going.
The object of this first page is to give you the best start possible in learning about 'microcomputers'. The more you learn, the more valuable you will be to a future employer and, of course, to your career.
January 7, 2013