Sunday, May 6, 2007

CPU Wars Part 1: General Trends

Informit.com has a series of articles called CPU Wars that talks about the real competition between CPU manufacturers and general trends in the CPU industry. It is worth spending some time to read through (note: it is a long article); there is a lot of technical material in it, and you may pick up something for your computer knowledge on CISC and RISC.

Article links:
CPU Wars Part 1

Early RISC designs had very few instructions. Most omitted even multiply and divide instructions, since these operations could be implemented using a combination of adds and shifts. This turned out not to be such a great idea. The minimum amount of time in which an instruction can complete is one cycle, and chips with dedicated multiply and divide instructions were eventually able to complete them in fewer cycles than a chip executing the equivalent sequence of shifts and adds, especially on floating-point values, where extra normalization steps are required and the mantissa and exponent must be handled separately.
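
To make the trade-off concrete, here is a minimal sketch in C (not actual RISC assembly; the function name is just illustrative) of the kind of shift-and-add sequence a compiler would have to emit on a core with no multiply instruction:

    #include <stdint.h>
    #include <stdio.h>

    /* Shift-and-add multiplication: the sort of sequence an early RISC
     * compiler would emit when the CPU has no multiply instruction.
     * For each set bit i of b, add (a << i) into the result. */
    static uint32_t shift_add_mul(uint32_t a, uint32_t b)
    {
        uint32_t result = 0;
        while (b != 0) {
            if (b & 1)          /* lowest remaining bit of b set? */
                result += a;    /* add the current shifted multiplicand */
            a <<= 1;            /* multiplicand doubles for the next bit */
            b >>= 1;            /* move on to the next bit of b */
        }
        return result;
    }

    int main(void)
    {
        printf("%u\n", (unsigned)shift_add_mul(37, 18)); /* prints 666 */
        return 0;
    }

A loop like this can take dozens of cycles for a 32-bit operand, which is exactly why dedicated multiply and divide hardware won out once it could finish the job in a handful of cycles.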

Intel’s x86 is the last surviving CISC architecture, and a particularly baroque one, including things like string-comparison instructions. All x86 CPUs since the Pentium Pro have contained a more RISC-y core and have translated these instructions into sequences of μops that are executed internally. Starting with the Core microarchitecture, Intel has also gone the other way, fusing sequences of μops so that they can be executed as a single operation.
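
As a rough illustration of how baroque those string instructions are, here is a simplified C model of roughly what a single x86 string-comparison instruction (repe cmpsb) accomplishes; the function name and return convention are my own simplifications, not the architectural definition:

    #include <stddef.h>
    #include <stdio.h>

    /* Simplified model of an x86 string-comparison instruction: walk two
     * byte buffers, decrementing a count, until the bytes differ or the
     * count runs out. The hardware does all of this in one instruction. */
    static size_t repe_cmpsb_model(const unsigned char *src,
                                   const unsigned char *dst,
                                   size_t count)
    {
        while (count > 0 && *src == *dst) {
            src++;
            dst++;
            count--;
        }
        return count; /* remaining count; zero means the buffers matched */
    }

    int main(void)
    {
        const unsigned char a[] = "compare";
        const unsigned char b[] = "compile";
        /* compare 7 bytes; a nonzero result means they differed early */
        printf("%zu bytes left when the comparison stopped\n",
               repe_cmpsb_model(a, b, 7));
        return 0;
    }

A RISC-style core has no single instruction for this; the compiler, or the x86 decoder's μop sequence, has to spell out the loads, compares and branches explicitly.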


The only real difference between a RISC and a CISC chip these days is the public instruction set; the internal instruction sets are likely to be similar. RISC and CISC are not the only possible alternatives, however. RISC came from a desire to simplify the core, and a group at Yale in the early 1980s worked out that you could take this design even further. A pipelined CPU has to do a lot of work to determine which instructions can be executed concurrently.
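
As a toy illustration of that work, the sketch below (in C, with made-up register numbers and struct layout) shows the kind of dependence check a superscalar front end has to perform on every candidate pair of instructions, every cycle, before it can issue them together:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model of one hazard check a superscalar front end performs:
     * two decoded instructions may issue in the same cycle only if the
     * second does not read or write a register the first one writes. */
    struct insn {
        int dest;   /* destination register */
        int src1;   /* first source register */
        int src2;   /* second source register */
    };

    static bool can_issue_together(struct insn a, struct insn b)
    {
        /* read-after-write: b consumes a's result */
        if (b.src1 == a.dest || b.src2 == a.dest)
            return false;
        /* write-after-write: both target the same register */
        if (b.dest == a.dest)
            return false;
        return true;
    }

    int main(void)
    {
        struct insn add = { .dest = 3, .src1 = 1, .src2 = 2 }; /* r3 = r1 + r2 */
        struct insn sub = { .dest = 5, .src1 = 3, .src2 = 4 }; /* r5 = r3 - r4 */
        printf("%s\n", can_issue_together(add, sub) ? "parallel" : "serial");
        return 0;
    }

For the two example instructions in main, the check reports that they must issue serially, because the second one reads the register the first one writes.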
