Dave Cutler, PRISM, Mica, Emerald, etc.


Definitions and Notes

Definition Notes
  • the name of a new DEC hardware architecture, created after four DEC projects were rolled into one:
    1. Titan, a high-speed design developed at Western Research Laboratory (DECwest) in Palo Alto (California, the USA) and supervised by Forest Baskett, since 1982
    2. SAFE (Streamline Architecture for Fast Execution), a project supervised by Alan Kotok and David Orbits, since 1983
    3. HR-32 (Hudson RISC 32-bit), developed at the semiconductor factory of DEC in Hudson (Massachusetts, the USA) and supervised by Richard Witek and Daniel Dobberpuhl, since 1984
    4. CASCADE, a project by David Cutler in Seattle (Washington, the USA), since 1984.
  • PRISM = PaRallel Instruction Set Machine
  • 32-bit RISC (64-bit RISC would be a future consideration)
  • 45-bit address space
  • officially started in May 1985.
    • http://h41379.www4.hpe.com/openvms/30th/t_past.html
    • quote: Digital brainstormed the next-generation reduced instruction-set computing (RISC) technology as the successor to the VAX complex instruction-set computing (CISC) architecture. This was known as "PRISM" and combined research in high-speed and streamlined architectures with the so-called Hudson RISC 32-bit projects led by Richard Witek, Dave Cutler, and others. A prototype was released in August 1985. PRISM's Epicode would later resurface in the Alpha architecture as programmed array logic (PAL) code.
  • cancelled by DEC in July 1988 after it was decided to switch to RISC processors from MIPS
  • Dave Cutler resigned in August 1988.
  • https://en.wikipedia.org/wiki/DEC_PRISM
  • Archive of original emails and presentations published by DECwest Engineering (Bellevue, Washington) in the late 1980s
DEC Jewel
  • PRISM based
  • single CPU
  • up to 256 MB of memory
DEC Crystal
  • PRISM based
  • one to four CPUs (SMP)
  • up to 1 GB of memory
  • A high performance, tightly coupled compute server for DIGITAL's hardware/software base
  • A high performance, highly available database machine built out of the QUARTZ database software and MICA subset operating system running on ROCK systems
  • Multipersonality operating system named Mica (project codenames were rocks-and-minerals). There'd be a base OS API containing every function needed by any operating system personality. Environment subsystems would present the "personality" (i.e., APIs, execution environment) of VMS, Unix, and anything else DEC wanted. Environment subsystems would run in user-mode server processes, though it wasn't going to be a full-blown u-kernel like Mach. Sound like any other system you've used?
  • quote from April 1989: Some time next year, Shannon said, DEC will bring out a VMS shell that will sit on the Ultrix software. That package, code-named Mica, will allow selected VAX/VMS applications to run on Ultrix.
  • code name of the first Mica OS implementation to port VMS onto PRISM
    (quote from April 1989: http://findarticles.com/p/articles/mi_m0SMG/is_n5_v9/ai_7242480/
    Digital has already developed a hybrid operating system called Emerald, sources say. Terry Shannon, director of the DEC Advisory Service at International Data Corp., Framingham, Mass., said Emerald "runs emulations of both Ultrix and VMS." )
  • Seattle's current official nickname is the "Emerald City", the result of a contest held in the early 1980s
  • Some sources state that Emerald was the first iteration of Mica
  • Some sources state that Emerald was a revival of the cancelled Mica project with the objective of porting VMS to MIPS (which was called ULTRIX)
  • According to Charlie Matco, Emerald was also attempting to port VMS to Intel's IA32 (80386, 80486, Pentium)
  • So-called "DEC languages" targeted at Alpha and Itanium are based upon the "GEM Optimizing Compiler System". GEM is a project code-name rather than an acronym. Everything coming out of DEC in those days had names like GEM, Opal, PRISM, Mica, Emerald etc.

PRISM / Mica / Emerald Links:

"Terry Shannon" channels "Charlie Matco"

Note: "Charlie Matco" is one of the pseudonyms of industry insider, Terry Shannon

Viewing message <VhFq6.9855$5f.2979544@typhoon.ne.mediaone.net> 
From: Terry C. Shannon (terryshannon@mediaone.net)
Subject: Re: - OpenVMS ever to be on Intel chip?
  Newsgroups: comp.os.vms, comp.sys.dec, comp.org.decus, vmsnet.alpha, comp.sys.intel
Date: 2001-03-10 22:54:05 PST
"Darren Peacock" <daz005@hotmail.com> wrote in message
> Cant remember the detail but back in 1988-89 there was a project in Digital
> to do just that ..
> My memory is fuzzy but it was called Emerald or Gem ..
> It was descibed as VMS on Intel.
> But the initial downsizing then , the project was halted. Funny to see the
> following  24 months some of the members end up in a small word processing
> company called Microsoft.

Well, I dragged Charlie Matco away from his usual pursuits of wine, women,
and rumourmongering and persuaded him to come clean on this matter.

There was in fact a DEC project called EMERALD back in the late 1980s. Its
goal: to boldly send VMS where it had not gone before... into IA32-land.
EMERALD was scuttled right around the same time the PRISM RISC project was
killed (late March 1988). Prototype PRISM processors existed at the time,
but apparently there were Big Delays with the complementary MICA operating
system. MICA was, simply stated, a reimplementation of VMS for the PRISM
RISC architecture. Dave Cutler flew the coop right after Ken Olsen pulled
the plug on PRISM/MICA. Word has it that a lot of the MICA code rose from
the dead when Windows NT was born.

Separately, there was a midnight project to port VMS to the Mach kernel. The
project was done (half-baked, actually) at Carnegie Mellon University IIRC.
Some of the incomplete code--which may well have a few facets of Emerald
embedded in it--is said to be floating around somewhere.

And that's all I got from Charlie. Heck, he started mumbling some stuff
about OZIX, MERLIN, QUARTZ, CHEYENNE, and other cryptic codewords from days
gone by. Wish he'd be a tad more forward-looking and spill the beans about
MARVEL... and the system a generation beyond MARVEL.


Matco's Handler


GEM Links:

The long and winding road (via: softwareblogs.intel.com)

By Steve Lionel (8 posts) on September 25th, 2006 at 8:20 pm

The other day, I posted something in comp.lang.fortran in response to a post asking for a new feature in the Intel Fortran compiler. I suggested that the best thing to do was to submit an issue to Intel Premier Support asking for the feature since the more customers who ask for a feature, the easier job the Fortran project manager has in justifying it. This prompted a startled reply from someone who thought that I was the Intel Fortran project manager. "Heck no," I replied, "I'm not even the most senior engineer on the project!" Well, really, I'm not on the compiler project itself anymore, but I still sit and work with those who are. Yes, I started my Fortran career at DEC in 1978, but there are others on the team who have been at it longer.

Stan Whitlock, now he IS the Intel Fortran project manager, as well as our Fortran standards committee representative. He joined the DEC FORTRAN-10 (for the PDP-10) engineering team in 1976. Stan later worked on Fortran for the DECsystem 20, VAX APL, and then rejoined the DEC Fortran team when the short-lived MIPS-based DECstation line was introduced. He's led the Fortran team ever since, through the years of Alpha, Digital Visual Fortran and now Intel Fortran. But wait, there's more...

Dave Eklund joined DEC in 1975 doing support for DEC FORTRAN-10, but support also meant bug fixing and eventually development. He stayed with the DEC 36-bit systems and then joined Stan on DEC Fortran, so he's been doing Fortran continuously for 31 years now.

And then there's Rich Grove. Rich started with DEC back in 1971 on PDP-11 Fortran. Rich was later the project leader for VAX-11 FORTRAN-IV-PLUS, to give it its full name, and he interviewed me for the job I eventually landed on the VAX Fortran project. Rich later became the project leader for DEC's GEM code generator and optimizer, which powered the DEC compilers for MIPS, Alpha and IA-32 (with Digital Visual Fortran). When Rich joined Intel, he was named an "Intel Fellow", an extremely senior position in the company, with the role of "Compiler Architect". While Rich doesn't work daily on Fortran, he sits just a couple of offices down from us and keeps Fortran in mind as he helps shape the future of Intel compilers.

I should also mention Peter Karam, another Fortran compiler developer who has been with the project since he started in 1980. Peter tried being a manager for a while, but soon found that his heart was in development so that's where he returned and what he does today.

As you can see, there's a core of dedicated engineers who have guided a set of Fortran implementations for more than three and a half decades, starting with DEC, through the couple of years of Compaq, and now Intel. (We missed HP, which bought Compaq shortly after we joined Intel, which was a bit more than five years ago.) Of course, there's more to the project than just these folks, including many developers who have worked on compilers for 20 years or more (and some young whippersnappers, too.)

I once told a group of customers about our long Fortran heritage and that we were "a bunch of old farts". One of the group replied, "That's good - Fortran needs old farts."

P.S. Doctor Fortran now has his own URL.
You can find the doctor at http://www.intel.com/software/drfortran. I'll be posting more regularly in the future.

Alpha RISC Architecture for Programmers (1999) by Evans + Eckhouse
Excerpt From Page 354: A case might be made that RISC ventures could have failed, absent advances in compilers which made their pipelines perform adequately in spite of the timing problems with slower load/store instructions versus faster register-to-register instructions. Otherwise, it has been argued, the relatively greater "power" of CISC instructions combined with some pipelining possibilities would have continued to hold sway, since more of the simpler RISC instructions are needed to solve comparable application problems.
NSR Comment (2013-07): I suppose it could be said that it was a lack of advances in compiler technology which prevented EPIC and VLIW from taking off the way RISC did previously.
Excerpt From Page 355: MIPS, in particular, became known for its compiler technology as well as for its processor architectures. MIPS pursued an approach to compiler systems involving language-specific "front ends" that convert programs into one common intermediate encoded form. A common "back end" analyzes and optimizes that intermediate expression of the program and then generates actual machine instructions. A compiler system composed of such front and back ends can be modified effectively, on the one hand as language standards change or as another language is supported, and on the other hand as new hardware implementations require different optimizations.
Digital Equipment Corporation developed its well-respected GEM compiler technology at a time when its line of VAX systems (CISC) was complemented by the line of MIPS workstations and servers (RISC), and the Alpha architecture was soon coming. This GEM technology made it possible to offer compatible language compilers for both VAX and Alpha systems, thus facilitating migration of customer applications from the 32-bit era to 64-bit systems, especially those for the OpenVMS programming environment.
The GEM compiler design, like the MIPS system, includes the concept of a compact, universal intermediate representation. In the GEM system, a universal optimizer independent of any particular programming language or hardware considerations operates upon the intermediate code. Other preliminary optimizations occur through the operation of the appropriate language-specific front end. Specific requirements of a particular operating system and the target hardware are accommodated at the back end when machine instructions, data storage, and linkage pointers are formulated.
Compilers sometimes provide control over the types of optimizations that they can perform. Those optimizations may include not only generally applicable techniques such as unrolling loops, but also the deliberate use of special instructions that are implemented in hardware on some systems or in software on others. The dilemma in the case of commercial software is whether to distribute a "one size fits all" version, or many versions, or one version optimized for a particular implementation (i.e. model) of a computer system. The GEM compiler system provides for such implementation-based tuning.
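The front-end/IR/back-end split described in the excerpts above can be sketched in miniature. Everything here (the function names, the toy tuple IR, the pseudo-assembly mnemonics) is a hypothetical illustration, not GEM's or MIPS's actual interfaces:

```python
# Toy sketch of the GEM/MIPS-style compiler split: language-specific front
# ends lower source into one common intermediate representation (IR), and a
# single shared back end turns the IR into "machine code". All names here
# are invented for illustration.

def fortran_front_end(expr):
    # Pretend Fortran front end: lowers "A + B" into the common IR.
    a, _, b = expr.split()
    return [("load", a), ("load", b), ("add",)]

def c_front_end(expr):
    # A different language, same IR: lowers "a+b".
    a, b = expr.split("+")
    return [("load", a.strip()), ("load", b.strip()), ("add",)]

def back_end(ir):
    # One back end serves every front end: emit pseudo-assembly.
    asm, reg = [], 0
    for inst in ir:
        if inst[0] == "load":
            asm.append(f"LDQ R{reg}, {inst[1]}")
            reg += 1
        elif inst[0] == "add":
            asm.append(f"ADDQ R{reg-2}, R{reg-1}, R{reg-2}")
            reg -= 1
    return asm

# Two front ends, one back end, identical output:
print(back_end(fortran_front_end("A + B")))
print(back_end(c_front_end("A+B")))
```

The point is structural: adding a language touches only a front end, and retargeting new hardware touches only the back end.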
NSR Comment (2013-07): The next paragraph discusses a benchmark program called COM_X which is written as optimally as possible in Fortran, Pascal, and C.

On page 358 we can see a 3-column table of Alpha machine language output generated by the compilers with optimization disabled. The FORTRAN output is the largest, Pascal's is in the middle, and C's is the smallest.

On page 362 we can see a 3-column table of Alpha machine language output generated by the compilers with optimization enabled. The FORTRAN column contains 48 instructions, the Pascal column contains 37 instructions, while the C column contains these 2 instructions:
      MOV 1, R0
      RET R26      
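The two-instruction C column above is what happens when a benchmark's result is computable at compile time: constant folding plus dead-code elimination leave only "return constant". A toy folding pass (the function name and example expression are hypothetical, not COM_X itself):

```python
# Sketch of compile-time constant folding: a pure arithmetic expression is
# evaluated once by the "compiler", so the back end need only emit a move of
# the final constant and a return.
import ast

def fold(expr):
    # Parse and evaluate a pure arithmetic expression at "compile time".
    tree = ast.parse(expr, mode="eval")
    return eval(compile(tree, "<folded>", "eval"))

# The whole computation collapses to its final value...
result = fold("(2 + 3) * 4 - 19")
print(result)  # 1

# ...so a back end needs to emit only:  MOV 1, R0 / RET R26
```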

Supporting docs

Note: in the past decade I have noticed PRISM/Mica/Emerald documents dropping off the net. It is for that reason alone that I started copying specific high-quality pages here.

Extracted Document 1

Original Link: http://www.alasir.com/articles/alpha_history/prism_to_alpha.html
Original Title: The PRISM Project. The Alpha Project.

The PRISM Project

In the beginning of the 1980s, DEC was at the peak of its financial strength, mostly because of high revenues from constantly growing sales of VAX hardware and software. Of course, nothing lasts forever, and it was obvious that some day VAX would have to leave the market in favour of a more sophisticated and flexible architecture, as was happening with the PDP-11. In those days many companies started to pay more and more attention to RISC concepts and implementations, and DEC had no intention of ignoring that trend. There were several development teams inside DEC between 1982 and 1985 actively researching the RISC arena:

* Titan, a high-speed design developed at Western Research Laboratory (DECwest) in Palo Alto (California, the USA) and supervised by Forest Baskett, since 1982;

* SAFE (Streamline Architecture for Fast Execution), a project supervised by Alan Kotok and David Orbits, since 1983;

* HR-32 (Hudson RISC 32-bit), developed at the semiconductor factory of DEC in Hudson (Massachusetts, the USA) and supervised by Richard Witek and Daniel Dobberpuhl, since 1984;

* CASCADE, a project by David Cutler in Seattle (Washington, the USA), since 1984.

In 1985, following Cutler's initiative to create a so-called corporate RISC plan, all four projects were merged into a single one called PRISM (PaRallel Instruction Set Machine), and the first draft of the new RISC processor design was released in August 1985. Incidentally, DEC participated in the development of the MIPS R3000 processor in those days and even initiated the creation of the Advanced Computing Environment consortium to promote the MIPS architecture.

No wonder the new processor inherited many features of the MIPS architecture, though the differences were also obvious. All instructions were fixed-length at 32 bits, with the upper 6 and the lower 5 bits encoding the instruction code and the remaining 21 reserved for immediate data or addressing. There were 64 primary 32-bit general-purpose registers (MIPS defined 32), 16 additional 64-bit vector registers, and 3 control registers for vector operations: two 7-bit (vector length and vector count) and one 64-bit (vector mask). There was no processor status register: the result of comparing two scalar operands was written into a general-purpose register, while the result of comparing two vector operands went into the vector mask. There was no built-in floating-point unit. A set of special instructions called Epicode (extended processor instruction code), maintained in software through loadable microcode, handled special tasks required by a particular environment or operating system and not otherwise supported by the standard instruction set. This function later resurfaced in the Alpha architecture under the name PALcode (Privileged Architecture Library code).
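The instruction layout described above (opcode split between the top 6 and bottom 5 bits, 21 bits in between for immediates or register fields) can be sketched as a decoder. The field names are illustrative; the real PRISM encoding had several instruction formats:

```python
# Minimal sketch of splitting a 32-bit PRISM-style instruction word into its
# opcode halves and the 21-bit middle field. Hypothetical field names; not
# the actual PRISM decoder.

def decode(word):
    assert 0 <= word < 2**32, "instruction words are exactly 32 bits"
    hi6   = (word >> 26) & 0x3F      # upper 6 opcode bits
    lo5   = word & 0x1F              # lower 5 opcode bits
    mid21 = (word >> 5) & 0x1FFFFF   # 21 bits for immediates / register fields
    return hi6, lo5, mid21

# An all-ones word splits into 6-bit, 5-bit, and 21-bit all-ones fields:
print(decode(0xFFFFFFFF))  # (63, 31, 2097151)
```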

In 1988, with the project still in progress, DEC's top managers decided to close it, considering further investment a waste of resources. Protesting that decision, Cutler resigned and went to Microsoft to head the department developing Windows NT (called OS/2 3.0 in those days).

In the beginning of 1989, DEC presented the first RISC-powered workstations of its own: the DECstation 3100, with a 32-bit MIPS R2000 clocked at 16MHz, and the DECstation 2100, with the same processor clocked at 12MHz. Both machines ran the Ultrix OS and were priced rather inexpensively; a typically configured DECstation 2100 cost about 8,000 USD (in 1990).

The Alpha Project

In 1989, the aging VAX architecture could hardly compete with second-generation RISC architectures such as MIPS and SPARC, and it was obvious that the next generation of RISC hardware would leave VAX few chances of survival. In the middle of 1989, DEC's engineers were tasked with creating a competitive RISC architecture with long-term potential, yet carrying a minimal set of incompatibilities with VAX, because VAX/VMS and all accompanying applications had to be ported to the new architecture. It was also defined as 64-bit right from the start, since competitors were about to release their own 64-bit solutions. A development group was created, with Richard Witek and Richard Sites as the chief architects.

The Alpha architecture was mentioned officially for the first time on the 25th of February 1992, during a conference in Tokyo. In addition, the most important features of the new architecture were listed in a concise overview posted to the USENET newsgroup comp.arch. It was also mentioned that "Alpha" was an internal code-name and that an official name would be provided later. The new processor was a "clean" 64-bit RISC design executing fixed-length 32-bit instructions. It accommodated 32 64-bit integer registers and operated on a 43-bit virtual address space, which could be expanded to up to 64 bits in future hardware implementations. Like VAX, it preferred little-endian byte order, in which the least significant byte of a stored register occupies the lowest memory address; this order was promoted by Intel, in contrast to the big-endian order introduced by Motorola and employed by most processor architectures of those days, in which the most significant byte occupies the lowest address. A mathematical co-processor was built into the core, together with 32 64-bit floating-point registers that were randomly accessible, unlike the primitive stack access order implemented in Intel x87 co-processors. The total lifetime of the new architecture was estimated at no less than 25 years.
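The byte-order distinction above is easy to demonstrate. A sketch using Python's standard `struct` module: the same 64-bit value stored little-endian (as on Alpha, VAX, x86) versus big-endian, with the least significant byte at the lowest address in the little-endian case:

```python
# Little-endian vs big-endian storage of the same 64-bit value.
import struct

value = 0x0102030405060708

little = struct.pack("<Q", value)  # little-endian: LSB (0x08) at lowest address
big    = struct.pack(">Q", value)  # big-endian: MSB (0x01) at lowest address

print(little.hex())  # 0807060504030201
print(big.hex())     # 0102030405060708
```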

The instruction set was simplified to facilitate pipelining actions as much as possible and consisted of 5 groups:

* integer instructions;
* floating-point instructions;
* branch and compare instructions;
* load and store instructions;
* PALcode instructions.

Notably, there was no hardware support for integer divide, because it would have been the most computationally expensive integer instruction and would have pipelined poorly, so it was simply emulated in software. This was an acceptable solution because integer divide is used relatively rarely in real code, especially considering that shift instructions can satisfy many integer divide and multiply calculations.
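The shift trick mentioned above: for powers of two, integer multiply and divide reduce to cheap, easily pipelined shifts. A sketch in Python (function names are illustrative; a compiler performs this "strength reduction" automatically):

```python
# Multiply and divide by powers of two via shifts, as a compiler would
# emit them in place of expensive multiply/divide instructions.

def mul_pow2(x, k):
    return x << k          # equivalent to x * 2**k

def div_pow2(x, k):
    return x >> k          # equivalent to x // 2**k for non-negative x

print(mul_pow2(10, 3))     # 80  == 10 * 8
print(div_pow2(100, 2))    # 25  == 100 // 4
```

Signed division needs a small adjustment before the shift (rounding toward zero rather than toward negative infinity), which is why even shift-based sequences for signed divide are a few instructions long.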

The Alpha architecture was a true RISC, in contrast to the various x86 microarchitectures of the past and present, starting with the NexGen 586, Intel P6, and AMD K6; those were RISC only at the level of processor functional units. The conceptual difference between RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) was (and still is) a matter of a few points:

                     CISC                          RISC
Instruction length   Variable, depends upon        Fixed, doesn't depend upon
                     instruction type              instruction type
Instruction set      Large, adapted for            Small, adapted for processor's
                     programmer's needs            execution convenience
Memory access        Allowed for different         Allowed for load/store
                     kinds of instructions         instructions only

Note: This table applies to general-purpose processors only, because DSPs and other embedded ASICs are much different. For instance, their instruction sets are typically small because of a high level of specialisation.

The processor was supposed to launch at a very high core frequency (150MHz), which was planned to be raised to 200MHz within the same engineering limits. This proved possible thanks to a successful architecture, and because the engineers who developed the processor declined to use automated design tools and did all the work by hand. The project entered the manufacturing stage and was soon reorganised into a regular division of DEC.

Thanks to the efforts of DEC's marketing department, the new architecture was called AXP (or Alpha AXP), though it was never known for sure what, if anything, the letters meant. In the past, DEC had had legal problems with its VAX brand, because a manufacturer of vacuum cleaners also claimed the name, and the conflict was taken to court at the time. It was also argued that sales of DEC equipment suffered because of the other company's memorable slogan, "Nothing sucks like a Vax!" Afterwards, a joke circulated that AXP meant "Almost eXactly PRISM".

Extracted Document 2

Original Link: http://windowsitpro.com/Articles/Print.cfm?ArticleID=7153
Title: The Death of Alpha on NT (Windows-NT ran on Alpha???) - A Web Exclusive from Windows IT Pro August 27, 1999

Alpha on Windows NT is dead. As far as NT goes, it’s an Intel world.

Last week, Compaq announced that it was laying off more than 100 of its Alpha/NT employees in its DECwest facility located near the Microsoft campus. This group of developers was tasked with making Alpha on NT a technical reality. Citing Compaq's decision and the strength of Intel's architecture and systems, Microsoft says it will discontinue development of future 32-bit and 64-bit Alpha products across its existing product line.

Now, for the rest of the story...

History

The Alpha on NT story has its roots back to the inception of NT. Dave Cutler, NT’s creator, was working on a new OS, code-named "Mica," for Digital Equipment. Digital intended Mica to be a successor to VMS and based it closely on VMS (thus, NT's strong roots in VMS). The Mica team worked at a Seattle-based location called DECwest, an office started by Cutler in the early 80s when he was working on Digital’s MicroVAX I project.

For some reason, Digital killed the Mica project. Seizing the opportunity, Microsoft picked up Dave Cutler and his Mica team and funded the continuation of the Mica project within Microsoft. A few years later, Windows NT was born. Digital, however, suspected that NT was actually Mica reborn and hired an OS specialist to determine the similarities. According to inside sources, many portions of NT’s code and even the comments were identical to Mica. As a result, Digital sued Microsoft. Microsoft and Digital settled out of court and the result was the Digital/Microsoft Alliance.

As part of the alliance, Microsoft promised to support the Alpha processor on NT and to ensure that Microsoft’s BackOffice products (i.e., SQL Server, Exchange Server, Internet Information Server—IIS) would be fully compatible and made available at the same time as their Intel equivalents. Digital added more than 100 engineers to DECWest, tasked with making Alpha on NT a technical reality. As part of the agreement, Digital (now Compaq) and Microsoft would have a perpetual cross-license of NT-related technology including full access to the source code.

What was NOT promised in the alliance was support for Microsoft’s Office, developer tools, or any other desktop products. If Digital wanted its Alpha chip to achieve application parity with Intel, Digital would need to fund the marketing and development efforts to the tune of millions of dollars each year. Digital did an OK job of marketing Alpha technology, creating a 32-bit Intel emulator called FX!32, attracting third-party software vendors, getting outside manufacturers to fabricate Alpha chips, and providing a line of Alpha-based workstations and servers. Other than a few isolated events such as Scalability Day or WinHEC, Microsoft did not market Alpha on NT.

When Compaq purchased Digital, many people feared that the Alpha chip would die. However, Compaq pledged renewed support for Alpha by announcing that future versions of its Tandem Himalaya computers would move from a MIPS chip to an Alpha chip. In addition, Digital UNIX (Tru64), NT, and VMS would continue to use and improve Alpha technology. Compaq recently added Alpha support to Linux.

The 64-bit question, however, remained: Can the Alpha ever achieve the economies of scale to compete with Intel or should it be positioned as a low-volume, high-margin chip for high-end computing? The answer is clear. It would be a high-end, low-volume chip, which is great for Himalaya, Tru64, and OpenVMS, but didn’t fit the high-volume NT market. As a result, Alpha on NT marketing was nonexistent. Any Alpha/NT momentum created by Digital ended abruptly.

Linux

Originally, NT supported four CPU types: MIPS, Alpha, PowerPC, and Intel. Over time, the marketing and development resources required to pursue the NT market reduced the market to NT/Intel. Today, Linux supports many CPU types, including Alpha. Is this a win for open-source vs. closed-source development? Will the open-source community continue to support Alpha over the next 4 years, even if volumes don’t support it? Perhaps there are enough open-source Linux developers who will keep Alpha/Linux alive for years in spite of market dynamics—purely for the love of developing and supporting the Linux community. Time will tell.

The Future

Today, Dave Cutler’s team is using Alpha-based systems to develop 64-bit NT. At WinHEC (April 99), Microsoft was able to boot 64-bit NT on an Alpha-based computer. However, at the current pace of development, Intel might deliver its 64-bit chip (Merced) by the time 64-bit NT is ready. If this happens, the fact that Alpha was first would offer little competitive advantage. Microsoft will position the 64-bit NT Server as a high-end, low-volume solution for those applications that need maximum scalability—e.g., a large SQL Server database that needs gigabytes of RAM for caching. Would the 64-bit version of NT perform significantly better on the Alpha vs. Merced for this type of application? If not, then being first and fastest would NOT overcome Intel’s competitive advantages: compatibility and cost. In the past, Compaq would position VMS, Tru64, or a Himalaya system for a highly scalable database application. Will Compaq position 64-bit NT against VMS, Tru64, or Himalaya? Not likely. Could a 64-bit Alpha Windows 2000 Professional (Win2K Pro) workstation save Alpha on NT? No way. Therefore, Alpha on NT is dead.

Will an Intel-only version of NT fundamentally change NT? "Not likely," says Mark Russinovich, author of the NT Internals column for Windows NT Magazine. "There’s already a significant amount of Intel and Alpha-specific optimization code in the kernel. The hardware abstraction layer (HAL) won’t be affected because it’s still a fundamental part of the OS. I believe Microsoft would want to leave the door open for other chips in the future," says Russinovich.

Support

Over the years, I’ve received numerous emails from happy and frustrated users of Alpha on NT. Administrators using one BackOffice application like Exchange Server on an Alpha-based server seemed satisfied with the performance and reliability of their systems. The extra scalability their Alpha systems provided made a huge difference. For Alpha-workstation users, there is the constant frustration of not being able to use the latest version of Microsoft’s developer tools, Office, and other applications. And although FX!32 provided much needed compatibility, many times it did not match the performance of its Intel equivalents, which defeated the original reason why someone would buy an Alpha—speed!

Microsoft and Compaq have stated they plan to continue support for Alpha on NT through Service Pack 6 (SP6) for NT 4.0. This will let existing users get full use out of their systems, but cut them off from Win2K. Other sources such as Aaron Sakovich’s AlphaNT site (http://www.alphant.com) will continue to support Alpha on NT users with the latest news, drivers, applications, and tips.

The Impact of Alpha

My heart goes out to the community of loyalists, users, developers, and vendors that have tirelessly supported Alpha on NT over the years. I believe Alpha on NT set a performance bar that motivated Intel to improve its chip offerings much faster than it had in the past. We’ve seen significant performance gains for Intel over the past 2 years—it’s hard to keep up. The gap has closed significantly. The loss of Alpha on NT might slow this process down. The loss also reduces any leverage Microsoft might have had against Intel. All NT eggs are in one basket now, for better or worse. One of these days, the hare might beat the tortoise, but not today.

Editor's Note: For additional reading about Alpha on NT, see the following Windows NT Magazine articles. (To view the articles online, you must be a Windows NT Magazine subscriber.)

"The Performance Curve" by Aaron Sakovich, January 1999 http://www.winntmag.com/Articles/Index.cfm?ArticleID=4686

"Living with Alpha: Finding Help" by Aaron Sakovich, February 1998 http://www.winntmag.com/Articles/Index.cfm?ArticleID=2924

"AlphaPowered" by Mark Smith, August 1997 http://www.winntmag.com/Articles/Index.cfm?ArticleID=26

Extracted Document 3

Original Link: http://www.nationmaster.com/encyclopedia/DEC-PRISM


PRISM was a 32-bit RISC CPU design from Digital Equipment Corporation (DEC). It was the final outcome of a number of DEC-internal research projects from the 1982-85 time-frame, and was at the point of delivering silicon in 1988 when management canceled the project. The next year work on the DEC Alpha started, based heavily on the PRISM design.



In the early 1980s DEC was a huge success, flush with cash and infused with a feeling of invincibility. Projects were started all over the company to chase the "next big thing", with little or no overall direction or managerial oversight.

RISC computing was one of those next big things, and in the period from 1982 to 1985 no fewer than four attempts were made to create a RISC chip at different divisions. Titan, from DEC's Western Research Laboratory (WRL) in Palo Alto, California, was a high-performance ECL-based design that started in 1982, intended to run Unix. SAFE (Streamlined Architecture for Fast Execution) was a 64-bit design that started the same year, designed by Alan Kotok (of Spacewar! fame) and Dave Orbits and intended to run VMS. HR-32 (Hudson, RISC, 32-bit) was started in 1984 by Rich Witek and Dan Dobberpuhl at the Hudson fab, intended to be used as a co-processor in VAX machines. The same year Dave Cutler started the CASCADE project at DECwest in Seattle.


PRISM

Eventually, in 1985, Cutler was asked to define a single RISC project, and he selected Rich Witek as chief architect. The design started as a 64-bit chip but was later "downsized" to 32 bits. In August 1985 the first draft of a high-level design was delivered, and work began on the detailed design. The PRISM specification was developed over many months by a five-person team: Dave Cutler, Dave Orbits, Rich Witek, Dileep Bhandarkar, and Wayne Cardoza. The work was 98% complete by 1986 and was heavily supported by simulations run by Pete Benoit on a large VAXcluster.

On the integer side, PRISM was in many ways a "me too" design, displaying considerable similarity to the MIPS designs. In each 32-bit instruction, the 6 highest and 5 lowest bits encoded the instruction, leaving the rest of the word for a constant or for register designators. Sixty-four 32-bit registers were included, as opposed to thirty-two in MIPS, but usage was otherwise similar. PRISM and MIPS also lack the register windows that were a hallmark of the "other" design, Berkeley RISC/SPARC.
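As an illustration of that fixed-width encoding, the sketch below decodes a hypothetical instruction word. Only the 6-bit high field, the 5-bit low field, and the 6-bit register numbers (needed to address 64 registers) follow from the text; the exact placement of the register fields is an assumption made for the example, not the documented PRISM format.

```python
def decode(word):
    """Decode a hypothetical PRISM-style 32-bit instruction word.

    Assumed layout (illustrative only, not the documented PRISM format):
      bits 31-26: primary opcode (the "6 highest bits" in the text)
      bits 25-20: source register Ra (6 bits -> 64 registers)
      bits 19-14: source register Rb
      bits 13-8 : destination register Rc
      bits  7-5 : unused in this sketch
      bits  4-0 : function code (the "5 lowest bits" in the text)
    """
    return {
        "opcode": (word >> 26) & 0x3F,
        "ra":     (word >> 20) & 0x3F,
        "rb":     (word >> 14) & 0x3F,
        "rc":     (word >> 8)  & 0x3F,
        "func":   word & 0x1F,
    }

# Build a word with known fields and decode it back.
word = (0x2A << 26) | (5 << 20) | (6 << 14) | (7 << 8) | 0x11
fields = decode(word)
```

The point of the sketch is simply that a fixed 32-bit format with 6-bit register numbers leaves only a few bits over for the opcode and function code, which is why the encoding splits the instruction across the high and low ends of the word.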

The PRISM design was notable for several aspects of its instruction set, however. Notably, PRISM included Epicode (extended processor instruction code), which defined a number of "special" instructions intended to offer the operating system a stable ABI across multiple implementations. Epicode was given its own set of twenty-two 32-bit registers. A set of vector processing instructions was later added as well, supported by an additional sixteen 64-bit vector registers that could be used in a variety of ways.

Two versions of the system were planned: DECwest worked on a "high-end" ECL implementation known as Crystal, while the Semiconductor Advanced Development team worked on MicroPRISM, a CMOS version. MicroPRISM was finished first and was sent for test fabbing in April 1988. Additionally, Cutler led development of a new microkernel-based operating system code-named Mica, which was to offer Unix-like and VMS-like "personalities" on top of a common substrate of services.

Friction and cancellation

Throughout the PRISM period, DEC was involved in a major debate over the future direction of the company. As newer workstations were introduced, the performance benefit of the VAX was constantly eroded, and the price/performance ratio completely undermined. Different groups within the company debated how to best respond. Some advocated moving the VAX into the "high-end", abandoning the low-end to the workstations. Others suggested moving into the workstation market using a commodity processor. Still others suggested re-implementing the VAX on a RISC processor.

This led to considerable problems with corporate immune response and turf wars between the various groups. Competition between the divisions delayed the architecture review, which was not closed until 1986. Work on the associated support chips, the memory management unit and floating-point unit, was later interrupted by yet another debate over whether the design should be 32- or 64-bit. The MicroPRISM design was not finalized until April 1988.

By this point other groups in DEC, fed up with the constant in-fighting and delays, had decided to create their own series of workstations based on the MIPS R3000, running a port of their existing Ultrix Unix-like operating system. Going from the initial meeting to a prototype machine took only 90 days, with full production able to start by January 1989. At a meeting reviewing the various projects in July 1988, the company decided to cancel PRISM and continue with the MIPS workstations and high-end VAX products.

Ironically, every attempt to produce a faster VAX that could compete with newer workstations was essentially a failure. The VAX 9000 ran into delays, and by the time it shipped newer Unix workstations had already surpassed it in performance, at a tiny fraction of the cost (or size). Apparently aware of this danger, at the very same meeting where PRISM was canceled, Ken Olsen started a new project to continue exploring a RISC-based VAX. This indirectly led to the formation of the Alpha project the next year.


Note

DEC PRISM should not be confused with Apollo PRISM (Parallel Reduced Instruction Set Machine), Apollo Computer's high-performance CPU used in its DN10000 series workstations.


References

* E-mail with Bob Supnik
* MicroPrism

Related Links

Back to Home
Neil Rieck
Waterloo, Ontario, Canada.