OpenVMS Notes: Raxco System Tuning

  1. The information presented here is intended for educational use by qualified OpenVMS technologists.
  2. The information presented here is provided free of charge, as-is, with no warranty of any kind.

Introduction

Back in 1987, I had the pleasure of attending a multi-day VAX/VMS tuning seminar put on by Raxco. At that time, the lecturer, Clayton J. Prestia, presented the audience with a plethora of VMS tuning information that I didn't know was buried (hidden?) across numerous volumes of DEC's own literature.

At that time, I was a member of a team managing two VAX-11/750s and eight VAX-11/730s distributed across the Canadian province of Ontario (we were also the application developers). The 730s would really bog down whenever there were 10-12 active interactive processes.

  1. Raxco-based tuning allowed me to double this number to twenty.
  2. I was able to use recommendations from the Raxco course notes to convince my peers that a change in our programming techniques would make the system much faster. Our biggest gains came from greatly reduced file locking and Lock Manager overhead.

Before my Raxco tuning work, a pair of VAX-8550s had already been ordered; they would be operated as a VAXcluster from a central location in Toronto, replacing the ten distributed machines just mentioned. Even though these new machines had been installed with the maximum amount of memory (80 MB, which seemed huge at the time), Raxco-based tuning still improved their operation.

As of 2010, I am still responsible for several VMS systems, including two AS-2100 platforms, one AS-4100 platform, and two AS-DS20e platforms. I still use some of this material to tune these systems as well as to educate others.

VMS evolves into OpenVMS

The Raxco seminar was based upon VMS 4.x for VAX in 1987, but VMS has changed for the better with each successive release. In the early 1990s, DEC developed the 64-bit Alpha processor, and even though the code base for Alpha was separate from that used by VAX, bugs corrected in the Alpha version of VMS (now called OpenVMS) were back-ported to VAX. In the early 2000s, the 64-bit code base, originally meant only for Alpha, was adapted to generate code for 64-bit Itanium (which HP branded Integrity). HP engineers have told me that the Alpha-to-Itanium port was the first time a new set of eyes was collectively engaged in examining and improving OpenVMS since the previous port from VAX to Alpha.

32-bit VAX      64-bit Alpha    64-bit Itanium  x86-64
VMS 1.x
VMS 2.x
VMS 3.x
VMS 4.x
VMS 5.x
OpenVMS 6.x     OpenVMS 6.x
OpenVMS 7.x     OpenVMS 7.x
                OpenVMS 8.x     OpenVMS 8.x
                                                OpenVMS 9.x
Timeline:
  • 1977 - DEC announces VAX/VMS
  • 1978 - VMS 1.0 ships
  • 1980 - VMS 2.0 ships
  • 1982 - VMS 3.0 ships
  • 1984 - VMS 4.0 ships
  • 1985 - DEC begins work on PRISM (RISC)
  • 1988 - VMS 5.0 ships
  • 1991 - DEC introduces Alpha
  • 1992 - VMS 1.0 for Alpha ships
  • 1993 - VMS 6.0 for VAX ships
  • 1994 - VMS 6.1 for VAX and Alpha ships
  • 1996 - VMS 7.0 ships
  • 1998 - Compaq buys DEC
  • 2002 - HP buys (err, merges with) Compaq
  • 2003 - HP announces "First Boot" of OpenVMS on Itanium
  • 2003 - Limited release of OpenVMS 8.0 for Itanium only
  • 2005 - VMS 8.2 ships for Alpha + Itanium
  • 2014 - HP outsources OpenVMS development to VMS Software Inc (VSI)
  • 2015 - HP splits into HPE and HP (laptops, desktops and inkjet printers)
  • 2020 - VSI announces that OpenVMS 9.x will be supported only on x86-64

Life This Side of Y2K

Why does this history matter? Well, in the days of VMS 4.x, electronic memory was very expensive (compared to now), which meant there was never enough available. For example, the big VAX-8550 I mentioned previously could only accept a maximum of 80 MB, although a third-party modification would allow you to increase this limit. Today, memory is cheaper and faster (DRAM -> SDRAM -> DDR -> DDR2), which means that Alpha and Itanium systems almost always have access to much more of it.

Because most systems in 1987 had relatively little memory, the default behavior of both "VMS Tuning" and "Raxco Tuning" involved enabling and using AWSA (Automatic Working Set Adjustment) reduction. This meant that processes not actively using memory on a busy system would be required to surrender LRU (Least Recently Used) pages to the free page list or modified page list. A faulting process could later SOFT FAULT those pages back, unless the system had already reassigned the memory, in which case the SOFT FAULT would be converted to a HARD FAULT (involving one or more disk I/Os).

Poking around tuning scripts (like AUTOGEN) this side of Y2K will reveal that AWSA reduction is disabled by default whenever a minimum of 500 MB of memory is present. This means that Alpha and Itanium systems will almost always have AWSA reduction disabled.
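Here is a rough conceptual sketch, written in ordinary Python rather than anything VMS-specific, of the AWSA decision logic just described. The names mirror real SYSGEN parameters (PFRATH, PFRATL, WSINC, WSDEC, WSQUOTA, WSEXTENT, AWSTIME), but the numeric values and the helper function itself are illustrative assumptions, not VMS defaults or actual operating-system code.

  # Conceptual model of Automatic Working Set Adjustment (AWSA).
  # NOT actual VMS code; names mirror real SYSGEN parameters but the
  # values below are illustrative only.

  PFRATH = 120   # fault rate above which the working set is grown
  PFRATL = 0     # fault rate below which the working set is trimmed
                 # (0 effectively disables reduction, which is what
                 #  AUTOGEN does on memory-rich systems)
  WSINC  = 2400  # pagelets added on growth
  WSDEC  = 250   # pagelets removed on reduction

  def adjust_working_set(ws_size, fault_rate, wsquota, wsextent):
      """Return a new working-set size after one AWSTIME sampling interval."""
      if fault_rate > PFRATH:
          # Heavy faulting: allow growth, but never beyond WSEXTENT.
          return min(ws_size + WSINC, wsextent)
      if PFRATL > 0 and fault_rate < PFRATL:
          # Quiet process: trim LRU pages back toward WSQUOTA. Trimmed
          # pages go to the free or modified page list, where a later
          # reference can still recover them with a soft fault.
          return max(ws_size - WSDEC, wsquota)
      return ws_size  # fault rate is in the dead zone: leave it alone

  # A quiet process keeps its pages when PFRATL is 0:
  print(adjust_working_set(ws_size=10000, fault_rate=1,
                           wsquota=4096, wsextent=65536))   # prints 10000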

Where I disagree with RAXCO:

Raxco Seminar Notes (1987)

Public Domain Notice

On June 30, 2010, I contacted Raxco Software president Robert E. Nolan to obtain permission to make these notes freely available to the VMS community. Permission was granted (verbally) provided the original Raxco copyright remains in place. On behalf of the VMS community: "thank you, Robert".

Unsolicited Advert:

Raxco sells PerfectDisk for Windows as well as Performance Suite for OpenVMS and RaxcoSupport Suite for OpenVMS, so please send them some of your business.


VM Definitions

OS vendors have slightly different meanings for the phrases "working set", "resident set", and "balance set", so here are a few official definitions from DEC via official VMS / OpenVMS documentation.

Working set
  The total number of a process's pages in physical memory. It is a subset of the total number of pages allocated to a process. Also called the primary page cache.
Balance set
  The sum of all working sets currently in physical memory.
Resident set
  • The set of pages currently in memory for a process is called the process's resident set and is described by a pager data structure.
  • As the program executes, the pager loads a page whenever a non-resident page is referenced.
  • When the resident-set limit is reached, the faulting process must release a page for each newly faulted page added to its resident set.
  • A FIFO replacement algorithm is used to select the page to be removed.
  • When a page is removed from a process's resident set, it is placed on one of two page lists: the free page list or the modified page list.
  • If the modify bit is zero in the page table entry, the page is added to the tail of the free page list.
  • If the modify bit is one in the page table entry, the page is queued on the tail of the modified page list.
  • If a process faults a page that is on either list, the page is returned to the process's resident set.
Primary page cache
  Where processes execute: the balance set resides in the primary page cache.
Secondary page cache
  Where data is stored for movement to and from the disks. The secondary page cache consists of two sections:
  • Free page list
  • Modified page list
Hard fault
  The required page must be retrieved from disk (pagefile or program image).
Soft fault
  The required page is retrieved from the secondary page cache.
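To tie these definitions together, here is a small toy model (ordinary Python, not VMS code; the class and method names are my own invention) of a resident set with FIFO replacement, the two secondary-cache lists, and the resulting soft versus hard faults.

  from collections import deque

  class ProcessMemoryModel:
      """Toy model of a process resident set with FIFO replacement.

      Evicted pages land on the secondary page cache (free list for clean
      pages, modified list for dirty pages). Referencing a page that is
      still on either list is a SOFT fault; otherwise it is a HARD fault
      requiring disk I/O.
      """
      def __init__(self, resident_limit):
          self.resident_limit = resident_limit
          self.resident = deque()       # FIFO order of resident pages
          self.dirty = set()            # pages with the modify bit set
          self.free_list = deque()      # clean pages, reusable immediately
          self.modified_list = deque()  # dirty pages, must be written first

      def reference(self, page, write=False):
          if page in self.resident:
              if write:
                  self.dirty.add(page)
              return "no fault"
          # Page is not resident: fault it in, evicting if at the limit.
          if len(self.resident) >= self.resident_limit:
              victim = self.resident.popleft()          # FIFO victim
              if victim in self.dirty:
                  self.modified_list.append(victim)     # must be written out
                  self.dirty.discard(victim)
              else:
                  self.free_list.append(victim)
          self.resident.append(page)
          if write:
              self.dirty.add(page)
          if page in self.free_list:
              self.free_list.remove(page)
              return "soft fault (free list)"
          if page in self.modified_list:
              self.modified_list.remove(page)
              return "soft fault (modified list)"
          return "hard fault (disk read)"

  mem = ProcessMemoryModel(resident_limit=2)
  print(mem.reference("A"))              # hard fault
  print(mem.reference("B", write=True))  # hard fault
  print(mem.reference("C"))              # hard fault, evicts A to free list
  print(mem.reference("A"))              # soft fault, evicts B to modified list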

Neil Rieck
Waterloo, Ontario, Canada.