Linux Notes: Linux Introduction and Caveats

  1. The information presented here is intended for educational use by qualified computer technologists.
  2. The information presented here is provided free of charge, as-is, with no warranty of any kind.
Edit: 2021-05-17



Executive Summary
Every year since the mid-2000s I would install some version of Linux on one of my spare PCs just to try it out. In the early years this was usually Fedora but in later years it was usually CentOS (two distros in the Red Hat ecosystem). I had also played with Debian and SUSE. I'll probably take some flak for saying this, but I did not think Linux was worth my time until I played with CentOS-7, which first appeared in 2014. I'm not saying it is better than other Linux distros, but CentOS-7 made me sit up and take notice.
comment: after IBM acquired Red Hat in 2019 for the tidy sum of $34 billion US dollars, Red Hat began behaving like Blue Hat in 2020, so I may need to shift to openSUSE

Linux is just as fast on modern hardware as UNIX or OpenVMS. Quick installation, ease of use, and stability have convinced me that Linux deserves serious consideration. On top of all this, Linux does a much better job managing virtual memory.

Linux has been the plaything of academia for decades now, which means it is much more feature-rich than proprietary operating systems, but it is not friendly. In fact it suffers from the problem of "too many cooks in the kitchen".

Caveat: if you decide to enter the Linux world without a paid-up annual support contract then be prepared to support yourself. The self-help blogs are full of people repeating the same mistakes or giving the same bad advice, and many do not appear to be seasoned computer professionals. At the very least you should read how Linux is maintained in IBM-managed data centers (see below). You may also want to familiarize yourself with some of my real-world Linux problems.

A brief history of how Unix spawned Linux


  • Ken Thompson, Dennis Ritchie and others were employees of Bell Labs when they created the Unix operating system around 1970 for an internal project.
  • Unix was originally written entirely in assembly language for the 18-bit PDP-7, which Bell Labs wanted to migrate to the newly announced 16-bit PDP-11 (announced, but delivered nine months late).
  • Wondering how they were going to move their software from an 18-bit machine architecture to a newer 16-bit one with different instructions, they developed a portable low-level language named "C" which could easily generate code for other computer architectures. They then used "C" to stamp out many thousands of bugs hiding in Unix. Now Unix was portable as well.
  • Since Bell Labs was (then) owned and operated by the Bell Telephone System (which was a monopoly) they were not allowed to make any money selling software. So both the Unix OS and the C programming language were licensed to educational institutions (colleges and universities) for no more than the cost of duplicating, then mailing, magnetic media. The "icing on the cake" included all the source code which meant that engineering schools could see how the UNIX kernel interfaced with hardware -AND- improve upon it.
  • Universities published their own distributions, and one especially successful branch was known as BSD (Berkeley Software Distribution). I personally worked on several Nortel systems which were based upon BSD UNIX 4.3.

American Politics triggers change

  • After the Reagan administration broke up the Bell Telephone System in the early 1980s, Unix ended up at AT&T, where new business managers announced that colleges and universities "would be required to buy commercial software licenses" AND "no more source code access". At the same time, AT&T Unix was rebranded under the trademarked name UNIX.
  • American universities began rewriting Unix programs (to avoid copyright issues) which mostly ended up under the GNU Project (I have skipped a lot of history here).
  • Some European universities moved to MINIX which was only meant to be used as a teaching tool.
  • Linus Torvalds began releasing his own kernel to the internet in the early 1990s as the Linux kernel.
  • Eventually, the Linux kernel (which had no programs) was merged with the GNU project (which had no kernel). The result is Linux, which has been the plaything of academia for almost 30 years.
  • Improvements to UNIX could not keep up with improvements to Linux, which has left UNIX almost obsolete (why pay for something when the free version is better?).

Hard Capitalism vs. Soft Capitalism?

 The concept of open-source software appears to be an American invention because its history is peppered with almost nothing but American names. So I am surprised that big American companies are always trying to kill (or sideline) open-source products they feel are cutting into their profit margins. Here are a few examples of many:

  1. DEC hated both "C" and "Unix" because inexpensive and/or free software was cutting into their software revenue (they seemed to ignore the fact that PDP and VAX computers ran most of the early internet from 1970 to 1980, and that this was done using Unix and C).
  2. Microsoft, under Steve Ballmer anyway, always seemed to be at war with competition from the open-source world, and yet we learned that Microsoft has been using a lot of Linux servers in their Azure cloud for the past decade
    (this is the same for all cloud-computing providers; no company would be able to afford per-machine licenses in a multi-computer cloud)
  3. If you only looked at Oracle Linux then you might think that Oracle supports open-source software, but when Oracle saw MySQL as a potential competitor to their database business they repeatedly attempted to sideline it, as I have documented elsewhere. If MariaDB had not been created by forking the MySQL codebase then we would be having a very different conversation.
  4. SUSE was founded in Germany in 1992 while Red Hat was founded in the USA in 1993, and both became successful following the open-source model: give away the software and only charge for support. Both companies did this until 2019, when IBM purchased Red Hat for $34 billion US dollars and began changing things (increasing the cost of annual support contracts; now breaking the relationship between RHEL and CentOS). Many have suggested that IBM change the name from Red Hat to Blue Hat.

The Scottish moral philosopher Adam Smith has been called "the father of capitalism", and yet many people do not know that his work was intended to provide the British government with a context for redistributing wealth in order to support workers who were about to be made redundant by the industrial revolution, which was just booting up. While he advocated for what some people might call "soft socialism", American capitalists took things to the next level with hedge funds and the like, giving us "hard capitalism". It appears to me that something similar has happened in the open-source world of computer software. While European companies like MariaDB Corporation AB seem quite successful giving away the software and only charging for support, I get the feeling that some American companies would like to put this genie back into the bottle. Of course they would argue that American money built the internet, while ignoring the fact that European researchers built the World Wide Web, which includes browsers and web servers.

SUSE is now an American company whilst openSUSE is still based in Germany. If you do not like the direction American companies are headed then I suggest you check out the European alternatives.


Editing a file on Linux

  • Some Linux installers provide a really neat text editor called nano which is so easy to use that I won't comment any further.
  • Other Linux installers only provide the vi (or vim) editor, which is guaranteed to stop every newbie in their tracks.
    • caveat: some installation programs will drop you into vi for certain inputs, so if you don't have some vi skills you will be stopped in your tracks

    note: other editors exist but these are the only two I encountered during recent Linux installations
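For the newbie who gets dropped into vi unexpectedly, these few keystrokes are usually enough to survive (a minimal cheat sheet, not a tutorial):

```
i        enter insert mode (now you can type text)
Esc      leave insert mode, return to command mode
dd       delete the current line (command mode)
u        undo the last change (command mode)
:wq      write the file and quit (command mode)
:q!      quit without saving (command mode)
```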

Many Linux distributions use different commands to do similar things

The only thing common to the various Linux distributions is the name "Linux". For example, numerous distributions use different tools to partition a hard disk. Check out the following list of commands to initialize then mount a disk under various operating systems:

Linux
In Linux you must do the following (incomplete list):
  • use either fdisk or parted to partition the disk (and know why you would use one over the other: fdisk is the traditional tool while parted also handles the newer GPT partition tables, and some minimal installs only provide one of them)
  • use mkfs (then choose one of the 50+ supported volume formats); I prefer mkfs.xfs for most Linux volumes, but formats like vfat and NTFS are available for people requiring compatibility with Windows
  • use xfs_admin to set the volume label
  • use mkdir /mnt/whatever     (to create a mount point)
  • use mount /dev/sda3 /mnt/whatever
  • edit /etc/fstab                      (to force an automatic mount during the next reboot)
  • caveat: newer Linux distros also support LVM (Logical Volume Manager) which can increase overall confusion.
Note: online help is meant for experts. Type "man mount" or "man umount" to see what I mean. 
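The steps above can be sketched as one session (a hedged example: the device /dev/sdb and the label "data1" are assumptions for illustration, and every command here requires root privileges):

```shell
# partition the disk (parted shown here; fdisk is the older alternative)
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary xfs 1MiB 100%

# create an XFS file system, then set the volume label
mkfs.xfs /dev/sdb1
xfs_admin -L data1 /dev/sdb1

# create a mount point and mount the volume
mkdir /mnt/whatever
mount /dev/sdb1 /mnt/whatever

# add a line like this to /etc/fstab to remount at the next reboot:
#   LABEL=data1  /mnt/whatever  xfs  defaults  0 0
```

Six commands plus a file edit, compared to two commands on OpenVMS.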

Windows
Initializing then mounting a disk in Windows is mostly automatic

OpenVMS
Initializing then mounting a disk in OpenVMS is done in two commands:
  • initialize the disk (structure level 2 hails from the days of VAX/VMS; level 5, ODS-5, is newer):
       initialize/structure=5  disk-name:  volume-label
  • mount the disk:
       mount  disk-name:  volume-label
Note: online help is targeted at non-experts. Type "help initialize" or "help mount" from DCL to see what I mean

comment: For those of us who have worked on systems initially set up by the clueless, perhaps a less friendly software environment is desirable.


  • Like other operating systems, Linux binary distributions will auto-install from optical boot media
    • caveat: I ran into a problem doing a reinstall of Debian-7.11.0 in 2016-09-xx. The "PARTMAN" disk partitioning tool used by the installer kept complaining about "no root partition". This could only be fixed by exploring all the options changeable in the partition menu associated with "/dev/sda2" (not sure a newbie could have gotten around this one). In this case PARTMAN also sets up "/etc/fstab", which is why it needed to know about a root partition.
  • Linux source-code distributions will not auto-install from boot media
    • Gentoo Linux requires you to go through dozens of commands spread across 11 chapters of their installation manual. Since this requires pulling hundreds of files from the internet, it could easily take 4 to 8 hours (if nothing goes wrong)

Software Updates (support, or lack of it)

  • Software updates for proprietary operating systems like OpenVMS are (usually) well-tested by the vendor then manually installed by the customer
  • Software updates for proprietary operating systems like Windows and Solaris are (usually) well-tested by the vendor then:
    • automatically installed by vendor (if the feature is left enabled by the customer)
    • selectively installed by the customer (if set by the customer)
    comment: A few years ago I heard of a problem with Windows Server 2008 in a business environment in Montreal, where automatic updates were enabled for a short time by a system admin who wasn't thinking clearly that day. Patches were applied to the server which broke some third-party software. The two companies pointed fingers at each other while the customer was off-the-air for days. The customer was only able to get back into business by doing a full restore from backup media.
  • Software updates for Linux are numerous and frequent (some occur weekly) but are not as well tested. What is worse is this: every distribution is slightly different, which means you need to be an expert just to control the patch levels.
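As a hedged illustration of controlling patch levels (the package name openssl is only an example, and both commands require root): the Red Hat family can freeze a package with the versionlock plugin, while the Debian family uses apt-mark, so even this simple task differs by distribution:

```shell
# Red Hat family (requires the yum-plugin-versionlock package)
yum versionlock add openssl

# Debian family
apt-mark hold openssl      # freeze the package at its current version
apt-mark unhold openssl    # release it again
```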


  • Always test updates on a "development platform" then retest on a "qualification platform" before applying them to a "production platform".
    • some companies go so far as to test updates on a "sacrificial platform" before running the development-qualification-production cascade. Hardware is now cheap so I think the recommendation is worth consideration.
  • Linux can be anywhere from free (without paid support) to inexpensive (with paid support), but be prepared to shift qualification burdens to your own organization.
    • translation: just as there is no such thing as a free lunch, there is no such thing as a free OS
  • Be sure to produce backups of your installation media, system software and user data (including databases)
    • Even if you have a 14-day or 28-day daily backup rotation, be sure to do weekly or monthly backups to write-once optical media
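As one minimal sketch of the backup-and-verify idea (the paths under /tmp are assumptions for illustration), tar can produce a compressed archive, which you would then copy to rotating or write-once media; always restore to a scratch directory to prove the archive is readable:

```shell
# create some sample "user data"
mkdir -p /tmp/backup_demo/data
echo "important record" > /tmp/backup_demo/data/file1.txt

# archive and compress it (this is what you would write to backup media)
tar -czf /tmp/backup_demo/data.tar.gz -C /tmp/backup_demo data

# verify the archive by restoring it to a scratch directory
mkdir -p /tmp/backup_demo/restore
tar -xzf /tmp/backup_demo/data.tar.gz -C /tmp/backup_demo/restore
cat /tmp/backup_demo/restore/data/file1.txt    # prints: important record
```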

Life in a customer-owned IBM-managed data center

Hardware is relatively inexpensive in 2018 (compared with systems before y2k) and operating systems like CentOS-7 are free which changes everything. As I understand it, many customer-owned IBM-managed data centers are run like this:
  • every named system has three or more variants:
    1. name-p (production)
    2. name-q (qualification)
      • sometimes these include customer acceptance
    3. name-d (development)
    4. name-e (experimental)
  • the root password is never given out to the customer for any system other than experimental
  • root privs are temporarily granted for p/d/q systems after a formal request is submitted to IBM indicating what you intend to do with it
    • non-urgent requests are associated with response times of 4-8 weeks
  • major updates are discouraged for all platforms
  • minor upgrades are discouraged but will be allowed on a lower level platform after a review by IBM
  • only changes that have been tested for more than 8 weeks will be allowed to percolate up to the next level
  • every change to any system needs to be addressed by a back-out plan
    • for some systems with RAID-10 storage, this means pulling half the drives then doing your upgrade(s) on the remaining media
    • blank replacement drives are inserted then merged into the RAID-10 set so your system has some sort of storage redundancy
    • the pulled drives are placed on the shelf for 8 weeks (minimum), with a year being the optimal time, before they are reused
  • question: why would companies ever agree to these seemingly draconian measures by IBM in non-IBM data centers?
  • answer: consider what would happen if a development system was damaged in any way. Your developers could be twiddling their thumbs for weeks while the data center people rebuild the system from backup media if that is possible. But consider what would happen if you lost your production system(s).

Comparing Linux problems to other operating systems

I have been a VMS system administrator and programmer since 1987, then started to work with OpenVMS in 1999. On VMS or OpenVMS I have always been able to roll back an update, but this appears to be impossible (or at least very difficult) with modern versions of Linux in 2018.


  • Supposedly the YUM tool will allow you to revert to previous product versions, but I have never been able to revert important security-related modules (like OpenSSL), and I need to point out that we are not running SELinux (security-enhanced Linux).
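For what it is worth, the yum commands involved look like this (a sketch; the transaction number 42 and the package name openssl are assumptions, all commands require root, and in my experience the undo can fail for some security-related packages):

```shell
yum history list           # show recent update transactions
yum history info 42        # inspect what one transaction changed
yum history undo 42        # attempt to revert that transaction
yum downgrade openssl      # or try downgrading a single package
```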


  • Lots of people hate working on Windows systems (personal or server) but you have got to hand it to Microsoft for their SYSTEM RESTORE feature. There is not a single flavor of Linux or Unix that I am aware of that has a maintenance tool like this.
  • I have never been unable to roll back a Windows system, but I need to point out that the maximum rollback time is limited by how much space on the system disk has been allocated for this purpose (bigger is better).
  • I do know of one Windows server platform in Montreal that was never able to roll back. Not sure what went on there, but I think their problem was related to a combination of things:
    • insufficient space allocated to keeping earlier pre-update snapshots
    • too much time elapsing before someone realized that a rollback was necessary (this could happen if the failing process only ran once on the last day of the month)


  • Because the Files-11 file system implemented on VMS and OpenVMS supports file versioning (you can have up to 32,767 versions of the same file name), installing new software does not overwrite old software. So it is always possible to go back to earlier software, provided you don't issue a $purge command. Here is one example:
    • Here is one directory example from folder sys$exe
      $dir edt.exe;*/col=1
    • Anyone typing the command "$edit/edt" will execute program file SYS$EXE:EDT.EXE;4
    • If that file was just installed and something is wrong with it, deleting the version 4 file would cause everyone on the system to revert to the version 3 file. Easy-peasy.
  • Old-school VMS software is usually installed via the DCL script sys$update:vmsinstal.com ("instal" with one "L")
    • this script has the ability to rename old software with extensions like "_old1" (works better on ODS5 formatted volumes)
    • this provides a primitive way to roll back
  • Newer software is usually installed via the PRODUCT command.
    • this provides smarter ways of doing a rollback
    • caveat: some companies like Process Software still install new versions of MultiNet and TCPware via VMSINSTAL because their current products are written to work on older platforms, like VMS, where the PRODUCT command is not available

comment: everyone reading this probably knows that software cannot be updated on most computer systems while it is in use. This is not true of OpenVMS, where an active process holds a run-time lock on an open executable (like sys$exe:EDT.EXE;4 in the example above), but an update that copies in a newer version of EDT.EXE simply saves it as EDT.EXE;5. Any process invoking EDT would pick up the new file while current processes continue to use the old one. I have never seen anything like this on any other operating system. In fact, software engineers at DEC (Digital Equipment Corporation) developed this after running into problems on previous DEC operating systems like RSX-11 and RT-11.
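A hedged DCL illustration of the versioning behaviour just described (the directory listing is abbreviated, and the version numbers simply follow the EDT.EXE example above):

```
$ directory sys$exe:edt.exe;*

Directory SYS$COMMON:[SYSEXE]

EDT.EXE;5       EDT.EXE;4       EDT.EXE;3

$ delete sys$exe:edt.exe;5    ! roll back: everyone reverts to EDT.EXE;4
```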

Recommended Books

There are so many Linux books available today that it is difficult to recommend any one over another. But for some reason, all the really good computer books in my library are from No Starch Press, and it seems the same is true for Linux. These books can be purchased directly from the publisher.

How Linux Works: What Every Superuser Should Know - 2nd Edition (2015) Brian Ward

  • this book contains useful information I have never seen anywhere else. It is worth every penny.
    • the only thing this book lacks is information on LVM (logical volume manager) but that is true for most Linux books.

The Linux Programming Interface (2010) Michael Kerrisk
A Linux and UNIX System Programming Handbook

Neil Rieck
Waterloo, Ontario, Canada.