Folding@home is biological research based upon the science of Molecular Dynamics where molecular chemistry and
mathematics are combined in computer-based models to predict how protein molecules might fold (or misfold) in three spatial
dimensions over time.
When I first heard about this, I recalled the science-fiction magnum opus by Isaac Asimov colloquially known as The Foundation Trilogy, which introduced a fictional branch of science called psychohistory, where statistics, history and sociology are combined in computer-based models to predict the future behavior of very large human populations.
Years ago I became infected with an Asimov-inspired
optimism about humanity's future and have since felt the need to promote it. While Folding@home will not
cure my "infection of optimism", I am convinced Dr. Isaac
Asimov (who received a Ph.D. in Biochemistry from Columbia in 1948 then was employed as a Professor of Biochemistry at the
Boston University School of Medicine until 1958, when his publishing workload became too large) would have been fascinated by something like
Folding@home.
I was considering a financial charitable donation to Folding@home when it occurred to me that my
money would be better spent by:
Making a knowledgeable charitable donation to all of humanity by increasing my Folding@home
computations (which will advance medical discoveries along with associated pharmaceutical treatments, thus lengthening human
life). I was already 'folding' on a half-dozen computers anyway, so all I needed to do was purchase video graphics cards
which would increase my computational throughput a thousandfold.
Then convincing others (like you) to follow my example. My solitary folding efforts will have
little effect on humanity's future. Together we can make a real difference. (read on)
Dr. Asimov: I am dedicating this website to you and your publishing. You have greatly influenced my life.
Misfolded proteins have been associated with numerous diseases and age-related illnesses. However, proteins are so much
larger and so much more complicated than small molecules that it is not practical to begin a chemical experiment without first
providing researchers with hints about where to look and what to look for. Since the behavior
of atoms-in-molecules (Computational Chemistry)
as well as atoms-between-molecules (Molecular
Dynamics) can be modeled, it makes more sense to begin with a computer analysis. Permitted configurations can then be
passed on to experimental researchers.
Cooking an egg causes the clear protein (albumen) to unfold into long strings which can then intertwine into a tangled network
which stiffens and scatters light (appears white). No chemical change has occurred, but taste, volume and color have all been altered.
Click here to read a short "protein article" by Isaac Asimov published in 1993 shortly after his death.
Using the most powerful single core processor (CPU) available today, simulating the folding possibilities of one large protein
molecule for one millisecond of chemical time might require one million days (2,737
years) of computational time. However, if the problem is sliced up and assigned to 100,000 personal computers over the internet,
the computational time drops to ten days. Convincing friends, relatives, and employers to do the same
would reduce the computational time requirement even further.
required simulation time
1 billion days (2.7 M-years)
1 million days (2,737 years)
1 thousand days (2.73 years)
Additional information for science + technology nerds
Special-purpose research computers like IBM's BlueGene
employ 10 to 20 thousand processors (CPUs) joined by many kilometers of optical fiber to solve problems. IBM's
Roadrunner is a similar technology employing both CPUs and special non-graphics GPUs that IBM refers to as Cell processors.
Assuming that each GPU has 1,000 streaming processors, this leaves us with the equivalent of 60 million processors.
This means that the original million-day protein simulation problem could theoretically be
completed in (1,000,000 / 60,000,000) roughly 0.017 days, or about 24 minutes. But since there are many more
proteins to study than there were genes to sequence, humanity could be at this for decades. Adding your computers to Folding@home will permanently advance humanity's progress in protein
research and medicine.
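If you want to check the arithmetic yourself, here is a minimal sketch using the bc calculator. It simply restates the round numbers assumed above (one million single-core days per millisecond of chemical time, 100,000 volunteer PCs, 60 million streaming processors):
echo "1000000 / 100000" | bc              # 10 days when the work is split across 100,000 PCs
echo "scale=4; 1000000 / 60000000" | bc   # ~0.017 days across 60 million streaming processors
echo "0.0167 * 24 * 60" | bc              # ~24 minutes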
When the Human Genome Project (to study
human DNA) was being planned, it was thought that the task might require 100 years. However, technological change in the areas
of computers, robotic sequencers and the internet (used to coordinate the activities of a large number of universities, where each
one was assigned a small piece of the problem) allowed the Human Genome Project to publish results after only 15 years:
roughly a seven-fold increase in speed.
Distributed computing projects like Folding@home and BOINC
have only been possible since 1995:
the World Wide Web (proposed in 1989 to solve a document-sharing
problem among scientists at CERN in Geneva, then implemented in 1991) began to
make the internet both popular and ubiquitous.
CISC was replaced with RISC, which evolved into superscalar RISC and then multicore designs.
Vector processing became ubiquitous.
Processor technology was traditionally defined like this:
vector processing (also known as matrix processing) usually involves only two data points (velocity
and direction would be one simple example of a vector data point)
tensor processing is the name given to any math involving three or more data points (this is not new
in computing; climate modelling began with weather-prediction trials on ENIAC)
while it is possible to do floating point (FP) math on integer-only CPUs, adding FP logic decreased FP
processing time by an order of magnitude (x10) or more. Similarly, while it is possible to do vector processing
(VP) on a scalar machine, adding VP logic can decrease VP processing time by 2 to 3 orders of magnitude (x100 to
x1000). Certain modern applications (climate models, artificial intelligence, machine learning, triple-A video games) demand it.
Minicomputer / Workstation
1989: DEC adds vector processing
to their Rigel uVAX chip
1989: DEC adds optional vector
processing to VAX-6000 model 400 minicomputer
But GPUs (graphics processing units) take vector processing to a whole new level. Why? A $200.00
graphics card now equips your system with 1500-2000 streaming processors and 2-4 GB of additional high-speed memory.
In the 2013 book "CUDA Programming", the author provides evidence that any modern high-powered PC
equipped with one or more graphics cards (if your motherboard supports more than one) can outperform any supercomputer listed 12
years ago on www.top500.org
Both the PlayStation 4 as well as the XBOX One employ an 8-core APU (Accelerated
Processing Unit) made by AMD code-named
Jaguar. What is an APU? It is a multi-core CPU with an embedded Graphics Chip Engine. Placing both systems
on the same silicon die eliminates the signal delay associated with sending signals over an external bus.
I've been in the computer industry (both hardware and software) for a long while. Computers only began
to get really interesting again this side of 2007 with the releases of CUDA, OpenCL, etc.
Distributed computing projects like Folding@home and BOINC
have only been practical since 2005 when the CPUs in personal computers began to out-perform mini-computers and enterprise
servers. Partly because...
AMD added 64-bit support to their x86 processor technology, calling it x86-64 (Linux
distros refer to this as x86_64 or amd64)
Intel followed suit calling their 64-bit extension technology EM64T
DDR2 memory became popular (this dynamic memory is capable of roughly twice the transfer rate of the original DDR)
Intel added DDR2 support to their Pentium 4 processor line
AMD added DDR2 support to their Athlon 64 processor line
DDR3 memory became popular (this dynamic memory is capable of roughly twice the transfer rate of DDR2, at a lower voltage)
Since then, the following list of technological improvements has made computers both faster and less expensive:
Intel's abandonment of NetBurst, which meant a return to shorter
instruction pipelines starting with Core2 (comment: AMD never went to longer pipelines; a long pipeline is only efficient when running a
static CPU benchmark for marketing purposes - not when running code in the real world, where i/o events interrupt the primary
foreground task (science in our case))
multi-core chips (each core is a fully functional CPU) from all major manufacturers
continued development of optional graphics cards where CPUs would off-load much work to a graphics co-processor system
(each card appears as hundreds to thousands of streaming processors)
ATI Radeon graphics cards (ATI was acquired by AMD in 2006)
NVIDIA GeForce graphics cards
development of high performance "graphics" memory technology (e.g. GDDR3, GDDR4, GDDR5)
to bypass processing stalls that occur when streaming processors outrun ordinary main memory.
Note that GDDR5 is used as main memory in the PlayStation 4
(PS4). While standalone PCs were built to host an optional graphics card, it seems that Sony has flipped things so
that their graphics system is hosting an 8-core CPU. These hybrids go by the name APU.
shifting analysis from host CPU cores (usually 2-4) to thousands of streaming processors
HP preferred Itanium2 (jointly developed by HP and Intel) so
announced their intention to gracefully shut down Alpha
Alpha technology (which included CSI) was immediately sold to Intel
approximately 300 Alpha engineers were transferred to Intel between 2002 and 2004
CSI morphed into QPI (some industry watchers say that Intel ignored CSI until AMD announced they would adopt a
new industry-supported technology known as HyperTransport)
The remainder of the industry went with the non-proprietary HyperTransport,
which has been described as a multi-point Ethernet for use within a computer system.
As is true in any "supply vs. demand" scenario, most consumers didn't need the additional computing power, which meant that
chip manufacturers had to drop their prices just to keep the computing marketplace moving. This was good news for people
setting up "folding farms". Something similar is happening today with computer systems since John Q. Public is shifting
from "towers and desktops" to "laptops and pads". This is causing the price of towers and graphics cards to plummet ever
lower. You just can't beat the price-performance ratio of a Core-i7 motherboard hosting an NVIDIA graphics card.
Shifting from brute-force "Chemical Equilibrium" algorithms to techniques involving Bayesian
statistics and Markov Models will enable some exponential improvements in simulation efficiency.
This diagram depicts an
H2O molecule loosely
connected to four others
After perusing the periodic table of the
elements for a moment you will soon realize that the molecular
mass of water (H2O) is ~18, which is lighter than many gases. So why is H2O in a liquid state at
room temperature while other, slightly heavier, molecules take the form of a gas?
Ethanol (a liquid) has one more atom of Oxygen than Ethane (a gas). How can this small difference change the state?
[Table: molecular mass and state at room temperature for several small molecules]
In the case of an H2O (water) molecule, two hydrogen atoms are covalently bound to one oxygen atom, and repulsion from the oxygen atom's lone electron pairs
bends the water molecule into a V shape (according to
VSEPR Theory). At the mid-point of the bend, a partial negative electrical charge on the oxygen atom is exposed to the
world, which allows a weak connection to a partially positive hydrogen atom of a neighboring H2O molecule (water molecules weakly
sticking to each other form a liquid). These weak intermolecular attractions are often grouped under the name Van
der Waals forces; in water's case they are strong enough to earn their own name, hydrogen bonds.
Here are the molecular schematic diagrams of Ethane (symmetrical) and Ethanol (asymmetrical). Notice that
Oxygen-Hydrogen kink dangling to the right of Ethanol? That kink is not much different from a similar one associated with
water. That is the location where a Van der Waals force weakly connects with an adjacent ethanol molecule (not shown). So
it should be no surprise that Ethane at STP (Standard Temperature and Pressure) is a gas while Ethanol is a liquid.
    H   H                    H   H
    |   |                    |   |
H - C - C - H            H - C - C - O - H
    |   |                    |   |
    H   H                    H   H
     (Ethane)                 (Ethanol)
Van der Waals did all his computations with pencil and paper long before the first computer was
invented; and this was only possible because the molecules in question were small and few.
Chemistry Caveat: The Molecular Table above was only meant to get you thinking. Now inspect this LARGER
periodic table of the elements where the color
of the atomic number indicates whether the natural state is solid or gaseous:
all elements in column 1 (except hydrogen) are naturally solid
all elements in column 8 (helium to radon) are naturally gaseous
half the elements in row 2, starting with Lithium (atomic number 3) and ending with Carbon (atomic number 6),
as well as three quarters of row 3, starting with Sodium (atomic number 11) and ending with Sulphur (atomic number 16),
are naturally solid
I will leave it to you to determine why
Proteins come in many shapes and sizes. Here is a very short list:
FAH Targeted Diseases
This "folding knowledge" will be used to develop new drugs for treating diseases such as:
ALS ("Amyotrophic Lateral Sclerosis" a.k.a. "Lou Gehrig's Disease")
Alzheimer's Disease
Plaques, which contain misfolded peptides called amyloid beta, are formed in the brain many years before the signs of
this disease are observed. Together, these plaques and neurofibrillary tangles form the pathological hallmarks of the disease.
Cancer & p53
P53 is the suicide gene involved in apoptosis (programmed cell death - something
necessary in order for your immune system to kill cancer cells)
CJD (Creutzfeldt-Jakob Disease)
the human variation of mad cow disease
Huntington's disease is caused by a trinucleotide repeat expansion in the Huntingtin (Htt) gene and is one of
several polyglutamine (or PolyQ) diseases. This expansion produces an altered form of the Htt protein, mutant Huntingtin (mHtt),
which results in neuronal cell death in select areas of the brain. Huntington's disease is a terminal illness.
Normal bone growth is a yin-yang balance between osteoclasts and osteoblasts.
Osteogenesis Imperfecta occurs when bone grows without sufficient or healthy collagen
The mechanism by which the brain cells in Parkinson's are lost may consist of an abnormal accumulation of the protein
alpha-synuclein bound to ubiquitin in the damaged cells.
Ribosome & antibiotics
A ribosome is a protein producing organelle found inside each cell
Note: AMD acquired Canadian company ATI Technologies
in 2006 but continued to use the ATI name into 2009
AMD related problems in 2012
Time and technology never stand still and this applies to graphics cards. You can imagine the difficulty researchers
experience while attempting to keep up with the continual introduction of new products from hardware manufacturers. So for
the past half-decade the computer industry has been working on heterogeneous computing technologies (OpenCL,
CUDA, PhysX, DirectCompute,
etc.) for doing science on graphics cards. Stanford's folding software requires OpenCL (Open Computing
Language), not to be confused with OpenGL (Open Graphics Library).
Announcement: Stanford to drop GPU2 cards made by AMD
March-2012: Stanford University announced their intention to drop support for AMD/ATI GPU2 cards.
AMD is no longer supporting OpenCL on Windows platforms below Windows-Vista SP2,
which means my OpenCL driver was deleted during the driver upgrade.
Rolling back the driver will only buy you a small amount of time since Stanford is shifting from the GPU2 core (fahcore_11)
to the newer GPU3 core (fahcore_16).
All the cards above HD-4xxx support GPU3 (but not on Windows-XP or lower)
AMD still supports OpenCL on Linux (OpenSUSE, Ubuntu, RedHat, Fedora, CentOS)
Folding with an NVIDIA graphics card (2012-2016)
My Personal Experience Doing GPU-based Science:
I now run a mixture of systems employing graphics cards from both AMD and NVIDIA.
The HD-6670 from AMD
The GTX-560 from NVIDIA
I was forced to buy these cards when AMD removed OpenCL support from their Windows-XP device driver in the Spring of 2012.
The price of the GTX-560 is approximately twice that of the HD-6670, but it
appears to do 10 times more science.
It appears that the best NVIDIA bang-for-the-buck comes from a card with a model prefix of GTX and a
model number ending in 60
In 2016 many machines were unable to get work units for GTX-560 on 32-bit versions of Windows-XP. Here is what Stanford
published on 2016-07-03:
FAH tends to push the limits of science and that means that some things can no longer be done with Windows-XP or
32-bit CPUs. At some point all new projects will require 64-bit
and all new projects will require Windows7 or above. The studies of "easy" proteins have been or soon will be completed. I
can't predict when that will happen and I doubt anybody else can.
So it probably makes little sense to continue working with 32-bit OSs. If your hardware is 64-bit capable you might wish to
shift to a 64-bit version of Linux (see below)
As of 2016 I now recommend the GTX-960 (or any NVIDIA card ending in 60)
caveat: GPU folding on CentOS-7 failed 2021-12 so jump here to see the fix
I found two PCs in my basement with 64-bit CPUs that were running 32-bit operating systems (Windows-XP and Windows-Vista).
Unfortunately for me, neither were eligible for Microsoft's free upgrade to Windows-10; and I had no intention of buying a new
64-bit OS just for this. So I replaced these old Windows instances with CentOS-7.3 and was able to get each one folding with very
little difficulty. Here are some tips for people who are not Linux gurus:
CentOS-7.7 and CentOS-8.0 were released days apart in September 2019 (perhaps due to the
invisible hand of IBM?)
Downloads from the top of the download page preferentially offer CentOS-8, which is too large (> 4.7 GB) to write to a
single-layer writable DVD (but I have had some success with dual-layer media)
So read to the bottom of the download page then download any version of CentOS-7 that is smaller than 4.7 GB
Transfer to bootable media (choose one of the following)
copy the ISO image to a DVD-writer
use rufus to format a USB stick (capacity must be >= 5 GB) then copy the ISO
image to the USB stick (a dd-based alternative for Linux users is sketched below). Caveat: PCs have transitioned from BIOS to UEFI. Older BIOS-based systems do not support booting
from a USB stick (strange, because you can connect a USB-based DVD-drive then boot from that)
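If you already have a Linux box handy, dd can substitute for rufus. This is a sketch only: the ISO file name and /dev/sdX are placeholders, and dd will silently overwrite whatever device you point it at, so verify the device name with lsblk first.
lsblk                                         # identify the USB stick (its size is the giveaway)
sudo dd if=CentOS-7-x86_64-DVD.iso of=/dev/sdX bs=4M status=progress oflag=sync
# write to the whole device (sdX), not to a partition (sdX1)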
Boot-install CentOS-7 on the 64-bit CPU
pick and install a Linux recipe that supports a GUI (I usually choose "Server with GUI" and always
add on development tools; the equivalent command-line group installs are sketched after this list).
If prompted to choose between the 'gnome gui' and the 'kde gui', newbies should always pick gnome.
You will need development tools if installing a noarch RPM that requires a C compiler
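For reference, the same two choices can also be made after the installer has finished. A minimal sketch, assuming the stock CentOS-7 group names:
sudo yum groupinstall "Server with GUI"      # the desktop recipe mentioned above
sudo yum groupinstall "Development Tools"    # compilers and friends for building drivers and RPMs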
My machines hosted NVIDIA graphics cards (GTX-560 and GTX-960 respectively) so these systems required correct NVIDIA drivers
in order to do GPU-based folding. Why? You will need CUDA and/or OpenCL software, which ships with the vendor's driver package rather than with the graphics card itself.
If you are not logged in as root then you must begin every command with sudo (Super User DO)
Now jump past the CentOS-8 section
Folding with Rocky Linux (2022)
Both of my CentOS-7 machines stopped folding in 2021-12. Apparently this is due to several changes by the FAH
research team. First off, their code requires a newer version of the library glibc, which is available on other distros but not
CentOS-7 (so you need to upgrade to CentOS-8 or change to something else). Secondly, changes to their GPU core now require
OpenCL-2.0 or higher. This means that my old GTX-560 is only useful as a video adapter (but not as a streaming processor). I also
noticed another blurb about double-precision FP math which definitely rules out my GTX-560. For the GTX-960 I followed the
CentOS-7 instructions (100 lines above) but the NVIDIA driver from elrepo did not contain any OpenCL or CUDA support, so I
installed the driver provided by NVIDIA (a rough sketch of that install follows below).
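For anyone repeating this, the sequence looked roughly like the following. Treat it as a sketch only: the driver file name changes with every release, the nouveau-blacklisting step can differ between distros, and the exact package names below are assumptions.
sudo yum install gcc make kernel-devel kernel-headers        # the NVIDIA installer builds a kernel module
echo "blacklist nouveau" | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
sudo dracut --force                                          # rebuild the initramfs without nouveau
sudo reboot
# after rebooting to a text console, run the installer downloaded from nvidia.com
# (the file name below is only an example)
sudo sh ./NVIDIA-Linux-x86_64-xxx.xx.run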
Updating the NVIDIA driver on CentOS-8 for a GTX-960 card
Install CentOS-8 (any version) then type "yum update" to bring it up to the latest level
caveat: I do not like CentOS-8 on these machines. They were perky but now they crawl (perhaps
they are tuned to require more memory). So I replaced CentOS-8 with Rocky
Linux and now the machines are perky again.
DO NOT use elrepo to update the nvidia drivers (as of 2022-01-16 they are missing support for OpenCL and CUDA)
caveat: As of this writing, your CentOS-7 system most likely depends upon some version of Python2. Python3 can be
easily added to the system but do not remove or disable Python2 because this will break certain system utilities like
yum or firewall-cmd to name two of many.
starting the client with the --configure switch will generate an XML configuration file
starting the client with the --config switch will let you test an XML configuration file
starting the client with the --help switch will display more help than you ever dreamed
Caveat: just installing the FAH-Client will cause it to be installed as a service and then start CPU
folding (which is probably not what you want). If you want to enable GPU-based folding then you will need to stop the
client, modify the config file, test the config file, then restart the client. Here are some commands to help out.
On Windows, stopped services may only be deleted from a command prompt (DOS box) like so:
sc query neil369
sc delete neil369
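On the Linux side the equivalent dance looks roughly like this. A sketch only: the service name (FAHClient vs fahclient) and the config path depend on which installer package you used, so adjust accordingly.
sudo systemctl stop FAHClient                            # stop the CPU folding that started automatically
sudo FAHClient --configure                               # regenerate the XML configuration (enable the GPU here)
sudo FAHClient --config /etc/fahclient/config.xml        # test-drive the new configuration interactively
sudo systemctl start FAHClient                           # restart the service with the tested configuration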
BOINC (Berkeley Open Infrastructure for Network Computing)
BOINC (Berkeley Open Infrastructure for Network Computing) is a science framework
in which you can support one, or more, projects of choice.
If you are unable to pick a single cause then pick several because the BOINC manager will switch between science clients
every hour (this interval is adjustable). In my case I actively support POEM, Rosetta, and Docking.
http://boinc.bakerlab.org/rosetta/ is the home of Rosetta@home
which operates through the BOINC framework. Their graphics screen-saver is one very effective way to help visualize "what
molecular dynamics is all about". All science teachers must show this to their students.
I'm sure everyone already knows that a computer "rendering beautiful graphical displays" is doing less science. That
said, humans are visual creatures and graphical displays have their place in our society. Except for some public
locations, all clients should be running in non-graphical mode so that more system resources are diverted to protein
research.
Five questions for Rosetta@home: How Rosetta@home helps cure cancer, AIDS, Alzheimer's, and more
Some people may prefer to use the generic BOINC client from Berkeley then install the WCG plugin from that application;
you will still need to create your WCG account at the WCG site (a command-line sketch follows below).
You only need to do this if you want to cycle your BOINC client between multiple projects, of which WCG is just one.
If you only want to run the WCG project (which itself switches between IBM-sponsored science projects) then it probably
makes more sense to use the WCG-specific client.
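For Linux users who want the generic client without any GUI, something like the following should work. A sketch only: the package and service names vary by distro, and YOUR_ACCOUNT_KEY stands in for the account key shown on your WCG (or other project) account page.
sudo yum install boinc-client                 # available from EPEL on CentOS/RHEL-type systems
sudo systemctl enable --now boinc-client      # start the BOINC core client as a service
boinccmd --project_attach https://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY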
https://en.wikipedia.org/wiki/World_community_grid (WCG) is
an effort to create the world's largest public computing grid to tackle scientific research projects that benefit humanity.
Launched 2004-11-16, it is funded and operated by IBM, with client software currently available for Windows, Linux,
Mac-OS-X and FreeBSD operating systems. IBM encourages their employees and customers to participate.
Personal Comment: I wonder why HP (Hewlett-Packard) has not followed IBM's lead. Up until now I
always thought of IBM as the template of uber-capitalism, but it seems that the title of "king of profit by the elimination of
seemingly superfluous expenses" goes to HP. Don't they realize that IBM's effort in this area is funded under IBM's advertising
budget? Just like IBM's 1990s foray into chess-playing systems (e.g. Deep Blue) led to increased sales as well as share prices,
one day IBM will be able to say "IBM contributed to treatments for human diseases including cancer". IBM's actions in this area
reinforce the public's association of IBM with information processing.
The Encyclopedia of DNA Elements (ENCODE) Consortium is an
international collaboration of research groups funded by the US National Human Genome Research Institute (NHGRI) and launched in September 2003.
The goal of ENCODE is to build a comprehensive parts list of functional elements in the human
genome, including elements that act at the protein and RNA levels, and regulatory elements that control the cells and circumstances in
which a gene is active. It is one of the most critical projects undertaken by NHGRI after it completed
the successful Human Genome
Project. All data generated in the course of the project are released rapidly into public databases.
On 5 September 2012, initial results of the project were released in a coordinated set of 30 papers published in the
journals Nature (6 publications), Genome Biology (18 papers) and Genome Research
(6 papers). These publications combine to show that approximately 20% of noncoding DNA in the human genome is functional while an additional 60% is transcribed with no
known function. Much of this functional non-coding DNA is involved in the regulation of the
expression of coding genes. Furthermore, the expression of each coding gene is controlled by multiple regulatory sites
located both near and distant from the gene. These results demonstrate that gene regulation is far more complex than previously believed.
http://www.technologyreview.com/view/510571/the-million-core-problem/ The Million-Core Problem - Stanford
researchers break a supercomputing barrier.
quote: A team of Stanford researchers has broken a record in supercomputing, using a million cores to model a complex
fluid dynamics problem. The computer is a newly installed Sequoia IBM Bluegene/Q system at the Lawrence Livermore National
Laboratory. Sequoia has 1,572,864 processors, reports Andrew Myers of Stanford Engineering, and 1.6 petabytes of memory.