[Guest Blogger] 40 Years in the Field (Part 5)

Graham Jones (Surrey, British Columbia, IT Pro)

In part 4 I discussed some of the challenges of using FORTRAN 66 in process plant design. In part 5 we get into the guts of the numerical solution of large numbers of ill-conditioned (i.e. tough!) simultaneous equations. In order to describe the issues I am going to have to throw in some “math terms”, which I hope won’t throw you too much of a curve. Basically there are three different approaches/algorithms that can be applied: “repeated substitution”, the “secant method” and “Newton-Raphson”. There are many variants on these themes, often claiming to be superior for one reason or another. The first challenge is that since we don’t know where the solution lies, where do we start from? The idea of asking the user to enter possibly hundreds of data items as their best guess probably wasn’t going to fly. Ideally, a “self-starting” system would be the most appealing.

When I started the distillation project it wasn’t as if no programs existed around the company, but none of them were capable of being self-starting or of solving a wide range of problems. In fact there were programs using each of the basic methods mentioned above. I set about researching the characteristics of the different techniques, i.e. their strengths and weaknesses. “Repeated substitution” (RS) basically works by formulating the equations in such a way that, having made a guess, out pops the next guess, and so on until the last guess is very close to the last but one. A common variant is a damping formula that takes a percentage of the last value and (100 minus that percentage) of the new value. A simple example, finding the square root of a number, i.e. x*x = 9, might help. This can be re-written as x = 9/x. If I guess x = 2 then the new x is 4.5. Clearly in this case if I use 4.5 I will get 2 again, which illustrates one of the weaknesses of basic RS. However, if I say the new x = 0.5 * 2 + (1 – 0.5) * 4.5 = 3.25 and carry on this way I will eventually get close to 3, which is the answer. The big plus for RS is that with damping (some algorithms are adaptive, i.e. they attempt to calculate the optimal damping factor as the calculation proceeds) it is quite stable a long way from the solution and therefore is good for getting things rolling, i.e. it could be used for “self-starting”. The big disadvantage is that it can get painfully slow (or even oscillate) close to the solution.
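To make the damping idea concrete, here is a minimal FORTRAN sketch of damped repeated substitution for x*x = 9 (illustrative only, not the original code; the damping factor of 0.5, the tolerance and the iteration limit are arbitrary choices):

C     ILLUSTRATIVE SKETCH (NOT THE ORIGINAL CODE) OF DAMPED REPEATED
C     SUBSTITUTION FOR X*X = 9, REWRITTEN AS X = 9/X.  THE DAMPING
C     FACTOR W BLENDS THE OLD AND NEW GUESSES TO KILL THE OSCILLATION
C     DESCRIBED ABOVE.
      DOUBLE PRECISION X, XNEW, W, TOL
      INTEGER I
      W = 0.5D0
      TOL = 1.0D-10
      X = 2.0D0
      DO 10 I = 1, 100
C        DAMPED UPDATE: NEW X = W * OLD X + (1 - W) * (9 / OLD X)
         XNEW = W*X + (1.0D0 - W)*(9.0D0/X)
         WRITE (6, 20) I, XNEW
         IF (DABS(XNEW - X) .LT. TOL) GO TO 30
         X = XNEW
   10 CONTINUE
   30 STOP
   20 FORMAT (1X, 'ITERATION ', I3, '  X = ', F14.10)
      END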

The “secant method” is based upon taking two initial guesses and using a formula to calculate the next guess, and so on. The function might be written like this: F(x) = x*x – 9. The basic secant formula is x(n+1) = x(n) – F(x(n)) * (x(n) – x(n-1)) / (F(x(n)) – F(x(n-1))). The advantage of the secant method is that it approaches the solution faster, but it can be temperamental near the solution for difficult problems and unstable if you start too far from the solution. “Newton-Raphson” (N-R) uses either numerical or analytical derivatives of the function (very heavy duty differential calculus) to find a solution: x(n+1) = x(n) – F(x(n))/F'(x(n)). For example, for F(x) = x*x – 9 the derivative with respect to x is dF(x)/dx = 2x, which is the “slope” of the function at the value x. If we guess x = 2 then the N-R formula yields a next guess of 2 – (-5)/4 = 3.25, and the next guess would be 3.25 – 1.5625/6.5 = 3.0096. The advantage of N-R is clear. Given a decent starting point it homes in on the solution like “flies to a jam pot”. Equally, give it a poor starting point and chances are it will go off into the “wild blue yonder”, generating floating point underflows and overflows all day!
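Here is a short FORTRAN sketch of both iterations for F(x) = x*x – 9 (again illustrative, not the original code; starting guesses, tolerance and iteration limits are arbitrary). The secant loop needs two starting guesses and no derivative; the N-R loop needs one guess plus the analytical derivative 2x:

C     ILLUSTRATIVE SKETCH (NOT THE ORIGINAL DISTPACK CODE) OF THE
C     SECANT AND NEWTON-RAPHSON ITERATIONS FOR F(X) = X*X - 9,
C     WHOSE POSITIVE ROOT IS 3.
      DOUBLE PRECISION X0, X1, X2, F0, F1, X, TOL
      INTEGER I
      TOL = 1.0D-10
C     SECANT: TWO STARTING GUESSES, NO DERIVATIVE REQUIRED.
      X0 = 2.0D0
      X1 = 4.0D0
      DO 10 I = 1, 50
         F0 = X0*X0 - 9.0D0
         F1 = X1*X1 - 9.0D0
         X2 = X1 - F1*(X1 - X0)/(F1 - F0)
         IF (DABS(X2 - X1) .LT. TOL) GO TO 20
         X0 = X1
         X1 = X2
   10 CONTINUE
   20 WRITE (6, 50) X2
C     NEWTON-RAPHSON: ONE GUESS PLUS THE ANALYTICAL DERIVATIVE 2X.
      X = 2.0D0
      DO 30 I = 1, 50
         F0 = X*X - 9.0D0
         IF (DABS(F0) .LT. TOL) GO TO 40
         X = X - F0/(2.0D0*X)
   30 CONTINUE
   40 WRITE (6, 60) X
      STOP
   50 FORMAT (1X, 'SECANT ROOT         = ', F14.10)
   60 FORMAT (1X, 'NEWTON-RAPHSON ROOT = ', F14.10)
      END

(In real code you would also guard against F1 = F0 in the secant update, which is exactly where the “temperamental” behaviour bites.)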

So my conclusion was that we needed a package with all three methods, where the user was free to move between methods and to specify the number of iterations for each method. Each method was totally re-written using the techniques described in part 4 and some new options added. I ended up with a distillation “package” rather than a single application. There was one remaining problem to solve, and that was numerical precision. If you do a very large number of computations, especially ones involving the subtraction of two numbers of similar value, you can rapidly lose numerical precision, even to the point that you cannot find a stable solution. To some extent this could be mitigated by how the equations were formulated (i.e. try to minimize the number of subtractions). However, in the end it became clear that 64 bit and not 32 bit floating point calcs were required. I am going to have to rely on memory here (mine that is, which of course is superior and 256 bit :)) but I think 32 bit gives a minimum of 5 significant digits of accuracy and 64 bit gives 13, one being more than double the other due to the structure of floating point numbers in the computer (i.e. 1 exponent byte + 3 mantissa bytes versus 1 + 7). So the data array that I described in part 4 was declared as double precision.

On the other hand, I didn’t want to waste space in the array when it came to storing single precision or integer values. FORTRAN has an Equivalence statement which is quite handy. Basically you can declare an Integer array and then tell the compiler that its actual memory space is “equivalent” to part of some other array, i.e. they share the same memory, e.g. Double Precision Data(10000); Integer Ref(20); Equivalence (Ref(1), Data(50)). This means that the Integer array Ref starts at Data(50) and (with 4-byte integers) continues through to Data(59). Everything has to be on a double word boundary or some very interesting things can happen :).
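A short sketch of the Equivalence trick, assuming 4-byte integers and 8-byte double precision words (the array names and sizes follow the example above and are illustrative only):

C     SKETCH OF THE EQUIVALENCE TRICK, ASSUMING 4-BYTE INTEGERS AND
C     8-BYTE DOUBLE PRECISION WORDS.  THE 20-ELEMENT INTEGER ARRAY
C     REF OCCUPIES THE SAME 80 BYTES OF STORAGE AS DATA(50) THROUGH
C     DATA(59).
      DOUBLE PRECISION DATA(10000)
      INTEGER REF(20)
      EQUIVALENCE (REF(1), DATA(50))
C     STORE INTEGER REFERENCE VALUES; THEY LAND INSIDE DATA(50).
      REF(1) = 7
      REF(2) = 42
      WRITE (6, 10) REF(1), REF(2)
   10 FORMAT (1X, 'REF(1) = ', I5, '   REF(2) = ', I5)
      STOP
      END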

I said previously that with the new approach to memory management we could solve a wide range of problems. Well, that was true and kind of not true. The N-R method presents some particular problems because it involves the inversion of a very large matrix (called the Jacobian matrix). For certain problems it was not possible to fit all of it into memory. So, just as today we use disc space as virtual memory, I did the same thing. FORTRAN 66 has two file formats that you can write: sequential (self-explanatory) and direct access (DA). DA permits you to write blocks of data (or records) to a file in any order, giving each a block (or record) number, and to read them back in any order. The only stipulation is that, unlike sequential access, all of the blocks (records) must be the same size. Since the Jacobian matrix is a very sparse matrix which could be nicely split into equal size blocks, this was the perfect solution, though not a very fast one.
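Here is a sketch of blocking a matrix out to a direct access file. I have used FORTRAN 77 OPEN syntax for readability; FORTRAN 66 compilers declared DA files with vendor-specific statements (e.g. IBM’s DEFINE FILE), and the file name, record size and block size here are illustrative only:

C     SKETCH OF PAGING A LARGE MATRIX OUT TO A DIRECT ACCESS FILE.
C     EVERY RECORD MUST BE THE SAME LENGTH; EACH HOLDS ONE FIXED-SIZE
C     BLOCK OF THE MATRIX, AND RECORDS CAN BE WRITTEN AND READ BACK
C     IN ANY ORDER BY RECORD NUMBER.
      DOUBLE PRECISION BLOCK(100)
      INTEGER I, J, NREC
      NREC = 10
C     RECL IS IN PROCESSOR-DEPENDENT UNITS (COMMONLY BYTES):
C     100 DOUBLE PRECISION WORDS = 800 BYTES.
      OPEN (UNIT=1, FILE='JACOBIAN.DAT', ACCESS='DIRECT', RECL=800)
      DO 20 I = 1, NREC
         DO 10 J = 1, 100
            BLOCK(J) = I
   10    CONTINUE
         WRITE (1, REC=I) BLOCK
   20 CONTINUE
C     READ ONE BLOCK BACK, OUT OF SEQUENCE.
      READ (1, REC=7) BLOCK
      WRITE (6, 30) BLOCK(1)
      CLOSE (UNIT=1)
      STOP
   30 FORMAT (1X, 'FIRST WORD OF RECORD 7 = ', F6.1)
      END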

So I now had my working package, called Distpack. The original plan was for the product to have a 5 year life span. In fact it gave solid service for over 10 years all around the world before a better “mouse trap” inevitably came along, based on a product called Radfrac (Radical Fractionation – obviously its producers were modest types), which was a modified N-R method. I wanted to call my package Newdist but they wouldn’t let me (miserable lot) :). You have to get your jollies somewhere. I had another program that I wanted to call Unisecs and they wouldn’t let me use that either.

Around this time the next (r)evolution in computing was taking place: computer bureaus accessed over telephone lines using modems and teletype printers at an earth-shattering (drum roll) speed of 110 baud (i.e. 10 characters per second). My last task before I moved on to totally different things was to install Distpack on a CDC 7600 at a computer bureau. At last, only one installation, accessible right across the country! The computer bureau was located in London (200 miles from me) and they had agreed to convert the Assembler component to get the contract. I travelled down early in the morning on the train lugging 7 boxes of cards (14,000). It was a hot day and by the time I got there I had shed a few pounds. My contact asked me where I was planning to spend the night. “I am not spending the night. I am going home in the afternoon”, I replied. I can still hear the laughter! “Nobody has ever done anything like this in a day before”. Well, way back in one of the previous posts I mentioned writing in ANSI FORTRAN 66. This is where it paid off. We left the cards with the computer operator and went for lunch. When we came back it had compiled, linked and run all of the test examples without even touching the sides. I should have made him a bet. I was home for dinner. Boy, was I glad that my boss had insisted on working to the standard, even though it had been painful at times.

In part 6 I move on to an entirely different but equally interesting part of my career.

Cheers
Graham J.

  • Do you know what is truly fascinating about this? Just imagine how much we take for granted in the hardware and power we have today.

    Three Dead Trolls in a Baggie really weren't kidding: the computers that launched us to the moon had barely any RAM in them.

    This is before full-fledged editors, before even basic line editors.

    A lot of this is before shared computers and resources. The code didn't load off a "simple magnetic disk". Paper cards! I remember studying this in school, in Data Processing, for a job in high school years ago. Just imagine losing one card and trying to trace down the problem!

    Graham, I both envy and dread what you had to go through! The challenges you faced with effectively "Rocks and Scissors" compared to modern day setups. It blows the mind sometimes.

    I always thought the "Rise and Fall of Commodore" was some excellent reading, since it gives you a pretty good glimpse of the early days. (Maybe not entirely accurate; I'm certain there is a little coloration.)

    One word, sir. WOW!