Flow-Based Programming - Chap. II
Higher-Level Languages, 4GLs and CASE


This chapter has been excerpted from the book "Flow-Based Programming: A New Approach to Application Development" (Van Nostrand Reinhold, 1994), by J. Paul Morrison.

A second edition (2010) is now available from CreateSpace eStore and Amazon.com.

The 2nd edition is also available in e-book format from Kindle (Kindle format) and Lulu (epub format).


For definitions of FBP terms, see Glossary.                                                                                       

Material from book starts here:

In the Prologue I alluded to the concept of compiler compilers. At the time I was most interested in them (the mid-60s), the accepted wisdom was that more sophisticated compilers were the answer to the productivity problem. It was clear to everyone that an expression like

W = (X + Y) / (Z + 3)

was infinitely superior to the machine language equivalent, which might look something like the following:

LOAD  Z
ADD   3 
STORE TEMP
LOAD  X
ADD   Y
DIV   TEMP
STORE W

This is a made-up machine with a single accumulator, but you get the general idea. One of the reasons an expression like the one above could act so effectively as a bridge between humans and computers was that it was syntactically clean, and based on a solid, well-understood, mathematical foundation - namely arithmetic... with the exception of a rather strange use of the equals sign!

During this period it was not unreasonable to expect that this expressive power could be extended into other functions that machines needed to perform. This seemed to be supported by the fact that, although I/O was getting more complex to program at the machine language level, operating systems were coming on stream which still allowed the programmer to write essentially one statement to execute a simple I/O operation. On the IBM 1401 a Read Card command was a single instruction, just one character long! MVS's GET, on the other hand, might cause hundreds or even thousands of machine language instructions to be executed, but the programmer still basically wrote one statement.

On this foundation, we started getting one programming language after another: COBOL was going to be the language that enabled the person in the street, or at least managers of programmers, to do programming! Algol became a program-level documentation standard for algorithms. IBM developed PL/I (I worked on one rather short-lived version of that); people developed compilers in their basements; graduate students wrote compilers for their theses at universities (they still do). There was always the feeling that one of these languages was going to be the key to unlocking the productivity that we all felt was innate in programmers. While it is true that the science of compiler writing advanced by leaps and bounds, by and large programmer productivity (at least in business application development) did not go up, or if it did, it soon plateaued at a new level.

COBOL and PL/I were general-purpose compilers. There were also many languages specialized for certain jobs: simulation languages, pattern-matching languages, report generation languages. And let us not forget APL - an extremely powerful language, which also opened up arcane areas of mathematics like matrix handling for those of us who had never quite understood them in school. Being able to do a matrix multiply in 5 key-strokes (A+.×B) is still a level of expressiveness that I think few programming languages will ever be able to match! Its combination of sheer power in the mathematical area and the fact that there was no compile-time typing allowed one to get interesting programs up and running extremely fast. I once read a comment in a mathematical paper that the author didn't think the work would have been possible without APL - and I believe him. But although APL was used a certain amount in business for calculation-oriented programming, especially in banking and insurance, and also as a base for some of the early query-type languages, it did little for most commercial programming, which plodded along using COBOL, and more recently PL/I, PASCAL, BASIC...
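
To give a feel for the expressiveness gap, here is the same inner product spelled out long-hand in a modern scalar language - a rough Python sketch added purely for illustration; the matrices and their dimensions are made up:

def matmul(A, B):
    # Everything below is what APL expresses as A+.×B
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]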

APL also illustrates in a different way the importance of minimizing the "gap" between an idea and its expression - like many of the most popular programming languages, it is interpretive, which means that you can enter a program and then immediately run it, without having to go through a compile and/or link step. Granted, this is more perception than actual fact (one can build compile steps so fast that the user doesn't perceive them as a barrier), but the fact remains that some very awkward languages have become immensely popular because they did not require a compile step. A lot of the CPU cycles used in the industry on IBM machines are spent running CMS EXECs or TSO CLISTs. Both of these are simple languages which let you stitch commands together into runnable "programs". Both are yielding nowadays to Mike Cowlishaw's REXX, which occupies the same niche, but provides a vastly more powerful set of language constructs, to the point where one can build pretty complex programs with it. REXX is also interpretive, so it too allows one to change a program and see the results of that change very quickly.

Why didn't languages (even the interpretive ones) improve productivity more than they did? I will be exploring this more in the next chapter, but one thing that I noticed fairly early on was that they didn't do much for logic (IF, THEN, ELSE, DO WHILE, etc.). For many kinds of business programming, what pushes up the development time is the logic - there actually may not be much in the way of straight calculations. A logical choice can be thought of as a certain amount of work, whether you write it like this:

IF x > 2
THEN
  result = 1
ELSE
  result = 2
ENDIF

or like this:

result = (x>2) ? 1 : 2;

or even draw it as a Nassi-Shneiderman or Chapin chart. One can argue that, because both of the above phrasings involve one binary decision, they involve approximately the same amount of mental work. The more complex the logic, the more difficult the coding. In fact, there is a complexity measure used quite widely in the industry, McCabe's cyclomatic complexity measure, which is based very directly on the number of binary decisions in a program. However, in our work we have discovered that the amount of logic in conventional programming is reducible, because much of it has to do with the synchronization of data rather than with business logic. Since FBP eliminates the need for a lot of this synchronization logic, it really does reduce the amount of logic in programs.
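
(For the record, McCabe's measure for a single module is E - N + 2, where E and N are the numbers of edges and nodes in its control-flow graph; for well-structured code this works out to one more than the number of binary decisions, so either phrasing of the example above scores 2.)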

A number of writers have made the point that productivity is only improved if you can reduce the number of statements needed to represent an idea. Put another way, you have to reduce the "gap" between the language of business and the language of computers. What is the lower limit on the number of bits or keystrokes to represent an idea, though? If a condition and its branches comprise one "idea", then there is a lower limit to how compactly it can be represented. If it is part of a greater "idea", then there is a hope of representing it more compactly. From Information Theory we learn that the number of bits needed to represent something is the log of the number of alternatives you have to choose between. If something is always true, there is no choice, so it doesn't need any bits to represent it. If you have 8 choices, then it takes 3 (i.e. log of 8 to the base 2) bits to represent it. In programming terms: if you only allow an individual two marital states, you only need 1 bit (log of 2 to the base 2). If you want to support a wider "universe", where people may be single, married, divorced, widowed, or common law spouses, that is five alternatives, so you need 3 bits (2 bits are not enough as they only allow 4 choices). And they're not mutually exclusive, so you could need even more bits!
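
The arithmetic is easy enough to check mechanically - a throwaway Python fragment, added here purely to illustrate the formula, using the figures from the paragraph above:

import math

def bits_needed(alternatives):
    # log2 of the number of equally likely alternatives, rounded up to whole bits
    return math.ceil(math.log2(alternatives))

print(bits_needed(2))   # 1 bit  - two marital states
print(bits_needed(5))   # 3 bits - single, married, divorced, widowed, common-law
print(bits_needed(8))   # 3 bits - log2(8) exactly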

This in turn leads to our next idea: one way to reduce the information requirements of a program is to pick options from a more limited universe. However, the user has to be willing to live within this more circumscribed universe. I remember an accounting package that was developed in Western Canada, which defined very tightly the universe in which its customers were supposed to operate. Within that universe, it provided quite a lot of function. I believe that its main file records were always 139 bytes long (or a similar figure), and you couldn't change them. If you asked them about it, the developers' reaction would be: why would anyone want to? Somewhat predictably, it didn't catch on because many customers felt it was too limited. The example that sticks in my memory was that of one customer who wanted to change a report title and had to be told it couldn't be done. Again, why would anyone feel that was so important? Well, it seems that customers, especially big ones, tend to feel that report headers should look the way they want, rather than the way the package says they are going to look. Our experience was that smaller customers might be willing to adapt their business to the program, especially if you could convince them that you understood it better than they did, but bigger customers expected the program to adapt to their business. And it was really quite a powerful package for the price! I learned a lot from that, and the main thing was that a vendor can provide standard components, but the customer has to be able to write custom components as well. Even if it costs more to do the latter, it is the customer's choice, not the vendor's. And that of course means that the customer must be able to visualize what the tool is doing. This is also related to the principle of Open Architecture: no matter how impressive a tool is, if it can't talk to other tools, it isn't going to survive over the long haul (paraphrased from Wayne Stevens).

The above information-theoretic concept is at the root of what are now called 4GLs (4th Generation Languages). These provide more productivity by taking advantage of frequently appearing application patterns, e.g. interactive applications. If you are writing applications to run in an interactive system, you know that you are going to keep running into patterns like:

  • read a key entered by the user onto a screen

  • get the correct record, or generate a message if the record does not exist

  • display selected fields from that record on the screen.

Another one (very often the next one) might be:

  • display the fields of a record

  • determine which ones were changed by the user

  • check for incorrect formats, values, etc.

  • if everything is OK,
    write the updated record back

  • else
    display the appropriate error message(s)

Another very common pattern (especially in what is called "decision support" or "decision assist" type applications) occurs when a list is presented on the screen and the user can select one of the items or scroll up or down through the list (the list may not all fit on one screen). Some systems allow more than one item to be selected, in which case the selected items are then processed in sequence.

These patterns recur so frequently that it makes a lot of sense to provide skeletons for the different scenarios, together with declarative (non-procedural) ways for the programmer to fill in the information required to flesh them out.
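
To make the idea a little more concrete, here is one way the first of the patterns above might be parametrized - a rough Python sketch, not the syntax of any actual 4GL, with the file and field names invented for the example:

# A declarative "fill in the blanks" description of the inquiry pattern:
# read a key, fetch the record, display selected fields (or an error message).
INQUIRY_SPEC = {
    "file":          "CUSTOMER",
    "display":       ["NAME", "ADDRESS", "BALANCE"],
    "not_found_msg": "Customer {key} not on file",
}

def run_inquiry(spec, key, file_store):
    # Generic driver supplied once; the programmer fills in only the spec above.
    record = file_store.get(spec["file"], {}).get(key)
    if record is None:
        return spec["not_found_msg"].format(key=key)
    return {field: record[field] for field in spec["display"]}

# Tiny in-memory stand-in for the files, just to show the driver running.
files = {"CUSTOMER": {"1001": {"NAME": "J. Smith", "ADDRESS": "Winnipeg", "BALANCE": 42.50}}}
print(run_inquiry(INQUIRY_SPEC, "1001", files))   # selected fields from the record
print(run_inquiry(INQUIRY_SPEC, "9999", files))   # the "not on file" message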

The attractiveness of 4GLs has also been enhanced by the unattractiveness of IBM's standard screen definition facilities! The screen definition languages for both IMS and CICS are coded up using S/370 Assembler macros (high-level statements which generate the constants that define all the parts of a screen). This technique allows them to provide a lot of useful capabilities, but screen definitions written this way are hard to write and even harder to maintain! Say you want to make a field longer and move it down a few lines: you find yourself changing a large number of different values which all have to be kept consistent (the values are often not even the same, but have to be kept consistent according to some formula). I once wrote a prototyping tool which allowed screens to be specified in WYSIWYG (What You See Is What You Get) format, and which could then be used to generate both the screen definition macros and all the HLL (higher-level language) declares that had to correspond to them. It was quite widely used internally within IBM, and in fact one project, which needed to change some MFS (IMS's Message Format Service screen definitions), started out by converting the old MFS into the prototyper specifications, so that they could make their changes and then generate everything automatically. This way, they could be sure that everything stayed consistent. When such a screen definition tool is integrated with a 4GL, you get a very attractive combination. It's even better when the prototyping tool is built using FBP, as it can then be "grown" into a full interactive application by incrementally expanding the network. This ability to grow an application from a prototype seems very desirable, and is one of the things that make FBP attractive for developing interactive applications.

The problem with the conventional 4GL comes in, of course, when the customer - like the customer above who wanted to change a report title - wants something that the 4GL does not provide. Usually this kind of thing is handled by means of exits. A system which started out simple eventually becomes studded with exits, which require complex parametrization, and whose function cannot be understood without understanding the internals of the product - the flow of the product. Since part of the effectiveness of a 4GL comes from its being relatively opaque and "black boxy", exits undermine its very reason for being.

An example of this in the small is IBM's MVS Sort utility (or other Sorts which are compatible with it) - as long as one can work with the standard parameters for the Sort as a whole, it's pretty clear what it is doing, and how to talk to it. But now suppose you decide you want to do some processing on each input record before it goes into the Sort: you have to start working with the E15 exit. This requires that you form a concept of how Sort works on the inside - a very different matter. E15 and E35 (the output exit routine) have to be independent, non-reentrant load modules, so this puts significant constraints on the ways in which applications can use load module libraries... and so on. Luckily Sort also has a LINKable interface, so DFDM [and AMPS before it] used this, turned E15 and E35 inside-out, and converted the whole thing into a well-behaved reusable component. The result is much easier to use, and you get improved performance as well, due to the reduction in I/O! In a similar sort of way, FBP can also capitalize on the same regularities as 4GLs do, by providing reusable components (composite or elementary) as well as standard network shapes. Instead of programmers having to understand the internal logic of a 4GL, they can be provided with a network and specifications of the data requirements of the various components. Instead of having to change mental models to understand and use exits, the programmer has a single model based on data and its transformations, and is free to rearrange the network, replace components with custom ones, or make other desired changes.
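
The flavour of this can be suggested with a small sketch - not DFDM's actual interface, just a modern Python analogy in which each process is a generator reading a stream of records from upstream and passing a stream downstream:

def sort_records(upstream, key=None):
    # A sort packaged as a reusable component: it reads everything coming in,
    # then sends the records on in sorted order.
    yield from sorted(upstream, key=key)

def upcase(upstream):            # plays the role an E15 exit would play
    for record in upstream:
        yield record.upper()

def printer(upstream):           # plays the role an E35 exit would play
    for record in upstream:
        print(record)

# Wiring the three processes into a little pipeline ("network"):
raw = iter(["smith", "adams", "jones"])
printer(sort_records(upcase(raw)))   # prints ADAMS, JONES, SMITH

The point is simply that the pre- and post-processing become ordinary components upstream and downstream of the sort, rather than exits buried inside it.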

I should also point out that the regularities referred to above have provided a fertile breeding-ground for various source code reuse schemes. My feeling about source code reuse is that it suffers from a fundamental flaw: even if building a new program can be made relatively fast, once you have built it, it must be added to the ever-growing list of programs you have to maintain. It is even worse if, as is often required, the reusable source code components have to be modified before your program can work, because then you have lost the trail connecting the original pieces of code to your final program: if one of them has to be changed later, say to fix a bug, there is no easy way to carry that change forward. Even if no modification takes place, the new program has to be added to the list of program assets your installation owns. Already in some shops, maintenance is taking 80% of the programming resource, so each additional application adds to this burden. In FBP, ideally all that is added to the asset base is a network - the components are all black boxes, and so a new application costs a lot less to maintain.

A related type of tool is the program generator - this is also source-level reuse, with a slightly different emphasis. As above, an important question is whether you can modify the generated code. If you can't, you are limited to the choices built into the generator; if you can, your original source material becomes useless from a maintenance point of view, and can only be regarded as a high-level (and perhaps even misleading) specification. Like out-of-date documentation, it might almost be safer to throw it away...

I don't want to leave this general area without talking about CASE (Computer-Aided Software Engineering) tools. The popularity of these tools arises from several very valid concepts. One is that people should not have to enter the same information multiple times - especially when the different forms of this data are clearly related, but it takes a computer to figure out how! We saw this in the case of the prototyping tool mentioned above. Another is that, if you view the development process as one of expressing creativity in progressive stages within a context of application knowledge, then you want to capture this in a machine - not just as text, but in a form which captures meaning - so that, on demand, it can be converted to the various formats required by other software, added to, and presented in a variety of display formats.

There are a number of such tools in the marketplace today, addressing different parts of the development process, and I see these as the forerunners of the more sophisticated tools which will become available in the next few years. Graphical tools are now becoming economical, and I believe that graphical techniques are the right direction, as they take advantage of human visualization skills. I happen to believe HIPOs (remember them? Hierarchical Input, Process, Output) had part of the right answer, but adequate graphical tools were not available in those days, and drawing the diagrams with pencil, eraser and a template was a pain! When someone did go through all that trouble and produced a finished HIPO diagram, the results were beautiful and easy to understand. Unfortunately, systems don't stand still, and keeping the diagrams up to date was very hard, given the technology of those days!

Our experience is that Structured Analysis is a very natural first stage for FBP development, so CASE tools which support Structured Analysis diagrams and which have open architectures are natural partners with FBP. In fact, FBP is the only approach which lets you carry a Structured Analysis design all the way to execution - with conventional programming, you cannot convert the design into an executable program structure. There is a chasm which nobody has been able to bridge in practice, although there are some theoretical approaches, such as Jackson Inversion, which have been partially successful. In FBP, you can just keep adding information to a design which uses the Structured Analysis approach until you have a working program. In what follows, you will see that FBP diagrams require very little information at the network level, beyond what is already captured by the Structured Analysis design, to create a running program. Probably the most important point is that one has to distinguish between code components and processes (occurrences of components), and some Structured Analysis tools do not make a clear distinction between these two concepts. As we shall see in the following chapter, an FBP network consists of multiple communicating processes, but a tool which is viewed primarily as a diagramming aid may be forgiven for assuming that all the blocks in a diagram are unique, different programs. The need to execute a picture imposes a discipline on the design process and the designer, which means that these confusions have to be resolved. We actually developed some PC code to convert a diagram produced by one of the popular CASE tools into a DFDM network specification, and this was used successfully for several projects.
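
The component/process distinction mentioned above can be shown in miniature. In the sketch below (a Python illustration with invented names, not the notation of DFDM or of any CASE tool), one component definition appears in the network twice, as two separately named processes with different parameters:

def select_records(upstream, wanted_type):
    # One component: pass through only the records of the requested type.
    for record in upstream:
        if record["type"] == wanted_type:
            yield record

# ...used as two distinct processes (occurrences) in the same network.
network = {
    "SelectOrders":  (select_records, {"wanted_type": "ORDER"}),
    "SelectReturns": (select_records, {"wanted_type": "RETURN"}),
}

data = [{"type": "ORDER", "id": 1}, {"type": "RETURN", "id": 2}]
for process_name, (component, parameters) in network.items():
    print(process_name, list(component(iter(data), **parameters)))

A tool that treats every block in a diagram as a separate program has no way of saying that SelectOrders and SelectReturns are the same code; an executable network has to say it explicitly.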

FBP's orientation towards reuse also forces one to distinguish between a particular use of a component and its general definition. This may seem obvious in hindsight, but, even when documenting conventional programs, you would be amazed how often programmers give a fine generalized description of, say, a date routine, but forget to tell the reader which of its functions is being used in a particular situation. Even in a block diagram I find that programmers often write in the general description of the routine and omit its specific use (you need both). This is probably due to the fact that, in conventional programming, the developer of a routine is usually its only user as well, so s/he forgets to "change hats". When the developer and user are different people, it is easier for the user to stay in character.

To summarize, HLLs, 4GLs and CASE are all steps along a route towards the place we want to be, and all have lessons to teach us, and capabilities which are definitely part of the answer. What is so exciting about FBP is that it allows one to take the best qualities of all the approaches which went before, and combine them into a larger whole which is greater than the sum of its parts.