Flow-Based Programming - Chap. XVIII
A Business-Oriented Very High Level Language


This chapter has been excerpted from the book "Flow-Based Programming: A New Approach to Application Development" (van Nostrand Reinhold, 1994), by J. Paul Morrison.

A second edition (2010) is now available from CreateSpace eStore and Amazon.com.

The 2nd edition is also available in e-book format from Kindle (Kindle format) and Lulu (epub format).


For definitions of FBP terms, see Glossary.                                                                                       

Material from book starts here:        

The following is a sketch for a different kind of language - one that describes business logic descriptively, rather than procedurally. I was reading an article by M. Hammer et al. (1977), describing a system which they called the Business Definition Language (BDL), and I started thinking that some of BDL's basic ideas were really complementary to FBP - FBP doesn't have a native way to express business logic, while FBP provides simple solutions to some of the awkwardnesses that I spotted in BDL. BDL might be described as a tabular/functional approach to expressing business applications, which embodies a number of desirable program attributes listed by B. Leavenworth in his 1977 paper:

  • elimination of arbitrary sequencing (sequencing not dictated by the data dependencies of the application)

  • pattern-directed structures (non-procedural specification)

  • aggregate operations

  • associative referencing

Of course, these are embodied in BDL, which the following language is modelled on, but it seems to me that FBP can make BDL even more natural, so these attributes should carry over to the resulting language.

I want to expand on these four points a bit. The following are my interpretations, but I hope that I don't depart too far from the spirit in which he intended them!

Leavenworth sees the first point, "arbitrary sequencing", as one of the serious problems with procedural languages, and of course it has come up frequently in various forms in this book. As we have said before, the von Neumann machine forces you to fix the sequence of every statement, whether it matters to the logic or not. Consider the two statements

MOVE A TO B

MOVE C TO D

Clearly it makes no difference to the logic what order they are executed in, but change the first statement to refer to C or D, and you reverse them at your peril! Thus,

MOVE A TO C

MOVE C TO D

is totally different from

MOVE C TO D

MOVE A TO C

It is also worth noting that much of the optimizing logic in compilers involves trying to determine which statements can be moved and which ones cannot.

It should also be clear in the light of what has gone before why "pattern-directed structures" are desirable. Humans are much better at working with descriptions (especially visual ones) than lists of instructions.

"Aggregate operations" - this is closely related with trying to avoid procedural solutions. We have found that the higher level the constructs the language deals with, the more powerful it is, and the less of a barrier there is between the programmer's concept and its expression in the language. Of course, higher-level constructs require higher-level operations.

An example I have always found interesting is the APL language. It is highly expressive and a lot of its power derives from the foundation work that Ken Iverson did investigating the basic theory of matrices. While most of us were taught to treat matrix multiplication as an atomic operation, Iverson found a simple yet powerful way to make visible the fact that it is really a particular relationship between two binary operators, which he represented using the "dot" symbol. This is a second-order operator, as it works with other operators, rather than variables. Not only did this allow other binary operators to be combined in the same way (like "or" and "and", or "min" and "max"), but it freed up the simple "multiply" operator so it could be generalized smoothly from scalars to vectors to matrices.
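To make this concrete, here is a minimal sketch in Python (my illustration, not APL notation or anything from Iverson's work) of the "dot" as a second-order operator: inner_product takes two binary operators and returns a new operator on matrices. With "add" and "multiply" it gives ordinary matrix multiplication; any other pair, such as "min" and "max", can be substituted.

from functools import reduce
import operator

def inner_product(f, g):
    # the "dot": a second-order operator - it works with two binary
    # operators, rather than with variables, and yields a matrix operator
    def dot(A, B):
        cols = list(zip(*B))                  # the columns of B
        return [[reduce(f, (g(a, b) for a, b in zip(row, col)))
                 for col in cols]
                for row in A]
    return dot

matmul = inner_product(operator.add, operator.mul)   # APL's +.x
minmax = inner_product(min, max)                     # the same pattern, other operators

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))    # [[19, 22], [43, 50]] - ordinary matrix product
print(minmax(A, B))    # min over pairwise max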

As an example of aggregate operations, let's compare the way PL/I and APL handle matrices. In PL/I you can write the following statement:

A = A + A(2,3);  

This is implemented in a very procedural manner. PL/I executes this statement one element at a time, and does it in such a way that the rightmost dimension is the one which cycles most rapidly. So the sequence will be A(1,1), A(1,2), ..., A(1,n), A(2,1), A(2,2), ..., and so on. When these additions hit A(2,3), all "later" elements (those following A(2,3)) will have the new (doubled) value added to them, rather than the original one. In APL, on the other hand, matrices are treated as "aggregates", which are conceptually handled all at the same moment of time. Thus, you can write

A ← A + A[2;3]

and it behaves much like the PL/I example, but A[2;3] does not change halfway through execution. This of course means that APL has to be able to manage matrices as more or less opaque objects, while the PL/I model is more that a matrix is just another part of the machine's memory, with a particular structure superimposed on it. In FBP, although for performance reasons we have to process a stream one IP at a time, you very often can think of it as a single object. If you really have to process a whole stream as a single object, we have seen in the earlier chapters that you can turn it into a tree, and then turn it back later. In the chapter on Streams and Recursive Functions we will see how one can use recursive function definitions to effectively treat a whole stream as a single aggregate object.
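The difference is easy to demonstrate. The following Python sketch (my own illustration; indices are zero-based here, so the element corresponding to A(2,3) is A[1][2]) contrasts the two evaluation models: the first function reproduces PL/I's element-at-a-time behaviour, while the second snapshots the addend first, as APL in effect does.

def add_elementwise(A, i, j):
    # PL/I-style: one element at a time, rightmost index cycling fastest;
    # A[i][j] itself changes partway through, affecting "later" elements
    for r in range(len(A)):
        for c in range(len(A[r])):
            A[r][c] += A[i][j]

def add_aggregate(A, i, j):
    # APL-style: the addend is conceptually read "all at the same moment"
    addend = A[i][j]
    for r in range(len(A)):
        for c in range(len(A[r])):
            A[r][c] += addend

A1 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
A2 = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
add_elementwise(A1, 1, 2)
add_aggregate(A2, 1, 2)
print(A1)   # [[7, 8, 9], [10, 11, 12], [19, 20, 21]] - last row saw the doubled value
print(A2)   # [[7, 8, 9], [10, 11, 12], [13, 14, 15]]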

"Associative referencing" means the ability to locate an object by content, rather than by location. On most modern machines, this is done by means of a look-up of some sort, although there have been machines designed which did it by hardware. A few years ago there was a lot of interest in associative or "content-addressable" memory. While it is certainly nice to be able to find things that have moved, my feeling is that "handles" are fine, provided you have a way to find new things which you have never searched for before. Once they are found, they shouldn't have to move again, and their handles can be passed from process to process. An additonal wrinkle is to leave a trail in the rare event that things do move (of course in both cases we are talking about a single shared memory). It is interesting that the main difference between Carriero and Gelernter's Linda (1989) and FBP is that the former is associative, while FBP, as we have seen, uses connections and handles. We will be looking at Linda in more detail in a later chapter, but it seems to me that Linda's design should increase its overhead considerably over FBP, without a corresponding advantage in expressiveness. An example in one of the articles about Linda describes how to simulate an FBP connection, and it would be trivial to write matching FBP components which store and retrieve Linda tuples! Of Leavenworth's four points, this last seems to me the least exciting, but it should certainly be borne in mind as a design technique.

To give you a flavour of BDL, here is a sample definition for the "extended details" file (or stream) in Leavenworth's paper. This is the same TR1 that was used in Chapter 10. In what follows "'s" is the BDL notation denoting a collection or group.

[BDL definition of the Extended Details file: a four-column table, not reproduced here, giving the hierarchical structure of the file, the derivation of each entry, and the definitions of the names used.]
[I shall continue to use their term "extended price", although a better term would be something like "extended monetary value".]

I have labelled the columns as shown to give an indication of their purpose and interrelationships (BDL proper does not do this). The first two columns express the hierarchical structure of the file in question and the derivations of each of the entries, while the last two columns give definitions of all names appearing in the second column, and of any new names introduced in these definitions. The fourth column uses some reserved words to describe relationships between words in the first column and their sources. Thus, the derivation of "Ext price" is shown in the second column, and uses "Unit price" and "Quantity", whose definitions are in turn shown in the third and fourth columns. The whole approach is based on definitions, rather than procedures, which I found very appealing.

The structure shown in the first column is considered to repeat indefinitely through the "Ext detail's" file.

If a job step produces more than one output file, a BDL "program" similar to that shown above must be given for each one separately. Clearly, since each one is "driven" by the same input file, there is unnecessary duplication of logic here. In fact, in an attempt to avoid uncontrolled "global" definitions, the BDL language is considerably more redundant than one would like - every duplication increases the risk of one or more of the copies getting out of step.

A more subtle problem arises from the lack of an explicit stream concept: in column three of the referenced diagram, "Product Master" is defined as that Master record (hopefully unique) whose "Master Product number" field is equal to the "Detail Product number" in the Detail record. WITH is an example of what Leavenworth calls "associative referencing" - it describes an implicit matching process. However, there is no indication of how the appropriate master record is located. While this is to some extent deliberate, when we actually come to build an application, we will eventually have to choose between various techniques for relating details and their corresponding masters, e.g. direct access, sort and merge, and perhaps others, which will have a profound effect on the basic structure of our design. It is true that the software could be allowed to select the technique automatically, but in our work with FBP we tend to feel that it is better to leave developers free to choose whichever technique seems most appropriate. Remember there will be prewritten, "black-box" code available to support whatever approach they select.

In what follows, I will assume that sorted streams of data have been merged using Collate, so the master records will precede their associated details. I will now sketch out a possible mini-language which holds to the spirit of BDL, while taking advantage of the fact that we are only describing a single FBP process.
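As a minimal sketch of what is being assumed here (in Python, my own rendering - Collate itself is a component described elsewhere in this book), the following generator merges a master stream and a detail stream, both already sorted on product number, emitting bracket IPs around each resulting substream:

OPEN, CLOSE = "(", ")"      # stand-ins for open and close bracket IPs

def collate(masters, details):
    # assumes every detail has a matching master, and both streams are
    # sorted on "product"
    di = 0
    for m in masters:
        yield OPEN
        yield m                              # the master precedes its details
        while di < len(details) and details[di]["product"] == m["product"]:
            yield details[di]
            di += 1
        yield CLOSE

masters = [{"product": 1, "type": "M"}, {"product": 2, "type": "M"}]
details = [{"product": 1, "type": "D"}, {"product": 1, "type": "D"}]
for ip in collate(masters, details):
    print(ip)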

Three types of non-procedural information together describe what you want this kind of process to do:

  • a description of the input and output streams, down to the IP level, and the creation and output criteria for the output IPs (let's call it an IP Relationship Diagram)

  • descriptions of the various IPs involved (IP Descriptions)

  • description of the calculation basis for "derived" fields - fields not present in the input IPs, or whose values are changed by the component (Derivation Descriptions).

A free-form notation is used in the accompanying figures, with English-like names for fields and upper-case strings for "reserved words".

In what follows, square brackets will indicate that a clause seems slightly redundant and could easily be eliminated by adding a new rule to the set of evaluation rules. These are usually reasonable defaults, which can still be overridden if the developer needs to.

I will now show the three portions of a Component Description listed above, using the component TR1 from the above example.

Here is the IP Relationship Diagram (IRD) for TR1 first:

INPUT STREAM

1. Merged Input
2. Product Substream: REPEATING
3. Product Master
3. Sales Detail: REPEATING

OUTPUT STREAMS

1. New Masters
2. Product Master: ONE PER Product Substream

1. Sort Input
2. Extended Detail: [ONE PER Sales Detail]

1. Report-1
2. Product Summary: ONE PER Product Substream [,NEW]

This table describes relationships among IPs, so the lowest level items at any point always describe IPs - all higher level items are substreams, while all level 1 items are streams.

Only one input stream may be specified in the IRD, but there may be any number of output streams.

All streams consist of repeating patterns of IPs, and a COBOL-like "level" notation is used to show how they are nested to form a complete data stream.

The repeating pattern may be trivial, e.g. the stream may consist of only one IP, or it may be as complex as desired, consisting of a variable number of substreams, each of which consists of a variable number of substreams, and so on. In what follows, unless explicitly stated otherwise, the word "substream" will be taken to allow a single IP. This way I don't have to keep on saying "IP or substream". Similarly, "substream" can also refer to the whole stream, except where explicitly stated otherwise.

A substream in an IRD has a "quantifier", which, for an input stream, may be one of the phrases REPEATING, OPTIONAL or ONE-OR-MORE, or may be absent; for an output stream, it may be ONE PER or absent. These quantifiers describe the number of occurrences of the substream in question within the next higher level substream. If no quantifier is specified, there is exactly one such substream in the next higher level substream. REPEATING, OPTIONAL and ONE-OR-MORE mean respectively: zero or more, zero or one, and at least one occurrence. ONE PER 'x' indicates that the substream in question is to be generated once for each occurrence of substream 'x' (let's follow BDL in calling 'x' the "cause" of the substream being described).

Examining the IRD for TR1, we note that the input stream consists of zero or more Product Substreams, each of which consists of one Product Master, followed by zero or more Sales Details. You will remember that substreams are delimited by bracket IPs, so a Product Substream is additionally marked by the presence of a Product Master. We have run into this kind of redundancy before, but it doesn't hurt, and, if the Product Master had been OPTIONAL, the brackets would be the only way to separate Product Substreams.
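As a minimal sketch (mine, not the book's or BDL's notation), this reading of the IRD can be checked mechanically. Here substreams are modelled as nested Python lists - the list boundaries playing the role of the bracket IPs - and IPs by their type names:

def is_product_substream(sub):
    # one Product Master followed by zero or more (REPEATING) Sales Details
    return (isinstance(sub, list)
            and len(sub) >= 1
            and sub[0] == "Product Master"
            and all(ip == "Sales Detail" for ip in sub[1:]))

def is_merged_input(stream):
    # the input stream: zero or more Product Substreams
    return all(is_product_substream(sub) for sub in stream)

stream = [["Product Master", "Sales Detail", "Sales Detail"],
          ["Product Master"]]                 # a substream with no details
print(is_merged_input(stream))                # True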

In the output streams, two of the IP types (Product Masters and Sales Details) actually came from the input stream, and are referred to as OLD, while the Product Summary IPs are created by TR1, and are called NEW.

The PER clause on Extended Details is shown in square brackets because a reasonable additional evaluation rule might be: if the PER clause is omitted on an OLD IP, the IP is its own "cause".

Now we will need some kind of description of the IPs. This has been discussed at length in the chapter on Descriptors, so I will not go into much detail here. The designers of BDL also saw the need to allow a more varied set of field types than conventional HLLs. A Sales Detail IP could also be described using COBOL-style level numbers to show relationships between fields, but the positioning of the fields is only important when importing or exporting files. The other main function of level numbers - to show groupings - either goes away when one has more powerful field types, or can be handled in other ways. We might therefore show the fields of an Extended Sales Detail IP as a simple list, e.g.

Extended Detail: 
{
Product Number: Ident,
Salesman Number: Ident,
District Number: Ident,
Quantity: Quantity,
Unit Price: MPrice,
Extended Price: Monetary;
}
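In a modern language this kind of description maps quite naturally onto a record type. Here is a minimal Python sketch (my own rendering, not the book's notation); the domain names are kept as type aliases, so that a descriptor-aware component could still tell them apart:

from dataclasses import dataclass
from decimal import Decimal

Ident = str            # identifying fields: compared, never calculated on
Quantity = int         # a count of units
MPrice = Decimal       # a monetary unit price
Monetary = Decimal     # a general monetary amount

@dataclass
class ExtendedDetail:
    product_number: Ident
    salesman_number: Ident
    district_number: Ident
    quantity: Quantity
    unit_price: MPrice
    extended_price: Monetary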

The last thing we have to specify is the derivations of any derived fields. The following diagram shows the algorithms for those fields which are changed or created by the action of the component in question. Fields which are unchanged from the values they had on entering the component are not shown. Indentation is used to relate fields to the IPs they are part of.

[Derivation Descriptions for TR1: a table, not reproduced here, giving the derivation of each changed or created field - Extended Price, Product Total and Year-to-date Sales.]

When a field in a "new" IP is not mentioned, it is assumed to have the same value as the field with the same name in the "cause" IP.

The function "SUM of A OVER B" sums all occurrences of field A within higher-level substream B (which is called the "range" of the OVER). Thus, Product Total is defined as the sum of Extended Price OVER Product Substream. The OVER-clause is shown in square brackets in this example, as it becomes redundant if the following rule is added: if OVER is omitted, the "range" is taken to be the "cause" for the field being computed. Although this may seem arbitrary, in fact it covers off all the SUMs so far encountered in sample problems.

The IN in Year-to-date Sales is similar to qualification in COBOL or PL/I. It can be omitted if it is understood that it is the field of the same name in the "cause" that is meant.

"SAME as" is self-explanatory.

The Product Summary report poses a different kind of problem, as it has to do with printing a human-readable report. Of course, screens are also human-readable, so many of the same solutions should apply to both. There are many possible approaches to parameterizing a report, and, under FBP, there is no need to decide on a single unique solution, as many printer components can coexist in your library of components, just as a carpenter can have many different kinds of glue for different jobs. However, it is not hard to imagine that a useful component would be a "report interpreter" similar to the interpreter which interprets the derivation rules. I am not going to show how reports could or should be defined, but am rather going to mention a few ideas that might be considered in designing such general components.

We have already seen in Chapter 10 what a useful Report generator component might look like, so we will concentrate on the components which generate the report lines which are fed into it. This of course brings in the question of "representations" that we talked about in Chapter 11 (Descriptors). There we talked about how representation standards normally come in layers: in a given application you may have an international standard, a national standard, and a company standard, and a particular report may even use one or more such representations for the same domain, e.g. amounts with and without separators. Since reports and screens have to be human-readable, you also have the problem of multilingual support, and two very noticeable differences between languages are: 1) that phrases in different languages are frequently of different lengths, and 2) that values may have to be imbedded in text in different orders.

The above-mentioned problems have been solved quite well in the area of message-handling. A common solution is to simply provide "skeletons" which contain the fixed information and have "insertion points" for variable information. This assumes that most of the variable information is numeric, but it covers most common reporting requirements. So a report or a screen would probably be like a set of messages with fixed columns (and maybe rows) added.
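A minimal Python sketch of the skeleton technique (the skeleton texts themselves are invented examples): format fields play the role of insertion points, and reordering them handles languages that embed the values in a different order.

skeletons = {
    "en": "Salesman {salesman} sold {quantity} units in district {district}",
    "de": "In Bezirk {district} verkaufte Vertreter {salesman} {quantity} Einheiten",
}

def render(language, **values):
    # fixed text plus insertion points for the variable information
    return skeletons[language].format(**values)

print(render("en", salesman="1234", quantity=17, district="03"))
print(render("de", salesman="1234", quantity=17, district="03"))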

Reports (and screens which reflect reports) also will have subtotals and total lines, which, as we have seen, can be driven by special IPs generated upstream at close bracket time. There are a number of report generation tools, of which RPG is one of the oldest and best known, which can act as sources of mental models. The report design files or screens should resemble the final document in layout (WYSIWYG), but with the addition of IP type information (e.g. heading, detail, total for such and such level), and codes for common variable information, such as date, time, page number, etc. Such a design can then be encapsulated in a component, or maybe more than one - remember, you are not restricted to only one report generator! Similar specifications are showing up in popular PC applications - for instance, the Report Generator function of the Database subsystem of Microsoft's Works™.
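To show the bracket-driven mechanism concretely, here is a minimal Python sketch (my own; the line layouts are invented) of a report-line generator that accumulates a subtotal and emits a total line whenever a close bracket IP arrives:

OPEN, CLOSE = "(", ")"      # stand-ins for bracket IPs

def report_lines(stream):
    subtotal = 0
    for ip in stream:
        if ip == OPEN:
            subtotal = 0                      # a new substream begins
        elif ip == CLOSE:
            yield f"  *** total: {subtotal}"  # total line at close bracket time
        else:
            subtotal += ip["amount"]          # a detail IP
            yield f"  detail: {ip['amount']}"

stream = [OPEN, {"amount": 120}, {"amount": 80}, CLOSE,
          OPEN, {"amount": 50}, CLOSE]
for line in report_lines(stream):
    print(line)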

The foregoing is just a sketch for a mini-language that might do some of what people use HLLs for today. Because FBP allows us to mix languages freely, no one language has to be able to do everything, so we can design languages to specialize in different subject areas. By the same token, the same mental model is not valid for all possible uses - if you want to do some logic by pattern-matching, you do not have to do it all by pattern-matching! Similarly, while mathematics is a great foundation for some types of programming, that does not mean that all programmers must become mathematicians. Lastly, a language can be expressive and still be rigorous. If the mental model is one that people find easy to grasp, it should enhance productivity, rather than just becoming an extra burden on the developer. It is absolutely crucial that we shift a certain amount of the work of developing applications to people who are not necessarily professional programmers, both to reduce the DP bottleneck and to take advantage of their expertise.

The various diagrams and lists that we have seen above provide the advantages of applicative programming notation without the complexity of nested functions which are normally required in such a notation: FBP does this by making explicit the timing relationships of the different processing events, and the derivational relationships of the IPs involved in that processing. The righthand side of a derivation specification is in fact a functional expression of any desired complexity (it can even include conditional parts analogous to the cond function of applicative programming), and such a specification conforms to the "single assignment" characteristic of applicative programs.

Because of the open-endedness of the FBP architecture, a component of this type does not have to cover all possible applications, and can be part of a total application comprising both custom and off-the-shelf components. One can easily visualize a whole range of such components relevant to different application areas, essentially acting as repositories of specialized knowledge about these areas. If one component is more powerful than another one or covers a wider application area, a sort of evolutionary "survival of the fittest" would be expected and is in fact desirable. The problem is that some dinosaurs have survived far beyond their "natural" span because of the enormous inertia of our environment, rather than because of any intrinsic superiority!