Flow-Based Programming
Prologue

This chapter has been excerpted from the book "Flow-Based Programming: A New Approach to Application Development" (Van Nostrand Reinhold, 1994), by J. Paul Morrison.

A second edition (2010) is now available from CreateSpace eStore and Amazon.com.

The 2nd edition is also available in e-book format from Kindle (Kindle format) and Lulu (epub format).

For definitions of FBP terms, see the Glossary.

Material from book starts here:

Some of my colleagues have suggested that I fill in some background to what you are going to read about in this book. So let me introduce myself.... I was born in London, England, just before the start of World War II, received what is sometimes referred to as a classical education, learned to speak several languages, including the usual dead ones, and studied Anthropology at King's College, Cambridge. I have since discovered that Alan Turing had attended my college, but while I was there I was learning to recognize Neanderthal skulls, and hearing Edmund Leach lecture about the Kachin, so I regret that I cannot claim to have programmed EDSAC, the machine being developed at Cambridge, although I later took an aptitude test for another marvellous machine, the Lyons' LEO (Lyons Electronic Office), whose design was based on EDSAC's. But maybe computing was in the air at King's!

In 1959 I joined IBM (UK) as an Electronic Data Processing Machines Representative. I had come into computers by a circuitous route: around the age of 12, I got bitten by the symbolic logic bug. This so intrigued me that all during my school and university years I read up on it, played with the concepts for myself, and looked forward to the time when all the world's problems would be solved by the judicious rearrangement of little mathematical symbols. Having also been fascinated by the diversity of human languages since childhood, the idea of really getting to the root of what things meant was very exciting. It wasn't until later in life that I realized that many great minds had tried this route without much success, and that, while it is certainly a beguiling concept and there have been many such attempts in earlier centuries, the universe of human experience is too complex and dynamic, with too many interacting factors, to be encoded in such a simple way. This does not mean that attempts to convey knowledge to a computer will not work - it is just that there seem to be certain built-in limitations. The human functions which we tend to think of as being simple, almost trivial, such as vision, speech comprehension or the ability to make one's way along a busy street, are often the hardest to explain to a computer. What we call common sense turns out to be quite uncommon....

While symbolic logic has not delivered on its original promise of making the world's important decisions simpler, it is perfectly adapted to the design of computers, and I became fascinated by the idea of machines which could perform logical operations. This fascination has stayed with me during my 33 years with the IBM Corporation in three different countries (by the way, this is why most of the systems I will be mentioning will be IBM systems - I apologize for this, but that's my background!), but I've always been struck by the apparent mismatch between the power of these machines and the difficulty of getting them to do what we wanted. I gradually came to concentrate on one basic problem: why should the process of developing applications on computers be so difficult, when they can obviously do anything we can figure out the rules for?

There is definitely an advantage to having cut my proverbial teeth in this field at a time when very few people had even heard of computers: over the intervening years I have had time to digest new concepts and see which of them succeeded and which failed. Over the course of three and a bit decades [this was written in 1994], many concepts, techniques and fads have sprung up with great attendant fanfare, and have either faded out or just become part of the regular curriculum. Ideas which took a decade or two to evolve are now familiar to kids fresh out of university. I got advance notice of many of these concepts, and often had time to understand them before they became widespread! A list of these wonders would be too long to include here, and perhaps only of interest to historians. Some of them fell by the wayside, but many of them are still around - some good and some not so good! We who were working in the field also certainly contributed our fair share of techniques and fads, also some good and some not so good!

I think my first enthusiasm was compiler compilers. I first worked with a fascinating system called BABEL - appropriate name - which was going to make it far easier to write compilers. I still use some of its ideas today, 30 years later. We shall see later in this book that there are interesting parallels between compiler theory and the subject matter of this book, and there seems to be an important role for what are sometimes called "mini-languages" (I will be talking some more about them in Chapter 17). Certainly compiler compilers comprise a piece of the answer, but they did not result in the productivity improvement that we were looking for.

I have also always been taken with interpreters - I believe my first exposure to these was BLIS (the Bell Laboratories Interpretive System), which made the IBM 650 look like a sequential machine. Probably the characteristic of interpreters which really appeals to people is the ability to debug without having to change languages. Of course, some of the recent debugging tools are starting to bring this capability to the world of Higher Level Languages (HLLs), but the ability to just slot in a TYPE or "say" command and rerun a test is so impressive that all the languages which became really popular have always been interpreters, no matter how awkward the syntax! In a survey of machine cycle usage done a few years ago at IBM's Research Center at Yorktown Heights, they found that the vast majority of cycles were being used by CMS EXEC statements - strings of CMS commands glued together to do specific jobs of work.

Another important concept for productivity improvement is that of a reusable subroutine library. I also believe strongly that reuse is another key piece of the solution, but not exactly in the form in which we visualized it in those days. In company after company, I have seen people start up shared subroutine libraries with a fine flurry of enthusiasm, only to find the action slowing to a standstill after some 30 or 40 subroutines have been developed and made available. Some companies are claiming much higher numbers, but I suspect these are shops which measure progress, and reward their people, based on how many subroutines are created and added to the library, rather than on whether they are actually used. Although organizational and economic changes are also required to really capitalize on any form of reuse, I believe there is a more fundamental reason why these libraries never really take off, and that is the philosophy of the von Neumann machine. I will be going into this in more detail in Chapter 1, but I found I was able to predict which subroutines would land up in these libraries, and it was always "one moment in time" functions, e.g. binary search, date routines, various kinds of conversions. I tried to build an easy-to-use, general purpose update (yes, I really tried), and I just couldn't do it (except for supporting a tiny subset of all the possible variations)! This experience is what got me thinking about a radically different approach to producing reusable code. I hope that, as you read this book, you will agree that there is another approach, and that it is completely complementary to the old one.
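
To make the contrast concrete, here is a minimal sketch of my own (in modern Python, not taken from the book; every name in it is invented for illustration). The first two routines are "one moment in time" functions of the kind that do make it into shared libraries; the third tries to be a general-purpose sequential update and can only stay general by taking the caller's logic as parameters:

```python
# Illustrative sketch only -- not from the book; names and details are invented.
from datetime import date

# "One moment in time" functions: one call, one answer, no memory of the past.
# These are the routines that end up in shared subroutine libraries.

def days_between(d1: date, d2: date) -> int:
    """Date arithmetic: stateless, so trivially reusable."""
    return abs((d2 - d1).days)

def binary_search(keys, target) -> int:
    """Classic library citizen: returns the index of target in a sorted list, or -1."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if keys[mid] == target:
            return mid
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# A "general purpose update", by contrast, must own the loop over two ongoing,
# key-sequenced streams (masters and transactions) while deferring every
# application-specific decision to its caller.  Keeping it general means
# accepting that logic as parameters -- and the list of hooks keeps growing.

def general_update(masters, transactions, key_of, apply_txn,
                   on_unmatched_txn, emit):
    """Sequential master-file update skeleton (both inputs sorted by key)."""
    m_iter, t_iter = iter(masters), iter(transactions)
    m, t = next(m_iter, None), next(t_iter, None)
    while m is not None or t is not None:
        if t is None or (m is not None and key_of(m) < key_of(t)):
            emit(m)                      # master with no (more) transactions
            m = next(m_iter, None)
        elif m is None or key_of(t) < key_of(m):
            on_unmatched_txn(t)          # transaction with no matching master
            t = next(t_iter, None)
        else:
            m = apply_txn(m, t)          # matched: apply and look for more
            t = next(t_iter, None)
```

The stateless routines drop into any program with a single call. The update, on the other hand, has to own the main loop over both streams and call back into the application for every decision - a hint of why such functions resist being packaged as conventional subroutines.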

Rapid prototyping and the related idea of iterative development were (and are still) another enthusiasm of mine. Rapid prototyping is a way of reducing the uncertainties in the development process by trying things out. I believe that anything you are uncertain about should be prototyped: complex algorithms, unfamiliar hardware, data base structures, human interfaces (especially!), and so on. I believe this technique will become even more important in the next few decades as we move into ever more complex environments. Here again, we will have to modify or even abandon the old methodologies. Dave Olson's 1993 book, "Exploiting Chaos: Cashing in on the Realities of Software Development", describes a number of approaches to combining iterative development with milestones to get the best of both worlds, plus some fascinating digressions into the new concepts of chaos and "strange attractors". There are some very strange attractors in our business! I have also believed for some time that most prototypes should not just be thrown away once they have served their purpose. A prototype should be able to be "grown", step by step, into a full-fledged system. Since the importance of prototypes is that they reduce uncertainty, rewriting applications in a different language is liable to bring a lot of it back!

The pattern of all these innovations is always the same - from the subroutine to Object-Oriented Programming: someone finds a piece of the answer and we get a small increment in productivity, but not the big break-through we have been looking for, and eventually this technique makes its way into the general bag of tricks that every experienced programmer carries in his or her back pocket.

By the way, I should state at the outset that my focus is not on mathematical applications, but on business applications - the former is a different ball-game, and one happily played by academics all over the world. Business applications are different, and much of my work has been to try to determine exactly why they should be so different, and what we can do to solve the problem of building and maintaining them. These kinds of applications often have a direct effect on the competitiveness of the companies that use them, and being able to build and maintain this type of application more effectively will be a win-win situation for those of us in the industry and for those who use our services.

Before I start to talk about a set of concepts which, based on my experience over the last 30 years, I think really does provide a quantum jump in improving application development productivity, I would like to mention something which arises directly out of my own personal background. Coming from an artistic background, I find I tend to think about things in visual terms. One of the influences in the work described in this book was a feeling that one should be able to express applications in a graphical notation which would take advantage of people's visualization abilities. This feeling may have been helped along by exposure to a system called GPSS (General Purpose Simulation System). This system can be highly graphical, and it (along with other simulation systems) has another very interesting characteristic, namely that its constructs tend to match objects in the real world. It is not surprising that Simula (another language originally designed for simulation) is viewed as one of the forerunners of many of today's advanced programming languages.

Another effect of my personal orientation is a desire for, almost a preoccupation with, beauty in programming. While I will stress many times that programming should not be the production of unique pieces of cabinetry, this does not mean that programs cannot exhibit beauty. There are places and times in the world's history where people have invested great creativity in useful objects such as spoons or drinking cups. Conversely, the needs of primitive mass-production, supported by a naïve view of value, resulted in factories turning out vast numbers of identical, artistically crude objects (although obviously there were some exceptions), which in turn are thought to have led to a deadening of the sensibilities of a whole culture. I believe that modern technology therefore can do more than just make our lives more comfortable - I believe it can actually help to bring the aesthetic back into its proper place in our life experience.

One more comment about my personal biases (of which I have many, so I'm told): it has always seemed to me that application design and building is predominantly a creative activity, and creativity is a uniquely human ability - one that (I believe) computers and robots will never exhibit. On the other hand, any activity which bores people should be done by computers, and will probably be done better by them. So the trick is to split work appropriately between humans and machines - it is the partnership between the two that can lead to the most satisfying and productive era the world has ever known (I also read a lot of science fiction!). One of the points often missed by the purveyors of methodologies is that each stage of refinement of a design is not simply an expansion of information already in existence, but a creative act. We should absolutely avoid reentering the same information over and over again - that's boring! - but, on the other hand, we should never imagine that any stage of refinement of a design can somehow be magically done without human input. Robots are great at remembering and following rules - only humans create.

Corollary I: Do not use humans for jobs computers can do better - this is a waste of human energy and creativity, the only real resource on this planet, and demeans the human spirit.

Corollary II: Do not expect computers to provide that creative spark that only humans can provide. If computers ever do become creative, they won't be computers any more - they will be people! And I do not consider creativity the same as random number generation....

The other personal slant I brought to this quest was the result of a unique educational system which inculcated in its victims (sorry, students) the idea that there is really no area of human endeavour which one should be afraid to tackle, and that indeed we all could realistically expect to contribute to any field of knowledge we addressed. This perhaps outdated view may have led me to rush in where angels fear to tread.... However, this pursuit has at the least kept me entertained and given my professional life a certain direction for several decades.

In past centuries, the dilettante or amateur has contributed a great deal to the world's store of knowledge and beauty. Remember, most of the really big paradigm shifts were instigated by outsiders! The word "amateur" comes from the idea of loving. One should be proud to be called a computing amateur! "Dilettante" is another fine word with a similar flavour - it comes from an Italian word meaning "to delight in". I therefore propose another theorem: if an activity isn't fun, humans probably shouldn't be doing it. I feel people should use the feeling of fun as a touchstone to see if they are on the right track. Here is a quote from my colleague, P.R. Ewing, which also agrees with my own experience: "The guys who turn out the most code are the ones who are having fun!" Too many experts are deadly serious. Play is not something we have to put away when we reach the state of adulthood - it is a very important way for humans to expand their understanding of the universe and all the interesting and delightful beings that occupy it. This feeling that the subject matter of this book is fun is one of the most common reactions we have encountered, and is one of the main things which makes my collaborators and myself believe that we have stumbled on something important. In what follows I hope to convey some of this feeling. Please forgive me if some whimsy sneaks in now and then!