LabView


From JimKring?, except where marked --pm:

Last Tuesday, our group ( http://openg.org ) had an online meeting to discuss various project and organizational issues. I brought up FlowBasedProgramming and the SourceForge project you started, and a lot of people are interested in it.

I am very interested in evolving graphical data-flow programming as a Turing-complete programming language. I am not interested in data-flow programming (or FBP) merely as a way to ease the development of software containing multiple concurrent agents. I have invested a large portion of my life learning about, thinking about, and solving programming challenges using data-flow programming as my fundamental programming language. As such, I would like to see the evolution of open source graphical data-flow programming tools, including a compiler and editor. It is only a matter of time until this happens. Currently, National Instruments has many patents on the graphical representation of data-flow programming constructs. However, they do not have patents on the data-flow model itself, and their patents are going to start expiring in the next couple of years. It would not violate any of National Instruments' patents to define an XML schema for describing data-flow programs (such as you have done for FBP) and to build a compiler or interpreter for executing them. The fundamental concepts of data-flow programs, such as nodes, structures, wires, and dataflow, are very easy to articulate.

For example:
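
Here is one way such a description might look, sketched in plain text. The XML schema below is entirely my own invention (not National Instruments' format and not the FBP schema); it just writes a tiny diagram down as nodes plus wires and walks it with the Python standard library:

  # Hypothetical XML for a two-input "add" diagram: nodes carry the operations,
  # and wires say which terminal of which node each value flows into.
  import xml.etree.ElementTree as ET

  DIAGRAM = """
  <diagram name="add_two_constants">
    <node id="a"   kind="constant" value="2"/>
    <node id="b"   kind="constant" value="3"/>
    <node id="add" kind="add"/>
    <wire from="a" to="add" terminal="x"/>
    <wire from="b" to="add" terminal="y"/>
  </diagram>
  """

  root = ET.fromstring(DIAGRAM)
  nodes = {n.get("id"): n.attrib for n in root.findall("node")}
  wires = [w.attrib for w in root.findall("wire")]
  print(nodes)
  print(wires)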

Additionally, there are ways to apply OOP principles to data-flow programming, through wire-type inheritance and dynamic dispatch of nodes. Many of the more advanced LabVIEW? users already have a good grasp of these concepts and how to apply them. Also, data-flow does not have to be limited to by-value semantics. There are several constructs for by-reference semantics (pointer-flow is simply data-flow where the data is a pointer to an object's data) and remote-by-reference semantics (remote objects can exist anywhere on the network). All of these have been implemented using the first principles of data-flow programming in LabVIEW?, without the help of the editor/compiler.
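
For a feel of what dynamic dispatch of nodes means in text form (a toy sketch of my own, not LabVIEW's actual mechanism), the behavior that runs can be selected by the class of the value arriving on the wire:

  # Toy example: the same "area" node does different work depending on the
  # concrete type of the object that flows into it.
  class Shape:
      def area(self):
          raise NotImplementedError

  class Circle(Shape):
      def __init__(self, r): self.r = r
      def area(self): return 3.14159 * self.r ** 2

  class Square(Shape):
      def __init__(self, s): self.s = s
      def area(self): return self.s ** 2

  def area_node(shape):
      # dynamic dispatch: which area() runs depends on what arrived on the wire
      return shape.area()

  for item in (Circle(1.0), Square(2.0)):
      print(type(item).__name__, area_node(item))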

As far as creating an execution system for graphical data-flow, I see several possibilities...

1.) One could build a translator to convert the data-flow program into a different programming language and then compile it. National Instruments has already done this, internally. For example, they have C and Verilog translators. Both the translators and the generated source code are proprietary. The translators are used for targeting LabVIEW? to other platforms such as WinCE? (PocketPC?), Palm OS, FPGAs, etc.

2.) One could build a compiler to convert data-flow programs into machine (or virtual machine) code. One could either build their own VM or use an existing VM (such as the Java VM).

3.) One could build active objects for each of the nodes, and implement a dynamic system of data flow between the various agents.

It appears that #3 is the method used by the implementations of FBP. Based on my very limited CS background, I would not be able to tackle #1 or #2 myself. However, I think that compiling data-flow programs down to Java byte code (as in #2) is a very interesting option, since it would provide cross-platform capabilities.
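
A rough sketch of what #3 could look like in plain Python (queues standing in for the connections, one thread per node; the structure and names are my own assumptions, not taken from any existing FBP implementation):

  import queue
  import threading

  def node(fn, inputs, output):
      """Active object: read one value from each input wire, apply fn, send the result on."""
      def run():
          while True:
              args = [q.get() for q in inputs]
              if any(a is None for a in args):   # None acts as an end-of-stream sentinel
                  output.put(None)
                  return
              output.put(fn(*args))
      threading.Thread(target=run, daemon=True).start()

  a, b, out = queue.Queue(), queue.Queue(), queue.Queue()
  node(lambda x, y: x + y, [a, b], out)          # an "add" node wired to a, b and out

  a.put(2); b.put(3)
  print(out.get())                               # -> 5
  a.put(None); b.put(None)                       # shut the node down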


More information from (and about) JimKring?:

I am a very graphically oriented developer. I've spent the past 6 years writing software in LabVIEW? almost full-time, and part-time for a couple of years before that. In the past couple of years I have also gotten to know UML and find that it is a useful way of expressing systems. The best thing about it is that there are several ways of viewing the same system. For example, interactions can be viewed as activity diagrams or collaboration diagrams. The thing that I have found so appealing about LabVIEW?'s graphical code is the information density. The saying "a picture is worth a thousand words" certainly applies. I think that it is important to find a language/notation which allows the design of the system to be easily expressed. The language should not get in the way of the developer. For example, a hierarchy (such as inheritance) is best represented graphically. Why have the extra step of converting a class diagram into a set of class declarations? The ability of UML editors to forward-engineer code directly from UML is very promising. However, I think that the end goal should be a UML compiler and run-time system with debugger. You should be able to probe data-flow, control-flow, and various other pathways in order to "see" what's flowing through them and when. For example, below is a snippet of code showing a probed wire. The last value that flowed through the wire was 3.
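
In text form (my own analogy, not LabVIEW's implementation), a probe in the queue-based sketch above could simply be a wire that remembers the last value to pass through it:

  import queue

  class ProbedWire(queue.Queue):
      """A wire you can 'look at': it keeps the last value that flowed through."""
      def __init__(self):
          super().__init__()
          self.last_value = None          # what the probe would display

      def put(self, item, *args, **kwargs):
          self.last_value = item
          super().put(item, *args, **kwargs)

  w = ProbedWire()
  w.put(3)
  print(w.last_value)                     # -> 3, the last value that flowed through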

Also, one should be able to set breakpoints which pause execution when flow reaches a certain point. For example, the red dot below will cause the code to stop executing when data flows through that wire.
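
In the same spirit (again my own analogy, not how LabVIEW does it), a breakpoint could be a wire that pauses whenever a value is about to flow through:

  import queue

  class BreakpointWire(queue.Queue):
      def put(self, item, *args, **kwargs):
          # pause execution until the developer decides to continue
          input(f"breakpoint: {item!r} about to flow through -- press Enter to continue ")
          super().put(item, *args, **kwargs)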


For a sketch of a possible "little language" for business applications, see http://www.jpaulmorrison.com/fbp/bdl.htm. The idea here is that, in Flow-Based Programming, you can connect any process to any other as long as they can receive and send data chunks, so no one language has to be able to do everything. Rather, we will have classes of languages. --pm

I'll check out bdl when I get home. LabVIEW? has similar ideas. It has a Script Node, which supports plugin DLLs that provide an interface to some script interpreter. A fellow and I created one for Python. We call this project LabPython?. It is hosted at SourceForge?.net <http://labpython.sf.net>. Below, you can see a Python Script Node in action.

Block Diagram

Front Panel


JohnBackus apologized for inventing Fortran! We have been paying for the confusion between variables and names that Fortran introduced ever since! --pm

I don't follow what you mean by "variables" and "names". What is a "name" in this context? BTW, I was in the last class (1997) in which they taught Fortran to engineers at UC Berkeley. After that they began teaching C++ instead. -jk


I believe a language has to have strong typing to prevent it from multiplying currency by currency, but it is not clear that these types should be attached to variables... In Python, apparently you can write

  x = 4
  x = 'a'

which only makes sense when you realize that x is the handle of an object, so is really typeless. It's really counterintuitive, though! If 'x' is the price of a pound of beef, IMO it has to be a Price, and nothing else! --pm

Actually, the value of x is a handle; x itself is just a container for a handle. You are right, though, and I understand what you are getting at: Python does not support typing of its variables. Python has only dynamic type checking. For example, an error will be raised if an operation is attempted that is not acceptable for the objects involved. This provides a lot of flexibility, but can make things tough for a developer.
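
For example, in ordinary Python (nothing assumed here beyond the standard behavior):

  x = 4
  x = 'a'                  # legal: x now names a str object instead of an int

  try:
      x * 'b'              # str * str is not an acceptable operation
  except TypeError as e:
      print(e)             # can't multiply sequence by non-int of type 'str'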

I took a Python course at UC Santa Cruz Extension, several years back. I have really grown to like it as a scripting language. I really like the interactivity. One can test snippets of code using the interactive prompt and then copy/paste them into scripts -- very nifty.

LabVIEW? offers very strict typing. In LabVIEW?, wires have types, which are color-coded. Strings are pink, floating-point numbers are orange, integers are blue, Booleans are green, etc. Arrays of scalar types are thicker and of the same color as the scalar type. 2D arrays are thicker than 1D arrays, etc.

If you try to wire two dissimilar types together, and the source is not coercible to the sink, then the wire will become broken.
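
The rule is simple enough to state in a few lines of text; here is a toy version of it (the coercion table is my own assumption, purely for illustration):

  COERCIONS = {(int, float)}     # e.g. an integer source may feed a floating-point sink

  def wire(source_type, sink_type):
      if source_type is sink_type or (source_type, sink_type) in COERCIONS:
          return "connected"
      return "broken wire"

  print(wire(int, float))        # -> connected (coerced)
  print(wire(str, float))        # -> broken wire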

Another interesting concept in LabVIEW? is that one does not need to use variables. Rather, one uses wires through which data flows. If one wishes to operate on objects by reference, then the data that flows through the wire is a reference, and a look-up is performed in order to read/write the data.
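
A rough text sketch of that by-reference style (my own framing, not LabVIEW's internals): what travels down the wire is just a small reference, and reads and writes go through a look-up table:

  object_table = {}              # reference -> object data

  def new_ref(data):
      ref = len(object_table)    # a small integer is all that flows through the wire
      object_table[ref] = data
      return ref

  def read(ref):
      return object_table[ref]

  def write(ref, data):
      object_table[ref] = data

  r = new_ref([1, 2, 3])
  write(r, read(r) + [4])        # a downstream node updates the shared object via the ref
  print(read(r))                 # -> [1, 2, 3, 4]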


Another quote from my book:

"In my opinion, there are three major problem areas common to almost all HLLs:

LabVIEW?'s graphical code is truly parallel. For example, in the block of code shown below, the two tasks C = A + B and Z = X / Y will run in parallel.
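
As a rough text stand-in for that diagram (Python threads playing the part of the two independent branches; this is my own sketch, not LabVIEW code):

  from concurrent.futures import ThreadPoolExecutor

  A, B, X, Y = 1, 2, 10.0, 4.0

  # The two computations share no data, so nothing forces them to run one after the other.
  with ThreadPoolExecutor() as pool:
      c = pool.submit(lambda: A + B)    # C = A + B
      z = pool.submit(lambda: X / Y)    # Z = X / Y

  print(c.result(), z.result())         # -> 3 2.5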

The beauty of this is that LabVIEW? can be compiled to run directly on FieldProgrammableGateArrays. So, when I say that LabVIEW?'s code executes in parallel, literally the FPGA gates are computing each thread in parallel! In a multi-tasking OS, of course, timeslicing is used to get the effect of parallel execution.

In LabVIEW?, there are tools for dealing with the asynchronous, parallel nature of execution. There is a set of "semaphore" functions for mutual exclusion, which allow data (or anything else) to be locked and unlocked.
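
A quick analogue of that locking idea, with Python's threading.Lock standing in for LabVIEW's semaphore functions (the example itself is mine):

  import threading

  lock = threading.Lock()
  shared = {"count": 0}

  def branch():
      for _ in range(10000):
          with lock:                    # acquire ("lock") and release ("unlock") on exit
              shared["count"] += 1

  threads = [threading.Thread(target=branch) for _ in range(4)]
  for t in threads: t.start()
  for t in threads: t.join()
  print(shared["count"])                # -> 40000, with no lost updates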


All of this mention of Verilog and schematic examples in an FBP discussion made me decide to use this page as the jumping-off point to describe TheConvergence. -- SteveTraugott

