Informal Overview of Visual Frameworks™
© Paul Tarvydas
This document is a broad and informal overview of Visual
Frameworks™ (VF); it originated as a private email.
The VF concept germinated with a contract we had (in the late '80s) to
develop a multi-headed 68020 controller that would be retrofitted into
plastics molding machinery.
The client's problem was that then-state-of-the-art RTOSes couldn't
come close to handling the cut-over event - something that had to be
detected and responded to in a 25 usec window (IIRC). The
best RTOS of the day (OS-9, circa 1989) had a context switch time of
80 usec. The best controller paradigm - PLCs - wasn't
even in the ballpark (100 msec, IIRC).
Instead of rolling up our sleeves and writing and debugging a huge
amount of ad-hoc code in assembler as driver code, we applied Harel
Statechart ideas to the problem (Harel had "just" published
his paper in 1986).
We treated the problem as a set of state machines - operating below the
RTOS (kind of like well-structured driver code) - and were able to cut
the response time down to the range our client needed.
This experience taught us something about Harel Statecharts and
something about software architecture:
- The Harel concept of hierarchical state machines was a good idea.
- The Harel concept of confabulating state machines
with a notation for concurrency was a really good idea.
- Harel's Statechart (including concurrency) semantics were
completely unusable for real-time software, as they required everything
to be globally synchronized with everything else.
- What you really want is something that runs in short
spurts to completion, is reactive (responds to events) and provides as
much execution parallelism as possible (i.e. as complete a separation -
encapsulation - of components as possible).
- The hardware guys were unintentionally right.
Every component of a system should be fundamentally
asynchronous. Synchronicity is added explicitly by the
designer only when needed (e.g. via handshakes, daisy-chaining, etc.).
- State machines don't require recursion, hence, they have
bounded space and time characteristics. State machines run a
single step to completion and then exit.
- You can build a state machine based system without using an
RTOS. A state machine has a static state - a handful of
variables and some code - that is known at compile-time.
Unlike with RTOSes, which allocate processes and stacks on the fly, you
can pre-compute the requirements for every state machine at compile time.
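The idea above - a state machine as a handful of statically known variables plus a step that runs to completion in bounded time - can be sketched as follows. This is purely an illustration (Python here for brevity; a real VF target would be C or assembler), and the component and its events are hypothetical:

```python
# Illustrative sketch only: a state machine whose entire state is a
# fixed, known-in-advance record, and whose step() runs one bounded
# transition to completion and then exits (no blocking, no recursion).

class Debouncer:
    """Hypothetical component: reports 'pressed' only after two
    consecutive 'down' samples of a noisy input."""
    def __init__(self):
        self.state = "UP"          # the static state: one variable

    def step(self, event):
        """One bounded step; returns an output event or None."""
        if self.state == "UP" and event == "down":
            self.state = "MAYBE"
        elif self.state == "MAYBE" and event == "down":
            self.state = "DOWN"
            return "pressed"       # output produced by this step
        elif event == "up":
            self.state = "UP"
        return None                # no output for this event

sm = Debouncer()
outputs = [sm.step(e) for e in ["down", "down", "up", "down", "up"]]
print(outputs)  # [None, 'pressed', None, None, None]
```

Because the state is one field and each step is a few comparisons, both the space and the time cost of the component are knowable before the system ever runs.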
- If you think of a state machine as a 'chip' - with inputs
and outputs - then you can construct a system that exhibits complex
behaviour, yet is constructed out of incredibly small, understandable
units. Outputs from one state machine can be coupled to the
inputs of another to form a cascade of activity. The cascade
is event-driven - reactive.
- A cascade of state machines can operate in an interrupt,
stack-based paradigm, instead of operating in the more costly
pre-emptive process model. Any one step in a state machine
takes a short (and bounded) amount of time, hence, it can be allowed
to run to completion without being pre-empted. This obviates
the need for contexts and context switching and requires only a single
stack for the whole system.
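A single-stack, run-to-completion dispatcher of the kind described can be sketched as below (Python, illustrative only; the component names and the queue discipline are my assumptions, not VF's). Events go onto one queue; the dispatcher runs each component's step to completion in turn, so there are no contexts to switch:

```python
from collections import deque

# Hypothetical sketch of a run-to-completion dispatcher: one event
# queue, one stack (the interpreter's own), no pre-emption.  Each
# handler runs a short bounded step and returns; outputs it produces
# are queued as new events rather than delivered by nested calls.

queue = deque()
trace = []

def producer(event):
    trace.append(("producer", event))
    queue.append(("consumer", event * 2))   # emit a follow-on event

def consumer(event):
    trace.append(("consumer", event))       # terminal component

handlers = {"producer": producer, "consumer": consumer}

def dispatch():
    # Drain the queue; each step runs to completion before the next.
    while queue:
        target, payload = queue.popleft()
        handlers[target](payload)

queue.append(("producer", 21))
dispatch()
print(trace)  # [('producer', 21), ('consumer', 42)]
```

In an embedded setting, the interrupt handler would simply enqueue an event and the dispatch loop would run in the foreground - all on the one stack.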
- If you think of a cascade of completely encapsulated state
machines as a set of asynchronous 'chips', you come up with a concept
hitherto unknown to software engineering - the schematic.
[I lie - this concept is the same as a network diagram, but nothing I
know of applies this concept to something as low-level as a single
function or a single statement. CSP and Occam were on the
right track, but they failed to invent the schematic].
- The most innovative concept of VF is the concept of output
pins. Software science has the equivalent of input pins
(e.g. parameters to a function), but the concept of output pins appears
to have been ignored. A single VF component (for example, a
state machine) does not - cannot - know where its output is
going. A VF component which produces an output simply sends
that output to one of its output pins. What the output pin
is connected to is determined only by the parent schematic.
You can rewire a schematic to send outputs to different places, or to
simply drop outputs on the floor (N/C in hardware
terminology). [OK, so I lie again. /dev/null in
Un*x is similar, but, again has not been applied at as low a level as
can be done in VF].
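One way to picture output pins: a component sends to a pin it owns, and only the enclosing schematic decides what, if anything, each pin is wired to. The sketch below is a hypothetical illustration (Python; all names are mine); an unconnected pin - N/C - simply drops the event on the floor:

```python
# Illustrative sketch of output pins.  The component below knows only
# its own pin numbers; the parent schematic supplies the wiring, and
# may leave a pin unconnected (N/C), in which case outputs are dropped.

class Splitter:
    OK, ERR = 0, 1                     # the component's output pins

    def __init__(self):
        self.wires = {}                # pin -> callable, set by parent

    def send(self, pin, event):
        sink = self.wires.get(pin)     # N/C pins have no entry
        if sink is not None:
            sink(event)                # deliver along the wire

    def step(self, event):
        if event >= 0:
            self.send(self.OK, event)
        else:
            self.send(self.ERR, event)

received = []
s = Splitter()
s.wires[Splitter.OK] = received.append  # parent wires pin OK only;
                                        # pin ERR is left N/C
s.step(7)
s.step(-3)                              # dropped on the floor
print(received)  # [7]
```

Note that `Splitter` contains no reference to `received` or to any other component - rewiring it means editing only the parent's `wires` table, never the component itself.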
- VF is a system of reactive software components. A
component is one of two types - one that 'does' something (a code
component) and another that wires other components together (a
schematic). Schematics are hierarchical - they can contain
code components and they can contain other schematic
components. Since everything looks like a 'chip' (a black box
with inputs and outputs), you don't need to know how a component is
implemented, and, you can change the implementation (e.g. from a code
component to a schematic component, and vice versa) as required.
- Data that flows between components - an event - flows in
only one direction. No return from the callee to the caller
is ever implied. [A return can be explicitly engineered by
the programmer, e.g. using an ACK or handshake wire, if required].
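Such an explicit handshake is itself just two one-way wires: a request wire in one direction and an ACK wire back. A toy sketch (Python; the component names and wiring are hypothetical):

```python
# Sketch of an explicit ACK handshake built from two one-way wires.
# Neither component "returns" to the other; each just fires an event
# down a wire that the parent schematic connected.

log = []

class Sender:
    def __init__(self):
        self.out = None        # wired by the parent to Receiver.on_req
        self.awaiting_ack = False

    def start(self, data):
        self.awaiting_ack = True
        self.out(data)         # one-way: no return value expected

    def on_ack(self, _event):
        self.awaiting_ack = False
        log.append("sender: ack received")

class Receiver:
    def __init__(self):
        self.ack_out = None    # wired by the parent to Sender.on_ack

    def on_req(self, data):
        log.append(f"receiver: got {data}")
        self.ack_out("ack")    # the explicit ACK wire, visible on
                               # the schematic

tx, rx = Sender(), Receiver()
tx.out = rx.on_req             # the parent schematic does the wiring
rx.ack_out = tx.on_ack
tx.start("hello")
print(log)  # ['receiver: got hello', 'sender: ack received']
```

(For brevity this sketch delivers events by direct call; a real VF system would queue them, as run-to-completion semantics require.)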
- Code components consist simply of code that runs to
completion when an event arrives on some input pin. It is
often easiest to design code components as state machines, but entirely
possible to write them in a more familiar procedure form - as long as
the procedure doesn't call anything other than local or system-supplied
functions. [Aside: consider most 'normal' programming
languages. A procedure (aka wannabe component) doesn't know
who calls it - it simply knows what input parameters were passed to it,
but, it still has to know where its output is going. If it
calls another procedure, it needs to know too many details about that
procedure - its name and its input signature (parameter
list). True Encapsulation is a myth in 'normal' programming
languages.]
- Yes, this can be considered to be just a variant on the
cooperatively scheduled multitasking paradigm. The addition
of output pins and schematics makes it dramatically more reasonable to
use the VF paradigm and it turns out to be quite powerful.
The schematic-based structure has many interesting implications (many
of which we discovered slowly over more than a decade of use):
- If a software component is really, really an
asynchronous black box with inputs and outputs, all of a sudden it
makes sense to formalize black-box drawings. A box with pins
is a component and is compiled to a reference to an instance of that
component. A line between pins is a wire that is compiled
into a wiring table after the compiler type-checks the source pins and
destination pins for compatibility. A schematic is an object
that contains instances of components plus a set of wiring tables that
shepherd event data between pins of its components (and from/to its own
pins).
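The compiled form described - instances plus wiring tables - can be sketched as plain data. Below, a hypothetical schematic is a table mapping a (component, output pin) source to a list of (component, input pin) destinations, with a rudimentary compile-time-style check that each wire lands on a real input pin (Python, illustrative names only):

```python
# Sketch of a schematic compiled to a wiring table: each entry maps a
# source (instance, output pin) to destination (instance, input pin)
# pairs.  Delivering an event is just a table lookup.

instances = {
    "adder":  {"inputs": {"in0"}, "got": []},
    "logger": {"inputs": {"msg"}, "got": []},
}

wiring = {
    ("adder", "out"): [("logger", "msg")],   # one wire; fan-out would
}                                            # be a longer list

def check(wiring, instances):
    # Compile-time-style check: every wire must land on a real input pin.
    for src, dests in wiring.items():
        for inst, pin in dests:
            assert pin in instances[inst]["inputs"], (src, inst, pin)

def emit(src_inst, src_pin, event):
    # Shepherd the event along every wire attached to the source pin.
    for inst, pin in wiring.get((src_inst, src_pin), []):
        instances[inst]["got"].append((pin, event))

check(wiring, instances)
emit("adder", "out", 42)
print(instances["logger"]["got"])  # [('msg', 42)]
```

Because the wiring is data rather than code, rewiring the schematic means editing the table - no component is touched, and the check can be re-run before anything executes.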
- True Encapsulation - Since components can't call anything
that is not within their own environment (e.g. local functions, or
system functions), components are truly encapsulated. This
restriction can be strongly enforced if the code-component language is
based on a VM. If you write code components in C or
assembler, then "anything goes" and it is up to the programmer to
maintain the encapsulation conventions.
- Everything is Asynchronous - Each component acts as if it
were contained in a separate process. As we already know,
processes are the best bearers of encapsulation, but they are
infrequently used because of their cost (most embedded apps use just a
handful of processes and high-performance servers attempt to optimize
the cost by using process pools; in VF, it is not uncommon for an
embedded app to use 100's of pre-compiled 'processes'). The
VF semantics (e.g. no recursion at the schematic level, no pre-emption,
conservation of causality, single stack) allows the system to build
'processes' that are much more efficient than those found in typical
RTOSes.
- Incremental Downloading - Since a component is fully defined
by its input API and its output API, and it cannot know where its
inputs are coming from, nor where its outputs are going to, it suddenly
becomes possible to unplug components and replace them with other
pin-compatible components. This means that you can download a
change to the system by downloading the minimal set of components to
effect the change. Our all-time best incremental download was
6 bytes (a component that contained a different constant, along with
the appropriate component-header info).
- Process Synchronization - In the default case, you don't
synchronize at all, you simply let all of the components "free run"
(note that in an event-driven paradigm like VF, most components are, on
average, sleeping - waiting for an event to kick them off, but ready to
run as the need arises). In the few cases where you actually
need synchronization, you build it in explicitly - it shows up on the
schematic for all to see. This is a concept that is very
familiar to engineers, e.g. hardware engineers. There are
lots of known solutions to the synchronization problem just waiting to
be plucked from the grasp of unsuspecting hardware engineers.
The first time we designed a VF application that needed resource
allocation and synchronization, we literally laughed when we saw what
we had (naturally) drawn. The simplest solution (to that
problem) was a daisy chain.
The system had a resource that
was limited to a fixed number of units. To handle the
boundary case where there are fewer units left than requests for them
(e.g. the last unit of the resource with more than one requestor vying
for it) we simply strung the components out in a daisy chain.
Each component had a simple piece of logic in it - "if I am waiting for
this resource, then take it, otherwise pass it on (to an output pin) to
the next component in line". The first component in line had
the 'highest' priority (since it was first in line). We could
add components to the chain whenever we wanted without having to
revisit the synchronization problem. We could change the
'priorities' of components by simply shifting their position in the
chain. We designed a single arbitrator part and then plugged
it into each of the components in the chain. [This algorithm
is like a token-based network protocol embedded into the hardware
physical layer.]
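The daisy chain described above is easy to model: a grant carrying the free unit enters at the head of the chain, and each component either takes it or passes it to the next. A sketch (Python; the component logic follows the text, the names are hypothetical):

```python
# Sketch of daisy-chain resource arbitration: "if I am waiting for this
# resource, take it; otherwise pass it on to the next component in
# line".  Priority is purely positional - first in the chain wins.

class Arbiter:
    def __init__(self, name):
        self.name = name
        self.waiting = False
        self.holds = False
        self.next_out = None          # wired to the next arbiter, or None

    def on_grant(self, unit):
        if self.waiting:              # I want it: take it
            self.waiting = False
            self.holds = unit
        elif self.next_out:           # pass it down the chain
            self.next_out(unit)
        # else: end of chain - the unit returns to the pool (omitted)

# The parent schematic wires three arbiters into a chain.
chain = [Arbiter("a"), Arbiter("b"), Arbiter("c")]
for left, right in zip(chain, chain[1:]):
    left.next_out = right.on_grant

chain[1].waiting = True               # 'b' and 'c' both want the
chain[2].waiting = True               # last remaining unit
chain[0].on_grant("unit-0")           # one unit enters at the head

print([x.holds for x in chain])  # [False, 'unit-0', False]
```

Adding an arbiter or changing a priority is a wiring change only - exactly the property the text describes.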
- Components are truly encapsulated, and can be
plugged in and out of schematics. There is nothing to stop
you from unplugging a component from one schematic and plugging it into
another - totally different - schematic. The other schematic
can be a test jig. Unit testing - solved.
Back-to-back testing? No problem, just copy a schematic and
connect it to the first, reversing wires where appropriate.
Path and coverage testing? No problem - print out the state
diagrams, run tests, use a highlighter pen to mark the tested paths on
the printed diagrams.
- All of a sudden, white-board quality black-box
diagrams have a formal semantics (that can be compiled to code and can
be checked by a compiler / design-rule checker). And it's all
done with pictures. People can understand pictures.
Even managers can understand pictures! No complicated poop
about 'classes' and ridiculous levels of code-based detail.
The VF concept has led to something that no other language seems to
have provided for architecture. Providing even simple
tools and language constructs for architecture results in large
gains. For example, after investing in a year of development
by three senior engineers, two junior engineers and one manager, our
customer's business case changed (surprise!). The team
revamped the whole architecture of the system and got it working again in
less than two weeks. The 'change team' consisted of one (1)
non-programmer manager (who understood the business requirements) and
one (1) junior programmer who knew how to manipulate schematics and had
a broad understanding of the basic top-level components (and probably
asked questions of the other senior engineers).
- Management / Project Management - A VF application consists
of many (100's) of encapsulated components. A Project Manager
simply needs to put each component on a (large) Gantt Chart, assign
development times (as guessed by the developers) to each component and
track progress. Progress consists of "done" and "not done"
(and nothing in between - the end of the 90% complete
problem). Over the years, we have discovered a "rule of
thumb" - if the time assigned to a component (by a developer) is longer
than 2-3 days, then the component is insufficiently "specified" - back
to the drawing board to break it down into smaller pieces that fit in
under 3 days.
- The Law of Averages - An application is broken down into
100's of well-specified components. The developers guess at
how long each component will take to code up and test. Some
of the guesses are wrong - some are high, some are low. In
practice, the errors tend to cancel out, and the overall schedule tends
to be met more closely compared with any other method we have used.
- Development Cycle - The development (code writing) stage is
fairly well defined by the Gantt Chart. The architecture
phase (prior to development) is unbounded. We have found that
in our work - embedded systems that result in a couple of 100's of K of
code (e.g. average of <1K per component) - two weeks of
architecture time seems to suffice. Architecture is done by a
group of all the senior designers and might include some of the
mid-level and junior designers. Design is done on a white
board with someone 'taking notes' by transcribing the diagrams into the
VF tools. We stop redefining a component (its input and
output API) when it is agreed by all in the room that the component can
be coded in 0.5-3.0 days time.
- Once the initial architecture has
been completed, the components to be implemented are just thrown into a
bag. Upon completing one component, a developer just grabs
the next unimplemented component from the bag and works on
it. The more developers you can apply to the problem, the
faster the implementation goes. This works better than in
class-based and library-based development because VF components are
truly encapsulated - each component lives in its own world and cannot
interact with other components (there are no inheritance issues nor
other coupling issues).
- In a VF application, integration is done
during the architecture phase, i.e. before coding commences!
This is possible because each (well designed) component has a
well-defined API (input and output), and, because the VF tools can
check the consistency of the wiring between components that comprise
the architecture (before code is written - the tools only need to check
that outputs are attached to inputs that are compatible). Of
course, late-in-the-day changes to the business requirements affect the
original architecture and are 'unbounded' in time, but we have found
that such changes are manageably short in practice.
- Control Flow vs. Data Structure - VF is a tool that
emphasizes the design of control-flow architectures. This is
the converse of the design principles that formed the basis of the
various design methodologies which resulted in UML (they started life
in the data design paradigm). A control-flow emphasis is
especially well-suited to embedded systems design. Corollary
- embedded systems development is hard when you start development using
the wrong paradigm. In practice, we have found that many of
today's programming problems are actually control- or event-driven -
e.g. GUIs, interactive systems, servers. Few of today's real
problems consist solely of procedural loops that calculate complex
results.
- MVC, MFC, presentation managers, RTOSes, blah, blah, blah -
The complicated morass of libraries and classes that are used, today,
to program applications are a result of thinking about the
problem 'the wrong way around'. They try to force-fit
procedural thinking onto problems that are fundamentally
event-driven. Many experienced programmers have a 'gut feel'
that "this should be easier". That is because the developers
have to dig through mountains of textual, often class-based (i.e.
data-oriented), code that tries to push procedural programming onto a
paradigm that is not suited to the applications.
We've found that many average programmers are resistant to programming
this way.
Managers 'get it'.
Experienced programmers, who have become architects at heart, 'get it'.
Business people faced with hoary problems of restricted bandwidth
updates and frequent upgrades 'get it'.
These other-than-average programmers, managers and businesspeople
understand the need for formally specifying application architectures
that can be easily understood (via visualization), can be integrated
easily and can be upgraded rapidly.