Tech posts by aclassifier

You are at http://oyvteig.blogspot.com
Last edit: 4Sept2015
Started move: September 2012
All new updates are at http://www.teigfam.net/oyvind/home/, but the overview here is kept updated

Friday, January 22, 2010

015 - In search of Go! programming language rendezvous

Intro

Update: in 2021 I bundled this up in my My Go (golang) notes, even though this note is not about Go/golang at all.

I would not have known about the Go! programming language [1] had I not been made aware of the Go programming language [2]. Since I liked so much of what I read about Go's concurrency solutions, what could be more natural than to have a look at Go!!

(Why Go!, with an exclamation mark, a punctuation mark, in the name? It's a smart language, but is the ! in the name the reason why even Google didn't detect the name clash when they named Go? And see the mismatch above: that paragraph ended with two !!, which are not at the same semantic level!)

Name aside. I ordered the "Let's Go!" book [3] to have it right here beside me, since I concluded there was no Go book to hold on to yet (Jan. 2010). What I write here will be with reference to Let's Go! - but there is a "Go! reference manual" included in Go!'s download package, and you will find much of the same there.

I will do no Go! programming here. (I will, however, allow myself a few small contrasting sketches in Go - golang - along the way.)

Go! & concurrency

Go! is an object-oriented, functional, procedural and logic programming language. I should study it; I see that I have a lot to learn. However, it is also multithreaded - my home field, even if I only know this little corner of it.

A user "spawns off computations as separate threads or tasks". It handles synchronized sharing of resources and coordination of activities. But a process (I prefer that here for thread or task) is not a primary citizen of the language. If you don't use a shared resource correctly, or don't comunicate between processes correctly, well - it's your problem.

However, sync is built into the language (like synchronized in Java, I guess). It had better be: such things are hard to do without when you need them. Message communication, on the other hand, is through a library: go.mbox is the standard message communication library.

Go! & sync

There is a sync mechanism. Any stateful object may be protected with sync. A stateful object uses @> as class constructor where a state-free one uses @=. A stateful object seems to be a blacker box, since instance variables are only accessible through access methods. However, there must be more to it than this - i.e. some stateful objects may not be protectable by sync, or some protectable with sync may not be stateful (I don't know) - since the book points out that deciding whether sync may be used around an object may have to be delayed until run time.

sync may have timeout.

sync may be guarded. According to the value of a guard or guards, the sync will pick a choice. I don't know if the language requires a guard to be thread-safe, local and not exportable from the process where it is used; I'm afraid not. The choice is prioritized: if several are ready, it runs the first (in time) first. I don't see any non-deterministic choice. Also, I assume that fairness must be explicitly programmed, if it is a matter of concern. And since the guards only have to evaluate to true, non-determinism may be programmed through random numbers. (But on a formal level, non-deterministic and random choice are not the same [4].)
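
For contrast, Go (no exclamation mark) does offer non-deterministic choice: when several cases of its select statement are ready, one is picked at uniform pseudo-random. A minimal sketch in Go (golang):

    package main

    import "fmt"

    // When both cases are ready, Go's select picks one at pseudo-random:
    // non-deterministic choice, not prioritized choice.
    func main() {
        a, b := make(chan string, 1), make(chan string, 1)
        a <- "from a"
        b <- "from b"
        select {
        case m := <-a:
            fmt.Println(m)
        case m := <-b:
            fmt.Println(m)
        }
    }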

sync may also be used for notification. So Java's synchronized and notify are not needed, since it's all in one mechanism here? An object sync'ed on a list may be notified when the list becomes non-empty. Nice! But only one object, so Java's notifyAll is not supported. (Then again, ..All in Java may wake too many, and they would all have to go back to sleep again.)
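
Not Go! syntax (I have none to show), but a rough analogue in Go (golang), under the assumption that Go!'s sync behaves like a lock plus a single-waiter notification: a taker blocks until a list becomes non-empty, and Signal wakes exactly one waiter:

    package main

    import (
        "fmt"
        "sync"
    )

    // A list protected by a lock; Take blocks until the list is non-empty.
    type SyncedList struct {
        mu    sync.Mutex
        cond  *sync.Cond
        items []int
    }

    func NewSyncedList() *SyncedList {
        l := &SyncedList{}
        l.cond = sync.NewCond(&l.mu)
        return l
    }

    func (l *SyncedList) Put(v int) {
        l.mu.Lock()
        l.items = append(l.items, v)
        l.cond.Signal() // wake one waiter only; Broadcast = Java's notifyAll
        l.mu.Unlock()
    }

    func (l *SyncedList) Take() int {
        l.mu.Lock()
        for len(l.items) == 0 { // the guard is re-checked after each wake-up
            l.cond.Wait()
        }
        v := l.items[0]
        l.items = l.items[1:]
        l.mu.Unlock()
        return v
    }

    func main() {
        l := NewSyncedList()
        done := make(chan bool)
        go func() { fmt.Println(l.Take()); done <- true }() // blocks first
        l.Put(42)
        <-done
    }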

In [3], page 177, McCabe points out that:
The central message about the use of sync is that it makes for an excellent way of resolving access contention for shared resources. On the other hand, sync is a perfectly terrible technique for coordinating multiple threads of activity. The reason is that there is no direct way of establishing a rendezvous of two or more threads with sync. For that we recommend looking at the go.mbox message communication library.
So, sync makes it possible to make thread-safe libraries.


Go! & Mailbox / Dropbox

First, what is a rendezvous? According to the Wikipedia disambiguation page it is "A communication method of Ada (programming language)". Going into the Ada page, there is no mention of it (yet..). However, it is a place in code where the first one to arrive (be it sender or receiver) waits for the second to arrive. Then they exchange their data, and both processes continue. The important things here are:
  1. A rendezvous involves no message buffers
    (References to process local data are carried into the rendezvous)
  2. A rendezvous involves blocking
    (But it's never busy-polled, and beware that this does not mean that less gets done! At the end of the day in any paradigm, when something isn't ready it isn't.)
I have discussed much of this in my blog 007 - and what it might be good for.
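
A minimal sketch of a rendezvous, in Go (golang) since that is what I have at hand: an unbuffered channel. Whoever arrives first, sender or receiver, waits for the other; no message buffer is involved, and both continue after the exchange:

    package main

    import "fmt"

    func main() {
        ch := make(chan string) // zero capacity: communication is synchronization
        go func() {
            ch <- "data" // blocks until main is ready to receive
        }()
        fmt.Println(<-ch) // blocks until the sender arrives; then both proceed
    }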

Ada's concurrency is based on C.A.R. Hoare's process algebra CSP (Communicating Sequential Processes). Just like Go's is! (Beware: no exclamation mark.) Just like the first runnable such language: occam. Just like the formal language Promela. I say "based on", meaning they are all dialects.

So, I would need to investigate Go!'s mailbox. The concept is that a mailbox is shared, and that processes drop mail into a mailbox through a dropbox. There is one way to drop: use the post method. It is typed, of course, but there is no way to block on it. So the programmer who thinks that things get easier that way (and sometimes they do!) just sends and forgets.
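
This is not go.mbox, but a sketch in Go (golang) of the shape I read out of it: a shared, many-writers one-reader mailbox with a fire-and-forget post:

    package main

    import "fmt"

    // A shared mailbox: any number of writers may post, one reader reads.
    // The big buffer approximates a post that "never blocks"; a truly
    // unbounded mailbox would need a goroutine managing a growing queue.
    func main() {
        mbox := make(chan string, 1024)
        go func() { mbox <- "mail from writer A" }() // post and forget
        go func() { mbox <- "mail from writer B" }() // post and forget
        fmt.Println(<-mbox) // the single reader; A/B order is not defined
        fmt.Println(<-mbox)
    }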

A Go! mailbox is for multiple writers, single reader (many-to-one). It handles coordination, but has specifically been designed not to handle synchronization. In CSP those are the same thing, since scheduling is driven by the rendezvous (or, in the speak I am used to: the channels drive the scheduling; or: the scheduler is just a channel handler (if it's not preemptive, which the model is neutral to)).

Since a Go! mailbox is for multiple writers, single reader, and since there is no addressing or placement, it is the use of them that governs the "addressing". In CSP, one communicates on a named channel (changed from the 1978 to the 1985 version, to better facilitate libraries), and a channel may be unconnected, connected, or even sent over a channel (as in occam-pi and Go). Also, the usage of Go! mailboxes is not verified (unlike in occam / occam-pi).

The Go! mailbox reader process may block. Blocking means that you don't have to busy-poll for data. A reader may instruct the mailbox to search in the mailbox for matching messages, and block if there isn't any.

This is nice! However, I fail to see the rendezvous in it.

Go! & perhaps some suggestions

I don't know Go! well, and am not aware of the language's application domain, but provided, as the book mentions, a rendezvous with the properties listed above is of any interest to its users, then I have some suggestions.

Observe that I don't want Go! to be equal to any other language!

The suggestion is to add more "symmetry" to Go!. The occam realization of CSP has input ALTs with guards, but not output ALTs. Promela has both, with guards. I believe that CSP itself describes both.

Go! has protected regions with sync, with guards - that should be symmetric enough? But it has blocking on read of a mailbox, not blocking on send.

1. Blocking on send: if I drop "any data" in a dropbox, it could block if the receiver does not need "any data". Or, if the receiver waits for a list of names, then block until somebody supplies one. This could make up an interesting type of many-to-one "channel".

2. Return to known: occam has a way to directly identify where a reply is to be sent; in Go! this has to be coded. Occam does it by putting the channels from many senders into an array of channels, on which the receiver ALTs. The index into the array falls out of the ALTing even though it is never sent by the sender, so the receiver knows where to send a reply (a sketch in Go follows after point 3).

3. Propagate guards into the mailbox system: I don't see in Go! that we could put an input guard into the mailbox. If a process wants to accept inputs from the set A=[a,b,c,e,f], then if a sender in the set B=[d] tries to dropbox, it would block. One event in A must happen before B would be triggered (perhaps on a next round). Guards are constant while they are in use.
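
On point 2, the nearest thing I know of in Go (golang) is reflect.Select over a slice of channels: the index of the chosen case falls out of the selection, like occam's ALT index, so the receiver knows where to reply. A sketch (all names are mine):

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        const nClients = 3
        requests := make([]chan int, nClients)
        replies := make([]chan int, nClients)
        cases := make([]reflect.SelectCase, nClients)
        for i := range requests {
            requests[i] = make(chan int)
            replies[i] = make(chan int)
            cases[i] = reflect.SelectCase{
                Dir: reflect.SelectRecv, Chan: reflect.ValueOf(requests[i]),
            }
        }

        done := make(chan bool)
        go func() { // client number 1
            requests[1] <- 42
            fmt.Println("client 1 got reply:", <-replies[1])
            done <- true
        }()

        chosen, value, _ := reflect.Select(cases) // the index is known here
        replies[chosen] <- int(value.Int()) + 1   // reply to that same client
        <-done
    }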

Would this CSP stuff be interesting to put into an object oriented, functional, procedural and logic programming language with support for concurrency (Go!)? If yes, what would it be good for?

Disclaimer

This note has been written with most knowledge about CSP-type concurrency and some of its implementations. My knowledge about Go! is next to none. Therefore, there may be smart combinations of the concurrency-related facets and the other primary citizens of the Go! language that render this note in error. Please comment and I will change it!

Another thing: my suggestions may well fail a verification if modeled in a formal language. Since I have moved things into the mailbox system and included more blocking, they could cause safety and liveness property violations. In other words, it may not be possible to make a safe language with these features. I hope I am wrong!

References

[1] - Go! programming language, see http://en.wikipedia.org/wiki/Go!_(programming_language)

[2] - Go programming language, see http://en.wikipedia.org/wiki/Go_(programming_language)

[3] - Let's Go! Francis (Frank) G. McCabe (ISBN 0-9754449-1-3), 2007

[4] - Non-deterministic and random choice, a little about this at http://en.wikipedia.org/wiki/Promela#Case_Selection



Sunday, January 17, 2010

007 - Synchronous and asynchronous

Original posting


Intro and quotes

Observe that in 2010 the "action" goes on in the comments, at the bottom of this post!

[1] caused me to write this post, because I stalled at some of its paragraphs. The authors hit a chord with my favourite mantra: a computer programmer should be fluent in both asynchronous and synchronous systems, and know the pros and cons, necessities and unnecessities of both. In this post I will try to outline what systems means in this context.

I will start with the quotes from [1]:
[1.1] - "The flexibility required of this missions could not have been accomplished in real time without an asynchronous, multiprogramming operating system where higher priority processes interrupt lower priority processes."
[1.2] - "To our surprise, changing from a synchronous OS used in unmanned missions to an asynchronous OS in manned missions supported asynchronous development of the flight software as well."
[1.3] - "Lessons learned from this effort continue today: Systems are asynchronous, distributed, and event-driven in nature, and this should be reflected inherently in the language used to define them and the tools used to build them."
[1.4] - "Async is an example of a real-time, distributed, communicating FMap structure with both asynchronous and synchronous behavior."
Disclaimer 1: This post is not a review of [1]. Parts of that paper acted as a catalyst to structure some of my own thoughts - many of them not even mine. (I should say this again, because I have done it over and over.) The real theme of [1] I have not even mentioned: the Universal Systems Language (missing Wikipedia page - wrong: it has arrived after my original post, see [wikipedia] and [12]). The structure of this post is such that I have a virtual dialogue with the authors, based on the quotes above.
Disclaimer 2: This is not a scientific paper - it is a post on a blog.
[1.1] - Synchronous vs. asynchronous OS
[1.1] - "The flexibility required of this missions could not have been accomplished in real time without an asynchronous, multiprogramming operating system where higher priority processes interrupt lower priority processes.
I am not certain what the authors mean by an asynchronous, multiprogramming operating system as opposed to a synchronous OS.

But I will give it a try. I assume that in a synchronous OS the processes (also called tasks) run in some sort of slot, like some milliseconds each. They have to finish their work within their slots, so that the next slot is not delayed. This is not a side effect; it's the whole idea. One process then is not able to interrupt any other - since they run in sequence. Maybe even asynchronous I/O (like polling of interrupt sources) is assigned a slot. With such a system there are no problems with shared access, so semaphores are not needed. Things seem simple.

However, there are several process scheduling algorithms for this [wikipedia]. I assume that some of these would fall into the synchronous OS category. One such scheduling policy could be rate monotonic - where "tasks with shorter periods/deadlines are given higher priorities" [wikipedia].

I feel confident that the authors did not mean that systems - such as those programmed in Ada, with synchronous rendezvous or channel communication schemes - constitute a synchronous OS at the scheduler level.

Or I could be wrong, and a synchronous OS is one that runs and schedules programs of the synchronous programming language type [wikipedia]?

A synchronous OS in the sense of one using synchronous communication is probably not what they mean - but asynchronous OS and asynchronous communication might be paired in [1]. However, I hope synchronization and OS are orthogonal, also in that paper.
[Def 1] - So I hope that an "asynchronous operating system" could include an Ada-style system without any visible "OS" at all. In the following I have assumed this.
[1.1] again - Required flexibility
[1.1] - "The flexibility required of this missions could not have been accomplished in real time without an asynchronous, multiprogramming operating system where higher priority processes interrupt lower priority processes.
If the authors, by asynchronous, multiprogramming operating system, mean that the communication mechanism between tasks is mostly based on asynchronous messaging, then I need more explanation than [1] gives me.

However, I think that this is not what they mean. They simply mean that an asynchronous OS is not a synchronous OS - and that messages (events) between processes drive the system. It's not the ticking of a big clockwork that displays time, like the inner machinery of Big Ben in London, where all wheels are synchronous. It's more like the combined effort of any workforce with a loose degree of central control.

The communication mechanism between processes is rather crucial. Some find that asynchronous communication (send and forget) and a common message buffer solve their needs. Some would say a synchronous communication mechanism does (channels or Ada-like rendez-vous). The latter is often seen as rather stiff or rigid and, I dare say, little understood. I have written a paper about this: CSP: arriving at the CHANnel island - Industrial practitioner's diary: In search of a new fairway [2].

Using an asynch message scheme would cause a process to have to know how the internals of the other processes are programmed. Here is a quote from [3] - which discusses an alternative to the Java monitor synchronization mechanism, that non-Ada type communication scheme:

One crucial benefit of CSP is that its thread semantics are compositional (i.e. WYSIWYG), whereas monitor thread semantics are context-sensitive (i.e. non-WYSIWYG and that's why they hurt!). Example: to write and understand one synchronized method in a (Java) class, we need to write and understand all the synchronized methods in that class at the same time -- we can't knock them off one-by-one! This does not scale!! We have a combinatorial explosion of complexity!!!

CSP is not out of scope here - Ada's rendez-vous is based on CSP [5]. The WYSIWYG point is also discussed in [4].

Bear in mind that communication scheme and degree of synchronization are closely related. With zero-buffered synchronous channels, communication is synchronization. With buffered, asynchronous-until-full channels, synchronization happens when the channel reaches capacity and becomes synchronous. When the channel size is infinite, synchronization never happens - or it does not happen before a sender at application level waits for a response, i.e. inserts synchronization.
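
In Go (golang) terms, a sketch of these degrees (the infinite case has no direct Go equivalent):

    package main

    func main() {
        sync0 := make(chan int)     // zero buffer: every send is a rendezvous
        asyncN := make(chan int, 2) // buffered: asynchronous until full
        // an infinite channel would need a goroutine holding a growing queue

        go func() { sync0 <- 1 }() // blocks until someone receives
        <-sync0

        asyncN <- 1 // returns at once
        asyncN <- 2 // returns at once; the buffer is now full
        // a third send would block here: the channel has become synchronous
    }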

Different programmers tend to choose different methodologies here. Or the traditions and earlier usage in different companies do. Or changing times. My previous post '006' tries to discuss this.

There are also several other points, like the producer/consumer problem. Here, buffers would fill up (to overflow?) with a producer faster than the consumer. Also, with messages flowing into a process, there may be no way to stop a message from arriving. With input guards and synchronous communication it's possible to close the door on one client, to finish off a session with another client - undisturbed by the fact that a message is waiting outside the door. An important building block.
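
In Go (golang) the door can be closed with a nil channel, which disables its case in a select. A sketch (names are mine) of a server that accepts no new client while a session runs:

    package main

    import "fmt"

    // While "door" is nil, the first case is disabled: the server finishes
    // the running session undisturbed by clients waiting outside.
    func serve(newClient, sessionMsg chan string, done chan bool) {
        door := newClient
        for {
            select {
            case c := <-door:
                fmt.Println("session starts with", c)
                door = nil // close the door
            case m := <-sessionMsg:
                fmt.Println("in session:", m)
                if m == "bye" {
                    door = newClient // session over: reopen the door
                }
            case <-done:
                return
            }
        }
    }

    func main() {
        newClient := make(chan string)
        sessionMsg := make(chan string)
        done := make(chan bool)
        go serve(newClient, sessionMsg, done)
        newClient <- "client A"
        sessionMsg <- "hello"
        sessionMsg <- "bye"
        newClient <- "client B" // accepted only after the door reopened
        done <- true
    }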

However, since the world is basically asynchronous, it is important that programmers know the different tools and methodologies. Synchronous communication may be built on top of an asynchronous layer - and the opposite. I will come back to this. Deadlock (synchronous) and pathological buffer overflow (asynchronous) must both be avoided. This is exciting!

[1.2] - Loosely coupled development teams
[1.2] - "To our surprise, changing from a synchronous OS used in unmanned missions to an asynchronous OS in manned missions supported asynchronous development of the flight software as well."
If one of the systems I have been working on is in fact an asynchronous OS, but with synchronous communication (channels) layered on top - and that is the same as the above (assuming [Def 1]) - then we have probably experienced the same. I have written a paper about this: High Cohesion and Low Coupling: the Office Mapping Factor [6].

I argue that the higher the cohesion within each process (they each do a well-defined job), and the lower the coupling (with synchronous communication and WYSIWYG semantics), the easier it seemed to be for programmers to agree on the interface contract (the message sequences, or protocol) - and then enter the office and do the job. There was little need to discuss any more.

My paper is a case observation from industry - so the Office Mapping Factor is only a hypothesis until somebody takes the token and carries on.

In [1.2] there is a term, asynchronous development, which needs a definition. I think they are talking about decoupling the development members and teams as much as possible from each other. My gut feeling is that this is the same as my Office Mapping Factor. However, if they base their software on asynchronously communicating processes - then they would perhaps get an even better yield by using [Def 1] - Ada-type communication.

If the Ada designers had used named rendez-vous or named channels to communicate between processes, instead of named endpoints, it would have been even easier to build good libraries. C.A.R. Hoare, in his second CSP book of 1985 (ref. in [5]), took the channels to be named entities that could be ripped up and placed anywhere - at run time. A process communicates on a named channel (like on an IP address), not with another named process. This was about the same time that Hoare (with fellows at Oxford University), together with people at Inmos (David May etc.), built a runnable instance of CSP, the programming language occam [7], and a processor for it, called the transputer. It would be wrong of me not to admit that it is from those sources and fellow admirers I have been drinking, from about 1980 to this very day.

This would make the programmers' interfaces (APIs) better, with well-defined message sequences and WYSIWYG semantics - tools for getting "asynchronous development" with a high Office Mapping Factor.

[1.3] - Systems are increasingly asynchronous
[1.3] - "Lessons learned from this effort continue today: Systems are asynchronous, distributed, and event-driven in nature, and this should be reflected inherently in the language used to define them and the tools used to build them."
Now I really begin to doubt that [Def 1] is anything but a misinterpretation in the context of [1]. Because, yes, there is a trend. I saw the web page of a high-profile lecturer who was to give a one-day course. One of the points was Running tasks in isolation and communicate via async messages. I then queried a discussion group, and one in the community said that:

"Most likely it's the fire and forget and hope the buffer doesn't overflow variety - Erlang style concurrency. This seems to be the current method everyone is interested in (Erlang, Scala actors, F# Async Mailboxes, Microsoft Concurrency and Coordination Runtime). There is sometimes some form of guard available, be it one that allows checking of the incoming message, or selection of one from multiple inputs."

Maybe this is what the authors refer to. Or something along that line.

When they talk about language, I am sure they mean the language described in [1], USL. With solid semantics - and if the semantics is well enough defined, does it matter whether it's based on synchronous or asynchronous communication? Harel state charts in UML, for example, are based on asynchronous communication between state machines. And they are pretty solid, and do not really void formal verification, it seems. Of course it is possible to build good systems with this, both unmanned and manned missions of any literal kind.

But maybe, if the same designers knew the [Def 1]-style systems as well as they know the asynch OS / asynch communication schemes, we would get even better systems? To introduce a world where it's understood that you cannot simulate or visualize a system into being error-free. Some tools now do verification: IAR's Visual State (of UML state machines), Spin (of Promela models) and FDR2 (of CSP), as well as LTSA (of FSP) analysis, see [8] - [11].

[1.4] - Joining heads for synch and asynch?
[1.4] - "Async is an example of a real-time, distributed, communicating FMap structure with both asynchronous and synchronous behavior."
Now, what does this mean? If it's asynchronous-OS and synchronous-OS behaviour, it would sound strange to me. According to Turing, any machine can run on any other. Laid out here: an asynch or synch OS may be built on top of the other. So the behaviour should really be the same? If MS Word runs on Mac OS X or Windows, the behaviour is the same? (Well, almost..)

Instead, I have a hypothesis that this time they are talking about asynchronous and synchronous communication behaviour. Then it makes more sense to me to talk about different behaviour.

Any software engineer should know these methodologies (tools), and the strengths and weaknesses of each: when it's appropriate to use one instead of the other, and how combinations could be used.

I will not sum up here. I have glimpsed through some points, seriously tearing and wearing at the [1] paper, and used it as a catalyst. I have written about these things for years. Incomplete as they are, there may be some points in some of my papers, at http://www.teigfam.net/oyvind/pub/pub.html. Maybe I will try to sum up in a future posting.

References

[1] - Universal Systems Language: Lessons Learned from Apollo. Margaret H. Hamilton and William R. Hackler, Hamilton Technologies, Inc. In IEEE Computer, Dec. 2008 pp. 34-43 ("USL"). The authors have commented on this blog, see [C.1] (below).

[2] - CSP: arriving at the CHANnel island - Industrial practitioner's diary: In search of a new fairway. Øyvind Teig, Navia Maritime AS, division Autronica. In "Communicating Process Architectures", P.H. Welch and A.W.P. Bakkers (Eds.), IOS Press, NL, 2000, Pages 251-262, ISBN 1 58603 077 9. CPA 2000 conference (In the series: "WoTUG-23"), Communicating Process Architectures, University of Kent at Canterbury, UK, 10-13. Sept. 2000. Read at my home page at http://www.teigfam.net/oyvind/pub/pub_details.html#CSP:arriving_at_the_CHANnel_island

[3] - Letter to Edward A. Parrish, The Editor, IEEE Computer. Peter Welch (University of Kent, UK) et al. dead url: http://www.cs.bris.ac.uk/~alan/Java/ieeelet.html (1997), internet archive at http://web.archive.org/web/19991013044050/http://www.cs.bris.ac.uk/~alan/Java/ieeelet.html (1999)

[4] - A CSP Model for Java Threads (and Vice-Versa). Peter Welch. Jeremy M. R. Martin. Logic and Semantics Seminar (CU Computer Laboratory) (2000) - http://www.cs.kent.ac.uk/projects/ofa/jcsp/csp-java-model-6up.pdf

[5] - CSP - Communicating sequential processes - http://en.wikipedia.org/wiki/Communicating_sequential_processes

[6] - High Cohesion and Low Coupling: the Office Mapping Factor. Øyvind Teig, Autronica Fire and Security (A UTC Fire and Security company). In Communicating Process Architectures 2007. Alistair McEwan, Steve Schneider, Wilson Ifill and Peter Welch (Eds.). IOS Press, 2007 (pages 313-322). ISBN 978-1-58603-767-3. © 2007 The authors and IOS Press. Read at http://www.teigfam.net/oyvind/pub/pub_details.html#TheOfficeMappingFactor

[7] - The occam programming language - http://en.wikipedia.org/wiki/Occam_(programming_language)

[8] - IAR's Visual State (of UML state machines) - http://en.wikipedia.org/wiki/IAR_Systems

[9] - Spin (of Promela models) - http://en.wikipedia.org/wiki/Promela

[10] - FDR2 (of CSP models) - http://www.fsel.com/

[11] - LTSA (of FSP models) - http://www-dse.doc.ic.ac.uk/concurrency/

[12] - Universal Systems Language - http://en.wikipedia.org/wiki/Universal_Systems_Language

New References from the Comments section (below)

[13] - A Comparative Introduction to CSP, CCS and LOTOS, Colin Fidge, Software Verification Research Centre Department of Computer Science, The University of Queensland, Queensland 4072, Australia, January 1994 - http://ls14-www.cs.tu-dortmund.de/main/images/vorlesungen/KomSer/uebungen/csp-ccs-lotos-fidge94g.pdf

[14] - Metric spaces as models for real-time concurrency,G.M. Reed and A.W. Roscoe, Oxford University Computing Laboratory - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.1311&rep=rep1&type=pdf

[15] - Concurrent Logic Programming Before ICOT: A Personal Perspective, Steve Gregory, Department of Computer Science, University of Bristol, August 15, 2007 - http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.3100&rep=rep1&type=pdf

[16] - http://oyvteig.blogspot.com/2009/03/009-knock-come-deadlock-free-pattern.html - "009 - The "knock-come" deadlock free pattern" blog here.

[17] - "3. Asynchroneous services for OpenComRTOS and OpenVE" in Altreonic News Q1 2010-1, http://www.altreonic.com/content/altreonic-news-q1-2010-1

Comments

[C.1] - Comments by the Authors of [1], Dec.2009

This post has been commented on by the authors of [1]. The comment has been approved for public reading. Read it at http://www.teigfam.net/oyvind/blogspot/007/ieee_teig_notes1.htm.

Thank you for the comments!

I'd certainly like others to comment in this blog! I will try to follow up myself.

[C.2] - CSP and "true concurrency" 

Comment [C.1] states that "Note, that for any of the process algebras (CSP, CCS or LOTOS) "parallelism" is a misnomer (see Receptive Process Theory); none of the process algebras support "true" concurrency, only interleaving, see.." [13] page 12.

Page 46 of [13] reads: "True concurrency semantics. Although conceptually simple the interleaving semantics used by the process algebras mean that the “concurrency” operators are not fundamental and hinder the ability to add time to the languages. True concurrency semantics, in which traces become only partially ordered, have been suggested as a more accurate model of concurrency. There are two principal methods: multi-set traces use linear traces with a set of actions listed at each step; causally ordered transitions maintain pointers denoting causal relationships between events in the traces."

This puzzles me, especially since I am a programmer, not a computer scientist, and therefore would not know the answer. However, I was running occam on multi-transputer systems in the nineties, and those processes of course had true parallelism, not pseudo-concurrency. When placing the same processes on a single machine, I certainly experienced that their semantics did not change. Occam had "parallel usage rules", helping me to obey CREW (concurrent read, exclusive write) on, say, segments of a shared array.

In [14] (page 12) it says: "The semantics we gave for CSP is by no means the only possible one that is reasonable..". "Also both the parallel operators we gave were true parallel operators, in that the time taken by the two operands was not summed: one might well need time-sliced pseudo-parallel operators in applications." Does this contradict the page 46 quote from [13] above?

And [15] says "It seemed reasonable to replace both coroutining and pseudo-parallelism by “real” concurrency, using CSP-like input (match) and output (bind) operations on channels (shared variables)."

From theory to practice: I see no way that parallel processes could run truly in parallel on a uni-processor machine. Instructions have to interleave at that implementation level. Not even interrupts (as processes) are truly parallel. Moving to a multi-core would help. But all software processes need to communicate. Whether this is done by "send and forget" or "move data during blocking rendezvous", I don't see that any of it would move a process into being not truly concurrent and only interleaving.

Seen from this programmer's head: when two processes are not communicating on a CSP blocking channel, they really are running truly concurrently! And when they do communicate (and synchronize), that is what they do - and the question of true or pseudo concurrency does not even arise. It's in a different problem domain.

I need help! Even if I, to a certain point, don't care whether my runnable processes would be considered pseudo or real. I have mostly come to know them as both.

However, for the processes of a formal language tool (like Promela), I guess it certainly matters.
(17Jan10)

[C.3] - USL Distributed Event Driven Protocol

This is described at comment [C.1] above.

I got a complete surprise when I saw this: it's quite close to my "knock-come" deadlock-free pattern [16]. The wording is different, but the essence is the same: tell that you have something ("knock"), then wait for a reply saying "come" - then send the data.
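
A sketch of the knock-come idea, here in Go (golang); the channel names are my own:

    package main

    import "fmt"

    func main() {
        knock := make(chan bool)
        come := make(chan bool)
        data := make(chan string)

        go func() { // the producer
            knock <- true // "I have something for you"
            <-come        // wait until the consumer says "come"
            data <- "the payload"
        }()

        <-knock // the consumer is free to do other work before this
        come <- true
        fmt.Println(<-data)
    }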

The USL now, is it a "language around this pattern"?

Since USL seems to build on this pattern, almost everything changes in my comments!

[16] is a back-to-back pattern that I have proven, with Promela/Spin, to be deadlock free.

And the protocol is why, in USL, "There can never be overflow; faster producers and slower consumers are never an issue". Of course! (That being said, a producer faster than its consumer is an application-level problem which must still be solved at that level.)

Now, is it I who have turned synchronous into asynchronous (my "come" is sent on an asynchronous interrupt channel with no data), or is it USL that, inside the engine, really is synchronous? They say: we can push the accelerator at any time; and I say: but the crankshaft makes ignition synchronous! Are we saying sort of the same thing?

Also, when USL says that "[1.2] - To our surprise, changing from a synchronous OS used in unmanned missions to an asynchronous OS in manned missions supported asynchronous development of the flight software as well." - I have to agree! I have said the same of "my" architecture! See [6] "High Cohesion and Low Coupling: the Office Mapping Factor."

So, it's true that the process, communication and synchronization "concurrency building blocks" certainly show up in the final software architecture. A wooden and a brick house look different and have different qualities.

Next, I should study Universal Systems Language built "around" the USL Distributed Event Driven Protocol. I think I have discovered the canvas of this painting. Then I need to find out at which distance I should relate to it. As a reader, viewer or ..painter?

I have now added a comment to [16].
(18Jan10)

[C.4] - Another synchronous/asynchronous comment

The company Altreonic in [17] (above) explains about their OpenComRTOS in a newsletter:
"The asynchronous services are a unique type of service as they operate in two phases. In the first phase the application task (or driver) issues a request to synchronize or pass on data via an intermediate hub entity. This request is non-blocking and hence the task can continue. Later on it can wait for the confirmation that the synchronisation or data exchange has happpened. A task can simultaneously have multiple open requests using one or more hub entities. This is valid for sending as well as receiving tasks and provides a very flexible and efficient mechanism. Safety is assured by the use of a credit system."
My boss at work sent me the newsletter, wondering if this was perhaps in the same street as the knock-come pattern. I got a feeling of remembrance, because I had talked with people from Altreonic for many years at CPA conferences, from the Eonic days and even before that, when some of us were using transputers, and quite a few of us occam.

So, this street seems to have several buildings: the USL protocol, OpenComRTOS, and even our knock-come when used as we use it - together with a small, embedded database. And certainly there must be many more!

I sent a mail to Altreonic and queried. A thread followed with Eric Verhulst, who wrote (he allowed me to quote):
"I remember we had our discussions many years ago when Eonic was promoting Virtuoso. Already at that time there was a discussion whether Virtuoso violated the strict rules of CSP. At that time I called it a “pragmatic” superset of CSP. The work done on OpenComRTOS (whereby we used formal modeling) has proven that this intuition was right. I think it has also "solved" the synchroneous-asynchroneous debate.  It is not an XOR debate or an AND debate. At the lowest level of the primitive operations, it is synchroneous. At the higher semantic levels we can introduce asynchronisity. That good thing is this is not in conflict at all with CSP."
What more is there to say? Personally, I started this blog quarreling about sync versus asynch. As in The Alchemist by Paulo Coelho, there was something to return to: maybe it's not that simple, this "sync versus asynch".

What more to ask: "at what level and for what usage?"

There is no use repeating comment [C.3] above. But maybe the general idea holds as a comment here too.

There is more to the OpenComRTOS story, though: the consequence their hub has for the safety of the application. The use of the knock-come pattern, the use of the USL protocol and the use of the two-phase hub mechanism of OpenComRTOS all seem to help with this. It's "send and not forget", or "send and forget but then be reminded again". For certain needs this is great stuff - especially if one needs to send off lots of data or commands to wherever, let them take their time (even at much, much slower speeds), and then get the result. The idea then is both to report back how it went, and not to run out of memory for all of this. No crash: limits are handled by design. OpenComRTOS does static allocation of packets, with no malloc, and blocks if there are no packets left.
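
A sketch in Go (golang) of the two-phase idea as I understand it from the newsletter; all names are mine, this is not the OpenComRTOS API:

    package main

    import "fmt"

    // Phase one: issue the request without blocking. Phase two: wait,
    // later, for the confirmation that the exchange has happened.
    type request struct {
        data string
        conf chan bool
    }

    func hub(in chan request, out chan string) {
        r := <-in
        out <- r.data  // the hub passes the data on
        r.conf <- true // ...and confirms to the sender
    }

    func main() {
        in := make(chan request, 1) // one "packet" of credit: post won't block
        out := make(chan string)
        go hub(in, out)

        conf := make(chan bool, 1)
        in <- request{"hello", conf} // phase 1: returns at once
        // ...the task is free to do other work here...
        fmt.Println(<-out) // the receiving task gets the data
        <-conf             // phase 2: wait for the confirmation
    }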

Alternatively, one could have a data bank with enough entries for each request or command, and then tag the elements with status, like: want to send, expecting reply, etc. This is how we did it in an embedded product - with success - but the knock-come pattern was used in connection with it.

A synchronous channel is really a handle to common data. With space for no packets, the receiver may hold the producer back. With zero data there could be an asynchronous "interrupt" channel and asynchronous send. With some space in the channel, it would block only when full. With an Ada rendezvous, data exchange is synchronous (blocking) and bidirectional. The channel concept evolves: it could be one-to-many or many-to-one, and channels could be connected to something like a barrier synchronization mechanism.
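
The bidirectional, Ada-like exchange can be sketched in Go (golang) by letting the request carry its own reply channel, so call and answer form one synchronous round trip (names are mine):

    package main

    import "fmt"

    type request struct {
        arg   int
        reply chan int
    }

    func main() {
        calls := make(chan request)

        go func() { // the "accepting" task
            r := <-calls
            r.reply <- r.arg * 2 // the answer flows back in the same exchange
        }()

        r := request{arg: 21, reply: make(chan int)}
        calls <- r
        fmt.Println(<-r.reply) // 42: the caller blocked for the round trip
    }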

However, what I would look for is WYSIWYG semantics: no matter how asynchronous the mechanism I use, I want to relate only to the defined protocol between the concurrent processes. I should not need to know how the other process is coded internally. Think about this: if I hold a channel (or whatever input mechanism) closed while I do the session I was just told to start (I am a server), am I allowed to send the result back to the originating client that used me, without thinking for a second about how that would influence the other clients competing for me? And the clients - could they be coded 100% independently of the server code, provided they agree on the interface contract?

Am I allowed to control my clients myself, to make the handling "fair"? Do I have the mechanism? Will this system behave the same way if I use a client on a statically linked channel, across the internet, or on a channel sent over a channel? Have a look at occam-pi for these things, look at the Go programming language, and read the CPA literature (Communicating Process Architectures proceedings).

Read papers you seem to disagree with. Study newsletters from interesting companies.

There is much more out there than just send and forget.
(11Feb2010)

--


Archive and about

Øyvind Teig
Trondheim, Norway

All new blogs and the new home page start at
http://www.teigfam.net/oyvind/home

My technology blog was at
http://oyvteig.blogspot.com

and my handicraft blog was at
http://oyvteig-2.blogspot.com

PS: just call me "Oyvind"