Modern Embedded Software Goes Beyond the RTOS
Miro Samek - EOC 2020 - Duration: 01:08:12
Some of the most difficult problems with real-time and embedded programming are related to concurrent code execution as well as code organization, which all too often degenerates into "spaghetti code". These problems are usually intermittent, subtle, hard-to-reproduce, hard-to-isolate, hard-to-debug, and hard-to-remove. They pose the highest risk to the project schedule.
This session presents a set of best practices of concurrent programming, which are collectively known as the active object (or actor) design pattern. In this pattern, applications are built from event-driven, non-blocking, asynchronous, encapsulated threads (active objects), with the internal behavior of each active object controlled by a state machine.
While active objects can be implemented manually on top of a traditional RTOS, a better way is to use an active object framework. You will see how this leads to inversion of control, enables architectural reuse, and allows the framework to automatically enforce the best practices.
In the second part, the session will introduce modern hierarchical state machines as the powerful "spaghetti reducers". You will see how state machines complement active objects and enable graphical modeling and automatic code generation.
The session features hands-on demonstrations using the EFM32 Pearl Gecko ARM Cortex-M4 board, the QP/C real-time embedded framework, and the QM modeling and code-generation tool.
I'm glad to hear that the presentation gave you some food for thought. It sounds like you, like so many other embedded developers, have independently re-discovered some of the best practices (like avoiding blocking). That's perhaps the most difficult step.
But there is no need to keep re-inventing this wheel. Your run-to-completion timer-handlers could be easily generalized into event-handlers fed by event-queues. That way, you'll be able to handle both time "ticks" (time events) and all other events as well.
And the next logical step is to apply state machines inside your event-handlers. (I suspect that you struggle there with some "spaghetti code", don't you?)
One of the instructors of this EOC conference (Max the Magnificent) has recently posted a question on the EmbeddedRelated.com forum: "When is a State Machine not a State Machine?". In his question, Max has used the Arduino "Blink" example, the exact same one I've used in this presentation to illustrate sequential-programming and later contrasted it with event-driven programming and event-driven state machines.
Most interestingly, the majority of the respondents to Max's question classified the sequential "Blinky" code as a "state machine" (perhaps an "implicit" one, whatever that means).
I wonder what you, after watching the presentation, think about state machines. Please post your verdict: is the following "Arduino Blink" code a "state machine"?
void loop()
{
    digitalWrite(PinLed, HIGH);
    delay(1000);
    digitalWrite(PinLed, LOW);
    delay(1000);
}
Thank you for the presentation, it is very instructive!
I'm glad you've enjoyed the presentation.
Thank you for your enthralling talk - the subject, the content and the style of delivery. I have subscribed to your YouTube channel. Thanks!
I'm glad you've enjoyed the session. The video course on YouTube has a few new lessons, including two about event-driven programming in general and an introduction to active objects. Stay tuned!
Hi Miro,
Good talk, very interesting insight about active objects.
How would you share reasonably big quantities of data (let's say a frame of 512 samples) between different threads/tasks, e.g. for data processing applications? The data cannot always be owned by the active object, as the current filter output is the next filter input.
Thanks!
In addition to what Miro said, you can transfer ownership of the buffer instead of the buffer itself. The event/message from producer to consumer will then contain just enough parameters for the consumer to be able to find and process the buffer (it can be a pointer to the buffer, or a slot number, or start/end indices in an array). It's important that the producer relinquishes ownership of the buffer as soon as it posts the message, so there should be no sharing and thus no additional locking/synchronisation. In a way it's less safe than posting the whole buffer in a message, because both producer and consumer will eventually be accessing the same memory (just at different times), but on the other hand it's zero-copy.
This logic scales well and can involve a chain of producers and consumers, each doing some processing and handing the buffer further down the line. At some point there will almost certainly be a need to recycle the buffer - this can be done, e.g., via a message back to the original producer: "here's your buffer back".
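For illustration, a minimal sketch of this ownership hand-over using a FreeRTOS queue; process_samples() and recycle_buffer() are hypothetical placeholders for the filter stage and the recycling path, and the queue itself would be created elsewhere, e.g. with xQueueCreate(4, sizeof(BufferMsg)):

#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "queue.h"

/* only this small message travels through the queue, not the 512 samples */
typedef struct {
    int16_t *samples;   /* buffer whose ownership is being handed over */
    size_t   count;     /* number of valid samples in the buffer */
} BufferMsg;

extern void process_samples(int16_t *samples, size_t count); /* hypothetical filter stage */
extern void recycle_buffer(int16_t *samples);                /* hypothetical: return buffer to its producer */

/* producer: fill a buffer, hand it over, then never touch it again */
static void producer_post(QueueHandle_t q, int16_t *buf, size_t count) {
    BufferMsg msg = { buf, count };
    (void)xQueueSend(q, &msg, portMAX_DELAY); /* copies only the small message (zero-copy for the data) */
    /* from this point the producer must NOT access 'buf' -- the consumer owns it now */
}

/* consumer: process the buffer, then send it back for recycling */
static void consumer_task(void *param) {
    QueueHandle_t q = (QueueHandle_t)param;
    for (;;) {
        BufferMsg msg;
        if (xQueueReceive(q, &msg, portMAX_DELAY) == pdTRUE) {
            process_samples(msg.samples, msg.count);
            recycle_buffer(msg.samples);  /* e.g. a "here's your buffer back" message */
        }
    }
}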
This concept of pipeline of producers and consumers can be extended further by:
- adding more buffers (if there's enough memory for that) so that multiple buffers can exist at different stages of the pipeline simultaneously,
- using a data organisation well suited for producer/consumer handling, such as ring buffers - this can significantly reduce the number of events/messages required.
At least that's the general direction I would go in. As to whether this corresponds to best practices, I guess I'll know after watching a few videos from Miro's YouTube course :)
The management of the "buffers" holding mutable events can and should be handled by the active object framework. As it turns out, the framework controls events from "cradle-to-grave", so it is much better positioned to know when to recycle the "buffers" holding mutable events with parameters.
Of course, the toy FreeAct "framework" constructed in just 20 minutes in the presentation does not have this functionality. But a real, professional-grade framework should have it. For example, the QP Real-Time Embedded Frameworks (RTEFs) provide such "zero-copy" and thread-safe event management, as well as automatic recycling of dynamic events.
Agreed, if it's handled by the framework, it's even better. I checked how it is implemented in QP/C. Indeed, the framework allows zero-copy operation based on memory pools out of which messages are allocated. A message needs to be allocated first, before filling it with HW data, to avoid copying. I see just one minor inconvenience - the memory pool interface doesn't support allocating blocks with a specific alignment, which is sometimes necessary when doing DMA.
Actually, the QP frameworks have been specifically designed to allow you (meaning the application programmer) to allocate the event pool buffers that later hold dynamic events. Therefore, you can allocate them wherever you like and however you like. For example, you can use special memories, you can use special sections from your link file, and obviously you can choose any alignment you prefer. The only "limitation" is that the alignment must be such as to accommodate a void* pointer at the beginning of each block. But this should not be a problem for DMA alignment, which is only going to be more strict than that.
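For illustration, a sketch of application-allocated event-pool storage in QP/C. The FrameEvt type, the FRAME_READY_SIG signal, the BSP_dma_start() call and the 32-byte alignment are assumptions; check the QF_poolInit()/Q_NEW() API of the QP version you use:

#include <stdint.h>
#include "qpc.h"   /* QP/C framework (QEvt, QF_poolInit(), Q_NEW(), ...) */

enum { FRAME_READY_SIG = Q_USER_SIG };  /* hypothetical application signal */

/* hypothetical "frame" event: QEvt header followed by the DMA payload */
typedef struct {
    QEvt     super;          /* inherited QEvt (framework event header) */
    uint16_t samples[512];   /* payload to be filled directly by the DMA */
} FrameEvt;

/* the application owns the pool storage, so it can place and align it as needed;
   32-byte alignment is assumed here as the DMA requirement (some compilers may
   need __attribute__((aligned(32))) instead of _Alignas); if every block in the
   array must stay aligned, the event size may also need padding to a multiple
   of that alignment */
static _Alignas(32) QF_MPOOL_EL(FrameEvt) framePoolSto[8];

void App_initEventPools(void) {
    /* hand the storage to the framework, which then manages allocation,
       reference counting and automatic recycling of the dynamic events */
    QF_poolInit(framePoolSto, sizeof(framePoolSto), sizeof(framePoolSto[0]));
}

void App_startAcquisition(void) {
    /* allocate the event first, then let the HW fill it in place (zero-copy) */
    FrameEvt *fe = Q_NEW(FrameEvt, FRAME_READY_SIG);
    BSP_dma_start(fe->samples, 512U);  /* hypothetical DMA driver call; post or
                                          publish 'fe' when the DMA completes */
}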
Good question. You can attack the problem of sharing from several angles.
First is to realize that minimizing any sharing should be the highest design priority. This is one of the most important messages from the presentation, because traditionally systems are built around the "shared-state concurrency" model, where a bunch of threads work on the central pile of data (which then needs to be protected by mutual exclusion mechanisms). So, if minimizing the sharing was never a priority, perhaps some sharing could be avoided if you make it a design priority...
Second, active objects can contain more functionality than traditional blocking threads, because AOs remain responsive to events. The presentation spent significant time demonstrating this, where the "Blinky" and "Button" threads were merged into one "BlinkyButton" AO. So, perhaps you could merge some stages of your data processing into a single AO, which would obviously eliminate the need for sharing. Here, please note that one AO can manage multiple passive components that can be (hierarchical) state machines themselves. This is called the "Orthogonal Component" pattern. All such components running within the thread context of a single AO can safely share data, because there are no concurrency hazards. So, what I'm suggesting is to put all stages of data processing within one AO.
And finally, you can make a big event with event parameters holding your whole payload (e.g., 512 samples). You can then post such an event to one AO, which can then (at the end of its RTC step) (re)post it to the next AO, and so on. However, for this you would need a more advanced framework than "FreeAct", which will guarantee thread-safe delivery of mutable events with parameters.
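For illustration, a rough sketch of such a "big" event and the (re)posting at the end of an RTC step. The Event/Active/Active_post() names follow the FreeAct style from the talk; the rest (AO_stage2, FRAME_READY_SIG, filter_in_place()) is hypothetical, and safe recycling of such mutable events is exactly what the more advanced framework would have to provide:

#include <stdint.h>

/* assumed FreeAct-style base API: Event with a .sig member, Active,
   and Active_post(Active *ao, Event const *e) */

/* hypothetical "big" event carrying a whole frame of samples as parameters */
typedef struct {
    Event    super;         /* event header (signal) */
    uint16_t samples[512];  /* the payload: one frame of samples */
} FrameEvt;

extern Active *AO_stage2;                              /* the next processing stage */
extern void filter_in_place(uint16_t *buf, uint16_t n); /* hypothetical processing step */

/* dispatch handler of the first processing stage */
static void Stage1_dispatch(Active *me, Event const *e) {
    (void)me;
    switch (e->sig) {
    case FRAME_READY_SIG: {               /* hypothetical signal */
        FrameEvt *fe = (FrameEvt *)e;     /* the event is mutable by design here */
        filter_in_place(fe->samples, 512U);
        Active_post(AO_stage2, &fe->super); /* at the end of this RTC step, hand the
                                               frame to the next AO; a framework like
                                               QP would reference-count and recycle it */
        break;
    }
    default:
        break;
    }
}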
I hope that this gives you some ideas...
Thank you, Miro! Great presentation, great answers in the comment section. I sort of stumbled upon many of the described concepts by trial and error, but after your talk I can put labels on them and think about them in a more structured way. I'll certainly be watching some videos on your Youtube channel :)
Based on my work in a FreeRTOS-based project 3 jobs back, I have a question - is the event-based approach always preferable to "sequential" tasks/threads, in your opinion? I'll clarify in a second where I'm coming from.
In the presentation you point out one downside of events that I would be hitting big time in that project, namely handling of the context between events. That project ended up with between 10 and 15 FreeRTOS tasks/threads, depending on the device version. One of the threads controlled a GSM modem which unfortunately had the venerable text-based AT command protocol (over UART) as the only means of communication. We used the modem quite actively in various modes (SMS, GPRS, voice), so the modem support code was extensive and consisted of 20-30 small- to medium-sized functions, each performing one modem-related action (such as sending or receiving an SMS). Now, these actions in many cases required back-and-forth AT commands and responses between the modem and the firmware - e.g. to send the command with basic parameters, to provide additional parameters, to provide the payload, to handle intermediate responses, to receive the payload, to read the final status. So I ended up following the "sequential" paradigm for the modem thread, with each function containing a small part of the protocol and using calls like serial_read_line() and serial_write_line(). The blocking was hidden inside those calls, so the modem functions looked relatively simple, straightforward and easy to compare directly with the modem manual (say around 20-50 lines each with 2-5 potential blocking points).
After watching the presentation I started thinking about how it would look with the event-based approach. Each of these functions would be split into 3-6 smaller pieces, with each piece being assigned to a stage in an FSM. This would mean 20-30 FSMs just for this module alone (there were similar considerations in other modules, although not as severe). Given this, it really feels like following the event-based approach in the modem module would create major readability/maintainability problems. Almost all of the module would turn into state machine code (basically, switch() constructs), which would then make it much harder to compare the flow with the manual, where AT exchanges are written like simple dialogs (e.g. command "+CPAS" - possible responses "+CPAS:
So finally the question - would you go ahead with event-based handling here anyway or is this case indeed pushing the limits of the concept?
As explained in the presentation, a sequential solution to a sequential problem is the simplest and most intuitive. It seems that the AT-command serial interface to a modem falls into this category. So, if a dedicated, sequential, blocking thread works well for you, I wouldn't necessarily rock the boat and re-write all this as an event-driven active object.
The only really important thing to remember is not to mix the sequential (blocking) and event-driven paradigms in the same thread. These things should be separated into different threads.
To this end, all too often developers start with the sequential paradigm, where they hard-code expected sequences of events. This appears to work in the beginning, but really is insidious. As the system inevitably grows and evolves, the developers are forced to process more and more event sequences. They do this by shortening the blocking time, and adding checks (IFs and ELSEs "spaghetti") as to what has actually happened. So the design ends up picking up the worst characteristics of both worlds. The code is not very responsive (sequential problem) and is a convoluted spaghetti (event-driven problem).
Which leads me to your central question: "would you go ahead with event-based handling here...?"
My answer is yes I would. This is because I have yet to see a problem, which would not evolve and would not have to handle new event sequences. The sequential paradigm with hard-coding event sequences is simply too inflexible.
So, specifically for your AT-command interface to a GSM modem, I would design the state machine much simpler than your suggestion:
The main simplifying idea here is a "sequence-handler" function, where a given sequence of "steps" is codified. Here is an example "sequence-handler" for the ATI command (request product information):
bool ATI_sequence_handler(Modem *me, ModemResponseEvt *e) {
    bool more_to_come = false;
    switch (me->step) {
    case 0: // send the ATI command
        BSP_modem_send("ATI");
        break;
    case 1: // <manufacturer>
        save_manufacturer(e->str);
        more_to_come = true;
        break;
    case 2: // <model>
        save_model(e->str);
        more_to_come = true;
        break;
    case 3: // <revision>
        save_revision(e->str);
        more_to_come = true;
        break;
    case 4: // <imei>
        save_imei(e->str);
        more_to_come = true;
        break;
    case 5: // <additional capabilities>
        save_additional(e->str);
        more_to_come = true;
        break;
    case 6: // <OK>
        verify_OK(e->str);
        more_to_come = false;
        break;
    default: // unexpected command
        break;
    }
    return more_to_come;
}
I hope you can see that this code follows exactly the sequential specification of the ATI command in the manual.
There are potentially many such sequence-handlers, all held in an array All_sequences[] of pointers to "sequence-handler" functions. It is assumed that each NEW_COMMAND event carries the "kind" of the sequence that needs to be handled.
Now, when the modem is "idle", there is no currently running AT-command exchange. When NEW_COMMAND arrives in this "idle" state, the appropriate "sequence-handler" function is found in the All_sequences[] array based on the e->kind event parameter. Then, the me->step is reset to zero (because the sequence is just beginning). And finally, the me->sequence handler is called to send out the command (step==0 is special in that it sends the AT-command). Then the SM transitions to the "busy" state, where it waits for the MODEM_RESPONSE event(s).
Upon the entry to "busy", the me->step is incremented and a timeout is armed (in case the modem never responds). Now, when the modem responds, the MODEM_RESPONSE transition calls the (*me->sequence)() handler and uses its return value as a guard condition.
If the guard returns "true", it means that more responses are expected, so the self-transition to "busy" is taken. This self-transition causes exit and entry to "busy", so the step counter is incremented and also the timeout is disarmed and freshly-rearmed.
Otherwise (the [else] branch) no more responses from the modem are expected, so the transition goes back to "idle".
Finally, the state machine demonstrates "event deferral": when a NEW_COMMAND arrives while the Modem is busy, you want to defer the event until you are "idle" again.
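To make this concrete, here is a hand-coded sketch of the "idle"/"busy" state machine described above. The types and helper functions are hypothetical placeholders, and the entry/exit actions plus the event deferral are written out inline, which a hierarchical state machine framework would express more directly:

#include <stdbool.h>

enum { NEW_COMMAND_SIG, MODEM_RESPONSE_SIG, TIMEOUT_SIG };  /* event signals */
enum { MODEM_IDLE, MODEM_BUSY };                            /* states */

typedef struct { int sig; } Event;
typedef struct { Event super; int kind; } NewCommandEvt;            /* which sequence to run */
typedef struct { Event super; char const *str; } ModemResponseEvt;  /* one response line */

typedef struct Modem Modem;
typedef bool (*SequenceHandler)(Modem *me, ModemResponseEvt *e);
struct Modem {
    int state;                  /* MODEM_IDLE or MODEM_BUSY */
    unsigned step;              /* current step within the AT sequence */
    SequenceHandler sequence;   /* the currently running sequence-handler */
};

extern SequenceHandler const All_sequences[];   /* indexed by the command "kind" */
extern void arm_timeout(Modem *me);             /* hypothetical helpers */
extern void disarm_timeout(Modem *me);
extern void defer_command(Modem *me, Event const *e);
extern void recall_deferred_command(Modem *me);

void Modem_dispatch(Modem *me, Event *e) {
    switch (me->state) {
    case MODEM_IDLE:
        if (e->sig == NEW_COMMAND_SIG) {
            NewCommandEvt const *nce = (NewCommandEvt const *)e;
            me->sequence = All_sequences[nce->kind];  /* pick the sequence-handler */
            me->step = 0U;
            (void)(*me->sequence)(me, (ModemResponseEvt *)0); /* step 0 sends the AT command */
            me->state = MODEM_BUSY;  /* entry to "busy": */
            ++me->step;              /*   advance to the first response step */
            arm_timeout(me);         /*   and arm the response timeout */
        }
        break;

    case MODEM_BUSY:
        if (e->sig == MODEM_RESPONSE_SIG) {
            disarm_timeout(me);                               /* exit from "busy" */
            if ((*me->sequence)(me, (ModemResponseEvt *)e)) { /* guard: more to come? */
                ++me->step;                                   /* self-transition: re-enter "busy" */
                arm_timeout(me);
            }
            else {
                me->state = MODEM_IDLE;          /* sequence complete */
                recall_deferred_command(me);     /* pick up a deferred NEW_COMMAND, if any */
            }
        }
        else if (e->sig == NEW_COMMAND_SIG) {
            defer_command(me, e);    /* modem busy: defer until "idle" again */
        }
        else if (e->sig == TIMEOUT_SIG) {
            me->state = MODEM_IDLE;  /* the modem never responded */
        }
        break;
    }
}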
I was rather thinking about command sequences with a more complicated protocol and multiple exchanges, such as sending/receiving over GPRS:
/* Activate PDP */
AT+ETCPIP=<parameters>
OK
/* Create socket */
AT+ETL=1,<parameters>
+ETL: <socket ID>
OK
/* Send data */
AT+EIPSEND=<socket ID>,<data>
+EIPSEND: <socket ID>, <status>
OK
/* Incoming data notification */
+ESOCK: <socket ID> READY RECV
/* Read data */
AT+EIPRECV=<socket ID>
+EIPRECV: <socket ID>,<data>
OK
/* Close socket */
AT+ETL=0,<socket ID>
OK
But with your example I see that it's going to still be reasonably readable. Next time when I have the choice between the two approaches, I'll prototype both and compare.
Thank you for taking the time to write the detailed answer, much appreciated!
Excellent presentation. Quite interesting. But I have some doubts about big real-time projects: in such a case, modularizing everything and providing one task for each module will increase the memory requirements. Are there any tricks to tackle it?
This question seems to have two components: real-big projects and real-time projects. Let me try to provide some answers (plus some of the "tricks") to tackle both.
So, starting with real-big projects, of course you need to "modularize" them, but the Active Object approach allows you to use fewer expensive modules than the traditional naked threads of an RTOS. In the presentation I've specifically spent some significant time to demonstrate that an AO can hold more functionality than a traditional blocking thread. (Blinky and Button threads were merged together in one BlinkyButton AO.) This property allows you to better balance the two opposing design pressures: loose coupling among components, but high cohesion within components. Specifically, you no longer have to create expensive threads just to make your system responsive to events. You can partition where it makes sense and NOT partition where you have high cohesion. Also, the Active Object pattern gives you clearer criteria as to how and where to divide the functionality: your objective is to avoid sharing of resources and minimize the communication among modules.
An obvious huge benefit of all this, especially for real-big projects, is that such modules (AOs) have fewer dependencies (truly loose coupling without "sharing"). This makes the AOs ideal units for applying TDD (Test-Driven Development), which is critical in real-big projects.
Now, regarding some "tricks" to reduce the memory requirements: in the presentation I only had time to outline how to combine AOs with a traditional RTOS kernel. But a traditional blocking kernel is NOT the best fit for AOs! (Remember the picture of a horse pulling a Smart Car?) Just think about it: a traditional RTOS is designed to block at an open-ended number of points in the thread code, whereas an AO needs to block at just one point known upfront (when the event queue is empty) and otherwise is not allowed to block at all. Of course, this scheme can be served by a much simpler non-blocking real-time kernel than the traditional blocking RTOS. Such real-time kernels are known and are quite widely used. For example, the OSEK/VDX Operating System Specification describes "basic tasks" (BCC1/BCC2 conformance classes) that are not capable of blocking, but can preempt each other. Such basic-task kernels are much more efficient than the traditional blocking RTOS kernels, because all "basic tasks" can nest on just one stack (big savings in RAM). For anybody interested in this kind of real-time kernel, I'd recommend my ESD article Build a Super-Simple Tasker.
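To make the "basic task" idea concrete, here is a toy sketch of such a run-to-completion scheduler (this is not the actual Super-Simple Tasker code, just the scheduling idea; __builtin_clz() is a GCC/Clang builtin, and a preemptive version would protect the ready-set updates with critical sections and also invoke the scheduler on interrupt exit):

#include <stdint.h>

typedef void (*TaskFn)(void);            /* a "basic task": runs to completion, never blocks */

#define MAX_TASKS 8U
static TaskFn task_table[MAX_TASKS];     /* index == task priority (higher = more urgent) */
static volatile uint32_t ready_set;      /* one "ready" bit per task */

void task_activate(uint8_t prio) {       /* called from ISRs or from other tasks */
    ready_set |= (1U << prio);
}

void scheduler_run(void) {               /* called from the background loop */
    while (ready_set != 0U) {
        uint32_t set = ready_set;
        uint8_t prio = (uint8_t)(31U - (uint32_t)__builtin_clz(set)); /* highest ready priority */
        ready_set &= ~(1U << prio);
        (*task_table[prio])();           /* run the task to completion on the one shared stack */
    }
}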
Which brings me to the really real-time projects. As I mentioned in the presentation, the priority-based schedulers dominating the traditional RTOS kernels have been mainly applied to aid RMA (Rate-Monotonic Analysis). But, ironically, blocking sprinkled inside traditional threads hinders the RMA, essentially making it unworkable (because a thread can miss its real-time deadlines due to blocking). In contrast, the non-blocking AOs are much easier (ideal) for RMA. Please note that the non-blocking "basic tasks" meet the assumptions of the RMA, so RMA applies to such lightweight kernels.
And in all this I didn't even mention the benefits of hierarchical state machines as "spaghetti code reducers", which only grows in importance as the project complexity increases.
So in summary, the question in my mind is not whether Active Objects are a good fit for bigger, real-time projects. The question is rather how any bigger, real-time project can succeed without applying at least some of the best practices embodied in the Active Object pattern.
Excellent talk and very well presented as always Dr. Samek. I've enjoyed learning from your YouTube lessons over the years.
I'm glad to hear that you enjoyed the YouTube lessons. The course is now entering a new phase, where it will cover the "modern" embedded programming concepts (in a sense defined in this presentation). The last lesson 33 already introduced event-driven programming, but in the context of GUI and Windows API. The next upcoming lesson 34 will show how this applies to embedded systems in the context of the Active Object pattern. Later, I plan a big segment about state machines. Stay tuned!
That's great! Looking forward to the new lessons!
Way beyond the wavelength of my noggin. However, a complex topic presented with humorous images makes it perfectly enjoyable. Thanks for the amazing presentation!
Discussing software architecture and design is not easy. I was trying to begin with well-known and established concepts, like the "superloop" and the RTOS. But I agree that the presentation makes much more sense to listeners who have already "made all the mistakes that could be made" in this narrow field. Knowing the pitfalls from one's own experience helps to appreciate the proposed approaches, especially since they involve some draconian restrictions, like "do not block" or the "share nothing" principle.
I do like the principle of "do not block". Well, I'm not a professional developer. In my hobby projects, say for Arduino, I try to avoid the Delay() function in the superloop. Instead, I use state machines to step through the different states of my application. If I need to do something after a specific interval of time in any state of a state machine, I increment variables with the base timing of my idle state machines. Of course, this creates time jitter, and that's allowable for my application. Please let me know your opinion.
The practice of avoiding blocking (or polling, in the case of Arduino programs) is a good one, because it makes the "superloop" composable, in the sense that you can keep adding code to the body of the loop. At the end of my YouTube lesson about "foreground/background" systems I mention this "event-driven" style of coding "superloops".
Not surprisingly, such a non-blocking implementation is based on a special kind of state machine called "input-driven" or "polled" state machines. They typically don't respond to events, but instead they run "whenever possible", typically continuously inside the "superloop". Also, the transitions of such state machines are labeled by (guard) conditions instead of event signals. I'm planning a whole segment of YouTube lessons about state machines, including the event-driven and "input-driven" varieties. Stay tuned!
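For the Arduino case, a minimal sketch of such a polled ("input-driven") state machine, reusing the PinLed name from the Blink example above (the 1000 ms interval is just an example):

enum { LED_OFF_STATE, LED_ON_STATE };
static uint8_t state = LED_OFF_STATE;
static unsigned long last_change;           /* time of the last transition, in ms */

void loop() {
    unsigned long now = millis();
    switch (state) {
    case LED_OFF_STATE:
        if (now - last_change >= 1000UL) {  /* guard condition instead of an event signal */
            digitalWrite(PinLed, HIGH);
            last_change = now;
            state = LED_ON_STATE;
        }
        break;
    case LED_ON_STATE:
        if (now - last_change >= 1000UL) {
            digitalWrite(PinLed, LOW);
            last_change = now;
            state = LED_OFF_STATE;
        }
        break;
    }
    /* other non-blocking work can be added here; the superloop stays
       composable because nothing above blocks or busy-waits */
}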
Regarding the "jitter" caused by this approach, it can be avoided by combining state machines with a preemptive RTOS. There is a deep rooted misconception that state machines and RTOS are somehow two different and mutually exclusive approaches to concurrency. This is obviously not true, and the main message of this presentation is that the two concepts (RTOS and state machines) can be combined in the Active Object design pattern...
Finally, regarding Arduino, perhaps you might be interested in Modern Event-Driven Programming for Arduino, with active objects, hierarchical state machines and modeling.
Very interesting. Of course the actor/event model is very much how languages such as Erlang achieve high concurrency, with the advantage of language support but without the hard real-time performance. I suppose it raises the question of whether the lack of language support for such constructs, and the need to rely on programmers to be disciplined, is what's stopping these approaches from being more widely used.
Yes, thanks for pointing this out. Erlang has supported actors natively since the 1980s. Some more recent examples of actor-based languages and frameworks in general-purpose (server-side) computing are Scala and Akka, respectively. It seems that general-purpose computing is experiencing a resurgence of Erlang-style thinking.
My hope is that a similar way of thinking can also be adopted in embedded computing. I'm not sure that the concepts necessarily need to be supported by the programming language. The lack of native support for threads in C certainly didn't stop the proliferation of traditional RTOSes. As I tried to demonstrate in this presentation, building an active object framework directly in C is not that hard either.
So, I just think that we could use more event-driven actor-based frameworks in C, C++, and perhaps other languages applied in embedded. But above all, we need to widen our horizon and stop believing that "superloop" and RTOS are the only games in town. There are other paradigms and real-time execution models.
Thanks Miro for the great lecture. Wonderful!!
Miro, this presentation is excellent. Well done. I have read your book and implemented a hobby project with QP in the past. But I haven't made the leap to apply this technology to my professional work. This presentation was very helpful to reinforce and crystallize some of the ideas in my mind. The section where you implement the active objects within FreeRTOS was especially eye-opening for me and bridged some mental gaps I had. You've reinvigorated my effort to apply the event-driven active object design pattern to more of my work.
I'm really glad to hear that you feel inspired to do more with active objects. As I already commented here before, the three best practices of concurrent programming can be beneficial at any degree of adoption. This means that you can transition to event-driven active objects gradually and in selected parts of the system only. Also, the best practices provide a better focus and more specific criteria for partitioning the system into components. The guiding principle should be minimal sharing and avoiding blocking as much as possible. Many developers vastly underestimate the complexities and true costs of using various blocking mechanisms of an RTOS.
Hi Miro
Excellent presentation as usual.
I do have one question.
How to handle "priorities" within the active object framework? Imagine for a given active object, there are some events with higher priority than others. Let's say some "life-critical" event that needs processing immediately requires processing from the active object while the object is RTC for a previous event. How to handle this kind of priorities in the active framework? Thank you.
Some active object (actor) frameworks, for example the "ROOM virtual machine", assign priorities to events. In practice, this requires dynamically adjusting the priority of the private thread of an active object.
The QP frameworks take a more straightforward approach and (statically) assign priorities to active objects, not to events. The rationale is that the whole AO needs to process all its events at the same priority, because of the RTC processing.
For example, consider a situation where priorities are assigned to events. Then, let's say that an AO has just started to process a "low priority" event when a "high priority" event arrives in its queue. When would you boost the priority of the AO's thread: right away, or only after the "low priority" event runs to completion?
I would say right away, so that the "low-priority" event is processed faster and the AO can get to the "high-priority" event. So in the end you do process the "low priority" event at high priority, except that you have potentially wasted some time while the "low priority" event was being processed at low priority.
Of course, the situation gets even worse if the AO has more "low priority" events in its queue when the "high-priority" event arrives. Here, you could try to re-shuffle the events in the queue or provide multiple queues of different priority. All of this is getting really complex really quickly. Also, changing the ordering of events that way might have tricky side effects...
For these reasons, assigning priorities to AOs is simpler and more efficient.
Another consideration, regardless of how you assign priorities, is that an AO should not have events that require very long RTC steps (longer than the hard real-time deadline of the most urgent event for this AO). For this reason you sometimes need to break up long RTC steps into shorter pieces to give the "urgent" events a chance to "sneak in" and be processed in a more timely fashion. The design pattern to do this is called the "Reminder" state pattern.
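A minimal sketch of the "Reminder" idea, with hypothetical names (Worker, N_CHUNKS, process_one_chunk()) standing in for your own code, and Event/Active/Active_post() assumed in the FreeAct style used in the talk:

#include <stdint.h>

enum { START_WORK_SIG = 1, CONTINUE_WORK_SIG };
#define N_CHUNKS 16U

typedef struct { Active super; uint32_t chunks_remaining; } Worker;

extern void process_one_chunk(Worker *me);   /* hypothetical: one short, bounded piece of work */

static void Worker_dispatch(Worker *me, Event const *e) {
    switch (e->sig) {
    case START_WORK_SIG:
        me->chunks_remaining = N_CHUNKS;
        /* fall through: process the first chunk right away */
    case CONTINUE_WORK_SIG:
        process_one_chunk(me);                     /* keep each RTC step short */
        if (--me->chunks_remaining > 0U) {
            static Event const continueEvt = { CONTINUE_WORK_SIG }; /* assumes .sig is the first member */
            Active_post(&me->super, &continueEvt); /* post to self: "remind" me to continue */
        }
        break;
    default:
        break;  /* urgent events already queued get dispatched in between the chunks */
    }
}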
Hi Miro,
As I mentioned before, it was a great talk!
It would really be nice to combine the memory management (ownership model) of Rust with AOs. In my opinion, Rust will be a hot topic in embedded (secure) SIL systems and other embedded systems, as this ownership model removes a large share of possible errors.
We already have some multicore embedded MCUs (e.g. the STM32H755), and TrustZone is becoming more common in embedded. Will you support distributed statecharts (e.g. on different processor cores or in different secure areas) in your framework? Or will you introduce support for TrustZone or multicore communication techniques (such as OpenAMP) in the future?
As I commented in the presentation, I believe that the unavailability of lightweight, efficient active object frameworks is one of the main reasons why active objects aren't used more in our field. Consequently, I would really like to see more active object frameworks for embedded systems in any shape or form. This includes active object frameworks in Rust, Ada/SPARK, (micro)Python, and of course more frameworks in C and C++.
Regarding TrustZone or just the MPU (available already on Cortex-M3), the "share-nothing" principle makes active objects better positioned to take advantage of such hardware separation (as opposed to the traditional "shared-state concurrency" model). MPU or TrustZone can help to enforce the "share-nothing" principle.
Finally, regarding "distributed state machines", the "thin-wire" message-driven communication style of active objects seems ideal for distributed computing, either across networks, or within multicore chips, where cores communicate via limited shared memory. Currently, the QP frameworks run in a single address space. But an active object application (based on QP or other similar framework) can be distributed by applying the Proxy design pattern (a.k.a. "Communication Proxy"). The nice thing about that is that active objects don't need to change and don't need to "know" whether they post/publish events to local or remote active objects.
Great presentation. Very informational. Any possibility that you will post this in YouTube?
I'll check with the Embedded Online Conference organizers, but yes, I'd like to eventually release the video in the QuantumLeaps YouTube channel.
Thank you very much, Mr. Samek, for a great talk and shared resources!
What a great talk! Many thanks for all the insights and pointers to relevant material.
I'm glad to hear that you found it useful. The presented best practices of concurrent programming can be applied at any level and you don't necessarily need the whole Active Object framework with hierarchical state machines to benefit. For example, you might more judiciously apply blocking, which would make your code more responsive. Or, you could be more careful with sharing of resources, which would save you from applying mutual exclusion mechanisms. Or, you might consider structuring your threads more around the event-loop and asynchronous messages. Any of these steps would improve the final design.
Hi Miro, thanks for the presentation and for the online training material. I wonder if you'd care to comment about Rust's [RTFM](https://blog.japaric.io/tags/rtfm/) as another approach to concurrency frameworks?
Unfortunately, I don't know Rust well enough to pass any judgement on this approach. RTFM seems to be bound to the Cortex-M and the NVIC. The "tasks" seem to be one-shot processing units, which is in contrast to the (super)loop structure of traditional RTOS tasks. The approach reminds me of my Super-Simple Tasker.
Miro,
This was very enlightening. I see a need to abstract the AO framework from the business logic of the code, and then you pop the HSM on me for the aha moment. But I wish you'd spent more time on the HSM tools you are using and on the available frameworks.
Where do I go to learn more?
Thanks for a well plotted discussion! I want the sequel!
-dan'l
I'm glad to hear that you would like to learn more. The presentation included some references on slides 19 and 20. You can also go to the "Key Concepts" section on state-machine.com.
Regarding other resources, Bruce Powell Douglass wrote numerous books on Real-Time UML, modeling, hierarchical state machines, and all related issues. Bruce is associated with IBM Rhapsody, which is currently the dominant tool in the industry.
Miro, first, thanks for the talk. There were many aha moments. I want to discuss one thing related to a FreeRTOS project. Suppose in my project around 5-6 threads want to send some data over SPI, so I will add a mutex for mutual exclusion, but it's going to block the thread if the mutex is not free.
Now, if I want to use the active object way of doing the same thing, how can I implement it using FreeAct?
In the presentation I was trying to describe a general active object-based approach to such situations. In your case, it would mean that you encapsulate the SPI inside one "SPIBroker" AO. Then all other AOs will post events (with event parameters holding the data to send) to the "SPIBroker". The "SPIBroker" will then also receive the data over SPI, package them into events and post the events to other interested AOs. That way you will realize the "share-nothing" principle. I hope this makes sense...
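A rough sketch of such an "SPIBroker" dispatch handler follows. The Event and Active types are assumed in the FreeAct style from the talk, while the signals, the SpiRequestEvt layout and the BSP_spi_transfer() driver call are hypothetical:

#include <stdint.h>

enum { SPI_REQUEST_SIG = 1, SPI_RESPONSE_SIG };

typedef struct {
    Event   super;        /* event header */
    uint8_t data[64];     /* payload to transmit over SPI */
    uint8_t len;
    Active *requester;    /* whom to post the SPI_RESPONSE event to */
} SpiRequestEvt;

typedef struct { Active super; } SPIBroker;

extern void BSP_spi_transfer(uint8_t const *tx, uint8_t *rx, uint8_t len); /* hypothetical driver */

static void SPIBroker_dispatch(SPIBroker *me, Event const *e) {
    (void)me;
    switch (e->sig) {
    case SPI_REQUEST_SIG: {
        SpiRequestEvt const *req = (SpiRequestEvt const *)e;
        uint8_t rx[64];
        BSP_spi_transfer(req->data, rx, req->len);  /* only this AO ever touches the SPI */
        /* package 'rx' into an SPI_RESPONSE event and post it back to
           req->requester (or publish it), so no buffers and no mutex are shared */
        break;
    }
    default:
        break;
    }
}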
Please join me for a live Q&A session on Zoom tomorrow (Friday 5/22 at 9 AM Eastern):
Time: May 22, 2020 09:00 AM Eastern Time (US and Canada)
Join Zoom Meeting
https://us04web.zoom.us/j/3244153051?pwd=MjRtbWVEczZtbG5uWWV4Ujl5K3lSZz09
Meeting ID: 324 415 3051
Password: 3ZBCrg
You mentioned a hybrid architecture where you would use AOs and blocking RTOS threads together, such as when you moved the button press to an AO but left the blinking LED as a blocking thread. Why would you want to use a hybrid in a practical application? AOs don't seem to have any significant disadvantages that would lead to choosing a hybrid.
A hybrid architecture is necessary for a gradual transition from the sequential to the event-driven paradigm. Also, most existing middleware, such as communication stacks (TCP/IP, USB, CAN), is written with the blocking paradigm, so it needs a blocking thread context. The hybrid architecture allows you to use all such software, while your application can be built with active objects.
One thing that was not emphasized enough in the presentation is that any mixing of sequential and event-driven paradigms must occur between threads. You can have one whole thread programmed sequentially, while other threads run event-loops of active objects. But you should never mix the two paradigms within the same thread.
The "FreeAct" active object "framework" has been released to GitHub, if anyone wishes to play with it. It contains the working examples for the EFM32 board as well as the TivaC LaunchPad board. The code is licensed under the MIT open source license (the same as FreeRTOS.) Enjoy.
Really, really good talk. Not a wasted moment. I have 2 questions. What are some strategies to apply if the event queue becomes full? With respect to the button events (pressed and released), what are some techniques to share the events with other consumers, not just the BlinkyButton task?
Regarding the event queue: most events should be reliably posted and processed, which means that a queue overflow should be treated as a system failure (similarly to how a stack overflow is commonly treated as a system failure in sequential systems, which use more stacks and more stack space). However, sometimes you can afford to lose events (e.g., if you are interested only in the "last is best" updates). For such situations, an AO framework could provide an "extended post" operation variant that is allowed to lose events. There are some additional considerations, so that the other events still have the delivery guarantees. The QP/C and QP/C++ frameworks provide such an "extended-post" variant.
Regarding "sharing" events, there is the publish-subscribe event delivery mechanism. In this mechanism events are multicast to all subscribers, so multiple AOs will receive the same event. Again, QP/C and QP/C++ framework provide publish-subscribe mechanism as well as the direct event posting mechanisms.
The Active Object design pattern was presented here in the context of an RTOS. However, the pattern is also valuable and applicable to bigger systems based on embedded Linux or other POSIX-compliant OSes (e.g., QNX, VxWorks, Integrity with POSIX subsystem). In fact, in bigger systems Active Objects make even more sense.
Excellent presentation full of detail! I appreciate that you took us into the code to implement AO in the examples. It is readily apparent from your presentation how clean the approach of using Active Objects is and how it can be used to make very maintainable systems.
The little "FreeAct framework" that was built during the session could be actually useful. There are no state machines there yet, but they can be added quite easily (at least the traditional non-hierarchical FSMs). You can use the classic "nested switch statement" or "table-driven" FSM implementations.
Well done, Miro! That was a perfect introduction/comparison between "traditional" and "modern" design techniques for concurrency. It was very interesting to hear your take on event-driven modern state machines.
I plan to release more lessons about state machines in my video course on YouTube. Stay tuned!
Thank you very much for the great presentation! ;)
Thank you very much for excellent presentation! I will surely watch it again and do the practical work.
Hi Miro, perhaps I just need to keep watching to find the answer, but does the Active Object pattern require a dedicated thread/task for each object?
If you base the active object framework on a traditional RTOS, you typically would map an active object to a thread (task). But, an active object framework does NOT need to be based on the traditional RTOS. In that case, you don't need to use the very expensive RTOS thread for an AO.
I see at the 52m mark of the video you just merged the 2 active objects into a single object. I could also envision a case where each event had an ActiveObjectID and could all be collected by a single event queue, but then dispatched to the appropriate active object based on the ID. The mind's turning here, and I suspect the rest of the video will address that ;).
Another great video by Miro !
Zoom says "The host has another meeting in progress." Aaarrrgggg...
Fabulous presentation Miro.
I am unable to join your Zoom meeting??
I've updated the meeting link that Zoom is giving me. Please try again...
https://us04web.zoom.us/j/76823971531?pwd=RUlKckFGeHFMOWtiZVllOWY3OWtaQT09
This was just great!
Thank you.
Thanks Miro!
It was very illuminating to watch you take a simple program from a superloop, to FreeRTOS threads, to active objects.
I use FreeRTOS a lot, and tend to write component and application modules with access functions to communicate and "tick" functions that do the work, which run to completion quickly for composability (multiple tick functions can be added to a single thread) and rely on a wait at the "end" of each thread... but seeing the whole active object thing built in this context was very illuminating. Seems like a very useful next level.
I think I'm going to have to watch this one again and take careful notes.
Thanks for making the video!