Understand the code generation strategy

In this chapter we explain the structure and organization of the code generated by TASTE using the Kazoo tool.

We will go step by step through the creation of a small project and describe what code is generated at which step, for which purpose, and then how the overall build to create a binary is done.

Create a toy example

We are going to model a traffic light controller that has two nodes - one running on an STM32 microcontroller, and one running on a PC. The nodes will be connected via a serial link.

To create the project, run taste:

   $ taste

We will add a few functions and connect them together. We mix implementation languages on purpose, to illustrate what the generated code looks like in each:


The next step is to generate the code skeletons. You can do that from the tool menu:


Code skeletons structure

A folder named "work" has been created to hold the code skeletons of the project.

Let's explore this folder:


For each function that appears in the interface view, a subfolder with the same name has been created.

Because it is possible to provide several implementations of the same component in different languages, a subfolder per language has also been created (C, Ada, SDL, GUI).

Finally, the "src" directory contains the source code of the function, which can be edited by the user. Let's look at them, starting with the watchdog, implemented in C:


As you can see, this is straightforward. The .h file lists the required interfaces that this component can call:


And the .c file implements the provided interfaces. The user has to provide the content of these functions:


If you modify the model, the .c file will not be overwritten. If you add interfaces, you will need to add them to the .c file manually. This is easy to do because the .h file is re-generated and provides the new function signatures.

It is similar for the other functions, in Ada:


..and in SDL:


You may notice the presence of a Makefile in each of these directories. It contains a helper rule to edit the source code or, in some cases, to invoke a code generator:


Code for the GUI component

When you set the implementation language to GUI, you instruct TASTE to generate a stub function. It consists of a piece of C code that interacts at runtime with the user through a graphical user interface generated in Python, via a message queue. The code is generated in the src folder. It is no different from a standard C function, except that the code is pre-filled:


The Data view

All the functions exchange data that is specified in the ASN.1 language. During code skeleton generation, the ASN.1 models are translated to code as well, by the ASN1SCC compiler. You can see this code in the Dataview folder:


There is one folder for Ada (needed by the stm32_board component) and one for C. The SDL function can work directly with the native ASN.1 models, so no extra code is needed.

You can see that more files are generated in the C folder, because the compiler was also requested to generate binary encoders and decoders for each data type. These are needed to create the binary packets that will be sent over the serial link. To understand this, take the enumerated definition that specifies the color of the traffic light:


In the C language this is translated to a standard enum type:


However, in C this type will be implemented either as a 32-bit integer (the default on Linux platforms) or as a single byte (the default with the ARM cross-compiler for the STM32 board), depending on the target (Intel/x86 or STM32). This is why you need ASN.1: it defines a platform-neutral representation of each data type, optimized for space.

In the case of this enumerated type, the representation requires only 2 bits!


The ASN1SCC compiler generates functions that convert to and from this representation, independently of the platform. This way there is no risk that the data gets misinterpreted on any node. This is what the "dataview" folder contains.

The Makefiles

Two Makefiles were generated in addition to the source code. One is in the main project folder; it can be modified by the user if specific actions have to be performed before building the system (setting environment variables, copying external files...). It contains a rule to re-create the code skeletons, one to create the glue code, and one to build the system. It is usually kept simple and is generated once and for all when the project is created; it is not project-dependent.


The build itself is handled by the Makefile present in the work folder (and invoked from the top-level Makefile).

Deployment of the toy example

To illustrate this example we will create two nodes and deploy the functions as shown in the following picture (using the Deployment View editor):


The nodes are named Node1 and Node2, and the partitions are named stm32_controller and user_interface. In the end, the partition names become the names of the binary (executable) files that will run on the nodes.

The glue code

The glue code and the project files are generated either by calling "make" on the top-level Makefile (which also builds everything) or by running just this rule:

   $ make work/glue_built

Let's see what has been generated.


As you can see, each user function now contains a new folder called wrappers. This is the code that interfaces the user code with the rest of the system, independently of any specific runtime/middleware. The runtime-specific code depends on the actual execution platform and therefore resides in another folder (build/node1 and build/node2, as explained below).

There isn't a lot of overhead here:


One file handles the calls to the provided interfaces, and one file is a hook for the calls to required interfaces.

This code converts the user data types from/to the compact ASN.1 representation we saw above, when needed (e.g. when data is to be sent to an interface on a distant node).

The "build" folder

The build folder contains the glue code that interfaces to an actual runtime. In this case we use a GNAT/Ada runtime (for Linux and for STM32), but it could be RTEMS for ARM or another one.


The file "Makefile.taste" iterates over each node, and executes its corresponding Makefile to build using the appropriate cross-compiler.

In each node folder there is a directory with the name of the partition, and that contains the actual interface to the middleware/operating system. As you can see it is kept as simple as possible.

There are also project files (.gpr) that are invoked by the Makefile to perform the actual build. These project files are based on the gprbuild build system. They contain all the platform-dependent compilation and link flags, as well as the configuration of the cross-compiler.

The project files list all the folders containing source code to be compiled for the given node. They can be opened in an IDE (GPS), from which the system can then be re-built at will.

This is an extract from the node1 project file (running on STM32):


As you can see, it is readable and straightforward to understand.

Are we ready to build the system? Actually, we are still missing a few files: the middleware itself (with the serial driver for the inter-node communication, and the code doing the task creation and scheduling). This is generated by the Ocarina tool, taking as input the system concurrency view (the file named "system.aadl" in the screenshot above).

For the STM32 platform we also need the BSP and drivers. They are fetched automatically from GitHub the first time the main Makefile is executed.

So we run:

   $ make

The complete build is achieved, so we can look at what we have now.

The code of the middleware

The build added this content:


The Ada_Drivers_Library folder is the STM32 runtime support (drivers and BSP) from AdaCore. It is cloned directly from https://github.com/adacore/ada_drivers_library

The deploymentview_final folder contains the PolyORB-HI middleware code that handles the task creation and the interface to the underlying OS (in that case the GNAT Runtime).

The drivers configuration

When you have a distributed system, it contains configuration parameters that are defined in the Deployment view:


This configuration is translated to code in the DriversConfig folder.

In our example, since we are using an Ada runtime, this configuration is generated as a set of Ada files. C targets (for Linux or RTEMS) would use a driver configuration in C.


There is not much generated here - just constants that reflect the driver configuration for all nodes:


What next?

All the files are generated and stay in the work folder. If you don't make modifications to the system architecture (data types, interface view, deployment view), none of the files will be re-generated the next time you build the system (with the Makefile or directly from an IDE using the GPR build system).

If you modify the user code of a function, only that code will be recompiled and linked, meaning subsequent builds will be extremely fast.

Conclusion - Run the system

The binaries are put in the "work/binaries" folder.


The STM32 binary can be downloaded to the target using the following command:

   $ taste-flash-stm32 work/binaries/stm32_controller

And the Linux partition can be executed by running:

   $ cd work/binaries && ./run_user_interface_partition

This will execute both the binary and the Python GUI for interacting with it.