Platform Integration Aspects: Main Loop

This article describes the integration of an Embedded Wizard generated GUI application into your main application. It covers the necessary steps to initialize and de-initialize the entire system and it describes the details about launching the GUI application and providing the necessary user or system events.

IMPORTANT

Please be aware that every Embedded Wizard GUI application has to be executed within a single GUI task!

If you are working with an operating system and your software is using several threads/tasks, please take care to access your GUI application only within the context of your GUI thread/task. Use operating system services to exchange data or events between the GUI thread/task and other worker threads/tasks.
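As an illustration of such a data exchange, the following sketch assumes a FreeRTOS based system; the queue, the tasks, the message type SensorSample and the function ReadTemperatureSensor() are hypothetical placeholders. A worker task posts measured values into a queue and the GUI task drains that queue within its own context (e.g. from its device driver processing function in the main loop).

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* hypothetical message type and queue handle */
typedef struct { int Temperature; } SensorSample;
static QueueHandle_t SensorQueue = 0;

extern int ReadTemperatureSensor( void ); /* hypothetical sensor access */

/* to be called once during system initialization */
void CreateSensorQueue( void )
{
  SensorQueue = xQueueCreate( 8, sizeof( SensorSample ));
}

/* worker task: measures periodically and posts the result to the GUI task */
static void SensorTask( void* aArgument )
{
  SensorSample sample;

  for ( ;; )
  {
    sample.Temperature = ReadTemperatureSensor();
    xQueueSend( SensorQueue, &sample, 0 );
    vTaskDelay( pdMS_TO_TICKS( 100 ));
  }
}

/* called within the GUI task only, e.g. from your device driver's
   processing function in the main loop */
int PollSensorQueue( void )
{
  SensorSample sample;
  int          updated = 0;

  /* drain the queue without blocking the GUI task */
  while ( SensorQueue && ( xQueueReceive( SensorQueue, &sample, 0 ) == pdTRUE ))
  {
    /* update the GUI application here, within the GUI task context */
    updated = 1;
  }

  return updated;
}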

Life Cycle of the GUI Application

Typically, every Embedded Wizard GUI application runs through three phases:

Initialization - The initialization of the graphics system and the Embedded Wizard GUI application. The initialization sequence depends completely on the target system: In case of MCUs, the entire hardware has to be initialized and configured before the GUI application can be created (e.g. clock configuration, SDRAM and flash configuration, MPU settings, framebuffer and display configuration, ...). In case of MPUs with Linux, the system is already prepared, so the initialization sequence only needs to obtain access to the graphics API and the necessary drivers. The initialization of the GUI application itself looks very similar on all systems.

Main Loop - The main loop drives the entire GUI application and fulfills the same tasks with every run: Input events are provided to the GUI application, events, timers and signals are processed, the display is updated and finally, the garbage collection is executed.

De-Initialization - The deinitialization of the system is done when the GUI application should be terminated. In case of MPUs all used system resources (memory, graphics and touch driver, ...) have to be released. In case of MCUs the de-initialization may never happen, because the GUI is running until the power is switched off.

In most Build Environments, these three phases are implemented within the file main.c:

int main( void )
{
  /* initialize system */
  ...

  /* initialize Embedded Wizard application */
  if ( EwInit() == 0 )
    return 1;

  /* process the Embedded Wizard main loop */
  while( EwProcess())
    ;

  /* de-initialize Embedded Wizard application */
  EwDone();

  /* terminate the system */
  ...

  return 0;
}

The three functions EwInit(), EwProcess() and EwDone() are typically implemented in the file ewmain.c and discussed in more detail below.
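For reference, these are the prototypes used by the main() function above, as implemented in the code snippets of this article; in most Build Environments they are declared in a header such as ewmain.h (the header name may differ in your project):

int  EwInit   ( void );
int  EwProcess( void );
void EwDone   ( void );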

Initialization of the GUI Application

Before the Embedded Wizard generated GUI application can be executed on the target platform, the following initialization steps have to be done:

Initialize the display hardware and/or get access to the framebuffer. The implementation depends on the underlying graphics API and the suitable framebuffer concept.

Initialize the touch hardware and/or get access to the touch driver.

Initialize the memory manager by using EwInitHeap() and EwAddHeapMemoryPool(). The configuration of the memory pool is done within the file ewconfig.h (a short configuration sketch follows after this list). If you prefer to use any other memory manager (e.g. provided by the operating system), please ensure that the functions EwAlloc() and EwFree() are adapted accordingly.

Initialize the Graphics Engine. The configuration of the Graphics Engine can be controlled by several macros within ewconfig.h.

Create the root object of the GUI application.

Initialize the viewport of the Graphics Engine. A viewport serves as interface to the framebuffer, where the GUI application can draw its graphical content.

Initialize your Device Driver(s) to exchange data between the GUI and the underlying system.
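The memory pool configuration mentioned above is typically done with a few macros in ewconfig.h. The following excerpt is only an illustrative sketch: the addresses and sizes are hypothetical and have to match the memory map of your target, and the real ewconfig.h of your Build Environment contains further Graphics Engine settings. Only macros used by the code in this article are shown.

/* main memory pool used by EwInitHeap()/EwAddHeapMemoryPool() */
#define EW_MEMORY_POOL_ADDR   0xC0100000   /* hypothetical SDRAM address */
#define EW_MEMORY_POOL_SIZE   0x00200000   /* 2 MByte, adapt to your needs */

/* optional additional pool, e.g. remaining internal RAM */
#define EW_EXTRA_POOL_ADDR    0x20020000   /* hypothetical address */
#define EW_EXTRA_POOL_SIZE    0x00010000   /* 64 KByte */

/* optional debugging aids evaluated within the main loop */
/* #define EW_PRINT_MEMORY_USAGE */
/* #define EW_DUMP_HEAP */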

The following code snippet illustrates the necessary steps to initialize the Embedded Wizard generated GUI application:

int EwInit( void )
{
  /* initialize display */
  EwBspDisplayInit( &DisplayInfo );

  /* initialize touchscreen */
  EwBspTouchInit( EwScreenSize.X, EwScreenSize.Y, DisplayInfo.DisplayWidth, DisplayInfo.DisplayHeight );

  #if EW_MEMORY_POOL_SIZE > 0
    /* initialize heap manager */
    EwInitHeap( 0 );
    EwAddHeapMemoryPool( (void*)EW_MEMORY_POOL_ADDR, EW_MEMORY_POOL_SIZE );

    #if EW_EXTRA_POOL_SIZE > 0
      EwAddHeapMemoryPool( (void*)EW_EXTRA_POOL_ADDR, EW_EXTRA_POOL_SIZE );
    #endif
  #endif

  /* initialize the Graphics Engine and Runtime Environment */
  EwInitGraphicsEngine( 0 );

  /* create the applications root object ... */
  RootObject = (CoreRoot)EwNewObjectIndirect( EwApplicationClass, 0 );
  EwLockObject( RootObject );
  CoreRoot__Initialize( RootObject, EwScreenSize );

  /* create Embedded Wizard viewport object to provide uniform access to the framebuffer */
  Viewport = EwInitViewport( EwScreenSize, EwNewRect( 0, 0, DisplayInfo.BufferWidth, DisplayInfo.BufferHeight ),
    0, 255, DisplayInfo.FrameBuffer, DisplayInfo.DoubleBuffer, 0, 0 );

  /* initialize your device driver(s) that provide data for your GUI */
  DeviceDriver_Initialize();

  return 1;
}

The parameter EwScreenSize is an automatically generated constant in the module Core.c of type XPoint. This value contains the size of the screen as it was defined in the attribute ScreenSize within the profile for which the code was generated.

The parameter EwApplicationClass refers to the class of the GUI application's root object. This application class is defined within the attribute ApplicationClass of the profile.

Implementing the Main Loop

Embedded Wizard generated UI applications run in an (endless) loop that drives the UI application. This main loop is responsible for providing all user inputs to the UI application, for processing timers and signals, for updating the display and finally for starting the garbage collection.

All these aspects are implemented in the function EwProcess() in the file ewmain.c. This function can be called continuously from the main() function or it can be implemented as a separate OS task. However, it is absolutely important that the complete Embedded Wizard generated UI runs within one task! Splitting the different actions across several tasks (e.g. starting the garbage collection from another task) will cause unpredictable results.
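If the GUI runs on top of an operating system, all three phases can be wrapped into one dedicated GUI task. The following sketch assumes a FreeRTOS based system; the task name, stack size and priority are only examples, and the header name ewmain.h may differ in your Build Environment.

#include "FreeRTOS.h"
#include "task.h"
#include "ewmain.h"   /* EwInit(), EwProcess(), EwDone() - header name may differ */

static void GuiThread( void* aArgument )
{
  /* initialize the Embedded Wizard application within the GUI task context */
  if ( EwInit())
  {
    /* drive the GUI application until it requests its termination */
    while ( EwProcess())
      ;

    EwDone();
  }

  vTaskDelete( 0 );
}

void CreateGuiThread( void )
{
  /* hypothetical stack size and priority - adapt to your project */
  xTaskCreate( GuiThread, "EmWi_Task", 1024, 0, 2, 0 );
}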

The following code snippet shows the typical implementation of a main loop:

int EwProcess( void )
{
  int          timers  = 0;
  int          signals = 0;
  int          events  = 0;
  int          devices = 0;
  XEnum        cmd     = CoreKeyCodeNoKey;
  int          noOfTouch;
  XTouchEvent* touchEvent;
  int          touch;
  int          finger;
  XPoint       touchPos;

  /* process data of your device driver(s) and update the GUI application
     by setting properties or by triggering events */
  devices = DeviceDriver_ProcessData();

  /* receive keyboard inputs */
  cmd = EwGetKeyCommand();

  if ( cmd != CoreKeyCodeNoKey )
  {
    if ( cmd == CoreKeyCodePower )
      return 0;

    /* feed the application with a 'press' and 'release' event */
    events |= CoreRoot__DriveKeyboardHitting( RootObject, cmd, 0, 1 );
    events |= CoreRoot__DriveKeyboardHitting( RootObject, cmd, 0, 0 );
  }

  /* receive (multi-) touch inputs and provide it to the application */
  noOfTouch = EwBspTouchGetEvents( &touchEvent );

  if ( noOfTouch > 0 )
  {
    for ( touch = 0; touch < noOfTouch; touch++ )
    {
      /* get data out of the touch event */
      finger     = touchEvent[ touch ].Finger;
      touchPos.X = touchEvent[ touch ].XPos;
      touchPos.Y = touchEvent[ touch ].YPos;

      /* begin of touch cycle */
      if ( touchEvent[ touch ].State == EW_BSP_TOUCH_DOWN )
        events |= CoreRoot__DriveMultiTouchHitting( RootObject, 1, finger, touchPos );

      /* movement during touch cycle */
      else if ( touchEvent[ touch ].State == EW_BSP_TOUCH_MOVE )
        events |= CoreRoot__DriveMultiTouchMovement( RootObject, finger, touchPos );

      /* end of touch cycle */
      else if ( touchEvent[ touch ].State == EW_BSP_TOUCH_UP )
        events |= CoreRoot__DriveMultiTouchHitting( RootObject, 0, finger, touchPos );
    }
  }

  /* process expired timers */
  timers = EwProcessTimers();

  /* process the pending signals */
  signals = EwProcessSignals();

  /* refresh the screen, if something has changed and draw its content */
  if ( devices || timers || signals || events )
  {
    if ( CoreRoot__DoesNeedUpdate( RootObject ))
      EwUpdate( Viewport, RootObject );

    /* just for debugging purposes: check the memory structure */
    EwVerifyHeap();

    /* after each processed message start the garbage collection */
    EwReclaimMemory();

    /* print current memory statistic to console interface */
    #ifdef EW_PRINT_MEMORY_USAGE
      EwPrintProfilerStatistic( 0 );
    #endif

    /* evaluate memory pools and print report */
    #ifdef EW_DUMP_HEAP
      EwDumpHeap( 0 );
    #endif
  }
  else
  {
    /* otherwise sleep/suspend the UI application until a certain event
       occurs or a timer expires... */
    EwBspEventWait( EwNextTimerExpiration());
  }

  return 1;
}

In the following, let's have a closer look at the individual steps of this main loop.

Step 1: Processing Data from your Device Driver(s)

Typically, every GUI application exchanges data with an underlying system in order to get data from a certain hardware device or to start a certain action. The interface between the GUI application and the underlying system is implemented by a device class and a device driver: The device class is accessed by the GUI application. The device driver is the counterpart that communicates with the real devices (e.g. by calling BSP functions, by setting registers, by calling other drivers,...).

The function DeviceDriver_ProcessData() is called from the main UI loop in order to process data and events from your particular device. This function is responsible for updating properties within the device class if the corresponding state or value of the real device has changed. This function is also responsible for triggering system events if necessary. The return value of the function indicates whether properties have been updated or events have been triggered.
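The following is a simplified sketch of such a processing function. The functions ReadTemperatureSensor() and UpdateTemperatureInDeviceClass() are hypothetical placeholders for your BSP access and for the property update within your device class; the real mechanism is described in the article 'Device Class and Device Driver'.

extern int  ReadTemperatureSensor( void );                 /* hypothetical BSP call */
extern void UpdateTemperatureInDeviceClass( int aValue );  /* hypothetical device class update */

int DeviceDriver_ProcessData( void )
{
  static int lastTemperature = -1000;   /* hypothetical 'impossible' start value */
  int        needsUpdate     = 0;
  int        temperature     = ReadTemperatureSensor();

  /* update the device class only if the value has really changed */
  if ( temperature != lastTemperature )
  {
    UpdateTemperatureInDeviceClass( temperature );
    lastTemperature = temperature;
    needsUpdate     = 1;
  }

  /* a return value != 0 tells the main loop that a screen update may be necessary */
  return needsUpdate;
}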

For more details, please have a look at the article 'Device Class and Device Driver'.

Step 2: Processing Key Events

Depending on the operating system or, generally speaking, the application runtime environment, the input device drivers (keyboard, remote control, hardware buttons, etc.) will generate messages when the input device state changes (e.g. a remote control button is pressed). Typically, the input task writes the input data into a message queue and the main loop can then process these events out of this queue and feed them to the GUI application.

If an event is in the message queue (e.g. a remote control key pressed event), the main loop first has to examine the event. This means the event information has to be translated into a Mosaic key code. This is done within the function EwGetKeyCommand(): all key events that are needed within your UI application need to be translated into a Mosaic key code. Typically, this function is part of the ewmain.c template.

For this purpose, the Mosaic class library contains a set of predefined keys in the enumeration KeyCode of unit Core. For example, the identifier CoreKeyCodeOk corresponds to the Ok key on the remote control or the Enter key on a keyboard.

As soon as the incoming key event is translated into an appropriate Mosaic key code, it can be passed to the GUI application by using the method DriveKeyboardHitting().

In case of MCUs it is also possible to access the keyboard hardware directly (e.g. a group of hard buttons via GPIOs, a keypad or the data from a serial interface) and to translate it into a Mosaic key code within the function EwGetKeyCommand().
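The following sketch illustrates such a translation. The function ReadRawKey() and the RAW_KEY_* values are hypothetical placeholders for your input driver or message queue; the CoreKeyCode* identifiers are taken from the generated enumeration Core::KeyCode.

/* hypothetical raw key codes as delivered by your input driver or queue */
#define RAW_KEY_NONE   0
#define RAW_KEY_ENTER  1
#define RAW_KEY_UP     2
#define RAW_KEY_DOWN   3
#define RAW_KEY_POWER  4

extern int ReadRawKey( void );   /* hypothetical, non-blocking driver access */

XEnum EwGetKeyCommand( void )
{
  /* translate the raw key information into a Mosaic key code of the
     enumeration Core::KeyCode */
  switch ( ReadRawKey())
  {
    case RAW_KEY_ENTER : return CoreKeyCodeOk;
    case RAW_KEY_UP    : return CoreKeyCodeUp;
    case RAW_KEY_DOWN  : return CoreKeyCodeDown;
    case RAW_KEY_POWER : return CoreKeyCodePower;
  }

  /* no key event pending */
  return CoreKeyCodeNoKey;
}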

The above description assumes that you are familiar with the concepts of how the native code (in particular the implementation of the main loop) and the generated code interact. If this is not the case, please see the sections Invoke GUI application code from device and Data types. To see the documentation of the described method DriveKeyboardHitting(), please use the links above.

Step 3: Processing Cursor or Touch Screen Events

Some devices are cursor or touch operated (e.g. via the touch screen, a touch panel, a joystick or the mouse device). In this case a device driver will detect the touch positions or the cursor information. Depending on the integration, the touch events are written into a message queue (e.g. on MPUs with Linux) or they can be queried via a certain touch driver API (e.g. on MCUs). Similar to key events, the main loop then has to read the mouse or touch events and feed them to the GUI application.

The Mosaic class Core::Root (the base class for every GUI application) provides the two methods DriveMultiTouchHitting() and DriveMultiTouchMovement(). These serve as the interface you call from the main loop to feed the GUI application with events generated by a mouse device or a touch screen. These methods can process multi-touch events if the touch screen in your device is able to distinguish between simultaneous touch interactions.

If you want to use only single-touch events, you can use the methods DriveCursorHitting() and DriveCursorMovement() instead. Both methods are limited to handling a single interaction at a time.

The only difference between the multi-touch and the single-touch versions of the methods is that the multi-touch versions expect a number in the range 0 .. 9 identifying the finger that has caused the event.

Usually, multi-touch screen drivers track every movement of every finger currently touching the screen and are thus able to identify the fingers individually. You have to translate this finger identification into a number in the range 0 .. 9 and pass this value to the methods DriveMultiTouchHitting() and DriveMultiTouchMovement() when calling them.

For example, if the user touches the screen with one finger, you call the methods with the number 0, since this number identifies the first finger. When the user then presses a second finger, you have to use the value 0 for all events associated with the first finger and the value 1 for all events caused by the second finger. Similarly, when the user presses a third finger on the screen, all events generated by this third finger should be passed with the value 2 when calling the methods DriveMultiTouchHitting() and DriveMultiTouchMovement().
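The touch handling shown in the main loop above already receives a finger number from the EwBspTouch driver. In case your touch driver reports arbitrary track identifiers instead, a small mapping like the following hedged sketch can be used to assign each track identifier to a free finger slot in the range 0 .. 9 (the array and the helper function are hypothetical).

#define NO_OF_FINGERS 10

/* currently assigned track identifier per finger slot, -1 means free */
static int TrackIdOfFinger[ NO_OF_FINGERS ] =
  { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 };

/* returns the finger number for the given track id; pass aRelease != 0 at
   the end of the touch cycle to free the slot again */
int MapTrackIdToFinger( int aTrackId, int aRelease )
{
  int finger;

  /* is this track id already assigned to a finger slot? */
  for ( finger = 0; finger < NO_OF_FINGERS; finger++ )
    if ( TrackIdOfFinger[ finger ] == aTrackId )
    {
      if ( aRelease )
        TrackIdOfFinger[ finger ] = -1;
      return finger;
    }

  /* otherwise occupy the first free slot for the new finger */
  for ( finger = 0; finger < NO_OF_FINGERS; finger++ )
    if ( TrackIdOfFinger[ finger ] == -1 )
    {
      TrackIdOfFinger[ finger ] = aTrackId;
      return finger;
    }

  /* more than 10 simultaneous fingers are ignored */
  return -1;
}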

The above description assumes that you are familiar with the concepts of how the native code (in particular the implementation of the main loop) and the generated code interact. If this is not the case, please see the sections Invoke GUI application code from device and Data types. To see the documentation of the described methods, please use the links above.

Step 4: Processing Timers

Within an Embedded Wizard generated UI application, all timer objects that are currently running are stored in a timer list. In order to activate the timer processing, the function EwProcessTimers() of the Runtime Environment has to be called: the timer list is traversed, all expired timers are handled and the number of processed timers is returned.

Step 5: Processing Signals

In an Embedded Wizard generated UI application, signals can be sent in order to trigger slot methods with certain behavior. Besides the regular signal statement, which is processed immediately, the special postsignal and idlesignal statements are collected and executed right before respectively right after a screen update is performed. The processing of these deferred signals is done by calling EwProcessSignals().

Step 6: Updating the Screen

In reaction to any system/key/cursor event, any signal or any expired timer, the UI application can change its appearance and, for example, open a new dialog box or move a bitmap across the screen. In that case, all affected objects are marked as invalid. During the update of the root object, all invalid areas are examined and the changes of the visible objects are processed. Finally, the resulting content is shown within the visible framebuffer.

The update of the screen depends on the selected framebuffer concept. In most Build Environments the update is implemented in the function EwUpdate() within the file ewmain.c.
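As an orientation, the following is a simplified sketch of such an update function for a system with a complete framebuffer, assuming the Graphics Engine functions EwBeginUpdate() and EwEndUpdate() and the generated method CoreRoot__UpdateGE20(). Partial display updates or scratch-pad buffers require a different sequence, so please refer to the ewmain.c of your particular Build Environment.

void EwUpdate( XViewport* aViewport, CoreRoot aApplication )
{
  /* obtain access to the framebuffer of the viewport */
  XBitmap* bitmap = EwBeginUpdate( aViewport );

  if ( bitmap )
  {
    /* let the GUI application redraw all invalid areas */
    XRect updated = CoreRoot__UpdateGE20( aApplication, bitmap );

    /* finish the update and make the result visible on the display */
    EwEndUpdate( aViewport, updated );
  }
}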

Step 7: Triggering the Garbage Collection

In the last step of the main loop, the garbage collection is triggered in case any processing has happened. For this purpose, the function EwReclaimMemory() of the Runtime Environment is called.

De-Initialization of the Embedded Wizard GUI Application

When the application is terminated, the used resources have to be released.

The following code snippet illustrates the necessary steps to de-initialize the Embedded Wizard generated GUI application:

void EwDone( void )
{
  /* deinitialize your device driver(s) */
  DeviceDriver_Deinitialize();

  /* destroy the applications root object and release unused resources and memory */
  EwDoneViewport( Viewport );
  EwUnlockObject( RootObject );
  EwReclaimMemory();

  /* deinitialize the Graphics Engine */
  EwDoneGraphicsEngine();

  #if EW_MEMORY_POOL_SIZE > 0
    /* deinitialize heap manager */
    EwDoneHeap();
  #endif

  /* deinitialize touch driver */
  EwBspTouchDone();

  /* deinitialize display */
  EwBspDisplayDone();
}