# Edge Code & Deployment

In the previous section we optimised the Python model into ANSI C99 source code. Now we will show how to actually deploy it onto the XM122 board. First we will go over the Imagimob Edge API, and then show how to take the generated source (.c) and header (.h) files and incorporate them into the device firmware.

# C Code, API & Defines

# Initialisation

    void IMAI_init(void);

The initialisation function is very simple: it just needs to be called once at program start. There are, however, scenarios where you may want to call it again after the program wakes from sleep. These are discussed later in Advanced Functionality.

# Enqueue

    int IMAI_enqueue(const float *restrict data_in, const float *restrict time_in);

The enqueue function is what you use to pass data to the model. It has two arguments (if time is added):

  • data_in: data sample to be passed to the model. Has size of IMAI_DATA_IN_COUNT from generated header file.
  • time_in (optional): timestamp of data sample. Has size of IMAI_TIME_IN_COUNT from generated header file.

The data_in represents a single sample; in our case, and in most cases, it corresponds to a single CSV row of the kind used in training. Here that row has 192 columns, so the size of data_in is 192. We don't need to worry about any windowing functionality or keeping history; that's all taken care of within the library/model. The time_in is the timestamp of this sample, in our case a single value in seconds. It's not used during the model computation, but it can help with error checking and ensuring model output validity. This argument is only present if selected when optimizing the model for edge.

# Dequeue

    int IMAI_dequeue(float *restrict data_out, float *restrict time_out);

The dequeue function is what you use to read the model output. It has two arguments (if time is added):

  • data_out: array that the model's output predictions are written to. Has size of IMAI_DATA_OUT_COUNT from the generated header file.
  • time_out (optional): timestamps associated with the output window. Has size of IMAI_TIME_OUT_COUNT from the generated header file.

The data_out represents an array of the confidence levels for each output symbol or class. In this case our expected output is of size 4, composed of the following:

  • data_out[0] - confidence level for the other/unlabelled symbol; this represents everything that's unlabelled in the data, and it should be the normal state of the model.
  • data_out[1] - confidence level for the wiggle symbol
  • data_out[2] - confidence level for the vertical finger rotation symbol
  • data_out[3] - confidence level for the push symbol

To map your symbol to the right array location you can refer to the symbol map:

As can be seen, each symbol has an associated ID number, and this ID number directly translates to the position in the output array.

The time_out, which is of size 2, represents the timestamps of the first and last data samples in the window. This argument is only present if selected when optimizing the model for edge:

  • time_out[0] - timestamp of the first data sample
  • time_out[1] - timestamp of the last data sample

# Model Deployment

Now that we have gone through the API and what it means, the next step is to actually deploy it. This can be broken down into a number of steps as follows:

  1. Ensure that the model will fit on the target device; in our case the reduced model does.
  2. Prepare the development environment.
  3. Update the firmware code to add new functionality for applying the sensor settings and running the model.
  4. Flash the code.

NOTE: This tutorial focuses on using the Segger IDE to flash the firmware and model to the device.

# Preparing the Development Environment

The preparation can be done by following Acconeer's software development guide (opens new window); the SDK download is here (opens new window). These direct the user through the process and any additional downloads.

Once everything is set up, you should do a test run of the code to ensure everything is fine.

NOTE: Once flashed, the device firmware is changed, so your previous functionality is gone unless you revert to it. This also means that you need to re-flash the original firmware if you want to use the board with the capture server again. This was discussed in the Setting Up the Capture Server section; essentially, you need to follow this Acconeer guide (opens new window).

# Updating Firmware with Model Functionality

There are three parts to programming the model onto the Acconeer XM122 board.

  1. Setting up the correct calls to the model implementation functions
  2. Setting up the correct sensor settings and configurations to match those used in the data collection
  3. Implementing the actual API calls for passing data to the model and parsing the predictions

# Setting Up Right Program Call

This section is about setting up the right program call, since the Acconeer SDK examples are composed of many different programs that can be run on the board. We're going to take the following steps:

  1. Add the source code and header (.c and .h) files to the project in their own folder.
  2. Make a copy of the example closest to the system used in the capture server. In our case this is example_service_sparse.c, and we name it something clear like example_service_sparse_IMAGIMOB_AI.c, as shown below:

  3. Open this newly copied file and then modify the following:

     a. Include the model header at the top of the C file.

     b. Rename the function containing the main application: it is called acconeer_example, and we rename it to acconeer_example_AI.

     c. Update the function call in main.c with the new application name.

# Sensor Setting and Configuration

The Acconeer board supports different kinds of radar services and configurations. The firmware already sets the radar into sparse mode which is the mode we used during the data collection. That's why we picked this example to build on. Now, we just need to apply the rest of the configuration. Let's first take a look at the configuration file:

    {
        "mode": "SPARSE",
        "range_interval": [0.07, 0.20],
        "profile": "PROFILE_1",
        "update_rate": 39,
        "sweeps_per_frame": 64,
        "gain": 0.5,
        "sampling_mode": "A",
        "sweep_rate": 2500,
        "hw_accelerated_average_samples": 60
    }

Now, using the document found at xm122/doc/rss_api.html, we can find the respective C functions that allow us to set the radar to the configuration used when collecting data. We then apply these in the update_configuration(acc_service_configuration_t sparse_configuration) function, resulting in the following updated function:


void update_configuration(acc_service_configuration_t sparse_configuration)
{
    float       update_rate      = 39.0f;
    uint16_t    sweeps_per_frame = 64;
    float       start_m          = 0.07f;
    float       length_m         = 0.13f;
    uint8_t     hw_accelerated_average_samples = 60;
    float       sweep_rate       = 2500.0f;
    float       gain             = 0.5f;
    acc_service_sparse_sampling_mode_t sampling_mode = ACC_SERVICE_SPARSE_SAMPLING_MODE_A;
    acc_service_profile_t              profile       = ACC_SERVICE_PROFILE_1;

    //"sweeps_per_frame": 64
    acc_service_sparse_configuration_sweeps_per_frame_set(sparse_configuration, sweeps_per_frame);

    //"range_interval": [0.07, 0.20]
    acc_service_requested_start_set(sparse_configuration, start_m);
    acc_service_requested_length_set(sparse_configuration, length_m);

    //"sampling_mode": "A"
    acc_service_sparse_sampling_mode_set(sparse_configuration, sampling_mode);
    //"profile": "PROFILE_1"
    acc_service_profile_set(sparse_configuration, profile);
    //"hw_accelerated_average_samples": 60
    acc_service_hw_accelerated_average_samples_set(sparse_configuration, hw_accelerated_average_samples);
    //"sweep_rate": 2500
    acc_service_sparse_configuration_sweep_rate_set(sparse_configuration, sweep_rate);
    //"gain": 0.5
    acc_service_receiver_gain_set(sparse_configuration, gain);
    //"update_rate": 39
    acc_service_repetition_mode_streaming_set(sparse_configuration, update_rate);
}

Now the sensor output will be exactly the same as what we collected from the capture server. If you have modified the settings from those used in the tutorial, just use the relevant values instead.

# Implementing the Main Loop Code

Now that we have set up the sensor to output the right data in the right format, and set up the system to call the right application loop, we can go over actually integrating the model into the firmware.

First, before we get into the main loop, we start with the initialisation function. Simply add a call to IMAI_init() anywhere near the beginning, before the main loop starts.

In our case, we initialise the library after updating the sensor settings, as seen in the following figure.

Next we will implement the main loop code in the following steps:

  1. Variable Initialisation
  2. Variable Type Casting
  3. Enqueue & Dequeue
  4. Applying Output Dependent Functionality

# Variable Initialisation

Next we move on to the variable declaration and initialisation. For that we do the following:

  1. Increase the number of iterations to however long you want the program to run. For example, if set to 5000 at 39 loops per second, it will run for about 128 seconds.
  2. Set up the data_in variable as a float array of size sparse_metadata.data_length, which is given by the Acconeer library (alternatively we could have used IMAI_DATA_IN_COUNT).
  3. Set up the data_out variable as a float array of size IMAI_DATA_OUT_COUNT, as provided by the Imagimob library.
  4. Finally, set up the confidence_threshold float variable to improve the performance. Start with 0.8f; this can be increased if there are too many false positives, or reduced if the model is unresponsive.

In the end your variable declarations should look as follows:

    bool            success    = true;
    const int       iterations = 5000;
    uint16_t        data[sparse_metadata.data_length];
    float           data_in[sparse_metadata.data_length];
    float           data_out[IMAI_DATA_OUT_COUNT] = {0};
    const float     confidence_threshold = 0.8f;

    acc_service_sparse_result_info_t    result_info;

Note: we chose to use the model without the timestamps, but if you'd like to use them, just create the relevant variables and use them in exactly the same way as the data variables.

# Variable Type Casting

Since the model we built expects its inputs as floats and the sensor output is 16-bit unsigned integers, we need to cast the values to the new type. This is very simple: we just use a for loop to cast and copy the array values into a new array, as such:

    uint16_t        data[sparse_metadata.data_length];
    float           data_in[sparse_metadata.data_length];
    //float         data_in[IMAI_DATA_IN_COUNT]; //alternative definition: the size set by the Acconeer code vs. that set by the Imagimob code; both should be identical

    for (int i = 0; i < sparse_metadata.data_length; i++)
    {
        data_in[i] = (float) data[i];    //casting to match the expected model input type
    }

# Enqueue & Dequeue

Enqueuing and Dequeuing relates to passing data to the model and reading model predictions respectively. Note that we are using the API version without the timestamps included for simplicity.

First, the enqueue function is passed a pointer to the data array for the last received sensor sample.

Next, the dequeue function is checked to see if it has a model prediction ready. If it does, it returns IMAI_RET_SUCCESS (which has a value of 0) and fills data_out with the confidence level for each class or symbol. In some cases the dequeue function has enough data for multiple predictions; this is why it's run in a while loop.

    IMAI_enqueue(data_in);            //passing data to the model
    while(!IMAI_dequeue(data_out)) {} //reading predictions from the model when ready, i.e. it returns 0

# Applying Output Dependent Functionality

Next, it's good to pass the model output through an argmax function or to compare it against a threshold. In this case we will compare against a threshold, because we only want to output something if the model is very sure it's the right gesture. We previously set the confidence threshold to 0.8 out of 1; this means the model needs to be at least 80% confident before we actually output one of the classes, otherwise we ignore it. When we have an output, we simply print it to the terminal.

Building on top of the previous section and applying the new processing we get the following code:

    while(!IMAI_dequeue(data_out))
    {
        if(data_out[1] > confidence_threshold)
            printf("Wiggle: %f\n", data_out[1]);
        else if(data_out[2] > confidence_threshold)
            printf("Finger: %f\n", data_out[2]);
        else if(data_out[3] > confidence_threshold)
            printf("Push: %f\n", data_out[3]);
    }

# Resulting Main Loop Code

Now, after doing all this, our main loop code should look like the following:


//initialise variables
bool            success    = true;
const int       iterations = 5000;
uint16_t        data[sparse_metadata.data_length];
float           data_in[sparse_metadata.data_length];
float           data_out[IMAI_DATA_OUT_COUNT] = {0};
const float     confidence_threshold = 0.8f;

acc_service_sparse_result_info_t    result_info;

//start main function loop
for (int i = 0; i < iterations; i++)
{
    success = acc_service_sparse_get_next(handle, data, sparse_metadata.data_length, &result_info);

    if (!success)
    {
        printf("acc_service_sparse_get_next() failed\n");
        break;
    }

    for (int j = 0; j < sparse_metadata.data_length; j++)
    {
        data_in[j] = (float) data[j];    //casting to match the expected model input type
    }

    IMAI_enqueue(data_in);           //passing data to the model
    while(!IMAI_dequeue(data_out))   //reading predictions from the model when ready, i.e. it returns 0
    {
        if(data_out[1] > confidence_threshold)
            printf("Wiggle: %f\n", data_out[1]);
        else if(data_out[2] > confidence_threshold)
            printf("Finger: %f\n", data_out[2]);
        else if(data_out[3] > confidence_threshold)
            printf("Push: %f\n", data_out[3]);
    }
}

# Flashing the Device & Running the Code

If you have succeeded in flashing the board previously, just repeat the process and it should work again. I used an nRF5340-PDK board as a debugger for the XB122+XM122 board, as seen below.

Next, run the code using the F5 command. If successful, simply connect to the board with a serial terminal such as PuTTY and perform some gestures in front of the radar!

With this done, you have successfully deployed your model. You can now apply the same principles to different models, different sensor configurations, and even entirely different boards and projects. For new models, all you have to do is drop the new one in place, configure the radar correctly if you used new settings, and update the output-dependent functionality, and you're good to go!

# Advanced Functionality

# Board Sleep and Wake-up Routines

In common usage of IoT devices and other such small devices, you typically want to put the microcontroller to sleep when it's not in use. With how the data is sampled, however, you need to ensure that a window isn't half filled with data from before the sleep and half with data from after it. There are two ways to get around this problem:

  1. Use the initialisation function to clear the data buffers
  2. Use the dequeue function's output timestamps to check the window used in the model inference

# Resetting Buffers with Initialisation

In the wake-up routine, you can simply call the IMAI_init() function to clear all the buffers. This means that from the next sample onward, the system starts fresh and no misclassifications occur due to stale data.

# Error handling using Dequeue Output Timestamps

You can in all cases check the output timestamps to ensure they are what's expected. For example, you can catch problems with unexpected timing behaviour, such as sensor data drops or packet loss, among other issues.

The following code gives an idea of how you could check the time. Given a system where the expected window size is 50 samples at 25 Hz, the window should be about 2 seconds long. The code checks that the output window is within ±10% of 2 seconds.

float time_out[2] = {0};
float time_span = 0;

#define EXPECTED_WINDOW_TIME 2.0f
#define MIN_WINDOW_TIME (EXPECTED_WINDOW_TIME * 0.9f)
#define MAX_WINDOW_TIME (EXPECTED_WINDOW_TIME * 1.1f)

if (!IMAI_dequeue(data_out, time_out))
{
    time_span = time_out[1] - time_out[0];  //duration of the window used for this prediction

    if (time_span > MIN_WINDOW_TIME && time_span < MAX_WINDOW_TIME)
    {
        //proceed with classification, everything is fine
    }
    else
    {
        //something is wrong; drop this output and, if it happens repeatedly, raise a flag
    }
}