# Model Evaluation

Now that we have explored the preprocessing layers for this project, visualised the preprocessed gestures and started a training job, it is time for model evaluation. In this section, we focus on how to monitor the training job, how to download models together with their predictions, and how to load and evaluate those predictions. This lets us visualise the model output and compare it with the original video and data, giving us a stronger understanding of the model's performance and behaviour.

# Monitoring Training Progress

First, let us locate our training job in the Explorer. To open the job, simply double-click the job you started under your username. You will then arrive at the following view.

As you can see, some models finish training before others; you can click on a model to view its results and more information about it. The models with test results have completed training, as indicated by the status column. Those without have not finished yet, but their results are regularly updated in this view.

# Model Predictions

Next, we are going to evaluate the model by looking at its output predictions. These allow us to assess model performance beyond just the numbers.

# Download Predictions

As shown in the picture below, you can download the predictions for all data sets by clicking Model Files and selecting Predictions. Save the model evaluation in the same workspace that you have open in the Studio.

Tip: Keep the models (.h5 files) together with their corresponding predictions, and label them clearly to help keep track of them.

Let us load the predictions by double-clicking the .imsession file in the Predictions folder that we just downloaded. As we can see from the picture below, there is a data track named Model0, which holds the model output for each of the classes: a value between 0 and 1 per class. If you want to see the label of this prediction at a desired confidence level, click Add Track and select New Label Track from predictions.

Next, you will see the Generate Label Track window as shown below, where you can select:

  • Source Track - The prediction track from which the labels are generated.
  • Track Name - The name of the label track.
  • File Name - The name of the label file.
  • Confidence Threshold % - The post-processing filters out predictions whose confidence is below this threshold.
  • Confidence Display - The confidence display type, which relates to the Merged Label option. You can choose either the Min Value, the Max Value or the Average Value of the confidence numbers of all merged predictions.
  • Merged Label check box - The option to show labels separated or merged. When the labels are separated, each window of data gets its own output label. When the labels are merged, all overlapping predictions of the same class are merged into a single label. Merging labels can therefore improve visibility.
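The threshold, merge and confidence-display options above can be sketched in a few lines of Python. This is only an illustration of the post-processing idea, not Studio's actual implementation; the per-window data layout and the `generate_labels` function are assumptions made for the example:

```python
# Sketch of the Generate Label Track post-processing: threshold the
# per-window class confidences, then optionally merge overlapping
# predictions of the same class into one label.

def generate_labels(windows, threshold=0.5, merge=True, display="max"):
    """windows: list of (start, end, {class: confidence}) tuples."""
    # Keep only the top class of each window, filtered by the threshold.
    labels = []
    for start, end, scores in windows:
        cls = max(scores, key=scores.get)
        if scores[cls] >= threshold:
            labels.append((start, end, cls, scores[cls]))

    if not merge:
        return labels

    # Merge overlapping predictions of the same class into one label.
    merged = []
    for start, end, cls, conf in labels:
        if merged and merged[-1][2] == cls and start <= merged[-1][1]:
            s, e, c, confs = merged[-1]
            merged[-1] = (s, max(e, end), c, confs + [conf])
        else:
            merged.append((start, end, cls, [conf]))

    # Confidence Display: min, max, or average over the merged windows.
    agg = {"min": min, "max": max, "avg": lambda c: sum(c) / len(c)}[display]
    return [(s, e, c, agg(confs)) for s, e, c, confs in merged]

windows = [
    (0.0, 1.0, {"push": 0.9, "wiggle": 0.1}),
    (0.5, 1.5, {"push": 0.7, "wiggle": 0.3}),
    (2.0, 3.0, {"wiggle": 0.4, "push": 0.6}),
]
print(generate_labels(windows, threshold=0.5, merge=True, display="avg"))
```

With these inputs, the first two windows overlap and agree on push, so they merge into one label covering 0.0 to 1.5 with the averaged confidence; raising the threshold removes low-confidence predictions entirely, just as the Confidence Threshold % option does.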

To improve the view, we need to add the original data: the labels we created manually and the video against which to evaluate the prediction. This leads to the next step.

# Import Original Data, Video and Labels into Predictions

You can get a much more detailed view of a model's performance by importing the original data into the model predictions. We will use the batch import functionality to link these sessions to the original data sessions, so that the predictions are superimposed over our original data. To do this, simply right-click the Predictions folder where you saved the evaluation in the Explorer view, then select Tools -> Session Batch Import…

The Batch Import window will then show up, and we can click the Additional Tracks… button to browse to the local data sessions folder and select it. Now we can see that the data and label tracks from the local data sessions folder will be added to the Predictions folder that we downloaded. Next, press OK in the Batch Import modal to apply the merge. A warning box appears since this will modify the session files in the Predictions folder. Finally, a box will verify that the merge was successful. Click OK.

# Prediction Evaluation

Now it is finally time to see the power of the generated model predictions and labels. Let us load a test dataset session in the Predictions folder by double-clicking its .imsession file.

As shown in the figure below, the original label track is the one we created manually. We can see that the predicted label is quite close to the original label.

This gives us the ability to, essentially, run a field test on our computer, without deploying anything on a device until we are pleased with the model performance.

# Download Model

When a model's status changes to Complete, you are ready to download the result by simply clicking Model Files. You will then get the .h5 file; remember to save it in the same workspace that you have open in the Studio.

# Model Overview

If you double-click the .h5 model file that you just downloaded, a view of the model will show up. It contains the Preprocessor layers, the Network architecture, the performance Evaluation and the Edge project configuration, as shown below.

# Confusion matrix

On the Evaluation tab of the .h5 model file, you get an overview of the model's performance.

Note: We normally check the performance on the Tensorflow Test Set, since the test dataset contains data that is not used during model training; it therefore gives an indication of how well the model will perform once deployed.

Shown in the figure above is a confusion matrix, which contains an overview of how well the model classifies the different events (push, vertical_rot, and wiggle).

Regarding the confusion matrix, the quick explanation is that green boxes are good (all the values on the diagonal represent correct predictions), and everything outside of the diagonal is a misclassification. A bright red box indicates a common misclassification. For more details, please check Confusion matrix.
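To make the idea concrete, a confusion matrix can be computed from true and predicted labels in a few lines of Python. This is a generic sketch, not how Studio computes its matrix; the class names match this tutorial, but the label lists are made up for illustration:

```python
# Build a confusion matrix from true and predicted class labels.
# Rows are the true classes and columns the predicted classes, so the
# diagonal counts correct predictions and every off-diagonal cell is a
# misclassification.

classes = ["push", "vertical_rot", "wiggle"]

# Made-up example labels: the second "push" is misclassified as "wiggle".
y_true = ["push", "push", "wiggle", "vertical_rot", "wiggle", "push"]
y_pred = ["push", "wiggle", "wiggle", "vertical_rot", "wiggle", "push"]

index = {c: i for i, c in enumerate(classes)}
matrix = [[0] * len(classes) for _ in classes]
for t, p in zip(y_true, y_pred):
    matrix[index[t]][index[p]] += 1

# Overall accuracy is the diagonal sum divided by the number of samples.
accuracy = sum(matrix[i][i] for i in range(len(classes))) / len(y_true)

for c, row in zip(classes, matrix):
    print(f"{c:>12}: {row}")
print(f"accuracy: {accuracy:.2f}")
```

Reading the output the same way as in the figure: the push row has one count in the wiggle column, meaning one push gesture was misclassified as wiggle.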

# Live testing

We also provide a tool to test the model's live performance; you can find it at Tools & Scripts\Acconeer Live Test Tool.

Before we run the Acconeer Live Test Tool, we need the .h5 model file, the pre-processing layers, the radar configuration file and, of course, a radar connected to the PC.

# Generate Preprocessing layers

Since we have already downloaded the model file and we have the radar configuration file that we used for data collection, we only need to generate the pre-processing layers. To do that, follow these steps:

  1. Head to the .improj file.

  2. Enter experimental mode by pressing Ctrl+Shift+D.

  3. Click on Build Pre-processor as shown in the figure below.

# Run the Acconeer Live Test Tool

To run the Acconeer Live Test Tool, we need to pass the following parameters:

  1. Sensor config file (-c) – this is the same file that is used for the Capture Server for data collection.

  2. Model (-m) – this is the model that you want to test. It should be in .h5 format.

  3. Pre-processing layers (-pp) – this is the pre-processing .py file generated in the previous steps.

  4. Connection type (-u) – typically this is UART; enter the port number of the radar.

  5. Confidence threshold (-ct) (optional) – a typical value to start with is 0.5, which can then be increased to improve performance.

As an example, the script can be called with the following input parameters:

-c ..\Config\acconeer_tutorial_config.json
-m ..\Result\Predictions_Model1_fft\Model1.h5
-pp ..\PythonProcessing\model.py
-u COM6
-ct 0.95

Once Starting demo shows up, we are ready to perform different gestures over the radar to test the model's performance.