# Model Evaluation (Your First Edge AI Project)

Note: This page is part of the tutorial Your First Edge AI Project.

# Background

When a training job has finished, we are able to download the results and evaluate them.

In a typical Machine Learning project, this involves looking at different statistics and metrics to get a feel for whether or not the model is good enough. If the model fails, this is the part of the workflow where we figure out why, so that we can improve the model.

Imagimob Studio empowers the user with a very powerful tool here: we can let the model label the train/validation/test data for us, and then play it back and inspect it in detail, just as we did when we labelled the data at the beginning of this guide.

Note: Your directory structure and file names might be different from the examples in the pictures below.

# Tracking training job progress

First, let us locate your training job in Explorer. Under the Cloud symbol, double-click your username (typically your email address) and a list of your jobs will show up. Double-click the job to open it.

In this view, we can track the progress of our training job.

As the training progresses, information is added to this view as seen below.

# Download model files

When a model has finished training, the status is set to "Training Completed".

The results can be downloaded by clicking on the "Model files" symbol.

A list showing the available results for that specific model pops up. From this list, you can choose to download the following files:

  • trained model (.h5 file)
  • model predictions to be used to evaluate the trained model
  • test input and output datasets to be used in the Generate Data Compare Test (see next page)

Save the files in the "Results" folder in your workspace, or in any folder of your choice.

Note: Models can be sorted by clicking on the accuracy/F1 score/parameters headings.

# Evaluate results

In the "Explorer" view we can see our downloaded files. Here we will discuss the model file (.h5) and the files contained in the "Predictions" directory.

In the download items, we see several files and folders. We will not explain all of them here, we focus instead on the model file *.h5 file as well as the "Predictions" directory.

# Model view

Let's double-click the downloaded .h5 model file in Imagimob Studio.

Note: Depending on which model you downloaded, your view and file name may differ.

The model file contains four tabs showing the pre-processor layers, the trained network architecture, the training performance, and the edge project configuration; see the pictures below.
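
If you prefer to look at the network outside Studio, the downloaded model can also be opened with Keras. The sketch below is only an illustration and assumes the .h5 file is a standard Keras model and uses a hypothetical file name; custom pre-processing layers, if present, may need extra handling. Studio's Model view shows the same information without any code.

```python
# Minimal sketch: opening the downloaded .h5 model outside Studio.
# Assumes TensorFlow/Keras is installed and that the file path below
# matches your own "Results" folder; adjust it to your model's name.
import tensorflow as tf

MODEL_PATH = "Results/model.h5"  # hypothetical file name

# compile=False skips the training configuration, which is not needed
# just to look at the architecture.
model = tf.keras.models.load_model(MODEL_PATH, compile=False)

model.summary()                        # layer-by-layer architecture
print("Input shape: ", model.input_shape)
print("Output shape:", model.output_shape)
```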

# Confusion matrix view

The starting point is the Evaluation tab in the .h5 file. This gives us a good overview of the performance of our model.

In the picture above, we see a "confusion matrix" which gives us a summary of how well the model classifies different activities (jumping, running, sitting, standing, walking). From the drop-down menu, we can choose if we want to look at the train, validation, or test dataset results. We see here the performance of the model on the test set.

Remember: The test set contained data that was not used during model training. This gives an indication of how well the model will perform when it is deployed.

Read more about "confusion matrices" here (opens new window).

The quick explanation is that the green boxes on the diagonal are good: they represent correct predictions. Everything outside the diagonal is a misclassification, and a bright red box indicates a common misprediction.

The rest of the metrics are standard for classification problems and will not be explained in this guide; see e.g. here.
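
Studio computes the confusion matrix and these metrics for you, but as a point of reference, the same kind of numbers can be reproduced with scikit-learn given arrays of true and predicted class labels. The lists below are made-up examples, not output from Studio.

```python
# Minimal sketch: reproducing a confusion matrix and standard metrics
# with scikit-learn. The label lists below are made-up examples; in
# practice they would come from your test data and the model predictions.
from sklearn.metrics import confusion_matrix, classification_report

classes = ["jumping", "running", "sitting", "standing", "walking"]

y_true = ["running", "running", "sitting", "walking", "jumping", "standing"]
y_pred = ["running", "walking", "sitting", "walking", "jumping", "standing"]

# Rows are the true classes, columns the predicted classes;
# the diagonal holds the correct predictions.
print(confusion_matrix(y_true, y_pred, labels=classes))

# Per-class precision, recall and F1 score, plus overall accuracy.
print(classification_report(y_true, y_pred, labels=classes))
```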

# Import predictions

In Imagimob Studio, we can get a much more detailed view of the performance of a model by importing the model predictions. Right-click on the "Predictions" folder in the "Explorer" view. Then select Tools -> Session Batch Import...

The batch import tool helps identify the sessions in your "Predictions" folder.

Note: Your folder name and contents might differ.

Now we will link these sessions to our local sessions so that we can get the predictions superimposed over our original data.

Click the Additional Tracks... button.

Browse to your sessions folder, typically your "Data" folder, and select it.

Now we can see that data and label tracks from your local "sessions" or "Data" folder will be added to the "Predictions" folder that we downloaded.

Press "OK" in the Batch Import dialog to apply the merge.

A warning box appears since this will modify the session files in the "Predictions" folder. This is exactly what we want. Press "OK".

Finally, a box will verify that the merge was successful. Click "OK".
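
Conceptually, the merge simply makes the original data and label tracks available alongside the model prediction tracks so that they share a time axis. Purely as an illustration of that idea, the sketch below aligns a prediction file with a sensor data file by timestamp using pandas; the file and column names are assumptions, not the actual session file format.

```python
# Conceptual sketch of what the merge achieves: lining up model predictions
# with the original sensor data on a shared time axis. The file and column
# names are assumptions for illustration; Studio does this for you through
# Tools -> Session Batch Import.
import pandas as pd

accel = pd.read_csv("Data/session_001/accel.csv")           # e.g. time, x, y, z
preds = pd.read_csv("Predictions/session_001/model0.csv")   # e.g. time, class scores

# Match each sensor sample to the nearest prediction in time so that both
# tracks can be plotted and inspected together.
merged = pd.merge_asof(
    accel.sort_values("time"),
    preds.sort_values("time"),
    on="time",
    direction="nearest",
)
print(merged.head())
```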

# Investigating the results

Now it's time to finally see the power of the generated model predictions/labels and the "Session Batch Import" tool.

Let's open the first session in the "Predictions/sessions" folder by double-clicking the .imsession file as shown below.

In the main data window, we now see a combined plot of the original data (e.g. "accel") and the model predictions (here "Model0"). The model predictions are shown as solid lines with different colors depending on the class. We can hide one of the data tracks to more easily analyze the other; in the picture below, the acceleration track is hidden.

Using the Add Track... button underneath the label tracks, we can generate a label track from the model predictions, which makes it easier to compare the model output with the labels set by a human. To add the prediction track, click the Add Track... button and select New Label Track from predictions....

The dialog in the picture below will pop up.

Here you can select:

  • Source Track - The model to be used to generate the prediction label track.
  • Track Name - The name of the prediction label track.
  • File Name - The name of the prediction label file.
  • Confidence Threshold % - The post-processing method filters out the predictions that are below this threshold.
  • Confidence Display - The confidence value shown for a merged label. You can choose the Min Value, the Max Value, or the Average Value of the confidence values of all merged predictions.
  • Merged Label checkbox - The option to show labels separated or merged. When labels are separated, there is one output label for each window of data. When labels are merged, all overlapping predictions of the same class are merged into one. Merging labels can therefore improve visibility.

The default settings in the dialog shown above mean that the output from "Model0" will be used, a track named "Model0" will be created, and a label file named "Model0.label" will be written. The Confidence Threshold of 90 % means that only predictions with an output confidence above 90 % are used to create labels. The "Merged Labels" checkbox merges all overlapping windows with the same prediction into one long label, and the Min, Max, or Average output confidence value, as selected in the "Confidence Display" drop-down menu, is assigned to the merged label.
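
To make the thresholding and merging concrete, here is a minimal sketch of the idea, assuming one (class, confidence) pair per window of data. It is only an illustration; Studio performs this post-processing for you when the track is created.

```python
# Illustrative sketch of the threshold-and-merge idea described above.
# The data format is an assumption: one (class, confidence) pair per
# window of data. Studio performs this processing when the track is created.
def merge_predictions(windows, threshold=0.90, display="avg"):
    """Drop low-confidence windows, then merge consecutive windows of the
    same class into one label with a single confidence value."""
    kept = [(cls, conf) for cls, conf in windows if conf >= threshold]

    merged = []
    for cls, conf in kept:
        if merged and merged[-1][0] == cls:
            merged[-1][1].append(conf)    # extend the current label
        else:
            merged.append([cls, [conf]])  # start a new label

    reduce_fn = {"min": min, "max": max,
                 "avg": lambda c: sum(c) / len(c)}[display]
    return [(cls, reduce_fn(confs)) for cls, confs in merged]


windows = [("sitting", 0.97), ("sitting", 0.93), ("walking", 0.45),
           ("walking", 0.95), ("walking", 0.92)]

# One merged "sitting" label and one merged "walking" label; the 0.45
# "walking" window is filtered out by the 90 % threshold.
print(merge_predictions(windows))
```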

Press "OK" to confirm. Below your original label track "label1" there will be a new label track called "Model0" which contains the labels generated by the model. This helps you visualize each prediction made by the model together with the confidence level of that prediction. In this example, only the predictions with a confidence level above 90% are shown.

Note: The name of your prediction track(s) might differ.

This gives us the ability to, essentially, run a field test on our computer, without deploying anything on a device until we are pleased with the model performance.
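
If you want to put a rough number on such an offline field test, one option, sketched below with made-up label intervals, is to sample the human label track and the model label track on a common time grid and measure how often they agree. Studio's Evaluation tab already reports the corresponding metrics, so this is only an illustration.

```python
# Illustrative sketch: comparing a model-generated label track with the
# human label track by sampling both on a common time grid.
# The intervals below are made up; real tracks live in the .label files.
def label_at(track, t):
    """Return the label active at time t, or None if unlabelled."""
    for start, end, label in track:
        if start <= t < end:
            return label
    return None

human = [(0.0, 5.0, "sitting"), (5.0, 10.0, "walking")]
model = [(0.0, 4.5, "sitting"), (4.5, 10.0, "walking")]

times = [t / 10 for t in range(100)]  # 10 s sampled at 10 Hz
agree = sum(label_at(human, t) == label_at(model, t) for t in times)
print(f"Agreement: {agree / len(times):.0%}")  # 95% for these made-up tracks
```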

We are almost at the end of the guide now. Time to optimize the model for Edge deployment.

Let's go!

Next Section - Edge Optimization