Generating a model

After completing the preprocessing steps, you can configure a list of models that is sent to the Imagimob cloud for training. IMAGIMOB Studio uses an auto machine learning (AutoML) approach that generates a number of different neural network architectures.

To generate a model, follow these steps:

  1. Navigate to your project directory and double-click the project file (.improj).
    The project file opens in a new tab.

  2. Click the Training tab on the left pane.


  3. Click Generate Model List. The Model Wizard window appears.



  4. In Auto ML on the left pane, configure the following parameters:

  • In Hardware, select the architecture that is applicable to your target device or solution.

  • In Model family, select the model family as per your requirement:

    • Conv1D - 1D-convolution models are lightweight and effective for time-series data.

    • Conv1DLSTM - Models that combine 1D convolution with LSTM layers; high-accuracy models effective for one-dimensional and two-dimensional data.

    • Conv2D - 2D-convolution models are effective for two-dimensional data such as audio spectrograms.

  • In Model flavor, select the model flavor as per your requirement:

    • SmallKern - Small kernels capture small patterns, increasing the model speed.

    • LargeKern - Large kernels capture large patterns, which can improve accuracy but slows down the model.

  • In Classifier, select the classifier type used at the end of the model: GlobalAverage Pool, Hybrid, or Dense.

  • In Model size, select the model size as per your requirement.

  • In Optimization, select the Optimization level of the model for accuracy and speed.

  • In Downscale, enable or disable the radio button as per your requirement. When enabled, this parameter increases the speed of the model at the cost of some model accuracy. However, when the model identifies large features in the input data, downscaling can increase both the model speed and the model accuracy.

  • In Pooling, enable or disable the radio button as per your requirement. When enabled, this parameter reduces the dimensionality of the data after the convolutional layers. In most cases, pooling increases the speed and reduces the memory consumption without affecting the model accuracy.

  • In Learn rate, select the training speed of the model: low, mild, or high. Setting the learn rate to high speeds up the training, but may lead to suboptimal training results.

  • In Regularization, select the desired option as per your requirement. This parameter reduces overfitting to the training data.

  • In Append models, enable the check box to keep old models and append new models to the list. This parameter is applicable when all models have the same class count.
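Pooling's effect on dimensionality can be sketched in a few lines of plain Python (an illustrative NumPy example, not Studio's internal implementation): non-overlapping average pooling with a pool size of 2 halves the sequence length after a layer, which is why it roughly halves the downstream compute and memory.

```python
import numpy as np

def avg_pool_1d(x, pool=2):
    """Non-overlapping 1D average pooling: output length = len(x) // pool."""
    n = len(x) // pool * pool          # drop any trailing remainder
    return x[:n].reshape(-1, pool).mean(axis=1)

signal = np.arange(8.0)                # length 8
pooled = avg_pool_1d(signal)           # length 4: [0.5, 2.5, 4.5, 6.5]
```

Each output value is simply the mean of one window of inputs, so large-scale features survive while the data volume shrinks.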

  5. Click OK.
⚠️
  • All the existing models will be replaced, unless you have enabled the Append models check box.

  • Before starting the training, verify your data labeling and database for missed labels or an uneven data split across the training, validation, and test sets. With a large database or a large training job, it may take a while before data issues become evident in the results.

  6. Click Training on the left pane and configure the following parameters:


    • In Epochs, enter the number of training iterations per model. The training of a neural network is typically divided into epochs. The larger the number of epochs, the longer the training time for each model structure or hyperparameter set. Early stopping is used to prevent overfitting.

    • In Batch Size, enter the number of windows sent to the model for each weight update. A higher batch size means that the job runs faster, but it requires more memory and might cause the training job to fail due to hardware limitations. The optimal value depends on the total number of training samples or windows. For example, a batch size of 32 is usually good for 1000 training samples, and 128 is usually good for 5000 training samples.

    • In Loss Function, select the loss function type. This parameter defines the error a neural network makes for a given input, and is used as the criterion to update the weights in each epoch. You should not modify this.

    • In Split count, enter the value used for splitting, shuffling, and merging data. The split count should evenly divide the batch size, and its maximum value is the batch size; otherwise, data loss may occur. You should not modify this.

    • In Patience, enter the number of epochs with no improvement after which the training stops.
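The interaction between Epochs and Patience can be illustrated with a minimal early-stopping loop (a conceptual sketch, not Studio's actual training code): training runs for at most the configured number of epochs, but stops as soon as the validation loss has not improved for `patience` consecutive epochs.

```python
def early_stopping_epoch(val_losses, patience):
    """Return the epoch at which training stops, given per-epoch validation losses."""
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch           # stopped early
    return len(val_losses) - 1         # ran through all configured epochs

# The loss stops improving after epoch 1, so with patience=2 training stops at epoch 3.
stop = early_stopping_epoch([1.00, 0.80, 0.85, 0.90, 0.95], patience=2)
```

This is why a large Epochs value is safe to set: training rarely runs that long when Patience is reasonable.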

  7. Click Build Steps to view the steps performed after model generation. You do not need to modify anything here.


    For most project types, the build steps are -

    • Model training
    • Confusion matrix
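As a reminder of what the Confusion matrix build step reports, a confusion matrix counts, for each true class, how often every class was predicted; correct predictions sit on the diagonal. A minimal sketch (illustrative only; the classes and data are made up):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] = number of windows of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical 3-class example (classes 0, 1, 2), five evaluation windows.
cm = confusion_matrix([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], n_classes=3)
accuracy = np.trace(cm) / cm.sum()     # fraction of windows on the diagonal
```

Off-diagonal cells reveal which classes the model confuses, which is often more actionable than the accuracy number alone.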
  8. Click OK to generate the list of models.

  9. Click Start New Training Job to start the model training. The Login Cloud Services dialog appears.

    ℹ️

    This login dialog appears only if you are not logged in to IMAGIMOB Studio.

  10. Enter your credentials and click Log In. The New Training Job window appears.


  11. In Job Name and Description, enter the name and a description of the training job.

  12. In Use GPU, enable GPU-powered training. This parameter is available only for paid subscriptions.

  13. In Available credits, view your total credit points.

  14. Click the down expander icon to expand the Advanced Settings.

    • Enable the Generate Labels check box if you want to view the preprocessed label track generated by the preprocessor for training the model. The preprocessed label track provides insight into how the user labels are interpreted during training. This helps you evaluate the user labels and adjust their size to the size of the contextual window to achieve better model predictions. The preprocessed label track is downloaded automatically when you download the predictions.

    • Enable the Generate Gradcam check box if you want to generate the Grad-CAM video for analyzing the model predictions. To know more, refer to Gradient-Weighted Class Activation Mapping.

⚠️

Using the Grad-CAM functionality may slow down the evaluation process and increase the cost of the training job.
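For intuition about what Grad-CAM computes (a toy NumPy sketch of the general technique, not Imagimob's implementation): each feature map of a convolutional layer is weighted by the mean gradient of the class score with respect to that map, the weighted maps are summed, and a ReLU keeps only the regions with a positive influence on the class.

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """Toy Grad-CAM for a 1D conv layer.

    activations, gradients: arrays of shape (channels, length).
    """
    weights = gradients.mean(axis=1)               # one importance weight per channel
    cam = (weights[:, None] * activations).sum(axis=0)
    return np.maximum(cam, 0.0)                    # ReLU: keep positive influence only

# Hypothetical two-channel, three-sample activations and gradients.
acts = np.array([[1.0, 2.0, 0.0],
                 [0.0, 1.0, 3.0]])
grads = np.array([[1.0, 1.0, 1.0],
                  [0.5, 0.5, 0.5]])
cam = grad_cam_1d(acts, grads)                     # highlights where the class evidence lies
```

The extra gradient pass per evaluated window is the reason the warning above mentions slower evaluation and higher job cost.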

  15. Click OK to begin the training. A popup window appears indicating that the job has started.



  16. Click OK to view the progress of the job in a new tab.


ℹ️

Depending on the size of your database, transferring the job to the AI Training Service might take a while.

After you set the parameters, the model list is sent to the Imagimob cloud for training. Depending on the size of the models, the training may take hours or days to complete.

Tracking the training job

You can track the progress of your training jobs from IMAGIMOB Studio whenever required.

To track a training job, follow these steps:

  1. On the menu bar, click the Open cloud icon. The account portal window opens.



  2. Double-click the required job on the right pane to track its progress. The view updates in real time as the job progresses.