Code Generation and Model Optimization for Infineon PSoC™ 6 and PSOC™ Edge boards

This section provides information on generating and optimizing code for Infineon PSoC™ 6 and PSOC™ Edge boards.

Code Generation

To generate the code, follow these steps:


  1. Navigate to your project directory and open the model file (*.h5).

  2. Click the Code Gen tab on the left pane and configure the following parameters:

    • In Architecture, select Infineon PSoC as the architecture type.

    • In Target Device, select the required Infineon board or core type from the following options: PSoC 6, PSOC Edge M33, or PSOC Edge M55/U55.

    • In Output Directory, browse to and select the directory where you want to save the generated code. By default, the Infineon folder is selected.

    • In Output File Prefix, enter the prefix to be added at the beginning of the generated file names.

After setting the parameters, you can either generate the code directly or optimize the model first, based on your requirements. If you choose not to optimize, you can generate the code right away. If you prefer to optimize the model, set the optimization parameters described in Model Optimization before generating the code.

Model Optimization

Model optimization involves refining the generated code to improve performance, efficiency, and accuracy on the development boards. By optimizing the model, you can reduce computational costs, improve execution speed, and achieve better results overall.

You can optimize both the preprocessor and the network to improve model efficiency and performance. DEEPCRAFT™ Studio provides several options for optimizing the preprocessor using the CMSIS-DSP library. CMSIS (Cortex Microcontroller Software Interface Standard) provides a robust framework for this optimization: by leveraging it, you can improve computational efficiency and help ensure the model meets performance expectations on the target hardware.

To optimize the model, follow these steps:



  1. In Preprocessor Acceleration, optimize the code using the CMSIS-DSP library by selecting one of the following options:

    • None: Does not use CMSIS. Select this option if CMSIS is not supported on the target board.

    • CMSIS Floating Point (Float32): Provides accelerated 32-bit floating-point arithmetic, making it a good choice for target boards with an Arm core that includes a Floating Point Unit (FPU). This option is compatible with all units, including non-accelerated custom units, and is easy to use with good performance, but it requires an FPU.

    • CMSIS Shifted Fixed Point 16 Bit (Q15): Offers excellent performance on target boards without an FPU, and achieves efficiency similar to CMSIS Float32 on boards that have one. Because it uses only 16 bits, it consumes half the memory of CMSIS Float32. However, implementing support for this type in all custom units can be complex.

    • CMSIS Shifted Fixed Point 32 Bit (Q31): Select this option if you require high-resolution (32-bit) features and your target board lacks an FPU.

  2. Check the Enable Network Quantization checkbox if you want to quantize the network code along with the preprocessor code. If you enable this option, you must provide calibration data to generate an 8-bit quantized model.

    • Select the Use Project File (.improj) radio button to provide the calibration data from the project file. When this option is selected, the calibration data is taken from the training set.
      OR
    • Select the Recursive Directory Search radio button to provide the calibration data from a directory of your choice.
  3. Select the Enable Sparsity checkbox to further optimize the model by packing sparse weights, saving memory when the model is deployed on the target device.

  4. Click the Install Dependencies button to install the core tools required to generate and optimize the code. This is a one-time installation and may take some time.

  5. Once the core tools are installed, click the Generate Code button. The code (model.c/.h files) is generated in the Output Directory defined earlier, and a model performance and validation report opens.

After generating the code, you can deploy it on the Infineon PSoC boards using ModusToolbox. For instructions, refer to Deploy siren detection model on PSoC™ 6 and PSOC™ Edge boards, which walks through deployment using the siren detection model as an example.

Supported layers and operators when generating optimized code for the PSoC family of Microcontrollers in DEEPCRAFT™ Studio

The tables below list the operators supported by ModusToolbox ML and DEEPCRAFT™ Studio for PSoC code generation.

Opening a model and generating code using Studio

To open a .h5 model in DEEPCRAFT™ Studio and generate optimized code for the PSoC family of MCUs, the network part of the model is limited to the operators that are supported in both ModusToolbox ML and DEEPCRAFT™ Studio, as listed below.

The preprocessor part of the model, however, is limited to the operators supported in DEEPCRAFT™ Studio.

Network Layer/Operator        ModusToolbox ML    DEEPCRAFT™ Studio
Activation                    x                  x
Add                           x                  x
AveragePooling1D              x                  x
AveragePooling2D              x                  x
BatchNormalization            x                  x
Concatenate                   x                  x
Conv1D                        x                  x
Conv2D                        x                  x
Dense                         x                  x
Dropout                       x                  x
Flatten                       x                  x
GlobalAveragePooling1D        x                  x
GlobalAveragePooling2D        x                  x
GlobalMaxPooling1D            x                  x
InputLayer                    x                  x
LeakyRELU                     x                  x
LSTM                          x                  x
MaxPooling1D                  x                  x
MaxPooling2D                  x                  x
RELU                          x                  x
Reshape                       x                  x
Softmax                       x                  x
Abs                           x                  x
Add_N                         x                  x
Arg_Max                       x                  x
Arg_Min                       x                  x
Dequantize                    x                  x
Fully_connected               x                  x
Log                           x                  x
Maximum                       x                  x
Minimum                       x                  x
Mul                           x                  x
Neg                           x                  x
Sqrt                          x                  x
Square                        x                  x
Sub                           x                  x
Exp/Pow                       x                  x
Gated Recurrent Unit                             x
Conv1D Transpose              x                  x
Time Distributed                                 x

Preprocessor Layer/Operator   ModusToolbox ML    DEEPCRAFT™ Studio
Clip                          x                  x
BitUtilization                x                  x
QShift                        x                  x
Quantize                      x                  x
Trim_shape                    x                  x
Div                           x                  x
DotT                          x                  x
Stack                         x                  x
Average                       x                  x
Take                          x                  x