# Data Collection
Now that the data capture hardware and software have been set up, it's time to collect data. We have defined three simple gestures: Vertical Finger Rotation, Hand Push, and Hand Wiggle. In some circumstances it would be useful, or even required, to differentiate between opposing actions, such as moving towards versus away from the sensor, or moving left versus right. In our case we chose to omit those specifics from the model to keep it more user-friendly.
# Gestures
For this project we will define and train on three gestures.
# Vertical Finger Rotation
The finger is held above the radar, pointing towards it, and then rotated in a clockwise fashion.
# Hand Push
An open hand is held above the radar, then moved towards and away from the sensor.
# Hand Wiggle
The hand is cupped or held loosely above the sensor and tilted from side to side.
# Recording the Data
Now that we have defined our gestures, we can begin the data collection. First, make sure you have completed the Setting Up Capture System section. Start the capture server and connect it to the Capture App, or use your own webcam via the local capture server. To collect data from the Acconeer radar you also need to set up the radar configuration. Some typical configurations are included in the tutorial project as well as in the capture server.
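For orientation, a direct connection to the radar looks roughly like the sketch below, assuming the acconeer-python-exploration (v3) API; the server address and frame count are placeholder assumptions, and in this tutorial the capture server handles this step for you.

```python
# Minimal sketch of streaming radar frames with acconeer.exptool (v3 API).
# The IP address and frame count are placeholder assumptions.
import acconeer.exptool as et

client = et.SocketClient("192.168.1.10")  # hypothetical radar module address
config = et.configs.SparseServiceConfig()  # start from a typical sparse config

client.connect()
client.setup_session(config)
client.start_session()

for _ in range(100):  # grab a short burst of frames
    data_info, frame = client.get_next()

client.stop_session()
client.disconnect()
```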
# Configuration File
Note that this Acconeer project uses a different radar configuration file (acconeer_tutorial_config.json) than the one bundled with the capture server. We modified the range_interval values and changed the profile from PROFILE_2 to PROFILE_1, since we chose close-range operation (<20 cm). For more information on how to modify the configuration file, please refer to Acconeer's Sparse Service Documentation.
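As a sketch, such an edit can be scripted with Python's json module; the key names follow the text above, but the exact schema of acconeer_tutorial_config.json and the range values below are assumptions for illustration.

```python
# Sketch of adjusting the radar configuration file for close-range use.
# The key names follow the text above; the exact file schema and the
# range values are assumptions, so verify against your actual config.
import json

with open("acconeer_tutorial_config.json") as f:
    config = json.load(f)

config["range_interval"] = [0.06, 0.18]  # assumed close-range interval (<20 cm)
config["profile"] = "PROFILE_1"          # shorter pulse profile for close range

with open("acconeer_tutorial_config.json", "w") as f:
    json.dump(config, f, indent=4)
```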
# Capture App
You can find information on how to set up the Capture App and how to get data off it at Imagimob Capture.
# Local Capture Server
The local capture server allows you to capture data without connecting to the Capture App, instead recording video with a web camera connected to the PC.
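In essence, the video side of this amounts to recording the webcam to a file, along the lines of the OpenCV sketch below; this is only an illustration of the idea, not the server's actual implementation.

```python
# Minimal sketch of webcam recording, roughly what the local capture
# server does for the video stream. Not the server's actual code.
import cv2

cap = cv2.VideoCapture(0)  # first webcam on the PC
fps = 30.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("recording.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

try:
    for _ in range(int(fps * 60)):  # record roughly one minute of frames
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(frame)
finally:
    cap.release()
    writer.release()
```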
# Data Collection
Once everything is connected and you are ready, you can begin the data collection. In this case we start performing the gesture before hitting the record button, so that the entire file can be labelled as that gesture.
We will take fifteen 1-minute recordings for each gesture, which gives 15 minutes of data per gesture and 45 minutes in total. Typically we would want to collect data from multiple people to ensure the model generalises well across individuals, but for this first iteration it is sufficient to have a working model for one person.
We will also record an additional, special test file. It will contain all the gestures defined above, some random gestures, and ambient data, so that we can check for false positives and observe the model's behaviour on transitioning data. Ambient data is data recorded when there is no object or movement within the sensor's detection range. The test file is structured as follows (a timing sketch is given after the list):
- 10 seconds of ambient data
- 20 seconds of random gestures, e.g. tap, swipe, flick
- 20 seconds of Vertical Finger Rotation gesture
- 20 seconds of Hand Wiggle gesture
- 20 seconds of Hand Push gesture
- 5 seconds of ambient data
- 10 seconds of more random gestures
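To keep the segment timing consistent while recording, a simple cue script can announce each segment; this is a convenience sketch, not part of the tutorial tooling.

```python
# Prints a cue at the start of each test-file segment so the person
# recording knows when to switch gestures. A convenience sketch only.
import time

SEGMENTS = [
    ("ambient", 10),
    ("random gestures (tap, swipe, flick)", 20),
    ("Vertical Finger Rotation", 20),
    ("Hand Wiggle", 20),
    ("Hand Push", 20),
    ("ambient", 5),
    ("more random gestures", 10),
]

total = sum(duration for _, duration in SEGMENTS)
print(f"Total test file length: {total} s")

for name, duration in SEGMENTS:
    print(f"Now: {name} for {duration} s")
    time.sleep(duration)
print("Done - stop the recording.")
```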
# Importing Data
The next step, now that we have collected the data, is to import it.
If you chose the Imagimob Capture App for this data collection, you can import the data directly through the Studio. By importing, in this case, we actually mean transferring the data from the phone to our project folder. To do that, open the device by double-clicking on it (note: ensure that the app is open on the phone and that the phone and computer are connected to the same network), search for the date of the recording, and then copy that folder or its contents over to your local/project folder. The process might take some time to complete, since it transfers data and videos, which can be large.
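For reference, copying one day's recordings into the project folder amounts to something like the sketch below; the mount point, date, and folder layout are all assumptions, as the actual paths depend on your device and Studio setup.

```python
# Sketch of copying one day's recordings into the project folder.
# The device path and the date-based folder name are assumptions;
# adjust them to match where Studio exposes the device's recordings.
import shutil
from pathlib import Path

device_root = Path("/path/to/mounted/device")  # hypothetical mount point
project_data = Path("./data/raw")
project_data.mkdir(parents=True, exist_ok=True)

for session in device_root.glob("2023-01-15*"):  # hypothetical recording date
    dest = project_data / session.name
    if session.is_dir() and not dest.exists():
        shutil.copytree(session, dest)  # videos can be large, so this may take a while
        print(f"Copied {session.name}")
```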
If you used the local capture server for the data collection, the data was already saved on your PC. You can simply select the workspace and import the whole folder where you saved the data.
# Commentary
Data collection should be thought of as an iterative process. The first batch or session should be used to test the potential of the data for building a machine learning model. In general, the more data, the better the model: with plenty of data the model is able to learn the behaviour and the important features of the gestures. If data collection is difficult, time-consuming, or expensive, it is worth collecting as much data as possible up front to save the cost of setting up subsequent collection sessions. It is often difficult to think of all the variations that would occur in real life, but it's good to try to cover as many as possible. The data sets should cover all potential variations, in enough quantity to train and test the model.
# Navigation
Previous: Setting Up Capture System
Next: Data Labelling & Management