TinyML - Motion Recognition Using Raspberry Pi Pico
In this tutorial, we will use machine learning to build a gesture recognition system that runs on a device not officially supported by Arduino and Edge Impulse.
This tutorial has two parts. The first explores the Raspberry Pi Pico, its main components, and how to program it using MicroPython and its C/C++ SDK (Software Development Kit). Next, we will use the Pico to capture "gesture data" and train a TinyML model with it in Edge Impulse Studio. Once developed and tested, the model will be deployed back to the same device for real inference. Here is a quick view of the final project:
If you are familiar with the Pico's basic programming, please feel free to jump to Part 2, where the real fun will begin!
Raspberry Pi Pico is a low-cost, high-performance microcontroller board with flexible digital interfaces. Key features include:
RP2040 microcontroller chip designed by Raspberry Pi Foundation
Dual-core Arm Cortex M0+ processor, flexible clock running up to 133 MHz
264KB of SRAM and 2MB of onboard Flash memory
USB 1.1 with device and host support
Low-power sleep and dormant modes
26 × multi-function GPIO pins
2 × SPI, 2 × I2C, 2 × UART, 3 × 12-bit ADC, 16 × controllable PWM channels
Accurate clock and timer on-chip
Temperature sensor
Accelerated floating-point libraries on-chip
8 × Programmable I/O (PIO) state machines for custom peripheral support
An interesting characteristic is the ability to program it by drag-and-drop, using mass storage over USB.
Uploading a program to the Pico is therefore straightforward; what is missing is a reset push-button. Without one, the USB cable must be disconnected and reconnected every time new code is uploaded, which can damage the Pico's USB connector. Fortunately, pin 30 (RUN) can be used for this function: connect a normally open push-button between this pin and ground. Now, whenever a program should be uploaded to the Pico, press both buttons (BOOTSEL and the new reset button) simultaneously.
There are two ways of programming a Pico: MicroPython and C/C++.
Programming using MicroPython
For initial tests (and for beginners), running MicroPython on the Pico is extremely easy. Once the Pico is plugged into your computer for the first time (via USB) with the BOOTSEL button pressed (or pressing Reset and BOOTSEL after connection), a window named RPI-RP2 should appear as a typical Mass Storage Device (same as a regular USB flash drive).
Confirm that the interpreter is configured for the Pico. Click on it (lower right corner of the IDE) for options.
It is also possible to write complete Python scripts, as in the blink example below:
The script is uploaded to the Pico with the Run button (marked in the figure above). The internal LED (pin 25) will flash ten times, printing the loop number on the Shell. Now try reading the internal temperature sensor, creating a log file to monitor the Pico's internal temperature. In this example, the temp.txt log file is stored inside the Pico, so pay attention to how much memory space you need.
Install the tools (CMake and the GCC cross-compiler for Arm); a consolidated command sketch follows this list:
Create a folder where the projects will be developed:
Clone the SDK repository:
Go to the pico-sdk folder and update sub-modules:
Return to pico folder
Install Toolkit
Install the SDK
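A consolidated command sketch for the steps above, assuming a Debian-based system such as Raspberry Pi OS or Ubuntu (the package names and the repository URL are the usual ones, not spelled out in this tutorial):

```bash
# Install the tools (CMake and the GCC cross-compiler for Arm)
sudo apt update
sudo apt install cmake gcc-arm-none-eabi libnewlib-arm-none-eabi build-essential

# Create a folder for the projects and clone the SDK
mkdir pico
cd pico
git clone https://github.com/raspberrypi/pico-sdk.git

# Update the SDK sub-modules and return to the pico folder
cd pico-sdk
git submodule update --init
cd ..
```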
At this point, you are all set to create an embedded project using C/C++
It's good to download the examples created specifically for Pico. They will give us a good starting point to handle the HW and libraries.
The examples also include a blink code, but let's build a complete one from scratch. First, create a folder where your project will be located (under /pico/, at the same level as your /pico-sdk folder):
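For example (the blink name matches the project used in this tutorial):

```bash
# From inside the pico folder, next to pico-sdk
mkdir blink
cd blink
mkdir build
```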
Note that we also create a sub-folder named build. This folder will receive the final compiled code to be uploaded to the Pico. In the project folder (in this case, blink), we must always have three files:
blink.c (the main C code)
pico_sdk_import.cmake (helps to locate the SDK)
CMakeLists.txt (tells the SDK how to turn the C file into a binary application for an RP2040-based microcontroller board)
Let's start by copying pico_sdk_import.cmake into the project folder:
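Assuming the folder layout above (the project folder sits next to pico-sdk), something like:

```bash
cp ../pico-sdk/external/pico_sdk_import.cmake .
```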
For blink.c (the C source file) and CMakeLists.txt, use a text editor of your choice, such as Sublime, Eclipse, VS, or Geany.
Let's see the CMakeLists.txt:
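A minimal version, following the standard Pico SDK CMake pattern (only the executable name app_blink and the source file blink.c come from this tutorial; the project name is arbitrary), could look like this:

```cmake
cmake_minimum_required(VERSION 3.13)

# Pull in the Pico SDK (must come before project())
include(pico_sdk_import.cmake)

project(blink C CXX ASM)

# Initialize the SDK
pico_sdk_init()

# Build an executable named app_blink from blink.c
add_executable(app_blink blink.c)

# Link the high-level Pico standard library (GPIO, time, stdio, ...)
target_link_libraries(app_blink pico_stdlib)

# Also generate .uf2/.bin/.hex outputs in addition to the ELF
pico_add_extra_outputs(app_blink)
```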
Note that we will create an executable file named app_blink based on the code of blink.c. Now, the source code blink.c:
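A minimal sketch consistent with the description that follows (GPIO25, toggled twice per second):

```c
#include "pico/stdlib.h"

int main() {
    const uint LED_PIN = 25;          // the Pico on-board LED is on GPIO25

    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);

    while (true) {                    // blink forever: 250 ms on, 250 ms off
        gpio_put(LED_PIN, 1);
        sleep_ms(250);
        gpio_put(LED_PIN, 0);
        sleep_ms(250);
    }
}
```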
At the top of the C file, we include a header called pico/stdlib.h. This is an umbrella header that pulls in some other commonly used headers. The ones needed here are hardware/gpio.h, which is used for accessing the general-purpose IOs on the RP2040 (the gpio_xxx functions here), and pico/time.h, which contains, among other things, the sleep_ms function. A library whose name starts with pico provides high-level APIs and concepts or aggregates smaller interfaces; a name beginning with hardware indicates a thinner abstraction between your code and the RP2040 on-chip hardware. So, using mainly the hardware_gpio and pico_time libraries, this C program will blink an LED connected to GPIO25 on and off twice per second, forever (or at least until unplugged). Great! At this point, your project folder should contain three files and one sub-folder (build):
Now, go to the build folder, export the environment variables, and run cmake:
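For example, assuming the folder layout created above (the SDK cloned at ../../pico-sdk relative to the build folder):

```bash
export PICO_SDK_PATH=../../pico-sdk
cmake ..
```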
The last step is to compile the project:
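A single make invocation (as noted later, -j4 uses all four cores of the host CPU):

```bash
make -j4
```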
In the build folder, several files are generated, including app_blink.uf2, the executable file. Press BOOTSEL and Reset to open the RPI-RP2 window and drag the compiled project file app_blink.uf2 to that folder.
You can also use cp at the command line instead of dragging the file (the mount points in the examples below are the typical defaults and may differ on your system).
For example, on a Mac:
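```bash
# assuming the board mounted with its default volume name
cp app_blink.uf2 /Volumes/RPI-RP2
```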
And, on an RPi:
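```bash
# assuming Raspberry Pi OS automounts the board for user pi
cp app_blink.uf2 /media/pi/RPI-RP2
```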
Although all those steps may seem complicated, once your project environment is set up, any change in the project only requires recompiling the code with make or make -j4 (which uses all four cores of the CPU).
The idea of this project is to use the Pico to classify some human-made gestures: "up-down," "left-right," and "circle." This classification will be done 100% offline, at the MCU level. In other words, we will do "embedded machine learning," also known as TinyML.
Depending on hardware capacity and memory size, different types of MCUs and applications can be used in the TinyML arena, as shown in the chart below.
Our Pico, based on the Arm Cortex-M0+, is more than suitable for sensor classification, as we will do in this project.
So far, we have defined the first phase of the project: its goal (gesture classification). The workflow below shows all the remaining phases, from data collection on our Pico to the final inference and evaluation back on our tiny device, passing through the actual model development done in Edge Impulse Studio, in the cloud.
Each sensor analog output (XOUT, YOUT, and ZOUT) will be connected to the Pico's ADC inputs (ADC0, 1, and 2). VDD is 3.3 V and will also be supplied by the Pico. The GS pin selects the g range and will be left open (±1.5 g).
Wiring
There are several different packages for this three-axis analog accelerometer. In principle, any breakout board for Freescale's MMA7361L should work.
Sensor Measurements: With the GS pin left open (±1.5 g), the sensor sensitivity according to the spec is 800 mV/g, and for 1 g (sensor at rest) the output is around 1.65 V ('G0'). It is important to remember that the Pico ADCs have a resolution of 12 bits (3.3 V ==> 4096), so if we want the ADC measurement in g, we must apply the following conversion to the raw data collected (read_axis_raw):

g = (read_axis_raw × 3.3 / 4096 − 1.65) / 0.8

And for the acceleration in m/s²:

a = g × 9.81
The Data Collection project file tree structure should be:
Below is the source code for data collection:
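What follows is a sketch of that code, not the original listing: the ADC pins GP26-GP28 (ADC0-ADC2) match the wiring above, the conversion constants come from the sensor section, and the helper names are assumptions:

```c
#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/adc.h"

#define SAMPLES_PER_READ 10                 // readings averaged per axis (smoothing)
#define INTERVAL_MS      (1000 / 50)        // target sampling frequency: 50 Hz

// Read one ADC channel (0..2) and return the smoothed raw value
static float read_axis_raw(uint channel) {
    adc_select_input(channel);
    float sum = 0.0f;
    for (int i = 0; i < SAMPLES_PER_READ; i++) {
        sum += adc_read();                  // 12-bit value, 0..4095
    }
    return sum / SAMPLES_PER_READ;
}

// Convert a smoothed raw value to acceleration in m/s^2
// (1.65 V 'G0' offset and 0.8 V/g sensitivity, as in the sensor section)
static float raw_to_ms2(float raw) {
    float g = (raw * 3.3f / 4096.0f - 1.65f) / 0.8f;
    return g * 9.81f;
}

int main() {
    stdio_init_all();                       // printf() goes to USB (Serial 0)
    adc_init();
    adc_gpio_init(26);                      // ADC0 <- XOUT
    adc_gpio_init(27);                      // ADC1 <- YOUT
    adc_gpio_init(28);                      // ADC2 <- ZOUT

    while (true) {
        float ax = raw_to_ms2(read_axis_raw(0));
        float ay = raw_to_ms2(read_axis_raw(1));
        float az = raw_to_ms2(read_axis_raw(2));
        printf("%.2f\t%.2f\t%.2f\n", ax, ay, az);   // tab-separated, one sample per line
        sleep_ms(INTERVAL_MS);
    }
}
```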
The above code reads the 3 ADCs (10 times each), delivering an average (smoothed) value. The data values, one for each axis, separated by tabs and converted to m/s², are sent to the Pico USB output using printf(). The reading frequency was defined as 50 Hz, but the effective rate will be a little lower due to the time spent on data capture and smoothing. Below is the CMakeLists.txt. Note the line pico_enable_stdio_usb(accel_ml 1); this means that USB (Serial 0) is enabled.
Once you have copied pico_sdk_import.cmake to your project folder, go to the build sub-folder and repeat the same procedure used with the blink project:
The final compiled code (accel_ml.uf2) will appear in the build folder. Press BOOTSEL and Reset to open the RPI-RP2 window and drag the compiled project file accel_ml.uf2 to that folder, as you did with the blink. The Pico will start capturing data from the accelerometer and sending it to USB (Serial 0). Now you can read it on a serial monitor. If you are using Linux, a good option is minicom. First, install it:
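On a Debian-based system, presumably:

```bash
sudo apt install minicom
```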
Open the serial monitor:
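A typical invocation (the device name /dev/ttyACM0 is the usual one for the Pico's USB serial on Linux, but it may differ on your machine):

```bash
minicom -b 115200 -o -D /dev/ttyACM0
```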
On macOS, one alternative is the program Serial, or even the Arduino IDE Serial Monitor and Plotter.
What we need to do now is collect data samples (sets of 3-axis values) for each of the human-made gestures that we want to classify:
"up-down" (Moving the Pico/Sensor from a high position to a lower)
"left-right" (Moving the Pico/Sensor from left to right and vice-versa)
"circle". (Moving the Pico/Sensor in circles CW and CCW).
"resting" (Left the Pico/Sensor on the table, with no movement)
The CLI will ask for your credentials, the name of the project you are working on, and the names of the data values that we will capture (note that the CLI has already analyzed the serial stream and knows that 3-axis sensor data are available); finally, it will ask for a device name (optional).
Return to Edge Impulse Studio and go to the Data Acquisition section:
Your device name, the available sensor, and the capture frequency should appear automatically. Define the data label and the amount of data you want to sample (default is 10 s), and press Start sampling. Below are 10 seconds of the up_down gesture.
Note that accZ (blue line) has the highest peaks, which makes sense.
"Machine Learning is a way of writing programs that process raw data and turn it into meaningful information at an application level". Hence, the more data you have, the more information you can get! Let's capture at least 60 seconds of data for each label. Try to balance your dataset, having the exact data for each label (class).
Now, we have all the raw data needed for training. But as you saw in the last image, raw data is time-series data, and it is not easy to create a model that understands this kind of data directly. So, the data should be pre-processed. For that, we will take a window of 2 seconds and extract some relevant values from it, for example, the RMS value of that group of data and its main frequency components (FFT). From each window, 33 features will be generated (11 per axis).
It could seem complex, but the good news is that Edge Impulse will do this almost automatically for us.
Feature Engineering and Model Definition (Impulse Design)
Let's go back a little. Once you have your raw data, go to the Impulse Design section and create your project impulse.
An impulse takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data.
In short, the Impulse will split the raw data into 2-second segments. But note that those windows will slide over time, with 80 ms of displacement. With this, more data will be generated. In the Spectral Features section, the general parameters for the feature generation can be defined. I stayed with the default values, and on the Generate features tab, it was possible to explore visually all 3,879 samples generated.
Our dataset classes are very well-defined, suggesting our classification model should work fine.
Note that the orange data (left-right) goes mainly along the y-axis, and the red data (up-down) goes along the z-axis. Also, resting (green dot) shows no acceleration, which is expected (in the previous stage, the earth's acceleration (g) was filtered from the z-axis).
The NN Classifier model can be straightforward:
The model has 33 neurons in its first layer (1 neuron for each feature) and four neurons in the last layer (1 neuron for each of the four classes). The model has two hidden layers with 20 and 10 neurons. The default hyper-parameters are 30 epochs (a lot; they could be cut in half in this case) and a learning rate of 0.0005. Running the training, we ended with an accuracy of 100%, confirmed by the F1 Score. This is not normal in Deep Learning projects, but we saw that the data classes were very well separated. Only 80% of the data was used during the training phase, with 20% reserved for validation. Both sets of data performed well, and it does not seem that the model overfitted, as shown in the Loss vs. Epoch graph:
Here is the Edge Impulse Studio Training result:
This quantized model is expected to take around 1 ms of inference time, using 1.5 KB of RAM and 15.4 KB of ROM. Very good!
Testing the model with real (new) data
In the Studio Live Classification section, you can repeat what was done during the data capture phase. Since I kept the same type of movements, the result was excellent.
All data captured in this section are stored as test data and can be reviewed in the Data Acquisition section, under the Test Data tab.
The next phase tests the model with entirely new data (stored in the Data Acquisition/Test Data section). The result was excellent again, with only some confusion between left_right and circle, which is expected.
Once the model is developed, trained, and tested, the next step in our Machine Learning workflow is conversion and deployment. In the Edge Impulse Deployment section, it is possible to deploy both the trained model and the pre-processing (signal processing) block as a C++ library.
For MCUs that work with the Arduino IDE, the Studio automatically generates the libraries and workable examples that can be used as a starting point for real inference.
Pressing the BUILD button on this studio section will download the complete package for our final project.
Now, it is time to do machine learning on embedded devices! We will program our Pico to recognize the gestures entirely offline, without an internet connection. This is the revolution that TinyML is bringing!
The C/C++ package downloaded from Edge Impulse Studio has the following folders/files:
We will update the CMakeLists.txt with the specific information needed for our project and add our C++ source code (which will be under a source folder). The Gesture Recognition project file tree structure should be:
Taking the Arduino example nano_ble33_sense_accelerometer.ino as a starting point and changing the instructions that are not compatible, create the main.cpp file below. The internal LED will flash while the data is captured and classified:
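A hypothetical sketch of such a main.cpp (not the original file), following the structure of the Arduino example: the Edge Impulse calls (run_classifier, numpy::signal_from_buffer, and the EI_CLASSIFIER_* constants) are the library's standard C++ API, while the ADC helper, the pin numbers, and the conversion constants are assumptions carried over from the data-collection sketch above:

```cpp
#include <cstdio>
#include "pico/stdlib.h"
#include "hardware/adc.h"
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"   // from the downloaded C++ library

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Read one ADC channel (0..2) and convert the raw value to m/s^2
// (1.65 V 'G0' offset and 0.8 V/g sensitivity, as in the sensor section)
static float read_axis_ms2(uint channel) {
    adc_select_input(channel);
    float volts = adc_read() * 3.3f / 4096.0f;
    return ((volts - 1.65f) / 0.8f) * 9.81f;
}

int main() {
    stdio_init_all();
    adc_init();
    adc_gpio_init(26);                       // ADC0 <- XOUT
    adc_gpio_init(27);                       // ADC1 <- YOUT
    adc_gpio_init(28);                       // ADC2 <- ZOUT

    const uint LED_PIN = 25;
    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);

    while (true) {
        // LED on while one 2-second window (x, y, z interleaved) is sampled
        gpio_put(LED_PIN, 1);
        for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
            features[i]     = read_axis_ms2(0);
            features[i + 1] = read_axis_ms2(1);
            features[i + 2] = read_axis_ms2(2);
            sleep_ms((uint32_t)EI_CLASSIFIER_INTERVAL_MS);
        }
        gpio_put(LED_PIN, 0);

        // Wrap the buffer in a signal_t and run the impulse (DSP + NN classifier)
        signal_t signal;
        numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

        ei_impulse_result_t result;
        if (run_classifier(&signal, &result, false) == EI_IMPULSE_OK) {
            printf("DSP: %d ms, Classification: %d ms\n",
                   result.timing.dsp, result.timing.classification);
            for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
                printf("  %s: %.2f\n",
                       result.classification[i].label,
                       result.classification[i].value);
            }
        }
    }
}
```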
In the source folder is the file ei_classifier_porting.cpp, also adapted by Dmitri, which I kept as it is. At this point, having all relevant files in our project folder, go to the build sub-folder and repeat the same procedure done with all previous projects to compile the final executable code:
The final compiled code (app.uf2) will appear in the build folder. Press BOOTSEL and Reset to open the RPI-RP2 window and drag the compiled project file app.uf2 to that folder, as you did with the other projects. The Pico will start sampling data from the accelerometer every 2 seconds and displaying the predictions on the Serial Monitor.
Note that the classification (inference) time is 1 ms, the same as predicted by Edge Impulse Studio.
In this documentation link, it is possible to find detailed information about the RP2040, the heart of the Pico.
Clicking on the link will send you to a page where you can start with MicroPython.
Follow the instructions to download the UF2 file to easily install the MicroPython interpreter on the Pico. With the UF2 file downloaded, just drag it to the RPI-RP2 window, and that is it! The Pico is ready to receive an executable Python script. For MicroPython, I suggest an IDE where it is possible to write Python scripts directly on the shell, as below, or to develop a script in the editor:
If you are new to MicroPython, the Raspberry Pi Foundation has put together an excellent book (free as a PDF) that will teach all the steps of physical computing using the Pico and MicroPython. MicroPython programming is suitable for initial exploration and learning electronics, but using the C/C++ language is crucial for real embedded projects. For that, it is necessary to understand the C/C++ SDK. The RPi Foundation has put together excellent documentation. The first document gives information on how to set up your hardware, IDE/environment, and how to build and debug software for the Raspberry Pi Pico (and other RP2040-based devices). The second document covers programming with the SDK, its advanced features, and gives complete API documentation.
As explained in the Edge Impulse documentation, recent advances in microprocessor architecture and algorithm design have made it possible to run sophisticated machine learning workloads on even the smallest microcontrollers (our case with the RP2040).
TinyML enables machine intelligence right next to the physical world, using sensors. So, the first thing to do is to capture data to understand those gestures. For that, we will use a simple 3-axis accelerometer.
The Sensor
The sensor used, the MMA7361, is a three-axis analog accelerometer that requires a meager amount of power and has a g-select input, which switches the accelerometer between ±1.5 g and ±6 g measurement ranges. Other features include a sleep mode, signal conditioning, a 1-pole low-pass filter, temperature compensation, self-test, and 0g-detect, which detects linear freefall. Zero-g offset and sensitivity are factory-set and require no external devices.
Alternatively, the linked code shows how to capture data continuously on Core 1, while the data is sent to Serial by the code running on Core 0.
If you do not have an account at Edge Impulse, create one now! Edge Impulse is the leading development platform for machine learning on edge devices, free for developers and trusted by enterprises. Open an account and create a new project.
My project is public and can be cloned from Edge Impulse Studio. Once you have created your project, install the Edge Impulse CLI on your computer, following the official instructions; this should be done only once. Once the project is created and the CLI is installed, the easiest way of getting data from the Pico is the Edge Impulse Data Forwarder, which lets you forward data collected over a serial interface to the Studio. This method works well on sensors with lower sampling frequencies, as in our case (human gestures). The data forwarder easily relays data from any device to Edge Impulse over serial (exactly our case). Devices write sensor values over a serial connection, and the data forwarder collects the data, signs it, and sends it to the ingestion service. At your terminal, run:
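The Edge Impulse CLI's forwarder command is simply the one below; run it with the Pico connected and already streaming data:

```bash
edge-impulse-data-forwarder
```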
In the Raspberry Pi Pico's case, we will choose the C++ Library option, since this MCU does not yet work with the Arduino IDE. But we will still use one of Arduino's code examples as our starting point. Also, we will enable the EON Compiler, which permits running neural networks in 25-55% less RAM, and up to 35% less flash, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers, as we can see below:
Based on the excellent tutorial that inspired me on this project, we could create the files below. The final executable program will be named "app":
The next step in this project would be to detect anomalies, which is also simple to implement with Edge Impulse Studio. For those curious about TinyML, I strongly suggest the free Coursera course on the subject. Here is a glimpse of one of the projects that I developed during the course, classifying operation modes and anomalies on a blender:
You can also clone this project on Edge Impulse Studio.
This project's general idea was to learn how to program a Raspberry Pi Pico and do a proof of concept that it is possible to perform machine learning with this MCU, which, at the time I wrote the tutorial, was not officially supported by Edge Impulse and Arduino. Today, both Arduino and EI support the Pico, which enormously simplifies the coding process for non-expert developers. On my GitHub repository, you will find the latest version of the code.