This video gives you an at-a-glance overview of our tool's most important features:
© Video by IoTec-AI & Music by Patrick Patrikios
Below is a non-exhaustive list of the main features already available in the demo release. This list may change at any time without prior notice. Please check this page regularly for the most recent information.
If you are interested, you can join the demo release users here.
All available features and data are organized within a workbench that provides one-place access to projects, datasets, models, applications, and devices. Our tool can thus also serve as a central Document Management System for your TinyML projects.
In the demo release, only one user can use the workbench at a time. The GA release will be multi-user.
iPython Notebook execution
iPython Notebooks dealing with TensorFlow Keras models can be executed, subject to execution-time and size limitations. Uploading your own notebooks will be possible in the GA release.
Keras models support and h5 format
Only TensorFlow Keras models in the h5 format are supported. The GA release will support more formats (e.g. PyTorch, ONNX).
Keras models Tuning
With one of the demo projects, we provide an example of using the Keras Tuner to find the best hyperparameters for your model.
Mobile/embedded TFLite conversion
Like the free tool available on this website, the demo release can convert the h5 model to a TensorFlow Lite version. Quantization optimization is enabled.
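As a rough sketch of what this conversion step does, the standard TensorFlow Lite converter with default quantization can be invoked as below. The tiny stand-in model is an assumption for illustration; in the tool, the input would be the trained h5 model.

```python
# Sketch: convert a (toy) Keras model to TensorFlow Lite with quantization.
import tensorflow as tf

# Tiny stand-in model; in practice this would be the trained h5 model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()  # a bytes object, ready to deploy

print(f"Converted model size: {len(tflite_model)} bytes")
```

The resulting byte string is what gets embedded in the firmware image flashed to the board.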
Powerful converted model comparison tool
Compare the models generated and converted with different hyperparameters, based on MSE or MAE (for regression) and model size (in bytes).
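The comparison criteria can be sketched in plain Python: compute MSE and MAE on a validation set for each candidate, list each converted size, and pick a trade-off. The candidate names, predictions, and sizes below are made up for illustration.

```python
# Sketch of the comparison criteria: MSE/MAE plus converted model size.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0.0, 0.5, 1.0, 0.5]
candidates = {
    # name: (validation predictions, converted size in bytes) -- illustrative
    "16-unit model": ([0.1, 0.4, 0.9, 0.6], 3_200),
    "32-unit model": ([0.0, 0.5, 1.0, 0.5], 5_800),
}

for name, (preds, size) in candidates.items():
    print(f"{name}: MSE={mse(y_true, preds):.4f} "
          f"MAE={mae(y_true, preds):.4f} size={size} bytes")

# One reasonable policy: the smallest model whose MSE stays under a threshold.
best = min(
    (n for n, (p, s) in candidates.items() if mse(y_true, p) <= 0.01),
    key=lambda n: candidates[n][1],
)
print(best)  # "16-unit model": accurate enough and 2,600 bytes smaller
```

For embedded targets, size often matters as much as accuracy, which is why both appear side by side in the comparison.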
Target board selection and size check
The user can select the target board to be flashed. Like the free tool available on this website, the demo release also checks the model size against the selected board.
For the demo release, the following boards are supported:
- Arduino Nano 33 BLE (with or without Sense)
- Arduino Portenta H7
- STM32 Cortex-M4 (Mbed OS) family boards
- ESP32 family boards
The GA release will support more boards (e.g. boards with neural-network accelerators).
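The size check amounts to comparing the converted model against the target board's flash budget, as in this minimal sketch. The flash figures and the reserve ratio are illustrative assumptions, not the tool's exact values.

```python
# Sketch of a model-size check against a target board's flash budget.
# Flash figures below are illustrative assumptions, not the tool's values.
FLASH_BYTES = {
    "Arduino Nano 33 BLE Sense": 1_048_576,  # ~1 MB flash (nRF52840)
    "ESP32": 4_194_304,                      # ~4 MB flash (typical module)
}

def fits_on_board(model_size_bytes, board, reserve_ratio=0.5):
    """Check the model leaves room for the application code itself."""
    budget = FLASH_BYTES[board] * reserve_ratio
    return model_size_bytes <= budget

print(fits_on_board(300_000, "Arduino Nano 33 BLE Sense"))  # True
print(fits_on_board(900_000, "Arduino Nano 33 BLE Sense"))  # False
```

Reserving part of the flash for the application itself is why a model that technically fits in flash can still be rejected.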
App generation and deployment on the selected target board
The application is generated partially or fully and deployed (flashed) on the selected target device. Routines to read/write data from/to the sensors attached to the board are generated.
Background and parallel execution
Because executing a model or building and flashing an application can take time, even on powerful platforms, all build tasks are managed by a job-queueing system so that users can work on other tasks in the meantime. The web UI is never frozen, and multiple workflows can run in parallel.
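The idea behind this job-queueing design can be sketched with Python's standard `queue` and `threading` modules. This is a generic illustration of the pattern, not our actual implementation; the job names and worker count are made up.

```python
# Sketch of the pattern: long-running build jobs go through a worker queue
# so the caller (standing in for the web UI) never blocks.
import queue
import threading

jobs = queue.Queue()
results = {}

def worker():
    while True:
        name, task = jobs.get()
        results[name] = task()  # run the long task off the UI thread
        jobs.task_done()

# Two workers -> two workflows can run in parallel.
for _ in range(2):
    threading.Thread(target=worker, daemon=True).start()

# The "UI" just enqueues jobs and returns immediately.
jobs.put(("convert", lambda: "tflite ready"))
jobs.put(("flash", lambda: "board flashed"))

jobs.join()  # in the tool, the UI polls job status instead of blocking
print(results)
```

Decoupling job submission from job execution is what keeps the UI responsive while builds and flashes run in the background.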
We currently provide a few working demo projects:
- Sine estimation: the Hello World project (TensorFlow Lite Micro framework example)
- Piggybank amount estimation: a fun estimate of the amount in a piggybank based on the weight and number of coins
- Person detection: a computer-vision example that detects a person with a camera (TensorFlow Lite Micro framework example)
- Image recognition: a computer-vision example that recognizes animals, vehicles, etc. with an STM32 camera (TensorFlow Lite Micro framework example)
Coming soon: more classification use cases.
Stable features are published following agile principles, allowing us to collect user feedback as early as possible. Enhancements and bug fixes are delivered as they become ready.
GA - Delivery date
We are currently working hard on the GA release. If you join us, you'll be kept informed of the delivery date. In the meantime, we can still start a project together.