Prologue: The OSIRX project

Initially, Technologies de France (TDF) set out to be a startup disrupting the recycling sector with the OSIRX project. The challenge was colossal: the technology used, Timepix from CERN, is expensive; the regulatory and safety constraints surrounding X-rays are significant; and our financial means were limited.

In addition to these difficulties, remember that during the project we were confronted with an unscrupulous software subcontractor who never delivered the control software for our OSIRX prototype. Fortunately for my partner Aloïs and myself, both engineers, I had already learned the rudiments of LabVIEW at school, and I decided to take on this work myself. In fact, we had to start from scratch, because the little that this software company had delivered was clearly unusable.

Let's be positive, though. Out of necessity, we built up real competence in this field, and I eventually became a LabVIEW architect by passing NI's official Architect certification (the highest level of certification that exists today).

The event that changes everything: Covid-19

Once the OSIRX test bench was completed, we carried out a first measurement campaign, which proved inconclusive. We did not fully master the camera we were using, and it was impossible to overcome the necessary technological hurdles. The only solution at the time was to manufacture our own camera, which obviously required significant funding.

At the beginning of 2020, on top of all this came the event that took a large part of the planet by surprise: Covid-19. During this crisis, like almost every other French company, we were forced into lockdown. We then looked for every possible way to survive and keep OSIRX moving forward. Among them: artificial intelligence.

To be honest, I had always wanted to work in this field but unfortunately never had the opportunity. This forced halt of our activities allowed us to think about the future of OSIRX and the best way to approach it.

Machine Learning: Scikit-Learn

In May 2020, we start learning machine learning. Naturally, one machine learning library attracts our attention: Scikit-Learn. After learning how to use it, we decide to develop a LabVIEW interface to this library so that we can use it on OSIRX. This interface is now free and can be downloaded from TDF's GitHub.

After several months of use, we conclude that the models offered by this machine learning library are insufficient to solve the nonlinear physical equations that determine the quantities of matter for the OSIRX project. We therefore need to find another solution.


Deep Learning, the solution

In January 2021, we start working on deep learning. Having no deep knowledge of the subject, we decide to train ourselves and to use Keras, the deep learning module included in Google's TensorFlow library.

This library is simple and efficient to use, and it can meet our needs for OSIRX, but we run into a new problem: integrating it with our industrial system is not straightforward. We work exclusively with NI (formerly National Instruments) hardware, and our programming language is LabVIEW. So we had to find a solution that would let us use deep learning models in LabVIEW. Another dead end…

Deep Learning with LabVIEW

Having a license for the NI Vision module, we realize that TensorFlow is "included" in it. We can therefore run our graphs directly in LabVIEW!

Unfortunately, the joy is short-lived: after several difficulties, we understand that this inclusion covers only the execution of models saved with old versions of TensorFlow. After several manipulations and adaptations on our side, we test one of our models on an NI IC3173 controller. The YOLO model has a refresh time of 4 seconds, which is far too slow, probably because it is not optimized.

We also find that we cannot train models, let alone modify our graphs, in LabVIEW. Back to a dead end…

After some research, we find a deep learning library for LabVIEW from Ngene. We decide to test it, and we quickly realize that it does not offer, among other things, all the layers available in Keras. It therefore lacks some functions that we might need to solve our problem.

TDF is also facing financial constraints at this time and cannot afford another failure. We no longer have the time to buy the library, test it, and perhaps solve our problem with Ngene.

The risk of depending on a third-party library is too high: we are doing R&D, and we need to be able to change our architecture easily, without limitations. In its philosophy, this library does not correspond to what we want to do; there is no modularity in the writing of graphs.

We decide to look for another solution rather than commit to one that would most likely not fit. So we are once again at a dead end…

Deep learning, the internal test

In July 2021, we decide to have two of our interns work on coding some basic deep learning layers natively in LabVIEW. We quickly realize that it is possible to code in LabVIEW an equivalent of the features offered by Keras.
In August 2021, we manage to understand and code our first 2D convolution in LabVIEW.
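For illustration only (the actual implementation is a LabVIEW block diagram, which cannot be shown in text), the operation we reproduced can be sketched in a few lines of Python with NumPy. This is the naive "valid"-mode 2D convolution as used in deep learning, i.e. cross-correlation, where the kernel is not flipped:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution in the deep learning sense
    (cross-correlation: the kernel is slid over the image without flipping)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1  # output shrinks by kernel size - 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            # Element-wise product of the kernel with the image patch, then sum
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Example: a 3x3 Laplacian-style kernel applied to a 5x5 linear ramp.
# On a perfectly linear image the Laplacian response is zero everywhere.
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[0.0,  1.0, 0.0],
              [1.0, -4.0, 1.0],
              [0.0,  1.0, 0.0]])
print(conv2d(img, k))  # 3x3 array of zeros
```

Real framework implementations add batching, channels, stride, and padding on top of this core loop, but the inner multiply-accumulate is exactly what any native port, in LabVIEW or elsewhere, has to reproduce.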

Birth of HAIBAL

In September 2021, we succeeded in experimentally coding several deep learning layers as well as some activation functions, working from theory. The question naturally arises: how do we architect all this so that it is modular and easy to use? The HAIBAL project was born.

The philosophy of HAIBAL is simple: a deep learning graph in a graphical language like LabVIEW. It's natural, in fact!
We must be able to put any graph, without restriction, in our LabVIEW block diagram, integrate these functions into any LabVIEW architecture, and run them on any platform.

This simple idea is, in theory, revolutionary, because all deep learning frameworks are based on the model graph (its architecture), which opens up the possibility of reading a saved model coming from any deep learning framework.