A LITTLE HISTORY

 VGG is a convolutional neural network proposed by K. Simonyan and A. Zisserman of the University of Oxford. It rose to fame by winning the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) in 2014, achieving 92.7% top-5 accuracy on ImageNet, one of the highest scores reached at the time. It improved on previous models by using smaller (3×3) convolution kernels in the convolution layers than had been used before. The model was trained over several weeks on state-of-the-art graphics cards.
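The design point about small kernels can be illustrated with a quick parameter count (a sketch added here for illustration, not part of the original article): two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer weights and an extra non-linearity in between.

```python
def conv2d_params(kernel_size, channels_in, channels_out, bias=True):
    """Weight count of a single 2D convolution layer."""
    params = kernel_size * kernel_size * channels_in * channels_out
    return params + (channels_out if bias else 0)

c = 64  # a channel width typical of early VGG blocks
one_5x5 = conv2d_params(5, c, c)                           # 102,464 parameters
two_3x3 = conv2d_params(3, c, c) + conv2d_params(3, c, c)  # 73,856 parameters
print(one_5x5, two_3x3)
```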

DATASET USED TO TRAIN THE MODEL : IMAGENET

 ImageNet is a gigantic database of over 14 million labeled images divided into thousands of classes. In 2007, a researcher named Fei-Fei Li started working on the idea of creating such a dataset: while modeling is a very important aspect of good performance, having high-quality data is equally important for good learning. The images were collected from the web and labeled by humans; the dataset is therefore openly available and does not belong to any particular company.

Since 2010, the annual ImageNet Large Scale Visual Recognition Challenge has been organized to benchmark image-classification models. The competition takes place on a subset of ImageNet composed of 1.2 million training images, 50,000 validation images, and 150,000 test images, spread across 1,000 classes.

In this video, we import a Keras VGG-16 model saved in HDF5 format into the HAIBAL LabVIEW deep learning library, then use our scripting tools to generate the graph so that users can modify the architecture for their own purposes before running it.

THE ARCHITECTURE

 There are actually two variants: VGG16 and VGG19. In this article, we focus on the architecture of the first one. While the two architectures are very similar and follow the same logic, VGG19 has a larger number of convolution layers.
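The difference can be sketched with the standard VGG layer configurations (numbers are output channels, 'M' marks a 2×2 max-pooling step): VGG16 has 13 convolution layers and VGG19 has 16, and adding the 3 fully connected layers at the end gives the 16 and 19 in the model names.

```python
# Standard VGG convolutional configurations: numbers are output channels,
# 'M' marks a 2x2 max-pooling layer.
VGG16_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
             512, 512, 512, 'M', 512, 512, 512, 'M']
VGG19_CFG = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M',
             512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M']

def conv_layers(cfg):
    """Number of convolution layers in a configuration."""
    return sum(1 for entry in cfg if entry != 'M')

# 13 and 16 convolution layers respectively; each network then ends
# with the same 3 fully connected layers.
print(conv_layers(VGG16_CFG), conv_layers(VGG19_CFG))
```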

 

IMPORTING A VGG16 MODEL FROM THE KERAS HDF5 FORMAT

The HAIBAL library can import any HDF5 file saved with the Keras library. In the video below, we import the famous VGG16 example from Keras Applications and generate the LabVIEW HAIBAL architecture so that it can be used or modified directly.
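For reference, here is a minimal sketch of how such an HDF5 file can be produced on the Keras side (assuming TensorFlow/Keras is installed; the filename vgg16.h5 is just an example):

```python
from tensorflow.keras.applications import VGG16

# Build the VGG16 architecture; pass weights="imagenet" instead of None
# to download the pretrained ILSVRC weights.
model = VGG16(weights=None)

# Save in the legacy HDF5 format, the file type HAIBAL imports.
model.save("vgg16.h5")
```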

The HAIBAL user will have two choices: use the imported model directly for training or prediction, or regenerate it with a VI module that builds the architecture, allowing modifications (this is what is shown in the video below).

The pretrained VGG16 model example will be included in the HAIBAL library so that our community can use and modify it.

SOFTWARE NEEDED TO RUN VGG16

  • LabVIEW 2020 (or later)
  • HAIBAL Deep Learning development module
  • Vision Development Module 2020 (or later), recommended for capturing images and displaying overlays