AVLYTICS System Training Guide

How to train your AI for CCTV




1. Purpose of AVLYTICS System Training Guide

This guide explains the correct method of training your AVLYTICS Artificial Intelligence device for CCTV so that you gain the most accurate predictions on The Edge. It will assist you in training the system accurately for each site, giving you control over your site's predictions.


2. Where does training begin?

Because every site and every camera position is unique, you will need a few days of training for the device to learn what the various classifications look like to AME on your site.


3. Correct and incorrect training images

Below we run through some scenarios to be wary of when training your system. Please note that incorrect training will lead to more false alarms and more missed alarms. A global training set is uploaded to all devices; however, it is important that you also train the device in its own environment. This allows it to recognise the size and displacement of a human versus an animal or other presence. The training does not teach the device a specific object class; rather, it calibrates the device's pre-programmed intelligence.
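For readers who want a mental model of this calibration, the sketch below is purely illustrative: the class names, weighting scheme and code are assumptions made for explanation, not the actual AVLYTICS internals. It shows the idea of a global model whose predictions are gradually refined as confirmed site images accumulate.

    # Illustrative only: site-specific examples layered on top of a global
    # (pre-programmed) model. Names and the weighting scheme are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class SiteCalibration:
        """Collects confirmed site images and tracks how they refine predictions."""
        site_id: str
        confirmed_examples: list = field(default_factory=list)  # (image_id, label) pairs

        def add_training_example(self, image_id: str, label: str) -> None:
            """Only clear, unambiguous images should ever reach this point."""
            self.confirmed_examples.append((image_id, label))

        def weight_for(self, label: str) -> float:
            """More confirmed site examples of a class -> more site-specific influence."""
            site_count = sum(1 for _, lbl in self.confirmed_examples if lbl == label)
            # Blend: start fully on the global model, shift toward site data
            # as confirmed examples accumulate (capped at 80% site influence).
            return min(0.8, site_count / (site_count + 10))

    cal = SiteCalibration(site_id="gate-cam-01")          # hypothetical camera name
    cal.add_training_example("img_0001", "AME Human")
    print(cal.weight_for("AME Human"))                    # ~0.09 site influence so far

The point of the sketch is only that early on the global training set dominates, and your site's confirmed images steadily take over as you train.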


3.1 CLEAR IMAGES:

In the image below, a man is clearly visible and occupies the majority of the frame. Although the image has been classified correctly as AME Human, this global tag does not train your system. Global tags such as AME Human, AME Not Human, AME Vehicle and AME Background are merely placeholders applied on initial start-up of your device so that alarms can be raised from the start. If you are happy with the image and decide that it cannot possibly be mistaken for any other entity, you may train it into the system by selecting the correct icon for the image, as per below:


Should you only wish to acknowledge the alarm, without training the system, you may select the green thumb icon.


3.2 UNCLEAR IMAGES:


If there is any chance that the image could be mistaken for another object, it must be discarded and should not be used to train your system. It is important to understand that training your device with unclear images will be detrimental to the accuracy of its classifications. Clear training images taken from any area of the field of view strengthen the device's ability to predict the correct object classification.


In the images below, the object is much too far away to clearly identify as a human; it could possibly be an animal or background movement.

The following actions should be taken. The image has been classified as AME Human; this is your global training set predicting the object classification. Because we know from its position, and from the contextual knowledge we possess, that this is a human, we can confirm that it is AME Human using the green thumb icon.


Should the same images have appeared with an incorrect tag, such as AME Vehicle, we can discard the image using the red bin icon. This removes the image from your training completely.
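Pulling the steps above together, the sketch below expresses the operator's choice as simple decision logic. The function, its inputs and its return values are hypothetical and only illustrate how the "train", "acknowledge" (green thumb) and "discard" (red bin) actions relate to image quality and tag correctness; they are not part of the AVLYTICS interface.

    # Illustrative decision logic for handling a reviewed alarm image.
    # "train", "acknowledge" and "discard" mirror the icons described above;
    # the function and its inputs are assumptions used purely for explanation.

    def review_action(image_is_clear: bool,
                      tag_is_correct: bool,
                      could_be_mistaken: bool) -> str:
        """Map an operator's judgement of an alarm image to the action to take."""
        if image_is_clear and tag_is_correct and not could_be_mistaken:
            return "train"        # select the correct icon: image joins site training
        if tag_is_correct:
            return "acknowledge"  # green thumb: confirm the alarm, no training
        return "discard"          # red bin: wrong tag or unusable image, remove it

    # A distant figure correctly tagged AME Human but too far away to train on:
    print(review_action(image_is_clear=False, tag_is_correct=True, could_be_mistaken=True))
    # -> "acknowledge"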


3.3 IMAGES WITH TOO MUCH BACKGROUND:

In some instances an image may seem clear to the human eye, yet the frame contains a larger percentage of background than the actual object itself. Such an image must be discarded and not learnt by your device, as the background can cause misclassifications. In the image below, the two people are easily identified as human; however, the background occupies a larger portion of the frame than the objects themselves, so the image must be discarded or merely acknowledged. Do not be concerned with training the device on multiple humans or with teaching it different variations of human actions; the device will detect human features from the clear images you used in the previous training.
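As a rule of thumb for this scenario, the hypothetical check below compares the detected object's bounding box to the full frame. The 25% threshold and the function name are assumptions chosen for illustration, not an AVLYTICS specification.

    # Illustrative rule of thumb: an image is a good training candidate only
    # if the detected object fills a reasonable share of the frame.
    # The 0.25 threshold is an assumed example value, not an AVLYTICS setting.

    def occupies_enough_of_frame(box_w: int, box_h: int,
                                 frame_w: int, frame_h: int,
                                 min_ratio: float = 0.25) -> bool:
        """Return True if the object's bounding box covers at least min_ratio of the frame."""
        object_area = box_w * box_h
        frame_area = frame_w * frame_h
        return frame_area > 0 and (object_area / frame_area) >= min_ratio

    # Example: two people in a 1920x1080 frame covering roughly 240x300 pixels combined.
    # That is about 3.5% of the frame, so the image should be acknowledged or
    # discarded rather than used for training.
    print(occupies_enough_of_frame(240, 300, 1920, 1080))  # False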




4. Icon meaning


Thank you for choosing AVLYTICS.