Face Recognition Based on *dlib* in a KVM Guest
This article has been contributed by Lin Ma, Software Engineer and KVM Virtualization Specialist at SUSE.
The challenge
Today I am going to perform a face recognition test on a SUSE Linux Enterprise Server 15 KVM guest with a passed-through Nvidia GTX 970 card and a USB camera.
The package dlib is a well-known open source C++ toolkit that contains machine learning algorithms and tools. For more detailed information, have a look here or just do your own web search.
The code I used can be found on GitHub at: https://github.com/ageitgey/face_recognition. The project was created by Adam Geitgey, and it's easy to get started: just read the README file and follow the instructions on the project page.
This time I didn’t train my own model, but used the pre-trained face recognition model instead.
Install the required package: dlib
Note: Once you have installed the latest dlib package, you can check whether CUDA support is available on your system. With a properly configured environment and an Nvidia GPU card, you will see the following output:
If you don’t have a proper configuration, the output will look like this:
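A quick way to perform this check from Python is to query dlib's `DLIB_USE_CUDA` flag (a real attribute of dlib's Python bindings). The sketch below wraps the check in a small helper; the helper name `cuda_status` is my own, not part of dlib:

```python
def cuda_status():
    """Report whether the installed dlib build can use CUDA.

    Returns one of three short status strings; only dlib.DLIB_USE_CUDA
    itself is dlib API, the helper is illustrative.
    """
    try:
        import dlib
    except ImportError:
        return "dlib is not installed"
    if dlib.DLIB_USE_CUDA:
        # dlib was compiled against CUDA and can use the GPU
        return "CUDA support is available"
    # dlib falls back to the CPU in this configuration
    return "CPU only"

print(cuda_status())
```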
Download the source code of the project and build it
The project contains a couple of examples. The example code I used is available in the file examples/facerec_from_webcam_faster.py
For the test run, I took two pictures: one of my team lead Roger, and one of me. Then I copied both pictures into the guest. Following the project’s instructions, I modified the file facerec_from_webcam_faster.py to use the convolutional neural network model and to use the mentioned pictures as samples of known faces. This also means that only two faces (Roger’s and mine) can be recognized in the live video; any other face will be marked as ‘unknown’.
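The core of such a modification can be sketched with the project's public API (`load_image_file`, `face_encodings`, `face_locations` and `compare_faces` are real `face_recognition` functions). The file names `roger.jpg` and `lin.jpg` and the helper names `label_for` and `run_demo` are my own illustrations, not the author's exact code:

```python
def label_for(matches, names, default="unknown"):
    # Pick the name of the first matching known face, else "unknown",
    # mirroring the labelling logic of the webcam example.
    for match, name in zip(matches, names):
        if match:
            return name
    return default

def run_demo():
    # Imported here so the sketch can be read where the libraries are absent.
    import face_recognition
    import cv2

    # Hypothetical file names; any two portrait photos will do.
    known_names = ["Roger", "Lin"]
    known_encodings = []
    for path in ("roger.jpg", "lin.jpg"):
        image = face_recognition.load_image_file(path)
        known_encodings.append(face_recognition.face_encodings(image)[0])

    video = cv2.VideoCapture(0)  # the passed-through USB camera
    while True:
        ok, frame = video.read()
        if not ok:
            break
        rgb = frame[:, :, ::-1]  # OpenCV is BGR, face_recognition expects RGB
        # model="cnn" selects the convolutional neural network detector,
        # which runs on the GPU when dlib is built with CUDA support.
        locations = face_recognition.face_locations(rgb, model="cnn")
        for encoding in face_recognition.face_encodings(rgb, locations):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            print(label_for(matches, known_names))
```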
Run the file
After I made some additional minor changes to the file, I just ran it. See the result in the video here:
Finally, I made another very short video which you can find here:
As you can see, if the face is not close enough to the camera, it can’t be detected!
And just as a side note: my test scenario also shows that, as long as there aren’t too many faces in a video frame, the performance of this example on the CPU alone is quite acceptable.
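Part of what keeps the CPU load manageable is that the "faster" variant of the example shrinks each frame to a quarter of its size before detection and scales the resulting face boxes back up for display. A sketch of that scaling step (the helper name `scale_box` is my own):

```python
def scale_box(top, right, bottom, left, factor=4):
    # Face locations were found on a frame resized to 1/factor of the
    # original, so multiply the coordinates back up before drawing.
    return top * factor, right * factor, bottom * factor, left * factor

# A box found at (54, 120, 103, 71) on the quarter-size frame maps to:
print(scale_box(54, 120, 103, 71))  # (216, 480, 412, 284)
```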