Learn how to run face recognition on a Raspberry Pi with OpenCV and the face recognition library. We will cover how to install the required packages and libraries, how to take photos and train the model, and how to control hardware with the detection results.

Transcript

In this video we're going to be using OpenCV and the face recognition library to get real-time face recognition running on a Raspberry Pi. The whole setup runs on the Pi itself, completely offline, with no data leaving your Pi, and it's surprisingly accurate for something running on a low-powered device. The process is super simple: we'll install the needed libraries, run some code to take photos, run another script to train a model using those photos, and finally run facial recognition using that trained model. Let's get into it. To follow along we're going to need a few things. Obviously a Raspberry Pi. We're using the Pi 5 because this is a fairly processing-intensive task, but we also managed to get it working on a Pi 4, just expect it to be a fair bit slower. In terms of RAM, we found that this process used about 1.4 gigabytes, so while it could be done on a 2 gigabyte Pi model, a 4 or 8 gigabyte model is a safer bet and gives you a bit more headroom. You're also going to need a Pi camera module, we're using the Camera Module 3 here, and you may also need an adapter cable. The Pi 5 ships with a smaller camera connector and your camera might not come with the required adapter, so it's worth checking. You'll also need all the other essentials: a microSD card to install Pi OS onto, a power supply, monitor, keyboard, mouse, all that junk. And you'll of course need a cooling solution for the Pi, we're just using the active cooler here. You'll find links to all of these in the written guide linked below, as well as all the code and commands we'll be using in this guide. In terms of hardware assembly there isn't much: just plug the camera into the Pi with the ribbon cable, and make sure you plug it in the correct way, because it only works in one orientation.
Also try to be careful with these ribbon cables. They're not made out of paper, but they will break if you don't look after them and you bend and curl them too much. We're also going to find a way to mount our camera. First of all, you'll need another computer to install Pi OS onto the microSD card using Raspberry Pi Imager. It's a super straightforward process, but if you need help, you'll find it in our written guide below. Once Pi OS is installed, go through the first-time setup and make sure you connect it to the internet; there's nothing fancy you need to do here. Alright, now that we're on the desktop, we're ready to get started. First things first, we need to set up a virtual environment. This is required on Raspberry Pi OS Bookworm and later, and it's just an isolated space where we can experiment and play around without the risk of breaking the rest of our Pi's system. So we're going to open up a new terminal window and create one with the following command. This creates a new virtual environment called face_rec, and if you navigate to your home folder, you should see the virtual environment folder we just created. There it is right there. Again, you can find all the commands in the written guide if you'd rather copy and paste them instead of typing them out manually. Once that's been created, we can enter the virtual environment by typing in the source command, like so. As you can see, the terminal is now working in this environment, shown by the (face_rec) displayed on the left next to pi@raspberrypi. If you ever need to get back into the environment, say you close this terminal window and reopen it and you're no longer in it, just punch in that source command again and you'll jump right back in. Now that we're working in one, we can install the needed libraries.
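The exact commands are in the written guide; assuming the environment name face_rec used in the video, the venv setup looks something like this:

```shell
# Create a virtual environment called face_rec in the home folder.
python3 -m venv face_rec

# Step inside it — the prompt gains a (face_rec) prefix while it's active.
source face_rec/bin/activate
```

If you close the terminal, running that `source` line again drops you straight back into the environment.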
And as always, it's a good idea to make sure the Pi is up to date with these two lines, update and upgrade. Then we're going to install OpenCV, some of the magic sauce behind all of this, followed by imutils, however you want to pronounce it, like so, as well as CMake, which is needed to build the face recognition library. And then finally, we can install the face recognition library itself. This installation has a lot going on and may take 15 to 30 minutes depending on your system, so go grab a tea or coffee, whatever you need. Once that's finished, we're done with all the installations we need. Now we have to set up Thonny, which is the program we'll be using to run our code. So go ahead and open up Thonny. The first time you open it, up in the top right you'll see a "switch to regular mode" link. Hit it and reopen Thonny to get into regular mode. Now we need Thonny to work in that virtual environment we created. To do so, hit Run, then Configure interpreter, and under Python executable, hit the three dots and navigate to the folder of the virtual environment we created. It'll just be face_rec under home. Click on that, go to bin, and look for the python3 file. Select it, and as you can see, Thonny is now working out of our virtual environment. If you ever close Thonny and reopen it, it will automatically return to this environment, which is very nice. Now, if you head over to our written guide, you'll find a zip folder with all the code we're going to be using. Go ahead and download it. I've just downloaded it here, and we're going to extract it to a convenient location. I'm just going to whack it on my desktop for ease of access. In this folder, you should see the first script we want to use, called image capture.
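Check the written guide for the exact package names; a typical install sequence for this stack (the pip package names here are my assumption, not a copy of the guide) is:

```shell
# Keep the system up to date first.
sudo apt update && sudo apt upgrade -y

# Inside the activated face_rec environment:
pip install opencv-python     # OpenCV
pip install imutils           # convenience wrappers around OpenCV
pip install cmake             # needed to build dlib, which face_recognition sits on
pip install face_recognition  # this one compiles dlib — expect 15 to 30 minutes
```

The last line is the slow one: it builds dlib from source on the Pi, which is where the 15 to 30 minute wait comes from.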
If we open it up in Thonny, this is the script that lets us take the photos we'll use to train our model. First things first, head over to this line here and change it to the name of the person you want these photos to identify. I'm just going to pop in my name here, but be warned, this is case sensitive. If I put a capital J here, it will train a separate person, so it might be a good idea to stick to lowercase to avoid training the same person twice. If we run this script, you should see a little preview window pop up from our camera. It might take a second or two. Oh, there we go. Hello. Frame yourself up in here and press space, and it takes a photo and stores it inside that same folder with the correct file structure the model needs. You can keep hitting space to take as many photos as you want. A single well-lit, face-on photo is typically all you need, but a few from different angles won't hurt. Once you're done, hit the Q key to exit, and that's it for that step. You can repeat this process for as many different people as you want. Now, if you head back to the folder this script saves into and go into dataset, you should see that it has created a new folder with the name you set in the script, and inside that folder are all the images you took. This is the structure our model needs for training. If you want to add pre-existing photos, by the way, you can just whack them all inside a folder and name that folder with the name of the person you want to identify. And if you want to delete someone from the model, just select their folder, delete it, and they're gone. Once you're happy with the images you've got, go ahead and open up the script called model training.py, like so. This one is super simple: just hit run, you don't need to touch anything.
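To make the folder layout and the training step concrete, here's a rough sketch of what that pipeline does. The encode_face helper is a stand-in for face_recognition.face_encodings (which needs the real library and real photos), and the filenames are placeholders, but the dataset/<name>/ structure and the pickled pair of lists match what's described above:

```python
import pickle
from pathlib import Path

# Create a toy dataset/ layout like the one image_capture.py produces:
# one folder per person, photos inside, folder name = the (case-sensitive) label.
dataset = Path("dataset")
for name in ("jared", "luke"):
    person_dir = dataset / name
    person_dir.mkdir(parents=True, exist_ok=True)
    for i in range(2):
        (person_dir / f"image_{i}.jpg").touch()

def encode_face(image_path):
    """Stand-in for face_recognition.face_encodings(), which returns a
    128-number vector describing the face found in a photo."""
    return [0.0, 0.0, 0.0]  # toy placeholder encoding

# The training step: walk dataset/<name>/*.jpg and build two parallel lists.
encodings, names = [], []
for person_dir in sorted(dataset.iterdir()):
    if person_dir.is_dir():
        for photo in sorted(person_dir.glob("*.jpg")):
            encodings.append(encode_face(photo))
            names.append(person_dir.name)  # folder name = person's label

# The "model" is just those two lists pickled together.
with open("encodings.pickle", "wb") as f:
    pickle.dump({"encodings": encodings, "names": names}, f)

print(f"Trained on {len(encodings)} photos of {len(set(names))} people")
# → Trained on 4 photos of 2 people
```

This is also why retraining "just works" when you add photos or delete a folder: the script rebuilds those two lists from whatever is in dataset/ each time it runs.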
It's going to process all of those photos and spit out something called a pickle file, which is our model. As you can see in the shell there, it prints out its current progress and how many photos it has gone through. This takes longer the more photos you have; with a few hundred it might take a few minutes, but typically it's a few seconds. Once that's done, we've finished training the model. If you want to add more photos later, just go back to the photo script, take the new photos, and run this script again to retrain the model. All of this code is designed so that you can just do that and it sorts it all out for you. The pickle file is of course stored in the same folder as well. Once we have that file, we're ready to run our face recognition code, so go ahead and open up the script called facerecognition.py in Thonny. If we run this script and give the camera a few seconds to warm up and turn on, we should be able to recognize faces, like so. It's picking me up as, of course, Jared, because that's what we trained it on. And as you can see, it's pretty robust. I can move around and shake my head and even put on some sunglasses, and it still detects me with no issues whatsoever. It's not perfect, though. If I obscure my face a little, you can see it has a hard time detecting it, and if I rotate my head beyond about 45 degrees, it starts to have trouble as well. And if you bring an untrained face into frame, it will just pick them up as unknown, like so. Now, there is a variable we can tune to get better performance, and that's the CVScaler variable towards the top. Our camera records at 1920 by 1080 pixels, and CVScaler takes that resolution and scales it down before pushing it into the face recognition processing part of the script.
So if we set this to 10, it scales the frame 10 times smaller in each direction, and if we run that, you should see improved FPS. We're getting about four, five, sometimes six, depending on what's going on. The problem is that it now can't detect my face from more than about a meter away. It doesn't even register that I'm here. If we instead go the other way and set it to one, meaning it doesn't scale down at all, you can see we're only getting about half an FPS. But even when I'm standing all the way back here, about six meters away, it's not having a single issue with my face, and it should be good for up to about 10 or maybe even 12 meters. By default we left this number at four because it's a good balance between detection distance and FPS, but you should play with it and find what works best for your setup. All right, let's take a look at that last script, face recognition hardware. This script allows us to do something based on our detection results. If we scroll down a little, we should see a list called authorized names. You can add or remove as many names as you want from this list. I'm just going to put my name in for now. Be careful, because it's case sensitive as well. I'm going to delete everyone else off this list, but if you want to add more names, just make sure they go inside the brackets and are separated by commas, like it's done here. Now, if we scroll down a little more, we should see the second modification to this script, and that's this if-else structure. If an authorized face is detected, we do whatever we put in this first space. Else, so if one isn't detected, we do something else. Right now we have it set up as a really simple example: we print "detected" and turn pin 14 of the Raspberry Pi on if we detect an authorized face, and off if we don't.
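That if/else gate boils down to something like the sketch below. The variable names, and the idea of driving pin 14 through gpiozero, are my assumptions about the script rather than a copy of it; the pin is faked here so the logic runs anywhere:

```python
# Sketch of the decision logic in the hardware script.
# On a real Pi you'd drive the pin with gpiozero, e.g.:
#   from gpiozero import LED
#   output = LED(14)
# Here we fake the pin so the logic can run without GPIO hardware.

authorized_names = ["jared"]  # case sensitive — must match the training labels

class FakePin:
    """Minimal stand-in for a gpiozero output device."""
    def __init__(self):
        self.is_on = False
    def on(self):
        self.is_on = True
    def off(self):
        self.is_on = False

output = FakePin()

def handle_detections(detected_names):
    """Turn the pin on if any detected face is on the authorized list."""
    if any(name in authorized_names for name in detected_names):
        print("Detected")
        output.on()    # e.g. open a solenoid door lock
    else:
        output.off()   # lock again when no authorized face is in frame

handle_detections(["jared", "Unknown"])
print(output.is_on)   # True — an authorized face was in frame
handle_detections(["Unknown"])
print(output.is_on)   # False
```

Everything you'd want to customize — sending an email, changing the lights, starting a playlist — slots into those two branches.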
And we can do something like hook it up to a solenoid door lock, so if my face is detected it opens, and if it's not detected it closes again. Luke, get in here. Come and try and unlock my door. Didn't work. Let me in. I'll make my face look more like yours. No, it didn't work. Now this is a really simple example, but with this code we now have a Raspberry Pi that can analyze a human face, and if it's a certain someone, do something. With a bit more work, you could set it up to take a photo when it detects someone unknown, and maybe even send you an email or text with that photo. You could create personalized workspaces. I like my workshop to have the lights as bright and white as possible, and I can use this to automatically adjust the lights when it detects me in there. On a similar note, you could control your music based on who's in the room, booting up a certain playlist or song when it detects a certain bunch of people. Whatever ideas you have, you now have a starting place: a Raspberry Pi that can analyze faces and do something if it detects a certain face. Well, that's about it for now. We hope you enjoyed this video and got something out of it, and we hope you make something with it. If you do make something cool with it and want to demonstrate it, or you just need a hand with anything in this video, we've got a community forum linked below. Until next time, happy making.
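One last sketch, on the CVScaler trade-off from earlier: the script shrinks each 1920x1080 frame before detection, then multiplies the detected box coordinates back up so they draw correctly on the full-size preview. In outline (pure arithmetic, no camera needed; the variable names are my assumption):

```python
cv_scaler = 4                   # 1 = full resolution, 10 = tiny but fast
frame_w, frame_h = 1920, 1080   # the camera's recording resolution

# Detection runs on the shrunken frame — far fewer pixels to search,
# hence the higher FPS, but small/distant faces get lost.
small_w = frame_w // cv_scaler
small_h = frame_h // cv_scaler
print(f"processing at {small_w}x{small_h}")  # → processing at 480x270

# A face box found in the small frame must be scaled back up before it's
# drawn on the full-resolution preview. face_recognition reports boxes
# as (top, right, bottom, left).
top, right, bottom, left = 30, 90, 70, 50   # box in small-frame coordinates
top, right, bottom, left = (v * cv_scaler for v in (top, right, bottom, left))
print(top, right, bottom, left)  # → 120 360 280 200
```

That's the whole trade-off in two lines of arithmetic: a bigger divisor means fewer pixels per face for the detector, which is why range drops as FPS climbs.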
