Last week my OAK-D Lite from Luxonis arrived. I can imagine you’ve never heard of it. Basically, it is a camera that can run all kinds of AI tasks on the device itself. I got mine via Kickstarter. And where I say camera, I actually mean it has multiple cameras. That’s how it can see depth, for example.
It can do much more. Load an algorithm, point it at the street next to your house and it starts detecting cars, cyclists and pedestrians. Load the human pose algorithm and it starts showing your posture. Or gestures, sign language, face recognition or COVID-19 mask detection.
Do I have any project in mind for it? Not yet. For now it’s just a very cool device. Some of my colleagues at DIKW got one as well, so maybe we’ll come up with something cool later.
Anyway, the OAK-D Lite comes with a lot of demo code (available via the documentation site). So I had to try stuff out. Here is a video where I try the human pose tracking:
(When I tried this with a group of people, to my surprise the algorithm didn’t just estimate the poses of the people in the shot, but also those of the people on a poster behind them.)
The hand tracking was even more impressive. Really fast and it doesn’t miss much:
Text recognition is not entirely robust, as I show here:
But it does recognise some extreme fonts, like the heavy-metal-style font on my Metadata stickers.
You can program it all in Python. I ran one of their tutorials. I still don’t fully understand the code, but I hope that will improve with further experiments.
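To give a flavour of what that code looks like, here is a minimal sketch along the lines of the Luxonis hello-world tutorial, assuming the depthai and opencv-python packages are installed (exact node names can differ between depthai versions):

```python
import cv2
import depthai as dai

# Build a pipeline; it will run on the OAK-D Lite itself
pipeline = dai.Pipeline()

# Colour camera node: small preview frames are enough for a demo
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# XLinkOut streams the frames from the device back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

# Connect to the device and show the live preview
with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()  # fetch a frame as an OpenCV image
        cv2.imshow("OAK-D Lite preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```

The interesting part is that the pipeline is declared on the host but executed on the camera; the host only receives the results over USB.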
During the first day of the new Certified Data Engineering Professional course at DIKW, I showed off this device as something that might be the entry point of your next data pipeline. I can totally imagine this device sending coordinates of cars or pedestrians, to be sent through a pipeline and summarized in a report.
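Just to sketch that idea: assuming you already have detections coming off the device (as the demo code produces), the entry point could be as simple as turning each bounding box into a JSON event. The `to_event` helper and the destination are hypothetical; a real pipeline would write to a queue or an API instead of stdout:

```python
import json
import sys
import time

def to_event(label: str, confidence: float, bbox: tuple) -> str:
    """Hypothetical helper: one detection becomes one JSON line for the pipeline."""
    xmin, ymin, xmax, ymax = bbox
    return json.dumps({
        "timestamp": time.time(),
        "label": label,                     # e.g. "car" or "pedestrian"
        "confidence": round(confidence, 3),
        "bbox": [xmin, ymin, xmax, ymax],   # normalised image coordinates
    })

# Stand-in for sending the event downstream (Kafka, message queue, REST, ...)
print(to_event("car", 0.87, (0.12, 0.40, 0.35, 0.78)), file=sys.stdout)
```

From there it is ordinary data engineering: collect the events, aggregate them, and summarize the counts per hour in a report.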