Hello everyone, my name is Rik and I’m Bitcraze’s new intern. I took up this internship as part of my MSc studies at the faculty of Aerospace Engineering at the Delft University of Technology. Over the past year I have worked with the AI-deck as part of my thesis work at the MAVLab, and I’m looking to contribute to Bitcraze with my practical knowledge of the platform.
For my MSc thesis I have primarily worked on the design of very small optical flow CNNs, capable of running on the GAP8 SoC, using a neural architecture search (NAS). I have implemented a pipeline on the AI-deck, feeding a stream of camera images into the CNN to determine optical flow on-board. One of the last remaining goals of my thesis work is to design an application which utilizes the resulting dense optical flow. In the meantime, the NAS is ever running to find the best possible architecture.
With my practical knowledge I hope to contribute to making the AI-deck an easier platform to work with. Of course, working at Bitcraze is a great learning opportunity, too. I’m already learning a lot about embedded systems and programming. After several years of studying, it’s great to get my first relevant working experience. And maybe most important of all, so far it has been a lot of fun working at Bitcraze and I expect to have a lot more of it. And yes, the falafel is really good.
Hi Rik,
I am trying to build an autonomous drone to carry a small sensor on its back. I am inspired by the latest robot vacuum cleaners, which are fully autonomous and have lidar and a camera for basic object recognition. I am not an engineer nor a programmer, but am trying to learn. The vacuum cleaner (Xiaomi Roborock S6 Vmax) is basically a rover, scanning and mapping its surroundings. Their app has the option to draw a virtual line on the map, meaning the robot will not cross this line in reality. I want to build the Crazyflie in the same way, with the Multi-ranger, the Flowdeck, and the AI-deck. If these were put together, would they be able to do the same as the Roborock? By this I mean not only fly autonomously, but also scan and map an area, after which virtual boundaries could be drawn. How much would the AI-deck actually contribute to basic autonomous flight? Could the Multi-ranger and the Flowdeck do the same if there were no objects in a room? Or could it recognize a wall? I need to plan this carefully as I am at the edge of the lifting capacity of the Crazyflie, as well as the power consumption. I would appreciate your comments and experience. Thanks in advance!
Hi Sumit,
This should definitely be possible! The Crazyflie platform is versatile, so there are many ways in which one could implement this kind of behavior. The contributions of different sensors/decks would thus depend a lot on your implementation.
Due to hardware constraints, I would recommend limiting yourself to a 2D mapping of the environment: hover at a fixed height and map the horizontal plane. Once you have a map of the environment, you can manually introduce boundaries in it.
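As a rough illustration, a minimal sketch of such a 2D mapping with the Crazyflie Python library (cflib) could look like the code below: hover at a fixed height, rotate slowly, and project the front multiranger readings into world-frame points using the onboard state estimate (which the flowdeck feeds). The URI, rates, height and range threshold are placeholders, and I have not flight-tested this exact script.

```python
# Sketch: hover at a fixed height, spin slowly, and turn front-ranger hits
# into 2D points in the world frame. Based on the standard cflib examples;
# URI, rates and variable names should be adapted to your setup.
import math
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander
from cflib.utils.multiranger import Multiranger

URI = 'radio://0/80/2M/E7E7E7E7E7'   # adjust to your Crazyflie
points = []                          # collected (x, y) obstacle points
pose = {'x': 0.0, 'y': 0.0, 'yaw': 0.0}

def pose_callback(timestamp, data, logconf):
    # Cache the latest state estimate so ranger hits can be projected to the world frame
    pose['x'] = data['stateEstimate.x']
    pose['y'] = data['stateEstimate.y']
    pose['yaw'] = math.radians(data['stabilizer.yaw'])

if __name__ == '__main__':
    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
        log = LogConfig(name='pose', period_in_ms=50)
        log.add_variable('stateEstimate.x', 'float')
        log.add_variable('stateEstimate.y', 'float')
        log.add_variable('stabilizer.yaw', 'float')
        scf.cf.log.add_config(log)
        log.data_received_cb.add_callback(pose_callback)
        log.start()

        with MotionCommander(scf, default_height=0.4) as mc, Multiranger(scf) as ranger:
            mc.start_turn_left(36)        # slow yaw rotation, deg/s
            t_end = time.time() + 12      # roughly one full turn
            while time.time() < t_end:
                if ranger.front is not None and ranger.front < 2.0:
                    # project the front-ranger hit into the world frame
                    x = pose['x'] + ranger.front * math.cos(pose['yaw'])
                    y = pose['y'] + ranger.front * math.sin(pose['yaw'])
                    points.append((x, y))
                time.sleep(0.05)
            mc.stop()
        log.stop()

    print('collected %d obstacle points' % len(points))
```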
Check out this video from the BAM days: Simon Zittenzieher, a student at Hochschule Augsburg, presents a very similar implementation, in which multiranger measurements are fed into a CNN running on the ground (which could be ported to the AI-deck) to map an unknown environment.
This implementation is of course a cool proof of concept of processing the multiranger data using AI, but the actual task (fitting lines to data points, i.e. determining your boundaries from the multiranger measurements) is relatively simple. For the sake of simplicity, robustness, and predictability, I would recommend implementing an algorithm for it manually; deep learning is most powerful for harder tasks than this. This also allows you to program the same behavior without the AI-deck (or an off-board CNN, for that matter).
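For example, assuming you have collected the ranger hits as 2D points (as in the sketch above), fitting a straight wall to them can be done with a plain least-squares fit, no deep learning needed. The snippet below only sketches the fitting step; a real map would first cluster the points per wall.

```python
# Sketch: fit a single wall (line) to a set of 2D ranger points with a
# total-least-squares fit (PCA). Clustering points per wall is left out.
import numpy as np

def fit_wall(points):
    """Fit a line to Nx2 points; returns (centroid, direction, normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # the principal direction of the point cloud is the wall direction
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0], vt[1]

def distance_to_wall(point, centroid, normal):
    """Signed distance from a point to the fitted wall."""
    return float(np.dot(np.asarray(point) - centroid, normal))

if __name__ == '__main__':
    # noisy points along the wall y = 2.0
    rng = np.random.default_rng(0)
    wall = np.column_stack([np.linspace(0, 3, 30),
                            2.0 + 0.02 * rng.standard_normal(30)])
    c, d, n = fit_wall(wall)
    print('distance of (1.0, 0.5) to wall: %.2f m'
          % abs(distance_to_wall((1.0, 0.5), c, n)))
```

The signed distance to such a fitted line is then a natural way to implement the virtual boundaries you describe: simply refuse setpoints that cross it.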
You can use the flowdeck as a replacement for the lighthouse/loco positioning system. Positioning with the flowdeck is not as accurate, however, and will introduce additional error in your environment mapping.
This approach is one of the easier ways to implement the behavior. However, if you are interested in AI itself and not just the final implementation/behavior, by all means, go for it! Reproducing Simon's work on the AI-deck could be a good starting point for your dive into the world of deep learning.
Cheers,
Rik