
It’s now become a tradition to create a video compilation showcasing the most visually stunning research projects that feature the Crazyflie. Since our last update, so many incredible things have happened that we felt it was high time to share a fresh collection.

As always, the toughest part of creating these videos is selecting which projects to highlight. There are so many fantastic Crazyflie videos out there that if we included them all, the final compilation would last for hours! If you’re interested, you can find a more extensive list of our products used in research here.

The video covers 2023 and 2024 so far. We were once again amazed by the incredible things the community has accomplished with the Crazyflie. In the selection, you can see the broad range of research subjects the Crazyflie can be a part of. It has been used for mapping and in swarms – even heterogeneous swarms! Thanks to its small size, it has also been picked for human-robot interaction projects (including our very own Joseph La Delfa showcasing his work). And it’s even been turned into a hopping quadcopter!

Here is a list of all the research that has been included in the video:

But enough talking, the best way to show you everything is to actually watch the video:

A huge thank you to all the researchers we reached out to and who agreed to showcase their work! We’re especially grateful for the incredible footage you shared with us—some of it was new to us, and it truly adds to the richness of the compilation. Your contributions help highlight the fantastic innovations happening within the Crazyflie community. Let’s hope the next compilation also shows projects with the Brushless!

A few weeks ago, the prestigious Robotics: Science and Systems (RSS) conference was held at Delft University of Technology. We helped co-organize a half-day tutorial and workshop called “Aerial Swarm Tools and Applications”, so Kimberly (I) was there on behalf of both Bitcraze and Crazyswarm2. In this blog post, we will tell you a bit about the conference itself and the workshop (and perhaps also a tiny bit about RoboCup).

The Robotics: Science and Systems conference

The Robotics: Science and Systems conference, also known as RSS, is considered one of the most important robotics conferences to attend, alongside ICRA and IROS. It distinguishes itself by having only a single track of presented papers, which makes it possible for all attendees to listen to and learn about all the cool robotics work done in a wide range of fields. It also makes it more difficult to get a paper accepted due to the fixed number of papers they can accept, so you know that whatever gets presented is of high quality.

This year the focus was very much on large language models (LLMs) and their application in robotics, most commonly to manipulators. Many researchers are exploring the ways that LLMs could be used for robotics, but that also means not a lot of small and embedded systems were represented in these papers. We did find one paper where Crazyflies were featured, namely the awesome work by Darrick et al. (2024) called ‘Stein Variational Ergodic Search’, which used optimal control for path planning to achieve the best coverage.

Attending also gave us the chance to experience many of the other works presented at RSS. One in particular was the robotic design of the cute little biped from Disney Imagineering, presented in “Design and Control of a Bipedal Robotic Character” by Grandia et al. (2024). Also very impressive was the agile flight demo by the group of Davide Scaramuzza, and we enjoyed listening to the keynote by Dieter Fox, senior director at Nvidia, talking about ‘Where is RobotGPT?’. The banquet location was also very special, as it was located right in the old church of Delft.

You can find all the talks, demos, and papers on the website of RSS 2024.

Photos of day 3 of RSS

Aerial Swarm Workshop

The main reason we joined RSS was that we were co-organizing the workshop ‘Aerial Swarm Tools and Applications’. This was done in collaboration with Wolfgang Hönig from Crazyswarm2/TU Berlin, Miguel Fernandez Cortizas and Rafael Perez Segui from Aerostack2/Polytechnic University of Madrid (UPM), and Andrea Testa, Lorenzo Pichierri, and Giuseppe Notarstefano from CrazyChoir/University of Bologna. The workshop was a bit of a hybrid, as it contained both talks on various aerial swarm applications and tutorials on the different aerial swarm tools that the committee members represent.

Photos of the Aerial Swarm Tools and Applications workshop

Sabine Hauert from the University of Bristol started off the workshop by talking about “Trustworthy swarms for large-scale environmental monitoring.” Gábor Vásárhelyi from Collmot Robotics and Eötvös University gave a talk/tutorial about Skybrush, showing its suitability not only for drone shows but also for research (Skybrush was used for the Big Loco Test show demo we did 1.5 years ago). The third speaker was SiQi Zhou, speaking on behalf of Angela Schöllig from TU Munich, discussing “Safe Decision-Making for Aerial Swarms – From Reliable Localization to Efficient Coordination.” Martin Saska concluded the workshop with his talk “Onboard relative localization for agile aerial swarming in the wild” about their work at the Czech TU in Prague. They also organize the Multi-robot systems summer school every year, so if you missed it this year, make sure to mark it in your calendar for next summer!

We had four tutorials in the middle of the workshop as well. Gábor also showed Skybrush in simulation after his talk for participants to try out. Additionally, we had tutorials that included real, flying Crazyflies live inside the workshop room! It was a bit of a challenge to set up due to the size of the room we were given, but with the lighthouse system it all worked out! Miguel and Rafael from Aerostack2 were first up, showing a leader-follower demo. Next up were Wolfgang and Kimberly (Crazyswarm2) who showed three Crazyflies collaboratively mapping the room, and finally, Andrea and Lorenzo from CrazyChoir demoed formation control in flight.

You can see the Crazyflie demos flying during the tutorials in the video below. The recordings of the talks can be found on the workshop’s website: https://imrclab.github.io/workshop-aerial-swarms-rss2024/

RoboCup 2024 Eindhoven

Luckily, there was also a bit of time to visit Eindhoven for a field trip to the 2024 edition of the world championship competitions of RoboCup! This is a very large robotics competition held in several different divisions, namely Soccer (with many subdivisions), Industrial, Rescue, @Home, and Junior. Each country usually has its own national championships, and those that win there can compete in the big leagues at events like these. RoboCup was extremely fun to attend, so if any robotics enthusiasts happen to live close to one of these, go! It’s awesome.

Photos of the field trip to RoboCup

Maybe drone competitions will become one of RoboCup’s divisions in the future :)

It’s not often a blog post happens on the 25th of December, so this time, you’re having a treat with some new Bitcraze prototypes as a present from us! If you have time to get away from the Christmas table, there’s something we’d love you to watch:

Now let’s see if you noticed all the new stuff in this video!

Our new flight lab

We teased it earlier, and at the beginning of December we got our extended flight lab! We added 110 m2 to our flight space. It was a rush to have everything ready for the video – we cleaned everything, painted the walls and the green logo, and set up the positioning system without our truss… But now we’re happy to show you how big the space is, even if it’s hard to convey the real size on camera.

The Crazyflie 2.1 brushless

We already talked about it in this blog post, but the Brushless has made significant progress and we feel confident that you will get your hands on it in 2024. Here, we use the extra power for fast and agile flight. It was also very stable and didn’t crash once during the shooting!

The Lighthouse V2

Yes, you counted right! The Brushless flew with 16 base stations! We’ve worked really hard these past three months to create a new Lighthouse deck – the Lighthouse deck 2.0 – which can get its position from 16 base stations. That’s 4 times more than what was previously possible! It behaved consistently well during the different tries, and we are really happy with the result. Right now, it’s just a prototype, but we’re hoping to get it to the next step in the coming months.

The contact charging station

Marcus created a charger for the Brushless that doesn’t need any extra deck to allow for charging: it connects through the Brushless feet. It also has the nice feature of LEDs that change to indicate the status (idle, charging or charged). It is a prototype as well, and we don’t know yet if it will end up being a product.

The high-power LED

This one is trickier to spot: it’s not our usual LED ring that the Brushless carries, but a new, powerful LED mounted underneath. It is so powerful that it nearly blinded us the first time we tried it. With a diffuser on top, it keeps the Crazyflie visible even at such a fast pace! This is of course a prototype too, and we’re not sure if we will release it, but it’s fun to use for this kind of project.

Other announcements

This week, our office is closed; we’re taking the time to celebrate and rest a little before 2024. This means that shipping and support will be greatly reduced.

But we’re back the week after, at a somewhat reduced pace. The developer meeting on the 3rd of January will still take place, but without any presentation: we’ll use the time to answer any questions you have and talk a little! The details are here.

Bitcraze got its presents this year: a handful of working prototypes! We hope your wishes came true too. Merry Christmas to you!

The Flow deck has been around for some time already, officially released in 2017 (see this blog post), and the Flow deck v2 was released in 2018 with an improved range sensor. Compared to MoCap positioning and the Loco Positioning System (based on Ultrawideband) that were already possible before, optical flow-based positioning for the Crazyflie opened up many more possibilities. Flight was no longer confined to lab environments with set-up external systems; people could bring the Crazyflie home and do their hacking there. Moreover, doing research for exploration techniques that cannot rely on external positioning systems was possible with it as well. For example, back in my day as a PhD student, I relied heavily on the Flow deck for multi-Crazyflie autonomous exploration. This would have been very difficult without it.

However, despite the numerous benefits that the Flow deck provides, there are also several limitations. These limitations may not be immediately familiar to many before purchasing a Crazyflie with a Flow deck. A while ago, we wrote a blog post about positioning systems in general and even delved into the Loco Positioning System in detail. In this blog post, we will explore the theory of how the Flow deck enables the Crazyflie to fly, share general tips and tricks for ensuring stable flight, and highlight what to avoid. Moreover, we aim to make the Flow deck the focus of next week’s Developer meeting, with the goal of improving or clarifying its performance further.

Theory of the Flow deck

I won’t delve into too much detail but will provide a generic indication of how the Flow deck works. As previously explained in the positioning system blog post, the Flow deck is a relative positioning system with onboard estimation. “Relative” means that wherever you start is the (0, 0, 0) position. The extended Kalman filter processes flow and height information to determine velocity, which is then integrated to estimate the position—essentially dead reckoning. The onboard Kalman filter manages this process, enabling the Crazyflie to use the information for stable hovering.

Image from Positioning System Overview blogpost

The optical flow sensor (PMW3901) calculates pixel flow per frame (this old blog post explains it well), and the IR range sensor (VL53L1x) measures height up to 4 meters (under ideal conditions). The Kalman filter incorporates a measurement model that describes the relationship between these two values and the velocity of the Crazyflie. More detailed information can be found in the state estimation documentation. This capability allows the Crazyflie to hover, as explained in the getting started tutorial.

Image from state estimation repo documentation
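To make this relationship a little more concrete, here is a simplified sketch of the flow measurement model in Python. It is illustrative only: the constants are placeholders, and the real firmware model additionally compensates for the body rotation rates measured by the gyroscope.

```python
# Simplified sketch of the Flow deck measurement model (illustrative only).
# The firmware also compensates for rotation rates and uses the exact
# PMW3901 constants; the values below are placeholders.

N_PIXELS = 30.0   # assumed sensor resolution along one axis [pixels]
FOV_RAD = 0.7     # assumed field of view of the flow sensor [rad]

def predicted_flow(vx, vy, height, dt):
    """Predict accumulated pixel flow over one update period.

    vx, vy : horizontal velocities [m/s]
    height : distance to the ground from the range sensor [m]
    dt     : time between flow updates [s]
    """
    height = max(height, 0.1)        # avoid dividing by ~0 close to the ground
    scale = dt * N_PIXELS / FOV_RAD  # converts apparent angular motion into pixels
    return scale * vx / height, scale * vy / height
```

The key point is that the same flow reading can be produced by flying fast and high or slow and low, which is why the height measurement is essential to recover velocity.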

Tips & Tricks and Limitations

If you want to fly with the Crazyflie and the Flow deck, there are a couple of things to keep in mind:

  • Take off from a floor with texture. Natural texture like wood flooring is probably the best.
  • The floor shouldn’t be too shiny, and be aware of infrared scattering, which can disturb the height sensor.
  • The room should be well-lit, as the sensor needs to see the texture.

There are certain situations that the Flow deck has some issues with:

  • Low or no texture, such as flying above a surface that is just one plain color.
  • Black areas. The reason is similar to flying above a surface without texture, but it is even more problematic; especially at startup, the position estimate can diverge.
  • Low light conditions
  • Flying over its own shadow

We made a video that shows these types of behaviors, starting of course with the most ideal flying conditions:

Moreover, it is important to note that you shouldn’t fly too high or yaw too often. The latter makes the Crazyflie drift, as the optical flow caused by yawing cannot be distinguished from flow caused by translation.

Developer meeting about Flow deck

We believe that many of the issues people experience are primarily due to the positioning quality not being visible to the user. In many of our examples, the Crazyflie will not take off until the position estimate has converged. However, we don’t have corresponding functionality in the cfclient, where it is up to the user to recognize when the position estimate is diverging. There is a lot of room for improvement in this regard.
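To give an idea of what such a check looks like on the script side, here is a sketch in the spirit of the existing cflib examples: it logs the kalman.varPX/varPY/varPZ variables and waits for them to settle before allowing takeoff (the threshold and timeout are arbitrary choices).

```python
import time

from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncLogger import SyncLogger


def wait_for_position_estimator(scf, timeout=10.0, threshold=0.001):
    """Block until the Kalman variances have settled, or give up after timeout."""
    log_config = LogConfig(name='Kalman Variance', period_in_ms=100)
    log_config.add_variable('kalman.varPX', 'float')
    log_config.add_variable('kalman.varPY', 'float')
    log_config.add_variable('kalman.varPZ', 'float')

    names = ('kalman.varPX', 'kalman.varPY', 'kalman.varPZ')
    history = {name: [1000.0] for name in names}
    start = time.time()

    with SyncLogger(scf, log_config) as logger:
        for _timestamp, data, _logconf in logger:
            for name in names:
                history[name] = (history[name] + [data[name]])[-10:]

            # Converged when the variance has stopped changing on all axes
            if all(max(h) - min(h) < threshold for h in history.values()):
                return True
            if time.time() - start > timeout:
                return False
```

The same pattern works with any positioning setup, not just the Flow deck.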

This is the reason why the next developer meeting will specifically focus on the Flow deck, which will be on Wednesday the 6th of December, 3 pm central European time. During the meeting, we will explain more about the Flow deck, discuss the issues we are facing, and explore ways to enhance the visibility of positioning quality. Check out this discussion thread for information on how to join.

This week’s guest blogpost is from Matěj Karásek from Flapper Drones, about flying the Nimble+ with a positioning system. Enjoy!

Flapper Drones are bioinspired robots flying by flapping their wings, similar to insects and hummingbirds. If you haven’t heard of Flappers yet, you can read more about their origins at TU Delft and about how they function in an earlier post and on our company website.

In this blogpost, I will write about how to fly the Flappers (namely the Flapper Nimble+) autonomously within a positioning system such as the Lighthouse, and will of course include some nice videos as well.

The Flapper Nimble+ is the first hover-capable flapping-wing drone on the market. It is a development platform powered by the Crazyflie Bolt and so it can enjoy most of the perks of the Crazyflie ecosystem, including the positioning systems as well as other sensors (check this overview). If you would like to get a Flapper yourself, just head to the Bitcraze webstore, where there are some units ready to be shipped! (At the time of writing at least…)

Minimal setup

The minimal setup for flying in a positioning system is nearly identical to that of a standard Crazyflie. Next to a Flapper with recent firmware, a Crazyradio dongle, a positioning system (in this post we will use the Lighthouse), and a compatible positioning deck (Lighthouse deck), you will also need: 1) a mount, so that the deck can be attached on top of the Flapper, and 2) a set of extension cables. You can 3D print the mounts yourself (models here); the extension cable prototypes can either be requested from Flapper Drones or soldered by yourself (in that case, the battery holder deck, standard Crazyflie pin headers and some wires come in handy). Just pay attention to connect the cables the correct way, as if the deck was mounted right on top of the Bolt. The complete setup with the Lighthouse deck will look like this:

Lighthouse deck installation on a Flapper Nimble+. Make sure the extension cables are well secured (e.g. by using the additional cable mount) such that they don’t get caught by the gears.

For the Lighthouse, as with regular Crazyflies, the minimum number of base stations (with some redundancy) is 2, but you will get a larger tracking volume with more base stations. Four base stations mounted at 3 m height will give you about 5 meters by 5 meters of coverage, which is recommended especially if you want to fly more than one Flapper at a time (they are a bit larger than the Crazyflies, after all…).
From now on, it is exactly the same as with standard Crazyflies. After you calibrate the Lighthouse system using the standard wizard procedure via the cfclient, you can just go to the Flight Control Tab and use the “Command Based Flight Control” buttons to take off, command steps in the xyz directions and land. It is this easy!

Flapper Nimble+ in Lighthouse flown via Command Based Flight Control of cfclient
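If you prefer a script over the cfclient buttons, the same take off / step / land pattern can also be reproduced with a few lines of the Python library. Here is a minimal sketch; the URI and step sizes are placeholders, and with the Flapper it is wise to keep the default (modest) velocities.

```python
import time

import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # placeholder address of the Flapper's Bolt

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    # MotionCommander takes off on entering the 'with' block and lands on exit
    with MotionCommander(scf, default_height=1.0) as mc:
        time.sleep(2.0)
        mc.forward(0.5)   # step in x
        mc.left(0.5)      # step in y
        mc.up(0.3)        # step in z
        time.sleep(2.0)
```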

Assisted flight demo

We used this setup in February for the demos we were giving at the Highlight Delft festival in the Netherlands. This allowed people with no drone piloting skills (from 3-year-olds to grandmas – true story) to fly and control the Flapper in a safe way (safe for the Flapper, that is; the Flapper itself is already a very safe platform thanks to its soft wings and low weight). To make it more fun, and even safer for the Flapper, we used a gamepad instead of on-screen buttons, and we modified the cfclient slightly such that the flight space can be geofenced to stay within the tracking volume.

Flight demo at Highlight Delft festival, using the Lighthouse and position hold assistance

If you would like to try it yourself (it also works with standard Crazyflies), the source code is here (just keep in mind it is experimental and has some known bugs…). To fly in the position-assisted mode, you need to press (and keep pressing) the Alt 1 button and use the joysticks to move around (velocity commands, headless mode). Releasing the Alt 1 button will make the Flapper autoland. Autoland will also get triggered when the battery is low. You can still fly the Flapper in a direct way by pressing Alt 2 instead.
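The core of the geofencing logic is quite simple. Here is a hedged sketch of the idea, clamping the joystick velocity commands so they never push the vehicle out of the tracking volume; the bounds are made up, and the actual client code does more than this.

```python
# Hypothetical geofence: zero out any velocity component that would push the
# vehicle outside the allowed box (bounds are illustrative only).
X_MIN, X_MAX = -2.0, 2.0
Y_MIN, Y_MAX = -2.0, 2.0
Z_MIN, Z_MAX = 0.3, 2.5

def clamp_velocity(pos, vel):
    """pos = (x, y, z) estimated position [m], vel = requested velocity [m/s]."""
    x, y, z = pos
    vx, vy, vz = vel
    if (x <= X_MIN and vx < 0) or (x >= X_MAX and vx > 0):
        vx = 0.0
    if (y <= Y_MIN and vy < 0) or (y >= Y_MAX and vy > 0):
        vy = 0.0
    if (z <= Z_MIN and vz < 0) or (z >= Z_MAX and vz > 0):
        vz = 0.0
    return vx, vy, vz

# The clamped command can then be streamed with
# cf.commander.send_velocity_world_setpoint(vx, vy, vz, yawrate)
```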

Flying more Flappers at a time

Again, this is something that works pretty much out of the box. As with regular Crazyflies, you just need to assign a unique address to each of the Flappers and then use e.g. this example Python script to run a preprogrammed sequence.
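As a rough idea of what such a script can look like (this is not the exact example linked above; the URIs and waypoints are placeholders), here is a sketch using the cflib Swarm helper:

```python
import time

import cflib.crtp
from cflib.crazyflie.swarm import CachedCfFactory, Swarm

# Each Flapper needs its own unique address (placeholder URIs)
URIS = {
    'radio://0/80/2M/E7E7E7E701',
    'radio://0/80/2M/E7E7E7E702',
    'radio://0/80/2M/E7E7E7E703',
}

def run_sequence(scf):
    """A simple take-off, step forward, land sequence for one vehicle."""
    cf = scf.cf
    cf.param.set_value('commander.enHighLevel', '1')  # needed on older firmware
    commander = cf.high_level_commander

    commander.takeoff(1.0, 3.0)                               # to 1 m in 3 s
    time.sleep(4.0)
    commander.go_to(0.5, 0.0, 0.0, 0.0, 3.0, relative=True)   # 0.5 m forward
    time.sleep(4.0)
    commander.land(0.0, 3.0)
    time.sleep(4.0)

cflib.crtp.init_drivers()
factory = CachedCfFactory(rw_cache='./cache')
with Swarm(URIS, factory=factory) as swarm:
    swarm.reset_estimators()          # wait for position estimates to settle
    swarm.parallel_safe(run_sequence)
```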

With a few extra lines of code, we pulled off this quick demo at the end of the Highlight Delft festival, when we had 30 minutes left before packing everything up (one of the Flappers decided to drop its landing gear, probably too tired after 3 evenings of almost continuous flying…):

Sequence with 3 Flappers within Lighthouse positioning system

Other positioning systems

Using other positioning systems is equally easy. In fact, for the Loco Positioning system, the deck can even be installed directly on the Flapper’s Bolt board (no extension cables or mounts are needed). As for optical motion tracking, we do not have experience with Qualisys and the active marker deck, but flying with retro-reflective markers within an OptiTrack system can be set up easily with just a few hacks.

When choosing and setting up the positioning system, just keep in mind that due to its wings, the Flapper needs to tilt much more than a quadcopter to fly forward or sideways. This is not an issue with the Loco Positioning system (although there can be challenges with position estimation, as described further below), but it can be a limitation for systems requiring direct line of sight, such as the Lighthouse or optical motion tracking.

Ongoing work

In terms of control and flight dynamics, the Flapper is very different from the Crazyflie. Thus, for autonomous flight, there remains room for improvement on the firmware side. We managed to include the “flapper” platform into the standard Crazyflie firmware (in master branch since November 2022, and in all releases since then), such that RC flying and other basic functionality works out of the box. However, as many things in the firmware were originally written only for a (specific) quadcopter platform, the Crazyflie 2.x, further contributions are needed to unlock the full potential of the Flapper.

With the introduction of “platforms” last year, many things can be defined per platform (e.g. the PID controller gains, sensor alignment, filter settings, etc.), but the Extended Kalman filter, and specifically the motion model inside, has been derived and tuned for the Crazyflie 2.x and is thus not representative of the Flapper, which has very different flight dynamics. This is what directly affects (and currently limits) autonomous flight within positioning systems – it works well enough at hover and in slow flight, but the agility and speed achievable in RC flight cannot be reached yet. We are planning to improve this in the future (hopefully with the help of the community). The recently introduced out-of-tree controllers and estimators might be the way to go… To be continued :)

Thanks Matěj! And for those of you at home, don’t forget that we have our dev meeting next Wednesday (the 5th), where we’ll discuss the Loco positioning system but also take some time for general discussions. We hope to see you there!

It’s time for a new compilation video about how the Crazyflie is used in research! The last one already featured a lot of awesome work, but a lot has happened since then, both in research and at Bitcraze.

As usual, the hardest part about making these videos is choosing the works we want to feature – if every cool video of the Crazyflie were in there, it would last for hours! So this is just a selection of the most videogenic projects we’ve seen. You can find a more extensive list of our products used in research here.

We’ve seen a lot of projects that used the modularity of the Crazyflie to create awesome new features, like a catenary robot, wall tracking, or having it land upside down. The Crazyflie board was even made into a revolving-wing drone. New sensors were used to sniff out gas leaks (the Sniffy Bug, as seen in this blogpost) or to allow autonomous navigation. Swarms are also a research topic where we see the Crazyflie a lot, this time for collision avoidance or path planning. We also see more and more simulators, which are used for huge swarms or physics tests.

Once again, we were surprised and awed by all the awesome things that the community did with the Crazyflie. Hopefully, this will inspire others to think of new things to do as well. We hope that we can continue with helping you to make your ideas fly, and don’t hesitate to share with us the awesome projects you’re working on!

Here is a list of all the research that has been included in the video:

And, without further ado, here it is:

This year, the traditional Christmas video was overtaken by a big project that we had at the end of November: creating a test show with the help of CollMot.

First, a little context: CollMot is a show company based in Hungary that we’ve partnered with on a regular basis, having brainstorms about show drones and discussing possibilities for indoor drone shows in general. They developed Skybrush, an open-source software suite for controlling swarms. We have wanted to work with them for a long time.

So, when the opportunity came to rent an old train hall that we visit often (because it’s right next to our office and hosts good street food), we jumped on it. The place itself is huge, with massive pillars, pits for train maintenance, a high ceiling with metal beams and a really funky industrial look. The idea was to do a technology test and see whether we could scale up the Loco positioning system to a larger space. This was also the perfect time to invite the guys at CollMot for some exploring and hacking.

The train hall

The Loco system

We recently added the TDoA3 Long Range mode, and experiments in our test lab indicated that the Loco Positioning system should work in a bigger space with up to 20 anchors, but we had not actually tested it in a larger space.

The maximum radio range between anchors is probably up to around 40 meters in the Long Range mode, but we decided to set up a system that was only around 25×25 meters, with 9 anchors in the ceiling and 9 anchors on the floor, placed in 3 by 3 matrices. The reason we did not go bigger is that the height of the space is around 7-8 meters and we did not want to end up with a system that is too wide in relation to the height, as this would reduce Z accuracy. This setup gave us 4 cells of 12x12x7 meters, which should be OK.

Finding a solution to get the anchors up to the 8-meter ceiling – and to get them down again easily – was also a head-scratcher, but with some ingenuity (and meat hooks!) we managed to create a system. We only had the hall for 2 days before filming at night, and setting up the anchors on the ceiling took a big chunk out of the first day.

Drone hardware

We used 20 Crazyflie 2.1s equipped with the Loco deck, LED-rings, the thrust upgrade kit and Tattu 350 mAh batteries. We soldered the pin headers to the Loco decks for better rigidity, but also because it adds a bit more height adjustability for the 350 mAh battery, which is a bit thicker than the stock battery. To make the LED-ring more visible from the sides, we created a diffuser that we 3D-printed in white PLA. The full assembly weighed in at 41 grams. With the LED-ring lit up almost all of the time, we concluded that the show flight should not be longer than 3-4 minutes (with some flight time margin).

The show

CollMot, on their end, designed the whole show using Skyscript and Skybrush Studio. The aim was to have relatively simple and easily changeable formations to be able to test a lot of different things, like the large area, speed, or synchronicity. They joined us on the second day to implement the choreography, and share their knowledge about drone shows.

We got some time afterwards to discuss a lot of things, and enjoy some nice beers and dinner after a job well done. We even had time on the third day, before dismantling everything, to experiment a lot more in this huge space and got some interesting data.

What did we learn?

Initially we had problems with positioning: we got outliers and sometimes lost tracking. We eventually traced the problems to the outlier filter. The filter was written a long time ago and the current implementation was optimized for 8 anchors in a smaller space, which did not really work in this setup. After some tweaking the problem was solved, but we need to improve the filter to generically support different system setups in the future.

Another problem we observed is that the Z-estimate tends to get an offset that “sticks” and is not corrected over time. We do not really understand this yet, and it will require more investigation.

The outlier filter was the only major problem that we had to solve; otherwise the Loco system mainly performed as expected and we are very happy with the result! The changes to the firmware are available in this slightly hackish branch.

We also spent some time testing maximum velocities. For horizontal velocities, the Crazyflies started losing positioning above 3 m/s. They could probably go much faster, but the outlier filter started having problems at higher speeds. The overshoot also became larger the faster we flew, which most likely could be solved with better controller tuning. For the vertical velocity, 3 m/s was also the maximum, limited by the deceleration when coming down. Some improvements can be made here.

The conclusion is that many things work really well, but there are still some optimizations and improvements that could be made to make the system even more robust and accurate.

The video

But enough talking, here is the never-seen-before New Year’s Eve video:

And if you’re curious to see behind the scenes:

Thanks to CollMot for their presence and valuable expertise, and InDiscourse for arranging the video!

And with the final blogpost of 2022 and this amazing video, it’s time to wish you a nice New Year’s Eve and a happy beginning of 2023!

This week’s guest blogpost is from Rik Bouwmeester from the Micro Air Vehicle lab, Faculty of Aerospace Engineering at the Delft University of Technology.

Tiny quadcopters like the Crazyflie can be operated in narrow, cluttered environments and in proximity to humans, making them the perfect candidate for search-and-rescue operations, monitoring crops in a greenhouse, or performing inspections where other flying robots cannot reach. All these applications benefit from autonomy, allowing deployment without proximity to a base station or human operator and permitting swarming behavior.

Achieving autonomous navigation on nano quadcopters is challenging given the highly constrained payload and computational power of the platform. Most attention has been given to monocular solutions; the camera is a lightweight and energy-efficient passive sensor that captures rich information of the environment. One of the most important monocular visual cues is optical flow, which has been exploited on MAVs with higher payload for obstacle avoidance [1], depth estimation [2] and several bio-inspired methods for autonomous navigation [3–7].

Optical flow describes the apparent visual variations caused by relative motion between an observer and their surroundings. This rich visual cue contains tangled information of velocity and depth. However, calculating optical flow is expensive. The field of optical flow estimation has for several years been dominated by convolutional neural networks (CNNs). Despite efforts to find architectures of reduced size and latency [8-10], these methods are still highly computationally expensive, running at several to tens of FPS on modern desktop GPUs and requiring millions of parameters, rendering them incompatible with edge hardware.

To this end, we present “NanoFlowNet: Real-Time Dense Optical Flow on a Nano Quadcopter”, submitted to an international robotics conference, which introduces NanoFlowNet, a CNN architecture designed for real-time, fully on-board, dense optical flow estimation on the AI-deck.

CNN architecture

We adopt the semantic segmentation CNN STDC-Seg [11] and modify it for optical flow estimation. The resulting CNN architecture may be considered “real-time” on desktop hardware, but for deployment on edge devices such as a nano quadcopter the network must be shrunk significantly. We improve the latency of the architecture in three ways.

First, we redesign the key convolutional module of the architecture, the Short-Term Dense Concatenate (STDC) module. By reordering the operations within the strided variant of the module, we save, depending on the location of the module within the architecture, from over 10% to over 50% of the MAC operations per module, while increasing the number of output filters with a large receptive field size. A large receptive field size is desirable for optical flow estimation.

Second, inspired by MobileNets [12], we globally replace ‘regular’ convolutions with depthwise separable convolutions. Depthwise separable convolutions factorize a convolution into a depthwise and a pointwise convolution, effectively reducing the computational expense at a cost in representational capacity.
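To illustrate the idea (this is generic PyTorch, not the actual NanoFlowNet code), a depthwise separable block and a rough parameter count for one layer could look like this:

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch, kernel_size=3, stride=1):
    """Depthwise separable convolution: a per-channel spatial convolution
    followed by a 1x1 pointwise convolution that mixes the channels."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                  padding=kernel_size // 2, groups=in_ch, bias=False),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Rough weight count for a 3x3 convolution going from 64 to 128 channels:
#   regular:   3*3*64*128           = 73,728 weights
#   separable: 3*3*64 + 1*1*64*128  =  8,768 weights
```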

Third, we reduce the input dimensionality. We train and run the network on grayscale input images, reducing the on-board memory required for storing images by two thirds. Any memory saved in the AI-deck’s L2 memory can be handed to AutoTiler for storing the CNN architecture, speeding up the on-board execution. To gain a further speed-up, we run the CNN on-board at a reduced input resolution of 160×112 pixels. Besides the speed-up from the saved L2 memory, reducing the input resolution makes all operations throughout the network cheaper. We downscale the training data to closely match the target resolution. Both these changes come at a loss of input information: we will miss out on small objects and small displacements that are not captured at this resolution.

To give some intuition of the available memory: Estimating optical flow requires two input images. Storing two color input images at full resolution requires (2 x 324x324x3=) 630 kB. The AI-deck has 512 kB of L2 memory available.

Motion boundary detail guidance

Inspired by STDC-Seg, we guide the training of optical flow with a train-time-only auxiliary task to promote the encoding of spatial information in the early layers. Specifically, we introduce a motion boundary prediction task to the net. The motion boundary ground truth can be found in the optical flow datasets. This improves performance by 0.5 EPE on the MPI Sintel clean (train) benchmark, at zero cost to inference latency.

Performance on MPI Sintel

Given the scaling and conversion to grayscale of the input data, our network is not directly comparable with results reported by other works. For comparison, we retrain one of the fastest networks in the literature, FlowNet2-s [13], on the same data. Given the reduction in resolution, we drop the deepest two layers to maintain a reasonable feature size. We name the model FlowNet2-xs.

We benchmark the performance of the architecture on the optical flow dataset MPI Sintel. NanoFlowNet performs better than FlowNet2-xs, despite using less than 10% of the parameters. NanoFlowNet achieves 5.57 FPS on the AI-deck. FlowNet2-xs does not fit on the AI-deck due to the network size. To put the achieved latency of NanoFlowNet in perspective, we execute FlowNet2-xs’ first two convolutions and the final prediction layer on the GAP8. The three-layer architecture achieves 4.96 FPS, which is slower than running the entire NanoFlowNet. On a laptop GPU, the two architectures accomplish similar latency.

Method        | Sintel clean (train) [EPE] | Sintel final (train) [EPE] | GPU [FPS] | GAP8 [FPS] | Parameters
FlowNet2-xs   | 9.054                      | 9.458                      | 150       | –          | 1,978,250
NanoFlowNet   | 7.122                      | 7.979                      | 141       | 5.57       | 170,881
Performance on MPI Sintel (train subset). The (average) end-point error (EPE) describes how far off the estimated flow vectors are on average; lower is better.

Obstacle avoidance implementation

We demonstrate the effectiveness of NanoFlowNet by implementing it in a simple, proof-of-concept obstacle avoidance application on an AI-deck equipped Crazyflie. We let the quadcopter fly forward at constant velocity and implement the horizontal balance strategy [14], [15], where the quadcopter balances the optical flow in the left and right half plane by yawing.
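In essence (a simplified sketch, not the exact on-board implementation), the balance strategy compares the total flow magnitude in the left and right halves of the image and commands a yaw rate proportional to the difference:

```python
import numpy as np

K_YAW = 30.0  # placeholder gain [deg/s per unit of flow difference]

def balance_yaw_rate(flow):
    """Compute a yaw-rate command from a dense optical flow field.

    flow: array of shape (H, W, 2) with per-pixel (u, v) flow vectors.
    More flow on one side suggests closer obstacles there, so we yaw away
    from that side (the sign convention depends on the flight controller).
    """
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow magnitude
    half = flow.shape[1] // 2
    left = magnitude[:, :half].mean()
    right = magnitude[:, half:].mean()
    return K_YAW * (left - right)
```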

We equip a Crazyflie with the Flow deck for positioning only. The total flight platform weighs 34 grams.

We augment the balance strategy by implementing active oscillations (a cyclic up-down movement), resulting in additional optical flow generated across the field of view. This is particularly helpful for avoiding obstacles in the direction of horizontal travel, since no optical flow is generated at the focus of expansion.

The obstacle avoidance implementation is demonstrated in an open and a cluttered environment in ‘the Cyber Zoo’, an indoor flight arena at the faculty of Aerospace Engineering at the Delft University of Technology. The control algorithm is most robust in the open environment, with the quadcopter managing to drain a full battery without crashing. In the cluttered environment, performance is more variable. Especially on occasions where obstacles are close to one another, the quadcopter tends to avoid the first obstacle successfully, only to turn straight into the second and crash into it. Adding a head-on collision detection based on FOE detection and divergence estimation (e.g., [7]) should help avoid obstacles in these cases.

Successful run in a cluttered environment in the Cyber Zoo. The Crazyflie manages to avoid collision until the battery is drained.

All in all, we consider the result a successful demonstration of the optical flow CNN. In future work, we expect to see applications that take more advantage of the resolution of the flow information.

Citation

Bouwmeester, R. J., Paredes-Vallés, F., De Croon, G. C. H. E. (2022). NanoFlowNet: Real-time Dense Optical Flow on a Nano Quadcopter. arXiv. https://doi.org/10.48550/arXiv.2209.06918

References

[1] Gao, P., Zhang, D., Fang, Q., & Jin, S. (2017). Obstacle avoidance for micro quadrotor based on optical flow. Proceedings of the 29th Chinese Control and Decision Conference, CCDC 2017, 4033–4037. https://doi.org/10.1109/CCDC.2017.7979206

[2] Sanket, N. J., Singh, C. D., Ganguly, K., Fermuller, C., & Aloimonos, Y. (2018). GapFlyt: Active vision based minimalist structure-less gap detection for quadrotor flight. IEEE Robotics and Automation Letters, 3(4), 2799–2806. https://doi.org/10.1109/LRA.2018.2843445

[3] Conroy, J., Gremillion, G., Ranganathan, B., & Humbert, J. S. (2009). Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Autonomous Robots, 27(3), 189–198. https://doi.org/10.1007/s10514-009-9140-0

[4] Zingg, S., Scaramuzza, D., Weiss, S., & Siegwart, R. (2010). MAV navigation through indoor corridors using optical flow. Proceedings – IEEE International Conference on Robotics and Automation, 3361–3368. https://doi.org/10.1109/ROBOT.2010.5509777

[5] De Croon, G. C. H. E. (2016). Monocular distance estimation with optical flow maneuvers and efference copies: A stability-based strategy. Bioinspiration and Biomimetics, 11(1). https://doi.org/10.1088/1748-3190/11/1/016004

[6] Serres, J. R., & Ruffier, F. (2017). Optic flow-based collision-free strategies: From insects to robots. Arthropod Structure and Development, 46(5), 703–717. https://doi.org/10.1016/j.asd.2017.06.003

[7] De Croon, G. C. H. E., De Wagter, C., & Seidl, T. (2021). Enhancing optical-flow-based control by learning visual appearance cues for flying robots. Nature Machine Intelligence, 3(1), 33–41. https://doi.org/10.1038/s42256-020-00279-7

[8] Ranjan, A., & Black, M. J. (2017). Optical flow estimation using a spatial pyramid network. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, 2720–2729. https://doi.org/10.1109/CVPR.2017.291

[9] Hui, T. W., Tang, X., & Loy, C. C. (2018). LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8981–8989. https://doi.org/10.1109/CVPR.2018.00936

[10] Sun, D., Yang, X., Liu, M. Y., & Kautz, J. (2017). PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 8934–8943. https://doi.org/10.1109/CVPR.2018.00931

[11] Fan, M., Lai, S., Huang, J., Wei, X., Chai, Z., Luo, J., & Wei, X. (2021). Rethinking BiSeNet For Real-time Semantic Segmentation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 9711–9720. https://doi.org/10.1109/CVPR46437.2021.00959

[12] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. In arXiv. arXiv. http://arxiv.org/abs/1704.04861

[13] Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., & Brox, T. (2017). FlowNet 2.0: Evolution of optical flow estimation with deep networks. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, 1647–1655. https://doi.org/10.1109/CVPR.2017.179

[14] Souhila, K., & Karim, A. (2007). Optical flow based robot obstacle avoidance. International Journal of Advanced Robotic Systems, 4(1), 2. https://doi.org/10.5772/5715

[15] Cho, G., Kim, J., & Oh, H. (2019). Vision-based obstacle avoidance strategies for MAVs using optical flows in 3-D textured environments. Sensors, 19(11), 2523. https://doi.org/10.3390/s19112523

Christmas is just around the corner, and it’s time for the traditional Christmas video! This year, we wanted to use the AI deck, as we’ve been working hard on this deck for some time now. Showcasing its new features in this festive video seemed like the best idea.

Santa this year needed help to find and get the presents delivered, so he asked for help from Bitcraze! Let’s see how it played out:

I’m sure you’re wondering how we managed to set this up, so let’s discuss how we did it!

Picking up the packages

The goal was to pick up some packages and place them in the sleigh, and it all worked out pretty well. All the flying in the video is scripted using the Python lib, and positioning is done using Qualisys’ motion capture system with Active marker decks. The trajectories are hard coded, and with some careful adjustments we managed to lift and fly 4 Crazyflies attached to the same present, even though it was a bit wobbly from time to time. Getting the present into the sleigh was not as easy, and we might have taken some shortcuts here, as well as when attaching/removing the lines.
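For those curious what “scripted using the Python lib” amounts to in practice, here is a hedged minimal sketch of streaming hard-coded position setpoints with cflib. The URI, coordinates and timing are invented for illustration; the real scripts also handled estimator resets and smooth landings.

```python
import time

import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'   # placeholder address

# Hard-coded waypoints: (x, y, z, yaw, hold time), all values made up
WAYPOINTS = [
    (0.0, 0.0, 0.5, 0.0, 2.0),   # climb above the start position
    (1.0, 0.0, 0.5, 0.0, 3.0),   # move above the present
    (1.0, 0.0, 0.2, 0.0, 3.0),   # descend to hook the line
    (1.5, 0.5, 0.8, 0.0, 3.0),   # carry it towards the sleigh
]

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    cf = scf.cf
    for x, y, z, yaw, hold in WAYPOINTS:
        end = time.time() + hold
        while time.time() < end:
            # Low-level position setpoints must be streamed continuously
            cf.commander.send_position_setpoint(x, y, z, yaw)
            time.sleep(0.1)
    cf.commander.send_stop_setpoint()   # cut motors (in reality: land smoothly)
```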

Sometimes detangling the wires required teamwork

Picking up the second package needed some precision, and it went incredibly well, on the first try! The present was very light and needed someone to hold it, to prevent it from moving when the Crazyflie approached.

Getting the sleigh off

Our first tries included 5 strings attached to the sleigh, but it was difficult to get the right tension at the right time to have the UAVs actually pull the weight. Here came the “rubber band solution”: we just attached all the strings to a rubber band that was itself attached to the sleigh. That way, the tension could even out once all the Crazyflies were in the air and ready to pull the sleigh.

The rubber band

The AI deck camera/streamer

When we started this project, the intention was to run a neural network in the AI-deck to identify or classify the presents. We did not manage to get to a point where we had something that actually added value to the story, so we settled for just streaming the video from the AI-deck camera on the scouting drone instead.

The AI-deck example with color camera viewer is still under development, but if you want to give it a try you can take a look at the readme in the github repo.

Bloopers

And finally, as an added bonus, if you ever wonder how many tries it takes to make 5 Crazyflies pull a sleigh, here is a little behind-the-scenes video too!

Last week was epic. We had 3 days of our online conference, the BAM days – I’m sure you’ve heard of them by now.

We are really happy with how everything went down. During those 3 days, 142 people attended, which is a higher number than we could have expected. The Welkom platform we used was stellar, allowing us to use Mibo rooms for very fruitful discussions after each talk.

Quiz and community Q&A

We took the opportunity to talk to our community, which is something we hadn’t had the chance to do in a long time. Your insights and feedback were greatly appreciated, and we have a lot to think about in the coming weeks on how best to use all the remarks we got.

We also had a short quiz about Bitcraze, and we were quite impressed with how you performed! Interestingly, the hardest question for you was how many decks we sell (it’s 16, if you want to cheat on our hypothetical next quiz).

As I’ve mentioned, after each event we gathered in Mibo rooms. Even though attendance there was not as high as we would have liked, we still got quality time with community members, speakers, collaborators, and even first-timers interested in the Crazyflie. We really love this platform; it makes it feel almost like meeting in real life. We even had some karaoke in Mibo during the closing party (which MAY be a good excuse to end the day for those who listened).

Content

Our external speakers presented a lot of interesting work. It was a great pleasure and honor to welcome every one of them as they explained their latest research. I have to admit that it’s rewarding to see such smart people doing awesome and cool research with our products.

We did our share too, with workshops and demos. Kristoffer’s autonomous demo using distributed consensus required a lot of work but worked perfectly in the end. Here is a small excerpt:

What now?

Now, we’re feeling like everyone feels the day after a party: exhausted, happy, and wondering what to do next. Luckily, we have some plans for that!

If you missed the conference, we created a YouTube playlist where you can watch everything you missed. During the next few days, we’ll update the event page with BAM’s presentations too, so you will have the opportunity to catch up. Some of our workshops will also be turned into tutorials or documentation of some kind, but we’re still cleaning things up.

We are so happy with how everything went that we are already thinking about a future BAM. This one was exceptional, of course, since it was first and foremost a celebration of our 10-year anniversary (and I have to admit that we’re all a little bit tired after 3 intense days), so we’re not going to be able to top that. But we are considering making BAM a fixed point in our agenda (and yours, let’s hope). We don’t know how, we don’t know when, but one thing is sure: BAM is just beginning.

Kimberly on a different continent

On a totally different note, Kimberly is flying to the US this week: if any of you based in America want to take the opportunity to have a more time-zone-appropriate conversation with one of us, you have a few weeks to make it happen!