This week’s blogpost is a bit different from what you are used to reading from us. Usually we talk about cool prototypes, explain bits and pieces of the Bitcraze ecosystem, or let external parties/researchers showcase the awesome work they’ve done with the Crazyflie. Today’s blogpost is about a societal topic that plays a big part in the robotics world: diversity! Bitcraze is helping out with the Diversity Scholarship of this year’s ROSCon, which we’d like to advertise here, complemented by some words about diversity in robotics and how this topic is reflected upon within Bitcraze itself.
Diversity & Robotics
It’s widely acknowledged that the field of robotics lacks diversity. While there have been improvements, several groups remain significantly underrepresented, including women, individuals in LGBTQIA+ communities, people with disabilities, and those from racial and/or ethnic minorities. There are some interesting communities to look into if you are part of these groups yourself. However, if you know of any other interesting ones, of course, let us know.
Beyond these groups, we do not regard ourselves as absolute experts on diversity in robotics, but we have a simple yet interesting statistic to share from our own experience. We regularly receive requests for guest blog posts on our website from external researchers and engineers looking to showcase their work with the Crazyflie. We thought it would be interesting to graph the gender distribution of these guest bloggers:
As you may have noticed, before 2020, all of our guest bloggers were male, and only in recent years has that changed. It’s also worth mentioning that to our knowledge, none of the bloggers has openly identified as anything other than cis-gender male or female. While this shift represents progress, it’s important to acknowledge that there is still room for improvement. Additionally, it is essential to recognize that this tiny statistic does not fully reflect the diversity of the robotics community but rather (perhaps) pertains to a specific subset, such as aerial robotics.
Diversity & Bitcraze
Let me just cut to the chase: Bitcraze is a very small company with currently only 6 full-time employees, and we don’t have any formal policies on hiring and promoting diversity. However, we do have a very open culture within the company, where we can discuss these topics during our coffee breaks without restrictions or judgment. There is a genuine interest in sharing and discussing negative experiences related to the lack of diversity at previous workplaces, so we do talk about it a lot.
In terms of our impact internally and externally: for now, we don’t come across enough hiring opportunities to implement diversity policies. We can perhaps invite more diverse guest bloggers to contribute to our website, or make our developer meetings more welcoming. However, the influence that a company our size can exert here is limited. Therefore, supporting the communities we love in improving diversity is perhaps the biggest contribution we can make to this cause.
We are already involved in the ROS community by helping out with the ROS aerial community working group (blogpost1, blogpost2), and we loved the atmosphere at ROSCon when we were in Kyoto. When the opportunity arose to co-chair the diversity committee of ROSCon 2024, together with Belén Torres from Wymaq, we gladly took it, hoping that this is where we can make more of a difference.
Diversity Scholarship at ROSCon 2024
This year’s ROSCon will be held in Odense, Denmark, between October 21st and 23rd. The ROSCon organization has offered a diversity scholarship since 2016, and this year’s event is expected to be the biggest one yet. Individuals belonging to the underrepresented groups in robotics mentioned earlier are invited to apply for the scholarship. The deadline is April 5th, so please don’t wait too long to apply. Check here for the ROS discourse post and here for the diversity scholarship application on the ROSCon website.
We have noticed that some of our newer users struggle with understanding the concept of Kalman filtering, depending on whether it was covered in their curriculum. And for more experienced users, it might be nice to have a recap of the basics as well, since it is a very important part of the Crazyflie’s flight capabilities (and of robotics in general). So, in this blog post, we will explain the principles of Kalman filtering and how it is applied within the Crazyflie firmware, which hopefully will provide a good base for anyone starting to delve into state estimation on the Crazyflie.
Anybody remotely working with autonomous systems must, at some point, have heard of the Kalman filter, as it has existed since the 60s and even played a role in the Apollo program. Understanding its main principles is also important for anyone working with drones or robotics. There are plenty of resources available, and its Wikipedia page is filled with examples, so here we will focus mostly on the concept and principles and leave the bulk of the mathematics as an exercise for those who like to delve into that :).
So basically, there are several principles that apply to a Kalman filter:
It estimates the state of a linear system driven by stochastic processes. The probability distributions driving these stochastic processes should ideally be Gaussian.
It makes use of Bayes’ rule, a general principle in statistics that describes the probability of an event based on prior knowledge related to that event.
It assumes that the ‘to be estimated state’ can be described with a Markov model, which assumes that the next event (or scenario) can be predicted from the current event alone. In other words, it does not need the full history of events to predict the next step(s), only the information from one previous step.
A Kalman filter is described as a recursive filter, which means that it reuses (part of) its output as input for the next filtering step.
So the state estimate is usually a vector of the variables that the developer or user of the system wants to observe, for either control or prediction, such as position and velocity: [x, y, ẋ, ẏ, …]. One can describe a dynamics model that predicts the state at the next step using only the state of the current time step, for instance: xt+1 = xt + ẋt, yt+1 = yt + ẏt. This can also be nicely described in matrix form, if you like linear algebra. To this model, you can also add predicted noise to make it more realistic, or the effect of the input commands to the system (like voltage to the motors). We will not go into the latter in this blogpost.
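To make this a bit more concrete, here is a minimal numpy sketch of such a state vector and dynamics model in matrix form. The time step and values are purely illustrative, and this is not code from the Crazyflie firmware:

```python
import numpy as np

dt = 0.01  # time step in seconds (illustrative value)

# State vector: [x, y, x_dot, y_dot]
state = np.array([0.0, 0.0, 0.2, 0.1])

# Dynamics (state transition) model in matrix form:
# x_{t+1} = x_t + x_dot_t * dt, and likewise for y;
# the velocities are assumed constant between steps.
A = np.array([
    [1.0, 0.0, dt,  0.0],
    [0.0, 1.0, 0.0, dt ],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Predict the next state from the current one only (the Markov property)
next_state = A @ state
```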
The Concept of Kalman filters
We will now go through the steps of the Kalman filter, which hopefully will be clear with the above picture. As mentioned before, we’d like to avoid formulas and are oversimplifying some parts to make it as clear as possible (hopefully…).
First, there is the predict phase, where the current state estimate and a dynamics model (also known as the state transition model) produce a predicted state. In the same phase, the predicted covariance is calculated, which uses the dynamics model plus a process noise model indicating how much the dynamics model deviates from reality when predicting that state. In an ideal world with an ideal model, this could be enough; however, no dynamics model is perfect, which is why the next phase is just as important.
Then comes the update phase, where the filter estimate gets updated with a measurement of the real world through sensors. The measurement needs to go through a measurement model, which transforms it into a measured state. Usually, a measurement is not a one-to-one depiction of one variable of the state, so the measurement model ensures that the measurement can properly be compared to the predicted state. This measurement model, together with the measurement noise model (which indicates how much the measurement deviates from the real world) and the predicted covariance, is used to calculate the innovation (the difference between the measured and predicted state, also known as the measurement pre-fit residual) and the Kalman gain.
In the last part of the update phase, the prediction is corrected with the innovation. The Kalman gain is used to blend the predicted state and the measured state into a new estimated state. The same Kalman gain is also used to update the covariance, which can be used for the next time step.
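For those who prefer reading code over diagrams, here is a hedged sketch of one predict/update cycle for a single scalar state. It is a deliberate oversimplification for illustration, not the firmware implementation:

```python
def predict(x_est, p_est, x_dot, dt, q):
    """Predict phase: propagate the state and covariance with the dynamics model.
    q is the process noise variance (how much we distrust the model)."""
    x_pred = x_est + x_dot * dt   # simple constant-velocity dynamics model
    p_pred = p_est + q            # covariance grows by the process noise
    return x_pred, p_pred

def update(x_pred, p_pred, z_meas, r):
    """Update phase: correct the prediction with a measurement.
    r is the measurement noise variance (how much we distrust the sensor)."""
    innovation = z_meas - x_pred       # measured state minus predicted state
    k = p_pred / (p_pred + r)          # Kalman gain: 0 = trust model, 1 = trust sensor
    x_new = x_pred + k * innovation    # new state estimate
    p_new = (1.0 - k) * p_pred         # covariance shrinks after the update
    return x_new, p_new
```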
A 1D example, height estimation
It’s always good to show the filter in some form of example, so let’s go through a simple one, height estimation, to demonstrate how it works.
Here you see a Crazyflie flying; at this moment its height estimate is zt and its velocity estimate żt. In the predict phase, it predicts the next height to be zt+1,predict, using the simple model zt + żt. Then, for the innovation and update phase, a measurement rz from a range sensor is used, which is translated to zt+1,meas. In this case, when flying over a flat surface, the measurement model is very simple: probably only a translation from the sensor to the center of the Crazyflie, or perhaps a compensation for a roll or pitch rotation.
In the background, the covariances are updated and the Kalman gain is calculated, and based on zt+1,predict and zt+1,meas, the next state estimate zt+1 is calculated. As you probably noticed, there was a discrepancy between the predicted and measured height, which could be because the dynamics model couldn’t correctly predict the height. Perhaps a PID gain was higher than expected, or the Crazyflie had upgraded motors that made it climb faster on takeoff. As you can see, the filter put the estimated height zt+1 closer to the measurement than to the prediction: the measurement noise model incorporated into the covariances indicates that the height sensor is more accurate than the height coming from the dynamics model. This would very well be the case for an infrared height sensor like the one on the Flow Deck; however, if it were an ultrasound-based sensor or a barometer instead (which are much noisier), the estimated height would end up closer to the one predicted by the dynamics model.
Also, it’s good to note that the dynamics model currently does not include the motor input, but it could have done so as well. In that case, it would have been better able to predict the jump it now missed.
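To put some numbers on this height example, here is what the fusion could look like with an accurate range sensor versus a noisy barometer. All values are made up for illustration:

```python
# Illustrative numbers for the height example (not from real flight data):
z_pred = 0.90   # height predicted by the dynamics model [m]
z_meas = 1.10   # height from the range sensor measurement model [m]
p_pred = 0.04   # predicted variance of the model
r_tof  = 0.01   # measurement noise variance of an accurate IR range sensor

k = p_pred / (p_pred + r_tof)            # Kalman gain = 0.8, the sensor is trusted more
z_new = z_pred + k * (z_meas - z_pred)   # 0.90 + 0.8 * 0.20 = 1.06 m, close to the measurement

# With a noisier sensor (e.g. a barometer) the estimate leans toward the model:
r_baro = 0.16
k_baro = p_pred / (p_pred + r_baro)           # gain = 0.2
z_baro = z_pred + k_baro * (z_meas - z_pred)  # 0.94 m, close to the prediction
```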
A 2D example, horizontal position
Let’s take it up a notch and add an extra dimension. Here you see a 2D situation with the Crazyflie moving horizontally. It is at position xt, yt with velocity ẋt, ẏt at that moment in time. The dynamics model predicts that the Crazyflie will end up in the general direction of the velocity vector, so it is a simple addition of the current position and the velocity vector. If the Crazyflie has a flow sensor (like on the Flow Deck), flow fx, fy can be detected and translated by the measurement model into a measured velocity (part of the state), by combining it with a height measurement and the camera characteristics.
However, the measured flow fx, fy indicates much more flow in the x-direction than in the y-direction. This can be due to a sudden wind gust in the y-direction, which the dynamics model couldn’t predict, or because there weren’t as many features on the surface in the y-direction, making it more difficult for the flow sensor to measure the flow in that direction. Since neither model can account for this, the filter will, based on the Kalman gain and covariances, put the estimate somewhere in between. This is of course dependent on the estimated covariances of both the measurement and the dynamics model.
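As a rough illustration, a simplified flow measurement model could look like the sketch below. The camera constants here are illustrative placeholders rather than datasheet values, and the real model in the firmware (mm_flow.c) also compensates for rotation rates:

```python
import numpy as np

# Simplified flow measurement model: the flow sensor reports pixel motion per
# frame; combined with the measured height and the camera geometry, this maps
# to a horizontal velocity. Constants below are illustrative assumptions.
N_PIXELS = 35                # resolution of the flow camera (illustrative)
FOV_RAD = np.radians(42.0)   # field of view (illustrative)
dt = 0.01                    # time between frames [s]

def flow_to_velocity(flow_px, height_m):
    """Convert pixel flow per frame to meters per second at a given height."""
    angle_per_pixel = FOV_RAD / N_PIXELS
    return flow_px * angle_per_pixel * height_m / dt

vx_meas = flow_to_velocity(flow_px=0.5, height_m=0.5)
vy_meas = flow_to_velocity(flow_px=0.1, height_m=0.5)  # fewer features, less flow detected
```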
In case of non-linearity
It would be much simpler if all the world’s processes could be described with linear systems and Gaussian distributions, but the world is complex, so that is rarely the case. We can abstract away parts of the world in simulation, and Kalman filters can handle that, but a real flying vehicle such as the Crazyflie is a highly nonlinear system that needs to be described by a nonlinear dynamics model. Additionally, in more complex 3D situations, sensor measurements usually don’t have a one-to-one linear relationship with the variables in the state. Can you still use the Kalman filter then, given the earlier mentioned principles?
Luckily, certain approximations can be made that keep Kalman filters useful for nonlinear systems.
Extended Kalman Filter (EKF): if there is non-linearity in the dynamics model, the measurement models, or both, these models are linearized around the current state at each predict and update step by calculating the Jacobian, which is a collection of first-order partial derivatives of the model with respect to the state variables.
Unscented Kalman Filter (UKF): an unscented Kalman filter deals with non-linearities by selecting sigma points around the mean of the state estimate, which are propagated through the non-linear dynamics model.
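To make the EKF idea a bit more tangible, here is a small sketch that numerically approximates the Jacobian of a toy non-linear dynamics model. The model itself is made up for illustration and has nothing to do with the firmware:

```python
import numpy as np

def f(state):
    """A toy non-linear dynamics model: position, heading and forward speed."""
    x, y, yaw, v = state
    dt = 0.01
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw,
                     v])

def jacobian(f, state, eps=1e-6):
    """First-order partial derivatives of f around the current state,
    approximated numerically. This linearization is what an EKF
    recomputes at every predict/update step."""
    n = len(state)
    J = np.zeros((n, n))
    fx = f(state)
    for i in range(n):
        perturbed = state.copy()
        perturbed[i] += eps
        J[:, i] = (f(perturbed) - fx) / eps
    return J

A_linearized = jacobian(f, np.array([0.0, 0.0, 0.3, 0.5]))
```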
However, there is also the case of non-Gaussian processes in both dynamics and measurements; in that case, a complementary filter or particle filter may be better suited. The Crazyflie firmware contains a complementary filter (which does not estimate x and y), an extended Kalman filter, and an experimental unscented Kalman filter. Check out the state-estimation documentation for more information.
So…. where is the code?
This is all fine and dandy, but… where can you find all of this in the code of the Crazyflie firmware? Here is an overview of where to find the relevant pieces for the most used filter of them all, the Extended Kalman Filter:
Initialization of the state and variances: kalmanCoreInit() in kalman_core.c
Prediction step with the dynamics model: predictDt() in kalman_core.c
Innovation and update of the covariance with the measurement update: kalmanCoreScalarUpdate() in kalman_core.c
All measurement models can be found in separate files in the kalman_core/ folder
The height measurement model for a ToF range sensor, like in the 1D example: kalmanCoreUpdateWithTof() in mm_tof.c
The flow measurement model for the flow sensor like in the 2D example: kalmanCoreUpdateWithFlow() in mm_flow.c
Finalizing the state (by rotating all the state variables into the correct orientation): kalmanCoreFinalize() in kalman_core.c
Several assumptions and adjustments have been made to the regular EKF implementation to make it suitable for flight on the Crazyflie. For those details, I’d like to refer to the papers this implementation is based on, which can be found in the EKF documentation. For a more precise explanation of Kalman filters, please check out Stanford University’s lecture slides on linear dynamical systems or Linköping University’s course slides on sensor fusion.
Update: from the comments we were also notified of a nice EKF tutorial where you write the filter from scratch (github), by Prof. Simon D. Levy from Washington and Lee University. Practice makes perfect!
Also, Kimberly and Arnaud will be attending FOSDEM this weekend in Brussels, Belgium. We are hoping to organize an open-source robotics BOF/meetup there, so please let us know if you are planning to go as well!
A few years ago, we wrote a blogpost about the Commander framework, where we explained how the setpoint structure drives the controller of the Crazyflie, an essential part of the stabilization module. Basically, without setpoints there would not be any autonomy on the Crazyflie, let alone manual flight.
However, we notice that there is sometimes confusion regarding these different functionalities: what exactly sends which setpoints, and how. These details might not be crucial when using just one Crazyflie, but they become more significant when managing multiple drones, where understanding how often your computer needs to send setpoints becomes crucial. Therefore, this blog post aims to explain this aspect more clearly.
Sending setpoints directly from the CFlib
Let’s start at the lowest level, sending from the computer. It is possible to send various types of setpoints directly from a Python script using the Crazyflie Python library (CFlib for short). This capability extends to tasks such as manual control:
If you use these functions in a script, the principle is quite basic: the Crazyradio sends exactly one packet with this setpoint over the air to the Crazyflie, which will act upon it. There are no secret threads opening in the background, and nothing magical happens on the Crazyflie either. The challenge is that if your script doesn’t send an updated setpoint within a certain amount of time (2 seconds by default), a timeout will occur and the Crazyflie will drop out of the sky. Therefore, you need to send setpoints at regular intervals, for example in a for loop, to keep the Crazyflie flying. This is something you need to take care of in the script.
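As an illustration, a minimal script could look like the sketch below. The URI is an assumption you’d need to adjust to your own setup, and the hover setpoint assumes a height estimate from, for example, a Flow deck:

```python
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your Crazyflie's address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    # Each call sends exactly one packet; keep sending within the timeout
    # window (2 s by default) or the Crazyflie will cut its motors.
    for _ in range(50):
        scf.cf.commander.send_hover_setpoint(0.0, 0.0, 0.0, 0.4)  # hover at 0.4 m
        time.sleep(0.1)
    scf.cf.commander.send_stop_setpoint()
```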
Example scripts in the CFlib that are sending setpoints directly:
Another way to handle the regular sending of setpoints automatically in the CFlib is through the Motion Commander class. By initializing a Motion Commander object (usually via a context manager), a thread is started at takeoff that continuously sends (velocity) setpoints at a fixed rate. These setpoints can then be updated by the following functions, for instance, moving forward with blocking:
forward(distance)
or giving a body-fixed velocity setpoint update (which returns immediately):
start_linear_motion(vx, vy, vz, rate_yaw)
You can check the Motion Commander’s API-generated documentation for more functions that can be utilized. Since a background thread consistently sends setpoints to the Crazyflie, no timeout will occur, and you only need to call one of these functions for each ‘behavior update’. The thread is closed as soon as the Crazyflie lands again.
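A minimal sketch of how this could look, again with an assumed URI, and a Flow deck or other positioning for the height estimate:

```python
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your setup

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    # Entering the context manager takes off and starts the background
    # thread that keeps sending setpoints, so no timeout will occur.
    with MotionCommander(scf, default_height=0.5) as mc:
        mc.forward(0.3)                        # blocking call
        mc.start_linear_motion(0.2, 0.0, 0.0)  # returns immediately
        time.sleep(1.0)
        mc.stop()
    # Leaving the context manager lands and stops the thread.
```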
Here are example scripts in the CFlib that use the motion commander class:
Setpoint handling through the high-level commander
Prior to this, all logic and setpoint handling occurred on the PC side. Whether sending setpoints directly or using the Motion Commander class, there was a continuous stream of setpoint packets through the air for every movement the Crazyflie made. But what if the Crazyflie misses one of these packets? And how does this stream scale to many Crazyflies, especially in swarms where bandwidth becomes a critical factor?
This challenge led the developers of the Crazyswarm project (now Crazyswarm2) to implement more planning autonomy directly on the Crazyflie itself, in the form of the high-level commander. With the high-level commander, you simply send one higher-level command to the Crazyflie, and the intermediate substeps (setpoints) are generated onboard. This can be a regular takeoff:
take_off(height)
or go to a certain position in space:
go_to(x, y)
This can be accomplished either with the PositionHLCommander, which can be used as a context manager similar to the Motion Commander (but without the Python threading), or by directly calling the functions of the high-level commander. You can refer to the automated API documentation for the available functions of the PositionHLCommander class or the high-level commander class.
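A minimal sketch using the PositionHLCommander could look like this, assuming an absolute positioning system and, as before, adjusting the URI to your setup:

```python
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHLCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your setup

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    # Each call below results in a single high-level command packet;
    # the intermediate setpoints are generated onboard the Crazyflie.
    with PositionHLCommander(scf, default_height=0.5) as pc:
        pc.go_to(0.5, 0.5)       # x, y in the absolute coordinate frame
        pc.go_to(0.0, 0.0, 1.0)  # optionally with a z coordinate
    # Leaving the context manager triggers the onboard landing sequence.
```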
Here are examples in the CFlib using either of these classes:
Considering the various options available in the Crazyflie Python library, it’s essential to realize that these setpoint-setting choices, whether direct or through the High-Level Commander, can also be configured through the app layer onboard the Crazyflie itself. You can find examples of these app layer configurations in the Crazyflie firmware repository.
It’s important to note some discrepancies regarding the Motion Commander class, which was designed with the Flow Deck (relative positioning) in mind. Consequently, it lacks a ‘go to this position’ equivalent. For such tasks, you may need to use the lower-level send_position_setpoint() function of the regular Commander class (see this ticket). The same applies to the high-level commander, which was primarily designed for absolute positioning systems and lacks a ‘go forward with x m/s’ equivalent. Currently, there is no way to achieve this at a lower level from the Crazyflie Python library, as the functionality needs to be implemented in the Crazyflie firmware first (see this ticket). It would be beneficial to align these functionalities on both the CFlib and high-level commander sides at some point in the future.
Hopefully this helps explain the commander framework in more detail, and where the real autonomy of the Crazyflie lies when you use the different commander classes. If you have any questions about what the Crazyflie can do with these, we advise you to ask on discussions.bitcraze.io, and we will try to point you in the right direction and give examples!
Before we start settling down and preparing for Christmas, it’s time for another release! The last one was before the summer, in July, and we’ve had quite a few changes on the development master branch that we’d like to share. You can now download the latest CFclient through pip and update the Crazyflie firmware to 2023.11 via the CFclient.
Latest changes in the CFclient and CFlib
The most significant change in the CFclient is that we have finally transitioned from Qt5 to Qt6 for the GUI. Additionally, we have addressed some issues with the toolboxes. Finally, we have added an information box indicating the state of the supervisor, such as whether the Crazyflie is considered tumbled, is flying, or requires a restart because it is locked.
For the backend, namely the Crazyflie Python library, some important changes have been implemented. Along with fixes to the parameter and logging framework, full-state setpoints have been introduced. This feature has existed in the firmware for a while due to the Crazyswarm1 project (now Crazyswarm2), but it wasn’t implemented in the CFlib until now. Additionally, it’s now necessary to use notify_setpoint_stop when switching between high-level setpoints and regular position setpoints. There is also a generic motion capture example, based on the libmotioncapture library.
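As a small hedged illustration of that switch, something like the sketch below could work, where cf is assumed to be a connected Crazyflie instance:

```python
def switch_to_high_level(cf):
    """cf: a connected cflib Crazyflie instance that has been receiving
    low-level setpoints. Tell it the setpoint stream is ending so the
    high-level commander can take over without a priority conflict."""
    cf.commander.send_notify_setpoint_stop(remain_valid_milliseconds=100)
    cf.high_level_commander.go_to(0.0, 0.0, 0.5, 0.0, 2.0)  # x, y, z, yaw, duration
```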
Note that even though the CFclient has been converted to Qt6, several examples in the CFlib folder have not been updated yet. This will be fixed soon; a ticket has been created for it. Additionally, in the Bitcraze VM, there have been some reported issues with Qt6 (see this ticket).
Latest changes in the firmware
The firmware has undergone some important changes too. On the STM side of things, the hybrid TDOA mode has been merged (check out this recent blog post). This feature is still considered experimental, so please refer to the documentation for the right settings. Additionally, support for the supervisor information box in the CFclient has been added; to utilize it, both the firmware and the CFclient need to be updated. There is also a new example demonstrating communication with the GAP8 through CPX. Last but not least, it is now possible to create Python bindings for portions of the Kalman filter, mainly for the Loco positioning system. The NRF firmware, on the other hand, has no added functionality except for some build changes and fixes.
Crazyradio2 + LPS tools
We’ve also made some improvements to other firmware and tools, starting with the Crazyradio2, which includes fixes for broadcasting (important for you Crazyswarm2 folks!). We also aimed to make a new release of the LPS tools, since we heard that people were experiencing issues with USB devices. Unfortunately, there are some problems with the GitHub release actions, so that will likely be delayed. Anyone facing USB issues can install the LPS tools from source with Python, following the ReadMe’s instructions.
As we announced in last week’s Monday blog post, we will be having a developer meeting this Wednesday (6th Dec, 3 pm CET) regarding the Flow deck (refer to this discussion thread for joining information). Since we usually don’t fill the entire hour, the last part of the developer meeting is available for generic support questions face-to-face (online), including questions about the release!
The Flow deck has been around for some time already, officially released in 2017 (see this blog post), and the Flow deck v2 was released in 2018 with an improved range sensor. Compared to MoCap positioning and the Loco Positioning System (based on ultra-wideband), which were already possible before, optical-flow-based positioning opened up many more possibilities for the Crazyflie. Flight was no longer confined to lab environments with external systems set up; people could bring the Crazyflie home and do their hacking there. Moreover, it enabled research into exploration techniques that cannot rely on external positioning systems. For example, back in my day as a PhD student, I relied heavily on the Flow deck for multi-Crazyflie autonomous exploration. This would have been very difficult without it.
However, despite the numerous benefits the Flow deck provides, there are also several limitations, which may not be immediately apparent before purchasing a Crazyflie with a Flow deck. A while ago, we wrote a blog post about positioning systems in general, and even delved into the Loco Positioning System in detail. In this blog post, we will explore the theory of how the Flow deck enables the Crazyflie to fly, share general tips and tricks for ensuring stable flight, and highlight what to avoid. Moreover, we aim to make the Flow deck the focus of next week’s developer meeting, with the goal of improving or clarifying its performance further.
Theory of the Flow deck
I won’t delve into too much detail, but will give a general indication of how the Flow deck works. As previously explained in the positioning system blog post, the Flow deck is a relative positioning system with onboard estimation. “Relative” means that wherever you start is the (0, 0, 0) position. The extended Kalman filter processes flow and height information to determine velocity, which is then integrated to estimate the position: essentially dead reckoning. The onboard Kalman filter manages this process, enabling the Crazyflie to use the information for stable hovering.
The optical flow sensor (PMW3901) calculates pixel flow per frame (this old blog post explains it well), and the IR range sensor (VL53L1x) measures height up to 4 meters (under ideal conditions). The Kalman filter incorporates a measurement model that describes the relationship between these two values and the velocity of the Crazyflie. More detailed information can be found in the state estimation documentation. This capability allows the Crazyflie to hover, as explained in the getting started tutorial.
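As a grossly simplified sketch of the dead reckoning idea (the real firmware does this inside the EKF, see the documentation linked above), with purely illustrative numbers:

```python
# Grossly simplified dead reckoning: integrate flow-derived velocity into a
# position estimate. The start position is always (0, 0), i.e. "relative"
# positioning. All values below are illustrative.
dt = 0.01
x, y = 0.0, 0.0

def dead_reckon_step(x, y, vx_meas, vy_meas):
    """Integrate the flow-derived velocity into a position estimate.
    Any small velocity error accumulates over time, which is why the
    estimate slowly drifts and why texture and lighting quality matter."""
    return x + vx_meas * dt, y + vy_meas * dt

for _ in range(100):
    # True vx is 0.20 m/s; the 0.01 m/s measurement bias accumulates as drift.
    x, y = dead_reckon_step(x, y, vx_meas=0.21, vy_meas=0.0)
```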
Tips & Tricks and Limitations
If you want to fly with the Crazyflie and the Flow deck, there are a couple of things to keep in mind:
Take off from a floor with texture. Natural texture like wood flooring is probably the best.
The floor shouldn’t be too shiny, and be aware of infrared scattering for the height sensor.
The room should be well-lit, as the sensor needs to see the texture.
There are certain situations that the Flow deck has some issues with:
Low or no texture: flying above something that is only one plain color.
Black areas: similar to flying above no texture, but even more difficult. Especially at startup, the position estimate can diverge.
Low light conditions
Flying over its own shadow
We made a video that shows these types of behaviors, starting of course with the most ideal flying conditions:
Moreover, it is also important not to fly too high or yaw too often. The latter will make the Crazyflie drift, as the optical flow caused by the yaw movement cannot be distinguished from flow caused by translation.
Developer meeting about Flow deck
We believe that many of the issues people experience are primarily due to the invisibility of the positioning quality. In many of our examples, the Crazyflie will not take off until the position estimate is stable. However, we don’t have a corresponding functionality in the CFclient, so it is up to the user to recognize when the position estimate is diverging. There is a lot of room for improvement in this regard.
This is the reason why the next developer meeting will specifically focus on the Flow deck, on Wednesday the 6th of December, 3 pm Central European Time. During the meeting, we will explain more about the Flow deck, discuss the issues we are facing, and explore ways to enhance the visibility of positioning quality. Check out this discussion thread for information on how to join.
It seems that many of you are very interested in simulation. We might have gotten the hint when we noticed that our July developer meeting had our best attendance so far! Therefore, we are planning a new developer meeting to discuss the upcoming plans for supporting simulation for the Crazyflie.
Getting Started with Simulation tutorial
Perhaps you are not aware, but there is actually a Getting Started tutorial for simulation that has been available for a little over 2 months now. Unfortunately, circumstances prevented us from writing a blog post about it, but we’ve noticed that not all of you are aware of it yet!
The getting-started tutorial demonstrates how to set up the Webots simulator, which already includes Crazyflie models and some cool examples:
An example where you can control the Crazyflie with the keyboard
An example where the Crazyflie performs wall following autonomously
The tutorial concludes with instructions on how to edit these controllers. Alternatively, you can choose to run the files directly from the crazyflie-simulation repository. After completing the tutorial, you can explore the simulation repository documentation for more information and to access additional examples.
Upcoming plans
With so many plans and so little time! This is a common phrase at Bitcraze, and it’s a symptom of being an overly ambitious, but too small, team. By the way, we are still looking for more people :). Nonetheless, we have big plans to take our Crazyflie simulation to the next level:
ROS 2 Crazyflie model for Webots: The Crazyflie has been a part of the Webots standard robots for 2 years now, but we still need to implement the Crazyflie into the Webots ROS 2 repository.
Better (new) Gazebo support: Currently, we only have a very simple example for Gazebo, which is limited to motors with no control input. Working with the C++ API can be a bit challenging, so it might be worth considering the use of ROS 2 in the loop here. Let’s see what comes out of it.
Integration into Crazyswarm2: Once the Webots ROS2 node has been released, integrating the Crazyflie simulation into Crazyswarm2 will become more straightforward.
Improvement to the Python bindings: We’ve had Python bindings for controllers and the high-level commander for a while. Recently, we also added Python bindings for the estimator (currently for loco positioning only). However, there are still some issues to address with the Python bindings for the controllers due to timing issues with the simulators.
Linking with our CFlib: currently, Webots and the Crazyflie Python library use entirely different APIs, which means that scripts are not compatible and code cannot easily be reused between them. Wouldn’t it be nice to run a Python example from the library with a --sim flag, and have it control the Crazyflie in the simulator instead?
Of course, there are probably more improvements that we haven’t thought of yet, but that’s why we have developer meetings!
Come and join us at the Developer meeting.
We will be hosting another developer meeting on November 1st at 15:00 Central European Time (accounting for the time-shift from summer to autumn). You can find details on how to join in the discussion thread here.
Just for your information, I (Kimberly) am the main driving force behind our simulation efforts. However, I’m currently on partial sick leave and will soon be on full leave for a while. I kindly ask for your patience with the pace of ongoing developments. Remember, it’s an open-source project, so if you’d like to contribute and help out, we would greatly appreciate it :)
Around March/April of this year, we started with both Bitcraze developer meetings and Aerial-ROS meetings (the latter in collaboration with the Dronecode Foundation). Now that summer is here and our office is a bit empty, we took a bit of a summer break; however, we will start the meetings up again soon! The next ROS-aerial meeting will be on the 16th of August, and we also have a Bitcraze developer meeting planned for Wednesday the 6th of September (keep an eye on our announcements in discussions). In this blogpost we’d like to take the opportunity to give an overview of the meetings we have had so far.
Aerial ROS meetings
In March we started a ROS community working group for aerial vehicles together with our friends at the Dronecode Foundation, aka Aerial-ROS! We have biweekly meetings, alternating between standard discussion meetings (with a topic) and invited guest presentations.
We had already had a couple of developer meetings before, but we have been recording them since April. The first recorded one was about the Loco positioning system: first we gave a presentation about the system itself, with the latest developments cooking in our pot, and time for questions afterwards.
Then we had a meeting about the development of safety features in the Crazyflie in light of the Bolt developments:
Then we had a meeting where Kristoffer highlighted the autonomous swarm demo we showed at ICRA 2023.
And in the last one before the summer holiday, Kimberly explained the Crazyflie simulation model integrated into Webots.
We are still planning to have a developer meeting every first Wednesday of the month, starting with September 6th (keep an eye on our announcements in discussions).
EPFL 101 Crazyflie presentation
Oh yeah, by the way, we were also invited by the EPFL-LIS lab to give another Crazyflie 101 presentation in Lausanne last April! We made a prerecording of it, so you can check it out right here:
In our ROS-aerial community working group, we had a meeting a few weeks ago to discuss education and tutorials within aerial robotics (see the ROS discourse thread here). The general conclusion was that there should be more courses and tutorials, since the learning curve is too steep. But… is that actually the case? Judging by the suggestions on a LinkedIn post by Kimberly asking for input, that might not be true: there are loads of tutorials out there! So in this blog post, we will provide an overview of the suggested tutorials and the ones that have materials available online.
Online books
One of the first suggestions was to explore the free online book titled ‘Small Unmanned Aircraft: Theory and Practice’. This book was written by Randy Beard and Tim McLain of Brigham Young University, and it covers everything from the absolute basics of coordinate frames and quadrotor dynamics to path planning and cameras. It is a must-read for anybody starting in UAVs and aerial robotics.
Here are some suggestions for courses specifically focused on Aerial Robotics. These received the most recommendations! Many universities have made their courses available online, accessible to anyone interested.
Coursera offers the ‘Robotics: Aerial Robotics’ course as part of the Robotics specialization. Taught by Prof. Vijay Kumar from the University of Pennsylvania, this 4-week course covers the mechanics and control of aerial vehicles using Matlab, starting from 1 dimension and gradually progressing to the 3rd dimension in simulation. The course is part of a paid educational program, but you can audit the lessons for free.
Udacity has been offering a course on Aerial Vehicles for quite some time. The lessons are taught by top names in the industry and cover key aspects of Aerial Robotics, such as motion planning, controls, and estimation, with lab assignments involving a real drone. The course duration is 4 months, and access is available for a fee.
The University of Maryland offers a course on Autonomous Aerial Robotics, making all videos, slides, and assignments available. Taught by Nitin J. Sanket and Chahat Deep Singh, the course covers everything from basic control and dynamics to full autonomy. It’s a comprehensive resource for aerial robotics. The course utilizes the Parrot Bebop 2.0, and while a Mocap system is required, you may explore the possibility of adapting the course to a different platform.
Additionally, there’s the course ‘Applied Control System 3: UAV Drone (3D Dynamics & Control)’ which is part of a series by Mark Misin. This course delves deep into the dynamics, control, and modeling of quadrotors.
Here are some suggestions for courses that focus on robotics but utilize UAVs/drones to demonstrate the implementation of the studied materials.
‘Visual Navigation For Autonomous Vehicles’ is a course available on MIT Open Courseware, taught by Prof. Luca Carlone. As the name implies, the course primarily focuses on autonomous navigation for any autonomous vehicle. It includes exercises where students implement vision algorithms on both ground robots and drones. Additionally, the course covers working with ROS and applying the knowledge to a simulated drone in Unity.
The ‘Bio-inspired Robotics’ course at the University of Washington, led by Prof. Sawyer Fuller, explores the realm of drawing inspiration from nature rather than reinventing the wheel. It covers various robots inspired by creatures capable of swimming, walking, hopping, and of course, flying. Lab assignments in this course involve working with a Crazyflie drone.
Brown University offers a course called ‘Introduction to Robotics’, taught by Prof. Stefanie Tellex. While the introduction covers generic robotics, the focus of the full course is on building and programming the Duckiedrone. The course dives straight into autonomy and also teaches students how to work with ROS.
Princeton University (see this blogpost) has also decided to release its ‘Intro to Robotics’ lectures and materials to the public. Can’t believe I forgot this one!
If you’d like to start hands-on right away, here are a couple of suggestions for YouTube tutorials or series about aerial robotics.
Drone Programming with Python: This popular tutorial/course teaches viewers how to program a real drone using Python with the DJI Tello. It offers a great opportunity for anyone looking for a short and enjoyable project to undertake, especially on a rainy day, while still working with a real platform.
Intelligent Quads YouTube Channel: This channel is entirely dedicated to creating autonomous UAVs, covering topics from Ardupilot to MAVlink to ROS and Gazebo. It appears to be a valuable resource for beginners in the field of autonomous UAVs.
There are some extra resources for you to take a look at as well.
University of Twente UAV Centre: The University of Twente has created a portal with a variety of UAV-related courses. You can find a wealth of information and educational materials on their website. Link: https://www.itc.nl/facilities/centres-of-expertise/uav-centre/
Self-Driving Car Specialization: If you are interested in learning more about SLAM (Simultaneous Localization and Mapping) and sensors, this specialization is tailored for self-driving cars but the theory can be useful for drones as well. Link: https://www.coursera.org/specializations/self-driving-cars
Autonomous Navigation for Flying Robots: This older course is still highly relevant for anyone interested in autonomous navigation for flying robots. It offers valuable insights and knowledge. Link: https://www.edx.org/course/autonomous-navigation-for-flying-robots
Drone Dojo: For those looking to build their own drones, Drone Dojo provides useful instructions and courses to get started on DIY drone projects. Link: https://dojofordrones.com/
Indeed, it appears that there are plenty of courses and tutorials available for people interested in getting started with aerial robotics. The range of resources is vast, and we might still be missing some, which could lead to a part 2 of this blog post in the future! Perhaps we would also need to delve into these to see why the learning curve is considered steep; then again, aerial robotics is not an easy subject, so perhaps it is good to start from the basics. Nevertheless, this compilation should provide a solid starting point for anyone eager to delve into the world of aerial robotics. A major thank you to everyone who has contributed so far (linked in the original LinkedIn post); your valuable input has made this possible!
As you may have noticed from the recent blog posts, we were very excited about ICRA London 2023! And it seems that we had every right to be, as this conference had the highest number of Crazyflie-related papers of all the robotics conferences so far! In the past, conferences typically had between 13-16 papers, but this time… BOOM! 28 papers! In this blog post, we will provide a list of these papers and give a general evaluation of the topics and themes covered so far.
So here are some stats:
ICRA had 1655 papers accepted (43% acceptance rate)
28 Crazyflie papers (25 proceedings, 1 RA-L, 1 RO-L, 1 late-breaking-results poster)
We haven’t included the workshop papers this time (no time)
The major topics we discovered were swarm coordination, safe trajectory planning, efficient autonomy, and onboard processing
Additionally, we came across a few notable posters, including one about a grappling hook for the Crazyflie [26], a human suit that allows for drone control [5], the Bolt made into a monocopter with a Jetson companion [16], and a flexible fixed-wing platform driven by a barebone Crazyflie [1]. We also observed a growing interest in aerial robotics with approximately 10% of all sessions dedicated to UAVs. Interestingly, 18 out of the 28 Crazyflie papers were presented in non-UAV specialized sessions, such as multi-robot systems and vision-based navigation.
Swarm coordination
Swarms were a hot topic at ICRA 2023, as already noted in this tweet by Ramon Roche. We had over 10 papers dedicated to the topic, including one that involved 16 Crazyflies [9]. Surprisingly, more than half of the papers utilized multiple Crazyflies. This already sets a different landscape compared to IROS 2022, where autonomous navigation took center stage.
At IROS 2022, we witnessed single-drone gas mapping using a Crazyflie; now it has been replicated in the Webots simulation using 2 Crazyflies [23]. Does this imply that we might witness a 3D gas-localizing swarm at IROS 2023? We can’t wait.
Furthermore, we came across a paper [11] featuring a Bolt-based platform that demonstrated flying formations while attached to another platform by a string, presenting an intriguing control problem. Additionally, there was work that combined safe trajectory planning with swarm coordination, enabling the avoidance of obstacles and people [12]. Moreover, there were some notable collaborations, such as robot pickup and delivery involving the Turtlebot 3 Burger [22].
Given the abundance of swarm papers, it’s impossible for us to delve into each of them, but it’s all very impressive work.
Safe trajectory planning and AI-deck
Another significant buzzword at ICRA was “safety-critical control”. This is important for ensuring safe control from a human interface [15] and for facilitating reinforcement learning [27]. The latter approach is considered less “safe” in terms of designing controllers, as evidenced by the previous IROS competition, the Safe Robot Learning Competition. Although the Crazyflie itself is quite safe, it makes sense to first experiment with safe trajectories on it before applying them to larger drones.
Furthermore, we encountered approximately three papers related to the AI-deck. These covered various topics, such as optical flow detection [17], visual pose estimation [21], and the detection of other Crazyflies [5]. During the conference, we heard that the AI-deck presents certain challenges for researchers, but we remain hopeful that we will see more papers exploring its potential in the future!
List of papers
This list includes not only papers with physical Crazyflies, but also papers that use simulations or parameters of the Crazyflie. The workshop papers are not included this time, but we’ll add them later once we have the time.
Enjoy!
‘A Micro Aircraft with Passive Variable-Sweep Wings’ Songnan Bai, Runze Ding, Pakpong Chirarattananon from City University of Hong Kong
‘Onboard Controller Design for Nano UAV Swarm in Operator-Guided Collective Behaviors’ Tugay Alperen Karagüzel, Victor Retamal Guiberteau, Eliseo Ferrante from Vrije Universiteit Amsterdam
‘Multi-Target Pursuit by a Decentralized Heterogeneous UAV Swarm Using Deep Multi-Agent Reinforcement Learning’ Maryam Kouzehgar, Youngbin Song, Malika Meghjani, Roland Bouffanais from Singapore University of Technology and Design [Video]
‘Inverted Landing in a Small Aerial Robot Via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers’ Bryan Habas, Jack W. Langelaan, Bo Cheng from Pennsylvania State University [Video]
‘Ultra-Low Power Deep Learning-Based Monocular Relative Localization Onboard Nano-Quadrotors’ Stefano Bonato, Stefano Carlo Lambertenghi, Elia Cereda, Alessandro Giusti, Daniele Palossi from USI-SUPSI-IDSIA Lugano, ISL Zurich [Video]
‘A Hybrid Quadratic Programming Framework for Real-Time Embedded Safety-Critical Control’ Ryan Bena, Sushmit Hossain, Buyun Chen, Wei Wu, Quan Nguyen from University of Southern California [Video]
‘Distributed Potential iLQR: Scalable Game-Theoretic Trajectory Planning for Multi-Agent Interactions’ Zach Williams, Jushan Chen, Negar Mehr from University of Illinois Urbana-Champaign
‘Scalable Task-Driven Robotic Swarm Control Via Collision Avoidance and Learning Mean-Field Control’ Kai Cui, MLI, Christian Fabian, Heinz Koeppl from Technische Universität Darmstadt
‘Multi-Agent Spatial Predictive Control with Application to Drone Flocking’ Andreas Brandstätter, Scott Smolka, Scott Stoller, Ashish Tiwari, Radu Grosu from Technische Universität Wien, Stony Brook University, Microsoft Corp, TU Wien [Video]
‘Trajectory Planning for the Bidirectional Quadrotor As a Differentially Flat Hybrid System’ Katherine Mao, Jake Welde, M. Ani Hsieh, Vijay Kumar from University of Pennsylvania
‘Forming and Controlling Hitches in Midair Using Aerial Robots’ Diego Salazar-Dantonio, Subhrajit Bhattacharya, David Saldana from Lehigh University [Video]
‘AMSwarm: An Alternating Minimization Approach for Safe Motion Planning of Quadrotor Swarms in Cluttered Environments’ Vivek Kantilal Adajania, Siqi Zhou, Arun Singh, Angela P. Schoellig from University of Toronto, Technical University of Munich, University of Tartu [Video]
‘Decentralized Deadlock-Free Trajectory Planning for Quadrotor Swarm in Obstacle-Rich Environments’ Jungwon Park, Inkyu Jang, H. Jin Kim from Seoul National University
‘A Negative Imaginary Theory-Based Time-Varying Group Formation Tracking Scheme for Multi-Robot Systems: Applications to Quadcopters’ Yu-Hsiang Su, Parijat Bhowmick, Alexander Lanzon from The University of Manchester, Indian Institute of Technology Guwahati
‘Safe Operations of an Aerial Swarm Via a Cobot Human Swarm Interface’ Sydrak Abdi, Derek Paley from University of Maryland [Video]
‘Direct Angular Rate Estimation without Event Motion-Compensation at High Angular Rates’ Matthew Ng, Xinyu Cai, Shaohui Foong from Singapore University of Technology and Design
‘NanoFlowNet: Real-Time Dense Optical Flow on a Nano Quadcopter’ Rik Jan Bouwmeester, Federico Paredes-valles, Guido De Croon from Delft University of Technology [Video]
‘Adaptive Risk-Tendency: Nano Drone Navigation in Cluttered Environments with Distributional Reinforcement Learning’ Cheng Liu, Erik-jan Van Kampen, Guido De Croon from Delft University of Technology
‘Relay Pursuit for Multirobot Target Tracking on Tile Graphs’ Shashwata Mandal, Sourabh Bhattacharya from Iowa State University
‘A Distributed Online Optimization Strategy for Cooperative Robotic Surveillance’ Lorenzo Pichierri, Guido Carnevale, Lorenzo Sforni, Andrea Testa, Giuseppe Notarstefano from University of Bologna [Video]
‘Deep Neural Network Architecture Search for Accurate Visual Pose Estimation Aboard Nano-UAVs’ Elia Cereda, Luca Crupi, Matteo Risso, Alessio Burrello, Luca Benini, Alessandro Giusti, Daniele Jahier Pagliari, Daniele Palossi from IDSIA USI-SUPSI, Politecnico di Torino, Università di Bologna, University of Bologna, SUPSIETH Zurich [Video]
‘Multi-Robot Pickup and Delivery Via Distributed Resource Allocation’ Andrea Camisa, Andrea Testa, Giuseppe Notarstefano from Università di Bologna [Video]
‘Multi-Robot 3D Gas Distribution Mapping: Coordination, Information Sharing and Environmental Knowledge’ Chiara Ercolani, Shashank Mahendra Deshmukh, Thomas Laurent Peeters, Alcherio Martinoli from EPFL
‘Finding Optimal Modular Robots for Aerial Tasks’ Jiawei Xu, David Saldana from Lehigh University
‘Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems’ Craig Knuth, Glen Chou, Jamie Reese, Joseph Moore from Johns Hopkins University, MIT
‘Spring-Powered Tether Launching Mechanism for Improving Micro-UAV Air Mobility’ Felipe Borja from Carnegie Mellon university
‘Reinforcement Learning for Safe Robot Control Using Control Lyapunov Barrier Functions’ Desong Du, Shaohang Han, Naiming Qi, Haitham Bou Ammar, Jun Wang, Wei Pan from Harbin Institute of Technology, Delft University of Technology, Princeton University, University College London [Video]
‘Safety-Critical Ergodic Exploration in Cluttered Environments Via Control Barrier Functions’ Cameron Lerch, Dayi Dong, Ian Abraham from Yale University
If you have been following the ROS Discourse on a regular basis, you might have seen a bit more activity than usual on the Aerial Vehicles category. We very recently started an Aerial Robotics Working Group in collaboration with the Dronecode Foundation! It will initially be a community-driven working group; we will hold biweekly meetings on Wednesdays at 2:00 PM UTC, build up community membership, and gather information on the ROS Aerial community’s GitHub organization. This blogpost aims to explain how this working group came to light, what our current plans are, and how you can participate.
How did it all begin?
There are actually quite a few aerial enthusiasts dwelling in the ROS crowd, which became evident when 20-30 people showed up at the impromptu ROSCon 2022 aerial roboticists meetup. This was also our first experience with ROSCon as Bitcraze, and I (Kimberly) absolutely loved it. The idea came up to be more active in the amazing ROS community, which we started doing by helping out more with the Crazyswarm2 project (see this blogpost) and giving a presentation about it as well. However, we did notice that there wasn’t much online chatter about aerial vehicles on the ROS communication channels. Yes, the Embedded ROS working group led by eProsima (responsible for micro-ROS) has done some really cool demos with Crazyflies! And the same goes for other aerial projects, which have probably contributed to some of the staple projects like Nav2. But there weren’t any working groups specific to aerial robotics.
Since PX4, led by the Dronecode Foundation, had similar ambitions to be embedded into the ROS family, and since we met in person at that very same ROSCon last year, we started talking about possibly starting up a working group. This began with us reaching out to the ROS community to gauge interest with this ROS discourse post, and after 25-plus replies, the obvious thing to do was to set up a first exploratory meeting. About 30 people showed up to this, so the message was clear: yes, there is a demand for guidance, structure, and information in the ROS community regarding aerial robotics. Thus, the aerial robotics working group was born!
Current state and plans
One of the issues we have noticed is that there is a huge amount of projects and information about aerial robotics out there, perhaps too much. That is because aerial robotics encompasses a wide variety of robotic systems, in forms like multicopters or even monocopters (like in the blogpost here), but also hybrid VTOL vehicles, mini blimps (for example this hack we did), and many more. And as you probably know, aerial vehicles come with their own set of challenges that distinguish them from ground robots, like instability, aerodynamics, and limitations related to their lift capabilities. They therefore offer an interesting platform for control theory, autonomy, and swarming, and as a result several ROS-related projects have emerged, such as Crazyswarm2, Aerostack2, the Kumar Robotics Autonomy Stack, and Agilicious. Moreover, even though a standard ROS interface for aerial robotics was created some years ago, it has not been enforced or updated since. And although courses and tutorials to get started with aerial robotics in ROS can be found scattered across multiple projects and autopilot websites, many have found the learning curve to be quite steep and usually don’t know where to start.
Due to the vast amount of systems, software, projects, and information out there, we decided to gather all of it in one centralized location as an Aerial Robotics landscape, instead of scattering it across various aerial robotics resources. For this we have created a simple repository with markdown files. The idea is to fill it in little by little with info from the working group discussions, input from users, or research done by ourselves. To that end, we will facilitate biweekly meetings where users present their projects (like our last meeting about Aerostack2) or where we engage in discussions on various aerial robotics topics (like aerial autonomy stacks in the startup meeting).
Future ambitions
Currently, we don’t have a specific end goal or main project in mind, as we are right at the start of the first discussions and information gathering. That is also why, after some emails back and forth with the Open Robotics Foundation, it will be considered a ‘community-driven’ working group until we reach a stage where the landscape is developed enough to establish specific development goals and set up various subprojects for communication, autonomy, platforms, and/or education. Additionally, incorporating direct communication protocols within swarms could be of interest, as these are a common use case within aerial robotics. Once we have established more specific development goals, we can apply to become an official ROS working group and collaborate with other working groups on overlapping projects. From our perspective, it would be more beneficial for the ROS ecosystem not to create a standalone aerial stack, but to enhance the integration of other stacks with aerial vehicles.
Join us!
Currently, I (Kimberly), representing Bitcraze, and Ramon Roche from the Dronecode Foundation are in the ‘lead’ of the aerial working group, although we prefer to act as facilitators rather than imposing our own direction. We will try our best not to geek out too much on PX4 and/or Crazyflies alone, so anybody’s input will be crucial! So if you’d like to levitate ROS to new heights, come and join our meetings! Our next meeting is scheduled for Wednesday the 24th of May (2 pm UTC), and you can find the information in this ROS Discourse thread. We hope to see you there!