Author: Kimberly McGuire

A while ago, we wrote a generic blog post about state estimation in the Crazyflie, mostly discussing different ways the Crazyflie can determine its attitude and/or position. At that time, we only had the Complementary filter and Extended Kalman filter (EKF). Over the years, we’ve made some great additions like the M-estimation-based robust Kalman filter (an enhancement of the EKF, see this blog post) and the Unscented Kalman filter.

However, we have noticed that some of our newer users struggle with the concept of Kalman filtering, depending on whether it was covered in their curriculum, and for more experienced users a recap of the basics might be welcome too, since state estimation is essential to the Crazyflie's flight capabilities (and to robotics in general). So, in this blog post, we will explain the principles of Kalman filtering and how it is applied within the Crazyflie firmware, which hopefully will provide a good base for anyone starting to delve into state estimation on the Crazyflie.

We will also have a developer meeting about Kalman filtering on the Crazyflie, so we hope you can join if you have any questions about how it all works. We are also planning to go to FOSDEM this weekend, so we hope to see you there too.

Main Principles of the Kalman Filter

Anybody remotely working with autonomous systems must, at one point, have heard of the Kalman filter, as it has existed since the 60s and even played a role in the Apollo program. Understanding its main principles is also important for anyone working with drones or robotics. There are plenty of resources available, and its Wikipedia page is filled with examples, so here we will focus mostly on the concept and principles and leave the bulk of the mathematics as an exercise for those who like to delve into that :).

So basically, there are several principles that apply to a Kalman filter:

  • It estimates a linear system that is driven by stochastic processes. The probability function that drives these stochastic processes should ideally be Gaussian.
  • It makes use of Bayes’ rule, a theorem in statistics that describes the probability of an event happening based on prior knowledge related to that event.
  • It assumes that the state to be estimated can be described with a Markov model, meaning that the next event (or scenario) can be predicted from the current event alone. In other words, it does not need the full history of events to predict the next step(s), only the information from the previous step.
  • A Kalman filter is described as a recursive filter, which means that it reuses (part of) its output as input for the next filtering step.

So the state estimate is usually a vector of different variables that the developer or user of the system would like to observe, for either control or prediction, such as position and velocity: [x, y, ẋ, ẏ, …]. One can describe a dynamics model that predicts the state at the next step using only the current time step’s state, for instance: xt+1 = xt + ẋt, yt+1 = yt + ẏt. This can also be nicely described in matrix form if you like linear algebra. To this model, you can also add predicted noise to make it more realistic, or the effect of input commands to the system (like voltage to the motors). We will not go into the latter in this blogpost.
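To make that concrete, here is a minimal sketch (not taken from the Crazyflie firmware, and with an arbitrary time step) of such a constant-velocity dynamics model written in matrix form with Python/NumPy:

import numpy as np

dt = 0.01  # time step in seconds, chosen arbitrarily for this illustration

# State vector: [x, y, x_dot, y_dot]
x = np.array([0.0, 0.0, 0.5, 0.2])

# Constant-velocity dynamics model: positions advance by velocity * dt,
# velocities are assumed unchanged between steps.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

x_next = A @ x  # predicted state at the next time step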

The Concept of Kalman filters

Simplified block scheme of Kalman filtering

So, we will go through the process of explaining the steps of the Kalman filter now, which hopefully will be clear with the above picture. As mentioned before, we’d like to avoid formulas and are oversimplifying some parts to make it as clear as possible (hopefully…).

First, there is the predict phase, where the current state estimate and a dynamics model (also known as the state transition model) result in a predicted state. In the same phase, the predicted covariance is calculated, which also uses the dynamics model plus the process noise model, which indicates how much the dynamics model is expected to deviate from reality when predicting that state. In an ideal world with an ideal model, this could be enough; however, no dynamics model is perfect, which is why the next phase is also very important.

Then comes the update phase, where the filter estimate gets updated with a measurement of the real world through sensors. The measurement first goes through a measurement model, which transforms it into a measured state. Usually, a measurement is not a one-to-one depiction of a single variable of the state, so the measurement model ensures that the measurement can properly be compared to the predicted state; the difference between the two is known as the innovation (or measurement pre-fit residual). This same measurement model, together with the measurement noise model (which indicates how much the measurement deviates from the real world) and the predicted covariance, is used to calculate the Kalman gain.

In the last part of the update phase, the prediction is corrected with the innovation: the Kalman gain determines how much of the innovation is added to the predicted state to form the new estimated state. The same Kalman gain is also used to update the covariance for the next time step.
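Put together, the predict and update phases boil down to a few lines of linear algebra. Below is a generic textbook sketch in Python/NumPy (not the Crazyflie firmware implementation), where A is the dynamics model, Q the process noise, H the measurement model and R the measurement noise:

import numpy as np

def kalman_step(x, P, z, A, Q, H, R):
    # Predict phase: propagate the state and covariance through the dynamics model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q

    # Update phase: compare the prediction with the measurement
    y = z - H @ x_pred                          # innovation (pre-fit residual)
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain

    x_new = x_pred + K @ y                      # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # updated covariance
    return x_new, P_new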

A 1D example, height estimation

It’s always good to illustrate the filter with an example, so let’s look at a simple one, height estimation, to demonstrate its implications.

1D example of height estimation

You see here a Crazyflie flying, with its height currently estimated at zt and its velocity at żt. In the predict phase, the next height is predicted as zt+1,predict, using the simple model zt + żt. Then, for the update phase, a measurement rz from a range sensor is fed to the filter and translated by the measurement model to zt+1,meas. In this case, the measurement model is very simple when flying over a flat surface, as it probably only adds the offset of the sensor relative to the center of the Crazyflie, or perhaps compensates for a roll or pitch rotation.

In the background, the covariances are updated and the Kalman gain is calculated, and based on zt+1,predict and zt+1,meas, the next state zt+1 is calculated. As you probably noticed, there was a discrepancy between the predicted and measured height, which could be because the dynamics model couldn’t correctly predict the height. Perhaps a PID gain was higher than expected, or the Crazyflie had upgraded motors that made it climb faster on takeoff. As you can see, the filter put the estimated height zt+1 closer to the measurement than to the predicted height: the measurement noise model incorporated into the covariances indicates that the height sensor is more accurate than the height coming from the dynamics model. This would very well be the case for an infrared height sensor like the one on the Flow Deck; however, if it were an ultrasound-based sensor or a barometer instead (which are much noisier), the estimate would end up closer to the height predicted by the dynamics model.

Also, it’s good to note that the dynamics model here does not include the motor input, but it could. In that case, it would have been better able to predict the jump it just missed.
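To give a feel for how the Kalman gain blends the two heights, here is the 1D update with made-up numbers (purely illustrative, not the firmware code):

z_pred = 0.40   # height predicted by the dynamics model (m)
p_pred = 0.04   # variance of that prediction
z_meas = 0.55   # height measured by the range sensor (m)
r_meas = 0.01   # variance of the measurement (small: the sensor is trusted)

K = p_pred / (p_pred + r_meas)             # Kalman gain = 0.8
z_new = z_pred + K * (z_meas - z_pred)     # 0.40 + 0.8 * 0.15 = 0.52 m
p_new = (1 - K) * p_pred                   # covariance shrinks after the update

# Because the sensor noise is modelled as much smaller than the prediction noise,
# the estimate (0.52 m) ends up close to the measurement. With a noisier sensor
# (larger r_meas), K would be smaller and the estimate would stay closer to the
# prediction.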

A 2D example, horizontal position

A 2D example in x and y position

Let’s take it up a notch and add an extra dimension. You see here a 2D situation where the Crazyflie is moving horizontally. It is at position xt, yt and has a velocity of ẋt, ẏt at that moment in time. The dynamics model estimates the Crazyflie to end up in the general direction of the velocity vector, so it is a simple addition of the current position and the velocity vector. If the Crazyflie has a flow sensor (like on the Flow Deck), flow fx, fy can be detected and translated by the measurement model into a measured velocity (part of the state) by combining it with a height measurement and the camera characteristics.

However, the measured flow fx, fy indicates much more flow in the x-direction than in the y-direction. This can be due to a sudden wind gust in the y-direction, which the dynamics model couldn’t predict, or to a lack of features on the surface in the y-direction, making it more difficult for the flow sensor to measure the flow in that direction. Since neither model can fully account for this, the filter will, based on the Kalman gain and covariances, put the estimate somewhere in between, depending of course on the estimated covariances of both the measurement and the dynamics model.

In case of non-linearity

It would be much simpler if the world’s processes could be described with linear systems and have Gaussian distributions. However, the world is complex, so that is rarely the case. We can make parts of the world more abstract in simulation, and Kalman filters can handle that, but when dealing with real flying vehicles, such as the Crazyflie, which is considered a highly nonlinear system, it needs to be described by a nonlinear dynamics model. Additionally, the measurements of sensors in more complex and 3D situations usually don’t have a one-to-one linear relationship with the variables in the state. Can you still use the Kalman filter then, considering the earlier mentioned principles?

Luckily, certain approximations can be made that still make Kalman filters useful for non-linear systems:

  • Extended Kalman Filter (EKF): If there is non-linearity in the dynamics model, the measurement models, or both, these models are linearized around the current state variables at each prediction and update step by calculating the Jacobian, a matrix of first-order partial derivatives of the model with respect to the state variables (a minimal numerical sketch follows after this list).
  • Unscented Kalman Filter (UKF): An unscented Kalman filter deals with non-linearities by selecting sigma points around the mean of the state estimate, which are propagated through the non-linear dynamics model.
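As mentioned in the EKF bullet above, here is a minimal numerical sketch of what ‘linearizing around the current state’ means, using finite differences on a toy non-linear dynamics model (an EKF would normally use the analytical partial derivatives):

import numpy as np

def f(x):
    # Toy non-linear dynamics model for illustration only:
    # state = [position, velocity], with drag growing with velocity squared.
    pos, vel = x
    dt = 0.01
    return np.array([pos + vel * dt,
                     vel - 0.5 * vel * abs(vel) * dt])

def numerical_jacobian(f, x, eps=1e-6):
    # Approximate the Jacobian of f around x with finite differences.
    n = len(x)
    F = np.zeros((n, n))
    fx = f(x)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        F[:, i] = (f(x + dx) - fx) / eps
    return F

F = numerical_jacobian(f, np.array([0.0, 1.0]))
# F now plays the role of the A matrix in the linear predict/update equations.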

However, there is also the case of non-Gaussian processes in both dynamics and measurements, and in that case a complementary filter or particle filter would be best suited. The Crazyflie contains a complementary filter (which does not estimate x and y), an extended Kalman filter and an experimental unscented Kalman filter. Check out the state-estimation documentation for more information.

So…. where is the code?

This is all fine and dandy, but… where can you find all of this in the Crazyflie firmware? Here is an overview of where to find it for the most used filter of them all, namely the Extended Kalman Filter.

Several assumptions and adjustments have been made to the regular EKF implementation to make it suitable for flight on the Crazyflie. For those details, I’d like to refer you to the papers this implementation is based on, which can be found in the EKF documentation. Also, for a more precise explanation of the Kalman filter, please check out Stanford University’s lecture slides on linear dynamical systems or Linköping University’s course slides on sensor fusion.

Update: In the comments we were also notified of a nice EKF tutorial where you write the filter from scratch (github), by Prof. Simon D. Levy from Washington and Lee University. Practice makes perfect!

Next developer meeting and FOSDEM

As you would have guessed, our next developer meeting will be about the Kalman filters in the Crazyflie. Keep an eye on this Discussion thread for more details on the meeting.

Also, Kimberly and Arnaud will be attending FOSDEM this weekend in Brussels, Belgium. We are hoping to organize an open-source robotics BOF/meetup there, so please let us know if you are planning to go as well!

A few years ago, we wrote a blogpost about the Commander framework, where we explained the setpoint structure that drives the controller of the Crazyflie, an essential part of the stabilization module. Basically, without setpoints, there would not be any autonomy on the Crazyflie, let alone manual flight.

In the blogpost, we already shed some light on where different setpoints can come from in the commander framework, either from the Crazyflie python library (externally with the Crazyradio), the high level commander (onboard) or the App layer (onboard).

General framework of the stabilization structure of the Crazyflie with setpoint handling. * This part takes place on the computer through the CFlib for Python, so there is also a communication protocol in between; it is left out of this schematic for easier understanding.

However, we notice that there is sometimes confusion regarding these different functionalities and what exactly sends which setpoints and how. These details might not be crucial when using just one Crazyflie, but become more significant when managing multiple drones. Understanding how often your computer needs to send setpoints or not becomes crucial in such scenarios. Therefore, this blog post aims to provide a clearer explanation of this aspect.

Sending setpoints directly from the CFlib

Let’s start at the lower level from the computer. It is possible to send various types of setpoints directly from a Python script using the Crazyflie Python library (cflib for short). This capability extends to tasks such as manual control:

send_setpoint(roll, pitch, yawrate, thrust)

or for hover control (velocity control):

send_hover_setpoint(vx, vy, yawrate, zdistance)

You can check the automatically generated API documentation for more setpoint sending options.

If you use these functions in a script, the principle is quite basic: the Crazyradio sends exactly 1 packet with this setpoint over the air to the Crazyflie, and it will act upon that. There are no secret threads opening in the background, and nothing magical happens on the Crazyflie either. However, the challenge here is that if your script doesn’t send an updated setpoint within a certain amount of time (default of 2 seconds), a timeout will occur, and the Crazyflie will drop out of the sky. Therefore, you need to send a setpoint at regular intervals, like in a for loop, to keep the Crazyflie flying. This is something you need to take care of in the script.
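A minimal sketch of such a loop could look like the one below (the radio URI, heights and timings are just examples; adjust them to your setup):

import time
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your Crazyflie's address

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    cf = scf.cf
    # Unlock the thrust protection by first sending a zero setpoint
    cf.commander.send_setpoint(0, 0, 0, 0)

    # Hover for ~2 seconds: the setpoint must be re-sent regularly,
    # otherwise the commander timeout kicks in and the motors stop.
    for _ in range(20):
        cf.commander.send_hover_setpoint(0, 0, 0, 0.4)  # vx, vy, yawrate, z
        time.sleep(0.1)

    cf.commander.send_stop_setpoint()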

Example scripts in the CFlib that are sending setpoints directly:

Setpoint handling through Motion Commander Class

Another way to handle the regular sending of setpoints automatically in the CFLib is through the Motion Commander class. By initializing a Motion Commander object (usually as a context manager), the Crazyflie takes off and a thread is started that continuously sends (velocity) setpoints at a fixed rate. These setpoints can then be updated through functions such as, for moving forward (blocking):

forward(distance)

or giving a body-fixed velocity setpoint update (which returns immediately):

start_linear_motion(vx, vy, vz, rate_yaw)

You can check the Motion Commander’s automatically generated API documentation for more functions that can be utilized. As there is a background thread consistently sending setpoints to the Crazyflie, no timeout will occur, and you only need to call one of these functions for each ‘behavior update’. The thread is closed as soon as the Crazyflie lands again.
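A minimal sketch of using the Motion Commander as a context manager (URI, height and distances are just examples):

import time
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your setup

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    # Entering the context manager takes off and starts the background
    # thread that keeps streaming setpoints.
    with MotionCommander(scf, default_height=0.5) as mc:
        mc.forward(0.3)                        # blocking relative move
        mc.start_linear_motion(0.2, 0.0, 0.0)  # non-blocking velocity setpoint
        time.sleep(2)
        mc.stop()
    # Leaving the context manager lands and stops the setpoint thread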

Here are example scripts in the CFlib that use the motion commander class:

Setpoint handling through the high level commander

Prior to this, all logical and setpoint handling occurred on the PC side. Whether sending setpoints directly or using the Motion Commander class, there was a continuous stream of setpoint packets sent through the air for every movement the Crazyflie made. However, what if the Crazyflie misses one of these packets? Or how does this stream handle communication with many Crazyflies, especially in swarms where bandwidth becomes a critical factor?

This challenge led the developers at the Crazyswarm project (now Crazyswarm2) to implement more planning autonomy directly on the Crazyflie itself, in the form of the high-level commander. With the High-Level Commander, you can simply send one higher-level command to the Crazyflie, and the intermediate substeps (setpoints) are generated on the Crazyflie itself. This can be achieved with a regular takeoff:

take_off(height)

or go to a certain position in space:

go_to(x, y)

This can be accomplished using either the PositionHLCommander, which can be used as a context manager similar to the Motion Commander (without the Python threading), or by directly employing the functions of the High-Level Commander. You can refer to the automated API documentation for the available functions of the PositionHLCommander class or the High-Level Commander class.
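A minimal sketch with the PositionHLCommander as a context manager (the URI and coordinates are examples, and it assumes an absolute positioning system is available):

import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHLCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your setup

cflib.crtp.init_drivers()
with SyncCrazyflie(URI) as scf:
    # Take-off happens on entry; the intermediate setpoints are generated
    # onboard by the high-level commander, so no continuous stream of
    # packets is needed from the PC.
    with PositionHLCommander(scf, default_height=0.5) as pc:
        pc.go_to(1.0, 0.0)        # x, y in the absolute coordinate frame
        pc.go_to(1.0, 1.0, 0.8)   # optionally also a z height
    # Landing happens when the context manager exits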

Here are examples in the CFlib using either of these classes:

Notes on location of autonomy and discrepancies

Considering the various options available in the Crazyflie Python library, it’s essential to realize that these setpoint-setting choices, whether direct or through the High-Level Commander, can also be configured through the app layer onboard the Crazyflie itself. You can find examples of these app layer configurations in the Crazyflie firmware repository.

It’s important to note some discrepancies regarding the Motion Commander class, which was designed with the Flow Deck (relative positioning) in mind. Consequently, it lacks a ‘go to this position’ equivalent. For such tasks, you may need to use the lower-level send_position_setpoint() function of the regular Commander class (see this ticket). The same applies to the High-Level Commander, which was primarily designed for absolute positioning systems and lacks a ‘go forward with x m/s‘ equivalent. Currently, there isn’t a possibility to achieve these functionalities at a lower level from the Crazyflie Python library, as this functionality needs to be implemented in the Crazyflie firmware first (see this ticket). It would be beneficial to align these functionalities on both the CFlib and High-Level Commander sides at some point in the future.

Hopefully this helps explain the commander framework in more detail and where the real autonomy of the Crazyflie lies when you use the different commander classes. If you have any questions about what the Crazyflie can do with these, we advise you to ask them on discussions.bitcraze.io and we will try to point you in the right direction and give examples!

Before we start settling down and preparing for Christmas, it’s time for another release! The last one was before the summer, in July, and we’ve had quite a few changes on the development master branch that we’d like to share. You can now download the latest CFclient through pip and update the Crazyflie to the newest firmware, 2023.11, via the CFclient.

Latest changes in CFclient and Cflib

The most significant change in the CFclient is that we have finally transitioned from QT5 to QT6 for the GUI graphics. Additionally, we have addressed some issues with the toolboxes. Finally, we have added an information box to indicate the state of the supervisor, such as whether the Crazyflie is considered tumbled, flying, or if a restart is required because it is locked.

CFclient when the Crazyflie is tumbled, with supervisor info

For the backend, namely the Crazyflie Python library, some important changes have been implemented. Along with fixes to the parameter and logging framework, full-state setpoints have been introduced. This feature has existed in firmware for a while due to the Crazyswarm1 project (now Crazyswarm2), but it wasn’t implemented in the cflib until now. Additionally, it’s now necessary to use `notify_setpoint_stop` in cases of switching between high-level setpoints and regular position setpoints. There is also a generic motion capture example now based on the libmotioncapture library.
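For context, a minimal sketch of such a switch could look like the snippet below; it assumes an already connected Crazyflie object cf and the usual time import, and the exact call names may differ slightly between cflib versions:

# Stream regular position setpoints for a while
for _ in range(50):
    cf.commander.send_position_setpoint(0.0, 0.0, 0.5, 0.0)  # x, y, z, yaw
    time.sleep(0.1)

# Tell the Crazyflie that the low-level setpoints will stop,
# so the high-level commander can take over again
cf.commander.send_notify_setpoint_stop()
cf.high_level_commander.go_to(1.0, 0.0, 0.5, 0.0, 2.0)  # x, y, z, yaw, duration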

Note that even though the CFclient has been converted to QT6, there are several examples in the Cflib folder that have not been updated yet. This will be fixed soon, and a ticket has been created for it. Additionally, in the Bitcraze-VM, there have been some reported issues with QT6 (see this ticket).

Latest changes in the firmware

The firmware has undergone some important changes too. On the STM side of things, the hybrid TDOA mode has been merged (check out this recent blog post). This feature is still considered experimental, so please refer to the documentation for the right settings. Additionally, support for the supervisor information box in the CFclient has been added; to use it, both the firmware and the CFclient need to be updated. There is also a new example demonstrating communication with the GAP8 over CPX. Last but not least, it is now possible to create Python bindings for portions of the Kalman filter, mainly for the Loco positioning system. The NRF firmware, on the other hand, has no added functionalities except for some build changes and fixes.

Crazyradio2 + LPS tools

We’ve also made some improvements in other firmware and tools. Starting with the Crazyradio2, which includes fixes for broadcasting (important for you Crazyswarm2 folks!). We also aimed to make a new release of the LPS tools since we heard that people were experiencing issues with USB devices. Unfortunately, there are some problems with the GitHub release actions, so that will likely be delayed. For anyone facing USB issues, you can install the LPS tools from source with Python by following the README instructions.

Release details and Remaining issues

So here are the details of all that is released:

Some things that are affected by this release still require attention, but we haven’t had time to fix them yet:

  • Fix issues with LPS tools and release (see this ticket)
  • CFclient seems to be broken on the bitcraze-VM (see this ticket)
  • CFlib examples with QT-based GUI are still on QT5 (see this ticket)
  • The newest CFclient seems to need additional packages in some cases (see this and this ticket)

Please let us know at https://discussions.bitcraze.io if you are having more problems.

Developer meeting this Wednesday

As we already announced last week in the Monday blog post, we will be having a developer meeting this Wednesday (6th Dec, 3 pm CET) regarding the Flow deck (refer to this discussion thread for joining information). Since we usually don’t fill up the entire hour, the last part of the developer meeting is available for some generic support questions face-to-face (online), including questions about the release!

The Flow deck has been around for some time already, officially released in 2017 (see this blog post), and the Flow deck v2 was released in 2018 with an improved range sensor. Compared to MoCap positioning and the Loco Positioning System (based on Ultrawideband) that were already possible before, optical flow-based positioning for the Crazyflie opened up many more possibilities. Flight was no longer confined to lab environments with set-up external systems; people could bring the Crazyflie home and do their hacking there. Moreover, doing research for exploration techniques that cannot rely on external positioning systems was possible with it as well. For example, back in my day as a PhD student, I relied heavily on the Flow deck for multi-Crazyflie autonomous exploration. This would have been very difficult without it.

However, despite the numerous benefits that the Flow deck provides, there are also several limitations. These limitations may not be immediately familiar to many before purchasing a Crazyflie with a Flow deck. A while ago, we wrote a blog post about positioning systems in general and even delved into the Loco Positioning System in detail. In this blog post, we will explore the theory of how the Flow deck enables the Crazyflie to fly, share general tips and tricks for ensuring stable flight, and highlight what to avoid. Moreover, we aim to make the Flow deck the focus of next week’s Developer meeting, with the goal of improving or clarifying its performance further.

Theory of the Flow deck

I won’t delve into too much detail but will provide a generic indication of how the Flow deck works. As previously explained in the positioning system blog post, the Flow deck is a relative positioning system with onboard estimation. “Relative” means that wherever you start is the (0, 0, 0) position. The extended Kalman filter processes flow and height information to determine velocity, which is then integrated to estimate the position—essentially dead reckoning. The onboard Kalman filter manages this process, enabling the Crazyflie to use the information for stable hovering.

Image from Positioning System Overview blogpost

The optical flow sensor (PMW3901) calculates pixel flow per frame (this old blog post explains it well), and the IR range sensor (VL53L1x) measures height up to 4 meters (under ideal conditions). The Kalman filter incorporates a measurement model that describes the relationship between these two values and the velocity of the Crazyflie. More detailed information can be found in the state estimation documentation. This capability allows the Crazyflie to hover, as explained in the getting started tutorial.

Image from state estimation repo documentation
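As a rough illustration of that measurement model, the sketch below shows the basic geometry of turning pixel flow plus a height measurement into a velocity; the real firmware model also compensates for body rotation and tilt, and the constants here are only placeholders:

N_PIXELS = 35.0   # number of pixels the flow is counted over (placeholder value)
FOV_RAD = 0.71    # approximate field of view of the flow sensor in radians (placeholder)

def flow_to_velocity(flow_px, height_m, dt):
    # pixels -> angular displacement -> lateral displacement at the measured height
    angle = flow_px * (FOV_RAD / N_PIXELS)
    return (angle * height_m) / dt  # metres per second

vx = flow_to_velocity(flow_px=4.0, height_m=0.5, dt=0.01)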

Tips & Tricks and Limitations

If you want to fly with the Crazyflie and the Flow deck, there are a couple of things to keep in mind:

  • Take off from a floor with texture. Natural texture like wood flooring is probably the best.
  • The floor shouldn’t be too shiny, and be aware of infrared scattering affecting the height sensor.
  • The room should be well-lit, as the sensor needs to see the texture.

There are certain situations that the Flow deck has some issues with:

  • Low or no texture. Flying above a surface that is only one plain color.
  • Black areas. Similar to flying above low texture, but even more difficult; especially at startup, the position estimate can diverge.
  • Low light conditions
  • Flying over its own shadow

We made a video that shows these types of behaviors, starting of course with the most ideal flying conditions:

Moreover, it is also important to note that you shouldn’t fly too high or yaw too often. The latter will make the Crazyflie drift, as the optical flow caused by the yaw movement cannot be distinguished from flow caused by translation.

Developer meeting about Flow deck

We believe that many of the issues people experience are primarily due to the invisibility of the positioning quality. In many of our examples, the Crazyflie will not take off until the position estimate is stable. However, we don’t have corresponding functionality in our CFclient, so it is up to the user to recognize when the positioning is diverging. There is a lot of room for improvement in this regard.

This is the reason why the next developer meeting will specifically focus on the Flow deck, which will be on Wednesday the 6th of December, 3 pm central European time. During the meeting, we will explain more about the Flow deck, discuss the issues we are facing, and explore ways to enhance the visibility of positioning quality. Check out this discussion thread for information on how to join.

It seems that many of you are very interested in simulation. We might have gotten the hint when we noticed that our July developer meeting had our best attendance so far! Therefore, we are planning a new developer meeting to discuss the upcoming plans for simulation support for the Crazyflie.

Getting Started with Simulation tutorial

Perhaps you are not aware, but there is actually a Getting Started tutorial for simulation that has been available for a little over 2 months now. Unfortunately, circumstances prevented us from writing a blog post about it, but we’ve noticed that not all of you are aware of it yet!

The getting-started tutorial demonstrates how to set up the Webots simulator, which already includes Crazyflie models and some cool examples:

  • An example where you control the Crazyflie with the keyboard
  • An example where the Crazyflie does wall following autonomously

The latter is based on the example app layer for wall-following in the crazyflie-firmware repository. Starting this year, there’s also a Python library equivalent available.

The tutorial concludes with instructions on how to edit these controllers. Alternatively, you can choose to run the files directly from the crazyflie-simulation repository. After completing the tutorial, you can explore the simulation repository documentation for more information and to access additional examples.

Upcoming plans

With so many plans and so little time! This is a common phrase at Bitcraze, and it’s a symptom of being an overly ambitious, but too small, team. By the way, we are still looking for more people :). Nonetheless, we have big plans to take our Crazyflie simulation to the next level:

  • ROS 2 Crazyflie model for Webots: The Crazyflie has been a part of the Webots standard robots for 2 years now, but we still need to implement the Crazyflie into the Webots ROS 2 repository.
  • Better (new) Gazebo support: Currently, we only have a very simple example for Gazebo, which is limited to motors with no control input. Working with the C++ API can be a bit challenging, so it might be worth considering the use of ROS 2 in the loop here. Let’s see what comes out of it.
  • Integration into Crazyswarm2: Once the Webots ROS2 node has been released, integrating the Crazyflie simulation into Crazyswarm2 will become more straightforward.
  • Improvement to the Python bindings: We’ve had Python bindings for controllers and the high-level commander for a while. Recently, we also added Python bindings for the estimator (currently for loco positioning only). However, there are still some issues to address with the Python bindings for the controllers due to timing issues with the simulators.
  • Linking with our CFlib: Currently, Webots and the Crazyflie Python library use entirely different APIs. This means that scripts written for one are not compatible with the other, and you’ll need to be creative to reuse code. However, wouldn’t it be nice to run a Python example from the Python library with a --sim flag and have it control the Crazyflie in the simulator instead?

Of course, there are probably more improvements that we haven’t thought of yet, but that’s why we have developer meetings!

Come and join us at the Developer meeting.

We will be hosting another developer meeting on November 1st at 15:00 Central European Time (accounting for the time-shift from summer to autumn). You can find details on how to join in the discussion thread here.

Just for your information, I (Kimberly) am the main driving force behind our simulation efforts. However, I’m currently on partial sick leave and will soon be on full leave for a while. I kindly ask for your patience with the pace of ongoing developments. Remember, it’s an open-source project, so if you’d like to contribute and help out, we would greatly appreciate it :)

Around March/April this year, we started both the Bitcraze developer meetings and the Aerial-ROS meetings (the latter in collaboration with the Dronecode Foundation). Now that summer is here and our office is a bit empty, we have taken a bit of a summer break; however, we will start the meetings up again soon! The next ROS-aerial meeting will be on the 16th of August, and we also have a Bitcraze developer meeting planned for Wednesday the 6th of September (keep an eye on our announcements in discussions). In this blogpost we’d like to take the opportunity to show an overview of the meetings we have had so far.

Aerial ROS meetings

In March we started a ROS community working group for aerial vehicles together with our friends at the Dronecode Foundation, aka Aerial-ROS! We have biweekly meetings, alternating between discussion meetings on a set topic and invited guest presentations.

Here are the discussion topic meetings we had:

And we had several guest speakers as well! Like Miguel Fernandez-Cortizas from CAR-UPM talking about Aerostack2:

ROS-aerial Meeting guest presentation about Aerostack2

Then we had a guest presentation from Gerald Peklar from NXP talking about the Drones4Bats project:

ROS-aerial Meeting guest presentation about Drones4Bats

And the last before the summer was from Alejandro Hernández Cordero (Open Robotics Consultant) about the ROS2 Project Vehicle Gateway.

ROS-aerial Meeting guest presentation about Vehicle Gateway

The next meeting for ROS-aerial is planned on the 16th of August. Keep an eye on the ROS discourse forum.

Bitcraze Developer Meetings

We had already held a couple of developer meetings before, but we have been recording them since April. The first recorded one was about the Loco positioning system: we first gave a presentation about the system itself and the latest developments cooking in our pot, with time for questions afterwards.

Dev meeting about Loco positioning.

Then we had a meeting about the development of safety features in the Crazyflie in light of the Bolt developments:

Bitcraze Dev meeting about Safety features.

Then we had a meeting where Kristoffer highlighted the autonomous swarm demo we showed at ICRA 2023.

Bitcraze dev meeting about the autonomous swarm demo

And in the last one before the summer holiday, Kimberly explained the Crazyflie simulation model integrated into Webots.

Bitcraze Dev meeting about Simulation

We are still planning to have a developer meeting every first Wednesday of the month, starting with September 6th (keep an eye on our announcements in discussions).

EPFL 101 Crazyflie presentation

Oh yeah, by the way, we were also invited by the EPFL LIS lab to give another Crazyflie 101 presentation in Lausanne last April! We made a pre-recording of it, so you can check it out right here:

EPFL LIS Crazyflie 101 presentation.

See you all after summer!

In our ROS-aerial community working group, we had a meeting a few weeks ago to discuss education and tutorials within aerial robotics (see the ROS discourse thread here). The general conclusion was that there should be more courses and tutorials since the learning curve is too steep. But… is that actually the case? After Kimberly asked for suggestions in a LinkedIn post, we found out that might not be true! There are loads of tutorials out there! So in this blog post, we will provide an overview of the suggested tutorials and the ones that have materials available online.

Stable diffusion with prompt ‘A drone flying in front of a school blackboard’

Online books

One of the first suggestions was to explore the online free book titled ‘Small Unmanned Aircraft: Theory and Practice.’ This book has been written by Randy Beard and Tim McLain of Brigham Young University, and it covers everything from the absolute basics of coordinate frames and quadrotor dynamics to path planning and cameras. It is a must-read for anybody starting in UAVs and Aerial robotics.

The physical book can be found here: http://press.princeton.edu/titles/9632.html

The available PDFs can be accessed on GitHub: https://github.com/randybeard/uavbook

Courses specified on Aerial Robotics

Here are some suggestions for courses specifically focused on Aerial Robotics. These received the most recommendations! Many universities have made their courses available online, accessible to anyone interested.

Coursera offers the ‘Robotics: Aerial Robotics’ course as part of the Robotics specialization. Taught by Prof. Vijay Kumar from the University of Pennsylvania, this 4-week course covers the mechanics and control of aerial vehicles using Matlab. It starts from 1 dimension and gradually progresses to the 3rd dimension in simulation. The course is part of a paid educational program, but you can audit the lessons for free.

Link: https://www.coursera.org/learn/robotics-flight

Udacity has been offering a course on Aerial Vehicles for quite some time. The lessons are taught by top names in the industry and cover key aspects of Aerial Robotics, such as motion planning, controls, and estimation, with lab assignments involving a real drone. The course duration is 4 months, and access is available for a fee.

Link: https://www.udacity.com/course/flying-car-nanodegree–nd787

The University of Maryland offers a course on Autonomous Aerial Robotics, making all videos, slides, and assignments available. Taught by Nitin J. Sanket and Chahat Deep Singh, the course covers everything from basic control and dynamics to full autonomy. It’s a comprehensive resource for aerial robotics. The course utilizes the Parrot Bebop 2.0, and while a Mocap system is required, you may explore the possibility of adapting the course to a different platform.

Link: http://prg.cs.umd.edu/enae788m

Additionally, there’s the course ‘Applied Control System 3: UAV Drone (3D Dynamics & Control)’ which is part of a series by Mark Misin. This course delves deep into the dynamics, control, and modeling of quadrotors.

Link: https://www.udemy.com/course/applied-control-systems-for-engineers-2-uav-drone-control/

Courses specified on Robotics applied to UAVs

Here are some suggestions for courses that focus on robotics but utilize UAVs/drones to demonstrate the implementation of the studied materials.

‘Visual Navigation For Autonomous Vehicles’ is a course available on MIT Open Courseware, taught by Prof. Luca Carlone. As the name implies, the course primarily focuses on autonomous navigation for any autonomous vehicle. It includes exercises where students implement vision algorithms on both ground robots and drones. Additionally, the course covers working with ROS and applying the knowledge to a simulated drone in Unity.

Link: https://ocw.mit.edu/courses/16-485-visual-navigation-for-autonomous-vehicles-vnav-fall-2020/

The ‘Bio-inspired Robotics’ course at the University of Washington, led by Prof. Sawyer Fuller, explores the realm of drawing inspiration from nature rather than reinventing the wheel. It covers various robots inspired by creatures capable of swimming, walking, hopping, and of course, flying. Lab assignments in this course involve working with a Crazyflie drone.

Link: https://faculty.washington.edu/minster/bio_inspired_robotics/

Brown University offers a course called ‘Introduction to Robotics,’ taught by Prof. Stefanie Tellex. While the introduction covers generic robotics, the focus of the full course is on building and programming the Duckiedrone. The course dives straight into autonomy and also teaches students how to work with ROS.

Link: https://cs.brown.edu/courses/cs1951r/

Update (4th of July)

Princeton University (see this blogpost) has also decided to release its ‘Intro to Robotics’ lectures and materials to the public. Can’t believe I forgot this one!

Link: https://irom-lab.princeton.edu/intro-to-robotics/

Youtube tutorials

If you’d like to start hands-on right away, here are a couple of suggestions for YouTube tutorials or series about aerial robotics.

Drone Programming with Python: This popular tutorial/course teaches viewers how to program a real drone using Python with the DJI Tello. It offers a great opportunity for anyone looking for a short and enjoyable project to undertake, especially on a rainy day, while still working with a real platform.

Link: https://youtu.be/LmEcyQnfpDA

Intelligent Quads YouTube Channel: This channel is entirely dedicated to creating autonomous UAVs, covering topics from Ardupilot to MAVlink to ROS and Gazebo. It appears to be a valuable resource for beginners in the field of autonomous UAVs.

Link: https://www.youtube.com/@IntelligentQuads

But wait, there is more!

Here are some extra resources for you to take a look at as well.

  • Self-Driving Car Specialization: If you are interested in learning more about SLAM (Simultaneous Localization and Mapping) and sensors, this specialization is tailored for self-driving cars but the theory can be useful for drones as well. Link: https://www.coursera.org/specializations/self-driving-cars
  • Drone Dojo: For those looking to build their own drones, Drone Dojo provides useful instructions and courses to get started on DIY drone projects. Link: https://dojofordrones.com/

To conclude

Indeed, it appears that there are plenty of courses and tutorials available for people interested in getting started with aerial robotics. The range of resources is vast, and it’s possible that we might still be missing some, which could lead to a part 2 of this blog post in the future! Perhaps we would also need to delve into these to see why the learning curve is considered steep. However, aerial robotics is not an easy subject anyway, so perhaps it is good to start from the basics. Nevertheless, this compilation should provide a solid starting point for anyone eager to delve into the world of aerial robotics. A major thank you to everyone who has contributed so far (linked to in the original LinkedIn post); your valuable input has made this possible!

As you may have noticed from the recent blog posts, we were very excited about ICRA London 2023! And it seems that we had every right to be, as this conference had the highest number of Crazyflie related papers compared to all the previous robotics conferences! In the past, the conferences typically had between 13-16 papers, but this time… BOOM! 28 papers! In this blog post, we will provide a list of these papers and give a general evaluation of the topics and themes covered so far.

So here some stats:

  • ICRA had 1655 papers accepted (43 % acceptance rate)
  • 28 Crazyflie papers (25 proceedings, 1 RA-L, 1 RO-L, 1 late-breaking-result poster)
    • Haven’t included the workshop papers this time (no time)
  • The major topics we discovered were swarm coordination, safe trajectory planning, efficient autonomy, and onboard processing

Additionally, we came across a few notable posters, including one about a grappling hook for the Crazyflie [26], a human suit that allows for drone control [5], the Bolt made into a monocopter with a Jetson companion [16], and a flexible fixed-wing platform driven by a barebone Crazyflie [1]. We also observed a growing interest in aerial robotics with approximately 10% of all sessions dedicated to UAVs. Interestingly, 18 out of the 28 Crazyflie papers were presented in non-UAV specialized sessions, such as multi-robot systems and vision-based navigation.

Swarm coordination

Swarms were a hot topic at ICRA 2023, as already noted in this tweet by Ramon Roche. We had over 10 papers dedicated to this topic, including one that involved 16 Crazyflies [9]. Surprisingly, more than half of the papers utilized multiple Crazyflies. This already paints a different landscape compared to IROS 2022, where autonomous navigation took center stage.

In IROS 2022, we witnessed single-drone gas mapping using a Crazyflie, but now it has been replicated in the Webots simulation using 2 Crazyflies [23]. Does this imply that we might witness a 3D gas localizing swarm at IROS 2023? We can’t wait.

Furthermore, we came across a paper [11] featuring the Bolt-based platform, which demonstrated flying formations while being attached to another platform using a string. It presented an intriguing control problem. Additionally, there was a work that combined safe trajectory planning with swarm coordination, enabling the avoidance of obstacles and people [12]. Moreover, there were some notable collaborations, such as robot pickup and delivery involving the Turtlebot 3 Burger [22].

Given the abundance of swarm papers, it’s impossible for us to delve into each of them, but it’s all very impressive work.

Safe trajectory planning and AI-deck

Another significant buzzword at ICRA was “safety-critical control.” This is important for ensuring safe control from a human interface [15] and for facilitating safe reinforcement learning [27]. The latter approach is considered less “safe” in terms of designing controllers, as evidenced by the previous IROS competition, the Safe Robot Learning Competition. Although the Crazyflie itself is quite safe, it makes sense to first experiment with safe trajectories on it before applying them to larger drones.

Furthermore, we encountered approximately three papers related to the AI-deck. These papers covered various topics such as optical flow detection [17], visual pose estimation [21], and the detection of other Crazyflies [5]. During the conference, we heard that the AI-deck presents certain challenges for researchers, but we remain hopeful that we will see more papers exploring its potential in the future!

List of papers

This list includes not only papers with a physical Crazyflie, but also papers that use the Crazyflie in simulation or use its parameters. The workshop papers are not included this time, but we’ll add them later once we have the time.

Enjoy!

  1. ‘A Micro Aircraft with Passive Variable-Sweep Wings’ Songnan Bai, Runze Ding, Pakpong Chirarattananon from City University of Hong Kong
  2. ‘Onboard Controller Design for Nano UAV Swarm in Operator-Guided Collective Behaviors’ Tugay Alperen Karagüzel, Victor Retamal Guiberteau, Eliseo Ferrante from Vrije Universiteit Amsterdam
  3. ‘Multi-Target Pursuit by a Decentralized Heterogeneous UAV Swarm Using Deep Multi-Agent Reinforcement Learning’ Maryam Kouzehgar, Youngbin Song, Malika Meghjani, Roland Bouffanais from Singapore University of Technology and Design [Video]
  4. ‘Inverted Landing in a Small Aerial Robot Via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers’ Bryan Habas, Jack W. Langelaan, Bo Cheng from Pennsylvania State University [Video]
  5. ‘Ultra-Low Power Deep Learning-Based Monocular Relative Localization Onboard Nano-Quadrotors’ Stefano Bonato, Stefano Carlo Lambertenghi, Elia Cereda, Alessandro Giusti, Daniele Palossi from USI-SUPSI-IDSIA Lugano, ISL Zurich [Video]
  6. ‘A Hybrid Quadratic Programming Framework for Real-Time Embedded Safety-Critical Control’ Ryan Bena, Sushmit Hossain, Buyun Chen, Wei Wu, Quan Nguyen from University of Southern California [Video]
  7. ‘Distributed Potential iLQR: Scalable Game-Theoretic Trajectory Planning for Multi-Agent Interactions’ Zach Williams, Jushan Chen, Negar Mehr from University of Illinois Urbana-Champaign
  8. ‘Scalable Task-Driven Robotic Swarm Control Via Collision Avoidance and Learning Mean-Field Control’ Kai Cui, MLI, Christian Fabian, Heinz Koeppl from Technische Universität Darmstadt
  9. ‘Multi-Agent Spatial Predictive Control with Application to Drone Flocking’ Andreas Brandstätter, Scott Smolka, Scott Stoller, Ashish Tiwari, Radu Grosu from Technische Universität Wien, Stony Brook University, Microsoft Corp, TU Wien [Video]
  10. ‘Trajectory Planning for the Bidirectional Quadrotor As a Differentially Flat Hybrid System’ Katherine Mao, Jake Welde, M. Ani Hsieh, Vijay Kumar from University of Pennsylvania
  11. ‘Forming and Controlling Hitches in Midair Using Aerial Robots’ Diego Salazar-Dantonio, Subhrajit Bhattacharya, David Saldana from Lehigh University [Video]
  12. ‘AMSwarm: An Alternating Minimization Approach for Safe Motion Planning of Quadrotor Swarms in Cluttered Environments’ Vivek Kantilal Adajania, Siqi Zhou, Arun Singh, Angela P. Schoellig from University of Toronto, Technical University of Munich, University of Tartu [Video]
  13. ‘Decentralized Deadlock-Free Trajectory Planning for Quadrotor Swarm in Obstacle-Rich Environments’ Jungwon Park, Inkyu Jang, H. Jin Kim from Seoul National University
  14. ‘A Negative Imaginary Theory-Based Time-Varying Group Formation Tracking Scheme for Multi-Robot Systems: Applications to Quadcopters’ Yu-Hsiang Su, Parijat Bhowmick, Alexander Lanzon from The University of Manchester, Indian Institute of Technology Guwahati
  15. ‘Safe Operations of an Aerial Swarm Via a Cobot Human Swarm Interface’ Sydrak Abdi, Derek Paley from University of Maryland [Video]
  16. ‘Direct Angular Rate Estimation without Event Motion-Compensation at High Angular Rates’ Matthew Ng, Xinyu Cai, Shaohui Foong from Singapore University of Technology and Design
  17. ‘NanoFlowNet: Real-Time Dense Optical Flow on a Nano Quadcopter’ Rik Jan Bouwmeester, Federico Paredes-valles, Guido De Croon from Delft University of Technology [Video]
  18. ‘Adaptive Risk-Tendency: Nano Drone Navigation in Cluttered Environments with Distributional Reinforcement Learning’ Cheng Liu, Erik-jan Van Kampen, Guido De Croon from Delft University of Technology
  19. ‘Relay Pursuit for Multirobot Target Tracking on Tile Graphs’ Shashwata Mandal, Sourabh Bhattacharya from Iowa State University
  20. ‘A Distributed Online Optimization Strategy for Cooperative Robotic Surveillance’ Lorenzo Pichierri, Guido Carnevale, Lorenzo Sforni, Andrea Testa, Giuseppe Notarstefano from University of Bologna [Video]
  21. ‘Deep Neural Network Architecture Search for Accurate Visual Pose Estimation Aboard Nano-UAVs’ Elia Cereda, Luca Crupi, Matteo Risso, Alessio Burrello, Luca Benini, Alessandro Giusti, Daniele Jahier Pagliari, Daniele Palossi from IDSIA USI-SUPSI, Politecnico di Torino, Università di Bologna, University of Bologna, SUPSI, ETH Zurich [Video]
  22. ‘Multi-Robot Pickup and Delivery Via Distributed Resource Allocation’ Andrea Camisa, Andrea Testa, Giuseppe Notarstefano from Università di Bologna [Video]
  23. ‘Multi-Robot 3D Gas Distribution Mapping: Coordination, Information Sharing and Environmental Knowledge’ Chiara Ercolani, Shashank Mahendra Deshmukh, Thomas Laurent Peeters, Alcherio Martinoli from EPFL
  24. ‘Finding Optimal Modular Robots for Aerial Tasks’ Jiawei Xu, David Saldana from Lehigh University
  25. ‘Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems’ Craig Knuth, Glen Chou, Jamie Reese, Joseph Moore from Johns Hopkins University, MIT
  26. ‘Spring-Powered Tether Launching Mechanism for Improving Micro-UAV Air Mobility’ Felipe Borja from Carnegie Mellon university
  27. ‘Reinforcement Learning for Safe Robot Control Using Control Lyapunov Barrier Functions’ Desong Du, Shaohang Han, Naiming Qi, Haitham Bou Ammar, Jun Wang, Wei Pan from Harbin Institute of Technology, Delft University of Technology, Princeton University, University College London [Video]
  28. ‘Safety-Critical Ergodic Exploration in Cluttered Environments Via Control Barrier Functions’ Cameron Lerch, Dayi Dong, Ian Abraham from Yale University

If you have been following the ROS Discourse on a regular basis, you might have seen a bit more activity on the Aerial Vehicles category than usual. We very recently started an Aerial Robotics Working Group in collaboration with the Dronecode Foundation! It will be a community-driven working group initially; we will hold biweekly meetings on Wednesdays at 2:00 PM UTC, build up a community of members, and gather information on the ROS Aerial community’s GitHub organization. This blogpost aims to explain how this working group came to be, what our current plans are, and how you can participate.

How did it all begin?

There are actually quite a few aerial enthusiasts dwelling in the ROS crowd, which became evident when 20-30 people showed up at the impromptu ROSCon 2022 aerial roboticists meetup. This was also our first experience with ROSCon as Bitcraze, and I (Kimberly) absolutely loved it. The idea came up to be more active in the amazing ROS community, which we started doing by helping out more with the Crazyswarm2 project (see this blogpost) and giving a presentation about it as well. However, we did notice that there wasn’t as much online chatter about aerial vehicles on the ROS communication channels. Yes, the Embedded ROS working group led by eProsima (responsible for micro-ROS) has done some really cool demos with Crazyflies, and other aerial projects have probably contributed to staple projects like Nav2. But there aren’t any working groups specific to aerial robotics.

Since PX4, led by the Dronecode Foundation, had similar ambitions to merge into the ROS family, and since we met in person at that very same ROSCon last year, we started talking about possibly starting a working group. This began with us reaching out to the ROS community for interest in this ROS Discourse post, and after more than 25 replies, the obvious next step was to set up a first exploratory meeting. About 30 people showed up, so the message was clear: yes, there is a demand for guidance, structure, and information in the ROS community regarding aerial robotics. Thus, the aerial robotics working group was born!

Current state and plans

One of the issues we have noticed is that there are numerous projects and a lot of information about aerial robotics, perhaps even too much. That is because aerial robotics covers a huge variety of robotic systems in different forms, like multicopters or even monocopters (like in the blogpost here), but also hybrid VTOL vehicles, mini blimps (for example this hack we did) and many more. But as you probably know, aerial vehicles come with their own set of challenges that distinguish them from ground robots, like instability, aerodynamics, and limitations related to their lift capabilities. This makes them an interesting platform for control theory, autonomy, and swarming, and as a result several ROS-related projects have emerged, such as Crazyswarm2, Aerostack2, the Kumar Robotics Autonomy Stack and Agilicious. Moreover, even though a standard ROS interface for aerial robotics was created some years ago, it has not been enforced or updated since. And although courses and tutorials to get started with aerial robotics in ROS can be found scattered around on multiple project and autopilot websites, many have found the learning curve to be quite steep and usually don’t know where to start.

Due to the vast amount of systems, software, projects and information out there, we decided to gather all of it in one centralized location as an Aerial Robotics landscape, instead of scattering it across various aerial robotics resources; for this, we have created a simple repository with markdown files. The idea is to fill this in little by little with information from the working group discussions, other user input, or our own research. For that, we will facilitate biweekly meetings where users present their project (like our last meeting about Aerostack2) or where we engage in discussions on various aerial robotics topics (like aerial autonomy stacks in the startup meeting).

Future ambitions

Currently, we don’t have a specific end goal or main project in mind, as we are right at the start of the first discussions and information gathering. That is also why, after some emails back and forth with the Open Robotics Foundation, it will be considered a ‘community-driven’ working group until we reach a stage where the landscape is developed enough to establish specific development goals and set up various subprojects for communication, autonomy, platforms and/or education. Additionally, incorporating direct communication protocols within swarms could be of interest, as these are a common use case within aerial robotics. Once we have established more specific development goals, we can apply to become an official ROS working group and collaborate with other working groups on overlapping projects. From our perspective, it would be more beneficial for the ROS ecosystem not to create a standalone aerial stack, but to enhance the integration of other stacks with aerial vehicles.

Join us!

Currently, I (Kimberly), representing Bitcraze, and Ramon Roche from the Dronecode Foundation will be in the ‘lead’ of the aerial working group, although we prefer to act as facilitators rather than imposing our own direction. We will try our best not to geek out too much on PX4 and/or Crazyflies alone, so anybody’s input will be crucial! If you’d like to levitate ROS to new heights, come and join our meetings! The next one is scheduled for Wednesday the 24th of May (2 pm UTC), and you can find the information in this ROS Discourse thread. We hope to see you there!

It is easy to forget that one of the reasons the Crazyflie is so nice to develop for is that it weighs only about 30 grams. In case something goes wrong with your script or there is a fly-away, you can simply pick it up from the air without worrying about the propellers hitting you. Moreover, when the Crazyflie crashes, it usually only requires a brush off and a potential replacement of a motor-mount or propeller. The risk of damage to yourself, other people, indoor furniture, or the vehicle itself is extremely low. However, things become very different if you’ve built a larger platform with the Bolt or BQ deck with large brushless motors (like with this blogpost), where the risk of injury to people or to the vehicle itself increases significantly. That is one of the major reasons why the BQ deck and the Bolt are still in early access and have been for a while. In our efforts to get it out of early access, it’s time to start thinking about safety features.

In this blog post, we’ll be discussing how other open-source autopilot programs are implementing safety features, followed by a discussion on current efforts for Crazyflie, along with an announcement of the developer meeting scheduled for May 3rd (see below for more info).

Catching the Crazyflie with a net

Safety in other Autopilots

We are a bit late to the game in terms of safety compared to other autopilot programs such as PX4, ArduPilot, Betaflight and Paparazzi UAV, which have been thinking about safety for quite some time. That makes a lot of sense when you consider the types of platforms that run these autopilots, such as large VTOL or fixed-wing vehicles or 10-kilo quadcopters with cinematic cameras, and the degree to which outdoor flight is regulated. Flying a UAV autonomously or by yourself has become much more challenging as the US, EU, and many other countries have made regulations more restrictive. In most cases, you are not even allowed to fly if fail-safes are not implemented, such as defining what to do if your vehicle loses its GPS signal. These types of measures can be separated into pre-flight checks and during-flight checks.

Pre-flight checks

Before a vehicle is allowed to fly, or even before the motors are allowed to spin (which is called ‘arming’), several conditions must be met. First, it needs to be checked whether all internal sensors, such as the IMU, barometer, and magnetometer, are calibrated and functional, so they don’t give values outside of their normal operating range. Then, the vehicle must receive a GPS signal, and the internal state estimator (usually an extended Kalman filter) should converge to a position based on that information. It should also be determined whether an external remote control is connected to the vehicle and whether there is a datalink to a ground station for telemetry. Feasibility checks can also be implemented, such as ensuring that the loaded mission does not exceed the UAV’s mission parameters or that the start location is not too far from the take-off position (assuming the EKF is functional). Additionally, the battery should not be low, and the vehicle should not still be in an error state from a previous flight or crash.
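To make this a bit more concrete, here is a minimal, purely illustrative sketch of such an arming gate in Python. All names (`preflight_checks`, `estimator.has_converged()`, the sensor interface, and so on) are made up for the example and do not correspond to any actual autopilot API; a real implementation would also include the feasibility checks on the mission itself.

```
# Hypothetical pre-flight (arming) gate, only to illustrate the idea above.
# None of these names or checks come from an actual autopilot codebase.

def preflight_checks(sensors, estimator, links, battery, vehicle_state):
    """Return (ok, reasons): ok is True only if every arming condition is met."""
    reasons = []

    # Internal sensors must be calibrated and within their operating range
    for name, sensor in sensors.items():
        if not sensor.calibrated or not sensor.in_range():
            reasons.append(f"sensor '{name}' not ready")

    # The state estimator (e.g. an EKF) must have converged on a position
    if not estimator.has_converged():
        reasons.append("state estimate has not converged")

    # Remote control and ground-station datalink must be up
    if not links.rc_connected:
        reasons.append("no remote control link")
    if not links.telemetry_connected:
        reasons.append("no ground station datalink")

    # Battery level and leftover error state from a previous flight
    if battery.voltage < battery.min_arming_voltage:
        reasons.append("battery too low")
    if vehicle_state.in_error:
        reasons.append("vehicle still in error state")

    return (len(reasons) == 0, reasons)
```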

All of these features have the potential to be turned off or made less restrictive, depending on your situation. However, keep in mind that changing any of these may require recertification of the drone or make it fall outside what is required for outdoor flight regulation. Therefore, these should only be changed if you know what you are doing.

Preflight checks documentation

Fail-safe triggers during flight

Now that the pre-flight checks have passed, the UAV is armed and you have given it the takeoff command. However, there is much more that can go wrong during a UAV flight, and takeoff is one of the most dangerous moments where everything could go wrong. Therefore, there are many more safety features, aka failsafes, during flight than there are pre-flight checks. These can be separated into ‘triggers’ and ‘behaviors’, so that the developer can choose what the UAV should do in case of a failure, for instance mapping the trigger ‘GPS loss’ to the behavior ‘land safely’, and so on.

Thus, there are several triggers that can enable the autopilot’s failsafe mechanisms:

  • No connection with the remote control
  • No connection with the Ground station or Datalink
  • Low Battery
  • Position estimate diverges or full GPS loss
  • Waypoint going beyond geofence or Mission is not feasible
  • Other vehicles are nearby.

Also, sometimes an external automatic trigger system is required: a box that monitors the conditions under which the UAV should take action, for instance when there is no GPS, other aerial vehicles are nearby, or the UAV is crossing a geofence determined by outdoor flight restrictions. Note that all of these triggers usually have a couple of conditions attached, such as the voltage level that counts as ‘low battery’ or the number of seconds of GPS loss deemed acceptable.
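As an illustration of how such trigger conditions and thresholds could be expressed, here is a small sketch. The threshold values, field names, and trigger names are invented for the example and are not taken from any specific autopilot; real autopilots expose these as configurable parameters.

```
# Illustrative failsafe triggers with configurable thresholds.
# All values and names are made up for the example.

FAILSAFE_THRESHOLDS = {
    "low_battery_voltage": 3.3,   # volts per cell
    "gps_loss_timeout": 5.0,      # seconds without a position fix
    "rc_loss_timeout": 1.0,       # seconds without remote control input
}

def check_triggers(telemetry, thresholds=FAILSAFE_THRESHOLDS):
    """Return the list of failsafe triggers that currently fire."""
    triggered = []
    if telemetry["cell_voltage"] < thresholds["low_battery_voltage"]:
        triggered.append("LOW_BATTERY")
    if telemetry["time_since_position_fix"] > thresholds["gps_loss_timeout"]:
        triggered.append("POSITION_LOSS")
    if telemetry["time_since_rc_packet"] > thresholds["rc_loss_timeout"]:
        triggered.append("RC_LOSS")
    return triggered
```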

Fail-safe behavior

If any of the conditions mentioned above are triggered, most autopilot suites have default failsafe behaviors linked to them. These behaviors can include the following:

  • No action at all
  • Warning on the console or remote control display
  • Continue the mission autonomously
  • Stay still at the same position or go to a home position
  • Fly to a lower altitude
  • Land based on position or safely land by reducing thrust
  • No input to motors or completely disarming the motors

Usually, the default actions follow regulation, but per trigger it is possible to configure a different behavior than the default. One can decide to completely disarm the vehicle, but then the chances of the UAV crashing are pretty high, which can result in damage to the vehicle or harm to people or objects. By the way: disarming is the opposite of arming, meaning the motors are not allowed to spin, no matter what input they receive. If you decide to never take any action and force the drone to finish its mission autonomously, then in case of GPS or position loss you risk losing your vehicle, or it may end up in areas where it is absolutely not allowed, such as airports. Again, changing these default behaviors should only be done by someone who knows what they are doing, and with careful consideration.
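To make the trigger-to-behavior mapping more concrete, here is a small sketch that builds on the hypothetical triggers from the earlier example. The behavior names and defaults are invented for illustration and do not reflect any particular autopilot’s configuration, which ships with safer, regulation-compliant defaults that you override at your own risk.

```
# Illustrative mapping from failsafe triggers to behaviors.
# Trigger and behavior names are made up for the example.

DEFAULT_BEHAVIOR = "WARN_ONLY"

FAILSAFE_BEHAVIORS = {
    "RC_LOSS": "RETURN_TO_HOME",
    "LOW_BATTERY": "LAND_AT_POSITION",
    "POSITION_LOSS": "BLIND_LAND",     # reduce thrust, keep attitude level
    "GEOFENCE_BREACH": "HOLD_POSITION",
}

def select_behavior(triggered):
    """Pick the behavior for the first trigger that fired, in priority order."""
    for trigger in triggered:
        if trigger in FAILSAFE_BEHAVIORS:
            return FAILSAFE_BEHAVIORS[trigger]
    return DEFAULT_BEHAVIOR
```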

Failsafe documentation for other autopilot suites:

Emergencies

Fail-safes are measures that ensure safe flight. However, there will always be a chance that an emergency occurs which requires immediate action as well. If the vehicle has crashed during any of its flight phases or has flipped, or if hardware breaks, such as the motors, arms, or perhaps even the autopilot board itself, what should be done then?

The standard default behavior for this is to completely disarm the vehicle so that it won’t react to any input to the motors. Of course, that is difficult to do if the autopilot program is still running, but at least the vehicle won’t try to take off and finish its mission while lying on its side. It might also be that a backup system is connected to the ESCs, perhaps using a different communication channel, which takes over in case the autopilot is not responding anymore.

Also, the most important safety feature of all is the pilot themselves. Each remote control should have a special button or switch that can put the drone into a different mode, make it land, or disarm it, so that the pilot can act on what they see. In case the motors are still spinning, have a net or towel available to throw over them, disconnect the battery as soon as possible, and have sand or a special fire retardant at hand in case the LiPo batteries are pierced.

All of the autopilots have tips for dealing with such situations, but make sure to do some proper research yourself on how to handle spinning parts or potential LiPo battery fires. What I’ve given here is just a compilation of tips from the documentation above, so please make sure to read up on it in detail!

Safety in the Crazyflie Firmware

So how about the Crazyflie firmware? We have some safety features built in here and there, but they are scattered all over the code base. Since the Crazyflie is so safe, there was no immediate need for more, and we felt it was up to the developer to integrate safety features themselves. But with the Bolt and the BQ deck coming out of early access, we want to at least do something. Since we have already started looking into how other autopilot software handles this, we can get some ideas; however, we did notice that many of these features are mostly meant for outdoor flight. The Crazyflie and the Crazyflie Bolt have been designed for indoor use and may have to deal with different issues as well.

Current safety features

This is a collection of the safety features in the firmware at the time of writing this blogpost. Most safety checks in the Crazyflie are up to the developer to do before and during flight, but these are some automatic ones that are scattered around the firmware:

However, if for instance your Crazyflie or Bolt platform loses its positioning in the air, or doesn’t have a Flow deck attached before takeoff, there are no default safety systems in place. You either need to catch it, make it land, or use a self-made emergency stop button built on one of the emergency stop services above.
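As an example of such a self-made emergency stop, here is a minimal sketch using the Python cflib. It assumes a Crazyflie reachable at the URI below and uses the emergency stop packet of the localization service; the URI, cache path, and overall structure are assumptions for illustration rather than a complete, tested tool.

```
# Minimal self-made emergency stop: press Enter in the terminal and the
# script sends an emergency-stop packet that makes the firmware cut the motors.

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.utils import uri_helper

URI = uri_helper.uri_from_env(default='radio://0/80/2M/E7E7E7E7E7')


def main():
    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
        input('Crazyflie connected - press Enter to send EMERGENCY STOP...')
        # Emergency stop via the localization service
        scf.cf.loc.send_emergency_stop()
        print('Emergency stop sent.')


if __name__ == '__main__':
    main()
```

Note that in practice you would integrate something like this into the script that is already flying the Crazyflie, since only one radio link can be open to the same Crazyflie at a time.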

Safety features in the works

As mentioned earlier, we have safety features spread throughout the code base of the Crazyflie firmware. Our current effort is to collect all of these emergency stops and triggers in the supervisor module to have them all in one place.

In addition, since indoor positioning is critical, we want to be notified when it fails. For instance, if the lighthouse geometry is incorrect, we need to see whether the position diverges. This check has so far been done outside of the Crazyflie firmware in a cflib script, but it has not been implemented inside the firmware itself. We also want to provide some options in terms of behavior for these triggers. Currently, we are working on two options: ‘turn the motors off’ or ‘safe land’, with ‘safe land’ decreasing the thrust while keeping the drone level in attitude.
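To illustrate what such a ‘safe land’ behavior could look like from the client side, here is a rough sketch that ramps the thrust down while commanding zero roll and pitch through the low-level setpoint interface of the cflib. The starting thrust, ramp time, and step count are arbitrary example values, and the firmware-side version we are working on will of course do this internally rather than over the radio link.

```
# Rough client-side sketch of a 'safe land': keep the drone level
# (zero roll/pitch) while gradually reducing thrust, then stop the motors.
# Thrust values and timing are arbitrary example numbers.

import time


def safe_land(cf, start_thrust=38000, duration=2.0, steps=20):
    # send_setpoint takes roll [deg], pitch [deg], yawrate [deg/s],
    # thrust [0..65535]
    for i in range(steps):
        thrust = int(start_thrust * (1.0 - (i + 1) / steps))
        cf.commander.send_setpoint(0.0, 0.0, 0.0, thrust)
        time.sleep(duration / steps)
    # Finally make sure the motors are stopped
    cf.commander.send_stop_setpoint()
```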

Furthermore, we want to integrate these features into the cfclient as well. For example, we want to add more emergency safety features to our remote control through the cfclient, and show users how to arm and disarm the vehicle.

These are the elements we are currently working on, but there might be more to come!

Developer meeting May 3rd

You probably already guessed it… the topic of the next developer meeting will be the safety features in the Crazyflie and the Bolt! We will present the current safety features in the Crazyflie and what we are working on to improve them. We really want your feedback on what you think is important for brushless versions of the Crazyflie for indoor flight!

The Dev meeting will be on Wednesday May the 3rd at 3 PM CEST. Please keep an eye on the discussion forum in the developer meeting thread.