
This week we have a guest blog post from Bárbara Barros Carlos, PhD candidate at DIAG Robotics Lab. Enjoy!

Quadrotors are characterized by their underactuation, nonlinearities, bounded inputs, and, in some cases, communication time-delays. The development of their maneuvering capability poses some challenges that cover dynamics modeling, state estimation, trajectory generation, and control. The latter, in particular, must be able to exploit the system’s nonlinear dynamics to generate complex motions. However, the presence of communication time-delay is known to severely degrade control performance.

A composite image showing our real-time NMPC with time-delay compensation being used on the Crazyflie during the tracking of a helical trajectory.

In our recent work, we present an efficient position control architecture based on real-time nonlinear model predictive control (NMPC) with time-delay compensation for quadrotors. Given the current measurement, the state is predicted over the delay time interval using an integrator and then passed to the NMPC, which takes into account the input bounds. We demonstrate the capabilities of our architecture using the Crazyflie 2.1 nano-quadrotor.

Time-Delay Compensation

In our aerial system, because of the radio communication latency, we have delays both in receiving measurements and sending control inputs. Likewise, since we intend to use NMPC, the potentially high computational burden associated with its solution becomes an element that must also be taken into account to minimize the error in the state prediction.

Crazyflie NMPC response without considering the time-delay compensation.

To tackle this issue, we use a state predictor based on the round-trip time (RTT) associated with the sum of network latencies as a delay compensator. The prediction is computed by forward-integrating the system dynamic model, starting from the current measured state and over the RTT, with an explicit Runge-Kutta 4th-order (ERK4) integrator. Since this operation is independent of the rest of the pipeline, perfect delay compensation can be achieved by setting the integration step equal to the RTT. Thus, it is assumed that there is a fixed RTT, denoted by τr, to be compensated.
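To make the idea concrete, here is a minimal sketch of such a predictor, assuming a function f() that implements the continuous-time quadrotor model and placeholder state/input dimensions; it only illustrates the single ERK4 step over the RTT, not the exact code of our architecture.

#define NX 13   // placeholder state dimension (depends on the model used)
#define NU 4    // placeholder input dimension

// f() stands for the continuous-time model x_dot = f(x, u) (not shown here).
void f(const float x[NX], const float u[NU], float xdot[NX]);

// One explicit Runge-Kutta 4 step from the measured state over dt = RTT.
void predictState(const float x[NX], const float u[NU], float rtt, float xPred[NX])
{
  float k1[NX], k2[NX], k3[NX], k4[NX], tmp[NX];
  f(x, u, k1);
  for (int i = 0; i < NX; i++) tmp[i] = x[i] + 0.5f * rtt * k1[i];
  f(tmp, u, k2);
  for (int i = 0; i < NX; i++) tmp[i] = x[i] + 0.5f * rtt * k2[i];
  f(tmp, u, k3);
  for (int i = 0; i < NX; i++) tmp[i] = x[i] + rtt * k3[i];
  f(tmp, u, k4);
  for (int i = 0; i < NX; i++)
    xPred[i] = x[i] + rtt / 6.0f * (k1[i] + 2.0f * k2[i] + 2.0f * k3[i] + k4[i]);
}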

Nonlinear Model Predictive Control

The NMPC controller is defined as the following constrained nonlinear program (NLP):

Therein, p denotes the inertial position, q the attitude in unit quaternions, vb the linear velocity expressed in the body frame, ω the angular rate, and Ωi the rotational speed of the ith propeller. The NLP is tailored to the Crazyflie 2.1 and is implemented using the high-performance software package acados, which solves optimal control problems and implements a real-time iteration (RTI) variant of a sequential quadratic programming (SQP) scheme with Gauss-Newton Hessian approximation. The quadratic subproblems (QP) arising in the SQP scheme are solved with HPIPM, an interior-point method solver, built on top of the linear algebra library BLASFEO, finely tuned for multiple CPU architectures. We use a recently proposed Hessian condensing algorithm particularly suitable for partial condensing to further speed up solution times.

When designing an NMPC, choosing the horizon length has profound implications for computational burden and tracking performance. For the former, the longer the horizon, the higher the computational burden. As for the latter, in principle, a long prediction horizon tends to improve the overall performance of the controller. To select this parameter and achieve a trade-off between performance and computational burden, we implemented the NLP in acados considering five horizon lengths (N = {10, 20, 30, 40, 50}), input bounds on the rotational speed of the propellers (lower bound 0, upper bound 22 krpm), and dynamics discretized with an ERK4 integration scheme. Likewise, we compare the condensing approach with the state-of-the-art solver qpOASES against the partial condensing approach with HPIPM over the same set of horizons.

Left: closed-loop trajectories comparing different horizon lengths. Right: average runtimes per SQP-iteration for different horizon lengths considering two distinct QP solvers.

As qpOASES is a solver based on an active-set method, it requires condensing to be computationally efficient. In line with the observations found in the literature that condensing is effective for short to medium horizon lengths, we note that qpOASES is competitive for horizons up to approximately N = 30 when compared to HPIPM. The break-even point moves higher on the scale for longer horizons, mainly due to efficient software implementations that cover: (a) a Hessian condensing procedure tailored for partial condensing, (b) a structure-exploiting QP solver based on a novel Riccati recursion, (c) a hardware-tailored linear algebra library. Therefore, we chose horizon N = 50 as it offers a reasonable trade-off between deviation from the reference trajectory and computational burden.

Onboard Controller Considerations

How the onboard controllers (PIDs) use the setpoints of the offboard controller (NMPC) in our architecture is not entirely conventional and thus deserves some consideration. First, the reference signals that the PID loops track do not fully correspond to the control inputs considered in the NMPC formulation. Instead, part of the state solution is used in conjunction with the control inputs to reconstruct the actual input commands passed as setpoints to the Crazyflie. Second, part of the reconstructed input commands is sent as a setpoint to the outer loop (attitude controller), and the other part is sent to the inner loop (rate controller). Furthermore, as the NMPC model does not include the PID loops, it does not truly represent the real system, even in the case of perfect knowledge of the physical parameters. As a consequence, the optimal feedback policy is distorted in the real system by the PIDs.

Closed-loop Position Control Performance

Our control architecture hinges upon a ROS Kinetic framework and runs at 66.67 Hz. The Crazy RealTime Protocol (CRTP) is used in combination with our crazyflie_nmpc stack to stream, at runtime, custom packets containing the data required to reconstruct the part of the measurement vector that depends on the IMU data. Likewise, the cortex_ros bridge streams the 3D global position of the Crazyflie, which is then passed through a second-order, discrete-time Butterworth filter to estimate the linear velocities.
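As an illustration of the velocity estimation step, the sketch below filters the position with a generic second-order section and takes a finite difference; the coefficients would come from an offline Butterworth design and the structure and names are ours, not necessarily those of the actual ROS node.

typedef struct {
  float b0, b1, b2, a1, a2;  // coefficients from the Butterworth design (a0 = 1)
  float d1, d2;              // delay states
} biquad_t;

// Transposed direct-form II second-order section.
static float biquadApply(biquad_t *f, float x)
{
  float y = f->b0 * x + f->d1;
  f->d1 = f->b1 * x - f->a1 * y + f->d2;
  f->d2 = f->b2 * x - f->a2 * y;
  return y;
}

// Velocity estimate: filter the position, then take a finite difference at 66.67 Hz.
static float estimateVelocity(biquad_t *f, float position, float *prevFiltered, float dt)
{
  float filtered = biquadApply(f, position);
  float velocity = (filtered - *prevFiltered) / dt;
  *prevFiltered = filtered;
  return velocity;
}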

To validate the effectiveness of our control architecture, we ran two experiments. For each experiment, we generate a reference trajectory on a base computer and pass it to our NMPC ROS node every τs = 15 ms. When generating the trajectories, we explicitly address the feasibility issue in the design process, creating two references: one feasible and one infeasible. We show experimentally that the performance of the proposed NMPC is not degraded even when the nano-quadrotor attempts to track an infeasible trajectory, which could, in principle, make it deviate significantly or even crash.

Overall, we observe that the most challenging setpoints to be tracked are the positions in which, given a change in the motion, the Crazyflie has to pitch/roll in the opposite direction quickly. These are the setpoints where the distortion has the greatest influence on the system, causing small overshoots in position. The average solution time of the tailored RTI scheme using acados was obtained on an Intel Core i5-8250U @ 3.4 GHz running Ubuntu and is about 7.4 ms. This result shows the efficiency of the proposed scheme.

Outlook

In this work, we presented the design and implementation of a novel position controller based on nonlinear model predictive control for quadrotors. The control architecture incorporates a predictor as a delay compensator for granting a delay-free model in the NMPC formulation, which in turn enforces bounds on the actuators. To validate our architecture, we implemented it on the Crazyflie 2.1 nano-quadrotor. The experiments demonstrate that the efficient RTI-based scheme, exploiting the full nonlinear model, achieves a high-accuracy tracking performance and is fast enough for real-time deployment.

Related Links

This research project was developed by:

Bárbara Barros Carlos1, Tommaso Sartor2, Andrea Zanelli3, and Gianluca Frison3, under the supervision of professors Wolfram Burgard4, Moritz Diehl3 and Giuseppe Oriolo1.

1 B. B. Carlos and G. Oriolo are with the DIAG Robotics Lab, Sapienza University of Rome, Italy.
2 T. Sartor is with the MECO Group, KU Leuven, Belgium.
3 A. Zanelli, G. Frison, and M. Diehl are with the syscop Lab, University of Freiburg, Germany.
4 W. Burgard is with the AIS Lab, University of Freiburg, Germany.

Last Wednesday we had our first live tutorial event, explaining our Spiral Swarm Demo that we usually show at conferences. About 60 people signed up and it seems that about 40-50 people were able to join from all parts of the world. There were even several Crazyflie users from Asia that stayed up late especially for this, so we definitely appreciated the dedication!

For those who missed it, you can find the recordings and slides on this event page.

The Tutorial

The first hour we mostly talked about the Lighthouse positioning system, in particular focusing on the base station V2. In real time, we had hands-on sessions where we showed how we set up the system, how to retrieve the calibration data and how to estimate the geometry. The hour ended with a Crazyflie flying in the lighthouse system itself.

After the break, we focused on how to achieve more autonomy in the swarm, talking about the limitations of communication, the high level commander and the app layer. This was also shown hands-on with multiple flying Crazyflies and the fully automatic demo at the end. We were able to keep the demo running for another 30 minutes afterwards while we rested up with a drink :)

We used Discord and Mozilla Hubs simultaneously to stream the tutorial. Discord worked out nicely since we could have one channel for the stream and one channel for the chat, which one of us was able to monitor continuously. Mozilla Hubs was a nice add-on, however it definitely had some hiccups and streaming quality issues, which is not ideal for following a tutorial. We also heard from headset-using participants that being in Virtual Reality for 2 hours is very exhausting.

What next?

We really liked doing the tutorial and speaking one-on-one with our users, so we are likely to organize one again. We are not sure at what frequency though, but of course we will announce it first. We already have some requests for topics so we will look into those first. Next time it will probably be a shorter tutorial on Discord only. Mozilla Hubs might still be used, but as a virtual gallery where we put 3D visualizations of what we are working on (like how the base station sweeps work, for instance), so that people can get a better understanding. If you have any requests for topics please leave a comment below.

We will also try out using our new Discord server as a digital ‘watering hole’ for our users. Here everybody will have the opportunity to chat with each other, to share awesome projects and maybe to help each other out with certain questions. However, we will not be on Discord ourselves all the time and still advise using forum.bitcraze.io as the main place to ask questions and seek support.

Click here to join our Discord server

As mentioned in this blog post, we added the possibility to write apps for the Crazyflie firmware a while ago. Now we have added more functions in the Firmware to make it possible to use apps for an even wider range of tasks.

The overall idea of the app API is to mirror the functionality of the python lib. This will enable a user to prototype an application in python with quick iterations; when everything is working, the app can easily be ported to C to run in the Crazyflie instead. The functions in the firmware are not identical to the python flavour but we have tried to keep them as close as possible to make the translation simple.

An app is also a much better way to contain custom functionality as the underlying firmware can be updated without merging any code. The intention is that the app API will be stable over time, so apps that work with one version of the firmware should also work with the next version.
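For reference, a minimal app skeleton could look like the sketch below, assuming the appMain() entry point and the DEBUG_PRINT/vTaskDelay helpers used elsewhere in the firmware; see the app-layer examples in the repository for the exact build setup.

#include "app.h"
#include "FreeRTOS.h"
#include "task.h"
#include "debug.h"

// The firmware starts appMain() in its own FreeRTOS task once the system has booted.
void appMain(void)
{
  DEBUG_PRINT("Hello from an app!\n");
  while (1) {
    // Do the app's work here, for instance once per second.
    vTaskDelay(M2T(1000));
  }
}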

Improvements

We used our demo from IROS and ICRA (among others) with a fairly autonomous swarm as a driver for the development. The demo used to be implemented in a branch of the firmware with various modifications of the code base to make it possible to do what we wanted. The goal of the exercise was to convert the demo into an app and add the required API to the firmware to enable the app to do its thing. The new app is available here.

The main areas where we have extended the API are:

Log and parameters framework

The log framework is the preferred way for an app to read data from the firmware and this has been working from the start. Similarly, the parameter framework is the way to set parameters. Even though this has worked, it broke a basic assumption in the setup with the client: that only the client can change a parameter. Changing a parameter from an app could lead to the client and the Crazyflie having different views of the state in the Crazyflie, but this has now been fixed and the client is updated when needed.
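As a rough sketch, reading a log variable and setting a parameter from an app could look like this, assuming helpers along the lines of logGetVarId()/logGetFloat() and paramGetVarId()/paramSetFloat(); the group and variable names below are only examples.

#include "log.h"
#include "param.h"

void readStateAndSetParam(void)
{
  // Read the estimated x position through the log framework.
  logVarId_t idX = logGetVarId("stateEstimate", "x");
  float x = logGetFloat(idX);

  // Change a parameter; the client is now notified so both sides stay in sync.
  paramVarId_t idKp = paramGetVarId("pid_attitude", "roll_kp");
  paramSetFloat(idKp, 6.5f);

  (void)x;  // use x for something useful in a real app
}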

High level commander

The high level commander was not accessible from an app earlier; functions mirroring the python lib have been added to make it easy to handle autonomous flight.
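A hedged sketch of what this enables from an app: a takeoff, go-to and land sequence, assuming the firmware exposes calls similar to crtpCommanderHighLevelTakeoff()/GoTo()/Land() that mirror the python lib (exact names and signatures may differ).

#include "crtp_commander_high_level.h"
#include "FreeRTOS.h"
#include "task.h"

void simpleAutonomousFlight(void)
{
  crtpCommanderHighLevelTakeoff(0.5f, 2.0f);                        // height [m], duration [s]
  vTaskDelay(M2T(2500));
  crtpCommanderHighLevelGoTo(0.5f, 0.0f, 0.5f, 0.0f, 3.0f, false);  // x, y, z, yaw, duration, relative
  vTaskDelay(M2T(3500));
  crtpCommanderHighLevelLand(0.0f, 2.0f);                           // height [m], duration [s]
}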

Custom LED sequences

It is now possible to register custom LED sequences to control the four LEDs on the Crazyflie to signal events or state.

Lighthouse functionality

Functions for setting base station geometry data as well as calibration data have been added. These functions are also very useful for those who are using the lighthouse system, as this can now be done from an app instead of modifying lighthouse_position_est.c.

Remaining work

We have taken a step forward with these changes but there is more to be done! The two main areas are support for custom CRTP packets and memory mapping through the memory sub system. There might be more; let us know if there is something you are missing. The work will continue and there might even be some documentation at some point :-)

Tutorial

One reason for doing this API work now was to prepare for the tutorial about the lighthouse 2 positioning system, swarm autonomy and the demo app that we will run this Wednesday on-line, don’t miss out! You can read more about the event here.

Li-Ion batteries have long packed more energy per gram than Li-Po batteries. The problem for UAV applications has been that Li-Ion can’t deliver enough current, but this is starting to change. Now there are cells in the 18650 series that are supposed to be able to deliver 30-35 A continuously, at least according to the specs. Therefore we thought it was time to do some testing and decided to build a 1-cell Li-Ion drone using the Crazyflie Bolt as base.

Since an 18650 battery is 18 mm in diameter and 65 mm long, the size would affect the design, but we still wanted to keep the drone small and lightweight. The battery is below 20 mm wide, which means we can run the deck connectors around it, which is nice. We chose to use our 3D printer to build the frame and use off-the-shelf ESCs, motors and props. After a couple of hours of research we selected 3″ propellers, 1202.5 11500 Kv motors and tiny 1-2S single ESCs for our first prototype.

Parts list:

  • 1 x Custom designed 130mm 3D printed frame
  • 1 x Crazyflie Bolt flight controller
  • 4 x Eachine 3020 propeller (2xCW + 2xCCW)
  • 4 x Flywoo ROBO RB 1202.5 11500 Kv motors
  • 4 x Flash hobby 7A 1-2S ESC
  • 1 x Li-Ion Sony 18650 VTC6 3000mAh 30A
  • Screws, anti vib. spacers, zipties, etc.

The custom designed frame was developed in iterations, and can still be much improved, but at this stage it is small, lightweight and rigid enough. We wanted the battery to be as central as possible while keeping it all compact.

Prototype frame designed in FreeCAD.

Assembly and tuning

The 3D printed frame came out quite well and weighed in at 13 g. After soldering the Bolt connectors to the ESCs, attaching motors and props, adjusting the battery cable and soldering an XT30 connector to the Li-Ion battery, the whole thing weighed ~103 g, of which the battery is 45 g. It feels quite heavy compared to the Crazyflie 2.1 and we had a lot of respect for it the first time we test flew it. Before we took off we reduced the pitch and roll PID gains to roughly half and luckily it flew without problems and quite nicely. Well, it is quite loud, but that is kind of expected. After increasing the gains a bit we felt quite pleased with:

#define PID_ROLL_RATE_KP  70.0
#define PID_ROLL_RATE_KI  200.0
#define PID_ROLL_RATE_KD  2
#define PID_ROLL_RATE_INTEGRATION_LIMIT    33.3

#define PID_PITCH_RATE_KP  70.0
#define PID_PITCH_RATE_KI  200.0
#define PID_PITCH_RATE_KD  2
#define PID_PITCH_RATE_INTEGRATION_LIMIT   33.3

#define PID_ROLL_KP  7.0
#define PID_ROLL_KI  3.0
#define PID_ROLL_KD  0.0
#define PID_ROLL_INTEGRATION_LIMIT    20.0

#define PID_PITCH_KP  7.0
#define PID_PITCH_KI  3.0
#define PID_PITCH_KD  0.0
#define PID_PITCH_INTEGRATION_LIMIT   20.0

This would be good enough for what we really wanted to try: the endurance with a Li-Ion battery. A quick measurement of the current consumption at hover gave 5.8 A, from which we estimated up to ~30 min of flight time on a 3000 mAh Li-Ion battery. Wow, but first a real test…
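(The back-of-the-envelope estimate is simply capacity divided by current: 3.0 Ah / 5.8 A ≈ 0.52 h, or roughly 31 minutes, ignoring voltage sag and the fact that we stop at a cutoff voltage.)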

Hover test

For the hover test we used lighthouse 2 which is starting to work quite well. We had to change the weight and thrust constants in estimator_kalman.c for the autonomous flight to work:

#define CRAZYFLIE_WEIGHT_grams (100.0f)

//thrust is thrust mapped for 65536 <==> 250 GRAMS!
#define CONTROL_TO_ACC (GRAVITY_MAGNITUDE*250.0f/CRAZYFLIE_WEIGHT_grams/65536.0f)

After doing that, we created a hover script that hovers at 0.5 m height and lands when the voltage reaches 3.0 V. We leaned back with excitement, behind a safety net, and started the script… after 19 min it landed… good, but not what we hoped for and quite far from the calculated 30 min. Maybe Li-Ion isn’t that good when it needs to provide more current…? A quick internet search showed that Li-Ion can be run all the way down to 2.5 V, but we have to stop at 3.0 V because of the electronics and losing thrust, so we are missing quite a bit of energy… Further investigations are needed.

Lighthouse 2 flight test

As a final test we launched some flight scripts to fly in a square and in a spiral so we would get a feel for Lighthouse 2 + Bolt with PID controller combination. We think it turned out quite nicely, and this with almost no optimization effort:

Summary

Li-Ion felt like it could be a game changer when it comes to flight time, but the result was not as promising as we had hoped. That doesn’t mean we can’t get there though; more research and development is required.

We’re happy to announce that we have taken an important step forward in the development of the lighthouse positioning system: we have improved the calibration compensation. The changes improve the correctness of the coordinate system, especially for lighthouse V2 base stations.

As mentioned in this blog post, one of the remaining areas to solve was the handling of calibration data, and this is what we have addressed lately. In the manufacturing process, mechanical elements are mounted within some tolerances, but since the precision of the system is so good, even very fine tolerances make a big difference in the end result. Each base station is measured in the factory and the calibration data describing these imperfections is stored in the base station. The calibration data is transmitted in the light sweeps to enable a receiver to use it to correct the errors in the measured angles.

As with everything else related to lighthouse, there is no official information on how to interpret the calibration data, so we (and the community) had to make educated guesses.

Lighthouse 1

The compensation model for lighthouse 1 has been known for quite a long time, see the Astrobee project by NASA and Libsurvive. The most important parameter is the phase and, until now, this is the only part of the calibration data that we have used in the firmware. In the new implementation we use all parameters.

The parameters of the lighthouse 1 calibration model are phase, tilt, gib mag, gib phase and curve.

Lighthouse 2

The compensation data for lighthouse 2 is similar to lighthouse 1, but there are two new parameters, ogee mag and ogee phase. It also seems that some parameters that share names between lighthouse 1 and 2 have different meanings, for instance curve.

Libsurvive has implemented compensation for lighthouse 2, but we have unfortunately not managed to use their work with good results. Instead, we have tried to figure out what the model might look like and match it to measurements. We have managed to get good results for phase, tilt, gib mag and gib phase, while we don’t know how to use curve, ogee mag and ogee phase. The solution seems to be pretty good with this subset of the parameters and we have decided to leave it at that for now.

Use of calibration data

The way we have used the calibration data so far has been to apply it to the measured angles to get (more) correct sweep angles, which are then fed into the position estimation algorithms. The problem is that the compensation model is designed the other way around, i.e. it goes from correct angles to measured angles, and an iterative approach is required to apply it to the measured angles. A better way (most likely by design) is to apply it in the kalman estimator instead, where it simply becomes part of the measurement model.
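To illustrate the iterative approach, an inversion by fixed-point iteration could look like the sketch below; applyCalibrationModel() is a hypothetical stand-in for the model that maps a correct angle to the measured one, and the real firmware code is structured differently.

typedef struct calibData_s calibData_t;  // calibration parameters (phase, tilt, ...)

// Hypothetical: model going from a correct angle to the angle a receiver would measure.
float applyCalibrationModel(float correctAngle, const calibData_t* calib);

// Invert the model: find the correct angle that would produce the measured one.
static float correctedAngle(float measured, const calibData_t* calib)
{
  float correct = measured;  // initial guess
  for (int i = 0; i < 5; i++) {
    float predictedMeasured = applyCalibrationModel(correct, calib);
    correct += measured - predictedMeasured;  // reduce the residual each iteration
  }
  return correct;
}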

Currently we do calculate the corrected angles as well and expose them as log data, but it is not required for the standard functionality of the lighthouse system. We may make it possible to turn it on/off via a parameter in the future to save some CPU power.

Functional improvements

So what kind of improvements will the calibration add?

The first improvement is the base station geometry estimation. With more correct angles the estimated base station position and orientation will be better. This is important to be able to get a good estimation of the Crazyflie position since poor geometry data will give the position estimator conflicting data.

Secondly, more correct angles will straighten the coordinate system. With angular distortion the position estimator will not be able to estimate the correct position and the coordinate system will be warped, bent or stretched. The improvement can be seen when flying parallel to the floor at constant height, for instance.

Thirdly, the stability will hopefully be improved. When the angles from two base stations match better, the estimated position will change less when one base station is occluded, which generally makes life easier for the position estimator. We will take a look at the outlier filter to see if it can be improved as well.

Remaining problems

The calibration data is transmitted as part of the sweeping light planes at a low bitrate. For lighthouse 1 the decoding process works well and all calibration data is usually received within 20-30 seconds. For lighthouse 2 it does not work as well: in our current implementation it takes (much) longer before all data has been received correctly from both base stations.

It is possible to get the calibration data via the USB port on lighthouse 2 and we are considering storing the calibration data in the Crazyflie somehow instead. This will be even more important when we support larger systems (2+ base stations) and all base stations are not within range at startup.

During the summer we discussed at the office what would be a good substitute for not being able to go to conferences or fairs anymore (see this blogpost). We sparred with a few ideas, ranging from organizing an online competition to a seminar. Although we initially were quite enthusiastic about organizing the competition, the user questionnaire from the previous blog post showed us that many of you are rather interested in online tutorials. Based on that we actually started to make some more step-by-step guides, but we would definitely agree that is not the same as meeting each other face-to-face!

So now we are planning to organize one for real this time! Our first online live tutorial will be on:

Wednesday 4th of November, 18:00 (CET, Malmö Sweden)

Register for the first session here to indicate your interest and to receive up-to-date information. There is of course no cost involved!

First topic: Spiraling Swarm Demo (Live!)

The last couple of years we have been showing our demo at many robotics conferences and fairs, such as ICRA, IMAV and IROS. Since we do not have an opportunity to do that anymore (at least for the foreseeable future), we thought that the Spiraling Swarm demo would be a suitable first topic for the online tutorial! We will go through the different elements of the demo, including the implementation details on the Crazyflie and the Lighthouse positioning system. We hope to explain all of it in about 20-30 minutes, which should enable you to set the demo up yourself if you want.

We have been thinking about just doing a prerecorded tutorial, however we really like talking with our users about their needs and research topics. That is why we think it is important to do it live, where we can answer your questions on the go or after the tutorial. This also means that we will be demonstrating the demo live as well! Afterwards there will be time to socialize and have a friendly chat :)

Mozilla Hubs and Discord

There are many options for how to host this event, as there are a gazillion alternatives for video conferencing. Currently we are looking at Mozilla Hubs, which fits nicely with our interest in the lighthouse positioning system with the HTC Vive base stations. The nice aspect of Hubs is that you don’t need a fancy headset to join, since it is possible to join via your browser or your phone. I (Kimberly) joined a Virtual Reality seminar at the beginning of the pandemic, organized by Roland Meertens of pinchofintelligence.com, and it was definitely a very interesting and fun experience. When giving a presentation, it really felt like people were paying attention and were engaged. So, we recently recreated our own flight lab in VR (using Hub’s environment creator Spoke) and tested it out ourselves. This way you will be able to see our workplace as well!

Of course, we can imagine not everybody is waiting to go full VR. That is why we will combine the online tutorial with Discord, where we will set up a video channel to stream the live demo and tutorial. It will also be possible to send messages that are visible in both the VR space and the Discord chat channel with Hub’s Discord bot. You can choose how to follow the tutorial — fully in VR, or first Discord and afterwards socialize in VR — that is totally up to you.

We still need to figure out the specifics, but if you register with your email we will send all the necessary information for the first session to you directly.

IOT conference Malmö

Now something else: tomorrow, namely Tuesday the 5th of October, we will also present at the IOT conference 2020 in Malmö. It is free for participants and it is still possible to register! Come and join if you cannot wait until the 4th of November to see us.

For a long time issue #270 has been bugging us. It caused the µSD-card logging to fail when used in combination with either the Flow or Loco deck, or actually any deck that uses the deck SPI bus. Several attempts have been made to fix this issue over time and recently we decided to really dig into it. There has been a workaround to move the µSD-card to a different SPI bus, but that was tedious and required patching the deck. So it was time to fix this for good, or at least know why it doesn’t work. An SPI bus is designed to be shared by multiple devices, so it should be possible… Timing problems are still tricky, but that is another story.

The problem

The SPI driver protects the bus with a mutex to prevent several clients from accessing it at the same time. After some digging we found that the FatFs integration layer was buggy and that the SPI bus handling wasn’t done well. After comparing this to some other open implementations we concluded that it needed to be rewritten.

The solution

After rewriting part of the integration layer to have a clear path for when the SPI bus is taken and when it is released, we immediately got some good results. µSD-card logging with the Flow and Loco decks worked, hooray! There is of course a limit to this: as we mentioned earlier, the bus is a shared resource, and if it is too congested, things will slow down or stop working. This is currently the case when the LPS is put in TWR mode. TWR is very chatty and causes around 15k transactions per second on the SPI bus, and since it has higher priority than the µSD-card logging, the µSD-card write task starves, causing the logging to fail.
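The pattern in the rewritten glue layer is roughly the one sketched below: every µSD-card access takes the shared deck SPI bus for the whole transaction and releases it afterwards. The names are approximations of the deck SPI API rather than the exact code.

// Sketch of a µSD-card block read in the FatFs glue layer.
static bool sdReadBlock(uint8_t* buffer, uint32_t sector)
{
  spiBeginTransaction(SPI_BAUDRATE_21MHZ);      // take the bus (mutex) and configure it
  bool ok = sdSpiReadSector(buffer, sector);    // hypothetical low-level SPI transfer
  spiEndTransaction();                          // release the bus for other decks
  return ok;
}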

µSD and LPS SPI bus captured with a logic analyzer, over 50ms
µSD and LPS SPI bus captured with a logic analyzer, over 6ms

So if you stay away from LPS in TWR mode, µSD-card logging should now work fine. I’m pretty sure there is a workaround for the TWR mode as well; a first guess is that you would need to slow down the TWR update rate, which is now at its maximum.

Happy logging!

Following the firmware releases, we have now released new versions of the Crazyflie client and python lib (cflib). It took a bit more time to test and fix various last-minute bugs, but we have now released Crazyflie client 2020.09 as well as Crazyflie lib 0.12.1.

Crazyflie python lib 0.12.1

The main new functionalities of the lib are:

  • Python 2.x support is now dropped. Official Python 2 support ended at the beginning of this year and it is not installed by default in Ubuntu 20.04, so it was time to stop supporting it in cflib
  • Some documentation work
  • Capabilities to abort a bootloading operation

Crazyflie client 2020.09

There have been a bunch of cosmetic and functional changes in the client. Some themes have been added so that the client can now be used with a dark blue or even a green-on-black hacker theme. The Qt theme is now also forced to be drawn by Qt on all platforms: this means that the client will not look like a native Windows or Mac app on Windows and Mac, but the styling will be consistent on all platforms. This will simplify development and make documentation consistent across all platforms.

The bootloader has been changed to automatically download releases of the firmware from Github by default. This is a great quality-of-life feature made by Victor this summer that makes it very easy to run Crazyflies with a clean release build.

New bootloader window

There are also a bunch of bugfixes; the full changelog can be read on the release page.

Future plans

Semantic versioning for the lib

We have been thinking of using semantic versioning for the lib and bumping the version to 1.0. This would allow us to communicate the state of the lib more accurately: it is not perfect, but it is perfectly usable. It would also give us more freedom to break compatibility when needed. A lot of things could be made better, but we are always very careful about not breaking backwards compatibility. Proper semantic versioning would allow us to make a Crazyflie client lib 2.0, making it clear that if you update from 1.0 to 2.0 you might have to make changes to your scripts.

Client binary release for more platform

So far, the Crazyflie client has had a binary release for Windows and there has been some work towards one for MacOS. By binary release we mean releasing the client in a form that does not require installing python and then the crazyflie client pip package; on Windows it is an installer and on Mac it would be an app bundle.

Last week we also worked on Linux releases; after all, most of us use Linux at Bitcraze, so we might as well show it some love. Today we have released a snap of the Crazyflie client. This means that you can install the Crazyflie client from the Ubuntu software application on Ubuntu, or directly via the snap install --edge cfclient command on any Linux system with snap installed. There are still some rough edges, and a stable version will only be available for the next release, but this should make it much easier to get started with the Crazyflie.

We are happy to announce that we have released a new version of the Crazyflie firmware, version 2020.09. It is available for download from Github.

The new firmware solves an old compatibility issue when using the LPS and Flow deck at the same time and also improves stability. A list of all the issues that have been fixed can be found on the release page.

For users that have an LPS system, we have also made some improvements to the LPS node firmware and are releasing version 2020.09.

If you are building the Crazyflie or LPS firmware from source, remember to update the libdw1000 git submodule using

git submodule update

We are working on a release of the python client as well, but still have a few issues to fix so stay tuned.

I am back from parental leave and during this time I tried not to think too much about Crazyflie-related things to get a little break. However, over time, while geeking around, I eventually ended up back at the Crazyflie and Crazyradio, designing a new channel-hopping communication protocol. This will likely be the subject of a future blog post, but for the time being I thought I could write a bit about how the current Crazyflie radio communication works.

Protocol layers

Like many protocols, the Crazyflie communication protocol is layered. This allows different elements to be plugged in at each level. When it comes to a Crazyflie client talking to a Crazyflie over the radio, the layering looks as follows:

Services are high-level functionalities like log, which allows getting the values of Crazyflie variables at regular intervals. At this level there is essentially an API with commands like addLogBlock.

CRTP is a protocol that encapsulates the commands for each service. It multiplexes packets on the link using port numbers; this is very similar to TCP/UDP ports on a network: each service listens and sends packets on a pre-specified port.
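As a rough idea of what that looks like on the wire, a CRTP packet is a one-byte header followed by a small payload, where the port selects the service and the channel sub-divides it. The layout below is an illustrative approximation, not the authoritative definition.

#include <stdint.h>

typedef struct {
  union {
    uint8_t header;
    struct {
      uint8_t channel : 2;  // sub-channel within the service
      uint8_t link    : 2;  // reserved / link-layer bits
      uint8_t port    : 4;  // service port (log, param, commander, ...)
    };                      // bit-field order shown here is only illustrative
  };
  uint8_t size;             // payload length in bytes
  uint8_t data[30];         // payload
} crtpPacket_t;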

Finally, the radio link only deals with raw packets. The role of the radio link is to deliver packets from the PC to the Crazyflie and vice versa. At this level we have many link implementations; the most used are the radio and the USB link, but there is also a Serial link that uses the Crazyflie serial port.

Radio link

The radio link is currently implemented by the Crazyradio (PA) on the PC side and the Crazyflie on the other side. The Crazyradio uses a Nordic Semiconductor nRF24LU1 USB/radio chip and the Crazyflie an nRF51822 MCU/radio. This is important since, while the nRF51 has a quite flexible radio, the nRF24 uses a standalone SPI radio that has most of the packet handling hard-coded to a protocol that Nordic calls Enhanced ShockBurst (ESB).

The ESB protocol handles sending packets and receiving acknowledgements automatically. A packet is sent on a radio channel, using a 5-byte address, and when this packet is received on the other side an acknowledgement is sent back using the same address on the same channel. Both the original packet and the acknowledgement can contain a payload.

To implement a bi-directional radio link, the Crazyradio is the one sending packets while the Crazyflie continuously listens and, when receiving a packet, sends back an ack. We use the payload capabilities of both the packets we send and of the acks to implement an uplink (PC->Crazyflie) and a downlink (Crazyflie->PC) data link.

Of course, one problem with this setup is that while the PC can decide to send a packet at any moment, the Crazyflie needs to wait for the PC to send a packet to have a chance to send one back in an ack. To make sure the Crazyflie has enough opportunities to send packets back, we send packets to the Crazyflie at a regular interval even if there is no packet to be sent. This polling allows us to implement a continuous downlink.

The most important thing to see here is that the radio link gives the upper layer, the CRTP layer, the illusion of a full-duplex link. On the radio side this is implemented by polling regularly for downlink packets.
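In pseudo-C, the radio-side loop could be sketched as follows (all names are hypothetical); the point is that every uplink transmission, even an empty one, gives the Crazyflie an opportunity to return a downlink payload in the ESB acknowledgement.

#include <stdint.h>
#include <stdbool.h>

typedef struct { uint8_t length; uint8_t data[32]; } Packet;
typedef struct { bool received; uint8_t payloadLength; uint8_t payload[32]; } Ack;

// Hypothetical helpers provided by the rest of the link implementation.
bool uplinkQueuePop(Packet* p);
Ack  esbSendAndWaitAck(const Packet* p);
void downlinkQueuePush(const uint8_t* data, uint8_t length);

void radioLinkTask(void)
{
  for (;;) {
    Packet up = { .length = 0 };                 // an empty packet is a pure poll
    uplinkQueuePop(&up);                         // overwrite with real data if any is queued
    Ack ack = esbSendAndWaitAck(&up);            // ESB: packet out, acknowledgement back
    if (ack.received && ack.payloadLength > 0) {
      downlinkQueuePush(ack.payload, ack.payloadLength);  // Crazyflie -> PC data
    }
  }
}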

Communicating with multiple Crazyflies

In order to communicate with multiple Crazyflies, we simply send packets to each Crazyflie one after the other. This way we give each Crazyflie an equal chance to send back packets, and in doing so we divide the available bandwidth between them.

The main advantage of the polling protocol versus a more traditional P2P protocol, where the Crazyflies would send whenever they want, is that with polling the Crazyradio is the master and we can guarantee that we are not introducing any packet collisions when we communicate.

Limitation and future

One major limitation of the current protocol is that it communicates on a single channel and requires the user to manually set the channel and address for each Crazyflie. This means that the user has to tinker with parameters to find a good channel and has to handle all addressing manually.

Another limitation is that the polling is done over USB by python or, in the case of ROS/Crazyswarm, in C++. This adds the USB latency to the equation and complicates the client implementation.

I have been working on defining a new protocol that would be implemented efficiently in a Crazyradio and that would handle addressing and channel hopping. The idea is to get closer to a connection style more like Bluetooth Low Energy, where you do not have to care about channels and setting addresses; you just connect your device. Unlike BLE though, this protocol will be optimized for low latency. Stay tuned, we will likely talk more about that in future posts :-).