Category: Frontpage

You may or may not have heard of a tool called Wireshark; it is quite popular in the software development world.

The official Wireshark logo


Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, communications protocol development and education. It makes analyzing what is going on with packet-based protocols easier.

Most often Wireshark is used for network-based protocols like TCP and UDP, to try to figure out what is happening with your networking code. But Wireshark also allows you to write your own packet dissector plugin, which means that you can register some code to make Wireshark handle your custom packet-based protocol.

For the latest release of the Crazyflie Python Library we added support for generating a log of the Crazy Real Time Protocol (CRTP) packets the library sends and receives. This is the (packet based) protocol that we use to communicate with the Crazyflie via radio and USB.

We generate this log in the PCAP format that Wireshark expects. We also created an initial version of a dissector plugin, written in the Lua programming language.

When we put these two things together it turns into a pretty cool way of debugging what goes on between your computer and the Crazyflie!

What does it look like?

Wireshark gives you a graphical interface where you can view all the packets in a PCAP file. You will see the timestamps of when they arrived. Selecting a packet will give you the information that the dissector has managed to deduce as well as how the packet looked on the wire.

On top of that you get powerful filtering tools. In the image below we have set a filter to view only packets that are sent or received on CRTP port 8, which is the port for the high-level commander. This means that from a log file that contains 44393 packets we now display only 9, which makes following what goes on with high-level commands a bit easier.

Wireshark view of filtering out packets on CRTP port 8

The dissector knows about the different CRTP ports and channels and knows how to dissect a high-level set-point, as seen in the image above.

What can this be used for?

This functionality is, we think, most useful when developing new functionality in the Crazyflie firmware or in the library. You can easily inspect what the library receives or sends and make sure it matches what your code intended.

But it can also be useful when doing client type work! We recently located the source of a bug in the Crazyflie client with the use of this Wireshark plugin.

It happened while updating the Parameters tab of the client to handle persistent parameters and to use a sidebar for extra documentation and value control. As I was testing the code I noticed that every time I changed the value of ring.effect to a valid integer and then disconnected and reconnected, the value was set to 0, regardless of the value I had set.

I recorded a session using the PCAP log functionality:

$ CRTP_PCAP_LOG=ring.pcap cfclient

And then I fired up Wireshark:

$ wireshark ring.pcap

It was now possible for me to track what the library and firmware thought was going on with the ring.effect parameter, by filtering on the crtp.parameter_varid field in Wireshark, going from 3282 packets down to 12.

I had earlier figured out that the varid of the ring.effect variable was 183. This is a quasi-internal representation of a parameter that we do not expose in a good way. In the future we will try to make this Wireshark tracking work with the parameter name as well.
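
A display filter along these lines does the job (the field name is defined by the dissector, and 183 is the varid mentioned above):

crtp.parameter_varid == 183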

Looking at the write parameter packet from USB #3 to the Crazyflie, I could see where I set the value of the parameter to 5. So far, so good.

Wireshark view of checking my setting of the ring.effect parameter to 5

The surprising part, however, was seeing a write further down setting the parameter to 0! This meant that something in the client was actually setting it to zero!

Wireshark view of something setting the ring.effect parameter to 0

After seeing this, locating the actual issue was trivial. I noticed that the Flight Control tab was setting the ring.effect parameter to the current index of the combo box in the UI. And when no LED-ring deck was attached, this amounted to always setting the value to zero.

But having confirmation that this was something happening on the client side, and not some kind of bug with the new persistent parameters, was very helpful!

How do you use this?

We have added documentation to the library repository on how to generate the PCAP log and how to install the Wireshark plugin.

But the quick-start guide is this:

  • Copy the tools/crtp-dissector.lua script to the default Wireshark plugin folder
    • Windows: %APPDATA%\Wireshark\plugins or WIRESHARK\plugins
    • Linux: ~/.local/lib/wireshark/plugins
  • Restart Wireshark or hit CTRL+SHIFT+L
  • Set the environment variable CRTP_PCAP_LOG to the filename of the PCAP log you want to generate
  • Run Wireshark with the filename as an argument
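
If you prefer to enable the log from a Python script rather than from the shell, something along these lines should also work (a minimal sketch, assuming the library picks up CRTP_PCAP_LOG when the link is opened; the URI is just an example):

import os
# Must be set before the link is opened so that the library starts logging to PCAP
os.environ['CRTP_PCAP_LOG'] = 'session.pcap'

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

cflib.crtp.init_drivers()
with SyncCrazyflie('radio://0/80/2M/E7E7E7E7E7', cf=Crazyflie()) as scf:
    # All CRTP packets exchanged during this session end up in session.pcap
    print('Connected to', scf.cf.link_uri)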

And please report any issues you find!

Happy hacking!

We are thrilled to announce the new 2022.1 release of the Crazyflie firmwares, library and client! There have been a lot of bug fixes, polishing and new features and we are glad we finally get to share it with all of you.

Noteworthy features and fixes

The features and fixes listed here are only a subset of all the bug fixes and other additions we have done in the last six months. For a more complete view, please check the release notes on GitHub.


Crazyflie STM firmware — Main firmware

Release notes on GitHub


Crazyflie NRF firmware — Radio and power management

Release notes on GitHub

  • We now report the version of the NRF firmware on the Crazyflie console

Crazyflie Python library — The official Python API

Release notes on GitHub


Crazyflie Client — The Crazyflie PC client

Release notes on GitHub

  • Rework of the Parameters tab
    • To better show parameter documentation and to include persistent functionality
Image of the Crazyflie PC client with new parameter tab
New Parameters tab with filtering search and sidebar with more information about values

Documentation

Along with all the new features, bug fixes and general polish of our software we have also spent time making sure our documentation is up-to-date and relevant! You can check it out on our website. Do not forget to check out the documentation in the individual repositories. The tutorial page has also gotten some love this cycle, check it out!

Please go forth and install this new release and please file issues with any problem you find!

Where to get it?

The firmware images for the Crazyflie STM firmware and the Crazyflie NRF firmware should already be available through the cfclient. And if you want to download them yourself you can find them at https://github.com/bitcraze/crazyflie-firmware/releases.

The Crazyflie Client and the Crazyflie Python library are available through PyPI (the Python Package Index). To install them you can use the following commands:

$ python3 -m pip install --upgrade cfclient # to install or upgrade the Crazyflie client

$ python3 -m pip install --upgrade cflib # to install or upgrade the Crazyflie Python library

Happy hacking!

This week we have a guest blog post from Anirudha Majumdar of Mechanical & Aerospace Engineering, Princeton University. Enjoy!

Overview

The course “Introduction to Robotics” at Princeton (MAE 345 / ECE 345 / COS 346) introduces students to fundamental theoretical and algorithmic principles in robotics. We use the Crazyflies to bring the excitement of drones to students in their very first introductory course in robotics. The course is primarily aimed at undergraduate students in their third and fourth years, and also offers a graduate-level track. In Fall 2021, approximately seventy undergraduate students and ten graduate students enrolled in the course from a variety of different majors: Mechanical and Aerospace Engineering, Computer Science, Electrical and Computer Engineering, Mathematics, Operations Research and Financial Engineering, Civil and Environmental Engineering, and Architecture.

The course covers the following technical topics:

  • Feedback Control;
  • Motion Planning;
  • State estimation, localization, and mapping;
  • Computer vision and learning;
  • Broader topics: Robotics and the law, ethics, and economics.

One of the primary aims of the course is to allow students to obtain hands-on experience implementing various algorithms on a hardware platform. For this, we use the Crazyflie 2.1 quadrotors with a camera attachment (Fig. 1). The Crazyflie is an ideal hardware platform for our course due to its light weight, reliability, safety, and open-source code. Throughout the semester, students work in teams to implement different algorithms from the course:

(i) linear quadratic regulator (LQR) for stabilizing the drone

(ii) rapidly-exploring randomized trees (RRTs) for motion planning through a forest of PVC pipe obstacles

(iii) the Lucas-Kanade optical flow algorithm

(iv) object-following with neural networks.

In the final project, students implement algorithms for autonomous vision-based navigation through novel (i.e., a priori unknown) obstacle environments (Fig. 2). Below, we describe the modified Crazyflies used for the course, along with the hands-on projects in more detail.

Open-source course materials and website

We have made the course materials (lecture notes, slides, and assignments) freely available through our course website. The assignments include theory questions, programming assignments in Python, and hardware implementation with the Crazyflie. All programming and hardware assignments are freely-available through GitHub. We hope that the course materials will be useful to instructors at other institutions in getting the next generation of engineers excited about robotics!

Fig. 1. Crazyflie with a camera attachment.
Fig. 2. Vision-based navigation.
Final project for course: vision-based navigation and target-seeking.

Hardware components and lab spaces

 The feedback control and motion planning hardware assignments can be completed with the following parts list:

For the vision-based assignments, we attach cameras to the drones, which transmit images in real time to a receiver unit plugged into a laptop. This requires the following additional parts:

We set up three netted arenas for students to complete hardware assignments. Each arena is approximately 12 ft x 8 ft in size; two of the arenas are shown in Fig. 3.

Fig. 3. Two netted flying arenas used for the course.

Crazyflie assignments

Lab 1: Linear Quadratic Regulator (LQR)

In the first hardware lab, students implement and tune a Linear Quadratic Regulator (LQR) controller to make the drone hover. This builds on the theoretical foundations of feedback control covered in the first module of the course. Implementing LQR control on a physical system allows students to get a sense for the “reality gap”, i.e., assumptions made by the theory that are not perfectly satisfied in real life (e.g., linear dynamics, perfect state estimation, and perfect knowledge of the dynamics). This assignment makes use of a modified version of the Crazyflie firmware that allows us to upload different controller gains (obtained using LQR) and run these on the drones. Code for this assignment is available here.
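
To give a flavor of the kind of computation involved, here is a generic LQR sketch in Python using scipy (this is not the course code; the model matrices are simplified placeholders for a linearized hover model):

import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized dynamics x_dot = A x + B u around hover (placeholder 2-state model:
# height error and vertical velocity, controlled through a thrust offset)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state cost: penalize height error the most
R = np.array([[0.1]])      # control cost

# Solve the continuous-time algebraic Riccati equation and form the gain matrix
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P   # the control law u = -K x stabilizes the system

print('LQR gain K =', K)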

Lab 2: Rapidly-Exploring Randomized Trees (RRTs)

In the second lab, students implement the RRT algorithm for motion planning. Each netted arena is set up with PVC pipe obstacles (similar to Fig. 2). Students measure the obstacle locations and radii and use the RRT algorithm to find a collision-free path from a starting location to a goal location. The Crazyflie then uses its waypoint tracking capabilities to execute the motion plan. Code for this assignment is available here.
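
As a rough illustration of the algorithm (not the course starter code), a bare-bones 2D RRT over circular obstacles can be sketched like this; the returned waypoints could then be handed to the Crazyflie's waypoint tracking:

import math
import random

def rrt(start, goal, obstacles, x_range, y_range, step=0.3, iters=2000, goal_tol=0.3):
    """Tiny 2D RRT sketch. Obstacles are (x, y, radius) circles."""
    def collision_free(p):
        return all(math.dist(p, (ox, oy)) > r for ox, oy, r in obstacles)

    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        sample = (random.uniform(*x_range), random.uniform(*y_range))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        heading = math.atan2(sample[1] - nearest[1], sample[0] - nearest[0])
        new = (nearest[0] + step * math.cos(heading), nearest[1] + step * math.sin(heading))
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.dist(new, goal) < goal_tol:
            # Walk back through the tree to recover the path, then reverse it
            path, node = [goal], new
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
    return None

print(rrt((0.0, 0.0), (3.0, 2.0), obstacles=[(1.5, 1.0, 0.3)], x_range=(0, 4), y_range=(0, 3)))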

Lab 3: Lucas-Kanade optical flow

In the computer vision module of the course, we first introduce students to “classical” vision approaches (before discussing modern deep learning techniques). One of the primary algorithms we discuss in this module is the Lucas-Kanade algorithm for computing optical flow (which has many applications such as computing time-to-collision). Students record a video using the drone’s on-board camera attachment and process this offline to compute optical flow. Code for this assignment is available here.
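
A minimal offline sketch using OpenCV's pyramidal Lucas-Kanade implementation could look like this (not the course code; the video file name is a placeholder):

import cv2

cap = cv2.VideoCapture('drone_recording.avi')   # video recorded with the on-board camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Pick good corners to track in the first frame
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track the corners from the previous frame into the current one
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    flow = p1[status == 1] - p0[status == 1]   # per-feature optical flow vectors
    if len(flow):
        print('mean flow (pixels):', flow.mean(axis=0))
    prev_gray, p0 = gray, p1[status == 1].reshape(-1, 1, 2)

cap.release()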

Lab 4: Object tracking

In the computer vision module of the course, we introduce students to deep learning-based approaches to vision. As a demonstration of the power of deep learning, we use neural networks to perform real-time object detection. Specifically, students use neural networks trained on the COCO dataset to detect a person (or an object such as a cup) using the drone’s camera, and use the resulting bounding box to command the drone in a way that makes it follow the person or object. The neural networks can be run on standard laptop computers at >= 30 Hz, allowing the drone to follow the person or object in real time. Code for this assignment is available here.
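
The control side of that pipeline is simple enough to sketch: the offset of the detected bounding box from the image center can be mapped to a yaw rate, and its apparent size to a forward velocity (a toy illustration, not the course code; the gains are made up):

def follow_command(bbox, frame_size, k_yaw=1.0, k_fwd=0.5, target_frac=0.1):
    """bbox = (x, y, w, h) in pixels, frame_size = (width, height).
    Returns (yaw_rate, forward_velocity) for the drone."""
    x, y, w, h = bbox
    fw, fh = frame_size
    # Horizontal offset of the box center from the image center drives the yaw rate
    yaw_rate = k_yaw * ((x + w / 2.0) - fw / 2.0) / (fw / 2.0)
    # Apparent size of the box drives the forward velocity (too small -> get closer)
    forward_velocity = k_fwd * (target_frac - (w * h) / float(fw * fh))
    return yaw_rate, forward_velocity

print(follow_command((200, 120, 80, 160), frame_size=(640, 480)))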

Final project: Vision-based obstacle avoidance and target-seeking

In the final project for the course, students implement vision-based navigation and target-seeking using the Crazyflie drones. Teams must program the drones to autonomously navigate through a forest of PVC pipes, whose locations are not known beforehand. Students utilize different computer vision algorithms (e.g., edge detection, contour finding, etc.) to detect obstacles and program strategies to avoid these obstacles. The goal is to navigate to the other side of the flying arena and land in front of a target object (in the form of a book). Again, teams use different approaches (e.g., neural networks) to find the book and then land in front of it. Further details and starter code for the project can be found here. See above for a video.

Acknowledgements

The following teaching staff have contributed greatly to the development of the materials for this course: Vincent Pacelli, Julienne LaChance, Jon Prevost, Alec Farid, Meghan Booker, Lena Rosendahl, David Snyder, Allen Ren, and Eric Lepowsky. 

Base station geometry estimation is a function in the python client (in the lighthouse tab), where the system estimates the position and orientation of the base stations. The user places the Crazyflie on the floor (in the desired origin) and clicks a button to measure the angles to the base stations, which are used to estimate the geometry. The current implementation is fairly basic and has some issues associated with it:

  1. All base stations must be received from the point where the Crazyflie is located
  2. Only 2 base stations are supported
  3. The coordinate system is not properly aligned with the room
  4. The generated geometry is not as good as it could be, that is, the position/orientation is sub-optimal
  5. The code has a dependency on OpenCV, which causes problems for ROS users

I have been working on a solution for these problems as my Fun Friday project, and in this blog post I will tell you a bit more about the problems and a possible solution.

Screenshot from the client of a geometry with 4 base stations.

What are the problems to be solved?

In the current implementation, the user places the Crazyflie in the origin, with the front of the Crazyflie pointing in the direction of the positive X-axis. When the user hits the “Estimate Geometry” button, the angles to the visible base stations are recorded and the solvePnP() function in OpenCV is used to estimate their poses (position and orientation). This is all fine but it also has its limitations and in the following section we will outline what the limitations are and how to solve them.

All base stations must be received in the origin and only 2 base stations are supported

Currently the Crazyflie ecosystem supports up to 2 base stations, and this works fine for a flight space of around 4×4 meters. With more base stations it would be possible to cover larger areas or multiple rooms, which is a feature that many users have been asking for. In these scenarios it will no longer be possible to receive all base stations from one position, and a new method for geometry estimation using multiple measurements will be required. Suppose base stations 1 and 2 are received in one position and 2 and 3 in another; then we can map the measurements together, since we know base station 2 must have the same pose in both measurements. This way it is possible to relate all base station poses to each other, provided there are measurements that link them together.

The coordinate system is not properly aligned with the room

When generating the geometry in the current implementation, the orientation of the Lighthouse deck is used to define the coordinate system: forward of the deck defines the X-axis, left defines the Y-axis and up the Z-axis. The problem is that the deck might not be perfectly aligned with the Crazyflie, the floor might not be completely flat or the Crazyflie might not point exactly in the desired direction. A pretty small misalignment will result in fairly large errors a couple of meters away, resulting in unexpected behavior, for instance not flying at constant height. Expanding to more base stations and larger systems, the problem will become even bigger and a better solution is clearly needed.

If we used measurements from the Crazyflie placed at multiple positions, we could use this information to rotate the coordinate system to be better aligned. For instance, suppose we measured some points on the floor of the flight space; then we could make sure the XY-plane of the coordinate system goes through those points, or at least as close as possible. Similarly, one or more measurements along the X-axis would help to define the rotation around the Z-axis.

The generated geometry is not as good as it could be

The lighthouse positioning system is based on measuring the angles between the sensors on the Lighthouse deck and the base stations. One can think of it as four beams or rays, going from each base station to the sensors on the deck, for which we measure the direction very precisely seen from the base station’s point of view. The purpose of the geometry estimation is to figure out the position and orientation of the base stations so that we can calculate how the beams are oriented in the flight space instead. By looking at where the beams from two base stations intersect we know where the sensors are located and can calculate the position of the Crazyflie. This is a somewhat simplified picture of how it works but it is sufficient for the following discussion.

So what happens if the geometry is not completely correct? If the estimated positions or orientations of the base stations are slightly off, the beams will not intersect and we have to use some method to find the point closest to the two beams instead, to use as the sensor position. In a real-world system there will always be errors and the implementation must be able to handle them, but we want to keep them as small as possible. Furthermore, we want to make sure the errors are uniformly distributed in the flight space so that we get equally good results everywhere.
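
For reference, the “point closest to the two beams” can be computed as the midpoint of the shortest segment between the two rays. A small sketch (not the actual implementation) could look like this:

import numpy as np

def closest_point_between_rays(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays (origin p, direction d),
    used as the estimated sensor position when the beams do not intersect exactly."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    b = d1 @ d2                      # cosine of the angle between the beams
    d, e = d1 @ w0, d2 @ w0
    denom = 1.0 - b * b              # goes to zero for parallel beams
    s = (b * e - d) / denom          # parameter along ray 1
    t = (e - b * d) / denom          # parameter along ray 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# Two beams that almost, but not quite, intersect
print(closest_point_between_rays(np.array([0.0, 0.0, 3.0]), np.array([1.0, 0.0, -1.0]),
                                 np.array([4.0, 0.1, 3.0]), np.array([-1.0, 0.0, -1.0])))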

In the current estimation process, where we take a measurement in one position, we are able to generate a geometry that is good at that point, but due to noise in the measurements and other subtleties the error at the edges of the flight space might be several centimeters.

The solution to this problem is to measure the angles in multiple positions and try to find a geometry where the error is equally small for all of them. It does not guarantee that the error will be equal everywhere, but if we make measurements in the volume we plan to fly in, we know it will be OK where we need it to be. It should also be a much better geometry, for the full covered volume, than what can be achieved by measuring in one point only.

One bonus problem that hopefully will be solved by this approach is the moving back and forth that sometimes can be seen in a Lighthouse 2 system. What happens is that the base stations interfere with each other from time to time (by design) and most of the time the Crazyflie gets positioning information from both base stations, but every couple of seconds only from one of them. When both are available the “average” position is used, but when only one is received, the Crazyflie will “jump” to the position indicated by that base station (the simplified model from above with crossing beams does not hold in this case, sorry!). If the difference between the suggested positions of the two base stations (the error in the geometry) is large there will be a noticeable motion in the Crazyflie.

The code has a dependency on OpenCV

In the current solution we use the solvePnP() function in OpenCV to estimate the geometry. OpenCV is an awesome library, but unfortunately it has turned out that this dependency interferes with ROS, and since a fair amount of our users also use ROS, we would like to get rid of it if possible.

Luckily I found an open-source implementation of IPPE, an algorithm that finds the pose of an object based on points seen by a camera, that we can use instead. There is actually an option to use IPPE in OpenCV’s solvePnP(), but we used another flavor.

The solution

The core idea is to first collect measurements of beams in many positions in the flight space by moving a Crazyflie around and recording the lighthouse angles. Secondly, an equation system is created that takes the poses of the base stations and all the recorded Crazyflie poses as input, and as output calculates the lighthouse angles those poses would correspond to for all the sensors. Finally the output is compared to the recorded values, and the poses are adjusted using the least-squares solver in scipy to find the poses that minimize the difference between the measurements and the output from the equation system.
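
Schematically, the optimization step could look like the sketch below (heavily simplified and not the actual implementation; project_to_angles stands in for the real model that maps base station and Crazyflie poses to lighthouse angles):

from scipy.optimize import least_squares

def project_to_angles(bs_poses, cf_poses):
    # Placeholder for the real sensor model: given base station poses and
    # Crazyflie poses, return the lighthouse angles each sensor would measure.
    raise NotImplementedError

def residuals(params, measured_angles, unpack):
    # params is one flat vector holding all base station and Crazyflie poses
    bs_poses, cf_poses = unpack(params)
    predicted = project_to_angles(bs_poses, cf_poses)
    return (predicted - measured_angles).ravel()

# initial_guess comes from IPPE (see below) and measured_angles from the samples:
# result = least_squares(residuals, initial_guess, args=(measured_angles, unpack))
# refined_bs_poses, refined_cf_poses = unpack(result.x)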

Before we can solve the equation system we have to record the angles from the base stations. There is a handy function in the Crazyflie that pushes measured lighthouse angles to the PC via the radio, and by letting the user move the Crazyflie around in space we get the angles along that path. What we are looking for though are angles collected in discrete positions and as an approximation I group measurements together based on time. The assumption is that if two angle measurements are closer than 10 ms in time, the Crazyflie did not move very far and they can be considered to be taken in the same position. The output of this process is a list of samples where each sample contains the measured lighthouse angles of one or more base stations for one specific Crazyflie pose. After this has been done, the list is filtered to only contain samples with two or more base stations.
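
The grouping itself is straightforward; here is a sketch of the idea (not the actual code, with a made-up data layout for illustration):

def group_into_samples(measurements, max_gap=0.01):
    """measurements: list of (timestamp_s, bs_id, angles) sorted by time.
    Measurements closer than max_gap seconds are assumed to be taken at the same
    Crazyflie pose and are merged into one sample {bs_id: angles}."""
    samples, current, last_time = [], {}, None
    for t, bs_id, angles in measurements:
        if last_time is not None and t - last_time > max_gap:
            samples.append(current)
            current = {}
        current[bs_id] = angles
        last_time = t
    if current:
        samples.append(current)
    # Keep only samples where two or more base stations were seen, as described above
    return [s for s in samples if len(s) >= 2]

print(group_into_samples([(0.000, 2, 'angles_bs2'), (0.004, 3, 'angles_bs3'), (0.200, 1, 'angles_bs1')]))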

We also need an initial guess of the base station and Crazyflie poses for the least-squares solver to make the solution converge. I use IPPE for this and use the first sample as the reference to define a temporary global reference frame. Suppose the first sample contains angles for base stations 2 and 3; we can then use IPPE to calculate an estimate of the poses of the two base stations in the Crazyflie reference frame of this sample. Since we use the first sample as the reference for the global reference frame (that is, the pose of the Crazyflie in this sample is the origin by definition), those poses are also equal to the base station poses in the global reference frame.

Suppose the next sample contains lighthouse angles for base stations 1 and 2; using IPPE we can estimate the poses of base stations 1 and 2 in the reference frame of the Crazyflie in this sample. Since the relative positions of the base stations are the same, regardless of reference frame, we can rotate/translate the poses of the base stations so that the base station 2 pose matches the pose of base station 2 in the first sample. We now have an estimate of the poses of base stations 1, 2 and 3; furthermore, the transformation used represents the pose of the Crazyflie in sample 2. Repeating the process for all samples gives us a pretty good idea of where all the base stations are located, as well as the pose of the Crazyflie in all the samples.
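
Expressed with homogeneous transformation matrices, that chaining step could look roughly like this (a sketch with made-up 4x4 matrices, not the actual implementation):

import numpy as np

def chain_sample(T_global_bs2, T_sample_bs1, T_sample_bs2):
    """T_global_bs2: pose of base station 2 in the global frame (known from sample 1).
    T_sample_bs1/2: poses of base stations 1 and 2 in the Crazyflie frame of the new sample.
    Returns the pose of base station 1 in the global frame and the pose of this
    sample's Crazyflie in the global frame."""
    # Transform that maps the new sample's Crazyflie frame into the global frame,
    # chosen so that base station 2 lands on its already-known global pose
    T_global_sample = T_global_bs2 @ np.linalg.inv(T_sample_bs2)
    T_global_bs1 = T_global_sample @ T_sample_bs1
    return T_global_bs1, T_global_sample

# Toy example: in the new sample the Crazyflie has moved 1 m along X
T_global_bs2 = np.eye(4); T_global_bs2[:3, 3] = [2.0, 0.0, 3.0]
T_sample_bs2 = np.eye(4); T_sample_bs2[:3, 3] = [1.0, 0.0, 3.0]
T_sample_bs1 = np.eye(4); T_sample_bs1[:3, 3] = [-1.0, 0.0, 3.0]
bs1_global, cf_global = chain_sample(T_global_bs2, T_sample_bs1, T_sample_bs2)
print(bs1_global[:3, 3], cf_global[:3, 3])   # base station 1 at (0, 0, 3), Crazyflie at (1, 0, 0)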

We can now feed the initial guess and the equation system into scipy and hopefully get a refined solution back. From the estimated poses of the base stations and Crazyflie samples it is possible to calculate the distance between sensors and beams which gives us an approximation of how good the solution is.

The final step is to align the coordinate system with the room; as mentioned earlier, the solution we have so far is based on the pose of the Crazyflie in the first sample. The way it is done in the suggested implementation is to ask the user to place the Crazyflie at some points in the desired origin, on the positive X-axis and in the XY-plane, and measure the angles in these positions. The measurements are included as samples in the above process, which means we will get the estimated positions as a part of the overall solution, in the temporary global reference frame. The task at hand is then to find the rotation/translation from the temporary global reference frame to the one indicated by the positions sampled by the user. Again we do a least-squares optimization to find the transformation that minimizes the error in the sampled points. We can now calculate the final solution by applying the transformation to the base station poses we got earlier.

Does it work?

Yes, it seems to work pretty well. I have not had the time to do extensive testing yet, but the results look promising. In our flight arena with 4 base stations, the solution generally seems to be acceptable. We do not know the exact poses of our base stations since they are very hard to measure, but they are mounted in the same truss and should be at similar heights.

Base stations at:
1: (-3.7104629351065146, -0.27330674065567867, 2.960720481536423)
2: (-0.9233909349006646, -2.9651389799486356, 2.9781503155699176)
3: (-0.12450705551081238, 3.430497907723026, 3.011201684709142)
4: (2.74012584124908, 0.5856524795079388, 3.023133069381165)
Solution match per base station:
1: {'mean_error': 0.0026020322270174697, 'max_error': 0.013310934923630531, 'std_error': 0.0028768923969783836}
2: {'mean_error': 0.0015240237742724164, 'max_error': 0.005526851773945277, 'std_error': 0.0011560341160273498}
3: {'mean_error': 0.002193101969828834, 'max_error': 0.006778096051979129, 'std_error': 0.0015768914826109067}
4: {'mean_error': 0.0033752667182490796, 'max_error': 0.014997173956894249, 'std_error': 0.00354931189334688}

The above snippet is part of the output from one run and as can be seen the estimated height is between 2.96 and 3.02 m. You can also see that the estimated average error for sensor positions is in the order of 2-3 mm while the maximum error is 1.5 cm.

Below is a graph of the recorded Crazyflie positions in the final solution. Note the three single points at the bottom that are from the origin, the X-axis and the XY-plane.

Estimated positions of the Crazyflie

I did some testing on larger systems with 6-8 base stations this Friday, and it seemed to be harder to get a solution that converges, which indicates that there might be something to look into here.

Try it out

This is still work in progress, but if you want to try it out you can find the code in this pull request. Run the examples/lighthouse/bs_geometry_estimation.py script; you will get instructions on the screen as you go.

Officially the firmware supports 2 base stations, but most of the code is designed to handle up to 16, and if you want to test the functionality with more than two base stations you have to update PULSE_PROCESSOR_N_BASE_STATIONS and re-flash your Crazyflie.

Any feedback is welcome; please use the pull request.

Today we will have a guest blogpost by Dominik Natter, working in the Robotics & Control group at SINTEF in Trondheim, Norway. Enjoy!!

In this blogpost we will teach you how to fly the Crazyflie beyond edges without crashing, using only on-board sensors. Come join in!

flying over edges
Safe flights across edges are achievable!

Introduction

UAVs have seen tremendous progress in the last decades and have since moved from research labs to various real-world environments. Small UAVs (so-called micro air vehicles, MAVs) like the Crazyflie open up even more possibilities. For example, their size allows them to traverse narrow passages or fly in cluttered environments (as recently showcased in this blog post). However, to achieve these complex tasks the community must further improve the cognitive abilities of these MAVs in order to avoid crashes.

One task on this list, and today’s topic, is the possibility to fly at constant altitude irrespective of the terrain. This feature was already discussed in the community two years ago. To understand the problem, let’s look at the currently implemented solution: with the Flow deck mounted, the Crazyflie uses a 1D lidar sensor to estimate its vertical position. This vertical position (more or less) equals the current sensor reading. On flat floors this solution works very well. However, if the Crazyflie is to traverse a narrow window or fly above irregular terrain, its altitude will change based on the sensor readings. This can lead to unstable flights, as in the following video, or even crashes!

You might wonder: why not use any of the other great tools from the Bitcraze universe? Indeed, the Lighthouse positioning system and the Loco positioning system work well for absolute positioning (as we have seen earlier, e.g., in this blog post). However, the required setups are often not available in difficult environments. Alternatively, the barometer could be used to achieve a solution based solely on on-board sensors. In fact, Bitcraze proposed an altitude hold functionality a few years ago. This is a cool feature, but its positioning accuracy of “roughly ±15cm” is not fully satisfying. Finally, relying on the on-board IMU alone will inevitably lead to drifting over time.

Thus, we propose a solution based on the Flow deck and the Multiranger deck. This approach, based only on on-board sensors, makes it possible to fly at constant altitude with obstacles above, below, or even both above and below the Crazyflie. Kristoffer Skare developed this solution when he worked with us as an intern in 2021.

Technical Description

As a first step, the upward-facing lidar of the Multiranger deck is incorporated in the same way the downward-facing lidar of the Flow deck is used in the firmware. This additional measurement can then be used in the extended Kalman filter (EKF) to improve the state estimation. Currently, the EKF estimates and outputs one value for the altitude. For our purpose, two more states are added to the EKF: one state is defined as the height of the object under the Crazyflie relative to the height where the altitude state is defined as 0. Similarly, the other state is defined as the height of the object above the Crazyflie relative to the same reference height. The Crazyflie therefore keeps track of the environment in order to keep its own altitude constant. To achieve this, an edge detection was implemented: the errors between the predicted and measured distances are tracked for both the upward and downward range measurements. If either of these errors is too large, the algorithm assumes that the floor or roof has changed (while the original EKF would think the drone’s position has changed, triggering a change in thrust). Thus, the corresponding state gets updated. For more details on the technical implementation and the code itself, check out our pull request.
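
In (heavily simplified) Python-like pseudo code, the edge detection logic can be summarized as follows (this is not the firmware code, just the idea described above; the threshold value is a placeholder):

def update_height_states(state, up_range, down_range, edge_threshold=0.1):
    """state holds: altitude, floor_height and roof_height, all in the same reference frame.
    up_range / down_range are the Multiranger and Flow deck lidar readings in meters."""
    predicted_down = state['altitude'] - state['floor_height']
    predicted_up = state['roof_height'] - state['altitude']

    if abs(down_range - predicted_down) > edge_threshold:
        # Large error: assume the floor changed (an edge), not the drone's altitude
        state['floor_height'] = state['altitude'] - down_range
    if abs(up_range - predicted_up) > edge_threshold:
        # Same reasoning for an obstacle above the drone
        state['roof_height'] = state['altitude'] + up_range
    # Otherwise the range measurements correct the altitude in the EKF as usual
    return state

# The drone hovers at 0.5 m and suddenly sees the floor only 0.27 m below: a box appeared
print(update_height_states({'altitude': 0.5, 'floor_height': 0.0, 'roof_height': 2.0},
                           up_range=1.5, down_range=0.27))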

Results

To analyze our approach we have used a Qualisys motion capture system. We have conducted many different tests: flying over different obstacles, flying at different velocities, flying at different altitudes, and even flying under different lighting conditions. As examples, in this post we will have a look at a baseline experiment, a good estimate, and a bad estimate. In each picture you can see the altitude (in meters) over time (in seconds) for different flight speeds (in centimeters per second). You will see three lines: the motion capture ground truth (blue), the altitude estimated by our code (orange), and the new state keeping track of the floor height (green). For each plot, the Crazyflie takes off, flies in the positive x direction, and lands.

In the baseline experiment, it flies over a flat floor. Clearly, the altitude estimates follow the ground truth values well, and the floor is correctly estimated to be flat.

baseline experiment
Baseline experiment flying over a flat floor

In the next example, we have added a box with an approximate height of 0.225 m and made the Crazyflie fly over it. Despite the obstacle, the altitude estimates follow the ground truth values well. Note how the floor estimate indicates the shape of the box.

experiment with box
Experiment flying over a box

Because the algorithm is based on edge detection, we had a hunch that smoothly changing obstacles would pose a problem. Indeed, the estimates can be messy, as we see in the next example. Here, the Crazyflie flies over a right triangle, with the short leg (0.23 m) pointing upwards and the long leg (0.65 m) pointing in the flight direction (thus forming a slope). For different flight speeds, the estimates turn out quite differently.

experiment with slope
Experiment flying over a slope

If you don’t like looking at plots, check out this video with some cool shots instead!

Conclusion

To summarize, we propose a solution for constant altitude flight with Crazyflies, using the Flow deck and the Multiranger deck. We have tested it successfully under various circumstances. Still, we see some potential for improvement, e.g. when dealing with slopes. In addition, the current implementation is quite a change to the original EKF, which poses a problem for integration.

Thus, a way forward can be an out-of-tree build to ease the use of the solution for the community. At SINTEF we certainly plan to deploy this code in all of our tests in 2022, which will hopefully allow us to gather more experience and thus find further ways to improve or tune the system.

We want to emphasize that this is not a perfect solution. That means a) you should use it with care and b) you are very welcome to contribute. For example, feel free to chime in on the pull request, test the code in your environments, propose improvements, or implement an out-of-tree build! :) Maybe you can even come up with an alternative approach for constant altitude flights?

If you want to check out more of our work, visit our website. Also, keep reading this amazing blog from Bitcraze as we try to be back some day (if Bitcraze wants us hehe)!

We’ve had an exciting year in 2021, and we’re eager to see what 2022 will bring! Let’s see what’s in the pipeline and what we hope for in this new year.

Products

The AI deck and Bolt out of Early Access

We’ve put a lot of effort during these last months into working on the AI deck’s firmware and infrastructure. With great help from our intern Rik, we managed to make huge leaps, and hopefully sometime in the coming months we’ll be able to share what we worked on. I can already tell you that the incoming release will bring some needed improvements to flashing the GAP8 chip and improved image streaming! As the AI deck is one of the most challenging of our decks, we also hope to add an extensive tutorial (that we call the “mega tutorial”) to help you work with it.

We have also started to push some framework changes to make it easier for you to build bigger drones with the Crazyflie Bolt. One of those is the persistent parameter system that we recently implemented on the Crazyflie, and we will add more and more of these types of features. The hope is that we will be able to provide some kind of assembly kit for a larger Bolt-based drone, for which we have already done some initial battery investigations.

Prototypes

Fun Fridays are usually our time to play around with new possibilities and prototypes. Marcus has already made great strides, and hopefully in 2022 we’ll be able to go even further with those. Arnaud has also been working on the much-awaited new iteration of the Crazyradio, with a new chip and an improved communication protocol. Tobias, our dedicated hardware man, also has ideas in the pipeline in the form of a brushless Crazyflie, as we already showed in our future plans presentation at November’s BAMdays. We also hope to initiate the design process of a new and improved version of the Crazyflie with more power and processing capabilities.

People and Collaborations

Last year we continued our close collaboration with researchers at institutes and universities, helping them achieve their goals and contribute their work to our open-source firmware and software. It has proven really fruitful, both for us and the people we talk to, so we hope 2022 will once again bring closer and newer connections.

We were really happy with our first online conference of our own, which helped us reconnect and talk to our community about all the awesomeness achieved with the Crazyflies. We hope to arrange something similar on a more regular basis, to keep talking about collaborations and possibilities, and in general to share all the work that’s been done on the platform. Those “lightweight” BAMs should arrive soon, so stay tuned if you want to join them!

Component shortage and productions issues

We expect to still be dealing with the component shortage, as it is expected to last for at least another year, even two. Production is therefore a continuous challenge, with a lot of unpredictability, and we will try to find better solutions to deal with it in 2022. Thankfully, we have good hopes of keeping good stock levels throughout the crisis, as we’ve increased our stock. We’ll of course keep you updated on any big news regarding the crisis and how it is affecting us.

Unfortunately, the component shortage also means that it’s harder to make prototypes. It’s difficult to find and/or buy just one chip, which causes delays in our creative hardware developments. It is what it is… but we will surely be able to find solutions, as we have during our 10-year history!

Anything else?

Of course our heads are always full of ideas and we are passionate to work on anything! We have ambitions in developing a simulation for our users or CI, doing more measurements with the new thrust stand or adding further improvements to our documentation and tutorials. And we might also meet new interesting people (digitally or in person?) who might give us enough inspiration to start something completely new! Soon we will have our quarterly meeting, where we try to herd and select our passions and ideas into conceivable plans and actions.

With all these exciting projects, we’re really excited to see what 2022 has in store for us! I hope you too have an awesome year 2022.

2021 is coming to an end. As we’re about to flip the page on a new calendar, let’s take a look at what happened during this past year.

Community

It’s no news to you, but keeping in touch with our community during a pandemic has forced us to try new ways to meet and interact. The main event for us this year has been the BAM days: our first conference held entirely by ourselves, full of exciting talks and fun, and we’re very happy with how it went down! You can still watch the talks we hosted during those three days in our dedicated Youtube playlist.

Guest blogposts

Once again, we’ve had the honor to host some awesome guests on our blog, which you can read (or re-read) here:

Software

We had 3 releases this year (one in January, one in March and one in June). We worked a lot on improving things like the Crazyflie logging and parameter interface or wondered how to deal with our API. We’ve also implemented a new way to store things, and we now have new, powerful persistent parameters.

Thanks to Jonas’ hard work, we also set up a “crazy lab” here in Malmö. During Fun Fridays, Arnaud has been experimenting with Rust, working on a web client.

Hardware

We’ve been dealing since 2020 with a hardware crisis. The component shortage has made production erratic and difficult, and for the first time in Bitcraze’s history, we’ve had to increase our prices.

Lighthouse

The first part of the year was dedicated to the Lighthouse system. At the beginning of the year, we finally got it out of early access. We documented the new Lighthouse functionality and even wrote a paper for ICRA about the Lighthouse accuracy with Wolfgang’s help. We created the Lighthouse swarm bundle, gathering every element needed to fly a swarm with this positioning system.

AI deck

We worked a lot on the AI deck this year. We upgraded to the AI deck 1.1, with a gray-scale camera and a newer version of the GAP8. Part of this work was also to improve the documentation and information we had on it. We had a workshop with PULP, and dreamed of a mega-tutorial.

Documentation

As usual, we’ve been trying to improve our documentation. This year, that included an API reference in our Python library, and rethinking our structure.

Bitcraze

Bitcraze has once again grown in numbers, as we welcomed Jonas into our ranks. We were also really happy to add Wolfgang to the team for a few months, as well as Rik, an intern from MAV-lab.

The biggest event this year for us was our 10-year anniversary. That’s right, Bitcraze turned 10 in September, and we tried to celebrate it as much as possible! We did so with a nice outing and with the BAM days, but also with a video trying to show what 10 years of Bitcraze looks like:

Christmas is just around the corner, and it’s time for the traditional Christmas video! This year, we wanted to use the AI deck, as we’ve been working hard on this deck for some time now. Showcasing its new features in this festive video seemed like the best idea.

This year Santa needed help finding the presents and getting them delivered, so he asked Bitcraze! Let’s see how it played out:

I’m sure you’re wondering how we managed to set this up, so let’s discuss how we did it!

Picking up the packages

The goal was to pick up some packages and place them in the sleigh, and it all worked out pretty well. All the flying in the video is scripted using the Python library, and positioning is done using Qualisys’ motion capture system with Active marker decks. The trajectories are hard-coded, and with some careful adjustments we managed to lift and fly the 4 Crazyflies attached to the same present, even though it was a bit wobbly from time to time. Getting the present into the sleigh was not as easy, and we might have taken some shortcuts here, as well as when attaching/removing lines.

Sometimes detangling the wires was a team effort

Picking up the second package needed some precision, and it went incredibly well, on the first try! The present was very light and needed someone to hold it, to prevent it from moving when the Crazyflie approached.

Getting the sleigh off

Our first tries included 5 strings attached to the sleigh, but it was difficult to get the right tension at the right time to have the UAVs actually pull the weight. Here came the “rubber band solution”: we just attached all the strings to a rubber band, which was itself attached to the sleigh. That way, the tension could even out when all the Crazyflies were in the air and ready to pull the sleigh.

The rubber band

The AI deck camera/streamer

When we started this project, the intention was to run a neural network in the AI-deck to identify or classify the presents. We did not manage to get to a point where we had something that actually added value to the story, so we settled for just streaming the video from the AI-deck camera on the scouting drone instead.

The AI-deck example with color camera viewer is still under development, but if you want to give it a try you can take a look at the readme in the github repo.

Bloopers

And finally, as an added bonus, if you ever wonder how many tries it takes to make 5 Crazyflies pull a sleigh, here is a little behind-the-scenes video too!

I have returned from visiting my family in California, whom I hadn’t seen in 3 years due to Covid. To spend as much time as possible with them, the plan was that I would still work full time for Bitcraze from my father’s home. The problem, however, was that it wouldn’t fit so well with our current way of working, as I would miss all the morning stand-up meetings due to the large time difference between Sweden and California (-9 hours). That is why we agreed that I would work on separate projects/investigations during my time away. So I thought it would be a great opportunity to dig into ROS and 3D simulations again and see what the latest state of those is! Simulations are therefore what I’ll mostly be talking about in this blog post, in terms of what simulators are out there and what simulation development is currently ongoing.

Need for simulation?

Why would it actually be necessary to have a simulation in our current framework? Just to give an example, my new colleague Jonas recently tried his hand at the cflib swarm class for the first time for the BAMdays tutorials, and a simulator would have been great during that initial process. Namely, most of the crashes were not necessarily due to low batteries or bad communication, but mostly due to the fact that he was not able to double-check his script beforehand. If one were able to check that all the programmed positions of the Crazyflies are implemented as they should be before an actual flight, this would prevent a lot of broken propellers!

Just to note here that there are a lot of types of simulations that you can think of. Earlier this year our ex-interns Max and Josephine finished a Renode simulation of the Crazyflie’s microcontrollers. We’ve also seen the word Simulink pop up multiple times on the forum, which indicates that quite a few control classes are investigating the dynamic model of the Crazyflie. However, the type of simulation that I’m currently referring to is the 3D simulator in which a robot or quadcopter can move and interact with a virtual environment, usually with a physics engine in effect.

Crazyflie in Gazebo (+ROS)

During some initial investigation there were already some simulations that stood out. First of all I went and looked into what is available for Gazebo at the moment, which is:

CrazyS is based on the RotorS simulation, with some additional off-board Crazyflie controllers for position control. I wasn’t able to build it for my Ubuntu 20.04 just yet myself, but there is ongoing work to port CrazyS to ROS Noetic. For now, on a virtual machine with ROS Melodic it builds just fine! Note that my laptop did have to work quite hard when I wanted to simulate more than 1 Crazyflie, but the physics and plugins that were made for Gazebo enable many to do a lot for their research. Please check out the core papers about CrazyS!

Sim_cf is perhaps a little lesser known, but the project does stand out as it has some interesting features. It is, for instance, possible to use the actual C-based firmware in software-in-the-loop (SITL) mode, which controls the simulated Crazyflie. It is even possible to use an actual Crazyflie with a hardware-in-the-loop (HITL) simulation. Even though the project is not actively maintained anymore, I did manage to build it from source for ROS Noetic and Gazebo 11, although I was not able to fly more than 4 due to errors.

Other Simulators

Of course, Gazebo is not the only possibility out there. I also had a quick go at another simulator called Webots, which is quite an interesting option as well. Currently there is only one quadcopter model available, so it only makes sense for it to also contain a Crazyflie! They use their own robot format, so probably the easiest approach would be to convert an existing model for Gazebo/ROS into a format that Webots can understand.

Also, quite recently, a trending tweet brought to our attention an RViz-based Crazyflie simulation! This looks quite promising as well, so I will try it out quite soon too.

Crazyflie in Ignition Gazebo

Ongoing work in Ignition Gazebo

So in the future, Gazebo in its current form will disappear and will only be part of Ignition. That is why it made sense for me to start playing with a separate Crazyflie model and plugins for the Ignition framework instead. Moreover, it seems that quite a few elements and plugins based on the RotorS simulation for the original Gazebo are now fully integrated within the Ignition Gazebo framework, which should make it easier to make quadcopter models fly. Currently it’s still work in progress, so right now it is only to be found on my personal GitHub repository, but as soon as it becomes more fleshed out and stable, it will probably be transferred to Bitcraze’s GitHub repos and we will write a more elaborate blog post about it. For now, I’ll keep working on it as my Fun Friday project!

In the meantime, we have started a simulation discussion thread in the Crazyswarm2 repository, which is an ongoing port of Crazyswarm to ROS 2. It would be ideal if we were able to use this simulator for both Crazyswarm and our native cflib! But I’ve mostly used Gazebo in the past, so if there are any other simulators that we should try out too, please join the discussion and let us know!

The Crazyflie parameter framework provides a way to get, set and monitor parameters in the quadcopter. Examples of parameters include which effect the LED ring will display as well as which controller to use for flight.

The parameters have default values and, up until now, have been reset to those default values on each restart of the Crazyflie. This has not been seen as much of a problem (or has it? Let us know!) since most use of the Crazyflie platform has been session-based and the need for persistent parameters has been low. Our work on getting the Crazyflie Bolt out of early access has, however, changed this.

The Crazyflie Bolt kit

The Bolt will need different values for the controller tuning parameters, and being forced to set these on each boot would be pretty annoying. And since we want the Crazyflie Bolt and the Crazyflie 2.1 to share the same firmware, persistent parameters would come in handy.

Fortunately the Crazyflie includes an EEPROM (Electrically Erasable Programmable Read-Only Memory) which we can use to store parameter values and have them survive restarts of the quad.

So this means that, with the latest branches of the firmware and the Python library, you can set the value of a persistent parameter, store it to EEPROM, and the parameter will keep its value across resets and even across firmware upgrades.

Persistent Parameter API

Not all parameters will be able to be persistent and stored in the EEPROM. We have added a way to declare a parameter as persistent in the firmware:

PARAM_ADD_CORE(PARAM_UINT8 | PARAM_PERSISTENT, effect, &effect)

This will allow the new API added to our python library to be used with this parameter. The API consists of three actions:

  • Getting the state of a persistent parameter
  • Storing the current value of a persistent parameter
  • Clearing the stored value of a persistent parameter

The state of a persistent parameter consists of three pieces of information:

  • Is the value stored or not
  • The default value of the parameter
  • The stored value of the parameter

Setting a value using the regular parameter API is not enough to store the value to the EEPROM for a persistent parameter; you need to be explicit. A special API call is needed to store the current value to memory. And to stop storing the value you need to use the clear API action.

Python API

The above actions correspond to three methods in our Python API:


def persistent_get_state(self, complete_name, callback)

Get the state of the specified persistent parameter. The state will be returned in the supplied callback. The state is represented as a named tuple with members: is_stored, default_value and stored_value. The state is None if the parameter is not persistent or if something goes wrong.


def persistent_store(self, complete_name, callback=None)

Store the current value of the specified persistent parameter to EEPROM. The supplied callback will be called with True as an argument on success, and with False as an argument on failure.


def persistent_clear(self, complete_name, callback=None)

Clear the current value of the specified persistent parameter from EEPROM. The supplied callback will be called with True as an argument on success and with False as an argument on failure.
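
Put together, a minimal usage sketch could look like the code below (this is not the official example script, which is shown further down; the URI is just an example and a short sleep stands in for proper callback synchronization):

import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

def stored(*args):
    # Per the API description, the callback receives True on success and False on failure
    print('Stored!' if args[-1] else 'Failed to store')

cflib.crtp.init_drivers()
with SyncCrazyflie('radio://0/80/2M/E7E7E7E7E7', cf=Crazyflie()) as scf:
    cf = scf.cf
    # Set the value with the regular parameter API...
    cf.param.set_value('ring.effect', 10)
    # ...and explicitly store it to EEPROM so it survives restarts
    cf.param.persistent_store('ring.effect', callback=stored)
    # Inspect the state of the parameter (default value, stored value)
    cf.param.persistent_get_state('ring.effect', callback=print)
    time.sleep(1)   # give the callbacks time to arrive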


Example code

We have also added an example script that showcases the functionality; it lists the state of all persistent parameters in the firmware, and stores and clears some of them. The output when run against current master is:

$ CFLIB_URI=radio://0/80/2M/E7E7E7E7E8 python3 examples/parameters/persistent_params.py 
-- All persistent parameters --
sound.effect: PersistentParamState(is_stored=False, default_value=0, stored_value=None)
ring.effect: PersistentParamState(is_stored=True, default_value=6, stored_value=2)

Set parameter ring.effect to 10

Store the new value in persistent memory
Persisted ring.effect!
The new state is:
ring.effect: PersistentParamState(is_stored=True, default_value=6, stored_value=10)

Clear the persisted parameter
Cleared ring.effect!
The new state is:
ring.effect: PersistentParamState(is_stored=False, default_value=6, stored_value=None)

You can count on more parameters to be marked as persistent in the near future. Hopefully this will be useful for you! Please report any issues you find!

Happy Hacking!