Keeping things in stock has not been easy the last couple of years due to the general problems with availability of components. We have been mitigating this by increasing stock volumes when possible, but we have also redesigned some products so that we can switch to other components. A positive side effect is that this also enabled us to make some small changes we have wanted to do for a long time.

The decks we have updated are the Lighthouse, SD-card and BigQuad decks. There are no big functionality changes, so the decks have not received an updated version, only a new board revision.

Lighthouse (Rev.D -> Rev.D1)
The outline of the PCB has changed a bit in the hope of protecting the photo-diode sensors better during hard crashes.

SD-card (Rev.C -> Rev.D)
Some solder bridges were added to the bottom of the PCB to make it easier to utilize the “hidden” SPI port. This can be useful if you want to log a lot of values to the SD-card while also using decks that occupy the SPI port, such as the Loco or Flow decks. See the datasheet for more details.

BigQuad (Rev.C -> Rev.C1)
The capacitor C1 was removed. It was used to filter the analog current measurement reading, but it also caused problems for the SPI bus on the deck port. The SPI bus turned out to be the more used functionality, so capacitor C1 was removed. If the analog filtering is wanted, a 100 nF 0603 capacitor can be soldered to the C1 pads.

From now on we ship the updated revisions when you order from our store.

Jonas is leaving Bitcraze

We are sad to announce that Jonas is leaving Bitcraze. He has been involved in a lot of GitHub management, setting up the Crazy Stabilization lab, and various improvements and tools within our ecosystem. Although he will be missed, we are excited that he is able to start a new chapter in his life and wish him the best in his future endeavors.

This week we merged a pretty big change in the Crazyflie firmware code repository. The change alters the way we configure and build what goes into the drone. We now make use of the Kbuild build system.

The Kbuild build system is the build system used, foremost, by the Linux kernel, but it is also used in other projects like Busybox, U-Boot and, in a modified form, Zephyr. It is mostly known for its terminal-based configuration tool, menuconfig.

A view of the Expansion deck configuration in the menuconfig

Kbuild leverages Kconfig files to build up a hierarchy of configuration options to use when building the software. It allows you to set up dependencies between your configuration options, which lets us do things like enabling the Kalman filter only when a deck driver that needs it is enabled.
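
As a sketch, a Kconfig fragment expressing that kind of dependency could look like the one below. The option names here are illustrative and may not match the firmware exactly:

config DECK_LIGHTHOUSE
        bool "Support the Lighthouse positioning deck"
        default y
        # Enabling the deck driver pulls in the estimator it depends on
        select ESTIMATOR_KALMAN

config ESTIMATOR_KALMAN
        bool "Include the extended Kalman estimator"
        default y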

This new way of building the firmware replaces the old way of using config.mk to set the build defines you need. Our hope is that Kbuild will make it easier to customize the Crazyflie firmware to fit the needs of your department or project.

What does this mean for you?

If you are not changing the firmware as part of your Crazyflie development, this will not change anything for you. The Python library will continue to work just like before and Bitcraze will release official firmwares, just like before.

If you are in the habit of fetching and building the latest and greatest version of the Bitcraze firmware there will be some minor changes. These can be seen in our updated build documentation on the web. The biggest deal is that the firmware code needs a configuration file before building is possible. To get the default one you can go:

$ make defconfig
make[1]: Entering directory '/home/jonasdn/sandbox/kbuild-firmware/build'
  GEN     ./Makefile
scripts/kconfig/conf  --defconfig Kconfig
#
# configuration written to .config
#
make[1]: Leaving directory '/home/jonasdn/sandbox/kbuild-firmware/build'

The way to compile app-layer applications has changed a bit and you will need to adapt (sorry!). The new way of building your app-layer application can be seen in the updated documentation.

If you make heavy use of config.mk and frequently change code in the firmware, there are many new possibilities for you. Check the documentation and keep reading this blog.

Making the firmware more modular

With the new build system's help we hope to make it easier to enable and disable features and subsystems in the quadcopter. In the default firmware all drivers for all expansion decks are included, as well as all estimators. If you are working on a feature or experiment that needs more RAM or flash, that might be inconvenient for you.

As an experiment we can try building the current maximum, minimum and default configurations of the Crazyflie. We say current because the work to make the firmware more modular is ongoing.

The default configuration, used for the official firmware, can be obtained by invoking the special make target defconfig.

$ make defconfig

Building the maximum is done using allyesconfig, which gives us a configuration file with all options enabled.

$ make allyesconfig

And conversely the minimum configuration can be set using allnoconfig, which will disable all features that can be disabled.

$ make allnoconfig

The resulting firmware sizes can be seen in the table below:

Build          Flash          RAM           CCM
defconfig      232 KB (23%)   76 KB (59%)   57 KB (89%)
allyesconfig   428 KB (42%)   80 KB (62%)   57 KB (90%)
allnoconfig    139 KB (14%)   62 KB (48%)   45 KB (71%)

This shows some of the potential of the modularization of the firmware. We hope it will make it easier for you to get your stuff to fit, without having to hack around in the code too much.

Making it easier for us to merge your contributions

The new system makes it easier to include code in the firmware repository without necessarily including it in the official firmware. This will make it easier for us to merge controllers, estimators, algorithms, deck drivers and other stuff from you.

We can include them in our Kconfig files, allowing people to select them and build firmware using them, and we can make sure they get (at least compile) tested as part of our continuous integration. So you can sleep soundly knowing your code will not suddenly break with new versions of the firmware.

Creating and distributing your own config

If you want to create your own configuration and spread it around, you can do so.

You can use:

$ make menuconfig

to create a base .config file with your special configuration. Then copy the file, or have us merge it, into the configs/ directory:

$ cp build/.config configs/waggle-drone_defconfig

Then it will be possible for other people to build your configuration by going:

$ make waggle-drone_defconfig
make[1]: Entering directory '/home/jonasdn/sandbox/kbuild-firmware/build'
  GEN     ./Makefile
#
# configuration written to .config
#
make[1]: Leaving directory '/home/jonasdn/sandbox/kbuild-firmware/build'

But it would also be possible to add only the options that differ from the default configuration to your config file:

$ echo CONFIG_PLATFORM_BOLT=y > configs/waggle-drone_defconfig

$ make waggle-drone_defconfig

$ grep PLATFORM build/.config
# CONFIG_PLATFORM_CF2 is not set
CONFIG_PLATFORM_BOLT=y
# CONFIG_PLATFORM_TAG is not set

Help out and test it please!

This is quite a big change and we are still shaking out bugs. Please give it a test run and report any issues you find!

If you want to help out, there is a GitHub project that contains the issues we know about; feel free to grab one and contribute your solution!

Happy hacking!

The Toolbelt has been around for a pretty long time, but we have not been that good at promoting it and its documentation is unfortunately a bit sparse. In this blog post we will talk a bit about the Toolbelt and how to use it.

The basic idea behind the Toolbelt is to provide an easy-to-access tool that helps the user do common tasks in Bitcraze projects, without installing a lot of special tool-chains, libs or programs. The intention is also to harmonize usage across all our projects, to make them as similar as possible and reduce the cognitive load when switching between repositories.

A functional view

After a standard installation (see below), the Toolbelt is available using the tb command and it is intended to be executed from the root of (almost) any Bitcraze repository file tree. You can run tools (commands) from the toolbelt with the extra spice that they run in the required environment; for instance, when building the firmware the correct compiler is automatically available.

Without any arguments the Toolbelt will display a brief help. For instance if I run it in the root of the crazyflie-firmware repository I get this:

kristoffer@kristoffer-XPS-13-9310:~/code/bitcraze/crazyflie-firmware$ tb
Usage:  tb [-d] tool [arguments]
The toolbelt is used to develop, test and build Bitcraze modules. When the toolbelt is called, it will first try to find the tool in the belt, after that it will try the tools in the module if the working directory is the root of a module. Module tools are executed in the context of a docker container based on the module requirements configured in the module.json config file.

-d:  print the docker call that executes the tool

Tools in the belt:
  help, -h, --help - Help
  update - Update tool belt to latest version
  version, -V, --version - Display version of the tool belt
  ghrn - Generate release notes from github milestone
  docs - Serve docs locally

Tools in the current module:
  build
  test
  compile
  check_elf
  make
  test_python
  clean
  build-docs

We can see that there are two groups of tools: “Tools in the belt”, which are available in all repositories, and “Tools in the current module”, which are specific to the current repository.

To run a tool, simply use tb followed by the command. To build the firmware, for instance, you can use make:

kristoffer@kristoffer-XPS-13-9310:~/code/bitcraze/crazyflie-firmware$ tb make
Running script tools/build/make in a container based on the bitcraze/builder docker image as uid 1000
Using default tag: latest
latest: Pulling from bitcraze/builder
Digest: sha256:bee591d94db757465b88338c69be847cdf527698f0270ea1a86a2ccaa3c9845d
Status: Image is up to date for bitcraze/builder:latest
docker.io/bitcraze/builder:latest
make: Entering directory '/module'
  CLEAN_VERSION
  CC    stm32f4xx_dma.o
  CC    stm32f4xx_exti.o
  CC    stm32f4xx_flash.o
  CC    stm32f4xx_gpio.o
...

We can see that the firmware is built, but we did not have to set up our environment beforehand as instructed on the build/install page. The Toolbelt handled that by providing a precompiled development environment to do the firmware compiling for us.

I will not go through all the tools but I’d like to mention the docs command. The docs command starts a web server and renders a simplified version of the documentation for a repository (in the docs directory). It is useful when browsing or editing the documentation.

Implementation

The Toolbelt is based on Docker and runs in a container. When executing a tool, the Toolbelt starts a second container where the tool runs. This second container is called a builder and it contains all the software required to execute the tool. There are a few different builders with tool-chains appropriate for various languages, CPUs and so on; luckily the Toolbelt picks the correct one automatically.

The directory that the tool is executed from is mapped into the docker containers and this is how the tools access the files, for instance when compiling.

In the example above we can see that the Toolbelt pulls the latest version of the bitcraze/builder image from Docker Hub to have the latest and greatest builder when running make. Downloading the builder image takes a while the first time (or when it has been updated), but most of the time no download is needed and starting a tool takes only around 1 second.

The builder images are also used by our build servers for CI and release builds, which means that building with the Toolbelt replicates the exact same environment as on our build servers.

The tools that are specific to a repository can be found in the tools/build and tools/build-docs directories. They are usually bash or python scripts and often they can also be executed without the toolbelt if you have the appropriate software installed on your system.

The source code for the Toolbelt is available on GitHub. You can also find the source code for the builders on GitHub; search for “builder”.

Installation

The Toolbelt is mainly designed for Linux-like environments and works on MacOS and in WSL (Windows Subsystem for Linux). Some operations are slowish on Mac as file access is a bit slower from Docker containers.

To run the Toolbelt you need to have Docker installed on your system; after that, installation is as simple as adding an alias to your .bashrc (or similar). For instructions run:

docker run --rm -it bitcraze/toolbelt
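
As an illustration only (the command above prints the authoritative instructions), such an alias could look something like this, mounting the working directory and the Docker socket so that the Toolbelt can start builder containers:

alias tb='docker run --rm -it -v ${PWD}:/module -v /var/run/docker.sock:/var/run/docker.sock bitcraze/toolbelt'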

Native installation VS Toolbelt VS Virtual machine

There are three paths for building and working with Bitcraze source code; native install, the Toolbelt and the VM (Virtual machine). They all have their pros and cons.

Native installation

All build tools installed on the machine.

Pros: fast, access to USB and Crazyradio which enables flashing of firmware, can use your standard development environment

Cons: Possible compatibility issues with other software on the system. Must maintain installation and upgrade from time to time.

Toolbelt

Pros: highly separated from the OS, automatically updated with the appropriate tools and versions

Cons: can not access USB and the Crazyradio – flashing not possible. No access to GUIs – can not run the client

VM

Pros: Everything ready in one place, also supports USB, Crazyradio and flashing. Client works. Highly separated from the OS.

Cons: A bit bulky

Conclusions

The Toolbelt is an option for users that are interested in working with the source code for the Bitcraze ecosystem, but do not want to put too much time into installing tool-chains and setting up environments. It does not solve all problems but hopefully simplifies some tasks.

Any feedback is welcome!

You might, or might not, have heard about a tool called Wireshark; it is quite popular in the software development world.

The official Wireshark logo


Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, communications protocol development and education. It makes analyzing what is going on with packet-based protocols easier.

Most often Wireshark is used for network-based protocols like TCP and UDP, to try to figure out what is happening with your networking code. But! Wireshark also allows you to write your own packet dissector plugin, which means that you can register some code to make Wireshark handle your custom packet-based protocol.

For the latest release of the Crazyflie Python Library we added support for generating a log of the Crazy Real Time Protocol (CRTP) packets the library sends and receives. This is the (packet based) protocol that we use to communicate with the Crazyflie via radio and USB.

We generate this log in the PCAP format that Wireshark expects. And we also created an initial version of a dissector plugin, written in the programming language Lua.
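
To give a feel for what a dissector plugin looks like, here is a heavily stripped-down sketch in Lua. It is illustrative only: the real plugin is tools/crtp-dissector.lua in the repository, and the protocol and field names below are placeholders:

-- Minimal CRTP dissector sketch (illustrative, not the shipped plugin)
crtp_proto = Proto("crtp_demo", "Crazy Real Time Protocol (demo)")
local f_port = ProtoField.uint8("crtp_demo.port", "Port", base.DEC)
local f_channel = ProtoField.uint8("crtp_demo.channel", "Channel", base.DEC)
crtp_proto.fields = { f_port, f_channel }

function crtp_proto.dissector(buffer, pinfo, tree)
    pinfo.cols.protocol = "CRTP"
    local subtree = tree:add(crtp_proto, buffer(), "CRTP Packet")
    local header = buffer(0, 1):uint()
    -- The CRTP header byte holds the port in the high nibble and the
    -- channel in the two lowest bits
    subtree:add(f_port, buffer(0, 1), bit.rshift(header, 4))
    subtree:add(f_channel, buffer(0, 1), bit.band(header, 0x03))
end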

When we put these two things together it turns into a pretty cool way of debugging what goes on between your computer and the Crazyflie!

What does it look like?

Wireshark gives you a graphical interface where you can view all the packets in a PCAP file. You will see the timestamps of when they arrived. Selecting a packet will give you the information that the dissector has managed to deduce as well as how the packet looked on the wire.

On top of that you get powerful filtering tools. In the image below we have set a filter to view only packets that are received or sent on CRTP port 8, which is the port for the high level commander. This means that from a log file that contains 44393 packets we now only display 9, which makes following what goes on with high level commands a bit easier.

Wireshark view of filtering out packets on CRTP port 8

The dissector knows about the different types of CRTP ports and channels and knows how to dissect a high level set-point, as seen in the image above.

What can this be used for?

This functionality is, we think, most useful when developing new functionality in the Crazyflie firmware, or in the library. You can easily inspect what the library receives or sends and make sure it matches what your code intended.

But it can also be useful when doing client type work! We recently located the source of a bug in the Crazyflie client with the use of this Wireshark plugin.

It was when updating the Parameters tab of the client to handle persistent parameters, and to use a sidebar for extra documentation and value control. As I was testing the code I noticed that every time I changed the value of ring.effect to a valid integer and then disconnected and reconnected, the value was set to 0, regardless of the value I had set.

I recorded a session using the PCAP log functionality:

$ CRTP_PCAP_LOG=ring.pcap cfclient

And then I fired up Wireshark:

$ wireshark ring.pcap

It was now possible for me to track what the library and firmware thought was going on with the ring.effect parameter, by tracking the crtp.parameter_varid field using Wireshark, filtering down from 3282 packets to 12.

I had earlier figured out that the varid of the ring.effect variable was 183. This is a quasi-internal representation of a parameter that we do not expose in a good way. In the future we will try to make this Wireshark tracking work with the parameter name as well.

Looking at the write parameter packet from USB #3 to the Crazyflie I could see where I set the value of the parameter to 5, so far so good.

Wireshark view of checking my setting of the ring.effect parameter to 5

The surprising part however was seeing a write further down setting the parameter to 0! This meant that something in the client was actually setting it to zero!

Wireshark view of something setting the ring.effect parameter to 0

After seeing this, locating the actual issue was trivial. I noticed that the Flight Control tab was setting the ring.effect parameter to the current index of the combo box in the UI. And when no LED-ring deck was attached, this amounted to always setting the value to zero.

But having confirmation that this was happening on the client side, and was not some kind of bug with the new persistent parameters, was very helpful!

How do you use this?

We have added documentation to the library repository on how to generate the PCAP log and how to install the Wireshark plugin.

But the quick-start guide is this:

  • Copy the tools/crtp-dissector.lua script to the default Wireshark plugin folder
    • Windows: %APPDATA%\Wireshark\plugins or WIRESHARK\plugins
    • Linux: ~/.local/lib/wireshark/plugins
  • Restart Wireshark or hit CTRL+SHIFT+L
  • Set the environment variable CRTP_PCAP_LOG to the filename of the PCAP log you want to generate
  • Run Wireshark with the filename as an argument

And please report any issues you find!

Happy hacking!

We are thrilled to announce the new 2022.1 release of the Crazyflie firmwares, library and client! There have been a lot of bug fixes, polishing and new features and we are glad we finally get to share it with all of you.

Noteworthy features and fixes

The features and fixes listed here are only a subset of all the bug fixes and other additions we have done in the last six months. For a more complete view, please check the release notes on GitHub.


Crazyflie STM firmware — Main firmware

Release notes on GitHub


Crazyflie NRF firmware — Radio and power management

Release notes on GitHub

  • We now report the version of the NRF firmware on the Crazyflie console

Crazyflie Python library — The official Python API

Release notes on GitHub


Crazyflie Client — The Crazyflie PC client

Release notes on GitHub

  • Rework of the Parameters tab
    • To better show parameter documentation and to include persistent functionality
Image of the Crazyflie PC client with the new Parameters tab
New Parameters tab with filtering search and sidebar with more information about values

Documentation

Along with all the new features, bug fixes and general polish of our software we have also spent time making sure our documentation is up-to-date and relevant! You can check it out on our website. Do not forget to check out the individual repositories’ documentation. And the tutorial page has gotten some love this cycle, check it out!

Please go forth and install this new release, and file issues with any problems you find!

Where to get it?

The firmware images for the Crazyflie STM firmware and the Crazyflie NRF firmware should already be available through the cfclient. And if you want to download them yourself you can find them at https://github.com/bitcraze/crazyflie-firmware/releases.

The Crazyflie Client and the Crazyflie Python library are available through Pypi (The Python Package Index), to install them you can use the following commands:

$ python3 -m pip install --upgrade cfclient # to install or upgrade the Crazyflie client

$ python3 -m pip install --upgrade cflib # to install or upgrade the Crazyflie Python library

Happy hacking!

This week we have a guest blog post from Anirudha Majumdar of Mechanical & Aerospace Engineering, Princeton University. Enjoy!

Overview

The course “Introduction to Robotics” at Princeton (MAE 345 / ECE 345 / COS 346) introduces students to fundamental theoretical and algorithmic principles in robotics. We use the Crazyflies to bring the excitement of drones to students in their very first introductory course in robotics. The course is primarily aimed at undergraduate students in their third and fourth years, and also offers a graduate-level track. In Fall 2021, approximately seventy undergraduate students and ten graduate students enrolled in the course from a variety of different majors: Mechanical and Aerospace Engineering, Computer Science, Electrical and Computer Engineering, Mathematics, Operations Research and Financial Engineering, Civil and Environmental Engineering, and Architecture.

The course covers the following technical topics:

  • Feedback Control;
  • Motion Planning;
  • State estimation, localization, and mapping;
  • Computer vision and learning;
  • Broader topics: Robotics and the law, ethics, and economics.

One of the primary aims of the course is to allow students to obtain hands-on experience implementing various algorithms on a hardware platform. For this, we use the Crazyflie 2.1 quadrotors with a camera attachment (Fig. 1). The Crazyflie is an ideal hardware platform for our course due to its light weight, reliability, safety, and open-source code. Throughout the semester, students work in teams to implement different algorithms from the course:

(i) linear quadratic regulator (LQR) for stabilizing the drone

(ii) rapidly-exploring randomized trees (RRTs) for motion planning through a forest of PVC pipe obstacles

(iii) the Lucas-Kanade optical flow algorithm

(iv) object-following with neural networks.

In the final project, students implement algorithms for autonomous vision-based navigation through novel (i.e., a priori unknown) obstacle environments (Fig. 2). Below, we describe the modified Crazyflies used for the course, along with the hands-on projects in more detail.

Open-source course materials and website

We have made the course materials (lecture notes, slides, and assignments) freely available through our course website. The assignments include theory questions, programming assignments in Python, and hardware implementation with the Crazyflie. All programming and hardware assignments are freely-available through GitHub. We hope that the course materials will be useful to instructors at other institutions in getting the next generation of engineers excited about robotics!

Fig. 1. Crazyflie with a camera attachment.
Fig. 2. Vision-based navigation.
Final project for course: vision-based navigation and target-seeking.

Hardware components and lab spaces

 The feedback control and motion planning hardware assignments can be completed with the following parts list:

For the vision-based assignments, we attach cameras to the drones, which transmit images in real-time to a receiver unit plugged into a laptop. This requires the following additional parts:

We set up three netted arenas for students to complete hardware assignments. Each arena is approximately 12 ft x 8 ft in size; two of the arenas are shown in Fig. 3.

Fig. 3. Two netted flying arenas used for the course.

Crazyflie assignments

Lab 1: Linear Quadratic Regulator (LQR)

In the first hardware lab, students implement and tune a Linear Quadratic Regulator (LQR) controller to make the drone hover. This builds on the theoretical foundations of feedback control covered in the first module of the course. Implementing LQR control on a physical system allows students to get a sense for the “reality gap”, i.e., assumptions made by the theory that are not perfectly satisfied in real life (e.g., linear dynamics, perfect state estimation, and perfect knowledge of the dynamics). This assignment makes use of a modified version of the Crazyflie firmware that allows us to upload different controller gains (obtained using LQR) and run these on the drones. Code for this assignment is available here.
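
As a generic illustration of the gain computation itself (this is the textbook calculation, not the course starter code; A and B are the linearized drone dynamics and Q and R the cost weights):

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    # Solve the continuous-time algebraic Riccati equation and return
    # the gain K for the control law u = -K x
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P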

Lab 2: Rapidly-Exploring Randomized Trees (RRTs)

In the second lab, students implement the RRT algorithm for motion planning. Each netted arena is set up with PVC pipe obstacles (similar to Fig. 2). Students measure the obstacle locations and radii and use the RRT algorithm to find a collision-free path from a starting location to a goal location. The Crazyflie then uses its waypoint tracking capabilities to execute the motion plan. Code for this assignment is available here.
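
A compact sketch of the RRT loop for this kind of 2D setting is shown below. It is illustrative only: the arena bounds, step size and obstacle format are assumptions, and the assignment's starter code is authoritative:

import math, random

def rrt(start, goal, obstacles, bounds=(4.0, 8.0), step=0.3, iters=5000):
    # obstacles is a list of ((x, y), radius) tuples for the PVC pipes
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, bounds[0]), random.uniform(0, bounds[1]))
        # Steer from the nearest tree node towards the random sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        new = (nodes[i][0] + step * (sample[0] - nodes[i][0]) / d,
               nodes[i][1] + step * (sample[1] - nodes[i][1]) / d)
        if any(math.dist(new, c) < r for c, r in obstacles):
            continue  # the new node would collide with an obstacle
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < step:
            # Close enough: walk back through the tree to recover the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None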

Lab 3: Lucas-Kanade optical flow

In the computer vision module of the course, we first introduce students to “classical” vision approaches (before discussing modern deep learning techniques). One of the primary algorithms we discuss in this module is the Lucas-Kanade algorithm for computing optical flow (which has many applications such as computing time-to-collision). Students record a video using the drone’s on-board camera attachment and process this offline to compute optical flow. Code for this assignment is available here.
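
For reference, sparse Lucas-Kanade flow on a recorded video can be computed with OpenCV along these lines (a sketch using OpenCV's built-in implementation and a made-up filename; the assignment may instead have students implement the algorithm themselves):

import cv2

cap = cv2.VideoCapture("recorded_flight.mp4")  # hypothetical filename
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Pick good corner features to track in the first frame
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                             qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Lucas-Kanade: track the features from the previous frame into this one
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    flow = p1[status == 1] - p0[status == 1]  # per-feature displacement
    prev_gray = gray
    p0 = p1[status == 1].reshape(-1, 1, 2)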

Lab 4: Object tracking

In the computer vision module of the course, we introduce students to deep learning-based approaches to vision. As a demonstration of the power of deep learning, we use neural networks to perform real-time object detection. Specifically, students use neural networks trained on the COCO dataset to detect a person (or an object such as a cup) using the drone’s camera, and use the resulting bounding box to command the drone in a way that makes it follow the person or object. The neural networks can be run on standard laptop computers at >= 30 Hz, allowing the drone to follow the person or object in real-time. Code for this assignment is available here.
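
The steering part can be as simple as turning the bounding box offset into a yaw command, as in this sketch (the gain and command interface are assumptions, not the course code):

def yaw_command(bbox, frame_width, k_yaw=0.5):
    # bbox = (x_min, y_min, x_max, y_max) in pixels, from the detector
    x_center = (bbox[0] + bbox[2]) / 2.0
    # Horizontal offset of the target from the image center, in [-1, 1]
    offset = (x_center - frame_width / 2.0) / (frame_width / 2.0)
    # Positive offset: the target is to the right, so yaw right towards it
    return k_yaw * offset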

Final project: Vision-based obstacle avoidance and target-seeking

In the final project for the course, students implement vision-based navigation and target-seeking using the Crazyflie drones. Teams must program the drones to autonomously navigate through a forest of PVC pipes, whose locations are not known beforehand. Students utilize different computer vision algorithms (e.g., edge detection, contour finding, etc.) to detect obstacles and program strategies to avoid these obstacles. The goal is to navigate to the other side of the flying arena and land in front of a target object (in the form of a book). Again, teams use different approaches (e.g., neural networks) to find the book and then land in front of it. Further details and starter code for the project can be found here. See above for a video.

Acknowledgements

The following teaching staff have contributed greatly to the development of the materials for this course: Vincent Pacelli, Julienne LaChance, Jon Prevost, Alec Farid, Meghan Booker, Lena Rosendahl, David Snyder, Allen Ren, and Eric Lepowsky. 

Base station geometry estimation is a function in the Python client (in the Lighthouse tab), where the system estimates the position and orientation of the base stations. The user places the Crazyflie on the floor (in the desired origin) and clicks a button to measure the angles to the base stations, which are used to estimate the geometry. The current implementation is fairly basic and has some issues associated with it:

  1. All base stations must be received from the point where the Crazyflie is located
  2. Only 2 base stations are supported
  3. The coordinate system is not properly aligned with the room
  4. The generated geometry is not as good as it could be, that is the position/orientation is sub-optimal
  5. The code has a dependency on OpenCV, which causes problems for ROS users

I have been working on a solution for these problems as my fun Friday project and in this blog post I will tell you a bit more about the problems and a possible solution.

Screenshot from the client of a geometry with 4 base stations.

What are the problems to be solved?

In the current implementation, the user places the Crazyflie in the origin, with the front of the Crazyflie pointing in the direction of the positive X-axis. When the user hits the “Estimate Geometry” button, the angles to the visible base stations are recorded and the solvePnP() function in OpenCV is used to estimate their poses (position and orientation). This is all fine, but it also has its limitations; in the following sections we will outline what the limitations are and how to solve them.

All base stations must be received in the origin and only 2 base stations are supported

Currently the Crazyflie ecosystem supports up to 2 base stations and this works fine for a flight space of around 4×4 meters. With more base stations it would be possible to cover larger areas or multiple rooms, which is a feature that many users have been asking for. In these scenarios it will no longer be possible to receive all base stations from one position though, and a new method for geometry estimation using multiple measurements is required. Suppose base stations 1 and 2 are received in one position and 2 and 3 in another; then we can map the measurements together, since we know base station 2 must have the same pose in both measurements. This way it is possible to relate all base station poses to each other, provided there are measurements that link them together.

The coordinate system is not properly aligned with the room

When generating the geometry in the current implementation, the orientation of the Lighthouse deck is used to define the coordinate system: forward of the deck defines the X-axis, left defines the Y-axis and up the Z-axis. The problem is that the deck might not be perfectly aligned with the Crazyflie, the floor might not be completely flat or the Crazyflie might not point exactly in the desired direction. A pretty small misalignment will result in fairly large errors a couple of meters away, resulting in unexpected behavior, for instance not flying at constant height. Expanding to more base stations and larger systems, the problem will become even bigger and a better solution is clearly needed.

If we recorded the position of the Crazyflie when placed at multiple points, we could use this information to rotate the coordinate system to be better aligned. For instance, suppose we measured some points on the floor of the flight space; then we could make sure the XY-plane of the coordinate system goes through those points, or at least as close as possible. Similarly, one or more measurements along the X-axis would help to define the rotation around the Z-axis.

The generated geometry is not as good as it could be

The lighthouse positioning system is based on measuring the angles between the sensors on the Lighthouse deck and the base stations. One can think of it as four beams or rays, going from each base station to the sensors on the deck, for which we measure the direction very precisely seen from the base station’s point of view. The purpose of the geometry estimation is to figure out the position and orientation of the base stations so that we can calculate how the beams are oriented in the flight space instead. By looking at where the beams from two base stations intersect we know where the sensors are located and can calculate the position of the Crazyflie. This is a somewhat simplified picture of how it works but it is sufficient for the following discussion.

So what happens if the geometry is not completely correct? If the estimated positions or orientations of the base stations are slightly off, the beams will not intersect and we have to use some method to find the point closest to the two beams to use as the sensor position instead. In a real-world system there will always be errors and the implementation must be able to handle them, but we want to keep them as small as possible. Furthermore, we want to make sure the errors are uniformly distributed in the flight space so that we get equally good results everywhere.

In the current estimation process, where we take a measurement in one position, we are able to generate a geometry that is good at that point, but due to noise in the measurements and other subtleties the error at the edges of the flight space might be several centimeters.

The solution to this problem is to measure the angles in multiple positions and try to find a geometry where the error is equally small for all of them. It does not guarantee that the error will be equal everywhere, but if we make measurements in the volume we plan to fly in we know it will be OK where we need it to be. It should also be a much better geometry, for the full covered volume, than what can be achieved by measuring in one point only.

One bonus problem that hopefully will be solved by this approach is the moving back and forth that sometimes can be seen in a Lighthouse 2 system. What happens is that the base stations interfere with each other from time to time (by design) and most of the time the Crazyflie gets positioning information from both base stations, but every couple of seconds only from one of them. When both are available the “average” position is used, but when only one is received, the Crazyflie will “jump” to the position indicated by that base station (the simplified model from above with crossing beams does not hold in this case, sorry!). If the difference between the suggested positions of the two base stations (the error in the geometry) is large there will be a noticeable motion in the Crazyflie.

The code has a dependency on OpenCV

In the current solution we use the solvePnP() function in OpenCV to estimate the geometry. OpenCV is an awesome library, but unfortunately it has turned out that this dependency interferes with ROS, and since a fair amount of our users also use ROS, we would like to get rid of it if possible.

Luckily I found an open source implementation of IPPE, an algorithm that finds the pose of an object based on points seen by a camera, that we can use instead. There is actually an option to use IPPE in OpenCV’s solvePnP(), but we used another flavor.

The solution

The core idea is to first collect measurements of beams in many positions in the flight space, by moving a Crazyflie around and recording the lighthouse angles. Secondly, an equation system is created that takes the poses of the base stations and all the recorded Crazyflie poses as input, and as output calculates the lighthouse angles those poses would correspond to for all the sensors. Finally, the output is compared to the recorded values and the poses are adjusted using the least-squares solver in scipy to find the poses that minimize the difference between the measurements and the output of the equation system.
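
Conceptually, the optimization step looks something like the sketch below. This is not the actual implementation: unpack_poses() and predict_angles() are hypothetical helpers standing in for the real equation system, and the script in the pull request is authoritative.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, samples):
    # Hypothetical helper: split the parameter vector into base station
    # poses and one Crazyflie pose per sample
    bs_poses, cf_poses = unpack_poses(params, samples)
    errors = []
    for sample, cf_pose in zip(samples, cf_poses):
        for bs_id, measured in sample.angles.items():
            # Hypothetical helper: the lighthouse angles the sensors
            # would see for these poses
            predicted = predict_angles(bs_poses[bs_id], cf_pose)
            errors.extend(predicted - measured)
    return np.asarray(errors)

# initial_guess comes from the IPPE-based procedure described below
result = least_squares(residuals, initial_guess, args=(samples,))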

Before we can solve the equation system we have to record the angles from the base stations. There is a handy function in the Crazyflie that pushes measured lighthouse angles to the PC via the radio, and by letting the user move the Crazyflie around in space we get the angles along that path. What we are looking for, though, are angles collected at discrete positions, and as an approximation I group measurements together based on time. The assumption is that if two angle measurements are closer than 10 ms in time, the Crazyflie did not move very far and they can be considered to be taken at the same position. The output of this process is a list of samples where each sample contains the measured lighthouse angles of one or more base stations for one specific Crazyflie pose. After this has been done, the list is filtered to only contain samples with two or more base stations.
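
The grouping step itself is straightforward; a sketch of the idea (the measurement type and field names are illustrative):

def group_into_samples(measurements, max_gap=0.010):
    # Split a time-ordered list of angle measurements into samples,
    # assuming readings closer than max_gap seconds share one pose
    samples, current = [], [measurements[0]]
    for m in measurements[1:]:
        if m.timestamp - current[-1].timestamp < max_gap:
            current.append(m)
        else:
            samples.append(current)
            current = [m]
    samples.append(current)
    return samples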

We also need an initial guess of the base station and Crazyflie poses for the least-squares solver to make the solution converge. I use IPPE for this, and use the first sample as the reference to define a temporary global reference frame. Suppose the first sample contains angles for base stations 2 and 3; we can then use IPPE to calculate an estimate of the pose of the two base stations, in the Crazyflie reference frame of this sample. Since we use the first sample as the reference for the global reference frame (that is, the pose of the Crazyflie in this sample is the origin by definition), those poses are also equal to the base station poses in the global reference frame.

Suppose the next sample contains lighthouse angles for base stations 1 and 2; using IPPE we can estimate the base station poses for base stations 1 and 2 in the reference frame of the Crazyflie in this sample. Since the relative positions of the base stations are the same regardless of reference frame, we can rotate/translate the poses so that the base station 2 pose matches the pose of base station 2 in the first sample. We now have an estimate of the poses of base stations 1, 2 and 3; furthermore, the transformation used represents the pose of the Crazyflie in sample 2. Repeating the process for all samples gives us a pretty good idea of where all the base stations are located, as well as the pose of the Crazyflie in all the samples.

We can now feed the initial guess and the equation system into scipy and hopefully get a refined solution back. From the estimated poses of the base stations and Crazyflie samples it is possible to calculate the distance between sensors and beams, which gives us an approximation of how good the solution is.

The final step is to align the coordinate system with the room; as mentioned earlier, the solution we have so far is based on the pose of the Crazyflie in the first sample. The way it is done in the suggested implementation is to ask the user to place the Crazyflie at some points in the desired origin, on the positive X-axis and in the XY-plane, and measure the angles in these positions. The measurements are included as samples in the above process, which means we get the estimated positions as part of the overall solution, in the temporary global reference frame. The task at hand is then to find the rotation/translation from the temporary global reference frame to the one indicated by the positions sampled by the user. Again we do a least-squares optimization to find the transformation that minimizes the error in the sampled points. We can now calculate the final solution by applying the transformation to the base station poses we got earlier.

Does it work?

Yes, it seems to work pretty well. I have not had the time to do extensive testing yet, but the results look promising. In our flight arena with 4 base stations, the solution seems to generally be acceptable. We do not know the exact poses of our base stations since they are very hard to measure, but they are mounted in the same truss and should be at similar heights.

Base stations at:
1: (-3.7104629351065146, -0.27330674065567867, 2.960720481536423)
2: (-0.9233909349006646, -2.9651389799486356, 2.9781503155699176)
3: (-0.12450705551081238, 3.430497907723026, 3.011201684709142)
4: (2.74012584124908, 0.5856524795079388, 3.023133069381165)
Solution match per base station:
1: {'mean_error': 0.0026020322270174697, 'max_error': 0.013310934923630531, 'std_error': 0.0028768923969783836}
2: {'mean_error': 0.0015240237742724164, 'max_error': 0.005526851773945277, 'std_error': 0.0011560341160273498}
3: {'mean_error': 0.002193101969828834, 'max_error': 0.006778096051979129, 'std_error': 0.0015768914826109067}
4: {'mean_error': 0.0033752667182490796, 'max_error': 0.014997173956894249, 'std_error': 0.00354931189334688}

The above snippet is part of the output from one run and as can be seen the estimated height is between 2.96 and 3.02 m. You can also see that the estimated average error for sensor positions is in the order of 2-3 mm while the maximum error is 1.5 cm.

Below is a graph of the recorded Crazyflie positions in the final solution. Note the three single points at the bottom that are from the origin, the X-axis and the XY-plane.

Estimated positions of the Crazyflie

I did some testing on larger systems with 6-8 base stations this Friday and it seemed to be harder to get a solution that converges, which indicates that there might be something to look into here.

Try it out

This is still work in progress, but if you want to try it out, you can find the code in this pull request. Run the examples/lighthouse/bs_geometry_estimation.py script, you will get instructions on the screen as you go.

Officially the firmware supports 2 base stations, but most of the code is designed to handle up to 16; if you want to test the functionality with more than two base stations you have to update PULSE_PROCESSOR_N_BASE_STATIONS and re-flash your Crazyflie.

Any feedback is welcome, please use the pull request.

Today we will have a guest blogpost by Dominik Natter, working in the Robotics & Control group at SINTEF in Trondheim, Norway. Enjoy!!

In this blogpost we will teach you how to fly the Crazyflie beyond edges without crashing, using only on-board sensors. Come join in!

flying over edges
Safe flights across edges are achievable!

Introduction

UAVs have seen tremendous progress in the last decades and have since moved from research labs to various real-world environments. Small UAVs (so-called micro air vehicles, MAVs) like the Crazyflie open up even more possibilities. For example, their size allows them to traverse narrow passages or fly in cluttered environments (as recently showcased in this blog post). However, in order to achieve these complex tasks the community must further improve the cognitive ability of these MAVs in order to avoid crashes.

One task on this list and today’s topic is the possibility to fly at constant altitude irrespective of the terrain. This feature has been discussed in the community already two years ago. To understand the problem, let’s look at the currently implemented solution: With the Flow deck mounted the Crazyflie uses a 1D lidar sensor to estimate its vertical position. This vertical position (more or less) equals the current sensor reading. On flat floors this solution works very well. However, if the Crazyflie shall traverse through a narrow window or fly above irregular terrain its altitude will change based on the sensor readings. This can lead to unstable flights, as in the following video, or even crashes!

You might wonder: why not use any of the other great tools from the Bitcraze universe? Indeed, the Lighthouse positioning system and the Loco positioning system work well for absolute positioning (as we have seen earlier, e.g., in this blog post). However, the required setups are often not available in difficult environments. Alternatively, the barometer could be used to achieve a solution based solely on on-board sensors. In fact, Bitcraze proposed an altitude hold functionality a few years ago. This is a cool feature, but its positioning accuracy of “roughly ±15cm” is not fully satisfying. Finally, relying on the on-board IMU alone will inevitably lead to drift over time.

Thus, we propose a solution based on the Flow deck and the Multiranger deck. This approach, based only on on-board sensors, makes it possible to fly at constant altitude with obstacles above, below, or even both above and below the Crazyflie. Kristoffer Skare developed this solution when he worked with us as an intern in 2021.

Technical Description

As a first step, the upward-facing lidar of the Multiranger deck is incorporated in the same way the downward-facing lidar of the Flow deck is used in the firmware. This additional measurement can then be used in the extended Kalman filter (EKF) to improve the state estimation. Currently, the EKF estimates and outputs one value for the altitude. For our purpose two more states are added to the EKF: one state is defined as the height of the object under the Crazyflie relative to the height where the altitude state is defined as 0. Similarly, the other state is defined as the height of the object above the Crazyflie relative to the same reference height. The Crazyflie therefore keeps track of the environment in order to keep its own altitude constant. To achieve this, an edge detection was implemented: the errors between the predicted and measured distances are tracked in both the upward and downward range measurements. If either of these errors is too large, the algorithm assumes that the floor or roof has changed (while the original EKF would think the drone’s position has changed, triggering a change in thrust). Thus, the corresponding state gets updated. For more details on the technical implementation and the code itself, check out our pull request.
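
In Python-like pseudocode, the edge detection idea for the downward range can be sketched as follows (the real implementation is in the firmware EKF, in C; the threshold and names here are illustrative):

def update_floor_height(altitude, floor_height, measured_down_range, threshold=0.1):
    # altitude and floor_height are EKF states, ranges in meters
    predicted_range = altitude - floor_height
    error = measured_down_range - predicted_range
    if abs(error) > threshold:
        # A large discrepancy means the terrain changed, not the drone's
        # altitude: move the floor-height state instead of the altitude state
        floor_height = altitude - measured_down_range
    return floor_height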

Results

To analyze our approach we used a Qualisys motion capture system. We conducted many different tests: flying over different obstacles, at different velocities, at different altitudes, and even under different lighting conditions. In this post we will look at a baseline example, a good estimate, and a bad estimate. In each picture you can see the altitude (in meters) over time (in seconds) for different flight speeds (in centimeters per second). You will see three lines: the motion capture ground truth (blue), the altitude estimated by our code (orange), and the new state keeping track of the floor height (green). For each plot, the Crazyflie takes off, flies in positive x direction, and lands.

In the baseline experiment, it flies over a flat floor. Clearly, the altitude estimates follow the ground truth values well, and the floor is correctly estimated to be flat.

baseline experiment
Baseline experiment flying over a flat floor

In the next example, we added a box with an approximate height of 0.225 m and made the Crazyflie fly over it. Despite the obstacle, the altitude estimates follow the ground truth values well. Note how the floor estimate indicates the shape of the box.

experiment with box
Experiment flying over a box

Because the algorithm is based on edge detection, we had a hunch that smoothly changing obstacles would pose a problem. Indeed, the estimates can be messy, as we see in the next example. Here, the Crazyflie flies over an orthogonal triangle, with the short leg at 0.23 m pointing upwards and the long leg of 0.65 m pointing in the flight direction (thus forming a slope). For different flight speeds the estimates turn out quite differently.

experiment with slope
Experiment flying over a slope

If you don’t like looking at plots, check out this video with some cool shots instead!

Conclusion

To summarize, we propose a solution for constant altitude flight with Crazyflies, using the Flow deck and the Multiranger deck. We have tested it successfully under various circumstances. Still, we see some potential for improvement, e.g. when dealing with slopes. In addition, the current implementation is quite a change to the original EKF, which poses a problem for integration.

Thus, a way forward can be an out-of-tree build to ease the use of the solution for the community. At SINTEF we certainly plan to deploy this code in all of our tests in 2022, which will hopefully allow us to gather more experience and thus find further ways to improve or tune the system.

We want to emphasize that this is not a perfect solution. That means a) you should use it with care and b) you are very much welcome to contribute. E.g., feel free to chime in on the pull request, test the code in your environments, propose improvements, or implement an out-of-tree build! :) Maybe you can even come up with an alternative approach for constant altitude flights?

If you want to check out more of our work, visit our website. Also, keep reading this amazing blog from Bitcraze as we try to be back some day (if Bitcraze wants us hehe)!

We’ve had an exciting year in 2021, and we’re eager to see what 2022 will bring! Let’s see what’s in the pipeline and what we hope for this new year.

Products

The AI deck and Bolt out of Early Access

We’ve put a lot of effort during these last months into the AI deck’s firmware and infrastructure. With great help from our intern Rik, we managed to make huge leaps, and hopefully sometime in the coming months we’ll be able to share what we have worked on. I can already tell you that the upcoming release will bring some needed improvements to flashing the GAP8 chip and improved image streaming! As the AI deck is one of the most challenging of our decks, we also hope to add an extensive tutorial (that we call the “mega tutorial”) to help you work with it.

We have also started to push some framework changes to make it easier for you to build bigger drones with the Crazyflie Bolt. One of those is the persistent parameter system that we recently implemented on the Crazyflie, and we will add more and more of these types of features. The hope is that we will be able to provide some kind of assembly kit for a larger Bolt-based drone, for which we have already done some initial battery investigations.

Prototypes

Fun Fridays are usually our time to play around with new possibilities and prototypes. Marcus has already made great strides, and hopefully in 2022 we’ll be able to go even further with those. Arnaud has also been working on the much-awaited new iteration of the Crazyradio, with a new chip and an improved communication protocol. Tobias, our dedicated hardware man, also has ideas in the pipeline in the form of a brushless Crazyflie, as we already showed in our future plans presentation at November’s BAMdays. We also hope to initiate the design process of a new and improved version of the Crazyflie with more power and processing capabilities.

People and Collaborations

Last year we continued our close collaboration with researchers at institutes and universities, helping them achieve their goals and contribute their work to our open-source firmware and software. It has proven really fruitful, both for us and the people we talk to, so we hope 2022 will yet again see closer and newer connections.

We were really happy with our first online conference of our own, which helped us reconnect and talk to our community about all the awesomeness achieved with the Crazyflies. We hope to arrange something similar on a more regular basis, to keep talking about collaborations and possibilities, and in general to share all the work that’s been done on the platform. Those “lightweight” BAMs should arrive soon, so stay tuned if you want to join them!

Component shortage and productions issues

We expect to still be dealing with the component shortage, as it is expected to last for at least another year, even two. Production is therefore a continuous challenge, with a lot of unpredictability, and we will keep looking for better ways to deal with it in 2022. Thankfully, we have good hopes of keeping good stock levels throughout the crisis, as we’ve increased our stock. We’ll of course keep you updated on any big news regarding the crisis and how it is affecting us.

Unfortunately, the component shortage also means that it’s harder to make prototypes. It’s difficult to find and/or buy just one chip, which causes delays in our creative hardware developments. It is what it is… but we will surely be able to find solutions – as we have during our 10-year history!

Anything else?

Of course our heads are always full of ideas and we are eager to work on all of them! We have ambitions in developing a simulation for our users or CI, doing more measurements with the new thrust stand and adding further improvements to our documentation and tutorials. And we might also meet new interesting people (digitally or in person?) who might give us enough inspiration to start something completely new! Soon we will have our quarterly meeting, where we try to herd and select our passions and ideas into conceivable plans and actions.

With all these exciting projects, we’re really excited to see what 2022 has in store for us! I hope you too have an awesome year 2022.

2021 is coming to an end. As we’re about to flip the page on a new calendar, let’s take a look at what happened during this past year.

Community

It’s no news to you, but keeping in touch with our community during a pandemic has forced us to try new ways to meet and interact. The main event for us this year has been the BAM days: our first conference held entirely by ourselves, full of exciting talks and fun, and we’re very happy with how it went down! You can still watch the talks we hosted during those three days in our dedicated Youtube playlist.

Guest blogposts

Once again, we’ve had the honor to host some awesome guests on our blog, which you can read (or re-read) here:

Software

We had 3 releases this year (one in January, one in March and one in June). We worked a lot on improving things like the Crazyflie logging and parameter interface, and wondered how to deal with our API. We’ve also implemented a new way to store things, and we now have new, powerful persistent parameters.

Thanks to Jonas’ hard work, we also set up a “crazy lab” here in Malmö. During Fun Fridays, Arnaud has been experimenting with Rust, working on a web client.

Hardware

Since 2020 we’ve been dealing with a hardware crisis. The component shortage has made production erratic and difficult, and for the first time in Bitcraze’s history, we’ve had to increase our prices.

Lighthouse

The first part of the year was dedicated to the Lighthouse system. At the beginning of the year, we finally got it out of early access. We documented the new Lighthouse functionality and even wrote a paper for ICRA about the Lighthouse accuracy with Wolfgang’s help. We created the Lighthouse swarm bundle, gathering every element needed to fly a swarm with this positioning system.

AI deck

We worked a lot on the AI deck this year. We upgraded to the AI deck 1.1, with a gray-scale camera and a newer version of the GAP8. Part of this work was also to improve the documentation and information we had on it. We had a workshop with PULP, and dreamed of a mega-tutorial.

Documentation

As usual, we’ve been trying to improve our documentation. This year, that included an API reference in our Python library, and rethinking our structure.

Bitcraze

Bitcraze has once again grown in numbers, as we welcomed Jonas into our ranks. We were also really happy to add Wolfgang to the team for a few months, as well as Rik, an intern from MAV-lab.

The biggest event this year for us was our 10 year anniversary. That’s right, Bitcraze turned 10 in September, and we tried to celebrate it as much as possible! With a nice outing and with the BAM days, but also with a video trying to show what 10 years of Bitcraze looks like: