Today, our guest Airi Lampinen from Stockholm University is presenting the second Drone Arena Challenge. Enjoy!
Welcome to the second Drone Arena Challenge, a one-of-a-kind interactive experience with Bitcraze’s Crazyflie! This year, the challenge is focused on moving together with drones in beautiful, curious, and provocative ways – without needing to write a single line of code!
What, when, where? The event takes place May 16-17, 2023 at KTH’s Reactor Hall in Stockholm – a dismantled nuclear reactor hall – which provides a unique setting for creative human–drone encounters. You don’t need your own drone, or to know how to program one, to participate! We will provide the drone equipment (a Crazyflie 2.1 equipped with the AI-deck) and take care of everything necessary to make it fly. What you need to do is be creative and move together with the drones to put on the best show you can! There’ll be a jury judging the final performances and we have exciting prizes for the most successful teams!
Who can join? Anyone irrespective of training, profession, and past experience with drones or performing arts is welcome to participate. Participants need to be at least 18 years old. If you are curious about how technology and humans may play together, enthusiastic about the Crazyflie, or eager to learn how to move with the Crazyflie, this event is for you. We welcome up to 10 pairs (teams of 2 people) to participate in the challenge.
Registration is already open, with only a few spots remaining. We encourage those interested to sign up as soon as possible to secure their spot!
Program & prizes? On the first day of the hackathon, we will host a keynote speaker and a short information session to explain what participants are expected to do and what support is available for them. The teams will then have access to the Reactor Hall to work on the challenge and explore moving with their drone – we offer long hours but each team is free to choose how much they want to work. (The goal here is to have a good time!) The competition itself takes place on the second day. We’ve got exciting prizes for the most successful teams!
In this blog post we will take a look at the new Loco positioning TDoA outlier filter, but first a couple of announcements.
Announcements
Crazyradio PA out of stock
Some of you may have noticed that there are a lot of bundles out of stock in our store. The reason is the transition from the Crazyradio PA to the new Crazyradio 2.0. Most bundles contain a radio, and even though production of the new Crazyradio 2.0 is in progress, the demand for the old Crazyradio PA was a bit higher than anticipated and we ran out too early. Sorry about that! We don’t have a final delivery date for the Crazyradio 2.0 yet, but our best guess at this time is that it will be available in about 4 weeks.
Developer meeting
The next developer meeting is on Wednesday, April 5 at 15:00 CEST, and the topic will be the Loco positioning system. We’ll start out with around 30 minutes about the Loco positioning system, split into a presentation and a Q&A. If you have any specific Loco topics/questions you want us to talk about in the presentation, please let us know in the discussions link above.
The second 30 minutes of the meeting will be for general support questions (not only about the Loco system).
The outlier filter
When we did The Big Loco Test Show in December, we found some issues with the TDoA outlier filter and had to do a bit of emergency fixing to get the show off the ground. We have now analyzed the data and implemented a new outlier filter which we will try to describe in the following sections.
Why outlier rejection
In the Loco system, there is a fair number of packets that are corrupt in one way or another and that should not be part of the position estimation process. There are a number of reasons for errors, including packet collisions, interference from other radio systems, reflections, obstacles and more. There are several levels of protection in the path from when an Ultra Wide Band packet is received in the Loco deck radio to the state estimator, all aiming at removing bad packets. This works in many cases, but a few bad measurements still make it all the way through to the estimator, and the TDoA outlier filter is the last line of protection. The result of an outlier getting all the way through to the estimator is usually a “jump” in the estimated position, and in the worst case a flip or crash. Obviously we want to catch as many outliers as possible to get a good, reliable position estimate and smooth flight.
The problem(s)
The general problem of outlier rejection is to decide what is a “good” measurement and what is an outlier. The good data is passed on to the state estimator to be used for estimating the current position, while the outliers are discarded. To decide if a measurement is good or an outlier, it can be compared to the current position; if it is “too far away”, it is probably an outlier and is rejected. The major complication is that the only knowledge we have about the current position is the estimated position from the state estimator. If we let outliers through, the estimated position will be distorted and we may reject good data in the future. On the other hand, if we are too restrictive, we may discard good measurements, which can lead to the estimator losing tracking and the estimated position drifting away (due to noise in other sensors). It is a fine balance: we use the estimated position to determine the quality of a measurement, at the same time as the output of the filter affects the estimated position.
Another group of parameters to take into account is related to the system the Crazyflie and Loco deck are used in. The overall packet rate in a TDoA3 system is changed dynamically by the anchors, the Crazyflie may be located in a place where some anchors are hidden, or the system may use the Long Range mode with its lower packet rate. All of these factors change the packet rate, which means that the outlier filter should not make assumptions about the system packet rate. Other factors that depend on the system are the physical layout and size, as well as the noise level in the measurements, and these must also be handled by the outlier filter.
In a TDoA system, the packet rate is around 400 packets/s, which also puts a requirement on resource usage. Each packet will be examined by the outlier filter, which is why it should be fairly lightweight when it comes to computations.
Finally, there are also some extra requirements, apart from stable tracking, that are “nice to have”. As a user you probably expect the Crazyflie to find its position if you put it somewhere on the ground, without having to tell the system the approximate position – that is, basic discovery functionality. Similarly, if the system loses position tracking, you might expect it to recover as soon as possible, making it more robust.
The solution
The new TDoA outlier filter is implemented in outlierFilterTdoa.c. It is only around 100 lines of code, so it is not that complex. The general idea is that the filter can open and close dynamically: when open, all measurements are passed on to the estimator to let it find the position and converge. Later, when the position has stabilized, the filter closes down and only lets “good” measurements through. In theory, this level of functionality should be enough – after the estimator has converged it should never lose tracking as long as it is fed good data. The real world is more complex though, and there is also a feature that can open the filter up again if it looks like the estimator is diverging.
The first test in the filter is to check that the TDoA value (the measurement) is smaller than the distance between the two anchors involved in the measurement. Remember that the measurements we get in a TDoA system are differences in distance to two anchors, not the actual distances. A measurement that is larger than the distance between the anchors is not physically possible, so we can be sure that the measurement is bad and it is discarded.
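For illustration, here is a minimal sketch of what such a check could look like – the names and types here are ours for the example, not necessarily those used in the firmware:

#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } point_t;

static float distanceBetween(point_t a, point_t b) {
  const float dx = a.x - b.x;
  const float dy = a.y - b.y;
  const float dz = a.z - b.z;
  return sqrtf(dx * dx + dy * dy + dz * dz);
}

// A TDoA measurement is the difference in distance to two anchors. By the
// triangle inequality, its magnitude can never exceed the distance between
// the anchors, so anything larger must be a corrupt measurement.
static bool isPhysicallyPossible(float tdoa, point_t anchorA, point_t anchorB) {
  return fabsf(tdoa) < distanceBetween(anchorA, anchorB);
}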
The second stage is to examine the error, where the error is defined as the difference between the measured TDoA value and the TDoA value at our estimated position.
float error = measurement - predicted;
This error does not really tell us how far away from the estimated position the measurement is, but it turns out to be good enough. The error is compared to an accepted distance, and is considered good if it is smaller than the accepted distance.
sampleIsGood = (fabsf(error) < acceptedDistance);
The rest of the code is related to opening and closing the filter. This mechanism is based on an integrator where the time since the last received measurement is added when the error is smaller than a certain level (integratorTriggerDistance), and removed when it is larger. If the value of the integrator is large, the filter closes, and if it is smaller than a threshold it opens up. This mechanism implements a hysteresis that is independent of the received packet rate.
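To make the mechanism a bit more concrete, here is a minimal sketch of how such an integrator with hysteresis could be implemented. All names, types and threshold values below are illustrative assumptions – the actual implementation lives in outlierFilterTdoa.c:

#include <math.h>
#include <stdbool.h>
#include <stdint.h>

// Illustrative thresholds (in milliseconds of accumulated "good" time);
// the real values in the firmware may differ.
#define INTEGRATOR_MAX_MS 5000.0f
#define INTEGRATOR_CLOSE_LEVEL_MS 4000.0f // Close the filter above this level
#define INTEGRATOR_OPEN_LEVEL_MS 500.0f   // Open the filter below this level

typedef struct {
  float integratorMs;
  uint32_t lastUpdateMs;
  bool isFilterOpen;
} filterState_t;

static bool validateSample(filterState_t* s, float error, float integratorTriggerDistance, float acceptedDistance, uint32_t nowMs) {
  const float dtMs = (float)(nowMs - s->lastUpdateMs);
  s->lastUpdateMs = nowMs;

  // Add the time since the last measurement when the error is small, remove
  // it when the error is large. Using time rather than a sample count makes
  // the hysteresis independent of the packet rate.
  if (fabsf(error) < integratorTriggerDistance) {
    s->integratorMs = fminf(s->integratorMs + dtMs, INTEGRATOR_MAX_MS);
  } else {
    s->integratorMs = fmaxf(s->integratorMs - dtMs, 0.0f);
  }

  if (s->isFilterOpen) {
    // Open: let everything through so the estimator can find the position
    // and converge. Close again once the estimate has been stable for a while.
    if (s->integratorMs > INTEGRATOR_CLOSE_LEVEL_MS) {
      s->isFilterOpen = false;
    }
    return true;
  }

  // Closed: only let "good" measurements through, but open up again if there
  // have been many bad samples lately, as the estimator may be diverging.
  if (s->integratorMs < INTEGRATOR_OPEN_LEVEL_MS) {
    s->isFilterOpen = true;
  }
  return fabsf(error) < acceptedDistance;
}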
The acceptedDistance and integratorTriggerDistance are based on the standard deviation of the measurement that is used by the Kalman estimator. The idea is to tie them to the noise level of the measurements.
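For illustration, the thresholds could be derived from the measurement noise like this – the scale factors here are assumptions for the sketch, so check the source for the values actually used:

// stdDev is the measurement noise that is also fed to the Kalman estimator
const float acceptedDistance = measurement->stdDev * 2.5f;          // assumed factor
const float integratorTriggerDistance = measurement->stdDev * 2.0f; // assumed factor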
Feedback
The filter has been tested in our flight lab and on data recorded during The Big Loco Test Show. The real world is complex though, and it is hard for us to predict the behavior in situations we have not seen. Please let us know if you run into any problems!
The new outlier filter was pushed after the 2023.02 release and is currently only available on the master branch on GitHub (enabled by default). You have to compile from source if you want to try it out. If no alarming problems surface, it will be the default filter in the next release.
We are excited to announce that we will be having developer meetings on the first Wednesday of every month! Additionally, we are thrilled to be present in person at ICRA 2023 in London. During the same conference, there will be a half-day workshop called ‘The Role of Robotics Simulators for Unmanned Aerial Vehicles’, so make sure to sign up! Please check out our newly updated event page!
Monthly Developer meetings
We have had some online developer meetings in the past, covering various topics. While these meetings may not have been the most popular, we believe it is crucial to maintain communication with the community, have interesting discussions and exchange ideas. However, we used to plan them ad hoc and with no regularity, which sometimes caused some of us **cough** especially me **cough** to get confused about the timing and location. To remove these factors of templexia (dyslexia for time), we will simply have them on the first Wednesday of every month.
So our first one will be on Wednesday, the 5th of April at 15:00 CEST, and the information about each particular developer meeting will be, as usual, on discussions. From 15:00–15:30 there will be a general discussion, probably with a short presentation, about a topic to be determined. From 15:30–16:00 we will address regular support questions from anybody who needs help with their work on the Crazyflie.
ICRA 2023 London
ICRA will be held in London this year, from May 29 to June 2nd, at the ExCeL venue. We will be located at the H11 booth in the exhibitor hall, and as the date approaches, we will share more about what awesome prototypes we will showcase and what we will demonstrate on-site. Rest assured, plenty of Crazyflies will be flown in the cage! To get an idea of what we demoed last year at IROS Kyoto, please check out the IROS 2022 event page. Matej from Flapper Drones will join us at our booth to showcase the Flapper drone.
We are thinking of organizing a meetup for participants on the Wednesday after the Conference Dinner, so we will share the details of that in the near future as well. Also keep an eye on our ICRA 2023 event page for updated information.
Additionally, participants can submit an extended abstract to be invited for a poster presentation during the same workshop. The submission deadline has been extended to April 3rd; for more information about submission, schedule and speaker info, go to the workshop’s website.
This week’s guest blogpost is from Florian Goralsky from Bok o Bok about their dance piece with multiple Crazyflies. Enjoy!
Flying bodies across the fields is a contemporary dance piece for four performers and a swarm of drones, exploring the phenomenon of the disappearance of bees and the use of pollinating drones to compensate for this loss. The piece attempts to answer this crucial question in a poetical way: can the machine create life and save us from ecological disaster?
We’re super excited to talk about a performance that we’ve been working on for the past two years in collaboration with Bitcraze. It premiered at the Environmental Forum, Centre Pompidou Paris, in 2021, and we’ve had the opportunity to showcase it at different venues since then. We are happy to share our thoughts about it!
Choreographic research
Beyond symbolizing current attempts to use drones to pollinate fields, the presence of the Crazyflie drones supports the back and forth between nature and technology. We integrate a swarm performing complex choreographies that refer to the functioning of a beehive, including the famous “bee dance” discovered by Karl von Frisch, which is used to transmit information about food sources. Far from having a spectacular performance as its only goal, the synchronization of autonomous drones highlights bio-inspired computing techniques focused on collective intelligence.
Challenges within a dance performance
Making a dance performance with drones requires high accuracy and adaptability, both before and during the show. Usually, we only have a few hours, sometimes even a few minutes, to set up the system according to the space. We quickly realized we needed pre-recorded choreographies, as well as hybrid choreographies where the pilot has a few degrees of freedom on top of pre-defined behaviors.
GUI Editor + Python Server
Taking this into account, we developed a web GUI editor that is able to send choreographies created on any device to a WebSocket Python server. The system supports any absolute positioning system (we use the Lighthouse) and converts all the setpoints and actions into calls to the HighLevelCommander class of the Crazyflie API. This setup allows us to create, update, and test complex choreographies in a few minutes on various devices.
What is next?
We are looking forward to developing more dancer–drone interactions in the future. This will involve, in addition to the Lighthouse system, other sensors, in order to open up new possibilities: realtime path-finding, obstacle avoidance even during a recorded choreography (to allow improvisation), etc.
We’re happy to announce that the 2023.02 release is available for download!
The main new features of this release are:
Out of tree controllers
We have made it easier to add a new controller to the firmware in the Crazyflie. Controllers can now be added in an app, the same way as an estimator can be added. The main advantage is that all the code is contained in the app which makes it easy to upgrade the underlying firmware when new releases are available. You can read about how to use this feature in the firmware repository documentation.
Support to configure ESCs with BLHeli Configurator
On brushless Crazyflies, ESCs can now be configured using the BLHeli Configurator. See PR #1170
A UKF (Unscented Kalman Filter) state estimator has been added
An Unscented Kalman Filter (UKF) state estimator has been added, based on the paper ‘Error-State Unscented Kalman-Filter for UAV Indoor Navigation’ by Klaus Kefferpütz. The estimator is still slightly experimental and does not yet support all positioning methods (see this issue). Because of this, it is not available by default, but you can try it by enabling it using kbuild! You can read about the UKF estimator in the repository documentation.
Platform filter in client flash dialog
A filter has been added to the bootloader dialog in the client to make it easier to find the correct release. Releases are now filtered based on platform to avoid the clutter of mixing releases for cf2, tag, bolt and flapper.
Stability and bug fixes
We have fixed several bugs in the firmware and client software; please check the release notes for further details.
We have created a simple deprecation policy to clarify future changes of the APIs. The short version is that, from now on, we will mention deprecated functionality in the release notes, and that deprecated functionality will remain in the code base for 6 months before it is removed. Please see the development overview for more information.
It’s time for a new compilation video about how the Crazyflie is used in research! The last one already featured a lot of awesome work, but a lot has happened since then, both in research and at Bitcraze.
As usual, the hardest part about making these videos is choosing the work we want to feature – if every cool video of the Crazyflie was in there, it would last for hours! So it’s just a selection of the most videogenic projects we’ve seen. You can find a more extensive list of our products used in research here.
We’ve seen a lot of projects that used the modularity of the Crazyflie to create awesome new features, like a catenary robot, some wall tracking, or having it land upside down. The Crazyflie board was even made into a revolving-wing drone. New sensors were used to sniff out gas leaks (the Sniffy bug, as seen in this blog post) or to allow autonomous navigation. Swarms are also a research topic where we see a lot of the Crazyflie, this time for collision avoidance or path planning. We also see more and more simulators, which are used for huge swarms or physics tests.
Once again, we were surprised and awed by all the awesome things that the community did with the Crazyflie. Hopefully, this will inspire others to think of new things to do as well. We hope that we can continue with helping you to make your ideas fly, and don’t hesitate to share with us the awesome projects you’re working on!
Here is a list of all the research that has been included in the video:
A common task with the Crazyflie is to add a new controller or estimator. As we get some questions on how to do this, we will outline the process in this post. We will show how to add a custom controller and estimator that runs in the Crazyflie, built as an out-of-tree build.
This post assumes some basic knowledge about the Crazyflie firmware, the C programming language, how to build the firmware and flash it to a Crazyflie. If you need some more information on these topics, please see the “Getting started with development” tutorial. For an overview of how estimators and controllers are used by the stabilizer module, please see the firmware documentation.
Overview
The Crazyflie firmware is designed to make it easy to add custom controllers and estimators; a plugin system keeps the code clean and well separated. We will look at the details later, but the basic principle is to first write your new controller or estimator and then register it in the firmware. When the code has been compiled and flashed to the Crazyflie, the new module is activated by setting a parameter from the client or a Python script.
We will implement the example as an app, which is a great way to make sure you can upgrade the underlying firmware without messing up your code. An app is a piece of code that exists somewhere in your file system, outside of the main firmware source code. This setup minimizes the dependencies, and the main firmware source tree can be upgraded without affecting your app (in most cases). That means there is no need for merges or complex management of source trees.
Registration of modules
Let’s first look at how controllers and estimators are registered and called in the plugin framework. We will use the controllers to show how it works, but the estimators are implemented in a similar way, so the same principles apply.
Note that there have been some updates of the Crazyflie firmware source code lately, and any reference to the source code will be to the latest version (as of today).
The starting point of the controller implementation can be found in the src/modules/src/controller.c file, where we can find an array called controllerFunctions that holds a list of all the controllers in the system.
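It looks roughly like this – a sketch based on the source tree at the time of writing; the exact type and function names may differ slightly between versions:

static ControllerFcns controllerFunctions[] = {
  {.init = 0, .test = 0, .update = 0, .name = "None"}, // Any
  {.init = controllerPidInit, .test = controllerPidTest, .update = controllerPid, .name = "PID"},
  {.init = controllerMellingerFirmwareInit, .test = controllerMellingerFirmwareTest, .update = controllerMellingerFirmware, .name = "Mellinger"},
  {.init = controllerINDIInit, .test = controllerINDITest, .update = controllerINDI, .name = "INDI"},
  {.init = controllerBrescianiniInit, .test = controllerBrescianiniTest, .update = controllerBrescianini, .name = "Brescianini"},
  #ifdef CONFIG_CONTROLLER_OOT
  {.init = controllerOutOfTreeInit, .test = controllerOutOfTreeTest, .update = controllerOutOfTree, .name = "OutOfTree"},
  #endif
};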
We can see that there are currently four controllers in the list: the PID controller, the Mellinger controller, the INDI controller and finally the Brescianini controller. There is also an “empty” controller at the top that is not important in this context, so we will simply ignore it. At the bottom we find the out-of-tree controller, which we will discuss later.
Each controller must implement three functions: an initialization function, a test function and a controller function that performs the actual controller work. Signatures for the three functions are defined in controller.c. The functions are added to the list as function pointers that can be called by the stabilizer when needed.
There is a parameter, stabilizer.controller, in the stabilizer that tells the system which controller to use in the stabilizer loop. This parameter simply contains the index in the controllerFunctions list that will be used. For example, the default value 1 will make the stabilizer loop call the controllerPid function every iteration. If the value of the stabilizer.controller parameter is changed, the initialization function for the new controller will be called and subsequent calls from the stabilizer loop will be done to the new controller function.
We will not go into details of how to implement the actual controller here, but the existing controllers can be used as examples.
Suppose you want to add a new controller. It would be possible to add a new file in the Crazyflie firmware with your new controller implementation, add the function pointers to the list in controller.c, and that would work just fine. The problem with such an implementation is that it is hard to maintain: your new files would be mixed with the files in the main firmware file tree, and even worse, you would have to modify the controller.c file to add your controller. The next time there is a new awesome feature in the firmware source code and you want to upgrade to the latest version, you will run into problems as you have to handle the files you modified!
A better solution is to use an app, as apps are built out-of-tree, that is, not in the main source tree. This removes the problem of merging changes in the main source files; all you have to do is pull in the new file tree and recompile.
But how do you register your new controller in the controller list? This is what the last line in the list of controllers is for:
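That is, the entry guarded by CONFIG_CONTROLLER_OOT (repeating the sketch from above; the exact formatting in the source may differ):

#ifdef CONFIG_CONTROLLER_OOT
  {.init = controllerOutOfTreeInit, .test = controllerOutOfTreeTest, .update = controllerOutOfTree, .name = "OutOfTree"},
#endif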
If CONFIG_CONTROLLER_OOT is defined we add a controller with the three functions controllerOutOfTreeInit, controllerOutOfTreeTest and controllerOutOfTree. All you have to do in your app is to define CONFIG_CONTROLLER_OOT and make sure the functions in your controller are named like above. That’s it!
Example implementation
Now we will create a new app and add a new controller, step by step. We assume that you have a newly cloned firmware repository in your filesystem to work on.
We will show the Linux flavor of commands, but it should be easy to convert to other platforms.
Create a new app
The easiest way to get started is to copy an existing app; let’s use the hello world app. Copy the app and move into the new directory:
cp -r examples/app_hello_world examples/my_controller
cd examples/my_controller/
Let’s rename hello_world.c:
mv src/hello_world.c src/my_controller.c
We have to tell kbuild that we renamed the file. Open src/Kbuild in your favorite editor and update it to:
obj-y += my_controller.o
Now let’s fix the basics in my_controller.c. Open it in your editor and change it according to the comments below:
#include <string.h>
#include <stdint.h>
#include <stdbool.h>

#include "app.h"

#include "FreeRTOS.h"
#include "task.h"

// Edit the debug name to get nice debug prints
#define DEBUG_MODULE "MYCONTROLLER"
#include "debug.h"

// We still need an appMain() function, but we will not really use it. Just let it quietly sleep.
void appMain() {
  DEBUG_PRINT("Waiting for activation ...\n");

  while(1) {
    vTaskDelay(M2T(2000));
    // Remove the DEBUG_PRINT
    // DEBUG_PRINT("Hello World!\n");
  }
}
Now, let’s add our new controller. We will not add a real implementation here, as it would be a bit too large for this post; instead we will just call into the PID controller to make sure the Crazyflie can still fly. Add this code after appMain() in my_controller.c:
// The new controller goes here --------------------------------------------
// Move the includes to the top of the file if you want to
#include "controller.h"

// Call the PID controller in this example to make it possible to fly. When you implement your own controller, there is
// no need to include the pid controller.
#include "controller_pid.h"

void controllerOutOfTreeInit() {
  // Initialize your controller data here...

  // Call the PID controller instead in this example to make it possible to fly
  controllerPidInit();
}

bool controllerOutOfTreeTest() {
  // Always return true
  return true;
}

void controllerOutOfTree(control_t *control, const setpoint_t *setpoint, const sensorData_t *sensors, const state_t *state, const uint32_t tick) {
  // Implement your controller here...

  // Call the PID controller instead in this example to make it possible to fly
  controllerPid(control, setpoint, sensors, state, tick);
}
Finally, we need to tell the firmware that we have implemented the out-of-tree controller and that it should be added to the list. We do this by adding CONFIG_CONTROLLER_OOT to the app-config file. When you are done, it should look something like this (the first lines come from the hello world app and may differ in your copy; the important addition is the last line):
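CONFIG_APP_ENABLE=y
CONFIG_APP_PRIORITY=1
CONFIG_APP_STACKSIZE=350

CONFIG_CONTROLLER_OOT=y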
Start your Crazyflie and the Python client. Connect the client to the Crazyflie and open the console log tab. Make sure you are running your app by looking for the line:
MYCONTROLLER: Waiting for activation ...
Now let’s activate our new controller! Open the parameter tab, find the stabilizer group and the controller parameter. Set it to 5 and check in the console log that the out-of-tree controller was activated:
CONTROLLER: Using OutOfTree (5) controller
That’s it! Your new controller is activated and the Crazyflie is ready to fly.
Note: In the client, the comment for the stabilizer.controller parameter will not contain the out-of-tree controller, and it will look like only values 0-4 are valid even though 5 also works.
Conclusions
In this post we have shown how to add a new controller to the Crazyflie firmware. The process for adding an estimator is very similar, and hopefully it should be easy to understand how to do it based on the example above.
As you can see, very little code (apart from the actual controller/estimator) is required to add your own controller or estimator, and we hope that it will enable you to put your energy into the actual control problem, rather than the nitty gritty details of the code.
My name is Hanna, and I just started as an intern at Bitcraze. However, it is not my first time working with a drone or even the Crazyflie, so I’ll tell you a bit about how I ended up here.
The first time I used a drone, actually even a Crazyflie, was in a semester thesis at ETH Zurich in 2017, where my task was to extend a Crazyflie with a Parallel Ultra Low-Power (PULP) System-on-Chip (SoC) connected to a camera and external memory. This was the first prototype of the AI-deck you can buy here nowadays (as used here) :)
My next drone adventure was an internship at a company building tethered drones for firefighters – a much bigger system than the Crazyflie. I was in charge of the update system, so more on the firmware side this time. It was a very interesting experience, but I swore never to build a system with more than three microcontrollers in it again.
This and a liking for tiny and restricted embedded systems brought me back to smaller drones again. I did my master thesis back at ETH about developing a PULP-based nano-drone (nano-drones are just tiny drones that fit approximately in the palm of your hand and use only around 10 watts of power – the category the Crazyflie fits in) and some onboard intelligence for it. As a starting point, we used the Crazyflie, both for the hardware and the software. It turned out to be a very hard task to port the firmware to a processor with only a very basic operating system at that time. Still, eventually I knew almost every last detail of the Crazyflie firmware, and it actually flew.
However, for this to happen, I needed some more time than the master thesis allowed – in the meantime, I started to pursue a PhD at ETH Zurich. I am working towards autonomous miniaturized drones – so besides the part with the tiny PULP-based drone I already told you about, I also work on the “autonomous” part. Contrary to many other labs, our focus is not only on novel algorithms; we also work with novel sensors and processors. Two very interesting recent developments for us are a multi-zone Time-of-Flight sensor and the novel GAP9 processor, which both fit on a Crazyflie in terms of power, size and weight. This enables new possibilities in obstacle avoidance, localization, mapping and more. Last year my colleagues and I already posted a blog post about our newest advances in obstacle avoidance (here, with videos!). More recently, we worked on onboard localization, using novel multi-zone Time-of-Flight sensors and the very new GAP9 processor to execute Monte Carlo localization onboard a Crazyflie (arxiv).
For me, localizing with a given map is a fascinating topic and one of the reasons I ended up in Sweden. It is one of the most basic skills of robots or even humans to navigate from A to B as fast as possible, and the basis of my favourite sport. The sport is called “orienteering” and is about running as fast as possible to some checkpoints on a map, usually through a forest. It is a very common sport in Sweden, which is the reason I started learning Swedish some years ago. So when the opportunity to go to Malmö for some months to join Bitcraze presented itself, I was happy to take it – not only because I like the company philosophy, but also because I just like to run around in Swedish forests :)
Now I am looking forward to my time here. I hope to learn lots about drones, firmware, new sensors, production, testing and company organization, and to meet a lot of new, nice people!
Bats navigate using sound. As a matter of fact, the ears of a bat are so much better developed than their eyes that bats cope better with being blindfolded than they cope with their ears being covered. It was precisely this experiment that helped the discovery of echolocation, which is the principle bats use to navigate [1]. Broadly speaking, in echolocation, bats emit ultrasonic chirps and listen for their echos to perceive their surroundings. Since its discovery in the 18th century, astonishing facts about this navigation system have been revealed — for instance, bats vary chirps depending on the task at hand: a chirp that’s good for locating prey might not be good for detecting obstacles and vice versa [2]. Depending on the characteristics of their reflected echos, bats can even classify certain objects — this ability helps them find, for instance, water sources [3]. Wouldn’t it be amazing to harvest these findings in building novel navigation systems for autonomous agents such as drones or cars?
The quest for the answer to this question led us — a group of researchers from the École Polytechnique Fédérale de Lausanne (EPFL) — to design the first audio extension deck for the Crazyflie drone, effectively turning it into a “Crazybat” (Figure 1). The Crazybat has four microphones, a simple piezo buzzer, and an additional microprocessor used to extract relevant information from audio data, to be sent to the main processor. All of these additional capabilities are provided by the audio extension deck, for which both the firmware and hardware design files are openly available.1
In our paper on the system [4], we show how to use chirps to detect nearby obstacles such as glass walls. Difficult to detect using a laser or cameras, glass walls are excellent sound reflectors and thus a good candidate for audio-based navigation. We show in a first semi-static feasibility study that we can locate the glass wall with centimeter accuracy, even in the presence of loud propeller noise (Video 1). When moving to a flying drone and different kinds of reflectors, the problem becomes significantly more challenging: motion jitter, varying propeller noise and tight real-time constraints make the problem much harder to solve. Nevertheless, first experiments suggest that sound-based wall detection and avoidance is possible (Figure and Video 2).
The principle we use to make this work is sound-based interference. The sound will “bounce off” the wall, and the reflected and direct sound will interfere either constructively or destructively, depending on the frequency and distance to the wall. Using this same principle for the four microphones, both the angle and the distance of the closest wall can be estimated. This is however not the only way to navigate using sound; in fact, our software stack, available as an open-source package for ROS2, also allows the Crazybat to extract the phase differences of incoming sound at the four microphones, which can be used to determine the location of an external sound source. We believe that a truly intelligent Crazybat would be able to switch between different operating modes depending on the conditions, just like bats that change their chirps depending on the task at hand.
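To sketch the idea with a simplified model (assuming a single, perfect reflection at roughly normal incidence): a wall at distance $d$ delays the echo by $2d/c$ relative to the direct sound, where $c$ is the speed of sound. A chirp component at frequency $f$ is then received with magnitude

$$|H(f)| = \left| 1 + \alpha \, e^{-j 2\pi f \cdot 2d/c} \right|,$$

where $\alpha$ is the strength of the reflection. The interference dips (nulls) of this pattern are spaced $c/(2d)$ apart in frequency, so sweeping the buzzer across frequencies and measuring where the dips occur reveals the distance $d$.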
Note that the ROS2 software stack is not limited to the Crazybat only — we have isolated the hardware-dependent components so that the audio-based navigation algorithms can be ported to any platform. As an example, we include results on the small wheeled e-puck2 robot in [4], which shows better performance than the Crazybat thanks to the absence of propeller noise and motion jitter.
This research project has taught us many things, above all an even greater admiration for the abilities of bats! Dealing with sound is pretty hard and very different from other prevalent sensing modalities such as cameras or lasers. Nevertheless, we believe it is an interesting alternative for scenarios with poor eyesight, limited computing power or memory. We hope that other researchers will join us in the quest of exploiting audio for navigation, and we hope that the tools that we make publicly available — both the hardware and software stack — lower the entry barrier for new researchers.
1 The audio extension deck works in a “plug-and-play” fashion like all other extension decks of the Crazyflie. It has been tested in combination with the flow deck, for stable flight in the absence of a more advanced localization system. The deck performs frequency analysis on incoming raw audio data from the 4 microphones, and sends the relevant information over to the Crazyflie drone where it is converted to the CRTP protocol on a custom driver and sent to the base station for further processing in the ROS2 stack.
References
[1] Galambos, Robert. “The Avoidance of Obstacles by Flying Bats: Spallanzani’s Ideas (1794) and Later Theories.” Isis 34, no. 2 (1942): 132–40. https://doi.org/10.1086/347764.
[2] Fenton, M. Brock, Alan D. Grinnell, Arthur N. Popper, and Richard R. Fay, eds. “Bat Bioacoustics.” In Springer Handbook of Auditory Research, 1992. https://doi.org/10.1007/978-1-4939-3527-7.
[3] Greif, Stefan, and Björn M Siemers. “Innate Recognition of Water Bodies in Echolocating Bats.” Nature Communications 1, no. 106 (2010): 1–6. https://doi.org/10.1038/ncomms1110.
[4] F. Dümbgen, A. Hoffet, M. Kolundžija, A. Scholefield and M. Vetterli, “Blind as a Bat: Audible Echolocation on Small Robots,” in IEEE Robotics and Automation Letters (Early Access), 2022. https://doi.org/10.1109/LRA.2022.3194669.
This year, the traditional Christmas video was overtaken by a big project that we had at the end of November: creating a test show with the help of CollMot.
First, a little context: CollMot is a show company based in Hungary that we’ve partnered with on a regular basis, having brainstorms about show drones and discussing possibilities for indoor drone shows in general. They developed Skybrush, an open-source software for controlling swarms. We have wanted to work with them for a long time.
So, when the opportunity came to rent an old train hall that we visit often (because it’s right next to our office and hosts good street food), we jumped on it. The place itself is huge, with massive pillars, pits for train maintenance, a high ceiling with metal beams and a really funky industrial look. The idea was to do a technology test and see if we could scale up the Loco positioning system to a larger space. This was also the perfect time to invite the guys at CollMot for some exploring and hacking.
The Loco system
We added the TDoA3 Long Range mode recently and had done experiments in our test lab indicating that the Loco positioning system should work in a bigger space with up to 20 anchors, but we had not actually tested it in a larger space.
The maximum radio range between anchors is probably up to around 40 meters in the Long Range mode, but we decided to set up a system that was only around 25×25 meters, with 9 anchors in the ceiling and 9 anchors on the floor placed in 3 by 3 matrices. The reason we did not go bigger is that the height of the space is around 7-8 meters and we did not want to end up with a system that is too wide in relation to the height, this would reduce Z accuracy. This setup gave us 4 cells of 12x12x7 meters which should be OK.
Finding a solution to get the anchors up to the 8-meter ceiling – and to get them down easily – was also a head-scratcher, but with some ingenuity (and meat hooks!) we managed to create a system. We only had the hall for 2 days before filming at night, and setting up the anchors on the ceiling took a big chunk out of the first day.
Drone hardware
We used 20 Crazyflie 2.1s equipped with the Loco deck, LED-rings, the thrust upgrade kit and Tattu 350 mAh batteries. We soldered the pin headers to the Loco decks for better rigidity, but also because it adds a bit more height-adjustability for the 350 mAh battery, which is a bit thicker than the stock battery. To make the LED-ring more visible from the sides we created a diffuser that we 3D-printed in white PLA. The full assembly weighed in at 41 grams. With the LED-ring lit up almost all of the time, we concluded that the show flight should not be longer than 3-4 minutes (with some flight time margin).
The show
CollMot, on their end, designed the whole show using Skyscript and Skybrush Studio. The aim was to have relatively simple and easily changeable formations to be able to test a lot of different things, like the large area, speed, or synchronicity. They joined us on the second day to implement the choreography, and share their knowledge about drone shows.
We got some time afterwards to discuss a lot of things, and enjoy some nice beers and dinner after a job well done. We even had time on the third day, before dismantling everything, to experiment a lot more in this huge space and got some interesting data.
What did we learn?
Initially we had problems with positioning: we got outliers and sometimes lost tracking. Finally we managed to trace the problems to the outlier filter. The filter was written a long time ago and the current implementation was optimized for 8 anchors in a smaller space, which did not really work in this setup. After some tweaking the problem was solved, but we need to improve the filter for generic support of different system setups in the future.
Another problem we observed is that the Z-estimate tends to get an offset that “sticks” and is not corrected over time. We do not really understand this yet, and it will require more investigation.
The outlier filter was the only major problem we had to solve; otherwise the Loco system mainly performed as expected and we are very happy with the result! The changes to the firmware are available in this, slightly hackish, branch.
We also spent some time testing maximum velocities. For horizontal flight, the Crazyflies started losing positioning above 3 m/s. They could probably go much faster, but the outlier filter started having problems at higher speeds. Also, the overshoot became larger the faster we flew, which most likely could be solved with better controller tuning. For vertical flight, 3 m/s was also the maximum, limited by the deceleration when coming down. Some improvements can be made here.
The conclusion is that many things work really well, but there are still some optimizations and improvements that could be made to make the system even more robust and accurate.
The video
But, enough talking, here is the never-seen-before New Year’s Eve video
And if you’re curious to see behind the scenes
Thanks to CollMot for their presence and valuable expertise, and InDiscourse for arranging the video!
And with the final blogpost of 2022 and this amazing video, it’s time to wish you a nice New Year’s Eve and a happy beginning of 2023!