How to handle our documentation has always been a bit of a struggle. For almost 2 years (see this blogpost and this one) we have been working on improving the documentation structure, by transferring information from the wiki, putting information closer to the code and setting up automated documentation. A few months ago, we managed to have automated logging and parameter documentation (see this blogpost).
Even though we think there is some improvement already, it can always be better! We have noticed that some of our users are a bit confused about how to go through our documentation. So in this blogpost we discuss some strategies for navigating the documentation as it is presented on bitcraze.io, which can also be found here.
Ecosystem-based navigation
So more than a year ago, we also started with an Ecosystem overview page, which is meant to take first-timers by the hand through the Crazyflie ecosystem. These overview pages start from the three main pillars: the Crazyflie Platform, the Clients and Positioning Technology. This is the type of navigation that we mostly advise if you are a beginner Crazyflie user who does not yet fully know the structure of the ecosystem.
For those that already have experience with the Crazyflie and its ecosystem, the above way of navigating through the docs might be a bit convoluted. With the ecosystem-based navigation, it takes about 3 scrolls and clicks to reach the STM development documentation, which is a bit too much of a detour if you already know what you are looking for. We did not make the repository overview page for this purpose, but we actually started using it a lot ourselves within the company, as a direct pathway to the development repository of each element. So this is a page that would be useful to other advanced developers as well!
So the repository overview page is split up into 4 main categories: Python-based software, C-based firmware, Other languages and bootloaders. See the navigation tree to get an idea of approximately where each of those repositories points. By the way, have you noticed that repository documentation has a gray header (like this one) and all the overview pages on the web have a green header (like this one)? These are meant to make you aware of whether you are still on a fluffy overview website page or going into the nitty-gritty details of the development documentation.
Feature-based navigation?
A remaining problem is that the repository documentation might not be enough to get a good overview. Where do you need to look if you are interested in ‘controllers’ or ‘state estimators’, or in how to make an app-layer application? Currently all of this is within the stm32 firmware documentation, as that is exactly where all of this is implemented. But how do we document cross-cutting features like CRTP, which involve not only the STM chip but also the NRF chip, the Crazyradio PA and the Crazyflie Python library? Or how about the Loco positioning system, where the Crazyflie communicates through the LPS deck with a separate LPS node?
So perhaps a good way to present all this information is feature-based, like ‘controllers’, ‘positioning’ or ‘high-level commander’, where we present a structure that points to parts of the detailed documentation within the repo docs. With the ecosystem-based, or even repository-based, navigation strategy, it takes for instance 4-7 clicks to reach the specific controller page, as you can verify by looking at the breadcrumb in the header. Perhaps splitting the documentation up by feature instead of by ecosystem element or programming language would be a more logical structure for the current state of the Crazyflie documentation.
Feedback
One reason why it is so difficult to do this properly is that we have a separate repository for each microprocessor on each of our products, which makes our open-source projects quite unique. It is therefore difficult to find another open-source project from which we can take inspiration. So, let us know what you would prefer for navigating through our documentation in this poll, but we are always open to other suggestions! If you know of an example of a similar open-source software project that is doing it the right way, or have any other tips, send us an email (contact_at_bitcraze.io), contact us on social media platforms or post a comment on this blogpost!
Update 2021-12-21:
The poll is closed and this is the result! Thanks all for responding!
Ever since we released the Lighthouse deck back in 2019, we’ve wanted to offer a bundle with the deck and the base stations. There are multiple reasons for this, but the main one was that we wanted users to be able to buy a full swarm (like the Loco Positioning Swarm) directly from us, without having to find the base stations separately. Initially this seemed easy to do, but it turned out to be a bit tricky. This post is about how we finally managed to get the Lighthouse Swarm Bundle finished and into the E-store.
When the Lighthouse deck was initially released it only had support for Lighthouse V1 base stations, but Lighthouse V2 was already out. Since the V1 base stations were already in short supply, we wanted to support V2, as that was what would be available in the future. We had started looking at V2 support, but there were still ongoing efforts from us (and others) to reverse engineer the protocol. After some prototyping we had some initial support, but there was still a lot of infrastructure work to be done before it could be released.
In parallel with this work we started trying to buy the Lighthouse V2 base stations. Normally there are two options here: either buy from local distributors or buy directly from the manufacturer. Buying from local distributors wasn’t a good option for us since these will only have local power plugs, and buying directly from the manufacturer often requires very large orders. So this process quickly stalled. But after a couple of months we got an offer to buy a bulk shipment of Lighthouse V2 base stations (without box or power adapters), which we finally decided to accept. And yeah, that’s me looking really happy next to a bunch of base stations…
With a bunch of base stations at the office, work started on sourcing a power adapter and creating a box. Unfortunately the number of COVID-19 cases started rising again shortly after we received the base stations, so we started working more from home again. And with only 2 people at the office at a time, it’s hard to work with hardware. Different team members need access to different resources, like the electronics lab, the flight arena or order packing. So getting box/adapter samples from manufacturers, doing testing and getting input on physical objects from other team members quickly went from days to weeks.
Finally, after a couple of months of testing, evaluating and learning lots about adapters and cardboard, we had good candidates. But then, literally as we were ordering the power adapters, it turned out the certification did not cover all the regions we wanted. Thankfully this time around we already had other options, so we quickly decided on the second best option (now the best option) and ordered.
In the meantime, work was underway to finalize the implementation of Lighthouse V2, including client support, firmware updates for the Lighthouse deck and documentation/videos. Finally, in the beginning of 2021, we got the documentation and the full implementation (although only for 2 base stations) in place (blog post).
After a bit more than a month of waiting, the power adapters and boxes finally showed up at our office. With all the supplies in place, we started preparing for the packing. Since you can buy base stations from multiple sources, we wanted to keep track of the base stations we were sending out, to be able to debug issues users might have with these units. Also, even though the base stations had already been factory tested, we wanted to quickly test them before shipping them out. So our flight arena was turned into a makeshift assembly line and we had some outside help come in to do the packing.
Finally, the end result! We’re really excited to be able to offer yet another swarm bundle, the Lighthouse swarm bundle. And we’re pretty happy about how the packaging turned out :-)
This week we have a guest blog post from Bart Duisterhof and Prof. Guido de Croon from the MAVlab, Faculty of Aerospace Engineering, of the Delft University of Technology. Enjoy!
Tiny drones are ideal candidates for fully autonomous jobs that are too dangerous or time-consuming for humans. A commonly shared dream would be to have swarms of such drones help in search-and-rescue scenarios, for instance to localize gas leaks without endangering human lives. Drones like the Crazyflie are ideal for such tasks, since they are small enough to navigate in narrow spaces, safe, agile, and very inexpensive. However, their small footprint also makes the design of an autonomous swarm extremely challenging, both from a software and a hardware perspective.
From a software perspective, it is really challenging to come up with an algorithm capable of autonomous and collaborative navigation within such tight resource constraints. State-of-the-art solutions like SLAM require too much memory and processing power. A promising line of work is to use bug algorithms [1], which can be implemented as computationally efficient finite state machines (FSMs), and can navigate around obstacles without requiring a map.
A downside of using FSMs is that the resulting behavior can be very sensitive to their hyperparameters, and therefore may not generalize outside of the tested environments. This is especially true for the problem of gas source localization (GSL), as wind conditions and obstacle configurations drastically change the problem. In this blog post, we show how we tackled the complex problem of swarm GSL in cluttered environments by using a simple bug algorithm with evolved parameters, and then tested it onboard a fully autonomous swarm of Crazyflies. We will focus on the problems that were encountered along the way, and the design choices we made as a result. At the end of this post, we will also add a short discussion about the future of nano drones.
Why gas source localization?
Overall we are interested in finding novel ways to enable autonomy on constrained devices, like Crazyflies. Two years ago, we showed that a swarm of Crazyflie drones was able to explore unknown, cluttered environments and come back to the base station. Since then, we have been working on an even more complex task: using such a swarm for Gas Source Localization (GSL).
There has been a lot of research focusing on autonomous GSL in robotics, since it is an important but very hard problem. The difficulty of the task comes from the complexity of how odor can spread in an environment. In an empty room without wind, a gas will slowly diffuse from the source. This can allow a robot to find it by moving up the gradient, just like small bacteria such as E. coli do. However, if the environment becomes larger, with many obstacles and walls, and wind comes into play, the spreading of gas is much less regular. Large parts of the environment may have no gas or wind at all, while at the same time there may be pockets of gas away from the source. Moreover, chemical sensors for robots are much less capable than the smelling organs of animals. Available chemical sensors for robots are typically less sensitive, noisier, and much slower.
Due to these difficulties, most work in the GSL field has focused on a single robot that has to find a gas source in environments that are relatively small and without obstacles. Relatively recently, there have been studies in which groups of robots solve this task in a collaborative fashion, for example with Particle Swarm Optimization (PSO). This allows robots to find the source and escape local maxima when present. Until now this concept has been shown in simulation [2] and on large outdoor drones equipped with LiDAR and GPS [3], but never before on tiny drones in complex, GPS-denied, indoor environments.
Required Infrastructure
In our project, we introduce a new bug algorithm, Sniffy Bug, which uses PSO for gas source localization. In order to tune the FSM of Sniffy Bug, we used artificial evolution. For time reasons, evolution typically takes place in simulation. However, early in the project, we realized that this would be a challenge, as no end-to-end gas modeling pipeline existed yet. It is important to have an easy-to-use pipeline that does not require any aerodynamics domain knowledge, such that as many researchers as possible can generate environments to test their algorithms. It would also make it easier to compare contributions and to better understand in which conditions certain algorithms do or don’t work. The GADEN ROS package [4] is a great open-source tool for modeling gas distribution when you have an environment and flow field, but for our objective, we needed a fully automated tool that could generate a great variety of random environments on demand with just a few parameters. Below is an overview of our simulation pipeline: AutoGDM.
First, we use a procedural environment generator proposed in [5] to generate random walls and obstacles inside the environment. An important next step is to generate a 3D flow field by means of computational fluid dynamics (CFD). A hard requirement for us was that AutoGDM needed to be free to use, so we chose the open-source CFD tool OpenFOAM. It’s used for cutting-edge aerodynamics research, and it is also the tool suggested by the authors of GADEN. Using OpenFOAM isn’t trivial, as a large number of parameters that require field expertise need to be selected, resulting in a complicated process. Next, we integrate GADEN into our pipeline, to go from an environment definition (CAD files) and a flow field to a gas concentration field. Other parts that needed to be automated were the random selection of boundary conditions, which has a large impact on the actual flow field, and source placement, which has an equally large impact on the concentration field.
After we built this pipeline, we started looking for a robot simulator to couple it to. Since we weren’t planning on using a camera, our main requirement was for the simulator to be efficient (preferably in 2D) so that evolutions would take relatively little time. We decided to use Swarmulator [6], a lightweight C++ robot simulator designed for swarming, and plugged in our gas data.
Algorithm Design
Roughly speaking, we considered two categories of algorithms for controlling the drones: 1) a neural network, and 2) an FSM that included PSO, with evolved parameters. Since we had used a tiny neural network for light seeking with a Crazyflie in our previous work, we first evolved neural networks in simulation. One of the first experiments is shown below.
While it worked pretty well in simple environments with few obstacles, it seemed challenging to make this work in real life with complex obstacles and multiple agents that need to collaborate. Given the time constraints of the project, we opted for evolving the FSM. This also facilitated crossing the reality gap, as the simulated evolution could build on basic behaviors that we developed and validated on the real platform, including obstacle avoidance with four tiny laser rangers, while communicating with and avoiding other drones. An additional advantage of PSO with respect to the reality gap is that it only needs the gas concentration, and no gradient of the gas concentration or wind direction (which many algorithms in the literature use). On a real robot at this scale, estimating the gas concentration gradient or the direction of a light breeze is hard if not impossible.
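To give a flavor of the PSO idea, here is a generic textbook-style sketch (not the actual Sniffy Bug code; the inertia and attraction gains are illustrative). Each agent only needs the position where it measured its own highest concentration and the swarm's best-known position:

import numpy as np

def next_waypoint(pos, vel, personal_best, swarm_best,
                  w=0.7, c1=1.0, c2=1.0, rng=None):
    """Generic PSO waypoint update: needs only positions where high
    gas concentrations were measured, no gradient or wind estimate."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(2)
    new_vel = (w * vel                             # inertia
               + c1 * r1 * (personal_best - pos)   # pull toward own best
               + c2 * r2 * (swarm_best - pos))     # pull toward swarm best
    return pos + new_vel, new_vel

In Sniffy Bug this kind of update is embedded in the evolved FSM together with the avoidance behaviors, but the ingredients are the same.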
Hardware
Our Crazyflie needs to be able to avoid obstacles, execute velocity commands, sense gas, and estimate the other agents’ positions in its own frame. For navigation, we added the Flow deck and laser rangers, whereas for gas sensing we used a TGS8100 gas sensor that had been used on a Crazyflie in previous work [7]. The sensor is lightweight and inexpensive, but accurately estimating gas concentrations can be difficult because of its size. It tends to drift and needs time to recover after a spike in concentration is observed. Another thing we noticed is that it is fragile: a crash can definitely destroy the sensor.
To estimate the relative position between agents, we use a Decawave Ultra-Wideband (UWB) module and communicate states, as proposed in [8]. We also use the UWB module to communicate gas information between agents and collaboratively seek the source. The complete configuration is visible below.
Evaluation in Simulation
After we optimized the parameters of our model using Swarmulator and AutoGDM, and of course tried many different versions of our algorithm, we ended up with the final Sniffy Bug algorithm. Below is a video that shows the evolved Sniffy Bug evaluated in six different environments. The red dots are the agents’ personal target waypoints, whereas the yellow dot is the best-known position of the swarm.
Simulation showed that Sniffy Bug is effective at locating the gas source in randomly generated environments. The drones successfully collaborate by means of PSO.
Real Flight Testing
After observing Sniffy Bug in simulation we were optimistic, but unsure about performance in real life. First, inspired by previous works, we disperse alcohol through the air by placing liquid alcohol in a can and blowing it into the room with a computer fan.
We test Sniffy Bug in our flight arena of 10 x 10 meters with large obstacles shaped like walls and orange poles. The image below shows four flight tests of Sniffy Bug in cluttered environments, flying fully autonomously, i.e., without help from any external infrastructure.
In the 24 runs we executed in total, we compared Sniffy Bug with manually selected and with evolved parameters. The figure below shows that the evolved parameters are more efficient in locating the source than the manual parameters.
This not only shows that our system can successfully locate a gas source in challenging environments, but also demonstrates the usefulness of the simulation pipeline. The parameters that were learned in simulation yield a high-performance model, validating the environment generation, randomization, and gas modeling parts of our pipeline.
Conclusion and Discussion
With this work, we believe we have made an important step towards swarms of gas-seeking drones. The proposed solution is shown to work in real flight tests with obstacles, and without any external systems to help in localization or communication. We believe this methodology can be extended to larger environments or even to 3 dimensions, since PSO is a robust, multi-dimensional heuristic search method. Moreover, we hope that AutoGDM will help the community to better compare gas seeking algorithms, and to more easily learn parameters or models in simulation, and deploy them in the real world.
To improve Sniffy Bug’s performance, adding more laser rangers will definitely help. When working with only four laser rangers, you realize how little information they actually provide. If one of the rangers senses a low value, it is unclear whether a slim pole or a massive wall has been detected, adding inefficiency to the algorithm. Adding more laser rangers, or using other sensor modalities like vision, would help to reliably avoid obstacles more complex than walls and poles.
Another interesting discussion can be held on the hardware required for real deployment. When working with 40 grams of maximum take-off weight, the sensors and actuators that can be selected are limited. For example, the low-power and lightweight flow deck works great but fails in low-light scenarios or with smoke. Future work exploring novel sensors for highly constrained nano robots could really help increase the Technological Readiness Level (TRL) of these systems.
Finally, this has been a really fun project to work on for us and we can’t wait to hear your thoughts on Sniffy Bug!
If you haven’t visited our store in a while, you may have missed our new addition: the Lighthouse Swarm bundle!
We’ve been working for some time now on improving the Lighthouse deck and its positioning system. Earlier in the year, we brought the Lighthouse deck out of early access. While working with it, we have seen the great possibilities and accuracy of this new positioning system. Thanks to the SteamVR base stations that we use as optical beacons, the Crazyflie calculates its position with better-than-decimeter accuracy and millimeter precision. This gives a tracking volume of up to 5x5x2 meters with sub-millimeter jitter and below 10 cm accuracy while flying. It’s perfect for a swarm, as it’s accurate, precise and autonomous. We’ve flown our Crazyflies with it a number of times and seen some awesome stuff with it!
As an example, here is a demo we showed at a conference back in October. We used 8 Crazyflies equipped with Lighthouse decks and Qi chargers to make a spiraling swarm. A computer orchestrates the Crazyflies and makes sure one is flying at all times, while the others recharge their batteries on their pads. After a pre-programmed trajectory is finished, or when the battery of the flying Crazyflie is depleted, it goes back to its pad while another one takes over. The demo also had an all-in mode that runs the trajectory on all Crazyflies with sufficient charge at once; the result is quite impressive and demonstrates the great relative precision of the Lighthouse system:
After the launch signal is sent to the Crazyflies, the computer is not required anymore: each Crazyflie autonomously estimates its own X, Y and Z in a global coordinate system from the base stations’ signals.
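For illustration, the orchestration logic on the computer boils down to something like the following sketch (the helper methods here are hypothetical, not our actual demo code):

import time

def run_show(crazyflies):
    flying = None
    while True:
        done = (flying is None or flying.trajectory_done()
                or flying.battery_low())
        if done:
            if flying is not None:
                flying.return_to_pad()   # land back on the Qi charger
            charged = [cf for cf in crazyflies
                       if cf is not flying and cf.fully_charged()]
            if charged:
                flying = charged[0]
                flying.take_off_and_start_trajectory()
        time.sleep(1)   # poll battery and trajectory state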
What’s great with the Lighthouse Swarm is that it allows you to do drone research even if you’re on a tighter budget.
And when we got the opportunity to acquire our own base stations (that are also available in the shop by the way), it seemed only logical to offer a Swarm bundle similar to our Loco swarm bundle. So what’s in it?
While the positioning will work with one base station, two base stations allow better coverage of the flight space and better stability; as Kimberly can attest, it’s even possible to set it up in your kitchen. The Crazyradios allow communication between the Crazyflies and your computer.
We dedicated a lot of time to the Lighthouse this winter, writing a paper with the help of Wolfgang’s calibration expertise. In this paper, we compared both Lighthouse V1 and V2 against a MoCap system. In all cases, the mean and median Euclidean errors of the Lighthouse positioning system are about 2-4 centimeters, with our MoCap system as ground truth. You can check the paper here, but here is a brief summary we used for our ICRA workshop:
We are now quite excited to see what you will do with this exciting new swarm bundle!
And if you don’t know how to set up the Swarm, you can get started at least with your Lighthouse system in this tutorial or watch Kristoffer explain it in this video:
For quite a while now, I have been very interested in the Rust programming language, and since Jonas joined us we are two Rust enthusiasts at Bitcraze. Rust is a relatively recent programming language that aims at being safe, performant and productive. It is a systems programming language in the sense that it compiles to machine code with minimal runtime. It prevents a lot of bugs at compile time, and it provides great mechanisms for abstraction that sometimes make it feel as high-level as languages like Python.
I have been interested in applying my love for Rust at Bitcraze, mostly during fun Fridays. There are two areas that I have mainly explored so far: putting Rust in embedded systems to replace pieces of C (having such a high-level-looking language in embedded is refreshing), and re-writing the Crazyflie lib for the PC in Rust to make it more performant and more portable. In this blog post I will talk about the latter; I’ll keep embedded Rust for a future blog post :).
Re-implementing Crazyflie lib
To re-implement the Crazyflie lib, the easiest approach is to follow the way the communication stack is currently set up; more information about CRTP can be found in a previous blog post about the Crazyflie radio communication and the communication reliability.
Since I am currently focusing on implementing communication using the Crazyradio dongle, I have separated the implementation into the following modules (a crate is the Rust version of a library):
This organization is very similar to the layering that we have in the Python crazyflie-lib, the difference being that there all the layers are distributed in the same Python package.
At the time this blog post is written, the crazyradio crate is full-featured. The link is in good shape and even has a Python binding. The Crazyflie lib, however, is still very much a work in progress. I started by implementing the ‘hard’ parts like log and param, but more directly useful parts like set-points (what is needed to actually fly the Crazyflie) are not implemented yet.
Compiling to the web: Wasm
One of the nice properties of Rust is that compiling to different platforms is generally easy and seamless. For instance, all the crates mentioned previously will compile and run on Windows/Mac/Linux without any modification, including the Python binding, using only the standard Rust install. One of the Rust-supported platforms is a bit more special and interesting compared to the others though: WebAssembly.
WebAssembly is a virtual machine designed to be targeted by systems programming languages like C/C++ and Rust. It can be used standalone (a bit like the Java VM) as well as in a web browser. All modern web browsers support and can run WebAssembly code, and WebAssembly can be called from JavaScript.
WebAssembly in the web browser is unfortunately not as easy to target as native Windows/Mac/Linux: WebAssembly does not support threading yet, USB access needs to be handled via WebUSB, and since we run in a web browser from JavaScript we have to follow some rules inherited from it. The most important is that the program can never block (i.e. std::sync::Mutex shall not be used, I have tried…).
I made two major modifications to my existing code in order to make it possible to run in a web browser:
crazyflie-link and crazyflie-lib have been re-implemented using Rust async/await. This means that no threads are needed, and Rust async/await interfaces almost seamlessly with JavaScript’s promises. The link and lib still compile and work well on native platforms.
I have created a new crate named crazyradio-webusb (not uploaded yet at the release of this post) that exposes the same API as the crazyradio crate but uses WebUSB to communicate with the Crazyradio.
To support the web, the relationship between the crates becomes as follows:
The main goal is to keep the crazyflie-lib and crazyflie-link unmodified. Support for the Crazyradio natively and on the web is handled by two crates that expose the same async API. The crate used is chosen by a compile flag (called a feature in the Rust world). This architecture could easily be expanded to other platforms like Android or iOS.
Status, demo and future work
I have started getting something working end-to-end in the browser. The lib currently only implements the Crazyflie param and log TOCs, so the current demo scans for Crazyflies, connects to the first one found and prints the list of parameters with their types and values. It can be found on the Crazyflie web client test server. This doesn’t do anything useful yet, but I am going to update this server as I make progress, so feel free to visit it in the future :).
Note that WebUSB is currently only implemented by Chromium-based browsers, so Chrome, Chromium and recent Edge. On Windows you need to install the WinUSB driver for the Crazyradio using Zadig. On Linux/Mac/Android it should work out of the box.
The source code for the web client is not pushed to Github yet; once it is, it will be named crazyflie-client-web. It is currently mostly implemented in Rust, and it will likely stay mostly Rust since it is much easier to stick with one (great!) language. One of the plans is to make a JavaScript API and push it to NPM; this would then become a Crazyflie lib usable by anyone on the web from JavaScript (or, a bit better, TypeScript…).
My goal for now is to implement a clone of the Crazyflie Client flight control tab on the web. This would provide a nice way to get started with the Crazyflie without having to install anything.
In the past couple of weeks we have been busy trying to improve the development interface of the Crazyflie. We want to make developing with and for the platform a more pleasant experience.
We have started looking at the logging and parameter framework and how to improve it for our users. The aim of this framework is to make it easy to log data from the Crazyflie and to set variables during runtime. Your application can use them to control the behavior of the platform or to receive data about what it is currently up to. As of today, there are 227 parameters and 467 logging variables defined in the firmware.
These logging variables and parameters have been added to the Crazyflie firmware over the course of years. Some are critical infrastructure, needed to write proper applications that interface with the platform. Some are duplicates or were added for debugging years ago. Others have in some way outlived their usefulness as the firmware and functionality have moved on. The problem is that we have no way of conveying this information to our users, and this is what we are trying to rectify.
An attempt at stability
We are currently reviewing all of our logging variables and parameters in an attempt to make the situation clearer for our users … and ourselves. We are adding documentation to make the purpose of each individual parameter and logging variable clearer. And we are also dividing them up into two categories: core and non-core.
If a parameter or logging variable is marked as core in the firmware, that constitutes a promise that we will try very hard not to remove, rename or in any other way change its behavior. The idea is that this variable or parameter can be used in applications without any fear or doubt about it going away.
If a variable or parameter is non-core, that does not mean it is marked for removal. But it could mean that we need more time to make sure it is the proper interface for the platform. It means that it could change in some way, or in some cases be removed in later firmware releases.
The reason for doing this is twofold: we want to make the Crazyflie interface clearer for our users, and we want something that we feel we can maintain and keep up-to-date documentation for.
What is the result?
We have introduced a pair of new macros to the firmware, LOG_ADD_CORE and PARAM_ADD_CORE which can be used to mark a parameter or variable as core. When using these we also mandate that there should be a Doxygen comment attached to the macro.
Below is an example from the barometer log group, showing the style of documentation expected and how to mark a logging variable as core. Parameters get treated in the same way.
/**
* Log group for the barometer
*/
LOG_GROUP_START(baro)
/**
* @brief Altitude above Sea Level [m]
*/
LOG_ADD_CORE(LOG_FLOAT, asl, &sensorData.baro.asl)
/**
* @brief Temperature [degrees Celsius]
*/
LOG_ADD(LOG_FLOAT, temp, &sensorData.baro.temperature)
/**
* @brief Air pressure [mbar]
*/
LOG_ADD_CORE(LOG_FLOAT, pressure, &sensorData.baro.pressure)
LOG_GROUP_STOP(baro)
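To give an idea of how an application can then rely on a core variable like baro.asl, here is a minimal sketch using the Python cflib (the URI is an assumption; adapt it to your own setup):

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

URI = 'radio://0/80/2M/E7E7E7E7E7'  # assumption: your Crazyflie's address

cflib.crtp.init_drivers()

# Subscribe to the core baro.asl variable at 10 Hz
log_conf = LogConfig(name='Baro', period_in_ms=100)
log_conf.add_variable('baro.asl', 'float')

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with SyncLogger(scf, log_conf) as logger:
        for timestamp, data, _ in logger:
            print(timestamp, data['baro.asl'])
            break  # one sample is enough for this sketch

Since baro.asl is marked as core, a script like this should keep working across future firmware releases.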
We have also added a script in the firmware repository: elf_sanity.py. The script will return data about the parameters and logging variables that are included in a firmware ELF. This can be used to count the number of core parameters. If we point it at a newly built Crazyflie ELF, after we’ve done our initial review pass of the parameters and variables, we get the result below.
$ python3 tools/build/elf_sanity.py --core cf2.elf
101 parameters and 78 log vars in elf
To produce a list of the parameters and variables you can add the --list-params and --list-logs options to the script.
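For example, a hypothetical invocation combining the flags above to list the core parameters:

$ python3 tools/build/elf_sanity.py --core --list-params cf2.elf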
What is the next step?
Once we have finished our review of the parameters and logging variables we will explore different ways of making the documentation of them available in a clear and accessible way. And we will come up with a scheme for making changes to the set of parameters and variables. Once this is all finished you can expect an update from us.
The end goal of our efforts is to make developing for the Crazyflie a smoother process, and we would love to hear from you. What is confusing? What are your pain points? Let us know, so we can do better!
It’s that time of the year again! As the days get darker and darker here in Sweden, we’re happy to get some time off to share some warmth with our families.
And to kick off the holiday season, we prepared a little treat for you! We enjoyed making a Christmas video that explores how we could use the Crazyflie at home. Since we’re not at the office anymore, we decided to fly in our homes, and this video shows the different ways we did so. First, take a look at what we’ve done:
Now let’s dig into the different techniques we used.
Tobias decided to fly the Bolt manually. His first choice was to land in the Christmas sock, but that was too hard, hence the hard landing on top of the tree. We were not sure who would survive: the tree or the Bolt!
Kimberly installed two base station V2s and, after setting up, determined some waypoints by holding the Crazyflie in her hand. She then generated a trajectory with the uav_trajectories project (like in the hyper demo) and used the cflib to upload this trajectory and make the Crazyflie fly all the way to the basket. Her two cats could have looked more impressed, though!
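As a rough idea of the playback part, here is a hedged cflib sketch that assumes the trajectory has already been uploaded to trajectory memory and defined as id 1 (the URI and timings are assumptions, and uploading the polynomial pieces is omitted):

import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # assumption

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    scf.cf.param.set_value('commander.enHighLevel', '1')
    hlc = scf.cf.high_level_commander
    hlc.takeoff(0.5, 2.0)                    # target height [m], duration [s]
    time.sleep(2.0)
    hlc.start_trajectory(1, time_scale=1.0)  # play previously defined trajectory 1
    time.sleep(10.0)                         # assumed trajectory duration
    hlc.land(0.0, 2.0)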
Using trial and error, Barbara used the Flow deck, the motion commander, and a broken measuring tape to calibrate the Crazyflie’s path next to the tree.
Arnaud realized that, with all the autonomous work, we hardly fly the Crazyflie manually anymore. So he flew the Crazyflie manually. It required a bit more training than expected, but the Crazyflie is really a fun (and safe!) quad to fly.
Marcus used two Lighthouse V2 base stations together with the Lighthouse deck and LED-ring deck. For the flying, he used the high level commander. The original plan was to fly around his gingerbread house, but unfortunately it was demolished before he got the chance (by some hungry elves surely!)
Kristoffer made his own tree ornament with the drone, which turned out to be a nice addition to a Christmas tree!
It was a fun way to use our own product, and to show off our decorated houses.
I hope you enjoy watching this video as much as we enjoyed making it.
We are staying open during the holiday season, but at a limited capacity: we still ship your orders, and will keep an eye on our emails and the forum, but things will get a bit slower here.
We wish you happy holidays and safe moments together with your loved ones.
This week we have a guest blog post from CollMot about their work to integrate the Crazyflie with Skybrush. We are happy that they have used the app API that we wrote about a couple of weeks ago, to implement the required firmware extensions!
Bitcraze and CollMot have joined forces to release an indoor drone show management solution using CollMot’s new Skybrush software and Crazyflie firmware and hardware.
CollMot is a drone show provider company from Hungary, founded by a team of researchers with decade-long expertise in drone swarm science. CollMot has offered outdoor drone shows since 2015. Our new product, Skybrush, allows users to handle their own fleet-level drone missions, and specifically drone shows, as smoothly as possible. In joint development with the Bitcraze team we are very excited to extend Skybrush to support indoor drone shows and other fleet missions using the Crazyflie system.
The basic swarm-induced mindset with which we are targeting the integration process is scalability. This includes scalability of communication, error handling, reliability and logistics. Each of these aspects is detailed below through some examples of the challenges we needed to solve together. We hope that besides having an application-specific extension of the Crazyflie for entertainment purposes, the base system has also gained many new features during this great cooperative process. But let’s dig into the tech details a bit more…
UWB in large spaces with many drones
We have set up a relatively large area (10x20x6 m) with the Loco Positioning System, using 8 anchors in a more or less cubic arrangement. Using TWR mode for swarms was out of the question, as it needs each tag (drone) to communicate with the anchors individually, which does not scale with fleet size. Initial tests with the UWB system in TDoA2 mode were not very satisfying in terms of accuracy and reliability, but as we went deeper into the details we found the two main causes of the inaccuracies:
Two of the anchors had been positioned on the vertical flat faces of some stairs, with a solid material connection between them that caused many reflections, so the relative distance measurement between these two anchors was bi-stable. When we realized that, we raised them a bit and attached them to columns with an air gap in between, which solved the reflection issue.
The outlier filter of the TDoA2 mode was not optimal: a single bad packet generated consecutive outliers that opened up the filter too fast. This issue has since been solved in the Crazyflie firmware; after our long-lasting, painful investigation, the fix was changing a single number from 2 to 3. This is how the reward system works in software development :)
After all this, UWB was doing its job quite nicely in both TDoA2 and TDoA3 modes, with a stable accuracy at the 10-20 cm level in such a large area, so we could move on to tuning the controller of the Crazyflie 2.1 a bit.
Crazyflies with Loco and LED decks
As we prepared the Crazyflie drones for shows, we had the Loco deck attached on top and the LED deck attached to the bottom of the drones, with an extra light bulb to spread the light smoothly. This setup resulted in a total weight of 37g. The basic challenge with the controller was that this weight turned out to be too much for the Crazyflie 2.1 system. Hover was at around 60-70% throttle on average; furthermore, there was a substantial difference in the throttle levels needed for individual motors (some in the 70-80% range). The tiny drones did a great job in horizontal motion, but as soon as they needed to go up or down with a vertical speed above around 0.5 m/s, one of their ESCs saturated and thus the system became unstable and crashed. Interestingly enough, the crash always started with a wobble exactly along the X axis, leading us to think that there was an issue with the positioning system instead of the ESCs. There are two possible solutions for this major problem:
use less payload, i.e. lighter drones
use stronger motors
Partially as a consequence of these experiments, the Bitcraze team is now experimenting with new, stronger models that will be optimized for show use cases as well. We can’t wait to test them!
Optimal controller for high speeds and accurate trajectory following
In general we are not yet very satisfied with any of the implemented controllers when using the UWB system for a show use case. This use case is special, as trajectory following needs to be as accurate as possible both in space and time to avoid collisions and to produce nice synchronized formations, while the maximal speed both horizontally and vertically has to be as high as possible to increase the wow-effect for the audience.
The PID controller has no cutoffs on its outputs, and with the large positioning errors sometimes present in the UWB system, the controller outputs get way too large. If gains are reduced, motion becomes sluggish and the path is not followed accurately in time.
The Mellinger and INDI controllers work well only with positioning systems of much better accuracy.
We have stuck with the PID controller so far and added velocity feed-forward terms, cutoffs on the output and some nonlinearity in case of large errors, which helped a bit, but the solution is not fully satisfying. Hopefully, these modifications can be included in the main firmware soon. However, a perfect controller with UWB is still an open question; any suggestions are welcome!
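As an illustration of these modifications, here is a single-axis sketch with made-up gains and limits (not the actual firmware or CollMot code):

def axis_output(pos_err, vel_err, err_integral, vel_ff,
                kp=2.0, kd=0.5, ki=0.1, kff=1.0, limit=1.0):
    """PID with velocity feed-forward and a hard output cutoff, so a
    single bad UWB position estimate cannot command a huge correction."""
    u = kp * pos_err + kd * vel_err + ki * err_integral + kff * vel_ff
    return max(-limit, min(limit, u))   # clamp the output

The feed-forward term keeps the drone moving along the planned trajectory even when the position error is noisy, while the clamp bounds the reaction to outliers.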
Show specific improvements in the firmware
We implemented code that uploads the show content to the drones smoothly, performs automatic preflight checks, displays status with the LED deck to give visual feedback on many drones simultaneously, starts the show on time in synchrony across all swarm members, and handles the light program and trajectory execution of the show.
These modifications are now in our own fork of the Crazyflie firmware and will soon be rewritten into a show app, thanks to this promising new possibility in the code framework. As soon as the Skybrush and Crazyflie systems are stable enough to be released together, we will publish the related app code that helps automate show logistics for every user.
Summary
To sum it up, we are very enthusiastic about the Crazyflie system and the great team behind the scenes with very friendly, open and cooperative support. The current stage of Crazyflie + Skybrush integration is as follows:
New hardware iterations based on the Bolt system that support longer and more dynamic flights are coming;
a very stable, UWB-compatible controller is still an open question but current possibilities are satisfying for initial tests with light flight dynamics;
a new Crazyflie app for the drone show case is basically ready to be launched together with the release of Skybrush in the near future.
If you are interested in Skybrush or have any questions related to this integration process, drop us an email or comment below.
Only a week left until we stand in our ICRA booth in Montreal and give you a glimpse of what we do here at Bitcraze. As we have written about earlier, we are aiming to run a fully automated demo. We have been fine-tuning it over the last couple of days, and if something unpredictable doesn’t break it, we think it is going to be very enjoyable. For those interested in the juicy details, check out this informative ICRA 2019 page, but if you are going to visit, maybe wait a bit so you don’t get spoiled.
Apart from the demo we are also going to show our products as well as some new things we are working on. The brand new things include:
AI-deck: This is a collaborative product between GreenWaves Technologies, ETH Zürich and Bitcraze. It is based on the PULP-Shield that the Integrated and System Laboratory has designed. You can read more about it in this blog post. The difference from the PULP-Shield is that we have added an ESP32, the NINA-W102 module, so that video can be streamed over WiFi. This, we hope, will ease development and add more use cases.
Active marker deck: Another collaboration, but this time with Qualisys. This will make tracking with their motion capture cameras easier and better. Some more details are in this blog post. Qualisys will have the booth just next to ours, where it will be possible to see it in a live demo!
Lighthouse-4 deck: Using the Vive lighthouse positioning system this deck adds sub-millimeter precision to the Crazyflie. This is the deck used in the demo and could become the star of the show.
Adding to the above we will of course also display our recently released products:
Crazyflie 2.1: The Crazyflie 2.1 is an improvement of the Crazyflie 2.0 that keeps backward compatibility.
Better radio performance and external antenna support: With a new radio power amplifier we’ve improved the link quality and added support for dual antennas (on-board chip antenna and external antenna via u.FL connector)
Better power button: We’ve gotten feedback that the power button breaks too easily, so now we’ve replaced it with a more solid alternative.
Improved battery cable fastening: To avoid weakening of the cables over time they are now run through a cable relief.
Improved sensors: To make the flight performance better we’ve switched out the IMU and pressure sensor. The new Crazyflie uses the drone-specialized sensor combo BMI088 and BMP388 by Bosch Sensortec.
Flow deck v2: The Flow deck v2 has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
Z-ranger deck v2: The Z-ranger v2 deck has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
Multi-ranger deck: The Multi-ranger deck adds VL53L1x sensors in all directions for mapping and obstacle avoidance.
MoCap marker deck: The motion capture deck with support for easy attachment of passive markers for motion capture camera tracking.
Roadrunner: The Roadrunner is released as early access. The hardware is basically a Crazyflie 2.1 without motors, accepting up to 12V input power. This enables other robots or systems to use the Loco positioning system.
You can find us in booth 101 at ICRA 2019 (in Montreal, Canada), May 20 – 22. Drop by and say hi, check out the products & demo and tell us what you are working on. We love to hear about all the interesting projects that are going on. See you there!
Hi everyone, here at the Integrated and System Laboratory of ETH Zürich, we have been working on an exciting project: PULP-DroNet. Our vision is to enable artificial-intelligence-based autonomous navigation on small flying robots, like the Crazyflie 2.0 (CF) nano-drone. In this post, we will give you the basic ideas behind making the CF fly fully autonomously, relying only on onboard computational resources. That means no human operator, no ad-hoc external signals, and no remote base station! Our prototype can follow a street or a corridor and at the same time avoid collisions with unexpected obstacles, even when flying at high speed.
PULP-DroNet is based on the Parallel Ultra Low Power (PULP) project envisioned by ETH Zürich and the University of Bologna. In the PULP project, we aim to develop an open-source, scalable hardware and software platform to enable energy-efficient complex computation where the available power envelope is only a few milliwatts, such as advanced Internet-of-Things nodes, smart sensors — and of course, nano-UAVs. In particular, we address the computational demands of applications that require flexible and advanced processing of data streams generated by sensors such as cameras, which is beyond the capabilities of typical microcontrollers. The PULP project has its roots in the RISC-V instruction set architecture, an innovative open-source academic and research architecture and an alternative to ARM.
The first step to make the CF autonomous was the design and development of what we called the PULP-Shield, a small form factor pluggable deck for the CF, featuring two off-chip memories (Flash and RAM), a QVGA ultra-low-power grey-scale camera and the PULP GAP8 System-on-Chip (SoC). The GAP8, produced by GreenWaves Technologies, is the first commercially available embodiment of our PULP vision. This SoC features nine general purpose RISC-V-based cores organised in an on-chip microcontroller (1 core, called Fabric Ctrl) and a cluster accelerator of 8 cores, with 64 kB of local L1 memory accessible at high bandwidth from the cluster cores. The SoC also hosts 512kB of L2 memory.
Then, as the algorithmic heart of our autonomous navigation engine, we selected an advanced artificial intelligence algorithm based on DroNet, a Convolutional Neural Network (CNN) that was originally developed by our friends at the Robotics and Perception Group (RPG) of the University of Zürich. To enable the execution of DroNet on our resource-constrained system, we developed a complete methodology to map computationally intense deep neural networks on the PULP-Shield and the GAP8 SoC. The network outputs two pieces of information, a probability of collision and a steering angle, which are translated into the dynamic information used to control the drone: forward velocity and angular yaw rate, respectively. The layout of the network is the following:
Therefore, our mission was to deploy all the required computation onboard our PULP-Shield mounted on the CF, enabling fully autonomous navigation. To put the problem into perspective, in the original work by the RPG, the DroNet CNN enabled autonomous navigation of big-size drones (e.g., the Parrot Bebop). In the original use case, computational power and memory were not a problem thanks to the streaming of images to a remote base station, typically a laptop consuming 30-100 Watts or more. So our mission required running a similar workload within 1/1000 of the original power. To make this work, we combined fixed-point arithmetic (instead of “traditional” floating point), some minimal modifications to the original topology, and optimised memory and computation usage. This allowed us to squeeze DroNet into the ultra-small power budget available onboard. Our most energy-efficient configuration delivers 6 frames per second (fps) within only 64 mW (including all the electronics on the PULP-Shield), and when we push the PULP platform to its limit, we achieve an impressive 18 fps within just 3.5% of the total CF power envelope — the original DroNet was running at 20 fps on an Intel i7.
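As an illustration of the output mapping described above, here is a minimal sketch (the filter coefficient and maximum speed are assumptions for the sketch, not the released PULP-DroNet values):

ALPHA = 0.3   # low-pass filter coefficient (assumed)
V_MAX = 1.0   # maximum forward velocity in m/s (assumed)

def update_setpoint(prev_v, prev_yaw_rate, p_collision, steering_angle):
    """Map the CNN outputs to a smoothed velocity/yaw-rate setpoint:
    the more likely a collision, the slower the drone flies."""
    v = (1 - ALPHA) * prev_v + ALPHA * (1 - p_collision) * V_MAX
    yaw_rate = (1 - ALPHA) * prev_yaw_rate + ALPHA * steering_angle
    return v, yaw_rate

The low-pass filtering smooths out frame-to-frame jitter in the network's predictions before they are turned into control commands.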
Do you want to check for yourself? All our hardware and software designs, including our code, schematics, datasets, and trained networks, have been released and made available to everyone as open source and open hardware on Github. We look forward to other enthusiasts’ contributions, both in hardware enhancements and in software (e.g., smarter networks), to create a great community of people interested in working together on smart nano-drones. Last but not least, the piece of information you were all waiting for. Yes, soon Bitcraze will allow you to enjoy our PULP-Shield; actually, even better, you will play with its evolution! Stay tuned, as more information about the “code-name” AI-deck will be released in upcoming posts :-).