Category: Video

Christmas is just around the corner, and it’s time for the traditional Christmas video! This year we wanted to use the AI deck, as we’ve been working hard on it for some time now, and showcasing its new features in this festive video seemed like the best idea.

Santa needed help finding the presents and getting them delivered this year, so he turned to Bitcraze! Let’s see how it played out:

I’m sure you’re wondering how we managed to set this up, so let’s discuss how we did it!

Picking up the packages

The goal was to pick up some packages and place them in the sleigh, and it all worked out pretty well. All the flying in the video is scripted using the python lib, and positioning is done with Qualisys’ motion capture system and Active marker decks. The trajectories are hard-coded, and with some careful adjustments we managed to lift and fly the 4 Crazyflies attached to the same present, even though it was a bit wobbly from time to time. Getting the present into the sleigh was not as easy, and we might have taken some shortcuts here, as well as when attaching and removing the lines.

Sometimes detangling the wires was a team effort

Picking up the second package needed some precision, and it went incredibly well, on the first try! The present was very light and needed someone to hold it, to prevent it from moving when the Crazyflie approached.

Getting the sleigh off

Our first tries used 5 strings attached directly to the sleigh, but it was difficult to get the right tension at the right time to have the UAVs actually pull the weight. Enter the “rubber band solution”: we attached all the strings to a rubber band, which was itself attached to the sleigh. That way, the tension evened out once all the Crazyflies were in the air and ready to pull the sleigh.

The rubber band

The AI deck camera/streamer

When we started this project, the intention was to run a neural network in the AI-deck to identify or classify the presents. We did not manage to get to a point where we had something that actually added value to the story, so we settled for just streaming the video from the AI-deck camera on the scouting drone instead.

The AI-deck example with color camera viewer is still under development, but if you want to give it a try you can take a look at the readme in the github repo.

Bloopers

And finally, as an added bonus, if you ever wonder how many tries it takes to make 5 Crazyflies pull a sleigh, here is a little behind-the-scenes video too!

Sometimes we get the question of where to modify or add code to change some behavior of the Crazyflie. There is no quick answer to this question but we thought that we should write a post to clear up some question marks and give a better idea of how to approach the problem.

There are quite a few repositories on the Bitcraze github page, but two are the main focal points for almost any Crazyflie work: crazyflie-firmware and crazyflie-lib-python. The crazyflie-firmware contains the source code (written in C) for the firmware that runs in the Crazyflie, that is, the code responsible for flying, blinking LEDs, communicating over the radio, scanning sensors and so on. The crazyflie-lib-python (often called the python lib), on the other hand, runs on the PC side and is the API to use to communicate with the Crazyflie from a script. The crazyflie-lib-python is also used by the python client, which means that anything you see in the client can be done by a script using the python lib.

Let’s assume we have a system with one Crazyflie connected to a computer using a Crazyradio. Now we want to control the Crazyflie, for instance to make it take off. How should this be done?

The easiest way is to use the python lib. The python lib is used to communicate with the Crazyflie, and we can use it to send instructions, for instance to take off or fly a trajectory. It is also possible to use the parameter framework to change values in the Crazyflie, and the main way of monitoring what is going on in the Crazyflie is to use the log framework to read variables from it. The python lib is perfect for controlling the Crazyflie or prototyping ideas, as it is very fast to make changes and try things out. The best way to get started with the python lib is to start from an example that already uses the functionality you want.
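To give a flavour of what such a script looks like, here is a minimal sketch based on the motion commander examples in the python lib. The URI is a placeholder, and autonomous motion like this assumes a positioning deck (for example the Flow deck) is mounted:

<code>import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'  # placeholder: use your Crazyflie's address

cflib.crtp.init_drivers()  # initialize the radio drivers
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # MotionCommander takes off when entering the block and lands when leaving it
    with MotionCommander(scf, default_height=0.5) as mc:
        time.sleep(1)
        mc.forward(0.3)   # fly 0.3 m forward
        mc.turn_left(90)  # yaw 90 degrees
        time.sleep(1)
</code>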

Another option is to add code in the firmware. Originally this was quite hard, since the firmware was not initially designed to accept user code. This means that unless you want to modify already existing code, it is hard to find where to add your code so that it runs in the Crazyflie firmware, and you would have to make a fork of the firmware, which can be hard to maintain and keep up to date in the long run. This is one of the things the out-of-tree build and the app layer solve: it is now quite easy to add and run your own C files in your own project without having to fork the Crazyflie firmware. There are a bunch of examples in the firmware that show how to implement autonomous behavior as an app; the easiest is to start with the hello world example. When it comes to modifying existing functionality in the firmware, forking and modifying the official firmware is unfortunately still the only solution most of the time. We are however working on making more and more of the firmware modular so that it can be expanded out of tree; for example, there has been work to make an out-of-tree estimator possible.

A prototype written as a python script is often pretty easy to move to the Crazyflie firmware, which makes for a good pattern when writing an application: rapid prototyping in python, then finalizing in firmware if needed. The best example of that is the push demo, where the Crazyflie can be pushed around using the Flow deck for autonomous flight and the Multi-ranger deck for detecting obstacles and hands. We have a python cflib push demo as well as a Crazyflie firmware push demo app.
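To give a rough idea of what the python side of such a demo looks like, here is a condensed sketch in the spirit of the cflib push demo (simplified, not the exact example code; the URI and thresholds are placeholders):

<code>import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander
from cflib.utils.multiranger import Multiranger

URI = 'radio://0/80/2M/E7E7E7E7E7'  # placeholder


def is_close(range_m, threshold=0.3):
    # The rangers report None when nothing is detected within range
    return range_m is not None and range_m < threshold


cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with MotionCommander(scf, default_height=0.4) as mc:
        with Multiranger(scf) as ranger:
            while not is_close(ranger.up):      # a hand above the drone stops the demo
                vx = vy = 0.0
                if is_close(ranger.front):
                    vx -= 0.2                   # back away from whatever is in front
                if is_close(ranger.back):
                    vx += 0.2
                if is_close(ranger.left):
                    vy -= 0.2
                if is_close(ranger.right):
                    vy += 0.2
                mc.start_linear_motion(vx, vy, 0.0)
                time.sleep(0.1)
</code>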

There is some support in the python lib for interacting with multiple Crazyflies, and it is probably a good starting point for simple swarms. For more advanced swarm work, Crazyswarm may be a better option.
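A minimal sketch of what the swarm support in the python lib looks like is shown below; the URIs are placeholders, and each Crazyflie is assumed to have its own positioning (e.g. a Flow deck):

<code>import time

import cflib.crtp
from cflib.crazyflie.swarm import CachedCfFactory, Swarm
from cflib.positioning.motion_commander import MotionCommander

# Placeholder addresses for the Crazyflies in the swarm
uris = {
    'radio://0/80/2M/E7E7E7E7E1',
    'radio://0/80/2M/E7E7E7E7E2',
    'radio://0/80/2M/E7E7E7E7E3',
}


def hop(scf):
    # This function is executed once per Crazyflie, in parallel
    with MotionCommander(scf, default_height=0.5) as mc:
        time.sleep(2)
        mc.up(0.2)
        time.sleep(1)
    # each Crazyflie lands when its context manager exits


cflib.crtp.init_drivers()
factory = CachedCfFactory(rw_cache='./cache')
with Swarm(uris, factory=factory) as swarm:
    swarm.parallel_safe(hop)
</code>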

If you would like to see some of this process in action, we held a workshop during our BAM days about implementing functionality both using the python lib and in the firmware as an out-of-tree app:

This week we have a guest blog post from Enrica Soria from the Laboratory of Intelligent Systems at the Ecole Polytechnique Fédérale de Lausanne (EPFL). Enjoy!

From Star Wars to Black Mirror, sci-fi movies predict a future where thousands of drones will fill our sky. Curving sharply around trees or soaring over buildings, they fly just like a flock of starlings. To turn this vision into a reality, real drone swarms need to increase their autonomy and operate in a decentralized fashion. In a decentralized swarm, each robot makes its own decision based only on local information. Decentralization not only allows the swarm to be more robust to the failure of single individuals, but also removes the dependency on a single computing unit, thus making the swarm more scalable in terms of size.

We at LIS (EPFL) have shown that predictive controllers can improve the safety of aerial swarms by predicting and optimizing the agents’ future behavior in an iterative process. However, the centralized nature of this method allowed us to control only five drones and prevented us from scaling up to a large number of drones. For this reason, we have worked on a novel decentralized and scalable swarm controller that allows the safe and cohesive flight of aerial swarms in cluttered environments. In our latest article, published in IEEE Robotics and Automation Letters (RA-L), we describe how we designed the controller, show its scalability in size, and demonstrate its robustness to noise. We studied the swarm’s performance and compared how it changes in two different environments: a forest and a funnel-like environment.

The Crazyflie 2.1 was the perfect platform for our experiments. It is lightweight, modular, and tough. This quadcopter can survive big hits when things don’t go as planned… and, if you work on swarms, things can go wrong!

The fleet of Crazyflies equipped with a single marker.

With our algorithm, sixteen robots were able to fly through an artificial forest that we set up in our indoor motion capture arena. In our previous work, we installed four markers on each quadcopter and used the rigid body tracking from Motive (the Optitrack software). The large volume of our experimental room required the use of big markers for long-distance detection, which added considerable weight to the drone. Hence, in our new work, we use a single marker per drone. Tracking is supported by the ‘crazyswarm’ package, and communication with the entire swarm only requires two radio links. However, despite our model being decentralized, in our implementation the robots relay their information to an external brain, which does the computations for them. In the future, all the necessary code will be embedded onboard, removing the dependency on external infrastructure.

Our predictive swarm of Crazyflies flying among obstacles in our indoor experimental room.
Video about the article

This work is a step forward towards the fully autonomous deployment of drone swarms in our cities. By enabling safe navigation in cluttered environments, drone fleets will be able to integrate with conventional air traffic, search for missing people, inspect dangerous areas, transport injured people to hospitals quicker, and deliver important packages right to our doors.

For further details, check out our article here!

This week we have a guest blog post from Bart Duisterhof and Prof. Guido de Croon from the MAVlab, Faculty of Aerospace Engineering, Delft University of Technology. Enjoy!

Tiny drones are ideal candidates for fully autonomous jobs that are too dangerous or time-consuming for humans. A commonly shared dream would be to have swarms of such drones help in search-and-rescue scenarios, for instance to localize gas leaks without endangering human lives. Drones like the CrazyFlie are ideal for such tasks, since they are small enough to navigate in narrow spaces, safe, agile, and very inexpensive. However, their small footprint also makes the design of an autonomous swarm extremely challenging, both from a software and hardware perspective.

From a software perspective, it is really challenging to come up with an algorithm capable of autonomous and collaborative navigation within such tight resource constraints. State-of-the-art solutions like SLAM require too much memory and processing power. A promising line of work is to use bug algorithms [1], which can be implemented as computationally efficient finite state machines (FSMs), and can navigate around obstacles without requiring a map.

A downside of using FSMs is that the resulting behavior can be very sensitive to their hyperparameters, and therefore may not generalize outside of the tested environments. This is especially true for the problem of gas source localization (GSL), as wind conditions and obstacle configurations drastically change the problem. In this blog post, we show how we tackled the complex problem of swarm GSL in cluttered environments by using a simple bug algorithm with evolved parameters, and then tested it onboard a fully autonomous swarm of CrazyFlies. We will focus on the problems that were encountered along the way, and the design choices we made as a result. At the end of this post, we will also add a short discussion about the future of nano drones.

Why gas source localization?

Overall we are interested in finding novel ways to enable autonomy on constrained devices, like CrazyFlies. Two years ago, we showed that a swarm of CrazyFlie drones was able to explore unknown, cluttered environments and come back to the base station. Since then, we have been working on an even more complex task: using such a swarm for Gas Source Localization (GSL). 

There has been a lot of research focusing on autonomous GSL in robotics, since it is an important but very hard problem. The difficulty of the task comes from the complexity of how odor can spread in an environment. In an empty room without wind, a gas will slowly diffuse from the source. This can allow a robot to find it by moving up the gradient, just like small bacteria such as E. coli do. However, if the environment becomes larger with many obstacles and walls, and wind comes into play, the spreading of gas is much less regular. Large parts of the environment may have no gas or wind at all, while at the same time there may be pockets of gas away from the source. Moreover, chemical sensors for robots are much less capable than the smelling organs of animals. Available chemical sensors for robots are typically less sensitive, noisier, and much slower.

Due to these difficulties, most work in the GSL field has focused on a single robot that has to find a gas source in environments that are relatively small and without obstacles. Relatively recently, there have been studies in which groups of robots solve this task in a collaborative fashion, for example with Particle Swarm Optimization (PSO). This allows robots to find the source and escape local maxima when present. Until now this concept has been shown in simulation [2] and on large outdoor drones equipped with LiDAR and GPS [3], but never before on tiny drones in complex, GPS-denied, indoor environments.

Required Infrastructure

In our project, we introduce a new bug algorithm, Sniffy Bug, which uses PSO for gas source localization. In order to tune the FSM of Sniffy Bug, we used artificial evolution. For time reasons, evolution typically takes place in simulation. However, early in the project, we realized that this would be a challenge, as no end-to-end gas modeling pipeline existed yet. It is important to have an easy-to-use pipeline that does not require any aerodynamics domain knowledge, so that as many researchers as possible can generate environments to test their algorithms. It would also make it easier to compare contributions and to better understand in which conditions certain algorithms work or don’t work. The GADEN ROS package [4] is a great open-source tool for modeling gas distribution when you have an environment and flow field, but for our objective, we needed a fully automated tool that could generate a great variety of random environments on demand with just a few parameters. Below is an overview of our simulation pipeline: AutoGDM.

AutoGDM, a fully automated gas dispersion modeling (GDM) simulation pipeline.

First, we use a procedural environment generator proposed in [5] to generate random walls and obstacles inside the environment. An important next step is to generate a 3D flow field by means of computational fluid dynamics (CFD). A hard requirement for us was that AutoGDM needed to be free to use, so we chose the open-source CFD tool OpenFOAM. It is used for cutting-edge aerodynamics research, and it is also the tool suggested by the authors of GADEN. Using OpenFOAM isn’t trivial, as a large number of parameters that require field expertise need to be selected, resulting in a complicated process. Next, we integrate GADEN into our pipeline, to go from an environment definition (CAD files) and a flow field to a gas concentration field. Other parts that needed to be automated were the random selection of boundary conditions, which has a large impact on the actual flow field, and source placement, which has an equally large impact on the concentration field.

After we built this pipeline, we started looking for a robot simulator to couple it to. Since we weren’t planning on using a camera, our main requirement was for the simulator to be efficient (preferably in 2D) so that evolutions would take relatively little time. We decided to use Swarmulator [6], a lightweight C++ robot simulator designed for swarming and we plugged in our gas data.

Algorithm Design

Roughly speaking, we considered two categories of algorithms for controlling the drones: 1) a neural network, and 2) an FSM that included PSO, with evolved parameters. Since we used a tiny neural network for light seeking with a CrazyFlie in our previous work, we first evolved neural networks in simulation. One of the first experiments is shown below.

A single agent in simulation seeking a light source using a tiny neural network.

While it worked pretty well in simple environments with few obstacles, it seemed challenging to make this work in real life with complex obstacles and multiple agents that need to collaborate. Given the time constraints of the project, we opted to evolve the FSM instead. This also facilitated crossing the reality gap, as the simulated evolution could build on basic behaviors that we developed and validated on the real platform, including obstacle avoidance with four tiny laser rangers, while communicating with and avoiding other drones. An additional advantage of PSO with respect to the reality gap is that it only needs the gas concentration, not the gradient of the gas concentration or the wind direction (which many algorithms in the literature use). On a real robot at this scale, estimating the gas concentration gradient or the direction of a light breeze is hard if not impossible.

Hardware

Our CrazyFlie needs to be able to avoid obstacles, execute velocity commands, sense gas, and estimate the other agents’ positions in its own frame. For navigation, we added the flow deck and laser rangers, whereas for gas sensing we used a TGS8100 gas sensor that had been used on a CrazyFlie in previous work [7]. The sensor is lightweight and inexpensive, but accurately estimating gas concentrations with it can be difficult because of its size. It tends to drift and needs time to recover after a spike in concentration is observed. Another thing we noticed is that it is possible to break them: a crash can definitely destroy the sensor.

To estimate the relative position between agents, we use a Decawave Ultra-Wideband (UWB) module and communicate states, as proposed in [8]. We also use the UWB module to communicate gas information between agents and collaboratively seek the source. The complete configuration is visible below.

A 37.5 g nano quadcopter, capable of fully autonomous waypoint tracking, obstacle avoidance, relative localization, communication and gas sensing.

Evaluation in Simulation

After we optimized the parameters of our model using Swarmulator and AutoGDM, and of course trying many different versions of our algorithm, we ended up with the final Sniffy Bug algorithm. Below is a video that shows evolved Sniffy Bug evaluated in six different environments. The red dots are an agent’s personal target waypoint, whereas the yellow dot is the best-known position for the swarm.

Sniffy Bug evaluated in Swarmulator environments.

Simulation showed that Sniffy Bug is effective at locating the gas source in randomly generated environments. The drones successfully collaborate by means of PSO.
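For readers unfamiliar with PSO, the sketch below shows the textbook form of the waypoint update that this collaboration is based on: each agent mixes inertia with attraction towards its own best-smelling position and the swarm’s best-known position. This is a generic illustration with made-up gains, not our exact Sniffy Bug implementation, in which these parameters are evolved:

<code>import random

W, C1, C2 = 0.5, 1.0, 1.0   # inertia and attraction gains (illustrative values only)


def next_waypoint(pos, vel, personal_best, global_best):
    """One PSO step in 2D: returns the new target waypoint and velocity."""
    new_vel = []
    for d in range(2):  # x and y
        r1, r2 = random.random(), random.random()
        v = (W * vel[d]
             + C1 * r1 * (personal_best[d] - pos[d])
             + C2 * r2 * (global_best[d] - pos[d]))
        new_vel.append(v)
    waypoint = [pos[d] + new_vel[d] for d in range(2)]
    return waypoint, new_vel
</code>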

Real Flight Testing

After observing Sniffy Bug in simulation we were optimistic, but unsure about performance in real life. First, inspired by previous works, we disperse alcohol through the air by placing liquid alcohol in a can and blowing it into the room with a computer fan.

Dispersion of liquid alcohol in flight tests.

We test Sniffy Bug in our flight arena of 10 x 10 meters with large obstacles shaped like walls and orange poles. The image below shows four flight tests of Sniffy Bug in cluttered environments, flying fully autonomously, i.e., without help from any external infrastructure.

Time-lapse images of real-world experiments in our flight arena. Sniffy was evaluated on four distinct environments, 10 x 10 meters in size, seeking a real isopropyl alcohol source. The trajectories of the nano quadcopters are clearly visible due to their blue lights.

In the 24 runs we executed in total, we compared Sniffy Bug with manually selected parameters against Sniffy Bug with evolved parameters. The figure below shows that the evolved parameters are more efficient in locating the source compared to the manual parameters.

Maximum recorded gas reading by the swarm, for each time step for each run.

This not only shows that our system can successfully locate a gas source in challenging environments, but it also demonstrates the usefulness of the simulation pipeline. The parameters that were learned in simulation yield a high-performance model, validating the environment generation, randomization, and gas modeling parts of our pipeline.

Conclusion and Discussion

With this work, we believe we have made an important step towards swarms of gas-seeking drones. The proposed solution is shown to work in real flight tests with obstacles, and without any external systems to help in localization or communication. We believe this methodology can be extended to larger environments or even to 3 dimensions, since PSO is a robust, multi-dimensional heuristic search method. Moreover, we hope that AutoGDM will help the community to better compare gas seeking algorithms, and to more easily learn parameters or models in simulation, and deploy them in the real world.

To improve Sniffy Bug’s performance, adding more laser rangers will definitely help. When working with only four laser rangers, you realize how little information they actually provide. If one of the rangers senses a low value, it is unclear whether a slim pole or a massive wall has been detected, which adds inefficiency to the algorithm. Adding more laser rangers, or using other sensor modalities like vision, will help to reliably avoid obstacles more complex than walls and poles.

Another interesting discussion can be held on the hardware required for real deployment. When working with 40 grams of maximum take-off weight, the sensors and actuators that can be selected are limited. For example, the low-power and lightweight flow deck works great but fails in low-light scenarios or with smoke. Future work exploring novel sensors for highly constrained nano robots could really help increase the Technological Readiness Level (TRL) of these systems.

Finally, this has been a really fun project to work on for us and we can’t wait to hear your thoughts on Sniffy Bug!

References

[1] K. N. McGuire, C. De Wagter, K. Tuyls, H. J. Kappen, and G. C. H. E. de Croon, “Minimal navigation solution for a swarm of tiny flying robots to explore an unknown environment,” Science Robotics, vol. 4, no. 35, 2019.

[2] W. Jatmiko, K. Sekiyama and T. Fukuda, “A PSO-based mobile robot for odor source localization in dynamic advection-diffusion with obstacles environment: theory, simulation and measurement,” IEEE Computational Intelligence Magazine, vol. 2, no. 2, pp. 37-51, May 2007, doi: 10.1109/MCI.2007.353419.

[3] Steiner, JA, Bourne, JR, He, X, Cropek, DM, & Leang, KK. “Chemical-Source Localization Using a Swarm of Decentralized Unmanned Aerial Vehicles for Urban/Suburban Environments.” Proceedings of the ASME 2019 Dynamic Systems and Control Conference. Volume 3, Park City, Utah, USA. October 8–11, 2019. V003T21A006. ASME. https://doi.org/10.1115/DSCC2019-9099

[4] J. Monroy, V. Hernandez-Bennetts, H. Fan, A. Lilienthal, and J. Gonzalez-Jimenez, “GADEN: A 3D gas dispersion simulator for mobile robot olfaction in realistic environments,” MDPI Sensors, vol. 17, no. 7: 1479, pp. 1–16, 2017.

[5] K. McGuire, G. de Croon, and K. Tuyls, “A comparative study of bug algorithms for robot navigation,” Robotics and Autonomous Systems, vol. 121, p. 103261, 2019.

[6] https://github.com/coppolam/swarmulator

[7] J. Burgues, V. Hernandez, A. J. Lilienthal, and S. Marco, “Smelling nano aerial vehicle for gas source localization and mapping,” Sensors (Switzerland), vol. 19, no. 3, 2019.

[8] S. Li, M. Coppola, C. D. Wagter, and G. C. H. E. de Croon, “An autonomous swarm of micro flying robots with range-based relative localization,” arXiv, 2020.

Links

ArXiv: https://arxiv.org/abs/2107.05490

Code: https://github.com/tudelft/sniffy-bug

Video:

Please reach out if you have any questions or ideas, you can reach us at: b.p.duisterhof@gmail.com or g.c.h.e.decroon@tudelft.nl

If you haven’t visited our store in a while, you may have missed our new addition: the Lighthouse Swarm bundle!

We’ve been working for some time now on improving the Lighthouse deck and its positioning system. Earlier this year, we brought the Lighthouse deck out of early access, and while working with it we have seen the great possibilities and accuracy of this new positioning system. Using Steam’s VR base stations as optical beacons, the Crazyflie calculates its own position, with a tracking volume of up to 5x5x2 meters, sub-millimetre jitter and below 10 cm accuracy while flying. It’s perfect for a swarm, as it’s accurate, precise and autonomous. We’ve flown our Crazyflies with it a number of times and seen some awesome stuff!

As an example, here is a demo we showed at a conference back in October. We used 8 Crazyflies equipped with Lighthouse decks and Qi chargers to make a spiraling swarm. A computer orchestrates the Crazyflies and makes sure one is flying at all times, while the others re-charge their batteries on their pads. After a pre-programmed trajectory is finished, or when the battery of the flying Crazyflie is depleted, it goes back to its pad while another one takes over. The demo also had an all-in mode that runs the trajectory on all Crazyflies with sufficient charge at once; the result is quite impressive and demonstrates the great relative precision of the Lighthouse system:

After the launch signal is sent to the Crazyflies, the computer is not required anymore: the Crazyflie will autonomously estimate its position from the lighthouse’s signals. The Crazyflie can estimate its own X, Y and Z in a global coordinate system.

What’s great with the Lighthouse Swarm is that it allows you to do drone research even if you’re on a tighter budget.

And when we got the opportunity to acquire our own base stations (which are also available in the shop, by the way), it seemed only logical to offer a Swarm bundle similar to our Loco swarm bundle. So what’s in it?

While positioning will work with one base station, two base stations allow better coverage of the flight space and better stability; as Kimberly can attest, it’s even possible to set it up in your kitchen. The Crazyradios allow communication between the Crazyflies and your computer.

We dedicated a lot of time to the Lighthouse this winter, writing a paper with the help of Wolfgang’s calibration expertise. In this paper, we compared both Lighthouse V1 and V2 with a MoCap system. In all cases, the mean and median Euclidean errors of the Lighthouse positioning system are about 2-4 centimeters compared to our MoCap system as ground truth. You can check the paper here, but here is a brief summary we used for our ICRA workshop:

The poster presenting our paper

We are now very curious to see what you will do with this exciting new swarm bundle!

And if you don’t know how to set up the Swarm, you can get started at least with your Lighthouse system in this tutorial or watch Kristoffer explain it in this video:

This week we have a guest blog post from Dr. Feng Shan at the School of Computer Science and Engineering, Southeast University, China. Enjoy!

It is possible to utilize tens or even thousands of Crazyflies to form a swarm that completes complicated cooperative tasks, such as searching and mapping. These Crazyflies are within short distance of each other and may move dynamically, so we study dynamic and dense swarms. Ultra-wideband (UWB) technology is proposed to serve as the fundamental technique for both networking and localization, because UWB is so time-sensitive that an accurate distance can be calculated from the transmission and receive timestamps of data packets. We have therefore designed a UWB Swarm Ranging Protocol with key features: simple yet efficient, adaptive and robust, scalable and supportive. It is implemented on the Crazyflie 2.1 with the onboard DW1000 UWB transceiver chip.

Fig.1. Nine Crazyflies in a compact space, ranging distances to each other.

The Basic Idea

The basic idea of the swarm ranging protocol was inspired by Double-Sided Two-Way Ranging (DS-TWR), as shown below.

Fig.2. The existing Double-Sided Two-Way Ranging (DS-TWR) protocol.

There are four types of messages in DS-TWR, i.e., the poll, response, final and report messages, exchanged between the two sides, A and B. We define their transmission and receive timestamps as Tp, Rp, Tr, Rr, Tf, and Rf, respectively. We define the reply and round time durations for the two sides as follows.
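Written out with the timestamps above, these are the standard DS-TWR quantities:

T_{round1} = R_r - T_p, \quad T_{reply1} = T_r - R_p, \quad T_{round2} = R_f - T_r, \quad T_{reply2} = T_f - R_r \tag{1}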

Let tp be the time of flight (ToF), namely the radio signal propagation time. The ToF can be calculated as in Eq. (2).
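Using the round and reply times defined in Eq. (1), the standard double-sided TWR estimate is:

t_p = \frac{T_{round1} \cdot T_{round2} - T_{reply1} \cdot T_{reply2}}{T_{round1} + T_{round2} + T_{reply1} + T_{reply2}} \tag{2}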

Then, the distance can be estimated from the ToF by multiplying it with the speed of light.

In our proposed Swarm Ranging Protocol, instead of four types of messages, we use only one type of message, which we call the ranging message.

Fig.3. The basic idea of the proposed Swarm Ranging Protocol.

Three sides A, B and C take turns transmitting six messages, namely A1, B1, C1, A2, B2, and C2. Each message can be received by the other two sides thanks to the broadcast nature of wireless communication. Every message thus generates three timestamps, i.e., one transmission and two receive timestamps, as shown in Fig.3(a). We can see that each pair has two rounds of message exchange, as shown in Fig.3(b). Hence, there are sufficient timestamps to calculate the ToF for each pair, which means all three pairs can be ranged with each side transmitting only two messages. This observation inspired us to design our ranging protocol.

Protocol Design

The formal definition of the i-th ranging message broadcast by Crazyflie X is as follows.

Xi is the message identification, e.g., sender and sequence number; Txi-1 is the transmission timestamp of Xi-1, i.e., the last sent message; RxM is the set of receive timestamps and their message identification, e.g., RxM = {(A2, RA2), (B2, RB2)}; v is the velocity of X when it generates message Xi.
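Purely as an illustration of this message layout (the actual implementation in the firmware is in C, and the field names here are ours), a Python sketch of the ranging message could look like this:

<code>from dataclasses import dataclass, field
from typing import Dict, Tuple


@dataclass
class RangingMessage:
    msg_id: Tuple[str, int]        # Xi: (sender identification, sequence number)
    last_tx_timestamp: int         # Tx_{i-1}: when the previous message was sent
    # RxM: receive timestamps of recently heard messages, keyed by their identification,
    # e.g. {('A', 2): R_A2, ('B', 2): R_B2}
    rx_timestamps: Dict[Tuple[str, int], int] = field(default_factory=dict)
    velocity: float = 0.0          # v: velocity of X when this message was generated
</code>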

As mentioned above, six timestamps (Tp, Rp, Tr, Rr, Tf, Rf) are needed to calculate the ToF. Therefore, for each neighbor, an additional data structure is designed to store these timestamps, which we call the ranging table, as shown in Fig.4. Each device maintains one ranging table per known neighbor to store the timestamps required for ranging.

Fig.4. The ranging table, one for each neighbor.

Let’s focus on a simple scenario where there are a number of Crazyflies, A, B, C, etc., within a short distance of each other. Each of them transmits messages that can be heard by all others, and they broadcast ranging messages at the same pace. As a result, between any two consecutive message transmissions, a Crazyflie can hear messages from all others. The message exchange between A and Y is as follows.

Fig.5. Message exchange between A and Y.

The following steps show how the ranging messages are generated and the ranging tables are updated to correctly compute the distance between A and Y.

Fig.6. How the ranging message and ranging table works to compute distance.

The message exchange between A and Y could just as well be between A and B, A and C, etc., because they are all equivalent, which means A can perform the ranging process above with all of its neighbors at the same time.

To handle dense and dynamic swarms, we improved the data structure of the ranging table.

Fig.7. The improved ranging table for dense and dynamic swarm.

There are three new notations P, tn, ts, denoting the newest ranging period, the next (expected) delivery time and the expiration time, respectively.

For any Crazyflie, we allow it to have a different ranging period for each neighbor, instead of setting a constant period for all neighbors. So not all neighbors’ timestamps need to be carried in every ranging message, e.g., the receive timestamp for a far-away, motionless neighbor is required less often. tn is used to measure the priority of neighbors. Also, when a neighbor has not been heard from for a certain duration, we mark it as expired and remove its ranging table.

If you are interested in our protocol, you can find many more details in our paper, which has just been published at the IEEE International Conference on Computer Communications (INFOCOM) 2021. Please refer to the links at the bottom of this article for our paper.

Implementation

We have implemented our swarm ranging protocol for the Crazyflie and it is now open source. Note that we have also implemented the Optimized Link State Routing (OLSR) protocol, and the ranging messages are one of the OLSR message types. So the “Timestamp Message” in the source file is the ranging message introduced in this article.

The procedure that handles the ranging messages is triggered by the hardware interrupt of the DW1000. During this procedure, the timestamps in the ranging tables are updated accordingly. Once a neighbor’s ranging table is complete, the distance is calculated and then the ranging table is rearranged.

All our code is stored in the folder crazyflie-firmware/src/deck/drivers/src/swarming.

The following figure shows a ranging performance comparison between our ranging protocol and a token-ring-based TWR protocol. It is clear that our protocol handles large numbers of drones smoothly.

Fig.8. Performance comparison.

We also conducted a collision avoidance experiment to test the real-time ranging accuracy. In this experiment, 8 Crazyflie drones hover at a height of 70 cm in a compact area of less than 3 m by 3 m, while a ninth Crazyflie is manually controlled to fly into this area. Thanks to the swarm ranging protocol, a drone detects the approaching drone by the ranged distance, and lowers its height to avoid collision once the distance is smaller than a threshold of 30 cm.

Build & Run

Clone our repository

git clone --recursive https://github.com/SEU-NetSI/crazyflie-firmware.git

Go to the swarming folder.

cd crazyflie-firmware/src/deck/drivers/src/swarming

Then build the firmware.

make clean
make

Flash the cf2.bin.

cfloader flash path/to/cf2.bin stm32-fw

Open the client, connect to one of the drones and add log variables (we use the radio channel as the address of the drone). Our swarm ranging protocol allows the drones to range with multiple targets at the same time. The following shows that our swarm ranging protocol works very efficiently.
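If you prefer reading the log values from a script rather than the client, a minimal cflib sketch could look like the one below. Note that the log variable names are placeholders; check the log TOC in the client for the actual variables exported by the swarming driver:

<code>import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

URI = 'radio://0/80/2M/E7E7E7E7E7'  # placeholder: the drone you want to inspect

# Placeholder variable names -- replace with the ones listed in the log TOC
lg = LogConfig(name='ranging', period_in_ms=100)
lg.add_variable('ranging.distance1', 'float')
lg.add_variable('ranging.distance2', 'float')

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with SyncLogger(scf, lg) as logger:
        end_time = time.time() + 10
        for timestamp, data, _ in logger:
            print(timestamp, data)
            if time.time() > end_time:
                break
</code>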

Summary

We designed a ranging protocol specially for dense and dynamic swarms. Only a single type of message is used in our protocol, and it is broadcast periodically. Timestamps are carried by this message so that distances can be calculated. We also implemented our proposed ranging protocol on Crazyflie drones, and experiments show that it works very efficiently.

Related Links

Code: https://github.com/SEU-NetSI/crazyflie-firmware

Paper: http://twinhorse.net/papers/SZLLW-INFOCOM21p.pdf

Our research group website: https://seu-netsi.net

Feng Shan, Jiaxin Zeng, Zengbao Li, Junzhou Luo and Weiwei Wu, “Ultra-Wideband Swarm Ranging,” IEEE INFOCOM 2021, Virtual Conference, May 10-13, 2021.

This week we have a guest blog post from Wenda Zhao, Ph.D. candidate at the Dynamic Systems Lab (with Prof. Angela Schoellig), University of Toronto Institute for Aerospace Studies (UTIAS). Enjoy!

Accurate indoor localization is a crucial enabling capability for indoor robotics. Small and computationally-constrained indoor mobile robots have led researchers to pursue localization methods leveraging low-power and lightweight sensors. Ultra-wideband (UWB) technology, in particular, has been shown to provide sub-meter accurate, high-frequency, obstacle-penetrating ranging measurements that are robust to radio-frequency interference, using tiny integrated circuits. UWB chips have already been included in the latest generations of smartphones (iPhone 12, Samsung Galaxy S21, etc.) with the expectation that they will support faster data transfer and accurate indoor positioning, even in cluttered environments.

A Crazyflie with an IMU and UWB tag flies through a cardboard tunnel. A vision-based motion capture system would not be able to achieve this due to occlusion.

In our lab, we show that a Crazyflie nano-quadcopter can stably fly through a cardboard tunnel with only an IMU and UWB tag, from Bitcraze’s Loco Positioning System (LPS), for state estimation. However, it is challenging to achieve a reliable localization performance as we show above. Many factors can reduce the accuracy and reliability of UWB localization, for either two-way ranging (TWR) or time-difference-of-arrival (TDOA) measurements. Non-line-of-sight (NLOS) and multi-path radio propagation can lead to erroneous, spurious measurements (so-called outliers). Even line-of-sight (LOS) UWB measurements exhibit error patterns (i.e., bias), which are typically caused by the UWB antenna’s radiation characteristics. In our recent work, we present an M-estimation-based robust Kalman filter to reduce the influence of outliers and achieve robust UWB localization. We contributed an implementation of the robust Kalman filter for both TWR and TDOA (PR #707 and #745) to Bitcraze’s crazyflie-firmware open-source project.

Methodology

The conventional Kalman filter, a primary sensor fusion mechanism, is sensitive to measurement outliers due to its minimum mean-square-error (MMSE) criterion. To achieve robust estimation, it is critical to properly handle measurement outliers. We implement a robust M-estimation method to address this problem. Instead of using a least-squares, maximum-likelihood cost function, we use a robust cost function to downweight the influence of outlier measurements [1]. Compared to Random Sample Consensus (RANSAC) approaches, our method can handle sparse UWB measurements, which are often a challenge for RANSAC.

From the Bayesian maximum-a-posteriori perspective, the Kalman filter state estimation framework can be derived by solving the following minimization problem:
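In its standard weighted least-squares form, this minimization reads:

\hat{x}_k = \arg\min_{x_k} \; \|x_k - \check{x}_k\|^2_{P_k^{-1}} + \|y_k - g(x_k, 0)\|^2_{R_k^{-1}}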

Therein, xk and yk are the system state and measurements at timestep k. Pk and Rk denote the prior covariance and measurement covariance, respectively. The prior and posterior estimates are denoted as xk-check and xk-hat, and the measurement function without noise is indicated as g(xk, 0). Through Cholesky factorization of Pk and Rk, the original optimization problem is equivalent to
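Defining the whitened residuals e_{x,k} = L_{P_k}^{-1} (x_k - \check{x}_k) and e_{y,k} = L_{R_k}^{-1} (y_k - g(x_k, 0)), with P_k = L_{P_k} L_{P_k}^T and R_k = L_{R_k} L_{R_k}^T, the equivalent problem is a plain sum of squared residual elements:

\hat{x}_k = \arg\min_{x_k} \; \sum_i e_{x,k,i}^2 + \sum_j e_{y,k,j}^2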

where e_x,k,i and e_y,k,j are the elements of e_x,k and e_y,k, respectively. To reduce the influence of outliers, we incorporate a robust cost function into the Kalman filter framework as follows:
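That is, the quadratic terms are replaced by a robust cost \rho(\cdot) applied element-wise:

\hat{x}_k = \arg\min_{x_k} \; \sum_i \rho(e_{x,k,i}) + \sum_j \rho(e_{y,k,j})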

where rho() could be any robust function (G-M, SC-DCS, Huber, Cauchy, etc. [2]).

By introducing a weight function for the process and measurement uncertainties, with e as input, we can translate the optimization problem into an Iteratively Reweighted Least Squares (IRLS) problem. The optimal posterior estimate can then be computed by iteratively solving the least-squares problem using the robust weights computed from the previous solution. In our implementation, we use the G-M robust cost function, and the maximum number of iterations is set to two for computational reasons. For further details about the robust Kalman filter, readers are referred to our ICRA/RA-L paper and the onboard firmware (mm_tdoa_robust.c and mm_distance_robust.c).
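To make the reweighting idea concrete, here is a small, illustrative Python sketch (the onboard implementation is the C code in mm_tdoa_robust.c and mm_distance_robust.c; this is not that code). A Geman-McClure weight is computed from the standardized residual and folded back into the measurement variance before an otherwise ordinary scalar-measurement Kalman update:

<code>import numpy as np


def gm_weight(e):
    # Weight for the Geman-McClure cost rho(e) = (e**2 / 2) / (1 + e**2):
    # w(e) = rho'(e) / e = 1 / (1 + e**2)**2, so large residuals are downweighted.
    return 1.0 / (1.0 + e ** 2) ** 2


def robust_measurement_update(x_prior, P, y, H, R, iterations=2):
    """Toy IRLS-style update for a single scalar measurement (illustration only).

    x_prior: prior state (n,), P: prior covariance (n, n),
    y: scalar measurement, H: measurement Jacobian row (1, n), R: measurement variance.
    """
    x = x_prior.copy()
    for _ in range(iterations):
        e = (y - (H @ x).item()) / np.sqrt(R)   # standardized residual at current estimate
        R_w = R / max(gm_weight(e), 1e-6)       # small weight -> inflated variance
        S = (H @ P @ H.T).item() + R_w          # innovation variance
        K = (P @ H.T) / S                       # Kalman gain, shape (n, 1)
        x = x_prior + K.ravel() * (y - (H @ x_prior).item())
    return x
</code>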

Performance

We demonstrate the effectiveness of the robust Kalman filter on-board a Crazyflie 2.1. The Crazyflie is equipped with an IMU and an LPS UWB tag (in TDOA2 mode). With the conventional onboard extended Kalman filter, the drone is affected by measurement outliers and jumps around significantly while trying to hover. In contrast, with the robust Kalman filter, the drone shows a more reliable localization performance.

The robust Kalman filter implementations for UWB TWR and TDOA localization have been included in the crazyflie-firmware master branch as of March 2021 (2021.03 release). This functionality can be turned on by setting a parameter (robustTwr or robustTdoa) in estimator_kalman.c. We encourage LPS users to check out this new functionality.
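From a script, enabling it could look like the sketch below. The exact parameter group is an assumption on our side; check the parameter TOC in the client for the robustTdoa/robustTwr entries:

<code>import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # placeholder

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Assumed parameter name; use robustTwr instead when ranging in TWR mode
    scf.cf.param.set_value('kalman.robustTdoa', '1')
</code>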

As we mentioned above, off-the-shelf, low-cost UWB modules also exhibit distinctive and reproducible bias patterns. In our recent work, we devised experiments using the LPS UWB modules and showed that the systematic biases have a strong relationship with the pose of the tag and the anchors, as they result from the UWB radio’s doughnut-shaped antenna pattern. A pre-trained neural network is used to approximate the systematic biases. By combining bias compensation with the robust Kalman filter, we obtain a lightweight, learning-enhanced localization framework that achieves accurate and reliable UWB indoor positioning. We show that our approach runs in real time and in closed loop on-board a Crazyflie nano-quadcopter, yielding enhanced localization performance for autonomous trajectory tracking. The dataset for the systematic biases in UWB TDOA measurements is available on our Open-source Code & Dataset webpage. We are also currently working on a more comprehensive dataset with IMU, UWB, and optical flow measurements, again based on the Crazyflie platform. So stay tuned!

Reference

[1] L. Chang, K. Li, and B. Hu, “Huber’s M-estimation-based process uncertainty robust filter for integrated INS/GPS,” IEEE Sensors Journal, 2015, vol. 15, no. 6, pp. 3367–3374.

[2] K. MacTavish and T. D. Barfoot, “At all costs: A comparison of robust cost functions for camera correspondence outliers,” in IEEE Conference on Computer and Robot Vision (CRV). 2015, pp. 62–69.

Links

The authors are with the Dynamic Systems Lab, Institute for Aerospace Studies, University of Toronto, Canada, and affiliated with the Vector Institute for Artificial Intelligence in Toronto.

Feel free to contact us if you have any questions or suggestions: wenda.zhao@robotics.utias.utoronto.ca.

Please cite this as:

<code>@ARTICLE{Zhao2021Learningbased,
  author={W. {Zhao} and J. {Panerati} and A. P. {Schoellig}},
  title={Learning-based Bias Correction for Time Difference of Arrival Ultra-wideband Localization of Resource-constrained Mobile Robots},
  journal={IEEE Robotics and Automation Letters},
  volume={6},
  number={2},
  pages={3639-3646},
  year={2021},
  publisher={IEEE},
  doi={10.1109/LRA.2021.3064199}}
</code>

Ever since the AI-deck 1.x was released in early access, we’ve been excited to see so many users diving in and experimenting with it. The product is still in early access, where the hardware is deemed ready but the software and documentation still need work. Even so, we try to do our best to make the product as usable as possible. We’re happy to see some of our users doing great stuff, like the PULP-platform’s latest paper “Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs“.

Firmware and Examples

The AI-deck is built around the GAP8 chip developed by Greenwaves Technologies. On their website there’s an explanation of the development tools that gives a general understanding of what you can use. Their GAP SDK documentation also explains how to install and try out some of their examples, either on a GAP8 simulator on the computer or on the GAP8 chip on the AI-deck itself.

We also host a separate repository with some AI-deck related examples that run with the GAP SDK.

Documentation and Support

Recently we also added the AI-deck documentation to the Bitcraze website, generated from the documentation already available in the GitHub repository. There are still improvements to make, so if you find any issues or missing additions, don’t hesitate to help us improve it. At the bottom there is an ‘improve this page’ link where you can suggest a change, or you can notify us by posting on the issue list of the AI-deck repository.

Also note that we have a separate AI-deck category on our forum where you can search for or add any AI-deck related questions. Remember that posting the issues that you are having will also help us to improve the platform and hopefully soon get it out of Early Access.

Workshop by PULP-platform

On the 16th of April we hosted a workshop given by PULP-platform, featuring Greenwaves Technologies. In the workshop, an overview of the AI-deck and GAP8 was given, and some basic hands-on exercises were worked through. About 70 people joined the workshop, and we were happy it was so well received.

The workshop is a great source of information for anybody who is just getting started with the AI-deck, so have a look at the recordings on Youtube and the slides on the event page. Also make sure to check out their PULP training page for more tools that can also be used on the AI-deck. A big thanks to the PULP-platform and Greenwaves Technologies for taking part in the workshop!


We would also like to ask anybody who joined the workshop to fill in this small questionnaire, so that we can get some more feedback on how it went and how we can improve for the next one.

A few weeks ago we released version 2021.03, including the python library, the Cfclient and the firmware. The biggest feature of that release was that we (finally) got the Lighthouse positioning system out of early access and added it as an official product to the Crazyflie ecosystem! Of course we are very excited about that milestone, but the work does not end there… We also need to communicate to you, our favourite users, how to use it, what the features are, and where to find all this new information!

New Landing Page

First of all, we made a new landing page just for the lighthouse system: similar to bitcraze.io/start, we now also have bitcraze.io/lighthouse. This landing page is what will be printed on the Lighthouse base station box that will soon be available in our store, but it is also directly accessible from the front page under ‘Product News’.

This landing page has all kinds of handy links which direct the user to the getting started tutorial, the shop page, or to the system’s place among the different positioning systems we offer and support. It is meant to give a very generic first overview of the system without overloading the reader right off the bat, and we hope the information funnel will be smoother with this landing page.

New tutorial and product pages

For getting started with the lighthouse positioning system, we heavily advise everybody to follow the new getting started tutorial page, even if you have used the lighthouse system since its early access days. The thing is that the procedure for setting the system up has changed drastically. The calibration data and geometry are now stored in persistent memory on-board the Crazyflie, and the lighthouse deck itself is now properly flashed. So if you are still using a custom config.mk, hardcoding geometry in the app layer or using get_bs_geometry.py to get the geometry… stop what you are doing and update the Crazyflie firmware, install the newest Cfclient, and follow the tutorial!

We have also already made a product page for the Lighthouse Swarm bundle. Currently it is still marked as coming soon, but you can already sign up to get a notification when it is out, which we hope will be in about 1-2 months. The lighthouse deck is of course already available for those who cannot wait and want to buy a SteamVR base station somewhere else. Just keep in mind that, even though v1 is supported, in the future we will mostly focus on version 2 of the base stations.

Video tutorial

Once again we have ventured into the land of videos and recorded a “Getting started with the Lighthouse positioning system” tutorial for those who prefer video over text.

Feedback

We love feedback and want to improve! Please don’t hesitate to contact us on contact@bitcraze.io if you have comments or suggestions!

It’s that time of the year again! As the days get darker and darker here in Sweden, we’re happy to get some time off and share some warmth with our families.

And to kick off the holiday season, we prepared a little treat for you! We enjoyed making a Christmas video that tested how we could use the Crazyflie at home. Since we’re not at the office anymore, we decided to fly in our homes, and this video shows the different ways we did so. First, take a look at what we’ve done:

Now let’s dig into the different techniques we used.

  • Tobias decided to fly the Bolt manually. His first choice was to land in the Christmas sock, but that was too hard, hence the hard landing on top of the tree. We were not sure who would survive: the tree or the Bolt!
  • Kimberly installed two base station V2s and, after setting up, determined some waypoints by holding the Crazyflie in her hand. She then generated a trajectory with the uav_trajectories project (like in the hyper demo) and used the cflib to upload this trajectory and make the Crazyflie fly all the way to the basket. Her two cats could have looked more impressed, though!
  • Using trial and error, Barbara used the Flow deck, the motion commander, and a broken measuring tape to calibrate the Crazyflie’s path next to the tree.
  • Arnaud realized that, with all the autonomous work, we hardly fly the Crazyflie manually anymore. So he flew the Crazyflie manually. It required a bit more training than expected, but the Crazyflie is really a fun (and safe!) quad to fly.
  • Marcus used two Lighthouse V2 base stations together with the Lighthouse deck and LED-ring deck. For the flying, he used the high level commander. The original plan was to fly around his gingerbread house, but unfortunately it was demolished before he got the chance (by some hungry elves surely!)
  • Kristoffer made his own tree ornament with the drone, which turned out to be a nice addition to a Christmas tree!

It was a fun way to use our own product, and to show off our decorated houses.

I hope you enjoy watching this video as much as we enjoyed making it.

We are staying open during the holiday season but at a limited capacity: we will still ship your orders and keep an eye on our emails and the forum, but things will get a bit slower here.

We wish you happy holidays and safe moments together with your loved ones.