The main feature in this release is the stabilization of the Lighthouse positioning system. Most of the work has gone into system setup and management; it has taken a lot of effort spanning all the projects and a good deal of documentation, but we think we have reached a stage where the Lighthouse positioning system works very well and is easy to set up and get working. We have now published the new Lighthouse getting started guide and will be working this week on updating all required materials to mark Lighthouse as released!
When the Lighthouse positioning system was released in early access, it required installing SteamVR, running some custom scripts and flashing a modified firmware to get up and running. This has improved over time with scripts that allow setting up the system without using SteamVR and a way to store the required system data in the Crazyflie configuration memory rather than hard-coding it in the firmware. With this release, everything comes together: it is now possible to go from zero to an autonomous Crazyflie flying in a Lighthouse system in minutes, using only the Crazyflie client.
Another major improvement made to support the Lighthouse is the modified Crazyflie update sequence in the bootloader, in the client as well as on the command line. The new sequence restarts the Crazyflie a couple of times while upgrading it, which allows the firmware of installed decks to be upgraded if required. The Lighthouse deck firmware has been added to the Crazyflie .zip release file and will be flashed into the deck when the release is flashed to a Crazyflie that has the deck installed.
An alternative, robust TDoA implementation has been added for the Loco Positioning System. This change has been contributed by williamwenda on Github and can optionally be enabled at runtime.
An event subsystem has also been added to the firmware. It allows events to be logged to the SD card, which can be very useful when acquiring positioning data from the various positioning systems supported by the Crazyflie. We described this subsystem in an earlier blog post.
There have also been a lot of smaller improvements and bugfixes in this release. See the individual project release notes for more information.
We hope you are going to enjoy this new Crazyflie and Lighthouse release. Do not hesitate to drop a comment here, ask questions on the forum, or file bug reports on GitHub in the (very unlikely ;-) event that there are bugs left.
Most of the improvements have been done in the Crazyflie firmware and include:
The App API in the Crazyflie firmware has been extended and improved to handle a wider range of applications. The goal is to enable a majority of users to implement the functionality they need in an app instead of hacking into the firmware itself.
We have improved the Lighthouse support in the firmware and both V1 and V2 base stations now work well. Even though everything is not finished yet, we have taken a good step towards official Lighthouse positioning.
A collision avoidance module has kindly been contributed by the Crazyswarm team.
A persistent storage module has been added to enable data to be persisted and remain available after the Crazyflie is power cycled. It will initially be used to store Lighthouse system information, but will be useful for many other tasks in the future.
Basic arming functionality has been added for platforms using brushless motors.
In the client, the LPS tab now has a 3D visualization of the positioning system, and a new tab has been added to show the Python log output.
Unfortunately we have run into some problems with the Windows client build, which is therefore not available for this release.
Finally we have fixed bugs and worked to improve the general stability.
This autumn, when we had our quarterly planning meeting, it was obvious that there would not be any conferences this year like in other years. This meant we would not meet you, our users, and hear about your interesting projects, but also that we would not be forced to create a demo. Sometimes we joke that we practice Demo Driven Development and that this is what pushes us forward; even though it is not completely true, it is a strong driver. We decided to create a demo in our office and share it online instead. We hope you enjoy it!
The wish list for the demo was long, but we decided that we wanted to use multiple positioning technologies, multiple platforms and multiple drones in a swarm. The idea was also to let the needs of the demo drive development of other technologies, as well as stabilize existing functionality by "eating our own dogfood". As a result of this work we have, for instance:
improved the app layer in the Crazyflie
added Lighthouse V2 support, including basic support for 2+ base stations
improved support for mixed positioning systems
First of all, let’s check out the video
We are using our office for the demo and the Crazyflies are essentially flying a fixed trajectory from our meeting room, through the office and kitchen, to finally land in the Arena. The Crazyflies are autonomous from the moment they take off and there is no communication with any external computer after that; all positioning is done on board.
Implementation
The demo is mainly implemented in the Crazyflie as an app, with a simple Python script on an external machine to start it all. The app is identical in all the Crazyflies, so the script tells them where to land and checks that all Crazyflies have found their position before they are started. Finally, it tells them to take off one by one with a fixed delay in between.
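As a hypothetical sketch of what such a start script can look like with cflib: it writes the landing position to app-defined parameters and then triggers take off, one Crazyflie at a time. The parameter names ('app.landX', 'app.landY', 'app.start') are assumptions for illustration, not the actual names used in the demo app, and the check that each Crazyflie has found its position is omitted for brevity.
<code>
# Hypothetical start script: parameter names are illustrative only.
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URIS_AND_LANDING_SPOTS = [
    ('radio://0/80/2M/E7E7E7E701', (0.5, 2.0)),
    ('radio://0/80/2M/E7E7E7E702', (1.0, 2.0)),
]

cflib.crtp.init_drivers()
for uri, (land_x, land_y) in URIS_AND_LANDING_SPOTS:
    with SyncCrazyflie(uri, cf=Crazyflie(rw_cache='./cache')) as scf:
        scf.cf.param.set_value('app.landX', str(land_x))  # landing position, X
        scf.cf.param.set_value('app.landY', str(land_y))  # landing position, Y
        scf.cf.param.set_value('app.start', '1')           # tell the app to take off
        time.sleep(3.0)  # fixed delay before starting the next Crazyflie
</code>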
The Crazyflie app
When the Crazyflie boots up, the app is started and the first thing it does is to prepare by defining a trajectory in the High Level Commander as well as setting the data for the Lighthouse base stations in the system. The app uses a couple of parameters for communication, and at this point it waits for one of these parameters to be set by the Python script.
When the parameter is set, the app uses the High Level Commander to take off and fly to the start point of the trajectory. At the starting point, it kicks off the trajectory and, while the High Level Commander handles the flying, the app goes to sleep. When the end of the trajectory is reached, the app goes into action again and directs the Crazyflie to land at a position set through parameters during the initialization phase.
We used a feature of the High Level Commander that is maybe not that well known but can be very useful for making the motion fluid. When the High Level Commander executes a go_to, for instance, it plans a trajectory from the current position/velocity/acceleration to the target position in one smooth motion. This can be used when transitioning from a go_to into a trajectory (or from one go_to to another) by starting the trajectory a little bit early, so the Crazyflie never stops at the end of the go_to but "slides" directly into the trajectory. The same technique is used at the end of the trajectory to get out of the way faster and avoid being hit by the next Crazyflie in the swarm.
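As an illustration of this pattern from the client side, here is a minimal, hypothetical sketch using cflib's high-level commander. It assumes a trajectory has already been uploaded and defined with id 1 (upload code omitted), and all timings are example values that would need tuning to the actual trajectory; in the demo itself this logic lives in the onboard app rather than in a script.
<code>
# Sketch of "sliding" from a go_to into a trajectory with the high-level commander.
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # example URI

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    scf.cf.param.set_value('commander.enHighLevel', '1')  # enable the high-level commander
    hlc = scf.cf.high_level_commander

    hlc.takeoff(1.0, 2.0)               # take off to 1 m over 2 s
    time.sleep(2.0)

    hlc.go_to(0.5, 0.0, 1.0, 0.0, 3.0)  # fly to the trajectory start point over 3 s
    time.sleep(2.5)                     # wake up ~0.5 s early...

    hlc.start_trajectory(1, time_scale=1.0)  # ...so we "slide" into the trajectory
    time.sleep(10.0)                    # trajectory duration (example value)

    hlc.land(0.0, 2.0)
    time.sleep(2.0)
    hlc.stop()
</code>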
The trajectory
The main part of the flight is one trajectory handled by the High Level Commander. It was generated using the uav_trajectories project from whoenig. We defined a number of points we wanted the trajectory to pass through, and the software generates a list of polynomials that can be used by the High Level Commander. The generated trajectory passes through the points, but as part of the optimization process it also chooses some (unexpected) curves; that could be fixed with some tweaking.
The trajectory is defined using absolute positions in a global coordinate system that spans the office.
Positioning
We used three different positioning systems for the demo: the Lighthouse (V2), the Loco Positioning System (TDoA3) and the Flow deck. Different areas of the flight space are covered by different systems, either individually or overlapping. All decks are active all the time and pick up data when it is available, pushing it into the extended Kalman estimator.
In the meeting room, where we started, we used two Lighthouse V2 base stations, which gave us a very precise position estimate (including yaw) and a good start. When the Crazyflies moved out into the office, they relied only on the Flow deck, and that worked fine even though the errors potentially build up over time.
When the Crazyflies turned the corner into the hallway towards the kitchen, we saw that the errors were sometimes too large: either the position or the yaw was off, which caused the Crazyflies to hit a wall. To fix that, we added 4 LPS nodes in the hallway, and this solved the problem. Note that all 4 anchors are on the ground, which is not enough on its own to give the Crazyflie a good 3D position, but the distance sensor on the Flow deck provides the Z information and the overall result is good.
The corner from the kitchen into the Arena is pretty tight, and again the build-up of errors made it problematic to rely on the Flow deck only, so we added a Lighthouse base station for extra help.
Finally, in the first part of the Arena, the LPS system has full 3D coverage, and together with the Flow deck it is smooth sailing. About half way, the Crazyflies started to pick up the Lighthouse system as well, and from there on data from all three systems was used at the same time.
Obviously we were using more than 2 base stations with the Lighthouse system, and even though that is not officially supported, it worked with some care and manual labor. The geometry data was, for instance, manually tweaked to fit the global coordinate system.
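For the demo the geometry was handled by hand, but as a rough illustration, the sketch below shows how base station geometry (position and rotation expressed in the global coordinate system) can be written from a script. The helper class and method names (LighthouseMemHelper, LighthouseBsGeometry, write_geos) come from the current cflib API and may differ from what was available at the time; all values are placeholders.
<code>
# Illustrative only: writing lighthouse geometry with the current cflib helpers.
from threading import Event

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.mem import LighthouseBsGeometry, LighthouseMemHelper
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'

bs0 = LighthouseBsGeometry()
bs0.origin = [1.0, 2.0, 2.5]                              # example position in the global frame
bs0.rotation_matrix = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # example rotation
bs0.valid = True

done = Event()
cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    helper = LighthouseMemHelper(scf.cf)
    helper.write_geos({0: bs0}, lambda *args: done.set())  # base station id 0
    done.wait(5)
</code>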
The wall between the kitchen and the Arena is very thick and it is unlikely that UWB can go through it, but we still got occasional LPS data from the Arena anchors. Our interpretation is that these must have been packets bouncing off the walls into the kitchen. The stray packets were picked up by the Crazyflies, but since the Lighthouse base station provided a strong source of information, the LPS packets did not cause any problems.
Firmware modifications
The firmware is essentially the stock crazyflie-firmware from GitHub; however, we did make a few alterations:
The maximum velocity of the PID controller was increased to make it possible to fly a bit faster and create a nicer demo.
The number of lighthouse base stations was increased
Hyperdemo drones and their configuration
In the demo we used 5 x Crazyflie 2.1 and 1 x Bolt, very similar to the Li-Ion Bolt we built recently. The difference is that this version used a 2-cell LiPo and lower-KV motors, but the Li-Ion Bolt would have worked just as well.
To make all the positioning systems work at the same time we needed to add 3 decks: Lighthouse, Flow v2 and Loco deck. On the Crazyflie 2.1 this fits if the extra-long pin headers are used, with the Lighthouse deck mounted on top, the Loco deck underneath the Crazyflie 2.1 and the Flow v2 at the very bottom. The same goes for the Bolt, but here we had to solder the extra-long pin header and the long pin header together to make them long enough.
There is one catch though: the pin resources for the decks collide. With some patching of the Loco deck this can be mitigated by moving its IRQ to IO_2 using the solder jumper. The RST pin needs to be moved to IO_4, which requires a small patch wire.
Some firmware configuration is also needed; this is added to the hyperdemo makefile.
The final weight of the Crazyflie 2.1 is on the heavy side, and we quickly discovered that fully charged batteries should be used, or else the probability of a crash increases a lot.
Conclusions
We're happy we were able to set this demo up and that it was fairly straightforward; the whole setup was done in one or two days. The app layer is quite useful and we tend to use it quite often when trying out ideas, which we interpret as a good sign :-)
We are satisfied with the results and hope it will inspire some of you out there to push the limits even further!
This week we have a guest blog post from CollMot about their work to integrate the Crazyflie with Skybrush. We are happy that they have used the app API, which we wrote about a couple of weeks ago, to implement the required firmware extensions!
Bitcraze and CollMot have joined forces to release an indoor drone show management solution using CollMot's new Skybrush software and Crazyflie firmware and hardware.
CollMot is a drone show provider company from Hungary, founded by a team of researchers with decade-long expertise in drone swarm science. CollMot has been offering outdoor drone shows since 2015. Our new product, Skybrush, allows users to handle their own fleet-level drone missions, and specifically drone shows, as smoothly as possible. In joint development with the Bitcraze team, we are very excited to extend Skybrush to support indoor drone shows and other fleet missions using the Crazyflie system.
The basic swarm-induced mindset with which we are targeting the integration process is scalability. This includes scalability of communication, error handling, reliability and logistics. Each of these aspects is detailed below through some examples of the challenges we needed to solve together. We hope that, besides having an application-specific extension of the Crazyflie for entertainment purposes, the base system has also gained many new features during this great cooperative process. But let's dig into the tech details a bit more…
UWB in large spaces with many drones
We have set up a relatively large area (10x20x6 m) with the Loco Positioning System, using 8 anchors in a more or less cubic arrangement. Using TWR mode for swarms was out of the question, as it needs each tag (drone) to communicate with the anchors individually, which does not scale with fleet size. Initial tests with the UWB system in TDoA2 mode were not very satisfying in terms of accuracy and reliability, but as we went deeper into the details we found the two main sources of the inaccuracies:
Two of the anchors had been positioned on the vertical flat faces of some stairs, with a solid material connection between them that caused many reflections, so the relative distance measurement between these two anchors was bi-stable. When we realized that, we raised them a bit and attached them to columns with an air gap in between, which solved the reflection issue.
The outlier filter of the TDoA2 mode was not optimal: a single bad packet generated consecutive outliers that opened up the filter too quickly. This issue has since been solved in the Crazyflie firmware, after our long-lasting, painful investigation ended with changing a single number from 2 to 3. This is how the reward system works in software development :)
After all that, UWB did its job quite nicely in both TDoA2 and TDoA3 modes, with a stable accuracy at the 10-20 cm level in such a large area, so we could move on to tuning the controller of the Crazyflie 2.1 a bit.
Crazyflies with Loco and LED decks
As we prepared the Crazyflie drones for shows, we had the Loco deck attached on top and the LED deck attached to the bottom of the drones, with an extra light bulb to spread the light smoothly. This setup resulted in a total weight of 37 g. The basic challenge with the controller was that this weight turned out to be too much for the Crazyflie 2.1 system. Hover was at around 60-70% throttle on average; furthermore, there was a substantial difference in the throttle levels needed for individual motors (some in the 70-80% range). The tiny drones did a great job in horizontal motion, but as soon as they needed to go up or down with a vertical speed above around 0.5 m/s, one of their ESCs saturated and the system became unstable and crashed. Interestingly enough, the crash always started with a wobble exactly along the X axis, leading us to think that there was an issue with the positioning system instead of the ESCs. There are two possible solutions for this major problem:
use less payload, i.e. lighter drones
use stronger motors
Partially as a consequence of these experiments, the Bitcraze team is now experimenting with new, stronger models that will also be optimized for show use cases. We can't wait to test them!
Optimal controller for high speeds and accurate trajectory following
In general, we are not yet very satisfied with any of the implemented controllers when using the UWB system for a show use case. This use case is special, as trajectory following needs to be as accurate as possible both in space and time to avoid collisions and to produce nice synchronized formations, while the maximum speed, both horizontal and vertical, has to be as high as possible to increase the wow effect for the audience.
The PID controller has no cutoffs on its outputs, and with the sometimes large positioning errors of the UWB system the controller outputs get far too large. If the gains are reduced, motion becomes sluggish and the path is not followed accurately in time.
The Mellinger and INDI controllers work well only with positioning systems of much better accuracy.
We have stuck with the PID controller so far and added velocity feed-forward terms, cutoffs on the output and some nonlinearity in case of large errors; this helped a bit, but the solution is not fully satisfying. Hopefully these modifications can be included in the main firmware soon. Having a perfect controller for UWB is still an open question, though; any suggestions are welcome!
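To make the idea concrete, here is a simplified, single-axis Python illustration of the three modifications mentioned above (velocity feed-forward, an output cutoff, and a nonlinearity that limits large position errors). This is not CollMot's firmware code; all gains and limits are made-up example values.
<code>
# Simplified, single-axis illustration only; not the actual firmware controller.

def saturate(value, limit):
    return max(-limit, min(limit, value))

def position_controller(pos_err, vel_err, vel_ref,
                        kp=2.0, kd=0.5, kff=1.0,
                        err_limit=0.5, out_limit=1.0):
    # Nonlinearity: clamp the position error so a UWB outlier cannot
    # command an aggressive correction.
    pos_err = saturate(pos_err, err_limit)

    # PD feedback plus velocity feed-forward from the reference trajectory.
    out = kp * pos_err + kd * vel_err + kff * vel_ref

    # Cutoff on the output so the next control stage never sees
    # unreasonably large setpoints.
    return saturate(out, out_limit)

# Example: a 1 m position error (e.g. a UWB glitch) is clamped to 0.5 m,
# so the commanded output stays within +/- 1.0.
print(position_controller(pos_err=1.0, vel_err=0.0, vel_ref=0.3))
</code>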
Show specific improvements in the firmware
We implemented code that uploads the show content to the drones smoothly, performs automatic preflight checks, displays status with the LED deck to give visual feedback on many drones simultaneously, starts the show on time in synchrony with all swarm members, and handles the execution of the light program and show trajectory.
These modifications currently live in our own fork of the Crazyflie firmware and will soon be rewritten as a show app, thanks to this promising new possibility in the code framework. As soon as the Skybrush and Crazyflie systems are stable enough to be released together, we will publish the related app code, which helps automate show logistics for every user.
Summary
To sum up, we are very enthusiastic about the Crazyflie system and the great team behind the scenes, with their very friendly, open and cooperative support. The current state of the Crazyflie + Skybrush integration is as follows:
new hardware iterations based on the Bolt system that support longer and more dynamic flights are coming;
a very stable, UWB-compatible controller is still an open question, but the current possibilities are satisfactory for initial tests with light flight dynamics;
a new Crazyflie app for the drone show use case is basically ready to be launched together with the release of Skybrush in the near future.
If you are interested in Skybrush or have any questions related to this integration process, drop us an email or comment below.
We are happy to announce that we have released a new version of the Crazyflie firmware, version 2020.09. It is available for download from GitHub.
The new firmware solves an old compatibility issue when using the LPS and Flow decks at the same time and also improves stability. A list of all the issues that have been fixed can be found on the release page.
For users that have an LPS system, we have also made some improvements to the LPS node firmware and are releasing version 2020.09 of it as well.
If you are building the Crazyflie or LPS firmware from source, remember to update the libdw1000 git submodule using:
git submodule update
We are working on a release of the python client as well, but still have a few issues to fix so stay tuned.
Now that we are all back from our summer holidays, we are back to what we set out to do a while ago: fixing issues and stabilizing code. In the last two weeks we have been focusing on fixing existing issues with the Flow deck and the LPS positioning system. It is still work in progress, and even though we have fixed some problems, we still have some way to go! At least we can give you an update on our work of the last few weeks.
Flow-deck Kalman Improvements
When we started working on the motion commander tutorials (see this blog post), which are mostly based on flying with the Flow deck, we were also hit by the following error that probably many of you know: the Crazyflie flies over a low-texture area, wobbles, flips and crashes. This won't happen as long as you are flying over high-texture areas (like a children's play mat, for instance), but in the occasional situation where you are not, it should not crash like it does now. The expected behavior is that the Crazyflie glides away until it flies over something with sufficient texture again (that is the behavior you see when you are flying manually with a controller and just let go of the controls). So we decided to investigate this further.
First we thought that it might have something to do with the rotation compensation from the gyroscopes, which is part of the measurement model of the Flow deck, since maybe it was overcompensating or something like that. But if you remove that part, the Crazyflie starts wobbling right away, even over high-texture areas… so that was not it. We still think it causes the actual wobbling itself (compensating for flow that is not detected), but we had to dig a bit deeper into the issue.
Eventually we did a couple of measurements. We let the Crazyflie fly over a low- and a high-texture area while flying a figure eight and logged a couple of important values: the detected flow, the ground truth position, and a couple of quality measurements that the PixArt PMW3901 flow sensor provides itself, namely the number of features (motion.squal) and the automatic shutter time (motion.shutter). From the ground truth position we can compute the ground truth flow that the Flow deck is supposed to measure. With that we can see what the standard deviation of the measured vs. ground truth flow is, and whether there is a relation between the error's standard deviation and the quality values, which resulted in a couple of nice graphs.
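For reference, the sketch below shows one way to log these flow-quality values from a script with cflib. The log variable names are the ones mentioned above; the fetch types are assumptions, so check the log TOC of your firmware version for the exact names and types.
<code>
# Minimal cflib logging sketch for the flow-quality values; types are assumed.
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

URI = 'radio://0/80/2M/E7E7E7E7E7'

log_conf = LogConfig(name='flow_quality', period_in_ms=50)
log_conf.add_variable('motion.squal', 'uint8_t')     # sensor quality (number of features)
log_conf.add_variable('motion.shutter', 'uint16_t')  # automatic shutter time
log_conf.add_variable('motion.deltaX', 'int16_t')    # detected flow, X
log_conf.add_variable('motion.deltaY', 'int16_t')    # detected flow, Y

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with SyncLogger(scf, log_conf) as logger:
        for _, data, _ in logger:
            print(data)
            break  # log a single sample in this sketch
</code>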
Three major improvements were added to the code based on these results:
The standard deviation of the flow measurement was increased from 0.25 to 2.0 pixels, since this is a more accurate depiction of the measurement noise that the Kalman filter should expect.
An adaptive standard deviation based on motion.shutter has been implemented (since there is a stronger correlation there than with motion.squal), which can be activated by setting the parameter motion.adaptive to True (1). It is set to False (0) by default, since the increased standard deviation of the first improvement already improved the flight quality significantly (see the sketch after this list for how to enable it).
If the flow sensor indicates that no motion is detected (log variable motion.motion), no measurement is sent to the Kalman filter. The time difference (dt) between samples is also adjusted based on the last measurement received.
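For completeness, a tiny sketch of enabling the adaptive standard deviation from a script, assuming the motion.adaptive parameter name mentioned above:
<code>
# Enable the adaptive flow standard deviation (parameter name from the text above).
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    scf.cf.param.set_value('motion.adaptive', '1')  # 0 (the default) disables it
    time.sleep(1.0)                                 # give the parameter write time to complete
</code>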
Now, when the Crazyflie flies over low-texture areas with the Flow deck alone, it will not flip anymore but simply glide away! Check out this closed issue to learn more about the exact implementation; it should be part of the next release.
The LPS and Flowdeck
Kalman filter conflicts
The previous Flow deck fix also took care of this issue, which caused the Crazyflie to flip in the LPS system as well when it did not detect any flow. This happened because, in the previous firmware version, the Kalman filter trusted the flow measurements much more than the UWB distance measurements, but not anymore! If the Flow deck is out of range or cannot detect motion, the state estimation will trust the LPS system more. However, once the Flow deck detects motion again, it will help improve the accuracy of the position estimate.
Moreover, it is now possible to make the Crazyflie fly into and out of the LPS system area with the Flow deck! However, be sure that it flies using velocity commands, since there are situations where the position estimate can jump:
1. the LPS system is off, 2. the Crazyflie takes off with only the Flow deck, 3. the LPS nodes are turned on
1. the Crazyflie takes off within the LPS system, 2. it flies out of the LPS system's reach for a while (the position estimate drifts a bit), 3. it flies back into the LPS system carrying the drift accumulated on the Flow deck
As long as you are flying with velocity commands, like with the assist modes with a controller in the cfclient, this should not be a problem.
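Here is a minimal sketch of flying with velocity commands from a script using cflib's MotionCommander, which sends the same kind of velocity setpoints; the speeds and timings are example values.
<code>
# Velocity-command sketch with cflib's MotionCommander (example values).
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with MotionCommander(scf, default_height=0.5) as mc:
        mc.start_linear_motion(0.3, 0.0, 0.0)   # 0.3 m/s forward (velocity setpoint)
        time.sleep(3.0)
        mc.start_linear_motion(-0.3, 0.0, 0.0)  # fly back
        time.sleep(3.0)
        mc.stop()
        # Leaving the MotionCommander context lands the Crazyflie.
</code>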
Deck compatibility problems
The previous fixes only work with the LPS modes TDoA2 and TDoA3. Unfortunately, there is still some work to be done on the deck incompatibility between the TWR mode and the Flow deck. The deck stops working shortly after the Crazyflie is turned on, and this seems to be related to the SPI bus that is shared by the LPS deck and the Flow deck. Reading the flow sensor takes some time, which blocks the TWR algorithm for a while, making it miss an event. Since the TWR algorithm relies on a continuous stream of events from the DWM1000 chip, it simply stops working if it does not get one… or at least that is our current theory…
Please check out this issue to follow the ongoing discussion. If you have an idea of what is going on, drop a comment and let's see if we can work together to iron out this issue once and for all!
Accurate indoor localization is a crucial enabling technology for many robotic applications, from warehouse management to monitoring tasks. Ultra-wideband (UWB) localization technology, in particular, has been shown to provide robust, high-resolution, and obstacle-penetrating ranging measurements. Nonetheless, UWB measurements are still corrupted by non-line-of-sight (NLOS) communication and spatially-varying biases due to the doughnut-shaped antenna radiation pattern. In our recent work, we present a lightweight, two-step measurement correction method to improve the performance of both TWR- and TDoA-based UWB localization. We integrate our method into the Extended Kalman Filter (EKF) onboard a Crazyflie and demonstrate closed-loop position estimation with ~20 cm root-mean-square (RMS) error.
A stylized depiction of our UWB indoor localization system and the schematics of the proposed estimation framework.
Methodology
UWB measurement errors can be separated into two groups: (1) systematic bias caused by limitations in the UWB antenna pattern and (2) spurious measurements due to NLOS and multi-path propagation. We propose a two-step UWB bias correction approach exploiting machine learning (to address (1)) and statistical testing (to address (2)). The data-driven nature of our approach makes it agnostic to the origin of the measurement errors it corrects.
(1) Neural Network Bias Correction
The doughnut-shaped antenna radiation pattern causes the relative poses of anchors and tags to have a noticeable impact on the received signal power, which leads to systematic, predictable biases. To empirically demonstrate the systematic measurement errors resulting from varying the relative pose between anchors and tags, we placed two DWM1000 UWB anchors at a distance of 4m and collected both TWR and TDoA UWB range measurements for the UWB tag mounted on top of a Crazyflie spinning around its own z-axis.
Left: schematics of the ranges (∆p’s), azimuth (α’s) and elevation angles (β’s) defining the relative poses of tag T and anchors A0, A1 when collecting the systematic bias measurements. Right: the neural network’s inferred bias (in red) with respect to the tag’s varying azimuth angle towards anchor T0, αT0, plotted against the UWB raw measurements.
We choose to leverage the nonlinear representation power of neural networks to learn the systematic bias which only depends on anchor-tag relative poses. Considering the limited onboard computation power, we select a fully connected neural network with 50 neurons in each of two layers with ReLU activation. To represent the relative pose between the UWB tag and anchors, we select the relative distance ∆p and roll, pitch, and yaw angles of the quadcopter as the input features x for the network. As we used fixed anchors, we do not include their poses as inputs (this level of generalization is left for future work). Given sufficient training data, the spatially-varying measurement bias can be described by a nonlinear function b=f(x) captured by the trained neural network.
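As a rough sketch (not the authors' exact code), a network of this shape can be defined and trained in PyTorch along the following lines, with random placeholder data standing in for the collected bias samples:
<code>
# Sketch of a 4-50-50-1 fully connected bias network; placeholder data only.
import torch
import torch.nn as nn

class BiasNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, 1),
        )

    def forward(self, x):
        # x: (N, 4) tensor of [relative distance, roll, pitch, yaw]
        return self.net(x)

model = BiasNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One hypothetical training step on random placeholder data.
x = torch.randn(256, 4)   # input features
b = torch.randn(256, 1)   # measured bias (raw range minus ground truth range)
loss = loss_fn(model(x), b)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code>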
(2) Outlier (Spurious Measurements) Rejection
Besides our learning-based bias correction, we use a quadcopter’s dynamic model to filter inconsistent UWB range measurements. Given the estimated velocity v and maximum acceleration amax, we can compute the maximum distance dmax a quadcopter can cover during time ∆t. Based on this information, we can reject unattainable measurements before fusing them into the EKF by comparing the measurement innovation with dmax.
Moreover, we use a statistical hypothesis test to further classify potential outlier measurements. Since the measurement innovation vector is assumed to be distributed according to a multivariate Gaussian distribution, the normalized sum of squares of its values should follow a Chi-square distribution. We use the Chi-square hypothesis test to determine whether a measurement innovation is likely coming from this distribution.
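A simplified sketch of these two checks, written for a scalar range innovation with placeholder thresholds (not the authors' onboard implementation):
<code>
# Simplified outlier gates: maximum-distance check and chi-square test.
from scipy.stats import chi2

def max_distance_gate(innovation, v, a_max, dt):
    # Reject measurements the quadcopter could not physically explain:
    # it can move at most d_max = |v|*dt + 0.5*a_max*dt^2 during dt.
    d_max = abs(v) * dt + 0.5 * a_max * dt * dt
    return abs(innovation) <= d_max

def chi_square_gate(innovation, innovation_variance, significance=0.05):
    # The normalized squared innovation should follow a chi-square
    # distribution with 1 degree of freedom for a scalar measurement.
    normalized = innovation ** 2 / innovation_variance
    return normalized <= chi2.ppf(1.0 - significance, df=1)

# Example: a 0.9 m innovation while hovering (v = 0) over 5 ms is rejected.
print(max_distance_gate(0.9, v=0.0, a_max=10.0, dt=0.005))
print(chi_square_gate(0.9, innovation_variance=0.04))
</code>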
UWB measurement bias f (x) prediction performance of the trained neural network (in red) compared to the actual measurement errors (blue dots) as well as the role of model-based filtering (purple dots) and statistical validation (orange dots) in rejecting outlier measurement innovations (teal dots) during a 60” flight experiment.
Data Collection and Training
We use a Crazyflie 2.0 quadcopter and the Loco Positioning System (LPS) UWB DW1000 modules as our research platform. Our calibration approach runs on the Crazyflie's STM32 microcontroller within the FreeRTOS real-time operating system. We equipped a cuboid flight arena with 8 UWB anchors, one at each vertex. The anchor positions were measured using a Leica total station theodolite.
Left: three-dimensional plot of our flight arena showing the positions and poses of the eight UWB DW1000 anchors (each facing towards its own x-axis, i.e., the red versor). Right: two of the training trajectories we flew to collect the samples that we used to train our neural network-based bias estimator
For all experiments, the ground truth position of the Crazyflie was provided by 10 Vicon cameras. The neural network was trained using PyTorch. To perform inference on the Crazyflie's microcontroller, we re-use PyTorch's trained weights in a plain C re-implementation. Since the DW1000 modules in the LPS provide UWB measurements every 5 ms, the neural network inference also runs at 200 Hz during flight. Our outlier rejection method is also implemented in plain C and merged with the onboard EKF.
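As a simplified illustration (in Python/NumPy rather than the authors' plain C), re-using the trained weights in a hand-written forward pass amounts to two ReLU layers followed by a linear output; the weights below are random placeholders with the shapes of the 4-50-50-1 network described earlier.
<code>
# Hand-written forward pass mirroring the idea of a plain re-implementation.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bias_forward(x, W1, b1, W2, b2, W3, b3):
    # x: input feature vector [relative distance, roll, pitch, yaw]
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return float(W3 @ h2 + b3)   # predicted UWB range bias

# Placeholder weights; in practice these come from the trained PyTorch model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((50, 4)), np.zeros(50)
W2, b2 = rng.standard_normal((50, 50)), np.zeros(50)
W3, b3 = rng.standard_normal((1, 50)), np.zeros(1)

print(bias_forward(np.array([4.0, 0.0, 0.0, 0.5]), W1, b1, W2, b2, W3, b3))
</code>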
Closed-loop Position Estimation Performance
We demonstrate the position estimation and closed-loop performance of the proposed methods by flying a Crazyflie quadcopter along planar and non-planar circular trajectories (which were not among the trajectories used for training). In our experiments, we compare the estimation error of (A) the UWB localization estimate enhanced with outlier rejection and (B) the estimate enhanced with both outlier rejection and neural network bias compensation, for both TWR and TDoA2 modes. We repeated all of our experiments 10 times with a target velocity of 0.375 m/s. The quadcopter trajectories during these flight tests are displayed in the following plots.
Flight paths and the tracking performance of our approach with (in blue) and without (in orange) the neural network bias correction for two reference trajectories (planar and non-planar circular orbits) and both UWB modes (TWR and TDoA).
The distributions of the RMS estimation errors are summarized in a box plot. TWR-based ranging results in better localization performance than TDoA. However, we observe that, with our neural network bias compensation, the average RMS error of TDoA localization is around 0.21 m, which is comparable to that of TWR-based localization (~0.19 m). Thanks to the neural network bias compensation, the average reduction in RMS error is ~18.5% for TWR and ~48% for TDoA. Most notably, this result suggests that bias compensation might help close the performance gap between TWR- and TDoA-based localization.
Root mean square error (RMSE) of the quadcopter position estimate before (in orange) and after (in blue) the neural network calibration step for both TWR and TDoA ranging modes. Each pair of box plots refers to a planar reference trajectory (left of each pair) and a reference trajectory with varying z (right of each pair), showing a greater performance enhancement for the latter.
Outlook
In this work, we presented a two-step methodology to improve UWB localization for both TWR- and TDoA-based measurements. We used a lightweight neural network to model and compensate for pose-dependent and spatially-varying biases, and an outlier rejection mechanism to filter spurious measurements. Through several real-world flight experiments tracking different trajectories, we showed that we are able to improve localization accuracy for both TWR and TDoA, granting safer indoor flight. In future work, we will include the anchors' pose information to allow our method to generalize to previously unobserved indoor environments with different anchor configurations.
Feel free to contact us if you have any questions or ideas: wenda.zhao@robotics.utias.utoronto.ca. Please cite this as:
<code>@article{wenda2020learning,
title={Learning-based Bias Correction for Ultra-wideband Localization of Resource-constrained Mobile Robots},
author={Wenda Zhao and Abhishek Goudar and Jacopo Panerati and Angela P. Schoellig},
journal={arXiv preprint arXiv:2003.09371},
year={2020}
}</code>
We have been traveling a lot this Autumn and have been talking to many Crazyflie users. It is always great to talk to our users and to get feedback about how what we make is being used.
The Swarm bundle
One particular subject that stood out was the Loco Positioning System (LPS): the LPS seems to be used by a lot of people and we have gotten quite a lot of feedback about it, some about things that work well but also about things that could be improved. This is interesting, because we normally do not get that much feedback from people using the LPS.
We released the LPS about 2.5 years ago with Two-Way Ranging support for a single Crazyflie, and it has been improved regularly since then, among other things by adding two TDoA modes that support multiple Crazyflies and by releasing the Roadrunner board, a standalone LPS tag.
If you are using the LPS, it would be great to get some feedback about what you are using it for, what works, what does not work and (even better ;) any improvements that can be pushed to the community. Do not hesitate to comment on this blog post, post on the forum, or open issues or pull requests in the LPS-node or Crazyflie-firmware GitHub projects.
Only a week left until we stand in our ICRA booth in Montreal and give you a glimpse of what we do here at Bitcraze. As we have written about earlier, we are aiming to run a fully automated demo. We have been fine-tuning it over the last couple of days, and if something unpredictable doesn't break it, we think it is going to be very enjoyable. For those who are interested in the juicy details, check out this informative ICRA 2019 page, but if you are going to visit, maybe wait a bit so you don't get spoiled.
Apart from the demo we are also going to show our products as well as some new things we are working on. The brand new things include:
AI-deck, Active marker deck and Lighthouse-4 deck
AI-deck: This is a collaborative product between GreenWaves Technologies, ETH Zurich and Bitcraze. It is based on the PULP-shield that the Integrated and System Laboratory has designed. You can read more about it in this blog post. The difference from the PULP-shield is that we have added an ESP32, the NINA-W102 module, so that video can be streamed over WiFi. We hope this will ease development and enable more use cases.
Active marker deck: Another collaboration, but this time with Qualisys. This will make tracking with their motion capture cameras easier and better. Some more details can be found in this blog post. Qualisys will have the booth right next to ours, where it will be possible to see it in a live demo!
Lighthouse-4 deck: Using the Vive lighthouse positioning system, this deck adds sub-millimeter precision to the Crazyflie. This is the deck used in the demo and it could become the star of the show.
Adding to the above we will of course also display our recently released products:
Crazyflie 2.1: The Crazyflie 2.1 is an improvement of the Crazyflie 2.0 that keeps backward compatibility.
Better radio performance and external antenna support: With a new radio power amplifier we’ve improved the link quality and added support for dual antennas (on-board chip antenna and external antenna via u.FL connector)
Better power button: We've gotten feedback that the power button breaks too easily, so we've now replaced it with a more solid alternative.
Improved battery cable fastening: To avoid weakening of the cables over time they are now run through a cable relief.
Improved sensors: To make the flight performance better we've switched out the IMU and pressure sensor. The new Crazyflie uses the drone-specialized sensor combo BMI088 and BMP388 by Bosch Sensortec.
Flow deck v2: The Flow deck v2 has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
Z-ranger deck v2: The Z-ranger v2 deck has been upgraded with the new ST VL53L1x which increases the range up to 4 meters
Multi-ranger deck: The Multi-ranger deck adds VL53L1x sensors in all directions for mapping and obstacle avoidance.
MoCap marker deck: The motion capture deck with support for easy attachment of passive markers for motion capture camera tracking.
Roadrunner: The Roadrunner is released as early access; the hardware is basically a Crazyflie 2.1 without motors and with up to 12 V input power. This enables other robots or systems to use the Loco positioning system.
You can find us in booth 101 at ICRA 2019 (in Montreal, Canada), May 20 – 22. Drop by and say hi, check out the products & demo and tell us what you are working on. We love to hear about all the interesting projects that are going on. See you there!
As we have written about before, we moved to a new office last month. One of the major reasons was the need for a bigger flight lab, which will enable us to do better testing and improve how we develop things, especially our positioning solutions. Even though it is not a huge space, going from 4x4x3 m to 8x8x3.5 m, and possibly 13x9x3.5 m, is a great improvement for us. We call this great playground the Arena, as that is what it feels like to us :-). Very soon we hope to have our Qualisys MoCap system, the Lighthouse and the Loco positioning system running, maybe even at the same time.
The Arena
As a bonus, this is a great place to play HTC Vive VR games; we just need to get the wireless transmitter so we can make full use of the space!