
This week we have a guest blog post from Wenda Zhao, Ph.D. candidate at the Dynamic Systems Lab (with Prof. Angela Schoellig), University of Toronto Institute for Aerospace Studies (UTIAS). Enjoy!

Accurate indoor localization is a crucial enabling capability for indoor robotics. Small and computationally-constrained indoor mobile robots have led researchers to pursue localization methods leveraging low-power and lightweight sensors. Ultra-wideband (UWB) technology, in particular, has been shown to provide sub-meter accurate, high-frequency, obstacle-penetrating ranging measurements that are robust to radio-frequency interference, using tiny integrated circuits. UWB chips have already been included in the latest generations of smartphones (iPhone 12, Samsung Galaxy S21, etc.) with the expectation that they will support faster data transfer and accurate indoor positioning, even in cluttered environments.

A Crazyflie with an IMU and UWB tag flies through a cardboard tunnel. A vision-based motion capture system would not be able to achieve this due to the occlusion.

In our lab, we show that a Crazyflie nano-quadcopter can stably fly through a cardboard tunnel with only an IMU and a UWB tag, from Bitcraze’s Loco Positioning System (LPS), for state estimation. However, as we show above, it is challenging to achieve reliable localization performance. Many factors can reduce the accuracy and reliability of UWB localization, for either two-way ranging (TWR) or time-difference-of-arrival (TDOA) measurements. Non-line-of-sight (NLOS) and multi-path radio propagation can lead to erroneous, spurious measurements (so-called outliers). Even line-of-sight (LOS) UWB measurements exhibit error patterns (i.e., bias), which are typically caused by the UWB antenna’s radiation characteristics. In our recent work, we present an M-estimation-based robust Kalman filter to reduce the influence of outliers and achieve robust UWB localization. We contributed an implementation of the robust Kalman filter for both TWR and TDOA (PR #707 and #745) to Bitcraze’s crazyflie-firmware open-source project.

Methodology

The conventional Kalman filter, a primary sensor fusion mechanism, is sensitive to measurement outliers due to its minimum mean-square-error (MMSE) criterion. To achieve robust estimation, it is critical to properly handle measurement outliers. We implement a robust M-estimation method to address this problem. Instead of using a least-squares, maximum-likelihood cost function, we use a robust cost function to down-weight the influence of outlier measurements [1]. Compared to Random Sample Consensus (RANSAC) approaches, our method can handle sparse UWB measurements, which are often a challenge for RANSAC.

From the Bayesian maximum-a-posteriori perspective, the Kalman filter state estimation framework can be derived by solving the following minimization problem:
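In standard notation, a sketch of this problem (consistent with the variable definitions below) is:

\hat{x}_k = \arg\min_{x_k} \; (x_k - \check{x}_k)^T P_k^{-1} (x_k - \check{x}_k) + (y_k - g(x_k, 0))^T R_k^{-1} (y_k - g(x_k, 0))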

Therein, xk and yk are the system state and the measurements at timestep k. Pk and Rk denote the prior covariance and the measurement covariance, respectively. The prior and posterior estimates are denoted as x̌k and x̂k, and the measurement function without noise is written as g(xk, 0). Through Cholesky factorization of Pk and Rk, the original optimization problem is equivalent to
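A sketch of the equivalent form, with ex,k and ey,k denoting the whitened prior and measurement errors:

\hat{x}_k = \arg\min_{x_k} \sum_i e_{x,k,i}^2(x_k) + \sum_j e_{y,k,j}^2(x_k), \quad e_{x,k}(x_k) = P_k^{-1/2}(x_k - \check{x}_k), \quad e_{y,k}(x_k) = R_k^{-1/2}(y_k - g(x_k, 0))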

where ex,k,i and ey,k,j are the elements of ex,k and ey,k. To reduce the influence of outliers, we incorporate a robust cost function into the Kalman filter framework as follows:
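A sketch of the resulting robust formulation, where the squared error terms above are wrapped in the robust cost rho():

\hat{x}_k = \arg\min_{x_k} \sum_i \rho\big(e_{x,k,i}(x_k)\big) + \sum_j \rho\big(e_{y,k,j}(x_k)\big)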

where rho() can be any robust cost function (G-M, SC-DCS, Huber, Cauchy, etc. [2]).

By introducing a weight function, with e as input, for the process and measurement uncertainties, we can translate the optimization problem into an Iteratively Reweighted Least Squares (IRLS) problem. The optimal posterior estimate can then be computed by iteratively solving the least-squares problem using the robust weights computed from the previous solution. In our implementation, we use the G-M robust cost function, and the maximum number of iterations is set to two for computational reasons. For further details about the robust Kalman filter, readers are referred to our ICRA/RA-L paper and the onboard firmware (mm_tdoa_robust.c and mm_distance_robust.c).
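As a rough illustration of the idea, the sketch below shows a G-M-weighted IRLS update for a simple linear measurement model in Python. The function names, the linear model and the scale constant c are illustrative assumptions only; the actual on-board implementation is the C code in mm_tdoa_robust.c and mm_distance_robust.c.

import numpy as np

def gm_weight(e, c=1.0):
    # Geman-McClure (G-M) weight: close to 1 for small errors, close to 0 for large ones.
    return c**4 / (c**2 + e**2)**2

def robust_kalman_update(x_check, P, y, H, R, iterations=2, c=1.0):
    """One robust (IRLS) measurement update for a linear model y = H x + v (illustrative only)."""
    Lp = np.linalg.cholesky(P)   # P = Lp Lp^T
    Lr = np.linalg.cholesky(R)   # R = Lr Lr^T
    x_hat = x_check.copy()
    for _ in range(iterations):
        # Whitened prior and measurement errors at the current estimate
        e_x = np.linalg.solve(Lp, x_hat - x_check)
        e_y = np.linalg.solve(Lr, y - H @ x_hat)
        # Inflate the covariances where the weights are small (suspected outliers)
        P_w = Lp @ np.diag(1.0 / gm_weight(e_x, c)) @ Lp.T
        R_w = Lr @ np.diag(1.0 / gm_weight(e_y, c)) @ Lr.T
        # Standard Kalman update with the reweighted covariances
        S = H @ P_w @ H.T + R_w
        K = P_w @ H.T @ np.linalg.inv(S)
        x_hat = x_check + K @ (y - H @ x_check)
    return x_hat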

Performance

We demonstrate the effectiveness of the robust Kalman filter on-board a Crazyflie 2.1. The Crazyflie is equipped with an IMU and an LPS UWB tag (in TDOA2 mode). With the conventional onboard extended Kalman filter, the drone is affected by measurement outliers and jumps around significantly while trying to hover. In contrast, with the robust Kalman filter, the drone shows more reliable localization performance.

The robust Kalman filter implementations for UWB TWR and TDOA localization have been included in the crazyflie-firmware master branch as of March 2021 (the 2021.03 release). This functionality can be turned on by setting a parameter (robustTwr or robustTdoa) defined in estimator_kalman.c. We encourage LPS users to check out this new functionality.
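From the Python lib this looks roughly like the sketch below; the parameter group name (kalman) and the URI are assumptions, so check the parameter tab in the cfclient for the exact names on your firmware version.

import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

cflib.crtp.init_drivers()
# The URI is a placeholder; use your own Crazyflie's address.
with SyncCrazyflie('radio://0/80/2M/E7E7E7E7E7') as scf:
    # Enable the robust TDoA estimator (use robustTwr for TWR setups).
    scf.cf.param.set_value('kalman.robustTdoa', '1')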

As mentioned above, off-the-shelf, low-cost UWB modules also exhibit distinctive and reproducible bias patterns. In our recent work, we devised experiments using the LPS UWB modules and showed that the systematic biases have a strong relationship with the pose of the tag and the anchors, as they result from the UWB radio’s doughnut-shaped antenna pattern. A pre-trained neural network is used to approximate the systematic biases. By combining bias compensation with the robust Kalman filter, we obtain a lightweight, learning-enhanced localization framework that achieves accurate and reliable UWB indoor positioning. We show that our approach runs in real-time and in closed-loop on-board a Crazyflie nano-quadcopter, yielding enhanced localization performance for autonomous trajectory tracking. The dataset for the systematic biases in UWB TDOA measurements is available on our Open-source Code & Dataset webpage. We are also currently working on a more comprehensive dataset with IMU, UWB, and optical flow measurements, again based on the Crazyflie platform. So stay tuned!

References

[1] L. Chang, K. Li, and B. Hu, “Huber’s M-estimation-based process uncertainty robust filter for integrated INS/GPS,” IEEE Sensors Journal, vol. 15, no. 6, pp. 3367–3374, 2015.

[2] K. MacTavish and T. D. Barfoot, “At all costs: A comparison of robust cost functions for camera correspondence outliers,” in IEEE Conference on Computer and Robot Vision (CRV), 2015, pp. 62–69.

Links

The authors are with the Dynamic Systems Lab, Institute for Aerospace Studies, University of Toronto, Canada, and affiliated with the Vector Institute for Artificial Intelligence in Toronto.

Feel free to contact us if you have any questions or suggestions: wenda.zhao@robotics.utias.utoronto.ca.

Please cite this as:

<code>@ARTICLE{Zhao2021Learningbased,
author={W. {Zhao} and J. {Panerati} and A. P. {Schoellig}},
title={Learning-based Bias Correction for Time Difference of Arrival Ultra-wideband Localization of Resource-constrained Mobile Robots},
journal={IEEE Robotics and Automation Letters},
volume={6},
number={2},
pages={3639-3646},
year={2021},
publisher={IEEE},
doi={10.1109/LRA.2021.3064199}}
</code>

Background

In the past couple of weeks we have been busy trying to improve the development interface of the Crazyflie. We want to make developing with and for the platform a more pleasant experience.

We have started looking at the logging and parameter framework and how to improve it for our users. The aim of this framework is to make it easy to log data from the Crazyflie and to set variables during runtime. Your application can use them to control the behavior of the platform or to receive data about what it is currently up to. As of today, there are 227 parameters and 467 logging variables defined in the firmware.

View from the cfclient of the different logging variables one could subscribe to

These logging variables and parameters have been added to the Crazyflie firmware over the course of years. Some are critical infrastructure, needed to be able to write proper applications that interface with the platform. Some are duplicates or were added for debugging years ago. Others have in some way outlived their usefulness as the firmware and functionality have moved on. The problem is that we have no way of conveying this information to our users, and this is what we are trying to rectify.

An attempt at stability

We are currently reviewing all of our logging variables and parameters in an attempt to make the situation clearer for our users … and ourselves. We are adding documentation to make the purpose of each individual parameter and logging variable more clear. And we are also dividing them up into two categories: core and non-core.

If a parameter or logging variable is marked as core in the firmware, that constitutes a promise that we will try very hard not to remove, rename or in any other way change its behavior. The idea is that this variable or parameter can be used in applications without any fear or doubt about it going away.

If a variable or parameter is non-core, it does not mean that it is marked for removal. But it could mean that we need more time to make sure that it is the proper interface for the platform. It means that it could change in some way, or in some cases be removed, in later firmware releases.

The reason for doing this is twofold: we want to make the Crazyflie interface clearer for our users, and we want something that we feel we can maintain and keep up-to-date documentation for.

What is the result?

We have introduced a pair of new macros to the firmware, LOG_ADD_CORE and PARAM_ADD_CORE, which can be used to mark a parameter or variable as core. When using these we also mandate that a Doxygen comment is attached to the macro.

Below is an example from the barometer log group, showing the style of documentation expected and how to mark a logging variable as core. Parameters are treated in the same way.

/**
 * Log group for the barometer
 */
LOG_GROUP_START(baro)

/**
 * @brief Altitude above Sea Level [m]
 */
LOG_ADD_CORE(LOG_FLOAT, asl, &sensorData.baro.asl)

/**
 * @brief Temperature [degrees Celsius]
 */
LOG_ADD(LOG_FLOAT, temp, &sensorData.baro.temperature)

/**
 * @brief Air pressure [mbar]
 */
LOG_ADD_CORE(LOG_FLOAT, pressure, &sensorData.baro.pressure)

LOG_GROUP_STOP(baro)
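For reference, this is roughly how a core variable such as baro.asl can then be logged from the Python lib; the URI is a placeholder, and the 100 ms period is just an example.

import cflib.crtp
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

cflib.crtp.init_drivers()
# Ask for two of the barometer variables documented above, at 10 Hz.
log_conf = LogConfig(name='baro', period_in_ms=100)
log_conf.add_variable('baro.asl', 'float')
log_conf.add_variable('baro.pressure', 'float')

with SyncCrazyflie('radio://0/80/2M/E7E7E7E7E7') as scf:
    with SyncLogger(scf, log_conf) as logger:
        for timestamp, data, _ in logger:
            print(timestamp, data['baro.asl'], data['baro.pressure'])
            break  # one sample is enough for this example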

We have also added a script to the firmware repository: elf_sanity.py. The script will return data about the parameters and logging variables that are included in a firmware elf. This can be used to count the number of core parameters. If we point it to a newly built Crazyflie elf, after we’ve done our initial review pass of the parameters and variables, we get the result below.

$ python3 tools/build/elf_sanity.py --core cf2.elf 
101 parameters and 78 log vars in elf

To produce a list of the parameters and variables you can add the --list-params and --list-logs options to the script.

What is the next step?

Once we have finished our review of the parameters and logging variables we will explore different ways of making the documentation of them available in a clear and accessible way. And we will come up with a scheme for making changes to the set of parameters and variables. Once this is all finished you can expect an update from us.

The end goal of our efforts is making developing for the Crazyflie a smoother process, and we would love to hear from you. What is confusing? What are your pain points? Let us know, so we can do better!

As you may have noticed, we have been talking about the Lighthouse positioning system a lot these last couple of months, ever since we got it out of early access. However, it is good to realize that it is not the only option out there for positioning your Crazyflie! That is why in this blog post we will lay out the possible options and explain how they are different from, or similar to, one another.

The four possible ways to position the Crazyflie

Absolute Positioning / Off-board Pose Estimation

Absolute Positioning and External Pose Estimation with the MoCap System

The first category we will handle is the use of motion capture (MoCap) systems, which revolve around the use of infrared (IR) cameras and markers. We use Qualisys cameras ourselves, but there are also labs out there that use Vicon or OptiTrack. The general idea is that the cameras have an IR-light-emitting LED ring, whose light is bounced back by reflective markers placed on the Crazyflie. These markers can therefore be detected by the same cameras, which pass the marker positions on to an external computer. This computer runs a MoCap program which turns the detected markers into a pose estimate, which is in turn communicated to the Crazyflie by a Crazyradio PA.

Since the position is estimated by an external computer instead of on-board the Crazyflie, a MoCap positioning system is categorized as off-board pose estimation using an absolute positioning system. For more information, please check the Motion Capture positioning documentation.

Absolute Positioning / On-board Pose Estimation

Absolute Positioning and Internal Pose Estimation with the Lighthouse and Loco Positioning System

The next category is a bit different, and it consists of both the Loco positioning system and the Lighthouse positioning system. Even though both use beacons/sensors that are placed externally to the Crazyflie, the pose estimation is done entirely on-board in the firmware of the Crazyflie, so no external computer is needed to communicate the position back to the Crazyflie. Remember that you do still need to communicate the reference setpoints or high-level commands if you are not using the app layer.

Of course, there are clear differences in the measurement type. A Crazyflie with the Loco deck attached measures the distance to the externally placed nodes using ultra-wideband (UWB), while the Lighthouse deck detects the light plane angles emitted by the Lighthouse base stations. However, the principle is the same: those raw measurements are used as input to the extended Kalman filter on-board the Crazyflie, which outputs the estimated pose after fusing them with the IMU measurements.
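As a rough sketch of the Loco case, with p the Crazyflie position, a_i and a_j the anchor positions and v the measurement noise (the Lighthouse case replaces these range equations with sweep-angle equations based on the base station poses):

d_i = \lVert p - a_i \rVert + v \quad \text{(TWR)}, \qquad \Delta d_{ij} = \lVert p - a_j \rVert - \lVert p - a_i \rVert + v \quad \text{(TDoA)}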

Therefore these systems can be classified as absolute positioning systems with on-board pose estimation. To learn more please read the Loco and Lighthouse positioning system documentation!

Relative Positioning / On-board Pose Estimation

Relative Positioning and Internal Pose Estimation with the Flowdeck V2.

It is not necessary to set up an external positioning system in your room in order to achieve a form of positioning on the Crazyflie. With the Flowdeck attached, the Crazyflie can measure flow per frame with an optical flow sensor and the height in millimetres with a time-of-flight sensor. These measurements are then fused together with the IMU within the extended Kalman filter (see the Flow deck measurement model), which results in an on-board pose estimate.
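As a rough sketch of why these two sensors complement each other: for pure horizontal translation the observed flow rate scales with velocity over height, so the flow and the time-of-flight height together constrain the velocity (rotation compensation and the exact firmware constants are left out here):

\dot{\phi}_x \approx \frac{v_x}{z}, \qquad \dot{\phi}_y \approx \frac{v_y}{z}

with the phi-dot terms the angular flow rates, v_x and v_y the horizontal velocities, and z the height above the ground.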

The most important difference to note here is that positioning estimated by only the Flowdeck will not result in an absolute position estimate but a relative one. Instead of an externally placed system (like MoCap, Lighthouse or Loco) dictating where the zero position in XYZ is, the start-up position of the Crazyflie determines where the origin of the coordinate system is. That is why the Flowdeck is classified as a Relative Positioning System with On-board Pose Estimation.

IMU-only On-board Pose Estimation?

Oh boy… that is a different story. Theoretically it could be possible by using the onboard accelerometers of the Crazyflie and fusing those in some sort of estimator; however, practice has shown that the Crazyflie’s accelerometers are too noisy to result in any good pose estimate… We haven’t seen any work that has successfully achieved a stable hover on only the IMU of the Crazyflie, but if you have done or seen research that has, please let us know!

And if you would like to give it a go yourself and build an estimator that is able to do this, please check out the new out-of-tree build functionality for estimators. This is still work in progress so it might have some bugs, but it should enable you to plug in your own estimator separate from the Crazyflie firmware ;)

Documentation

We try to keep all the information on all our positioning systems on our website. So check out the positioning system overview page to be referred to more details, if you are interested in a particular system that fits your requirements!

Now that the Lighthouse deck is out of early access and we have made it easier to set up a lighthouse positioning system, we are at the next stage: showing how awesome it is! We feel that there are not enough people out there that know about the Lighthouse positioning system, and some even confuse it with the Loco positioning system (to be honest, the abbreviation LPS makes it challenging). But we are confident that the Lighthouse system is a good alternative for those that want to do drone research but are on a tight budget.

The area of the data collection, from the paper.

Lighthouse Dataset

During Wolfgang Hönig‘s time here at Bitcraze, one of the bigger projects we worked on together was to generate a dataset comparing the positioning quality of the Lighthouse system with a motion capture (MoCap) system. You could imagine that would be a difficult task, since the lighthouse base stations transmit infrared light sweeps, and MoCap cameras by default also emit IR light which is reflected back by markers. However, with the Active marker deck for the Qualisys system, we were able to use the MoCap and Lighthouse positioning without too much interference.

Moreover, Wolfgang also helped out with improving the logging quality on the Micro-SD-card deck, which enabled us to record as much data in real time as possible. He wrote a blog post about event-based logging a few weeks ago, which is a new approach to recording data on the Crazyflie at a fast pace. With the Active marker deck, the Micro-SD-card deck and of course the Lighthouse deck, … the Crazyflie turns into a full-blown positioning data-collection machine!

The configuration of the Crazyflie with the Micro-SD-card deck and the Lighthouse deck, from the lighthouse dataset paper.

Paper

About this whole process, we wrote the following paper:
Lighthouse Positioning System: Dataset, Accuracy, and Precision for UAV Research,
A. Taffanel, B. Rousselot, J. Danielsson, K. McGuire, K. Richardsson, M. Eliasson, T. Antonsson, W. Hönig, ICRA Workshop on Robot Swarms in the Real World, arXiv, 2021

This paper contains a short explanation of the lighthouse system, how we set up the data collection, and an analysis of the results, where we compared both Lighthouse V1 and V2 using the crossing beam (C.B.) method and the extended Kalman filter. In all cases, the mean and median Euclidean errors of the Lighthouse positioning system are about 2-4 centimeters compared to our MoCap system as ground truth.

Check out the lighthouse dataset paper to read all the details of the experiments!

The Euclidean error of both LH1 and LH2 with MoCap as ground truth, taken from the dataset paper.

ICRA Swarm Workshop

Our paper has been selected for a poster presentation at the ICRA 2021 Workshop: Robot Swarms in the Real World. So if you have any questions about the paper, please join and ask us in person! The workshop will be held on the 4th of June.

Moreover, we are also sponsoring the event by giving away a Lighthouse Swarm Bundle to whoever wins the best video-demonstration award! So to all the participants, the best of luck! We are super curious to see what you’ll have to show us.

A little while ago we made a blog post talking about communication reliability; one part of this work included an alternative CRTP link implementation for the Crazyflie lib. This week we will make one of the native CRTP link implementations, the Crazyflie-link-Cpp, the default link for the Crazyflie Python lib.

In the Crazyflie communication stack, the CRTP link is the piece of software that handles reliable packet communication between a computer and the Crazyflie. In the Crazyflie it is mainly implemented in the nRF51 radio chip (for the radio link); on the computer it is part of the Crazyflie Python lib and Crazyflie client, and for ROS and Crazyswarm it is implemented in crazyflie-cpp. Other clients like LaMoucheFolle have also re-implemented their own versions of the CRTP link.

The CRTP link is the most critical part of the communication with the Crazyflie: if it is not working properly, nothing can work properly, since all communication with the Crazyflie passes through the CRTP link. This code duplication in multiple projects means that we have many places for diverging behaviour and creative bugs, which makes it hard to develop new clients for the Crazyflie.

One way we are trying to solve this problem is to unify the links: let’s try to use the same link in all the clients. In order to do so we need a link that can act as a minimum common denominator: it needs to work with Python in order to accommodate the Crazyflie Python lib, work with C++ for Crazyswarm and LaMoucheFolle, and be high-performance enough to handle swarms of Crazyflies. The C++ link implementation can fulfill all these requirements using a Python binding, and it will give higher performance to the Python lib.

Switching to the native C++ link will give us a lot of benefits:

  • It is higher performance, with about 20% higher packet throughput than the pure Python implementation.
  • It will unify the Python lib and Crazyswarm a bit more. Crazyswarm will be able to use the same link as soon as broadcast packet support is added.
  • The Cpp link supports dynamic allocation of radios: there is no need to choose which radio to use for each connection, as the link will distribute connections over all available radios when a star is used instead of the radio number (radio://*/80/2M) in the URI (see the sketch below).
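A minimal sketch of what that looks like from the Python lib; the address at the end of the URI is a placeholder for your Crazyflie’s address.

import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

cflib.crtp.init_drivers()
# A '*' instead of a radio number lets the link pick any available Crazyradio.
with SyncCrazyflie('radio://*/80/2M/E7E7E7E7E7') as scf:
    print('Connected via', scf.cf.link_uri)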

The biggest drawback so far is that the Cpp link needs to be compiled for each platform. We are compiling a Python wheel for it in GitHub Actions for Windows/Mac/Linux on x86_64; any other architecture (including Raspberry Pi) will use the existing Python CRTP link for the time being.

We are in the process of enabling the new link as the default in the lib. There are still some outstanding bugs related to bootloading that need to be ironed out, but we expect this change to be included in the next release.

Ever since the AI-deck 1.x was released in early access, we’ve been excited to see so many users diving in and experimenting with it. The product is still in early access, where the hardware is deemed ready but the software and documentation still need work. Even so, we try to do our best to make the product as usable as possible. We’re happy to see some of our users doing great stuff, like the PULP-platform’s latest paper “Fully Onboard AI-powered Human-Drone Pose Estimation on Ultra-low Power Autonomous Flying Nano-UAVs“.

Firmware and Examples

The AI-deck is built around the GAP8 chip developed by GreenWaves Technologies. On their website there’s an explanation of the development tools, which gives a general understanding of what you can use. Their GAP SDK documentation also explains how to install and try out some of their examples, either on a GAP8 simulator on the computer or on the GAP8 chip on the AI-deck itself.

We also host a separate repository with some AI-deck related examples which run with the GAP SDK.

Documentation and Support

Recently we also added the AI-deck documentation to the Bitcraze website, generated from the documentation already available in the GitHub repository. There are still improvements to make, so if you find any issues or anything missing, don’t hesitate to help us improve it. At the bottom there is an ‘improve this page’ link where you can suggest a change, or you can notify us by posting on the issue list of the AI-deck repository.

Also note that we have a separate AI-deck category on our forum where you can search for or add any AI-deck related questions. Remember that posting the issues that you are having will also help us to improve the platform and hopefully soon get it out of Early Access.

Workshop by PULP-platform

On the 16th of April we hosted a workshop given by PULP-platform, featuring GreenWaves Technologies. In the workshop, an overview of the AI-deck and GAP8 was given, followed by some basic hands-on exercises. About 70 people joined the workshop and we were happy that it was so well received.

The workshop is a great source of information for anybody who is just getting started with the AI-deck, so have a look at the recordings on YouTube and the slides on the event page. Also make sure to check out their PULP training page for more tools that can also be used on the AI-deck. A big thanks to PULP-platform and GreenWaves Technologies for taking part in the workshop!


We would also like to ask anybody who joined the workshop to fill in this small questionnaire, so that we can get some more feedback on how it went and how we can improve for the next one.

During the last year we, like many others, have had to deal with new challenges. Some of them have been personal while some have been work related. Although it’s been a challenge for us to distribute the team, we’ve tried to refocus our work in order to keep making progress with our products.

Our users have of course had challenges of their own. Having a large user-base in academia, we’ve seen users having restricted physical access to hardware and purchasing procedures becoming complex. Some users have solved this by moving the lab to their homes using the Lighthouse positioning or the Loco positioning systems. Others have been able to stay in their labs and classrooms although under different circumstances.

At Bitcraze we have been able to overcome the challenges we’ve faced so far, largely thanks to a strong and motivated team, but now it seems as if one of our biggest challenges might be ahead of us.

Semiconductor sourcing issues

Starting early this year, lead times for some components increased and there were indications that this might become a problem. This has been an issue for other parts like GPUs and CPUs for a while, but not for the semiconductors we use. Unfortunately, sourcing of components has now become an issue for us as well. As far as we understand, the problems have been caused by an unforeseen demand from the automotive industry combined with a few random events where the production capacity of certain parts has decreased. Together this has created a global semiconductor shortage with large effects on the supply chain.

For us it started with the LPS Node, where the components suddenly cost twice as much as normal. The MCU and pressure sensor were mainly to blame, incurring huge price increases. Since these parts were out of stock in all the normal distributor channels, the only way to find them was on the open market. Here price is set by supply and demand, where supply is now low and demand very high. Prices fluctuate day by day, sometimes with very large swings, and it’s very hard to predict what will happen. To mitigate this for the LPS Node we started looking for an alternative MCU, as the STM32F072 was responsible for most of the increase (a 600% price increase). Since stock of the LPS Node was getting low, we needed a quick solution and found the pin-compatible STM32L422 instead, where supply and price were good through normal channels. The work of porting the code started, but after a few weeks we got word that importing this part to China was blocked. So after a dead end we’re back to the original MCU, with a few weeks of lead time lost and a very high production price.

Unfortunately this problem is not isolated to the LPS Node; the next issue we’re facing is the production of the Crazyflie 2.1, where the STM32F405, BMI088, BMP388 and nRF51822 are all affected, with price increases between 100 and 400%. These components are central parts of the Crazyflie and they cannot easily be switched to other alternatives. Even if they could, a re-design takes a long time and it’s not certain that the new parts would still be available at a reasonable price by then.

Aside from the huge price increases on the open market, we’re also seeing price increases in official distributor channels. With all of this weighed together, we expect this will be an issue for most of our products in the near future.

Planning ahead

An even bigger worry than the price increase is the risk of not being able to source these components at all for upcoming batches. Having no stock to sell would be really bad for Bitcraze as a business and of course also really bad for our customers that rely on our products for doing their research and classroom teaching.

To mitigate the risk of increasing prices and of not being able to source components in the near future, we’re now forced to stock up on parts. Currently we are securing these key components to cover production until early next year, hoping that the situation will have improved by then.

Updated product prices

Normally we keep a stable price for a product once it has been released. For example the LPS Node is the same price now as it was in 2016 when it was released, even though we’ve improved the functionality of the product a lot. We only adjust prices for hardware updates, like when the Crazyflie 2.0 was upgraded to Crazyflie 2.1. But to mitigate the current situation we will have to side-step this approach.

In order for us to be able to continue developing even better products, and to support those of you that already use the Crazyflie ecosystem, we need to keep a reasonable margin. From the 1st of May we will be adjusting prices across our catalog, increasing them by 10-15%. Although this doesn’t fully reflect the changes we are seeing in production prices at the moment, we believe the most drastic increases are temporary, while the more moderate ones will probably stick as we move forward.

Even though times are a bit turbulent now, we hope the situation will settle down soon and we think the actions we are taking now will allow us to focus on evolving our platform for the future.

A few weeks ago we released version 2021.03, including the Python library, Cfclient and the firmware. The biggest feature of that release was that we (finally) got the lighthouse positioning system out of early access and added it as an official product to the Crazyflie ecosystem! Of course we are very excited about that milestone, but the work does not end there… We also need to communicate to you – our favourite users – how to use it, what its features are, and where to find all this new information.

New Landing Page

First of all, we made a new landing page just for the lighthouse system: similar to bitcraze.io/start, we now also have bitcraze.io/lighthouse. This landing page is what will be printed on the Lighthouse base station box that will be available soon in our store, but it is also directly accessible from the front page under ‘Product News’.

This landing page has all kinds of handy links which direct the user to the getting started tutorial, the shop page, or to its place within the different positioning systems we offer and support. It is meant to give a very generic first overview of the system without being overwhelming right off the bat, and we hope that the information funnel will be smoother with this landing page.

New tutorial and product pages

For getting started with the lighthouse positioning system, we heavily advise everybody to follow the new getting started tutorial page, even if you have used the lighthouse system since its early access days. The thing is that the procedure of setting the system up has changed drastically. The calibration data and geometry are now stored in persistent memory on-board the Crazyflie, and the lighthouse deck itself is now properly flashed. So if you are still using a custom config.mk, hardcoding geometry in the app layer, or using get_bs_geometry.py to get the geometry… stop what you are doing, update the Crazyflie firmware, install the newest Cfclient, and follow the tutorial!

We have also already made a product page for the Lighthouse Swarm bundle. Currently it is still noted as coming soon, but you can already sign up to get a notification when it is out, which we hope to have ready in about 1-2 months. The lighthouse deck is of course already available for those that cannot wait and want to buy a SteamVR base station somewhere else. Just keep in mind that, even though v1 is supported, in the future we will mostly focus on version 2 of the base stations.

Video tutorial

Once again we have ventured into the land of videos and recorded a “Getting started with the Lighthouse positioning system” tutorial for those who prefer video over text.

Feedback

We love feedback and want to improve! Please don’t hesitate to contact us on contact@bitcraze.io if you have comments or suggestions!

Approximately two months ago we wrote a blog post presenting our planned master thesis. Time flies, and we have now reached a state where the results are presentable and possible to use. We have used the Renode framework and created a platform for the Crazyflie 2.1. In Bitcraze’s GitHub repository there is now a Renode fork with a custom Renode-infrastructure submodule. To get Renode up and running on your computer, check the README found there.

On the Renode branch crazyflie there are two new REPL (Renode platform) files describing the platform. An example of the syntax is given below.

// I am a comment
peripheralname: Namespace.ClassName @ parent 0x08000000
    numericConstructorField: 0x100000
    stringConstructorField: "template"
    Interrupt -> interrupthandler@3

In cf2.repl all the external peripherals are connected, while stm32f405.repl contains the STM32F405 peripherals. Note that only the peripherals used by the current Crazyflie 2.1 have been added, since they are the only ones that can be tested using the Crazyflie firmware.

When running Renode, a RESC (Renode script) file is loaded. There are currently two RESC files for the Crazyflie: one that only loads the Crazyflie platform, and one that can be used for testing. The one for testing automatically starts the simulation, and it also has a hook to exit Renode once the self test has passed.

Successful startup!

As mentioned, Renode can be used both for automatic and for interactive firmware testing. The current version of the automatic testing is based on the firmware passing the self test. The plan is to incorporate this in the CI pipeline.

When used interactively it is possible to pause the emulation whenever the user wants to, either manually via a Renode command or by connecting to GDB. This allows reading (as well as writing to) memory addresses. Want to read the DMA status at a specific line of code or mess with the system by randomly flipping bits? Doable in the emulator without risking your Crazyflie crashing.

The platform also contains our customized sensors, into which data can be loaded. The data can be loaded either manually or via a file, and is then sent to the STM32. The scope of this master thesis, however, has been firmware testing, not getting a simulated Crazyflie to fly in a virtual environment.

Emulation of hardware is not a trivial task; there are still improvements to be made, and there is the everlasting question of whether the emulation actually represents the real system.

One of the Crazyflie features that had to be simplified was the syslink connection to the nRF microcontroller, which in the emulation simply sends messages back to the STM32. The most exciting part about it currently is how the STM32 receives a signal that no expansion decks are connected via 1-wire when a scan is executed. Further improvements would be to emulate the radio and power management, and to support expansion decks, either via an external program or through Renode.

Of course there are other things to improve as well; there will always be someone who thinks of better ways to implement features, and only time will tell how this emulator is going to evolve.

Josefine & Max

When I started my Robotics Master back in 2012, I remember how frustrated I was at the time trying to set up my development environment in Windows for the C++ beginners course. My memory is a bit fuzzy of course, but I remember it took me days to get all the right drivers and g++ libraries, and to set up all the path environment variables in Eclipse at the time. Once I started working on Ubuntu for my Master thesis, forced to due to ROS, I was hooked and swore I would never go back to Windows for robotics development again… until now!

I have always used Windows on my personal machine on the side for gaming, and have a dual boot on the work computer for some occasional video editing, but especially since I had begun to learn game development for Fun Fridays, I started to be drawn to the Windows side of the dual boot more and more. But if I needed to try something out on the Crazyflie or debug a problem on the forum, I needed to restart the computer to switch operating systems, and that was starting to become a pain! Slowly but steadily I tried out several aspects of the Crazyflie ecosystem for development on Windows 10, and actually… it is not as traumatic as it was almost 10 years ago.

Python Library and Client

It went quite smoothly when I first tried to install Python on Windows again. Adding it to the PATH environment variable is still very important, but luckily the new install manager provides that as an option. Moreover, Visual Studio Code also provides the possibility to switch between Python environments so that you can try different versions of Python, but for now version 3.8 was plenty for me.

With the newest versions of the Windows install of Python, pip is already installed by default, but I found it was still necessary to upgrade pip by typing the following in either a Command Prompt or (my favorite) PowerShell:

python -m pip install --upgrade pip

After this, installing the cflib from the release was quite easy (‘pip install cflib’), but even installing it from source, with Git configured on Windows, was no problem at all and very similar to a native Ubuntu install.

Until recently the cfclient was a bit more challenging to install through pip, since the SDL2 Windows library had to be downloaded separately, so the only options would have been installing from source or using the .exe application release. The latter has not been updated since the 2020.09 release due to building errors. Luckily, with the latest release this has now been fixed, as an SDL2 Python library was found. Now the cfclient can be installed with a simple ‘pip install cfclient’.

Firmware Building with WSL

The firmware development was the next thing that I tried to get up and running, which turned out to be slightly trickier. About a year ago I tried to get Cygwin to work on Windows, but my bad memories of the past came back due to the clunkiness of it all, and I abandoned it again. Also, there are some reported issues with the out-of-tree build (aka the app layer). Some colleagues at Bitcraze had already mentioned the Windows Subsystem for Linux (WSL), but I never really looked at it until the need came to move back to Windows for development. And I must say, I wish I had tried it out a while ago.

With some repositories already downloaded on my Windows system with Git, I installed Ubuntu 20.04 on WSL, got the appropriate gcc libraries and accessed the crazyflie-firmware by ‘cd /mnt/c/my/repos‘. Building the firmware with ‘make all’ went pretty okay… although it took about a minute, which is a little long compared to the 20 seconds on a native Ubuntu install. The big problem was that I could not use Docker and the handy bitcraze toolbelt, because the WSL version was still 1. These functionalities are only available for version 2, so I went ahead and upgraded the WSL and linked it to Docker Desktop. But after upgrading, building the firmware from that same repository on the C:/ drive took insanely long (almost 10 minutes). So I switched the WSL Ubuntu 20.04 back to version 1, installed a second WSL (this time Ubuntu 18.04), updated that one to WSL2 and used it solely for Docker and toolbelt purposes. Not ideal quite yet… but luckily with Visual Studio Code it is very easy to switch between WSL instances.

But there is more! Recently I timed how long it took to build the crazyflie-firmware with ‘make all -j8’ from both WSL versions, either in a repository on the C:/ drive on Windows (accessible via /mnt/c from the WSL) or in a repository on the local WSL file system:

  • WSL 2 (ubuntu 18.04)
    • C:/ = 11m06s
    • WSL local = 00m19s
  • WSL 1 (ubuntu 20.04)
    • C:/ = 01m04s
    • WSL local = 00m59s

This was done on a Windows laptop with an i7-6700HQ and 32 GB RAM. With WSL2, the difference between building the firmware on the Windows file system and on the local WSL file system is huge! So the right way is to use WSL2 with the repo on the WSL file system, which gives a build time similar to a native install of Ubuntu.

Flashing the Crazyflie

The main issue still with WSL is that it does not allow USB access… So even if the Crazyradio driver is installed on Windows with Zadig, you will not see it if you type ‘lsusb‘ in WSL, for both version 1 and 2. So when I still had the repository on the C:/ drive and built the crazyflie-firmware from there, I could flash the Crazyflie through the Cfclient or Cflib (with cfloader) from PowerShell, but building it on the local file system, which is way faster with WSL2, requires first copying the cf2.bin file to the C:/ drive before doing that.

Another option, although still in an alpha phase, is to use the experimental Crazyradio server for WSL made by Arnaud, for which the user instructions can be found in an issue thread for now. The important thing is that the Zadig-installed driver has to be switched to WinUSB, and switched back again to libusb if you want to use the Cfclient on Windows. It still needs some work to improve the user experience, but it holds promise for better integration of WSL development for the Crazyflie.

To Conclude

I’m planning to soon reinstall the Windows part of the dual-boot laptop, but there are already some things that I will set up on my freshly installed Windows based on what I have experienced so far:

  • Keep using Python on Windows and install the Cfclient and Cflib by pip only.
  • Only use Ubuntu 20.04 as WSL2 and keep the crazyflie-firmware on the local WSL file system.
  • Use Visual Studio Code as the editor for both C:/ based and WSL based repos.
  • Use the Crazyradio server or copy the bin file to C:/ whenever I need to flash the Crazyflie with development firmware.

For any AI-deck development, I would still need to use the native Ubuntu partition or the Bitcraze VM, since there is no USB access or server yet for the programmer. However, if Windows were to support USB devices and a graphical interface for WSL, that would make all our Windows-based Crazyflie development dreams come true!