
This week we have a guest blog post from CollMot about their work to integrate the Crazyflie with Skybrush. We are happy that they have used the app API that we wrote about a couple of weeks ago, to implement the required firmware extensions!


Bitcraze and CollMot have joined forces to release an indoor drone show management solution using CollMot’s new Skybrush software and Crazyflie firmware and hardware.

CollMot is a drone show provider company from Hungary, founded by a team of researchers with a decade of expertise in drone swarm science. CollMot has been offering outdoor drone shows since 2015. Our new product, Skybrush, allows users to handle their own fleet-level drone missions, and specifically drone shows, as smoothly as possible. In joint development with the Bitcraze team we are very excited to extend Skybrush to support indoor drone shows and other fleet missions using the Crazyflie system.

The basic swarm-induced mindset with which we are targeting the integration process is scalability. This includes scalability of communication, error handling, reliability and logistics. Each of these aspects is detailed below through examples of the challenges we needed to solve together. We hope that besides gaining an application-specific extension of the Crazyflie for entertainment purposes, the base system has also picked up many new features during this great cooperative process. But let's dig into the tech details a bit more…

UWB in large spaces with many drones

We set up a relatively large area (10x20x6 m) with the Loco Positioning System, using 8 anchors in a more or less cubic arrangement. Using TWR mode for swarms was out of the question, as it requires each tag (drone) to communicate with the anchors individually, which does not scale with fleet size. Initial tests with the UWB system in TDoA2 mode were not very satisfying in terms of accuracy and reliability, but as we dug deeper into the details we identified the two main sources of inaccuracy:

  1. Two of the anchors had been positioned on the vertical flat faces of some stairs, with a solid material connection between them that caused many reflections, so the relative distance measurement between these two anchors was bi-stable. Once we realized that, we raised them a bit and attached them to columns with an air gap in between, which solved the reflection issue.
  2. The outlier filter of the TDoA2 mode was not optimal: a single bad packet generated consecutive outliers that opened up the filter too fast (see the sketch below). This issue has since been fixed in the Crazyflie firmware, after a long and painful investigation that ended in changing a single number from 2 to 3. This is how the reward system works in software development :)
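
To illustrate the kind of behaviour involved (this is not the actual firmware code), here is a minimal sketch of an outlier filter whose acceptance window only opens up after several consecutive rejections, so that a single bad packet cannot open it on its own. All names and constants are made up for the example.

#include <math.h>
#include <stdbool.h>

// Hypothetical consecutive-outlier filter, not the real Crazyflie TDoA filter
typedef struct {
  float window;              // current acceptance window for the error [m]
  int consecutiveOutliers;   // rejections seen in a row
} outlierFilter_t;

#define WINDOW_MIN 0.5f
#define WINDOW_MAX 3.0f
#define OUTLIERS_BEFORE_OPENING 3   // the kind of "2 vs 3" constant mentioned above

static bool outlierFilterValidate(outlierFilter_t* f, float error) {
  if (fabsf(error) < f->window) {
    // Accepted: reset the streak and slowly tighten the window again
    f->consecutiveOutliers = 0;
    f->window = fmaxf(WINDOW_MIN, f->window * 0.95f);
    return true;
  }

  // Rejected: only a run of outliers is allowed to open the window up
  f->consecutiveOutliers++;
  if (f->consecutiveOutliers >= OUTLIERS_BEFORE_OPENING) {
    f->window = fminf(WINDOW_MAX, f->window * 1.5f);
  }
  return false;
}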

In the end, UWB did its job quite nicely in both TDoA2 and TDoA3 modes, with a stable accuracy at the 10-20 cm level in such a large area, so we could move on to tuning the controller of the Crazyflie 2.1 a bit.

Crazyflies with Loco and LED decks

As we prepared the Crazyflie drones for shows, we had the Loco deck attached on top and the LED deck attached to the bottom of the drones, with an extra light bulb to spread the light smoothly. This setup resulted in a total weight of 37 g. The basic challenge with the controller was that this weight turned out to be too much for the Crazyflie 2.1. Hover sat at around 60-70% throttle on average, and furthermore there was a substantial difference in the throttle levels needed for individual motors (some in the 70-80% range). The tiny drones did a great job in horizontal motion, but as soon as they needed to go up or down with a vertical speed above roughly 0.5 m/s, one of their ESCs saturated, the system became unstable and it crashed. Interestingly enough, the crash always started with a wobble exactly along the X axis, which at first led us to think there was an issue with the positioning system instead of the ESCs. There are two possible solutions for this major problem:

  1. use less payload, i.e. lighter drones
  2. use stronger motors

Partially as a consequence of these experiments the Bitcraze team is now experimenting with new stronger models that will be optimized for show use cases as well. We can’t wait to test them!

Optimal controller for high speeds and accurate trajectory following

In general we are not yet very satisfied with any of the implemented controllers when using the UWB system for a show use case. This use case is special in that trajectory following needs to be as accurate as possible both in space and time, to avoid collisions and to produce nice synchronized formations, while the maximal speed, both horizontal and vertical, has to be as high as possible to increase the wow effect for the audience.

  • The PID controller has no cutoffs in its outputs, and with the large positioning errors that sometimes occur in the UWB system the controller outputs get way too large. If the gains are reduced, the motion becomes sluggish and the path is not followed accurately in time.
  • The Mellinger and INDI controllers work well only with positioning systems of much better accuracy.

So far we have stuck with the PID controller and added velocity feed forward terms, cutoffs in the output and some nonlinearity in case of large errors, which helped a bit, but the solution is not fully satisfying. Hopefully these modifications can be included in the main firmware soon. A perfect controller for UWB is still an open question, though, and any suggestions are welcome!
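
As a rough illustration of these modifications (not the actual patch), a per-axis position controller output with velocity feed forward, an output cutoff and a soft nonlinearity on large errors could look something like the sketch below. Gains and limits are placeholder values.

#include <math.h>

#define KP            2.0f   // position gain (placeholder)
#define KFF           1.0f   // velocity feed forward gain (placeholder)
#define ERROR_SOFT    1.0f   // errors above this are compressed [m]
#define OUTPUT_LIMIT  1.5f   // cutoff on the velocity command [m/s]

static float clampf(float v, float limit) {
  if (v > limit) return limit;
  if (v < -limit) return -limit;
  return v;
}

// Compress large errors so a UWB glitch does not produce a huge command
static float softenError(float e) {
  if (fabsf(e) <= ERROR_SOFT) return e;
  return copysignf(ERROR_SOFT + logf(1.0f + fabsf(e) - ERROR_SOFT), e);
}

float positionControllerAxis(float setpointPos, float measuredPos, float setpointVel) {
  float error = softenError(setpointPos - measuredPos);
  float output = KP * error + KFF * setpointVel;  // feed forward term
  return clampf(output, OUTPUT_LIMIT);            // cutoff in the output
}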

Show specific improvements in the firmware

We implemented code that uploads the show content to the drones smoothly, performs automatic preflight checks and displays status with the LED deck (to give visual feedback on many drones simultaneously), starts the show on time in synchrony with all swarm members, and handles the light program and trajectory execution of the show.

These modifications currently live in our own fork of the Crazyflie firmware and will soon be rewritten into a show app, thanks to this promising new possibility in the code framework. As soon as the Skybrush and Crazyflie systems are stable enough to be released together, we will publish the related app code that helps automate show logistics for every user.

Summary

To sum it up, we are very enthusiastic about the Crazyflie system and the great team behind the scenes with very friendly, open and cooperative support. The current stage of Crazyflie + Skybrush integration is as follows:

  • New hardware iterations based on the Bolt system that support longer and more dynamic flights are coming;
  • a very stable, UWB-compatible controller is still an open question but current possibilities are satisfying for initial tests with light flight dynamics;
  • a new Crazyflie app for the drone show case is basically ready to be launched together with the release of Skybrush in the near future.

If you are interested in Skybrush or have any questions related to this integration process, drop us an email or comment below.

Like many European countries, Sweden is now suffering the effects of the second COVID-19 wave. In line with current local restrictions we're limiting the number of people at our office, which for us means no external guests and only a few people at a time. For customers there won't be any difference, though, since we're keeping our regular shipping times (1-2 days after placing the order).

Stock levels

During the next couple of weeks we're going to be short on some of our products, specifically the Swarm bundle, the Loco Positioning deck and the AI deck. We're working hard to get them back into stock, and they are scheduled to arrive in the first weeks of December.

Lighthouse progress

Lately we have been working on finalizing the support for two Lighthouse base stations (V1 as well as V2) in the firmware and the python lib, which means that we are messing around with large portions of the lighthouse code. As some of you may have noticed, it also means that the code base is unstable from time to time. It will likely take a couple more weeks before things settle down, and it might be a good idea to avoid the latest commit if you are looking for a fully working and documented system. Hopefully we will have a good base for future releases and functionality when we are done.

The latest official stable release is 2020.09 and this is also what we recommend for now.

This week we have a guest blog post from Bárbara Barros Carlos, PhD candidate at DIAG Robotics Lab. Enjoy!

Quadrotors are characterized by their underactuation, nonlinearities, bounded inputs, and, in some cases, communication time-delays. The development of their maneuvering capability poses some challenges that cover dynamics modeling, state estimation, trajectory generation, and control. The latter, in particular, must be able to exploit the system’s nonlinear dynamics to generate complex motions. However, the presence of communication time-delay is known to highly degrade control performance.

A composite image showing our real-time NMPC with time-delay compensation being used on the Crazyflie during the tracking of a helical trajectory.

In our recent work, we present an efficient position control architecture based on real-time nonlinear model predictive control (NMPC) with time-delay compensation for quadrotors. Given the current measurement, the state is predicted over the delay time interval using an integrator and then passed to the NMPC, which takes into account the input bounds. We demonstrate the capabilities of our architecture using the Crazyflie 2.1 nano-quadrotor.

Time-Delay Compensation

In our aerial system, because of the radio communication latency, we have delays both in receiving measurements and sending control inputs. Likewise, since we intend to use NMPC, the potentially high computational burden associated with its solution becomes an element that must also be taken into account to minimize the error in the state prediction.

Crazyflie NMPC response without considering the time-delay compensation.

To tackle this issue, we use a state predictor based on the round-trip time (RTT) associated with the sum of network latencies as a delay compensator. The prediction is computed by forward iteration of the system dynamic model, starting from the current measured state and integrating over the RTT with an explicit Runge-Kutta 4th order (ERK4) integrator. Due to the independent nature of this operation, perfect delay compensation can be achieved by setting the integration step equal to the RTT. Thus, a fixed RTT, denoted τr, is assumed to be compensated.
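
A minimal sketch of such a predictor is shown below, with the quadrotor dynamics hidden behind a placeholder function f(x, u) and the whole RTT covered by a single ERK4 step, as described above. Dimensions and names are assumptions for the example.

#include <stddef.h>

#define NX 13   // state: p (3), q (4), v_b (3), omega (3)
#define NU 4    // inputs: propeller speeds

// Continuous-time quadrotor dynamics xdot = f(x, u); placeholder, implemented elsewhere
void f(const double x[NX], const double u[NU], double xdot[NX]);

// Predict the state over the round-trip time rtt with one classic RK4 step
void predictOverRtt(const double xMeasured[NX], const double uApplied[NU],
                    double rtt, double xPredicted[NX]) {
  double k1[NX], k2[NX], k3[NX], k4[NX], tmp[NX];
  const double h = rtt;  // integration step equal to the RTT

  f(xMeasured, uApplied, k1);
  for (size_t i = 0; i < NX; i++) tmp[i] = xMeasured[i] + 0.5 * h * k1[i];
  f(tmp, uApplied, k2);
  for (size_t i = 0; i < NX; i++) tmp[i] = xMeasured[i] + 0.5 * h * k2[i];
  f(tmp, uApplied, k3);
  for (size_t i = 0; i < NX; i++) tmp[i] = xMeasured[i] + h * k3[i];
  f(tmp, uApplied, k4);

  for (size_t i = 0; i < NX; i++) {
    xPredicted[i] = xMeasured[i] + (h / 6.0) * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
  }
}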

Nonlinear Model Predictive Control

The NMPC controller is defined as the following constrained nonlinear program (NLP):
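
A generic sketch of a tracking NLP of this form (with hypothetical weights Q, R and P, stacked state x = (p, q, vb, ω), inputs u = (Ω1, …, Ω4) and the delay-compensated state prediction as initial condition) is:

\begin{aligned}
\min_{x(\cdot),\,u(\cdot)} \quad & \int_{0}^{T} \Big( \lVert x(t)-x_{\mathrm{ref}}(t) \rVert_{Q}^{2} + \lVert u(t)-u_{\mathrm{ref}}(t) \rVert_{R}^{2} \Big)\,dt + \lVert x(T)-x_{\mathrm{ref}}(T) \rVert_{P}^{2} \\
\text{s.t.} \quad & x(0) = \hat{x}_{0}, \\
& \dot{x}(t) = f\big(x(t), u(t)\big), \quad t \in [0, T], \\
& 0 \le \Omega_{i}(t) \le \Omega_{\max}, \quad i = 1, \dots, 4 .
\end{aligned}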

Therein, p denotes the inertial position, q the attitude in unit quaternions, vb the linear velocity expressed in the body frame, ω the angular rate, and Ωi the rotational speed of the ith propeller. The NLP is tailored to the Crazyflie 2.1 and is implemented using the high-performance software package acados, which solves optimal control problems and implements a real-time iteration (RTI) variant of a sequential quadratic programming (SQP) scheme with Gauss-Newton Hessian approximation. The quadratic subproblems (QP) arising in the SQP scheme are solved with HPIPM, an interior-point method solver, built on top of the linear algebra library BLASFEO, finely tuned for multiple CPU architectures. We use a recently proposed Hessian condensing algorithm particularly suitable for partial condensing to further speed-up solution times.

When designing an NMPC, the choice of horizon length has profound implications for computational burden and tracking performance. For the former, the longer the horizon, the higher the computational burden. As for the latter, a long prediction horizon in principle tends to improve the overall performance of the controller. To select this parameter and achieve a trade-off between performance and computational burden, we implemented the NLP in acados considering five horizon lengths (N = {10, 20, 30, 40, 50}), input bounds on the rotational speed of the propellers (lower bound 0, upper bound 22 krpm), and an ERK4 discretization of the dynamics. Likewise, we compare the condensing approach with the state-of-the-art solver qpOASES against the partial condensing approach with HPIPM over the same set of horizons.

Left: closed-loop trajectories comparing different horizon lengths. Right: average runtimes per SQP-iteration for different horizon lengths considering two distinct QP solvers.

As qpOASES is a solver based on an active-set method, it requires condensing to be computationally efficient. In line with the observation in the literature that condensing is effective for short to medium horizon lengths, we note that qpOASES is competitive for horizons up to approximately N = 30 when compared to HPIPM. The break-even point moves higher up the scale for longer horizons, mainly due to efficient software implementations that cover: (a) a Hessian condensing procedure tailored for partial condensing, (b) a structure-exploiting QP solver based on a novel Riccati recursion, and (c) a hardware-tailored linear algebra library. We therefore chose horizon N = 50, as it offers a reasonable trade-off between deviation from the reference trajectory and computational burden.

Onboard Controller Considerations

How the onboard controllers (PIDs) use the setpoints of the offboard controller (NMPC) in our architecture is not entirely conventional and therefore deserves some consideration. First, the reference signals that the PID loops track do not fully correspond to the control inputs considered in the NMPC formulation. Instead, part of the state solution is used in conjunction with the control inputs to reconstruct the actual input commands passed as setpoints to the Crazyflie. Second, one part of the reconstructed input commands is sent as a setpoint to the outer loop (attitude controller) and the other part is sent to the inner loop (rate controller). Furthermore, as the NMPC model does not include the PID loops, it does not truly represent the real system, even in the case of perfect knowledge of the physical parameters. As a consequence, the optimal feedback policy is distorted in the real system by the PIDs.

Closed-loop Position Control Performance

Our control architecture hinges upon a ROS Kinetic framework and runs at 66.67 Hz. The Crazy RealTime Protocol (CRTP) is used in combination with our crazyflie_nmpc stack to stream, at runtime, custom packets containing the data required to reconstruct the part of the measurement vector that depends on the IMU data. Likewise, the cortex_ros bridge streams the 3D global position of the Crazyflie, which is then passed through a second-order, discrete-time Butterworth filter to estimate the linear velocities.
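
As a sketch of that last step (the coefficients are placeholders that depend on the chosen cutoff and the 66.67 Hz sample rate), the velocity can be obtained by finite-differencing the position and smoothing it with a biquad (second-order) Butterworth low-pass:

// Second-order (biquad) low-pass in direct form I; b[] and a[] must be
// computed for the desired Butterworth cutoff and sample rate.
typedef struct {
  float b[3];   // numerator coefficients
  float a[3];   // denominator coefficients, a[0] assumed to be 1
  float x[2];   // previous inputs
  float y[2];   // previous outputs
} biquad_t;

static float biquadApply(biquad_t* f, float in) {
  float out = f->b[0] * in + f->b[1] * f->x[0] + f->b[2] * f->x[1]
            - f->a[1] * f->y[0] - f->a[2] * f->y[1];
  f->x[1] = f->x[0];  f->x[0] = in;
  f->y[1] = f->y[0];  f->y[0] = out;
  return out;
}

// Velocity estimate: finite difference of position, then low-pass filter it
float estimateVelocity(biquad_t* filter, float pos, float prevPos, float dt) {
  float rawVel = (pos - prevPos) / dt;
  return biquadApply(filter, rawVel);
}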

To validate the effectiveness of our control architecture, we ran two experiments. For each experiment, we generate a reference trajectory on a base computer and pass it to our NMPC ROS node every τs = 15 ms. When generating the trajectories, we explicitly address the feasibility issue in the design process, creating two references: one feasible and one infeasible. In doing so, we show through experiments that the performance of the proposed NMPC is not degraded even when the nano-quadrotor attempts to track an infeasible trajectory, which could, in principle, make it deviate significantly or even crash.

Overall, we observe that the most challenging setpoints to track are the positions where, given a change in the motion, the Crazyflie has to pitch/roll quickly in the opposite direction. These are the setpoints where the distortion has the greatest influence on the system, causing small overshoots in position. The average solution time of the tailored RTI scheme using acados, obtained on an Intel Core i5-8250U @ 3.4 GHz running Ubuntu, is about 7.4 ms. This result shows the efficiency of the proposed scheme.

Outlook

In this work, we presented the design and implementation of a novel position controller based on nonlinear model predictive control for quadrotors. The control architecture incorporates a predictor as a delay compensator, granting a delay-free model in the NMPC formulation, which in turn enforces bounds on the actuators. To validate our architecture, we implemented it on the Crazyflie 2.1 nano-quadrotor. The experiments demonstrate that the efficient RTI-based scheme, exploiting the full nonlinear model, achieves high-accuracy tracking performance and is fast enough for real-time deployment.

Related Links

This research project was developed by:

Bárbara Barros Carlos1, Tommaso Sartor2, Andrea Zanelli3, and Gianluca Frison3, under the supervision of professors Wolfram Burgard4, Moritz Diehl3 and Giuseppe Oriolo1.

1 B. B. Carlos and G. Oriolo are with the DIAG Robotics Lab, Sapienza University of Rome, Italy.
2 T. Sartor is with the MECO Group, KU Leuven, Belgium.
3 A. Zanelli, G. Frison, and M. Diehl are with the syscop Lab, University of Freiburg, Germany.
4 W. Burgard is with the AIS Lab, University of Freiburg, Germany.

Last Wednesday we had our first live tutorial event, explaining the Spiral Swarm demo that we usually show at conferences. About 60 people signed up and roughly 40-50 were able to join from all parts of the world. There were even several Crazyflie users from Asia who stayed up late especially for this, so we definitely appreciated the dedication!

For those who missed it, you can find the recordings and slides on this event page.

The Tutorial

During the first hour we mostly talked about the Lighthouse positioning system, focusing in particular on the base station V2. In real time, we had hands-on sessions where we showed how we set up the system, how to retrieve the calibration data and how to estimate the geometry. The hour ended with a Crazyflie flying in the lighthouse system itself.

After the break, we focused on how to achieve more autonomy in the swarm, talking about the limitations of communication, the high level commander and the app layer. This was also shown hands-on, with multiple flying Crazyflies and the fully automatic demo at the end. We were able to keep the demo running for another 30 minutes afterwards while we were resting up with a drink :)

We used Discord and Mozilla Hubs simultaneously to stream the tutorial. Discord worked out nicely since we could have one channel for the stream and one channel for the chat, which one of us kept an eye on continuously. Mozilla Hubs was a nice add-on, however it definitely had some hiccups and streaming quality issues, which is not ideal for following a tutorial. Also, we heard from headset-using participants that being in Virtual Reality for 2 hours is very exhausting.

What next?

We really liked doing the tutorial and speaking one-on-one with our users, so we are likely to organize one again. We are not sure at what frequency yet, but of course we will announce it first. We already have some requests for topics, so we will look into those first. Next time it will probably be a shorter tutorial on Discord only. Mozilla Hubs might still be used, but as a virtual gallery where we put 3D visualizations of what we are working on (like how the base station sweeps work, for instance), so that people can get a better understanding. If you have any requests for topics, please leave a comment below.

We will also try out using our new Discord server as a digital ‘watering hole’ for our users. Here everybody will have the opportunity to chat with each other, to share awesome projects and maybe to help each other out with certain questions. However, we will not be on Discord ourselves all the time and still advise using forum.bitcraze.io as the main place to ask questions and to seek support.

Click here to join our Discord server

As mentioned in this blog post, we added the possibility to write apps for the Crazyflie firmware a while ago. Now we have added more functions to the firmware to make it possible to use apps for an even wider range of tasks.

The overall idea of the app API is to mirror the functionality of the python lib. This enables a user to prototype an application in python with quick iterations; when everything is working, the app can easily be ported to C to run on the Crazyflie instead. The functions in the firmware are not identical to the python flavour, but we have tried to keep them as close as possible to make the translation simple.
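
As a reminder of what the starting point looks like, a minimal app only implements appMain(), which the firmware starts as its own task when the app layer is enabled in the build (a sketch based on the hello-world app example):

#include "app.h"

#include "FreeRTOS.h"
#include "task.h"

#define DEBUG_MODULE "MYAPP"
#include "debug.h"

// appMain() runs in its own FreeRTOS task and must never return
void appMain() {
  DEBUG_PRINT("My app is running!\n");

  while (1) {
    vTaskDelay(M2T(1000));   // the app's work goes here, once per second
  }
}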

An app is also a much better way to contain custom functionality, as the underlying firmware can be updated without merging any code. The intention is that the app API will be stable over time, so that apps that work with one version of the firmware should also work with the next.

Improvements

We used our demo from IROS and ICRA (among others) with a fairly autonomous swarm as a driver for the development. The demo used to be implemented in a branch of the firmware with various modifications of the code base to make it possible to do what we wanted. The goal of the exercise was to convert the demo into an app and add the required API to the firmware to enable the app to do its thing. The new app is available here.

The main areas where we have extended the API are:

Log and parameters framework

The log framework is the preferred way for an app to read data from the firmware, and this has been working from the start. Similarly, the parameter framework is the way to set parameters. Even though this worked, it broke a basic assumption in the setup with the client: that only the client can change a parameter. Changing a parameter from an app could lead to the client and the Crazyflie having different views of the state in the Crazyflie, but this has now been fixed and the client is updated when needed.
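
A sketch of how an app can read a log variable and set a parameter with this API (the group/name strings are examples; check the log and parameter TOCs of your firmware version):

#include "log.h"
#include "param.h"

void logAndParamExample() {
  // Read the estimated height from the log framework
  logVarId_t idZ = logGetVarId("stateEstimate", "z");
  float z = logGetFloat(idZ);
  (void)z;

  // Set a parameter; the client is now notified when an app changes it
  paramVarId_t idLedBitmask = paramGetVarId("led", "bitmask");
  paramSetInt(idLedBitmask, 0xFF);
}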

High level commander

The high level commander was not accessible from an app earlier; functions mirroring the python lib have now been added to make it easy to handle autonomous flight.
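
A sketch of a simple autonomous sequence using the high level commander from an app; the function names mirror the python lib, but verify the exact signatures in crtp_commander_high_level.h for your firmware version:

#include "crtp_commander_high_level.h"

#include "FreeRTOS.h"
#include "task.h"

void takeOffGoToAndLand() {
  crtpCommanderHighLevelTakeoff(0.5f, 2.0f);                        // take off to 0.5 m in 2 s
  vTaskDelay(M2T(3000));

  crtpCommanderHighLevelGoTo(1.0f, 0.0f, 0.5f, 0.0f, 3.0f, false);  // absolute go-to in 3 s
  vTaskDelay(M2T(4000));

  crtpCommanderHighLevelLand(0.0f, 2.0f);                           // land over 2 s
}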

Custom LED sequences

It is now possible to register custom LED sequences to control the four LEDs on the Crazyflie to signal events or state.
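
A sketch of what registering and running a custom sequence might look like; the exact types, fields and LED names should be checked in ledseq.h, so treat the details below as assumptions:

#include <stdbool.h>
#include "ledseq.h"

// Blink the blue LED shortly, pause, and loop
static ledseqStep_t myBlinkDef[] = {
  { true,  LEDSEQ_WAITMS(50)  },
  { false, LEDSEQ_WAITMS(450) },
  { 0,     LEDSEQ_LOOP        },
};

static ledseqContext_t myBlink = {
  .sequence = myBlinkDef,
  .led = LED_BLUE_L,
};

void statusLedInit() {
  ledseqRegisterSequence(&myBlink);   // register the sequence with the LED system
}

void statusLedShow(bool active) {
  if (active) {
    ledseqRun(&myBlink);              // start blinking
  } else {
    ledseqStop(&myBlink);             // stop it again
  }
}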

Lighthouse functionality

Functions for setting base station geometry data as well as calibration data have been added. These functions are also very useful for anyone using the lighthouse system, as this can now be done from an app instead of by modifying lighthouse_position_est.c.
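
A sketch of setting the geometry of one base station from an app; the type layout and function name below are our best guess at the API, so check lighthouse_position_est.h before relying on them (calibration data can be set in a similar way):

#include "lighthouse_position_est.h"

void setBaseStation0Geometry() {
  baseStationGeometry_t geo = {
    .origin = { -1.95f, 0.54f, 2.37f },   // base station position [m], example values
    .mat = {                              // base station rotation matrix, example values
      {  0.79f, -0.03f,  0.61f },
      {  0.00f,  0.99f,  0.05f },
      { -0.61f, -0.04f,  0.79f },
    },
  };

  lighthousePositionSetGeometryData(0, &geo);   // base station index 0
}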

Remaining work

We have taken a step forward with these changes, but there is more to be done! The two main areas are support for custom CRTP packets and memory mapping through the memory sub system. There might be more; let us know if there is something you are missing. The work will continue and there might even be some documentation at some point :-)

Tutorial

One reason for doing this API work now was to prepare for the online tutorial about the lighthouse 2 positioning system, swarm autonomy and the demo app that we will run this Wednesday, don't miss out! You can read more about the event here.

Back in 2014 a friend of ours dropped by our office to chat a bit about how things were going for Bitcraze. During the conversation we got a few questions that we should have been able to answer, but we couldn’t. Things like “How many Crazyflies have you sold?” and “To which countries do you mostly sell?”. We realized that we need some way of keeping track of things like this. Since I was mostly handling economics and admin, and also like developing things, this became the start of our internal “do a bit of everything” system.

At this point in time we were only selling our products through Seeedstudio, so the system was mostly used to keep track of the stock levels there. But as things at Bitcraze started to change, so did the system. Later that year our Crazyflie 2.0 was heading into production and we wanted a better way to keep track of production, so we modified the production scripts to upload the production test results to our servers. Now we had production stats and could trace returns from the field through production using serial numbers.

Fast forward to 2016, when we decided to start selling our products in our own E-store. We found a 3PL partner in Hong Kong to do our packaging and shipping. It was a big step for us and something we really liked, since we got direct contact with our customers. But the amount of work quickly started to increase. Our main issue was that some customers were not keeping track of their shipments; those shipments would end up sitting in customs awaiting answers from the customers and would eventually be returned to us, which was a big hassle. So tracking of orders was added to our internal system, which would now keep track of the shipment progress and warn us if there were any issues.

Apart from the E-store we also had orders coming through our invoicing software. For us to get a unified picture of what was happening, our internal system started to merge information from the different systems: our E-store, the invoicing software and Seeedstudio.

At this point the system had grown so much that the architecture was getting out of hand, so we re-wrote the system from scratch to be able to handle changing requirements. In hindsight this was a good decision, since the next big change was around the corner. In 2018 everything was running smoothly, but our 3PL partner suddenly moved the warehouse from Hong Kong to Shenzhen (blog post), something that was supposed to take a week and go smoothly. Unfortunately the move was anything but smooth, and after battling for months trying to get the operation back up again, we finally pulled the brakes and started to re-route newly produced units to our office in Sweden instead.

Now we had an office full of products and an E-store where things were being sold, but we didn't have any solution for actually taking the orders and shipping them. At first we booked everything manually, but as the order count increased this quickly became a problem. We looked at a few solutions but didn't find anything that integrated well with all our systems. So our internal system was once again expanded, this time to handle the warehouse part, print packing lists, book shipments and print labels (blog post).

As time passes (and more challenges pop up) we've been adding more and more functionality and have been able to adapt quickly to what's been happening. While developing the system over the years, my overall goal has been to automate as much as possible of the admin/logistics in order to keep the workload the same even as sales increase. So far this has been successful!

Today we use this system for a wide range of administration, logistics and production tasks. Below is a list of some of the things it handles:

  • E-store and sales
    • Live shipping quotes during checkout
    • Requesting order quotes from the basket
    • All of the accounting from the E-store
    • GMail plugin for easily accessing information
  • Logistics
    • Warehouse functionality (stock, incoming products etc)
    • Printing picking/packaging lists
    • Warnings for shipments that are stuck or lost
    • Booking and printing of shipping labels
  • Manufacturing
    • Keeping track of all tested units with all data from the tests
    • Production planning and warning if things will run out of stock
  • Analytics
    • Various harmonized business analytics from our different sales systems
    • Harmonized data that is used for various analysis
  • Reminders about various things like unpaid invoices

So, have we made back the time we spent building the system? Probably not, but we will. There are a lot of good solutions for similar systems out there, but the decision to make our own came from not finding one where all our pieces fit. To minimize the work, I've tried to use various external services to speed up development. In the end, the list of systems that are used together is quite long:

  • Our production test software
  • Shopify (E-store SaaS)
  • Fortnox (Bookkeeping, invoicing and quoting SaaS)
  • Easypost (shipping API)
  • Riksbanken (for exchange rates)
  • Our printing station software for label/document printers
  • Slack for notifying us about various issues
  • Sendgrid for sending emails
  • Mondido (our payment gateway)
  • Google docs
  • Seeedstudio

Li-Ion batteries have long packed more energy per gram than Li-Po batteries. The problem for UAV applications has been that Li-Ion can't deliver enough current, but that is starting to change. There are now cells in the 18650 series that are supposed to be able to deliver 30-35 A continuously, at least according to the specs. Therefore we thought it was time to do some testing and decided to build a 1-cell Li-Ion drone using the Crazyflie Bolt as the base.

Since an 18650 battery is 18 mm in diameter and 65 mm long, its size would affect the design, but we still wanted to keep the drone small and lightweight. The battery is below 20 mm wide, which means we can run the deck connectors around it, which is nice. We chose to use our 3D printer to build the frame and to use off-the-shelf ESCs, motors and props. After a couple of hours of research we selected 3″ propellers, 1202.5 11500 Kv motors and tiny 1-2S single ESCs for our first prototype.

Parts list:

  • 1 x Custom designed 130mm 3D printed frame
  • 1 x Crazyflie Bolt flight controller
  • 4 x Eachine 3020 propeller (2xCW + 2xCCW)
  • 4 x Flywoo ROBO RB 1202.5 11500 Kv motors
  • 4 x Flash hobby 7A 1-2S ESC
  • 1 x Li-Ion Sony 18650 VTC6 3000mAh 30A
  • Screws, anti vib. spacers, zipties, etc.

The custom designed frame was developed in iterations, and can still be much improved, but at this stage it is small, lightweight and rigid enough. We wanted the battery to be as central as possible while keeping it all compact.

Prototype frame designed in FreeCAD.

Assembly and tuning

The 3D printed frame came out quite well and weighed in at 13 g. After soldering the Bolt connectors to the ESCs, attaching motors and props, adjusting the battery cable and soldering an XT30 connector to the Li-Ion battery, the whole build weighed ~103 g, of which the battery accounts for 45 g. It feels quite heavy compared to the Crazyflie 2.1, and we had a lot of respect for it when we test flew it the first time. Before taking off we reduced the pitch and roll PID gains to roughly half, and luckily it flew without problems and quite nicely. It does sound a lot, but that is kind of expected. After increasing the gains a bit we felt quite pleased with:

#define PID_ROLL_RATE_KP  70.0
#define PID_ROLL_RATE_KI  200.0
#define PID_ROLL_RATE_KD  2
#define PID_ROLL_RATE_INTEGRATION_LIMIT    33.3

#define PID_PITCH_RATE_KP  70.0
#define PID_PITCH_RATE_KI  200.0
#define PID_PITCH_RATE_KD  2
#define PID_PITCH_RATE_INTEGRATION_LIMIT   33.3

#define PID_ROLL_KP  7.0
#define PID_ROLL_KI  3.0
#define PID_ROLL_KD  0.0
#define PID_ROLL_INTEGRATION_LIMIT    20.0

#define PID_PITCH_KP  7.0
#define PID_PITCH_KI  3.0
#define PID_PITCH_KD  0.0
#define PID_PITCH_INTEGRATION_LIMIT   20.0

This would be good enough for what we really wanted to try: the endurance with a Li-Ion battery. A quick measurement of the current consumption at hover gave 5.8 A, from which we estimated up to ~30 min of flight time on a 3000 mAh Li-Ion battery, wow! But first, a real test…
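
The back-of-the-envelope estimate behind that number is simply capacity divided by hover current,

t \approx \frac{3000\ \mathrm{mAh}}{5.8\ \mathrm{A}} = \frac{3.0\ \mathrm{Ah}}{5.8\ \mathrm{A}} \approx 0.52\ \mathrm{h} \approx 31\ \mathrm{min},

which assumes the full rated capacity is usable at this discharge current.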

Hover test

For the hover test we used Lighthouse 2, which is starting to work quite well. We had to change the weight and thrust constants in estimator_kalman.c for the autonomous flight to work:

#define CRAZYFLIE_WEIGHT_grams (100.0f)

//thrust is thrust mapped for 65536 <==> 250 GRAMS!
#define CONTROL_TO_ACC (GRAVITY_MAGNITUDE*250.0f/CRAZYFLIE_WEIGHT_grams/65536.0f)

After doing that, we created a hover script that hovers at 0.5 m height and lands when the voltage reaches 3.0 V. We leaned back with excitement, behind a safety net, and started the script… after 19 min it landed… good, but not what we hoped for, and quite far from the calculated 30 min. Maybe Li-Ion isn't that good when it needs to provide more current…? A quick internet search showed that Li-Ion can be run all the way down to 2.5 V, but we have to stop at 3.0 V because of the electronics and the loss of thrust, so we are missing quite a bit of energy… Further investigation is needed.
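
For reference, a minimal sketch of that hover-and-land logic, written here as a firmware app using the app API (the actual test was run from a script, so this is an illustration rather than the code we used):

#include "app.h"
#include "FreeRTOS.h"
#include "task.h"
#include "log.h"
#include "crtp_commander_high_level.h"

void appMain() {
  vTaskDelay(M2T(3000));                         // give the estimator time to settle
  logVarId_t idVbat = logGetVarId("pm", "vbat"); // battery voltage log variable

  crtpCommanderHighLevelTakeoff(0.5f, 2.0f);     // hover at 0.5 m

  while (logGetFloat(idVbat) > 3.0f) {           // wait for the 3.0 V cutoff
    vTaskDelay(M2T(500));
  }

  crtpCommanderHighLevelLand(0.0f, 3.0f);        // land before losing thrust

  while (1) {
    vTaskDelay(M2T(1000));                       // appMain must not return
  }
}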

Lighthouse 2 flight test

As a final test we launched some flight scripts to fly in a square and in a spiral, to get a feel for the Lighthouse 2 + Bolt + PID controller combination. We think it turned out quite nicely, and this with almost no optimization effort:

Summary

Li-Ion felt like it could be a game changer when it comes to flight time, but it was not as promising as we had hoped. That doesn't mean we can't get there, though; more research and development is required.

We’re happy to announce that we have taken an important step forward in the development of the Lighthouse positioning system: we have improved the calibration compensation. The changes improve the correctness of the coordinate system, especially for Lighthouse V2 base stations.

As mentioned in this blog post, one of the remaining areas to solve was the handling of calibration data, and this is what we have addressed lately. In the manufacturing process, mechanical elements are mounted within some tolerances, but since the precision of the system is so good, even very fine tolerances make a big difference in the end result. Each base station is measured in the factory and the calibration data describing these imperfections is stored in the base station. The calibration data is transmitted in the light sweeps to enable a receiver to correct for the errors in the measured angles.

As with everything else related to lighthouse, there is no official information on how to interpret the calibration data, so we (and the community) had to make educated guesses.

Lighthouse 1

The compensation model for Lighthouse 1 has been known for quite a long time, see the Astrobee project by NASA and Libsurvive. The most important parameter is the phase, and until now this is the only part of the calibration data that we have used in the firmware. In the new implementation we use all the parameters.

The parameters of the lighthouse 1 calibration model are phase, tilt, gib mag, gib phase and curve.

Lighthouse 2

The compensation data for Lighthouse 2 is similar to Lighthouse 1, but there are two new parameters, ogee mag and ogee phase. It also seems that some parameters sharing names between Lighthouse 1 and 2 have different meanings, for instance curve.

Libsurvive has implemented compensation for Lighthouse 2, but we have unfortunately not managed to use their work with good results. Instead we have tried to figure out what the model might look like and match it to measurements. We have managed to get good results for phase, tilt, gib mag and gib phase, while we don't yet know how to use curve, ogee mag and ogee phase. The solution seems to be pretty good with this subset of the parameters, and we have decided to leave it at that for now.

Use of calibration data

So far we have used the calibration data by applying it to the measured angles to get (more) correct sweep angles, which are then fed into the position estimation algorithms. The problem is that the compensation model is designed the other way around, i.e. it goes from correct angles to measured angles, so an iterative approach is required to apply it to the measured angles. A better way (most likely by design) is to apply it in the kalman estimator instead, where it simply becomes part of the measurement model.
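
A sketch of that iterative approach: since the calibration model maps an ideal angle to a measured angle, the ideal angle is recovered from a measurement by a simple fixed-point iteration (compensate() is a placeholder for the actual calibration model).

// Placeholder for the calibration model: ideal angle -> predicted measured angle
float compensate(float idealAngle);

float correctedFromMeasured(float measuredAngle) {
  float corrected = measuredAngle;   // initial guess
  for (int i = 0; i < 5; i++) {
    float predictedMeasurement = compensate(corrected);
    corrected += measuredAngle - predictedMeasurement;   // fixed-point update
  }
  return corrected;
}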

We currently still calculate the corrected angles and expose them as log data, but they are not required for the standard functionality of the lighthouse system. We may make it possible to turn this on/off via a parameter in the future to save some CPU power.

Functional improvements

So what kind of improvements will the calibration add?

The first improvement is in the base station geometry estimation. With more correct angles, the estimated base station position and orientation will be better. This is important for getting a good estimate of the Crazyflie position, since poor geometry data gives the position estimator conflicting data.

Secondly, more correct angles will straighten the coordinate system. With angular distortion the position estimator cannot estimate the correct position, and the coordinate system becomes warped, bent or stretched. The improvement can be seen when flying parallel to the floor at constant height, for instance.

Thirdly, the stability will hopefully be improved. When the angles from two base stations match better, the estimated position changes less when one base station is occluded, which generally makes life easier for the position estimator. We will also take a look at the outlier filter to see if it can be improved.

Remaining problems

The calibration data is transmitted as part of the sweeping light planes at a low bitrate. For Lighthouse 1 the decoding process works well and all calibration data is usually received within 20-30 seconds. For Lighthouse 2 it does not work as well in our current implementation, and it takes (much) longer before all data has been received correctly from both base stations.

It is possible to get the calibration data via the USB port on Lighthouse 2, and we are considering storing the calibration data in the Crazyflie somehow instead. This will be even more important when we support larger systems (2+ base stations), where not all base stations are within range at startup.

During the summer we discussed at the office what would be a good substitute for no longer being able to go to conferences or fairs (see this blog post). We bounced around a few ideas, ranging from organizing an online competition to a seminar. Although we were initially quite enthusiastic about organizing the competition, the user questionnaire from the previous blog post showed us that many of you are more interested in online tutorials. Based on that we actually started to make some more step-by-step guides, but we definitely agree that it is not the same as meeting each other face-to-face!

So now we are planning to organize one for real this time! Our first online live tutorial will be on:

Wednesday 4th of November, 18:00 (CET, Malmö Sweden)

Register for the first session here to indicate your interest and to receive up-to-date information. There are of course no costs involved!

First topic: Spiraling Swarm Demo (Live!)

For the last couple of years we have been showing our demo at many robotics conferences and fairs, such as ICRA, IMAV and IROS. Since we do not have the opportunity to do that anymore (at least for the foreseeable future), we thought that a suitable first topic for the online tutorial would be the Spiraling Swarm demo! We will go through the different elements of the demo, including the implementation details on the Crazyflie and the Lighthouse positioning system. We hope to explain all of it in about 20-30 minutes, and that this will enable you to set the demo up yourself if you want.

We considered just doing a prerecorded tutorial, however we really like talking with our users about their needs and research topics. That is why we think it is important to do it live, where we can answer your questions on the go or after the tutorial. This also means that we will be demonstrating the demo live as well! Afterwards we will hang around for some social interaction and a friendly chat :)

Mozilla Hubs and Discord

There are so many options for how to host this event, as there are a gazillion alternatives for video conferencing. Currently we are looking at Mozilla Hubs, which fits nicely with our interest in the lighthouse positioning system and the HTC Vive base stations. The nice aspect of Hubs is that you don't need a fancy headset to join, since it is possible to join via your browser or your phone. I (Kimberly) joined a Virtual Reality seminar at the beginning of the pandemic, organized by Roland Meertens of pinchofintelligence.com, and it was definitely a very interesting and fun experience. When giving a presentation, it really felt like people were paying attention and were engaged. So we recently recreated our own flight lab in VR (using Hubs' environment creator Spoke) and tested it out ourselves. This way you will be able to see our workplace as well!

Of course, we can imagine not everybody is keen to go full VR. That is why we will combine the online tutorial with Discord, where we will set up a video channel to stream the live demo and tutorial. It will also be possible to send messages that are visible in both the VR space and the Discord chat channel using Hubs' Discord bot. You can choose how to follow the tutorial, fully in VR or first on Discord and afterwards socializing in VR; that is totally up to you.

We still need to figure out the specifics, but if you register with your email we will send all the necessary information for the first session to you directly.

IOT conference Malmö

Now for something else: tomorrow, Tuesday the 5th of October, we will also present at the IOT conference 2020 in Malmö. It is free for participants and it is still possible to register! Come and join if you cannot wait until the 4th of November to see us.

For a long time, issue #270 has been bugging us. It caused the µSD-card logging to fail when used in combination with either the Flow or Loco deck, or actually any deck that uses the deck SPI bus. Several attempts have been made to fix this issue over time, and recently we decided to really dig into it. There was a workaround that moved the µSD-card to a different SPI bus, but it was tedious and required patching the deck. So it was time to fix this for good, or at least to know why it doesn't work. An SPI bus is designed to be shared by multiple devices, so it should be possible… Timing problems are still tricky, but that is another story.

The problem

The SPI driver protects the bus with a mutex to prevent several clients from accessing it at the same time. After some digging we found that the FatFs integration layer was buggy and that the SPI bus handling wasn't well done. After comparing it to some other open implementations, we concluded that it needed to be rewritten.

The solution

After rewriting part of the integration layer to have a clear path for when the SPI bus is taken and when it is released, we immediately got some good results: µSD-card logging with the Flow and Loco decks worked, hooray! There is of course a limit to this; as mentioned earlier, the bus is a shared resource, and if it is too congested things will slow down or stop working. This is currently the case when the LPS is put in TWR mode. TWR is very chatty and causes around 15k transactions per second on the SPI bus, and since it has higher priority than the µSD-card logging, the µSD-card write task starves, causing the logging to fail.
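
The pattern in the rewritten integration layer is roughly the one sketched below: take the shared bus for exactly one transaction and release it again, so other deck drivers can interleave their traffic. The spi* calls follow the deck SPI API; the µSD specifics (chip select, SD commands) are simplified placeholders.

#include <stdint.h>
#include "deck_spi.h"   // deck SPI API (check the exact header name in the firmware)

static uint8_t dummyTxBuffer[512];

static void csLow(void)  { /* placeholder: pull the µSD chip select low */ }
static void csHigh(void) { /* placeholder: release the µSD chip select */ }
static void sendReadCommand(uint32_t sector) { (void)sector; /* placeholder: SD read command */ }

static void readSectorFromCard(uint8_t* buffer, uint32_t sector) {
  spiBeginTransaction(SPI_BAUDRATE_21MHZ);   // takes the bus mutex and configures the speed
  csLow();

  sendReadCommand(sector);
  spiExchange(512, dummyTxBuffer, buffer);   // clock out one 512-byte block

  csHigh();
  spiEndTransaction();                       // releases the bus for other decks
}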

µSD and LPS SPI bus captured with a logic analyzer, over 50ms
µSD and LPS SPI bus captured with a logic analyzer, over 6ms

So if you stay away from LPS in TWR mode, µSD-card logging should now work fine. I'm pretty sure there is a workaround for the TWR mode as well; a first guess is that you would need to slow down the TWR update rate, which is currently at its maximum.

Happy logging!