Category: Crazyflie

Over time, the scope of the Crazyflie has changed a lot. At first, the Crazyflie was “just flying”: the only possible control was an attitude (roll, pitch, yaw) and thrust setpoint sent from the radio. Soon after, autonomous flight was investigated, first by implementing a position controller outside the Crazyflie and then, over time, by moving position control on-board and sending position or trajectory setpoints to the Crazyflie. Now that the Crazyflie has good position control, the next step is to implement autonomous behavior, and until now the most practical way has been to do this from code running on an external computer. Similarly to what happened in the history of position control (first off-board and now on-board), it needs to be possible to implement autonomous behavior in the Crazyflie itself. This blog post is about two newly implemented capabilities that make it possible to implement automation in the Crazyflie firmware in an easy and maintainable way, namely the ‘App layer’ and P2P communication.

App layer

The “App layer” is a term we have been using internally at Bitcraze to describe a set of functionalities that make it possible to run user code in the Crazyflie. This includes the infrastructure to compile and maintain external code running in the Crazyflie, as well as a set of APIs to control flight and behavior from C code rather than over radio communication.

Last week we implemented the first step of the App layer: the infrastructure part. It is now possible to build the Crazyflie firmware out-of-tree. This means that a project can point to the Crazyflie firmware folder and compile a firmware from the project folder, without touching the Crazyflie firmware folder itself. In practice, this makes it possible to create a git repository implementing custom firmware code with the Crazyflie firmware as a submodule. This makes maintaining custom firmware code much easier than maintaining a branch of the Crazyflie firmware, as was previously required.

A second piece that has been implemented is the app entry point. It allows you to start running code by simply creating an “appMain()” function. The function is called from a dedicated FreeRTOS task after the Crazyflie has initialized and started. This should make it much easier to get started.
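As a minimal sketch (assuming the standard firmware headers; see the repository for the exact build setup), an app can be as simple as:

```c
#include "app.h"       // declares the appMain() entry point
#include "FreeRTOS.h"
#include "task.h"
#include "debug.h"     // DEBUG_PRINT(), routed to the console

// Called in a dedicated FreeRTOS task once the Crazyflie has started
void appMain() {
  DEBUG_PRINT("Hello from the app!\n");

  while (1) {
    // Autonomous behavior goes here; M2T() converts ms to OS ticks
    vTaskDelay(M2T(1000));
  }
}
```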

As an example, we have extracted the Multiranger push demo into a standalone git repository. It demonstrates how to implement autonomous behavior using this new infrastructure.

Peer to Peer communication

The Crazyflie has been used in a lot of research related to swarming; some examples are the Crazyswarm project and the work done by Carnegie Mellon University. However, it is now time to turn it up a notch. On the forum and on the Github repository, there have been several requests for direct peer-to-peer communication between Crazyflies. We finally found time to work on it and have implemented some basic functionality on the NRF and STM sides of the firmware.

Currently, it is possible to send and receive a P2P packet in broadcast mode directly from the STM (see how to do this in the documentation). This enables data to be sent from one Crazyflie to another with a maximum payload of 60 bytes. We were able to stress-test this with our test rig by sending broadcast messages in a round-robin fashion, where a broadcast message was relayed through 10 Crazyflies in 10-20 ms. Even though the current implementation is very minimal for now, we were able to fix some existing issues in the radiolink framework.
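As a sketch of what this can look like from the STM side (the function and type names here follow the firmware’s P2P documentation at the time of writing, so double-check them against the current docs):

```c
#include <string.h>
#include "radiolink.h"  // P2P API on the STM side
#include "debug.h"

// Callback invoked when a P2P broadcast packet is received
static void p2pReceivedHandler(P2PPacket *p) {
  // p->data holds up to 60 bytes of payload, p->size its length
  DEBUG_PRINT("P2P: received %d bytes on port %d\n", p->size, p->port);
}

static void sendHello() {
  static P2PPacket packet;
  packet.port = 0x00;                       // application-chosen port
  memcpy(packet.data, "hello", 5);
  packet.size = 5;
  radiolinkSendP2PPacketBroadcast(&packet); // broadcast to all Crazyflies in range
}

// During initialization:
//   p2pRegisterCB(p2pReceivedHandler);
```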

We will not stop there, as we are hoping to implement a communication system similar to how the CRTP protocol has been implemented. We are getting a lot of help from our active community members, so check out this Github issue to stay up-to-date with the current discussion.

Crazyflies are great for indoor applications, thanks to their maneuverability and ubiquitous character. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking onboard a Crazyflie using deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling.

Our fully autonomous light-seeking Crazyflie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature. Many insects rely on light, either for survival or navigation. Light seeking in aerial robotics has many applications, such as finding the exit out of a dark room. 

Our goal is to fully autonomously find a light source, using only the onboard Micro Controller Unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the Multiranger and Flow decks for position control and obstacle avoidance.

The Multiranger deck with our custom light sensor

We add a custom light sensor, based on the Adafruit TSL2591 sensor. The light sensor fits nicely on the Multiranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

Crazyflie 2.1 with Multiranger, Flow deck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy takes laser ranger and light readings (current and past values) as input, and tells the drone to rotate left, rotate right or fly forward. We train a neural network with two hidden layers of 20 nodes each, featuring bias-add and ReLU activation functions. The input layer is a vector of length 20 (4 states), which, compared to images, greatly reduces computational effort.

DQN policy architecture
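To give an idea of how small this policy is, here is a hypothetical plain-C sketch of the float32 forward pass (the drone itself runs the 8-bit quantized equivalent through TFMicro, and the exact input layout is an assumption):

```c
#define N_IN  20  // input vector: current and past sensor readings
#define N_HID 20  // nodes per hidden layer
#define N_OUT 3   // discrete actions: rotate left, rotate right, fly forward

static float relu(float x) { return x > 0.0f ? x : 0.0f; }

// One dense layer: out = activation(W * in + b), W stored row-major
static void dense(const float *w, const float *b, const float *in,
                  float *out, int nIn, int nOut, int useRelu) {
  for (int o = 0; o < nOut; o++) {
    float acc = b[o];
    for (int i = 0; i < nIn; i++) {
      acc += w[o * nIn + i] * in[i];
    }
    out[o] = useRelu ? relu(acc) : acc;
  }
}

// Returns the index of the action with the highest Q-value
int policyForward(const float obs[N_IN],
                  const float *w1, const float *b1,   // layer 1: 20x20 + 20
                  const float *w2, const float *b2,   // layer 2: 20x20 + 20
                  const float *w3, const float *b3) { // output:  3x20 + 3
  float h1[N_HID], h2[N_HID], q[N_OUT];
  dense(w1, b1, obs, h1, N_IN, N_HID, 1);
  dense(w2, b2, h1, h2, N_HID, N_HID, 1);
  dense(w3, b3, h2, q, N_HID, N_OUT, 0);  // raw Q-values, no activation

  int best = 0;
  for (int a = 1; a < N_OUT; a++) {
    if (q[a] > q[best]) best = a;
  }
  return best;
}
```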

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we came up with some quantization innovations. Both the input layer and all tensors in the network need to have a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers.

Air Learning pipeline
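For reference, a typical 8-bit affine quantization scheme (the exact scheme used here is an assumption) uses the [min, max] range to map a float value $x$ to an integer $q$:

$$s = \frac{x_{max} - x_{min}}{255}, \qquad z = \mathrm{round}\!\left(\frac{-x_{min}}{s}\right), \qquad q = \mathrm{clamp}\!\left(\mathrm{round}\!\left(\frac{x}{s}\right) + z,\ 0,\ 255\right)$$

with the dequantized approximation $x \approx s\,(q - z)$. This is why values outside the pre-defined range get clipped, and why a badly chosen range costs accuracy.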

In the input layer, not all inputs have the same range. For example, a laser ranger returns values from 0 to 5 meters, while our light sensor may return values between 0 and 300 lux. To avoid this issue, we scale all inputs to the same range.
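A minimal sketch of this kind of input scaling (the ranges are the examples from above; the helper name is ours):

```c
// Map a raw sensor value from [lo, hi] to the common range [0, 1]
static float scaleInput(float v, float lo, float hi) {
  float s = (v - lo) / (hi - lo);
  if (s < 0.0f) s = 0.0f;  // clamp readings outside the expected range
  if (s > 1.0f) s = 1.0f;
  return s;
}

// Example usage when building the observation vector:
//   obs[0] = scaleInput(rangeFront, 0.0f, 5.0f);   // laser ranger, meters
//   obs[1] = scaleInput(lightLux,   0.0f, 300.0f); // light sensor, lux
```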

Additionally, the tensors in the network need an assigned [min, max] range for quantization. To find these, we feed a set of representative inputs into the unquantized model and read out the values of the intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM.

RAM utilization on the Crazyflie 2.1

The total amount of RAM available on the Crazyflie 2.1 is 196 kB, of which only 131 kB is available for static allocation at compile time. The Bitcraze software stack uses 98 kB of RAM, leaving only 33 kB for our purposes. The TFMicro stack takes up 24 kB, thus leaving 9 kB for the actual model (e.g., weights and bias terms).

We also analyzed CPU usage and noticed a high number of interrupts from the ‘stabilizer’ thread, i.e., the PID controllers. Because of these interrupts, inference of our model takes 46.4 times longer than it would without interruption.

Our quantized model is 3 kB. As an FP32 model it would have taken 12 kB, which would not have fit in the available memory. We were able to run inference at 4 Hz, compared to the estimated 1.4 Hz of the same but unquantized model.

In a practical sense, we noticed a decreased level of stability when increasing model size: occasionally the drone would reboot randomly while flying. Possible causes for this behavior are RAM overflow and task scheduling problems in the RTOS. We also observed variation in performance loss after quantization. Some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source-seeking behavior. This degree of uncertainty can possibly be avoided using quantization-aware training.

Finally, flying in a dark room without a position estimate can be challenging. The PID controllers rely heavily on information provided by the Flow deck, and this information is limited when little light is present and the floor contains few features. To fix this, we added textured mats on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our results from simulation, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% over a total of 105 flight tests. By varying the environment and the initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. Generally, source seeking far away from the source seems really hard: almost no variation in source strength exists between different measurements, so the drone observes mostly noise.

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general-purpose MCUs are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us should you have any questions or ideas: bduisterhof@g.harvard.edu

As pointed out in Daniele’s blog post about PULP-DroNet, we are collaborating on an AI-deck built around the new GAP8 RISC-V multi-core MCU. In that blog post you can find all the details about DroNet, while here we will talk a bit about the AI-deck hardware. The AI-deck is similar to the PULP-Shield but with some optimizations: one of the HyperFlash memory spots has been removed, the communication interface has been slimmed down, and an ESP32 (NINA module) has been added for WiFi connectivity.

Latest AI-deck prototype

All together, this is a pretty good platform for developing low-power AI on the edge for a drone.

Features:

  • GAP8 – Ultra low power 9 core RISC-V MCU
  • Himax HM01B0 – Ultra low power 320×320 greyscale camera
  • 512 Mbit HyperFlash and 64 Mbit HyperRAM
  • ESP32 for WiFi and more (NINA-W102)
  • 2 x JTAG for GAP8 and ESP32

Currently we are doing the final testing of the hardware, and hopefully we will launch production at the end of October. If production goes according to plan, we hope to offer it as an early access product just before X-mas. Make sure to come back and check the blog for more information about progress as well as pricing.

We are happy to announce that we have made new official releases of a number of our software components. The name of the release is 2019.09 and we have outlined the main changes below.

Crazyflie/Roadrunner firmware

  • Added support for the Crazyflie Bolt
  • Improved support for external positioning systems
  • Basic support for the Lighthouse positioning system
  • Added support for the Active marker deck
  • Improved debug support
  • Improved uSD card logging functionality
  • Bug fixes

For more details, please see crazyflie-release, crazyflie-firmware and crazyflie2-nrf-firmware.

Download the release package that you can flash with the client from crazyflie-release.

Python client and library

  • Basic Lighthouse support
  • More examples
  • Bug fixes

For more details, please see crazyflie-lib-python and crazyflie-clients-python. Note: the version of the crazyflie-lib-python is 0.1.8.

The Windows build of the python client has unfortunately been delayed but will be available soon.

LPS Node firmware

  • Improved menus
  • Bug fixes

For more details and download, please see lps-node-firmware.

Kimberly and Arnaud are at IMAV in Madrid this week. Drop by the booth and check out the demo.

The Crazyflie Bolt and the Crazyflie 2.1 with the lighthouse deck are coming to Madrid!

Only one week left until the start of the big Bitcraze conference frenzy, with the first stop… Madrid! We will visit the International Micro Air Vehicle Conference and Competition (IMAV), a robotics event specialized in (as the name implies) MAVs, so it should be right up our alley. This is the first time we attend as Bitcraze, although the writer of this blog post has experienced fun times at the conference and the competition as a participant with her previous lab, the MAVlab.

IMAV has been around for almost 12 years, starting in Toulouse, France in 2007. Although it was initially mostly held in various places in Europe, it turned into a more worldwide phenomenon in 2016 with its debut in Beijing, China, followed by Melbourne, Australia in 2018. It hosts a conference to which researchers can submit their work on anything related to MAVs, from autonomous navigation and state estimation to design.

IMAV is mostly known for hosting big indoor and outdoor competitions for MAVs. The outdoor competitions can range from survey tasks to finding a hidden person or object. This year the focus will be on the delivery of packages from one place to another. The judges will look at how many packages can be delivered safely and whether the drone is able to detect certain objects in the outdoor environment. The indoor competition is oriented around the application of MAVs in a warehouse: the drones should be able to take off autonomously, monitor boxes on shelves, make an inventory, and pick up packages to release them at their designated locations. 40 teams from 28 universities will show their implementations of these difficult tasks.

We will have a booth at the main company fair at the conference and the indoor competition, and will also be present at the outdoor competition day. We will bring the Lighthouse positioning system and show the awesome swarming demo we developed. We also want to bring the new Crazyflie Bolt with us, which we are sure the regular IMAV crowd will love. If you are at IMAV between the 30th of September and the 4th of October, come by and say hello!

We have briefly mentioned the Active marker deck earlier in our blog and in this post we will describe how it works and what it is all about.

The Active marker deck is a result of our collaboration with Qualisys, a Swedish manufacturer of high-end optical tracking systems. Optical tracking systems are often referred to as motion capture (mocap) systems and use cameras to track markers on an object. By using multiple cameras, it is possible to calculate the 3D position of the markers, and of the object they are attached to, with very high precision and accuracy. It is common to use mocap systems in robotics labs to track the position and orientation of robots, for instance quadrotors.

Passive markers

The most common marker type is the passive marker: reflective spheres that are attached to the robot. Infrared flashes on the cameras maximize the visibility of the markers, making it easier for the system to detect and track them. We sell the Motion capture marker deck to make it easy to attach markers to a Crazyflie.

To get the full pose (position, roll, pitch, yaw) of a robot, the markers must be placed in a configuration that makes it possible for the mocap system to identify the orientation. This means that there must be some asymmetry in the marker positions to tell what is front, back, up, down and so on.

In a swarm of Crazyflies, unique marker configurations make it possible to distinguish one individual from another and track all drones simultaneously. With a larger number of robots, though, it becomes cumbersome to place markers in unique configurations. One approach to solving this problem is to use known start positions for all individuals and keep track of their motion over time instead; this solution is used in the Crazyswarm, for instance, where all Crazyflies can use the same marker configuration. Another approach is to make it possible to distinguish one marker from another: enter the Active marker deck.

Active markers

Instead of passive markers it is possible to use infrared LEDs; these are called active markers. The LEDs are triggered by the flash from the cameras and are easily detected as strong points of light. Since they emit light, they can be detected further away from the camera than a passive marker, and their smaller physical size also keeps them better separated when they are far away and only a few pixels are available to detect them in the camera.

Furthermore, Qualisys has a technology that makes it possible to assign an ID to each marker, which enables the tracking system to identify individual markers and thus uniquely identify individuals in a swarm. With different IDs on the markers, there is no need to have asymmetrical configurations, and the marker layout can be the same on all drones. It also reduces the risk of errors in the estimated pose, since more information is available.

The deck

The Active marker deck is designed to go on top of the Crazyflie and has four arms with one LED each. The arms are as long as possible to maximize the signal-to-noise ratio in the cameras, while still short enough to be protected from crashes by the motors. An STM32F0 on the deck takes care of the LEDs and ID handling, so the main Crazyflie CPU does not have to spend any time on this.

The status of the deck is that the hardware is fully functional (although we might move some things around before we produce it) and that there is a basic implementation of the firmware. IDs are assigned to the markers using the standard parameter framework, either from the client or from a script.
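As a sketch of what this looks like on the firmware side, the standard parameter macros can expose the IDs (the group and parameter names here are assumptions; check the deck driver for the real ones):

```c
#include "param.h"

// One ID per marker LED, settable from the client or a cflib script
static uint8_t front = 1;
static uint8_t back  = 2;
static uint8_t left  = 3;
static uint8_t right = 4;

// Hypothetical group name; the actual deck driver may use different names
PARAM_GROUP_START(activeMarker)
PARAM_ADD(PARAM_UINT8, front, &front)
PARAM_ADD(PARAM_UINT8, back,  &back)
PARAM_ADD(PARAM_UINT8, left,  &left)
PARAM_ADD(PARAM_UINT8, right, &right)
PARAM_GROUP_STOP(activeMarker)
```

Anything exposed this way shows up automatically in the client’s parameter tab and in the Python lib.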

We will start production of the deck in the near future and it will be available in the store this autumn. Qualisys added support for rigid bodies using active markers in V2019.3 of the QTM tracking software.

Hello everyone, I’m Victor and you probably haven’t heard of me yet, but I’ve got the awesome opportunity to spend some weeks this summer working at Bitcraze. Working… Well, I’ve spent the majority of my time here getting invaluable experience, programming, flying drones, eating incredible falafel and having fun, so it’s really been a pleasure.

I’m quite new to both programming and electronics, so while I haven’t created any huge masterpieces of code yet, I did make a small program with a GUI that lets you test the health of the motors and propellers of the Crazyflie. You can run multiple tests simultaneously (I’ve tried up to 8 Crazyflies, which works fine even with a single radio, and you should be able to run many more), and it relies on either Lighthouse or a Flow deck for positioning.
The propeller test is essentially the same test as the one integrated in the cfclient, while the motor test checks the thrust levels of the motors (by hovering in the air for x seconds) to see if any of them are off, and ranks them as good/bad. The default threshold is 15% but can be changed according to your needs. The program is written in Python and uses tkinter for the GUI and cflib to communicate with the Crazyflie. The script can be found here.

At the end of August I’m going to start studying Computer Science and Engineering, which I’m extremely thrilled about, and this has really been the perfect preparation for that! In the future I hope to contribute to the Crazyflie projects and learn more from the great team here at Bitcraze.

So until next time, fly safe!
Victor

The High-level Commander has been part of the Crazyflie firmware since the 2018.10 release. In combination with a positioning system, it can fly the Crazyflie along a trajectory that is either defined in the firmware or uploaded through the Python lib. It originates from the Crazyswarm project and we have used it in various demos, since it makes it possible to fly trajectories that are very fluid and look really cool. The trajectories are defined as 7th-degree polynomials describing segments executed one after the other.

The commander gives full control of position, velocity, acceleration and jerk; the only problem is that it is non-trivial to generate the polynomials. We have wanted to simplify the creation of trajectories for a long time and have finally had some time to play with it. In this blog post we will describe how it can be done with Bezier curves and show some examples.

Each segment in a High-level Commander trajectory is defined by four 7th-degree polynomials, one each for x, y, z and yaw. There is also a scaling parameter that tells the controller the time scale to use when executing the segment. Using polynomials of degree 7 makes it possible to design trajectories that are continuous in position, velocity, acceleration and jerk when passing from one segment to the next, which is important for a smooth and controlled flight.

Bezier curves are common in many graphics applications and are probably known to most users. They are parametric curves defined by control points, usually three or four. Bezier curves can also be expressed as polynomials, and this is what we will use in this case. To get a correct mapping to the desired 7th-degree polynomials we need more control points and will use 8 per segment. The basic idea is to define the trajectory as Bezier curves and make sure the control points are placed in such a way that the continuity requirements are satisfied.
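As a refresher, a degree-7 Bezier curve with control points $P_0, \ldots, P_7$ is a polynomial in the Bernstein basis:

$$B(t) = \sum_{i=0}^{7} \binom{7}{i} (1-t)^{7-i}\, t^{i}\, P_i, \qquad t \in [0, 1]$$

Expanding this expression in powers of $t$ yields exactly the 7th-degree polynomial coefficients that the High-level Commander expects for a segment.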

Bezier curve with 8 control points

On this page from the University of Cambridge there is a good explanation of continuity across the joins between curves, with formulas for C0, C1 and C2 continuity. We also need C3 continuity, which can be calculated in the same manner:
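For two consecutive degree-7 segments with control points $P_0, \ldots, P_7$ and $Q_0, \ldots, Q_7$, and assuming the same time scaling on both segments, the continuity conditions are:

$$\begin{aligned} C^0 &: \quad Q_0 = P_7 \\ C^1 &: \quad Q_1 - Q_0 = P_7 - P_6 \\ C^2 &: \quad Q_2 - 2Q_1 + Q_0 = P_7 - 2P_6 + P_5 \\ C^3 &: \quad Q_3 - 3Q_2 + 3Q_1 - Q_0 = P_7 - 3P_6 + 3P_5 - P_4 \end{aligned}$$

In other words, each extra order of continuity constrains one more control point (handle) on each side of the join.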

With these formulas it is possible to set the handles of the Bezier curves to make sure we get a smooth ride.

We have added a Python example that implements the ideas above; you can find it in crazyflie-lib-python/examples/positioning/bezier_trajectory.py. The design is based on Nodes that represent the connection points between Bezier curves (called Segments). Each Node has a set of handles that are shared between the Segments that use the Node. If not all handles are set, the implementation will set them to appropriate values; see the comments in the code for more details. The Node API only allows the user to set the handles on one of the Segments; the handles for the other Segment are automatically set to generate a continuous trajectory.

The example uses nodes in the corners of a square and contains three parts:

  • No velocity in the nodes. The Crazyflie stops in the nodes, similar to calling go-to in the High-level Commander.
  • Velocity in the nodes. A fluid motion all the way around.
  • A bit more aggressive settings to get a little action.

Finally, here is a video showing the full sequence; we use the Lighthouse for positioning.

Improving the flow of information

Our blog posts usually consist of the awesome new products and demos that we make here at Bitcraze, but now we will talk about… documentation! Alright, alright, it is maybe not the most thrilling topic, but you should still be excited about it! Good documentation about the Crazyflie and its tools will not only enable you to recreate the demos and the work of others, but also to implement your own ideas and to contribute to our open-source firmware.

In the years that Bitcraze has been around, there has been quite a build-up of information, which can be found on the main website, the wiki, the Github repositories, and in bits and pieces on the forum. Although we try to provide all the information necessary for getting started with development, it is currently quite cluttered. If we at Bitcraze already have difficulty finding and maintaining all the documentation, we can only imagine how difficult it must be for a starting developer. We therefore would like to improve the flow of information dramatically!

Here are some ideas of what we would like to do with the documentation:

Moving Product Information to the Shop

LED ring expansion deck in the main website, shop, and wiki.

Currently there are three different locations where you can find information about the physical products (the Crazyflie, localization systems and expansion decks): the main website, the online store and the wiki. We have noticed that a lot of electronics and hardware shops put all the details of a product directly on the product page of their shop. We aim to do that as well, since there will then be only one page for users to go to for schematics, specifications, instructions and more, and for us it will also be easier to maintain and update the product information.

Moving Software Info to GitHub

There are a lot of bits and pieces of information about the firmware on the Crazyflie and all the tools, spread over the tutorials on our main website, the wiki and the Github repositories. This again means a lot of duplicated information, which is difficult for us to maintain and therefore easily becomes outdated. We could put all the information on the wiki, but what if somebody changes something in the code that requires a change in the documented procedure as well?

It would be best to keep all the information about the firmware as close to the source as possible, so we think it is best to move everything to the Github repositories. For instance, the wiki’s cfclient instructions can be moved to the documentation of the cfclient repository, and the on-chip debugging instructions can go to the crazyflie-firmware repository. To keep it all manageable we will:

  • Create a /doc folder in the repositories to better structure all the information
  • Add more Doxygen comments to all the functions in the code and automatically generate documentation from them

Restructuring the Wiki Content

After moving all the hardware-related content to the shop and all the firmware-related info to the Github repositories, we will need to think about what we want to do with the wiki! You would think that there is nothing left to put on the wiki after relocating the earlier documentation, but we beg to differ! For instance, there are so many Github repositories that there is a real need for an overview. The wiki can educate developers on which tools we have and how to properly use them. Of course, we already have the getting started tutorials, but we also want to provide a more in-depth explanation of the overall structure and how the different repositories work together, like this.

This does mean that we would need to restructure the wiki entirely and focus only on topics like:

  • System architecture of the Crazyflie
  • Communication protocol between the STM and the NRF
  • Communication protocol between the Python library and the client
  • Overview of the Github repositories
  • Projects and hacks
  • etc etc

What do you think?!

Of course we can change all we want in the documentation, but you are the ones who actually use it! We are very curious about what you think of the plans, so please give us tips or suggestions on how to improve the overall documentation experience. Leave a message below or express your opinion in this forum thread.

Two weeks ago we posted about the demo we did for our new office move-in party. There have been multiple requests to share the script, but unfortunately it is an old hacked-up script that would not be useful at all as an example. So, last week, we made an example that can run a synchronized swarm sequence.

The example has been pushed to the examples folder of the crazyflie-lib-python project. It is called synchronizedSequence.py. Running this example unmodified with 3 Crazyflies in a positioning system will give you this result. (Like the previous demo, this was done in a Lighthouse system.)

One of the key design ideas of the example is that it is based on a single control loop that can be synchronized with an outside system: in this example there is a simple sleep of one second between each step of the sequence, but it could, for example, be changed into a MIDI clock receiver to synchronize the sequence with music.

The example was developed with the help of Victor, a student we hired to help out during the summer. He then played around a little to make a 9-Crazyflie sequence that is more impressive:

I uploaded Victor’s sequence to a Github gist as it can be good for inspiration. One bit of warning though: as is, the sequence contains some vertical movements that are quite aggressive, and the part where the Crazyflies fly directly on top of each other should be considered more of a stress test.