Category: Software

Crazyflies are great for indoor applications, thanks to their maneuverability and ubiquitous character. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking onboard a Crazyflie using deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling.

Our fully autonomous light-seeking CrazyFlie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature: many insects rely on light, either for survival or for navigation. Light seeking with aerial robots has many applications, such as finding the exit of a dark room.

Our goal is to fully autonomously find a light source, using only the onboard Micro Controller Unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the multiranger and flowdeck for position control and obstacle avoidance.

The Multiranger deck with our custom light sensor

We add a custom light sensor, based on the Adafruit TSL2591 sensor. The custom light sensor nicely fits in the multiranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

CrazyFlie 2.1 with multiranger, flowdeck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy takes laser ranger and light readings (current and past values) as input and tells the drone to rotate left, rotate right or fly forward. We train a neural network with two hidden layers of 20 nodes each, with bias terms and ReLU activation functions. The input layer is a vector of length 20 (4 stacked states), which, compared to images, greatly reduces the computational effort.

DQN policy architecture
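
As an illustration of the policy described above, here is a minimal Keras sketch (the layer sizes follow the text, but the naming and setup are ours; the actual training code lives in the Air Learning pipeline):

```python
import tensorflow as tf

# Sketch of the policy described above: a 20-element observation vector
# (stacked laser-ranger and light readings) passes through two fully
# connected hidden layers of 20 units with bias and ReLU, and ends in
# 3 Q-values: rotate left, rotate right, fly forward.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", use_bias=True, input_shape=(20,)),
    tf.keras.layers.Dense(20, activation="relu", use_bias=True),
    tf.keras.layers.Dense(3),  # one Q-value per discrete action
])

observation = tf.zeros((1, 20))                   # dummy input
action = tf.argmax(policy(observation), axis=-1)  # greedy action selection
```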

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we introduced a few quantization tricks. Both the input layer and all tensors in the network need a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers.

Air Learning pipeline

In the input layer, not all inputs have the same range: a laser ranger can return values from 0 to 5 meters, while our light sensor may return values between 0 and 300 lux. Such a mismatch hurts quantization, so we scale all inputs to the same range.
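
A minimal sketch of what such input scaling could look like (the ranges are the ones mentioned above; the function itself is only illustrative):

```python
import numpy as np

# Example per-sensor ranges: the laser rangers report 0-5 m,
# the light sensor roughly 0-300 lux.
RANGES = {"laser": (0.0, 5.0), "light": (0.0, 300.0)}

def scale(value, sensor):
    """Map a raw reading onto [0, 1] so every input shares the same range."""
    lo, hi = RANGES[sensor]
    return float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))

print(scale(2.5, "laser"), scale(150.0, "light"))  # both map to 0.5
```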

Additionally, the tensors in the network need an assigned [min, max] range for quantization. To determine these ranges, we feed a set of representative inputs through the unquantized model and read out the values of the intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.
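
With TensorFlow Lite, this calibration step can be expressed roughly as follows (a sketch rather than our exact pipeline; the model path is a placeholder and random data stands in for logged observations):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Feed representative (already scaled) observations through the converter
    # so it can record a float32 [min, max] range for every tensor.
    for _ in range(100):
        yield [np.random.rand(1, 20).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("policy_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("policy_int8.tflite", "wb") as f:
    f.write(converter.convert())  # fully 8-bit quantized model
```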

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM.

RAM utilization on the CrazyFlie 2.1

The total amount of RAM available on the Crazyflie 2.1 is 196 kB, of which only 131 kB is available for static allocation at compile time. The Bitcraze software stack uses 98 kB of RAM, leaving only 33 kB for our purposes. The TFMicro stack takes up 24 kB of that, leaving 9 kB for the actual model (e.g., weights and bias terms).

We also analyzed CPU usage and noticed a high number of interrupts from the ‘stabilizer’ thread, i.e., the PID controllers. Because of these interrupts, inference of our model takes 46.4 times longer than it would without interruption.

Our quantized model is 3 kB. An FP32 version would have taken 12 kB, which would not have fit in the available memory. We were able to run inference at 4 Hz, compared to the estimated 1.4 Hz of the same but unquantized model.

In a practical sense, we noticed a decreased level of stability when increasing model size: occasionally the drone would reboot randomly while flying. Possible causes for this behavior are RAM overflow and task-scheduling problems in the RTOS. We also observed variation in performance loss after quantization: some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source-seeking behavior. This degree of uncertainty can possibly be avoided using quantization-aware training.

Finally, flying in a dark room without a position estimate can be challenging. The PID controllers heavily rely on information provided by the Flow deck, and this information is limited when little light is present and the floor contains few features. To fix this, we added textured mats on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our simulation results in the real world, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% over a total of 105 flight tests. By varying the environment and the initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. Generally, source seeking far away from the source is really hard: almost no variation in source strength exists between measurements, so the drone mostly observes noise.

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general-purpose MCUs are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us should you have any questions or ideas: bduisterhof@g.harvard.edu

We are happy to announce that we have made new official releases of a number of our software components. The name of the release is 2019.09 and we have outlined the main changes below.

Crazyflie/Roadrunner firmware

  • Added support for the Crazyflie Bolt
  • Improved support for external positioning systems
  • Basic support for the Lighthouse positioning system
  • Added support for the Active marker deck
  • Improved debug support
  • Improved uSD card logging functionality
  • Bug fixes

For more details, please see crazyflie-release, crazyflie-firmware and crazyflie2-nrf-firmware.

Download the release package that you can flash with the client from crazyflie-release.

Python client and library

  • Basic Lighthouse support
  • More examples
  • Bug fixes

For more details, please see crazyflie-lib-python and crazyflie-clients-python. Note: the version of the crazyflie-lib-python is 0.1.8.

The Windows build of the python client has unfortunately been delayed but will be available soon.

LPS Node firmware

  • Improved menus
  • Bug fixes

For more details and downloads, please see lps-node-firmware.




Kimberly and Arnaud are at IMAV in Madrid this week. Drop by the booth and check out the demo.

We have briefly mentioned the Active marker deck earlier in our blog and in this post we will describe how it works and what it is all about.

The Active marker deck is a result of our collaboration with Qualisys, a Swedish manufacturer of high-end optical tracking systems. Optical tracking systems are often referred to as motion capture (mocap) systems and use cameras to track markers on an object. By using multiple cameras it is possible to calculate the 3D position of the markers, and of the object they are attached to, with very high precision and accuracy. It is common to use mocap systems in robotics labs to track the position and orientation of robots, for instance quadrotors.

Passive markers

The most common marker type is the passive marker: reflective spheres that are attached to the robot. By using infrared flashes on the cameras, the visibility of the markers is maximized, which makes it easier for the system to detect and track them. We sell the Motion capture marker deck to make it easy to attach markers to a Crazyflie.

To get the full pose (position, roll, pitch, yaw) of a robot, the markers must be placed in a configuration that makes it possible for the mocap system to identify the orientation. This means that there must be some asymmetry in the marker positions to understand what is front, back, up, down and so on.

With a swarm of Crazyflies, unique marker configurations make it possible to distinguish one individual from another and track all drones simultaneously. With a larger number of robots it becomes cumbersome, though, to place markers in unique configurations, and one approach to solving this problem is to have known start positions for all individuals and keep track of their motions over time instead. This solution is used in the Crazyswarm, for instance, and all Crazyflies can use the same marker configuration in this setup. Another approach is to make it possible to distinguish one marker from another; enter the Active marker deck.

Active markers

Instead of passive markers, it is possible to use infrared LEDs; these are called active markers. The LEDs are triggered by the flash from the cameras and are easily detected as strong points of light. Since they emit light, they can be detected further away from the camera than a passive marker, and their smaller physical size also keeps them better separated when they are far away and only a few pixels are available to detect them in the camera.

Furthermore, Qualisys has a technology that makes it possible to assign an ID to each marker, which enables the tracking system to identify individual markers and thus uniquely identify individuals in a swarm. With different IDs on the markers, there is no need to have asymmetrical configurations and the marker layout can be the same on all drones. It also reduces the risk of errors in the estimated pose, since there is more information available.

The deck

The Active marker deck is designed to go on top of the Crazyflie and has four arms with one LED each. The arms are as long as possible to maximize the signal-to-noise ratio in the cameras, while still short enough to be protected from crashes by the motors. An STM32F0 on the deck takes care of the LEDs and the handling of IDs, so the main Crazyflie CPU does not have to spend any time on this.

The status of the deck is that the hardware is fully functional (we might want to move something around before we produce it, though) and that there is a basic implementation of the firmware. IDs are assigned to the markers using parameters in the standard parameter framework, from the client or from a script.
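
To give an idea of what assigning IDs from a script could look like with the Python library, here is a small sketch (the parameter group and names are made up for illustration; check the deck documentation for the real ones):

```python
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    # Hypothetical parameter names: one ID per LED arm on the deck.
    for name, marker_id in [('activeMarker.front', '1'),
                            ('activeMarker.back', '2'),
                            ('activeMarker.left', '3'),
                            ('activeMarker.right', '4')]:
        scf.cf.param.set_value(name, marker_id)
```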

We will start production of the deck in the near future and it will be available in the store this autumn. Qualisys added support for rigid bodies using active markers in V2019.3 of the QTM tracking software.

Hello everyone, I’m Victor and you probably haven’t heard of me yet but I’ve got the awesome opportunity to spend some weeks during this summer working at Bitcraze. Working… Well, I’ve spent the majority of my time here getting invaluable experience, programming, flying with drones, eating incredible falafel and having fun so it’s really been a pleasure.

I’m quite new to both programming and electronics, so while I haven’t created any huge masterpieces of code yet, I did make a small program with a GUI that lets you test the health of the motors and propellers of the Crazyflie. You can run multiple tests simultaneously (I’ve tried up to 8 Crazyflies, which works fine even with a single radio, and you should be able to run many more), and it relies on either Lighthouse or a Flow deck for positioning.
The propeller test is essentially the same test as the one integrated in the cfclient, while the motor test checks the thrust levels of the motors (by hovering in the air for x seconds) to see if any of them are off and ranks them as good/bad. The default threshold is 15% but can be changed according to your needs. The program is written in Python and uses tkinter for the GUI and cflib to communicate with the Crazyflie. The script can be found here.
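
The pass/fail idea boils down to comparing each motor's thrust during hover against the mean of all four; a simplified sketch of that logic (not Victor's actual code, which is linked above):

```python
def rank_motors(hover_thrust, threshold=0.15):
    """Flag motors whose average hover thrust deviates more than
    `threshold` (15% by default) from the mean of all four motors."""
    mean = sum(hover_thrust) / len(hover_thrust)
    return {motor: ('good' if abs(t - mean) / mean <= threshold else 'bad')
            for motor, t in enumerate(hover_thrust, start=1)}

# Example: motor 4 has to work noticeably harder than the others.
print(rank_motors([43000, 44000, 43500, 52000]))
```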

At the end of August I’m going to start studying Computer Science and Engineering, which I’m extremely thrilled about, and this has really been a perfect preparation for that! In the future I hope to contribute to the Crazyflie projects and learn more from the great team here at Bitcraze.

So until next time, fly safe!
Victor

Two weeks ago we posted about the demo we did for our new office move-in party. There have been multiple requests to share the script, but unfortunately it is a hacked old script that would not be useful at all as an example. So, last week, we made an example that can run a synchronized swarm sequence.

The example has been pushed to the examples folder of the crazyflie-lib-python project. It is called synchronizedSequence.py. Running this example unmodified with 3 Crazyflies in a positioning system gives you this result. (Like the previous demo, this was done in a Lighthouse system.)

One of the key design choices in the example is that it is based on a single control loop that can be synchronized with an outside system: in this example there is a simple sleep of one second between each step of the sequence, but it could, for example, be changed into a MIDI clock receiver to synchronize the sequence with music.
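
The core of the example is therefore a single loop that advances every Crazyflie one step per tick; something along these lines (a stripped-down sketch, not a verbatim excerpt from synchronizedSequence.py):

```python
import time

def run_sequence(fly_to_fns, sequence, step_time=1.0):
    """fly_to_fns: one callable per Crazyflie that sends it to (x, y, z).
    sequence: list of steps, each holding one (x, y, z) setpoint per drone."""
    for step in sequence:
        for fly_to, setpoint in zip(fly_to_fns, step):
            fly_to(*setpoint)
        # The synchronization point: today a plain sleep, but this is where
        # a MIDI clock receiver could be hooked in to follow music instead.
        time.sleep(step_time)
```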

The example was developed with the help of Victor, a student we have hired to help out during the summer. He then played around a little bit to make a 9-Crazyflie sequence that is more impressive:

I uploaded Victor’s sequence as a GitHub gist, as it can be good for inspiration. One word of warning though: as is, the sequence contains some vertical movements that are quite aggressive, and the part where Crazyflies fly directly on top of each other should be considered more of a stress test.

Summer is here and temperatures are rising. Since many of us will be on holiday, we will focus this quarter on a special summer clean-up! Here is what we are working on:

  • Fixing issues: This time we are aiming to close many of the issue tickets in our GitHub repositories, so that after the summer everything will run much more smoothly (we hope!). Our test rig will definitely come in very handy to sniff out more issues with radio communication as well. You can help too! Everybody who is using or developing with the STM firmware, NRF firmware, Python library or anything else and notices issues: please make a ticket in the corresponding GitHub repository (if you are familiar with the code) or post about it on our forum (if you are not sure exactly what is going on). Together we can make the code better.
  • Lighthouse calibration: In March we released our Lighthouse deck for positioning with the HTC Vive base stations. We feel that the setup process could be improved further, since currently the Crazyflie’s firmware must contain hardcoded information about the SteamVR base station positions. We will try to read the factory calibration data directly from the base stations themselves. This will enable us to do 2 additional things: (1) the Crazyflie with the LH deck itself could be used to set up the Lighthouse system, so that SteamVR would not be necessary anymore; (2) only one base station would be needed for positioning instead of two, which improves robustness when the line of sight to one base station is lost.
  • Documentation: We try to provide all the information needed for everybody to be able to do anything they want with their Crazyflie. But with high flexibility comes great responsibility… for proper documentation! We are planning to restructure all of our media outlets and try to improve the flow and level of detail for our users. We hope to make it easier for beginning developers to get started, and for more advanced developers to gain a better understanding of the system in order to implement their own awesome ideas. So our very first step is to restructure and clean up the Bitcraze wiki and see where we can add more content.
  • Products: We have a lot of products coming out in the 2nd half of the year
    • AI Deck: We are working hard to get the AI deck all ready for production and we are estimating that they will be available for early access in late autumn. Keep a look out on our forum for regular updates on the progress!
    • Lighthouse breakout board: We made our first working prototype of the lighthouse breakout board, which should make it easier for the lighthouse positioning system to also work on other platforms than the Crazyflie.
    • Active Marker Deck: We are very much on track with the Qualisys Active marker deck! It should be available in the autumn.
    • Crazyflie Bolt: This has been sent off to production for the early access version, which should be available in the autumn!

Many of our users are flying larger and larger swarms, and we’ve been getting feedback that there are communication issues when connecting to many Crazyflies. So during the last weeks we’ve been looking into this. Among other things, we are building a test rig where we can automate communication testing (see last week's blog post). We’ve also fixed a few communication issues, listed below.

One of the issues causing problems was dropped packets coming in to the Crazyflie: if the flow of packets to one CRTP port was too high, packets would start to be dropped. This has now been fixed by increasing the length of the queues for each port. (GitHub issue)

Another issue was logging data piling up after a disconnect. The detection of the radio disconnection was broken, so logging data would continue to be generated and pushed into the communication stack. This has now been fixed: logging is reset, which should clear up the congestion on the next connect. (GitHub issue)

Lastly, we also fixed the USB communication issues with dropped packets and crashes when the USB was disconnected. (GitHub issue)

We’ve already noticed a few other issues when using the rig, so there should be more fixes coming soon. In the meantime, test out the new firmware and let us know if there are still issues.

While running our ICRA demo, we came across a bug in the radio handling of the Crazyflie Python library, limiting the number of Crazyflies that could be controlled using one Crazyradio PA. Communication with many Crazyflies is crucial, as flying swarms is becoming an increasingly interesting topic for research and education. So we decided to tackle the problem and create a radio test-bench:

To make the test-bench we have attached 10 Roadrunner boards to a plank of wood, together with USB switches that can provide enough power to the Roadrunners. We used the Roadrunner because it is mechanically easier to use in this context and its architecture is identical to the Crazyflie 2.1 when it comes to the radio implementation.

Initially we will use the test-bench to run test scripts that push the communication to its limits and consistently test the communication stack functionality. This should allow us to find bugs and verify that we solve them, as well as discover and document limitations.

Eventually we want to connect a Raspberry Pi to the test-bench and run tests for each commit and pull request to the crazyflie-firmware, crazyflie2-nrf-firmware and crazyflie-lib-python projects. This will guarantee that we do not introduce new limitations in the communication stack. The test-bench will also be very useful when implementing new functionality like direct Crazyflie-to-Crazyflie P2P communication.

As a final note, the Crazyswarm project is not affected by the crazyflie-lib bug since it uses the C++-implemented crazyflie-ros driver. Hence Crazyswarm can control more Crazyflies per Crazyradio PA, and it is still the preferred way to fly a swarm, at least when using a motion capture system. Though, with the progress made on LPS and Lighthouse positioning, running swarms using the Python API directly is probably a more lightweight alternative.

Hi everyone, here at the Integrated Systems Laboratory of ETH Zürich, we have been working on an exciting project: PULP-DroNet.
Our vision is to enable artificial-intelligence-based autonomous navigation on small flying robots, like the Crazyflie 2.0 (CF) nano-drone.
In this post, we will give you the basic ideas behind making the CF fly fully autonomously, relying only on onboard computational resources: that means no human operator, no ad-hoc external signals, and no remote base station!
Our prototype can follow a street or a corridor and at the same time avoid collisions with unexpected obstacles, even when flying at high speed.


PULP-DroNet is based on the Parallel Ultra Low Power (PULP) project envisioned by ETH Zürich and the University of Bologna.
In the PULP project, we aim to develop an open-source, scalable hardware and software platform to enable energy-efficient complex computation where the available power envelope is only a few milliwatts, such as advanced Internet-of-Things nodes, smart sensors, and of course nano-UAVs. In particular, we address the computational demands of applications that require flexible and advanced processing of data streams generated by sensors such as cameras, which is beyond the capabilities of typical microcontrollers. The PULP project has its roots in the RISC-V instruction set architecture, an innovative academic and research open-source architecture and an alternative to ARM.

The first step to make the CF autonomous was the design and development of what we called the PULP-Shield, a small form factor pluggable deck for the CF, featuring two off-chip memories (Flash and RAM), a QVGA ultra-low-power grey-scale camera and the PULP GAP8 System-on-Chip (SoC). The GAP8, produced by GreenWaves Technologies, is the first commercially available embodiment of our PULP vision. This SoC features nine general purpose RISC-V-based cores organised in an on-chip microcontroller (1 core, called Fabric Ctrl) and a cluster accelerator of 8 cores, with 64 kB of local L1 memory accessible at high bandwidth from the cluster cores. The SoC also hosts 512kB of L2 memory.

Then, as the algorithmic heart of our autonomous navigation engine, we selected an advanced artificial intelligence algorithm based on DroNet, a Convolutional Neural Network (CNN) originally developed by our friends at the Robotics and Perception Group (RPG) of the University of Zürich.
To enable the execution of DroNet on our resource-constrained system, we developed a complete methodology to map computationally intense deep neural networks onto the PULP-Shield and the GAP8 SoC.
The network outputs two pieces of information, a probability of collision and a steering angle, which are translated into the commands used to control the drone: respectively, forward velocity and angular yaw rate. The layout of the network is the following:
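
To give a feel for that translation, here is a toy sketch of how the two outputs could be turned into setpoints (the constants and the smoothing are illustrative, not the exact DroNet control law):

```python
MAX_FORWARD_SPEED = 1.0  # m/s, illustrative
MAX_YAW_RATE = 90.0      # deg/s, illustrative
ALPHA = 0.7              # low-pass factor to smooth the commands

state = {"v": 0.0, "yaw_rate": 0.0}

def outputs_to_setpoints(p_collision, steering_angle):
    """Slow down as the collision probability rises, steer with the predicted
    angle (assumed in [-1, 1]), and low-pass both commands for smooth flight."""
    v_target = MAX_FORWARD_SPEED * (1.0 - p_collision)
    yaw_target = MAX_YAW_RATE * steering_angle
    state["v"] = ALPHA * state["v"] + (1.0 - ALPHA) * v_target
    state["yaw_rate"] = ALPHA * state["yaw_rate"] + (1.0 - ALPHA) * yaw_target
    return state["v"], state["yaw_rate"]
```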

Therefore, our mission was to deploy all the required computation onboard the PULP-Shield mounted on the CF, enabling fully autonomous navigation. To put the problem into perspective, in the original work by the RPG, the DroNet CNN enabled autonomous navigation of larger drones (e.g., the Parrot Bebop). In that use case, computational power and memory were not a problem, thanks to the streaming of images to a remote base station, typically a laptop consuming 30-100 W or more. So our mission required running a similar workload within 1/1000 of the original power budget.
To make this work, we combined fixed-point arithmetic (instead of “traditional” floating point), some minimal modifications to the original topology, and optimised memory and computation usage. This allowed us to squeeze DroNet into the ultra-small power budget available onboard. Our most energy-efficient configuration delivers 6 frames per second (fps) within only 64 mW (including all the electronics on the PULP-Shield), and when we push the PULP platform to its limit, we achieve an impressive 18 fps within just 3.5% of the total CF power envelope; for comparison, the original DroNet ran at 20 fps on an Intel i7.
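
As a flavour of what moving to fixed point means, here is a minimal symmetric 8-bit quantization of a weight tensor (a generic textbook scheme, not the exact PULP-DroNet tool flow):

```python
import numpy as np

def quantize_symmetric_int8(weights):
    """Map float32 weights to int8 with one per-tensor scale, so the
    multiply-accumulates can run on integer hardware."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale  # dequantize with q * scale

w = np.random.randn(64, 64).astype(np.float32)
w_q, s = quantize_symmetric_int8(w)
print(np.abs(w - w_q.astype(np.float32) * s).max())  # worst-case error
```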

Do you want to check for yourself? All our hardware and software designs, including our code, schematics, datasets, and trained networks, have been released and made available to everyone as open source and open hardware on GitHub. We look forward to contributions from other enthusiasts, both in hardware enhancements and in software (e.g., smarter networks), to create a great community of people interested in working together on smart nano-drones.
Last but not least, the piece of information you have all been waiting for: yes, soon Bitcraze will let you enjoy our PULP-Shield; actually, even better, you will get to play with its evolution! Stay tuned, as more information about the “code-name” AI-deck will be released in upcoming posts :-).

If you want to know more about our work:

Questions? Drop us an email (dpalossi at iis.ee.ethz.ch and fconti at iis.ee.ethz.ch)

As mentioned earlier, we will be attending ICRA 2019 and have started to work on the demo that we will run in our booth. The main features we are going to show this time are the Lighthouse positioning deck and the Crazyflie 2.1.

One Crazyflie flying and 5 re-charging on their pads

Running a demo with flying Crazyflies at a conference usually means a lot of work changing and charging batteries, starting demos and so on. This takes time and leaves less time to talk to all the interesting people that visit our booth. This year we are aiming to make the demo as automated as possible. We will have 8 Crazyflie 2.1s with Lighthouse and Qi charger decks, each with a charging pad. A computer will orchestrate the Crazyflies and make sure one is flying at all times while the others re-charge their batteries. When the battery of the flying Crazyflie is depleted, it goes back to its pad while another one takes over.

Most of the functionality is implemented in the Crazyflie firmware, and it is pretty much autonomous once the trajectory is started. The Crazyflie flies the trajectory over and over until it is low on battery, at which point it goes back to where it started from for recharging. Even though the Lighthouse positioning is very good, the Crazyflie sometimes slips off the charging pad when landing, so we have added a reposition feature that takes off again and lands if it is not charging after landing.

The orchestration computer (we call it the Control tower) just keeps track of the state of the flying copter and starts a new one when the flying one goes back to its pad.
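
In pseudo-Python, the tower's job is essentially this (a sketch of the idea only; the method names are hypothetical placeholders, not the code in the experimental repository):

```python
import time

def control_tower(crazyflies):
    """Keep exactly one Crazyflie in the air: when the active one is back
    on its pad, start the best-charged one that is waiting."""
    flying = None
    while True:
        if flying is None or flying.is_back_on_pad():
            ready = [cf for cf in crazyflies if cf is not flying and cf.is_charged()]
            if ready:
                flying = max(ready, key=lambda cf: cf.battery_level())
                flying.start_trajectory()
        time.sleep(1.0)
```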

We are reusing the spiral trajectory from IROS last year, which has the property that up to 4 copters can fly it at the same time without colliding, provided they are started at the correct times. There is a swarm feature in the Control tower that runs up to 4 Crazyflies with continuous replacements. The chargers are not fast enough to keep it going and it does not work as expected every time, but it is exciting to have a few more Crazyflies buzzing around!

There are some bits and pieces left to implement, like crash detection, but the demo is mostly functional. If you want to play with the (slightly hackish) code, it is available at https://github.com/bitcraze/crazyflie-firmware-experimental/tree/icra-2019

You can find us in booth 101 at ICRA 2019, May 20 – 22. Drop by and say hi, check out the demo and tell us what you are working on. We love to hear about all the interesting projects that are going on. See you there!