
Three of us were at ICRA 2019 in Montreal last week, where we met a lot of interesting people and many Crazyflie users. Thanks a lot to everyone who dropped by our booth, and for those who missed it, we are planning on being at IROS 2019 later this year, so we might see you there :-).

We have already described our demo in a previous post; now that we have run it, we can report on how it went. We are also updating the ICRA 2019 page with the latest source code and information so that anyone interested can reproduce the demo.

In its final state at the conference, the demo contained eight Crazyflie 2.1s equipped with the Lighthouse deck and the Qi charger deck. There were eight 3D-printed charging pads on the floor with Ikea Qi wireless chargers, and two HTC Vive base stations (V1) on tripods. The full system was contained in a cage built from 50 cm-long aluminium tubes and nets.

The full setup of the booth took us about 4 hours: about 3 hours for the cage, 15 minutes for the demo itself (including calibration of the lighthouse base-station geometry) and the rest for fine-tuning. This is by far our best setup time. We still need to prettify the cage a bit and make it easier to install, but we will most likely re-use this system for upcoming conferences.

In this demo we aimed at keeping a Crazyflie in the air at all times. To do so, we had a computer connected to all eight Crazyflies that sent one of them the signal to start flying whenever no other was in the air flying a trajectory. The flight itself was completely autonomous, as we explained in our previous blog post. We set up the Crazyflies to fly two cycles and then land, which increased the swap rate and thus the 'action', though it also meant that two Crazyflies were flying during each swap. This drained the batteries a bit more than expected, and after about an hour all the Crazyflies were below the take-off threshold, so we had to wait ~30 seconds between flights. Here is a video of it in action:
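For those curious about the scheduling logic, here is a minimal sketch of the 'one in the air at a time' loop. This is an illustration rather than the actual demo code: the helper functions, thresholds and timings are assumptions (on the real system, the battery voltage would be read from the pm.vbat log variable over the radio).

```python
# Minimal sketch of the scheduler keeping one Crazyflie airborne at a time.
import time

URIS = ['radio://0/80/2M/E7E7E7E7%02X' % n for n in range(8)]

TAKE_OFF_VBAT = 3.8   # assumed voltage threshold for allowing take-off
FLIGHT_TIME = 20.0    # assumed duration of two trajectory cycles, in seconds

def battery_voltage(uri):
    # Placeholder: the real demo reads the pm.vbat log variable over the radio
    return 4.0

def start_flight(uri):
    # Placeholder: the real demo sends the start signal to this Crazyflie
    print('starting flight:', uri)

while True:
    # Pick the first Crazyflie that is charged enough to take off
    ready = [uri for uri in URIS if battery_voltage(uri) > TAKE_OFF_VBAT]
    if ready:
        start_flight(ready[0])
        # Start the next one just before this one lands, so the swap
        # briefly has two Crazyflies in the air
        time.sleep(FLIGHT_TIME - 2.0)
    else:
        time.sleep(30)  # everyone is below the threshold: wait for charging
```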

The demo was very carefree: we had very few crashes, and we mostly restarted the Crazyflies manually to swap batteries and add a bit of power to the swarm. On the last day we decided to spice things up a little by adding a chair to the cage; by calibrating the chair position and the flight trajectory, we managed to have the Crazyflies partly fly under it. This worked quite well most of the time and showed that the lighthouse positioning is repeatable and handles short occlusions in the path fairly well. We also found out that even though a single Crazyflie would always fly the same trajectory, two different Crazyflies will not. We think differences in propeller stiffness, and the fact that our Mellinger position controller has not been calibrated for changing yaw, are the main reasons.

If you want to know more about the demo, or if you want to reproduce it, do not hesitate to visit the ICRA 2019 page, which explains it in more detail and links to the source code of everything, including the 3D-printed parts for the cage and the landing pads.

Hi everyone, here at the Integrated Systems Laboratory of ETH Zürich, we have been working on an exciting project: PULP-DroNet.
Our vision is to enable artificial-intelligence-based autonomous navigation on small flying robots, such as the Crazyflie 2.0 (CF) nano-drone.
In this post, we will give you the basic ideas behind making the CF fly fully autonomously, relying only on onboard computational resources: no human operator, no ad-hoc external signals, and no remote base-station!
Our prototype can follow a street or a corridor while avoiding collisions with unexpected obstacles, even when flying at high speed.


PULP-DroNet is based on the Parallel Ultra Low Power (PULP) project envisioned by the ETH Zürich and the University of Bologna.
In the PULP project, we aim to develop an open-source, scalable hardware and software platform to enable energy-efficient complex computation where the available power envelope is only a few milliwatts, such as advanced Internet-of-Things nodes, smart sensors — and of course, nano-UAVs. In particular, we address the computational demands of applications that require flexible and advanced processing of data streams generated by sensors such as cameras, which is beyond the capabilities of typical microcontrollers. The PULP project has its roots in the RISC-V instruction set architecture, an innovative academic and research open-source alternative to ARM.

The first step to making the CF autonomous was the design and development of what we call the PULP-Shield, a small-form-factor pluggable deck for the CF featuring two off-chip memories (Flash and RAM), a QVGA ultra-low-power grey-scale camera and the PULP GAP8 System-on-Chip (SoC). The GAP8, produced by GreenWaves Technologies, is the first commercially available embodiment of our PULP vision. This SoC features nine general-purpose RISC-V-based cores organised into an on-chip microcontroller (1 core, called Fabric Ctrl) and a cluster accelerator of 8 cores with 64 kB of local L1 memory accessible at high bandwidth from the cluster cores. The SoC also hosts 512 kB of L2 memory.

Then, as the algorithmic heart of our autonomous navigation engine, we selected an advanced artificial intelligence algorithm based on DroNet, a Convolutional Neural Network (CNN) originally developed by our friends at the Robotics and Perception Group (RPG) of the University of Zürich.
To enable the execution of DroNet on our resource-constrained system, we developed a complete methodology to map computationally intense deep neural networks onto the PULP-Shield and the GAP8 SoC.
The network outputs two pieces of information, a probability of collision and a steering angle, which are translated into the dynamic quantities used to control the drone: respectively, the forward velocity and the angular yaw rate. The layout of the network is the following:

Therefore, our mission was to deploy all the required computation onboard our PULP-Shield mounted on the CF, enabling fully autonomous navigation. To put the problem into perspective, in the original work by the RPG, the DroNet CNN enabled autonomous navigation of larger drones (e.g., the Parrot Bebop). In the original use case, computational power and memory were not a problem, because the images were streamed to a remote base-station, typically a laptop consuming 30-100 Watts or more. So our mission required running a similar workload within 1/1000 of the original power budget.
To make this work, we combined fixed-point arithmetic (instead of “traditional” floating point), some minimal modifications to the original topology, and optimised memory and computation usage. This allowed us to squeeze DroNet into the ultra-small power budget available onboard. Our most energy-efficient configuration delivers 6 frames per second (fps) within only 64 mW (including all the electronics on the PULP-Shield), and when we push the PULP platform to its limit, we achieve an impressive 18 fps within just 3.5% of the total CF power envelope — the original DroNet ran at 20 fps on an Intel i7.
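To make the control side concrete, here is a rough Python sketch of how the two CNN outputs can be turned into commands, following the approach of the original DroNet paper: the forward velocity is modulated by the collision probability and both outputs are low-pass filtered for smoothness. The constants are illustrative, not the values used on the PULP-Shield.

```python
# Sketch: turn DroNet's two outputs into flight commands (illustrative values).
V_MAX = 1.0    # assumed maximum forward velocity, in m/s
ALPHA = 0.7    # low-pass filter coefficient (1.0 = no filtering)

v_prev, yaw_prev = 0.0, 0.0

def dronet_to_commands(p_collision, steering):
    """p_collision in [0, 1] and the raw steering prediction from the CNN."""
    global v_prev, yaw_prev
    # Slow down as the probability of collision rises; stop when it is certain
    v_target = V_MAX * (1.0 - p_collision)
    # The steering prediction becomes the yaw-rate target
    yaw_target = steering
    # First-order low-pass filtering avoids jerky commands between frames
    v_prev = (1.0 - ALPHA) * v_prev + ALPHA * v_target
    yaw_prev = (1.0 - ALPHA) * yaw_prev + ALPHA * yaw_target
    return v_prev, yaw_prev
```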

Do you want to check for yourself? All our hardware and software designs, including our code, schematics, datasets, and trained networks, have been released as open source and open hardware on GitHub. We look forward to other enthusiasts' contributions, both in hardware enhancements and in software (e.g., smarter networks), to create a great community of people interested in working together on smart nano-drones.
Last but not least, the piece of information you have all been waiting for. Yes, Bitcraze will soon let you enjoy our PULP-Shield; actually, even better, you will get to play with its evolution! Stay tuned, as more information about the “code-name” AI-deck will be released in upcoming posts :-).

If you want to know more about our work:

Questions? Drop us an email (dpalossi at iis.ee.ethz.ch and fconti at iis.ee.ethz.ch)

Last week we blogged about the early release version of the Lighthouse deck and showed a nice push-around demo of the Crazyflies using the Vive controller. Now we wanted to push the system even further by making a Lighthouse painting!

We started by adding an LED-ring deck to the bottom of the Crazyflie 2.1, with the Lighthouse deck attached to the top. We were able to read the trackpad input of the Vive controller and map it to a hue value. The LED ring can display any color in the RGB range, so in theory you could paint in whatever color you like. For now the brightness was fixed, but this could easily be added to the demo script as well.
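As an illustration, the mapping from a hue value to the LED ring could look something like the sketch below, using the LED ring's solid-color effect through cflib's parameter API. The trackpad read-out is reduced to a single hue input here, and the details should be treated as assumptions rather than the exact demo code.

```python
# Sketch: set the LED ring to a solid color derived from a hue in [0, 1].
# Assumes an already connected Crazyflie 'cf' (cflib) with the LED-ring deck.
import colorsys

def set_ring_hue(cf, hue, brightness=0.5):
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    cf.param.set_value('ring.effect', '7')  # 7 = solid-color effect
    cf.param.set_value('ring.solidRed', str(int(r * 255)))
    cf.param.set_value('ring.solidGreen', str(int(g * 255)))
    cf.param.set_value('ring.solidBlue', str(int(b * 255)))
```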

To capture the light trace we needed to take a long-exposure image, and therefore the flight arena had to stay completely dark. Luckily, this was easy for us since our new testing arena has no windows. Our camera was a Nikon D5600 with a manually controlled shutter setting (press to open the shutter and press again to close it). The aperture was set to f/22. This is very dependent on the environment, though, so we had to do some trial and error to get this parameter right.

Aperture too wide… perfect!

Once we had the setup finished, we made several long-exposure photo paintings with one person controlling the camera and another painting the picture into thin air. Of course, the artist needed to imagine their creation, as we could not see the result until after the picture was taken. Also, big gestures were required to complete the painting, since the Crazyflie's and the Vive controller's movements were synced 1:1; adding a scaling factor would come in handy. Nonetheless, the results were amazing.

Some nice examples of a single Crazyflie flying based on the Vive controller's position, changing color based on the trackpad

We took it even further by making the Crazyflie fly a predefined trajectory with a planned color scheme, without the Vive controller. First, it flew three concentric circles in green, red and blue, using the high-level commander with the PID controller setting (the circles would probably close more neatly with the Mellinger controller setting). We were also able to reproduce the Bitcraze logo in the same fashion. In both long-exposure photos it is still possible to trace the Crazyflie itself by its regular status LEDs, so you can easily observe where it took off and where it flew between shapes.

The Crazyflie flying a predefined trajectory in several shapes
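For reference, here is a minimal sketch of how such a pre-planned flight could be scripted with cflib's high-level commander, flying one circle as a series of go_to setpoints while the ring is lit in one color. It assumes a connected Crazyflie `cf` and the `set_ring_hue()` helper from the sketch above; radii, heights and timings are made-up values, not the ones used for the photos.

```python
# Sketch: fly a circle with the high-level commander while lit in one color.
import math
import time

def fly_circle(cf, radius=0.5, height=1.0, hue=0.3, steps=16):
    commander = cf.high_level_commander
    commander.takeoff(height, 2.0)
    time.sleep(2.0)
    set_ring_hue(cf, hue)
    for i in range(steps + 1):
        a = 2 * math.pi * i / steps
        # Absolute go_to setpoints along the circle, one per second
        commander.go_to(radius * math.cos(a), radius * math.sin(a),
                        height, 0.0, 1.0, relative=False)
        time.sleep(1.0)
    commander.land(0.0, 2.0)
```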

The demo Python scripts for the flights above can be found here:

And we also took a video of the Bitcraze logo being drawn. The mobile phone camera had some problems focusing in the dark, but it gives a good idea of how things work:

We have just released the Crazyflie Lighthouse deck as Early Access! It is now available in our web store.

The Lighthouse deck allows the Crazyflie to estimate its position using the HTC Vive tracking base stations normally used for virtual reality. The positioning is done by tracking the timing of the rotating infrared laser beams emitted by the base stations. This system has the advantages of very good precision and of allowing the Crazyflie to acquire its position autonomously: once the Crazyflie knows the position and orientation of the base stations, it can calculate its own position without the help of any external system.
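To give a feeling for the principle: a V1 base station sweeps a horizontal and a vertical laser plane through the room, each rotation preceded by a synchronization flash, so the angle of a sensor as seen from the base station can be recovered from pulse timing alone. A simplified sketch follows; the real decoding also identifies which base station and axis is sweeping, and the exact angle convention here should be treated as an assumption.

```python
# Sketch: recover a sweep angle from lighthouse V1 pulse timing (simplified).
import math

ROTOR_FREQ = 60.0           # V1 rotors spin at 60 revolutions per second
PERIOD = 1.0 / ROTOR_FREQ   # duration of one full revolution, in seconds

def sweep_angle(t_sync, t_beam):
    """Angle of the sensor as seen from the base station, in radians.

    t_sync: timestamp of the synchronization flash
    t_beam: timestamp when the rotating beam hit the sensor
    Assumes the beam points straight ahead (angle 0) half a revolution
    after the sync flash.
    """
    dt = t_beam - t_sync
    return 2.0 * math.pi * dt / PERIOD - math.pi

# Two such angles per base station (horizontal and vertical sweeps) define a
# ray; the rays from two base stations intersect at the sensor position.
```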

The release as Early Access means that we have finished the hardware and are confident that it is working properly. We have not yet finished all the software and firmware, though; by releasing the hardware early we can get it into the hands of users quickly so they can try it out, and in return we hope to get some help making the software better.

Current state

  • The Crazyflie can calculate its position from the received Vive base station V1 signals.
  • A direct line of sight must be kept to both base stations. The Lighthouse deck has 4 receivers, so in the future it will be possible to get a position while seeing only one base station.
  • Base station V2 support is still being worked on; it will only require a software update.
  • The base-station positions are hard-coded in the Crazyflie and found using SteamVR. Ideally, they should be sent from the ground and the Crazyflie should calculate them automatically.
  • The previous point means that a full VR system, or at least two base stations and a controller or tracker, is required to set up the system. In the future we hope to be able to set up the system with only a Crazyflie and two base stations.
  • Since this version of the deck only has sensors facing upwards, it is important that the base stations are placed above the flight space; the Crazyflies should fly ~40 cm below the base stations.

As long as the deck is in Early Access, the main documentation will be the lighthouse positioning page in the wiki. This page will be updated a lot in the near future and will track the development progress.

Demo

We have written a small demo script that makes it possible to set the position of the Crazyflie using a Vive controller. It is a good demo for experimenting with the precision of the system and with mixing VR and Crazyflies, since they share the same tracking space:

In this demo, a Python script connects to two Crazyflies, acquires the controller position using OpenVR and makes the Crazyflies take off above the controller. Then, when the controller trigger is pulled, the setpoint of the closest Crazyflie is changed to follow the controller movement. The Crazyflies fly autonomously, only receiving position setpoints from the Python script; position estimation and control are handled onboard.
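The core of the script can be sketched roughly as below. This is a condensed illustration rather than the actual demo code: error handling is removed, the axis remapping between the SteamVR and Crazyflie coordinate frames is glossed over, and `cf` is assumed to be an already connected Crazyflie (cflib).

```python
# Condensed sketch of the controller-following loop (not the actual script).
import time
import openvr

vr = openvr.init(openvr.VRApplication_Other)

def controller_position():
    """Return (x, y, z) of the first tracked controller, in metres."""
    poses = vr.getDeviceToAbsoluteTrackingPose(
        openvr.TrackingUniverseStanding, 0, openvr.k_unMaxTrackedDeviceCount)
    for i in range(openvr.k_unMaxTrackedDeviceCount):
        if vr.getTrackedDeviceClass(i) == openvr.TrackedDeviceClass_Controller:
            m = poses[i].mDeviceToAbsoluteTracking
            return m[0][3], m[1][3], m[2][3]  # translation part of the pose
    return None

while True:
    pos = controller_position()
    if pos is not None:
        x, y, z = pos  # assumed already remapped to the Crazyflie frame
        # Stream a position setpoint hovering 0.3 m above the controller
        cf.commander.send_position_setpoint(x, y, z + 0.3, 0.0)
    time.sleep(0.02)   # ~50 Hz setpoint rate
```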

We are pretty excited by this release, since we think this positioning technology will be very useful for a lot of use cases. Let us know what you think, and do not hesitate to contribute if you want to improve the system :).

A few weeks ago we wrote about the release of the Multi-ranger deck and the new STEM ranging bundle.

The STEM ranging bundle is a great addition to the classroom for a wide range of students. By combining the Flow deck v2's time-of-flight distance sensor and optical flow sensor with the Multi-ranger deck's ability to measure distance to objects, the Crazyflie gets position and spatial awareness.

We have shot a video that shows the bundle in action!

 

To get started with the STEM ranging bundle, we have created a guide with step-by-step instructions. The code for the demos in the video is available in the examples directory of the crazyflie-lib-python project:

  • multiranger_push.py: When the application is launched, the Crazyflie takes off and hovers. If anything gets close to the right/left/front/back sensors, the Crazyflie moves in the opposite direction (a condensed sketch of this logic follows below the list).
  • multiranger_pointcloud.py: When the application is launched, the Crazyflie takes off and hovers, and a 3D plot is shown of what the Multi-ranger deck sensors detect. By default the left/right/front/back/up sensors are plotted, but you can also add the Crazyflie position and the down sensor if you like. The Crazyflie can be moved around using the arrow keys on the keyboard, with w/s for up/down and a/d for rotating CCW/CW. For more info, see the documentation in the example.
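As a taste of how little code the push behaviour needs, here is a condensed sketch of its core idea (simplified from multiranger_push.py; the URI, threshold and speed are illustrative):

```python
# Condensed sketch of the push-away logic (simplified from multiranger_push.py).
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander
from cflib.utils.multiranger import Multiranger

URI = 'radio://0/80/2M'   # illustrative URI
THRESHOLD = 0.2           # start backing away when closer than 0.2 m
SPEED = 0.5               # m/s

def repel(distance):
    """Velocity away from an obstacle, 0 if nothing is close."""
    return SPEED if distance is not None and distance < THRESHOLD else 0.0

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    with MotionCommander(scf) as mc, Multiranger(scf) as mr:
        while True:
            # Move away from whichever side an obstacle approaches
            vx = repel(mr.front) - repel(mr.back)
            vy = repel(mr.left) - repel(mr.right)
            mc.start_linear_motion(-vx, -vy, 0.0)
            time.sleep(0.1)
```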

We love feedback so please leave some comments in the field below!

In August we were invited by Marion from ETH Zurich to help out with this year's PolyHack, organized by Telejob, whose theme was drones. We really like these kinds of events, but the reality is that we normally don't have enough time to participate. On this occasion, though, we had the opportunity to both have fun and see how our products perform when used at an event like this. Two birds with one stone, and the decision was made. Together with one of the main sponsors, ELCA, we organized the flying postman challenge:

Drones seem to be the future of parcel delivery, but how is it going to work? Join us to reproduce a swarm of drones delivering parcels through a city and get a glimpse of this future!

The challenge the teams got was to deliver as many parcels as possible within 5 minutes in a miniature city, 4 m x 4 m, using Crazyflies. Since the Crazyflies can't carry much payload, the parcels were just digital/imaginary, but they had to be picked up at a pick-up zone. The teams were allowed to use up to three Crazyflies simultaneously to increase capacity. For more details, check out the challenge description.

To manage the challenge, ELCA developed CrazyServ, a server that exposes a REST API for controlling the Crazyflies (wrapping the high-level position commander) and for picking up parcels. One nice benefit of a server is that it can keep track of which parcels have been picked up and delivered, making the scoring fully automatic.
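As a rough illustration of the concept, commanding a drone then boils down to HTTP requests. Note that the endpoint paths and payloads below are made up for the example; the real API is defined by CrazyServ itself, see its GitHub repository.

```python
# Illustrative sketch of driving a drone through a REST API like CrazyServ's.
# The endpoint paths and payloads here are assumptions, not the real API.
import requests

BASE = 'http://localhost:5000/api'   # assumed server address
GROUP, DRONE = 'team1', 'drone1'     # assumed identifiers

def takeoff():
    requests.post(f'{BASE}/{GROUP}/{DRONE}/takeoff')

def go_to(x, y, z, yaw=0.0, velocity=0.5):
    requests.post(f'{BASE}/{GROUP}/{DRONE}/goto',
                  json={'x': x, 'y': y, 'z': z,
                        'yaw': yaw, 'velocity': velocity})

takeoff()
go_to(1.0, 2.0, 0.5)   # fly towards an imaginary pick-up zone
```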

Bitcraze's part in the challenge was to bring drones, technical support and our Loco Positioning System to make up the 4 m x 4 m city. Or actually three of them, as six teams were going to compete for the victory. The initial information was that the three systems would be installed in separate rooms, far apart, but we ended up having them side by side. That left us with some live hacking, changing from TDoA 2 to TDoA 3 so the anchors would not interfere with each other. We ended up using 12 anchors in total, which gave enough precision for the PolyHackers to complete their challenge.

The PolyHack was a success and we had a great time. The winning team in our challenge, Electek Innovation, managed to deliver 19 parcels during the 5 minutes with the use of a “loop” system. Congrats and well done! If you are inspired by this hackathon, CrazyServ is available on GitHub! Together with, for example, a swarm bundle, it shouldn't be too hard to reproduce.

Thanks, Telejob, for letting us take part in this great event!

 

We have a collaboration with Qualisys, a Swedish manufacturer of top-of-the-line motion capture systems. Like us, they are passionate about what they do and work on high-tech products, and to make it even better, they are located in Gothenburg, just a couple of hours away by train. If you are not familiar with motion capture systems: they track objects fitted with reflective markers using high-resolution cameras. The precision/accuracy is very good (sub-millimeter) and can be used to track more or less anything, such as the movements of a human body or the position of a robot, for instance a Crazyflie. The MoCap system calculates the position of the Crazyflie, and by sending it to the drone via radio, the drone can fly autonomously.
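On the Crazyflie side, feeding an external position into the onboard state estimator is a one-liner per sample. A minimal sketch, assuming a connected Crazyflie `cf` via cflib; the `get_mocap_position()` helper is a placeholder for whatever the motion capture SDK provides:

```python
# Sketch: forward MoCap positions to the Crazyflie's onboard estimator.
import time

def get_mocap_position():
    # Placeholder returning (x, y, z) in metres in the Crazyflie frame;
    # in reality this would come from the MoCap system's API.
    return 0.0, 0.0, 1.0

while True:
    x, y, z = get_mocap_position()
    cf.extpos.send_extpos(x, y, z)   # feeds the Kalman filter onboard
    time.sleep(0.01)                 # ~100 Hz
```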

Qualisys

We are super happy to get the opportunity to work with MoCap systems and to make them an integral part of the Bitcraze ecosystem. We have already added support for the Qualisys system in Crazyswarm, and soon there will be a tab in the Crazyflie Python client for basic autonomous flight using a Qualisys system. We will release a passive MoCap deck in the near future that makes it easy to attach reflective markers to a Crazyflie in a well-known configuration; see this blog post for more information. Furthermore, we are looking at making an active marker deck that uses Qualisys' active marker technology to both position and identify an object at the same time.

Recently we spent a day in Qualisys' large lab. We played with the LPS system in a larger setup, experimented with passive MoCap deck configurations and finally tried to fly a swarm.

Martin and Tobias configuring MoCap decks

Unfortunately we ran out of time, and we tried to push the envelope a bit too far, so we never managed to fly the full sequence without crashes. On the other hand, getting that close in a couple of hours is not too bad. Even though the full swarm did not work out, we learned new things and had a lot of fun. Thanks, Martin, and everyone at Qualisys!

 

If you are looking for a motion capture system and want more information about Qualisys, please do not hesitate to contact us or Qualisys.

In this blog post we will describe one of the demos we ran at IROS and how it was implemented. Conceptually, this demo is based on the same ideas as the ICRA 2017 demo, but the implementation is completely new and much cleaner.

The demo is fully autonomous (no computer in the loop) but it requires an external positioning system. We flew it using either the Loco Positioning System or the prototype Lighthouse system.
A button has been added to the LPS deck to start the demo. When the button is pressed, the Crazyflie waits for position lock, takes off and repeats a predefined spiral trajectory until the battery is almost empty, at which point it goes back to the door of the cage and lands.
For some reason we forgot to shoot a video at IROS, so a reproduced version from the (messy) office will have to do instead; imagine a 2×2 m net cage around the Crazyflie.

Implementation

As mentioned in an earlier blog post, the demo uses the high-level commander originally developed by Wolfgang Hoenig and James Alan Preiss for Crazyswarm. We prototyped everything in Python (sending commands to the Crazyflie via the Crazyradio) to get started quickly and to design the demo. Designing trajectories for the high-level commander is not trivial, and it took some time to get them right. What we wanted was a downward spiral motion followed by a climb back up along the Z-axis in the centre of the spiral. The high-level commander is a bit picky about discontinuities, so we used sines for height and radius to generate a smooth trajectory.

Trajectories in the high-level commander are defined as a number of pieces, each describing x, y, z and yaw for a short part of the full trajectory. When flying the trajectory, the pieces are traversed one after the other. Each piece is described by 4 polynomials with 8 terms each, one polynomial for each of x, y, z and yaw. The tricky part is finding the polynomials, and we decided to do it by cutting our trajectory into segments (4 per revolution), generating coordinates for a number of points along each segment and finally using numpy.polyfit() to fit polynomials to the points.
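In condensed form, the fitting step can be sketched as below. This is an illustration of the method rather than the exact demo code; the spiral parameters and segment timing are made-up values.

```python
# Sketch: fit degree-7 polynomials (8 terms) to spiral segments for the
# high-level commander, 4 segments per revolution.
import numpy as np

SEGMENT_TIME = 2.0   # assumed duration of one piece, in seconds
POINTS = 20          # samples per segment used for the fit

def fit_segment(t0):
    """Fit x, y, z and yaw polynomials for the piece starting at time t0."""
    t = np.linspace(0.0, SEGMENT_TIME, POINTS)
    # Sines for radius and height give the smooth, discontinuity-free spiral
    radius = 0.5 + 0.3 * np.sin(0.2 * (t0 + t))
    height = 1.0 + 0.5 * np.sin(0.1 * (t0 + t))
    angle = (np.pi / 4.0) * (t0 + t)   # quarter revolution per segment
    x, y, z = radius * np.cos(angle), radius * np.sin(angle), height
    yaw = np.zeros_like(t)
    # One degree-7 polynomial per axis, in piece-local time. Note that
    # numpy.polyfit() returns coefficients highest power first, while the
    # firmware's trajectory format expects lowest power first.
    return [np.polyfit(t, values, 7)[::-1] for values in (x, y, z, yaw)]

pieces = [fit_segment(i * SEGMENT_TIME) for i in range(8)]  # two revolutions
```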

When we were happy with the trajectory it was time to move it to the Crazyflie. Everything is implemented in the app.c file and is essentially a timer loop with a state machine issuing the same commands that we issued from Python (such as take off, go to and start trajectory). A number of functions in the firmware had to be exposed globally for this to work; maybe not correct from an architectural point of view, but one has to do what one has to do to get the demo running :-) The full source code is available on GitHub. Note that the makefile is hardcoded for the Crazyflie 2.1; if you want to play with the code on a CF 2.0, you have to update the sensor setting.

This approach led to an idea for a possible future app API (for apps running on the Crazyflie) containing functionality similar to the Python lib. This would make it easy to prototype an app in Python and then port it to firmware.

Controllers

The standard PID controller is very forgiving and usually handles noise and outliers from the positioning system fairly well. We used it with the LPS system, since there is some noise in the position estimated by an ultra-wideband system. The Lighthouse system, on the other hand, is much more precise, so we switched to the Mellinger controller when using it. The Mellinger controller is more agile but also more sensitive to position errors, and it tends to flip when something unexpected happens. It is possible to use the Mellinger controller with the LPS as well, but the probability of a crash was higher and we prioritised a carefree demo over agility. An extra bonus of the Mellinger controller is that it also handles yaw (as opposed to the PID controller), and we made use of this when flying with the Lighthouse.
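From the client side, switching controllers is just a parameter change. A minimal sketch, assuming a connected Crazyflie `cf` via cflib (the values 1 and 2 select the PID and Mellinger controllers in the firmware at the time of writing):

```python
# Sketch: select the position controller from cflib before flying.
PID_CONTROLLER = '1'
MELLINGER_CONTROLLER = '2'

# Forgiving controller, a good match for the noisier LPS position estimates
cf.param.set_value('stabilizer.controller', PID_CONTROLLER)

# More agile and yaw-capable, used with the precise Lighthouse system
cf.param.set_value('stabilizer.controller', MELLINGER_CONTROLLER)
```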

Going faster

Since the precision of the Lighthouse positioning system is so much better, we increased the speed to add some extra excitement. It turned out to work so well that the Crazyflie almost touched the panels at the back, over and over again, without any problems!

One of the reasons we designed the trajectory the way we did was actually to make it possible to fly multiple copters at the same time: the trajectories never cross. As long as a Crazyflie is not hit by the downwash of another copter too close above it, all is good. Since the demo is fully autonomous and the copters have no knowledge of each other, we simply started them at appropriate intervals to separate them in space. We managed to fly three Crazyflies simultaneously with a fairly high degree of stability this way.

As mentioned in an earlier post, this year we are going to exhibit at IROS 2018 in Madrid. Every time we go to fairs and exhibitions, it is an occasion for us to work more on integration, putting the latest developments together into a demo we can show at the event. One of the latest developments we will show at IROS is the Lighthouse deck.

Work on the Lighthouse deck has continued during the summer, and we are now at a stage where things are starting to work quite well with Lighthouse V1 base stations. We are quite impressed by the performance: we have measured positioning noise below 1 mm. We are flying the Crazyflie using Crazyswarm, which allows us to fly smooth trajectories using the high-level commander:

The goal for IROS is to stabilize the code and push it to the main Crazyflie firmware repositories. We will have a couple of Crazyflies set up with the Lighthouse deck that we will be able to demonstrate. In the future we are also thinking of making a general-purpose tag that could be used with other robots. One of the great advantages of the lighthouse tracking technology is that the position and orientation are available in the receiver, that is, in the robot. This means that, as with the LPS, the robots are autonomous and do not require an active data connection to a computer in order to locate themselves.

There are still a lot of challenges and work to be done on the deck. For one, it currently uses the HTC Vive Lighthouse base station V1; Valve has released base station V2, which covers much more space per base station and allows more than 2 base stations in the same system, and we plan to implement support for it. We will also need to work on multi-sensor localization and the setup procedure. Currently the Crazyflie calculates its position using only one lighthouse receiver and requires a direct line of sight to both base stations; by using more receivers it is possible to get a position and orientation with only one base station in sight, which will increase the system's reliability. As for the system setup, we are still using SteamVR to obtain the base-station positions, which requires at least one Vive controller; the goal is eventually to be able to set up a system with the Crazyflie alone, without needing to install SteamVR. All of that will most likely be discussed in more detail in future posts.

If you are attending IROS 2018, feel free to come and meet us at booth #91.