
Drones can perform a wide range of interesting tasks, from crop inspection to search-and-rescue. However, to make drones practically attractive, they should be safe and cheap. Drones can be made safer by reducing their size and weight, so that they cause less damage in a collision with people or the environment. Being cheap also means that drones can take more risk – losing one is less expensive – or be deployed in larger numbers.

To function autonomously, such a drone should at least have some basic navigation capabilities. External position references such as GPS or UWB beacons can provide these, but such a reference is not always available. GPS is not accurate enough in indoor settings, and beacons require prior access to the area of operation and add extra cost.

Without these references, navigation becomes tricky. The typical solution is to have the drone construct a map of its local environment, which it can then use to determine its position and to plan trajectories towards important places. But on tiny drones, the on-board computational resources are often too limited to construct such a map. How, then, can these tiny drones navigate? A subquestion of this – how to follow previously traversed routes – was the topic of my MSc thesis, supervised by Kimberly McGuire and Guido de Croon at TU Delft, and of my subsequent PhD studies. The solution has recently been published in Science Robotics – “Visual route following for tiny autonomous robots” (TU Delft mirror here).

Route following

In an ideal world, route following could be performed entirely by odometry: the measurement and recording of one’s own movements. If a drone measured the distance and direction it traveled, it could simply perform the same movements in reverse and end up at its starting place. In reality, however, this does not quite work. While current-day movement sensors such as the Flow deck are certainly accurate, they are not perfect: every measurement contains a small error. To traverse longer distances, many measurements are summed, which causes the error to grow impractically large. It is this integration of errors that stops drones from using odometry over longer distances.

The trick to traveling longer distances is to prevent this buildup of errors. To do so, we propose to let the drone perform ‘visual homing’ maneuvers. Visual homing is a control strategy that lets an agent return to a location where it has previously taken a picture, called a ‘snapshot’. To find its way back, the agent compares its current view of the environment to the snapshot it took earlier. The key insight is that the difference between these two images grows smoothly with distance. Conversely, if the agent can find the direction in which this difference decreases, it can follow that direction to converge back to the snapshot’s original location.

The difference between images smoothly increases with their distance.
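To make this concrete, here is a minimal sketch (not the algorithm from the paper, just an illustration of descending the image-difference surface), where `capture` is a hypothetical function returning the panoramic image seen at a given 2D position:

```python
import numpy as np

def image_difference(img_a, img_b):
    """Mean absolute pixel difference between two panoramic images."""
    return np.mean(np.abs(img_a.astype(float) - img_b.astype(float)))

def homing_step(capture, snapshot, position, step=0.05):
    """Try small test displacements and keep the one whose view differs
    least from the snapshot, i.e. move 'downhill' on the difference surface."""
    position = np.asarray(position, dtype=float)
    candidates = [position + step * np.array(d)
                  for d in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
    scores = [image_difference(capture(c), snapshot) for c in candidates]
    return candidates[int(np.argmin(scores))]
```

Repeating such a step moves the agent closer and closer to the place where the snapshot was taken.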

So, to perform long-distance route following, we command the drone to take snapshots along the way, in addition to odometry measurements. Then, when retracing the route, the drone routinely performs visual homing maneuvers to align itself with these snapshots. Because the error after a homing maneuver is bounded, there is no longer a growing deviation from the intended path! This makes long-range route following possible without excessive drift.
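In pseudocode-like Python, the whole record-and-replay strategy might then look as follows. The `drone` object is a hypothetical interface (none of these method names come from the actual firmware); it bundles odometry, snapshot capture and the homing step sketched above:

```python
def record_route(drone, n_waypoints, segment_length):
    """Fly a route, storing an odometry position and a snapshot per waypoint."""
    route = []
    for _ in range(n_waypoints):
        drone.fly_forward(segment_length)
        route.append((drone.odometry_position(), drone.take_snapshot()))
    return route

def follow_route_back(drone, route, tolerance=0.1):
    """Retrace the route in reverse: dead-reckon to each stored waypoint,
    home onto its snapshot, then reset the position estimate."""
    for position, snapshot in reversed(route):
        drone.fly_to(position)                   # odometry only, drift accumulates
        while drone.image_difference(snapshot) > tolerance:
            drone.homing_step(snapshot)          # bounded-error visual correction
        drone.reset_position_estimate(position)  # cancel the accumulated drift
```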

Implementation

The article mentioned above describes the strategy in more detail. Rather than repeat what is already written, I would like to give a bit more detail on how the strategy was implemented, as this is probably more relevant for other Crazyflie users.

The main difference between our drone and an out-of-the-box one is that our drone needs to carry a camera for navigation. Not just any camera: the method under investigation requires a panoramic camera so that the drone can see in all directions. For this, we bought a Kogeto Dot 360, a cheap aftermarket lens for an older iPhone that provides exactly the field of view we need. After a bit of dremeling and taping, it is also suitable for drones.

ARDrone 2 with panoramic camera lens.

The very first visual homing experiments were performed on an ARDrone 2. The drone already had a bottom camera, to which we fitted the lens. Using this setup, the drone could successfully navigate back to the snapshot’s location. However, the ARDrone 2 hardly qualifies as small: it is approximately 50 cm wide, weighs 400 grams and carries a full Linux computer.

To prove that the navigation method would indeed work on tiny drones, the setup was downsized to a Crazyflie 2.0. While this drone could take off with the camera assembly, it quickly became unstable as the battery level decreased: the camera was just a bit too heavy. Another attempt was made on an Eachine Trashcan, heavily modified to support the camera, a Flow deck and custom autopilot firmware. While this drone had more than enough lift, the overall reliability of the platform never became good enough to perform full flight experiments.

After discussing the above issues, I was very kindly offered a prototype of the Crazyflie Brushless to see if it would help with my experiments. And it did! The Crazyflie Brushless has more lift than the regular platform and could maintain a stable attitude and height while carrying the camera assembly, all with a reasonable flight time. Software-wise it works pretty much the same as the regular Crazyflie, so it was a pleasure to work with. This drone became the one we used for our final experiments, and it was even featured on the cover of the Science Robotics issue.

With the hardware finished, the next step was to implement the software. Unlike the ARDrone 2, which had a full Linux system with reasonable memory and computing power, the Crazyflie only has an STM32 microcontroller that is also tasked with flying the drone (plus an nRF SoC, but that is out of scope here). The camera board developed for this drone features an additional STM32, which performed most of the image processing and visual homing tasks at a framerate of a few hertz. However, the resulting guidance also has to be followed, and this part is more relevant for other Crazyflie users.

To provide custom behavior on the Crazyflie, I used the app layer of the autopilot. The app layer allows users to add custom code to the autopilot while keeping it mostly decoupled from the underlying firmware. The out-of-tree setup makes it easier to keep only the custom code under version control, and also means that the app is not as tied to a specific firmware version as an in-tree modification would be.

The custom app performs a small number of crucial tasks. Firstly, it is responsible for communication with the camera. This was done over UART, as this was already implemented in the camera software and the bus was not used for other purposes on the Crazyflie. Over this bus, the autopilot could receive visual guidance from the camera and send basic commands, such as starting and stopping image captures. Pprzlink was used as the UART protocol, a leftover from the earlier ARDrone 2 and Trashcan prototypes.

The second major task of the app is to make the drone follow the visual guidance. This consists of two parts. Firstly, the drone should be able to follow visual homing vectors. This was achieved using the Commander Framework, part of the Stabilizer Module. Once started, the custom app enters an infinite loop running at 10 Hz. After takeoff, the app repeatedly calls commanderSetSetpoint to set absolute position targets, which are found by adding the latest homing vector to the current position estimate. The regular autopilot then takes care of the low-level control that steers the drone to these coordinates.
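On-board, this setpoint streaming happens in C through commanderSetSetpoint; for readers who prefer to script similar behavior from a PC, the rough off-board equivalent is streaming absolute position setpoints with cflib, something like the sketch below. The homing vector and position estimate here are placeholder callables, and the drone is assumed to be airborne already:

```python
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M/E7E7E7E7E7'  # adjust to your setup

def follow_homing_vector(cf, get_position_estimate, get_homing_vector, duration=5.0):
    """Stream absolute position setpoints at 10 Hz; each target is the current
    estimate plus the latest homing vector."""
    t_end = time.time() + duration
    while time.time() < t_end:
        x, y, z = get_position_estimate()
        dx, dy = get_homing_vector()
        cf.commander.send_position_setpoint(x + dx, y + dy, z, 0.0)
        time.sleep(0.1)  # 10 Hz, matching the on-board loop rate

if __name__ == '__main__':
    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
        # placeholders: hover height 0.5 m, homing vector 0.2 m forward
        follow_homing_vector(scf.cf, lambda: (0.0, 0.0, 0.5), lambda: (0.2, 0.0))
```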

The core idea of our navigation strategy is that the drone can correct its position estimate after arriving at a snapshot. So, secondly, the drone should be able to overwrite its position estimate with the one provided by the route-following algorithm. To simplify the integration with the existing state estimator, this update was implemented as an additional position sensor – similar to an external positioning system. Once the drone had converged to a snapshot, it would enqueue the snapshot’s remembered coordinates as a position measurement with a very small standard deviation, essentially overwriting the position estimate without needing to modify the estimator. The same trick was also used to correct heading drift.
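This is the same estimator entry point that external positioning systems use, which cflib exposes off-board as send_extpos. Purely as an illustration of the idea (the real correction happens on-board in the app, not over the radio), feeding a position into the estimator from the ground looks like this:

```python
from cflib.crazyflie import Crazyflie

def overwrite_position_estimate(cf: Crazyflie, snapshot_x, snapshot_y, snapshot_z):
    """Push the snapshot's remembered coordinates into the on-board estimator
    as an external position measurement, pulling the estimate back to them."""
    cf.extpos.send_extpos(snapshot_x, snapshot_y, snapshot_z)
```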

The final task of the app was to make the drone controllable from a ground station. After some initial experiments, it was determined that fully autonomous flight would be the easiest to implement and use during the experiments. To this end, the drone needed to be able to follow more complex procedures and to communicate with a ground station.

Because the cfclient provides most of the necessary functions, it was used as the basis for the ground station. However, the experiments required extra controls that were of course not part of a generic client. While it would have been possible to modify the cfclient, an easier solution was offered by its integrated ZMQ server. This server allows external programs to communicate with the stock cfclient over a TCP connection; among other things, it lets them send control values and parameters to the drone. Since the drone would be flying autonomously, low update rates would suffice, so the ground station simply sets parameters exposed by the custom app. For usability, a simple GUI was made in Python using the CFZmq library and Tkinter. The GUI requests foreground priority so that it is shown on top of the regular client, making it easy to use both at the same time.

Cfclient with experimental overlay (bottom right).
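For illustration, setting one of the app’s parameters through the cfclient’s ZMQ interface could look like the snippet below. The port (1213) and the exact JSON format are my assumptions about the cfclient’s ZMQ parameter service, and the parameter name is hypothetical, so check the client documentation for your version:

```python
import zmq

# Connect to the cfclient's ZMQ parameter service (assumed on port 1213).
context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect('tcp://127.0.0.1:1213')

# Set a (hypothetical) parameter exposed by the custom app, e.g. to start an experiment.
socket.send_json({
    'version': 1,
    'cmd': 'set',
    'name': 'homingapp.start_experiment',
    'value': 1,
})
print(socket.recv_json())  # the reply contains a status field
```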

To perform more complex experiments, each experiment was implemented as a state machine in the custom app. Using the (High-level) Commander Framework and the navigation routines described above, the drone was able to perform entire experiments from take-off to landing.

While the code is very far from production quality, it is open source, so you can see how everything was implemented: https://github.com/tomvand/2020-visualhoming-crazyflie . The PCB used to fit Crazyflie decks to the Eachine Trashcan can be found here: https://github.com/tomvand/cf-deck-carrier .

Outcome

Using the hardware and software described above, we were able to perform the route-following experiments. The drone was commanded to fly a preprogrammed trajectory using the Flow deck, while recording odometry and snapshot images. Then, the drone was commanded to follow the same route in reverse, traveling short sections by dead reckoning and then using visual homing to correct the incurred drift.

As shown in the article, the error with respect to the recorded route remained bounded. Therefore, we can now travel long routes without having to worry about drift, even under strict hardware limitations. This is a great improvement to the autonomy of tiny robots.

I hope that this post has given a bit more insight into the implementation behind this study, a part that is not often highlighted but is very interesting for everyone working in this field.

As you probably noticed already, this summer I experimented with ROS2 and with connecting the Crazyflie with the Multi-ranger deck to several mapping and navigation nodes (see this and this blogpost). First I started with an experimental repository on my personal GitHub account called crazyflie_ros2_experimental, where I already managed to do some mapping and navigation. In August we started porting most of this functionality to the Crazyswarm2 project, so that is what this blogpost is mostly about.

Crazyswarm goes ROS2

Most of you are already familiar with Crazyswarm for ROS1, a project that Wolfgang Hönig and James Preiss have maintained since its creation in 2017 at the University of Southern California. Since then, many have used and referred to this work: the paper has been cited more than 260 times, and of all the Crazyflie papers at the latest ICRA and IROS conferences, 50% used Crazyswarm as their communication middleware. If you haven’t heard about Crazyswarm yet, check out the nice BAMdays talk Wolfgang gave last year.

Unfortunately, ROS1 will not be around forever: it will be phased out in 2025 and will not be supported on Ubuntu 22.04 and up. Therefore Wolfgang, now at the Intelligent Multi-robot Coordination Lab at TU Berlin, has already started the ROS2 port of Crazyswarm, namely Crazyswarm2. The same setup of a C++-based Crazyflie server and a Python wrapper has been implemented, along with a simple position-based simulation and teleop nodes. Mind that Crazyswarm2 is just the project name, kept for historical reasons; the package itself can be used for individual Crazyflies as well, which is why the packages are called crazyflie_*

Porting the Summer Hack project to Crazyswarm2

The crazyflie_ros2_experimental repository was fun to hack around in, as it was (as the name suggests) experimental and I didn’t need to worry about releases, bugfixes, etc. However, the problem with developing only there is that the further you go, the more work it becomes to make it official. That is when Wolfgang and I sat down and started talking about porting what I had done over the summer into Crazyswarm2. This is also a good opportunity to get more involved with the project, especially with so many Crazyfliers using ROS as well.

The first step was to write a second crazyflie_server node that relies on the Python CFlib. This means that many of the variables I used to hardcode in the experimental node needed to be defined within the ROS2 parameter structure. The crazyflies.yaml file is where anything relevant for the server (like the URIs and parameters) needs to be defined. Both the C++ backend server and the CFlib backend server use the same parameters. The functionality of the two servers is also pretty similar, except that logging is only possible in the CFlib version and uploading/following trajectories is only possible in the C++ version. An overview will be provided soon on the Crazyswarm2 documentation website.

The second step was to make the crazyflie_server (CFlib) node suitable for connecting to the external packages that I worked with during the hack project. To that end, there are some special logging modes that enable the server to output not only generic topics based on logging, but also Pose/Odometry/LaserScan messages along with transforms. This allowed the SLAM toolbox to use the data from the Crazyflie itself to create a map, of which you can see an example in this tutorial.
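As an illustration of the LaserScan part, packing the four Multi-ranger readings into a (very sparse) scan inside an rclpy node could look roughly like this; the frame name, beam ordering and ranges are placeholders, not the actual Crazyswarm2 code:

```python
import math
from sensor_msgs.msg import LaserScan

def multiranger_to_laserscan(node, ranges_m, frame_id='crazyflie'):
    """Pack four Multi-ranger readings (meters, e.g. front/left/back/right)
    into a 4-beam LaserScan; out-of-range readings become infinity."""
    scan = LaserScan()
    scan.header.stamp = node.get_clock().now().to_msg()
    scan.header.frame_id = frame_id
    scan.angle_min = 0.0
    scan.angle_max = 1.5 * math.pi
    scan.angle_increment = 0.5 * math.pi  # one beam per side of the drone
    scan.range_min = 0.01
    scan.range_max = 4.0                  # roughly the sensor's useful range
    scan.ranges = [r if r is not None and r < scan.range_max else float('inf')
                   for r in ranges_m]
    return scan
```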

Moreover, for navigation it was important that incoming Twist messages, either from the keyboard or from a navigation toolkit, were handled properly. Most of these packages assume a 2D non-holonomic robot, but a quadcopter like the Crazyflie needs to first take off, stay in the air and land. Therefore, in the examples, a separate node (vel_mux.py) was written that receives incoming Twist messages, first has the Crazyflie take off using the high-level commander, and then keeps sending hover commands to keep it in the air until a land service is called.
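A stripped-down sketch of that idea is shown below. It is not the real vel_mux.py: the takeoff service here is a std_srvs/Trigger stand-in and the topic names are made up, but the structure (take off once, then keep forwarding velocity commands) is the same:

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from std_srvs.srv import Trigger  # stand-in for the real takeoff service type

class VelMuxSketch(Node):
    """Minimal velocity multiplexer: take off on the first incoming Twist,
    then forward velocities while airborne."""
    def __init__(self):
        super().__init__('vel_mux_sketch')
        self.flying = False
        self.create_subscription(Twist, 'cmd_vel', self.on_twist, 10)
        self.hover_pub = self.create_publisher(Twist, 'cf/hover_sketch', 10)
        self.takeoff_client = self.create_client(Trigger, 'cf/takeoff_sketch')

    def on_twist(self, msg: Twist):
        if not self.flying:
            self.takeoff_client.call_async(Trigger.Request())  # fire-and-forget takeoff
            self.flying = True
            return
        self.hover_pub.publish(msg)  # keep streaming commands to stay in the air

def main():
    rclpy.init()
    rclpy.spin(VelMuxSketch())

if __name__ == '__main__':
    main()
```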

What’s next?

As you probably noticed, the project is still under development, but at least it is now in a good enough state that we feel comfortable presenting it at the upcoming ROScon :) We also want to include a more official simulation package, especially now that the Crazyflie has recently become part of the official release of Webots 2022b, but we are currently waiting for webots_ros2 to be released in the Ubuntu packages. Moreover, the idea is to provide multiple simulation backends so that, based on the requirements of the topic (swarms, vision-based, etc.), the user can select the simulation most useful for their situation. We would also like to even out the missing items (trajectory handling, logging) in the CFlib and C++ backends of the crazyflie_server so that they can be used interchangeably. And I saw that the experimental simple mapper node has been featured on social media, so perhaps we should convert that to Crazyswarm2 as well :)

Once we have gotten most of the above-mentioned issues out of the way, it will be time to start discussing the official release of a ROS2 Crazyflie package, with its source code residing in the Crazyswarm2 repository. In the meantime, it would be awesome if anybody who is interested in ROS2, or wants to upgrade their Crazyswarm(1) packages to ROS2 soon, gave the package a whirl. The more people try it out and report bugs or propose fixes, the more stable it becomes and the closer it will come to an official release! Please join us and start any discussions on the Crazyswarm2 project GitHub repository.

Now it is time to give a little update about the ongoing ROS2-related projects. About a month ago we gave you a heads-up about the summer ROS2 project I was working on, and even though the end goal hasn’t been reached yet, enough has happened in the meantime to write a blogpost about it!

Crazyflie Navigation

Last time I mostly showed the mapping of a single room, so currently I’m trying to map a bigger portion of the office. This was more difficult than initially anticipated: it worked quite well in simulation, but in real life the Multi-ranger deck saw obstacles that weren’t there. Later we found out that this was due to a year-old issue with the Multi-ranger driver’s inability to handle out-of-range measurements properly (see this ongoing PR). With that, larger-scale mapping starts to become possible, which you can see here with the simple mapper node:

If you watch the video until the end, you can notice that the map starts to diverge a bit, since the position and orientation are based solely on the Flow deck and the gyroscopes, which is a big reason to get the SLAM toolbox to work with the Multi-ranger. However, it is difficult to combine it with such a sparse ‘lidar’, so while that still requires some tuning, I’ve taken this opportunity to see how far I can get with non-SLAM mapping and the Nav2 package!

As you can see from the video, the Crazyflie made it to the second hallway. Afterwards it was commanded to fly back based on a Nav2 waypoint set in RViz2. In the beginning it seemed to do quite well, but around the door of the last room the Crazyflie got into a bit of trouble. The doorway entrance is already as small as it is, and around that moment the mapping also started to diverge: the new map covered the old map, blocking the original pathway back into the room. But still, it came pretty close!

The divergence of the map is currently the blocker for larger office navigation, so it would be nice to get better localization working so that the map is not constantly changed by drifting position estimates. I’m pretty hopeful I’ll be able to figure that out in the next few weeks.

Crazyflie ROS2 node with CrazySwarm2

Based on the poll we put out in the last blogpost, it seems that many of you are mostly positive about work towards a ROS2 node for the Crazyflie! As some of you know, the Crazyswarm project, which many of you already use for your research, is currently being ported to ROS2 by Wolfgang Hönig’s IMRCLab as the Crazyswarm2 project. Instead of creating separate ROS2 nodes in parallel and adding to the confusion in the community, we have decided together with Wolfgang to place all ROS2-related development in Crazyswarm2. The name of the project will stay the same for historical reasons, but since this is meant to be the standard Crazyflie ROS2 package, the node names will become more generic upon official release in the future.

To this end, we’ve pushed a CFlib Python version of the Crazyflie ROS2 node called crazyflie_server_py, partly based on my hackish efforts in the crazyflie_ros2_experimental version, so that users have a choice of which communication backend to use for the Crazyflie. For now the node simply creates services for each individual Crazyflie and for the entire swarm for the take_off, land and go_to commands. Next up are logging and parameter handling, positioning support and a broadcasting implementation for the CFlib, so please keep an eye on this ticket to follow the progress.
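From the user side, calling such a service from another ROS2 node could look like the sketch below. The per-Crazyflie service name and the fields of the Takeoff service are written down as I expect them to appear in crazyflie_interfaces, so double-check them against the actual package:

```python
import rclpy
from rclpy.node import Node
from builtin_interfaces.msg import Duration
from crazyflie_interfaces.srv import Takeoff  # field names assumed below

class TakeoffCaller(Node):
    """Sketch: ask one Crazyflie handled by the server to take off."""
    def __init__(self):
        super().__init__('takeoff_caller_sketch')
        # '/cf231/takeoff' is a guess at the per-Crazyflie service name
        self.client = self.create_client(Takeoff, '/cf231/takeoff')

    def send(self, height=0.5, seconds=2):
        req = Takeoff.Request()
        req.height = float(height)
        req.duration = Duration(sec=seconds)
        return self.client.call_async(req)

def main():
    rclpy.init()
    node = TakeoffCaller()
    node.client.wait_for_service()
    rclpy.spin_until_future_complete(node, node.send())

if __name__ == '__main__':
    main()
```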

So hopefully, once the summer project has been completed, I can start porting the navigation capabilities into the Crazyswarm2 repository with a nice tutorial :)

ROScon talk

As mentioned in a previous blogpost, we’ll actually be talking about the Crazyflie ROS2 efforts at ROScon 2022 in Kyoto in collaboration with Wolfgang. You can find the talk here in the ROScon program, so hopefully I’ll see you at the talk or the week after at IROS!