
Drones can perform a wide range of interesting tasks, from crop inspection to search-and-rescue. However, to be practically attractive, drones should be safe and cheap. Drones can be made safer by reducing their size and weight, so that they cause less damage in a collision with people or the environment. Being cheap, in turn, means that drones can take more risk – as it is less expensive to lose one – or be deployed in larger numbers.

To function autonomously, such a drone should at least have some basic navigation capabilities. External position references such as GPS or UWB beacons can provide these, but such a reference is not always available. GPS is not accurate enough in indoor settings, and beacons require prior access to the area of operation and add extra cost.

Without these references, navigation becomes tricky. The typical solution is to have the drone construct a map of its local environment, which it can then use to determine its position and to plan trajectories towards important places. But on tiny drones, the on-board computational resources are often too limited to construct such a map. How, then, can these tiny drones navigate? A subquestion of this – how to follow previously traversed routes – was the topic of my MSc thesis, supervised by Kimberly McGuire and Guido de Croon at TU Delft, and of my PhD studies. The solution has recently been published in Science Robotics – “Visual route following for tiny autonomous robots” (TU Delft mirror here).

Route following

In an ideal world, route following can be performed entirely by odometry: the measurement and recording of one’s own movements. If a drone measured the distance and direction it traveled, it could simply perform the same movements in reverse and end up at its starting place. In reality, however, this does not quite work. While current-day movement sensors such as the Flow deck are certainly accurate, they are not perfect: every measurement includes a small error. To traverse longer distances, many measurements have to be summed, which causes the error to grow impractically large. It is this integration of errors that prevents drones from using odometry over longer distances.
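To make the growth rate concrete: assuming each odometry step contributes an independent, zero-mean error $e_i$ with standard deviation $\sigma$, the variance of the accumulated position error grows linearly with the number of steps $N$, so the expected drift grows with the square root of the traveled distance (a standard random-walk argument, included here only for intuition):

$$\mathrm{Var}\Big(\sum_{i=1}^{N} e_i\Big) = \sum_{i=1}^{N}\mathrm{Var}(e_i) = N\sigma^2 \quad\Rightarrow\quad \text{drift} \sim \sigma\sqrt{N}.$$

With a systematic bias on top of the noise, the drift grows even faster – linearly in $N$.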

The trick to traveling longer distances is to prevent this buildup of errors. To do so, we propose to let the drone perform ‘visual homing’ maneuvers. Visual homing is a control strategy that lets an agent return to a location where it has previously taken a picture, called a ‘snapshot’. In order to find its way back, the agent compares its current view of the environment to the snapshot that it took earlier. The key property is that the difference between these two images grows smoothly with distance. Conversely, if the agent can find the direction in which this difference decreases, it can follow this direction to converge back to the snapshot’s original location.

The difference between images smoothly increases with their distance.
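As a sketch of the underlying idea (my notation, not the exact formulation from the paper): treat the image difference as a cost function over position and descend it. With $I(\mathbf{x})$ the panoramic image captured at position $\mathbf{x}$ and $S$ the stored snapshot,

$$D(\mathbf{x}) = \lVert I(\mathbf{x}) - S \rVert, \qquad \mathbf{v}_{\text{homing}} \propto -\nabla_{\mathbf{x}} D(\mathbf{x}),$$

so moving along the direction in which $D$ decreases brings the agent back to the location where the snapshot was taken and $D$ is minimal.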

So, to perform long-distance route following, we now command the drone to take snapshots along the way, in addition to odometry measurements. Then, when retracing the route, the drone will routinely perform visual homing maneuvers to align itself with these snapshots. Because the error after a homing maneuver is bounded, there is no longer a growing deviation from the intended path! This means that long-range route following is possible without excessive drift.

Implementation

The article mentioned above describes the strategy in more detail. Rather than repeat what is already written there, I would like to give a bit more detail on how the strategy was implemented, as this is probably more relevant for other Crazyflie users.

The main difference between our drone and an out-of-the-box one is that our drone needs to carry a camera for navigation. Not just any camera: the method under investigation requires a panoramic camera so that the drone can see in all directions. For this, we bought a Kogeto Dot 360. This is a cheap aftermarket lens for an older iPhone that provides exactly the field-of-view that we need. After a bit of dremeling and taping, it is also suitable for drones.

ARDrone 2 with panoramic camera lens.

The very first visual homing experiments were performed on an ARDrone 2. The drone already had a bottom camera, to which we fitted the lens. Using this setup, the drone could successfully navigate back to the snapshot’s location. However, the ARDrone 2 hardly qualifies as small: it is approximately 50 cm wide, weighs 400 grams, and carries a Linux computer.

To prove that the navigation method would indeed work on tiny drones, the setup was downsized to a Crazyflie 2.0. While this drone could take off with the camera assembly, it would become unstable soon after takeoff as the battery level decreased. The camera was just a bit too heavy. Another attempt was made on an Eachine Trashcan, heavily modified to support the camera, a Flow deck, and custom autopilot firmware. While this drone had more than enough lift, the overall reliability of the platform never became good enough to perform full flight experiments.

After discussing the above issues, I was very kindly offered a prototype of the Crazyflie Brushless to see if it would help with my experiments. And it did! The Crazyflie Brushless has more lift than the regular platform and could maintain a stable attitude and height while carrying the camera assembly, all with a reasonable flight time. Software-wise it works pretty much the same as the regular Crazyflie, so it was a pleasure to work with. This drone became the one we used for our final experiments, and it was even featured on the cover of the Science Robotics issue.

With the hardware finished, the next step was to implement the software. Unlike the ARDrone 2, which had a full Linux system with reasonable memory and computing power, the Crazyflie only has an STM32 microcontroller that is also tasked with flying the drone (plus an nRF SoC, but that is out of scope here). The camera board developed for this drone features an additional STM32. This microcontroller performed most of the image processing and visual homing tasks at a framerate of a few Hertz. However, the resulting guidance also had to be followed, and this part is more relevant for other Crazyflie users.

To provide custom behavior on the Crazyflie, I used the app layer of the autopilot. The app layer allows users to create custom code for the autopilot, while keeping it mostly decoupled from the underlying firmware. The out-of-tree setup makes it easier to use a version control system for only the custom code, and also means that it is not as tied to a specific firmware version as an in-tree process.
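For readers unfamiliar with the app layer: an app is, at its simplest, a single C file with an appMain() entry point that the firmware starts as its own FreeRTOS task. The skeleton below follows the hello-world app example from the crazyflie-firmware repository; the loop body and rate are placeholders:

#include "app.h"

#include "FreeRTOS.h"
#include "task.h"

#define DEBUG_MODULE "MYAPP"
#include "debug.h"

// Entry point: the firmware starts this function in its own FreeRTOS task.
void appMain() {
  DEBUG_PRINT("Custom app started\n");

  while (1) {
    // Custom behavior goes here, executed at a fixed rate.
    vTaskDelay(M2T(100));  // M2T converts milliseconds to RTOS ticks
  }
}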

The custom app performs a small number of crucial tasks. Firstly, it is responsible for communication with the camera. This communication was performed over UART, as this was already implemented in the camera software and this bus was not used for other purposes on the Crazyflie. Over this bus, the autopilot could receive visual guidance from the camera and send it basic commands, such as starting and stopping image capture. Pprzlink was used as the UART protocol, a leftover from the earlier ARDrone 2 and Trashcan prototypes.

The second major task of the app is to make the drone follow the visual guidance. This consisted of two parts. Firstly, the drone should be able to follow visual homing vectors. This was achieved using the Commander Framework, part of the Stabilizer Module. Once started, the custom app entered an infinite loop running at a rate of 10 Hertz. After takeoff, the app repeatedly called commanderSetSetpoint to set absolute position targets, which were found by adding the latest homing vector to the current position estimate. The regular autopilot then takes care of the low-level control that steers the drone to these coordinates.
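A sketch of what such a setpoint update might look like; setpoint_t, modeAbs, and commanderSetSetpoint() are the real firmware API, while the homing-vector and estimate variables are placeholders for illustration:

#include "commander.h"
#include "stabilizer_types.h"

// Illustrative only: estX/Y/Z would come from the state estimate,
// homingX/Y/Z from the camera's visual guidance.
static void setHomingSetpoint(float estX, float estY, float estZ,
                              float homingX, float homingY, float homingZ) {
  static setpoint_t setpoint;

  // Absolute position control on all three axes
  setpoint.mode.x = modeAbs;
  setpoint.mode.y = modeAbs;
  setpoint.mode.z = modeAbs;

  // Target = current position estimate + latest homing vector
  setpoint.position.x = estX + homingX;
  setpoint.position.y = estY + homingY;
  setpoint.position.z = estZ + homingZ;

  // Hand the setpoint to the regular controllers; the second argument
  // is a priority, where higher values override lower-priority sources.
  commanderSetSetpoint(&setpoint, 3);
}

Called at 10 Hz from the app loop, this continuously steers the drone along the latest homing vector.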

The core idea of our navigation strategy is that the drone can correct its position estimate after arriving at a snapshot. So secondly, the drone should be able to overwrite its position estimate with the one provided by the route-following algorithm. To simplify the integration with the existing state estimator, this update was implemented as an additional position sensor – similar to an external positioning system. Once the drone had converged to a snapshot, it would enqueue the snapshot’s remembered coordinates as a position measurement with a very small standard deviation, thereby essentially overwriting the position estimate but without needing to modify the estimator. The same trick was also used to correct heading drift.
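In firmware terms, the position correction can be expressed through the estimator's measurement queue. estimatorEnqueuePosition() and positionMeasurement_t exist in the firmware; the exact standard deviation used here is my illustrative guess:

#include "estimator.h"

// Overwrite the drifted estimate by feeding the snapshot's remembered
// coordinates in as a high-confidence position measurement.
static void correctPositionEstimate(float snapX, float snapY, float snapZ) {
  positionMeasurement_t meas = {
    .x = snapX,
    .y = snapY,
    .z = snapZ,
    .stdDev = 0.001f,  // illustrative: a tiny stddev makes the estimator trust it almost completely
  };
  estimatorEnqueuePosition(&meas);
}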

The final task of the app was to make the drone controllable from a ground station. After some initial experiments, it was determined that fully autonomous flight during the experiments would be the easiest to implement and use. To this end, the drone needed to be able to follow more complex procedures and to communicate with a ground station.

Because the cfclient provides most of the necessary functions, it was used as the basis for the ground station. However, the experiments required extra controls that were of course not part of a generic client. While it would have been possible to modify the cfclient, an easier solution was offered by the integrated ZMQ server. This server allows external programs to communicate with the stock cfclient over a TCP connection. Among other things, this allows external programs to send control values and parameters to the drone. Since the drone would be flying autonomously, low update frequencies would suffice, so the choice was made to let the ground station set parameters provided by the custom app. To improve usability, a simple GUI was made in Python using the CFZmq library and Tkinter. The GUI requests foreground priority so that it is shown on top of the regular client, making it easy to use both at the same time.

Cfclient with experimental overlay (bottom right).

To perform more complex experiments, each experiment was implemented as a state machine in the custom app. Using the (High-level) Commander Framework and the navigation routines described above, the drone was able to perform entire experiments from take-off to landing.
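Such a state machine can be as small as an enum and a switch statement evaluated in the app's periodic loop. A hypothetical outline, with states and transitions invented for illustration:

typedef enum {
  ST_IDLE,
  ST_TAKEOFF,
  ST_RECORD_ROUTE,
  ST_FOLLOW_ROUTE,
  ST_LAND,
} expState_t;

static expState_t state = ST_IDLE;

// Called from the app's periodic loop; startParam would be a parameter
// set from the ground station GUI, routeDone a flag from the navigation code.
static void experimentStep(bool startParam, bool routeDone) {
  switch (state) {
    case ST_IDLE:         if (startParam) state = ST_TAKEOFF;      break;
    case ST_TAKEOFF:      /* high-level commander takeoff */
                          state = ST_RECORD_ROUTE;                 break;
    case ST_RECORD_ROUTE: if (routeDone) state = ST_FOLLOW_ROUTE;  break;
    case ST_FOLLOW_ROUTE: if (routeDone) state = ST_LAND;          break;
    case ST_LAND:         /* high-level commander land */          break;
  }
}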

While the code is very far from production quality, it is open source, so you can see how everything was implemented: https://github.com/tomvand/2020-visualhoming-crazyflie. The PCB used to fit Crazyflie decks to the Eachine Trashcan can be found at https://github.com/tomvand/cf-deck-carrier.

Outcome

Using the hardware and software described above, we were able to perform the route-following experiments. The drone was commanded to fly a preprogrammed trajectory using the Flow deck, while recording odometry and snapshot images. Then, the drone was commanded to follow the same route in reverse, by traveling short sections using dead reckoning and then using visual homing to correct the incurred drift.

As shown in the article, the error with respect to the recorded route remained bounded. Therefore, we can now travel long routes without having to worry about drift, even under strict hardware limitations. This is a great improvement to the autonomy of tiny robots.

I hope that this post has given a bit more insight into the implementation behind this study, a part that is not often highlighted but very interesting for everyone working in this field.

Today, we’re excited to share research from Vrije Universiteit Amsterdam, ‘From Shadows to Light,’ which presents an innovative swarm robotics approach where nano-drones autonomously track dynamic sources indoors.

Motivation

In dynamic and unpredictable indoor environments, locating moving sources—such as heat, gas, or light—presents unique challenges. GPS-denied settings, in particular, demand innovative and efficient onboard solutions for both control and sensing. Our research demonstrates how small drones, like Crazyflies, can be organized into a coordinated swarm to autonomously locate and follow these sources indoors, relying solely on onboard sensing and communication capabilities. Without sharing individual measurements, each drone adapts its behavior in response to its own sensor readings, allowing the swarm to collectively converge on the center of a light source through modified interactions with nearby agents.

Tugay Alperen (right) and Victor Retamal (left) during ICRA 2024 poster session

Method

Our approach enables each Crazyflie to function autonomously, using onboard sensing combined with continuous inter-agent communication at a frequency of 20 Hz. This methodology is structured around three core components:

Proximal Control and Collective Motion

Each drone broadcasts its position to nearby agents, enabling the calculation of relative positions to maintain safe distances. This proximal control ensures cohesive group movement by computing virtual force vectors for velocity commands, which are sent to onboard controllers operating at 20 Hz.

Source Seeking Through Adaptive Social Proximity

Drones use custom light sensors to detect local light intensity. Instead of directly adjusting positions based on this measurement, each drone modifies its social proximity to neighbors according to the sensed intensity without broadcasting this information. This adaptation allows the swarm to collectively follow the light gradient toward the source in a decentralized manner.

Obstacle Avoidance

Equipped with time-of-flight sensors, each drone independently detects obstacles and adjusts its trajectory to maintain safety. This ensures the swarm remains intact while navigating toward the source.

By combining continuous relative positioning, virtual force-based control, individual sensing, and adaptive social behavior, our methodology provides a robust framework for efficient source seeking in GPS-denied indoor environments.
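A minimal sketch of how these components could combine into a velocity command. This is a generic virtual-force flocking scheme written for intuition, not the exact controller from the paper; all names, gains, and the adaptation law are hypothetical:

#include <math.h>

typedef struct { float x, y; } vec2_t;

// Spring-like proximal force toward/away from one neighbor, given its
// relative position. desiredDist is the 'social proximity' that the local
// light measurement modulates. Assumes neighbors never coincide (d > 0).
static vec2_t proximalForce(vec2_t rel, float desiredDist, float gain) {
  float d = sqrtf(rel.x * rel.x + rel.y * rel.y);
  float mag = gain * (d - desiredDist);  // attract if too far, repel if too close
  vec2_t f = { mag * rel.x / d, mag * rel.y / d };
  return f;
}

// Sum virtual forces over all neighbors to get the 20 Hz velocity command.
static vec2_t velocityCommand(const vec2_t* neighborsRel, int n, float lightIntensity) {
  // Hypothetical adaptation law: brighter light -> tighter preferred spacing,
  // so agents in the light pull the flock toward the source.
  float desiredDist = 1.0f - 0.5f * lightIntensity;  // meters, lightIntensity in [0, 1]
  vec2_t cmd = { 0.0f, 0.0f };
  for (int i = 0; i < n; i++) {
    vec2_t f = proximalForce(neighborsRel[i], desiredDist, 0.8f);
    cmd.x += f.x;
    cmd.y += f.y;
  }
  return cmd;
}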

Experimental Setup

Crazyflie equipped with Flow Deck v2, UWB Deck, Multi-Ranger Deck, and a custom-made deck that produces an analog voltage reading from an LDR for light intensity measurements.

The system architecture allowing us to achieve autonomous flocking and source localization with a swarm of Crazyflies

Our experiments take place in a 7×4.75-meter indoor arena with remotely controlled overhead light bulbs. These bulbs, activated individually or in pairs, create a moving light gradient. We tested our flocking swarm by initially positioning the drones at the edge of an illuminated area. As the light source shifted, we assessed the swarm’s performance by comparing its trajectories with the known centers of the illuminated areas, without waiting for full convergence at each step. We also mapped the environment’s light intensity by moving a single Crazyflie randomly around the flight arena, recording the measurements, and later merging them into a single map to generate a light-intensity heatmap.

The brightness values around the test environments, measured for each light source when only it was active.
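The merging step can be as simple as binning the recorded samples into a grid and averaging per cell; a sketch, where the grid resolution is an arbitrary choice of mine:

#define GRID_W 70   // 7.0 m arena at 0.1 m resolution (illustrative)
#define GRID_H 48   // 4.75 m, rounded up

static float sum[GRID_H][GRID_W];
static int count[GRID_H][GRID_W];

// Accumulate one (x, y, intensity) sample into its grid cell.
static void addSample(float x, float y, float intensity) {
  int col = (int)(x / 0.1f);
  int row = (int)(y / 0.1f);
  if (col < 0 || col >= GRID_W || row < 0 || row >= GRID_H) return;
  sum[row][col] += intensity;
  count[row][col]++;
}

// Heatmap value of a cell: the average of all samples that fell into it.
static float cellValue(int row, int col) {
  return count[row][col] > 0 ? sum[row][col] / count[row][col] : 0.0f;
}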

Results

The flock flies as an ordered swarm, successfully localizing around the source with the swarm’s centroid positioned at the source center. (The centroid appears as a point without an arrow in the video.)

Even with an obstacle present within or between the illuminated regions, the flock successfully localizes around the center, avoiding the obstacle and maintaining order and cohesion within the swarm. The Multi-Ranger deck provides distance measurements for obstacle detection.

Future Directions

As the next step, we plan to apply our highly generalizable algorithm to various source types, including gas sources, radio signals, and similar sources that provide only scalar strength measurements rather than directional cues. Additionally, we have demonstrated that our flocking and source localization algorithms work effectively in 3D. We aim to showcase a fully functional application with a 3D-localized source and a flocking swarm operating in 3D space. Finally, we are working toward achieving fully onboard relative localization, which would eliminate the need for any indoor positioning system. This advancement would allow our swarm to operate autonomously in any environment, replicating the same behavior wherever it is deployed.

Links

The authors were with the Vrije Universiteit Amsterdam.

Please feel free to contact us with any questions or ideas: t.a.karaguzel@vu.nl

Please cite this as:

@ARTICLE{10314746,
  author={Karagüzel, Tugay Alperen and Retamal, Victor and Cambier, Nicolas and Ferrante, Eliseo},
  journal={IEEE Robotics and Automation Letters}, 
  title={From Shadows to Light: A Swarm Robotics Approach With Onboard Control for Seeking Dynamic Sources in Constrained Environments}, 
  year={2024},
  volume={9},
  number={1},
  pages={127-134},
  keywords={Robot sensing systems;Autonomous aerial vehicles;Position measurement;Vehicle dynamics;Sensors;Location awareness;Drones;Swarm robotics;aerial systems: perception and autonomy;multi-robot systems},
  doi={10.1109/LRA.2023.3331897}}

This week, we have a guest blog post from Scott at Droneblocks.

DroneBlocks is a cutting-edge platform that has transformed how educators worldwide enrich STEM programming in their classrooms. As pioneers in the EdTech space, DroneBlocks wrote the playbook on integrating drone technology into STEM curricula for elementary, middle, and high schools, offering unparalleled resources for teaching everything from computer science to creative arts. What started as free block coding software and video tutorials has become a comprehensive suite of drone and robotics educational solutions. The Block-Coding software remains free to all, as the DroneBlocks mission has always been to empower educators and students, allowing them to explore and lead the way. This open-source attitude set DroneBlocks on a mission to find the world’s best and most accessible micro-drone for education, and they found it in Sweden!

Previously, DroneBlocks had worked alongside drone juggernaut DJI and their Tello drone. The Tello was a great tool for its time, but when DJI decided to discontinue it with little input from its partners and users, the break became much easier to make. The hunt began for a DJI Tello replacement – and an upgrade!

Bitcraze’s choice to build Crazyflie as an open platform had their drone buzzing wherever there was curiosity. The Crazyflie was developed to fly indoors, swarm, and be mechanically simplistic. DroneBlocks established that the ideal classroom micro-drone required similar characteristics. This micro-drone needed to be small for safety but sturdy for durability. It also needed to be easy to assemble and simple in structure for students new to drones. Most importantly, the ideal drone needed to have an open line of software communication to be fully programmable. Finally, there had to be an opportunity for a long-lasting partnership with the drone manufacturer, including government compliance.

After extensive searching and testing by DroneBlocks, the Crazyflie was a diamond in the rough – bite-sized and lightweight, supremely agile and accurate, reliable and robust, and most importantly, it was an open-source development platform. The DroneBlocks development team took the Crazyflie for a spin (or several) and with excitement, it was shared with the larger curriculum team to be mined for learning potential. It was promising to see Crazyflie’s involvement in university-level research studies, which proved it meant business. DroneBlocks knew the Crazyflie had a lot going for it – on its own. The team imagined how, when paired with DroneBlocks’ Block Coding software, Flight Simulator, and Curriculum Specialists, the Crazyflie could soar to atmospheric heights! 

Hardware? Check. Software? Check. But what about compatibility? DroneBlocks was immediately drawn to the open communication and ease of conversation with the Bitcraze team. It was obvious that both Bitcraze and DroneBlocks were born from a common thread and shared a mutual goal: to empower people to explore, investigate, innovate, research, and educate. 

DroneBlocks has since built a new Block Coding interface around the Crazyflie, allowing students to pilot their new drone autonomously and learn the basics of piloting and coding concepts. This interface is offered with a brand new drone coding simulator environment so students can test their code and fly the Crazyflie in a virtual classroom environment.

The Crazyflie curriculum currently consists of courses covering building, configuring, and finally, programming your drone with block coding (DroneBlocks) and Python. DroneBlocks’ expert curriculum team designed these courses to enable learners of all ages and levels to learn step by step through video series and exercises. New courses around block coding and Python are in constant development and will be continuously added to the DroneBlocks curriculum platform.

Crazyflie Drones now headline DroneBlocks’ premiere classroom launch kit. The DroneBlocks Autonomous Drones Level II kit encompasses everything a middle or high school would need to launch a STEM drone program, including the hardware, necessary accessories, and safety wear paired with the DroneBlocks software and curriculum. As a result, thousands of new students have entered the world of Drones and programming thanks to the Bitcraze + DroneBlocks partnership.

DroneBlocks has become an all-inclusive drone education partner for engaging and innovative learning experiences—and the Crazyflie delivers this by being a cutting-edge piece of hardware in a clever package.

Today we welcome Sam Schoedel and Khai Nguyen from Carnegie Mellon University. Enjoy!

We’re excited to share the research we’ve been doing on model-predictive control (MPC) for tiny robots! Our goal was to find a way to compress an MPC solver to a size that would fit on common microcontrollers like the Crazyflie’s STM32F405 while being fast enough to control the higher frequency dynamics of smaller robots. We came up with a few tricks to make that happen and dubbed the resulting solver TinyMPC. When it came time for hardware experiments, using the Crazyflie just made sense. A tiny solver deserves a tiny robot.

Motivation

Model predictive control is a powerful tool for controlling complex systems, but it is computationally expensive and thus often limited to use cases where the robot can either carry enough computational power or when offboard computing is available. The problem becomes challenging to solve for small robots, especially when we want to perform all of the computation onboard. Smaller robots have inherently faster dynamics which require higher frequency controllers to stabilize, and because of their size they don’t have the capacity to haul around as much computational power as their larger robot counterparts. The computers they can carry are often highly memory-constrained as well. Our question was “how can we shrink the computational complexity and memory costs of MPC down to the scale of tiny robots?”

What We Did

We settled on developing a convex model predictive control solver based on the alternating direction method of multipliers. Convex MPC solvers are limited to reasoning about linear dynamics (on top of any other convex constraints), but have structure that TinyMPC exploits to solve problems efficiently. The tricks we used to achieve this efficiency are described in the paper, but it boils down to rewriting the problem as a constrained linear-quadratic regulator to reduce the memory footprint and then precomputing as many matrices as possible offline so that online calculations are less expensive. These tricks allowed us to fit long-time horizon MPC problems on the Crazyflie and solve them fast enough for real-time use.
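Schematically, an ADMM iteration for this kind of convex MPC alternates between an equality-constrained LQR solve, a projection, and a dual update. This is the generic scaled-form pattern written in my notation; see the paper for TinyMPC's exact formulation:

$$\begin{aligned}
(x, u)^{k+1} &= \arg\min_{x,u}\; J(x,u) + \tfrac{\rho}{2}\big\lVert (x,u) - z^k + \lambda^k \big\rVert^2 && \text{(LQR solve via a Riccati recursion)}\\
z^{k+1} &= \Pi_{\mathcal{C}}\big((x,u)^{k+1} + \lambda^k\big) && \text{(projection onto the constraint set)}\\
\lambda^{k+1} &= \lambda^k + (x,u)^{k+1} - z^{k+1} && \text{(dual update)}
\end{aligned}$$

Because the dynamics and cost are fixed, the Riccati gains in the first step can be cached offline, which is where the memory and speed savings come from.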

What TinyMPC Can Do

We decided to demonstrate the constraint-handling capabilities of TinyMPC by having the Crazyflie avoid a dynamic obstacle. We achieved this by re-computing hyperplane constraints (green planes in the first video) about a spherical obstacle (transparent white ball) for each knot point in the trajectory at every time step, and then by solving the problem with the new constraints assuming they stayed fixed for the duration of the solve.
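Concretely, each hyperplane can be chosen tangent to the sphere along the direction from the obstacle center to the knot point (my reading of the setup; the notation is mine). With obstacle center $p$, inflated radius $r$, and $\bar{x}_k$ the knot point from the previous solve,

$$a_k = \frac{\bar{x}_k - p}{\lVert \bar{x}_k - p \rVert}, \qquad a_k^{\top} x_k \;\geq\; a_k^{\top} p + r,$$

a linear constraint per knot point $x_k$ that keeps the solution on the far side of the tangent plane, and that is recomputed at every time step as the obstacle moves.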

In the two videos below, the reference trajectory used by the solver is just a hover position at the origin for every time step. Also, the path the robot takes in the real world will never be exactly the same as the trajectory computed by the solver, which can easily result in collisions. To avoid this, we inflated the end of the stick (and the simulated obstacle) to act as a keep-out region.

TinyMPC is restricted to reasoning about linear dynamics because of its convex formulation. However, a simple linearization can be taken pretty far. We experimented with recovering from different starting conditions to push the limits of our linear Crazyflie model and were able to successfully recover from a 90 degree angle while keeping the thrust commands for each motor within their limits.
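For reference, the linear model in such a convex formulation is the discretized linearization of the dynamics about hover (standard notation, not copied from the paper): with $f$ the nonlinear dynamics, $\bar{x}$ the hover state, and $\bar{u}$ the hover thrust,

$$x_{k+1} = A x_k + B u_k, \qquad A = \left.\frac{\partial f}{\partial x}\right|_{(\bar{x},\bar{u})}, \quad B = \left.\frac{\partial f}{\partial u}\right|_{(\bar{x},\bar{u})},$$

so the 90 degree recovery shows the controller remaining effective well outside the neighborhood where this approximation is usually trusted.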

We recently added support for second-order cone constraints as well. These types of constraints allow TinyMPC to reason about friction and thrust cones, for example, which means it can now intelligently control quadrupeds on slippery surfaces and land rockets. To clearly demonstrate the cone constraint, we took long exposure photos of the Crazyflie tracking a cylindrical landing trajectory without any cone constraints (red) and then with a spatial cone constraint that restricts the landing maneuver to a glide slope (blue).

How To Use TinyMPC

All of the information regarding the solver can be found on our website and GitHub org (which is where you can also find the main GitHub repository). TinyMPC currently has a Python wrapper that allows for validating the solver and generating C++ code to run on a robot, and we have a few examples in C++ if you don’t want to use Python. Our website also explains how to linearize your robot and has some examples for setting up the problem with a linear model, solving it in an MPC loop, and then generating and running C++ code.

Most importantly to the Crazyflie community, our TinyMPC-integrated firmware is available and should work out of the box. Let us know if you use it and run into issues!

Our accompanying research papers:

Khai Nguyen, Sam Schoedel, Anoushka Alavilli, Brian Plancher, and Zachary Manchester. “TinyMPC: Model-Predictive Control on Resource-Constrained Microcontrollers.” arXiv preprint arXiv:2310.16985 (2023). https://arxiv.org/pdf/2310.16985

Sam Schoedel, Khai Nguyen, Elakhya Nedumaran, Brian Plancher, and Zachary Manchester. “Code Generation for Conic Model-Predictive Control on Microcontrollers with TinyMPC.” arXiv preprint arXiv:2403.18149 (2024). https://arxiv.org/pdf/2403.18149

We would love your feedback and suggestions, and let us know if you use TinyMPC for your tiny platforms!

This week we have a guest blogpost by Kamil Masalimov (MSc) and Tagir Muslimov (PhD) of the Ufa University of Science and Technology. Enjoy!

As researchers passionate about UAV technology, we are excited to share our recent findings on how structural defects affect the performance of nano-quadcopters. Our study, titled “CrazyPAD: A Dataset for Assessing the Impact of Structural Defects on Nano-Quadcopter Performance,” offers comprehensive insights that could greatly benefit the Crazyflie community and the broader UAV industry.

The Motivation Behind Our Research

Understanding the nuances of how structural defects impact UAV performance is crucial for advancing the design, testing, and maintenance of these devices. Even minor imperfections can lead to significant changes in flight behavior, affecting stability, power consumption, and control responsiveness. Our goal was to create a robust dataset (CrazyPAD) that documents these effects and can be used for further research and development.

Key Findings from Our Study

We conducted a series of experiments by introducing various defects, such as added weights and propeller cuts (Figure 1), to nano-quadcopters. For the experiments, we used the Lighthouse Positioning System with two SteamVR 2.0 virtual reality stations (Figure 2).

Figure 1. Propeller with two side defects
Figure 2. Schematic of the experimental setup with Lighthouse Positioning System

Here are some of the pivotal findings from our research:

  1. Stability Impact: We observed that both added weights and propeller cuts lead to noticeable changes in the stability of the quadcopter. Larger defects caused greater instability, emphasizing the importance of precise manufacturing and regular maintenance.
  2. Increased Power Consumption: Our experiments showed that structural defects result in higher power consumption. This insight is vital for optimizing battery life and enhancing energy efficiency during flights.
  3. Variable Control Responsiveness: We used the standard deviation of thrust commands as a measure of control responsiveness. The results indicated that defects increased the variability of control inputs, which could affect maneuverability and flight precision.
  4. Changes in Roll and Pitch Rates: The study also highlighted variations in roll and pitch rates due to structural defects, providing a deeper understanding of how these imperfections impact flight dynamics.

We show Figure 3 as an example of a graph obtained from our dataset. In this figure, you can see the altitude and thrust command over time for different flight conditions. The blue line represents the normal flight, while the orange line represents the flight with additional weight near the M3 propeller. In Figure 4, you can see the 3D flight trajectory of the Crazyflie 2.1 quadcopter under the cut_propeller_M3_2mm condition with the corrected ideal path. The blue line represents the actual flight trajectory, while the red dashed line with markers represents the ideal trajectory. Figure 5 shows the Motor PWM values over time for the add_weight_W1_near_M3 condition. The plot shows the PWM values of each motor (M1, M2, M3, and M4) as they respond to the added weight near the M3 propeller.

More examples of graphs obtained from the CrazyPAD dataset can be found in our research paper specifically describing this dataset: https://doi.org/10.3390/data9060079

Figure 3. Altitude and thrust command over time for different flight conditions
Figure 4. 3D flight trajectory of the Crazyflie 2.1
Figure 5. Motor PWM values over time

Leveraging Research for Diagnostic and Predictive Models

One of the most exciting aspects of our research is its potential application in developing diagnostic and predictive models. The CrazyPAD dataset can be utilized to train machine learning algorithms that detect and predict structural defects in real-time. By analyzing flight data, these models can identify early signs of wear and tear, allowing for proactive maintenance and reducing the risk of in-flight failures.

Diagnostic models can continuously monitor the performance of a UAV, identifying anomalies and pinpointing potential defects. This real-time monitoring can significantly enhance the reliability and safety of UAV operations.

Predictive models can forecast future defects based on historical flight data. By anticipating when and where defects are likely to occur, these models can inform maintenance schedules, ensuring UAVs are serviced before issues become critical.

Why This Matters for the Crazyflie Community

The CrazyPAD dataset and our findings offer valuable resources for the Crazyflie community. By understanding how different defects affect flight performance, developers and enthusiasts can improve design protocols, enhance testing procedures, and ensure higher safety and performance standards for their UAVs.

We believe that sharing our research with the Crazyflie community can lead to significant advancements in UAV technology. The dataset we created is open under the MIT License for further exploration and can serve as a foundation for new innovations and improvements.

Get Involved and Explore Further

We invite community members to explore our full research article and the CrazyPAD dataset. Together, we can drive forward the standards of UAV technology, ensuring that Crazyflie remains at the forefront of innovation and excellence.

Our research paper with a detailed description of this dataset:

Masalimov, K.; Muslimov, T.; Kozlov, E.; Munasypov, R. CrazyPAD: A Dataset for Assessing the Impact of Structural Defects on Nano-Quadcopter Performance. Data 2024, 9, 79. https://doi.org/10.3390/data9060079

Dataset:  https://github.com/AerialRoboticsUUST/CrazyPAD

We are eager to collaborate with the Crazyflie community and welcome any feedback or questions regarding our research. Let’s work together to push the boundaries of what’s possible in UAV technology.

This week we have a guest blogpost by Christian Llanes, a Robotics PhD from the Formal Methods & Autonomous Control of Transportation Systems Lab of the Georgia Institute of Technology. Enjoy!

Why do we need simulators?

Simulators are one of the most important tools used in robotics research. They are usually designed for different purposes with different levels of complexity. For example, simulators with low computational overhead that are parallelizable are mainly used for either training reinforcement learning algorithms or Monte Carlo sampling for verification of task completion in a nondeterministic environment. Some simulators also use rendering engines for the graphical display of models and the environment or when cameras are intended to be used in the robotics platform. Simulation is also useful for the development and deployment of new robotics firmware features where the firmware is compiled on a test machine and run in the loop with a simulated sensor suite. This simulator configuration is known as software-in-the-loop (SITL) because the vehicle firmware is intended to be run in the loop with the simulated vehicle physics and/or rendering engine. This feature is supported by autopilot suites such as PX4, ArduPilot, CogniPilot, and BetaFlight. This feature is not officially supported yet for Crazyflies because it requires a large overhaul of the firmware to be able to compile on a desktop machine and interact with different simulators such as Gazebo, Webots, PyBullet, CoppeliaSim, Isaac Sim, or Unreal Engine.

CrazySim

Last summer I began working with Crazyflies and noticed this Crazyflie simulator gap. I stumbled on a community-developed project for Crazyflie SITL called sim_cf. This project was exactly what I was looking for. However, the firmware used by the project is from July 2019, and the official firmware has had over 2000 commits made since then. The project also uses ROS 1 and Gazebo Classic, and doesn’t support the Crazyflie Python library (CFLib). Using this project as a starting point, I set out to develop CrazySim – a Crazyflie SITL project that doesn’t require ROS, uses Gazebo Sim, and supports connectivity through CFLib. Using CFLib we can connect the simulator to external software such as Crazyswarm2 or the Crazyflie ground station client. Users can test their control algorithms in the external software using the simulator interface before deploying to real flight hardware.

An example of offboard model predictive control design and deployment workflow using CrazySim.

Using the Crazyflie Client for PID Tuning

We have also provided a modified Crazyflie client with CrazySim support. The Crazyflie client is a cool tool for testing a single drone in hardware. We can perform command-based flight control, look at real-time plots, save log data, and tune PID values in real time. The PID values are typically tuned for an out-of-the-box Crazyflie. However, when we modify the Crazyflie and add extra weight through batteries, decks, and upgraded thrust motors, the behavior of the Crazyflie will change. If a user wants to tune a custom Crazyflie setup, they can add additional models in this folder with their own motor and mass properties. Then they just need to add the model to the list of supported models in either of the launch scripts. There is already an example model for the thrust upgrade bundle. Documentation for installing the custom client can be found here.

PID tuning a simulated Crazyflie using CrazySim on the Crazyflie PC client.

Crazyswarm2

We can now connect to the simulated Crazyflie firmware using CFLib. Therefore, we can set up a ROS 2 interface through Crazyswarm2 for swarm command and control through ROS 2 topics and services. To do this we first start up the drones using any of the launch scripts.

bash tools/crazyflie-simulation/simulator_files/gazebo/launch/sitl_multiagent_square.sh -n 16 -m crazyflie

Then, we bring up Crazyswarm2 after setting up the configuration file for the number of drones chosen.

ros2 launch crazyflie launch.py backend:=cflib

We demonstrate an example of how we can control a swarm of drones using Crazyswarm2 GoTo service commands.

Crazyswarm2 GoTo service commands using CrazySim.

ICRA 2024

CrazySim is also being presented as a paper at the 2024 IEEE International Conference on Robotics and Automation in Yokohama, Japan. If you are attending this conference and are interested in this work, I invite you to my presentation – and do let me know afterwards that you came from this blog post. For the paper, I created a multi-agent decentralized model predictive control (MPC) case study on ROS 2 to demonstrate the CrazySim simulation-to-hardware deployment workflow. Simulating larger swarms with MPC may require a high-performance computer. The simulations in this work were performed on an AMD Ryzen 9 5950X desktop processor.

Model predictive control case study for ICRA 2024 paper.

Links

  1. CrazySim
  2. Modified Crazyflie client

Other Crazyflie SITL projects:

  1. sim_cf
  2. sim_cf2 blog post
  3. LambdaFlight blog post

Today we have a guest blogpost by Simon D. Levy (Washington and Lee University) about using Haskell to rewrite parts of the crazyflie-firmware. We also have some announcements from Bitcraze itself at the bottom. Enjoy!

As Richard Feynman famously said, “What I cannot create, I do not understand.” My less ambitious version of this motto is: What I can translate into another language, I might better understand.

In order to better understand the Crazyflie firmware, I first undertook to translate the C-based firmware into C++. By separating out the general-purpose algorithmic code (state estimator, closed-loop control) from the Crazyflie-specific / Hardware Abstraction Layer (HAL) component of the firmware, I ended up with a nice, header-only C++ library of algorithms that can work with both a simulator like Webots and on a Crazyflie 2.1 or Crazyflie Bolt flight controller (with the remaining Crazyflie firmware translated into the standard C++ .h/cpp format).

As a fan of functional languages like OCaml and Haskell (and Rust, which was strongly influenced by their approach), I wondered whether I couldn’t push this idea further, to get an even higher-level implementation of the algorithms. Having played a bit with Rust and encountered issues getting it to work efficiently with C++ (something that is thankfully now being addressed), I thought it might be worth trying NASA Copilot, a Haskell extension that compiles Haskell into C with a fixed memory footprint (i.e., no malloc() or garbage collection).

My efforts paid off, resulting in a Haskell library of algorithms for closed-loop (PID) control and motor mixing that works both with my modified version of the Crazyflie firmware and with a simulator like Webots. For anyone wondering “why Haskell”, I refer you to this excellent discussion of the language’s advantages (purity, elegance), as well as this classic presentation on why functional programming offers a better solution to complex tasks than the object-oriented approach. For example, the code in the LambdaFlight core module cleanly reflects the dataflow (input/output) diagram for a standard flight controller:

Here is the Haskell code corresponding to the component labeled Core in the diagram:

-- Clock rate is 500Hz for Crazyflie, 100Hz for sim
dt = rateToPeriod clock_rate

-- Make a list of PID controllers based on application order
pids = [positionPid resetPids inHoverMode dt,
      pitchRollAnglePid resetPids inHoverMode dt,
      pitchRollRatePid resetPids inHoverMode dt,
      altitudePid inHoverMode dt,
      climbRatePid inHoverMode dt,
      yawAnglePid dt,
      yawRatePid dt]

-- Run PID controllers on open-loop (stick) demands to get demands for motor mixer
demands' = foldl (\demand pid -> pid vehicleState demand) openLoopDemands pids

-- Adjust thrust component for hover mode
thrust'' = if inHoverMode then ((thrust demands') * tscale + tbase) else tmin

-- Run motor mixer on closed-loop demands to get motor spins, scaled for CF or sim
motors = quadCFMixer $ Demands thrust''
                   ((roll demands') * prscale)
                   ((pitch demands') * prscale)
                   ((yaw demands') * yscale)

By adjusting the values of the clock rate and scaling coefficients (tscale, tbase, prscale, …), the same PID controllers (with same Kp, Ki, Kd constants) can be used for both the simulation and the actual Crazyflie.

Two pieces complete the picture.

First, how can a pure functional language like Haskell, lacking mutation/side-effects, support an algorithm like PID control, which requires keeping state variables (error integral, previous error) across iterations? The answer to this question comes from an ingenious part of the NASA Copilot framework: streams. A stream (which is similar to a lazy list in Haskell) can come from a module of the program written in C (for example, the vehicle state obtained by state estimation) or from a previous value passed into the function. This feature allows Copilot to have functions that possess state while still being “pure” in the sense of being amenable to formal verification (mathematical proof of correctness; see this thread for a discussion). Although I don’t have the mathematical background to do formal verification proofs, the ability to prove the code correct is an extremely powerful feature of languages like Haskell and is what led NASA to develop the Copilot framework for its space missions.

Second, how can Haskell be combined with C/C++ without running into the performance issues typically encountered in calling code in one language from code in another? As I alluded to before, NASA Copilot solves this issue nicely by compiling the Haskell code to C code with a fixed memory footprint. Declaring certain C/C++ variables to be global makes them accessible to the Haskell code as streams, and causes them to be compiled into the resulting C code, which can then be compiled and linked to the Crazyflie (or simulator) C/C++ code in the usual way (thanks to the Crazyflie Makefile’s use of the Kbuild system). Hence, compiling the whole project into a form suitable for flashing with make cload requires just the following chain of commands (where make links creates links to some external libraries I wrote for the sensors):

make cf2_defconfig && make links && make copilot && make -j 32

My current work focuses on two directions: first, I am translating the Crazyflie’s Kalman filter into Haskell – a more ambitious undertaking, but one that I feel more confident about now that the core control algorithms are completed. Second, I am looking for ways of modifying the most popular robotics simulators (Webots, Gazebo) to work with dynamics (physics engine) code custom-written for quadcopters and other aerial vehicles (like the dynamics code I wrote for this simulator), as well as faster control loops.

Announcements

There are some announcements that are not part of this guest blogpost that we’d like to share.

  • The 350 mAh batteries are out of stock and unfortunately have a very long lead time at our manufacturer, so you can expect them to be back in stock early May. Please take a look at this blogpost, which also compares this battery to the Tattu alternative, if you can’t wait and need a similar battery now.
  • We have a Bitcraze developer meeting this Wednesday to talk about the supervisor safety functionalities in the crazyflie-firmware. Please keep an eye on the GH discussion thread for information on how to join.

Today we have a guest blogpost by Thomas Izycki (Technische Hochschule Augsburg) and Klaus Kefferpütz (Ingolstadt Technical University of Applied Sciences, formerly Augsburg Technical University of Applied Sciences) talking about implementing Software-in-the-Loop swarm simulation for the Crazyflie.

When our cooperative control lab at the Technical University of Applied Sciences Augsburg was founded a few years ago, our goal was to develop distributed algorithms for teams of UAVs. We quickly decided to use Crazyflies with the algorithms directly implemented in the firmware, thus having a platform for a truly decentralized system. Our ongoing projects focus on cooperative path planning, navigation, and communication.

Since working with several drones at once, which have to communicate and coordinate with each other, can quickly become confusing and very time-consuming, a simulation was needed. It should preferably offer the possibility to integrate the firmware directly into the simulation environment and ideally also offer an interface to the Crazyflie Python API. The relocation of our laboratory from Augsburg to the Technical University of Applied Sciences Ingolstadt, which does not yet have a permanently established flying space, further increased the need for a simulation environment that makes us less dependent on hardware accessibility. A look at the community and the available simulations quickly led us to the sim_cf flight simulator for the Crazyflie: a fantastic project supporting the use of the actual Crazyflie firmware in software-in-the-loop (SITL) mode, and even in hardware-in-the-loop (HITL) mode on a real Crazyflie, using the FreeRTOS Linux port together with ROS and Gazebo. Unfortunately, the project has not been maintained for several years and had no integration with the Crazyflie Python API. After a short chat with Franck Djeumou and his agreement to use the code of the original sim_cf simulation, the project was ready for an upgrade.

sim_cf2 Flight Simulator

With support for ROS 1 coming to an end, we decided to migrate the project to ROS 2 and additionally support the current version of the Crazyflie firmware, which at this time is release 2023.11. With the addition of a driver to connect the cflib to the SITL process, the same Python cflib-based scripts can now be used with real Crazyflies and in the simulation environment; only the corresponding driver needs to be loaded during initialization. Overall, the focus of the sim_cf2 simulation is now on using the Crazyflie Python API instead of commanding the Crazyflies via ROS.

As for the Crazyflie firmware, the whole build process has been fully integrated into the firmware’s KBuild build system. This also allows the use of the same code base for simulation and execution on the real Crazyflie. Depending on the configuration, the firmware is compiled for the STM32F on the Crazyflie or for the host system running the SITL.

Components of the sim_cf2 Flight Simulator

To run the simulation, the three modules must therefore be started separately. Gazebo is started using the main launch file. The number and initial pose of the simulated Crazyflies is also defined in this file. Once the firmware for the host system has been built, the desired number of SITL instances can be started using the attached script. Finally, a cflib-based script, the Crazyflie client or the multi-agent client presented below is started. With no radio dongles attached to the computer, the simulation driver is initialized automatically and a connection to the simulated Crazyflie can be established.

There are still a few open issues, including the absence of implemented decks for positioning, such as LPS and Lighthouse. Currently, the absolute position is sent from Gazebo to the SITL instance and fused in the estimator. Moreover, Gazebo requires quite a lot of computing power. We were able to run a maximum of four Crazyflies simultaneously on a relatively old laptop. However, a modern desktop CPU with multiple cores allows for simulating a significantly larger number of Crazyflies.

Multi-Agent Client

Independent of the simulation we designed a GUI for controlling and monitoring our multi-agent teams. It currently supports up to eight Crazyflies but could be upgraded for bigger teams in the future. So far it has been enough for our requirements.

Multi-Agent Client with six connected Crazyflies

A central feature is the interactive map, which makes up about half of the GUI. This is a 2D representation of the flight area with a coordinate system drawn in. Connected Crazyflies are displayed as small circles on the map, and after they have been selected with their corresponding buttons, a new target position can be assigned by clicking on the map. If required, obstacles or paths to be flown can also be drawn onto the map.

Pseudo-decentralized communication

An important aspect of a truly decentralized system is peer-to-peer communication, which allows information to be exchanged directly between agents and ideally takes place without a central entity. Currently, peer-to-peer communication is not available in the Crazyflie ecosystem, but it is in development.

For this reason, we have implemented a workaround in our client, enabling a pseudo-decentralized communication system. This involves adding an additional layer to the Crazy RealTime Protocol (CRTP), which we named the Multi-Agent Communication Protocol (MACP). It consists of an additional packet header made of the destination ID, source ID and a port as endpoint identifier. Every ID is unique and directly derived from the Crazyflie’s address.

These MACP packets are sent via the unused CRTP port 0x9. The packet routing mechanism implemented in the client forwards the packets to their destination. It is also possible to send packets as a broadcast or to address the client directly.
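As a sketch, such a header can be laid out as three bytes in front of the payload inside the CRTP data field. The destination ID, source ID, and endpoint port are from the description above; the struct and its field names are my illustration:

#include <stdint.h>

// Illustrative MACP packet layout, carried in the data field of a
// CRTP packet on the otherwise unused port 0x9.
#define MACP_CRTP_PORT   0x09
#define MACP_MAX_PAYLOAD (30 - 3)  // CRTP data size minus the 3-byte MACP header

typedef struct {
  uint8_t destId;   // unique ID of the destination agent, or a broadcast ID
  uint8_t srcId;    // unique ID of the sender, derived from its address
  uint8_t port;     // endpoint identifier, analogous to a CRTP port
  uint8_t payload[MACP_MAX_PAYLOAD];
} __attribute__((packed)) macpPacket_t;

The client's routing mechanism then only needs to inspect destId to forward a packet to the right Crazyflie, or to all of them for a broadcast.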

On the firmware side, we have added a corresponding interface to simplify the sending and receiving of MACP packets. It is analogous to the CRTP implementation and allows the registration of callback functions or queues for incoming packets on corresponding ports. It can be activated via KBuild.

At least for use in the laboratory and the development of distributed algorithms, this method has proven its worth for us.

Project

We hope that the sim_cf2 simulation can also be useful to others. The complete source code is available on GitHub. Further information concerning installation and configuration can be found in the readme files in the respective repositories.

We would also like to point out that other simulations have been created that are based on sim_cf and therefore offer similar functionality. One of these projects is CrazySim, also available on GitHub. Moreover, there are ongoing efforts to officially integrate such a software-in-the-loop simulation into the Crazyflie firmware and ecosystem.

sim_cf2:

https://github.com/CrazyflieTHI/sim_cf2

Multi-Agent Client:

https://github.com/CrazyflieTHI/crazyflie-client-multi-agent-thi

Show-and-tell post of CrazySim:

https://github.com/orgs/bitcraze/discussions/995

For the original sim_cf Flight Simulator for the Crazyflie visit:

https://github.com/wuwushrek/sim_cf

Today, we welcome Dimitrios Chaikalis from New York University to talk about their project of cooperative flight. Enjoy!

For our work in cooperative flight, we developed controllers for many tightly coupled drones to fly as a unit. The idea is that, either in a centralized or decentralized manner, it should be possible to treat drones as thrust force and yaw moment modules, in order to allow many small drones to carry objects too heavy for a single one to lift.

It quickly turned out that the Crazyflies, with their small size, open-source firmware, ROS compatibility, and, as we happily found out after hours upon hours of crashes, amazing durability, would be the perfect platform to test our controllers.

We designed and 3D-printed very lightweight, hollow connecting rods that could latch onto Crazyflies on one side, along with a number of lightweight polygons such as squares and hexagons with housings for the other side of the rods on all their faces. This allowed us to seamlessly change between geometric configurations and test our controllers.

We first tested with some symmetric triangle and quad formations.

The above is probably literally the first time our cooperative configuration achieved full position control
The tests on quad-copter configurations started as we transitioned to fully modular designs

Eventually, to make the controller generic, we developed a simple script that could deduce with some accuracy the placement of drones given a small lexicographic description submitted by the tester as a string, essentially denoting a sequence of rods and polygons utilized in the current configuration. Of course, some parameters such as rod lengths, or additional weights that we added to the system (such as a piece of foam attached to the structure), could not be known in advance, but the adaptive controller design ensured that the overall system could still achieve stable flight.

Strangely, the L shape has become a sort of ‘staple’ configuration in cooperative load transportation

We also proved that with more than 3 drones in a configuration, we could optimize the thrusts of the agents such that additional performance criteria could be met. For example, in an asymmetric configuration of 5 drones, one of them had a significantly more depleted battery. Crazyflies provide real-time battery voltage feedback, so we were able to use that in an optimization node running in Matlab on a ground computer, choosing thrust levels such that the depleted agent could be utilized less. This was a significant help, because in many of those experiments, the Crazyflies had to operate at more than 80% of their thrust capacity, so battery life optimization was of the essence.
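A sketch of the kind of allocation problem this implies; the formalization is mine, not the exact optimization from the paper. Each agent's thrust is penalized with a weight that grows as its battery depletes, subject to producing the commanded total force and moment:

$$\min_{f_1,\dots,f_n}\; \sum_{i=1}^{n} w_i f_i^2 \quad \text{s.t.}\quad \sum_{i} f_i = F_{\text{des}}, \qquad \sum_{i} \mathbf{r}_i \times (f_i \hat{\mathbf{z}}_i) = \mathbf{M}_{\text{des}}, \qquad 0 \le f_i \le f_{\max},$$

where $f_i$ is the thrust of agent $i$, $\mathbf{r}_i$ its position in the structure frame, and $w_i$ is chosen inversely related to its measured battery voltage, so that depleted agents are loaded less.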

We used ROS for all the code written for the above implementations, using the Crazyflie-ROS package in order to get battery and IMU readings from all drones and provide thrust and roll, pitch, and yaw rate commands at up to 100Hz.

The corresponding publication can be found here: https://link.springer.com/article/10.1007/s10846-023-01842-1

In case you want to build on our work, you can cite the above paper as such:

D. Chaikalis, N. Evangeliou, A. Tzes, F. Khorrami, ‘Modular Multi-Copter Structure Control for Cooperative Aerial Cargo Transportation‘, Journal of Intelligent & Robotic Systems, 108(2), 31.

YouTube Link: https://www.youtube.com/watch?v=nA41uJIehH8&t=1s

Today, Suryansh Sharma from TU Delft presents the open-source Gimbal they devised. Enjoy!

Crazyflies (and other drones in this weight class) are extremely fun to fly and prototype with! But if you are a scientist or tinkerer rather than a well-skilled drone pilot, you might struggle with flying these platforms, especially when testing new control loops or experimental code. While crashing also teaches a lot about the behavior of the system, sometimes we are interested in seeing the system dynamics without breaking the drone.

Currently, doing this for such small drones is not easy. We need something lightweight and still accessible. To solve this, we made Open Gimbal: a specially designed 3 degrees of freedom (DoF) platform that caters to the unique requirements of these tiny drones. We make two versions: (a) a tripod version, which can be mounted on a camera or light tripod with a 1/4-20 UNC or 3/8-16 UNC screw thread, and (b) a desktop version, which can be placed on a tabletop.

Our approach focuses on simplicity and accessibility. We developed an open-source, 3-D printable electro-mechanical design that has minimal size and low complexity. This design facilitates easy replication and customization, making it widely accessible to researchers and developers. The platform allows for unrestricted and free rotational motion, enabling comprehensive experimentation and evaluation. You can see the movement from the CAD version below:

Degrees of Rotational freedom that Open Gimbal provides

You can also check out the interactive CAD model and see how the gimbal moves here. All of the 3D model files as well as the BOM and instructions for assembly can be found in our repository here.

In our publication, we also address the challenges of sensing flight dynamics at a small scale. To do so, we have devised an integrated wireless batteryless sensor subsystem. Our innovative solution eliminates the need for complex wiring and instead uses wireless power transfer for sensor data reception. You can read all about how we do this in our paper here.

If you do end up using the platform for research then you can cite us using the details below:

@ARTICLE{10225720, author={Sharma, Suryansh and Dijkstra, Tristan and Prasad, Ranga Venkatesha}, journal={IEEE Sensors Letters}, title={Open Gimbal: A 3 Degrees of Freedom Open Source Sensing and Testing Platform for Nano- and Micro-UAVs}, year={2023}, volume={7}, number={9}, pages={1-4}, doi={10.1109/LSENS.2023.3307121}}

I hope that you find the Open Gimbal useful! Feel free to reach out to me at Suryansh.Sharma@tudelft.nl if you have any ideas/questions or if you end up making an Open Gimbal yourself!