
We have now spent a few weeks working on the new TDoA 3 mode for the Loco Positioning System. We are happy with the results so far and think we have managed to do what we aimed for: removing the single point of failure in anchor 0, and supporting many anchors as well as larger spaces.


We finished off last week by setting up a system with 20 anchors covering two rooms down in the lunch area of the office, and managed to fly a scripted autonomous flight between the two rooms.

Work so far on the anchors

Messages from the anchors are now transmitted at random times, which removes the dependency on anchor 0, which used to act as a master that all other anchors were synchronized to. The drawback is that we get collisions when two anchors happen to transmit at the same time. Experiments showed that at 400 packets/s (system rate) we ended up with a packet loss of around 15% and 340 TDoA measurements/s sent to the Kalman filter for position estimation. We figured that this was an acceptable level and added an algorithm in the anchors that reduces the transmission rate based on the number of anchors around them. If more anchors are added to a room, they all reduce their transmission rate to target 400 packets/s in total system rate.
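
To illustrate the idea, here is a sketch in Python with invented names and constants; the real anchor firmware is written in C and the details differ:

```python
import random

SYSTEM_RATE = 400.0  # target total packets/s for the whole system

def transmission_delay(n_anchors):
    """Randomized delay until this anchor's next transmission.

    The mean delay is chosen so that n anchors together produce roughly
    SYSTEM_RATE packets/s; the randomization avoids two anchors ending
    up in lock-step and colliding repeatedly.
    """
    mean_delay = n_anchors / SYSTEM_RATE  # seconds between our packets
    return random.uniform(0.5 * mean_delay, 1.5 * mean_delay)

# With 20 anchors in the system, each anchor targets ~20 packets/s on average
print(transmission_delay(20))
```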

The anchors continuously keep track of the clock drift of all other anchors by listening to the messages that are transmitted. We know that clocks do not change frequency suddenly, and we use this fact to filter the clock correction and reduce noise in the data: outliers are detected and removed, and the resulting correction is low-pass filtered. We have done some experiments on using this information, comparing it to the time stamp of a received message to detect whether the time stamp is corrupt, but this idea requires more work.
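
As a rough illustration of the filtering step (the constants and structure here are assumptions, not the real implementation):

```python
MAX_DEVIATION = 10e-6  # reject corrections that jump more than 10 ppm (assumed)
ALPHA = 0.1            # low-pass filter constant (assumed)

class ClockCorrectionFilter:
    """Tracks the clock correction between a remote anchor clock and ours."""

    def __init__(self):
        self.correction = None

    def update(self, raw_correction):
        if self.correction is None:
            self.correction = raw_correction  # first sample, accept as-is
        elif abs(raw_correction - self.correction) > MAX_DEVIATION:
            pass  # outlier: clocks cannot jump like this, discard the sample
        else:
            # low-pass filter the accepted samples to reduce noise
            self.correction += ALPHA * (raw_correction - self.correction)
        return self.correction
```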

One interesting constraint of the anchors is the limited CPU power that is available. The strategy we have chosen to handle this is to make message handling as efficient as possible, while a timer-based maintenance algorithm (@1 Hz) examines the received data, decides which anchors to include in future messages, and purges old data.
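
A sketch of what such a maintenance pass could look like; the timeout, table layout and selection policy are all assumptions for illustration:

```python
import time

ANCHOR_TIMEOUT = 2.0    # seconds of silence before purging an anchor (assumed)
MAX_REMOTE_ANCHORS = 8  # anchors referenced in one outgoing message (assumed)

def maintenance(anchor_table, now=None):
    """Run at ~1 Hz: purge stale anchors, pick which ones to reference."""
    now = time.time() if now is None else now
    # Purge anchors that have gone silent
    for anchor_id in list(anchor_table):
        if now - anchor_table[anchor_id]['last_heard'] > ANCHOR_TIMEOUT:
            del anchor_table[anchor_id]
    # Reference the most recently heard anchors in future messages
    freshest = sorted(anchor_table,
                      key=lambda a: anchor_table[a]['last_heard'],
                      reverse=True)
    return freshest[:MAX_REMOTE_ANCHORS]
```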

The Crazyflie

The implementation in the Crazyflie is fairly straightforward. The biggest change from TDoA 2 is that we can now handle a dynamic number of anchors and have to choose which data to store and which to discard. We have also extracted the actual TDoA algorithm into a module, to separate it from the TDoA 3 protocol. The clock correction filtering algorithm from the anchors has also been implemented in the Crazyflie.

An experimental module test has been added where the TDoA module is built and run on a PC, using data recorded from a sniffer. This gives us repeatability as well as better tools for debugging, and it is something we should explore further.

Work remaining 

The estimated position in the Crazyflie is still noisier than in TDoA 2, and we would like to improve it to at least the same level. We see outliers in the TDoA measurements that make the Crazyflie go off in a random direction from time to time; we believe it should be possible to get rid of most of these.
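
One candidate gate, as an illustration of the kind of check that could help (not something that is implemented yet): a true TDoA between two anchors can never exceed the distance between those anchors, so measurements outside that bound can be discarded outright.

```python
import math

SPEED_OF_LIGHT = 299792458.0  # m/s

def is_plausible(tdoa_seconds, anchor_a, anchor_b):
    """anchor_a and anchor_b are (x, y, z) anchor positions in meters."""
    inter_anchor_distance = math.dist(anchor_a, anchor_b)
    # A TDoA converted to meters must fit within the anchor pair's baseline
    return abs(tdoa_seconds) * SPEED_OF_LIGHT <= inter_anchor_distance
```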

The code is fairly hackish and there are no structured unit or module tests to verify functionality. So far the work has been in an exploratory phase, but we are getting closer to a set of algorithms that we are happy with and that are worth testing.

We have not done any work on the client side yet, that is, support for visualizing and configuring the system. This is a substantial amount of work, and we will not officially release TDoA 3 until it is finished.

How to try it out

If you are interested in trying TDoA 3 out yourself, it is all available on GitHub. There are no hardware changes, so if you have a Loco Positioning system it should work just fine. There is a short description on the wiki of how to compile and configure the system. The anchors support both TDoA 2 and TDoA 3 through configuration, while the Crazyflie has to be recompiled to switch between the two. The support in the client is limited and will basically only handle anchors 0–7.

Have fun!

I’ve spent the last 5 years of my career at Microsoft on the team responsible for HoloLens and Windows Mixed Reality VR headsets. Typically, augmented reality applications deal with creating and manipulating digital content in the context of real-world surroundings. I thought it’d be interesting to explore some applications of using an augmented reality device to manipulate and control physical objects and have them interact with the real world and/or digital content.

Phase 1: Gesture Input

The HoloLens SDK has APIs for consuming hand gestures as input. For the first phase of this project, I modified the existing Windows UAP/UWP client to handle these gestures and convert them to CRTP setpoints. I used the “manipulation gesture” which provides offsets in three dimensions for a tap-and-drag gesture, from the point in space where the initial tap occurred. These three degrees of freedom are mapped to thrust, pitch and roll.
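
The actual client is C#/UWP, but the idea maps easily to a Python sketch with cflib; the axis assignments and scale factors below are illustrative guesses rather than the values from the real implementation.

```python
def gesture_to_setpoint(dx, dy, dz):
    """Map manipulation-gesture offsets (meters, HoloLens frame, y up)
    to a Crazyflie attitude setpoint. Scales and limits are made up."""
    roll = max(-20.0, min(20.0, dx * 40.0))    # left/right hand -> roll (deg)
    pitch = max(-20.0, min(20.0, -dz * 40.0))  # hand depth -> pitch (deg)
    thrust = int(max(0, min(65535, 38000 + dy * 40000)))  # vertical -> thrust
    return roll, pitch, thrust

# With cf being a connected cflib.crazyflie.Crazyflie instance:
# roll, pitch, thrust = gesture_to_setpoint(dx, dy, dz)
# cf.commander.send_setpoint(roll, pitch, 0, thrust)  # yaw rate fixed at 0
```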

For the curious, there’s an article on my website with details about the implementation and source code. Here’s a YouTube video where I explain the concept and show a couple of quick demos.

As you can see in the first demo in the video, this works but isn’t entirely useful or practical. The HoloLens accounts for head movements (otherwise moving the head to the left would produce the same offset as moving the hand to the right, requiring the user to keep his or her head very still) but the user must still take care to keep the hand in the field of view of the device’s cameras. Once the gesture is released (or the hand goes out of view) the failsafe engages and the Crazyflie drops to the ground. And of course, lack of yaw control cripples the ability to control the Crazyflie.

Phase 2: Position Hold

Adding a flow deck makes for a more compelling user experience, as seen in the second demo in the video above. The Crazyflie uses the sensors on the flow deck to hold its position. With this functionality, the user is free to move about the room and make shorter “adjustment” hand gestures, instead of needing to hold very still. In this mode, the gesture’s degrees of freedom map to an x/y velocity and a vertical offset from the current z-depth.
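
In cflib terms (again a Python sketch of the idea rather than the actual UWP code), this maps naturally onto the hover setpoint, where the flow deck handles stabilization; the axis pairing and scales are assumptions:

```python
def gesture_to_hover_setpoint(dx, dy, dz, current_z):
    """Map gesture offsets to velocities plus a height target."""
    vx = -dz * 1.0             # hand depth -> forward/back velocity (m/s)
    vy = -dx * 1.0             # hand left/right -> sideways velocity (m/s)
    z_target = current_z + dy  # vertical gesture offsets the height (m)
    return vx, vy, z_target

# vx, vy, z_target = gesture_to_hover_setpoint(dx, dy, dz, current_z)
# cf.commander.send_hover_setpoint(vx, vy, 0, z_target)
```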

This is a step in the right direction, but still has limitations. The HoloLens doesn’t know where it is in space relative to the Crazyflie. A gesture in the y axis relative to the device will always result in a movement in the y direction of the Crazyflie, which begins to feel unnatural if the user moves around. Ideally, gestures would cause the Crazyflie to move in the same direction relative to the user, not relative to the ‘front’ of the Crazyflie. Also, there’s still no control over yaw.

The flow deck has some limitations as well: the z-range only goes to 2 meters with any accuracy, and the flow sensor (for lateral stabilization) has a strong dependency on the pattern of the floor below. A flow sensor is a camera that relies on measuring pixel deltas from frame to frame, so if the floor is blank or has a repeating pattern, it can be difficult to hold position properly.

Despite these limitations, using hand gestures to control the Crazyflie with a flow deck installed is actually quite fun and surprisingly easy.

Phase 3 and Beyond: Future Work & Ideas

I’m currently working on some new features that I hope will open the door for more interesting applications. All of what follows is a work in progress, and not yet implemented or functional. Dream with me!

Shared Coordinate System

The next phase (currently a work in progress) is to get the HoloLens and the Crazyflie into a shared coordinate system. Having spatial awareness between the HoloLens and the Crazyflie opens up some very exciting scenarios:

  • The orientation problem could be improved: transforms could be applied to gestures so that the Crazyflie responds to commands in the user’s frame of reference (so ‘pushing’ away from oneself would cause the Crazyflie to fly away from the user, instead of in whatever direction is ‘forward’ from the Crazyflie’s perspective).
  • A ‘follow me’ mode, where the Crazyflie autonomously follows behind a user as he or she moves throughout the space.
  • Ability to walk around and manually set waypoints by selecting points of interest in the environment.

The Loco Positioning System is a natural fit here. A setup step (where a spatial anchor or similar is established at same physical position and orientation as the LPS origin) and a simple transform for scale and orientation (HoloLens and the Crazyflie define X,Y,Z differently) would allow the HoloLens and Crazyflie to operate in a shared coordinate system. One could also use the webcam on the HoloLens along with computer vision techniques to track the Crazyflie, but that would require constant line of sight from the HoloLens to the Crazyflie.
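
A minimal sketch of that transform, assuming the spatial anchor is placed exactly at the LPS origin: Unity/HoloLens uses a left-handed, Y-up coordinate system while the Crazyflie/LPS uses a right-handed, Z-up one, so one possible axis mapping (the exact pairing depends on how the anchor is oriented during setup) is:

```python
def unity_to_lps(x, y, z):
    """Left-handed Y-up (Unity/HoloLens) -> right-handed Z-up (Crazyflie/LPS)."""
    return (z, -x, y)

def lps_to_unity(x, y, z):
    """Inverse mapping."""
    return (-y, z, x)

# Round-trip sanity check
assert lps_to_unity(*unity_to_lps(1.0, 2.0, 3.0)) == (1.0, 2.0, 3.0)
```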

Obstacle Detection/Avoidance

Example surface map produced by HoloLens

The next step after establishing a shared coordinate system is to use the HoloLens for obstacle detection and avoidance. The HoloLens has the ability to map surfaces in real time and position itself in that map (SLAM). Logic could be added to the HoloLens to consume this surface map and adjust pathing/setpoints to avoid obstacles, all without burdening the limited compute/power budget of the Crazyflie itself.

Swarm Control and Manipulation

As a simple extension of the shared coordinate system (and what Bitcraze has been doing with TDoA and swarming lately) the HoloLens could be used to manipulate individual Crazyflies within a swarm through raycasting (the same technique used to gaze at, select and move specific holograms in the digital domain). Or perhaps a swarm could be controlled to move out of the way as a user passes through the swarm, and return to formation afterward.

Augmenting with Digital Content

All scenarios discussed thus far have dealt with using the HoloLens as an input and localization device, but its primary job is to project digital content into the real world. I can think of applications such as:

  • Games
    • Flying around through a digital obstacle course
    • First person shooter or space invaders type game (Crazyflie moves around to avoid user or fire rendered laser pulses at user, etc)
  • Diagnostic/development tools
    • Overlaying some diagnostic information (such as battery life) above the Crazyflie (or each Crazyflie in a swarm)
    • Set or visualize/verify the position of the LPS nodes in space
    • Visualize the position of the Crazyflie as reported by LPS, to observe error or drift in real time

Conclusion

There’s no shortage of interesting applications related to blending augmented reality with the Crazyflie, but there’s quite a bit of work ahead to get there. Keep an eye on the Bitcraze blog or the forums for updates and news on this effort.

I’d love to hear what ideas you have for combining augmented reality devices with physical devices like the Crazyflie. Leave a comment with thoughts, suggestions, or any other relevant work!

Here at the USC ACT Lab we conduct research on coordinated multi-robot systems. One topic we are particularly interested in is coordinating teams consisting of multiple types of robots with different physical capabilities.

A team of three quadrotors all controlled with Crazyflie 2.0 and a Clearpath Turtlebot

Applications such as search and rescue or mapping could benefit from such heterogeneous teams because they allow for more flexibility in the choice of sensors and locomotive capability. A core challenge for any multi-robot application is motion planning – all of the robots in the team need to make it to their target locations efficiently while avoiding collisions with each other and the environment. We have recently demonstrated a scalable method for trajectory planning for heterogeneous robot teams utilizing the Crazyflie 2.0 as the flight controller for our aerial robots.

A Crazier Swarm

To test our trajectory planning research we wanted to assemble a team with both ground robots and multiple sizes of aerial robots. We additionally wanted to leverage our existing Crazyswarm software and experience with Crazyflie firmware to avoid some of the challenges of working with new hardware. Luckily for us the BigQuad deck offered a straightforward way to super-size the Crazyflie 2.0 and gave us the utility we needed.

With the BigQuad deck and off-the-shelf components from the hobbyist drone community we built three super-sized Crazyflie 2.0s. Two of them weigh 120g (incl. battery) with a motor-to-motor size of 130mm, and the other is 490g (incl. battery and camera) with a size of 210mm.

120g, 130mm

490g, 210mm

We wanted to pick components that would be resistant to crashing while still offering high performance. To meet these requirements we ended up picking components inspired by the FPV drone racing community, where both reliable performance and high-impact crashes are expected. Full parts lists for both platforms are available here.

Integrating the new platforms into the Crazyswarm was fairly easy. We first had to re-tune the PID controller gains to account for the different dynamics of the larger platforms. This didn’t take too long, but we did crash a few times; luckily, the components we chose were able to handle the crashes without any breakage. After tuning, the platforms behave very well and are just as easy to work with as the original Crazyflie 2.0. We have additionally updated the Crazyswarm package to differentiate between BigQuad and regular Crazyflie types, and those updates are now available for use by anyone!
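
For readers wanting to try something similar, gains can be adjusted from the ground without reflashing via the Crazyflie parameter framework. Below is a cflib sketch; note that the Crazyswarm toolchain has its own configuration mechanism, and the parameter names and values here are only illustrative (the right gains depend on the platform’s dynamics).

```python
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M'  # adjust to your setup

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    # Example attitude-rate gains; the values here are placeholders
    scf.cf.param.set_value('pid_rate.roll_kp', '250')
    scf.cf.param.set_value('pid_rate.pitch_kp', '250')
```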

In future work, we are excited to do hands-on experiments with a prototype of the CF-RZR. This new board seems like a promising upgrade to the CF 2.0 + BQD combination as it has upgraded components, an external antenna, and a standardized form factor. Hopefully we will see the CF-RZR as part of the Crazyswarm in the near future!

Mark Debord
Master’s Student
Automatic Coordination of Teams Laboratory
University of Southern California
Wolfgang Hönig
PhD Student
Automatic Coordination of Teams Laboratory
University of Southern California


We have been flying swarms in our office plenty of times. There is a limitation to this though: our flying space is only around 4 x 4 meters. Flying 8–10 Crazyflies in this space is challenging, and it is hard to make it look good; slight position inaccuracies make it look a bit sloppy. To mitigate this we decided to put on a small swarm show using a bigger flying space, and to invite families and friends, just to raise the stakes a bit.

As usual we had limited time to accomplish this, and this time the result should be worth looking at. Well, we have managed to pull off hard things in one day before, so why not this time… The setup is basically a swarm bundle with added LED-rings. Kristoffer took care of the choreography, Tobias of setting up the drones, and Arnaud of configuring the Loco Positioning System.

Choreography

Kristoffer’s pre-Bitcraze history involves some dancing, and he had played earlier with the idea of creating choreographies with Crazyflies. One part of this was a weekend hack a few months back, when he tried to write a swarm sequencer that is a bit more dance oriented. The goal was to be able to run a sequence synchronized to music and define the movements in terms of bars and beats rather than seconds. He also wanted to be able to define a motion to end at a specific position on a beat, as opposed to starting on the beat. As Kristoffer did not have access to a swarm when he wrote the code, he also added a simple simulator to visualize the swarm. The hack was not a complete success at the time, but turned out to be useful in this case.

The sequences are defined in a YAML file as a list of time stamps, positions and, if needed, the color of the LED-ring. After a few hours of work he had at least some sort of choreography with 9 Crazyflies moving around; maybe not a masterpiece from a dance point of view, but time was running out.
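
The exact file format is not important here, but to make it concrete, an entry could look like the following; the field names in this sketch are invented for illustration, and a few lines of Python suffice to convert beats to seconds for a given tempo:

```python
import yaml  # pip install pyyaml

SEQUENCE = """
tempo: 120           # beats per minute
steps:
  - {time: 0, position: [0.0, 0.0, 1.0], color: [255, 0, 0]}
  - {time: 4, position: [1.0, 0.0, 1.0]}
  - {time: 8, position: [1.0, 1.0, 1.5], color: [0, 0, 255]}
"""

data = yaml.safe_load(SEQUENCE)
seconds_per_beat = 60.0 / data['tempo']
for step in data['steps']:
    t = step['time'] * seconds_per_beat  # when the drone should arrive
    print(t, step['position'], step.get('color'))
```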

The simulator is super basic but turned out to be very useful anyway (the color of the crosses indicates the color of the LED ring). We actually never flew the full sequence with all drones before the performance, but trusted the simulation to be accurate enough! We did fly most of the sequence with one Crazyflie, to at least make it plausible that we had it all right.

Short snippet from the simulation

Setting up drones

Handling swarms can be tedious and time consuming. Just making sure all drones are assembled, fully operational and charged is a challenge as the numbers increase. Tobias decided to do a manual flight test of every drone: if it flies well manually, it will most likely fly well autonomously. The testing resulted in switching out some motors and props, as vibrations are a crippling factor, especially for Z accuracy. A takeaway from this exercise is to implement better self-testing, so that these problems can be detected automatically and fixed much quicker.

Loco Positioning System

We ran the positioning system with standard firmware in TDoA mode to support multiple Crazyflies simultaneously. The mapped space was around 7 x 5 x 2.5 meters and the anchors were placed more or less in the corners of the flying space box.

The result

The audience (families and friends) was enthusiastic and expectations were high! Even though not all drones made it all the way through the show, the spectators seemed duly impressed and requested a re-run.

We have been lucky enough to get the opportunity to use a motion capture system from Qualisys in our flight lab. The Qualisys system is a camera-based system that uses IR cameras to track objects with sub-millimeter precision! The cameras are designed to measure the position of, and accurately track, small reflective marker balls that are fixed to the object to be tracked. By using multiple cameras shooting from different angles, the system can calculate the 3D position of a marker in space. By mounting multiple markers on an object, the system can also identify the object as well as its orientation in space. Very cool!

We have started to look at how to add support in our ecosystem for the Qualisys system as well as other “external” positioning systems; external in this context means systems that calculate the position outside the Crazyflie. There is already great support for external positioning in the Crazyswarm project by the USC-ACTLab, but we are now looking at lightweight support in the Python client. We are not sure what we will add, but ideas are along the lines of viewing an external position in the client, feeding an external position into the Crazyflie for autonomous flight, and maybe a simple trajectory sequencer.
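
As a minimal sketch of the second idea, feeding an external position into the Crazyflie can be done with the current Python library; the URI and the get_external_position() helper are placeholders:

```python
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M'  # adjust to your setup

def get_external_position():
    # Placeholder: return the latest (x, y, z) in meters from the mocap system
    return 0.0, 0.0, 1.0

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
    while True:
        x, y, z = get_external_position()
        scf.cf.extpos.send_extpos(x, y, z)  # feeds the onboard state estimator
        time.sleep(0.01)                    # ~100 Hz update rate
```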

MoCap Deck

We have also started to design a MoCap Deck to make it easy to mount reflective markers on the Crazyflie. Our design goals include:
* light weight
* easy to use
* support for multiple configurations to enable identification of individuals
* the possibility to add a button for human interaction

The current design of the MoCap Deck

The suggested design of the MoCap Deck

Any feedback on the MoCap Deck and ideas for functionality to add to the client is welcome! Please add a comment to this blog post or send us an email.

We will write more about the Qualisys system later on, stay tuned!

This week we have a guest blog post by Ben, enjoy!

I’m Ben Kuperberg and I’m a digital artist, artist-friendly software developer and orchestra conductor. Being a juggler, I’ve decided to focus some of my work on the intersection between juggling and technology, and I’ve since been working more and more with jugglers, my latest project being “Sphères Curieuses” from Le Cirque Inachevé, created by Antoine Clée. While the whole project is not focused on drones, a part of it involves synchronized flight of multiple drones and precise human interaction with those drones. Swarm flight is something already out there and some solutions already exist, but the context of this project added some challenges.

Most work on drone swarms has been done by research groups or schools. They use high-grade, expensive motion-capture systems able to track the drones precisely and assign their absolute positions. While the quality of the result is undeniable, it is not a good fit for stage shows: the setup takes a lot of time, which we can’t always afford when the show is on the road. Moreover, the mocap system is too invasive on stage if you want to hide the technology a bit and let the spectators focus on what the artist wants them to see. Not to mention it costs an arm and a leg, and Antoine needs both to juggle.

So we had to find other ways to track multiple drones. That’s when we found out that the amazing team at Bitcraze was working on the TDoA technology, which allows precise-enough tracking of a virtually unlimited number of drones, at reduced cost and with a fast and clean setup.

After some work, we had a first rough version of our swarm server, made by Maxime Agor, that allowed us to connect and move multiple drones using the TDoA system, controlled from a Unity application.

While we were able to present a decent demo with this system, we were facing a major problem of reactivity. When working with artists and technology, reactivity is a key component of creativity, because it can be frustrating and tense to stop every two minutes to make changes or fix problems. My first priority was therefore to prepare and design software that allows me to spend most of the “creation time” on the actual creative work and not on the technical parts. It is also essential that the artist performing in front of the audience can focus entirely on the performance and be fully confident in the technology. The last challenge is that, as I focus my work on the creation and not the touring, all my work needs to be easily understood and modified by both the artists and the technicians who will take over my work for the tour.

With all of that in mind, I decided to create software with a high-end user interface called “La Mouche Folle” (« The Crazy Flie » in French) that makes it possible to control multiple drones, with an overview of all the drones, their battery/charging/alert states, auto-connect/auto-reboot features, external control via OSC, and a Unity client to view and actually decide how to move the drones. All my work is open source, so you can find the software on GitHub.

There is only a Windows release for now, but it should compile just fine on OSX and Linux; the software is made with JUCE and depends on OrganicUI and libusb. Feel free to contact me if you want more information about the software. Many thanks to Wolfgang Hoenig for the support and the great work on the Crazyflie C++ library I’m relying on.

So this is the basic setup of our project, but we needed more than that to control the drones. We wanted to be able to control them in the most natural way possible. We quickly decided to go with glove-based solutions, and have been working with Specktr to get our hands (pun intended) on developer versions of their glove. The glove is good but can’t give us the absolute position of the hand, so we added HTC Vive trackers with the Lighthouse technology, and were then able to get both natural hand control and sub-millimeter precision of the tracked hand.

Then it was a matter of connecting everything together: for other projects for Theoriz Studio, I had already developed MrTracker (used in the MixedReality project), which acts as a middleware between the Vive trackers and Unity.

I used Chataigne to easily connect and route the Specktr glove data to Unity as well, so we have maximum flexibility to switch hardware or technology without breaking the whole setup if we need to.

A video of the final result

In the past years, I’ve come to work on a lot of different projects with different teams, which I like very much, because each project leads to discovering new people, new ways of working and new challenges to overcome. I’m having a great time working on this project, and especially sharing everything with the people at Bitcraze and the community; everyone has been so cool and nice. I plan to go to the Bitcraze studio to work with them for a few weeks, and I’m sure it’ll be a great experience!

It is now the first day of 2018 and a good day to look back at 2017. It has been a busy year as always, and we have had a lot of fun during the year. One of the first things that pops up is that things take so much longer than we think. Luckily we are working with open source, and progress does not only depend on us, as we have awesome help from the community. We are already really excited about what’s coming in 2018 and look forward to working together with so many great people!

Community

The Crazyflie 2.0 is still gaining attention and is becoming more and more popular among universities around the world. We see interest from researchers working with autonomous systems, control theory, multi-agent systems, swarm flight, robotics and all kinds of research fields, which is really great. This means that a lot of exciting work has been contributed by the community, so here is a small summary of what has happened in the community during the year.

At the beginning of the year, the Multi-Agent Autonomous Systems Lab at Intel Labs shared how the Crazyflie 2.0 is used in their research on trajectory planning in cluttered environments. We wrote a blog post about this if you want to learn more about their work. The Crazyflie also showed up on the catwalk of Berlin Fashion Week as part of fashion designer Maartje Dijkstra’s futuristic creation “TranSwarm Entities”, a dress made out of 3D prints accompanied by autonomously flying Crazyflies.

For the third year, Bitcraze visited FOSDEM. We had a good time and got to hang out with community members like Fred, who gave a great presentation about what’s new in the Crazyflie galaxy. During the conference we took the opportunity to present the Loco positioning system and demo autonomous flight with the Crazyflie controlled by the Loco positioning system. In the demo we flew with the non-linear controller from Mike Hammer, using trajectory generation from Marcus Greiff.

We have had a few interesting blog post contributions from major universities during the year, including a guest post written by researchers at Carnegie Mellon University, who are using the Crazyflie 2.0 drone to create an adaptive multi-robot system. Similar work has been done by researchers at the Computer Science and Artificial Intelligence Lab at MIT, where they have been studying coordination of multiple robots, developing multi-robot path planning for a swarm of robots that can both fly and drive.

We have also had two interesting guest blog posts from the GRASP Laboratory at the University of Pennsylvania: “A Flying Gripper based on Modular Robots” and “ModQuad – Self-Assemble Flying Structures”. Inspired by swarm behavior in nature, for instance how ants solve collective tasks, both projects explore how multiple Crazyflies can work together to perform different missions.

During the fall, Fred took the time to pay us a visit at the office in Sweden and work together with us. He is making great progress on the Java Crazyflie lib that is going to be used in the Android client as well as in PC clients. It will make it possible to connect to and use a Crazyflie from any Java program; there has already been some successful experimentation using it from Processing.

Some other great news is that, thanks to Sean Kelly, the Crazyflie 2.0 is now officially supported by the Betaflight flight controller firmware. Betaflight is a flight controller firmware used a lot in the FPV and drone racing community.

Thanks to denis on the forum, there is now support for the Crazyflie 2.0 in the PX4 flight controller firmware. PX4 is a comprehensive flight controller firmware used in research and by industry.

Finally, the Crazyswarm project by Wolfgang Hoenig and James A. Preiss from the USC ACTlab was presented at ICRA 2017. It is a framework that makes it possible to fly swarms of Crazyflie 2.0s using a motion capture system. There is currently work being done on merging the Crazyswarm project into the Crazyflie master branch, which will make it even easier to fly a swarm of Crazyflies. In the meantime the project is well documented and can be used by anyone who has a couple of Crazyflies and a motion capture system.

Hardware

During 2017 we released four new products, beginning with the Micro SD-card deck, which among other things makes high-speed logging possible. Then came the Z-ranger, which enables a height-hold flight mode up to 1 m above the ground; we like to call it drone surfing, as that is very much what it feels like when flying. We ended by releasing two boards, the Flow deck and the Flow breakout, in collaboration with PixArt, containing their new PMW3901 optical flow sensor. The Flow deck enables scriptable flight, which is very exciting, and led us to release the STEM drone bundle, which we hope will inspire people to learn more about flying robotics.

Hardware prototypes, our favorite sub-category, are something we have plenty of lying around here at the office. To name a few: a possible Crazyradio 2, the Loco positioning tag, the Crazyflie RZR, the Glow deck and an obstacle avoidance/SLAM deck. It takes a long time to make a finished product… Hopefully we will see more of these during 2018!

Software

At the same time as we released the Flow deck, we also released the latest official Crazyflie 2.0 firmware and client (2017.06). This enables autonomous capabilities as soon as the Flow deck is inserted, by automatically turning on the corresponding functionality. Just before that, the Loco positioning system was brought out of early access, with improved documentation and a simplified setup. Since then a lot of work has been put into making a release of TDoA and improving overall ease of use. With TDoA2 and automatic anchor estimation starting to work pretty well, we should not be far from a new official release!

We would like to end 2017 with a big thank you to our users and community with this compilation video. Make sure to pump up the volume!

video link

This year we decided to celebrate the holiday by painting a Christmas tree, rather than dressing one like last year. What better way to do this than with the Flow deck, a LED-ring and a long-exposure photo? To check out all the yummy details and how to DIY, have a look at this hackster project we made. As a Christmas extra we also made this light painting video, with the LED-ring mounted on top of the Crazyflie 2.0 and a bit of video editing. To be able to mount the LED-ring on top we hacked together an inverting deck. Not a bad idea, and something we aim to release in the future!


Getting started

For those of you who were lucky enough to get a Crazyflie 2.0 under the Christmas tree, here is a short intro to get you started.

You can find all our getting started guides in the “Tutorials” menu on www.bitcraze.io. Take a look at “Getting started with the Crazyflie 2.0” to see how to assemble the kit and take off for your first flight. If you have an expansion deck you will also find a guide for how to install it.

Development

When you are comfortable flying the Crazyflie you might feel that it is time for the next step, to make use of the flexibility of the platform. After all it is designed to be modified!

Check out the “Getting started with development” tutorial to set up your development environment, build your first custom firmware and download it to the copter.

Maybe you want to add a sensor or some other hardware? Heat up your soldering iron and dive in to it! Find more information about the expansion bus on the wiki. The wiki is the place to look for all product and project documentation.

All source code is hosted on github.com/bitcraze and this is also where you will find documentation related to each repository. 

Projects

Looking for inspiration for a project? Take a look at hackster.io or read our blog posts. The video gallery contains some really cool stuff, as does our YouTube channel.

Contribute!

Open source is about sharing, creating something awesome together and contributing to the greater good! Whenever you do something that you think someone else could benefit from, please contribute it! If you were curious or confused about something, someone else probably will be too. Help them by sharing your thoughts, insights and discoveries.


Need help?

Can’t find the solution to a problem? Don’t understand how or what to do? Have you read all the documentation and are still confused? Don’t worry: head over to the forum and check if someone else has had the same problem. If not, ask a new question on the forum and get help from the community.

Happy holidays from the Bitcraze team!

The Loco Positioning System (LPS) default working mode is currently Two Way Ranging (TWR), a positioning mode that has the advantage of being pretty easy to implement and gives good positioning performance for most use cases and anchor setups. This was a very good reason for us to start with it. However, TWR only supports positioning and flying of one, or maybe a couple of, Crazyflies; it is not a solution for flying a swarm.

One solution for flying a swarm is an algorithm called Time Difference of Arrival (TDoA). We have had a prototype implementation for a while, but we experienced problems with outliers, most of which were due to the fact that we were losing a lot of packets and thus using bad data.

To solve these issues, TDoA2 makes two changes:

  • Each packet has a sequence number, and each timestamp is associated with the sequence number of the packet it was created from
  • The distances between anchors are calculated and transmitted by the anchors

A slightly simplified explanation follows to outline why this helps (a more detailed explanation of how TDoA works is available in the wiki).

We start by assuming that all timestamps are available to the tag; this is achieved by transmitting them in the packets from the anchors to the copter.

The end goal is to calculate the difference in time of arrival between two packets from two different anchors. Assuming we have the transmission times of the packets in the same clock, all we need to do is subtract the time between the two transmissions from the time between the two receptions:

0 – anchor 0, 1 – anchor 1, T – Tag (that is the LPS deck on the Crazyflie)
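
Writing this out (with R for reception times in the tag’s clock, T for transmission times in the common clock, P1 the packet from anchor 0 and P2 the packet from anchor 1; the symbols are shorthand for this post):

```latex
\mathrm{TDoA} = (R_{P_2} - R_{P_1}) - (T_{P_2} - T_{P_1})
```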

To do so, we need the time it took for the packet to travel between the two anchors, which enables us to calculate the transmit time of P2 in anchor 1. This can be obtained by calculating the TWR time of flight between the two anchors, which would require the tag to receive 3 packets in sequence:

So now for the part where TDoA2 helps: previously we had to have the 3 packets in sequence in order to calculate a TDoA; if any one of them was missing, the measurement would fail or, worse, give the wrong result. Since we did not have sequence numbers, it was hard to detect packet loss. Now that we have sequence numbers, we can tell when a packet is missing and discard the faulty data. We also no longer have to calculate the distance between anchors in the tag; it is calculated by the anchors themselves. This means that we can calculate a TDoA with only two consecutive packets, which increases the probability of a successful calculation substantially.
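
The check itself is cheap. As an illustrative sketch (assuming, for the sake of the example, 8-bit sequence numbers that wrap around):

```python
def is_consecutive(prev_seq, new_seq, modulo=256):
    """True only if new_seq directly follows prev_seq (with wrap-around)."""
    return (prev_seq + 1) % modulo == new_seq

# A TDoA is only computed from a pair of consecutive packets; on a gap,
# the stale timestamps are discarded and we wait for fresh ones.
```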

To reduce packet loss even more, we have also added functionality to automatically reduce the transmission power of the nRF radio (the one talking to the Crazyradio dongle) when the LPS deck is detected. It has turned out that the nRF radio transmissions interfere with UWB radio reception, and since most indoor use cases do not require full output power, we figured this was a good trade-off.

The results we have seen with the new protocol are quite impressive. TDoA is usually very sensitive to the tag being inside the convex hull, so much so that with the first TDoA protocol we had to start the Crazyflie from about 30 cm up to be well within the convex hull. This is not required anymore, and the position is still good enough to fly even a bit outside of the convex hull. The outliers are also greatly reduced, which makes this new TDoA mode behave very close to the current TWR mode, but with the capability to locate as many Crazyflies as you want:

Added to that, we have also implemented anchor position handling in the TDoA2 protocol, which means that it is now as easy to set up a system with TDoA2 as with TWR:

We are now working on finishing the last pieces of functionality, like switching between algorithms (TWR and TDoA), and on writing a getting started guide. When that is done, TDoA will become an official mode for the LPS.

In the meantime, if you are adventurous, you can try it yourself. It has been pushed to the master branch of the Crazyflie firmware and the LPS node firmware. You should re-flash the Crazyflie firmware, both STM32 and nRF51, from master, and the anchors from master too.

Modular robots can adapt and offer solutions in emergency scenarios, but self-assembling on the ground is a slow process. What about self-assembling in midair?

In recent work at the GRASP Laboratory at the University of Pennsylvania, we introduce ModQuad, a novel flying modular robotic structure that is able to self-assemble in midair and cooperatively fly. This work is directed by Professor Vijay Kumar and Professor Mark Yim. We are focused on developing bio-inspired techniques for flying modular robots. Our main interest is to develop algorithms and controllers for self-assembling modular robots that can dock in midair.

In biological systems such as ant or bee colonies, collective effort can solve problems that are not efficient for individuals, such as exploring, transporting food and building massive structures. Some ant species are able to build living bridges by clinging to one another, spanning the gaps in the foraging trail. This capability allows them to rapidly connect disjoint areas in order to transport food and resources to their colonies. Recent work in robotics has focused on using swarm behaviors to solve collective tasks such as construction and transportation. Docking modules in midair offers an advantage in terms of speed during the assembly process. For example, in a building on fire, the individual modules can rapidly navigate from a base station to the target through cluttered environments. Then they can assemble bridges or platforms near windows in the building to offer alternative exits to save lives.

ModQuad Design

ModQuad is propelled by a quadrotor platform; we use the Crazyflie 2.0. The vehicle was chosen because of its agility and scalability. The low cost and available payload give us an acceptable scenario for a large number of modules.

Very lightweight carbon fiber rods connected by eight 3D-printed ABS connectors form the frame. The frame weight is important due to the tight payload constraints of the quadrotor; our current frame design weighs 7 g, about half the payload capability. The module geometry has a cuboid shape, as seen in the figure below. To enable rigid attachments between modules, we include a docking mechanism in the modular frame; in our case, we used Neodymium Iron Boron (NdFeB) magnets as passive actuators.

Self-Assembling and Cooperative Flying

ModQuad is the first modular system that is able to self-assemble in midair. We developed a docking method that accurately aligns and attaches pairs of flying structures in midair. We also designed a stable decentralized modular attitude controller to allow a set of attached modules to cooperatively fly. Our controller maximizes the use of the rotor forces by generating the maximum possible moment.

In order to allow the flying structure to navigate in a three-dimensional environment, we control thrust and attitude to generate vertical and horizontal translations, and rotation in the yaw angle. In our approach, we control the attitude of the structure in a decentralized manner. A modular attitude controller allows multiple connected robots to stably and cooperatively fly. The gain constants in our controller do not need to be re-tuned as the configurations change.

In order to dock pairs of flying structures in midair, we propose to have two flying structures: the first one hovers and waits, while the second one performs a docking action. Both the hovering and the docking actions are based on a velocity controller. Using a velocity controller, we are able to dock multiple robots in midair. We highlight that docking robots in midair is one of the fastest ways to assemble structures, because the building units can rapidly move and dock in three-dimensional environments. The docking system and control have been validated through multiple experiments.

Our system takes advantage of robot swarms because a large number of robots can construct massive structures.

This work was developed by:

David Saldaña, Bruno Gabrich, Guanrui Li, Mark Yim, and Vijay Kumar.

Additional resources at:

http://kumarrobotics.org/

http://www.modlabupenn.org/

http://davidsaldana.co/