
Things are moving fast here at Bitcraze and we have lots of exciting things going on. So it's time to grow the team and add one or two new team members to increase the tempo and bring more awesome products to our customers. The normal approach would be to post a job ad describing what kind of skill-set potential new members should have, but we would like to try something different. So today we added a jobs page describing a bit about how we work and what we do. Our goal is to give a picture of what it's like to work at Bitcraze and to find individuals who like what we do and how we work. If you are interested in joining the team, let us know at jobs@bitcraze.io: who you are, what you like and how you think you could contribute.

Returning visitors might have noticed that we recently released a new and fresh design of our website's front page.

The goal has been to make a front page that better reflects Bitcraze and what we do, so together with the updated design and new cool images (credit to USC) we have also added two new sections to the front page. First off, we have extended the blog post section to show the three latest blog posts instead of just the latest one. Different users have different interests, so by showing a slightly wider range of blog topics we hope that even more people will discover our blog and start following it. Our blog is a big part of how we communicate with the outside world, so giving it a whole new section and more room on the front page feels exciting.

The testimonials section is the second part that we have added to the new front page. Here we are finally taking the opportunity to show some of the amazing work our community members have been doing with the Crazyflie. Each testimonial consists of a guest blog post that someone from our community has contributed. It is pretty cool to show how researchers around the world are basing their projects on our products. Whether it's adding wheels to the drone or making LED-lit swarms, we are always happy to promote and show how the community is using the Crazyflie.

Next step

Our website is under constant improvement and the grand master plan for the near future is to update the content and design of the different portals and clean up the website in general. If you have any suggestions on what kind of content you would like to see in the portals, or any other feedback, please send it our way. As mentioned at the beginning of this blog post, the beautiful new open-shutter swarming photos we use on the front page were contributed by the researchers at USC. If you have any cool photos of the Crazyflie we would be more than happy to use them.

First of all we are happy to announce that (almost) all products have been stocked in the new warehouse and are now shipping! The last orders that were on hold are on their way out and new orders placed in the store will now be shipped again within a few days.

We released the TDoA mode, a.k.a. swarm mode of the Loco Positioning System back in January. TDoA supports positioning of many Crazyflies simultaneously which makes it possible to fly a swarm of Crazyflies with the LPS system. The release in January was actually the second iteration of the TDoA implementation (the first iteration was never publicly released) and it is also known as TDoA 2.

TDoA 2 works well but there are a couple of snags that we would like to fix and we have now started the work on the next iteration, TDoA 3. 

Single point of failure

TDoA 2 is based on a fixed transmission schedule with time slots when each anchor transmits its ranging packet. All anchors listen to anchor 0 and use the reception of a packet from anchor 0 to figure out when to transmit. The problem with this solution is that if anchor 0 stops transmitting for some reason the full system will stop transmitting positioning information. This is clearly a property that would be nice to get rid of.

Limited number of anchors

The packets in the TDoA 2 protocol have 8 slots for anchor data that are implicitly addressed through the position in the packet. First slot is anchor 0, second slot anchor 1 and so on. This setup is easy to use but creates an upper limit of 8 anchors in the system.

The maximum radio reach of an anchor depends mainly on the transmitted power and the environment. This distance, in combination with a maximum of 8 anchors and the requirement that all anchors must be in range of anchor 0, sets an upper limit on the volume that an LPS system can cover, basically one large room. When we designed TDoA 2 we were happy to be able to support a swarm of Crazyflies and did not really bother too much about the covered volume. We get more and more questions about larger areas and more anchors though, and it would be nice to have a positioning system that could be expanded.

The solution – maybe…

What we want to do in TDoA 3 is to transmit packets at random times and add functionality to handle the collisions and packet loss that will happen in a system like this. The idea is that even if some data is lost, the receiving side will get enough packets to be able to calculate the distance to other anchors or a position as needed. By removing the time slots and the synchronization to anchor 0, we get rid of the single point of failure.
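
To make the idea of random transmission times a bit more concrete, here is a toy sketch of an anchor's transmit loop under such a scheme. It is purely illustrative: the interval bounds are made-up numbers and a real anchor runs C firmware, so take it as a sketch of the principle rather than the actual implementation.

```python
import random
import time

# Made-up interval bounds; real values would be tuned to the wanted update
# rate and the number of anchors in the system.
TX_INTERVAL_MIN = 0.002   # seconds
TX_INTERVAL_MAX = 0.004   # seconds

def anchor_loop(transmit_packet):
    """Transmit ranging packets at random times instead of in a fixed
    schedule synchronized to anchor 0. Occasional collisions are accepted
    and have to be handled by the receiving side."""
    while True:
        time.sleep(random.uniform(TX_INTERVAL_MIN, TX_INTERVAL_MAX))
        transmit_packet()
```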

In the TDoA 3 protocol we have added explicit ids to the anchor data and removed the implicit addressing of anchors. With 8 bits for anchor ids, the system will handle up to 256 anchors directly. We do think it will be possible to design even larger systems, though, by reusing ids and making sure that the radio ranges of anchors with the same id do not overlap.
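
As an illustration of what explicit addressing buys us, here is a hedged sketch of how a ranging packet with an 8-bit anchor id could be packed. The field names and layout are invented for this example and do not follow the draft TDoA 3 spec.

```python
import struct

def pack_ranging_packet(anchor_id, seq_nr, tx_timestamp, remote_data):
    """Pack a hypothetical ranging packet with an explicit 8-bit anchor id.

    remote_data is a list of (remote_anchor_id, rx_timestamp, distance)
    tuples, one per remote anchor that has been heard recently.
    """
    header = struct.pack('<BBQB', anchor_id, seq_nr, tx_timestamp, len(remote_data))
    body = b''.join(struct.pack('<BQI', rid, rx_ts, dist)
                    for rid, rx_ts, dist in remote_data)
    return header + body

# One byte of id space addresses 256 anchors directly; larger installations
# could reuse ids as long as two anchors with the same id are never within
# radio range of the same receiver.
packet = pack_ranging_packet(anchor_id=17, seq_nr=42, tx_timestamp=123456789,
                             remote_data=[(3, 111111, 2500), (9, 222222, 4100)])
```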

The UWB radios have a nice property that makes collisions a bit easier to handle than one might first think: if they receive two packets at the same time, they will most likely “pick” one of the packets and discard the other. The drawback is that the receive time of that packet will probably be less accurate. We are not completely sure it will be possible to detect and handle the added noise in the time stamps, but we have good hope!

The current state of the project

Last week we did a proof-of-concept hack where we modified the old TDoA 2 implementation to transmit at random times, with some minor modifications to handle the random receive order of packets. It all worked out beautifully and we could fly a short sequence in the office with the new mode. The estimated position was a bit shakier, which is not surprising considering that the receive times are noisier.

We have just started with the real deal. We have designed a draft spec of the protocol and have started to implement it on top of the old TDoA 2 algorithms in the anchors and the Crazyflie to get going. The next steps will be to introduce random transmission times, dynamic anchor management and better error handling. The TDoA 3 implementation will exist in parallel with the current TDoA 2 implementation and should not interfere with it.

If you want to contribute, are interested in what we do or have some input, please comment on this blog post or contact us in any other way.

Something we seldom write about on the blog is production and supply chain. It's a big part of what we do, both time-wise and business-wise. Even though we spend most of our time on firmware/software, we actually only sell hardware. So this blog post is about how we've set this up and the problems we've been facing over the last month due to our 3rd party warehouse moving to a new location.

Photo by frank mckenna on Unsplash

The current set-up

Currently we're using Seeedstudio for our manufacturing. They do varying batch sizes, but most of the batches we produce are between 300 and 2000 units. We've been experimenting a bit with the size of the batches: too large and you tie up too much money in stock, while with smaller batches you spend most of your time tending to manufacturing. Another issue with large batches is things like battery shelf life and a changing market (e.g. suddenly some parts are EOL or have been replaced when it's time for the next batch). Finding a good level for different products depending on production cost, complexity and shelf life is tricky.

After production the goods are moved to a number of warehouses. Part of the goods are warehoused at Seeedstudio, part are sent to our 3rd party warehouse in Hong Kong serviced by Shipwire, and a small amount is sent to our office for testing/development/customers. The products in Seeedstudio's warehouses service a number of distributors through their wholesale channels as well as end-users through their Bazaar. We service our E-store through Shipwire in Hong Kong and a few customers through our Swedish office.

Scaling up

Since the end of last year we've seen an increase in sales, which we are of course really happy about! More sales means more resources for development, which translates into more awesome products and features for everyone. The problem is that it takes time to scale up the supply chain behind it. Today we have 27 SKUs and 7 bundle SKUs “virtually” made by combining products into bundles. Out of the 27 SKUs we control the manufacturing of 17 (like PCBs and plastic parts) and 5 are things we buy (like the USB cable). Typically the lead time is 1 month for simpler products and 2 months for more complex ones, with an additional lead time of at least a week to reach our Hong Kong warehouse and become available in the E-store. Creating bundles by “virtually” tying together a number of products is great since it gives us more flexibility, but if one of the bundled SKUs is out of stock the bundle will also be out of stock.

Controlling this complex situation while scaling up for larger sales has proved challenging, even when everything works as expected (see below). Most of our customers have gotten their things in time, but we've had to put a lot of hours into juggling products between warehouses to make it happen.

Warehouse issues

Back in February we were notified by Shipwire that they would be moving the operation to a new warehouse in Shenzhen/Hong Kong. The timeline that was communicated was that the inventory would be offline 3rd – 6th of April. This might seem optimistic for a warehouse of about 10 000 m2, but since they have a large number of warehouses around the globe we assumed they would pull it off. Unfortunately this wasn't the case; a number of factors played in to delay the move. Since the first week of delays the expected timeline has been “next week”, which unfortunately hasn't held. Finally we're at a point where our old inventory has been moved into the new warehouse and is available. The next problem we're facing is getting our incoming goods into the inventory, which is currently expected to be finished by the end of this week. To say the least we're unhappy about this situation, but unfortunately we have had very little control. We don't have a large number of products available in any other warehouse, so we haven't been able to “switch over” to another solution. We've done our best to keep the affected customers updated on the situation and have been calling support every day to get an update.

Moving forward

We’re a small team of 5 people and we’ve always been most focused on product development. It’s what we like to do and it’s what we’re best at. So an easy way forward would be to pay someone else to handle all of the above. Unfortunately this has proven to be tricky for us. Basically handing over everything that generates revenue for our company to someone else is a huge risk, to say the least. So we’ve realized that this has to be a central part of what we do, just like development. This was the main reason for starting our own E-store last year and it’s something we’re continuously working on improving.

Moving forward the overall goal is to minimize the work spent on production and stock management while making sure to not run out of stock or tie up all our funds in stock. We think that one key to this is being proactive instead of reactive. So we have integrated this into our daily work just as much as development. Next to the “development” board with stories/tasks we have an even bigger kanban board with production/logistics/warehouses and it’s something that is constantly part of the planning/status meetings. We’ve also been gearing up for producing batches of popular products more often and increasing the batch sizes to meet the increased demand and to lower the risk of being out of stock. The last part is an internal system we’ve been developing during the last couple of years that keeps track of stock, production, customer shipments and stats in general. More on this in a future blog-post!

The Bitcraze Virtual Machine is designed as a quick and isolated way to start development with the Crazyflie and other Bitcraze projects.

The current VM is starting to get very old; even though we keep it updated, it is based on XUbuntu LTS 14.04. This month Ubuntu LTS 18.04 is being released, which is a good reason to upgrade the VM!

The main update will be to switch from XUbuntu 14.04 to XUbuntu 18.04. There are a couple more things that we are looking at updating:

  • Updating Eclipse and CDT to the latest version Oxygen.3a
  • Fixing Eclipse code completion and hinting configuration
  • Pre-configuring Eclipse with gnu-mcu-eclipse to make it easier to flash and debug the Crazyflie
  • Updating KiCad to the latest stable version 4
  • Fixing the virtual machine Crazyradio communication bugs

We are writing this blog post as a request for comment:

  • Is there anything else that you would like to add/remove in the new virtual machine?
  • Anything we could do to make it easier to start developing for Crazyflie?

The virtual machine is generated automatically using Packer and VirtualBox, and the code is hosted on GitHub. If you want to help make the VM, or want functionality added to it, do not hesitate to open a ticket in the bug tracker.

It's time for an update on the Multi-ranger deck (see previous updates here: 1, 2, 3). Back in February/March we were just about to start production of the Multi-ranger deck when the new VL53L1 ToF sensor from ST became available. Among the interesting features of the new sensor are increased range and ROI (region of interest) settings. We felt that the improvement was enough to consider using the new sensor for the Multi-ranger, so we got some sensors and started testing.

Point cloud

We've made a little example where you can control the Crazyflie with a keyboard (using velocity mode) that records the estimated position, body attitude and all the distances (down/up, left/right, front/back) from the ToF sensors. We then did some post-processing of the log data and plotted it using pyntcloud; you can see the results in the point cloud. There are still lots of possible improvements (like taking body attitude into consideration) to be made to the script, but once we've cleaned it up a bit we'll publish it on GitHub so others can play around with it. Note that in the plot the blue points are from the up/down sensors (i.e. Crazyflie movement) and the red points are from the side sensors (front/back/left/right).
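
To give an idea of what the post-processing step looks like, here is a minimal sketch along the lines described above. The log file name, its column names and the assumption that distances are already in meters are all invented for the example, and body attitude is ignored, just like in our first version of the script.

```python
import numpy as np
import pandas as pd
from pyntcloud import PyntCloud

# Hypothetical log file with one row per sample: estimated position (x, y, z)
# and the six ranger distances, all assumed to be in meters.
log = pd.read_csv('flight_log.csv')

points = []
for _, row in log.iterrows():
    pos = np.array([row['x'], row['y'], row['z']])
    # Each ranger measures along one world axis here (attitude is ignored, and
    # out-of-range readings are not filtered out in this sketch).
    offsets = {
        'front': np.array([row['front'], 0, 0]),
        'back':  np.array([-row['back'], 0, 0]),
        'left':  np.array([0, row['left'], 0]),
        'right': np.array([0, -row['right'], 0]),
        'up':    np.array([0, 0, row['up']]),
        'down':  np.array([0, 0, -row['down']]),
    }
    for offset in offsets.values():
        points.append(pos + offset)

cloud = PyntCloud(pd.DataFrame(np.array(points), columns=['x', 'y', 'z']))
cloud.plot()
```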

The room that was mapped

So far we're happy with the results. We feel that the increased range and new features enable users to work on more interesting problems with the deck, so we've decided to switch out the sensors and go to production with the new one. Right now we're running a 0-series of 10 units using the real production manufacturing fixture (for the standing PCBs with sensors) as well as the production test rig. Our best estimate for when the deck will become available for purchase is some time during the summer. Below is a picture of the latest prototype. We'll make sure to keep you updated on the progress!

Warehouse issues

On an additional note, we're having some issues with our warehouse provider which ships out orders from our E-store. In the beginning of the month the provider had scheduled a move of the warehouse to a new physical location which would delay handling of orders by at most a week. Unfortunately the move, which should have been finished mid last week, is still in progress, which means we can't ship out any orders for the moment. We're working hard on trying to work this out with the provider.

I’ve spent the last 5 years of my career at Microsoft on the team responsible for HoloLens and Windows Mixed Reality VR headsets. Typically, augmented reality applications deal with creating and manipulating digital content in the context of real-world surroundings. I thought it’d be interesting to explore some applications of using an augmented reality device to manipulate and control physical objects and have them interact with the real world and/or digital content.

Phase 1: Gesture Input

The HoloLens SDK has APIs for consuming hand gestures as input. For the first phase of this project, I modified the existing Windows UAP/UWP client to handle these gestures and convert them to CRTP setpoints. I used the “manipulation gesture” which provides offsets in three dimensions for a tap-and-drag gesture, from the point in space where the initial tap occurred. These three degrees of freedom are mapped to thrust, pitch and roll.
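
The mapping itself is straightforward. The actual client is a Windows UWP app written in C#, but the same idea can be sketched in Python with cflib; the scaling constants below are made up for illustration and yaw is left at zero, just as in this first phase.

```python
from cflib.crazyflie import Crazyflie

MAX_ANGLE = 20.0       # degrees of pitch/roll at full gesture deflection (assumed)
HOVER_THRUST = 38000   # rough hover thrust on the 0-65535 scale (assumed)
THRUST_RANGE = 15000   # thrust swing for a full up/down gesture (assumed)

def clamp(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def gesture_to_setpoint(cf: Crazyflie, dx: float, dy: float, dz: float):
    """Map a manipulation-gesture offset (meters from the initial tap point)
    to a roll/pitch/thrust setpoint sent over CRTP."""
    roll = clamp(dx) * MAX_ANGLE                             # sideways drag -> roll
    pitch = clamp(-dz) * MAX_ANGLE                           # forward/backward drag -> pitch
    thrust = int(HOVER_THRUST + clamp(dy) * THRUST_RANGE)    # up/down drag -> thrust
    cf.commander.send_setpoint(roll, pitch, 0, thrust)       # yaw rate fixed at 0
```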

For the curious, there’s an article on my website with details about the implementation and source code. Here’s a YouTube video where I explain the concept and show a couple of quick demos.

As you can see in the first demo in the video, this works but isn’t entirely useful or practical. The HoloLens accounts for head movements (otherwise moving the head to the left would produce the same offset as moving the hand to the right, requiring the user to keep his or her head very still) but the user must still take care to keep the hand in the field of view of the device’s cameras. Once the gesture is released (or the hand goes out of view) the failsafe engages and the Crazyflie drops to the ground. And of course, lack of yaw control cripples the ability to control the Crazyflie.

Phase 2: Position Hold

Adding a flow deck makes for a more compelling user experience, as seen in the second demo in the video above. The Crazyflie uses the sensors on the flow deck to hold its position. With this functionality, the user is free to move about the room and make shorter “adjustment” hand gestures, instead of needing to hold very still. In this mode, the gesture’s degrees of freedom map to an x/y velocity and a vertical offset from the current z-depth.
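
For the curious, here is a hedged Python sketch of this second mapping using cflib's hover setpoint (which relies on the Flow deck); the constants and the z bookkeeping are assumptions, not the UWP client's actual code.

```python
MAX_VEL = 0.5   # m/s at full gesture deflection (assumed)

def clamp(v, lo=-1.0, hi=1.0):
    return max(lo, min(hi, v))

def gesture_to_hover_setpoint(cf, dx, dy, dz, held_z):
    """Map gesture offsets to an x/y velocity plus a vertical offset from the
    currently held z-depth; the Flow deck holds position between gestures."""
    vx = clamp(-dz) * MAX_VEL         # forward/backward drag -> body x velocity
    vy = clamp(-dx) * MAX_VEL         # sideways drag -> body y velocity
    target_z = max(0.1, held_z + dy)  # vertical drag offsets the held height
    cf.commander.send_hover_setpoint(vx, vy, 0, target_z)
    return target_z
```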

This is a step in the right direction, but still has limitations. The HoloLens doesn't know where it is in space relative to the Crazyflie. A gesture in the y axis relative to the device will always result in a movement in the y direction of the Crazyflie, which begins to feel unnatural if the user moves around. Ideally, gestures would cause the Crazyflie to move in the same direction relative to the user, not relative to the ‘front’ of the Crazyflie. Also, there's still no control over yaw.

The flow deck has some limitations as well: the z-range only goes to 2 meters with any accuracy, and the flow sensor (for lateral stabilization) has a strong dependency on the pattern of the floor below. A flow sensor is a camera that relies on measuring pixel deltas from frame to frame, so if the floor is blank or has a repeating pattern, it can be difficult to hold position properly.

Despite these limitations, using hand gestures to control the Crazyflie with a flow deck installed is actually quite fun and surprisingly easy.

Phase 3 and Beyond: Future Work & Ideas

I’m currently working on some new features that I hope will open the door for more interesting applications. All of what follows is a work in progress, and not yet implemented or functional. Dream with me!

Shared Coordinate System

The next phase (currently a work in progress) is to get the HoloLens and the Crazyflie into a shared coordinate system. Having spatial awareness between the HoloLens and the Crazyflie opens up some very exciting scenarios:

  • The orientation problem could be improved: transforms could be applied to gestures to cause the Crazyflie to respond to commands in the user’s frame of reference (so ‘pushing’ away from one’s self would cause the Crazyflie to fly away from the user, instead of whatever direction is ‘forward’ to the Crazyflie’s perspective).
  • A ‘follow me’ mode, where the Crazyflie autonomously follows behind a user as he or she moves throughout the space.
  • Ability to walk around and manually set waypoints by selecting points of interest in the environment.

The Loco Positioning System is a natural fit here. A setup step (where a spatial anchor or similar is established at the same physical position and orientation as the LPS origin) and a simple transform for scale and orientation (the HoloLens and the Crazyflie define X, Y, Z differently) would allow the HoloLens and Crazyflie to operate in a shared coordinate system. One could also use the webcam on the HoloLens along with computer vision techniques to track the Crazyflie, but that would require constant line of sight from the HoloLens to the Crazyflie.
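
As an illustration, here is a small sketch of the kind of frame conversion that setup step would produce. The axis convention assumed here (y-up for the HoloLens, z-up for the Crazyflie/LPS) and the numbers are placeholders; the real rotation, translation and scale would come out of the spatial-anchor calibration.

```python
import numpy as np

# Placeholder calibration result: a rotation mapping HoloLens axes onto
# Crazyflie/LPS axes, plus the LPS origin expressed in the HoloLens frame.
R_holo_to_cf = np.array([[0.0, 0.0, -1.0],
                         [-1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]])
t_lps_origin_in_holo = np.zeros(3)
scale = 1.0  # both frames assumed to use meters here

def holo_to_cf(p_holo):
    """Transform a point from the HoloLens frame into the LPS/Crazyflie frame."""
    return scale * (R_holo_to_cf @ (np.asarray(p_holo) - t_lps_origin_in_holo))

def cf_to_holo(p_cf):
    """Inverse transform, e.g. for rendering holograms at a Crazyflie's position."""
    return R_holo_to_cf.T @ (np.asarray(p_cf) / scale) + t_lps_origin_in_holo
```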

Obstacle Detection/Avoidance

Example surface map produced by HoloLens

The next step after establishing a shared coordinate system is to use the HoloLens for obstacle detection and avoidance. The HoloLens has the ability to map surfaces in real time and position itself in that map (SLAM). Logic could be added to the HoloLens to consume this surface map and adjust pathing/setpoints to avoid these obstacles without reducing the overall compute/power budget of the Crazyflie itself.

Swarm Control and Manipulation

As a simple extension of the shared coordinate system (and what Bitcraze has been doing with TDoA and swarming lately) the HoloLens could be used to manipulate individual Crazyflies within a swarm through raycasting (the same technique used to gaze at, select and move specific holograms in the digital domain). Or perhaps a swarm could be controlled to move out of the way as a user passes through the swarm, and return to formation afterward.

Augmenting with Digital Content

All scenarios discussed thus far have dealt with using the HoloLens as an input and localization device, but its primary job is to project digital content into the real world. I can think of applications such as:

  • Games
    • Flying around through a digital obstacle course
    • First person shooter or space invaders type game (the Crazyflie moves around to avoid the user or fires rendered laser pulses at the user, etc.)
  • Diagnostic/development tools
    • Overlaying some diagnostic information (such as battery life) above the Crazyflie (or each Crazyflie in a swarm)
    • Set or visualize/verify the position of the LPS nodes in space
    • Visualize the position of the Crazyflie as reported by LPS, to observe error or drift in real time

Conclusion

There’s no shortage of interesting applications related to blending augmented reality with the Crazyflie, but there’s quite a bit of work ahead to get there. Keep an eye on the Bitcraze blog or the forums for updates and news on this effort.

I’d love to hear what ideas you have for combining augmented reality devices with physical devices like the Crazyflie. Leave a comment with thoughts, suggestions, or any other relevant work!

Here at the USC ACT Lab we conduct research on coordinated multi-robot systems. One topic we are particularly interested in is coordinating teams consisting of multiple types of robots with different physical capabilities.

A team of three quadrotors all controlled with Crazyflie 2.0 and a Clearpath Turtlebot

Applications such as search and rescue or mapping could benefit from such heterogeneous teams because they allow for more flexibility in the choice of sensors and locomotive capability. A core challenge for any multi-robot application is motion planning – all of the robots in the team need to make it to their target locations efficiently while avoiding collisions with each other and the environment. We have recently demonstrated a scalable method for trajectory planning for heterogeneous robot teams utilizing the Crazyflie 2.0 as the flight controller for our aerial robots.

A Crazier Swarm

To test our trajectory planning research we wanted to assemble a team with both ground robots and multiple sizes of aerial robots. We additionally wanted to leverage our existing Crazyswarm software and experience with Crazyflie firmware to avoid some of the challenges of working with new hardware. Luckily for us the BigQuad deck offered a straightforward way to super-size the Crazyflie 2.0 and gave us the utility we needed.

With the BigQuad deck and off-the-shelf components from the hobbyist drone community we built three super-sized Crazyflie 2.0s. Two of them weigh 120g (incl. battery) with a motor-to-motor size of 130mm, and the other is 490g (incl. battery and camera) with a size of 210mm.

120g, 130mm

490g, 210mm

We wanted to pick components that would be resistant to crashing while still offering high performance. To meet these requirements we ended up picking components inspired by the FPV drone racing community, where both reliable performance and high-impact crashes are expected. Full parts lists for both platforms are available here.

Integrating the new platforms into the Crazyswarm was fairly easy. We first had to re-tune the PID controller gains to account for the different dynamics of the larger platforms. This didn't take too long, but we did crash a few times; luckily the components we chose were able to handle the crashes without any breakages. After tuning, the platforms behave very well and are just as easy to work with as the original Crazyflie 2.0. We additionally updated the Crazyswarm package to be able to differentiate between BigQuad and regular Crazyflie types, and those updates are now available for use by anyone!
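
For reference, gains like these can be pushed to the firmware over the parameter framework, which is one way to experiment before baking a tune into a Crazyswarm configuration. The sketch below uses cflib; the parameter names follow the stock firmware's pid_rate/pid_attitude groups, and the values are placeholders rather than our actual BigQuad tune.

```python
import cflib.crtp
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

# Placeholder gains, not the values we actually fly with on the BigQuad platforms.
GAINS = {
    'pid_rate.roll_kp': 70.0, 'pid_rate.roll_ki': 200.0, 'pid_rate.roll_kd': 2.0,
    'pid_rate.pitch_kp': 70.0, 'pid_rate.pitch_ki': 200.0, 'pid_rate.pitch_kd': 2.0,
    'pid_attitude.roll_kp': 3.5, 'pid_attitude.pitch_kp': 3.5,
}

cflib.crtp.init_drivers()
with SyncCrazyflie('radio://0/80/2M') as scf:
    for name, value in GAINS.items():
        scf.cf.param.set_value(name, str(value))
```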

In future work, we are excited to do hands-on experiments with a prototype of the CF-RZR. This new board seems like a promising upgrade to the CF 2.0 + BQD combination as it has upgraded components, an external antenna, and a standardized form factor. Hopefully we will see the CF-RZR as part of the Crazyswarm in the near future!

Mark Debord
Master’s Student
Automatic Coordination of Teams Laboratory
University of Southern California
Wolfgang Hönig
PhD Student
Automatic Coordination of Teams Laboratory
University of Southern California


We have been flying swarms in our office plenty of times. There is kind of a limitation to this though: our flying space is only around 4 x 4 meters. Flying 8 – 10 Crazyflies in this space is challenging and it is hard to make it look good; slight position inaccuracies make it look a bit sloppy. To mitigate this we decided to put on a small swarm show using a bigger flying space and to invite families and friends, just to raise the stakes a bit.

As usual we had limited time to accomplish this, and this time the result had to be worth looking at. Well, we have managed to pull off hard things in one day before, so why not this time… The setup is basically a swarm bundle with added LED-rings. Kristoffer took care of the choreography, Tobias set up the drones and Arnaud configured the Loco Positioning system.

Choreography

Kristoffer's pre-Bitcraze history involves some dancing, and he has played earlier with the idea of creating choreographies with Crazyflies. One part of this was a weekend hack a few months back when he tried to write a swarm sequencer that is a bit more dance-oriented. The goal was to be able to run a sequence synchronized to music and define the movements in terms of bars and beats rather than seconds. He also wanted to be able to define a motion to end at a specific position on a beat, as opposed to starting on the beat. As Kristoffer did not have access to a swarm when he wrote the code, he also added a simple simulator to visualize the swarm. The hack was not a complete success at the time but turned out to be useful in this case.

The sequences are defined in a YML file as a list of time stamps, positions and, if needed, the color of the LED-ring. After a few hours of work he had at least some sort of choreography with 9 Crazyflies moving around, maybe not a masterpiece from a dance point of view, but time was running out.
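
Here is a hedged sketch of the bars-and-beats idea: entries are time-stamped in musical time and converted to seconds from the tempo of the track. The data layout mirrors the YML description above, but the field names and values are invented for this example.

```python
BPM = 120            # tempo of the track (assumed)
BEATS_PER_BAR = 4

def musical_time_to_seconds(bar, beat):
    """Convert a (bar, beat) pair (1-indexed) into seconds from the start."""
    total_beats = (bar - 1) * BEATS_PER_BAR + (beat - 1)
    return total_beats * 60.0 / BPM

# One Crazyflie's sequence: arrive at the position, with the LED-ring set to
# the given color, exactly on the given bar/beat (rather than starting there).
sequence = [
    {'bar': 1, 'beat': 1, 'pos': (0.0, 0.0, 1.0), 'color': (255, 0, 0)},
    {'bar': 2, 'beat': 3, 'pos': (1.0, 0.5, 1.5), 'color': (0, 0, 255)},
]
timed = [(musical_time_to_seconds(s['bar'], s['beat']), s['pos'], s['color'])
         for s in sequence]
```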

The simulator is super basic but turned out to be very useful anyway (the color of the crosses indicates the color of the LED ring). We actually never flew the full sequence with all drones before the performance, but trusted the simulation to be accurate enough! We did fly most of the sequence with one Crazyflie, to at least make it plausible that we got it all right.

Short snippet from the simulation

Setting up drones

Handling swarms can be tedious and time consuming. Just making sure all drones are assembled, fully operational and charged is a challenge when the number increases. Tobias decided to do a manual flight test of every drone; if it flies well manually it will most likely fly well autonomously. The testing resulted in switching out some motors and props, as vibrations are a crippling factor, especially for Z accuracy. The takeaway from this exercise is to implement better self-testing so that such problems can be detected automatically and fixed much more quickly.

Loco Positioning System

We ran the positioning system with standard firmware in TDoA mode to support multiple Crazyflies simultaneously. The mapped space was around 7 x 5 x 2.5 meters and the anchors were placed more or less in the corners of the flying space box.

The result

The audience (families and friends) was enthusiastic and expectations high! Even though not all drones made it all the way through the show, the spectators seemed to be duly impressed and requested a re-run.

This is a guest blog post written by Fred, the maintainer of the Android Crazyflie client and Java Crazyflie lib.

As a follow-up to last week’s blog post about the different Crazyflie clients, I would like to describe the current status of the Android client in a bit more detail.

After more than a year, version 0.7.0 of the Android client was released last Friday (March 16). Most importantly, this release fixes a very annoying UI bug that appeared on newer Android versions, where the on-screen joystick did not show up when the app was started for the first time. It also adds support for height hold mode when using the zRanger or Flow deck, adds a console view (which can be enabled in Settings -> App settings -> Show console) and now shows the link quality for BLE connections. You can read the full changelog on GitHub. You can find the release in the Google Play Store and as an APK on GitHub.

Connection quality and console messages now work on a BLE connection

Apart from the obvious/visible new features and bug fixes, quite a lot has happened “under the hood”. Some parts of the code were cleaned up, refactored, decoupled, etc. This is still a work in progress though.

There is still plenty of stuff to do for future releases, especially in the realm of Bluetooth support. On the short list are:

  • Param/Log support for BLE connections
  • Bootloading over BLE
  • Support for Flow deck sequences

Admittedly there was almost no documentation for the Android client and some features are hidden (too well). In an effort to change that, I’ve started to document some features on the project’s Github wiki.

If you find bugs in the app, want to request a feature or see errors in the documentation, please open a GitHub issue.

If you are interested in the development of the Crazyflie Android client and want to get involved, let me know. The fastest way to get new features added or bugs fixed is to contribute a pull request.

Last but not least, I'd like to thank the Bitcraze team for creating and developing the Crazyflie and its amazing new decks. Maintaining the Crazyflie Android client is still one of my favorite pastimes. :)