
2019 is coming to an end and we are soon flipping the calendar to a new year. This is the last blog post of 2019 – time to look back and recap what has been going on during the year.

Community

We have had quite a few blog posts by community members this year. It is exciting for us to hear about the cool things our users are doing with our products, and we are happy to share them with all of you. If you have not read them yet and have some spare time during the holidays, they are well worth the read!

During 2019, we have also attended a number of conferences and events, where we have met a lot of interesting people, heard about amazing projects and got feedback from users. We attended FOSDEM (Belgium), ICRA (Canada), IMAV (Spain), ISRF (South Korea) and finally IROS (China).

Software

There have been quite a lot of improvements to the software in the Crazyflie ecosystem. Apart from bug fixes, there has been some restructuring to simplify modifications and make better use of system resources. The Crazyflie firmware has also been generalized to support multiple hardware platforms. We have added an app layer, peer-to-peer (P2P) communication and support for new decks (see below).

The community has been busy contributing new and improved functionality as well as bug fixes to the software stack. Just to mention a few: support for new sensors, improved positioning support, better logging to SD card, improved communication, new controllers and compressed trajectories. We cannot express how grateful we are, thank you all!

Hardware

There has been quite a lot of work on new hardware as well during 2019. We kicked the year off by releasing two new platforms, the Crazyflie 2.1 and the Roadrunner, and finished it with the recent release of the Crazyflie Bolt.

On the deck side, the focus has been on positioning support with the Passive and Active marker decks that we have released in collaboration with our friends at Qualisys. The Lighthouse deck also falls into this category; we are excited about its performance and have high hopes of future awesomeness when it leaves Early access!

We have put a lot of work into the AI-deck during the year. Unfortunately we did not manage to finalize it in 2019, but hopefully it should not be too long into 2020 until it is available in the store.

Documentation

Documentation is always hard, especially when the system is spread over many repositories. We have moved a fair amount of our documentation from the wiki to the code repositories to keep it closer to the code, and hopefully make it easier to keep it up to date. The documentation is now also published on the web to make it easy to access.

Logistics

We have tried out various third-party shipping solutions in the past, but have settled on shipping ourselves, from our own warehouse in Sweden. This gives us better control of the process, and we have made a number of improvements and automated as much as possible to keep it lean and smooth.

Bitcraze

It has been an intense year for the Bitcraze family. We have moved to a new office with much more space and opportunities. It has required quite some work to set up labs, flight arena and other areas to our liking, but finally we have settled in and are very happy with the result!

Björn decided to leave the company in the beginning of the year, but on the other hand Kimberly joined in May! We have also had the great pleasure of hosting the interns Victor and Zhouxin during the year.

Kimberly McGuire

On the system side, we have revamped our server platform for the web, forum, wiki and internal services and are now using Kubernetes. We also had a rapid increase in spam on the forum during the summer, but managed to counter it with better anti-spam tools.

Conclusions

It is a good exercise to look back and remember what we have done during the year. We are equally surprised each time we do this and realize all the things we have managed to squeeze in, being only five people in the company! It has been yet another hectic year, but full of happiness and excitement.

Thanks for an awesome year!

We are currently finishing the production test design for a couple of expansion decks, and we realized that we have never written about it, or about the more general board production process. In this blog post we want to talk a bit about how we test boards in the production phase, taking the forthcoming active marker deck as an example.

The active marker deck

When finalizing an electronic board, we send the manufacturer documentation that allows them to manufacture and assemble a, hopefully, functional board. Although we assume that the individual components are in working order, the assembly is not always perfect, so we need to check that everything we have done actually works. This is what the production test solves.

The first thing is to find out what to test, and for that we need a strategy. The strategy we have been using is to test everything we have modified or worked on: for example, we will test all the connections we have soldered in the manufacturing process. We will normally not test all the functionality of a ready-made module. Following this strategy, we will usually test all the communication interfaces we have wired up, but we will not test all the functionality of a microcontroller we solder onto the board; that is deemed to have already been tested and verified by the microcontroller manufacturer. This step usually ends up with an annotated schematic:

Annotated schematic of the ActiveMarker Deck

Once we know what to test and roughly how to test it, we document a test rig that will be able to run the tests automatically. Some tests are generic and applicable to all our boards; for example, we test voltages with a multimeter on every board that has a regulator. Some tests are very board specific. For the active marker deck, for example, we want to test the IR LEDs and an IR detector, so we define a test rig that has a reflector to bounce the LED light back to the detector, and we use the onboard detector to test the LEDs:

Simple block diagram of the test rig for the ActiveMarker Deck

We normally use a Crazyflie in all our test rigs, since it is usually possible to test all the functionality from the deck port. We also try as much as possible to integrate the test software into the real software. For the active marker deck this meant adding a 38 kHz modulated output mode to the LEDs in order to emit a signal detectable by the detector, a mode that will make it into the final firmware. Finally, we have test software, running on the test computer, that uses the Crazyflie Python lib to talk to the Crazyflie and run the tests. The last step of the test is to write the deck's One Wire identification memory so that it can be detected by a Crazyflie.

Screenshot of the test program for the test engineer
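As a rough illustration, a minimal test script using the Crazyflie Python lib could look like the sketch below. This is a hypothetical example, not our actual test program: the URI and pass criterion are placeholders, and a real production test exercises the deck-specific functionality (for the active marker deck, the IR LEDs and detector) rather than just reading back a log value.

```python
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

# URI of the Crazyflie sitting in the test rig (illustrative value)
URI = 'radio://0/80/2M'

def run_test():
    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
        # Read back a known log variable as a basic sanity check; a real test
        # would also exercise the deck-specific hardware (e.g. the IR LEDs).
        log_conf = LogConfig(name='test', period_in_ms=100)
        log_conf.add_variable('pm.vbat', 'float')
        with SyncLogger(scf, log_conf) as logger:
            for _, data, _ in logger:
                return 3.0 < data['pm.vbat'] < 4.3  # plausible battery voltage

if __name__ == '__main__':
    print('PASS' if run_test() else 'FAIL')
```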

From this specification, the manufacturer can then build a test rig and start testing boards; boards that do not pass are re-worked until they pass, or discarded.

Test rig for the Multi-ranger expansion deck

What we have learned over our years at Bitcraze is that the testing phase is the most important part of the PCB development process. The earlier we start thinking about the production tests during board design, the smoother the final production phase of our new products will be.

This week we have a guest blog post from Joseph La Delfa.

DroneChi is a human-drone interaction experience that uses the Qualisys motion capture system to enable the Crazyflie to react to the movements of your body. At the Exertion Games Lab in Melbourne, Australia, we like to design new experiences with technology where the whole body can be the controller and is involved in the experience.

When we first put these two technologies together we realised two things. 

  1. It was super easy to keep your attention on the drone as it flew around the room reacting to your movements.
  2. As a result, it was also really easy to reflect on and refine one's own movements.

We thought this was like meditation mediated by a drone, and wanted to investigate how to further enhance this experience through design. We thought the smooth movements were especially mesmerising, so I decided to take beginner Tai Chi lessons to get an appreciation of what it felt like to move like a Tai Chi student.

We undertook an 8-month design program where we simultaneously designed the form and the interaction of the Crazyflie. The initial design brief was pretty simple: make it look and feel light, graceful and from nature. In Tai Chi you are constantly asked to imagine a flower, the sea or a bird as you embody its movements; we wanted to emulate these experiences, but without verbal instruction. Could a drone facilitate these sorts of experiences through its design?

We will present a summarised version of how the form and the interaction came about. Starting with a mood board, we collated radially symmetrical forms from nature to match a drone’s natural weight distribution.

We initially went with a jellyfish, hoping to emulate their "push gliiide" movement by articulating laser-cut silhouettes (see fig c). This proved incredibly difficult; after searching high and low for a foam that was light enough for the Crazyflie to lift, we just could not get it to fly stably.

However, we serendipitously fell into the flower shape while trying to improve how we joined the carbon rods together in a loop (fig b below). By joining them to the main hull we realised it looked like a petal! This set us down the path of the flower; we even flipped the chassis so that the LED ring faced upwards (cheers to Tobias for that firmware hack).

Whilst this was going on we were experimenting with how to actually interact with the drone. Considering the experience was to be demonstrated at a major conference, we decided to track only the hands, which allowed quick changeovers. We started with cardboard pads and experimented with gloves, but settled on some floral-inspired 3D-printed pads. We were tempted to include articulation of the fingers but decided against it to avoid scope creep! Further to this, we curved the final hand pads (fig d) to promote the idea of holding the drone, inspired by a move in Tai Chi called "holding the ball".

As a beginner practicing Tai Chi I was sometimes overwhelmed by the number of aspects of my movement that constantly needed monitoring: palms out, heel out, elbow slightly bent, step forward, etc. However, in brief moments it all came together and I was able to appreciate the feeling of these movements as opposed to consciously monitoring them. We wanted this kind of experience when learning DroneChi, so we devised a way of mapping the drone to the body to emulate it. After a few iterations we settled on the "mid point" method as seen below.

The drone only followed the midpoint (blue dot above) if it was within 0.2 m of it. If it was outside of this range it would slowly float away from the participant. This may seem generous, but with little in the way of visual guidance (e.g. a laser pointer or an augmented display) a person can only rely on the proprioceptive feedback from their own body. We used the onboard LED ring to let the person know at least when they were close, but that is all the help they got. As a result it takes a lot of concentration to get right!
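For the curious, the mapping can be summarised in a few lines. The sketch below is only a toy illustration: the hand and drone positions would come from the motion capture system, and the names and the drift rate are made up, not the actual implementation.

```python
import numpy as np

# Follow the hands' midpoint when close; otherwise drift slowly away.
FOLLOW_RADIUS = 0.2  # metres

def next_setpoint(left_hand, right_hand, drone_pos):
    midpoint = (np.asarray(left_hand) + np.asarray(right_hand)) / 2.0
    if np.linalg.norm(midpoint - drone_pos) <= FOLLOW_RADIUS:
        return midpoint  # close enough: follow the hands
    away = drone_pos - midpoint
    return drone_pos + 0.01 * away / np.linalg.norm(away)  # drift away slowly

print(next_setpoint([0.1, 0.0, 1.0], [0.3, 0.0, 1.0], np.array([0.25, 0.0, 1.1])))
```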

In the end we were super happy with the final experience. In the study, participants reported tuning into their bodies when using the drone, as well as experiencing a unique sort of relationship to the drone: not entirely like a pet, yet also like an extension of the body. We will be investigating both findings from the study through the design and testing of a new system on the Crazyflie. We see this work contributing to more intimate designs for human-drone interaction, as well as being applicable to health contexts such as rehabilitation.

We have been traveling a lot this autumn and have been talking to many Crazyflie users. It is always great to talk to our users and to get feedback about how the things we make are being used.

The Swarm bundle

One particular subject that stood out was the Loco positioning system (LPS): the LPS seems to be used by a lot of people and we have gotten quite a bit of feedback about it, some about things that work well, but also some about things that could be improved. This is interesting, because we normally do not get that much feedback from people using the LPS.

We released the LPS about 2.5 years ago with Two-Way Ranging support for a single Crazyflie, and it has been improved regularly since then, among other things by adding two TDoA modes that support multiple Crazyflies, as well as by releasing the Roadrunner board, a standalone LPS tag.

If you are using the LPS, it would be great to get some feedback about what you are using it for, what works, what does not work and (even better ;) if you have any improvements that can be pushed to the community. Do not hesitate to comment on this blog post, post on the forum, or open issues or pull requests in the LPS-node or Crazyflie-firmware GitHub projects.

Hey there, my name is Zhouxin. I was born in the Netherlands and I still live there, but not for the upcoming four months, since I am going to be an intern here at Bitcraze. I am really looking forward to contributing and being part of this team! In this blog post I will share my motivation for interning here and a bit about myself.

I am doing this internship as part of my studies at the TU Delft, a university in the Netherlands, but the main reason is that I like the technical challenges related to Bitcraze's products and the dynamic work environment. I am convinced that I can learn a lot here, both about practical things like working in a tech company and about the technical challenges of developing code for real applications such as the Crazyflie 2.1.

As mentioned before, this internship is part of my studies; at the moment I am studying for my Master's degree in Aerospace Engineering. Within this degree there are profiles, and I chose the Control and Simulation profile, which is mainly focused on control and navigation systems in aviation. This still sounds quite general, so I will give a few examples. A graduate from this profile might work on the autopilot of an aircraft, on simulators for pilot training, on air traffic management systems, or on autonomous micro air vehicles. The latter is something I am interested in, and it is one of the reasons I am doing my internship at Bitcraze.

As a child I was always intrigued by how birds can fly, which led to my own desire to fly. I tried flying several times by wearing a cape and jumping off the couch, trying to maximize the airtime by flapping my cape. This gave me an adrenaline boost, but I was never able to fly. Later I discovered that the only way for humans to fly is to become a pilot, and this became my new dream. When I was about 8 years old I started practicing flying by controlling RC airplanes. This got me interested in electronics and technology, which later translated into pursuing my degree in Aerospace Engineering.

Besides my academic interests, I also occupy myself with other activities. About two years ago I took a gap year and went traveling in East Asia, where I discovered my enjoyment of nature and of exploring cultures. I also love to snowboard in the French Alps during the winter holidays and to share these winter sport adventures with friends. When I am not traveling or abroad, I enthusiastically play field hockey or tennis.

As we talked about in a previous post, it is more than time to implement a higher abstraction layer in the Crazyflie firmware to make it easy to implement custom automation and programs on top of the flying platform. In this post we will try to explain the state of the art and where we are thinking of heading. This is mostly a request for comments, and we are creating a GitHub ticket to discuss it.

DELFT – Swarm drones at TU Delft. – Photo: Guus Schoonewille

The out-of-tree build and P2P API presented in the previous post are a great start: they make it possible to build projects on top of the Crazyflie firmware that can easily be maintained over time, and to communicate directly between Crazyflies without having a PC in the loop. Though we have not completely settled or documented the API that can be called by programs written on top of the Crazyflie, this is what the app layer is supposed to provide.

The current plan for the app layer is to make the same functionality that is available in the Crazyflie Python lib API accessible from within the Crazyflie firmware, using similar API calls. This way we get the possibility of prototyping functionality in Python code on a remote machine and, when it is working, easily converting it to an app running onboard. This is already implemented, in part, for the log and param APIs as well as for the low-level parts of the commander. It has enabled us to write programs like the Multiranger push demo and SGBA from Kimberly's paper. The API is not yet documented properly, and the function calls do not look like the ones on the Python lib side at this time, but our intention is to converge the APIs over time.
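As a taste of the workflow, here is a minimal sketch of the kind of logic that can be prototyped with the Python lib today and later moved into an onboard app. The URI and the parameter value are only examples, and the hover setpoints assume a Flow deck (or other positioning) is mounted.

```python
import time

import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = 'radio://0/80/2M'  # illustrative URI

cflib.crtp.init_drivers()
with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    cf = scf.cf
    # Param API: select the (default) PID controller
    cf.param.set_value('stabilizer.controller', '1')
    # Commander API: hover at 0.3 m for a few seconds, then stop
    for _ in range(30):
        cf.commander.send_hover_setpoint(0.0, 0.0, 0.0, 0.3)  # vx, vy, yawrate, z
        time.sleep(0.1)
    cf.commander.send_stop_setpoint()
```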

We think that having the same level of functionality for log, param and commander within a Crazyflie app as in the Python API will already make it possible to implement a lot of onboard programs much more easily than before. If there is anything else you think would be interesting to develop in this area, do not hesitate to drop a comment on this post or in the GitHub issue.

This week we are exhibiting at IROS in Macau. We are running our fully autonomous demo based on the Lighthouse positioning technology and charging pads. We also have brought some prototypes to show, for instance the Crazyflie Bolt, the AI deck and the Active marker deck. You can read more about the demo at the IROS 2019 page.

We’d love to hear what you are working on, discuss issues, possibilities or new products. If you are at IROS, drop by our booth (B34) and say hi!

Lighthouse yaw

We have not only prepared for IROS, we have also been working on improving the lighthouse positioning system. Recently we added a (slightly hackish) solution for updating the yaw with data from the Lighthouse deck. This means that it is not necessary to start the Crazyflie facing the positive X direction when using the Lighthouse deck. The Crazyflie will understand its heading and act accordingly.

Two Crazyflies facing a random direction, take off and rotate to yaw=0.

We are also working on integrating the Lighthouse deck in a better way into the Kalman filter. If everything goes according to plan, this will enable a Crazyflie to fly with only one base station, and be more robust when using two base stations.

For the last four years of my PhD at the TU Delft and the MAVlab, we were determined to figure out how to make a swarm/group of tiny quadcopters fly through and explore an unknown indoor environment. This was not easy, as there were many sub-challenges that needed to be solved first. However, we are happy to say that we were able to show a proof of concept in the latest Science Robotics issue! Here you can see the press release from the TU Delft for general information about the project.

Since we used the Crazyflie 2.0 to achieve this result, in this blog post we want to highlight the technical side of the research: the achievements and the challenges we had to face. Moreover, we will also explain the updated code, which uses the new features of the Crazyflie firmware as explained in the previous blog post.

A swarm of drones exploring the environment, avoiding obstacles and each other. (Guus Schoonewille, TU Delft)

Hardware

In the paper, we presented a technique called the Swarm Gradient Bug Algorithm (SGBA), which borrows (as the name suggests) navigational elements from the path planning techniques called 'bug algorithms' (see this paper for an overview). The basic principle is that SGBA is a state machine with several simple behavior presets such as 'going to the goal', 'wall following' and 'avoiding other Crazyflies'. Below you can see all the modules that were used. For the main experiments (on the left), the Crazyflie 2.0s were equipped with the Multiranger and the Flow deck (here we used the Flow deck v1). On the right you see the Crazyflies used for the application experiment, where we made a custom Multiranger deck (with four VL53L0x's) and added a Hubsan camera module. For both we used the Turnigy nanotech 300 mAh (1S 45-90C) LiPo battery to increase the flight time to 7.5 min.

Hardware used in the experiments. Adapted from the science robotics paper.

Experiments

With this, we were able to have six Crazyflies explore an empty office floor in the faculty building of Aerospace Engineering. They started out in the middle of the test environment and all flew off in different preferred directions, which they maintained using their internally estimated yaw angle. With the Multirangers, they managed to detect walls in their path and followed the wall's border until the way was clear again to continue in their preferred direction. Based on their local odometry measurements with the Flow deck, the Crazyflies detected whether they were flying in a loop, in order to get out of rooms or similar situations.

A little before halfway through their battery life, they would try to get back to their initial position, which they did by measuring the Received Signal Strength Indicator (RSSI) of the Crazyradio PA home beacon located at their starting position. During wall following, they measured the gradient of the RSSI to determine in which direction it increased or decreased, and used that to estimate the angle back to the goal.

While they were navigating, they were also communicating with each other by broadcasting messages. Based on the RSSI of those messages, they could sense other Crazyflies approaching, which they first of all used for collision avoidance (by letting the low-priority CFs move out of the way of the high-priority CFs). Second, during the initial exploration phase, they also communicated their preferred direction, so that one of them could change its exploration behavior to avoid conflicting with the others. This way, we tried to maximize the area explored by the Crazyflies.
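To make the behavior a bit more concrete, here is a heavily simplified sketch of the decision logic described above. The states and conditions are paraphrased from the description and do not reflect the actual implementation (see the repository linked further down).

```python
# Illustrative sketch of the SGBA state machine, not the real firmware code.
GOAL_SEEKING, WALL_FOLLOWING, AVOIDING, HOMING = range(4)

def sgba_step(obstacle_ahead, other_cf_close, battery_below_half,
              preferred_heading, rssi_gradient_heading):
    if other_cf_close:
        return AVOIDING, None                 # low-priority CF yields
    if battery_below_half:
        return HOMING, rssi_gradient_heading  # follow increasing beacon RSSI
    if obstacle_ahead:
        return WALL_FOLLOWING, None           # follow the wall border
    return GOAL_SEEKING, preferred_heading    # keep the preferred direction

state, heading = sgba_step(False, False, False, 45.0, None)
```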

One of those experiments with 6 Crazyflies can be seen in this video for better understanding:

We also showed an application experiment where four Crazyflies with the camera modules searched for two dummies in the same environment.

Challenges

In order to get the results presented above, there were many challenges to overcome during the development phase. Here is a list that explains a couple of the elements that needed to work flawlessly:

  • Single CF robustness: We used the Flow deck v1 for the 'deadlock' detection and the basic velocity control, which was challenging in the test environment because of the low lighting conditions and lack of floor texture. Therefore the Crazyflies flew at 0.5 meters altitude in order to ensure robustness. The wall following was performed solely using the Multiranger. This was tested in many situations and it was able to handle many types of obstacles without any problem. However, with its limited FOV, the laser range finder cannot detect all types of obstacles, for instance thin or irregular ones such as plants. Luckily these were not encountered in the environment the Crazyflies flew in, but to increase robustness we will need to consider adding a camera to the navigation as well.
  • Communication base station: SGBA in essence only needs one Crazyradio PA base station, since all the behavior runs completely on board. However, in order to present results in the paper, it was necessary for the CFs to communicate information back, like odometry, state and such. As this was two-way communication (the CFs needed the RSSI to get back home), each Crazyflie needed its own base station. Also, they all needed to be on different channels to avoid packet collisions and RSSI accumulation.
  • Communication peer to peer: At development time, P2P did not exist yet, so we had to implement broadcast communication between the Crazyflies. Since the previous point required them to listen on different channels, the NRF had to be configured to send separate broadcast messages on all those channels as well. In order to time this properly, the home beacon had to sync the Crazyflies by sending out a timer. Even so, the avoidance maneuvers were done very conservatively to try to prevent inter-drone collisions.

Many of the issues, especially the communication challenges, will be solved with the updated code implementation as explained in the next section.

Updated code

The firmware that the Crazyflies used to fly in the experiments shown in the paper can all be found in this public repository. However, the code is based on quite an old version of the Crazyflie firmware, as it was forked almost a year ago. The implementation of the SGBA state machine and the P2P broadcasting were not generic enough to integrate back into the development cycle, so the current code is only suitable for the old Crazyflie 2.0.

Therefore, we developed two major changes in the latest firmware which will make it much easier for me (and, we hope, for others as well!) to implement SGBA and the P2P communication in a way that should be compatible with any version of the firmware (and hardware) from here on. We implemented SGBA as an app-layer application and also handle all the broadcast messaging directly from this layer. Please check out this GitHub repository with the new app-layer implementation of SGBA.

Crazyflies are great for indoor applications, thanks to their maneuverability and ubiquitous character. Their small size, however, limits sensor quality and compute capability. In our recent work we present source seeking onboard a Crazyflie using deep reinforcement learning. We show a general methodology for deploying deep neural networks on heavily constrained nano drones, using full 8-bit quantization and input scaling.

Our fully autonomous light-seeking Crazyflie

Problem definition

Source seeking can be interesting in a variety of contexts. We focus on light seeking, as seen in nature. Many insects rely on light, either for survival or navigation. Light seeking in aerial robotics has many applications, such as finding the exit out of a dark room. 

Our goal is to fully autonomously find a light source, using only the onboard Micro Controller Unit (MCU) and deep reinforcement learning. 

Crazyflie configuration

Our fully autonomous nano drone uses several standard and custom sensors. We use the multiranger and flowdeck for position control and obstacle avoidance.

The Multiranger deck with our custom light sensor

We add a custom light sensor, based on the Adafruit TSL2591 sensor. The custom light sensor fits nicely on the Multiranger deck, adding little mass and inertia (total vehicle mass is 33 grams).

Crazyflie 2.1 with Multiranger, Flow deck and light sensor

Algorithm

We use a deep reinforcement learning algorithm with a discrete action space. The neural network policy takes laser ranger and light readings (current and past values) as input. The network tells the drone to rotate left, rotate right or fly forward. We train a neural network with two hidden layers of 20 nodes each, using bias add and ReLU activation functions. The input layer is a vector of length 20 (4 stacked states), which, compared to images, greatly reduces the computational effort.

DQN policy architecture
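For reference, a minimal sketch of such a policy network, assuming the layer sizes described above, could be defined like this:

```python
import tensorflow as tf

# Length-20 input vector, two hidden layers of 20 units (bias + ReLU),
# and one output per discrete action (rotate left, rotate right, forward).
policy = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(3),  # Q-value per action
])
policy.summary()
```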

Simulation and conversion

We train our agent in simulation using the Air Learning simulation platform, after which we fully quantize the neural network to 8-bit integers.

To maintain accuracy after quantization, we came up with some quantization innovations. Both the input layer and all tensors in the network need to have a pre-defined [min, max] range in float32 in order to be converted to 8-bit integers.

Air Learning pipeline

In the input layer, not all inputs have the same range. A laser ranger can return values from 0 to 5 meters, while our light sensor may return a value between 0 and 300 lux. To deal with this, we scale all inputs to the same range.
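A simple way to do this is min-max scaling per channel; the helper below is an illustrative sketch using the ranges mentioned above, not our exact preprocessing code.

```python
import numpy as np

# Map every sensor channel to [0, 1] using its own physical range.
def scale(value, lo, hi):
    return (np.clip(value, lo, hi) - lo) / (hi - lo)

ranger_scaled = scale(1.25, 0.0, 5.0)    # laser ranger in metres -> 0.25
light_scaled = scale(150.0, 0.0, 300.0)  # light sensor in lux    -> 0.5
```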

Additionally, the tensors in the network need an assigned [min, max] range for quantization. To obtain these, we feed a set of representative inputs into the unquantized model and read out the values of the intermediate layers. With this strategy, we arrive at a 2.9x speed-up compared to float32 inference.
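As an illustration, full 8-bit post-training quantization with a representative dataset can be done along these lines with the TensorFlow Lite converter. The calibration data here is just a random placeholder standing in for recorded flight observations, and the model is the sketch from earlier.

```python
import numpy as np
import tensorflow as tf

# Policy network from the earlier sketch (redefined here for completeness).
policy = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(3),
])

# Placeholder calibration data; in practice, recorded flight observations.
calibration_obs = np.random.rand(256, 20).astype(np.float32)

def representative_data():
    for sample in calibration_obs:
        yield [sample[np.newaxis, :]]

converter = tf.lite.TFLiteConverter.from_keras_model(policy)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open('policy_int8.tflite', 'wb') as f:
    f.write(converter.convert())
```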

Implementation

We use TensorFlow Lite to deploy our TensorFlow models in C on the Crazyflie. The TFMicro stack, together with the actual model, almost completely fills up the available RAM.

RAM utilization on the Crazyflie 2.1

The total amount of RAM available on the Crazyflie 2.1 is 196 kB, of which only 131 kB is available for static allocation at compile time. The Bitcraze software stack uses 98 kB of RAM, leaving only 33 kB available for our purposes. The TFMicro stack takes up 24 kB, thus leaving 9 kB for the actual model (e.g., weights and bias terms).

We also analyzed CPU usage and noticed a high number of interrupts by the 'stabilizer' thread, i.e., the PID controllers. Because of these interrupts, inference of our model takes 46.4 times longer than it would without interruption.

Our quantized model is 3 kB. As an FP32 model it would have taken 12 kB, which would not have fit in the available memory. We were able to run inference at 4 Hz, compared to the estimated 1.4 Hz of the same but unquantized model.

In a practical sense, we noticed a decreased level of stability when increasing model size. Occasionally the drone would reboot randomly while flying. Possible causes for this behavior are RAM overflow and task scheduling problems in the RTOS. We also observed variation in performance loss after quantization: some of our trained models would just keep rotating after quantization, while our final model demonstrates robust source seeking behavior. This degree of uncertainty can possibly be avoided using quantization-aware training.

Finally, flying in a dark room without a position estimate can be challenging. The PID controllers rely heavily on information provided by the Flow deck, and this information is limited when little light is present and the floor contains few features. To fix this, we added textured mats on the ground, adding features and enabling stable flight in a dark room.

Flight tests

To validate our results from simulation, we created a cluttered environment with a light source. We randomly initialized the drone in the room and observed a success rate of 80% over a total of 105 flight tests. By varying the environment and the initial drone position, we learned more about the inner workings of our algorithm.

Experiment testing environment

We learned that the algorithm performs better with more obstacles, and that a closer initial position improves performance. Generally, source seeking far away from the source is really hard: almost no variation in source strength exists between different measurements, and the drone observes mostly noise.

Outlook

With our methodology, we were able to perform fully autonomous source seeking using deep reinforcement learning on a Cortex-M4 MCU. We hope our methodology will be applicable to other TinyML applications where resources are heavily constrained. Developing custom accelerators for a specific workload is time-consuming and expensive, while general-purpose MCUs are cheap and widely available. With our methodology, we unlock new applications for learning algorithms on heavily constrained platforms.

Direct path to source in empty room, blue = take-off

Links

Video: https://www.youtube.com/watch?v=wmVKbX7MOnU

Paper: https://arxiv.org/abs/1909.11236

Github: https://github.com/harvard-edge/source-seeking

Feel free to contact us if you have any questions or ideas: bduisterhof@g.harvard.edu

As pointed out in Daniele’s blog post about PULP-DroNet, we are collaborating on an AI-deck built around the new GAP8 RISC-V multi-core MCU. In that blog post you can find all the details about DroNet, while here we will talk a bit about the AI-deck hardware. The AI-deck is similar to the PULP-Shield but with some optimizations: one of the HyperFlash memory spots has been removed, the communication interface has been slimmed down, and an ESP32 (NINA module) has been added for WiFi connectivity.

Latest AI-deck prototype

So, all together, this is a pretty good platform for developing low-power AI on the edge for a drone.

Features:

  • GAP8 – Ultra low power 9 core RISC-V MCU
  • Himax HM01B0 – Ultra low power 320×320 greyscale camera.
  • 512 Mbit HyperFlash and 64 Mbit HyperRAM
  • ESP32 for WiFi and more (NINA-W102)
  • 2 x JTAG for GAP8 and ESP32

Currently we are doing the final testing of the hardware, and hopefully we will launch production at the end of October. If production goes according to plan, we hope to offer it as an early access product just before Christmas. Make sure to come back and check the blog for more information about the progress as well as pricing.