
Li-Ion batteries have long packed more energy per gram than Li-Po batteries. The problem for UAV applications has been that Li-Ion can’t deliver enough current, but this is starting to change. There are now 18650 cells that are rated for 30-35A continuous discharge, at least according to the specs. Therefore we thought it was time to do some testing, and decided to build a 1-cell Li-Ion drone using the Crazyflie Bolt as the base.

Since an 18650 battery is 18mm in diameter and 65mm long, its size would affect the design, but we still wanted to keep the drone small and lightweight. The battery is less than 20mm wide, which means we can run the deck connectors around it, which is nice. We chose to 3D print the frame ourselves and use off-the-shelf ESCs, motors and props. After a couple of hours of research we selected 3″ propellers, 1202.5 11500Kv motors and tiny 1-2S single ESCs for our first prototype.

Parts list:

  • 1 x Custom designed 130mm 3D printed frame
  • 1 x Crazyflie Bolt flight controller
  • 4 x Eachine 3020 propeller (2xCW + 2xCCW)
  • 4 x Flywoo ROBO RB 1202.5 11500 Kv motors
  • 4 x Flash hobby 7A 1-2S ESC
  • 1 x Li-Ion Sony 18650 VTC6 3000mAh 30A
  • Screws, anti vib. spacers, zipties, etc.

The custom-designed frame was developed in iterations and can still be improved a lot, but at this stage it is small, lightweight and rigid enough. We wanted the battery to be as central as possible while keeping everything compact.

Prototype frame designed in FreeCAD.

Assembly and tuning

The 3D printed frame came out quite well and weighed in at 13g. After soldering the Bolt connectors to the ESCs, attaching motors and props, adjusting the battery cable and soldering an XT30 connector to the Li-Ion battery, the whole thing weighed ~103g, of which the battery is 45g. It feels quite heavy compared to the Crazyflie 2.1 and we had a lot of respect for it the first time we test flew it. Before taking off we reduced the pitch and roll PID gains to roughly half, and luckily it flew without problems and quite nicely. Well, it makes a lot of noise, but that is kind of expected. After increasing the gains a bit we felt quite pleased with:

#define PID_ROLL_RATE_KP  70.0
#define PID_ROLL_RATE_KI  200.0
#define PID_ROLL_RATE_KD  2
#define PID_ROLL_RATE_INTEGRATION_LIMIT    33.3

#define PID_PITCH_RATE_KP  70.0
#define PID_PITCH_RATE_KI  200.0
#define PID_PITCH_RATE_KD  2
#define PID_PITCH_RATE_INTEGRATION_LIMIT   33.3

#define PID_ROLL_KP  7.0
#define PID_ROLL_KI  3.0
#define PID_ROLL_KD  0.0
#define PID_ROLL_INTEGRATION_LIMIT    20.0

#define PID_PITCH_KP  7.0
#define PID_PITCH_KI  3.0
#define PID_PITCH_KD  0.0
#define PID_PITCH_INTEGRATION_LIMIT   20.0

This would be good enough for what we really wanted to try: the endurance with a Li-Ion battery. From a quick measurement of the current consumption at hover, 5.8A, we estimated up to ~30 min of flight time on a 3000mAh Li-Ion battery. Wow, but first a real test…
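As a sanity check of that estimate (assuming the full rated capacity is usable and the hover current stays constant):

# Back-of-the-envelope flight time estimate
capacity_ah = 3.0        # 3000 mAh Sony VTC6
hover_current_a = 5.8    # measured current draw at hover

flight_time_min = capacity_ah / hover_current_a * 60
print(f"Estimated hover time: {flight_time_min:.0f} min")   # ~31 min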

Hover test

For the hover test we used Lighthouse 2, which is starting to work quite well. We had to change the weight and thrust constants in estimator_kalman.c for the autonomous flight to work:

#define CRAZYFLIE_WEIGHT_grams (100.0f)

//thrust is thrust mapped for 65536 <==> 250 GRAMS!
#define CONTROL_TO_ACC (GRAVITY_MAGNITUDE*250.0f/CRAZYFLIE_WEIGHT_grams/65536.0f)

After doing that, we created a hover script that hovers at 0.5m height and lands when the battery voltage reaches 3.0V. We leaned back with excitement, behind a safety net, and started the script… after 19 min it landed. Good, but not what we hoped for, and quite far from the calculated 30 min. Maybe Li-Ion isn’t that good when it needs to provide more current…? A quick internet search showed that Li-Ion can be discharged all the way down to 2.5V, but we have to stop at 3.0V because of the electronics and the loss of thrust, so we are missing quite a bit of energy… Further investigation is needed.
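The hover script itself is not much more than the following sketch, assuming the standard cflib API; the URI is a placeholder and the real script was a bit more elaborate:

# Minimal sketch of a hover-until-low-voltage script (not the exact script we used)
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'   # placeholder address
LAND_VOLTAGE = 3.0                   # land when the Li-Ion cell reaches 3.0V

cflib.crtp.init_drivers()

log_conf = LogConfig(name='battery', period_in_ms=500)
log_conf.add_variable('pm.vbat', 'float')

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with MotionCommander(scf, default_height=0.5) as mc:    # takes off to 0.5 m
        with SyncLogger(scf, log_conf) as logger:
            for _, data, _ in logger:
                if data['pm.vbat'] < LAND_VOLTAGE:
                    break    # leaving the MotionCommander context lands the Crazyflie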

Lighthouse 2 flight test

As a final test we launched some flight scripts to fly in a square and in a spiral, to get a feel for the Lighthouse 2 + Bolt + PID controller combination. We think it turned out quite nicely, and this with almost no optimization effort.
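The scripts themselves are simple. A square flight, for example, can be sketched roughly like this with the high-level position commander (again assuming a working Lighthouse setup, and with a placeholder URI):

# Sketch of a simple square flight at 0.5 m height
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.position_hl_commander import PositionHlCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'   # placeholder address
cflib.crtp.init_drivers()

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with PositionHlCommander(scf, default_height=0.5) as pc:   # takes off and lands automatically
        for x, y in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]:
            pc.go_to(x, y, 0.5)   # visit the four corners of a 1 m square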

Summary

Li-Ion felt like it could be a game changer when it comes to flight time, but the result was not as promising as we had hoped. That doesn’t mean we can’t get there though; more research and development is required.

We’re happy to announce that we have taken an important step forward in the development of the lighthouse positioning system: we have improved the calibration compensation. The changes improve the correctness of the coordinate system, especially for lighthouse V2 base stations.

As mentioned in this blog post, one of the remaining areas to solve was the handling of calibration data, and this is what we have addressed lately. In the manufacturing process, mechanical elements are mounted within some tolerance, and since the precision of the system is so high, even very fine tolerances make a big difference in the end result. Each base station is measured in the factory and the calibration data describing these imperfections is stored in the base station. The calibration data is transmitted in the light sweeps, which enables a receiver to use it to correct the errors in the measured angles.

As with everything else related to lighthouse, there is no official information on how to interpret the calibration data, so we (and the community) have had to make educated guesses.

Lighthouse 1

The compensation model for lighthouse 1 has been known for quite some time, see the Astrobee project by NASA and Libsurvive. The most important parameter is the phase, and until now this is the only part of the calibration data that we have used in the firmware. In the new implementation we use all parameters.

The parameters of the lighthouse 1 calibration model are phase, tilt, gib mag, gib phase and curve.

Lighthouse 2

The compensation data for lighthouse 2 is similar to lighthouse 1, but there are two new parameters, ogee mag and ogee phase. It also seems that some parameters that share names between lighthouse 1 and 2 have different meanings, for instance curve.

Libsurvive has implemented compensation for lighthouse 2, but unfortunately we have not managed to use their work with good results. Instead we have tried to figure out what the model might look like and match it to measurements. We have managed to get good results for phase, tilt, gib mag and gib phase, while we don’t know how to use curve, ogee mag and ogee phase. The solution seems to be pretty good with this subset of the parameters, and we have decided to leave it at that for now.

Use of calibration data

The way we have used the calibration data so far has been to apply it to the measured angles to get (more) correct sweep angles that are then fed into the position estimation algorithms. The problem is that the compensation model is designed the other way around, i.e. it goes from correct angles to measured angles, so an iterative approach is required to apply it to the measured angles. A better way (most likely the intended one) is to apply it in the Kalman estimator instead, where it simply becomes part of the measurement model.
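To illustrate the difference: the calibration model is a forward function, measured = f(true). Feeding corrected angles to the estimator therefore requires inverting f(), for example with a fixed-point iteration, whereas the Kalman filter can use f() directly in its measurement model. A small sketch of the iterative inversion, with a made-up distortion function standing in for the real calibration model:

# Inverting a forward distortion model measured = f(true) by fixed-point iteration.
# f() below is a toy stand-in, not the actual lighthouse calibration model.
import math

def f(true_angle):
    # toy distortion: a phase offset plus a small sinusoidal error
    return true_angle + 0.005 + 0.002 * math.sin(2.0 * true_angle)

def invert(measured_angle, iterations=5):
    estimate = measured_angle
    for _ in range(iterations):
        # move the estimate by the remaining error of the forward model
        estimate -= f(estimate) - measured_angle
    return estimate

measured = f(0.7)
print(invert(measured))   # converges back towards 0.7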

Currently we also calculate the corrected angles and expose them as log data, but this is not required for the standard functionality of the lighthouse system. We may make it possible to turn it on/off via a parameter in the future to save some CPU power.

Functional improvements

So what kind of improvements will the calibration add?

The first improvement is in the base station geometry estimation. With more correct angles, the estimated base station position and orientation will be better. This is important for getting a good estimate of the Crazyflie position, since poor geometry data will give the position estimator conflicting information.

Secondly, more correct angles will straighten the coordinate system. With angular distortion, the position estimator will not be able to estimate the correct position, and the coordinate system will be warped, bent or stretched. The improvement can be seen when flying parallel to the floor at constant height, for instance.

Thirdly, the stability will hopefully be improved. When the angles from the two base stations match better, the estimated position will change less when one base station is occluded, which generally makes life easier for the position estimator. We will take a look at the outlier filter to see if it can be improved as well.

Remaining problems

The calibration data is transmitted as part of the sweeping light planes at a low bitrate. For lighthouse 1 the decoding process works well and all calibration data is usually received within 20-30 seconds. For lighthouse 2 it does not work as well in our current implementation; it takes (much) longer before all data has been received correctly from both base stations.

It is possible to get the calibration data via the USB port on lighthouse 2 base stations, and we are considering storing the calibration data in the Crazyflie somehow instead. This will be even more important when we support larger systems (2+ base stations), where not all base stations are within range at startup.

For a long time issue #270 has been bugging us. It caused the µSD-card logging to fail when used in combination with either the Flow or Loco deck, or actually any deck that uses the deck SPI bus. Several attempts have been made to fix this issue over time, and recently we decided to really dig into it. There was a workaround that moved the µSD-card to a different SPI bus, but it was tedious and required patching the deck. So it was time to fix this for good, or at least find out why it doesn’t work. An SPI bus is designed to be shared by multiple devices, so it should be possible… Timing problems are still tricky, but that is another story.

The problem

The SPI driver protects the bus with a mutex to prevent several clients from accessing it at the same time. After some digging we found that the FatFs integration layer was buggy and that the SPI bus handling wasn’t well done. After comparing it to some other open implementations we concluded that it needed to be rewritten.

The solution

After rewriting part of the integration layer to have a clear path for when the SPI bus is taken and when it is released, we immediately got good results. µSD-card logging with the Flow and Loco decks worked, hooray! There is of course a limit to this: as we mentioned earlier, the bus is a shared resource, and if it gets too congested, things will slow down or stop working. This is currently the case when the LPS is put in TWR mode. TWR is very chatty and causes around 15k transactions per second on the SPI bus, and since it has higher priority than the µSD-card logging, the µSD-card write task starves, causing the logging to fail.
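The principle behind the rewrite is the classic shared-bus pattern: a client takes the bus, performs one complete transaction, and releases the bus again, never holding it across unrelated work. Sketched in Python just to illustrate the idea (select/transfer/deselect are hypothetical device methods; the real code is the C FatFs/SPI glue in the firmware):

# Illustration of the take/release pattern around a shared bus (not firmware code)
import threading

spi_bus_lock = threading.Lock()

def spi_transaction(device, data):
    with spi_bus_lock:            # take the bus
        device.select()           # assert chip select
        response = device.transfer(data)
        device.deselect()         # de-assert chip select
    return response               # bus is released here, other clients may run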

µSD and LPS SPI bus captured with a logic analyzer, over 50ms
µSD and LPS SPI bus captured with a logic analyzer, over 6ms

So if you stay away from the LPS TWR mode, µSD-card logging should now work fine. I’m pretty sure there is a workaround for TWR mode as well; a first guess is that you would need to slow down the TWR update rate, which is currently at its maximum.

Happy logging!

Following the firmware releases, we have now released new versions of the Crazyflie client and Python lib (cflib). It took a bit more time to test and fix various last-minute bugs, but we have now released Crazyflie client 2020.09 as well as Crazyflie lib 0.12.1.

Crazyflie python lib 0.12.1

The main new functionalities of the lib are:

  • Python 2.x support is now dropped. Official Python 2 support ended at the beginning of this year and it is not installed by default in Ubuntu 20.04, so it was time to stop supporting it in cflib.
  • Some documentation work.
  • The capability to abort a bootloading operation.

Crazyflie client 2020.09

There have been a bunch of cosmetic and functional changes in the client. Some themes have been added, so the client can now be used with a dark blue or even a green-on-black hacker theme. The Qt theme is also now forced to be drawn by Qt on all platforms: this means that the client will not look like a native Windows or Mac app on those platforms, but the styling will be consistent everywhere. This will simplify development and make the documentation consistent across platforms.

The bootloader has been changed to automatically download firmware releases from GitHub by default. This is a great quality-of-life feature made by Victor this summer that makes it very easy to run Crazyflies with a clean release build.

New bootloader window

There are also a bunch of bugfixes; the full changelog can be read on the release page.

Future plans

Semantic versioning for the lib

We have been thinking of using semantic versioning for the lib and bumping the version to 1.0. This would allow us to communicate the state of the lib more accurately: it is not perfect, but it is perfectly usable. It would also give us more freedom to break compatibility. A lot of things could be made better, but we are always very careful about not breaking backwards compatibility. Proper semantic versioning would allow us to release a Crazyflie lib 2.0, making it clear that if you update from 1.0 to 2.0 you might have to make changes to your scripts.

Client binary releases for more platforms

So far, the Crazyflie client has had a binary release for Windows, and there has been some work to make one for macOS. By binary release we mean releasing the client in a form that does not require installing Python and then the Crazyflie client pip package: on Windows it is an installer, and on Mac it would be an app bundle.

Last week we also worked on Linux releases; after all, most of us use Linux at Bitcraze, so we might as well show it some love. Today we have released a snap of the Crazyflie client. This means that you can install the Crazyflie client from the Ubuntu Software application on Ubuntu, or directly via the snap install --edge cfclient command on any Linux system with snap installed. There are still some rough edges, and a stable version will only be available for the next release, but this should make it much easier to get started with the Crazyflie.

We are happy to announce that we have released a new version of the Crazyflie firmware, version 2020.09. It is available for download from GitHub.

The new firmware solves an old compatibility issue when using the LPS and Flow deck at the same time and also improves stability. A list of all the issues that have been fixed can be found on the release page.

For users that have an LPS system, we have also made some improvements to the LPS node firmware and are releasing version 2020.09 of it as well.

If you are building the Crazyflie or LPS firmware from source, remember to update the libdw1000 git submodule using

git submodule update

We are working on a release of the python client as well, but still have a few issues to fix so stay tuned.

Now that we are all back from our summer holiday, we are back to what we set out to do a while ago: fixing issues and stabilizing code. In the last two weeks we have been focusing on fixing existing issues with the Flowdeck and the LPS positioning system. It is still work in progress, and even though we have fixed some problems, we still have some way to go! At least we can give you an update on our work of the last few weeks.

Flow-deck Kalman Improvements

When we started working on the motion commander tutorials (see this blogpost), which are mostly based on flying with the Flowdeck, we were also hit by a failure that probably many of you know: the Crazyflie flies over a low-texture area, wobbles, flips and crashes. This won’t happen as long as you are flying over high-texture areas (like a children’s play mat, for instance), but in the occasional situation where it is not, it should not crash like it does now. The expected behavior is that the Crazyflie glides away until it flies over something with sufficient texture again (that is the behavior you see when you are flying manually with a controller and just let go of the controls). So we decided to investigate this further.

First we thought it might have something to do with the rotation compensation from the gyroscopes, which is part of the measurement model of the Flowdeck, perhaps overcompensating or something like that. But if you remove that part, the Crazyflie starts wobbling right away, even over high-texture areas… so that was not it for sure. We still think it causes the actual wobbling itself (compensating for flow that is not detected), but we had to dig a bit deeper into the issue.

Eventually we did a couple of measurements. We let the Crazyflie fly a figure-eight over a low- and a high-texture area and logged a couple of important values: the detected flow, the ground-truth position, and a couple of quality measurements that PixArt’s PMW3901 flow sensor provides itself, namely the number of features (motion.squal) and the automatic shutter time (motion.shutter). From the ground-truth position we can compute the ground-truth flow that the Flowdeck is supposed to measure. With that we can see what the standard deviation of the measured versus ground-truth flow actually is, and look for a relation between the error’s standard deviation and the quality values, which resulted in a couple of nice graphs.
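For those who want to reproduce this kind of measurement, logging the flow sensor quality values from a script looks roughly like this; motion.squal and motion.shutter are the values mentioned above, while motion.deltaX/deltaY (the raw flow) and the variable types are assumptions that may differ between firmware versions:

# Sketch of logging the PMW3901 quality values with cflib
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.log import LogConfig
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.crazyflie.syncLogger import SyncLogger

URI = 'radio://0/80/2M/E7E7E7E7E7'   # placeholder address
cflib.crtp.init_drivers()

log_conf = LogConfig(name='flow_quality', period_in_ms=10)
log_conf.add_variable('motion.deltaX', 'int16_t')
log_conf.add_variable('motion.deltaY', 'int16_t')
log_conf.add_variable('motion.squal', 'uint8_t')
log_conf.add_variable('motion.shutter', 'uint16_t')

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with SyncLogger(scf, log_conf) as logger:
        for timestamp, data, _ in logger:
            print(timestamp, data)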

Three major improvements were added to the code based on these results:

  • The standard deviation of the flow measurement has been increased from 0.25 to 2.0 pixels, since this is a more accurate depiction of the measurement noise the Kalman filter should expect.
  • An adaptive standard deviation based on motion.shutter has been implemented (since there is a stronger correlation there than with motion.squal), which can be activated by setting the parameter motion.adaptive to True (1). It is set to False (0) by default, since the increased standard deviation of the first improvement already improved the flight quality significantly.
  • If the flow sensor indicates that no motion is detected (log motion.motion), no measurement value is sent to the Kalman filter. The time difference (dt) between samples is also adjusted based on the last measurement received.

Now when the Crazyflie flies over low-texture areas with the Flowdeck alone, it will not flip anymore but simply glides away! Check out this closed issue to learn more about the exact implementation; it should be part of the next release.

The LPS and Flowdeck

Kalman filter conflicts

The previous flow deck fix also took care of this issue, which caused the Crazyflie to flip in the LPS system as well if it did not detect any flow. This happened because the Kalman filter trusted the flow measurement much more than the UWB distance measurements in the previous firmware version, but not anymore! If the Flowdeck is out of range or can’t detect motion, the state estimation will trust the LPS system more. However, once the Flowdeck detects motion, it will help out with the accuracy of the position estimate.

Moreover, it is now possible to make the Crazyflie fly in and out of the LPS system area with the Flowdeck! However, be sure that it flies using velocity commands, since there are situations where the position estimate can jump:

  • The LPS system is off, you take off with only the Flowdeck, and then turn on the LPS nodes.
  • You take off within the LPS system, fly out of the LPS system’s reach for a while (the position estimate will drift a bit), and then fly back into the LPS system with the drifted, Flowdeck-based position estimate.

As long as you are flying with velocity commands, like with the assist modes when using a controller in the cfclient, this should not be a problem.
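For reference, flying with velocity commands from a script can look like this minimal sketch (using the motion commander's velocity interface; the URI is a placeholder):

# Sketch of velocity-based flying, which tolerates jumps in the position estimate
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'   # placeholder address
cflib.crtp.init_drivers()

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    with MotionCommander(scf, default_height=0.5) as mc:
        mc.start_linear_motion(0.3, 0.0, 0.0)    # fly forward at 0.3 m/s
        time.sleep(3)
        mc.start_linear_motion(-0.3, 0.0, 0.0)   # fly back
        time.sleep(3)
        mc.stop()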

Deck compatibility problems

The previous fixes only work with the LPS methods TDOA2 and TDOA3. Unfortunately, there is still some work to be done on the deck incompatibility between the TWR method and the Flowdeck. The deck stops working shortly after the Crazyflie is turned on, and this seems to be related to the SPI bus that is shared by the LPS deck and the Flowdeck. Reading the flow sensor takes some time, which blocks the TWR algorithm for a while, making it miss an event. Since the TWR algorithm relies on a continuous stream of events from the DWM1000 chip, it simply stops working if that stream is interrupted… or at least that is our current theory.

Please check out this issue to follow the ongoing discussion. If you have an idea of what is going on, drop a comment and let’s see if we can work together to iron out this issue once and for all!

Hello everyone, this is Victor and I’ve spent another internship here at Bitcraze. You can read my blogpost from last year here: https://www.bitcraze.io/2019/08/summer-internship/. I have learned a lot since last year, so it has been fun to put my skills to the test!

This summer I have spent my time improving the cfclient. I’ve fixed a few bugs, but mostly it has been about making small changes that improve the overall functionality.

Some of the improvements that I’ve worked on:

  • Flight-control tab: Added logging of the x and y axes and changed the columns into more suitable groups. Also improved the UI for assist mode.
  • Flashing dialog: Added support for flashing each of the MCUs individually as well as choosing which one to flash (previously you could only flash the STM32 or both). Created an automatic firmware-release downloader, so that you don’t have to download the files manually.
  • Log-config: Added grouping of log configurations, which allows you to organize the configs into categories. Also added small functionalities like double-clicking to add or remove configs, etc.
  • Added sort support for all lists/tables.
  • Removed traces of Crazyflie 1.0 and support for X-mode, since they are no longer supported by the client.
  • Removed traces of Python 2.

I hope that these functionalities will help you and make your experience with the client better. If you have any tips for further improvements, you’re more than welcome to leave a comment or contribute yourself. This is my last week for this summer, but I hope to see you all again, and until then, fly safe!

As you probably already know, we have been wondering how best to handle our documentation and how to provide information to new Crazyflie users as easily as possible, as you can read in our blogpost of two weeks ago. In the meantime, we also had a chance to think about the results of the poll we ran when we discussed new ways of meeting our users. We had about 30 responses, and it became clear that many of you need more knowledge about working with the Crazyflie. The majority voted for online tutorials, and although it might be difficult to do those during the summer holidays, we have already started to make step-by-step guides for various parts of the Crazyflie ecosystem.

Poll result of alternative events.

Python Library Tutorials

We have started with step-by-step tutorials for the cflib (the Python library of the Crazyflie). Usually we refer to the example pages of the cflib; however, we feel that many users copy-paste parts of these scripts for their own purposes without understanding what is actually going on. Therefore, in order to move Crazyflie beginners towards becoming developers, we have made these guides to teach exactly what is going on in each module, step by step.

These tutorials can be found in the python library documentation. The first tutorial focuses on connecting, logging and parameters; it guides you through the process of connecting to the Crazyflie from a Python script, setting up logging configurations in two ways (asynchronous and synchronous), and reading and setting parameters.

The second tutorial is about the motion commander and is a logical continuation of the first. A nice thing is that we also show how to build some protection into your script: you can check whether the Flowdeck is attached and prevent the Crazyflie from taking off altogether if it is not. This will be a lifesaver in your future endeavors, and trust me, I know from experience ;). Afterwards it goes into how to take off, fly forward and go back to the initial position. In the application at the end of the motion commander tutorial, we also use the logging functionality to get the actual estimated position of the Crazyflie. With this information, we show how to write an application that creates a virtual bounding box that the Crazyflie can bounce around in (like the old Windows screensaver).
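The deck check from that tutorial boils down to reading a deck parameter before taking off. A condensed sketch (deck.bcFlow2 is the detection parameter for the Flow deck v2; the callback wiring in the actual tutorial is a bit more elaborate):

# Only take off if the Flowdeck is detected
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie
from cflib.positioning.motion_commander import MotionCommander

URI = 'radio://0/80/2M/E7E7E7E7E7'   # placeholder address
deck_attached = False

def param_deck_flow(name, value_str):
    global deck_attached
    deck_attached = int(value_str) != 0

cflib.crtp.init_drivers()

with SyncCrazyflie(URI, cf=Crazyflie(rw_cache='./cache')) as scf:
    scf.cf.param.add_update_callback(group='deck', name='bcFlow2', cb=param_deck_flow)
    time.sleep(1)   # give the parameter callback a chance to fire

    if not deck_attached:
        print('No Flowdeck detected, not taking off!')
    else:
        with MotionCommander(scf, default_height=0.5) as mc:
            mc.forward(0.5)
            mc.back(0.5)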

The results of the motion commander step by step guide.

We are planning to finish this step-by-step guide by adding the multi-ranger to the mix, continuing on the bouncing ball example. After that we will probably start some tutorials on how to use the swarming functionality before moving on to the firmware or the client.

Work in Progress

The tutorials are still a work in progress, so let us know on the forum, on the python library GitHub repository, or in a comment on this blogpost if you see anything wrong or if something is not very clear. This will improve the quality further so that other users can benefit as well. Also, once these step-by-step tutorials are finished, we can start working on video-based tutorials as well.

Remember, it is possible to contribute your own fixes (or tutorials) to our repositories if you want to. It’s an open source project after all ;)

Modular robotics generally brings flexibility and versatility to robots. In theory, you could design a modular robot basically the way you want it to be, by simply adding or removing modules from the existing robot. Changing the robot configuration by adding more individuals generally increases the system redundancy, meaning that there are now probably many different ways to achieve a specific goal. From a naive point of view, more modules could imply more robustness in practice due to this redundancy. In fact, the system does get more robust, at the cost of becoming more complex and probably harder to control. Added to that, other issues arise when you take into account that your modular robot is flying, and how the physical properties and actuation scale as the number of modules grows.

In the GRASP Laboratory at the University of Pennsylvania, one of our focuses is to enable robots to achieve specific tasks. In this work we present ModQuad-DoF, a modular flying platform that enlarges the configuration space of modular flying structures based on quadrotors (Crazyflies) by applying a new yaw actuation method that relies on the desired roll angles of each flying vehicle. This research project is coordinated by Professor Mark Yim and led by Bruno Gabrich (PhD candidate).

Scaling Modular Robots

Scaling modular robots is a very challenging problem that usually limits the benefits of modularity. The sum of the performance metrics (speed, torque, precision etc.) of the individual modules usually does not scale at the same rate as the conglomerate's physical properties. In particular for ModQuad, saturation of individual motors increases as the structures become larger, leading to failure and instability. When conglomerate systems scale up in the number of modules, the moment of inertia of the conglomerate often grows faster than the thrust capability added by each module. For example, the increase in moment of inertia from adding a fifth module to four modules in a line can be approximated by the mass of the module times the square of its distance to the center (half the length of the structure). This quadratic increase gives us the intuition that the required yaw actuation grows faster than the actuation authority.

Yaw Actuation

An inherent characteristic of quadrotors is that their yaw is controlled by the drag moments from each propeller. For ModQuad, as more modules are docked together, decreased yaw controllability is noticed as the structure becomes larger. In a line configuration, the structure’s inertia grows quadratically with the distance of each module to the structure’s center of mass, while the drag moments produced scale only linearly with the number of modules.
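To make the scaling concrete: for n modules of mass m spaced a distance d apart in a line, the yaw inertia grows roughly as I_z ≈ Σ_i m·r_i² ≈ m·d²·n³/12, while the available drag moment only grows as τ_drag ∝ n, so the achievable yaw acceleration τ_drag/I_z drops quickly as modules are added (these expressions are an order-of-magnitude sketch, not taken from the paper).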

The new yaw actuation method relies on the fact that each quadrotor is capable of generating an individual roll, enabled by our new cage design. By working in a coordinated manner, each Crazyflie can then generate structure moments from the moment arms provided by the propellers, given its roll and its distance from the structure’s center of mass.

Cage Design

The Crazyflie 2.0 is the platform chosen to provide thrust and attitude to the individual modules. The flying vehicle measures 92×92×29 mm and weighs 27 g, while its battery lasts around 4 minutes with the novel design proposed. In this work the cage acts as a pendulum relative to the flying vehicle. The quadrotor is joined to the cage through a one-DOF joint. The cages are made of lightweight materials: ABS for the 3D-printed connectors and joints, and carbon fiber for the rods.

Although the flying vehicle does not necessarily share the same orientation as the cage, the multiple connected cages do preserve the same orientation relative to each other. To allow such behavior, we used Neodymium Iron Boron (NdFeB) magnets as passive actuators to enable rigid cage connections. Docking is only allowed at the back and front faces of the modules, and each of these faces contains four magnets. These passive actuators measure 6.35 × 6.35 × 0.79 mm and have a bonding force of 1 kg.

Structure Flying Performance

Conclusions

ModQuad-DoF is a flying modular robotic structure whose yaw actuation scales with increasing numbers of modules. ModQuad-DoF has a one-DOF jointed cage design and a novel control method for the flying structure. Our new yaw actuation method was validated by conducting experiments in hovering conditions. We were able to fly two, four and six modules cooperatively in a line with yaw controllability and reduced loss of thrust. In future work we aim to explore the controllability of structures with more robots in a line configuration, and to explore different solutions for the desired roll angles. Possibly, with more modules in the structure, only a few would be required to roll in order to maintain a desired structure yaw. Given that, we could explore the control allocation for each module in a specific structure configuration, depending on its desired behavior. Furthermore, structures that are not constrained to a line will also be tested using the controller proposed in this work as a basis.

Detailed Video Explanation (ICRA 2020)

This work was developed by:

Bruno Gabrich, Guanrui Li, and Mark Yim

Additional resources at:

https://www.modlabupenn.org/
https://www.grasp.upenn.edu/

It has been about a month since the AI-deck became available in Early Access, and there are now quite a few of you that own an AI-deck yourselves. We have a new development we would like to share: we previously thought we had selected a gray-scale image sensor. However, it came to our attention that the camera actually contains a color image sensor, which is pretty obvious in hindsight on a second viewing of the video presented in this blogpost (thanks to the PULP project at ETH Zurich for letting us know!).

A color image from the AI-deck

This came as a bit of a surprise, but a color camera also adds some new possibilities, like making the Crazyflie follow an orange ball, or training CNNs to incorporate color in their classification as well. The only thing is that it requires an extra preprocessing step in order to retrieve the color image, which is explained in the next section.

Demosaicing

Essentially all CMOS image sensors are gray-scale by definition. In order to retrieve color from a scene, manufacturers add a Bayer filter on top of the image sensor, so that each pixel only receives red, green or blue light. This color filter array does not need to be RGB, it can use all kinds of colors, but here we will only talk about the Bayer filter. If the pattern of the filter is known, the pixels that correspond to a certain color can be interpolated with each other in order to fill in the gaps in between. This process is called demosaicing, and it creates the RGB channels that are combined into a color image.

Process of demosaicing with a Bayer filter

Currently we have only implemented a simple nearest-neighbor interpolation scheme for demosaicing, which is fine for demonstration purposes but is not the best technique out there. Such a simple interpolation is not very ‘edge and detail’ aware and can therefore cause artifacts, like the Moiré effects seen below. Anyway, we are still experimenting with how to get a better image and how to translate that to all the examples in the AI-deck example repository (see this issue if you would like to follow or take part in the discussion).
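For reference, nearest-neighbor demosaicing of a raw Bayer frame can be sketched in a few lines of numpy. This assumes an RGGB pattern and even image dimensions, and is only meant to illustrate the idea, not the exact code running on the GAP8:

# Nearest-neighbor demosaicing for an RGGB Bayer pattern (illustrative sketch)
import numpy as np

def demosaic_nearest(raw):
    # raw: 2D Bayer image (RGGB). Returns an HxWx3 RGB image.
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0, 'expects even image dimensions'
    rgb = np.zeros((h, w, 3), dtype=raw.dtype)
    # Each 2x2 cell is [R G; G B]; replicate each sample over the whole cell
    for dy in range(2):
        for dx in range(2):
            rgb[dy::2, dx::2, 0] = raw[0::2, 0::2]   # R
            rgb[dy::2, dx::2, 1] = raw[0::2, 1::2]   # one of the two G samples
            rgb[dy::2, dx::2, 2] = raw[1::2, 1::2]   # B
    return rgb

raw = np.zeros((240, 320), dtype=np.uint8)   # stand-in for a captured Bayer frame
color = demosaic_nearest(raw)
# A gray-scale image can then be obtained with a standard luminance mix:
gray = 0.299 * color[:, :, 0] + 0.587 * color[:, :, 1] + 0.114 * color[:, :, 2]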

Moiré effect

So technically, once we have the color image, it can be converted to a gray-scale image which can be used for the examples as-is. However, there is a reduction in quality, since the full pixel resolution is not used to obtain the gray-scale image. We are currently discussing whether it would be useful to get the gray-scale version of this camera and make it available as well, so let us know if you would be interested!

Feedback and Early Access

Like we said before, there are now quite a few of you out there that have an AI-deck in your possession. As it is in Early Access, the software part is still in full development. However, since we have not received any negative feedback from you, we assume that everything is fine and peachy!

Just kidding ;) We know that the AI-deck is quite a challenging deck to work with, and we know for sure that many of you have questions or something to say about working with it. Buying an Early Access product also comes with a little bit of responsibility: the more feedback we get from you, the better we can tailor the software and support to help you and others, thereby moving the product forward and getting it out of the Early Access phase.

So please let us know if you are having any trouble getting started by posting a thread on the forum (we have a special AI-deck group!), or if there are any issues with the examples or the documentation in the AI-deck repo. We, and also our collaborators at GreenWaves Technologies (makers of the GAP8 chip), are more than happy to help out. That is what we are here for :)