[Update] Faster link time for Qt WebAssembly

The Emscripten developers have begun integrating the new LLVM WebAssembly backend into mainstream Emscripten. Although it is still not the default, it is now much easier to install and use, as you no longer need to build LLVM yourself.


The gist of it is:

emsdk install latest-upstream
emsdk activate latest-upstream

According to the linked blog, there are code size benefits as well as link time speed-ups.

The faster link time mostly affects application builds.

Of course, you will need to rebuild Qt for WebAssembly to use this, and you will need to configure it with the Qt WebAssembly-specific option:

-device-option WASM_OBJECT_FILES=1

After that, you just need to run qmake as normal.

One note: You will need to remove the line

 -s "BINARYEN_TRAP_MODE='clamp'"

from mkspecs/wasm-emscripten/qmake.conf, as the upstream LLVM WebAssembly backend does its own overflow clamping and does not support the BINARYEN_TRAP_MODE argument.

Qt Creator 4.9.2 released

We are happy to announce the release of Qt Creator 4.9.2!

This release fixes some smaller bugs. Please find the details in our change log.

Get Qt Creator 4.9.2

The opensource version is available on the Qt download page under “Qt Creator”, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.9.2 is also available as an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.9.2 released appeared first on Qt Blog.

Qt Creator 4.10 Beta2 released

We are happy to announce the release of Qt Creator 4.10 Beta2!

Most notably, we fixed a regression in the signing options for iOS devices, and a bug where the “Build Android APK” step of existing Android projects was not restored.
As always you find more details in our change log.

Get Qt Creator 4.10 Beta2

The opensource version is available on the Qt download page under “Pre-releases”, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.10 Beta2 is also available under Preview > Qt Creator 4.10.0-beta2 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.10 Beta2 released appeared first on Qt Blog.

Announcing the Qt Automotive Suite 5.13

We’re elated to announce the latest edition of our Qt Automotive Suite 5.13, a unified HMI toolchain and framework for building the next generation of digital cockpits.

Three years ago, we introduced the Qt Automotive Suite to solve key challenges in creating HMIs for digital cockpits in automobiles [1][2][3]. Qt Automotive Suite offers a unique end-to-end HMI solution with a single technology: combining world-class tooling and automotive-specific HMI software components with Qt optimized for embedded devices.

In this release, we focused on a few selected areas to improve and expand the offering. In the market, we continue to see steady adoption of our technology by top OEMs and Tier 1 suppliers around the world. We will see volume production of vehicles powered by the Qt Automotive Suite rolling out as early as 2020.

The Base

Qt Automotive Suite 5.13 is based on the new Qt 5.13 and sits on top of its embedded offering – Qt for Device Creation. Qt 5.13 brings new features, updates, bug fixes, and other improvements. For more information, refer to the official release announcement at: https://blog.qt.io/blog/2019/06/19/qt-5-13-released/.

We have also improved the tooling to make designing, developing, and deploying software with Qt more efficient for designers and developers. One such improvement comes via Qt 3D Studio: with the version 2.4 release, Qt 3D Studio and its runtime are another component of the Qt Automotive Suite.

Here’s an example of a full 3D Instrument Cluster to show you what’s possible with Qt 3D Studio:


Instrument cluster built with Qt 3D Studio by Siili Solution (For more details, see https://www.behance.net/gallery/66707281/QT-Altair)

The Components

Our Reference UI, Neptune 3 UI, can now use Qt 3D Studio and its runtime to render 3D content. It’s also possible to switch between the new implementation based on the Qt 3D Studio Runtime and the previously available implementation based on Qt3D, which is still the default selection. This new scene loads assets directly from a Qt 3D Studio project, simplifying the task of incorporating changes from designers. The implementation still supports the existing “remote control” features to move some objects in the 3D scene with simple QML statements. We use this type of feature to control car doors, sunroof, and the trunk of our 3D car model. Additionally, all of the APIs for the UI have been retained.

We optimized the 3D car model itself to improve its loading time. It’s possible to switch between three levels of complexity in the model, for optimal use of hardware capabilities. The UI controls for all these new switches are provided in the new “3D options” panel, in the Vehicle app.

And we’re still not done with 3D just yet! In addition to the existing 2D gauges, the Neptune 3 UI cluster can now use the Qt 3D Studio runtime as well, with its own Qt 3D Studio-designed content. You can also switch between the 2D and 3D implementations.


The new 3D gauges and 3D car model follow the color accent set in the General Settings.


3D runtime options between Qt 3D and Qt 3D Studio


All applications follow the color theme managed by System UI

The cluster UI now integrates with Qt Safe Renderer. This integration is focused on the development aspects, hence we have added some “tricks” to improve developer productivity in these use cases, even on standard hardware that typically doesn’t provide safe rendering. On the desktop, we use a separate application with its own top-level window that renders content using Qt Safe Renderer. This window is placed underneath the cluster window. If the Neptune 3 UI process stops or crashes, this Qt Safe Renderer window remains on the desktop and keeps rendering, fetching values from the new Vehicle Data service. On embedded targets, we have a separate process that takes over rendering if Neptune 3 UI, as the main UI, stops working. This implementation leverages all parts of Qt Safe Renderer 1.1.

We promote the development of automotive and embedded HMIs as separate apps running in multiple processes. In this release, we improved application installation in Neptune 3 UI: it now uses a new configuration, as applications are no longer saved in the temporary folder. In the System Dialog, the application control has been extended to provide controls and settings for life-cycle states. It’s now possible to define which applications should be launched automatically on startup and which ones should be relaunched if they stop, for example in the event of a crash.


System UI Apps configuration

If you look at the list of applications, you may notice that most of Neptune 3 UI now consists of separate applications. You can stop all of them, if necessary. Most importantly, this improvement makes it possible for you to develop and test each part separately and independently, even across different teams.

In our previous release, we introduced the UI harness; with this release, Neptune 3 UI introduces new wrappers that allow developers to run these harnesses directly from Qt Creator, though you can also run the wrappers independently. There are two harnesses provided for reference: a cluster harness and a vehicle harness. If you run the cluster harness, it starts the cluster in QML Live when QML Live is available; otherwise, the cluster runs with “qmlscene”. The harness is a key differentiator, as it illustrates how our architecture allows parts of Neptune 3 UI to be decomposed and run separately, using the same code base as when the UI runs as a whole.

We mentioned the new Vehicle Data service above. In this release, we introduced it as an additional out-of-process service, based on our old Remote Settings Server. This is a key improvement in the middleware infrastructure, which is now more tailored to the use of out-of-process services.

While Neptune 3 UI runs, the values displayed in the cluster are simulated and change on their own. This is a nifty feature for demos, but it is also very useful in day-to-day development, as it helps HMI developers test the implementation in various states of the business logic. The simulation is powered by the Simulation Engine introduced in the last release as part of Qt IVI. Just like the Neptune 3 UI, this simulation is written in QML. In this architecture, the Vehicle Data service is actually a fully autogenerated simulation server that feeds data to the HMI via the same API as a real middleware service. The QML code running this simulation can model the business logic of many real-life use cases. This is beneficial for HMI development, particularly in the early prototyping phase and other development phases.

The Simulation Engine in Qt IVI has also been extended to support the new middleware infrastructure. This includes the use of Qt Remote Objects as a transport layer. In combination with the auto-generated code, the new middleware infrastructure provides a complete environment for HMI development separated from the middleware development.

Improvements in the middleware infrastructure also include a revamped Companion app. Initially developed for testing, it’s now provided with Neptune 3 UI as a new app. This app implements a popular automotive use case: an in-vehicle companion app that runs on a mobile device and directly connects to the head unit. The app has an improved UI which is closer to real-world use cases.


Companion mobile app

The Companion app uses the same API for middleware as Neptune 3 UI itself. Both the API and backend plugins are shared, which saves both time and effort.

Last, but not least, Neptune 3 UI now uses Qt Application Manager’s Intent Management, introduced in the last release. This feature is used to switch between the music sources: the Tuner and the Media Player.

The Tools

We have released version 2.11.0 of GammaRay, our Qt application monitoring tool. GammaRay allows you to observe the behavior and data structures of Qt code inside your program, live at runtime.

GammaRay 2.11.0 includes a new inspection tool for Qt’s event handling, providing deeper insights into the inner workings of your application. Apart from analyzing events and their properties as they occur, the event monitor visualizes event propagation during input handling, for both Qt Quick and Qt Widgets.


Additionally, the event monitor now provides statistics on how often each type of event occurred, as well as fine-grained filtering options to drill down into the events that interest you, even in a huge dataset.


One major new feature is the network operation inspector, which allows you to observe the HTTP operations triggered via QNetworkAccessManager and helps you optimize network interactions, identify QNetworkReply objects that may have leaked, and ensure that all operations are encrypted.


GammaRay 2.11.0 also supports more data types, such as the QJson* classes, adds a new thread-affinity checker to the problem reporter, and is compatible with the newly released Qt 5.13. Under the hood, we’ve also made some performance improvements, particularly to GammaRay’s responsiveness when inspecting large and/or busy applications.

QML Live is both a local and a remote Qt Quick live-reloading system that allows you to change your QML UI source code and view the result in real time.


QML Live in action

In this release, QML Live supports multi-process rendering. This is a substantial change that lets you start multiple runtimes to observe changes in more than one top-level HMI view.

We have also resumed active development of the Qt Creator plugin for Qt Application Manager. In addition to the internal improvements, the plugin now has a simplified UI and a bit more automation, to reduce the number of configuration steps.


In this release, we’ve invested significant effort into improving and rewriting large parts of our documentation. We’ve tried to focus on explaining the key concepts and approaches that we use, in the hope that this content fills in the gaps and helps new users connect the dots: https://doc.qt.io/QtAutomotiveSuite/index.html.

Looking Ahead

We are looking to add hardware cursor key navigation, provide basic support for voice assistants, support background application services, bundle packages, integrate the Deployment Server into the Qt Creator workflow, and automatically populate the Deployment Server with apps from a specific repository.

Furthermore, we continue to improve support for multi-domain architectures on a single SoC, embracing webOS, Android Automotive OS, AGL, and GENIVI, and bringing inter-domain interactivity on top.


Digital cockpit by Siili Solution (https://www.behance.net/gallery/79438455/Enhancing-the-driving-experience)


With Qt Automotive Suite 5.13, we’ve taken more steps toward our goal: unleashing the power of Qt to make the development of appealing UX easy and fast. We want to help our customers transform their visions and concepts into HMIs that you can experience at car dealerships near you.

One of these days you might see the next version of this appealing prototype made with Qt:


Peugeot E-Legend digital cockpit (https://www.peugeot.co.uk/concept-cars/e-legend/)

Finally, big thanks to our partners, KDAB and Luxoft, for the great collaborative work and contributions. We are also very grateful to our customers and prospects for their continuous feedback and open discussions.

If you have questions about Qt Automotive Suite, feel free to contact us.

The post Announcing the Qt Automotive Suite 5.13 appeared first on Qt Blog.

How to comply with the upcoming requirements in Google Play

by Eskil Abrahamsen Blomfeldt (Qt Blog)

Starting on August 1st, Google Play will no longer accept new applications or application updates without a 64-bit version (unless of course there is no native code at all). For Qt users, this means you have to build an additional APK that contains the 64-bit binaries.

Qt has shipped 64-bit binaries for Android since Qt 5.12.0, so complying with the new requirement is technically no big deal. But after discussing with users, I see that it is not clear to everyone exactly how to set up an app in Google Play that supports multiple architectures at once.

This call for help, combined with the fact that I am currently setting up a fresh Windows work station, made for a golden opportunity to look at Qt for Android app development in general. In this blog, I will start with a clean slate and show how to get started on Android, as well as how to publish an app that complies with the Google Play requirements.

I will

  • guide you through the installation steps needed to get a working environment,
  • describe the process of building an application for multiple architectures,
  • and show you how to upload your binaries to Google Play.

The first few parts might be familiar to many of you, so if you get bored and want to hear about the main topic, feel free to skip right to Step 4.

A note about SDK versions

The Android SDK is itself under heavy development, and quite often it isn’t backwards compatible, causing problems with our integration in Qt. We react as quickly as we can to issues that arise from changes or regressions in the SDK, but a general rule of thumb is to hold off on upgrading to the latest and greatest versions of the Android tools until we have had a chance to adapt to any incompatibilities in Qt.

While there have been some issues on other platforms as well, the majority of the problems we have seen have been on Windows. So if you are on this host system, be extra careful to check for known-good versions before setting up your environment.

We are currently recommending the use of the following tools together with Qt 5.13.0:

  • Android build tools version 28
  • Android NDK r19
  • Java Development Kit 8

If you do bump into some problems, please make sure to check our known issues page to see if there is any updated information.

Now for the details on where and how to get the right versions of everything.

Step 1: Installing the JDK

Android is primarily a Java-based platform, and while you can write your Qt applications entirely in C++ and/or QML, you will need the Java Development Kit in order to compile the files that make the integration possible.

Note that there is an incompatibility between Android’s SDK Manager tool and the later versions of Oracle’s JDK, making the latest JDK versions unusable together with the Android environment. To work around this, we recommend that you download JDK version 8 for use with Android.

You may use the official binaries from Oracle, or an alternative, such as the AdoptOpenJDK project.

Download the installer, run it, and install the JDK in the default location.

Step 2: Setting up the Android environment

The second step is getting the actual Android development environment. Start by downloading and installing Android Studio. Scroll past the different “beta” and “canary” releases, and you will find the latest stable release.


Once Android Studio has been installed, you can use it to install the “SDK Platform” for you. This is the actual collection of Java classes for a particular Android distribution. When you start Android Studio for the first time, it should prompt you to install the SDK Platform. You can safely use the latest version of the SDK, platform 29, which is the suggested default.

In addition to the SDK, we also need to install the NDK. This is the development kit used for cross-compiling your C++ code to run on Android. As mentioned above, we will use Android NDK r19c and not the latest release, since there are issues with Android NDK r20 that cause compilation errors. The issue will be addressed in Qt 5.13.1 and Qt 5.12.5, so once you start using those, upgrading to Android NDK r20 will be possible.

And as a final step, we need to make sure that we are using version 28.0.3 of the Android build tools rather than the latest version. Note that this is only an issue on Windows hosts.

From the starting dialog box of Android Studio, click on Configure and then select SDK Manager. Go to the SDK Tools tab and make sure Show Package Details is checked. Under the Android build tools, make sure you deselect 29.0.0 and select 28.0.3 instead.


This will uninstall the non-functioning version of the build tools and install the older one. Click Apply to start the process, and when it is done you will have installed a functioning Android environment.

Step 3: Install Qt

For this guide, we will be using Qt 5.13.0. If you haven’t already, start by downloading the online installer tool from your Qt Account.

When you run the installer, make sure you select the arm64-v8a and armv7a target architectures. These are the technical names for, respectively, the 64-bit and 32-bit versions of the ARM family of processors, which are the most commonly used processors on Android devices.


Note: For this example in particular, we will also need Qt Purchasing, since it contains the application I am planning to use as demonstration. This can also be selected from the same list.

When Qt is finished installing, start Qt Creator and open the Options. Under Devices, select the Android tab and select the directories where you installed the different packages in the previous steps.


If everything is set up correctly, Qt Creator will show a green check mark, and you will be ready to do Android development with Qt.

Step 4: Setting up project in Qt Creator

For this example, I will use the Qt Hangman example. This is a small example we made to show how to implement in-app purchases in a cross-platform way.

First we open the example in Qt Creator, which can be done from the Welcome screen. Once it has been opened, Qt Creator will ask us to select which Qt versions we want to use for building it.


Select both the 64-bit and 32-bit versions of Qt and click Configure Project.

In order to comply with the additional requirements in Google Play, we want to create two APK packages: one for 32-bit devices and one for 64-bit devices. We need to configure each of these separately.


This screenshot shows an example setup for the 32-bit build. Important things to notice here:

  • Use a different shadow build directory for each of the builds.
  • Make sure you select the Release configuration.
  • You should also tick the Sign package checkbox to sign your package, otherwise the Google Play store will reject it.

With the exception of the build directory, the setup for the 64-bit build should be the same. Select the 64-bit kit on the left-hand side and make the equivalent adjustments there.

Step 5: Preparing the manifest

In addition, the two packages will need identical AndroidManifest.xml files, except for one detail: the version code of the two has to differ. The version code can be pretty much anything you choose, as long as you keep in mind that when an APK is installed on a device from the store, the store will select the APK with the highest version code. As Qt user Fabien Chéreau pointed out in a comment to a bug report, you therefore typically want to set the version code of the 64-bit version higher than that of the 32-bit version, so that a device which supports both will prefer the 64-bit one.

As Felix Barz pointed out in the same thread, this can be automated in the .pro file of the project. Here is my slightly modified version of his code:

defineReplace(droidVersionCode) {
        segments = $$split(1, ".")
        for(segment, segments): vCode = "$$first(vCode)$$format_number($$segment, width=3 zeropad)"

        contains(ANDROID_TARGET_ARCH, arm64-v8a): \
            suffix = 1
        else:contains(ANDROID_TARGET_ARCH, armeabi-v7a): \
            suffix = 0
        # add more cases as needed

        return($$first(vCode)$$first(suffix))
}

VERSION = 1.2.3
ANDROID_VERSION_CODE = $$droidVersionCode($$VERSION)

This neat trick (thanks, Felix!) will convert the application’s VERSION to an integer and append a new digit, on the least significant end, to signify the architecture. So for version 1.2.3 for instance, the version code will be 0010020030 for the 32-bit package and 0010020031 for the 64-bit one.
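To make the digit layout concrete, the same scheme can be sketched in plain C++. This helper is purely illustrative and not part of the project or of qmake; the function name and suffix convention are assumptions for the example.

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Illustrative re-implementation of the qmake logic above: zero-pad each
// dot-separated version segment to three digits, then append one digit
// identifying the target architecture (0 = 32-bit ARM, 1 = 64-bit ARM).
std::string droidVersionCode(const std::string &version, int archSuffix)
{
    std::istringstream in(version);
    std::ostringstream out;
    std::string segment;
    while (std::getline(in, segment, '.'))
        out << std::setw(3) << std::setfill('0') << std::stoi(segment);
    out << archSuffix;
    return out.str();
}
```

With version 1.2.3, this yields "0010020030" for the 32-bit package and "0010020031" for the 64-bit one.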

When you generate an AndroidManifest.xml using the button under Build APK in the project settings, it will automatically pick up this version code from the project. Once you have done that and edited the manifest to set your application’s package name and title, the final step is to build the packages: first do a build with one of the two kits, then activate the other kit and build again.

When you are done, you will have two releasable APK packages, one in each of the build directories you set up earlier. Relative to the build directory, the package will be in android-build\build\outputs\apk\release.


Note that for a more efficient setup, you will probably want to automate this process. This is also quite possible, since all the tools used by Qt Creator can be run from the command line. Take a look at the androiddeployqt documentation for more information.

Step 6: Publish the application in Google Play

The Google Play publishing page is quite self-documenting, and there are many good guides out there on how to do this, so I won’t go through all the steps for filling out the form. In general, just fill out all the information it asks for, provide the images it needs, and make sure all the checkmarks in the left side bar are green. You can add all kinds of content here, so take your time with it. In the end, it will have an impact on how popular your app becomes.

Once that has been done, you can create a new release under App Releases and upload your APKs to it.

One thing to note is that the first time you do this, you will be asked if you want to allow Google Play to manage your app signing key.


For now, you will have to opt out of this. In order to use this feature, the application has to be in the new “Android App Bundle” format. This is not yet supported by Qt, but we are working on supporting it as well. In fact, Bogdan Vatra from KDAB (who is also the maintainer of the Android port of Qt) has already posted a patch which addresses the biggest challenge in getting such support in place.

When we do get support for it, it will make the release process a little bit more convenient. With the AAB format, Google Play will generate the optimized APKs for different architectures for us, but for now we have to do this manually by setting up multiple kits and building multiple APKs, as I have described in this tutorial.


When the two APKs have been uploaded to a release, you should see a listing such as this: two separate APK packages, each covering a single native platform. By expanding each of the entries, you can see what the “Differentiating APK details” are. These are the criteria used for selecting one APK over the other when a device downloads the app from the Google Play Store. In this case, the differentiating detail should be the native platform.

And that is all there is to it: creating and releasing a Qt application in Google Play with both 32-bit and 64-bit binaries. When the APKs have been uploaded, you can hit Publish and wait for Google Play to do its automated magic. And if you do have existing 32-bit apps in the store at the moment, make sure you update them with a 64-bit version well before August 2021, as that is when non-compliant apps will no longer be served to 64-bit devices, even if those devices also support 32-bit binaries.

Until then, happy hacking and follow me on Twitter for irregular updates and fun curiosities.

The post How to comply with the upcoming requirements in Google Play appeared first on Qt Blog.

Cutelyst 2.8.0 released

Cutelyst, a Qt/C++ web framework, got a new release!

This release took a while to get out because I wanted to fix some important stuff, but time is short: I've been working on polishing my UPnpQt library and on the yet-to-be-released FirebaseQt and FirebaseQtAdmin (which are being used in a mobile app and a REST/web app built with Cutelyst). The latter is working quite well, although at the moment it depends on a Python script to get the Google token; luckily, that's only a temporary waste of 25 MB of RAM every 45 minutes.

Back to the release: thanks to Alexander Yudaev, it now has CPack support, and 顏子鳴 also fixed some bugs and added a deflate feature to RenderView, FastCGI, and H2.

I'm also very happy we now have more than 500 stars on GitHub :)

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.8.0

Qt 3D Studio 2.4 Released

We are happy to announce the Qt 3D Studio 2.4 release is now available via the online and offline installers. Here’s a quick summary of the new features and functions in 2.4. For detailed information about the Qt 3D Studio, visit the online documentation page or see the older blog posts.

Changes in the runtime

One of the biggest changes in the 2.4 release is removing the Qt 3D dependency from the Qt 3D Studio runtime and returning to the architecture we used in the 1.x releases. The main reason for this change is performance, especially in embedded environments; we have seen quite significant performance differences between the runtimes, even after the optimizations done in the Qt 3D Studio 2.3 runtime. For details about the performance improvements, please see the separate blog post.

The change in the 3D runtime does not require any code changes other than updating the import statements (e.g. in QML, import QtStudio3D.OpenGL 2.4 instead of import QtStudio3D 2.3); after that, recompiling with the new Qt 3D Studio 2.4 is enough. Please refer to the Getting Started documentation.

When opening a presentation created with an earlier version of Qt 3D Studio, you might also get a notification that “Some custom materials, effects, and behaviors may not work correctly”. This is related to an updated presentation file format, which now also defines alpha for colors, i.e. colors are now of type vec4 instead of vec3. Saving the presentation with the newer version removes the notification.

Dynamic object creation

The Qt 3D Studio C++ API now also supports dynamic object creation. This feature is handy in use cases where you have to create new objects in the scene, for example based on sensor values, or where the scene contains a predetermined number of certain objects.

Dynamic Elements Example


Dynamically created objects can be new instances of objects included in the presentation, or the object geometry can be created at runtime. Object materials can also be defined dynamically. Please see the dynamicelement example in the runtime installation folder.

Vertex Shaders

Version 2.4 also enables using vertex shaders in custom materials. Vertex shaders can be used to transform the attributes of vertices, i.e. the original objects can be distorted or reshaped in any manner. For more information about creating custom materials, please refer to the documentation.

Getting started

Qt 3D Studio 2.4 is available through the Qt online installer under the Developer and Designer Tools section. We also provide standalone offline installers which contain all you need to start designing Qt 3D Studio user interfaces. The online installer also contains a pre-built runtime for Qt 5.12, which is needed for developing Qt applications that use a Qt 3D Studio UI. The Qt online installer and offline installers can be obtained from the Qt Download page, and commercial license holders can find the packages in their Qt Account. Binary packages are available for Windows, Mac, and Linux.

If you are targeting embedded systems running e.g. an RTOS, you need to build the Qt 3D Studio runtime component for the target operating system. The Qt 3D Studio runtime can be found in the git repository, and building instructions can be found in the documentation.



The post Qt 3D Studio 2.4 Released appeared first on Qt Blog.

Little Trouble in Big Data – Part 2

In the first blog in this series, I showed how we solved the original problem of how to use mmap() to load a large set of data into RAM all at once, in response to a request for help from a bioinformatics group dealing with massive data sets on a regular basis. The catch in our solution, however, was that the process still took too long. In this blog, I describe how we solve this, starting with Step 3 of the Process I introduced in Blog 1:

3. Fine-grained Threading

The original code we inherited was written on the premise that:

  1. Eigen uses OpenMP to utilize multiple cores for vector and matrix operations.
  2. Writing out the results of the Monte Carlo simulation is time-consuming and was therefore put into its own thread by way of OpenMP, with an OpenMP critical section doing the actual analysis.

Of course, there were some slight flaws in this plan.

  1. Eigen’s use of OpenMP covers only some very specific algorithms built into Eigen itself, none of which this analysis code was using, so that was useless. Eigen does make use of vectorization, however, which is good and can in ideal circumstances give a factor-of-4 speedup compared to a simplistic implementation. So we wanted to keep that part.
  2. The threading for writing results was, shall we say, sub-optimal. Communication between the simulation thread and the writer thread was by way of a lockless list/queue they had found on the interwebs. Sadly, this was implemented with a busy spin loop which just locked the CPU at 100% whilst waiting for data to arrive once every n seconds or minutes, which means it was burning cycles for no good reason. The basic outline of the algorithm looks something like this:
const std::vector<unsigned int> colIndices = {0, 1, 2, 3, /* ... */};
const std::vector<unsigned int> markerIndices = randomise(colIndices);

for (unsigned int i = 0; i < maxIterations; ++i) {
    for (unsigned int j = 0; j < numCols; ++j) {
        const unsigned int marker = markerIndices[j];
        const auto col = data.mappedZ.col(marker);

        output += doStuff(col);
    }

    if (i % numIterations == 0)
        writeResults(output); // hand the results over to the writer thread
}
So, what can we do to make better use of the available cores? For technical reasons related to how Markov Chain Monte Carlo works, we can neither parallelize the outer loop over iterations nor the inner loop over the columns (SNPs). What else can we do?

Well, recall that we are dealing with large numbers of individuals – 500,000 of them in fact. So we could split the operations on these 500k elements into smaller chunks and give each chunk to a core to process and then recombine the results at the end. If we use Eigen for each chunk, we still get to keep the SIMD vectorization mentioned earlier. Now, we could do that ourselves but why should we worry about chunking and synchronization when somebody else has already done it and tested it for us?

This was an ideal chance for me to try out Intel’s Threading Building Blocks library, TBB for short. As of 2017 it is available under the Apache 2.0 license and so is suitable for most uses.

TBB has just the feature for this kind of quick win in the form of its parallel_for and parallel_reduce template helpers. The former performs the map operation (applies a function to each element in a collection where each is independent). The latter performs the reduce operation, which is essentially a map operation followed by a series of combiner functions, to boil the result down to a single value.

These are very easy to use so you can trivially convert a serial piece of code into a threaded piece just by passing in the collection and lambdas representing the map function (and also a combiner function in the case of parallel_reduce).

Let’s take the case of a dot (or scalar) product as an example. Given two vectors of equal length, we multiply them together component-wise then sum the results to get the final value. To write a wrapper function that does this in parallel across many cores we can do something like this:

const size_t grainSize = 10000;

double parallelDotProduct(const VectorXf &Cx, const VectorXd &y_tilde)
{
    const unsigned long startIndex = 0;
    const unsigned long endIndex = static_cast<unsigned long>(y_tilde.size());

    auto apply = [&](const blocked_range<unsigned long> &r, double initialValue) {
        const long start = static_cast<long>(r.begin());
        const long count = static_cast<long>(r.end() - r.begin());
        const auto sum = initialValue + (Cx.segment(start, count).cast<double>().array() *
                                          y_tilde.segment(start, count).array()).sum();
        return sum;
    };

    auto combine = [](double a, double b) { return a + b; };

    return parallel_reduce(blocked_range<unsigned long>(startIndex, endIndex, grainSize), 0.0,
                            apply, combine);
}

Here, we pass in the two vectors for which we wish to find the scalar product, and store the start and end indices. We then define two lambda functions.

  1. The apply lambda simply uses the operator * overload on the Eigen VectorXf type and the sum() function to calculate the dot product of the vectors for the subset of contiguous indices passed in via the blocked_range argument. The initialValue argument must be added on. This is just zero in this case, but it allows you to pass in data from other operations if your algorithm needs it.
  2. The combine lambda then just adds up the results of each of the outputs of the apply lambda.

When we then call parallel_reduce with these two functions, and the range of indices over which they should be called, TBB will split the range behind the scenes into chunks based on a minimum size of the grainSize we pass in. Then it will create a lightweight task object for each chunk and queue these up onto TBB’s work-stealing threadpool. We don’t have to worry about synchronization or locking or threadpools at all. Just call this one helper template and it does what we need!

The grain size may need some tuning to get optimal CPU usage based upon how much work the lambdas are performing but as a general rule of thumb, it should be such that there are more chunks (tasks) generated than you have CPU cores. That way the threadpool is less likely to have some cores starved of work. But too many and it will spend too much time in the overhead of scheduling and synchronizing the work and results between threads/cores.
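The chunk/apply/combine structure and the grain size trade-off are easy to experiment with outside of C++. Here is a stdlib-only Python sketch of the same idea — purely illustrative, not the project's code: the real implementation uses Eigen and TBB, and in CPython threads won't actually speed up pure-Python arithmetic, but the chunking logic is identical.

```python
from concurrent.futures import ThreadPoolExecutor

GRAIN_SIZE = 10000  # minimum chunk length, analogous to TBB's blocked_range grain size


def parallel_dot_product(xs, ys, grain=GRAIN_SIZE):
    # Split [0, len) into contiguous chunks of at most `grain` elements each;
    # more chunks than cores keeps the pool from starving.
    ranges = [(i, min(i + grain, len(xs))) for i in range(0, len(xs), grain)]

    # The "apply" step: dot product over one contiguous sub-range.
    def apply(rng):
        start, end = rng
        return sum(x * y for x, y in zip(xs[start:end], ys[start:end]))

    # The "combine" step: sum the partial results from all chunks.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(apply, ranges))
```

Lowering `grain` produces more, smaller tasks (more scheduling overhead); raising it produces fewer, larger ones (risk of idle cores) — exactly the tuning described above.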

I did this for all of the operations in the inner loop’s doStuff() function and for some others in the outer loop which do more work across the large (100,000+ element) vectors and this yielded a very nice improvement in the CPU utilization across cores.

So far so good. In the next blog, I’ll show you how we proceed from here, as it turns out this is not the end of the story.

The post Little Trouble in Big Data – Part 2 appeared first on KDAB.

Qt for WebAssembly: Multithreading

by Morten Johan Sørvig (Qt Blog)

As of the 5.13 release, Qt for WebAssembly now has experimental support for multithreading. This has much of the same benefits as threads on other platforms: the application can offload work to secondary threads to keep the main thread responsive, and make use of multiple CPU cores.

Secondary threads on the web platform have one additional benefit in that they can block without unwinding the stack. This is not true for the main thread, which must return control to the browser after processing an event or risk having the browser show the “page is unresponsive” notification.

WebAssembly multithreading has been possible but disabled by default for some time. Browsers have now begun enabling it again; first out is Chrome 67.

Old New Thing

These days it’s not too often that we get to introduce threads as a feature, so I took the opportunity to touch up Qt’s classic Mandelbrot threading example.


Upgrades include now using all CPU cores instead of just one (using QThread::idealThreadCount() to control the amount of worker threads). A quick peek at the task manager confirms this:


Rest assured, CPU usage does return to 0% when the application is idle.

An example build of the demo and source code are available. The build should run on desktop Chrome, and also on Firefox if you enable javascript.options.shared_memory on the about:config page.


Next, let’s look at some of the practicalities of working with multithreaded builds of Qt with Emscripten. If you have experience in this area you’d like to share, please chime in in the comments section below.

Emscripten is doing most of the heavy lifting here, and provides support for the pthreads API, implemented using Web Workers and SharedArrayBuffer. Qt then reuses its existing unix QThread implementation.

Enabling

Threads are disabled in the default Qt for WebAssembly build. To enable them, build Qt from source and configure with the -feature-thread flag. We’ve found that Emscripten 1.38.30 works well for threaded builds.

Threading-enabled binaries will not run on browsers with SharedArrayBuffer disabled; you may see error messages such as:

CompileError: wasm validation error: at offset 5711: shared memory is disabled

Error: WebAssembly.Module doesn’t parse at byte 5705: can’t parse resizable limits flags”

Main thread deadlocks

Calling QThread::wait() (or pthread_join()) on the main thread may deadlock, since the browser will be blocked from servicing application requests, such as starting a new Web Worker for the thread we are waiting on. Possible workarounds include:

  • Pre-allocate Web Workers using QMAKE_WASM_PTHREAD_POOL_SIZE (maps to emscripten PTHREAD_POOL_SIZE)
  • Don’t join worker threads when it’s not needed (e.g. at app exit). Earlier versions of the Mandelbrot demo froze the tab on exit; this was solved by disabling application cleanup for the Qt build which powers it.
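As a concrete example of the first workaround, a .pro file could pre-allocate a small pool of workers up front. The value 4 below is only an illustration — size the pool to the number of threads your application actually starts:

```qmake
# Pre-start four Web Workers so thread creation doesn't have to wait
# on the browser (maps to emscripten's PTHREAD_POOL_SIZE).
QMAKE_WASM_PTHREAD_POOL_SIZE = 4
```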

Fixed Memory Size

Emscripten will normally increase the heap size as needed, starting from a small initial allocation. This is not yet supported for multithreaded builds, so a fixed memory size must be set. We’ve empirically determined that most (desktop) browsers limit this initial allocation to 1GB, and this is the size Qt sets by default. You can change the value by setting QMAKE_WASM_TOTAL_MEMORY (maps to emscripten TOTAL_MEMORY).

See the documentation or Wiki page for further info on Qt for WebAssembly, or ask below.

The post Qt for WebAssembly: Multithreading appeared first on Qt Blog.


KDAB at SIGGRAPH – 2019

KDAB is sharing the Qt booth at SIGGRAPH in Los Angeles. We’ll be showing some of our profiling and debugging tools as well as our latest QiTissue demo, a desktop application developed for Quantitative Imaging Systems (Qi) to help cancer researchers efficiently handle gigabytes of data (see more about that here).

We’ll also give a preview of the related nanoquill coloring app, which is due for release in time for the show.

Performance enhancing tools you can expect to see in action at SIGGRAPH are:

Meet us at SIGGRAPH, 2019!

The post KDAB at SIGGRAPH – 2019 appeared first on KDAB.

3D – Interactions with Qt, KUESA and Qt Design Studio, Part 1

This is the first in a series of blog posts about 3D and the interaction with Qt, KUESA™ and Qt 3D Studio, and other things that pop up when we’re working on something.

I’m a 3D designer, mostly working in Blender. Sometimes I come across interesting problems and I’ll try to share those here. For example, trying to display things on low-end hardware – where memory is sometimes limited, meaning every polygon and triangle counts – or where the renderer doesn’t do what the designer wants it to, that sort of thing. The problem that I’ll cover today is how to easily create a reflection in KUESA or Qt 3D Studio.

Neither KUESA nor Qt 3D Studio will give you free reflections. If you know a little about 3D, you know that true reflections require ray tracing, which OpenGL doesn’t do. So, I wondered if there would be an easy way to create this effect. I mean, all that a reflection is, is a mirror of an object projected onto a plane, right? So, could this be imitated?

To recreate this, I’d need to create an exact mirror of the object and duplicate it below the original, and have a floor that is partially transparent. I’ve created a simple scene to show you how this technique works – a scene with two cubes, a ground plane and a point light.

Here’s the result of this scene. It’s starting to look like something, but I want to compare it to a ‘real’ reflection.

For comparison, the above is a cube on a reflective, rough surface – showing the result using raytracing. You can see here the reflection is different from our example above – the main issue is that the reflection eventually fades out the further away it gets from the contact point. 

How to resolve this? This can be mimicked by creating an image texture for the alpha that fades out the model towards the top (or rather the bottom) of the reflection. I can also further enhance the illusion by ensuring that the floor is rough – allowing the texture of the surface to assist the illusion of a reflection.

Another difference between the shots is the blurriness on the edge of the mesh – this could be approximated by creating duplicates of the mesh and for each one, increasing the size and reducing the opacity. Depending on the complexity of the model, this may add too many polygons to render, while only adding a subtle effect.

So, given that this is a very simple example and not one that would translate well to something that a client might ask for, how can I translate this into a more complex model, such as the car below? I’ll chat about that in the next post.

The post 3D – Interactions with Qt, KUESA and Qt Design Studio, Part 1 appeared first on KDAB.

What's the difference between PyQt5 & PySide2? What should you use, and how to migrate.

If you start building Python applications with Qt5 you'll soon discover that there are in fact two packages which you can use to do this — PyQt5 and PySide2.

In this short guide I'll run through why exactly this is, whether you need to care (spoiler: you really don't), what the few differences are and how to work around them. By the end you should be comfortable re-using code examples from both PyQt5 and PySide2 tutorials to build your apps, regardless of which package you're using yourself.


Why are there two packages?

PyQt has been developed by Phil Thompson of Riverbank Computing Ltd. for a very long time — supporting versions of Qt going back to 2.x. Back in 2009 Nokia, who owned the Qt toolkit at the time, wanted to have Python bindings for Qt available under the LGPL license (like Qt itself). Unable to come to an agreement with Riverbank (who would lose money from this, so fair enough), they then released their own bindings as PySide (also, fair enough).

Edit: it's called PySide because "side" is Finnish for "binder" — thanks to Renato Araujo Oliveira Filho in the comments.

The two interfaces were comparable at first, but PySide development ultimately lagged behind PyQt. This was particularly noticeable following the release of Qt 5 — the Qt5 version of PyQt (PyQt5) was available from mid-2016, while the first stable release of PySide2 came 2 years later.

It is this delay which explains why many Qt 5 on Python examples use PyQt5 rather than PySide2 — it's not necessarily better, but it existed. However, the Qt project has recently adopted PySide as the official Qt for Python release, which should ensure its viability and increase its popularity going forward.

                                     PyQt5                      PySide2
Current stable version (2019-06-23)  5.12                       5.12
First stable release                 Apr 2016                   Jul 2018
Developed by                         Riverbank Computing Ltd.   Qt
License                              GPL or commercial          LGPL
Platforms                            Python 3                   Python 3 and Python 2.7
                                                                (Linux and MacOS only)

Which should you use? Well, honestly, it doesn't really matter.

Both packages are wrapping the same library — Qt5 — and so have 99.9% identical APIs (see below for the few differences). Code that is written for one can often be used as-is with the other, simply changing the imports from PyQt5 to PySide2. Anything you learn for one library will be easily applied to a project using the other.

Also, no matter which one you choose to use, it's worth familiarising yourself with the other so you can make the best use of all available online resources — using PyQt5 tutorials to build your PySide2 applications, for example, and vice versa.

In this short chapter I'll run through the few notable differences between the two packages and explain how to write code which works seamlessly with both. After reading this you should be able to take any PyQt5 example online and convert it to work with PySide2.
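Since most of the conversion is mechanical renaming, you can even script a first pass over it. Below is a hypothetical helper (not a published tool) that applies the renames covered in this chapter to a source string — the .ui-loading differences described later still need manual attention:

```python
def pyqt5_to_pyside2(source):
    """Best-effort textual first pass converting PyQt5 code to PySide2.

    Only handles the straight renames discussed in this guide; review
    the result by hand, especially any uic/QUiLoader code.
    """
    renames = [
        ("pyqtSignal", "Signal"),  # rename signals before touching "PyQt5"
        ("pyqtSlot", "Slot"),
        ("PyQt5", "PySide2"),
    ]
    for old, new in renames:
        source = source.replace(old, new)
    return source
```

For example, `pyqt5_to_pyside2("from PyQt5.QtCore import pyqtSignal")` gives the PySide2 spelling, `"from PySide2.QtCore import Signal"`.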


Licensing

The key difference in the two versions — in fact the entire reason PySide2 exists — is licensing. PyQt5 is available under a GPL or commercial license, and PySide2 under an LGPL license.

If you are planning to release your software itself under the GPL, or you are developing software which will not be distributed, the GPL requirement of PyQt5 is unlikely to be an issue. However, if you plan to distribute your software commercially you will either need to purchase a commercial license from Riverbank for PyQt5 or use PySide2.

Qt itself is available under a Qt Commercial License, GPL 2.0, GPL 3.0 and LGPL 3.0 licenses.

Python versions

  • PyQt5 is Python 3 only
  • PySide2 is available for Python 3 and Python 2.7, but Python 2.7 builds are only available for 64 bit versions of MacOS and Linux. Windows 32 bit is supported on Python 2 only.

UI files

Both packages use slightly different approaches for loading .ui files exported from Qt Creator/Designer. PyQt5 provides the uic submodule which can be used to load UI files directly, to produce an object. This feels pretty Pythonic (if you ignore the camelCase).

import sys
from PyQt5 import QtWidgets, uic

app = QtWidgets.QApplication(sys.argv)

window = uic.loadUi("mainwindow.ui")
window.show()
app.exec_()

The equivalent with PySide2 is one line longer, since you need to create a QUiLoader object first. Unfortunately the API of these two interfaces is different too (.load vs .loadUi) and they take different parameters.

import sys
from PySide2 import QtCore, QtGui, QtWidgets
from PySide2.QtUiTools import QUiLoader

loader = QUiLoader()

app = QtWidgets.QApplication(sys.argv)
window = loader.load("mainwindow.ui", None)
window.show()
app.exec_()

To load a UI onto an object in PyQt5, for example in your QMainWindow.__init__, you can call uic.loadUi passing in self (the target widget) as the second parameter.

import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5 import uic

class MainWindow(QtWidgets.QMainWindow):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        uic.loadUi("mainwindow.ui", self)

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

The PySide2 loader does not support this — the second parameter to .load is the parent widget of the widget you're creating. This prevents you adding custom code to the __init__ block of the widget, but you can work around this with a separate function.

import sys
from PySide2 import QtWidgets
from PySide2.QtUiTools import QUiLoader

loader = QUiLoader()

def mainwindow_setup(w):
    w.setWindowTitle("MainWindow Title")

app = QtWidgets.QApplication(sys.argv)

window = loader.load("mainwindow.ui", None)
mainwindow_setup(window)
window.show()
app.exec_()

Converting UI files to Python

Both libraries provide identical scripts to generate Python importable modules from Qt Designer .ui files. For PyQt5 the script is named pyuic5

pyuic5 mainwindow.ui -o MainWindow.py

You can then import the Ui_MainWindow object, subclass using multiple inheritance from the base class you're using (e.g. QMainWindow) and then call self.setupUi(self) to set the UI up.

import sys
from PyQt5 import QtWidgets
from MainWindow import Ui_MainWindow

class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setupUi(self)

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

For PySide2 it is named pyside2-uic

pyside2-uic mainwindow.ui -o MainWindow.py

The subsequent setup is identical.

import sys
from PySide2 import QtWidgets
from MainWindow import Ui_MainWindow

class MainWindow(QtWidgets.QMainWindow, Ui_MainWindow):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setupUi(self)

app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()

For more information on using Qt Designer with either PyQt5 or PySide2 see the Qt Creator tutorial.

exec() or exec_()

The .exec() method is used in Qt to start the event loop of your QApplication or dialog boxes. In Python 2.7 exec was a keyword, meaning it could not be used for variable, function or method names. The solution used in both PyQt4 and PySide was to rename uses of .exec() to .exec_() to avoid this conflict.

Python 3 removed the exec keyword, freeing the name up to be used. As PyQt5 targets only Python 3 it could remove the workaround, and .exec() calls are named just as in Qt itself. However, the .exec_() names are maintained for backwards compatibility.

PySide2 is available on both Python 3 and Python 2.7 and so still uses .exec_(). Its Python 2.7 builds are, however, only available for 64 bit Linux and MacOS.

If you're targeting both PySide2 and PyQt5, use .exec_().
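The naming trick PyQt5 relies on can be demonstrated without Qt at all — in Python 3, exec is an ordinary identifier, so a class is free to expose both spellings as aliases of the same method. FakeEventLoop below is a made-up stand-in, not a Qt class:

```python
class FakeEventLoop:
    # Stand-in for QApplication/QDialog; illustrates the naming only.
    def exec_(self):
        return "event loop finished"

    # Legal in Python 3 only — on Python 2 "exec" was a keyword and this
    # line would be a SyntaxError, hence the historical exec_() spelling.
    exec = exec_


loop = FakeEventLoop()
print(loop.exec() == loop.exec_())  # both names invoke the same method
```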

Slots and Signals

Defining custom slots and signals uses slightly different syntax between the two libraries. PySide2 provides this interface under the names Signal and Slot, while PyQt5 provides these as pyqtSignal and pyqtSlot respectively. The behaviour of both is identical for defining slots and signals.

The following PyQt5 and PySide2 examples are identical —

my_custom_signal = pyqtSignal()  # PyQt5
my_custom_signal = Signal()  # PySide2

my_other_signal = pyqtSignal(int)  # PyQt5
my_other_signal = Signal(int)  # PySide2

Or for a slot —

@pyqtSlot()  # PyQt5
def my_custom_slot():
    pass

@Slot()  # PySide2
def my_custom_slot():
    pass
If you want to ensure consistency across PyQt5 and PySide2 you can use the following import pattern for PyQt5 to use the Signal and @Slot style there too.

from PyQt5.QtCore import pyqtSignal as Signal, pyqtSlot as Slot

You could of course do the reverse from PySide2.QtCore import Signal as pyqtSignal, Slot as pyqtSlot although that's a bit confusing.

Supporting both in libraries

You don't need to worry about this if you're writing a standalone app, just use whichever API you prefer.

If you're writing a library, widget or other tool you want to be compatible with both PyQt5 and PySide2 you can do so easily by adding both sets of imports.

import sys

if 'PyQt5' in sys.modules:
    # PyQt5
    from PyQt5 import QtGui, QtWidgets, QtCore
    from PyQt5.QtCore import pyqtSignal as Signal, pyqtSlot as Slot

else:
    # PySide2
    from PySide2 import QtGui, QtWidgets, QtCore
    from PySide2.QtCore import Signal, Slot

This is the approach used in our custom widgets library, where we support PyQt5 and PySide2 with a single library import. The only caveat is that you must ensure PyQt5 is imported before this library (i.e. on the line above or earlier), so that it is already in sys.modules.
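The `'PyQt5' in sys.modules` check works because Python records every imported module in sys.modules; if your application imported PyQt5 first, the shim finds it there and picks the matching binding. Here is a stdlib-only illustration of the detection trick, using colorsys, a module the interpreter doesn't normally preload:

```python
import sys

before = 'colorsys' in sys.modules   # normally False: nothing has imported it yet
import colorsys                      # importing it registers it in sys.modules
after = 'colorsys' in sys.modules    # True from here on, for every later check
print(before, after)
```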

An alternative would be to use an environment variable to switch between them — see QtPy later.

If you're doing this in multiple files it can get a bit cumbersome. A nice solution to this is to move the import logic to its own file, e.g. named qt.py in your project root. This module imports the Qt modules (QtCore, QtGui, QtWidgets, etc.) from one of the two libraries, and then you import into your application from there.

The contents of the qt.py are the same as we used earlier —

import sys

if 'PyQt5' in sys.modules:
    # PyQt5
    from PyQt5 import QtGui, QtWidgets, QtCore
    from PyQt5.QtCore import pyqtSignal as Signal, pyqtSlot as Slot

else:
    # PySide2
    from PySide2 import QtGui, QtWidgets, QtCore
    from PySide2.QtCore import Signal, Slot

You must remember to add any other Qt modules you use (browser, multimedia, etc.) in both branches of the if block. You can then import Qt5 into your own application with —

from .qt import QtGui, QtWidgets, QtCore

…and it will work seamlessly across either library.


If you need to target more than just Qt5 (e.g. including PyQt4 and PySide v1), take a look at QtPy. This provides a standardised PySide2-like API for PyQt4, PySide, PyQt5 and PySide2. Using QtPy you can control which API to load from your application using the QT_API environment variable e.g.

import os
os.environ['QT_API'] = 'pyside2'
from qtpy import QtGui, QtWidgets, QtCore  # imports PySide2.

That's really it

There's not much more to say — the two are really very similar. With the above tips you should feel comfortable taking code examples or documentation from PyQt5 and using it to write an app with PySide2. If you do stumble across any PyQt5 or PySide2 examples which you can't easily convert, drop a note in the comments and I'll update this page with advice.

Qt Creator 4.10 Beta released

We are happy to announce the release of Qt Creator 4.10 Beta!

Some highlights in this version of Qt Creator are:


You can “pin” files so they stay open when closing all files. Check the context menu on the document dropdown and the Open Documents pane.

The client for the Language Server Protocol is now better integrated into Locator, shows tooltip information from the server, and has more flexible server settings.
We also moved the plugin out of the experimental state, so it is enabled by default.


You can filter many of the output panes for lines matching an expression.

The Qt Widgets Application and C++ Library wizards finally allow you to choose CMake or Qbs as the build system.

We added support for Android targets to CMake and Qbs projects.

For remote Linux targets you can now deploy all files that are installed by your build system’s install step.

We added basic support for Boost tests.

Please have a look at our change log for a more complete overview of changes. Many thanks to all who contributed to this release!

Get Qt Creator 4.10 Beta

The opensource version is available on the Qt download page under “Pre-releases”, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.10 Beta is also available under Preview > Qt Creator 4.10.0-beta1 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.10 Beta released appeared first on Qt Blog.

Qt 5.13 Released!

Today, we have released Qt 5.13 and I’m really proud of all the work that everyone has put into it. As always, our releases come with new features, updates, bug fixes, and improvements. For Qt 5.13, we have also been focused on our tooling that makes designing, developing and deploying software with Qt more efficient for designers and developers alike. Let’s take a look at some of the highlights of Qt 5.13 as well as some of the updates on the tooling side.

I will also be holding a webinar summarizing all the news around Qt 5.13 together with our Head of R&D Tuukka Turunen on July 2. Please sign up and ask us your questions.

New in Qt 5.13 & Qt Design and Development Tools Update - Jul 2, 2019

Qt for WebAssembly

Qt for WebAssembly lets you build Qt applications for web browsers and is now fully supported. Qt for WebAssembly uses Emscripten to compile Qt applications for the web, allowing you to run native applications in any browser that supports WebAssembly without requiring a client-side installation. Qt is setting the pace for C++ development for WebAssembly, and Google recently used Qt as an example of how to run C++ applications in the browser at the Google I/O ’19 event. You can take a look at the video.

We have also collected a range of examples for Qt for WebAssembly. Check them out here.


Qt GUI

The Qt GUI module comprises our classes for windowing system integration, event handling, OpenGL and OpenGL ES integration, 2D graphics, basic imaging, fonts, and text.


Qt QML

The Qt QML module provides a framework for developing fluid user interfaces in the QML language. We have improved the support for enums declared in C++, JavaScript “null” as a binding value is now optimized at compile time, and QML now generates function tables on 64-bit Windows, which makes it possible to unwind the stack through JITed functions.

Qt Quick and Qt Quick Controls 2

The standard library for writing QML applications and our UI controls for creating user interfaces have also received some updates. We have added support to TableView for hiding rows and columns and for Qt Quick Controls 2 we have added SplitView, a control that lays out items horizontally or vertically with a draggable splitter between each item. We have also added a cache property to icon.

Qt WebEngine

Qt WebEngine integrates Chromium’s fast-moving web capabilities into Qt and its latest version is now based on Chromium 73. We have added PDF viewing via an internal Chromium extension, application-local client certificate store, client certificate support from QML, Web Notifications API and a thread-safe and page-specific URL request interceptors.

Qt Network

Qt Network provides a set of APIs for programming applications that use TCP/IP and we have added Secure Channel support for SSL socket and OCSP stapling support. With Qt 5.13 are now using OpenSSL 1.1 to support SSL connections on Linux and Android.

Qt Multimedia

Provides a rich set of QML types and C++ classes to handle multimedia content. We have also added gapless playback in QML VideoOutput using flushMode property, support of GStreamer for Windows/macOS and HTTP headers and audio roles for Android.


Qt KNX

Qt KNX implements the client side of a client – KNXnet/IP server connection. This connection can be used to send messages to the KNX bus and to control the functionalities of KNX devices typically used in building automation. With Qt 5.13, the module has received a secure client API.


Qt OPC UA

Qt OPC UA, our module for the next generation of Industry 4.0 applications based on a client/server architecture, has received some updates. Mainly, the C++ API is now fully supported, and we added tech previews of a QML API and a secure client C++ API.

Qt CoAP (Tech Preview)

Qt CoAP (Constrained Application Protocol) is a client-side implementation of the M2M protocol for use with constrained nodes and networks for the internet of things. With Qt 5.13, the module has received support for Datagram TLS (DTLS) over UDP.

Other Recent updates

Qt Creator IDE 4.9

Qt Creator IDE has received some updates, which include an improved UI for diagnostics from the Clang analyzer tools, a QML parser update, support for ECMAScript 7 and a new performance profiling plugin for software running on Linux. You can read more about the updates to Qt Creator in the dedicated release post.

Qt Design Studio 1.2

The latest edition of the Qt UI design and development tool now lets you seamlessly import your designs from Sketch (in addition to Photoshop) and turn them into QML code. Adding support for Sketch has been a sought-after request and is a popular tool for designers so this is really taking Design Studio to the next level, enabling a much smoother designer-developer workflow. In addition, support for more complex gradients from Qt Quick Shapes have been added, and various improvements and fixes have been made. We have also released a Qt Design Studio Community Edition. You can read more about it in the release blog post.

Qt for Python

Qt for Python has received a large number of bug fixes and improvements since its first officially supported version that came with Qt 5.12.

New Version of Qt Safe Renderer

TÜV NORD certifies that you can use Qt to build functionally safe embedded systems. QSR 1.1 is certified based on the new edition of the ISO 26262:2018 series of standards and includes the Qt Safe Renderer code, designer and build tooling, a safety manual, certification artifacts, and global Qt technical support. Our recent update allows you to render UI elements dynamically.

Qt Lottie Animation Tech Preview

Engineers and UI designers can now easily embed Adobe After Effects animations directly into Qt Quick applications using the Bodymovin export format and the new Qt Lottie renderer for it. For more details, have a look at the blog post about Qt Lottie.

Thanks to the Qt Community

Qt 5.13 adds new functionality and improvements. Some of them would not have been possible without the help of the great community who contributes to Qt with new functionalities, documentation, examples, as well as bug fixes and reports. There are too many people to mention, but I’d like to especially thank basysKom and Witekio for their work on Qt OPC UA and QtCoAP, respectively.

The post Qt 5.13 Released! appeared first on Qt Blog.

Significant Performance Improvements with Qt 3D Studio 2.4

Speed of 3D rendering is essential for a 3D engine, in addition to efficient use of system resources. The upcoming new Qt 3D Studio 2.4 release brings a significant boost to rendering performance, as well as provides further savings on CPU and RAM utilization. With our example high-end embedded 3D application the rendering speed is improved by a whopping 565%, while RAM use and CPU load are down 20% and 51% respectively.

Performance is a key driver for Qt and is especially important for being able to run complex 3D applications on embedded devices. We have been constantly improving resource efficiency with earlier releases of Qt 3D Studio, and the upcoming Qt 3D Studio 2.4 takes a major step forward in rendering performance. The exact performance increase depends a lot on the application and the hardware used, so we have taken two example applications and embedded hardware for a closer look in this blog post. The example applications used in this post are automotive instrument clusters, but similar improvements can be seen in any application using the Qt 3D Studio runtime.

Entry-level embedded example with Renesas R-Car D3

The entry-level embedded device used in the measurements is the Renesas R-Car D3, which has an Imagination PowerVR GE8300 entry-class GPU (https://www.imgtec.com/powervr-gpu/ge8300/) and one ARM Cortex A53 CPU core. The operating system is Linux.

The example application used is the low-end cluster, available at https://git.qt.io/public-demos/qt3dstudio/tree/master/LowEndCluster. The low-end cluster example is well optimized, as described in a detailed blog post about optimizing 3D applications.



In order to make the application as lightweight as possible, only the ADAS view is created as a real-time 3D user interface. Other parts of the instrument cluster are created with Qt Quick. This allows having a real-time 3D user interface even on entry-class hardware like the Renesas R-Car D3.

High-end embedded example with NVIDIA Tegra X2

The high-end embedded device used in the measurements is the NVIDIA Jetson TX2 development board equipped with the Tegra X2 SoC, which has a 256-core NVIDIA Pascal™ GPU and a dual-core NVIDIA Denver 2 64-bit CPU as well as a quad-core ARM Cortex-A57 MPCore CPU. The operating system is Linux.

The example application used is the Kria cluster, available at https://git.qt.io/public-demos/qt3dstudio/tree/master/kria-cluster-3d-demo. The Kria cluster example is made intentionally heavy, with large and not fully optimized textures, high resolution, etc.


In the high-end example all the gauges and other elements are real-time 3D, rendered with the Qt 3D Studio runtime. There are very few Qt Quick parts and these are brought into the 3D user interface using texture sharing via QML streams.

Rendering performance improvement

The biggest improvement with the new Qt 3D Studio 2.4 release is to the rendering performance – getting the same application to render more Frames Per Second (FPS) on the same hardware. As always with Qt, we aim for a steady 60 FPS, but on embedded devices raw performance is not enough. With concerns like heat management and varying usage scenarios, it typically pays off not to run at the very edge of the SoC’s graphics capabilities. For an application such as an instrument cluster, performance needs to remain smooth in all operating conditions, including under maximum system load. For measurement purposes we have disabled vsync with the high-end example, allowing the system to draw as many frames as it can. In a typical real-life application vsync is always set, so anything above 60 FPS translates into saved processing resources.

The graphs below show the measured Frames Per Second with the high-end example on NVIDIA TX2 (vsync off) and with the low-end example on Renesas R-Car D3 (vsync on):


High-end example: With the new Qt 3D Studio 2.4 we see a whopping 565% improvement in rendering performance. With Qt 3D Studio 2.3 the application was running at only 20 FPS, while the new Qt 3D Studio 2.4 allows it to run at 133 FPS. This is measured with vsync turned off, just to measure the capability of the new runtime. In practice 60 FPS is enough, and the additional capacity of the processor can be leveraged for a larger screen (or another screen) or a more complex application – or simply left unused, to save power.

Low-end example: The improvement is 46% because the maximum FPS is capped to 60 FPS by Qt Quick. With Qt 3D Studio 2.3 the application achieved 41 FPS, and with the new 2.4 runtime it reaches 60 FPS easily. Just like with the more powerful high-end hardware the excess capacity of the SoC can be used for running a more complex 3D user interface, or simply left unused.
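As a sanity check, the quoted percentages follow directly from the before/after frame rates. A minimal sketch in plain C++ (no Qt; the function name is just illustrative):

```cpp
#include <cassert>
#include <cmath>

// Relative FPS improvement, in percent, going from `before` to `after`.
inline double fpsImprovementPercent(double before, double after)
{
    return (after - before) / before * 100.0;
}

// High-end example: 20 FPS -> 133 FPS gives the quoted 565%.
// Low-end example:  41 FPS -> 60 FPS gives roughly 46%.
```
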

CPU load improvement

The overall CPU load of an application is the sum of multiple things, one of them being the load caused by the 3D engine. In embedded applications it is important that using 3D does not cause excessive CPU load. If the application exceeds the available CPU, it will not be able to render at the target FPS, and stuttering or other artefacts may appear on screen.

The graphs below show the measured CPU load with the high-end example on NVIDIA TX2 and with the low-end example on Renesas R-Car D3:


High-end example: With the new Qt 3D Studio 2.4 we see a hefty 51% improvement in CPU load compared to Qt 3D Studio 2.3, while at the same time the FPS goes from 20 to 133. The overall load with the 2.3 runtime is 167% (of a total of 400%), and with the 2.4 runtime the load drops to 81%. Note that the increased rendering speed itself has an effect on the CPU load. With vsync on and the FPS capped to 60, the CPU load is 74%.

Low-end example: We see only a modest 5% improvement in CPU load, mainly because the application is mostly Qt Quick. But this is with the FPS going from 41 up to 60 at the same time. It should also be noted that the CPU of the R-Car D3 is not very powerful, so the increased FPS of the overall application has its effect on the overall CPU load.
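The 51% figure is the relative drop in the runtime's share of the four cores. A quick check in plain C++ (the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Relative reduction in CPU load, in percent.
inline double loadReductionPercent(double before, double after)
{
    return (before - after) / before * 100.0;
}

// High-end example: 167% -> 81% (of the 400% total) is a ~51% lower load.
```
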

Memory usage improvement

For any graphics, and especially 3D, it is the assets that typically take most of the RAM. There are ways to optimize, most notably avoiding an unnecessary level of detail and leveraging texture compression. For the purposes of this blog post we do not use any specific optimization methods: the measurements are done with exactly the same application, with no change other than the version of the Qt 3D Studio runtime.

The graphs below show the measured RAM use with the high-end example on NVIDIA TX2 and with the low-end example on Renesas R-Car D3:


High-end example: With the new Qt 3D Studio 2.4 we see a reduction of 48 MB compared to Qt 3D Studio 2.3. This is a 20% reduction in the overall RAM usage of the application.

Low-end example: In the simpler example the reduction in RAM use is 9 MB when using the new 2.4 runtime. Percentage-wise this is a 15% reduction in the overall RAM usage of the application.
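Since both the absolute and the relative savings are given, the overall RAM footprints can be derived from them. A small sketch in plain C++ (the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Overall RAM usage before the update, implied by an absolute reduction
// and the share of the total that it represents.
inline double totalRamBeforeMb(double reductionMb, double reductionFraction)
{
    return reductionMb / reductionFraction;
}

// High-end example: 48 MB at 20% implies ~240 MB before, ~192 MB after.
// Low-end example:   9 MB at 15% implies  ~60 MB before,  ~51 MB after.
```
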

How was this achieved?

The improvements are really big, especially on embedded, so one may wonder what changed in the new version. We now use the same runtime architecture as the Qt 3D Studio 1.x releases, instead of running on top of Qt 3D. The core logic of the 3D engine is still the same as before, but it runs directly on top of OpenGL instead of going through Qt 3D. This provides significantly improved performance, especially on embedded devices but also on more powerful desktop systems. By running Studio’s 3D engine directly on top of OpenGL we avoid rendering overhead and simplify the architecture. The simpler architecture translates to less internal signalling, fewer objects in memory, and reduced synchronization between rendering threads. All this has allowed us to optimize further over Qt 3D Studio 1.x – and, of course, to bring the new features developed in the Qt 3D Studio 2.x releases on top of the OpenGL-based runtime.

The change in the 3D runtime does not require any changes for most projects: just change the import statement (import QtStudio3D.OpenGL 2.4 instead of import QtStudio3D 2.3) and recompile with the new Qt 3D Studio 2.4. As the API and the parts of the 3D engine relevant for the application are the same as before, all the same materials, shaders, etc. work just like they used to. In the rare cases where changes are needed, e.g. for a custom material, they are rather small.

Get Qt 3D Studio 2.4

If you have not yet tried out the Qt 3D Studio 2.4 pre-releases, you should take them for a spin. They are available in the online installer under the preview node. Currently the third Beta release is out, and the Release Candidate will follow soon. The final release is targeted for the end of June. Qt 3D Studio is available under both commercial and open-source licenses.

The post Significant Performance Improvements with Qt 3D Studio 2.4 appeared first on Qt Blog.

Qt 5.12.4 Released with support for OpenSSL 1.1.1

Qt 5.12.4, the fourth patch release of Qt 5.12 LTS, is released today. Qt 5.12.4 provides a number of bug fixes, as well as performance and other improvements. As an important new item, it provides binaries built with OpenSSL 1.1.1, including the new TLS 1.3 functionality.

Compared to Qt 5.12.3, the new Qt 5.12.4 provides around 250 bug fixes. For details of the most important changes, please check the Change files of Qt 5.12.4.

The update to OpenSSL 1.1.1 is important to note for users leveraging OpenSSL in their applications. We wanted to update now because the earlier version of OpenSSL runs out of support at the end of the year, and some platforms, such as Android, need the new one even sooner. Unfortunately OpenSSL 1.1 is binary-incompatible with 1.0, so users need to switch to the new version and repackage their applications. One important piece of functionality enabled by OpenSSL 1.1 is TLS 1.3, which brings significant cryptography and speed improvements. As part of the change, some old and insecure crypto algorithms have been removed and support for some new crypto algorithms has been added. Users not leveraging OpenSSL in their applications need take no action; OpenSSL is not included in a Qt application unless the developer explicitly bundles it.

Going forward, Qt 5.12 LTS will receive many more patch releases over the coming years, and we recommend that all actively developed projects migrate to Qt 5.12 LTS. Qt 5.9 LTS is currently in its ‘Strict’ phase and receives only selected important bug and security fixes, while Qt 5.12 LTS currently receives all bug fixes. Support for Qt 5.6 ended in March 2019, so all active projects still using Qt 5.6 LTS should migrate to a later version of Qt.

Qt 5.12.4 is now available via the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal or from the qt.io Download page. Offline packages are available for commercial users in the Qt Account portal and at the qt.io Download page for open-source users. You can also try out the commercial evaluation option from the qt.io Download page.

The post Qt 5.12.4 Released with support for OpenSSL 1.1.1 appeared first on Qt Blog.

Qt Support – Aligning chart views underneath each other

One thing that comes up occasionally in support is that when you have multiple chart views, it would be good to line them up underneath each other so that the actual plotted size is the same for both charts. If the axes have similar sizes this happens for free: they take up the same width and everything aligns. However, when that is not the case – as it so often is – we can use the margins to force the sizes we want. So let’s say that our chart currently looks like:

Chart with unaligned axes

If you are using C++ for your chart code, this can be achieved with the aid of a single slot connected to the QChart::plotAreaChanged() signal. First we need to prevent recursion, since we will be changing margins and thereby triggering further plot-area changes; we still want the internals to do their thing, so we don’t block signals. Instead we use a static boolean, set to true while we are in the middle of our calculations:

    void updatePlotArea(const QRectF &area)
    {
        static bool fixing = false;
        if (fixing)
            return;
        fixing = true;

The next thing we need to do is work out which chart is the best one to align the others against – the one whose plot area has the largest left value (i.e. the one with the widest axis). We use the chart that just changed as a starting point, and whenever another chart turns out to be better we reset its margins, ensuring we have the most plot-area size available for the largest axis.

        QChart *bestChart = qobject_cast<QChart *>(sender());
        QRectF bestRect = area;
        foreach (QChart *chart, charts) {
            if (chart->plotArea().left() > bestRect.left()) {
                bestChart = chart;
                bestRect = chart->plotArea();
                chart->setMargins(QMargins(20, 0, 20, 0));
            }
        }

Then, for every chart except the one that ends up being the best, we adjust the margins so that it matches the best one: the new margin is the existing margin plus the difference between the best chart’s plot area and this chart’s plot area. Finally we send any posted events to ensure everything updates right away.

        foreach (QChart *chart, charts) {
            if (bestChart != chart) {
                const int left = chart->margins().left() +
                    (bestRect.left() - chart->plotArea().left());
                const int right = chart->margins().right() +
                    (chart->plotArea().right() - bestRect.right());
                chart->setMargins(QMargins(left, 0, right, 0));
            }
        }
        QCoreApplication::sendPostedEvents(); // update right away
        fixing = false;
    }

This will give us the two charts aligned to look like:


As for doing this in QML, we can do something similar in a function which is called via onPlotAreaChanged.

    property bool fixing: false
    property var chartViews: [chartview, chartview_b]
    function updatePlotArea(chart, area) {
        if (fixing)
            return
        fixing = true
        var tmpChart
        var bestRect = chart.plotArea
        var bestChart = chart
        for (var i = 0; i < chartViews.length; i++) {
            tmpChart = chartViews[i]
            if (tmpChart.plotArea.left > Math.ceil(bestRect.left) ||
                (Math.ceil(tmpChart.plotArea.left) ===
                 Math.ceil(bestRect.left) &&
                 Math.floor(tmpChart.plotArea.right) <
                 Math.floor(bestRect.right))) {
                bestChart = tmpChart
                bestRect = tmpChart.plotArea
            }
        }
        bestRect.left = Math.ceil(bestRect.left)
        bestRect.right = Math.floor(bestRect.right)
        for (i = 0; i < chartViews.length; i++) {
            tmpChart = chartViews[i]
            if (tmpChart !== bestChart) {
                // existing 20px margin plus the plot-area difference,
                // mirroring the C++ version above
                var newLeft = 20 + bestRect.left -
                      Math.ceil(tmpChart.plotArea.left)
                var newRight = 20 +
                      Math.ceil(tmpChart.plotArea.right) -
                      bestRect.right
                tmpChart.margins.left = newLeft
                tmpChart.margins.right = newRight
            }
        }
        fixing = false
    }

The only difference is that the plot area uses real values while the margins are still integer-based, so we do some extra rounding to account for that.
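The ceil/floor accounting can be illustrated standalone. A plain C++ sketch (illustrative names, assuming the same 20px base margin as the snippets above):

```cpp
#include <cassert>
#include <cmath>

// Mirror of the QML margin computation: plot-area edges are reals,
// margins are integers, so the target edges are rounded inward
// (ceil on the left, floor on the right) before taking differences.
struct AlignedMargins { int left; int right; };

inline AlignedMargins alignMargins(int baseMargin,
                                   double chartLeft, double chartRight,
                                   double bestLeft, double bestRight)
{
    const double targetLeft  = std::ceil(bestLeft);
    const double targetRight = std::floor(bestRight);
    return { int(baseMargin + targetLeft - std::ceil(chartLeft)),
             int(baseMargin + std::ceil(chartRight) - targetRight) };
}
```

For example, a chart with plot area [40.3, 600.8] aligned against a best plot area of [55.2, 590.4] gets left/right margins of 35 and 31.
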

In other news from the support desk: a reminder of the value of reporting bugs directly to the support team as a customer. When a customer indicated the issue reported in https://bugreports.qt.io/browse/QTBUG-74523, we were able to quickly find a solution and get it integrated in time for the next release. It had been reported earlier this year, but was given a lower priority because Qt Quick Controls 1 is deprecated, so no one had scheduled time to investigate it. As always, a bug reported to us by a customer gets a higher priority, and in cases like this we can solve it much more quickly because the support team can spend time on it too.

The post Qt Support – Aligning chart views underneath each other appeared first on Qt Blog.

Faster link time for Qt WebAssembly

I have built Qt and various apps using emscripten so many times over the last couple of years, it isn't even funny.

One drawback of building Qt applications for the web using Qt for WebAssembly is the time it takes to build the client application. Linking in particular takes a huge amount of time to produce a binary for Qt WebAssembly.
This is caused by all the magic that the emscripten linker is doing.

Luckily, llvm can now produce wasm binaries directly, so emscripten does not have to go through any additional steps to output wasm. Emscripten and Qt can utilize this feature to reduce the time it takes to link and emit a .wasm binary.

YAY! \0/

I did some very non-scientific measurements of build time, using my development machine with make -j8:

                                             Qt (not including configure)   textedit
default configure Qt for WebAssembly         user 20m3.706s                 user 2m46.678s
using -device-option WASM_OBJECT_FILES=1     user 23m30.230s                user 0m29.435s

The result is that building Qt takes a tad longer, but building the client application is significantly faster, which means development iterations will be much faster. In this instance I used the textedit example from Qt. The build time went from 2 minutes and 46 seconds for a default (non-object-files) build to 29 seconds! Zip! Zoom! Zang! Dang, that's fast (for wasm).
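For scale, the measured "user" times above work out to roughly a 5.7x speedup for the application build. Rough arithmetic, nothing more (plain C++, illustrative name):

```cpp
#include <cassert>

// Ratio of the two measured "user" times for the textedit build.
inline double speedup(double beforeSeconds, double afterSeconds)
{
    return beforeSeconds / afterSeconds;
}

// 2m46.678s = 166.678 s down to 29.435 s: about 5.7x faster linking,
// in exchange for Qt itself taking a few minutes longer to build once.
```
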

Unfortunately, this is not currently the default for Qt, emscripten or llvm. There is a way to do it, which involves building llvm from upstream and telling emscripten to use that.

For the emscripten and binaryen versions, I tested the current 1.38.32.

In this bug report, https://bugreports.qt.io/browse/QTBUG-72537, Morten has fleshed out a method for doing this.


Getting started:

(Note: this is "how-I-did-it", not necessarily "how-it-should-be-done".)
1. Clone the following repositories:
  • emscripten
  • binaryen
  • llvm

2. Check out the version you want to use 
  • emscripten and binaryen have matching emsdk version tags, for example "1.38.23".
  • llvm has its own version numbers, and must be synced up manually. Currently:
    • emscripten <= 1.38.23 : llvm 8.0 (8.0.0-rc2 branch)
    • emscripten > 1.38.23 : llvm 9.0 (master branch)
      (em++/emcc will complain if you have the incorrect llvm version)

3. Build
  • emscripten is all python, no build needed.
  • binaryen: "cmake -GNinja && ninja"
  • llvm:
    cmake -GNinja
    (one of the WebAssembly targets may be redundant)

4. Configure Environment
I use the following:
export EMSDK="/Users/msorvig/dev/emsdks/emscripten-1.38.23"
export PATH="$EMSDK:$PATH"
export LLVM="/Users/msorvig/dev/emsdks/llvm-8.0.0-build/bin"
export BINARYEN="/Users/msorvig/dev/emsdks/binaryen-1.38.23"
export EM_CONFIG="/Users/msorvig/dev/emsdks/.emscripten-vanillallvm-1.38.23"
export EM_CACHE="/Users/msorvig/dev/emsdks/.emscripten-vanillallvm-cache-1.38.23"
Where .emscripten-vanillallvm-1.38.23 is a copy of the .emscripten config file that emsdk generates.


You will need to adjust the various paths to your system, of course. 
Then to configure and build Qt, add -device-option WASM_OBJECT_FILES=1 to the normal Qt for WebAssembly configure line.

The one I normally use is:

configure -xplatform wasm-emscripten -developer-build -nomake tests -nomake examples -opensource -confirm-license -verbose -compile-examples -no-warnings-are-errors -release  -device-option WASM_OBJECT_FILES=1

This works like a charm for the Qt 5.13 and 5.13.0 branches of the Qt source code repo. I tried it with the 5.13.0 beta4 WebAssembly binaries, but got:

wasm-ld: warning: function signature mismatch:
so a complete rebuild is required.

There is a chapter on Qt for WebAssembly in the book Hands-On Mobile and Embedded Development with Qt 5.

Introducing QtCoAP

I am happy to introduce the new QtCoAP library! It is a client-side implementation of the Constrained Application Protocol (CoAP) for the Internet of Things. The library provides a quick and easy way of using the CoAP protocol in your cross-platform Qt applications. Big thanks to our partners at Witekio for developing and contributing the main functionality! As announced earlier, QtCoAP will be available in the new Qt 5.13 release as part of our Qt for Automation offering, together with other IoT protocol implementations such as MQTT, KNX and OPC UA.

What is CoAP?

CoAP was designed as a lightweight machine-to-machine (M2M) communication protocol that can run on devices with scarce memory and computing resources. It is based on the concept of RESTful APIs and is very similar to HTTP. CoAP has a client-server architecture and uses GET, POST, PUT and DELETE requests for interaction with the data. But unlike HTTP, it uses lightweight UDP for transport instead of TCP. Additionally, it supports some interesting features like multicast requests, resource discovery and observation.

Thanks to its low overhead and simplicity, CoAP has become one of the most popular IoT protocols for embedded devices. It acts as a sort of HTTP for the embedded world.
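To give a feel for how lightweight the wire format is: a complete CoAP message header is only four bytes. This sketch follows RFC 7252 directly and is not taken from QtCoAP's internals:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build a minimal 4-byte CoAP header per RFC 7252:
// version (2 bits), type (2 bits), token length (4 bits),
// code (8 bits), message ID (16 bits, network byte order).
inline std::vector<std::uint8_t> coapHeader(std::uint8_t type,     // 0 = CON, 1 = NON
                                            std::uint8_t code,     // 1 = GET, 2 = POST, ...
                                            std::uint16_t messageId)
{
    const std::uint8_t version = 1;
    return { std::uint8_t((version << 6) | ((type & 0x3) << 4)),   // TKL = 0, no token
             code,
             std::uint8_t(messageId >> 8),
             std::uint8_t(messageId & 0xFF) };
}

// A confirmable GET with message ID 0x1234 is just {0x40, 0x01, 0x12, 0x34},
// versus dozens of bytes for an equivalent HTTP request line and headers.
```
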

Overview of Qt CoAP Implementation

QtCoAP supports the following functionality:

  • Resource observation
  • Resource discovery
  • Group communication (multicast)
  • Blockwise transfers
  • Security

The library is really simple to use. You just need to create an instance of QCoapClient and connect its signals:

QCoapClient client;
connect(&client, &QCoapClient::finished, this, &CoapHandler::onFinished);
connect(&client, &QCoapClient::error, this, &CoapHandler::onError);

Now you are ready to send requests and receive replies:

// GET requests
client.get(QCoapRequest(QUrl("")));
// or simply
client.get(QUrl(""));

// PUT/POST requests
QFile file("data.json");
// ...
client.post(QUrl(""), file.readAll());
client.put(QUrl(""), file.readAll());

// DELETE requests
client.deleteResource(QUrl(""));
Using the QCoapRequest class you can pass options and customize your requests. For example:

QCoapRequest request;
request.addOption(QCoapOption::UriPath, "resource");
client.get(request);

CoAP also provides a publish-subscribe mechanism achieved via “observe” requests:

QCoapReply *observeReply = client.observe(QUrl(""));
connect(observeReply, &QCoapReply::notified, this, &CoapHandler::onNotified);

Now your application will get notified whenever the “/temperature” resource changes.

What makes CoAP even more interesting is the ability to find and discover CoAP resources. You can discover resources on a given host:

QCoapResourceDiscoveryReply *discoverReply = client.discover(QUrl(""));

Or in the entire network:

QCoapResourceDiscoveryReply *discoverReply = client.discover();

This will send a multicast discovery request to the IPv4 CoAP multicast group. You can also run the discovery for the IPv6 nodes:

discoverReply = client.discover(QtCoap::MulticastGroup::AllCoapNodesIPv6LinkLocal);
// or
discoverReply = client.discover(QtCoap::MulticastGroup::AllCoapNodesIPv6SiteLocal);
connect(discoverReply, &QCoapResourceDiscoveryReply::discovered, this, &CoapHandler::onDiscovered);

You will get a discovery reply from each CoAP device in your network. For example:

Host 1:

RES: 2.05 Content

Host 2:

RES: 2.05 Content

This indicates that your network has two devices running CoAP servers: one is connected to temperature and light sensors, and the other has only a temperature sensor.


Last but not least: security. The QtCoAP library supports the following security modes:

  • Authentication via pre-shared keys.
  • Using X.509 certificates.

To secure the CoAP connection, you need to pass one of these modes when creating the client and configure it accordingly. For example:

QCoapClient secureClient(QtCoap::SecurityMode::PreSharedKey);
QCoapSecurityConfiguration config;
config.setPreSharedKey("secretPSK");               // placeholder key
config.setPreSharedKeyIdentity("Client_identity"); // placeholder identity
secureClient.setSecurityConfiguration(config);

Please give us your feedback if you find this post interesting!

The post Introducing QtCoAP appeared first on Qt Blog.

Building and testing on multiple platforms – introducing minicoin

When working on Qt, we need to write code that builds and runs on multiple platforms, with various compiler versions and platform SDKs, all the time. Building code, running tests, reproducing reported bugs, or testing packages is at best cumbersome and time consuming without easy access to the various machines locally. Keeping actual hardware around is an option that doesn’t scale particularly well. Maintaining a bunch of virtual machines is often a better option – but we still need to set those machines up, and find an efficient way to build and run our local code on them.

Building my local Qt 5 clone on different platforms to see if my latest local changes work (or at least compile) should be as simple as running “make”, perhaps with a few more options needed. Something like

qt5 $ minicoin run windows10 macos1014 ubuntu1804 build-qt

should bring up three machines, configure them using the same steps that we ask Qt developers to follow when they set up their local machines (or that we use in our CI system Coin – hence the name), and then run the build job for the code in the local directory.

This (and a few other things) is possible now with minicoin. We can define virtual machines in code that we can share with each other like any other piece of source code. Setting up a well-defined virtual machine within which we can build our code takes just a few minutes.

minicoin is a set of scripts and conventions on top of Vagrant, with the goal to make building and testing cross-platform code easy. It is now available under the MIT license at https://git.qt.io/vohilshe/minicoin.

A small detour through engineering of large-scale and distributed systems

While working with large-scale (thousands of hosts), distributed (globally) systems, one of my favourite, albeit somewhat gruesome, metaphors was that of “servers as cattle” vs “servers as pets”. Pet-servers are those we groom manually, we keep them alive, and we give them nice names by which to remember and call (i.e. ssh into) them. However, once you are dealing with hundreds of machines, manually managing their configuration is no longer an option. And once you have thousands of machines, something will break all the time, and you need to be able to provision new machines quickly and automatically, without having to manually follow a list of complicated instructions.

When working with such systems, we use configuration management systems such as CFEngine, Chef, Puppet, or Ansible, to automate the provisioning and configuration of machines. When working in the cloud, the entire machine definition becomes “infrastructure as code”. With these tools, servers become cattle which – so the rather unvegetarian idea – is simply “taken behind the barn and shot” when it doesn’t behave like it should. We can simply bring a new machine, or an entire environment, up by running the code that defines it. We can use the same code to bring production, development, and testing environments up, and we can look at the code to see exactly what the differences between those environments are. The tooling in this space is fairly complex, but even so there is little focus on developers writing native code targeting multiple platforms.

For us as developers, the machine we write our code on is most likely a pet. Our primary workstation dying is the stuff of nightmares, and setting up a new machine will probably keep us busy for many days. But this amount of love and care is perhaps not required for those machines that we only need for checking whether our code builds and runs correctly. We don’t need our test machines to be around for a long time, and we want to know exactly how they are set up so that we can compare things. Applying the concepts from cloud computing and systems engineering to this problem led me (back) to Vagrant, which is a popular tool to manage virtual machines locally and to share development environments.

Vagrant basics

Vagrant gives us all the mechanisms to define and manage virtual machines. It knows how to talk to a local hypervisor (such as VirtualBox or VMware) to manage the life-cycle of a machine, and how to apply machine-specific configurations. Vagrant is written in Ruby, and the way to define a virtual machine is to write a Vagrantfile, using Ruby code in a pseudo-declarative way:

Vagrant.configure("2") do |config|
    config.vm.box = "generic/ubuntu1804"
    config.vm.provision "shell",
        inline: "echo Hello, World!"
end
Running “vagrant up” in a directory with that Vagrantfile will launch a new machine based on Ubuntu 18.04 (downloading the machine image from the vagrantcloud first), and then run “echo Hello, World!” within that machine. Once the machine is up, you can ssh into it and mess it up; when done, just kill it with “vagrant destroy”, leaving no traces.

For provisioning, Vagrant can run scripts on the guest, execute configuration management tools to apply policies and run playbooks, upload files, build and run docker containers, etc. Other configurations, such as network, file sharing, or machine parameters such as RAM, can be defined as well, in a more or less hypervisor-independent format. A single Vagrantfile can define multiple machines, and each machine can be based on a different OS image.

However, Vagrant works on a fairly low level, and each platform requires different provisioning steps, which makes it cumbersome and repetitive to do essentially the same thing in several different ways. Also, each guest OS has slightly different behaviours (for instance, where uploaded files end up, or where shared folders are located). Some OS’es don’t fully support all the capabilities (hello macOS), and of course running actual tasks is done differently on each OS. Finally, Vagrant assumes that the current working directory is where the Vagrantfile lives, which is not practical for developing native code.

minicoin status

minicoin provides various abstractions that try to hide many of the various platform specific details, works around some of the guest OS limitations, and makes the definition of virtual machines fully declarative (using a YAML file; I’m by no means the first one with that idea, so shout-out to Scott Lowe). It defines a structure for providing standard provisioning steps (which I call “roles”) for configuring machines, and for jobs that can be executed on a machine. I hope the documentation gets you going, and I’d definitely like to hear your feedback. Implementing roles and jobs to support multiple platforms and distributions is sometimes just as complicated as writing cross-platform C++ code, but it’s still a bit less complex than hacking on Qt.

We can’t give access to our ready-made machine images for Windows and macOS, but there are some scripts in “basebox” that I collected while setting up the various base boxes, and I’m happy to share my experiences if you want to set up your own (it’s mostly about following the general Vagrant instructions about how to set up base boxes).

Of course, this is far from done. Building Qt and Qt applications with the various compilers and toolchains works quite well, and saves me a fair bit of time when touching platform-specific code. However, working within the machines is still somewhat clunky, though it should become easier as more jobs are defined. On the provisioning side, there is still a fair bit of work to be done before we can run our auto-tests reliably within a minicoin machine. I’ve experimented with different ways of setting up the build environments; from a simple shell script that installs things, to “insert CD with installed software”, to using docker images (for example for setting up a box that builds for WebAssembly, using Maurice’s excellent work on Using Docker to test WebAssembly).

Given the amount of discussions we have on the mailing list about “how to build things” (including documentation, where my journey into this rabbit hole started), perhaps this provides a mechanism for us to share our environments with each other. Ultimately, I’d like coin and minicoin to converge, at least for the definition of the environments – there are already “coin nodes” defined as boxes, but I’m not sure if this is the right approach. In the end, anyone that wants to work with or contribute to Qt should be able to build and run their code in a way that is fairly close to how the CI system does things.

The post Building and testing on multiple platforms – introducing minicoin appeared first on Qt Blog.

Qt Design Studio 1.2 released

We are happy to announce the release of Qt Design Studio 1.2!

Qt Design Studio is a UI design and development tool that enables designers and developers to rapidly prototype and develop complex UIs. Both designers and developers use Qt Design Studio and this makes collaboration between the two a lot simpler and more streamlined. To get an impression, you should watch this video.

The most notable addition to Qt Design Studio 1.2 is the Qt Bridge for Sketch. This allows you to seamlessly import your designs into Qt Design Studio.
The Qt Bridge for Sketch is delivered with Qt Design Studio as a plugin that you can install in Sketch. At the moment the feature set of the Qt Bridge for Sketch is very similar to the Qt Bridge for Photoshop, and we already support symbols.

We created a short video to give an impression of the Qt Bridge for Sketch.

For details about how to install and use the Qt Bridge for Sketch, check our online documentation.

Qt Design Studio 1.2 Community Edition

With Qt Design Studio 1.2 we also offer a Community Edition that is free to download and use. The Community Edition can be downloaded from here. The Qt Bridge for Sketch and the Qt Bridge for Photoshop are not part of the Community Edition.

Since there have been many questions about the source code of Qt Design Studio, I want to emphasize that Qt Design Studio is basically a differently configured Qt Creator, and that most of the source code is already part of Qt Creator.
The Qt Bridge for Sketch and the Qt Bridge for Photoshop are closed source, though. In the timeframe of Qt Creator 4.10, we will clean up the way we exchange the icons and rename the executable and product name.

Qt Creator 4.9, for example, already contains the graphical timeline editor. Users who are primarily C++ developers can continue to use Qt Creator for their QML development and to integrate Qt Design Studio projects.

Complex gradients

In Qt Design Studio 1.2 we now support the more complex gradients from Qt Quick Shapes for the Qt Design Studio Components. These advanced gradients, like radial and conical gradients, are very useful when creating gauges and meters, especially since their properties can also be animated. Finally, you are no longer limited to linear vertical gradients when using Qt Design Studio to design content.
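
As a rough sketch of what such a gradient looks like in code, here is a conical gradient filling a circular shape with Qt Quick Shapes; the sizes, colors, and ids are illustrative, not output generated by Qt Design Studio:

```qml
// Illustrative sketch, not Qt Design Studio generated code: a full
// circle filled with a conical gradient, drawn with Qt Quick Shapes.
import QtQuick 2.12
import QtQuick.Shapes 1.12

Shape {
    width: 200; height: 200
    ShapePath {
        strokeColor: "transparent"
        fillGradient: ConicalGradient {
            centerX: 100; centerY: 100
            angle: 90   // rotate the gradient; animatable like any property
            GradientStop { position: 0.0; color: "crimson" }
            GradientStop { position: 1.0; color: "gold" }
        }
        // A full circle approximated with two 180-degree arc segments
        startX: 100; startY: 0
        PathArc { x: 100; y: 200; radiusX: 100; radiusY: 100 }
        PathArc { x: 100; y: 0;   radiusX: 100; radiusY: 100 }
    }
}
```

Animating the `angle` property is what makes these gradients attractive for gauges and meters.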


For Qt Design Studio 1.2 we also fixed many bugs and issues.
To learn more about the improvements and fixes in Qt Design Studio 1.2, you can check out the changelog.

Download and try out Qt Design Studio 1.2

The commercially licensed packages are available through the online installer and on the Qt Account Portal. You can try out and evaluate Qt Design Studio by registering for an evaluation license.
The Community Edition, which does not contain the Qt Bridges, can be downloaded from here.

Getting Started

You can find the latest online documentation for Qt Design Studio 1.2 here. The documentation is also available from inside Qt Design Studio.

For Qt Design Studio we created step-by-step tutorials as part of the documentation.

The welcome page of Qt Design Studio contains examples and links to video tutorials to help you get started.

Please post issues you find or suggestions you have in our bug tracker.

The post Qt Design Studio 1.2 released appeared first on Qt Blog.