First Machine Learning Steps with TensorFlow and FloydHub

by Peter Hartmann (peter)
TL;DR: Setting up a machine learning environment on an operating system other than Windows seems harder than just running the code in the cloud.
Recently Opitz Consulting hosted a Machine Learning Introduction with a Hackathon; the posed problem was the classic "separate cat pictures from dog pictures" task.
When dealing with big datasets, running the code in a CPU-only environment is far too slow; using a GPU can make execution around 10 times faster.
Interestingly, the most difficult part of the hackathon was getting the code to run in a GPU-enabled environment with TensorFlow (sort of the de-facto machine learning framework these days): it is apparently easy on a Windows machine with an NVIDIA GPU, more or less impossible on a Mac, and might work on Linux if you carefully download the exact matching versions of the NVIDIA components (CUDA, cuDNN).
A better solution than fiddling for hours with setting up GPU support for TensorFlow is running it in the cloud:
Google Colab offers free GPU usage for running Jupyter notebooks, an environment for sharing code, documentation, etc. It is good for trying out code, but not really suitable for iteratively editing code locally in your favorite IDE and running it.
A better solution for that use case is FloydHub: you edit your code locally, then upload it and run it on a system with a powerful GPU (Tesla K80 or better). Also, you can upload your datasets separately, so you don't need to download them in the script you execute. In our case, the dataset consists of images of cats and dogs.
The downside: it is not free ;) 10 hours of GPU usage cost $10, which seems fair; the danger is that your script is malformed somehow and runs longer than expected, or just never finishes. For the setup here each run took around 2 minutes, so with 10 hours you can get quite far. Also, the first 2 hours of GPU usage are free.
The code to tell cats and dogs apart is hosted on GitHub and is a fork of Philipp Fehrmann's code, so almost all of the code was written by him for the hackathon mentioned earlier.
To run your code on FloydHub, create a new project there, download the command line tools and then initialize the project inside your local clone of the GitHub repository, using the same name:
git clone https://github.com/peter-ha/ML-Example-Steps.git
cd ML-Example-Steps
# create a repository on floydhub.com first and then init with the same name here:
floyd init peterpeterha/ml-example-steps
Then initialize the dataset, which can be downloaded from Microsoft for free; the GitHub repository already contains a script to split the data into training and test sets:
cd ..
wget https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
unzip kagglecatsanddogs_3367a.zip
# This will create a directory "data" with subdirectories "train" and "test":
python ML-Example-Steps/util.py
cd data
floyd data init kaggle-cats-and-dogs
floyd data upload
Finally run the code on FloydHub:
floyd run --gpu --env tensorflow-1.8 --data peterpeterha/datasets/kaggle-cats-and-dogs/2:/data "python main.py"
# output logs in console e.g. for job 15:
floyd logs -t 15
Check out the logs of a successful run, which yields an accuracy of 66% and finishes in about 2 minutes. The accuracy is not that great yet, but at least now we have a setup over which we can iterate quickly.
Summary: Running machine learning code in the cloud (here: FloydHub) seems more convenient than running it locally on a non-Windows environment; it can even be faster, because the GPU used there is better than a typical consumer graphics card.
--
Comments / questions? Feel free to let me know on Twitter: @peha23

What a mesh!

With all the advances being made in Qt 3D, we wanted to create some new examples showing some of what it can do. To get us started, we decided to use an existing learning framework, so we followed the open source Tower Defence course, which you can find at CGCookie. Being a game, it allows an interactive view of everything at work, which is very useful.

We found it to be so diverse that we are now implementing Parts 2 and 3 of the game in Qt 3D. However, you don't have to wait for that; you can start now by following the steps we took.

The setup

These instructions will help you set up for Qt 5.11.0.

To start, open Qt Creator and create a new Qt Console Application, set to run on your Qt 5.11.0 kit.

A Qt Console Application doesn't come with too much 'plumbing'. A lot of the other options will attempt to give you starting files that aren't required or, in some cases, are of the wrong type entirely.

Let’s edit it to fit our needs by opening up the .pro file and adding the following:

First remove the QT += core and QT -= gui lines if they are present.

QT += 3dcore 3drender 3dinput 3dquick 3dquickextras qml quick

Then, if the lines CONFIG += c++11 console and CONFIG -= app_bundle are present, remove them too. Now, back in the main.cpp file, we need to edit our includes to use the Qt 3D library.

Replace #include <QCoreApplication> with #include <QGuiApplication> and add these lines:

#include <Qt3DQuick/QQmlAspectEngine>
#include <Qt3DQuickExtras/Qt3DQuickWindow>
#include <QtQml>

Within the main block we now have to edit QCoreApplication a(argc, argv); to mirror our include change. So change it to:

QGuiApplication a(argc, argv);

Before the first build / run we should add something to look at. Adding the following block of code before the return statement will provide us with a window:

Qt3DExtras::Quick::Qt3DQuickWindow view;
view.setSource(QUrl("qrc:/main.qml"));
view.show();

Commenting out the line referring to main.qml will allow you to build and run what you have already. If everything has gone to plan, a white window will appear. Now you can uncomment the line and continue onwards!

QRC creation

Okay, let’s get rid of the boring white scene and get something in there. Right-click the ‘Sources’ folder and select ‘Add New…’. From here select the Qt > QML File (Qt Quick 2) option. We’ve gone and named it main so that after clicking next till the end you should now have a main.qml and a main.cpp.

This QML file is now going to hold our scene, but to do that we need some resources. We will achieve this by adding a Qt Resource File, just as we did for main.qml – assuming you have an obj with accompanying textures placed in an assets folder within the project.

So this time right-click on the project folder and select ‘Add New…’. From the Qt menu, select ‘Qt Resource File’ and name it something fitting. When this opens it will look noticeably different to the qml and cpp files. At the bottom you will see the self-descriptive Add, Remove and Remove Missing Files buttons. Click the ‘Add’ button and select ‘Add Prefix’. Now remove everything from the Prefix: text input, just leaving the ‘/‘. Click the ‘Add’ button again, this time selecting the ‘Add Files’ option.

Navigate to your obj and texture files and add them all to the qrc, save and close it. If everything went to plan, a ‘Resources’ folder will now be visible in the Projects window on the left.

Follow this again and add main.qml to the qrc in the same way.

One last thing we need before playing with the scene is a skymap. With the files placed in your assets folder, go ahead and add the skymap to the qrc file.

Gotcha

We use three dds files for our skymaps: irradiance, radiance and specular. If you are trying this on a Mac, you will have to uncompress them or they will not work. Keep the names similar to their compressed versions; for example, we simply added ‘-16f’ to the filename, so our files would be ‘wobbly_bridge_4k_cube_irradiance’ vs ‘wobbly_bridge_4k-16f_cube_irradiance’ respectively.

The necessities

Back to the QML file now: rename the Item { } to an Entity { } and give it the id: scene. Entity is not recognised because we are missing some imports. Hitting F1 with Entity selected shows us that we need to import Qt3D.Core 2.0, so add this to the imports at the top of the file.

There are certain components that a 3D scene must have, a camera and Render settings being two of those. For this example, we’ll throw in a camera controller too so we can move around the scene.

components: [
    RenderSettings {
        activeFrameGraph: ForwardRenderer {
            camera: mainCamera
            clearColor: Qt.rgba(0.1, 0.1, 0.1, 1.0)
        }
    },
    // Event Source will be set by the Qt3DQuickWindow
    InputSettings { }
]

Camera {
    id: mainCamera
    position: Qt.vector3d(30, 30, 30)
    viewCenter: Qt.vector3d(0, 0, 0)
}

FirstPersonCameraController {
    camera: mainCamera
    linearSpeed: 10
    lookSpeed: 50
}

Here we see that Camera is not recognised, so let’s get the missing import.

Gotcha

If you select Camera and hit F1 to find the import, you will in fact be shown the import for the non-Qt3D Camera. The one you will want is: import Qt3D.Render 2.9

The sky is the limit

Let’s put that skymap to use now. Back in the main.cpp file, we need to add code to check whether we’re on macOS or not. If you remember, this is needed because macOS does not support the compressed files and needs its own versions. After the QGuiApplication line, put in the following:

#if defined(Q_OS_MAC)
    const QString envmapFormat = QLatin1String("-16f");
#else
    const QString envmapFormat = QLatin1String("");
#endif

Then after the Qt3DExtras line, add the following:

auto context = view.engine()->qmlEngine()->rootContext();
context->setContextProperty(QLatin1String("_envmapFormat"), envmapFormat);
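
Putting the main.cpp snippets from above together, the file should now look roughly like this (a sketch for orientation only; the exact file generated by the Console Application template may differ slightly):

#include <Qt3DQuick/QQmlAspectEngine>
#include <Qt3DQuickExtras/Qt3DQuickWindow>
#include <QtQml>
#include <QGuiApplication>
#include <QUrl>

int main(int argc, char *argv[])
{
    QGuiApplication a(argc, argv);

#if defined(Q_OS_MAC)
    const QString envmapFormat = QLatin1String("-16f");
#else
    const QString envmapFormat = QLatin1String("");
#endif

    Qt3DExtras::Quick::Qt3DQuickWindow view;

    // expose the skymap filename suffix to QML before loading main.qml
    auto context = view.engine()->qmlEngine()->rootContext();
    context->setContextProperty(QLatin1String("_envmapFormat"), envmapFormat);

    view.setSource(QUrl("qrc:/main.qml"));
    view.show();

    return a.exec();
}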

If you try to build at this point, you will notice various imports missing: one for FirstPersonCameraController, one for InputSettings and one for TexturedMetalRoughMaterial. Hitting F1 on FirstPersonCameraController will give you import Qt3D.Extras 2.0 and F1 on InputSettings will give you import Qt3D.Input 2.0, but then you'll hit a snag. TexturedMetalRoughMaterial may not turn up any documentation, but we'll be kind enough to give you the answer… edit the Qt3D.Extras 2.0 import to be 2.9 instead. If this now works you will get a dark grey window.

Barrel of laughs

The final part will be our mesh (we chose a barrel) and the skymap for it to reflect (although this might not be visible).

In main.qml after the InputSettings{}, throw in the following:

EnvironmentLight {
    id: envLight
    irradiance: TextureLoader {
        source: "qrc:/path/to/your/file" + _envmapFormat + "_cube_irradiance.dds"

        minificationFilter: Texture.LinearMipMapLinear
        magnificationFilter: Texture.Linear
        wrapMode {
            x: WrapMode.ClampToEdge
            y: WrapMode.ClampToEdge
        }
        generateMipMaps: false
    }
    specular: TextureLoader {
        source: "qrc:/path/to/your/file" + _envmapFormat + "_cube_specular.dds"
                
        minificationFilter: Texture.LinearMipMapLinear
        magnificationFilter: Texture.Linear
        wrapMode {
            x: WrapMode.ClampToEdge
            y: WrapMode.ClampToEdge
        }
        generateMipMaps: false
    }
}

You can hit build now to check it’s working, but the scene will still be pretty boring. Throw in your obj to get some eye candy. Here is the code we used after EnvironmentLight:

Mesh {
    source: "qrc:/your/model.obj"
},
Transform {
    translation: Qt.vector3d(4, 0, 2)
},
TexturedMetalRoughMaterial {
    baseColor: TextureLoader {
        format: Texture.SRGB8_Alpha8
        source: "qrc:/path/to/your/Base_Color.png"
    }
    metalness: TextureLoader { source: "qrc:/path/to/your/Metallic.png" }
    roughness: TextureLoader { source: "qrc:/path/to/your/Roughness.png" }
    normal: TextureLoader { source: "qrc:/path/to/your/Normal_OpenGL.png" }
    ambientOcclusion: TextureLoader { source: "qrc:/path/to/your/Mixed_AO.png" }
}

Finally, hit build and then run.

Rendered Barrel

The barrel viewed at the end of What A Mesh pt1

The post What a mesh! appeared first on KDAB.

Partially initialized objects

I found this construct some time ago. It took some reading to understand why it worked. I’m still not sure if it is actually legal, or if it just works because m_derivedData is not accessed in Base::Base.

struct Base {
    std::string& m_derivedData;
    Base(std::string& data) : m_derivedData(data) {
    }
};

struct Derived : public Base {
    std::string m_data;
    Derived() : Base(m_data), m_data("foo") {
    }
};
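
For what it's worth, the delicate part is construction order: base class subobjects are always constructed before the derived class's data members, so when Base::Base runs, m_data's constructor has not executed yet. Binding the reference is most likely fine; actually reading through it inside Base::Base would not be. A small sketch (my own illustration, not from the original post):

#include <iostream>
#include <string>

struct Base {
    std::string& m_derivedData;
    explicit Base(std::string& data) : m_derivedData(data) {
        // std::cout << m_derivedData;  // would access a not-yet-constructed string
    }
    void print() const { std::cout << m_derivedData << '\n'; } // fine after construction
};

struct Derived : public Base {
    std::string m_data;
    Derived() : Base(m_data), m_data("foo") {}
};

int main() {
    Derived d;
    d.print(); // prints "foo"
}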

Scripting In C++

Recently I had to write some scripts to automate some of my daily tasks. So I had to think about which scripting language to use. You’re probably not surprised when I say I went for C++. After trying several hacky approaches, I decided to try out Cling – a Clang-based C++ interpreter created by CERN.

Cling allows developers to write scripts using C and C++. Since it uses the Clang compiler, it supports the latest versions of the C++ standard.
If you execute the interpreter directly, you’ll have a live environment where you can start writing C++ code. In addition to the standard C/C++ syntax, you will find some other commands beginning with ‘.’ (dot).
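
A few of the dot commands you will typically meet (an illustrative selection only; run .help inside the interpreter for the full, version-specific list):

.help    lists the available interpreter commands
.L file  loads a file or library into the current session
.x file  loads a file and calls the function named after it
.q       quits the interpreter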

When you use the interactive interpreter, you can write code like:

#include <stdio.h>
printf("hello world\n");

As you can see, there is no need to worry about scopes; you can just call a function.

If you plan to use Cling as an interpreter for creating your scripts, you need to wrap everything inside a function. By default, the entry point of the script is the function with the same name as the file; it can be customized to call another function. So, the previous example would turn into something like:

#include <stdio.h>                                                                               
                                                                                                       
void _01_hello_world() {                                                                               
    printf("foo\n");                                                                                   
}

…or the C++ version:

#include <iostream>                                                                               

void _02_hello_world()
{
    std::cout << "Hello world" << std::endl;
}

The examples are quite simple, but they show you how to start.

 

What about Qt?

#include <QtWidgets/qapplication.h>                                                                    
#include <QtWidgets/qpushbutton.h>                                                                     
                                                                                                       
void _03_basic_qt()                                                                                    
{                                                                                                      
    int argc = 0;                                                                                      
    QApplication app(argc, nullptr);                                                                   
                                                                                                       
    QPushButton button("Hello world");                                                                 
    QObject::connect(&button, &QPushButton::pressed, &app, &QApplication::quit);                       
    button.show();                                                                                     
                                                                                                       
    app.exec();                                                                                        
}

But the previous code won’t work out of the box – you need to pass some custom parameters to cling:

cling -I/usr/include/x86_64-linux-gnu/qt5 -fPIC -lQt5Widgets 03_basic_qt.cpp

You can customize your “cling” in a custom script based on your needs.

You can also load Cling as a library in your applications to use C++ as a scripting language. I’ll show you how to do this in one of my next blog posts. Cheers!

The post Scripting In C++ appeared first on Qt Blog.

Qt Contributors’ Summit 2018 wrap-up


Qt Contributors’ Summit 2018 is over. Two days of presentations and a lot of discussions during presentations, talk of Qt over coffee and lunch and in restaurants in the evening.

A hundred people gathered to think about where Qt is heading and where it now is. Oslo showed its best with warm and sunny weather. The nordic light is something to see, though it does wake people at awkward hours of the morning. I’ve never had as much company at the early breakfast time before rushing to the event venue for last minute checks 🙂

The major topics of the event included the first early ideas for Qt 6. The first markings on the whiteboard put Qt 6 still securely in the future, several releases out, maybe after Qt 5.14.

Bugreports has a list of suggested changes. If you have something that you would like to see changed the next time there is an ABI break, take a look and see if you need to add to the list.

The C++ version for Qt 6 raised only a mild discussion. This is most likely due to things being a bit open in the C++ development. It seems like C++17 would make the most sense, as staying with an older release might tie the project down too much, but going with C++20 seems aggressive, as it will most likely not be completely stable when Qt 6 needs to be in heavy development. However, there are a lot of open questions around how compilers for different platforms implement the new features coming to the language.


The tools of the project got several sessions, and upcoming improvements and changes to Gerrit, Jira, Coin and the testing tools were discussed. Every area will see changes and improvements going forward. So if your Jira boards look strange one morning, it means that the tools have gotten updates and the process has been streamlined.

The sessions included Qt for Python, which is now in tech preview and officially supported. It builds on the PySide project and has a robust system for making Python bindings for Qt, which can also be used for any C++ project. Check out Qt for Python now. It is still under development, but already usable at a tech preview level, and it will see new features arriving all the time.

The above and all the other sessions can be found on the event program page. People will be adding notes to the session descriptions, posting notes to the development list and also adding actionable items to Bugreports. That makes following up on everything much easier and more visible.


It was a good event, with a lot of things being cleared up and moved forward. It is always good to have the contributors come together and see each other; it helps the project far more than is visible on the surface.

Lastly, it is again time to thank the sponsors for making Qt Contributors’ Summits possible!

KDAB, Viking Software, Froglogic, Intel and Luxoft


The post Qt Contributors’ Summit 2018 wrap-up appeared first on Qt Blog.

Qt for Python 5.11 released

We are happy to announce the first official release of Qt for Python (PySide2).

As the version tag implies, it is based on Qt 5.11 and is therefore the first release to support the Qt 5 series. By and large, the project will follow the general Qt release schedule and versions. Although this is still a Technical Preview, we will support the release with the usual support pattern (except the compatibility one). Unfortunately, Qt versions earlier than 5.11 are not supported. It is available for open source and commercial Qt for Application Development users. Note that there is only one package for both commercial and open source users. We hope we can receive plenty of feedback on what works and what does not; we want to patch early and often.
Eventually the aim is to release Qt for Python 5.12 without the Tech Preview flag.

The Qt for Python development

It’s been a long journey for this release to come to this point. It started two years ago with this announcement from Lars. Since that day we have had our fair share of ups and downs. As the first step, we had to sort out the license situation. We are very grateful for the support and agreements we got from the project contributors during this process.

First development (for our internal Qt for Python team) started based on Qt 5.6 and was mostly focused on stabilizing the code base. With the (at the time) upcoming Qt 5.7 release requiring C++11 support, a major update was needed for Shiboken (our bindings generator). Similar to qdoc and Qt Creator, we walked down the path of deferring C++ parsing to clang. Another major construction yard was the documentation. As some might know, the documentation generation pipeline is much longer than Qt's. It required us to reanimate long lost or dead code in qdoc. Nevertheless, we have not given up on simplifying this further down the road.

Earlier this year we started the generation of snapshots and we are very grateful for all the comments and bug reports we have received from early adopters in the community. Naturally we will continue to publish the snapshots. Another step on this journey has been a technical blog post series describing some of the possibilities of the project (in chronological order):

If you have not read those blogs yet I suggest you head there and get a first impression. The last milestone has been the adoption of the Python Stable ABI. It enables us to significantly reduce the number of packages as the same package can address all Python 3.5 and later versions.

Get Qt for Python

The release supports Python 2.7, 3.5 & 3.6 on the three main desktop platforms. The packages can be obtained from download.qt.io or using pip with

pip install \
  --index-url=https://download.qt.io/official_releases/QtForPython/ pyside2

Eventually, we hope we can upload the packages to the Python Package Index (PyPI) under https://pypi.org/project/PySide2/. Unfortunately, package size restrictions on PyPI could not be lifted in time for the release.

If you want to report a bug, please use the Qt for Python project on bugreports.qt.io. The team can be reached on Freenode #qt-pyside and regularly publishes its progress in the weekly meeting minutes.

Other interesting links:

The post Qt for Python 5.11 released appeared first on Qt Blog.

Release 2.17.0: Firebase Cloud Storage, Downloadable Resources at Runtime and Native File Access on All Platforms

V-Play 2.17.0 introduces Firebase Cloud Storage to store local files in the cloud easily & fast. You can use it to create photo or document sharing apps without any server-side code, or even the next Facebook? It also adds downloadable resources at runtime to reduce initial package size, or to load specific resources only on demand. FileUtils give you convenient access to the native device file system. You can check out two new app demos for using C++ with QML, use new Qt modules with V-Play Live and much more. This is a big update, don’t miss it!

New Firebase Storage to Upload Local Files to the Cloud

With the new FirebaseStorage item you can upload files to the Firebase Cloud Storage. It uploads local files to the cloud file system and returns the public download URL. With Firebase, you can create content sharing apps like Facebook or Snapchat without additional server-side code.

Examples for local files you can upload are:

Here is a code example that shows how to upload an image taken with the camera. After the image is uploaded, we display it in the app.


NOTE: This example uses a public Firebase storage instance, don’t upload any sensitive data!

import QtQuick 2.0
import VPlayApps 1.0
import VPlayPlugins 1.0

App {
  
  NavigationStack {
    Page {
      title: "Firebase Storage"
      
      FirebaseStorage {
        id: storage
        
        config: FirebaseConfig {
          id: customConfig
          
          projectId: "v-play-live-client-test-db"
          databaseUrl: "https://v-play-live-client-test-db.firebaseio.com"
          storageBucket: "v-play-live-client-test-db.appspot.com"
          
          //platform dependent - get these values from the google-services.json / GoogleService-info.plist
          apiKey: Qt.platform.os === "android" ? "AIzaSyD3Pmw89NHhdG9nGIQWwaOB55FuWjcDSS8" : "AIzaSyCheT6ZNFI4mUwfrPRB098a08dVzlhZNME"
          applicationId: Qt.platform.os === "android" ? "1:40083798422:android:ed7cffdd1548a7fa"  : "1:40083798422:ios:ed7cffdd1548a7fa"
          
        }
      }
      
      AppFlickable {
        anchors.fill: parent
        
        Column {
          width: parent.width
          anchors.margins: dp(12)
          
          AppButton {
            text: "Capture image + upload"
            onClicked: nativeUtils.displayCameraPicker()
          }
          
          AppText {
            id: status
            text: "Idle"
          }
          
          // this will display the image after it's uploaded
          AppImage {
            id: img
            width: parent.width
            fillMode: AppImage.PreserveAspectFit
            autoTransform: true
          }
        }
      }
    }
  }
  
  Connections {
    target: nativeUtils
    onCameraPickerFinished: {
      if(accepted) {
        //picture taken with camera is stored at path - upload to Firebase Storage
        storage.uploadFile(path, "test-image" + Date.now() + ".png", function(progress, finished, success, downloadUrl) {
          if(!finished) {
            status.text = "Uploading... " + progress.toFixed(2) + "%"
          } else if(success) {
            img.source = downloadUrl
            status.text = "Upload completed."
          } else {
            status.text = "Upload failed."
          }
        })
      }
    }
  }
}

Download Resources at Runtime

DownloadableResource allows downloading app and game assets on demand during runtime. You no longer need to include all resources like images or videos in the app binary. This results in smaller downloads from the app stores. Cut down your 500MB training app to e.g. only 30MB, and let the user download your workout videos on demand!

The most popular use cases for downloadable packages are:

  • You want to keep your app store binary as small as possible for the first download, to increase the download numbers of your app or game with a smaller download size.
  • You want to download additional content packages after in-app purchases.
  • Keep your initial app size below the store limits:
    • On Google Play, your initial apk size must be below 100MB, after that you need to use Android expansion files. To avoid that, you can just use DownloadableResource and download the additional files at a later time.
    • On iOS, your initial binary size limit is 150MB for mobile network downloads. If your binary is bigger, the user can only download your app over WiFi. Downloading additional resources later also helps you to avoid this limit.
    • With the V-Play DownloadableResource component, you can create a cross-platform solution for downloadable resources that works for both iOS AND Android. It even works on Desktop, with a single source code for all platforms! This way, you do not need to deal with Android expansion files and can create a working solution for all platforms instead.

Here is a small example of how you could use it: after 5 seconds, it downloads and extracts a zip archive containing an image to the default location. Then it replaces the placeholder image with the downloaded image:

import VPlayApps 1.0
import QtQuick 2.0
import VPlay 2.0

App {
  
  // uncomment this to remove the resources on startup, so you can test the downloading again
  //Component.onCompleted: resource1.remove()
  
  // after 5 seconds, we download the resources
  Timer {
    running: true
    interval: 5000
    onTriggered: {
      resource1.download()
    }
  }
  
  NavigationStack {
    Page {
      title: "Downloadable Resource"
      
      DownloadableResource {
        id: resource1
        
        extractAsPackage: true // true for zip archives
        source: "https://v-play.net/web-assets/girl.zip"
      }
      
      AppImage {
        width: parent.width
        fillMode: AppImage.PreserveAspectFit
        // as long as the resource file is not available, we use a placeholder image
        // (the example placeholder is actually also from a web url, to be usable with the web editor)
        // if the resource is available, we get the extracted file url and set it as new image source
        // on your next app start (or live reload) the resource will be available immediately and not downloaded again
        source: resource1.available ? resource1.getExtractedFileUrl("girl.jpg") : "https://v-play.net/web-assets/balloon.png"
      }
    }
  }
}

You have full information about the download, with properties like status, progress and available. You know exactly when resources are available or when to show a loading indicator.

DownloadableResource can load files from any HTTP(S) web address. You can add a secret to protect downloads and restrict them to your app or game only. You can download single files or entire .zip archives, which are automatically extracted.

Once a resource is downloaded, you can use it like any other asset. On your next app start, the resource will be available right away.

FileUtils Class for Cross-Platform Native File Access

You can use the new FileUtils context property to open, read, copy or delete files and folders on any device.

This is an example to download a PDF file and then open it with the native PDF viewer application, using FileUtils::openFile():


import VPlayApps 1.0
import QtQuick 2.0
import VPlay 2.0

App {
  id: app
  // uncomment this to remove the resources on startup, so you can test the downloading again
  //Component.onCompleted: pdfResource.remove()
  NavigationStack {
    Page {
      title: "Download PDF"
      
      Column {
        anchors.centerIn: parent
        
        AppButton {
          text: "Download / Open"
          onClicked: {
            if(pdfResource.available) openPdf()
            else pdfResource.download()
          }
        }
        AppText {
          text: "Status: " + pdfResource.status
        }
      }
    }
  }
  DownloadableResource {
    id: pdfResource
    source: "http://www.orimi.com/pdf-test.pdf"
    storageLocation: FileUtils.DocumentsLocation
    storageName: "pdf-test.pdf"
    extractAsPackage: false
    // if the download is completed, available will be set to true
    onAvailableChanged: if(available) openPdf()
  }
  function openPdf() {
    // you can also open files with nativeUtils.openUrl() now (for paths starting with "file://")
    nativeUtils.openUrl(pdfResource.storagePath)
    // with V-Play 2.17.0 you can also use fileUtils.openFile(), however this is not yet supported by the mobile live scripting apps
    //fileUtils.openFile(pdfResource.storagePath)
  }
}

Two New App Examples How to Integrate C++ with QML

You can check out and copy parts from two brand-new app demos that show how to integrate C++ with QML!

Exposing a C++ Class to QML

The first example shows the different forms of C++ and QML integrations. This example is the tutorial result from How to Expose a Qt C++ Class with Signals and Slots to QML.

Path to the app demo: <Path to V-Play>/Examples/V-Play/appdemos/cpp-qml-integration

Display Data from C++ Models with Qt Charts

The second example shows how to combine a C++ backend that provides the model data for a frontend created in QML. The data is displayed with QML with Qt Charts for both 2D and 3D charts. It also includes shader effects, because, why not?


Path to the app demo: <Path to V-Play>/Examples/V-Play/appdemos/cpp-backend-charts-qml

Live Client Support for Bluetooth, NFC and Pointer Handlers

The V-Play Live Client now supports the Qt modules for Bluetooth, NFC and Pointer Handlers.

Network Adapter Selection in Live Server

You can now change the network adapter used by the Live Server. This fixes a possible issue where the mobile Live Client stalls on the “Connected – Loading Project” screen. If you also face this issue, here is how to fix it.

Open the settings screen from the lower left corner of your Live Server:

vplay-live-server-settings-icon

Now you can change the selected network adapter:

vplay-live-server-network-adapter-settings

This IP is sent to the Live Client to establish the connection. You can try selecting different adapters; the IPs of the Live Server and Live Client should be in the same network.

More Features, Improvements and Fixes

How to Update V-Play

Test out these new features by following these steps:

  • Open the V-Play SDK Maintenance Tool in your V-Play SDK directory.
  • Choose “Update components” and finish the update process to get this release as described in the V-Play Update Guide.


If you haven’t installed V-Play yet, you can do so now with the latest installer from here. Now you can explore all of the new features included in this release!

For a full list of improvements and fixes to V-Play in this update, please check out the change log!

The post Release 2.17.0: Firebase Cloud Storage, Downloadable Resources at Runtime and Native File Access on All Platforms appeared first on V-Play Engine.

Remote UIs with WebGL and WebAssembly

A frequently requested feature by Qt customers is the possibility to access, view and use a Qt-made UI remotely.

However, in contrast to web applications, Qt applications do not offer remote access by nature, as communication with the backend usually happens via direct function calls and not over socket-based protocols like HTTP or WebSockets.

But the good thing is: with the right system architecture, a strong decoupling of frontend and backend, and the functionality of the Qt framework, it is possible to achieve that!

If you want the embedded performance of a Qt application together with zero-installation remote access for your solution, you might consider the following bits of advice and technologies.

Remote access via WebGL Streaming or VNC

If you have a headless device, or an embedded device with a simple QML-made UI that only needs to be accessed remotely by a small number of users via a web browser, WebGL streaming is the right thing for you. In WebGL streaming, the GL commands to render the UI are serialized and sent from the web server to the web browser. The web browser will interpret and render the commands. Here’s how Bosch did it:

On a headless device, you can simply start your application with this command line argument: -platform webgl.

This enables the WebGL streaming platform plugin. WebGL data is accessible as the app runs locally.

For widget-based applications, you might consider the VNC based functionality, but this requires more bandwidth since the UI is rendered into pixel buffers which are sent over the network.

The drawback of both platform plugin approaches, VNC and WebGL, is that within one process you can run your application either remotely or locally, but not both.

If your device has a touchscreen and you still want to have remote access, you need to run at least two processes: One for local and one for remote UI.

The data between both processes is shared via the Qt RemoteObjects library.
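
To make this concrete, here is a minimal sketch (my own illustration; SharedState and the "local:ui_state" address are hypothetical, and a real project would typically describe the shared type in a .rep file and use the repc-generated classes):

// Process 1 (local UI): expose a QObject holding the shared state.
// Requires QT += remoteobjects in the .pro file.
#include <QCoreApplication>
#include <QRemoteObjectHost>
#include "sharedstate.h" // hypothetical QObject with the properties/signals to mirror

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    SharedState state;
    QRemoteObjectHost host(QUrl(QStringLiteral("local:ui_state")));
    host.enableRemoting(&state, QStringLiteral("SharedState"));

    return app.exec();
}

// Process 2 (remote UI) then acquires a replica of the same object:
//
//   QRemoteObjectNode node;
//   node.connectToNode(QUrl(QStringLiteral("local:ui_state")));
//   auto *replica = node.acquireDynamic(QStringLiteral("SharedState"));
//   // once the replica reports isInitialized(), property changes and signals
//   // are forwarded between the two processes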

Example use cases are remote training or remote maintenance for technicians in an industrial scenario, where from the browser you can show and remotely control the mouse pointer on an embedded HMI device.

Have a look at the previous blog posts:

http://blog.qt.io/blog/2017/02/22/qt-quick-webgl-streaming/

http://blog.qt.io/blog/2017/07/07/qt-webgl-streaming-merged/

http://blog.qt.io/blog/2017/11/14/qt-webgl-cinematic-experience/

 

WebAssembly

WebGL streaming and VNC are rather suited for a limited number of users accessing the UI at the same time.

For example, you might have an application that needs to be accessed by a large number of users simultaneously and should not require installation. This could be the case when the application is running as Software-as-a-Service (SaaS) in the cloud. Fortunately, there is another technology that might suit your needs: Qt for WebAssembly.

While WebAssembly itself is not a Remote UI technology like WebGL or VNC, it is an open bytecode format for the web and is standardized by W3C.

Here’s a short video of our sensortag demo running on WebAssembly:

With Qt for WebAssembly, we are able to cross-compile Qt applications into WebAssembly bytecode. The generated WASM files can be served from any web server and run in any modern web browser. For remote access and distributed applications, a separate data channel needs to be opened to the device. Here it needs to be considered that a WebAssembly application runs in a sandbox; thus, the only way to communicate with the web server is via HTTP requests or web sockets.

However, this means that in terms of web server communication, Qt applications then behave exactly like web applications!
Of course, you could still compile and deploy the application directly into another platform-specific format with Qt and use Qt Remote Objects for client-server communication. But only with Qt for WebAssembly, the zero-installation feature and sandboxing come for free 😉
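
To illustrate the point about web-style communication, here is a minimal sketch of a Qt client talking to its server over a web socket (my own example; the endpoint URL is a placeholder, and the Qt WebSockets module is assumed to be available for your target):

// Requires QT += websockets in the .pro file.
#include <QCoreApplication>
#include <QWebSocket>
#include <QUrl>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QWebSocket socket;
    QObject::connect(&socket, &QWebSocket::connected, [&socket]() {
        socket.sendTextMessage(QStringLiteral("hello from the sandboxed client"));
    });
    QObject::connect(&socket, &QWebSocket::textMessageReceived,
                     [](const QString &message) {
        qDebug() << "server says:" << message;
    });

    socket.open(QUrl(QStringLiteral("ws://example.com:8080"))); // placeholder endpoint

    return app.exec();
}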

Exemplary use case scenarios are SaaS applications deployed in the cloud and running in the browser, multi-terminal UIs, UIs for gateways or headless devices without installation.

Have a look at our previous blog posts on Qt for WebAssembly:
https://blog.qt.io/blog/2018/04/23/beta-qt-webassembly-technology-preview/
https://blog.qt.io/blog/2018/05/22/qt-for-webassembly/

That’s it for today! Please check back soon for our final installment of our automation mini-blog-series, where we will look at cloud integration. In the meantime, have a look at our website for more information, or at Lars’ blog post for an overview of our blog series on Qt for Automation 2.0.

The post Remote UIs with WebGL and WebAssembly appeared first on Qt Blog.

Kirigaming – Kolorfill

Last time, I was doing a recipe manager. This time I’ve been doing a game with JavaScript and Qt Quick, and for the first time dipping my feet into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

Kolorfill

The end goal is to make the board a single color in as few steps as possible. The way to do it is a “paint bucket” tool that fills from the top left corner with various colors.

But enough talk. Let’s see some code:
https://cgit.kde.org/scratch/sune/kolorfill.git/

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches welcome, or pointers to which QML components could help me with that.

Qt 5.9.6 Released

I am pleased to announce that Qt 5.9.6 is released today. As a patch release Qt 5.9.6 does not add any new functionality, but provides important bug fixes and other improvements.

With Qt 5.9.6 we are also adding binary installers for QNX 7. Qt 5.9 has supported QNX 7 from the very beginning, but since we have only offered binaries for QNX 6.6, there has been some confusion about whether QNX 7 is supported or not. Now there are binaries for both QNX 7 and QNX 6.6, both of which are fully supported with Qt 5.9. For Qt 5.9.6 the QNX binaries are available as offline installers for those holding a valid commercial license.

Compared to Qt 5.9.5, the new Qt 5.9.6 contains 33 bug fixes. In total there are around 195 changes in Qt 5.9.6 compared to Qt 5.9.5. For details of the most important changes, please check the Change files of Qt 5.9.6.

Qt 5.9 LTS entered the ‘Strict’ phase in the beginning of February 2018, and while Qt 5.9.5 contained some fixes done before entering the Strict phase, Qt 5.9.6 is the first release to receive only changes done during the Strict phase. Going forward, Qt 5.9 continues to receive important bug fixes and significant performance fixes during the ‘Strict’ phase. The intention is to reduce the risk of regressions and behavior changes by restricting the changes to the most important ones. We also continue to create new Qt 5.9.x patch releases, but with a slower cadence than earlier.

You can update to Qt 5.9.6 using the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal or from the qt.io Download page. Offline packages are available for commercial users in the Qt Account portal and at the qt.io Download page for open-source users. You can also try out the Commercial evaluation option from the qt.io Download page.

The post Qt 5.9.6 Released appeared first on Qt Blog.

Qt Creator 4.6.2 released

We are happy to announce the release of Qt Creator 4.6.2!

This fixes reparsing of QMake projects, for example when project files change, and a couple of other issues.
Have a look at our change log for more details.

Get Qt Creator 4.6.2

The opensource version is available on the Qt download page, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.6.2 is also available through an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.6.2 released appeared first on Qt Blog.

Integrating QML and Rust: Creating a QMetaObject at Compile Time

In this blog post, I would like to present a research project I have been working on: Trying to use QML from Rust, and in general, using a C++ library from Rust.

The project is a Rust crate which allows you to create a QMetaObject at compile time from pure Rust code. It is available here: https://github.com/woboq/qmetaobject-rs

Qt and Rust

There were already numerous existing projects that attempt to integrate Qt and Rust. A great GUI toolkit should be working with a great language.

As far back as 2014, the project cxx2rust tried to generate automatic bindings to C++, and in particular to Qt5. The blog post explains all the problems. Another project that automatically generates C++ bindings for Qt is cpp_to_rust. I would not pursue this way of automatically creating bindings, because it cannot produce a binding that can be used from idiomatic Rust code without using unsafe.

There is also qmlrs. The idea here is to manually develop a small wrapper C++ library that exposes extern "C" functions. Then a Rust crate with a good and safe API can internally call these wrappers.
Similarly, the project qml-rust does approximately the same, but uses the DOtherSide bindings as the Qt wrapper library. The same library is used for the D and Nim bindings for QML.
These two projects only concentrate on QML and not on QtWidgets or the whole of Qt. Since the API is then much smaller, this greatly simplifies the tedious work of creating the bindings manually. Both of these projects generate a QMetaObject at runtime from information given by Rust macros. Also, you cannot use arbitrary types as parameters for your properties or method arguments; you are limited to the built-in types that can be converted.

Finally, there is Jos van den Oever's Rust Qt Binding Generator. To use this project, one has to write a JSON description of the interface one wants to expose; the generator then generates the Rust and C++ glue code so that you can easily call Rust from your Qt C++/QML application.
What I think is a problem is that you are still expected to write some C++ and add an additional step to your build system. That is perfectly fine if you want to add Rust to an existing C++ project, but not if you just want a GUI for a Rust application. Also, writing this JSON description is a bit alien.

I started the qmetaobject crate mainly because I wanted to create the QMetaObject at Rust compile time. The QMetaObject is a data structure which contains all the information about a class deriving from QObject (or Q_GADGET), so the Qt runtime can connect signals with slots, or read and write properties. Normally, the QMetaObject is built at compile time from a C++ file generated by moc, Qt's meta object compiler.
I'm a fan of creating QMetaObjects: I am contributing to Qt, and I also wrote moc-ng and Verdigris, which are all about creating QMetaObjects. Verdigris uses the power of C++ constexpr to create the QMetaObject at compile time, and I wanted to try using Rust to see if it could also be done at compile time.

The qmetaobject crate

The crate uses a custom derive macro to generate the QMetaObject. Custom derive works by adding an annotation in front of a Rust struct, such as #[derive(QObject)] or #[derive(QGadget)]. Upon seeing this annotation, the rustc compiler will call the function from the qmetaobject_impl crate which implements the custom derive. The function has the signature fn(input : TokenStream) -> TokenStream. It is called at compile time, takes as input the source code of the struct it derives, and should generate more source code that will then be compiled.
What we do in this custom derive macro is first parse the content of the struct and look for the relevant annotations. I've used a set of macros such as qt_property!, qt_method! and so on, similar to Qt's C++ macros. I could also have used custom attributes, but I chose macros as it seemed more natural coming from the Qt world (but perhaps this should be revised).

Let's simply go over a dummy example of using the crate.

extern crate qmetaobject;
use qmetaobject::*; // For simplicity

// Deriving from QObject will automatically implement the QObject trait and
// generates QMetaObject through the custom derive macro.
// This is equivalent to add the Q_OBJECT in Qt code.
#[derive(QObject,Default)]
struct Greeter {
  // We need to specify a C++ base class. This is done by specifying a
  // QObject-like trait. Here we can specify other QObject-like traits such
  // as QAbstractListModel or QQmlExtensionPlugin.
  // The 'base' field is in fact a pointer to the C++ QObject.
  base : qt_base_class!(trait QObject),
  // We declare the 'name' property using the qt_property! macro.
  name : qt_property!(QString; NOTIFY name_changed),
  // We declare a signal. The custom derive will automatically create
  // a function of the same name that can be called to emit it.
  name_changed : qt_signal!(),
  // We can also declare invokable methods.
  compute_greetings : qt_method!(fn compute_greetings(&self, verb : String) -> QString {
      return (verb + " " + &self.name.to_string()).into()
  })
}

fn main() {
  // We then use qml_register_type as an equivalent to qmlRegisterType
  qml_register_type::<Greeter>(cstr!("Greeter"), 1, 0, cstr!("Greeter"));
  let mut engine = QmlEngine::new();
  engine.load_data(r#"
    import QtQuick 2.6; import QtQuick.Window 2.0; import Greeter 1.0;
    Window {
      visible: true;
      // We can instantiate our rust object here.
      Greeter { id: greeter; name: 'World'; }
      // and use it by accessing its property or method.
      Text { text: greeter.compute_greetings('hello'); }
    }"#.into());
  engine.exec();
}

In this example, we used qml_register_type to register the type with QML, but we can also set properties on the global context. Here is an example with a model, which also demonstrates QGadget:

// derive(QGadget) is the equivalent of Q_GADGET.
#[derive(QGadget,Clone,Default)]
struct Point {
  x: qt_property!(i32),
  y: qt_property!(i32),
}

#[derive(QObject, Default)]
struct Model {
  // Here the C++ class will derive from QAbstractListModel
  base: qt_base_class!(trait QAbstractListModel),
  data: Vec<Point>
}

// But we still need to implement the QAbstractListModel manually
impl QAbstractListModel for Model {
  fn row_count(&self) -> i32 {
    self.data.len() as i32
  }
  fn data(&self, index: QModelIndex, role:i32) -> QVariant {
    if role != USER_ROLE { return QVariant::default(); }
    // We use the QGadget::to_qvariant function
    self.data.get(index.row() as usize).map(|x|x.to_qvariant()).unwrap_or_default()
  }
  fn role_names(&self) -> std::collections::HashMap<i32, QByteArray> {
    vec![(USER_ROLE, QByteArray::from("value"))].into_iter().collect()
  }
}

fn main() {
  let mut model = Model { data: vec![ Point{x:1,y:2} , Point{x:3, y:4} ], ..Default::default() };
  let mut engine = QmlEngine::new();
  // Registers _model as a context property.
  engine.set_object_property("_model".into(), &mut model);
  engine.load_data(r#"
    import QtQuick 2.6; import QtQuick.Window 2.0;
    Window {
      visible: true;
      ListView {
        anchors.fill: parent;
        model: _model;  // We reference our Model object
        // And we can access the property or method of our gadget
        delegate: Text{ text: value.x + ','+value.y; } }
    }"#.into());
  engine.exec();
}

Other implemented features include the creation of Qt plugins such as QQmlExtensionPlugin without writing a line of C++, only using rust and cargo. (See the qmlextensionplugins example.)

QMetaObject generation

The QMetaObject consists of a bunch of tables in the data section of the binary: a table of strings and a table of integers. There is also a function pointer with code used to read/write the properties or call the methods.

The custom derive macro will generate the tables as &'static [u8]. The moc-generated code contains QByteArrayData, built in C++, but since we don't want to use a C++ compiler to generate the QMetaObject, we have to lay out all the bytes of the QByteArrayData one by one. Another tricky part is the creation of the Qt binary JSON for the plugin metadata. The Qt binary JSON is also an undocumented data structure which needs to be built byte by byte, respecting many invariants such as alignment and order of the fields.

The code for the static_metacall is just an extern "C" fn. Then we can assemble all these pointers in a QMetaObject. We cannot create a const static structure containing pointers, so this is implemented using the lazy_static! macro.

QObject Creation

Qt needs a QObject* pointer for our object. It has virtual methods to get the QMetaObject. The same applies to QAbstractListModel or any other class we would like to inherit from, which has many virtual methods that we wish to override.

We will then have to materialize an actual C++ object on the heap. This C++ counterpart is created by some of the C++ glue code. We will store a pointer to this C++ counterpart in the field annotated with the qt_base_class! macro. The glue code will instantiate a RustObject<QObject>. It is a class that inherits from QObject (or any other QObject derivative) and overrides the virtual methods to forward them to a callback in Rust, which is then able to call the right function on the Rust object.

One of the big problems is that in Rust, contrary to C++, objects can be moved in memory at will. This is a big problem, as the C++ object contains a pointer to the Rust object, so the Rust object needs somehow to be fixed in memory. This can be achieved by putting it into a Box or an Rc, but even then it is still possible to move the object in safe code. This problem is not entirely fixed, but the interface takes the object by value and moves it to an immutable location. The object can then still be accessed safely through a QJSValue object.

Note that QGadget does not need a C++ counter-part.

C++ Glue code

For this project I need a bit of C++ glue code to create the C++ counterpart of my object, and to access the C++ API for Qt types or the QML API. I am using the cpp! macro from the cpp crate. This macro allows embedding C++ code directly into Rust code, with very little boilerplate compared to manually creating callbacks and declaring extern "C" functions.
I even contributed a cpp_class macro which allows wrapping C++ classes from rust.

Should an API be missing, it is easy to add the missing wrapper function. Also, when we want to inherit from a class, we just need to imitate what is done for QAbstractListModel: override all the virtual functions we want to override, and forward them to the function from the trait.

Final Words

My main goal with this crate was to see whether we can integrate QML with idiomatic and safe Rust code, without requiring the developer to use C++ or any other alien tool. I also had performance in mind and wanted to create the QMetaObject at compile time and limit the amount of conversions and heap allocations.
Although there are still some problems to solve and the exposed API is far from complete, this is already a beginning.

You can get the qmetaobject crate at this URL: https://github.com/woboq/qmetaobject-rs

Cutelyst on TechEmpower benchmarks round 16

Yesterday TechEmpower released the results for round 16 of their benchmarking tests; you can see their blog post about it here. And like for round 15, I'd like to add my commentary about it here.

Before you look into the results web site, it's important to be aware of a few things. First, round 16 runs on new hardware, newer and more powerful than in the previous rounds. They also dockerized the tests, which allowed us to pull different distro images, cache package installs and isolate from other frameworks. So don't try to compare to round 15.

Putting Cutelyst under testing there has brought many benefits to it. In previous rounds we noticed that, due to testing on a server with many CPU cores, letting the operating system do the scheduling wouldn't be a good idea, so we added a CPU affinity feature to cutelyst-wsgi; while uWSGI also has this, our logic does the core pinning for threads and not only for processes.

While testing at home on an AMD Phenom II X4, pre-fork mode was much faster than threaded mode, but on my 5th-gen Intel laptop the results were closer and threading was much better; thanks to TFB I found out that each process was being assigned to CPU core 0, which made 28 processes bound to a single core. In this new round you can now see that the difference between running threaded or in pre-fork mode is negligible, and in some tests pre-fork is even faster. Pre-fork does consume more RAM (running 100 threads of Virtlyst uses around 35 MiB, while 100 processes use around 130 MiB), but multiple processes are better if your code happens to crash.

An important feature of the TechEmpower benchmarks is their filters, which allow you to filter out the stuff that doesn't matter to you. Due to the above fix, you can look at the results and see the close threads vs. pre-fork match here.

Using an HTML templating engine is very important for real-world apps, so in round 16 I've added a test that uses Grantlee for rendering. It is around 46% slower than creating the HTML directly in C++; there might be room for improvements in Grantlee, but the fortunes results aren't bad.

Some people asked why there were so many occurrences of Cutelyst in the tests. The reason is that these benchmarks allow you to test some features and see what performs better. In round 15 it became clear that using epoll vs. Qt's glib-based event loop was a clear win, so in round 16 we don't test Qt's glib event loop anymore and epoll is now the default event dispatcher in Cutelyst 2 on Linux.

This round, however, I tried to reason about TCP_NODELAY, and while the results are close, it's fairly clear that, due to the blocking nature of the SQL tests, TCP_NODELAY decreases latency and increases performance a bit, since it doesn't wait to assemble a bigger TCP packet.

As I also mentioned in the Cutelyst 2.4.0 release, a fix was made to avoid a stack overflow, and you can see the results in the "Plain Text" test: before that fix Cutelyst was crashing and respawning, which limited the results to 1,000,000 requests/second; after the fix it reaches 2,800,000 requests/second.

This round used Ubuntu 18.04 as base, Cutelyst 2.4.0 and Qt 5.9.5.

Last but not least, if you still try to compare to round 15 :) you might notice Cutelyst went lower in the ranking. That's partly due to frameworks getting optimizations, but mostly due to new micro/platform frameworks; you can enable the Fullstack classification filter if you want to compare frameworks that provide features similar to what Cutelyst offers.

Getting the most of signal/slot connections

Signals and slots were one of the distinguishing features that made Qt an exciting and innovative tool back in the day. But sometimes you can teach an old dog new tricks, and QObjects gained a new way to connect signals to slots in Qt 5, plus some extra features to connect to other functions which are not slots. Let’s review how to get the most out of that feature, as sketched below. This assumes you are already moderately familiar with signals and slots.
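As a quick, minimal refresher (my own example, not taken from the post), the Qt 5 connect syntax takes pointers to member functions and also accepts functions that are not slots, such as lambdas:

#include <QCoreApplication>
#include <QTimer>
#include <QDebug>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    QTimer timer;

    // Qt 5 syntax: pointer to member function, checked at compile time.
    QObject::connect(&timer, &QTimer::timeout, &app, &QCoreApplication::quit);

    // Connecting to something that is not a slot: a lambda.
    QObject::connect(&timer, &QTimer::timeout, []() {
        qDebug() << "timeout fired";
    });

    timer.start(1000);
    return app.exec();
}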

Building a Bridge from Qt to DDS

In our previous posts, we looked into various aspects of using Qt in a telemetry scenario. Part one focused on reducing message overhead, while part two was about serialization.

To demonstrate a typical IoT scenario, we used MQTT as a protocol and the Qt MQTT module available in Qt for Automation. However, the landscape of protocols in the automation world is bigger and different protocols provide different advantages, usually with a cost involved.

A well-known weakness of MQTT is that it relies on a server instance. This implies that all nodes talk to one central place. This can easily become a communication bottleneck. For MQTT version 3.1.1 many broker providers implemented their own solutions to tackle this issue, and to some extent, this got taken care of in MQTT version 5. Those solutions do add additional servers, which sync with each other, but do not remove the need for a server completely.

One prominent protocol which allows for server-less communication is the Data Distribution Service (DDS). DDS is a standard available via the Object Management Group; a full description is available on their website.

In addition to the D2D communication capabilities, DDS includes a very interesting design approach: data-centric development. The idea behind data centricity is that you as a developer do not need to care about how data is transferred and/or synced between nodes; the protocol handles all of this in the background. While this is convenient, optimizations like those in our previous posts are only possible in a limited fashion.

Qt does not provide a module integration for DDS. However, existing implementations written in C++ are available, and consequently using both technologies in one project is doable. In the following, we will go through the steps to create a Data-Centric Publish-Subscribe (DCPS) application with DDS.

In this example we are going to use the DDS implementation by RTI, which has the highest market adoption currently. Nevertheless, a couple of alternatives do exist like Vortex OpenSplice or OpenDDS. The design principles stay the same for any of those products.

To be able to sync data of the same type on all ends, a description in the form of an IDL is required. Similar to protobuf, the sensor design looks like the following:

/* Struct to describe sensor */
struct sensor_information {
    string ID; //@key
    double ambientTemperature;
    double objectTemperature;
    double accelerometerX;
    double accelerometerY;
    double accelerometerZ;
    double altitude;
    double light;
    double humidity;
};

To convert the IDL to source code, a tool called rtiddsgen is invoked during the build process. To integrate it into qmake, an extra compiler step is required.

RTIDDS_IDL = ../common/sensor.idl
ddsgen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx # Additionally created files get their own rule
ddsgen.variable_out = GENERATED_SOURCES
ddsgen.input = RTIDDS_IDL
ddsgen.commands = $${RTIDDS_PREFIX}\\bin\\rtiddsgen -language c++ -d $${OUT_PWD} ${QMAKE_FILE_NAME}

QMAKE_EXTRA_COMPILERS += ddsgen

rtiddsgen generates more than one source and one header file. For each IDL file (here sensor.idl), these additional files are created:

  • Sensor.cxx / .h
  • SensorPlugin.cxx / .h
  • SensorSupport.cxx / .h

The source files in particular need to become part of the project. Otherwise, they will not get compiled and you will see missing symbols in the linking phase.

Furthermore, adding extra compiler steps also registers a clean step to remove the generated files again. If not all files are removed properly before re-invoking rtiddsgen, the tool no longer generates correct code and causes various compile errors.

To fix this, one additional compiler step is created for each generated file. Those are:

ddsheadergen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.h
ddsheadergen.variable_out = GENERATED_FILES
ddsheadergen.input = RTIDDS_IDL
ddsheadergen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx
ddsheadergen.commands = echo "Additional Header: ${QMAKE_FILE_NAME}"

ddsplugingen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Plugin.cxx
ddsplugingen.variable_out = GENERATED_SOURCES
ddsplugingen.input = RTIDDS_IDL
ddsplugingen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx # Depend on the output of rtiddsgen
ddsplugingen.commands = echo "Additional Source(Plugin): ${QMAKE_FILE_NAME}"

ddspluginheadergen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Plugin.h
ddspluginheadergen.variable_out = GENERATED_FILES
ddspluginheadergen.input = RTIDDS_IDL
ddspluginheadergen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx
ddspluginheadergen.commands = echo "Additional Header(Plugin): ${QMAKE_FILE_NAME}"

ddssupportgen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Support.cxx
ddssupportgen.variable_out = GENERATED_SOURCES
ddssupportgen.input = RTIDDS_IDL
ddssupportgen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx # Depend on the output of rtiddsgen
ddssupportgen.commands = echo "Additional Source(Support): ${QMAKE_FILE_NAME}"

ddssupportheadergen.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}Support.h
ddssupportheadergen.variable_out = GENERATED_FILES
ddssupportheadergen.input = RTIDDS_IDL
ddssupportheadergen.depends = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.cxx
ddssupportheadergen.commands = echo "Additional Header(Support): ${QMAKE_FILE_NAME}"

QMAKE_EXTRA_COMPILERS += ddsgen ddsheadergen ddsplugingen ddspluginheadergen ddssupportgen ddssupportheadergen

Those compile steps do nothing but write the filename which has been generated in the previous step, but allow for proper cleanup. Note that setting the dependencies of the steps correctly is important. Otherwise, qmake might invoke the steps in the wrong order and try to compile a not-yet-generated source file.

Moving on to the C++ source code, the steps are rather straightforward, though there are some nuances between creating a publisher and a subscriber.

Generally, each application needs to create a participant. A participant registers itself with the domain and enables communication with all other devices, or the cloud. Next, a participant creates a topic, which allows data transmission via a dedicated channel. Participants that are not registered to the topic will not receive messages. This allows for filtering and reduces data transfer.

Next a publisher is created, which in turn creates a datawriter. Quoting from the standard: “A Publisher is an object responsible for data distribution. It may publish data of different data types. A DataWriter acts as a typed accessor to a publisher.” The same applies to subscribers as well.

    const DDS_DomainId_t domainId = 0;
    DDSDomainParticipant *participant = nullptr;

    participant = DDSDomainParticipantFactory::get_instance()->create_participant(domainId,
                                                                                  DDS_PARTICIPANT_QOS_DEFAULT,
                                                                                  NULL,
                                                                                  DDS_STATUS_MASK_NONE);
    if (!participant) {
        qDebug() << "Could not create participant."; 
        return -1;
    } 

    DDSPublisher *publisher = participant->create_publisher(DDS_PUBLISHER_QOS_DEFAULT,
                                              NULL,
                                              DDS_STATUS_MASK_NONE);
    if (!publisher) {
        qDebug() << "Could not create publisher.";
        return -2;
    }

    const char *typeName = sensor_informationTypeSupport::get_type_name();
    DDS_ReturnCode_t ret = sensor_informationTypeSupport::register_type(participant, typeName);
    if (ret != DDS_RETCODE_OK) {
        qDebug() << "Could not register type."; 
        return -3; 
    } 

    DDSTopic *topic = participant->create_topic("Sensor Information",
                                                typeName,
                                                DDS_TOPIC_QOS_DEFAULT,
                                                NULL,
                                                DDS_STATUS_MASK_NONE);
    if (!topic) {
        qDebug() << "Could not create topic."; 
        return -4; 
    } 
    DDSDataWriter *writer = publisher->create_datawriter(topic,
                                                         DDS_DATAWRITER_QOS_DEFAULT,
                                                         NULL,
                                                         DDS_STATUS_MASK_NONE);
    if (!writer) {
        qDebug() << "Could not create writer.";
        return -5;
    }

The writer object is generic so far. We aim to have a data writer specific to the sensor information we created with the IDL. The sensorSupport.h header provides a method declaration to do exactly this:

sensor_informationDataWriter *sensorWriter = sensor_informationDataWriter::narrow(writer);

To create a sensor data object, we also use the support methods

    sensor_information *sensorInformation = sensor_informationTypeSupport::create_data();

If a sensorInformation instance is now supposed to publish its content, this is achieved with a call to write()

    ret = sensorWriter->write(*sensorInformation, sensorHandle);

After this call, the DDS framework takes care of publishing the object to all other subscribers.
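The sensorHandle passed to write() identifies this particular sensor instance. A sketch of how it could be obtained (assuming the register_instance() method generated by rtiddsgen):

// Register the instance with the data writer; the key field (ID)
// identifies this sensor, and the returned handle is reused for write().
DDS_InstanceHandle_t sensorHandle = sensorWriter->register_instance(*sensorInformation);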

For creating a subscriber, most steps are the same as for a publisher. But instead of using a narrowed DDSDataReader, RTI provides listeners, which allow receiving data via a callback pattern. A listener needs to be passed to the subscriber when creating a data reader:

ReaderListener *listener = new ReaderListener();
DDSDataReader *reader = subscriber->create_datareader(topic,
                                                      DDS_DATAREADER_QOS_DEFAULT,
                                                      listener,
                                                      DDS_LIVELINESS_CHANGED_STATUS |
                                                      DDS_DATA_AVAILABLE_STATUS);

The ReaderListener class looks like this

    
class ReaderListener : public DDSDataReaderListener {
  public:
    ReaderListener() : DDSDataReaderListener()
    {
        qDebug() << Q_FUNC_INFO;
    }
    void on_requested_deadline_missed(DDSDataReader *, const DDS_RequestedDeadlineMissedStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_requested_incompatible_qos(DDSDataReader *, const DDS_RequestedIncompatibleQosStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_sample_rejected(DDSDataReader *, const DDS_SampleRejectedStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_liveliness_changed(DDSDataReader *, const DDS_LivelinessChangedStatus &status) override
    {
        // Liveliness only reports availability, not the initial state of a sensor
        // Follow up changes are reported to on_data_available
        qDebug() << Q_FUNC_INFO << status.alive_count;
    }

    void on_sample_lost(DDSDataReader *, const DDS_SampleLostStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_subscription_matched(DDSDataReader *, const DDS_SubscriptionMatchedStatus &) override
    {
        qDebug() << Q_FUNC_INFO;
    }

    void on_data_available(DDSDataReader* reader) override;
};

As you can see, the listener is able to report a lot of information. In our example, though, we are mostly interested in receiving the data, trusting that everything else is doing fine.

on_data_available() has a DDSDataReader argument, which is narrowed next. The reader provides a method called take(), which passes all available data updates to the invoker. Available data comes in sequences, specifically sensor_informationSeq.
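A sketch of that narrowing step, analogous to the writer above (assuming the generated sensor_informationDataReader type):

// Narrow the generic reader to the type generated from the IDL,
// so take() returns sensor_information samples.
sensor_informationDataReader *sensorReader = sensor_informationDataReader::narrow(reader);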

    sensor_informationSeq data;
    DDS_SampleInfoSeq info;
    DDS_ReturnCode_t ret = sensorReader->take(
        data, info, DDS_LENGTH_UNLIMITED,
        DDS_ANY_SAMPLE_STATE, DDS_ANY_VIEW_STATE, DDS_ANY_INSTANCE_STATE);

    if (ret == DDS_RETCODE_NO_DATA) {
        qDebug() << "No data, continue...";
        return;
    } else if (ret != DDS_RETCODE_OK) {
        qDebug() << "Could not receive data:" << ret;
        return;
    }

    for (int i = 0; i < data.length(); ++i) {
        if (info[i].valid_data) {
            qDebug() << data[i];
        } else {
            qDebug() << "Received Metadata on:" << i;
        }
    }

Note that the sequences contain all updates to all sensors. We are not filtering to one specific sensor.

Once we are done with processing the data, we have to return it to the reader.

    ret = sensorReader->return_loan(data, info);

One optimization capability in DDS is zero-copy data. This means that the internal representation of the data is passed to the developer to avoid creating copies. That can be important when the data size is big.

Again, the source is located here. When running the application, it is important to note that the NDDSHOME environment variable needs to be specified to create a virtual mesh network for experimentation.

As a last step, we want to integrate the publisher with a QML application. This demo is also available in the repository.


We are going to re-use the principles from our protobuf examples in the previous post of this series. Basically, we create a SensorInformation class which holds a sensor_information member created from DDS.

class SensorInformation : public QObject
{
    Q_OBJECT
    Q_PROPERTY(double ambientTemperature READ ambientTemperature WRITE setAmbientTemperature NOTIFY ambientTemperatureChanged)
    Q_PROPERTY(double objectTemperature READ objectTemperature WRITE setObjectTemperature NOTIFY objectTemperatureChanged)
[…]
    void init();
    void sync();
private:
    sensor_information *m_info;
    sensor_informationDataWriter *m_sensorWriter;
    DDS_InstanceHandle_t m_handle;
    QString m_id;
};

In this very basic example, init() will initialize the participant, publisher and datawriter similar to the steps described above.

We added a function sync(), which will sync the current data state to all subscribers.

void SensorInformation::sync()
{
    DDS_ReturnCode_t ret = m_sensorWriter->write(*m_info, m_handle);
    if (ret != DDS_RETCODE_OK) {
        qDebug() << "Could not write data.";
    }

}

 

sync() is invoked whenever a property changes, for instance:

void SensorInformation::setAmbientTemperature(double ambientTemperature)
{
    if (qFuzzyCompare(m_info->ambientTemperature, ambientTemperature))
        return;

    m_info->ambientTemperature = ambientTemperature;
    emit ambientTemperatureChanged(ambientTemperature);
    sync();
}

And that is all that needs to be done to integrate DDS into a Qt application. For the subscriber part, we could use the callback-based listeners and update the objects accordingly. This is left as an exercise for the reader.

While experimenting with DDS and Qt, a couple of ideas came up. For instance, a QObject-based declaration could be parsed and, for all its properties, a matching IDL could be generated. That IDL would automatically be integrated and source code templates created for the declaration, for both publisher and subscriber. That would allow developers to use only Qt and get an IoT syncing mechanism, DDS in this case, out of the box. What would you think about such an approach?

To summarize, the IoT world is full of different protocols and standards. Each has its pros and cons, and Qt cannot (and will not) provide an implementation for all of them. Recently, we have been focusing on MQTT and OPC UA (another blog post on this topic will come soon). But DDS is another good example of a technology used in the field. What we wanted to highlight with this post is that it is always possible to integrate other C++ technologies into Qt and use both side by side, sometimes even benefiting each other.

The post Building a Bridge from Qt to DDS appeared first on Qt Blog.

Qt Creator 4.7 Beta released

We are happy to announce the release of Qt Creator 4.7 Beta!

C++ Support

The greatest improvements were again made to our Clang-based C++ support. First of all, we made the Clang code model the default for Qt Creator 4.7. That is quite a milestone after years of experimenting and developing, so Nikolai has wrapped up the history and current state in a separate blog post. I’ll just summarize some of the most important changes in 4.7 here.

We did another upgrade of the backend to Clang 6.0, which was released this March. This brings all its improvements like new and fixed diagnostics to Qt Creator as well.

The outline pane and dropdown, the locator filter for symbols in the current document, and Follow Symbol inside the current translation unit (meaning current document and its includes) are now based on Clang too.

You can now perform the static checks from Clang-Tidy and Clazy on the whole project or a subset of it. This is implemented as a new analyzer tool in Debug mode (Analyzer > Clang-Tidy and Clazy). We also improved the settings, and fixed an issue where the static checks had a performance impact on code completion.

Note that you can opt out of using the Clang code model by manually disabling the ClangCodeModel plugin in Help > About Plugins (Qt Creator > About Plugins on macOS). That said, it is very important that we get as much feedback as possible at this stage, so please create bug reports for all issues that you find.

QML Support

The QML code model now includes minimal support for user-defined enums, which are new in Qt 5.10. Additionally, some reformatting errors have been fixed.

Test Integration

If your text cursor in the C++ editor is currently inside a test function, you can directly run that individual test with the new Run Test Under Cursor action. The test integration now also marks the location of failed tests in the editor. For Google Test we added support for filtering.

Other Improvements

The kit options got their own top-level entry in the preferences dialog, which is also the very first one in the list. In the file system view you can now create new folders. By default it now shows folders before files, but you can switch back to the previous, purely alphabetic sorting by unchecking the option in the filter button menu. In the context menu on file items at various places, for example the projects tree, file system view and open documents list, there is a new item that opens a properties dialog, showing the name, path, MIME type, permissions and many more properties of the file.

There have been many more improvements and fixes. Please refer to our changes file for a more comprehensive list.

Get Qt Creator 4.7 Beta

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.7 Beta is also available under Preview > Qt Creator 4.7.0-beta1 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

 

Note on Wayland support: Even though stated differently in Johan’s blog post, we explicitly disabled the Wayland platform plugin for Qt Creator 4.7 beta1. It broke too heavily in some setups. We will look into adding it again as an opt-in option after the beta.

The post Qt Creator 4.7 Beta released appeared first on Qt Blog.

Qt Creator’s Clang Code Model

Starting with the upcoming Qt Creator 4.7, the Clang Code Model is enabled by default. That’s a great time to look at the differences between our old code model and the new Clang Code Model. But first things first.

History of C/C++ Support in Qt Creator

Since the beginning of Qt Creator the C/C++ support was implemented around a custom C++ frontend (lexer, preprocessor, parser, lookup). The whole support was referred to as the “C/C++ Code Model”, the code model being the collection of language-specific services, for example code completion and semantic highlighting.

Back then the next C++ standard was a long time coming (C++0x – C++1x – C++11), the tooling support from Clang was not where it is today, and a custom C++ front-end gave us some extra flexibility when it comes to performance, error recovery and support of Qt specifics. The code model around the custom front-end served us well (and still does) – the appropriate trade-offs between precision and performance were made back then. However, maintaining a custom C++ front-end is not a trivial task, notably so during the interesting times for the company we were part of back then, and with only a few developers on it. With the availability of Clang and its tooling, especially from the point where it became self-hosting, we did some experiments to base the code model on it – the “Clang Code Model” was born. The experiments looked promising in general, but stability and performance were a problem from the beginning, especially when considering all platforms.

Fast forward: today C++ evolves much faster, Clang and its tooling are prospering and we have picked up working on the Clang Code Model.

Status of the Clang Code Model

We believe we have addressed the most severe performance and stability issues by now. With the Clang Code Model you get up-to-date language support based on libclang 6.0, with greatly improved precision and diagnostics.

The first big area we have tackled is the set of services related to the currently open file, not yet taking any project or global index information into account (that is work in progress). Currently, the following services are implemented with the Clang Code Model:

  • Code completion
  • Syntactic and semantic highlighting
  • Diagnostics with fixits and integrated Clang-Tidy and Clazy checks
  • Follow Symbol (partly)
  • Outline of symbols
  • Tooltips
  • Renaming of local symbols

For the services not yet ported, the implementations based on the custom front-end are used. This includes, for example, indexing, find usages and refactoring.

Due to Clang’s precision the Clang Code Model is inherently slower than the old code model and has lower error recovery capabilities. However, the extra precision and diagnostics will result in fewer build errors and thus reduce your edit-build cycle count.

Differences to the old code model

Now what are the visible changes that you can observe as a user?

Updated language support

The Clang Code Model is based on libclang 6.0 and as such it can parse C++17 and more.

Precision

You will immediately notice the improved precision in highlighting and diagnostics.

For example, our custom front-end never validated function calls, so you would only notice invalid ones when building. With the Clang Code Model only valid function calls are properly highlighted, whereas invalid ones are rendered as “Text”, that is, black by default.

Another example is code completion. Items are no longer offered for declarations that are below your completion position, except class members of course. Also, completion of const objects will take the “constness” into account.

Diagnostics

Chances are that you will notice Clang’s diagnostics early on, as they are displayed as inline annotations in the editor. Detailed information is provided by tooltips. Look out for the light bulb at the end of the line, as it indicates the availability of “Fixits”: small, local refactoring actions that fix diagnostics.

Of course the diagnostics for the current document are also available in the Issues pane. This behavior can be disabled by using the “Filter by categories” icon in the Issues pane toolbar.

You can set up diagnostic configurations in C++ > Code Model > “Manage…” in the options dialog. The predefined configurations are really a starting point and it is recommended to adapt them to your needs. For example, you could set up a configuration for a specific project.

Note that Clang-Tidy and Clazy checks are integrated, too. Enabling these checks for the Clang Code Model will naturally slow down re-parsing, but depending on your machine and the specific selection of checks this can be a great help. Starting from this version, it is also possible to run Clang-Tidy and Clazy checks for the whole project or a subset of it (Menu: Analyze > “Clang-Tidy and Clazy…”). For example, you could set up a diagnostic configuration for this mode that also enables the expensive checks and run it once in a while.

Code Completion

In general, the completion is more context-sensitive now. For example, completing after “switch (“, “case “, “int foo = ” or after “return ” will put more relevant items to the top of the completion list.

The completion of templated classes and functions is improved. Completion of unique_ptr<>/shared_ptr<> objects and friends now works on all platforms.

Does your code base make use of doxygen comments? Then you will probably be happy to see doxygen comments as part of the completion item, too.

Miscellaneous

Italic arguments in a function call indicate that the function call might modify the argument (“Output Argument” in the text editor settings).

Tooltips resolve “auto” properly and also show doxygen comments.

A visible document is automatically re-parsed if one of its headers is modified, either in an editor or on disk.

Future of C/C++ Support

Given the evolving tooling around C++ it is pretty unlikely that we will go back to a custom front-end.

Given the rise of the Language Server Protocol and the development of clangd, how do we proceed from here? Implementing an LSP client makes sense for any IDE in order to also get support for other languages, and we are actively working on that. Having that will also help us to evaluate clangd further.

The post Qt Creator’s Clang Code Model appeared first on Qt Blog.

Meet us at TU Automotive Detroit 2018 – Booth #B136


Are you going to TU Automotive Detroit this year? So are we! Join us at the Qt booth #B136, have a chat and check out what’s new in the world of automotive HMIs with our latest demos #builtWithQt. 

  • A certifiably functionally safe digital cockpit with Hypervisor. Build multi-process UIs and incorporate external applications in a certified functionally safe two-screen UI. 
  • E-bike with fastboot. An E-bike instrument cluster concept designed and implemented with Qt Quick, that shows how great an HMI on a low-end SoC, even without a GPU, can look.  

If you are attending the event and would like to set up a meeting with us contact Chuck Mallory, Director of Strategic Accounts, Automotive, at Chuck.Mallory@qt.io or 1(419) 305-0018.   

Looking forward to seeing you there! 

The post Meet us at TU Automotive Detroit 2018 – Booth #B136 appeared first on Qt Blog.

Write your own Python bindings

Hi.

In a previous blog post we touched upon the topic of creating Python bindings for the Qt libraries.

Today however, we’ll take a sneak peek at how you can create bindings for your own project.

We are happy to announce that Qt for Python will also include Shiboken – our binding generation tool.

Read the material below and you’ll obtain an understanding of how to generate Python bindings for a simple C++ library. Hopefully it will encourage you to do the same with custom libraries of your own.

As with any Qt project we are happy to review contributions to Shiboken, thus improving it for everyone.

Sample library

For the purposes of this post, we will use a slightly nonsensical custom library called Universe. It provides two classes: Icecream and Truck.
Icecreams are characterized by a flavor. And Truck serves as a vehicle of Icecream distribution for kids in a neighborhood. Pretty simple.

We would like to use those classes inside Python though. A use case would be adding additional ice cream flavors or checking whether ice cream distribution was successful.

In simple words, we want to provide Python bindings for Icecream and Truck, so that we can use them in a Python script of our own.

We will be omitting some content for brevity, but you can check the full source code inside the repository under pyside-setup/examples/samplebinding.

The C++ library

First, let’s take a look at the Icecream header:

class Icecream
{
public:
    Icecream(const std::string &flavor);
    virtual Icecream *clone();
    virtual ~Icecream();
    virtual const std::string getFlavor();

private:
    std::string m_flavor;
};

and the Truck header:

class Truck {
public:
    Truck(bool leaveOnDestruction = false);
    Truck(const Truck &other);
    Truck& operator=(const Truck &other);
    ~Truck();

    void addIcecreamFlavor(Icecream *icecream);
    void printAvailableFlavors() const;

    bool deliver() const;
    void arrive() const;
    void leave() const;

    void setLeaveOnDestruction(bool value);
    void setArrivalMessage(const std::string &message);

private:
    void clearFlavors();

    bool m_leaveOnDestruction = false;
    std::string m_arrivalMessage = "A new icecream truck has arrived!\n";
    std::vector<Icecream *> m_flavors;
};

Most of the API should be easy enough to understand, but we’ll summarize the important bits:

  • Icecream is a polymorphic type and is intended to be overridden
  • getFlavor() will return the flavor depending on the actual derived type
  • Truck is a value type that contains owned pointers, hence the copy constructor and co.
  • Truck stores a vector of owned Icecream objects which can be added via addIcecreamFlavor()
  • The Truck’s arrival message can be customized using setArrivalMessage()
  • deliver() will tell us if the ice cream delivery was successful or not

Shiboken typesystem

To inform shiboken of the APIs we want bindings for, we provide a header file that includes the types we are interested in:

#ifndef BINDINGS_H
#define BINDINGS_H
#include "icecream.h"
#include "truck.h"
#endif // BINDINGS_H

In addition, shiboken also requires an XML typesystem file that defines the relationship between C++ and Python types:

<?xml version="1.0"?>
<typesystem package="Universe">
    <primitive-type name="bool"/>
    <primitive-type name="std::string"/>
    <object-type name="Icecream">
        <modify-function signature="clone()">
            <modify-argument index="0">
                <define-ownership owner="c++"/>
            </modify-argument>
        </modify-function>
    </object-type>
    <value-type name="Truck">
        <modify-function signature="addIcecreamFlavor(Icecream*)">
            <modify-argument index="1">
                <define-ownership owner="c++"/>
            </modify-argument>
        </modify-function>
    </value-type>
</typesystem>

The first important thing to notice is that we declare "bool" and "std::string" as primitive types.
A few of the C++ methods use these as parameter / return types and thus shiboken needs to know about them. It can then generate relevant conversion code between C++ and Python.
Most C++ primitive types are handled by shiboken without requiring additional code.

Next, we declare the two aforementioned classes. One of them as an “object-type” and the other as a “value-type”.

The main difference is that object-types are passed around in generated code as pointers, whereas value-types are copied (value semantics).

By specifying the names of the classes in the typesystem file, shiboken will automatically try to generate bindings for all methods declared in the classes, so there is no need
to mention all the method names manually…

Unless you want to somehow modify the function. Which leads us to the next topic: ownership rules.

Shiboken can’t magically know who is responsible for freeing C++ objects allocated in Python code. It can guess, but it’s not always the correct guess.
There can be many cases: Python should release the C++ memory when the ref count of the Python object becomes zero. Or Python should never delete the C++ object assuming that it will
be deleted at some point inside the C++ library. Or maybe it’s parented to another object (like QWidgets).

In our case the clone() method is only called inside the C++ library, and we assume that the C++ code will take care of releasing the cloned object.

As for addIcecreamFlavor(), we know that a Truck owns an Icecream object, and will remove it once the Truck is destroyed. Thus again, the ownership is set to “c++.”
If we didn’t specify the ownership rules, in this case, the C++ objects would be deleted when the corresponding Python names go out of scope.

Building

To build the Universe custom library and then generate bindings for it, we provide a well-documented, mostly generic CMakeLists.txt file, which you can reuse for your own libraries.

It mostly boils down to calling “cmake .” to configure the project and then building with the tool chain of your choice (we recommend the ‘(N)Makefiles’ generator though).

As a result of building the project, you end up with two shared libraries: libuniverse.(so/dylib/dll) and Universe.(so/pyd).
The former is the custom C++ library, and the latter is the Python module that can be imported from a Python script.

Of course there are also intermediate files created by shiboken (the .h / .cpp files generated for creating the Python bindings). Don’t worry about them unless you need to
debug why something fails to compile or doesn’t behave as it should. You can submit us a bug report then!

More detailed build instructions and things to take care of (especially on Windows) can be found in the example README.md file.

And finally, we get to the Python part.

Using the Python module

The following small script will use our Universe module, derive from Icecream, implement virtual methods, instantiate objects, and much more:

from Universe import Icecream, Truck

class VanillaChocolateIcecream(Icecream):
    def __init__(self, flavor=""):
        super(VanillaChocolateIcecream, self).__init__(flavor)

    def clone(self):
        return VanillaChocolateIcecream(self.getFlavor())

    def getFlavor(self):
        return "vanilla sprinked with chocolate"

class VanillaChocolateCherryIcecream(VanillaChocolateIcecream):
    def __init__(self, flavor=""):
        super(VanillaChocolateCherryIcecream, self).__init__(flavor)

    def clone(self):
        return VanillaChocolateCherryIcecream(self.getFlavor())

    def getFlavor(self):
        base_flavor = super(VanillaChocolateCherryIcecream, self).getFlavor()
        return base_flavor + " and a cherry"

if __name__ == '__main__':
    leave_on_destruction = True
    truck = Truck(leave_on_destruction)

    flavors = ["vanilla", "chocolate", "strawberry"]
    for f in flavors:
        icecream = Icecream(f)
        truck.addIcecreamFlavor(icecream)

    truck.addIcecreamFlavor(VanillaChocolateIcecream())
    truck.addIcecreamFlavor(VanillaChocolateCherryIcecream())

    truck.arrive()
    truck.printAvailableFlavors()
    result = truck.deliver()

    if result:
        print("All the kids got some icecream!")
    else:
        print("Aww, someone didn't get the flavor they wanted...")

    if not result:
        special_truck = Truck(truck)
        del truck

        print("")
        special_truck.setArrivalMessage("A new SPECIAL icecream truck has arrived!\n")
        special_truck.arrive()
        special_truck.addIcecreamFlavor(Icecream("SPECIAL *magical* icecream"))
        special_truck.printAvailableFlavors()
        special_truck.deliver()
        print("Now everyone got the flavor they wanted!")
        special_truck.leave()

After importing the classes from our module, we create two derived Icecream types which have customized “flavours”.

We then create a truck, add some regular flavored Icecreams to it, and the two special ones.

We try to deliver the ice cream.
If the delivery fails, we create a new truck with the old one’s flavors copied over, and a new *magical* flavor that will surely satisfy all customers.

The script above succinctly shows usage of deriving from C++ types, overriding virtual methods, creating and destroying objects, etc.

As mentioned above, the full source and additional build instructions can be found in the project repository under pyside-setup/examples/samplebinding.

We hope that this small introduction showed you the power of Shiboken, how we leverage it to create Qt for Python, and how you could too!

Happy binding!

The post Write your own Python bindings appeared first on Qt Blog.

Serialization in and with Qt

In our first part of this series, we looked at how to set up messages, combine them, and reduce their overhead in the context of telemetry sensors.

This part focuses on the payload of messages and how to optimize them.

There are multiple methods to serialize an object with Qt. In part one, we used JSON. For this, all sensor information is stored in a QJsonObject, and a QJsonDocument takes care of streaming the values into a QByteArray.

QJsonObject jobject;
jobject["SensorID"] = m_id;
jobject["AmbientTemperature"] = m_ambientTemperature;
jobject["ObjectTemperature"] = m_objectTemperature;
jobject["AccelerometerX"] = m_accelerometerX;
jobject["AccelerometerY"] = m_accelerometerY;
jobject["AccelerometerZ"] = m_accelerometerZ;
jobject["Altitude"] = m_altitude;
jobject["Light"] = m_light;
jobject["Humidity"] = m_humidity;

QJsonDocument doc( jobject );

return doc.toJson();

JSON has several advantages:

  • Textual JSON is declarative, which makes it readable to humans
  • The information is structured
  • Exchanging generic information is easy
  • JSON allows extending messages with additional values
  • Many solutions exist to receive and parse JSON in cloud-based solutions

However, there are some limitations to this approach. First, creating a JSON message can be a heavy operation taking many cycles. The benchmark in part 2 of our examples repository highlights that serializing and de-serializing 10.000 messages takes around 263 ms. That might not read like a significant number per message, but in this context time equals energy. This can significantly impact a sensor which is designed to run for years without being charged.

Another aspect is that the payload for an MQTT message per sensor update is 346 bytes. Given that the sensor sends just eight doubles and one capped string, this can be a potentially huge overhead.

In the comments of my previous post, using QJsonDocument::Compact was recommended, which reduces the payload size to 290 bytes on average.

So, how can we improve on this?

Remember I was referring to textual JSON before? As most of you know, there is also binary JSON, which might reduce readability, but all other aspects are still relevant. Most importantly, from our benchmarks we can see that simply switching from doc.toJson() to doc.toBinaryData() doubles the speed of the test, reducing one iteration of the benchmark to 125 ms.
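As a sketch, the change amounts to swapping the last line of the serialization code shown earlier:

QJsonDocument doc( jobject );

return doc.toBinaryData(); // binary JSON instead of doc.toJson()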

Checking the payload, the message size is now 338 bytes; the difference is almost negligible. However, this might change in different scenarios, for instance if you add more strings to a message.

Depending on the requirements and whether third-party solutions can be added to the project, other options are available.

In case the project resides “within the Qt world” and the whole flow of data is determined and not about to change, QDataStream is a viable option.

Adding support for this in the SensorInformation class requires two additional operators

QDataStream &operator<<(QDataStream &, const SensorInformation &);
QDataStream &operator>>(QDataStream &, SensorInformation &);

The implementation is straightforward as well. Below it is shown for the serialization:

QDataStream &operator<<(QDataStream &out, const SensorInformation &item)
{
    QDataStream::FloatingPointPrecision prev = out.floatingPointPrecision();
    out.setFloatingPointPrecision(QDataStream::DoublePrecision);
    out << item.m_id
        << item.m_ambientTemperature
        << item.m_objectTemperature
        << item.m_accelerometerX
        << item.m_accelerometerY
        << item.m_accelerometerZ
        << item.m_altitude
        << item.m_light
        << item.m_humidity;
    out.setFloatingPointPrecision(prev);
    return out;
}
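The matching deserialization operator is not shown in the post; a sketch (mirroring the serialization above and reading the members back in the same order) could look like this:

QDataStream &operator>>(QDataStream &in, SensorInformation &item)
{
    QDataStream::FloatingPointPrecision prev = in.floatingPointPrecision();
    in.setFloatingPointPrecision(QDataStream::DoublePrecision);
    // Read the members back in exactly the order they were written.
    in >> item.m_id
       >> item.m_ambientTemperature
       >> item.m_objectTemperature
       >> item.m_accelerometerX
       >> item.m_accelerometerY
       >> item.m_accelerometerZ
       >> item.m_altitude
       >> item.m_light
       >> item.m_humidity;
    in.setFloatingPointPrecision(prev);
    return in;
}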

 

Consulting the benchmarks, using QDataStream resulted in only 26 ms for this test case, which is close to 10 times faster than textual JSON. Furthermore, the average message size is only 84 bytes, compared to 290. Hence, if the above limitations are acceptable, QDataStream is certainly a viable option.

If the project lets you add further third-party components, one of the most prominent serialization solutions is Google’s Protocol Buffers (protobuf).

To add protobuf to our solution, a couple of changes need to be made. First, protobuf uses an IDL to describe the data structures or messages. The SensorInformation design is:

syntax = "proto2";

package serialtest;

message Sensor {
    required string id = 1;
    required double ambientTemperature = 2;
    required double objectTemperature = 3;
    required double accelerometerX = 4;
    required double accelerometerY = 5;
    required double accelerometerZ = 6;
    required double altitude = 7;
    required double light = 8;
    required double humidity = 9;
}

To add protobuf’s code generator (protoc) to a qmake project, you must add an extra compiler step similar to this:

PROTO_FILE = sensor.proto
protoc.output = $${OUT_PWD}/${QMAKE_FILE_IN_BASE}.pb.cc
protoc.commands = $${PROTO_PATH}/bin/protoc -I=$$relative_path($${PWD}, $${OUT_PWD}) --cpp_out=. ${QMAKE_FILE_NAME}
protoc.variable_out = GENERATED_SOURCES
protoc.input = PROTO_FILE
QMAKE_EXTRA_COMPILERS += protoc

Next, to have a comparable benchmark in terms of object size, the generated struct is used as a member of a SensorInformationProto class, which inherits QObject, just like in the QDataStream and JSON examples.

class SensorInformationProto : public QObject
{
    Q_OBJECT
    Q_PROPERTY(double ambientTemperature READ ambientTemperature WRITE setAmbientTemperature NOTIFY ambientTemperatureChanged)
[...]

public:
    SensorInformationProto(const std::string &pr);
[...]

     std::string serialize() const;
 [...]

private:
    serialtest::Sensor m_protoInfo;
};

The serialization function of protoInfo is generated by protoc, so the step to create the payload to be transmitted looks like this:

std::string SensorInformationProto::serialize() const
{
    std::string res;
    m_protoInfo.SerializeToString(&res);
    return res;
}

 

Note that, compared to the previous solutions, protobuf uses std::string. This means you lose the capabilities of QString, unless the string is stored as a byte array (manual conversion is required). Then again, this will slow down the whole process due to the parsing involved.
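For illustration, a conversion between the two on the Qt side might look like this (assuming the generated id() and set_id() accessors of the protobuf message):

// Convert the protobuf std::string field to a QString for use in Qt code...
QString id = QString::fromStdString(m_protoInfo.id());
// ...and back again when updating the message.
m_protoInfo.set_id(id.toStdString());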

From a performance perspective, the benchmark results look promising. The 10.000 items benchmark only takes 5 ms, with an average message size of 82 bytes.

As a summary, the following table visualizes the various approaches:

Approach        Payload size (bytes)   Time (ms)
JSON (text)     346                    263
JSON (binary)   338                    125
QDataStream     84                     26
Protobuf        82                     5

 

One promising alternative is CBOR, which is currently being implemented by Thiago Macieira for Qt 5.12. However, as development is still in progress, it was too early to include it in this post. From discussions on our mailing list, the results look promising though, with a significant performance advantage over JSON while keeping all its benefits.

We have seen various approaches to serialize data into the payload of an MQTT message. Those can be done purely within Qt, or with external solutions (like protobuf). Integration of external solutions into Qt is easy.

As a final disclaimer, I would like to highlight that those benchmarks are all based on the scenario of the sensor demo. The number of data values per message is fairly small. If the structs are bigger, the results might differ and other approaches might lead to better results.

In our next installment, we will be looking at message integration with DDS. For an overview of all the articles in our automation mini-series, please check out Lars’ post.

The post Serialization in and with Qt appeared first on Qt Blog.

Released and Certified: Qt Safe Renderer – An ASIL-D Functional Safety Solution

We are pleased to announce that the Qt Safe Renderer has been certified to meet the following functional safety standards:

  • ISO 26262:2011-6, ISO 26262:2011-8 (ASIL-D) (road vehicles functional safety)
  • IEC 61508:2010-3 7.4.4 (SIL 3) (electrical/electronic/programmable safety-related systems)
  • EN 50128:2011 6.7.4 (SIL 4) (railway applications)
  • IEC 62304:2015(2006+A1) (medical devices)

With this certification, the Qt Safe Renderer is now commercially available with all certification artifacts provided to customers.

The Qt Safe Renderer solves functional safety requirements in the Automotive, Medical, Automation and multiple other industries.


Risk Mitigation

The Qt Safe Renderer offers a high level of reliability and risk mitigation to any application where the correct rendering of graphical information is of paramount importance for safety. An example of this from the automotive industry would be the warning icons on a car’s dashboard (for example in the images below). For medical devices, it would be critical patient readings.

Functional Safety Icons

Achieving certification requires rigorous development processes, from requirements, to architectural design, to implementation, to testing. When we built the Qt Safe Renderer, we documented every single one of these steps. This documentation, specifically the design documentation, the safety manual, and verification documentation are all provided in the software delivery. This gives companies peace of mind that their safety-critical UI systems will be easier to certify end-to-end when using the Qt Safe Renderer and of course they save time and money along the way by using a pre-certified component.

Designer Tooling

In addition to the certified run-time, the Qt Safe Renderer also comes with designer tooling. This tooling allows designers to easily add safety-critical elements to their designs. When they run and test their design on the desktop, these items run like regular graphical elements. When built for an embedded safety-critical system, the tooling separates out the safety-critical elements for execution by the Qt Safe Renderer run-time. This is performed seamlessly, making development cycles quick and easy and leaving designers to focus on what they do best.


Further Information

We’ve written quite a few blog posts in the past that include technical details on functional safety and our solution, which can be found here. Make sure to watch this webinar, where we talk about functional safety and how the Qt Safe Renderer can be used to create functionally safe systems.

To learn more click on the button below.


The post Released and Certified: Qt Safe Renderer – An ASIL-D Functional Safety Solution appeared first on Qt Blog.

Vulkan for Qt on macOS

by Morten Johan Sørvig (Qt Blog)

Sometimes, development efforts align such that new use cases can be enabled with modest extra effort. The QtBase dev branch (which will become Qt 5.12) now has experimental Vulkan support, courtesy of MoltenVK and prior work in Qt. Let’s take a look at what has happened.


Backstory

Last year, Laszlo wrote about Vulkan support in Qt on Windows, Linux, and Android. This work included adding QSurface::VulkanSurface and also adding cross-platform classes such as QVulkanInstance and QVulkanWindow.

Then, a couple of months ago, the MoltenVK Vulkan to Metal translation library was open sourced. One of the issues raised in the bug reporter was how to make this work with Qt; this requires configuring the NSView used by QWindow to be layer-backed by a CAMetalLayer.

And, as it happens, we were already looking at making use of Metal in Qt, starting with adding support for CAMetalLayer as a backing layer. We’re also looking at different ways to integrate application Metal code with Qt, but that’s a topic for another blog post.


// Snippet from Qt internal layer management code

@implementation QT_MANGLE_NAMESPACE(QNSView) (DrawingAPI)

- (NSViewLayerContentsRedrawPolicy)layerContentsRedrawPolicy
{
    // We need to set this explicitly since the super implementation
    // returns LayerContentsRedrawNever for custom layers like CAMetalLayer.
    return NSViewLayerContentsRedrawDuringViewResize;
}

- (void)updateMetalLayerDrawableSize:(CAMetalLayer *)layer
{
    CGSize drawableSize = layer.bounds.size;
    drawableSize.width *= layer.contentsScale;
    drawableSize.height *= layer.contentsScale;
    layer.drawableSize = drawableSize;
}

- (void)layoutSublayersOfLayer:(CALayer *)layer
{
    if ([layer isKindOfClass:CAMetalLayer.class])
        [self updateMetalLayerDrawableSize:static_cast<CAMetalLayer *>(layer)];
}

- (void)viewDidChangeBackingProperties
{
    CALayer *layer = self.layer;
    if (!layer)
        return;

    layer.contentsScale = self.window.backingScaleFactor;

    // Metal layers must be manually updated on e.g. screen change
    if ([layer isKindOfClass:CAMetalLayer.class]) {
        [self updateMetalLayerDrawableSize:static_cast<CAMetalLayer *>(layer)];
        [self setNeedsDisplay:YES];
    }
}

@end

With all this in place, the missing part was the platform plugin glue code, which adds support for QSurface::VulkanSurface (on macOS) and also implements a QPlatformVulkanInstance subclass that abstracts over platform-specific Vulkan tasks such as creating a Vulkan surface for a native surface.

How do I use it?

The minimal way

Call setSurfaceType(QSurface::VulkanSurface) in the constructor of your QWindow subclass. You can now access the NSView with QWindow::winId(), and pass that on to MoltenVK. The NSView is configured in such a way that MoltenVK can render to it. See also MoltenVK issue #78 for more info. Note that I have not actually tried this myself 🙂

This does not require enabling Vulkan support when building Qt, which again means that the Qt binary package can be used (from 5.12 onwards).
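A minimal sketch of what that could look like (hypothetical class and variable names; untested, matching the caveat above):

#include <QGuiApplication>
#include <QWindow>

// Hypothetical minimal window that exposes its NSView for MoltenVK.
class VulkanWindow : public QWindow
{
public:
    VulkanWindow() { setSurfaceType(QSurface::VulkanSurface); }
};

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);
    VulkanWindow window;
    window.resize(640, 480);
    window.show();

    // On macOS, winId() returns the NSView pointer; hand it to MoltenVK
    // (e.g. via vkCreateMacOSSurfaceMVK) to create a Vulkan surface.
    void *nsView = reinterpret_cast<void *>(window.winId());
    Q_UNUSED(nsView);

    return app.exec();
}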

The “using the Qt API” way

This makes using QVulkanWindow and friends possible, at the cost of having to build Qt from source.

Configure and build Qt with Vulkan support by adding the MoltenVK includes (build MoltenVK according to instructions first):

./configure -I /path/to/MoltenVK/Package/Release/MoltenVK/include

You should now see “Vulkan: yes” in the configure report; if so, the QVulkan* classes are available.

Then, tell Qt the location of libMoltenVK.dylib before starting apps or examples:

export QT_VULKAN_LIB=/path/to/MoltenVK/Package/Release/MoltenVK/macOS/libMoltenVK

This re-uses the existing Qt code which loads and resolves the Vulkan library at run time. On macOS we might want to link against MoltenVK.framework instead, but we’ll leave that as a future improvement for now.

Finally

I’d like to round off by repeating the “experimental” warning. In particular we don’t have enough insight into the inner workings of MoltenVK to know if there are potential points of incompatibility. Please report any findings below or in a QTBUG.

The post Vulkan for Qt on macOS appeared first on Qt Blog.

Cutelyst 2.4.0 released

Cutelyst, the C++/Qt web framework, got another update.

This release includes:

  • Fix for our EPoll event loop spotted by our growing community
  • An SQL query conversion optimization (also from the community); we have helper methods to create QVariantHash, QVariantMap, QVariantList... from QSqlQuery to be used by Grantlee templates
  • New SQL query conversion to JSON types (QJsonObject, QJsonArray...), making it easier and faster to build REST services, as shown in our blog post about Creating RESTful applications with Qt and Cutelyst
  • New boolean methods to test for the HTTP method used (isPUT(), isDelete()...), easing the work on REST apps
  • Fix for our CPU affinity logic; thanks to the TechEmpower benchmarks I found out that when creating many worker processes they were all being assigned to a single CPU core. I knew that pre-fork was usually slower than threads on modern CPUs, but was always intrigued why the difference was so big. Hopefully it will be smaller now.

Since I'm currently building a REST service, OAuth2 seems like it would be of good use; one user has started working on this, and hopefully it will soon be ready.

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.4.0

 

Playing with coroutines and QML

** aka Playing with coroutines and Qt (Part II)

Since commit 5747a7530206ac410b6bd7c1b8490d7d389ad3a5, JavaScript generators are supported in QML. This makes it possible to write the Fibonacci example from the previous post using generators and QML.

Mandatory example:

import QtQuick 2.11
import QtQuick.Window 2.11
import QtQuick.Controls 2.2

Window {
    property var fibonacci: function* () {
        yield "0: 0"
        yield "1: 1"
        var f0 = 1, f1 = 0, n = 2;
        while (true) {
            var next = f1 + f0;
            f0 = f1;
            f1 = next;
            yield n++ + ": " + (f0 + f1);
        }
    }

    visible: true
    width: 640
    height: 480
    title: qsTr("Fibonacci")

    Row {
        anchors.fill: parent
        Button {
            id: button
            property var coroutine: fibonacci()
            width: parent.width / 2; height: parent.height
            text: coroutine.next().value
            onPressed: text = coroutine.next().value
        }

        Button {
            text: "Reset!"
            width: parent.width / 2; height: parent.height
            onPressed: {
                button.coroutine = fibonacci()
                button.text = button.coroutine.next().value
            }
        }
    }
}

Have fun!

The post Playing with coroutines and QML appeared first on Qt Blog.

What’s new with the Wayland platform plugin in Qt 5.11?

Wayland is a display server protocol used on modern Linux systems; the Qt Wayland platform plugin lets Qt applications run on Wayland display servers (compositors).

Apart from bug fixes, the Qt 5.11 release contains a substantial number of improvements, especially for desktop users.

Key composition support


Support for compose keys has been missing for a long time and has finally been added. That means you can now enter characters that require a sequence of keys, such as:

  • ¨, A to write “ä”
  • compose key, S, S to write “ß”

Qt Wayland in official binaries

Starting with Qt 5.11 and Qt Creator 4.7, binaries in the official installers now also include Qt Wayland (previously you would have to build it yourself).

So the official build of Qt Creator itself now runs on Wayland, as well as the applications you build with the official Qt packages.

UPDATE: It’s still unclear whether QtWayland will be in the official release of Qt Creator 4.7.0. Due to a mistake, Qt Wayland was made the default platform plugin on gnome-shell sessions in Qt 5.11. The combination of gnome-shell and Qt Wayland still results in too many bugs, and hence Qt Wayland was removed from the Qt Creator pre-release builds altogether, at least until Qt Wayland is made opt-in again.

Qt Creator 4.7 nightly running on Wayland

There are nightlies for QtCreator 4.7 available if you want to try it out before the official release.

Fallback to X11 if Wayland is not available

The common way of selecting a Qt platform plugin has been to set the environment variable QT_QPA_PLATFORM=wayland. This has been a problem on Linux desktops, because some applications (for instance the official QtCreator package) use a bundled version of Qt that doesn't include Wayland, and will fail to launch with the following message:

This application failed to start because it could not find or load the Qt platform plugin "wayland" in "".

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, xcb.

Reinstalling the application may fix this problem.

In Qt 5.11 we added support for fallback platform plugins. This means you can now set QT_QPA_PLATFORM="wayland;xcb", which makes Qt use the xcb (X11) plugin if Wayland is not available.

Improved high-dpi support

If you have a multi-monitor setup with both high-dpi and low-dpi screens, you’ll be happy to hear that windows now switch to the appropriate scale when moved from one screen to another. No more tiny or blurry windows 🙂

Testing and continuous integration

QA-wise Qt Wayland has seen significant improvements lately. We now run a subset of the QtBase unit tests on every patch set submitted, which means we will catch more bugs earlier. However, this is a topic suitable for a separate blog post.

News from the development branch

There have also been many recent changes that didn't make it into the 5.11 release. State changes, such as resizing or maximizing, have seen a lot of work. We now finally support maximizing and full screen on xdg-shell-v6. We have also added a new shell integration for xdg-shell stable.

Qt Wayland backports repository

If you want to test the new features and fixes in Qt Wayland, but don’t want to wait for a release, or if you don’t want to upgrade or compile all of Qt, I have set up an unofficial qtwayland-backports repository.

It contains branches with new versions of Qt Wayland that compile against older versions of Qt. For instance, if you use Qt 5.10.x, you can still test recent changes from the Qt Wayland dev branch by using the dev-for-5.10 branch.

Arch Linux users can install the AUR package, qt5-wayland-dev-backport-git, as a drop-in replacement for qt5-wayland. Again, note that these backports are unofficial and there are no guarantees that I will keep updating them.

The post What’s new with the Wayland platform plugin in Qt 5.11? appeared first on Qt Blog.

Playing with coroutines and Qt

Hello!
I was recently wondering about the status of coroutines in C++ and found several implementations. I decided to choose this one for my experiment.
It is simple, easy to use, and works on Linux and Windows.

My goal was to find a way to run code asynchronously without waiting for signals to trigger my slots, and without calling QCoreApplication::processEvents or creating a QEventLoop on the stack.

My first approach was to convert the processEvents function of a custom event dispatcher into a coroutine and use yield there. After several failures, I decided not to continue this way.

My next attempt was to convert a slot into a coroutine:

QTimer::singleShot(0, std::bind(&coroutine::resume, coroutine::create([]() { /* ... */ })));

Inside this lambda, the CPU executes the code until the yield; at that point it jumps back to the application event loop.
The full code is:

#include "coroutine.h"
#include <QtCore>
#include <QtWidgets>

int main(int argc, char **argv)
{
  QApplication app(argc, argv);
  QPushButton fibonacciButton("0: 0");
  fibonacciButton.show();
  QObject::connect(&fibonacciButton, &QPushButton::pressed,
                   std::bind(&coroutine::resume, coroutine::create([&]() {
    qulonglong f0 = 0, f1 = 1, n = 3; // continue the sequence after the two hard-coded steps below
    fibonacciButton.setText(QString("1: 1"));
    coroutine::yield();
    fibonacciButton.setText(QString("2: 1"));
    coroutine::yield();
    forever {
      auto next = f1 + f0;
      f0 = f1;
      f1 = next;
      fibonacciButton.setText(QString("%0: %1").arg(n++).arg(f0 + f1));
      coroutine::yield();
    }
  })));
  return app.exec();
}

Here we can see a button connected to a lambda function which calculates the numbers in the Fibonacci sequence. After calculating the next number, we call yield, which jumps from this function back to the event loop. When the user presses the button again, execution returns to the line after the yield.

This example works because the user needs to press the button again to resume the execution of the code.

However, sometimes we want to resume the execution automatically. To do this, we schedule a resume of the coroutine and then yield:

void qYield()
{
  const auto routine = coroutine::current();
  QTimer::singleShot(0, std::bind(&coroutine::resume, routine));
  coroutine::yield();
}

The first line gets the identifier of the current coroutine and the second schedules a resume. The yield then returns control through the previous stack frames back to the main loop, where the enqueued resume continues the coroutine unconditionally.
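
As a usage illustration (this is not from the original post, just a hypothetical sketch built on the same coroutine API and the qYield() helper defined above), a long-running computation can yield after each slice of work so the event loop keeps processing user input:

#include "coroutine.h"
#include <QtCore>
#include <QtWidgets>

int main(int argc, char **argv)
{
  QApplication app(argc, argv);
  QProgressBar bar;
  bar.setRange(0, 100);
  bar.show();
  // Run a "long" computation inside a coroutine, giving control back to the
  // event loop after every step so the window stays responsive.
  QTimer::singleShot(0, std::bind(&coroutine::resume, coroutine::create([&]() {
    for (int i = 0; i <= 100; ++i) {
      bar.setValue(i); // one slice of work
      qYield();        // schedule a resume and return to the event loop
    }
  })));
  return app.exec();
}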

The next step is to resume only when a certain condition is met. Qt provides signals that indicate when something has happened, so a better way to yield execution is to wait for a signal:

template <typename Func>
void qAwait(const typename QtPrivate::FunctionPointer<Func>::Object *sender, Func signal)
{
  const auto routine = coroutine::current();
  const auto connection = QObject::connect(sender, signal,
                                           std::bind(&coroutine::resume, routine));
  coroutine::yield();
  QObject::disconnect(connection);
}

Instead of enqueuing a resume we create a temporary connection to resume the execution of our slot.

An example of this can be:

#include "coroutine.h"
#include <QtCore>
#include <QtWidgets>
#include <QtNetwork>

int main(int argc, char **argv)
{
  QApplication app(argc, argv);
  QPlainTextEdit textEdit;
  textEdit.show();
  QTimer::singleShot(0, std::bind(&coroutine::resume, coroutine::create([&]() {
    QUrl url("http://download.qt.io/online/qt5/linux/x64/online_repository/Updates.xml");
    QNetworkRequest request(url);
    QNetworkAccessManager manager;
    auto reply = manager.get(request);
    qAwait(reply, &QNetworkReply::finished);
    textEdit.setPlainText(reply->readAll());
    reply->deleteLater();
  })));
  return app.exec();
}

Here, I created a QPlainTextEdit which receives the content of a file from the internet. When the QNetworkReply finishes, the data is written into the text edit.

Another example:

#include "coroutine.h"
#include <QtCore>
#include <QtWidgets>
#include <QtNetwork>

int main(int argc, char **argv)
{
  QApplication app(argc, argv);
  QPlainTextEdit edit;
  edit.show();

  QTimer::singleShot(0, std::bind(&coroutine::resume, coroutine::create([&]() {
    auto previousText = edit.toPlainText();
    forever {
      if (edit.toPlainText() == QStringLiteral("quit")) {
        qApp->quit();
      } else if (previousText != edit.toPlainText()) {
        qDebug() << previousText << "->" << edit.toPlainText();
        previousText = edit.toPlainText();
      }
      qAwait(&edit, &QPlainTextEdit::textChanged);
      qDebug() << "QPlainTextEdit::textChanged";
    }
  })));
  return app.exec();
}

This application prints the text every time the user modifies it, and it finishes execution when the user writes the word ‘quit’.

The post Playing with coroutines and Qt appeared first on Qt Blog.

Porting guide from Qt 1.0 to 5.11

Now that Qt 5.11 is released, it is finally time to upgrade the last Qt 1.0 applications out there… No, not really. 😉 I want to take a look at how well we have kept compatibility in Qt over the years since the first official release.

Qt guarantees source and binary compatibility between minor releases, and we take that seriously. Making sure that you don’t have to rewrite (or even recompile) your application when you upgrade to a newer version of Qt is important to us. However, there are times when we need to make bigger changes in order to keep Qt up to date. This is done in major releases. Since the release of Qt 1.0 in 1996 (almost twenty-two years ago), we have broken source compatibility four times: in 2.0, 3.0, 4.0 (some of you may remember that as a painful transition), and 5.0.

We do try to keep breakages to a minimum, even in the major releases, but the changes do add up. This raises the question: How hard would it be to port a Qt application from Qt 1.0 to 5.11?

To find an answer to this question, I took the tutorial example from the Qt 1.0 release, and tried to compile it against Qt 5. Since the Qt archives only go back to version 1.41, I actually had to retrieve it from ancient history which has been preserved through four different source code control systems …but I digress. Its name is t14 because it is the 14th and final chapter of the tutorial.

Here are the steps I needed to take to get it to build and run.

  • Import tutorial 14 from Qt 1.0 10 files changed, 798 insertions(+)

    This is the state of the art: the official recommendation from Troll Tech (we weren’t Trolltech yet) on how to write programs with Qt 1.0, in 1996.

  • Translate to qmake 3 files changed, 2 insertions(+), 148 deletions(-)
    Before qmake, there was tmake. tmake was written in Perl. The basic syntax was the same, except that qmake no longer allows embedding Perl code in the project files. This is probably for the best…

    Also, tmake was not public, so we would generate Makefiles for the release packages. Having completely different internal and external build systems added excitement to the release process.

  • Fix include file names 4 files changed, 8 insertions(+), 8 deletions(-)
    Back in the old days, Windows only allowed eight character file names. We did have proper include files on Unix, but if you wanted your code to be portable, you had to use wonderful names like "qscrbar.h" and "qbttngrp.h".
  • Add missing includes 1 file changed, 3 insertions(+)

    Dependency on indirect includes was a problem back then as well.

  • Change TRUE/FALSE to true/false 1 file changed, 13 insertions(+), 13 deletions(-)
    Kids these days. They don’t know how lucky they are. We had to make our own bool type.
  • Fix things moved to the Qt:: namespace 3 files changed, 15 insertions(+), 15 deletions(-)
    The Qt namespace was introduced in 1998 … as a class, since we didn’t have those fancy namespaces back then.
  • Remove “name” argument 6 files changed, 26 insertions(+), 26 deletions(-)
    All constructors of QObject subclasses used to take the object name as a parameter.
  • The QScrollBar API has changed 1 file changed, 5 insertions(+), 5 deletions(-)
    Sometimes we have to get rid of old, bad APIs. Having individual setter functions is much better than a constructor that takes 7 arguments.
  • Use QString instead of const char * 2 files changed, 2 insertions(+), 2 deletions(-)
    QString has been in Qt since 1994. It used to be 8 bit Latin1 with an automatic cast to const char*, so the API would use const char * arguments. Unicode support was introduced in Qt 2.0.
  • warning() is called qWarning() now 1 file changed, 1 insertion(+), 1 deletion(-)

    We avoid putting identifiers into the global namespace. Except for the letter ‘Q’. We own that.

  • Remove calls to old QApplication functions 1 file changed, 2 deletions(-)
    Qt does the right thing automatically these days. In 1996, most displays used 8 bits per pixel. You had to tell Qt if you wanted to use other colors than the 256 standard ones.
  • Replace QAccel with QShortcut 1 file changed, 4 insertions(+), 3 deletions(-)
    QShortcut is more powerful and easier to use, and the name is not an abbreviation. It was introduced in 2004.
  • Fix repaint logic 1 file changed, 7 insertions(+), 7 deletions(-)
    In the ’90s, we could just paint directly onto the widgets whenever we wanted. Nowadays everything is buffered and composed, so we have to make an update request, and later repaint the widget when we get a go-ahead. Fortunately, this simplifies the logic.
  • QObject::killTimers() doesn’t exist anymore 2 files changed, 3 insertions(+), 2 deletions(-)
    This function was just too dangerous. It would kill all timers belonging to an object, including those used by Qt internally. Now you have to kill timers individually.
  • QWMatrix is now QMatrix 1 file changed, 2 insertions(+), 2 deletions(-)

    A simple name change.

  • QWidget::setBackgroundColor() is gone. 1 file changed, 3 insertions(+), 1 deletion(-)
    Background color is no longer a separate concept: it’s wrapped inside QPalette together with all the other color roles. Also, child widgets are now transparent by default. We have to tell Qt to draw the background.
  • Can’t fill pixmap with contents of widget anymore 1 file changed, 1 insertion(+), 1 deletion(-)
    We use a transparent pixmap instead, since Qt supports that now.
  • Rectangle painting has changed since Qt 1.0 1 file changed, 1 insertion(+), 1 deletion(-)
    This is the worst incompatibility so far: In Qt 4.0 QPainter::drawRect() was changed so that “a stroked rectangle has a size of rectangle.size() plus the pen width”. Therefore, we need to subtract the pen width (i.e. 1) before passing the rectangle to QPainter.
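
To illustrate this last change with a standalone sketch (BorderWidget is a hypothetical widget written for this illustration, not the tutorial code itself):

#include <QtWidgets>

// Hypothetical widget that draws a 1-pixel border hugging its edges.
class BorderWidget : public QWidget
{
protected:
  void paintEvent(QPaintEvent *) override
  {
    QPainter painter(this);
    painter.setPen(QPen(Qt::black, 1));
    // Since Qt 4.0 the stroked rectangle covers rect.size() plus the pen width,
    // so we shrink the rectangle by the pen width (1) to stay inside the widget.
    painter.drawRect(rect().adjusted(0, 0, -1, -1));
  }
};

int main(int argc, char **argv)
{
  QApplication app(argc, argv);
  BorderWidget w;
  w.resize(200, 120);
  w.show();
  return app.exec();
}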

Now we have a fully functional port of the tutorial example, and here is a screenshot:

t14 take one

Oops… Some of the text is clipped. It turns out that the example used hardcoded sizes and positions for most of the elements, and font sizes have changed a bit since 1996. The solution is to use QLayout, which was not available in Qt 1.0 (the first version was added in Qt 1.1, and it was completely rewritten for 2.0).

With this change, everything looks as it should:

t14 with layouts

So, what did we learn? It did not take that much effort to port tutorial 14 from Qt 1.0 to Qt 5.11. It probably took me longer to write this blog post. Most of the 1.0 API still lives on in recognizable form. The incompatible changes have improved the usability, readability and safety of Qt code.

This was just a small example, of course. Porting a full-sized application across multiple major versions would probably be more difficult. Fortunately, we have a vibrant Qt ecosystem with several companies offering consultancy services.

The post Porting guide from Qt 1.0 to 5.11 appeared first on Qt Blog.

Qt for Python: under the hood

Today I would like to tell you about the “Qt for Python bindings” generation process.

The following diagram shows, at a high level, the internal process that The Qt Company uses to provide the PySide2 module:

[Diagram: the Qt for Python binding generation process]

When the PySide project was launched back in 2009, the team decided to use external tools to generate Python bindings from Qt C++ headers.
One of the main concerns, besides using a tool that properly handles all the Qt C++ constructs, was the size of the final packages.
The previous choice relied excessively on templates, so another alternative was required.
After analyzing a few other options, the team decided to write their own generator, Shiboken.

Shiboken uses the ApiExtractor library (based on an old project called QtScriptGenerator) to parse the Qt headers with Clang and gather information about all the Qt classes.
This step is not Qt-dependent, so it can be used for other C++ projects as well.

Additionally, Shiboken has a Typesystem (based on XML) which allows modifying the extracted information so that the C++ classes are properly represented and can be manipulated in the Python world.

Through this Typesystem we can remove methods from and add methods to certain classes, and even modify the arguments of each function. This is necessary where C++ and Python collide and a decision needs to be made about how to properly handle certain data structures or types.

The outcome of this process is a set of CPython-based wrappers, which we can easily compile and then provide to you as the Python module called PySide2.

Official documentation for ApiExtractor and Shiboken will be included in the official Qt for Python documentation, so you can get involved in their development. And if you are curious about how Shiboken works, stay tuned for future blog posts!

The post Qt for Python: under the hood appeared first on Qt Blog.

Qt 5.11 released

Slightly ahead of our planned schedule, we have released Qt 5.11 today. As always, Qt 5.11 comes with quite a few new features as well as many bug fixes to existing functionality. Let’s have a look at some of the cool new features.

Qt Core and Network

A lot of work on small details has happened in Qt Core. As an example, some of our tool classes got new rvalue reference overloads, and we filled in some missing methods for better STL compatibility. Our item model classes have received a couple of new features; for details, have a look at this blog post.

In Qt Network, ALPN and thus HTTP/2 negotiation are now supported on iOS. QNetworkRequest gained a Http2DirectAttribute to start an HTTP/2 connection without first negotiating.
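
As a rough sketch of how this attribute could be used (the URL is only a placeholder, and Http2DirectAttribute assumes the server really does speak HTTP/2 without prior negotiation):

#include <QtCore>
#include <QtNetwork>

int main(int argc, char **argv)
{
  QCoreApplication app(argc, argv);
  QNetworkAccessManager manager;

  // Placeholder URL; with Http2DirectAttribute the connection starts talking
  // HTTP/2 immediately instead of negotiating the protocol first.
  QNetworkRequest request(QUrl("https://example.com/api"));
  request.setAttribute(QNetworkRequest::Http2DirectAttribute, true);

  QNetworkReply *reply = manager.get(request);
  QObject::connect(reply, &QNetworkReply::finished, [&]() {
    qDebug() << "Finished with HTTP status"
             << reply->attribute(QNetworkRequest::HttpStatusCodeAttribute).toInt();
    reply->deleteLater();
    app.quit();
  });
  return app.exec();
}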

One of the larger updates in Qt Core went into our Unicode support. QChar, QString, QTextBoundaryFinder and our algorithm for bidirectional text are now fully compatible with Unicode 10.

Qt GUI and Widgets

A major focus area for Qt 5.11 has been the accessibility support on Windows. It got completely rewritten and is now based on Microsoft UI Automation, not on the old Microsoft Active Accessibility framework, leading to a vastly improved accessibility support on Windows.

Some major work has also gone into improving the widget styles on Windows to better support High-DPI displays. The print dialog on Linux has also received a major overhaul, now featuring much better support for all the CUPS provided options.

Qt Widgets itself has received numerous bug fixes and support for quick text selection by mouse in QLineEdit.

Together, this gives a very nice update for all our Desktop users.

Qt QML

Some larger changes have been happening under the hood of our QML engine. We have completely rewritten the compiler pipeline that parses and compiles QML. The new pipeline brings some major improvements in performance and maintainability.

The new pipeline always compiles QML to a platform-independent bytecode. The engine will cache this bytecode in .qmlc files. You can also generate the bytecode ahead of time, using the qmlcompiler feature (which is now also available in the open source version).

The new bytecode interpreter has a vastly improved performance over the old version. It reaches around 80-90% of the performance of the JIT in Qt 5.10 in most of our test cases. A new hotspot JIT has been added on top of that, beating our old JIT in pretty much all areas.

For more details, have a look at the separate blog post here.

Qt Quick and Qt Quick Controls

In Qt Quick, we’ve expanded our support for loading compressed textures in the Image element and now support both .ktx and .pkm container file formats. This feature helps cut down on application startup time and memory consumption by storing images in a format that is directly digestible by the GPU. For more details have a look at this blog post.

Qt Quick Controls 2 has received lots of smaller features and bug fixes. Examples are auto-repeat properties for Buttons, better positioning support for ScrollBars and better styling support for SpinBoxes.

Qt Location

Qt Location is also an area where many cool things have been happening. The largest new feature is probably the experimental support for turn-by-turn navigation. But there is more. Qt Location now has an experimental API to create map objects that are not bound to QQuickItems. The performance of MapPolyline objects has seen large improvements and layers are now working in combination with Map items. In addition, we made the Routing and Places API extensible and added a new WayPoint element. Finally, the MapBox plugin gained support for geocoding and Places.

Qt WebEngine

As has become standard in our feature releases, we've updated the Chromium version underneath Qt WebEngine to Chromium 65. In addition, we now support embedded DevTools without requiring the use of a separate browser, an installable cookie filter, and quota permissions.

Qt for Device Creation

All new functionality mentioned above is of course also available in Qt for Device Creation. In addition, we’ve been working on improving some embedded-specific features.

One new feature here is support for hardware-based graphics layers, currently available as a Technology Preview for platforms supporting VSP2 hardware compositing. This can be used for features such as Video underlays and helps improve performance and reduce power consumption. We are aiming to extend the support to more platforms and hardware combinations in future releases.

Qt SerialBus has received improvements to its CAN Bus support. The KNX module has received some larger updates. In addition, Qt 5.11 will feature a new module that adds support for OPC/UA. This module is available as a Technology Preview in Qt 5.11.

Other items

qdoc now uses libclang to parse C++, giving us much better support for modern C++ in our documentation. Qt SerialBus and Qt Bluetooth now have improved support for CAN bus and BTLE.

With Qt 5.11, we have also removed support for some older compilers and platforms. MSVC 2013, QNX 6.6 and macOS 10.10 are no longer supported.

Qt 3D and Qt 3D Studio

We are working hard to get the second release of Qt 3D Studio ready for you. This second release comes with a fully rewritten runtime that is based on top of Qt 3D. This will give all of you a better and deeper integration with the rest of Qt when using Qt 3D Studio to create 3D user interfaces. With this work, Qt 3D has also received numerous new features, performance improvements, and bug fixes. Qt 3D Studio 2.0 is currently in beta, and we are working hard to get the final release out within the next few weeks.

Qt for WebAssembly and Python

With Qt for WebAssembly, we are working towards filling the last large gap in our cross-platform story, allowing our users to target the web and browsers as a platform for Qt applications. The first version has been released as a Technology Preview today; please check out the separate blog post for further details.

In addition to the above, we are actively working on supporting Qt for Python. The first release is planned for June, and we'll keep you posted with more details.

Thanks to the Qt Community

Qt 5.11 adds a lot of new functionality and improvements. Some of them would not have been possible without the help of the great community of companies and people that contribute to Qt with new functionality, bug fixes, documentation, examples or bug reports. There are too many people to mention everybody in detail, but I’d like to especially thank Thiago Macieira from Intel for his ongoing work on maintaining Qt Core. From our partner basysKom, I’d like to thank Jannis Voelker and Frank Meerkötter for their work on OPC/UA. From our partner KDAB, I’d like to thank Albert Astals Cid for his work on CUPS printing, Sean Harmer and Paul Lemire for their ongoing work on Qt 3D, and many others for helping maintain different parts of Qt. Thank you!

Get the new version

As usual, Qt 5.11 will be supported for one year. If you need longer support periods, Qt 5.9 is our current LTS release and will be supported until June 2020. Extended lifetime support can of course always be purchased from The Qt Company if required. Our next release after Qt 5.11, Qt 5.12 is planned for November and will again be a long-term supported release.

You can download Qt 5.11 from your Qt Account or www.qt.io/download. I hope you’ll like and enjoy the new release!

 

The post Qt 5.11 released appeared first on Qt Blog.

Heaptrack v1.1.0 release

After more than a year of work, I’m pleased to release another version of heaptrack, the Linux memory profiler! The new version 1.1.0 comes with some new features, significant performance improvements and – most importantly – much improved stability and correctness. If you have tried version v1.0 in the past and encountered problems, update to the new v1.1 and try again!

Notable Changes

The most effort during this release cycle was spent on improving the correctness of heaptrack. The initial version suffered from bugs that could lead to corrupted or truncated data files. Heaptrack v1.1 is generally much better in this regard, and you should be able to use it in more situations than before. Furthermore, attaching heaptrack to an already-running process will catch more allocations and thus produce more accurate data. To verify the quality of the heaptrack code base, more tests have been added as well. These tests also finally enable us to use Valgrind or the Sanitizers on most of the heaptrack code, which wasn’t possible previously.

Additionally, some important new features have been added which greatly improve the usability of heaptrack:

  1. When extended debug information is available, stack traces now include inlined frames.
  2. Split debug information in separate files is now supported.
  3. Compressed debug information is properly handled.
  4. The embedded flamegraph view is now searchable.

Finally, quite some work went into optimizing heaptrack to further reduce its overhead. The initial version was already quite good from a performance point of view, but version 1.1 is even better! Most notably, the analysis of large data files is now often much faster. This is in large part due to the new optional dependency on zstd. This fantastic state-of-the-art compression algorithm greatly reduces the CPU overhead of compressing the heaptrack data during recording. Not only that: decompression at analysis time is also significantly faster than with the standard gzip compression. In case you wonder: data files are now often slightly smaller too!

Last but not least, heaptrack v1.1.0 can be downloaded as a portable AppImage which should run on most 64-bit Linux systems in use these days!

Download heaptrack v1.1.0

If possible, wait for your distribution to provide you with an updated package for heaptrack v1.1.0. Otherwise, download the AppImage, make it executable, and run it. If neither of these two options works for you, grab the sources and compile the code for your target platform.

The GPG signatures have been created by Milian Wolff with the key A0C6B72C4F1C5E7C.

Many thanks to the various people who contributed to this release. Please continue to hand in your patches, preferably via KDE’s phabricator instance or via heaptrack on GitHub. Bugs can be reported on bugs.kde.org.

If your company needs commercial support for heaptrack, then get in touch with us at KDAB. We offer workshops and trainings specifically about profiling and debugging on Linux.

The post Heaptrack v1.1.0 release appeared first on KDAB.