Qt Conference Apps – out of the Developer Trenches – Part 1

by Ekkehard Gentz [Independent Software Architect, Consultant] (Qt Blog)

In a few weeks the Qt World Summit 2016 will open its doors in San Francisco, and I have been given the chance to speak there about my experiences developing the Qt World Summit 2016 Conference App. This article series will give you some additional information alongside my presentation.

I have been developing software for nearly 40 years; for the last 8 years I have focused on mobile App development. I started mobile App development with BlackBerry OS7 (Java), followed by BlackBerry 10 native Apps (Qt 4.8, Cascades UI Controls).

In 2016 BlackBerry, for the first time ever, started to build secure Android phones, and my customers asked for x-platform Apps. Personally, I liked the way BlackBerry 10 Apps were built using QML and Cascades. Fortunately, Qt had just started the Tech Preview of the new Qt Quick Controls 2 with Qt 5.6. I did some first tests to see whether Qt Quick Controls 2 would enable me to develop good-looking and performant mobile Apps.

The first steps went well, so I decided to spend some more time and give Qt 5.7 and Qt Quick Controls 2 a try in real-life projects. Over the last 4 years I have built many mobile business Apps for enterprise and SMB customers, and I have also done some Apps for developer conferences.

I asked Tero Kojo about developing the QtCon 2016 Conference App as a proof of concept relying on the new Qt Quick Controls 2. You can download the QtCon Conference App from Google Play (https://play.google.com/store/apps/details?id=org.ekkescorner.c2g.qtcon), the Apple App Store (https://itunes.apple.com/us/app/qtcon-2016-conference-app/id1144162386), the Amazon App Store (https://www.amazon.com/ekkescorner-QtCon-2016-Konferenz-App/dp/B01L7DVJTO), as an APK (https://app.box.com/s/fgeo14re3hrp47shg915geo1q4gzyxrz), or build it yourself from the open-source GitHub repo (https://github.com/ekke/c2gQtCon_x).

The App was built without any extra native code – pure Qt only. Feedback was great, and I have just started work on the Qt World Summit 2016 Conference App – the GitHub repo will be public soon. Hopefully this time the App will also be available for Windows 10 from the Windows App Store. Special thanks to Maurice Kalinowski for his help: the QtCon Conference App runs on Windows 10, although I had some problems uploading it to the Windows App Store.

There is a blog series about all my experiences using Qt Quick Controls 2 to develop mobile Apps (http://j.mp/qt-x), a series in the (German) Web & Mobile Developer Magazin, and now some articles here at the Qt Blog, too. You can expect 3 – 4 articles here at the Qt Blog about developing Qt Conference Apps.

All development is done in my spare time, and my goal is to motivate mobile App developers to try out Qt Quick Controls 2 for x-platform Apps. I had never done Qt development before, nor native Apps for Android, iOS or Windows, but now I am able to develop and upload Apps to Google Play or the Apple App Store :) I am also using the Google Material style to provide a modern mobile App feeling. Thanks to J-P Nurmi, Mitch Curtis and others for great hints on how to customize Qt Quick Controls 2.

From my experience over the last 6 months, developing mobile Apps with Qt 5.7 and Qt Quick Controls 2 is much more comfortable and easier than using Xamarin, React Native, Cordova, Angular or Ionic. The good news for all my friends from the BlackBerry 10 community: a great amount of C++ code can be re-used from Cascades, and the architecture style is also similar, using Signals/Slots and QObject* as the data model.

Speed is key to success

The first impression of any mobile App with regard to user experience comes from starting the App. The user should never get the feeling that an App is slow. Some of my recipes for a speedy start are below:

  • fast creation of C++ Classes
  • immediately show something on the screen
  • be dynamic: only instantiate UI Controls you really need

How do I achieve this? Only instantiate the C++ classes – avoid any initialization such as opening databases, loading cache files and so on.

DataServer::DataServer(QObject *parent) : QObject(parent)
{
    // Do NOTHING HERE
}
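
To make the deferred initialization concrete, here is a minimal C++ sketch (not the actual app code – class, method and signal names are illustrative): the constructor stays empty, and the expensive work lives in a Q_INVOKABLE method that QML calls later, once something is already visible on screen.

#include <QObject>

class DataServer : public QObject
{
    Q_OBJECT
public:
    explicit DataServer(QObject *parent = nullptr);

    // Called later from QML (e.g. from the startup Timer)
    // once the BusyIndicator is already visible.
    Q_INVOKABLE void init();

signals:
    void initDone();
};

DataServer::DataServer(QObject *parent) : QObject(parent)
{
    // Do NOTHING HERE – construction must stay cheap
}

void DataServer::init()
{
    // The expensive work happens here, well after the first frame:
    // open the database, read cache files, prepare lookup tables, ...
    emit initDone();
}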

Use the fastest possible way to show some UI to the user. My root and main navigation control is a Drawer. The Drawer contains a list of "Destinations", where a Destination is a specific area of the application, such as:

  • Home
  • Schedule
  • Speakers
  • Venue

[Image: 01_drawer]

Each Destination can be one of the Qt Quick Controls 2 Navigation Controls (http://doc.qt.io/qt-5/qtquickcontrols2-navigation.html) or Container Controls (http://doc.qt.io/qt-5/qtquickcontrols2-containers.html):

  • Pane
  • Page
  • StackView
  • SwipeView / Tab Bar

Inside the Drawer you can use a ListView to let the user select a Destination – take a look at the Qt Quick Controls 2 Gallery example. I'm using a Repeater to create different types of Controls: Destinations, Divider, Header, …

To show the selected Destination, the best way is to use a StackView as your root UI control and swap its content – so there is always only one item on this root StackView.

[Image: 02_destinations]

To start up immediately, don't create all the Drawer Destinations right away! This can easily be done with a little trick: define the Repeater without a data model.

        Repeater {
            id: destinations
            // Don't set the model here!
            // model: navigationModel
            Destination {
                id: destinationLoader
            }
        }

So nothing will be created yet. To show something to the user, create a lightweight Control as the initialItem. I'm using a BusyIndicator.

        // STACK VIEW INITIAL ITEM (BUSY INDICATOR)
        // immediately activated and pushed on stack as initialItem
        Loader {
            id: initialPlaceholder
            source: "pages/InitialItemPage.qml"
            active: true
            visible: false
            onLoaded: {
                // Show BUSY INDICATOR
                rootPane.initialItem = item
                item.init()
                // Now something is VISIBLE - do the other time-consuming stuff
                startupDelayedTimer.start()
            }
        }

The next trick is to start a Timer with a small delay to allow QML to show and animate the BusyIndicator. Then, from the Timer's timeout, execute all the initialization work and call some Q_INVOKABLE methods on your C++ classes to load data from the cache and more.

As soon as this is done, you can go on with the creation of UI Controls. To trigger this, set the Repeater's data model: all the Destinations will be created and the HomePage becomes the current item on the root StackView.

        Timer {
            id: startupDelayedTimer
            interval: 300
            repeat: false
            onTriggered: {
                initialPlaceholder.item.showInfo("Initialize Data ...")
                dataManager.init()
                settings = dataManager.settingsData()
                dataUtil.setSessionFavorites()
                // … and so on ...
                // inject model into Destinations Repeater
                destinations.model = navigationModel
                // show the Navigation Bars (Drawer and Favorites)
                initDone = true
                // now NavigationBars available
                // show first destination
                rootPane.activateDestination(firstActiveDestination)
            }
        }


Here we go: the first "real" Page is visible.

But wait: not all Destinations will really be created from the Repeater – this would take too much time and consume too much memory. All the Destinations are created dynamically using Loaders and I implemented some Activation Policies:

  • Immediate: The Control will be instantiated and remain. I'm using this only for the first visible Page – the HomePage.
  • When-Selected: The first time a user selects a Destination, the Control is created and remains. This happens for all Destinations a user normally will use while the App is running: Schedule, Speakers, …
  • While-Selected: These Destinations are only created when needed and are destroyed when the user changes the Destination. Candidates for this policy: Help, Settings, About, …

Take a look at the code to see how all of this is implemented, attend my session at the Qt World Summit 2016 in San Francisco (http://www.qtworldsummit.com/speakers/ekkehard-gentz/), or meet me at #QtWS16.

Stay tuned – next article will cover the QObject* Data Model I‘m using, Caching and Data Binding.

The post Qt Conference Apps – out of the Developer Trenches – Part 1 appeared first on Qt Blog.

Qt Graphics with Multiple Displays on Embedded Linux

Creating devices with multiple screens is not new to Qt. Those using Qt for Embedded in the Qt 4 times may remember configuration steps like this. The story got significantly more complicated with Qt 5’s focus on hardware accelerated rendering, so now it is time to take a look at where we are today with the upcoming Qt 5.8.

Windowing System Options on Embedded

The most common ways to run Qt applications on an embedded board with accelerated graphics (typically EGL + OpenGL ES) are the following:

  • eglfs on top of fbdev or a proprietary compositor API or Kernel Modesetting + the Direct Rendering Manager
  • Wayland: Weston or a compositor implemented with the Qt Wayland Compositor framework + one or more Qt client applications
  • X11: Qt applications here run with the same xcb platform plugin that is used in a typical desktop Linux setup

We are now going to take a look at the status of eglfs because this is the most common option, and because some of the other approaches rely on it as well.

Eglfs Backends and Support Levels

eglfs has a number of backends for various devices and stacks. For each of these the level of support for multiple screens falls into one of the three following categories:

  • [1] Output management is available.
  • [2] Qt applications can choose at launch time which single screen to output to, but apart from this static setting no other configuration option is provided.
  • [3] No output-related configuration is provided.

Note that some of these, in particular [2], may require additional kernel configuration via a video argument or similar. This is out of Qt’s domain.

Now let’s look at the available backends and the level of multi-display support for each:

  • KMS/DRM with GBM buffers (Mesa (e.g. Intel) or modern PowerVR and some other systems) [1]
  • KMS/DRM with EGLDevice/EGLOutput/EGLStream (NVIDIA) [1]
  • Vivante fbdev (NXP i.MX6) [2]
  • Broadcom Dispmanx (Raspberry Pi) [2]
  • Mali fbdev (ODROID and others) [3]
  • (X11 fullscreen window – targeted mainly for testing and development) [3]

Unsurprisingly, it is the backends using the DRM framework that come out best. This is as expected, since there we have a proper connector, encoder and CRTC enumeration API, whereas others have to resort to vendor-specific solutions that are often a lot more limited.

We will now focus on the two DRM-based backends.

Short History of KMS/DRM in Qt

Qt 5.0 – 5.4

Qt 5 featured a kms platform plugin right from the beginning. This was fairly usable, but limited in features and was seen more as a proof of concept. Therefore, with the improvements in eglfs, it became clear that a more unified approach was necessary. Hence the introduction of the eglfs_kms backend for eglfs in Qt 5.5.

Qt 5.5

While originally developed for a PowerVR-based embedded system, the new backend proved immensely useful for all Linux systems running with Mesa, the open-source stack, in particular on Intel hardware. It also featured a plane-based mouse cursor, with basic support for multiple screens added soon afterwards.

Qt 5.6

With the rise of NVIDIA’s somewhat different approach to buffer management – see this presentation for an introduction – an additional backend had to be introduced. This is called eglfs_kms_egldevice and allows running on the automotive-oriented Jetson Pro, DRIVE CX and DRIVE PX systems.

The initial version of the plugin was standalone and independent from the existing DRM code. This led to certain deficiencies, most notably the lack of multi-display support.

Qt 5.7

Fortunately, these problems got addressed pretty soon. Qt 5.7 features proper code sharing between the backends, making most of the multi-display support and its JSON-based configuration system available to the EGLStream-based backend as well.

Meanwhile the GBM-based backend got a number of fixes, in particular related to the hardware mouse cursor and the virtual desktop.

Qt 5.8

The upcoming release features two important improvements: it closes the gaps between the GBM and EGLStream backends and introduces support for advanced configurability. The former covers mainly the handling of the virtual desktop and the default, non-plane-based OpenGL mouse cursor which was unable to “move” between screens in previous releases.

The documentation is already browsable at the doc snapshots page.

Besides the ability to specify the virtual desktop layout, the introduction of the touchDevice property is particularly important when building systems where one or more of the screens is made interactive via a touchscreen. Let’s take a quick look at this.

Touch Input

Let’s say you are creating digital instrument clusters with Qt, with multiple touch-enabled displays involved. Given that the touchscreens report absolute coordinates in their events, how can Qt tell which screen’s virtual geometry the event should be translated to? Well, on its own it cannot.

From Qt 5.8 it will be possible to help out the framework. By setting QT_LOGGING_RULES=qt.qpa.*=true we enable logging, which lets us figure out the touchscreen's device node. We can then create a little JSON configuration file on the device:

{
  "device": "drm-nvdc",
  "outputs": [
    {
      "name": "HDMI1",
      "touchDevice": "/dev/input/event5"
    }
  ]
}

This will come in handy in any case, since the configuration of screen resolution, virtual desktop layout, etc. all happens in the same file.

Now, when a Qt application is launched with the QT_QPA_EGLFS_KMS_CONFIG environment variable pointing to our file, Qt will know that the display connected to the first HDMI port has a touchscreen as well that shows up at /dev/input/event5. Hence any touch event from that device will get correctly associated with the screen in question.

Qt on the DRIVE CX

Let’s see something in action. In the following example we will use an NVIDIA DRIVE CX board, with two monitors connected via HDMI and DisplayPort. The software stack is the default Vibrante Linux image, with Qt 5.8 deployed on top. Qt applications run with the eglfs platform plugin and its eglfs_kms_egldevice backend.

[Image: drivecx_small]

Our little test environment looks like this:

[Image: disp_both]

This already looks impressive, and not just because we found such good use for the Windows 95, MFC, ActiveX and COM books hanging around in the office from previous decades. The two monitors on the sides are showing a Qt Quick application that apparently picks up both screens automatically and can drive both at the same time. Excellent.

The application we are using is available here. It follows the standard multi-display application model for embedded (eglfs): creating a dedicated QQuickWindow (or QQuickView) on each of the available screens. For an example of this, check the code in the github repository, or take a look at the documentation pages that also have example code snippets.
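
As a rough illustration of that model (a sketch, not the demo's actual code – the QML file name is a placeholder), an eglfs application can simply create one fullscreen QQuickView per QScreen reported by QGuiApplication:

#include <QGuiApplication>
#include <QQuickView>
#include <QScreen>
#include <QUrl>
#include <QVector>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QVector<QQuickView *> views;
    // One dedicated QML scene per connected screen.
    const auto screens = app.screens();
    for (QScreen *screen : screens) {
        QQuickView *view = new QQuickView;
        view->setScreen(screen);                                  // associate the window with this output
        view->setResizeMode(QQuickView::SizeRootObjectToView);
        view->setSource(QUrl(QStringLiteral("qrc:/main.qml")));   // placeholder QML scene
        view->showFullScreen();                                   // with eglfs: fullscreen on the given screen
        views.append(view);
    }

    return app.exec();
}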

A closer look reveals our desktop configuration:

[Image: disp2]

The gray MouseArea is used to test mouse and touch input handling. Hooking up a USB touch-enabled display immediately reveals the problems of pre-5.8 Qt versions: touching that area would only deliver events to it when the screen happened to be the first one. In Qt 5.8 this can now be handled as described above.

[Image: disp1]

It is important to understand the screen geometry concepts in QScreen. When the screens form a virtual desktop (which is the default for eglfs), the interpretation is the following (a short query sketch follows the list):

  • geometry() – the screen’s position and size in the virtual desktop
  • availableGeometry() – without a windowing system this is the same as geometry()
  • virtualGeometry() – the geometry of the entire virtual desktop to which the screen belongs
  • availableVirtualGeometry() – same as virtualGeometry()
  • virtualSiblings() – the list of all screens belonging to the same virtual desktop
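
As a quick, hypothetical sketch of how these values can be inspected on the target device:

#include <QGuiApplication>
#include <QScreen>
#include <QDebug>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    for (const QScreen *screen : app.screens()) {
        qDebug() << screen->name()
                 << "geometry:" << screen->geometry()                 // position and size in the virtual desktop
                 << "virtualGeometry:" << screen->virtualGeometry()   // the whole virtual desktop
                 << "siblings:" << screen->virtualSiblings().count(); // screens sharing that desktop
    }
    return 0;
}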

Configuration

How does the virtual desktop get formed? It may seem fairly random by default. In fact it simply follows the order DRM connectors are reported in. This is often not ideal. Fortunately, it is configurable starting with Qt 5.8. For instance, to ensure that the monitor on the first HDMI port gets a top-left position of (0, 0), we could add something like the following to the configuration file specified in QT_QPA_EGLFS_KMS_CONFIG:

{
  "device": "drm-nvdc",
  "outputs": [
    {
      "name": "HDMI1",
      "virtualIndex": 0
    },
    {
      "name": "DP1",
      "virtualIndex": 1
    }
  ]
}

If we wanted to create a vertical layout instead of horizontal (think an instrument cluster demo with three or more screens stacked under each other), we could have added:

{
  "device": "drm-nvdc",
  "virtualDesktopLayout": "vertical",
  ...
}

More complex layouts, for example a T-shaped setup with 4 screens, are also possible via the virtualPos property:

{
  ...
  "outputs": [
    { "name": "HDMI1", "virtualIndex": 0 },
    { "name": "HDMI2", "virtualIndex": 1 },
    { "name": "DP1", "virtualIndex": 2 },
    { "name": "DP2", "virtualPos": "1920, 1080" }
  ]
}

Here the fourth screen’s virtual position is specified explicitly.

In addition to virtualIndex and virtualPos, the other commonly used properties are mode, physicalWidth and physicalHeight. mode sets the desired mode for the screen and is typically a resolution, e.g. "1920x1080", but can also be set to "off", "current", or "preferred" (which is the default).

For example:

{
  "device": "drm-nvdc",
  "outputs": [
    {
      "name": "HDMI1",
      "mode": "1024x768"
    },
    {
      "name": "DP1",
      "mode": "off"
    }
  ]
}

The physical sizes of the displays become quite important when working with text and components from Qt Quick Controls. This is because they base their size calculations on the logical DPI, which is in turn derived from the physical width and height. In desktop environments queries for these sizes usually work just fine, so no further action is needed. On embedded systems, however, it has often been necessary to provide the sizes in millimeters via the environment variables QT_QPA_EGLFS_PHYSICAL_WIDTH and QT_QPA_EGLFS_PHYSICAL_HEIGHT. This is not suitable in a multi-display environment, and therefore Qt 5.8 introduces an alternative: the physicalWidth and physicalHeight properties (values in millimeters) in the JSON configuration file. As witnessed in the second screenshot above, the physical sizes were not reported correctly in our demo setup. This can be corrected, as was done for the monitor in the first screenshot, with something like:

{
  "device": "drm-nvdc",
  "outputs": [
    {
      "name": "HDMI1",
      "physicalWidth": 531,
      "physicalHeight": 298
    },
    ...
  ]
}

As always, enabling logging can be a tremendous help for troubleshooting. There are a number of logging categories for eglfs, its backends and input, so the easiest is often to enable everything under qt.qpa by doing export QT_LOGGING_RULES=qt.qpa.*=true before starting a Qt application.
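
If setting environment variables on the device is inconvenient, roughly the same effect can be achieved programmatically; here is a small sketch (assuming no other filter rules need to be preserved):

#include <QGuiApplication>
#include <QLoggingCategory>

int main(int argc, char *argv[])
{
    // Enable all qt.qpa.* categories before the platform plugin initializes,
    // so eglfs, KMS and input details show up in the log.
    QLoggingCategory::setFilterRules(QStringLiteral("qt.qpa.*=true"));

    QGuiApplication app(argc, argv);
    // ...
    return app.exec();
}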

What About Wayland?

What about systems using multiple GUI processes and compositing them via a Qt-based Wayland compositor? Given that the compositor application still needs a platform plugin to run with, and that is typically eglfs, everything described above applies to most Wayland-based systems as well.

Once the displays are configured correctly, the compositor can create multiple QQuickWindow instances (QML scenes) targeting each of the connected screens. These can then be assigned to the corresponding WaylandOutput items. Check the multi output example for a simple compositor with multiple outputs.

The rest, meaning how the client applications’ windows are placed, perhaps using the scenes on the different displays as one big virtual scene, moving client “windows” between screens, etc., are all in QtWayland’s domain.

What’s Missing and Future Plans

The QML side of screen management could benefit from some minor improvements: unlike C++, where QScreen, QWindow and QWindow::setScreen() are first-class citizens, Qt Quick currently has no simple way to associate a Window with a QScreen, mainly because QScreen instances are only partially exposed to the QML world. While this is not fatal and can be worked around with some C++ code, as usual, the story here will have to be enhanced a bit.

Another missing feature is the ability to connect and disconnect screens at runtime. Currently such hotplugging is not supported by any of the backends. It is worth noting that with embedded systems the urgency is probably a lot lower than with ordinary desktop PCs or laptops, since the need to change screens in such a manner is less common. Nevertheless this is something that is on the roadmap for future releases.

That’s it for now. As we know, more screens are better than one, so why not just let Qt power them all?

The post Qt Graphics with Multiple Displays on Embedded Linux appeared first on Qt Blog.

Your data, your code, your cloud…your choice!

The Internet of Things can mean, well… so many things. So can platform independence. For many people, a cloud platform, often provided as a service, is an essential part of an IoT offering. For others, flexibility is more important – the flexibility to run your solution on any cloud, or to run it internally on your own network on your own servers, because you're paranoid and believe your competitors are watching you (and you know they are), or, even worse, because someone might try to hack your solution, bring it down and put a serious dent in your up-time track record. Regardless, when developing an IoT solution the choice should be yours, and you should be in charge of your own data.

[Image: Qt Top 5 Considerations IoT infographic]

I was previously involved in the development of an IoT strategy for a company operating in the industrial automation space, where we initiated the same IoT project with a third-party platform-as-a-service (PaaS) provider – to be part of our offering to our end customers – twice. Yes, twice! Twice, because we were working with the wrong providers and because we came to the same conclusion both times. Twice, because the providers were offering their solution on their cloud and wanted to rent it to us so we could re-rent it to our end customers. It didn't sound right, and they would never allow us to move the solution to other cloud platforms or host it in any other form. We could forget about making drastic changes to the back-end system, and if we were to make any minor change we had to use their consultants ($$$). In other words, it would never have been our solution – it would have been theirs, and they would own the data and most likely reap all the revenue potential once we were locked in. We wanted it to be our solution, and our end customers wanted it to be theirs. Flexibility, ownership, cross-platform and cash money being the key words here.

We even tried to pitch the solution to our end customers, but they weren't buying our story. When we reported the customer feedback to the IoT/PaaS vendor, they replied that our end customers "were being conservative" (no, they weren't – they were just being smart). They wanted flexibility and control over their own solution and their own data. They valued security. They didn't want some sort of closed software agent sending data from their devices without being able to see what was being sent and where it was going. One of our key customers even went so far as to make us sign a paper stating that no such "IoT software" existed on their equipment and that we would never, ever, EVER connect them to any cloud that wasn't their own choice and on which they could not see what data was being sent and where it was going. Period! So we stopped the project. Twice. With the software we created with Qt we didn't have this problem, and you won't either. With Qt it will be your solution, on your platform of choice, and we are not forcing you onto any cloud. It is not that we don't like the cloud. We do. We just think the choice of how you want to host your IoT solution should be yours, and we have developed tools to make that simpler for you. Qt is also open, so you know what is going on and can make sure your data remains yours and you know where it is sent. There are also a bunch of other benefits you can achieve by using Qt in your IoT development. If you want to learn more about the software requirements and other important things you should consider before choosing an IoT platform, read our whitepaper: "Building the Internet of Things and How Qt Can Help".

The post Your data, your code, your cloud…your choice! appeared first on Qt Blog.

Internet of Things: Why Tools Matter?

With the Internet of Things (IoT) transformation, it's obvious that the number of connected devices in the world is increasing rapidly. Everywhere in our daily lives we all use more and more of them. In addition to being connected, more devices are equipped with a touch screen and a graphical user interface. We have all seen this around us, and many Qt users are also deeply involved in creating software for these devices. To bring in some numbers, Gartner has estimated that the number of connected devices will grow to a whopping 20.7 billion by 2020 (and some predict even higher growth, up to 30 billion devices).

Not only is the number of devices growing, but the complexity and amount of software is also increasing rapidly. For example, today's passenger car can have over 100M lines of code, and this is expected to triple in the future as the functionality of automotive software increases. Cars are at the high end of complexity, but even the simplest connected devices need a lot of software to handle the requirements for connectivity and security and to match the growing usability expectations of consumers.

Here is how the estimated growth of connected devices looks in a line graph:

[Image: iotdevices]

What is inside these devices? What kind of software drives the connected devices? What kind of skills are needed to build them? It is estimated that 95% of today's embedded systems are created with C/C++, and this is not expected to change significantly in the foreseeable future. On the other hand, according to a study there were 4.4M C++ developers and 1.9M C developers in the world in 2015. An older study by IDC from 2001 estimated the number of C++ developers at 3M back then. This means the number of C++ developers has been growing steadily at around 3% per year (3M growing at 3% per year for 14 years gives roughly 3M × 1.5 ≈ 4.5M, in line with the 2015 figure), and it is expected to continue with a similar trend – or at least within a similar range.

So, a visualization of C++ developer growth provides the following graph:

[Image: cppdevelopers]

The estimated number of devices, most of which will be built with C and C++, is already growing at a much faster pace than the number of C++ developers, and the growth is expected to accelerate further. Due to the increased complexity of functionality, the amount of software needed in the devices is also growing. Although some of the new devices will be very simple in functionality, on average the devices get more and more complex to meet consumers' requirements.

Now, comparing these two trends gives us an interesting paradox: how can a few million C++ developers meet the requirement to build the tens of billions of connected devices of the future?

Putting these two graphs together, we can clearly visualize the paradox (and a possible solution):

[Image: developers vs. IoT devices (developes_vs_iotdevices)]


So how does this add up? Do we expect a 2020 C++ developer to write 20 times more code than a decade ago? That does not work. Even if all the C++ developers were to focus on embedded, with no one creating and maintaining software for desktop and mobile applications, there still might not be enough developers. C++ developers can't easily be trained from other professionals – programming is a skill that takes years to learn, and not everyone can master it.

So, something needs to be done to facilitate two things: enabling C++ developers to be more productive, and helping non-C++ developers take part in creating the devices.

Therefore, the approach to creating embedded software needs to be adapted to the new situation. The only way to cope with the growth is to have good tools for embedded device creation and to increase the reuse of software. It is no longer viable to re-invent the wheel for each product – the scarce programming resources have to be targeted at differentiating functionality. Organizations will have to prioritize and focus on where they add the most value – anything that can be reused should not be created in-house. Using tools and frameworks like Qt is the only viable approach to create the envisioned devices. The old Qt tagline, "Code less. Create more. Deploy everywhere.", is more relevant today than it has ever been. Qt has a solid track record in embedded, desktop and mobile development, making the creation of applications easy on any platform and also across multiple platforms.

It is likely that even reuse of software assets is not enough. It is also necessary to increase the productivity of C++ developers and to extend the group of people creating the software beyond those who master C++. Using the widely renowned and well-documented Qt API and excellent development tools, C++ developers are more productive than before. Qt also provides the easy-to-use declarative QML language and visual design tools for user interface creation, growing the number of people who can create software for embedded devices beyond the C++ developers. There are already over a million developers familiar with Qt, and new developers across the world are taking it into use every day.

With the QML language and visual UI design tools, creating functionality for embedded devices does not require C++ skills from every developer on the team. It will still be necessary to have core C/C++ developers when making embedded devices, but others can help as well. Using Qt allows non-C++ developers to create some of the needed functionality and C++ developers to be more productive.

To increase developer productivity and to extend the developer base, Qt offers an otherwise unseen ease of embedded development. Qt supports many of the common development boards out of the box and provides one-click deployment to the target device, a built-in device emulator, an on-target debugger, a performance analyzer, a visual UI designer and many more tools in the integrated development environment. With the integrated tools and extensive API functionality, developing with Qt is unlike traditional embedded development. Qt makes embedded development almost as easy as the creation of desktop or mobile applications.

The future is written with Qt.

To learn more about the latest developments of Qt, join us at the Qt World Summit October 18-20th 2016 in San Francisco, USA.

We’re also hosting an online panel discussion with industry experts around IoT and software in general on September 27th. Register today for the webinar for an interesting fireside chat!

The post Internet of Things: Why Tools Matter? appeared first on Qt Blog.

Qt World Summit 2016 San Francisco Conference App: Behind The Scenes

Qt World Summit 2016

Meet me at this year's Qt World Summit 2016 in San Francisco

[Image: qtws16_sfo]

I’ll speak about the development of the upcoming Qt World Summit Conference App, running on:

  • BlackBerry 10 (Qt 4.8, Cascades)
  • Qt 5.7 (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10

My Session

See how easy it is to develop cross-platform mobile Apps using Qt 5.7+ and the new Qt Quick Controls 2

[Image: qtws16_session_ekke]

BlackBerry 10 Cascades Development?

Already have BlackBerry 10 Apps (Cascades)? Learn how to save your investment: most C++ code for business logic, REST / web services and persistence (SQLite, JSON) can be re-used, and the app architecture is similar, using the Qt SIGNALS – SLOTS concept.

cu in San Francisco



QtCon wrap up

[Image: QtCon16_Logo]

First, a huge Thank You to everyone who was at QtCon!

We had an incredible time in Berlin: first the training day by KDAB, and then three conference days packed full of topics ranging from how to set up an open-source organisation to fine-tuning Qt graphics.

Second, a shout-out to the communities that we had the pleasure to work with to create QtCon – FSFE, KDE and VideoLAN – and of course to our partners at KDAB. You guys rock!

Last but definitely not least, Thank You obviously to all the volunteers from the different communities!

The magic of QtCon

When we originally got together to plan QtCon, we envisioned it as a meeting of communities, one event where everyone could come. This is something we achieved. At the end of the event, every one of the community representatives made the same comment: meeting new and interesting people was the best part. Chatting about new things over coffee or lunch, or walking from a deep-dive technical session to one on the social impact of open source, is something that only happens when different communities mix. By the end of the event, I believe everyone headed home with the feeling that getting together is something we need to do more often.

The keynotes were amazing! Please take the time to hear what Raul, Leslie and Julia had to say in their talks. The take-home message for me was that software has changed the world and we need to understand the change on every level. For Qt this means that we need to be sensitive to these changes and understand the impact we have on society.

The Qt-specific topics covered everything happening in and around Qt – from technical details to overall community issues.

The most awaited Qt session was naturally the talk on the status and future of Qt by Chief Maintainer Lars Knoll. The talk outlined the bigger trends in Qt and where the biggest development pushes are expected to be. Lars also talked about how he sees the next releases of Qt going forward. These topics continued in corridor discussions and during the evening party on Friday.

[Image: IMG_20160902_161511]

My personal favourite talk was an 'unconference session', reserved on location, about Qt QUIPs – a way to arrange and manage the information related to the Qt project. I'm looking forward to seeing QUIPs in action, but it will naturally take a while for the developers to get all the bits and pieces together.

The Qt session videos will be available soon on the QtStudios YouTube channel. However, if you are really hungry to get at the videos, the talks that were held in the bigger rooms already have links added to their descriptions in the QtCon schedule. For example, the Qt Project Status talk video is here. The incredible speed at which the videos got online is entirely due to the hard and efficient work of the CCC Video Operations Center – hats off to them!

In conclusion, I met old friends and new and interesting people, heard cool talks and had a good time. I'm sure the other attendees did too.

See you in coming events!

P.S. Qt World Summit is coming up soon 😉

The post QtCon wrap up appeared first on Qt Blog.

Squish tip of the week: Automating Multiple Applications with Multiple Squish Installations or Editions

Did you know that it is possible to use multiple Squish editions in a single test script?

The following example describes the setup and workflow for such a scenario, utilizing Squish for Qt and Squish for Web.

  1. Install Squish for Qt.
  2. Install Squish for Web.
  3. Create a Squish for Qt test suite with the Squish for Qt IDE.
  4. Create a Squish for Web test suite with the Squish for Web IDE.
  5. Share the objects.map file across both test suites (learn how to share an Object Map).
  6. Decide which of the two test suites should be the main test suite. In our example, we choose the Squish for Qt test suite.
  7. Continue with Recording.
  8. Continue with Replay.

Note: The information above describes the setup for Squish for Qt and Squish for Web but the instructions are not limited to these Squish editions.


Creating Certified Medical Devices with Qt

by Matthias Hölzer-Klüpfel [Medical Devices Consultant] (Qt Blog)

Many modern medical devices provide a graphical interface to the user. In dialysis machines, for example, touch screen interfaces to set up the treatment parameters and to monitor the treatment progress are commonplace. Qt is a viable technical solution to implement those interfaces, so it is used in quite a number of medical devices.

When designing and implementing a medical device, however, you have to do more than find a good technical solution. You have to analyze the risks associated with your device, and you have to make sure that your system design and development is appropriate for that risk. That is not an easy task, but there are laws, regulations and standards that provide guidance on the required development process for medical device software. Important guidance documents include IEC 62304, ISO 14971 and the FDA guidance on the use of off-the-shelf (OTS) software in medical devices.

If you develop software for a medical device that will be marketed in the EU and the US, you will have to follow those guidelines. They are mainly concerned with the process of designing, implementing, verifying and testing your own device software. But they also influence the use of third-party software like Qt in your device. If your third-party software, or SOUP (“Software of Unknown Provenance” in terms of IEC 62304), may contribute to a hazardous situation, i.e. might lead to harm to the patient, you have to minimize that contribution and make sure that the chosen third-party software is appropriate.

If we continue the example of a dialysis machine, one of the functions of the therapy – besides cleaning the blood of the patient – is to remove excess water from the body of the patient. Depending on the physical condition of the patient, up to four liters of water may be removed in a typical therapy session. But that is just the maximum amount, a patient might need less water removal, or none at all. The problem is that you have to enter the right amount of water to remove via the user interface, and it is critical that you do not remove more than that amount, as removing too much water might lead to a circulatory collapse and might severely harm the patient. Input of safety-critical values is a typical critical user interface function in medical devices, as well as the output of safety-critical values, e.g. the oxygen saturation of a patient’s blood.

Another critical user interface function that is common in medical devices is alarms. Imagine that during a dialysis therapy, the device detects that there is an air bubble in the blood line (which, when infused back into the patient, might lead to embolism). What the device probably should do is to stop the therapy, sound an alarm sound and display a visual warning to the operator of the device to take appropriate actions.

Obviously, if one of those functions fail to work correctly, the patient may be harmed. Now a manufacturer might ask some basic questions:

  1. What can we do to prevent the harm to the patient?
  2. May we use Qt to implement those safety-critical user interface functions?
  3. Do we need a validated toolkit to build a safety-critical user interface?

Let’s start with the first question. It can only be answered by performing a detailed risk analysis. The standard ISO 14971 provides guidance on how to do this. In the example of the air-in-line alarm, we start with the hazard (the air bubble), determine the potential harm (which – in the worst case – is the death of the patient) and try to estimate the probability of the harm (for the sake of the example, let’s assume an air bubble once per 24 hours of treatment). If we combine those assumptions and estimates, we will find that the risk (the combination of the severity of the harm and the probability of its occurrence) is not acceptable. Thus we need to do something to reduce the risk. We might decide to add an air-bubble-detector into the system, and to add an alarm function to the user interface. When a bubble is detected, the system stops the therapy and raises the alarm to request the user to take appropriate action.

This is a reasonable first step, but not the end of our analysis. What happens if the alarm is not displayed? This could be caused by a problem with the display driver, or a failing LCD backlight, or by an unexpected failure of the GUI toolkit. A medical device needs to be safe even in the presence of a single fault in the system. So having an alarm system that might fail because of a single reason is not acceptable. Typical devices would therefore add another redundant and diverse alarm mechanism, e.g. a flashing LED that can be activated even when the GUI is not working properly. With this second channel, the alarm can be indicated even with a failure in the GUI or a failure in the LED mechanism. And this is generally considered to be safe. Of course, there is a cost – additional hardware.

There are other examples of diversity in graphical user interfaces: If we display a critical numerical value, we might be concerned that loading the correct font fails. Remember, we have to assume a first fault like a damaged font file. We can add some redundancy and display a bar graph visualizing the numerical value in addition to the number. Even if the numbers are not displayed correctly, the bar graph will present the information to the user. Sometimes you will see an old-fashioned LCD screen next to a touch screen on a medical device. This is a secure (if not pretty) way to add redundancy to the display system. The important point is that the resulting risk, even with a failure in the GUI, has to be acceptable.

Now we can tackle the second question: may we use Qt for the GUI of a safety-critical medical device? In principle, the choice of technologies is up to the system designer. None of the standards will tell you to choose one toolkit over another. The manufacturer of the medical device needs to make sure that it is safe, according to what has already been mentioned. But in addition to that, IEC 62304 and the OTS guidance require that we make a conscious decision about the choice of third-party software or SOUP. In addition to the mentioned risk analysis, we need to make sure that:

  • The toolkit provides the functionality and performance that we depend on
  • The device provides the support necessary to operate the toolkit within its specification
  • The toolkit performs as required for our system

So a device manufacturer will have to provide evidence for these claims, i.e. document the requirements placed on Qt, analyze and document the requirements Qt imposes on the system, and perform some degree of testing in the system to prove the requirements are met. The manufacturer also needs to set up a monitoring process to regularly check the bug list of the third-party software component and to assess whether any new bugs impose additional risks to patients. All of these points might be subject to an audit by a notified body or the FDA.

Very often the following question will be asked: where can we buy a GUI toolkit that has already been validated for use in safety-critical medical devices? Unfortunately, there is no such thing as pre-validation for medical devices. As the starting point of third-party component validation is the risk analysis, only the manufacturer of the device can do the qualification, because only the manufacturer can identify the risks. Therefore, IEC 62304 and the FDA regulations do not define a certification process for third-party software (SOUP). The best way a vendor can support a medical device manufacturer is therefore by providing good documentation of its development process and proof of internal testing, which allows the manufacturer to assess whether it is appropriate for the planned application.

If you use a commercial license of Qt, contact The Qt Company and request a description of the QA practices and the test report for the Qt version you intend to use. These documents are readily available and support your qualification effort.

To summarize, if you plan to use Qt for safety-critical functions in a medical device, make sure to:

  1. Identify all risks that might be caused by failures of the user interface
  2. Try to mitigate those risks by means outside the user interface, e.g. by redundant inputs and outputs
  3. Build redundancy into the user interface itself to protect against single-fault failures
  4. Carefully select the software components you use to implement the user interface
  5. Document the rationale for your decision that Qt is appropriate for your device so it can be reviewed by external auditors

If you follow those steps, you will be able to design your device with a modern user interface, and still meet all the safety requirements.

About the Blog Post Author:

Matthias Hölzer-Klüpfel is an independent consultant, trainer and contractor concerned with development processes and project management for medical device software. He co-founded the association “International Certified Professional for Medical Software Board e.V.” which provides the foundation for a certified education program for medical device software development.

You can reach Matthias via matthias@hoelzer-kluepfel.de if you have any further questions.

Earlier Blog Posts about Functional Safety with Qt:

If you are interested in hearing more about Functional Safety, there is a talk at Qt World Summit by Tuukka Turunen about ‘Creating Functional Safety Certified Systems with Qt’.

The post Creating Certified Medical Devices with Qt appeared first on Qt Blog.

Qt 5.8 Alpha released

I’m happy to let you know that we have now reached our first milestone towards the release of Qt 5.8. The Alpha version of Qt 5.8 is now ready, and can be downloaded from download.qt.io or your Qt Account. As a new minor release, Qt 5.8 comes with a lot of new features as well as many bug fixes and improvements. We’ll go through all the new features in more detail as we get closer to the release. For now, let me just mention some of the biggest changes.

New graphics architecture

With Qt 5.8, the graphics architecture of Qt Quick has undergone a larger rewrite. The goal was to remove the tight dependency of Qt Quick on OpenGL that we have had since Qt 5.0, and to make the architecture more agnostic with regards to the graphics API being used. The new infrastructure has been used to create a vastly improved software rendering backend for Qt Quick, and a backend based on DirectX 12.

QML caching

The QML engine has also seen major improvements with a new caching infrastructure that can cache QML files in a precompiled binary form. This infrastructure significantly speeds up the loading of QML applications once the binary cache has been created. It also helps reduce memory consumption. Ahead-of-time compilation of Qt Quick continues to be supported through the commercial Qt Quick Compiler.

Qt Lite Project and configurability

Even though Qt is split up into many modules, it is a large framework with many features. Many of our customers are using only parts of them, and have been asking for an option to create tailored builds of Qt for their use case. This is especially important for embedded devices, where both RAM and Flash storage are often limited.

To accommodate this, we have over the last six months done significant work on our build infrastructure to give our users much more fine-grained control over how Qt is built. This is what we call the Qt Lite Project. The basic infrastructure for this is now in place with the 5.8 Alpha, but we will be doing some more work on it while moving towards the Beta release.

With Qt 5.8, we will add a new tool to Qt for Device Creation that will make it easier to tailor your Qt build and remove all the pieces of functionality you are not using in your embedded project. From initial measurements, we expect that you will be able to reduce the size of a statically linked Qt Quick application by up to 70% compared to Qt 5.6.

New modules

The Wayland Compositor, SCXML and Serial Bus modules have now graduated from Technology Preview to being fully supported. In addition, we added Qt Speech and Qt Network Authentication (featuring OAuth support) as new Technology Previews.

Timeline

With the Qt 5.8 Alpha released, we are now focusing fully on finalizing a couple of remaining items, and we plan to have the Beta ready for you towards the beginning of October and Qt 5.8.0 final by the end of November.

If you would like to hear more about all the cool new things coming with Qt 5.8, we will have in-depth talks about all of them at the Qt World Summit in San Francisco.

I hope you’ll enjoy the Qt 5.8 Alpha. Please download it from download.qt.io or your Qt Account, and don’t forget to give us feedback by writing to the mailing lists or reporting bugs.

The post Qt 5.8 Alpha released appeared first on Qt Blog.

In 45 Minutes: from Scratch to App for Android, iOS, Windows 10

As you know, over the last months I have been evaluating Qt 5.7 and the new Qt Quick Controls 2 for x-platform mobile App development.

QtCon Conference App

You can see a first App live on Google Play, the Apple App Store or the Amazon App Store – search for 'QtCon':

[Image: qtcon_google_play]

The application is available as open source on GitHub.

Blog with some more details: here.

The next Conference App, for the Qt World Summit in San Francisco, will also be available from the Windows App Store (Windows 10) and BlackBerry World (BlackBerry 10).

Talk at Code Talks in Hamburg

I will demonstrate the power of Qt 5.7 / Qt Quick Controls at Code Talks in Hamburg:

in 45 Minutes from Scratch to App

[Image: ekke_codetalks]

BlackBerry 10

Perhaps you’re asking: "And what about your BlackBerry 10 development, ekke?"

Nothing has changed – I'm still developing business Apps for BlackBerry 10 – and the good thing is: I can re-use most of the C++ / Qt code for Android / iOS / W10 🙂 The app architecture and events (Qt SIGNALS – SLOTS concept) are also similar.



Cutelyst 0.13.0 released!

Cutelyst, the Qt web framework, just got a new release: 0.13.0.

A new release was needed now that we have this nice new logo. 

Special thanks to Alessandro Longo (Alex L.) for crafting this cute logo, and a cool favicon for Cutelyst web site.

But this release ain't only about the logo, it's full of cool things: 

When I started Cutelyst, a simple developer Engine (read: HTTP engine) was created; it was very slow and mostly an ugly hack, but it helped me work on the APIs that matter. I then took a look at uWSGI because a friend said it was awesome, and it was great to be able to deal with many protocols without the hassle of writing parsers for them.

Fast forward to the 0.12.0 release, and I started to feel that I was reaching a limit in Cutelyst optimizations and that uWSGI was holding us back. It wasn't only about performance: memory usage (scalability) was too high for something that should be rather small – it's written in C after all.

It also has a fixed number of requests it can take: if you start it with 5 threads or processes, that is 5 blocking clients that can be processed at the same time. If you use the async option you then have a fixed number of clients per process, so 5 processes * 5 async clients = 25 clients at the same time; but these 5 async clients are always pre-allocated, which means that each new process will also be bigger right from launch.

Now think about WebSockets: how can one deal with 5000 simultaneous clients? 50 processes with an async value of 100 each? Performance in async mode was also slower due to the complexity of dealing with them.

So, before getting into writing an alternative to uWSGI in Cutelyst, I did a simple experiment: I asked uWSGI to load a Cutelyst app and fork 1000 times, and wrote a simple QCoreApplication that would do the same. uWSGI used > 1GB of RAM and took around 10s to start, while the Qt app used < 300MB of RAM and took around 3s. ~700MB of RAM is a lot of RAM, and that was enough to get me started.

Cutelyst-wsgi is born. Granted, the command-line arguments are very similar to uWSGI's, and I also followed the same separation between socket and protocol handling; of course, in C++ things are more reusable, so our Protocol class has an HTTP subclass and will in the future have FastCGI and uWSGI ones too.

Did I say that uWSGI before 2.1 doesn't support keep-alive? And that 2.1 is not released, nor does anyone know when it will be? Cutelyst-wsgi supports keep-alive and HTTP pipelining, is completely async and, yes, performs a little better. If you put NGINX in front of uWSGI you can get keep-alive support, but guess what? The uwsgi protocol closes the connection to the front server, so it's quite hard to reach very high speeds. Preliminary results of TechEmpower Benchmarks #13 showed Cutelyst hitting these limits while other frameworks were using keep-alive properly.

Thanks to this new engine, the Engine API got several improvements and is quite stable now. Besides that, a few other important changes were made as well:

  • Changed internals to take advantage of NRVO (named return value optimization)
  • Improved the speed of Context::uriFor(); Cutelyst now requires Qt 5.6 due to a behavior change in QUrl
  • Improved the speed and memory usage of the URL query parser – about 1 s faster over 1M iterations (see the small sketch after this list). Using QByteArray::split() is very convenient, but it allocates more memory and a QList for the results; using ::indexOf() and extracting the parts manually is both faster and more memory efficient. This is the kind of optimization we do in Cutelyst::Core where it makes a difference – in application code the extra complexity might not be worth it.
  • C++ ranged for loops: all our Q_FOREACH & friends were replaced with ranged for loops
  • Use of the new reverse and equal_range iterators
  • Use QHash for storing headers; this was done after several benchmarks showed QHash was faster for all common cases (namely, if it kept the values() in order like QMap it would be used in other places as well)
  • Replaced most QList with QVector, and internally std::vector
  • Multipart/form-data handling got faster; it doesn't seek() anymore but requires a non-sequential QIODevice, as each Upload object points to parts of the body device
  • Added a few more unit tests
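
To illustrate the kind of change described in the query-parser bullet above, here is a simplified, hypothetical comparison (not the actual Cutelyst code): the first variant lets QByteArray::split() allocate a list of fragments up front, while the second walks the buffer with indexOf() and slices out only the pieces it keeps.

#include <QByteArray>
#include <QList>
#include <QPair>

using Param = QPair<QByteArray, QByteArray>;

// Convenient but allocation-heavy: split() first builds a QList<QByteArray>
// holding every fragment before any of them is inspected.
QList<Param> parseWithSplit(const QByteArray &query)
{
    QList<Param> result;
    const QList<QByteArray> parts = query.split('&');
    for (const QByteArray &part : parts) {
        const int eq = part.indexOf('=');
        if (eq < 0)
            result.append(qMakePair(part, QByteArray()));
        else
            result.append(qMakePair(part.left(eq), part.mid(eq + 1)));
    }
    return result;
}

// Leaner: scan with indexOf() and slice out only the pieces we keep,
// skipping the intermediate list of fragments.
QList<Param> parseWithIndexOf(const QByteArray &query)
{
    QList<Param> result;
    int pos = 0;
    while (pos < query.size()) {
        int amp = query.indexOf('&', pos);
        if (amp < 0)
            amp = query.size();
        const int eq = query.indexOf('=', pos);
        if (eq >= 0 && eq < amp)
            result.append(qMakePair(query.mid(pos, eq - pos),
                                    query.mid(eq + 1, amp - eq - 1)));
        else
            result.append(qMakePair(query.mid(pos, amp - pos), QByteArray()));
        pos = amp + 1;
    }
    return result;
}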

Thanks to the above the core library size is also a bit smaller, ~640KB on x64.

I was planning to do a 1.0 after 0.13 but with this new engine I think it's better to have a 0.14 version, and make sure no more changes in Core will be needed for additional protocols. 

Download it here and enjoy!

New Forum theme and security notice

Hello,

Last week we updated the Qt Forum to the latest version of NodeBB.

We had been planning the upgrade for a while, but had to do it on short notice, as a bug that leaked user emails was found in the forum. Thanks to Justin Clift for pointing out the issue to us!

This means that it was possible for someone to find out user emails from the forum. For those users who have their email set as public, this is not an issue, but some of you want to keep your email to yourself. The bug meant that these email addresses could also be found.

No other data was available through the bug, and as we are using a central sign in service, no account information could leak from the forum.

So if you have received more email spam than normal lately, this might be one cause.

We are sorry for the leak, but in our defence, we did not know of it and patched the system within a day of becoming aware of the issue.

But on to the upgrade itself.

With the upgrade we changed to the new default theme used by NodeBB. It looks quite different from the old theme and has already received some feedback both for and against. I personally am getting used to the look and feel, and after the initial shock, I like it. That's a personal opinion, your mileage may vary – please do tell us in the comments.

Due to the rushed upgrade some small things still need tweaking; the colours are a bit off from the Qt green, which will be fixed as soon as I find the time for it.

The reasoning for updating the theme is that we can now follow the NodeBB upgrades faster, as we do not need to customise the theme as much as before. This will bring improvements to you faster.

As an example of a new feature, we now have chat rooms instead of just one-to-one chats on the forum. To create a room, you can start a chat, and from the chat window settings add other users. At least for the Forum regulars this is quite an improvement.

So what do you think of the new Qt Forum look? Please tell us in the comments or drop by the forum to share your opinion.

Updated to credit Justin for finding the leak, thanks again!

The post New Forum theme and security notice appeared first on Qt Blog.

Fast-Booting Qt Devices, Part 4: Hardware Matters

Welcome back!

A while ago, I posted three parts of the Fast-Booting Qt Devices blog post series, where we showcased a 1.5 second boot time, optimized the Qt application and finally showed you how we optimized the entire Linux stack. Today, we will show you that hardware selection and hardware architecture in general can have a big impact on the actual startup time even when using the same CPU. To demonstrate this, we have two boards with an NXP i.MX6 Quad core CPU. One is a board geared towards software development, and the other is a system-on-module board aimed at production use as well.

So, let’s have a small Battle of the Boards! :)

On the left side, we have the NXP SABRE i.MX 6 Quad Development Board:

  • NXP i.MX 6 Quadcore processor, running at 1GHz
  • 1GB DDR3 RAM
  • 8GB eMMC

On the right, we have Toradex Apalis i.MX 6 Computer on Module:

  • NXP i.MX 6 Quad core processor, running at 1GHz
  • 1GB DDR3 RAM
  • 4GB eMMC

Both boards are running exactly the same Qt Cluster demo, kernel configurations and u-boot.

Toradex Computer on Module is a clear winner with 19% (294 ms) faster startup time. Our earlier fast-boot example with the NXP SABRE resulted in a very good 1560 ms from power up to display of the first full screen Qt Quick frame. Now, with the Toradex board, we got an even faster 1266 ms.

Where does the difference come from?

  • Powering up of the board is faster with the Toradex module
  • The kernel is able to access the eMMC earlier, resulting in a faster kernel startup time

So, when designing your embedded devices, remember that hardware selection does matter too. If you need to reach a blazing fast startup time, it certainly helps to have fast memory and a fast memory bus, a well-optimized bootloader and kernel, as well as, of course, a powerful chip that can quickly crunch through the libraries you need to load. The rest is then up to your software – even with optimized hardware you can ruin your boot-up time with a sloppy software design. For those tips, check out the earlier posts in this series.

If you are interested in hearing more, I will be talking about fast-boot of Qt based devices at the Qt World Summit in San Francisco, October 18-20. We are looking forward to seeing you there, and hearing your feedback!

 

The post Fast-Booting Qt Devices, Part 4: Hardware Matters appeared first on Qt Blog.

Goodbye, Q_FOREACH

Q_FOREACH (or the alternative form, foreach) will be deprecated soon, probably in Qt 5.9. Starting with Qt 5.7, you can use the QT_NO_FOREACH define to make sure that your code does not depend on Q_FOREACH.

You may have wondered what all the fuss is about. Why is there a continuous stream of commits going into Qt replacing Q_FOREACH with C++11 ranged for-loops? And why does it take so many commits and several Qt versions to port away from Q_FOREACH? Can’t we just globally search and replace Q_FOREACH (a, b) with for (a : b) and be done with it?

Read on for the answers.

What is Q_FOREACH?

Q_FOREACH is a macro, added for Qt 4, that allows you to conveniently iterate over a Qt container:

Q_FOREACH(int i, container)
    doSomethingWith(i);
Q_FOREACH(const QString &s, functionReturningQStringList())
    doSomethingWith(s);

It basically works by copying the second argument into a variable of an internal type called QForeachContainer, and then iterating over it. I’m only mentioning this for two reasons: First, you will start seeing that internal QForeachContainer at some point in deprecation warnings (probably starting with Qt 5.9), and, second, yes, you heard correctly, it copies the container.
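
To make that concrete, here is a rough, simplified sketch (my own illustration, not the real macro, which also has to deal with break and continue) of what a Q_FOREACH loop over a QStringList named container essentially boils down to:

{
    const QStringList copy = container;                      // the copy Q_FOREACH takes
    for (auto it = copy.constBegin(); it != copy.constEnd(); ++it) {
        const QString &s = *it;
        doSomethingWith(s);                                   // the loop body
    }
}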

This copying has two effects: First, since the copy taken is essentially const, no detaching happens when iterating, unlike if you use the C++98 or C++11 alternatives:

for (QStringList::const_iterator it = container.begin(), end = container.end(); it != end; ++it)
   doSomethingWith(*it);
for (const auto &s : container)
   doSomethingWith(s);

In both cases the (explicit or implicit) calls to begin() and end() cause a non-const container to detach from shared data, i.e. to perform a deep copy to gain a unique copy of the data.

This problem is well-known and there are tools to detect this situation (e.g. Clazy), so I won’t spend more time discussing it. Suffice to say that Q_FOREACH never causes detaches.

Except when it does.

Q_FOREACH is Convenient^WEvil

The second effect of Q_FOREACH taking a copy of the container is that the loop body can freely modify the original container. Here’s a very, very poor implementation that takes advantage of this:

Q_FOREACH(const QString &lang, languages)
    languages += getSynonymsFor(lang);

Of course, since Q_FOREACH took a copy, once you perform the first loop iteration, languages will detach from that copy in Q_FOREACH, but this kind of code is safe when using Q_FOREACH, unlike when you use C++11 ranged for-loops:

for (const auto &lang : languages)
    languages += getSynonymsFor(lang); // undefined behaviour if
                                       // languages.size() + getSynonymsFor(lang).size() > languages.capacity()

So, as we saw, Q_FOREACH is convenient—if you write code.

Things look a bit different if you try to understand code that uses Q_FOREACH, because you often can’t tell whether the copy that Q_FOREACH unconditionally takes is actually needed in any particular case, or not. A loop that plain falls apart if the container is modified while iterating is much easier to reason about than a Q_FOREACH loop.

And this brings us to porting away from Q_FOREACH.

Towards a Q_FOREACH-Free World

Things would be pretty simple if you could just globally search and replace Q_FOREACH (a, b) with for (a : b) and be done with it. But alas, it ain’t so easy…

We now know that the body of a Q_FOREACH loop is free to modify the container it’s iterating over, and don’t even for a minute think that all cases are so easy to recognize as the example with the languages above. The modification of the container may be several functions deep in the call stack originating from the loop body.

So, the first question you need to ask yourself when porting a Q_FOREACH loop is:

Does the loop body (directly or indirectly) modify the container iterated over?

If the answer is yes, you also need to take a copy and iterate over the copy, but as the nice guy that you are, you will leave a comment telling the future You why that copy is necessary:

const auto containerCopy = container; // doSomethingWith() may modify 'container' if ....
for (const auto &e : containerCopy)
    doSomethingWith(e);

I should note that in cases where the container modification is restricted to appends, you can avoid the copy (and the detach caused by it) by using an indexed loop:

for (auto end = languages.size(), i = 0; i != end; ++i) // important: cache 'languages.size()'
    languages += getSynonymsFor(languages[i]);

Avoiding Detaching

If your container is a std:: container or QVarLengthArray, you are done. Arguably, Q_FOREACH should never, ever have been used on such a container, since copying those always copies all elements (deep copy).

If your container is a const lvalue or a const rvalue, you are done, too. Const objects don’t detach, not even the Qt containers.

If your container is a non-const rvalue, simply store it in an automatic const variable, and iterate over that:

const auto strings = functionReturningQStringList();
for (const QString &s : strings)
    doSomethingWith(s);

Last, but not least, if your container is a non-const lvalue, you have two choices: make the container const, or, if that doesn’t work, use std::as_const() or qAsConst() (new in Qt 5.7, but easily implemented yourself, if required) to cast to const:

for (const QString &s : qAsConst(container))
    doSomethingWith(s);

There, no detaches, no unnecessary copies. Maximum efficiency and maximum readability.
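
Since qAsConst() is easily implemented yourself, here is a minimal sketch of such a helper, modeled on C++17's std::as_const() and given a different name here to avoid any clash with Qt 5.7's own function:

// Returns a const reference to its argument, so ranged for-loops pick the
// const begin()/end() overloads and never detach.
template <typename T>
constexpr const T &asConstRef(T &t) noexcept { return t; }

// Deleted for rvalues: a const reference to a temporary would dangle in a
// ranged for-loop.
template <typename T>
void asConstRef(const T &&) = delete;

With that in place, for (const QString &s : asConstRef(container)) behaves just like the qAsConst() example above.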

Conclusion

Here’s why you’ll want to port away from Q_FOREACH, ideally to C++11 ranged for-loops:

  • Q_FOREACH is going to be deprecated soon.
  • It only works efficiently on (some) Qt containers; it is prohibitively expensive on all std containers and QVarLengthArray, and doesn’t work at all for C arrays.
  • Even where it works as advertised, it typically costs ~100 bytes of text size more per loop than the C++11 ranged for-loop.
  • Its unconditionally taking a copy of the container makes it hard to reason about the loop.

Happy porting!

The post Goodbye, Q_FOREACH appeared first on KDAB.

Embedded Systems Are the Backbone of IoT, but It’s Software That Brings It All Together

Smoking hot terms like Big Data and the Internet of Things or “IoT” have taken their place in conventional business lingo, and it’s practically impossible to avoid these terms — everyone has recognized what developers have seen for many years. New applications for your products, new opportunities for your offering, new customer areas are emerging, and the time to re-think how you apply connectivity, mash-ups and various sensors is going mainstream. As the business potential has started to materialize, we see that the ecosystem around IoT begins to intensify and expand, strengthening the backbone of IoT, as it shifts into high gear.

You could argue that the Internet of Things is simply the connected embedded system re-coined. These systems are already around us and machine-to-machine (M2M) systems have been chatting to each other for decades. But in addition to just the technical capabilities between embedded devices, IoT also includes the aspect of The Omnipresent Cloud and mobile client access, shifting the way we use these connected embedded systems. And that then enables all the new IoT innovations, but also affects how we need to design these systems, especially from a software perspective: Instead of creating a self-contained embedded device with an online connection, we are designing complex and extensible systems with connected sensors, embedded devices, a cloud back-end and mobile clients. *Poof* Embedded software design just became exponentially more complex.

As computers (and sensors) get smaller, smarter and connected, our everyday objects, from clothing to lavatories to cars, get more intelligent. Although hardware has center stage, it’s time to start looking at the software that will bring it all together.

Embedded Development Can Be Modern, Too 

In the past ten years, there has been a tremendous leap in how software is developed. Modern software development in general seems to be about finding ways of working in an even more agile manner, adopting new techniques quickly, abandoning non-working ones, moving rapidly forward with continuously deployed changes and near-real-time iterations in a harmonious telepathy between the customer and a self-guiding and proactive development team.

Modern software development is naturally awesome, but unfortunately, in embedded software development one can too rarely apply any of the stuff the cool kids in the web/mobile world are hyping about. Because of industry-related verification/certification requirements and especially the technical limitations of embedded cross-compilation workflow, I still hear waterfalls in the distance. At the same time, when we’re supposed to create these complex and innovative IoT things with modern touch UIs, we can’t afford to have development cycles that take weeks for each iteration of a simple UI tweak. The markets need to be reached faster! This is what we want to change with Qt—we want to make embedded development as seamless as desktop or mobile development. We want to provide one technology for all embedded and mobile platforms — and enable rapid deployment cycles for the whole IoT system.

Qt libraries give you various UI approaches for creating a unified UX between your embedded and mobile devices. In addition, there are plenty of high-level Qt APIs for creating the engine of your IoT gateways: e.g. Bluetooth LE for sensor communication and built-in JSON support for cloud communication. The Qt Creator IDE works on all platforms, supports direct deployment to desktop, embedded and mobile targets and includes all the tools for designing, developing, debugging, profiling and analyzing your code. You can do rapid prototyping on your laptop and push the build to your embedded hardware or mobile device to see the changes there.
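
As a small taste of what that looks like in code, here is a minimal, hypothetical sketch (the sensor name and value are invented for illustration) that packages a reading as a JSON payload with Qt's built-in JSON classes, ready to be handed to the networking layer:

#include <QDateTime>
#include <QDebug>
#include <QJsonDocument>
#include <QJsonObject>

int main()
{
    QJsonObject reading;
    reading["sensor"] = "temperature";   // hypothetical sensor id
    reading["value"] = 21.5;             // hypothetical measurement
    reading["timestamp"] = QDateTime::currentDateTimeUtc().toString(Qt::ISODate);

    // Compact JSON payload, e.g. for a POST via QNetworkAccessManager.
    const QByteArray payload = QJsonDocument(reading).toJson(QJsonDocument::Compact);
    qDebug().noquote() << payload;
    return 0;
}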

  • Support multiple devices with or without screens
  • Leverage your core communication libraries between a desktop interface and a mobile gadget
  • Share code with other IoT developers building different parts of the ecosystem

To learn why having an embedded tool that has powerful UX capabilities can make the difference for your business: Read the whitepaper “Building the Internet of Things and How Qt Can Help”. 

 

The post Embedded Systems Are the Backbone of IoT, but It’s Software That Brings It All Together appeared first on Qt Blog.

Qt Creator 4.1.0 released

We are happy to announce the release of Qt Creator 4.1.0.

[Screenshots: the Flat Dark and Flat Light themes in Qt Creator 4.1]

Themes

We added Flat Light and Flat Dark themes, complementing the Flat theme which was added in 4.0. They are available in the Environment > Interface > Theme settings. We also added some more editor color schemes which you find in Text Editor > Fonts & Colors.

Editing

Text editors now behave much better with regard to automatically inserting and skipping characters. If you type a quote or bracket, the corresponding closing character is added. If you remove the opening character, it is removed again. If you type the closing character yourself, it replaces the automatically inserted one. Both of these now only happen as long as the text cursor has not moved away from the closing character. You can also configure automatic insertion of brackets and quotes individually in the Text Editor > Completion settings.

C++

Aside from fixing bugs in the code model and static analyzer integration, we also updated our binary packages to use Clang 3.8.1, which also fixes many issues, especially with MSVC. We also added a more recent patch to Clang that makes it work better with MSVC2015 Update 3.

Qt Quick

Both Qt Quick Designer and QML Profiler received many performance improvements. You can now choose a Qt Quick Controls 2 style which Qt Quick Designer uses to render your items. The new Move to Component action moves an item and its contents into a separate file.

CMake

Many bugs were fixed for supporting CMake projects, and the workflow was further improved. CMake is only run automatically if Qt Creator is the active application, and you can turn automatic running completely off (Build & Run > CMake). If you set the QML_IMPORT_PATH variable in the CMake cache for your project, Qt Creator picks this up and feeds it to the QML code model, so you can access your QML imports in the editor. (Example CMake code: set(QML_IMPORT_PATH ${CMAKE_SOURCE_DIR}/qml ${CMAKE_BINARY_DIR}/imports CACHE STRING "" FORCE))

Other improvements

If you turn on the plugin (Help > About Plugins, or Qt Creator > About Plugins on macOS), Qt Creator gains experimental support for the Nim programming language. Many thanks to Filippo Cucchetto for this contribution. It supports syntax highlighting, indentation, coding style settings, and simple project management, including building, running and debugging applications.

All this is just a small excerpt from all the changes and improvements that you find in Qt Creator 4.1.0. Find out more in our change log, or just go ahead, download and try it for yourself!

Get Qt Creator 4.1.0

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.1.0 is also available through an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.1.0 released appeared first on Qt Blog.

Release 2.9.1: Multiplayer Enhancements & Plugin Updates

V-Play 2.9.1 is now available to download. This update improves the V-Play Multiplayer by adding a single player option and a new latency testing feature, as well as general improvements to the multiplayer game example. When you update to V-Play 2.9.1, you can also include the latest Android libraries, Play Services libraries 9.4.0, for all of your V-Play Plugins. This provides your plugins with the most up-to-date functionality on the Android platform.

It also includes fixes and improvements for a number of other V-Play features.

Update Now!

If you’d like to make cross-platform apps and games but haven’t signed up for V-Play yet, you can download the SDK for free.

Multiplayer Enhancements

One Card!

Following the recent release of V-Play Multiplayer, we added a multiplayer game example, ONU, to the V-Play Sample Launcher. ONU is now One Card!, and includes advanced functionality such as single player mode, medals for high-level players and an increased max player level.

One Card! also showcases how you can add monetization features, such as in-app purchases and advertisements, to your mobile game. It’s simple to add these features with the Soomla and AdMob plugins.

V-Play Multiplayer Functions & Properties

In order to make the creation of multiplayer games even easier with V-Play, you can now use some added functions and properties to reduce your development time and improve your testing phase.

The latencySimulationTime property allows you to test the messaging system within your multiplayer game by simulating network latency. This helps you to fix errors coming from delayed messages which occur when players have a poor network connection. This makes debugging your multiplayer game much easier.

The restartGame() and endGame() functions make it possible to restart or end a game at the same time for all players in a game room.

You can now use the createSinglePlayerGame() function to turn your multiplayer game into an offline single player game. This function skips the matchmaking phase of game creation and lets the player begin a new game immediately. It prevents messages from being sent and handles everything on the local device.

One Card! Game Demo

You can try out all these new features for yourself. For a sample implementation of these new features, have a look at the “One Card!” Multiplayer Demo. You can also download the official One Card! game on the App Store and Google Play.

Updated V-Play Plugins

When you update to V-Play 2.9.1, you can now include the latest Android libraries, Play Services libraries 9.4.0, for all of your V-Play Plugins. This provides your plugins with the most up-to-date functionality on the Android platform.

We’ve also updated the OneSignal, Chartboost and AdMob plugins to their latest versions. If you’re planning to use these plugins on iOS devices, make sure to copy the latest iOS framework from here. This update fixes the video caching issue within the Chartboost plugin that prevented the loading of video interstitials and rewarded videos.

Fixes & Improvements

V-Play 2.9.1 includes fixes and improvements to TexturePacker integration, the V-Play Game Network, and the Theme component. You can find more information on these updates in the V-Play change log.

How to Update

Test out these new features by following these steps:
Step 1

Open the V-Play SDK Maintenance Tool in your V-Play SDK directory. Choose “Update components” and finish the update process to get V-Play 2.9.1, as described in the V-Play Update Guide.

If you haven’t installed V-Play yet, you can do so now with the latest installer from here.

Step 2

The V-Play Sample Launcher allows you to quickly test and run all the open-source examples and demo apps & games that come with the V-Play SDK, from a single desktop application.

After installing V-Play, you can start the V-Play Sample Launcher from the application shortcut in your V-Play SDK directory.

Now you can explore all of the new features included in V-Play 2.9.1!

For a complete list of the changes to V-Play with this update, please check out our change log!

More Posts like This

How to Make a Game like Super Mario Maker with Our New Platformer Level Editor

16 Great Sites Featuring Free Game Graphics for Developers

The 13 Best Qt, QML & V-Play Tutorials and Resources for Beginners

21 Tips That Will Improve Your User Acquisition Strategy

The post Release 2.9.1: Multiplayer Enhancements & Plugin Updates appeared first on V-Play Engine.

QtCon: Squish for Qt Training in Berlin

On September 1st, as part of the QtCon conference, our partner KDAB hosts a day of training. This training day allows you to gain knowledge in several Qt-related topics, including automated Qt GUI testing with Squish for Qt.

froglogic‘s ISTQB-certified Senior Software Trainer Florian Turck will conduct the full-day Squish for Qt training and share his in-depth experience of effectively using Squish. Seats for the training are still available.

Register for QtCon here:

https://conf.qtcon.org/en/session/new?conference_acronym=qtcon

Boost dependencies and bcp

Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here along with dot files for generating the images.

The monster diagram is here:

Edges and Incidental Modules and Packages

The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

In practice, packagers do not split Boost into packages like that. At least for debian packages they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package libboost1.58-dev. Perhaps the reason for packagers putting it all together is that there is little value in splitting the header-only repositories of the monolithic Boost from each other if they will all be packaged together anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then add integration and extension for other types in higher level libraries (if at all).

Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules’. The name is inspired by ‘incidental data structures’ which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages.

As far as I am aware such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer, and it wouldn’t affect how Boost is packaged.

My script for generating the dependency information is simply grepping through the include/ directory of each repository and recording the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost because Hana includes code for integration with existing Boost code.

Dependency Analysis and Reduction

One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways which some in Boost prefer – it is surprisingly fuzzy). That is useful for documentation at least, but as evidenced it appears to not be useful from a packaging point of view. So, are these diagrams useful for anything?

While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.

Note also that these edges do not contribute time to your slow build – reducing edges in the graphs by moving files won’t make anything faster. Rewriting the implementation of certain things might, but that is not what we are talking about here.

I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size. Such a ‘subset distribution’ doesn’t need any of those. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
15M     myicl/

Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram you can see that it is quite large.

Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

For those ‘incidental modules’, I selected one of the repositories within the group and named the group after that one repository. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.

boost/libs/icl$ git grep -Pl -e include --and \
  -e "thread|spirit|pool|serial|date_time" include/
include/boost/icl/gregorian.hpp
include/boost/icl/ptime.hpp

Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers.

I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis. Other files in ICL do not include them.

boost/libs/icl$ git grep -l gregorian include/
include/boost/icl/gregorian.hpp
boost/libs/icl$ git grep -l ptime include/
include/boost/icl/ptime.hpp

As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set<double> and icl::interval_map<double>. So, I can simply delete those files.
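
(For context, that usage looks roughly like the following minimal sketch, which is my own illustration rather than code from this post; note that none of it touches the date/time headers.)

// Plain interval_set<double> usage; no Boost.DateTime involved.
#include <boost/icl/interval_set.hpp>
#include <iostream>

int main()
{
    boost::icl::interval_set<double> occupied;
    occupied.insert(boost::icl::interval<double>::right_open(0.0, 1.5));
    occupied.insert(boost::icl::interval<double>::right_open(1.0, 2.0)); // overlaps the first, gets joined

    std::cout << std::boolalpha
              << boost::icl::contains(occupied, 0.5) << '\n'   // true
              << boost::icl::contains(occupied, 2.5) << '\n';  // false
    return 0;
}

With that established, deleting the date/time headers is safe: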

boost/libs/icl$ git grep -l -e include \
  --and -e date_time include/boost/icl/ | xargs rm
boost/libs/icl$

and run the bcp tool again.

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
12M     myicl/

I’ve saved 3M just by understanding the dependencies a bit. Not bad!

Mostly the size difference is accounted for by no longer extracting boost::mpl::vector, and secondly the Boost.DateTime headers themselves.

The dependencies in the graph are now so few that we can consider them and wonder why they are there and can they be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

include/boost/icl$ git grep -C2 -e include \
   --and -e boost/container
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/map.hpp>
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <map>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>

So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not. If we were talking about C++ code here we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common.

I know that I’m not defining that and I don’t need Boost.Container, so I can hack the code to remove those includes, e.g.:

index 6f3c851..cf22b91 100644
--- a/include/boost/icl/map.hpp
+++ b/include/boost/icl/map.hpp
@@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
 
-#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
-#   include <boost/container/map.hpp>
-#   include <boost/container/set.hpp>
-#elif defined(ICL_USE_STD_IMPLEMENTATION)
 #   include <map>
 #   include <set>
-#else // Default for implementing containers
-#   include <map>
-#   include <set>
-#endif

This and following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph.

I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete the ptime.hpp or gregorian.hpp etc because bcp wouldn’t find them in the first place. It would still find the Boost.Container etc includes which appear in the ICL repository however.

In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of.

Examining Proposed Standard Libraries

We can also examine other Boost repositories, particularly those which are being standardized by newer C++ standards because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost.

Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph.

Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency.

I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph.

Using a bcp-extracted library with Modern CMake

After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. Here is the modern way to do that:

add_library(boost_mpl INTERFACE)
target_compile_definitions(boost_mpl INTERFACE
    BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
)
target_include_directories(boost_mpl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)

add_library(boost_icl INTERFACE)
target_link_libraries(boost_icl INTERFACE boost_mpl)
target_include_directories(boost_icl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl/libs/icl/include"
)
add_library(boost::icl ALIAS boost_icl)

Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and obsolete, but they will remain, they are used by default when using GCC, and that will not change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro.

By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

target_link_libraries(myexe boost_icl)
target_link_libraries(myexe boost::icl)

Using the ALIAS version has a different effect however: If the boost::icl target does not exist an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets with :: in the name and ALIAS makes that possible for any library.


QtWebKit: I'm back!

Hello world!

 

Five years have passed since the last entry in this blog, and almost 3 years since the infamous "Changes in QtWebKit development" thread at webkit.org. Fortunately, we've made quite a different kind of change in QtWebKit development lately, and it is much more exciting.

QtWebKit is back again!


If you were following QtWebKit development after 2013, you know that development never actually stopped: each release got a bunch of bugfixes and even brand new features. However, the WebKit engine itself has not been updated since the Qt 5.2 release. That's why it didn't support recent changes in Web standards that happened after 2013, including the new JavaScript language standard ES2015 (also known as ES6), as well as improvements in the DOM API and CSS.

However, things changed in 2016, and now we have revived QtWebKit! The core engine code was updated to its current state, and as a result we (and you!) can use all the improvements made by the WebKit community during these 3 years without any changes to the code of existing Qt applications!



You may be wondering why anyone would want to use QtWebKit in 2016, when the shiny new QtWebEngine is available. There are a number of reasons:
  • When used in a Qt application, QtWebKit has a smaller footprint because it shares a lot of code with Qt. For example, it uses the same code paths for drawing and networking that your regular Qt code uses. This is especially important for embedded systems, where both storage space and memory are scarce resources. It's possible to go further and cut away features which are not crucial for your application, using the flexible configuration system of WebKit.
  • On Linux QtWebKit uses GStreamer as the default media player backend. This means that application users will be able to use patent-encumbered codecs (if this is legal in their areas) without getting you (as the application developer or distributor) into legal trouble.
  • Lots of existing open source applications depend on QtWebKit, but without security updates their users are left open to vulnerabilities. There are only two ways to work around this problem: port applications away from QtWebKit (which is often a hard task because QtWebKit allows much deeper integration with application code than alternative solutions), or update QtWebKit itself, which makes this large porting work unnecessary.
  • QtWebKit is more portable than Chromium: it can run on any CPU architecture supported by Qt and on virtually any Unixish OS (as well as Windows and Mac). The only requirement is a C++11 compiler.
  • Non-interactive user agents like PhantomJS or wkhtmltopdf don't gain any benefits from multi-process architecture, so using single-process WebKit 1 API allows them to have less resource footprint and simpler flow of execution.

    Q: I've heard that WebKit engine is not relevant anymore, since the crowd is working on Blink these days!


    A: This is not true. Despite Google's departure, WebKit remains one of the leading browser engines, and is progressing at a fast pace. If you don't believe it, read on! You may also want to read the release announcements of Safari Technology Preview and WebKitGTK, which highlight other WebKit features under development.

    Now let's see what we can do with QtWebKit in 2016!

    JavaScript engine improvements and ES2015 status

     

     Most ES2015 features are supported now (for comparison, QtWebKit 5.6 has only a 10% rating). Note that WebKit is the first web engine to provide proper tail calls, which means you can enjoy functional programming without unnecessary stack growth in tail recursion!

    WebKit gained a new tier of JavaScript JIT compilation, called FTL. The first implementation was based on the LLVM compiler infrastructure, but now we are shipping the B3 compiler, which is more lightweight, does not pull in additional dependencies, and also compiles faster. FTL usually gets activated for computationally intensive JS code, and is especially useful for running native code compiled to asm.js.

    Code produced by JavaScript JIT now uses normal C stack, reducing overall memory usage and fragmentation.

    JIT compiler now uses background threads, so compilation does not block execution of other code.


    New (and old) CSS properties


    Web standards evolve rapidly, and more and more CSS properties find their way into the specification. Most of them have already been available for a long time, but used the -webkit vendor prefix, as they were non-standard extensions at the time of their introduction; now they (finally!) have a formal description which all vendors are obliged to follow (though sometimes the standardization process changes the behavior of old properties). Standardized properties are available without vendor prefixes, and web page authors are starting to actively use the new spellings.

    Unfortunately, sometimes they break compatibility with old browsers, which implement only the prefixed properties, with disastrous consequences. Here are screenshots of a site that uses unprefixed flexbox properties, defined in CSS3:

      [Screenshots: the same page rendered by QtWebKit 5.6 and by QtWebKit Technology Preview 3]

      CSS Selector JIT


      Besides JavaScriptCore, WebKit now features yet another JIT compiler. Its aim is to speed up the application of CSS style sheets to the elements of the page DOM, so-called style resolution. The average performance gain is about 2x; however, for complex selectors and/or pages with lots of DOM nodes the gain may be substantially larger.

      Selector JIT also makes querySelector() and querySelectorAll() faster, but speed up factor may differ.

      -webkit-initial-letter


      This is a new CSS property, allowing page authors to create a "drop cap" effect without much hassle. In order to make this effect work correctly with calligraphic fonts, Qt 5.8 (not yet released) is required.

      Other improvements


      • Responsive images support (<picture> element, srcset and sizes attributes)
      • ellipse() method in the Canvas API
      • CSS selectors ::read-write and ::read-only
      • HTML <template> element
      • APNG images

      We also support the following web features, with experimental status and only for the GStreamer media player backend:
      • Media Source Extensions
      • WebAudio


        The path ahead


        Unfortunately, porting Qt-specific code to the new WebKit is not always easy, and we had to disable certain features until all code behind them is ported properly. So far, the following prominent features are not yet working:
        • QML API
        • WebGL and CSS 3D transforms
        • Accelerated compositing
        • Private browsing
        However, don't be discouraged! Work is in progress and we hope to make these features available soon. But we are short on manpower, so we cannot work on many things in parallel. If you want to get your favorite feature ready sooner rather than later, please join our project. We have a lot of work to do, most items don't require any prior knowledge of WebKit, and some don't even require you to know C++ (yes, there is work for those of you who know only HTML + CSS + basic JavaScript, or only Python). Another way to help us is to report bugs you have found, or help to track down known issues.

        You can follow the development of QtWebKit at the GitHub repository; however, if you want to obtain bleeding edge sources, use another repository - the latter is much smaller than the original, but still contains all files required to build QtWebKit. See our wiki for build instructions and additional information about the project.

        P.S.


        Today marks 10 years since the first chunk of QtWebKit code was merged into the WebKit repository. See https://bugs.webkit.org/show_bug.cgi?id=10466 for more details.

        Update


        Technology Preview 3 is now available: release notes, tarball. Binaries for Windows and macOS will be uploaded a bit later.

        Introducing the Qt Lite project—Qt for any platform, any thing, any size

        by Nils Christian Roscher-Nielsen (Qt Blog)

        We believe in a future of great software and hardware, developed together, delivered quickly, and that you can have fun in the process. Embedded development should be just as simple as all other software development, and you should immediately see the result of your ideas running on your device.

        The number of devices and things surrounding us is rapidly increasing, as they become more intelligent and require software that runs on a greater variety of hardware—everything from IoT devices with or without a screen, smart watches through to high end smart TVs and industrial grade PCs. As the requirements and the world of software development change, so does Qt. We have taken action and are now unveiling the Qt Lite Project. This is a whole range of changes to Qt, allowing you to strip Qt down and bring in exactly what you need in order to create your device for more or less any platform and any thing – regardless of size. Qt Lite is neither a separate product nor a fork of Qt—it is all built into Qt, allowing us to efficiently develop and maintain it as part of the whole Qt framework. As such, many of these changes will benefit all Qt users, but especially those targeting resource-constrained devices.

        For the past 20 years, Qt has been used on a massively wide range of operating systems and embedded devices. It didn’t take long before embedded Linux was as important for Qt as its desktop counterpart, but many other embedded operating systems have also followed this trend, and Qt has supported a wide range of Linux, Microsoft and various real time operating systems (RTOS).

        However, to efficiently utilize Qt on these operating systems, and especially on those embedded devices—special as they often are—it has sometimes been challenging and time consuming to configure Qt to efficiently use the different hardware components, available libraries, and strip out the parts of Qt and the OS that are not needed.

        Over the past six months we have looked at many of these challenges—and more—and been working on making Qt a much more targeted framework that will facilitate the whole development cycle and lifetime of embedded device based products. In this blog post, we will look at some of the changes we have made, as well as the path beyond that. All of these efforts are part of “Project Qt Lite”.

        The configuration system

        We know that Qt is being used in many different projects, in varying industries and for vastly different purposes. So making one change, or one optimal version of Qt, is not feasible. Therefore the starting point, and the biggest code change coming as a part of our embedded effort for Qt 5.8, is a new configuration system. When we introduced Qt 5, we had a lot of focus on the modularization of Qt, so it was less monolithic. The modules became less dependent on each other and could easily be developed, tested and deployed independently. But configuring the content of each module was still difficult, so optimizing for a resource-constrained embedded system was not as straightforward as we would like it to be. If you needed a specific feature, like a specific way of handling internationalization or audio functionality, or broader multimedia features, you often needed to add in several new modules, where you would only use a fraction of the functionality. Enabling one single feature exclusively required a lot of manual tweaking, and that took a lot of time.

        The new configuration system in Qt allows you to define the content you need from each module in much more detail for your project, and easily allows for feature-based tailoring of the Qt modules. We are starting with enabling this fully for Qt Core, Qt Network, Qt GUI, Qt QML and Qt Quick. You can now fine-tune which features from these modules you want to include in your project. There is no longer any need to include unnecessary features. We will also expand this to be more granular and cover more modules in the time to come.

        Developer Workflow

        Moving forwards we want to put focus on a development workflow that has optimization in mind from the very beginning. In a world where hardware is getting cheaper, most frameworks do not care much for footprint or memory consumption — all libraries are included from the get go, all features enabled and options checked. This makes feature development simple, but optimization so much harder. Qt Lite now allows you to start with a minimal deployable configuration, and allows you to simply add in any additional feature you will require while developing your project.

        This leaves you in complete control, with a continuous understanding of the consequences of your actions, and allows for transparency of the development project throughout the team. How big is the application under development becoming? Is this web browser really needed? And does cutting these corners actually make sense? Every included feature and added module will be immediately visible, and you will know how it affects the overall footprint of the application.

        To facilitate this, we will start by providing two different reference configurations as a part of Qt Lite:

        Firstly, a full prototyping environment, like for example the configuration behind our demo images as they are shipped with Qt for Device Creation today. This is a great starting point for a mid-cost, low-volume distribution, for example; it has all features enabled and can quickly and easily be used in products.

        In addition to that we also want to add another Qt configuration that is as minimal as possible. This will provide a great starting point for software that needs a smaller footprint, high performance and still be delivered quickly to the market. By significantly reducing the time spent on optimization at the end of the project, products can have a much faster time-to-market.

        No Open GL Requirement

        One of the main drivers behind the Qt Quick and QML technology was to introduce a rendering architecture optimized for OpenGL. However, that also meant that OpenGL became a requirement for all Qt Quick based projects. For several good reasons, we see the need for cheaper, more efficient or specially certified hardware that does not support OpenGL. In Qt we have therefore introduced a fully integrated, supported and efficient 2D Software Renderer for Qt Quick. This allows you to use all the power of the QML language to create beautiful user interfaces on embedded devices without OpenGL hardware available.

        The Qt Quick 2D renderer can work in software only, but it is also designed to utilize accelerated 2D operations for devices that pack a little bit more punch but still don't have full OpenGL support.
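
        As a small, hedged illustration of how an application might opt in (this assumes the QT_QUICK_BACKEND environment variable used to select the Qt Quick scene graph backend in Qt 5.8, and a hypothetical main.qml shipped in the application's resources):

        #include <QGuiApplication>
        #include <QQmlApplicationEngine>
        #include <QUrl>

        int main(int argc, char *argv[])
        {
            // Must be set before the scene graph is created; "software" selects
            // the Qt Quick 2D renderer instead of the OpenGL backend.
            qputenv("QT_QUICK_BACKEND", "software");

            QGuiApplication app(argc, argv);
            QQmlApplicationEngine engine(QUrl(QStringLiteral("qrc:/main.qml"))); // hypothetical QML file
            return app.exec();
        }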

        Tooling

        Along with the new configuration system, we have also developed a new graphical tool for configuring, selecting and setting various options when building Qt. These configurations can be saved and reused. This will also make it easier to modify your configurations for new hardware, or changing requirements.

        The Qt configuration tooling is now even more powerful and feature rich than ever before. By making all the options available easily accessible, integrating the documentation and providing reasonable starting default configurations for various use cases, you get a simple and efficient way to squeeze a lot more juice out of your existing projects.

        We are currently working on a way to sort configuration options into groups, so that you can easily see which configurations need to work together to enable use cases like internationalization, multimedia, web capabilities or other features. You can of course save these configurations and profiles, to continue using them with other builds, versions of Qt, or new hardware. These tools will be integrated as a part of Qt for Device Creation.

        Targets

        A major part of our focus is on extending the available hardware that you can easily and efficiently use to deploy Qt based applications. There are several devices and project types that can benefit from our current efforts. A typical example can be devices with RAM and Flash in the 32 MB or even 16 MB area, with the intention to go much lower in the future. Also, there is no longer any need for OpenGL hardware to use Qt Quick, which significantly extends the number of devices where Qt can be used.

        The main usage of this is still expected to be the Cortex-A based architecture, or similar, but we are also aiming at the ARM Cortex-M7, as one example.

        And the list goes on

        There is a myriad of other features all enhancing the embedded developer experience and device creation workflow on resource-constrained devices, coming with Qt 5.8. We are further developing the Qt Quick Controls 2, that are specially designed for touch-enabled devices, and are introducing many new features as well as improvements and new themes.

        We have put a lot of effort into our new Over-the-Air update mechanism. It is also a part of Qt for Device Creation for Qt 5.8, and we have already blogged about it in great detail. This is a part of our continuous push to make device creator’s life simpler, shorten time-to-market, and reduce the total cost of a project, by providing an extremely powerful way of managing your device life cycle.

        The Qt Wayland based compositor makes it simple to create fully fledged multi-application devices. But we are also improving EGLFS, and enhancing the multi-screen capabilities.

        And the Qt Emulator that ships with Qt Creator makes it very simple to quickly iterate over designs and optimize applications, even without the target hardware available to all developers in the project.

        An open road ahead

        We have for a long time been putting a lot of emphasis on the embedded space, for example with our Qt for Device Creation product, and we will continue this effort relentlessly. And we don't want that effort just to be an internal project, but we want you to know about it. Because it is all about you, and what you can achieve when creating your products. Our aim is to improve Qt, making it more lightweight, easier to use, and performing better than ever before. To achieve this, we need your feedback.

        We will continue our work making Qt a better framework for embedded projects of all kinds, running on devices in a wide range of industries. We have many exciting plans and we are working with some really interesting customers to bring great projects to the market. Examples being the Automotive systems based on Qt, the usage in the Avionics industry and the work we do with home appliances amongst many other Qt based projects. IoT is another important part of our strategy ahead, and making sure that all devices can be developed with a Qt based platform, communicate over supported protocols and that software can easily be extended to the next generation device is extremely important in a wide range of industries today.

        The next stage of Qt Lite—as soon as the essentials are in place—will be along three major lines.

        Firstly, code optimizations to improve the run time performance and the RAM consumption. This will require a lot of code changes, in many different places of Qt. Some of these changes might not be fully source compatible with Qt, but we believe that such embedded projects can make that sacrifice for the sake of performance. This is important—but difficult—work, and some of our best developers are on it.

        Secondly we will spend a lot of time on the configuration of the full stack, not just the Qt libraries. With Qt for Device Creation we offer an out-of-the-box embedded Linux stack based on Yocto. We will also extend the new configuration system to cover and optimize the complete Linux stack as well as the Qt build. This will allow you to easily and efficiently improve the total footprint, boot time and complexity of your system, not just the Qt bits.

        The third avenue of improvement will be to more fully integrate all the tooling around this, to bring all the elements into the same tool, and integrate this into Qt Creator. We think this can improve not only the developer experience, but also the communication in the whole team, provide more transparency towards other stakeholders and reduce the total time/cost of a project.

        In summary, we have now laid the foundation of how to more efficiently address embedded development, and how to make the most of resource-constrained hardware. A configuration architecture that makes it simpler to build Qt according to your needs, and improves the performance on resource-constrained devices. We have improved Qt a lot. But we have also staked out a clear path towards further improvements. The focus going forward will be on making Qt even faster, smaller and easier to work with. We are very much looking forward to your feedback and feature requests, and hope all your projects are successful. If you are interested in participating in that future, to provide feedback or learn more about this, both our CTO Lars Knoll and I will be talking about this subject at the Qt World Summit in San Francisco, October 18-20. We are looking forward to seeing you there, and hearing your feedback!

        The post Introducing the Qt Lite project—Qt for any platform, any thing, any size appeared first on Qt Blog.

        QtCon

        I’ve just booked flights and hotel for QtCon. It is going to be great to see all the Qt people and some of my fellow Pelagicorians from our Munich office. For those who want to hear me speak, I’ll share my view of the world on Friday at 11:30.

        The Qt Quick Graphics Stack in Qt 5.8

        This is a joint post with Andy. In this series of posts we are going to take a look at some of the upcoming features of Qt 5.8, focusing on Qt Quick.

        OpenGL… and nothing else?

        When Qt Quick 2 was made available with the release of Qt 5.0, it came with the limitation that support for OpenGL (ES) 2.0 or higher was required. The assumption was that, moving forward, OpenGL would continue its trajectory as the hardware acceleration API of choice for desktop, mobile and embedded development alike. Fast forward a couple of years to today, and the graphics acceleration story has become more complicated. One assumption we made was that the price of embedded hardware with OpenGL GPUs would continue to drop and such hardware would become ubiquitous. This is true, but at the same time there are still embedded devices available without OpenGL-capable GPUs on which customers wish to deploy Qt Quick applications. To remedy this we released the Qt Quick 2D Renderer as a separate plugin for Qt Quick in Qt 5.4.

        At the same time it turned out that Qt Quick applications deployed on a wide range of machines, including older systems, often have issues with OpenGL due to missing or unavailable drivers, on Windows in particular. Around Qt 5.4 the situation improved with the ability to dynamically choose between OpenGL proper, ANGLE, or a software OpenGL rasterizer. However, this does not solve all the problems, and full-blown software rasterizers are clearly not an option for low-end hardware, in particular in the embedded space. All this left us with two questions: why not focus more on the platforms’ native, potentially better supported APIs (for example, Direct3D), and why not improve the 2D Renderer and integrate it more closely with the rest of Qt Quick instead of keeping it a separate module with a somewhat arcane installation process?

        Here come other APIs

        Meanwhile, the number of available graphics hardware APIs has increased since the release of Qt Quick 2. Rather than the easy-to-understand Direct3D vs. OpenGL choice, there is now a new generation of lower-level graphics APIs available: Vulkan, Metal, and Direct3D 12. So for Qt 5.8 we decided to explore how we can make Qt Quick more future-proof, as introduced in this previous post.

        Modularization

        The main goal for the ScenegraphNG project was to modularize the Qt Quick scene graph API and remove the OpenGL dependencies in the renderer. By removing the strong bindings to OpenGL and enhancing the scenegraph adaptation layer, it is now possible to implement additional rendering backends, either built into Qt Quick itself or deployed as dynamically loaded plugins. OpenGL will remain the default backend, with full compatibility for all existing Qt Quick code. The changes are not just about plugins and moving code around, however. Some internal aspects of the scenegraph, for instance the material system, exhibited such a strong OpenGL coupling that it could not be worked around in a 100% compatible manner when it comes to the public APIs. Therefore some public scenegraph utility APIs got deprecated and a few new ones were introduced. At the time of writing, work is still underway to modularize and port some additional components, like the sprite and particle systems, to the new architecture.

        To prove that the changes form a solid foundation for future backends, Qt 5.8 introduces an experimental Qt Quick backend for Direct3D 12 on Windows 10 (both traditional Win32 and UWP applications). In the future it will naturally be possible to create a Vulkan backend as well, if it is deemed beneficial. Note that all this has nothing to do with the approaches for integrating custom rendering into QWidget-based or plain QWindow applications. There, adding Vulkan or D3D12 instead of OpenGL is already possible with the existing Qt releases; see for instance here and here.

        Qt Quick 2D Renderer, integrated

        The Qt Quick 2D Renderer was the first non-OpenGL renderer, but when released, it lived outside of the qtdeclarative code base (which contains the QtQml and QtQuick modules) and carried a commercial-only license. In Qt 5.7 the Qt Quick 2D Renderer was made available under GPLv3, but still as a separate plugin with the OpenGL requirement inherited from Qt Quick itself. In practice this got solved by building Qt against a dummy libGLESv2 library, but this was neither nice nor desirable long-term. With Qt 5.8 the Qt Quick 2D Renderer is merged into qtdeclarative as the built-in software rendering backend for the Qt Quick scene graph. The code has also been relicensed to have the same licenses as QtDeclarative. This also means that the stand-alone 2D Renderer plugin is no longer under development and the qtdeclarative-render2d repository will become obsolete in the future.

        Supercharging the 2D Renderer: Partial updates

        The 2D Renderer, which is now mostly referred to as the software backend (or renderer, or adaptation), gains one huge new feature that was not present in the previous standalone versions: partial updates. Previously it would render the entire scene every frame from front to back, which meant that a small animation in a complicated UI could be very expensive CPU-wise, especially when moving towards higher screen resolutions. With 5.8 the software backend is capable of rendering only what has changed between two frames: for example, if you have a blinking cursor in a text box, only the cursor and the area under it will be rendered and copied to the window surface, not unlike how traditional QWidgets operate. This is a huge performance improvement for any platform using the software backend.

        QQuickWidget with the 2D Renderer

        Another big feature that the new software backend introduces with Qt 5.8 is support for QQuickWidget. The Qt Quick 2D Renderer could not be used in combination with QQuickWidget, which made it impossible for applications like Qt Creator to fall back to the software renderer. Now, thanks to the software renderer’s closer integration with QtDeclarative, it has become possible to support QQuickWidget as well. This means that applications using simple Qt Quick scenes without effects and heavy animation can use the software backend in combination with QQuickWidget and thus avoid potential issues when deploying onto older systems (think of the OpenGL driver hassle on Windows, the trouble with remoting and X forwarding, etc.). It is important to note that not all types of scenes will perform as well with software rendering as they do with OpenGL (scrolling larger areas, for instance), so the decision has to be made after investigating both options.
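
        As a minimal sketch, a QQuickWidget-based application using the software backend could look like this; the qrc:/main.qml resource path is a placeholder, and the explicit backend request uses the per-process selection API covered below under “Switching between backends”:

        #include <QApplication>
        #include <QQuickWidget>
        #include <QQuickWindow>
        #include <QSGRendererInterface>
        #include <QUrl>

        int main(int argc, char **argv)
        {
            // Request the software scene graph backend; this must happen before
            // the first QQuickWindow, QQuickView, or QQuickWidget is constructed.
            QQuickWindow::setSceneGraphBackend(QSGRendererInterface::Software);

            // QQuickWidget lives in the widgets world, so use QApplication.
            QApplication app(argc, argv);

            QQuickWidget w;
            w.setResizeMode(QQuickWidget::SizeRootObjectToView);
            w.setSource(QUrl(QStringLiteral("qrc:/main.qml"))); // placeholder QML scene
            w.show();

            return app.exec();
        }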

        No OpenGL at all? No problem.

        One big limitation of the Qt Quick 2D Renderer plugin was that, in order to build QtDeclarative, you still had to have OpenGL headers and libraries available. So on devices that did not have OpenGL you had to use the provided “dummy” libraries and headers to trick Qt into building QtDeclarative, and then make sure your developers never called any code that could end up in OpenGL. This always felt like a hack, but with the hard requirement in QtDeclarative there were no better options available. Until now. In Qt 5.8 this is not an issue because QtDeclarative can now be built without OpenGL. In this case the software renderer becomes the default backend instead of OpenGL. So whenever Qt is configured with -no-opengl or the development environment (sysroot) lacks OpenGL headers and libraries, the QtQuick module is no longer skipped. In 5.8 it will build just fine and default to the software backend.

        Switching between backends

        Now that there are multiple backends that can render Qt Quick, we also needed to provide a way to choose which API is used. The approach Qt 5.8 takes mirrors how QPA platform plugins or the OpenGL implementation on Windows are handled: the Qt Quick backend can be changed on a per-process basis during application startup. Once the first QQuickWindow, QQuickView, or QQuickWidget is constructed it will not be possible to change it anymore.

        To specify the backend to use, either set the environment variable QT_QUICK_BACKEND (also known as QMLSCENE_DEVICE, inherited from previous versions) or use the static C++ functions QQuickWindow provides. When no request is made, a sensible default is used: currently the OpenGL backend, except in Qt builds that have OpenGL support completely disabled.

        As an example, let’s force the software backend in our application:

        #include <QGuiApplication>
        #include <QQuickView>
        #include <QQuickWindow>
        #include <QSGRendererInterface>

        int main(int argc, char **argv)
        {
            // Force the software backend. This has to be done before the first
            // QQuickWindow, QQuickView, or QQuickWidget is constructed.
            QQuickWindow::setSceneGraphBackend(QSGRendererInterface::Software);
            QGuiApplication app(argc, argv);
            QQuickView view;
            ... // load the QML scene, show the view, and run the event loop as usual
        }
        

        Or launch our application with the D3D12 backend instead of the default OpenGL (or software):

        C:\MyCoolApp>set QT_QUICK_BACKEND=d3d12
        C:\MyCoolApp>debug\MyCoolApp.exe
        

        To verify what is happening during startup, set the environment variable QSG_INFO to 1 or enable the logging category qt.scenegraph.general. This will lead to printing a number of helpful log messages to the debug or console output, depending on the type of the application. To monitor the debug output, either run the application from Qt Creator or use a tool like DebugView.
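
        If you prefer enabling this from code rather than from the environment, a minimal sketch could look like the following; qputenv and QLoggingCategory::setFilterRules are standard Qt calls, and placing them at the top of main(), before the scene graph is initialized, is the assumption made here:

        #include <QGuiApplication>
        #include <QLoggingCategory>
        #include <QtGlobal>

        int main(int argc, char **argv)
        {
            // Equivalent to setting QSG_INFO=1 in the environment; must be done
            // before the scene graph is initialized.
            qputenv("QSG_INFO", "1");

            // Alternatively, enable the corresponding logging category directly.
            QLoggingCategory::setFilterRules(QStringLiteral("qt.scenegraph.general=true"));

            QGuiApplication app(argc, argv);
            // ... create the QQuickView or QQuickWindow as usual
            return app.exec();
        }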

        With an updated version of the Qt 5 Cinematic Experience demo the result is something like this:

        Qt 5 Cinematic Experience demo application running on Direct3D 12

        Everything in the scene is there, including the ShaderEffect items that provide an HLSL version of their shaders. Unsupported features, like particles, are gracefully ignored when running with such a backend.

        Now what happens if the same application gets launched with QT_QUICK_BACKEND=software?

        Qt 5 Cinematic Experience demo application running on the Software backend

        Not bad. We lost the shader effects as well, but other than that the application is fully functional. And all this without relying on a software OpenGL rasterizer or other extra dependencies. No small feat for a framework that started out as a strictly OpenGL-based scene graph.

        That’s it for part one. All this is only half of the story – stay tuned for part two, where we are going to take a look at the new Direct3D 12 backend and what the multi-backend Qt Quick story means for applications using advanced concepts like custom Quick items.

        The post The Qt Quick Graphics Stack in Qt 5.8 appeared first on Qt Blog.