Qt 5.11.2 Released

Qt 5.11.2 is released today. As a patch release it does not add any new functionality, but provides important bug fixes, security updates and other improvements.

Compared to Qt 5.11.1, the Qt 5.11.2 release fixes more than 250 bugs and contains around 800 changes in total. For details of the most important changes, please check the change files of Qt 5.11.2.

The recommended way to get Qt 5.11.2 is to use the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal (commercial license holders) or from the qt.io download page (open source).

Offline packages are also available for those who do not want to use the online installer.

The post Qt 5.11.2 Released appeared first on Qt Blog.

Qt Creator 4.7.1 released

We are happy to announce the release of Qt Creator 4.7.1!

The most prominent fixes are probably the ones we made for Windows:

  • MSVC detection used so many system resources that it could trigger virus scanners, so we now limit the resources it may use.
  • We no longer force Qt Creator’s use of ANGLE for OpenGL on user applications, so applications using desktop OpenGL run again from Qt Creator without environment modifications.

You can find more details about other fixes in our change log.

Get Qt Creator 4.7.1

The open source version is available on the Qt download page, and commercially licensed packages are available on the Qt Account Portal. Qt Creator 4.7.1 is also available as an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC in #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.7.1 released appeared first on Qt Blog.

Meet QSkinny, a lightweight Qt UI library

by Peter Hartmann (peter)
TL;DR: QSkinny offers a QWidget-like library built on top of the modern Qt graphics stack. It uses the Qt scene graph and is written entirely in C++, making QML optional.
QSkinny offers a Qt API for writing user interfaces in C++. It is inspired by the class design of QtWidgets, but runs on top of QtQuick. This means that QSkinny is hardware accelerated and can make use of animations, shaders and the like. Below is a screenshot of a sample UI written with QSkinny:
1. How does it work?
Check out a simple "hello world" program written with QSkinny:
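A minimal QSkinny program looks roughly like the following sketch (header and class names follow the QSkinny repository; the exact API may differ slightly, so treat this as illustrative rather than authoritative):

```cpp
#include <QskWindow.h>
#include <QskTextLabel.h>

#include <QGuiApplication>

int main( int argc, char* argv[] )
{
    QGuiApplication app( argc, argv );

    // a QSkinny control is a QQuickItem and can be added
    // directly to the window, no QML involved
    QskWindow window;
    window.addItem( new QskTextLabel( "Hello World" ) );
    window.show();

    return app.exec();
}
```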
Does this look familiar? Users of QtWidgets will feel right at home with QSkinny's API: there are similar controls in both worlds, like text labels, push buttons, layouts, dialogs, etc.
This diagram shows how QSkinny, QML and QtWidgets relate:
The layers in the diagram above are:
  • QSkinny: C++ UI controls
  • QML engine: declarative / JavaScript engine to parse UI files
  • QtQuick: basic layer of UI controls (containing e.g. x/y positioning and focus handling)
  • Qt scene graph: low-level drawing primitives to make use of hardware acceleration
  • OpenGL: API to support hardware-accelerated drawing
  • QtWidgets: C++ UI controls designed for desktop use
  • Qt raster paint engine: software (i.e. not hardware accelerated) drawing engine
  • QPainter API: interface for drawing images, text, shapes etc.
Since both QSkinny and QML elements are instances of QQuickItem, both technologies can be mixed: the QSkinny "buttons" example, for instance, uses a QskPushButton from QML.
2. Where is the code?
The code lives on GitHub and is licensed under LGPLv2:
Its original authors are Uwe Rathmann and Andrew Knight, the author of this blog post started contributing later.
3. Why is it called QSkinny?
It is slim. The sample screenshot above shows 3 speedometers, each consisting of one QQuickItem, which itself contains several scene graph nodes: one node for the background, one for the needle, one for the labels, etc. In QML, by contrast, each subcontrol is a QQuickItem and therefore a QObject. QSkinny also separates the functionality of controls from their appearance; the latter is handled by so-called skinlets. Those skinlets live on the scene graph thread and handle the actual drawing. How exactly they draw is determined by a so-called skin and can be changed at runtime. This makes it easy to implement e.g. a daylight vs. nighttime theme or different brand schemes:
As an example, here is a skin that sets all push buttons to have blue text on a green background with 10 pixels of padding:
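A sketch of what such a skin could look like (the aspect and method names here are assumptions based on QSkinny's skin-hint concept; check the skins shipped with the repository for the real API):

```cpp
#include <QskSkin.h>
#include <QskPushButton.h>
#include <QskAspect.h>

// Hypothetical skin: blue text on a green panel with 10 pixels of padding.
class GreenButtonSkin : public QskSkin
{
  public:
    GreenButtonSkin()
    {
        // colors and metrics are stored as skin hints per aspect
        setColor( QskPushButton::Panel, Qt::darkGreen );
        setColor( QskPushButton::Text, Qt::blue );
        setMetric( QskPushButton::Panel | QskAspect::Padding, 10 );
    }
};
```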
Those skin properties are similar to properties in QML.
Upon a skin change, a programmer would just replace the colors, padding etc. with different values and then trigger a repaint; the corresponding animation can even interpolate between colors, as seen above (reduced frame rate due to GIF compression).
4. How mature is it?
QSkinny is currently used in a major automotive project which unfortunately cannot be shown in public yet. This means it is being stress tested for production, but it is still lacking in areas like documentation; moreover, the controls implemented so far are aligned with that project's needs.
Mixing QSkinny and QML is in a proof-of-concept state, because the application mentioned above is written purely in C++ and does not use QML in any way.
The project is showing very good performance numbers, especially a fast startup time and low memory usage. Considering that lots of controls are loaded right at application startup, these do not seem to be an issue with QSkinny (at least for this project).
Also, since the developers working on it came from a QtWidgets background, they were familiar with the underlying concepts and productive right away with QSkinny.
Do you want to try it out? Just clone the repository above and let us know how it goes!
Contributions (source code changes, documentation etc.) will of course also be appreciated.

Qt 5.12 Alpha Released

I am pleased to announce that Qt 5.12 Alpha is released today. In addition to the source code packages, prebuilt binaries are available for this Alpha release.

Please check the Qt 5.12 New Features wiki to see what is coming with the Qt 5.12 release. Please note that the feature list is still in progress and should not be considered final before the first Beta release.

Our target is to start releasing regular beta releases quite soon after this Alpha release. The first Beta release should be out within a few weeks; see the details in the Qt 5.12 wiki.

Please take a tour and test Qt 5.12 Alpha. You can download the Qt 5.12 Alpha source packages from your Qt Account or from download.qt.io, and prebuilt binaries by using the Qt Online Installer.

Most importantly, remember to give us feedback by writing to the mailing lists and reporting bugs to Jira.

The post Qt 5.12 Alpha Released appeared first on Qt Blog.

An introduction to texture mapping in Qt 3D Studio, part I

This is the first in a series of blog posts that will explain the basics of texture mapping in Qt 3D Studio. To follow the examples, you need a basic understanding of how to work with objects and import assets in Qt 3D Studio. If you have not used Qt 3D Studio before, please spend a minute browsing through the Getting Started section of the documentation.

In this first post, we will go through how to apply texture maps to objects in Qt 3D Studio. Additionally, we will explain and show examples of the most common types of texture maps.

What is texture mapping?

So, what is texture mapping? Basically, texture mapping is applying images to the surface of 3D objects to control properties such as color, transparency, specularity and more.

Texture maps can be used to make objects more realistic and better looking as well as to create special effects.

The Qt 3D Studio asset library includes a set of different textures that you can use; some of the images in these blog posts are taken from there. It is of course possible to use any image you like.

Applying texture maps

To apply a texture map to an object, you first need to add the object to the scene. For example, drag an object from the basic objects palette to the scene.

Next, expand the object in the timeline and select the material (Default).


Now you will see all properties, including texture maps, in the inspector palette. To apply a texture map to the object, simply click the Select drop-down menu for the desired map property, e.g. Diffuse Map.


Diffuse Map

The diffuse map is the most used texture map. Applying an image as a diffuse map to a 3D object will wrap the image around the model.

Let’s try this out with a cube and a sphere in Qt 3D Studio. First you will need to import the diffuse map into your Studio project. In this example we will use Wood7.png from the asset library, but basically any image will do.

Sphere and cube without any texture map.

Once you have added the objects to the scene, select the default material in the timeline palette to display the properties of the object’s material in the inspector palette.

Now, for each of the objects, click the Diffuse Map drop-down and select Wood7.png. You should now have a wooden sphere and a wooden cube.

Sphere and cube with wooden diffuse map

Bump Map

A bump map is used to apply small details (height differences) such as bumps or scratches to a 3D object. A bump map is basically a grayscale image that fakes height differences on the mesh. It does not alter the geometry of the object in any way; if you look closely, you will see that a bump map changes neither the silhouette nor the shadow of the object.

Let’s try this on our sphere using bump.png found in the asset library.

A bump map

First, remove the diffuse map from the sphere and import bump.png to your project. Then, in the inspector palette, set bump.png as Bump Map for the material of the sphere.

A sphere with a bump map

Now we have some structure on our sphere instead of a flat surface. Note that the material has a Bump Amount setting which you can use to change the strength of the bump map. A positive Bump Amount value displays black as the lowest areas, while a negative value displays white as the lowest areas.

Opacity Map

Sometimes called an alpha map or transparency map, the opacity map is used to control the opacity of an object. An opacity map needs to be in an image format that supports transparency, e.g. PNG or DDS. Transparent parts of the image will render as transparent once the image is applied as an opacity map to the object.

Let’s try adding an opacity map to our sphere; you can keep the bump map there if you wish. In this case I will use an image I have created myself, but there are many transparent images in the Alpha Maps directory of the asset library that can be used instead.

This is the image I am using, a black flower silhouette with transparent background.

An opacity map

Then I set it as the Opacity Map for the material of the sphere. As you can see, it wraps around the sphere and displays the transparent areas of the image as transparent on the sphere as well.

 A sphere with an opacity map

Specular Map

A specular map controls the specularity of an object. In most cases a grayscale image is used, but it is possible to use a color image if you wish to add a color tint to the reflections. Black areas of the specular map add no reflections; the lighter the color, the more reflective that specific area will be. A specular map can for example be used if you have a diffuse map showing different materials, some reflective, some non-reflective.

The textures in this example are from www.3dtextures.me, a great online resource for texture maps.

In this example we have applied a tile diffuse texture to a sphere.

A sphere with a tile diffuse map

It looks nice, but we need to add some reflections to make it more realistic. In Qt 3D Studio, you make an object reflective by increasing the Specular Amount value for the material (by default it is set to 0). In this example I set it to 0.1, which adds reflections to the whole object. If Specular Amount is 0, the specular map has no effect.

A sphere with a diffuse map and a specular material

Now it’s time for the specular map to fine-tune the reflections. This is what the specular map looks like in this case: the darker the color of the map, the more the reflections are toned down. Black removes all reflections, while white leaves reflections unchanged. In this case, the specular map adds some variation to the reflections, tones down the reflections on the sides of the tiles, and so on.

A specular map

Apply it as the Specular Map to the sphere.

A sphere with a diffuse map and a specular map.

Summary

In this blog post we had a look at the most common types of texture maps and how you can use them in Qt 3D Studio to improve the appearance of basic 3D objects.

In the next blog post in this series, we will go through more types of texture maps in Qt 3D Studio.

The post An introduction to texture mapping in Qt 3D Studio, part I appeared first on Qt Blog.

API Changes in Clang

I’ve started contributing to Clang, in the hope that I can improve the API for tooling. This will eventually mean changes to the C++ API of Clang, the CMake buildsystem, and new features in the tooling. Hopefully I’ll remember to blog about changes I make.

The Department of Redundancy Department

I’ve been implementing custom clang-tidy checks and have become quite familiar with the AST Node API. Because of my background in Qt, I was immediately disoriented by some API inconsistencies. Certain API classes had both getStartLoc and getLocStart methods, as well as both getEndLoc and getLocEnd. The pairs of methods return the same content, so at least one set of them is redundant.

I’m used to working on stable library APIs, but Clang is different in that it offers no API stability guarantees at all. As an experiment, we staggered the introduction of new API and removal of old API. I ended up replacing the getStartLoc and getLocStart methods with getBeginLoc for consistency with other classes, and replaced getLocEnd with getEndLoc. Both old and new APIs are in the Clang 7.0.0 release, but the old APIs are already removed from Clang master. Users of the old APIs should port to the new ones at the next opportunity as described here.

Wait a minute, Where’s me dump()er?

Clang AST classes have a dump() method which is very useful for debugging. Several tools shipped with Clang are based on dumping AST nodes.

The SourceLocation type also provides a dump() method which outputs the file, line and column corresponding to a location. The problem with it, though, has always been that it does not include a newline at the end of the output, so the output gets lost in the noise. This 2013 video tutorial shows the typical developer experience using that dump method. I’ve finally fixed that in Clang, but the fix did not make it into Clang 7.0.0.

In the same vein, I also added a dump() method to the SourceRange class. This prints out locations in an angle-bracket format which shows only what changed between the beginning and end of the range.

Let it bind

When writing clang-tidy checks using AST Matchers, it is common to factor out intermediate variables for re-use or for clarity in the code.

auto valueMethod = cxxMethodDecl(hasName("value"));
Finder->addMatcher(valueMethod.bind("methodDecl"));

clang-query has an analogous way to create intermediate matcher variables, but binding to them did not work. As of my recent commit, it is possible to create matcher variables and bind them later in a matcher:

let valueMethod cxxMethodDecl(hasName("value"))
match valueMethod.bind("methodDecl")
match callExpr(callee(valueMethod.bind("methodDecl"))).bind("methodCall")

Preload your Queries

Staying on the same topic, I extended clang-query with a --preload option. This allows starting clang-query with some commands already invoked, and then continuing to use it as a REPL:

bash$ cat cmds.txt
let valueMethod cxxMethodDecl(hasName("value"))

bash$ clang-query --preload cmds.txt somefile.cpp
clang-query> match valueMethod.bind("methodDecl")

Match #1:

somefile.cpp:4:2: note: "methodDecl" binds here
        void value();
        ^~~~~~~~~~~~

1 match.

Previously, the -c option made it possible to run commands from a file, but without also creating a REPL. The --preload option with the REPL is useful when experimenting with matchers and having to restart clang-query regularly. This happens a lot when modifying code to examine changes to AST nodes.

Enjoy!

Release 2.18.1: Use JavaScript Promises with Qt, Material Cards and an Improved API to Connect to REST Services

V-Play 2.18.1 introduces new components for embedding YouTube videos, for creating material cards and Tinder-like swipe cards. It also simplifies connecting to REST services, with the new HttpRequest component. V-Play 2.18.1 also adds several other fixes and improvements.

Important Note for iOS Live Client: The current store version of the V-Play Live Client app is built with V-Play 2.17.1 and does not include the latest features. If you want to use QML live code reloading with the latest V-Play features on iOS, you can build your own live clients with Live Client Module.

Connect to REST Services with JavaScript Promises and Image Processing from QML

You can now use the HttpRequest type as an alternative to the default XmlHttpRequest. It is available as a singleton item for all components that use import VPlayApps 1.0:

import VPlayApps 1.0
import QtQuick 2.0

App {
  Component.onCompleted: {
    HttpRequest
    .get("http://httpbin.org/get")
    .timeout(5000)
    .then(function(res) {
      console.log(res.status);
      console.log(JSON.stringify(res.header, null, 4));
      console.log(JSON.stringify(res.body, null, 4));
    })
    .catch(function(err) {
      console.log(err.message)
      console.log(err.response)
    });
  }
}

Similar to HttpRequest, which matches the DuperAgent Request type, other DuperAgent features are also available in V-Play with the Http prefix.

The HttpRequest type also supports response caching of your requests out-of-the-box.

The DuperAgent package, which brings the HttpRequest type, also contains an implementation of the Promises/A+ specification and offers an API similar to the Promises API in ES2017. The Promise type works independently of DuperAgent’s HTTP features:

import VPlayApps 1.0
import QtQuick 2.0

App {
  Component.onCompleted: {
    var p1 = Promise.resolve(3);
    var p2 = 1337;
    var p3 = HttpRequest
    .get("http://httpbin.org/get")
    .then(function(resp) {
      return resp.body;
    });
    
    var p4 = Promise.all([p1, p2, p3]);
    
    p4.then(function(values) {
      console.log(values[0]); // 3
      console.log(values[1]); // 1337
      console.log(values[2]); // resp.body
    });
  }
}

Add QML Material Design Cards and Tinder Swipe Gestures

Create material design cards with the new AppCard. You can also use Tinder-like swipe gestures with cards. With the additional AppPaper and AppCardSwipeArea components, you can create fully custom card-like UI elements that can be swiped in a Tinder-like fashion.

import VPlayApps 1.0
import QtQuick 2.0

App {
  Page {
    AppCard {
      id: card
      width: parent.width
      margin: dp(15)
      paper.radius: dp(5)
      swipeEnabled: true
      cardSwipeArea.rotationFactor: 0.05
      
      // If the card is swiped out, this signal is fired with the direction as parameter
      cardSwipeArea.onSwipedOut: {
        console.debug("card swiped out: " + direction)
      }
      
      // … Card content
    }
  }
}

Embed YouTube Videos in Your Qt App

With the YouTubeWebPlayer component, you can now directly embed YouTube videos in your app with a simple QML API.


This is how you can use the player in QML:

import VPlayApps 1.0

App {
  NavigationStack {
    Page {
      title: "YouTube Player"
      
      YouTubeWebPlayer {
        videoId: "KQgqTYCfJjM"
        autoplay: true
      }
      
    }
  }
}

The component uses a WebView internally and the YouTube Iframe-Player API. To show how you can use the player in your app, you can have a look at the YouTube Player Demo App. It uses the YouTube Data API to browse playlists and videos of a configured channel.

Have a look at this demo to see how to integrate the Qt WebView module and use the YouTubeWebPlayer to play videos. The demo also shows how to load content from the YouTube Data API via http requests.

Use SortFilterProxyModel for Sorting and Filtering QML ListModels

You can now use SortFilterProxyModel, based on QSortFilterProxyModel, to apply filter and sorting settings to your QML ListModel items.

The following example shows the configured entries of the ListModel in a ListPage, and allows sorting the list by the name property:

import VPlayApps 1.0
import QtQuick 2.0

App {
  // data model
  ListModel {
    id: fruitModel
    
    ListElement {
      name: "Banana"
      cost: 1.95
    }
    ListElement {
      name: "Apple"
      cost: 2.45
    }
    ListElement {
      name: "Orange"
      cost: 3.25
    }
  }
  
  // sorted model for list view
  SortFilterProxyModel {
    id: filteredTodoModel
    sourceModel: fruitModel
    
    // configure sorters
    sorters: [
      StringSorter {
        id: nameSorter
        roleName: "name"
      }]
  }
  
  // list page
  NavigationStack {
    ListPage {
      id: listPage
      title: "SortFilterProxyModel"
      model: filteredTodoModel
      delegate: SimpleRow {
        text: name
        detailText: "cost: "+cost
        style.showDisclosure: false
      }
      
      // add checkbox to activate sorter as list header
      listView.header: AppCheckBox {
        text: "Sort by name"
        checked: nameSorter.enabled
        updateChecked: false
        onClicked: nameSorter.enabled = !nameSorter.enabled
        anchors.horizontalCenter: parent.horizontalCenter
        height: dp(48)
      }
    } // ListPage
  } // NavigationStack
} // App

Combine Multiple Filter and Sorting Settings on QML ListModels

The SortFilterProxyModel helps you to combine multiple filter and sorting settings on a model. You can find a detailed example in our documentation: Advanced SortFilterProxyModel Example.

It also fetches data from a REST API using the new HttpRequest type.


More Features, Improvements and Fixes

Here is a compressed list of further improvements with this update:

  • The Page type now features two more signals, appeared() and disappeared(). These signals fire when the page becomes active or inactive on a NavigationStack. They are convenience signals that avoid manual checks of Page::isCurrentStackPage.
  • The SearchBar text field now loses focus when it becomes invisible.
  • Fixes a crash in the V-Play Live Client when using WikitudeArView.
  • When a device goes online, the App::isOnline property now becomes true only after a short delay. This is required, as otherwise the network adapter might not be ready yet, which can cause immediate network requests to fail.

For a list of additional fixes, please check out the changelog.


The post Release 2.18.1: Use JavaScript Promises with Qt, Material Cards and an Improved API to Connect to REST Services appeared first on V-Play Engine.

Qt 3D Studio 2.1 Beta 1 released

We are happy to announce the release of Qt 3D Studio 2.1 Beta 1. It is available via the online installer. Here’s a quick summary of the new features and functions in 2.1.

For detailed information about the Qt 3D Studio, visit the online documentation page.

Data Input

For data inputs, we are introducing a new data type: Boolean. Related to this, elements now have a Visible property which can be controlled with a Boolean data input. When item visibility is controlled by a data input, the eyeball icon in the timeline palette turns orange to illustrate this.


Data inputs are now checked when a presentation is opened. If elements in the presentation use data inputs that are not found in the data input list (in the .uia file), a warning dialog is shown. The user can then choose to automatically remove all property controls that use invalid data inputs.

Additionally, the visualization of data input control for slides and the timeline has been improved. Now it is much clearer which data input is in control.


For more details on data inputs, see documentation.

New Project Structure

There is a new project structure with presentations and qml streams folders. Presentation (.uip) files are now visible in the project palette, and it is also possible to have several .uip files in a project.


In the project palette, it is now possible to double-click an asset to open it in the application associated with it by the operating system. .uip files will open in Qt 3D Studio.

Sub-Presentations

A lot of improvements have been made to make working with sub-presentations more convenient. Some of the key improvements are:

  • You can create a new presentation in the Studio without leaving your current project.
  • With the possibility to have many .uip files in one project, it is easy to share assets between presentations.
  • Importing both .uip and .qml presentations is done the same way you import other assets.
  • Assign sub-presentations to meshes or layers by dragging and dropping from the project palette.

For more details on sub-presentations, see documentation.

Installation

As mentioned, Qt 3D Studio 2.1 Beta 1 is available via the Qt online installer. You’ll find it under the preview section. If you have a previous installation, please use the Update feature in the Qt Maintenance Tool to get the latest version. The 2.1 version will be installed alongside the old version. The Qt online installer can be downloaded from www.qt.io/download, while commercial license holders can find the packages at account.qt.io.

The post Qt 3D Studio 2.1 Beta 1 released appeared first on Qt Blog.

Machine Learning: Add Image Classification for iOS and Android with Qt and TensorFlow

Artificial intelligence and smart applications are steadily becoming more popular. Companies strongly rely on AI systems and machine learning to make faster and more accurate decisions based on their data.

This guide provides an example for Image Classification and Object Detection built with Google’s TensorFlow Framework.

 

By reading this post, you will learn how to:

  • Build TensorFlow for Android, iOS and Desktop Linux.
  • Integrate TensorFlow in your Qt-based V-Play project.
  • Use the TensorFlow API to run Image Classification and Object Detection models.

Why Add Artificial Intelligence to Your Mobile App

As of 2017, a quarter of organisations already invest more than 15 percent of their IT budget in machine learning. With over 75 percent of businesses spending money and effort in Big Data, machine learning is set to become even more important in the future.

Real-World Examples of Machine Learning

Artificial intelligence is on its way to becoming a business-critical technology, with the goal of improving decision-making with a far more data-driven approach. Regardless of the industry, machine learning helps to make computing processes more efficient, cost-effective, and reliable. For example, it is used for:

  • Financial Services: To track customer and client satisfaction, react to market trends or calculate risks. E.g. PayPal uses machine learning to detect and combat fraud.
  • Healthcare: For personalised health monitoring systems, to enable healthcare professionals to spot potential anomalies early on.
  • Retail: Offer personalised recommendations based on your previous purchases or activity. For example, recommendations on Netflix or Spotify.
  • Voice Recognition Systems, like Siri or Cortana.
  • Face Recognition Systems, like DeepLink by Facebook.
  • Spam Email Detection and Filtering.

Image Classification and Object Detection Example

TensorFlow is Google’s open-source machine learning framework. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs) and architectures (desktops, clusters of servers, mobile and edge devices). It supports Linux, macOS, Windows, Android and iOS, among others.


About TensorFlow

TensorFlow comes in different flavors. The main one is TensorFlow itself; another is TensorFlow Lite, TensorFlow’s lightweight solution for mobile and embedded devices. However, TensorFlow Lite is currently in a technology preview state. This means that not all TensorFlow features are currently supported, although it will become the reference for mobile and embedded devices in the near future.

There is plenty of online material about how to build applications with TensorFlow. To begin with, we highly recommend the free ebook Building Mobile Applications with TensorFlow by Pete Warden, lead of the TensorFlow mobile/embedded team.

The example in this guide uses the original TensorFlow flavor. It shows how to integrate TensorFlow with Qt and V-Play to create a simple multiplatform app that includes two pretrained neural networks, one for image classification and one for object detection. The code for this example is hosted on GitHub.

Clone the Repository

To clone this repository, execute the following command. Clone it recursively, since the TensorFlow repository is inside it. The TensorFlow version included is 1.8.

git clone --recursive https://github.com/V-Play/TensorFlowQtVPlay.git

Many thanks to the project developers for sharing this example and preparing this guide:

  • Javier Bonilla, Ph.D. in Computer Science doing research about modeling, optimization and automatic control of concentrating solar thermal facilities and power plants at CIEMAT – Plataforma Solar de Almería (PSA), one of the largest concentrating solar technology research, development and test centers in Europe.
  • Jose Antonio Carballo, Mechanical Engineer and Ph.D. student from the University of Almería, working on his doctoral thesis on modeling, optimization and automatic control for an efficient use of water and energy resources in concentrating solar thermal facilities and power plants at CIEMAT – Plataforma Solar de Almería (PSA).

Advantages of using V-Play and Qt with TensorFlow

V-Play and Qt are wonderful tools for multiplatform applications. Qt has a rich set of ready-to-use multiplatform components for diverse areas such as multimedia, network and connectivity, graphics, input methods, sensors, data storage and more. V-Play further eases deployment to mobile and embedded devices and adds nice features such as resolution and aspect-ratio independence as well as additional components and controls. V-Play also provides easier access to native features, as well as plugins for monetization, analytics, cloud services and much more.

One nice feature of V-Play is that it is not restricted to mobile devices, so you can test and prototype your app on your development computer, which is certainly faster than compiling and deploying your app to emulators. You can even use V-Play live reloading to see changes in code almost instantaneously. Live reloading is also supported on Android and iOS devices, which is perfect for fine-tuning changes or testing code snippets on mobile devices.

So TensorFlow provides the machine learning framework, whereas V-Play and Qt facilitate the app deployment to multiple platforms: desktop and mobile.

How to Build TensorFlow for Qt

We need to build TensorFlow for each platform and architecture. The recommended way is to use the Bazel build system; however, in this example we will use make to build TensorFlow for Linux, Android and iOS. Check the TensorFlow Makefile readme to make sure you have installed all the required libraries and tools.

If you are interested in building TensorFlow for macOS, check the Supported Systems section of the Makefile readme. For Windows, check the TensorFlow CMake build.

If you have issues during the compilation process, have a look at the open TensorFlow issues or post your problem there to get help.

Once you have built TensorFlow, your app can link against these three libraries: libtensorflow-core.a, libprotobuf.a and libnsync.a.

Note: When you build for different platforms and architectures in the same TensorFlow source code folder, TensorFlow may delete previously compiled libraries, so make sure you back them up. These are the paths where you can find those libraries, with MAKEFILE_DIR=./tensorflow/tensorflow/contrib/makefile:

  • Linux
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf/lib64
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/default.linux.c++11/
  • Android ARM v7
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib/android_armeabi-v7a
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf_android/armeabi-v7a/lib/
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/armeabi-v7a.android.c++11/
  • Android x86
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib/android_x86
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf_android/x86/lib/
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/x86.android.c++11/
  • iOS
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf_ios/lib/
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/arm64.ios.c++11/
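
Since the libraries have to be backed up manually between builds, a small helper can automate that step. The following is a minimal Python sketch under stated assumptions: backup_libs and the per-platform backup folder layout are illustrative names, not part of the example project.

```python
import shutil
from pathlib import Path

# Hypothetical backup helper: copy the static libraries built for one
# platform into a per-platform folder before building for the next one.
def backup_libs(lib_paths, backup_dir):
    """Copy each existing library file in lib_paths into backup_dir."""
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for lib in map(Path, lib_paths):
        if lib.is_file():  # skip paths that were not built for this platform
            shutil.copy2(lib, dest / lib.name)
            copied.append(lib.name)
    return copied
```

You would call it with the three library paths listed above for the platform you just built, e.g. backup_libs([...], "backup/android_armeabi-v7a").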

The shell commands in the following sections only work if executed inside the main TensorFlow folder.

Building for Linux

We just need to execute the following script for Linux compilation.

./tensorflow/contrib/makefile/build_all_linux.sh

If you are compiling for the 64-bit version, you might run into the following compilation error:

ld: cannot find -lprotobuf

In this case, change the $(MAKEFILE_DIR)/gen/protobuf-host/lib references to $(MAKEFILE_DIR)/gen/protobuf-host/lib64 in the tensorflow/tensorflow/contrib/makefile/Makefile file.

With some GCC 8 compiler versions, you can get the following error:

error: ‘void* memset(void*, int, size_t)’ clearing an object of type ‘struct
nsync::nsync_counter_s_’ with no trivial copy-assignment; use value-initialization
instead [-Werror=class-memaccess]

To avoid it, include the -Wno-error=class-memaccess flag in the PLATFORM_CFLAGS variable for Linux (case "$target_platform" in linux) in the tensorflow/tensorflow/contrib/makefile/compile_nsync.sh file.

Building for Android (on Linux)

First, you need to set the NDK_ROOT environment variable to point to your NDK root path. You can download the NDK from this link. Second, you need to compile the cpufeatures library in the NDK. This example was tested with Android NDK r14e.

mkdir -p $NDK_ROOT/sources/android/cpufeatures/jni
cp $NDK_ROOT/sources/android/cpufeatures/cpu-features.*
   $NDK_ROOT/sources/android/cpufeatures/jni
cp $NDK_ROOT/sources/android/cpufeatures/Android.mk
   $NDK_ROOT/sources/android/cpufeatures/jni
$NDK_ROOT/ndk-build NDK_PROJECT_PATH="$NDK_ROOT/sources/android/cpufeatures"
   NDK_APPLICATION_MK="$NDK_ROOT/sources/android/cpufeatures/Android.mk"

Then, execute the following script to compile TensorFlow for the ARM v7 architecture.

./tensorflow/contrib/makefile/build_all_android.sh

If you want to compile for x86 platforms, for instance for debugging in an Android emulator, execute the same command with the following parameters.

Note: If you face issues compiling for Android x86 with Android NDK r14, use Android NDK r10e and set NDK_ROOT to its path accordingly.

./tensorflow/contrib/makefile/build_all_android.sh -a x86

The supported Android target architectures are the following:

-a [architecture] Architecture of target android [default=armeabi-v7a] (supported
architecture list: arm64-v8a armeabi armeabi-v7a mips mips64 x86 x86_64 tegra)

Building for iOS (on macOS)

The following script builds TensorFlow for iOS on macOS.

./tensorflow/contrib/makefile/build_all_ios.sh

If you get the following error while building TensorFlow for iOS:

error: thread-local storage is not supported for the current target

You can avoid it by applying the changes given in this comment, that is, changing -D__thread=thread_local \ to -D__thread= \ in the Makefile (for the i386 architecture only).

How to Use TensorFlow in Your Qt Mobile App

The source code of the app is in a GitHub repository. This section walks through the app code.

Link TensorFlow in Your Project

The following code shows the lines added to our qmake project file in order to include the TensorFlow header files and link against TensorFlow libraries depending on the target platform.

For Android, ANDROID_NDK_ROOT was set to the path of Android NDK r14e and ANDROID_NDK_PLATFORM was set to android-21 in Qt Creator (Project -> Build Environment).

# TensorFlow - All
TF_MAKE_PATH = $$PWD/tensorflow/tensorflow/contrib/makefile
INCLUDEPATH += $$PWD/tensorflow/ \
               $$TF_MAKE_PATH/gen/host_obj \
               $$TF_MAKE_PATH/downloads/eigen

# TensorFlow - Linux
linux:!android {
      INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf/include
      LIBS += -L$$TF_MAKE_PATH/downloads/nsync/builds/default.linux.c++11/ \
              -L$$TF_MAKE_PATH/gen/protobuf/lib64/ \
              -L$$TF_MAKE_PATH/gen/lib/ \
              -lnsync \
              -lprotobuf \
              -ltensorflow-core \
              -ldl
      QMAKE_LFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive
}

# TensorFlow - Android
android {
    QT += androidextras
    LIBS += -ltensorflow-core -lprotobuf -lnsync -lcpufeatures \
            -L${ANDROID_NDK_ROOT}/sources/android/cpufeatures/obj/local/$$ANDROID_TARGET_ARCH
    QMAKE_LFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive

    # Platform: armv7a
    equals(ANDROID_TARGET_ARCH, armeabi-v7a) | equals(ANDROID_TARGET_ARCH, armeabi):\
    {
        INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf_android/armeabi-v7a/include
        LIBS += -L$$TF_MAKE_PATH/gen/lib/android_armeabi-v7a \
                -L$$TF_MAKE_PATH/gen/protobuf_android/armeabi-v7a/lib \
                -L$$TF_MAKE_PATH/downloads/nsync/builds/armeabi-v7a.android.c++11
    }
    # Platform: x86
    equals(ANDROID_TARGET_ARCH, x86):\
    {
        INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf_android/x86/include
        LIBS += -L$$TF_MAKE_PATH/gen/lib/android_x86 \
                -L$$TF_MAKE_PATH/gen/protobuf_android/x86/lib \
                -L$$TF_MAKE_PATH/downloads/nsync/builds/x86.android.c++11
    }
}

# TensorFlow - iOS - Universal libraries
ios {
    INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf-host/include
    LIBS += -L$$PWD/ios/lib \
            -L$$PWD/ios/lib/arm64 \
            -framework Accelerate \
            -Wl,-force_load,$$TF_MAKE_PATH/gen/lib/libtensorflow-core.a \
            -Wl,-force_load,$$TF_MAKE_PATH/gen/protobuf_ios/lib/libprotobuf.a \
            -Wl,-force_load,$$TF_MAKE_PATH/downloads/nsync/builds/arm64.ios.c++11/libnsync.a
}

Create the GUI with QML

The GUI is pretty simple; there are only two pages:

  • Live video output page: The user can switch between the front and rear cameras.
  • Settings page: Page for setting the minimum confidence level and selecting the model: one for image classification and another one for object detection.

Main.qml

In main.qml, there is a Storage component to load and save the minimum confidence level, the selected model, and whether the inference time is shown. The inference time is the time the TensorFlow neural network model takes to process an image. The storage keys are kMinConfidence, kModel and kShowTime. Their default values are given by defMinConfidence, defModel and defShowTime. The actual values are stored in minConfidence, model and showTime.

// Storage keys
readonly property string kMinConfidence: "MinConfidence"
readonly property string kModel: "Model"
readonly property string kShowTime: "ShowTime"

// Default values
readonly property double defMinConfidence: 0.5
readonly property string defModel: "ImageClassification"
readonly property bool defShowTime: false

// Properties
property double minConfidence
property string model
property bool showTime

// Local storage component
Storage {
    id: storage

    Component.onCompleted: {
        minConfidence = getValue(kMinConfidence) !== undefined ?
                        getValue(kMinConfidence) : defMinConfidence
        model = getValue(kModel) !== undefined ? getValue(kModel) : defModel
        showTime = getValue(kShowTime) !== undefined ? getValue(kShowTime) :
                                                       defShowTime
    }
}

There is a Navigation component with two NavigationItem elements, each one a Page. The VideoPage shows the live camera video output and reads the minConfidence, model and showTime properties. The AppSettingsPage also reads those properties and sets their new values in the onMinConfidenceChanged, onModelChanged and onShowTimeChanged handlers.

import VPlayApps 1.0
import VPlay 2.0
import QtQuick 2.0

App {
    id: app

    ....

    Navigation {

        NavigationItem{
            title: qsTr("Live")
            icon: IconType.rss

            NavigationStack{
                VideoPage{
                    id: videoPage
                    minConfidence: app.minConfidence
                    model: app.model
                    showTime: app.showTime
                }
            }
        }

        NavigationItem{
            title: qsTr("Settings")
            icon: IconType.sliders

            NavigationStack{
                AppSettingsPage{
                    id: appSettingsPage
                    minConfidence: app.minConfidence
                    model: app.model
                    showTime: app.showTime

                    onMinConfidenceChanged: {
                        app.minConfidence = appSettingsPage.minConfidence
                        storage.setValue(kMinConfidence,app.minConfidence)
                    }

                    onModelChanged: {
                        app.model = appSettingsPage.model
                        storage.setValue(kModel,app.model)
                    }

                    onShowTimeChanged: {
                        app.showTime = appSettingsPage.showTime
                        storage.setValue(kShowTime,app.showTime)
                    }
                }
            }
        }
    }
}

VideoPage.qml

A screenshot of the VideoPage for object detection on iOS is shown below.

qt-machinelearning-tensorflow-VideoPage

The QtMultimedia module is loaded in this page.

import VPlayApps 1.0
import QtQuick 2.0
import QtMultimedia 5.9

The VideoPage has the minConfidence, model and showTime properties. It also has a property to store the camera index, cameraIndex.

// Properties
property double minConfidence
property string model
property bool showTime

// Selected camera index
property int cameraIndex: 0

There is a Camera component, which is started and stopped when the page is shown or hidden. It has two boolean properties: the first is true if there is at least one camera, and the second is true if there are at least two cameras.

Camera{
    id: camera
    property bool availableCamera:  QtMultimedia.availableCameras.length>0
    property bool availableCameras: QtMultimedia.availableCameras.length>1
}

// Start and stop camera
onVisibleChanged: {
    if (visible) camera.start()
    else camera.stop()
}

There is also a button in the navigation bar to switch the camera. This button is visible only when there is more than one camera available. The initialRotation() function is required due to the Qt bug 37955, which incorrectly rotates the front camera video output on iOS.

// Right-hand side buttons
rightBarItem: NavigationBarRow {

    // Switch camera button
    IconButtonBarItem {
        title: qsTr("Switch camera")
        visible: QtMultimedia.availableCameras.length>1
        icon: IconType.videocamera
        onClicked: {
            cameraIndex = (cameraIndex+1) % QtMultimedia.availableCameras.length
            camera.deviceId = QtMultimedia.availableCameras[cameraIndex].deviceId
            videoOutput.rotation = initialRotation()
        }
    }
}

// BUG: front camera rotation on iOS [QTBUG-37955]
function initialRotation()
{
    return Qt.platform.os === "ios" && camera.position === Camera.FrontFace ? 180 : 0
}
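
The switching and rotation logic above can be mirrored in a small Python sketch; the function names are illustrative and not part of the app's code.

```python
# Round-robin camera cycling, as in the onClicked handler above. Note the
# parentheses: without them, "index + 1 % count" would add (1 % count) to
# the index instead of wrapping the incremented index.
def next_camera(index, count):
    return (index + 1) % count

# Mirror of initialRotation(): iOS rotates the front camera video output
# 180 degrees to work around QTBUG-37955.
def initial_rotation(os_name, front_facing):
    return 180 if os_name == "ios" and front_facing else 0
```

With two cameras, repeatedly calling next_camera alternates between indices 0 and 1.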

When no camera is detected, an icon and a message are shown to the user.

// No camera found
Item{
    anchors.centerIn: parent
    width: parent.width
    visible: QtMultimedia.availableCameras.length<=0
    Column{
        width: parent.width
        spacing: dp(25)

        Icon{
            anchors.horizontalCenter: parent.horizontalCenter
            icon: IconType.videocamera
            scale: 3
        }

        AppText{
            anchors.horizontalCenter: parent.horizontalCenter
            text: qsTr("No camera detected")
        }
    }
}

When the camera is loading, an icon with a cool animation and a message are also shown to the user.

// Loading camera
Item{
    anchors.centerIn: parent
    width: parent.width
    visible: QtMultimedia.availableCameras.length>0 &&
             camera.cameraStatus !== Camera.ActiveStatus
    Column{
        width: parent.width
        spacing: dp(25)

        Icon{
            id: videoIcon
            anchors.horizontalCenter: parent.horizontalCenter
            icon: IconType.videocamera
            scale: 3

            SequentialAnimation {
                running: true
                loops: Animation.Infinite
                NumberAnimation { target: videoIcon; property: "opacity";
                                  from: root.maxVal; to: root.minVal; duration: root.aTime }
                NumberAnimation { target: videoIcon; property: "opacity";
                                  from: root.minVal; to: root.maxVal; duration: root.aTime }
            }
        }

        AppText{
            anchors.horizontalCenter: parent.horizontalCenter
            text: qsTr("Loading camera") + " ..."
        }
    }
}

The camera video output fills the whole page. It is only visible when at least one camera is detected and active. We define a filter, objectsRecognitionFilter, which is implemented in a C++ class. This filter gets each video frame, transforms it into input data for TensorFlow, invokes TensorFlow and draws the results over the video frame. This C++ class is introduced later.

VideoOutput {
    id: videoOutput
    anchors.fill: parent
    source: camera
    visible: camera.availableCamera && camera.cameraStatus == Camera.ActiveStatus
    autoOrientation: true
    fillMode: VideoOutput.PreserveAspectCrop
    rotation: initialRotation()

    filters: [objectsRecognitionFilter]
}

AppSettingsPage.qml

A screenshot of this page on iOS is shown below.

qt-machinelearning-tensorflow-AppSettingsPage

The AppSettingsPage allows the user to select the minimum confidence level for
the detections with a slider. The slider value is stored in minConfidence.

AppSlider {
    id: slider
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(20)
    from:  0
    to:    1
    value: minConfidence
    live:  true
    onValueChanged: minConfidence = value
}

The inference time, i.e. the time TensorFlow takes to process an image, can also be shown on the screen. It can be enabled or disabled by means of a switch; the boolean value is stored in showTime.

AppSwitch{
    anchors.verticalCenter: parent.verticalCenter
    id: sShowInfTime
    checked: showTime
    onToggled: showTime = checked
}

There are also two exclusive check boxes to select the model: one for image classification and another for object detection. The selected model is stored in the `model` property. If the currently selected model is unchecked, the other model is automatically checked, as one of them must always be selected.

ExclusiveGroup { id: modelGroup }

AppCheckBox{
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(20)
    text: qsTr("Image classification")
    exclusiveGroup: modelGroup
    checked: model === "ImageClassification"
    onCheckedChanged: if (checked) model = "ImageClassification";
                      else chkDetection.checked = true
}

AppCheckBox{
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(20)
    text: qsTr("Object detection")
    exclusiveGroup: modelGroup
    checked: model === "ObjectDetection"
    onCheckedChanged: if (checked) model = "ObjectDetection";
                      else chkClassification.checked = true
}

C++ TensorFlow Interface and Video Frame Filter

Two main tasks are programmed in C++.

  • Interfacing with TensorFlow
  • Managing video frames

The source code of the C++ classes is not presented here in detail; instead, the process is sketched and explained, and links to further details are given. Nevertheless, you can have a look at the source code hosted on GitHub.

Interfacing with Tensorflow

The Tensorflow C++ class interfaces with the TensorFlow library; check the code for a detailed description of this class. This class is a wrapper; check the TensorFlow C++ API documentation for further information.

Managing video frames

The workflow for managing video frames is shown in the next flow diagram.

qt-machinelearning-tensorflow-videoframeWorkflow

An object filter, ObjectsRecognizer, is applied to the VideoOutput to process frames. This filter is implemented by means of two C++ classes, ObjectsRecogFilter and ObjectsRecogFilterRunable. For further information about how to apply filters, check Introducing video filters in Qt Multimedia.

The filter is processed in the `run` method of the ObjectsRecogFilter class. The general steps are the following.

  1. We need to convert our QVideoFrame to a QImage so we can manipulate it.
  2. We check if TensorFlow is running. Since TensorFlow is executed in another thread, we use the QMutex and QMutexLocker classes to check in a thread-safe way whether it is running. A nice example is given in the QMutexLocker class documentation.
    • If TensorFlow is running – nothing is done
    • If TensorFlow is NOT running – we execute it in another thread by means of the C++ classes TensorflowThread and WorkerTF; signals and slots are used to communicate between the main thread and these classes (check QThreads general usage, https://wiki.qt.io/QThreads_general_usage, for further details). We provide the video frame image as input. When TensorFlow finishes, we store the results given by the selected model, also by means of signals and slots.
  3. We get the stored results (if any) and apply them to the current video frame image. If our model is image classification, we just draw the name and score of the top image class if the score is above the minimum confidence value. If our model is object detection, we iterate over all the detections and draw the bounding boxes, names of objects and confidence values if they are above the minimum confidence level. There is an auxiliary C++ class, AuxUtils, which provides functions to draw on frames, such as drawText and drawBoxes.
  4. The last step is to convert back our QImage to a QVideoFrame to be processed by our QML VideoOutput component and then we go back to process a new video frame.
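
The busy check in step 2 can be sketched as follows. This is a minimal Python sketch: the app itself uses QMutex and QMutexLocker in C++, and InferenceGate and the worker callback are hypothetical names, not part of the app's code.

```python
import threading

# Sketch of step 2 above: hand a frame to the (slow) inference worker
# only when no inference is already running; otherwise drop the frame.
# The real app uses QMutex/QMutexLocker in C++; threading.Lock plays the
# same role here.
class InferenceGate:
    def __init__(self):
        self._busy = threading.Lock()

    def try_submit(self, frame, worker):
        # Non-blocking acquire: returns False while inference is running.
        if not self._busy.acquire(blocking=False):
            return False
        def run():
            try:
                worker(frame)
            finally:
                self._busy.release()
        threading.Thread(target=run).start()
        return True
```

Frames submitted while the worker is busy are simply skipped, which matches the "nothing is done" branch above.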

Neural Network Models for Image Classification and Object Detection

We need neural network models to perform the image classification and object detection tasks. Google provides a set of pre-trained models that do this. The file extension for TensorFlow frozen neural network models is .pb. The example on GitHub already includes MobileNet models: MobileNet V2 1.0_224 for image classification and SSD MobileNet V1 COCO for object detection. MobileNets are a class of efficient neural network models for mobile and embedded vision applications.

Image Classification Models

Image classification models can be downloaded from the TensorFlow-Slim image classification model library. Our example code is designed for MobileNet neural networks. For example, download mobilenet_v2_1.0_224.tgz, uncompress it, and copy the mobilenet_v2_1.0_224_frozen.pb file to our assets folder as image_classification.pb. The image size in this case, 224 x 224 pixels, is set in the constants fixed_width and fixed_height defined in our Tensorflow C++ class. The output layer name, MobilenetV2/Predictions/Reshape_1 in this case, is also specified in the constant list variable listOutputsImgCla in the Tensorflow class. Labels for these models are already set in the image_classification_labels.txt file; they belong to the ImageNet classes.
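
As a rough illustration of how the minimum confidence level gates the classification output, here is a hedged Python sketch; top_prediction and its label/score inputs are hypothetical stand-ins for the network outputs, not the app's actual C++ code.

```python
# Pick the highest-scoring class and report it only if its score reaches
# the minimum confidence selected on the settings page; otherwise report
# nothing (the app then draws nothing over the frame).
def top_prediction(labels, scores, min_confidence):
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < min_confidence:
        return None
    return labels[best], scores[best]
```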

Object Detection Models

Check the TensorFlow detection model zoo for a comprehensive list of object detection models. Any SSD MobileNet model can be used. This kind of model provides a caption, a confidence and a bounding box for each detected object. For instance, download ssd_mobilenet_v1_coco_2018_01_28.tar.gz, uncompress it, and copy frozen_inference_graph.pb to our assets folder as object_detection.pb. Labels for this kind of model are already given in the object_detection_labels.txt file; they belong to the COCO labels.
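
For object detection, the same confidence gate applies per detection. The sketch below assumes a hypothetical dictionary layout for the caption/confidence/bounding-box triples the model outputs; it is not the app's actual representation.

```python
# Keep only the detections whose confidence is at or above the minimum
# confidence level; the app then draws a box and caption for each one.
def filter_detections(detections, min_confidence):
    return [d for d in detections if d["score"] >= min_confidence]
```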

Known Issues

Although the presented example is functional, there is still room for improvement, particularly in the C++ code, where naive solutions were chosen for simplicity.

There are also some issues to address; the following list summarizes them.

  • The app performance is much higher on iOS than on Android, even on high-end mobile devices. Finding the root cause of this requires further investigation.
  • The sp method of the AuxUtils C++ class is intended to provide font pixel sizes independent of the screen size and resolution, although it does not work for all devices. Therefore, the same implementation as the one provided by the V-Play QML sp function should be considered.
  • Asset files can be easily accessed from QML and Qt classes. For instance, assets:/assets/model.pb gives access to a file called model.pb stored in the assets folder on Android. However, accessing assets from general C++ classes is not so easy because those classes cannot resolve assets:/. This is the case for the Tensorflow C++ class. The current solution is to copy the file to a well-known path, for example QStandardPaths::writableLocation(QStandardPaths::AppLocalDataLocation), but this involves checking whether the destination folder exists (and creating it otherwise) and whether the asset file exists and has not changed (and copying it otherwise).
  • QVideoFrame conversion to QImage is performed in order to draw on it in the run method of the ObjectsRecogFilterRunable C++ class. Currently, this is done using the qt_imageFromVideoFrame function included in a Qt private module: multimedia-private. Therefore, the app is tied to this specific Qt module build version, and running the app against other versions of the Qt modules may crash at any arbitrary point. Additionally, BGR video frames are not properly managed by the qt_imageFromVideoFrame function; therefore, they are converted to images without using it.
  • The current implementation continuously executes TensorFlow in a separate thread processing video frames; that is, when the TensorFlow thread finishes, it is executed again with the latest frame. This approach provides a fluid user experience, but on the other hand it makes the device heat up considerably and drains the battery quickly.
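
The asset-copy workaround described in the list above can be sketched as follows. This is a hedged Python sketch; ensure_local_copy and the size-comparison check are illustrative, while the real app resolves the destination via QStandardPaths::AppLocalDataLocation in C++.

```python
import shutil
from pathlib import Path

# Make a bundled model file available at a plain filesystem path that
# ordinary C++ classes can open (they cannot resolve assets:/ paths).
def ensure_local_copy(asset_path, data_dir):
    src = Path(asset_path)
    dest = Path(data_dir) / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    # Copy when missing or when the asset changed (size comparison as a
    # cheap proxy for a real content check).
    if not dest.exists() or dest.stat().st_size != src.stat().st_size:
        shutil.copy2(src, dest)
    return dest
```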

If you need assistance integrating TensorFlow into your V-Play apps, don’t hesitate to drop a line at support@v-play.net or contact us here. The V-Play SDK is free to use, so make sure to check it out!

 

If you enjoyed this post, feel free to share it on Facebook or Twitter.

More Relevant App Development Resources

The Best App Development Tutorials & Free App Templates

All of these tutorials come with full source code of the mobile apps! You can copy the code to make your own apps for free!

App Development Video Tutorials

Make Cross-Platform Apps with Qt: V-Play Apps

How to Add In-App Chat or Gamification Features to Your Mobile App

How to Make a Mobile App with Qt Quick Designer (QML Designer) & V-Play

 

The post Machine Learning: Add Image Classification for iOS and Android with Qt and TensorFlow appeared first on V-Play Engine.

TableView

by Richard Moe Gustavsen (Qt Blog)

I’m happy to announce that in Qt 5.12, a new TableView item will be available in the QtQuick module. TableView is similar to the existing ListView, but with additional support for showing multiple columns.

Like with ListView, you can assign data models of any kind to TableView, such as ListModels or plain JavaScript arrays. But to create models with more than one column, you currently need to subclass QAbstractItemModel in C++. A QML TableModel is also in the works, but will come later.

TableView inherits Flickable. This means that while a table can have any number of rows and columns, only a subsection of them will usually be visible inside the viewport. As soon as you flick, new rows and columns enter the viewport, while old ones move out. A difference to ListView is that TableView will reuse the delegate items that are flicked out to build the rows and columns that are flicked in. This will of course greatly improve performance, especially when using a delegate that has lots of child items.

The fact that TableView reuses delegate items is not meant to be transparent to the developer. When a delegate item is reused, context properties like index, row, column, and model roles, will be updated. But other properties will not. Storing a state inside a delegate item is a bad idea in the first place, but if you do, you have to reset that state manually. If, for example, a child rectangle changes color after construction, you need to set it back when the delegate item is reused. Two attached signals are available for this purpose: TableView.onPooled and TableView.onReused. The former will notify the delegate item when it is no longer a part of the table and has been moved to the reuse pool. This can be a good time to pause ongoing animations or timers for example. The latter signal will be emitted when the delegate item has been moved back into the view. At this point you can restore the color. It might be tempting to use Component.onCompleted for such things as well, but that signal is only emitted when the delegate item is created, and not when it’s reused.

The following snippet shows how to use the attached signals to temporarily pause an animation while a delegate item is pooled:

TableView {
    anchors.fill: parent
    clip: true

    columnSpacing: 1
    rowSpacing: 1
    model: myQAbstractTableModel

    delegate: Rectangle {
        implicitWidth: 100
        implicitHeight: 50

        TableView.onPooled: rotationAnimation.pause()
        TableView.onReused: rotationAnimation.resume()

        Rectangle {
            id: rect
            anchors.centerIn: parent
            width: 40
            height: 5
            color: "green"

            RotationAnimation {
                id: rotationAnimation
                target: rect
                duration: (Math.random() * 2000) + 200
                from: 0
                to: 359
                running: true
                loops: Animation.Infinite
            }
        }
    }
}

For simple cases, TableView will determine the width of a column by reading the implicitWidth of the delegate items inside it. For this strategy to be consistent, all delegate items in the same column should have the same implicitWidth. For more advanced cases, you can instead assign a callback function to TableView that returns the width of any given column. That way the application is in full control of the widths, whether they are calculated or stored, rather than TableView trying to solve this efficiently for models with potentially thousands of rows and columns.

TableView {
    anchors.fill: parent
    clip: true
    model: myQAbstractTableModel
    delegate: Rectangle {}

    columnWidthProvider: function (column) { return column % 2 ? 100 : 200 }
    rowHeightProvider: function (row) { return row % 2 ? 100 : 200 }
}

Other than this, the API for the first release of TableView is kept pretty small and strict. More will be implemented later, such as support for custom transitions when adding or removing rows and columns. We’re also working on a TableHeader and a TableModel, as well as a DelegateChooser. The latter lets you assign several delegates to a TableView and, e.g., use a different one for each column. It will already be available in Qt 5.12 as a labs import (Qt.labs.qmlmodels 1.0).

The post TableView appeared first on Qt Blog.

KD Chart 2.6.1 Released

This is the latest release of our powerful open-source Qt component, KD Chart, that allows you to create business charts and much more.

Release Highlights
  • Builds with modern Qt versions, at least up to Qt 5.10
  • Improves tooltip handling
  • Fixes horizontal bar chart
  • Uses @rpath for OSX dynamic libraries
  • Fixes build on Qt4/ARM

KD Chart makes use of the Qt Model-View programming model that allows re-use of existing data models to create charts. KD Chart is a complete implementation of the ODF (OpenDocument) Chart specification. It now includes Stock Charts, Box & Whisker Charts and the KD Gantt module for implementing ODF Gantt charts into applications.

Read more about KD Chart…

Get KD Chart here.

KD Chart is available under both a free software license (GPL) and a commercial license. The code is exactly the same under both licenses, so which license type you should choose depends on the project you want to use it for.

The post KD Chart 2.6.1 Released appeared first on KDAB.

Live update of Python code during debugging, using builtin reload()

Introduction

When debugging a Test Script, one can use Python’s built-in reload() from the Squish Script Console to get recent changes to module functions in the currently running Test Execution.

Debugging Python Test Scripts

While debugging your Test Scripts in the Squish IDE, the Script Console might come in handy, e.g. for getting immediate feedback and syntax confirmation, as described in an earlier article.

Sometimes it is more comfortable to modify the code right in place, in the editor. It is assumed that you’re making use of Python’s import mechanism instead of the source() function for bringing shared code into your script (if not, we explain how to work around that later in this article).

For example, let’s assume there is an aut_helper.py module in /shared/scripts/. This module provides higher-level functions that deal with the AUT, like addRandomAddressbookEntry().

A Test Case using that function could look like this:

import aut_helper
...
def main():
    startApplication("addressbook")
    ...
    aut_helper.addRandomAddressbookEntry()

If addRandomAddressbookEntry() breaks, e.g. due to intended changes in the AUT, you head into debugging mode, either by choosing the ‘Debug’ option in the Object Not Found dialog, by setting a breakpoint, or simply by pausing the Test Execution from the Control Bar. In the Squish Script Console, you can call functions defined in the aut_helper.py module, e.g.

>>> aut_helper.addRandomAddressbookEntry()

But it is also possible to make changes to the aut_helper.py module using the Squish IDE and (still being in the same Debug Session) invoke

>>> reload(aut_helper)
>>> aut_helper.addRandomAddressbookEntry()

This tells the Python interpreter to load the new function definitions into the current Test Execution. reload() is a built-in Python function that takes an already loaded module as its argument.

Now, without leaving your debugging session, you can make changes to your script functions in the Squish IDE editor, save them, and retry execution until the function is in the desired shape.
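The same edit-and-reload cycle can be reproduced in plain Python outside of Squish. In Python 3 the builtin moved to importlib.reload(), but the semantics are the same; the demo_helper module below is made up just for this illustration:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Stand-in for a shared scripts directory like /shared/scripts/.
module_dir = Path(tempfile.mkdtemp())
sys.path.insert(0, str(module_dir))
sys.dont_write_bytecode = True  # always re-read the source on reload

# First version of the helper module.
(module_dir / "demo_helper.py").write_text("def greet():\n    return 'old'\n")
import demo_helper
assert demo_helper.greet() == "old"

# Simulate editing the module in the IDE and saving it...
(module_dir / "demo_helper.py").write_text("def greet():\n    return 'updated'\n")

# ...then pick up the change without restarting the interpreter.
# In Python 2, as used by older Squish versions, this is the builtin reload().
importlib.reload(demo_helper)
assert demo_helper.greet() == "updated"
```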

What about source()?

Even if your Test Suite is organized using the source() function, you can make use of reloading, but you have to use a dedicated script file (here called scratch.py) just for the purpose of composing snippets, and import that file from the Squish Script Console:

import names #remove this line, when not using the Scripted Object Map feature
from squish import *

def addRandomAddressbookEntry():
    pass

When execution is halted, you can then have the same “Edit Script, Save, call from Script Console” roundtrips as above.

>>> import scratch
>>> scratch.addRandomAddressbookEntry()
... Edit scratch.py file, Save
>>> reload(scratch)
>>> scratch.addRandomAddressbookEntry()
...

Conclusion

The Squish IDE and squishrunner work great with Python’s built-in reload() function. This allows you to modify and debug your test script functions while running a test case.


The post Live update of Python code during debugging, using builtin reload() appeared first on froglogic.

Qt AR: Why and How to Add Augmented Reality to Your Mobile App

Improved AR capabilities for mobile platforms are one of the biggest trends of 2018. Apps with AR features like Yelp, Google Translate or Pokémon GO are only the beginning. Augmented reality allows you to create innovative user experiences that support your brand.

Mobile AR is on the Rise! Why?

Since the release of Apple’s ARKit and Google’s ARCore, augmented reality has made its way into the mobile market, making it possible, for example, to:

  • Catch virtual monsters in your neighborhood. (Pokemon GO)
  • See restaurant descriptions while you’re walking the street. (Yelp)
  • Translate texts while you view a sign. (Google Translate)

AR Apps: Pokémon GO, Yelp, Google Translate

Those apps mix the real world with computer-generated content. They thus show the user a different reality. In that sense, augmented reality (AR) is quite similar to virtual reality (VR), which is why they are often confused.

Differences between VR and AR

Both technologies can change the way you look at the world. However, they aim towards a different goal. Virtual reality tries to build a simulated environment around the user. It can take you to places you’ve never seen and allows you to enter a new world. When VR does its job right, you will believe that you are actually there. For example, when driving in a virtual reality racing simulator:

Virtual Reality Racing Car

In contrast to VR, augmented reality does not take you to a different place. It enhances the world around you with digital information. For example, to see the route of your navigation system mixed into the real street image while driving in your car.

Wikitude Navigation

The world’s first pedestrian and car navigation system that integrates AR was the Wikitude Navigation app. The app was a revolutionary step forward in the navigation and guidance field and eliminated the need for a map.

Advantages of Immersive Experiences in Mobile Apps and Games

Since Apple launched its app store with 20k apps in 2008, it has experienced rapid growth and now offers more than 3M apps. More than ever, businesses and developers thus strive to provide unique app experiences that support their brand. They empower users to be creative and connect in order to boost engagement and retention rates.

Mobile AR now allows developers to create immersive app experiences that surprise and engage users. Businesses have understood this potential, and the International Data Corporation forecast for 2018 even expects worldwide spending on AR and VR to increase by 95%. Let’s have a look at some innovative AR apps:

Telekom: Lenz – Gorillaz App

Telekom Electronic Beats partnered up with the Gorillaz to create a new dimension in music. The Lenz app transforms magenta surfaces into digital portals which display exclusive Gorillaz content.

Washington Post App: Unesco Heritage

The Washington Post has published another successful AR-enhanced story. This time, the Post’s article promotes all 23 of the UNESCO World Heritage sites situated in the USA. To get readers to learn about, appreciate, and visit these locations, the daily newspaper included an AR feature to get users even more involved with the story.

Augmentors: Real Monster Battles

Following in the footsteps of Pokemon GO, Augmentors is the world’s first cross-platform (iOS & Android) augmented reality game backed by the Bitcoin Blockchain. Players can trade, swap, battle, and train gaming creatures in the real world. Early stage game supporters will be rewarded with unique currency and one-of-a-kind creatures.

Augmented Cocktails: AR in Low-Light Conditions

It can be difficult to provide rich AR experiences in all kinds of situations, for example when dealing with low-light scenarios. City Social in London is known for providing great food, drinks, service and a stunning skyscraper view. With the intention of delighting their customers even more, they paired with Mustard Design to create an innovative app that brings their cocktails to life:

Lufthansa AR Aviation Demo

Instead of shipping and installing costly demo equipment to be displayed at trade show exhibitions, Lufthansa Technik is innovatively using augmented reality technology to show their customers detailed installation information and connectivity solutions.

How Does Augmented Reality Work?

The above showcases all rely on the mobile device camera and sensors to track images, objects and scenes of the real world:

  • Telekom recognizes magenta surfaces and replaces them with different content.
  • The Washington Post app tracks readers’ surroundings and instantly layers the camera view with virtual animals like a bison.
  • Augmentors combines such Instant 3D Tracking with Image Recognition to bring game cards to life.

Another example app that relies on location-based AR is the Osmino app: a quick scan of your surroundings provides you with a comprehensive listing of all free Wi-Fi hotspots around you:

Wikitude Showcase Osmino

You can integrate some of these features in your mobile app with Apple’s ARKit and Google’s ARCore. But you also have the option to rely on cross-platform tools that go beyond ARKit and ARCore. In fact, the above showcases are all built with the Wikitude AR SDK.

Why use Wikitude instead of ARKit or ARCore?

Being in the market since 2008, Wikitude bridges the gap between different devices, platforms, and levels of AR support. With a single cross-platform API, it allows over 100,000 developers to integrate AR features across iOS, Android and Windows with a single code base, while having a much higher market reach than ARKit and ARCore.

Advantages of the Wikitude SDK Architecture

Wikitude provides a rich AR experience across platforms. To achieve that, it relies on several abstraction layers:

Wikitude SDK Architecture

The Core Components handle features like Image Recognition and Object/Scene Recognition. Wikitude built the so-called SLAM Engine to offer all AR features across devices and platforms.

In case Apple’s ARKit or Google’s ARCore are available, Wikitude can dynamically switch to these native frameworks instead of its own engine. In addition, Wikitude can also run on iOS, Android and Windows devices that do not have such native support for AR.

Compared to native development with ARKit or ARCore, Wikitude thus even supports AR on devices that cannot run native AR features via ARKit or ARCore. This is a huge benefit, because your app is not bound by the market coverage of ARKit or ARCore. See this comparison of ARKit and ARCore supported devices versus those supported by Wikitude:

  • iOS ARKit Device Coverage: 81% (minimum iOS 11.0 + iPhone 6S, iPad 5 and newer models)
  • iOS Wikitude Device Coverage: 92% (minimum iOS 9.0 + iPhone 4, iPad 2 and newer models)
    → Wikitude has +11% iOS device coverage compared to ARKit
  • Android ARCore Device Coverage: 5% (minimum Android 7.0 + currently about 50 device models out of the thousands in the market)
  • Android Wikitude Device Coverage: 95% (minimum Android 4.4 + most existing device models)
    → Wikitude has +90% Android device coverage compared to ARCore

For details on which devices are supported, see the official developer docs by Apple for ARKit supported devices and iOS version market share, and by Google for ARCore supported devices.

So if your goal is to make your app available on as many devices as possible, Wikitude is the go-to solution.

To use Wikitude, you can embed their augmented reality view into your existing native apps. It is not required to modify other views of your iOS, Windows or Android app. Wikitude also offers several plugins to use their SDK in conjunction with cross-platform app frameworks like V-Play, via its Qt Plugin.

How to Use the Wikitude AR Plugin in Qt Apps

The Wikitude Qt AR Plugin developed by V-Play offers an API to:

  • integrate Wikitude into Qt applications, and
  • integrate it into existing or new native applications.

The Wikitude Qt AR plugin builds upon the native APIs of Wikitude and can run augmented reality worlds created with the Wikitude JS API.

If you have an existing or newly developed app based on Qt, you can simply load the Wikitude AR Plugin from QML-based Qt Quick applications or C++-based Qt Widgets applications.

How to Use Image Recognition and 3D Tracking in Your Mobile App

Since the release of V-Play Engine’s Wikitude Plugin you can integrate and use the Wikitude AR SDK in your Qt cross-platform app. It only takes a few lines of code. The examples below show how to run some of the Wikitude AR examples with V-Play.

Wikitude Makes Image Tracking Easy

The following demo code includes everything you need to embed a Wikitude view in your QML app. This example tracks certain images and overlays a transparent video, as if it were part of the image:

import QtQuick.Controls 2.0
import QtQuick 2.0
import VPlayApps 1.0
import VPlayPlugins 1.0

App {
 // name of the Wikitude example to load
 property string example: "11_Video_4_Bonus-TransparentVideo"
 readonly property bool exampleIsLoaded: samplesDl.available

 // NavigationStack can display Pages and adds a NavigationBar
 NavigationStack {
   id: navStack
   // at startup show either arPage or downloadPage, in case the example is not loaded yet
   Component.onCompleted: navStack.push(exampleIsLoaded ? arPage : downloadPage)
 }

 // arPage: Page with a Wikitude view
 property Component arPage: Page {
   title: "AR Example"

   // configure Wikitude view
   WikitudeArView {
     id: arView
     anchors.fill: parent
     arWorldSource: samplesDl.getExtractedFileUrl(example+"/index.html")
     running: true
     cameraPosition: WikitudeArView.BackCamera

     //license key for V-Play QML Live app
     licenseKey: "g0q44ri5X4TwuXQ/9MDYmZxsf2qnzTdDIyR2dWhO6IUkLSLU4IltPMLWFirdj+7kFZOdWAhRUD6fumVXLXMZe6Y1iucswe1Lfa5Q7HhQvPxEq0A7uSU8sfkHLPrJL0z5e72DLt7qs1h25RJvIOiRGDoRc/h/tCWwUdOL6ChDnyJTYWx0ZWRfX8Vh9c9kcuw4+pN/0z3srlwIHPV5zJuB1bixlulM4u1OBmX4KFn+4+2ASRCNI+bk655mIO/Pk3TjtYMrgjFR3+iYHvw1UmaYMVjsrgpcVkbzJCT6QmaW8LejnfXDNLAbZSov64pVG/b7z9IZPFLXxRSQ0MRLudoSDAh6f7wMTQXQsyqGrZeuQH1GSWtfjl/geJYOvQyDI+URF58B5rcKnrX6UZW3+7dP92Xg4npw7+iGrO1M4In/Wggs5TXrmm25v2IYOGhaxvqcPCsAvbx+mERQxISrV+018fPpL8TzR8RTZZ5h7PRfqckZ3W54U1WSiGn9bOj+FjDiIHlcvIAISpPg2Vuq88gLp0HJ5W+A+sVirqmmCyU9GKeV5Faiv62CJy6ANCZ83GGX2rWcIAh1vGOQslMr9ay4Js+rJsVN4SIhCYdw9Em9hSpoZgimnOaszI7zn9EnPwVQgNETgVm7pAZdLkH5hxFoIKOPG2e79ZKKmzlkB/IZigoHZWNDUCFnEHDNFlTZjOEwoPi8DDGfzOEOGngWE7jmp24N7GzAP7e54Y3e48KtmIJ1/U0PFKOoi2Yv0Gh+E1siU5MBf8dLO7y7GafJWJ2oCUqJG0pLb2cgTf9pjkr625BV3XxODRylgqc5/UymTY6l1J0qO43u5hH3zaejng4I9cgieA3Y553rAEafAsfhrRmWsLW/kBdu4KLfY4eQ9z4B0TweW/xsofS0bkIqxalh9YuGBUsUhrwNUY7w6jgC6fjyMhtDdEHAlXC2fW1xLHEvY9CKojLNJQUnA0d5QCa22arI8IK63Jn8Cser9Cw57wOSSY0ruoJbctGdlsr/TySUkayAJJEmHjsH73OdbAztGuMjVq7Y643bTog4P3Zoysc="
   }
 }

 // downloadPage: Page for downloading the Wikitude example at runtime
 // this is only required to retrieve the Wikitude sources for the V-Play QML Live app, Wikitude sources can also be bundled with the app otherwise
 property Component downloadPage: Page {
   title: "AR Example - Download"

   Column {
     anchors.fill: parent
     anchors.margins: dp(12)
     spacing: dp(12)

     AppText {
       text: samplesDl.status === DownloadableResource.UnAvailable
             ? qsTr("Wikitude example requires to be downloaded (~ 2MB)")
             : samplesDl.status === DownloadableResource.Downloading
               ? qsTr("Downloading example... (%1%)").arg(samplesDl.progress)
               : qsTr("Extracting example... (%1%)").arg(samplesDl.progress)
       width: parent.width
     }

     AppButton {
       text: samplesDl.status === DownloadableResource.UnAvailable ? qsTr("Start download") : qsTr("Cancel download")
       onClicked: if(samplesDl.status === DownloadableResource.UnAvailable)
                    samplesDl.download()
                  else samplesDl.cancel()
     }

     ProgressBar {
       width: parent.width
       from: 0
       to: 100
       value: samplesDl.progress
     }
   }
 }

 // component to download additional app resources, like the Wikitude example
 DownloadableResource {
   id: samplesDl
   source: "https://v-play.net/qml-sources/wikitude-examples/"+example+".zip"
   extractAsPackage: true
   storageLocation: FileUtils.DownloadLocation
   storageName: example
   onDownloadFinished: {
     if(error === DownloadableResource.NoError) {
       navStack.clearAndPush(arPage) // open AR page after download is finished
     }
   }
 }
}


You can test the Image Tracking AR demo with the image below. It is also found in the Wikitude Plugin documentation.

Wikitude Image Tracking Video Example Surfer

Most of the QML code above is a little overhead to let you instantly preview the example with V-Play QML Live Code Reloading.

What is V-Play QML Live Code Reloading?

It allows you to run and reload apps & games within a second on iOS, Android and Desktop platforms. You can just hit save and the app reloads instantly, without the need to build and deploy again! This is especially useful for AR, which usually requires a lot of on-device testing to tweak settings.

You can also use it to run all the examples listed here from the browser, without having to set up any native SDKs on your PC. Just download the V-Play Live Reload App for Android or iOS to connect a mobile device.

The code above downloads the configured Wikitude example as a zip, extracts the archive, and runs the demo in a Wikitude augmented reality view. Pretty amazing, actually. Go ahead and try it yourself by clicking on one of the “Run this Example” buttons.

The ability to download assets or code at runtime is a super useful advantage of V-Play. This means that the original app can stay small while additional features are downloaded on demand. However, if the AR part is essential in your own app, you can also bundle the Wikitude code so the AR assets are available without an additional download.
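The extract-once, load-locally pattern that DownloadableResource implements can be sketched in a few lines of plain Python (the helper name get_resource is hypothetical, and the networking part is left out; a local zip archive stands in for the downloaded file):

```python
import zipfile
from pathlib import Path

def get_resource(archive_path, cache_dir, file_in_archive):
    """Extract an archive into a cache directory once, then serve files from it."""
    cache_dir = Path(cache_dir)
    marker = cache_dir / ".extracted"
    if not marker.exists():  # only extract on first use
        cache_dir.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(archive_path) as zf:
            zf.extractall(cache_dir)
        marker.touch()
    return cache_dir / file_in_archive
```

In the QML example above, DownloadableResource additionally performs the HTTP download and reports progress; the sketch starts from an already downloaded archive.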

The minimum QML code required thus boils down to a few lines of code:

import VPlayApps 1.0
import VPlayPlugins 1.0

App {
 WikitudeArView {
   id: arView
   anchors.fill: parent
   arWorldSource: Qt.resolvedUrl("assets/11_Video_4_Bonus-TransparentVideo/index.html")
   running: true
   cameraPosition: WikitudeArView.BackCamera
   licenseKey: ""
 }
}

How to Create Wikitude AR Worlds

The Wikitude SDK makes it easy to create such augmented reality views. It builds on web technologies (HTML, JavaScript, CSS) to create so-called ARchitect worlds. These augmented reality experiences are ordinary HTML pages. They use the ARchitect JavaScript API to create objects in augmented reality. That is why the WikitudeArView QML component in the above example has an arWorldSource property. It refers to the index.html of the ARchitect world:

<!DOCTYPE HTML>
<html>
<head>
 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
 <meta content="width=device-width,initial-scale=1,maximum-scale=5,user-scalable=yes" name="viewport">
 <title></title>

 <script src="https://www.wikitude.com/libs/architect.js"></script>
 <script type="text/javascript" src="../ade.js"></script>
 <link rel="stylesheet" href="css/default.css">
</head>
<body>
 <script src="js/transparentvideo.js"></script>
</body>
</html>

It is quite simple, as all the magic happens in the JavaScript code of the ARchitect world. The above example includes transparentvideo.js, which amounts to only 80 lines of code. This is what the main part for the image tracking and video overlay looks like:

var World = {
 init: function initFn() {
   this.createOverlays();
 },

 // create augmented reality overlays
 createOverlays: function createOverlaysFn() {

   /* Initialize ClientTracker */
   this.targetCollectionResource = new AR.TargetCollectionResource("assets/magazine.wtc", {
     onError: function(errorMessage) {
       alert(errorMessage);
     }
   });

   this.tracker = new AR.ImageTracker(this.targetCollectionResource, {
     onError: function(errorMessage) {
       alert(errorMessage);
     }
   });

   /* initialize video drawable */
   var video = new AR.VideoDrawable("assets/transparentVideo.mp4", 0.7, {
     translate: {
       x: -0.2,
       y: -0.12
     },
     isTransparent: true
   });

   video.play(-1);
   video.pause();

   /* handle video playback when image is tracked */
   var pageOne = new AR.ImageTrackable(this.tracker, "*", {
     drawables: {
      cam: [video]
     },
     onImageRecognized: function onImageRecognizedFn() {
       video.resume();
     },
     onImageLost: function onImageLostFn() {
       video.pause();
     },
     onError: function(errorMessage) {
       alert(errorMessage);
     }
   });
 }
};

World.init();

See the Wikitude documentation for details of their JavaScript API and step-by-step tutorials.

Wikitude Studio – No Coding Required

For those who are not very comfortable with coding, Wikitude also offers a simple drag-and-drop web editor: Wikitude Studio. It is your one-stop shop for generating and managing target collections, as well as for creating and publishing AR experiences!

Wikitude Studio optimizes your projects for the Wikitude SDK. It minimizes the effort of creating image target collections (wtc files) and object target collections (wto files). The Studio Editor makes it possible to add augmentations to your targets. You can test AR experiences and make them available to clients inside the Wikitude App, or inside your own app built with the Wikitude Plugin.

The Power of Instant Tracking and 3D Rendering

Wikitude is not only simple, it is also powerful. In addition to Image Tracking, it can instantly track the camera (Instant Tracking) or real-world objects (Object Tracking). The following demo uses Instant Tracking to put 3D objects into the world:


App {
 // changed configuration to load the instant tracking demo 
 property string example: "05_InstantTracking_4_SceneInteraction"

 // ... 

 // no other changes required, DownloadableResource automatically uses the new example as source
 DownloadableResource {
   source: "https://v-play.net/qml-sources/wikitude-examples/"+example+".zip"
   // ...
 }
}

With 230 lines of JavaScript code, the ARchitect world of this example is simple and short as well.

More Augmented Reality Examples

Do you want to play around some more? Then go ahead and try one of these examples:

Geo Tracking: POI Radar

// run this demo to get a full QML snippet that downloads and opens the chosen example 
property string example: "10_BrowsingPois_2_AddingRadar"


Can be used to:

  • Show Points Of Interest around you, based on the GPS position.
  • For example, to implement Augmented Navigation or to see info about hotels or restaurants around you.

Gesture Image Tracking

// run this demo to get a full QML snippet that downloads and opens the chosen example 
property string example: "02_AdvancedImageTracking_1_Gestures"

Wikitude Image Tracking Face Example

Can be used to:

  • Drop images, gifs or videos onto an image.
  • For example to let users create and share AR experiences, similar to SnapChat / Instagram video processing with tracked objects.

Snap-To-Screen 3D Model

// run this demo to get a full QML snippet that downloads and opens the chosen example 
property string example: "07_3dModels_4_SnapToScreen"

Wikitude Showcase Snap-to-screen Car

Can be used to:

  • Show additional information or 3D scene when scanning a certain image.
  • For example to enhance your print advertisement in a magazine with AR features:

Media Markt Magazine with Augmented Reality

Wikitude SDK Examples App

The following demo app allows you to browse all Wikitude SDK Examples from within a single app:


 import QtQuick.Controls 2.0
 import QtQuick 2.0
 import VPlayApps 1.0
 import VPlayPlugins 1.0

 App {
   id: app

   DownloadableResource {
     id: samplesDl
     source: "https://v-play.net/qml-sources/wikitude-examples/WikitudeSdkSamples.zip"
     extractAsPackage: true
     storageLocation: FileUtils.AppDataLocation
     storageName: "WikitudeSdkSamples"
   }

   //samples.json lists all the SDK examples
   readonly property url samplesJsonFileUrl: samplesDl.available ? samplesDl.getExtractedFileUrl("samples.json") : ""
   readonly property string samplesJson: samplesDl.available ? fileUtils.readFile(samplesJsonFileUrl) : "[]"

   //map the JSON file to a list model for ListPage
   readonly property var samplesData: JSON.parse(samplesJson)
   readonly property var samplesModel: samplesData.map(function(category) {
     return [ { isHeader: true, name: category.category_name } ].concat(category.samples)
   }).reduce(function(a, b) { return a.concat(b) }, [])

   Rectangle {
     anchors.fill: parent
     color: "white"
   }

   NavigationStack {
     id: navStack

     ListPage {
       id: examplesListPage

       listView.visible: samplesDl.available

       title: "Wikitude AR Examples"

       model: samplesModel

       delegate: SimpleRow {
         enabled: !modelData.isHeader
         style.backgroundColor: enabled ? Theme.backgroundColor : Theme.secondaryBackgroundColor

         iconSource: modelData.is_highlight ? IconType.star : ""
         icon.color: "yellow"

         text: modelData.name
         detailText: !modelData.isHeader && modelData.path || ""

         onSelected: navStack.push(arPage, { sample: modelData })
       }

       Column {
         visible: !samplesDl.available
         anchors.fill: parent
         anchors.margins: dp(12)
         spacing: dp(12)

         AppText {
           text: samplesDl.status === DownloadableResource.UnAvailable
                 ? qsTr("Wikitude SDK examples need to be downloaded (134 MB)")
                 : samplesDl.status === DownloadableResource.Downloading
                   ? qsTr("Downloading SDK examples... (%1%)").arg(samplesDl.progress)
                   : qsTr("Extracting SDK examples... (%1%)").arg(samplesDl.progress)
           width: parent.width
         }

         AppButton {
           text: samplesDl.status === DownloadableResource.UnAvailable ? qsTr("Start download") : qsTr("Cancel download")
           onClicked: if(samplesDl.status === DownloadableResource.UnAvailable)
                        samplesDl.download()
                      else samplesDl.cancel()
         }

         ProgressBar {
           width: parent.width
           from: 0
           to: 100
           value: samplesDl.progress
         }
       }
     }
   }

   property Component arPage: Page {
     property var sample
     readonly property bool usesGeo: sample.requiredFeatures.indexOf("geo") >= 0

     title: sample.name

     WikitudeArView {
       id: arView

       anchors.fill: parent

       arWorldSource: samplesDl.getExtractedFileUrl(sample.path)
       running: true

       //set this to false to use the device location service
       overrideLocation: !usesGeo

       //license key for V-Play QML Live app
       licenseKey: "g0q44ri5X4TwuXQ/9MDYmZxsf2qnzTdDIyR2dWhO6IUkLSLU4IltPMLWFirdj+7kFZOdWAhRUD6fumVXLXMZe6Y1iucswe1Lfa5Q7HhQvPxEq0A7uSU8sfkHLPrJL0z5e72DLt7qs1h25RJvIOiRGDoRc/h/tCWwUdOL6ChDnyJTYWx0ZWRfX8Vh9c9kcuw4+pN/0z3srlwIHPV5zJuB1bixlulM4u1OBmX4KFn+4+2ASRCNI+bk655mIO/Pk3TjtYMrgjFR3+iYHvw1UmaYMVjsrgpcVkbzJCT6QmaW8LejnfXDNLAbZSov64pVG/b7z9IZPFLXxRSQ0MRLudoSDAh6f7wMTQXQsyqGrZeuQH1GSWtfjl/geJYOvQyDI+URF58B5rcKnrX6UZW3+7dP92Xg4npw7+iGrO1M4In/Wggs5TXrmm25v2IYOGhaxvqcPCsAvbx+mERQxISrV+018fPpL8TzR8RTZZ5h7PRfqckZ3W54U1WSiGn9bOj+FjDiIHlcvIAISpPg2Vuq88gLp0HJ5W+A+sVirqmmCyU9GKeV5Faiv62CJy6ANCZ83GGX2rWcIAh1vGOQslMr9ay4Js+rJsVN4SIhCYdw9Em9hSpoZgimnOaszI7zn9EnPwVQgNETgVm7pAZdLkH5hxFoIKOPG2e79ZKKmzlkB/IZigoHZWNDUCFnEHDNFlTZjOEwoPi8DDGfzOEOGngWE7jmp24N7GzAP7e54Y3e48KtmIJ1/U0PFKOoi2Yv0Gh+E1siU5MBf8dLO7y7GafJWJ2oCUqJG0pLb2cgTf9pjkr625BV3XxODRylgqc5/UymTY6l1J0qO43u5hH3zaejng4I9cgieA3Y553rAEafAsfhrRmWsLW/kBdu4KLfY4eQ9z4B0TweW/xsofS0bkIqxalh9YuGBUsUhrwNUY7w6jgC6fjyMhtDdEHAlXC2fW1xLHEvY9CKojLNJQUnA0d5QCa22arI8IK63Jn8Cser9Cw57wOSSY0ruoJbctGdlsr/TySUkayAJJEmHjsH73OdbAztGuMjVq7Y643bTog4P3Zoysc="

       cameraPosition: sample.startupConfiguration.camera_position === "back"
                       ? WikitudeArView.BackCamera
                       : WikitudeArView.FrontCamera

       cameraResolution: WikitudeArView.AutoResolution
       cameraFocusMode: WikitudeArView.AutoFocusContinuous
     }
   }
 }

What’s the Future for AR?

Augmented reality still has a lot of exciting features and functionalities in store for users, for example with Cloud AR and Multiplayer AR capabilities. Wikitude already offers a cloud-based image recognition service. The latest release, SDK 8, which is supported by the Qt Wikitude Plugin, brought many interesting features like Scene Recognition, Instant Targets or Extended Object Tracking that you can use right now. And in terms of shared experiences, support workers can remotely display 3D content on another user’s device.

Apple recently introduced their new ARKit 2 framework, a platform that allows developers to integrate

  • shared AR, which allows multiplayer augmented reality experiences,
  • persistent experiences tied to a specific location,
  • object detection, and
  • image tracking,

all of which make AR apps even more dynamic.

To showcase the new multiplayer feature, Apple introduced their augmented reality game ‘SwiftShot’:

The use-cases for shared augmented reality are vast, for both mobile games and apps. For example, your AR navigation system could show augmentations that other users placed. You would then also see digital warning signs along the road in addition to the route.

You can also build such multi-user experiences with V-Play Multiplayer. Together with Wikitude, a shared augmented reality experience created with QML + JavaScript is also only a few steps away. V-Play also plans to integrate Qt 3D Rendering with Wikitude’s Native APIs to boost rendering performance even more.

If you have a business request for these cutting-edge features currently in development or if you need assistance in developing an AR experience with high quality standards, don’t hesitate to drop a line at support@v-play.net or contact us here. The V-Play SDK is free to use, so make sure to check it out!


If you enjoyed this post, please leave a comment or share it on Facebook or Twitter.

More Relevant App Development Resources

The Best App Development Tutorials & Free App Templates

All of these tutorials come with full source code of the mobile apps! You can copy the code to make your own apps for free!

App Development Video Tutorials

Make Cross-Platform Apps with Qt: V-Play Apps

How to Add In-App Chat or Gamification Features to Your Mobile App

How to Make a Mobile App with Qt Quick Designer (QML Designer) & V-Play


The post Qt AR: Why and How to Add Augmented Reality to Your Mobile App appeared first on V-Play Engine.

Enterprise Application Development with Velneo and Qt

Do you enjoy case studies? We sure do, especially when those case studies are examples of finest work born from one’s passion for code.

There are many great Qt user stories in desktop applications. One of them comes from Velneo, an innovative Spanish tech company with their development platform that includes a rapid application development tool called Velneo vDevelop, and its application engine.

Despite mobile and web apps being all the rage, desktop applications remain highly relevant in the enterprise market. Native desktop applications regularly beat web apps with better performance and a far superior user experience. Thousands of Velneo users will attest.

Velneo vDevelop is a visual editor following the WYSIWYG approach. It’s developed using Qt Widgets and other Qt components. In addition, Velneo has developed their own easy and simple-to-learn programming language that saves users from complex implementation details. Velneo vDevelop and the programming language are the main ingredients that let you cook powerful applications – fast and easy.

With the framework, users can create finished software that uses Qt Quick, Qt Widgets, and several other modules. The runtime uses Qt modules that open up various ways to meet user demands in this market.

See below some of the vDevelop editor screenshots.

velneo_vdevelop_screen_01

velneo_vdevelop_screen_02

velneo_vdevelop_screen_03

Check out this video for many examples of ERP, CRM, accounting, and other business applications built with Velneo and Qt. How many different Qt Widgets can you spot? 🙂

To learn more about how Velneo is using Qt, read the case study in our Built with Qt section. If you have any questions about desktop application development, get in touch!

The post Enterprise Application Development with Velneo and Qt appeared first on Qt Blog.

Post Akademy

So, it has been a busy week of Qt and KDE hacking in the beautiful city of Vienna.
Besides enjoying plenty of the Viennese staple food, schnitzel, it was an interesting adventure in getting smarter.

  • Getting smarter about making sure what happens in North Korea doesn’t stay in North Korea
  • Getting smarter about what is up with this newfangled Wayland technology and how KDE uses it
  • Getting smarter about how to Konquer the world and welcoming new contributors
  • Getting smarter about opensource licensing compliance
  • Getting smarter about KItinerary, the opensource travel assistant
  • Getting smarter about TNEF, an invitation transport format that isn’t that neutral
  • Getting smarter about Yocto, automotive and what KDE can do

And lots of other stuff.

Besides getting smarter, talking to people about what they do and writing some patches were also important parts of the week.
I also wrote some code. Here is a highlight:

And a lot of other minor things, including handling a couple of Debian bugs.

What I’m hoping to put either on my own todo list, or preferably on others’, is

I felt productive, welcome and … ready to sleep for a week.

Python Extensions in QtCreator

Hello world! My name is Tilman and I have been an intern with The Qt Company in Berlin for the last few weeks. During my time here, I have worked on enabling Python extensibility for QtCreator, and I am happy to announce that a first proof-of-concept version is available today!

So, what exactly do the Python extensions do? Well, the goal is to eventually be able to do just about anything a native C++ plugin could do. But for now, the scope is much narrower and only a very small part of the C++ API is exposed.

screenshot_20180809_160715

A Technical Perspective

The main goal for me was to explore how this vision could be implemented. For now the project focuses on getting the integration and setup right, rather than having as many bindings as possible.

Everything starts with a new QtCreator plugin, which initializes Python bindings and then loads the user provided Python extensions. This is done by executing their Python scripts in an embedded CPython interpreter. Getting this to work requires two main things:

  1. Bindings (and a mechanism for loading bindings only if the relevant plugins are loaded)
  2. A system for discovering and running Python extensions


Generating Bindings

Some of you may be familiar with Qt for Python. This project enables developers to create Qt applications in Python by generating Python bindings for Qt’s C++ code. To do this, it uses a binding generator called Shiboken.

To generate the bindings for QtCreator’s APIs, I used the same tool. This means that, on top of all the QtCreator-specific bindings, anything from Qt for Python is also available from Python.

Plugins in QtCreator can be disabled by the user. Thus, only bindings for the Core plugin and things like the Utils library can be exposed directly without incurring extra dependencies. This is quite a harsh restriction on the bindings we can use.

To circumvent this problem, any other QtCreator plugin may provide an additional library, which is then dynamically loaded by the Python extensions plugin as necessary. These libraries will eventually be provided for all plugins maintained by The Qt Company. For now, there is one example of such a library available for the Project Explorer plugin.

The Embedded Interpreter

Python extensions are nothing but a directory containing a main.py file, which represents the entry point of the extension.

My main design goal was to make Python extensions ‘feel’ as if they were normal Python scripts, run from within their extension directory. Since all the extensions run in the same embedded Python, there is a good deal of code devoted to making sure extensions seem isolated, as well as setting the appropriate sys.path for each extension.

This means you can do things like import other files from your extensions directory or mess with sys.path, just like you would with a normal Python program.

If your extensions depend on any other Python modules, there is also a facility for loading these dependencies. By including a requirements.txt, all your dependencies are ‘pip installed’ before your extension is first run. Should you need to do any other setup before your main.py can run, you can also provide an optional setup.py, which is run before, and separately from, your main script.
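Putting these pieces together, a hypothetical extension directory might look like this (only main.py, setup.py and requirements.txt carry special meaning; the other names are illustrative):

```
my_extension/
├── main.py           # entry point, executed when the extension is loaded
├── setup.py          # optional, run once before and separately from main.py
├── requirements.txt  # dependencies, 'pip installed' before the first run
└── helpers.py        # any other module, importable from main.py as usual
```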

Closing Words

While a lot of heavy lifting still needs to be done, the main challenges of this project are now solved. If you are interested in trying things out yourself, I highly encourage you to check out the project’s git repository. There, you can also have a look at the code and more in-depth documentation.

On top of the C++ code, build instructions and some initial documentation, you will find several examples of Python extensions that give a taste of what will be possible.

The post Python Extensions in QtCreator appeared first on Qt Blog.

Join us for Squish Days Europe 2018


Want to learn everything about automated GUI testing with the Squish GUI Tester and code coverage analysis with Squish Coco from our froglogic engineers?

Join us for a Squish Day in one of the cities listed below. You will learn from our engineers how to use our tools most efficiently, have the chance for in-depth technical discussions with our engineers and other Squish users, and hear about the future roadmap of the Squish GUI Tester and Squish Coco.

Attendance, including drinks and lunch, is free.

>> Register here! <<

Locations:


Squish Day Cambridge

Date & Time: October 23, 2018 | 09:30 – 16:00
Location: Quy Mill Hotel & Spa
Church Rd, Stow cum Quy, Cambridge CB25 9AF, United Kingdom

Squish Day Stockholm

Date & Time: November 13, 2018 | 09:30 – 16:00
Location: Scandic Victoria Tower
Arne Beurlings Torg 3, 164 40 Kista, Sweden

Agenda:

9:30      Entrance
10:00    Introduction
10:30    Intro, demo and Q&A of Squish for Qt
12:30    Lunch
13:30    Intro, demo and Q&A of Squish Coco
14:30    Product Roadmap and demo of Team Server
15:30    General Q&A
16:00    End


Past events:

Squish Day Amsterdam

Date & Time: March 27, 2018 | 09:30 – 16:00
Location: Room Mate Aitana Hotel
IJdok 6, 1013 MM Amsterdam, The Netherlands


Squish Day Munich

Date & Time: May 15, 2018 | 09:30 – 16:00
Location: Courtyard by Marriott Munich East
Orleansstraße 81-83, 81667 Munich, Germany


Squish Day Toulouse

Date & Time: June 5, 2018 | 09:30 – 16:00
Location: Hotel Mercure Toulouse Centre Saint Georges
Rue Saint-Jérôme, 31000 Toulouse, France


The post Join us for Squish Days Europe 2018 appeared first on froglogic.

Virtlyst 1.2.0 release

Virtlyst, a web interface to manage virtual machines built with Cutelyst/Qt/C++, got a new release.

This new release includes a bunch of bug fixes, the most important probably being the ability to warn the user before important actions, to help avoid mistakes.

Most commits came from new contributor René Linder, who is also working on a Bootstrap 4 theme, and Lukas Steiner created a Dockerfile for it. This is especially cool because the Virtlyst repository now has 4 authors, while Cutelyst, which is way older, has only 6.

For the next release I'll also try to add user management (today there is a single admin account, and new users need to be added via SQL), which wasn't available in the original WebVirtMgr project but is surely the most important missing feature.

Have Fun! https://github.com/cutelyst/Virtlyst/archive/v1.2.0.tar.gz

Release 2.18.0: Update to Qt 5.11.1 with QML Compiler and Massive Performance Improvements

V-Play 2.18.0 adds support for Qt 5.11.1, with all features, improvements and fixes. Major changes to the QML compiler pipeline and QML engine boost the performance of your apps and games. The Qt Quick Compiler is now also available with the open source Qt version. This update also adds several improvements and fixes to V-Play app and game components.

Improved QML and JavaScript Performance on iOS, Android and Desktop

Qt now uses a completely new QML compiler pipeline to compile QML and JavaScript into bytecode. A JIT then compiles heavily used functions to assembly code on the fly.


Image from www.qt.io

Here are some more details about this great addition:

  • Lots of cleanups and performance improvements to the way function calls and JavaScript scopes are handled.
  • Improved performance of JS property lookups.
  • A new bytecode format that is very compact, saving memory in many cases.
  • Significantly faster bytecode interpreter than in earlier versions of Qt, in many cases reaching almost the performance of the old JIT.
  • A new JIT that works on top of the bytecode interpreter and only compiles hot functions into assembly.
  • Overall test results show almost a doubling of the JS performance on platforms where JIT can’t be used (iOS and WinRT) compared to 5.10.
  • With the new JIT, JS performance is usually around 10-40% faster than in older Qt versions (depending on the use case).

Qt Quick Compiler for AOT Compilation of QML and JS

You can now use the Qt Quick Compiler in all your apps and games. This was previously limited to only commercial Qt users, but is now also available with the open source Qt version.

To use the Qt Quick Compiler, just add the following line to your .pro file

CONFIG += qtquickcompiler

and enable the qrc resource system in your .pro and main.cpp files. This will compile your QML and JavaScript files ahead of time to bytecode and embed them in your application.

Note for using the resource system: For the Qt Quick Compiler, it is not sufficient to just add directory names to the resources.qrc file. Instead, add every file that you want to include individually.
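As an illustration, a resources.qrc that lists each file explicitly might look like this (the file names are made up):

```xml
<RCC>
    <qresource prefix="/">
        <!-- list every QML/JS file explicitly; directory entries alone
             are not picked up by the Qt Quick Compiler -->
        <file>qml/Main.qml</file>
        <file>qml/pages/MainPage.qml</file>
        <file>qml/logic/utils.js</file>
    </qresource>
</RCC>
```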

Use the Qt Quick Compiler for a Faster App Start

Qt compiles and caches QML and JS files while your application is running. This results in significantly faster load times of applications, as the cache files are faster to load. However, the initial creation of cache files can still take time, especially when the application starts for the very first time. To avoid that initial step and provide faster start-up times from the very beginning, you can use the Qt Quick Compiler to generate the cache files ahead-of-time, when compiling your application.

You can find more info about this here.

Improved Performance and Reduced CPU Usage with Qt 3D

The update to Qt 5.11.1 also brings performance improvements and a lot of fixes to the Qt 3D module. This makes it even easier to add 3D content in your apps and games.


import VPlayApps 1.0
import QtQuick 2.9
// 3d imports
import QtQuick.Scene3D 2.0
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import Qt3D.Input 2.0
import Qt3D.Extras 2.0
import QtSensors 5.9

App {
  // Set screen to portrait in live client app (not needed for normal deployment)
  onInitTheme: nativeUtils.preferredScreenOrientation = NativeUtils.ScreenOrientationPortrait
          
  RotationSensor {
    id: sensor
    active: true
    // We copy reading to custom property to use behavior on it
    property real readingX: reading ? reading.x : 0
    property real readingY: reading ? reading.y : 0
    // We animate property changes for smoother movement of the cube
    Behavior on readingX {NumberAnimation{duration: 200}}
    Behavior on readingY {NumberAnimation{duration: 200}}
  }
  
  NavigationStack {
    Page {
      title: "3D Cube on Page"
      backgroundColor: Theme.secondaryBackgroundColor
      
      Column {
        padding: dp(15)
        spacing: dp(5)
        
        AppText {
          text: "x-axis " + sensor.readingX.toFixed(2)
        }
        AppText {
          text: "y-axis " + sensor.readingY.toFixed(2)
        }
      }
      
      // 3d object on top of camera
      Scene3D {
        id: scene3d
        anchors.fill: parent
        focus: true
        aspects: ["input", "logic"]
        cameraAspectRatioMode: Scene3D.AutomaticAspectRatio
        
        Entity {
          
          // The camera for the 3d world, to view our cube
          Camera {
            id: camera3D
            projectionType: CameraLens.PerspectiveProjection
            fieldOfView: 45
            nearPlane : 0.1
            farPlane : 1000.0
            position: Qt.vector3d( 0.0, 0.0, 40.0 )
            upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
            viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
          }
          
          components: [
            RenderSettings {
              activeFrameGraph: ForwardRenderer {
                camera: camera3D
                clearColor: "transparent"
              }
            },
            InputSettings { }
          ]
          
          PhongMaterial {
            id: material
            ambient: Theme.tintColor // Also available are diffuse, specular + shininess to control lighting behavior
          }
          
          // The 3d mesh for the cube
          CuboidMesh {
            id: cubeMesh
            xExtent: 8
            yExtent: 8
            zExtent: 8
          }
          
          // Transform (rotate) the cube depending on sensor reading
          Transform {
            id: cubeTransform
            // Create the rotation quaternion from the sensor reading
            rotation: fromAxesAndAngles(Qt.vector3d(1,0,0), sensor.readingX*2, Qt.vector3d(0,1,0), sensor.readingY*2)
          }
          
          // The actual 3d cube that consists of a mesh, a material and a transform component
          Entity {
            id: cubeEntity
            components: [ cubeMesh, material, cubeTransform ]
          }
        }
      } // Scene3D
      
      // Color selection row
      Row {
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.bottom: parent.bottom
        spacing: dp(5)
        padding: dp(15)
        
        Repeater {
          model: [Theme.tintColor, "red", "green", "#FFFF9500"]

          Rectangle {
            color: modelData
            width: dp(48)
            height: dp(48)
            radius: dp(5)
            
            MouseArea {
              anchors.fill: parent
              onClicked: {
                material.ambient = modelData
              }
            }
          }
        }
      }
    } // Page
  } // NavigationStack
} // App

Add Turn-by-Turn Navigation with Qt Location

You can use many new features of Qt Location and Maps. With this release you can start experimenting with turn-by-turn navigation. There are also several brand new features available for the Mapbox plugin.

Fixes for Qt Quick Controls

Many controls of the Qt Quick Controls 2 module received fixes, which are also available with the derived V-Play controls. Examples of improved components are ButtonGroup, CheckBox, ComboBox, RangeSlider, ScrollBar, Slider, SpinBox and many more.

Qt for WebAssembly and Python

With Qt for WebAssembly, Qt is working towards filling the last large gaps in cross-platform development, allowing users to target the web and browsers as a platform. The first version has been released as a technology preview.

In addition to the above, Qt is actively working on Qt for Python.

Create Augmented Reality Apps and Games with Wikitude

As mentioned already in a previous release, you can now create feature-rich Augmented Reality (AR) apps & games with the Wikitude Plugin. You will read more on this amazing addition in another blog post coming soon. Stay tuned!

More Features, Improvements and Fixes

Here is a compressed list of improvements with this update:

For a list of additional fixes, please check out the changelog.

How to Update V-Play

Test out these new features by following these steps:

  • Open the V-Play SDK Maintenance Tool in your V-Play SDK directory.
  • Choose “Update components” and finish the update process to get this release as described in the V-Play Update Guide.

V-Play Update in Maintenance Tool

If you haven’t installed V-Play yet, you can do so now with the latest installer from here. Now you can explore all of the new features included in this release!

For a full list of improvements and fixes to V-Play in this update, please check out the change log!


The post Release 2.18.0: Update to Qt 5.11.1 with QML Compiler and Massive Performance Improvements appeared first on V-Play Engine.

GammaRay 2.9.1 Release

We have released version 2.9.1 of our Qt application introspection tool GammaRay. Besides important improvements for use in Android APK bundles, this release fixes a number of corner cases in the Qt Quick remote view, including crashes and corrupt view content when encountering certain non-integer high-DPI scaling factors. Problems with activating the Qt 3D inspector when attaching to a running target have also been addressed, as well as build issues with pre-release versions of Qt 5.12.

Next to maintaining the 2.9 series we are also working on some new features for the upcoming 2.10 release of course, such as a new unified system information view and a QtPositioning inspector allowing interactive location overrides and NMEA log replays.

If GammaRay helps you, please consider helping us to focus on the tools and platforms important to you by contributing usage statistics (see Help > Contribute… in the GammaRay client).

Find out more…

Download GammaRay here


The post GammaRay 2.9.1 Release appeared first on KDAB.

KDAB Training at Qt World Summit, Boston

On Monday, October 29th, as part of Qt World Summit, Boston, KDAB is offering five one-day courses: two Introductory and three Advanced. The course descriptions explain what that means in the context of each course.

All of KDAB's trainers are experts with current working knowledge from diverse projects, so this is a rare opportunity to get a rapid boost to your skillset before the conference and exhibition on Tuesday the 30th. And you can meet our trainers again at KDAB's stand.

KDAB is proud to be a main sponsor of Qt World Summit Boston and we look forward to seeing you there.

KDAB Training in Boston

Introductory

Effective 3D in Qt with Mike Krus

Multithreading in Qt with Jim Albamont

Advanced

Profiling and Debugging for Linux with David Faure

QML Applications Architecture with André Somers

What’s new in C++17? with Giuseppe D’Angelo


All KDAB courses at Qt World Summit are one day.


The post KDAB Training at Qt World Summit, Boston appeared first on KDAB.

KDAB at CppCon, Sept 23-29, 2018

KDAB is once again proud to be sponsoring CppCon, the annual, week-long gathering, organized by the C++ community for the C++ community.

Kicking off with a keynote from Bjarne Stroustrup, and followed by a week of presentations, panels, lightning talks, social sessions and much more, CppCon is a rare treat in the C++ calendar for those lucky enough to be able to make it to Bellevue, Washington this fall.

To round off the week there’s a great opportunity to learn from one of the stars in the Qt / KDE community, KDAB’s David Faure. He’s offering a 2-day training on Debugging and Profiling C++ Code on Linux that will get you up-to-speed on finding and fixing bugs, and knowing which tool you need for the job.

  • Is the time you spend writing C++ code dwarfed by the time spent looking for bugs and improving performance?
  • Wouldn’t it be great to get an expert introduction to the various tools that can help you use that time more effectively?

The training is focused on native applications written in C++ under Linux, and is applicable to both desktop and embedded developers.

Find out more and sign up…

The post KDAB at CppCon, Sept 23-29, 2018 appeared first on KDAB.

KDAB at SIGGRAPH 2018

It’s almost upon us – August 12-16, in Vancouver: SIGGRAPH, the hottest show in town, and the 45th SIGGRAPH Conference on Computer Graphics and Interactive Techniques.

This year KDAB is sharing a Qt booth with The Qt Company. You can find us at Stand 1333, not far from the Attendee Lounge and opposite The Experience Hall.

We’ll be talking about how we help build automation, graphical tools and pipelines with Qt and Python, in both the games and VFX worlds, and our ability to meet high-end 3D needs in industrial, medical and automotive with Qt 3D.

We'll also be sure to mention our introductory and advanced training on C++, OpenGL and Qt, delivered by experts with current working knowledge from diverse projects.

And you can see our latest Tron Frame Graph demo, with the following features:

Tron Frame Graph

  • Data-driven configuration of the Qt 3D renderer
  • Complex 3D frame graph with visual effects: bloom, reflection, glow
  • Simulation data coming from either C++ or Python backend
  • Live, interactive control over the appearance
  • KDAB drives the Qt 3D development


On August 14th, Michel Nederlof of Quantitative Image Systems will be talking about The Intersection of Graphics and Medicine at Siggraph NEXT: Connections. Meet Michel at our booth, where we'll be showing an update to our phenomenal Qi Tissue demo, which shows what you can do with microscopic data from tissue samples:

Qi – Cellular Tissue Imaging in Qt 3D

  • 3D visualization of microscopy of tissue samples
  • 2D images from cutting edge clinical diagnostics data sets
  • Real-time conversion to 3D from 30 image channels using Qt 3D
#color4cancer

A novel attraction will be the Interactive Nanoquill Wall at our stand, where attendees can take a break and join in the fun of creating art that helps cancer researchers better determine the nature of rogue cells, thus making it possible to target treatment individually and much more effectively.

KDAB has sponsored the Nanoquill #color4cancer project from the beginning and helped create the software that Quantitative Imaging Systems uses to further their research. You can find out more about The Nanoquill Project, buy the book and download the images here.

Meet us at SIGGRAPH

The post KDAB at SIGGRAPH 2018 appeared first on KDAB.

qbs 1.12 released

We are happy to announce version 1.12.0 of the Qbs build tool.

What’s new?

Generating Interfaces for Qbs and pkg-config

When distributing software components such as libraries, you’d like to make it as simple as possible for other projects to make use of them. To this end, we have added two new modules: The Exporter.qbs module creates a Qbs module from a product, while the Exporter.pkgconfig module generates a .pc file.

For example:

DynamicLibrary {
    name: "mylib"
    version: "1.0"
    Depends { name: "cpp" }
    Depends { name: "Exporter.qbs" }
    Depends { name: "Exporter.pkgconfig" }
    files: "mylib.cpp"
    Group { 
        fileTagsFilter: "dynamiclibrary"
        qbs.install: true
        qbs.installDir: "lib"
    }
    Group { 
        fileTagsFilter: "Exporter.qbs.module"
        qbs.installDir: "lib/qbs/modules/mylib" 
    }
    Group {
        fileTagsFilter: "Exporter.pkgconfig.pc"
        qbs.install: true
        qbs.installDir: "lib/pkgconfig"
    }
}

When building this project, a Qbs module file mylib.qbs and a pkg-config file mylib.pc will be generated. They contain the information that is necessary to build against this library using the respective tools. The mylib.qbs file might look like this (the concrete content depends on the target platform):

Module {
    Group {
        filesAreTargets: true
        fileTags: "dynamiclibrary"
        files: "../../../libmylib.so.1.0.0"
    }
}

As you can see, the library file is specified using relative paths in order to make the installation relocatable.

Now anyone who wants to make use of the mylib library in their Qbs project can simply do so by declaring a dependency on it: Depends { name: "mylib" }.

System-level Settings

Until now, Qbs settings were always per-user. However, some settings should be shared between all users, for instance global search paths. Therefore, Qbs now supports system-level settings as well. These are considered in addition to the user-level ones, which take precedence in the case of conflicts. System-level settings can be written using the new --system option of the qbs-config tool. This operation usually requires administrator rights.
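As a sketch, assuming qbs-config takes key/value pairs for system-level settings the same way it does for user-level ones (the key name and path below are illustrative):

```shell
# write a system-level setting; typically requires administrator rights
sudo qbs-config --system preferences.qbsSearchPaths /usr/local/share/custom-qbs-modules

# user-level settings are written as before and take precedence
# over system-level ones in the case of conflicts
qbs-config preferences.qbsSearchPaths ~/qbs-modules
```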

Language Improvements

We have added a new property type varList for lists of objects. You could already have those by using var properties, but the new type has proper list semantics, that is, values from different modules accumulate.
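A minimal sketch of such a declaration (the module and property names are made up): with varList, entries contributed to the same property by different modules are concatenated into one list instead of overwriting each other, which a plain var property would do:

```qbs
Module {
    // varList has proper list semantics: values from different
    // modules accumulate rather than replace each other
    property varList extraDeployFiles: [
        { source: "data/config.json", target: "etc" }
    ]
}
```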

The FileInfo extension has two new functions suffix and completeSuffix.

Two changes have been made to the Rule item:

C/C++ Support

The cLanguageVersion and cxxLanguageVersion properties are now arrays. If they contain more than one value, then the one corresponding to the highest version of the respective language standard is chosen. This allows different modules to declare different minimum requirements.
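For example, a product whose modules request different standards might end up with a combined setting like this (a sketch; qbs uses the highest version listed):

```qbs
Product {
    Depends { name: "cpp" }
    // one module needs at least C++11, another at least C++14;
    // the effective language version is the highest one, C++14
    cpp.cxxLanguageVersion: ["c++11", "c++14"]
}
```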

Autotest Support

The AutotestRunner item has a new property auxiliaryInputs that can help ensure that additional resources needed for autotest execution (such as helper applications) are built before the autotests run.

The working directory of an autotest is now the directory in which the respective test executable is located or AutotestRunner.workingDirectory, if it is specified. In the future, it will also be possible to set this directory per test executable.

Various things

All command descriptions now list the product name to which the generated artifact belongs. This is particularly helpful for larger projects where several products contain files of the same name, or even use the same source file.

The vcs module no longer requires a repository to create the header file. If the project is not in a repository, then the VCS_REPO_STATE macro will evaluate to a placeholder string.

It is now possible to generate Makefiles from Qbs projects. While it is unlikely that complex Qbs projects are completely representable in the Makefile format, this feature might still be helpful for debugging purposes.

Try It!

The Open Source version is available on the download page, and you can find commercially licensed packages on the Qt Account Portal. Please post issues in our bug tracker. You can also find us on IRC in #qbs on chat.freenode.net, and on the mailing list. The documentation and wiki are also good places to get started.

Qbs is also available on a number of packaging systems (Chocolatey, MacPorts, Homebrew) and updated on each release by the Qbs development team. It can also be installed through the native package management system on a number of Linux distributions including but not limited to Debian, Ubuntu, Fedora, and Arch Linux.

Qbs 1.12.0 is also included in Qt Creator 4.7.0, which was released this week as well.

The post qbs 1.12 released appeared first on Qt Blog.

How to Expose a Qt C++ Class with Signals and Slots to QML

Application Development with QML is simple and powerful. But Qt C++ can be more performant, offers many features and is less error-prone. This post shows you how to create apps that take advantage of both languages.

How to Communicate between C++ and QML

It is important to choose the right language for different features of your app. Integrate C++ components with QML to take your mobile app development to the next level.

Advantages of Coding in QML

V-Play Engine for Qt-based mobile apps and games uses the power of Qt Quick (QML + JavaScript). This declarative scripting language is so powerful that it can save up to 60% of the lines of code compared to other programming languages.

Coding in QML has several advantages over development with C++:

  • Coding with QML + JavaScript is very easy to learn and lets you greatly reduce the required amount of code.
  • Language concepts like states, signals or property bindings are a huge time-saver.
  • QML makes adding animations simple. You can animate every property of your QML types with simple Animation components.
  • QML is extensible and flexible. For example, you can extend objects with new properties and features in-line. No need to create a new re-usable type for small extensions.
  • The QML Rendering Engine offers great performance. The renderer uses C++ Qt and relies on a hardware-accelerated scene graph. This makes it fast enough to power even high-performance games.

When to use C++ Instead

Qt app development with C++ has advantages as well. For some scenarios you need features that are only available with Qt C++. Also, C++ is fast and type-safe. This provides the best possible performance for long-running and data-intensive calculations.

For these examples, you would choose C++ over QML:

  • Native C++ code is the right choice for data-intensive operations. It will outperform interpreted QML/JavaScript code.
  • C++ code is type-safe and compiled into object code. For parts where stability and security are important, using C++ helps to make your app less error-prone.
  • The Qt C++ components offer different and in some cases more features than the QML types. For example, advanced networking features.
  • It is also possible to mix C++ with native code for Android (over JNI) or iOS (Obj-C or Swift). This makes it possible to provide such native functionality to QML as well.

V-Play Engine extends Qt for mobile app and game development. It already covers tasks like accessing native device features – so you don’t have to worry about going deep into C++ or Java and Obj-C.

Still, to get the most out of your application you can use the advantages of both languages. The full example of this guide is also available on GitHub:

How to Access a C++ Object from QML

Before we go into any details, let us start by creating a simple V-Play Apps project with Qt Creator. If you are new to V-Play and don’t know how, please consider having a look at the Getting Started Tutorial or the V-Play Designer Tutorial Video.

To sign-up and install V-Play, see the download page of the V-Play website.

Note: Adding custom C++ code is not supported when testing with QML Live. Please build your project with the classic RUN button to test the examples below.

Create a C++ Class in your V-Play Project

1. After creating a new app project, first replace the code in Main.qml with this basic structure:

import VPlayApps 1.0
import QtQuick 2.5

App {

 NavigationStack {
   Page {
     title: "Integrate C++ and QML"
   }
 }
}

It only includes the main App window and a Page within NavigationStack to show a navigation bar that holds the page title:

V-Play App with a Page

2. This is enough for our basic QML setup. Let’s go on by creating a new C++ class. First, right-click the C++ “Sources” folder of your project in Qt Creator, select “Add New…” and choose the “C++ Class” template in the C++ section:

Add a new C++ Class

3. Then set “MyGlobalObject” as Class Name and select “Include QObject” to include the QObject type, as the main requirement to prepare our class for usage with QML is to derive from QObject.

Derive C++ class from QObject

After completing the wizard, your project contains the class definition myglobalobject.h in the “Headers” folder and the implementation myglobalobject.cpp in the “Sources” folder of the project.

Qt Creator C++ type header and source files

Note that the *.pro configuration now also includes the new files in the HEADERS and SOURCES configuration.

Implement the C++ Class with Signals and Slots for Usage with QML

1. Open myglobalobject.h and add some code to derive from QObject – the required include statement is already in place:

#ifndef MYGLOBALOBJECT_H
#define MYGLOBALOBJECT_H

#include <QObject>

class MyGlobalObject : public QObject
{
 Q_OBJECT

public:
 MyGlobalObject();
};

#endif // MYGLOBALOBJECT_H

Do not forget to also add the Q_OBJECT preprocessor macro within the class definition.

2. Now that we have a new QObject, let’s add a simple method we will later call from QML. To make the method available in QML, it is required to mark it as a public slot:

class MyGlobalObject : public QObject
{
 Q_OBJECT

public:
 MyGlobalObject();

public slots: // slots are public methods available in QML
 void doSomething(const QString &text);
};

3. To complete our basic class, open myglobalobject.cpp and add the method implementation for doSomething(). We keep it simple and only print the given text to the debug output.

#include "myglobalobject.h"
#include <QDebug>

MyGlobalObject::MyGlobalObject()
{
 // perform custom initialization steps here
}

void MyGlobalObject::doSomething(const QString &text) {
 qDebug() << "MyGlobalObject doSomething called with" << text;
}

Expose an Object to QML as a Context Property

One possible way to work with a C++ object in QML is to add the object as a property to the root context of the QML tree. You can decide on a name for the property, which is then globally available in your QML code.

1. To create a new object of our class and add it as a property, we extend the main.cpp code:

// keep existing includes here
// include qml context, required to add a context property
#include <QQmlContext>

// include custom class
#include "myglobalobject.h"

int main(int argc, char *argv[])
{
 // V-Play initialization ...

 // add global c++ object to the QML context as a property
 MyGlobalObject* myGlobal = new MyGlobalObject();
 myGlobal->doSomething("TEXT FROM C++");
 engine.rootContext()->setContextProperty("myGlobalObject", myGlobal); // the object will be available in QML with name "myGlobalObject"

 engine.load(QUrl(vplay.mainQmlFileName()));
 return app.exec();
}

Note: The object remains fully usable from C++ as well. The above code example already includes a test call to our doSomething() method.

2. In the Main.qml of our project, we extend our Page with a Column and a first AppButton, which calls the doSomething() method when clicked:

   Page {
     title: "Integrate C++ and QML"

     // Example 1 - Global Context Property
     // NOTE: myGlobalObject is available here because it is set as a context property in main.cpp
     Column {

       // 1.1: Calling myGlobalObject.doSomething() function
       AppButton {
         text: "myGlobalObject.doSomething()"
         onClicked: myGlobalObject.doSomething("TEXT FROM QML")
       }

     }
   }

Button to call the C++ function

3. Let’s hit the green run button in Qt Creator to see how it works. The debug output shows the initial method call from main.cpp and with a click on the button another message appears:
MyGlobalObject doSomething called with “TEXT FROM QML”

Qt Creator C++ function log from QML

That’s all we need to call methods of a C++ Object from QML. This already allows simple communication from QML to C++, but there’s even more we can do. QML supports many amazing concepts like value-changed listeners of properties and property bindings, which make development a lot easier. So let’s add a full-featured QML property to our C++ class!

Add a Class Property with Full QML Support

1. Open myglobalobject.h and add a private counter property with public getter and setter methods.

class MyGlobalObject : public QObject
{
// …


public:
 int counter() const;
 void setCounter(int value);

private:
 int m_counter;
};

2. Implement the required methods and initialize the counter property in myglobalobject.cpp:

MyGlobalObject::MyGlobalObject() : m_counter(0)
{
 // perform custom initialization steps here
}

int MyGlobalObject::counter() const {
 return m_counter;
}

void MyGlobalObject::setCounter(int value) {
 if(m_counter != value) {
   m_counter = value;
 }
}

3. Similar to other properties in QML, we also want to be able to react dynamically to property changes in our QML code. In other words, we want to trigger functions in QML when the C++ property changes. Unlike slots, which make C++ methods callable from QML, signals can be used to trigger QML code from C++. So the data flow looks like this:

C++ and QML data flow with signals and slots

Let’s add a signal counterChanged and trigger it in our setCounter implementation:

myglobalobject.h:

class MyGlobalObject : public QObject
{
//  ...

signals:
 void counterChanged();
};

myglobalobject.cpp:

void MyGlobalObject::setCounter(int value) {
 if(m_counter != value) {
   m_counter = value;
   emit counterChanged(); // trigger signal of counter change
 }
}

4. This simple change already allows us to add handler functions for the counterChanged() signal in QML. However, our counter property is still a normal C++ property with a getter and setter method. We can take care of that with an additional preprocessor macro:

class MyGlobalObject : public QObject
{
 Q_OBJECT
 Q_PROPERTY(int counter READ counter WRITE setCounter NOTIFY counterChanged) // this makes counter available as a QML property

// ...
};

The Q_PROPERTY macro defines a property counter and configures the methods for reading and writing the property, as well as the signal that notifies property changes. This configuration is used by QML to work with the property.

5. Let’s extend our Main.qml and use our new counter property. The following snippet adds a new button to increase the counter and a text item to display the value:

     Column {

       // ...

       // 1.2: Increasing myGlobalObject.counter property
       // NOTE: the defined setter function of the property is used automatically and triggers the counterChanged signal
       AppButton {
         text: "myGlobalObject.counter + 1"
         onClicked: {
           myGlobalObject.counter = myGlobalObject.counter + 1
         }
       }

       // 1.3: Showing myGlobalObject counter value in a QML text
       // NOTE: property bindings are supported, as the counter property definition includes the counterChanged signal, which is fired in the implementation of MyGlobalObject::setCounter() for each property change
       AppText {
         text: "Global Context Property Counter: " + myGlobalObject.counter
       }
     } // Example 1

Our property is usable like any other property in QML. Thanks to the counterChanged signal we prepared, the text even updates automatically every time the counter changes.

This is what the final example looks like:

Access C++ class property from QML

How to Register your C++ Class as a QML Type

The second possibility to use C++ components in QML is to register the class as a QML type. This allows you to create objects (that is, instances) of the type directly in QML instead of C++. Best of all, the concepts of signals, slots and properties from the previous example still apply.

When to Use a Context Property and When a QML Type

If there is only a single object instance you want to work with in QML, you can add the object as a context property. When there can be multiple instances of your class, register it as a QML type and create the objects directly in QML where you need them.

1. For this example, we will create a new type we can use in QML. Let’s start by adding a new C++ class named MyQMLType.

Create a QML type with C++

2. Replace the code in myqmltype.h with this implementation:

#ifndef MYQMLTYPE_H
#define MYQMLTYPE_H

#include <QObject>

class MyQMLType : public QObject
{
 Q_OBJECT
 Q_PROPERTY(QString message READ message WRITE setMessage NOTIFY messageChanged) // this makes message available as a QML property

public:
 MyQMLType();

public slots: // slots are public methods available in QML
 int increment(int value);

signals:
 void messageChanged();

public:
 QString message() const;
 void setMessage(const QString& value);

private:
 QString m_message;

};

#endif // MYQMLTYPE_H

Similar to the previous example, this type will have one public slot and a full-featured property with a getter method, a setter method and a property changed signal. The increment method increases a given integer value by one and the message property will store a string value.

3. To complete the class, add the following code for myqmltype.cpp:

#include "myqmltype.h"

MyQMLType::MyQMLType() : m_message("")
{

}

int MyQMLType::increment(int value) {
 return value + 1;
}

QString MyQMLType::message() const {
 return m_message;
}

void MyQMLType::setMessage(const QString& value) {
 if(m_message != value) {
   m_message = value;
   emit messageChanged(); // trigger signal of property change
 }
}

Which Parameters Can You Pass Between C++ and QML?

In contrast to the previous example, our new class also uses a return value for the increment slot. No further adjustments are required to receive the return value in QML. Qt automatically maps basic C++ types to QML types for all method parameters and return values.

For more information about available Qt types and corresponding QML types, please see Data Type Conversion Between QML and C++.

Register and Use your C++ QML Type

1. In your main.cpp, first add an include statement for the new class:

#include "myqmltype.h"

2. Then use qmlRegisterType to add the class as a QML Type.

int main(int argc, char *argv[])
{
 // ...

 // register a QML type made with C++
 qmlRegisterType<MyQMLType>("com.yourcompany.xyz", 1, 0, "MyQMLType"); // MyQMLType will be usable with: import com.yourcompany.xyz 1.0

 engine.load(QUrl(vplay.mainQmlFileName()));
 return app.exec();
}

The method takes several parameters: The module identifier and version define the required QML import to use the type. The last parameter holds the name of the QML type, which can be different from the actual C++ class name.

3. Add the import which matches the used configuration of qmlRegisterType to your Main.qml:

// NOTE: the import identifier, version and QML type name are set in main.cpp at qmlRegisterType(...)
import com.yourcompany.xyz 1.0

4. For an example usage of our new QML Type, add the following snippet below the first example:

   Page {
     title: "Integrate C++ and QML"

     Column {
       // ...

       // Example 2: Custom QML Type implemented with C++
       // NOTE: This type is registered in main.cpp and available after using "import com.yourcompany.xyz 1.0"
       MyQMLType {
         id: typeFromCpp

         // 2.1: Property Binding for MyQMLType::message property
         // NOTE: Similar to types created purely with QML, you may use property bindings to keep your property values updated
         message: "counter / 2 = " + Math.floor(myGlobalObject.counter / 2)

         // 2.2: Reacting to property changes
         // NOTE: With the onMessageChanged signal handler, you can add code to handle property changes
         onMessageChanged: console.log("typeFromCpp message changed to '" + typeFromCpp.message + "'")

         // 2.3: Run code at creation of the QML component
         // NOTE: The Component.onCompleted signal is available for every QML item, even for items defined with C++.
         // The signal is fired when the QML engine creates the item at runtime.
         Component.onCompleted: myGlobalObject.counter = typeFromCpp.increment(myGlobalObject.counter)
       }

       // 2.1: Show typeFromCpp.message value, which is calculated automatically based on the myGlobalObject.counter value
       AppText {
         text: "Custom QML Type Message:\n" + typeFromCpp.message
       }
     }
   }

The code shows that we can now use MyQMLType like any other QML item. The message property is initialized inline with a property binding that shows the integer result of dividing myGlobalObject.counter by two. Whenever the counter changes, this expression is re-evaluated automatically.

In addition, whenever the message changes in turn (every two counter steps), the onMessageChanged handler displays the new message in the log output.

Similar to other QML Items, the Component.onCompleted signal is available to perform initialization steps when the QML engine creates the object. In this example, we use the increment slot to increase the counter by 1.

The AppText at the bottom simply displays the message property:

Use the QML type created with C++

 

Use a Property, Signal or Slot?

As we’ve already seen in the previous examples, properties, signals and slots offer different types of communication between C++ and QML:

  • Slots allow communication from QML to C++: Slots are used to trigger C++ code from QML. You can use parameters and return values to pass data to and from C++.
  • Signals allow communication from C++ to QML: Signals are used to run QML code when certain events occur in C++. You can pass parameters from C++ to QML; however, you cannot return data from QML.
    In contrast to slots, signals may be handled by zero, one or many components. There is no guarantee that triggering a signal in C++ will actually run QML code, unless a handler is defined.

Properties work both ways: Properties are readable and writable from both C++ and QML. To support property bindings in QML, make sure to add a changed signal for the property, and do not forget to emit the signal in C++ whenever the value changes.

C++ and QML data flow with properties, signals or slots
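The “zero, one or many handlers” fan-out of signals can be sketched in a few lines of plain C++ (a deliberately simplified, hypothetical stand-in for Qt’s signal/slot machinery, not how Qt actually implements it):

```cpp
#include <cassert>
#include <functional>
#include <utility>
#include <vector>

// Minimal signal stand-in: a list of handlers. Emitting calls every
// registered handler in order; with zero handlers the emit is simply a
// no-op, mirroring the note above that triggering a signal runs no QML
// code unless a handler is defined.
class CounterSignal {
public:
    void connect(std::function<void(int)> handler) {
        m_handlers.push_back(std::move(handler));
    }

    void emitChanged(int value) {
        for (const auto &handler : m_handlers)
            handler(value); // each connected handler sees the new value
    }

private:
    std::vector<std::function<void(int)>> m_handlers;
};
```

A caller can connect any number of handlers before emitting, which is exactly the one-to-many relationship that distinguishes signals from slot calls.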

How to Start Long-running C++ Operations from QML

The above example already fully covers slots and properties, but only uses a signal as part of the property configuration. To complete the example, let’s add a new slot startCppTask(), a new method doCppTask() and a new signal cppTaskFinished() to myqmltype.h:

public slots: 
 int increment(int value);
 void startCppTask(); // starts internal calculations of doCppTask()

signals:
 void messageChanged();
 void cppTaskFinished(); // triggered after calculations in doCppTask()

public:
 QString message() const;
 void setMessage(const QString& value);

private:
 void doCppTask(); // method for internal calculations
 QString m_message;

We will later call the slot startCppTask() from QML, which executes the internal doCppTask() method. At this point you can, for example, run the calculations in another thread to avoid blocking the QML UI while performing the task. This is useful for any CPU-intensive or long-running operation you want to handle in C++. By adding the implementation of the methods to myqmltype.cpp, we are finished with the C++ part.

void MyQMLType::startCppTask() {
 this->doCppTask();
}

void MyQMLType::doCppTask() {
 // NOTE: you can run the calculations in another thread here, e.g. to perform
 // CPU-intensive operations for AI (artificial intelligence), machine learning or similar purposes.
 // When the work is done, we trigger the cppTaskFinished signal and can react anywhere in C++ or QML
 emit cppTaskFinished();
}
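The comment above suggests moving the heavy work to another thread. As a plain C++ sketch of that pattern (no Qt here; in a real Qt app you would typically use QtConcurrent::run or a QThread and emit cppTaskFinished() when done), std::async hands the work to a worker thread and delivers the result later:

```cpp
#include <cassert>
#include <future>

// Hypothetical stand-in for the CPU-intensive work inside doCppTask():
// here it just sums the integers 0..n-1.
long long expensiveTask(int n) {
    long long sum = 0;
    for (int i = 0; i < n; ++i)
        sum += i;
    return sum;
}

// Launch the task on a worker thread so the caller (the UI thread in a
// real app) is not blocked while the task runs. The returned future
// plays the role of the cppTaskFinished() signal: it tells us when the
// result is ready.
std::future<long long> startTaskAsync(int n) {
    return std::async(std::launch::async, expensiveTask, n);
}
```

Calling `startTaskAsync(n)` returns immediately; `.get()` on the future blocks only at the point where the result is actually needed. In the Qt version you would instead emit the signal from the worker and handle it in QML.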

Now that everything is prepared, we can add another AppButton that starts our C++ task:

// 2.4: Button to start cpp task
AppButton {
  text: "typeFromCpp.startCppTask()"
  onClicked: {
      typeFromCpp.startCppTask()
  }
}

The onCppTaskFinished handler is called when the C++ part has finished its calculations:

MyQMLType {
  // ...

  // 2.4: Handling a custom signal
  onCppTaskFinished: {
    myGlobalObject.counter = 0 // reset counter to zero, this will also update the message
  }
}

In this example, we simply reset our global counter to zero when the signal fires, which will also update the message property of MyQMLType.

This is what the final example looks like after executing the C++ task:

Run asynchronous C++ task with QML

Note: To handle custom signals in QML when using a context property, use the Connections QML Type. The following snippet adds a handler to the counterChanged() signal of myGlobalObject:

// 2.5: Connections allow to add signal handlers for global context property objects
Connections {
    target: myGlobalObject
    onCounterChanged: console.log("Counter changed to " + myGlobalObject.counter)
}

When to Derive from QQuickItem instead of QObject

In all of the examples, we created a C++ class that extends QObject. However, QObjects have one important limitation: they have no visual representation. This means they cannot hold visual child items, and properties for visual features like size, position and visibility are not available.

A QObject only holds data and logic that you can use in QML via properties, signals and slots. Keep this restriction in mind when registering a QObject class as a QML type. To create a QML item with C++ that supports a visual representation with all default properties, derive from QQuickItem instead.

As this short introduction does not cover implementing QQuickItems, please see the Qt documentation for more information. The overview page about Integrating QML and C++ can be found here.

The full source code of the project created in this guide can be found on GitHub:

 

 


The post How to Expose a Qt C++ Class with Signals and Slots to QML appeared first on V-Play Engine.

Qt Creator 4.7.0 released

We are happy to announce the release of Qt Creator 4.7.0!

C++ Support

We decided that it is time to turn the Clang code model on by default. It made huge progress during the last releases, and at some point we need to do this switch. The built-in model cannot keep up with the developments in the C++ language, nor with the development of the available tooling around it. We nowadays regularly close bug reports with the comment “works with Clang code model”. Also, the Clang code model provides much better information about issues in code without going through the edit-compile-analyze cycle explicitly. Please also have a look at Nikolai’s blog post on the Clang code model and the history of C/C++ support in Qt Creator.

There can be situations where the built-in model still works better for you than the Clang code model, and you continue to have the option to use it instead, by disabling the ClangCodeModel plugin. The global symbol index is also still created with the built-in model.

Project wide diagnostics and fixits in Qt Creator by clang-tidy and clazy

We upgraded the Clang code model to Clang 6.0. It now provides the information for the overview of the current document, which is used for the symbols dropdown, outline pane and “.” locator filter. You also have more freedom in deciding which Clang-Tidy and Clazy checks you want to run while editing, and have the option to run checks over your whole code base through a new tool in Debug mode (Analyze > Clang-Tidy and Clazy). The warnings and errors from the code model are now also optionally shown in the Issues pane.

Test Integration

If your text cursor in the C++ editor is currently inside a test function, you can directly run that individual test with the new Run Test Under Cursor action. The test integration now also marks the location of failed tests in the editor. For Google Test we added support for filtering.

Windows Hosts

On Windows we improved the scanning for MSVC compilers, which previously could block Qt Creator. We also fixed an issue which could lead to short term freezes while Qt Creator was listening to the global, shared Windows debug stream. And saving files on network drives should work again in all configurations.

Other Improvements

The kit options are one of the most important settings that you might need to adapt for your projects in Qt Creator. So we put them in their own top-level entry in the preferences dialog, which is also the very first one in the list.

If you have a HiDPI screen on Windows or Linux, you can now easily choose if you want Qt’s automatic scaling or not, by enabling or disabling the new option in Environment > Interface.

The File System view got new options for showing folders on top as opposed to integrated into the alphabetic sorting, and for turning off the synchronization of the base folder with the current document’s project. You can also create new folders directly in the File System view now.

There have been many more improvements and fixes. Please refer to our changes file for a more comprehensive list.

Get Qt Creator 4.7.0

The opensource version is available on the Qt download page, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.7.0 is also available through an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.7.0 released appeared first on Qt Blog.

Release 2.17.1: Use Qt 3D with Live Reloading and Test Plugin Code Examples from Browser

V-Play 2.17.1 adds a long list of improvements and fixes. You can now also use 3D components with live code reloading in your apps and games. The plugin documentation now includes the example run button. Use it to test code examples for ads, Firebase and more from the browser on your mobile device. You can also learn how to make custom list delegates with 2 new examples in the documentation.

Use Qt 3D in Your Apps and Games, with Live Code Reloading

V-Play and Qt make it easy to add 3D content to your apps or 2D games. With the QML 3D modules, you can embed 3D objects anywhere within your app. This feature is now also available with the Live Clients on desktop, iOS and Android.

Here is a small code example for you to try right away. It displays a 3D cube on your page. The cube rotates depending on the device rotation, using the RotationSensor. You can also change the color of the cube. All that in about 130 lines of code; without empty lines and comments, it’s about 100 lines.

3D cube with rotation sensor and color selection

import VPlayApps 1.0
import QtQuick 2.9
// 3d imports
import QtQuick.Scene3D 2.0
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import Qt3D.Input 2.0
import Qt3D.Extras 2.0
import QtSensors 5.9

App {
  // Set screen to portrait in live client app (not needed for normal deployment)
  onInitTheme: nativeUtils.preferredScreenOrientation = NativeUtils.ScreenOrientationPortrait
          
  RotationSensor {
    id: sensor
    active: true
    // We copy reading to custom property to use behavior on it
    property real readingX: reading ? reading.x : 0
    property real readingY: reading ? reading.y : 0
    // We animate property changes for smoother movement of the cube
    Behavior on readingX {NumberAnimation{duration: 200}}
    Behavior on readingY {NumberAnimation{duration: 200}}
  }
  
  NavigationStack {
    Page {
      title: "3D Cube on Page"
      backgroundColor: Theme.secondaryBackgroundColor
      
      Column {
        padding: dp(15)
        spacing: dp(5)
        
        AppText {
          text: "x-axis " + sensor.readingX.toFixed(2)
        }
        AppText {
          text: "y-axis " + sensor.readingY.toFixed(2)
        }
      }
      
      // 3d object on top of camera
      Scene3D {
        id: scene3d
        anchors.fill: parent
        focus: true
        aspects: ["input", "logic"]
        cameraAspectRatioMode: Scene3D.AutomaticAspectRatio
        
        Entity {
          
          // The camera for the 3d world, to view our cube
          Camera {
            id: camera3D
            projectionType: CameraLens.PerspectiveProjection
            fieldOfView: 45
            nearPlane : 0.1
            farPlane : 1000.0
            position: Qt.vector3d( 0.0, 0.0, 40.0 )
            upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
            viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
          }
          
          components: [
            RenderSettings {
              activeFrameGraph: ForwardRenderer {
                camera: camera3D
                clearColor: "transparent"
              }
            },
            InputSettings { }
          ]
          
          PhongMaterial {
            id: material
            ambient: Theme.tintColor // Also available are diffuse, specular + shininess to control lighting behavior
          }
          
          // The 3d mesh for the cube
          CuboidMesh {
            id: cubeMesh
            xExtent: 8
            yExtent: 8
            zExtent: 8
          }
          
          // Transform (rotate) the cube depending on sensor reading
          Transform {
            id: cubeTransform
            // Create the rotation quaternion from the sensor reading
            rotation: fromAxesAndAngles(Qt.vector3d(1,0,0), sensor.readingX*2, Qt.vector3d(0,1,0), sensor.readingY*2)
          }
          
          // The actual 3d cube that consists of a mesh, a material and a transform component
          Entity {
            id: cubeEntity
            components: [ cubeMesh, material, cubeTransform ]
          }
        }
      } // Scene3D
      
      // Color selection row
      Row {
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.bottom: parent.bottom
        spacing: dp(5)
        padding: dp(15)
        
        Repeater {
          model: [Theme.tintColor, "red", "green", "#FFFF9500"]

          Rectangle {
            color: modelData
            width: dp(48)
            height: dp(48)
            radius: dp(5)
            
            MouseArea {
              anchors.fill: parent
              onClicked: {
                material.ambient = modelData
              }
            }
          }
        }
      }
    } // Page
  } // NavigationStack
} // App

Test Code Examples from Plugin Documentation

You can now test the code examples from the plugin documentation. This allows you to run code examples from the documentation on your mobile phone. Just like you are used to from the apps documentation, you can now also test plugins for ads, Firebase, analytics and more right from your browser.

You can currently test code examples of the following plugins from the documentation:

 

Here’s a little example for an AdMob advertisement banner:

AdMob banner in the iOS live client

import VPlayApps 1.0
import VPlayPlugins 1.0

App {
  NavigationStack {
    Page {
      title: "Admob Banner"
      
      AdMobBanner {
        adUnitId: "ca-app-pub-3940256099942544/6300978111" // banner test ad by AdMob
        banner: AdMobBanner.Smart
      }
    }
  }
}

New Examples for Custom App List View Delegates

Many of you requested this, so this update adds 2 new examples to the ScrollView and ListView documentation. You can check out how to create custom delegate components and display your data with the modelData property.

The second example shows custom foldable delegate components.

App list view with subsections

Improved Handling of Screen Keyboard on iOS and Android

App and GameWindow provide new properties to make the native keyboard handling on Android and iOS easier. You can use keyboardVisible and keyboardHeight to adapt your app layout when the keyboard is shown.

The following example displays a floating action button above the keyboard. It also adapts to size changes of the keyboard:

Floating action button above the keyboard


import VPlayApps 1.0
import QtQuick 2.7

App {
  id: app
  
  // We unset the focus from the AppTextField after the keyboard was dismissed from the screen
  onKeyboardVisibleChanged: if(!keyboardVisible) textField.focus = false
  
  NavigationStack {
    
    Page {
      id: page
      title: qsTr("Keyboard Height")
      
      AppTextField {
        id: textField
        width: parent.width
        font.pixelSize: sp(25)
      }
      
      FloatingActionButton {
        // Add the keyboard height as bottom margin, so the button floats above the keyboard
        anchors.bottomMargin: app.keyboardHeight + dp(15)
        // We only show the button if the AppTextField has focus and the keyboard is expanded
        visible: textField.focus && app.keyboardHeight != 0
        icon: IconType.check
        backgroundColor: Theme.tintColor
        iconColor: "white"
        onClicked: textField.focus = false
      }
    }
  }
}

More Features, Improvements and Fixes

Here is a compressed list of improvements with this update:

For a list of additional fixes, please check out the changelog.

How to Update V-Play

Test out these new features by following these steps:

  • Open the V-Play SDK Maintenance Tool in your V-Play SDK directory.
  • Choose “Update components” and finish the update process to get this release as described in the V-Play Update Guide.

V-Play Update in Maintenance Tool

If you haven’t installed V-Play yet, you can do so now with the latest installer from here. Now you can explore all of the new features included in this release!

For a full list of improvements and fixes to V-Play in this update, please check out the change log!

 

 

 


The post Release 2.17.1: Use Qt 3D with Live Reloading and Test Plugin Code Examples from Browser appeared first on V-Play Engine.

Cutelyst 2.5.0 released

Cutelyst, a C++ web framework based on Qt, got a new release. This release has some important bug fixes, so upgrading is strongly recommended.

Most of this release’s fixes came from a side project I started called Cloudlyst. I did some work for the NextCloud client, and through that I became interested in how the WebDAV protocol works, so Cloudlyst is a server implementation of WebDAV; it also passes all litmus tests. The WebDAV protocol makes heavy use of the REST concept, and although it uses XML instead of JSON, that is actually a good choice, since XML can be parsed progressively, which is important for large directories.

Since the path URL now has to deal with file paths, it is very important that it handles special characters well, and sadly it did not. I had tried to optimize percent-encoding decoding by using a single QString instead of going back and forth with toLatin1() and fromUtf8(), and this wasn’t working at all. To fix this properly, the URL is now parsed a single time at once, so the QString path() is fully decoded, which is a little faster and avoids allocations. And this is now unit tested :)
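The decode-once idea can be illustrated with a minimal percent-decoder (a hypothetical sketch, not Cutelyst’s actual code): decode the escapes over the raw bytes in a single pass first, and only then treat the result as UTF-8, instead of converting charsets back and forth:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Map a hex digit to its value, or -1 if the character is not hex.
static int hexValue(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

// Single-pass percent decoding over raw bytes. Invalid escapes are kept
// literally. Because the output is bytes, multi-byte UTF-8 sequences
// like %C3%A9 ("é") come out intact and can be interpreted as UTF-8
// afterwards, which is why decoding before charset conversion handles
// special characters in file paths correctly.
std::string percentDecode(const std::string &in) {
    std::string out;
    out.reserve(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '%' && i + 2 < in.size()) {
            int hi = hexValue(in[i + 1]);
            int lo = hexValue(in[i + 2]);
            if (hi >= 0 && lo >= 0) {
                out.push_back(static_cast<char>(hi * 16 + lo));
                i += 2; // skip the two hex digits we consumed
                continue;
            }
        }
        out.push_back(in[i]);
    }
    return out;
}
```

Doing this in one pass also avoids the intermediate string allocations that the back-and-forth conversion approach incurs.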

Besides that there was:

  • Fix for regression of auto-reloading apps in cutelyst-wsgi
  • Fix csrf token for multipart/form-data (Sebastian Held)
  • Allow compiling WSGI module when Qt was not built with SSL support

The last one, along with another commit, fixes some build issues I had with buildroot, for which I also created a package, so soon you will be able to select Cutelyst from the buildroot menu.

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.5.0

Qt for Python available at PyPi

We are glad to announce that finally the technical preview of Qt for Python is available at the Python Package Index (PyPI).

For the previous technical preview release, we were still discussing with the PyPI people how to upload our wheels to their servers. Now everything is in order, and you can get the PySide2 module with a simple:


pip install PySide2

Keep in mind that we will continue uploading snapshot wheels to our servers, so you can also get the latest features of Qt for Python!

The post Qt for Python available at PyPi appeared first on Qt Blog.