Measuring The Impact of Qt on Your Business

Today, I want to talk to you about a Total Economic Impact study that Forrester Consulting recently conducted for commercial customers of Qt. Please find the report here.


In many ways, Qt has been ahead of its time. Its ability to work on basically anything now truly shines as the Internet of Things (or “Internet of Thinking”, as we move into the future) matures. And with the number of devices developed still exponentially outgrowing the number of developers, tools like Qt become a necessity to keep up with the market.

We all ask ourselves, as we should: “Are we doing the right thing?”

A lot of our customers have done so before they adopted Qt. Understandably so: You expect your software to be around for years, even decades, and committing to a development framework is a high-risk decision that will directly impact your organization, its bottom line, and competitiveness for years to come.

The questions will take various shapes as different considerations become relevant throughout the project life-cycle:

Short-term considerations: “What will my return on investment be?” “Will it help me meet the market requirements and customer expectations?” “Does our workforce have the right skillset to adopt the tool into a unified development process?”

Mid-term considerations: “How will a framework impact development and (for embedded devices) hardware cost?” “Will it help me go to market faster?” “What other competitive advantages can it incur?”

Long-term considerations: “Will the framework still be supported in ten years?” Does it help me minimize the maintenance of toolchains and the associated costs?” “What are the costs if I want to expand my software to other platforms in the future?”

At the Qt Company, we too constantly ask ourselves: “Are we doing the right thing?” And because we all have some degree of bias towards what is ours, we have to look outside for answers.

Our annual customer survey, for example, shows encouraging results: In 2017, Qt met or exceeded the ROI expectations of 95.3% of our customers and made 78.2% of developers more productive. That is fantastic, but I was curious to see how someone from the outside would quantify the value of our product.

Forrester conducted a Total Economic Impact study to examine the potential return on investment businesses may realize with Qt and evaluate its potential financial impact on organizations.

To do that, Forrester interviewed four commercial Qt customers and used the data to construct a composite organization. The key findings on how Qt affected the organization’s business exceeded even the expectations set by our own survey:

  • The payback period for Qt’s licensing and operational costs was three months.
  • The ROI after three years was 289%.
  • The net present value measured over a three-year period was $422,833.

Further findings from the interviews that apply to any organization were:

  • Customers saved (on average) 30% of their software development costs by using Qt.
  • Customers could reduce hardware costs by 10% when coming from a native toolchain, and by up to 80% when coming from an HTML toolchain.
  • Device innovations and improvements released to market sooner because the development process was simpler and faster.
  • The fact that Qt is mature, stable and well supported gave customers a sense of confidence for the future.

Forrester was kind enough to share with us the algorithm they used to estimate the ROI for Qt embedded customers. Please feel free to try it to find out how Qt would affect your company.

While these results reassure us that we’re doing the right thing in enabling you to create better products faster and at a lower cost, we do not want to rest on our laurels. Instead, we want to take this feedback as encouragement to keep questioning and improving ourselves.

So, to all of you who brought us this far by challenging us to achieve greater heights every day: the customers and the competitors, the developers and the business people, the thinkers and the dreamers.

Thank you!

Juha Varelius, CEO, The Qt Company

 

The post Measuring The Impact of Qt on Your Business appeared first on Qt Blog.

Modern Qt Development: The Top 10 Tools You Should Be Using

by Matthias Kalle Dalheimer (Qt Blog)

Why is using the right tool for the job so important? Efficiency and results are two reasons that immediately spring to mind. You don’t see construction workers using shoes to drive in nails – so why as software developers do we so often make do with manual solutions to find bugs or optimize code? It’s certainly less efficient and much more frustrating, and the final results can be less than ideal.

It always takes time to learn a new tool – so how do you know where you should invest your time? We at KDAB would like to share our favourite Qt development tools that we think are worth the effort. We regularly rely on these tools to help us locate and fix troublesome bugs and solve difficult optimization challenges. If you live at the cutting edge of Qt development you may be familiar with many of these, but we know there’ll be something new for you regardless of your level of expertise.

GammaRay – introspection tool that adds Qt-awareness to the debugger

If you’ve been frustrated by debugging through endless Qt structure internals, you’ll definitely want to give this a try. GammaRay understands most of the core Qt components from a Qt perspective – QtQuick scene graphs, model/view structures, QTextDocuments, signal/slot activations, focus handling, GPU textures, QWidgets, state machines, and more – letting you observe and edit values at run-time in a natural way. You can debug applications from launch or attach to running apps (both local and remote).

Clazy – compiler plug-in that understands Qt semantics

This is something that needs to be part of every Qt developer’s bag of tricks. By adding clazy to clang, you’ll get compile-time warnings for Qt best practices – unneeded memory allocations, misused APIs, or inefficient constructs. Clazy is a great way to improve your Qt code and best of all, it can provide automatic refactoring fixes for some of the errors it finds – no coding required!

Modern C++ – source code that uses C++11/14/17 improvements

Although C++11 and C++14 have been around for a while now, there are many old coding habits that die hard. Many developers don’t take advantage of newer C++ constructs that are more efficient, understandable, and maintainable. The thing is you don’t need to be a C++ standards expert to make small changes that can significantly help your code. We’ve got a few of the more important highlights covered in a paper below – or you can attend a training class or two for the real low-down.

Clang Tidy – compiler tool to help modernize your C++

This is the lazy person’s way to modernize C++. Another clang-based tool, Clang Tidy points out older C++ idioms that could use updating. It flags where these should be replaced with new C++11 or C++14 improvements, and in many cases can do the refactoring automatically. That’s productivity for you!

HotSpot – tool to visualize your app’s CPU performance

When it comes to optimizing, nothing beats a profiler – but reading raw perf logs is a punishment that should only be reserved for people who believe zip files are a proper form of source control. HotSpot reads Linux perf logs and lets you see multiple different views (callers, timeline, top-down, bottom-up) to help you easily understand where you’re burning up your time.

apitrace – set of tools to debug your graphics APIs and improve their performance

If you’re writing an app with a GUI, profiling doesn’t stop at your C++ code. You need a way to see the calls you’re making to OpenGL, Direct3D, or DirectDraw, view the contents of those calls in a graphical interpretation, and profile their performance. That’s exactly what apitrace does. It can also replay a trace file, allowing you to compare and benchmark performance once you’ve seen where to make improvements.

Kernel/System Profiler – tools for visualizing your operating system’s performance

Sometimes performance problems aren’t found in your app – they’re in multiple-process interactions, hidden in driver stacks, or a result of how you’re calling the operating system. For these kinds of really low-level debugging, you’ve got to have a system-profiling tool. It can feel like swatting a fly with a bazooka, but a system profiler is an invaluable tool that can find problems that nothing else will.

Heaptrack – tooling to watch your app’s memory usage

Sometimes optimization isn’t about speed – it’s about memory. If you’re trying to profile your application’s memory usage, you’ll want to check this out. By showing your application’s peak memory usage, memory-leaking functions, biggest allocators, and most temporary allocations, you’ll be able to really narrow down where your app’s memory is going and how to keep it on a diet.

Continuous Integration (CI) – build systems for agile/XP development

Continuous integration fits hand-in-hand with unit testing as a methodology that can bring very real improvements to your software quality – whether you’re using agile development or not. Don’t bother creating your CI build system from scratch when there are a lot of great tools that can give you a leg up on delivering quality products.

Qt Creator – the Qt IDE

Perhaps you think it’s cheating to include Qt Creator in this list since it’s already installed on every Qt developer’s desktop. Yes, but did you know you can find slow spots in your Qt Quick code through the built-in QML profiler? How about hitting Alt+Enter to get a list of all the refactoring options at the cursor location? What about other handy key sequences to find symbol references, do a git diff, or record a macro, along with many other super helpful navigation and editing aids? Shortcuts that you might use ten times a day if you only knew they were there. Don’t be a slave to your mouse – print out our handy reference card and pin it up on your cube wall.

Those are our top ten tools for improving your Qt development tool chest. Don’t forget there are also things that can’t be automated but for which there are courses and customized training – such as effective code reviews or best coding practices.

Is there anything else that you’ve found to be invaluable that you want to share? Please leave a suggestion in the comments!

The post Modern Qt Development: The Top 10 Tools You Should Be Using appeared first on Qt Blog.

Clazy 1.4 released

Clazy 1.4 has been released and brings 10 new checks.

Clazy is a clang compiler plugin which emits warnings related to Qt best practices. We’ll be showing Clazy at Qt World Summit in Boston, Oct 29-30, where we are a main Sponsor.

You can read more about it in our previous blog posts.

Today I’ll go through all 10 new warnings, one by one. For other changes, check the complete 1.4 Changelog. I’ve ordered the warnings according to my personal preference, starting with the ones that have helped me uncover the most bugs and finishing with some exotic ones which you’ll rarely need. So let’s start.

1. skipped-base-method

Warns when calling a method from the grand-base class instead of the direct base class one.

Example:

class MyFrame : public QFrame
{
    Q_OBJECT
public:
    bool event(QEvent *ev) override
    {
        (...)
        // warning: Maybe you meant to call QFrame::event() instead [-Wclazy-skipped-base-method]
        return QWidget::event(ev); 
    }
};

The motivation came after hitting bugs when overriding QObject::event() and QObject::eventFilter(). I would suggest always using this check and annotating your legit cases with // clazy:exclude=skipped-base-method, to make your intention clear. The same way you would comment your fallthroughs in switch statements.

This check might get removed in the future, as in the end it’s not specific to Qt and clang-tidy recently got a similar feature.

2. wrong-qevent-cast

Warns when a QEvent is cast to the wrong derived class via static_cast.

Example:

switch (ev->type()) {
    case QEvent::MouseMove:
        auto e = static_cast<QKeyEvent*>(ev);
}

Only casts inside switch statements are verified.

3. qhash-with-char-pointer-key

Finds cases of QHash<const char *, T>. It’s error-prone as the key is just compared by the address of the string literal, and not the value itself.

This check is disabled by default as there are valid use cases. But again, I would suggest always using it and adding a // clazy:exclude= comment.

4. fully-qualified-moc-types

Warns when a signal, slot or invokable declaration is not using fully-qualified type names, which will break old-style connects and interaction with QML.

Also warns if a Q_PROPERTY of type gadget is not fully-qualified (Enums and QObjects in Q_PROPERTY don’t need to be fully qualified).

Example:

namespace MyNameSpace {

    struct MyType { (...) };

    class MyObject : public QObject
    {
        Q_OBJECT
        Q_PROPERTY(MyGadget myprop READ myprop); // Wrong, needs namespace
    Q_SIGNALS:
        void mySignal(MyType); // Wrong
        void mySignal(MyNameSpace::MyType); // OK
    };
}

Beware that fixing these type names might break user code if they are connecting to them via old-style connects, since the users might have worked around your bug and not included the namespace in their connect statement.

5. connect-by-name

Warns when auto-connection slots are used. They’re also known as “connect by name”, an old and unpopular feature which isn’t recommended. Consult the official documentation for more information.

These types of connections are very brittle, as a simple object rename would break your code. In Qt 5 the pointer-to-member-function connect syntax is recommended as it catches errors at compile time.

6. empty-qstringliteral

Suggests using an empty QString instead of an empty QStringLiteral. The latter causes unneeded code bloat.

You should use QString() instead of QStringLiteral() and QString("") instead of QStringLiteral("").

7. qt-keywords (with fixit)

Warns when using Qt keywords such as emit, slots, signals or foreach.

This check is disabled by default. Using the above Qt keywords is fine unless you’re using third-party headers that also define them, in which case you’ll want to use Q_EMIT, Q_SLOTS, Q_SIGNALS or Q_FOREACH instead.

This check is mainly useful due to its fixit to automatically convert the keywords to their Q_ variants. Once you’ve converted all usages, then simply enforce them via CONFIG += no_keywords (qmake) or ADD_DEFINITIONS(-DQT_NO_KEYWORDS) (CMake).

8. raw-environment-function

Warns when putenv() or qputenv() is used and suggests the Qt thread-safe equivalents instead. This check is disabled by default and should be enabled manually if thread safety is important to you.

9. qstring-varargs

This implements the equivalent of -Wnon-pod-varargs but only for QString.

This check is only useful in cases where you don’t want to enable -Wnon-pod-varargs, for example on projects with thousands of benign warnings (like with MFC’s CString), where you might want to fix only the QString cases.

10. static-pmf

Warns when storing a pointer to a QObject member function and passing it to a connect statement. Passing such a variable to connect() is known to fail at runtime when using MinGW.

You can check if you’re affected by this problem with the following snippet:

static auto pmf = &QObject::destroyed;
if (pmf == &QObject::destroyed) // Should be false for MinGW

Conclusion and thoughts for version 1.5

Clazy has matured, and it’s getting increasingly difficult to come up with new ideas for checks. For version 1.5 I won’t be focusing on writing new warnings, but instead on figuring out how to organize the existing ones.

This project has come a long way; there are now 77 checks, and I feel the current classification by false-positive probability (levels 0, 1, 2, 3) is not scaling anymore. I will try to organize them by categories (bug, performance, readability, containers, etc.), which would be orthogonal to levels and hopefully also answer the following questions:

  • What’s the absolute sub-set of checks that every project should use?
  • Which ones should abort the build if triggered (-Werror style)?
  • How to make clazy useful in CI without getting in the way with annoying false positives?
  • How to let the user configure all this in an easy way?

But of course, if you know of any interesting check that wouldn’t cost me many days of work, please file a suggestion or catch me at #kde-clazy (freenode), and it might make it into the next release.

The post Clazy 1.4 released appeared first on KDAB.

Qt Creator 4.8 Beta released

We are happy to announce the release of Qt Creator 4.8 Beta!


Generic Programming Language Support

In Qt Creator 4.8 we’ll introduce experimental support for the Language Server Protocol. For many programming languages there is a “language server” available, which provides IDEs with a whole lot of information about the code, as long as they support communicating via the protocol.

This means that by providing a client for the Language Server Protocol, Qt Creator gets (some) support for many programming languages “for free”. Currently Qt Creator supports code completion, highlighting of the symbol under the cursor, and jumping to the symbol definition, and it integrates diagnostics from the language server. Highlighting and indentation are still provided by our generic highlighter, since they are not provided via the Language Server Protocol.
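For reference, every exchange with a language server is a JSON-RPC message framed with a Content-Length header. A go-to-definition request sent by the client looks roughly like the following (the file path and position here are made up for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///home/user/project/main.py" },
    "position": { "line": 12, "character": 4 }
  }
}
```

The server answers with the location(s) of the definition, which the IDE then turns into a navigation action.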

To use a language server, you first need to enable the LanguageClient plugin in Help > About Plugins (Qt Creator > About Plugins on macOS). Then add the server in Tools > Options > Language Client, choosing a MIME type (glob patterns are in the works), server executable and any required command line arguments. The server is automatically started – just open a file of the corresponding type and start hacking :).

Note that the client is mostly tested with Python. There will most likely be the odd issue when you try your favorite language. Also, we currently do not support language servers that require special handling. If you find issues, we are happy to receive bug reports, preferably including Qt Creator console output with the environment variable QT_LOGGING_RULES=qtc.languageclient.*=true set.

C++ Support

On the C++ side we added some experimental features.

Compilation Database Projects

Open a compilation database as a project purely for editing and navigating code. A compilation database is basically a list of files and the compiler flags that are used to compile them. Some build systems and tools are able to generate such databases for use in other tools. In Qt Creator it feeds the code model with the necessary information for correctly parsing the code. Enable the plugin CompilationDatabaseProjectManager to try it.

Clang Format Based Indentation

Auto-indentation is done via LibFormat, the backend used by Clang Format. Enable the plugin ClangFormat to try it.

Cppcheck Diagnostics

Integrates diagnostics generated by the tool Cppcheck into the editor. Enable the plugin Cppcheck to try it.

Aside from many other fixes, the Clang code model can now jump to the symbol represented by the auto keyword. It also allows generation of a compilation database from the information that the code model has via Build > Generate Compilation Database.

Debugging

We added support for simultaneously running debuggers on one or more executables. When multiple debuggers are running, you can switch between the instances with a new drop-down menu in the debugger tool bar in Debug mode.

There have been many more improvements and fixes in Qt Creator 4.8, which you can read about in more detail in our change log.

Get Qt Creator 4.8 Beta

The opensource version is available on the Qt download page, and you find commercially licensed packages on the Qt Account Portal. Qt Creator 4.8 Beta is also available under Preview > Qt Creator 4.8.0-beta1 in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.8 Beta released appeared first on Qt Blog.

Introducing the Distance Field Generator

by Eskil Abrahamsen Blomfeldt (Qt Blog)

At least from the perspective of rendering, text is often the most complex part of a traditional two-dimensional user interface. In such an interface, the two main components are rectangular images and text. The rectangular images are often quite static, and can be represented by two triangles and four indexes into a texture atlas that is uploaded to graphics memory once and then retained. This is something that has low complexity and which the graphics hardware has been optimized to handle quickly.

Text starts as a series of indexes into an international database of writing systems (Unicode). It is then, based on some selection algorithm, combined with one or more fonts, each of which is in principle a collection of shapes plus some lookup tables and executable programs that convert said indexes into shapes and relative positions. These shapes, essentially filled paths made of Bézier curves, then have to be rasterized at a specified size, and they can range from simple and neat outlines to complex ones with lots of detail. (By rasterization, I mean finding out how much of each target pixel, or subpixel in some cases, is covered by the shape.)

The letter Q

Objectively the most beautiful character in the Latin alphabet. Here represented by a rasterized image of three channels, respectively giving the coverage of the red, green and blue target subpixels. Scaled by 400% to make the pixels visible.

All combined, it is a heavy process. But luckily, instead of redoing every step for every string, we can often cache intermediate results and reuse them later.

For instance, it is possible to rasterize the glyphs the first time they are used, keep this in memory, and then at each subsequent use, render these glyphs the same way images are rendered as described above: By putting the rasterized glyphs in a texture atlas and representing the glyphs by two triangles and indexes into this atlas. In fact, when the Text.NativeRendering render type is in use in Qt Quick, this is precisely what happens. In this case, we will ask the underlying font system (CoreText on macOS, GDI/DirectWrite on Windows and FreeType on Linux) to rasterize the glyphs at a specific size, and then we will upload these to a texture atlas which can later be referenced by the triangles we put on the screen.

Texture glyph cache

Contents of texture atlas in a typical Qt application.

There are some limitations to this approach, however: Since the font size has to be known before the glyphs are rasterized, we may end up rasterizing and caching the same glyphs multiple times if the text in the UI comes in many different sizes. For some UIs that can be too heavy both for frame rate and memory consumption. Animations on the font size, for instance, can cause us to rasterize the shapes again for every frame. This rasterization is also done on the CPU, which means we are not using the resources of the device to its full potential when preparing the glyphs.

Additionally, transformations on the NativeRendering text will give pixelation artifacts, since they will be applied to the pre-rasterized image of the glyph, not its actual mathematical shape.

So what is the alternative?

For a more flexible approach, we want to actually do the rasterization on the GPU, while rendering our frame. If we can somehow get the shapes into the graphics memory and rasterize them quickly using a fragment shader, we free up CPU resources and allow both transformations and size changes without any additional penalty on performance.

There are several approaches to this problem. The way it is done in Qt is by using so-called distance fields. Instead of storing the rasterized glyphs in texture memory, we store a representation of the shapes in a texture atlas, where each texel contains the distance to the nearest obstacle rather than the coverage.

Distance field for letter Q in Deja Vu Sans

A distance field of the same Q, as an 8-bit map where each value is set to the distance to the nearest point on the outline of the glyph

Once these distance fields are created and uploaded to texture memory, we can render glyphs at any font size and scale quickly on the GPU. But the process of converting the shapes from the fonts into distance fields is still a bottleneck for startup time, and that in particular is what this blog post is about.

So what is the problem?

Creating the distance fields is CPU-bound, and – especially on lower-end hardware – it may be very costly. By setting the QT_LOGGING_RULES environment variable to “qt.scenegraph.time.glyph=true”, we can gain some insight into what that cost is. Let’s, for instance, say that we run an example that displays 50 unique Latin characters with the Deja Vu Sans font (the simple and neat outlines further up). With the logging turned on, and on an NXP i.MX6 we have for testing in our QA lab, we get the following output:

qt.scenegraph.time.glyph: distancefield: 50 glyphs prepared in 25ms, rendering=19, upload=6

From this output we can read that generating the necessary assets for these 50 glyphs took 19 ms, over one whole frame, whereas uploading the data to the graphics memory took 6 ms. It is the 19 ms for converting into distance fields that we will be able to reduce. These 19 ms may not seem like a lot, but it will cause the rendering to skip a frame at the point where it happens. If the 50 glyphs are displayed at startup, then those 25 ms may not be as noticeable, but if it is done during an animation, it would be something a user could notice. It is worth mentioning again, though, that it is a one-time cost as long as the font remains in use.

Running the same for the HVD Peace font (linked as the complex font above), we get the following output:

qt.scenegraph.time.glyph: distancefield: 50 glyphs prepared in 1016ms, rendering=1010, upload=6

In this case, we can see that rendering the distance fields takes a full second, due to the high complexity of the outlines in use.

Another use case where we may see high costs of generating distance fields is if the number of unique glyphs is very high. So let us test an arbitrary, auto-generated “lorem ipsum” text in Chinese with 592 distinct characters:

qt.scenegraph.time.glyph: distancefield: 592 glyphs prepared in 1167ms, rendering=1107, upload=60

Again, we see that generating the distance fields takes over one second. In this case, the upload also takes a bit longer, since there is more data to be uploaded into graphics memory. There is not much to be done about that though, other than making sure it is done at startup time and not while the user is watching a smooth animation. As mentioned, though, I will focus on the rendering part in this blog.

So what is the solution?

In Qt 5.12, we will release a tool to help you improve on this for your application. It is called “Qt Distance Field Generator” and you can already find the documentation in our documentation snapshot.

The way this works is that it allows you to pregenerate the distance fields for either a selection of the glyphs in a font or all of them. Then you can append these distance fields as a new font table at the end of the font file. Since this custom font table follows SFNT conventions, the font will still be usable as a normal TrueType or OpenType file (SFNT mandates that unsupported font tables are ignored).

So the font can be used as normal and is still compatible with e.g. Qt Widgets and Text.NativeRendering, where the rasterization will still go through the system.

When the font is used in Qt Quick with Text.QtRendering, however, the special font table will be detected, and its contents will be uploaded directly to graphics memory. The cache will therefore be prepopulated with the glyphs you have selected, and the application will only have to create distance fields at runtime if they are missing from this set.

The result of this can be impressive. I repeated the experiments, but with fonts where I had pregenerated distance fields for all the glyphs that were used in the example.

First example, simple and neat “Deja Vu Sans” font, 50 latin characters:

qt.scenegraph.time.glyph: distancefield: 50 pre-generated glyphs loaded in 11ms

Second example, complex “HVD Peace” font, 50 latin characters:

qt.scenegraph.time.glyph: distancefield: 50 pre-generated glyphs loaded in 4ms

Third example, 592 Chinese characters:

qt.scenegraph.time.glyph: distancefield: 592 pre-generated glyphs loaded in 42ms

Comparison of results on i.MX6

As we can see, there is a great improvement when a lot of time is spent on creating the distance fields. In the case of the complex font, we went from 1016 ms to 4 ms. When more data is uploaded, that will still take time, but in the case of the Chinese text, the upload was actually faster than when the distance fields were created on the fly. This is most likely a pure coincidence, however, with the order of the glyphs in the cache causing slightly different layouts and sizes.

Another peculiar thing we can see is that the complex font is faster to load than the simple one. This is simply because the glyphs in that font are square and compact, so there is not a lot of unused space in the cache. Therefore the texture atlas is a little bit smaller than for the simpler font. The complexity of the outlines does not affect the loading time of the atlas of course.

Running the same tests on my Windows Desktop workstation, we see that there is not as much overhead for generating the distance fields, but there is still some performance gain to be seen in some cases.

Comparison of results on Windows Desktop

For 50 Latin glyphs with Deja Vu Sans, both tests clocked in at 3 ms, which was mainly spent uploading the data. For HVD Peace, however, generating the distance fields took 131 ms (versus 1 ms for just the upload), and for the Chinese text it took 146 ms (versus 11 ms).

Hopefully this can help some of you get even better performance out of your Qt devices and applications. The feature is already available in the Qt 5.12 beta, so download the package and take it for a test drive right away.

The post Introducing the Distance Field Generator appeared first on Qt Blog.

QML Debugging in Visual Studio

The next release of the Qt Visual Studio Tools, v2.3.0, will allow debugging of QML applications in Visual Studio. It will be possible to set breakpoints in QML files and step through the execution of QML code. While in break mode, it will be possible to watch variables and change their values, as well as evaluate arbitrary expressions. The QML debug session will run concurrently to the C++ debug session, so it will be possible to set breakpoints and watch variables in both C++ and QML during the same debug run of the application.

This new debugging feature of the Qt VS Tools integrates with the QML debugging infrastructure, a part of the Qt QML module which provides services for debugging, inspecting, and profiling applications via a TCP port. To extend the Visual Studio debugger with features of the QML debugging infrastructure, a Visual Studio QML debug engine is provided. This debug engine consists, for the most part, of implementations of interfaces from the Active Debugging 7 (AD7) extensibility framework for the Visual Studio debugger.

If a Qt project contains any QML resource files, starting a debug session (e.g. by pressing F5), besides launching the native application, now also connects to the QML debugging infrastructure of that application. This can be seen in the Processes window of the Visual Studio debugger: two processes are listed, a native process that corresponds to the actual physical process created for the C++ debugging session, and a QML process, which does not correspond to any physical process that is running on the machine, but rather represents the connection to the QML debugging runtime within the native process.

qml_vs_debug_processes_1

Since both a native process and a QML process are present, it is possible to request breakpoints in both C++ and QML code. The Visual Studio debugger will forward requests to the appropriate debug engine. As usual, a filled circular breakpoint marker in QML code indicates a valid breakpoint; this means that a breakpoint request for that file position has been sent to, and confirmed by, the QML runtime.

qml_vs_debug_breakpoints_1

When a breakpoint is hit, Visual Studio will show the current state of the call stack. Unlike other scenarios of debugging applications that use different languages (e.g. .NET + Native debugging), the QML debug engine does not provide true mixed mode debugging. It runs concurrently with the native debug engine and, from the point of view of the Visual Studio debugger, it is not related to the native process. This means that, even though it is possible to debug both C++ and QML in the same debugging session, the stack that is shown when a QML breakpoint is hit will only include QML function calls — the C++ context of those calls will not be available.

qml_vs_debug_callstack_1

As in the case of native debugging, while in break mode, it is possible to view and modify the values of local variables, in the context of the currently active call stack frame, as well as create watches for any variable or expression. The Immediate window is also available for evaluation of any expression in the context of the current stack frame.

qml_vs_debug_watches_1

Moving the mouse over a QML expression pops up an instant watch window (or “DataTip”). The value of that expression in the current context is displayed and can also be modified.

qml_vs_debug_datatip_1

QML debugging is enabled by default for any Qt QML application. It is possible to disable QML debugging, and revert to native-only debugging, by opening the Qt project settings dialog and setting the “QML Debug” option to “Disable”. In this dialog, it is also possible to change the port that is used by the QML debugging runtime.

qml_vs_debug_options_2

As mentioned, the QML debugging feature of the Qt VS Tools will be available in the next version, scheduled for release in the Visual Studio Marketplace later this year. A preview version will shortly be available for download on the Qt website; we’ll post a quick update here when it is available.

The post QML Debugging in Visual Studio appeared first on Qt Blog.

Extending the Performance Analysis Toolset

The Linux Foundation holds its Open Source Summit + Embedded Linux Conference Europe in Edinburgh, October 22 – 24, 2018

In spite of the clumsy name, this is an event you won’t want to miss!

KDAB’s Christoph Sterz will be presenting a talk on Tuesday, October 23 • 15:50 – 16:30:

Extending the Performance Analysis Toolset 

Finding and analyzing performance issues on embedded devices can be a tiresome search. Nowadays, modern sampling and tracing technologies are built into the Linux kernel to address this, in the form of perf and LTTng respectively. Still, the vast amounts of data recorded are difficult to handle on the limited embedded devices themselves.

In his talk, Christoph will present Hotspot, an open-source performance analysis tool, show how to optimize sophisticated tracepoint analysis, and outline KDAB’s plans for instrumenting Qt for the LTTng tracing ecosystem.

Read more…

Open Source Summit + Embedded Linux Conference Europe (OSSEU) is the technical conference for professional open source and the leading conference for developers, architects and other technologists – as well as open source community and industry leaders.

15% Attendee Discount Offer

Sign up here and get a 15% discount, by using the following code: SPKSHARE15

 

The post Extending the Performance Analysis Toolset appeared first on KDAB.

Qt 5.12 LTS Beta Released

I am pleased to announce that we released the first beta of Qt 5.12 LTS today. Qt 5.12 LTS is expected to be a solid development base and to receive multiple patch-level updates during its three-year support period. Once released, we recommend updating to Qt 5.12 LTS for both current and new projects. We have convenient online binary installers in place so you can try out the features coming in Qt 5.12 LTS in its first beta state. We will continue to provide subsequent beta releases via the online installer.

There is a huge number of things to talk about, so I’ll go through just some of the highlights of Qt 5.12 LTS in this blog post. For a more detailed overview of the new features coming in Qt 5.12 LTS, please check them out on the Qt 5.12 wiki page. We will also provide multiple blog posts and webinars for a more in-depth look at the Qt 5.12 LTS features.

Long-term support

Qt 5.12 LTS is a long-term supported release. It will be supported for three years, after which you can purchase extended support. As an LTS, it will receive multiple patch-level releases that provide bug fixes, improvements and security fixes. If you are about to begin a new project, you may want to start with the Qt 5.12 LTS pre-releases right away. For ongoing projects, migration to Qt 5.12 LTS is recommended after it is released.

Qt 5.9 LTS entered the ‘Strict’ phase at the beginning of February 2018. Going forward, Qt 5.9 will continue to receive critical bug and security fixes during the ‘Strict’ phase. We will also continue to create new Qt 5.9.x patch releases during the ‘Strict’ phase, but at a slower cadence than before. The oldest Qt 5 LTS release, Qt 5.6 LTS, is in the last steps of the ‘Very Strict’ phase and has for a while received only the most important security fixes. There are no new patch releases currently planned for Qt 5.6 LTS. Those still using Qt 5.6 LTS should plan an update to a more recent version of Qt, as support for Qt 5.6 LTS ends in March 2019.

The reason for gradually reducing the amount of changes going into an LTS version of Qt is to avoid problems with stability. While each fix as such is beneficial, they also bring a risk for behavior changes and regressions, which we want to avoid in LTS releases.

Performance

Improved performance and reduced memory consumption have been important focus areas for Qt development since Qt 5.9 LTS, and we have continued this work for Qt 5.12 LTS. We have made multiple optimizations to graphics and other functionality, especially for running Qt 3D and Qt Quick on embedded hardware.

Qt 5.12 LTS provides good support for asset conditioning and improves upon the functionality introduced with Qt 5.10 and 5.11. One important new feature is the support for pre-generated distance field caches of fonts, which provides faster startup times, especially with complex and non-Latin fonts.

The QML engine has also received multiple improvements and optimizations. Specifically, we focused on optimizing the memory consumption for Qt 5.12 LTS.

TableView

One of the most requested new controls is TableView, and with Qt 5.12 LTS we finally provide it. The new TableView item is available in the Qt Quick module. TableView is similar to the existing ListView, but with additional support for showing multiple columns. We have developed the new TableView with performance in mind, with an architecture that allows efficient handling of large tables. For a more detailed overview of the new TableView, please check the recent TableView blog post.
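To give a feel for the API, here is a minimal sketch of a TableView in QML (the name myTableModel is an assumption; in Qt 5.12 the model is typically a QAbstractTableModel exposed from C++):

```qml
import QtQuick 2.12

TableView {
    anchors.fill: parent
    columnSpacing: 1
    rowSpacing: 1
    clip: true

    model: myTableModel   // assumed: a QAbstractTableModel registered from C++

    // One delegate instance is created per visible cell.
    delegate: Rectangle {
        implicitWidth: 100
        implicitHeight: 30
        Text {
            anchors.centerIn: parent
            text: display    // the cell's Qt::DisplayRole value
        }
    }
}
```

Unlike the old QtQuick.Controls 1 TableView, cells outside the viewport are not instantiated, which is what makes large tables cheap.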

In addition to introducing the new TableView, Qt 5.12 LTS also provides multiple improvements and new features in Qt Quick Controls 2, as well as in the Qt VirtualKeyboard.

Input handling

With Qt 5.12 LTS, we introduce the new Input Handlers as a fully supported feature (earlier known as Pointer Handlers, a new approach for mouse, touch and gesture event handling). We have worked in this area for quite some time, and now it is ready for prime time. The main issue addressed with the new functionality is versatility, especially in multi-touch and multi-monitor applications. Those were areas where the previous functionalities have not been enough to tackle all use cases. The new functionalities enable many different input mechanisms in Qt applications, for example, based on hand gestures detected by a camera or a proximity sensor.

Input Handlers provides a QML API for recognizing and handling the most common mouse, touch and multi-touch gestures (press-hold-release, drag, swipe, and pinch) from mouse and touchscreen, in parallel across the scene. You can interact with multiple items simultaneously, across multiple screens when necessary. There is also a C++ API available, but that is still defined as a private API with Qt 5.12.
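For a taste of the declarative style, here is a small sketch using handlers from the QtQuick 2.12 module:

```qml
import QtQuick 2.12

Rectangle {
    width: 120; height: 120
    color: "steelblue"

    // Handlers are declared directly on the item; several can coexist,
    // and handlers on different items can operate in parallel.
    TapHandler {
        onTapped: console.log("tapped")
    }
    DragHandler { }    // drags its parent item by default
    PinchHandler { }   // two-finger pinch to scale/rotate
}
```

Because each handler grabs only the touch points it needs, a pinch on one item and a drag on another can happen at the same time.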

Python, Remote Objects and WebGL Streaming Plugin fully supported

Based on Qt 5.12 LTS we also provide an update to Qt for Python, initially released with Qt 5.11 as a technology preview. The Python community is very active and growing, so we are extremely happy to provide Qt for Python as a fully supported feature of Qt 5.12 LTS. You can get started conveniently with Qt for Python via the PyPI (Python Package Index).

In addition to Python, Qt Remote Objects and Qt WebGL Streaming Plugin are fully supported with Qt 5.12 LTS. Qt for WebAssembly continues to be a technology preview with Qt 5.12 LTS.

Tools for designers and developers

To get the maximum out of Qt 5.12 LTS we also have updates to our tooling underway. The upcoming Qt Design Studio 1.0 will leverage a pre-release of Qt 5.12 LTS and will support the final Qt 5.12 LTS when available. Qt Creator 4.8 is planned to be released together with Qt 5.12 LTS, offering a good set of new functionalities (e.g. support for multiple new programming languages and multiple simultaneous debugger sessions). As always, Qt Creator 4.8 will also work with earlier versions of Qt. In December we are releasing Qt 3D Studio 2.2, which is directly based on Qt 5.12 LTS and takes advantage of the numerous 3D related improvements of Qt 5.12 LTS.

Next steps towards the final release

After the Qt 5.12 LTS Beta 1 released today, we will push out multiple new beta releases using the online installer. With this approach, it is easy for users to test the new features and provide feedback. During the beta phase, we expect to publish new beta releases at one- to two-week intervals. When we have reached a sufficient level of maturity, we will create a release candidate of Qt 5.12 LTS. It will be made available directly via the online installer. We are not planning to publish separate blog posts for the subsequent beta releases and release candidate(s). In addition to binaries, source packages of each beta release are of course also available for those who prefer to build themselves.

Get Qt 5.12 LTS Beta

I hope many of you will install the Qt 5.12 LTS Beta releases, test them, and provide your feedback to help us complete Qt 5.12 LTS. For any issues you may find, please submit a detailed bug report to bugreports.qt.io (please remember to mention which beta you found the issue with, and check for duplicates and known issues). You are also welcome to join the discussions on the Qt Project mailing lists and developer forums, and to contribute to Qt.

If you do not yet have the Qt online installer, get it from the Qt Account or from the Qt Download page.

The post Qt 5.12 LTS Beta Released appeared first on Qt Blog.

ICS talks Embedded UX Design, Hands-On Qt on Raspberry Pi, and more at Qt World Summit 2018 in Boston

Integrated Computer Solutions (ICS), the largest independent source of Qt expertise in North America, is proud to be a gold sponsor and training partner at The Qt World Summit Boston. We have big things planned for this important industry event. ICS CEO Peter Winston will share secrets learned from 500+ successful Qt projects, including a roadmap for prioritizing work and staffing projects. And we’ll share our experience using Qt to create voice-based user experiences for Amazon Alexa.

We’re also offering a full slate of pre-conference training sessions presented by our expert Qt instructors. Sessions cover a spectrum of topics, from the intro-level UX Design for Embedded Devices to specialized offerings, including Hands-on Qt on Raspberry Pi and Qt for Medical Devices.

Whatever your interest, ICS has a course to advance your Qt knowledge and help ensure your project’s success. Check out the full list:

  • Hands-on Qt on RaspberryPi (1 day)
  • Advanced QML (1 day)
  • UX Design for Embedded Devices (1 day)
  • Optimizing Qt Applications on Limited Hardware (½ day am)
  • High-Performance Applications with CAN Bus (½ day am)
  • Qt for Medical Devices (½ day pm)
  • Network Application Dev with ZeroMQ and Qt (½ day pm)

Want even more Qt? If you’re in town for The Qt World Summit, check out ICS’ latest 3-day training class, Implementing Modern Apps with Qt Quick, taking place in nearby Waltham on Oct. 31 – Nov. 2.

The post ICS talks Embedded UX Design, Hands-On Qt on Raspberry Pi, and more at Qt World Summit 2018 in Boston appeared first on Qt Blog.

Qt 3D Studio 2.1 Released

We are happy to announce that Qt 3D Studio 2.1 has been released. Qt 3D Studio is a design tool for creating 3D user interfaces and adding 3D content into Qt-based applications. With Qt 3D Studio you can easily define the 3D content’s look & feel, animations and user interface states. Please refer to earlier blog posts and documentation for more details on Qt 3D Studio.

Editor

Sub-Presentations is a feature which allows embedding another Studio presentation or a QML file in a Studio presentation. This enables, for example, dividing the design work into smaller projects and creating re-usable components. Managing Sub-Presentations and adding them to views is now easier with the 2.1 release. The project browser shows all the Qt Quick files (.qml) and Qt 3D Studio presentations (.uip) imported into the main project, and these can then easily be added to a scene layer, or as a texture on an object, by dragging from the project browser to the scene. Please refer to the documentation for more details. Sub-Presentations are also now visible in the scene view, so that you can see the whole user interface when creating the design.

The Qt 3D Studio 2.1 release also contains a new option for scene preview when you are working with different camera views (perspective, top, etc.), which is super handy when aligning objects in the scene.

Scene Preview in Perspective View


 

Runtime

On the runtime side the main focus has been on performance and stability improvements. We have also been writing a new API that enables replacing the old runtime in the Qt 3D Studio Editor. In the future, the new API will also enable dynamic content creation from the application side. Stay tuned.

The profiling view has gained some additional data, as well as the possibility to change data input values, for example.

Profiling UI

Qt 3D Studio Viewer Profiling UI

As you may know, we introduced support for compressed textures in Qt Quick applications in Qt 5.11, and we are happy to announce that this support has also been added to the Qt 3D Studio runtime. So if you are targeting a device that supports ETC2 or ASTC compressed textures, you can improve loading time and save memory by compressing the textures. Of course, this is just the first step, and we will introduce asset compression management on the Editor side in future versions of Qt 3D Studio.

Getting started

Qt 3D Studio 2.1 is available through the Qt online installer under the Tools section. We also provide standalone offline installers which contain all you need to start designing Qt 3D Studio user interfaces. The online installer also contains a pre-built runtime for Qt 5.11, which is needed for developing Qt applications using a Qt 3D Studio UI. The Qt online installer and offline installers can be obtained from the Qt Download page, and commercial license holders can find the packages in their Qt Account. Binary packages are available for Windows and Mac. For instructions on building the editor & runtime on Linux, please refer to the README file. Please also note that the Qt 3D Studio runtime uses the Qt 3D module for rendering, which means that Qt 3D Studio 2.1 requires Qt 5.11.2.

Some example projects can be found under the examples folder in the installation directory. Additional examples and demo applications can be found in the https://git.qt.io/public-demos/qt3dstudio repository. If you encounter issues using Qt 3D Studio, or would like to suggest a new feature, please use the Qt 3D Studio project at https://bugreports.qt.io.

 

The post Qt 3D Studio 2.1 Released appeared first on Qt Blog.

Getting Started with QML

This tutorial shows how to develop a simple alarm application as an introduction to QML and Qt Quick Controls. Until 5.11 the documentation did have an example, “Getting Started Programming with Qt Quick“, but it was not really written for someone who is a complete beginner in QML, nor did it use Qt Quick Controls.

The example is available in 5.12 and can be found here: Getting Started Programming with Qt Quick.

This is a simple app where the main screen is a ListView, and where most fields are filled using the Tumbler QML type. The app stores the alarms in a ListModel. The application is very similar to the alarm application usually found on an Android or iOS phone.

User Interface

The main screen shows the list of saved alarms:

mainscreen

The detail screen becomes visible when you click a particular alarm. It lets you edit or delete existing alarms. You can also select days on which the alarm needs to be repeated.

detailscreen

The dialog screen is used for adding new alarms. It pops up when you click on the “+” RoundButton on the bottom of the main screen:

addalarms

 

Entering and Saving Alarms

Most data is entered with the Tumbler QML type. Below you can see part of AlarmDialog.qml, the dialog for entering new alarms.

    contentItem: RowLayout {
        RowLayout {
            id: rowTumbler
            Tumbler {
                id: hoursTumbler
                model: 24
                delegate: TumblerDelegate {
                    text: formatNumber(modelData)
                }
            }
            Tumbler {
                id: minutesTumbler
                model: 60
                delegate: TumblerDelegate {
                    text: formatNumber(modelData)
                }
            }
        }

        RowLayout {
            id: datePicker

            Layout.leftMargin: 20

            property alias dayTumbler: dayTumbler
            property alias monthTumbler: monthTumbler
            property alias yearTumbler: yearTumbler

            readonly property var days: [31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

            Tumbler {
                id: dayTumbler

                function updateModel() {
                    // Populate the model with days of the month. For example: [1, ..., 31]
                    var previousIndex = dayTumbler.currentIndex
                    var array = []
                    var newDays = datePicker.days[monthTumbler.currentIndex]
                    for (var i = 1; i <= newDays; ++i)
                        array.push(i)
                    dayTumbler.model = array
                    dayTumbler.currentIndex = Math.min(newDays - 1, previousIndex)
                }

                Component.onCompleted: updateModel()

                delegate: TumblerDelegate {
                    text: formatNumber(modelData)
                }
            }
            Tumbler {
                id: monthTumbler

                onCurrentIndexChanged: dayTumbler.updateModel()

                model: 12
                delegate: TumblerDelegate {
                    text: window.locale.standaloneMonthName(modelData, Locale.ShortFormat)
                }
            }
            Tumbler {
                id: yearTumbler

                // This array is populated with the next three years. For example: [2018, 2019, 2020]
                readonly property var years: (function() {
                    var currentYear = new Date().getFullYear()
                    return [0, 1, 2].map(function(value) { return value + currentYear; })
                })()

                model: years
                delegate: TumblerDelegate {
                    text: formatNumber(modelData)
                }
            }
        }
    }

Clicking on “OK” adds the new alarm to the ListModel with id alarmModel.

    onAccepted: {
        alarmModel.append({
            "hour": hoursTumbler.currentIndex,
            "minute": minutesTumbler.currentIndex,
            "day": dayTumbler.currentIndex + 1,
            "month": monthTumbler.currentIndex + 1,
            "year": yearTumbler.years[yearTumbler.currentIndex],
            "activated": true,
            "label": "",
            "repeat": false,
            "daysToRepeat": [
                { "dayOfWeek": 0, "repeat": false },
                { "dayOfWeek": 1, "repeat": false },
                { "dayOfWeek": 2, "repeat": false },
                { "dayOfWeek": 3, "repeat": false },
                { "dayOfWeek": 4, "repeat": false },
                { "dayOfWeek": 5, "repeat": false },
                { "dayOfWeek": 6, "repeat": false }
            ],
        })
    }

Some notes…

The program handles only the user interface and the ListModel that is storing the alarms. There is no code for storing the alarms physically in JSON format or SQLite, nor for the actual triggering of the alarm with sound or a popup window. Maybe that would be a nice challenge for someone starting with QML.
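As one possible starting point for that challenge, here is a sketch of persisting an alarm with the Qt Quick LocalStorage module (which wraps SQLite); the saveAlarm helper and the table layout are hypothetical, not part of the example:

```qml
import QtQuick.LocalStorage 2.0 as Sql

// Hypothetical helper: persist a single alarm from the ListModel into SQLite.
function saveAlarm(alarm) {
    var db = Sql.LocalStorage.openDatabaseSync("Alarms", "1.0", "Saved alarms", 100000)
    db.transaction(function(tx) {
        tx.executeSql("CREATE TABLE IF NOT EXISTS alarms(hour INT, minute INT, label TEXT)")
        tx.executeSql("INSERT INTO alarms VALUES(?, ?, ?)",
                      [alarm.hour, alarm.minute, alarm.label])
    })
}
```

Restoring the alarms at startup would then be a matter of running a SELECT in Component.onCompleted and appending the rows back into alarmModel.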

The post Getting Started with QML appeared first on Qt Blog.

Announcing QtCon Brazil 2018


We are happy to announce that the 2nd edition of the first Qt conference in Latin America (QtCon Brazil 2018) is going to happen from 8th to 11th November, in São Paulo. The first edition of QtCon Brazil happened last year, also in São Paulo, and brought together 180 participants from universities, government institutions, and companies working in the fields of IT services, simulation, medicine and biology, physics, embedded systems, mobile systems, and web services, just to mention a few. It was very revealing to see how much work has been built on top of Qt lately in Brazil. As an indirect result, the "Qt Brasil" Telegram group, created during QtCon Brasil 2017, currently has 320 participants, engaged in a number of daily discussions about all things related to Qt.

QtCon Brazil aims to be a forum where people interested in Qt can share their experiences, learn a bit more about successful use-cases like KDE, and see how Qt can leverage their business. And, of course, we want to take advantage of such an audience to spread the word about the Qt and KDE communities and the products we develop. Last year, we had Aleix Pol presenting a talk about Kirigami, Filipe talking about KDE Frameworks, and we had a booth showing off Plasma and our applications. Additionally, speakers from JetBrains, Toradex, and SUSE helped make QtCon Brazil a highly recommended conference, despite being such a recent endeavor.

As for the 2018 edition, we decided to expand it a bit by having two days of training sessions (in contrast to only one day in last year's edition) and keeping the weekend for talks from invited speakers, as well as those submitted through the call for presentations. QtCon Brazil wouldn't be a real event without all the support The Qt Company has been providing us. This year, they'll have a bigger presence, and we're very honored to have the Senior Vice President of Strategy at The Qt Company, Dr. Tuukka Ahoniemi, as a keynote speaker. Tuukka will present a talk about the Qt roadmap and another one about Qt licensing. As for KDE matters, we're very honored and happy to have our Akademy-awarded Plasma master, Kai Uwe, also as a keynote speaker. We're very thankful to KDE e.V. for supporting the conference and helping us make KDE technologies more visible to those interested in Qt-based solutions. Many thanks and kudos! I must admit, I'm excited; this is gonna be super-awesome! :)


QtCon Brazil 2017's training sessions were in incredibly high demand and totally sold out in two weeks. This year, the two initial days (8th and 9th November) will be dedicated to training sessions, which have been expanded to 8-hour sessions, in contrast to last year's 4-hour ones. We wanted to bring topics of wide interest to our audience while still keeping the sessions as affordable as last year's. The training sessions will cover three different topics this year: "Developing Qt Applications with Embedded Linux", "Developing Android Applications with Qt", and "Computer Graphics with Qt3D". The first one will be presented by Sergio Prado and Cleiton Bueno, two of the most skilled professionals doing Qt and embedded development in Brazil today. This training session will cover everything from generating a Qt-enabled Yocto distribution for embedded systems to general Qt development using features like hardware access, serial ports, and so on. This session's attendees will have hands-on activities with development boards kindly provided by Toradex. The other training sessions will be presented by myself. The one about Android development will cover the basics of QML; QML-based APIs for handling sensors, multimedia, and location; and the basics of RESTful-based development of mobile applications. Finally, the one about Qt3D will cover how Computer Graphics concepts like geometries, rendering, lighting, materials, shading, textures, and animations are currently supported in Qt3D.

Last but not least, we'd like to thank our sponsors: The Qt Company, Toradex, openSUSE, and KDE. QtCon Brazil 2018 is a reality also because of our partners B2Open, CISS, Embarcados, and Embedded Labworks.

For further information, please refer to the QtCon Brazil website or take a look at last year's pictures.

See you in São Paulo \o/ :)

Qt 5.11.2 Released

Qt 5.11.2 is released today. As a patch release it does not add any new functionality, but provides important bug fixes, security updates and other improvements.

Compared to Qt 5.11.1, the Qt 5.11.2 release provides fixes for more than 250 bugs and it contains around 800 changes in total. For details of the most important changes, please check the Change files of Qt 5.11.2.

The recommended way of getting Qt 5.11.2 is using the maintenance tool of the online installer. For new installations, please download the latest online installer from the Qt Account portal (commercial license holders) or from the qt.io Download page (open source).

Offline packages are also available for those who do not want to use the online installer.

The post Qt 5.11.2 Released appeared first on Qt Blog.

Qt Creator 4.7.1 released

We are happy to announce the release of Qt Creator 4.7.1!

Probably the most prominent fixes we made were for Windows:

  • The amount of resources that we used for MSVC detection could trigger virus scanners, so we limit this now.
  • We no longer force Qt Creator’s use of ANGLE for OpenGL onto user applications, so applications using desktop OpenGL once again run from Qt Creator without environment modifications.

You can find more details about other fixes in our change log.

Get Qt Creator 4.7.1

The open source version is available on the Qt download page, and you can find commercially licensed packages on the Qt Account Portal. Qt Creator 4.7.1 is also available through an update in the online installer. Please post issues in our bug tracker. You can also find us on IRC on #qt-creator on chat.freenode.net, and on the Qt Creator mailing list.

The post Qt Creator 4.7.1 released appeared first on Qt Blog.

Meet QSkinny, a lightweight Qt UI library

by Peter Hartmann (peter)
TL;DR: QSkinny offers a QWidget-like library built on top of the modern Qt graphic stack. It is using the Qt scene graph and is written fully in C++, thus making QML optional.
QSkinny offers a Qt API to write user interfaces in C++. It is inspired by the class design of QtWidgets, but runs on top of QtQuick. This means that QSkinny is hardware accelerated and can make use of e.g. animations and shaders. Below is a screenshot of a sample UI written with QSkinny:
1. How does it work?
Check out a simple "hello world" program written with QSkinny:
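A hello-world with QSkinny might look roughly like this; this is a sketch, and the exact headers and API details are assumptions based on the public QSkinny repository:

```cpp
#include <QGuiApplication>
#include <QskWindow.h>
#include <QskTextLabel.h>

int main(int argc, char* argv[])
{
    QGuiApplication app(argc, argv);

    QskWindow window;                                  // a QQuickWindow specialized for QSkinny
    window.addItem(new QskTextLabel("Hello World"));   // a widget-like label, rendered via the scene graph
    window.show();

    return app.exec();
}
```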
Looks familiar? Users of QtWidgets code will feel right at home when using QSkinny's API: There are similar controls in both worlds like text labels, push buttons, layouts, dialogs etc.
This diagram shows how QSkinny, QML and QtWidgets relate:
The layers in the diagram above are:
  • QSkinny: C++ UI controls
  • QML engine: declarative / JavaScript engine to parse UI files
  • QtQuick: basic layer of UI controls (containing e.g. x/y positioning and focus handling)
  • Qt scene graph: low-level drawing primitives to make use of hardware acceleration
  • OpenGL: API to support hardware-accelerated drawing
  • QtWidgets: C++ UI controls designed for desktop use
  • Qt raster paint engine: software (i.e. not hardware accelerated) drawing engine
  • QPainter API: interface for drawing images, text, shapes etc.
Since both QSkinny and QML elements are instances of QQuickItem, both technologies can be mixed: The QSkinny "buttons" example for instance is using a QskPushButton from QML.
2. Where is the code?
The code lives on GitHub and is licensed under LGPLv2:
Its original authors are Uwe Rathmann and Andrew Knight; the author of this blog post started contributing later.
3. Why is it called QSkinny?
It is slim. The sample screenshot above shows 3 speedometers, each of which consists of one QQuickItem, which itself contains several scene graph nodes: there is one node for the background, one for the needle, one for the labels etc. In QML, each subcontrol is a QQuickItem and therefore a QObject. It also separates the functionality of controls from their appearance; the latter is handled by so-called Skinlets. Those Skinlets live on the scene graph thread and handle the actual drawing. How exactly they are drawn is determined by a so-called Skin, and can be changed at runtime. This makes it easy to implement e.g. a daylight vs. nighttime theme or different brand schemes:
As an example, here is a skin setting all push buttons to have blue text on green background with a 10 pixel padding:
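Roughly sketched, such a skin subclass could look like this; the hint-setting calls follow QSkinny's aspect-based skin API, but treat the names and signatures as assumptions:

```cpp
// Sketch of a QskSkin subclass: every QskPushButton gets a green panel,
// blue text and 10px padding. Method names/signatures are assumptions.
class MySkin : public QskSkin
{
  public:
    MySkin()
    {
        setGradient(QskPushButton::Panel, Qt::green);                 // background fill
        setColor(QskPushButton::Text, Qt::blue);                      // label color
        setMargins(QskPushButton::Panel | QskAspect::Padding, 10);    // inner padding
    }
};
```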
Those skin properties are similar to properties in QML.
Upon a skin change, a programmer would just replace the colors, padding etc. with different values and then trigger a repaint; the corresponding animation can even interpolate between colors, as seen above (reduced frame rate due to GIF compression).
4. How mature is it?
QSkinny is currently used in a major automotive project which unfortunately cannot be shown in public yet. This means it is being stress tested for production, but still lacking in areas like documentation; moreover the controls currently implemented are aligned to the project needs so far.
Mixing QSkinny and QML is in a proof-of-concept state, because the application mentioned above is purely written in C++ and not using QML in any way.
This project is showing very good performance numbers, especially a fast startup time and low memory usage. Considering that there are lots of controls loaded right at application startup, those things do not seem to be an issue with QSkinny (at least for this project).
Also, since the developers working on it came from a QtWidgets background, they were familiar with the underlying concepts and productive right away with QSkinny.
Do you want to try it out? Just clone the repository above and let us know how it goes!
Contributions (source code changes, documentation etc.) will of course also be appreciated.

Qt 5.12 Alpha Released

I am pleased to announce that Qt 5.12 Alpha was released today. Prebuilt binaries are available in addition to the source code packages for the Alpha release.

Please check the Qt 5.12 New Features wiki to see what is coming with the Qt 5.12 release. Please note that the feature list is still in progress and should not be considered final before the first Beta release.

Our target is to start releasing regular beta releases quite soon after this Alpha release. The first beta release should be out within a few weeks; see the Qt 5.12 wiki for details.

Please take a tour and test Qt 5.12 Alpha. You can download the Qt 5.12 Alpha source packages from your Qt Account or from download.qt.io, and prebuilt binaries by using the Qt Online Installer.

Most importantly, remember to give us feedback by writing to the mailing lists and reporting bugs to Jira.

The post Qt 5.12 Alpha Released appeared first on Qt Blog.

An introduction to texture mapping in Qt 3D Studio, part I

This is the first in a series of blog posts that will explain the basics of texture mapping in Qt 3D Studio. To follow the examples, you need to have a basic understanding on how to work with objects and import assets in Qt 3D Studio. If you have not used Qt 3D Studio before, please spend a minute to browse through the Getting Started section in documentation.

In this first post, we will go through how to apply texture maps to objects in Qt 3D Studio. Additionally, we will explain and show examples of the most common types of texture maps.

What is texture mapping?

So, what is texture mapping? Basically, texture mapping is applying images to the surface of 3D objects to control properties such as color, transparency, specularity and more.

Texture maps can be used to make objects more realistic and better looking as well as to create special effects.

Included in the Qt 3D Studio asset library you will find a set of different textures that you can use; some of the images used in these blog posts are taken from there. It is of course possible to use any image you like.

Applying texture maps

To apply a texture map to an object, you first need to add the object to the scene. For example, drag an object from the basic objects palette to the scene.

Next, expand the object in the timeline and select the material (Default).

2018-09-04_14-51-40

Now you will see all properties, including texture maps, in the inspector palette. To apply a texture map to the object, simply click the Select drop-down menu for the desired map property, e.g. Diffuse Map.

2018-09-05_10-38-17

Diffuse Map

The diffuse map is the most used texture map. Applying an image as a diffuse map to a 3D object will wrap the image around the model.

Let’s try this out with a cube and a sphere in Qt 3D Studio. First you will need to import the diffuse map to your Studio project. In this example we will use the Wood7.png from the asset library but basically any image will do.

Sphere and cube without any texture map.

Once you have added the objects to the scene, select the default material in the timeline palette to display the properties of the object's material in the inspector palette.

Now, for each of the objects, click the Diffuse Map drop down and select the Wood7.png. You should now have a wooden sphere and a wooden cube.

Sphere and cube with wooden diffuse map

Bump Map

A bump map is used to apply small details (height differences) such as bumps or scratches to a 3D object. A bump map is basically a grayscale image that fakes height differences on the mesh. It does not alter the geometry of the object in any way; if you look closely, you will see that a bump map does not change the silhouette or shadow of the object.

Let’s try this on our sphere using bump.png found in the asset library.

A bump map

First, remove the diffuse map from the sphere and import bump.png to your project. Then, in the inspector palette, set bump.png as Bump Map for the material of the sphere.

A sphere with a bump map

Now we have some structure on our sphere instead of a flat surface. Note that there is a Bump Amount setting for the material which you can use to change the strength of the bump map. A positive Bump Amount value will display black as the lowest areas, while a negative value will display white as the lowest areas.

Opacity Map

Sometimes called an alpha map or transparency map, the opacity map is used to control the opacity of an object. An opacity map needs to be of an image format that supports transparency, e.g. PNG or DDS. Transparent parts of the image will render as transparent once applied as an opacity map to the object.

Let’s try adding an opacity map to our sphere; you can keep the bump map there if you wish. In this case I will use an image I have created myself, but there are many transparent images in the Alpha Maps directory of the asset library that can be used instead.

This is the image I am using, a black flower silhouette with transparent background.

An opacity map

Then I set it as Opacity Map for the material of the sphere. As you can see, it wraps around the sphere and displays the transparent areas of the image as transparent on the sphere as well.

 A sphere with an opacity map

Specular Map

A specular map controls the specularity of an object. In most cases a grayscale image is used, but it is possible to use a color image if you wish to add a color tint to the reflections. Black areas of the specular map add no reflections; the lighter the color gets, the more reflective the specific area will be. A specular map can, for example, be used if you have a diffuse map showing different materials: some reflective, some non-reflective.

The textures in this example are from www.3dtextures.me, a great online resource for texture maps.

In this example we have applied a tile diffuse texture to a sphere.

A sphere with a tile diffuse map

It looks nice, but we need to add some reflections to make it more realistic. In Qt 3D Studio, you make the object reflective by increasing the Specular Amount value (by default it is set to 0) for the material. In this example I set it to 0.1 which adds reflections to the whole object. If Specular Amount is 0, the specular map will have no effect.

A sphere with a diffuse map and a specular material

Now it’s time for the specular map to fine-tune the reflections. This is what the specular map looks like in this case; the darker the color of the map is, the more the reflections will be toned down. Black will remove all reflections, while white will leave reflections unchanged. In this case the specular map will add some variation to the reflections, tone down reflections on the sides of the tiles, and so on.

A specular map

Apply it as Specular Map to the sphere.

A sphere with a diffuse map and a specular map.

Summary

In this blog post we had a look at the most common types of texture maps and how you can use them in Qt 3D Studio to improve the appearance of basic 3D objects.

In the next blog post in this series, we will go through more types of texture maps in Qt 3D Studio.

The post An introduction to texture mapping in Qt 3D Studio, part I appeared first on Qt Blog.

API Changes in Clang

I’ve started contributing to Clang, in the hope that I can improve the API for tooling. This will eventually mean changes to the C++ API of Clang, the CMake buildsystem, and new features in the tooling. Hopefully I’ll remember to blog about changes I make.

The Department of Redundancy Department

I’ve been implementing custom clang-tidy checks and have become quite familiar with the AST Node API. Because of my background in Qt, I was immediately disoriented by some API inconsistency. Certain API classes had both getStartLoc and getLocStart methods, as well as both getEndLoc and getLocEnd etc. The pairs of methods return the same content, so at least one set of them is redundant.

I’m used to working on stable library APIs, but Clang is different in that it offers no API stability guarantees at all. As an experiment, we staggered the introduction of new API and removal of old API. I ended up replacing the getStartLoc and getLocStart methods with getBeginLoc for consistency with other classes, and replaced getLocEnd with getEndLoc. Both old and new APIs are in the Clang 7.0.0 release, but the old APIs are already removed from Clang master. Users of the old APIs should port to the new ones at the next opportunity as described here.
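For users porting a larger codebase, the rename can be applied mechanically. Below is a hedged sketch, demonstrated on a hypothetical scratch file rather than a real tool's sources; if you run the same sed over your own .cpp files, review the resulting diff, since a plain textual replace can also touch comments and strings.

```shell
# Hypothetical mechanical port of the renamed Clang AST getters,
# demonstrated on a scratch file.
mkdir -p /tmp/port-demo
cat > /tmp/port-demo/check.cpp <<'EOF'
SourceLocation Begin = Decl->getLocStart();
SourceLocation End = Decl->getLocEnd();
EOF
# Replace the removed getters with their modern equivalents.
sed -i -e 's/\bgetLocStart\b/getBeginLoc/g' \
       -e 's/\bgetStartLoc\b/getBeginLoc/g' \
       -e 's/\bgetLocEnd\b/getEndLoc/g' /tmp/port-demo/check.cpp
cat /tmp/port-demo/check.cpp
```

The `\b` word boundaries keep the replace from touching longer identifiers that merely contain the old names.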

Wait a minute, Where’s me dump()er?

Clang AST classes have a dump() method which is very useful for debugging. Several tools shipped with Clang are based on dumping AST nodes.

The SourceLocation type also provides a dump() method which outputs the file, line and column corresponding to a location. The problem with it though has always been that it does not include a newline at the end of the output, so the output gets lost in noise. This 2013 video tutorial shows the typical developer experience using that dump method. I’ve finally fixed that in Clang, but it did not make it into Clang 7.0.0.

In the same vein, I also added a dump() method to the SourceRange class. This prints out locations in an angle-bracket format that shows only what changed between the beginning and end of the range.

Let it bind

When writing clang-tidy checks using AST Matchers, it is common to factor out intermediate variables for re-use or for clarity in the code.

auto valueMethod = cxxMethodDecl(hasName("value"));
Finder->addMatcher(valueMethod.bind("methodDecl"));

clang-query has an analogous way to create intermediate matcher variables, but binding to them did not work. As of my recent commit, it is possible to create matcher variables and bind them later in a matcher:

let valueMethod cxxMethodDecl(hasName("value"))
match valueMethod.bind("methodDecl")
match callExpr(callee(valueMethod.bind("methodDecl"))).bind("methodCall")

Preload your Queries

Staying on the same topic, I extended clang-query with a --preload option. This allows starting clang-query with some commands already invoked, and then continue using it as a REPL:

bash$ cat cmds.txt
let valueMethod cxxMethodDecl(hasName("value"))

bash$ clang-query --preload cmds.txt somefile.cpp
clang-query> match valueMethod.bind("methodDecl")

Match #1:

somefile.cpp:4:2: note: "methodDecl" binds here
        void value();
        ^~~~~~~~~~~~

1 match.

Previously, it was only possible to run commands from a file without also creating a REPL using the -c option. The --preload option with the REPL is useful when experimenting with matchers and having to restart clang-query regularly. This happens a lot when modifying code to examine changes to AST nodes.

Enjoy!

Release 2.18.1: Use JavaScript Promises with Qt, Material Cards and an Improved API to Connect to REST Services

V-Play 2.18.1 introduces new components for embedding YouTube videos, for creating material cards and Tinder-like swipe cards. It also simplifies connecting to REST services, with the new HttpRequest component. V-Play 2.18.1 also adds several other fixes and improvements.

Important Note for iOS Live Client: The current store version of the V-Play Live Client app is built with V-Play 2.17.1 and does not include the latest features. If you want to use QML live code reloading with the latest V-Play features on iOS, you can build your own live clients with Live Client Module.

Connect to REST Services with JavaScript Promises and Image Processing from QML

You can now use the HttpRequest type as an alternative to the default XmlHttpRequest. It is available as a singleton item for all components that use import VPlayApps 1.0:

import VPlayApps 1.0
import QtQuick 2.0

App {
  Component.onCompleted: {
    HttpRequest
    .get("http://httpbin.org/get")
    .timeout(5000)
    .then(function(res) {
      console.log(res.status);
      console.log(JSON.stringify(res.header, null, 4));
      console.log(JSON.stringify(res.body, null, 4));
    })
    .catch(function(err) {
      console.log(err.message)
      console.log(err.response)
    });
  }
}

Similar to HttpRequest, which matches the DuperAgent Request type, other DuperAgent features are also available in V-Play with the Http prefix.

The HttpRequest type also supports response caching of your requests out-of-the-box.

The DuperAgent package, which brings the HttpRequest type, also contains an implementation of the Promises/A+ specification and offers an API similar to the Promises API in ES2017. The Promise type works independently of DuperAgent's HTTP features:

import VPlayApps 1.0
import QtQuick 2.0

App {
  Component.onCompleted: {
    var p1 = Promise.resolve(3);
    var p2 = 1337;
    var p3 = HttpRequest
    .get("http://httpbin.org/get")
    .then(function(resp) {
      return resp.body;
    });
    
    var p4 = Promise.all([p1, p2, p3]);
    
    p4.then(function(values) {
      console.log(values[0]); // 3
      console.log(values[1]); // 1337
      console.log(values[2]); // resp.body
    });
  }
}

Add QML Material Design Cards and Tinder Swipe Gestures

Create material design cards with the new AppCard. You can also use Tinder-like swipe gestures with cards. With the additional AppPaper and AppCardSwipeArea components, you can create fully custom card-like UI elements that can be swiped in a Tinder-like fashion.

appcard-tinder-swipe
import VPlayApps 1.0
import QtQuick 2.0

App {
  Page {
    AppCard {
      id: card
      width: parent.width
      margin: dp(15)
      paper.radius: dp(5)
      swipeEnabled: true
      cardSwipeArea.rotationFactor: 0.05
      
      // If the card is swiped out, this signal is fired with the direction as parameter
      cardSwipeArea.onSwipedOut: {
        console.debug("card swiped out: " + direction)
      }
      
      // … Card content
    }
  }
}

Embed YouTube Videos in Your Qt App

With the YouTubeWebPlayer component, you can now directly embed YouTube videos in your app with a simple QML API.

youtube-player

This is how you can use the player in QML:

import VPlayApps 1.0

App {
  NavigationStack {
    Page {
      title: "YouTube Player"
      
      YouTubeWebPlayer {
        videoId: "KQgqTYCfJjM"
        autoplay: true
      }
      
    }
  }
}

The component uses a WebView internally and the YouTube Iframe-Player API. To show how you can use the player in your app, you can have a look at the YouTube Player Demo App. It uses the YouTube Data API to browse playlists and videos of a configured channel.

Have a look at this demo to see how to integrate the Qt WebView module and use the YouTubeWebPlayer to play videos. The demo also shows how to load content from the YouTube Data API via http requests.

Use Sorting and Filters on QML ListModels with SortFilterProxyModel

You can now use SortFilterProxyModel, based on QSortFilterProxyModel, to apply filter and sorting settings to your QML ListModel items.

The following example shows the configured entries of the ListModel in a ListPage and allows sorting the list by the name property:

sortfilterproxymodel-simple
import VPlayApps 1.0
import QtQuick 2.0

App {
  // data model
  ListModel {
    id: fruitModel
    
    ListElement {
      name: "Banana"
      cost: 1.95
    }
    ListElement {
      name: "Apple"
      cost: 2.45
    }
    ListElement {
      name: "Orange"
      cost: 3.25
    }
  }
  
  // sorted model for list view
  SortFilterProxyModel {
    id: filteredTodoModel
    sourceModel: fruitModel
    
    // configure sorters
    sorters: [
      StringSorter {
        id: nameSorter
        roleName: "name"
      }]
  }
  
  // list page
  NavigationStack {
    ListPage {
      id: listPage
      title: "SortFilterProxyModel"
      model: filteredTodoModel
      delegate: SimpleRow {
        text: name
        detailText: "cost: "+cost
        style.showDisclosure: false
      }
      
      // add checkbox to activate sorter as list header
      listView.header: AppCheckBox {
        text: "Sort by name"
        checked: nameSorter.enabled
        updateChecked: false
        onClicked: nameSorter.enabled = !nameSorter.enabled
        anchors.horizontalCenter: parent.horizontalCenter
        height: dp(48)
      }
    } // ListPage
  } // NavigationStack
} // App

Combine Multiple Filter and Sorting Settings on QML ListModels

The SortFilterProxyModel helps you to combine multiple filter and sorting settings on a model. You can find a detailed example in our documentation: Advanced SortFilterProxyModel Example.

It also fetches data from a REST API using the new HttpRequest type.

sortfilterproxymodel

More Features, Improvements and Fixes

Here is a compressed list of further improvements with this update:

  • The Page type now features two new signals, appeared() and disappeared(). These signals fire when the page becomes active or inactive on a NavigationStack. They are convenience signals that avoid manual checks of Page::isCurrentStackPage.
  • Removes focus from the SearchBar text field when it becomes invisible.
  • Fixes a crash in the V-Play Live Client when using WikitudeArView.
  • When a device goes online, the App::isOnline property now becomes true only after a short delay. This is required, as otherwise the network adapter might not be ready yet, which can cause immediate network requests to fail.

For a list of additional fixes, please check out the changelog.

 

 

 


The post Release 2.18.1: Use JavaScript Promises with Qt, Material Cards and an Improved API to Connect to REST Services appeared first on V-Play Engine.

Qt 3D Studio 2.1 Beta 1 released

We are happy to announce the release of Qt 3D Studio 2.1 Beta 1. It is available via the online installer. Here’s a quick summary of the new features and functions in 2.1.

For detailed information about the Qt 3D Studio, visit the online documentation page.

Data Input

For data inputs, we are introducing a new data type: Boolean. Related to this, elements now have a Visible property which can be controlled with the Boolean data input. When item visibility is controlled by a data input, the eyeball icon in the timeline palette changes to orange to illustrate this.

data-input

Data inputs are now checked when a presentation is opened. If elements in the presentation use data inputs that are not found in the data input list (in the .uia file), a warning dialog is shown. The user can then choose to automatically remove all property controls that use invalid data inputs.

Additionally, the visualization of data input control for slides and the timeline has been improved. It is now much clearer which data input is in control.

data-input-2

For more details on data inputs, see documentation.

New Project Structure

There is a new project structure with presentations and qml streams folders. Presentation (.uip) files are now visible in the project palette, and it is also possible to have several .uip files in a project.

project-palette

In the project palette, it is now possible to double-click an asset to open it in the application associated with it by the operating system. .uip files will open in Qt 3D Studio.

Sub-Presentations

A lot of improvements have been made to working with sub-presentations to make it more convenient. Some of the key improvements are:

  • You can create a new presentation in the Studio without leaving your current project.
  • With the possibility to have many .uip files in one project, it is easy to share assets between presentations.
  • Importing both .uip and .qml presentations is done the same way you import other assets.
  • Assign sub-presentations to meshes or layers by dragging and dropping from the project palette.

For more details on sub-presentations, see documentation.

Installation

As mentioned, Qt 3D Studio 2.1 Beta 1 is available via the Qt online installer. You’ll find it under the preview section. If you have a previous installation, please use the Update feature in the Qt Maintenance Tool to get the latest version. Version 2.1 will be installed alongside the old version. The Qt online installer can be downloaded from www.qt.io/download, while commercial license holders can find the packages at account.qt.io.

The post Qt 3D Studio 2.1 Beta 1 released appeared first on Qt Blog.

Machine Learning: Add Image Classification for iOS and Android with Qt and TensorFlow

Artificial intelligence and smart applications are steadily becoming more popular. Companies strongly rely on AI systems and machine learning to make faster and more accurate decisions based on their data.

This guide provides an example for Image Classification and Object Detection built with Google’s TensorFlow Framework.

 

By reading this post, you will learn how to:

  • Build TensorFlow for Android, iOS and Desktop Linux.
  • Integrate TensorFlow in your Qt-based V-Play project.
  • Use the TensorFlow API to run Image Classification and Object Detection models.

Why Add Artificial Intelligence to Your Mobile App

As of 2017, a quarter of organisations already invest more than 15 percent of their IT budget in machine learning. With over 75 percent of businesses spending money and effort in Big Data, machine learning is set to become even more important in the future.

Real-World Examples of Machine Learning

Artificial intelligence is on its way to becoming a business-critical technology, with the goal of improving decision-making with a far more data-driven approach. Regardless of the industry, machine learning helps to make computing processes more efficient, cost-effective, and reliable. For example, it is used for:

  • Financial Services: To track customer and client satisfaction, react to market trends or calculate risks. E.g. PayPal uses machine learning to detect and combat fraud.
  • Healthcare: For personalised health monitoring systems, to enable healthcare professionals to spot potential anomalies early on.
  • Retail: Offer personalised recommendations based on your previous purchases or activity. For example, recommendations on Netflix or Spotify.
  • Voice Recognition Systems, like Siri or Cortana.
  • Face Recognition Systems, like DeepFace by Facebook.
  • Spam Email Detection and Filtering.

Image Classification and Object Detection Example

TensorFlow is Google’s open-source machine learning framework. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs) and architectures (desktops, clusters of servers, mobile and edge devices). It supports Linux, macOS, Windows, Android and iOS, among others.

qt-machinelearning-tensorflow

About TensorFlow

TensorFlow comes in different flavors. The main one is the full TensorFlow framework. Another is TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices. However, TensorFlow Lite is currently in a technology preview state: not all TensorFlow features are supported yet, although it is set to become the reference for mobile and embedded devices in the near future.

There is plenty of online material about how to build applications with TensorFlow. To begin with, we highly recommend the free ebook Building Mobile Applications with TensorFlow by Pete Warden, lead of the TensorFlow mobile/embedded team.

The example in this guide uses the original TensorFlow flavor. It shows how to integrate TensorFlow with Qt and V-Play to create a simple multiplatform app that includes two pretrained neural networks, one for image classification and another one for object detection. The code of this example is hosted on GitHub.

Clone the Repository

To clone this repository, execute the following command. Clone it recursively, since the TensorFlow repository is inside it. The TensorFlow version included is 1.8.

git clone --recursive https://github.com/V-Play/TensorFlowQtVPlay.git

Many thanks to the project developers for sharing this example and preparing this guide:

  • Javier Bonilla, Ph.D. in Computer Science, doing research on modeling, optimization and automatic control of concentrating solar thermal facilities and power plants at CIEMAT – Plataforma Solar de Almería (PSA), one of the largest concentrating solar technology research, development and test centers in Europe.
  • Jose Antonio Carballo, Mechanical Engineer and Ph.D. student from the University of Almería, working on his doctoral thesis on modeling, optimization and automatic control for an efficient use of water and energy resources in concentrating solar thermal facilities and power plants at CIEMAT – Plataforma Solar de Almería (PSA).

Advantages of using V-Play and Qt with TensorFlow

V-Play and Qt are wonderful tools for multiplatform applications. Qt has a rich set of ready-to-use multiplatform components for diverse areas such as multimedia, network and connectivity, graphics, input methods, sensors, data storage and more. V-Play further eases deployment to mobile and embedded devices and adds nice features such as resolution and aspect-ratio independence as well as additional components and controls. V-Play also provides easier access to native features, as well as plugins for monetization, analytics, cloud services and much more.

One nice feature of V-Play is that it is not restricted to mobile devices, so you can test and prototype your app on your development computer, which is certainly faster than compiling and deploying to emulators. You can even use V-Play live reloading to see changes in code almost instantaneously. Live reloading is also supported on Android and iOS devices, which is perfect for fine-tuning changes or testing code snippets on mobile devices.

So TensorFlow provides the machine learning framework, whereas V-Play and Qt facilitate app deployment to multiple platforms: desktop and mobile.

How to Build TensorFlow for Qt

We need to build TensorFlow for each platform and architecture. The recommended way is to use the Bazel build system. However, in this example we will explore how to use make to build TensorFlow for Linux, Android and iOS. Check that you have installed all the required libraries and tools listed in the TensorFlow Makefile readme.

If you are interested in building TensorFlow for macOS, check the Supported Systems section of the Makefile readme. For Windows, check the TensorFlow CMake build.

If you have issues during the compilation process, have a look at the open TensorFlow issues or post your problem there to get help.

Once you have built TensorFlow, your app can link against these three libraries: libtensorflow-core.a, libprotobuf.a and libnsync.a.

Note: When you build for different platforms and architectures in the same TensorFlow source code folder, TensorFlow may delete previously compiled libraries, so make sure you back them up. These are the paths where you can find those libraries, with MAKEFILE_DIR=./tensorflow/tensorflow/contrib/makefile:

  • Linux
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf/lib64
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/default.linux.c++11/
  • Android ARM v7
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib/android_armeabi-v7a
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf_android/armeabi-v7a/lib/
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/armeabi-v7a.android.c++11/
  • Android x86
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib/android_x86
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf_android/x86/lib/
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/x86.android.c++11/
  • iOS
    • libtensorflow-core: $(MAKEFILE_DIR)/gen/lib
    • libprotobuf: $(MAKEFILE_DIR)/gen/protobuf_ios/lib/
    • libnsync: $(MAKEFILE_DIR)/downloads/nsync/builds/arm64.ios.c++11/
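The backup suggested in the note above can be sketched as a small script. This is a hedged sketch under the assumptions that you built for Linux and use the paths from the list; the backup directory name is arbitrary, and the copy is skipped silently for libraries that are not built yet.

```shell
# Hedged sketch: copy the freshly built static libraries aside before
# rebuilding for another platform, so the next build cannot delete them.
MAKEFILE_DIR=./tensorflow/tensorflow/contrib/makefile
BACKUP=/tmp/tf-libs/linux          # example label; one folder per platform
mkdir -p "$BACKUP"
for lib in "$MAKEFILE_DIR/gen/lib/libtensorflow-core.a" \
           "$MAKEFILE_DIR/gen/protobuf/lib64/libprotobuf.a" \
           "$MAKEFILE_DIR/downloads/nsync/builds/default.linux.c++11/libnsync.a"
do
  if [ -f "$lib" ]; then
    cp "$lib" "$BACKUP/"           # skipped when the library is not built yet
  fi
done
```

For Android or iOS builds, swap in the corresponding paths from the list above and use a different backup folder per target.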

The shell commands in the following sections only work if executed inside the main TensorFlow folder.

Building for Linux

We just need to execute the following script for Linux compilation.

./tensorflow/contrib/makefile/build_all_linux.sh

If you are compiling for the 64-bit version, you might run into the following compilation error:

ld: cannot find -lprotobuf

In this case, change the $(MAKEFILE_DIR)/gen/protobuf-host/lib references to $(MAKEFILE_DIR)/gen/protobuf-host/lib64 in the tensorflow/tensorflow/contrib/makefile/Makefile file.
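That edit can also be scripted. The following is a hedged sketch demonstrated on a scratch copy of the Makefile (the scratch content is made up; in a real tree the target is tensorflow/tensorflow/contrib/makefile/Makefile, and you should back it up first):

```shell
# Hedged sketch of the lib64 fix, on a scratch copy of the Makefile.
mkdir -p /tmp/tf-fix
cat > /tmp/tf-fix/Makefile <<'EOF'
PROTOBUF_LIB := $(MAKEFILE_DIR)/gen/protobuf-host/lib
EOF
# \b keeps the replace from touching paths that already end in lib64.
sed -i 's|protobuf-host/lib\b|protobuf-host/lib64|g' /tmp/tf-fix/Makefile
cat /tmp/tf-fix/Makefile
```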

With some GCC 8 compiler versions, you can get the following error:

error: ‘void* memset(void*, int, size_t)’ clearing an object of type ‘struct
nsync::nsync_counter_s_’ with no trivial copy-assignment; use value-initialization
instead [-Werror=class-memaccess]

To avoid it, include the -Wno-error=class-memaccess flag in the PLATFORM_CFLAGS variable for Linux (case "$target_platform" in linux) in the tensorflow/tensorflow/contrib/makefile/compile_nsync.sh file.
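The flag can be appended with a one-off sed as well. This is a hedged sketch on a scratch copy of compile_nsync.sh; the PLATFORM_CFLAGS value shown is invented for the demo, and only the appended -Wno-error=class-memaccess flag matters.

```shell
# Hedged sketch of the GCC 8 workaround, on a scratch copy of
# compile_nsync.sh (real file: tensorflow/tensorflow/contrib/makefile/).
mkdir -p /tmp/nsync-fix
cat > /tmp/nsync-fix/compile_nsync.sh <<'EOF'
case "$target_platform" in
  linux)
    PLATFORM_CFLAGS="-pthread -std=c++11"
    ;;
esac
EOF
# Append the flag only inside the linux) branch of the case statement.
sed -i '/linux)/,/;;/ s/PLATFORM_CFLAGS="\(.*\)"/PLATFORM_CFLAGS="\1 -Wno-error=class-memaccess"/' \
    /tmp/nsync-fix/compile_nsync.sh
grep class-memaccess /tmp/nsync-fix/compile_nsync.sh
```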

Building for Android (on Linux)

First, you need to set the NDK_ROOT environment variable to point to your NDK root path. You can download the NDK from this link. Second, you need to compile the cpufeatures library in the NDK. This example was tested with Android NDK r14e.

mkdir -p $NDK_ROOT/sources/android/cpufeatures/jni
cp $NDK_ROOT/sources/android/cpufeatures/cpu-features.* \
   $NDK_ROOT/sources/android/cpufeatures/jni
cp $NDK_ROOT/sources/android/cpufeatures/Android.mk \
   $NDK_ROOT/sources/android/cpufeatures/jni
$NDK_ROOT/ndk-build NDK_PROJECT_PATH="$NDK_ROOT/sources/android/cpufeatures" \
   NDK_APPLICATION_MK="$NDK_ROOT/sources/android/cpufeatures/Android.mk"

Then, execute the following script to compile TensorFlow for the ARM v7 architecture.

./tensorflow/contrib/makefile/build_all_android.sh

If you want to compile for x86 platforms, for instance for debugging in an Android emulator, execute the same command with the following parameters.

Note: If you face issues compiling for Android x86 with Android NDK r14, use Android NDK r10e and set NDK_ROOT accordingly.

./tensorflow/contrib/makefile/build_all_android.sh -a x86

The Android architectures supported by TensorFlow are the following.

-a [architecture] Architecture of target android [default=armeabi-v7a] (supported
architecture list: arm64-v8a armeabi armeabi-v7a mips mips64 x86 x86_64 tegra)

Building for iOS (on macOS)

The following script is available to build TensorFlow for iOS on macOS.

./tensorflow/contrib/makefile/build_all_ios.sh

You might get the following error while building TensorFlow for iOS:

error: thread-local storage is not supported for the current target

You can avoid it by performing the change given in this comment; that is, changing -D__thread=thread_local \ to -D__thread= \ in the Makefile (for the i386 architecture only).
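For reference, the edit is a single textual replace. This hedged sketch demonstrates it on a scratch copy containing just the affected line; the real target is the i386 section of the TensorFlow Makefile.

```shell
# Hedged sketch of the iOS thread-local workaround, on a scratch copy.
mkdir -p /tmp/ios-fix
cat > /tmp/ios-fix/Makefile <<'EOF'
	-D__thread=thread_local \
EOF
# Drop the thread_local definition so __thread expands to nothing.
sed -i 's/-D__thread=thread_local/-D__thread=/' /tmp/ios-fix/Makefile
cat /tmp/ios-fix/Makefile
```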

How to Use TensorFlow in Your Qt Mobile App

The source code of the app is in a GitHub repository. This section walks through the app code.

Link TensorFlow in Your Project

The following code shows the lines added to our qmake project file in order to include the TensorFlow header files and link against TensorFlow libraries depending on the target platform.

For Android, ANDROID_NDK_ROOT was set to the path of Android NDK r14e and ANDROID_NDK_PLATFORM was set to android-21 in Qt Creator (Project -> Build Environment).

# TensorFlow - All
TF_MAKE_PATH = $$PWD/tensorflow/tensorflow/contrib/makefile
INCLUDEPATH += $$PWD/tensorflow/ \
               $$TF_MAKE_PATH/gen/host_obj \
               $$TF_MAKE_PATH/downloads/eigen

# TensorFlow - Linux
linux:!android {
      INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf/include
      LIBS += -L$$TF_MAKE_PATH/downloads/nsync/builds/default.linux.c++11/ \
              -L$$TF_MAKE_PATH/gen/protobuf/lib64/ \
              -L$$TF_MAKE_PATH/gen/lib/ \
              -lnsync \
              -lprotobuf \
              -ltensorflow-core \
              -ldl
      QMAKE_LFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive
}

# TensorFlow - Android
android {
    QT += androidextras
    LIBS += -ltensorflow-core -lprotobuf -lnsync -lcpufeatures \
            -L${ANDROID_NDK_ROOT}/sources/android/cpufeatures/obj/local/$$ANDROID_TARGET_ARCH
    QMAKE_LFLAGS += -Wl,--allow-multiple-definition -Wl,--whole-archive

    # Platform: armv7a
    equals(ANDROID_TARGET_ARCH, armeabi-v7a) | equals(ANDROID_TARGET_ARCH, armeabi):\
    {
        INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf_android/armeabi-v7a/include
        LIBS += -L$$TF_MAKE_PATH/gen/lib/android_armeabi-v7a \
                -L$$TF_MAKE_PATH/gen/protobuf_android/armeabi-v7a/lib \
                -L$$TF_MAKE_PATH/downloads/nsync/builds/armeabi-v7a.android.c++11
    }
    # Platform: x86
    equals(ANDROID_TARGET_ARCH, x86):\
    {
        INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf_android/x86/include
        LIBS += -L$$TF_MAKE_PATH/gen/lib/android_x86 \
                -L$$TF_MAKE_PATH/gen/protobuf_android/x86/lib \
                -L$$TF_MAKE_PATH/downloads/nsync/builds/x86.android.c++11
    }
}

# TensorFlow - iOS - Universal libraries
ios {
    INCLUDEPATH += $$TF_MAKE_PATH/gen/protobuf-host/include
    LIBS += -L$$PWD/ios/lib \
            -L$$PWD/ios/lib/arm64 \
            -framework Accelerate \
            -Wl,-force_load,$$TF_MAKE_PATH/gen/lib/libtensorflow-core.a \
            -Wl,-force_load,$$TF_MAKE_PATH/gen/protobuf_ios/lib/libprotobuf.a \
            -Wl,-force_load,$$TF_MAKE_PATH/downloads/nsync/builds/arm64.ios.c++11/libnsync.a
}

Create the GUI with QML

The GUI is pretty simple: there are only two pages.

  • Live video output page: The user can switch between the front and rear cameras.
  • Settings page: Sets the minimum confidence level and selects the model: one for image classification and another for object detection.

Main.qml

In main.qml, there is a Storage component to load and save the minimum confidence level, the selected model, and whether the inference time is shown. The inference time is the time the TensorFlow neural network model takes to process an image. The storage keys are kMinConfidence, kModel and kShowTime. Their default values are given by defMinConfidence, defModel and defShowTime. The actual values are stored in minConfidence, model and showTime.

// Storage keys
readonly property string kMinConfidence: "MinConfidence"
readonly property string kModel: "Model"
readonly property string kShowTime: "ShowTime"

// Default values
readonly property double defMinConfidence: 0.5
readonly property string defModel: "ImageClassification"
readonly property bool defShowTime: false

// Properties
property double minConfidence
property string model
property bool showTime

// Local storage component
Storage {
    id: storage

    Component.onCompleted: {
        minConfidence = getValue(kMinConfidence) !== undefined ?
                        getValue(kMinConfidence) : defMinConfidence
        model = getValue(kModel) !== undefined ? getValue(kModel) : defModel
        showTime = getValue(kShowTime) !== undefined ? getValue(kShowTime) :
                                                       defShowTime
    }
}

There is a Navigation component with two NavigationItem elements, each one a Page. The VideoPage shows the live camera video output. It reads the minConfidence, model and showTime properties. The AppSettingsPage also reads those properties and saves their new values in the onMinConfidenceChanged, onModelChanged and onShowTimeChanged handlers.

import VPlayApps 1.0
import VPlay 2.0
import QtQuick 2.0

App {
    id: app

    ....

    Navigation {

        NavigationItem{
            title: qsTr("Live")
            icon: IconType.rss

            NavigationStack{
                VideoPage{
                    id: videoPage
                    minConfidence: app.minConfidence
                    model: app.model
                    showTime: app.showTime
                }
            }
        }

        NavigationItem{
            title: qsTr("Settings")
            icon: IconType.sliders

            NavigationStack{
                AppSettingsPage{
                    id: appSettingsPage
                    minConfidence: app.minConfidence
                    model: app.model
                    showTime: app.showTime

                    onMinConfidenceChanged: {
                        app.minConfidence = appSettingsPage.minConfidence
                        storage.setValue(kMinConfidence,app.minConfidence)
                    }

                    onModelChanged: {
                        app.model = appSettingsPage.model
                        storage.setValue(kModel,app.model)
                    }

                    onShowTimeChanged: {
                        app.showTime = appSettingsPage.showTime
                        storage.setValue(kShowTime,app.showTime)
                    }
                }
            }
        }
    }
}

VideoPage.qml

A screenshot of the VideoPage for object detection on iOS is shown below.

qt-machinelearning-tensorflow-VideoPage

The QtMultimedia module is loaded in this page.

import VPlayApps 1.0
import QtQuick 2.0
import QtMultimedia 5.9

The VideoPage has the minConfidence, model and showTime properties. It also has a property, cameraIndex, to store the index of the selected camera.

// Properties
property double minConfidence
property string model
property bool showTime

// Selected camera index
property int cameraIndex: 0

There is a Camera component, which is started or stopped when the page is shown or hidden. It has two boolean properties: the first is true if there is at least one camera, and the second is true if there are at least two cameras.

Camera{
    id: camera
    property bool availableCamera:  QtMultimedia.availableCameras.length>0
    property bool availableCameras: QtMultimedia.availableCameras.length>1
}

// Start and stop camera
onVisibleChanged: {
    if (visible) camera.start()
    else camera.stop()
}

There is also a button in the navigation bar to switch the camera. This button is only visible when more than one camera is available. The initialRotation() function is required due to Qt bug QTBUG-37955, which incorrectly rotates the front camera video output on iOS.

// Right-hand side buttons
rightBarItem: NavigationBarRow {

    // Switch camera button
    IconButtonBarItem {
        title: qsTr("Switch camera")
        visible: QtMultimedia.availableCameras.length>1
        icon: IconType.videocamera
        onClicked: {
            cameraIndex = (cameraIndex+1) % QtMultimedia.availableCameras.length
            camera.deviceId = QtMultimedia.availableCameras[cameraIndex].deviceId
            videoOutput.rotation = initialRotation()
        }
    }
}

// BUG: front camera rotation on iOS [QTBUG-37955]
function initialRotation()
{
    return Qt.platform.os === "ios" && camera.position === Camera.FrontFace ? 180 : 0
}

When no camera is detected, an icon and a message are shown to the user.

// No camera found
Item{
    anchors.centerIn: parent
    width: parent.width
    visible: QtMultimedia.availableCameras.length<=0
    Column{
        width: parent.width
        spacing: dp(25)

        Icon{
            anchors.horizontalCenter: parent.horizontalCenter
            icon: IconType.videocamera
            scale: 3
        }

        AppText{
            anchors.horizontalCenter: parent.horizontalCenter
            text: qsTr("No camera detected")
        }
    }
}

When the camera is loading, an icon with a cool animation and a message are also
shown to the user.

// Loading camera
Item{
    anchors.centerIn: parent
    width: parent.width
    visible: QtMultimedia.availableCameras.length>0 &&
             camera.cameraStatus !== Camera.ActiveStatus
    Column{
        width: parent.width
        spacing: dp(25)

        Icon{
            id: videoIcon
            anchors.horizontalCenter: parent.horizontalCenter
            icon: IconType.videocamera
            scale: 3

            SequentialAnimation {
                   running: true
                   loops: Animation.Infinite
                   NumberAnimation { target: videoIcon; property: "opacity";
                   from: root.maxVal; to: root.minVal; duration: root.aTime }
                   NumberAnimation { target: videoIcon; property: "opacity";
                   from: root.minVal; to: root.maxVal; duration: root.aTime }
             }
        }

        AppText{
            anchors.horizontalCenter: parent.horizontalCenter
            text: qsTr("Loading camera") + " ..."
        }
    }
}

The camera video output fills the whole page. It is only visible when at least one camera is detected and active. We define a filter, objectsRecognitionFilter, which is implemented in a C++ class. This filter gets each video frame, transforms it into input data for TensorFlow, invokes TensorFlow and draws the results over the video frame. The C++ class is introduced later.

VideoOutput {
    id: videoOutput
    anchors.fill: parent
    source: camera
    visible: camera.availableCamera && camera.cameraStatus == Camera.ActiveStatus
    autoOrientation: true
    fillMode: VideoOutput.PreserveAspectCrop
    rotation: initialRotation()

    filters: [objectsRecognitionFilter]
}

AppSettingsPage.qml

A screenshot of this page on iOS is shown below.

qt-machinelearning-tensorflow-AppSettingsPage

The AppSettingsPage allows the user to select the minimum confidence level for
the detections with a slider. The slider value is stored in minConfidence.

AppSlider {
    id: slider
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(20)
    from:  0
    to:    1
    value: minConfidence
    live:  true
    onValueChanged: minConfidence = value
}

The inference time, the time TensorFlow takes to process an image, can also be shown on the screen. It can be enabled or disabled by means of a switch. The boolean value is stored in showTime.

AppSwitch{
    anchors.verticalCenter: parent.verticalCenter
    id: sShowInfTime
    checked: showTime
    onToggled: showTime = checked
}

There are also two exclusive check boxes to select the model: one for image classification and another for object detection. The selected model is stored in the `model` property. If the currently selected model is unchecked, the other model is automatically checked, since one of them must always be selected.

ExclusiveGroup { id: modelGroup }

AppCheckBox{
    id: chkClassification
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(20)
    text: qsTr("Image classification")
    exclusiveGroup: modelGroup
    checked: model === "ImageClassification"
    onCheckedChanged: if (checked) model = "ImageClassification";
                      else chkDetection.checked = true
}

AppCheckBox{
    id: chkDetection
    anchors.horizontalCenter: parent.horizontalCenter
    width: parent.width - 2*dp(20)
    text: qsTr("Object detection")
    exclusiveGroup: modelGroup
    checked: model === "ObjectDetection"
    onCheckedChanged: if (checked) model = "ObjectDetection";
                      else chkClassification.checked = true
}

C++ TensorFlow Interface and Video Frame Filter

Two main tasks are programmed in C++.

  • Interfacing with TensorFlow
  • Managing video frames

The source code of the C++ classes is not presented here in detail; instead, the process is sketched and explained, with links to further details. Nevertheless, you can have a look at the source code hosted on GitHub.

Interfacing with Tensorflow

The Tensorflow C++ class is a wrapper that interfaces with the TensorFlow library; check the code for a detailed description of this class, and the TensorFlow C++ API documentation for further information.

Managing video frames

The workflow for managing video frames is shown in the next flow diagram.

qt-machinelearning-tensorflow-videoframeWorkflow

An object filter, ObjectsRecognizer, is applied to the VideoOutput to process frames. This filter is implemented by the C++ classes ObjectsRecogFilter and ObjectsRecogFilterRunable; for further information about how to apply filters, check Introducing video filters in Qt Multimedia.

The filter is processed in the `run` method of the ObjectsRecogFilter class. The general steps are the following.

  1. We need to convert our QVideoFrame to a QImage so we can manipulate it.
  2. We check if TensorFlow is running. Since TensorFlow is executed in another thread, we use the QMutex and QMutexLocker classes to check in a thread-safe way whether it is running. A nice example is given in the QMutexLocker class documentation.
    • If TensorFlow is running – nothing is done.
    • If TensorFlow is NOT running – we execute it in another thread by means of the C++ classes TensorflowThread and WorkerTF; signals and slots are used to communicate between the main thread and these classes, check [QThreads general usage](https://wiki.qt.io/QThreads_general_usage) for further details. We provide the video frame image as input. When TensorFlow finishes, we store the results given by the selected model, also by means of signals and slots.
  3. We get the stored results (if any) and apply them to the current video frame image. If our model is image classification, we just draw the name and score of the top image class if the score is above the minimum confidence value. If our model is object detection, we iterate over all the detections and draw the bounding boxes, names of objects and confidence values if they are above the minimum confidence level. There is an auxiliary C++ class, AuxUtils, which provides functions to draw on frames, such as drawText and drawBoxes.
  4. The last step is to convert our QImage back to a QVideoFrame, which is processed by our QML VideoOutput component, and then we go back to process a new video frame.
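
The thread-safe check in step 2 is essentially a try-lock gate: a frame either acquires the "TensorFlow is idle" slot and starts inference, or it is passed through with only the previously stored results drawn on it. Below is a minimal, self-contained sketch of that gating logic; it uses a std::atomic flag instead of the QMutex/QMutexLocker the app actually uses, and the FrameGate name and its methods are illustrative, not taken from the app.

```cpp
#include <atomic>

// Hypothetical stand-in for the per-frame gating described in step 2:
// each incoming frame tries to start inference; if the worker is still
// busy, the frame is passed through and only cached results are drawn.
class FrameGate {
public:
    // Returns true if the caller acquired the inference slot (worker idle).
    bool tryStartInference() {
        bool expected = false;
        return m_running.compare_exchange_strong(expected, true);
    }
    // Called when the worker thread reports that inference has finished.
    void finishInference() { m_running.store(false); }

private:
    std::atomic<bool> m_running{false};
};

// Simulates frames arriving while one long-running inference is in flight:
// the first frame starts inference, later frames are skipped (they reuse
// the stored results) until finishInference() is called.
inline int countSkippedFrames(FrameGate& gate, int frames) {
    int skipped = 0;
    for (int i = 0; i < frames; ++i) {
        if (!gate.tryStartInference())
            ++skipped; // busy: draw previously stored results only
    }
    return skipped;
}
```

In the real filter, finishInference() would be triggered by the signal the worker thread emits when TensorFlow completes, and the stored results would be applied in step 3.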

Neural Network Models for Image Classification and Object Detection

We need neural network models to perform the image classification and object detection tasks. Google provides a set of pre-trained models for this. The file extension for TensorFlow frozen neural network models is .pb. The example on GitHub already includes MobileNet models: MobileNet V2 1.0_224 for image classification and SSD MobileNet V1 COCO for object detection. MobileNets are a class of efficient neural network models for mobile and embedded vision applications.

Image Classification Models

Image classification models can be downloaded from the TensorFlow-Slim image classification model library. Our example code is designed for MobileNet neural networks. For example, download mobilenet_v2_1.0_224.tgz, uncompress it, and copy the mobilenet_v2_1.0_224_frozen.pb file to our assets folder as image_classification.pb. The image size in this case, 224 x 224 pixels, is set in the constants fixed_width and fixed_height defined in our Tensorflow C++ class. The output layer, MobilenetV2/Predictions/Reshape_1 in this case, is also specified in the constant list variable listOutputsImgCla in the Tensorflow class. Labels for these models are already set in the image_classification_labels.txt file and belong to the ImageNet classes.

Object Detection Models

Check the TensorFlow detection model zoo for a comprehensive list of object detection models. Any SSD MobileNet model can be used. This kind of model provides caption, confidence and bounding box outputs for each detected object. For instance, download ssd_mobilenet_v1_coco_2018_01_28.tar.gz, uncompress it, and copy frozen_inference_graph.pb to our assets folder as object_detection.pb. Labels for this kind of model are already given in the object_detection_labels.txt file and belong to the COCO labels.

Known Issues

Although the presented example is functional, there is still room for improvement, particularly in the C++ code, where naive solutions were chosen for simplicity.

There are also some issues to address; the following list summarizes them.

  • The app performance is much higher on iOS than on Android, even on high-end mobile devices. Finding the root cause of this requires further investigation.
  • The sp method of the AuxUtils C++ class is intended to provide font pixel sizes independent of the screen size and resolution, but it does not work for all devices. Therefore, the same implementation as the one provided by the V-Play QML sp function should be considered.
  • Asset files can be easily accessed from QML and Qt classes. For instance, assets:/assets/model.pb gives access to a file called model.pb stored in the assets folder on Android. However, accessing assets from general C++ classes is not so easy because those classes cannot resolve assets:/. This is the case for the Tensorflow C++ class. The current solution is to copy the file to a well-known path, for example to QStandardPaths::writableLocation(QStandardPaths::AppLocalDataLocation), but this involves checking whether the destination folder exists (and creating it otherwise) and checking whether the asset file exists and has not changed (and copying it otherwise).
  • A QVideoFrame-to-QImage conversion is performed in the run method of the ObjectsRecogFilterRunable C++ class in order to draw on the frame. Currently, this is done using the qt_imageFromVideoFrame function included in a Qt private module: multimedia-private. Therefore, the app is tied to this specific Qt module build version, and running the app against other versions of the Qt modules may crash at any arbitrary point. Additionally, BGR video frames are not properly managed by the qt_imageFromVideoFrame function; therefore, they are converted to images without using this function.
  • The current implementation continuously executes TensorFlow in a separate thread, processing video frames. That is, when the TensorFlow thread finishes, it is executed again with the latest frame. This approach provides a fluid user experience, but on the other hand it makes the device heat up considerably and drains the battery fast.

If you need assistance integrating TensorFlow into your V-Play apps, don’t hesitate to drop a line at support@v-play.net or contact us here. The V-Play SDK is free to use, so make sure to check it out!

 

If you enjoyed this post, feel free to share it on Facebook or Twitter.

More Relevant App Development Resources

The Best App Development Tutorials & Free App Templates

All of these tutorials come with full source code of the mobile apps! You can copy the code to make your own apps for free!

App Development Video Tutorials

Make Cross-Platform Apps with Qt: V-Play Apps

How to Add In-App Chat or Gamification Features to Your Mobile App

How to Make a Mobile App with Qt Quick Designer (QML Designer) & V-Play

 

The post Machine Learning: Add Image Classification for iOS and Android with Qt and TensorFlow appeared first on V-Play Engine.

TableView

by Richard Moe Gustavsen (Qt Blog)

I’m happy to announce that in Qt 5.12, a new TableView item will be available in the QtQuick module. TableView is similar to the existing ListView, but with additional support for showing multiple columns.

Like with ListView, you can assign data models of any kind to TableView, like ListModels or plain JavaScript arrays. But to create models with more than one column, you currently need to subclass QAbstractItemModel in C++. A QML TableModel is also in the works, but will come later.

TableView inherits Flickable. This means that while a table can have any number of rows and columns, only a subsection of them will usually be visible inside the viewport. As soon as you flick, new rows and columns enter the viewport, while old ones move out. A difference from ListView is that TableView will reuse the delegate items that are flicked out to build the rows and columns that are flicked in. This will of course greatly improve performance, especially when using a delegate that has lots of child items.

The fact that TableView reuses delegate items is not meant to be transparent to the developer. When a delegate item is reused, context properties like index, row, column, and model roles, will be updated. But other properties will not. Storing a state inside a delegate item is a bad idea in the first place, but if you do, you have to reset that state manually. If, for example, a child rectangle changes color after construction, you need to set it back when the delegate item is reused. Two attached signals are available for this purpose: TableView.onPooled and TableView.onReused. The former will notify the delegate item when it is no longer a part of the table and has been moved to the reuse pool. This can be a good time to pause ongoing animations or timers for example. The latter signal will be emitted when the delegate item has been moved back into the view. At this point you can restore the color. It might be tempting to use Component.onCompleted for such things as well, but that signal is only emitted when the delegate item is created, and not when it’s reused.

The following snippet shows how to use the attached signals to temporarily pause an animation while a delegate item is pooled:

TableView {
    anchors.fill: parent
    clip: true

    columnSpacing: 1
    rowSpacing: 1
    model: myQAbstractTableModel

    delegate: Rectangle {
        implicitWidth: 100
        implicitHeight: 50

        TableView.onPooled: rotationAnimation.pause()
        TableView.onReused: rotationAnimation.resume()

        Rectangle {
            id: rect
            anchors.centerIn: parent
            width: 40
            height: 5
            color: "green"

            RotationAnimation {
                id: rotationAnimation
                target: rect
                duration: (Math.random() * 2000) + 200
                from: 0
                to: 359
                running: true
                loops: Animation.Infinite
            }
        }
    }
}

For simple cases, TableView will determine the width of a column by reading the implicitWidth of the delegate items inside it. For this strategy to be consistent, all delegate items in the same column should have the same implicitWidth. For more advanced cases, you can instead assign a callback function to TableView that returns the width of any given column. That way the application is in full control of the widths, whether they are calculated or stored, rather than TableView trying to solve this efficiently for models with potentially thousands of rows and columns.

TableView {
    anchors.fill: parent
    clip: true
    model: myQAbstractTableModel
    delegate: Rectangle {}

    columnWidthProvider: function (column) { return column % 2 ? 100 : 200 }
    rowHeightProvider: function (row) { return row % 2 ? 100 : 200 }
}

Other than this, the API for the first release of TableView is kept pretty small and strict. More will be implemented later, such as using custom transitions when adding or removing rows and columns. We’re also working on a TableHeader and a TableModel, as well as a DelegateChooser. The latter lets you assign several delegates to a TableView and, e.g., use a different one for each column; it will already be available in Qt 5.12 as a labs import (Qt.labs.qmlmodels 1.0).

The post TableView appeared first on Qt Blog.

KD Chart 2.6.1 Released

This is the latest release of our powerful open-source Qt component, KD Chart, that allows you to create business charts and much more.

Release Highlights
  • Builds with modern Qt versions, at least up to Qt 5.10
  • Improves tooltip handling
  • Fixes horizontal bar chart
  • Uses @rpath for OSX dynamic libraries
  • Fixes build on Qt4/ARM

KD Chart makes use of the Qt Model-View programming model that allows re-use of existing data models to create charts. KD Chart is a complete implementation of the ODF (OpenDocument) Chart specification. It now includes Stock Charts, Box & Whisker Charts and the KD Gantt module for implementing ODF Gantt charts into applications.

Read more about KD Chart…

Get KD Chart here.

KD Chart is available under both a free software license (GPL) and a commercial license. The code is exactly the same under both licenses, so which license type you should choose depends on the project you want to use it for.

The post KD Chart 2.6.1 Released appeared first on KDAB.

Live update of Python code during debugging, using builtin reload()

Introduction

When debugging a Test Script, one can use Python’s built-in reload() from the Squish Script Console to bring recent changes to module functions into the currently running Test Execution.

Debugging Python Test Scripts

While debugging your Test Scripts in the Squish IDE, the Script Console might come in handy, e.g. for getting immediate feedback and syntax confirmation, as described in an earlier article.

Sometimes it is more comfortable to modify the code right in its place, in the editor. It is assumed that you’re making use of Python’s import mechanism instead of the source() function for bringing shared code into your script (if not, we explain how to work around that later in this article).

For example, let’s assume there is an aut_helper.py module in /shared/scripts/. This module provides higher-level functions that deal with the AUT, like addRandomAddressbookEntry().

A Test Case using that function could look like this:

import aut_helper
...
def main():
    startApplication("addressbook")
    ...
    aut_helper.addRandomAddressbookEntry()

If addRandomAddressbookEntry() breaks, e.g. due to intended changes in the AUT, you head into debugging mode, either by choosing the ‘Debug’ option in the Object Not Found dialog, by setting a breakpoint, or simply by pausing the Test Execution from the Control Bar. In the Squish Script Console, you can call functions defined in the aut_helper.py module, e.g.

>>> aut_helper.addRandomAddressbookEntry()

But it is also possible to make changes to the aut_helper.py module using the Squish IDE and (still being in the same Debug Session) invoke

>>> reload(aut_helper)
>>> aut_helper.addRandomAddressbookEntry()

This tells the Python interpreter to load the new function definitions into the current Test Execution. reload() is a built-in Python function that takes an already loaded module as its argument.

Now, without leaving your debugging session, it is possible to make changes to your script functions in the Squish IDE editor, save them, and retry the execution until the function is in the desired shape.

What about source()?

Even if your Test Suite is organized using the source() function, you can make use of reloading, but you have to use a dedicated script file just for composing snippets, and import that file from the Squish Script Console:

import names #remove this line, when not using the Scripted Object Map feature
from squish import *

def addRandomAddressbookEntry():
    pass

When execution is halted, you can then have the same “Edit Script, Save, call from Script Console” roundtrips as above.

>>> import scratch
>>> scratch.addRandomAddressbookEntry()
... Edit scratch.py file, Save
>>> reload(scratch)
>>> scratch.addRandomAddressbookEntry()
...

Conclusion

SquishIDE and squishrunner work great with the Python built-in reload() function. This allows you to modify and debug your test script functions while running a test case.

 

The post Live update of Python code during debugging, using builtin reload() appeared first on froglogic.

Qt AR: Why and How to Add Augmented Reality to Your Mobile App

Improved AR capabilities for mobile platforms are one of the biggest trends of 2018. Apps with AR features like Yelp, Google Translate or Pokémon GO are only the beginning. Augmented reality allows you to create innovative user experiences that support your brand.

Mobile AR is on the Rise! Why?

Since the release of Apple’s ARKit and Google’s ARCore, augmented reality has made its way into the mobile market, making it possible, for example, to:

  • Catch virtual monsters in your neighborhood. (Pokemon GO)
  • See restaurant descriptions while you’re walking the street. (Yelp)
  • Translate texts while you view a sign. (Google Translate)

AR Apps: Pokémon GO, Yelp, Google Translate

Those apps mix the real world with computer-generated content. They thus show the user a different reality. In that sense, augmented reality (AR) is quite similar to virtual reality (VR), which is why they are often confused.

Differences between VR and AR

Both technologies can change the way you look at the world. However, they aim at different goals. Virtual reality tries to build a simulated environment around the user. It can take you to places you’ve never seen and allows you to enter a new world. When VR does its job right, you will believe that you are actually there. For example, when driving in a virtual reality racing simulator:

Virtual Reality Racing Car

In contrast to VR, augmented reality does not take you to a different place. It enhances the world around you with digital information. For example, to see the route of your navigation system mixed into the real street image while driving in your car.

Wikitude Navigation

The world’s first pedestrian and car navigation system to integrate AR was the Wikitude Navigation app. The app was a revolutionary step forward in the navigation and guidance field and eliminated the need for a map.

Advantages of Immersive Experiences in Mobile Apps and Games

Since Apple launched its App Store with 20k apps in 2008, it has experienced rapid growth and now offers more than 3M apps. More than ever, businesses and developers thus strive to provide unique app experiences that support their brand. They empower users to be creative and connect in order to boost engagement and retention rates.

Mobile AR now makes it possible to create immersive app experiences that surprise and engage users. Businesses have understood this potential, and the International Data Corporation forecast for 2018 even expects worldwide spending on AR and VR to increase by 95%. Let’s have a look at some innovative AR apps:

Telekom: Lenz – Gorillaz App

Telekom Electronic Beats partnered up with the Gorillaz to create a new dimension in music. The Lenz app transforms magenta surfaces into digital portals which display exclusive Gorillaz content.

Washington Post App: Unesco Heritage

The Washington Post has published another successful AR-enhanced story. This time, the Post’s article promotes all 23 of the UNESCO World Heritage sites situated in the USA. To get readers to learn about, appreciate, and visit these locations, the daily newspaper included an AR feature to get users even more involved with the story.

Augmentors: Real Monster Battles

Following in the footsteps of Pokemon GO, Augmentors is the world’s first cross-platform (iOS & Android) augmented reality game backed by the Bitcoin Blockchain. Players can trade, swop, battle, and train gaming creatures in the real world. Early stage game supporters will be rewarded with unique currency and one-of-a-kind creatures.

Augmented Cocktails: AR in Low-Light Conditions

It can be difficult to provide rich AR experiences in all kinds of situations, for example when dealing with low-light scenarios. City Social in London is known for providing great food, drinks, service and a stunning skyscraper view. With the intention of delighting their customers even more, they paired up with Mustard Design to create an innovative app that brings their cocktails to life:

Lufthansa AR Aviation Demo

Instead of shipping and installing costly demo equipment to be displayed at trade show exhibitions, Lufthansa Technik is innovatively using augmented reality technology to show their customers detailed installation information and connectivity solutions.

How Does Augmented Reality Work?

The above showcases all rely on the mobile device camera and sensors to track images, objects and scenes of the real world:

  • Telekom recognizes magenta surfaces and replaces them with different content.
  • The Washington Post app tracks reader’s surroundings and instantly layers the camera view with virtual animals like a bison.
  • Augmentors combines such Instant 3D Tracking with Image Recognition to bring game cards to live.

Another example app that relies on location-based AR is the Osmino app: A quick scan of your surroundings provides you with a comprehensive listing of all free Wi-Fi hotspots around you:

Wikitude Showcase Osmino

You can integrate some of these features into your mobile app with Apple’s ARKit and Google’s ARCore. But you also have the option to rely on cross-platform tools that go beyond ARKit and ARCore. In fact, the above showcases are all built with the Wikitude AR SDK.

Why use Wikitude instead of ARKit or ARCore?

On the market since 2008, Wikitude bridges the gap between different devices, platforms, and levels of AR support. With a single cross-platform API, it allows over 100,000 developers to integrate AR features across iOS, Android and Windows with a single code base, while reaching a much larger market than ARKit and ARCore.

Advantages of the Wikitude SDK Architecture

Wikitude provides a rich AR experience across platforms. To achieve that, it relies on several abstraction layers:

Wikitude SDK Architecture

The Core Components handle features like Image Recognition and Object/Scene Recognition. Wikitude built the so-called SLAM Engine to offer all AR features across devices and platforms.

If Apple’s ARKit or Google’s ARCore is available, Wikitude can dynamically switch to these native frameworks instead of its own engine. In addition, Wikitude can also run on iOS, Android and Windows devices that do not have such native support for AR.

Compared to native development with ARKit or ARCore, Wikitude thus supports AR even on devices that cannot run these native frameworks. This is a huge benefit, because your app is not bound by the market coverage of ARKit or ARCore. See this table for a comparison of ARKit and ARCore supported devices vs. the ones supported by Wikitude:

  • iOS ARKit Device Coverage: 81% (minimum iOS 11.0 + iPhone 6S, iPad 5 and newer models)
  • iOS Wikitude Device Coverage: 92% (iOS 9.0 + iPhone 4, iPad 2 and newer models)
    → Wikitude has + 11% iOS device coverage compared to ARKit
  • Android ARCore Device Coverage: 5% (minimum Android 7.0 + currently about 50 device models out of the thousands in the market)
  • Android Wikitude Device Coverage: 95% (minimum Android 4.4 + most existing device models)
    → Wikitude has +90% Android device coverage compared to ARCore

For details on which devices are supported, see the official developer docs by Apple for ARKit supported devices and iOS version market share, and by Google for ARCore supported devices.

So if your goal is to make your app available on as many devices as possible, Wikitude is the go-to solution.

To use Wikitude, you can embed their augmented reality view into your existing native apps. You do not need to modify other views of your iOS, Windows or Android app. Wikitude also offers several plugins to use their SDK in conjunction with cross-platform app frameworks like V-Play, via its Qt Plugin.

How to Use the Wikitude AR Plugin in Qt Apps

The Wikitude Qt AR Plugin developed by V-Play offers an API to:

  • integrate Wikitude into Qt applications, and
  • embed it into existing or new native applications.

The Wikitude Qt AR plugin builds upon the native APIs of Wikitude and can run augmented reality worlds created with the Wikitude JS API.

If you have an existing or newly developed app based on Qt, you can simply load the Wikitude AR Plugin from QML-based Qt Quick applications or C++-based Qt Widgets applications.

How to Use Image Recognition and 3D Tracking in Your Mobile App

Since the release of V-Play Engine’s Wikitude Plugin, you can integrate and use the Wikitude AR SDK in your Qt cross-platform app. It only takes a few lines of code. The examples below show how to run some of the Wikitude AR examples with V-Play.

Wikitude Makes Image Tracking Easy

The following demo code includes everything you need to embed a Wikitude view in your QML app. This example tracks certain images and overlays a transparent video, as if it were part of the image:

import QtQuick.Controls 2.0
import QtQuick 2.0
import VPlayApps 1.0
import VPlayPlugins 1.0

App {
 // name of the Wikitude example to load
 property string example: "11_Video_4_Bonus-TransparentVideo"
 readonly property bool exampleIsLoaded: samplesDl.available

 // NavigationStack can display Pages and adds a NavigationBar
 NavigationStack {
   id: navStack
   // at startup show either arPage or downloadPage, in case the example is not loaded yet
   Component.onCompleted: navStack.push(exampleIsLoaded ? arPage : downloadPage)
 }

 // arPage: Page with a Wikitude view
 property Component arPage: Page {
   title: "AR Example"

   // configure Wikitude view
   WikitudeArView {
     id: arView
     anchors.fill: parent
     arWorldSource: samplesDl.getExtractedFileUrl(example+"/index.html")
     running: true
     cameraPosition: WikitudeArView.BackCamera

     //license key for V-Play QML Live app
     licenseKey: "g0q44ri5X4TwuXQ/9MDYmZxsf2qnzTdDIyR2dWhO6IUkLSLU4IltPMLWFirdj+7kFZOdWAhRUD6fumVXLXMZe6Y1iucswe1Lfa5Q7HhQvPxEq0A7uSU8sfkHLPrJL0z5e72DLt7qs1h25RJvIOiRGDoRc/h/tCWwUdOL6ChDnyJTYWx0ZWRfX8Vh9c9kcuw4+pN/0z3srlwIHPV5zJuB1bixlulM4u1OBmX4KFn+4+2ASRCNI+bk655mIO/Pk3TjtYMrgjFR3+iYHvw1UmaYMVjsrgpcVkbzJCT6QmaW8LejnfXDNLAbZSov64pVG/b7z9IZPFLXxRSQ0MRLudoSDAh6f7wMTQXQsyqGrZeuQH1GSWtfjl/geJYOvQyDI+URF58B5rcKnrX6UZW3+7dP92Xg4npw7+iGrO1M4In/Wggs5TXrmm25v2IYOGhaxvqcPCsAvbx+mERQxISrV+018fPpL8TzR8RTZZ5h7PRfqckZ3W54U1WSiGn9bOj+FjDiIHlcvIAISpPg2Vuq88gLp0HJ5W+A+sVirqmmCyU9GKeV5Faiv62CJy6ANCZ83GGX2rWcIAh1vGOQslMr9ay4Js+rJsVN4SIhCYdw9Em9hSpoZgimnOaszI7zn9EnPwVQgNETgVm7pAZdLkH5hxFoIKOPG2e79ZKKmzlkB/IZigoHZWNDUCFnEHDNFlTZjOEwoPi8DDGfzOEOGngWE7jmp24N7GzAP7e54Y3e48KtmIJ1/U0PFKOoi2Yv0Gh+E1siU5MBf8dLO7y7GafJWJ2oCUqJG0pLb2cgTf9pjkr625BV3XxODRylgqc5/UymTY6l1J0qO43u5hH3zaejng4I9cgieA3Y553rAEafAsfhrRmWsLW/kBdu4KLfY4eQ9z4B0TweW/xsofS0bkIqxalh9YuGBUsUhrwNUY7w6jgC6fjyMhtDdEHAlXC2fW1xLHEvY9CKojLNJQUnA0d5QCa22arI8IK63Jn8Cser9Cw57wOSSY0ruoJbctGdlsr/TySUkayAJJEmHjsH73OdbAztGuMjVq7Y643bTog4P3Zoysc="
   }
 }

 // downloadPage: Page for downloading the Wikitude example at runtime
 // this is only required to retrieve the Wikitude sources for the V-Play QML Live app, Wikitude sources can also be bundled with the app otherwise
 property Component downloadPage: Page {
   title: "AR Example - Download"

   Column {
     anchors.fill: parent
     anchors.margins: dp(12)
     spacing: dp(12)

     AppText {
       text: samplesDl.status === DownloadableResource.UnAvailable
             ? qsTr("Wikitude example requires to be downloaded (~ 2MB)")
             : samplesDl.status === DownloadableResource.Downloading
               ? qsTr("Downloading example... (%1%)").arg(samplesDl.progress)
               : qsTr("Extracting example... (%1%)").arg(samplesDl.progress)
       width: parent.width
     }

     AppButton {
       text: samplesDl.status === DownloadableResource.UnAvailable ? qsTr("Start download") : qsTr("Cancel download")
       onClicked: if(samplesDl.status === DownloadableResource.UnAvailable)
                    samplesDl.download()
                  else samplesDl.cancel()
     }

     ProgressBar {
       width: parent.width
       from: 0
       to: 100
       value: samplesDl.progress
     }
   }
 }

 // component to download additional app resources, like the Wikitude example
 DownloadableResource {
   id: samplesDl
   source: "https://v-play.net/qml-sources/wikitude-examples/"+example+".zip"
   extractAsPackage: true
   storageLocation: FileUtils.DownloadLocation
   storageName: example
   onDownloadFinished: {
     if(error === DownloadableResource.NoError) {
       navStack.clearAndPush(arPage) // open AR page after download is finished
     }
   }
 }
}

 

You can test the Image Tracking AR demo with the image below. It is also found in the Wikitude Plugin documentation.

Wikitude Image Tracking Video Example Surfer

Most of the QML code above is overhead that lets you instantly preview the example with V-Play QML Live Code Reloading.

What is V-Play QML Live Code Reloading?

It allows you to run and reload apps & games within a second on iOS, Android and Desktop platforms. You can just hit save and the app reloads instantly, without the need to build and deploy again! This is especially useful for AR, which usually requires a lot of on-device testing to tweak settings.

You can also use it to run all the examples listed here from the browser, without having to set up any native SDKs on your PC. Just download the V-Play Live Reload App for Android or iOS to connect a mobile device.

The code above downloads the configured Wikitude example as a zip archive, extracts it, and runs the demo in a Wikitude augmented reality view. Pretty amazing, actually. Go ahead and try it yourself by clicking on one of the “Run this Example” buttons.

The ability to download assets or code at runtime is a super useful advantage of V-Play. This means that the original app can stay small while additional features are downloaded on demand. However, if the AR part is essential in your own app, you can also bundle the Wikitude code so the AR assets are available without an additional download.

The minimum QML code required thus boils down to a few lines of code:

import VPlayApps 1.0
import VPlayPlugins 1.0

App {
 WikitudeArView {
   id: arView
   anchors.fill: parent
   arWorldSource: Qt.resolvedUrl("assets/11_Video_4_Bonus-TransparentVideo/index.html")
   running: true
   cameraPosition: WikitudeArView.BackCamera
   licenseKey: ""
 }
}

How to Create Wikitude AR Worlds

The Wikitude SDK makes it easy to create such augmented reality views. It builds on web technologies (HTML, JavaScript, CSS) to create so-called ARchitect worlds. These augmented reality experiences are ordinary HTML pages. They use the ARchitect JavaScript API to create objects in augmented reality. That is why the WikitudeArView QML component in the above example has an arWorldSource property. It refers to the index.html of the ARchitect world:

<!DOCTYPE HTML>
<html>
<head>
 <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
 <meta content="width=device-width,initial-scale=1,maximum-scale=5,user-scalable=yes" name="viewport">
 <title></title>

 <script src="https://www.wikitude.com/libs/architect.js"></script>
 <script type="text/javascript" src="../ade.js"></script>
 <link rel="stylesheet" href="css/default.css">
</head>
<body>
 <script src="js/transparentvideo.js"></script>
</body>
</html>

It is quite simple, as all the magic happens in the JavaScript code of the ARchitect world. The above example includes transparentvideo.js, which amounts to only 80 lines of code. This is what the main part for image tracking and the video overlay looks like:

var World = {
 init: function initFn() {
   this.createOverlays();
 },

 // create augmented reality overlays
 createOverlays: function createOverlaysFn() {

   /* Initialize ClientTracker */
   this.targetCollectionResource = new AR.TargetCollectionResource("assets/magazine.wtc", {
     onError: function(errorMessage) {
       alert(errorMessage);
     }
   });

   this.tracker = new AR.ImageTracker(this.targetCollectionResource, {
     onError: function(errorMessage) {
       alert(errorMessage);
     }
   });

   /* initialize video drawable */
   var video = new AR.VideoDrawable("assets/transparentVideo.mp4", 0.7, {
     translate: {
       x: -0.2,
       y: -0.12
     },
     isTransparent: true
   });

   video.play(-1);
   video.pause();

   /* handle video playback when image is tracked */
   var pageOne = new AR.ImageTrackable(this.tracker, "*", {
     drawables: {
        cam: [video]
     },
     onImageRecognized: function onImageRecognizedFn() {
       video.resume();
     },
     onImageLost: function onImageLostFn() {
       video.pause();
     },
     onError: function(errorMessage) {
       alert(errorMessage);
     }
   });
 }
};

World.init();

See the Wikitude documentation for details of their JavaScript API and step-by-step tutorials.

Wikitude Studio – No Coding Required

For those who are not very comfortable with coding, Wikitude also offers a simple drag-and-drop web editor: Wikitude Studio. It is your one-stop shop for generating and managing target collections, as well as for creating and publishing AR experiences!

Wikitude Studio optimizes your projects for the Wikitude SDK. It minimizes the effort of creating image target collections (wtc files) and object target collections (wto files). The Studio Editor makes it possible to add augmentations to your targets. You can test AR experiences and make them available to clients inside the Wikitude App, or inside your own app built with the Wikitude Plugin.

The Power of Instant Tracking and 3D Rendering

Wikitude is not only simple, it is also powerful. In addition to Image Tracking, it can instantly track the camera’s surroundings (Instant Tracking) or real-life objects (Object Tracking). The following demo uses Instant Tracking to put 3D objects into the world:


App {
 // changed configuration to load the instant tracking demo 
 property string example: "05_InstantTracking_4_SceneInteraction"

 // ... 

 // no other changes required, DownloadableResource automatically uses the new example as source
 DownloadableResource {
   source: "https://v-play.net/qml-sources/wikitude-examples/"+example+".zip"
   // ...
 }
}

With 230 lines of JavaScript code, the ARchitect world of this example is simple and short as well.

More Augmented Reality Examples

Do you wanna play around some more? Then go ahead and try one of these examples:

Geo Tracking: POI Radar

// run this demo to get a full QML snippet that downloads and opens the chosen example 
property string example: "10_BrowsingPois_2_AddingRadar"

 

Can be used to:

  • Show Points Of Interest around you, based on the GPS position.
  • For example, to implement augmented navigation or show info about hotels or restaurants around you.

Gesture Image Tracking

// run this demo to get a full QML snippet that downloads and opens the chosen example 
property string example: "02_AdvancedImageTracking_1_Gestures"

Wikitude Image Tracking Face Example

Can be used to:

  • Drop images, GIFs or videos onto an image.
  • For example, to let users create and share AR experiences, similar to Snapchat / Instagram video processing with tracked objects.

Snap-To-Screen 3D Model

// run this demo to get a full QML snippet that downloads and opens the chosen example 
property string example: "07_3dModels_4_SnapToScreen"

Wikitude Showcase Snap-to-screen Car

Can be used to:

  • Show additional information or a 3D scene when scanning a certain image.
  • For example, to enhance your print advertisement in a magazine with AR features:

Media Markt Magazine with Augmented Reality

Wikitude SDK Examples App

The following demo app allows you to browse all Wikitude SDK Examples from within a single app:


 import QtQuick.Controls 2.0
 import QtQuick 2.0
 import VPlayApps 1.0
 import VPlayPlugins 1.0

 App {
   id: app

   DownloadableResource {
     id: samplesDl
     source: "https://v-play.net/qml-sources/wikitude-examples/WikitudeSdkSamples.zip"
     extractAsPackage: true
     storageLocation: FileUtils.AppDataLocation
     storageName: "WikitudeSdkSamples"
   }

   //samples.json lists all the SDK examples
   readonly property url samplesJsonFileUrl: samplesDl.available ? samplesDl.getExtractedFileUrl("samples.json") : ""
   readonly property string samplesJson: samplesDl.available ? fileUtils.readFile(samplesJsonFileUrl) : "[]"

   //map the JSON file to a list model for ListPage
   readonly property var samplesData: JSON.parse(samplesJson)
   readonly property var samplesModel: samplesData.map(function(category) {
     return [ { isHeader: true, name: category.category_name } ].concat(category.samples)
   }).reduce(function(a, b) { return a.concat(b) }, [])

   Rectangle {
     anchors.fill: parent
     color: "white"
   }

   NavigationStack {
     id: navStack

     ListPage {
       id: examplesListPage

       listView.visible: samplesDl.available

       title: "Wikitude AR Examples"

       model: samplesModel

       delegate: SimpleRow {
         enabled: !modelData.isHeader
         style.backgroundColor: enabled ? Theme.backgroundColor : Theme.secondaryBackgroundColor

         iconSource: modelData.is_highlight ? IconType.star : ""
         icon.color: "yellow"

         text: modelData.name
         detailText: !modelData.isHeader && modelData.path || ""

         onSelected: navStack.push(arPage, { sample: modelData })
       }

       Column {
         visible: !samplesDl.available
         anchors.fill: parent
         anchors.margins: dp(12)
         spacing: dp(12)

         AppText {
           text: samplesDl.status === DownloadableResource.UnAvailable
                 ? qsTr("Wikitude SDK examples need to be downloaded (134 MB)")
                 : samplesDl.status === DownloadableResource.Downloading
                   ? qsTr("Downloading SDK examples... (%1%)").arg(samplesDl.progress)
                   : qsTr("Extracting SDK examples... (%1%)").arg(samplesDl.progress)
           width: parent.width
         }

         AppButton {
           text: samplesDl.status === DownloadableResource.UnAvailable ? qsTr("Start download") : qsTr("Cancel download")
           onClicked: if(samplesDl.status === DownloadableResource.UnAvailable)
                        samplesDl.download()
                      else samplesDl.cancel()
         }

         ProgressBar {
           width: parent.width
           from: 0
           to: 100
           value: samplesDl.progress
         }
       }
     }
   }

   property Component arPage: Page {
     property var sample
     readonly property bool usesGeo: sample.requiredFeatures.indexOf("geo") >= 0

     title: sample.name

     WikitudeArView {
       id: arView

       anchors.fill: parent

       arWorldSource: samplesDl.getExtractedFileUrl(sample.path)
       running: true

       //set this to false to use the device location service
       overrideLocation: !usesGeo

       //license key for V-Play QML Live app
       licenseKey: "g0q44ri5X4TwuXQ/9MDYmZxsf2qnzTdDIyR2dWhO6IUkLSLU4IltPMLWFirdj+7kFZOdWAhRUD6fumVXLXMZe6Y1iucswe1Lfa5Q7HhQvPxEq0A7uSU8sfkHLPrJL0z5e72DLt7qs1h25RJvIOiRGDoRc/h/tCWwUdOL6ChDnyJTYWx0ZWRfX8Vh9c9kcuw4+pN/0z3srlwIHPV5zJuB1bixlulM4u1OBmX4KFn+4+2ASRCNI+bk655mIO/Pk3TjtYMrgjFR3+iYHvw1UmaYMVjsrgpcVkbzJCT6QmaW8LejnfXDNLAbZSov64pVG/b7z9IZPFLXxRSQ0MRLudoSDAh6f7wMTQXQsyqGrZeuQH1GSWtfjl/geJYOvQyDI+URF58B5rcKnrX6UZW3+7dP92Xg4npw7+iGrO1M4In/Wggs5TXrmm25v2IYOGhaxvqcPCsAvbx+mERQxISrV+018fPpL8TzR8RTZZ5h7PRfqckZ3W54U1WSiGn9bOj+FjDiIHlcvIAISpPg2Vuq88gLp0HJ5W+A+sVirqmmCyU9GKeV5Faiv62CJy6ANCZ83GGX2rWcIAh1vGOQslMr9ay4Js+rJsVN4SIhCYdw9Em9hSpoZgimnOaszI7zn9EnPwVQgNETgVm7pAZdLkH5hxFoIKOPG2e79ZKKmzlkB/IZigoHZWNDUCFnEHDNFlTZjOEwoPi8DDGfzOEOGngWE7jmp24N7GzAP7e54Y3e48KtmIJ1/U0PFKOoi2Yv0Gh+E1siU5MBf8dLO7y7GafJWJ2oCUqJG0pLb2cgTf9pjkr625BV3XxODRylgqc5/UymTY6l1J0qO43u5hH3zaejng4I9cgieA3Y553rAEafAsfhrRmWsLW/kBdu4KLfY4eQ9z4B0TweW/xsofS0bkIqxalh9YuGBUsUhrwNUY7w6jgC6fjyMhtDdEHAlXC2fW1xLHEvY9CKojLNJQUnA0d5QCa22arI8IK63Jn8Cser9Cw57wOSSY0ruoJbctGdlsr/TySUkayAJJEmHjsH73OdbAztGuMjVq7Y643bTog4P3Zoysc="

       cameraPosition: sample.startupConfiguration.camera_position === "back"
                       ? WikitudeArView.BackCamera
                       : WikitudeArView.FrontCamera

       cameraResolution: WikitudeArView.AutoResolution
       cameraFocusMode: WikitudeArView.AutoFocusContinuous
     }
   }
 }
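The samplesModel mapping above flattens the nested JSON into a single list: each category becomes a header entry, followed by that category's samples. The same transformation can be mirrored in plain Python; note that the sample data below is made up for illustration, while the real samples.json ships with the Wikitude SDK examples download:

```python
import json

# Hypothetical excerpt of a samples.json file; the real file that ships
# with the Wikitude SDK examples may differ in structure and content.
samples_json = """
[
  {"category_name": "Image Tracking",
   "samples": [{"name": "Image On Target", "path": "01_ImageTracking/index.html"}]},
  {"category_name": "Instant Tracking",
   "samples": [{"name": "Scene Interaction", "path": "05_InstantTracking/index.html"}]}
]
"""

def flatten(categories):
    """Mirror the QML map/reduce: one header entry per category,
    followed by that category's samples, all in a single flat list."""
    model = []
    for category in categories:
        model.append({"isHeader": True, "name": category["category_name"]})
        model.extend(category["samples"])
    return model

model = flatten(json.loads(samples_json))
for entry in model:
    print(entry["name"])
```

The header entries are what the SimpleRow delegate detects via modelData.isHeader to render non-selectable section rows.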

What’s the Future for AR?

Augmented reality still has a lot of exciting features and functionalities in store for users, for example Cloud AR and Multiplayer AR capabilities. Wikitude already offers a cloud-based image recognition service. The latest release, SDK 8, which is supported by the Qt Wikitude Plugin, brought many interesting features you can use now, like Scene Recognition, Instant Targets and Extended Object Tracking. And in terms of shared experiences, remote support workers can display 3D content on another user’s device.

Apple recently introduced their new ARKit 2 framework, a platform that allows developers to integrate

  • shared AR, which allows multiplayer augmented reality experiences
  • persistent experiences tied to a specific location
  • object detection and
  • image tracking to make AR apps even more dynamic.

To showcase the new multiplayer feature, Apple introduced their augmented reality game ‘SwiftShot’:

The use-cases for shared augmented reality are vast, for both mobile games and apps. For example, your AR navigation system could show augmentations that other users placed. You would then also see digital warning signs along the road in addition to the route.

You can also build such multi-user experiences with V-Play Multiplayer. Together with Wikitude, a shared augmented reality experience created with QML + JavaScript is also only a few steps away. V-Play also plans to integrate Qt 3D Rendering with Wikitude’s Native APIs to boost rendering performance even more.

If you have a business request for these cutting-edge features currently in development, or if you need assistance in developing an AR experience with high quality standards, don’t hesitate to drop us a line at support@v-play.net or contact us here. The V-Play SDK is free to use, so make sure to check it out!

 

If you enjoyed this post, please leave a comment or share it on Facebook or Twitter.

More Relevant App Development Resources

The Best App Development Tutorials & Free App Templates

All of these tutorials come with full source code of the mobile apps! You can copy the code to make your own apps for free!

App Development Video Tutorials

Make Cross-Platform Apps with Qt: V-Play Apps

How to Add In-App Chat or Gamification Features to Your Mobile App

How to Make a Mobile App with Qt Quick Designer (QML Designer) & V-Play

 

The post Qt AR: Why and How to Add Augmented Reality to Your Mobile App appeared first on V-Play Engine.

Enterprise Application Development with Velneo and Qt

Do you enjoy case studies? We sure do, especially when those case studies are examples of finest work born from one’s passion for code.

There are many great Qt user stories in desktop applications. One of them comes from Velneo, an innovative Spanish tech company with their development platform that includes a rapid application development tool called Velneo vDevelop, and its application engine.

Despite mobile and web apps being all the rage, desktop applications stay highly relevant in the enterprise market. Native desktop applications regularly beat web apps with better performance and a far superior user experience. Thousands of Velneo users will attest to that.

Velneo vDevelop is a visual editor following the WYSIWYG approach. It’s developed using Qt Widgets and other Qt components. In addition, Velneo has developed their own easy and simple-to-learn programming language that saves users from complex implementation details. Velneo vDevelop and the programming language are the main ingredients that let you cook up powerful applications quickly and easily.

With the framework, users can create finished software that uses Qt Quick, Qt Widgets, and several other modules. The runtime uses Qt modules that open up various ways to meet user demands in this market.

See below some of the vDevelop editor screenshots.

velneo_vdevelop_screen_01

velneo_vdevelop_screen_02

velneo_vdevelop_screen_03

Check out this video for many examples of ERP, CRM, accounting, and other business applications built with Velneo and Qt. How many different Qt Widgets can you spot? 🙂

To learn more about how Velneo is using Qt, read the case study in our Built with Qt section. If you have any questions about desktop application development, get in touch!

The post Enterprise Application Development with Velneo and Qt appeared first on Qt Blog.

Post Akademy

So, it has been a busy week of Qt and KDE hacking in the beautiful city of Vienna.
Besides enjoying plenty of the Viennese staple food, schnitzel, it was an interesting adventure in getting smarter.

  • Getting smarter about making sure what happens in North Korea doesn’t stay in North Korea
  • Getting smarter about what is up with this newfangled Wayland technology and how KDE uses it
  • Getting smarter about how to Konquer the world and welcoming new contributors
  • Getting smarter about opensource licensing compliance
  • Getting smarter about KItinerary, the opensource travel assistant
  • Getting smarter about TNEF, an invitation transport format that isn’t that neutral
  • Getting smarter about Yocto, automotive and what KDE can do

And lots of other stuff.

Besides getting smarter, talking to people about what they do and writing some patches were also important parts of the week.
I also wrote some code. Here is a highlight:

And a lot of other minor things, including handling a couple of Debian bugs.

What I’m hoping to either put on my own todo list, or preferably on others’, is

I felt productive, welcome and … ready to sleep for a week.

Python Extensions in QtCreator

Hello world! My name is Tilman and I have been an intern with The Qt Company in Berlin for the last few weeks. During my time here, I have worked on enabling Python extensibility for QtCreator, and I am happy to announce that a first proof-of-concept version is available today!

So, what exactly do the Python extensions do?  Well, the goal is to eventually be able to do about anything a native C++ plugin could do. But for now, the scope is much narrower and only a very small part of the C++ API is exposed.

screenshot_20180809_160715

A Technical Perspective

The main goal for me was to explore how this vision could be implemented. For now, the project focuses on getting the integration and setup right, rather than on having as many bindings as possible.

Everything starts with a new QtCreator plugin, which initializes Python bindings and then loads the user provided Python extensions. This is done by executing their Python scripts in an embedded CPython interpreter. Getting this to work requires two main things:

  1. Bindings (and a mechanism for loading bindings only if the relevant plugins are loaded)
  2. A system for discovering and running Python extensions

 

Generating Bindings

Some of you may be familiar with Qt for Python. This project enables developers to create Qt applications in Python by generating Python bindings for Qt’s C++ code. To do this, it uses a binding generator called Shiboken.

To generate the bindings for QtCreator’s APIs, I used the same tool. This means that, on top of all the QtCreator-specific bindings, anything from Qt for Python is also available from Python.

Plugins in QtCreator can be disabled by the user. Thus, we can only expose bindings for the Core plugin and things like the Utils library directly without incurring dependencies. This is quite a harsh restriction on the bindings we can use.

To circumvent this problem, any other QtCreator plugin may provide an additional library, which is then dynamically loaded by the Python extensions plugin as necessary. These libraries will eventually be provided for all plugins maintained by the QtCompany. For now, there is one example of such a library available for the Project Explorer plugin.

The Embedded Interpreter

Python extensions are nothing but a directory containing a main.py file, which represents the entry point of the extension.

My main design goal was to make Python extensions ‘feel’ as if they were normal Python scripts, run from within their extension directory. Since all the extensions run in the same embedded Python, there is a good deal of code devoted to making sure extensions seem isolated, as well as setting the appropriate sys.path for each extension.

This means you can do things like import other files from your extensions directory or mess with sys.path, just like you would with a normal Python program.

If your extensions depend on any other Python modules, there is also a facility for loading these dependencies. By including a requirements.txt, all your dependencies are ‘pip installed’ before your extension is first run. Should you need to do any other setup before your main.py can run, you can also provide an optional setup.py, which is run before, and separately from, your main script.
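The overall flow per extension directory can be sketched in plain Python. This is only an illustrative sketch of the behavior described above, not the actual plugin code; the function names and directory layout are made up, and the real plugin embeds CPython from C++ rather than running a loader script:

```python
import runpy
import subprocess
import sys
from pathlib import Path

def load_extension(ext_dir: Path):
    """Illustrative loader for one extension directory: install its
    requirements, run the optional setup.py, then execute main.py as
    if it were a normal script run from that directory."""
    requirements = ext_dir / "requirements.txt"
    if requirements.exists():
        # 'pip install' the extension's dependencies before it first runs.
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "-r", str(requirements)])

    setup = ext_dir / "setup.py"
    if setup.exists():
        # Optional setup step, run before and separately from main.py.
        runpy.run_path(str(setup), run_name="__main__")

    # Make the extension 'feel' like a script run from its own directory:
    # its folder goes on sys.path so sibling modules can be imported.
    sys.path.insert(0, str(ext_dir))
    try:
        runpy.run_path(str(ext_dir / "main.py"), run_name="__main__")
    finally:
        sys.path.remove(str(ext_dir))

def load_all(extensions_root: Path):
    # Discover extensions: every subdirectory with a main.py entry point.
    for ext_dir in sorted(p for p in extensions_root.iterdir() if p.is_dir()):
        if (ext_dir / "main.py").exists():
            load_extension(ext_dir)
```

Inserting and then removing the extension directory on sys.path is what gives each extension the isolated, script-like environment mentioned above.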

Closing Words

While a lot of heavy lifting still needs to be done, the main challenges of this project are now solved. If you are interested in trying things out yourself, I highly encourage you to check out the project’s Git repository. There, you can also have a look at the code and more in-depth documentation.

On top of the C++ code, build instructions and some initial documentation, you will find several examples of Python extensions that give a taste of what will be possible.

The post Python Extensions in QtCreator appeared first on Qt Blog.