Integrating Qt Test in Travis CI

For a long time I wanted to create a full Continuous Integration environment that uses Qt both for building apps and for testing. In fact, one of the most important things for me was to find the right way to integrate Qt Test. For CI I started to use Travis, since it is one of the most widely used services on GitHub. I took that decision after seeing other Qt projects (MuseScore and Stellarium) using it.

The main difference between those projects and the one I wanted to set up is that they use CTest as their testing tool. With that tool they can easily report when the tests are broken. But, how the hell was I going to do it with Qt?

How to make our test app fail when tests don’t pass

So far, my knowledge of Qt Test was mainly about how to create really basic unit tests for my classes. For that I only needed a test class, plus adding its execution to the main function. Let's say that we have some code like the following:
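
Something along these lines; this is a minimal sketch where the class, the test slot and the file layout are just illustrative:

    #include <QtTest>

    // A very basic test class: every private slot is executed as a test case.
    class MyFirstTest : public QObject
    {
        Q_OBJECT
    private slots:
        void addition()
        {
            QCOMPARE(2 + 2, 4);
        }
    };

    int main(int argc, char *argv[])
    {
        MyFirstTest myFirstTest;
        QTest::qExec(&myFirstTest, argc, argv); // prints the results to the console
        return 0;                               // always 0, even when tests fail
    }

    #include "main.moc" // needed by moc when the Q_OBJECT class lives in main.cpp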

If we execute that, we will see the output on the screen, but the app will never report a failure. Of course: we're returning 0 at the end. The first thing I needed was to change the return so that, instead of returning 0, we return the value of the qExec() call. That would have solved this particular problem, but it would mean adding as many qExec() calls to the return as tests I have.
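
As a sketch, the difference would look like this; with several test classes every new one has to be chained by hand:

    // With one test class, just return the result of qExec():
    return QTest::qExec(&myFirstTest, argc, argv);

    // With several test classes, the return keeps growing:
    return QTest::qExec(&myFirstTest, argc, argv)
         + QTest::qExec(&mySecondTest, argc, argv);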

After a bit of research I found a good piece of code on Stack Overflow (which I didn't save, and it has been hard to find again). It is a good start, since it redirects the output and keeps a counter of the tests that failed, but I wanted to do something more portable in the long term. Of course, we could have a vector of QObject pointers and insert the new tests as follows:
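
Roughly like this (a sketch; the test class names are made up):

    #include <QtTest>
    #include <QVector>

    int main(int argc, char *argv[])
    {
        // Every new test class still has to be appended here by hand.
        QVector<QObject *> tests;
        tests << new MyFirstTest() << new MySecondTest();

        int failures = 0;
        for (QObject *test : tests)
            failures += QTest::qExec(test, argc, argv); // counts the failed tests

        qDeleteAll(tests);
        return failures;
    }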

It's not a bad solution, since qExec() returns how many tests failed. But in the long term it will make the main function grow endlessly. The nice thing about it is that we get a clear picture of what is required!

A colleague helped me figure out how we could achieve that, and he came up with the following solution:

  • A class to store all our tests (TestManager) and run them by calling a runTests() method.
  • Some way to automatically register the tests in the class we created before.

The solution for adding tests automatically

The solution consists of creating the classes described in the two points above. The first thing we need is a class to contain all the tests. Since we will need to add tests from several different places, it'd be nice to make it a singleton. We also need a method to add test cases (addTest), a method to actually run all the tests and return the number of failures (runTests), and finally a QVector<QObject*> to store them.
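
A possible sketch of that TestManager; the details may differ from the proof of concept on GitHub mentioned below:

    #include <QtTest>
    #include <QVector>

    class TestManager
    {
    public:
        // Single shared instance, used both for registering and for running tests.
        static TestManager &instance()
        {
            static TestManager manager;
            return manager;
        }

        // Every test case registers itself through this method.
        void addTest(QObject *test)
        {
            m_tests.append(test);
        }

        // Runs all registered tests and returns the total number of failures.
        int runTests(int argc, char *argv[])
        {
            int failures = 0;
            for (QObject *test : m_tests)
                failures += QTest::qExec(test, argc, argv);
            return failures;
        }

    private:
        TestManager() = default;
        QVector<QObject *> m_tests;
    };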

This second part can be divided into two different topics. The first one is to create a hierarchy where we have a BaseTest class and inherit from it. Then, in the constructor of BaseTest, we can call the addTest method of the TestManager class we created before. In addition, we can declare there the initTestCase() and cleanupTestCase() methods that QTest requires to start and end the tests. This part is totally optional, since we could implement tests as always and just put the call to addTest in each test class without inheriting, but I think this is nicer.
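
A sketch of that BaseTest, assuming the TestManager from the previous sketch:

    class BaseTest : public QObject
    {
        Q_OBJECT
    public:
        BaseTest()
        {
            // Every test that inherits from BaseTest registers itself automatically.
            TestManager::instance().addTest(this);
        }

    protected slots:
        // QTest runs these once before the first and once after the last test function;
        // subclasses can override them when they need set-up or tear-down code.
        virtual void initTestCase() {}
        virtual void cleanupTestCase() {}
    };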

The second step is related to the inheritance from BaseTest. Every time we create a test case, we just need to inherit from it, and the test will be added automatically to the manager and executed when runTests() is called. In fact, the only thing we need to do is instantiate our test class, which can easily be done by creating a const static variable in the header file.
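
For example, with a hypothetical CountersTest (the class name, the slot and the header file are made up):

    // counterstest.h
    #include <QtTest>
    #include "basetest.h"

    class CountersTest : public BaseTest
    {
        Q_OBJECT
    private slots:
        void increment()
        {
            QCOMPARE(1 + 1, 2);
        }
    };

    // Instantiating the class is all that is needed: the BaseTest constructor
    // registers this test in the TestManager.
    static CountersTest countersTest;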

Another option could be to create a new class or method that we pass our test to, so that it creates an instance for us, similar to what QTEST_MAIN does.
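
For instance, a small hypothetical helper could do the registration, so the test class would not even need to inherit from BaseTest:

    // Hypothetical helper: constructing it creates the test and hands it over
    // to the TestManager, a bit like QTEST_MAIN instantiates the test class.
    template <typename TestClass>
    class TestRegistration
    {
    public:
        TestRegistration()
        {
            TestManager::instance().addTest(new TestClass());
        }
    };

    // In the test's .cpp:
    static TestRegistration<CountersTest> registerCountersTest;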

The code, as a proof of concept, can be found on GitHub.

Make Travis fail when QTest fails or has errors

So far, everything we've done is only a nice way to prepare our test app to fail when the tests don't pass. The easiest way to make Travis react to it is to return the value of our runTests() method and run the test executable as the last step of the Travis build. If any test fails, the return value won't be zero, so Travis will report an error.

To summarize: in the main function, return the output of QTest::qExec(), or of a method that calls it for all your tests.
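
In other words, the main of the test executable can be as small as this sketch:

    int main(int argc, char *argv[])
    {
        // The exit code is the number of failed tests: 0 means success,
        // anything else makes Travis mark the build as failed.
        return TestManager::instance().runTests(argc, argv);
    }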

Comments

Markus says:

Why don’t you use the test project template that comes with Qt Creator? File -> New File or Project -> Auto Test Project …
This gives one executable per test class, with a .pro file and a single .cpp file, which already returns PASS/FAIL as a return code, as you need it. Test projects can nicely be combined into a single build & run with a SUBDIRS project in qmake. Build with qmake && make, run with make check.
Biggest benefit: If one of your tests crashes, the others can still be called. That needs “make -k check” to run the tests, though.

Francesc M. says:

Hi Markus!

Thanks for your feedback! The idea of having a “subdirs” project for testing is great, and I think it can work well for integration testing. There you can have your own “mini-project” for each individual part.

However, for unit testing, having a single project for each class you want to test, and in addition keeping it separate from the integration tests, might not be the best idea. That's mainly because the Qt Creator test project generator adds a macro to your test class that basically creates the main function there.

Markus says:

Out of interest: did you do anything with the XML output format of the Qt test framework? I remember I found it not to work when I tried to call multiple test classes in one executable; that's why I switched to the subdirs approach – for unit tests, by the way 😉

I am using the Qt Creator templates with the macro to keep project files and test files as simple as possible, and I find it a good approach. The single project also forces you to think about dependencies immediately, because you have to include them separately for each unit test – which is a good thing IMHO 😉

What makes integration tests different for you regarding the organisation of tests?
