I had the idea for BenchmarkTool when I was trying to figure out why GitQlient was so slow on Windows machines. I had the impression, and later the evidence, that in some cases the git client was hanging for long periods without any apparent reason. And that was not happening on Linux.
Before anyone brings up profiling, I must say that I didn't want to know how much CPU/RAM my application was using. That can be measured with profilers, even in a case like this where the problem only shows on Windows. When I checked, those values were normal; the problem was in the what:
What was making GitQlient so slow? Or, in other words: what was taking so much execution time?
The idea of tracking time
The idea is not original at all. It's similar to what Qt's QBENCHMARK macro does, but in this case I wanted it in plain C++ so it's as portable as possible. Timing a single method is fine, but sometimes you lose the big picture. With cascading operations you end up with a lot of tasks that aren't long individually; the problem is that, counted all together, they make a huge difference.
In addition to benchmarking the execution time of a single method, I wanted a clear picture of the execution tree inside a workflow. This would allow me to identify the weak spots in the chain where the application wastes the most time.
Finally, another important thing is that I want to be able to use it in other environments, so the license must allow that. That's why I released it under the BSD license.
How does BenchmarkTool work?
In its current state, BenchmarkTool is a singleton class that can be initialized with a TimeProvider object. The TimeProvider is optional: BenchmarkTool creates one if none is provided.
To start the benchmark of a method you can choose between two functions: BenchmarkStart or BenchmarkStartMsg. The only difference between these two is that the second one allows you to add a message to the function you want to benchmark.
Let's say you are calling a method from different places. In that case you could add a message at each call site to tell where the call comes from.
Before the function ends, you will need to call BenchmarkEnd so the tool knows when the execution has finished and can close that loop. This is important: otherwise the loop stays open and the output will be strange, if not plainly wrong.
Internally, the tool keeps a tree of calls, so it waits until every started call ends before closing the subtree. The library organizes the calls by thread, so there is no danger in using several of them: each thread is isolated in the output.
For now, it writes into a file when the execution finishes. But this is an area I’d like to improve before the first release.
Listeners in BenchmarkTool
BenchmarkTool also allows you to add listeners to it. A listener will be notified every time a function starts and ends its benchmark. This is particularly useful for decentralised architectures where the functionality is split into small, atomized parts.
As an example, you could have a separate library that sends the information collected by BenchmarkTool to a server. That library would use the listener functionality to keep a record of the events.
Current status and future releases of BenchmarkTool
Right now BenchmarkTool is officially under development. I wouldn't use it in a production environment unless the goal is to do QA checks and validation. The main reason is that I still want to add some improvements and extensions to the features it supports.
As for the future, the plan is to release a first stable version before summer, but let's see how that goes. In the meantime, if you want to check the code, the GitHub repository is open.