During the earlier days of collectl development a third design consideration was minimizing the amount of storage used for raw files. At the time disks were smaller and the amount of data collected was small enough that being more judicious about what was saved felt like it made a difference. However, with the inclusion of process, slab and now interrupt data, along with larger disks, the amount of storage used has become less of a consideration. In other words, don't spend extra data collection cycles trying to be more selective about what is recorded if doing so only adds to the overhead.
# time collectl -scdnm -i0 -c8640 -f /tmp
real    0m9.711s
user    0m7.480s
sys     0m2.140s

and you can see that collectl uses about 10 seconds to collect a full day's worth of cpu, disk, network and memory data (8640 samples, which at the default 10-second interval span 86400 seconds), or about 0.01% of the cpu. If we repeat the test for just cpu the time drops to under 4 seconds, so if performance is really critical you can improve things by recording less data or recording it less frequently. The point is these are tradeoffs only you can make if you feel collectl is using too much resource.
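If you want to verify the cpu-only number on your own system, the same command with only the cpu subsystem selected should do it (this is simply the obvious variant of the run above; actual timings will of course vary from system to system):

# time collectl -sc -i0 -c8640 -f /tmp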
So what happens if you take a different processing path and save collectl data in plot format? This adds the overhead of parsing the /proc data and performing some basic math on the values. If we use the same command as above and include -P:
# time collectl -scdnm -i0 -c8640 -f /tmp -P
real    0m20.607s
user    0m17.970s
sys     0m2.580s

we can see that this takes a little over twice as much overhead, even though it is still pretty low.
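To put those numbers in perspective: 20.607 seconds of cpu time spread over the 8640 samples that make up a day works out to roughly 20.607/8640 ≈ 2.4 ms per sample, versus about 9.711/8640 ≈ 1.1 ms per sample in raw mode, and is still only 20.607/86400 ≈ 0.024% of one cpu.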
One other example worth mentioning is measuring process monitoring overhead, which is probably the highest overhead operation collectl can perform and one of the reasons it has its own monitoring interval. The overhead for collecting this type of data can vary quite broadly depending on how many processes are running at the time; on a system with only 138 processes, look at this:
# time ./collectl.pl -sZ -i0 -c8640 -f /tmp
real    1m8.453s
user    0m54.650s
sys     0m13.430s

noting that collectl is also smart enough to sample process data only 1/6 as often, since that is the default ratio of the process monitoring interval to the other subsystem intervals. This also leads to a way to further optimize process monitoring. If you are monitoring a specific set of processes, say http daemons, collectl no longer has to look at as much data in /proc and so we now see:
# time ./collectl.pl -sZ -Zchttp -i0 -c8640 -f /tmp
real    0m5.721s
user    0m4.480s
sys     0m1.180s

In fact, if we know there are never going to be any new http processes appearing (by default collectl looks for new processes that match the selection strings):
# time ./collectl.pl -sZ -Zchttp -i0 -c8640 --procopts p -f /tmp
real    0m5.080s
user    0m3.930s
sys     0m1.130s

things get even better. In fact, you could even imagine monitoring these processes at the same interval as everything else with almost no additional overhead!
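As a sketch of what that might look like in normal (non-benchmark) use, and assuming the extended -i syntax where a second value sets the process monitoring interval, something like the following would sample the http processes every 10 seconds right along with the other subsystems:

# collectl -scdnmZ -Zchttp --procopts p -i10:10 -f /tmp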
The number of different combinations of switches one can measure far exceeds the scope of this discussion; for example we haven't even talked about the pros and cons of compression. But in conclusion, the main thing to remember is that collectl's overhead is already pretty low, but if you're really concerned, measure it yourself and adjust accordingly.