This is the first in a series of articles where I will talk about some of the technology behind Plasma System Monitor. They will be quite technical.
About two years ago, a project was started to create an alternative to
ksysguardd, the process that does the actual statistics collection for KSysGuard. Initially, this was intended to power a new set of system monitor applets for Plasma, but while we were working on it we concluded that it would also be a good idea to build a new system monitor application on top of this. The result of that is Plasma System Monitor, which had a preview release at the start of November.
Now, the first question anyone is going to ask when someone says they will replace some working piece of code is "Why?". Why replace working code with something untested and new? To answer that, let me first explain how the old system works.
ksysguardd is a binary that gets launched when KSysGuard (or another application that wants system data) gets launched. It implements the actual data collection side of KSysGuard, using a custom protocol over standard input to communicate with the application. It has different code paths for different operating systems, with each operating system "backend" exposing a number of sensors that read system data.
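To give a flavour of what "a custom protocol over standard input" means in practice: a client writes a sensor name to the process and reads back a plain-text response, with metadata responses being tab-separated fields. The exact field layout below is an assumption for illustration, not the verbatim ksysguardd wire format, but a client-side parser for such a line could be as simple as:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split a tab-separated metadata response line such as
// "CPU Load\t0\t100\t%" (name, minimum, maximum, unit) into its fields.
// The field layout shown here is illustrative, not the exact ksysguardd format.
std::vector<std::string> splitFields(const std::string &line)
{
    std::vector<std::string> fields;
    std::string field;
    std::istringstream stream(line);
    while (std::getline(stream, field, '\t'))
        fields.push_back(field);
    return fields;
}
```

Every client has to implement this kind of ad-hoc parsing itself, which is exactly the sort of thing a real RPC mechanism handles for you.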
One of the first reasons to create something new was that ksysguardd needs to be started separately for each process that wants to do something with the statistics. This is largely a consequence of ksysguardd using a custom protocol for communication. While twenty years ago writing a custom protocol was probably about the only way to get something like this to work, these days we tend to make use of a more sophisticated IPC mechanism: D-Bus.
D-Bus allows us to create a process that can run as a stand-alone service exposing a more robust RPC interface to applications that want to use this data. This in turn means we do not need to start a separate instance for each process that wants to do something with this data. It also means that the underlying code can now be changed to deal with proper data structures rather than writing just about anything to a text stream, which is what happens in ksysguardd.
ksysguardd is written completely in C. While C is fine for many things, here it prevents us from reusing code that we already have implementing some of this functionality. For example, the partition usage sensors need to know which partitions exist and then query those for usage amounts. In
ksysguardd this is implemented by reading
/etc/mtab. That file, however, contains everything that is mounted, including things like
cgroup mounts and a lot of other things that are really not partitions. So the code has to filter that list, which leads to issues where we either do not list everything a user considers "a partition", or we show too much.
The thing is, we already have a solution for this problem. There are plenty of places in Plasma and other KDE software where we need to provide a list of partitions. For example, the device manager applet needs to show these, as well as the places panel in Dolphin. These all make use of the Solid framework to do this, which uses
udisks2 on Linux these days. So it would be nice to be able to reuse that code, since it provides a much better way of listing partitions that is well tested on multiple platforms. However, Solid is C++ code. While it is technically possible to interface with it from C code, that is not going to be the nicest of solutions and would clutter up code that is already not the most readable.
There is an additional reason to move away from C code: Most code written as part of KDE software is C++, if not an even higher level language like QML. We are used to having all the facilities that modern C++ provides us, in addition to all the things that come from Qt. Having to write things in C is cumbersome, to say the least, making it a lot harder to make changes to what kind of data we expose.
All APIs are Equal, But Some are More Equal than Others
All that said, most of those issues are not so much architectural problems. However, there is one fundamental issue in
ksysguardd that is very much architectural and which we did not even solve in KSystemStats for a fairly long time. This has much to do with what can be considered the "API" of a separate service like this.
As I mentioned above, in
ksysguardd, the system specific "backend" determines which sensors there are and what kind of things they expose. However, this means that, on different platforms, the set of sensors exposed by the service can be different. When you then have an application that makes use of some sensors to display data, that application suddenly has to deal with the differences between these sensors, which makes things a lot harder for the application. Moreover, this problem is not even limited to different platforms: even on the same platform the set of exposed sensors can change, since there is absolutely nothing that enforces a structure.
In a way, these sensors can be considered the "public API" of the service. And like a good public API, they should not change at the whim of whatever the underlying system decides, but be mostly stable. Therefore, in KSystemStats, we decided to restrict things a lot more. First, everything is part of a subsystem. These are mostly meant for categorisation and include things like "CPU" or "Memory". Each subsystem can contain one or more "sensor objects", which represent more concrete objects in the system, like a CPU core or a GPU. Finally, each sensor object has a number of sensor properties, which are the objects that provide the actual system data. Sensor properties include things like CPU core usage and the amount of memory used.
One additional important aspect is that sensor properties represent not only the data value but also several bits of metadata about those properties. This includes a name and description, but also its unit and maximum value. Since the sensor properties are mostly static and defined up front, we can actually provide proper translated names for everything. In addition, with the extra metadata the client can make decisions about how to display that information, like how to format the data value or providing a reasonable default range for a line chart.
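A toy model of this three-level structure and its metadata might look as follows. The names and fields here are my own sketch, not the actual KSystemStats classes; the point is that a property's identifier encodes subsystem, sensor object and property, and that the metadata travels with the value:

```cpp
#include <string>

// Illustrative sketch of a sensor property with metadata; not the real
// KSystemStats API. The full id combines the subsystem ("cpu"), the
// sensor object ("cpu0") and the property ("usage").
struct SensorProperty {
    std::string id;        // e.g. "cpu/cpu0/usage"
    std::string name;      // translated display name
    std::string unit;      // e.g. "%"
    double maximum = 0.0;  // provides a sensible default chart range
    double value = 0.0;
};

// With the unit known up front, a client can format values without any
// platform-specific knowledge of where the number came from.
std::string formatValue(const SensorProperty &property)
{
    return std::to_string(static_cast<int>(property.value)) + " " + property.unit;
}
```

A line chart client can use `maximum` directly as its default y-axis range, instead of guessing from incoming values.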
All this should lead to a much more stable "public API" for the sensors, which means that an application that makes use of KSystemStats should behave a lot more as intended, regardless of the underlying system. This in turn means that the application can be polished a lot more to provide a good experience to users.
Plugins and Modularity
A change that was made fairly early in the project is that sensors are no longer hardcoded directly in the service itself, but are provided by plugins. This enforces separation between the service code that exposes things on D-Bus and the code that is reading values from the system and between the different subsystems. It also means that it becomes a lot simpler to add new sensors to the system, since that simply means writing a new plugin. Longer term, I hope this will lead to more things being supported. It already helped a lot when creating the [GPU integration].
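In rough terms, the separation looks like this. This is a hypothetical sketch of such a plugin interface, not the actual KSystemStats plugin API (which is Qt-based and loaded dynamically); it only illustrates how the service can stay ignorant of where the data comes from:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical plugin interface: each plugin contributes the sensors for
// one subsystem, keeping system-reading code separate from the D-Bus
// service code. Not the real KSystemStats API.
class SensorPlugin {
public:
    virtual ~SensorPlugin() = default;
    virtual std::string subsystem() const = 0;  // e.g. "cpu"
    virtual void update() = 0;                  // refresh sensor values
};

// A trivial plugin standing in for real data collection code.
class DummyCpuPlugin : public SensorPlugin {
public:
    std::string subsystem() const override { return "cpu"; }
    void update() override { ++updates; }
    int updates = 0;
};

// The service only ever sees the abstract interface, so adding new
// sensors means adding a plugin, not touching the service itself.
void updateAll(const std::vector<std::unique_ptr<SensorPlugin>> &plugins)
{
    for (const auto &plugin : plugins)
        plugin->update();
}
```

The same shape is what made the transitional ksysguardd-backed plugin mentioned below possible: it was just one more implementation of the interface.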
In addition, it helped us transition to the new system, as we could create a plugin that uses ksysguardd behind the scenes, but then maps that to the structure as we defined it for KSystemStats. While not entirely painless, this allowed us to build on top of the new infrastructure and later on replace the underlying data collection code. That meant we could ship KSystemStats and the improved Plasma widgets in Plasma 5.19, while the data collection code has been mostly replaced for Plasma 5.21.
This has become a pretty long story, but I hope it highlights some of the reasoning and design ideas behind the KSystemStats service. In the next blog post I will talk about rendering charts and the KQuickCharts framework.