Linux On Desktop In 2023
I have been using Linux, in the form of various distributions, for probably over 15 years now. After 8 years with my trusted Lenovo Thinkpad X1 Carbon, it was time for a hardware refresh. I got a Framework laptop, which turned out pretty nice. I am taking this opportunity to also reflect on the current state of Linux on “Desktops”.
Upstream First & Fast
Whatever Linux distribution you choose, do yourself and all upstream maintainers a favor and use something that releases often (or has rolling releases1) and that does not patch upstream software to death. It is an illusion that “old” means “stable”, as Debian & Co tried to teach us for many years. Sometimes it is hard to keep features and bug fixes separate, and backports of fixes to years-old software – and the resulting bug reports, both up- and downstream – are a real waste of developers' time. I recommend Fedora for the average Linux user or Arch Linux for the nerds.
The “Init Wars” Are Over
systemd has replaced SysVinit, Upstart & Co. It makes many things so much easier and provides features that you would expect from a reasonable operating system2:
- Service Management: You list/restart/terminate services without relying on a bunch of shell scripts and PID files and hoping that none of the processes ever double-forks.
- Service Boundaries: Services are properly encapsulated into cgroups with limits, (optional) security contexts, and access limitations.
- Service Activation: Services can be started/stopped when you need them. This includes D-Bus endpoints and timers.
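As a small illustration of how little ceremony service and timer activation needs, here is a sketch of a user-level service plus timer; the unit names, paths, and the backup script are made up for this example:

```ini
# ~/.config/systemd/user/backup.service (hypothetical example)
[Unit]
Description=Run my backup script

[Service]
Type=oneshot
# %h expands to the user's home directory
ExecStart=%h/bin/backup.sh

# ~/.config/systemd/user/backup.timer (hypothetical example)
[Unit]
Description=Run the backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

A single `systemctl --user enable --now backup.timer` activates it; no cron, no PID files, and logs land in the journal automatically.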
systemd is often criticized for being monolithic. I do not share this opinion. It provides a bunch of features that you may or may not use, but that offer proper integration. Some of the features that I do not want to miss on a modern operating system are:
- Log Management: No per-service log file at some random location and with yet-another syntax and maybe some custom log rotation. You can finally correlate log messages.
- DNS: systemd-resolved offers a local DNS resolver that comes with DNSSEC and split DNS. It “just works”. There is no need to use systemd-networkd for this; use NetworkManager instead, which is better suited for laptops that connect to different and new networks regularly.
- Time Synchronization: You can use systemd-timesyncd to sync your computer's clock using NTP. No extra software stack required.
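Both services are configured via the usual drop-in mechanism. A sketch (file names and the NTP server choice are illustrative):

```ini
# /etc/systemd/resolved.conf.d/dnssec.conf (sketch)
[Resolve]
DNSSEC=allow-downgrade

# /etc/systemd/timesyncd.conf.d/ntp.conf (sketch; pick servers you trust)
[Time]
NTP=pool.ntp.org
```

After that, `timedatectl set-ntp true` enables synchronization and `resolvectl status` shows the per-link DNS configuration.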
Partitioning & Boot
Everything is UEFI-based nowadays. Use a GPT. You only need two partitions:
- EFI System: FAT32, contains all EFI binaries required to boot. Mounted under /efi.
- Data: LUKS2, which gets unlocked during boot (also see encryption). Contains a Btrfs file system. This in turn can have subvolumes and a swap file (if you need it).
You do not need an fstab file or kernel parameters to mount the partitions anymore. Just comply with the Discoverable Partitions Specification and systemd-gpt-auto-generator will automatically discover, decrypt, and mount your partitions.
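As a sketch, such a layout can be created with sgdisk; the device name is hypothetical, so double-check it before running anything destructive:

```sh
# Hypothetical disk -- verify the device name first!
DISK=/dev/nvme0n1

sgdisk --zap-all "$DISK"
# ef00 = EFI System Partition
sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" "$DISK"
# 8304 = sgdisk's shortcut for the x86-64 root partition type GUID
# from the Discoverable Partitions Specification, which is what
# systemd-gpt-auto-generator looks for.
sgdisk -n 2:0:0 -t 2:8304 -c 2:"Linux root" "$DISK"
```

The root partition then carries LUKS2 with Btrfs inside; because its type GUID marks it as the root partition, systemd can find, decrypt, and mount it without an fstab entry.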
Everything in the new UEFI world is based on PE images. You can pack a “stub” which translates from UEFI to “Linux” (use systemd-stub), your kernel, the initrd, the kernel command line, and an optional splash screen into a Unified Kernel Image – which is a specific format of a PE image.
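Recent systemd versions ship ukify for exactly this job. A sketch, with illustrative paths (your kernel and initrd locations will differ by distribution):

```sh
# Sketch: build a Unified Kernel Image with systemd's ukify.
# All paths and the command line are illustrative.
ukify build \
  --linux=/boot/vmlinuz-linux \
  --initrd=/boot/initramfs-linux.img \
  --cmdline="rw quiet" \
  --output=/efi/EFI/Linux/linux.efi
```

The resulting .efi file is a single signable artifact containing stub, kernel, initrd, and command line.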
Technically you do not need a bootloader because the UEFI can boot your Unified Kernel Image directly. However, there are still a few good reasons to use one:
- UEFI Memory: If you need multiple images for multiple kernels (e.g. as a fallback), you would need to register your different images on every update. Apparently, many UEFI mainboards use somewhat cheap persistent memory chips that should not be overwritten too often, so it is better to use the file system as a state store instead.
- Random Seeds: You can store a random seed on disk which is then used to seed your kernel during the early boot phase. This improves security.
- Boot Counting: The bootloader can count boot attempts and fall back to a stable kernel after an update.
systemd-boot is a good bootloader that implements all these nice features and is a solid but simple player within a UEFI environment.
I advise you to use an initrd that comes with systemd. This avoids having two completely different system managers (one during boot, one during runtime).
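On Arch Linux, for example, this is a one-line change in the initrd configuration, swapping the busybox-based hooks for their systemd counterparts (sketch; adjust to your hardware):

```sh
# /etc/mkinitcpio.conf (Arch Linux, sketch): systemd-based hooks,
# including sd-encrypt for LUKS2 unlocking during early boot.
HOOKS=(base systemd autodetect modconf kms keyboard sd-vconsole block sd-encrypt filesystems fsck)
```

Fedora's dracut builds a systemd-based initrd by default, so no change is needed there.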
Security
We need to make a few assumptions regarding security on modern systems:
- Distro: The Linux distribution in use is trusted. Updates should be provided promptly.
- Upstream: All system level packages are trusted.
- Hardware: The hardware the system is running on is trusted.
Especially the latter point comes not only with drawbacks but also with advantages. All laptops nowadays come with a TPM and allow us to use Secure Boot. This means that – even when your laptop was physically accessed – only trusted software can decrypt your data (see encryption). This is a very important property. I cannot overstate how important this combination of encryption and a trusted boot environment is. While encryption for sure protects data at rest, not having a trusted environment to type the decryption key into is somewhat pointless3.
There has been some beef with Microsoft over Secure Boot because they claimed exclusive key access on many systems. Luckily, Framework allows you to install custom keys at basically every layer, so even if you want to secure your Linux installation, you can still dual-boot Windows or be sure that OpROMs still load.
So the workflow is as follows:
- Create a key to sign your Unified Kernel Image, likely a “Signature Database (db)” key.
- Enroll the key into the UEFI. On the Framework system, just copy it to /efi and enroll it using the UEFI UI.
- Every time you build a new Unified Kernel Image, sign it with the aforementioned key.
- Make sure you have a very strong UEFI password.
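The key creation and signing steps can be sketched with openssl and sbsigntools; all file names here are illustrative:

```sh
# Sketch: create a self-signed Secure Boot db key pair.
openssl req -newkey rsa:4096 -nodes -keyout db.key \
  -new -x509 -sha256 -days 3650 \
  -subj "/CN=my Secure Boot db key/" -out db.crt

# Many UEFI UIs expect the certificate in DER format for enrollment.
openssl x509 -outform DER -in db.crt -out db.cer

# Sign the Unified Kernel Image (path is illustrative).
sbsign --key db.key --cert db.crt \
  --output /efi/EFI/Linux/linux.efi /efi/EFI/Linux/linux.efi
```

Keep db.key somewhere safe and make the signing step part of your kernel update routine so a new image never lands unsigned.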
Encryption
LUKS2 is used to encrypt all critical data. The Unified Kernel Image does not contain any secrets. Use cryptsetup for setup; it comes with state-of-the-art defaults. You have multiple options for how to decrypt your data:
- Password: You use a complex password. Since cryptsetup chooses proper defaults, this should be reasonably safe.
- TPM only: Rely on the hardware to provide the decryption key if Secure Boot was successful. Note that this allows an attacker to boot up the system, so you then need a proper user account password, and security vulnerabilities in the booted software may be exploited.
- TPM & PIN: Same as above but requires an additional PIN4, which prevents an attacker from booting the system. The PIN is usually shorter than a full-blown password, but the TPM usually implements a lockout mechanism if there are too many failed attempts, which is nice.
- FIDO2 with or without PIN: Same as TPM but using an external hardware token instead.
The TPM and FIDO2 options require systemd-cryptenroll.
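Enrollment is a single command per method; the device path below is hypothetical, so substitute your actual LUKS partition:

```sh
# Sketch: bind a LUKS2 volume to the TPM. PCR 7 reflects the
# Secure Boot state, so the key is only released when the boot
# chain is trusted. /dev/nvme0n1p2 is a placeholder.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Same, but additionally require a PIN at boot.
systemd-cryptenroll --tpm2-device=auto --tpm2-with-pin=yes /dev/nvme0n1p2

# Or enroll a FIDO2 hardware token instead.
systemd-cryptenroll --fido2-device=auto /dev/nvme0n1p2
```

The original password stays enrolled in its own key slot as a fallback unless you explicitly wipe it.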
Many laptops come with fingerprint readers. fprint allows you to use them if you like and has good integration with Gnome.
In my mind, a firewall on a laptop mostly prevents external systems and attackers from connecting to local services that accidentally or by bad design listen on an unprotected network interface. In an ideal world this would not be required, but it is a good safety net to have. FirewallD integrates well with NetworkManager and provides zone management, i.e. you can have different rules for your trusted home network, your office, and an open Wi-Fi.
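The zone mechanics can be sketched in three commands; the connection name is made up, and firewalld ships a predefined "home" zone:

```sh
# Sketch: allow mDNS only in the "home" zone...
firewall-cmd --permanent --zone=home --add-service=mdns

# ...and tell NetworkManager that this connection belongs to that zone.
nmcli connection modify "Home Wi-Fi" connection.zone home

firewall-cmd --reload
```

Connections without an explicit zone fall into the default zone, which you can keep restrictive for untrusted networks.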
USBGuard protects you against BadUSB. It has a rudimentary integration with Gnome, so you get a desktop notification whenever a device is blocked. Sadly there is no “click here to allow the new device” button, so currently the protection level is “lockscreen”: all devices that are plugged into a locked system are blocked. This is still better than the default behavior of Linux – e.g. connecting to a random network if a USB ethernet dongle is plugged in.
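The usual bootstrap workflow is to whitelist everything currently attached and handle new devices manually (the device number below is hypothetical):

```sh
# Sketch: allow all currently connected devices, then enforce the policy.
usbguard generate-policy > /etc/usbguard/rules.conf
systemctl enable --now usbguard

# Later: inspect and unblock newly attached devices by hand.
usbguard list-devices --blocked
usbguard allow-device 12   # hypothetical device number from the list above
```
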
In my personal experience USBGuard often blocks devices that should just work, e.g. Bluetooth. So I had to disable it.
Secure Software Foundation
Your system software should be designed to sandbox applications as well as possible. This can (and has to) be achieved on multiple levels:
- System Services: See systemd.
- Applications: See Flatpak.
- Resources & IPC: See Wayland, and PipeWire. D-Bus and polkit help on the IPC front.
X115 might have been nice back in the 90s, but it is a pile of hacks and a security issue – every application can wiretap everything. Wayland is here to clean up the messy display stack (Mir tried that as well but lost). Apps that still rely on X11 are bridged via Xwayland. Screensharing works (see PipeWire).
See “Wayland Limitations”, because not everything is roses at the moment.
PipeWire has replaced PulseAudio. It has proper Bluetooth support and is super stable. It also supports JACK clients, so you only need one audio solution. One may wonder why we need an audio daemon at all and cannot use ALSA directly. There are multiple good reasons for PipeWire:
- Virtual Hardware: Sometimes your “hardware” is not physically connected to your system. This includes Bluetooth speakers, loopback devices, network streaming devices, etc.
- Central Controls: Have a central and easy way to reroute audio, select default devices, mute applications, and set application-specific levels without clicking through menus of every single application.
- Filters: Using apps like EasyEffects you can add filters to input and output devices like noise cancellation, EQ, compressors, deessers, etc. For the framework laptop, the community offers EQ presets which improve audio quality significantly.
- Video & Screensharing: PipeWire is not only for audio but can also manage webcams and screensharing.
- Sandboxing: Flatpak plays very well with it.
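The “central controls” point can be sketched with WirePlumber's wpctl; the sink ID below is made up and comes from the status output:

```sh
# Sketch: central audio controls via wpctl.
wpctl status                      # list devices, sinks, sources, and streams
wpctl set-default 55              # hypothetical sink ID taken from `wpctl status`
wpctl set-volume @DEFAULT_AUDIO_SINK@ 0.8
wpctl set-mute @DEFAULT_AUDIO_SINK@ toggle
```
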
Flatpak allows application developers to package and distribute (e.g. through Flathub) their work without learning yet another package manager or fighting your distribution's weirdness. Furthermore, Flatpak comes with some form of sandboxing, even though this aspect can be improved in the future. The combination of unified packaging and sandboxing makes it easier to consume proprietary software like Discord, Steam, or Zoom; but I also find myself consuming many open source applications through it. It has become my go-to way to install GUI applications. Use Flatseal to manage permissions.
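If you prefer the command line over Flatseal, permissions can also be adjusted per app; a sketch using Discord as the example:

```sh
# Sketch: install from Flathub and tighten permissions per application.
flatpak install flathub com.discordapp.Discord

# Revoke home directory access for this app only (user-level override).
flatpak override --user --nofilesystem=home com.discordapp.Discord

flatpak run com.discordapp.Discord
```
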
Currently, Canonical tries to push their own format called Snap. I wonder if they again overestimate their market position after the Upstart and Mir disasters.
See “Flatpak & GPU” for current limitations.
Backups should be standard these days – not just any backup, but a good backup. To be precise, a backup should have the following properties:
- Automated: We are all lazy and forgetful, so backups should run automatically at regular intervals.
- Opt-out: All files should be automatically included if they are not explicitly block-listed. Otherwise, you are going to forget to include new data.
- Off-site: A NAS or a hard drive under your bed will not help you when your house is on fire. So store your data at a distant location.
- Raid-safe: The physical location of your data should ideally be owned by someone who is as disconnected from you as possible. If the police (or whoever) raid your home, you want your backups to be safe.
- End-to-End Encrypted: Nobody without credentials should be able to read your data. Encryption makes sure you can include even sensitive information without having to worry. Encryption must include metadata like file names.
- Incremental: You do not want to upload / transfer your whole data again every day.
- Minimal Management: If it is set up it should just work. Especially the storage location should not require you to perform weekly checkups. This requirement mostly eliminates private servers as storage locations.
- Snapshots: There should be multiple snapshots, not only the last one. A virus or yourself may delete or overwrite certain files, and it might take you a while until you realize that.
- Easy Access: It should be simple to access individual files or whole directories for a given snapshot. Mounting a virtual file system is a very good way.
- Compression: Ideally, data is compressed at least to a certain extent to save transfer and storage costs.
There are programs that handle all these requirements: Restic and Kopia. The latter has the better UI, but I have been using Restic for a while now, and it just works. You can choose between multiple cloud providers for storage. Backblaze B2 is my personal choice, but there might be better options.
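A Restic-on-B2 setup can be sketched in a few commands; the bucket name, paths, and exclude file are illustrative, and the credentials are placeholders:

```sh
# Sketch: Restic repository on Backblaze B2. All names are illustrative.
export B2_ACCOUNT_ID=...          # your B2 key ID (placeholder)
export B2_ACCOUNT_KEY=...         # your B2 application key (placeholder)
export RESTIC_REPOSITORY=b2:my-backup-bucket:/laptop

restic init                                        # once, creates the encrypted repo
restic backup --exclude-file ~/.backup-excludes /home
restic forget --prune --keep-daily 7 --keep-weekly 5 --keep-monthly 12
restic mount /mnt/restic                           # browse snapshots as a file system
```

Wrapped in a systemd timer, this covers the automated, incremental, end-to-end encrypted, snapshot-based, and easy-access requirements in one go.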
Power Management & Fan Control
I somewhat feel that not draining your battery, not heating up the keyboard enough to fry eggs, and being able to hear your own voice over the fan noise is still one of the biggest struggles Linux has.
To get a base level of sane behavior, I suggest you install power-profiles-daemon, which also comes with Gnome integration6. You can add powertop on top to debug current power drainage, but also to use its auto-tune option for even better battery life.
On top of that, there seem to be a few magic kernel arguments that for some reason are not the default but help a lot.
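The exact flags depend on your hardware; as commonly cited examples for recent Intel-based laptops like the Framework (verify against your vendor's and distribution's documentation before adopting them):

```sh
# Kernel command line additions (sketch; hardware-dependent):
# mem_sleep_default=deep   use S3 suspend instead of the power-hungry
#                          s2idle default, where the firmware supports it
# nvme.noacpi=1            works around NVMe power state issues that keep
#                          some SSDs from sleeping properly
```
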
Now you have all the requirements to be power efficient, but some apps like your browser may still insist on roasting your CPU instead of using your GPU. Use intel_gpu_top from the Intel GPU Tools to debug this. Firefox, for example, required the media.ffmpeg.vaapi.enabled config to be set to use GPU decoding.
Even when software and hardware are as power efficient as possible, your fans may still run at full speed. I do not know why the kernel or whatever driver does not handle this by default, but you may use thermald with dptfxtract to fix that.
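The setup boils down to two steps (sketch; dptfxtract only works on machines whose firmware ships the relevant thermal tables):

```sh
# Sketch: extract the vendor's thermal tables and let thermald use them.
sudo dptfxtract                        # writes a config into /etc/thermald/ on supported machines
sudo systemctl enable --now thermald
```
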
Use a Desktop Environment that supports the aforementioned technologies and works without requiring editing config files just to change the resolution of an external monitor. Gnome and KDE are both reasonable choices. In this post I often refer to Gnome because this is my daily driver.
Sometimes you want to run software that is not available for your distribution or even for Linux. Depending on the software there are a few ways to achieve that.
For OCI containers use Podman. It can run “rootless”, i.e. without requiring a system-wide daemon with root privileges that may backdoor your entire system. It also integrates better with the rest of a modern Linux system than Docker.
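Rootless usage is the default and needs no setup beyond the package itself; a sketch:

```sh
# Sketch: run a throwaway container as an unprivileged user -- no daemon,
# no root, processes show up as your own user on the host.
podman run --rm -it docker.io/library/alpine:latest sh
podman ps --all
```
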
For Windows games you may just use Steam. It uses Proton, which is based on Wine and mostly just works.
Other Windows Software
If you have other software that only runs on Windows you may try Bottles. It is a tool to configure and run Wine but easier to use than prefixes, manual hacks, or winetricks.
If you need a real Windows or even macOS, use Quickemu, which is an easy frontend for QEMU.
There is more to a laptop / desktop than the operating system. The UEFI has a software version, and different embedded controllers like the SSD run mini operating systems of their own. Traditionally, vendors provided firmware updates either via Windows programs or via FreeDOS-based update disks. Recently, the LVFS (Linux Vendor Firmware Service) and the client program fwupd allow Linux users to update all kinds of embedded controllers and connected devices like webcams. This is a game changer.
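The typical fwupd workflow is three commands:

```sh
# Sketch: check for and apply firmware updates via the LVFS.
fwupdmgr refresh          # fetch the latest metadata from the LVFS
fwupdmgr get-updates      # list devices with pending firmware updates
fwupdmgr update           # apply them (may reboot into the firmware updater)
```

Gnome Software integrates with the same daemon, so firmware updates can also show up right next to regular software updates.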
What Does Not Work (Yet)
Here are a few things that I tried to get working, but that did not.
iwd
iwd aims to replace wpa_supplicant with a clean implementation and improved kernel interfaces. Sadly, on three machines that I tested it on, the Wi-Fi connection just stops working after a while, with pretty unclear log messages mostly blaming the access point. wpa_supplicant however just works, so I keep using that.
Flatpak & GPU
If your Flatpak application wants to use GPGPU interfaces like OpenCL or oneAPI, you are pretty much out of luck at the moment, and you have to use non-sandboxed applications. Interfaces like OpenGL, Vulkan, and CUDA seem to work though. This affects, for example, Blender and darktable.
Wayland Limitations
Even years after its debut, Wayland still has some limitations. Now before we get into them, you may wonder if this is a heavy regression because X11 had all of that sorted out. To be honest, I do not think it did. It somehow allowed all kinds of features, but the way they were implemented was likely more a hack than a proper solution.
First, there is HDR and color management. The protocols (color-representation-v1, color-management-v1) are currently under discussion, and reading through all the details makes one aware that the complexity is probably justified and that we will hopefully have a very solid solution soon.7
Fractional scaling is when you want to scale your applications to your display, but this conversion is not a whole number. Now you cannot squeeze a fraction of a pixel into another, so this is far from trivial if you want to avoid a super blurry or otherwise broken result. The protocol fractional-scale-v1 was recently accepted and also merged into Gnome/Mutter so it should be available soon – if you ignore legacy X11 apps.
Summary and The Future
The Linux ecosystem provides a solid base for a desktop operating system, and the current projects are moving things in the right direction. I personally like Linux, and it is the right choice for many techy people. However, I think 2023 is not “The Year of the Linux Desktop”. There are too many gaps compared to what I would call a “state of the art” operating system.
Distributions are very conservative when it comes to hardware requirements and default configurations. Energy saving could be way more aggressive by default. The fact that hardware-based video decoding is still opt-in for many applications is frustrating. Software is often compiled for processors released decades ago without using new instruction set extensions by default. On the other hand, commercial competitors like Microsoft require you to have at least a somewhat up-to-date hardware configuration. I am not saying that Linux has to be equally aggressive, but it should be more progressive.
The fact that installing a Linux distribution (with exceptions like Fedora Silverblue) requires a package manager is archaic and error-prone. We have plenty of disk space nowadays, and it would be way more streamlined and secure to use an immutable installation / image-based OS – either via libostree or mkosi.
Applications should be sandboxed via Flatpak and if a developer needs some specialized tooling or a build environment that is not provided by the installation, they can still use OCI containers or Nix.
The fact that the security measures discussed in this blog post are not the default is just wild. People often claim that Linux is more secure than Windows, which might be true, but the bar is actually way higher if you look at a vanilla Android or macOS, for example.
Image: by Mariola Grobelska on Unsplash
1. Note that “rolling release” neither means “untested” nor “unreleased”. Arch Linux has a testing repository and does not ship unreleased software. Also see dont-ship.it.
2. I will call the whole combination of Linux as a kernel, the boot loader, systemd, and all system services an “operating system”. Many people have different opinions, but I think for “Linux on desktop” this is a reasonable point of view.
3. Some people may argue that you only have to make sure that you never leave your laptop out of sight. If you want to travel, I find this a very unreasonable assumption.
4. While PIN stands for “personal identification number”, it does not have to be numeric-only. Rather, it is a short password that can be typed in via a keyboard.
5. I treat X11 and X.Org as the same thing here, because they virtually are the same for most people.
6. Some people recommend TLP because it is “better” than power-profiles-daemon. I could not find a difference, and since the latter has Gnome integration, I stick with that.