ShaniOS Technical Documentation

Comprehensive guide to the immutable Linux OS with atomic updates

Overview

Welcome to the ShaniOS technical documentation. This wiki provides comprehensive information about ShaniOS's architecture, installation, configuration, and daily use.

ShaniOS is an immutable Linux desktop built on Arch Linux. It ships in two editions — GNOME and KDE Plasma — and is designed for users who want a stable, secure, low-maintenance system without sacrificing full Linux capability. In philosophy it is similar to Fedora Silverblue or SteamOS, but built on Arch's rolling release model for maximum package freshness.

Five ideas define how ShaniOS works:

  • Immutability: The root filesystem is read-only. Neither accidental commands nor malware can corrupt the OS — it always boots to a known-good state.
  • Atomic updates via blue-green deployment: Two complete system images (@blue and @green) are maintained at all times. Updates are written to the inactive image; you boot into it only when it's ready. The previous image remains as an instant rollback target.
  • Selective persistence: Your data, configuration, Flatpak apps, containers, and service credentials all live in separate Btrfs subvolumes that survive every update and rollback untouched.
  • Defence-in-depth security: Six Linux Security Modules run simultaneously (lsm=landlock,lockdown,yama,integrity,apparmor,bpf), LUKS2 argon2id encryption, TPM2 auto-unlock, Secure Boot, Intel ME disabled by default, and every OS image SHA256+GPG verified before deployment.
  • Zero telemetry: No usage data, crash reports, analytics, or tracking of any kind — ever. The entire codebase is public on GitHub; the claims are verifiable.
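
The blue-green model means exactly one of @blue or @green is mounted as the root at any time. As an illustration (not the official tooling), the active slot can be read from the Btrfs subvolume the root is mounted from; on a live system you would feed this from `findmnt -n -o SOURCE /`, but here a captured sample line is parsed so the snippet is self-contained, and the device name `shani_root` is illustrative:

```shell
# Sample of what `findmnt -n -o SOURCE /` might print on a ShaniOS machine
# (device name is a placeholder; the [/@blue] suffix is the mounted subvolume).
sample='/dev/mapper/shani_root[/@blue]'

# Extract the subvolume name between [ ... ] — that is the active slot.
active_slot=$(printf '%s\n' "$sample" | sed -n 's/.*\[\/\(@[a-z]*\)\].*/\1/p')
echo "active slot: $active_slot"   # the inactive slot is the other of @blue/@green
```

Updates are always written to whichever slot this does not name, so the running system is never modified in place.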

New to ShaniOS? Visit shani.dev for a general introduction. This wiki focuses on technical implementation and usage details.

Introduction

What is ShaniOS?

ShaniOS is an immutable Linux distribution that brings enterprise DevOps practices to desktop computing. Built on Arch Linux with Btrfs filesystem, it provides atomic updates, instant rollback, and system integrity by design. It ships as two editions — GNOME and KDE Plasma — and works out of the box with no post-install tweaking required.

The core pillars of ShaniOS are:

  • Immutability: The root filesystem (/) is mounted read-only. System binaries and libraries cannot be changed at runtime — by you, by software, or by malware. The only way to modify the OS is through the controlled shani-deploy update pipeline.
  • Blue-Green Deployment: Two complete system images (@blue and @green) are maintained. While one is active, updates are written to the other. On reboot you switch to the updated image. The previous one is kept as a one-command rollback target.
  • Atomic Updates: Updates are all-or-nothing. The running system is never touched during an update. If something goes wrong, boot failure is detected automatically and the system rolls back — your work is never interrupted.
  • Selective Persistence: User data, configuration changes (/etc overlay), Flatpak apps, containers, VMs, and service credentials all live in dedicated Btrfs subvolumes that survive every update and rollback, always. A dedicated @nix subvolume is pre-created for use with the Nix package manager.
  • Defence-in-Depth Security: Six Linux Security Modules run simultaneously (lsm=landlock,lockdown,yama,integrity,apparmor,bpf), LUKS2 argon2id full-disk encryption, TPM2 auto-unlock, Secure Boot with MOK, Intel ME disabled by default, and every OS image is SHA256+GPG verified before deployment.
  • Zero Telemetry: No usage data, crash reports, analytics, or tracking of any kind — ever. The entire codebase is public on GitHub; every claim is independently verifiable.
Traditional vs Immutable OS — Layer Comparison

Traditional Linux:

  • /boot/efi (ESP, FAT32): writable — accidental overwrites possible, no integrity check
  • /usr, /bin, /lib, /sbin: writable — any process can corrupt system binaries
  • /etc, /opt: writable — config drift accumulates silently
  • /var (logs, state, spool): mixed — grows unbounded, hard to audit
  • /home (user data): writable — no snapshot isolation by default
  • Consequences: a bad update can leave the system unbootable; dependency conflicts corrupt shared libraries; rollback requires manual snapshot discipline; system state drifts from the original install; any exploited process can modify /usr and /bin; no cryptographic verification of the boot chain

ShaniOS (Immutable):

  • /boot/efi (FAT32, LABEL=shani_boot, noauto): mounted only during updates — UKIs are Secure Boot signed per slot
  • /usr, /bin, /lib, /sbin (@blue / @green): read-only root — kernel-enforced, no writes possible
  • /etc (OverlayFS → @data): writable overlay — changes persist, base untouched
  • /var (tmpfs + bind mounts from @data): volatile base with selectively persisted service state
  • /home (@home Btrfs subvolume): persistent — Btrfs snapshots available on demand
  • Guarantees: atomic updates (all or nothing, never half-applied); instant rollback (previous slot always on disk); verified boot chain (SHA256 + GPG + Secure Boot); no system drift (root replaced wholesale each update); IMA/EVM runtime integrity measurement active; six LSMs active simultaneously (AppArmor, Landlock, and more)

Immutable OS separates read-only system layers from writable user/config layers — every layer has a defined, auditable behaviour

1. What's Included

ShaniOS comes fully equipped with a comprehensive software stack, carefully curated for desktop computing, development, gaming, and professional workloads. Everything works out of the box — no configuration required.

System Foundation

  • Base: Arch Linux with rolling release updates, latest stable kernel (linux) with Intel and AMD microcode (intel-ucode, amd-ucode)
  • Firmware: Comprehensive collection split by vendor: linux-firmware-amdgpu, linux-firmware-atheros, linux-firmware-broadcom, linux-firmware-cirrus, linux-firmware-intel, linux-firmware-mediatek, linux-firmware-nvidia, linux-firmware-radeon, linux-firmware-realtek, linux-firmware-other, broadcom-wl, sof-firmware, alsa-firmware
  • Bootloader: systemd-boot with UEFI and Secure Boot support (shim-signed, sbctl, mokutil, sbsigntools, efibootmgr, efivar, efitools, edk2-shell)
  • Boot Graphics: Plymouth with BGRT theme (displays manufacturer logo during boot)
  • Initramfs: dracut with full UKI (Unified Kernel Image) generation via gen-efi; cpio (initramfs archive format)
  • Filesystem: Btrfs with compression, CoW snapshots, automated maintenance (scrubbing, balancing, defrag, TRIM) and continuous background deduplication via bees
  • Image Integrity: ostree and composefs underpin image verification; squashfuse enables read-only squashfs mounts where needed
  • Volume Management: LVM2, software RAID (mdadm, dmraid), thin-provisioning-tools
  • Encryption: LUKS2 (cryptsetup), fscrypt, ecryptfs-utils, gocryptfs, volume_key
  • Additional Filesystems: ext4, XFS, F2FS, exFAT, NTFS (full read/write via ntfs-3g), FAT
  • Network Storage: NFS client/server (nfs-utils, nfsidmap, rpcbind), Samba/CIFS (samba, smbclient, cifs-utils), SSHFS
  • Advanced Storage: nvme-cli, ndctl (persistent memory), quota-tools, squashfs-tools, fuse-overlayfs, nbd, duperemove, compsize, udisks2 with Btrfs plugin (udisks2-btrfs), dosfstools, e2fsprogs, xfsprogs, f2fs-tools, exfatprogs, fatresize, mtools, btrfs-progs, btrfsmaintenance
  • Updates: Atomic deployment via shani-deploy — uses btrfs-progs, zstd, sbsigntools, dracut, jq, zsync, wget, yad (GUI notifications), duperemove (block-level deduplication), compsize (compressed size analysis), bees (background deduplication daemon). Automatic rollback on failed boot detected via boot-health tracking services.
  • Firmware Updates: fwupd with fwupd-efi and automatic periodic checks (fwupd-refresh.timer) for UEFI, Thunderbolt, storage, and peripheral firmware via the Linux Vendor Firmware Service. Both fwupd.service and fwupd-refresh.timer are enabled at install time.
  • Passthrough & Sharing: passim (local content sharing for caching fwupd metadata and update payloads across machines on the same network)
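
The "SHA256+GPG verified before deployment" step above boils down to a checksum check plus a signature check. A conceptual sketch of the checksum half (file names are hypothetical, not the real artifact names; the actual shani-deploy pipeline also verifies a detached GPG signature, e.g. with `gpg --verify`):

```shell
# Create a stand-in "image" and its checksum file in the current directory.
printf 'demo image payload' > shanios-demo.img
sha256sum shanios-demo.img > shanios-demo.img.sha256

# Verify before deploying — a mismatch must abort the update, never half-apply it.
if sha256sum -c shanios-demo.img.sha256 >/dev/null 2>&1; then
  echo "checksum OK: safe to write to the inactive slot"
else
  echo "checksum mismatch: abort update" >&2
fi
```

Because the image is only ever written to the inactive slot, a failed check simply leaves the running system untouched.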

Security

  • Mandatory Access Control: AppArmor (apparmor) enabled by default with policies enforced at boot; snapd.apparmor.service loads Snap confinement profiles automatically
  • Firewall: firewalld active with pre-configured rules for KDE Connect/GSConnect and Waydroid; firewall-config GUI included
  • Intrusion Prevention: fail2ban for automated log-based banning of repeated authentication failures
  • Policy Enforcement: verdict — used for network/policy decision logging and enforcement
  • TPM: tpm2-tools and tpm2-tss for hardware security module interaction (TPM 2.0); oath-toolkit for TOTP/HOTP two-factor authentication
  • Smart Card & Hardware Keys: opensc, ccid, acsccid, libnfc (NFC), libfido2 (FIDO2/U2F security keys), pcsclite
  • Cryptography: gnupg, pinentry, gpgme, p11-kit, libksba, npth, leancrypto
  • Remote Access: OpenSSH (openssh 10.x), Tailscale mesh VPN (WireGuard-based, state persisted in /data/varlib/tailscale), cloudflared zero-trust tunnels (state persisted in /data/varlib/cloudflared)
  • VPN Protocols: All NetworkManager VPN plugins are pre-installed — OpenVPN, WireGuard (wireguard-tools), L2TP (networkmanager-l2tp), PPTP (networkmanager-pptp), strongSwan/IKEv2 (networkmanager-strongswan), Cisco AnyConnect (networkmanager-openconnect, openconnect), SSTP (networkmanager-sstp, sstp-client), Fortinet SSL (openfortivpn), Cisco VPNC (networkmanager-vpnc, vpnc). Configured via the NetworkManager GUI with no manual installation needed.
  • Audit: audit daemon (audit) for kernel security event logging

Desktop & Applications

  • Desktop Environment: GNOME or KDE Plasma (chosen at download), Wayland-first with full X11 compatibility via xorg-xwayland
  • GNOME Edition includes: GDM, Mutter, gnome-shell with AppIndicator, Caffeine, and GSConnect extensions; Nautilus (with image-converter and share plugins), GNOME Software, Disks, System Monitor, Control Center, Logs, Firmware, Screenshots, Remote Desktop (gnome-remote-desktop), Keyring, Initial Setup, Tour, gSmartControl, Baobab, File Roller, Rygel (media server), GPaste (clipboard manager), Transmageddon (video converter), Malcontent (parental controls), complete GVFS stack including OneDrive (gvfs-onedrive), Google Drive (gvfs-google), MTP, SMB, NFS, AFP, gPhoto2, DNS-SD, WSD discovery; adwaita-fonts, Papirus icon theme, gnome-themes-extra, webp-pixbuf-loader
  • KDE Plasma Edition includes: plasma-login-manager (SDDM), KWin (kwin), KScreen (kscreen, libkscreen), Plasma Desktop, NetworkManager applet (plasma-nm), audio applet (plasma-pa), Discover, DrKonqi, Spectacle, Breeze and Breeze-GTK themes with cursors, Aurorae window decoration engine (aurorae), KDE GTK config (kde-gtk-config), KWallet with PAM integration (kwallet-pam), Bluedevil (bluetooth), Plasma Vault, Plasma Firewall (plasma-firewall), Plasma Thunderbolt, Plasma System Monitor (plasma-systemmonitor), kRDP (krdp), kRFB (krfb), KDE Connect (kdeconnect), Konsole, Yakuake, Ark, Dolphin with admin/GDrive/Zeroconf KIO plugins and thumbnailers (ffmpegthumbs, kdegraphics-thumbnailers, kimageformats, icoutils, libappimage, libspectre, libid3tag, libraw, libopenraw, libwmf, libultrahdr, djvulibre), partitionmanager (kpmcore), KCron, KSysLog, KJournald, Filelight, KDF, Sweeper, Converseen, Kvantum, QRca, Colord-KDE, plasma-applet-window-buttons, plasma6-applets-window-title, plasma-setup-git; Ocean sound theme; Knighttime (blue-light screen tint)
  • Pre-installed Apps: Vivaldi browser (full codecs), OnlyOffice suite, Gear Lever (AppImage GUI manager)
  • Package Formats: Flatpak (primary, from Flathub, auto-updates every 12 hours), AppImage via Gear Lever (portable apps with automatic desktop integration). Nix package manager is pre-installed and running — nix-daemon.socket is enabled at boot and all Nix data lives in the dedicated @nix Btrfs subvolume shared across both slots. Nix packages survive all updates and rollbacks. A channel must be added on first use before installing packages.
  • Containers: Podman (rootless, Docker-compatible, podman.socket enabled at boot), podman-docker (drop-in Docker CLI replacement), podman-compose (Docker Compose support), buildah (OCI image builder, no daemon), skopeo (OCI image inspection and copying between registries), conmon (container monitor), slirp4netns (user-mode networking for rootless containers), netavark + aardvark-dns (Podman network stack), Distrobox (BoxBuddy GUI installable from Flathub), LXC (@lxc subvolume), LXD (@lxd subvolume, lxd.socket enabled), lxcfs (filesystem virtualisation for containers, lxcfs.service enabled), systemd-nspawn (@machines subvolume), Apptainer (HPC/scientific container runtime, formerly Singularity), Snap (@snapd subvolume, snapd.socket enabled), fuse-overlayfs, catatonit (init for containers)
  • Android: Waydroid pre-configured with waydroid-helper (auto-setup and management), waydroid-container.service enabled at boot, python-pyclip (clipboard integration between Android and the host), firewall rules pre-configured, android-tools (adb, fastboot), android-udev
  • VM Guest Tools: spice-vdagent, qemu-guest-agent, virtualbox-guest-utils, open-vm-tools — grouped under shani-video-guest.target, which pulls in vboxservice, vmtoolsd, vmware-vmblock-fuse, and spice-vdagentd automatically when the package is installed
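
The Package Formats entry above notes that a Nix channel must be added on first use. A minimal first-run sketch (the channel name and URL shown are the common upstream defaults, not a ShaniOS-specific fact; the block is guarded so it is a no-op on machines without Nix):

```shell
nix_ready=skipped
if command -v nix-channel >/dev/null 2>&1; then
  # Register a channel and fetch its package index; required once before installs.
  nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
  nix-channel --update
  # Example install afterwards (commented out to keep this sketch lightweight):
  # nix-env -iA nixpkgs.hello
  nix_ready=channels-added
fi
echo "nix bootstrap: $nix_ready"
```

Everything Nix downloads lands in the shared @nix subvolume, which is why installed packages survive slot switches and rollbacks.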

Multimedia

  • Audio: PipeWire (1.4.x) with full ALSA, JACK, and PulseAudio compatibility (pipewire-alsa, pipewire-jack, pipewire-pulse, pipewire-v4l2, pipewire-libcamera, pipewire-zeroconf), WirePlumber session manager, realtime-privileges, sndio (for BSD-style audio applications)
  • Audio Codecs: FLAC, OPUS, Vorbis, MP3 (lame, mpg123), AAC (faac, faad2), AC3/DTS (a52dec, libdca), ALAC, Speex, SBC (Bluetooth), LDAC (Bluetooth hi-res), aptX (libfreeaptx), LC3 (liblc3), WavPack, musepack, MOD/XM (libmodplug, libopenmpt, wildmidi), GSM, TwoLAME (MPEG layer 2)
  • Video: Complete GStreamer plugin suite (gst-plugins-base, gst-plugins-good, gst-plugins-bad, gst-plugins-ugly, gst-libav for FFmpeg-based codecs, gst-plugin-va for VA-API hardware decode, gst-plugin-pipewire, gst-plugin-libcamera, gst-plugins-espeak for speech synthesis)
  • Video Codecs: FFmpeg (full build), AV1 encode via svt-av1 and rav1e, AV1 decode via dav1d and aom, HEVC encode via svt-hevc, VP8/VP9 via libvpx, H.264 via x264, H.265 via x265, MPEG2 (libmpeg2), XviD (xvidcore), AVIF (libavif), HEIF (libheif), JPEG XL (libjxl), WebP (libwebp), OpenEXR, Theora, Dirac; video filter and frame processing via vapoursynth and vid.stab; VMAF quality metric (libvmaf); HDR via libplacebo and libdovi, UHD HDR via libultrahdr
  • Image Processing: ImageMagick, Imlib2, Converseen (batch converter), GraphicsMagick-class processing via libgd; camera RAW via libraw, libopenraw; DNG/EXIF via exiv2
  • 3D Audio: Ambisonics/HRTF via libmysofa; room correction via libebur128; pitch/time-stretch via rubberband and soundtouch; sample-rate conversion via libsamplerate
  • Media Server: Rygel (DLNA/UPnP), GUPnP stack
  • Firmware: sof-firmware (Sound Open Firmware for Intel DSP audio), alsa-firmware (firmware for ALSA-supported legacy hardware)

Graphics Drivers & APIs

  • OpenGL/GLES: Mesa 3D (latest, 25.x) for Intel, AMD, NVIDIA (nouveau), and software rendering; GLU, freeglut
  • Vulkan: vulkan-icd-loader + drivers for Intel (vulkan-intel), AMD (vulkan-radeon), NVIDIA open (vulkan-nouveau), software (vulkan-swrast), VMs (vulkan-virtio), DirectX-on-Vulkan (vulkan-dzn), Qualcomm (vulkan-freedreno), Apple Silicon (vulkan-asahi), Android/gfxstream (vulkan-gfxstream); validation and debug layers (vulkan-mesa-layers, vulkan-mesa-implicit-layers), vulkan-tools for diagnostics
  • OpenCL: ocl-icd (ICD loader) + Mesa/GPU driver OpenCL support; clinfo for enumeration
  • Shader Compilation: glslang (GLSL/HLSL to SPIR-V), shaderc (Vulkan shader compilation), spirv-tools (SPIR-V optimization and validation)
  • Hardware Video Acceleration: Intel: libva-intel-driver (legacy), intel-media-driver (iHD, Gen8+), vpl-gpu-rt, intel-gmmlib; NVIDIA: nvidia-open (open kernel module), nvidia-utils, nvidia-prime; generic VDPAU: libvdpau, libvdpau-va-gl; VA-API loader: libva
  • Hybrid Graphics: nvidia-prime (PRIME render offload), switcheroo-control (automatic GPU selection for dual-GPU laptops)
  • Display Protocol: EGL via egl-gbm, egl-wayland (EGLStream), egl-wayland2 (GBM/NVK), egl-x11; EGLExternalPlatform framework; libxcvt (EDID timing)
  • Power: power-profiles-daemon (performance / balanced / power-saver switching)

Printing & Scanning

  • Print System: CUPS 2.4 (cups, libcups) with filters (cups-filters, libcupsfilters), PDF printer (cups-pdf), network browsing (cups-browsed), PolicyKit helper (cups-pk-helper), ipp-usb (driverless IPP-over-USB), Avahi for auto-discovery, ghostscript (PostScript/PDF), gsfonts (Ghostscript core fonts), a2ps (text-to-PostScript), print-manager (KDE print job manager), system-config-printer (GTK printer GUI)
  • Printer Drivers: Foomatic databases (foomatic-db, foomatic-db-engine, foomatic-db-ppds, foomatic-db-gutenprint-ppds), Gutenprint (gutenprint), HP (hplip-minimal), Epson ESC/P2 (epson-inkjet-printer-escpr2), Brother laser (brlaser), Canon CIJFILTER (cnijfilter2), generic PCL (foo2zjs-nightly, splix)
  • Scanning: SANE with sane-airscan (driverless network scanning via IPP), colord-sane (scanner color management), argyllcms (full ICC color profiling)
  • Barcode & Label: zbar (barcode detection/decoding), zint (barcode generation, multiple symbologies), qrencode (QR code generation), libdmtx (Data Matrix 2D barcodes)

Networking

  • Connection Management: NetworkManager (1.54.x) with nm-connection-editor GUI, ModemManager (3G/4G/5G, libmbim, libqmi), wpa_supplicant, wireless-regdb, wireless_tools, iw, usb_modeswitch, mobile-broadband-provider-info
  • VPN (all pre-installed, configured via GUI): OpenVPN (networkmanager-openvpn), WireGuard (wireguard-tools), L2TP/IPsec (networkmanager-l2tp, xl2tpd, ppp, pptpclient), PPTP (networkmanager-pptp), strongSwan/IKEv2 (networkmanager-strongswan, strongswan), Cisco AnyConnect/OpenConnect (networkmanager-openconnect, openconnect), SSTP (networkmanager-sstp, sstp-client, network-manager-sstp), Fortinet SSL VPN (openfortivpn), Cisco VPNC (networkmanager-vpnc, vpnc)
  • Mesh VPN / Tunneling: Tailscale (tailscale, WireGuard-based zero-config mesh), cloudflared (Cloudflare Zero Trust tunnels and WARP)
  • DNS & Discovery: dnsmasq (local DNS/DHCP), openresolv (resolv.conf management), nss-mdns, Avahi (mDNS/DNS-SD), bind (full resolver), pacrunner (PAC/WPAD), dnssec-anchors, aardvark-dns (Podman container DNS)
  • File Sharing: Samba (samba, smbclient, libwbclient), NFS (nfs-utils, nfsidmap, rpcbind), kdenetwork-filesharing (KDE Samba sharing GUI)
  • Web Server: Caddy (caddy 2.10.x) with automatic HTTPS via Let's Encrypt
  • Diagnostics: nmap, tcpdump, iperf3, mtr, iftop, ethtool, traceroute, whois, socat, ngrep, nethogs, net-snmp, lsof
  • Utilities: curl, wget, aria2, rsync, zsync, openbsd-netcat, net-tools, inetutils, iproute2, rclone (cloud sync to 40+ providers), restic (encrypted backups)
  • Remote Desktop: FreeRDP (freerdp) client; kRDP (krdp) and kRFB (krfb) servers on KDE; gnome-remote-desktop on GNOME

Gaming Hardware & Performance

  • Controllers: game-devices-udev (8BitDo, PlayStation DS3/DS4/DualSense/DualSense Edge, Xbox 360/One/Series, Nintendo Switch Pro/Joy-Cons/GameCube adapter, Google Stadia, NVIDIA Shield, Steam Controller, Razer, Nacon, Hori, PowerA, PDP, Mad Catz, Astro C40), fight sticks (Razer Panthera, Hit Box), VR (Vive, PSVR, Valve Index/SteamVR), flight sticks (VKBSim, Logitech), uinput for virtual device emulation (streaming, mapping tools)
  • Racing Wheels: Logitech (G25/G27/G29/G920/G923/PRO/Driving Force/MOMO), Thrustmaster (T150/T300RS/T500RS/T248/TS-XW/TS-PC), Fanatec — with full force feedback and deadzone removal
  • RGB Peripherals: OpenRGB udev rules (60-openrgb.rules) for keyboards, mice, mousepads, headsets, fans, and accessories from ASUS Aura, Corsair, Razer, SteelSeries, Logitech G, MSI Mystic Light, Cooler Master, NZXT, and others
  • Gaming Mice: libratbag + ratbagd daemon for DPI configuration, button remapping, and LED control on gaming mice (Logitech, Razer, SteelSeries, etc.)
  • Input Mapping: AntiMicroX available via Flatpak with pre-configured uinput access
  • Gamemode: the gamemode daemon auto-applies CPU governor, I/O priority, and GPU performance tweaks when games are launched; ananicy-cpp manages background process priorities
  • Legacy: linuxconsole (joystick calibration, legacy gamepads), inputattach (serial input devices)

Wireless & Connectivity

  • Bluetooth: Full BlueZ stack (bluez 5.86) — bluez-utils, bluez-hid2hci, bluez-cups (printing over Bluetooth), bluez-obex (file transfer), bluez-mesh (mesh networking), bluez-tools; Bluedevil (KDE) / gnome-bluetooth (GNOME) GUI; bluez-qt for Qt/KDE integration
  • Biometrics: fprintd (fingerprint readers via libfprint, supporting capacitive and optical sensors)
  • Smart Cards & Keys: opensc (PKCS#11 smart cards), ccid (USB CCID driver), acsccid (ACS smart card readers), pcsclite (PC/SC daemon), libnfc (NFC hardware), libfido2 (FIDO2/U2F hardware security keys — YubiKey, etc.), hidapi (HID device access, used by FIDO2 and many peripherals)
  • Mobile Devices: usbmuxd (iOS connectivity, tethering, AFC), libimobiledevice, libimobiledevice-glue, libtatsu, libplist; android-tools (adb, fastboot), android-udev
  • Sensors: iio-sensor-proxy (accelerometer, gyroscope, ambient light sensor — auto-rotates screen), geoclue (geolocation for applications), gpsd (GPS hardware — gpsd.socket), lirc (IR remote controls)
  • Thunderbolt: bolt (Thunderbolt 3/4 device authorization and security management)
  • UPS: apcupsd (APC UPS monitoring, automatic shutdown on power loss)
  • Optical Media: libdvdcss (encrypted DVD decryption), libdvdread, libdvdnav, libbluray (Blu-ray), cdparanoia (CD ripping), cdio

Development Tools

  • Version Control: Git (2.53.x), Subversion (svn), Mercurial (hg)
  • Editors: Vim, Vi (ex-vi-compat), Nano, Micro (modern terminal editor), ed; Tmux (terminal multiplexer)
  • Languages & Runtimes: Python 3.14 with pip, Perl 5.42 with comprehensive CPAN modules (XML, HTTP, LWP, DBI, URI, HTML), Tcl/Tk (tcl), Lua 5.4, Guile 3.0 (Scheme); gcc-libs, binutils for native compilation; GDB with Python support
  • Build & Tooling: make, patch, flex, m4, pkgconf, fakeroot, libtool; CMake and Meson available via Flatpak/Distrobox
  • Debugging & Profiling: strace, lsof, elfutils, gdb (ltrace-style library-call tracing is possible through gdb); gperftools (heap/CPU profiler); valgrind available via Distrobox
  • Graphics Dev: vulkan-tools, mesa-utils, clinfo (OpenCL enumeration), glslang, shaderc, spirv-tools
  • Data & Config: jq (JSON; yq-style YAML queries can be driven through jq), SQLite, LMDB, kyotocabinet, yajl, jansson, json-c; YAML via libyaml, python-yaml; XML via libxml2, libxslt, perl-xml-libxml
  • Containers & CI: Podman, buildah, skopeo, distrobox — full OCI toolchain available without Docker daemon

System Utilities

  • Core Tools: sudo, which, less, diffutils, patch, time, gnupg, man-db, plocate, grep, sed, gawk, findutils, coreutils, util-linux, procps-ng, psmisc, shadow
  • Compression & Archives: 7zip, zip/unzip, unrar, unarchiver (The Unarchiver), unace, arj, lrzip, lzop, lz4, zstd, bzip2, xz, gzip, brotli, snappy, lzo, zlib-ng; squashfs-tools, squashfuse, elfutils (ELF binary inspection and DWARF debugging)
  • Monitoring & Diagnostics: htop, inxi, fastfetch, dmidecode, tree, ncdu, sysstat (iostat, sar), smartmontools (S.M.A.R.T. disk health), lm_sensors, hwloc, hwdata, pciutils, usbutils, lsof, strace
  • Automation & Scripting: cronie, logrotate, expect, dialog, pv, jq, yad (GTK dialog), at, run-parts
  • File Management: rsync, rclone, restic, duperemove, compsize, fzf, ripgrep (rg), ripgrep-all (rga — search PDFs, Office docs, etc.), fd (find alternative)
  • Document Processing: ghostscript, poppler (PDF), libspectre (PostScript), a2ps, psutils, discount (Markdown), convertlit (LIT ebooks), ebook-tools
  • Misc: bc (calculator), words (dictionary), lynx (terminal browser), bsd-games, texinfo, source-highlight, editorconfig-core-c

Networking Tools

  • Traffic Analysis: iftop (real-time bandwidth per connection), nethogs (per-process bandwidth), bandwhich (bandwidth utilisation by process and remote address), ngrep (network packet grep), tcpdump (packet capture), iperf3 (throughput measurement)
  • Diagnostics & Discovery: ethtool (NIC configuration and link statistics), net-tools (ifconfig, netstat, route), nmap (port scanner and host discovery), traceroute, whois, wireless_tools (iwconfig, iwlist), mtr (traceroute + ping combined), bind (dig, nslookup), inetutils (ping, telnet, ftp, hostname), usbutils
  • Connectivity: openbsd-netcat (nc — general TCP/UDP tool), socat (multipurpose relay), curl, wget, aria2 (multi-protocol downloader with BitTorrent/Metalink support), nbd (network block device)
  • Sync & Backup: rsync (incremental file sync), zsync (delta sync over HTTP), rclone (cloud storage sync — S3, GDrive, Backblaze, etc.), restic (encrypted backup)
  • Tunnelling & Services: Tailscale (mesh VPN), cloudflared (Cloudflare Zero Trust tunnel), Caddy (automatic HTTPS web server and reverse proxy)

Shell Experience

  • Default Shell: Zsh 5.9 with fish-style syntax highlighting (zsh-syntax-highlighting), autosuggestions (zsh-autosuggestions), and history substring search (zsh-history-substring-search); full completion support via zsh-completions
  • Prompt: Starship — fast cross-shell prompt with git integration, environment detection for Python, Node.js, Rust, Go, and more
  • Smart History: McFly — neural network command history search that learns from usage patterns and directory context
  • Fuzzy Finder: FZF — integrated into Zsh for Ctrl+R history search, Ctrl+T file insertion, and Alt+C directory navigation
  • Alternatives: Bash 5.3 (with bash-completion) and Fish 4.5 (autosuggestions and syntax highlighting are built in) — all three shells are installed; new users default to Zsh

Fonts & Accessibility

  • Fonts: Noto fonts family (noto-fonts for Latin/Greek/Cyrillic, noto-fonts-cjk for Chinese/Japanese/Korean, noto-fonts-emoji for color emoji, TTF Hack for monospace/coding), gsfonts (Ghostscript PostScript fonts), Adwaita fonts; fontconfig pre-tuned with rendering profiles (70-noto-cjk.conf, 75-noto-color-emoji.conf, 90-indian-fonts.conf for Indian language scripts including Devanagari, Tamil, Telugu, etc.)
  • Input Methods: IBus with ibus-typing-booster (predictive/multi-language), ibus-libpinyin (Chinese Simplified/Traditional), ibus-anthy (Japanese), ibus-hangul (Korean), ibus-unikey (Vietnamese); Anthy and libpinyin dictionaries included; m17n-lib and m17n-db for additional script transliteration (Arabic, Thai, Indic, etc.)
  • Accessibility Stack (shani-accessibility): Orca screen reader (orca), espeak-ng TTS engine, espeakup (console speech bridge), brltty (braille displays — note: excluded from initramfs, available after boot), speech-dispatcher (unified speech abstraction layer), pcaudiolib (audio support for espeak-ng), at-spi2-core (assistive technology infrastructure), liblouis (braille translation), dotconf; ddcutil (monitor brightness/contrast control via DDC/CI)
  • Localization: glibc-locales (all locale data), iana-etc, iso-codes, tzdata, lsb-release

2. User Configuration

The primary user is automatically configured with appropriate permissions during installation. ShaniOS also watches for newly created users: the shani-user-setup.path unit monitors /etc/passwd for changes and triggers shani-user-setup.service whenever a new regular user (UID 1000–59999) is detected. That service automatically adds the user to all required groups and sets their default shell to /bin/zsh. This means any user created post-installation via the GUI or useradd/adduser gets the same setup automatically — both of those commands are also wrapped to inject the default group list.
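
A sketch of the UID gate the service applies (the 1000–59999 range is taken from the description above; the function name is illustrative, not part of shani-user-setup):

```shell
# Regular users get the full group/shell setup; system accounts and the
# reserved high range are skipped.
is_regular_user_uid() {
  [ "$1" -ge 1000 ] && [ "$1" -le 59999 ]
}

is_regular_user_uid 1000  && echo "1000: configured"
is_regular_user_uid 999   || echo "999: skipped (system account)"
is_regular_user_uid 60000 || echo "60000: skipped (reserved range)"
```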

User Groups

  • wheel: Sudo privileges for system administration
  • input: Direct input device access (keyboards, mice, controllers)
  • realtime: Real-time scheduling, HPET/RTC access for audio production and low-latency gaming
  • video: GPU and video hardware access
  • sys: Hardware monitoring and sensor access
  • cups, lp: Printer management and job submission
  • scanner: Scanner device access
  • nixbld: Nix build users group — required for the Nix package manager daemon
  • lxc, lxd: LXC/LXD container management without root
  • kvm: Virtual machine management (KVM hardware access)
  • libvirt: libvirt VM management via virsh and virt-manager
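
For a manually created account, the same membership can be granted in one command. A hypothetical assembly of that invocation (the group names come from the list above; the exact internals of the wrapped useradd/adduser may differ):

```shell
# Build the comma-separated supplementary group list usermod expects.
default_groups="wheel input realtime video sys cups lp scanner nixbld lxc lxd kvm libvirt"
group_csv=$(printf '%s' "$default_groups" | tr ' ' ',')

# The command you would run as root for a user named "newuser":
echo "usermod -aG $group_csv -s /bin/zsh newuser"
```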

Firewall Rules

  • KDE Connect/GSConnect: Ports opened in public zone for device pairing, file transfer, notifications, remote control
  • Waydroid: DNS (53/udp, 67/udp), packet forwarding enabled, waydroid0 in trusted zone
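
firewalld stores zone customisations as XML under /etc/firewalld/zones/. As an illustration of the KDE Connect rule described above (this fragment is a sketch, not a copy of the shipped configuration; KDE Connect conventionally uses TCP/UDP ports 1714–1764):

```xml
<!-- Hypothetical excerpt of /etc/firewalld/zones/public.xml -->
<zone>
  <short>Public</short>
  <port protocol="tcp" port="1714-1764"/>
  <port protocol="udp" port="1714-1764"/>
</zone>
```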

3. System Optimizations

ShaniOS includes extensive performance, gaming, and reliability optimizations, pre-configured and active from first boot, eliminating the need for manual tweaking.

Memory & Storage Management

  • ZRAM Compression: Automatic RAM compression using the zstd algorithm (configured to use full RAM size) provides improved memory efficiency without requiring swap partitions. The system dynamically allocates compressed swap in RAM, reducing disk I/O and extending SSD lifespan.
  • Optimized Swappiness: Tuned to 133 for balanced memory management on modern systems. This value encourages the kernel to use compressed ZRAM swap before evicting clean page cache, maintaining system responsiveness under memory pressure.
  • Btrfs Maintenance: Automated filesystem maintenance runs in the background, including periodic scrubbing for data integrity, balance operations to optimize storage layout, and defragmentation for improved performance. These operations are scheduled to minimize impact on system usage.
  • Profile Sync Daemon: Browser profiles are stored in tmpfs (RAM) for dramatically faster load times and reduced disk writes. Changes are periodically synchronized back to persistent storage, combining speed with data safety. This reduces SSD wear while improving browser responsiveness.
  • Optimized Page Lock Unfairness: Set to 1 to reduce lock contention and improve multi-threaded application performance, particularly beneficial for databases and high-concurrency workloads.
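
Assuming the common systemd zram-generator mechanism for the ZRAM setup (the document does not name the exact tool, so treat file names and keys as illustrative), the behaviour described above corresponds to configuration fragments like:

```ini
; /etc/systemd/zram-generator.conf (hypothetical) — zstd-compressed swap
; device sized to the full RAM, as described above
[zram0]
zram-size = ram
compression-algorithm = zstd

; /etc/sysctl.d/99-shani-memory.conf (hypothetical file name)
; prefer cheap ZRAM swap over evicting page cache; reduce page-lock contention
vm.swappiness = 133
vm.page_lock_unfairness = 1
```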

Gaming & Performance Optimizations

ShaniOS is optimized for gaming performance with multiple layers of system tuning:

Gaming Hardware Support:

  • game-devices-udev Package: Comprehensive udev rules for gaming peripherals with user-grade permissions, including:
    • Controllers: 8BitDo, PlayStation (DualShock 3/4, DualSense, DualSense Edge), Xbox (360, One, Series), Nintendo (Switch Pro, Joy-Cons, GameCube adapter), Google Stadia, NVIDIA Shield, Razer, Nacon, Hori, PowerA, PDP, Mad Catz, Astro C40, and more
    • Fight Sticks & Arcade: Razer Panthera, Mad Catz FightStick, Hit Box, and arcade-style controllers
    • VR Equipment: HTC Vive, PlayStation VR, Valve Index/SteamVR devices with full hardware access
    • Flight Sticks & Sim: VKBSim Gladiator, Logitech F310/F710, and other simulation controllers
    • uinput Support: Early creation of /dev/uinput for virtual device emulation (gamepad mappers, streaming tools)
  • Additional RGB & Peripheral Support: OpenRGB udev rules (60-openrgb.rules) for comprehensive RGB control:
    • RGB Keyboards: ASUS Aura, Corsair, Razer, SteelSeries, Logitech G-series, MSI Mystic Light, Cooler Master, NZXT, and dozens more
    • RGB Mice: Gaming mice from all major manufacturers with LED control
    • RGB Mousepads: Corsair, Razer, SteelSeries, and other illuminated mousepads
    • RGB Accessories: Headset stands, mouse docks, PC case lighting, AIO coolers, and more
  • Racing Wheel Support: Full force feedback and configuration access:
    • Logitech wheels (99-logitech-wheel-perms.rules): G29, G920, G923, G PRO, and legacy wheels (G25, G27, Driving Force, MOMO) with range, gain, LED, and force feedback tuning
    • Thrustmaster wheels (99-thrustmaster-wheel-perms.rules): T150, T300RS, T500RS, T248, TS-XW, TS-PC with spring, damper, and friction control
    • Fanatec wheels (99-fanatec-wheel-perms.rules): Full hardware access with deadzone removal via evdev-joystick
  • Game Input Utilities:
    • AntiMicroX: Gamepad to keyboard/mouse mapper with uinput access (60-antimicrox-uinput.rules)
    • Steam Controller support: Native hardware access for Valve's gaming ecosystem

Kernel-Level Gaming Optimizations:

  • Game Compatibility Kernel Parameters: Pre-configured for maximum compatibility:
    • Increased PID limit (65535) supports complex games with many processes
    • Expanded memory map areas (2147483642) accommodates large game worlds and high-resolution texture streaming
    • Enhanced inotify limits (1024 instances, 524288 watches) for games with extensive file monitoring
  • Network Optimization: TCP FIN timeout reduced to 5 seconds (from default 60s) enables games to quickly reuse network ports when restarting. This Valve SteamOS optimization prevents "address already in use" errors in multiplayer games.
  • CPU Scheduler Tuning: Multiple scheduler optimizations for gaming performance:
    • CFS bandwidth slice: 3000μs for improved interactivity and reduced input latency
    • Base slice: 3000000ns (3ms) for consistent frame times
    • Migration cost: 500000ns to prevent excessive CPU migration overhead
    • Migration limit: 8 tasks per balance operation for optimal multi-core utilization
    • Child runs first: Disabled (0) to prevent priority inversion in gaming scenarios
    • Autogroup scheduling: Enabled (1) to automatically group related processes for better desktop responsiveness
  • Advanced Memory Management:
    • MGLRU (Multi-Gen LRU) enabled with aggressive settings for intelligent memory reclaim that preserves recently accessed game data
    • Transparent Hugepage set to madvise for selective large page usage, improving performance for games that explicitly request it
    • Optimized watermark scaling (500) and minimum free memory (1MB) for consistent response times under load
    • Reduced compaction proactiveness (0) minimizes latency spikes during memory allocation
    • Watermark boost factor: Set to 1 (minimal) to reduce aggressive memory reclaim
    • Zone reclaim disabled (0) prevents unnecessary NUMA overhead on multi-socket systems
  • GameMode Integration: GameMode daemon enabled globally for all users, providing automatic performance optimizations when gaming. GameMode dynamically adjusts CPU governor, process priorities, I/O scheduling, and GPU performance levels. Games are automatically detected and optimized without manual configuration.
  • High-Precision Timers: HPET and RTC frequencies increased to 3072 Hz (from default 64 Hz) for superior timing accuracy. This reduces input lag and improves frame pacing in games requiring precise timing, particularly beneficial for rhythm games and competitive shooters.
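The kernel tunables above can be checked on a running system. The following sketch lists the values described in this section (the expected numbers are ShaniOS defaults as documented here) and compares them with what the kernel currently reports; it prints "n/a" where a tunable is unavailable.

```shell
# Compare the documented gaming tunables against the live system.
gaming_tunables() {
  # "name expected" pairs, taken from the values listed above
  cat <<'EOF'
kernel.pid_max 65535
vm.max_map_count 2147483642
fs.inotify.max_user_instances 1024
fs.inotify.max_user_watches 524288
net.ipv4.tcp_fin_timeout 5
EOF
}

gaming_tunables | while read -r key expected; do
  current=$(sysctl -n "$key" 2>/dev/null || echo "n/a")
  printf '%-35s expected=%-12s current=%s\n' "$key" "$expected" "$current"
done
```

On ShaniOS each `current` value should match `expected`; on other distributions the comparison shows which tunables differ.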

Process Management & Responsiveness

ShaniOS enables several system services for intelligent resource management:

  • Ananicy-cpp: Automatic process priority management service with game-aware rules (enabled by default). Background tasks are automatically deprioritized when games or media applications are active, ensuring smooth performance without manual nice/renice commands.
  • systemd-oomd: System-wide OOM (Out-Of-Memory) daemon enabled by default prevents system freezes. When memory pressure is detected, systemd-oomd selectively terminates low-priority processes based on memory usage patterns and process priorities, keeping critical services and active applications running.
  • IRQBalance: Service enabled to optimize interrupt distribution across CPU cores, preventing a single core from becoming overwhelmed with hardware interrupts. This is particularly important for systems with high-bandwidth network cards or NVMe drives, ensuring balanced load across all cores.
  • Increased File & Process Limits: Default limits raised to 1,048,576 for both open files (NOFILE) and processes (NPROC). This prevents "too many open files" errors in development environments, containerized workloads, and servers running multiple services.
  • Fast Shutdown: Reduced timeout values (10s for stop, 10s for abort) ensure quick system shutdowns without waiting for unresponsive services. Services that fail to stop gracefully are automatically killed after the timeout.

Security Hardening

  • AppArmor Mandatory Access Control:
    • AppArmor (apparmor.service): Enabled by default to confine system services and applications with security profiles
  • Firewall Protection:
    • firewalld: Active firewall enabled by default, denying inbound connections while allowing essential services
    • Pre-configured rules: Automatic firewall configuration for KDE Connect/GSConnect and Waydroid during installation
  • Kernel Hardening:
    • NMI watchdog disabled for faster boot times and reduced CPU overhead (this is safe on modern systems with other hang detection mechanisms)
    • Unprivileged user namespaces enabled for secure containerization (required for Podman, Flatpak sandboxing, and Distrobox)
    • Magic SysRq keys enabled for emergency recovery (Alt+SysRq+REISUB sequence for safe reboots during system hangs)
  • Hardware Access Controls: The realtime group receives permission to access HPET and RTC devices directly. This is essential for professional audio production, real-time multimedia applications, and low-latency gaming. Users in the realtime group can run applications with real-time scheduling priorities.
  • Blacklisted Kernel Modules:
    • PC speaker (pcspkr) disabled to eliminate annoying beeps
    • Intel Management Engine (mei, mei_me) disabled for enhanced privacy and security, removing Intel's remote management interface
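The module blacklist above corresponds to a modprobe.d fragment. The helper below reproduces that fragment for inspection; the function name and the suggested file path are illustrative, not the actual ShaniOS file layout.

```shell
# Emit a modprobe.d fragment matching the blacklist described above.
# (Hypothetical helper; the real ShaniOS config file may be named differently.)
blacklist_conf() {
  cat <<'EOF'
blacklist pcspkr
blacklist mei
blacklist mei_me
EOF
}

# Inspect the fragment; on a mutable system you would install it with e.g.:
#   blacklist_conf | sudo tee /etc/modprobe.d/shanios-blacklist.conf
blacklist_conf
```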

Boot & System Efficiency

  • Plymouth (BGRT Theme): Smooth graphical boot experience with the BGRT (Boot Graphics Resource Table) theme enabled by default. Plymouth provides a polished boot animation while hiding technical boot messages, presenting a clean interface for password entry during LUKS unlock. The BGRT theme displays your manufacturer's logo for a seamless firmware-to-OS transition.
  • Journal Size Limit: systemd journal capped at 50 MB to prevent excessive disk usage from log accumulation. This is particularly important on systems with limited storage, while still maintaining sufficient logs for troubleshooting recent issues.
  • Reduced Kernel Messages: Console printk level set to 3 (errors and critical messages only) for cleaner boot output. This eliminates informational noise while preserving important error messages needed for diagnostics.
  • Optimized Service Startup: Parallel service initialization with optimized dependencies reduces boot time. Non-critical services are delayed or started on-demand, prioritizing services required for desktop readiness.
  • Time Synchronization: systemd-timesyncd enabled by default for automatic network time protocol (NTP) synchronization, ensuring accurate system time without requiring a full NTP daemon.
  • Socket Activation: Many services use socket activation for on-demand startup, reducing boot time and memory usage:
    • pcscd.socket: Smart card daemon (PC/SC) starts only when smart cards are accessed
    • lircd.socket: Linux Infrared Remote Control daemon for IR remote support
    • gpsd.socket: GPS daemon for location services and navigation hardware
    • cups.socket: Printing service starts on-demand when print jobs are sent
    • avahi-daemon.socket: Network service discovery (mDNS/DNS-SD) for zero-configuration networking
    • saned.socket: Scanner daemon starts when scanning is needed
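The activation state of these sockets can be inspected with systemctl. This sketch loops over the sockets listed above and reports each one's current state; it prints "unknown" where systemd is unavailable, and the exact states will vary per machine.

```shell
# Report the state of each socket-activated service named above.
list_sockets() {
  for s in pcscd lircd gpsd cups avahi-daemon saned; do
    state=$(systemctl is-active "$s.socket" 2>/dev/null)
    [ -n "$state" ] || state=unknown
    printf '%-22s %s\n' "$s.socket" "$state"
  done
}

list_sockets
```

A socket shown as "active" is listening; the backing service itself starts only on first use, which is the point of socket activation.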

Hardware & Peripheral Support

ShaniOS enables comprehensive hardware support services by default:

  • Printing:
    • CUPS (cups.socket): Common UNIX Printing System for all printer support
    • cups-browsed: Automatic printer discovery on the network
    • ipp-usb: IPP-over-USB support for modern driverless printers
    • Avahi (avahi-daemon.socket): Network printer discovery via Bonjour/mDNS
  • Scanning:
    • SANE (saned.socket): Scanner Access Now Easy for scanner support
    • ipp-usb: Also provides driverless scanning support
  • Biometric Authentication:
    • fprintd: Fingerprint reader support for login and authentication
  • Bluetooth:
    • bluetooth.service: Full Bluetooth stack for wireless peripherals, audio, and file transfer
  • Mobile Devices:
    • usbmuxd: iOS device support for iPhone/iPad connectivity, file transfer, and tethering
  • Networking:
    • NetworkManager: Intelligent network connection management for wired, wireless, VPN, and mobile connections
    • ModemManager: Mobile broadband modem support (3G/4G/5G)
  • Gaming Peripherals:
    • ratbagd: Gaming mouse configuration daemon for DPI, button mapping, and LED control (libratbag)
    • inputattach: Serial input device support (legacy joysticks, game controllers)
  • Location Services:
    • geoclue: Geolocation service for applications requiring location data
    • gpsd.socket: GPS hardware support
  • Thunderbolt Security:
    • bolt: Thunderbolt 3 device authorization and management
  • Power Management:
    • apcupsd: APC UPS monitoring and management for battery backup systems
    • power-profiles-daemon: System power profile management (performance, balanced, power-saver)
    • switcheroo-control: Hybrid graphics switching for laptops with dual GPUs (Intel + NVIDIA/AMD)
  • Firmware Updates:
    • fwupd: Linux Vendor Firmware Service for automatic firmware updates
    • fwupd-refresh.timer: Periodic firmware update checks

Enhanced Shell Experience

ShaniOS provides a modern, feature-rich shell environment that rivals the best developer setups:

  • Zsh as Default Shell: Powerful shell with advanced completion, globbing, and command-line editing. Zsh provides better defaults than Bash while maintaining compatibility with Bash scripts.
  • Fish-Style Features:
    • Syntax highlighting shows valid/invalid commands in real-time as you type
    • Autosuggestions suggests commands from history with ghost text completion
    • History substring search allows arrow-up/down to search through history based on what you've typed
  • Starship Prompt: Fast, minimal, and infinitely customizable prompt with intelligent git integration, showing branch status, ahead/behind commits, and repository state. The prompt adapts to display context-relevant information (Python virtualenv, Node.js version, Rust toolchain, etc.).
  • McFly: Neural network-powered command history search learns from your usage patterns. McFly prioritizes frequently used commands in relevant directories and contexts, providing smarter suggestions than traditional reverse-i-search.
  • FZF (Fuzzy Finder): Integrated fuzzy finding for files, command history, and directory navigation. Ctrl+R searches history, Ctrl+T inserts file paths, and Alt+C changes directories—all with fuzzy matching and preview windows.
  • Multiple Shell Support: Bash and Fish shells included with completion support. Switch shells anytime with chsh while maintaining consistent functionality across all options.

Automated Maintenance

ShaniOS performs regular background maintenance to keep your system healthy without manual intervention. These systemd timers and services are enabled by default:

  • btrfs-scrub.timer: Monthly scrubbing to detect and repair data corruption using filesystem checksums
  • btrfs-balance.timer: Periodic filesystem balancing redistributes data across drives for optimal performance
  • btrfs-defrag.timer: Automatic defragmentation runs on fragmented files to improve read performance
  • btrfs-trim.timer: Regular TRIM operations for SSD optimization and lifespan extension
  • bees (beesd) — Continuous Background Deduplication: The bees daemon performs ongoing block-level deduplication across all Btrfs subvolumes. It is auto-configured at every boot by beesd-setup.service, which writes a per-UUID config to /etc/bees/ and enables the beesd@<UUID>.service unit. The hash database size is automatically scaled to disk capacity (256 MB per TB, capped at 1 GB). This works alongside the CoW sharing between @blue and @green to maximise storage efficiency over time.
  • mark-boot-in-progress / bless-boot / mark-boot-success / check-boot-failure: A coordinated set of services that implement boot health tracking. At every boot, mark-boot-in-progress plants a flag and clears previous results; bless-boot calls bootctl set-good early; mark-boot-success writes /data/boot-ok once multi-user.target is reached; and a 15-minute timer runs check-boot-failure to record the booted slot in /data/boot_failure if the system never reached a successful state.
  • flatpak-update-system.timer / flatpak-update-user.timer: Two independent timers — one system-level, one per-user — update all installed Flatpak apps, uninstall unused runtimes (--delete-data), and run flatpak repair automatically. Both fire 5 minutes after boot then every 12 hours with a 15-minute randomized delay to avoid hammering servers simultaneously.
  • profile-sync-daemon (psd): Browser cache synchronization from RAM to disk happens automatically (enabled globally for all users)
  • systemd-timesyncd: Automatic time synchronization keeps system clock accurate
  • ZRAM compression/decompression: Transparently handles memory pressure without user interaction

All maintenance operations are scheduled during low-usage periods and use minimal system resources to avoid impacting active work.
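The bees hash-database sizing rule stated above (256 MB per TB of capacity, capped at 1 GB) can be sketched as a small function. This is an illustrative model of the rule only, not the actual beesd-setup.service logic; the 256 MB floor for sub-TB disks is an assumption.

```shell
# Illustrative model of the documented bees hash-DB sizing rule:
# 256 MB per TB of disk capacity, capped at 1 GB.
bees_db_size_mb() {
  tb=$1                                # capacity in whole TB
  size=$(( tb * 256 ))                 # 256 MB per TB
  [ "$size" -gt 1024 ] && size=1024    # cap at 1 GB
  [ "$size" -lt 256 ]  && size=256     # floor at one unit (assumption)
  echo "$size"
}

bees_db_size_mb 1   # → 256
bees_db_size_mb 2   # → 512
bees_db_size_mb 8   # → 1024 (capped)
```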

Note: All these optimizations are pre-configured and active from first boot. No manual configuration, tweaking, or performance tuning required. ShaniOS provides enterprise-grade optimization out of the box.

Migrating from a Traditional Linux Distro

If you're coming from Ubuntu, Fedora, Arch, or any mutable Linux distro, the main adjustment is how you install software and make system-level changes. Everything else — your dotfiles, shell, /etc configs, and all files under /home — works exactly as you'd expect.

The Mental Model Shift

Stop thinking about "installing software into the system." Think instead in terms of these four layers:

  • Flatpak is your new apt install / pacman -S for GUI applications. Flathub is pre-configured and ready to use from day one.
  • Distrobox is your escape hatch. It creates a full mutable Linux container (Ubuntu, Fedora, Arch — your choice) with seamless desktop integration. Use it for development tools, build environments, anything that needs apt or pacman, and apps that export to your launcher.
  • Nix (pre-installed, on the dedicated @nix subvolume) covers CLI tools and language runtimes with pinned versions and zero conflicts. Think of it as a supercharged user-space package manager. The @nix subvolume is shared across both slots so installed Nix packages survive updates and rollbacks. Add a channel with nix-channel --add before installing packages for the first time.
  • The /etc overlay still works exactly like a normal /etc. Edit any config file with sudo nano, sudo vim, etc. — changes persist across all updates.
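The four layers can be summarized as a decision rule: what you want to install determines which mechanism you reach for. The helper below encodes that mapping; it is purely illustrative (the function, category names, and the `<app-id>`/`<pkg>` placeholders are hypothetical), but the commands it prints are the real ones described above.

```shell
# Decision helper mirroring the four layers: given an install need,
# print the recommended mechanism. Illustrative only.
pick_layer() {
  case "$1" in
    gui-app)        echo "flatpak install flathub <app-id>" ;;
    dev-toolchain)  echo "distrobox create + apt/pacman inside" ;;
    cli-tool)       echo "nix-env -iA nixpkgs.<pkg>" ;;
    system-config)  echo "edit under /etc (persistent overlay)" ;;
    *)              echo "unknown" ;;
  esac
}

pick_layer gui-app    # → flatpak install flathub <app-id>
pick_layer cli-tool   # → nix-env -iA nixpkgs.<pkg>
```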

What Cannot Be Done (and the Workaround)

Traditional approach → ShaniOS equivalent:

  • sudo pacman -S foo / sudo apt install foo → flatpak install flathub foo, or Distrobox
  • sudo pip install (globally) → pip install --user, or nix-env -iA nixpkgs.python3Packages.foo, or Distrobox
  • sudo npm install -g → nix-env -iA nixpkgs.nodejs, or npm install -g inside Distrobox
  • make install to system paths → build and install inside Distrobox; export binaries to the host with distrobox-export --bin
  • Modify /usr, /opt, /bin directly → for config files, use the /etc overlay; for binaries, Distrobox or Nix

Dotfiles work normally. Everything in /home is fully writable. Your ~/.zshrc, ~/.config/, ~/.local/ — unchanged. The immutability applies only to the OS itself, not your user space.

Concepts

Understanding Immutability

ShaniOS's immutability fundamentally changes how you interact with the system. Understanding this concept is key to using ShaniOS effectively.

What You CAN Do

  • ✅ Install applications via Flatpak
  • ✅ Edit configuration files in /etc
  • ✅ Create and modify files in /home
  • ✅ Run containers (Podman, Distrobox, LXC)
  • ✅ Update the entire system atomically
  • ✅ Store data in /data and persistent subvolumes

What You CANNOT Do

  • ❌ Use pacman to install traditional packages (use Flatpak, Nix, or Distrobox)
  • ❌ Modify files in / (root filesystem) (read-only by design)
  • ❌ Edit files in /usr, /bin, /lib directly (use /etc overlay for configs, Distrobox for binaries)
  • ❌ Install software that requires system-level changes (use containers or AppImages instead)
  • ❌ Run sudo pip install globally (use pip install --user, Nix, or Distrobox)
  • ❌ Run sudo npm install -g to system paths (use Nix or install inside Distrobox)
  • ❌ Use make install to install built software into system directories (build and export from Distrobox)
  • ❌ Modify files in /opt or /usr/share directly (/etc overlay for config; Distrobox for everything else)

Why This Design?

  • Security: Malware cannot modify system files or persist across reboots
  • Reliability: Updates are atomic; they either work completely or fail safely
  • Rollback: Instant recovery from failed updates or system issues
  • Consistency: System state is always predictable and reproducible

Blue-Green Deployment

ShaniOS implements blue-green deployment using Btrfs subvolumes—a strategy adapted from DevOps for desktop Linux.

[Diagram: Blue-Green Deployment Model. The active slot (e.g. @blue) is the currently booted read-only root (/usr, /bin, /lib), launched from its signed UKI (shanios-blue.efi) with rootflags=subvol=@blue on the kernel command line; /data/current-slot records the live slot, and no writes to this root are possible. The standby slot (e.g. @green) is the update target: shani-deploy writes the new version there (its own signed UKI shanios-green.efi, rootflags=subvol=@green), the old @green is kept as a Btrfs snapshot, and the slot becomes active on the next boot. Shared subvolumes (@home, @root, @data, @flatpak, @containers, @nix, @log, @libvirt, @lxc) hold Flatpak apps, Nix packages, containers, VMs, user files, logs, and the /etc overlay outside both slots, so switching slots or rolling back never touches them. The cycle: @blue active → update @green → reboot → @green active → update @blue → reboot → @blue active…]

Two complete, independently bootable system images alternate as active/standby — shared subvolumes (user data, Flatpaks, containers, Nix) persist unchanged across every update and rollback

How It Works

  1. System maintains @blue and @green subvolumes
  2. One subvolume is active (mounted as /), the other inactive
  3. Updates apply to inactive subvolume
  4. Bootloader updated to point to updated subvolume
  5. Reboot switches to updated system
  6. Previous version remains available for instant rollback
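Steps 1–4 above reduce to a simple invariant: whichever slot is active, the other is the update target. The toy helper below models that alternation; /data/current-slot is the real file documented elsewhere in this wiki, while the function itself and the "blue" fallback are illustrative.

```shell
# Toy model of slot alternation: the update target is always the
# slot that is not currently active.
other_slot() {
  case "$1" in
    blue)  echo green ;;
    green) echo blue ;;
    *)     echo unknown ;;
  esac
}

other_slot blue    # → green
other_slot green   # → blue

# On a real ShaniOS system (falls back to "blue" for this demo):
active=$(cat /data/current-slot 2>/dev/null || echo blue)
echo "updates will be written to @$(other_slot "$active")"
```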

Advantages

  • Atomic Updates: All-or-nothing updates
  • Zero Downtime: Active system never modified
  • Instant Rollback: Boot into previous version anytime
  • Safe Testing: Validate updates before committing

Atomic Updates

ShaniOS uses an intelligent multi-layered update system with automatic checking, user notifications, and the shani-deploy tool for atomic system updates.

[Diagram: Update Process Flow. 1 Check: the shani-update auto-checker timer finds a new release. 2 Notify: a GUI dialog (yad/zenity/kdialog) asks the user to approve. 3 Download: R2 CDN primary with SourceForge fallback and resume support. 4 Verify: SHA256 checksum, GPG signature, and a Btrfs snapshot backup. 5 Deploy: extract into the inactive slot, generate the UKI (gen-efi), update the bootloader. 6 Reboot: a reboot prompt, then boot into the updated slot. 7 Confirm: boot counters run and mark-boot-success fires after login; if boot fails, the system automatically falls back to the previous slot. Before the update, @blue is active and shani-deploy writes the new image into @green (a Btrfs snapshot of the old @green is kept as backup, and the bootloader sets @green as the next default); after reboot, @green runs the new version while @blue is preserved as the instant rollback option, and the startup check confirms success after login.]

7-step zero-downtime update process with automatic rollback on boot failure
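The verification step can be illustrated with a standalone checksum check. The file names below are hypothetical stand-ins (the demo generates a throwaway file); shani-deploy performs the equivalent SHA256 and GPG checks internally before deploying an image.

```shell
# Verify an image against its SHA256 checksum file (GPG verification
# would follow the same pattern with `gpg --verify`).
verify_image() {
  img=$1
  dir=$(dirname "$img")
  base=$(basename "$img")
  if (cd "$dir" && sha256sum --status -c "$base.sha256"); then
    echo "checksum OK"
  else
    echo "checksum MISMATCH"
  fi
}

# Demo with a throwaway file standing in for a downloaded image:
tmp=$(mktemp -d)
printf 'payload\n' > "$tmp/image.img"
(cd "$tmp" && sha256sum image.img > image.img.sha256)
verify_image "$tmp/image.img"   # → checksum OK
```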

Persistence Strategy

ShaniOS selectively persists data across immutable system updates through bind mounts and dedicated subvolumes.

[Diagram: What Persists Across Updates? Replaced on update (the @blue or @green slot is overwritten): / (root), /usr, /bin, /sbin, /lib, /opt, pre-installed /srv, /boot/efi (ESP, UKI updated), and /boot (kernel/initramfs inside the UKI). On update, the new image is extracted into the inactive slot, the old slot is preserved as a rollback target with a Btrfs snapshot kept, gen-efi regenerates the per-slot UKI, and the bootloader entry is switched to the new slot. Persistent (dedicated Btrfs subvolumes, surviving slot switch, update, and rollback): /home (@home), /root (@root, root user home), /data (@data, overlay + service state), /var/log (@log), /var/cache (@cache), /var/lib/flatpak (@flatpak), /var/lib/containers (@containers), /nix (@nix, Nix package store), /var/lib/libvirt (@libvirt, nodatacow), /var/lib/qemu (@qemu, nodatacow), /var/lib/lxc, /var/lib/machines, /var/lib/lxd, /var/lib/waydroid (@waydroid), /var/lib/snapd (@snapd); @nix and @flatpak are shared by both slots. Volatile (cleared on reboot): /var runs on tmpfs via the systemd.volatile=state kernel parameter (not fstab), /tmp and /run are explicit fstab tmpfs entries, /dev, /proc, and /sys are virtual, and /var/lock and /var/run are symlinked. As an exception, service state (/var/lib/NetworkManager, bluetooth, cups, sshd, sudo, and /var/spool/cron, at, postfix, among others) is bind-mounted back from @data; shanios-tmpfiles-data.service recreates the directories, and the bind mounts restore service state on every boot.]

Three clear categories: replaced-on-update system files, persistent Btrfs subvolumes (including @nix, @flatpak, @containers shared by both slots), and volatile tmpfs cleared each boot with key service state bind-mounted back from @data

Location       Type          Behavior
/              Read-only     Immutable system files
/etc           Overlay       Writable, persists in @data
/var           tmpfs         Volatile, cleared on reboot
/var/log       Btrfs @log    Persistent logs
/var/lib/*     Bind mount    Selective persistence from @data/varlib
/var/spool/*   Bind mount    Selective persistence from @data/varspool
/home          Btrfs @home   Persistent user data

Bind-Mounted Service State

Critical service data is bind-mounted from @data subvolume:

# Example bind mounts from fstab
# All bind mounts use these options for correct boot ordering:
# bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data

# Network configuration
/data/varlib/NetworkManager /var/lib/NetworkManager none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0

# Bluetooth pairings
/data/varlib/bluetooth /var/lib/bluetooth none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0

# systemd state
/data/varlib/systemd /var/lib/systemd none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0

What Persists — Full Bind Mount List

The following directories are bind-mounted from @data/varlib/ to /var/lib/, surviving all updates and slot switches:

  • System Core: dbus (machine-id), systemd (unit states, timers, random seed), fontconfig (font cache)
  • Networking: NetworkManager (WiFi passwords, VPN configs, connection profiles), bluetooth (paired devices), firewalld (rules), samba (shares, lock state), nfs (exports, client tracking)
  • Remote Access & VPN: caddy (web server state), tailscale (VPN keys), cloudflared (tunnel credentials), geoclue (location permissions)
  • Display Managers: gdm (GNOME session state), sddm (KDE session state), colord (color profiles), pipewire (audio state), rtkit (realtime scheduling)
  • Hardware: cups (printer configs), sane (scanner configs), upower (battery history), fwupd (firmware metadata), tpm2-tss (TPM2 persistent objects and sealed keys)
  • Authentication & Security: fprint (fingerprint enrollment), AccountsService (user avatars, session prefs), boltd (Thunderbolt authorization), sudo (auth timestamp cache), sshd (privilege separation), polkit-1 (authorization cache), fail2ban (jail database)
  • Data Protection: restic (backup metadata), rclone (remote configs), appimage (AppImage cache)

Spool Directories

The following directories are bind-mounted from @data/varspool/ to /var/spool/:

  • Scheduling: anacron (job timestamps), cron (user crontabs), at (one-time jobs)
  • Queues: cups (print queue), samba (SMB print spool), postfix (mail queue)

Bind Mount Handling

ShaniOS intelligently handles bind mounts during configuration:

  • Source check: Only mounts if source directory exists in @data
  • Target check: Only mounts if service is installed in the system
  • Graceful skipping: Missing services (e.g., sddm on GNOME) don't cause errors
  • Optional services: Services not present in base system are automatically skipped
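The skip logic above can be sketched as a guard around each bind mount: mount only when both the source directory (in @data) and the target directory (the installed service) exist. The function below is an illustrative model, not ShaniOS's actual implementation; the paths follow the fstab examples earlier in this section.

```shell
# Bind-mount service state only when both sides exist; otherwise skip
# gracefully, as ShaniOS does for optional services.
maybe_bind() {
  src=$1 dst=$2
  [ -d "$src" ] || { echo "skip $dst (no source in @data)"; return 0; }
  [ -d "$dst" ] || { echo "skip $dst (service not installed)"; return 0; }
  mount --bind "$src" "$dst" && echo "bound $dst"
}

# e.g. on a GNOME system, sddm state is simply skipped:
maybe_bind /data/varlib/sddm /var/lib/sddm
```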

Installation

System Requirements

ShaniOS System Requirements
Component            Minimum                            Recommended
Processor            x86_64 dual-core with VT-x/AMD-V   x86_64 quad-core or better
Memory               4 GB RAM                           8 GB RAM or more
Storage              32 GB (dual-image architecture)    64 GB or more
Firmware             UEFI (required)                    UEFI with TPM 2.0
Installation Media   8 GB USB drive                     16 GB USB 3.0 drive

Why 32GB minimum? ShaniOS maintains two complete system images (@blue and @green) for atomic updates. However, Btrfs Copy-on-Write shares unchanged data between them, resulting in only ~18% overhead compared to traditional systems.
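As a rough model of that ~18% figure, the second slot adds about a fifth of the image size rather than doubling it. The helper below applies the documented overhead percentage with integer arithmetic (rounding up); the function is illustrative, and actual overhead varies with how much the two slots diverge.

```shell
# Approximate on-disk footprint of two CoW-sharing slots, given one
# image's size in GB and the documented ~18% overhead.
dual_slot_gb() {
  image_gb=$1
  # image + 18% overhead, rounded up to a whole GB
  echo $(( image_gb + (image_gb * 18 + 99) / 100 ))
}

dual_slot_gb 10   # → 12 (instead of 20 for two independent copies)
dual_slot_gb 20   # → 24
```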

Disk Partition Layout

ShaniOS uses a simple two-partition layout — there are no separate /home, /var, or swap partitions. All subvolumes live within the single Btrfs partition.

Partition                    Filesystem                 Size                Purpose
EFI System Partition (ESP)   FAT32                      ~512 MB             Bootloader, UKI images — mounted at /boot/efi
Root partition               Btrfs (or LUKS2 → Btrfs)   Remainder of disk   All system subvolumes (@blue, @green, @home, @data, etc.)

When full-disk encryption is chosen, the Btrfs partition is wrapped in a LUKS2 container (/dev/mapper/shani_root). The ESP is never encrypted — only the root partition is.

Pre-Installation Setup

BIOS/UEFI Configuration

Configure your firmware before installation (typically accessed via F2, F10, Del, or Esc during startup):

  1. Enable UEFI Boot: Disable legacy/CSM mode. ShaniOS requires UEFI.
  2. Disable Fast Boot: Fast Boot can interfere with USB boot and Linux installation.
  3. Disable Secure Boot (Temporarily): Required for installation. It can be re-enabled after enrolling the ShaniOS MOK keys.
  4. Set SATA Mode to AHCI: Ensures optimal disk performance and compatibility.
  5. Enable Virtualization: Enable Intel VT-x or AMD-V for container support.

Creating Installation Media

Download the ShaniOS ISO from shani.dev and write it to USB:

  • Recommended: Balena Etcher (cross-platform, user-friendly)
  • Windows: Rufus (advanced options available)
  • Linux: GNOME Disks or the dd command; macOS: dd
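On Linux or macOS, dd writes the ISO directly to the USB device. The device name below is a placeholder; the demo that follows exercises the same pattern against a throwaway file so nothing is overwritten.

```shell
# Real usage (DESTRUCTIVE: verify the target with lsblk first;
# /dev/sdX is a placeholder for your USB device):
#   sudo dd if=shanios.iso of=/dev/sdX bs=4M status=progress oflag=sync

# Safe demonstration of the same pattern against a throwaway file:
tmp=$(mktemp -d)
head -c 1048576 /dev/zero > "$tmp/shanios.iso"   # 1 MiB stand-in "ISO"
dd if="$tmp/shanios.iso" of="$tmp/usb.img" bs=4M status=none conv=fsync
cmp -s "$tmp/shanios.iso" "$tmp/usb.img" && echo "write verified"   # → write verified
```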

Installation Steps

Installation takes approximately 10-15 minutes:

  1. Boot from USB: Press F12, F2, or Del during startup and select your USB drive from the boot menu.
  2. Select "Install ShaniOS": Choose the installation option from the boot menu.
  3. Language & Region: Select language, timezone, and keyboard layout.
  4. Disk Selection: Choose the target disk and partitioning scheme (automatic recommended).
  5. Encryption (Optional): Enable LUKS2 full-disk encryption. Recommended for laptops and portable systems.
  6. Install: The installer creates Btrfs subvolumes, installs the base system, and configures the bootloader.
  7. Reboot: Remove the USB drive when prompted and reboot into ShaniOS.

First Boot Configuration

Plymouth BGRT Boot Theme

ShaniOS uses the Plymouth BGRT boot theme. Plymouth provides a smooth graphical boot experience, suppressing kernel and systemd messages from the screen. The BGRT (Boot Graphics Resource Table) theme reads the manufacturer's logo directly from the UEFI firmware and displays it during boot — providing a seamless transition from firmware to OS that matches the device's branding. On laptops and OEM hardware this typically shows the manufacturer's logo; on custom-built machines it shows the motherboard vendor's logo or a fallback graphic.

If LUKS2 full-disk encryption is enabled, Plymouth presents the passphrase prompt over the boot animation — no raw terminal text. With TPM2 auto-unlock enrolled, even this prompt is skipped and the disk unlocks silently.

First Deployment — Automatic Background Setup

On first boot after installation, ShaniOS automatically runs the initial deployment in the background. This is a one-time process that takes a few minutes and requires no user interaction:

  • All Btrfs subvolumes (@home, @data, @cache, @log, @flatpak, @containers, @nix, @snapd, @waydroid, @lxc, @lxd, @machines, @libvirt, @qemu, @swap, and more) are created and mounted
  • The swapfile is created in @swap sized to your RAM, with CoW disabled — hibernation is automatically configured
  • Both @blue and @green slots are prepared
  • The beesd deduplication daemon is configured for your Btrfs volume UUID
  • User groups are assigned and service state directories are initialised

After this completes, your system is fully configured — no manual post-install steps required.

Initial Setup Wizard

After first deployment completes, the Initial Setup wizard guides you through:

  • Creating your user account and setting a password
  • Configuring network connections (Wi-Fi, wired)
  • Setting language, locale, and keyboard layout
  • Setting privacy preferences
  • Enabling location services (optional)
  • Customising appearance settings

The wizard runs automatically. If you skip it, re-run with gnome-initial-setup (GNOME) or from System Settings → Welcome (KDE).

After the Wizard — Recommended First Steps

  • Flathub is pre-configured. No flatpak remote-add needed — open GNOME Software or KDE Discover and browse apps immediately.
  • Nix channel: Nix is pre-installed and running. Add a channel before installing packages: nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs && nix-channel --update. See the Nix section for full details.
  • TPM enrollment (if available): If your machine has a TPM 2.0 chip, enroll your LUKS key for automatic disk decryption at next boot. See the TPM Encryption section.
  • Waydroid (Android apps): Waydroid is pre-installed. Run sudo waydroid-helper init for automatic setup. Firewall rules are already configured. See the Android section.
  • Secure Boot: If your BIOS supports it, enroll the MOK key and enable Secure Boot for the strongest TPM security guarantee. See the Secure Boot section.
  • Check current slot: Run cat /data/current-slot to confirm whether you booted into @blue or @green.

OEM & Fleet Deployment

ShaniOS is designed for OEM and fleet use. Every machine imaging from the same signed ISO will boot into an identical, verified state. The Initial Setup wizard runs on first user login per machine, so user-specific personalisation (account, language, network) is captured without requiring per-device pre-configuration.

  • Rollback never requires reimaging — the previous OS slot is always in the boot menu
  • The boot-counting pipeline detects boot failures and automatically reverts the slot before the user sees an error, minimising support calls
  • All user-facing changes (/etc customisations, systemd units, SSH keys, service configs) are in the @data OverlayFS and survive every update and rollback without reimaging
  • The Plymouth BGRT theme shows the device manufacturer's logo automatically from UEFI — no per-model boot screen configuration needed
  • passim (local content sharing daemon) broadcasts available fwupd firmware payloads via mDNS — machines on the same LAN avoid downloading the same firmware repeatedly

User settings location: Your account preferences and avatar are stored in /data/varlib/AccountsService, which is bind-mounted and persistent across all system updates and rollbacks.

Architecture

Btrfs Deep Dive

ShaniOS leverages advanced Btrfs features for immutability and efficiency.

Copy-on-Write (CoW)

Btrfs CoW minimizes storage duplication:

  • Shared data blocks between @blue and @green
  • Only ~18% overhead despite dual root system
  • Modified blocks consume additional space
  • Efficient updates even with two complete systems
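The block sharing described above is the same mechanism userspace reaches through reflink copies. A minimal sketch you can run anywhere (cp --reflink=auto shares data blocks on Btrfs and silently falls back to a plain copy on filesystems without reflink support):

```shell
# Demonstrate a CoW clone: on Btrfs the clone shares every data block
# with the original until one of the files is modified.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/base.img" bs=1K count=64 status=none

# --reflink=auto: block-sharing clone on Btrfs, ordinary copy elsewhere
cp --reflink=auto "$tmp/base.img" "$tmp/clone.img"

# The clone is byte-identical; on Btrfs it consumes no new data blocks
cmp -s "$tmp/base.img" "$tmp/clone.img" && echo "clone identical to base"

rm -rf "$tmp"
```

On a real Btrfs volume, `btrfs filesystem du` on both files would show the shared extents.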

Transparent Compression

# Default mount options
compress=zstd,space_cache=v2,autodefrag
  • Reduces disk usage by 30-50%
  • Minimal CPU overhead
  • Improves SSD lifespan

Performance Optimizations

Specific subvolumes use nodatacow:

  • @swap: CoW disabled for swap files (required)
  • @libvirt: VM disk images benefit from direct writes

Mount Options by Subvolume

Subvolume(s), mount options, and notes:

@blue / @green
  Options: rootflags=subvol=@blue,ro,noatime,compress=zstd,space_cache=v2,autodefrag (kernel cmdline)
  Notes: Not in fstab — selected at boot by systemd-boot, mounted read-only by the dracut initramfs

@root, @home, @data
  Options: rw,noatime,compress=zstd,space_cache=v2,autodefrag
  Notes: Core persistent data — always mounted, no nofail

@nix
  Options: nofail,noatime,compress=zstd,space_cache=v2,autodefrag
  Notes: CoW kept intentionally for bees deduplication; nofail (created on first use)

@cache, @log
  Options: rw,noatime,compress=zstd,space_cache=v2,autodefrag,x-systemd.after=var.mount,x-systemd.requires=var.mount
  Notes: Mount-ordered after /var

@flatpak, @snapd, @waydroid, @containers, @machines, @lxc, @lxd
  Options: nofail,noatime,compress=zstd,space_cache=v2,autodefrag,x-systemd.after=var.mount,x-systemd.requires=var.mount
  Notes: nofail — system boots cleanly even if not yet created

@libvirt, @qemu
  Options: nofail,noatime,nodatacow,nospace_cache,x-systemd.after=var.mount,x-systemd.requires=var.mount
  Notes: nodatacow required for VM disk performance; nospace_cache required with nodatacow; no compression

@swap
  Options: nofail,noatime,nodatacow,nospace_cache
  Notes: nodatacow + nospace_cache mandatory for swapfile correctness on Btrfs

Why noatime?

All subvolumes use noatime to improve performance:

  • Prevents writing to disk every time a file is read
  • Significantly reduces SSD wear
  • Improves battery life on laptops
  • No impact on most applications (very few rely on access times)
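To confirm this on a given mount, inspect its options with findmnt. A sketch that checks an options string for noatime; it is shown against a sample string so it runs anywhere (on a live system, substitute opts=$(findmnt -no OPTIONS /home)):

```shell
# Sample options string matching the ShaniOS defaults; on a live system use:
#   opts=$(findmnt -no OPTIONS /home)
opts="rw,noatime,compress=zstd,space_cache=v2,autodefrag"

case ",$opts," in
  *,noatime,*) echo "noatime active: reads do not trigger atime writes" ;;
  *)           echo "warning: atime updates enabled on this mount" ;;
esac
```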

Manual Btrfs Snapshots

Btrfs snapshots are instant, space-efficient copies of a subvolume at a point in time. Because they use CoW, a fresh snapshot consumes almost no additional space — only changes accumulate. ShaniOS automates slot snapshots via shani-deploy, but you can create and manage snapshots of your own data freely.

# Create a read-only snapshot of /home (best practice for backups)
sudo btrfs subvolume snapshot -r /home /data/snapshots/home-$(date +%Y%m%d)

# Create a writable snapshot (for testing changes to a subvolume)
sudo btrfs subvolume snapshot /home /data/snapshots/home-writable

# List all subvolumes and snapshots on the filesystem
sudo btrfs subvolume list /

# Show snapshot details (creation time, ID, parent)
sudo btrfs subvolume show /data/snapshots/home-$(date +%Y%m%d)

# Restore from a snapshot — replace @home contents with snapshot
# (Do this from a live USB or alternate slot to avoid conflicts)
sudo btrfs subvolume delete /home
sudo btrfs subvolume snapshot /data/snapshots/home-20250101 /home

# Delete an old snapshot to free space
sudo btrfs subvolume delete /data/snapshots/home-20240601

# Send a snapshot to another drive as a backup (incremental after first)
sudo btrfs send /data/snapshots/home-20250101 | sudo btrfs receive /mnt/backup/
# Incremental send (only sends the diff)
sudo btrfs send -p /data/snapshots/home-20250101 /data/snapshots/home-20250201 \
  | sudo btrfs receive /mnt/backup/

Snapshots are not backups if they live on the same disk — a disk failure loses both. Use btrfs send to an external drive, or restic/rclone to back up to cloud storage. The snapshots ShaniOS keeps of @blue/@green are exclusively for rollback; they are stored on the same drive.
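Date-stamped snapshot names sort chronologically, which makes retention trivial. A rotation sketch with a hypothetical keep-newest-3 policy; it dry-runs against plain directories so nothing real is deleted (for actual snapshots, replace rmdir with btrfs subvolume delete):

```shell
#!/bin/sh
# Keep only the newest KEEP snapshots named home-YYYYMMDD.
# Dry run against plain directories; for real snapshots replace
# rmdir with: sudo btrfs subvolume delete "$old"
KEEP=3
snapdir=$(mktemp -d)

# Simulate five existing snapshots
for d in 20240601 20240901 20250101 20250201 20250301; do
  mkdir "$snapdir/home-$d"
done

# Date-stamped names sort chronologically; drop everything past the newest KEEP
ls -1d "$snapdir"/home-* | sort -r | tail -n +$((KEEP + 1)) | while read -r old; do
  echo "would delete: ${old##*/}"
  rmdir "$old"
done

rm -rf "$snapdir"
```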

Checking Deduplication Status

The bees daemon runs continuously in the background and deduplicates data across all subvolumes. To check its activity and measure compression savings:

# Check bees daemon status
sudo systemctl status "beesd@*"

# View recent dedup activity from journal
sudo journalctl -u "beesd@*" --since today | grep -E "dedup|hash|block|crawl"

# Run an on-demand deduplication pass (in addition to background bees)
sudo shani-deploy --optimize

# Check compression ratio per subvolume
sudo compsize /
sudo compsize /home
sudo compsize /nix
sudo compsize /var/lib/flatpak

# Full storage usage report
sudo shani-deploy --storage-info

Filesystem Structure

ShaniOS uses Btrfs with a sophisticated subvolume layout:

[Diagram: Btrfs subvolume structure on a single Btrfs volume (/dev/sdX2), plus the FAT32 ESP at /boot/efi. Groups: System Slots (@blue/@green, read-only root per slot, one active and one standby, snapshotted before each update); User Data (@home, @root, @data, persistent across all updates; @data holds the /etc overlay and service state); Package Ecosystems shared by both slots (@flatpak, @nix, @containers; @nix keeps CoW on for bees dedup); Virtualisation (@libvirt, @qemu, nodatacow + nospace_cache, no compression); System Support (@log, @cache, @machines, @lxc, @lxd, @snapd, @waydroid, @swap). @swap uses nodatacow + nospace_cache, both required for the swapfile; the resume= kernel parameter is embedded in the UKI. All other subvolumes mount with noatime,compress=zstd,space_cache=v2,autodefrag.]

Full Btrfs subvolume map — system slots (@blue/@green) replace on update; all other subvolumes persist independently; @nix and @flatpak are shared by both slots

ShaniOS Subvolume Layout
Subvolume Mount Point Purpose
@blue / @green / Root filesystems for blue-green deployment
@root /root Root user home — persists across slot switches
@home /home User data and personal configurations
@data /data Overlay storage and persistent service data (bind-mount source tree)
@nix /nix Nix package manager store — shared across both slots, CoW kept for compression/dedup via bees
@log /var/log System logs across reboots
@cache /var/cache Package manager cache
@flatpak /var/lib/flatpak Flatpak applications and runtimes
@snapd /var/lib/snapd Snap package storage, revisions, and writable snap data
@waydroid /var/lib/waydroid Android system images and data
@containers /var/lib/containers Podman/Docker container storage
@machines /var/lib/machines systemd-nspawn containers
@lxc /var/lib/lxc LXC containers
@lxd /var/lib/lxd LXD container and VM storage
@libvirt /var/lib/libvirt Virtual machine disk images (nodatacow)
@qemu /var/lib/qemu Bare QEMU VM disk images (nodatacow)
@swap /swap Swap file container (nodatacow)

Persistent Service State — Bind Mounts from @data

Because /var is volatile (systemd.volatile=state mounts a tmpfs over /var on every boot), all service state that must survive reboots is stored in the @data subvolume and bind-mounted back into place. The bind mounts are grouped by function in /etc/fstab:

Persistent bind-mount layout
Category: source (@data) → target

System Core:
  /data/varlib/dbus → /var/lib/dbus
  /data/varlib/systemd → /var/lib/systemd

Font Rendering:
  /data/varlib/fontconfig → /var/lib/fontconfig

Networking:
  /data/varlib/NetworkManager → /var/lib/NetworkManager
  /data/varlib/bluetooth → /var/lib/bluetooth
  /data/varlib/firewalld → /var/lib/firewalld

File Sharing:
  /data/varlib/samba → /var/lib/samba
  /data/varlib/nfs → /var/lib/nfs

Remote Access & VPN:
  /data/varlib/caddy → /var/lib/caddy
  /data/varlib/tailscale → /var/lib/tailscale
  /data/varlib/cloudflared → /var/lib/cloudflared
  /data/varlib/geoclue → /var/lib/geoclue

Display Manager:
  /data/varlib/gdm → /var/lib/gdm
  /data/varlib/sddm → /var/lib/sddm
  /data/varlib/colord → /var/lib/colord

Audio & Peripherals:
  /data/varlib/pipewire → /var/lib/pipewire
  /data/varlib/rtkit → /var/lib/rtkit
  /data/varlib/cups → /var/lib/cups
  /data/varlib/sane → /var/lib/sane
  /data/varlib/upower → /var/lib/upower

Auth & Security:
  /data/varlib/fprint → /var/lib/fprint
  /data/varlib/AccountsService → /var/lib/AccountsService
  /data/varlib/boltd → /var/lib/boltd
  /data/varlib/sudo → /var/lib/sudo
  /data/varlib/sshd → /var/lib/sshd
  /data/varlib/polkit-1 → /var/lib/polkit-1
  /data/varlib/tpm2-tss → /var/lib/tpm2-tss

Hardware & Firmware:
  /data/varlib/fwupd → /var/lib/fwupd

Data Protection:
  /data/varlib/fail2ban → /var/lib/fail2ban
  /data/varlib/restic → /var/lib/restic
  /data/varlib/rclone → /var/lib/rclone
  /data/varlib/appimage → /var/lib/appimage

Job Scheduling:
  /data/varspool/anacron → /var/spool/anacron
  /data/varspool/cron → /var/spool/cron
  /data/varspool/at → /var/spool/at

Print & Mail Spools:
  /data/varspool/cups → /var/spool/cups
  /data/varspool/samba → /var/spool/samba
  /data/varspool/postfix → /var/spool/postfix
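Each row in the table corresponds to one bind-mount line in /etc/fstab. A representative fragment (illustrative; the exact lines on a given install may differ):

```shell
# /etc/fstab excerpt (illustrative): service state bound from @data into /var
/data/varlib/NetworkManager  /var/lib/NetworkManager  none  bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data  0 0
/data/varlib/bluetooth       /var/lib/bluetooth       none  bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data  0 0
/data/varspool/cron          /var/spool/cron          none  bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data  0 0
```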

Mount Options Reference

  • The root slots (@blue and @green) are not in fstab — they are mounted read-only by the dracut initramfs via the kernel command line (rootflags=subvol=@blue,ro,noatime,compress=zstd,space_cache=v2,autodefrag).
  • All other Btrfs subvolumes use noatime,compress=zstd,space_cache=v2,autodefrag by default.
  • VM disk subvolumes (@libvirt, @qemu) and the swap subvolume (@swap) use nodatacow,nospace_cache to avoid CoW fragmentation. Compression is incompatible with nodatacow, so it is not applied to these subvolumes.
  • Container and virtualisation subvolumes are mounted with nofail, so the system boots cleanly even if they have not yet been created.
  • The /etc overlay uses index=off,metacopy=off for maximum compatibility with the read-only lower layer.
  • The /var tmpfs is provided by the systemd.volatile=state kernel parameter; an equivalent fstab overlay entry exists but is commented out.
  • All bind mounts use bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data to ensure ordering and graceful degradation.

Overlay Filesystem

OverlayFS enables writable /etc on read-only root.

[Diagram: OverlayFS layers for /etc on the immutable root. Lower layer (read-only): /etc from @blue or @green, the base system configs (e.g. /etc/fstab, /etc/hostname, /etc/systemd, /etc/pacman.conf). Upper layer (writable, stored in @data): /data/overlay/etc/upper, holding only your changes; modified files override the lower layer, deleted files become whiteout entries. Work dir: /data/overlay/etc/work, used by the kernel for atomic copy-up and required to be on the same filesystem as the upper layer. Merged view: /etc appears fully writable to all processes; upper overrides lower, unchanged files are served from the read-only lower layer. Changes in the upper layer survive OS updates because only the lower layer is replaced.]

Three-layer OverlayFS: read-only lower (/etc from active slot) + writable upper (@data) + kernel workdir — changes in upper persist across updates because only the lower layer is replaced

# /etc overlay mount (from fstab)
overlay /etc overlay rw,lowerdir=/etc,upperdir=/data/overlay/etc/upper,workdir=/data/overlay/etc/work,index=off,metacopy=off,x-systemd.requires-mounts-for=/data 0 0

Overlay Directories

The /etc overlay is stored under @data with this structure:

@data/overlay/etc/
  ├── lower/      # (unused, just structure)
  ├── upper/      # Your changes stored here
  └── work/       # Overlay working directory

Viewing Changes

# List all overlay modifications
ls -la /data/overlay/etc/upper

# Compare with base system
diff -r /etc /data/overlay/etc/upper
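The precedence rules can be illustrated without mounting anything: wherever both layers contain a file, the upper copy wins, and everything else falls through from the lower layer. A sketch using plain directories in place of the real layers:

```shell
# Simulate the merged /etc view: the upper layer overrides the lower,
# everything else falls through unchanged.
tmp=$(mktemp -d)
mkdir -p "$tmp/lower" "$tmp/upper" "$tmp/merged"

echo "base-hostname" > "$tmp/lower/hostname"     # shipped by the OS image
echo "base-locale"   > "$tmp/lower/locale.conf"  # shipped by the OS image
echo "my-hostname"   > "$tmp/upper/hostname"     # your local change

# Lower first, then upper on top: the same precedence OverlayFS applies
cp -r "$tmp/lower/." "$tmp/merged/"
cp -r "$tmp/upper/." "$tmp/merged/"

cat "$tmp/merged/hostname"      # my-hostname (upper wins)
cat "$tmp/merged/locale.conf"   # base-locale (falls through from lower)

rm -rf "$tmp"
```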

Boot Process

Understanding how ShaniOS boots:

[Diagram: six-stage boot sequence with a security annotation per stage. 1. UEFI Firmware: reads the boot entry \EFI\BOOT\BOOTX64.EFI (Shim, signed by the Microsoft CA); Secure Boot verifies Shim, Shim verifies the MOK-signed systemd-boot (grubx64.efi). 2. systemd-boot: shows shanios-blue (Active) / shanios-green (Candidate) with a 5 s timeout; default slot read from /data/current-slot; editor disabled, so the cmdline cannot be tampered with. 3. Unified Kernel Image: shanios-blue.efi or shanios-green.efi, a single MOK-signed PE binary containing kernel + initramfs + cmdline, built per slot by gen-efi/dracut and measured into PCRs 0 and 7. 4. Initramfs (dracut): if encrypted, LUKS2 unlock via TPM2 (rd.luks.options=tpm2-device=auto, key sealed to PCRs 0+7, so tampered firmware or kernel means no auto-unlock) or passphrase; mounts the selected @blue/@green subvolume as read-only root. 5. systemd init and filesystem assembly: root read-only (systemd.volatile=state), /var mounted as tmpfs and cleared each boot; @data, @home, @root, @log, @flatpak, @nix, @containers and the rest mounted via /etc/fstab; /etc OverlayFS and all bind mounts applied; then shanios-tmpfiles-data → beesd-setup → etc-daemon-reload → etc-start-services → shani-user-setup; IMA/EVM active, AppArmor profiles and the LSM stack loaded, bees dedup started. 6. Desktop environment: GNOME (GDM) or KDE Plasma (SDDM) starts behind the Plymouth BGRT theme; 15 s after login, startup-check.sh checks for boot failure and offers rollback if needed (mark-boot-success, shani-update timer). Automatic rollback path: if boot fails, check-boot-failure.timer (15 min post-boot) marks the failure and the next reboot automatically reverts to the previous slot.]

Six-stage boot: UEFI Secure Boot → systemd-boot → signed UKI → dracut+LUKS2/TPM2 → systemd filesystem assembly → desktop; security layer annotations show what's verified at each step

Important Notes:

  • Dual booting is not recommended (other operating systems may overwrite the bootloader)
  • Use Flatpak, containers, or AppImages for software
  • Root filesystem is immutable by design

Early Boot Service Chain

After systemd takes over the root filesystem, ShaniOS runs a sequence of custom services before the desktop starts:

  1. shanios-tmpfiles-data.service: Recreates required directories inside @data (overlay upper/work dirs, varlib, varspool) using systemd-tmpfiles. Runs after data.mount and before the /etc overlay is applied.
  2. beesd-setup.service: Configures and enables the bees background deduplication daemon for the Btrfs volume. Runs after the root filesystem and etc-overlay.mount, before the daemon reload. Idempotent — skips if already configured at the same version.
  3. etc-daemon-reload.service: Runs mount -a to apply all overlay and bind mounts from /etc/fstab, then issues a non-blocking systemctl daemon-reload so systemd discovers any new unit files that appeared in the overlay. Runs after beesd-setup.
  4. etc-start-services.service: Starts any enabled services that were not yet running — specifically those whose unit files only became visible after the overlay was mounted. Runs once per boot (guarded by /run/start-overlay-services.done), with a 30-second timeout.
  5. shani-user-setup.path / shani-user-setup.service: Watches /etc/passwd for changes. When a new regular user (UID 1000–59999) is detected, automatically adds them to the required groups (input, realtime, video, sys, cups, lp, scanner, nixbld, lxc, lxd, kvm, libvirt) and sets their shell to /bin/zsh.
  6. startup-check.sh (autostart / systemd user): Runs 15 seconds after desktop login to give polkit and the session time to fully initialise. Reads /data/boot_failure and /data/current-slot; if a fallback boot is confirmed (booted slot ≠ current-slot and a matching failure file exists) it presents a GUI dialog (yad → zenity → kdialog → console) titled "Boot Failure Detected" prompting the user to rollback. If approved, it opens a terminal running pkexec shani-deploy --rollback, monitors completion via a status file, and on success offers to reboot immediately. Skips silently if /data/boot-ok is present or if no display is available.
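The ordering between these services is expressed with ordinary systemd dependencies. A sketch of what the unit for step 3 might look like (the unit name comes from the list above; the exact directives are an assumption, not the shipped file):

```ini
# etc-daemon-reload.service (sketch, not the shipped unit file)
[Unit]
Description=Apply overlay and bind mounts, then reload systemd
After=beesd-setup.service etc-overlay.mount
Requires=etc-overlay.mount

[Service]
Type=oneshot
# Apply all remaining fstab mounts, then let systemd discover new unit files
ExecStart=/usr/bin/mount -a
ExecStart=/usr/bin/systemctl --no-block daemon-reload
RemainAfterExit=yes
```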

Bootloader Configuration

ShaniOS uses systemd-boot with Unified Kernel Images (UKI):

  • Timeout: 5 seconds
  • Console mode: Maximum resolution
  • Editor: Disabled for security (prevents boot parameter tampering)
  • Default entry: Current active slot (@blue or @green)

Boot Entry Naming

Boot menu entries clearly show system state:

  • shanios-blue (Active): Currently running system
  • shanios-green (Candidate): Standby system, will be booted after next update

After each deployment, shani-deploy rewrites both boot entries — the newly updated slot becomes "Candidate" and the currently running slot stays "Active". The labels do not switch automatically at boot; they are explicitly updated by the deploy tool.
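The labelling logic amounts to a slot swap. A simplified sketch of what shani-deploy computes (hypothetical reconstruction; the real tool also rewrites the loader entry files):

```shell
# Compute boot-menu labels from the slot state (simplified sketch)
current_slot="blue"   # on a real system: current_slot=$(cat /data/current-slot)

if [ "$current_slot" = "blue" ]; then
  candidate_slot="green"
else
  candidate_slot="blue"
fi

echo "shanios-$current_slot (Active)"
echo "shanios-$candidate_slot (Candidate)"
```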

UEFI Boot Entry

ShaniOS creates a UEFI boot entry during installation:

  • Label: shanios
  • Loader: \EFI\BOOT\BOOTX64.EFI (Shim for Secure Boot)
  • Fallback: Boot entry creation failures don't stop installation

Shim and Secure Boot Chain

The boot chain when Secure Boot is enabled:

UEFI Firmware
    ↓
shimx64.efi (signed by Microsoft — validates the next stage via MOK)
    ↓
grubx64.efi (systemd-boot, renamed grubx64.efi for shim compatibility, signed by MOK; shim loads mmx64.efi, the MOK Manager, instead when a key enrollment is pending)
    ↓
Unified Kernel Image / shanios-blue.efi or shanios-green.efi (signed by MOK, built by gen-efi)
    ↓
Linux Kernel + Initramfs (embedded in UKI)

Kernel Command Line

ShaniOS uses these boot parameters, generated per-slot by gen-efi and saved to /etc/kernel/install_cmdline_<slot>. If that file already exists for a slot, gen-efi reuses it on subsequent UKI rebuilds rather than regenerating. The file is embedded directly into the UKI at build time by dracut.

quiet splash systemd.volatile=state ro
lsm=landlock,lockdown,yama,integrity,apparmor,bpf
rootfstype=btrfs
rootflags=subvol=@blue,ro,noatime,compress=zstd,space_cache=v2,autodefrag
root=/dev/mapper/shani_root  # If encrypted
rd.luks.uuid=... rd.luks.name=... rd.luks.options=tpm2-device=auto  # If TPM enrolled
rd.vconsole.keymap=...       # Injected from /etc/vconsole.conf KEYMAP= if set
resume=UUID=... resume_offset=...  # If swapfile exists (offset via btrfs inspect-internal map-swapfile)

Kernel Hardening

Security-focused kernel parameters are enabled by default:

lsm=landlock,lockdown,yama,integrity,apparmor,bpf

This enables six Linux Security Modules simultaneously — Landlock (filesystem sandboxing), Lockdown (kernel modification restrictions), Yama (ptrace scope restrictions), Integrity/IMA/EVM (runtime file integrity measurement), AppArmor (mandatory access control), and BPF LSM (dynamic eBPF security hooks). Most distributions enable one or two; ShaniOS runs all six concurrently. See the Security Features section for full details on each module.

  • Landlock: Filesystem sandboxing for unprivileged processes
  • Lockdown: Restricts kernel access and modification even by root
  • Yama: Ptrace restrictions to prevent cross-process debugging attacks
  • Integrity (IMA/EVM): Runtime file integrity measurement and extended attribute protection
  • AppArmor: Mandatory Access Control — confines processes to what they legitimately need
  • BPF LSM: eBPF-based dynamic security policy hooks
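On a running system the active stack is exposed at /sys/kernel/security/lsm (the kernel always lists capability first). A sketch that checks all six are present; it runs against the expected string here so it works anywhere (on ShaniOS, substitute active=$(cat /sys/kernel/security/lsm)):

```shell
# Expected LSM stack; on ShaniOS read it live with:
#   active=$(cat /sys/kernel/security/lsm)
active="capability,landlock,lockdown,yama,integrity,apparmor,bpf"

for mod in landlock lockdown yama integrity apparmor bpf; do
  case ",$active," in
    *,"$mod",*) ;;                        # module present
    *) echo "missing: $mod"; exit 1 ;;
  esac
done
echo "all six LSMs active"
```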

Kernel Parameter: systemd.volatile=state

The systemd.volatile=state kernel parameter creates a tmpfs (RAM-based) overlay for /var, making it volatile by default:

# From kernel command line
systemd.volatile=state

# Result: /var is a tmpfs, cleared on every reboot.
# Persistent data is restored via bind mounts into /var at boot:
#   /var/log        ← @log subvolume
#   /var/cache      ← @cache subvolume
#   /var/lib/flatpak    ← @flatpak subvolume
#   /var/lib/containers ← @containers subvolume
#   /var/lib/waydroid   ← @waydroid subvolume
#   /var/lib/machines   ← @machines subvolume
#   /var/lib/lxc        ← @lxc subvolume
#   /var/lib/libvirt    ← @libvirt subvolume
#   /var/lib/NetworkManager, bluetooth, etc. ← bind from @data/varlib/

Hibernation Support

ShaniOS automatically configures hibernation if a swapfile is created:

  • Swapfile location: /swap/swapfile (in @swap subvolume)
  • Size: Equal to RAM (or less if disk space limited)
  • Resume offset: Automatically calculated using btrfs inspect-internal map-swapfile
  • Btrfs CoW: Disabled on @swap subvolume for reliability

Disk Space for Swapfile / zram fallback: If available disk space is less than RAM size at first deployment, swapfile creation is skipped and the system uses zram for swap instead. Hibernation will not be available in this case.

zram is a compressed RAM-based swap device. Under memory pressure, the kernel compresses inactive pages and stores them in the zram device in RAM instead of writing them to disk. This avoids disk I/O latency entirely, reduces SSD write wear, and keeps the system responsive where disk-backed swap would thrash. The trade-off is that zram competes for the same physical RAM it is compressing, so it is most effective when pages compress well (most text, code, and data do). On ShaniOS, zram is managed by [email protected].
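[email protected] is provided by zram-generator, which reads /etc/systemd/zram-generator.conf. A representative configuration (illustrative values; not necessarily what ShaniOS ships):

```ini
# /etc/systemd/zram-generator.conf (sketch; illustrative values)
[zram0]
# Device size relative to installed RAM; "ram / 2" is a common choice
zram-size = ram / 2
compression-algorithm = zstd
```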

Security

Security Features

ShaniOS implements defence-in-depth security across multiple independent layers. The read-only root prevents runtime modification of system files even by root. Six Linux Security Modules run simultaneously via a single lsm= kernel parameter — most distributions enable one or two. LUKS2 with argon2id protects data at rest with a memory-hard key derivation function specifically resistant to GPU and ASIC brute-force. Flatpak sandboxes applications. AppArmor confines system services with enforced profiles. firewalld denies all unsolicited inbound connections from first boot. Intel ME modules are blacklisted, removing the low-level hardware management channel from the attack surface. Every OS image is SHA256 + GPG verified before deployment. Together, these layers mean that even a fully compromised application cannot permanently damage the OS — a reboot to the unmodified, verified system image is always available.

🔒

Immutable Root

Read-only filesystem prevents malware from modifying system files even at runtime as root. Rebooting always restores the verified, unmodified state.

🔐

Full-Disk Encryption

LUKS2 with argon2id — a memory-hard key derivation function designed to resist GPU and ASIC brute-force. Optional TPM 2.0 enrollment enables automatic unlocking on trusted hardware without a passphrase at every boot.

📦

App Sandboxing

Flatpak provides application isolation and fine-grained permission control via Portals. Snap packages are additionally confined via AppArmor profiles loaded by snapd.apparmor.service.

🛡️

AppArmor MAC

Mandatory Access Control confines system services with profiles enforced from first boot. snapd.apparmor.service loads Snap confinement profiles automatically.

🔥

Active Firewall

firewalld denies all inbound connections by default from first boot. Rules for KDE Connect and Waydroid networking are pre-configured. Zone-based — add rules only for services you explicitly enable.

🖊️

Kernel LSM Stack

Landlock, Lockdown, Yama, Integrity (IMA/EVM), AppArmor, and BPF LSM are all active simultaneously via lsm=landlock,lockdown,yama,integrity,apparmor,bpf. Most distributions run one or two; ShaniOS runs all six concurrently.

🔇

Intel ME Disabled

The Intel Management Engine kernel modules (mei, mei_me) are blacklisted by default, removing Intel's remote management interface from the attack surface. This is a genuine differentiator — most distributions leave ME active.

🔏

Supply Chain Integrity

Every OS image is SHA256 + GPG verified before deployment. The public key is on public keyservers. The build system and deploy toolchain are public on GitHub — independently auditable end to end. A tampered or corrupted image is rejected outright; the update aborts, nothing changes.

Linux Security Modules — Full Detail

The six LSMs are activated together via a single kernel command-line parameter embedded in the Unified Kernel Image at build time:

lsm=landlock,lockdown,yama,integrity,apparmor,bpf
  • Landlock: Filesystem sandboxing at the process level. Allows unprivileged processes to restrict their own filesystem access — a compromised process can be limited to only the paths it legitimately needs, even without root.
  • Lockdown: Restricts kernel modifications even by root. Prevents unsigned kernel module loading, direct kernel memory writes, and other operations that could subvert kernel integrity. Two modes: integrity (prevent kernel modifications) and confidentiality (also prevent kernel data extraction).
  • Yama: Restricts ptrace scope and other process tracing. Prevents one user's processes from attaching a debugger to another user's processes — closes a common privilege escalation path.
  • Integrity (IMA/EVM): Runtime file integrity measurement. IMA (Integrity Measurement Architecture) records cryptographic hashes of files as they are accessed, building a runtime measurement log. EVM (Extended Verification Module) protects extended attributes (including security labels) from offline tampering.
  • AppArmor: Mandatory Access Control that confines processes to the files, capabilities, and network operations they legitimately need. Profiles are shipped pre-written and enforced from first boot. snapd.apparmor.service loads Snap confinement profiles automatically.
  • BPF LSM: eBPF-based security policy hooks. Allows dynamic, programmable security enforcement without modifying the kernel — security tools can attach BPF programs to kernel hooks to monitor and control behaviour at runtime.

Intel Management Engine — Disabled by Default

The Intel Management Engine (ME) is a separate, low-power processor embedded in Intel chipsets that operates independently of the main CPU and OS — including when the system is powered off (as long as power is connected). The ME runs its own firmware and has broad access to system resources.

ShaniOS blacklists the ME kernel modules (mei and mei_me) by default. This removes the OS-level communication channel to the ME, reducing the attack surface accessible from within a running Linux session. This is a meaningful privacy and security measure that most Linux distributions do not implement.

# Verify ME modules are blacklisted
cat /etc/modprobe.d/blacklist-intel-me.conf

# Confirm the modules are not loaded
lsmod | grep mei     # Should return nothing
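The blacklist file itself is a two-line modprobe configuration. Its expected contents, consistent with the modules named above (sketch):

```shell
# /etc/modprobe.d/blacklist-intel-me.conf (expected contents)
blacklist mei
blacklist mei_me
```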

Hardware Security Keys, Smart Cards & NFC

Enterprise and high-security authentication methods work at first boot — no setup required:

  • FIDO2/U2F (libfido2): YubiKey and other FIDO2/U2F hardware security keys work for PAM login, sudo authentication, and web authentication (via browser WebAuthn) without any configuration. The relevant udev rules and pcscd.socket are pre-configured.
  • Smart Cards (opensc + ccid + acsccid): PKCS#11 smart card authentication for login and sudo. CCID and ACS smart card readers are supported. pcscd.socket (PC/SC daemon) starts on demand when a card is accessed — no idle resource use.
  • NFC (libnfc): NFC hardware is supported for authentication workflows that use NFC-based tokens or contactless smart cards.
  • Fingerprint (fprintd): Biometric login and sudo authentication via PAM integration. Works at first boot on supported hardware.
# Test FIDO2 key detection
fido2-token -L

# List connected smart cards
opensc-tool -l

# Check fingerprint reader
fprintd-list $USER

# Check pcscd socket status
systemctl status pcscd.socket

Supply Chain Integrity & Image Verification

Supply chain attacks — injecting malicious code between a trusted source and the end user — are one of the most serious threats to software distribution. ShaniOS's update model is designed with this in mind.

Every OS image is GPG-signed before distribution. Before shani-deploy extracts a new image it verifies both the SHA256 checksum and the GPG signature. A tampered or corrupted image is rejected outright — the update aborts and nothing changes on your system. The verification uses the key ID 7B927BFF...8014792, which is published on public keyservers and can be independently fetched and verified.

# Manually verify an image (shani-deploy does this automatically)
# Fetch the public key from keyservers (substitute the full ShaniOS signing key ID)
gpg --keyserver keys.openpgp.org --recv-keys <FULL_KEY_ID>

# Verify the signature file against the image
gpg --verify shanios-image.zst.sig shanios-image.zst

# Verify SHA256 checksum
sha256sum -c shanios-image.zst.sha256

The build system, deploy scripts, and full signing workflow are public on GitHub. You can verify the full chain yourself — from build to signed image to deployment — without trusting any single party's word.
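The reject-outright behaviour is easy to demonstrate with the checksum half of the check. A self-contained sketch that builds a stand-in image, generates its .sha256 file, and verifies it the same way shani-deploy would before extraction (simplified; the real flow also verifies the GPG signature):

```shell
# Stand-in for a downloaded image: verify its checksum before "deploying"
tmp=$(mktemp -d)
printf 'demo image payload' > "$tmp/shanios-image.zst"

# Normally the .sha256 file ships alongside the image; generate one here
( cd "$tmp" && sha256sum shanios-image.zst > shanios-image.zst.sha256 )

if ( cd "$tmp" && sha256sum -c --quiet shanios-image.zst.sha256 ); then
  echo "checksum OK: proceed to GPG signature verification"
else
  echo "checksum mismatch: update aborted, nothing deployed"
fi

rm -rf "$tmp"
```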

Secure Boot Setup

ShaniOS supports UEFI Secure Boot via Machine Owner Keys (MOK).

Automatic Secure Boot Enrollment (If Enabled During Install)

If you enabled Secure Boot during installation:

  1. On first reboot, MOK Manager will appear automatically
  2. Select "Enroll MOK"
  3. Enter the password: shanios
  4. Reboot to complete the enrollment
  5. Secure Boot will be active with ShaniOS keys enrolled

MOK (Machine Owner Key): This allows ShaniOS's signed bootloader and kernel to boot with Secure Boot enabled. The password "shanios" is only used during the one-time enrollment process and is not stored.

Manual Secure Boot Enablement

To enable Secure Boot after installation:

1

Import MOK Key

sudo mokutil --import /etc/secureboot/keys/MOK.der

Set a one-time password when prompted

2

Reboot and Enroll

MOK Manager appears on reboot. Select "Enroll MOK" and enter your password

3

Enable in BIOS

Enter BIOS/UEFI settings and enable Secure Boot if still disabled

4

Verify Enrollment

sudo mokutil --list-enrolled

Note: If MOK Manager doesn't appear, repeat step 1 and reboot. It only triggers when a key is pending enrollment.

TPM-Based Encryption

Enroll your LUKS encryption key to TPM 2.0 for automatic unlocking.

Prerequisites

  • TPM 2.0 hardware module (enabled in BIOS/UEFI)
  • LUKS2 encryption enabled during installation
  • Recommended: Secure Boot enabled for maximum security

Enrollment Process

1

Verify TPM

systemd-cryptenroll --tpm2-device=list
2

Check Secure Boot Status

mokutil --sb-state

If disabled, consider enabling Secure Boot first

3

Find Your LUKS Device

ShaniOS uses /dev/mapper/shani_root as the unlocked LUKS mapping. To find the underlying encrypted device:

# Get the physical encrypted device
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
echo "Your encrypted device is: $LUKS_DEVICE"

# Example output: /dev/sda2 or /dev/nvme0n1p2
4

Enroll Encryption Key

With Secure Boot (recommended):

LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 "$LUKS_DEVICE"

Without Secure Boot:

LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0 "$LUKS_DEVICE"

Enter your LUKS password when prompted

5. Regenerate Boot Images

gen-efi configure can only regenerate the UKI for the currently booted slot directly. To update both slots, use shani-deploy which chroots into the candidate slot automatically:

# Regenerate only the currently booted slot (e.g., if booted into @blue):
sudo gen-efi configure blue

# To regenerate the OTHER slot's UKI as well, trigger a redeployment:
sudo shani-deploy --force

Running gen-efi configure green while booted into @blue (or vice versa) will be rejected by the script to prevent generating a UKI with the wrong kernel. shani-deploy --force handles this correctly by chrooting into the candidate slot.
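The cross-slot rejection described above boils down to comparing the requested slot against the booted one. A minimal illustrative sketch (not the actual gen-efi source; /data/current-slot is simulated with a temporary file):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the gen-efi slot guard (not the real script):
# /data/current-slot is simulated with a temporary file.
state=$(mktemp)
echo "blue" > "$state"              # pretend @blue is the booted slot
requested="green"                   # user asked to regenerate @green
booted=$(cat "$state")
msg=""
if [ "$requested" != "$booted" ]; then
    msg="refusing: booted into $booted, cannot regenerate $requested"
fi
echo "$msg"
```

This is why shani-deploy --force is the supported path for the other slot: it chroots into the candidate subvolume, so the guard's "booted slot" and the target slot agree.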

6. Test & Verify

Reboot. System should unlock automatically.

# Verify enrollment
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo cryptsetup luksDump "$LUKS_DEVICE" | grep systemd-tpm2

Remove TPM Enrollment

To restore password-only unlocking:

LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"

# Regenerate UKI for the currently booted slot:
sudo gen-efi configure blue   # if booted into @blue
# or: sudo gen-efi configure green   # if booted into @green

# To update the other slot's UKI too:
sudo shani-deploy --force

Re-enroll After Enabling Secure Boot

If you enable Secure Boot after TPM enrollment:

# Get the device
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')

# Remove old enrollment
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"

# Re-enroll with Secure Boot protection
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 "$LUKS_DEVICE"

# Regenerate UKI for the currently booted slot (e.g. @blue):
sudo gen-efi configure blue

# Then force-redeploy so the other slot also gets an updated UKI:
sudo shani-deploy --force

Important:

  • Your LUKS password remains valid as fallback
  • Firmware changes break TPM unlock (password required to re-enroll)
  • After any TPM change, regenerate the currently booted slot's UKI with gen-efi configure <slot>, then run shani-deploy --force to update the other slot
  • Keep LUKS password secure for recovery — it is your only fallback if TPM unlock fails

Updates & Config

System Updates

Automatic Update Checker

ShaniOS includes a background service that automatically checks for updates:

  • shani-update runs as a per-user systemd service (shani-update.timer), firing 5 minutes after login and then every 2 hours.
  • It is slot-aware: if the system detects it is running from the candidate slot (a post-update test boot), it silently exits and sends a notification asking the user to validate the new system before the next update cycle.
  • When an update is available it presents a GUI dialog (yad for X11/Wayland with backend fallback, then zenity, kdialog, or console) asking the user to install or defer.
  • If deferred, it schedules a follow-up via systemd-run --user --on-active=86400s (24 hours). If approved, it opens a terminal window running pkexec shani-deploy under systemd-inhibit, monitors completion via a status file, and on success presents a reboot prompt.
  • Logs are kept in ~/.cache/shani-update.log with automatic 1 MB rotation.
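The 1 MB log rotation can be pictured with a small sketch. This is not the updater's actual code; the log path and contents are simulated with a temporary file:

```shell
#!/usr/bin/env bash
# Sketch: rotate a log once it exceeds 1 MB, as described for
# ~/.cache/shani-update.log (simulated here with a temp file).
log=$(mktemp)
head -c 2097152 /dev/zero > "$log"   # simulate a 2 MB log file
if [ "$(wc -c < "$log")" -gt 1048576 ]; then
    mv "$log" "$log.old"             # keep one rotated copy
    : > "$log"                       # start a fresh, empty log
fi
wc -c < "$log"                       # size after rotation
```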

Manual Update

sudo shani-deploy

The update process includes:

  1. Self-update check: shani-deploy fetches the latest version of itself from GitHub and re-executes if a newer version is found (can be skipped with --skip-self-update)
  2. System inhibit: prevents sleep, shutdown, and lid-close events for the duration of the update
  3. Prerequisites check (root access, internet connectivity, disk space — minimum 10 GB free)
  4. Boot validation: reads /data/current-slot and confirms it matches the running slot; detects and warns on slot mismatch
  5. Mirror discovery (R2 CDN primary at downloads.shani.dev, SourceForge mirror as fallback)
  6. Intelligent download with resume support (up to 5 attempts; R2 supports resume, SourceForge restarts on failure)
  7. Verification: SHA256 checksum and GPG signature (key ID 7B927BFF...8014792)
  8. Btrfs snapshot backup of the candidate slot before overwriting (named <slot>_backup_<timestamp>)
  9. Deployment: extract new image into a temporary subvolume via zstd | btrfs receive, then snapshot to @candidate and set read-only
  10. Filesystem verification: ensures all required subvolumes exist; creates missing ones (including swapfile at RAM size if @swap is new)
  11. UKI generation: chroots into the new candidate slot and runs gen-efi configure <slot>
  12. Boot entry update: candidate slot marked "Candidate", active slot stays "Active"; loader.conf default set to active slot
  13. Cleanup: keeps only the latest backup per slot, removes old download files
  14. Storage analysis: prints filesystem usage and per-subvolume compression stats via compsize
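The checksum half of step 7 can be shown in miniature. The file below is a stand-in, not the real image name; shani-deploy additionally verifies a GPG signature on top of the SHA256 check:

```shell
#!/usr/bin/env bash
# Miniature sketch of SHA256 verification (step 7), using a stand-in
# file rather than a real downloaded image.
img=$(mktemp)
echo "image payload" > "$img"
sha256sum "$img" > "$img.sha256"     # what the publisher ships
sha256sum -c "$img.sha256" && echo "checksum OK"
```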

Automatic Rollback — Full Boot-Counting Pipeline: ShaniOS uses a multi-layer system to detect and recover from bad boots:

  1. mark-boot-in-progress.service runs very early (before local-fs.target) and creates /data/boot_in_progress while clearing any previous boot-ok, boot_failure, or boot_hard_failure markers. This flags every boot as "in progress" until proven successful.
  2. bless-boot.service runs shortly after (before basic.target) and calls bootctl set-good, which tells systemd-boot that this boot attempt was good, clearing its internal try-counter. Only runs if boot_in_progress exists.
  3. mark-boot-success.service runs after multi-user.target and writes /data/boot-ok, then removes boot_in_progress. This is the definitive "system reached a usable state" marker.
  4. check-boot-failure.timer fires 15 minutes after boot and runs check-boot-failure.service, which checks: if boot_in_progress still exists and boot-ok is absent, the boot stalled — it writes the failed slot name to /data/boot_failure. A dracut hook can write /data/boot_hard_failure even earlier for kernel-level failures.
  5. startup-check.sh runs at login (via autostart or systemd user session) and reads the failure files. If a fallback boot is confirmed — booted slot differs from current-slot and a boot_failure file exists for the expected slot — it presents a GUI dialog (yad / zenity / kdialog with console fallback) prompting the user to rollback. If the user approves, it opens a terminal and runs pkexec shani-deploy --rollback, monitors completion, then offers to reboot.
  6. shani-deploy also detects the mismatch independently on next update run via the /data/current-slot vs booted-slot comparison, giving a second recovery path even if the user skips the GUI prompt.

A /data/deployment_pending flag prevents a partially-completed deploy from being lost if power fails mid-extraction. The shani-update user-mode checker also silently exits if it detects the system is still in a "candidate boot" validation state, suppressing spurious update prompts until stability is confirmed.
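The stall check in step 4 reduces to a marker-file test. Here it is simulated against a temporary directory rather than the real /data (illustrative only, not the check-boot-failure source):

```shell
#!/usr/bin/env bash
# Simulation of check-boot-failure's core test: boot_in_progress
# exists but boot-ok never appeared, so the boot stalled and the
# failed slot name is recorded.
DATA=$(mktemp -d)                    # stands in for /data
touch "$DATA/boot_in_progress"       # written early in boot
# ... boot-ok was never written ...
if [ -e "$DATA/boot_in_progress" ] && [ ! -e "$DATA/boot-ok" ]; then
    echo "blue" > "$DATA/boot_failure"
fi
cat "$DATA/boot_failure"
```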

Update Channels

ShaniOS supports multiple update channels:

  • stable (default): Thoroughly tested releases
  • latest: Newest features (may be less stable)

Versions use a date-based format (YYYYMMDD, e.g. 20251201). The channel manifest (stable.txt or latest.txt) is fetched from SourceForge and contains the current image filename. Version comparison is lexicographic — newer dates are always greater — and the user-space checker (shani-update) also handles profile changes at the same version date as a valid update trigger.
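Because both versions are fixed-width YYYYMMDD strings, string order equals date order, so no numeric parsing is needed. A minimal sketch of the comparison:

```shell
#!/usr/bin/env bash
# Sketch: lexicographic comparison of date-based versions. Both
# strings are fixed-width YYYYMMDD, so string order = date order.
current="20251101"
remote="20251201"
update_available="no"
if [ "$remote" != "$current" ] &&
   [ "$(printf '%s\n%s\n' "$current" "$remote" | sort | tail -n1)" = "$remote" ]; then
    update_available="yes"
fi
echo "$update_available"
```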

Set your preferred channel:

# Choose channel
sudo shani-deploy -t latest

# Or set permanently
echo "latest" | sudo tee /etc/shani-channel

Update Options

# Show help
sudo shani-deploy --help

# Rollback to previous version (restores from Btrfs backup snapshot)
sudo shani-deploy --rollback

# Manual cleanup of old backups and downloaded images
sudo shani-deploy --cleanup

# Storage analysis: filesystem usage and per-subvolume compression stats
sudo shani-deploy --storage-info

# Manual deduplication run across @blue, @green, and backup subvolumes
# (bees handles continuous background dedup; this is for on-demand use)
sudo shani-deploy --optimize

# Dry run (simulate without making changes)
sudo shani-deploy --dry-run

# Force redeploy even if already on latest version
sudo shani-deploy --force

# Verbose output for troubleshooting
sudo shani-deploy --verbose

# Skip self-update of the deploy tool
sudo shani-deploy --skip-self-update

shani-deploy Quick Reference

Command Description
sudo shani-deploy Standard update: check → download → verify → deploy
sudo shani-deploy --rollback Restore previous slot from its Btrfs backup snapshot
sudo shani-deploy --cleanup Remove old backups and cached downloads to free space
sudo shani-deploy --storage-info Print filesystem usage and per-subvolume compression stats
sudo shani-deploy --optimize Run on-demand block-level deduplication across all slots
sudo shani-deploy --dry-run Simulate the full update process without making any changes
sudo shani-deploy --force Force a full redeploy even when already on the latest version
sudo shani-deploy --verbose Enable verbose output for troubleshooting
sudo shani-deploy --skip-self-update Skip the automatic self-update of the deploy tool itself
sudo shani-deploy -t latest Run this update using the latest channel

shani-deploy keeps itself up to date: On every run, shani-deploy fetches its own latest version from GitHub. If a newer version is found, it re-executes automatically before continuing. Use --skip-self-update to suppress this behaviour — useful in automated or scripted contexts. The current version is printed at startup and logged to the system journal.

Recovery: System Won't Boot

Firmware Updates (fwupd / LVFS)

fwupd is pre-installed and fwupd-refresh.timer runs automatically to check for new firmware. Update BIOS, NVMe controllers, SSD firmware, keyboard firmware, Thunderbolt devices, and other hardware supported by the Linux Vendor Firmware Service — no Windows, no USB boot drive, no manufacturer tools required.

# Refresh the LVFS metadata (normally automatic via fwupd-refresh.timer)
fwupdmgr refresh

# Check for available firmware updates
fwupdmgr get-updates

# Apply all available firmware updates
fwupdmgr update

# List all hardware devices fwupd can manage
fwupdmgr get-devices

# Show the firmware update history of this machine
fwupdmgr get-history

# List available firmware releases for a specific device
fwupdmgr get-releases <device-id>

# Downgrade to a previous firmware version (if available)
fwupdmgr downgrade

passim — LAN firmware caching: passim is pre-installed alongside fwupd. It broadcasts available firmware payloads via mDNS/Avahi so that other ShaniOS machines on the same LAN can download firmware from a local machine rather than re-fetching from LVFS. In a multi-machine home or office environment this saves bandwidth and speeds up firmware updates across all devices automatically.

All Ecosystem Updates — Quick Reference

# OS (always use shani-deploy — never pacman -Syu)
sudo shani-deploy

# Flatpak apps (also auto-updates every 12 hours via timer)
flatpak update

# Snap packages (also auto-updates in background)
snap refresh

# Nix packages (imperative profile)
nix-channel --update && nix-env -u '*'

# Nix flakes
nix flake update

# Home Manager (if installed)
home-manager switch

# Podman container images
podman auto-update

# Firmware (BIOS, NVMe, SSD controllers, Thunderbolt, etc.)
fwupdmgr refresh && fwupdmgr update

Shell & Environment

The default shell is Zsh. All user shell configuration lives in /home and is fully writable.

  • Add environment variables permanently: append to ~/.zshrc (Zsh), ~/.bashrc (Bash), or ~/.config/fish/config.fish (Fish).
  • System-wide environment variables: edit /etc/environment — this uses the /etc overlay and persists across all updates.
  • Change your default shell: chsh -s /bin/fish (or /bin/bash, /bin/zsh). Changes take effect at next login. The shell binary must be listed in /etc/shells.
  • Starship prompt customisation: edit ~/.config/starship.toml. Full reference at starship.rs/config.
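chsh rejects any shell not listed in /etc/shells; the check it performs amounts to the following (simulated here against a temporary copy rather than the real file):

```shell
#!/usr/bin/env bash
# Simulated /etc/shells membership check (chsh reads the real
# /etc/shells itself; a temp copy is used here for illustration).
shells=$(mktemp)
printf '/bin/bash\n/bin/zsh\n/bin/fish\n' > "$shells"
target="/bin/fish"
if grep -qx "$target" "$shells"; then
    echo "ok: $target is registered"
fi
```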

McFly — Neural Network Shell History

McFly replaces the standard Ctrl+R history search with a neural network that learns from your usage patterns — prioritising commands by current directory, recent context, and exit codes. The longer you use it, the better it predicts what you need.

# Trigger McFly search (replaces standard Ctrl+R)
Ctrl+R

# McFly trains on your shell history automatically.
# The database grows smarter over time and is never cleared on updates.
# Database location:
ls ~/.local/share/mcfly/

# View McFly's training data size
du -sh ~/.local/share/mcfly/history.db

# Temporarily disable McFly and use default history search:
MCFLY_DISABLE=true zsh

# Delete McFly history and start fresh (rare)
rm ~/.local/share/mcfly/history.db

Profile Sync Daemon (PSD) — Browser Profiles in RAM

PSD moves your browser profiles to a tmpfs RAM filesystem, then syncs them back to disk periodically and on shutdown. The result is faster browser startup, faster tab loading, and substantially less SSD write wear from day-to-day browsing. It is enabled globally for all users on ShaniOS.

# Check PSD status
systemctl --user status psd

# Preview what PSD manages (dry run)
psd preview

# PSD supports all major browsers automatically:
# Vivaldi, Firefox, Chromium, Chrome, Brave, Opera, and more.
# Profiles are detected from their default locations.

# Manually sync profiles back to disk now (normally runs on a timer)
systemctl --user start psd-resync

# PSD sync on shutdown is handled automatically by the systemd user unit.
# Your data is safe — sync runs every few minutes and on every logout.

# Log location:
journalctl --user -u psd -f

FZF — Fuzzy Finder Integration

# Fuzzy search shell history (alternative to McFly)
Ctrl+R

# Fuzzy insert a file path into the command line
Ctrl+T

# Fuzzy cd into a subdirectory
Alt+C

# Pipe any command output through fzf
ls | fzf
cat /etc/passwd | fzf

System Configuration

The /etc directory uses an overlay filesystem, allowing modifications:

# Edit system configuration normally
sudo nano /etc/some-config-file

# Changes are stored in /data/overlay/etc/upper
# and persist across all system updates and slot switches

# List everything you've changed from defaults
ls -la /data/overlay/etc/upper

# Recursively list every file you've changed (including subdirectories)
find /data/overlay/etc/upper -type f

# To revert a single file to its system default:
sudo rm /data/overlay/etc/upper/path/to/file
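Conceptually, the overlay resolves each path by preferring the writable upper copy over the read-only lower one. A toy illustration of that lookup order, with no real overlayfs mount involved:

```shell
#!/usr/bin/env bash
# Toy illustration of overlay lookup order (no real mount): an edited
# file in the upper dir shadows the pristine lower copy.
lower=$(mktemp -d)                   # stands in for the read-only base
upper=$(mktemp -d)                   # stands in for /data/overlay/etc/upper
echo "default" > "$lower/motd"
echo "custom"  > "$upper/motd"       # your persistent edit
if [ -e "$upper/motd" ]; then
    cat "$upper/motd"                # merged view serves the upper copy
else
    cat "$lower/motd"
fi
```

Deleting the upper copy (as in the revert example above) makes the lookup fall through to the pristine lower file again.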

Software & Apps

Software Management

ShaniOS uses Flatpak as the primary application delivery method, with Snaps for sandboxed cross-distro packages, AppImages for portable self-contained binaries, Nix for CLI tools, containers for development environments, and optionally Homebrew for cross-platform tooling.

Software Installation Methods:

  • 📦 Flatpak — Best for: desktop GUI apps, web browsers, office suites, media players. Sandboxed & isolated. Install: flatpak install flathub org.app.Name. Stored in: @flatpak
  • 🫰 Snap — Best for: cross-distro packages, Canonical-published apps, auto-updating services. Classic & strict modes, sandboxed. Install: snap install app, run: snap run app. Stored in: @snapd
  • ❄️ Nix — Best for: CLI tools & dev libs, reproducible envs, pinned versions, cross-user sharing. Declarative & atomic. Install: nix-env -iA nixpkgs.tool or nix shell nixpkgs#python. Stored in: @nix (shared)
  • 🐳 Containers — Best for: mutable environments, build toolchains, team dev images, full OS testing. A full Linux inside: distrobox, podman, lxc, nspawn. Stored in: @containers
  • 📄 AppImage — Best for: portable apps, one-time tools, software not on Flathub, vendor releases. Self-contained: chmod +x app.AppImage && ./app.AppImage. GUI: Gear Lever
  • 🍺 Homebrew — Best for: macOS-familiar tools, latest versions, cross-OS consistency, tools not in Nix/distro repos. User-space only (/home/linuxbrew/); manual install required

Choose the right installation method — Flatpak for GUI apps, Snap for cross-distro packages, AppImage for portable binaries, Nix for CLI tools, containers for full environments

Flatpak

Managing Applications via GUI

  • GNOME: Use GNOME Software for graphical app management
  • KDE: Use Discover for browsing and installing Flatpaks
  • Both: Support Flathub integration out of the box

Installing Flatpak Applications

# Search for applications
flatpak search keyword

# Install from Flathub
flatpak install flathub org.application.Name

# List installed apps
flatpak list

# Remove an app
flatpak uninstall org.application.Name

Gaming

Game Launchers:

flatpak install flathub com.valvesoftware.Steam
flatpak install flathub com.heroicgameslauncher.hgl
flatpak install flathub net.lutris.Lutris
flatpak install flathub org.libretro.RetroArch

Gaming Peripherals:

flatpak install flathub io.github.antimicrox.antimicrox  # Gamepad mapper
flatpak install flathub org.freedesktop.Piper            # Mouse configuration
flatpak install flathub org.openrgb.OpenRGB              # RGB lighting

Performance Tools:

flatpak install flathub io.github.benjamimgois.goverlay  # Overlay manager
flatpak install flathub org.freedesktop.Platform.VulkanLayer.MangoHud
flatpak install flathub org.freedesktop.Platform.VulkanLayer.gamescope

Windows Applications

Run Windows apps using Wine via Bottles:

flatpak install flathub com.usebottles.bottles

Snaps

Snap packages are sandboxed, self-contained applications published to the Snap Store by Canonical and third-party developers. ShaniOS ships with snapd pre-installed and enabled (snapd.socket and snapd.apparmor.service are active at boot), and all Snap data lives in the dedicated @snapd Btrfs subvolume mounted at /var/lib/snapd — surviving all system updates and rollbacks.

How Snaps Work on ShaniOS

Snaps come in two confinement modes:

  • Strict: Fully sandboxed via AppArmor and seccomp. The snap can only access what its permissions (plugs/slots) explicitly allow. Most GUI and CLI apps use this mode.
  • Classic: Unrestricted access to the host filesystem — behaves like a traditionally installed app. Requires --classic flag when installing. Used by developer tools (e.g. code editors, compilers) that need broad access.

Snaps auto-update silently in the background by default. Each snap revision is kept on disk so rollback is instant if an update breaks something.

Basic Snap Commands

# Search for a snap
snap find keyword

# Install a snap (strict confinement)
snap install app-name

# Install a snap with classic confinement
snap install app-name --classic

# List installed snaps
snap list

# Check available updates
snap refresh --list

# Update all snaps
snap refresh

# Update a specific snap
snap refresh app-name

# Roll back a snap to the previous revision
snap revert app-name

# Remove a snap
snap remove app-name

# Run a snap
snap run app-name

Snap Permissions (Interfaces)

Snaps declare the system resources they need as interfaces. Some connect automatically; others require manual approval.

# List all interfaces for a snap
snap connections app-name

# Connect an interface manually
snap connect app-name:camera

# Disconnect an interface
snap disconnect app-name:camera

# List all available interfaces on the system
snap interface

Managing the Snap Daemon

# Check snapd status
sudo systemctl status snapd

# View snapd logs
journalctl -u snapd -f

# Check AppArmor confinement status
sudo apparmor_status | grep snap

Snap Store — Popular Snaps

# Developer tools
snap install code --classic          # VS Code
snap install sublime-text --classic  # Sublime Text
snap install android-studio --classic

# Communication
snap install slack --classic
snap install discord

# Utilities
snap install bitwarden
snap install multipass               # Ubuntu VM manager

Snap storage on ShaniOS: All snap revisions, writable data, and runtime mounts live inside the @snapd Btrfs subvolume at /var/lib/snapd. This subvolume is shared across both @blue and @green slots — your installed snaps are available regardless of which slot is booted, and they persist through every system update and rollback.

AppImage

An AppImage is a single self-contained executable that bundles an application and all its dependencies into one file. No installation, no root access, no package manager — download, make executable, and run. AppImages are ideal for applications not available on Flathub or the Snap Store, or for running specific upstream release versions directly from a vendor.

Gear Lever — AppImage GUI Manager

Gear Lever is pre-installed on ShaniOS and provides full GUI management of AppImages. It handles desktop integration automatically — registering the app in your application launcher, associating file types, and managing updates.

# Open Gear Lever from your app launcher, or launch it from terminal
gear-lever

# Gear Lever automatically:
# - Registers the AppImage in your .local/share/applications
# - Creates a desktop entry with icon
# - Associates MIME types if declared in the AppImage
# - Checks for AppImageUpdate-compatible update feeds

Running AppImages Manually

# Make an AppImage executable
chmod +x MyApp-x86_64.AppImage

# Run it directly
./MyApp-x86_64.AppImage

# Optionally move it to a persistent location
mkdir -p ~/Applications
mv MyApp-x86_64.AppImage ~/Applications/

# Extract the contents without running (for inspection)
./MyApp-x86_64.AppImage --appimage-extract

AppImage Persistence

AppImages stored in /home live on the @home Btrfs subvolume and survive all system updates and rollbacks automatically. Desktop entries created by Gear Lever are stored in ~/.local/share/applications, also within @home. The /var/lib/appimage integration cache (Gear Lever's registry) is bind-mounted from /data/varlib/appimage in the @data subvolume and also persists across slot switches.

AppImage vs Flatpak vs Snap

  • AppImage: No daemon, no store, no sandboxing by default. Maximum portability — the file itself is the app. Best for offline use, vendor-distributed software (e.g. JetBrains IDEs, Obsidian, Kdenlive nightlies), or when you need a specific version.
  • Flatpak: Sandboxed, managed, auto-updates via Flathub. Best for everyday GUI apps with ongoing updates.
  • Snap: Sandboxed, managed by snapd, auto-updates via Snap Store. Good for cross-distro packages and Canonical-supported software.

AppImages and the immutable root: Because /usr is read-only, AppImages must be stored in /home, /data, or another writable location — never in /usr/local/bin or similar system paths. Gear Lever handles this correctly by storing everything in user-space.
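A minimal sketch of placing an AppImage in user space. The ~/Applications path is a common convention rather than something ShaniOS mandates, and the file below is a stand-in, not a real app:

```shell
#!/usr/bin/env bash
# Sketch: install an AppImage into a user-writable directory and
# mark it executable. ~/Applications is a convention, not a rule.
dest="$HOME/Applications"
mkdir -p "$dest"
app="$dest/Example.AppImage"         # stand-in file, not a real app
touch "$app"
chmod +x "$app"
[ -x "$app" ] && echo "ready to run: $app"
```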

Containers

Containers — Full Reference

ShaniOS ships a complete pre-installed container stack: Podman (rootless, daemonless, Docker-compatible), podman-docker (drop-in docker CLI alias), podman-compose (Docker Compose support), buildah (OCI image builder), skopeo (image inspection/copying), Distrobox (mutable Linux envs with desktop integration), LXC (full system containers), and systemd-nspawn (lightweight OS containers). Container data lives in dedicated Btrfs subvolumes (@containers, @lxc, @machines) and survives all system updates and rollbacks.

Pre-installed Container & Isolation Stack:

  • 🐳 Podman — rootless, daemonless, Docker-compatible API, plus podman-docker (drop-in docker CLI) and podman-compose (Docker Compose support). Pods (K8s-like groups). GUI: Pods. OCI + Docker registries. Stored in: @containers
  • 🔨 buildah — OCI image builder, no daemon or root required. Builds from a Dockerfile or scripted (bud), pushes to any OCI registry, rootless builds with fine-grained layer control; skopeo for registry ops.
  • 📦 Distrobox — mutable Linux environments with desktop integration: apps in the launcher, shared /home, any distro image, use apt/dnf/pacman. Dev-tools escape hatch. GUI: BoxBuddy. Stored in: @containers
  • 🖥️ LXC — full system containers with their own namespaces, network stack, and init system. Near-VM isolation, persistent storage, LXD management layer. Stored in: @lxc / @lxd
  • ⚙️ systemd-nspawn — lightweight OS containers built into systemd: machinectl management, chroot + networking, managed as systemd units. Stored in: @machines
  • 🔬 Apptainer (formerly Singularity) — HPC/scientific containers that run as the calling user: no daemon, no root, no privilege escalation. SIF (SquashFS) images, Docker Hub compatible via OCI auto-conversion, reproducible research, multi-user safe.

Six pre-installed container and isolation runtimes — Podman (primary, Docker-compatible), buildah (OCI image builder), Distrobox (mutable dev environments), LXC/LXD (full system containers), systemd-nspawn (lightweight OS containers), and Apptainer (HPC/scientific, no privilege escalation)

Podman — Rootless Docker-Compatible Containers

Podman is the primary container runtime on ShaniOS. It is rootless (no daemon, no root required), fully Docker-compatible via the podman-docker drop-in, and stores all data in the @containers Btrfs subvolume.

# --- Basic Usage ---
# Pull an image
podman pull docker.io/library/nginx:latest
podman pull quay.io/fedora/fedora:latest

# Run a container interactively
podman run -it --rm ubuntu:22.04 bash

# Run a detached service (web server on port 8080)
podman run -d --name myapp -p 8080:80 nginx:latest

# List running containers
podman ps

# List all containers (including stopped)
podman ps -a

# Stop / start / restart
podman stop myapp
podman start myapp
podman restart myapp

# Remove container and image
podman rm myapp
podman rmi nginx:latest

# View logs
podman logs -f myapp

# Exec into a running container
podman exec -it myapp bash
# --- Volumes & Mounts ---
# Named volume (managed by Podman)
podman volume create mydata
podman run -d -v mydata:/app/data nginx:latest

# Bind mount (host path → container path)
podman run -d -v /home/user/website:/usr/share/nginx/html:Z nginx:latest
# :Z relabels the content for SELinux; harmless on AppArmor systems

# List volumes
podman volume ls
podman volume inspect mydata
# --- Networking ---
# Default bridge network
podman network ls
podman network create mynet

# Run two containers on the same network (they see each other by name)
podman run -d --name db --network mynet postgres:16
podman run -d --name app --network mynet -p 3000:3000 myapp:latest

# Port mapping
podman run -d -p 127.0.0.1:8080:80 nginx  # localhost only
podman run -d -p 8080:80 nginx             # all interfaces
# --- Pods (multi-container groups, like Kubernetes pods) ---
# Create a pod with a shared network namespace
podman pod create --name myapp-pod -p 8080:80

# Add containers to the pod
podman run -d --pod myapp-pod --name frontend nginx:latest
podman run -d --pod myapp-pod --name backend node:20

# Manage the entire pod
podman pod start myapp-pod
podman pod stop myapp-pod
podman pod rm myapp-pod

# List pods
podman pod ls
# --- Systemd Integration (auto-start containers at boot) ---
# Generate a systemd unit for a running container
podman generate systemd --new --name myapp > ~/.config/systemd/user/myapp.service

# Enable auto-start at login
systemctl --user daemon-reload
systemctl --user enable --now myapp

# Or use quadlet (modern declarative approach, Podman 4.4+)
# Create ~/.config/containers/systemd/myapp.container:
# [Unit]
# Description=My App
# [Container]
# Image=docker.io/library/nginx:latest
# PublishPort=8080:80
# Volume=/home/user/html:/usr/share/nginx/html:Z
# [Install]
# WantedBy=default.target
systemctl --user daemon-reload
systemctl --user start myapp

podman-docker drop-in: The podman-docker package provides a docker command that transparently calls Podman. Any script or tool that uses docker will work unchanged — including CI scripts, makefiles, and development tooling. The Docker socket path is also emulated at /run/user/UID/podman/podman.sock.
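Tools that speak the Docker API directly (rather than invoking the docker CLI) can be pointed at Podman's rootless socket via DOCKER_HOST. A sketch, assuming the standard rootless socket location mentioned above:

```shell
#!/usr/bin/env bash
# Sketch: expose Podman's per-user socket to Docker-API clients.
# /run/user/<uid>/podman/podman.sock is the standard rootless path.
export DOCKER_HOST="unix:///run/user/$(id -u)/podman/podman.sock"
echo "$DOCKER_HOST"
```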

Docker Compose with podman-compose

podman-compose is pre-installed and understands standard docker-compose.yml files. Use it to run multi-service stacks without Docker.

# Example docker-compose.yml — WordPress + MariaDB
# Save to ~/projects/wordpress/docker-compose.yml

# version: "3.9"
# services:
#   db:
#     image: mariadb:11
#     environment:
#       MYSQL_ROOT_PASSWORD: rootpass
#       MYSQL_DATABASE: wordpress
#       MYSQL_USER: wp
#       MYSQL_PASSWORD: wppass
#     volumes:
#       - db_data:/var/lib/mysql
#
#   wordpress:
#     image: wordpress:latest
#     ports:
#       - "8080:80"
#     environment:
#       WORDPRESS_DB_HOST: db
#       WORDPRESS_DB_USER: wp
#       WORDPRESS_DB_PASSWORD: wppass
#       WORDPRESS_DB_NAME: wordpress
#     depends_on:
#       - db
#
# volumes:
#   db_data:

# Run the stack
podman-compose up -d

# Or via the docker drop-in
docker compose up -d

# View logs
podman-compose logs -f

# Stop and remove
podman-compose down

# Stop and remove including volumes
podman-compose down -v

buildah — Build OCI Images (Dockerfile & Scripted)

buildah builds OCI-compliant container images without a daemon and without root. It can build from a standard Dockerfile (using buildah bud) or from shell scripts for fine-grained control. Images built with buildah are immediately usable with Podman.

# --- Build from a Dockerfile ---

# Example Dockerfile (save as ~/myapp/Dockerfile):
# FROM node:20-alpine
# WORKDIR /app
# COPY package*.json ./
# RUN npm ci --only=production
# COPY . .
# EXPOSE 3000
# CMD ["node", "server.js"]

# Build the image (bud = Build Using Dockerfile)
buildah bud -t myapp:latest ~/myapp/

# Or using podman (calls buildah under the hood)
podman build -t myapp:latest ~/myapp/

# Build with build arguments
buildah bud --build-arg NODE_ENV=production -t myapp:prod .

# Build for a specific platform
buildah bud --platform linux/amd64 -t myapp:amd64 .
buildah bud --platform linux/arm64 -t myapp:arm64 .

# Multi-platform manifest
buildah manifest create myapp:multi
buildah bud --platform linux/amd64,linux/arm64 \
  --manifest myapp:multi .
# --- Scripted builds (buildah native API) ---
# More control than Dockerfile — useful for complex layering

# Start from a base image
ctr=$(buildah from ubuntu:22.04)

# Run commands inside (each becomes a layer)
buildah run $ctr -- apt-get update
buildah run $ctr -- apt-get install -y python3 python3-pip
buildah run $ctr -- pip3 install flask

# Copy files in
buildah copy $ctr ./app /opt/app

# Set metadata
buildah config --entrypoint '["python3", "/opt/app/main.py"]' $ctr
buildah config --port 5000 $ctr
buildah config --label maintainer="[email protected]" $ctr

# Commit to a named image
buildah commit $ctr myflaskapp:latest

# Remove the working container
buildah rm $ctr

# List images
buildah images

# Inspect an image
buildah inspect myflaskapp:latest
# --- Push images to a registry ---

# Push to Docker Hub
buildah push myapp:latest docker://docker.io/yourusername/myapp:latest

# Push to Quay.io
buildah push myapp:latest docker://quay.io/yourusername/myapp:latest

# Push to a local registry (e.g., Podman's built-in registry)
podman run -d -p 5000:5000 --name registry docker.io/library/registry:2
buildah push myapp:latest docker://localhost:5000/myapp:latest

# Push to an OCI archive (tarball)
buildah push myapp:latest oci-archive:/tmp/myapp.tar

# Tag an image
buildah tag myapp:latest myapp:1.0.0

Distrobox

Distrobox — Mutable Linux Environments with Desktop Integration

Distrobox wraps Podman containers to create mutable Linux environments that share your /home, display, audio, and USB devices. Apps installed inside appear in your desktop launcher. It is the recommended way to use traditional package managers (apt, dnf, pacman) on ShaniOS.

# --- Create containers ---
# Ubuntu 22.04
distrobox create --name ubuntu --image ubuntu:22.04

# Fedora latest
distrobox create --name fedora --image fedora:latest

# Arch Linux (for AUR access)
distrobox create --name arch --image archlinux:latest

# With custom home directory
distrobox create --name dev --image ubuntu:22.04 --home ~/distrobox-dev

# List all boxes
distrobox list

# Enter a box
distrobox enter ubuntu

# Run a single command without an interactive session
distrobox enter ubuntu -- apt list --installed
# --- Inside the box: install anything ---
# In the ubuntu box:
sudo apt update && sudo apt install -y nodejs npm python3-pip gcc make

# In the arch box (full AUR access — yay itself is an AUR package, so build it first):
sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/yay.git
cd yay && makepkg -si
yay -S some-aur-package
# --- Export apps & commands to host desktop ---
# Export a GUI app to your host launcher (appears in GNOME/KDE app menu)
distrobox-export --app firefox       # from inside the box
distrobox-export --app code          # VS Code installed inside box

# Export a CLI binary to host PATH (~/.local/bin/)
distrobox-export --bin /usr/bin/node --export-path ~/.local/bin
distrobox-export --bin /usr/bin/python3 --export-path ~/.local/bin

# Remove an exported app/bin
distrobox-export --app firefox --delete
distrobox-export --bin /usr/bin/node --export-path ~/.local/bin --delete
# --- Manage boxes ---
# Stop a running box
distrobox stop ubuntu

# Remove a box (keeps /home data)
distrobox rm ubuntu

# Remove box and delete its home dir
distrobox rm ubuntu --rm-home

# Upgrade all packages inside a box (uses the box's native package manager)
distrobox upgrade ubuntu

# Generate a desktop entry so the box appears in your app launcher
distrobox generate-entry ubuntu
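For reproducible setups, distrobox assemble can create boxes from a declarative manifest instead of one-off create commands. A minimal sketch — the box name dev and the package list are illustrative:

```ini
# distrobox.ini — apply with: distrobox assemble create --file distrobox.ini
[dev]
image=ubuntu:22.04
additional_packages="git curl build-essential"
```

Running distrobox assemble create in the directory containing the file (re)creates every box it defines, which makes it easy to rebuild your environments after a fresh install.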

BoxBuddy GUI: For a graphical Distrobox front-end, install BoxBuddy from Flathub: flatpak install flathub io.github.dvlv.boxbuddyrs. It lets you create, enter, and manage Distrobox containers visually without any terminal commands.

LXC — Full System Containers

LXC runs full OS containers with their own init system, network stack, and process tree — closer to a VM than an application container, but with much lower overhead. Classic LXC is managed with the lxc-* tools; the single lxc command used in the next section belongs to LXD. Container data is stored in the @lxc Btrfs subvolume.

# Create a container from the download template
sudo lxc-create -n my-server -t download -- -d ubuntu -r jammy -a amd64

# Create a Fedora container
sudo lxc-create -n fedora-box -t download -- -d fedora -r 39 -a amd64

# List containers and their state
sudo lxc-ls -f

# Start / stop a container
sudo lxc-start -n my-server
sudo lxc-stop -n my-server

# Open a shell in a running container
sudo lxc-attach -n my-server

# Copy files in via the container's root filesystem
sudo cp ~/myconfig.conf /var/lib/lxc/my-server/rootfs/etc/myconfig.conf

# Snapshot, list snapshots, restore
sudo lxc-snapshot -n my-server
sudo lxc-snapshot -n my-server -L
sudo lxc-snapshot -n my-server -r snap0

# Delete a stopped container
sudo lxc-destroy -n my-server

LXD — System Containers & Virtual Machines

LXD is the management daemon for LXC that adds clustering, projects, profiles, storage pools, and VM support. The lxd.socket is enabled at boot on ShaniOS; container and VM data lives in the @lxd Btrfs subvolume. LXD can run both system containers (like LXC) and full hardware-accelerated VMs sharing the same management interface.

# Initialise LXD (first-time setup — runs an interactive wizard)
sudo lxd init

# List available images from the LXD image server
lxc image list ubuntu: | head -20

# Launch an Ubuntu container
lxc launch ubuntu:22.04 web-server

# Launch a full VM (not a container)
lxc launch ubuntu:22.04 my-vm --vm

# List all instances (containers + VMs)
lxc list

# Get a shell
lxc exec web-server -- bash

# Show resource usage
lxc info web-server

# Profiles — apply predefined resource limits
lxc profile list
lxc profile show default
lxc profile assign web-server default

# Storage pools
lxc storage list
lxc storage info default

# Snapshot and restore
lxc snapshot web-server clean-install
lxc restore web-server clean-install

# Stop and delete
lxc stop web-server
lxc delete web-server

Apptainer — HPC & Scientific Containers

Apptainer (formerly Singularity) is the standard container runtime for HPC clusters and scientific computing. Unlike Docker/Podman, Apptainer containers run as the calling user (no daemon, no root escalation), making them safe for multi-user clusters and reproducible research environments. Pre-configured on ShaniOS — pair with the immutable host OS for a fully reproducible research stack where both the container and the host are verifiable artifacts.

# Pull an image from Docker Hub (converted to SIF format automatically)
apptainer pull docker://ubuntu:22.04
# Creates: ubuntu_22.04.sif

# Pull from a specific registry
apptainer pull docker://ghcr.io/owner/repo:tag

# Run a command inside a container
apptainer exec ubuntu_22.04.sif python3 script.py

# Run the container's default entrypoint
apptainer run ubuntu_22.04.sif

# Interactive shell in the container
apptainer shell ubuntu_22.04.sif

# Bind mount host directories (automatic: $HOME, /tmp, /proc)
apptainer exec --bind /data:/mnt/data ubuntu_22.04.sif ls /mnt/data

# Use GPU (NVIDIA) inside the container
apptainer exec --nv cuda_image.sif python3 gpu_script.py

# Build a custom SIF from a definition file
apptainer build myimage.sif myimage.def

# Build a sandbox (writable directory, for development)
apptainer build --sandbox myimage_sandbox/ myimage.def

# Inspect an image's metadata
apptainer inspect ubuntu_22.04.sif

# Cache location (default: ~/.apptainer/cache)
ls ~/.apptainer/cache/

# Clear cache
apptainer cache clean
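The build commands above read a definition file. A minimal myimage.def sketch — the package choices and runscript are illustrative:

```
# myimage.def — build with: apptainer build myimage.sif myimage.def
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y python3 python3-pip
    pip3 install numpy

%environment
    export LC_ALL=C

%runscript
    exec python3 "$@"
```

%post runs at build time inside the container, %environment sets variables for every run, and %runscript defines what apptainer run executes.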

For researchers: Submit reproducible environments to HPC clusters using Apptainer SIF images. Your collaborators can run the exact same image — same OS, same libraries, same tool versions — regardless of what cluster they use. Pair with ShaniOS's GPG-verified, immutable host for a full stack that is reproducible from kernel to container.

systemd-nspawn — Lightweight OS Containers

systemd-nspawn is a lightweight container runtime built into systemd. It runs a full OS tree in a namespace, managed as systemd machine units via machinectl. Container data lives in the @machines subvolume at /var/lib/machines.

# Bootstrap a Debian 12 container into /var/lib/machines/debian12
sudo debootstrap --include=systemd,dbus bookworm /var/lib/machines/debian12 http://deb.debian.org/debian

# Set a root password so you can log in at the machine's console
sudo systemd-nspawn -D /var/lib/machines/debian12 passwd

# Start it as a machine (boots fully with systemd)
sudo machinectl start debian12

# Or start interactively
sudo systemd-nspawn -D /var/lib/machines/debian12 --boot

# List running machines
machinectl list

# Open a shell in a running machine
machinectl shell debian12

# Enable auto-start at boot
machinectl enable debian12

# Stop / poweroff
machinectl stop debian12
machinectl poweroff debian12

# Transfer files
machinectl copy-to debian12 ~/file.txt /root/file.txt
machinectl copy-from debian12 /etc/hostname ~/hostname.txt
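Per-machine settings can be declared in a .nspawn file that systemd-nspawn picks up automatically when the machine starts. A minimal sketch — the network choice is an assumption, adjust to your setup (see systemd.nspawn(5) for the full option list):

```
# /etc/systemd/nspawn/debian12.nspawn
[Exec]
Boot=yes

[Network]
VirtualEthernet=yes
```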

Pods GUI (Podman GUI): For a graphical Podman front-end, install Pods from Flathub: flatpak install flathub com.github.marhkb.Pods. It lets you manage containers, images, volumes, pods, and networks visually — supporting log viewing, shell access, and compose stacks without the terminal.

Nix Package Manager

Nix is pre-installed and running on ShaniOS. The nix-daemon.socket is enabled at boot and all Nix data lives in the dedicated @nix Btrfs subvolume mounted at /nix. This subvolume is shared across both @blue and @green slots — any packages you install via Nix survive every system update and rollback without reinstallation. CoW is kept enabled on @nix so bees can deduplicate shared store paths across the two slots.

Nix is a purely functional, reproducible package manager. Each package is installed into its own unique path in /nix/store (named by a cryptographic hash of all its inputs), meaning multiple versions of the same tool can coexist without conflict, and installing or removing a package never breaks another.

Why Nix on an immutable OS? Flatpak covers GUI apps; Distrobox covers full mutable environments. Nix fills the gap for CLI tools, libraries, and language runtimes that need specific versions pinned or need to coexist — without root, without touching the read-only root filesystem.

First-Time Setup — Add a Channel

Nix is installed but has no channel configured by default. A channel is a URL pointing to a verified snapshot of the Nixpkgs repository — it determines which package versions are available, and pre-built binaries for those versions are served from the cache.nixos.org binary cache, so packages are downloaded rather than compiled locally.

Add the nixpkgs-unstable channel (rolling, latest packages) as your user, then fetch it:

# Add the nixpkgs-unstable channel (recommended for most users on ShaniOS)
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs

# Fetch the latest channel expressions
nix-channel --update

# Verify — should print: nixpkgs https://nixos.org/channels/nixpkgs-unstable
nix-channel --list

If you prefer a stable release channel (e.g. 25.05), substitute the URL:

# Stable channel (replace 25.05 with the current release)
nix-channel --add https://nixos.org/channels/nixpkgs-25.05 nixpkgs
nix-channel --update

Available Channels

  • nixpkgs-unstable — Rolling, latest Nixpkgs. Packages update as soon as Hydra CI passes. Recommended for desktop use.
  • nixos-25.05 (or current stable) — Stable release, conservative updates, security patches backported. Good if you want minimal churn.
  • nixos-unstable — NixOS channel; includes full NixOS modules. Slightly behind nixpkgs-unstable due to additional NixOS-specific testing.
  • nixos-25.05-small / nixos-unstable-small — Faster-moving small channels with fewer tested jobs; intended for servers, not desktops.

See current channel status at status.nixos.org and the full channel list at nixos.org/channels.

Managing Channels

# List subscribed channels
nix-channel --list

# Add an additional channel (e.g. home-manager)
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager

# Remove a channel
nix-channel --remove home-manager

# Update all channels (download latest expressions)
nix-channel --update

# Update a specific channel only
nix-channel --update nixpkgs

# List all channel generations (history of updates)
nix-channel --list-generations

# Roll back channels to the previous generation
nix-channel --rollback

# Roll back to a specific generation number
nix-channel --rollback 12

Channels are per-user. Running nix-channel --update as your regular user updates only your environment. The root channel (used by the daemon) is separate. Channel state is stored in ~/.nix-channels and symlinked via ~/.nix-defexpr/channels.

Installing and Managing Packages

# Search for packages (nix search needs the experimental nix-command/flakes features)
nix search nixpkgs ripgrep
nix-env -qaP | grep nodejs          # channel-based search, works out of the box

# Install a package into your default profile
nix-env -iA nixpkgs.ripgrep
nix-env -iA nixpkgs.nodejs_22
nix-env -iA nixpkgs.python312

# List installed packages
nix-env -q

# Upgrade a specific package
nix-env -uA nixpkgs.ripgrep

# Upgrade all installed packages to latest channel versions
nix-env -u '*'

# Uninstall a package
nix-env -e ripgrep

# Roll back the last nix-env operation (install/upgrade/remove)
nix-env --rollback

# List profile generations
nix-env --list-generations

# Switch to a specific profile generation
nix-env --switch-generation 42

Temporary Environments and One-Shot Commands

One of Nix's most useful features is running tools without permanently installing them:

# Drop into a temporary shell with specific packages (gone when you exit)
nix-shell -p python312 nodejs_22 gcc

# New-style equivalent (Nix flakes / nix CLI)
nix shell nixpkgs#python312 nixpkgs#nodejs_22

# Run a single command from a package without installing it
nix run nixpkgs#cowsay -- "Hello from Nix"
nix run nixpkgs#ffmpeg -- -i input.mp4 output.webm

# Start a per-project dev shell defined in a shell.nix file
nix-shell                           # reads shell.nix in current directory
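A minimal shell.nix for such a per-project dev shell might look like this — the package list is illustrative:

```nix
# shell.nix — entered automatically by running `nix-shell` in this directory
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  packages = [
    pkgs.python312
    pkgs.nodejs_22
    pkgs.gcc
  ];

  # Commands run on shell entry
  shellHook = ''
    echo "dev shell ready"
  '';
}
```

Commit the file to your project repository and every collaborator with Nix gets the same toolchain by running nix-shell.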

Updating Packages

# Step 1: fetch latest channel expressions
nix-channel --update

# Step 2: upgrade all installed packages to the versions now in the channel
nix-env -u '*'

# Or do both in one line
nix-channel --update && nix-env -u '*'

Garbage Collection

Nix never deletes old store paths automatically. Old generations and unreferenced paths accumulate in /nix/store. Run garbage collection periodically to reclaim disk space:

# Delete all old profile generations and collect unreachable store paths
nix-collect-garbage -d

# Keep only the most recent generations, then collect unreachable paths
nix-env --delete-generations +5     # keep the 5 newest generations
nix-collect-garbage

# Or delete everything older than 30 days in one step
nix-collect-garbage --delete-older-than 30d

# Just see how much space would be freed (dry run)
nix-collect-garbage --dry-run

# Check current store disk usage
du -sh /nix/store

Desktop Integration

GUI apps installed via Nix won't automatically appear in your desktop launcher unless you add Nix's share directory to $XDG_DATA_DIRS. Add this to your ~/.zshrc or ~/.bashrc:

export XDG_DATA_DIRS=$HOME/.nix-profile/share:$XDG_DATA_DIRS

Home Manager — Declarative User Environment

Home Manager lets you declare your entire user environment (packages, dotfiles, shell config, services) in a single Nix file and apply it atomically. It is the recommended way to manage your Nix setup long-term.

# Add the home-manager channel matching your nixpkgs channel
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
nix-channel --update

# Install home-manager (standalone)
export NIX_PATH=$HOME/.nix-defexpr/channels:/nix/var/nix/profiles/per-user/root/channels${NIX_PATH:+:$NIX_PATH}
nix-shell '<home-manager>' -A install

# After install — edit your configuration
micro ~/.config/home-manager/home.nix   # or nano, vim, etc.

# Apply your configuration
home-manager switch

# List home-manager generations
home-manager generations

# Roll back to the previous generation
home-manager rollback

A minimal home.nix example:

{ config, pkgs, ... }: {
  home.username = "alice";
  home.homeDirectory = "/home/alice";
  home.stateVersion = "25.05";

  home.packages = with pkgs; [
    ripgrep
    fd
    jq
    nodejs_22
    python312
  ];

  programs.zsh.enable = true;
  programs.git = {
    enable = true;
    userName  = "Alice";
    userEmail = "[email protected]";
  };
}

Nix Flakes (Experimental)

Flakes are an experimental but widely-used Nix feature that replaces channels with pinned, reproducible inputs declared in a flake.nix file. They are disabled by default but can be enabled by adding to /etc/nix/nix.conf (via the /etc OverlayFS on ShaniOS):

# Enable flakes and the new nix CLI (add to /etc/nix/nix.conf)
experimental-features = nix-command flakes

# Reload the daemon after editing
sudo systemctl restart nix-daemon

# With flakes enabled — use the new nix CLI
nix search nixpkgs#ripgrep
nix shell nixpkgs#ripgrep
nix run nixpkgs#hello
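With flakes enabled, a project can pin its inputs and expose a dev shell in a flake.nix. A minimal sketch — the package choices are illustrative; enter the shell with nix develop:

```nix
{
  description = "Example dev shell";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.ripgrep pkgs.nodejs_22 ];
      };
    };
}
```

The generated flake.lock records the exact nixpkgs revision, so the shell is bit-for-bit reproducible on any machine.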

Unlike Flatpak, Nix supports CLI tools, development environments, and packages that need specific pinned library versions — all without root access and without touching the immutable root. Combined with Flatpak for GUI apps and Distrobox for full Linux environments, Nix provides complete software coverage on ShaniOS.

Homebrew (brew)

Homebrew (brew) — Optional

Homebrew is the popular macOS package manager that also runs on Linux. It is not pre-installed on ShaniOS but can be installed in user-space to /home/linuxbrew/.linuxbrew — entirely outside the read-only root, making it fully compatible with ShaniOS's immutability.

When to use Homebrew vs Nix: If you already use Homebrew on macOS and want the same workflow on ShaniOS, or need a tool that's in Homebrew's formulae but not in Nixpkgs, Homebrew is a good fit. For most users, Nix is preferred as it is more powerful and better integrated with the system. Both can coexist.

Install Homebrew:

# Run the official install script (installs to /home/linuxbrew/.linuxbrew)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Add Homebrew to your PATH (add to ~/.zshrc or ~/.bashrc):

# Add to shell config (~/.zshrc for zsh, ~/.bashrc for bash)
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.zshrc

# Apply immediately in current session
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

# Verify installation
brew doctor
brew --version

Common brew commands:

# Search for a package
brew search htop

# Install a package
brew install htop

# List installed packages
brew list

# Update Homebrew and all formulae
brew update && brew upgrade

# Remove a package
brew uninstall htop

# Show info about a package
brew info git

# Uninstall Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"

ShaniOS compatibility notes:

  • Homebrew installs to /home/linuxbrew/.linuxbrew — completely outside the read-only root, fully persistent across updates
  • No sudo needed after the initial install (except to create the /home/linuxbrew directory)
  • Homebrew uses its own gcc/glibc copies when needed — does not depend on system libraries beyond the minimum
  • Homebrew does not survive OS rollbacks unless you create a bind mount for /home/linuxbrew — Nix's @nix subvolume does survive rollbacks by design
  • Casks (macOS .app installers) are not available on Linux — use Flatpak for GUI apps instead

Android (Waydroid)

Waydroid is pre-installed on both GNOME and KDE editions. It runs a full Android container via LXC on top of your Linux desktop — hardware-accelerated on Intel and AMD GPUs. waydroid-container.service is enabled at boot so the container is ready when you need it. All Android system images and user app data live in the dedicated @waydroid Btrfs subvolume, surviving every system update and rollback.

waydroid-helper is a ShaniOS-specific automation tool that handles first-run initialisation, Android image downloading, kernel module setup, and networking configuration automatically — so you don't have to follow the upstream manual setup process.

GPU Acceleration

  • Intel & AMD GPUs: Full hardware acceleration via virtio-gpu / drm render nodes. Android apps run at native speeds with GPU rendering.
  • NVIDIA GPUs: Software rendering (llvmpipe) is used. Android runs but GPU-accelerated apps and games will be slower.

ARM Translation

Waydroid ships ARM translation support via libhoudini (or libndk_translation) automatically configured by waydroid-helper. This means ARM-only Android APKs run on x86_64 hardware without needing a physical ARM device. Most apps from the Play Store (or F-Droid) are ARM-compiled — ARM translation is what makes them work on a standard x86_64 PC. Performance is good for most applications; GPU-heavy ARM-native games may have some overhead.

Initialisation

# Recommended: waydroid-helper automates the full init process
sudo waydroid-helper init

# Or manually, if you prefer:
sudo waydroid init

# Start the Waydroid session
sudo systemctl start waydroid-container
waydroid show-full-ui

# Check Waydroid status
waydroid status

Installing Android Apps

# Install an APK from host (sideloading)
waydroid app install /path/to/app.apk

# List installed Android apps
waydroid app list

# Launch a specific app
waydroid app launch com.example.app

# Open the full Android UI
waydroid show-full-ui

Clipboard & File Integration

python-pyclip is pre-installed for clipboard integration between the Android container and the Linux desktop — copy text in Android, paste it in a Linux app, and vice versa. Files can be shared via the ~/android_documents shared folder that Waydroid mounts automatically.

For Android Developers

Waydroid is a full, hardware-accelerated Android stack — not a slow Android Virtual Device (AVD) — so you can test your apps in a realistic environment without a physical device. android-tools (adb, fastboot) and android-udev are pre-installed for device debugging workflows. Connect to Waydroid via ADB:

# Get Waydroid IP (for ADB connection)
waydroid status | grep IP

# Connect ADB to Waydroid (use the IP reported above; 192.168.240.112 is the usual default)
adb connect 192.168.240.112:5555

# Now use standard adb commands against Waydroid
adb devices
adb logcat
adb shell

Stopping & Managing Waydroid

# Stop the Waydroid session
waydroid session stop

# Stop the container service
sudo systemctl stop waydroid-container

# Disable auto-start at boot (if you don't use Waydroid regularly)
sudo systemctl disable waydroid-container

# Reset Waydroid completely (wipes all Android data)
sudo waydroid init -f

The @waydroid subvolume at /var/lib/waydroid survives all system updates and rollbacks automatically. Android app data, user settings, and installed applications persist there regardless of which OS slot is booted.

Virtual Machines (QEMU/KVM)

ShaniOS fully supports hardware-accelerated virtual machines via QEMU/KVM. VM disk images are stored in the @libvirt Btrfs subvolume (mounted at /var/lib/libvirt) with CoW disabled (nodatacow) for reliable performance with qcow2 images. The subvolume survives all system updates and rollbacks.

Virtualisation is enabled automatically — all users are added to the kvm and libvirt groups at first boot. Verify KVM is available:

# Should return /dev/kvm if VT-x / AMD-V is enabled in BIOS
ls /dev/kvm

# Check KVM kernel modules
lsmod | grep kvm

GNOME Boxes — Simple GUI (Recommended for Most Users)

Clean, minimal VM manager. Ships its own QEMU/libvirt runtimes — no system daemon needed. Best for creating and running desktop VMs quickly.

flatpak install flathub org.gnome.Boxes

virt-manager — Full Management UI

Advanced VM manager with full control over networking, storage pools, CPU pinning, PCIe passthrough, and snapshots. Also ships its own QEMU/libvirt runtime as a Flatpak.

flatpak install flathub org.virt_manager.virt-manager

# Add the QEMU extension alongside for the full feature set
flatpak install flathub org.virt_manager.virt-manager.Extension.Qemu

Quickemu — One-Command VMs (no libvirt)

Quickemu wraps QEMU directly to create pre-configured VMs for popular OSes with a single command. No libvirt daemon. Install via Nix or Distrobox:

# Install via Nix
nix-env -iA nixpkgs.quickemu

# Create and start an Ubuntu VM
quickget ubuntu 22.04
quickemu --vm ubuntu-22.04.conf

# List available OS options
quickget --list

Windows VM — Recommended Setup

# Using quickemu (easiest)
quickget windows 11
quickemu --vm windows-11.conf

# Using GNOME Boxes: File → New → Download OS → Windows 11
# (Boxes handles VirtIO drivers automatically)

# For virt-manager: use the VirtIO disk and network drivers
# for best performance — download virtio-win.iso from Fedora

Why nodatacow on @libvirt? Btrfs CoW creates excessive metadata overhead when repeatedly overwriting VM disk images (which qcow2 does constantly). Disabling CoW on @libvirt eliminates fragmentation and dramatically improves VM I/O performance. Snapshots are handled by libvirt/qcow2 natively inside the Flatpak sandbox.

Networking

Web Hosting & Networking

ShaniOS lets you host websites and services directly from your desktop or laptop — even on home internet connections without a static IP. All tools listed here are installed for convenience but are not enabled or running by default.

Security First: These services are disabled by default. Only enable what you need, and always configure firewall rules appropriately. Never expose services publicly without understanding the security implications.

How Desktop Web Hosting Works

Desktop web hosting architecture (summary of the diagram):

  • Caddy web server on your desktop/laptop serves content on localhost:8080; TLS certificates and ACME state in /data/varlib/caddy persist across updates.
  • firewalld guards the host with public/trusted zones — the Tailscale interface sits in trusted, cloudflared makes outbound connections only, so neither tunnel path needs any inbound ports; state lives in /data/varlib/firewalld.
  • Public path — Cloudflared tunnel: an encrypted, outbound-only tunnel to Cloudflare's edge; no port forwarding or static IP required, your real IP stays hidden, and visitors reach yoursite.example.com over automatic HTTPS with Cloudflare DDoS protection.
  • Private path — Tailscale VPN: a WireGuard mesh that connects peer-to-peer where possible with relay fallback; a subnet router can share your LAN with the mesh, an optional exit node routes all device traffic via the host, and MagicDNS resolves hostnames; state in /data/varlib/tailscale; reachable only by your authorised devices.

Two secure remote access paths: 🌐 public HTTPS via Cloudflared (outbound-only tunnel, no static IP) and 🔐 private WireGuard mesh via Tailscale (subnet router, exit node, MagicDNS) — all TLS and tunnel state persists in @data across every update

Networking Tools Reference

Caddy Web Server

Modern, zero-config web server with automatic HTTPS via Let's Encrypt. Ideal for hosting static sites, reverse proxying containers, dashboards, and APIs directly from your desktop. Not active by default — start it when you need it.

Start & enable:

sudo systemctl enable --now caddy

# Reload config without downtime
sudo systemctl reload caddy

# View logs
sudo journalctl -u caddy -f

# Validate Caddyfile syntax before reloading
caddy validate --config /etc/caddy/Caddyfile

Static site with automatic HTTPS:

# /etc/caddy/Caddyfile

mysite.example.com {
    root * /var/www/html
    file_server
    encode gzip zstd
    log {
        output file /var/log/caddy/access.log
    }
}

# Local dev site (no certificate needed)
:8080 {
    root * /home/user/my-website
    file_server browse
}

Reverse proxy — forward to a container or app:

app.example.com {
    reverse_proxy localhost:3000
}

# Multiple subdomains → different backend services
api.example.com     { reverse_proxy localhost:8000 }
dashboard.example.com { reverse_proxy localhost:9090 }

# Load balance across replicas
lb.example.com {
    reverse_proxy localhost:3001 localhost:3002 localhost:3003 {
        lb_policy round_robin
        health_uri /health
        health_interval 10s
    }
}

Basic auth, headers, rate limiting:

# Generate bcrypt password hash
caddy hash-password --plaintext "yourpassword"

# Caddyfile with auth + security headers
secure.example.com {
    basicauth {
        admin $2a$14$YOUR_HASH_HERE
    }
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options nosniff
        X-Frame-Options DENY
        -Server
    }
    reverse_proxy localhost:8080
}

# Rate limit (requires caddy-ratelimit module, install via Flatpak/container)
# For production rate limiting, deploy Caddy inside a container with plugins

Serve from a Podman container:

# Run a web app container and proxy it with Caddy
podman run -d --name webapp -p 127.0.0.1:3000:3000 myapp:latest

# Caddyfile entry
myapp.example.com {
    reverse_proxy 127.0.0.1:3000
}

sudo systemctl reload caddy

TLS certificates and ACME state are stored in /data/varlib/caddy and persist across all system updates and rollbacks.

Alternative servers via Flatpak/container: For more complex setups, Nginx and Apache HTTPD are available as container images. Run them with Podman and proxy them through Caddy, or use them standalone on a non-standard port. Example: podman run -d -p 127.0.0.1:8081:80 -v /home/user/html:/usr/share/nginx/html:Z nginx:alpine

Cloudflared — Zero-Trust Tunnels

Creates an encrypted outbound-only tunnel to Cloudflare's edge, exposing local services publicly without opening inbound firewall ports or having a static IP. Disabled by default.

# Authenticate with your Cloudflare account
cloudflared tunnel login

# Create a named tunnel
cloudflared tunnel create my-tunnel

# Configure ingress — edit ~/.cloudflared/config.yml
# tunnel: <YOUR-TUNNEL-UUID>
# credentials-file: /home/user/.cloudflared/<UUID>.json
# ingress:
#   - hostname: mysite.example.com
#     service: http://localhost:8080
#   - hostname: api.example.com
#     service: http://localhost:3000
#   - service: http_status:404

# Route DNS to this tunnel (Cloudflare manages DNS record automatically)
cloudflared tunnel route dns my-tunnel mysite.example.com

# Test tunnel locally before enabling as service
cloudflared tunnel run my-tunnel

# Install and enable as a persistent systemd service
sudo cloudflared service install
sudo systemctl enable --now cloudflared

# Check tunnel status
cloudflared tunnel list
cloudflared tunnel info my-tunnel
sudo journalctl -u cloudflared -f

Tunnel credentials are stored at /data/varlib/cloudflared and survive all updates. No inbound firewall changes are needed — Cloudflared establishes only outbound HTTPS connections.

Tailscale — Private WireGuard Mesh

Builds a private peer-to-peer encrypted network between all your devices using WireGuard. All traffic is authenticated; devices are only reachable by other Tailscale nodes. Inactive until you sign in.

# Enable the daemon
sudo systemctl enable --now tailscaled

# Authenticate (opens browser)
sudo tailscale up

# Bring up with specific options
sudo tailscale up --accept-routes --ssh   # accept subnet routes; enable Tailscale SSH
sudo tailscale up --advertise-exit-node   # act as exit node (route all traffic)
sudo tailscale up --advertise-routes=192.168.1.0/24  # expose your LAN to the tailnet

# Status and peer list
tailscale status
tailscale netcheck           # diagnose NAT/relay connectivity

# Check your Tailscale IP
tailscale ip -4

# Ping a peer by its Tailscale hostname
tailscale ping myhostname

# Open UDP 41641 for best performance (optional — works without)
sudo firewall-cmd --add-port=41641/udp --permanent
sudo firewall-cmd --reload

# Built-in SSH (no OpenSSH needed on target)
tailscale ssh user@myhostname

Tailscale state is bind-mounted from /data/varlib/tailscale and persists across all updates — you stay authenticated and connected after every system update.

OpenSSH — Remote Shell & File Transfer

SSH server is not enabled by default. Enable it only when needed; prefer access via Tailscale rather than exposing port 22 publicly.

# Enable SSH server
sudo systemctl enable --now sshd

# Harden /etc/ssh/sshd_config (key settings)
sudo nano /etc/ssh/sshd_config
# PasswordAuthentication no      ← key-only auth
# PermitRootLogin no
# Port 2222                       ← non-default port (optional)
# AllowUsers youruser             ← restrict to specific users
# MaxAuthTries 3

sudo systemctl restart sshd

# Allow SSH in firewall (LAN only is safest)
sudo firewall-cmd --add-service=ssh --permanent
sudo firewall-cmd --reload

# Generate an ED25519 key pair on the CLIENT
ssh-keygen -t ed25519 -C "mydesktop"

# Copy public key to server
ssh-copy-id [email protected]
# or manually append ~/.ssh/id_ed25519.pub to server's ~/.ssh/authorized_keys

# Connect
ssh [email protected]
ssh -p 2222 [email protected]   # non-default port

# Tunnel a local port via SSH (port forward)
ssh -L 8080:localhost:3000 [email protected]   # access server's :3000 at localhost:8080

# Transfer files
scp localfile.txt [email protected]:/home/user/
rsync -avz /local/dir/ [email protected]:/remote/dir/

# SFTP interactive session
sftp [email protected]

firewalld — Dynamic Firewall

Active by default with restrictive rules. Manages nftables under the hood using a zone-based model. Pre-configured zones for KDE Connect and Waydroid are included.

# Status and current rules
sudo firewall-cmd --state
sudo firewall-cmd --list-all                      # active zone
sudo firewall-cmd --list-all-zones                # all zones

# Open a service (named)
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --add-service=ssh --permanent
sudo firewall-cmd --reload

# Open a specific port
sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --add-port=41641/udp --permanent  # Tailscale
sudo firewall-cmd --reload

# Remove a rule
sudo firewall-cmd --remove-service=http --permanent
sudo firewall-cmd --remove-port=8080/tcp --permanent
sudo firewall-cmd --reload

# Lock down to a source IP (allow SSH only from LAN)
sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" service name="ssh" accept' --permanent
sudo firewall-cmd --reload

# Block an IP
sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="1.2.3.4" reject' --permanent
sudo firewall-cmd --reload

# GUI: firewall-config is pre-installed
sudo firewall-config

Fail2ban — Brute-Force Protection

Monitors logs and temporarily bans IPs that fail authentication repeatedly. Integrates with firewalld automatically. Not enabled by default — enable when running any public-facing service.

# Enable and start
sudo systemctl enable --now fail2ban

# Check overall status
sudo fail2ban-client status

# Check SSH jail
sudo fail2ban-client status sshd

# Manually ban/unban an IP
sudo fail2ban-client set sshd banip 1.2.3.4
sudo fail2ban-client set sshd unbanip 1.2.3.4

# View banned IPs from firewall perspective
sudo firewall-cmd --direct --get-all-rules

# Watch the fail2ban log live
sudo journalctl -u fail2ban -f

Custom jail for Caddy (HTTP brute-force):

# /etc/fail2ban/jail.d/caddy.conf
# [caddy]
# enabled  = true
# port     = http,https
# filter   = caddy
# logpath  = /var/log/caddy/access.log
# maxretry = 10
# bantime  = 3600
# findtime = 600

sudo systemctl restart fail2ban
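The jail above references a `caddy` filter, which Fail2ban does not ship by default. A minimal filter sketch, assuming Caddy's default JSON access-log format (the `remote_ip` field name applies to Caddy v2.5+; adjust to your log format):

```ini
# /etc/fail2ban/filter.d/caddy.conf
[Definition]
# Match requests rejected with 401/403 in Caddy's JSON access log
failregex = ^.*"remote_ip":"<HOST>".*?"status":(?:401|403).*$
ignoreregex =
```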

NFS — Network File System

Native Linux file sharing at near-local disk speeds. Best for Linux-to-Linux sharing on a trusted LAN. State bind-mounted from /data/varlib/nfs.

# ── SERVER ──────────────────────────────────
sudo systemctl enable --now nfs-server

# Define exports in /etc/exports
# /home/user/shared   192.168.1.0/24(rw,sync,no_subtree_check)
# /data/media         192.168.1.0/24(ro,sync,no_subtree_check)
echo "/home/user/shared  192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -arv

# Allow through firewall
sudo firewall-cmd --add-service=nfs --permanent
sudo firewall-cmd --add-service=rpcbind --permanent
sudo firewall-cmd --add-service=mountd --permanent
sudo firewall-cmd --reload

# Check active exports
sudo exportfs -v
showmount -e localhost

# ── CLIENT ──────────────────────────────────
# Temporary mount
sudo mount -t nfs 192.168.1.100:/home/user/shared /mnt/remote

# Automount at boot — add to /etc/fstab
# 192.168.1.100:/home/user/shared  /mnt/remote  nfs  defaults,_netdev,nfsvers=4  0 0

# Check mount
mount | grep nfs
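A plain `defaults,_netdev` fstab entry can stall boot if the server is unreachable; systemd's automount option mounts the share lazily on first access instead. A sketch using the same hypothetical server:

```text
# /etc/fstab — mounted on first access, unmounted after 10 min idle
192.168.1.100:/home/user/shared  /mnt/remote  nfs  _netdev,nfsvers=4,noauto,x-systemd.automount,x-systemd.idle-timeout=600  0 0
```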

Samba — SMB/CIFS (Windows / macOS / Linux)

SMB/CIFS shares visible to Windows, macOS Finder, and other Linux machines. Samba state bind-mounted from /data/varlib/samba.

# Enable Samba services
sudo systemctl enable --now smb nmb

# Edit /etc/samba/smb.conf — add a share
sudo nano /etc/samba/smb.conf

# Example share (append to smb.conf):
# [SharedDocs]
#    comment = My Documents
#    path = /home/user/Documents
#    browseable = yes
#    read only = no
#    valid users = youruser
#    create mask = 0664
#    directory mask = 0775

# Set Samba password (separate from system password)
sudo smbpasswd -a youruser

# Apply config changes
sudo systemctl restart smb nmb

# Test config syntax
testparm

# Allow through firewall
sudo firewall-cmd --add-service=samba --permanent
sudo firewall-cmd --reload

# List active Samba shares
smbclient -L localhost -U youruser

# ── MOUNTING REMOTE SHARES ──────────────────
# From another Linux machine
sudo mount -t cifs //192.168.1.100/SharedDocs /mnt/samba \
  -o username=youruser,uid=$(id -u),gid=$(id -g),vers=3.0

# Persistent (add credentials file for security)
# echo "username=youruser" > ~/.smbcredentials
# echo "password=yourpass" >> ~/.smbcredentials
# chmod 600 ~/.smbcredentials
# Then in /etc/fstab:
# //192.168.1.100/SharedDocs /mnt/samba cifs credentials=/home/user/.smbcredentials,uid=1000,gid=1000,_netdev 0 0

SSHFS — Mount Remote Directories over SSH

Mount any directory from an SSH-accessible machine as a local folder. Requires only an SSH server on the remote — no special server software needed.

# Mount a remote directory
sshfs user@192.168.1.100:/home/user/projects ~/mnt/remote-projects

# Mount with specific options
sshfs user@192.168.1.100:/data ~/mnt/server \
  -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# Unmount
fusermount -u ~/mnt/remote-projects

# Auto-mount at login (add to /etc/fstab)
# user@192.168.1.100:/home/user/projects /home/user/mnt/remote fuse.sshfs defaults,_netdev,reconnect,uid=1000,gid=1000,IdentityFile=/home/user/.ssh/id_ed25519 0 0

dnsmasq — Local DNS, DHCP & Split DNS

Lightweight DNS forwarder and DHCP server for homelab setups. Use it for custom .home domains, split DNS (different resolution for LAN vs internet), or local ad-blocking. Not active by default.

sudo systemctl enable --now dnsmasq

# /etc/dnsmasq.conf — key examples:

# Custom local hostnames
# address=/nas.home/192.168.1.10
# address=/printer.home/192.168.1.20
# address=/desktop.home/192.168.1.50

# Upstream DNS servers
# server=1.1.1.1
# server=8.8.8.8

# Local DHCP range (if dnsmasq acts as DHCP server)
# dhcp-range=192.168.1.100,192.168.1.200,12h
# dhcp-option=3,192.168.1.1  ← default gateway

# Block trackers/ads by routing to 0.0.0.0
# address=/ads.doubleclick.net/0.0.0.0

# Restart after changes
sudo systemctl restart dnsmasq

# Point NetworkManager to use dnsmasq
# Add /etc/NetworkManager/conf.d/dns.conf:
# [main]
# dns=dnsmasq
sudo systemctl restart NetworkManager

Avahi — Zero-Config mDNS/DNS-SD (Bonjour)

Avahi is active by default — your machine is immediately reachable as hostname.local on the LAN. Used by CUPS (printers), KDE Connect, DLNA, and SSH discovery.

# Discover services on your network
avahi-browse -a                          # all services
avahi-browse -at                         # one-shot (no live view)
avahi-browse _http._tcp                  # HTTP services only
avahi-browse _ssh._tcp                   # SSH servers
avahi-browse _smb._tcp                   # Samba shares
avahi-browse _ipp._tcp                   # Printers

# Look up a .local hostname
avahi-resolve --name myhostname.local
avahi-resolve --address 192.168.1.50

# Publish a custom service
# Create /etc/avahi/services/caddy.service:
# <?xml version="1.0" standalone='no'?>
# <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
# <service-group>
#   <name replace-wildcards="yes">My Web Server on %h</name>
#   <service><type>_http._tcp</type><port>8080</port></service>
# </service-group>

sudo systemctl status avahi-daemon

KDE Connect & GSConnect — Phone Integration

KDE Connect (KDE edition) and GSConnect (GNOME extension, pre-installed on GNOME edition) connect your Android phone to your Linux desktop over the local network. Features: notification sync, clipboard sharing, file transfer, remote control, media control, SMS from desktop, and running commands on the PC from your phone. The necessary firewall ports are pre-opened on ShaniOS — no manual firewall configuration needed.

# ── SETUP ───────────────────────────────────────────────────────────────
# 1. Install KDE Connect on your Android phone from Google Play or F-Droid
# 2. Ensure phone and PC are on the same WiFi network
# 3. Open KDE Connect on KDE, or click the GSConnect icon in GNOME top bar
# 4. The phone appears automatically — click "Pair" on both devices

# ── KDE CONNECT CLI (kdeconnect-cli) ─────────────────────────────────────
# List paired devices
kdeconnect-cli --list-devices
kdeconnect-cli --list-available   # discoverable devices on LAN

# Send a file to your phone (--name matches by device name; use -d for the device ID)
kdeconnect-cli --name "Phone Name" --share ~/Documents/file.pdf

# Ping your phone
kdeconnect-cli --name "Phone Name" --ping

# Send a text/notification to the phone
kdeconnect-cli --name "Phone Name" --ping-msg "Hey, check this"

# Lock the phone screen
kdeconnect-cli --name "Phone Name" --lock

# Ring the phone (locate a misplaced phone)
kdeconnect-cli --name "Phone Name" --ring

# ── FIREWALL ─────────────────────────────────────────────────────────────
# Ports are pre-opened on ShaniOS (1714-1764 TCP+UDP in public zone)
# To verify:
sudo firewall-cmd --list-all | grep 1714

# ── GSCONNECT (GNOME) ────────────────────────────────────────────────────
# GSConnect is a GNOME Shell extension — it appears as an icon in the top bar
# after pairing. No CLI; use the extension menu or the phone app directly.
# To enable GSConnect if not already active:
gnome-extensions enable gsconnect@andyholmes.github.io

Rygel — DLNA/UPnP Media Server

Streams your music, videos, and photos to smart TVs, game consoles (PS4/PS5/Xbox), Kodi, and any DLNA renderer on the LAN. GNOME edition only. Not active by default.

# Start Rygel as a user service
systemctl --user enable --now rygel

# Configure — ~/.config/rygel.conf
# [MediaExport]
# enabled=true
# uris=/home/user/Music;/home/user/Videos;/home/user/Pictures

# Check discovery
avahi-browse -a | grep -i rygel

# View logs
journalctl --user -u rygel -f

For the KDE edition, or whenever you need more than basic DLNA, install Jellyfin (flatpak install flathub com.jellyfin.JellyfinServer): a full self-hosted media server with web UI, transcoding, and multi-user support. Run it as a container for more control: podman run -d --name jellyfin -p 8096:8096 -v /home/user/media:/media:Z jellyfin/jellyfin

NetworkManager — Connection & VPN Management

NetworkManager is pre-installed and active by default. All network connections — wired, Wi-Fi, mobile broadband, and VPN — are managed through it. Use nmcli for scripting, nmtui for a terminal UI, or the full nm-connection-editor GUI.

# --- Connection management ---
nmcli device status                        # show all interfaces
nmcli connection show                      # list all saved connections
nmcli connection up "MyWifi"               # activate a saved connection
nmcli device wifi list                     # scan for available Wi-Fi
nmcli device wifi connect "SSID" password "pass"

# --- Hotspot (share internet over Wi-Fi) ---
nmcli device wifi hotspot ssid "MyHotspot" password "secret"

# --- Static IP (example) ---
nmcli connection modify "eth0" ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 1.1.1.1

# --- Terminal UI (interactive) ---
nmtui

VPN Protocols — All Pre-installed, Configured via GUI

All major VPN protocols are pre-installed as NetworkManager plugins. Connect to any VPN by opening Settings → Network → VPN → + and choosing your protocol — no manual package installation needed.

# --- OpenVPN (most common, .ovpn file) ---
# Import from GUI: Settings → Network → VPN → + → Import from file → select .ovpn
nmcli connection import type openvpn file /path/to/client.ovpn
nmcli connection up "MyOpenVPN"

# --- WireGuard (fast, modern) ---
# Import from GUI or from a .conf file:
nmcli connection import type wireguard file /etc/wireguard/wg0.conf
# Or use the NM GUI to paste your private key, peer public key, and endpoint.
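For reference, a typical wg0.conf for the import above looks like this; every key, address, and the endpoint are placeholders you get from your VPN provider or server:

```ini
# /etc/wireguard/wg0.conf — placeholders throughout
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0        # route all traffic through the tunnel
PersistentKeepalive = 25      # keep NAT mappings alive
```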

# --- strongSwan / IKEv2 (corporate, certificate-based) ---
# Configure via nm-connection-editor: IPsec/IKEv2, enter server, your certificate.
nmcli connection up "CorpVPN"

# --- Cisco AnyConnect / OpenConnect ---
# GUI: Settings → Network → VPN → + → Cisco AnyConnect Compatible
# CLI:
openconnect --protocol=anyconnect vpn.example.com

# --- Fortinet SSL VPN ---
openfortivpn vpn.example.com:443 --username=you

# --- L2TP/IPsec, PPTP, SSTP, VPNC ---
# All configured through Settings → Network → VPN → +, select protocol.

# --- List and toggle VPN connections ---
nmcli connection show --active
nmcli connection up   "MyVPN"
nmcli connection down "MyVPN"

Remote Desktop — FreeRDP, kRDP, kRFB, gnome-remote-desktop

ShaniOS includes both client and server tools for remote desktop access.

# --- FreeRDP — connect TO a Windows/RDP server (pre-installed) ---
xfreerdp /v:192.168.1.100 /u:username /p:password /dynamic-resolution /gfx /rfx

# Full-screen RDP session:
xfreerdp /v:server.example.com /u:me /f /multimon

# RDP over SSH tunnel (recommended for security):
ssh -L 3389:192.168.1.100:3389 jumphost
xfreerdp /v:localhost /u:username

# --- kRDP (KDE RDP server — pre-installed on KDE edition) ---
# Enable: Settings → System → Remote Desktop → Enable Remote Desktop
# Speaks standard RDP; accessible from Windows "Remote Desktop Connection"
systemctl --user enable --now plasma-remotedesktop

# --- kRFB / krfb (KDE VNC server — pre-installed on KDE edition) ---
systemctl --user enable --now krfb
# Connect from any VNC viewer: vncviewer hostname::5900  (double colon = raw port)

# --- gnome-remote-desktop (GNOME edition — pre-installed) ---
# Enable: Settings → Sharing → Remote Desktop
# Supports both RDP and VNC protocols
grdctl rdp enable
grdctl rdp set-credentials username password
grdctl status

# --- SSH with X forwarding (run GUI apps remotely) ---
ssh -X user@host
ssh -Y user@host  # trusted (faster, less secure)

ModemManager — Mobile Broadband (3G/4G/5G)

ModemManager is pre-installed and integrates with NetworkManager for USB/PCIe mobile broadband modems and SIM cards. Manages LTE/5G, SMS, and signal monitoring.

# --- ModemManager status ---
mmcli -L                          # list detected modems
mmcli -m 0                        # detailed modem info (signal, tech, SIM)
mmcli -m 0 --signal-get           # signal strength

# --- Connect via NetworkManager ---
# NM auto-detects the modem; configure APN in Settings → Network → Mobile Broadband
nmcli connection show             # the modem connection will appear here

# --- SMS (if supported by modem) ---
mmcli -m 0 --messaging-list-sms
mmcli -s 0                        # show SMS number 0
mmcli -m 0 --messaging-create-sms="text='Hello',number='+1234567890'"
mmcli -s <sms-id> --send          # send it (ID printed by the create command)

Server Software via Podman Containers

Only Caddy is pre-installed as a server. All other server software runs as rootless Podman containers — this is the recommended approach on ShaniOS. Container data lives in the persistent @containers Btrfs subvolume and survives all system updates. Use --restart unless-stopped for services you want to auto-start after reboot, and optionally generate a systemd unit with podman generate systemd for tighter integration.

Rootless tip: Always bind container ports to 127.0.0.1 (e.g. -p 127.0.0.1:3000:3000) and let Caddy proxy them. This way the service is only reachable via Caddy's HTTPS — not directly from the network. The :Z volume flag requests SELinux relabelling; it is a harmless no-op on AppArmor-based systems such as ShaniOS, and the examples keep it for portability.
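In a Caddyfile, that proxy pattern reads like this (hostname and port are examples):

```text
# /etc/caddy/Caddyfile
app.example.com {
    # Caddy obtains and renews the TLS certificate automatically;
    # the backend itself listens only on loopback
    reverse_proxy 127.0.0.1:3000
}
```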

Nginx — high-performance web / reverse proxy:

podman run -d \
  --name nginx \
  -p 127.0.0.1:8081:80 \
  -v /home/user/www:/usr/share/nginx/html:ro,Z \
  -v /home/user/nginx.conf:/etc/nginx/nginx.conf:ro,Z \
  --restart unless-stopped \
  nginx:alpine

# Simple nginx.conf for serving static files
# /home/user/nginx.conf:
# server {
#   listen 80;
#   root /usr/share/nginx/html;
#   index index.html;
#   try_files $uri $uri/ =404;
# }

# Caddy proxies it publicly:
# mysite.example.com { reverse_proxy localhost:8081 }

Apache HTTPD — classic web server with .htaccess support:

podman run -d \
  --name apache \
  -p 127.0.0.1:8082:80 \
  -v /home/user/www:/usr/local/apache2/htdocs:ro,Z \
  --restart unless-stopped \
  httpd:alpine

# With custom config
podman run -d \
  --name apache \
  -p 127.0.0.1:8082:80 \
  -v /home/user/www:/usr/local/apache2/htdocs:Z \
  -v /home/user/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro,Z \
  --restart unless-stopped \
  httpd:latest

Nginx + PHP-FPM — PHP apps (WordPress, Laravel, custom):

# ~/php-stack/compose.yml
# services:
#   nginx:
#     image: nginx:alpine
#     ports: ["127.0.0.1:8080:80"]
#     volumes:
#       - ./www:/var/www/html:ro,Z
#       - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro,Z
#     depends_on: [php]
#
#   php:
#     image: php:8.3-fpm-alpine
#     volumes:
#       - ./www:/var/www/html:Z
#     depends_on: [db]
#
#   db:
#     image: mariadb:11
#     environment:
#       MYSQL_ROOT_PASSWORD: rootpass
#       MYSQL_DATABASE: myapp
#       MYSQL_USER: appuser
#       MYSQL_PASSWORD: apppass
#     volumes: [db_data:/var/lib/mysql]
#
# volumes: {db_data: {}}

mkdir -p ~/php-stack/www
podman-compose -f ~/php-stack/compose.yml up -d

MariaDB / MySQL — relational database:

podman run -d \
  --name mariadb \
  -p 127.0.0.1:3306:3306 \
  -e MYSQL_ROOT_PASSWORD=strongpassword \
  -e MYSQL_DATABASE=mydb \
  -e MYSQL_USER=myuser \
  -e MYSQL_PASSWORD=myuserpass \
  -v mariadb_data:/var/lib/mysql \
  --restart unless-stopped \
  mariadb:11

# Connect from host
podman exec -it mariadb mariadb -u myuser -p mydb
# or use mysql client if installed via Distrobox

PostgreSQL — advanced relational database:

podman run -d \
  --name postgres \
  -p 127.0.0.1:5432:5432 \
  -e POSTGRES_USER=myuser \
  -e POSTGRES_PASSWORD=strongpassword \
  -e POSTGRES_DB=mydb \
  -v postgres_data:/var/lib/postgresql/data \
  --restart unless-stopped \
  postgres:16-alpine

# Connect
podman exec -it postgres psql -U myuser -d mydb

# pgAdmin web UI (optional)
podman run -d \
  --name pgadmin \
  -p 127.0.0.1:5050:80 \
  -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
  -e PGADMIN_DEFAULT_PASSWORD=admin \
  --restart unless-stopped \
  dpage/pgadmin4

Redis — in-memory cache & message broker:

podman run -d \
  --name redis \
  -p 127.0.0.1:6379:6379 \
  -v redis_data:/data \
  --restart unless-stopped \
  redis:7-alpine redis-server --appendonly yes

# Connect and test
podman exec -it redis redis-cli ping
podman exec -it redis redis-cli set mykey "hello"
podman exec -it redis redis-cli get mykey

# Redis with password
podman run -d \
  --name redis \
  -p 127.0.0.1:6379:6379 \
  -v redis_data:/data \
  --restart unless-stopped \
  redis:7-alpine redis-server --appendonly yes --requirepass strongpassword

MongoDB — document database:

podman run -d \
  --name mongodb \
  -p 127.0.0.1:27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=strongpassword \
  -v mongodb_data:/data/db \
  --restart unless-stopped \
  mongo:7

# Connect
podman exec -it mongodb mongosh -u admin -p strongpassword --authenticationDatabase admin

SQLite via Litestream — replicated SQLite:

# Litestream continuously replicates a SQLite database to S3/Backblaze/local
podman run -d \
  --name litestream \
  -v /home/user/app/db:/data:Z \
  -v /home/user/litestream.yml:/etc/litestream.yml:ro,Z \
  --restart unless-stopped \
  litestream/litestream replicate

# litestream.yml example:
# dbs:
#   - path: /data/app.db
#     replicas:
#       - url: s3://mybucket/app.db

Jellyfin — self-hosted media server (movies, TV, music):

podman run -d \
  --name jellyfin \
  -p 127.0.0.1:8096:8096 \
  -v /home/user/jellyfin/config:/config:Z \
  -v /home/user/jellyfin/cache:/cache:Z \
  -v /home/user/media:/media:ro,Z \
  --restart unless-stopped \
  jellyfin/jellyfin

# With hardware transcoding (Intel VA-API)
podman run -d \
  --name jellyfin \
  -p 127.0.0.1:8096:8096 \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  -v /home/user/jellyfin/config:/config:Z \
  -v /home/user/media:/media:ro,Z \
  --restart unless-stopped \
  jellyfin/jellyfin

# Proxy: jellyfin.example.com { reverse_proxy localhost:8096 }

Navidrome — self-hosted music streaming (Subsonic-compatible):

podman run -d \
  --name navidrome \
  -p 127.0.0.1:4533:4533 \
  -v /home/user/navidrome/data:/data:Z \
  -v /home/user/Music:/music:ro,Z \
  -e ND_SCANSCHEDULE="@every 1h" \
  -e ND_LOGLEVEL=info \
  --restart unless-stopped \
  deluan/navidrome:latest

# Compatible with DSub, Ultrasonic, Symfonium apps on Android/iOS
# Proxy: music.example.com { reverse_proxy localhost:4533 }

Nextcloud — self-hosted Google Drive/Dropbox alternative:

# ~/nextcloud/compose.yml
# services:
#   db:
#     image: mariadb:11
#     environment:
#       MYSQL_ROOT_PASSWORD: rootpass
#       MYSQL_DATABASE: nextcloud
#       MYSQL_USER: nc
#       MYSQL_PASSWORD: ncpass
#     volumes: [db_data:/var/lib/mysql]
#     restart: unless-stopped
#
#   redis:
#     image: redis:7-alpine
#     restart: unless-stopped
#
#   nextcloud:
#     image: nextcloud:29
#     ports: ["127.0.0.1:8888:80"]
#     environment:
#       MYSQL_HOST: db
#       MYSQL_DATABASE: nextcloud
#       MYSQL_USER: nc
#       MYSQL_PASSWORD: ncpass
#       REDIS_HOST: redis
#       NEXTCLOUD_ADMIN_USER: admin
#       NEXTCLOUD_ADMIN_PASSWORD: changeme
#     volumes: [nc_data:/var/www/html]
#     depends_on: [db, redis]
#     restart: unless-stopped
#
# volumes: {db_data: {}, nc_data: {}}

mkdir -p ~/nextcloud
podman-compose -f ~/nextcloud/compose.yml up -d
# Proxy: cloud.example.com { reverse_proxy localhost:8888 }

Syncthing — peer-to-peer file sync (no central server):

podman run -d \
  --name syncthing \
  -p 127.0.0.1:8384:8384 \
  -p 22000:22000/tcp \
  -p 22000:22000/udp \
  -p 21027:21027/udp \
  -v /home/user/syncthing/config:/var/syncthing/config:Z \
  -v /home/user/sync:/var/syncthing/Sync:Z \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  --restart unless-stopped \
  syncthing/syncthing:latest

# Open firewall ports for Syncthing
sudo firewall-cmd --add-port=22000/tcp --permanent
sudo firewall-cmd --add-port=22000/udp --permanent
sudo firewall-cmd --add-port=21027/udp --permanent
sudo firewall-cmd --reload

# Web UI: http://localhost:8384
# Proxy (optional): sync.example.com { reverse_proxy localhost:8384 }

Filebrowser — web-based file manager:

podman run -d \
  --name filebrowser \
  -p 127.0.0.1:8085:80 \
  -v /home/user:/srv:Z \
  -v /home/user/filebrowser.db:/database.db:Z \
  --restart unless-stopped \
  filebrowser/filebrowser:s6

# Access at http://localhost:8085 (admin/admin — change immediately)
# files.example.com { reverse_proxy localhost:8085 }

Gitea — self-hosted Git with web UI, issues, CI:

podman run -d \
  --name gitea \
  -p 127.0.0.1:3000:3000 \
  -p 127.0.0.1:2222:22 \
  -v /home/user/gitea:/data:Z \
  -e USER_UID=$(id -u) \
  -e USER_GID=$(id -g) \
  --restart unless-stopped \
  gitea/gitea:latest

# git.example.com { reverse_proxy localhost:3000 }

# SSH push via non-default port
# In ~/.ssh/config on clients:
# Host git.example.com
#   Port 2222

Forgejo — community fork of Gitea:

podman run -d \
  --name forgejo \
  -p 127.0.0.1:3000:3000 \
  -p 127.0.0.1:2222:22 \
  -v /home/user/forgejo:/data:Z \
  -e USER_UID=$(id -u) \
  -e USER_GID=$(id -g) \
  --restart unless-stopped \
  codeberg.org/forgejo/forgejo:latest

Vaultwarden — self-hosted Bitwarden-compatible password manager:

podman run -d \
  --name vaultwarden \
  -p 127.0.0.1:8180:80 \
  -v /home/user/vaultwarden:/data:Z \
  -e WEBSOCKET_ENABLED=true \
  -e ADMIN_TOKEN=$(openssl rand -base64 48) \
  --restart unless-stopped \
  vaultwarden/server:latest

# HTTPS is required in practice: browsers expose the Web Crypto API only in
# secure contexts, so the web vault will not work over plain HTTP
# vault.example.com {
#     reverse_proxy localhost:8180
# }
# Recent Vaultwarden serves WebSocket notifications on the main port; a
# separate /notifications/hub proxy to :3012 is only needed for old releases.

Prometheus + Grafana — metrics collection and dashboards:

# Create a minimal Prometheus config first
mkdir -p ~/monitoring
cat > ~/monitoring/prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'node'
    static_configs:
      # node-exporter below uses host networking, so scrape it through the
      # host alias Podman provides inside containers
      - targets: ['host.containers.internal:9100']
EOF

# Node Exporter — expose system metrics (CPU, RAM, disk, network)
podman run -d \
  --name node-exporter \
  --network host \
  -v /proc:/host/proc:ro,rslave \
  -v /sys:/host/sys:ro,rslave \
  -v /:/rootfs:ro,rslave \
  --restart unless-stopped \
  prom/node-exporter \
  --path.procfs=/host/proc --path.sysfs=/host/sys

# Prometheus
podman run -d \
  --name prometheus \
  -p 127.0.0.1:9090:9090 \
  -v ~/monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro,Z \
  -v prometheus_data:/prometheus \
  --restart unless-stopped \
  prom/prometheus

# Grafana
podman run -d \
  --name grafana \
  -p 127.0.0.1:3001:3000 \
  -v grafana_data:/var/lib/grafana \
  -e GF_SECURITY_ADMIN_PASSWORD=changeme \
  --restart unless-stopped \
  grafana/grafana

# grafana.example.com { reverse_proxy localhost:3001 }

Uptime Kuma — self-hosted service uptime monitoring:

podman run -d \
  --name uptime-kuma \
  -p 127.0.0.1:3002:3001 \
  -v /home/user/uptime-kuma:/app/data:Z \
  --restart unless-stopped \
  louislam/uptime-kuma:latest

# Host port 3002 avoids a clash with the Grafana example above (host 3001)
# status.example.com { reverse_proxy localhost:3002 }

Netdata — real-time system performance monitoring:

podman run -d \
  --name netdata \
  -p 127.0.0.1:19999:19999 \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  -v netdata_config:/etc/netdata \
  -v netdata_lib:/var/lib/netdata \
  -v netdata_cache:/var/cache/netdata \
  -v /etc/passwd:/host/etc/passwd:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --restart unless-stopped \
  netdata/netdata

# metrics.example.com { reverse_proxy localhost:19999 }

Home Assistant — home automation platform:

podman run -d \
  --name homeassistant \
  --network host \
  -v /home/user/homeassistant/config:/config:Z \
  -e TZ=Europe/London \
  --restart unless-stopped \
  ghcr.io/home-assistant/home-assistant:stable

# Access at http://localhost:8123 for initial setup
# hass.example.com { reverse_proxy localhost:8123 }

# Note: --network host is recommended so HA can discover devices on LAN
# Open firewall if using host networking
sudo firewall-cmd --add-port=8123/tcp --permanent
sudo firewall-cmd --reload

Ntfy — push notifications server:

podman run -d \
  --name ntfy \
  -p 127.0.0.1:8090:80 \
  -v /home/user/ntfy/cache:/var/cache/ntfy:Z \
  -v /home/user/ntfy/etc:/etc/ntfy:Z \
  --restart unless-stopped \
  binwiederhier/ntfy serve

# ntfy.example.com { reverse_proxy localhost:8090 }

# Send a notification from the command line
curl -d "Your backup completed successfully" ntfy.example.com/my-alerts

# Subscribe in the ntfy Android/iOS app or browser

Gotify — self-hosted push notification server:

podman run -d \
  --name gotify \
  -p 127.0.0.1:8070:80 \
  -v /home/user/gotify/data:/app/data:Z \
  --restart unless-stopped \
  gotify/server

# push.example.com { reverse_proxy localhost:8070 }

Woodpecker CI — lightweight CI/CD for Gitea/Forgejo:

# ~/woodpecker/compose.yml
# services:
#   woodpecker-server:
#     image: woodpeckerci/woodpecker-server:latest
#     ports: ["127.0.0.1:8000:8000"]
#     environment:
#       WOODPECKER_OPEN: "true"
#       WOODPECKER_HOST: https://ci.example.com
#       WOODPECKER_GITEA: "true"
#       WOODPECKER_GITEA_URL: https://git.example.com
#       WOODPECKER_GITEA_CLIENT: <oauth2-client-id>
#       WOODPECKER_GITEA_SECRET: <oauth2-client-secret>
#       WOODPECKER_AGENT_SECRET: <random-secret>
#     volumes: [woodpecker_data:/var/lib/woodpecker]
#     restart: unless-stopped
#
#   woodpecker-agent:
#     image: woodpeckerci/woodpecker-agent:latest
#     environment:
#       WOODPECKER_SERVER: woodpecker-server:9000
#       WOODPECKER_AGENT_SECRET: <same-random-secret>
#     volumes: [/run/user/1000/podman/podman.sock:/var/run/docker.sock]
#     depends_on: [woodpecker-server]
#     restart: unless-stopped

podman-compose -f ~/woodpecker/compose.yml up -d

Pi-hole — network-wide DNS ad blocker:

podman run -d \
  --name pihole \
  -p 127.0.0.1:8083:80 \
  -p 53:53/tcp \
  -p 53:53/udp \
  -e TZ=Europe/London \
  -e WEBPASSWORD=changeme \
  -v /home/user/pihole/etc-pihole:/etc/pihole:Z \
  -v /home/user/pihole/etc-dnsmasq.d:/etc/dnsmasq.d:Z \
  --restart unless-stopped \
  pihole/pihole:latest

# Rootless Podman cannot bind port 53 (a privileged port) by default; allow it:
#   sudo sysctl net.ipv4.ip_unprivileged_port_start=53
# (persist via /etc/sysctl.d/) or run the container with sudo instead

# Open DNS port in firewall (if using as LAN DNS)
sudo firewall-cmd --add-service=dns --permanent
sudo firewall-cmd --reload

# Point your router's DNS to this machine's IP
# Web UI: http://localhost:8083/admin

AdGuard Home — DNS-based ad/tracker blocker with DoH/DoT:

podman run -d \
  --name adguardhome \
  -p 53:53/tcp -p 53:53/udp \
  -p 127.0.0.1:3000:3000 \
  -p 853:853/tcp \
  -v /home/user/adguard/work:/opt/adguardhome/work:Z \
  -v /home/user/adguard/conf:/opt/adguardhome/conf:Z \
  --restart unless-stopped \
  adguard/adguardhome

# Initial setup wizard at http://localhost:3000
# Supports DNS-over-HTTPS and DNS-over-TLS out of the box
# Rootless Podman needs net.ipv4.ip_unprivileged_port_start lowered (or sudo)
# to bind the privileged ports 53 and 853

Nginx Proxy Manager — GUI-based reverse proxy with Let's Encrypt:

# ~/npm/compose.yml
# services:
#   app:
#     image: jc21/nginx-proxy-manager:latest
#     ports:
#       - "80:80"
#       - "443:443"
#       - "127.0.0.1:81:81"
#     volumes:
#       - /home/user/npm/data:/data:Z
#       - /home/user/npm/letsencrypt:/etc/letsencrypt:Z
#     restart: unless-stopped

podman-compose -f ~/npm/compose.yml up -d
# Web UI at http://localhost:81 (admin@example.com / changeme)
# Ports 80/443 are privileged: lower net.ipv4.ip_unprivileged_port_start
# for rootless Podman, or publish higher host ports and proxy to them

Stirling PDF — self-hosted PDF manipulation tool:

podman run -d \
  --name stirling-pdf \
  -p 127.0.0.1:8080:8080 \
  -v /home/user/stirling/trainingData:/usr/share/tessdata:Z \
  -v /home/user/stirling/extraConfigs:/configs:Z \
  --restart unless-stopped \
  frooodle/s-pdf:latest

# pdf.example.com { reverse_proxy localhost:8080 }

Paperless-ngx — document management / OCR:

# ~/paperless/compose.yml
# services:
#   broker:
#     image: redis:7-alpine
#     restart: unless-stopped
#
#   db:
#     image: postgres:15-alpine
#     environment:
#       POSTGRES_DB: paperless
#       POSTGRES_USER: paperless
#       POSTGRES_PASSWORD: paperless
#     volumes: [pgdata:/var/lib/postgresql/data]
#     restart: unless-stopped
#
#   webserver:
#     image: ghcr.io/paperless-ngx/paperless-ngx:latest
#     ports: ["127.0.0.1:8000:8000"]
#     environment:
#       PAPERLESS_REDIS: redis://broker:6379
#       PAPERLESS_DBHOST: db
#       PAPERLESS_OCR_LANGUAGE: eng
#       PAPERLESS_TIME_ZONE: Europe/London
#     volumes:
#       - /home/user/paperless/data:/usr/src/paperless/data:Z
#       - /home/user/paperless/media:/usr/src/paperless/media:Z
#       - /home/user/Documents/inbox:/usr/src/paperless/consume:Z
#     depends_on: [broker, db]
#     restart: unless-stopped

podman-compose -f ~/paperless/compose.yml up -d
# docs.example.com { reverse_proxy localhost:8000 }

Miniflux — minimal RSS / Atom feed reader:

# Requires a reachable PostgreSQL; "localhost" below is the container's own
# loopback, so run with --network host or put both containers in one pod
podman run -d \
  --name miniflux \
  -p 127.0.0.1:8080:8080 \
  -e DATABASE_URL="postgres://miniflux:password@localhost/miniflux?sslmode=disable" \
  -e RUN_MIGRATIONS=1 \
  -e CREATE_ADMIN=1 \
  -e ADMIN_USERNAME=admin \
  -e ADMIN_PASSWORD=changeme \
  --restart unless-stopped \
  miniflux/miniflux:latest

# rss.example.com { reverse_proxy localhost:8080 }

LinkWarden — collaborative bookmark manager:

# Linkwarden requires PostgreSQL ("localhost" is the container's own loopback)
podman run -d \
  --name linkwarden \
  -p 127.0.0.1:3000:3000 \
  -e DATABASE_URL="postgresql://linkwarden:password@localhost:5432/linkwarden" \
  -e NEXTAUTH_SECRET=$(openssl rand -base64 32) \
  -e NEXTAUTH_URL=https://links.example.com \
  -v /home/user/linkwarden/data:/data/data:Z \
  --restart unless-stopped \
  ghcr.io/linkwarden/linkwarden:latest

Make containers start automatically at login (systemd user service):

# Generate a systemd unit for a running container
podman generate systemd --name jellyfin --files --new

# Move the generated unit to your user systemd dir
mv container-jellyfin.service ~/.config/systemd/user/

# Enable it (starts at login, without needing root)
systemctl --user daemon-reload
systemctl --user enable --now container-jellyfin.service

# Enable lingering so services start at boot even without a login session
loginctl enable-linger $USER

# Check status
systemctl --user status container-jellyfin.service
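On current Podman releases, Quadlet supersedes podman generate systemd: drop a .container unit into ~/.config/containers/systemd/ and systemd generates the service itself. A sketch for the Jellyfin example above (paths as used earlier):

```ini
# ~/.config/containers/systemd/jellyfin.container
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin
PublishPort=127.0.0.1:8096:8096
Volume=/home/user/jellyfin/config:/config:Z
Volume=/home/user/media:/media:ro,Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the generated unit appears as jellyfin.service and can be started with `systemctl --user start jellyfin`.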

rclone — Cloud Sync (40+ Providers)

Sync, copy, and mount cloud storage (S3, Backblaze B2, Google Drive, Nextcloud, SFTP, WebDAV, and 40+ more). Config stored in /data/varlib/rclone.

# Interactive setup wizard
rclone config

# Sync local → cloud (mirror; deletes files removed locally)
rclone sync /home/user/data remote:my-bucket --progress

# Copy local → cloud (never deletes remote)
rclone copy /var/www/html remote:backups --progress

# Mount cloud storage as local folder
mkdir -p ~/mnt/cloud
rclone mount remote:my-bucket ~/mnt/cloud --daemon --vfs-cache-mode writes

# Unmount
fusermount -u ~/mnt/cloud

# List remote files
rclone ls remote:my-bucket
rclone lsd remote:              # list buckets/containers

# Check sync differences without transferring
rclone check /local/dir remote:bucket

# Delete files older than 30 days on remote
rclone delete remote:old-logs --min-age 30d

# Filter by pattern
rclone sync /home/user/docs remote:docs --include "*.pdf" --exclude "*.tmp"
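For anything beyond a couple of patterns, a filter file is easier to maintain than chained flags. Rules are evaluated top-down and the first match wins (file name and patterns here are examples):

```text
# ~/rclone-filters.txt — use with: rclone sync SRC DST --filter-from ~/rclone-filters.txt
- *.tmp
- .cache/**
+ *.pdf
+ docs/**
- **
```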

restic — Encrypted Deduplicated Backups

Content-addressed, encrypted, deduplicated backups to any storage rclone can reach. Config stored in /data/varlib/restic.

# Initialize repository (local or rclone backend)
restic init --repo /mnt/backup/myrepo
restic -r rclone:remote:restic-repo init

# Create a snapshot
restic -r /mnt/backup/myrepo backup /home/user /var/www/html /data/varlib

# Use RESTIC_REPOSITORY and RESTIC_PASSWORD env vars to avoid repetition
export RESTIC_REPOSITORY=/mnt/backup/myrepo
export RESTIC_PASSWORD=mysecretpassword
restic backup /home/user

# List snapshots
restic snapshots

# Restore latest snapshot
restic restore latest --target /tmp/restored

# Restore a specific snapshot (use ID from `restic snapshots`)
restic restore 1a2b3c4d --target /tmp/restored

# Mount snapshots as a FUSE filesystem (browse by time)
restic mount /mnt/restic-browse

# Check repository integrity
restic check

# Prune old snapshots (keep 7 daily, 4 weekly, 12 monthly)
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

Automated daily backup via systemd timer:

# Create ~/.config/systemd/user/restic-backup.service:
# [Unit]
# Description=restic backup
# [Service]
# Type=oneshot
# Environment="RESTIC_REPOSITORY=/mnt/backup/myrepo"
# Environment="RESTIC_PASSWORD=mysecretpassword"
# ExecStart=restic backup /home/%u
# ExecStartPost=restic forget --keep-daily 7 --keep-weekly 4 --prune

# Create ~/.config/systemd/user/restic-backup.timer:
# [Unit]
# Description=Daily restic backup
# [Timer]
# OnCalendar=daily
# Persistent=true
# [Install]
# WantedBy=timers.target

systemctl --user daemon-reload
systemctl --user enable --now restic-backup.timer

Network Diagnostics & Troubleshooting Commands

All tools listed here are pre-installed. Packages: iproute2 (ip/ss), iputils (ping/arping), inetutils (traceroute/ftp/telnet), net-tools (ifconfig/netstat/arp/route), bind (dig/host/nslookup), iw, wireless_tools (iwconfig/iwlist/iwspy), wpa_supplicant (wpa_cli/wpa_passphrase), nmap, tcpdump, mtr, iperf3, iftop, nethogs, bandwhich (bandwidth by process and remote address), ethtool, socat, ngrep, whois, net-snmp (snmpwalk/snmpget), openbsd-netcat, curl, wget, aria2 (multi-protocol: HTTP/FTP/BitTorrent/Metalink), zsync, lsof, openldap (ldapsearch), openssl.

ping / arping — reachability & MAC detection:

# ICMP ping (from iputils)
ping -c 4 shani.dev
ping -c 4 -i 0.2 192.168.1.1        # fast ping, 200ms interval
ping -s 1400 -c 4 192.168.1.1       # large packet (MTU test)
ping -6 -c 4 ipv6.google.com        # IPv6 reachability

# ARP ping — verify a host is alive at Layer 2 (LAN only)
sudo arping -I eth0 -c 3 192.168.1.1
sudo arping -I eth0 192.168.1.50    # find MAC address of an IP

traceroute / mtr — path tracing:

# Classic traceroute (from inetutils)
traceroute shani.dev
traceroute -n shani.dev             # no DNS reverse lookup (faster)
traceroute -p 53 8.8.8.8            # UDP to a specific port (inetutils default method)
# TCP traceroute (-T) requires the standalone traceroute package, not
# inetutils; for TCP path tracing through firewalls use mtr --tcp

# MTR — live traceroute + per-hop packet loss/latency (mtr package)
mtr shani.dev                       # interactive TUI
mtr --report --report-cycles 20 shani.dev  # non-interactive report
mtr -n --report shani.dev           # no DNS (pure IPs)
mtr --tcp --port 443 shani.dev      # TCP mode through firewalls

DNS resolution — dig / host / nslookup / resolvectl:

# dig (from bind package) — full DNS query
dig shani.dev                       # A record (default)
dig shani.dev AAAA                  # IPv6 record
dig shani.dev MX                    # mail exchange
dig shani.dev NS                    # nameservers
dig shani.dev TXT                   # TXT records (SPF, DKIM, etc.)
dig +short shani.dev A              # clean output, IP only
dig +trace shani.dev                # full recursive resolution trace
dig @1.1.1.1 shani.dev              # query specific resolver (Cloudflare)
dig @8.8.8.8 shani.dev              # query Google DNS directly
dig -x 1.2.3.4                      # reverse DNS (PTR lookup)

# host — simpler alternative
host shani.dev
host shani.dev 1.1.1.1              # via specific server

# nslookup — interactive or one-shot
nslookup shani.dev
nslookup shani.dev 8.8.8.8

# resolvectl — query systemd-resolved (what the system actually uses)
resolvectl status                   # per-interface DNS config
resolvectl query shani.dev          # resolved via system stub
resolvectl flush-caches             # flush DNS cache
cat /etc/resolv.conf                # current stub resolver config

ip — interfaces, addresses, routes, neighbours:

# Interfaces
ip link show                        # all interfaces (state, MAC)
ip -brief link show                 # compact: name + state + MAC
ip link show eth0                   # specific interface
ip link set eth0 up                 # bring interface up
ip link set eth0 down               # bring interface down

# Addresses
ip addr show                        # all IPs on all interfaces
ip -brief addr show                 # compact summary
ip addr show eth0                   # specific interface
ip addr add 192.168.1.50/24 dev eth0   # add temporary IP
ip addr del 192.168.1.50/24 dev eth0   # remove IP

# Routes
ip route show                       # routing table
ip route get 8.8.8.8                # which interface/gateway for a destination
ip route add 10.0.0.0/8 via 192.168.1.1  # add a route
ip route del 10.0.0.0/8             # remove a route

# Neighbours (ARP/NDP cache)
ip neigh show                       # ARP table (LAN MAC→IP mapping)
ip neigh flush all                  # clear ARP cache

# Statistics
ip -s link show eth0                # TX/RX byte and packet counters
ip -s -s link show eth0             # extended error counters

ss — socket statistics (replaces netstat):

ss -tlnp               # TCP listening sockets with process name/PID
ss -ulnp               # UDP listening sockets
ss -tlnp | grep :80    # who is listening on port 80
ss -tnp                # all established TCP connections
ss -s                  # summary: total sockets by state
ss -4 -tnp             # IPv4 only
ss -6 -tnp             # IPv6 only
ss -xnp                # Unix domain sockets
ss -tlnp src :443      # filter by local port
ss -tnp dst 192.168.1.100   # connections to a specific host

netstat / ifconfig / arp / route — legacy net-tools:

# netstat (net-tools — older but widely familiar)
netstat -tlnp          # listening TCP sockets (same as ss -tlnp)
netstat -s             # protocol statistics (TCP/UDP/IP error counters)
netstat -r             # routing table (same as ip route show)
netstat -i             # interface statistics

# ifconfig (legacy — use ip addr for new scripts)
ifconfig               # all interfaces
ifconfig eth0          # specific interface

# arp (legacy — use ip neigh for new scripts)
arp -n                 # ARP table without DNS resolution
arp -d 192.168.1.50    # remove an ARP entry

# route (legacy — use ip route for new scripts)
route -n               # routing table (numeric)

ethtool — NIC hardware & driver info:

# Link speed and duplex
ethtool eth0                        # speed, duplex, auto-negotiation, link detected
ethtool -i eth0                     # driver name, version, firmware
ethtool -S eth0                     # NIC statistics (errors, drops, missed)
ethtool -k eth0                     # offload features (TSO, GSO, GRO, LRO)
ethtool -a eth0                     # pause parameters (flow control)

# Wake-on-LAN
sudo ethtool -s eth0 wol g          # enable WoL on magic packet
ethtool eth0 | grep "Wake-on"       # check WoL status

# Test link (loop-back self-test, if NIC supports it)
sudo ethtool -t eth0

nmap — port scanning & service discovery:

# Basic scans
nmap localhost                      # top 1000 TCP ports on localhost
nmap 192.168.1.100                  # remote host
nmap -p 22,80,443,8080 localhost    # specific ports

# Discovery
nmap -sn 192.168.1.0/24             # ping sweep: find live hosts (no port scan)
nmap -sn 192.168.1.0/24 -oG - | grep Up   # grep-able list of live hosts

# Service & version detection
nmap -sV localhost                  # service version fingerprinting
nmap -sV -p 22 192.168.1.100        # SSH version on remote
nmap -A localhost                   # OS detect + version + scripts + traceroute

# Script-based probes
nmap --script=http-title 192.168.1.0/24    # grab HTTP page titles on LAN
nmap --script=ssh-hostkey 192.168.1.100    # get SSH host key fingerprint
nmap --script=ssl-cert 192.168.1.100 -p 443  # read TLS cert details

# Fast / quiet scans
nmap -F 192.168.1.100               # fast: top 100 ports only
nmap -T4 -F 192.168.1.0/24         # fast sweep of top 100 ports on subnet

# UDP scanning (requires root)
sudo nmap -sU -p 53,123,161 192.168.1.1  # DNS, NTP, SNMP

tcpdump — live packet capture:

# Capture on any interface
sudo tcpdump -i any -n              # all traffic, no DNS resolution
sudo tcpdump -i eth0 -n             # specific interface

# Filter by port
sudo tcpdump -i any port 80 -n -A  # HTTP traffic, ASCII body
sudo tcpdump -i any port 443 -n    # HTTPS (encrypted, shows handshake)
sudo tcpdump -i any port 53 -n     # DNS queries

# Filter by host
sudo tcpdump -i any host 192.168.1.50 -n
sudo tcpdump -i any src 192.168.1.50 -n   # only FROM that host
sudo tcpdump -i any dst 192.168.1.50 -n   # only TO that host

# Combine filters
sudo tcpdump -i any host 192.168.1.50 and port 80 -n

# Save to file for Wireshark analysis
sudo tcpdump -i any -w /tmp/capture.pcap
sudo tcpdump -i any -w /tmp/capture.pcap -C 10  # rotate at 10 MB

# Read saved capture
tcpdump -r /tmp/capture.pcap -n

# Show raw bytes
sudo tcpdump -i any -X port 80 -n | head -60

ngrep — grep over network traffic:

# Match HTTP method lines
sudo ngrep -d any -q "GET|POST|PUT|DELETE" port 80

# Find passwords or tokens in plaintext traffic (internal audit use)
sudo ngrep -d any -qi "password|token|authorization"

# Match DNS queries
sudo ngrep -d any "" udp port 53

# Match on a specific host
sudo ngrep -d any -q "User-Agent" host 192.168.1.50 and port 80

# Quiet mode (only matched packets, no headers)
sudo ngrep -q "error|fail" port 8080

socat — universal relay & TCP/UDP Swiss Army knife:

# Test TCP connectivity to a port
socat - TCP:192.168.1.100:3000
socat - TCP:192.168.1.100:8080,crnl   # with line-ending conversion

# Test UDP connectivity
socat - UDP:192.168.1.100:5000

# Simple TCP listener (simulate a server)
socat TCP-LISTEN:9999,fork EXEC:cat   # echo server on port 9999

# Port forward: local 8080 → remote 192.168.1.100:80
socat TCP-LISTEN:8080,fork TCP:192.168.1.100:80

# Test TLS/SSL connection (alternative to openssl s_client)
socat - OPENSSL:mysite.example.com:443,verify=0

# Bidirectional pipe between two ports
socat TCP-LISTEN:4444,fork TCP:192.168.1.100:22  # proxy SSH

nc (netcat) — port testing & simple transfer:

# Test if a port is open
nc -zv 192.168.1.100 22             # TCP: verbose, exit immediately
nc -zvw 3 192.168.1.100 443         # 3-second timeout
nc -zvu 192.168.1.100 53            # UDP port test

# Scan a port range
nc -zv 192.168.1.100 20-25

# Simple file transfer (no encryption — LAN only)
# Receiver:
nc -l 9999 > received_file.bin
# Sender:
nc 192.168.1.100 9999 < file_to_send.bin

# Banner grabbing (see what a service says on connect)
nc 192.168.1.100 25                 # SMTP banner
nc 192.168.1.100 21                 # FTP banner
nc 192.168.1.100 22                 # SSH version string

# Simple HTTP request
printf "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n" | nc example.com 80

curl & wget — HTTP/HTTPS testing:

# curl
curl -v https://mysite.example.com               # verbose: headers + body
curl -I https://mysite.example.com               # HEAD: response headers only
curl -s -o /dev/null -w "%{http_code}\n" https://mysite.example.com  # status code only
curl -s -o /dev/null -w "HTTP %{http_code} | %{time_total}s | %{size_download} bytes\n" https://mysite.example.com
curl -k https://localhost:8443                   # ignore self-signed cert
curl -H "Host: mysite.example.com" http://127.0.0.1  # test vhost routing
curl -L https://example.com/redirect             # follow redirects
curl -u user:pass https://api.example.com/data   # HTTP basic auth
curl -X POST -H "Content-Type: application/json" \
     -d '{"key":"value"}' https://api.example.com/endpoint  # POST JSON
curl --resolve mysite.example.com:443:127.0.0.1 https://mysite.example.com  # test before DNS propagates
curl -w "@/dev/stdin" -o /dev/null -s https://mysite.example.com <<'EOF'
     time_namelookup:  %{time_namelookup}s\n
     time_connect:     %{time_connect}s\n
     time_starttransfer: %{time_starttransfer}s\n
     time_total:       %{time_total}s\n
EOF

# wget
wget -q --server-response -O /dev/null https://mysite.example.com 2>&1 | head -20
wget --spider https://mysite.example.com         # check if URL exists (no download)
wget -r -l 1 --spider https://mysite.example.com 2>&1 | grep broken  # check links

openssl — TLS/SSL certificate inspection:

# Inspect a server's TLS certificate
openssl s_client -connect mysite.example.com:443 </dev/null 2>&1 | openssl x509 -noout -text

# Check expiry date only
openssl s_client -connect mysite.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -dates

# Test specific TLS version
openssl s_client -connect mysite.example.com:443 -tls1_2 </dev/null
openssl s_client -connect mysite.example.com:443 -tls1_3 </dev/null

# Inspect a local certificate file
openssl x509 -in /etc/caddy/certs/mysite.crt -noout -text
openssl x509 -in /etc/caddy/certs/mysite.crt -noout -dates -subject -issuer

# Test SMTP with STARTTLS
openssl s_client -connect mail.example.com:587 -starttls smtp

# Check if a certificate matches its private key
openssl x509 -noout -modulus -in cert.pem | md5sum
openssl rsa  -noout -modulus -in key.pem  | md5sum
# Both sums must match
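
The modulus comparison above only works for RSA keys. A key-type-agnostic sketch — hashing the public key extracted from each side (file names are placeholders):

```shell
# Extract and hash the public key from the certificate and from the
# private key; the two hashes must match for any key type
openssl x509 -in cert.pem -noout -pubkey | sha256sum
openssl pkey -in key.pem -pubout | sha256sum
```

This form also covers ECDSA and Ed25519 keys.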

whois — domain & IP registration info:

whois shani.dev                     # domain registration info
whois 8.8.8.8                       # IP WHOIS (ASN, ISP, abuse contact)
whois -h whois.arin.net 8.8.8.8    # query ARIN directly
whois AS15169                       # ASN info (Google's AS)

iperf3 — bandwidth & throughput testing:

# Run server on the remote/target machine
iperf3 -s                           # default port 5201
iperf3 -s -p 9999                   # custom port

# TCP throughput test (from client)
iperf3 -c 192.168.1.100             # default 10-second TCP test
iperf3 -c 192.168.1.100 -t 30       # 30-second test
iperf3 -c 192.168.1.100 -P 4        # 4 parallel streams (saturate link)
iperf3 -c 192.168.1.100 -R          # reverse: server sends to client

# UDP test (for latency-sensitive or wireless links)
iperf3 -c 192.168.1.100 -u -b 100M  # UDP at 100 Mbps
iperf3 -c 192.168.1.100 -u -b 1G    # UDP at 1 Gbps (gigabit test)

# JSON output for scripting
iperf3 -c 192.168.1.100 -J | jq '.end.sum_received.bits_per_second'

nethogs & iftop — per-process & per-connection bandwidth:

# nethogs — which process is using bandwidth (like top for network)
sudo nethogs                        # all interfaces
sudo nethogs eth0                   # specific interface
sudo nethogs -d 1                   # refresh every 1 second
# Inside nethogs: 'm' cycles units (KB/s, MB/s, GB/s); 'q' quits

# iftop — per-connection bandwidth usage (like top for flows)
sudo iftop                          # all interfaces, interactive
sudo iftop -i eth0                  # specific interface
sudo iftop -n                       # no DNS resolution (faster)
sudo iftop -P                       # show port numbers
sudo iftop -B                       # show in bytes instead of bits
# Inside iftop: 'n' toggles DNS; 'p' toggles ports; 'q' quits

ethtool — NIC tuning & physical identification:

ethtool -c eth0                     # interrupt coalescing settings
sudo ethtool -s eth0 speed 1000 duplex full autoneg on  # force speed/duplex

# Find which physical port a NIC is on (LED blink)
sudo ethtool -p eth0 5              # blink NIC LED for 5 seconds

net-snmp — SNMP queries to network devices:

# Query a router/switch/AP via SNMP v2c
snmpwalk -v2c -c public 192.168.1.1                           # walk entire MIB
snmpget -v2c -c public 192.168.1.1 sysDescr.0                # system description
snmpget -v2c -c public 192.168.1.1 ifInOctets.1              # bytes in on interface 1
snmpwalk -v2c -c public 192.168.1.1 ifTable                  # all interfaces

# SNMP v3 (authenticated + encrypted)
snmpwalk -v3 -u admin -l authPriv -a SHA -A "authpass" \
  -x AES -X "privpass" 192.168.1.1 sysUpTime

# Translate OID numbers to names
snmptranslate .1.3.6.1.2.1.1.1.0
snmptranslate -On sysDescr.0        # OID number for a named OID

# SNMP trap listener (receive traps from devices)
sudo snmptrapd -f -Lo -c /etc/snmp/snmptrapd.conf

inetutils — ftp, telnet, rsh for legacy protocol testing:

# FTP (for testing FTP servers — use sftp/rsync for actual transfers)
ftp 192.168.1.100
ftp -n 192.168.1.100 <<EOF
quote USER anonymous
quote PASS anonymous@example.com
ls
quit
EOF

# Telnet (for testing TCP services by hand — NOT for production access)
telnet 192.168.1.100 25    # test SMTP handshake
telnet 192.168.1.100 110   # test POP3
telnet 192.168.1.100 80    # type HTTP manually:
                            #   GET / HTTP/1.0
                            #   Host: 192.168.1.100
                            #   (press Enter twice)

# inetutils also provides: rlogin, rsh, rcp (for legacy systems)

lsof — open files, sockets, and processes:

# Network-related lsof usage
lsof -i                             # all network connections
lsof -i TCP                         # TCP only
lsof -i UDP                         # UDP only
lsof -i :80                         # what's using port 80
lsof -i :80-443                     # port range
lsof -i @192.168.1.100              # connections to/from a host
lsof -i TCP:443 -sTCP:LISTEN        # who is LISTENING on 443
lsof -i -n -P                       # all connections, no DNS, numeric ports
lsof -a -p $(pgrep caddy) -i        # network files opened by caddy (-a ANDs conditions)
lsof -a -u caddy -i                 # all connections by a specific user

iw & wireless_tools — WiFi scanning & diagnostics:

# iw — modern nl80211 wireless tool
iw dev                              # list wireless interfaces
iw dev wlan0 info                   # interface details (freq, channel, SSID)
iw dev wlan0 link                   # current association (signal, bitrate)
iw dev wlan0 station dump           # connected station info (AP or peers)
sudo iw dev wlan0 scan              # scan for nearby networks (requires root)
sudo iw dev wlan0 scan | grep -E "SSID|signal|freq"  # summarised scan
iw phy                              # physical device capabilities
iw phy phy0 info                    # supported bands, channels, features
iw reg get                          # current regulatory domain (country)
sudo iw reg set GB                  # set regulatory domain

# wireless_tools — legacy wext interface (still useful for older drivers)
iwconfig                            # show all wireless interfaces
iwconfig wlan0                      # specific interface: SSID, rate, signal
iwlist wlan0 scan                   # scan for networks (older method)
iwlist wlan0 scan | grep -E "ESSID|Quality|Frequency"
iwlist wlan0 rate                   # supported bit rates
iwlist wlan0 channel                # available channels
iwspy wlan0                         # per-MAC signal tracking

wpa_cli & wpa_passphrase — WPA supplicant management:

# wpa_cli — control wpa_supplicant (NetworkManager usually manages this)
sudo wpa_cli status                 # connection state
sudo wpa_cli scan                   # trigger scan
sudo wpa_cli scan_results           # view scan results
sudo wpa_cli list_networks          # configured networks
sudo wpa_cli disconnect             # disconnect
sudo wpa_cli reconnect              # reconnect

# wpa_passphrase — generate wpa_supplicant config blocks
wpa_passphrase "MySSID" "MyPassword"
# Output can be pasted into /etc/wpa_supplicant/wpa_supplicant.conf

zsync — delta download for large files:

# zsync downloads only the changed parts of a file using a .zsync metafile
# Used internally by shani-deploy for efficient OS image updates

# Download a file using zsync (only transfers changed blocks)
zsync https://example.com/largefile.iso.zsync

# Resume an interrupted zsync download
zsync -i partial_file.iso https://example.com/largefile.iso.zsync

# Specify output filename
zsync -o /tmp/output.iso https://example.com/file.iso.zsync

ldapsearch & openldap — LDAP directory queries:

# Query an LDAP directory (openldap package)
ldapsearch -x -H ldap://192.168.1.100 -b "dc=example,dc=com"

# Authenticated bind
ldapsearch -x -H ldap://192.168.1.100 \
  -D "cn=admin,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(objectClass=person)"

# Search for a specific user
ldapsearch -x -H ldap://192.168.1.100 \
  -b "dc=example,dc=com" "(uid=jsmith)"

# LDAPS (TLS)
ldapsearch -x -H ldaps://192.168.1.100 -b "dc=example,dc=com"

# Check if LDAP port is open first
nc -zv 192.168.1.100 389           # LDAP
nc -zv 192.168.1.100 636           # LDAPS

Service log inspection:

# Follow a service log live
sudo journalctl -u caddy -f
sudo journalctl -u sshd -f
sudo journalctl -u smb -f
sudo journalctl -u nfs-server -f
sudo journalctl -u tailscaled -f
sudo journalctl -u cloudflared -f
sudo journalctl -u fail2ban -f
sudo journalctl -u NetworkManager -f
sudo journalctl -u dnsmasq -f
sudo journalctl -u avahi-daemon -f

# Filter by priority (err, warning, info, debug)
sudo journalctl -u caddy -p err -n 50

# Kernel ring buffer — driver errors, firewall drops
sudo journalctl -k -f
sudo journalctl -k | grep -E "FINAL_REJECT|DROP|INVALID"
sudo journalctl -k | grep -E "eth0|wlan0|link"    # interface events

# AppArmor denials
sudo journalctl -k | grep apparmor

# All failed systemd units since boot
systemctl --failed

# Check a specific boot's journal (after rollback / kernel panic)
journalctl --list-boots                   # show all recorded boots
journalctl -b -1 -p err                   # previous boot, errors only

rsync & aria2 — File Transfer

rsync — incremental sync & deployment:

# Deploy a website to a remote server (delete files removed locally)
rsync -avz --delete /var/www/html/ user@server:/var/www/html/

# Sync with progress and checksums
rsync -avz --progress --checksum /src/ /dst/

# Dry run — preview changes without transferring
rsync -avz --dry-run /src/ /dst/

# Exclude patterns
rsync -avz --exclude="*.log" --exclude=".git/" /src/ /dst/

# Sync over SSH on a custom port
rsync -avz -e "ssh -p 2222" /src/ user@server:/dst/

# Mirror a remote directory locally
rsync -avz [email protected]:/home/user/docs/ ~/mirror/docs/

aria2 — parallel multi-protocol downloader:

# Download a file (HTTP/HTTPS/FTP/SFTP/BitTorrent/Magnet)
aria2c https://example.com/largefile.zip

# Multi-connection download (up to 16 connections per server)
aria2c -x 16 -s 16 https://example.com/largefile.iso

# Download from multiple mirrors simultaneously
aria2c https://mirror1.com/file.iso https://mirror2.com/file.iso

# Torrent download
aria2c /path/to/file.torrent
aria2c "magnet:?xt=urn:btih:..."

# Download list from a file
aria2c -i ~/urls.txt

# Run as JSON-RPC daemon (for AriaNG web UI)
aria2c --enable-rpc --rpc-listen-all=false --rpc-secret=mysecret --daemon

Troubleshooting

Boot Issues

System won't boot after update

ShaniOS should automatically roll back. If not:

  1. At boot menu, select the previous entry (@blue or @green)
  2. System will boot to last known working state
  3. Once logged in, a "Boot Failure Detected" dialog appears — click "Rollback Now" to restore the broken slot from its Btrfs snapshot
  4. If the dialog doesn't appear, run: sudo shani-deploy --rollback
  5. Report issue via GitHub or Telegram

Black screen or crash after login (compositor issue)

If the desktop crashes or shows a black screen after login:

  1. At the login screen, click the gear/settings icon and switch from Wayland to X11 session
  2. Log in and check for GPU or compositor errors: journalctl -b --user -p err
  3. For NVIDIA: ensure the proprietary driver is active with prime-run glxinfo | grep renderer
  4. If the issue persists across reboots, try rolling back: sudo shani-deploy --rollback

TPM auto-unlock stops working after a BIOS/firmware update

BIOS and firmware updates change the TPM PCR values, invalidating the enrolled key. This is expected security behaviour. To fix it:

  1. Enter your LUKS passphrase manually at the boot prompt to unlock
  2. Remove the old TPM enrollment:
    LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
    sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"
  3. Re-enroll with updated PCR values (see TPM Encryption section)
  4. Regenerate UKI: sudo gen-efi configure <slot>, then sudo shani-deploy --force
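
Steps 2 and 3 combined, as a sketch — the PCR selection here is an example; use the same PCR set as your original enrollment:

```shell
# Locate the backing LUKS device, drop the stale TPM2 slot, re-enroll
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 "$LUKS_DEVICE"
```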

Boot entries (@blue or @green) missing from the boot menu

systemd-boot entries are stored in /boot/efi/loader/entries/. If missing, regenerate them:

# Rebuild the UKI and boot entry for the currently booted slot
sudo gen-efi configure blue   # if booted into @blue
# or
sudo gen-efi configure green  # if booted into @green

# Then force a redeploy to rebuild both slots
sudo shani-deploy --force

# Verify entries exist
ls /boot/efi/loader/entries/

System boots but /etc overlay is not mounted (configuration not applied)

If your custom /etc changes are not being applied, the overlay mount may have failed:

# Check overlay mount service
sudo systemctl status shanios-tmpfiles-data.service
sudo systemctl status etc-daemon-reload.service

# Verify the overlay is mounted
mount | grep "on /etc"
# Should show: overlay on /etc type overlay (rw,...)

# Check for errors in the early boot services
journalctl -b -u shanios-tmpfiles-data
journalctl -b -u etc-daemon-reload

# The overlay source directories
ls /data/overlay/etc/upper/   # Your changes
ls /data/overlay/etc/work/    # Should be an empty working dir

If the upper directory is missing, shanios-tmpfiles-data.service may not have run. A reboot usually fixes this. If not, file a bug report.
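
Before filing a bug, restarting the responsible services and re-checking the mount is worth one attempt (a sketch — the exact unit owning the mount may differ):

```shell
# Re-run the early-boot services, then confirm the overlay is back
sudo systemctl restart shanios-tmpfiles-data.service
sudo systemctl restart etc-daemon-reload.service
mount | grep "on /etc"
```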

LUKS password not working after TPM enrollment

TPM enrollment adds a new unlock method but always preserves your password as a fallback. If experiencing issues:

  1. Wait a few seconds at the unlock prompt — TPM unlock happens first; password entry becomes available shortly after
  2. Type your password carefully (input is not echoed)
  3. If the password genuinely doesn't work, boot from a live USB, chroot into the system, and verify the LUKS keyslots: sudo cryptsetup luksDump /dev/sdXn

Update Issues

shani-deploy fails with network error

Check network connectivity:

# Test connectivity
ping -c 3 shani.dev

# Check NetworkManager
systemctl status NetworkManager

# Retry update
sudo shani-deploy

shani-deploy automatically retries up to 5 times and falls back from the R2 CDN mirror to SourceForge if needed. If both fail, check if your ISP is blocking the download domains.
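
To check for DNS-level blocking, compare your system resolver's answer with a public resolver's (dig is covered under Network Diagnostics above):

```shell
# A differing or empty answer from your own resolver suggests interference
dig +short shani.dev
dig +short @1.1.1.1 shani.dev
```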

shani-deploy fails due to insufficient disk space

shani-deploy requires at least 10 GB free and aborts if space is insufficient. Free up space and retry:

# Show filesystem usage and compression stats
sudo shani-deploy --storage-info

# Remove old backups and cached downloads
sudo shani-deploy --cleanup

# Remove unused Flatpak runtimes
flatpak uninstall --unused

# Run on-demand deduplication to reclaim shared blocks
sudo shani-deploy --optimize

# Check Btrfs usage per subvolume
sudo btrfs filesystem du -s --human-readable /
sudo compsize /

shani-deploy fails with checksum or GPG signature error

This usually means a corrupted or incomplete download. Steps to resolve:

# Remove cached download files and retry
sudo shani-deploy --cleanup
sudo shani-deploy

# Force a fresh download ignoring any cached state
sudo shani-deploy --force

Persistent checksum failures may indicate DNS manipulation or ISP interference with the download. Try switching to a different DNS server (e.g., 1.1.1.1) in NetworkManager settings, then retry. If the problem persists, report it at github.com/shani8dev with the full --verbose output.
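
The DNS switch can also be done from the terminal with nmcli; "Wired connection 1" is a placeholder — list your connection names with nmcli connection show:

```shell
# Use Cloudflare DNS and ignore DHCP-provided servers for this connection
nmcli connection modify "Wired connection 1" ipv4.dns "1.1.1.1" ipv4.ignore-auto-dns yes
nmcli connection up "Wired connection 1"
resolvectl status        # confirm the active DNS server changed
```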

Update hangs or stalls mid-deployment

Do not interrupt a deployment mid-extraction — this could leave a partially written subvolume. First verify whether the deploy is still active:

# Check if shani-deploy is still running
journalctl -fu shani-deploy

# If shani-deploy was interrupted, a deployment_pending flag may remain
cat /data/deployment_pending   # If this file exists, a deploy was interrupted

# Clean up and retry
sudo shani-deploy --cleanup
sudo shani-deploy

The /data/deployment_pending flag prevents a corrupted deploy from being treated as complete. --cleanup removes it and any partial extraction.

Application Issues

Flatpak app won't start

Try these steps:

# Update Flatpak
flatpak update

# Repair installation
flatpak repair

# Check permissions
flatpak override --show org.app.Name

# Reset all overrides for the app
flatpak override --reset org.app.Name

# Run from terminal to see errors
flatpak run org.app.Name

# Check GPU / graphics issues (common for Electron & games)
flatpak run --env=LIBGL_DEBUG=verbose org.app.Name

Snap app won't start or snapd has issues

# Check snapd daemon status
sudo systemctl status snapd
sudo systemctl status snapd.apparmor

# Check AppArmor confinement is loaded for snaps
sudo apparmor_status | grep snap

# Restart snapd if it's stuck
sudo systemctl restart snapd

# Check snap logs for the failing app
snap run --shell app-name   # drop into a shell to inspect
journalctl -u snapd -f

# Refresh a specific snap to latest revision
snap refresh app-name

# Roll back a snap to its previous working revision
snap revert app-name

# Remove and reinstall a snap cleanly
snap remove app-name
snap install app-name

# List installed snaps and their revisions
snap list

Nix package or environment issues

# Check nix-daemon is running
sudo systemctl status nix-daemon

# Restart the daemon
sudo systemctl restart nix-daemon

# No packages found / nix-env -iA fails with "attribute not found"
# — you probably haven't added a channel yet
nix-channel --list   # should show nixpkgs
# If empty, add it:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update

# Broken profile after failed install — roll back
nix-env --rollback

# List profile generations and switch to a working one
nix-env --list-generations
nix-env --switch-generation 5   # replace 5 with a known-good number

# GUI apps installed via Nix not appearing in launcher
# Add to ~/.zshrc or ~/.bashrc:
export XDG_DATA_DIRS=$HOME/.nix-profile/share:$XDG_DATA_DIRS

# Repair broken store paths
nix-store --verify --check-contents --repair

# Garbage collect to free space
nix-collect-garbage -d

AppImage won't run or shows permission errors

# Most common cause: AppImage is not executable
chmod +x MyApp-x86_64.AppImage
./MyApp-x86_64.AppImage

# AppImage must be stored in a writable location
# /usr is read-only on ShaniOS — move it to home or /data:
mkdir -p ~/Applications
mv MyApp.AppImage ~/Applications/

# Missing FUSE — AppImages use FUSE by default
# ShaniOS includes FUSE2 and FUSE3; if still failing, extract and run:
./MyApp.AppImage --appimage-extract
./squashfs-root/AppRun

# Gear Lever shows "broken" AppImage
# Re-import it: open Gear Lever → Remove → re-add the AppImage file

# AppImage from a different architecture (ARM vs x86_64)
file MyApp.AppImage   # check ELF architecture in output

Distrobox container can't access host devices or files

# Distrobox shares /home by default — files there are accessible
# For other paths, add them at creation time:
distrobox create --name mybox --image ubuntu:22.04 \
  --volume /data:/data:rw

# Or pass a device:
distrobox create --name mybox --image ubuntu:22.04 \
  --additional-flags "--device /dev/video0"

# Check container status
distrobox list

# Restart a stopped container
distrobox start mybox

# Re-create a broken container (data in /home is preserved)
distrobox rm mybox
distrobox create --name mybox --image ubuntu:22.04

# Enter container and check what's mounted
distrobox enter mybox -- df -h

Configuration Issues

A bad /etc change is causing services to fail at boot

If a misconfigured file in /etc (e.g., a broken /etc/fstab, /etc/ssh/sshd_config, or /etc/sudoers) causes boot or login failures:

  1. Boot the alternate slot. The other slot (@blue or @green) has a separate /etc overlay. Select it from the systemd-boot menu at startup. From there you can fix or delete the broken file in /data/overlay/etc/upper/.
  2. Fix the file directly: Once booted into the working slot, find and correct the problematic file in /data/overlay/etc/upper/. To revert a file entirely to its system default, simply delete it from the upper directory:
    sudo rm /data/overlay/etc/upper/fstab
    # (paths under upper/ mirror paths under /etc, so /etc/fstab lives at upper/fstab)
  3. After fixing, reboot back into your normal slot to confirm it boots correctly.

How do I add a custom kernel module or out-of-tree driver?

ShaniOS includes a comprehensive set of out-of-tree drivers (broadcom-wl, SOF audio, ALSA firmware, etc.), so most hardware works without intervention. If you still need a module:

  • Preferred: File a request at github.com/shani8dev to include it in the base image. Community-requested drivers are regularly evaluated.
  • Temporary testing: Use a Distrobox container with access to the host kernel headers. Build the module inside the container, then load it on the host with sudo insmod /path/to/module.ko. This survives only until the next reboot.
  • Persistent workaround: Place the module in ~/.local/lib/modules/$(uname -r)/ and use a systemd user service to load it on each boot with insmod. This is user-space and unaffected by system updates.
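The persistent workaround above can be wired up with a small systemd user unit. A sketch (the unit name and module filename are illustrative, not shipped with ShaniOS; insmod itself still needs root, so a NOPASSWD sudoers rule for exactly that command is assumed):

```shell
# Hypothetical user unit for the workaround above; written to a temp file here
# so the sketch is self-contained. On a real system, save it as
# ~/.config/systemd/user/load-mymodule.service, then run:
#   systemctl --user daemon-reload && systemctl --user enable --now load-mymodule
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Description=Load out-of-tree kernel module at login

[Service]
Type=oneshot
RemainAfterExit=yes
# insmod requires root: assumes a sudoers NOPASSWD rule for this one command.
# The real path includes the kernel version dir, e.g. %h/.local/lib/modules/<uname -r>/
ExecStart=/usr/bin/sudo /usr/bin/insmod %h/.local/lib/modules/mymodule.ko

[Install]
WantedBy=default.target
EOF
cat "$unit"
```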

Encryption & LUKS Issues

How to check if my system is encrypted

# Method 1: Check for active LUKS mapping
sudo cryptsetup status shani_root
# If encrypted, shows: "is active" with device info

# Method 2: Check for crypto_LUKS partitions
sudo blkid | grep crypto_LUKS
# Shows all LUKS encrypted partitions

# Method 3: Check mounted devices
mount | grep mapper
# If encrypted, shows: /dev/mapper/shani_root

# Method 4: List block devices with filesystem types
lsblk -f
# Look for TYPE="crypto_LUKS"

# If none of these show encryption, your system is NOT encrypted
# and you cannot use TPM enrollment

Remove TPM enrollment and return to password-only

# Get your LUKS device
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep device: | awk '{print $2}')

# Remove TPM enrollment
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"

# Regenerate UKI for the currently booted slot only (e.g. if booted into @blue):
sudo gen-efi configure blue
# or: sudo gen-efi configure green   # if booted into @green

# To regenerate the other slot's UKI as well, trigger a redeployment:
sudo shani-deploy --force

# Done - system will now require password on boot

Networking Issues

No network connection after boot

# Check NetworkManager status
systemctl status NetworkManager
sudo journalctl -u NetworkManager -n 50

# List all connections (saved profiles)
nmcli connection show

# Check active connections
nmcli device status

# Restart NetworkManager
sudo systemctl restart NetworkManager

# Reconnect a specific interface
nmcli device disconnect eth0
nmcli device connect eth0

# For WiFi: scan and reconnect
nmcli device wifi list
nmcli device wifi connect "MySSID" --ask

# Check if interface is up at kernel level
ip link show
ip addr show

DNS resolution not working (can ping IPs but not hostnames)

# Check current DNS servers
resolvectl status
cat /etc/resolv.conf

# Test DNS resolution directly
dig shani.dev @1.1.1.1         # query Cloudflare DNS directly
dig shani.dev @8.8.8.8         # query Google DNS directly
dig shani.dev                  # use system resolver

# If direct queries work but system DNS fails, check systemd-resolved
sudo systemctl status systemd-resolved
sudo systemctl restart systemd-resolved

# Flush DNS cache
sudo resolvectl flush-caches

# If dnsmasq is running and conflicting, check it
sudo systemctl status dnsmasq
sudo journalctl -u dnsmasq -n 30

# Set a manual DNS server in NetworkManager
nmcli connection modify "ConnectionName" ipv4.dns "1.1.1.1 8.8.8.8"
nmcli connection modify "ConnectionName" ipv4.ignore-auto-dns yes
nmcli connection up "ConnectionName"

Caddy not starting or HTTPS certificate failing

# Check Caddy status and logs
sudo systemctl status caddy
sudo journalctl -u caddy -n 80 --no-pager

# Validate Caddyfile syntax before reloading
caddy validate --config /etc/caddy/Caddyfile

# Common ACME/certificate errors:
# - Port 80 must be open for Let's Encrypt HTTP-01 challenge
# - Domain must point to your public IP (check with: dig mysite.example.com)
# - Cloudflared handles its own TLS — Caddy can use internal certs for tunneled sites

# Check TLS certificate status
caddy trust         # install CA into system trust store (for local dev)

# Reload after config changes (no downtime)
sudo systemctl reload caddy

# If Let's Encrypt rate limit hit, test with staging CA first
# Add to your Caddyfile block:
# tls {
#   ca https://acme-staging-v02.api.letsencrypt.org/directory
# }

# Check port 80/443 are reachable
sudo firewall-cmd --list-all
ss -tlnp | grep caddy

Tailscale not connecting or devices not visible

# Check daemon and connection state
sudo systemctl status tailscaled
sudo journalctl -u tailscaled -n 50

# Re-authenticate if session expired
sudo tailscale up

# Check peer reachability and NAT type
tailscale netcheck

# View all peers and their status
tailscale status

# Test direct connection to a peer
tailscale ping myhostname

# If peer is showing "offline": check that tailscaled is running on the other device
# If routing is broken: check MagicDNS settings at https://login.tailscale.com/admin/dns

# Enable Tailscale SSH if needed
sudo tailscale up --ssh

# Check if route advertisement is working
tailscale status --peers

# Reset and rejoin network
sudo tailscale down
sudo tailscale up

Cloudflared tunnel is down or failing to connect

# Check service status and logs
sudo systemctl status cloudflared
sudo journalctl -u cloudflared -n 80 --no-pager

# List tunnels and their status
cloudflared tunnel list
cloudflared tunnel info my-tunnel

# Test tunnel manually (bypasses systemd service)
cloudflared tunnel run my-tunnel

# Check config file is valid
cloudflared tunnel ingress validate

# Diagnose connectivity to Cloudflare's edge with verbose logging
cloudflared --loglevel debug tunnel run my-tunnel

# If credentials missing/expired: re-login
cloudflared tunnel login
cloudflared tunnel token my-tunnel   # get a service token

# Verify DNS routing still points to tunnel
cloudflared tunnel route dns my-tunnel mysite.example.com

# Restart service after config fix
sudo systemctl restart cloudflared

SSH connection refused or timing out

# On the SERVER — check sshd is running
sudo systemctl status sshd
sudo journalctl -u sshd -n 30

# Check sshd is actually listening
ss -tlnp | grep sshd

# Validate sshd config
sudo sshd -t

# Check firewall allows SSH
sudo firewall-cmd --list-all | grep ssh

# Check fail2ban hasn't banned your client IP
sudo fail2ban-client status sshd

# Unban your IP if needed
sudo fail2ban-client set sshd unbanip YOUR_IP

# On the CLIENT — diagnose with verbose output
ssh -vvv [email protected]

# Common issues:
# "Connection refused"  → sshd not running, or wrong port, or firewall blocking
# "Permission denied"   → wrong key, or PasswordAuthentication disabled
# "Host key changed"    → run: ssh-keygen -R 192.168.1.50  then reconnect

NFS share not mounting or showing "access denied"

# On server: check NFS is running and exports are active
sudo systemctl status nfs-server
sudo exportfs -v               # list active exports
showmount -e localhost         # show what clients see

# Re-export after /etc/exports changes
sudo exportfs -arv

# Check firewall on server
sudo firewall-cmd --list-services | grep -E "nfs|rpc"

# On client: try a verbose mount
sudo mount -v -t nfs 192.168.1.100:/share /mnt/nfs

# If "access denied": check client IP is in the export's allowed range
# If "portmapper" error: ensure rpcbind is open in server firewall
sudo firewall-cmd --add-service=rpc-bind --permanent
sudo firewall-cmd --add-service=mountd --permanent
sudo firewall-cmd --reload

# Check NFS server logs
sudo journalctl -u nfs-server -n 30

Samba share not visible on network / authentication failing

# Check Samba is running
sudo systemctl status smb nmb
sudo journalctl -u smb -n 30

# Validate smb.conf syntax
testparm

# List active shares
smbclient -L localhost -N

# Test authentication for a user
smbclient //localhost/ShareName -U youruser

# Reset Samba password
sudo smbpasswd -a youruser

# Firewall check
sudo firewall-cmd --list-services | grep samba

# If share shows but can't connect from Windows:
# Ensure "valid users" in smb.conf matches your username exactly
# Check SELinux/AppArmor isn't blocking (look for apparmor denials):
sudo journalctl -k | grep apparmor | tail -20

# Restart after config changes
sudo systemctl restart smb nmb

Firewall blocking a service unexpectedly

# See all active rules
sudo firewall-cmd --list-all
sudo firewall-cmd --list-all-zones

# Check if packet drops are being logged
sudo journalctl -k | grep -E "FINAL_REJECT|DROP" | tail -20

# Enable firewalld logging of dropped packets (temporarily)
sudo firewall-cmd --set-log-denied=all
sudo journalctl -k -f | grep "FINAL_REJECT"  # watch live
# Disable after debugging
sudo firewall-cmd --set-log-denied=off

# Check which zone an interface is in
sudo firewall-cmd --get-active-zones

# Move interface to a different zone (e.g. trusted for LAN interface)
sudo firewall-cmd --zone=trusted --add-interface=eth0 --permanent
sudo firewall-cmd --reload

# Check if nftables (underlying) has any extra rules
sudo nft list ruleset | grep -v "^#"

Podman container can't reach the internet or local network

# Check container network is up
podman network ls
podman network inspect podman   # default bridge network

# Test from inside the container
podman exec mycontainer ping -c 2 8.8.8.8
podman exec mycontainer curl -s https://ifconfig.me

# Check DNS inside container
podman exec mycontainer cat /etc/resolv.conf
podman exec mycontainer nslookup google.com

# If DNS fails: aardvark-dns may have a stale state
podman network reload

# Rebuild the network stack (WARNING: stops all containers)
podman system reset --force    # nuclear option — recreates everything

# Check if IP forwarding is enabled (required for container networking)
cat /proc/sys/net/ipv4/ip_forward      # should be 1
sudo sysctl -w net.ipv4.ip_forward=1  # enable if 0

# Check firewalld isn't blocking container traffic (netavark's bridge is
# podman0; older CNI-based setups used cni-podman0)
sudo firewall-cmd --zone=trusted --add-interface=podman0 --permanent
sudo firewall-cmd --reload

VPN (WireGuard / OpenVPN) not connecting via NetworkManager

# Check NetworkManager VPN plugin is loaded
nmcli connection show               # list all connections
nmcli connection up "VPN Name"      # connect
nmcli connection down "VPN Name"    # disconnect

# Verbose diagnostics
sudo journalctl -u NetworkManager -f  # watch live while connecting

# WireGuard-specific: check interface came up
ip link show wg0
sudo wg show wg0

# Check WireGuard handshake is happening
sudo wg show    # "latest handshake" should update

# OpenVPN: check routing after connect
ip route show
ip route show table all | grep -v "^broadcast"

# For L2TP/IPsec: check xl2tpd and strongswan logs
sudo journalctl -u xl2tpd -n 30
sudo journalctl -u strongswan -n 30

# Restart NetworkManager if a VPN leaves a stale state
sudo systemctl restart NetworkManager

Audio Issues

No sound / audio device not detected

# Check PipeWire status
systemctl --user status pipewire pipewire-pulse wireplumber

# Restart PipeWire stack (no logout required)
systemctl --user restart pipewire pipewire-pulse wireplumber

# List audio devices PipeWire sees
pactl list sinks short           # output devices
pactl list sources short         # input devices (microphones)
wpctl status                     # WirePlumber full status

# Check default output device
pactl info | grep "Default Sink"

# Set a different default output (replace sink name from list above)
pactl set-default-sink alsa_output.pci-0000_00_1f.3.analog-stereo

# Check volume is not muted
pactl set-sink-mute @DEFAULT_SINK@ 0
pactl set-sink-volume @DEFAULT_SINK@ 100%

# Check for SOF firmware (Intel DSP audio)
sudo journalctl -k | grep -E "sof|SOF|intel-sof" | tail -20
# If missing, the firmware is in sof-firmware which is pre-installed

# Test audio output directly
speaker-test -t wav -c 2        # plays L/R speaker test tones

# Check ALSA layer beneath PipeWire
aplay -l                         # list ALSA devices
arecord -l                       # list capture devices

Bluetooth audio (headphones) crackling or won't connect

# Check Bluetooth service
sudo systemctl status bluetooth
sudo journalctl -u bluetooth -n 50

# Restart Bluetooth
sudo systemctl restart bluetooth

# Check PipeWire Bluetooth module
pactl list cards | grep -A3 bluez   # should show BT card when headphones connected

# If headphones pair but no audio:
# Try switching profile to A2DP (high quality)
pactl list cards | grep "Name: bluez"  # get card name
pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp-sink

# In audio settings GUI, ensure "High Fidelity Playback (A2DP Sink)" is selected
# not "Hands-Free Audio" (HFP) which degrades quality

# Crackling: disable power-saving on BT adapter
echo 'options btusb enable_autosuspend=n' | sudo tee /etc/modprobe.d/btusb-autosuspend.conf
sudo systemctl restart bluetooth

JACK audio for pro audio / low-latency setup

PipeWire provides a native JACK compatibility layer — no need to install JACK separately. JACK applications work unmodified. For professional audio with minimum latency, ensure your user is in the realtime group (it is by default on ShaniOS) and configure PipeWire's quantum:

# Verify realtime group membership
groups | grep realtime

# Check current PipeWire quantum (buffer size / latency)
pw-cli info 0 | grep quantum

# Set a lower quantum for lower latency (e.g. 64 samples at 48kHz = ~1.3ms)
# Add to ~/.config/pipewire/pipewire.conf.d/low-latency.conf:
# context.properties = {
#   default.clock.rate = 48000
#   default.clock.quantum = 64
#   default.clock.min-quantum = 32
# }
# Then restart:
systemctl --user restart pipewire wireplumber

# Check the effective clock settings
pw-metadata -n settings | grep quantum
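The "~1.3ms" figure above is just quantum divided by sample rate:

```shell
# latency_ms = quantum / rate * 1000, for the values used in the config sketch
rate=48000
for quantum in 32 64 1024; do
  awk -v q="$quantum" -v r="$rate" \
    'BEGIN { printf "quantum %4d at %d Hz = %.2f ms\n", q, r, q / r * 1000 }'
done
```

So halving the quantum halves the latency, at the cost of more frequent audio processing wakeups (and a higher risk of xruns on a loaded system).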

Gaming Issues

Steam won't launch or Proton games crash

# Run Steam from terminal to see errors
flatpak run com.valvesoftware.Steam

# Proton game crashes: try a different Proton version
# In Steam → right-click game → Properties → Compatibility → Force Proton version

# Check Vulkan works
vulkaninfo | grep "GPU id"
vkcube                              # should render a spinning cube

# Check GPU driver is loaded
lspci -k | grep -A2 -E "VGA|3D"    # look for "Kernel driver in use:"

# For NVIDIA: check open driver is loaded
nvidia-smi
lsmod | grep nvidia

# Missing 32-bit libraries (common for older games):
# Steam Flatpak bundles its own 32-bit runtime — this is usually fine.
# If still failing, run with more debugging:
PROTON_LOG=1 flatpak run com.valvesoftware.Steam
# Check ~/steam-*.log after trying to launch the game

# Reset Steam's download cache and local config (re-fetches runtime files)
flatpak run com.valvesoftware.Steam steam://flushconfig

Controller not detected in games

# Check controller is seen by the kernel
ls /dev/input/js*       # joystick device
ls /dev/input/event*    # evdev device
cat /proc/bus/input/devices | grep -A5 "Gamepad\|Controller\|Joystick"

# Check udev rules are applied (ShaniOS ships game-devices-udev)
udevadm info /dev/input/js0

# Test controller input with jstest
jstest /dev/input/js0

# Check your user is in the input group (it is by default)
groups | grep input

# For PS4/PS5 DualSense over USB: ensure hidraw permissions
ls -l /dev/hidraw*
# If permission denied, check game-devices-udev rules are active
sudo udevadm control --reload-rules
sudo udevadm trigger

# AntiMicroX — remap controller to keyboard/mouse
flatpak install flathub io.github.antimicrox.antimicrox
flatpak run io.github.antimicrox.antimicrox

# Piper — configure gaming mouse DPI and buttons
flatpak install flathub org.freedesktop.Piper
flatpak run org.freedesktop.Piper

Poor gaming performance / low FPS

# Check GameMode is running (ShaniOS ships it pre-installed)
systemctl status gamemoded
# GameMode activates automatically when games start via the Steam Flatpak.
# To force it for a Steam game, set its launch options to: gamemoderun %command%
# For non-Steam games, wrap the executable directly:
gamemoderun ./mygame

# Check power profile is set to Performance (especially on laptops)
powerprofilesctl get        # check current profile
powerprofilesctl set performance

# Check CPU frequency scaling
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Should be "schedutil" (default); GameMode switches to "performance" while gaming

# Check GPU is being used (not integrated)
# For NVIDIA Prime on a laptop:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia game_executable

# Check thermal throttling
sensors | grep -E "temp|Core"
# If CPUs are hitting 90°C+ check thermal paste / cooling

# GOverlay — performance overlay (FPS, GPU usage, temps)
flatpak install flathub io.github.benjamimgois.goverlay
# Enable MangoHud from within GOverlay

Waydroid / Android Issues

Waydroid won't start or shows "failed to start session"

# Check waydroid-container service
sudo systemctl status waydroid-container

# Restart the container service
sudo systemctl restart waydroid-container

# Check Waydroid logs
sudo waydroid log

# If never initialised: initialise the Android image first
sudo waydroid init

# Check binder kernel module is loaded
lsmod | grep binder
# If missing: reboot (ShaniOS loads it at boot via dracut)

# Check if session is already running
waydroid status

# Kill stuck session and restart
waydroid session stop
sudo systemctl restart waydroid-container
waydroid show-full-ui

Android apps show "not installed" or APK fails to install

# Ensure the session is running before installing
waydroid status         # should show "RUNNING"

# Install APK
waydroid app install ~/Downloads/app.apk

# If "INSTALL_FAILED_NO_MATCHING_ABIS": the APK is ARM-only
# ARM translation (libhoudini) should handle this automatically
# Check ARM translation is configured:
waydroid prop get ro.product.cpu.abilist   # should list x86_64 and arm64 ABIs

# If ARM translation is not working, force re-initialisation
sudo waydroid init -f

# List installed apps
waydroid app list

# Check Android data is healthy
sudo btrfs subvolume show /var/lib/waydroid   # should be @waydroid subvolume

Printing & Scanning Issues

Printer not detected or jobs stuck in queue

# Check CUPS is running
sudo systemctl status cups
sudo systemctl restart cups

# List detected printers
lpstat -p -d

# Test print
echo "test" | lp -d PrinterName

# Cancel all stuck jobs for a printer
cancel -a PrinterName

# For network printers: check Avahi can find them
avahi-browse _ipp._tcp -t         # IPP printers
avahi-browse _printer._tcp -t     # legacy LPD printers

# Open CUPS web UI for full configuration
xdg-open http://localhost:631

# Flush and restart CUPS
sudo systemctl stop cups
sudo rm -f /var/spool/cups/tmp/*  # clear spool (bind-mounted from @data)
sudo systemctl start cups

# For HP printers: check hplip-minimal status
hp-check -r                        # check HPLIP runtime dependencies
hp-setup                           # run HP setup wizard

Scanner not detected by Simple Scan / Skanlite

# Check your user is in the scanner group (it is by default)
groups | grep scanner

# List SANE-detected scanners
scanimage -L

# For network scanners (IPP Everywhere / AirScan):
# sane-airscan discovers them automatically via Avahi
avahi-browse _uscan._tcp -t       # network scan targets

# Test a scan from terminal
scanimage --device-name=airscan:e0:... --format=png --output-file=/tmp/test.png

# Restart SANE daemon if needed
sudo systemctl restart saned.socket

# Check SANE can access the USB scanner
lsusb | grep -i scan              # confirm scanner is seen
ls -l /dev/bus/usb/...            # check permissions

Frequently Asked Questions

Software & Packages

Can I install traditional Arch packages with pacman?

No. The root filesystem is read-only and immutable by design. This is a core security and stability feature, not a limitation. Use Flatpak for GUI applications, and containers (Distrobox, Podman) for development tools and CLI utilities. ShaniOS ships a custom pacman wrapper that blocks mutating operations (-S, -U, -R) but still allows read-only queries (-Q, -F, -Ss).
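The wrapper's gate can be pictured as a small case statement — an illustrative sketch, not the shipped implementation:

```shell
# Illustrative sketch of the wrapper logic: reject mutating flags, pass queries.
pacman_gate() {
  case "$1" in
    -S|-Sy*|-Su*|-U*|-R*)
      echo "error: root filesystem is read-only — use Flatpak or a container" >&2
      return 1 ;;
    *)
      # -Q, -F, -Ss and other read-only operations fall through here
      echo "allowed: pacman $*" ;;
  esac
}
pacman_gate -Syu || true     # rejected with an error message
pacman_gate -Q linux         # allowed: pacman -Q linux
```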

What happens if an update fails?

ShaniOS includes automatic boot failure detection via a multi-layer pipeline. If the system fails to boot after an update, it automatically falls back to the previous working slot. After you log in, a "Boot Failure Detected" dialog appears offering a one-click rollback via shani-deploy --rollback, which restores the broken slot from its Btrfs backup snapshot. Your personal data in /home and /data is never affected by rollbacks.

Does ShaniOS support hibernation?

Yes. ShaniOS uses a dedicated @swap subvolume with Copy-on-Write disabled (nodatacow), providing reliable hibernation on Btrfs. This approach solves the traditional Btrfs hibernation challenges. The swapfile size is automatically set to match your system's RAM on first deployment.
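The sizing rule is simply RAM-sized swap, readable from /proc/meminfo (a sketch of the arithmetic — the deployment script may round differently):

```shell
# Hibernation needs swap >= RAM; ShaniOS sizes the swapfile to RAM on first
# deployment. The same figure, computed by hand:
ram_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "RAM = $((ram_kib / 1024)) MiB, so the swapfile is ~$((ram_kib / 1024)) MiB"
```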

How do I make persistent changes to system configuration?

The /etc directory uses an overlay filesystem, making it writable despite the read-only root. Simply edit files in /etc normally — changes are stored in /data/overlay/etc/upper and persist across updates and blue-green slot switches. To see what you've changed, run: ls -la /data/overlay/etc/upper.
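Every file in the upper directory shadows a read-only default at the same relative path under /etc. A self-contained demo with a mock upper dir (on a real system, set upper=/data/overlay/etc/upper instead):

```shell
# Mock the overlay upper dir; on ShaniOS: upper=/data/overlay/etc/upper
upper=$(mktemp -d)
mkdir -p "$upper/ssh"
echo "PermitRootLogin no" > "$upper/ssh/sshd_config"
# Map each upper-dir file back to the /etc path it overrides
(cd "$upper" && find . -type f | sed 's|^\./|/etc/|')
# → /etc/ssh/sshd_config
```

Deleting a file from the upper dir reverts that path to the read-only system default on the next mount.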

Why does ShaniOS need 32GB minimum storage?

ShaniOS maintains two complete system images (@blue and @green) for atomic updates. However, Btrfs Copy-on-Write shares unchanged data blocks between them, resulting in only ~18% overhead. The benefit is zero-downtime updates and instant rollback capability — a worthwhile trade-off compared to traditional single-image systems.
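The overhead figure is just the unshared fraction of one image (illustrative numbers — the actual sharing ratio depends on how far the two slots have diverged):

```shell
# Illustrative: if ~82% of blocks are shared via CoW, a second 10 GiB slot
# costs only the ~18% of blocks that differ.
image_mib=10240
shared_pct=82
extra_mib=$(( image_mib * (100 - shared_pct) / 100 ))
echo "second slot: ~${extra_mib} MiB extra, not ${image_mib} MiB"
```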

Can I use ShaniOS on legacy BIOS systems?

No. ShaniOS requires UEFI firmware. UEFI is required for Unified Kernel Images (UKIs), Secure Boot support via MOK, the systemd-boot bootloader, and the gen-efi boot management tooling. Legacy BIOS is not supported.

Can I dual-boot ShaniOS with Windows or another Linux?

Not recommended. Other operating systems may modify the ESP or bootloader entries, potentially breaking ShaniOS's boot configuration. For running other systems, use virtual machines (GNOME Boxes or virt-manager via Flatpak) or containers (Distrobox, LXC) instead.

How do I switch between @blue and @green manually?

At the systemd-boot menu during startup, select the alternative boot entry (both @blue and @green entries are always present). On a running system you can also run sudo shani-deploy --rollback to restore and switch to the previous slot's Btrfs backup snapshot, then reboot.
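A quick way to see which entry to pick: the alternate slot is simply whichever of the two names is not current (mocked here; on a real system read /data/current-slot):

```shell
current=blue   # on ShaniOS: current=$(cat /data/current-slot)
if [ "$current" = blue ]; then other=green; else other=blue; fi
echo "booted: @$current — select @$other in systemd-boot to switch"
```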

How do I install software that needs system-level access?

Use containers for anything that needs system-level access:

  • Distrobox creates a full mutable Linux environment (Arch, Ubuntu, Fedora, etc.) that integrates with your desktop — apps appear in your launcher and files are shared. It is the recommended approach for development tools and CLI utilities.
  • Flatpak is best for GUI apps.
  • AppImages via Gear Lever work for portable tools not available as Flatpaks.
  • Nix (pre-installed; the @nix subvolume is shared across both slots) is excellent for CLI tools, language runtimes, and pinned library versions — packages installed via Nix survive OS updates and rollbacks. Add a channel first with nix-channel --add.
  • Homebrew can be installed in user space to /home/linuxbrew/.linuxbrew — it works on Linux and doesn't touch the read-only root.

Updates & Rollback

Will my WiFi passwords and Bluetooth pairings survive an update?

Yes. ShaniOS bind-mounts critical service state directories from the persistent @data subvolume. This includes /var/lib/NetworkManager (WiFi passwords, VPN configs), /var/lib/bluetooth (paired devices), /var/lib/cups (printer configs), fingerprint enrollment, and many more. All of these persist through every system update and rollback.

How do I check which slot (@blue or @green) I'm currently running?

Run: cat /data/current-slot — this shows the currently active slot name. You can also check the kernel command line with grep -o 'subvol=[^, ]*' /proc/cmdline to see which Btrfs subvolume was mounted at boot.
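What the grep extracts, shown against a sample kernel command line (on a real system, read /proc/cmdline instead of the mock string):

```shell
# Mock cmdline; on ShaniOS: cmdline=$(cat /proc/cmdline)
cmdline="root=/dev/mapper/shani_root rootflags=subvol=@blue,compress=zstd rw quiet"
echo "$cmdline" | grep -o 'subvol=[^, ]*'
# → subvol=@blue
```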

How do I set up automatic disk decryption (no password prompt at boot)?

Use TPM 2.0 enrollment. If your system has a TPM 2.0 module (enabled in BIOS), you can enroll your LUKS key into it with: LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}') && sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 "$LUKS_DEVICE". With Secure Boot enabled (PCR 7), the TPM will only unlock if the bootloader hasn't been tampered with. See the TPM Encryption section for full details.

Hardware & Gaming

Can I play games on ShaniOS?

Yes. Install Steam, Lutris, Heroic Games Launcher, or Bottles as Flatpaks from Flathub. ShaniOS ships full Mesa (Vulkan + OpenGL), NVIDIA open driver support, Wayland-native compositor, comprehensive controller/racing wheel udev rules, and GPU switching (nvidia-prime, switcheroo-control) for hybrid graphics laptops. The full GStreamer codec stack is also included for media playback.

How often does ShaniOS get updates, and are they automatic?

Updates follow a date-based versioning scheme (YYYYMMDD) on the stable (default) or latest channels. A background service (shani-update.timer) checks for updates 5 minutes after login and every 2 hours thereafter. When an update is found, a GUI dialog asks you to install or defer (defer schedules a reminder after 24 hours). Updates are never applied silently — user approval is always required before shani-deploy runs.
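That cadence maps onto ordinary systemd timer options. A hypothetical reconstruction of the schedule (the shipped unit may differ — inspect the real one with systemctl cat shani-update.timer):

```shell
# Hypothetical [Timer] section matching the cadence described above
timer='[Timer]
OnStartupSec=5min       # first check 5 minutes after login
OnUnitActiveSec=2h      # then every 2 hours'
echo "$timer"
```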

Is my data safe during a rollback?

Yes. Rollbacks only replace the system slot (@blue or @green) from its Btrfs backup snapshot. Your personal data in /home, all persistent service state in /data, your Flatpak apps, containers, and VMs are completely untouched. The separation of system and user data is by design — they live in separate Btrfs subvolumes.

Can I run Android apps on ShaniOS?

Yes. Waydroid is pre-configured and available. It runs a full Android container via LXC on top of your Linux desktop. Android data is stored in the dedicated @waydroid Btrfs subvolume. The necessary firewall rules are already set up. Initialize it once with sudo waydroid init, then launch from your application menu.

Storage & Encryption

How do I back up my data?

ShaniOS does not automatically back up user data. Two tools are pre-installed:

  • restic + rclone for encrypted, deduplicated backups to cloud storage (Backblaze B2, S3, Google Drive, local NAS). See the Web Hosting section for command examples.
  • Btrfs snapshots for instant, space-efficient local snapshots:
    # Snapshot your home directory
    sudo btrfs subvolume snapshot /home /data/snapshots/home-$(date +%Y%m%d)
    
    # List snapshots
    sudo btrfs subvolume list /

What to back up: /home and /data (service configs, VPN keys, overlay changes). Flatpak apps and system slots are re-deployable — they don't need backup.

Critical — LUKS header backup: If your system is encrypted, back up the LUKS header. Losing it means permanent data loss:

LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo cryptsetup luksHeaderBackup "$LUKS_DEVICE" --header-backup-file ~/luks-header-backup.img
# Store this file offsite (cloud storage, encrypted USB, etc.)

How do I set up printing and scanning?

CUPS and drivers for all major printer brands are pre-installed. Most printers work automatically:

  • Network printers: Open GNOME Settings → Printers, or KDE System Settings → Printers. Avahi discovers network printers automatically.
  • USB printers (modern): Just plug in — ipp-usb provides driverless IPP-over-USB support for any printer advertised as AirPrint or Mopria compatible.
  • Troubleshooting: sudo systemctl restart cups and lpstat -p -d to list active queues.

For scanning, install GNOME Simple Scan (flatpak install flathub org.gnome.SimpleScan) or Skanlite (flatpak install flathub org.kde.skanlite). SANE with network scan support (sane-airscan) is pre-installed.

How do I run virtual machines on ShaniOS?

Install GNOME Boxes for an easy graphical VM manager: flatpak install flathub org.gnome.Boxes. For more advanced control, install Virtual Machine Manager: flatpak install flathub org.virt_manager.virt-manager. Both Flatpaks bundle their own libvirt and QEMU runtimes — no system daemon setup required. VM disk images are stored in the @libvirt Btrfs subvolume with CoW disabled for optimal performance, and survive all system updates and rollbacks.

How do I check deduplication savings and compression ratios?

# Check bees deduplication daemon status
sudo systemctl status "beesd@*"

# View actual compression ratios per subvolume
sudo compsize /
sudo compsize /home
sudo compsize /nix

# Full storage analysis with per-subvolume breakdown
sudo shani-deploy --storage-info

# Run on-demand deduplication pass
sudo shani-deploy --optimize

Security & Privacy

Does ShaniOS collect telemetry or usage data?

No. ShaniOS collects zero telemetry, sends no crash reports, and has no analytics of any kind — ever. No opt-out required because there is nothing to opt out of. The update tool (shani-deploy) connects to download servers to fetch images, but transmits only what a standard HTTP download requires — no hardware fingerprints, system IDs, or usage statistics. Intel ME modules (mei, mei_me) are blacklisted by default, removing the low-level hardware management channel. Because the entire codebase is public on GitHub, these claims are verifiable — you can read every script that runs on your system.

What security modules and protections are active by default?

ShaniOS activates six Linux Security Modules simultaneously via lsm=landlock,lockdown,yama,integrity,apparmor,bpf — most distributions run one or two. Alongside those: immutable read-only root (even root cannot modify OS files at runtime), LUKS2 with argon2id full-disk encryption (optional, recommended), TPM2 auto-unlock, Secure Boot via shim/sbctl, Intel ME disabled by default, firewalld active from first boot, Flatpak and Snap sandboxing, SHA256+GPG verified OS images, and fwupd for keeping firmware current. See the Security Features section for full details on each layer.
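You can confirm the active LSM list on a running system via securityfs (cat /sys/kernel/security/lsm); here the ShaniOS boot parameter is parsed as a sample string:

```shell
# The lsm= value from the kernel command line, as a sample string
lsm="landlock,lockdown,yama,integrity,apparmor,bpf"
echo "$lsm" | tr ',' '\n'            # one module per line
echo "$lsm" | tr ',' '\n' | wc -l    # → 6
```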

Can I verify the OS image myself before deploying it?

Yes. Every image is SHA256 + GPG signed. The public key (ID 7B927BFF...8014792) is on public keyservers. To verify manually:

gpg --keyserver keys.openpgp.org --recv-keys 7B927BFF8014792
gpg --verify shanios-image.zst.sig shanios-image.zst
sha256sum -c shanios-image.zst.sha256
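The checksum step is a standard sha256sum manifest check; the mechanics, demonstrated on a throwaway file rather than a real image:

```shell
# Self-contained demo of manifest-check mechanics (not a ShaniOS image)
f=$(mktemp)
echo "payload" > "$f"
sha256sum "$f" > "$f.sha256"     # manifest line: "<hash>  <path>"
sha256sum -c "$f.sha256"         # prints "<path>: OK" and exits 0 on match
```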

shani-deploy does this automatically before every deployment. The build system, deploy scripts, and signing workflow are public on GitHub — the full chain of trust is independently auditable.

Does my YubiKey / FIDO2 key / smart card work?

Yes. libfido2 (FIDO2/U2F), opensc + ccid (smart cards), and libnfc (NFC tokens) are all pre-installed with the necessary udev rules and pcscd.socket configured. Hardware security keys work for PAM login, sudo authentication, and browser WebAuthn at first boot — no setup required. Test with:

fido2-token -L          # list connected FIDO2 keys
opensc-tool -l           # list smart card readers
systemctl status pcscd.socket

Laptop & Hardware

Is ShaniOS good for laptops? Does battery life suffer?

ShaniOS is well-suited for laptops. Several features are specifically beneficial: fingerprint login works at first boot on supported hardware (fprintd + PAM); hibernation works out of the box, with the swap subvolume auto-sized to your RAM and CoW disabled; TPM2 auto-unlock gives you full LUKS2 encryption without a passphrase prompt at every boot; Profile Sync Daemon runs browser profiles from RAM, substantially reducing SSD write wear, and the volatile /var cuts unnecessary writes further; power-profiles-daemon (power-saver, balanced, performance) is pre-installed; and switcheroo-control handles hybrid Intel + NVIDIA/AMD GPU switching. The read-only root also protects the system from corruption during unexpected shutdowns.
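A few quick commands to exercise those pieces (command and unit names as shipped by power-profiles-daemon and profile-sync-daemon upstream):

```shell
# List available power profiles and see which is active
powerprofilesctl list

# Switch profiles, e.g. when on battery
powerprofilesctl set power-saver

# Check that Profile Sync Daemon is running for your user
systemctl --user status psd.service

# Preview psd's parsed configuration and managed browser profiles
psd p
```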

Does NVIDIA work? What about hybrid GPU laptops?

Yes. The NVIDIA open-source drivers (nvidia-open, nvidia-utils) and NVIDIA Prime are pre-installed and configured, including full Vulkan support, and work at first boot on most NVIDIA hardware. For hybrid GPU laptops (Intel iGPU + NVIDIA dGPU), switcheroo-control and nvidia-prime are both pre-installed for GPU switching. Verify:

# Check NVIDIA driver is loaded
nvidia-smi

# Run an app on the discrete NVIDIA GPU explicitly
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"

# Check available GPUs (switcheroo)
gdbus call --system --dest net.hadess.SwitcherooControl \
  --object-path /net/hadess/SwitcherooControl \
  --method org.freedesktop.DBus.Properties.Get \
  net.hadess.SwitcherooControl GPUs

Does fingerprint login work?

fprintd is pre-installed and integrated with PAM — fingerprint authentication works for login (GDM/SDDM), sudo, and screen unlock on supported hardware. Enroll your fingerprint:

# Enroll a finger (run as your user, not root)
fprintd-enroll

# List enrolled fingerprints
fprintd-list $USER

# Verify it works
fprintd-verify $USER

# If your reader isn't detected, check:
lsusb | grep -i finger
journalctl -u fprintd -f

If your reader is supported by libfprint but not auto-detected, check the libfprint supported devices list.

How do I update my BIOS / firmware without Windows?

fwupd is pre-installed with fwupd-refresh.timer running automatic checks. Update BIOS, NVMe controllers, SSD firmware, Thunderbolt devices, and other LVFS-supported hardware directly from ShaniOS:

# Check for available firmware updates
fwupdmgr get-updates

# Apply updates
fwupdmgr update

# See all manageable devices
fwupdmgr get-devices

Supported hardware includes most major laptop manufacturers (Dell, Lenovo, HP, ASUS, Framework, etc.), NVMe drives from most vendors, and many USB peripherals. Check fwupd.org/lvfs/devices for the full list.

Glossary

Key terms used throughout this documentation.

@blue / @green: The two Btrfs subvolumes used as root filesystems in the blue-green deployment model. Only one is active (mounted as /) at any time; the other is the standby slot used for updates.
Atomic update: An update that either completes entirely or fails cleanly without touching the running system. The active slot is never modified during an update.
CoW (Copy-on-Write): A Btrfs mechanism where modifying data creates a new copy of the changed blocks rather than overwriting in place. Enables efficient snapshots and sharing of unchanged data between @blue and @green.
UKI (Unified Kernel Image): A single signed EFI binary containing the kernel, initramfs, and kernel command line. ShaniOS generates one per slot using gen-efi and dracut. Stored in the ESP and loaded directly by the UEFI firmware via systemd-boot.
shani-deploy: The atomic update and deployment tool. Handles downloading, verifying, extracting, and deploying new system images; rollbacks; storage analysis; and deduplication. Also self-updates on every run.
gen-efi: ShaniOS tool that generates the Unified Kernel Image (UKI) for the currently booted slot using dracut. Called automatically by shani-deploy during updates; can also be run manually.
@data: The persistent Btrfs subvolume (mounted at /data) storing the /etc overlay upper/work directories, service state bind-mount sources (/data/varlib/), logs, and system markers. Everything that must survive across updates lives here.
Overlay (/etc): OverlayFS mount that presents /etc as writable by layering user changes (upper dir in @data) on top of the read-only base /etc from the active slot. Changes persist across all updates and rollbacks.
MOK (Machine Owner Key): A user-managed key enrolled into the UEFI Secure Boot database via MOK Manager. Allows ShaniOS's signed bootloader (Shim → systemd-boot) and UKIs to boot with Secure Boot enabled.
bees / beesd: A background Btrfs deduplication daemon that continuously finds and eliminates duplicate data blocks across all subvolumes. Auto-configured at boot by beesd-setup.service; the hash database scales automatically with disk size.
Distrobox: A tool that creates mutable Linux container environments (any distro — Ubuntu, Fedora, Arch, etc.) with seamless desktop integration: shared /home, exported app launchers, shared audio/display/USB. The recommended way to use traditional package managers on ShaniOS.
Slot: One of the two system images, @blue or @green. At any time one slot is "Active" (currently booted) and the other is "Candidate" (updated, waiting for next boot, or kept as rollback). Determined by /data/current-slot.
systemd.volatile=state: Kernel parameter that creates a tmpfs overlay for /var, making it volatile (cleared on every reboot). Persistent service data is restored via Btrfs subvolume mounts and bind mounts at boot, not by writing to the root filesystem.
nodatacow: Btrfs mount option that disables Copy-on-Write for a subvolume. Used on @libvirt (VM disks) and @swap (swapfile), where CoW causes excessive fragmentation and performance degradation.
PCR (Platform Configuration Register): A set of TPM registers that record measurements of the boot process (firmware, bootloader, kernel). Used during TPM enrollment to bind the LUKS key to a specific, verified boot chain. PCR 0 = firmware; PCR 7 = Secure Boot state.
ostree / composefs: Infrastructure libraries used for image content addressing and verification. composefs provides a read-only filesystem layer derived from a Merkle tree of the image content, enabling integrity checks of the active slot's file tree.
skopeo: A command-line tool for inspecting, copying, signing, and syncing OCI/Docker container images between registries — without pulling them into local storage first. Part of the pre-installed Podman ecosystem alongside buildah.
waydroid-helper: A ShaniOS-specific utility that automates Waydroid initialisation: downloads the Android image, configures kernel parameters, sets up the LXC container, and validates firewall rules — all in a single guided command.
@nix: A dedicated Btrfs subvolume mounted at /nix for the Nix package manager. Shared across both @blue and @green slots so Nix packages survive all system updates and rollbacks. Nix is pre-installed on ShaniOS; a channel must be added on first use with nix-channel --add before installing packages.
passim: A local caching and sharing daemon that speeds up fwupd firmware downloads by broadcasting available firmware files to other machines on the LAN via mDNS/Avahi, avoiding repeated downloads of the same payload across multiple ShaniOS installations.
shani-* packages: The family of ShaniOS-specific meta-packages (shani-core, shani-deploy, shani-desktop-plasma, shani-multimedia, shani-network, shani-storage, shani-tools, shani-accessibility, shani-bluetooth, shani-fonts, shani-printer, shani-scanner, shani-video, shani-video-guest, shani-peripherals, and others) that group systemd services, udev rules, configuration snippets, and default settings into logical units. Each pulls in the appropriate base packages and pre-configures them for ShaniOS's immutable environment.
shani-update: The per-user systemd service and timer (shani-update.service / shani-update.timer) that runs in the background to check for OS updates. Fires 5 minutes after login, then every 2 hours. When an update is available, it presents a GUI dialog (yad/zenity/kdialog) asking the user to install or defer. Runs pkexec shani-deploy under systemd-inhibit when approved. Logs to ~/.cache/shani-update.log.
dracut: The initramfs generator used by ShaniOS to build the initrd that is embedded in each Unified Kernel Image (UKI). Called by gen-efi during every deployment. dracut handles LUKS2 unlock, Btrfs subvolume mounting, the OverlayFS /etc mount, and bind mounts before handing off to systemd as PID 1.
IMA / EVM: Linux Integrity Measurement Architecture and Extended Verification Module — the "integrity" component of ShaniOS's LSM stack (lsm=...integrity...). IMA measures file hashes at access time and logs or blocks access to files that have changed since measurement. EVM protects extended attributes (including IMA hashes) using an HMAC. Together they provide runtime file integrity verification beyond what the read-only root enforces.
MGLRU: Multi-Generation LRU — a Linux kernel memory reclaim algorithm enabled by default on ShaniOS with aggressive settings. MGLRU is more efficient than the traditional LRU at deciding which memory pages to evict under pressure, reducing latency spikes during gaming and other workloads. Controlled via /sys/kernel/mm/lru_gen/.
PSD (Profile Sync Daemon): A systemd user service that moves browser profiles (Firefox, Chromium, Vivaldi, etc.) into a tmpfs RAM filesystem at login and syncs them back to disk periodically and at logout. Reduces SSD write wear and improves browser performance. Pre-enabled for all users on ShaniOS.
@flatpak: The Btrfs subvolume mounted at /var/lib/flatpak. Stores all system-wide Flatpak runtimes, applications, and their data. Shared across @blue and @green slots — installed Flatpaks are immediately available regardless of which slot is booted and survive all updates and rollbacks.
@containers: The Btrfs subvolume mounted at /var/lib/containers. Stores all Podman container images, volumes, and overlay layers for the current user. CoW is enabled, so container layers are efficiently deduplicated by bees. Survives all system updates and rollbacks.
@swap: The Btrfs subvolume that holds the system swapfile. Created on first boot, sized to match installed RAM. CoW is disabled (nodatacow) on this subvolume — a requirement for swapfiles on Btrfs. Enables reliable hibernation. The resume= and resume_offset= kernel parameters pointing to this swapfile are embedded in the UKI.
Apptainer: An HPC/scientific container runtime (formerly Singularity) pre-installed on ShaniOS. Unlike Docker/Podman, Apptainer containers run as the calling user with no daemon and no privilege escalation — safe for multi-user environments and reproducible research. Consumes SIF (SquashFS Image Format) container images. Compatible with Docker Hub and OCI registries via automatic image conversion.
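Many of the terms above can be observed directly on a running ShaniOS system. A quick inspection sketch:

```shell
# List all Btrfs subvolumes (expect @blue, @green, @data, @flatpak, ...)
sudo btrfs subvolume list /

# Show which slot is currently booted
cat /data/current-slot

# Show Btrfs mounts with their subvol= options
findmnt -t btrfs -o TARGET,SOURCE,OPTIONS
```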

Contribute

ShaniOS is open-source and welcomes community contributions.

Code Contributions

Contribute to core development, tools, or installer

GitHub

Bug Reports

Help improve stability and reliability

Telegram

Documentation

Improve this wiki and make ShaniOS more accessible

Edit Wiki