Comprehensive guide to the immutable Linux OS with atomic updates
Welcome to the ShaniOS technical documentation. This wiki provides comprehensive information about ShaniOS's architecture, installation, configuration, and daily use.
ShaniOS is an immutable Linux desktop built on Arch Linux. It ships in two editions — GNOME and KDE Plasma — and is designed for users who want a stable, secure, low-maintenance system without sacrificing full Linux capability. In philosophy it is similar to Fedora Silverblue or SteamOS, but built on Arch's rolling release model for maximum package freshness.
Key ideas that define how ShaniOS works:

- Blue-green deployment: Two complete system images (@blue and @green) are maintained at all times. Updates are written to the inactive image; you boot into it only when it's ready. The previous image remains as an instant rollback target.
- Security by default: A hardened LSM stack (lsm=landlock,lockdown,yama,integrity,apparmor,bpf), LUKS2 argon2id encryption, TPM2 auto-unlock, Secure Boot, Intel ME disabled by default, and every OS image SHA256+GPG verified before deployment.

New to ShaniOS? Visit shani.dev for a general introduction. This wiki focuses on technical implementation and usage details.
ShaniOS is an immutable Linux distribution that brings enterprise DevOps practices to desktop computing. Built on Arch Linux with Btrfs filesystem, it provides atomic updates, instant rollback, and system integrity by design. It ships as two editions — GNOME and KDE Plasma — and works out of the box with no post-install tweaking required.
The core pillars of ShaniOS are:
- Read-only root: The root filesystem (/) is mounted read-only. System binaries and libraries cannot be changed at runtime — by you, by software, or by malware. The only way to modify the OS is through the controlled shani-deploy update pipeline.
- Blue-green deployment: Two complete bootable system images (@blue and @green) are maintained. While one is active, updates are written to the other. On reboot you switch to the updated image. The previous one is kept as a one-command rollback target.
- Persistent data: User data, the /etc overlay, Flatpak apps, containers, VMs, and service credentials all live in dedicated Btrfs subvolumes that survive every update and rollback, always. A dedicated @nix subvolume is pre-created for use with the Nix package manager.
- Security by default: A hardened LSM stack (lsm=landlock,lockdown,yama,integrity,apparmor,bpf), LUKS2 argon2id full-disk encryption, TPM2 auto-unlock, Secure Boot with MOK, Intel ME disabled by default, and every OS image is SHA256+GPG verified before deployment.

Immutable OS separates read-only system layers from writable user/config layers — every layer has a defined, auditable behaviour
ShaniOS comes fully equipped with a comprehensive software stack, carefully curated for desktop computing, development, gaming, and professional workloads. Everything works out of the box—no configuration required.
- Boot tooling: gen-efi; cpio (initramfs archive format)
- Firmware updates: fwupd.service and fwupd-refresh.timer are enabled at install time
- Snap confinement: snapd.apparmor.service loads Snap confinement profiles automatically
- Networking and tunnels: Tailscale (state persisted in /data/varlib/tailscale), cloudflared zero-trust tunnels (state persisted in /data/varlib/cloudflared)
- Nix: nix-daemon.socket is enabled at boot and all Nix data lives in the dedicated @nix Btrfs subvolume shared across both slots. Nix packages survive all updates and rollbacks. A channel must be added on first use before installing packages.
- Containers: Podman (podman.socket enabled at boot), podman-docker (drop-in Docker CLI replacement), podman-compose (Docker Compose support), buildah (OCI image builder, no daemon), skopeo (OCI image inspection and copying between registries), conmon (container monitor), slirp4netns (user-mode networking for rootless containers), netavark + aardvark-dns (Podman network stack), Distrobox (BoxBuddy GUI installable from Flathub), LXC (@lxc subvolume), LXD (@lxd subvolume, lxd.socket enabled), lxcfs (filesystem virtualisation for containers, lxcfs.service enabled), systemd-nspawn (@machines subvolume), Apptainer (HPC/scientific container runtime, formerly Singularity), Snap (@snapd subvolume, snapd.socket enabled), fuse-overlayfs, catatonit (init for containers)
- Android: Waydroid (waydroid-container.service enabled at boot), python-pyclip (clipboard integration between Android and the host), firewall rules pre-configured, android-tools (adb, fastboot), android-udev
- VM guest additions: shani-video-guest.target pulls in vboxservice, vmtoolsd, vmware-vmblock-fuse, and spice-vdagentd automatically when the package is installed

The primary user is automatically configured with appropriate permissions during installation. ShaniOS also watches for newly created users: the shani-user-setup.path unit monitors /etc/passwd for changes and triggers shani-user-setup.service whenever a new regular user (UID 1000–59999) is detected. That service automatically adds the user to all required groups and sets their default shell to /bin/zsh.
This means any user created post-installation via the GUI or useradd/adduser gets the same setup automatically — both of those commands are also wrapped to inject the default group list.
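The watch-and-configure mechanism described above can be sketched as an ordinary systemd path/service pair. The unit names come from this page, but the bodies below are an illustrative reconstruction, not ShaniOS's shipped files:

```ini
# /etc/systemd/system/shani-user-setup.path (illustrative sketch)
[Unit]
Description=Watch /etc/passwd for newly created users

[Path]
# Fires shani-user-setup.service whenever /etc/passwd changes
PathModified=/etc/passwd

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/shani-user-setup.service (illustrative sketch)
[Unit]
Description=Assign default groups and shell to new regular users

[Service]
Type=oneshot
# Hypothetical helper path; the real service scans for UIDs 1000-59999,
# adds the required groups, and sets the shell to /bin/zsh
ExecStart=/usr/bin/shani-user-setup
```

A path unit with the same name as a service triggers that service by default, so no explicit Unit= line is needed in the [Path] section.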
- Virtualization management: virsh and virt-manager

ShaniOS includes extensive performance, gaming, and reliability optimizations out of the box, eliminating the need for manual tweaking. These enterprise-grade optimizations are pre-configured and active from first boot.
ShaniOS is optimized for gaming performance with multiple layers of system tuning:
Gaming Hardware Support:
Kernel-Level Gaming Optimizations:
- Transparent Huge Pages set to madvise for selective large page usage, improving performance for games that explicitly request it

ShaniOS enables several system services for intelligent resource management:
Members of the realtime group receive permission to access HPET and RTC devices directly. This is essential for professional audio production, real-time multimedia applications, and low-latency gaming. Users in the realtime group can run applications with real-time scheduling priorities.

ShaniOS enables comprehensive hardware support services by default:
ShaniOS provides a modern, feature-rich shell environment that rivals the best developer setups:
You can switch your default shell at any time with chsh while maintaining consistent functionality across all options.

ShaniOS performs regular background maintenance to keep your system healthy without manual intervention. These systemd timers and services are enabled by default:
- Deduplication: The bees daemon performs ongoing block-level deduplication across all Btrfs subvolumes. It is auto-configured at every boot by beesd-setup.service, which writes a per-UUID config to /etc/bees/ and enables the beesd@<UUID>.service unit. The hash database size is automatically scaled to disk capacity (256 MB per TB, capped at 1 GB). This works alongside the CoW sharing between @blue and @green to maximise storage efficiency over time.
- Boot health tracking: mark-boot-in-progress plants a flag and clears previous results; bless-boot calls bootctl set-good early; mark-boot-success writes /data/boot-ok once multi-user.target is reached; and a 15-minute timer runs check-boot-failure to record the booted slot in /data/boot_failure if the system never reached a successful state.
- Flatpak maintenance: timers clean up unused Flatpak data (--delete-data) and run flatpak repair automatically. Both fire 5 minutes after boot then every 12 hours with a 15-minute randomized delay to avoid hammering servers simultaneously.

All maintenance operations are scheduled during low-usage periods and use minimal system resources to avoid impacting active work.
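The hash-database sizing rule (256 MB per TB of disk, capped at 1 GB) is simple enough to sketch. This is illustrative arithmetic, not the actual beesd-setup.service logic:

```shell
# Sketch of the bees hash-table sizing rule: 256 MB per TB, capped at 1 GB.
bees_db_mb() {
  # $1: disk size in whole TB
  mb=$(( $1 * 256 ))
  if [ "$mb" -gt 1024 ]; then
    mb=1024            # cap at 1 GB
  fi
  echo "$mb"
}

bees_db_mb 2   # a 2 TB disk gets a 512 MB hash table
bees_db_mb 8   # 8 TB would want 2048 MB, capped to 1024
```

A larger hash table lets bees remember more block hashes and therefore find more duplicate extents, at the cost of RAM and disk for the database itself.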
Note: All these optimizations are pre-configured and active from first boot. No manual configuration, tweaking, or performance tuning required. ShaniOS provides enterprise-grade optimization out of the box.
If you're coming from Ubuntu, Fedora, Arch, or any mutable Linux distro, the main adjustment is how you install software and make system-level changes. Everything else — your dotfiles, shell, /etc configs, and all files under /home — works exactly as you'd expect.
Stop thinking about "installing software into the system." Think instead in terms of these four layers:
- Flatpak replaces apt install / pacman -S for GUI applications. Flathub is pre-configured and ready to use from day one.
- Distrobox gives you full mutable distro containers with apt or pacman, and apps that export to your launcher.
- Nix (the @nix subvolume) covers CLI tools and language runtimes with pinned versions and zero conflicts. Think of it as a supercharged user-space package manager. The @nix subvolume is shared across both slots so installed Nix packages survive updates and rollbacks. Add a channel with nix-channel --add before installing packages for the first time.
- The /etc overlay still works exactly like a normal /etc. Edit any config file with sudo nano, sudo vim, etc. — changes persist across all updates.

| Traditional approach | ShaniOS equivalent |
|---|---|
| sudo pacman -S foo / sudo apt install foo | flatpak install flathub foo, or Distrobox |
| sudo pip install globally | pip install --user, or nix-env -iA nixpkgs.python3Packages.foo, or Distrobox |
| sudo npm install -g | nix-env -iA nixpkgs.nodejs, or npm install -g inside Distrobox |
| make install to system paths | Build and install inside Distrobox; export binaries to host with distrobox-export --bin |
| Modify /usr, /opt, /bin directly | For config files: use the /etc overlay. For binaries: Distrobox or Nix. |
Dotfiles work normally. Everything in /home is fully writable. Your ~/.zshrc, ~/.config/, ~/.local/ — unchanged. The immutability applies only to the OS itself, not your user space.
ShaniOS's immutability fundamentally changes how you interact with the system. Understanding this concept is key to using ShaniOS effectively.
You can no longer:

- Run sudo pip install globally (use pip install --user, Nix, or Distrobox)
- Run sudo npm install -g to system paths (use Nix or install inside Distrobox)
- Use make install to install built software into system directories (build and export from Distrobox)

In exchange:

- Malware cannot modify system files or persist across reboots
- Updates are atomic: they either work completely or fail safely
- Instant recovery from failed updates or system issues
- System state is always predictable and reproducible
ShaniOS implements blue-green deployment using Btrfs subvolumes—a strategy adapted from DevOps for desktop Linux.
Two complete, independently bootable system images alternate as active/standby — shared subvolumes (user data, Flatpaks, containers, Nix) persist unchanged across every update and rollback
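The alternation rule itself is trivial: the update target is always the slot you are not currently running. A sketch (illustrative helper, not part of shani-deploy):

```shell
# Blue-green slot alternation: updates always land in the inactive slot.
next_slot() {
  if [ "$1" = "blue" ]; then
    echo green
  else
    echo blue
  fi
}

next_slot blue    # green: an update applied while running @blue targets @green
next_slot green   # blue
```

Because the running slot is never written to, a failed update leaves the active system untouched and the previous slot remains bootable.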
ShaniOS uses an intelligent multi-layered update system with automatic checking, user notifications, and the shani-deploy tool for atomic system updates.
7-step zero-downtime update process with automatic rollback on boot failure
ShaniOS selectively persists data across immutable system updates through bind mounts and dedicated subvolumes.
Three clear categories: replaced-on-update system files, persistent Btrfs subvolumes (including @nix, @flatpak, @containers shared by both slots), and volatile tmpfs cleared each boot with key service state bind-mounted back from @data
| Location | Type | Behavior |
|---|---|---|
| / | Read-only | Immutable system files |
| /etc | Overlay | Writable, persists in @data |
| /var | tmpfs | Volatile, cleared on reboot |
| /var/log | Btrfs @log | Persistent logs |
| /var/lib/* | Bind mount | Selective persistence from @data/varlib |
| /var/spool/* | Bind mount | Selective persistence from @data/varspool |
| /home | Btrfs @home | Persistent user data |
Critical service data is bind-mounted from @data subvolume:
# Example bind mounts from fstab
# All bind mounts use these options for correct boot ordering:
# bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data
# Network configuration
/data/varlib/NetworkManager /var/lib/NetworkManager none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0
# Bluetooth pairings
/data/varlib/bluetooth /var/lib/bluetooth none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0
# systemd state
/data/varlib/systemd /var/lib/systemd none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0
The following directories are bind-mounted from @data/varlib/ to /var/lib/, surviving all updates and slot switches:
The following directories are bind-mounted from @data/varspool/ to /var/spool/:
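Following the same fstab pattern, persisting the state of an additional service is a one-line change. The service name below (myservice) is purely hypothetical, not something ShaniOS ships; the directory must first be created under /data/varlib:

```
# Hypothetical extra bind mount, following the standard ShaniOS pattern
/data/varlib/myservice /var/lib/myservice none bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data 0 0
```

The nofail and x-systemd.* options keep the boot ordered (after /var and /data are mounted) and non-fatal if the source directory is missing.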
ShaniOS intelligently handles bind mounts during configuration:
| Component | Minimum | Recommended |
|---|---|---|
| Processor | x86_64 dual-core with VT-x/AMD-V | x86_64 quad-core or better |
| Memory | 4 GB RAM | 8 GB RAM or more |
| Storage | 32 GB (dual-image architecture) | 64 GB or more |
| Firmware | UEFI (required) | UEFI with TPM 2.0 |
| Installation Media | 8 GB USB drive | 16 GB USB 3.0 drive |
Why 32GB minimum? ShaniOS maintains two complete system images (@blue and @green) for atomic updates. However, Btrfs Copy-on-Write shares unchanged data between them, resulting in only ~18% overhead compared to traditional systems.
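The ~18% figure can be sanity-checked with back-of-envelope arithmetic. The numbers below (a 15360 MB image, 82% of blocks shared) are illustrative assumptions chosen to match that figure, not measured ShaniOS values:

```shell
# Why two slots don't cost twice the space: the second slot only pays
# for blocks that differ from the first. Illustrative numbers only.
image_mb=15360    # size of one complete system image
shared_pct=82     # fraction of blocks CoW-shared with the other slot
extra_mb=$(( image_mb * (100 - shared_pct) / 100 ))
echo "second slot adds ${extra_mb} MB instead of a full ${image_mb} MB copy"
```

After an update the shared fraction shrinks (the slots diverge), and bees deduplication gradually re-shares identical blocks between them.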
ShaniOS uses a simple two-partition layout — there are no separate /home, /var, or swap partitions. All subvolumes live within the single Btrfs partition.
| Partition | Filesystem | Size | Purpose |
|---|---|---|---|
| EFI System Partition (ESP) | FAT32 | ~512 MB | Bootloader, UKI images — mounted at /boot/efi |
| Root partition | Btrfs (or LUKS2 → Btrfs) | Remainder of disk | All system subvolumes (@blue, @green, @home, @data, etc.) |
When full-disk encryption is chosen, the Btrfs partition is wrapped in a LUKS2 container (/dev/mapper/shani_root). The ESP is never encrypted — only the root partition is.
Configure your firmware before installation (typically accessed via F2, F10, Del, or Esc during startup):
Disable legacy/CSM mode. ShaniOS requires UEFI
Fast Boot can interfere with USB boot and Linux installation
Required for installation. Can be re-enabled after enrolling ShaniOS MOK keys
Ensures optimal disk performance and compatibility
Enable Intel VT-x or AMD-V for container support
Download the ShaniOS ISO from shani.dev and write it to USB:
Do Not Use Ventoy: Ventoy has known compatibility issues with ShaniOS's immutable architecture and will cause installation failures.
Installation takes approximately 10-15 minutes:
Press F12, F2, or Del during startup. Select your USB drive from the boot menu.
Choose the installation option from the boot menu
Select language, timezone, and keyboard layout
Choose target disk and partitioning scheme (automatic recommended)
Enable LUKS2 full-disk encryption. Recommended for laptops and portable systems.
The installer creates Btrfs subvolumes, installs the base system, and configures the bootloader
Remove USB drive when prompted and reboot into ShaniOS
ShaniOS uses the Plymouth BGRT boot theme. Plymouth provides a smooth graphical boot experience, suppressing kernel and systemd messages from the screen. The BGRT (Boot Graphics Resource Table) theme reads the manufacturer's logo directly from the UEFI firmware and displays it during boot — providing a seamless transition from firmware to OS that matches the device's branding. On laptops and OEM hardware this typically shows the manufacturer's logo; on custom-built machines it shows the motherboard vendor's logo or a fallback graphic.
If LUKS2 full-disk encryption is enabled, Plymouth presents the passphrase prompt over the boot animation — no raw terminal text. With TPM2 auto-unlock enrolled, even this prompt is skipped and the disk unlocks silently.
On first boot after installation, ShaniOS automatically runs the initial deployment in the background. This is a one-time process that takes a few minutes and requires no user interaction:
- All persistent Btrfs subvolumes (@home, @data, @cache, @log, @flatpak, @containers, @nix, @snapd, @waydroid, @lxc, @lxd, @machines, @libvirt, @qemu, @swap, and more) are created and mounted
- A swapfile is created in @swap sized to your RAM, with CoW disabled — hibernation is automatically configured
- The @blue and @green slots are prepared
- The beesd deduplication daemon is configured for your Btrfs volume UUID

After this completes, your system is fully configured — no manual post-install steps required.
After first deployment completes, the Initial Setup wizard guides you through:
The wizard runs automatically. If you skip it, re-run with gnome-initial-setup (GNOME) or from System Settings → Welcome (KDE).
- Flathub is ready: no flatpak remote-add needed — open GNOME Software or KDE Discover and browse apps immediately.
- To start using Nix, add a channel first: nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs && nix-channel --update. See the Nix section for full details.
- For Android apps, run sudo waydroid-helper init for automatic setup. Firewall rules are already configured. See the Android section.
- Run cat /data/current-slot to confirm whether you booted into @blue or @green.

ShaniOS is designed for OEM and fleet use. Every machine imaging from the same signed ISO will boot into an identical, verified state. The Initial Setup wizard runs on first user login per machine, so user-specific personalisation (account, language, network) is captured without requiring per-device pre-configuration.
- Machine-level configuration (/etc customisations, systemd units, SSH keys, service configs) lives in the @data OverlayFS and survives every update and rollback without reimaging
- passim (local content sharing daemon) broadcasts available fwupd firmware payloads via mDNS — machines on the same LAN avoid downloading the same firmware repeatedly

User settings location: Your account preferences and avatar are stored in /data/varlib/AccountsService, which is bind-mounted and persistent across all system updates and rollbacks.
ShaniOS leverages advanced Btrfs features for immutability and efficiency.
Btrfs CoW minimizes storage duplication:
# Default mount options
compress=zstd,space_cache=v2,autodefrag
Mount options vary by subvolume; VM and swap subvolumes additionally use nodatacow:
| Subvolume(s) | Mount Options | Notes |
|---|---|---|
| @blue / @green | Mounted by dracut via kernel cmdline: rootflags=subvol=@blue,ro,noatime,compress=zstd,space_cache=v2,autodefrag | Not in fstab — selected at boot by systemd-boot, mounted read-only by initramfs |
| @root, @home, @data | rw,noatime,compress=zstd,space_cache=v2,autodefrag | Core persistent data — always mounted, no nofail |
| @nix | nofail,noatime,compress=zstd,space_cache=v2,autodefrag | CoW kept intentionally for bees deduplication; nofail (created on first use) |
| @cache, @log | rw,noatime,compress=zstd,space_cache=v2,autodefrag,x-systemd.after=var.mount,x-systemd.requires=var.mount | Mount-ordered after /var |
| @flatpak, @snapd, @waydroid, @containers, @machines, @lxc, @lxd | nofail,noatime,compress=zstd,space_cache=v2,autodefrag,x-systemd.after=var.mount,x-systemd.requires=var.mount | nofail — system boots cleanly even if not yet created |
| @libvirt, @qemu | nofail,noatime,nodatacow,nospace_cache,x-systemd.after=var.mount,x-systemd.requires=var.mount | nodatacow required for VM disk performance; nospace_cache required with nodatacow; no compression |
| @swap | nofail,noatime,nodatacow,nospace_cache | nodatacow + nospace_cache mandatory for swapfile correctness on Btrfs |
All subvolumes use noatime to improve performance.
Btrfs snapshots are instant, space-efficient copies of a subvolume at a point in time. Because they use CoW, a fresh snapshot consumes almost no additional space — only changes accumulate. ShaniOS automates slot snapshots via shani-deploy, but you can create and manage snapshots of your own data freely.
# Create a read-only snapshot of /home (best practice for backups)
sudo btrfs subvolume snapshot -r /home /data/snapshots/home-$(date +%Y%m%d)
# Create a writable snapshot (for testing changes to a subvolume)
sudo btrfs subvolume snapshot /home /data/snapshots/home-writable
# List all subvolumes and snapshots on the filesystem
sudo btrfs subvolume list /
# Show snapshot details (creation time, ID, parent)
sudo btrfs subvolume show /data/snapshots/home-$(date +%Y%m%d)
# Restore from a snapshot — replace @home contents with snapshot
# (Do this from a live USB or alternate slot to avoid conflicts)
sudo btrfs subvolume delete /home
sudo btrfs subvolume snapshot /data/snapshots/home-20250101 /home
# Delete an old snapshot to free space
sudo btrfs subvolume delete /data/snapshots/home-20240601
# Send a snapshot to another drive as a backup (incremental after first)
sudo btrfs send /data/snapshots/home-20250101 | sudo btrfs receive /mnt/backup/
# Incremental send (only sends the diff)
sudo btrfs send -p /data/snapshots/home-20250101 /data/snapshots/home-20250201 \
| sudo btrfs receive /mnt/backup/
Snapshots are not backups if they live on the same disk — a disk failure loses both. Use btrfs send to an external drive, or restic/rclone to back up to cloud storage. The snapshots ShaniOS keeps of @blue/@green are exclusively for rollback; they are stored on the same drive.
The bees daemon runs continuously in the background and deduplicates data across all subvolumes. To check its activity and measure compression savings:
# Check bees daemon status
sudo systemctl status "beesd@*"
# View recent dedup activity from journal
sudo journalctl -u "beesd@*" --since today | grep -E "dedup|hash|block|crawl"
# Run an on-demand deduplication pass (in addition to background bees)
sudo shani-deploy --optimize
# Check compression ratio per subvolume
sudo compsize /
sudo compsize /home
sudo compsize /nix
sudo compsize /var/lib/flatpak
# Full storage usage report
sudo shani-deploy --storage-info
ShaniOS uses Btrfs with a sophisticated subvolume layout:
Full Btrfs subvolume map — system slots (@blue/@green) replace on update; all other subvolumes persist independently; @nix and @flatpak are shared by both slots
| Subvolume | Mount Point | Purpose |
|---|---|---|
| @blue / @green | / | Root filesystems for blue-green deployment |
| @root | /root | Root user home — persists across slot switches |
| @home | /home | User data and personal configurations |
| @data | /data | Overlay storage and persistent service data (bind-mount source tree) |
| @nix | /nix | Nix package manager store — shared across both slots, CoW kept for compression/dedup via bees |
| @log | /var/log | System logs across reboots |
| @cache | /var/cache | Package manager cache |
| @flatpak | /var/lib/flatpak | Flatpak applications and runtimes |
| @snapd | /var/lib/snapd | Snap package storage, revisions, and writable snap data |
| @waydroid | /var/lib/waydroid | Android system images and data |
| @containers | /var/lib/containers | Podman/Docker container storage |
| @machines | /var/lib/machines | systemd-nspawn containers |
| @lxc | /var/lib/lxc | LXC containers |
| @lxd | /var/lib/lxd | LXD container and VM storage |
| @libvirt | /var/lib/libvirt | Virtual machine disk images (nodatacow) |
| @qemu | /var/lib/qemu | Bare QEMU VM disk images (nodatacow) |
| @swap | /swap | Swap file container (nodatacow) |
Because /var is volatile (systemd.volatile=state mounts a tmpfs over /var on every boot), all service state that must survive reboots is stored in the @data subvolume and bind-mounted back into place. The bind mounts are grouped by function in /etc/fstab:
| Category | Source (@data) | Target |
|---|---|---|
| System Core | /data/varlib/dbus /data/varlib/systemd | /var/lib/dbus /var/lib/systemd |
| Font Rendering | /data/varlib/fontconfig | /var/lib/fontconfig |
| Networking | /data/varlib/NetworkManager /data/varlib/bluetooth /data/varlib/firewalld | /var/lib/NetworkManager /var/lib/bluetooth /var/lib/firewalld |
| File Sharing | /data/varlib/samba /data/varlib/nfs | /var/lib/samba /var/lib/nfs |
| Remote Access & VPN | /data/varlib/caddy /data/varlib/tailscale /data/varlib/cloudflared /data/varlib/geoclue | /var/lib/caddy /var/lib/tailscale /var/lib/cloudflared /var/lib/geoclue |
| Display Manager | /data/varlib/gdm /data/varlib/sddm /data/varlib/colord | /var/lib/gdm /var/lib/sddm /var/lib/colord |
| Audio & Peripherals | /data/varlib/pipewire /data/varlib/rtkit /data/varlib/cups /data/varlib/sane /data/varlib/upower | /var/lib/pipewire /var/lib/rtkit /var/lib/cups /var/lib/sane /var/lib/upower |
| Auth & Security | /data/varlib/fprint /data/varlib/AccountsService /data/varlib/boltd /data/varlib/sudo /data/varlib/sshd /data/varlib/polkit-1 /data/varlib/tpm2-tss | /var/lib/fprint /var/lib/AccountsService /var/lib/boltd /var/lib/sudo /var/lib/sshd /var/lib/polkit-1 /var/lib/tpm2-tss |
| Hardware & Firmware | /data/varlib/fwupd | /var/lib/fwupd |
| Data Protection | /data/varlib/fail2ban /data/varlib/restic /data/varlib/rclone /data/varlib/appimage | /var/lib/fail2ban /var/lib/restic /var/lib/rclone /var/lib/appimage |
| Job Scheduling | /data/varspool/anacron /data/varspool/cron /data/varspool/at | /var/spool/anacron /var/spool/cron /var/spool/at |
| Print & Mail Spools | /data/varspool/cups /data/varspool/samba /data/varspool/postfix | /var/spool/cups /var/spool/samba /var/spool/postfix |
The root slots (@blue and @green) are not in fstab — they are mounted read-only by dracut/initramfs via the kernel command line (rootflags=subvol=@blue,ro,noatime,compress=zstd,space_cache=v2,autodefrag). All other Btrfs subvolumes use noatime,compress=zstd,space_cache=v2,autodefrag by default. VM disk subvolumes (@libvirt, @qemu) and the swap subvolume (@swap) use nodatacow,nospace_cache to avoid CoW fragmentation; note that compression is incompatible with nodatacow and is not applied to these subvolumes. Container and virtualisation subvolumes are mounted with nofail so the system boots cleanly even if they have not yet been created. The /etc overlay uses index=off,metacopy=off for maximum compatibility with the read-only lower layer. The /var tmpfs is provided by the systemd.volatile=state kernel parameter; a corresponding overlay entry exists in fstab but is commented out. All bind mounts use bind,nofail,x-systemd.after=var.mount,x-systemd.requires-mounts-for=/data to ensure ordering and graceful degradation.
OverlayFS enables writable /etc on read-only root.
# /etc overlay mount (from fstab)
overlay /etc overlay rw,lowerdir=/etc,upperdir=/data/overlay/etc/upper,workdir=/data/overlay/etc/work,index=off,metacopy=off,x-systemd.requires-mounts-for=/data 0 0
The /etc overlay is stored under @data with this structure:
@data/overlay/etc/
├── lower/ # (unused, just structure)
├── upper/ # Your changes stored here
└── work/ # Overlay working directory
# List all overlay modifications
ls -la /data/overlay/etc/upper
# Compare with base system
diff -r /etc /data/overlay/etc/upper
Understanding how ShaniOS boots:
Six-stage boot: UEFI Secure Boot → systemd-boot → signed UKI → dracut+LUKS2/TPM2 → systemd filesystem assembly → desktop; security layer annotations show what's verified at each step
After systemd takes over the root filesystem, ShaniOS runs a sequence of custom services before the desktop starts:
- Data directory setup: creates the required directory structure under @data (overlay upper/work dirs, varlib, varspool) using systemd-tmpfiles. Runs after data.mount and before the /etc overlay is applied.
- beesd-setup.service: configures the bees background deduplication daemon for the Btrfs volume. Runs after the root filesystem and etc-overlay.mount, before the daemon reload. Idempotent — skips if already configured at the same version.
- Overlay service start: runs mount -a to apply all overlay and bind mounts from /etc/fstab, then issues a non-blocking systemctl daemon-reload so systemd discovers any new unit files that appeared in the overlay. Runs after beesd-setup. Dependent units wait on a completion flag (/run/start-overlay-services.done), with a 30-second timeout.
- shani-user-setup.path: monitors /etc/passwd for changes. When a new regular user (UID 1000–59999) is detected, automatically adds them to the required groups (input, realtime, video, sys, cups, lp, scanner, nixbld, lxc, lxd, kvm, libvirt) and sets their shell to /bin/zsh.
- Boot failure detection: compares /data/boot_failure and /data/current-slot; if a fallback boot is confirmed (booted slot ≠ current-slot and a matching failure file exists) it presents a GUI dialog (yad → zenity → kdialog → console) titled "Boot Failure Detected" prompting the user to rollback. If approved, it opens a terminal running pkexec shani-deploy --rollback, monitors completion via a status file, and on success offers to reboot immediately. Skips silently if /data/boot-ok is present or if no display is available.

ShaniOS uses systemd-boot with Unified Kernel Images (UKI):
Boot menu entries clearly show system state:
After each deployment, shani-deploy rewrites both boot entries — the newly updated slot becomes "Candidate" and the currently running slot stays "Active". The labels do not switch automatically at boot; they are explicitly updated by the deploy tool.
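As a sketch, the two loader entries might look like this after deploying an update while running @blue. Titles and file names are illustrative reconstructions (systemd-boot type #1 entry format, where the efi key points at the UKI), not the exact files shani-deploy writes:

```ini
# /boot/efi/loader/entries/shanios-blue.conf (illustrative)
title  ShaniOS blue (Active)
efi    /EFI/Linux/shanios-blue.efi

# /boot/efi/loader/entries/shanios-green.conf (illustrative)
title  ShaniOS green (Candidate)
efi    /EFI/Linux/shanios-green.efi
```

On the next deployment the roles swap: the tool rewrites both files so the freshly written slot carries the "Candidate" label.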
ShaniOS creates a UEFI boot entry during installation:
The boot chain when Secure Boot is enabled:
UEFI Firmware
↓
shimx64.efi (signed by Microsoft — validates the next stage via MOK)
↓
grubx64.efi (systemd-boot, named grubx64.efi for shim compatibility, signed by MOK; shim can also chain to mmx64.efi, the MOK manager, for key enrollment)
↓
Unified Kernel Image / shanios-blue.efi or shanios-green.efi (signed by MOK, built by gen-efi)
↓
Linux Kernel + Initramfs (embedded in UKI)
ShaniOS uses these boot parameters, generated per-slot by gen-efi and saved to /etc/kernel/install_cmdline_<slot>. If that file already exists for a slot, gen-efi reuses it on subsequent UKI rebuilds rather than regenerating. The file is embedded directly into the UKI at build time by dracut.
quiet splash systemd.volatile=state ro
lsm=landlock,lockdown,yama,integrity,apparmor,bpf
rootfstype=btrfs
rootflags=subvol=@blue,ro,noatime,compress=zstd,space_cache=v2,autodefrag
root=/dev/mapper/shani_root # If encrypted
rd.luks.uuid=... rd.luks.name=... rd.luks.options=tpm2-device=auto # If TPM enrolled
rd.vconsole.keymap=... # Injected from /etc/vconsole.conf KEYMAP= if set
resume=UUID=... resume_offset=... # If swapfile exists (offset via btrfs inspect-internal map-swapfile)
Security-focused kernel parameters are enabled by default:
lsm=landlock,lockdown,yama,integrity,apparmor,bpf
This enables six Linux Security Modules simultaneously — Landlock (filesystem sandboxing), Lockdown (kernel modification restrictions), Yama (ptrace scope restrictions), Integrity/IMA/EVM (runtime file integrity measurement), AppArmor (mandatory access control), and BPF LSM (dynamic eBPF security hooks). Most distributions enable one or two; ShaniOS runs all six concurrently. See the Security Features section for full details on each module.
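To sanity-check that an lsm= list (for example, the contents of /sys/kernel/security/lsm on a running system) includes all six modules, a small helper can be sketched. check_lsm is a hypothetical illustration, not a ShaniOS tool:

```shell
# Verify an LSM list contains the six modules ShaniOS enables.
check_lsm() {
  active="$1"
  for m in landlock lockdown yama integrity apparmor bpf; do
    case ",$active," in
      *",$m,"*) ;;                         # module present, keep checking
      *) echo "missing: $m"; return 1 ;;   # report the first missing module
    esac
  done
  echo "all six LSMs present"
}

# On ShaniOS, the active list matches the kernel parameter exactly:
check_lsm "landlock,lockdown,yama,integrity,apparmor,bpf"
```

On a running system you would feed it `$(cat /sys/kernel/security/lsm)`; a typical distribution's list would fail the check at the first module it omits.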
The systemd.volatile=state kernel parameter creates a tmpfs (RAM-based) overlay for /var, making it volatile by default:
# From kernel command line
systemd.volatile=state
# Result: /var is a tmpfs, cleared on every reboot.
# Persistent data is restored via bind mounts into /var at boot:
# /var/log ← @log subvolume
# /var/cache ← @cache subvolume
# /var/lib/flatpak ← @flatpak subvolume
# /var/lib/containers ← @containers subvolume
# /var/lib/waydroid ← @waydroid subvolume
# /var/lib/machines ← @machines subvolume
# /var/lib/lxc ← @lxc subvolume
# /var/lib/libvirt ← @libvirt subvolume
# /var/lib/NetworkManager, bluetooth, etc. ← bind from @data/varlib/
ShaniOS automatically configures hibernation if a swapfile is created:
- The hibernation resume offset is computed with btrfs inspect-internal map-swapfile

Disk Space for Swapfile / zram fallback: If available disk space is less than RAM size at first deployment, swapfile creation is skipped and the system uses zram for swap instead. Hibernation will not be available in this case.
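The fallback rule can be sketched as a tiny decision function (hypothetical helper, sizes in GiB, illustrative only):

```shell
# First-deployment swap decision: a swapfile is only created when the
# disk can actually hold one sized to RAM.
choose_swap() {
  free_gb=$1
  ram_gb=$2
  if [ "$free_gb" -lt "$ram_gb" ]; then
    echo zram        # not enough space: zram swap, no hibernation
  else
    echo swapfile    # swapfile sized to RAM, hibernation available
  fi
}

choose_swap 100 16   # swapfile
choose_swap 8 16     # zram
```

The swapfile path enables hibernation because the resume target is a stable on-disk location; zram lives in RAM, so there is nothing to resume from.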
zram is a compressed RAM-based swap device. Under memory pressure, the kernel compresses pages and writes them to a zram device in RAM rather than to disk. This is dramatically faster than disk-based swap (no I/O or seek latency), reduces SSD write wear, and keeps the system responsive without touching storage. The trade-off is that zram competes for the same physical RAM it is compressing, so it is most effective when pages are compressible (most text, code, and data are). On ShaniOS, zram is managed by [email protected].
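[email protected] instances are typically driven by zram-generator, which reads a small declarative config. A minimal sketch follows; the values are illustrative defaults, not necessarily ShaniOS's shipped configuration:

```ini
# /etc/systemd/zram-generator.conf (illustrative sketch)
[zram0]
# Size the compressed swap device to half of physical RAM
zram-size = ram / 2
# zstd balances compression ratio against CPU cost
compression-algorithm = zstd
```

With this in place, the generator creates /dev/zram0 at boot and activates it as swap automatically; no fstab entry is needed.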
ShaniOS implements defence-in-depth security across multiple independent layers. The read-only root prevents runtime modification of system files even by root. Six Linux Security Modules run simultaneously via a single lsm= kernel parameter — most distributions enable one or two. LUKS2 with argon2id protects data at rest with a memory-hard key derivation function specifically resistant to GPU and ASIC brute-force. Flatpak sandboxes applications. AppArmor confines system services with enforced profiles. firewalld denies all unsolicited inbound connections from first boot. Intel ME modules are blacklisted, removing the low-level hardware management channel from the attack surface. Every OS image is SHA256 + GPG verified before deployment. Together, these layers mean that even a fully compromised application cannot permanently damage the OS — a reboot to the unmodified, verified system image is always available.
Read-only filesystem prevents malware from modifying system files even at runtime as root. Rebooting always restores the verified, unmodified state.
LUKS2 with argon2id — a memory-hard key derivation function designed to resist GPU and ASIC brute-force. Optional TPM 2.0 enrollment enables automatic unlocking on trusted hardware without a passphrase at every boot.
Flatpak provides application isolation and fine-grained permission control via Portals. Snap packages are additionally confined via AppArmor profiles loaded by snapd.apparmor.service.
Mandatory Access Control confines system services with profiles enforced from first boot. snapd.apparmor.service loads Snap confinement profiles automatically.
firewalld denies all inbound connections by default from first boot. Rules for KDE Connect and Waydroid networking are pre-configured. Zone-based — add rules only for services you explicitly enable.
Landlock, Lockdown, Yama, Integrity (IMA/EVM), AppArmor, and BPF LSM are all active simultaneously via lsm=landlock,lockdown,yama,integrity,apparmor,bpf. Most distributions run one or two; ShaniOS runs all six concurrently.
The Intel Management Engine kernel modules (mei, mei_me) are blacklisted by default, removing Intel's remote management interface from the attack surface. This is a genuine differentiator — most distributions leave ME active.
Every OS image is SHA256 + GPG verified before deployment. The public key is on public keyservers. The build system and deploy toolchain are public on GitHub — independently auditable end to end. A tampered or corrupted image is rejected outright; the update aborts, nothing changes.
The six LSMs are activated together via a single kernel command-line parameter embedded in the Unified Kernel Image at build time:
lsm=landlock,lockdown,yama,integrity,apparmor,bpf
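You can confirm the active LSM stack at runtime — the kernel exposes it via securityfs (the fallback message covers systems where securityfs is not mounted):

```shell
# Read the kernel's active LSM list; on ShaniOS this should match the lsm= parameter
active=$(cat /sys/kernel/security/lsm 2>/dev/null || echo "securityfs not mounted")
echo "Active LSMs: $active"
```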
Yama restricts ptrace scope and other process tracing. Prevents one user's processes from attaching a debugger to another user's processes — closes a common privilege escalation path.

snapd.apparmor.service loads Snap confinement profiles automatically.

The Intel Management Engine (ME) is a separate, low-power processor embedded in Intel chipsets that operates independently of the main CPU and OS — including when the system is powered off (as long as power is connected). The ME runs its own firmware and has broad access to system resources.
ShaniOS blacklists the ME kernel modules (mei and mei_me) by default. This removes the OS-level communication channel to the ME, reducing the attack surface accessible from within a running Linux session. This is a meaningful privacy and security measure that most Linux distributions do not implement.
# Verify ME modules are blacklisted
cat /etc/modprobe.d/blacklist-intel-me.conf
# Confirm the modules are not loaded
lsmod | grep mei # Should return nothing
Enterprise and high-security authentication methods work at first boot — no setup required:
Services such as pcscd.socket are pre-configured. pcscd.socket (PC/SC daemon) starts on demand when a card is accessed — no idle resource use.

# Test FIDO2 key detection
fido2-token -L
# List connected smart cards
opensc-tool -l
# Check fingerprint reader
fprintd-list $USER
# Check pcscd socket status
systemctl status pcscd.socket
Supply chain attacks — injecting malicious code between a trusted source and the end user — are one of the most serious threats to software distribution. ShaniOS's update model is designed with this in mind.
Every OS image is GPG-signed before distribution. Before shani-deploy extracts a new image it verifies both the SHA256 checksum and the GPG signature. A tampered or corrupted image is rejected outright — the update aborts and nothing changes on your system. The verification uses the key ID 7B927BFF...8014792, which is published on public keyservers and can be independently fetched and verified.
# Manually verify an image (shani-deploy does this automatically)
# Fetch the public key from keyservers
gpg --keyserver keys.openpgp.org --recv-keys 7B927BFF8014792
# Verify the signature file against the image
gpg --verify shanios-image.zst.sig shanios-image.zst
# Verify SHA256 checksum
sha256sum -c shanios-image.zst.sha256
The build system, deploy scripts, and full signing workflow are public on GitHub. You can verify the full chain yourself — from build to signed image to deployment — without trusting any single party's word.
ShaniOS supports UEFI Secure Boot via Machine Owner Keys (MOK).
If you enabled Secure Boot during installation:
The default MOK enrollment password is shanios.

MOK (Machine Owner Key): This allows ShaniOS's signed bootloader and kernel to boot with Secure Boot enabled. The password "shanios" is only used during the one-time enrollment process and is not stored.
To enable Secure Boot after installation:
sudo mokutil --import /etc/secureboot/keys/MOK.der
Set a one-time password when prompted
MOK Manager appears on reboot. Select "Enroll MOK" and enter your password
Enter BIOS/UEFI settings and enable Secure Boot if still disabled
sudo mokutil --list-enrolled
Note: If MOK Manager doesn't appear, repeat step 1 and reboot. It only triggers when a key is pending enrollment.
Enroll your LUKS encryption key to TPM 2.0 for automatic unlocking.
Security Warning: Without Secure Boot, attackers with physical access can replace your bootloader to steal the encryption key. With Secure Boot, TPM refuses to unlock if the system is tampered with.
# List available TPM 2.0 devices
systemd-cryptenroll --tpm2-device=list
# Check Secure Boot state
mokutil --sb-state
If disabled, consider enabling Secure Boot first
ShaniOS uses /dev/mapper/shani_root as the unlocked LUKS mapping. To find the underlying encrypted device:
# Get the physical encrypted device
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
echo "Your encrypted device is: $LUKS_DEVICE"
# Example output: /dev/sda2 or /dev/nvme0n1p2
With Secure Boot (recommended):
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 "$LUKS_DEVICE"
Without Secure Boot:
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0 "$LUKS_DEVICE"
Enter your LUKS password when prompted
gen-efi configure can only regenerate the UKI for the currently booted slot directly. To update both slots, use shani-deploy which chroots into the candidate slot automatically:
# Regenerate only the currently booted slot (e.g., if booted into @blue):
sudo gen-efi configure blue
# To regenerate the OTHER slot's UKI as well, trigger a redeployment:
sudo shani-deploy --force
Running gen-efi configure green while booted into @blue (or vice versa) will be rejected by the script to prevent generating a UKI with the wrong kernel. shani-deploy --force handles this correctly by chrooting into the candidate slot.
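To check which slot you are currently booted into before running gen-efi, inspect the subvolume mounted at / (assuming the slot name appears in the subvol= mount option, as with @blue/@green):

```shell
# Print the subvol= option of the root mount; on ShaniOS this names the active slot
findmnt -no OPTIONS / | tr ',' '\n' | grep '^subvol=' || echo "root is not a Btrfs subvolume"
```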
Reboot. System should unlock automatically.
# Verify enrollment
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo cryptsetup luksDump "$LUKS_DEVICE" | grep systemd-tpm2
To restore password-only unlocking:
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"
# Regenerate UKI for the currently booted slot:
sudo gen-efi configure blue # if booted into @blue
# or: sudo gen-efi configure green # if booted into @green
# To update the other slot's UKI too:
sudo shani-deploy --force
If you enable Secure Boot after TPM enrollment:
# Get the device
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
# Remove old enrollment
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"
# Re-enroll with Secure Boot protection
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 "$LUKS_DEVICE"
# Regenerate UKI for the currently booted slot (e.g. @blue):
sudo gen-efi configure blue
# Then force-redeploy so the other slot also gets an updated UKI:
sudo shani-deploy --force
Important:
Regenerate the current slot's UKI with gen-efi configure <slot>, then run shani-deploy --force to update the other slot.

ShaniOS includes a background service that automatically checks for updates:
The checker runs as a systemd user timer (shani-update.timer), firing 5 minutes after login then every 2 hours. It is slot-aware: if the system detects it is running from the candidate slot (a post-update test boot), it silently exits and sends a notification asking the user to validate the new system before the next update cycle. When an update is available it presents a GUI dialog (yad for X11/Wayland with backend fallback, then zenity, kdialog, or console) asking the user to install or defer. If deferred, it schedules a follow-up via systemd-run --user --on-active=86400s (24 hours). If approved, it opens a terminal window running pkexec shani-deploy under systemd-inhibit, monitors completion via a status file, and on success presents a reboot prompt. Logs are kept in ~/.cache/shani-update.log with automatic 1 MB rotation.

You can also run an update manually at any time:

sudo shani-deploy
The update process includes:
1. Self-update: shani-deploy fetches the latest version of itself from GitHub and re-executes if a newer version is found (can be skipped with --skip-self-update)
2. Slot verification: reads /data/current-slot and confirms it matches the running slot; detects and warns on slot mismatch
3. Download: fetches the new image (downloads.shani.dev, SourceForge mirror as fallback)
4. Verification: SHA256 checksum and GPG signature check (key 7B927BFF...8014792)
5. Backup: snapshot of the target slot (<slot>_backup_<timestamp>)
6. Extraction: zstd | btrfs receive, then snapshot to @candidate and set read-only
7. Subvolume setup (@swap is new)
8. UKI regeneration: gen-efi configure <slot>
9. Boot entry update: loader.conf default set to active slot
10. Storage report: compression statistics via compsize

Automatic Rollback — Full Boot-Counting Pipeline: ShaniOS uses a multi-layer system to detect and recover from bad boots:
1. An early service runs right after local-fs.target and creates /data/boot_in_progress while clearing any previous boot-ok, boot_failure, or boot_hard_failure markers. This flags every boot as "in progress" until proven successful.
2. A second service runs after basic.target and calls bootctl set-good, which tells systemd-boot that this boot attempt was good, clearing its internal try-counter. Only runs if boot_in_progress exists.
3. A success-marker service waits for multi-user.target and writes /data/boot-ok, then removes boot_in_progress. This is the definitive "system reached a usable state" marker.
4. A timer later fires check-boot-failure.service, which checks: if boot_in_progress still exists and boot-ok is absent, the boot stalled — it writes the failed slot name to /data/boot_failure. A dracut hook can write /data/boot_hard_failure even earlier for kernel-level failures.
5. At the next login, a startup check compares the booted slot against current-slot; if a boot_failure file exists for the expected slot, it presents a GUI dialog (yad / zenity / kdialog with console fallback) prompting the user to rollback. If the user approves, it opens a terminal and runs pkexec shani-deploy --rollback, monitors completion, then offers to reboot.
6. shani-deploy itself also performs a /data/current-slot vs booted-slot comparison, giving a second recovery path even if the user skips the GUI prompt.

A /data/deployment_pending flag prevents a partially-completed deploy from being lost if power fails mid-extraction. The shani-update user-mode checker also silently exits if it detects the system is still in a "candidate boot" validation state, suppressing spurious update prompts until stability is confirmed.
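The stall-detection check described above reduces to a simple marker-file test. A self-contained sketch, using a temporary directory in place of /data and a hypothetical slot name:

```shell
# Simulate /data with a temp dir
d=$(mktemp -d)
touch "$d/boot_in_progress"        # set by the early boot service
echo "blue" > "$d/current-slot"    # slot expected to be booted

# Boot stalled if the in-progress marker survived and the
# success marker (boot-ok) never appeared
if [ -e "$d/boot_in_progress" ] && [ ! -e "$d/boot-ok" ]; then
    cp "$d/current-slot" "$d/boot_failure"
fi

cat "$d/boot_failure"   # prints "blue": the failed slot is recorded
```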
ShaniOS supports multiple update channels:
Versions use a date-based format (YYYYMMDD, e.g. 20251201). The channel manifest (stable.txt or latest.txt) is fetched from SourceForge and contains the current image filename. Version comparison is lexicographic — newer dates are always greater — and the user-space checker (shani-update) also handles profile changes at the same version date as a valid update trigger.
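Because versions are zero-padded YYYYMMDD dates, plain string comparison is chronologically correct. A bash illustration with hypothetical version values:

```shell
current=20251130   # installed version (hypothetical)
remote=20251201    # version from the channel manifest (hypothetical)
# Lexicographic > on equal-length date strings matches chronological order
if [[ "$remote" > "$current" ]]; then
    echo "update available: $remote"
fi
```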
Set your preferred channel:
# Choose channel
sudo shani-deploy -t latest
# Or set permanently
echo "latest" | sudo tee /etc/shani-channel
# Show help
sudo shani-deploy --help
# Rollback to previous version (restores from Btrfs backup snapshot)
sudo shani-deploy --rollback
# Manual cleanup of old backups and downloaded images
sudo shani-deploy --cleanup
# Storage analysis: filesystem usage and per-subvolume compression stats
sudo shani-deploy --storage-info
# Manual deduplication run across @blue, @green, and backup subvolumes
# (bees handles continuous background dedup; this is for on-demand use)
sudo shani-deploy --optimize
# Dry run (simulate without making changes)
sudo shani-deploy --dry-run
# Force redeploy even if already on latest version
sudo shani-deploy --force
# Verbose output for troubleshooting
sudo shani-deploy --verbose
# Skip self-update of the deploy tool
sudo shani-deploy --skip-self-update
| Command | Description |
|---|---|
| sudo shani-deploy | Standard update: check → download → verify → deploy |
| sudo shani-deploy --rollback | Restore previous slot from its Btrfs backup snapshot |
| sudo shani-deploy --cleanup | Remove old backups and cached downloads to free space |
| sudo shani-deploy --storage-info | Print filesystem usage and per-subvolume compression stats |
| sudo shani-deploy --optimize | Run on-demand block-level deduplication across all slots |
| sudo shani-deploy --dry-run | Simulate the full update process without making any changes |
| sudo shani-deploy --force | Force a full redeploy even when already on the latest version |
| sudo shani-deploy --verbose | Enable verbose output for troubleshooting |
| sudo shani-deploy --skip-self-update | Skip the automatic self-update of the deploy tool itself |
| sudo shani-deploy -t latest | Run this update using the latest channel |
shani-deploy self-updates itself: On every run, shani-deploy fetches its own latest version from GitHub. If a newer version is found, it re-executes automatically before continuing. Use --skip-self-update to suppress this behaviour — useful in automated or scripted contexts. The current version is printed at startup and logged to the system journal.
If the system fails to boot after an update:
A dialog appears at next login (startup-check), showing which slot failed and offering a "Rollback Now" button. Click it to restore the broken slot from its Btrfs backup snapshot.

Or roll back manually:

sudo shani-deploy --rollback

If the system boots but is unstable: the check-boot-failure.timer fires 15 minutes post-boot. If mark-boot-success never ran (i.e. the system stalled before multi-user.target), the failure is recorded and the startup-check dialog will appear on next login.
Your personal data (/home, /data) is never affected by rollbacks — only the system core is restored.
fwupd is pre-installed and fwupd-refresh.timer runs automatically to check for new firmware. Update BIOS, NVMe controllers, SSD firmware, keyboard firmware, Thunderbolt devices, and other hardware supported by the Linux Vendor Firmware Service — no Windows, no USB boot drive, no manufacturer tools required.
# Refresh the LVFS metadata (normally automatic via fwupd-refresh.timer)
fwupdmgr refresh
# Check for available firmware updates
fwupdmgr get-updates
# Apply all available firmware updates
fwupdmgr update
# List all hardware devices fwupd can manage
fwupdmgr get-devices
# Show past firmware update history
fwupdmgr get-history
# Downgrade to a previous firmware version (if available)
fwupdmgr downgrade
passim — LAN firmware caching: passim is pre-installed alongside fwupd. It broadcasts available firmware payloads via mDNS/Avahi so that other ShaniOS machines on the same LAN can download firmware from a local machine rather than re-fetching from LVFS. In a multi-machine home or office environment this saves bandwidth and speeds up firmware updates across all devices automatically.
# OS (always use shani-deploy — never pacman -Syu)
sudo shani-deploy
# Flatpak apps (also auto-updates every 12 hours via timer)
flatpak update
# Snap packages (also auto-updates in background)
snap refresh
# Nix packages (imperative profile)
nix-channel --update && nix-env -u '*'
# Nix flakes
nix flake update
# Home Manager (if installed)
home-manager switch
# Podman container images
podman auto-update
# Firmware (BIOS, NVMe, SSD controllers, Thunderbolt, etc.)
fwupdmgr refresh && fwupdmgr update
The default shell is Zsh. All user shell configuration lives in /home and is fully writable.
Per-shell configuration lives in ~/.zshrc (Zsh), ~/.bashrc (Bash), or ~/.config/fish/config.fish (Fish).

System-wide environment variables go in /etc/environment — this uses the /etc overlay and persists across all updates.

Change your default shell with chsh -s /bin/fish (or /bin/bash, /bin/zsh). Changes take effect at next login. The shell binary must be listed in /etc/shells.

The Starship prompt is configured via ~/.config/starship.toml. Full reference at starship.rs/config.

McFly replaces the standard Ctrl+R history search with a neural network that learns from your usage patterns — prioritising commands by current directory, recent context, and exit codes. The longer you use it, the better it predicts what you need.
# Trigger McFly search (replaces standard Ctrl+R)
Ctrl+R
# McFly trains on your shell history automatically.
# The database grows smarter over time and is never cleared on updates.
# Database location:
ls ~/.local/share/mcfly/
# View McFly's training data size
du -sh ~/.local/share/mcfly/history.db
# Temporarily disable McFly and use default history search:
MCFLY_DISABLE=true zsh
# Delete McFly history and start fresh (rare)
rm ~/.local/share/mcfly/history.db
PSD moves your browser profiles to a tmpfs RAM filesystem, then syncs them back to disk periodically and on shutdown. The result is faster browser startup, faster tab loading, and substantially less SSD write wear from day-to-day browsing. It is enabled globally for all users on ShaniOS.
# Check PSD status
systemctl --user status psd
# Preview what PSD manages (dry run)
psd preview
# PSD supports all major browsers automatically:
# Vivaldi, Firefox, Chromium, Chrome, Brave, Opera, and more.
# Profiles are detected from their default locations.
# Manually sync profiles back to disk now (normally automatic)
systemctl --user start psd-resync.service
# PSD sync on shutdown is handled automatically by the systemd user unit.
# Your data is safe — sync runs every few minutes and on every logout.
# Log location:
journalctl --user -u psd -f
# Fuzzy search shell history (alternative to McFly)
Ctrl+R
# Fuzzy insert a file path into the command line
Ctrl+T
# Fuzzy cd into a subdirectory
Alt+C
# Pipe any command output through fzf
ls | fzf
cat /etc/passwd | fzf
The /etc directory uses an overlay filesystem, allowing modifications:
# Edit system configuration normally
sudo nano /etc/some-config-file
# Changes are stored in /data/overlay/etc/upper
# and persist across all system updates and slot switches
# List everything you've changed from defaults
ls -la /data/overlay/etc/upper
# Compare your changes with the current base
diff -r /etc /data/overlay/etc/upper 2>/dev/null | grep -v "^Only in /data"
# To revert a single file to its system default:
sudo rm /data/overlay/etc/upper/path/to/file
ShaniOS uses Flatpak as the primary application delivery method, with Snaps for sandboxed cross-distro packages, AppImages for portable self-contained binaries, Nix for CLI tools, containers for development environments, and optionally Homebrew for cross-platform tooling.
Choose the right installation method — Flatpak for GUI apps, Snap for cross-distro packages, AppImage for portable binaries, Nix for CLI tools, containers for full environments
No pacman: Traditional package management via pacman is not supported. ShaniOS ships a custom pacman wrapper that intercepts mutating operations (-S, -U, -R, -D, and variants) and prints an error explaining immutability. Read-only operations like -Q (query), -F (file lookup), and -Ss (search) still work normally. Use Flatpak, containers, or AppImages for applications. System-level changes only come through shani-deploy.
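The intercept behaviour can be sketched as a small shell function. This is a hypothetical illustration of the logic described above, not the actual wrapper script ShaniOS ships:

```shell
# Sketch: allow read-only pacman operations, block mutating ones
pacman_wrapper() {
    case "$1" in
        -Ss*|-Si*|-Q*|-F*)
            # Read-only: search, info, query, file lookup — forward to real pacman
            echo "read-only query allowed: pacman $*" ;;
        -S*|-U*|-R*|-D*)
            # Mutating: install, upgrade, remove, database ops — refuse
            echo "error: root filesystem is immutable — use shani-deploy, Flatpak, or a container" >&2
            return 1 ;;
        *)
            echo "read-only query allowed: pacman $*" ;;
    esac
}

pacman_wrapper -Ss firefox    # allowed: search is read-only
pacman_wrapper -Syu || true   # blocked: prints an immutability error
```

Note the ordering: the read-only patterns (-Ss*, -Si*) must be matched before the broader -S* mutating pattern.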
# Search for applications
flatpak search keyword
# Install from Flathub
flatpak install flathub org.application.Name
# List installed apps
flatpak list
# Remove an app
flatpak uninstall org.application.Name
Game Launchers:
flatpak install flathub com.valvesoftware.Steam
flatpak install flathub com.heroicgameslauncher.hgl
flatpak install flathub net.lutris.Lutris
flatpak install flathub org.libretro.RetroArch
Gaming Peripherals:
flatpak install flathub io.github.antimicrox.antimicrox # Gamepad mapper
flatpak install flathub org.freedesktop.Piper # Mouse configuration
flatpak install flathub org.openrgb.OpenRGB # RGB lighting
Performance Tools:
flatpak install flathub io.github.benjamimgois.goverlay # Overlay manager
flatpak install flathub org.freedesktop.Platform.VulkanLayer.MangoHud
flatpak install flathub org.freedesktop.Platform.VulkanLayer.gamescope
Run Windows apps using Wine via Bottles:
flatpak install flathub com.usebottles.bottles
Snap packages are sandboxed, self-contained applications published to the Snap Store by Canonical and third-party developers. ShaniOS ships with snapd pre-installed and enabled (snapd.socket and snapd.apparmor.service are active at boot), and all Snap data lives in the dedicated @snapd Btrfs subvolume mounted at /var/lib/snapd — surviving all system updates and rollbacks.
Snaps come in two confinement modes:
Strict: fully sandboxed; access to system resources only through explicitly connected interfaces. This is the default.

Classic: requires the --classic flag when installing. Used by developer tools (e.g. code editors, compilers) that need broad access.

Snaps auto-update silently in the background by default. Each snap revision is kept on disk so rollback is instant if an update breaks something.
# Search for a snap
snap find keyword
# Install a snap (strict confinement)
snap install app-name
# Install a snap with classic confinement
snap install app-name --classic
# List installed snaps
snap list
# Check available updates
snap refresh --list
# Update all snaps
snap refresh
# Update a specific snap
snap refresh app-name
# Roll back a snap to the previous revision
snap revert app-name
# Remove a snap
snap remove app-name
# Run a snap
snap run app-name
Snaps declare the system resources they need as interfaces. Some connect automatically; others require manual approval.
# List all interfaces for a snap
snap connections app-name
# Connect an interface manually
snap connect app-name:camera
# Disconnect an interface
snap disconnect app-name:camera
# List all available interfaces on the system
snap interface
# Check snapd status
sudo systemctl status snapd
# View snapd logs
journalctl -u snapd -f
# Check AppArmor confinement status
sudo apparmor_status | grep snap
# Developer tools
snap install code --classic # VS Code
snap install sublime-text --classic # Sublime Text
snap install android-studio --classic
# Communication
snap install slack --classic
snap install discord
# Utilities
snap install bitwarden
snap install multipass # Ubuntu VM manager
Snap storage on ShaniOS: All snap revisions, writable data, and runtime mounts live inside the @snapd Btrfs subvolume at /var/lib/snapd. This subvolume is shared across both @blue and @green slots — your installed snaps are available regardless of which slot is booted, and they persist through every system update and rollback.
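On a running ShaniOS system you can confirm the dedicated mount (the fallback message covers machines where the path does not exist):

```shell
# Show the source, filesystem, and options backing /var/lib/snapd
# (on ShaniOS: the @snapd Btrfs subvolume)
findmnt -no SOURCE,FSTYPE,OPTIONS /var/lib/snapd || echo "no dedicated mount for /var/lib/snapd here"
```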
An AppImage is a single self-contained executable that bundles an application and all its dependencies into one file. No installation, no root access, no package manager — download, make executable, and run. AppImages are ideal for applications not available on Flathub or the Snap Store, or for running specific upstream release versions directly from a vendor.
Gear Lever is pre-installed on ShaniOS and provides full GUI management of AppImages. It handles desktop integration automatically — registering the app in your application launcher, associating file types, and managing updates.
# Open Gear Lever from your app launcher, or launch it from terminal
gear-lever
# Gear Lever automatically:
# - Registers the AppImage in your .local/share/applications
# - Creates a desktop entry with icon
# - Associates MIME types if declared in the AppImage
# - Checks for AppImageUpdate-compatible update feeds
# Make an AppImage executable
chmod +x MyApp-x86_64.AppImage
# Run it directly
./MyApp-x86_64.AppImage
# Optionally move it to a persistent location
mkdir -p ~/Applications
mv MyApp-x86_64.AppImage ~/Applications/
# Extract the contents without running (for inspection)
./MyApp-x86_64.AppImage --appimage-extract
AppImages stored in /home live on the @home Btrfs subvolume and survive all system updates and rollbacks automatically. Desktop entries created by Gear Lever are stored in ~/.local/share/applications, also within @home. The /var/lib/appimage integration cache (Gear Lever's registry) is bind-mounted from /data/varlib/appimage in the @data subvolume and also persists across slot switches.
AppImages and the immutable root: Because /usr is read-only, AppImages must be stored in /home, /data, or another writable location — never in /usr/local/bin or similar system paths. Gear Lever handles this correctly by storing everything in user-space.
ShaniOS ships a complete pre-installed container stack: Podman (rootless, daemonless, Docker-compatible), podman-docker (drop-in docker CLI alias), podman-compose (Docker Compose support), buildah (OCI image builder), skopeo (image inspection/copying), Distrobox (mutable Linux envs with desktop integration), LXC (full system containers), and systemd-nspawn (lightweight OS containers). Container data lives in dedicated Btrfs subvolumes (@containers, @lxc, @machines) and survives all system updates and rollbacks.
Six pre-installed container and isolation runtimes — Podman (primary, Docker-compatible), buildah (OCI image builder), Distrobox (mutable dev environments), LXC/LXD (full system containers), systemd-nspawn (lightweight OS containers), and Apptainer (HPC/scientific, no privilege escalation)
Podman is the primary container runtime on ShaniOS. It is rootless (no daemon, no root required), fully Docker-compatible via the podman-docker drop-in, and stores all data in the @containers Btrfs subvolume.
# --- Basic Usage ---
# Pull an image
podman pull docker.io/library/nginx:latest
podman pull quay.io/fedora/fedora:latest
# Run a container interactively
podman run -it --rm ubuntu:22.04 bash
# Run a detached service (web server on port 8080)
podman run -d --name myapp -p 8080:80 nginx:latest
# List running containers
podman ps
# List all containers (including stopped)
podman ps -a
# Stop / start / restart
podman stop myapp
podman start myapp
podman restart myapp
# Remove container and image
podman rm myapp
podman rmi nginx:latest
# View logs
podman logs -f myapp
# Exec into a running container
podman exec -it myapp bash
# --- Volumes & Mounts ---
# Named volume (managed by Podman)
podman volume create mydata
podman run -d -v mydata:/app/data nginx:latest
# Bind mount (host path → container path)
podman run -d -v /home/user/website:/usr/share/nginx/html:Z nginx:latest
# :Z relabels the content for SELinux; it is a harmless no-op on
# AppArmor-based systems like ShaniOS
# List volumes
podman volume ls
podman volume inspect mydata
# --- Networking ---
# Default bridge network
podman network ls
podman network create mynet
# Run two containers on the same network (they see each other by name)
podman run -d --name db --network mynet postgres:16
podman run -d --name app --network mynet -p 3000:3000 myapp:latest
# Port mapping
podman run -d -p 127.0.0.1:8080:80 nginx # localhost only
podman run -d -p 8080:80 nginx # all interfaces
# --- Pods (multi-container groups, like Kubernetes pods) ---
# Create a pod with a shared network namespace
podman pod create --name myapp-pod -p 8080:80
# Add containers to the pod
podman run -d --pod myapp-pod --name frontend nginx:latest
podman run -d --pod myapp-pod --name backend node:20
# Manage the entire pod
podman pod start myapp-pod
podman pod stop myapp-pod
podman pod rm myapp-pod
# List pods
podman pod ls
# --- Systemd Integration (auto-start containers at boot) ---
# Generate a systemd unit for a running container
podman generate systemd --new --name myapp > ~/.config/systemd/user/myapp.service
# Enable auto-start at login
systemctl --user daemon-reload
systemctl --user enable --now myapp
# Or use quadlet (modern declarative approach, Podman 4.4+)
# Create ~/.config/containers/systemd/myapp.container:
# [Unit]
# Description=My App
# [Container]
# Image=docker.io/library/nginx:latest
# PublishPort=8080:80
# Volume=/home/user/html:/usr/share/nginx/html:Z
# [Install]
# WantedBy=default.target
systemctl --user daemon-reload
systemctl --user start myapp
podman-docker drop-in: The podman-docker package provides a docker command that transparently calls Podman. Any script or tool that uses docker will work unchanged — including CI scripts, makefiles, and development tooling. The Docker socket path is also emulated at /run/user/UID/podman/podman.sock.
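Tools that speak the Docker API directly (rather than invoking the docker CLI) need the socket path exported. A sketch of pointing them at Podman's user socket — the podman.socket unit name is the standard Podman packaging, assumed here rather than confirmed for ShaniOS:

```shell
# Start the user-level Podman API socket (standard Podman systemd unit):
#   systemctl --user enable --now podman.socket
# Then export the Docker-compatible endpoint for API clients:
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
echo "$DOCKER_HOST"
```

With DOCKER_HOST set, tools such as docker-compose or language SDKs that read the variable talk to Podman transparently.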
podman-compose is pre-installed and understands standard docker-compose.yml files. Use it to run multi-service stacks without Docker.
# Example docker-compose.yml — WordPress + MariaDB
# Save to ~/projects/wordpress/docker-compose.yml
# version: "3.9"
# services:
# db:
# image: mariadb:11
# environment:
# MYSQL_ROOT_PASSWORD: rootpass
# MYSQL_DATABASE: wordpress
# MYSQL_USER: wp
# MYSQL_PASSWORD: wppass
# volumes:
# - db_data:/var/lib/mysql
#
# wordpress:
# image: wordpress:latest
# ports:
# - "8080:80"
# environment:
# WORDPRESS_DB_HOST: db
# WORDPRESS_DB_USER: wp
# WORDPRESS_DB_PASSWORD: wppass
# WORDPRESS_DB_NAME: wordpress
# depends_on:
# - db
#
# volumes:
# db_data:
# Run the stack
podman-compose up -d
# Or via the docker drop-in
docker compose up -d
# View logs
podman-compose logs -f
# Stop and remove
podman-compose down
# Stop and remove including volumes
podman-compose down -v
buildah builds OCI-compliant container images without a daemon and without root. It can build from a standard Dockerfile (using buildah bud) or from shell scripts for fine-grained control. Images built with buildah are immediately usable with Podman.
# --- Build from a Dockerfile ---
# Example Dockerfile (save as ~/myapp/Dockerfile):
# FROM node:20-alpine
# WORKDIR /app
# COPY package*.json ./
# RUN npm ci --only=production
# COPY . .
# EXPOSE 3000
# CMD ["node", "server.js"]
# Build the image (bud = Build Using Dockerfile)
buildah bud -t myapp:latest ~/myapp/
# Or using podman (calls buildah under the hood)
podman build -t myapp:latest ~/myapp/
# Build with build arguments
buildah bud --build-arg NODE_ENV=production -t myapp:prod .
# Build for a specific platform
buildah bud --platform linux/amd64 -t myapp:amd64 .
buildah bud --platform linux/arm64 -t myapp:arm64 .
# Multi-platform manifest
buildah manifest create myapp:multi
buildah bud --platform linux/amd64,linux/arm64 \
--manifest myapp:multi .
# --- Scripted builds (buildah native API) ---
# More control than Dockerfile — useful for complex layering
# Start from a base image
ctr=$(buildah from ubuntu:22.04)
# Run commands inside (each becomes a layer)
buildah run $ctr -- apt-get update
buildah run $ctr -- apt-get install -y python3 python3-pip
buildah run $ctr -- pip3 install flask
# Copy files in
buildah copy $ctr ./app /opt/app
# Set metadata
buildah config --entrypoint '["python3", "/opt/app/main.py"]' $ctr
buildah config --port 5000 $ctr
buildah config --label maintainer="[email protected]" $ctr
# Commit to a named image
buildah commit $ctr myflaskapp:latest
# Remove the working container
buildah rm $ctr
# List images
buildah images
# Inspect an image
buildah inspect myflaskapp:latest
# --- Push images to a registry ---
# Push to Docker Hub
buildah push myapp:latest docker://docker.io/yourusername/myapp:latest
# Push to Quay.io
buildah push myapp:latest docker://quay.io/yourusername/myapp:latest
# Push to a local registry (e.g., Podman's built-in registry)
podman run -d -p 5000:5000 --name registry docker.io/library/registry:2
buildah push myapp:latest docker://localhost:5000/myapp:latest
# Push to an OCI archive (tarball)
buildah push myapp:latest oci-archive:/tmp/myapp.tar
# Tag an image
buildah tag myapp:latest myapp:1.0.0
Distrobox wraps Podman containers to create mutable Linux environments that share your /home, display, audio, and USB devices. Apps installed inside appear in your desktop launcher. It is the recommended way to use traditional package managers (apt, dnf, pacman) on ShaniOS.
# --- Create containers ---
# Ubuntu 22.04
distrobox create --name ubuntu --image ubuntu:22.04
# Fedora latest
distrobox create --name fedora --image fedora:latest
# Arch Linux (for AUR access)
distrobox create --name arch --image archlinux:latest
# With custom home directory
distrobox create --name dev --image ubuntu:22.04 --home ~/distrobox-dev
# List all boxes
distrobox list
# Enter a box
distrobox enter ubuntu
# Run a single command without entering
distrobox enter ubuntu -- apt list --installed
# --- Inside the box: install anything ---
# In the ubuntu box:
sudo apt update && sudo apt install -y nodejs npm python3-pip gcc make
# In the arch box (full AUR access — build yay from the AUR first):
sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si
yay -S some-aur-package
# --- Export apps & commands to host desktop ---
# Export a GUI app to your host launcher (appears in GNOME/KDE app menu)
distrobox-export --app firefox # from inside the box
distrobox-export --app code # VS Code installed inside box
# Export a CLI binary to host PATH (~/.local/bin/)
distrobox-export --bin /usr/bin/node --export-path ~/.local/bin
distrobox-export --bin /usr/bin/python3 --export-path ~/.local/bin
# Remove an exported app/bin
distrobox-export --app firefox --delete
distrobox-export --bin /usr/bin/node --export-path ~/.local/bin --delete
# --- Manage boxes ---
# Stop a running box
distrobox stop ubuntu
# Remove a box (keeps /home data)
distrobox rm ubuntu
# Remove box and delete its home dir
distrobox rm ubuntu --rm-home
# Upgrade all packages inside a box
distrobox upgrade ubuntu
# Or upgrade every box at once
distrobox upgrade --all
# Generate a desktop entry so the box appears in your app launcher
distrobox generate-entry ubuntu
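For reproducible setups, `distrobox assemble` can create several boxes from one declarative manifest. A sketch — the box names, images, and package lists are illustrative:

```ini
# distrobox.ini — create all boxes with: distrobox assemble create --file distrobox.ini
[dev]
image=ubuntu:22.04
additional_packages="git build-essential"

[aur]
image=archlinux:latest
additional_packages="base-devel git"
```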
BoxBuddy GUI: For a graphical Distrobox front-end, install BoxBuddy from Flathub: flatpak install flathub io.github.dvlv.boxbuddyrs. It lets you create, enter, and manage Distrobox containers visually without any terminal commands.
LXC runs full OS containers with their own init system, network stack, and process tree — closer to a VM than an application container, but with much lower overhead. Container data is stored in the @lxc Btrfs subvolume.
# List available images
lxc image list images: | grep ubuntu
# Launch a container
lxc launch images:ubuntu/22.04 my-server
# Launch Fedora
lxc launch images:fedora/39 fedora-box
# List running containers
lxc list
# Open a shell in the container
lxc exec my-server -- bash
# Copy files to/from container
lxc file push ~/myconfig.conf my-server/etc/myconfig.conf
lxc file pull my-server/var/log/syslog ~/syslog.txt
# Start / stop / restart
lxc start my-server
lxc stop my-server
lxc restart my-server
# Snapshot and restore
lxc snapshot my-server snap0
lxc restore my-server snap0
# Delete container
lxc delete my-server
LXD is the management daemon for LXC that adds clustering, projects, profiles, storage pools, and VM support. The lxd.socket is enabled at boot on ShaniOS; container and VM data lives in the @lxd Btrfs subvolume. LXD can run both system containers (like LXC) and full hardware-accelerated VMs sharing the same management interface.
# Initialise LXD (first-time setup — runs an interactive wizard)
sudo lxd init
# List available images from the LXD image server
lxc image list ubuntu: | head -20
# Launch an Ubuntu container
lxc launch ubuntu:22.04 web-server
# Launch a full VM (not a container)
lxc launch ubuntu:22.04 my-vm --vm
# List all instances (containers + VMs)
lxc list
# Get a shell
lxc exec web-server -- bash
# Show resource usage
lxc info web-server
# Profiles — apply predefined resource limits
lxc profile list
lxc profile show default
lxc profile assign web-server default
# Storage pools
lxc storage list
lxc storage info default
# Snapshot and restore
lxc snapshot web-server clean-install
lxc restore web-server clean-install
# Stop and delete
lxc stop web-server
lxc delete web-server
Apptainer (formerly Singularity) is the standard container runtime for HPC clusters and scientific computing. Unlike Docker/Podman, Apptainer containers run as the calling user (no daemon, no root escalation), making them safe for multi-user clusters and reproducible research environments. Pre-configured on ShaniOS — pair with the immutable host OS for a fully reproducible research stack where both the container and the host are verifiable artifacts.
# Pull an image from Docker Hub (converted to SIF format automatically)
apptainer pull docker://ubuntu:22.04
# Creates: ubuntu_22.04.sif
# Pull from a specific registry
apptainer pull docker://ghcr.io/owner/repo:tag
# Run a command inside a container
apptainer exec ubuntu_22.04.sif python3 script.py
# Run the container's default entrypoint
apptainer run ubuntu_22.04.sif
# Interactive shell in the container
apptainer shell ubuntu_22.04.sif
# Bind mount host directories (automatic: $HOME, /tmp, /proc)
apptainer exec --bind /data:/mnt/data ubuntu_22.04.sif ls /mnt/data
# Use GPU (NVIDIA) inside the container
apptainer exec --nv cuda_image.sif python3 gpu_script.py
# Build a custom SIF from a definition file
apptainer build myimage.sif myimage.def
# Build a sandbox (writable directory, for development)
apptainer build --sandbox myimage_sandbox/ myimage.def
# Inspect an image's metadata
apptainer inspect ubuntu_22.04.sif
# Cache location (default: ~/.apptainer/cache)
ls ~/.apptainer/cache/
# Clear cache
apptainer cache clean
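The `apptainer build` commands above read a definition file. A minimal sketch — the package list and runscript are illustrative:

```
# myimage.def — minimal Apptainer definition file
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get update && apt-get install -y python3 python3-pip
    pip3 install numpy

%environment
    export LC_ALL=C

%runscript
    exec python3 "$@"
```

Build with `apptainer build myimage.sif myimage.def`; `%post` runs at build time, `%runscript` defines the default entrypoint.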
For researchers: Submit reproducible environments to HPC clusters using Apptainer SIF images. Your collaborators can run the exact same image — same OS, same libraries, same tool versions — regardless of what cluster they use. Pair with ShaniOS's GPG-verified, immutable host for a full stack that is reproducible from kernel to container.
systemd-nspawn is a lightweight container runtime built into systemd. It runs a full OS tree in a namespace, managed as systemd machine units via machinectl. Container data lives in the @machines subvolume at /var/lib/machines.
# Bootstrap a Debian container into /var/lib/machines/debian12
sudo debootstrap stable /var/lib/machines/debian12 http://deb.debian.org/debian
# Start it as a machine (boots fully with systemd)
sudo machinectl start debian12
# Or start interactively
sudo systemd-nspawn -D /var/lib/machines/debian12 --boot
# List running machines
machinectl list
# Open a shell in a running machine
machinectl shell debian12
# Enable auto-start at boot
machinectl enable debian12
# Stop / poweroff
machinectl stop debian12
machinectl poweroff debian12
# Transfer files
machinectl copy-to debian12 ~/file.txt /root/file.txt
machinectl copy-from debian12 /etc/hostname ~/hostname.txt
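Per-machine settings live in /etc/systemd/nspawn/&lt;name&gt;.nspawn and are picked up automatically when the machine starts. A sketch for the debian12 machine above, assuming you want full boot plus a private virtual ethernet link:

```ini
# /etc/systemd/nspawn/debian12.nspawn
[Exec]
Boot=yes

[Network]
VirtualEthernet=yes
```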
Pods GUI (Podman GUI): For a graphical Podman front-end, install Pods from Flathub: flatpak install flathub com.github.marhkb.Pods. It lets you manage containers, images, volumes, pods, and networks visually — supporting log viewing, shell access, and compose stacks without the terminal.
Nix is pre-installed and running on ShaniOS. The nix-daemon.socket is enabled at boot and all Nix data lives in the dedicated @nix Btrfs subvolume mounted at /nix. This subvolume is shared across both @blue and @green slots — any packages you install via Nix survive every system update and rollback without reinstallation. CoW is kept enabled on @nix so bees can deduplicate shared store paths across the two slots.
Nix is a purely functional, reproducible package manager. Each package is installed into its own unique path in /nix/store (named by a cryptographic hash of all its inputs), meaning multiple versions of the same tool can coexist without conflict, and installing or removing a package never breaks another.
Why Nix on an immutable OS? Flatpak covers GUI apps; Distrobox covers full mutable environments. Nix fills the gap for CLI tools, libraries, and language runtimes that need specific versions pinned or need to coexist — without root, without touching the read-only root filesystem.
Nix is installed but has no channel configured by default. A channel is a URL pointing to a verified snapshot of the Nixpkgs repository — it determines which package versions are available and provides a binary cache so packages are downloaded pre-built rather than compiled locally.
Add the nixpkgs-unstable channel (rolling, latest packages) as your user, then fetch it:
# Add the nixpkgs-unstable channel (recommended for most users on ShaniOS)
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
# Fetch the channel expressions and binary cache metadata
nix-channel --update
# Verify — should print: nixpkgs https://nixos.org/channels/nixpkgs-unstable
nix-channel --list
If you prefer a stable release channel (e.g. 25.05), substitute the URL:
# Stable channel (replace 25.05 with the current release)
nix-channel --add https://nixos.org/channels/nixpkgs-25.05 nixpkgs
nix-channel --update
Stable channels lag behind nixpkgs-unstable due to additional NixOS-specific testing. See current channel status at status.nixos.org and the full channel list at nixos.org/channels.
# List subscribed channels
nix-channel --list
# Add an additional channel (e.g. home-manager)
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
# Remove a channel
nix-channel --remove home-manager
# Update all channels (download latest expressions)
nix-channel --update
# Update a specific channel only
nix-channel --update nixpkgs
# List all channel generations (history of updates)
nix-channel --list-generations
# Roll back channels to the previous generation
nix-channel --rollback
# Roll back to a specific generation number
nix-channel --rollback 12
Channels are per-user. Running nix-channel --update as your regular user updates only your environment. The root channel (used by the daemon) is separate. Channel state is stored in ~/.nix-channels and symlinked via ~/.nix-defexpr/channels.
# Search for packages (new CLI — requires nix-command/flakes enabled; see the Flakes section)
nix search nixpkgs ripgrep
# Channel-based search — works without flakes
nix-env -qaP | grep nodejs
# Install a package into your default profile
nix-env -iA nixpkgs.ripgrep
nix-env -iA nixpkgs.nodejs_22
nix-env -iA nixpkgs.python312
# List installed packages
nix-env -q
# Upgrade a specific package
nix-env -uA nixpkgs.ripgrep
# Upgrade all installed packages to latest channel versions
nix-env -u '*'
# Uninstall a package
nix-env -e ripgrep
# Roll back the last nix-env operation (install/upgrade/remove)
nix-env --rollback
# List profile generations
nix-env --list-generations
# Switch to a specific profile generation
nix-env --switch-generation 42
One of Nix's most useful features is running tools without permanently installing them:
# Drop into a temporary shell with specific packages (gone when you exit)
nix-shell -p python312 nodejs_22 gcc
# New-style equivalent (Nix flakes / nix CLI)
nix shell nixpkgs#python312 nixpkgs#nodejs_22
# Run a single command from a package without installing it (flakes CLI)
nix run nixpkgs#cowsay -- "Hello from Nix"
nix run nixpkgs#ffmpeg -- -i input.mp4 output.webm
# Start a per-project dev shell defined in a shell.nix file
nix-shell # reads shell.nix in current directory
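A minimal shell.nix sketch — the package set and hook are illustrative:

```nix
# shell.nix — per-project dev shell; enter with `nix-shell`
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = with pkgs; [ python312 nodejs_22 gcc ];
  shellHook = ''
    echo "dev shell ready"
  '';
}
```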
# Step 1: fetch latest channel expressions
nix-channel --update
# Step 2: upgrade all installed packages to the versions now in the channel
nix-env -u '*'
# Or do both in one line
nix-channel --update && nix-env -u '*'
Nix never deletes old store paths automatically. Old generations and unreferenced paths accumulate in /nix/store. Run garbage collection periodically to reclaim disk space:
# Delete all old profile generations and collect unreachable store paths
nix-collect-garbage -d
# Delete all generations except the current one, then collect
nix-env --delete-generations old
nix-collect-garbage
# Or delete only generations older than 30 days
nix-collect-garbage --delete-older-than 30d
# Just see how much space would be freed (dry run)
nix-collect-garbage --dry-run
# Check current store disk usage
du -sh /nix/store
GUI apps installed via Nix won't automatically appear in your desktop launcher unless you add Nix's share directory to $XDG_DATA_DIRS. Add this to your ~/.zshrc or ~/.bashrc:
export XDG_DATA_DIRS=$HOME/.nix-profile/share:$XDG_DATA_DIRS
Home Manager lets you declare your entire user environment (packages, dotfiles, shell config, services) in a single Nix file and apply it atomically. It is the recommended way to manage your Nix setup long-term.
# Add the home-manager channel matching your nixpkgs channel
nix-channel --add https://github.com/nix-community/home-manager/archive/master.tar.gz home-manager
nix-channel --update
# Install home-manager (standalone)
export NIX_PATH=$HOME/.nix-defexpr/channels:/nix/var/nix/profiles/per-user/root/channels${NIX_PATH:+:$NIX_PATH}
nix-shell '<home-manager>' -A install
# After install — edit your configuration
micro ~/.config/home-manager/home.nix # or nano, vim, etc.
# Apply your configuration
home-manager switch
# List home-manager generations
home-manager generations
# Roll back to the previous generation
home-manager rollback
A minimal home.nix example:
{ config, pkgs, ... }: {
home.username = "alice";
home.homeDirectory = "/home/alice";
home.stateVersion = "25.05";
home.packages = with pkgs; [
ripgrep
fd
jq
nodejs_22
python312
];
programs.zsh.enable = true;
programs.git = {
enable = true;
userName = "Alice";
userEmail = "[email protected]";
};
}
Flakes are an experimental but widely-used Nix feature that replaces channels with pinned, reproducible inputs declared in a flake.nix file. They are disabled by default but can be enabled by adding to /etc/nix/nix.conf (via the /etc OverlayFS on ShaniOS):
# Enable flakes and the new nix CLI (add to /etc/nix/nix.conf)
experimental-features = nix-command flakes
# Reload the daemon after editing
sudo systemctl restart nix-daemon
# With flakes enabled — use the new nix CLI
nix search nixpkgs#ripgrep
nix shell nixpkgs#ripgrep
nix run nixpkgs#hello
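With flakes enabled, a project can pin its inputs in a flake.nix. A minimal dev-shell sketch — the package set is illustrative:

```nix
# flake.nix — enter the shell with `nix develop`
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = with pkgs; [ ripgrep nodejs_22 ];
      };
    };
}
```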
Unlike Flatpak, Nix supports CLI tools, development environments, and packages that need specific pinned library versions — all without root access and without touching the immutable root. Combined with Flatpak for GUI apps and Distrobox for full Linux environments, Nix provides complete software coverage on ShaniOS.
Homebrew is the popular macOS package manager that also runs on Linux. It is not pre-installed on ShaniOS but can be installed in user-space to /home/linuxbrew/.linuxbrew — entirely outside the read-only root, making it fully compatible with ShaniOS's immutability.
When to use Homebrew vs Nix: If you already use Homebrew on macOS and want the same workflow on ShaniOS, or need a tool that's in Homebrew's formulae but not in Nixpkgs, Homebrew is a good fit. For most users, Nix is preferred as it is more powerful and better integrated with the system. Both can coexist.
Install Homebrew:
# Run the official install script (installs to /home/linuxbrew/.linuxbrew)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Add Homebrew to your PATH (add to ~/.zshrc or ~/.bashrc):
# Add to shell config (~/.zshrc for zsh, ~/.bashrc for bash)
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.zshrc
# Apply immediately in current session
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
# Verify installation
brew doctor
brew --version
Common brew commands:
# Search for a package
brew search htop
# Install a package
brew install htop
# List installed packages
brew list
# Update Homebrew and all formulae
brew update && brew upgrade
# Remove a package
brew uninstall htop
# Show info about a package
brew info git
# Uninstall Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/uninstall.sh)"
ShaniOS compatibility notes:
Installs to /home/linuxbrew/.linuxbrew — completely outside the read-only root, fully persistent across updates
No sudo needed after the initial install (except to create the /home/linuxbrew directory)
System rollbacks never touch /home/linuxbrew — like Nix's @nix subvolume, it survives rollbacks by design
Waydroid is pre-installed on both GNOME and KDE editions. It runs a full Android container via LXC on top of your Linux desktop — hardware-accelerated on Intel and AMD GPUs. waydroid-container.service is enabled at boot so the container is ready when you need it. All Android system images and user app data live in the dedicated @waydroid Btrfs subvolume, surviving every system update and rollback.
waydroid-helper is a ShaniOS-specific automation tool that handles first-run initialisation, Android image downloading, kernel module setup, and networking configuration automatically — so you don't have to follow the upstream manual setup process.
Waydroid ships ARM translation support via libhoudini (or libndk_translation) automatically configured by waydroid-helper. This means ARM-only Android APKs run on x86_64 hardware without needing a physical ARM device. Most apps from the Play Store (or F-Droid) are ARM-compiled — ARM translation is what makes them work on a standard x86_64 PC. Performance is good for most applications; GPU-heavy ARM-native games may have some overhead.
# Recommended: waydroid-helper automates the full init process
sudo waydroid-helper init
# Or manually, if you prefer:
sudo waydroid init
# Start the Waydroid session
sudo systemctl start waydroid-container
waydroid show-full-ui
# Check Waydroid status
waydroid status
# Install an APK from host (sideloading)
waydroid app install /path/to/app.apk
# List installed Android apps
waydroid app list
# Launch a specific app
waydroid app launch com.example.app
# Open the full Android UI
waydroid show-full-ui
python-pyclip is pre-installed for clipboard integration between the Android container and the Linux desktop — copy text in Android, paste it in a Linux app, and vice versa. Files can be shared via the ~/android_documents shared folder that Waydroid mounts automatically.
Waydroid is a full, hardware-accelerated Android stack — not a slow Android Virtual Device (AVD). Test your apps on a real hardware-accelerated Android environment without a physical device. android-tools (adb, fastboot) and android-udev are pre-installed for device debugging workflows. Connect to Waydroid via ADB:
# Get Waydroid IP (for ADB connection)
waydroid status | grep IP
# Connect ADB to Waydroid
adb connect 192.168.240.112:5555
# Now use standard adb commands against Waydroid
adb devices
adb logcat
adb shell
# Stop the Waydroid session
waydroid session stop
# Stop the container service
sudo systemctl stop waydroid-container
# Disable auto-start at boot (if you don't use Waydroid regularly)
sudo systemctl disable waydroid-container
# Reset Waydroid completely (wipes all Android data)
sudo waydroid init -f
The @waydroid subvolume at /var/lib/waydroid survives all system updates and rollbacks automatically. Android app data, user settings, and installed applications persist there regardless of which OS slot is booted.
ShaniOS fully supports hardware-accelerated virtual machines via QEMU/KVM. VM disk images are stored in the @libvirt Btrfs subvolume (mounted at /var/lib/libvirt) with CoW disabled (nodatacow) for reliable performance with qcow2 images. The subvolume survives all system updates and rollbacks.
Virtualisation is enabled automatically — all users are added to the kvm and libvirt groups at first boot. Verify KVM is available:
# Should return /dev/kvm if VT-x / AMD-V is enabled in BIOS
ls /dev/kvm
# Check KVM kernel modules
lsmod | grep kvm
Clean, minimal VM manager. Ships its own QEMU/libvirt runtimes — no system daemon needed. Best for creating and running desktop VMs quickly.
flatpak install flathub org.gnome.Boxes
Advanced VM manager with full control over networking, storage pools, CPU pinning, PCIe passthrough, and snapshots. Also ships its own QEMU/libvirt runtime as a Flatpak.
flatpak install flathub org.virt_manager.virt-manager
# Or install the QEMU extension alongside for full feature set
flatpak install flathub org.virt_manager.virt-manager
flatpak install flathub org.virt_manager.virt-manager.Extension.Qemu
Quickemu wraps QEMU directly to create pre-configured VMs for popular OSes with a single command. No libvirt daemon. Install via Nix or Distrobox:
# Install via Nix
nix-env -iA nixpkgs.quickemu
# Create and start an Ubuntu VM
quickget ubuntu 22.04
quickemu --vm ubuntu-22.04.conf
# List available OS options
quickget --list
# Using quickemu (easiest)
quickget windows 11
quickemu --vm windows-11.conf
# Using GNOME Boxes: File → New → Download OS → Windows 11
# (Boxes handles VirtIO drivers automatically)
# For virt-manager: use the VirtIO disk and network drivers
# for best performance — download virtio-win.iso from Fedora
Why nodatacow on @libvirt? Btrfs CoW creates excessive metadata overhead when repeatedly overwriting VM disk images (which qcow2 does constantly). Disabling CoW on @libvirt eliminates fragmentation and dramatically improves VM I/O performance. Snapshots are handled by libvirt/qcow2 natively inside the Flatpak sandbox.
ShaniOS lets you host websites and services directly from your desktop or laptop — even on home internet connections without a static IP. All tools listed here are installed for convenience and are not enabled or running by default.
Security First: These services are disabled by default. Only enable what you need, and always configure firewall rules appropriately. Never expose services publicly without understanding the security implications.
Two secure remote access paths: 🌐 public HTTPS via Cloudflared (outbound-only tunnel, no static IP) and 🔐 private WireGuard mesh via Tailscale (subnet router, exit node, MagicDNS) — all TLS and tunnel state persists in @data across every update
Modern, zero-config web server with automatic HTTPS via Let's Encrypt. Ideal for hosting static sites, reverse proxying containers, dashboards, and APIs directly from your desktop. Not active by default — start it when you need it.
Start & enable:
sudo systemctl enable --now caddy
# Reload config without downtime
sudo systemctl reload caddy
# View logs
sudo journalctl -u caddy -f
# Validate Caddyfile syntax before reloading
caddy validate --config /etc/caddy/Caddyfile
Static site with automatic HTTPS:
# /etc/caddy/Caddyfile
mysite.example.com {
root * /var/www/html
file_server
encode gzip zstd
log {
output file /var/log/caddy/access.log
}
}
# Local dev site (no certificate needed)
:8080 {
root * /home/user/my-website
file_server browse
}
Reverse proxy — forward to a container or app:
app.example.com {
reverse_proxy localhost:3000
}
# Multiple subdomains → different backend services
api.example.com { reverse_proxy localhost:8000 }
dashboard.example.com { reverse_proxy localhost:9090 }
# Load balance across replicas
lb.example.com {
reverse_proxy localhost:3001 localhost:3002 localhost:3003 {
lb_policy round_robin
health_uri /health
health_interval 10s
}
}
Basic auth, headers, rate limiting:
# Generate bcrypt password hash
caddy hash-password --plaintext "yourpassword"
# Caddyfile with auth + security headers
secure.example.com {
basicauth {
admin $2a$14$YOUR_HASH_HERE
}
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains"
X-Content-Type-Options nosniff
X-Frame-Options DENY
-Server
}
reverse_proxy localhost:8080
}
# Rate limit (requires caddy-ratelimit module, install via Flatpak/container)
# For production rate limiting, deploy Caddy inside a container with plugins
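One way to get plugin support is a custom Caddy image built with xcaddy. A sketch, assuming the upstream caddy builder image and the mholt/caddy-ratelimit module:

```dockerfile
# Containerfile — Caddy with the rate-limit plugin compiled in
FROM docker.io/library/caddy:builder AS builder
RUN xcaddy build --with github.com/mholt/caddy-ratelimit

FROM docker.io/library/caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```

Build with `podman build -t caddy-rl .` and run the container in place of the system Caddy.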
Serve from a Podman container:
# Run a web app container and proxy it with Caddy
podman run -d --name webapp -p 127.0.0.1:3000:3000 myapp:latest
# Caddyfile entry
myapp.example.com {
reverse_proxy 127.0.0.1:3000
}
sudo systemctl reload caddy
TLS certificates and ACME state are stored in /data/varlib/caddy and persist across all system updates and rollbacks.
Alternative servers via Flatpak/container: For more complex setups, Nginx and Apache HTTPD are available as container images. Run them with Podman and proxy them through Caddy, or use them standalone on a non-standard port. Example: podman run -d -p 127.0.0.1:8081:80 -v /home/user/html:/usr/share/nginx/html:Z nginx:alpine
Creates an encrypted outbound-only tunnel to Cloudflare's edge, exposing local services publicly without opening inbound firewall ports or having a static IP. Disabled by default.
# Authenticate with your Cloudflare account
cloudflared tunnel login
# Create a named tunnel
cloudflared tunnel create my-tunnel
# Configure ingress — edit ~/.cloudflared/config.yml
# tunnel: <YOUR-TUNNEL-UUID>
# credentials-file: /home/user/.cloudflared/<UUID>.json
# ingress:
# - hostname: mysite.example.com
# service: http://localhost:8080
# - hostname: api.example.com
# service: http://localhost:3000
# - service: http_status:404
# Route DNS to this tunnel (Cloudflare manages DNS record automatically)
cloudflared tunnel route dns my-tunnel mysite.example.com
# Test tunnel locally before enabling as service
cloudflared tunnel run my-tunnel
# Install and enable as a persistent systemd service
sudo cloudflared service install
sudo systemctl enable --now cloudflared
# Check tunnel status
cloudflared tunnel list
cloudflared tunnel info my-tunnel
sudo journalctl -u cloudflared -f
Tunnel credentials are stored at /data/varlib/cloudflared and survive all updates. No inbound firewall changes are needed — Cloudflared establishes only outbound HTTPS connections.
Builds a private peer-to-peer encrypted network between all your devices using WireGuard. All traffic is authenticated; devices are only reachable by other Tailscale nodes. Inactive until you sign in.
# Enable the daemon
sudo systemctl enable --now tailscaled
# Authenticate (opens browser)
sudo tailscale up
# Bring up with specific options
sudo tailscale up --accept-routes --ssh # accept subnet routes; enable Tailscale SSH
sudo tailscale up --advertise-exit-node # act as exit node (route all traffic)
sudo tailscale up --advertise-routes=192.168.1.0/24 # expose your LAN to the tailnet
# Status and peer list
tailscale status
tailscale netcheck # diagnose NAT/relay connectivity
# Check your Tailscale IP
tailscale ip -4
# Ping a peer by its Tailscale hostname
tailscale ping myhostname
# Open UDP 41641 for best performance (optional — works without)
sudo firewall-cmd --add-port=41641/udp --permanent
sudo firewall-cmd --reload
# Built-in SSH (no OpenSSH needed on target)
tailscale ssh user@myhostname
Tailscale state is bind-mounted from /data/varlib/tailscale and persists across all updates — you stay authenticated and connected after every system update.
SSH server is not enabled by default. Enable it only when needed; prefer access via Tailscale rather than exposing port 22 publicly.
# Enable SSH server
sudo systemctl enable --now sshd
# Harden /etc/ssh/sshd_config (key settings)
sudo nano /etc/ssh/sshd_config
# PasswordAuthentication no ← key-only auth
# PermitRootLogin no
# Port 2222 ← non-default port (optional)
# AllowUsers youruser ← restrict to specific users
# MaxAuthTries 3
sudo systemctl restart sshd
# Allow SSH in firewall (LAN only is safest)
sudo firewall-cmd --add-service=ssh --permanent
sudo firewall-cmd --reload
# Generate an ED25519 key pair on the CLIENT
ssh-keygen -t ed25519 -C "mydesktop"
# Copy public key to server
ssh-copy-id [email protected]
# or manually append ~/.ssh/id_ed25519.pub to server's ~/.ssh/authorized_keys
# Connect
ssh [email protected]
ssh -p 2222 [email protected] # non-default port
# Tunnel a local port via SSH (port forward)
ssh -L 8080:localhost:3000 [email protected] # access server's :3000 at localhost:8080
# Transfer files
scp localfile.txt [email protected]:/home/user/
rsync -avz /local/dir/ [email protected]:/remote/dir/
# SFTP interactive session
sftp [email protected]
Never expose SSH directly to the internet. Use SSH over Tailscale (tailscale ssh), or tunnel it through Cloudflared. If public SSH is unavoidable: use key-only auth, disable PasswordAuthentication, change the default port, and enable Fail2ban.
Active by default with restrictive rules. Manages nftables under the hood using a zone-based model. Pre-configured zones for KDE Connect and Waydroid are included.
# Status and current rules
sudo firewall-cmd --state
sudo firewall-cmd --list-all # active zone
sudo firewall-cmd --list-all-zones # all zones
# Open a service (named)
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --add-service=ssh --permanent
sudo firewall-cmd --reload
# Open a specific port
sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --add-port=41641/udp --permanent # Tailscale
sudo firewall-cmd --reload
# Remove a rule
sudo firewall-cmd --remove-service=http --permanent
sudo firewall-cmd --remove-port=8080/tcp --permanent
sudo firewall-cmd --reload
# Lock down to a source IP (allow SSH only from LAN)
sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" service name="ssh" accept' --permanent
sudo firewall-cmd --reload
# Block an IP
sudo firewall-cmd --add-rich-rule='rule family="ipv4" source address="1.2.3.4" reject' --permanent
sudo firewall-cmd --reload
# GUI: firewall-config is pre-installed (authenticates via polkit — no sudo needed)
firewall-config
Monitors logs and temporarily bans IPs that fail authentication repeatedly. Integrates with firewalld automatically. Not enabled by default — enable when running any public-facing service.
# Enable and start
sudo systemctl enable --now fail2ban
# Check overall status
sudo fail2ban-client status
# Check SSH jail
sudo fail2ban-client status sshd
# Manually ban/unban an IP
sudo fail2ban-client set sshd banip 1.2.3.4
sudo fail2ban-client set sshd unbanip 1.2.3.4
# View banned IPs from firewall perspective
sudo firewall-cmd --direct --get-all-rules
# Watch the fail2ban log live
sudo journalctl -u fail2ban -f
Custom jail for Caddy (HTTP brute-force):
# /etc/fail2ban/jail.d/caddy.conf
# [caddy]
# enabled = true
# port = http,https
# filter = caddy
# logpath = /var/log/caddy/access.log
# maxretry = 10
# bantime = 3600
# findtime = 600
sudo systemctl restart fail2ban
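The jail above references a caddy filter, which fail2ban does not ship by default. A hypothetical sketch matching Caddy's JSON access log format — verify the regex against your actual log lines before relying on it:

```ini
# /etc/fail2ban/filter.d/caddy.conf — example filter (adjust to your log format)
[Definition]
failregex = "remote_ip":"<HOST>".*"status":(401|403)
```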
Native Linux file sharing at near-local disk speeds. Best for Linux-to-Linux sharing on a trusted LAN. State bind-mounted from /data/varlib/nfs.
# ── SERVER ──────────────────────────────────
sudo systemctl enable --now nfs-server
# Define exports in /etc/exports
# /home/user/shared 192.168.1.0/24(rw,sync,no_subtree_check)
# /data/media 192.168.1.0/24(ro,sync,no_subtree_check)
echo "/home/user/shared 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -arv
# Allow through firewall
sudo firewall-cmd --add-service=nfs --permanent
sudo firewall-cmd --add-service=rpcbind --permanent
sudo firewall-cmd --add-service=mountd --permanent
sudo firewall-cmd --reload
# Check active exports
sudo exportfs -v
showmount -e localhost
# ── CLIENT ──────────────────────────────────
# Temporary mount
sudo mount -t nfs 192.168.1.100:/home/user/shared /mnt/remote
# Automount at boot — add to /etc/fstab
# 192.168.1.100:/home/user/shared /mnt/remote nfs defaults,_netdev,nfsvers=4 0 0
# Check mount
mount | grep nfs
SMB/CIFS shares visible to Windows, macOS Finder, and other Linux machines. Samba state bind-mounted from /data/varlib/samba.
# Enable Samba services
sudo systemctl enable --now smb nmb
# Edit /etc/samba/smb.conf — add a share
sudo nano /etc/samba/smb.conf
# Example share (append to smb.conf):
# [SharedDocs]
# comment = My Documents
# path = /home/user/Documents
# browseable = yes
# read only = no
# valid users = youruser
# create mask = 0664
# directory mask = 0775
# Set Samba password (separate from system password)
sudo smbpasswd -a youruser
# Apply config changes
sudo systemctl restart smb nmb
# Test config syntax
testparm
# Allow through firewall
sudo firewall-cmd --add-service=samba --permanent
sudo firewall-cmd --reload
# List active Samba shares
smbclient -L localhost -U youruser
# ── MOUNTING REMOTE SHARES ──────────────────
# From another Linux machine
sudo mount -t cifs //192.168.1.100/SharedDocs /mnt/samba \
-o username=youruser,uid=$(id -u),gid=$(id -g),vers=3.0
# Persistent (add credentials file for security)
# echo "username=youruser" > ~/.smbcredentials
# echo "password=yourpass" >> ~/.smbcredentials
# chmod 600 ~/.smbcredentials
# Then in /etc/fstab:
# //192.168.1.100/SharedDocs /mnt/samba cifs credentials=/home/user/.smbcredentials,uid=1000,gid=1000,_netdev 0 0
Mount any directory from an SSH-accessible machine as a local folder. Requires only an SSH server on the remote — no special server software needed.
# Mount a remote directory (the mountpoint must exist)
mkdir -p ~/mnt/remote-projects
sshfs user@192.168.1.50:/home/user/projects ~/mnt/remote-projects
# Mount with specific options
sshfs user@192.168.1.50:/data ~/mnt/server \
-o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3
# Unmount
fusermount -u ~/mnt/remote-projects
# Automount at boot (add to /etc/fstab)
# user@192.168.1.50:/home/user/projects /home/user/mnt/remote fuse.sshfs defaults,_netdev,reconnect,uid=1000,gid=1000,IdentityFile=/home/user/.ssh/id_ed25519 0 0
Lightweight DNS forwarder and DHCP server for homelab setups. Use it for custom .home domains, split DNS (different resolution for LAN vs internet), or local ad-blocking. Not active by default.
sudo systemctl enable --now dnsmasq
# /etc/dnsmasq.conf — key examples:
# Custom local hostnames
# address=/nas.home/192.168.1.10
# address=/printer.home/192.168.1.20
# address=/desktop.home/192.168.1.50
# Upstream DNS servers
# server=1.1.1.1
# server=8.8.8.8
# Local DHCP range (if dnsmasq acts as DHCP server)
# dhcp-range=192.168.1.100,192.168.1.200,12h
# dhcp-option=3,192.168.1.1       # option 3 = default gateway
# Block trackers/ads by routing to 0.0.0.0
# address=/ads.doubleclick.net/0.0.0.0
# Restart after changes
sudo systemctl restart dnsmasq
# Point NetworkManager to use dnsmasq
# Add /etc/NetworkManager/conf.d/dns.conf:
# [main]
# dns=dnsmasq
sudo systemctl restart NetworkManager
Avahi is active by default — your machine is immediately reachable as hostname.local on the LAN. Used by CUPS (printers), KDE Connect, DLNA, and SSH discovery.
# Discover services on your network
avahi-browse -a # all services
avahi-browse -at # one-shot (no live view)
avahi-browse _http._tcp # HTTP services only
avahi-browse _ssh._tcp # SSH servers
avahi-browse _smb._tcp # Samba shares
avahi-browse _ipp._tcp # Printers
# Look up a .local hostname
avahi-resolve --name myhostname.local
avahi-resolve --address 192.168.1.50
# Publish a custom service
# Create /etc/avahi/services/caddy.service:
# <?xml version="1.0" standalone='no'?>
# <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
# <service-group>
# <name replace-wildcards="yes">My Web Server on %h</name>
# <service><type>_http._tcp</type><port>8080</port></service>
# </service-group>
sudo systemctl status avahi-daemon
KDE Connect (KDE edition) and GSConnect (GNOME extension, pre-installed on GNOME edition) connect your Android phone to your Linux desktop over the local network. Features: notification sync, clipboard sharing, file transfer, remote control, media control, SMS from desktop, and running commands on the PC from your phone. The necessary firewall ports are pre-opened on ShaniOS — no manual firewall configuration needed.
# ── SETUP ───────────────────────────────────────────────────────────────
# 1. Install KDE Connect on your Android phone from Google Play or F-Droid
# 2. Ensure phone and PC are on the same WiFi network
# 3. Open KDE Connect on KDE, or click the GSConnect icon in GNOME top bar
# 4. The phone appears automatically — click "Pair" on both devices
# ── KDE CONNECT CLI (kdeconnect-cli) ─────────────────────────────────────
# List paired devices
kdeconnect-cli --list-devices
kdeconnect-cli --list-available # discoverable devices on LAN
# Send a file to your phone
kdeconnect-cli --device "Phone Name" --share ~/Documents/file.pdf
# Ping your phone (find it)
kdeconnect-cli --device "Phone Name" --ping
# Send a text/notification to phone
kdeconnect-cli --device "Phone Name" --ping-msg "Hey, check this"
# Lock phone screen
kdeconnect-cli --device "Phone Name" --lock
# Ring phone (find it in your couch)
kdeconnect-cli --device "Phone Name" --ring
# ── FIREWALL ─────────────────────────────────────────────────────────────
# Ports are pre-opened on ShaniOS (1714-1764 TCP+UDP in public zone)
# To verify:
sudo firewall-cmd --list-all | grep 1714
# ── GSCONNECT (GNOME) ────────────────────────────────────────────────────
# GSConnect is a GNOME Shell extension — it appears as an icon in the top bar
# after pairing. No CLI; use the extension menu or the phone app directly.
# To enable GSConnect if not already active:
gnome-extensions enable gsconnect@andyholmes.github.io
Streams your music, videos, and photos to smart TVs, game consoles (PS4/PS5/Xbox), Kodi, and any DLNA renderer on the LAN. GNOME edition only. Not active by default.
# Start Rygel as a user service
systemctl --user enable --now rygel
# Configure — ~/.config/rygel.conf
# [MediaExport]
# enabled=true
# uris=/home/user/Music;/home/user/Videos;/home/user/Pictures
# Check discovery
avahi-browse -at | grep -i rygel     # -t terminates after the initial dump
# View logs
journalctl --user -u rygel -f
KDE edition, or a more powerful alternative on either edition: install Jellyfin (flatpak install flathub com.jellyfin.JellyfinServer) for a full self-hosted media server with web UI, transcoding, and multi-user support. Run it as a container for more control: podman run -d --name jellyfin -p 8096:8096 -v /home/user/media:/media:Z jellyfin/jellyfin
NetworkManager is pre-installed and active by default. All network connections — wired, Wi-Fi, mobile broadband, and VPN — are managed through it. Use nmcli for scripting, nmtui for a terminal UI, or the full nm-connection-editor GUI.
# --- Connection management ---
nmcli device status # show all interfaces
nmcli connection show # list all saved connections
nmcli connection up "MyWifi" # activate a saved connection
nmcli device wifi list # scan for available Wi-Fi
nmcli device wifi connect "SSID" password "pass"
# --- Hotspot (share internet over Wi-Fi) ---
nmcli device wifi hotspot ssid "MyHotspot" password "secret"
# --- Static IP (example) ---
nmcli connection modify "eth0" ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 1.1.1.1
# --- Terminal UI (interactive) ---
nmtui
All major VPN protocols are pre-installed as NetworkManager plugins. Connect to any VPN by opening Settings → Network → VPN → + and choosing your protocol — no manual package installation needed.
# --- OpenVPN (most common, .ovpn file) ---
# Import from GUI: Settings → Network → VPN → + → Import from file → select .ovpn
nmcli connection import type openvpn file /path/to/client.ovpn
nmcli connection up "MyOpenVPN"
# --- WireGuard (fast, modern) ---
# Import from GUI or from a .conf file:
nmcli connection import type wireguard file /etc/wireguard/wg0.conf
# Or use the NM GUI to paste your private key, peer public key, and endpoint.
# --- strongSwan / IKEv2 (corporate, certificate-based) ---
# Configure via nm-connection-editor: IPsec/IKEv2, enter server, your certificate.
nmcli connection up "CorpVPN"
# --- Cisco AnyConnect / OpenConnect ---
# GUI: Settings → Network → VPN → + → Cisco AnyConnect Compatible
# CLI:
openconnect --protocol=anyconnect vpn.example.com
# --- Fortinet SSL VPN ---
openfortivpn vpn.example.com:443 --username=you
# --- L2TP/IPsec, PPTP, SSTP, VPNC ---
# All configured through Settings → Network → VPN → +, select protocol.
# --- List and toggle VPN connections ---
nmcli connection show --active
nmcli connection up "MyVPN"
nmcli connection down "MyVPN"
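The up/down commands above can be folded into a single toggle. A minimal sketch: the vpn_action helper is our name, not an nmcli feature; it only decides the verb from the active-connection list.

```shell
# Decide whether to bring a VPN up or down, given the output of
# `nmcli -t -f NAME connection show --active` (one connection name per line)
vpn_action() {                      # usage: vpn_action NAME ACTIVE_LIST
  local name=$1 active=$2
  if printf '%s\n' "$active" | grep -qxF -- "$name"; then
    echo down                       # already active, so bring it down
  else
    echo up                         # not active, so bring it up
  fi
}
# Toggle in one line:
# nmcli connection "$(vpn_action "MyVPN" "$(nmcli -t -f NAME connection show --active)")" "MyVPN"
```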
ShaniOS includes both client and server tools for remote desktop access.
# --- FreeRDP — connect TO a Windows/RDP server (pre-installed) ---
xfreerdp /v:192.168.1.100 /u:username /p:password /dynamic-resolution /gfx /rfx
# Full-screen RDP session:
xfreerdp /v:server.example.com /u:me /f /multimon
# RDP over SSH tunnel (recommended for security):
ssh -L 3389:192.168.1.100:3389 jumphost
xfreerdp /v:localhost /u:username
# --- kRDP (KDE RDP server — pre-installed on KDE edition) ---
# Enable: Settings → System → Remote Desktop → Enable Remote Desktop
# Speaks the standard RDP protocol; accessible from Windows "Remote Desktop Connection"
systemctl --user enable --now plasma-remotedesktop
# --- kRFB / krfb (KDE VNC server — pre-installed on KDE edition) ---
systemctl --user enable --now krfb
# Connect from any VNC viewer: vncviewer hostname:0   (display :0 = TCP port 5900)
# --- gnome-remote-desktop (GNOME edition — pre-installed) ---
# Enable: Settings → Sharing → Remote Desktop
# Supports both RDP and VNC protocols
grdctl rdp enable
grdctl rdp set-credentials username password
grdctl status
# --- SSH with X forwarding (run GUI apps remotely) ---
ssh -X user@host
ssh -Y user@host # trusted (faster, less secure)
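When scripting the SSH-tunnel-then-RDP pattern above, it helps to wait until the forwarded port is actually accepting connections before launching the client. A pure-bash sketch using the /dev/tcp pseudo-device (wait_for_port is our helper name):

```shell
# Block until a TCP port accepts connections, with a timeout — no nc needed
wait_for_port() {                   # usage: wait_for_port HOST PORT [TIMEOUT_SECS]
  local host=$1 port=$2 deadline=$(( $(date +%s) + ${3:-15} ))
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}
# Example: open the tunnel in the background, connect once it is ready
# ssh -f -N -L 3389:192.168.1.100:3389 jumphost
# wait_for_port localhost 3389 10 && xfreerdp /v:localhost /u:username
```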
ModemManager is pre-installed and integrates with NetworkManager for USB/PCIe mobile broadband modems and SIM cards. Manages LTE/5G, SMS, and signal monitoring.
# --- ModemManager status ---
mmcli -L # list detected modems
mmcli -m 0 # detailed modem info (signal, tech, SIM)
mmcli -m 0 --signal-get # signal strength
# --- Connect via NetworkManager ---
# NM auto-detects the modem; configure APN in Settings → Network → Mobile Broadband
nmcli connection show # the modem connection will appear here
# --- SMS (if supported by modem) ---
mmcli -m 0 --messaging-list-sms
mmcli -s 0 # read SMS message 0
mmcli -m 0 --messaging-create-sms="text='Hello',number='+1234567890'"
mmcli -s 0 --send                # send the created SMS (index/path printed above)
Only Caddy is pre-installed as a server. All other server software runs as rootless Podman containers — this is the recommended approach on ShaniOS. Container data lives in the persistent @containers Btrfs subvolume and survives all system updates. Use --restart unless-stopped for services you want to auto-start after reboot, and optionally generate a systemd unit with podman generate systemd for tighter integration.
Rootless tip: Always bind container ports to 127.0.0.1 (e.g. -p 127.0.0.1:3000:3000) and let Caddy proxy them. This way the service is only reachable via Caddy's HTTPS, not directly from the network. The :Z suffix on volume mounts relabels them for SELinux; on AppArmor-based systems such as ShaniOS it is a harmless no-op, kept in these examples for portability.
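The loopback-binding advice can be audited after the fact. A sketch using ss (local_only is our helper name): it succeeds only when nothing listens on the given port outside 127.0.0.1/::1.

```shell
# Verify a published port is bound to loopback only
local_only() {                      # usage: local_only PORT → success if loopback-only
  ss -Hltn "sport = :$1" | awk '{print $4}' \
    | grep -vE '^(127\.|\[::1\])' | grep -q . && return 1 || return 0
}
# local_only 3000 || echo "WARNING: port 3000 is reachable from the network"
```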
Nginx — high-performance web / reverse proxy:
podman run -d \
--name nginx \
-p 127.0.0.1:8081:80 \
-v /home/user/www:/usr/share/nginx/html:ro,Z \
-v /home/user/nginx.conf:/etc/nginx/nginx.conf:ro,Z \
--restart unless-stopped \
nginx:alpine
# Simple nginx.conf for serving static files
# /home/user/nginx.conf:
# server {
# listen 80;
# root /usr/share/nginx/html;
# index index.html;
# try_files $uri $uri/ =404;
# }
# Caddy proxies it publicly:
# mysite.example.com { reverse_proxy localhost:8081 }
Apache HTTPD — classic web server with .htaccess support:
podman run -d \
--name apache \
-p 127.0.0.1:8082:80 \
-v /home/user/www:/usr/local/apache2/htdocs:ro,Z \
--restart unless-stopped \
httpd:alpine
# With custom config
podman run -d \
--name apache \
-p 127.0.0.1:8082:80 \
-v /home/user/www:/usr/local/apache2/htdocs:Z \
-v /home/user/httpd.conf:/usr/local/apache2/conf/httpd.conf:ro,Z \
--restart unless-stopped \
httpd:latest
Nginx + PHP-FPM — PHP apps (WordPress, Laravel, custom):
# ~/php-stack/compose.yml
# services:
# nginx:
# image: nginx:alpine
# ports: ["127.0.0.1:8080:80"]
# volumes:
# - ./www:/var/www/html:ro,Z
# - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro,Z
# depends_on: [php]
#
# php:
# image: php:8.3-fpm-alpine
# volumes:
# - ./www:/var/www/html:Z
# depends_on: [db]
#
# db:
# image: mariadb:11
# environment:
# MYSQL_ROOT_PASSWORD: rootpass
# MYSQL_DATABASE: myapp
# MYSQL_USER: appuser
# MYSQL_PASSWORD: apppass
# volumes: [db_data:/var/lib/mysql]
#
# volumes: {db_data: {}}
mkdir -p ~/php-stack/www
podman-compose -f ~/php-stack/compose.yml up -d
MariaDB / MySQL — relational database:
podman run -d \
--name mariadb \
-p 127.0.0.1:3306:3306 \
-e MYSQL_ROOT_PASSWORD=strongpassword \
-e MYSQL_DATABASE=mydb \
-e MYSQL_USER=myuser \
-e MYSQL_PASSWORD=myuserpass \
-v mariadb_data:/var/lib/mysql \
--restart unless-stopped \
mariadb:11
# Connect from host
podman exec -it mariadb mariadb -u myuser -p mydb
# or use mysql client if installed via Distrobox
PostgreSQL — advanced relational database:
podman run -d \
--name postgres \
-p 127.0.0.1:5432:5432 \
-e POSTGRES_USER=myuser \
-e POSTGRES_PASSWORD=strongpassword \
-e POSTGRES_DB=mydb \
-v postgres_data:/var/lib/postgresql/data \
--restart unless-stopped \
postgres:16-alpine
# Connect
podman exec -it postgres psql -U myuser -d mydb
# pgAdmin web UI (optional)
podman run -d \
--name pgadmin \
-p 127.0.0.1:5050:80 \
-e PGADMIN_DEFAULT_EMAIL=admin@example.com \
-e PGADMIN_DEFAULT_PASSWORD=admin \
--restart unless-stopped \
dpage/pgadmin4
Redis — in-memory cache & message broker:
podman run -d \
--name redis \
-p 127.0.0.1:6379:6379 \
-v redis_data:/data \
--restart unless-stopped \
redis:7-alpine redis-server --appendonly yes
# Connect and test
podman exec -it redis redis-cli ping
podman exec -it redis redis-cli set mykey "hello"
podman exec -it redis redis-cli get mykey
# Redis with password
podman run -d \
--name redis \
-p 127.0.0.1:6379:6379 \
-v redis_data:/data \
--restart unless-stopped \
redis:7-alpine redis-server --appendonly yes --requirepass strongpassword
MongoDB — document database:
podman run -d \
--name mongodb \
-p 127.0.0.1:27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=strongpassword \
-v mongodb_data:/data/db \
--restart unless-stopped \
mongo:7
# Connect
podman exec -it mongodb mongosh -u admin -p strongpassword --authenticationDatabase admin
SQLite via Litestream — replicated SQLite:
# Litestream continuously replicates a SQLite database to S3/Backblaze/local
podman run -d \
--name litestream \
-v /home/user/app/db:/data:Z \
-v /home/user/litestream.yml:/etc/litestream.yml:ro,Z \
--restart unless-stopped \
litestream/litestream replicate
# litestream.yml example:
# dbs:
# - path: /data/app.db
# replicas:
# - url: s3://mybucket/app.db
Jellyfin — self-hosted media server (movies, TV, music):
podman run -d \
--name jellyfin \
-p 127.0.0.1:8096:8096 \
-v /home/user/jellyfin/config:/config:Z \
-v /home/user/jellyfin/cache:/cache:Z \
-v /home/user/media:/media:ro,Z \
--restart unless-stopped \
jellyfin/jellyfin
# With hardware transcoding (Intel VA-API)
podman run -d \
--name jellyfin \
-p 127.0.0.1:8096:8096 \
--device /dev/dri/renderD128:/dev/dri/renderD128 \
-v /home/user/jellyfin/config:/config:Z \
-v /home/user/media:/media:ro,Z \
--restart unless-stopped \
jellyfin/jellyfin
# Proxy: jellyfin.example.com { reverse_proxy localhost:8096 }
Navidrome — self-hosted music streaming (Subsonic-compatible):
podman run -d \
--name navidrome \
-p 127.0.0.1:4533:4533 \
-v /home/user/navidrome/data:/data:Z \
-v /home/user/Music:/music:ro,Z \
-e ND_SCANSCHEDULE="@every 1h" \
-e ND_LOGLEVEL=info \
--restart unless-stopped \
deluan/navidrome:latest
# Compatible with DSub, Ultrasonic, Symfonium apps on Android/iOS
# Proxy: music.example.com { reverse_proxy localhost:4533 }
Nextcloud — self-hosted Google Drive/Dropbox alternative:
# ~/nextcloud/compose.yml
# services:
# db:
# image: mariadb:11
# environment:
# MYSQL_ROOT_PASSWORD: rootpass
# MYSQL_DATABASE: nextcloud
# MYSQL_USER: nc
# MYSQL_PASSWORD: ncpass
# volumes: [db_data:/var/lib/mysql]
# restart: unless-stopped
#
# redis:
# image: redis:7-alpine
# restart: unless-stopped
#
# nextcloud:
# image: nextcloud:29
# ports: ["127.0.0.1:8888:80"]
# environment:
# MYSQL_HOST: db
# MYSQL_DATABASE: nextcloud
# MYSQL_USER: nc
# MYSQL_PASSWORD: ncpass
# REDIS_HOST: redis
# NEXTCLOUD_ADMIN_USER: admin
# NEXTCLOUD_ADMIN_PASSWORD: changeme
# volumes: [nc_data:/var/www/html]
# depends_on: [db, redis]
# restart: unless-stopped
#
# volumes: {db_data: {}, nc_data: {}}
mkdir -p ~/nextcloud
podman-compose -f ~/nextcloud/compose.yml up -d
# Proxy: cloud.example.com { reverse_proxy localhost:8888 }
Syncthing — peer-to-peer file sync (no central server):
podman run -d \
--name syncthing \
-p 127.0.0.1:8384:8384 \
-p 22000:22000/tcp \
-p 22000:22000/udp \
-p 21027:21027/udp \
-v /home/user/syncthing/config:/var/syncthing/config:Z \
-v /home/user/sync:/var/syncthing/Sync:Z \
-e PUID=$(id -u) \
-e PGID=$(id -g) \
--restart unless-stopped \
syncthing/syncthing:latest
# Open firewall ports for Syncthing
sudo firewall-cmd --add-port=22000/tcp --permanent
sudo firewall-cmd --add-port=22000/udp --permanent
sudo firewall-cmd --add-port=21027/udp --permanent
sudo firewall-cmd --reload
# Web UI: http://localhost:8384
# Proxy (optional): sync.example.com { reverse_proxy localhost:8384 }
Filebrowser — web-based file manager:
podman run -d \
--name filebrowser \
-p 127.0.0.1:8085:80 \
-v /home/user:/srv:Z \
-v /home/user/filebrowser.db:/database.db:Z \
--restart unless-stopped \
filebrowser/filebrowser:s6
# Access at http://localhost:8085 (admin/admin — change immediately)
# files.example.com { reverse_proxy localhost:8085 }
Gitea — self-hosted Git with web UI, issues, CI:
podman run -d \
--name gitea \
-p 127.0.0.1:3000:3000 \
-p 127.0.0.1:2222:22 \
-v /home/user/gitea:/data:Z \
-e USER_UID=$(id -u) \
-e USER_GID=$(id -g) \
--restart unless-stopped \
gitea/gitea:latest
# git.example.com { reverse_proxy localhost:3000 }
# SSH push via non-default port
# In ~/.ssh/config on clients:
# Host git.example.com
# Port 2222
Forgejo — community fork of Gitea:
podman run -d \
--name forgejo \
-p 127.0.0.1:3000:3000 \
-p 127.0.0.1:2222:22 \
-v /home/user/forgejo:/data:Z \
-e USER_UID=$(id -u) \
-e USER_GID=$(id -g) \
--restart unless-stopped \
codeberg.org/forgejo/forgejo:latest
Vaultwarden — self-hosted Bitwarden-compatible password manager:
podman run -d \
--name vaultwarden \
-p 127.0.0.1:8180:80 \
-v /home/user/vaultwarden:/data:Z \
-e WEBSOCKET_ENABLED=true \
-e ADMIN_TOKEN=$(openssl rand -base64 48) \
--restart unless-stopped \
vaultwarden/server:latest
# HTTPS is effectively required: the web vault's crypto APIs only run on a secure origin
# vault.example.com {
#   reverse_proxy localhost:8180
# }
# (Current Vaultwarden serves WebSocket notifications on the main web port,
# so no separate /notifications/hub route or port 3012 is needed)
Prometheus + Grafana — metrics collection and dashboards:
# Create a minimal Prometheus config first
mkdir -p ~/monitoring
cat > ~/monitoring/prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'node'
    static_configs:
      # node-exporter runs with --network host, so point at the host itself
      # (host.containers.internal works on podman >= 4; or use your LAN IP)
      - targets: ['host.containers.internal:9100']
EOF
# Node Exporter — expose system metrics (CPU, RAM, disk, network)
podman run -d \
--name node-exporter \
--network host \
-v /proc:/host/proc:ro,rslave \
-v /sys:/host/sys:ro,rslave \
-v /:/rootfs:ro,rslave \
--restart unless-stopped \
prom/node-exporter \
--path.procfs=/host/proc --path.sysfs=/host/sys
# Prometheus
podman run -d \
--name prometheus \
-p 127.0.0.1:9090:9090 \
-v ~/monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro,Z \
-v prometheus_data:/prometheus \
--restart unless-stopped \
prom/prometheus
# Grafana
podman run -d \
--name grafana \
-p 127.0.0.1:3001:3000 \
-v grafana_data:/var/lib/grafana \
-e GF_SECURITY_ADMIN_PASSWORD=changeme \
--restart unless-stopped \
grafana/grafana
# grafana.example.com { reverse_proxy localhost:3001 }
Uptime Kuma — self-hosted service uptime monitoring:
podman run -d \
--name uptime-kuma \
-p 127.0.0.1:3001:3001 \
-v /home/user/uptime-kuma:/app/data:Z \
--restart unless-stopped \
louislam/uptime-kuma:latest
# status.example.com { reverse_proxy localhost:3001 }
Netdata — real-time system performance monitoring:
podman run -d \
--name netdata \
-p 127.0.0.1:19999:19999 \
--cap-add SYS_PTRACE \
--security-opt apparmor=unconfined \
-v netdata_config:/etc/netdata \
-v netdata_lib:/var/lib/netdata \
-v netdata_cache:/var/cache/netdata \
-v /etc/passwd:/host/etc/passwd:ro \
-v /proc:/host/proc:ro \
-v /sys:/host/sys:ro \
--restart unless-stopped \
netdata/netdata
# metrics.example.com { reverse_proxy localhost:19999 }
Home Assistant — home automation platform:
podman run -d \
--name homeassistant \
--network host \
-v /home/user/homeassistant/config:/config:Z \
-e TZ=Europe/London \
--restart unless-stopped \
ghcr.io/home-assistant/home-assistant:stable
# Access at http://localhost:8123 for initial setup
# hass.example.com { reverse_proxy localhost:8123 }
# Note: --network host is recommended so HA can discover devices on LAN
# Open firewall if using host networking
sudo firewall-cmd --add-port=8123/tcp --permanent
sudo firewall-cmd --reload
Ntfy — push notifications server:
podman run -d \
--name ntfy \
-p 127.0.0.1:8090:80 \
-v /home/user/ntfy/cache:/var/cache/ntfy:Z \
-v /home/user/ntfy/etc:/etc/ntfy:Z \
--restart unless-stopped \
binwiederhier/ntfy serve
# ntfy.example.com { reverse_proxy localhost:8090 }
# Send a notification from the command line
curl -d "Your backup completed successfully" ntfy.example.com/my-alerts
# Subscribe in the ntfy Android/iOS app or browser
Gotify — self-hosted push notification server:
podman run -d \
--name gotify \
-p 127.0.0.1:8070:80 \
-v /home/user/gotify/data:/app/data:Z \
--restart unless-stopped \
gotify/server
# push.example.com { reverse_proxy localhost:8070 }
Woodpecker CI — lightweight CI/CD for Gitea/Forgejo:
# ~/woodpecker/compose.yml
# services:
# woodpecker-server:
# image: woodpeckerci/woodpecker-server:latest
# ports: ["127.0.0.1:8000:8000"]
# environment:
# WOODPECKER_OPEN: "true"
# WOODPECKER_HOST: https://ci.example.com
# WOODPECKER_GITEA: "true"
# WOODPECKER_GITEA_URL: https://git.example.com
# WOODPECKER_GITEA_CLIENT: <oauth2-client-id>
# WOODPECKER_GITEA_SECRET: <oauth2-client-secret>
# WOODPECKER_AGENT_SECRET: <random-secret>
# volumes: [woodpecker_data:/var/lib/woodpecker]
# restart: unless-stopped
#
# woodpecker-agent:
# image: woodpeckerci/woodpecker-agent:latest
# environment:
# WOODPECKER_SERVER: woodpecker-server:9000
# WOODPECKER_AGENT_SECRET: <same-random-secret>
# volumes: [/run/user/1000/podman/podman.sock:/var/run/docker.sock]
# depends_on: [woodpecker-server]
# restart: unless-stopped
podman-compose -f ~/woodpecker/compose.yml up -d
Pi-hole — network-wide DNS ad blocker:
podman run -d \
--name pihole \
-p 127.0.0.1:8083:80 \
-p 53:53/tcp \
-p 53:53/udp \
-e TZ=Europe/London \
-e WEBPASSWORD=changeme \
-v /home/user/pihole/etc-pihole:/etc/pihole:Z \
-v /home/user/pihole/etc-dnsmasq.d:/etc/dnsmasq.d:Z \
--restart unless-stopped \
pihole/pihole:latest
# Rootless podman cannot bind port 53 (<1024) by default; run via sudo, or
# lower the threshold: sudo sysctl net.ipv4.ip_unprivileged_port_start=53
# Open DNS port in firewall (if using as LAN DNS)
sudo firewall-cmd --add-service=dns --permanent
sudo firewall-cmd --reload
# Point your router's DNS to this machine's IP
# Web UI: http://localhost:8083/admin
AdGuard Home — DNS-based ad/tracker blocker with DoH/DoT:
podman run -d \
--name adguardhome \
-p 53:53/tcp -p 53:53/udp \
-p 127.0.0.1:3000:3000 \
-p 853:853/tcp \
-v /home/user/adguard/work:/opt/adguardhome/work:Z \
-v /home/user/adguard/conf:/opt/adguardhome/conf:Z \
--restart unless-stopped \
adguard/adguardhome
# Initial setup wizard at http://localhost:3000
# Supports DNS-over-HTTPS and DNS-over-TLS out of the box
Nginx Proxy Manager — GUI-based reverse proxy with Let's Encrypt:
# ~/npm/compose.yml
# services:
# app:
# image: jc21/nginx-proxy-manager:latest
# ports:
# - "80:80"
# - "443:443"
# - "127.0.0.1:81:81"
# volumes:
# - /home/user/npm/data:/data:Z
# - /home/user/npm/letsencrypt:/etc/letsencrypt:Z
# restart: unless-stopped
podman-compose -f ~/npm/compose.yml up -d
# Web UI at http://localhost:81 (default login: admin@example.com / changeme)
# Note: ports 80/443 clash with the pre-installed Caddy (stop Caddy first), and
# rootless podman needs net.ipv4.ip_unprivileged_port_start lowered to bind them
Stirling PDF — self-hosted PDF manipulation tool:
podman run -d \
--name stirling-pdf \
-p 127.0.0.1:8080:8080 \
-v /home/user/stirling/trainingData:/usr/share/tessdata:Z \
-v /home/user/stirling/extraConfigs:/configs:Z \
--restart unless-stopped \
frooodle/s-pdf:latest
# pdf.example.com { reverse_proxy localhost:8080 }
Paperless-ngx — document management / OCR:
# ~/paperless/compose.yml
# services:
# broker:
# image: redis:7-alpine
# restart: unless-stopped
#
# db:
# image: postgres:15-alpine
# environment:
# POSTGRES_DB: paperless
# POSTGRES_USER: paperless
# POSTGRES_PASSWORD: paperless
# volumes: [pgdata:/var/lib/postgresql/data]
# restart: unless-stopped
#
# webserver:
# image: ghcr.io/paperless-ngx/paperless-ngx:latest
# ports: ["127.0.0.1:8000:8000"]
# environment:
# PAPERLESS_REDIS: redis://broker:6379
# PAPERLESS_DBHOST: db
# PAPERLESS_OCR_LANGUAGE: eng
# PAPERLESS_TIME_ZONE: Europe/London
# volumes:
# - /home/user/paperless/data:/usr/src/paperless/data:Z
# - /home/user/paperless/media:/usr/src/paperless/media:Z
# - /home/user/Documents/inbox:/usr/src/paperless/consume:Z
# depends_on: [broker, db]
# restart: unless-stopped
podman-compose -f ~/paperless/compose.yml up -d
# docs.example.com { reverse_proxy localhost:8000 }
Miniflux — minimal RSS / Atom feed reader:
podman run -d \
--name miniflux \
-p 127.0.0.1:8080:8080 \
-e DATABASE_URL="postgres://miniflux:password@localhost/miniflux?sslmode=disable" \
-e RUN_MIGRATIONS=1 \
-e CREATE_ADMIN=1 \
-e ADMIN_USERNAME=admin \
-e ADMIN_PASSWORD=changeme \
--restart unless-stopped \
miniflux/miniflux:latest
# rss.example.com { reverse_proxy localhost:8080 }
# Miniflux needs a reachable PostgreSQL (see the postgres container above);
# note that inside a rootless container, "localhost" is the container itself
LinkWarden — collaborative bookmark manager:
podman run -d \
--name linkwarden \
-p 127.0.0.1:3000:3000 \
-e DATABASE_URL="postgresql://linkwarden:password@localhost:5432/linkwarden" \
-e NEXTAUTH_SECRET=$(openssl rand -base64 32) \
-e NEXTAUTH_URL=https://links.example.com \
-v /home/user/linkwarden/data:/data/data:Z \
--restart unless-stopped \
ghcr.io/linkwarden/linkwarden:latest
Make containers start automatically at login (systemd user service):
# Generate a systemd unit for a running container
podman generate systemd --name jellyfin --files --new
# (podman >= 4.4 recommends Quadlet .container files instead, but this still works)
# Move the generated unit to your user systemd dir
mkdir -p ~/.config/systemd/user
mv container-jellyfin.service ~/.config/systemd/user/
# Enable it (starts at login, without needing root)
systemctl --user daemon-reload
systemctl --user enable --now container-jellyfin.service
# Enable lingering so services start at boot even without a login session
loginctl enable-linger $USER
# Check status
systemctl --user status container-jellyfin.service
Sync, copy, and mount cloud storage (S3, Backblaze B2, Google Drive, Nextcloud, SFTP, WebDAV, and 40+ more). Config stored in /data/varlib/rclone.
# Interactive setup wizard
rclone config
# Sync local → cloud (mirror; deletes files removed locally)
rclone sync /home/user/data remote:my-bucket --progress
# Copy local → cloud (never deletes remote)
rclone copy /var/www/html remote:backups --progress
# Mount cloud storage as local folder
mkdir -p ~/mnt/cloud
rclone mount remote:my-bucket ~/mnt/cloud --daemon --vfs-cache-mode writes
# Unmount
fusermount -u ~/mnt/cloud
# List remote files
rclone ls remote:my-bucket
rclone lsd remote: # list buckets/containers
# Check sync differences without transferring
rclone check /local/dir remote:bucket
# Delete files older than 30 days on remote
rclone delete remote:old-logs --min-age 30d
# Filter by pattern
rclone sync /home/user/docs remote:docs --include "*.pdf" --exclude "*.tmp"
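The mount command above fails noisily if the directory is already mounted; a small idempotent wrapper avoids that in login scripts. A sketch (mount_cloud is our helper name, not part of rclone):

```shell
# Idempotent mount wrapper: skip if the target is already a mountpoint
mount_cloud() {                     # usage: mount_cloud REMOTE:PATH DIR
  local remote=$1 dir=$2
  mkdir -p "$dir"
  if mountpoint -q "$dir"; then
    echo "already mounted: $dir"
    return 0
  fi
  rclone mount "$remote" "$dir" --daemon --vfs-cache-mode writes
}
# mount_cloud remote:my-bucket ~/mnt/cloud
```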
Content-addressed, encrypted, deduplicated backups to any storage rclone can reach. Config stored in /data/varlib/restic.
# Initialize repository (local or rclone backend)
restic init --repo /mnt/backup/myrepo
restic -r rclone:remote:restic-repo init
# Create a snapshot
restic -r /mnt/backup/myrepo backup /home/user /var/www/html /data/varlib
# Use RESTIC_REPOSITORY and RESTIC_PASSWORD env vars to avoid repetition
export RESTIC_REPOSITORY=/mnt/backup/myrepo
export RESTIC_PASSWORD=mysecretpassword
restic backup /home/user
# List snapshots
restic snapshots
# Restore latest snapshot
restic restore latest --target /tmp/restored
# Restore a specific snapshot (use ID from `restic snapshots`)
restic restore 1a2b3c4d --target /tmp/restored
# Mount snapshots as a FUSE filesystem (browse by time)
mkdir -p ~/mnt/restic-browse
restic mount ~/mnt/restic-browse
# Check repository integrity
restic check
# Prune old snapshots (keep 7 daily, 4 weekly, 12 monthly)
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
Automated daily backup via systemd timer:
# Create ~/.config/systemd/user/restic-backup.service:
# [Unit]
# Description=restic backup
# [Service]
# Type=oneshot
# Environment="RESTIC_REPOSITORY=/mnt/backup/myrepo"
# Environment="RESTIC_PASSWORD=mysecretpassword"
# ExecStart=/usr/bin/restic backup /home/%u
# ExecStartPost=/usr/bin/restic forget --keep-daily 7 --keep-weekly 4 --prune
# Create ~/.config/systemd/user/restic-backup.timer:
# [Unit]
# Description=Daily restic backup
# [Timer]
# OnCalendar=daily
# Persistent=true
# [Install]
# WantedBy=timers.target
systemctl --user daemon-reload
systemctl --user enable --now restic-backup.timer
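The timer runs silently; a thin wrapper makes failures visible so they can be piped to a notifier such as the ntfy example earlier in this page. A sketch (backup_report is our helper name):

```shell
# Run a backup command and print a one-line status report; propagate failure
backup_report() {                   # usage: backup_report COMMAND [ARGS...]
  if "$@"; then
    echo "backup OK: $*"
  else
    local rc=$?
    echo "backup FAILED (exit $rc): $*"
    return "$rc"
  fi
}
# backup_report restic backup /home/user | curl -s -d @- ntfy.example.com/my-alerts
```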
All tools listed here are pre-installed. Packages: iproute2 (ip/ss), iputils (ping/arping), inetutils (traceroute/ftp/telnet), net-tools (ifconfig/netstat/arp/route), bind (dig/host/nslookup), iw, wireless_tools (iwconfig/iwlist/iwspy), wpa_supplicant (wpa_cli/wpa_passphrase), nmap, tcpdump, mtr, iperf3, iftop, nethogs, bandwhich (bandwidth by process and remote address), ethtool, socat, ngrep, whois, net-snmp (snmpwalk/snmpget), openbsd-netcat, curl, wget, aria2 (multi-protocol: HTTP/FTP/BitTorrent/Metalink), zsync, lsof, openldap (ldapsearch), openssl.
ping / arping — reachability & MAC detection:
# ICMP ping (from iputils)
ping -c 4 shani.dev
ping -c 4 -i 0.2 192.168.1.1 # fast ping, 200ms interval
ping -s 1400 -c 4 192.168.1.1 # large packet (MTU test)
ping -6 -c 4 ipv6.google.com # IPv6 reachability (ping6 on older iputils)
# ARP ping — verify a host is alive at Layer 2 (LAN only)
sudo arping -I eth0 -c 3 192.168.1.1
sudo arping -I eth0 192.168.1.50 # find MAC address of an IP
traceroute / mtr — path tracing:
# Classic traceroute (from inetutils)
traceroute shani.dev
traceroute -n shani.dev # no DNS reverse lookup (faster)
traceroute -T -p 443 shani.dev # TCP traceroute (bypasses ICMP blocks)
traceroute -U -p 53 8.8.8.8    # UDP traceroute
# Note: -T/-U come from the standalone traceroute package; inetutils
# traceroute only offers UDP/ICMP probing (via -M)
# MTR — live traceroute + per-hop packet loss/latency (mtr package)
mtr shani.dev # interactive TUI
mtr --report --report-cycles 20 shani.dev # non-interactive report
mtr -n --report shani.dev # no DNS (pure IPs)
mtr --tcp --port 443 shani.dev # TCP mode through firewalls
DNS resolution — dig / host / nslookup / resolvectl:
# dig (from bind package) — full DNS query
dig shani.dev # A record (default)
dig shani.dev AAAA # IPv6 record
dig shani.dev MX # mail exchange
dig shani.dev NS # nameservers
dig shani.dev TXT # TXT records (SPF, DKIM, etc.)
dig +short shani.dev A # clean output, IP only
dig +trace shani.dev # full recursive resolution trace
dig @1.1.1.1 shani.dev # query specific resolver (Cloudflare)
dig @8.8.8.8 shani.dev # query Google DNS directly
dig -x 1.2.3.4 # reverse DNS (PTR lookup)
# host — simpler alternative
host shani.dev
host shani.dev 1.1.1.1 # via specific server
# nslookup — interactive or one-shot
nslookup shani.dev
nslookup shani.dev 8.8.8.8
# resolvectl — query systemd-resolved (what the system actually uses)
resolvectl status # per-interface DNS config
resolvectl query shani.dev # resolved via system stub
resolvectl flush-caches # flush DNS cache
cat /etc/resolv.conf # current stub resolver config
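dig asks a DNS server directly, which can disagree with what applications actually resolve through nsswitch.conf. glibc's getent exercises that full NSS path; the resolve_nss wrapper name below is ours:

```shell
# Resolve through the same NSS stack applications use (nsswitch.conf order)
resolve_nss() {                 # usage: resolve_nss NAME  → first address
  getent ahosts "$1" | awk 'NR==1 {print $1}'
}
resolve_nss shani.dev           # empty output if the name does not resolve
```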
ip — interfaces, addresses, routes, neighbours:
# Interfaces
ip link show # all interfaces (state, MAC)
ip -brief link show # compact: name + state + MAC
ip link show eth0 # specific interface
ip link set eth0 up # bring interface up
ip link set eth0 down # bring interface down
# Addresses
ip addr show # all IPs on all interfaces
ip -brief addr show # compact summary
ip addr show eth0 # specific interface
ip addr add 192.168.1.50/24 dev eth0 # add temporary IP
ip addr del 192.168.1.50/24 dev eth0 # remove IP
# Routes
ip route show # routing table
ip route get 8.8.8.8 # which interface/gateway for a destination
ip route add 10.0.0.0/8 via 192.168.1.1 # add a route
ip route del 10.0.0.0/8 # remove a route
# Neighbours (ARP/NDP cache)
ip neigh show # ARP table (LAN MAC→IP mapping)
ip neigh flush all # clear ARP cache
# Statistics
ip -s link show eth0 # TX/RX byte and packet counters
ip -s -s link show eth0 # extended error counters
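The byte counters behind `ip -s link` live under /sys; sampling them twice turns them into a live throughput figure without extra tools. A sketch (rx_rate is our name):

```shell
# Average receive rate of an interface, from kernel byte counters
rx_rate() {                         # usage: rx_rate IFACE [SECONDS] → bytes/sec
  local f=/sys/class/net/$1/statistics/rx_bytes s=${2:-1} a b
  a=$(cat "$f"); sleep "$s"; b=$(cat "$f")
  echo $(( (b - a) / s ))
}
# rx_rate eth0 5        # average receive rate over 5 seconds
```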
ss — socket statistics (replaces netstat):
ss -tlnp # TCP listening sockets with process name/PID
ss -ulnp # UDP listening sockets
ss -tlnp | grep :80 # who is listening on port 80
ss -tnp # all established TCP connections
ss -s # summary: total sockets by state
ss -4 -tnp # IPv4 only
ss -6 -tnp # IPv6 only
ss -xnp # Unix domain sockets
ss -tlnp src :443 # filter by local port
ss -tnp dst 192.168.1.100 # connections to a specific host
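ss combines well with coreutils for quick aggregate views. A sketch of a "top talkers" pipeline: established connections grouped by remote address, busiest first (the :port suffix is stripped with sed, so IPv6 peers keep their bracketed form):

```shell
# Established TCP connections grouped by remote address, busiest first
ss -Htn state established | awk '{print $4}' \
  | sed 's/:[0-9]*$//' | sort | uniq -c | sort -rn | head
```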
netstat / ifconfig / arp / route — legacy net-tools:
# netstat (net-tools — older but widely familiar)
netstat -tlnp # listening TCP sockets (same as ss -tlnp)
netstat -s # protocol statistics (TCP/UDP/IP error counters)
netstat -r # routing table (same as ip route show)
netstat -i # interface statistics
# ifconfig (legacy — use ip addr for new scripts)
ifconfig # all interfaces
ifconfig eth0 # specific interface
# arp (legacy — use ip neigh for new scripts)
arp -n # ARP table without DNS resolution
arp -d 192.168.1.50 # remove an ARP entry
# route (legacy — use ip route for new scripts)
route -n # routing table (numeric)
ethtool — NIC hardware & driver info:
# Link speed and duplex
ethtool eth0 # speed, duplex, link detected
ethtool -i eth0 # driver name and version
ethtool -S eth0 # NIC statistics (errors, drops, missed)
ethtool -k eth0 # offload features (TSO, GRO, etc.)
ethtool -a eth0 # pause parameters (flow control)
# Wake-on-LAN
ethtool -s eth0 wol g # enable WoL on magic packet
ethtool eth0 | grep "Wake-on" # check WoL status
# Test link (loop-back self-test, if NIC supports it)
sudo ethtool -t eth0
nmap — port scanning & service discovery:
# Basic scans
nmap localhost # top 1000 TCP ports on localhost
nmap 192.168.1.100 # remote host
nmap -p 22,80,443,8080 localhost # specific ports
# Discovery
nmap -sn 192.168.1.0/24 # ping sweep: find live hosts (no port scan)
nmap -sn 192.168.1.0/24 -oG - | awk '/Up/{print $2}' # just the live IPs (--open applies to port scans, not -sn)
# Service & version detection
nmap -sV localhost # service version fingerprinting
nmap -sV -p 22 192.168.1.100 # SSH version on remote
nmap -A localhost # OS detect + version + scripts + traceroute
# Script-based probes
nmap --script=http-title 192.168.1.0/24 # grab HTTP page titles on LAN
nmap --script=ssh-hostkey 192.168.1.100 # get SSH host key fingerprint
nmap --script=ssl-cert 192.168.1.100 -p 443 # read TLS cert details
# Fast / quiet scans
nmap -F 192.168.1.100 # fast: top 100 ports only
nmap -T4 -F 192.168.1.0/24 # fast sweep of top 100 ports on subnet
# UDP scanning (requires root)
sudo nmap -sU -p 53,123,161 192.168.1.1 # DNS, NTP, SNMP
tcpdump — live packet capture:
# Capture on any interface
sudo tcpdump -i any -n # all traffic, no DNS resolution
sudo tcpdump -i eth0 -n # specific interface
# Filter by port
sudo tcpdump -i any port 80 -n -A # HTTP traffic, ASCII body
sudo tcpdump -i any port 443 -n # HTTPS (encrypted, shows handshake)
sudo tcpdump -i any port 53 -n # DNS queries
# Filter by host
sudo tcpdump -i any host 192.168.1.50 -n
sudo tcpdump -i any src 192.168.1.50 -n # only FROM that host
sudo tcpdump -i any dst 192.168.1.50 -n # only TO that host
# Combine filters
sudo tcpdump -i any host 192.168.1.50 and port 80 -n
# Save to file for Wireshark analysis
sudo tcpdump -i any -w /tmp/capture.pcap
sudo tcpdump -i any -w /tmp/capture.pcap -C 10 # rotate at 10 MB
# Read saved capture
tcpdump -r /tmp/capture.pcap -n
# Show raw bytes
sudo tcpdump -i any -X port 80 -n | head -60
ngrep — grep over network traffic:
# Match HTTP method lines
sudo ngrep -d any -q "GET|POST|PUT|DELETE" port 80
# Find passwords or tokens in plaintext traffic (internal audit use)
sudo ngrep -d any -qi "password|token|authorization"
# Match DNS queries
sudo ngrep -d any "" udp port 53
# Match on a specific host
sudo ngrep -d any -q "User-Agent" host 192.168.1.50 and port 80
# Quiet mode (only matched packets, no headers)
sudo ngrep -q "error|fail" port 8080
socat — universal relay & TCP/UDP Swiss Army knife:
# Test TCP connectivity to a port
socat - TCP:192.168.1.100:3000
socat - TCP:192.168.1.100:8080,crnl # with line-ending conversion
# Test UDP connectivity
socat - UDP:192.168.1.100:5000
# Simple TCP listener (simulate a server)
socat TCP-LISTEN:9999,fork - # echo server on port 9999
# Port forward: local 8080 → remote 192.168.1.100:80
socat TCP-LISTEN:8080,fork TCP:192.168.1.100:80
# Test TLS/SSL connection (alternative to openssl s_client)
socat - OPENSSL:mysite.example.com:443,verify=0
# Bidirectional pipe between two ports
socat TCP-LISTEN:4444,fork TCP:192.168.1.100:22 # proxy SSH
nc (netcat) — port testing & simple transfer:
# Test if a port is open
nc -zv 192.168.1.100 22 # TCP: verbose, exit immediately
nc -zvw 3 192.168.1.100 443 # 3-second timeout
nc -zvu 192.168.1.100 53 # UDP port test
# Scan a port range
nc -zv 192.168.1.100 20-25
# Simple file transfer (no encryption — LAN only)
# Receiver:
nc -l 9999 > received_file.bin # OpenBSD nc; GNU netcat uses: nc -l -p 9999
# Sender:
nc 192.168.1.100 9999 < file_to_send.bin
# Banner grabbing (see what a service says on connect)
nc 192.168.1.100 25 # SMTP banner
nc 192.168.1.100 21 # FTP banner
nc 192.168.1.100 22 # SSH version string
# Simple HTTP request
printf "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n" | nc example.com 80
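If nc is not installed, bash's /dev/tcp pseudo-device can stand in for a basic port test. A minimal sketch (this is a bash feature, not POSIX sh):

```shell
# Fallback port test using bash's /dev/tcp pseudo-device.
port_open() {  # usage: port_open HOST PORT
  if (exec 3<> "/dev/tcp/$1/$2") 2>/dev/null; then echo open; else echo closed; fi
}
port_open 127.0.0.1 22   # under bash: "open" if a local sshd is listening
```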
curl & wget — HTTP/HTTPS testing:
# curl
curl -v https://mysite.example.com # verbose: headers + body
curl -I https://mysite.example.com # HEAD: response headers only
curl -s -o /dev/null -w "%{http_code}\n" https://mysite.example.com # status code only
curl -s -o /dev/null -w "HTTP %{http_code} | %{time_total}s | %{size_download} bytes\n" https://mysite.example.com
curl -k https://localhost:8443 # ignore self-signed cert
curl -H "Host: mysite.example.com" http://127.0.0.1 # test vhost routing
curl -L https://example.com/redirect # follow redirects
curl -u user:pass https://api.example.com/data # HTTP basic auth
curl -X POST -H "Content-Type: application/json" \
-d '{"key":"value"}' https://api.example.com/endpoint # POST JSON
curl --resolve mysite.example.com:443:127.0.0.1 https://mysite.example.com # test before DNS propagates
curl -w "@/dev/stdin" -o /dev/null -s https://mysite.example.com <<'EOF'
time_namelookup: %{time_namelookup}s\n
time_connect: %{time_connect}s\n
time_starttransfer: %{time_starttransfer}s\n
time_total: %{time_total}s\n
EOF
# wget
wget -q --server-response -O /dev/null https://mysite.example.com 2>&1 | head -20
wget --spider https://mysite.example.com # check if URL exists (no download)
wget -r -l 1 --spider https://mysite.example.com 2>&1 | grep broken # check links
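The status-code checks above extend naturally into a wait-until-healthy loop for deployments. A hedged sketch; the curl invocation in the comment is a placeholder, swap in your own endpoint:

```shell
# Poll a command until it succeeds, with linear backoff.
wait_healthy() {  # usage: wait_healthy <command...>
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge 5 ] && return 1   # give up after 5 attempts
    sleep "$tries"                   # back off 1s, 2s, 3s, 4s
  done
}
# Example (not run here):
# wait_healthy curl -fsS -o /dev/null https://mysite.example.com && echo "site is up"
```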
openssl — TLS/SSL certificate inspection:
# Inspect a server's TLS certificate
openssl s_client -connect mysite.example.com:443 </dev/null 2>&1 | openssl x509 -noout -text
# Check expiry date only
openssl s_client -connect mysite.example.com:443 </dev/null 2>/dev/null \
| openssl x509 -noout -dates
# Test specific TLS version
openssl s_client -connect mysite.example.com:443 -tls1_2 </dev/null
openssl s_client -connect mysite.example.com:443 -tls1_3 </dev/null
# Inspect a local certificate file
openssl x509 -in /etc/caddy/certs/mysite.crt -noout -text
openssl x509 -in /etc/caddy/certs/mysite.crt -noout -dates -subject -issuer
# Test SMTP with STARTTLS
openssl s_client -connect mail.example.com:587 -starttls smtp
# Check if a certificate matches its private key
openssl x509 -noout -modulus -in cert.pem | md5sum
openssl rsa -noout -modulus -in key.pem | md5sum
# Both sums must match
whois — domain & IP registration info:
whois shani.dev # domain registration info
whois 8.8.8.8 # IP WHOIS (ASN, ISP, abuse contact)
whois -h whois.arin.net 8.8.8.8 # query ARIN directly
whois AS15169 # ASN info (Google's AS)
iperf3 — bandwidth & throughput testing:
# Run server on the remote/target machine
iperf3 -s # default port 5201
iperf3 -s -p 9999 # custom port
# TCP throughput test (from client)
iperf3 -c 192.168.1.100 # default 10-second TCP test
iperf3 -c 192.168.1.100 -t 30 # 30-second test
iperf3 -c 192.168.1.100 -P 4 # 4 parallel streams (saturate link)
iperf3 -c 192.168.1.100 -R # reverse: server sends to client
# UDP test (for latency-sensitive or wireless links)
iperf3 -c 192.168.1.100 -u -b 100M # UDP at 100 Mbps
iperf3 -c 192.168.1.100 -u -b 1G # UDP at 1 Gbps (gigabit test)
# JSON output for scripting
iperf3 -c 192.168.1.100 -J | jq '.end.sum_received.bits_per_second'
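Without jq, the raw bits_per_second figure can still be converted with plain awk. A minimal sketch:

```shell
# Convert a raw bits_per_second figure (as printed by iperf3 -J) to Mbit/s.
to_mbps() { awk '{ printf "%.1f Mbit/s\n", $1 / 1000000 }'; }
echo 943718400 | to_mbps   # → 943.7 Mbit/s
```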
nethogs & iftop — per-process & per-connection bandwidth:
# nethogs — which process is using bandwidth (like top for network)
sudo nethogs # all interfaces
sudo nethogs eth0 # specific interface
sudo nethogs -d 1 # refresh every 1 second
# Inside nethogs: 'm' cycles units (KB/s, MB/s, GB/s); 'q' quits
# iftop — per-connection bandwidth usage (like top for flows)
sudo iftop # all interfaces, interactive
sudo iftop -i eth0 # specific interface
sudo iftop -n # no DNS resolution (faster)
sudo iftop -P # show port numbers
sudo iftop -B # show in bytes instead of bits
# Inside iftop: 'n' toggles DNS; 'p' toggles ports; 'q' quits
ethtool — NIC hardware diagnostics:
ethtool eth0 # link speed, duplex, auto-negotiate, link detected
ethtool -i eth0 # kernel driver name and firmware version
ethtool -S eth0 # detailed NIC statistics (rx_errors, tx_drops, etc.)
ethtool -k eth0 # hardware offload features (TSO, GSO, GRO, LRO)
ethtool -a eth0 # flow control (pause frames)
ethtool -c eth0 # interrupt coalescing settings
sudo ethtool -s eth0 speed 1000 duplex full autoneg on # force speed/duplex
sudo ethtool -t eth0 # run built-in NIC self-test (if supported)
# Find which physical port a NIC is on (LED blink)
sudo ethtool -p eth0 5 # blink NIC LED for 5 seconds
net-snmp — SNMP queries to network devices:
# Query a router/switch/AP via SNMP v2c
snmpwalk -v2c -c public 192.168.1.1 # walk entire MIB
snmpget -v2c -c public 192.168.1.1 sysDescr.0 # system description
snmpget -v2c -c public 192.168.1.1 ifInOctets.1 # bytes in on interface 1
snmpwalk -v2c -c public 192.168.1.1 ifTable # all interfaces
# SNMP v3 (authenticated + encrypted)
snmpwalk -v3 -u admin -l authPriv -a SHA -A "authpass" \
-x AES -X "privpass" 192.168.1.1 sysUpTime
# Translate OID numbers to names
snmptranslate .1.3.6.1.2.1.1.1.0
snmptranslate -On sysDescr.0 # OID number for a named OID
# SNMP trap listener (receive traps from devices)
sudo snmptrapd -f -Lo -c /etc/snmp/snmptrapd.conf
inetutils — ftp, telnet, rsh for legacy protocol testing:
# FTP (for testing FTP servers — use sftp/rsync for actual transfers)
ftp 192.168.1.100
ftp -n 192.168.1.100 <<EOF
quote USER anonymous
quote PASS user@example.com # anonymous FTP accepts any email-style password
ls
quit
EOF
# Telnet (for testing TCP services by hand — NOT for production access)
telnet 192.168.1.100 25 # test SMTP handshake
telnet 192.168.1.100 110 # test POP3
telnet 192.168.1.100 80 # type HTTP manually:
# GET / HTTP/1.0
# Host: 192.168.1.100
# (press Enter twice)
# inetutils also provides: rlogin, rsh, rcp (for legacy systems)
lsof — open files, sockets, and processes:
# Network-related lsof usage
lsof -i # all network connections
lsof -i TCP # TCP only
lsof -i UDP # UDP only
lsof -i :80 # what's using port 80
lsof -i :80-443 # port range
lsof -i @192.168.1.100 # connections to/from a host
lsof -i TCP:443 -sTCP:LISTEN # who is LISTENING on 443
lsof -i -n -P # all connections, no DNS, numeric ports
lsof -a -p $(pgrep caddy) -i # network files opened by caddy (-a ANDs the filters)
lsof -a -u www-data -i # all connections by a specific user
iw & wireless_tools — WiFi scanning & diagnostics:
# iw — modern nl80211 wireless tool
iw dev # list wireless interfaces
iw dev wlan0 info # interface details (freq, channel, SSID)
iw dev wlan0 link # current association (signal, bitrate)
iw dev wlan0 station dump # connected station info (AP or peers)
sudo iw dev wlan0 scan # scan for nearby networks (requires root)
sudo iw dev wlan0 scan | grep -E "SSID|signal|freq" # summarised scan
iw phy # physical device capabilities
iw phy phy0 info # supported bands, channels, features
iw reg get # current regulatory domain (country)
sudo iw reg set GB # set regulatory domain
# wireless_tools — legacy wext interface (still useful for older drivers)
iwconfig # show all wireless interfaces
iwconfig wlan0 # specific interface: SSID, rate, signal
iwlist wlan0 scan # scan for networks (older method)
iwlist wlan0 scan | grep -E "ESSID|Quality|Frequency"
iwlist wlan0 rate # supported bit rates
iwlist wlan0 channel # available channels
iwspy wlan0 # per-MAC signal tracking
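The summarised scan above pipes into awk for a signal-sorted table. A sketch over sample scan output (assumes SSIDs without spaces):

```shell
# Signal-sorted SSID table from `iw dev wlan0 scan` output (strongest first).
scan_table() {
  awk '/signal:/ { sig = $2 }
       /SSID:/   { print sig, $2 }' | sort -rn
}
scan_table <<'EOF'
BSS aa:bb:cc:dd:ee:01(on wlan0)
  signal: -47.00 dBm
  SSID: HomeNet
BSS aa:bb:cc:dd:ee:02(on wlan0)
  signal: -71.00 dBm
  SSID: CoffeeShop
EOF
```

On a live system: sudo iw dev wlan0 scan | scan_table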
wpa_cli & wpa_passphrase — WPA supplicant management:
# wpa_cli — control wpa_supplicant (NetworkManager usually manages this)
sudo wpa_cli status # connection state
sudo wpa_cli scan # trigger scan
sudo wpa_cli scan_results # view scan results
sudo wpa_cli list_networks # configured networks
sudo wpa_cli disconnect # disconnect
sudo wpa_cli reconnect # reconnect
# wpa_passphrase — generate wpa_supplicant config blocks
wpa_passphrase "MySSID" "MyPassword"
# Output can be pasted into /etc/wpa_supplicant/wpa_supplicant.conf
zsync — delta download for large files:
# zsync downloads only the changed parts of a file using a .zsync metafile
# Used internally by shani-deploy for efficient OS image updates
# Download a file using zsync (only transfers changed blocks)
zsync https://example.com/largefile.iso.zsync
# Resume an interrupted zsync download
zsync -i partial_file.iso https://example.com/largefile.iso.zsync
# Specify output filename
zsync -o /tmp/output.iso https://example.com/file.iso.zsync
# Seed from any existing local version to minimise the transfer
zsync -i existing.iso https://example.com/updated.iso.zsync
ldapsearch & openldap — LDAP directory queries:
# Query an LDAP directory (openldap package)
ldapsearch -x -H ldap://192.168.1.100 -b "dc=example,dc=com"
# Authenticated bind
ldapsearch -x -H ldap://192.168.1.100 \
-D "cn=admin,dc=example,dc=com" -W \
-b "dc=example,dc=com" "(objectClass=person)"
# Search for a specific user
ldapsearch -x -H ldap://192.168.1.100 \
-b "dc=example,dc=com" "(uid=jsmith)"
# LDAPS (TLS)
ldapsearch -x -H ldaps://192.168.1.100 -b "dc=example,dc=com"
# Check if LDAP port is open first
nc -zv 192.168.1.100 389 # LDAP
nc -zv 192.168.1.100 636 # LDAPS
Service log inspection:
# Follow a service log live
sudo journalctl -u caddy -f
sudo journalctl -u sshd -f
sudo journalctl -u smb -f
sudo journalctl -u nfs-server -f
sudo journalctl -u tailscaled -f
sudo journalctl -u cloudflared -f
sudo journalctl -u fail2ban -f
sudo journalctl -u NetworkManager -f
sudo journalctl -u dnsmasq -f
sudo journalctl -u avahi-daemon -f
# Filter by priority (err, warning, info, debug)
sudo journalctl -u caddy -p err -n 50
# Kernel ring buffer — driver errors, firewall drops
sudo journalctl -k -f
sudo journalctl -k | grep -E "FINAL_REJECT|DROP|INVALID"
sudo journalctl -k | grep -E "eth0|wlan0|link" # interface events
# AppArmor denials
sudo journalctl -k | grep apparmor
# All failed systemd units since boot
systemctl --failed
# Check a specific boot's journal (after rollback / kernel panic)
journalctl --list-boots # show all recorded boots
journalctl -b -1 -p err # previous boot, errors only
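A captured journal excerpt can be triaged by counting messages per process. A minimal awk sketch over sample short-format lines (field 5 is the syslog identifier):

```shell
# Count journal messages per process from short-format lines.
per_process() {
  awk '{ id = $5; sub(/\[[0-9]+\]:$/, "", id); sub(/:$/, "", id); count[id]++ }
       END { for (p in count) print count[p], p }' | sort -rn
}
per_process <<'EOF'
Jan 01 12:00:01 shani caddy[412]: dial tcp 127.0.0.1:3000: connection refused
Jan 01 12:00:02 shani caddy[412]: dial tcp 127.0.0.1:3000: connection refused
Jan 01 12:00:05 shani sshd[900]: error: kex_exchange_identification
EOF
```

For a previous boot's errors: journalctl -b -1 -p err | per_process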
rsync — incremental sync & deployment:
# Deploy a website to a remote server (delete files removed locally)
rsync -avz --delete /var/www/html/ user@server:/var/www/html/
# Sync with progress and checksums
rsync -avz --progress --checksum /src/ /dst/
# Dry run — preview changes without transferring
rsync -avz --dry-run /src/ /dst/
# Exclude patterns
rsync -avz --exclude="*.log" --exclude=".git/" /src/ /dst/
# Sync over SSH on a custom port
rsync -avz -e "ssh -p 2222" /src/ user@server:/dst/
# Mirror a remote directory locally
rsync -avz [email protected]:/home/user/docs/ ~/mirror/docs/
aria2 — parallel multi-protocol downloader:
# Download a file (HTTP/HTTPS/FTP/SFTP/BitTorrent/Magnet)
aria2c https://example.com/largefile.zip
# Multi-connection download (up to 16 connections per server)
aria2c -x 16 -s 16 https://example.com/largefile.iso
# Download from multiple mirrors simultaneously
aria2c https://mirror1.com/file.iso https://mirror2.com/file.iso
# Torrent download
aria2c /path/to/file.torrent
aria2c "magnet:?xt=urn:btih:..."
# Download list from a file
aria2c -i ~/urls.txt
# Run as JSON-RPC daemon (for AriaNG web UI)
aria2c --enable-rpc --rpc-listen-all=false --rpc-secret=mysecret --daemon
Security Best Practices for Running Services:
- Remote access: prefer tailscale ssh or a Cloudflared tunnel instead. If you must open SSH, use key-only auth, disable PasswordAuthentication, change the port, and enable Fail2ban.
- Firewall zones: keep LAN services in the internal or home zone only — never in the public zone. Caddy HTTP/HTTPS may go in public.
- Monitor logs: sudo journalctl -u sshd -n 50, sudo fail2ban-client status sshd, sudo journalctl -u caddy -n 50
- Bind internal-only services to 127.0.0.1 rather than 0.0.0.0 so they're not directly reachable.
- Back up /home/user, /data/varlib, and any container volumes regularly.
If an update fails, ShaniOS should automatically roll back. If not:
sudo shani-deploy --rollback
If the desktop crashes or shows a black screen after login:
journalctl -b --user -p err # user-session errors for this boot
prime-run glxinfo | grep renderer # test NVIDIA PRIME render offload
sudo shani-deploy --rollback # if the problem appeared right after an update
BIOS and firmware updates change the TPM PCR values, invalidating the enrolled key. This is expected security behaviour. To fix it:
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"
Then run sudo gen-efi configure <slot>, followed by sudo shani-deploy --force.
systemd-boot entries are stored in /boot/efi/loader/entries/. If missing, regenerate them:
# Rebuild the UKI and boot entry for the currently booted slot
sudo gen-efi configure blue # if booted into @blue
# or
sudo gen-efi configure green # if booted into @green
# Then force a redeploy to rebuild both slots
sudo shani-deploy --force
# Verify entries exist
ls /boot/efi/loader/entries/
If your custom /etc changes are not being applied, the overlay mount may have failed:
# Check overlay mount service
sudo systemctl status shanios-tmpfiles-data.service
sudo systemctl status etc-daemon-reload.service
# Verify the overlay is mounted
mount | grep "on /etc"
# Should show: overlay on /etc type overlay (rw,...)
# Check for errors in the early boot services
journalctl -b -u shanios-tmpfiles-data
journalctl -b -u etc-daemon-reload
# The overlay source directories
ls /data/overlay/etc/upper/ # Your changes
ls /data/overlay/etc/work/ # Should be an empty working dir
If the upper directory is missing, shanios-tmpfiles-data.service may not have run. A reboot usually fixes this. If not, file a bug report.
TPM enrollment adds a new unlock method but always preserves your password as a fallback. If you experience issues, inspect the keyslots:
sudo cryptsetup luksDump /dev/sdXn # list keyslots; your password slot should still be present
Check network connectivity:
# Test connectivity
ping -c 3 shani.dev
# Check NetworkManager
systemctl status NetworkManager
# Retry update
sudo shani-deploy
shani-deploy automatically retries up to 5 times and falls back from the R2 CDN mirror to SourceForge if needed. If both fail, check if your ISP is blocking the download domains.
shani-deploy requires at least 10 GB free and aborts if space is insufficient. Free up space and retry:
# Show filesystem usage and compression stats
sudo shani-deploy --storage-info
# Remove old backups and cached downloads
sudo shani-deploy --cleanup
# Remove unused Flatpak runtimes
flatpak uninstall --unused
# Run on-demand deduplication to reclaim shared blocks
sudo shani-deploy --optimize
# Check Btrfs usage per subvolume
sudo btrfs filesystem du -s --human-readable /
sudo compsize /
This usually means a corrupted or incomplete download. Steps to resolve:
# Remove cached download files and retry
sudo shani-deploy --cleanup
sudo shani-deploy
# Force a fresh download ignoring any cached state
sudo shani-deploy --force
Persistent checksum failures may indicate DNS manipulation or ISP interference with the download. Try switching to a different DNS server (e.g., 1.1.1.1) in NetworkManager settings, then retry. If the problem persists, report it at github.com/shani8dev with the full --verbose output.
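The SHA256 half of that verification can be reproduced by hand to see what shani-deploy checks. A sketch with hypothetical filenames; the GPG signature step is omitted since it needs the release key:

```shell
# Reproduce the SHA256 integrity check by hand.
workdir=$(mktemp -d)
cd "$workdir"
printf 'fake image payload' > shanios-image.zst
sha256sum shanios-image.zst > shanios-image.zst.sha256
sha256sum -c shanios-image.zst.sha256    # prints "shanios-image.zst: OK"
printf 'tampered' >> shanios-image.zst   # simulate a corrupted download
sha256sum -c shanios-image.zst.sha256 && echo unexpected || echo "checksum mismatch detected"
```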
Do not interrupt a deployment mid-extraction — this could leave a partially-written subvolume. Verify the deploy is still active:
# Check if shani-deploy is still running
journalctl -fu shani-deploy
# If shani-deploy was interrupted, a deployment_pending flag may remain
cat /data/deployment_pending # If this file exists, a deploy was interrupted
# Clean up and retry
sudo shani-deploy --cleanup
sudo shani-deploy
The /data/deployment_pending flag prevents a corrupted deploy from being treated as complete. --cleanup removes it and any partial extraction.
Try these steps:
# Update Flatpak
flatpak update
# Repair installation
flatpak repair
# Check permissions
flatpak override --show org.app.Name
# Reset all overrides for the app
flatpak override --reset org.app.Name
# Run from terminal to see errors
flatpak run org.app.Name
# Check GPU / graphics issues (common for Electron & games)
flatpak run --env=LIBGL_DEBUG=verbose org.app.Name
# Check snapd daemon status
sudo systemctl status snapd
sudo systemctl status snapd.apparmor
# Check AppArmor confinement is loaded for snaps
sudo apparmor_status | grep snap
# Restart snapd if it's stuck
sudo systemctl restart snapd
# Check snap logs for the failing app
snap run --shell app-name # drop into a shell to inspect
journalctl -u snapd -f
# Refresh a specific snap to latest revision
snap refresh app-name
# Roll back a snap to its previous working revision
snap revert app-name
# Remove and reinstall a snap cleanly
snap remove app-name
snap install app-name
# List installed snaps and their revisions
snap list
# Check nix-daemon is running
sudo systemctl status nix-daemon
# Restart the daemon
sudo systemctl restart nix-daemon
# No packages found / nix-env -iA fails with "attribute not found"
# — you probably haven't added a channel yet
nix-channel --list # should show nixpkgs
# If empty, add it:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
# Broken profile after failed install — roll back
nix-env --rollback
# List profile generations and switch to a working one
nix-env --list-generations
nix-env --switch-generation 5 # replace 5 with a known-good number
# GUI apps installed via Nix not appearing in launcher
# Add to ~/.zshrc or ~/.bashrc:
export XDG_DATA_DIRS=$HOME/.nix-profile/share:$XDG_DATA_DIRS
# Repair broken store paths
nix-store --verify --check-contents --repair
# Garbage collect to free space
nix-collect-garbage -d
# Most common cause: AppImage is not executable
chmod +x MyApp-x86_64.AppImage
./MyApp-x86_64.AppImage
# AppImage must be stored in a writable location
# /usr is read-only on ShaniOS — move it to home or /data:
mkdir -p ~/Applications
mv MyApp.AppImage ~/Applications/
# Missing FUSE — AppImages use FUSE by default
# ShaniOS includes FUSE2 and FUSE3; if still failing, extract and run:
./MyApp.AppImage --appimage-extract
./squashfs-root/AppRun
# Gear Lever shows "broken" AppImage
# Re-import it: open Gear Lever → Remove → re-add the AppImage file
# AppImage from a different architecture (ARM vs x86_64)
file MyApp.AppImage # check ELF architecture in output
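file already reports this, but the same answer can be read straight from the ELF header: e_machine is a little-endian 16-bit field at byte offset 18. A minimal sketch (the AppImage path in the comment is hypothetical):

```shell
# Read the ELF e_machine field (62 = x86_64, 183 = aarch64, 40 = 32-bit ARM).
elf_arch() {
  m=$(od -An -j18 -N2 -tu1 "$1" | awk '{ print $1 + $2 * 256 }')
  case "$m" in
    62)  echo x86_64 ;;
    183) echo aarch64 ;;
    40)  echo arm ;;
    *)   echo "unknown (e_machine=$m)" ;;
  esac
}
# elf_arch ~/Applications/MyApp.AppImage
```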
# Distrobox shares /home by default — files there are accessible
# For other paths, add them at creation time:
distrobox create --name mybox --image ubuntu:22.04 \
--volume /data:/data:rw
# Or pass a device:
distrobox create --name mybox --image ubuntu:22.04 \
--additional-flags "--device /dev/video0"
# Check container status
distrobox list
# Restart a stopped container
distrobox start mybox
# Re-create a broken container (data in /home is preserved)
distrobox rm mybox
distrobox create --name mybox --image ubuntu:22.04
# Enter container and check what's mounted
distrobox enter mybox -- df -h
If a misconfigured file in /etc (e.g., broken /etc/fstab, /etc/ssh/sshd_config, or /etc/sudoers) causes boot or login failures:
- Each slot (@blue or @green) has a separate /etc overlay. Select the other slot from the systemd-boot menu at startup. From there you can fix or delete the broken file in /data/overlay/etc/upper/.
- All of your /etc edits live in /data/overlay/etc/upper/. To revert a file entirely to its system default, simply delete it from the upper directory:
sudo rm /data/overlay/etc/upper/etc/fstab
# (path inside upper mirrors the /etc path)
ShaniOS includes a comprehensive set of out-of-tree drivers (broadcom-wl, SOF audio, ALSA firmware, etc.), so most hardware works without intervention. If you still need a module:
- Temporary: load it manually with sudo insmod /path/to/module.ko. This survives only until the next reboot.
- Persistent: keep the module in ~/.local/lib/modules/$(uname -r)/ and use a systemd user service to load it on each boot with insmod. This is user-space and unaffected by system updates.
# Method 1: Check for active LUKS mapping
sudo cryptsetup status shani_root
# If encrypted, shows: "is active" with device info
# Method 2: Check for crypto_LUKS partitions
sudo blkid | grep crypto_LUKS
# Shows all LUKS encrypted partitions
# Method 3: Check mounted devices
mount | grep mapper
# If encrypted, shows: /dev/mapper/shani_root
# Method 4: List block devices with filesystem types
lsblk -f
# Look for TYPE="crypto_LUKS"
# If none of these show encryption, your system is NOT encrypted
# and you cannot use TPM enrollment
# Get your LUKS device
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep device: | awk '{print $2}')
# Remove TPM enrollment
sudo systemd-cryptenroll --wipe-slot=tpm2 "$LUKS_DEVICE"
# Regenerate UKI for the currently booted slot only (e.g. if booted into @blue):
sudo gen-efi configure blue
# or: sudo gen-efi configure green # if booted into @green
# To regenerate the other slot's UKI as well, trigger a redeployment:
sudo shani-deploy --force
# Done - system will now require password on boot
# Check NetworkManager status
systemctl status NetworkManager
sudo journalctl -u NetworkManager -n 50
# List all connections (saved profiles)
nmcli connection show
# Check active connections
nmcli device status
# Restart NetworkManager
sudo systemctl restart NetworkManager
# Reconnect a specific interface
nmcli device disconnect eth0
nmcli device connect eth0
# For WiFi: scan and reconnect
nmcli device wifi list
nmcli device wifi connect "MySSID" --ask
# Check if interface is up at kernel level
ip link show
ip addr show
# Check current DNS servers
resolvectl status
cat /etc/resolv.conf
# Test DNS resolution directly
dig shani.dev @1.1.1.1 # query Cloudflare DNS directly
dig shani.dev @8.8.8.8 # query Google DNS directly
dig shani.dev # use system resolver
# If direct queries work but system DNS fails, check systemd-resolved
sudo systemctl status systemd-resolved
sudo systemctl restart systemd-resolved
# Flush DNS cache
sudo resolvectl flush-caches
# If dnsmasq is running and conflicting, check it
sudo systemctl status dnsmasq
sudo journalctl -u dnsmasq -n 30
# Set a manual DNS server in NetworkManager
nmcli connection modify "ConnectionName" ipv4.dns "1.1.1.1 8.8.8.8"
nmcli connection modify "ConnectionName" ipv4.ignore-auto-dns yes
nmcli connection up "ConnectionName"
# Check Caddy status and logs
sudo systemctl status caddy
sudo journalctl -u caddy -n 80 --no-pager
# Validate Caddyfile syntax before reloading
caddy validate --config /etc/caddy/Caddyfile
# Common ACME/certificate errors:
# - Port 80 must be open for Let's Encrypt HTTP-01 challenge
# - Domain must point to your public IP (check with: dig mysite.example.com)
# - Cloudflared handles its own TLS — Caddy can use internal certs for tunneled sites
# Check TLS certificate status
caddy trust # install CA into system trust store (for local dev)
# Reload after config changes (no downtime)
sudo systemctl reload caddy
# If Let's Encrypt rate limit hit, test with staging CA first
# Add to your Caddyfile block:
# tls {
# ca https://acme-staging-v02.api.letsencrypt.org/directory
# }
# Check port 80/443 are reachable
sudo firewall-cmd --list-all
ss -tlnp | grep caddy
# Check daemon and connection state
sudo systemctl status tailscaled
sudo journalctl -u tailscaled -n 50
# Re-authenticate if session expired
sudo tailscale up
# Check peer reachability and NAT type
tailscale netcheck
# View all peers and their status
tailscale status
# Test direct connection to a peer
tailscale ping myhostname
# If peer is showing "offline": check that tailscaled is running on the other device
# If routing is broken: check MagicDNS settings at https://login.tailscale.com/admin/dns
# Enable Tailscale SSH if needed
sudo tailscale up --ssh
# Check if route advertisement is working
tailscale status --peers
# Reset and rejoin network
sudo tailscale down
sudo tailscale up
# Check service status and logs
sudo systemctl status cloudflared
sudo journalctl -u cloudflared -n 80 --no-pager
# List tunnels and their status
cloudflared tunnel list
cloudflared tunnel info my-tunnel
# Test tunnel manually (bypasses systemd service)
cloudflared tunnel run my-tunnel
# Check config file is valid
cloudflared tunnel ingress validate
# Diagnose connectivity to Cloudflare
cloudflared tunnel diagnose
# If credentials missing/expired: re-login
cloudflared tunnel login
cloudflared tunnel token my-tunnel # get a service token
# Verify DNS routing still points to tunnel
cloudflared tunnel route dns my-tunnel mysite.example.com
# Restart service after config fix
sudo systemctl restart cloudflared
# On the SERVER — check sshd is running
sudo systemctl status sshd
sudo journalctl -u sshd -n 30
# Check sshd is actually listening
ss -tlnp | grep sshd
# Validate sshd config
sudo sshd -t
# Check firewall allows SSH
sudo firewall-cmd --list-all | grep ssh
# Check fail2ban hasn't banned your client IP
sudo fail2ban-client status sshd
# Unban your IP if needed
sudo fail2ban-client set sshd unbanip YOUR_IP
# On the CLIENT — diagnose with verbose output
ssh -vvv [email protected]
# Common issues:
# "Connection refused" → sshd not running, or wrong port, or firewall blocking
# "Permission denied" → wrong key, or PasswordAuthentication disabled
# "Host key changed" → run: ssh-keygen -R 192.168.1.50 then reconnect
# On server: check NFS is running and exports are active
sudo systemctl status nfs-server
sudo exportfs -v # list active exports
showmount -e localhost # show what clients see
# Re-export after /etc/exports changes
sudo exportfs -arv
# Check firewall on server
sudo firewall-cmd --list-services | grep -E "nfs|rpc"
# On client: try a verbose mount
sudo mount -v -t nfs 192.168.1.100:/share /mnt/nfs
# If "access denied": check client IP is in the export's allowed range
# If "portmapper" error: ensure rpcbind is open in server firewall
sudo firewall-cmd --add-service=rpcbind --permanent
sudo firewall-cmd --add-service=mountd --permanent
sudo firewall-cmd --reload
# Check NFS server logs
sudo journalctl -u nfs-server -n 30
# Check Samba is running
sudo systemctl status smb nmb
sudo journalctl -u smb -n 30
# Validate smb.conf syntax
testparm
# List active shares
smbclient -L localhost -N
# Test authentication for a user
smbclient //localhost/ShareName -U youruser
# Reset Samba password
sudo smbpasswd -a youruser
# Firewall check
sudo firewall-cmd --list-services | grep samba
# If share shows but can't connect from Windows:
# Ensure "valid users" in smb.conf matches your username exactly
# Check SELinux/AppArmor isn't blocking (look for apparmor denials):
sudo journalctl -k | grep apparmor | tail -20
# Restart after config changes
sudo systemctl restart smb nmb
# See all active rules
sudo firewall-cmd --list-all
sudo firewall-cmd --list-all-zones
# Check if packet drops are being logged
sudo journalctl -k | grep -E "FINAL_REJECT|DROP" | tail -20
# Enable firewalld logging of dropped packets (temporarily)
sudo firewall-cmd --set-log-denied=all
sudo journalctl -k -f | grep "FINAL_REJECT" # watch live
# Disable after debugging
sudo firewall-cmd --set-log-denied=off
# Check which zone an interface is in
sudo firewall-cmd --get-active-zones
# Move interface to a different zone (e.g. trusted for LAN interface)
sudo firewall-cmd --zone=trusted --add-interface=eth0 --permanent
sudo firewall-cmd --reload
# Check if nftables (underlying) has any extra rules
sudo nft list ruleset | grep -v "^#"
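Logged rejects are easiest to read aggregated per source address. A minimal sketch over sample kernel-log lines:

```shell
# Aggregate firewalld reject logs per source address.
reject_sources() {
  grep -o 'SRC=[0-9a-fA-F.:]*' | sort | uniq -c | sort -rn
}
reject_sources <<'EOF'
kernel: FINAL_REJECT: IN=eth0 OUT= SRC=203.0.113.9 DST=192.168.1.10 PROTO=TCP DPT=23
kernel: FINAL_REJECT: IN=eth0 OUT= SRC=203.0.113.9 DST=192.168.1.10 PROTO=TCP DPT=2323
kernel: FINAL_REJECT: IN=eth0 OUT= SRC=198.51.100.77 DST=192.168.1.10 PROTO=UDP DPT=5060
EOF
```

On a live system: sudo journalctl -k | grep FINAL_REJECT | reject_sources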
# Check container network is up
podman network ls
podman network inspect podman # default bridge network
# Test from inside the container
podman exec mycontainer ping -c 2 8.8.8.8
podman exec mycontainer curl -s https://ifconfig.me
# Check DNS inside container
podman exec mycontainer cat /etc/resolv.conf
podman exec mycontainer nslookup google.com
# If DNS fails: aardvark-dns may have a stale state
podman network reload --all # reload network config/firewall rules for all containers
# Rebuild the network stack (WARNING: stops all containers)
podman system reset --force # nuclear option — recreates everything
# Check if IP forwarding is enabled (required for container networking)
cat /proc/sys/net/ipv4/ip_forward # should be 1
sudo sysctl -w net.ipv4.ip_forward=1 # enable if 0
# Check firewalld isn't blocking container traffic
sudo firewall-cmd --zone=trusted --add-interface=podman0 --permanent # interface is cni-podman0 on older CNI setups
sudo firewall-cmd --reload
# Check NetworkManager VPN plugin is loaded
nmcli connection show # list all connections
nmcli connection up "VPN Name" # connect
nmcli connection down "VPN Name" # disconnect
# Verbose diagnostics
sudo journalctl -u NetworkManager -f # watch live while connecting
# WireGuard-specific: check interface came up
ip link show wg0
sudo wg show wg0
# Check WireGuard handshake is happening
sudo wg show # "latest handshake" should update
# OpenVPN: check routing after connect
ip route show
ip route show table all | grep -v "^broadcast"
# For L2TP/IPsec: check xl2tpd and strongswan logs
sudo journalctl -u xl2tpd -n 30
sudo journalctl -u strongswan -n 30
# Restart NetworkManager if a VPN leaves a stale state
sudo systemctl restart NetworkManager
# Check PipeWire status
systemctl --user status pipewire pipewire-pulse wireplumber
# Restart PipeWire stack (no logout required)
systemctl --user restart pipewire pipewire-pulse wireplumber
# List audio devices PipeWire sees
pactl list sinks short # output devices
pactl list sources short # input devices (microphones)
wpctl status # WirePlumber full status
# Check default output device
pactl info | grep "Default Sink"
# Set a different default output (replace sink name from list above)
pactl set-default-sink alsa_output.pci-0000_00_1f.3.analog-stereo
# Unmute and set the output volume
pactl set-sink-mute @DEFAULT_SINK@ 0
pactl set-sink-volume @DEFAULT_SINK@ 100%
# Check for SOF firmware (Intel DSP audio)
sudo journalctl -k | grep -E "sof|SOF|intel-sof" | tail -20
# If firmware is missing, it is provided by sof-firmware, which is pre-installed
# Test audio output directly
speaker-test -t wav -c 2 # plays L/R speaker test tones
# Check ALSA layer beneath PipeWire
aplay -l # list ALSA devices
arecord -l # list capture devices
# Check Bluetooth service
sudo systemctl status bluetooth
sudo journalctl -u bluetooth -n 50
# Restart Bluetooth
sudo systemctl restart bluetooth
# Check PipeWire Bluetooth module
pactl list cards | grep -A3 bluez # should show BT card when headphones connected
# If headphones pair but no audio:
# Try switching profile to A2DP (high quality)
pactl list cards | grep "Name: bluez" # get card name
pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX a2dp-sink
# In audio settings GUI, ensure "High Fidelity Playback (A2DP Sink)" is selected
# not "Hands-Free Audio" (HFP) which degrades quality
# Crackling: disable power-saving on BT adapter
echo 'options btusb enable_autosuspend=n' | sudo tee /etc/modprobe.d/btusb-autosuspend.conf
sudo systemctl restart bluetooth
PipeWire provides a native JACK compatibility layer — no need to install JACK separately. JACK applications work unmodified. For professional audio with minimum latency, ensure your user is in the realtime group (it is by default on ShaniOS) and configure PipeWire's quantum:
# Verify realtime group membership
groups | grep realtime
# Check current PipeWire quantum (buffer size / latency)
pw-cli info 0 | grep quantum
# Set a lower quantum for lower latency (e.g. 64 samples at 48kHz = ~1.3ms)
# Add to ~/.config/pipewire/pipewire.conf.d/low-latency.conf:
# context.properties = {
# default.clock.rate = 48000
# default.clock.quantum = 64
# default.clock.min-quantum = 32
# }
# Then restart:
systemctl --user restart pipewire wireplumber
# Check latency
pw-metadata | grep quantum
# Run Steam from terminal to see errors
flatpak run com.valvesoftware.Steam
# Proton game crashes: try a different Proton version
# In Steam → right-click game → Properties → Compatibility → Force Proton version
# Check Vulkan works
vulkaninfo | grep "GPU id"
vkcube # should render a spinning cube
# Check GPU driver is loaded
lspci -k | grep -A2 -E "VGA|3D" # look for "Kernel driver in use:"
# For NVIDIA: check open driver is loaded
nvidia-smi
lsmod | grep nvidia
# Missing 32-bit libraries (common for older games):
# Steam Flatpak bundles its own 32-bit runtime — this is usually fine.
# If still failing, run with more debugging:
PROTON_LOG=1 flatpak run com.valvesoftware.Steam
# Check steam-*.log (for the Flatpak, under ~/.var/app/com.valvesoftware.Steam/) after trying to launch the game
# Reset Steam client configuration (forces re-login; installed games are kept)
flatpak run com.valvesoftware.Steam steam://flushconfig
# Check controller is seen by the kernel
ls /dev/input/js* # joystick device
ls /dev/input/event* # evdev device
cat /proc/bus/input/devices | grep -A5 "Gamepad\|Controller\|Joystick"
# Check udev rules are applied (ShaniOS ships game-devices-udev)
udevadm info /dev/input/js0
# Test controller input with jstest
jstest /dev/input/js0
# Check your user is in the input group (it is by default)
groups | grep input
# For PS4/PS5 DualSense over USB: ensure hidraw permissions
ls -l /dev/hidraw*
# If permission denied, check game-devices-udev rules are active
sudo udevadm control --reload-rules
sudo udevadm trigger
# AntiMicroX — remap controller to keyboard/mouse
flatpak install flathub io.github.antimicrox.antimicrox
flatpak run io.github.antimicrox.antimicrox
# Piper — configure gaming mouse DPI and buttons
flatpak install flathub org.freedesktop.Piper
flatpak run org.freedesktop.Piper
# Check GameMode is running (ShaniOS ships it pre-installed)
systemctl status gamemoded
# GameMode is activated automatically when games start via Flatpak Steam
# For non-Steam games, prefix the launch command:
gamemoderun ./your-game
# In Steam, set the game's launch options to: gamemoderun %command%
# Check power profile is set to Performance (especially on laptops)
powerprofilesctl get # check current profile
powerprofilesctl set performance
# Check CPU frequency scaling
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Should be "schedutil" (default); GameMode switches to "performance" while gaming
# Check GPU is being used (not integrated)
# For NVIDIA Prime on a laptop:
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia game_executable
# Check thermal throttling
sensors | grep -E "temp|Core"
# If CPUs are hitting 90°C+, check thermal paste / cooling
# GOverlay — performance overlay (FPS, GPU usage, temps)
flatpak install flathub io.github.benjamimgois.goverlay
# Enable MangoHud from within GOverlay
# Check waydroid-container service
sudo systemctl status waydroid-container
# Restart the container service
sudo systemctl restart waydroid-container
# Check Waydroid logs
sudo waydroid log
# If never initialised: run the helper first
sudo waydroid-helper init
# Check binder kernel module is loaded
lsmod | grep binder
# If missing: reboot (ShaniOS loads it at boot via dracut)
# Check if session is already running
waydroid status
# Kill stuck session and restart
waydroid session stop
sudo systemctl restart waydroid-container
waydroid show-full-ui
# Ensure the session is running before installing
waydroid status # should show "RUNNING"
# Install APK
waydroid app install ~/Downloads/app.apk
# If "INSTALL_FAILED_NO_MATCHING_ABIS": the APK is ARM-only
# ARM translation (libhoudini) should handle this automatically
# Check ARM translation is configured:
waydroid prop get ro.product.cpu.abilist # should list arm64-v8a alongside x86_64
# If ARM translation is not working, re-run the helper
sudo waydroid-helper init --force
# List installed apps
waydroid app list
# Check Android data is healthy
sudo btrfs subvolume show /var/lib/waydroid # should be @waydroid subvolume
# Check CUPS is running
sudo systemctl status cups
sudo systemctl restart cups
# List detected printers
lpstat -p -d
# Test print
echo "test" | lp -d PrinterName
# Cancel all stuck jobs for a printer
cancel -a PrinterName
# For network printers: check Avahi can find them
avahi-browse _ipp._tcp -t # IPP printers
avahi-browse _printer._tcp -t # legacy LPD printers
# Open CUPS web UI for full configuration
xdg-open http://localhost:631
# Flush and restart CUPS
sudo systemctl stop cups
sudo rm -f /var/spool/cups/tmp/* # clear spool (bind-mounted from @data)
sudo systemctl start cups
# For HP printers: check hplip-minimal status
hp-check -r # report HP printer status
hp-setup # run HP setup wizard
# Check your user is in the scanner group (it is by default)
groups | grep scanner
# List SANE-detected scanners
scanimage -L
# For network scanners (IPP Everywhere / AirScan):
# sane-airscan discovers them automatically via Avahi
avahi-browse _uscan._tcp -t # network scan targets
# Test a scan from terminal
scanimage --device-name=airscan:e0:... --format=png --output-file=/tmp/test.png
# Restart SANE daemon if needed
sudo systemctl restart saned.socket
# Check SANE can access the USB scanner
lsusb | grep -i scan # confirm scanner is seen
ls -l /dev/bus/usb/... # check permissions
No. The root filesystem is read-only and immutable by design. This is a core security and stability feature, not a limitation. Use Flatpak for GUI applications, and containers (Distrobox, Podman) for development tools and CLI utilities. ShaniOS ships a custom pacman wrapper that blocks mutating operations (-S, -U, -R) but still allows read-only queries (-Q, -F, -Ss).
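The guard logic behind such a wrapper is simple to sketch in shell. The function below is a hypothetical illustration of the idea, not the wrapper ShaniOS actually ships:

```shell
# Hypothetical sketch: block mutating pacman operations, pass queries through.
# The real ShaniOS wrapper may be implemented differently.
pacman_guard() {
  case "$1" in
    -S|-Sy*|-Su*|-U*|-R*)
      echo "error: root filesystem is read-only; use Flatpak or Distrobox instead" >&2
      return 1 ;;
    *)
      # A real wrapper would exec /usr/bin/pacman "$@" here.
      echo "allowed: pacman $*" ;;
  esac
}

pacman_guard -Qi bash      # query: passes through
pacman_guard -Syu || true  # system upgrade: blocked
```

Note that -Ss (search) falls through the guard because it only reads the sync database, matching the behaviour described above.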
ShaniOS includes automatic boot failure detection via a multi-layer pipeline. If the system fails to boot after an update, it automatically falls back to the previous working slot. After you log in, a "Boot Failure Detected" dialog appears offering a one-click rollback via shani-deploy --rollback, which restores the broken slot from its Btrfs backup snapshot. Your personal data in /home and /data is never affected by rollbacks.
Yes. ShaniOS uses a dedicated @swap subvolume with Copy-on-Write disabled (nodatacow), providing reliable hibernation on Btrfs. This approach solves the traditional Btrfs hibernation challenges. The swapfile size is automatically set to match your system's RAM on first deployment.
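Sizing swap to match RAM is plain arithmetic. A sketch, reading installed memory from /proc/meminfo (illustrative only; the actual first-boot script may round or reserve differently):

```shell
# Compute a swapfile size equal to installed RAM, rounded up to a whole GiB.
mem_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_gib=$(( (mem_kib + 1048575) / 1048576 ))   # 1 GiB = 1048576 KiB
echo "RAM: ${mem_kib} KiB -> swapfile: ${swap_gib}G"
```

A swapfile on Btrfs must live on a nodatacow subvolume, which is exactly why the dedicated @swap subvolume exists.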
The /etc directory uses an overlay filesystem, making it writable despite the read-only root. Simply edit files in /etc normally — changes are stored in /data/overlay/etc/upper and persist across updates and blue-green slot switches. To see what you've changed, run: ls -la /data/overlay/etc/upper.
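The lookup order of such an overlay is easy to demonstrate with a toy model in plain shell. The temporary directories below stand in for the real layers; the actual /etc overlay is a kernel OverlayFS mount, not a script:

```shell
# Toy model: the upper layer (your edits) shadows the read-only lower layer.
lower=$(mktemp -d)   # stands in for the slot's shipped /etc
upper=$(mktemp -d)   # stands in for /data/overlay/etc/upper
echo "shipped default" > "$lower/sshd_config"
echo "your edit"      > "$upper/sshd_config"   # what editing a file in /etc produces

merged_read() {
  # OverlayFS resolves a path in the upper layer first, then falls back.
  if [ -f "$upper/$1" ]; then cat "$upper/$1"; else cat "$lower/$1"; fi
}

merged_read sshd_config   # prints "your edit"
```

Deleting the upper copy exposes the shipped default again, which is how reverting an /etc change works.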
ShaniOS maintains two complete system images (@blue and @green) for atomic updates. However, Btrfs Copy-on-Write shares unchanged data blocks between them, resulting in only ~18% overhead. The benefit is zero-downtime updates and instant rollback capability — a worthwhile trade-off compared to traditional single-image systems.
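As a back-of-envelope check of that figure (the numbers below are illustrative, not measurements):

```shell
# With CoW sharing, the standby slot only costs its unique blocks.
slot_gib=10       # size of one full system image (illustrative)
shared_pct=82     # blocks identical between @blue and @green (illustrative)
extra_gib=$(( slot_gib * (100 - shared_pct) / 100 ))
echo "second slot costs ~${extra_gib} GiB on top of ${slot_gib} GiB, not another ${slot_gib} GiB"
```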
No. ShaniOS requires UEFI firmware. UEFI is required for Unified Kernel Images (UKIs), Secure Boot support via MOK, the systemd-boot bootloader, and the gen-efi boot management tooling. Legacy BIOS is not supported.
Not recommended. Other operating systems may modify the ESP or bootloader entries, potentially breaking ShaniOS's boot configuration. For running other systems, use virtual machines (GNOME Boxes or virt-manager via Flatpak) or containers (Distrobox, LXC) instead.
At the systemd-boot menu during startup, select the alternative boot entry (both @blue and @green entries are always present). On a running system you can also run sudo shani-deploy --rollback to restore and switch to the previous slot's Btrfs backup snapshot, then reboot.
Use containers. Distrobox lets you create a full mutable Linux environment (Arch, Ubuntu, Fedora, etc.) that integrates with your desktop — apps appear in your launcher, files are shared. For development tools and CLI utilities, Distrobox is the recommended approach. For GUI apps, Flatpak is best. AppImages via Gear Lever work for portable tools not available as Flatpaks. Nix (pre-installed; the @nix subvolume is shared across both slots) is excellent for CLI tools, language runtimes, and pinned library versions — packages installed via Nix survive OS updates and rollbacks, just add a channel first with nix-channel --add. Homebrew can also be installed in user-space to /home/linuxbrew/.linuxbrew — it works on Linux and doesn't touch the read-only root.
Yes. ShaniOS bind-mounts critical service state directories from the persistent @data subvolume. This includes /var/lib/NetworkManager (WiFi passwords, VPN configs), /var/lib/bluetooth (paired devices), /var/lib/cups (printer configs), fingerprint enrollment, and many more. All of these persist through every system update and rollback.
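On a running system you can see exactly which of these paths are their own mounts by reading the mount table. This works on any Linux; on a machine that is not ShaniOS the list may simply be empty:

```shell
# Print every mount point under /var/lib and where it comes from. On ShaniOS
# these entries are the persistent bind mounts and dedicated subvolumes.
under_var_lib() {
  awk '$2 ~ "^/var/lib" {print $2, "<-", $1}' /proc/self/mounts
}
under_var_lib
```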
Run: cat /data/current-slot — this shows the currently active slot name. You can also check the kernel command line: cat /proc/cmdline | grep -o 'subvol=[^,\ ]*' to see which Btrfs subvolume was mounted at boot.
Use TPM 2.0 enrollment. If your system has a TPM 2.0 module (enabled in BIOS), you can enroll your LUKS key into it with: LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}') && sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 "$LUKS_DEVICE". With Secure Boot enabled (PCR 7), the TPM will only unlock if the bootloader hasn't been tampered with. See the TPM Encryption section for full details.
Yes. Install Steam, Lutris, Heroic Games Launcher, or Bottles as Flatpaks from Flathub. ShaniOS ships full Mesa (Vulkan + OpenGL), NVIDIA open driver support, Wayland-native compositor, comprehensive controller/racing wheel udev rules, and GPU switching (nvidia-prime, switcheroo-control) for hybrid graphics laptops. The full GStreamer codec stack is also included for media playback.
Updates follow a date-based versioning scheme (YYYYMMDD) on the stable (default) or latest channels. A background service (shani-update.timer) checks for updates 5 minutes after login and every 2 hours thereafter. When an update is found, a GUI dialog asks you to install or defer (defer schedules a reminder after 24 hours). Updates are never applied silently — user approval is always required before shani-deploy runs.
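That schedule maps onto ordinary systemd user-timer settings. An illustrative fragment only; the unit ShaniOS actually ships may be written differently:

```ini
# Illustrative shape of a user timer with shani-update's schedule.
[Timer]
OnStartupSec=5min       # first check, 5 minutes after the user session starts
OnUnitActiveSec=2h      # then repeat every 2 hours
```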
Yes. Rollbacks only replace the system slot (@blue or @green) from its Btrfs backup snapshot. Your personal data in /home, all persistent service state in /data, your Flatpak apps, containers, and VMs are completely untouched. The separation of system and user data is by design — they live in separate Btrfs subvolumes.
Yes. Waydroid is pre-configured and available. It runs a full Android container via LXC on top of your Linux desktop. Android data is stored in the dedicated @waydroid Btrfs subvolume. The necessary firewall rules are already set up. Initialize it once with sudo waydroid init, then launch from your application menu.
ShaniOS does not automatically back up user data. Two tools are pre-installed:
# Snapshot your home directory (read-only snapshot; create the target dir once)
sudo mkdir -p /data/snapshots
sudo btrfs subvolume snapshot -r /home /data/snapshots/home-$(date +%Y%m%d)
# List snapshots
sudo btrfs subvolume list /
What to back up: /home and /data (service configs, VPN keys, overlay changes). Flatpak apps and system slots are re-deployable — they don't need backup.
Critical — LUKS header backup: If your system is encrypted, back up the LUKS header. Losing it means permanent data loss:
LUKS_DEVICE=$(sudo cryptsetup status shani_root | grep 'device:' | awk '{print $2}')
sudo cryptsetup luksHeaderBackup "$LUKS_DEVICE" --header-backup-file ~/luks-header-backup.img
# Store this file offsite (cloud storage, encrypted USB, etc.)
CUPS and drivers for all major printer brands are pre-installed. Most printers work automatically:
ipp-usb provides driverless IPP-over-USB support for any printer advertised as AirPrint or Mopria compatible. If a printer is not picked up automatically, run sudo systemctl restart cups, then lpstat -p -d to list active queues.
For scanning, install GNOME Simple Scan (flatpak install flathub org.gnome.SimpleScan) or Skanlite (flatpak install flathub org.kde.skanlite). SANE with network scan support (sane-airscan) is pre-installed.
Install GNOME Boxes for an easy graphical VM manager: flatpak install flathub org.gnome.Boxes. For more advanced control, install Virtual Machine Manager: flatpak install flathub org.virt_manager.virt-manager. Both Flatpaks bundle their own libvirt and QEMU runtimes — no system daemon setup required. VM disk images are stored in the @libvirt Btrfs subvolume with CoW disabled for optimal performance, and survive all system updates and rollbacks.
# Check bees deduplication daemon status
sudo systemctl status "beesd@*"
# View actual compression ratios per subvolume
sudo compsize /
sudo compsize /home
sudo compsize /nix
# Full storage analysis with per-subvolume breakdown
sudo shani-deploy --storage-info
# Run on-demand deduplication pass
sudo shani-deploy --optimize
No. ShaniOS collects zero telemetry, sends no crash reports, and has no analytics of any kind — ever. No opt-out required because there is nothing to opt out of. The update tool (shani-deploy) connects to download servers to fetch images, but transmits only what a standard HTTP download requires — no hardware fingerprints, system IDs, or usage statistics. Intel ME modules (mei, mei_me) are blacklisted by default, removing the low-level hardware management channel. Because the entire codebase is public on GitHub, these claims are verifiable — you can read every script that runs on your system.
ShaniOS activates six Linux Security Modules simultaneously via lsm=landlock,lockdown,yama,integrity,apparmor,bpf — most distributions run one or two. Alongside those: immutable read-only root (even root cannot modify OS files at runtime), LUKS2 with argon2id full-disk encryption (optional, recommended), TPM2 auto-unlock, Secure Boot via shim/sbctl, Intel ME disabled by default, firewalld active from first boot, Flatpak and Snap sandboxing, SHA256+GPG verified OS images, and fwupd for keeping firmware current. See the Security Features section for full details on each layer.
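You can confirm the active stack on a running kernel; the kernel exports the list through securityfs (mounted by default on ShaniOS, hence the guarded fallback):

```shell
# Print the LSMs the running kernel actually initialised, in order.
cat /sys/kernel/security/lsm 2>/dev/null || echo "securityfs not available"
```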
Yes. Every image is SHA256 + GPG signed. The public key (ID 7B927BFF...8014792) is on public keyservers. To verify manually:
gpg --keyserver keys.openpgp.org --recv-keys 7B927BFF8014792
gpg --verify shanios-image.zst.sig shanios-image.zst
sha256sum -c shanios-image.zst.sha256
shani-deploy does this automatically before every deployment. The build system, deploy scripts, and signing workflow are public on GitHub — the full chain of trust is independently auditable.
Yes. libfido2 (FIDO2/U2F), opensc + ccid (smart cards), and libnfc (NFC tokens) are all pre-installed with the necessary udev rules and pcscd.socket configured. Hardware security keys work for PAM login, sudo authentication, and browser WebAuthn at first boot — no setup required. Test with:
fido2-token -L # list connected FIDO2 keys
opensc-tool -l # list smart cards
systemctl status pcscd.socket
ShaniOS is well-suited for laptops. Several features are specifically beneficial: fingerprint login works at first boot on supported hardware (fprintd + PAM); hibernation works out of the box — the swap subvolume is auto-sized to your RAM with CoW disabled; TPM2 auto-unlock means LUKS2 encryption with no passphrase at every boot; Profile Sync Daemon runs browser profiles from RAM, substantially reducing SSD write wear; volatile /var further reduces unnecessary writes; power-profiles-daemon (power-saver, balanced, performance) is pre-installed; switcheroo-control handles hybrid Intel+NVIDIA/AMD GPU switching. The read-only root also protects the system from corruption during unexpected shutdowns.
Yes. NVIDIA open-source drivers (nvidia-open, nvidia-utils) and NVIDIA Prime are pre-installed and configured — including full Vulkan support. Works at first boot on most NVIDIA hardware. For hybrid GPU laptops (Intel iGPU + NVIDIA dGPU), switcheroo-control and nvidia-prime are both pre-installed for GPU switching. Verify:
# Check NVIDIA driver is loaded
nvidia-smi
# Run an app on the discrete NVIDIA GPU explicitly
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"
# Check available GPUs (switcheroo)
gdbus call --system --dest net.hadess.SwitcherooControl \
--object-path /net/hadess/SwitcherooControl \
--method org.freedesktop.DBus.Properties.Get \
  net.hadess.SwitcherooControl GPUs
fprintd is pre-installed and integrated with PAM — fingerprint authentication works for login (GDM/SDDM), sudo, and screen unlock on supported hardware. Enroll your fingerprint:
# Enroll a finger (run as your user, not root)
fprintd-enroll
# List enrolled fingerprints
fprintd-list $USER
# Verify it works
fprintd-verify $USER
# If your reader isn't detected, check
lsusb | grep -i finger
journalctl -u fprintd -f
If your reader is supported by libfprint but not auto-detected, check the libfprint supported devices list.
fwupd is pre-installed with fwupd-refresh.timer running automatic checks. Update BIOS, NVMe controllers, SSD firmware, Thunderbolt devices, and other LVFS-supported hardware directly from ShaniOS:
# Check for available firmware updates
fwupdmgr get-updates
# Apply updates
fwupdmgr update
# See all manageable devices
fwupdmgr get-devices
Supported hardware includes most major laptop manufacturers (Dell, Lenovo, HP, ASUS, Framework, etc.), NVMe drives from most vendors, and many USB peripherals. Check fwupd.org/lvfs/devices for the full list.
Key terms used throughout this documentation.
| Term | Definition |
|---|---|
| @blue / @green | The two Btrfs subvolumes used as root filesystems in the blue-green deployment model. Only one is active (mounted as /) at any time; the other is the standby slot used for updates. |
| Atomic update | An update that either completes entirely or fails cleanly without touching the running system. The active slot is never modified during an update. |
| CoW (Copy-on-Write) | A Btrfs mechanism where modifying data creates a new copy of the changed blocks rather than overwriting in place. Enables efficient snapshots and sharing of unchanged data between @blue and @green. |
| UKI (Unified Kernel Image) | A single signed EFI binary containing the kernel, initramfs, and kernel command line. ShaniOS generates one per slot using gen-efi and dracut. Stored in the ESP, loaded directly by the UEFI firmware via systemd-boot. |
| shani-deploy | The atomic update and deployment tool. Handles downloading, verifying, extracting, and deploying new system images; rollbacks; storage analysis; and deduplication. Also self-updates on every run. |
| gen-efi | ShaniOS tool that generates the Unified Kernel Image (UKI) for the currently booted slot using dracut. Called automatically by shani-deploy during updates; can also be run manually. |
| @data | The persistent Btrfs subvolume (mounted at /data) storing the /etc overlay upper/work directories, service state bind-mount sources (/data/varlib/), logs, and system markers. Everything that must survive across updates lives here. |
| Overlay (/etc) | OverlayFS mount that presents /etc as writable by layering user changes (upper dir in @data) on top of the read-only base /etc from the active slot. Changes persist across all updates and rollbacks. |
| MOK (Machine Owner Key) | A user-managed key enrolled into the UEFI Secure Boot database via MOK Manager. Allows ShaniOS's signed bootloader (Shim → systemd-boot) and UKIs to boot with Secure Boot enabled. |
| bees / beesd | A background Btrfs deduplication daemon that continuously finds and eliminates duplicate data blocks across all subvolumes. Auto-configured at boot by beesd-setup.service; hash database scales automatically with disk size. |
| Distrobox | A tool that creates mutable Linux container environments (any distro — Ubuntu, Fedora, Arch, etc.) with seamless desktop integration: shared /home, exported app launchers, shared audio/display/USB. The recommended way to use traditional package managers on ShaniOS. |
| Slot | One of the two system images: @blue or @green. At any time one slot is "Active" (currently booted) and the other is "Candidate" (updated, waiting for next boot, or kept as rollback). Determined by /data/current-slot. |
| systemd.volatile=state | Kernel parameter that creates a tmpfs overlay for /var, making it volatile (cleared on every reboot). Persistent service data is restored via Btrfs subvolume mounts and bind mounts at boot, not by writing to the root filesystem. |
| nodatacow | Btrfs mount option that disables Copy-on-Write for a subvolume. Used on @libvirt (VM disks) and @swap (swapfile) where CoW causes excessive fragmentation and performance degradation. |
| PCR (Platform Configuration Register) | A set of TPM registers that record measurements of the boot process (firmware, bootloader, kernel). Used during TPM enrollment to bind the LUKS key to a specific, verified boot chain. PCR 0 = firmware; PCR 7 = Secure Boot state. |
| ostree / composefs | Infrastructure libraries used for image content addressing and verification. composefs provides a read-only filesystem layer derived from a Merkle tree of the image content, enabling integrity checks of the active slot's file tree. |
| skopeo | A command-line tool for inspecting, copying, signing, and syncing OCI/Docker container images between registries — without pulling them into local storage first. Part of the pre-installed Podman ecosystem alongside buildah. |
| waydroid-helper | A ShaniOS-specific utility that automates Waydroid initialisation: downloads the Android image, configures kernel parameters, sets up the LXC container, and validates firewall rules — all in a single guided command. |
| @nix | A dedicated Btrfs subvolume mounted at /nix for the Nix package manager. Shared across both @blue and @green slots so Nix packages survive all system updates and rollbacks. Nix is pre-installed on ShaniOS; a channel must be added on first use with nix-channel --add before installing packages. |
| passim | A local caching and sharing daemon that speeds up fwupd firmware downloads by broadcasting available firmware files to other machines on the LAN via mDNS/Avahi, avoiding repeated downloads of the same payload across multiple ShaniOS installations. |
| shani-* packages | The family of ShaniOS-specific meta-packages (shani-core, shani-deploy, shani-desktop-plasma, shani-multimedia, shani-network, shani-storage, shani-tools, shani-accessibility, shani-bluetooth, shani-fonts, shani-printer, shani-scanner, shani-video, shani-video-guest, shani-peripherals, and others) that group systemd services, udev rules, configuration snippets, and default settings into logical units. Each pulls in the appropriate base packages and pre-configures them for ShaniOS's immutable environment. |
| shani-update | The per-user systemd service and timer (shani-update.service / shani-update.timer) that runs in the background to check for OS updates. Fires 5 minutes after login then every 2 hours. When an update is available it presents a GUI dialog (yad/zenity/kdialog) asking the user to install or defer. Runs pkexec shani-deploy under systemd-inhibit when approved. Logs to ~/.cache/shani-update.log. |
| dracut | The initramfs generator used by ShaniOS to build the initrd that is embedded in each Unified Kernel Image (UKI). Called by gen-efi during every deployment. dracut handles LUKS2 unlock, Btrfs subvolume mounting, the OverlayFS /etc mount, and bind mounts before handing off to systemd as PID 1. |
| IMA / EVM | Linux Integrity Measurement Architecture and Extended Verification Module — the "Integrity" component of ShaniOS's LSM stack (lsm=...integrity...). IMA measures file hashes at access time and logs or blocks access to files that have changed since measurement. EVM protects extended attributes (including IMA hashes) using an HMAC. Together they provide runtime file integrity verification beyond what the read-only root enforces. |
| MGLRU | Multi-Generation LRU — a Linux kernel memory reclaim algorithm enabled by default on ShaniOS with aggressive settings. MGLRU is more efficient than the traditional LRU at deciding which memory pages to evict under pressure, reducing latency spikes during gaming and other workloads. Controlled via /sys/kernel/mm/lru_gen/. |
| PSD (Profile Sync Daemon) | A systemd user service that moves browser profiles (Firefox, Chromium, Vivaldi, etc.) into a tmpfs RAM filesystem at login and syncs them back to disk periodically and at logout. Reduces SSD write wear and improves browser performance. Pre-enabled for all users on ShaniOS. |
| @flatpak | The Btrfs subvolume mounted at /var/lib/flatpak. Stores all system-wide Flatpak runtimes, applications, and their data. Shared across @blue and @green slots — installed Flatpaks are immediately available regardless of which slot is booted and survive all updates and rollbacks. |
| @containers | The Btrfs subvolume mounted at /var/lib/containers. Stores all Podman container images, volumes, and overlay layers for the current user. CoW is enabled so container layers are efficiently deduplicated by bees. Survives all system updates and rollbacks. |
| @swap | The Btrfs subvolume that holds the system swapfile. Created on first boot sized to match installed RAM. CoW is disabled (nodatacow) on this subvolume — a requirement for swapfiles on Btrfs. Enables reliable hibernation. The resume= and resume_offset= kernel parameters pointing to this swapfile are embedded in the UKI. |
| Apptainer | An HPC/scientific container runtime (formerly Singularity) pre-installed on ShaniOS. Unlike Docker/Podman, Apptainer containers run as the calling user with no daemon and no privilege escalation — safe for multi-user environments and reproducible research. Consumes SIF (SquashFS Image Format) container images. Compatible with Docker Hub and OCI registries via automatic image conversion. |
ShaniOS is open-source and welcomes community contributions.