malobeo infrastructure

This repository contains the NixOS configurations of the digital malobeo infrastructure. It is used to set up, test, build and deploy the different hosts in a reproducible manner.

deploying configuration

Hosts are deployed automatically from master. The Hydra build server builds new commits, and on success the hosts periodically pull those changes. Big changes (like updating the flake lock) should be committed to the staging branch first. Hydra builds staging separately, and on success you can merge it into master.

deploy fresh host

If you want to deploy a completely new host, refer to the docs.

testing configuration

Refer to https://docs.malobeo.org/anleitung/microvm.html#testing-microvms-locally

development

requirements

We use flake-based configurations for our hosts. If you want to build configurations on your own machine, you first have to enable flakes by adding the following to your configuration.nix (in nix.conf, the equivalent is the line experimental-features = nix-command flakes):

nix.extraOptions = ''
  experimental-features = nix-command flakes
'';

More information about flakes can be found here

dev shell

A development shell with the correct environment can be started by running nix develop.

If you're using direnv, you can add flake support by following these steps: link

build a configuration

To build a configuration, run the following command (replace <hostname> with the actual hostname):

nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel

documentation

The documentation is automatically built from master and can be found here: docs.malobeo.org
Locally, you can serve the documentation using nix run .#docs or nix run .#docsDev

Durruti

Hetzner Server

Lucia

Local Raspberry Pi 3

Website

  • Hosted on Uberspace
  • Runs malobeo.org (WordPress) and forum.malobeo.org (phpBB)
  • Access via SSH with a public key or password
  • Files under /var/www/virtual/malobeo/html

music

TODO

  • Write this wiki

infrastructure

  • Host a local wiki with publicly available information about the space, for example:
    • how to use the coffee machine
    • how to turn the electricity on/off
    • how to use the projector
    • how to borrow books
    • ...
  • Host a local wiki with infrastructure information
  • Host a pad (CodiMD aka HedgeDoc)
  • A network file share for storing the movies and streaming them within the network
    • Currently developed in the 'fileserver' branch
      • NFSv4 based
  • Rework the malobeo network infrastructure
    • Request a Mullvad account
    • Remove Freifunk, use OpenWrt with Mullvad configured
  • Evaluate PDF imposition solutions
    • pdfarranger

external services

We want to host two services that need a bit more resources: a booking system for the room itself and a library system.

  • Analyse the best way to include our stuff into an external NixOS server:
    • writing a module that is included by the server
    • directly using a NixOS container on the host (see the sketch below)
    • a combination of both (a module that manages nginx etc. + a NixOS container for the services)
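
As a rough sketch of the second option (a NixOS container on the host), with a hypothetical "booking" container and a stand-in service, since the actual booking and library software is still to be chosen:

{ ... }:
{
  containers.booking = {
    autoStart = true;
    config = { ... }: {
      # stand-in service; replace with the actual booking system
      services.nginx.enable = true;
      system.stateVersion = "24.05";
    };
  };
}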

bots & programming

  • Create a Telegram bot that automatically posts the Tuesday events
  • Create a web app/interface replacing the current task list pad
    • could be a simple form for every Tuesday
    • an Element bot should send updates if some tasks are not filled out

Initrd-ssh

The initssh module can be used by importing inputs.self.nixosModules.malobeo.initssh

let cfg = malobeo.initssh

cfg.enable

Enable the initssh module

Default: false

cfg.authorizedKeys

Authorized keys for the initrd SSH server

Default: [ ]

cfg.ethernetDrivers

Ethernet drivers to load in the initrd.
Run lspci -k | grep -iA4 ethernet to find out which driver your NIC uses.

Default: [ ]

Example: [ "r8169" ]
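
Putting the options together, a minimal sketch of a host configuration using this module (the key and driver values are placeholders; the option names are the ones listed above):

{ inputs, ... }:
{
  # inputs must be available to the module, e.g. via specialArgs
  imports = [ inputs.self.nixosModules.malobeo.initssh ];

  malobeo.initssh = {
    enable = true;
    # placeholder key; add the real admin keys here
    authorizedKeys = [ "ssh-ed25519 AAAA... admin@example" ];
    # placeholder driver, found via the lspci command above
    ethernetDrivers = [ "r8169" ];
  };
}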

Disks

The disks module can be used by importing inputs.self.nixosModules.malobeo.disko

let cfg = malobeo.disks

cfg.enable (bool)

  • Type: bool
  • Default: false
  • Description:
    Enables the disk creation process using the disko tool. Set to true to initialize disk setup.

cfg.hostId (string)

  • Type: string
  • Default: ""
  • Description:
    The host ID used for ZFS disks. This ID should be generated using a command like head -c4 /dev/urandom | od -A none -t x4.

cfg.encryption (bool)

  • Type: bool
  • Default: true
  • Description:
    Determines if encryption should be enabled. Set to false to disable encryption for testing purposes.

cfg.devNodes (string)

  • Type: string
  • Default: "/dev/disk/by-id/"
  • Description:
    Specifies where the disks should be mounted from.
    • Use /dev/disk/by-id/ for general systems.
    • Use /dev/disk/by-path/ for VMs.
    • For more information on disk name conventions, see OpenZFS FAQ.

let cfg = malobeo.disks.root

cfg.disk0 (string)

  • Type: string
  • Default: ""
  • Description:
    The device name (the part after /dev/, e.g. sda) for the root filesystem.

cfg.disk1 (string)

  • Type: string
  • Default: ""
  • Description:
    The device name (the part after /dev/, e.g. sdb) for the optional mirror disk of the root filesystem.

cfg.swap (string)

  • Type: string
  • Default: "8G"
  • Description:
    Size of the swap partition on disk0. This is applicable only for the root disk configuration.

cfg.reservation (string)

  • Type: string
  • Default: "20GiB"
  • Description:
    The ZFS reservation size for the root pool.

cfg.mirror (bool)

  • Type: bool
  • Default: false
  • Description:
    Whether to configure a mirrored ZFS root pool. Set to true to mirror the root filesystem across disk0 and disk1.

let cfg = malobeo.disks.storage

cfg.enable (bool)

  • Type: bool
  • Default: false
  • Description:
    Enables the creation of an additional storage pool. Set to true to create the storage pool.

cfg.disks (list of strings)

  • Type: listOf string
  • Default: []
  • Description:
    A list of device names without the /dev/ prefix (e.g., sda, sdb) to include in the storage pool.
    Example: [ "disk/by-id/ata-ST16000NE000-2RW103_ZL2P0YSZ" ].

cfg.reservation (string)

  • Type: string
  • Default: "20GiB"
  • Description:
    The ZFS reservation size for the storage pool.

cfg.mirror (bool)

  • Type: bool
  • Default: false
  • Description:
    Whether to configure a mirrored ZFS storage pool. Set to true to mirror the storage pool.

Example Configuration

{
  malobeo.disks = {
    enable = true;
    hostId = "abcdef01";
    encryption = true;
    devNodes = "/dev/disk/by-id/";
    
    root = {
      disk0 = "sda";
      disk1 = "sdb";
      swap = "8G";
      reservation = "40GiB";
      mirror = true;
    };
    
    storage = {
      enable = true;
      disks = [ "sdc" "sdd" "disks/by-uuid/sde" ];
      reservation = "100GiB";
      mirror = false;
    };
  };
}

Create host with nixos-anywhere

We use a nixos-anywhere wrapper script to deploy new hosts. The wrapper script takes care of copying persistent host keys before calling nixos-anywhere.

To do that, boot the host from a NixOS image and set a root password:

sudo su
passwd

After that, get the host's IP using ip a and start the deployment from your own machine:

# from the infrastructure repository root dir:
nix develop .#
remote-install hostname 10.0.42.23

Testing Disko

Testing the disko partitioning works quite well. Just run the following and check the datasets in the VM:

nix run -L .\#nixosConfigurations.fanny.config.system.build.vmWithDisko

Sops

How to add admin keys

  • GPG:

    • Generate gpg key
    • Add public key to ./machines/secrets/keys/users/
    • Write the fingerprint of the gpg key in .sops.yaml under keys: in the format - &admin_$USER $FINGERPRINT
  • Age:

    • Generate age key for Sops:
      $ mkdir -p ~/.config/sops/age
      $ age-keygen -o ~/.config/sops/age/keys.txt
      
      or to convert an ssh ed25519 key to an age key
      $ mkdir -p ~/.config/sops/age
      $ nix-shell -p ssh-to-age --run "ssh-to-age -private-key -i ~/.ssh/id_ed25519 > ~/.config/sops/age/keys.txt"
      
    • Get public key using $ age-keygen -y ~/.config/sops/age/keys.txt
    • Write public key in .sops.yaml under keys: in the format - &admin_$USER $PUBKEY
  • Write - *admin_$USER under the appropriate key_groups: of the secrets the user should have access to

  • cd machines/ and re-encrypt the existing secrets for the new key with sops updatekeys $path/to/secrets.yaml

How to add host keys

If a new host is created, we have to add its age key to the sops config. Do the following:

# ssh into the host and run:
nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
# add a new host entry with the output of that command in /machines/.sops.yaml

MaloVPN

The VPN server is running in the cloud. To let a host access the VPN, you need to do the following:

  • Generate a WireGuard keypair
  • Add the host to ./machines/modules/malobeo/peers.nix
  • Enable the MaloVPN module on the host

Generate Wireguard keys

Enter a nix shell for the wg commands: nix-shell -p wireguard-tools

umask 077
wg genkey > wg.private
wg pubkey < wg.private > wg.pub

Now you have a private/public keypair. Add the private key to the host's sops secrets if you like.

Add host to peers.nix

peers.nix is a central 'registry' of all the hosts in the VPN. Any host added here will be added to the VPN server's peer list, allowing it to access the VPN. This lets us control who gets access through this repository.

  • Add your host to /machines/modules/malobeo/peers.nix
  • Set the role to "client"
  • Choose an IP address as 'address' that is not already taken
  • Set allowedIPs like the other hosts, unless we want to limit this host to only access certain peers
  • Add your public key here as a string

After that, commit your changes and either open a PR or push directly to master.
Example:

"celine" = {
  role = "client";
  address = [ "10.100.0.2/24" ];
  allowedIPs = [ "10.100.0.0/24" ];
  publicKey = "Jgx82tSOmZJS4sm1o8Eci9ahaQdQir2PLq9dBqsWZw4=";
};

Enable MaloVPN on Host

Either configure WireGuard manually or use the malobeo VPN module.
The 'name' must match your host's name in peers.nix:

sops.secrets.private_key = {};

imports = [
  malobeo.nixosModules.malobeo.vpn
];

services.malobeo.vpn = {
  enable = true;
  name = "celine";
  privateKeyFile = config.sops.secrets.private_key.path;
};

After a rebuild-switch you should be able to ping the VPN server at 10.100.0.1. If the peers.nix change was committed only shortly before, it may take a while until the VPN server has updated its peer list.

Updates

Nextcloud

Update Nextcloud to a new major version:

  • Create the state directories: mkdir /tmp/var /tmp/data
  • Run the VM with state dirs to initialize the state: sudo run-vm nextcloud --dummy-secrets --networking --var /tmp/var --data /tmp/data
  • Update the lock file: nix flake update --commit-lock-file
  • Change services.nextcloud.package to the next version (do not skip major version upgrades); see the sketch after this list
  • Change the custom extraApps to the new version
  • TEST!
  • Run the VM again (sudo run-vm nextcloud --dummy-secrets --networking --var /tmp/var --data /tmp/data); it should successfully upgrade Nextcloud from the old to the new version
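
As a minimal sketch of the package bump (the attribute names assume a move from Nextcloud 30 to 31; adjust to your actual versions):

# in the Nextcloud configuration.nix
# before (example): services.nextcloud.package = pkgs.nextcloud30;
services.nextcloud.package = pkgs.nextcloud31;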

Rollbacks

Declaring a MicroVM

A host's nixosSystem modules should be declared using the makeMicroVM helper function. Use durruti for orientation:

    modules = makeMicroVM "durruti" "10.0.0.5" [
      ./durruti/configuration.nix
    ];

"durruti" is the hostname.
"10.0.0.5" is the IP assigned to its tap interface.

Testing MicroVMs locally

MicroVMs can easily be built and run on your local machine for development.
We provide the run-vm script to handle things like development (dummy) secrets, shared directories, etc. Usage examples:

# run without args to get available options and usage info
run-vm

# run nextcloud locally with dummy secrets
run-vm nextcloud --dummy-secrets

# share a local folder as /var/lib dir so that nextcloud application data stays persistent between boots
mkdir /tmp/nextcloud
run-vm nextcloud --dummy-secrets --varlib /tmp/nextcloud

# enable networking to provide connectivity between multiple vms
# for that the malobeo hostBridge must be enabled on your host
# this example deploys persistent grafana on overwatch and fetches metrics from infradocs
mkdir /tmp/overwatch
run-vm overwatch --networking --varlib /tmp/overwatch
run-vm infradocs --networking

Fully deploy MicroVMs on the local host

In order to test persistent MicroVMs locally, we need to create them using the microvm command.
This is necessary to be able to mount persistent /etc and /var volumes on those hosts.
Do the following:

Prepare your host by including microvm.nixosModules.host in your flake.nix (see the MicroVM docs).
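
A minimal sketch of what that could look like, assuming a flake input named microvm pointing at the microvm.nix project and a hypothetical host called myhost:

{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  inputs.microvm.url = "github:astro/microvm.nix";

  outputs = { nixpkgs, microvm, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # the host module from microvm.nix
        microvm.nixosModules.host
        ./configuration.nix
      ];
    };
  };
}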

# go into our repo and start the default dev shell (or use direnv)
nix develop .#

# create a microvm on your host (on the example of durruti)
sudo microvm -c durruti -f git+file:///home/username/path/to/infrastructure/repo

# start the vm
sudo systemctl start microvm@durruti.service

# this may fail; if so, we most probably need to create /var and /etc manually, then restart
sudo mkdir -p /var/lib/microvms/durruti/{var,etc}

# now you can for example get the rsa host key from /var/lib/microvms/durruti/etc/ssh/

# alternatively you can run the vm in interactive mode (maybe stop the microvm@durruti.service first)
microvm -r durruti

# after you made changes to the microvm, update and restart the vm
microvm -uR durruti

# deleting the vm again:
sudo systemctl stop microvm@durruti.service
sudo systemctl stop microvm-virtiofsd@durruti.service
sudo rm -rf /var/lib/microvms/durruti

Host Setup

Network Bridge

To provide network access to the VMs a bridge interface needs to be created on your host. For that:

  • Add the infrastructure flake as an input to your host's flake
  • Add inputs.malobeo.nixosModules.malobeo to your host's imports
  • Enable the host bridge: services.malobeo.microvm.enableHostBridge = true; (see the sketch below)
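
Put together, a minimal sketch of a host module doing the above (assuming the infrastructure flake is added as the input malobeo and that inputs is passed to your modules, e.g. via specialArgs):

{ inputs, ... }:
{
  # pull in the malobeo module collection from the infrastructure flake
  imports = [ inputs.malobeo.nixosModules.malobeo ];

  # create the bridge interface the MicroVMs attach to
  services.malobeo.microvm.enableHostBridge = true;
}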

If you want to provide internet access to the VMs, you need to create a NAT. This could be done like this:

networking.nat = {
  enable = true;
  internalInterfaces = [ "microvm" ];
  externalInterface = "eth0"; #change to your interface name
};

Auto Deploy VMs

By default no MicroVMs will be initialized on the host - this should be done using the microvm command line tool. But since we want certain VMs to always be deployed, this can be configured using the malobeo.microvm.deployHosts option. VMs configured with this option will be initialized and autostarted at boot. Updating still needs to be done imperatively, or by enabling autoupdates.nix.

The following example would init and autostart durruti and gitea:

malobeo.microvm.deployHosts = [ "durruti" "gitea" ];

Updating Nextcloud

Updating the draggable patch

The draggable patch is a one-line patch to src/components/cards/CardItem.vue in the deck repo.
Direct link: https://git.dynamicdiscord.de/ahtlon/deck/commit/77cbcf42ca80dd32e450839f02faca2e5fed3761

The easiest way to apply it is:

  1. Sync the repo with the remote https://github.com/nextcloud/deck/tree/main
  2. Check out the stable branch for the Nextcloud version you need
     • example: git checkout stable31
  3. Apply the patch using git cherry-pick bac32ace61e7e1e01168f9220cee1d24ce576d5e
  4. Start a nix-shell with nix-shell -p gnumake krankerl php84Packages.composer php nodejs_24
  5. Run krankerl package
  6. Upload the archive at "./build/artifacts/deck.tar.gz" to a file storage (ask Ahtlon for access to the storj s3 or use your own)
  7. Change the url and sha in the Nextcloud configuration.nix: deck = pkgs.fetchNextcloudApp {}; (see the sketch below)
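
As a rough sketch of that last step (the URL and hash are placeholders for the uploaded archive and its real hash; the exact argument names depend on your nixpkgs version of pkgs.fetchNextcloudApp):

deck = pkgs.fetchNextcloudApp {
  # placeholder URL: point this at wherever you uploaded deck.tar.gz
  url = "https://example.com/deck.tar.gz";
  # placeholder hash: replace with the real one
  sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  # license attribute name as expected by fetchNextcloudApp; adjust if needed
  license = "agpl3Plus";
};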