Compare commits


11 commits

17 changed files with 709 additions and 4 deletions

.gitignore vendored Normal file
View file

@ -0,0 +1,2 @@
pyinfra-debug.log
.venv/

README.md
View file

@ -12,3 +12,332 @@ or
- run `git pull` to fetch the newest version
- run `pyinfra @local deploy.py` to install/update `0x90.ssh_config`
- run `pyinfra --dry inventory.py deploy.py` and check that you are on the same state that is already deployed
# social practices
- maintainers: people who know (next to) everything and would be able to learn the rest
- adepts: people who are still learning about the infrastructure, but don't need to keep everything in mind
- associates: others, who just need to maintain a certain service
Discussions can happen:
- in presence (gathering), should happen at least every 3-4 months, to discuss the big picture
- in presence (coworking), while working on new services
- in issues and PRs for concrete proposals
- in online calls to fix emergencies
- in chat groups for exploring ideas and everything else
## structure of this repository
this repository documents the current state
of the infrastructure.
For each server/VM,
it contains a directory with
- a README.md file which gives an overview on the server
- a pyinfra inventory.py file
- a pyinfra deploy.py file which documents what's installed
- the configuration files pyinfra deploys
- optional: a deploy-restore.py file which can restore data from backup
- optional: other pyinfra deploy files which only manage certain services or tasks, like upgrades
The repository also contains a lib/ directory
with pyinfra packages we reuse across servers.
With pull requests we can propose changes
to the current infrastructure.
PRs need to be approved by at least one maintainer.
The pyinfra code in PRs can already be deployed,
if it is not destructive - decide responsibly.
## create a VM
To add a new VM for a service you want to manage,
0. Checkout a new branch with `git checkout -b your-server-name`
1. Add your VM to inventory.py
2. Create a directory for the VM
3. Add your VM to ararat/deploy.py
4. Ask the core team to run `pyinfra ararat.0x90.space ararat/deploy.py`
to create your VM
5. Write your pyinfra deployment script in your-server-name/deploy.py
6. Deploy it, if it doesn't work change it, repeat until the service works
7. Copy TEMPLATE.md to your-server-name/README.md and fill it out.
You can leave out parts which are obvious from your deploy.py file.
8. Commit your changes, push them to your branch,
open a pull request from your branch to the development branch,
and ask a maintainer to review and merge it
## tools we use
The hope is that you don't need to know all of these tools
to already do useful things,
but can systematically dive deeper into the infrastructure.
### pass
password manager to store passphrases and secrets,
the repository with our secrets
is at <https://git.0x90.space/missytake/0x90-secrets> for now.
### ssh
to connect to servers and VMs with root@,
no sudo.
root should have a password set,
but password access via SSH should be forbidden.
There should be no shared SSH keys,
one SSH key per person.
SSH private keys should be password-protected
and only stored on laptops
with hard disk encryption.
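A sketch of the corresponding sshd_config lines
(standard OpenSSH options; the exact file location may differ per distribution):
```
# /etc/ssh/sshd_config (sketch)
PermitRootLogin prohibit-password
PasswordAuthentication no
```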
### systemctl & journalctl
to look at status and log output of services.
systemd is a good way of keeping services running,
at least on Linux machines.
On OpenBSD we will use /etc/rc.d/ scripts.
### git
for updating the documentation,
pushing and pulling secrets,
and opening PRs to doku/pyinfra repos.
to be discussed:
- Keep in mind that PRs can and will be deployed to servers. OR
- The main branch should always reflect the state of the machine.
### markdown + sembr
for documenting the infrastructure.
[Semantic line breaks](https://sembr.org/) are great
for formatting text files
which are managed in git.
### kvm + virsh
as a hypervisor
which we can use to create VMs
for specific services.
The hypervisor is a minimal Alpine Linux
with "boot to RAM";
the data partition for the VM images is encrypted.
### pyinfra
as a nice declarative config tool for deployment.
We can also maintain some of the things we need
in extra Python modules.
pyinfra vs. Ansible? ~> needs investigation:
currently there is an Ansible setup on golem,
while pyinfra is used in Delta Chat and one ezra service.
### podman
to isolate services in root-less containers.
a podman container should run in a systemd process.
it takes some practice to understand
how to run commands inside a container
or where the files are mounted.
But it goes well with pyinfra
if it's managed in systemd.
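A minimal sketch of a systemd unit wrapping a podman container
(service name, image, and ports are made-up examples):
```
# /etc/systemd/system/example.service (sketch, names assumed)
[Unit]
Description=example web service in a podman container
After=network-online.target

[Service]
ExecStartPre=-/usr/bin/podman rm -f example
ExecStart=/usr/bin/podman run --rm --name example -p 8080:80 docker.io/library/nginx:alpine
ExecStop=/usr/bin/podman stop example
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Newer podman versions can also generate such units
(`podman generate systemd`, or Quadlet files).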
### nftables
as a declarative firewall
which can be managed in pyinfra.
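A minimal ruleset sketch that pyinfra could deploy as a file
(the open ports are an assumption, adjust per server):
```
# /etc/nftables.conf (sketch)
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 80, 443 } accept
    icmp type echo-request accept
  }
}
```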
### nginx
as an HTTPS reverse proxy,
passing traffic on to the podman containers.
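A sketch of such a vhost
(server name, backend port, and certificate paths are assumptions;
the certificate paths follow acmetool's /var/lib/acme/live/ layout):
```
# nginx vhost sketch, names assumed
server {
    listen 443 ssl;
    server_name www.example.org;
    ssl_certificate /var/lib/acme/live/www.example.org/fullchain;
    ssl_certificate_key /var/lib/acme/live/www.example.org/privkey;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```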
### acmetool
as a tool to manage Let's Encrypt certificates,
which goes well with pyinfra
because of its declarative nature.
It also ships acmetool-redirector
which redirects HTTP traffic on port 80
to nginx on port 443.
There is a pyinfra package for it at
<https://github.com/deltachat/pyinfra-acmetool/>.
On OpenBSD, <https://man.openbsd.org/acme-client> + <https://man.openbsd.org/relayd> are used instead.
### cron
to schedule recurring tasks,
like acmetool's certificate renewals
or the nightly borgbackup runs.
On OpenBSD, a daily cron job already executes /etc/daily.local.
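Such cron entries could look like this sketch
(times are arbitrary, and `run-borg-backup.sh` is a hypothetical wrapper script,
not something this repository ships):
```
# /etc/crontab entries (sketch, paths and times assumed)
0 3 * * *  root  /usr/local/bin/run-borg-backup.sh
0 4 * * *  root  acmetool reconcile --batch
```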
### borgbackup
can be used to back up application data
in a nightly cron job.
Backups need to be stored at an extra backup server.
There is a pyinfra package for it at
<https://github.com/deltachat/pyinfra-borgbackup/>.
We might also look at restic ~> append-only backups can be restricted better.
### wireguard
as a VPN to connect the backup server,
which can be at some private house,
with the production servers.
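A sketch of the backup server's wireguard config
(keys, addresses, and the 10.0.90.0/24 range are placeholders):
```
# /etc/wireguard/wg0.conf on the backup server (sketch)
[Interface]
PrivateKey = <backup-server-private-key>
Address = 10.0.90.1/24
ListenPort = 51820

[Peer]
# a production server
PublicKey = <production-server-public-key>
AllowedIPs = 10.0.90.2/32
```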
### prometheus
as a tool to measure service uptime
and measure typical errors
from journalctl output.
It can expose metrics via HTTPS
behind basic auth.
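A scrape job behind basic auth might be configured like this sketch
(job name, credentials file, and target are assumptions):
```
# prometheus.yml fragment (sketch)
scrape_configs:
  - job_name: "node"
    scheme: https
    basic_auth:
      username: prometheus
      password_file: /etc/prometheus/scrape.password
    static_configs:
      - targets: ["www.example.org:443"]
```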
### grafana
as a visual dashboard to show service uptime
and whether services throw errors.
It can also send out email alerts.
### team-bot
a Delta Chat bot to receive support requests
and email alerts from grafana.
# Set up alpine on hetzner
This was only tested with a cloud VPS so far.
Source: <https://gist.github.com/c0m4r/e38d41d0e31f6adda4b4c5a88ba0a453>
(but it's less of a hassle than described there)
To create an alpine server on hetzner,
you need to first create a Debian VPS or something similar.
Then you boot into the rescue system.
Get the download link of the latest VIRTUAL x86_64 alpine iso
from <https://alpinelinux.org/downloads/>.
Login to the rescue system via console or SSH,
and write the ISO to the disk:
```
ssh root@xxxx:xxxx:xxxx:xxxx::1
wipefs -a /dev/sda
wget https://dl-cdn.alpinelinux.org/alpine/v3.20/releases/x86_64/alpine-virt-3.20.3-x86_64.iso # or whatever link you got from alpine
dd if=alpine-virt-3.20.3-x86_64.iso of=/dev/sda
reboot
```
Then open the server console (SSH doesn't work),
login to root (no password required),
and proceed with:
```
cp -r /.modloop /root
cp -r /media/sda /root
umount /.modloop /media/sda
rm /lib/modules
mv /root/.modloop/modules /lib
mv /root/sda /media
setup-alpine
```
Then select what you wish,
contrary to the guide above,
DHCP is actually fine.
The drive should be sda,
the installation type can be sys
(why go through the hassle).
Voilà! reboot and login.
Probably the first SSH login will be via root password,
as copy-pasting your public SSH key into the console doesn't really work.
Make sure the SSH config allows this
(and turn password root access off afterwards).
## Encrypting /var/lib/libvirt partition
**Status: tested with Hetzner VPS, not deployed in production yet**
Messing with file systems and partitions
should not be done by automation scripts,
so I created the LUKS-encrypted /dev/sdb partition manually.
(So far, /dev/sdb was added via a Hetzner volume,
but it can be any partition actually)
To create a partition in the VPS volume
(which was formatted to ext4 originally),
- I ran `fdisk /dev/sdb`,
- entered `o` to create a DOS partition table,
- added `n` to add a new primary partition, using all available space,
- and `w` to save to disk and exit.
Then I ran `cryptsetup luksFormat /dev/sdb1`
and entered the passphrase from `pass 0x90/ararat/sdb-crypt`
to create a LUKS volume.
Now I could decrypt the new volume with
`cryptsetup luksOpen /dev/sdb1 sdb_crypt`
and entering the passphrase from `pass 0x90/ararat/sdb-crypt`.
Finally, I ran `mkfs.ext4 /dev/mapper/sdb_crypt`
to create an ext4 file system
in the encrypted volume.
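Taken together, the manual steps above as a command transcript
(destructive — run as root, and only against the correct device):
```
fdisk /dev/sdb                      # o, n (primary, full size), w
cryptsetup luksFormat /dev/sdb1
cryptsetup luksOpen /dev/sdb1 sdb_crypt
mkfs.ext4 /dev/mapper/sdb_crypt
```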
# mount qcow2 VM disk images
This is a quick guide to mounting qcow2 disk images on your host server.
This is useful for resetting passwords,
editing files, or recovering something
without the virtual machine running.
**Step 1 - Enable NBD on the Host**
```
modprobe nbd max_part=8
```
**Step 2 - Connect the QCOW2 as network block device**
```
qemu-nbd --connect=/dev/nbd0 /var/lib/vz/images/100/vm-100-disk-1.qcow2
```
**Step 3 - Find The Virtual Machine Partitions**
```
fdisk -l /dev/nbd0
```
**Step 4 - Mount the partition from the VM**
```
mount /dev/nbd0p1 /mnt
```
**Step 5 - After you are done, unmount and disconnect**
```
umount /mnt
qemu-nbd --disconnect /dev/nbd0
rmmod nbd
```

TEMPLATE.md Normal file
View file

@ -0,0 +1,104 @@
# Server: Server name
## Usage
Who is using this server?
Who needs the server and will be affected if the server is not working?
## Maintainers
Who to ask about this server?
## Domain Settings
Where are the DNS settings? E.g. with Hetzner or in a DNS zone file.
How to change DNS settings?
Which domains and subdomains exist?
## Hosting
Where is the server hosted?
Add a link to the hosting admin interface, e.g. <https://console.hetzner.cloud/>.
## Services
Which services are running there?
E.g. there are `www.example.org` and `ci.example.org` services.
### Service: ci.example.org
Each service has a greppable heading starting with `### Service: `.
Which software is the service running? E.g. nginx.
How was it deployed? E.g. manually or with pyinfra.
How can the software be managed?
Where are the admin credentials stored if you need to fix something (e.g. for mailcow)?
Is there an admin chat group (e.g. for mailadm), and how can you join it?
#### Monitoring
How to read the logs of the service?
How are admins notified when the service is down?
#### Deployment
How was the service deployed?
How to reinstall it?
#### Upgrade Strategy
How is the service upgraded?
Which commands should be run to upgrade it, e.g. where is the upgrade script located and how is it run?
If there is an official documentation, put a link to it in this section.
#### Maintainers
Who to ask about the service?
#### Integration
How is the service related to other services running on this or other servers?
E.g. service `ci.example.org` uses the secret storage `secrets.example.net` and runner `runner.example.com` hosted elsewhere.
### Service: www.example.org
Description similar to the other service.
## Users
Who has access to this server?
Which admin accounts are there?
Which service accounts are there?
Which user accounts are there?
## Monitoring
How do we notice if something fails?
Where do the errors show up?
Where are the logs for the services located? E.g. Postfix logs go to `/var/log/mail.log`.
## Upgrade Strategy
How do we keep the services up to date?
## Backup and Restore
How is the server backed up, and how can the backup be restored?
## Deployment
How to reinstall the server?
Which settings were selected to create the server? E.g. the operating system image.
Are there deployment scripts, and if so, where are they located and how are they run?
# Changelog
## 2023-05-30 - Created the server
Document the steps taken here.
## 2023-06-10 - Installed nginx
...

View file

@ -0,0 +1,145 @@
import os

from pyinfra import host, inventory
from pyinfra.operations import server, apk, files, openrc
from pyinfra.facts.server import Mounts
from pyinfra_util import get_pass

files.replace(
    name="Enable TCP forwarding via SSH server",
    path="/etc/ssh/sshd_config",
    text="AllowTcpForwarding no",
    replace="AllowTcpForwarding yes",
)
openrc.service(
    name="Restart sshd",
    service="sshd",
    restarted=True,
)
files.replace(
    name="Enable community repository",
    path="/etc/apk/repositories",
    text="#http://dl-cdn.alpinelinux.org/alpine/v3.20/community",
    replace="http://dl-cdn.alpinelinux.org/alpine/v3.20/community",
)
apk.update()
apk.packages(
    packages=["cryptsetup", "vim"]
)

mounts = host.get_fact(Mounts)
if "/var/lib/libvirt" not in mounts:
    decryption_password = get_pass('0x90/ararat/sdb-crypt').strip()
    if decryption_password:
        server.shell(
            name="Decrypt and mount /data",
            commands=[
                f" echo -n '{decryption_password}' | cryptsetup luksOpen --key-file - /dev/sdb1 sdb_crypt || true",
                "mount /dev/mapper/sdb_crypt /var/lib/libvirt",
            ]
        )

apk.packages(
    packages=["libvirt-daemon", "qemu-img", "qemu-system-x86_64", "virt-install"]
)
openrc.service(
    name="Start libvirtd",
    service="libvirtd",
    running=True,
    enabled=False,
)

# add networking: https://wiki.alpinelinux.org/wiki/KVM#Networking
server.modprobe(
    name="activate tun kernel module",
    module="tun",
)
# echo "tun" >> /etc/modules-load.d/tun.conf
files.line(
    name="autostart tun",
    path="/etc/modules-load.d/tun.conf",
    line="tun",
)
# cat /etc/modules | grep tun || echo tun >> /etc/modules
#files.line(path="/etc/modules",line="tun")

# add VMs to public network:
virsh_network_guests = []
for vm in inventory.groups.get("debian_vms"):
    #sudo ip addr add 65.109.242.20 dev eth0
    ipv4 = vm.data.get("ipv4")
    mac_address = '52:54:00:6c:3c:%02x' % vm.data.get("id")
    files.template(
        name=f"Add {ipv4} for {vm} to ararat",
        src="ararat/files/floating-ip.cfg.j2",
        dest=f"/etc/network/interfaces.d/60-{vm}-floating-up.cfg",  # doesn't work, interfaces.d isn't included
        vm=vm,
        ipv4=ipv4,
    )
    #server.shell(name=f"Add {ipv4} for {vm} to ararat", commands=[f"ip addr add {ipv4} dev eth{vm}"],)
    virsh_network_guests.append(f"<host mac='{mac_address}' name='{vm}' ip='{ipv4}' />")
openrc.service(
    service="networking",
    restarted=True,
)

# create public kvm network
files.template(
    name="Generate libvirt public network XML",
    src="ararat/files/public.network.j2",
    dest="/tmp/public.network",
    guests='\n '.join(virsh_network_guests),
    host_ipv4=host.name,
)
server.shell(
    name="Update libvirt public network",
    commands=[
        "virsh net-destroy public ; virsh net-undefine public || true",
        "virsh net-define /tmp/public.network",
        "virsh net-start public",
    ]
)
# disable ipv6 in a bridge if necessary

debian_image_path = "/var/lib/libvirt/images/debian-12-generic-amd64.qcow2"
files.download(
    name="Download Debian 12 base image",
    src="https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2",
    dest=debian_image_path,
)
for vm in inventory.groups.get("debian_vms"):
    if os.path.isfile(f"{vm}/files/cloud-init.yml"):
        files.put(
            name=f"Upload {vm}-cloud-init.yml",
            src=f"{vm}/files/cloud-init.yml",
            dest=f"/root/{vm}-cloud-init.yml",
        )
        #virt-install
    else:
        if vm.data.get("authorized_keys"):
            authorized_keys = "ssh_authorized_keys:\n - " + " - ".join(
                [get_pass(f"0x90/ssh_keys/{admin}.pub") for admin in vm.data.get("authorized_keys")]
            )
        else:
            authorized_keys = ""
        files.template(
            name=f"Upload {vm}-cloud-init.yml",
            src="ararat/files/cloud-init.yml.j2",
            dest=f"/root/{vm}-cloud-init.yml",
            ssh_authorized_keys=authorized_keys,
        )
    mac_address = '52:54:00:6c:3c:%02x' % vm.data.get("id")
    memory = 1024
    vcpus = 1
    disk_size = 4
    server.shell(
        name=f"virt-install {vm}",
        commands=[
            f"virsh list --all | grep {vm} || "  # only run virt-install if VM doesn't exist yet
            f"virt-install --name {vm} --disk=size={disk_size},backing_store={debian_image_path} "
            f"--memory {memory} --vcpus {vcpus} --cloud-init user-data=/root/{vm}-cloud-init.yml,disable=on "
            f"--network 'bridge=virbr0,network=public,mac_address={mac_address}' --osinfo=debian12 || true",
        ]
    )

View file

@ -0,0 +1,25 @@
#cloud-config
keyboard:
  layout: de
  variant: nodeadkeys
locale: en_US
timezone: UTC
disable_root: false
users:
  - name: root
    shell: /bin/bash
{{ ssh_authorized_keys }}
  - name: mop
    # so our user can just sudo without any password
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    # content from $HOME/.ssh/id_rsa.pub on your host system
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKZYJ91RLXRCQ4ZmdW6ucIltzukQ/k+lDOqlRIYwxNRv missytake@systemli.org
# Examples: https://cloudinit.readthedocs.io/en/latest/reference/examples_library.html#examples-library

View file

@ -0,0 +1,4 @@
auto eth0:{{ vm }}
iface eth0:{{ vm }} inet static
    address {{ ipv4 }}
    netmask 32

View file

@ -0,0 +1,10 @@
<network>
<name>public</name>
<forward mode='route' />
<bridge />
<ip address='{{ host_ipv4 }}' prefix='32'>
<dhcp>
{{ guests }}
</dhcp>
</ip>
</network>

View file

@ -1,5 +1,15 @@
targets = [
    "@local",
    ("ararat.0x90.space", dict(ssh_port=42022)),
    ("baixun.0x90.space", dict(ssh_port=42023)),
localhost = "@local"
hypervisor = [("95.217.163.200", dict(ssh_user="root"))]
debian_vms = [
    # "cloud",
    (
        "playground",
        {
            "authorized_keys": ["missytake", "hagi", "vmann"],
            "ipv4": "65.109.242.20",
            "id": 0,
        }
    ),
]

View file

@ -0,0 +1,3 @@
Metadata-Version: 2.1
Name: pyinfra-util
Version: 0.1

View file

@ -0,0 +1,7 @@
pyproject.toml
pyinfra_util/__init__.py
pyinfra_util/util.py
pyinfra_util.egg-info/PKG-INFO
pyinfra_util.egg-info/SOURCES.txt
pyinfra_util.egg-info/dependency_links.txt
pyinfra_util.egg-info/top_level.txt

View file

@ -0,0 +1 @@

View file

@ -0,0 +1 @@
pyinfra_util

View file

@ -0,0 +1 @@
from .util import get_pass, deploy_tmux

View file

@ -0,0 +1,56 @@
"""
nginx deploy
"""
import subprocess
from pyinfra.operations import files, apt
def get_pass(filename: str) -> str:
"""Get the data from the password manager."""
try:
r = subprocess.run(["pass", "show", filename], capture_output=True)
except FileNotFoundError:
readme_url = "https://git.0x90.space/deltachat/secrets"
print(f"Please install pass and pull the latest version of our pass secrets from {readme_url}")
exit()
return r.stdout.decode('utf-8')
def deploy_tmux(home_dir="/root", escape_key="C-b", additional_config=[]):
apt.packages(
name="apt install tmux",
packages=["tmux"],
)
config = [
f"set-option -g prefix {escape_key}",
"set-option -g aggressive-resize on",
"set-option -g mouse on",
"set-option -g set-titles on",
"set-option -g set-titles-string '#I:#W - \"#H\"'",
"unbind-key C-b",
"bind-key ` send-prefix",
"bind-key a last-window",
"bind-key k kill-session",
]
for item in additional_config:
config.append(item)
for line in config:
files.line(
path=f"{home_dir}/.tmux.conf",
line=line,
)
dot_profile_add = """
# autostart tmux
if [ -t 0 -a -z "$TMUX" ]
then
test -z "$(tmux list-sessions)" && exec tmux new -s "$USER" || exec tmux new -A -s $(tty | tail -c +6) -t "$USER"
fi
"""
files.block(
name="connect to tmux session on login",
path=f"{home_dir}/.profile",
content=dot_profile_add,
try_prevent_shell_expansion=True,
)

View file

@ -0,0 +1,7 @@
[build-system]
requires = ["setuptools>=45"]
build-backend = "setuptools.build_meta"
[project]
name = "pyinfra-util"
version = "0.1"