
Ralsina.Me — Roberto Alsina's website

Posts about linux (old posts, page 21)

Pet Server (September 2022 update)

This is a longer-term update on the state of my home server. You can read more about it in these 1 2 3 4 posts.

What's the hardware nowadays?

  • Radxa Zero: 4-core, 4GB RAM, 32GB eMMC
  • 2× 1TB HDDs over USB
  • 100Mbps wired Ethernet connection to the router

And ... so far, it's more than enough. No strain :-)

What is it running?

Also backups via restic and other random things. I have a separate OctoPrint server but I may move it here.

Really, it runs all that?

Yeah, and it works just fine. Of course this is mostly because I am the only real user, but hey, it works.

Anything else you would want to run?

  • Maybe some sort of push notification thingie so I can notify my phone if there's a problem
  • Maybe Grafana? But I don't care enough yet.

Was it worth it?

I could run all this in the cloud, but the hardware I am using would cost maybe 100 dollars, and monthly fees for services would quickly reach that amount and surpass it. I am already paying for the bandwidth anyway.

OTOH I have spent maybe 50 or 60 hours setting this up, and if I look at my hourly rates ... ick.

Anything weird?

I really don't understand other people's homelabs, they seem wildly overpowered.

VMs in small ARM servers

Background (I swear I get to VMs later)

I have been running a personal server at my office for a little while (see 1 and 2) where I run a number of containerized services.

Since I had it, I wanted to add a way to easily deploy my own experimental code, so I can do quick "servers" for things I am playing with.

I could write my code as, say, a Flask app, create containers for it, deploy them that way, and then add ingress rules in my gateway and ... it gets exhausting pretty fast.

What I wanted was a way to run my own Heroku, sorta. Just write a bit of code, run a command, have it be available.

After googling I found a solution that didn't require me to run a k8s cluster: faasd. The promise is:

  • Run a shell script to install faasd
  • Write your code as a function
  • Run a command to deploy
  • It all runs out of a single ingress path (except CORS of course)

So, minimal config, easy deployment, no need for constant tweaking of my gateway. All good!
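
For reference, that workflow looks roughly like this sketch (assuming faas-cli is installed and logged in; "hello" is a made-up function name, and you need a registry the faasd host can pull images from):

# Scaffold a function from the python3 template
faas-cli new hello --lang python3

# ... hack on hello/handler.py ...

# Build the image, push it, and deploy it in one step
faas-cli up -f hello.yml

# Invoke it through the gateway
echo "hi" | faas-cli invoke hello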

Except ... faasd doesn't play along with Docker. Both use containerd and CNI and other things as their backend, and faasd says that really, it likes specific versions, so it should install them, not the system, and then running Docker gets pretty dicey.

So, I could just get a second server. It's not like I don't have more small computers.

But my server has spare capacity! So I don't WANNA START A SECOND SERVER!

Also, this is often going to be toy code I have not carefully vetted for security, so it would be better if it ran in isolation.

So? I needed a VM.

Really, a VM inside a tiny ARM computer

My server is a Radxa Zero. It's smaller than a credit card. It has, however, 4 cores and 4GB of RAM, so surely there must be a way to run a VM in it that can isolate faasd and let it run its wonky versions of things while the rest of the system doesn't care.

And yes, there is!

Firecracker claims that you can start a VM fast, that it has overhead comparable to a container, and that it provides isolation! It's what Amazon uses for Lambda, so it should be enough for me.

On the other hand, Firecracker is a pain if you aren't a freaking Amazon SRE, which I am really not, but ...

Ignite is a VM manager that has a "container UX" and can manage VMs declaratively!

So I set out to run Ignite on my server. And guess what? It works!

It's packaged for Arch, which is what I am using, so I just installed it and ran a couple of scripts to create a VM:

[ralsina@pinky faas]$ cat build.sh
#!/bin/sh -x
# Create and configure a VM with faasd in it
set -e

NAME=faas

# Wait until host $1 accepts connections on port $2
waitport() {
    while ! nc -z $1 $2 ; do sleep 1 ; done
}

sudo ignite create weaveworks/ignite-ubuntu \
        --cpus 2 \
        --memory 1GB \
        --size 10GB \
        --ssh=id_rsa.pub \
        -p 8082:8081 \
        --name $NAME

sudo ignite vm start $NAME

IP=$(sudo ignite vm ls | grep $NAME | cut -f9 -d' ')
waitport $IP 22

ssh -o "StrictHostKeyChecking no" root@$IP mkdir -p /var/lib/faasd/secrets
ssh root@$IP "echo $(pass faas.ralsina.me) > /var/lib/faasd/secrets/basic-auth-password"
scp setup.sh root@$IP:
ssh root@$IP sh setup.sh

# Login
export OPENFAAS_URL=http://localhost:8082
ssh root@$IP cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login --password-stdin

# Setup test function
faas-cli store deploy figlet

echo 'Success!' | faas-cli invoke figlet
[ralsina@pinky faas]$ cat setup.sh
#!/bin/sh -x

set -e
apt update
apt upgrade -y
apt install -y git

git clone https://github.com/openfaas/faasd
cd faasd
./hack/install.sh

If you run build.sh it will create an Ubuntu-based VM with faasd installed, start it, map a port to it, set up SSH keys so you can ssh into it, and configure authentication for faasd so you can log into that too.

Does it work?

Indeed it does!

Are there any problems?

There is one and it's pretty bad.

If the server shuts down uncleanly (and that means: without explicitly shutting down the VM), the VM gets corrupted, every time. It either ends up in a "Running" state in ignite while it's dead in containerd, or the network allocation is somehow duplicated and denied, or one of half a dozen other failure states, at which point it's easier to remove everything in /var/lib/firecracker and recreate it.
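
My recovery ritual is roughly this sketch (destructive, and it assumes the VM is named faas as in the build.sh above):

# Force-remove the wedged VM; ignore errors if ignite is confused
sudo ignite rm -f faas || true

# Wipe all ignite/firecracker state
sudo rm -rf /var/lib/firecracker

# Recreate the VM from scratch
./build.sh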

Is it easy to deploy stuff?

You betcha! Here's an example from https://nombres.ralsina.me: build.sh builds it, deploy.sh deploys it, and the actual code is in the busqueda/ and historico/ folders.

It's very simple to write code, and it's very simple to deploy.
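
In case you're wondering, such a deploy.sh can be as small as this sketch (functions.yml is a made-up stack file name; faas-cli up does all the work):

#!/bin/sh -x
# Hypothetical deploy.sh: build, push and deploy every function
# described in the stack file.
set -e

export OPENFAAS_URL=http://localhost:8082
faas-cli up -f functions.yml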

If I found a better way to handle the VMs I would consider this finished.

CORS config for FaaS

Because I want to be able to deploy random Python code easily to my own server, I have set up a "Function as a Service" thing called faasd (think of it as poor people's AWS Lambda). More details on how, why, and how it turned out will come in the future. BUT: this explains how to fix the unavoidable headache CORS will give you.

Situation: you have a web app at one URL, and a function it calls at a different URL.

What will happen?

You will set up your function, test it out using curl, be happy it works, then set it up in your web app and get an error in the console about how CORS is not allowing the request.

What is CORS and why is it annoying me?

CORS is a way for a service living at a certain URL to say which other URLs are allowed to call it. So, if the app is at, say, https://nombres.ralsina.me and the function lives at https://faas.ralsina.me then the ORIGIN for the app is not the same as the ORIGIN for the function, so this is a "Cross Origin Request" and you are trying to do "Cross Origin Resource Sharing" (CORS) and the browser won't let you.

How do I fix it?

There are a number of fixes you can try, but they all come down to the same two basic approaches:

Option 1

Make it so the request is not cross-origin. To do that, move the function somehow into the same URL as the page, and Bob's your uncle.

So, just change the proxy config so nombres.ralsina.me/functions is proxied to the faasd server's /functions, change the page to use a request that is not cross-origin, and that's fixed.

I don't want to do this because I don't want to have to set up the proxy differently for each app.

Option 2

Have the function return a header that says "Access-Control-Allow-Origin: something". That "something" should be the origin of the request (in our example nombres.ralsina.me) or "*" to say "I don't care".

So, you may say "Fine, I'll just add that head­er in my re­sponse and it will work!". Oh sweet sum­mer child. That will NOT work (at least not in the case of Faas­d)

Why?

Because web browsers don't just make the request they want and then look at the headers. They do a special preflight request, which is something like "Hey, server, if I were to ask you to give me /functions/whatever from this origin, would you give me CORS permission or not?"

That request is done using the OPTIONS HTTP method, and faasd (and, to be honest, most web frameworks) will not process those by passing them to your code.

So, even if your function says CORS is allowed, you still will get CORS errors.

You can see this if you examine your browser's HTTP traffic using the developer tools. There will be an OPTIONS preflight request, and that one doesn't have the header.
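
You can also trigger the preflight by hand and see the missing header for yourself (using the example URLs from above; the path is whatever your function's is):

# Simulate the browser's preflight: an OPTIONS request announcing
# the origin and the method the real request would use
curl -i -X OPTIONS https://faas.ralsina.me/functions/whatever \
    -H "Origin: https://nombres.ralsina.me" \
    -H "Access-Control-Request-Method: POST"

If the response has no Access-Control-Allow-Origin header, the browser will refuse to make the real request.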

So, the easiest thing is to add those headers in the proxy.

So, in my case, in the proxy's nginx.conf, I had to add this in "the right place":

  add_header 'Access-Control-Allow-Origin' '*';

What the right place is will vary depending on how you have configured things. But hey, there you go.
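
After editing, something like this should confirm the header is actually being served (standard nginx administration; adjust the URL to your own function):

# Validate the new config and reload nginx
sudo nginx -t && sudo systemctl reload nginx

# Re-run the preflight; the header should now be in the response
curl -si -X OPTIONS https://faas.ralsina.me/functions/whatever \
    -H "Origin: https://nombres.ralsina.me" \
    -H "Access-Control-Request-Method: POST" | grep -i access-control-allow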

My Backup Solution

Intro

Backups are important, ok? I know that. You should know that. So, why don't most people do proper backups of their computers?

Because most of the ways to do backups are either inconvenient or useless.

So, here's the solution I have implemented that makes backups convenient and useful.

The backup tool itself

I use restic because it kicks ass. It works, it's fast, it's space efficient, and it's easy.

You just need to write a short script like this one:

#!/bin/bash -x

# Where to backup?
MOUNTDIR=/backup
BACKUPDIR=$MOUNTDIR/backup-$HOSTNAME

if [ -d $BACKUPDIR ]
then
    # Backups are password protected
    export RESTIC_PASSWORD=passwordgoeshere

    # What to backup
    restic -r $BACKUPDIR --verbose backup \
            /home/ralsina \
            --exclude ~ralsina/.cargo \
            --exclude ~ralsina/.local/share/Steam/ \
            --exclude ~ralsina/.cache \
            --exclude ~ralsina/.config/google-chrome/ \
            --exclude ~ralsina/.rustup \
            --exclude ~ralsina/.npm \
            --exclude ~ralsina/.gitbook \
            \
            /etc/systemd/system/backup.* \
            /usr/local/bin

    # Keep at most one backup for each of the last 7 days that have backups
    restic -r $BACKUPDIR forget --prune --keep-daily=7
    # Cleanup
    restic -r $BACKUPDIR prune
    # Make really sure things are stored
    sync; sync; sync; sync
fi
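
One thing the script assumes is that the repository already exists. On a brand new drive you create it once, with the same password the script uses:

# One-time repository creation on a fresh backup drive
export RESTIC_PASSWORD=passwordgoeshere
restic -r /backup/backup-$HOSTNAME init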

Backup rule 3-2-1

The 3-2-1 rule:

  • 3 copies of the backup data (1 primary, 2 copies)
  • 2 different media
  • 1 must be offsite

In my case, these are:

  • Primary backup is to a disk
  • Secondary backup is to a disk in another machine (similar script, using sftp)
  • Tertiary backup is to a pen drive (different media) I then put in my pocket (offsite).

To perform the primary and secondary backups, it's just two slightly different versions of that script (actually, it's just one script with arguments, left as an exercise for the reader).
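
If you'd rather skip the exercise: restic takes the repository as an argument and speaks sftp natively, so the parametrized version is just a sketch like this (otherbox is a made-up hostname):

#!/bin/bash -x
# Same backup, repository passed as $1, e.g.:
#   backup.sh /backup/backup-$HOSTNAME
#   backup.sh sftp:ralsina@otherbox:/backup/backup-pinky
set -e
REPO="$1"
export RESTIC_PASSWORD=passwordgoeshere

restic -r "$REPO" --verbose backup /home/ralsina
restic -r "$REPO" forget --prune --keep-daily=7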

The tertiary backup is a bit more complicated, because I wanted it to be convenient.

The Convenient Way To Backup to a Removable Drive

My user story was this:

As a person that needs an offsite backup but doesn't want to transmit all that data, I want to plug a pen drive into the machine and have it AUTOMATICALLY start backing up the data onto the pen drive.

Then, once the backup is finished, at some point, I can just unplug it and take it with me.

Let's just say that finding a way that works took me a few hours, and I am pretty sure my solution is more complicated than it needs to be. But hey, it works, so it's good enough.

This being Linux and the year being 2022 ... this solution involves systemd. And because it's systemd, it's complicated.

Automount

The first part is that we need to mount the pen drive automatically in a well-known location. For this we need two things. An automount service, so systemd will automatically mount something in /backup:

/etc/systemd/system/backup.automount

[Unit]
Description=Automount Backup

[Automount]
Where=/backup
TimeoutIdleSec=5min

[Install]
WantedBy=multi-user.target

And a mount service so it knows what to mount in /backup and how:

/etc/systemd/system/backup.mount

[Unit]
Description=Backup
Wants=backup.service
Before=backup.service

[Mount]
What=/dev/disk/by-uuid/74cac511-4d7a-4221-9c0f-e554de12fbf1
Where=/backup
Type=ext4
Options=auto

[Install]
WantedBy=multi-user.target

The interesting parts are:

  • Wants and Before: that backup.service is going to be a systemd service that actually runs the backup script. We want it to run, and to run AFTER the device is mounted.
  • Where and What: Where is the mountpoint, and What is the pen drive's UUID as shown by sudo blkid

Enable and start the automount service; no need to do anything to the mount one.
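
That is:

# Enable at boot and start right away; the mount unit gets pulled
# in automatically when something touches /backup
sudo systemctl enable --now backup.automount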

Then of course we need the backup service itself. Just a "oneshot": when it's started, it runs the backup script:

/etc/systemd/system/backup.service

[Unit]
Description=Backup
Requires=backup.mount
After=backup.mount

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

[Install]
WantedBy=multi-user.target

Enable but don't start this service. Since it's "Wanted" by the mount, that means that when the device is effectively mounted, the backup will start.
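
In other words:

# Enabled so the mount can pull it in, but never started by hand
sudo systemctl enable backup.service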

OR THAT'S WHAT IT WOULD DO IF THINGS MADE SENSE.

Sadly, the device is only mounted when, after being inserted, something tries to use the mountpoint. So, with these three services installed, nothing happens unless, after you plug in the pen drive, you go and do something like ls /backup, which triggers the mount, which triggers the backup script.

So, how does one fix that? No idea. My workaround was to add TWO MORE SERVICES, so that ls /backup runs every minute.

/etc/systemd/system/backup_try.timer

[Unit]
Description=Try to run backup

[Timer]
OnUnitActiveSec=1min

[Install]
WantedBy=timers.target

/etc/systemd/system/backup_try.service

[Unit]
Description=Trigger Backup

[Service]
Type=oneshot
ExecStart=/bin/ls /backup

[Install]
WantedBy=multi-user.target
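
Enable the timer and, because OnUnitActiveSec only counts from the service's last activation, kick the service once by hand so the timer has a reference point (that's my reading of systemd.timer(5), anyway):

sudo systemctl daemon-reload
sudo systemctl enable --now backup_try.timer

# Give OnUnitActiveSec its first reference point
sudo systemctl start backup_try.service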

And with that, yes, I can just plug in the pen drive when I get to the office in the morning and unplug it later, knowing there is a backup in it.

The cases I built for my mini servers

I have written a couple of posts about my Raspberry Pi home servers. And people seem to like the cases I 3D-printed for them.

Well, if you liked them, here they are.

For a Raspberry Pi 3-based server you need:

  • The case itself
  • Caps to lock each disk in its slot: 1 and 2
  • A cap to lock the Pi in its slot

For a Raspberry Pi 4-based server you need:

  • The case itself
  • Caps to lock each disk in its slot: 1 and 2
  • A cap to lock the Pi in its slot

All these are just the following designs from Thingiverse slapped together:

  • Pi 3 sleeve case: here
  • Pi 4 sleeve case: here
  • HDD case: here

They work better (or at all!) if you use a powered USB hub. There is no room in the case for it, just buy one you like and glue it to the side :-)

Case with a powered USB hub glued to it

