
Ralsina.Me — Roberto Alsina's website


VMs in small ARM servers

Background (I swear I get to VMs later)

I have been running a personal server at my office for a little while (see 1 and 2) where I run a number of containerized services.

Ever since I got it, I have wanted a way to easily deploy my own experimental code, so I can spin up quick "servers" for things I am playing with.

I could write my code as, say, a Flask app, create containers for it, deploy them that way, then add ingress rules in my gateway and ... it gets exhausting pretty fast.

What I wanted was a way to run my own Heroku, sorta. Just write a bit of code, run a command, have it be available.

After googling I found a solution that didn't require me to implement a k8s cluster: faasd. The promise is:

  • Run a shell script to install faasd
  • Write your code as a function
  • Run a command to deploy
  • It all runs out of a single ingress path (except CORS, of course)

So: minimal config, easy deployment, no constant tweaking of my gateway. All good!

Except ... faasd doesn't play along with Docker. Both use containerd and CNI and other things as their backend, and faasd says that, really, it likes specific versions of those, so it should install them, not the system, and then running Docker gets pretty dicey.

So, I could just get a second server. It's not like I don't have more small computers.

But my server has spare capacity! So I don't WANNA START A SECOND SERVER!

Also, this is often going to be toy code I have not carefully vetted for security, so it would be better if it ran in isolation.

So? I needed a VM.

Really, a VM inside a tiny ARM computer

My server is a Radxa Zero. It's smaller than a credit card. It has, however, 4 cores and 4GB of RAM, so surely there must be a way to run a VM on it that can isolate faasd and let it run its wonky versions of things while the rest of the system doesn't care.

And yes, there is!

Firecracker claims that you can start a VM fast, that it has overhead comparable to a container, and that it provides isolation! It's what Amazon uses for Lambda, so it should be enough for me.

On the other hand, Firecracker is a pain if you aren't a freaking Amazon SRE, which I am really not, but ...

Ignite is a VM manager that has a "container UX" and can manage VMs declaratively!

So I set out to run Ignite on my server. And guess what? It works!

It's packaged for Arch, which is what I am using, so I just installed it and ran a couple of scripts to create a VM:

[ralsina@pinky faas]$ cat
#!/bin/sh -x
# Create and configure a VM with faasd in it
set -e

NAME=faas

waitport() {
    while ! nc -z $1 $2 ; do sleep 1 ; done
}

sudo ignite create weaveworks/ignite-ubuntu \
        --cpus 2 \
        --memory 1GB \
        --size 10GB \
        --ssh \
        -p 8082:8081 \
        --name $NAME

sudo ignite vm start $NAME

IP=$(sudo ignite vm ls | grep faas | cut -f 9 -d ' ')
waitport $IP 22

ssh -o "StrictHostKeyChecking no" root@$IP mkdir -p /var/lib/faasd/secrets
ssh root@$IP "echo $(pass faas) > /var/lib/faasd/secrets/basic-auth-password" # "faas" as the pass entry name is a guess
scp root@$IP:
ssh root@$IP sh

# Login
export OPENFAAS_URL=http://localhost:8082
ssh root@$IP cat /var/lib/faasd/secrets/basic-auth-password | faas-cli login --password-stdin

# Setup test function
faas-cli store deploy figlet

echo 'Success!' | faas-cli invoke figlet
[ralsina@pinky faas]$ cat
#!/bin/sh -x

set -e
apt update
apt upgrade -y
apt install -y git

git clone
cd faasd

If you run it, it will create an Ubuntu-based VM with faasd installed, start it, map a port to it, set up SSH keys so you can ssh into it, and configure authentication for faasd so you can log into that too.

Does it work?

Indeed it does!

Are there any problems?

There is one, and it's pretty bad.

If the server shuts down uncleanly (and that means: without explicitly shutting down the VM first), the VM gets corrupted, every time. It either ends up in a "Running" state in ignite while it's dead in containerd, or the network allocation is somehow duplicated and denied, or one of half a dozen other failure states, at which point it's easier to remove everything in /var/lib/firecracker and recreate it.

Is it easy to deploy stuff?

You betcha! Here's an example: if I run it, it builds and deploys it; the actual code is in the busqueda/ and historico/ folders.

It's very simple to write code, and it's very simple to deploy.
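For context, deployments with faas-cli are driven by a small stack file. A hypothetical one for functions like those might look more or less like this (the gateway, image names, and handlers here are made up for illustration, not my real config):

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://localhost:8082
functions:
  busqueda:
    lang: python3
    handler: ./busqueda
    image: ralsina/busqueda:latest
  historico:
    lang: python3
    handler: ./historico
    image: ralsina/historico:latest
```

With something like that in place, a single faas-cli up builds the images, pushes them, and deploys the functions.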

If I found a better way to handle the VMs, I would consider this finished.

CORS config for FaaS

Because I want to be able to deploy random Python code easily to my own server, I have set up a "Function as a Service" thing called faasd (think of it as poor people's AWS Lambda). More details on the how, the why, and how it turned out will come in the future. BUT: this explains how to fix the unavoidable headache CORS will give you.


  • The faasd server runs on some machine, which is proxied by an Nginx server available at https://faasd.ralsina.me

  • Apps are just HTML pages somewhere inside either https://ralsina.me or some other similar domain.

What will happen?

You will set up your function, test it out using curl, be happy it works, then set it up in your web app and get an error in the console about how CORS is not allowing the request.

What is CORS and why is it annoying me?

CORS is a way for a service living at a certain URL to say which other URLs are allowed to call it. So, if the app is at, say, https://nombres.ralsina.me and the function lives at https://faas.ralsina.me, then the ORIGIN for the app is not the same as the ORIGIN for the function, so this is a "Cross Origin Request", you are trying to do "Cross Origin Resource Sharing" (CORS), and the browser won't let you.
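The "same origin" check is mechanical: two URLs share an origin only when scheme, host, and port all match. A small sketch of that rule (the function name is mine, just for illustration):

```python
from urllib.parse import urlsplit

def is_cross_origin(page_url, request_url):
    """True when the two URLs differ in scheme, host or port."""
    def origin(url):
        parts = urlsplit(url)
        # Fill in the default port when the URL doesn't carry one
        port = parts.port or (443 if parts.scheme == "https" else 80)
        return (parts.scheme, parts.hostname, port)
    return origin(page_url) != origin(request_url)

# The app and the function from the example above: different hosts, so cross-origin
print(is_cross_origin("https://nombres.ralsina.me/", "https://faas.ralsina.me/function/x"))  # True
# Same scheme and host, only the path differs: same origin
print(is_cross_origin("https://nombres.ralsina.me/", "https://nombres.ralsina.me/fn/x"))  # False
```

Note that the path never enters into it, which is exactly why Option 1 below works.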

How do I fix it?

There are a number of fixes you can try, but they all come down to the same two basic approaches:

Option 1

Make it so the request is not cross-origin. To do that, move the function somehow into the same origin as the page, and Bob's your uncle.

So, just change the proxy config so nombres.ralsina.me/functions is proxied to the faasd server's /functions, change the page to use a request that is not cross-origin, and that's fixed.

I don't want to do this because I don't want to have to set up the proxy differently for each app.

Option 2

Have the function return a header that says "Access-Control-Allow-Origin: something". That "something" should be the origin of the request (in our example, nombres.ralsina.me) or "*" to say "I don't care".

So, you may say "Fine, I'll just add that header in my response and it will work!". Oh, sweet summer child. That will NOT work (at least not in the case of faasd).

Because web browsers don't just make the request they want and then look at the headers. They do a special preflight request, which is something like "Hey, server, if I were to ask you to give me /functions/whatever from this origin, would you give me CORS permission or not?"

That request is done using the OPTIONS HTTP method, and faasd (and, to be honest, most web frameworks) will not process those by passing them to your code.

So, even if your function says CORS is allowed, you will still get CORS errors.

You can see this if you examine your browser's HTTP traffic using the developer tools. There will be an OPTIONS preflight request, and that one doesn't have the header.

So, the easiest thing is to add those headers in the proxy.

So, in my case, in the proxy's nginx.conf, I had to add this in "the right place":

  add_header 'Access-Control-Allow-Origin' '*';

What the right place is will vary depending on how you have configured things. But hey, there you go.
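To also answer the preflight itself from the proxy, one possible shape is something like this (a sketch only; the location path and upstream are assumptions, adjust them to your own config):

```nginx
location /function/ {
    # Answer OPTIONS preflights ourselves, since faasd won't pass them to your code
    if ($request_method = OPTIONS) {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Content-Type';
        return 204;
    }
    # Normal requests get the header added to faasd's response
    add_header 'Access-Control-Allow-Origin' '*';
    proxy_pass http://localhost:8082;
}
```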

My backup solution


Backups are important, OK? I know it, you should know it. So why do most people not have decent backups of their computers?

Because most ways of doing backups are inconvenient or useless.

So here is the solution I implemented, which makes my backups useful and convenient.

The backup tool itself

I use restic because it's good. It works, it's fast, it's space-efficient, and it's easy.

All you have to do is write a script like this one:

#!/bin/bash -x

# Backup to where?

if [ -d $BACKUPDIR ]
then
    # Password-protected backups
    export RESTIC_PASSWORD=passwordgoeshere

    # Backup what?
    restic -r $BACKUPDIR --verbose backup \
            /home/ralsina \
            --exclude ~ralsina/.cargo \
            --exclude ~ralsina/.local/share/Steam/ \
            --exclude ~ralsina/.cache \
            --exclude ~ralsina/.config/google-chrome/ \
            --exclude ~ralsina/.rustup \
            --exclude ~ralsina/.npm \
            --exclude ~ralsina/.gitbook \
            /etc/systemd/system/backup.*

    # Keep one backup per day for the last 7 days that have backups
    restic -r $MOUNTDIR/backup-pinky forget --prune --keep-daily=7
    # Cleanup
    restic -r $MOUNTDIR/backup-pinky prune
    # Make sure everything is on disk
    sync; sync; sync; sync
fi

The 3-2-1 rule of backups

The 3-2-1 rule says:

  • 3 copies of the data (1 primary backup, two secondary ones)
  • 2 different storage media
  • 1 offsite

In my case, it goes like this:

  • The primary backup is to disk
  • The secondary backup is to a disk on another machine (a similar script, using sftp)
  • The tertiary backup is to a pen drive (different medium) which then goes in my pocket (offsite)

The primary and secondary backups are done with two versions of that script (actually the same script, with arguments).

The tertiary backup is a bit more complicated, because I want it to be convenient.

The convenient way to back up to removable media

This is my "user story":

As someone who wants an offsite backup but doesn't feel like transmitting all that data over the network, I want to plug a pen drive into the machine to be backed up and have the data AUTOMATICALLY backed up to the pen drive.

Then, whenever the backup eventually finishes, I can unplug it and take it with me.

Let's just say that finding a way to do that took me several hours, and I am fairly sure my solution is more complicated than it needs to be. But hey, it works, so it's fine.

This being Linux in the year 2022 ... this solution involves systemd. And because it's systemd, it's complicated.


The first thing is to mount the pen drive automatically at a known location. For that you need two things. An automount unit, so that systemd mounts something at /backup:


Description=Automount Backup



And a mount unit, so it knows what gets mounted at /backup and how:





The interesting parts are:

  • Wants and Before: that backup.service is a systemd service that actually runs the backup script. We want it to run, and to run AFTER the pen drive has been mounted.
  • Where and What: "Where" is where it gets mounted, and "What" is the UUID of the pen drive as shown by sudo blkid
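A minimal sketch of what those two units can look like (the UUID is a placeholder for whatever sudo blkid shows on your system; note that systemd requires the file names to match the /backup mountpoint, hence backup.automount and backup.mount):

```ini
# /etc/systemd/system/backup.automount
[Unit]
Description=Automount Backup

[Automount]
Where=/backup

[Install]
WantedBy=multi-user.target
```

```ini
# /etc/systemd/system/backup.mount
[Unit]
Description=Mount Backup
Wants=backup.service
Before=backup.service

[Mount]
What=/dev/disk/by-uuid/0000-0000
Where=/backup
```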

You have to enable and start the automount unit; nothing needs to be done with the mount unit.

Then, of course, comes the backup service itself. It's a "oneshot": when it starts, it runs the backup script:





You have to enable it but not start it. Since it's in the mount unit's "Wants", the backup runs whenever the device gets mounted.
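As a sketch, that oneshot service can be as small as this (the script path is a placeholder for wherever you keep the backup script shown earlier):

```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Run backup
After=backup.mount

[Service]
Type=oneshot
ExecStart=/usr/local/bin/run-backup.sh
```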


Unfortunately, the device only gets mounted when, after it's inserted, someone tries to use the mountpoint. So with these units in place nothing happens until, after plugging in the pen drive, you go and do something like ls /backup, which triggers the mount, which in turn starts the backup script.

And how do you fix that? No idea. My workaround was to add ANOTHER TWO UNITS, so that ls /backup runs once per minute.


Description=Try to run backup




Description=Trigger Backup

ExecStart=/bin/ls /backup
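Those two fragments fit a timer plus a oneshot service; reconstructed as a sketch (the unit names here are made up), they would look something like:

```ini
# /etc/systemd/system/try-backup.timer
[Unit]
Description=Try to run backup

[Timer]
OnCalendar=minutely
Unit=try-backup.service

[Install]
WantedBy=timers.target
```

```ini
# /etc/systemd/system/try-backup.service
[Unit]
Description=Trigger Backup

[Service]
Type=oneshot
ExecStart=/bin/ls /backup
```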


And with that, yes, I can plug in the pen drive when I get to the office in the morning and unplug it later, knowing there is a backup from today inside.

Radxa Zero!

The what?

Raspberry Pis are useful, good, and cheap. Except for one detail: it's impossible to buy them.

There is no stock of the Raspberry Pi 4, no stock of the Raspberry Pi Zero 2W, no stock of anything, except the Pi Pico (which is a microcontroller) and the Pi 400 (which is not what I want).

So I looked for alternatives, and found the good people of Allnet China, who sell the Rock Pi line.

What are the Rock Pis like?

  • They are ... similar to the Raspberries
  • In hardware, in some cases, they are better!
  • In software ... they are a bit rougher.
  • In compatibility they are more complicated.


So I bought myself a pair of Radxa Zeros. They are like a Pi Zero ... but the CPU is more or less like a Pi 4's????

I had a small problem paying for them initially, but they solved it for me immediately and upgraded my shipping to FedEx, so they arrived in two weeks and I paid no customs.

Mind you, FedEx doesn't warn you that when they deliver they are going to charge you "something", and they don't tell you how much, or that you have to pay in cash.

In my case it was $5400, and they didn't tell me in time, so I had to wait an extra day.

And how do they run?

Beautifully. The CPU is fast. Manjaro runs great (once I figured out how to install it).

It has eMMC, so it doesn't need an SD card!

One problem is that I haven't been able to get my 1920x480 display working (there is only one mention on the Internet of someone else trying; it didn't work for them either).

Other than that? Excellent.

I am migrating my server Pinky to a Radxa, since I have it and it uses the same power as the Pi 3.

While I have read several warnings about them overheating, they don't seem to go above 48 degrees in normal use.

In general, I recommend them, but everyone will have to decide how much they want to spend and what for.

The cases I built for my mini servers

I have written a couple of posts about my Raspberry Pi home servers. And people seem to like the cases I 3D-printed for them.

Well, if you liked them, here they are.

For a Raspberry Pi 3-based server you need:

  • The case itself
  • Caps to lock each disk in its slot: 1 and 2
  • A cap to lock the Pi in its slot

For a Raspberry Pi 4-based server you need:

  • The case itself
  • Caps to lock each disk in its slot: 1 and 2
  • A cap to lock the Pi in its slot

All these are just the following designs from Thingiverse slapped together:

  • Pi 3 sleeve case: here
  • Pi 4 sleeve case: here
  • HDD case: here

They work better (or at all!) if you use a powered USB hub. There is no room in the case for it; just buy one you like and glue it to the side :-)

Case with a USB hub glued to it

Contents © 2000-2023 Roberto Alsina