
Ralsina.Me — Roberto Alsina's website

Posts about programming

Creating a demo site for a service

Recently I wrote an app called Grafito to view systemd/journald logs (those are the logs in most Linux systems) and be able to filter them, see details of a specific entry, etc.

One problem with this kind of tool is that I can't just open it to the world, because then everyone would be able to see the logs of a real machine. While that is usually not a problem because the information is not terribly useful (sure, you will know what's running, big whoops), it may display a dangerous piece of data which I may not want to expose.

So, the right way to do this is to create a demo site. It could show real data from a throwaway system (like a virtual machine) or, as I did, show fake data.

To show fake data you can use a faker. Fakers are fun! I am using askn/faker, which is a Crystal one. Fakers let you ask for, you guessed it... fake data.

For example, you can ask for an address or a credit card number and it will give you something random that matches the obvious patterns of what you asked for.

One I love is to ask for say_something_smart, which gives you smart things!

Faker::Hacker.say_something_smart #=> 
"Try to compress the SQL interface, maybe it will program the 
back-end hard drive!"

So, I wrote a function that works like journalctl but is totally fake. The source code is just a quick hack.
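Something in that spirit would look like this (a minimal sketch: FakeJournalData and SAMPLE_UNIT_NAMES appear in the real snippet below, but the fake_entry helper, the require, and the unit names are my guesses, not Grafito's actual source):

require "faker"

module FakeJournalData
  SAMPLE_UNIT_NAMES = ["nginx.service", "sshd.service", "cron.service"]

  # Build one entry shaped roughly like journalctl's JSON output.
  def self.fake_entry
    {
      "__REALTIME_TIMESTAMP" => (Time.utc - rand(3600).seconds).to_unix_ms.to_s,
      "_SYSTEMD_UNIT"        => SAMPLE_UNIT_NAMES.sample,
      "PRIORITY"             => rand(0..7).to_s,
      "MESSAGE"              => Faker::Hacker.say_something_smart,
    }
  end
end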

Then, I used a conditional compile flag to route the info requests into that fake function:

{% if flag?(:fake_journal) %}
  require "./fake_journal_data" # For fake data generation
{% end %}
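
And then, inside Journalctl.known_service_units: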

{% if flag?(:fake_journal) %}
    Log.info { "Journalctl.known_service_units: Using FAKE service units." }
    fake_units = FakeJournalData::SAMPLE_UNIT_NAMES.compact.uniq.sort
    Log.debug { "Returning #{fake_units.size} fake service units." }
    return fake_units
{% else %}
    # Return actual good stuff
{% end %}

And that's it! If I compile with -Dfake_journal it builds a binary that uses only fake data. Then I ran it on my home server and voilà: a demo!
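For reference, the build command is something like this (the exact source path is my guess):

crystal build --release -Dfake_journal src/grafito.cr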

See it in action! grafito-demo.ralsina.me

Revisiting the RPU (Ralsina Programmatic Universe)

A while back I noticed I had started many projects using the Crystal language, because it really made me want to code more.

Of course those projects are not standalone: many are libraries or tools used by other projects, some are forks of other people's tools to which I made minor changes, some are websites, and so on.

Well, I semi-automated the generation of a chart showing how things connect. First, this is the chart:

RPU Chart

And here is the hacky Python script I used to generate it via Mermaid (it assumes you have all your repos cloned, and will be useful for NOBODY):

from glob import glob

print("graph LR")
sites = [
    "faaso.ralsina.me",
    "nicolino.ralsina.me",
    "nombres.ralsina.me",
    "ralsina.me",
    "tapas.ralsina.me",
]

# Sites are drawn with Mermaid's asymmetric "flag" shape.
for site in sites:
    print(f"  {site}>{site}]")

nicolino_sites = [
    "faaso.ralsina.me",
    "nicolino.ralsina.me",
]
faaso_sites = [
    "nombres.ralsina.me",
    "tapas.ralsina.me",
]
caddy_sites = [
    "faaso.ralsina.me",
    "nicolino.ralsina.me",
    "ralsina.me",
]

planned = [
    ("nicolino", "markd"),
    ("nicolino", "cr-wren"),
    ("crycco", "libctags.cr"),
    ("crycco", "crystal-ctags"),
]

hace_repos = [
    "nicolino",
    "tartrazine",
    "crycco",
    "markterm",
    "sixteen",
]

for repo in glob("*/"):
    repo = repo.strip("/")
    if repo == "forks":
        continue
    print(f"  {repo}(({repo}))")

for repo in glob("forks/*/"):
    repo = repo.split("/")[-2]
    print(f"  {repo}([{repo}])")

for s in nicolino_sites:
    print(f"  {s} ---> nicolino")
for s in faaso_sites:
    print(f"  {s} ---> faaso")
for s in caddy_sites:
    print(f"  {s} ---> caddy-static")


# Scan each repo's shard.yml for dependencies on my own repos (or markd)
# and draw an edge from the repo to each dependency.
for shard in glob("**/shard.yml"):
    repo = shard.split("/")[-2]
    for line in open(shard).readlines():
        if "ralsina/" in line or "markd" in line:
            dest = line.split(":")[-1].split("/")[-1].strip()
            if not dest:
                continue
            print(f"  {repo} ---> {dest}")

for a, b in planned:
    print(f"  {a} -.-> {b}")

for repo in hace_repos:
    print(f"  {repo} ---> hace")

Ideas for programs that don't exist: 3

This is an occasional series of posts where I will share ideas for programs that don't exist, but should. The goal is to inspire developers to create useful tools that can make our lives easier. Or, more likely, to remind me about these ideas so I can create them myself. Or even more likely, to just get them out of my head so I can stop thinking about them.

Idea 3: A program that does backups the way I want

I don't want to configure things on many computers. I want to configure things once, in one computer, and have backups for everything I care about, done properly.

There is a great backup program called restic that does backups RIGHT. It is fast, it is secure, it is easy to use, and it has a lot of features.

There is a frontend for it called backrest that does a lot of things right. It separates the concept of a repo, which is what you back up to, from a plan, which is what and when you back up.

But I want more: I want to separate the plan (when and how) from the source (where the data is).

I want it to work like this:

  1. Create a repo like "this folder here on this computer is a repo called 'foo' and it has a password and whatnot"
  2. Create a plan like "every day at 3 AM, backup to 'foo', keep 30 days of backups there, and every week at 2 AM on Sundays, backup to 'bar' which is a remote repo and keep 10 weeks of backups there"
  3. Create a source like "the /home/ralsina folder in my notebook"
  4. Create a backup which is a combination of a source and a plan, like "backup the /home/ralsina folder in my notebook to 'foo' every day at 3 AM" (see the sketch after this list)
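A minimal sketch of those four concepts as Crystal types (all names and fields here are made up by me to illustrate the separation, not taken from any real tool):

# One destination you back up to.
record Repo, name : String, host : String, path : String, password : String
# One schedule entry: when to run, where to, and how long to keep it.
record Rule, schedule : String, repo : String, keep : String
# A plan is just a named set of rules.
record Plan, name : String, rules : Array(Rule)
# Where the data lives.
record Source, host : String, path : String
# A backup pairs a source with a plan.
record Backup, source : Source, plan : Plan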

Further, I want this to just work on all my computers as long as I have ssh correctly configured to allow it.

My backup controller should log into whatever computer the repo is in and install restic there. Then log into the source computer and install restic there. Then create the repo, and when the plan says it's time to back up, it should log into the source computer and run the backup command there, and then log into the repo computer and run something there if it needs running.
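The mechanics could be as simple as shelling out to ssh; a sketch under that assumption (the hostnames, paths, and restic invocation are examples, not a real setup):

# Run a command on a remote host over ssh and report whether it worked.
def run_remote(host : String, command : String) : Bool
  Process.run("ssh", [host, command], output: STDOUT, error: STDERR).success?
end

# E.g., back up the source into the remote "foo" repo:
run_remote("notebook", "restic -r sftp:backupbox:/repos/foo backup /home/ralsina")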

Then I want it to keep logs and notify me via gotify or something if something goes wrong.

Is that too much to ask?

Ideas for programs that don't exist: 2

This is a new occasional series of posts where I will share ideas for programs that don't exist, but should. The goal is to inspire developers to create useful tools that can make our lives easier. Or, more likely, to remind me about these ideas so I can create them myself. Or even more likely, to just get them out of my head so I can stop thinking about them.

Idea 2: A nice web frontend for journald

I do some self hosting. It's tempting, when you self-host, to run things as if they were a company's production setup. So, there are some who run multiple large servers on kubernetes and so on.

Not me, I run a single SBC with a bunch of dockerized services.

So, how do I see logs if some­thing goes wrong?

Well, I log to the system's journal, so I can use journalctl to see the logs.

It's just this bit of YAML in your Compose definition:

  logging:
    driver: "journald"
    options:
      tag: "whatever"

That tags the logs from that container with "whatever". So, I can run:

journalctl -t whatever

This tool, journalctl, is quite nice: you can filter by date, grep for things, follow the live logs, and so on. But it's a command line tool, which I like.
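For example (these are all real journalctl flags; the tag is the one set above):

journalctl -t whatever --since today   # only today's entries
journalctl -t whatever -g error        # grep the messages (needs a reasonably recent systemd)
journalctl -t whatever -f              # follow the live logs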

But the "do it like a real prod thing" crowd uses logstash or somesuch, and has a web dashboard for this kind of thing.

Well, I should have one of those too, but backed by journalctl.

There is one that comes with systemd, but it's sort of crappy, and there is no reason for it to be. It's running on the same server where the logs are, it's simple, and it would be a nice little project to do.


Contents © 2000-2025 Roberto Alsina