Posts about programming

Coffee As a Service Architecture

Intro

Today I was in a meeting with recruiters (yes, really) because they want to be better at technical recruiting and they had the idea that talking to me would help them (oh, sweet summer children).

A nice time was had by all (I hope) and at one point I was asked about what architecture was, and more specifically, about the difference between microservices and a monolith.

Which I tried to explain using what I had at hand: coffee cups, sugar dispensers, a spoon and so on. It didn't quite work out but I kept thinking about it on my way home and ... let's try again.

What is Architecture?

Architecture, when it comes to software, can be defined in many ways, but one way I like is to say that architecture involves:

  • What the components of your system are
  • How they are done
  • How they talk to each other

There is a lot more, but you start with that, and that is more or less enough to explain monoliths and microservices.

The Coffee Service

One thing of massive importance about systems is that they are meant to do something. They exist for a purpose. So, let's suppose the purpose of our system is to make coffee and put it in a cup.

We can call the cup the "coffee client" and whatever we use to make the coffee is the "coffee system" or "coffee service".

So, assuming you have a can full of coffee beans and a cup, how do you make coffee?

The Coffee Monolith

This is my very own coffee machine. Not only is it monolith-shaped, it's functionally monolithic (it's also large enough to deserve its own table, as you can see).

It has two buckets on top. In one you put water, in the other you put coffee beans. Then, you put a cup under the spigot and press a button or two.

It will:

  • Grind the beans
  • Put the ground coffee in the right place and apply the "right" pressure
  • Heat the water to the "right" temperature
  • Run water through the coffee grounds
  • Pour the coffee into the cup
  • Discard the grounds into a hidden deposit

Sounds awesome, right? It is!

It takes all of 30 seconds to go from coffee beans to a nice cup of coffee! It tastes good!

And it's important to keep that in mind. IT IS GREAT.

Monoliths, when they are done correctly and you are not expecting anything outside their operating parameters, are awesome.

The problem with monoliths is not that they can't be done right, it's that it's hard to do them right, and that even when you do get it right, in our industry the meaning of "right" is not fixed.

So, because the whole point is to ride this analogy into the ground, let's consider all the things about this awesome machine.

Flexibility

It grinds the coffee. What happens if you want it ground finer? Or coarser?

It turns out that if you have the right tool you can adjust the mill's output (it's not in the manual).

In a microservice-based coffee maker I would replace the grinder.

How about water temperature?

It has three settings. Want anything else? No luck.

In a microservice-based coffee service I would just use an adjustable kettle.

How about the amount of coffee per cup?

It has three settings. Want anything else? No luck.

In microservice-coffee I would just use as much coffee as I wanted.

How about changing the bean variety between cups?

The bean hopper takes half a pound of beans. It's not easy to get them out. So, no.

In microservice-coffee heaven I could have multiple hoppers providing beans of all varieties and just connect to the one I want today!

Cup size?

It does two sizes (but you can reprogram those sizes).

In microservice-coffee I would just pour as much water as I liked.

A monolith has the flexibility its designers thought of adding, no more, no less. And changing it is ... not trivial.

I could use a vacuum cleaner to remove the beans from the hopper and change varieties. I would consider that a hack. I have also done it. I regret nothing.

Unused Features

It has a thing that lets you set up a credit system for coffee cups, which I will never use. A milk foamer I use once a week. Why? Because of "we may need this and it's hard to add it later, so let's just do it from the beginning" reasoning.

Sometimes yes, it's useful (cappuccino!), but sometimes it's just something I paid for and will never use (coffee credits!).

In a microservice architecture I would just get a new milk foamer, use both for a while and then keep using the one I like.

Hard to Improve

How do I add a better foaming thingie?

By buying one and putting it on the table.

How do I add a more flexible coffee grinder?

I can't because this machine is incompatible with pre-ground coffee. There is a newer, more expensive model that can take that but this one? You need to throw it away.

Modifying a monolithic system is difficult because the pieces are tightly coupled. I can't use a separate grinder because the system requires the coffee grounds to arrive via a specific internal duct at a specific point in the coffee-making cycle; there is just no way to insert my grind-o-matic-3000 in there without a saw and duct tape.

In a modular system I would unplug the grinder and insert a compatible-but-different grinder; in a microservice architecture I would just use whatever grinder I wanted, put the coffee grounds in a message, and have the next piece in the system pick them up from there.
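To make the message idea a bit more concrete, here is a tiny toy sketch in Python (made-up names, standard library only, nothing to do with a real coffee machine): the grinder publishes its output as a message and the next piece picks it up, so either side can be swapped without the other one noticing.

import queue

# Stand-in for a real message broker; any grinder and any brewer can share it.
grounds = queue.Queue()

def grinder_service(beans, fineness):
    # Publish the ground coffee as a message instead of pushing it down a fixed duct.
    grounds.put({"beans": beans, "fineness": fineness, "grams": 18})

def brewer_service():
    # The next piece in the system picks the grounds up from here.
    g = grounds.get()
    return "espresso from %sg of %s (%s grind)" % (g["grams"], g["beans"], g["fineness"])

grinder_service("arabica", "fine")
print(brewer_service())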

Expensive

This coffee machine is expensive. It's much more expensive than buying a grinder, a coffee machine, a kettle and a milk foamer.

What it provides in exchange for the extra money (and reduced flexibility and so on) is performance. I don't boil water, I don't grind coffee, I don't pour, I just press a damned button and enjoy coffee.

Outsourcing

You can buy pre-ground coffee and effectively outsource that part of the process to some external provider.

I can't! I am doomed to grind my own coffee forever.

Maintenance

I have a lubrication schedule, or else my expensive machine will break.

I have to disinfect the coffee ground bin or else it will have maggots.

I have to empty the water waste tray before it overflows.

I have to have a thing to dump the bits of dirty water it uses to clean itself when it turns on/off.

I have to buy special acid to periodically remove scale from its innards or it will stop working. That costs actual money and takes half an hour.

I need to clean up coffee crud from all the internal springs, levers and thingies every couple of weeks.

Now, you, readers with normal coffee making things? How is your coffee machine maintenance routine? What, you don't have one? Thought so.

Conclusion

So, that's why nowadays most people prefer to pay the performance penalty of a microservice architecture instead of using an awesome monolith.

This is not exhaustive; there is still separation of concerns, encapsulation, rigidity of contracts and a lot more, but it should be convincing enough without being dogmatic :-)

GitHub and GitLab for newbies

I wrote a git tutorial for those who don't know git where I tried to explain how to use Git for version control on your local machine.

Of course those of you who know about these things already know that half the fun of git is not using it locally, but using a server that can centralize the development and allow collaboration.

Well, good news! I just wrote the chapter where I cover that part!

Read and let me know what you think:

Git Hosting

Playing With Picolisp (Part 1)

I want to learn new languages. But new as in "new to me", not new as in "created last week". So I decided to play with the granddaddy of all cool languages, LISP. Created in 1958, it's even older than I am, which is good because it's experienced.

One "problem" with LISP is that there are a million LISPs. You can use Scheme or Common Lisp, or Emacs' Lisp, or a bazillion others. I wanted something simple so it was supposed to be Scheme... but a few days ago I ran into something called Picolisp and it sounded so cool.

Read more…

Playing with Nim

A few days ago I saw a mention on Twitter of a language called Nim.

And... why not. I am a bit stale in my programming language variety. I used to be fluent in a dozen; now I do 80% python, 10% go, some JS and almost nothing else. Because I learn by doing, I decided to do something. Because I did not want a problem I did not know how to solve to get in the way of the language, I decided to reimplement the example for the python book I am writing: a text layout engine that outputs SVG, based on harfbuzz, freetype2 and other things.

This is a good learning project for me, because a lot of my coding is gluing things together; I hardly ever do things from scratch.

So, I decided to start in somewhat random order.

Preparation

I read the Nim Tutorial quickly. I ended up referring to it and to Nim by Example a lot. While trying out a new language one is bound to forget syntax. It happens.

Wrote a few "hello world" 5-line programs to see that the ecosystem was installed correctly. Impression: builds are fast-ish. They can get actually fast if you start using tcc instead of gcc.

SVG Output

I looked for libraries that were the equivalent of svgwrite, which I am using on the python side. Sadly, such a thing doesn't seem to exist for nim. So, I wrote my own. It's very rudimentary, and surely the nim code is garbage for experienced nim coders, but I ended up using the xmltree module of nim's standard library and everything!

import xmltree
import strtabs
import strformat

type
        Drawing* = tuple[fname: string, document: XmlNode]

proc NewDrawing*(fname: string, height:string="100", width:string="100"): Drawing =
        result = (
            fname: fname,
            document: <>svg()
        )
        var attrs = newStringTable()
        attrs["baseProfile"] = "full"
        attrs["version"] = "1.1"
        attrs["xmlns"] = "http://www.w3.org/2000/svg"
        attrs["xmlns:ev"] = "http://www.w3.org/2001/xml-events"
        attrs["xmlns:xlink"] = "http://www.w3.org/1999/xlink"
        attrs["height"] = height
        attrs["width"] = width
        result.document.attrs = attrs

proc Add*(d: Drawing, node: XmlNode): void =
        d.document.add(node)

proc Rect*(x: string, y: string, width: string, height: string, fill: string="blue"): XmlNode =
        result = <>rect(
            x=x,
            y=y,
            width=width,
            height=height,
            fill=fill
        )

proc Text*(text: string, x: string, y: string, font_size: string, font_family: string="Arial"): XmlNode =
        result = <>text(newText(text))
        var attrs = newStringTable()
        attrs["x"] = x
        attrs["y"] = y
        attrs["font-size"] = font_size
        attrs["font-family"] = font_family
        result.attrs = attrs

proc Save*(d:Drawing): void =
   writeFile(d.fname,xmlHeader & $(d.document))

when isMainModule:
        # Example size; width and height were not defined in the original snippet.
        let (width, height) = (30, 30)
        var d = NewDrawing("foo.svg", width=fmt"{width}cm", height=fmt"{height}cm")
        d.Add(Rect("10cm","10cm","15cm","15cm","white"))
        d.Add(Text("HOLA","12cm","12cm","2cm"))
        d.Save()

While writing this I ran into a few issues and saw a few nice things:

To build an svg tag, you can use <>svg(attr=value), which is delightful syntax. But what happens if the attr is "xmlns:ev"? That is not a valid identifier, so it doesn't work. So I worked around it by creating a StringTable, filling it, and setting all the attributes at once.

A good thing is the when keyword. Using it as when isMainModule means that code is built and executed when svgwrite.nim is built standalone, and not when it's used as a module (much like Python's if __name__ == "__main__": idiom).

Another good thing is the syntax sugar for what in python we would call "object's methods".

Because Add takes a Drawing as its first argument, you can just call d.Add() if d is a Drawing. It's simple, it's clear, it's useful, and I like it.

One bad thing is that sometimes importing a module will cause weird errors that are hard to guess. For example, this simplified version fails to build:

import xmltree

type
        Drawing* = tuple[fname: string, document: XmlNode]

proc NewDrawing*(fname: string, height:string="100", width:string="100"): Drawing =
        result = (
            fname: fname,
            document: <>svg(width=width, height=height)
        )

when isMainModule:
        var d = NewDrawing("foo.svg")
$ nim c  svg1.nim
Hint: used config file '/etc/nim.cfg' [Conf]
Hint: system [Processing]
Hint: svg1 [Processing]
Hint: xmltree [Processing]
Hint: macros [Processing]
Hint: strtabs [Processing]
Hint: hashes [Processing]
Hint: strutils [Processing]
Hint: parseutils [Processing]
Hint: math [Processing]
Hint: algorithm [Processing]
Hint: os [Processing]
Hint: times [Processing]
Hint: posix [Processing]
Hint: ospaths [Processing]
svg1.nim(9, 19) template/generic instantiation from here
lib/nim/core/macros.nim(556, 26) Error: undeclared identifier: 'newStringTable'

WAT? I am not using newStringTable anywhere! The solution is to add import strtabs, which defines it, but there is really no way to guess which imports will trigger this sort of issue. If importing a random module can trigger some weird failure like this with something that is not part of the stdlib, and I need to figure it out... it can hurt.

In any case: it worked! My first working, useful nim code!

Doing a script with options / parameters

In my python version I was using docopt and this was smooth: there is a nim version of docopt and using it was as easy as:

  1. nimble install docopt
  2. import docopt in the script

The usage is remarkably similar to python:

import docopt
import strutils  # needed for split and parseFloat; not in the original snippet
when isMainModule:
        let doc = """Usage:
        boxes <input> <output> [--page-size=<WxH>] [--separation=<sep>]
        boxes --version"""

        let arguments = docopt(doc, version="Boxes 0.13")
        var (w,h) = (30f, 50f)
        if arguments["--page-size"]:
            let sizes = ($arguments["--page-size"]).split("x")
            w = parse_float(sizes[0])
            h = parse_float(sizes[1])

        var separation = 0.05
        if arguments["--separation"]:
            separation = parse_float($arguments["--separation"])
        var input = $arguments["<input>"]
        var output = $arguments["<output>"]

Not much to say, other than that the code for parsing --page-size is slightly less graceful than I would like, because I can't figure out how to split the string and convert to float at once.

So, at that point I sort of have the skeleton of the program done. The missing pieces are calling harfbuzz and freetype2 to figure out text sizes and so on.

Interfacing with C libs

One of the main selling points of Nim is that it interfaces with C and C++ in a straightforward manner. So, since nobody had wrapped harfbuzz yet, I could try to do it myself!

First I tried to get c2nim working, since it's the recommended way to do it. Sadly, the version of nim that ships in Arch is not able to build c2nim via nimble, and I ended up having to manually build nim-git and c2nim-git ... which took quite a while to get right.

And then c2nim just failed.

So then I tried to do it manually. It started well!

  • To link libraries you just use pragmas: {.link: "/usr/lib/libharfbuzz.so".}

  • To declare types which are equivalent to void * just use distinct pointer

  • To declare a function just do some gymnastics:

    proc create*(): Buffer {.header: "harfbuzz/hb.h", importc: "hb_buffer_$1" .}

  • Creates a nim function called create (the * means it's "exported")

  • It is a wrapper around hb_buffer_create (see the syntax there? That is nice!)

  • Says it's declared in C in "harfbuzz/hb.h"

  • It returns a Buffer which is declared thus:

type
    Buffer* = distinct pointer

Here is all I could do trying to wrap what I needed:

{.link: "/usr/lib/libharfbuzz.so".}
{.pragma: ftimport, cdecl, importc, dynlib: "/usr/lib/libfreetype.so.6".}

type
        Buffer* = distinct pointer
        Face* = distinct pointer
        Font* = distinct pointer

        FT_Library*   = distinct pointer
        FT_Face*   = distinct pointer
        FT_Error* = cint

proc create*(): Buffer {.header: "harfbuzz/hb.h", importc: "hb_buffer_$1" .}
proc add_utf8*(buffer: Buffer, text: cstring, textLength:int, item_offset:int, itemLength:int) {.importc: "hb_buffer_$1", nodecl.}
proc guess_segment_properties*( buffer: Buffer): void {.header: "harfbuzz/hb.h", importc: "hb_buffer_$1" .}
proc create_referenced(face: FT_Face): Font {.header: "harfbuzz/hb.h", importc: "hb_ft_font_$1" .}
proc shape(font: Font, buf: Buffer, features: pointer, num_features: int): void {.header: "harfbuzz/hb.h", importc: "hb_$1" .}

proc FT_Init_FreeType*(library: var FT_Library): FT_Error {.ft_import.}
proc FT_Done_FreeType*(library: FT_Library): FT_Error {.ft_import.}
proc FT_New_Face*(library: FT_Library, path: cstring, face_index: clong, face: var FT_Face): FT_Error {.ft_import.}
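# Note (my guess, not from the original post): in C, FT_Set_Char_Size takes
# FT_F26Dot6 (long) sizes and FT_UInt resolutions, so declaring them as float/int
# below is a likely suspect for the segfault mentioned later.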
proc FT_Set_Char_Size(face: FT_Face, width: float, height: float, h_res: int, v_res: int): FT_Error {.ft_import.}

var buf: Buffer = create()
buf.add_utf8("Hello", -1, 0, -1)
buf.guess_segment_properties()

var library: FT_Library
assert(0 == FT_Init_FreeType(library))
var face: FT_Face
assert(0 == FT_New_Face(library,"/usr/share/fonts/ttf-linux-libertine/LinLibertine_R.otf", 0, face))
assert(0 == face.FT_Set_Char_Size(1, 1, 64, 64))
var font = face.create_referenced()
font.shape(buf, nil, 0)

Sadly, this segfaults and I have no idea how to debug it. It's probably close to right? Maybe some nim coder can figure it out and help me?

In any case, conclusion time!

Conclusions

  • I like the language
  • I like the syntax
  • nimble, the package manager, is cool
  • Is there an equivalent of virtualenvs? Is it necessary?
  • The C wrapping is, indeed, easy. When it works.
  • The availability of 3rd party code is of course not as large as with other languages
  • The compiling / building is cool
  • There are some strange bugs, which is to be expected
  • Tooling is ok. VSCode has a working extension for it. I miss an opinionated formatter.
  • It produces fast code.
  • It builds fast.

I will keep it in mind if I need to write fast code with limited dependencies on external libraries.

My Git tutorial for people who don't know Git

As part of a book project aimed at almost-beginning programmers I have written what may as well pass as the first part of a Git tutorial. It's totally standalone, so it may be interesting outside the context of the book.

It's aimed at people who, of course, don't know Git and could use it as a local version control system. In the next chapter (being written) I cover things like remotes and push/pull.

So, if you want to read it: Git tutorial for people who don't know git (part I)

PS: If the diagrams are all black and white, reload the page. Yes, it's a JS issue. Yes, I know how to fix it.

Lois Lane, Reporting

So, 9 years ago I wrote a post about how I would love a tool that took a JSON data file and a Mako template, and generated a report using reStructuredText.

If you don't like that, pretend it says YAML, Jinja2 and Markdown. Anyway, same idea. Reports are not some crazy difficult thing, unless you have very demanding layout requirements or need to add a ton of logic.
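The whole idea fits in a handful of lines. This is not the actual Lois Lane code, just the concept, with made-up file names:

import json
from mako.template import Template

data = json.load(open("data.json"))            # e.g. {"month": "May", "total": 1234}
template = Template(filename="report.tmpl")    # a Mako template that emits reStructuredText
open("report.rst", "w").write(template.render(**data))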

And hey, if you do need to add a ton of logic, you do know python, so how hard can it be to add the missing bits?

Well, not very hard. So here it is, 9 years later, because I am sitting in an auditorium and the guy giving the talk is having computer problems.

Lois Lane Reports, from PyPI and GitHub.

Gyro 0.3

Gyro grows some legs

It was just a few days ago that I started an experimental wiki project called Gyro ... it's always fun when a project just grows features organically. It does this, so it makes sense to make it do that, and then this other thing is easy, and so on.

So, here is what happened with Gyro:

  • Federico Cingolani made it run on docker
  • I added some features:
    • UI for creating new pages
    • UI for deleting pages
    • Support for multilevel pages (so you can have "foo" and "foo/bar")
    • Autocompletion with titles in search
    • Breadcrumbs so you can actually follow the multilevel pages
    • Lots of code cleanup
    • Themes (via Bootswatch)
    • Custom fonts (via Google WebFonts)
    • Automatic linking for WikiWords if you like that kind of thing

And, I published it as a Google Chrome Extension ... so you can now have a wiki in your Chrome. If you saw how it worked before, you may wonder how it became an extension, since those are pure JavaScript. Well... I made it have pluggable backends, so it can either use the older Sanic-based python API or use LocalStorage and just save things inside your browser.

The behavior is identical in both cases, it's just a matter of where things are saved, and how they are retrieved. The goal is that you should not be able to tell apart one implementation from the other, but of course YMMV.

And since I was already doing a chrome extension ... how hard would it be to run it as an electron "desktop" app? Well, not very. In fact, there are no code changes at all. It's just a matter of packaging.

And then how about releasing it as a snap for Ubuntu? Well, easy too, just try snap install gyro --beta

All the Gyros

Is it finished? Of course not! A non-exhaustive list of missing MVP features includes:

  • Import / Export data
  • A syncing backend
  • General UI polish (widget focus, kbd shortcuts)
  • Better error handling
  • General testing

But in any case, it's nice to see an app take shape this fast and this painlessly.

New mini-project: Gyro

History

Facubatista: ralsina, yo, vos, cerveza, un local-wiki-server-hecho-en-un-solo-.py-con-interfaz-web en tres horas, pensalo

Facubatista: ralsina, you, me, beer, a local-wiki-server-done-in-one-.py-with-web-interface in three hours, think about it


The next day.

So, I could not get together with Facu, but I did sort of write it, and it's Gyro. [1]

Technical Details

Gyro has two parts: a very simple backend, implemented using Sanic [2], which does a few things (there's a rough sketch after the list):

  • Serve static files out of _static/
  • Serve templated markdown out of pages/
  • Save markdown to pages/
  • Keep an index of file contents updated in _static/index.js
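Conceptually, the backend is something like this (a minimal sketch, not Gyro's actual code; routes and file handling are simplified):

from pathlib import Path
from sanic import Sanic, response

app = Sanic("gyro")
app.static("/_static", "./_static")  # serve static files out of _static/

@app.route("/<page>", methods=["GET", "POST"])
async def page_handler(request, page):
    path = Path("pages", page + ".md")
    if request.method == "POST":
        # Save markdown posted by the editor back into pages/.
        path.write_text(request.body.decode("utf-8"))
        return response.json({"saved": page})
    # Serve the stored markdown (the real thing wraps it in a page template).
    return response.text(path.read_text())

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000)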

The other part is a webpage, implemented using Bootstrap [3] and jQuery [4]. That page can:

  • Show markdown, using Showdown [5]
  • Edit markdown, using SimpleMDE [6]
  • Search in your pages using Lunr [7]

And that's it. Open the site on any URL that doesn't start with _static and contains only letters and numbers:

  • http://localhost:8000/MyPage : GOOD
  • http://localhost:8000/MyDir/MyPage: BAD
  • http://localhost:8000/__foobar__: BAD

At first the page will be sort of empty, but if you edit it and save it, it won't be empty anymore. You can link to other pages (even ones you have not created) using the standard markdown syntax: [go to FooBar](FooBar)

There is really not much else to say about it. If you try it and find bugs, file an issue, and as usual patches are welcome.


[1] Why Gyro? Gyros are delicious fast food. Wiki means quick. Also, I like Gyros.
[2] Why Sanic? Ever since Alejandro Lozanoff mentioned a flask-like framework done with the intention to be fast and async I wanted to check it out. So, since this was a toy project, why not?
[3] Why bootstrap? I know more or less what it does, and the resulting page is not totally horrible.
[4] Why JQuery? It's easy, small and I sort of know how it works.
[5] Why Showdown? It's sort of the standard to show markdown on the web.
[6] Why SimpleMDE? It looks and works awesome!
[7] Why Lunr? It works and is smaller than Tipue which is the only other similar thing I know.

The Importance of Fingers in Computational Thinking

Thinking with anything other than the brain is disqualifying: you think with your ass, you think with your dick. It's a variant of doing anything with the wrong body part, because I write with my elbows, she programs with her feet, and so on. Maybe that's why I feel uncomfortable when I start a new project, because I feel an indecent itch to start hitting the keys with my fingertips, as if the ideas about how to implement things didn't come out of my head but sprouted from my fingers, as if they flowed down my arms, like Palpatine electrocuting Darth Vader, with that Arltian arrogance of not being able to converse, only to type, in proud solitude, programs that hold the violence of a cross to the jaw, and "let the eunuchs huff".

And no, it's not the ideal way to do things, I suspect, in the same sense that making out on the first date or touching that consenting butt during the first slow Air Supply song were decisions that seemed good at the time but many of us have lived to regret; thinking too much with your fingers produces shitty code, just as the relationship that started at that party was shitty, but is it really shitty code if it's code that exists, compared with the theoretical relationship with the girl who wouldn't dance with you? No, it's cool code, it's obliging code, it's code with savoir faire.

Thinking too much is submitting to the inner waterfall, which is the worst waterfall, and yes, sometimes I have thought a program through very slowly over five years, letting it ripen inside me like a Tahina spectabilis that blooms every hundred years, but remember that the flower it produces smells like a corpse and the plant dies immediately afterwards. Ripe projects are rotting projects; it's a fine balance not everyone can walk, we are not all Philippe Petit, we don't know how to cross from one tower to the other on a rope, we fall like King Kong, clinging to a tower we don't understand, thinking of Jessica Lange.

Programming is not prog rock, it's not Lark Tongues in Aspic; programming is, 90% of the time, the same four chords of Sheena Is a Punk Rocker, shuffled around, faster or slower, while you write two-minute songs because your dad didn't love you when you were a kid; it's remembering that the first one gets thrown away, like the first mate, that the first one they give you for free and the second one they sell you, and that's why you give the first one away, and the second you make pretty and give away too, what the hell.

And meanwhile, listen to "Como salvajes" by Attaque 77, which is three minutes long, makes you want to go out and kick garbage bags down the street, and is a reasonably decent sci-fi story, not perfect, but much better than the one you didn't write.

I am now using almost an IDE

I have long been a proponent of simple text editors.

Not for me was emacs, with its multitude of modes and magical elisp code to do everything.

Not even vim with its multitude of extensions achieving magical productivity with three keystrokes.

Not even would I use the ubiquitous jetbrains IDE with magic refactoring that writes code on its own.

No, for twenty years or so I have written my code using a plain text editor. Until recently, that meant KWrite. Not even Kate. KWrite, the one that is slightly more powerful than Notepad.

But then I got a new job, and everyone uses an IDE so I started thinking... I must be missing something.

Because if everyone is doing it differently from you, then one of the following things is likely to be true:

  • everyone is wrong
  • it's purely an opinion thing and it doesn't matter much
  • you are missing out

You know you are old once you assume the first. Since I am going through some sort of weird midlife crisis, I am forcing myself to choose the last option most of the time. So, I started trying out stuff. Which is why I no longer use bash. Or Unity. Or KDE. But those are stories for some other bonfire; this one is about my text editor midlife crisis.

Atom

It's huge. And slow. Like, really slow. And the extension quality is very uneven. For example, all the terminals felt wrong.

Once it started dragging after being open for a couple of days... well, I removed it and smugly went back to my old workflow.

And then I tried...

Pycharm

The extension quality was soooo much better! And some are just awesome. The way you can choose a virtualenv interpreter for a project is awesome.

Compared to Atom it's downright snappy!

The only things I did not like were:

  • So much magic in place that sometimes things only worked in the IDE.
  • Too slow to start, so I still had to use a plain text editor for casual edits.
  • At one point, things started to rot, and functions that had been working fine started to misbehave.

So then I had my Goldilocks moment...

VSCode

I was expecting to hate it. It's called Visual Studio! It comes from Microsoft! It's electron-based like Atom!

Yet, I loved it at first sight.

Not going to go over many details because I am not in the business of convincing people of things but here are some of the highlights:

  • Good python support, including virtualenvs, formatting, autocomplete, refactoring, debugger, etc.
  • Good Go support.
  • Nice terminal gadget! Ctrl+click to open files mentioned in the terminal!
  • Good markdown/reST support, including previews
  • The "compared to working tree" view is genius
  • If you run "vscode somefile" in the terminal, it opens in the current vscode.
  • The settings mechanism and UX are a great idea.
  • It's fast enough
  • The UI is fairly minimal, so most of the time it will look like my previous workflow used to look: two text files open side by side.
  • Test runner integration is neat.
  • In Ubuntu you can install it as snap install vscode --classic ... takes all of 30 seconds. And it's updated forever.
  • Lots and lots and lots of decent quality extensions.

So, all in all it does all the things I liked from the IDE side of the universe while not making the things I liked from text editors less convenient. And that's why I use it now.