Sneak peek: CobraPy
http://ralsina.me/categories/cobrapy.html
I am doing some semi-serious Raspberry Pi development, so it was time I figured out how to do it comfortably.
My desktop setup is a two-monitor configuration, with my notebook on the table and a larger monitor above it. I like it, it's nice. The pointer naturally goes from one screen to the other in the obvious way.
Especially nice is that the laptop's screen is a touchscreen with an active pen, so I can use it naturally.
But now, with the Raspberry, I want to occasionally show its display. And that means switching the monitor to it. Since I hate plugging and unplugging things, I use one of these:
It's a cheap tiny black plastic box that takes up to 5 HDMI inputs and switches between them to its one output when you click a button. It only cycles through the inputs that have a signal, so since I only have the laptop's and the Pi's connected, the button toggles between them.
If your monitor has more than one HDMI input you can probably just use that, but mine has just one.
But... what about keyboard and mouse?
I could get a multidevice keyboard and mouse, but I like the ones I have.
I could use a USB switch and toggle between the two devices, but ... I don't have one.
So, I use barrier, configured on both the Raspberry Pi and the laptop so that when my pointer goes "up", input goes to the Pi, and when it goes "down", input goes to the laptop. That's exactly the same as with the dual-display setup, but with two computers. Neat!
So, go ahead and configure barrier. It's easy and there are tons of tutorials.
Next, make sure barrier starts when I log in, on both computers. The way I prefer to do these things is using systemd.
Put this in ~/.local/share/systemd/user/barrier.service in both machines:
[Unit]
Description=Barrier server

[Service]
Environment=DISPLAY=:0
Type=simple
TimeoutStartSec=0
ExecStart=/usr/bin/barrier

[Install]
WantedBy=default.target
Now you can make it start with systemctl --user start barrier, stop it with systemctl --user stop barrier, and make it start on every login with systemctl --user enable barrier.
But while this is nice, it presents a problem. When I am using both displays for the laptop, I don't want barrier running! Since I can't see the Pi's display, it makes no sense.
So, I want to start barrier when the laptop is using one monitor, and stop it when it's using two.
To do that, the trick is udev on the laptop. Put this (replacing my username with yours) in /etc/udev/rules.d/90-barrier.rules:
ACTION=="change", \
KERNEL=="card0", \
SUBSYSTEM=="drm", \
ENV{DISPLAY}=":0", \
ENV{XAUTHORITY}="/home/ralsina/.Xauthority", \
ENV{XDG_RUNTIME_DIR}="/run/user/1000", \
RUN+="/home/ralsina/bin/monitors-changed"
Basically that means "when there is a change in the configuration of the video card, run monitors-changed". Change the 1000 to your user ID, too.
The last piece of the puzzle is the monitors-changed script:
#!/bin/sh
if xrandr --listmonitors | grep -q HDMI
then
    # The HDMI output is connected, stop barrier
    su ralsina -c '/usr/bin/systemctl stop --user barrier'
else
    # The Pi is using the monitor, start barrier
    su ralsina -c '/usr/bin/systemctl start --user barrier'
fi
And that's it!
This is the behaviour now:
When the laptop is using both displays, they work normally in an "extended display" configuration. They behave like a single large screen.
When I click the HDMI switch and the top display changes to show the Pi's desktop, barrier automatically starts on the laptop, and the pointer and keyboard switch from one computer to the other when the pointer moves from one monitor to the next.
If I click on the HDMI switch again, barrier stops on the laptop and I have a single two-screen desktop again.
Everything behaves perfectly and I can switch between computers by clicking a button.
Alternatively, we could start the barrier client when the Raspberry Pi "gets" the display, and stop it when it goes away. The result should be the same except for some corner cases, but it has the added benefit of allowing a setup with up to 5 devices :-)
Disclaimer: This is a bit of a rant, but it's a friendly rant :-)
When people look at code coverage, they are reading it wrong.
Suppose you have a class, something stupid, like your own implementation of a stack, called Stack.
Because you are not a total monster, you have tests in your code, right? In fact, you are claiming that you are doing TDD (Test Driven Development), or at least you
like TDD, or you would like the idea of TDD, or, let's be honest here, you just say you are
doing TDD, but what you do is you sprinkle the tests you feel are needed, which is largely OK, I am not
going to judge you, you freak.
And then you add test coverage checks, and it says: 80%
What most people feel when they see that is dread. They see that 80% and feel "OMFG, my tests suck! I don't have enough! If even 100% coverage is not enough then this 80% means my code is an unstable piece of garbage!"
Well, no.
Whether your code is good or not is independent of tests. Tests give you the ability to know if your code is crap or not... sometimes. What tests really give you (if they are not total garbage in themselves) is the confidence that you can change your code without significantly affecting the behaviours the tests are testing.
So, if your tests of Stack ensure that:

- Stack.push puts the element at the top
- Stack.pop gets the top element

then what you implemented is a stack. Period. It works. It's fine. It may be inefficient, it may be ugly, who knows, but tests are not going to give you good taste. All they are going to do is ensure that Stack is, indeed, a stack, and behaves like a stack, and that when you stick your mittens in it and change things inside it, it stays a stack.
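The two behaviours above are all the tests need to pin down. A minimal sketch — this Stack and its tests are my illustration, not code from the post:

```python
class Stack:
    """A deliberately boring stack, standing in for 'your own implementation'."""

    def __init__(self):
        self._items = []

    def push(self, element):
        """Put the element at the top."""
        self._items.append(element)

    def pop(self):
        """Get (and remove) the top element."""
        return self._items.pop()


def test_push_puts_element_at_top():
    s = Stack()
    s.push(1)
    s.push(2)
    # The last element pushed is the one on top.
    assert s.pop() == 2


def test_pop_gets_top_element():
    s = Stack()
    s.push("a")
    assert s.pop() == "a"
```

Any implementation that passes these two tests "is a stack" as far as the tests care; that is the behaviour the tests protect when you change things inside it.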
Yet, your coverage is 80%.
Should you add more tests?
No.
You should delete 20% of your code.
Since code is a liability and the asset is the code's behaviour, that's what the first D in TDD is for.
Test Driven Development.
Use the tests to define the behaviour you want. Then add code to implement that behaviour.
Don't chase useless stats like coverage.
If coverage is not 100%, consider your tests.
Is there behaviour you want that is not represented as a scenario in a test?
If yes: then add tests.
If not: remove code.
And using "coverage is low" as an opportunity to delete code instead of adding tests is something a lot of developers miss.
A group of crows is called a murder of crows. A group of hares is called a council of hares.
In fact, that's just a bunch of things Victorians made up because they had lots of free time and hadn't invented the Internet yet, and most of those names were never actually widely used.
BUT what's the name for a group of things that are in a minimally viable state?
Well, CobraPy, my 80s-style python programming environment is slowly crawling into becoming one of those.
Of the components I want, I have one of each. They all suck, but they suck the way a 3-year-old playing pool sucks. They will not be great, but it's still cool.
And also, I have combined all the things so that you can start a window that:
Done. Next goal: add a "modal" editor pic.twitter.com/WfI2Ow00lT

— Roberto H. Alsina (@ralsina) October 13, 2020
What next? I could think about what next. Or ...
I could try to write a simple game and implement all the things that don't exist.
Except for input. I need to solve how to do input. You see, the user-created programs don't run in the same space as the window. That's why we have a graphics protocol. The program puts things in it, the window reads them and graphics appear.
But input needs to go the other way around. So I need to add a second protocol to send events back, and it needs to be pretty fast. I don't think it's going to be a problem (user actions happen only once every few dozen milliseconds!) but after that's done?
It's going to be time for ...
Or actually, to fail at implementing it, but improving the platform in the process. Because failure is what improvements are made of.
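The "send events back" channel from a couple of paragraphs up could be framed like this — a toy sketch where the event names and helper functions are my own invention, not CobraPy's actual protocol:

```python
import json

# Hypothetical framing for the input side-channel: the window process
# serializes each user event as one JSON line, and the user program
# decodes lines as they arrive. One line per event keeps parsing trivial.

def encode_event(kind, **data):
    """Window side: serialize one input event as a single JSON line."""
    return (json.dumps({"kind": kind, **data}) + "\n").encode("utf-8")

def decode_event(line):
    """Program side: decode one received line back into an event dict."""
    return json.loads(line)

# Example round trip for a key press.
wire = encode_event("KEY_DOWN", key="a")
event = decode_event(wire)
assert event == {"kind": "KEY_DOWN", "key": "a"}
```

Since an event is tiny and arrives at most every few dozen milliseconds, even this naive line-oriented framing is far faster than needed.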
As it happens in early stages in fun products, progress in CobraPy has been both faster and slower than expected.
In the past few days a number of things happened:
I already had a terminal but I fixed a number of things.
Terminals with graphics support have a very long tradition. This is a VT55, released in 1977, displaying graphics:
How did it work? Well you can read the programmer's manual if you want, but basically you sent a control sequence that put it in "graphics mode" and then sent commands describing what to display.
Similar ideas with different protocol details were used in many later terminals, including ReGIS graphics and Tektronix vector graphics, and you could even trace this all the way to a current Linux desktop's X11 graphics.
So, what did I do? Not that, exactly. I am creating a side-channel as a sort-of-RPC where you send serialized python method names and arguments.
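The "serialized method names and arguments" idea can be sketched in a few lines. The wire format and the dispatch table here are my assumptions, not CobraPy's real protocol:

```python
import json

# Toy sort-of-RPC: the user program packs a method name plus arguments
# into a message; the window side looks the name up in a handler table
# and calls it. JSON stands in for whatever serialization is really used.

def serialize_call(method, *args, **kwargs):
    """Program side: pack one drawing call for the side channel."""
    return json.dumps({"method": method, "args": args, "kwargs": kwargs})

def dispatch(message, handlers):
    """Window side: decode the message and invoke the named handler."""
    call = json.loads(message)
    return handlers[call["method"]](*call["args"], **call["kwargs"])

# Example: the program asks for a circle, the window records the draw.
canvas = []
handlers = {"draw_circle": lambda x, y, r: canvas.append(("circle", x, y, r))}
dispatch(serialize_call("draw_circle", 100, 100, r=20), handlers)
assert canvas == [("circle", 100, 100, 20)]
```

The appeal of this shape is that adding a new drawing primitive is just adding one entry to the handler table; the channel itself never changes.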
I wanted an interactive mode that was slightly friendlier than Python comes with, but not something overwhelming and powerful like IPython or BPython.
I did some research, and found ptpython which is pretty awesome, but still a bit too much awesome.
And then I started on a much, much lamer version of it. Still embryonic, but it does work. I have some plans for it.
All the graphics, and basically everything you see in this project, are done using the awesome raylib and a homegrown CFFI binding for it. I was not using it right; now I use it better, and things that took several hundredths of a second now take a few dozen microseconds.
So, I integrated it enough that you can start the terminal, launch the REPL in it, and use the graphics protocol to draw something!
On the other hand, drawing 10 circles over IPC taking 20 seconds is something that does need optimizing now :-) pic.twitter.com/0ywf9nzxkH
— Roberto H. Alsina (@ralsina) October 11, 2020
Now comes a round of integration, cleanup and optimization.
After that will come a new round of feature work, and so on for the next ... 10 years? If it goes well?