Posts about programming (old posts, page 12)

2007-08-07 08:21

Fixing old tutorials

I got a mail from a reader telling me that he couldn't download the sources for Notty, the toy app I developed in my "Rapid Application development using PyQt and Eric3... in realtime!" tutorial.

So, I checked and was shocked at just how much the server moves had wrecked that article, which is one of my favourites!

No images, broken link to the sources, no syntax highlights!

So, I rejiggered the thing quickly with some search and replace (thanks, reStructuredText!) and now it should be up to standard, except that... it's still about Qt3 and I am not even sure it works nowadays.

Normally that would be simple to fix: change the code as needed, make it work, and be happy.

But the fun thing about that article was that it was written in 3 hours, and it talks about how it was written in 3 hours. So, I think I may have to keep it as is and add a note with a link to a corrected/updated version, someday.

2007-08-06 17:44

If you try to use quotactl on Linux...

Always remember to do this:

#define _LINUX_QUOTA_VERSION 2

Or else, your code will break in mysterious ways.

That's because sys/quota.h has this:

/*
 * Select between different incompatible quota versions.
 * Default to the version used by Linux kernel version 2.4.21
 * or earlier (in RHEL version 1 is AS2.1, version 2 is RHEL3 and later).  */
#ifndef _LINUX_QUOTA_VERSION
# define _LINUX_QUOTA_VERSION 1
#endif

Why? I have no idea. But this is true at least on CentOS 4; I have no clue if it is also true for your distro, but it is sooooo wrong :-(
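For the record, the workaround is simply a matter of ordering: the define has to come before the include. A minimal sketch of the call (the device path and the uid are made up, and this is written against glibc's quotactl wrapper, nothing more exotic):

/* Must come before the include, or you silently get the old v1 struct layout. */
#define _LINUX_QUOTA_VERSION 2

#include <sys/types.h>
#include <sys/quota.h>
#include <stdio.h>

int main(void)
{
    struct dqblk q;

    /* Hypothetical device and uid, just to show the shape of the call. */
    if (quotactl(QCMD(Q_GETQUOTA, USRQUOTA), "/dev/sda1", 1000, (caddr_t)&q) != 0) {
        perror("quotactl");
        return 1;
    }
    printf("block hard limit: %llu\n", (unsigned long long)q.dqb_bhardlimit);
    return 0;
}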

2007-08-06 09:57

A bit sad about this

It seems that during the big SVN conversion some data was lost from the commits.

So, if you check KDE's svn for the really old stuff, it has no author information.

For example, check this out:

http://websvn.kde.org/branches/KDE/1.1/kdenetwork/krn/

I know I wrote pretty much everything there, but you are probably not going to notice it, and I spent a good couple of years working really hard on that thing.

Sure, it was crap, but it was hard-worked crap, and there is at least one thing I am slightly not ashamed of: TypeLayout, which is still nicer than what most toolkits offer, even if it was shamelessly copied from LinuxConf.

But what the heck, it's old stuff.

2007-08-04 14:49

Coming ideas

Nowadays, the very little time I have for personal projects is spent fixing little things and adding little features to BartleBlog [1], and thinking about how I could use GLE and Mako templates to create a cool, nerdy charting tool.

However, I will have a little time for personal projects in a couple of weeks, and stumbling onto Chipmunk today really got me thinking.

It's a seriously nifty 2D physics library; its demo video is well worth watching.

Now, what could possibly be done with it... I need to really think.

[1] What I use to post this.

2007-07-23 15:59

Rater progresses (slowly)

I am hacking a bit on rater, my daemon/client for checking whether things are happening more often than they should (in other words, generic rate limiting).

I had to take a few days off, since my brother got married and we all went back to Santa Fe for that and for the weekend; now everyone else has a sore throat and I am the only healthy one.

But hey, it works well enough already:

  • The simplistic protocol is done
  • The server works
    • It can take hours of gibberish without problems.
    • It can take hours of valid input without problems.
    • It does what it's supposed to do.
  • It's staying below 300 SLOC, which was my goal.

Missing stuff:

  • Valgrind it.
  • Client library.
  • Generic CLI client.
  • A qmail-spp plugin that uses it.

And then, I can forget all about it.

2007-07-11 20:10

Snow and rates

Monday was a very special day:

  • Holiday (Independence day)
  • Anniversary (3 years as Rosario's boyfriend)
  • The first snowfall in Buenos Aires in 89 years.

Besides that, this week my brother is getting married, so the whole family (including 2.5-month-old JF) is leaving for my ancestral lands tomorrow.

And I started a new small project, which should be finished soon.

This is something that seems useful to me in the context of mail servers, but maybe it will also find its uses elsewhere.

I call it rater, and it tells you if things are happening faster than a specific rate.

For example, I intend to use it to figure out if a specific IP is connecting to a server more than X times every Y seconds, or if a user is sending more than Z emails every T minutes.

The only thing I found for this is relayd, which is old and unmaintained, and whose site has vanished.

The config file is something like this (thanks to libconfig):

limits : {
    user: (
        ("rosario", 90, 20),
        ("ralsina", 90, 10),
        ("*", 2, 10)
    );
    ip: (
        ("10.0.0.*", 90, 20),
        ("10.0.1.*", 90, 20),
        ("*", 2, 10)
    );
};

You can define as many classes of limits as you want (ip and user in this example) and as many limit keys as you want; keys are matched using something like fnmatch, roughly as sketched below.
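Just to make that concrete, reading one class of limits with libconfig and picking the matching entry with fnmatch could look roughly like this. This is a sketch of the idea, not rater's actual code, and the "rater.cfg" file name is made up:

#include <libconfig.h>
#include <fnmatch.h>
#include <stdio.h>

int main(void)
{
    config_t cfg;
    config_init(&cfg);

    /* "rater.cfg" is a made-up name for a file holding the config shown above. */
    if (config_read_file(&cfg, "rater.cfg") != CONFIG_TRUE) {
        fprintf(stderr, "cannot read rater.cfg: %s\n", config_error_text(&cfg));
        return 1;
    }

    const char *key = "10.0.1.42";   /* the thing we want to rate-limit */
    config_setting_t *ip = config_lookup(&cfg, "limits.ip");
    if (!ip)
        return 1;

    for (int i = 0; i < config_setting_length(ip); i++) {
        config_setting_t *entry = config_setting_get_elem(ip, i);
        const char *pattern = config_setting_get_string_elem(entry, 0);
        int count = config_setting_get_int_elem(entry, 1);
        int seconds = config_setting_get_int_elem(entry, 2);

        /* First pattern that fnmatch()es the key wins. */
        if (fnmatch(pattern, key, 0) == 0) {
            printf("%s -> at most %d events every %d seconds\n",
                   pattern, count, seconds);
            break;
        }
    }

    config_destroy(&cfg);
    return 0;
}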

I am using an in-memory SQLite DB for the accounting, and an interesting library called libut for the sockets, logging, and event loop.
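I have no idea if this is exactly how rater keeps its books, but the core check against an in-memory SQLite table could look roughly like this (the schema and the over_limit helper are made up for illustration):

#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

/* Record one event for `key`, then report whether more than max_count
   events were seen in the last `window` seconds.  Illustrative only. */
static int over_limit(sqlite3 *db, const char *key, int max_count, int window)
{
    char sql[256];
    sqlite3_stmt *stmt;
    int count = 0;
    long now = (long)time(NULL);

    snprintf(sql, sizeof(sql),
             "INSERT INTO events (key, ts) VALUES ('%s', %ld);", key, now);
    sqlite3_exec(db, sql, NULL, NULL, NULL);

    snprintf(sql, sizeof(sql),
             "SELECT COUNT(*) FROM events WHERE key = '%s' AND ts > %ld;",
             key, now - window);
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            count = sqlite3_column_int(stmt, 0);
        sqlite3_finalize(stmt);
    }
    return count > max_count;
}

int main(void)
{
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db, "CREATE TABLE events (key TEXT, ts INTEGER);",
                 NULL, NULL, NULL);

    for (int i = 0; i < 5; i++)
        printf("event %d over limit? %d\n", i, over_limit(db, "10.0.0.5", 2, 10));

    sqlite3_close(db);
    return 0;
}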

libut has a very interesting feature: your app gets an administrative interface for free!

$ telnet localhost 4445
Trying 127.0.0.1...
Connected to localhost.localdomain.
Escape character is '^]'.
libut control port interpreter
Type 'help' for command list.

help
command           description
----------------- -------------------------------------
* mem             - memory pool usage summary
* var             - Display or set config variables
* log             - Change log file or verbosity
  fds             - list selected file descriptors
  tmr             - show pending timers
  uptime          - show uptime
* prf             - Performance/profiling stats
* cops            - List coprocesses
  help            - view command help
  exit            - close connection

Commands preceded by * have detailed help. Use help <command>.

Ok
var
 name                 description                    value
--------------------- ------------------------------ --------------------
*ut_log_level         log level                      Debugk
*ut_log_file          log file                       /dev/stdout
*ut_jobname           job name                       job1
*ut_control_port      control port IP/port           127.0.0.1:4445
*ut_basedir           shl base directory             /mnt/centos/home/ralsina/Desktop/proyectos/rater

Variables prefixed with '*' can be changed.

Ok
var ut_log_level Debug

Ok
var
 name                 description                    value
--------------------- ------------------------------ --------------------
*ut_log_level         log level                      Debug
*ut_log_file          log file                       /dev/stdout
*ut_jobname           job name                       job1
*ut_control_port      control port IP/port           127.0.0.1:4445
*ut_basedir           shl base directory             /mnt/centos/home/ralsina/Desktop/proyectos/rater

Variables prefixed with '*' can be changed.

Ok

Pretty neat.

Beyond this, there will be a small client-side library that hides all the network stuff behind a couple of blocking calls (or you can roll your own, because the protocol is silly simple).

2007-07-05 11:07

Quote of the day (ok, of May 21st, 2007, but I only saw it today)

Said Giles Bowkett

The Perl community's starting to look more and more like the Lisp community every day. The combination of incredible power, reclusive wizards, and antisocial Slashdotters gives it the vibe of a lava-filled wasteland dotted with towers where strange men with white beards obsess over unspeakable knowledge. I spoke to someone once who compared programming in Lisp to studying Kabbalah, in that it does strange things to your head. Parts of Perl are like that. Still, source filtering's kind of cool. Unnecessary, but cool.

So, now we know. Saruman used too much Perl.

2007-07-04 10:06

The Linux software ecosystem is boring and a little lame (a rant).

Quick, answer this:

When was the last time a basic piece of the Linux system was redesigned and replaced by everyone?

And the new piece was not a drop-in replacement or evolutionary development for the old garbage?

Please post the answers on comments, because the best I can come up with is one of the following:

  • Postfix replacing Sendmail
  • Everything else replacing Wu-ftpd
  • GRUB replacing LILO? (not that GRUB is all that great, but at least you have a decent chance of fixing it when it breaks)
  • OpenSSH replacing telnet and rlogin

There are still distros shipping UW IMAP and its offspring!

There are still distros shipping the old syslog!

Let's consider a basic, tty Linux first.

  1. GRUB (ok)
  2. Linux kernel (ok I guess)
  3. Ancient SysV init (unless you use pardus/gobo/some other radical distro)
  4. Services, which probably include
    1. Syslog-NG (which is marginally less broken than old syslog)
    2. Sendmail (even if only for loopback addresses, it's still lame)
    3. OpenSSH (ok, although I think the client sucks because I can't figure out how to store passwords and passphrases in KWallet)
  5. A getty

At least here there is not much room for innovation because we are trying to start something that is a lot like a 30-year-old Unix box.

So, let's go server-ish. What would you normally use?

  • BIND

    Ancient software with a terrible security history. Yes, I know it has been rewritten lately. They did that before, too, you know.

  • Apache

    For all the good things Apache has, it has some bad ones, too.

    • It's overkill for most servers.
    • As the A in LAMP, it has led people to believe that PHP4 is the right language to develop applications in, and MySQL a good place to store their data.
    • If it fails to do what you want, you may get an error. Or not.
    • The configuration is in some sort of pseudo-XML.

    Let's get real. For most modern web apps what you want is a decent, high-performance WSGI thingie for Python apps, and whatever you use for Rails. Apache may or may not be that or have that inside, but who needs the rest of it? What for? PHP pages? mod_perl web apps?

    No, really, I'm asking a question here. What pieces of Apache do you use nowadays?

  • Samba

    • It does what it does.
    • No one else does it.
    • Ergo, it's the best at what it does.
    • That doesn't mean that losing its TDB every once in a while during an "RPC vampire" is not annoying.

    But actually, I am pretty happy about Samba. I mean, what's the alternative, here? NFreakingS?

  • CUPS

    Ok, not too many new print servers out there, but hey, it's better than LPRng!

And if I had written this rant three years ago, I would have used the exact same examples.

Where's the vibrant new server app?

Who's going to write a cool, performant, easy-to-configure HTTP+WSGI server in D?

Who's going to implement a fast, secure, simple, zeroconf-enabled, file serving daemon?

Who's going to replace BIND?

Who's going to create a Linux server distro with only decent software in it?

Me? No way, I have diapers to change. And there used to be smarter and more driven people around to do this stuff. Are they all changing diapers now?

Come on, stop rehearsing with your band that plays "metal with medieval influences"! Stop growing your stamp collection! Stop

Come on, it's only going to consume at most a year or two of your life. It's not going to harm you more than a budding alcoholism, or a poetry hobby, or attending furry conventions, young man (or woman)!

You don't need to be all that knowledgeable (look at the BIND4 sources) or brilliant, all you need is to be industrious.

Grow a spine and get cranking! Show us old hacks what you've got!

2007-06-20 13:23

Old READMEs: Atlast... make everything programmable!

I have been exploring embeddable languages for the last month or so. I have learned Forth and some of its many many many variants [1], and while exploring one of the most obscure ones, ATLAST [2], I found a very interesting README which I will quote liberally in this post.

Virtually every industry analyst agrees that open architecture is essential to the success of applications. And yet, even today, we write program after program that is closed--that its users cannot program--that admits of no extensions without our adding to its source code. If we believe intellectually, from a sound understanding of the economic incentives in the marketplace, that open systems are better, and have seen this belief confirmed repeatedly in the marketplace, then the only question that remains is why? Why not make every program an open program?

And this is indeed very important. Any app worth developing is probably worth developing in an extensible manner. After all, you are probably never going to figure out what the user really wants to do.

But why is not every program "open" in this manner?

Well, because it's HARD! Writing a closed program has traditionally been much less work at every stage of the development cycle: easier to design, less code to write, simpler documentation, and far fewer considerations in the test phase. In addition, closed products are believed to be less demanding of support, although I'll argue later that this assumption may be incorrect.

This is true, although it was much more true in 1991, when ATLAST was released to the public domain. Nowadays the generalized adoption of more dynamic languages has made this much easier. After all, you can write a plugin system for your Python app in perhaps ten lines of code.

However, there is a strong backlash from the pains of MS Office scripting, which involved loveliness like function names that changed according to your locale, and Basic as the language of choice.

With static languages this is harder, but there are technological solutions. For example, in KDE there is already a limited scriptability culture based on how easy it is to provide external DCOP interfaces (I suppose I should say DBUS nowadays). Also, for KDE4, the Kross scripting framework promises language-neutral scripting.

Most programs start out as nonprogrammable, closed applications, then painfully claw their way to programmability through the introduction of a limited script or macro facility, succeeded by an increasingly comprehensive interpretive macro language which grows like topsy and without a coherent design as user demands upon it grow. Finally, perhaps, the program is outfitted with bindings to existing languages such as C.

No one provides straight custom C bindings for applications now, right? But yes, I expect scripting is not the first thing that gets implemented.

An alternative to this is adopting a standard language as the macro language for a product. This approach has many attractions. First, choosing a standard language allows users to avail themselves of existing books and training resources to learn its basics. The developer of a dedicated macro language must create all this material from scratch. Second, an interpretive language, where all programs are represented in ASCII code, is inherently portable across computers and operating systems. Once the interpreter is gotten to work on a new system, all the programs it supports are pretty much guaranteed to work. Third, most existing languages have evolved to the point that most of the rough edges have been taken off their design. Extending an existing language along the lines laid down by its designers is much less likely to result in an incomprehensible disaster than growing an ad-hoc macro language feature by neat-o feature.
Unfortunately, interpreters are slow, slow, slow. A simple calculation of the number of instructions of overhead per instruction that furthers the execution of the program quickly demonstrates that no interpreter is suitable for serious computation. As long as the interpreter is deployed in the role of a macro language, this may not be a substantial consideration. However, as soon as applications try to do substantial computation, the overhead of an interpreter becomes a crushing burden, verging on intolerable. The obvious alternative is to provide a compiled language. But that, too, has its problems.

The interesting point here is that, again, the speed of interpreters is not so big a deal nowadays. After all, LuaJIT is pretty fast for most uses.

Using a compiler as an extension language is not something I have run into, but I could do some pretty things using, for example, TCC, Python, and python-instant.

Now we get to ATLAST itself:

ATLAST is a toolkit that makes applications programmable. Deliberately designed to be easy to integrate both into existing programs and newly-developed ones, ATLAST provides any program that incorporates it most of the benefits of programmability with very little explicit effort on the part of the developer. Indeed, once you begin to "think ATLAST" as part of the design cycle, you'll probably find that the way you design and build programs changes substantially. I'm coming to think of ATLAST as the "monster that feeds on programs," because including it in a program tends to shrink the amount of special-purpose code that would otherwise have to be written while resulting in finished applications that are open, extensible, and more easily adapted to other operating environments such as the event driven paradigm.
The idea of a portable toolkit, integrated into a wide variety of products, all of which thereby share a common programming language seems obvious once you consider its advantages. It's surprising that such packages aren't commonplace in the industry. In fact, the only true antecedent to ATLAST I've encountered in my whole twisted path through this industry was the universal macro package developed in the mid 1970's by Kern Sibbald and Ben Cranston at the University of Maryland. That package, implemented on Univac mainframes, provided a common macro language shared by a wide variety of University of Maryland utilities, including a text editor, debugger, file dumper, and typesetting language. While ATLAST is entirely different in structure and operation from the Maryland package, which was an interpretive string language, the concept of a cross-product macro language and appreciation of the benefits to be had from such a package are directly traceable to those roots.

This concept was later adopted by Lua and Tcl, and you could use Python in this manner too, although usually it's done the other way around. Which means that there is not as much sharing of extension languages as there could be.

And onto the conclusions:

Everything should be programmable. EVERYTHING! I have come to the conclusion that to write almost any program in a closed manner is a mistake that invites the expenditure of uncounted hours "enhancing" it over its life cycle. Further tweaks, "features," and "fixes" often result in a product so massive and incomprehensible that it becomes unlearnable, unmaintainable, and eventually unusable.

Amen.

Far better to invest the effort up front to create a product flexible enough to be adapted at will, by its users, to their immediate needs. If the product is programmable in a portable, open form, user extensions can be exchanged, compared, reviewed by the product developer, and eventually incorporated into the mainstream of the product.

This prefigures the current FLOSS ecosystem, which is pretty impressive for a product that was mature in 1991, although of course there was already a community of EMACS hacking and similar niches.

It is far, far better to have thousands of creative users expanding the scope of one's product in ways the original developers didn't anticipate--in fact, working for the vendor without pay, than it is to have thousands of frustrated users writing up wish list requests that the vendor can comply with only by hiring people and paying them to try to accommodate the perceived needs of the users.

True, if a little too cynical for my current taste ;-) Of course in FLOSS there is no such conflict between users and developers, because the developers can usually just whine, er, explain to the users the realities of free software development. But hey, a low barrier to entry is always a nice thing to have.

Open architecture and programmability not only benefits the user, not only makes a product better in the technical and marketing sense, but confers a direct economic advantage upon the vendor of such a product--one mirrored in a commensurate disadvantage to the vendor of a closed product.
The chief argument against programmability has been the extra investment needed to create open products. ATLAST provides a way of building open products in the same, or less, time than it takes to construct closed ones. Just as no C programmer in his right mind would sit down and write his own buffered file I/O package when a perfectly fine one was sitting in the library, why re-invent a macro language or other parameterisation and programming facility when there's one just sitting there that's as fast as native C code for all but the most absurd misapplications, takes less than 51K with every gew-gaw and optional feature at its command enabled all at once, is portable to any machine that supports C by simply recompiling a single file, and can be integrated into a typical application at a basic level in less than 15 minutes?

And then proceeds to throw a Forth variant at you ;-) Good concept, perhaps not the nicest language to use, although the choice is very understandable.

Am I proposing that every application suddenly look like FORTH? Of course not; no more than output from PostScript printers looks like PostScript, or applications that run on 80386 processors resemble 80386 assembly language. ATLAST is an intermediate language, seen only by those engaged in implementing and extending the product. Even then, ATLAST is a chameleon which, with properly defined words, can look like almost anything you like, even at the primitive level of the interpreter.
Again and again, I have been faced with design situations where I knew that I really needed programmability, but didn't have the time, the memory, or the fortitude to face the problem squarely and solve it the right way. Instead, I ended up creating a kludge that continued to burden me through time. This is just a higher level manifestation of the nightmares perpetrated by old-time programmers who didn't have access to a proper dynamic memory allocator or linked list package. Just because programmability is the magic smoke of computing doesn't mean we should be spooked by the ghost in the machine or hesitant to confer its power upon our customers.

Oh yes. Libraries rule, and when there is no such thing as the library you need, it's the worst position to be in.

Don't think of ATLAST as FORTH. Don't think of it as a language at all. The best way to think of ATLAST is as a library routine that gives you programmability, in the same sense other libraries provide file access, window management, or graphics facilities. The whole concept of "programmability in a can" is odd--it took me two years from the time I first thought about it until I really got my end effector around it and crushed it into submission.

I am probably going to use ATLAST or something similar in a program, but the user will not see a Forth-like language at all; they will see a classical Excel-like formula language, which would get compiled to the Forth-ish language behind the scenes.

Open is better. ATLAST lets you build open programs in less time than you used to spend writing closed ones. Programs that inherit their open architecture from ATLAST will share, across the entire product line and among all hardware platforms that support it, a common, clean, and efficient means of user extensibility. The potential benefits of this are immense.

Indeed. Probably not using ATLAST, but Lua or Python or something else instead. And still, 16 years after ATLAST was released to the public domain, we are still walking down this road, and not even close to the goal.

[1] As they say, "Once you see a Forth, you have seen a Forth"
[2] The "Autodesk Threaded Language Application System Toolkit", by John Walker. Version 1.0 was released in August 1995, 1.1 in July 2002... I expect 1.2 around August 2009 :-)

2007-06-04 13:42

Sometimes, you need to do it the hard way.

You may have noticed there have been no posts about StupidSheet for about a week.

Well, I ran into the limitations of the formula parser I was writing using Aperiot. I just couldn't make it parse this:

A1=IF(A2=B2,1,0)

So, I spent the next week trying one Python parsing package a day until I found one I understood and could make parse that example.

I must say it was educational.

So, now the parser is based on PLY, which is pretty much lex+yacc with a (slightly more) Pythonic syntax, and it works.

Yes, it's a bit harder, but by trying to do things simply I was limiting myself too much, and perhaps underestimating myself.

I am a pretty smart guy, there is no reason I can't understand these things.

Contents © 2000-2018 Roberto Alsina