Loading CL-SDL…

I have been playing with Latrunculi again as of late. With the contracting I have done, it has been a while since I have had the time, but here we go again. I have been working on a Common Lisp branch of Latrunculi (which can be found in Subversion under branches/clisp-branch). The reasons are several. First, there are no real bindings in Chicken to SDL or GTK. There are some half-finished, sort-of-working ones, but I don’t really feel like writing large quantities of binding code. Second, I wanted to learn Common Lisp. Third, the threading works, but it is a nasty mess (as is all of the graphics code). Finally, some of Common Lisp’s idioms and built-in datatypes are a better match for what I am trying to do (real 2D arrays instead of vectors of vectors, anyone?). I don’t really like using large quantities of SRFI code or bindings that are not compatible with any other implementation, which is another reason that Common Lisp seems like a good choice. In Common Lisp there are fewer implementations, but even the bindings are often compatible across multiple implementations (CFFI and UFFI provide this).

One of the big goals here was to continue using OpenGL for the primary game rendering, but use SDL to load images, display text, and handle windowing. In trunk, I have written my own Targa loader (which does not implement all of the format, as I only wrote enough to load the textures for the game, which means that, when saving them, very specific options have to be set in Gimp for it to work…), created bindings for some obscure text-rendering library (the link for which is dead, and it would not have been a long-term solution anyway due to its non-commercial license being in conflict with the GPL), and used GLUT for windowing and events. All in all, a mess that I want to clean up. Fortunately, bindings already exist for the libraries in which I am interested in the form of the CL-SDL project. Other goals of Latrunculi involve being cross-platform (and that includes Windows) and having the ability to distribute binaries (since few users, even Linux users, compile from source). CLisp and ECL seem to be the best for this, both having Windows versions and compilers. ECL, I understand, has threading, so I may use it in the end.

With this background, the task seemed rather easy: load the bindings and go. The catch is that the choice of Lisp implementation was defined primarily by cross-platform compatibility, as several implementations (SBCL and CMUCL among them) offer support only for *NIX platforms. Neither CLisp nor ECL has true UFFI support. ECL has a UFFI-compatible FFI layer which, on the surface, seems like it ought to make the whole thing easy. However, I have not found an easy way to make use of this feature.

So far, I can see a few possibly good ways to get this baby running:

  • Make use of ECL’s UFFI-compatible FFI; most likely, this would mean modifying CL-SDL’s ASDF package not to depend on UFFI, making it depend on ECL’s FFI package instead, or writing some code that “aliases” ECL’s FFI to UFFI (at both the package and the ASDF system level) so that everything else is happy and dandy (a rough sketch of this aliasing idea follows the list)
  • Use the CLisp patches for UFFI and try to get it to run
  • Use CFFI’s UFFI compatibility layer to load up the bindings and use them
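For what it’s worth, here is a rough sketch of the first and third options, with the big caveat that I have not actually gotten any of it working yet. It assumes that ECL’s UFFI-compatible layer lives in its FFI package, that CFFI’s compatibility layer is the cffi-uffi-compat system it ships with, and that the CL-SDL bindings load as an ASDF system named :sdl (the names may well differ); it also glosses over the fact that CL-SDL’s .asd files still list UFFI itself as a dependency.

;; Option 1 (ECL): give the built-in FFI package a UFFI nickname so that code
;; written against uffi:def-function and friends still compiles.
#+ecl
(rename-package (find-package "FFI") "FFI" '("UFFI"))

;; Option 3 (CFFI): load CFFI's UFFI emulation first, so the UFFI package that
;; the bindings expect already exists, then pull in CL-SDL through ASDF.
(asdf:oos 'asdf:load-op :cffi-uffi-compat)
(asdf:oos 'asdf:load-op :sdl)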

This sounds harder than it really is, I think. Most likely, a lot of the problems I have are stemming from the pre-packaged nature of using Lisp packages through Ubuntu’s repositories. I am thinking that I will probably try to do this without taking the “easy” way out and using .deb packages. Instead, I will probably try to go from source, beginning to end, by hand and see if I get anywhere. I wanted to post a final explanation of whatever steps I got to work, but this little outlook may elicit some reaction or, at least, serve to get my ideas out.

Piracy

One Cliff Harris recently wrote a post about piracy. I know, I know: there are about a zillion articles on the world wide web about piracy. Loud, bombastic articles condemning it, loud, angry voices defending/promoting it, and timid little voices saying “can’t we all just be friends…you know, get around a campfire and sing kumbaya?” Cliff Harris, apparently (I had never heard of him until running into this on reddit), runs a small indie-style game studio and posted a call for emails from pirates. The article above is his report on his findings. The real question he asked was: why do pirates pirate? On its face, the question sounds like asking why a bird is a bird. On the other hand, even questions rooted in pedantry can be of interest. 🙂 The list in the article itself is nothing you probably haven’t heard before if you have been following the issue even vaguely. I don’t follow it much anymore; I haven’t heard anything new in quite some time and it gets rather old reading the exact same flamewar again and again. If I want to read a good flamewar, I go back and read some archives of the Torvalds vs. Tanenbaum flamewar of the century. Despite being an old list, a few things stood out to me on a reread:

  • No DRM. Of all the reasons presented, this is the one that I sympathize with the most. It is absolutely not a good reason to take something without paying the creator, but I could understand buying a copy and then pirating it. That way, the creator gets paid and you get a DRM-free game. In my experience, it is not the hardcore pirates who get hurt the most by DRM. It is the honest, “I bought this and I want to use it” type that gets burnt. I have written before about the effort it took to get around a bit of DRM for use with a legitimate copy of the game. In that example, I had a legal copy of Windows running legally in a VM, on which I was trying to run a game that I bought in a store. Nothing shady about it. But it was a major pain.
  • Demos are too short. This one is absurd. Someone may prefer a longer demo, but really: you need to pirate a game to try it out? My gaming budget is pretty small. I have been buying games once every few months, I’d estimate. I could buy more, but I’d rather have a new book on my nightstand than another game for my PC or the Xbox. I seldom load demos of any kind. I usually look for a specific kind of game (action or strategy), then read the reviews on the latest and greatest. If something sounds good, I buy it. Otherwise, I don’t. Sometimes I load a demo, but it has been a while. Usually, other people’s impressions mean a lot more.
  • Price. Irrelevant. I think gas costs too much. Does that give me the right to fill up my car and drive off without paying? Of course not. Why should games be any different? Many games are overpriced when they come out (the article quotes people complaining about “$60 games”). However, if you wait a few months something amazing happens: the prices drop. Far and fast. It isn’t long until they run at $20. A little longer and they are at $10.
  • Quality. I agree with the criticisms of the state of modern game development. Most games do lack originality, most are poor knockoffs of other games, and so on. But if the game’s quality is too low to be worth paying for, then why play the stinking game at all? I have seen a lot of rotten games that I wouldn’t buy–but then again, I wouldn’t play them, either.

Really, I think the sticking point is #3: price. DRM is a hassle that afflicts the innocent, but I think that, for the most part, the people who claim it as a reason (that is, if they even know what the heck DRM is) are probably not those most affected by it. Even in my example above, I was able to work around the issues I had. Some people simply don’t think the quality is worth it (the paradox of which I have already pointed out: if the quality isn’t worth it, why use it?), and others unabashedly don’t want to pay for it. All in all, it seems rather indicative of our society: attitudes of entitlement, the idea that if you can get away with it, it’s fine, and so on.

KDE 4…

Now playing on Windows machines everywhere. Well, not everywhere, but at least at Computerworld’s. Faced with doing a bit of web work, I booted into my Vista partition so that I would have the glory of IE 7 at my beck and call…and because we had been running some tests at work on it and I was too darn lazy to reboot into Linux. I saw the aforelinked article a little before this most auspicious occasion and decided to download KOffice on Vista. The reason? I wanted to use Krita for my image work, rather than the gimp. The reason is a tired old one, but still true: I hate the way the gimp creates about a zillion windows, cluttering up everything. Usually, MDI is a bad thing, but image manipulation is one of the few occasions on which I would personally sanction it. Heck, without virtual desktops (either the built-in ones on Linux or through the fine add-on VirtuaWin for Windows) I’d say that the gimp is well-nigh unusable. Anyway, I digress. The article on Computerworld is actually pretty favorable towards KDE 4 apps on Windows. I really wish that the situation were as sunny as they made it out to be. I used, or tried to use, Krita and Amarok (which is the finest music player, IMHO, that I have ever used). Krita hung and crashed, and Amarok, well, I gave up on Amarok: I have all of my music on my Linux partition, which I mount on Windows as the L: drive. I figured that I would point Amarok at that path for its collection and away I’d go, listening to my oggs happily (something that is a pain to get set up in Windows Media Player: it requires a separate codec download and still fails to show any ogg files in the collection, just MP3), except for one glitch: in its current state, Amarok 2 on Windows will not allow you to select a directory on any drive except the C: drive. Now, I understand that Amarok is in alpha and that KDE 4 on Windows isn’t much further along, but I will say this: I can’t really use the KDE 4 suites on Windows for my main work yet. It is just too flaky. I hold out high hopes. As a developer, I understand that new software requires some work to get fully polished, but KOffice 2 isn’t quite ready to challenge OpenOffice as the best Office clone (which is another rant for another time).

Recaptcha

Well, I installed recaptcha on this blog last week. I was sick of receiving tons of spam for such sleaze as I would rather not think of. Recaptcha was selected on the recommendation of a colleague. This past week, a form that we set up for a client was being abused to send so much spam that our (legitimate) servers were being blacklisted all over the globe. Finally, we put in the captcha (over objections from more sales- and marketing-oriented minds) and it, combined with changing the static IP of our mail server, got us unblacklisted. So, it worked at work, and I am happy to say that I have not had any spam in “awaiting moderation” since installing it. I am equally sure that it will see use in a website that I am currently building.

The other cool thing about recaptcha, besides the fact that it is an excellent captcha in its own right, is the somewhat novel method used for generation and verification of images. From their website:

reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly.

But if a computer can’t read such a CAPTCHA, how does the system know the correct answer to the puzzle? Here’s how: Each new word that cannot be read correctly by OCR is given to a user in conjunction with another word for which the answer is already known. The user is then asked to read both words. If they solve the one for which the answer is known, the system assumes their answer is correct for the new one. The system then gives the new image to a number of other people to determine, with higher confidence, whether the original answer was correct.

Cool, huh? It also occurs to me that the uses of this could go well beyond aiding in the OCRing of a bunch of documents. If their OCR software is using neural networks (and today, whose isn’t?), the amount of training data that could wind up in their particular network is nothing short of astounding. It would be nice if we could see the end result! The project itself is being run by Carnegie Mellon, so I’m sure that if anything truly interesting comes of it, something will be published. That said, the site doesn’t seem to contain any references to the influence this could have on artificial intelligence and character recognition, so I can’t even be sure whether they are trying to observe the pattern matching or whether it is just a bright idea to improve on existing QA methods for OCR.
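To make the scheme a bit more concrete, here is a little sketch of the pairing/verification step as I read the description above. To be clear, this is my own toy reconstruction, not anything from reCAPTCHA’s actual code, and all of the names (record-solution, accepted-reading, the vote threshold of three) are made up:

;; One control word with a trusted answer is shown next to one unknown word.
;; If the user gets the control word right, their reading of the unknown word
;; counts as a vote; once enough independent votes agree, accept that reading.
(defparameter *votes* (make-hash-table :test #'equal))

(defun record-solution (known-answer user-known user-unknown unknown-id)
  (when (string-equal known-answer user-known)
    (push user-unknown (gethash unknown-id *votes*))))

(defun accepted-reading (unknown-id &key (threshold 3))
  (let ((counts (make-hash-table :test #'equal)))
    (dolist (guess (gethash unknown-id *votes*))
      (when (>= (incf (gethash guess counts 0)) threshold)
        (return-from accepted-reading guess)))))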

Now, on Friday no less, you get a twofer: a captcha recommendation and a rambling tangent about the AI involved. But that is how the MCS’s mind works.

InnoDB Diversion

At the MySQL Performance Blog, the good writer took time out recently to show us his script to convert tables to InnoDB. Recently, I also had to convert a large quantity of MyISAM tables (come on! you’re better off with SQLite if you’re going to use MyISAM for an application) to InnoDB. My approach used not the fine tools from Maatkit, but good old Bash in conjunction with the MySQL command line client:

#!/bin/bash
# Print an "alter table ... engine=innodb" statement for every table in the
# database named by $1 (sed strips the "Tables_in_<db>" header from the output).
echo 'show tables' | mysql -uroot -ppassword $1 | sed "/^Tables_in_$1$/d" | awk '{ print "alter table " $0 " engine=innodb;" }'

Naturally, root’s password would not be password and, if you are on a server hosted/administered by someone else, you would not want to leave the password in the shell’s history, but you get the idea. Note that the script only prints the ALTER statements; pipe its output back into the mysql client to actually run the conversions. The advantage: on a Linux box running MySQL, you can depend on bash, the mysql client, sed, and awk being installed a lot more than you can on Maatkit.