Ubuntu 8.10

…is, without a doubt, the worst Ubuntu release I have ever used (understanding, of course, that the first release I used was 6.06 or so, and that on an older machine; prior to that I was 100% Gentoo). Not having anything better to publish yet (though there is a lot on the burner, so stay tuned!), I figured I’d rant for a minute.

I run Ubuntu, primarily, on my Compaq Presario F700 laptop. Under Ubuntu 8.04, everything I tried worked: wireless (thanks to my old friend ndiswrapper), accelerated video (how I love nvidia), the whole shebang. All I hadn’t tried was the modem (why are there still modems on laptops, anyway? I can’t remember the last time I used one) and the xD/MMC/SD card reader. I swore I wouldn’t upgrade. Everything was working, and working well. Then I got curious. Kubuntu 8.10 even defaulted to KDE 4.1, which I had heard so many good things about after the dreadful (or, perhaps it would be fairer to say, incomplete) 4.0 release. So I dist-upgraded my system, and the world fell apart before my eyes. Suspend and hibernation broke on the spot, wireless followed, and before I knew it, I had broken the nvidia binary driver. I reinstalled. Shortly, I had nvidia back. It took some hassling, but eventually I got wireless back, too. Suspend and hibernation, though, stayed broken. After more tinkering I have hibernation, but no suspend.

After trying to deal with it for a bit, I have decided that I don’t want to deal with it. I am downgrading to 8.04 and will roll on with life. Here is hoping that 9.04 is a better release than 8.10. More up-to-date software has been touted as the advantage of using Ubuntu over Debian, and it is, in fact, an advantage, but it is also a pretty big disadvantage. Frankly, I would rather see one awesome Ubuntu release a year than have to re-tinker my laptop every six months. I am glad to see, though, that 8.10 finally uses the restricted drivers automatically, allowing the user to revert to a free-software-only system if they want to. It makes getting up and running easier (the first thing I do on my laptop is enable the restricted drivers) and it fits the needs of a much greater percentage of Ubuntu’s users and potential users.

Postscript: My downgrade is more or less complete and everything works beautifully again.

Why isn’t FTP Dead Yet?

Today I was looking at setting up a git repository for a little utility (which I hope to release shortly) to share code with the big old world, and I found myself googling how to use git, and other distributed source control systems, over FTP, and then found myself asking “why?”. The first half of the why is very simple. I want to share the repository when my killer app (I wish) gets released, but I would like to host it on mad-computer-scientist.com rather than setting up a Google Code or SourceForge page for it; that seems like rather too much work for a little app. MCP lives on a cheap, shared host, and the only file access is by FTP. So, naturally, any source control traffic (download or upload) will have to come over the FTP protocol. Which brings me to the second half of the why and the thrust of the whole post.
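
For the record, git’s “dumb” transports make this just barely workable: run git update-server-info in the repository so the metadata files clients need exist, and then, as I understand it, the repository can be fetched over an ftp:// URL. The ugly part is the upload. Here is a sketch of the mirroring half in PHP, since that is what I write all day; the host, login, and paths are placeholders, not my real setup:

```php
<?php
// Sketch of the workaround: publish a git repository over plain FTP as
// a "dumb" remote. Run `git update-server-info` in the repo first so
// that the info/refs file exists for clients to read.
function ftp_put_tree($conn, $remoteDir, $localDir)
{
    @ftp_mkdir($conn, $remoteDir); // may already exist; ignore failure
    foreach (scandir($localDir) as $entry) {
        if ($entry === '.' || $entry === '..') {
            continue;
        }
        $local  = "$localDir/$entry";
        $remote = "$remoteDir/$entry";
        if (is_dir($local)) {
            ftp_put_tree($conn, $remote, $local); // recurse into subdirs
        } else {
            ftp_put($conn, $remote, $local, FTP_BINARY);
        }
    }
}

$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'username', 'password'); // in the clear, of course...
ftp_pasv($conn, true);
ftp_put_tree($conn, '/public_html/myapp.git', '/home/me/myapp/.git');
ftp_close($conn);
```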

Why, at this point, do we still not have a better, widespread method of exchanging files? FTP, by default, exchanges login credentials in plain text and is, therefore, quite insecure. Yes, there are SFTP and FTP over SSL, but the vast majority of FTP setups do not and would not use these measures, and, at any rate, they are mere spackling over cracks in a poor protocol. The separate data connection is another sore spot: in active mode, the server opens the data connection back to the client, which is a headache for modern NAT firewalls.
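
To make the first complaint concrete, here is roughly what an FTP login looks like on the wire. The whole exchange, password included, is readable by anyone sitting in between (the server and credentials here are made up):

```php
<?php
// Speak raw FTP over a plain TCP socket to show that the credentials
// cross the wire as ordinary text. Server name and login are made up.
$fp = fsockopen('ftp.example.com', 21, $errno, $errstr, 10);
if (!$fp) {
    die("connect failed: $errstr ($errno)\n");
}
echo fgets($fp);                // "220 Service ready for new user."
fwrite($fp, "USER alice\r\n");  // the username, in the clear
echo fgets($fp);                // "331 User name okay, need password."
fwrite($fp, "PASS s3cret\r\n"); // ...and the password, in the clear
echo fgets($fp);                // "230 User logged in." (one hopes)
fwrite($fp, "QUIT\r\n");
fclose($fp);
```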

It would be mean-spirited and short-sighted to declaim any of these faults as being “obviously” wrong. They may be now, but FTP dates back almost thirty years. It is an example of experimentation, both successful and not, and an example of a design that was outgrown by the demands placed on it. The dual-socket design and plaintext authentication were not problems when the first RFCs were coming out. They were features. They made the protocol easy to implement and use (back in an era when the idea of a user actually entering raw protocol commands was not far-fetched). Today, these things, and others like them, are a pain. A pain that has been hacked around to enable FTP to continue functioning in the 21st century, but a pain nonetheless.

So, why don’t we have a better file transfer protocol than the File Transfer Protocol? Here is what I would like to see in an NFTP (new file transfer protocol), with a sketch of what an exchange might look like after the list:

  • Drop the whole idea of ASCII/Binary transfer modes. It’s all bytes in the end. Use a MIME type, if necessary, to indicate what is being transferred.
  • No more Active/Passive mode. Like HTTP, just have a request/response.
  • Make the authentication process secure by design. No, this does not inherently solve all problems, but, at the minimum, mandate encryption for the authentication stage.
  • A standard way of representing the file system hierarchy. I can’t remember where, but I remember reading that parsing the file listing format was often a problem when implementing an FTP client because servers differed so much in how they returned the data.

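Since all of this is hypothetical anyway, here is a back-of-the-napkin sketch of what a single NFTP exchange might look like: one encrypted connection, one request, one response, in the spirit of HTTP. Every verb, header, and port number below is invented for illustration:

```php
<?php
// Hypothetical NFTP client. One TLS connection, one request/response
// pair: no second data socket, no active/passive dance, no ASCII mode.
// The scheme, port 9021, the verbs, and the header names are invented.
$fp = fsockopen('tls://files.example.com', 9021, $errno, $errstr, 10);
if (!$fp) {
    die("connect failed: $errstr ($errno)\n");
}
// Authentication travels inside the encrypted channel by design.
fwrite($fp, "FETCH /reports/summary.csv NFTP/1.0\r\n"
          . "Credentials: alice:s3cret\r\n\r\n");
// An imagined response:
//   NFTP/1.0 200 OK
//   Content-Type: text/csv      <- MIME type instead of ASCII/binary
//   Content-Length: 1742
//   ...1742 bytes of file data...
// A LIST verb could likewise return the directory hierarchy in one
// standard, structured format, ending the listing-parsing guesswork.
while (!feof($fp)) {
    echo fgets($fp);
}
fclose($fp);
```
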
I’m sure that there are other things that could and should belong on this list. Maybe a protocol like this already exists and I just don’t know about it. Of course, someone will probably explain how I am an idiot for saying most of this, and that’s fine. I’m just rambling anyway. But even if we got this protocol tomorrow, it would matter little. FTP is everywhere, especially on cheap hosting servers, and it would be quite a while before the majority of the world benefited. Just as it was a long time before anything besides PHP/MySQL was available on most shared hosting accounts, it will be a long time before anything other than vanilla FTP is offered in shared hosting. The answer to the title is simple: FTP survives because it is the lowest common denominator, and that makes it too common to simply die.

PHP\namespaces\backwards_compatibility

As may or may not be evident, my current professional programming is largely PHP for a transportation company. As such, it obviously behooves me to keep abreast of changes in the world of PHP. The next version of PHP, 5.3, is set to come out with one major feature that I have personally wanted: namespaces. No language that is used for large-scale development can really live without them. You can hack your way around it, but, in the end, you are still using namespaces (or packages, or assemblies, or whatever the name du jour is), you are just using them poorly. A case in point is the ubiquitous mysql extension. Every function begins with the prefix mysql_. The reasons are obvious: if you just say query('select…'), there is no guarantee that it will not clash with some user-defined function, and besides, how many database systems wouldn’t want to use that name? Which is exactly the problem: the chance for conflict is simply too large. So the extension defines a namespace, mysql, where the ad-hoc implementation simply involves tacking mysql_ onto the front of everything. To be clear: I do not blame the writers of the extension for this. It is a necessity given the current state of PHP.
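
To illustrate, here is a sketch of how the same separation could look with real namespaces (using the bracketed syntax so it all fits in one file; the actual extension is not, of course, written this way):

```php
<?php
// Sketch only: the real mysql extension keeps its mysql_ prefixes.
namespace mysql {
    // Inside a real namespace, the short name query() is safe to claim.
    function query($sql) { /* talk to a MySQL server... */ }
}

namespace pgsql {
    // Another driver can use the very same short name with no clash.
    function query($sql) { /* talk to a PostgreSQL server... */ }
}

namespace {
    // Calling code says exactly which one it means.
    \mysql\query('SELECT 1');
    \pgsql\query('SELECT 1');
}
```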

Even in my own code, I have seen a use for this. We have had to write implementations of EDI (ANSI X12) formats. Naturally, the terminology in the files themselves is similar, and I usually find myself dividing a file into header, detail, and footer sections anyway, so the names would naturally conflict. Usually, the formats do not get used within the same script, so it is not apt to be a problem. Sometimes they do, however, and the chance is always there for it to be needed in the future. So, I tack the name of the format onto the beginning of each class name: Edi999Footer, for example. Creating a package Edi999 with a class Footer would seem much more natural and would head off any potential problem very nicely. So, in short, I was looking forward to namespaces.
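
In other words, something like the following (the class bodies here are hypothetical; the real ones belong to my employer):

```php
<?php
// One namespace per EDI format; the class names inside stay short.
namespace Edi999;

class Header { /* 999 header segments... */ }
class Detail { /* ... */ }
class Footer { /* ... */ }

// Consuming code would then read:
//   $footer = new \Edi999\Footer();
// instead of the mangled:
//   $footer = new Edi999Footer();
```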

Then I found out what the currently selected separator is: \. Like a great many other people, I do not like this at all. While some were purportedly saying that they did not want their source files looking like Windows registry dumps, my reason is that the backslash is, almost universally, an escape character. I know that I will probably read

new Package\Nubia()

as

new Package
ubia()

while I am scanning through source files (not that these names exist; they are just hypothetical). It is simply wrong to use the escape character as a separator (on a tangent, it was wrong when MS-DOS did it, too).
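
For completeness, here is that hypothetical spelled out as real PHP 5.3 code; outside of a string, \N is not an escape sequence, however much my eyes insist otherwise:

```php
<?php
namespace Package;

// The name from my hypothetical above; it exists only for this demo.
class Nubia {}

// The backslash separator in action. This parses fine; it just reads
// badly to anyone whose reflexes treat \ as an escape character.
$n = new \Package\Nubia();
var_dump($n);
```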

That said, I understand the snag they had hit: they really were running out of common characters and character combinations, at least if the goal was to maintain near-100% compatibility with the previous syntax. To select a separator that most people would consider more decent, one would have to break something in the current PHP language. A friend and I tried a little thought experiment on a blackboard. I wrote up a statement like the one above, but removed the separator, and then we tried to come up with one that wouldn’t break the current language. We couldn’t come up with anything that was really better. Ideas like ==| and ===> were as good as it got, and those are pretty lousy.

However, the addition of classes (which I count as really happening in PHP 5; PHP 4 classes were little better than C structs) and namespaces is, itself, a break within PHP: a break from a dumbed-down Perl with a very scriptish, hackish feel toward something more akin to the “professional” nature of Java or C#. In short, they are already breaking with the original spirit of PHP, so why not break a little from its syntax? I am not proposing to make PHP as strict as Java or C#. If that were the objective, we should just use Java or C#. But if the thrust of the language is going to be more for “professional” developers, then why not modify the syntax accordingly?