Paperless Offices

I recently inherited a large set of files from a coworker who had moved on to greener pastures, by which I do not mean I received a zip full of Excel spreadsheets. These are honest to goodness paper files and folders. Working in IT, you would think that if the paperless office had arrived, it would at least be present in that department, if no other. Alas, it has not. This is the second set of files that I have inherited during my time here and my own set of files is getting to be fairly large. Most folders are stuffed with hand-written notes I took during various planning, scoping, and implementation stages.

The point is not to throw stones, but to make an observation: even in technical fields, few people are actually implementing paperless offices. If anything, the laser printer allows us to run off far more paper than ever before, because its low threshold of use and expense allows people to use it for far more trivial things than would ever have been justified in the dark ages. Computers allow us to store more information than is readily doable with paper, in a more portable fashion (most laptops have enough hard drive space to carry libraries of information, available at the brush of a finger), more safely (a laptop and a few spare drives are a lot less of a fire hazard than cupboards stuffed with paper and cardboard), and more intelligibly (typed notes will be legible to anyone, whereas many people, like me, have terrible handwriting).

Here, it would almost be convenient to scapegoat the older workers and simply blame them. It is their dedication to the old order that keeps the rest of us from making progress. There is some small amount of truth here. We have probably all seen that one person who prints off every darn e-mail only to turn around and file it in a cabinet. However, this would be greatly oversimplified. When I was in college, I noticed that few computer science students used their laptops for notetaking. There were plenty of laptops in the room, mind you, but most of their owners were surfing or playing solitaire (I played solitaire through a solid semester of Jewish History–did well on the tests, though). The students who actually took notes did so, by and large, in notebooks or binders. So, that old guy over there might abuse the printer a little more than usual, but he is not the real issue here. After all, the younger generation, which is growing up on iPods and Facebook, takes notes by hand.

The reasons for this, then, are more deep-seated than a simple matter of generation. The sad truth for the proponent of the digital office is that computers are simply not yet convenient enough for this purpose. Notepads are still much more convenient than laptops for most purposes. When taking notes, it is not uncommon to be scribbling down diagrams and making outlines in ways that a computer may present better, but which require a little more effort upfront. If you are typing your proposal, the extra few seconds to make the bullets look good are well worth it. If there is a flurry of talk in a meeting or during a lecture, those extra few seconds put you too far behind. Similarly, those diagrams may take a fast user ten minutes to put together–but ten minutes is simply too long when everything is happening in real time, especially when they could be sketched in seconds.

Laptops, light as they are, are still appreciably heavier than notepads, so it is a lot more convenient to grab a notepad than to haul out the laptop. Battery life is also a concern. Despite advertising to the contrary, the best you can usually do is a few hours of battery time at full use. Sure, you can get more life if you don’t use the machine as much, but then it isn’t as useful either. This particular group of objections should be handled within the next few generations of hardware. With netbooks becoming more common, general laptop size decreasing, and battery life increasing, these problems should go away quite soon. Cost does not really seem to be an issue anymore. Most college students have laptops–virtually all have computers and could have had a laptop had they so chosen. Like I wrote above, the problem was not that students did not have laptops, but that they were not using them to go paperless.

All right, then, if we can’t get people to use their computers yet, what about digitizing the output? At least we could save the storage and remove that old fire hazard. Not necessarily a bad idea, but retyping and resketching all notes is quite time consuming. Scanning presents another option, but several factors make it a loss: several times the storage (storing those notes as TIFFs instead of TXT or DOC will tax your drive space far more), greater difficulty reading (scans, especially of penciled or highlighted text, are often harder to read), loss of the ability to search (which, as Google, Apple, and Microsoft are all realizing, is one of the most important capabilities of computerized documents), loss of flexibility (when things change, altering those TIFFs is a lot harder than changing a text document), and poor software interfaces (have you seen most document management systems?).

The real problem is ultimately one of convenience. We could bring our laptops to everything. We could type it all in Word or OpenOffice. We could use the touch pad to do all of our diagramming. But we don’t because it is not sufficiently convenient for the problem at hand and the reasons lie in both the hardware and the software.

On the hardware side of things, we need to see laptops that are even lighter (without loss in functionality; about the only thing I can see going is the CD drive–more and more software is web driven or, at least, could be deployed from another machine, and more and more music is being stored digitally) with even longer battery life. Additionally, an easy way to make quick sketches is key. I am sure advancements could be made in diagramming software, but until someone can take a stylus and make a quick sketch as readily as they could with a pen, the laptop will still not be good enough. It would also help if laptop screens were more like paper–in short, if we could see digital ink making its way from niche ebook readers onto laptops, so that notes can be viewed cleanly and crisply in a way that will not tire the eyes the way traditional displays do.

On the software side of things, we need software that is more conducive to taking notes the way people take them in real life. Outliners are good, but people do not take perfectly outlined notes on the fly–nor can they be expected to. Oftentimes, notes are taken in brainstorming or design sessions. These meetings cannot be rigidly organized without losing all their utility. One of the benefits of paper note taking is the loose, semi-organized way in which notes and diagrams can be taken and mixed together. This would need to be made available through software.

There is still the X factor. Speaking for myself, I enjoy the feel of handwriting and the look of paper. It is a relief, after using computers and technology all day long, to be able to look at and feel something different. I doubt that, for me personally, this will ever go away. However, by and large, except for a few strange people (like me; I even have a manual typewriter), this will fall away in the next generation or so, leaving only the items above.

So, will we ever see the paperless office? I do not think that question can be answered with any degree of certainty. My personal point of view is that the hardware will be there within the next ten years. The software is a trickier proposition–it could happen at any time. Tomorrow someone could write the perfect software or it could take another thirty years. Even once this happens, paper will linger a while longer.

A Quick PL Thought

I hate programming languages that make me do a lot of typing at a stretch. If I can type for a long time, it means I don’t have to think while I’m doing it and if I don’t have to think about it, it can be automated–and if it can be automated, the blasted machine should be doing it anyway.
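To make the point concrete, here is a hypothetical sketch in Python (the field names and helper are mine, not from any real codebase): the kind of near-identical declarations that some languages make you type by hand can be produced mechanically, which is exactly the signal that the machine should be doing it.

```python
# Hypothetical illustration: near-identical accessors that could be
# typed out by hand require no thought while typing them -- so let the
# machine generate them instead.
FIELDS = ["name", "age", "email"]

def make_getter(field):
    """Build a getter for one field, instead of hand-typing each one."""
    def getter(obj):
        return obj[field]
    getter.__name__ = "get_" + field
    return getter

# One comprehension replaces a page of boilerplate.
getters = {f: make_getter(f) for f in FIELDS}

person = {"name": "Ada", "age": 36, "email": "ada@example.com"}
print(getters["name"](person))  # prints "Ada"
```

The same argument applies to any macro system, code generator, or sufficiently expressive language feature: long stretches of mindless typing are a design smell, not a virtue.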

Open Source 3D Printing

Recently, I was very surprised to come across some open source 3D printers. I stumbled across them while looking to see how cheaply one could buy a 3D printer (or “fabber”). The cheapest thing I found was about $15k. That is certainly reasonable if you are a large corporation and it is for your engineers or R&D people, but not so reasonable if you are just a mad tinkerer like me. The schematics are free to download and the software is released under the GPL. Frankly, I am a lot more interested in those free schematics than in the GPL’d code, but I’ll take the whole shebang. So far, I have come across at least three open source kits to do this:

There are even the beginnings of a community at Thingiverse swapping designs for objects that can be created with these home brew fabbers. This really has my curiosity running at this point. Obviously, the components can be purchased for a fraction of the price you would pay for a “cheap” 3D printer. The idea has a lot of appeal for me, personally. When I was younger, I used to design board games on notebook paper, use pen and crayons to lay out the pieces and such, then tape all of the pieces together. I could go back to the old drawing board with the ability to do something a little more elaborate. Moreover, I’ve had various ideas for things over the years that this would have been excellent for. Finally, building the thing would be a heck of a lot of fun. When it comes down to it, you would be building a manufacturing robot. How awesome is that?

Lots of Insipid Stupid Parentheses

For a bit of private research, I was reading some papers on MLisp, a Lisp dialect (a pre-processor, technically, as it simply compiles its input into normal, S-expression Lisp code) based on M-expressions. Given that the first paper I read was published in 1968, it seems that people have been griping about Lisp’s parentheses for almost as long as there has been a Lisp to complain about. Of course, as Bjarne Stroustrup said, “There are only two kinds of languages: the ones people complain about and the ones nobody uses.”

Some of the original motivations behind MLisp have fallen away. For example, the MLisp User’s Manual mentions three motivations (page 2):

  1. The flow of control is very difficult to follow. Since comments are not permitted, the programmer is completely unable to provide written assistance.
  2. An inordinate amount of time is spent balancing parentheses. It is frequently a non-trivial task just to determine which expressions belong to which other expressions.
  3. The notation of LISP is far from the most natural or mnemonic for a language, making the understanding of most routines fairly difficult.

Both Scheme and Common Lisp (pretty much the only remaining living variants of Lisp) provide comments. Since R6RS, Scheme includes multiline comments as well as single-line ones, so this motivation is clearly gone. Motivations two and three really have no business being separate: they both say that Lisp is hard to read, a complaint still alive thirty-five years later, to the point where Peter Seibel’s 2003 book, Practical Common Lisp, briefly addresses it near the beginning.

Here is a snippet of the result, from Enea, Horace (1968), MLISP:

A := DO I := I+1 UNTIL FN(I);
RR := <READ(), READ()>;
B := COLLECT <I := FN(I)> UNTIL I EQ 'END;
WHILE ¬((A := READ()) EQ 'END) DO INPUT(A);
C := WHILE ¬((A := READ()) EQ 'END) COLLECT A;
FOR I ON L DO FN(I);
J := FOR I IN L DO FN(I) UNTIL QN(I);
FOR I IN 1 BY 4 TO 13 DO FN(I);
FOR I IN 1 TO 10 DO FN(I);
J := FOR I IN L COLLECT FN(I);
J := FN(FUNCTION(+), FUNCTION(TIMES));
OFF;
END. (Input follows end.)

This MLisp, which looks like an evil union between Pascal and Basic, is the result of one of (if not the) earliest attempts to solve that problem of those pesky parentheses. So, over forty years ago, we get to see two traditions established:

  1. People whining about the parentheses in Lisp
  2. People using Lisp to build DSLs

And the world ever is as it always was.

Web Servers in the Language du Jour

Has anyone besides me noticed an increased tendency for people to write new web servers in their language du jour? For example, we’ve got the WebServer CodePlex project to write one in C# .NET. Django packages one written in Python, for development purposes. Ruby has Mongrel. There is Hunchentoot for Common Lisp. Heck, I even found a Perl one on SourceForge whose last file release date was in 2000.
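Part of the tendency, I suspect, is that a toy HTTP server takes only a handful of lines in any modern language. As an illustrative sketch (not any of the servers named above, and with port and function name of my own choosing), here is a single-request server in Python:

```python
import socket

def serve_once(port):
    """Answer exactly one HTTP request with a fixed body, then exit.

    A toy, of course -- no request parsing, no keep-alive, no error
    handling -- but it shows how little code a minimal "web server"
    takes, which may explain why every language community writes one.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    body = b"hello, world"
    headers = (b"HTTP/1.0 200 OK\r\n"
               b"Content-Type: text/plain\r\n"
               b"Content-Length: %d\r\n\r\n" % len(body))
    conn.sendall(headers + body)
    conn.close()
    srv.close()
```

The gulf between a toy like this and a production server–concurrency, security, spec compliance–is exactly why the du-jour servers worry me.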

The height of absurdity comes with nanoweb, a web server written in PHP. That just seems wrong, like the programming gods should strike someone down for even thinking about it. That’s right. It’s not enough to watch the world blow security holes in PHP web applications, now they get to do it in PHP web servers, too. That’s just great.

Whatever happened to good old C-based web servers, like Apache? About the only one in that list I can really see is Django’s. It really does simplify development by allowing you to push deployment details off until you are ready to deploy, and Django warns you against deploying on the development web server. Visual Studio does the same thing when you are testing ASP.NET applications. The other ones, though, actually want to be production web servers. About the only way you could use Visual Studio’s (which, dollars to donuts, is probably just a stripped down version of IIS) would be to run the project in debug mode on the server in an instance of Visual Studio–which would be just plain stupid. Hunchentoot is also nice, because few web servers have good tools to integrate with Common Lisp. About the best you’ll otherwise do is straight CGI or mod_lisp–and, with mod_lisp, you will still have to interact with the module at a fairly low level (which I found disappointing).

If you are running a web application for the whole world to see, then you are far better off with a larger-scale HTTP server, like Apache, IIS, or Lighttpd. If you are building embedded applications, use one of the micro C-based servers–you’ll need those precious ounces of resources that C can save, even more so if you are embedding the thing in a printer or something like that.

Dumb quote of the day…

“The value of comments should be obvious: in general, the clarity of a program is directly proportional to the number of comments.” –David Canfield Smith, “MLisp Users’ Guide” Stanford Artificial Intelligence Project, Memo AI-84, page 5.

I guess Mr. Smith never bumped into one of those programmers (we’ve all seen them), who do things like this:

// add 1 to i
i = i + 1;

Such programmers fill the source with comments that contribute nothing to the understanding of the flow of the program. Or, how about someone who does this:

[Ed: snipped about three hundred lines ]
printf("hello, world!");

The Smith Conjecture, as I do here and now dub it, is fatally flawed, doomed to be replaced with: “Programmers are in a race with the Universe to create bigger and better idiot-proof programs, while the Universe is trying to create bigger and better idiots. So far the Universe is winning.” (Richard Cook).

Perhaps, if the whole world were full of Donald Knuths, whose literate programming treats every bit of code as a piece of literature, things would be different. However, we have a lot more blub programmers running around abusing comments to the maximum. I would also argue that few real-life systems can afford that sort of self-documentation, whether written by hack programmers or true craftsmen. If you are building anything that is actually needed, there is probably little room for this approach, because the world will keep changing, and it will keep changing at such a rate that you cannot write another book every time it does.

Ubuntu’s Hardware Support

When I bought a new computer over a year and a half ago, I was unpleasantly surprised to find that the then-current Ubuntu did not fully support my hardware. I have written here about some of the trials and tribulations I have had getting everything set up just so. Ultimately, I had to wait a couple of versions, but the most important thing I did to get everything running stably was to upgrade from the Ubuntu-sanctioned kernel version of 2.6.28 to the newer 2.6.30.

Many an article has been written about how I never should have had to do that, or know that, or have any concept of what a kernel was, how it differed from the operating system or desktop environment as a whole, or what version I needed. I will say upfront that I agree. While I have a pretty good nuts ’n’ bolts knowledge of a Linux desktop, I should never need it to get up and running. The real problem here is less one of raw technical capability (since I was able to solve the problem with an upgrade) than the simple fact that most manufacturers give Linux no thought on the desktop. Windows would run a lot rougher if OEMs did not work with Microsoft to ensure otherwise. The idea of this all working, out of the box, with no OEM involvement is simply ridiculous. The only ones who can test a hardware configuration before it is released into the wild are the ones putting it together in the first place.

Until OEMs start working with the major Linux distros (or, at least, the major distro of their choice), this problem will never entirely go away. Contrary to what many Linux advocates say, OEMs are not evil. Ultimately, they don’t care what operating system people run, as long as the money winds up in their pockets. If Dell believed that an immediate adoption of Haiku (an OSS BeOS clone) would make them top dog, they would do it. Apple straddles a fine line between being a software company and a hardware company, but this is not so of HP/Compaq, Dell, Gateway, eMachines & company. They sell boxen. If changing OSes or supporting more OSes would mean more sales, they would do it.

The only way, then, that the OEMs will ever support Linux is when there are enough Linux desktop users that it is worthwhile in terms of simple supply and demand. The only way that Linux will make those inroads is if distro packagers make life as easy as is humanly possible in the meanwhile. I have to assume that I am not the only one to purchase a machine that needed a bleeding edge setup to work properly. So, the only way to really serve these users (like me!) is to make it easier to go bleeding edge when necessary. I understand the idea of sticking to one version of the kernel, like 2.6.28, for the duration of a release. It makes it a lot simpler to ensure that all of the software will work together.

However, to accommodate those with newer rigs, the clear solution is to make it easier to go bleeding edge. It need not be anything so trivial as clicking a check box in some preferences dialog, but it should be easier than it is now to use a later kernel with a given release. Fedora almost gets it right with Rawhide: by changing a simple option, you can go bleeding edge. However, it is less bleeding edge than it is like running off a random dev’s test box; you never know if it will work the next morning. For the system components that directly support hardware, like the kernel and some of the low-level daemons, I would recommend a special backports-type repository that is updated alongside the stable release. If there are hardware difficulties, make it easier for the user to use NDISwrapper (the best thing ever to happen to Linux wireless) and to upgrade to later versions of the kernel without sitting atop Linus’s Git branch. It would not be perfect, but it would help a great deal because, as things stand now, you have to fight with Fedora or Ubuntu (or else do an inordinate amount of work) to use a version of the kernel that is not officially sanctioned.


Scheme Standardization

I have been following the R6RS and R7RS discussion processes since shortly after the beginning of the former. It is educational, if nothing else, and I do enjoy watching the debates, though I have seldom posted to the group. As with virtually all engineering, most decisions are less matters of things that are strictly correct or strictly incorrect (read: wrong) than they are discussions of tradeoffs. I have little doubt that there would be a lot less heat in these debates if more issues were strictly right or strictly wrong. It would then become less a question of design and more a question of solving the problem in the straight-out method used to solve problems in mathematics. All of those posting are extremely intelligent. This is not surprising. Given the state of the industry, few blub programmers ever make it so far as to hear about Scheme, let alone care about the next standard issued under that name. Most of these people have PhDs and are doing this as part of their research.

So, I would summarize the R6RS and R7RS mailing lists as a lot of smart people arguing heatedly over design tradeoffs. At least it keeps things from being boring. I find it interesting how dedicated these people are to sitting down and proving to the whole group that their way is obviously the best way. It may very well be, but if the majority of those standardizing Scheme do not want it, why worry? Why not take the R4RS, R5RS, or R6RS and draft your own spec, publishing it under your own name? Just take it and create another Lisp dialect. Show us all that it is better than Scheme or Common Lisp. It is almost as if the languages world has decided that we shall have precisely two Lisps: Scheme and Common Lisp. Most of the Lisp-esque languages out there start from Scheme or Common Lisp and make some minimum number of tweaks (often, like Clojure, to make the language run on some other platform) rather than designing a new language from the ground up. It seems to me that there is plenty of room for interesting experimentation. In fact, it seems to me that the standardization process would be a lot more fruitful if we could see a lot more Lisps out in the wild. We could take the good, avoid the bad, and have real, living models to look at instead of airy discussions.

This is Awesome

MonoDevelop, in version 2.0, has vi keybindings available. As a vi-addict, this makes me one happy camper. Especially because MonoDevelop is available on Windows, Mac, and Linux…