ZenCart–how NOT to do Upgrades

I am currently upgrading ZenCart. Why and where are not important. Suffice it to say, the more time I’ve spent with ZenCart, the more I realize that, open source or not, the project manages to do everything wrong.

It all started when I looked at the upgrade instructions. We were upgrading from version 1.3.8 to 1.3.9h. The essence of the instructions is to put a copy of your current install (template modifications and all) in one directory, an unmodified copy of your original version in another, and a fresh install of the new version in a third. Then, you diff your modified install against the unmodified copy to find your changes and manually copy them into the new directory. Finally, you run the automated database upgrade.

That is far too much work, especially when you consider that those instructions are for a minor upgrade.

The process should be very simple: back up the current setup, unpack the new files, and run the database upgrade script.

A large part of the reason for this is that Zen Cart also does templating wrong. Rather than stashing all of the template files somewhere simple (/includes/templates/TEMPLATE, using their organization scheme), they are scattered across the entire install in the form of little overrides. Keeping track of the changes made to an install is unpleasant to begin with (source control helps, but it does not make it at all clear which of the many .php files are original and which are modifications).

When you add in the horrific security bugs that existed in the 1.3.8 line, you get an ecommerce system that I would definitely advise against using.

Vim-like extensions for Visual Studio 2010

Now that I have written about configuring Vim to make it interact better with Visual Studio, I want to take a moment and look at some extensions that seek to put Vim in Visual Studio.

The first, and probably the oldest, is ViEmu. It was the first thing I came across in my quest to use Vim’s fluid editing for .NET development. At $99, it isn’t exactly dirt cheap, but I would happily have bought a license to have Vim in Visual Studio. As an added bonus, the $99 version also integrates into SQL Server Management Studio. So, I downloaded the trial beta for VS 2010 and installed it. The result was astounding instability: multiple crashes, out of the blue, with no rhyme or reason–except that when I removed ViEmu, it all stopped. Apparently, I’m not the only one.

If this were an open source project, I would have been sorely tempted to dig in and see what the problem was, but it isn’t. I simply won’t buy software that makes my development life miserable.

Later, I stumbled across VsVim, an open source project with aims very similar to ViEmu’s. So far, it has proven to be very, very nice. It even detects conflicts between its keybindings and Visual Studio’s. The biggest sign of its overall youth (VS 2010 is the only version it has ever supported) is that many motions are not fully implemented. For example, if you move the cursor over an opening parenthesis or brace in command mode and type ‘d%’ (without the quotes), you get an invalid motion error. In Vim proper, it deletes the parentheses and everything between them.

Looking at the project activity on GitHub, it looks like there is a good deal of activity, which is always a plus on these kinds of projects. The only real oddity is that the author cannot accept source contributions. I guess if anyone wants to make significant changes, they will have to fork it.

Overall, I am very happy with VsVim and am using it day to day. I still use gvim alongside it for those cases where I want full Vim happiness (especially when I am in familiar enough territory that I don’t need or want IntelliSense).

On PowerShell–Or, how Microsoft does not really “get” CLI

I have been splitting my time between PowerShell and bash at work, so I have been able to get a little better acquainted with PowerShell. That is nice, since I have wanted to ever since my professor (a die-hard bash user) mentioned that Microsoft had come out with a new shell that was, in some ways, more advanced than bash.

There are some niceties in PowerShell and, in truth, it can mostly be summarized as Bash.NET or, more likely, Bash#. There are, however, some warts. They are the kind of warts that make one thing abundantly clear: PowerShell was designed by theorists, not everyday users. What I mean is this: some of the usability issues (yes, believe it or not, there is such a thing as usability on the CLI) are so glaringly obvious that the only explanation is that the designers were theorizing about what someone who used a shell would want, rather than drawing on how they themselves would use one. Let’s run through a few examples.

Execution of script files is off by default. For anyone who has used any of the Unix shells, this is almost incomprehensible. One-off scripts, far too long to be typed command by command in a running session but short enough to be dashed off in minutes, are the order of the day. The fact that, by default, you cannot actually run PowerShell scripts is just astounding. To get scripts to run, you must either launch powershell.exe with a switch modifying the policy for that particular session (i.e. something like

powershell.exe -ExecutionPolicy Unrestricted

) or use the Set-ExecutionPolicy commandlet. The latter, however, modifies the registry and so requires administrator rights.
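For reference, the two options look something like this (the script name here is a made-up example):

```powershell
# Per-session override: nothing is persisted, the policy applies
# only to this powershell.exe process.
powershell.exe -ExecutionPolicy Unrestricted -File .\one-off.ps1

# Persistent change: writes to the registry, so it must be run
# from an elevated (administrator) prompt.
Set-ExecutionPolicy RemoteSigned
```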

Next, we have the common housekeeping task of setting permissions. Sysadmins do it all the time. In PowerShell, the process for this mundane task is absolutely daunting. (Note: there is a DOS command, attrib, that will fulfill a similar function with much less headache. However, we are trying to judge PowerShell on its own merits, not on the fact that another command happens to be installed on the system.)

In order to actually change file permissions, you must first get an ACL object for the file system object in question, modify it, and then set the ACL back. The example at the link is fairly innocuous looking, but it is far more work than chmod & chown, and it only gets worse as you try to do something nontrivial.
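For the record, the dance looks roughly like this (the path and account name are made up for illustration):

```powershell
# chmod/chown, PowerShell style: read the ACL, splice in a rule, write it back.
$acl = Get-Acl -Path 'C:\inetpub\wwwroot\web.config'

# Grant the hypothetical account 'ACME\deploy' read access.
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
    -ArgumentList 'ACME\deploy', 'Read', 'Allow'
$acl.AddAccessRule($rule)

Set-Acl -Path 'C:\inetpub\wwwroot\web.config' -AclObject $acl
```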

You cannot zip or unzip directly from the command line. If you run

& .\foo.zip

you will get the Windows zip wizard to come up, the same as if you had invoked any other file that way, but there is no equivalent to:

unzip foo.zip

that will just unzip the file, no questions asked. It has been suggested to me that the issue is one of licensing (namely, that Microsoft’s licensing agreement for zip technologies does not permit them to create a commandlet with this functionality). This is certainly possible, but I, as a user, do not really care. Of course, this being PowerShell, you could also write (as some already have) a commandlet that uses an external DLL like sharpzip to handle it. That is all well and good, but it would still mean that I have to manually copy PowerShell commandlets and DLLs to customers’ systems–something that is not usually possible.
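For what it is worth, the usual workaround in v2 is to script the Explorer zip handler through COM–hardly a substitute for a real commandlet (paths here are made up, and the destination folder must already exist):

```powershell
# Drive the Windows shell's built-in zip support via COM.
$shell = New-Object -ComObject Shell.Application
$zip   = $shell.NameSpace('C:\temp\foo.zip')
$dest  = $shell.NameSpace('C:\temp\extracted')

# 0x14 = overwrite existing files and suppress the progress dialog.
$dest.CopyHere($zip.Items(), 0x14)
```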

Perhaps the most touted feature of PowerShell is the ability to dynamically load assemblies (DLLs) through reflection and expose their objects to the shell. That makes it especially useful in a standard BL-BO-DAL architecture when you have setup or deployment tasks: load your assembly and run them from the shell. On paper, anyway. Unless–yes!–unless something goes wrong (we all knew it would go wrong; otherwise it would not have found its way into this post).

Like the fact that the latest version of PowerShell, v. 2.0, cannot load .NET 4.0 DLLs.

I guess that isn’t really fair. According to Microsoft, it can’t. If you jimmy a couple of registry settings and pray that nothing bad befalls you, it might work.
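When it does work, the loading itself is pleasant enough. A sketch, with a made-up assembly and type name:

```powershell
# Pull the assembly in through reflection...
[Reflection.Assembly]::LoadFrom('C:\deploy\Acme.BusinessObjects.dll') | Out-Null

# ...and its types become first-class citizens in the shell.
$customer = New-Object Acme.BusinessObjects.Customer
$customer.Name = 'Smoke Test'
$customer.Save()
```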

Ultimately, flaws like this are a natural offshoot of Microsoft’s traditionally anti-CLI culture. Since Microsoft almost single-handedly drives the philosophy of its ecosystem, few true Microsofties (as opposed to people who just happen to use Windows) understand the CLI. So, when customers demand it (for systems administration, a good shell installed by default is simply essential–we’ve been stuck with cmd for too long), there is no one who truly understands what they are supposed to be building.

Hopefully, Microsoft has been made aware of shortcomings like these, and we can expect to see PowerShell refined into a truly pleasant shell. That has, after all, been Microsoft’s forte for years: improving software into what it should have been all along.

Microblogging & Programming

Microblogging, especially through Twitter, but also through its cousin, Facebook statuses, has become the thing of late. I have little doubt that, like most things that are “the thing”, its popularity will fade into the landfill of fads.

In one sense, I have never truly “gotten” microblogging. To be sure, I understand the idea of short messages (140 characters, if you are a Twitterer)–and I have always found them to be a sign of a declining societal intellect. Our forefathers in the 18th century conducted flamewars in large, thick volumes (if anyone doubts me, read up on Alexander Pope and the rivalries that spawned the delightful Dunciad). Now, we discuss grave matters in only 140 characters.

But lately, I have been wondering if a development team might not be the ideal place to put microblogging to good use. Most teams have neither the time nor the inclination to write and maintain copious notes on design and implementation, but they do have a running dialog. Shared knowledge keeps the discussions short, for the most part, and the decisions and information passed along are often so brief that writing them up formally hardly seems worth the effort. Wikis are a step in the right direction, but they are far too much like the longer documents that no one wishes to maintain.

The Twitter model of lots of short little notes might actually be a good fit for the stream of consciousness that pervades every development team. Architecture discussions could be kept on a private microblogging platform of sorts. A private setup also allows all of the notes to be semipublic by default, so we avoid the problem with email, where things can (in larger organizations than mine) get caught up in a he-said she-said that only a sysadmin could clear up. The use of Twitter-style @ and # notation would make it easier to cross-reference development notes.

This is also its advantage over IRC, the traditional hacker standby. Since everything is public or semipublic by default, no one has to remember to log the conversation or post the log–or ruffle feathers because a log was kept at all.

The largest irony of these musings is that I know full well that, in one sense, the only purpose I have found for microblogging flagrantly violates the model put forth by the site that made it popular: Twitter. On Twitter, everything is public. If, hypothetically, my wife and I were to twitter notes back and forth about family matters (e.g. can you pick up some milk on the way home?), they would be public. There is nothing wrong with that, but it is superfluous fluff to just about everyone else on the world wide web.

Incidentally, this is why, if you really want to do something like that, you should set up a private instance of microblogging software. The only value your present whereabouts can have to the general public is as an invitation to be stalked.

But, back on topic, I think that is what is wrong with the microblogging model in the first place. The vast majority of what I have to say in under 140 characters is not really worthwhile for more than a select few to hear.

There are exceptions, of course.

Some musicians I like use it extensively for tour announcements and to push each other’s stuff. It makes perfect sense. There are a lot of tour announcements that I, as a fan, am interested in and that fit comfortably in 140 characters. Where are you appearing? When? (For smaller groups, this information changes a lot, and quickly.) Oh, that new album is out?

Most of us are not those exceptions, but I think that when you put some constraints on topic matter and audience, there is definite potential. What I would be the most curious to see, would be an open source project that relies primarily on Twitter or Identi.ca for dev discussions, instead of IRC or email. That would, I think, be the ultimate test of the merit of the idea.

Finally, a little googling made me painfully aware that I am not the only one to have such thoughts. I even saw a few academic papers on the subject, though I have not yet had time to read through them. I think a little survey of the literature on this blog might very well be forthcoming…

Links & Notes on Using Vim for .NET Development

My new job is writing in C#/.NET. Overall, I like this quite a bit. C# has some nice features over Java (I went to a Java school, so I do speak from experience) and the .NET framework is quite nice. I have, though, been having to optimize Vim for .NET development and wanted to share some handy links.

  1. http://kevin-berridge.blogspot.com/2008/09/visual-studio-development.html — a very nice series on setting up Vim with C#.
  2. http://arun.wordpress.com/2009/04/10/c-and-vim/ — Some excellent suggestions here, particularly in making ctags a little more automatic.
  3. http://stackoverflow.com/questions/1747091/how-do-you-use-vims-quickfix-feature — some questions about vim’s quickfix feature.
  4. http://vimdoc.sourceforge.net/htmldoc/quickfix.html#quickfix-window — documentation on using vim’s quickfix feature.
  5. http://www.vim.org/scripts/script.php?script_id=356 — dbext makes life so much easier.

At least the first link there suggests the use of NERDTree. While I have used NERDTree in the past, it is definitely more useful with Visual Studio projects, whose directory hierarchies run deeper than is, I think, typical elsewhere.

ctags is wonderful for cross-referencing code within the project itself. The suggestions at link #2 were particularly helpful in getting things set up so that updates to the tags file would happen automatically in a timely manner.

Some of you may ask, why not use ViEmu? The answer is: I tried. I tried hard and I wanted to like it. It is less setup time and less hassle to have Vim in Visual Studio than it is to build just the right amount of bridging between Vim and Visual Studio. The problem I hit was that ViEmu crashed Visual Studio 2010. Often. Badly. Irritatingly. The stability hit was just too much.

Team Foundation Server is another big one. I had to tinker with it a little bit, but link #1 provides some good pointers on getting this set up.

Finally, here are three files:

The first file is a literate explanation of the configuration files; the other two are the tangled output. If you want a more thorough explanation of what is going on, consult the first file before moving on. Anyone with comments or questions, feel free to post them here. Some things I am still looking for in my .NET vim config:

  • Better designer integration support. Visual Studio’s ASPX designer generates code (the *.designer.cs files) based on events that happen in the IDE–not as part of the build process. This means that making wide-ranging changes outside the IDE causes compilation errors that can only be fixed by opening the .aspx file and making a change or two (I tend to cut the whole file and paste it back into itself), then rebuilding.
  • A communication bridge between Vim and the debugger would be nice.
  • Similarly, it would be nice to launch the embedded IIS server from within Vim.