A Short Introduction to MVVM

Our team is building an application using WPF with the Model-View-ViewModel design pattern and I wanted to take a few minutes to give an introduction to MVVM. The pattern itself is comparable to the venerable MVC pattern, though by no means identical. Let’s begin by examining each piece and then looking at how they fit together.

  1. Model–the model is very much the same thing as the model in MVC, or the business objects in a three-tier architecture. It is a straight-up model of the data being manipulated, without any display logic of any variety.
  2. View–the view is, again, very much the same as the view in MVC. It is the formatting or display.
  3. ViewModel–if you are familiar with MVC or similar patterns, the ViewModel is the largest departure. There are two ways to look at a ViewModel, which will become clearer after reading through some code.
    1. The ViewModel as an adapter between the model and the view. This is, perhaps, the most familiar and comforting way to view it, though it is also the least accurate, as the logic behind a view is also encapsulated in the ViewModel.
    2. The ViewModel can be seen as an encapsulation of the logic and state of the view, independent of any display logic. In short, a ViewModel Models a View.

Of the two explanations of the ViewModel, the second is the better, though I did find #1 helpful when first examining the pattern.
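To make the roles concrete, here is a minimal sketch in Python (standing in for C#/WPF; all class and member names are hypothetical, and the change-notification callback is a crude stand-in for WPF's INotifyPropertyChanged):

```python
class Person:
    """Model: plain data, no display logic."""
    def __init__(self, name):
        self.name = name


class PersonViewModel:
    """ViewModel: models the view's state and logic. It exposes the data
    the view binds to and notifies listeners when that state changes."""
    def __init__(self, model):
        self._model = model
        self._listeners = []

    def subscribe(self, callback):
        # The view registers here to be told when bound state changes.
        self._listeners.append(callback)

    @property
    def display_name(self):
        # View-facing state derived from the model.
        return self._model.name.title()

    def rename(self, new_name):
        # Logic of the view, independent of how it is rendered.
        self._model.name = new_name
        for cb in self._listeners:
            cb("display_name")


# A "view" can be anything that binds to the ViewModel:
vm = PersonViewModel(Person("ada lovelace"))
vm.subscribe(lambda prop: print(f"{prop} changed to {getattr(vm, prop)}"))
vm.rename("grace hopper")  # prints: display_name changed to Grace Hopper
```

The point of the sketch is that nothing in `PersonViewModel` knows whether the subscriber is a WPF window, a Silverlight page, or a unit test.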

MVVM is a fairly new pattern, seeing most (or all?) of its use in some of the newer Microsoft technologies, WPF and Silverlight. As a result, the fit between framework and pattern is often subpar. The easiest example (which does not seem to arise in Silverlight) is that of popping up a dialog in a WPF application. If the ViewModel knows how to pop up a dialog, then we are clearly violating the pattern, as the ViewModel is supposed to model a view’s operations and state and leave such details to the view.

After all, the whole idea here is that we should be able to bolt multiple views onto a single Model-ViewModel pair, especially (and here is where the aims differ a little from MVC, if not in theory, at least in practice) views that cross paradigms. For example, a WPF view and a Silverlight view, allowing the application to exist as both a desktop application and a web-based application.

Without some workaround, though, you are unable to perform an elementary task: prompting the user (after some fashion or another) for input. In practice, we are using a mediator to allow the ViewModel to send messages, which the View can then receive and act on as its implementation mandates.

On one hand, this works well and I like how it falls out in practice. The View and the ViewModel remain separate and mockups or tests could be written that simply interact with the mediator.
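The mediator approach can be sketched in a few lines. This is Python standing in for C#, and every name in it is hypothetical; the idea is only that the ViewModel announces that it needs input, while whatever view happens to be attached decides how to actually ask:

```python
class Mediator:
    """Routes messages from the ViewModel to whoever registered a handler."""
    def __init__(self):
        self._handlers = {}

    def register(self, message, handler):
        self._handlers[message] = handler

    def send(self, message, payload):
        # Returns the handler's response so the ViewModel can use the input.
        return self._handlers[message](payload)


class SaveViewModel:
    def __init__(self, mediator):
        self._mediator = mediator

    def save(self):
        # The ViewModel knows it needs confirmation, but not how to ask.
        answer = self._mediator.send("confirm", "Save changes?")
        return "saved" if answer else "cancelled"


# A WPF view would register a handler that pops a real dialog;
# a test or mockup can register a stub instead:
mediator = Mediator()
mediator.register("confirm", lambda prompt: True)
vm = SaveViewModel(mediator)
print(vm.save())  # prints: saved
```

Swapping the stub for a real dialog handler changes nothing in the ViewModel, which is exactly the separation the pattern is after.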

From a more theoretical standpoint, it makes me uneasy because it is plastering over a severe weakness in the pattern that, perhaps, ought to be addressed at the pattern level instead of at the implementation level. Moreover, what is a mediator, really? It is very much like an ad-hoc event handling system. Would it not be better to simply use events as they were meant to be used?

Another thing I noted causing some people angst was that the MSDN description of MVVM (see the section entitled “Relaying Command Logic”) says that the codebehind for a XAML file should be empty. While I certainly think the idea of the View itself not doing anything, as it were, is a good one, there is sometimes logic that is View-specific and should, therefore, be kept in the view. A better formulation, in my humble opinion, is that the codebehind should contain only tasks specific to the view itself. For example, if you are writing the basic set of CRUD operations for some object, the act of saving the object will not be view-specific, but taking care of some rendering details might be. The optimum case is, of course, that all logic find its way into the ViewModel. Until WPF and MVVM are a better fit, there will still be oddball cases that mandate violating the principle.

To wrap up, the most important thing about MVVM is that the ViewModel acts as a model for a view rather than a traffic controller (like the Controller in MVC) so that, in theory, one could bolt entirely different UIs on top of one Model-ViewModel set. In practical terms, MVVM is in its infancy and, consequently, there are still some rough edges that developers should be aware of when writing code.

No, I do not want to reboot…

What is it with Windows and this urgent, burning desire to reboot?

Here is how the last couple of weeks on my work PC have gone, as an example.

I boot up my computer (which is hard, because every so often, for no discernible reason, the PC hangs on boot) and login. Windows chipperly informs me that it updated everything. Yay! Butterflies and daisies and happiness. I start up my usual army of suspects. Visual Studio. Firefox. Bug tracker. Et cetera, et cetera.

Then AVG Professional pops up a message. All perky, it tells me that it has finished updating. I need to reboot, how about now?

Grrrrrr. I’m just getting down to work and you want to reboot? Heck no.

But I can’t say no. I can postpone it. For 60 minutes.

Fine. 1 hour. Just get the heck out of my face.

So, every hour or so I tell AVG it better flipping not reboot my computer.

Then the shiny, dolphin blue box pops up. Windows has, like, just finished installing the most totally awesome bunch of updates. How ’bout rebooting now?

Well, all right then. We can postpone for four hours.

Then, as it turns out, Flash and Java want to update too.

Image-Based Approaches for Reading PDFs on the Nook

I’ve been trying to get my nook to provide a more pleasant experience reading PDFs. In all honesty, I saw PDF capabilities as one of the nook’s biggest selling points. I had hoped to take all of the academic papers I was interested in and throw them on the nook, saving on time and paper. I was disappointed to find that the reflowing, which is fine for single column, non-technical material, was a huge pain for a large quantity of the papers I wanted to read.

Recently, I discovered that the nook will not reflow a PDF if the text size is set to small. While this is not at all obvious, it is easy enough once you are aware of it. One problem remains: there is no ability to zoom or pan on the nook. So, for a multicolumn paper with generous margins (fairly typical in academic literature), the text becomes either truly unreadable or a strain on the eyes. As of firmware update 1.5, this has not been addressed.

The ironic thing is that these missing features are simply huge. I am sure from reading the nook forums that there are a great many others who are or were excited about the nook because of its ability to read PDFs (something that the Kindle DX 3 is supposed to have, alongside panning and zooming). Moreover, since they are using Adobe Editions on an Android platform, the feature would not have been hard to add. Finally, from a UI perspective, I think all that we really want is for panning to work on the PDFs the way it works on the web browser and for there to be an extra zooming feature.

Fortunately, there are some tools to get around these shortcomings, at least in the short term. These all revolve around chopping up the PDF a bit, and most also work by rasterizing the PDF so that the nook’s reflowing has no effect.

  1. briss is an application to crop PDFs. While one could also do this with ImageMagick, briss first analyzes the PDF, clustering the pages into a handful of layouts, then allows the user to set the cropping boundaries manually. It is important to note that, of the three listed here, briss is the only one that does not actually rasterize the PDF.
  2. papercrop is an application that analyzes a PDF, dividing each page into one or more “crops”, which it then puts in order and outputs. Because of the analysis it does on the documents, it is particularly well suited to multicolumn PDFs. It was originally built with academic use in mind, so it works especially well for PDFs that fit that mold: computer generated documents of low to medium complexity. Unfortunately, it does not do so well with scanned documents, such as those that come from the Internet Archive, because it treats the speckling from the scans or from dirt on the page as being legitimate parts of the document, reflowing its crops around them.
  3. pdfread was one of the first applications developed for the purpose of making PDFs readable on dedicated ebook readers. It rasterizes the content, then breaks it down into image chunks that fit well onto the ereader’s screen. It does not support the nook as such, but the Sony Reader PRS-500 profile works perfectly on the nook, since the two devices have the same screen resolution.

In my experience, for whatever that is worth, briss is the best option for single-column material with wide margins. Simply cut the whole PDF down and the nook display is just fine. I use papercrop for anything in which layout design is important. I do not use pdfread that often, to be honest, but it is still handy to have around for the oddball document.

In the final analysis, this toolkit has made a large number of documents readable on my nook, including such fine titles as The Unix-Haters Handbook and Paul Graham’s tome, On Lisp, but in a perfect world (one with panning and zooming on the nook) it would seldom, if ever, be necessary.

Symbols vs. Keywords in Common Lisp

I was resuming work on my Sheepshead game today (more will be coming in time on this), and it occurred to me: what is the difference between a symbol and a keyword? If you type

(symbolp 'foo)

and

(symbolp :foo)

both return

T

but, if you type

(eq 'foo 'foo)
(eq :foo :foo)

both return

T

while

(eq :foo 'foo)

returns

NIL

Finally, if you type

(symbol-name 'foo)

and

(symbol-name :foo)

both return

"FOO"
So, what gives? Both are symbols, and symbols with the same print name, at that. The difference is that keywords are all interned in the KEYWORD package (and are self-evaluating, which is why :foo needs no quote), while symbols identified with the QUOTE operator are interned in the current package. So,

* (symbol-package 'foo)
#&lt;PACKAGE "COMMON-LISP-USER"&gt;

* (symbol-package :foo)
#&lt;PACKAGE "KEYWORD"&gt;
Just a quick little tidbit.
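As a footnote, the same “same name, different identity” behavior shows up in other languages too. Here is a rough Python analogy (all names hypothetical; Python has no symbol packages, so two enum classes stand in for the KEYWORD package and the current package):

```python
from enum import Enum


class Keyword(Enum):      # stand-in for the KEYWORD package
    FOO = "FOO"


class UserPackage(Enum):  # stand-in for the current package
    FOO = "FOO"


# Same print name, like (symbol-name :foo) and (symbol-name 'foo)...
print(Keyword.FOO.value == UserPackage.FOO.value)  # prints: True

# ...but distinct objects, like (eq :foo 'foo) returning NIL.
print(Keyword.FOO is UserPackage.FOO)              # prints: False
```

The analogy is loose, but it captures the point: identity comparison cares about where a name was interned, not what the name looks like.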

TFS Frustrations

The other day, I encountered a strange error while trying to unshelve some work in TFS. For those of you who may not be familiar with it, TFS is one of the more server-oriented source control systems around. To keep work that you want, but that is not ready for checkin, you shelve it–which amounts to a semi-private checkin that is not on the main branch. Users can then view and share one another’s shelves.

In this case, I had started some major changes that would not be ready in time for a demo, but some other changes were needed for the demo. The new work included adding a variety of files in addition to many modifications. So, I shelved it.

A few weeks later, I tried to unshelve it. I assumed that I would just have to merge the changes together (not a big deal, in this case). Instead, TFS complained that all of the new files still existed in the folder. The first TFS frustration: by default, when files are deleted from the project and from the source repository, TFS leaves them on disk. This would be sensible enough if TFS at least handled it correctly in situations like this.

So, I decided to move all of the files to another location (instead of deleting them–time has made me paranoid about things like this). I reran the command. TFS still insisted that the files existed. Frustration #2. The files simply did not exist, yet TFS insisted that they did.

The long and short of it is that I had to take the files that TFS left on disk and manually re-add and merge them into the project, as TFS simply would not allow the work to be unshelved.

It turns out that there is a known bug in TFS shelving that occurs when files are added, shelved, and then unshelved again. This bug is so severe that I don’t know how TFS was ever released in this state, especially since using shelving in this manner is precisely the sort of thing that Microsoft recommends.