The conception, birth, and first steps of an application named Charlie

Implementing Raw Templating - Part 2

by Alister Jones (SomeNewKid)

In the previous weblog entry, I introduced the IWebTemplateParser interface, which looks like this:

public interface IWebTemplateParser
{
    String[] ControlDirectives { get; }
    String Parse(String template);
}

The first line allows each plugin to “tell” Charlie about any custom server controls that the plugin uses. But what is the purpose of the second line? Well, here is the next template that I inserted into the database:

<html>
    <head>
    </head>
    <body>
        <h1><insert:Text name="Welcome" /></h1>
    </body>
</html>

This is an example of the declarative templating system that I am now implementing. There are two things to note about the <insert:Text /> tag shown above.

First, using <insert:Text /> and <insert:Snippet /> and <insert:Image /> would require that each of these custom controls reside within the same assembly, since each prefix (being ‘insert’ in this case) can point only to a single assembly. However, while I want the website owner to have only one prefix to learn, I want each control to point to a different plugin and, therefore, a different assembly. So <insert:Text /> should go to the Globalization plugin, since that is where the text resources are handled. And <insert:Snippet /> should go to the HTML plugin, since that is where HTML snippets are handled. So I need some way of allowing the same prefix of ‘insert’ to point to different assemblies.

The second thing to notice about the above tag is that it contains no runat="server" attribute. ASP.NET requires that attribute, but the requirement should not be forced upon the website owner.

So, it is to solve the above two problems that the IWebTemplateParser has the Parse method. Before Charlie uses the template it draws out of the database, it will give the template to each plugin. Each plugin can then “fix” the template before it goes to the Page.ParseControl method. The Globalization plugin, for example, will look for the following tag:

<insert:Text name="Welcome" />

If found, the tag will be changed to this:

<globalization:Text name="Welcome" runat="server" />

With that simple replace operation, we have solved the two problems. The general ‘insert’ prefix has been replaced by a specific ‘globalization’ prefix. (Remember, the first line of the IWebTemplateParser interface is where each plugin can “tell” Charlie about custom prefixes.) We have also added the runat="server" attribute that ASP.NET requires.

Here is the full code from the Globalization plugin:

using System;
using System.Text.RegularExpressions;

namespace Charlie.Globalization.Interface
{
   class TemplateParser : IWebTemplateParser
   {
      public String[] ControlDirectives
      {
         get
         {
            String[] directives = new String[1];
            directives[0] = 
             @"<%@ Register TagPrefix=""globalization""
                   Namespace=""Charlie.Globalization.Interface""
                   Assembly=""Charlie.Globalization"" %>";
            return directives;
         }
      }

      public String Parse(String template)
      {
         // The (\s) capture keeps the whitespace that follows the tag
         // name; the replacement re-inserts it, adds runat="server", and
         // leaves the tag's original attributes to follow untouched.
         String from = @"<insert:Text(\s)";
         String to = @"<globalization:Text$1runat=""server"" ";
         template = Regex.Replace(template, from, to);
         return template;
      }
   }
}

Right now, the regular expression-based find-and-replace operation within the Parse method is overly simplistic. At this stage, however, I am just making sure the functionality works—the find-and-replace operations can be tweaked later. The point is that the above code is very simple. Each plugin can inspect the template for known tags, such as <insert:Text>, and make any changes it requires. It will make changes so that a custom server control will be called into play. In this example, the Globalization plugin will use a custom server control that is an extension of the Literal control. Here is its code:

using System;
using System.Web.UI.WebControls;

namespace Charlie.Globalization.Interface
{
   public class Text : Literal
   {
      public String Name
      {
         get
         {
            return this.name;
         }
         set
         {
            this.name = value;
         }
      }
      private String name;

      protected override void OnPreRender(EventArgs e)
      {
         this.Text = ResourceManager.Current.GetString(this.Name);
         base.OnPreRender(e);
      }
   }
}

This too is about as simple as a custom server control can get. So, what have we achieved here? Well, previously a template using localized text needed to be a User Control that included both a declarative server control and procedural code:

<%@ Control Language="C#" 
    Inherits="Charlie.Framework.Interface.TemplateControl" %>
<%@ Register 
    TagPrefix="aspx"
    Namespace="Charlie.Framework.Interface.Controls"
    Assembly="Charlie.Framework" %>
    
<h1><aspx:Label ID="Welcome" runat="server" /></h1>

<script runat="server">
    private void Page_Load(Object sender, EventArgs e)
    {
        Welcome.Text = ResourceManager.Current.GetString("Welcome");
    }
</script>

Now, the relative complexity above has been replaced with a very simple declarative tag:

<h1><insert:Text name="Welcome" /></h1>

That is all the code that is needed to insert localized text into the template. If the English version of the page is requested, the heading will be “Welcome”. If the French version of the page is requested, the heading will be “Bienvenue”. This is the sort of simplicity that I want from the “raw” templating system that I have devised. And so far, so good.
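
As an aside, I have not shown the ResourceManager in this entry. Purely as an illustration—the dictionary-backed storage below is my own assumption, and the real class would pull its strings from wherever the Text Resources live—a culture-aware GetString might be sketched like this:

using System;
using System.Collections.Generic;
using System.Threading;

public class ResourceManager
{
    private static readonly ResourceManager current = new ResourceManager();
    public static ResourceManager Current
    {
        get { return current; }
    }

    // Assumed storage: culture name -> (resource name -> localized text).
    private readonly Dictionary<String, Dictionary<String, String>> resources =
        new Dictionary<String, Dictionary<String, String>>();

    public String GetString(String name)
    {
        // The current culture (e.g. "en-US" or "fr-FR") is presumed to
        // have been set earlier in the request pipeline.
        String culture = Thread.CurrentThread.CurrentUICulture.Name;
        Dictionary<String, String> strings;
        if (resources.TryGetValue(culture, out strings) &&
            strings.ContainsKey(name))
        {
            return strings[name];
        }
        return name; // fall back to the resource name itself
    }
}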

by Alister Jones | Next up: Charlie’s Own Two Feet

2 comments

----

Implementing Raw Templating - Part 1

by Alister Jones (SomeNewKid)

Over the last few weblog entries, I have described the “raw” templating system that I will add to Charlie.

In describing it as a flexible templating system, I noted that I would use a default-and-specific approach to templating. A given type of webpage (such as an Article or a Photo Gallery) would have a default template, but a specific page can override that default template and effectively “tweak” it for just that specific page. This is the same cascading logic that I used for the authorization roles in Charlie’s security system. So the default-and-specific approach to templating was implemented easily, using the same database schema and the same sort of code.

Into the database I inserted the following test template:

<html>
    <head>
    </head>
    <body>
        <h1><asp:Label text="Welcome" runat="server" /></h1>
    </body>
</html>

The code to get this template onto the Page surface is incredibly simple. The code here executes within a code-behind or code-beside file (that is, within a class that inherits from System.Web.UI.Page).

String template = this.Template.Text;
Control parsed = this.ParseControl(template);
this.Controls.Add(parsed);

The above code will take the raw template string, parse it to end up with a handful of web server controls, and place those controls onto the Page surface. That part was easy, since the template above used built-in web server controls. The next part was to introduce custom server controls. Here is the next template I attempted to use:

<html>
    <head>
    </head>
    <body>
        <h1><custom:Label text="Welcome" runat="server" /></h1>
    </body>
</html>

The custom server control will give rise to the following exception:

Unknown server tag 'custom:Label'.

Normally, the .aspx page will have the following directive to “tell” it about custom controls:

<%@ Register 
    TagPrefix="custom" 
    Namespace="Charlie.Framework.Interface.Controls" 
    Assembly="Charlie.Framework" %>

However, even if that directive were to exist on the Page, such directives will not apply to any string of controls given to its ParseControl method. I had actually faced this same problem long ago when I was new to ASP.NET, and I posed this problem on the ASP.NET Forums. Fortunately, Teemu Keiski gave me the answer. If the string that is passed into ParseControl contains custom tags, you must also pass in the directives. So, the string that is given to the ParseControl method must look like this:

<%@ Register 
    TagPrefix="custom" 
    Namespace="Charlie.Framework.Interface.Controls" 
    Assembly="Charlie.Framework" %>
<html>
    <head>
    </head>
    <body>
        <h1><custom:Label text="Welcome" runat="server" /></h1>
    </body>
</html>

The above will work, and the custom server control will be added to the Page. The remaining problem, however, is that Charlie cannot possibly know all of the directives that may be needed, since only the plugins will know. This is an easy problem to solve, since we have an existing IPlugin interface that defines those methods and properties that a plugin must implement in order for Charlie to “query” the plugin. The updated IPlugin interface looks like this:

public interface IPlugin
{
    Int32 ID { get; }
    String Name { get; }
    IWebModule[] GetModules();
    IWebResponseFilter[] GetResponseFilters();
    IWebContextFilter GetWebContextFilter();
    Presenter[] GetPresenters();
    IWebTemplateParser GetTemplateParser();
}

The last line is the new line. This means that Charlie can get the information it needs from the plugin in order to use custom server controls in the template strings that get passed to ParseControl. Here is what the IWebTemplateParser interface looks like:

public interface IWebTemplateParser
{
    String[] ControlDirectives { get; }
    String Parse(String template);
}

The first line of the interface allows the plugin to “give” to Charlie an array of strings, each of which will be a directive that needs to be added to a template before it is given to the ParseControl method. Here is a concrete example:

public String[] ControlDirectives
{
    get
    {
        String[] directives = new String[1];
        directives[0] = 
          @"<%@ Register 
                TagPrefix=""custom"" 
                Namespace=""Charlie.Framework.Interface.Controls""
                Assembly=""Charlie.Framework"" %>";
        return directives;
    }
}

And with this interface, the plugins can use any custom server controls that they want, and Charlie never needs to know about them. All that happens is that Charlie grabs the text of the template, and then says to each plugin, “If you use custom server controls, give me their directives.” Charlie then adds the directives to the front of the template, before passing it to the ParseControl method.

String template = this.Template.Text;
StringBuilder builder = new StringBuilder();
foreach (IWebTemplateParser parser in 
         PluginManager.Current.GetTemplateParsers())
{
    if (parser.ControlDirectives != null)
    {
        foreach (String directive in parser.ControlDirectives)
        {
            builder.Append(directive);
        }
    }
}
String directives = builder.ToString();
Control parsed = this.ParseControl(directives + template);
this.Controls.Add(parsed);

So that was the first problem to be solved in implementing the raw templating system: allowing for custom server controls. In the next weblog entry, I will explain the second part of the IWebTemplateParser interface.

by Alister Jones | Next up: Implementing Raw Templating - Part 2

0 comments

----

Recursion is not a Dirty Word

by Alister Jones (SomeNewKid)

Here is an example of the “raw” templating system to be implemented by Charlie.

<html>
    <head>
    </head>
    <body>
        <insert:Snippet name="Header" />
        <h1><this:Title /></h1>
        <this:Content />
    </body>
</html>

If the Snippet to be inserted were simple text, then this templating system could be implemented with a series of find-and-replace operations. However, the inserted Snippet itself may include another <insert> element:

<div id="header">
    <h1>Company Name</h1>
    <insert:Snippet name="SearchBar" />
</div>

More troubling, that child Snippet may include an <insert> element that grabs not another Snippet, but a Text Resource:

<div id="searchbar">
    <insert:Text name="Search" />
    <input name="SearchTerm" />
</div>

This series of inserts within inserts within inserts is the age-old programming challenge of recursion. Fortunately, ASP.NET’s existing web server control system makes short work of recursion. ASP.NET allows us to place a single web server control onto a page, and that control may contain child controls. And those child controls may contain more controls. And so on.

Even better, ASP.NET provides a ParseControl method that allows us to pass in a string (exactly like the snippets shown above) and receive a Control in return. We can then plonk that control onto our web page, at which point ASP.NET’s page lifecycle will take over. In other words, this single method allows us to go from a simple string to a fully-functional, recursion-enabled web server control that can participate in all of ASP.NET’s goodies such as state management, event handling, and output rendering.
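
To make the recursion concrete, here is a sketch of how a Snippet control might expand itself. This is my own illustration rather than Charlie’s actual code—the SnippetStore class in particular is assumed:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class Snippet : PlaceHolder
{
    private String name;

    public String Name
    {
        get { return this.name; }
        set { this.name = value; }
    }

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        String text = SnippetStore.GetSnippetText(this.Name);
        // If the snippet text contains further server controls (more
        // snippets, text resources, and so on), ParseControl builds them
        // as children, and the recursion takes care of itself.
        this.Controls.Add(Page.ParseControl(text));
    }
}

// Assumed stand-in for wherever the snippets are stored.
internal static class SnippetStore
{
    internal static String GetSnippetText(String name)
    {
        return "<div>" + name + "</div>"; // placeholder text
    }
}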

So while I have chosen to ignore ASP.NET’s built-in master pages, themes, skins, and so on, I will definitely make use of its web server control system. Anyone who has ever authored a custom web server control knows that this system is relatively simple, yet infinitely flexible. And you all know my goals for Charlie: keep it simple and keep it flexible.

by Alister Jones | Next up: Implementing Raw Templating - Part 1

0 comments

----

The Raw Approach is Extensible

by Alister Jones (SomeNewKid)

A short while ago I introduced the story of three website owners: Tom, Michelle, and Joshua. Each owner wanted a different level of control over the look and feel of his or her website. Tom accepted a starting template provided by the Wizard. Michelle also accepted a starting template, but then used the Control Panel to customise the colours of the template. Joshua too accepted a starting template, but then updated the declarative templating system to customise all aspects of his website’s design.

It should be obvious that it is the Wizard that provides the default declarative templates. After the Wizard has been used to select the starting template, that starting template can be customised declaratively or through a graphical user interface. This was described in the previous weblog entry.

But how does the Control Panel fit into this templating system? Here is how the Colours may be defined through the Control Panel.

How then do we get these colour definitions into the declarative templating, which looks as follows?

<html>
   <head>
       <insert:Style name="Daylight" />
   </head>
   <body>
       <insert:Snippet name="Header" />
       <h1><this:Title /></h1>
       <this:Content />
       <insert:Text name="Disclaimer" />
   </body>
</html>

The first <insert> element is for a Style. The style will reside in a Styles folder, and its editor will look something like this:

What should be obvious is that the Style editor uses the same declarative approach to “inserting” external elements. To drive the point home, here is how our Snippet editor might look:

And where do these inserted Text elements come from? Well, just as the Control Panel provides the Colors icon, it would also provide a Text Resources icon. Here is how the Text Resources screen might look:

To summarize then, each <insert> element will “pull” information either from the Control Panel or from a special folder (such as Styles). Moreover, the inserted Snippet of HTML can do its own “pull” of information. So one Snippet can pull in another Snippet, which in turn pulls in a Text Resource, which in turn pulls in a Money Resource, and so on.

What this means is that for a website owner like Joshua who wants extensive control over how the website is designed, only a single declarative trick needs to be learned. That trick can then be applied over and over again. For a website owner like Michelle who wants limited control over the look of the website, only the Control Panel needs to be used—no need to learn the declarative trick. For a website owner like Tom who couldn’t care less about the look of the website, only the Wizard ever needs to be used to select a starting template.

At the end of the weblog entry titled A Dirty Sheet of Paper, I said that I took a pen and paper and said to myself, “Forget that you’re using ASP.NET. If you could create a user interface from scratch that allows a website owner to update both content and design, how would you do it?” The preceding weblog entries describe the templating system that I consider to be the most simple and the most flexible, with no consideration at all given to ASP.NET concepts such as master pages, themes, or skins.

After putting ASP.NET to one side and coming up with a theoretical templating system, it is time to bring ASP.NET back into play and come up with a working implementation of this templating system.

by Alister Jones | Next up: Recursion is not a Dirty Word

0 comments

----

The Raw Approach is Simple and Flexible

by Alister Jones (SomeNewKid)

I finished my previous weblog entry by saying that I believed the simplest and most flexible way to style an article webpage would be to use a “raw” template, similar to the following:

<html>
    <head>
    </head>
    <body>
        <h1><this:Title /></h1>
        <this:Content />
    </body>
</html>

If you looked carefully at the last screenshot in that previous weblog entry, you would have noticed the following wording: “Note: Changes here will apply only to this single article. To update the template for all articles, edit the ‘ArticlePage’ template within the Templates folder.” If it is not already obvious, the idea is that a particular type of webpage (such as an Article, Weblog Entry, or Photo Gallery) would have a default template that would apply to all webpages of that type. The website owner is able to update these default templates.

In addition to being able to update the default templates, the website owner may override the template that will be applied to a specific webpage. If one article is a review of the latest album by Bruce Springsteen (I am not hip to any music released this century), then the website owner can update the template so that it includes a large image of Bruce before the article, and a graphical affiliate link following the article.

<html>
    <head>
    </head>
    <body>
        <p>
            <img src="/images/BruceSpringsteen.jpg" />
        </p>
        <h1><this:Title /></h1>
        <this:Content />
        <p>
            <a href="http://www.amazon.com?123456">
                <img src="/ads/Nebraska.gif" />
            </a>
        </p>
    </body>
</html>

The first benefit here is that a system of default-and-specific templates is very flexible. If you have ever tried to “tweak” a site-wide master page—so that a specific page is just a little bit different—you will know how frustratingly inflexible a master page system can be.

A second benefit here is that those website owners who want to dabble in HTML can do so. HTML is a wonderfully simple technology that is easy to learn, and it would be disrespectful to hide the HTML away as if to say to the website owner, “You’d just screw it up, so we’ve put it out of your reach.”

But what about website owners who do not care to dabble in HTML? There are two approaches that we can take to support website owners who want to control the look and feel of their websites, but who do not want to learn HTML.

The first approach is to stay at the same “raw” level, but to abstract some of the HTML. Here is an example:

<html>
    <head>
        <insert:Style name="Daylight" />
    </head>
    <body>
        <insert:Image name="Bruce" />
        <h1><this:Title /></h1>
        <this:Content />
        <insert:Link to="www.amazon.com?12345"
                     image="/ads/Nebraska.gif" />
    </body>
</html>

What I like about this declarative approach is that we can extend it to support localized text.

<html>
    <head>
    </head>
    <body>
        <h1><this:Title /></h1>
        <this:Content />
        <insert:Text name="Disclaimer" />
    </body>
</html>

We can also extend this approach to insert common elements such as the headers and footers.

<html>
    <head>
    </head>
    <body>
        <insert:Snippet name="Header" />
        <h1><this:Title /></h1>
        <this:Content />
        <insert:Snippet name="Footer" />
    </body>
</html>

We can also extend this approach to insert elements from other plugins. Here is how a homepage might look.

<html>
    <head>
    </head>
    <body>
        <insert:Snippet name="Header" />

        <h1><insert:Text name="WebsiteTitle" /></h1>
        <h2><insert:Text name="WebsiteSummary" /></h2>

        <insert:Text name="RecentWeblogEntries" />
        <weblog:Listing recent="5" />

        <insert:Text name="RecentPhotos" />
        <photos:Gallery recent="5" />

        <insert:Snippet name="Footer" />
    </body>
</html>

Remember now, most of this template stuff was automatically created by the Wizard of a turnkey website, or by the developer of a custom website. The website owner does not need to touch this simple declarative templating unless he or she wants to make changes.

But what about the website owner who wants to control the look and feel of the website, but does not want to muck about in this declarative templating? The alternative approach is to introduce a pretty graphical user interface.

What is notable here is that this graphical user interface builds upon the declarative templating. Why is this notable? Well, if we had started with a GUI, we would have used logic to generate the resulting page:

Homepage homepage = GetHomepage();

TitleLabel.Text = homepage.Title;
SummaryLabel.Text = homepage.Summary;

if (homepage.ShowWeblog)
{
    WeblogEntryCollection entries =
        Weblog.GetRecentEntries(homepage.WeblogNumber);
    WeblogGridView.DataSource = entries;
    WeblogGridView.DataBind();
}

if (homepage.ShowPhotos)
{
    PhotoCollection photos =
        Photo.GetRecentPhotos(homepage.PhotoNumber);
    PhotoGridView.DataSource = photos;
    PhotoGridView.DataBind();
}

If we had started with the GUI and ended up with code like that above, we’d have one hell of a hard time retro-fitting a templating system that allows a website owner to make little changes here and there.

However, we did not start with the GUI. Rather, we finished with the GUI, where it simply provides a way of cloaking the declarative template. The template is still there, but it is hidden behind the GUI. But because the template is still there, and because the default-and-specific templating system is still there, we have not sacrificed the flexibility of the original “raw” approach to styling a webpage. If the website owner uses the GUI to define the default template for articles, he or she can still override the template that will be applied to a specific article. Now we have the friendliness of a GUI for the default template, and the flexibility to hand-craft the template for specific pages.

With this approach, we have now accounted for each of the three website owners in our little story. Tom was satisfied with the Wizard. Michelle was satisfied with the Control Panel. Joshua is satisfied with this flexible templating system.

What I have not yet discussed is how the settings in the Control Panel will apply to the declarative templating system. I’ll discuss that in the next weblog entry, as it highlights the flexibility provided by this declarative templating system.

by Alister Jones | Next up: The Raw Approach is Extensible

0 comments

----

A Raw Approach to Styling Webpages

by Alister Jones (SomeNewKid)

At the end of a recent weblog entry, I said that I had given myself the challenge, “Forget that you’re using ASP.NET. If you could create a user interface from scratch that allows a website owner to update both content and design, how would you do it?”

I started by drawing the user interface by which the website owner can compose an article. Mocked up in Photoshop, the user interface looks like this.

As an aside, the reason I have used Apple-like bubbles is that I am thinking ahead to the final user experience. The idea I am toying with is that when such a button is clicked, the icon animates to show that something is happening. This will be doubly-important if I introduce AJAX elements. So I’m using bubbles simply because that is where my thoughts are presently. This style may or may not find its way into Charlie’s final user interface.

Coming back to the topic of this weblog entry, you will see the Show Template button at the bottom of the article editor. Clicking on it will reveal the following additional interface elements.

This was the idea I had when I put aside my preconceptions of master pages, templates, skins, themes, and everything else. As you will see, it is a very “raw” approach to styling a website. My contention is that anything less raw will be less flexible. It is also my contention that anything less raw will be less simple. In other words, I believe this raw styling mechanism represents the most simple and the most flexible option available. I’ve given this idea a lot of thought, so I’ll dedicate a few weblog entries to this “raw” approach to styling.

by Alister Jones | Next up: The Raw Approach is Simple and Flexible

0 comments

----

Olive, Teal, Peach, and Mauve

by Alister Jones (SomeNewKid)

In our story of Tom, Michelle, and Joshua, we have seen how each has used a Wizard to choose the starting template for his or her website.

Tom has no interest in customising the appearance of his website, so he’s happy to run with one of the templates. That is, the template selection presented by the Wizard is all the customization that Tom wants.

Michelle wants to present her photography against a black background, with all text in mauve. We can presume that she will use the Wizard to choose a template that is close to the look she wants. Now, how can we allow Michelle to “tweak” the template to her liking? She doesn’t want to change the layout of the template—just the colours used.

Let’s consider how we might allow Michelle to change the colours of her website. In order to provide Michelle with a happy user experience, we should let her change the colours in the way that she expects to be able to change them.

If I had elected to use the PowerPoint interface convention, then the most likely way to change the colours of the website would be through the Format menu. However, I earlier decided against using the PowerPoint interface convention.

My long-ago decision was to use an Explorer interface convention. I am still very comfortable with this decision, since an Explorer interface convention is familiar to users and flexible for Charlie. So, within this convention, where would a user look to change the colours of the website? Fortunately, both the Windows and the Macintosh operating systems place colour options in the same location: within the Control Panel.

At this point, I am relatively unconcerned with the design of the Control Panel. Still, the above looks fairly neat and tidy, so Charlie may ultimately use a similar design. (The icons are based on those provided by Amit.)

Once Michelle has selected the Colours option, what should she see?

My first idea was that it would be very user friendly to have a dirt-simple preview pane that allows the website owner to see which colours he or she is changing. I started by mocking up the following illustration.

As I started working with this illustration, I thought to myself, “You’re creating a headache for yourself. How can you have a workable preview when templates can change, fonts can change, and anything and everything can change?” I also worried about how the colour palette (not shown, but would be there on the right) could “point” to the appropriate spot in the preview image. After thinking it through, I have decided that this is simply not a feasible approach.

I then performed a Google Image search on user interfaces that allow a user to select a colour palette. I could not find an interface that would support the flexibility I desired. I then fired up CityDesk. Here is its colour palette.

This is an extremely simple way of allowing users to select colours. Is it too simple? I don’t think so, and in fact I think its simplicity ties in neatly to the “styling” idea that I have. I will come back to this idea in the next few weblog entries. For now, I’d like to make just one comment on the above dialog.

You will notice that the dialog allows a palette of eight colours. CityDesk does not allow you to change this number. You cannot even change the names of the colours. This sort of rigidity would cause many developers to wear Doc Martens so that they can stomp around in protest. While I cannot speak for Joel Spolsky, I think he might point us to the common programmer thought pattern: there are only three numbers: 0, 1, and n. If n is allowed, all n’s are equally likely. In this case, if CityDesk or Charlie allows its users to specify two colours, the software should allow the users to specify 73 colours. I’ll let you read Joel’s argument against that idea, since he can express himself much better than I can. I’ll just take the easy tack of saying, “I agree with Joel.”

What is interesting—if I may play fast and loose with the meaning of the word—is that this idea circles back to the concept of making opinionated software. Give the user a palette of eight colours, and be done with it. If the user wants to use more colours than that, we should send him or her a book about taste and style.

So Charlie is going to take its cue from CityDesk with respect to allowing the user to change colours. I will come back to CityDesk in the next few weblog entries, and you’ll see why I am not “thinking this one through” too much.

by Alister Jones | Next up: A Raw Approach to Styling Webpages

0 comments

----

Ready, Set, Go!

by Alister Jones (SomeNewKid)

In my previous weblog entry I introduced three characters to the story of Charlie. Tom, Michelle, and Joshua have each decided that they need a website, and have visited a Charlie-based website that will allow them to create a turnkey website. They have each clicked on the “create a new website” button. What should happen next?

What should happen next is whatever the visitor expects to happen next. And what will the visitor expect? It is my firm belief that the visitor will expect some sort of Wizard process to start. Every computer user has installed either an operating system, an application, a game, a utility, a driver, an internet connection, or something else. And nearly every such installation involves a Wizard process. In fact, anything except a Wizard process will be so unexpected as to feel somehow “wrong”. So I am not even going to think about alternative approaches. Moreover, I think that the Wizard approach is one of the better software interface innovations. A Wizard it will be.

The first thing that the Wizard should do is have Charlie say hello. Long ago I made the argument that Charlie should be visible to the user. It seems to me that the first step of the Wizard process should be for Charlie to introduce itself.

I did consider making the first Wizard step a pure introduction, with nothing for the user to enter. But that would just drag out the sign-up process with a useless, if polite, first step. Worse, a long introduction might smack of salesmanship, and nobody likes a salesman.

Right now, I am not too concerned with how the Wizard works. The point of this exercise is to find the simplest way for Tom, Michelle, and Joshua to get going with the look and feel of their turnkey websites. So let’s just presume that the next screen asks the user to provide a password.

Let us also presume that the following step is for the user to choose his or her username.

Now we come to the heart of the matter. Tom just wants to get going with his turnkey website. Michelle wants some control over how her website looks. Joshua wants great control over how his website appears. Should we ask the user how much control he or she wants? Not only would that be a silly question, it would suggest to the user that once he or she has made a choice, that choice cannot be undone. A much better approach is to give the user some starting options, and let the user know that the choice can be changed.

Providing starting templates is a common approach in many software applications. Typically, those starting templates provide a base from which the user can either change the template or use it as a starting point for extensive customisation. In other words, this is a common approach that precisely reflects the customisation available to the Charlie user.

A starting template is helpful to Tom, Michelle, and Joshua, even though they each have very different customisation needs. So this will be the approach taken with Charlie.

In retrospect, this idea of a starting template is so common that it seems comical that I have given it any consideration. But stay with me, because we still need to determine how this templating system will be implemented in Charlie. Will it use the MasterPage system of ASP.NET version 2.0, use a custom system based on User Controls, or use some other system altogether?

by Alister Jones | Next up: Olive, Teal, Peach, and Mauve

0 comments

----

A Clean Sheet of Paper

by Alister Jones (SomeNewKid)

In part two of his four-part article series on Painless Functional Specifications, Joel Spolsky recommends that we engage in a little bit of story telling. We should come up with a few fictional, but realistic, users of our software product, and tell the story of how the users work with our product. To put a slight spin on this suggestion, we might say that the story should describe how the users want to work with our product. Then, we should design the product to get as close as possible to having it work the way our users want it to work.

I am going to tell the story of how three website owners will want to work with Charlie.

The first website owner is named Tom—a no-nonsense name for a no-nonsense guy. All Tom wants is to click a button or two and have a website that is ready to go. He doesn’t want to be worried about templates and colours and other shit. He just wants a website where he can start writing articles about his favourite subject, Carroll Shelby.

The second website owner is named Michelle—an amateur photographer who wants to show her photos to the world. She just wants a website that presents a few albums where each photo can be clicked in order to see the full-sized photo. Standard stuff. But whereas Tom doesn’t care about templates and colours, Michelle does. She wants the background to be black, since most of her photography is black and white. She’d like the title and the photo borders to be mauve, which is her favourite colour.

The third website owner is named Joshua—the owner of a boutique winery. He has paid a lot of money to a graphic design firm to come up with a design for his wine labels, so he wants his website to reflect the same style. His labels display titles in a Garamond typeface, flush right, with the first letter in red and the remaining letters in black. The website should use the same style.

One of the originally-specified features for Charlie was that it would support both custom websites and turnkey websites.

A custom website would come about by Tom, Michelle, or Joshua sending me an email or calling me on the phone and saying, “I hear you’re the second-greatest website designer in the world. Jason Santa Maria is unavailable, so I’d like for you to design my website.” Creating a custom website is relatively easy—it just takes time. It is the turnkey websites that require special attention.

A turnkey website is one where the website-creation process is automated. This would come about by Tom, Michelle, or Joshua visiting a Charlie-based website and clicking on the “create a new website” link. Creating a turnkey website is relatively hard—it requires designing a system with the right tradeoffs between automation and customization. Tom wants it all automated, Michelle wants a little customization, and Joshua needs extensive customization.

In the following weblog entries on creating the user experience, I’ll be looking only at the turnkey experience, since that is the one that requires planning and coding. Any custom website will really just be a single-instance turnkey site with lots of customization. In other words, planning and coding for turnkey websites will lay the foundation for custom websites too.

So, having forgotten that Charlie just happens to use ASP.NET—and thereby starting with a clean sheet of paper—how can the product be designed to provide Tom, Michelle, and Joshua with a pleasant and effective experience in creating their turnkey websites?

by Alister Jones | Next up: Ready, Set, Go!

0 comments

----

A Dirty Sheet of Paper

by Alister Jones (SomeNewKid)

Over the last couple of days, I have introduced to Charlie a few of the goodies from ASP.NET version 2.0. Namely, the application can now use MasterPages and the Login server control. I already had a simple templating system and a simple login control, but I wanted to keep Charlie consistent with the direction that ASP.NET is taking. So I replaced my simple versions with the built-in versions. Then, after finding the built-in versions of templating and the login control harder to work with, I reverted back to my simple versions.

A few days earlier I had needed to update the HTML of one small part of just one page. The existing, simple templating system made this much harder than it should have been. And the new ASP.NET MasterPage system would have made it even harder again.

A few weeks ago I attempted to skin Community Server. My recent motorbike accident was less painful. Many months ago I had abandoned my weblog at AspAdvice.com because I could not take control of how Community Server styled my weblog. By contrast, the templating system used here at Blogger is much better.

These experiences made me realise that I am approaching Charlie’s interface layer with a number of preconceptions—the sheet of paper is not clean, but dirty.

In addition to my preconceptions about “how skinning is done in ASP.NET,” I am also bringing forward preconceptions about how data should be entered on a webpage.

One year ago I worked as an editor of a website, and the website used a clunky old rich text box. I spent many, many hours undoing all of the HTML junk the rich text box introduced. It was a terrible waste of time.

A little before that experience, I worked on the design of a website, which was then handed over to a web development company to implement in their content management system. It turned out that the CMS, which they trumpeted as being state-of-the-art, was nothing but a free and simple rich text box. They just slapped the HTML into a database, and gave the client a rich text box with which to edit the content. The problem was, by not separating the content from the design, the client kept inadvertently destroying the design of the site.

As a result of this experience, I have an additional preconception that “rich text boxes are bad.”

Yesterday I decided that I needed to start with a clean sheet of paper. I sat on a couch with a pen and paper and said to myself, “Forget that you’re using ASP.NET. If you could create a user interface from scratch that allows a website owner to update both content and design, how would you do it?”

by Alister Jones | Next up: A Clean Sheet of Paper

0 comments

----

“You Are Here”

by Alister Jones (SomeNewKid)

Most ASP.NET applications have many files with the extension, .aspx. If the visitor requests the page about.aspx, then ASP.NET finds the file with that name, loads it up, executes it, and returns the HTML to the visitor. If the user requests the page contact.aspx, ASP.NET will find the file with that name, load it up, execute it, and return the HTML.

Some ASP.NET applications take a different approach. Rather than having a whole bunch of .aspx files, they have just one; typically it will be named default.aspx. What these applications do is use this single default.aspx file to “build” a dynamic page. So even if the visitor requests about.aspx, the application secretly redirects the request to default.aspx, and then builds the About page. If the visitor requests contact.aspx, the application again secretly redirects the request to default.aspx, and then builds the Contact page.
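
As a rough sketch of that mechanism (this is not Charlie’s code, and the module and query-string names here are my own inventions), the secret redirect is typically a path rewrite early in the request:

using System;
using System.Web;

public class RewriteModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += new EventHandler(OnBeginRequest);
    }

    private void OnBeginRequest(Object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        String path = application.Request.Path; // e.g. "/about.aspx"
        if (!path.EndsWith("default.aspx"))
        {
            // Keep the requested name so default.aspx can "build" that page.
            application.Context.RewritePath(
                "~/default.aspx?page=" + application.Server.UrlEncode(path));
        }
    }

    public void Dispose()
    {
    }
}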

Charlie takes an altogether different approach. There is no .aspx file anywhere in the Charlie application. Instead, a page request will be handled by the WebHandler object, which exists only in memory. There is no corresponding .aspx file.

While I was in the early stages of developing Charlie, this approach worked fine. But as development progressed, this approach introduced two problems.

The first problem is that I could not get the Page.ParseControl method to work. Charlie kept complaining about its VirtualPathProvider. I told myself not to worry about this problem, because I was not yet at the stage of working with Charlie’s Interface layer. So I fudged my way around it.

Today I tried to introduce MasterPages to Charlie. The code is simple:

Page.MasterPageFile = "~/Templates/Default.master";

But Charlie was having none of it. Once again it started complaining about its VirtualPathProvider. This time, I could not fudge my way around the problem—I had to work it out.

If I created a new web application and put the above code into the code-file of an .aspx page, it worked fine. But if the “page” exists only in memory, as with Charlie, it does not work. I could not figure out why. The above code clearly states where the master page file is located, yet Charlie would not load it up. Why?

It turns out that a new feature of ASP.NET version 2.0 was throwing a spanner in the works. With ASP.NET version 1.1, a tilde-based path (such as “~/Templates”) would always resolve from the application’s root folder. This is precisely what the tilde means, so this was precisely the expected behaviour. However, with ASP.NET version 2.0, a tilde-based path may or may not resolve from the application’s root folder. ASP.NET version 2.0 allows an application to change how a tilde-based path is resolved. David Ebbo provides a weblog entry on how to do this. Ironically, I am the customer to whom David refers.

The problem for Charlie was that because its WebHandler object has no corresponding .aspx file, it did not know how to resolve a tilde-based path. In other words, Charlie did not know where this “virtual page” was located within the file system. The solution to the problem then was to tell the Charlie page, “You are here.”

Page.AppRelativeVirtualPath = @"~/";

This tells Charlie to consider that its virtual page is located within the application’s root folder. With that single line of code, Charlie has stopped complaining about its VirtualPathProvider. And I have stopped swearing at Charlie—at least for the time being.
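
In case it helps to see where that line might live, here is a minimal sketch. Charlie’s real WebHandler is more involved, so treat the shape below as an assumption; the point is simply that a Page-derived handler with no .aspx file must be told where it “lives”:

using System.Web.UI;

public class WebHandler : Page
{
    public WebHandler()
    {
        // With no corresponding .aspx file, the page must be told its own
        // location so that tilde-based paths resolve from the root folder.
        this.AppRelativeVirtualPath = @"~/";
    }
}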

by Alister Jones | Next up: A Dirty Sheet of Paper

0 comments

----

The Valley of Data Access - Part 6

by Alister Jones (SomeNewKid)

Done. The database access code for Charlie has now been refactored.

All duplicate code has been pulled out of the many Mapper classes and moved into a single Helper class. The Helper class has also taken on the responsibility of performing the cascading security and localization work. Even better, the cascading logic no longer requires repeated trips to the database. Where previously Charlie required an embarrassing 49 hits to the database in order to serve the first webpage, it now requires twelve. Subsequent page requests require about eight hits.

Charlie currently hits the database once to retrieve a business entity or an entity collection, and then hits the database again to retrieve the security roles for the entity or collection. As I mentioned in my last weblog entry, I had a go at combining these two queries into one. I believe it can be done, but the approach involves a few penalties. The first penalty is that I’d have to swap from using fast DataReaders to relatively slow DataSets. The second, and greater, penalty is that I’d have to introduce a tight coupling between Charlie and its Security plugin, so that they can “gang up” their database queries. The third penalty is that ganging up the queries would make it awkward for a plugin to use a different data store. Maybe the Weblog plugin could make use of the free MySQL database available on my WebHost4Life account, while the Security plugin uses the Microsoft SQL Server database. That flexibility appeals to me.

So I have put aside the idea of trying to combine the queries. If the extra database queries ever become a problem, that is the time I will look again at the issue. And even if the issue returns, a hardware solution might prove to be better than a software solution. If a website ever becomes so highly trafficked that the extra hits incur a major performance penalty, then that website might warrant a dedicated server. But again, it is a problem only when it becomes a problem.

One slight improvement to Charlie’s data access code is the introduction of a SqlConnectionManager. Previously, each Mapper created, opened, used, and then closed a new connection. Now, each Mapper pulls an open database connection from the SqlConnectionManager and then, when it’s finished with the connection, returns it to the Manager. So that Charlie doesn’t end up hanging on to open connections for too long, the SqlConnectionManager receives a call to its CloseConnections method at two points in the ASP.NET page lifecycle. The first point is before the HttpHandler starts to execute, and the second point is when the page request has finished processing. What this means is that the absolute longest that Charlie holds an open database connection is 0.4 seconds, and it is usually much more brief. Here is the code for the simple SqlConnectionManager:

using System;
using System.Collections;
using System.Data;
using System.Data.SqlClient;
using System.Web;

internal static class SqlConnectionManager
{

   private static String contextKey = 
      "Charlie.Framework.Services.DataAccess.SqlConnectionManager";

   internal static SqlConnection GetConnection(String connectionString)
   {
      HttpContext context = HttpContext.Current;
      Hashtable connections = context.Items[contextKey] as Hashtable;
      if (connections == null)
      {
         connections = new Hashtable();
         context.Items[contextKey] = connections;
      }
      SqlConnection connection = 
         connections[connectionString] as SqlConnection;
      if (connection == null)
      {
         connection = new SqlConnection(connectionString);
         connections.Add(connectionString, connection);
      }
      if (connection.State != ConnectionState.Open)
      {
         connection.Open();
      }
      return connection;
   }

   internal static void ReturnConnection(SqlConnection connection)
   {
      // nothing yet
   }

   internal static void CloseConnections()
   {
      HttpContext context = HttpContext.Current;
      Hashtable connections = context.Items[contextKey] as Hashtable;
      if (connections != null)
      {
         IEnumerator enumerator = connections.GetEnumerator();
         while (enumerator.MoveNext())
         {
            DictionaryEntry entry = (DictionaryEntry)enumerator.Current;
            SqlConnection connection = entry.Value as SqlConnection;
            if (connection != null)
            {
               if (connection.State != ConnectionState.Closed)
                  connection.Close();
               connection = null;
            }
         }
         connections.Clear();
      }
   }
}
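
For context, here is how a Mapper method might use the class. This is a sketch of my own—connectionString and queryText are presumed to be in scope—but the pattern is simply get, use, return, with the actual closing left to the two lifecycle hooks mentioned above:

SqlConnection connection =
    SqlConnectionManager.GetConnection(connectionString);
try
{
    using (SqlCommand command = new SqlCommand(queryText, connection))
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // map the columns onto a business entity
        }
    }
}
finally
{
    SqlConnectionManager.ReturnConnection(connection);
}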

With the exception of the first page request and its inexplicable two-second delay, Charlie is taking between 0.02 and 0.4 seconds to serve a page request. While the slower responses are always due to database activity, the response times still seem fast enough. I am not going to try to optimise the code further.

So the bulk of Charlie’s Business layer and Persistence layer is done. I’m now going to move onto the Controller layer. That should be easy, because I am going to pirate some code. Stay tuned for some swashbuckling tales.

by Alister Jones | Next up: “You Are Here”

2 comments

----

The Valley of Data Access - Part 5

by Alister Jones (SomeNewKid)

A two-letter word has had a profound impact on Charlie’s database access code. Before I learned of this word, I did not know how to tell SQL Server to test for one of a range of values. So, I took the brute-force approach of simply issuing the same query over and over, passing in new parameter values each time.

Here is the text of the command that I needed to execute:

String query =
      @"SELECT
            r.Role_Id, 
            Role_Name, 
            DomainEntityRole_CanCreate, 
            DomainEntityRole_CanRetrieve, 
            DomainEntityRole_CanUpdate, 
            DomainEntityRole_CanDelete
        FROM
            Charlie_DomainEntityRole der
            INNER JOIN
                Charlie_Role r
            ON
                r.Role_Id = der.Role_Id
        WHERE
            EntityType_Id = @entitytypeID
        AND
            Entity_Id = @entityID
        AND
            Domain_Id = @domainID";

Then, in order to test different parameter values, I issued three separate requests for a DataReader:

command.Parameters.Clear();
command.Parameters.AddWithValue("@entitytypeID", entityTypeId);
command.Parameters.AddWithValue("@domainID", domainId);
command.Parameters.AddWithValue("@entityID", entityId);
reader = command.ExecuteReader();
triplet.CollectionByEntityId = FillRoleCrudCollection(reader);
reader.Close();

command.Parameters.Clear();
command.Parameters.AddWithValue("@entitytypeID", entityTypeId);
command.Parameters.AddWithValue("@domainID", domainId);
command.Parameters.AddWithValue("@entityID", -1);
reader = command.ExecuteReader();
triplet.CollectionByDomainId = FillRoleCrudCollection(reader);
reader.Close();

command.Parameters.Clear();
command.Parameters.AddWithValue("@entitytypeID", entityTypeId);
command.Parameters.AddWithValue("@domainID", -1);
command.Parameters.AddWithValue("@entityID", -1);
reader = command.ExecuteReader();
triplet.CollectionByEntityTypeId = FillRoleCrudCollection(reader);
reader.Close();

If this were a rare requirement, then this brute-force approach might be acceptable. However, this was the code that implemented the cascading logic needed for each entity to receive its security roles. So each time an entity was requested from the database, this silly code issued a further three queries.

Fortunately my Google searching turned up a little gem of a tutorial: Introduction to Structured Query Language by James Hoffman. Included in the tutorial is a brief example of the IN keyword, and with that example I was able to undo the silliness above. Here is the updated command text:

String query =
      @"SELECT
            r.Role_Id, 
            Role_Name, 
            DomainEntityRole_CanCreate, 
            DomainEntityRole_CanRetrieve, 
            DomainEntityRole_CanUpdate, 
            DomainEntityRole_CanDelete,
            Domain_Id,
            Entity_Id
        FROM
            Charlie_DomainEntityRole der
            INNER JOIN
                Charlie_Role r
            ON
                r.Role_Id = der.Role_Id
        WHERE
            EntityType_Id = @entitytypeid
        AND
            Entity_Id IN (@entityID, -1)
        AND
            Domain_Id IN (@domainID, -1)";

With that change it now takes only one database hit to retrieve the roles for an entity. Fortunately, it did not take too long for me to realise that this exact same query could be used to retrieve the roles not just for a single entity, but also for a collection of entities. Rather than passing in a single @entityID parameter, I would pass in the ID values of all the entities in the collection:

String IDmarker = "[[@entityIDs]]";
String query =
      @"SELECT
            r.Role_Id, 
            Role_Name, 
            DomainEntityRole_CanCreate, 
            DomainEntityRole_CanRetrieve, 
            DomainEntityRole_CanUpdate, 
            DomainEntityRole_CanDelete,
            Domain_Id,
            Entity_Id
        FROM
            Charlie_DomainEntityRole der
            INNER JOIN
                Charlie_Role r
            ON
                r.Role_Id = der.Role_Id
        WHERE
            EntityType_Id = @entitytypeid
        AND
            Entity_Id IN (" + IDmarker + @")
        AND
            Domain_Id IN (@domainid, -1)";
// Each entity ID becomes its own named parameter: @entityid0, @entityid1, ...
StringBuilder builder = new StringBuilder();
for (Int32 i = 0; i < criteria.EntityIDs.Count; i++)
{
    String param = String.Format("@entityid{0}", i.ToString());
    builder.AppendFormat("{0},", param);
    command.Parameters.AddWithValue(param, (Int32)criteria.EntityIDs[i]);
}
// The trailing -1 keeps the cascading default roles in the result set.
builder.Append("-1");
query = query.Replace(IDmarker, builder.ToString());
command.CommandText = query;
command.Parameters.AddWithValue("@entitytypeid", criteria.EntityTypeId);
command.Parameters.AddWithValue("@domainid", criteria.DomainId);

Previously, a collection of ten entities would require 30 database hits in order to retrieve all the roles for that collection. Now, no matter how many entities are within the collection, only a single database visit is required to retrieve the security roles.

That’s a great improvement, but I am still a little disturbed that one database visit is required to retrieve an entity or an entity collection, and then a separate database visit is required to retrieve its roles. To state the bleeding obvious, this approach is doubling the number of hits to the database.

I have been in two minds about whether this is a problem that needs to be solved. While an extra database hit is of course undesirable, performance is not everything. The current design cleanly separates entity content from entity security.

What I have resolved to do is to have a bash at combining the two queries into one. In most circumstances I would say to myself, “Until it actually becomes a problem, it is not a problem to be solved.” However, I need to get a better understanding of SQL, so I consider this to be an exercise that may also provide a performance boost for Charlie. I’ll give it a go, but I won’t be too concerned if I cannot get it to work.

by Alister Jones | Next up: The Valley of Data Access - Part 6

0 comments

----

Charlie Wears Lead Boots

by Alister Jones (SomeNewKid)

I finished my previous weblog entry by saying that I would teach myself enough SQL so that I could move a little bit of logic to the database, rather than keeping the logic in the C# code and hitting the database multiple times.

Fortunately, a friend has offered to help me with writing the queries, and I’ll report on how that goes. So that we focus on the most “chatty” parts of Charlie’s database access, I decided that I should use SQL Profiler to see what is going on at the database. I “touched” the Global.asax file in order to mark the Charlie application for recycling, and requested the home page. I switched to SQL Profiler and looked at its trace. I was horrified.

To start with, the trace showed significantly more hits to the database than should have occurred. Since this is an honest account of Charlie’s development, I’ll give you the true figure, as much as I’d like to halve it in order to save some face. The first page request required 49 hits to the database. Just as troubling, there was a two-second delay between two subsequent database hits. What the heck was Charlie doing in that time?

I switched on Charlie’s logging plugin, cleared the SQL Profiler trace, and again forced the Charlie application to recycle. After requesting the home page again, I now had a record of what Charlie was doing, and what the database was doing. I brought the details together in an Excel spreadsheet, and analysed what was going on.

The first problem was that Charlie was issuing the same queries over and over. In fact, I counted 23 redundant queries. This is a coding error on my part, and some good old-fashioned debugging will solve it. So, while I am suitably embarrassed, I am not too worried about this.

The second problem was that Charlie was issuing three queries that should really be combined into one query. This is the logic problem that prompted this investigation. Hopefully, my friend can help me here.

The third problem was that Charlie was querying once for each entity, and then once again for each entity’s roles. Performing two queries for each entity seems unnecessary, so I may look at combining the queries.

The fourth problem was that the database connection was opened and closed eight times. It is my understanding that there is some overhead involved in opening a new connection, so I may consider keeping a connection open when I know another query is imminent.
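For what it’s worth, ADO.NET pools connections by default, so opening and closing is cheaper than it looks; still, when I know a second query is imminent, reusing the open connection is simple enough. Here is a sketch, with a hypothetical method name and placeholder query variables:

private void RetrieveWithOneConnection(
    String connectionString, String entityQuery, String rolesQuery)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();

        using (SqlCommand entityCommand =
            new SqlCommand(entityQuery, connection))
        using (SqlDataReader entityReader = entityCommand.ExecuteReader())
        {
            // ... map the entity from the reader ...
        }

        using (SqlCommand rolesCommand =
            new SqlCommand(rolesQuery, connection))
        using (SqlDataReader rolesReader = rolesCommand.ExecuteReader())
        {
            // ... map the roles from the reader ...
        }
    }   // the connection is closed (and returned to the pool) once, here
}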

If I address each of the above four problems, I should be able to get the initial database access down to about six queries and two connections. That will solve the “chatty” database access problem. But, there was a bigger problem.

The fifth problem was that the SQL Profiler trace showed a two-second gap between two adjacent queries. The output of the logging plugin shows where the delay occurs, even though I don’t know the cause of the delay.

Charlie’s extension of the HttpApplication class (the class behind the Global.asax file) logs the time immediately before the target HttpHandler receives its ProcessRequest command. Then, that HttpHandler logs the time when it starts processing the request. The log shows that there is a 2.1 second delay. Charlie is not doing anything at all during this time. ASP.NET is doing something. What the hell is it doing that is taking 2.1 seconds?

First, it is not a logging error, because the same delay is recorded by both Charlie and SQL Profiler. The delay is real.

Second, it is not a delay caused by ASP.NET parsing an .aspx file and storing the dynamically-generated DLL. The HttpHandler used by Charlie is a ready-to-go class that is fully contained within an assembly.

Third, this delay only occurs during the first page request after recycling the ASP.NET application. However, it is not the time taken to restart the application, since neither the logging plugin nor SQL Profiler comes into play until after the application has started and the request has commenced processing.

What is ASP.NET doing that takes 2.1 seconds? The HttpHandler is loaded and ready to go, but there is a marked delay before anything happens.

To put this delay in context, the request for the Home page after an application restart takes 3.1 seconds. A subsequent request for the About page (so there are a few database hits for the new content) takes 0.1 second. A new request for the Home page (so everything is drawn from cache) takes 0.03 seconds. This inexplicable 2.1 second delay during the processing of the first request is far and away the slowest part of Charlie.

As I write this, I have no idea why there is a long delay before the first HttpHandler executes. I fully understand the delay caused by the ASP.NET application restarting and reloading. But I do not understand what ASP.NET is doing from the time it fires the PreRequestHandlerExecute event to the time the handler actually executes. If I learn the cause of the delay, I will of course let you know. And if you know the cause, you will of course let me know, won’t you?

By the way, I know that optimisation should be a final polishing step. This investigation did not come about from a desire to optimise Charlie, though; it came about as a consequence of using SQL Profiler to address Charlie’s chatty database access code.

by Alister Jones | Next up: The Valley of Data Access - Part 5

0 comments

----

The Valley of Data Access - Part 4

by Alister Jones (SomeNewKid)

In the previous weblog entry I set out the plan of how I would refactor the database access code. I am pleased to say that, with just one variation, the refactoring went as planned.

The variation was reasonably cosmetic. In the original plan, the Mapper would gather together the raw query text and a collection of parameters, and pass them off to the new Helper class. This plan was a little short-sighted, as future developments may mean that the Mapper wants to pass the name of a stored procedure to its Helper class, rather than pass the raw query text. So, to allow for future flexibility, I created a new EntityCommand class that looks like this:

namespace Charlie.Framework.Persistence
{
    public class EntityCommand
    {
        public String CommandText
        {
            get
            {
                return this.commandText;
            }
            set
            {
                this.commandText = value;
            }
        }
        private String commandText;

        public EntityParameterCollection Parameters
        {
            get
            {
                return this.parameters;
            }
        }
        private EntityParameterCollection parameters = 
            new EntityParameterCollection();
    }
}

So, rather than passing the raw query text and parameters to its Helper, the Mapper passes a high-level EntityCommand object. In the future, I can extend this EntityCommand object to expose a CommandType property, allowing the Mapper to tell its Helper that a stored procedure is to be used.
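That future extension should be small. Here is a sketch of the addition, reusing the System.Data.CommandType enumeration from ADO.NET itself; defaulting to CommandType.Text keeps the existing Mappers unaffected:

public CommandType CommandType
{
    get
    {
        return this.commandType;
    }
    set
    {
        this.commandType = value;
    }
}
// Default to Text so existing Mappers need not change.
private CommandType commandType = CommandType.Text;

A Mapper could then set command.CommandType = CommandType.StoredProcedure and put the procedure’s name in CommandText, with the Helper copying both onto the real SqlCommand.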

With the refactoring complete, the long Retrieve method shown in the second entry in this series has been reduced to this:

public override Entity Retrieve(Entity entity, EntityCriteria crit)
{
    NoteCriteria criteria = (NoteCriteria)crit;

    EntityCommand command = new EntityCommand();
    command.CommandText =
          @"SELECT
                s.Note_Id, 
                Note_CreatedBy, 
                Note_CreationDate, 
                Note_UpdateDate, 
                NoteLocalized_Title, 
                NoteLocalized_Content
            FROM
                Charlie_Note s
            INNER JOIN
                    Charlie_NoteLocalized sc
                ON
                    s.Note_Id = sc.Note_Id
            WHERE
                s.Note_Id = @noteid
            AND
                sc.NoteLocalized_Culture = @culture";
    command.Parameters.Add("@noteid", criteria.Id);
    command.Parameters.Add("@culture", criteria.Culture);

    Note note = (Note)this.Helper.Retrieve(
                entity, criteria, command, this.CreateNoteFromReader);

    return note;
}

All of the red code (the common database plumbing) has been moved out to the Helper class. Better yet, all that red code is now shared amongst all of the Mapper classes, where before it was duplicated across those classes.

The second-last line of code includes a delegate that points to the following method of the Mapper:

private Entity CreateNoteFromReader(IDataReader reader)
{
    Note note = new Note();
    note.Title = Convert.ToString(reader["NoteLocalized_Title"]);
    note.Content = Convert.ToString(reader["NoteLocalized_Content"]);
    note.CreationDate = Convert.ToDateTime(reader["Note_CreationDate"]);
    note.UpdateDate = Convert.ToDateTime(reader["Note_UpdateDate"]);
    return note;
}

What you may notice is that the only code left in the Mapper is the code that actually does the mapping of the database fields to the business object properties. Everything else has been relocated to the Helper class. If I were an OOP nerd, I might suggest that the cohesion of the class has been increased. But as I am an OOP newbie, I’ll just say that the class is now simpler.
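For anyone curious about the other side of that hand-off, here is a minimal sketch of what the Helper’s Retrieve method might look like. It is illustrative only: the real Helper also handles transactions, the roles query, and logging, and the EntityParameter class with its Name and Value properties is an assumption on my part. The IReaderHandler delegate is the one from the previous entry.

public Entity Retrieve(Entity entity, EntityCriteria criteria,
    EntityCommand command, IReaderHandler handler)
{
    using (SqlConnection connection =
        new SqlConnection(this.ConnectionString))
    using (SqlCommand sqlCommand =
        new SqlCommand(command.CommandText, connection))
    {
        // Copy the high-level parameters onto the real command.
        foreach (EntityParameter parameter in command.Parameters)
        {
            sqlCommand.Parameters.AddWithValue(
                parameter.Name, parameter.Value);
        }

        connection.Open();

        using (SqlDataReader reader = sqlCommand.ExecuteReader())
        {
            if (reader.Read())
            {
                // Hand the reader to the Mapper's mapping method.
                entity = handler(reader);
            }
        }
    }
    return entity;
}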

The exercise of refactoring the database access code did bring to light a weakness with Charlie, which is actually a weakness with me. The code in the RoleMapper class could not be refactored to the new system, because its Retrieve method hits the database three times, whereas all other Retrieve methods hit the database once. The only reason the code hits the database three times is that I could not figure out a more effective SQL command.

So guess what I’m going to learn next?

by Alister Jones | Next up: Charlie Wears Lead Boots

0 comments

----

The Valley of Data Access - Part 3

by Alister Jones (SomeNewKid)

At this point in the life of Charlie, I am refactoring the data access code. In the previous weblog entry, I explained why my approach is to extract from the data access methods the parts that stay the same, and leave the mappers to concentrate on their unique requirements. Staying with the earlier example of a Retrieve method, I see four unique parts that the Mapper must gather together before passing them off to the Helper to execute.

Starting from the top of the method and working down, the first unique part is the entity that is being retrieved.

Document document = (Document)entity;

The second unique part of the Retrieve method is the query string:

String query =
   @"SELECT
         d.Document_Id,
         d.Document_ParentId,
         d.Document_Name,
         d.Document_FriendlyUrl,
         d.Document_Position,
         d.Document_CreationDate, 
         d.Document_UpdateDate, 
         c.DocumentLocalized_Culture,
         c.DocumentLocalized_Title
     FROM
            Charlie_Document d
         JOIN
            Charlie_DocumentLocalized c
         ON
            c.Document_Id = d.Document_Id
     WHERE ";
if (criteria.LoadById == true)
{
   query += " d.Document_Id = @documentid";
}
else if (criteria.LoadByUrl == true)
{
   query += " d.Document_FriendlyUrl = @friendlyurl";
}

Now, a seasoned developer would be horrified at the appearance of a hard-coded query like that. Personally though, I consider its clumsiness to be offset by three compelling benefits. First, it communicates clearly the query that will be executed. Second, it allows me to copy the query into Query Analyser, test it and perhaps tweak it, and paste it back into Charlie’s code. Third, it’s simple. The seasoned developer can add attributes to the business objects or add mapping rules to an XML file. Charlie and I will just slap the query in place.

The third unique thing that the Retrieve method must gather together is the collection of command parameters. Currently, the code looks like this:

if (criteria.LoadById == true)
{
   command.Parameters.AddWithValue("@documentid", criteria.Id);
}
else if (criteria.LoadByUrl == true)
{
   command.Parameters.AddWithValue("@friendlyurl", criteria.Url);
}

This just needs to be updated slightly so that the mapper gathers together a single collection of parameters:

ParameterCollection parameters = new ParameterCollection();

if (criteria.LoadById == true)
{
   parameters.Add(new Parameter("@documentid", criteria.Id));
}
else if (criteria.LoadByUrl == true)
{
   parameters.Add(new Parameter("@friendlyurl", criteria.Url));
}

I will use a custom Parameter class, because I want to be able to define “alternative” values in order to support the cascading logic used throughout Charlie. Using localization as an example, the following may be a custom parameter:

Parameter cultureParameter = new Parameter("@culture", "fr-FR", "fr", "en");
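
Something like the following sketch is what I have in mind for that class (illustrative only; the real class will need a little more):

public class Parameter
{
    public Parameter(String name, params Object[] values)
    {
        this.name = name;
        this.values = values;
    }

    // The parameter name, such as "@culture".
    public String Name
    {
        get
        {
            return this.name;
        }
    }
    private String name;

    // The preferred value first, followed by any fallback values
    // for the cascading logic ("fr-FR", then "fr", then "en").
    public Object[] Values
    {
        get
        {
            return this.values;
        }
    }
    private Object[] values;
}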

The final unique thing within each mapper’s Retrieve method is seen in the bold line below (the call to NewDocumentFromReader):

if (reader.Read())
{
   document = NewDocumentFromReader(reader);
}

The document was the first unique thing gathered by the Mapper, so we have that part. But what can we do about the second part, the method call? How can we tell the Helper method that once it has executed the passed-in query and obtained the resulting data reader, we want that reader to be passed to our NewDocumentFromReader method?

An obvious solution would be to have the Helper simply return the reader to the calling Mapper code. The Mapper code then passes the reader to the NewDocumentFromReader method. However, the Mapper is then left holding a darn data reader, which it must tidy up. But the whole point of this refactoring exercise is to free the Mapper from having to worry about data connections, transactions, readers, exceptions, and everything else.

Fortunately there is another solution. Just as the Mapper can pass the entity object, the query string, and the parameter collection to the Helper class, it can also pass an entire method to the Helper class. In truth, it does not pass the method itself but rather a delegate of the method. Teemu Keiski describes this process in his article, Using Delegates with Data Readers to Control DAL Responsibility. If the concept of delegates is new to you, I wrote a tiny tutorial on delegates on the ASP.NET Forums.
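In brief, a delegate is a type whose instances point at a method with a matching signature. A couple of lines show the whole trick, using the IReaderHandler delegate that appears in the skeleton below:

// A delegate type whose signature matches the mapping methods.
public delegate Entity IReaderHandler(IDataReader reader);

// The Mapper wraps its private method in a delegate instance, and
// whoever holds that instance can invoke the method through it
// (here, reader is an open IDataReader):
IReaderHandler handler = new IReaderHandler(this.NewDocumentFromReader);
Entity document = handler(reader);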

We have seen that the Retrieve method in each EntityMapper needs to gather together four objects to send to its Helper class to execute. The skeleton of the Retrieve method therefore looks like this:

public override Entity Retrieve(Entity entity, EntityCriteria crit)
{
    // Get the entity
    Document document = (Document)entity;
    
    // Get the query
    String query = @"SELECT ... ";
    
    // Get the parameters
    ParameterCollection parameters = new ParameterCollection();
    parameters.Add(new Parameter("@name", value));
    
    // Get the delegate
    IReaderHandler handler = new IReaderHandler(NewDocumentFromReader);
    
    // Pass them off to the Helper for executing
    this.Helper.Retrieve(document, query, parameters, handler);

    return document;
}

This trim Mapper.Retrieve method means that all the red code in the previous weblog entry has been extracted to the Helper.Retrieve method. This Helper.Retrieve method assumes responsibility for the database connection, any transactions that may be in progress, any readers that are generated, any exceptions that are thrown, and any logging that may be required. Since all the Mappers will use this Helper.Retrieve method, there is now a single point at which the data access code can be tweaked or corrected.

Now, as I said in the first entry on refactoring the data access code, I am writing this series of weblog entries “live”—I have not yet performed this refactoring.

The next step is to actually do the refactoring. I will of course report on any surprises along the way.

by Alister Jones | Next up: The Valley of Data Access - Part 4

0 comments

----

The Valley of Data Access - Part 2

by Alister Jones (SomeNewKid)

Once I had decided to refactor the data access code for Charlie, the first thing I did was look for a guide. I am frightfully inexperienced with SQL, so I wanted to see if I could find an article, or project, or generator, or anything else, that would guide me.

I started with Google. But every article I found presented the same technique: code a single method that will create a database connection, create a command, add the parameters, open the connection, execute the command, get the data, and close the connection. Yet this everything-in-one-method approach is prone to error and prone to duplication, and is precisely what I wanted to avoid.

I next looked at some open-source projects, including some rather expensive ones, hoping they would take a more considered approach to data access. But no. Each of the projects I looked at took the same everything-in-one-method approach, with an occasional variation being the use of stored procedures in place of hard-coded queries.

I then trialled a commercial code generator. Unfortunately, I could not for the life of me work out how to use it. So I opened a sample project, comprising just four business objects, and used it to generate a data access layer. I don’t know, maybe it’s just me, but I think 3,000 lines of data access code per business object is a tad unnecessary. And then there was the fudgy business object code needed to support the whopping data access code.

After casting about, looking for a guide but failing to find one, I made the increasingly-common decision, “To hell with it, I’ll do it myself.”

I started by looking at the following two pieces of code. The first is the hard-coded query:

String query =
      @"SELECT
            s.Note_Id, 
            Note_CreationDate, 
            Note_UpdateDate, 
            NoteLocalized_Title, 
            NoteLocalized_Content
        FROM
            Charlie_Note s
        INNER JOIN
                Charlie_NoteLocalized sc
            ON
                s.Note_Id = sc.Note_Id
        WHERE
            s.Note_Id = @noteid
        AND
            sc.NoteLocalized_Culture = @culture";

The second was the method that accepts the returned data reader, and populates a business object.

note.CreationDate = Convert.ToDateTime(reader["Note_CreationDate"]);
note.UpdateDate =   Convert.ToDateTime(reader["Note_UpdateDate"]);
note.Title =        Convert.ToString(reader["NoteLocalized_Title"]);
note.Content =      Convert.ToString(reader["NoteLocalized_Content"]);

To put the code into words, the first query string defines the source of the data, while the second method defines the destination of that data. My first thought was, “It would be great if I could create a method that described the mapping between the source table and column names, and the destination property names.” I scribbled down the following code on a piece of paper:

protected Mappings GetMappings()
{
    return new Mappings(
        // Table        // Column            // Property
        "Charlie_Note", "Note_CreationDate", "CreationDate",
        "Charlie_Note", "Note_UpdateDate",   "UpdateDate",
        // and so on
        );
}

My mind then started thinking about how my new data access code would use this mapping information to automate the process of retrieving data from the database and applying it to the business object. I spent a few minutes thinking about this before the little devil on my shoulder whispered, “Alister, you’re talking about creating your own little O/R Mapper here, and we both know you’re not smart enough for that.” The angel on my other shoulder then whispered, “Well, you may be smart enough, but it’s still a dumb idea.”

I went back to looking at the code in my data access methods. I scratched my head a bit. What bothered me is that while the methods were all very similar, each one had enough little quirks to make it hard to extract any common code. I scratched my head a little more. I then remembered a key design principle from my favourite book for nerds, Head First Design Patterns:

“Identify the aspects of your application that vary
and separate them from what stays the same.”

With that principle in mind, I looked at each of the methods and noted which parts varied and which parts stayed the same. In the following listing, the red code (everything except the entity casts, the query string, the criteria handling, and the call to NewDocumentFromReader) is what stays the same within all Retrieve methods.

public override Entity Retrieve(Entity entity, EntityCriteria crit)
{
    Document document = (Document)entity;
    DocumentCriteria criteria = (DocumentCriteria)crit;
    SqlConnection connection =
       new SqlConnection(this.ConnectionString);
    SqlCommand command = new SqlCommand();
    String query =
       @"SELECT
             d.Document_Id,
             d.Document_ParentId,
             d.Document_Name,
             d.Document_FriendlyUrl,
             d.Document_Position,
             d.Document_CreationDate, 
             d.Document_UpdateDate, 
             c.DocumentLocalized_Culture,
             c.DocumentLocalized_Title
         FROM
                Charlie_Document d
             JOIN
                Charlie_DocumentLocalized c
             ON
                c.Document_Id = d.Document_Id
         WHERE ";
    if (criteria.LoadById == true)
    {
       query += " d.Document_Id = @documentid";
       command.Parameters.AddWithValue("@documentid", criteria.Id);
    }
    else if (criteria.LoadByUrl == true)
    {
       query += " d.Document_FriendlyUrl = @friendlyurl";
       command.Parameters.AddWithValue("@friendlyurl", criteria.Url);
    }
    else
    {
       throw new ArgumentException("Invalid criteria.");
    }
    command.CommandText = query;
    command.Connection = connection;
    SqlDataReader reader = null;
    try
    {
       connection.Open();
       reader = command.ExecuteReader();
       if (reader.Read())
       {
          document = NewDocumentFromReader(reader);
       }
       reader.Close();
       base.AddRolesToEntity(document, criteria, connection);
       connection.Close();
    }
    catch (Exception exception)
    {
       throw new DataAccessException(
          "Could not load document.", exception);
    }
    finally
    {
       if (reader != null && reader.IsClosed == false)
          reader.Close();
       if (connection.State != ConnectionState.Closed)
          connection.Close();
    }
    return document;
}

I decided that whatever parts stayed the same would be moved out to a helper class. That would leave the mapper to concentrate on its unique requirements.

What is good about this approach is that by concentrating all of the common code in a single helper class, I would have one point at which to enhance that common code. When I had previously discovered that I was not properly rolling back transactions, I had to go into every one of ten data access classes and make the correction. This way, I would have just one point at which to correct the transaction-based code.
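As an example of the kind of code that benefits from having a single home, here is the usual shape of transaction handling in ADO.NET (a sketch only, not Charlie’s actual helper code; connection is an open SqlConnection and command is a SqlCommand):

SqlTransaction transaction = connection.BeginTransaction();
try
{
    command.Transaction = transaction;
    command.ExecuteNonQuery();
    transaction.Commit();
}
catch
{
    // Roll back on any failure, then let the exception bubble up.
    transaction.Rollback();
    throw;
}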

Because this approach had been inspired by the Head First Design Patterns book, my mind started thinking about whether any patterns would help me here. But the little devil on my shoulder whispered in my ear, “Keep it simple, stupid.” The angel on my other shoulder whispered, “You’re not stupid, but keep it simple, sweetheart.”

I decided then that the Mapper class would simply create a query string, create a collection of parameters, and pass them all into the new helper class to be executed. No pattern there, just a clean separation of preparing the database command from executing the database command. Simple.

by Alister Jones | Next up: The Valley of Data Access - Part 3

2 comments

----

The Valley of Data Access - Part 1

by Alister Jones (SomeNewKid)

I am reporting live to you from the valley of data access. All around me I see towering mountains of SQL. Rising in front of me is the DocumentMapper mountain, upon which grow putrid-smelling commands and parameters. To its left is the ContainerMapper mountain, with more foul-smelling stuff. To its right is the RoleMapper mountain, and to the right of that is the UserMapper mountain. I am surrounded by these mountains of SQL.

I hate this place. It is dark and hostile, and I do not have a map or a torch. I need to find a way out. I need to find a way. Out.

I look into my toolkit, and I see three tools at my disposal. One is a grappling hook labelled “consultant,” one is an unopened box labelled “O/R Mapper,” and one is a knife labelled “refactor.”

I’ve thrown the grappling hook to a consultant who stands on top of these mountains. She wraps the hook around a rock, on which she has etched the word “experience,” and I start to climb out. But a man approaches the consultant. He says his name is Shane, and he has something to show her. She must choose between the man dangling at the end of the rope and the stranger standing before her. She cuts the rope. I fall, crashing back into the valley.

After the pain of the fall subsides, I look at the box labelled “O/R Mapper.” I open it and read the instructions. “For use only by those who know what they’re doing.” That’s not me. I close the box and think, “Maybe some other day.”

I take out the knife labelled “refactor.” I like this knife. I have used it before.

Enough of the story? I thought so too.

Right now, I have ten Mapper classes within Charlie. Each Mapper contains a method for Create, Retrieve, Update, and Delete. Each method is very long, yet each method is not much different from any other. So I have forty large, half-redundant data access methods. Every time I add a feature to Charlie, each method gets a little larger, a little more redundant. Every time I make a change to the database schema, I need to update many, if not all, of the forty separate methods. I need to take control—to reduce the size of the methods and eliminate the redundancy. I have decided to refactor this code before it explodes beyond a maintainable size.

To provide a working sample, here is one of the shortest of the unwieldy methods.

public override Entity Retrieve(Entity entity, EntityCriteria crit)
{
    Document document = (Document)entity;
    DocumentCriteria criteria = (DocumentCriteria)crit;
    SqlConnection connection =
       new SqlConnection(this.ConnectionString);
    SqlCommand command = new SqlCommand();
    String query =
       @"SELECT
             d.Document_Id,
             d.Document_ParentId,
             d.Document_Name,
             d.Document_FriendlyUrl,
             d.Document_Position,
             d.Document_CreationDate, 
             d.Document_UpdateDate, 
             c.DocumentLocalized_Culture,
             c.DocumentLocalized_Title
         FROM
                Charlie_Document d
             JOIN
                Charlie_DocumentLocalized c
             ON
                c.Document_Id = d.Document_Id
         WHERE ";
    if (criteria.LoadById == true)
    {
       query += " d.Document_Id = @documentid";
       command.Parameters.AddWithValue("@documentid", criteria.Id);
    }
    else if (criteria.LoadByUrl == true)
    {
       query += " d.Document_FriendlyUrl = @friendlyurl";
       command.Parameters.AddWithValue("@friendlyurl", criteria.Url);
    }
    else
    {
       throw new ArgumentException("Invalid criteria.");
    }
    command.CommandText = query;
    command.Connection = connection;
    SqlDataReader reader = null;
    try
    {
       connection.Open();
       reader = command.ExecuteReader();
       if (reader.Read())
       {
          document = NewDocumentFromReader(reader);
       }
       reader.Close();
       base.AddRolesToEntity(document, criteria, connection);
       connection.Close();
    }
    catch (Exception exception)
    {
       throw new DataAccessException(
          "Could not load document.", exception);
    }
    finally
    {
       if (reader != null && reader.IsClosed == false)
          reader.Close();
       if (connection.State != ConnectionState.Closed)
          connection.Close();
    }
    return document;
}

As I opened by saying, this report is coming to you live. I have printed out a few of these dastardly SQL methods, to look at how I might refactor them. I have not yet started to refactor this code. I’ll update this weblog as I do so.

by Alister Jones | Next up: The Valley of Data Access - Part 2

1 comments

----