The conception, birth, and first steps of an application named Charlie

Sunken Ships and Pirates

by Alister Jones (SomeNewKid)

I have finished rebuilding the main parts of the Wilson WebPortal. I have been so impressed with its simplicity and its effectiveness that I have given great thought to whether to move the best of Charlie to the WebPortal, or the best of the WebPortal to Charlie.

While I would naturally have some misgivings about kicking the chair out from under Charlie, I am very cognisant of the economic concept of a sunk cost. Specifically, the time and effort I have put into Charlie is a sunk cost—a cost incurred in the past that should have no bearing on future decisions. All that is relevant to future decisions is future costs. In this case, the future cost will be either the time taken to move Charlie to the WebPortal, or the time taken to move the WebPortal’s ideas to Charlie. Whichever approach provides the greatest ratio of benefits to costs is the approach to take, regardless of the sunk cost of Charlie’s past development.

So what would be the benefits to switching to the Wilson WebPortal? I can see three benefits. First, I would be using a website framework that would be developed and maintained by someone else, leaving me free to focus on creating websites rather than the underlying framework. Second, I could use any modules created by Paul or other developers. Third, I could potentially sell any really good modules that I might create.

What then would be the costs involved in switching to the Wilson WebPortal? I can see two costs. First, I would need to spend time learning the new framework. Second, I would be limited to the functionality that the framework provides. Most notably, the current version of the Wilson WebPortal does not support globalization in its code or in its database schema, so I would be giving this up.

Now, since I am a subscriber to Paul’s website, I have the source to the Wilson WebPortal. Strictly speaking I could tweak the framework’s code to do whatever I like. However, I cannot see that this is a sustainable option. The moment I touch the core framework, I will be taking it down another development path that will prevent me from then “upgrading” to any future versions released by Paul. And if I cannot upgrade to future versions, then that immediately kills the benefit of having someone else maintain the base framework.

To solve this problem, I considered introducing a façade layer. Any “tweaks” could be applied to this façade layer, so that the underlying WebPortal framework could be kept in sync with any releases from Paul. However, while this sounds like a workable approach, a façade layer can only introduce relatively superficial enhancements to the underlying framework. Globalization, for example, is too deep an enhancement to be implemented in a façade layer. Globalization affects everything from threading to the business objects to the database schema to the persistence code. Even more problematic is that such an enhancement to the framework would be a breaking change. Even if I submitted my code revisions to Paul, he could not include them in future versions of the WebPortal without breaking everyone’s existing website. These concerns mean that I cannot see that tweaking the WebPortal framework is a sustainable approach. In turn this means that by moving to the Wilson WebPortal, I would be giving up Charlie’s working globalisation, which would be a major cost to incur.

So what would be the benefits of taking the approach of a pirate? I could take my cutlass to the Wilson WebPortal, steal its treasures, and bury them in the sand of Charlie under the light of the full moon. (This would be within the terms of use of the WebPortal, by the way.) I can see two benefits. First, I can apply some of Paul’s great ideas to Charlie, most notably his KeepAlive code and custom Cache code. This would mean adopting about 100 lines of Paul’s code. It’s not much, but it’s code that I could not have written myself. Second, Charlie would stay a bespoke application, whereas the WebPortal is a general application, and having a bespoke application has many attendant benefits.

What then is the cost involved in sticking with Charlie over moving to the Wilson WebPortal? Well, the cost is the same cost that has long been bothering me: that I’m creating and maintaining everything myself.

So which approach provides the greatest ratio of benefits to costs? Should I sink the Charlie ship and move to the WebPortal ship? Or should I keep the Charlie ship afloat and plunder the treasures of the WebPortal ship?

After thinking about this at some great length, I have decided to keep the Charlie ship afloat, and let it take the pirate approach of plundering goodies from any other ship it finds on the high seas of ASP.NET. Unlike pirates, however, I’ll only take what I’m entitled to take.

by Alister Jones | Next up: The Cache is a Shadow, Not a Box

0 comments

----

The Provider Model, Unveiled

by Alister Jones (SomeNewKid)

In my previous post, I suggested that only a small percentage of ASP.NET developers could describe where the new provider model fits within a properly-architected application. I suggested that the reason so few developers understand the technology they are using is that .NET evangelists are typically magicians who perform tricks, rather than teachers who explain concepts.

I am still learning ASP.NET version 2.0, but I’ll nonetheless tell you my current understanding of the provider model. Let’s start with a look at the common layers in a properly-designed application.

The magicians will tell you that the default providers work against SQL Server or SQL Server Express, and if you want to use another database you must implement a different provider. You can then place on your .aspx page a control such as the Login control, and point that control at your custom provider. What this suggests is that the Login control is in the Interface layer and the provider is in the Data Access layer.

Even the naming of the providers suggests that they belong in the Data Access layer. The default Profile provider is the SqlProfileProvider. As one alternative, Microsoft provides a downloadable AccessProfileProvider.

If you read the magicians’ articles and blog entries, then visualising the Providers as belonging to the Data Access layer is about the only possible conclusion. But anyone with a properly-architected application will shake his or her head at the magician, and grumble about Microsoft eschewing proper architecture in favour of the slap-it-in-place-in-Visual-Studio approach. Anyone with a properly-architected application will have a User business object and probably a Role business object, both of which the above design will avoid much like a pedestrian will side-step dog turd.

The above illustrates one of the reasons I was so concerned about ASP.NET version 2.0. How can it be a good thing to use these database providers that skip over business objects?

What has finally occurred to me however is that the providers do not belong in the Data Access layer, despite the names of the providers, and despite what the magicians tell me. No, the providers belong in the Controller layer. In the case of the default SQL provider, it acts like the other layers are not even there. The default providers work as though the other layers have been erased, and it’s okay to talk directly to the database.

For small applications, it probably is okay that the providers talk directly to the database. But for a properly-architected application, let’s look at what it means that the providers are in the Controller layer. The providers “hide” the business objects from the interface layer. All the interface layer sees is the provider, and has no idea whether that provider is talking directly to a SQL database, talking directly to an Oracle database, talking to business objects, talking to web services, or talking to Felix the Cat.
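
To make that concrete, here is a sketch of my own showing roughly what the interface layer sees. The LoginHelper class and its parameters are hypothetical; the only real piece is the static Membership facade, which routes the call to whichever provider is configured in web.config.

using System;
using System.Web.Security;

public class LoginHelper
{
    public static Boolean SignIn(String userName, String password)
    {
        // The interface layer only knows about this call. The configured provider
        // might query SQL Server directly, or hand the request to business objects.
        return Membership.ValidateUser(userName, password);
    }
}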

In my look at Paul Wilson’s practical approach to business objects, I said that he uses a class-in-charge design where the user is represented by one User class, and the authorization role is represented by one Role class. In his WebPortal project, he uses a custom Profile provider that talks to these business objects, rather than talk directly to the database. Even better, the project’s use of an O/R Mapper means that the same provider and same business objects could use a SQL database, an Oracle database, or one of many other databases. So here is the location of Paul’s custom provider in the context of his WebPortal project. Notice that the provider is talking to the business objects, and not to the database.
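I cannot show Paul’s code here, so the following is only a stand-in of my own to illustrate the shape of the idea. The BusinessObjectMembershipProvider and User classes are hypothetical, and a real provider would derive from the framework’s MembershipProvider base class (which has many more members to override). The point is simply that the provider forwards the call to a business object rather than to a database.

using System;

public class User
{
    private String userName;

    public User(String userName)
    {
        this.userName = userName;
    }

    public String UserName
    {
        get { return userName; }
    }

    public Boolean CheckPassword(String password)
    {
        // The business object owns the rule; this stub accepts any non-empty password.
        return password != null && password.Length > 0;
    }

    public static User GetByUserName(String userName)
    {
        // In the WebPortal this lookup would go through the O/R Mapper; here it is stubbed.
        return new User(userName);
    }
}

public class BusinessObjectMembershipProvider
{
    public Boolean ValidateUser(String userName, String password)
    {
        // The provider never touches the database; it asks the business object instead.
        User user = User.GetByUserName(userName);
        return user != null && user.CheckPassword(password);
    }
}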

Charlie does not use the same class-in-charge design used by the Wilson WebPortal. Rather, it uses a five-part Entity System. I can introduce a provider into Charlie’s Controller layer, which talks directly to the relevant Manager classes, in which case the architecture would look as follows.

What is great about the provider model—once you ignore the magicians—is that it allows the same rich server controls to work with any architecture. The Login control will work with a shallow architecture where the provider works directly with the database, or with a multi-layered architecture where the provider works with business objects (such as in the Wilson WebPortal), or with a multi-layered architecture where the provider works with manager objects (as in Charlie).

In many respects, the providers are the controllers in the popular model-view-controller pattern. The server control provides the view, the business objects provide the model, and the provider provides the controller. The providers are not true controllers, since they are not designed according to the true MVC pattern. However, they do provide the same separation of responsibilities that is the reason why the MVC pattern is so popular.

We can see then that the provider truly enhances the architectural integrity of .NET applications, and this is a very good thing. My earlier concern about providers was wholly unfounded, because I did not understand where the providers exist in the context of a layered application. I don’t think my misunderstanding is the result of me being a fool. (I may be a fool, but I don’t think it caused this misunderstanding.) Rather, I really do think .NET evangelists are magicians who focus on details (the tricks), rather than teachers who look at the bigger picture.

Now that I understand the provider model, I can see how I can maintain Charlie’s existing architecture while staying true to the design of ASP.NET version 2.0. Even better, this fleshes out Charlie’s architecture even further, since previously the Controller layer was something of a desolate landscape—there was nothing to see.

Finally, a disclaimer of sorts. This is my current understanding of the provider model, and it is an understanding brought about by my look at the Wilson WebPortal. If my understanding is wrong, or if I have misrepresented the Wilson WebPortal in my diagram above, the error is wholly mine. So if I am right, the thanks go to Paul. If I am wrong, the blame goes to me.

by Alister Jones | Next up: Sunken Ships and Pirates

0 comments

----

The Magician and the Teacher

by Alister Jones (SomeNewKid)

Tom has come home from his first high-school physics class, and already he is sensing that he’s going to fail physics. All those numbers and formulas. He goes to his Dad and shows him the formula for momentum, p=mv, and asks his Dad what it means. “That’s simple, son; it’s just mass times velocity.” He grabs his calculator and says, “If the mass is four and the velocity is three, then the momentum is,” wherein he pauses to let Tom type the equation into the calculator, “twelve. Simple!” He pats his son on his head, and walks out to watch some TV. Along the way he thinks, I hope Tom is good at football, because his maths sure sucks. And Tom is thinking, I hope I can grow up to be a football player, because I’m going to fail physics.

Alison is in the same class as Tom, and asks her mother the same question. Her mother takes her by the hand and together they go to the spare room. Her mother pulls out the bowling ball that her husband bought her, hoping that she’d show an interest in his one hobby. She then finds a baseball and places it beside the bowling ball. Feeling slightly guilty, the mother says, “Imagine that you’re standing over there,” as she points to the doorway, “and each of these balls is going to roll toward you and hit your feet.” Alison looks a little amused, so her mother presses on with the analogy. “Imagine that they are both rolling toward you at about the speed you can run. Which one is going to hurt more?” Alison thinks the question is silly, but says, “The bowling ball is going to hurt, because it’s so heavy.” To which her mother says, “Correct.” Her mother then goes on to say, “Now, imagine that your baby brother has pushed the bowling ball toward you, but a cannon has shot the baseball toward you. Which one will hurt more?” Alison says, “The baseball, since it’s going so fast.” With the demonstration complete, Alison’s mother takes her back to her textbook where they look at the formula for momentum, which is p=mv. The mother explains that m is the weight of the bowling ball or the baseball, and v is the speed at which the ball is going. Harking back to the demonstration, her mother explains that sometimes a baseball will carry more momentum, and sometimes a bowling ball will carry more momentum. They then do the same sums that Tom did.

Guess who passed the first physics test? Well, actually, Tom and Alison both failed, since they had fallen in love and spent the entire classes passing love notes back and forth. But had love not blinded them both, Alison would have passed because her mother took the time to teach her, while Tom’s father only took the time to show him.

What this little story shows is that it takes me three paragraphs to make a single point. And the point is that a formula and a calculator are like a magic trick and the magician: they hide the reality from you. If you know the formula and can work the calculator, then all you know is the formula and the calculator. You know the trick but not the reality. To see the reality, you need a teacher, not a magician.

Let’s look at another equation that has become common in the ASP.NET world:

control + provider = functionality

All over the internet, and in books about ASP.NET 2.0, we have magicians repeating the formula and telling us what buttons to press on our calculator (Visual Studio). But Tom will not learn physics this way, and we will not learn ASP.NET this way. All we learn is the trick. Where oh where are the teachers who will explain the reality behind the trick?

I have become increasingly frustrated with the magical approach taken by .NET evangelists. Put one control and one provider into a top hat, wave a wand over the hat, and—abracadabra—a bunny appears. That’s a trick. Where are the teachers who will explain the reality behind the trick?

To bring this rambling weblog entry into context, let’s look at the architecture for Charlie. If I want to align Charlie with the direction that ASP.NET is taking, where in that architecture would a provider fit? Now, Charlie’s architecture is so conventional that any architectural question that troubles Charlie must trouble thousands of other applications too. So for those of us who attempt to create properly-architected applications, where does the new .NET provider model fit within this well-known n-layered design?

How many users of ASP.NET version 2.0 could answer this question of where the provider model fits within a properly-architected application? One in two? One in five? One in ten? I would hazard a guess that it’s a very low percentage indeed. Why? Because .NET is being evangelised by magicians, and not by teachers.

This is one of the reasons why I was initially fearful of ASP.NET version 2.0. All the evangelists were magicians pulling rabbits from their hats and scarves from their mouths. Impressive, maybe, but pretty darn useless in the real world. I was fearful that ASP.NET was nothing but a bag of tricks that had no place in a properly-architected application. This is a real shame, because there is a very sweet logic to ASP.NET version 2.0 that is completely at home in an n-layered application design. Rob Howard and Scott Guthrie and the team for ASP.NET version 2.0 have done a sterling job, but the evangelical magicians are doing the ASP.NET Team a complete disservice. I truly feel that Microsoft should seek evangelists who are teachers, not magicians. Teachers can explain what Microsoft has done, why it was done, and how customers can use what it has done. Leave the magicians to entertain Tom and Alison and other kids at their birthday parties.

by Alister Jones | Next up: The Provider Model, Unveiled

0 comments

----

The Practical Man

by Alister Jones (SomeNewKid)

A few moons ago, I was unsure of how to start Charlie. Fortunately, by working my way through Cuyahoga, I gained the inspiration I needed to move forward. As this new moon shows its face, I need a guide to what ASP.NET version 2.0 can contribute to Charlie. For this reason I am working through the Wilson WebPortal in precisely the same way that I worked through Cuyahoga: by rebuilding it from scratch. Once I have learned the best bits of ASP.NET version 2.0 and the best bits of the Wilson WebPortal, I will determine how I can combine those strengths with the strengths of Charlie. In doing so, I hope to simultaneously overcome the major weakness of Charlie: that I’m creating everything from scratch.

The Wilson WebPortal is free to use, but its source is available only to those who subscribe to WilsonDotNet.com. I cannot then show any code on this weblog, but I’d definitely like to talk about some of the lessons I am learning by working through its code.

The first lesson is actually about Paul himself: he is so darn practical. Way back when ASP.NET was new, many developers complained about the time needed to serve the first webpage request after a period of website inactivity. ASP.NET was designed to unload applications that had not been active for a while. The problem was that websites without constant traffic would unload, and the next visitor would experience a slow-loading first page. Rather than complain about the problem, Paul took the more practical approach of solving the problem. He introduced what amounted to an alarm clock for an ASP.NET application that would keep waking the app up before it had the chance to fall back to sleep. Naturally, the Wilson WebPortal includes this alarm clock.
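
Paul’s implementation is part of the subscriber-only source, so I won’t reproduce it. But to make the idea concrete, here is a minimal sketch of one way such an alarm clock might work: a timer that requests the site’s own address at a comfortable interval, so the worker process never decides the application is idle. The class name, the interval, and the idea of starting it from Application_Start are my own assumptions, not necessarily how the WebPortal does it.

using System;
using System.Net;
using System.Threading;

public static class KeepAlive
{
    // Keep a reference to the timer so it is not garbage collected.
    private static Timer timer;

    public static void Start(String siteUrl)
    {
        // Request the site every ten minutes, well inside the default idle timeout,
        // so ASP.NET never unloads the application for inactivity.
        timer = new Timer(delegate(Object state)
        {
            try
            {
                using (WebClient client = new WebClient())
                {
                    client.DownloadString(siteUrl);
                }
            }
            catch (WebException)
            {
                // A failed ping is harmless; the next tick will try again.
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(10));
    }
}

Calling KeepAlive.Start from Application_Start in Global.asax, with the site’s own URL, would be enough to keep a low-traffic site awake.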

As I started working through the Wilson WebPortal, I came across the following comments introducing a custom Cache class:

// I use my own cache instead of HttpRuntime.Cache to better control lifetime.
// My experience on shared hosts is that HttpRuntime.Cache drops far too often.
// So this is intended to be a major performance win, although it may look odd.

This is so practical. How many of us would simply use HttpRuntime.Cache without giving it a second thought? And how many of us would then curse about our website still being slow even though we were using the built-in cache? With its whitespace removed, the code for Paul’s custom cache is about 50 lines. In my opinion, it is this sort of practicality and simplicity that distinguishes Paul from most other developers. He’s a practical man.
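
Again, I cannot reproduce Paul’s fifty lines here, so the following is only a bare-bones sketch of my own of the same general idea: a static dictionary with an absolute expiry on each entry, so items live for exactly as long as the application decides, rather than for however long HttpRuntime.Cache deigns to keep them on a busy shared host. It ignores the things a real cache would need, such as trimming itself when memory is tight.

using System;
using System.Collections.Generic;

public static class SimpleCache
{
    private class Entry
    {
        public Object Value;
        public DateTime Expires;
    }

    private static readonly Dictionary<String, Entry> items = new Dictionary<String, Entry>();
    private static readonly Object sync = new Object();

    public static void Insert(String key, Object value, TimeSpan lifetime)
    {
        Entry entry = new Entry();
        entry.Value = value;
        entry.Expires = DateTime.UtcNow.Add(lifetime);

        lock (sync)
        {
            items[key] = entry;
        }
    }

    public static Object Get(String key)
    {
        lock (sync)
        {
            Entry entry;
            if (items.TryGetValue(key, out entry) && entry.Expires > DateTime.UtcNow)
            {
                return entry.Value;
            }

            // Either the item was never cached or it has expired; drop any stale entry.
            items.Remove(key);
            return null;
        }
    }
}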

Another area where Paul puts practicality first is in the design of his business objects. Every single business object in Charlie is based on the five-part Entity System. The benefit is the simplicity of each class taking on a single responsibility. The penalty is the complexity involved in maintaining a large number of classes and the means by which those classes communicate. Paul takes precisely the opposite approach. A business object is represented by a single class. The benefit is the simplicity of having one “class in charge”—everything to do with the user is in the User class, everything to do with the portal is in the Portal class, and so on. The penalty is that these single classes are relatively complex, as they take on numerous responsibilities. I asked Paul about this on his own website, so you can read Paul’s own rationale: About the responsibilities of the classes in Wilson.WebPortal.Core. The architects can argue about the merits of a class-in-charge design, but what cannot be denied is the resulting simplicity and practicality that the design achieves. Paul also refers to the utility of this design in the context of ASP.NET 2.0 providers. I haven’t yet come to terms with providers, so I cannot yet see the connection—but I’ll get there.
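
In miniature, and with entirely hypothetical class shapes (this is neither Paul’s code nor Charlie’s), the contrast looks something like this.

using System;

// Class-in-charge: one class owns the data and the behaviour for a user.
public class User
{
    public Int32 Id;
    public String Name;

    public void Save()
    {
        // Persistence is invoked from (or delegated by) the business object itself.
    }

    public static User Load(Int32 id)
    {
        // Loading also lives on the one class in charge.
        return new User();
    }
}

// Entity System: the same responsibilities split across several small classes.
public class UserEntity
{
    public Int32 Id;
    public String Name;
}

public static class UserManager
{
    public static UserEntity GetUser(Int32 id)
    {
        // The manager coordinates the other parts (validation, persistence, and so on).
        return new UserEntity();
    }

    public static void SaveUser(UserEntity user)
    {
        // Saving goes through the manager rather than the entity itself.
    }
}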

Right now, Charlie is not too far behind the capabilities of the Wilson WebPortal. But what the Wilson WebPortal does, that Charlie does not, is stay true to the design of ASP.NET version 2.0. I feel sure that this is a major strength of the Wilson WebPortal, and a major weakness of Charlie. I haven’t yet decided what to do to address this weakness (if anything), because I’m still working my way through the Wilson WebPortal as a step towards learning the best bits of ASP.NET 2.0.

by Alister Jones | Next up: The Magician and the Teacher

0 comments

----

Charlie. The Little Engine that Could

by Alister Jones (SomeNewKid)

In a much earlier weblog entry, I described how my stubborn nature has had an impact on the way in which Charlie has developed. Specifically, I have been tackling one problem at a time, and stubbornly refusing to solve the problem “the Microsoft way” if I think that way is either complex or flawed.

Another aspect to my person that is having an impact on Charlie is my lack of confidence in myself. I am acutely aware that I am not an experienced developer, and I am acutely aware that I’m just not that smart. For both of these reasons, I have been losing faith in what I’ve been doing with Charlie. I look at the work by the developers of Castle and think, “Now those guys are really smart, I should be using Castle.” I then look at the work by Paul Wilson and think, “Now that guy is really smart, I should be using the Wilson WebPortal.” I then look at the work by Telligent and think, “Now those guys are really smart, I should be using Community Server.” Quite simply, all of those coconuts are so smart, and I lack so much confidence, that I feel almost compelled to give up on Charlie. How could I create anything worthwhile unless I create it on top of their own projects?

While I spent time researching Castle, I kept hearing the word that means so much to me: simplicity. The documentation for Castle talks about how its components achieve simplicity, flexibility, re-use, and all the other good things in life that do not involve sugar or pheromones. What eventually occurred to me however is that they are using monstrously complex ways of achieving simplicity. On the other hand, I have been using simple ways of achieving simplicity. My lack of confidence tells me that if I’m not doing things the same way as these smart guys, I must be doing it wrong.

I have been looking through the code for Paul’s Wilson WebPortal. Without doubt, he has created a very clever system that makes good use of the best that ASP.NET 2.0 has to offer. As I study his code more closely, I can see that he is solving the same problems I have solved, but just in a different way. My lack of confidence tells me that if I’m not doing things the same way as this smart guy, I must be doing it wrong.

However, it has occurred to me that just because these guys are smart does not mean that what they do is right. And just because I am not as smart does not mean that what I do is wrong.

I have just fired up the current version of Charlie, and surfed between the pages of my sample website. You know what? While it is not as complex as Castle, or as clever as the Wilson WebPortal, it is working precisely how I want it to work. All of the pages are secure and localized, which is precisely the goal I had for Charlie at this stage in its development. Charlie is chugging along, getting the job done, and staying on track. Charlie is the little engine that could, and maybe I should have a little more faith in it. And in myself.

In my last web entry on the monkey or the gorilla, I commented on the difference between a framework application and a bespoke application. As I look now at the code for Charlie, I can see that much of its simplicity and effectiveness comes from the fact that Charlie has been designed as a bespoke application. I have been finding the simplest and most effective solution to each problem, without having to worry about whether other developers could work with the solution. In this regard too, Charlie is chugging along, getting things done in its own way. The little engine that could.

Now that I recognise that Charlie is properly a bespoke application, it answers the question of whether to move to Mono 1.0 or move to .NET 2.0. At this point in time, .NET 2.0 offers greater opportunities for me to find simple solutions to each problem, most notably with generics. I can, and should, make use of anything that enhances simplicity. By the time I’ve finished Charlie, Mono will probably have become compatible with .NET 2.0, so it’s really a moot question. I should not have worried myself about it.

So, after having been tempted away by other projects, I have come back to Charlie with a newfound belief in what I’ve been doing with it. Charlie is the little engine that could.

by Alister Jones | Next up: The Practical Man

2 comments

----

The Monkey or the Gorilla?

by Alister Jones (SomeNewKid)

The monkey is a reference to Mono, which is the Spanish word for monkey. The gorilla is a reference to .NET, released by the 800-pound Microsoft gorilla. This weblog entry is an extension of my earlier entry concerning my indecision about which technology to use. Mono 1.0 is fully compatible with .NET 1.1, with the exception of Enterprise Services. Mono is not yet compatible with .NET 2.0, so there is a decision to be made about which technology to choose.

Right now, Charlie does not use a single capability provided by .NET 2.0. So, I could move it over to Mono 1.0 with very little effort. Or, I could evolve it to use .NET 2.0 with very little effort. But those are divergent paths that will not converge again until some time in the future. So, which path should I choose? Should I go with the monkey, or go with the gorilla?

I said in my earlier post that going with the Mono monkey would lessen my business opportunities. In a reply to that post, Brendan Ingram challenged that statement, and rightly so. I did not really explain what I meant when I said that. Brendan also concluded by saying that going with Mono may in fact increase my business opportunities. Whether that is true or not depends on the business model for Charlie. That is something to which I have not given due thought, which is a horrific oversight on my part. So, let me explain my thoughts on the matter.

First though, I want to put my thoughts in context. Specifically, I want to repeat that I come into this industry from outside. As a result, the only sample applications that I have ever seen are framework applications, never bespoke applications. A framework application is like DotNetNuke, where its target is developers. They take the framework, plug in new pieces, flip a few preference switches, add some content, and away they go. A bespoke application is like Flickr, where its target is end users. Charlie started out as a bespoke application, as I had explicitly stated that its target users were my own clients, not other developers and their respective clients. But, because all of the sample applications that I have seen are framework applications, I keep losing sight of the fact that Charlie started out as a bespoke application. For example, I added plugins to Charlie in a way that would support other developers adding plugins too, even though that was not a goal for Charlie—it was just the way I have seen things done, so it was the way I did it too.

Now, framework applications and bespoke applications are naturally different, and require different decisions to be made about the same problems. Take for example the need to style a web application. If Charlie remains a bespoke application, then I can implement styling any damn way I please. No-one else will ever see the implementation, and no-one else will even care. But if Charlie were to become a framework application, then I need to implement styling in a developer-friendly way, which, in the context of ASP.NET 2.0, means using Themes and Skins. As a further example, take the possibility of adding AJAX functionality. If Charlie remains a bespoke application, I can implement client-side functionality any damn way I please. But if Charlie were to become a framework application, then I would need to implement AJAX in a developer-friendly way, which, in the context of ASP.NET 2.0, means Atlas.

I feel that I have two choices with Charlie. Leave it as a bespoke application, in which case I can go with either Mono or .NET. Or, I can make it a framework application, in which case I really need to go with the gorilla. Which should I choose? Well, that should be a business decision, and I see four ways in which Charlie can earn money.

The first way in which Charlie can earn money is the obvious way: find website clients and use Charlie to build the website. Charlie will be the technology by which I can answer “Yes” to questions such as “Can we edit the content of the website?” and “Can we extend our website later?” Whether Charlie is a framework application or a bespoke application makes no great difference here. But Mono would put a smile on the face of those website owners whose technical folk inevitably ask, “Do you use open-source software?”

The second way in which Charlie can earn money is by selling it, or its components, to other developers. Both the Wilson WebPortal and Community Server have a business model that includes this aspect. Going with .NET over Mono would help here, since the market is larger. More important, most Mono developers will be advanced developers who would have no interest in the simple Charlie. Conversely, many .NET developers are novice and intermediate developers who may certainly have an interest. But, this approach would mean that Charlie should become a framework application, not a bespoke application.

The third way in which Charlie can earn money is by releasing it as open-source software, and trying to encourage community development. Community development would mean that Charlie becomes a richer application than I could ever achieve by myself. Having a richer application would mean that Charlie could help me to obtain clients that I could not have obtained if I’d kept Charlie as a closed-source application. Going with .NET over Mono would help here too, simply because of the size of the market. However, I think the ASP.NET community is already saturated with open-source website frameworks, so I don’t think this is a realistic model for Charlie. Still, it is an option that needs to be considered.

The final way that Charlie can earn money is by providing an avenue by which I can earn incidental income. It has been suggested a number of times that I should write articles. If I were to move Charlie over to Mono, there would be very little that I could write about that would be of interest to others. MonoRail, for example, is just too hard for the average developer to install and use, and just too complex to write an article about (the example application on CodeProject doesn’t even work). But if I were to learn ASP.NET 2.0, I could write articles on the things that I learn along the way.

Does this rambling weblog entry explain why I sense that there are more opportunities if I were to follow the ASP.NET 2.0 path rather than the Mono 1.0 path (remembering that the paths are currently diverged, and may not converge again for a good long while)?

Another thing that concerns me (I worry about a lot, don’t I?) is that what I like most about Mono and its projects seems at odds with .NET 2.0. For example, the provider model of .NET 2.0 does not seem to be compatible with the O/R Mapping model that is prevalent in the Mono world. For another example, the rapid application development model of .NET 2.0 does not seem to be compatible with the Inversion of Control model that Castle promotes. The goals of .NET 2.0 developers and Mono developers do not seem very compatible, which is why I sense that I need to make a decision on which way to go. I may have this completely wrong, and my concerns may be unfounded. However this weblog is a record of my ongoing learning and my increasing understanding, and this weblog entry notes my current thoughts on the monkey and the gorilla.

by Alister Jones | Next up: Charlie. The Little Engine that Could

1 comment

----

Evaluating the Wilson Web Portal

by Alister Jones (SomeNewKid)

In a previous weblog entry, I summarily dismissed the Wilson WebPortal because it takes the ASP.NET 2.0 approach to web applications, about which I have grave doubts. However, three separate concerns have made me reconsider the Wilson WebPortal and, therefore, the ASP.NET 2.0 approach.

The first concern is that, as I have been creating Charlie, I have been getting annoyed by the amount of repetitive code I have been writing. One area is that old chestnut: data access. I’m brand new to databases, and have written about thirty CRUD methods, but already I am super-bored with data access. The Wilson WebPortal naturally uses the WilsonORMapper, so I am super-tempted to take the same approach.
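
For anyone who has not had the pleasure, this is the kind of method I mean: nothing difficult, just the same shape written out over and over for every table. The table and column names here are made up, not Charlie’s actual schema.

using System;
using System.Data.SqlClient;

public class ArticleData
{
    private String connectionString;

    public ArticleData(String connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Insert(Int32 id, String title, String body)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "INSERT INTO Articles (Id, Title, Body) VALUES (@Id, @Title, @Body)", connection))
        {
            // Every CRUD method repeats this same open-parameterise-execute ritual.
            command.Parameters.AddWithValue("@Id", id);
            command.Parameters.AddWithValue("@Title", title);
            command.Parameters.AddWithValue("@Body", body);

            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    // Update, Delete, GetById, and GetAll all look much the same.
}

An O/R Mapper replaces most of this with mapping metadata and a short call against the mapped object, which is why the temptation is so strong.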

Another area where I find myself repeating the same darn code over and over again is the code to compare one business object to another. Every single one of my business objects has the following boilerplate code:

public new static Boolean Equals(Object object1, Object object2)
{
    if (object1 is Article && object2 is Article)
    {
        return ((Article)object1).Equals((Article)object2);
    }
    else
    {
        return false;
    }
}

public override Boolean Equals(Object object1)
{
    if (object1 is Article)
    {
        return Equals((Article)object1);
    }
    else
    {
        return false;
    }
}

public Boolean Equals(Article article)
{
    // Guard against null so that comparing with null returns false instead of throwing.
    if (article == null)
    {
        return false;
    }
    if (article.Id == this.Id)
    {
        return true;
    }
    else
    {
        return false;
    }
}

public override Int32 GetHashCode()
{
    // Hash on the same value used for equality, so that equal objects hash equally.
    return this.Id.GetHashCode();
}

Because these tests must be against the final type (such as an Article, or Blog, or Webpage object), this code to test whether one business object is equal to another cannot be moved into a base class. (At least, not in .NET 1.1.) But a peek at the code for the WilsonWebPortal suggests that generics provides a way to avoid this repetitive code. To date I have been unable to find an article on generics that makes the slightest sense to me. Fortunately, if you look at the documentation long enough, you get an “ah-ha!” moment when generics actually starts making sense.
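
The WebPortal’s actual code is subscriber-only, so here is only my own sketch of the idea that produced the “ah-ha!” moment. The BusinessObject<T> name and the abstract Id property are assumptions of mine; the point is that with generics the equality boilerplate can live once in a base class, while still comparing against the final type.

using System;

public abstract class BusinessObject<T> where T : BusinessObject<T>
{
    public abstract Int32 Id { get; }

    public Boolean Equals(T other)
    {
        // Two business objects of the same final type are equal when their Ids match.
        return other != null && other.Id == this.Id;
    }

    public override Boolean Equals(Object obj)
    {
        // "as T" yields null for nulls and for objects of a different final type.
        return Equals(obj as T);
    }

    public override Int32 GetHashCode()
    {
        // Hash on the same value used for equality.
        return Id.GetHashCode();
    }
}

// Each business object now declares only what is unique to it.
public class Article : BusinessObject<Article>
{
    private Int32 id;

    public Article(Int32 id)
    {
        this.id = id;
    }

    public override Int32 Id
    {
        get { return id; }
    }
}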

So, the problem of repetitive code, and the simple and direct way in which Paul solves this problem, has given me the first reason to look at the Wilson WebPortal.

The second concern that has been haunting me ever since I started Charlie is the knowledge that I am reinventing the wheel. There are already mature applications that do the same thing, but I feel that they get it wrong—I honestly believe that I can do better. The Wilson WebPortal is not yet a mature application. It is brand-new, written from the ground up using the best that ASP.NET 2.0 has to offer. It is therefore free of legacy crap (like the remnants of IBuySpy that shackle DotNetNuke) and free of accumulated crap (like all the gee-whizz junk that bloats DotNetNuke). It is a clean slate provided by a guy who does things in a way that I both respect and admire. Paul’s only weakness is visual design, but that is where I can add real value to what he has achieved with his new product.

The third concern that has been troubling me is the knowledge that I am actively avoiding ASP.NET version 2.0. While I am compiling against .NET version 2.0, I have not used a single aspect of this new technology. In everything I have been doing, I have been doing it in a non-ASP.NET 2.0 way. What this means is that I am driving my project off the sealed highway and onto a dirt track that leads somewhere else. I fear that I am taking an ideological approach rather than a practical approach, which will come back to haunt me when I find myself stranded with an application that is too different in too many ways.

For these reasons, I am going to work through Wilson WebPortal, and learn how it does its thing. I will then either move Charlie onto this project, or I will keep Charlie as a separate project and apply the lessons learned from the Wilson WebPortal. Either way, I will try to bring Charlie back onto the sealed highway that is ASP.NET version 2.0.

by Alister Jones | Next up: The Monkey or the Gorilla?

0 comments

----

“And if you don’t know where you’re going…

by Alister Jones (SomeNewKid)

…Any road will take you there” — George Harrison

When we last left off, I was about to embark on an investigation into the Windows Vista user experience. Because I was unable to get the preview to install, I have had to work from Microsoft’s documentation and screenshots. As I have been working through it, I have been horrified. With each operating system release, Microsoft describes it as a “bet the company” release. If that’s true then, at face value, Microsoft is in danger of losing that bet with Vista.

To market any product, the vendor needs to find and promote a unique selling proposition. Apple makes its USP clear: “Let the Mac be the centre of your digital lifestyle.” If my sister is anything to go by, Apple is delivering on its promise. My sis is in the process of getting rid of her stereo and her TV because her Apple equipment has made them unnecessary. She uses an iMac, iBook, and iPod, and she can get stuff done that I could not hope to do on my Windows XP machine.

What is the unique selling proposition for Windows Vista? “Bring clarity to your world.” What the hell does that mean? This lack of focus seems to be reflected in the product itself. There seems to be no rhyme or rhythm, logic or art, to any part of the Vista user experience. My concern about Windows Vista follows my concern about ASP.NET version 2.0.

I am starting to worry that Windows XP and .NET version 1.1 represent the pinnacle of Microsoft, and Windows Vista and .NET version 2.0 represent the first fall on the slippery slide to obsolescence.

Because of my concern, I have been contemplating switching to Mono. Projects like Castle and Cuyahoga demonstrate the true thought and sheer intelligence that Mono developers bring to the platform. What do we get over here on the .NET side? Crappy, useless stuff like the provider pattern (which is not even a pattern anyhow). Over on Mono they have complete implementations for Model View Controller, for Inversion of Control, for Aspect Oriented Programming, and for so much more. I know we can use these components in a .NET application, but I have a concern with straddling Mono 1.0 and .NET 2.0 that is beyond the scope of a weblog entry.

There are only two reasons why I have not yet jumped ship. (I’ve downloaded Mono and started coming to grips with Castle and its goals, so I’m ready to jump.)

The first thing that holds me back from Mono is the Wilson WebPortal. The initial release is very awkward to use, but it shows great potential. Most important, for me, is that Paul Wilson never ceases to find simple and direct solutions. If ASP.NET version 2.0 is good enough for this clever cucumber, then it should be good enough for me, too. I’ll come back to the Wilson WebPortal in my next weblog entry.

The second thing that holds me back is that while it makes ideological sense for me to switch to Mono, I don’t think it makes much business sense to switch. If I stay with .NET, I can earn money from writing articles and selling components; if I switch to Mono, the opportunities are limited. If I stay with .NET, I can benefit from the truly awesome support provided by Microsoft and its developer community; if I switch to Mono, the support is limited. If I stay with .NET, I can use a greater number of free and commercial components; if I switch to Mono, the components are limited.

By the way, if you are not already aware, this is a stream of consciousness—I am resolving my indecision as I write this. My heart wants me to switch to Mono. My head tells me to stay with .NET. I started writing this blog entry immediately upon returning from a ride on my motorbike, where I was thinking through the arguments for and against moving to Mono, and whether I might make use of the Wilson WebPortal. My non-committal thought at the end of the ride was that I should stick with .NET rather than switch to Mono, and this weblog entry has sealed the deal. In the next weblog entry, I’ll address the question of whether I should make use of the Wilson WebPortal.

by Alister Jones | Next up: Evaluating the Wilson Web Portal

2 comments

----