Friday, 26 November 2010

hiJump - Why CQS?

One of the most important aims of hiJump was to try to deal with the issue of communication between the browser and the server. We will all have come across the following pattern of communication:
  1. Browser requests page from server (get request, full page refresh)
  2. User fills in form
  3. Browser posts data to server (post request, full page refresh)
  4. Server finds issue with data and returns same page as the get request in step 1 but with a message describing the problem
Now this is very standard but not very nice. Why is it not nice? Well, you have 'gets' returning html and you have 'posts' sending data (in effect a command) which also return html. Yuck.

Now this sequence just about hangs together in an html interface which is not ajax based, but it starts to cause real problems in an RIA (Rich Internet Application - AJAX-based UI to you and me).

Take a look at this example of flow in an RIA:
  1. User clicks on a button to open a modal window (we are talking jquery modal here)
  2. An ajax get request is sent to the server for the content of the modal window, and it is displayed
  3. The user fills in the form and clicks OK
  4. An ajax post is sent to the server
  5. The popup is closed
..... What happens if there is an error\validation error on the server??? This is the crux of the problem. If we are rendering html from our post requests, the return from our post needs to tell the UI either to close the popup (succeeded) or to render the returned form and message back into the popup (failed). This is hard, horrible, inconsistent, unintuitive stuff, and you can imagine how easily it is abused over time in the code.

So, instead of posts ever returning html, they always return a json object containing only the following data:
  • Success\Fail flag
  • List of failure reasons if fail
  • Id of generated entity if success and the post asked to create something
  • List of domain events on the server which were fired during the execution of the command
If there is a failure: the UI can handle it in a consistent way providing a nicely formatted list of reasons in a modal window and stop execution at this point.

If it is a success: the UI can show the list of things that happened, close the modal window and continue on its way. Or, in the case of a 'create', it can go ahead and load some more html using the ID returned.
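The two branches above can be sketched as a single generic handler on the front end. This is only an illustration of the idea, assuming the field names shown (`success`, `errors`, `createdId`) - the real hiJump contract may differ:

```javascript
// Hypothetical sketch of handling the json command result in one place.
// `ui` stands in for whatever controls the modal window and error display.
function handleCommandResult(result, ui) {
  if (!result.success) {
    // Consistent failure path: show the reasons, leave the form as-is
    ui.showErrors(result.errors);
    return;
  }
  // Success path: close the popup and, for a 'create', load the new entity
  ui.closePopup();
  if (result.createdId) {
    ui.loadDetail(result.createdId);
  }
}

// Example usage with a fake ui object that just records what was called
var calls = [];
var ui = {
  showErrors: function (errors) { calls.push(['errors', errors]); },
  closePopup: function () { calls.push(['close']); },
  loadDetail: function (id) { calls.push(['load', id]); }
};

handleCommandResult({ success: true, createdId: 42, events: [] }, ui);
// calls now records the popup being closed, then detail loaded for id 42
```

Because every post goes through the same handler, each new screen gets the failure behaviour for free.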

This works out quite nicely for the following reasons:
  • You do not have to do the old 'viewstate' type work of making sure people do not lose what they have entered if there has been an error. What they entered is still sat there and they can edit it and try to press the button again
  • The UI is controlling the application flow not a combination of the UI and the server post responses
  • All browser\server communication can go through this consistent process leading to much greater consistency in the code
In my next post I will run through some fundamental requirements around domain modelling etc. needed to make this work.

Tuesday, 23 November 2010

hiJump - Introduction

This is the first in a series of blog posts giving an overview of the way we are now building rich internet applications (RIAs).

Building web applications is a complex business. Building rich internet applications is a ridiculously complex business. Anyone who has worked on a sizable RIA will have come across difficulties in ensuring the application develops in a consistent manner in such areas as:
  • How communication with the server is handled
  • How validation is handled
  • The actual HTML
  • The CSS
  • Error handling when things go wrong on the server
  • How javascript and javascript plugins are handled and configured
  • How business logic is modelled within the application
  • Who has control over the flow of the application?
Before starting work on our latest product we really needed to formalize the entire structure around building webapps. With a sizable development team and a complex project, the complexity of the application had to grow as linearly as possible.

hiJump addresses the issues above by using a variety of techniques which will be described in later posts in the series.

hiJump sits on top of a number of open-source projects:

Friday, 19 November 2010

Domain events and Growl!

Today we came across a really sweet side effect of an event-driven architecture. We were integrating a jquery growl plugin to give nice 'no OK button' notifications to users when they click on transaction buttons (Save, Release Order etc.), when we thought: "wouldn't it be great if the front end was not responsible for the notifications, but rather the infrastructure used to raise the events on the back end kept a list of what events have happened during the transaction?"

This has a couple of nice side effects:
  • The front end does not have to have any real knowledge of what this button actually does. But..
  • The user gets fantastic visibility of what pressing that button actually did! 3 Growls: Timesheet approved, Customer invoice line created, Email sent to employee... All on one button click.
  • No additional code to write! It is just part of the infrastructure.
This is made possible due to the CQS nature in which the browser communicates with the server in the hiJump infrastructure pattern we use. In a future post I will go deeper into this, as the CQS approach is working out very nicely for us and greatly simplifies the relationship between the views and the back end.
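The front-end half of this is tiny: the command response already carries the list of domain events, so the UI just turns each one into a notification. A minimal sketch, assuming a `response.events` array of display strings and a pluggable `notify` callback (both illustrative names, not the actual hiJump API):

```javascript
// One growl-style notification per domain event the server recorded
function notifyEvents(response, notify) {
  (response.events || []).forEach(function (eventMessage) {
    notify(eventMessage);
  });
}

// Example: a single "Approve" click produced three events on the server
var response = {
  success: true,
  events: [
    'Timesheet approved',
    'Customer invoice line created',
    'Email sent to employee'
  ]
};

var shown = [];
notifyEvents(response, function (msg) { shown.push(msg); });
// `shown` now holds the three messages, ready to hand to $.growl or similar
```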

Until next time.

Tuesday, 16 November 2010

Securing data from DBAs and developers

We have a pretty interesting issue here. We are developing an application for managing software companies and we will be using this ourselves.
This application will be storing some sensitive data such as salaries and rates.

Now, how on earth do you protect this information from the eyes of the very people who are building and supporting the solution??

Obviously the data for these fields will have to be encrypted within the database, but where do you keep the decryption key? And how do developers support\view the app without knowing the key? The requirements for the solution are as follows:
  • The data is encrypted within the database so is not human readable
  • The key is not stored anywhere other than in memory on the server
  • The pages which read the data can happily display the data in either encrypted form, or decrypted with the wrong key. This allows devs to view\debug the page.
  • The dev can encrypt and store the data using the wrong key if they want, as writing data is not a security concern.
I am currently considering making this key an additional password which is held in the user's session; if it has not been set, the data is not shown. If an incorrect key is used, a warning is given and what is shown will be garbage. The key is checked for validity by encrypting a known word with it and comparing the result with a pre-encrypted copy stored in config.

Anyone have any other ideas?? It is a tricky topic...

Tuesday, 2 November 2010

New Harmony ERP Website - ERP for software and services companies

As we approach the final major milestones on the Harmony ERP system we have released the new website to give new clients an overview of the features and functions of this software. The solution exists to provide ERP functionality specific to software and services companies including true niche functionality such as time billing, price list modelling, recurring revenue management and project monitoring.

Domain Events Rock!

It seems nowadays that the answers to many of our architectural conundrums can be found over at Udi Dahan's blog. He certainly has a habit of hitting the nail on the head.
The latest pattern of his we have been using is Domain Events. Here are some of the uses we have found for this pattern.

Decoupling between bounded contexts within your application

You may have heard or read about the idea of bounded contexts. As an application grows it becomes very difficult to picture it as a whole and keep the whole thing in your head. Before you know it, it becomes hard to understand the dependencies between the area of the app you are working on and other areas. There are many links to objects in other areas, which makes your code fragile as other people refactor their areas. One answer to this is to denormalise any information required from another area of the application into properties of objects in this area. However, this leaves you with the issue of how to keep this denormalised information up to date. Domain events!

Avoiding injection of repositories etc into the domain model

You want to keep the logic in the domain. A method on a domain object does something which then has a knock-on effect somewhere else in the model, which this domain object doesn't have a reference to. Raise a domain event! The event handler that handles this is created via the IOC container, so it can get the required object from the repository and act on it. An example of this interaction might be Order raising an OrderDispatched domain event which is handled by the UpdateStock handler class. This then gets the correct stock item and decrements it.
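The Order/UpdateStock interaction above can be sketched with a minimal event bus. The bus, the handler registration and the repository below are stand-ins for whatever the IOC container would actually wire up - all names are illustrative:

```javascript
// A minimal in-memory event bus: handlers keyed by event type
const handlers = {};

function subscribe(eventType, handler) {
  (handlers[eventType] = handlers[eventType] || []).push(handler);
}

function raise(event) {
  (handlers[event.type] || []).forEach(function (h) { h(event); });
}

// Hypothetical stock repository the handler pulls its data from
const stockRepository = { 'SKU-1': { onHand: 10 } };

// The UpdateStock handler: decrements stock when an order is dispatched.
// In a real app the container would construct this with its repository.
subscribe('OrderDispatched', function (e) {
  stockRepository[e.sku].onHand -= e.quantity;
});

// Somewhere inside Order.dispatch(): the domain object only raises the
// event - it never needs a reference to the stock item
raise({ type: 'OrderDispatched', sku: 'SKU-1', quantity: 2 });
```

The Order knows nothing about stock; the coupling lives entirely in the handler.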

Emails everywhere

You want to send notification emails at all sorts of points in your application. You want to manage these, and what causes them all in one place. Once again... you guessed it. You can have a part of your application just for sending emails with a handler for any domain event you want to send an email for. This provides great visibility of what emails are sent and why. And also means you can easily add additional email notifications to the system without having to touch the calling code.
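Sketched in the same event-handler style, the email area is just another set of subscribers. `onEvent`/`publish` stand in for the event infrastructure and `sentEmails` for an outgoing mail gateway - all names here are illustrative assumptions:

```javascript
// Handlers for the email area, keyed by event type
const emailHandlers = {};
const sentEmails = []; // stand-in for the real mail gateway

function onEvent(eventType, handler) {
  (emailHandlers[eventType] = emailHandlers[eventType] || []).push(handler);
}

function publish(event) {
  (emailHandlers[event.type] || []).forEach(function (h) { h(event); });
}

// All notification emails live in one place: one handler per event.
// Adding a new notification means adding a handler, not touching callers.
onEvent('TimesheetApproved', function (e) {
  sentEmails.push({ to: e.employeeEmail, subject: 'Your timesheet was approved' });
});

// Raised somewhere deep in the domain; the caller knows nothing of email
publish({ type: 'TimesheetApproved', employeeEmail: 'dev@example.com' });
```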

In addition to this, you can easily start to implement a CQRS-like architecture over time by updating reporting views from the domain events either synchronously or asynchronously for areas of the application which are slow to report on.

All in all, very handy!

Forcing browser to refresh on back button

This is a bit of a perennial issue with web apps. When the user hits the back button the page comes out of their cache, and it looks like they have just undone what they just did. This is a particular problem with applications that are postback heavy.

It is pretty easy to force the page to refresh from the server in IE but it is not obvious in firefox. Anyway, after much internet trawling I found this in a blog comment somewhere and thought it deserves its own post. (Many thanks to the person who figured this out.)

// The following combination disables page caching
// for firefox\ie\chrome
// (only the first line survived the re-post; the rest is restored
// from the widely circulated version of this snippet)
Response.Buffer = true;
Response.ExpiresAbsolute = DateTime.Now.AddDays(-1d);
Response.Expires = -1500;
Response.CacheControl = "no-cache";
TeamCity, Nunit, .net sln build - A simple how to

We have installed TeamCity for our CI needs and very nice it is too. However, when we came to get it to run our NUnit tests we found it very hard to find an example of the simplest way to get this working. The TeamCity documentation is not clear at all if you are MSBuild newbies like us.

So, to save you the pain, here is how it works

  • You need to realise first of all that you need to make an MSBuild script which will build your .sln file and also run the NUnit tests
  • Then, when you configure your project in TeamCity, at the Runner: MSBuild page point the 'Build File Path' to your MSBuild file. We checked this MSBuild file into subversion.
  • In order to get TeamCity to run the tests, the MSBuild file should look like what is shown below
  • Please note that when we put ANY rather than v2.0 in the exec line, we got NUnit errors
  • Please also note this is a starting point and we know nothing of MSBuild, so I am sure there are many other ways of doing this.... Hope this helps!
<Project DefaultTargets="Test" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Test" DependsOnTargets="Build">
    <Exec Command="$(teamcity_dotnet_nunitlauncher) v2.0 x86 NUnit-2.4.8 D:\PathToMyTestDLL\MyTests.dll" />
  </Target>
  <Target Name="Build">
    <MSBuild Projects="PathToMySolutionFileRelativeToThisMSBuildFile/MySolution.sln" />
  </Target>
</Project>

Goodbye Linq to SQL POCO, Hello nHibernate

After a long time trying to get model-first POCO working with Linq to SQL (using xml mapping files) we have finally given up and have moved over to nHibernate.

Trying to get Linq to SQL working was confusing, error prone and imposed many requirements on how you implemented your domain objects. The worst thing about it was the fragility of the xml mapping. If you got anything wrong, the errors were not descriptive, making debugging a nightmare.

I have to say so far nHibernate has been a breath of fresh air. I had heard it was hard to learn and was over complex but I have found it fantastically straightforward to use. Our domain objects are true POCO and mappings are easily testable. We are using Fluent nHibernate for the mapping, and when there is any error in the configuration the error message is descriptive and easy to debug. The user base and community are very helpful and knowledgeable.

It is true that you have to give up some of the LINQ features, but we are currently using Linq to nHibernate and may revert to Linq to SQL purely for the reporting as this can render from in-memory repositories transparently.

We have created a number of helper functions\patterns to enable using the domain objects as DTOs in our MVC interface which I will blog about shortly.... happy hibernation everyone..

New blog layout

We have moved our blog here. This was due to spam issues we were getting on our old blogging platform. The initial posts will be re-posts from the old platform.