
Best Practices for the Real World

by Marcel Good on June 25, 2012 5:14 pm

In this week’s blog post, I would like to talk about some best practices for writing applications with Cocktail. Even though I’ll discuss these practices in the context of Cocktail, they are not limited to it; they are sound advice for any rich client application.


Asynchronous, Asynchronous, Asynchronous

Did I mention asynchronous? This is probably one of my favorite topics. Asynchronous programming is here to stay, and for good reason. So, let’s take a quick look at what this means. A typical modern client application has at least one thread of execution, commonly referred to as the UI thread. It has two main responsibilities: drawing the UI and responding to user input. If the UI thread is busy doing something else, or worse yet, blocked, two things happen: the UI doesn’t get drawn, and the application doesn’t respond to user input, giving the impression that it hangs. Both of these will eventually make the user run for the door. A UI thread that doesn’t respond to user input becomes an even bigger issue as we move to more touch-enabled applications.


One way to combat this situation is to multi-thread one’s application. Those of you who have tried will know that it isn’t easy to manage multiple threads, communicate between those threads, and marshal data back to the UI. A better way is the asynchronous style of programming where the developer doesn’t have to explicitly manage threads. Instead, the environment takes care of dispatching method calls to background threads and marshaling the results back to the caller’s thread. With C# 5 and VB 11 this kind of support is built into the compilers with the new Task-based Asynchronous Pattern (TAP), making asynchronous programming a first class citizen for the first time.


In the meantime, Cocktail provides all you need to program asynchronously today by means of Coroutines, the OperationResult and OperationResult&lt;T&gt; types, and the asynchronous query infrastructure of DevForce; it also supports TAP via the “Cocktail Async Pack for Visual Studio 2012”. In Silverlight, asynchronous is the only way to execute a query against the back-end data source. The same will be the case in Windows Runtime, where Microsoft itself has made every API that may take longer than 50ms asynchronous. This includes things that up until now we’ve all used synchronously even in Silverlight, such as the trusty File Open dialog.


In WPF, one has a choice between programming synchronously or asynchronously. The default, unfortunately, is synchronous, but it’s very easy to change that. The first change takes place in your head: forget that there is a synchronous way of executing queries and always execute them with ExecuteAsync(). Be extra careful with ToList() and ToArray() and with immediate-execution operators such as FirstOrDefault(); if used on a query, they will synchronously execute it under the hood. The second change is telling DevForce to asynchronously load navigation properties, just like in Silverlight, where that is the default behavior. By adding the following code to the bootstrapper, every EntityManager on the client will lazy load navigation properties asynchronously.


protected override void StartRuntime()
{
   // Enable asynchronous navigation for all navigation properties in every client EntityManager.
   EntityManager.EntityManagerCreated +=
      (sender, args) =>
         {
            if (args.EntityManager.IsClient)
               args.EntityManager.DefaultEntityReferenceStrategy = EntityReferenceStrategy.DefaultAsync;
         };
}
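
To make the first change concrete, here is a minimal coroutine-style sketch of an asynchronous query. The Customers query, the property it fills, and the exact operation members are illustrative; consult the DevForce documentation for the precise signatures:

public IEnumerable&lt;INotifyCompleted&gt; LoadCustomers()
{
   // ExecuteAsync returns immediately; the coroutine resumes when the
   // query completes, without ever blocking the UI thread.
   var operation = Manager.Customers.ExecuteAsync();
   yield return operation;

   // By this point the results have been marshaled back to the calling (UI) thread.
   Customers = operation.Results.ToList();
}

Kick it off with Coroutine.Start(LoadCustomers) and the UI stays responsive for the entire round trip.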


Unit of Work Pattern

The Unit of Work pattern (UoW) is a more recent addition to Cocktail. In fact, it’s not part of the Cocktail core just yet; it’s being nurtured as part of the CocktailContrib project. Cocktail doesn’t enforce a particular way for a ViewModel to interact with an EntityManager for the purpose of querying, creating, and modifying data, nor does it enforce where the business logic should live. In the DevForce Resource Center, we provide guidance for a simplified approach using a kind of data repository that acts as the abstraction layer between the UI (ViewModel) and the persistence layer (EntityManager). This simplified approach has its limitations and is quickly outgrown by the requirements of larger real-world applications. Such requirements include, but are not limited to, cross-functional workflows, task-oriented UIs, and complex workflows and business processes in general. I often run into code that has become unwieldy after a while, with business logic spread between entities, repositories, and ViewModels. Such code is very hard to refactor, tune for performance, or extend with new requirements.


So, what’s this UoW thing, you say? UoW is a design pattern that’s been around for much longer than Cocktail. In short, a UoW represents the task or workflow a user performs in the UI. It tracks the task’s data and state, provides mechanisms to access and manipulate them, and saves or rolls back the changes at the end. A UoW is composed of specific parts with well-defined responsibilities, leading to a much better organized code-base that’s easier to refactor, maintain, and improve over time.


Factories: Factories are used to create new root domain objects.
Repositories: Repositories retrieve domain objects from the back-end data source or cache.
Services: Services implement stateless business logic. They use factories, repositories, and other services to implement a particular process, e.g. transferring money from one account to another. Such a transfer service would use an account repository to retrieve the necessary account entities and then modify them according to the business rules.
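
To illustrate, here is a sketch of that transfer service. IRepository&lt;T&gt;, Account, and the method names are stand-ins for your own abstractions, and it uses TAP for brevity; the same shape works with Coroutines:

// Illustrative abstractions; substitute your own repository and entity types.
public interface IRepository&lt;T&gt;
{
   Task&lt;T&gt; FindByIdAsync(Guid id);
}

public class AccountTransferService
{
   private readonly IRepository&lt;Account&gt; _accounts;

   public AccountTransferService(IRepository&lt;Account&gt; accounts)
   {
      _accounts = accounts;
   }

   public async Task TransferAsync(Guid fromId, Guid toId, decimal amount)
   {
      // Explicitly retrieve exactly the entities the business rule needs.
      var from = await _accounts.FindByIdAsync(fromId);
      var to = await _accounts.FindByIdAsync(toId);

      if (from.Balance &lt; amount)
         throw new InvalidOperationException("Insufficient funds.");

      from.Balance -= amount;
      to.Balance += amount;
      // Committing the changes is the unit of work's job, not the service's.
   }
}

Note that the service itself is stateless: all state lives in the entities and in the unit of work that owns them.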


CocktailContrib ships with generic UoW, Factory, and Repository implementations and provides the foundation for building custom UoWs as complex as you require.


The following is the UoW interface from TempHire, where all this can be seen in action.


public interface IDomainUnitOfWork : IUnitOfWork
{
   bool HasEntity(object entity);

   // Factories
   IFactory&lt;StaffingResource&gt; StaffingResourceFactory { get; }

   // Repositories
   IRepository&lt;AddressType&gt; AddressTypes { get; }
   IRepository&lt;State&gt; States { get; }
   IRepository&lt;PhoneNumberType&gt; PhoneNumberTypes { get; }
   IRepository&lt;RateType&gt; RateTypes { get; }
   IRepository&lt;StaffingResource&gt; StaffingResources { get; }

   // Services
   IStaffingResourceSearchService Search { get; }

   void Clear();
}
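
With an interface like this, a ViewModel never touches the EntityManager directly; it asks the unit of work. A sketch of what that might look like (FindAllAsync is an illustrative repository method, not the actual CocktailContrib API):

public class AddressTypeListViewModel
{
   private readonly IDomainUnitOfWork _unitOfWork;

   public AddressTypeListViewModel(IDomainUnitOfWork unitOfWork)
   {
      _unitOfWork = unitOfWork;
   }

   public IEnumerable&lt;AddressType&gt; AddressTypes { get; private set; }

   public async Task LoadAsync()
   {
      // The repository decides whether the data comes from the cache
      // or the back-end data source; the ViewModel doesn't care.
      AddressTypes = await _unitOfWork.AddressTypes.FindAllAsync();
   }
}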


Avoid Navigation Properties

Whoa, did I just say that? This topic is dear to my heart, because it’s probably the single biggest source of poorly performing applications I see in the wild. Ever since the arrival of object-relational mapping (ORM), developers have been forgetting that their objects and related objects actually come from a database, and that it’s expensive to fetch those objects over a network connection.


Navigation properties are great for composing queries, and I’m in no way suggesting you avoid them for that purpose; they save us from having to write ugly joins. They are also very useful when bound directly to the UI, particularly if you followed my advice from the beginning of this post and all your navigations are asynchronous. With very little code, you’ll arrive at a UI that populates its controls in parallel and lets the user start interacting with the screen much sooner than if everything were loaded sequentially … or worse, the UI thread were blocked while data loads synchronously.


So, what’s the problem with navigation properties, then? Didn’t I just say they could actually lead to a faster application? The issue comes to light when they are used in business logic. When you access a navigation property, one of three things happens. First, you got lucky: the related entity or entities were previously loaded into the cache, resulting in an inexpensive in-memory operation. Or you got unlucky and the related data isn’t in the cache, which has two possible outcomes (I told you there were three things). If the navigation property loads asynchronously, the rest of the business logic will most likely fail or do the wrong thing, because you were just handed an empty collection while the EntityManager is still retrieving the related entities. If the navigation property loads synchronously, the worst case of all happens: the UI thread gets blocked while the related entities are being retrieved. Hit a bunch of these worst-case scenarios and you’ll see your application’s performance plummet like a rock.


The most problematic part here is that the business logic makes assumptions about the current state of the cache without having control over it. Developers often resort to running an upfront query with a long list of includes, which, as the amount of data grows, becomes a performance bottleneck itself. The other problem is that a developer changes the business logic six months later, all of a sudden accessing navigation properties that aren’t in the include list and slowly introducing hidden roundtrips. Wrap all that in a loop and you just got hit by the infamous n+1 problem.


So, how do we deal with this? This is where the services we discussed earlier as part of the UoW pattern come in. The services should explicitly retrieve the data needed for the business logic, whether from the cache or the back-end data source, and of course asynchronously; Coroutines are your friend here. Filter the data as part of the query, so that we retrieve from the database only what we need. One issue with navigation properties is that they load the entire collection, even if you only need a subset of the items. By explicitly loading related entities from a repository, we not only have the ability to filter what we need, but we can also leverage projections to pull down only the properties that matter for the logic, instead of entire entities, wasting precious bandwidth on data that isn’t needed.
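
As a sketch, here is what such a service method might look like, filtering and projecting in the query instead of walking a navigation property. The repository’s FindAsync with a predicate and a projection is illustrative, not an actual CocktailContrib signature:

// Instead of staffingResource.PhoneNumbers (which loads the whole collection),
// ask the repository for exactly the rows and columns the logic needs.
public async Task&lt;IEnumerable&lt;string&gt;&gt; GetPrimaryPhoneNumbersAsync(Guid resourceId)
{
   return await _phoneNumbers.FindAsync(
      predicate: p =&gt; p.StaffingResourceId == resourceId &amp;&amp; p.Primary,
      projection: p =&gt; p.AreaCode + " " + p.Number);
}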


By following these principles, the loading of the necessary data and the logic acting on that data live in one place, making refactoring, fixing bugs, and improving performance a lot easier than combing through the call stack to chase down where the data was loaded in the first place and figure out what’s missing.


See it in action

At this point, I’ll leave you to digest all of this. I highly encourage you to study the TempHire application, where you can see all this and more in action. We are also planning to shoot some deep-dive videos explaining the parts of TempHire in more detail, so stay tuned, keep reading the blog, and visit our forum if you have any questions.




