About me

Michael L Perry

Improving Enterprises

Principal Consultant

@michaellperry

Michael L Perry's blog

The CAP Theorem and its Consequences

Here you will find the slides and code from my talk “The CAP Theorem and its Consequences”. The slides are in Silverlight, so use these controls to navigate:

  • Page down/up to go forward and back by slide.
  • Arrow down/up to go forward and back within a slide (Definitions, Proof, Choices, and Event Sourcing).
  • Scroll wheel to zoom in.
  • Drag to pan.

You can also download the source code and run the examples yourself. Run databases.sql to create the three databases. Populate the databases with test data in accounts “12345”, “23456”, and “34567”. The application transfers money among these accounts.

For more information, please read the summary article.



Dallas TechFest 2011

I presented two sessions at Dallas TechFest this year. I also attended some excellent sessions, hung out with old friends, and made some new contacts. Thanks to Tim Rayburn, Teresa Burger, Shane Holder, and other organizers, this was a fantastic event. They’ve really set the bar high for next year.

My sessions

My first session, Provable APIs, was very well attended. This talk presents techniques that you can use to ensure that people consuming your API do so correctly. These techniques take advantage of the compiler, and the type system in particular. I present samples in C#, but the techniques work well in any statically typed language.

Attendees were using not only .NET, but also Java and C++. We even had some dynamically typed languages represented, including JavaScript and Ruby. Although those languages don’t have a compiler and static type system to leverage, their developers can still use some of these techniques.

The audience was extremely interactive. They posed some excellent questions. They were particularly interested to see a refactoring of the .NET Socket class, with its notoriously error-prone API.

My second session, Correspondence: Occasionally-Connected Windows Phone Apps, was not as well attended. Those who did participate were rather quiet during the theory portion, but started to interact during the demo. Based on their feedback, I will move the demo to the beginning of the talk, and save the theory for when questions arise about how it works.

The app

The demo for the Correspondence session was actually the Dallas TechFest 2011 Windows Phone 7 app. I got some great feedback from people who were using the app during the conference. I also saw a few bugs that others reported and that I experienced myself. I’ll be rolling out bug fixes for upcoming conferences.

All told, I had 22 downloads of the app. Out of those, I received 2 ratings with feedback, for an average of 4.5 stars. With about 200 attendees, only about 30% of whom carry Windows Phones, 22 downloads is pretty decent reach. And nearly 10% of downloaders giving feedback is excellent. Sure, the absolute numbers are too low to support any generalizations, but I’ll take what I can get.

The DevLink 2011 app is in the Marketplace now. If you are attending DevLink and you carry a Windows Phone 7, please give it a try.

Old friends and new contacts

Tim, Teresa, and Shane are all old friends, so seeing them pull off such an outstanding conference was a real treat. As an added bonus, I got to see a good friend from work present at his first conference. Girish Gangadharan presented on jQuery and packed the room. Actually, I didn’t see him present because my Correspondence session was at the same time. But I saw pictures, so I know it was a full house.

I also got to meet some new folks. Devlin Liles from Improving Enterprises and Jay Smith from Tyson Foods are active members of the open source community. Their tools help out user groups and event organizers across the country and the technology spectrum. I look forward to working with them on community-centered open source projects.

I had a great time at Dallas TechFest this year. I’m looking forward to following up on all the contacts I’ve made, to applying all the techniques that I’ve learned, and most importantly to participating next year.

Now on to DevLink.



Correspondence

Correspondence is a collaboration framework for occasionally-connected clients. Express your model once, and it gives you local storage, synchronization, and push notification across devices. We currently support Silverlight, Windows Phone, WPF, and MVC 3. Android is coming soon.

Collaborative framework

People don’t just own one computer anymore. Now they have a desktop, a laptop, and a phone. They want their data to flow seamlessly across all of their devices.

People take their devices with them. These devices aren’t always connected. And even when they are, people don’t want to wait for them to connect to a central server. Everyone should have their own data on their own device for immediate access.

People use software to collaborate with each other. Some domains are overtly collaborative, such as social networking and gaming. In others, such as customer relationship management and project planning, the collaboration is more subtle. Actions performed by collaborators affect the user experience.

Systems built with Correspondence

We have built a number of systems using Correspondence.

  • HoneyDo List: Todo lists that you can share with your family.
  • Commuter: A continuous podcast playlist for iTunes on Windows.
  • Faceted Reversi: Head-to-head reversi game for Windows Phone.
  • Dallas TechFest 2011: Personal conference schedule for Windows Phone.
  • Thought Cloud: Collaborative mind mapper (demo).

Slides

The slides for the presentation are rendered in Silverlight. Once the page loads, click on it to give it focus. Then use Page Down to progress through the presentation. Hit F11 to enter full-screen mode. The sample code used in this presentation is on GitHub.

Get started

These are the resources you will need to build an occasionally connected Windows Phone 7 or Silverlight application using Correspondence.

  1. Install the NuGet Package Manager through the Visual Studio 2010 Extension Manager. Detailed instructions are on the NuGet project site.
  2. Add the Correspondence.WindowsPhone.AllInOne or Correspondence.Silverlight.AllInOne package to a Windows Phone 7 or Silverlight 4 application.
  3. Follow the walkthrough on the Correspondence project site to learn how to build a Correspondence model.
  4. Sign up for a synchronization server API Key. Put the API key in your POXConfigurationProvider.


Speaking at Dallas TechFest

I’ll be presenting next week at Dallas TechFest. I’m doing two sessions: Provable APIs and Correspondence for Windows Phone 7.

Provable APIs is a session that I originally created for the Q.E.D. lunches at work. It shows how you can use regular features of your compiler – not even the new contract validation stuff – to verify that people are using your classes correctly. If you make your APIs provable, your users will find it a pleasure to work with your code.

Correspondence is a library for building occasionally connected clients. I used it to build the Dallas TechFest conference scheduling app for Windows Phone 7. In this session, I take apart the conference app and show you how you can build your own great user experiences.

As an added bonus, I will reveal the Correspondence logo at the conference. Come out and see it before I post it on the project site.



Microsoft MVP in Client Application Development

MVPLogo

I have received the Microsoft MVP award in Client Application Development. While this award is intended to recognize community leadership, I feel that it is only possible because of the support I get from the community. Thank you for attending my talks, asking great questions, and encouraging me to learn and to teach. This recognition will open even more doors for us to converse. And for that I am grateful.

Special thanks go out to Chris Koenig, who gave me little pushes, key opportunities, and finally the nomination that led to this award.



Thought Cloud

Capture thoughts and share them with others.

Right-click and install on this computer for off-line use.



Approaching an ideal from two directions

The ideal specification is unambiguous. After reading such a specification, the consumer should have no question as to the correct behavior of the system described.

The ideal code is not extraneous. There are no implementation details written in the code that were not part of the problem domain. Such details would not have been described in the specification, and could therefore not be verified as correct. If they can’t be verified as correct, then they must be arbitrary. If more than one arbitrary implementation choice could lead to a correct program, then why must the programmer make the decision? And why must he express that decision in code? The computer could figure it out.

As specifications approach their ideal lack of ambiguity, they remain firmly in the state of having no extraneous parts. The specification doesn’t contain implementation details. Customers don’t care how the system is implemented; they just want it to be correct, and optimized for cost and speed.

As code approaches its ideal brevity, it remains firmly unambiguous. Since code is executed by a deterministic computer, the computer knows at any point what the program will do. There is no question.

Specification, in trying to become unambiguous, is approaching code. Code, in removing extraneous details, is trying to become specification.

Example: a checkbook ledger

If we were to write ideal specifications for a checkbook ledger, we would use only terms from the problem domain. We would define a transaction. We would say what a user could do with a transaction. And we would describe the reaction of the system to those operations. These statements would leave no doubt as to the correct behavior of the system. For example, we could write:

  • A transaction is a named, dated event that either increases or decreases the balance of an account by some positive amount.
  • A user can create a transaction, giving the name, date, and amount of increase or decrease.
  • A user can change the name, date, or amount of a transaction.
  • A user can void a transaction.
  • The system displays transactions in chronological order.
  • The system displays the running account balance of each transaction, that being the previous transaction’s running account balance plus the current transaction’s increase or minus its decrease.
    • The chronologically first transaction, having no previous transaction, assumes a prior balance of zero.
    • A voided transaction does not increase or decrease the prior balance.

If we were to write ideal code for a checkbook ledger, we would only solve the business problem. Storage, communications, and representation details would be handled by our chosen frameworks. We wouldn’t have to describe how a transaction is turned into bits on the network, nor would we have to generate SQL statements to persist or query them. For example, we might write this code:

Transaction(BelongingTo: Account) {
    Name: string
    Effective: date
    Change: [Increase, Decrease]
    Amount: decimal {Amount > 0.0}
}

Void(Voided: Transaction) { }

Account {
    Transactions {
        Transaction t : t.BelongingTo = this
        where not exists Void v : v.Voided = t
        order by t.Effective
        select {
            t,
            RunningBalance = (prior.RunningBalance ?? 0.0) +
                (t.Change = Increase) ? t.Amount : -t.Amount
        }
    }
}

The code verifiably satisfies the specifications, because it says no more and no less than the specifications do. In fact, besides the syntax, the code is the specification. The two artifacts, each evolving toward its own ideal, have ended up in the same place.
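For comparison, here is the same behavior expressed in a general-purpose language today. This is a sketch of my own in C# (the type and member names are mine, mirroring the terms above); even without any storage or communication code, it is noticeably longer than the specification-like version.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum Change { Increase, Decrease }

public class Transaction
{
    public string Name;
    public DateTime Effective;
    public Change Change;
    public decimal Amount;   // always positive, per the specification
    public bool Voided;
}

public static class Ledger
{
    // Returns each transaction paired with its running balance, in
    // chronological order. A voided transaction leaves the balance
    // unchanged; the first transaction assumes a prior balance of zero.
    public static IEnumerable<KeyValuePair<Transaction, decimal>> RunningBalances(
        IEnumerable<Transaction> transactions)
    {
        decimal balance = 0.0m;
        foreach (var t in transactions.OrderBy(x => x.Effective))
        {
            if (!t.Voided)
                balance += t.Change == Change.Increase ? t.Amount : -t.Amount;
            yield return new KeyValuePair<Transaction, decimal>(t, balance);
        }
    }
}
```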

Programming languages of the future

No programming language yet developed is as concise as the above code. But we are getting closer. Modern frameworks raise the level of abstraction to remove implementation details from application code. Languages are moving away from the imperative style and adopting declarative and functional styles.

Today, programs are specified by a Business Analyst, who is skilled in decomposing and unambiguously describing business processes. And programs are written by developers, who are skilled in constructing solutions from technical components at all levels of abstraction. But when programming languages finally do reach this level of brevity, a new discipline will emerge. Neither skill set alone will be sufficient to solve business problems with software. One person will create this artifact, an executable specification, using a mix of these two skill sets. This person will possess the mind of a mathematician.



Line of business applications in XAML

I spoke at the Dallas XAML User Group on Tuesday about building line-of-business applications in WPF and Silverlight. This was a hands-on event, so the participants followed along with their own laptops. Please download the source code and try it yourself.

The idea behind the talk was to exercise some of the most common features of XAML, Visual Studio, and Blend that are used in a line-of-business application. There are more features in XAML than one person can master. This is the subset that will give you the best return on your learning investment.

Unit test your view models

One of the benefits of separation patterns like MVVM is that you can unit test more of your code. Take advantage of it.

You can only unit test code that is in a class library. So it follows that view models should be in class libraries.

View models need to call services to access data and make changes. At run time, these can be WCF, RIA Services, OData feeds, or any other kind of service. But at unit test time, these have to be mocks. There are two reasons for mocking a service in a unit test:

  • Unit tests should run in isolation; they should not pass or fail depending upon the environment.
  • You need to test that the service gets called appropriately.

To mock a service at unit test time, we inject our dependencies. For example:

public interface ICatalogService
{
    List<Product> LoadProducts();
}
public class CatalogViewModel
{
    private ICatalogService _catalogService;

    public CatalogViewModel(ICatalogService catalogService)
    {
        _catalogService = catalogService;
    }
}

By injecting the service into the constructor via an interface, we can provide a real implementation at run time, and a mock implementation at unit test time.

public class MockCatalogService : ICatalogService
{
    public List<Product> LoadProducts()
    {
        return new List<Product>()
        {
            new Product() { Name = "Widget" },
            new Product() { Name = "Gadget" }
        };
    }
}
[TestClass]
public class CatalogViewModelTest
{
    private CatalogViewModel _catalogViewModel;

    [TestInitialize]
    public void Initialize()
    {
        MockCatalogService mockService = new MockCatalogService();
        _catalogViewModel = new CatalogViewModel(
            mockService);
        _catalogViewModel.Load();
    }

    [TestMethod]
    public void CanGetProductListFromService()
    {
        var products = _catalogViewModel.Products;
        Assert.AreEqual(2, products.Count());
    }
}

Put all views in UserControls

Don’t use the MainWindow as a view. Don’t drop controls directly into it. Instead, create a new UserControl and build your view in there. Doing so will have several benefits:

  • You can reuse and compose UserControls, but not Windows.
  • You can switch from one UserControl to another within a Window for simple navigation.
  • UserControls can be used within a DataTemplate to polymorphically select the view based on the view model.
  • The container can inject the DataContext into a UserControl, rather than relying upon the view model locator pattern.

That last point requires further explanation. We’ve already used dependency injection to get the service into the view model. We could have used a pattern called “service locator” instead to let the view model find the service itself. Service locator is well documented to have disadvantages when compared to dependency injection.

The view model locator pattern is just another service locator. As such, it suffers from many of the same drawbacks. For example, when the user navigates to a view, they have already selected a specific object. The view model locator must somehow locate that selected object. We end up with a Hail Mary pass, where the selection is stored in one view and retrieved in the locator for another view. If the object was instead injected into the view, it becomes much easier to manage.

To inject the first view model at run time, set the DataContext inside of the main window’s Loaded event.

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    CatalogViewModel viewModel = new CatalogViewModel(
        new RealCatalogService());
    viewModel.Load();
    this.DataContext = viewModel;
}

From that point forward, you can use properties of the main view model to declaratively inject child view models. DataTemplates come in handy for this.

Design time data

There are two ways to use design-time data in Blend with the MVVM pattern. The easy way is to create sample data from a class.


Select the view model class. Then layout your controls and drag properties onto them to databind.

The second way is to create an object data source.


First create a class with one property: the view model. Initialize this view model with a mock service. This mock data will appear at design time.

public class CatalogDesignerData
{
    public CatalogViewModel ViewModel
    {
        get
        {
            CatalogViewModel viewModel = new CatalogViewModel(
                new MockCatalogService());
            viewModel.Load();
            return viewModel;
        }
    }
}

Drag and drop to create list boxes

If you create sample data from a class, you will be able to drag and drop collections onto the art board. This will automatically generate a ListBox and give it an ItemTemplate. Then you can customize this ItemTemplate using Edit Additional Templates: Edit Generated Items: Edit Current.


By default, all properties are added to a StackPanel. If you want a horizontal layout:

  • Set the StackPanel’s Orientation property to Horizontal.
  • Right-click the StackPanel in the Objects and Timeline view and select Change Layout Type: Grid.
  • Click the grid ruler to create columns.
  • Right-click each property and select Auto Size: Fill.
  • Set all but the last column to Pixel sized. Set the last column to Auto sized.

If you want a vertical layout, adjust the Margin of the lower properties to indent them and add spacing between items. Also, adjust the colors and fonts to enhance the contrast of the first property.

If you use the object data source method, Blend will not create a ListBox when you drag and drop. I recommend starting with the simple sample data, then creating an object data source later if you find you need to.

Use ValueConverters for application-specific view logic

You might need to represent data using different styles based on its value. For example, positive numbers appear black while negative numbers appear red. Create ValueConverters for these cases.

public class RedWhenNegativeValueConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        if (targetType != typeof(Brush))
            throw new ApplicationException("You must bind to a brush.");

        decimal decimalValue = (decimal)value;
        if (decimalValue < 0.00m)
            return new SolidColorBrush(Colors.Red);
        else
            return new SolidColorBrush(Colors.Black);
    }

    public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

Data bind the element’s color to the value. Apply value converters in Blend using the advanced properties pane in the binding dialog. Click the ellipsis to create a new instance of the value converter class.

These are the techniques that I find myself using most frequently when building line-of-business applications in WPF and Silverlight. I don’t know everything there is to know about XAML, but by focusing on the features that I find most useful, I’m able to build apps quickly.



Entities, values, facts, and projections

No model is perfect, including the model of software behavior that we know as Object-Oriented. One of the most significant flaws in OO thinking is that everything is an object.

OO has certainly brought us forward in reasoning about machines. During that time, we’ve learned to model identity, interfaces, ownership, contracts, and many other useful concepts. We’ve also learned the drawbacks of shared state, remote objects, and persistence mapping, just to name a few challenges.

The original definitions of Object-Orientation do not necessarily insist that everything is an object. The founders simply defined the behavior of an object in software. Nevertheless, modern OO languages -- like Java, C#, and Ruby -- encourage us to make everything an object. If you ask an average developer if everything is an object, he will most likely say yes.

Entities and values

Domain Driven Design draws a line between entities and values. This distinction predates DDD, but the practitioners (most notably Eric Evans) are very explicit on this point. Entities have identity; values do not.

Identity is the property of an object that allows us to distinguish it from other objects. Even if two objects have exactly the same state, they are two different objects.

Identity implies consistency. Two observers looking at the same object will see the same behavior. If one of the observers acts upon the object and changes its behavior, the other one sees that change as well. Peers can use consistency to collaborate through a shared object.

Values, on the other hand, do not have identity. When you pass a value, the recipient does not receive the same object as you. They receive a copy.

Since values do not have identity, they do not have consistency. You cannot change a value to collaborate with a peer. Taken to the extreme, you cannot even rely upon self-consistency. If you change a value, you cannot be sure that you yourself will observe the change in behavior. Therefore, values are usually treated as immutable.
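The contrast shows up directly in code. In this sketch (the types are mine, for illustration), the value is immutable and compares by state, while the entity is mutable and distinguished only by reference:

```csharp
using System;

// A value: immutable, with no identity. Two instances holding the
// same state are interchangeable, so equality compares state.
public struct Money : IEquatable<Money>
{
    public readonly decimal Amount;
    public readonly string Currency;

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    public bool Equals(Money other)
    {
        return Amount == other.Amount && Currency == other.Currency;
    }
}

// An entity: mutable, with identity. Two accounts are different
// objects even if they happen to hold exactly the same state.
public class Account
{
    public Money Balance { get; set; }
}
```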

Toward a complete system of behavior

While we as an industry have found the distinction between entities and values to be necessary, we have not found it to be sufficient. Problems still arise in real systems when we have just these two classifications.

The identity of entities allows peers to collaborate via shared state. But when those two peers are on different threads, different processes, or different machines, shared state becomes a challenge. Consistency is promised, but cannot be guaranteed. Entities are not sufficient to solve distributed computing problems. Just look to DCOM if you need convincing.

Persistence is also a problem. The well-documented Object-Relational Impedance Mismatch plagues OO systems backed by relational databases. Network, object, and document databases map better, since these systems treat objects themselves as persistent. Nevertheless, persistent entities pose a challenge. Two peers can read the same entity, yet still end up with two copies. Unless the database management system is careful about maintaining consistency, the in-memory version of an entity is stale from the moment it is read from the database.

Mapping and change propagation are desirable features of software, but not directly satisfied by either entities or values. Consider declarative UI frameworks, such as HTML and XAML. These are mappings of object behavior into visible structure. Entities, having identity, will change. When those changes occur, the visible structure should update. Features such as HTML templating and XAML data binding have been created to propagate changes to the UI, but these are not part of a complete OO model. Additional imperative constructs -- message busses or AJAX calls -- are required to make these solutions work. Functional programming languages have typically been much better at mapping than OO languages have.

The entity-value distinction is insufficient because it considers only one dimension of behavior: identity. If we add the dimensions of persistence and consistency, we can define a system that is both necessary and sufficient for describing software behavior.

Facts

To solve the problem of shared state, let us add the classification of “fact”. Two peers changing a shared entity are relying upon consistency to collaborate. But consistency cannot be guaranteed in a distributed system. We therefore define a fact to be immutable.

Since a fact is immutable, two peers observing the same fact will observe the same state. But this alone is not enough to facilitate collaboration. Each peer must be able to make changes by expressing related facts. We cannot guarantee that other peers will see these new facts immediately. But we can guarantee that they will see these new facts eventually. “Eventually” might be a very long time if peers remain disconnected, but facts will flow once the connection is reestablished.

Now that we have facts to solve the shared-state problem, we can take that responsibility away from entities. So we mandate that an entity is no longer consistent at large scale; it is consistent only within a narrow scope. Ideally that scope is a single thread, and at most it is a single process on a single machine.

Since the scope of an entity has been constrained, we must also mandate that it is not persistent. The narrow scope must end not only at a thread or process boundary, but also at a temporal boundary. A new process started at a later point in time cannot load that same entity from a database. It can only load facts.
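In code, a fact might look like the following sketch (the type is my own invention for illustration, not Correspondence’s actual API). Every field is readonly, and a change is expressed by publishing a new, related fact rather than by mutating this one:

```csharp
using System;

// A fact: immutable once created, so every peer that eventually
// receives it observes exactly the same state.
public class Deposit
{
    public readonly Guid AccountId;   // refers to a prior fact
    public readonly decimal Amount;
    public readonly DateTime Recorded;

    public Deposit(Guid accountId, decimal amount, DateTime recorded)
    {
        AccountId = accountId;
        Amount = amount;
        Recorded = recorded;
    }

    // No setters. To "change" a deposit, a peer publishes a new
    // related fact (a correction, for example), and that fact flows
    // to other peers whenever a connection is available.
}
```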

Projections

To solve the problems of mapping and change propagation, let us add the classification of “projection”. A projection is an object that has no state of its own. It merely delegates all of its behavior to other objects. Changes to those objects are immediately visible in the projection.

Views such as HTML pages or XAML controls are projections. They simply transform the behavior of other objects into a structure that the user can observe. Any actions of the user are applied to the underlying objects. Similarly, changes to the underlying objects are immediately propagated to the structure that the user sees.

The MVVM pattern has emerged in the XAML space as a way of projecting Model behavior onto the View. XAML views can only data bind through properties. Models don’t always expose the properties that the view requires. It is therefore necessary to project all visible behavior through properties of an intermediate object: the View Model. So a View Model, like the View that it serves, is a projection.
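A minimal view model illustrates the point (these types are hypothetical, not from any particular framework). The projection holds no state of its own; its property simply reshapes the model’s state, so a change to the model is immediately visible through it:

```csharp
// The model: an entity with its own mutable state.
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// The projection: no state of its own. Every property delegates
// to the underlying model, so model changes show through at once.
public class CustomerViewModel
{
    private readonly Customer _customer;

    public CustomerViewModel(Customer customer)
    {
        _customer = customer;
    }

    public string FullName
    {
        get { return _customer.FirstName + " " + _customer.LastName; }
    }
}
```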

In summary, the classifications of objects are:

  • Entity: has identity; transient; mutable; constrained consistency.
  • Value: has no identity; transient; immutable; no consistency.
  • Fact: has identity; persistent; immutable; eventual consistency.
  • Projection: has identity; transient; immutable; constrained consistency.

This classification taxonomy is necessary and sufficient to model the behavior of distributed collaborative software systems. There are certainly systems that fall outside of this set. This taxonomy may not be necessary and sufficient for those systems.



Provable APIs

Tool vendors like Microsoft are not the only ones who publish APIs. When we create layered software, each layer has an API that is consumed by the next one up. To ensure the quality of our software, we should try to create provable APIs. These are interfaces that guide the caller to the correct usage patterns. They help the compiler help us verify the correctness of our code.

An unhelpful API throws exceptions whenever we get something wrong. These kinds of APIs can cause stress and lead to bugs that are difficult to correct. There is a right way to call them, but there is also a wrong way. The wrong way still compiles, but it contains bugs nonetheless.

Here are some language features and patterns that can help us prove the correctness of code:

  • Parameters
  • Callbacks
  • Foreign keys
  • Factories
  • Constructors

You must set a property before calling this method

A ShoppingService uses a Transaction to perform some basic operations. For example:

public class Transaction
{
}

public class ShoppingService
{
    public Transaction Transaction { get; set; }

    public void AddToCart(int cartId, int itemId, int quantity)
    {
    }
}

public static void Right()
{
    ShoppingService shoppingService = new ShoppingService();
    shoppingService.Transaction = new Transaction();
    shoppingService.AddToCart(1, 2, 3);
}

public static void Wrong()
{
    ShoppingService shoppingService = new ShoppingService();
    shoppingService.AddToCart(1, 2, 3);
}

The ShoppingService has a Transaction property that must be set before AddToCart is called. If you forget to set it, the method fails at run time. This API is unhelpful. If instead the method takes the transaction as a parameter, the compiler enforces the rule.

public class ShoppingService
{
    public void AddToCart(Transaction transaction, int cartId, int itemId, int quantity)
    {
    }
}

public static void Right()
{
    ShoppingService shoppingService = new ShoppingService();
    shoppingService.AddToCart(new Transaction(), 1, 2, 3);
}

In this version of the code, we’ve refactored the Transaction property and turned it into a method parameter. The right way of calling the method compiles. The wrong way does not.

You must check a condition before calling this method

Now let’s look at the interface for a cache. You can Add an item, Get an item, or check to see if the cache already Contains an item. There is a right way to use this API, and a couple of wrong ways.

public class Cache<TKey, TItem>
{
    public bool Contains(TKey key)
    {
        return false;
    }

    public void Add(TKey key, TItem item)
    {
        if (Contains(key))
            throw new ApplicationException();
    }

    public TItem Get(TKey key)
    {
        if (!Contains(key))
            throw new ApplicationException();

        return default(TItem);
    }
}

public static void Right()
{
    Cache<int, string> cache = new Cache<int, string>();
    int key = 42;
    string value;

    if (cache.Contains(key))
    {
        value = cache.Get(key);
    }
    else
    {
        value = LoadValue(key);
        cache.Add(key, value);
    }
}

public static void Wrong1()
{
    Cache<int, string> cache = new Cache<int, string>();
    int key = 42;
    string value;

    value = cache.Get(key);
    if (value == null)
    {
        value = LoadValue(key);
        cache.Add(key, value);
    }
}

public static void Wrong2()
{
    Cache<int, string> cache = new Cache<int, string>();
    int key = 42;
    string value;

    value = LoadValue(key);
    cache.Add(key, value);
}

private static string LoadValue(int key)
{
    return "the value";
}

The right way is to check the condition first. If the item is not there, load it and add it. If the item is already there, get it.

But you might be confused. Maybe you think you should get the item first, and that a null result means it’s not there. That is not the contract of this class, but you can’t tell from the public API alone. Get will throw an exception instead.

You might also make the mistake of trying to add an item to the cache without first checking to see if it is there. This could be a copy/paste bug, or perhaps your code took a path that you didn’t anticipate. This is going to throw an exception, too.

Let’s refactor this code by pulling the right usage pattern into the Cache itself. Since we need to do some work right in the middle, we’ll provide a callback.

public class Cache<TKey, TItem>
{
    public bool Contains(TKey key)
    {
        return false;
    }

    public TItem GetValue(TKey key, Func<TKey, TItem> fetchValue)
    {
        TItem value;
        if (Contains(key))
        {
            value = Get(key);
        }
        else
        {
            value = fetchValue(key);
            Add(key, value);
        }
        return value;
    }

    private void Add(TKey key, TItem item)
    {
        if (Contains(key))
            throw new ApplicationException();
    }

    private TItem Get(TKey key)
    {
        if (!Contains(key))
            throw new ApplicationException();

        return default(TItem);
    }
}

public static void Right()
{
    Cache<int, string> cache = new Cache<int, string>();
    int key = 42;
    string value;

    value = cache.GetValue(key, k => LoadValue(k));
}

After moving this code into the Cache class, we can make the Add and Get methods private. This makes it impossible to use the Cache incorrectly.

You must call this method after setting properties

It’s a good idea to have business objects that perform validation. It lets you report problems to the user, and it prevents bad data from getting into the database. But what if you forget to call the Validate method?

public class Customer
{
    private static Regex ValidPhoneNumber = new Regex(@"\([0-9]{3}\) [0-9]{3}-[0-9]{4}");

    public string Name { get; set; }
    public string PhoneNumber { get; set; }

    public bool Validate()
    {
        if (!ValidPhoneNumber.IsMatch(PhoneNumber))
            return false;

        return true;
    }
}

public static void Right()
{
    Customer customer = new Customer()
    {
        Name = "Michael L Perry",
        PhoneNumber = "(214) 555-7909"
    };

    if (!customer.Validate())
        throw new ApplicationException();
}

public static void Wrong()
{
    Customer customer = new Customer()
    {
        Name = "Michael L Perry",
        PhoneNumber = "555-7909"
    };
}

Nothing about this API forces you to call Validate. And if you don’t, bad data can get through.

The problem is that the PhoneNumber is a string – a very permissive type. We can make it a more restrictive type and use a factory method to enforce validation.

public class PhoneNumber
{
    private static Regex ValidPhoneNumber = new Regex(@"\([0-9]{3}\) [0-9]{3}-[0-9]{4}");

    private string _value;

    private PhoneNumber(string value)
    {
        _value = value;
    }

    public string Value
    {
        get { return _value; }
    }

    public static PhoneNumber Parse(string value)
    {
        if (!ValidPhoneNumber.IsMatch(value))
            throw new ApplicationException();

        return new PhoneNumber(value);
    }
}

public class Customer
{
    public string Name { get; set; }
    public PhoneNumber PhoneNumber { get; set; }
}

public static void Right()
{
    Customer customer = new Customer()
    {
        Name = "Michael L Perry",
        PhoneNumber = PhoneNumber.Parse("(214) 555-7909")
    };
}

Now we are forced to validate the string in order to get a PhoneNumber object. We can still provide feedback on user input, since that’s the time at which we will be parsing the string. But now we can’t forget.
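For that user feedback, one option is to pair Parse with a Try-pattern method in the familiar .NET style. Here is a sketch of the PhoneNumber class with a hypothetical TryParse added (TryParse is my illustration, not part of the original sample):

```csharp
using System;
using System.Text.RegularExpressions;

public class PhoneNumber
{
    private static Regex ValidPhoneNumber = new Regex(@"\([0-9]{3}\) [0-9]{3}-[0-9]{4}");

    private string _value;

    private PhoneNumber(string value)
    {
        _value = value;
    }

    public string Value
    {
        get { return _value; }
    }

    public static PhoneNumber Parse(string value)
    {
        if (!ValidPhoneNumber.IsMatch(value))
            throw new ApplicationException();

        return new PhoneNumber(value);
    }

    // Hypothetical Try-pattern alternative: reports failure as a return
    // value instead of an exception, which suits interactive validation.
    public static bool TryParse(string value, out PhoneNumber phoneNumber)
    {
        if (!ValidPhoneNumber.IsMatch(value))
        {
            phoneNumber = null;
            return false;
        }

        phoneNumber = new PhoneNumber(value);
        return true;
    }
}
```

The UI can call TryParse as the user types and report a problem immediately, while the rest of the code still deals only in valid PhoneNumber objects.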

You cannot change this property after calling a method

The .NET Connection class requires that you provide a connection string before you access any data. And it also prevents you from changing the connection string after you connect. These rules are fine. The problem is that they are enforced by a state machine behind an unhelpful API that throws exceptions if you get it wrong.

public class Connection
{
    private string _connectionString;
    private bool _connected = false;

    public string ConnectionString
    {
        get
        {
            return _connectionString;
        }
        set
        {
            if (_connected)
                throw new ApplicationException();

            _connectionString = value;
        }
    }

    public void Connect()
    {
        if (String.IsNullOrEmpty(_connectionString))
            throw new ApplicationException();

        _connected = true;
    }

    public void Disconnect()
    {
        _connected = false;
    }
}

public static void Right()
{
    Connection connection = new Connection();
    connection.ConnectionString = "DataSource=//MyMachine";
    connection.Connect();
    connection.Disconnect();
}

public static void Wrong1()
{
    Connection connection = new Connection();
    connection.Connect();
    connection.Disconnect();
}

public static void Wrong2()
{
    Connection connection = new Connection();
    connection.ConnectionString = "DataSource=//MyMachine";
    connection.Connect();
    connection.ConnectionString = "DataSource=//HisMachine";
    connection.Disconnect();
}

If we were to make the connection string a constructor parameter instead of a property, we wouldn’t be able to change it.

public class Connection
{
    private string _connectionString;

    public Connection(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string ConnectionString
    {
        get { return _connectionString; }
    }

    public void Connect()
    {
    }

    public void Disconnect()
    {
    }
}

public static void Right()
{
    Connection connection = new Connection("DataSource=//MyMachine");
    connection.Connect();
    connection.Disconnect();
}

The .NET Connection class has a constructor that takes a connection string. But it also has a constructor that does not. The overloaded constructor and modifiable property make it possible to do the wrong thing. Rip them out and let the compiler enforce correctness for you.

You must dispose this object

Let’s go back to the ShoppingService. There’s still a problem with the code. It’s possible to leak database transactions if you forget to dispose them.

public class Transaction : IDisposable
{
    public void Dispose()
    {
    }
}

public class ShoppingService
{
    public void AddToCart(Transaction transaction, int cartId, int itemId, int quantity)
    {
    }
}

public static void Right()
{
    ShoppingService shoppingService = new ShoppingService();
    using (Transaction transaction = new Transaction())
    {
        shoppingService.AddToCart(transaction, 1, 2, 3);
    }
}

public static void Wrong()
{
    ShoppingService shoppingService = new ShoppingService();
    shoppingService.AddToCart(new Transaction(), 1, 2, 3);
}

The compiler doesn’t require you to dispose an object that implements IDisposable. It doesn’t even issue a warning. Some refactoring tools and static analysis tools look for these problems, but we can refactor the API to enforce it at the compiler level. We’ll use a combination of a factory and a callback to take that responsibility away from the caller.

public class TransactionFactory
{
    private Func<Transaction> _factoryMethod;

    public TransactionFactory(Func<Transaction> factoryMethod)
    {
        _factoryMethod = factoryMethod;
    }

    public void Do(Action<Transaction> action)
    {
        using (var transaction = _factoryMethod())
        {
            action(transaction);
        }
    }
}

public static void Right(TransactionFactory transactionFactory)
{
    ShoppingService shoppingService = new ShoppingService();
    transactionFactory.Do(transaction =>
    {
        shoppingService.AddToCart(transaction, 1, 2, 3);
    });
}

The caller receives a TransactionFactory, rather than creating a Transaction himself. But the factory doesn’t just ensure that the Transaction is created properly, it also ensures that it is disposed of properly.
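To make that guarantee concrete, here is a self-contained sketch of the same factory; the Disposed flag is added purely so we can observe disposal, and is not in the original sample:

```csharp
using System;

public class Transaction : IDisposable
{
    // Illustrative flag so a caller can observe that disposal happened.
    public bool Disposed { get; private set; }

    public void Dispose()
    {
        Disposed = true;
    }
}

public class TransactionFactory
{
    private Func<Transaction> _factoryMethod;

    public TransactionFactory(Func<Transaction> factoryMethod)
    {
        _factoryMethod = factoryMethod;
    }

    public void Do(Action<Transaction> action)
    {
        // The using block guarantees disposal, whatever the action does,
        // even if it throws.
        using (var transaction = _factoryMethod())
        {
            action(transaction);
        }
    }
}
```

At the composition root, `new TransactionFactory(() => new Transaction())` wires up creation once; every call to Do then receives a transaction that is guaranteed to be disposed when the callback returns.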

This step must occur before that step

Finally, we can even use patterns to prove things about the business process as a whole. For example, a patient must be diagnosed with a disease before the doctor selects a treatment plan.

[Diagram: data model in which both Diagnosis and TreatmentPlanSelection reference Patient directly]

It’s possible to insert a Diagnosis before a TreatmentPlanSelection, but nothing about the data model requires it. Let’s use a foreign key to prove that the steps happen in the right order.

[Diagram: data model in which TreatmentPlanSelection references Diagnosis, which in turn references Patient]

By moving the foreign key from Patient to Diagnosis, we’ve made it impossible to select a treatment plan before diagnosing the patient. We haven’t lost the ability to query for the patient. It just requires one additional join.

Furthermore, we can now easily add logic to verify that the selected treatment plan is approved for the same condition with which the patient was diagnosed. Sadly, we cannot enforce this rule in the data model.
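The same two guarantees can be mirrored in the object model. A sketch, with hypothetical class and property names of my own choosing (they are not from the original data model): requiring a Diagnosis in the constructor plays the role of the foreign key, and the condition check is the rule the data model cannot enforce.

```csharp
using System;

public class Patient
{
    public string Name { get; set; }
}

public class Diagnosis
{
    public Patient Patient { get; private set; }
    public string Condition { get; private set; }

    public Diagnosis(Patient patient, string condition)
    {
        Patient = patient;
        Condition = condition;
    }
}

public class TreatmentPlan
{
    public string ApprovedCondition { get; private set; }

    public TreatmentPlan(string approvedCondition)
    {
        ApprovedCondition = approvedCondition;
    }
}

public class TreatmentPlanSelection
{
    public Diagnosis Diagnosis { get; private set; }
    public TreatmentPlan Plan { get; private set; }

    // Requiring a Diagnosis here mirrors the foreign key: you cannot
    // construct a selection without a prior diagnosis.
    public TreatmentPlanSelection(Diagnosis diagnosis, TreatmentPlan plan)
    {
        // The rule the data model cannot enforce can live here instead:
        // the plan must be approved for the diagnosed condition.
        if (plan.ApprovedCondition != diagnosis.Condition)
            throw new ApplicationException();

        Diagnosis = diagnosis;
        Plan = plan;
    }
}
```

As with the cache and connection examples, the constructor parameters make the required ordering visible in the API itself, rather than leaving it to a runtime check the caller has to discover.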

It doesn’t require any special tools to prove that an API is properly used. All it takes is a little forethought to turn an unhelpful API that buzzes and throws exceptions into a helpful, provable API.