Correspondence for the Web

Correspondence was originally launched as a solution for occasionally connected smart clients. It provided local storage, queuing, and synchronization for Silverlight and Windows Phone applications. This satisfied some common scenarios (collaborative mobile apps, out-of-browser business applications, etc.), but left a significant gap.

Correspondence 1.3 closes that gap by adding support for MVC 3. As the back-end data store for a web application, Correspondence enables many more use cases. Some of these are difficult to achieve any other way.

Cloud application, local services

A cloud-based application works fine in an isolated silo. But it is much more difficult to integrate a cloud application into an enterprise solution, especially when mission-critical business processes run on-premises. Correspondence brings the cloud and enterprise together.

A cloud-hosted web application connects to a Correspondence Synchronization Server, just as a client would. The web application has its own local cache of the shared data, but it continuously synchronizes that cache with other consumers. A service located within the enterprise can subscribe to facts published on the web, and publish its results back to the web application.

For example, suppose you wanted to host an e-commerce web application in the cloud. This would give you the elasticity you require to scale up during peak load, and back down again as demand falls off. However, you want your ERP system to run in-house. With Correspondence, your web application publishes orders that the customers place on-line. Your ERP system subscribes to those orders, fulfills them, and publishes status. Users log on to the web site to check order status.
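
As a sketch of how the pieces line up in code: the Company and Order fact types, the Orders query, and the HttpCommunicationStrategy class below are hypothetical, but the Community calls (AddCommunicationStrategy, Register, Subscribe, AddFact) are the same ones the unit tests later on this page use.

// Sketch of the on-premises ERP subscriber (hypothetical model types).
var community = new Community(new MemoryStorageStrategy())
    .AddCommunicationStrategy(new HttpCommunicationStrategy(syncServerUrl))  // hypothetical strategy
    .Register<CorrespondenceModel>()
    .Subscribe(() => _company)   // receive the orders that the web application publishes
    ;

_company = community.AddFact(new Company("contoso"));  // hypothetical fact

// Fulfill each order and publish status back as new facts for the
// web application to display (Orders and Fulfill are hypothetical).
foreach (Order order in _company.Orders)
    order.Fulfill();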

Continuous backup of your data

A web application, whether hosted in the cloud or in your own datacenter, is an active accumulation of data that must be regularly backed up. How frequently should you back up your database? How much data loss can you tolerate? How quickly can you get back on-line after restoring from backup?

If your web application is backed by a Correspondence Synchronization Server, then your data is backed up continuously. You don’t have to plan downtime for maintenance to take a snapshot of your database. And when your server goes down, simply stand up another one and point it to the same Synchronization Server. It will pull down all of the data.

Better yet, have the backup standing by live, pointing at the same Synchronization Server as the production system. Data will be automatically synchronized between the two systems. When production goes down, switch over to your hot backup and minimize the interruption in business. And when you can restore the production system, it will synchronize and pick up exactly where the hot backup left off.

Graceful degradation of UI

Correspondence works well for WPF, Silverlight, and Windows Phone applications. These cover a large number of platforms, including Windows, Mac, and Windows Phone. But what if you want to run on an iPhone, iPad, or Android device? What if the user you want to reach doesn’t have Silverlight installed?

With a Correspondence MVC 3 application, you can create a web alternative for your rich application. Go ahead and serve the Silverlight client on the web site. But if the user doesn’t have or cannot run the browser plug-in, replace the static Download Silverlight image with an HTML-based experience.

Rich client integration without a custom API

Many of the popular apps on mobile devices are rich front-ends to web applications. Twitter, Foursquare, Facebook, and TripIt are all great examples. Each of these web applications has a custom REST API designed for use by mobile applications.

If you want to build a web application with a rich mobile client, don’t spend your time writing REST APIs. Use Correspondence as both your web and mobile application back end. The app will synchronize with the web through the Correspondence Synchronization Server.

Reporting against an off-line database

Your web application is collecting valuable data. You want to report on it. But the application database is not optimized for reporting. And any reports that you run against the application database will hurt production performance.

Correspondence 1.3 includes a new SQL Server storage strategy. This is a great choice for caching data in the web application, but it also works well as a back-end for an on-premises reporting server. Write a service that subscribes to facts, and turns them into inserts, updates, and deletes against a relational store. Run your reports against a local relational database for the best possible performance without impacting transaction processing in production.
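
A hedged sketch of what that service loop might look like, assuming a hypothetical Order fact with an Orders query and a hypothetical UpsertOrder helper; Synchronize() is the standard Correspondence call for exchanging facts with the server:

// Project incoming facts into relational rows for the reporting database.
while (true)
{
    while (community.Synchronize()) ;             // drain facts from the Synchronization Server
    foreach (Order order in _company.Orders)      // hypothetical query in the factual model
        UpsertOrder(reportingConnection, order);  // hypothetical insert-or-update helper
    Thread.Sleep(pollingInterval);
}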

Engage with your site’s visitors using rich applications

Visitors to your site are only there for a short time. You need to entice them to come back. The best way to do that is to engage your visitors in a collaborative workflow. They leave an email address, and you give them regular updates. They leave a comment, and you respond.

If you rely upon traditional tools, you will be using the same web site as your visitors. But while they are casual guests, you are a permanent resident. You should be able to take advantage of a rich, responsive application.

A Correspondence-backed web application synchronizes with a rich Correspondence application. You can build the best possible experience for yourself, and still get the reach that you need to engage with your customers.

These are just some of the use cases that Correspondence for the web enables. Watch this short video for a demonstration, and visit Correspondence to get started building your own collaborative site.

Correspondence for the Web and WPF from Michael L Perry on Vimeo.



Unit tests for a collaborative framework

Correspondence is a framework for creating collaborative applications, which makes for some interesting unit testing. The test below simulates two users, Flynn and Alan, each with their own Community backed by in-memory storage. A shared in-memory communication strategy stands in for the synchronization server, and the Synchronize helper pumps both communities until neither has anything left to exchange.

[TestClass]
public class ModelTest
{
    private Community _communityFlynn;
    private Community _communityAlan;
    private Identity _identityFlynn;
    private Identity _identityAlan;

    [TestInitialize]
    public void Initialize()
    {
        var sharedCommunication = new MemoryCommunicationStrategy();
        _communityFlynn = new Community(new MemoryStorageStrategy())
            .AddCommunicationStrategy(sharedCommunication)
            .Register<CorrespondenceModel>()
            .Subscribe(() => _identityFlynn)
            .Subscribe(() => _identityFlynn.MessageBoards)
            ;
        _communityAlan = new Community(new MemoryStorageStrategy())
            .AddCommunicationStrategy(sharedCommunication)
            .Register<CorrespondenceModel>()
            .Subscribe(() => _identityAlan)
            .Subscribe(() => _identityAlan.MessageBoards)
            ;

        _identityFlynn = _communityFlynn.AddFact(new Identity("flynn"));
        _identityAlan = _communityAlan.AddFact(new Identity("alan"));
        _identityFlynn.JoinMessageBoard("The Grid");
        _identityAlan.JoinMessageBoard("The Grid");
    }

    [TestMethod]
    public void InitiallyNoMessages()
    {
        Assert.IsFalse(_identityAlan.MessageBoards.Single().Messages.Any());
    }

    [TestMethod]
    public void FlynnSendsAMessage()
    {
        _identityFlynn.MessageBoards.Single().SendMessage("Reindeer flotilla");

        Synchronize();

        Message message = _identityAlan.MessageBoards.Single().Messages.Single();
        Assert.AreEqual("Reindeer flotilla", message.Text);
    }

    private void Synchronize()
    {
        while (_communityFlynn.Synchronize() || _communityAlan.Synchronize()) ;
    }
}


Determinism and Dependency

75 years ago this month, Alan Turing published a mathematical paper that described a theoretical computer. This “computing machine” was capable of running programs constructed from simple state transitions, yet performing complex calculations. This was the foundation of computer software as we know it today.

All of the capabilities and limitations of software are found in Turing’s first machine. Others have proven that any software written in our modern programming languages can be executed by a Turing Machine. And the converse is also true. So any limitation of the Turing Machine is a limitation of all software. This is a property known as Turing Completeness.

Dependency Tracking

The first Turing Machines were deterministic. The actions of a deterministic Turing Machine are based only upon the internal state of the machine and the symbol that it is scanning. If the machine never scans a symbol, then changing that symbol would have no impact on the behavior of the machine. Since this is true of a Turing Machine, it is true of all software.

If you flip this around, you can see how to track dependencies within a software system. If you need to calculate one number (say the balance of your checking account), you can write a program to scan a bunch of other numbers (the amounts of your checks) and come up with an answer. The behavior of the machine, and therefore the final outcome, depends upon the scanned values. If you were to change a value somewhere in the computer’s memory that it didn’t scan, it would still come up with the same value.

The balance of your checking account depends upon the amounts of all your checks. You can tell by keeping track of which symbols the machine scans while it’s making that calculation. If any of those symbols is changed, then the balance needs to be recalculated. What if the amount of a check is modified? The machine scanned the amount of that check while calculating the balance, so it needs to be recalculated. What if a new check is added? The machine at some point scanned a symbol that told it that it was done with the list of checks. To add a new check, you must have modified this symbol. So, again, the balance would need to be recalculated.

Dependency tracking systems keep a record of every symbol that a program scans while it is calculating a value. Then, when one of those scanned symbols changes, it knows that the value needs to be recalculated. I know of two such dependency tracking systems: Knockout.js for JavaScript, and Update Controls for .NET.
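
Here is a minimal sketch of that idea in C#. It is not the Knockout.js or Update Controls implementation, just the core trick: while a calculation runs, every value it scans registers the calculation as a dependent, and writing a value invalidates every calculation that scanned it.

using System;
using System.Collections.Generic;

public class Computation
{
    // The calculation currently scanning values, if any.
    [ThreadStatic]
    public static Computation Current;

    public event Action Invalidated;

    // Run the calculation, recording every cell it scans along the way.
    public void Execute(Action calculation)
    {
        Computation previous = Current;
        Current = this;
        try { calculation(); }
        finally { Current = previous; }
    }

    public void Invalidate()
    {
        if (Invalidated != null)
            Invalidated();
    }
}

public class Cell<T>
{
    private T _value;
    private readonly HashSet<Computation> _dependents = new HashSet<Computation>();

    public T Value
    {
        get
        {
            // Record the scan: the running calculation now depends on this cell.
            if (Computation.Current != null)
                _dependents.Add(Computation.Current);
            return _value;
        }
        set
        {
            _value = value;
            // The symbol changed: every calculation that scanned it is out-of-date.
            foreach (Computation dependent in _dependents)
                dependent.Invalidate();
            _dependents.Clear();
        }
    }
}

A checking account balance calculated inside Execute would be invalidated by a change to any check amount it scanned, and by nothing else.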

Determinism can always be converted into dependency. If it is true for a Turing Machine, then it is also true of any modern programming language. That’s the advantage of knowing that a language is Turing Complete.



75 years of software

m-config   symbol   operations   final m-config
q1         None     R            q2
q2         None     R            q3
q3         None     P0, R        q4
q4         None     R            q5
q5         None     P1, R        q6
q6         None     R            q7
q7         None     P0, R        q8
q8         None     R            q9
q9         None     P0, R        q10
q10        None     R            q11
q11        None     P1, R        q12
q12        None     R            q13
q13        None     P0, R        q14
q14        None     R            q15
q15        None     P1, R        q16
q16        None     R            q17
q17        None     P1, R        q18
q18        None     R            q3

This month marks the 75th anniversary of the publication of Alan Turing’s paper On Computable Numbers, with an Application to the Entscheidungsproblem. In that paper, Turing defined a universal computing machine, a theoretical construct capable of running programs. He didn’t set out to invent software. But that’s exactly what happened.

The Entscheidungsproblem

Turing’s goal was to show that there are statements in mathematics that are well-formed, but not decidable. That isn’t to say unprovable. That is to say that we can’t even decide whether a proof exists. This is an extremely important mathematical result, but what was more revolutionary was the way in which Turing proved it.

Turing invented a machine that is capable of running programs. Each program prints out the digits 0 and 1, in addition to possibly doing some record keeping. A program is a finite series of instructions in a state transition table like the one above. If you “compile” the program, you will get an integer. It’s a really big integer, but it is finite. This lets you count the programs and put them in order.
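
To see such a machine run, here is a small C# sketch that executes the table above. It simplifies Turing’s conventions: every rule in this particular table scans a blank square (None) and moves right (R), so the move is implicit and the tape is modeled as just the list of printed symbols.

using System;
using System.Collections.Generic;

class MachineRunner
{
    static void Main()
    {
        // (Print, Next) for each m-config; a null Print means the rule only moves.
        var table = new Dictionary<string, (char? Print, string Next)>
        {
            ["q1"]  = (null, "q2"),  ["q2"]  = (null, "q3"),
            ["q3"]  = ('0', "q4"),   ["q4"]  = (null, "q5"),
            ["q5"]  = ('1', "q6"),   ["q6"]  = (null, "q7"),
            ["q7"]  = ('0', "q8"),   ["q8"]  = (null, "q9"),
            ["q9"]  = ('0', "q10"),  ["q10"] = (null, "q11"),
            ["q11"] = ('1', "q12"),  ["q12"] = (null, "q13"),
            ["q13"] = ('0', "q14"),  ["q14"] = (null, "q15"),
            ["q15"] = ('1', "q16"),  ["q16"] = (null, "q17"),
            ["q17"] = ('1', "q18"),  ["q18"] = (null, "q3"),
        };

        string state = "q1";
        var printed = new List<char>();
        for (int step = 0; step < 50; step++)
        {
            var rule = table[state];
            if (rule.Print.HasValue)
                printed.Add(rule.Print.Value);  // the P0 or P1 operation
            state = rule.Next;                  // move right and switch m-config
        }
        Console.WriteLine(new string(printed.ToArray()));  // 01001011 repeating
    }
}

Left to run forever, the machine loops from q18 back to q3 and prints 01001011 over and over: exactly the kind of well-behaved, digit-printing program Turing’s argument concerns.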

Now, suppose that you had a program that would run all of these programs in order, but stop the nth program when it prints the nth digit. Would this program keep printing digits, like any good Turing program should? Or would it seize up?

If every program compiles to a finite number, then surely this program does too. Eventually, the program will run itself. When it does, it has already printed n-1 digits. It will run itself until it prints the nth digit. But it never will!

Turing equated deciding whether a program gets caught in an infinite loop with deciding whether a statement is provable. Since we can’t evaluate a program to see if it will enter an infinite loop, neither can we decide whether an arbitrary statement is provable.

Modern day provability

These days, we have theorem provers, static analysis, and code contracts that verify assertions about programs. And yet when Turing invented programming, he proved that these things could not work! Was Turing wrong?

No, Turing was absolutely right. These tools have limitations. There are some programs that you could write that these tools would not be able to prove correct. They won’t work in the general case.

But even though there is a vast infinity of programs that we cannot prove, there are plenty that we can. If we constrain ourselves to the provable set, then these tools can be of incredible value. Knowing where the boundary is helps us get the most out of these tools. And that is the true value of the Entscheidungsproblem.

Please read “The Annotated Turing” by Charles Petzold for a fantastic journey through this foundational paper. This is where our industry was born, only 75 years ago.



Keep functional and non-functional requirements separate

Functional requirements change frequently. System architecture is difficult to change. If system architecture depends upon functional requirements, then the system is going to be brittle and expensive. Instead, system architecture should only be based on non-functional requirements. It should be isolated from changes in functional requirements.

The difference between functional and non-functional requirements

Functional requirements describe how the system behaves within the problem domain. A banking application will express its functional requirements in terms of customers, accounts, and transactions – financial transactions, not database transactions. If a branch manager decides to offer a free checking account with every home mortgage, then he is changing functional requirements.

Non-functional requirements describe how the system behaves from a technical perspective. They are independent of the problem domain. Whether it’s a banking application, a healthcare application, or a property management system, non-functional requirements are expressed in the same terms. If data needs to be replicated to a different location, that’s a non-functional requirement. If records should be denormalized for reporting, that’s a non-functional requirement. If the branch manager wants to launch a new web site, he is changing non-functional requirements.

Web services

Too many of our foundational systems conflate functional and non-functional requirements. The result is software that is difficult and costly to change. Take, for example, web services. Every web service defines two things:

  • A message contract
  • A delivery mechanism

The message contract describes what is to be sent from the client to the server, and what is to be returned from the server to the client. The contents of a message are elements of the problem domain. A banking application will have web services to list a customer’s accounts, and to get transactions by account and date range. These are functional terms. When functional requirements change, the message contract has to change.

The delivery mechanism describes how messages are sent between the client and the server. Practically speaking, web services use HTTP. They are synchronous in nature, and initiated by the client. The message contract is defined by the server. Message delivery is not guaranteed. These are the non-functional consequences of choosing web services.

A web service conflates the functional and non-functional requirements through its use of the Web Services Description Language (WSDL). WSDL contains a definition of the message contract, a functional construct. When functional requirements change, the WSDL is updated. The server must publish the new WSDL for all of its clients to consume. The non-functional relationship between clients and servers is now dependent upon functional requirements expressed in the message contract.

Relational databases

Another example of conflated requirements can be found in a relational database. Relational databases require developers to define a schema, which describes tables and columns. Tables are entities in the domain, and columns are attributes of those entities. A schema is dependent upon functional requirements. When the functional requirements change, the schema changes.

A relational database has tools to help us satisfy non-functional requirements. Non-functional requirements will describe the Service Level Agreement (SLA) governing how quickly transactions must be processed. They will also describe how frequently reports are run. Based on these requirements, a DBA may choose to index the data differently. They may choose to replicate from the transactional database to the reporting database. They may even decide to denormalize the reporting database for better performance. All of these decisions are non-functional in nature.

Indexing, replication, and denormalization all depend upon the database schema. When the schema changes, these decisions must be revisited. The schema changes whenever functional requirements change. As a result, expensive architectural decisions are affected by quickly changing functional requirements.

This can only lead to one of two outcomes:

  1. Changes to functional requirements are expensive, or
  2. Changes to functional requirements are discouraged.

Either outcome is death to business.

Application-agnostic architectures

To allow functional requirements to change frequently, architectural decisions should be completely isolated from the problem domain. If an architectural decision depends upon a functional requirement, then it runs the risk of becoming invalidated by future requirements changes. Architectural decisions should therefore be made with knowledge of non-functional requirements alone.

To some, this may sound like Big Architecture Up Front. Well, it is Up Front, for all the reasons described above. But it doesn’t have to be big. We generally know before getting into the details of the problem domain whether we will need a web application, a reporting database, or a service bus. The up front architectural decisions will simply select these components based on non-functional requirements. You may even decide to use technologies like web services or relational databases that conflate requirements. Just be careful that you don’t centralize these technologies, allowing them to cause friction as functional requirements change.

One consequence of isolating architectural decisions from the problem domain is that architecture can be amortized. Since architecture does not depend upon the problem domain, it can be done up front with no knowledge of the application. Transaction processing is a common non-functional requirement. Reporting is a common non-functional requirement. A good architecture can be crafted for performing transaction processing and reporting with no knowledge of the applications for which it might eventually be applied. That architecture can then be used in many different applications, with no additional cost.

It is common for a consultant to answer a question with “it depends”. If the question is an architectural one, then “it depends” should be followed with a non-functional requirement. If architectural decisions depend upon functional requirements, then you will be paying that consultant over and over again as you make changes to your business model. But if we as an industry create a ready-made set of application-agnostic architectures, we can pull the right one from the shelf and apply it to a large number of problem domains. If these architectures are carefully constructed not to conflate requirements, then they can drive down costs over time.



Correspondence on Android

Correspondence is all about synchronizing data between users, between devices, and now between platforms: Correspondence now runs on Mono for Android.

I created a Silverlight application called Thought Cloud for a demo. Users share thoughts with this collaborative mind mapper. This Android app presents a read-only list of the clouds shared in the Silverlight application. It can be extended to view those clouds and make changes.

User interface updates

Correspondence is built on top of Update Controls, which is responsible for updating the user interface. The Silverlight and Windows Phone versions of Update Controls work with data binding to implement MVVM. But in Android, there is no data binding. Hence, there is no MVVM. Instead, Android uses adapters.

A ListView is the Android equivalent of the Silverlight ListBox. It is attached to a ListAdapter. It works like an ObservableCollection: any item added to the ListAdapter will appear on the ListView. The following code adds the name of each cloud to the adapter.

private void UpdateCloudArray()
{
    _cloudArrayAdapter.Clear();
    foreach (Cloud cloud in _identity.SharedClouds)
    {
        Thought centralThought = cloud.CentralThought;
        string text = centralThought == null
            ? "<null>"
            : centralThought.Text.Value ?? "<empty>";
        _cloudArrayAdapter.Add(text);
    }
}

It’s not very smart to clear the whole adapter each time, but this is proof-of-concept code. A more sophisticated version would update items in the adapter in place.

Tracking dependencies

Now we have to make sure that UpdateCloudArray is run whenever it needs to be. That’s where Update Controls comes in. Update Controls observes what your code touches. What your code reads is what it depends upon. This code reads the SharedClouds, which is a query in the factual model.

fact Identity {
key:
    string anonymousId;

query:
    Cloud* sharedClouds {
        Share s : s.recipient = this
        Cloud c : s.cloud = c
    }
}

It also reads the CentralThought of each cloud, and the Text of each central thought. These are both mutable properties in the factual model. Whenever any of these things changes, Update Controls can tell. The following code hooks it up.

_depCloudArray = new Dependent(UpdateCloudArray);
_depCloudArray.Invalidated += delegate
{
    RunOnUiThread(delegate
    {
        _depCloudArray.OnGet();
    });
};
_depCloudArray.OnGet();

The Dependent object keeps track of dependencies. It takes an update method as a constructor parameter. Whenever it needs to update, it calls this method. It depends upon the things that this method reads.

When one of the dependencies changes, the Dependent goes out-of-date. At that moment, the Invalidated event is raised. The delegate above brings the Dependent back up-to-date on the UI thread.

Finally, the Dependent starts its life out-of-date. So we call OnGet() to bring it up-to-date immediately.

Next steps

The above code is more than I’d like to write for each adapter. Instead, I intend to create adapter classes that build dependency tracking right in. This is the process I went through for the Update Controls library for Windows Forms. WPF and Silverlight have data binding hooks, so I didn’t have to individually augment every control for those versions.
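
Something like this sketch is what I have in mind (a hypothetical class, not yet part of the library). It wraps the same Dependent lifecycle shown above inside an ArrayAdapter, so each activity supplies only an update method:

// Hypothetical: an ArrayAdapter with dependency tracking built in.
public class DependentArrayAdapter : ArrayAdapter<string>
{
    private Dependent _dependent;

    public DependentArrayAdapter(Activity activity, int textViewResourceId,
        Action<ArrayAdapter<string>> update)
        : base(activity, textViewResourceId)
    {
        _dependent = new Dependent(() => update(this));
        _dependent.Invalidated += delegate
        {
            // Bring the adapter back up-to-date on the UI thread.
            activity.RunOnUiThread(delegate { _dependent.OnGet(); });
        };
        _dependent.OnGet();  // a Dependent starts life out-of-date
    }
}

With that in place, UpdateCloudArray shrinks to its foreach loop, and the activity simply constructs a DependentArrayAdapter around it.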

The Silverlight version of Thought Cloud uses HTTP long polling to push changes to other users, so your thoughts immediately appear on their machine. On a phone, this is done through push notifications. I haven’t yet written the push notification service for Android. So this code will not automatically reflect changes made online. That is my next milestone. For now, you have to exit and reenter the app.

This Mono version is slow. I’ve written Android apps in Java, and they were not nearly as slow as this. It could just be that I have to do some optimization in Correspondence. Or it could be the extra layers of the Mono .NET runtime. My Faceted Worlds partner Russell and I are in the middle of the Java port. We’ll know soon whether Correspondence can be performant as a native Android library.



XAML has Come Home to Windows

Yesterday Steven Sinofsky presented Windows 8 from a developer’s perspective. He emphasized that Windows 8 has a brand new API, that it enables native application development in multiple languages, and that markup is a significant component of the rendering engine. In so doing, he mentioned that all of our Silverlight skills transfer to the new operating system. He explicitly did not say that Windows 8 is running Silverlight. This speaks to the branding that Microsoft has developed around Windows and .NET.

In 2006, Microsoft released .NET 3.0. Despite its major version number, it was not a new version of the .NET runtime. Rather, it was four new libraries running on top of .NET 2.0: Windows Communication Foundation (WCF), Windows Workflow Foundation (WF), Windows CardSpace, and Windows Presentation Foundation (WPF). Not .NET Presentation Foundation – Windows.

In 2008, Silverlight 2.0 took the .NET Framework and WPF outside of the Windows platform. With this release, applications written with XAML and C# or VB ran in the browser on Windows and the Mac. Later releases would take those applications out of the browser. Silverlight was originally intended as a bridge to take Microsoft technology across all platforms. But then the strategy around Silverlight shifted.

In the first Windows 8 video, Sinofsky emphasized that Windows apps could be written in HTML 5 and JavaScript. He made no mention of Silverlight. This worried the Silverlight community, reigniting fears that Silverlight was dead. Yesterday’s keynote cleared things up a bit. Windows 8 apps will be written primarily in four languages: C#, VB, C++, and JavaScript. The markup will be based on XAML when C#, VB, or C++ is employed, or on HTML 5 when JavaScript is employed. This is good news for Silverlight developers, because their skills are still of value.

This is also a good strategy for Microsoft. Before, Microsoft was attempting to build the cross-platform bridge. They faced severe push-back from other platform vendors, particularly Apple. Sure, Silverlight worked on the Mac, but Microsoft was forbidden from porting it to the iPhone and iPad. So instead of Microsoft bridging out, they are letting the industry build bridges into Windows. Let HTML 5 and JavaScript be the cross-platform markup and programming language. XAML and C#/VB will be first-class Microsoft tools.

The words that Sinofsky used on his architecture slide are telling.

  • WinRT – Not the .NET Framework
  • C# and VB – Not .NET
  • XAML – Not Silverlight

WPF is not dead. Silverlight is not dead. The components that made them successful are now core components of Windows. XAML has always been a Windows technology. Sinofsky has just pulled it back into the fold.



What’s so special about Windows Phone?

I’ll be presenting at DalMob this Tuesday evening. DalMob is a mobile development group that crosses all of the platforms. Most of the attendees are iPhone and Android developers. I’m going to demonstrate the key features that make Windows Phone different from a developer’s perspective. These are:

  • Panorama and pivot
  • Live tiles
  • Storyboard animation
  • Data binding

Demo code is borrowed from Chris Koenig. Slides are borrowed from Megan Donahue’s MIX 11 session, Advanced Application Design for Windows Phone.



Correspondence presentation from Dallas .NET User Group

The recording of my Correspondence presentation is up on UserGroup.tv. This one covers Silverlight out-of-browser applications. I used Thought Cloud as a demo, and opened it up for the audience to participate. It makes for some interesting on-screen shenanigans.



DevLink app results

DevLink 2011 was a fantastic conference. I met some well-known members of the development community. And I met some brilliant people. Occasionally, these were one and the same!

I presented two sessions at DevLink. The first was on the CAP Theorem. I’ve posted the slides and demo for your perusal. The second was on Correspondence, the same session I gave at Dallas TechFest the week prior. Both sessions were well attended, though I did have problems with my live coding demo in the Correspondence session. Fortunately, the first demo had already gotten the point across.

Like I did for Dallas TechFest, I created a conference application using Correspondence. I pushed this app to the Marketplace for attendees to use. This time I got 30 downloads out of about 650 attendees, and only one review. The percentage was lower than before, but still useful.

The one reviewer gave the app 4 out of 5 stars, with the following comment:

So far I really like it. It seems a little slow but I like the format a lot better than the eventboard program.

The EventBoard application that he references was the official app available on the Windows Phone and iPhone platforms. It’s really cool to be favorably compared with a professional app. I think the primary differentiator between the two is that, where EventBoard is all about the event, my app was all about the attendee. The first page of my app shows your schedule. The first page of EventBoard shows a list of events.

Performance enhancements

The reviewer noted that the application seems a little slow. This is because of performance problems in Correspondence. I made some improvements, but was unable to get them into the Marketplace in time. More performance enhancements are on the way.

The first enhancement was to reuse the IsolatedStorageFile object rather than recreating it. Every example of using isolated storage that I’ve seen follows the same pattern:

public class IsolatedStorageStorageStrategy : IStorageStrategy
{
    public static IsolatedStorageStorageStrategy Load()
    {
        var result = new IsolatedStorageStorageStrategy();

        // A new IsolatedStorageFile is created and disposed on every access.
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (BinaryReader input = new BinaryReader(store.OpenFile(ClientGuidFileName, FileMode.Open)))
            {
                result._clientGuid = new Guid(input.ReadBytes(16));
            }
        }

        return result;
    }
}

Isolated storage is notoriously slow. It was easy to isolate the slowness to the creation of the IsolatedStorageFile object. So I just changed the pattern:

public class IsolatedStorageStorageStrategy : IStorageStrategy
{
    private IsolatedStorageFile _store;

    private IsolatedStorageStorageStrategy(IsolatedStorageFile store)
    {
        _store = store;
    }

    public static IsolatedStorageStorageStrategy Load()
    {
        // Create the store once and keep it alive for the lifetime of the strategy.
        IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication();
        var result = new IsolatedStorageStorageStrategy(store);

        using (BinaryReader input = new BinaryReader(store.OpenFile(ClientGuidFileName, FileMode.Open)))
        {
            result._clientGuid = new Guid(input.ReadBytes(16));
        }

        return result;
    }
}

This was low-hanging fruit. Other performance enhancements that I have in the queue are:

  • Pooling the Stream objects instead of opening and closing files.
  • Switching from XML to binary serialization of facts for HTTP communications.
  • Pulling more facts at a time from the server.
  • Evaluating SQL CE as an alternate storage strategy.