CodeSnips

Wednesday, September 18, 2013

Versioning REST web services in WebApi

Much has been written on the subject of the proper approach to versioning REST web services. If you Google long enough, you'll find there are two basic approaches:

1. Put the version identifier on the URI somehow:
    http://myhost/Account
    http://myhost/v2/Account
    http://myhost/Account/?version=2.0
    http://myhostV2/Account

2. Use HTTP headers - 'Accept', 'Content-Type', and/or a custom 'X-' header - to pass the desired version from the client.
    Accept: application/myhost-v2+js
    Content-Type: application/myhost-v2+js
    X-Version: 2.0


Any scheme which requires a URI change seems like a bad idea to me. The URI is supposed to be, essentially, a coordinate to a specific entity. Adding the version to that coordinate breaks older clients immediately, regardless of whether the entity has changed in a breaking way.

A more robust approach would allow clients and the web service to negotiate the version needed, but leave the URI the same. This is more compatible with the semantics of HTTP.

Specifically, in the case where we are using Microsoft's WebApi to provide REST web services, I wanted to see if I could add version handling in the least obtrusive way for programmers. To that end, I wanted something that fit the following design goals:

Design Goals
1. The URI doesn't change.
2. Handling the logic to shape the entity based on version should be done within a single method (or class).
3. I want to minimize any special handling in the ApiController.
4. I want to decouple my domain entities from my model entities.
5. I want to have zero configuration changes to make in IIS or my web.config.

I think the following solution meets each of the above goals.

Example Code

To avoid creating new content types, which would require me to alter my IIS config [goal 5], I've chosen to use a custom header: "X-Version". This allows me to leave my URI format alone [goal 1].
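From the client's point of view, all that's required is to set the header on each request and, optionally, check the Version property echoed back on the model. Here's a small, hedged sketch of that logic in plain JavaScript (the helper names and the 1.0 fallback are my own, mirroring the server-side default; this is not code from the server project):

```javascript
// Build the headers for a version-aware request.
// (Helper names here are illustrative, not part of the post's code.)
function withVersionHeader(headers, version) {
  var result = {};
  for (var key in headers) {
    if (headers.hasOwnProperty(key)) {
      result[key] = headers[key];
    }
  }
  result['X-Version'] = version.toFixed(1); // e.g. "2.0"
  return result;
}

// Mirror of the server-side parsing: fall back to 1.0 when the
// header is missing or unparseable.
function parseVersionHeader(headers) {
  var version = parseFloat(headers['X-Version']);
  return isNaN(version) ? 1.0 : version;
}
```

A jQuery client could pass the result of withVersionHeader straight into $.ajax's headers option; the parsing half is essentially what the server does in the base controller below.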

To minimize any special handling of this header in my ApiControllers [goal 3], I've created a base class from which my controllers will inherit. It sets a "Version" property on the controllers which inherit from this base class:

    public class BaseApiController : ApiController
    {
        public double Version { get; set; }

        protected override void Initialize(HttpControllerContext controllerContext)
        {
            base.Initialize(controllerContext);
            Version = 1.0;

            IEnumerable<string> headerValues;
            if (Request.Headers.TryGetValues("X-Version", out headerValues))
            {
                double version;
                if (double.TryParse(headerValues.First(), out version))
                {
                    Version = version;
                }
            }
        }
    }

    // Example usage
    public class AccountController : BaseApiController
    {
        IAccountRepo _accountRepo;

        public AccountController(IAccountRepo accountRepo)
        {
            _accountRepo = accountRepo;
        }

        public IEnumerable<Account> Get()
        {
            foreach (var acct in _accountRepo.getAll())
            {
                yield return acct.AccountMap(Version);
            }
        }

        public Account Get(int id)
        {
            return _accountRepo.getById(id).AccountMap(Version);
        }

        // We handle the other verbs (POST, PUT, etc.) similarly
        // with a mapping from Model to Domain.
    }



I want to send back a Version property on all my model entities so my client can confirm it is getting the right version of the model entities it requested.

    public abstract class ModelBase
    {
        public double Version { get; set; }
    }

    public class Account : ModelBase
    {
        public int Id { get; set; }
        public string AccountCode { get; set; }
        public string Name { get; set; }
        public string AccountName { get; set; }
        public bool IsActive { get; set; }
    }



To decouple my domain entities (in this case Account) [goal 4] from my model entities, I've chosen to implement a "mapper" as an extension method on my domain entity type. This extension method takes a version parameter so it can choose how to return the data. The mapper also updates the Version property. All my mapping logic goes into this method [goal 2] so I only have to make changes here if I create a new version of the model entity.

        

    // Note: this is the mapper from Domain to Model entity.
    // The same idea applies for the Model-to-Domain mapper (not shown here).
    // (Extension methods must live in a static class; the class name
    // here is my own choice.)
    public static class AccountMapper
    {
        public static Model.Account AccountMap(this Domain.Account acct,
                                               double version)
        {
            if (acct == null)
                return null;

            // Version 1 Account Model
            if (version == 1.0)
            {
                return new Account
                {
                    Version = version,
                    Id = acct.Id,
                    AccountCode = acct.AccountCode,
                    Name = acct.Name,
                    IsActive = acct.IsActive,
                    AccountName = null,               // not used in v1
                };
            }

            // Version 2 (Current) Account Model
            return new Account
            {
                Version = version,
                Id = acct.Id,
                AccountCode = "V2" + acct.AccountCode, // changed content
                Name = null,                           // deprecated
                IsActive = acct.IsActive,
                AccountName = acct.Name,               // new
            };
        }
    }



Version Mapping

There are basically four kinds of changes: adding properties, renaming properties, deprecating properties, and changing the content of existing properties.

One drawback of serializing statically typed C# model classes is that they can't be dynamic (at least not without a lot of ugly work), so to avoid breaking changes you have to leave old properties in place on the model classes. This isn't too bad, though: you can handle each of the kinds of changes noted above without breaking clients that expect the old versions, and new-version clients aren't cluttered with the old properties either - most of the mapping mess ends up in a single set of mappers in the server code.
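To make those four kinds of changes concrete, here is the same mapping idea sketched in plain JavaScript (the lower-case domain property names are my own assumption; the model shape mirrors the C# example above):

```javascript
// Illustrative mapper mirroring the C# AccountMap extension method.
// Shows the kinds of changes: add (AccountName), rename/deprecate (Name),
// and changed content (AccountCode). Domain property names are assumed.
function accountMap(acct, version) {
  if (!acct) return null;

  // Version 1 model shape
  if (version === 1.0) {
    return {
      Version: version,
      Id: acct.id,
      AccountCode: acct.accountCode,
      Name: acct.name,
      IsActive: acct.isActive,
      AccountName: null            // not used in v1
    };
  }

  // Version 2 (current) model shape
  return {
    Version: version,
    Id: acct.id,
    AccountCode: 'V2' + acct.accountCode, // changed content
    Name: null,                           // deprecated
    IsActive: acct.isActive,
    AccountName: acct.name                // new (renamed from Name)
  };
}
```

Old clients keep seeing the v1 shape; new clients get the renamed and reshaped properties, all from one function.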

Summary

The work in this example creates a couple of small base classes and a version-aware mapping function. The base classes could easily be added to a common library for other applications to use. The mapping function is an example of one way to transform models based on version - there are undoubtedly better approaches, but the end result is that you can manage version changes to models in one place.

I've learned that to handle versioning in a clean way, it's important to design ahead of time and make sure allowances are made for future change in both client and server code. This particular approach meets the goals I set out.

Lastly, non-WebApi REST frameworks implemented in more dynamic, loosely typed languages would have an easier time transforming the returned data based on version. In my case, I wanted to use as much of C# and WebApi out of the box as possible (I guess this is the implied design "goal 6"), but I think any solution should address the first five goals above.

Monday, September 16, 2013

Intelligent Design - Irreducible Complexity

Not exactly a code posting, but I just can't believe that one of my favorite science shows - Through the Wormhole, hosted by Morgan Freeman - brought Michael Behe onto the "Did God Create Evolution?" episode to present his alternative theory of intelligent design. First of all, Behe's argument for "Irreducible Complexity" has been invalidated many times - most recently in the Dover case - and by many other examples. But I'll set out the argument here very simply.

Behe claims that certain biological structures could not be the product of evolutionary ADDITION of changes because SUBTRACTING any of the constituent parts would result in a non-functional result. He often uses the bacterial flagellum as an example - essentially an "outboard motor" that certain bacteria use to propel themselves around.

The problem with this argument is that Behe assumes all evolutionary changes are ADDITIONS. In fact, many evolutionary changes are SUBTRACTIONS. I emphasize these terms in capital letters because they're so critical to understanding the misleading nature of his argument. It is clear that many "irreducible" structures can be created by a combination of incremental additions AND subtractions. Take a bridge structure. Here is a fully irreducible bridge:

 |===========|
XX           XX

This bridge cannot function if any part is removed. Right? So therefore, according to Behe, it must be designed because removing any part would make this bridge structure "non-functional".

But, in fact, let's assume we started with something like this:

 XX|==|==|==|XX

Some blocks have fallen (been added) between the two banks (XX). This could easily happen if blocks were dropped over the gap. It's a bridge. Not a very elegant bridge, but a bridge nonetheless - and one that could have formed naturally.

Now let's see what happens if a large block is added to this:

 |==========|
XX|==|==|==|XX

We still have a bridge - and again one that could easily have been formed by a natural process.

Now, let's SUBTRACT something (the underlying blocks) and behold the "irreducibly complex" structure of the bridge.

 |==========|
XX          XX

This process of addition AND subtraction happens incrementally in evolution all the time. Behe conveniently ignores this simple fact of incremental change. It is not all addition; it includes subtraction as well. This guy is supposed to be an expert biochemist - yet he's NEVER seen examples of proteins being formed by the addition and subtraction of molecules? Right. I think not. The man is shamelessly misleading people.

So, here is a well-known answer to the so-called irreducible complexity argument. And yet, no mention was made in the Through the Wormhole episode of this, nor of the fact that such evidence was presented in the Dover trial in response to Behe's "theory".  It boggles my mind that Behe is given screen time presenting an argument that is so demonstrably weak.

Now go take a look at these structures and reconsider the Behe "theory" that functional structure must be designed by a "designer".
A "designed" bridge

OK. Off the soapbox now. Back to reality.


Thursday, September 5, 2013

Doing SignalR the AngularJS Way

I wanted to see how to use SignalR and AngularJS together, and worked up this example as an exercise. I wanted all my JavaScript code to live in an AngularJS controller, and to encapsulate the SignalR plumbing behind a service facade.

I'm assuming in this article you have some exposure to AngularJS and SignalR, so I won't get into the nitty gritty details on how to wire everything up in Visual Studio. If you are interested in the full example code, you can find it here on GitHub.

I will be interacting with this trivial SignalR Hub class:
public class NotesHub : Hub
{
    public void AddNote(string note)
    {
        Clients.All.noteAdded(note);
    }
}
This class exposes an 'AddNote' method and sends a 'noteAdded' message.

Now for the AngularJS code. As part of my module setup, I register jQuery as a value on the module, because I will be injecting it as a dependency into my service.
var mainApp = angular.module('mainApp', ['ui.bootstrap']);
mainApp.value('$', $);


Now for the service code. Explanation follows below:
mainApp.factory('noteService', ['$','$rootScope',
 function($,$rootScope) {
  var proxy;
  var connection;
  return {
    connect: function () {
      connection = $.hubConnection();
      proxy = connection.createHubProxy('notesHub');
      connection.start();
      proxy.on('noteAdded', function (note) {
          $rootScope.$broadcast('noteAdded', note);
      });
    },
    isConnecting: function () {
       return connection.state === 0;
    },
    isConnected: function () {
       return connection.state === 1;
    },
    connectionState: function () {
       return connection.state;
    },
    addNote: function (note) {
       proxy.invoke('addNote', note);
    }
  }
}]);

First of all, you'll notice I am generating the proxy for the SignalR hub explicitly (the createHubProxy call). This is necessary to take control of when the proxy is created, and it replaces the need to have a <script src="signalr/hubs" type="text/javascript"></script> line on your main html page (or main layout page).
After starting the connection (connection.start()), I set up an event handler for the 'noteAdded' message that the SignalR hub will emit. I use AngularJS's $rootScope.$broadcast to relay the event to any other controllers and/or services that may be interested in it.
The service exposes 'addNote' as a facade over the proxy.invoke call for that method on the Hub class. The other functions exposed are convenience methods, mainly so I can do some basic testing of this service (more on that later).

So, my controller ends up looking like this. Explanation of the code follows:
mainApp.controller('mainController', 
function mainController ($scope, noteService) {
    noteService.connect();

    $scope.notes = [];

    $scope.$on('noteAdded', function (event, note) {
        $scope.notes.push(note);
        $scope.$apply();
    });

    $scope.addNote = function (note) {
        noteService.addNote(note);
        $scope.note = '';
    };
});

The noteworthy code here is in the $scope.$on handler for the 'noteAdded' event. Since we are handling a custom event, we need to explicitly call $scope.$apply() to ensure any changes we've made to the scope are communicated through the DOM.
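Conceptually, $rootScope.$broadcast and $scope.$on act like a tiny publish/subscribe registry. This is not Angular's actual implementation - just a minimal, framework-free sketch of the pattern, to show why the service and controller stay decoupled:

```javascript
// Minimal pub/sub sketch of the broadcast/on pattern used above.
// NOT Angular's implementation - just the core idea.
function EventBus() {
  this.handlers = {};
}

// Register a handler for a named event (like $scope.$on).
EventBus.prototype.$on = function (name, handler) {
  (this.handlers[name] = this.handlers[name] || []).push(handler);
};

// Fire all handlers for a named event (like $rootScope.$broadcast),
// passing (event, args) the way Angular does.
EventBus.prototype.$broadcast = function (name, payload) {
  (this.handlers[name] || []).forEach(function (handler) {
    handler({ name: name }, payload);
  });
};
```

The noteService broadcasts 'noteAdded' once; any number of controllers can subscribe without knowing SignalR exists.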
A word about testing
I use jasmine.js as well as Karma for my client-side Javascript testing. For my service test, I mainly wanted to make sure the various pieces needed to establish a connection were in place.
So, here's the example jasmine test:
/// <reference path="../apps/main/noteService.js" />
/// <reference path="../jasmine.js" />
'use strict';

describe('noteService Tests', function () {
    var noteSvc;

    beforeEach(module('mainApp'));
    beforeEach(inject(function (noteService) {
        noteSvc = noteService;
        noteSvc.connect();
    }));

    it('should attempt connection',function () {
        expect(noteSvc.isConnecting()).toBe(true);
    });

});

Friday, March 29, 2013

Arduino - Sending Serial Data

Playing with the Arduino Uno board. The language for Arduino "sketches" is essentially C/C++.

This sketch reads the value on analog pin 0 and writes the value to the serial port every 150 milliseconds.


const int SENSOR = 0;
const int LED = 13;

int val = 0;

void setup()
{
  Serial.begin(9600);
  pinMode(LED,OUTPUT);
  digitalWrite(LED,LOW);
}

void loop()
{
  val = analogRead(SENSOR);
  Serial.println(val);
  delay(150);
}
You then just need a client that reads the data and displays it in some clever way. The companion client language for Arduino is "Processing" - a Java-based language and environment that makes it easy to write graphical clients. There is also a companion project - Processing.js, originally written by John Resig - that cleverly compiles Processing code into JavaScript, so programs written in this language can be displayed on web pages without the need for a Java applet.
The following Processing script displays an eye graphic that changes pupil size based on the relative brightness of light sensed on the analog pin 0 (via a photoresistor).

// This sketch uses serial IO to communicate with the Arduino.
// The Arduino is writing out values from
// analog input 0 - which in this case has a photoresistor
// on it.
//
// The code auto-scales based on lowest and highest readings
// and draws a "pupil" in proportion to the amount of light
// sensed.
//
import processing.serial.*;

Serial myPort;
int minval=9999;
int maxval=0;
float r=0;
int a0;
PFont f;

void setup() {
  size(200, 200);
  myPort = new Serial(this,"/dev/cu.usbmodemfd121",9600);
  myPort.clear();
  f = createFont("Krungthep",20,true);
  smooth();
}

// Reads string between two delimiters in serial stream 
String readLine(Serial p,int firstDelim, int secondDelim) {
  StringBuilder sb = new StringBuilder();
  Boolean isFirstDelim = false;
  int c;
  while (true)
  {
    while ((c = p.read()) < 0);
    if (!isFirstDelim)
    {
      if (c != firstDelim)
        continue;
      isFirstDelim = true;
    }
    else
    if (c != secondDelim)
      sb.append((char)c);
    else
      break;
  }  
  return sb.toString();
}

void getReading()
{
  String line;
  if (myPort.available()>0) {
    line = readLine(myPort,10,13);
    if (line != null && line.length()>0)
      a0 = Integer.parseInt(line);
  }
}

void draw() {
  background(255);
  textFont(f);
  textAlign(CENTER);
  getReading();
  if (a0 < minval) minval = a0;
  if (a0 > maxval) maxval = a0;
  if (maxval==minval || abs(maxval-minval) <= 20)
      r=60;
  else
  {
      r = (float) abs(a0-minval) / (float) abs(maxval-minval);
      r *= 50;
      r += 10;
  }
    
  pushMatrix();
  translate(20,40);
  drawEye(r);
  popMatrix();
  
  fill(0,127,127);
  text("A0="+a0,width/2,height-25);
  
  if (mousePressed)
  {
    minval = 9999;
    maxval = 0;
  }
  
}

void drawEye(float pupilSize)
{
    int theta=0;
    pushStyle();
    stroke(0);
    noFill();
    bezier(0,40,30,-12,130,-12,160,40);
    bezier(0,40,30,92,130,92,160,40);
    pushMatrix();
    fill(#9B9DF7);
    ellipse(80,40,80,80);
    translate(80,40);
    for (int i=0;i<360;i+=(360/12))
    {
        rotate((2*PI)/12); 
        line(-40,0,40,0);
    }
    fill(0);
    ellipse(0,0,pupilSize,pupilSize);
    fill(255);
    ellipse(15,-15,8,8);
    popMatrix();
    popStyle();
}


Thursday, March 21, 2013

Javascript from C#: Jint, Jurassic

Just a quick code snippet. I was interested in executing JavaScript from C#. So far, the two clear winners for a framework to do this are Jint and Jurassic. Needless to say, the V8-based Javascript.Net project is dead and doesn't even try to pretend otherwise...
JScript .Net has been dead a long time - let's not even go there.

Jurassic takes a bit more effort to integrate existing .Net objects into scripts as parameters, but is perhaps faster.

Jint is the easiest to use and integrate (it takes objects directly as parameters) - and for quick DSL-like scripting using JavaScript, it wins my vote.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Jint;
using Jurassic;
using Newtonsoft.Json;

namespace RunJSInt
{
    public class Program
    {

        static void Main(string[] args)
        {
            JintExample();
            JurrasicExample();
            Console.ReadLine();
        }

        public class Developer
        {
            public string Name { get; set; }
            public string Title { get; set; }
        }

        public class DeveloperObjectInstance : Jurassic.Library.ObjectInstance
        {
            public DeveloperObjectInstance(ScriptEngine engine) : 
                base(engine)
            {

            }
            public void init(Developer dev)
            {
                this["Name"] = dev.Name;
                this["Title"] = dev.Title;
            }
        }

        private static void JurrasicExample()
        {
            var dev = new Developer { Name = "Mike", Title = "Developer" };
            var engine = new Jurassic.ScriptEngine();
            var obj = new DeveloperObjectInstance(engine);
            obj.init(dev);
            engine.Evaluate(@"function test(a) { 
                                  return 'Jurrasic says, hello ' + a.Name; 
                            }");
            Console.WriteLine(engine.CallGlobalFunction("test",obj));
        }


        static void JintExample()
        {
            var dev = new Developer { Name = "Mike", Title = "Developer" };
            JintEngine engine = new JintEngine();
            engine.SetParameter("message", dev);
            var result = engine.Run(@"
                return 'Jint says, hello ' + message.Name;");
            Console.WriteLine(result);
        }
    }
}