Tuesday, October 8, 2019

Introducing the new programming model of TransMock

A little history

Since its very early days, TransMock has relied on the well-established programming model of BizUnit. This was mainly because the few developers out there who had grasped the paradigm of automating the testing of their integrations in a fully fledged ALM/DevOps pipeline were already very familiar with this model. For them, adopting TransMock, which brought complete isolation from external dependencies and thus greater flexibility, was a no-brainer.
Over the years, however, as new capabilities were introduced to TransMock, this programming model started to show certain weaknesses. Three in particular top the chart:
  • The execution model of BizUnit - all the steps are added to a list of steps and then the test case execution is performed in the context of the Run() method. This imposes big challenges, most notably the inability to debug the tests.
  • The validation model of BizUnit - based on introducing validation step classes which encapsulate the specific validation logic. This was partially addressed by introducing the LambdaValidationStep class in TransMock 1.3, which allows the programmer to perform in-place validation without this complex OO approach, by simply defining a validation callback method that is invoked at the desired point of the test case execution.
  • The code size of the tests - due to the way the steps were defined and validation steps added, the code for even a slightly more complicated scenario quickly blows out of manageable proportions.
All this has led to the creation of a completely new programming model that addresses many of the shortcomings of BizUnit. It is available from v1.5 of the TransMock framework and will be the main interface to it going forward.

Fundamental idea

The fundamental idea behind the new programming model is to emphasize the message exchange between the test harness and the tested service in a neat way, while putting the entire control of the test execution in the hands of the developer. All this is backed by a more modern syntax, following the well-known approach for creating object mocks from other well-established mocking frameworks out there.

On a very high level, the new programming model can be described as two main objects that interact with each other by exchanging messages of type MockMessage. One object represents the mocked endpoints of the actual service/integration that is being tested, and the other is a messaging client that can only send and receive messages to/from the mocked endpoints in the first object. Verification of messages received from the service/integration is performed in the place where the messages are actually received. This is a fundamental difference from how verification was performed in BizUnit.

The rest of the components in TransMock are intact - the mock adapter is still used to enable communication with the test cases, and the mockifier is still used the same way to mockify the bindings and to produce the mocked addresses helper class. One slight difference is that the mocked addresses helper class the mockifier generates is now strongly typed, meaning that a design-time type check is performed in the message exchange methods of the messaging client object, based on the corresponding endpoints' messaging patterns.


For the new programming model it was deliberately chosen to utilize a fluent syntax based on the principles of functional programming. This allows developers to really focus on the logic of the tests right where they author them.

The two main classes in the new programming model are called EndpointsMock<TAddresses> and TestMessagingClient<TAddresses>, respectively. The EndpointsMock class represents the set of mocked endpoints of a service/integration that will be tested. TAddresses is a class of type EndpointAddress - the *MockAddresses helper class generated by the mockifier now inherits from this class, bringing a whole lot of new capabilities for type checks. The EndpointsMock class defines a set of Setup methods, depending on the type of the message exchange pattern implemented by a given endpoint. This is exactly as with classical mocking frameworks - they allow you to set up mocked behaviour of methods, properties and other class members.

The available setup methods in the EndpointsMock class are:

  • SetupSend - sets up a one-way send endpoint
  • SetupReceive - sets up a one-way receive endpoint
  • SetupSendRequestAndReceiveResponse - sets up a two-way send endpoint, one that is sending a request and receiving a response
  • SetupReceiveRequestAndSendResponse - sets up a two-way receive endpoint, one that is receiving a request and sending a response
Common to all these methods is that they operate against an instance of the defined TAddresses type through lambda expressions. This allows for design-time type checks against the type of the endpoint, which ensures that one can never set up a receive endpoint mock against a send endpoint or vice versa! This is a very big improvement, as such a setup was totally possible with the old, BizUnit based programming model, which was one of the main challenges for newcomers to TransMock.

A quick demo of creating an instance of the EndpointsMock class:

/// Assumes the MyService_MockAddresses class has a single one-way receive
/// and a single one-way send endpoint
  var serviceMock = new EndpointsMock<MyService_MockAddresses>();
  serviceMock.SetupReceive(ep => ep.ReceivePO_FILE)
      .SetupSend(ep => ep.SendWMSRequest_MQSeries);

The TestMessagingClient<TAddresses> is responsible for sending and receiving messages to/from the mocked endpoints. It cannot be instantiated directly, but only through a factory method on an instance of the EndpointsMock class. This ensures that the client will only ever exchange messages with the intended service/integration represented by the endpoints mock instance.

The TestMessagingClient class has the following methods that allow for exchanging messages with the mocked service/integration:
  • Send - sends a message to a receive mocked endpoint
  • Receive - receives a message from a send mocked endpoint
  • SendRequestAndReceiveResponse - sends a request and waits to receive a response from a 2-way receive mocked endpoint
  • ReceiveRequestAndSendResponse - receives a request and sends a response to a 2-way send mocked endpoint
Here is a quick demo on how an instance of the TestMessagingClient is created and utilized:
/// Continuing from the example above

var testClient = serviceMock.CreateMessagingClient();
testClient.Send(rp => rp.ReceivePO_FILE, "TestPO.xml")
    .Receive(sp => sp.SendWMSRequest_MQSeries);


The above three lines of code effectively test the entire service from end to end! One can chain the methods as in any other modern fluent syntax framework, and the execution follows this very same path. No Run() methods, no hidden magic!
In addition to the four send and receive methods, the TestMessagingClient class defines a method called InParallel() which allows for executing a set of defined Send/Receive operations from the same test messaging client instance in parallel. This comes in very handy in situations where the tested service/integration does some complex parallel processing.

Authoring tests with the new programming model

Enough with the theory - time for a demonstration of how to utilize the new programming model. For the sake of simplicity we will use a very basic case:

  1. A two-way WCF-WebHttp receive location receives a call from a mobile app for an order placement
  2. The service invokes an order validation WCF-BasicHttp service on an order system
  3. After successful validation the order is placed on an MSMQ queue and a response to the calling app is supplied simultaneously.
A few assumptions about the setup, in line with previous examples:
  • Application is named MobileOrderPlacement
  • BTDF is used for deployment and the reader is familiar with how to set it up to work with the TransMock targets for generating the *MockAddresses class
  • Bindings are prepared for mocking of the endpoints
  • The endpoints are named as follows:
    • ReceiveRestOrder_WCFWebHttp - the WebHttp 2-way receive location exposing the REST service for order reception
    • SendValidateOrder_WCFBasicHttp - the 2-way send port for invoking the order validation web service
    • SendOrderRequest_MSMQ - the 1-way send port for placing the order request on a queue
The mockifier produces a class called MobileOrderPlacementMockAddresses which has the following properties:
  • ReceiveRestOrder_WCFWebHttp of type TwoWayReceiveAddress
  • SendValidateOrder_WCFBasicHttp of type TwoWaySendAddress
  • SendOrderRequest_MSMQ of type OneWaySendAddress
An important prerequisite for producing the correct variant of the *MockAddresses helper class is to set the following property in your *.btdfproj file with the given value:


This was introduced in order to maintain backward compatibility when generating the *MockAddresses class, as the new style of this class is considered a breaking change. The default behaviour of the mockifier when invoked from BTDF is to generate a *MockAddresses class with string properties only, which is referred to as Legacy.

First we create an instance of the EndpointsMock class and set up the mocked endpoints:

var serviceMock = new EndpointsMock<MobileOrderPlacementMockAddresses>();
serviceMock.SetupReceiveRequestAndSendResponse(ep => ep.ReceiveRestOrder_WCFWebHttp)
    .SetupSendRequestAndReceiveResponse(ep => ep.SendValidateOrder_WCFBasicHttp)
    .SetupSend(ep => ep.SendOrderRequest_MSMQ);

Then we define the actual test flow execution through an instance of the TestMessagingClient. Here it is important to note the usage of the InParallel() method of this class. It is required in this case, as we have a synchronous request-response operation that initiates the service and waits for a response while the rest of the flow is being executed.

var testClient = serviceMock.CreateMessagingClient();
testClient.InParallel(
        (tc) => tc.ReceiveRequestAndSendResponse(
            sp => sp.SendValidateOrder_WCFBasicHttp,
            responseSelector: rs => new StaticFileResponseSelector()
            {
                FilePath = "OrderApprovedResponse.xml"
            },
            requestValidator: rv => VerifyIncomingOrder(rv)),
        (tc) => tc.Receive(sp => sp.SendOrderRequest_MSMQ,
            requestValidator: rv => VerifyOrderRequest(rv)))
    .SendRequestAndReceiveResponse(rp => rp.ReceiveRestOrder_WCFWebHttp,
        responseValidator: rv => VerifyOrderResponse(rv))
    .VerifyParallel();
Note the usage of the helper class StaticFileResponseSelector at the place where a response for the order validation service is assigned. This class does what its name suggests - it selects a response from a static file.

Note as well the last method invoked in the chain - VerifyParallel(). This method is required when performing operations in parallel, in order to ensure that any operation that was started in parallel either completes, or fails and has its exception re-thrown in the main execution thread.

And finally the verification methods are defined as follows:

private bool VerifyIncomingOrder(ValidatableMessageReception v)
{
    Assert.IsTrue(v.Message.Body.Length > 0, "The incoming order request message is empty");

    var xDoc = XDocument.Load(v.Message.BodyStream);
    Assert.IsTrue(
        xDoc.Root.Name.LocalName == "ValidateOrderRequest",
        "The contents of the order validation request is not as expected");

    return true;
}

private bool VerifyOrderRequest(IndexedMessageReception v)
{
    Assert.IsTrue(v.Message.Body.Length > 0, "The request message is empty");

    var xDoc = XDocument.Load(v.Message.BodyStream);
    Assert.IsTrue(
        xDoc.Root.Name.LocalName == "OrderRequest",
        "The contents of the order request is not as expected");

    return true;
}

private bool VerifyOrderResponse(IndexedMessageReception v)
{
    Assert.IsTrue(v.Message.Body.Length > 0, "The order response message is empty");

    // OrderResponse is a pre-defined entity type corresponding to
    // the OrderResponse JSON message, deserialized here with Json.NET
    var orderResponse = JsonConvert
        .DeserializeObject<OrderResponse>(v.Message.Body);
    Assert.IsNotNull(orderResponse,
        "The order response was not as expected!");

    return true;
}
That is all it takes to create a test with the new TransMock programming model! Compact, neat syntax empowering you both to control the flow of execution and to verify the outcome of each and every message reception from the tested service/integration right there and then! And if you get stuck somewhere, you simply put a breakpoint and debug the test to really see what is going on and why it keeps failing - something that is still very difficult to do with BizUnit based tests.

Tuesday, June 25, 2019

How to create a custom image in DevTest labs from a stand-alone VM in Azure?

DevTest labs in Azure is a beast of its own. VMs in the Microsoft cloud are the foundation of its computing capability, yet they are somehow treated differently when under the DevTest labs hood. And there are perhaps about a zillion (good) reasons for it to be like that. Yet imagine a situation where you have a stand-alone VM that is specced with loads of good stuff, and you just do not want to go through the whole process of building it from scratch on top of a base image in a DevTest labs machine. In fact, you want all the juice available right in the lab, with a single click of a button, please!
And this is unfortunately not supported in an easy and lean manner through the Portal, PowerShell or the CLI, compared to many other offerings in Azure. All the articles about VMs and DevTest labs talk about them either stand-alone or as part of a lab - as if the two are never to be mixed together, a sort of digital cloud anathema if you wish! Yet both VM types are based on the very same technology. So is it really that impossible to move a stand-alone VM under a DevTest lab?
The answer is no - it is far from impossible, though it is a bit more demanding. If one follows the documentation slavishly, one will unfortunately get nowhere with such a task. However, if one starts to connect the few vague red dots, it suddenly becomes apparent that it is actually doable, and with all the available tooling indeed.

What I am about to explain below is a way to create a custom DevTest lab image from an existing stand-alone VM with a single disk. I personally think there is little point in moving a single VM to a DevTest lab, as there is little upside to it - in that case, you'd better stick to the stand-alone VM. With a custom lab image, however, you have the ability to quickly provision multiple VMs from the same source, reaping all the management benefits DevTest labs can pull out of its sleeve.
As for the case with data disks I will explore this in a later post.

Here are the instructions. But before we set off, one important prerequisite - all the resources involved below are within the same subscription!

  1. First follow the instructions on generalizing the VM as described here. If you do it with the CLI/PowerShell, just make sure you do not run the last commands for creating a new image as per the examples in the article. If you do, you will end up with an image outside the DevTest lab premises.
  2. Once this is done, the VM will be stopped, deallocated and marked as generalized. Now here is where the trick comes in - you need to copy the VM disk's underlying VHD to the default storage account of the DevTest lab. That makes sense, right? Sure! But where is the path to the VHD? In the properties of the VM, you may say? Or those of the OS disk, perhaps? Sorry Mack, these goodies are long gone! The VHD path is well hidden, primarily, I reckon, for security reasons. Which is a good thing for everybody, right? So how do we get hold of the VHD, then, you may ask? There is an option in the stand-alone VM's disk blade for downloading the VHD. Select this option and you will be presented with a text field and a Generate URL button:

    Please set the value for the export time validity to a reasonably high value. The default is 3600 secs, which is 1 hour. I suggest you roughly estimate 100 GB/hour, calculate based on the disk's size, and add some 10-15 mins (600-900 secs) as a buffer. Once the calculated time value has been keyed in, click the Generate URL button. This will render the link in the large grayed-out field at the top of the view, as shown below:

    Copy the URL.
  3. Now it is time to figure out the default storage account of your DevTest lab. This one is pretty well hidden too, and it is nowhere to be found in the portal. Here our friend the Azure CLI comes in handy. Open the console in the portal and type the following command:

    az lab get --name '<your lab name>' -g '<your resource group name>' \
        --query 'defaultStorageAccount'

    This command will spit out the resource ID path of the default storage account of your DevTest lab. Look at the last part of this URI and you will see the name of the storage account.
  4. Navigate to the storage account from within the portal, or through the CLI if you prefer, and note down two things:
    • The URL of the account
    • One of the access keys from the properties view in the storage account blade.

    You will need those in the next step. You may also create a container where you would like to store your image VHD file. I believe an uploads container is created by default upon DevTest lab creation, so you can simply use that one.
  5. Now all is set up and we can start the data copy process. For this we shall be using the Azure CLI in the portal again. Type the following command:

    azcopy
    --source '<the download disk link from step 2>' --destination '<the
    link to the DevTest lab storage account blob container/yourimagename.vhd>'
    --dest-key '<the key for accessing the storage account>'

    Note the single quotes - they are required, especially for the first link, as it contains a few & chars which would render the command useless when you hit Enter if not wrapped in quotes. Replace yourimagename in the last part above with whatever name you fancy.
  6. Well, that's all it takes! Once the VHD file is uploaded to the DevTest labs container, you simply follow the instructions on creating a custom lab image from a generalized VHD file, which can be found here.
  7. And finally - once the image is created, the last thing that remains is to spin up a VM instance or two from that image. A detailed description can be found here.

  8. Obviously, for the last two steps above you can use the Azure CLI instead.
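
For reference, the CLI portion of the walkthrough above can be sketched end to end as a single script. This is a minimal, hedged sketch: it assumes the 2019-era Azure CLI with the DevTest labs `az lab` commands available and the classic azcopy syntax used above, and it uses `az disk grant-access` to generate the time-limited download link instead of the portal's Generate URL button. The resource names, the uploads container and the 128 GB disk size are placeholder values, not values from this post:

```shell
#!/bin/bash
# Placeholders - replace with your own values
RG='my-resource-group'
VM='my-standalone-vm'
LAB='my-devtest-lab'
DISK='my-vm-osdisk'

# Step 1-2: deallocate and mark the VM as generalized (after sysprep/waagent)
az vm deallocate --resource-group "$RG" --name "$VM"
az vm generalize --resource-group "$RG" --name "$VM"

# Estimate the link validity: roughly 100 GB/hour plus a 15 min buffer
DISK_GB=128
EXPIRY_SECS=$(( DISK_GB * 3600 / 100 + 900 ))

# Generate the time-limited SAS download link for the managed disk
SAS_URL=$(az disk grant-access --resource-group "$RG" --name "$DISK" \
    --duration-in-seconds "$EXPIRY_SECS" --query 'accessSas' -o tsv)

# Step 3: find the lab's default storage account and one of its keys
STORAGE_ID=$(az lab get --name "$LAB" -g "$RG" \
    --query 'defaultStorageAccount' -o tsv)
STORAGE_NAME="${STORAGE_ID##*/}"   # last segment of the resource ID
STORAGE_KEY=$(az storage account keys list -g "$RG" \
    --account-name "$STORAGE_NAME" --query '[0].value' -o tsv)

# Step 5: copy the VHD into the lab's uploads container
azcopy \
    --source "$SAS_URL" \
    --destination "https://$STORAGE_NAME.blob.core.windows.net/uploads/myimage.vhd" \
    --dest-key "$STORAGE_KEY"

# Then follow the linked instructions to create the custom lab image
# from the uploaded, generalized VHD.
```

Note how the validity estimate follows the 100 GB/hour rule of thumb from step 2: a 128 GB disk, for example, yields 128 * 36 + 900 = 5508 seconds.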