Tuesday, October 8, 2019

Introducing the new programming model of TransMock

A little history

Since its very early days, TransMock has relied explicitly on the well-established programming model of BizUnit. This was mainly because the few developers out there who had grasped the paradigm of automating the testing of their integrations in a fully fledged ALM/DevOps pipeline were already very familiar with this model. For them, adopting TransMock, which brought complete isolation from external dependencies and thus greater flexibility, was a no-brainer.
But over the years, as new capabilities were introduced to TransMock, this programming model started to show certain weaknesses. Three in particular top the chart:
  • The execution model of BizUnit - all the steps are added to a list and then the test case is executed in the context of the Run() method. This imposes big challenges, most notably the inability to debug the tests.
  • The validation model of BizUnit - based on introducing validation step classes which encapsulate the specific validation logic. This was partially addressed by introducing the LambdaValidationStep class in TransMock 1.3, which allows the programmer to perform in-place validation without this complex OO approach, by simply defining a validation method callback that is invoked at the desired point of test case execution.
  • The code size of the tests - due to the way the steps were defined and validation steps were added, the code for a slightly more complicated scenario quickly blows out of manageable proportions.
All this has led to the creation of a completely new programming model that addresses many of the shortcomings of BizUnit. It is available from v1.5 of the TransMock framework and will be the main interface to it going forward.

Fundamental idea

The fundamental idea behind the new programming model is to emphasize the message exchange between the test harness and the tested service in a neat way, while placing the entire control of test execution in the hands of the developer. And all this is backed by a more modern syntax, following a well-known approach for creating object mocks from other well-established mocking frameworks out there.

On a very high level, the new programming model can be described as two main objects that interact with each other by exchanging messages of type MockMessage. One object represents the mocked endpoints of the actual service/integration being tested, and the other is a messaging client that can only send and receive messages to/from the mocked endpoints in the first object. Verification of messages received from the service/integration is performed in the place where the messages are actually received. This is a fundamental difference from how verification was performed in BizUnit.

The rest of the components in TransMock are intact - the mock adapter is still used to enable communication with the test cases, and the mockifier is still used the same way to mockify the bindings and to produce the mocked addresses helper class. One slight difference is that the mocked addresses helper class the mockifier generates is now strongly typed, meaning that a design-time type check is performed in the message exchange methods of the messaging client object, based on the corresponding endpoints' messaging patterns.


For the new programming model it was a deliberate choice to utilize a fluent syntax based on the principles of functional programming. This allows developers to really focus on the logic of the tests in the place where they author them.

The two main classes in the new programming model are called EndpointsMock&lt;TAddresses&gt; and TestMessagingClient&lt;TAddresses&gt; respectively. The EndpointsMock class represents the set of mocked endpoints of a service/integration that will be tested. TAddresses is a class of type EndpointAddress. The *MockAddresses helper class generated by the mockifier now inherits from this class, bringing a whole lot of new capabilities for type checks. The EndpointsMock class defines a set of Setup methods depending on the type of message exchange pattern implemented by a given endpoint. This is exactly as with classical mocking frameworks - they allow you to set up mocked behaviour of methods, properties and other class members.

The available setup methods in the EndpointsMock class are:

  • SetupSend - sets up a one-way send endpoint
  • SetupReceive - sets up a one-way receive endpoint
  • SetupSendRequestAndReceiveResponse - sets up a two-way send endpoint, one that sends a request and receives a response
  • SetupReceiveRequestAndSendResponse - sets up a two-way receive endpoint, one that receives a request and sends a response
Common to all these methods is that they all operate against an instance of the defined TAddresses type through lambda expressions. This allows for design-time type checks against the type of the endpoint, which ensures that one can never set up a receive endpoint mock against a send endpoint, or vice versa! This is a very big improvement, as such a setup was entirely possible with the old, BizUnit-based programming model, which was one of the main challenges for beginners with TransMock.

Quick demo on creating instance of the EndpointsMock class:

// Assumes the MyService_MockAddresses class has a single one-way receive
// and a single one-way send endpoint
var serviceMock = new EndpointsMock<MyService_MockAddresses>();
serviceMock.SetupReceive(ep => ep.ReceivePO_FILE)
    .SetupSend(ep => ep.SendWMSRequest_MQSeries);

The TestMessagingClient&lt;TAddresses&gt; is responsible for sending and receiving messages to/from the mocked endpoints. It cannot be instantiated directly, but only through a factory method on an instance of the EndpointsMock class. This way it is ensured that the client will only ever exchange messages with the intended service/integration represented by the endpoints mock instance.

The TestMessagingClient class has the following methods that allow for exchanging messages with the mocked service/integration:
  • Send - sends a message to a mocked receive endpoint
  • Receive - receives a message from a mocked send endpoint
  • SendRequestAndReceiveResponse - sends a request and waits to receive a response from a mocked 2-way receive endpoint
  • ReceiveRequestAndSendResponse - receives a request and sends a response to a mocked 2-way send endpoint
Here is a quick demo of how an instance of the TestMessagingClient is created and utilized:

// Continuing from the example above

var testClient = serviceMock.CreateMessagingClient();
testClient.Send(rp => rp.ReceivePO_FILE, "TestPO.xml")
    .Receive(sp => sp.SendWMSRequest_MQSeries);


The above 3 lines of code effectively test the entire service from end to end! One can chain the methods as in any other modern fluent syntax framework, and the execution follows this very same path. No Run() methods, no hidden magic!
In addition to the four send and receive methods, the TestMessagingClient class defines a method called InParallel() which allows for executing a set of Send/Receive operations from the same test messaging client instance in a parallel manner. This comes in very handy in situations where the tested service/integration does some complex parallel processing.
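As a minimal sketch (assuming the same mocked one-way endpoints as in the earlier demo), the operations to run concurrently are passed to InParallel() as lambdas over the test client, with the companion VerifyParallel() method closing the chain:

```csharp
// Minimal sketch, assuming the ReceivePO_FILE and SendWMSRequest_MQSeries
// endpoints from the earlier demo. The Receive operation runs in parallel
// while the main chain sends the test message to the service.
var testClient = serviceMock.CreateMessagingClient();
testClient.InParallel(
        (tc) => tc.Receive(sp => sp.SendWMSRequest_MQSeries))
    .Send(rp => rp.ReceivePO_FILE, "TestPO.xml")
    .VerifyParallel();
```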

Authoring tests with the new programming model

Enough with the theory, time for a demonstration of how to utilize the new programming model. For the sake of simplicity we will use a very basic case:

  1. A two-way WCF-WebHttp receive location receives a call from a mobile app for an order placement
  2. The service invokes an order validation WCF-BasicHttp service on an order system
  3. After successful validation, the order is placed on an MSMQ queue and a response to the calling app is supplied simultaneously.
A few assumptions about the setup, in line with previous examples:
  • Application is named MobileOrderPlacement
  • BTDF is used for deployment and the reader is familiar with how to set it up to work with the TransMock targets for generating the *MockAddresses class
  • Bindings are prepared for mocking of the endpoints
  • The endpoints are named as follows:
    • ReceiveRestOrder_WCFWebHttp - the WebHttp 2-way receive location exposing the REST service for order reception
    • SendValidateOrder_WCFBasicHttp - the 2-way send port for invoking the order validation web service
    • SendOrderRequest_MSMQ - the 1-way send port for placing the order request on a queue
The mockifier produces a class called MobileOrderPlacementMockAddresses which has the following properties:
  • ReceiveRestOrder_WCFWebHttp of type TwoWayReceiveAddress
  • SendValidateOrder_WCFBasicHttp of type TwoWaySendAddress
  • SendOrderRequest_MSMQ of type OneWaySendAddress
An important prerequisite for producing the correct variant of the *MockAddresses helper class is to set the following property in your *.btdfproj file with the given value:


This was introduced in order to maintain backward compatibility when generating the *MockAddresses class, as the new style of this class is considered a breaking change. The default behaviour of the mockifier when invoked from BTDF is to generate a *MockAddresses class with string properties only, which is referred to as Legacy.

First we create an instance of the EndpointsMock class and setup the mocked endpoints:

var serviceMock = new EndpointsMock<MobileOrderPlacementMockAddresses>();
serviceMock.SetupReceiveRequestAndSendResponse(ep => ep.ReceiveRestOrder_WCFWebHttp)
    .SetupSendRequestAndReceiveResponse(ep => ep.SendValidateOrder_WCFBasicHttp)
    .SetupSend(ep => ep.SendOrderRequest_MSMQ);

Then we define the actual test flow execution through the instance of the TestMessagingClient. Here it is important to note the usage of the InParallel() method of this class. It is required in this case because we have a synchronous request-response operation that initiates the service and waits for a response while the rest of the flow is being executed.

var testClient = serviceMock.CreateMessagingClient();

testClient.InParallel(
        (tc) => tc.ReceiveRequestAndSendResponse(
            sp => sp.SendValidateOrder_WCFBasicHttp,
            responseSelector: rs => new StaticFileResponseSelector()
            {
                FilePath = "OrderApprovedResponse.xml"
            },
            requestValidator: rv => VerifyIncomingOrder(rv)),
        (tc) => tc.Receive(sp => sp.SendOrderRequest_MSMQ,
            requestValidator: rv => VerifyOrderRequest(rv)))
    .SendRequestAndReceiveResponse(rp => rp.ReceiveRestOrder_WCFWebHttp,
        responseValidator: rv => VerifyOrderResponse(rv))
    .VerifyParallel();

Note the usage of the helper class StaticFileResponseSelector in the place where a response is assigned for the order validation service. This class does what its name suggests - it selects a response from a static file.

Note as well the last method invoked in the chain - VerifyParallel(). This method is required when performing operations in parallel in order to ensure that any operation that was started in parallel either completes or fails, with the exception re-thrown in the main execution thread.

And finally the verification methods are defined as follows:

private bool VerifyIncomingOrder(ValidatableMessageReception v)
{
    Assert.IsTrue(v.Message.Body.Length > 0,
        "The incoming order request message is empty");

    var xDoc = XDocument.Load(v.Message.BodyStream);
    Assert.IsTrue(
        xDoc.Root.Name.LocalName == "ValidateOrderRequest",
        "The contents of the order validation request is not as expected");

    return true;
}

private bool VerifyOrderRequest(IndexedMessageReception v)
{
    Assert.IsTrue(v.Message.Body.Length > 0, "The request message is empty");

    var xDoc = XDocument.Load(v.Message.BodyStream);
    Assert.IsTrue(
        xDoc.Root.Name.LocalName == "OrderRequest",
        "The contents of the order request is not as expected");

    return true;
}

private bool VerifyOrderResponse(IndexedMessageReception v)
{
    Assert.IsTrue(v.Message.Body.Length > 0, "The order response message is empty");

    // OrderResponse is a pre-defined entity type corresponding to
    // the OrderResponse JSON message
    var orderResponse = JsonConvert
        .DeserializeObject<OrderResponse>(v.Message.BodyString);
    Assert.IsNotNull(orderResponse,
        "The order response was not as expected!");

    return true;
}

This is all it takes to create a test with the new TransMock programming model! Compact and neat syntax empowering you to both control the flow of execution and verify the outcome of each and every message reception from the tested service/integration right there and then! And if you get stuck somewhere, you simply put a breakpoint and debug the test to really see what is going on and why it keeps failing - something that is still very difficult to do with BizUnit-based tests.

Tuesday, June 25, 2019

How to create a custom image in DevTest labs from a stand-alone VM in Azure?

DevTest labs in Azure is a beast of its own. VMs in the Microsoft cloud are the foundation of its computing capability, yet they are somehow treated differently when under the DevTest labs hood. And there are perhaps a zillion (good) reasons for that. Yet imagine a situation where you have a stand-alone VM that is specced with loads of good stuff, and you just do not want to go through the whole process of building it from scratch on top of a base image in a DevTest labs machine. In fact, you want all the juice available right in the lab, with a single click of a button, please!
And this is unfortunately not supported in an easy and lean manner through the Portal, PowerShell or the CLI, compared to many other offerings in Azure. All the articles about VMs and DevTest labs talk about them either stand-alone or as part of a lab - as if they are never to be mixed together, a sort of digital cloud anathema if you wish! Yet both VM types are based on the very same technology. So is it really that impossible to move a stand-alone VM under a DevTest lab?
The answer is no, it is far from impossible, though it is a bit more demanding. If one follows the documentation slavishly, one will unfortunately get nowhere with such a task. However, if one starts to connect the few vague red dots, it suddenly becomes apparent that it is actually doable, and with all the available tooling indeed.

What I am about to explain below is a way to create a custom DevTest lab image from an existing stand-alone VM with a single disk. I personally think there is little point in moving a single VM to a DevTest lab, as there is little upside to it. In that case you'd better stick to the stand-alone VM. With a custom lab image, however, you have the ability to quickly provision multiple VMs from the same source, taking all the management benefits that DevTest labs can pull out of its sleeves.
As for the case with data disks I will explore this in a later post.

Here are the instructions. But before we set off, one important prerequisite - all the resources involved below must be within the same subscription!

  1. First follow the instructions on generalizing the VM as described here. If you do it with the CLI/PowerShell, just make sure you do not run the last commands for creating a new image as per the examples in the article. If you do, you will end up with an image outside the DevTest lab premises.
  2. Once this is done, the VM will be stopped, deallocated and marked as generalized. Now here is where the trick comes in - you need to copy the underlying VHD of the VM's disk to the default storage account of the DevTest lab. That makes sense, right? Sure! But where is the path to the VHD? In the properties of the VM, you may say? Or those of the OS disk perhaps? Sorry Mack, these goodies are long gone! The VHD path is well hidden, primarily, I reckon, for security reasons. Which is a good thing for everybody, right? So how do we get hold of the VHD then, you may ask? There is an option in the stand-alone VM's disk blade for downloading the VHD. Select this option and you will be presented with a text field and a Generate link button:

    Please set the value for export time validity to a reasonably high value. The default is 3600 secs, which is 1 hour. I suggest you roughly estimate 100 GB/hour and then calculate based on the disk's size, adding some 10-15 mins (600-900 secs) as a buffer too. Once the calculated time value has been keyed in, click on the Generate URL button. This will render the link in the large grayed-out field at the top of the view as shown below:

    Copy the URL.
  3. Now it is time to figure out the default storage account for your DevTest lab. This one is pretty well hidden too, and it is nowhere to be found in the portal. Here our friend the Azure CLI comes in handy. Open the console in the portal and type the following command:

    az lab get --name '<your lab name>' -g '<your resource group name>' \
        --query 'defaultStorageAccount'

    This command will spit out the resource Id path to the default storage account of your DevTest lab. Look at the last part of this URI and you will see what the name of the storage account is.
  4. Navigate to the storage account from within the portal, or through CLI if you prefer and note down 2 things for it:
    • The URL to the account
    • One of the access keys from the properties view in the storage account blade.

    You will need those in the next step. You may also create a container where you would like to store your image VHD file. I believe an uploads container is created by default upon DevTest lab creation, so you can simply use that one.
  5. Now all is set up and we can start the data copy process. For this we shall be using the Azure CLI in the portal again. Type the following command:

     azcopy
     --source '<the download disk link from step 2>' --destination '<the
     link to the DevTest lab storage account blob container/yourimagename.vhd>'
     --dest-key '<the key for accessing the storage account>'

     Note the single quotes - they are required, especially for the first link, as it contains a few & characters which render the command useless when you hit Enter if not wrapped in quotes. Replace yourimagename in the last part above with whatever name you fancy.
  6. Well, that's all it takes! Once the VHD file is uploaded to the DevTest labs container you simply follow the instructions on creating a custom lab image from a generalized VHD file, which can be found here.
  7. And finally - once the image is created, the last thing that remains is to spin up a VM instance or 2 from that image. A detailed description can be found here.

  8. Obviously, for the last 2 steps above you can use the Azure CLI instead.

Sunday, August 19, 2018

New features in TransMock 1.4

In the latest version 1.4 of TransMock, several new features have been introduced that will come in handy for creating even more useful tests.

The most important enhancements are:
  • new message based abstraction for communicating with the mock adapter
  • the ability to dynamically set the response in a MockRequestResponseStep
  • the ability to promote properties on a per-message base when sending messages to BizTalk
  • the ability to validate promoted properties in messages received from BizTalk

The MockMessage type

The new MockMessage class is the new way a message is conveyed to and from the mock adapter. The developer generally does not use it directly, as the Mock* BizUnit steps take care of the details. However, this class will be used directly during validation in receive steps. It comes with several handy properties that ease the validation logic. These are:

  • BodyString - the message body as a string
  • BodyStringBase64 - the message body as a base64 encoded string
  • BodyStream - the message body as a raw byte stream
  • Encoding - the actual encoding of the message

The LambdaValidationStep has been equipped with a new callback property called MessageValidationCallback. This is intended for assigning validation methods which accept the MockMessage instance that will be validated.
For example, validating a message with XML contents is performed as shown below:

var receiveValidationStep = new LambdaValidationStep()
{
    MessageValidationCallback = (message) => ValidateMessage(message)
};

private bool ValidateMessage(MockMessage message)
{
    var xMessage = XDocument.Load(message.BodyStream);
    // Now it is up to the implementer to perform the validation logic
    // on the XDocument object representing the message contents
    return true;
}

Dynamically setting the response

This feature gives a developer the ability to set the response content in a MockRequestResponseStep instance dynamically. This comes in handy in a number of scenarios, but is especially useful in a de-batch scenario where one step instance handles multiple requests and it is desirable to serve different responses for the different requests. Until now this was not possible. It is achieved in a really convenient way by introducing a new callback property to the MockRequestResponseStep class called ResponseSelector. This property accepts a method with 2 parameters - a MockMessage instance (a new type in this version) and an integer representing the index of the request in a de-batch scenario. The method should return a string with the path to the file from which the contents of the response is to be taken. This gives the developer complete freedom to decide how a particular response shall be chosen. Two typical scenarios here are:
  • index based response - the file path for the response is chosen based on the index of the request message
  • content based response - the request message contents is inspected and based on a particular rule against certain value/s the desired response is returned
Here is an example of the index based response selector:

public void MyTest()
{
    var reqRespStep = new MockRequestResponseStep()
    {
        DebatchCount = 5,
        ResponseSelector = (mockMessage, index) =>
            SelectEvenOddResponse(mockMessage, index)
    };
}

private string SelectEvenOddResponse(MockMessage mockMessage, int index)
{
    if (index % 2 == 0)
        return "EvenResponse.xml";
    else
        return "OddResponse.xml";
}

As one can see, the snippet above implements a response selector that returns the path to the OddResponse.xml file when the index is an odd number and to EvenResponse.xml for even indexes. The file name alone is enough here, as it is specified as a DeploymentItem for the test method and will resolve correctly during test execution.
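The content-based variant can be sketched along the same lines. This is an illustrative sketch only - the OrderType element name and the response file names are assumptions for the example, not part of TransMock:

```csharp
// Illustrative content-based response selector (assumed element and file names).
// The request body is exposed through MockMessage.BodyString and can be
// parsed with XDocument for inspection.
private string SelectResponseByContent(MockMessage mockMessage, int index)
{
    var xDoc = XDocument.Parse(mockMessage.BodyString);
    var orderType = xDoc.Root.Element("OrderType")?.Value;

    return orderType == "Express"
        ? "ExpressOrderResponse.xml"
        : "StandardOrderResponse.xml";
}
```

Assigning it to the step works the same way as in the snippet above, via the ResponseSelector property.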

Promoting/Validating properties

Here we are talking not only about adapter properties, but about any context property that is recognizable by BizTalk Server!

How can this be of help to you and your test cases, one may ask? First of all, one will be able to mimic more precisely the behaviour of the real adapters used in your integrations. The WebHttp adapter is a very good example: it allows you to promote custom properties to the message context when a GET request is received in a RESTful receive location, or to demote such properties in a send port and assign them to designated parts of the path and query in the URL.
Until now one had to do some tricks in order to achieve this sort of behaviour. For example, I have usually ended up creating a new message schema with its elements mapped to the various custom properties to be promoted. But this is not enough - a new receive location with the XmlReceive pipeline was also required, as the solution did not otherwise need to parse the incoming request body because all the services were GET ones. As you can see, this is not good practice, as new artifacts are introduced solely for the purpose of testing. This definitely helped me test the solution thoroughly, and as a result no errors were detected further down the life cycle path, but still those artifacts are part of the solution and get deployed to all environments.

In other circumstances you would perhaps need a specific adapter or system property promoted in your message context in order for your solution to work as expected. The new feature allows you to promote more than 200 such context properties in a very convenient way.

The promotion itself is performed in a very simple way - a new property called Properties has been introduced in the 2 mock send step types, MockSendStep and MockSolicitResponseStep. It is a dictionary with string keys and string values. The key identifies the property to be promoted in the context of the message once it is received in BizTalk through the mock adapter.

Promoting custom properties

For promoting custom properties, the syntax of the key follows the standard BizTalk notation for a promoted property - <namespace>#<property name>. It is important to mention that the assembly containing the property schema with the properties promoted this way has to be properly deployed as part of the solution being tested. Otherwise nothing will be promoted to the message context. Example:

var sendStep = new MockSendStep();
sendStep.Properties.Add("http://example.com/customprops#MyProp", "SomeValue");

Promoting adapter and system properties

Promoting properties of other adapters or BizTalk system properties is performed in the same way as above, the only difference being the syntax for the key. For convenience, and to avoid lots of typing, a namespace prefix approach has been introduced for identifying the desired property to be promoted. The syntax follows the dot notation - [namespace prefix].[property name]. For instance, promoting the MQSeries adapter property MQMD_MsgSeqNumber looks as follows:

var sendStep = new MockSendStep();
sendStep.Properties.Add("MQSeries.MQMD_MsgSeqNumber", "10");

Links to the complete list of adapter and system properties that are supported can be found at the end of this post. Note that this list is the same for all the different BizTalk versions from 2010 and up, and is based on the latest version of BizTalk - 2016. This means that certain properties won't be available for lower versions. This will not cause any trouble for your tests - if a property cannot be located it simply won't be promoted to the message context and the processing will continue as intended.

Validating promoted properties in received messages

This is the other new addition to TransMock - when messages are received in the MockReceive and MockRequestResponse steps, it is now possible to inspect both their content and their context properties. This is achieved in a slightly different way compared to the property promotion technique. There is no new Properties dictionary introduced on the steps. Instead, the LambdaValidationStep has been extended with a new validation method signature that receives a parameter of type MockMessage. This is the only supported way of validating both the content and the context of a message received from the mock adapter. The existing validation method signature in LambdaValidationStep expecting a Stream is still fully supported, so there is no breaking change. However, it is strongly recommended to move over to the new signature as it brings much more value to the table.
Here is an example of how the validation will be performed:

var receiveStep = new MockRequestResponseStep();

var receiveValidationStep = new LambdaValidationStep()
{
    MessageValidationCallback = (message) => ValidateMessage(message)
};

private bool ValidateMessage(MockMessage message)
{
    Assert.IsTrue(message.Properties.Count > 0, "No context properties found in message");
    // Validating a custom property
    Assert.AreEqual("101", message.Properties["http://www.example.com/custprops#UserId"],
        "UserId property had wrong value");
    // Validating a system property
    Assert.AreEqual("GetUserById", message.Properties["BTS.Operation"],
        "Operation property not as expected");

    return true;
}

As seen above, validation of 2 different properties is shown - a custom and a system one. They follow the exact same naming conventions as when promoting properties on inbound messages: custom properties use the # convention, while system properties use the dot convention with a namespace prefix.

System and adapter property reference

Here is the list of links to the reference of the various adapter and system properties that can be promoted in inbound messages or validated in outbound messages:

AppStream - app stream system properties
BTS - BizTalk system properties
EDI - EDI properties
EDIV2 - EDI V2 properties
EDIAS2 - AS2 properties
FILE - file adapter properties
FTP - FTP adapter properties
HTTP - HTTP adapter properties
MIME - MIME properties
MQSeries - MQSeries adapter properties
MSMQ - MSMQ adapter properties
MSMQT - MSMQ Transactional adapter properties
POP3 - POP3 adapter properties
ServiceBus - Azure ServiceBus adapter properties
SFTP - SFTP adapter properties
SMTP - SMTP adapter properties
SOAP - SOAP adapter properties
SQL - SQL adapter properties
WCF - WCF adapter properties
WSS - Windows Sharepoint Services adapter properties