Eric's Blog

Day to day experience in .NET

Disclaimer

The content of this site is my own personal opinion and does not in any way represent my employer, its subsidiaries, or affiliates. These postings are provided "AS IS" with no warranties, and confer no rights.

  • Asynchronous Calls Crashed the Application (Not Anymore)

    I’m probably late to the game with this one, but since I missed it I’m probably not the only one. I have a pretty typical Windows Forms project which uses WCF to communicate with the server. Over the years we have moved more and more to asynchronous calls, both to avoid locking the client and to be able to give better user feedback. In one customer’s installation we have seen an increasing number of error reports where the application has crashed and the standard Windows dialog has been shown (the one that says it’s looking for a solution). We have general error handlers hooked to both CurrentDomain.UnhandledException and Application.ThreadException, so for quite a while I assumed there must be some problem with their environment (hardware, OS and so on). Unfortunately it’s all our fault, and I had just missed this completely.

    What I’ve learned just recently is that WCF terminates the application when an unhandled exception occurs on an asynchronous thread. Digging a little deeper, this seems to be how all unhandled exceptions on non-UI threads are treated. This significant change was introduced back in .NET 2.0.

    Once I realized the problem it was simple to resolve. I needed to hook our error handling to ExceptionHandler.AsynchronousThreadExceptionHandler too, and to do that there must be a class that inherits from ExceptionHandler.

        public class ExceptionHandler : System.ServiceModel.Dispatcher.ExceptionHandler
        {
            private static readonly SynchronizationContext syncContext = SynchronizationContext.Current;
            private readonly ILogger logger;
            private readonly IMessageBoxService messageBoxService;
    
            public ExceptionHandler(ILogger logger, IMessageBoxService messageBoxService)
            {
                Contract.Requires<ArgumentNullException>(logger != null);
                Contract.Requires<ArgumentNullException>(messageBoxService != null);
    
                this.logger = logger;
                this.messageBoxService = messageBoxService;
            }
    
            public override bool HandleException(Exception exception)
            {
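                // Unwrap the wrapper exceptions that reflection and WCF callbacks put around the real error.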
                Exception ex = exception;
                if (ex.InnerException != null)
                {
                    if (ex is System.Reflection.TargetInvocationException)
                        ex = ex.InnerException;
                    if (ex.GetType().FullName == "System.Runtime.CallbackException")
                        ex = ex.InnerException;
                    if (ex.GetType().FullName == "System.ServiceModel.Diagnostics.CallbackException")
                        ex = ex.InnerException;
                }
    
                try
                {
                    logger.Error(ex);
                    DoOperationThreadSafe(state => messageBoxService.Show(ex.Message, MessageBoxButtons.OK, MessageBoxIcon.Error), null);
                }
                catch (Exception errorHandlerException)
                {
                    DoOperationThreadSafe(state => messageBoxService.Show("ErrorHandler: " + errorHandlerException, MessageBoxButtons.OK, MessageBoxIcon.Error), null);
                    return false;
                }
    
                return true;
            }
    
            /// <summary>
            /// Dispatches an operation to a synchronization context.
            /// </summary>
            /// <param name="operation">The <see cref="SendOrPostCallback"/> delegate to call.</param>
            /// <param name="state">The object passed to the delegate.</param>
            /// <remarks>
            /// Use this method to ensure that a method is called on the UI thread for a windows forms/windows presentation application.
            /// </remarks>
            /// <seealso cref="SynchronizationContext"/>
            [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Design", "CA1062:Validate arguments of public methods", MessageId = "0")]
            protected static void DoOperationThreadSafe(SendOrPostCallback operation, object state)
            {
                Contract.Requires<ArgumentNullException>(operation != null);
    
                if (syncContext != null)
                {
                    try
                    {
                        syncContext.Send(operation, state);
                    }
                    catch (System.Reflection.TargetInvocationException ex)
                    {
                        if (ex.InnerException != null)
                            throw ex.InnerException;
                        throw;
                    }
                }
                else
                    operation.Invoke(state);
            }
    
        }
    

    Now I hook this class to the three exception handlers mentioned earlier.

                var exceptionHandler = container.Resolve<System.ServiceModel.Dispatcher.ExceptionHandler>();
                System.ServiceModel.Dispatcher.ExceptionHandler.AsynchronousThreadExceptionHandler = exceptionHandler;
                Application.ThreadException += (sender, eventArgs) => exceptionHandler.HandleException(eventArgs.Exception);
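                // ExceptionObject is typed as object and may not be an Exception, hence the "as" cast below.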
                AppDomain.CurrentDomain.UnhandledException +=
                    (sender, eventArgs) => exceptionHandler.HandleException(eventArgs.ExceptionObject as Exception);
    
  • Installing a Fibaro Relay Switch

    This worked in my home (in Sweden)! Before telling you how to install the switch, it is important to know that I’m not an electrician (and since I’m not, I might very well use the wrong English words, but I hope it is understandable) and I’m only sharing my experience. Changing anything in the installation might be very dangerous to you and/or your family. Please contact a professional electrician if you feel uncertain.

    This is my fourth post about home automation and you can read all of them here. Before reading this post you should read my post about Installing a Fibaro Dimmer, because I will only point out some of the differences in this post and not repeat all steps.

    The switch requires a neutral lead (N) and cannot be bridged like the dimmer can, so even before buying the switch it is good to check that you have a neutral lead available where you want to install it. If it is missing and you still want to install the switch, you should call your local electrician for some help. Another difference with the relay switch is that it can control two lamps (or whatever you want to be able to turn on/off). Even if you don’t have two sources to control, you should consider wiring two buttons anyway, because the second can be used to start a scene. I have even installed a switch just to control two scenes and nothing else. You probably want buttons for scenes like turning everything off when you go to bed or leave the house, so that you won’t have to pull out your phone each time.

    Let’s take a look at the installation steps:

    1. Steps 1-3 are the same as when installing the dimmer. So unmount the adjuster after turning the power off, and remember where the live lead (L, brown) and the output (often black, but in the pictures below it is orange) are connected in the adjuster.
    2. As I wrote earlier, the switch requires a neutral lead (N), so begin by connecting that to N on the switch. Then there are two connectors for the live lead (called L and In on the switch, compared to just one on the dimmer) that typically would be bridged. I’m not sure in which scenarios you would connect different leads to them, but there probably are some, since my feeling is that the Fibaro system is well thought-out. You will also need to share the live lead with the adjuster, since there won’t be any live lead from the switch to the adjuster. This could of course be solved in many different ways, and the picture below shows one way. It can be a little hard to get two leads into the same connector on the switch since they are made to fit one, but in that case you could use an external clip to connect the cables with each other.

    3. The next step is to move the output terminal from the adjuster to O1 (output 1) on the switch and connect a new cable from S1 on the switch to the same place that you moved the cable from. If you want a second button to control a scene, you should connect the second output on the adjuster to S2 on the switch, but you don’t have to put anything at all in O2.

    4. With everything wired up to the switch, just follow steps 6-9 from the last post about installing the dimmer and you should be finished.

    Hope this helps.

  • Installing a Fibaro Dimmer

    This worked in my home (in Sweden)! Before telling you how to install the dimmer, it is important to know that I’m not an electrician (and since I’m not, I might very well use the wrong English words, but I hope it is understandable) and I’m only sharing my experience. Changing anything in the installation might be very dangerous to you and/or your family. Please contact a professional if you feel uncertain.

    This is my third post about home automation and you can read all of them here.

    The dimmer is available in a price range that is fully comparable to a traditional “stupid” dimmer. As I noted in my previous post, Fibaro has packaged their products beautifully, as you can see in the pictures below (even though the pictures are from a switch, it looks the same for the dimmer).


    When you start building your network of Z-Wave devices you should preferably start closest to your Home Center and then work your way outwards from there. These are the steps I usually follow when installing the dimmer.

    1. The very first step is to turn off the power to the room where you want to install the dimmer. You should also have equipment to verify that there is no power when the cover is removed.
    2. With the power off, remove the covers. They can sit a little tight, but they can handle some “violence”. You’ll need to remove all of the buttons so that you can remove the frame.
    3. When the covers are removed, you need to unscrew and lift out the adjuster. There should be a brown cable (connected to L on the adjuster) which is the power cord, and then another cable (often black, but it can be any color) which is connected to your lamp. Don’t forget where the cables are connected, because in step 5 you will connect new cables to the same places.

    4. Now it is time to move the brown cable from L on the adjuster to L on the dimmer. If you have a blue cable (N), this should be connected to N on the dimmer, and finally you connect the black cable to O (output) on the dimmer. Note that the screw terminals should be tightened quite hard; you should be able to pull on a cable without it disconnecting.


      If you don’t have any blue cable (N) available it is possible to bridge the output and N (as you can see in the picture below, in this case with a grey cable to the lamp). It can be hard to fit two cables in O, but in that case you can use an external clip to connect the cables with each other.

    5. Next up is to connect the adjuster to the dimmer. Use a brown cable to connect Sx on the dimmer to L on the adjuster, and then a black cable from S1 to the same place where the black cable was connected before.
    6. Now, push the dimmer and the cable gently back and screw the adjuster back on.
    7. Set Home Center in learning mode (you do this from Devices in the user interface), and then turn the power back on. When the power is turned on and everything is correctly connected, the dimmer will be added to Home Center. When the dialog closes you can find your dimmer under “Unassigned”, named with a number.
    8. Click on the number under “Unassigned”, then change the name to something meaningful and choose which room the dimmer belongs to.
    9. If everything went smoothly you can now return the covers to the adjuster (and turn off the power before you do this, to be on the safe side).

    Good luck with your work or call your local electrician.

  • Getting Started with Fibaro Home Center 2

    This is my second post about home automation and you can read all of them here.

    Everything from Fibaro comes nicely packaged, which immediately sends a signal of quality (I believe they’ve read Steve Jobs’ biography?), and the Home Center 2 is no exception (unfortunately I didn’t take a picture before unpacking). The Home Center 2 (HC2) itself is also a very clean and good-looking piece of hardware (as you can see in the picture below from their web site).

    On the back you have to remove a cover so that you can connect the power cord and a network cable. This is easily done and self-explanatory. When the device has started up you can either download the HC2 Finder tool from their web site or just open your router and check which IP address it got. I recommend reserving that address for the HC2 in DHCP (so you need to log in to the router anyway). When this is done you can use any web browser on your local network to browse to the IP address and log in to HC2. This is of course not so interesting until you start installing devices, but there are a couple of things that you can do now to enable some features.

    First of all you probably want to upgrade your HC2 to the latest software version. If you install beta software, you should create a backup first (even though I believe the beta installation will always do this, at least the current beta does) because if you want to return to the last stable release you will have to restore that backup. As always with beta software there is a risk that you will have to reset HC2 to factory settings, but typically I don’t think this should be necessary.

    With the latest and greatest software in place you might want to make your HC2 available from a remote location, for example so that you can power on/off devices from your work. You can do this by creating an account on http://home.fibaro.com. When you create an account you will have to register the serial number and MAC address of your HC2. Both of these can be found directly under Configuration in HC2.
    When I did this I had also configured port forwarding on port 80 to my HC2, but that turned out not to be necessary: I have since removed it and I can still log in to HC2 through https://home.fibaro.com/.

    You should also configure location information for your house under Configuration –> Location in HC2. There you can also tell HC2 the coordinates of your house, so you can use location as part of your scenes. But how do you know the longitude and latitude of your house? One way to find out is to use Google Maps.

    1. Browse to http://maps.google.com and type your address in the search field.
    2. Right click on the map where your house is located and choose “What’s here?”
    3. Now the coordinates are shown in the search field, or you can click on the green arrow to see them.

    A detailed instruction on how to show the coordinates on a Google map can also be found in “How do I look up latitude and longitude coordinates on Google Maps?”.

    You might also want to begin adding more users to your system. You want to do this now if you want to be able to use different users when configuring the iPhone app on multiple phones in the next step. The users are found under Panels –> Access Control Panel in HC2. If you add users now, don’t forget to return here later and edit access rights to devices, cameras and scenes after these have been added to HC2.

    If you have an iPhone (or want to use the iPhone app on an iPad) you also need to configure the app on the phone so that it can access HC2 correctly. When starting the app it says that you need to configure user name and password under settings before you can use it. I went to settings in the application, but there is no possibility to configure user name and password there, and I couldn’t figure out how to do it. But the good guys at Gröna Hus quickly responded to my email and explained that I needed to go into Settings in iOS and scroll down to the Fibaro application, and there it was obvious what to enter. With this fixed the application was able to talk to HC2.

    When the iPhone app has been connected to HC2 you can find it listed under iOS devices in the Access Control Panel, and you will have to check the devices that you want HC2 to push notifications to. This can be useful for different alarm devices; for example, I have configured the fire alarms to push a message (and send an email) whenever their status changes.

  • Home Automation with the Fibaro System

    When we built our house four or five years ago I wanted to be able to control our light sources in an intelligent way. For example, I want to be able to control all the external lamps with a single button/command so that they all turn on or off. The programmer in me also wanted to be able to program against the system. The problem back then was that all systems I looked at were quite expensive. I got at least two offers (both on bus-based systems), both costing about 40 000 – 60 000 SEK (6 000 – 9 000 USD), and that was only to cover the most prioritized parts of our house. For us, this was a little too much for the value they offered. I also looked at simpler systems like Nexa, but I wasn’t satisfied with the limitations (for example that they don’t communicate in both directions).

    Fast forward to this summer and I have finally found a system that is flexible enough and also affordable. I have chosen the Fibaro System and I bought it from their Swedish reseller Gröna hus. A couple of weeks ago I bought their Home Center 2 together with dimmers (both for installations and plug-ins), switches (both for installations and plug-ins), sensors and fire alarms to cover everything in our house for a little more than 20 000 SEK (3 000 USD).

    The Fibaro System is based on the Z-Wave standard, which is a wireless communication protocol for home automation. It is a mesh network, where each device is capable of both sending and receiving commands. The network can handle devices being really close to each other (even though there is a preferred minimum distance, I haven’t had any problems at all), and commands can be passed on from one device to another, therefore extending the distance the network can reach. The installation dimmers and switches are small enough to be easily added to an existing installation. Devices from different manufacturers seem to be very compatible with each other (for example, the fire alarms are not from Fibaro) and they all go through a certification process.

    The user interface of the Fibaro Home Center 2 is not perfect (yet) but well done, user friendly (most of the time at least) and available in many different languages (English, Polish, German, Swedish, Portuguese, Italian, French, Dutch and Romanian). The web-based UI can handle many mobile phones, the iPad and of course different browsers on your desktop too. As this is written there is only an app for iPhone, which is a little limited (I think), but I have an idea of maybe creating a UI for Windows Phone and/or Windows 8 (though there is a big risk that I will never find enough time to actually do it).

    More posts about Home Automation with the Fibaro System:

  • Assembly Versioning in a TFS Team Build

    There are plenty of posts about how to set a custom assembly version in a TFS Team Build, but since I ended up choosing an approach that was a mix of the others, I thought I could just as well blog about it too. This post will hopefully be followed by another post about doing ClickOnce publishing in the TFS Team Build, and I will use the same version there too.

    Let’s start with a look at the result I wanted to achieve. The last two items in the Build Explorer have the standard name of builds in Team System 2010, but as seen in the top three I have a custom name which is based on the version number. The drop folders have the same name as the build, and the name is also used for labeling (which is the default). All built assemblies in the drop folder have the File Version set to the corresponding version number.

    These are the steps I did to achieve this result:

    1. First of all I prepared the solution so that it has a single file, shared by all projects, that contains the version number (and some of the other attributes from AssemblyInfo.cs), and I called the shared file SolutionInfo.cs (see the sketch after this list). This is a common trick to make it easier to have all projects share the same version numbers, and there are plenty of posts covering the details, for example this one.
    2. In Community TFS Build Extensions there are custom build activities for working with version numbers. To be able to use them in my builds I first downloaded them and added all of the assemblies to a new folder under BuildProcessTemplates.

    3. With this in place it is time to customize the build template. I took a copy of DefaultTemplate.xaml and called it SolutionVersioningDefaultTemplate.xaml.
    4. Almost the first activity in the workflow is called “Update Build Number”. The default implementation of that activity uses the BuildNumberFormat variable to set the build number, and I changed it to instead be “String.Format("{0}_{1}", BuildDetail.BuildDefinition.Name, VersionNumber)”. VersionNumber is a variable that I have added, and before it is used it must be set to the version number that should be used for this build.
    5. I added the TfsVersion activity before “Update Build Number” and set its Action property to just “GetVersion”.


      I also added variables for the Major, Minor and StartDate properties and made these configurable for the build. StartDate is used when “VersionFormat” is set to “Elapsed”, which means that the Build part of the version number will be the number of days that have elapsed since StartDate.

      With the steps above in place, the name of the build and the name of the drop folder will be as desired, and the only thing left is to set the version number in the SolutionInfo.cs file so that all assemblies get the same file version number.
    6. One important note about setting the version number: you don’t want to check in the modified SolutionInfo.cs file, since that could trigger a new build and there is really no reason for doing it, so just don’t.
    7. Updating the SolutionInfo.cs file is a two-step process: first find the file to update, and then, if it is found, update it.


      I chose to set the version just before the build does its labeling, so scroll down in the workflow until you find “If CreateLabel” (almost halfway down).
    8. I used the standard activity FindMatchingFiles to search for the SolutionInfo.cs file, with MatchPattern set to “String.Format("{0}\**\{1}", SourcesDirectory, SolutionVersionFile)”, where SolutionVersionFile is a variable that can be set to different values when creating the build definition. The MatchPattern variable supports the same search syntax as the standard .NET Directory class.
    9. If the SolutionInfo.cs file is found I use the TfsVersion activity again, but this time with the Action property set to “SetVersion”. I think this solution would also work with the AssemblyInfo files in the projects, but I haven’t tested that.
      If no file is found, a warning is shown to the user.
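
    For reference, a minimal SolutionInfo.cs could look like the sketch below. This is only an illustration; which attributes you share is up to you, and the version values are placeholders that the build rewrites.

        using System.Reflection;

        // Shared by every project in the solution as a linked file.
        [assembly: AssemblyCompany("My Company")]
        [assembly: AssemblyProduct("My Product")]
        [assembly: AssemblyVersion("1.0.0.0")]
        // The TfsVersion activity sets the file version here at build time.
        [assembly: AssemblyFileVersion("1.0.0.0")]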

    It took me a while to learn the basics of TFS Team Build and its workflow templates, but once the initial confusion is gone I believe it is powerful enough to support you in most scenarios. There are of course plenty of options for automating builds, but I guess you were interested in TFS Team Build if you made it this far. :)

  • SlowCheetah XML Transforms as part of a TFS Build

    Half a year ago I blogged about “Deploying ClickOnce to Multiple Environments”, but now I want to take this a couple of steps further and use a build server to automate the deployments. The first thing I looked into was how to get SlowCheetah XML Transformations to work on our build server, and these are the steps I took:

    1. First of all I added the files that SlowCheetah installed to a place relative to my project files. You can find the files in your profile folder under “AppData\Local\Microsoft\MSBuild\SlowCheetah\v1”, so I copied them to our packages folder and checked them in to TFS Source Control.
    2. After this I right-clicked on my project in Solution Explorer and chose “Edit Project File”. Search for SlowCheetahTargets and replace “$(LOCALAPPDATA)\Microsoft\MSBuild\SlowCheetah\v1\SlowCheetah.Transforms.targets” with the relative path to the targets file. In my case I replaced it with “..\packages\SlowCheetah.Transforms.targets”.
    3. Now when you create a new build definition in TFS you can set “Configurations to Build” (under “Build process parameters” and “Items to Build”) to the configurations that you want to build, and the transformations will be applied on the build server too.
  • When the Test Agent Fails on the Build Server

    When I set up a continuous integration build on our new build server, I got “The agent process was stopped while the test was running.” when the tests ran. This is not a very helpful error message, and troubleshooting is hard when all tests succeed locally. Fortunately I had the ability to read the event log on the build server, and there were more details there. In my case the problem was that Code Contracts wasn’t installed on the server.

    Once again the lesson is to read the event log. Am I the only one that often forgets to do that initially?

  • Information and Events are Stable Parts of Business

    At IRM we often use this picture to talk about the stable parts of the business. Actually, we have used it at least during the eight years I've been working here.

    The least stable part is the organisation itself. Many of us have been affected by organisational changes during the last year, and just as many of us will be affected by a change this year. Since it changes all the time, it would be unwise to base any software architecture on the current organisation.
    As a side note: this is one reason why I often say that (user) roles are a bad thing to base a system’s security checks on, even though they are often well supported by the platform.

    The most stable part of the business is the structure of the information we need. For as long as IRM has been a consultancy firm we have provided services to our customers in projects, for which we have always tracked the hours (time) each employee has worked. Surely the information changes too; for example, we have for many years educated others in both internal and open courses, but this was not part of the business when IRM started back in 1982. The content changes (hopefully) rapidly, with new customers and projects all the time, but not what information we need to track.

    How we perform our services and our internal work has changed over time. It is the process we try to improve all the time. Since the process itself is something that we want to change, it is also unwise to base the software architecture on it. Still, this was very common advice when SOA was hitting the hype curve, but I believe that is a mistake that happened because we so eagerly wanted to align our solutions better with the business process (of course the system needs to support the processes very well, but they shouldn’t be the part we base our architecture on).
    If we dig a little deeper into processes, they are a series of activities, triggered by an event, that are performed to deliver value to a customer. The activities are more stable than the process itself, but even more stable are the events. Actually, there is an event after each activity that triggers the next activity. An event can also be found when studying the state transitions of the information, for example visualized in a UML state diagram. This has led me to talk about the events as the line between Process and Information in the stability diagram above.

    I would say that information and events are two important artifacts to pay special attention to when designing software. This spans from defining the correct aggregates (in DDD) to defining service boundaries (in SOA) or Bounded Contexts (in DDD) to integration between systems. I haven’t written about business capabilities in this post, but also when using capabilities, for example to define services, you need to make sure that information which must be consistent (not eventually consistent) belongs to the same capability, and the most important communication between services is based on events.

  • Deploying ClickOnce to Multiple Environments

    So in my struggle to get a lean and effective way to roll out new versions of an application for two of my clients, I have some more to share with you. I want it to be extremely easy to create a new ClickOnce installer for my clients, and I have identified three things that I need to solve:

    • The installations will be run from different installation URLs.
    • Getting unique config files packed with the ClickOnce publish. I have two clients, each with a test and a production environment, and I need to handle my own test environment too.
    • A client machine needs to be able to run installations for both test and production. This means that ClickOnce must see my applications as different applications even though there is really only one.

    I have solved the last two, so let’s start with the last one.

    Running both test and production environments on the same client

    There is more than one way to solve this, but for me the easiest solution was to create two Visual Studio projects. Originally I had a single Windows Forms exe project in Visual Studio, but I changed this to be just a DLL project (Output type in Application settings). Then I created two new projects named Application and Application.Test, and for both of them I configured ClickOnce as usual. There is only a single line of code in each of these projects, and that’s the call to the original entry point in my old exe project:

    Program.Main(args);
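
    Spelled out, each wrapper project contains little more than a minimal entry point like this sketch (the class name is a placeholder, and Program.Main in the old exe project must of course be accessible from the wrapper projects):

    using System;

    static class EntryPoint
    {
        [STAThread]
        static void Main(string[] args)
        {
            // Delegate straight to the entry point in the original project,
            // which is now compiled as a DLL.
            Program.Main(args);
        }
    }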

    With this solved I still needed to handle the differences in configurations.

    Different config for each environment

    Visual Studio 2010 has support for transforming configuration files when deploying a web project, but only when deploying, and that is only supported for web projects. Luckily Sayed Ibrahim Hashimi and Chuck England have created a small VSIX called SlowCheetah XML Transforms that does the same thing, but when you build your project. So naturally the first step is to install the add-in. Next up is to create new project configurations in Configuration Manager (I created CustomerATest, CustomerAProd, CustomerBTest and CustomerBProd). With this in place you can right-click on the config file in your project and select Add Transform, which adds a sub-config file for each configuration. In these config files you then apply the transformations for the changes you want to make in the config file.

    This is so useful in many more scenarios than just for deploying with ClickOnce.


    Unfortunately I haven’t solved the problem of having different installation URLs for each configuration, but maybe someone has a tip? I have tried to edit the project file and move the installation URL element to the respective configurations. This works if I remember to reload the project each time I select another configuration (which of course I don’t); if I don’t reload, the settings will be overwritten with the value from the configuration that was used when the project was loaded, so that won’t work. The best thing would of course be if Visual Studio started to support a different installation URL for each configuration.

  • ClickOnce to the Rescue

    I have two clients who are outsourcing their PCs and servers, and one consequence of that is that from now on it will take three weeks (plus additional costs) to get an MSI delivered to the clients. This is way too far from the fast deliveries that we have today, so I recommended that they move to ClickOnce for distributing the application. This will allow us to roll out new versions quickly and keep the cost down to a minimum.

    There is a switchboard integrated with the application though, and I was a little uncertain whether we could handle the switchboard’s requirements: starting the client with arguments, and always with a single instance of the application. It turned out to be really simple. First of all, we did not need to make any changes to our single-instance code (which uses named pipes to communicate from the second instance to the first).

    When someone calls my client, the switchboard does a lookup by calling a web service. In the answer it retrieves the path to the installed client, including the correct command line arguments that should be used to start the application. When switching to a ClickOnce distribution the service is not able to include a local path anymore, since the server has no idea where the application is installed. After a quick search I found that ClickOnce applications can also be called with parameters, so that was the path I chose to go down, and it worked really well. This is what I needed to do:

    1. In the “Manifests” settings (found under “Publish Options”) it is required to check “Allow URL parameters to be passed to application”.
    2. Next I changed the web service so it returned the URL to the ClickOnce installation (not the bootstrapper exe, but the .application file which also needs to be the installation source) followed by regular query string parameters.
    3. In my code, I just added a small function that takes the command line parameters as input (so I continue to support local installs) and returns either them or the parameters sent through the query string.

      private static string[] GetCommandLineArguments(string[] args)
      {
          if (ApplicationDeployment.IsNetworkDeployed && ApplicationDeployment.CurrentDeployment != null)
          {
              if (ApplicationDeployment.CurrentDeployment.ActivationUri != null)
              {
                  string queryString = ApplicationDeployment.CurrentDeployment.ActivationUri.Query;
                  if (!string.IsNullOrEmpty(queryString))
                  {
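                      // Rewrite the query string: '?' becomes '/' and '=' becomes ':',
                      // then split on '&' to get command line style arguments.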
                      queryString = queryString.Replace('?', '/').Replace('=', ':');
                      return queryString.Split('&');
                  }
              }
          }
      
          return args;
      }
      The ActivationUri will always be null if the “Allow URL parameters to be passed to application” option isn’t checked.
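
      A minimal usage sketch from the application’s entry point (MainForm is a hypothetical form that takes the parsed arguments):

      static void Main(string[] args)
      {
          // Works for both local installs (args) and ClickOnce launches (query string).
          string[] arguments = GetCommandLineArguments(args);
          Application.Run(new MainForm(arguments)); // MainForm is hypothetical.
      }
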
  • Hosting Rhino Service Bus in IIS

    This is my third post (part 1, part 2) with notes from my exploration of Rhino Service Bus, and in this one I will focus on how to set everything up for pub/sub with IIS as the host. The first thing that must be done is to choose a strategy for where to do the initialization of the bus and the other parts of the service/application. Here is a good blog post covering the choices; for this sample I chose AppInitialize because it supports protocols other than http, and since I don’t need to support any host other than IIS I don’t need to cover the alternatives.

    public static void AppInitialize()
    {
        var host = new DefaultHost();
        host.Start<Bootstrapper>();
    
        var consumerHost = new DefaultHost();
        consumerHost.UseStandaloneCastleConfigurationFileName("Consumer.config");
        consumerHost.Start<Bootstrapper>();
    }

    In this test I trigger the events from button clicks on a web page, but apart from that everything works as in the previous post.

    When deploying it to a server with IIS installed, I manually created the Rhino Queues folders (in my case Publisher.esent, Publisher_subscriptions.esent, Consumer.esent and Consumer_subscriptions.esent). For each folder I then gave Modify permission to the account running the application pool (you will get an access denied exception if you don’t do this, and giving Modify permission on the parent folder does not seem like a good idea).

    I haven’t been able to get a sample with RemoteAppDomainHost to work. It complains about not being able to find the Rhino.ServiceBus assembly, and I’m pretty sure it has to do with ASP.NET’s shadow copying of files. Has anyone got this working? Please leave a comment or link in that case. Thanks, Eric.

  • Consuming Events in the Same Process as the Publisher with Rhino Service Bus

    I recently blogged about getting started with Rhino Service Bus for publishing and subscribing to events. If you did not read that post, I recommend you do so before continuing, since I will just outline my modifications in this post.

    My scenario is that I want to decouple the things that happen when my server receives a command. I would like to save only the parts that are core to my domain as a direct effect of the command; other things, like calling external systems and creating a change history, I would like to do later on and in their own transactions. For this I wanted to publish events from my domain and then have subscribers handling each scenario separately. I thought it shouldn’t be necessary to have a second process though, so in this post I will show you how both the publisher and the subscriber can live in the same process.

    So, based on the steps in my previous post, I copied the class that consumes my events (implementing the ConsumerOf<> interface) to the publisher project. Now I need a second service bus host in the Publisher project, and that second host must have its own configuration file.

    1. I created a new configuration file, but instead of a classic app.config I called it Consumer.config, and I also set “Copy to Output Directory” to “Copy if newer”. The configuration file is a little different from the previous one, because the host doesn’t want the surrounding castle element.

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <facilities>
          <facility id="rhino.esb">
            <bus threadCount="1" numberOfRetries="5" endpoint="rhino.queues://localhost:31317/RSB_Consumer_In_Publisher" name="Consumer"/>
            <messages>
              <add
                name="RSB.Events"
                endpoint="rhino.queues://localhost:31315/RSB_Publisher"
                />
            </messages>
          </facility>
        </facilities>
      </configuration>
      
    2. After starting the Publisher host, I also added code to start the in-process Consumer host. The big difference here, compared to the consumer described in my last post, is that I tell the host to use another config file by calling UseStandaloneCastleConfigurationFileName.

      static void Main(string[] args)
      {
          var host = new DefaultHost();
          host.Start<Bootstrapper>();
          Console.WriteLine("Started server publisher.");
      
          var consumerHost = new DefaultHost();
          consumerHost.UseStandaloneCastleConfigurationFileName("Consumer.config");
          consumerHost.Start<Bootstrapper>();
          Console.WriteLine("Started server consumer.");
      
          IServiceBus bus = host.Container.Resolve<IServiceBus>();
          bus.Notify(new Event1());
          Console.WriteLine("Published Event1.");
      
          Thread.Sleep(2000);
      
          bus.Notify(new Event2());
          Console.WriteLine("Published Event2.");
      
          Console.ReadLine();
      }

    Another option for hosting in-process is to use the RemoteAppDomainHost, which I believe runs in the same process but in another AppDomain. This would probably add some robustness to the solution. The difference when using that hosting option is that the Consumer.config file needs to be an ordinary config file with a configSections element, and the facilities element must be contained in a castle element. The code for bringing the host up is similar to the one above.

    var consumerHost = new RemoteAppDomainHost(typeof(Bootstrapper));
    consumerHost.Configuration("Consumer.config");
    consumerHost.Start();
    Console.WriteLine("Started server consumer.");
  • Getting Started with Pub/Sub using Rhino Service Bus

    I have a project where I want to start publishing events on the server and then have a consumer subscribing to these events and taking action. First I took a quick look at nServiceBus, but I also thought that it would be interesting to see what else exists on the .NET platform. I found both Mass Transit and Rhino Service Bus (RSB). I decided that Rhino Service Bus might fit this project very well, since it has built-in support for Rhino Queues (of course) and it is still open source without restrictions. Rhino Queues is interesting because it is an xcopy deployment.

    This post is mainly about giving RSB a first run, so it will be very basic and more like a dump of my own progress.

    First I created a simple project for defining the events (messages), and it only contains two empty classes, Event1 and Event2. It might be worth noting that there are no requirements on the classes, i.e. it is not necessary to implement any interfaces or mark them with any attributes.

    After this I created a Publisher and a Consumer project, and in both projects I added a reference to the Events project and used NuGet to add a reference to Rhino Service Bus. I began with the publisher:

    1. Added an empty class called BootStrapper which inherits from AbstractBootStrapper.
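
      In code this is nothing more than the following sketch (assuming the usual Rhino.ServiceBus using directives are in place):

      // Empty: the default wiring is enough for this sample.
      public class BootStrapper : AbstractBootStrapper
      {
      }
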
    2. Added an app.config

      <configSections>
        <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor"/>
      </configSections>
      <castle>
        <facilities>
          <facility id="rhino.esb">
            <bus threadCount="1" numberOfRetries="5" endpoint="rhino.queues://localhost:31315/RSB_Publisher" name="Publisher"/>
            <messages/>
          </facility>
        </facilities>
      </castle>
      
      
      The endpoint attribute of the bus element defines the queue this bus is listening to. The “rhino.queues” prefix specifies that Rhino Queues should be used, but RSB also supports MSMQ by using the “msmq” moniker instead.
    3. All that is left now is to start the bus and publish the events. The code for this is straightforward and self-explanatory.

      static void Main(string[] args)
      {
          var host = new DefaultHost();
          host.Start<Bootstrapper>();
      
          Console.WriteLine("Starting server publisher.");
          Thread.Sleep(1000);
      
          IServiceBus bus = host.Container.Resolve<IServiceBus>();
          bus.Notify(new Event1());
          Console.WriteLine("Published Event1.");
      
      
          Thread.Sleep(2000);
      
          bus.Notify(new Event2());
          Console.WriteLine("Published Event2.");
      
          Console.ReadLine();
      }
      The IServiceBus interface supports both Notify and Publish for publishing events. The difference is that the latter requires at least one listener.

    So with the Publisher wrapped up, let’s do the same three steps, plus one additional step, for the Consumer.

    1. Added an empty class called BootStrapper which inherits from AbstractBootStrapper.
    2. Added an app.config
      <configSections>
        <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor"/>
      </configSections>
      <castle>
        <facilities>
          <facility id="rhino.esb">
            <bus threadCount="1" numberOfRetries="5" endpoint="rhino.queues://localhost:31316/RSB_Consumer" name="Consumer"/>
            <messages>
              <add
                name="RSB.Events"
                endpoint="rhino.queues://localhost:31315/RSB_Publisher"
                />
            </messages>
          </facility>
        </facilities>
      </castle>
      
      
      Again, the endpoint attribute of the bus element defines the queue this bus is listening to. In the consumer’s configuration I also added the messages that I want to listen for, pointing them to the endpoint that publishes these events, i.e. the Publisher above.
    3. Before starting the bus I also need to create a class that can handle the events when they are published. This is done by implementing the ConsumerOf<> interface.

      public class EventConsumer : ConsumerOf<Event1>, ConsumerOf<Event2>
      {
          public void Consume(Event1 message)
          {
              Console.WriteLine("Received Event1");
          }
      
          public void Consume(Event2 message)
          {
              Console.WriteLine("Received Event2");
          }
      }
    4. As for the Publisher, the last step is to start the bus, and when this is done it will automatically tell the Publisher which events the Consumer wants to receive.

      static void Main(string[] args)
      {
          var host = new DefaultHost();
          host.Start<Bootstrapper>();
      
          Console.ReadLine();
      }

  • Learning about CQRS and Event Sourcing

    In my struggle to use Domain-Driven Design (DDD) in better ways, to get more value out of it and not end up with anemic models, I began reading about Command Query Responsibility Segregation (CQRS) and Event Sourcing. These are two patterns that fit very well with DDD, and two of the most influential thinkers in this area are Greg Young and Udi Dahan. In this post I will simply list a couple of resources that I have found very useful, but first let’s start with some of the reasons why I believe this is so interesting:

    • When combining CQRS and Event Sourcing you get a built-in integration model. For me, this is huge, since when was the last time you built an application/system in isolation? The problem is often that the need for integration is not taken care of until late in the project, and then, more often than not, we bolt something on top of what we created, with an unsatisfying result.
    • CQRS offers great separation of concerns and removes the need for supporting reporting in the domain model.
    • The application/system is built from three relatively loosely coupled parts (domain model, read model and user interface) that can be developed and evolved independently of each other.
    • We store all events generated by the system (Event Sourcing). I believe events will get more and more attention over the coming years (if Event-Driven Architecture and BI haven’t already put a lot of focus on them), and it opens up many interesting business scenarios.
    • It is bare bones. This can sound strange, but during the last years I have tried many new frameworks (for example different O/R mappers, WCF RIA Services and so on) without getting the “reward” I’ve been hoping for. In my lab project so far I have not used any frameworks, but I don’t believe that I’m writing more code. Rather, I find my code becoming better in many ways (easier to read and maintain, better separation of concerns, faster and more).
    • It fits very well with both cloud and SOA.
    • It offers some really good ways to do unit testing (or at least I have learned better ways to do it while learning about CQRS and Event Sourcing).

    There are more reasons, which are outlined in many of the following resources that I’ve found to be good starting points:

    Of course, Domain-Driven Design by Eric Evans is a prerequisite, and Jimmy Nilsson’s Applying Domain-Driven Design and Patterns is also a good read.
