NuGet dependency resolution considered harmful

First, let me say that I like NuGet. It's awesome to have a decent package manager for .NET, and the team has done a great job getting NuGet to its current state. That said, there is still room for improvement, and the way NuGet resolves dependencies is one of those areas.

The problem here is that NuGet by default picks the oldest version of a package dependency that falls within the range you have specified. So if your dependencies follow SemVer, meaning that everything within the same major version is compatible, you would specify the following dependency range:

<dependency id="ExamplePackage" version="[1.0.0, 2.0.0)" />

Meaning that everything within that range is OK for NuGet to use. In this scenario NuGet will by default look for the package with the lowest minor version. While this is the "safest" approach, I'm pretty sure it's not what the majority of users want and expect. It also seems to contradict the NuGet doco itself when it comes to versioning, since it clearly states:

When most people install packages from NuGet, they want the latest “stable” release of that package

So if users would like the "latest stable release", it seems reasonable to assume that they would like the latest stable version of their dependencies as well?

While this could be argued back and forth for minor releases, the rule also applies to patch versions. That is, if you depend on >= 4.5.0 and there is a 4.5.1 package available, NuGet will pick 4.5.0. This seems completely wrong to me, since you bump the patch number when you put out a hotfix for critical bugs. It means that users can end up with potentially buggy versions of their dependencies.
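
To make the patch scenario concrete, here is a rough sketch of what the default behaviour looks like from the package manager console. The package names are made up for illustration:

# ExamplePackage declares a dependency on SomeDependency with version >= 4.5.0
PM> Install-Package ExamplePackage
# Even though SomeDependency 4.5.1 is available on the feed,
# NuGet resolves and installs SomeDependency 4.5.0 by default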

I'd argue that this is a bug in NuGet!

The workaround

While we wait for the NuGet team to fix this, the workaround is to specify an explicit dependency strategy when installing packages:

install-package {package name} -DependencyVersion HighestMinor

For a complete list of strategies please refer to the official doco.
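
For quick reference, these are the strategy values I'm aware of. I'm writing them down from memory, so treat the official doco as the authoritative list:

install-package {package name} -DependencyVersion Lowest        # the current default: lowest version in the range
install-package {package name} -DependencyVersion HighestPatch  # lowest minor, highest patch
install-package {package name} -DependencyVersion HighestMinor  # highest minor and patch within the lowest major
install-package {package name} -DependencyVersion Highest       # highest version that satisfies the range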

You can take this one step further and make it the default strategy by putting the following in your machine-wide nuget.config:

<configuration>
  <config>
    <add key="DependencyVersion" value="HighestMinor" />
  </config>
</configuration>

In closing

I can see that the NuGet team wants to take the safest approach, and until SemVer has become the standard perhaps that's the right thing to do. My proposal would be to add an extra qualifier to the dependency to declare whether it follows SemVer or not. That could then cause NuGet to switch to the HighestMinor strategy, which by definition is safe for SemVer-compliant projects given that you specify a [{CurrentMajor}, {NextMajor}) range, something that can be set up as the default if you use Ripple to generate your NuSpecs.

<dependency id="ExamplePackage" version="[1.0.0, 2.0.0)" strategy="SemVer" />

Perhaps that could be taken one step further by package publishers declaring in their packages whether they follow SemVer or not, information NuGet could use to select the most appropriate strategy.

Hope this helps!


Why the Particular Service Platform is a game changer

It's been cooking for a while, but last week we released the first version of the Particular Service Platform. I truly believe this Platform will be a game changer in terms of how you build, deploy and run your distributed systems going forwards. The core is still the same robust NServiceBus that has seen production usage for the last 6-7 years. The new additions include ServiceMatrix, which will help you design your solution. ServiceInsight will help you visualize message flows and, what's even cooler, you can now see the state changes of your sagas visually. Trust me, developing and debugging sagas has never been easier!

Last but not least we have ServicePulse. I'm really excited to finally be able to provide a tool for devops, built from the ground up and tailored to make it as intuitive as possible for you to make sure your distributed components are up and running, delivering the business value.

Enough of me talking, here is an amazing video showing you all this in action:


While the new apps in the Platform get a lot of attention, I want to take this post in another direction and focus on a few other areas and tools that help us make sure we can deliver on our promise to be the best Service Platform for .NET.

Developer experience

Getting up to speed quickly developing platform solutions with a minimum amount of hassle has always been one of our top priorities, and we're continuing to invest in this area. With this release we introduce our new Platform Installer (PI), which will help you get your machine ready with all the tools and infrastructure needed to build solutions on top of our platform.

Since we're developers ourselves, we decided to take the PI in a slightly different direction compared to your traditional installer. We do this by leaning heavily on the Chocolatey package manager. This means that everything we help you install on your machine, including queueing systems, storages, tools and of course our core apps like ServicePulse, ServiceInsight and ServiceMatrix, will also be available to install via the PowerShell commands that Chocolatey provides. This gives us a robust and future-proof infrastructure as our backbone, and for you as a developer a way to quickly pave new machines by using tools like Boxstarter.
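
As a rough sketch of what that looks like from the command line (the package ids below are my assumption, check the Chocolatey gallery for the real names):

# Install individual pieces of the platform via Chocolatey
choco install serviceinsight
choco install servicepulse
# Or pave a whole new dev machine in one go by pointing Boxstarter
# at a script that lists the Chocolatey packages you want everywhere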

Not using a package manager? Please give it a try, I promise it will change the way you approach installing things on your machine!

Just click our download link to take it for a spin!

Documentation by developers for developers

We've heard your feedback loud and clear: we have indeed fallen short in providing you with high quality documentation guiding you on how to build and run your solutions. While that won't change overnight, we're definitely giving it our best shot. To make adding and updating documentation as easy as possible we decided to create a new documentation platform from the ground up. And since we're developers and our target audience is mostly developers, the choice of platform was easy – GitHub.

All documentation is now stored as markdown in a GitHub repository and rendered continuously to our new developer portal docs.particular.net. This means that updating doco is just a commit + push away, which is pretty much as easy as it gets. For you as a user this means that, with a single click on the "Improve this doco" button found on each page, you can send us a pull request if you spot any errors or needed additions. We love contributions, and we'll keep the swag coming your way to show our appreciation if you decide to chime in!

While contributions are nice, we're fully committed to creating the best documentation out there, so a few months back we created an entire team whose sole purpose is to provide guidance and documentation. This will help make sure that our doco is constantly improved and kept up to date as we evolve our platform going forwards.

Go and see for yourself:

http://docs.particular.net/

Backwards compatibility FTW

Continuously delivering a platform on which customers build and run their businesses is not a challenge to be taken lightly. One of the most important things for us is to make sure that you can upgrade your systems endpoint by endpoint without incurring downtime. Failure to deliver in this area is obviously a non-starter if you claim to be a platform for building distributed systems. While we've always been proud to be leaders in this area, we've taken steps this past year that have taken us to a completely different level.

Before we dig into that, let's define backwards compatibility. First we have wire compatibility, meaning that the messages we send over the "wire" need to be compatible across different versions of NServiceBus. If we fail here you won't be able to update your versions of NServiceBus on a per-endpoint basis. That leads to a big-bang upgrade with massive risk and guaranteed downtime. In short, not acceptable!

To make sure that we don't drop the ball in this area we have a fully automated suite of tests that verifies that each version, starting with NServiceBus 3.3, is able to send to and receive from all other versions of NServiceBus.

Next up is data compatibility, by which we mean the data that is stored by the framework itself on a per-endpoint basis. This includes subscriptions, timeouts, saga data etc. While less critical than wire compatibility, breaking changes here cause unneeded friction when upgrading, so we make sure that if we are forced to introduce breaking changes they only happen when we move from one major version to another.

We check this before each release to make sure that new versions are able to consume data stored by previous versions.

Finally we have API compatibility, which covers the API that you as a developer code against. We follow SemVer 2.0, which stipulates that breaking changes are only allowed in new major versions. At this stage we use strict code reviews to enforce this, but rules that need to be enforced manually are bound to be broken. To remedy this we plan to make this verification an automated step in all our builds, thereby making it impossible for us to make breaking changes to our public API.

Fine grained repositories rock!

But man did we have to work hard to get there! As we added support for more and more queuing technologies, storages, containers etc., it became clear to us that dumping all code into a single repository wouldn't cut it. Since we are heavily automated we aim for each commit to be a potential release, and this breaks down completely when you mix things with different release cycles in the same repository.

User: “So what’s new in 4.0.3?”

Us: “We updated to RabbitMQ 2.1”

User: “But I’m using MSMQ?”

Us: “Uhm…”

In order to keep the noise around releases to a bare minimum for you as a user, we realized that we needed to split things out into separate repositories. We're now at around 40 repositories and counting…

This move made it painfully clear to us that the tooling needed for operations at this scale just wasn't there, so we had to build our own or contribute to existing ones:

  • We use and contribute to ripple in order to manage our dependencies across repositories
  • We built GitVersion to automatically version our binaries based on Git commits. Special thanks to Jake Ginnivan, who has been co-managing the project with us.
  • We built SyncOMatic to keep our repositories in sync
  • We built GitHubReleaseNotes to automate the creation of our release notes

While this can be seen as a drawback, now that we've made the investment we can release much faster and with better quality than ever before, making it possible to hotfix and release new versions of our software in a much more timely manner to make sure that you're never stuck. At this time we put out on average 2-3 new releases per week, in a highly automated and repeatable way.

Testing, testing, testing…

Testing that distributed communication works as expected is tricky. Mix that with a wide range of queuing systems, databases, containers and operating systems and it becomes pretty darn hard. Oh, did I mention that there are different versions available of all those moving pieces?

Yes, it's hard, but if you're in this business you have to be an expert in this area. While we have the obvious set of unit tests, etc., infrastructure like this is pretty hard to test in isolated ways. To tackle this we've created our own framework for end-to-end acceptance testing, running full-blown NServiceBus endpoints in various test suites to make sure all the combinations work as expected. Since those tests are completely black box, this has allowed us to keep evolving the code base without unit tests getting in the way. This allows us to focus on using unit tests where they make the most sense and complementing those with acceptance tests to make sure everything works as expected from an end user perspective.

A nice side effect of moving to separate repositories is that we can now run our constantly growing suite of acceptance tests against all the different technologies and frameworks we support. Since running those tests takes a while, we're leaning heavily on TeamCity to distribute that load on an elastic farm of build agents on the Amazon EC2 cloud platform.

Without all this automation we could never have scaled up the number of technologies we support while maintaining the quality you've come to expect from us!

NSBCon London

In case you've missed it, the first ever NServiceBus conference is happening 26th-27th June in London. There is a whole host of excellent speakers, so make sure not to miss it!

https://skillsmatter.com/conferences/6198-nsbcon

I'm going to deliver a session on how we do things inside engineering here at Particular Software. The abstract for the talk is:


Continuously delivering a top quality platform on which customers build and run their businesses is not a challenge to be taken lightly. Join Andreas Öhlund, Director of Engineering at Particular Software, for a journey through custom tooling, testing the un-testable and massive cloud service bills in the name of backwards compatibility, performance and reliability.

An adventurous journey indeed but an absolute must to ensure that the software you depend on delivers what’s stated on the box.

If you want to hear more on this topic please join me at NSBCON this June!

Enough talking!

Go to www.particular.net and click the big “download now” button and take the new platform for a spin!


NSBCon – Under the Hood of Particular Software

In case you've missed it, the first ever NServiceBus conference is happening 26th-27th June in London. There is a whole host of excellent speakers, so make sure not to miss it!

https://skillsmatter.com/conferences/6198-nsbcon

I'm going to deliver a session on how we do things inside engineering here at Particular. This is a talk that I've been really wanting to do for a long time, because I'm leading a truly awesome team and we've really been pushing the boundaries of what I thought was possible in terms of automating "all the things" :)

The abstract for the talk is:


Continuously delivering a top quality platform on which customers build and run their businesses is not a challenge to be taken lightly. Join Andreas Öhlund, Director of Engineering at Particular Software, for a journey through custom tooling, testing the un-testable and massive cloud service bills in the name of backwards compatibility, performance and reliability.

An adventurous journey indeed but an absolute must to ensure that the software you depend on delivers what’s stated on the box.

The talk will give an in-depth look at what we do to continuously deliver our service platform with the top quality that is expected from us.

See you there!


How we version our software – in Particular

In my last post I talked about how we aim to have a release build in Visual Studio produce production ready binaries. Obviously, for something to be production ready it must have a version in order for users to know what they're actually using. Just to clarify, when I talk about versioning in this post I refer to the technical version, most likely following SemVer, that you give the artifacts you release. I'm not talking about the marketing version that you might brand your things with.

The strategies for technical versioning I’ve seen can be split into 2 camps:

  1. Have the build server determine the version
  2. Commit version information into the repository itself

Since we want the version to be correct even when you go back in time and build older branches and tags, #1 is ruled out. Another drawback with that strategy is that you need to have the build server up and running to produce a meaningful release version. We definitely want a local build to generate a good version, so that puts us firmly in camp #2.

At this time two different forces were in play. Firstly, we didn't like all those "Bumping version to X" commits cluttering our repos, and a process like that causes a lot of emails from users saying "the latest release has the wrong version, did you forget to change it?" to appear in your inbox. Secondly, since we're moving to a multi-repository model we needed to ensure that we use a consistent branching model across our repositories. Why is that important? Trust me, spending the first few brain cycles on figuring out where to start coding every time you switch repo is definitely a drag on productivity.

For us the branching model we've decided to go with is GitFlow, and as we started to discuss our versioning scheme we realized that if we follow the GitFlow model and also adhere to SemVer we can deduce the version based on the branch names and merge commits in Git. If we could pull that off we could kill two birds with one stone: version information is embedded in the repo without the dreaded "bumping to X" commits, and we could also enforce our branching model by failing the build if we detect illegal commits and branch names. As an example, a commit straight to "master" is not valid using GitFlow since the head will always point to a merge commit for either a hotfix or a release branch.

GitFlowVersion was born

I'm not going into full detail on the algorithm as it's already documented here, but in short the develop branch is always one minor version above master, master is always the SemVer version of the latest merge commit (e.g. hotfix-4.1.1 or release-4.3.0), and so on.
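
As a rough illustration of those rules (the exact pre-release numbering is GitFlowVersion's business, so treat the numbers below as my assumption):

# master, latest merge commit came from release-4.3.0  -> 4.3.0
# master, latest merge commit came from hotfix-4.3.1   -> 4.3.1
# develop, always one minor above master               -> 4.4.0-Unstable<build>  (more on the Unstable suffix below)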

We use GFV across all our repos and so far it seems to hold up. The major hurdles so far have been all the quirks in NuGet that we've had to work around, since the version generated by GitFlowVersion is also used to version our NuGet packages. One of the main snags was that NuGet has no concept of builds, and since we use NuGet packages to manage dependencies across repositories we need to make sure that our develop branch always outputs packages that sort highest at all times. This forced us to give our develop builds the "Unstable" pre-release suffix, which makes sure that they stay on top since Unstable > Beta, RC, etc. I'll go into more details on how we do our cross-repo dependencies in an upcoming post so stay tuned!

Here is the NServiceBus repository being versioned by GFV:  (the number you see is the PR number that’s assigned by GitHub)


TL;DR;

Use GitFlowVersion to automatically version your code without mucking around with either the build server or VERSION.txt files…

https://github.com/Particular/GitFlowVersion


Build scripts – in Particular

In my previous post I mentioned that we're moving towards a more fine grained structure for our repositories. This direction has had some quite interesting effects on our build scripts…

We don’t use build scripts any more

Yes, you read that correctly. Since we now have far fewer moving pieces in our repositories, the complexity of our build has decreased to the point that we don't see any value in running scripts to perform our build. Actually that isn't entirely true, since we use Visual Studio to do our builds and that's technically a build script.

So let me rephrase: we don't use a separate scripting language apart from MSBuild (csprojs + slns) to build our source code. This means that we build locally using Visual Studio (or msbuild on the command line), and on the build server (TeamCity) we use the Visual Studio solution build runner. This means that we can skip the dreaded readme.txt that tells you all the steps you need to do before you can fire up Visual Studio and get some coding done.
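
In practice a local command line build is nothing more than pointing msbuild at the solution; the solution name below is just an example:

# From a developer command prompt (or anywhere msbuild is on the path)
msbuild NServiceBus.sln /p:Configuration=Release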

But MsBuild sucks?

Yes, it surely sucks. Being a recovering NAnt user I can sincerely tell you that all that XML is bad for your eyes. We moved from NAnt to psake, and that was a great relief for managing our very complex build at the time without paying the XML tax. But now it seems we've come full circle. Yes, we need to drop back to hacking XML straight into the csprojs now and then, but we see that as healthy friction that tells us we need to simplify in order to keep our goal that a "release build in Visual Studio should create production ready binaries". We're not there yet, hence the "release build" part. We needed to make this trade-off since we still do ILMerge, which can be quite time consuming, so we decided to only do it for release builds. Again, this friction is good since it is pushing us to find ways to avoid ILMerging dependencies, since that tends to come back to haunt you in the end. If we need more advanced things like package restore using ripple or automatic versioning using GitFlowVersion, we've created our own MSBuild tasks to avoid too much XML editing.

Final words

I'm not saying that you should avoid build scripts, but our journey towards small repos has allowed us to get away with just the .sln's and .csproj's to handle our build, and it has worked out very nicely for us.


The pain and suffering of large source code repositories

Back in the days when we were hacking away at NServiceBus v2 we kept all code in a single repository. All was good, happy days…

Things have changed since then; we started to feel pain, and one day the happiness was gone…

Perhaps we’d outgrown the single repo model?

The first sign that we needed something else was the rename of the company to Particular. The move was driven by us wanting to provide other NServiceBus-related products and to create what we hope to be an awesome service platform. All those new products, ServiceMatrix, ServiceInsight, ServicePulse and ServiceControl, obviously needed to go into separate GitHub repositories. We now had to manage more than one repo, and that process opened our eyes to the possibility of splitting the NServiceBus repo even further. I'm not saying that this applies to all of you out there, but as you grow do pay attention; as the pain starts to increase it might be a sign that you need to split things up.

NServiceBus v3 added support for Windows Azure as a transport, RavenDB as persistence and a host of new IoC containers. V4 added transports for ActiveMQ, RabbitMQ, WebSphereMQ and SqlServer. We'd hit the tipping point; we couldn't stand the pain anymore…

Sources of pain – One release to rule them all

Our massive NServiceBus repo effectively locked us into a "release everything every time" model. While this was fine when we only supported MSMQ as the underlying transport, it got ridiculous with the addition of alternate transports. Imagine having to put out a patch release for RabbitMQ just because there was a bug in MSMQ. Yes, you can do partial releases from one big repo, but since everything from TeamCity to GitHub is really built with a repository == product == one release cycle mindset, the friction that partial releases would cause makes this workaround unsustainable.

Since we need to release our transports, storages and containers in close relation to the release cycles of said dependencies, this essentially meant that we had to release a lot of components even though nothing had changed.

User: “So what’s new in 4.0.3?”

Us: “We updated to RavenDB 2.2”

User: “But I’m using NHibernate to store my stuff in SqlServer?”

#FAIL! 

Branching models

This pain is really an extension of the release cycle issues mentioned above, but your branching model will most likely fall apart when you're trying to juggle multiple hotfixes/releases for different parts of your source tree. We're using GitFlow as our branching model, but I'm convinced the same would apply to pretty much all the different models out there. Imagine the confusion when you create a release-4.1.0 branch to start stabilising X and then the week after have to create a release-1.1.0 to stabilise Y. Since developers always seem mystified by the use of branches (I blame you, TFS, Subversion, ClearCase!), this added dimension is definitely going to throw you right into "let's stick with the master branch" mode.

Test complexity and speed

We have an extensive suite of automated tests that runs for each transport, storage and container implementation we have. In order to support everything running off the same repo we needed to put in a lot of code to handle those "permutations". The other obvious drawback is that, since those tests are fairly slow and we need to run them all for every change, it didn't scale at all. Since we've started to split things up, test complexity has gone down and tests run much faster, since we only run the ones that are actually relevant for the code that has changed.

Versioning

Since we want different versions for each "component" (we follow SemVer), trying to achieve that in one big repo is doomed to fail. You can't put the version info into the build server since it will build the entire repo for each change. Putting it into the repo itself is tricky as well, since it requires a lot of tinkering and causes the build complexity to go up seriously. Committing version info to the repo is IMO not a good idea anyway, since it causes tons of unneeded churn with all those "Bumping version to X.Y.Z" commits. I'll leave it to a follow-up post to show you our take on solving this in a fully automated way. (Spoiler – GitFlowVersion)

These were the main pain points for us, but the list goes on with issue management, pull request management, contributions, permissions, repo size, build complexity etc…

Is this multi repo thing all unicorns and cute kittens?

NO!

If this was a few years back I'd say we couldn't have pulled it off, since support for this type of structure was nowhere to be found in the tools of that time. Even today we've had to build a lot of custom tools to help us get where we want to be. Things that could be done manually before transform into productivity killers, and I'll tell you this:

If you're not prepared to breed an "automate everything" culture this thing is not for you. When going multi-repo, any manual task that might have been fine in the past will grind you to a halt. At Particular, "automate everything" is one of our core values, since we're not enough people to spend our precious time on "automatable" things like creating release notes, bumping versions, updating dependencies etc.

We've put in a lot of effort to move in this direction, and our collaboration with the FubuMVC project has been invaluable since they are quite far down the fine-grained repo road and have provided us with great insights and tools. Thanks Josh and Jeremy!

I definitely hope to get my blogging act together, so this is the first post in a series where I'll talk about all the tools we use or created, the processes we follow, and what we had to adjust to embark on this journey. So far we've got the transports separated, and we're setting our sights on the storages and containers next. We're even looking at cleaning up our core even further by moving optional components like our Gateway and Distributor to separate repositories as well.

TL;DR; Fine-grained repositories require a great deal of effort but are definitely worth it! #yourmileagemayvary


Accessing audit and error messages in NServiceBus v4.0

NServiceBus has always supported centralized audit and error handling by allowing you to specify centralized queues where successfully processed and failed messages end up. NServiceBus will guarantee that each message you send/publish ends up in one of those queues, hopefully the audit queue :). Earlier versions of NServiceBus had some rudimentary support for managing errors, but for audit messages it was up to you as a user to make something useful out of all that data. This has now changed with the introduction of ServiceInsight.

ServiceInsight gives you a graphical view of your message flows, insight into why a given message has failed, and a way to issue retries for those messages. Since queues are not really that great for long-term storage of data, we needed a way to process all those messages in order to make it easier to query and manipulate them from the UI. We also decided not to limit access to just our own tools, but instead make all the data, and the operations on it, available to you via a REST'ish API.

The API is still in its early days, but we hope to keep expanding it to help you build management extensions for NServiceBus in your tool of choice.

Official documentation is coming soon but I don’t expect much to change from what’s mentioned below.

Installing the backend

The API backend is installed automatically when you run the NServiceBus or ServiceInsight installers. The backend is just a regular NServiceBus host that gets installed as a Windows service. Once that is done it will continuously import messages from your audit and error queues and make the data available over HTTP. By default the API will respond on http://localhost:33333/api, but the installers allow you to tweak the URI to your liking. If you forget where it got installed you can always consult the following regkey:

HKEY_LOCAL_MACHINE\SOFTWARE\ParticularSoftware\ServiceBus\Management
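
If you prefer to check it from PowerShell, here is a minimal sketch (the exact value names depend on your install):

# Lists the settings the installer wrote, including the configured URI
Get-ItemProperty "HKLM:\SOFTWARE\ParticularSoftware\ServiceBus\Management"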

Once installed the endpoint will import and index all error and audit messages.

Where did my errors go?

The management endpoint will import all errors into our internal storage and leave a copy in a queue called {name of your error q}.log, so in your case it will most likely be error.log.

This means that you can still run tools like QueueExplorer, ReturnToSourceQueue.exe etc. to manage your errors in the same way as before if you don't have access to ServiceInsight.

Supported API calls

The API is heavily inspired by the GitHub API guidelines for URLs, paging, sorting etc.

At the time of writing the following calls are supported:

GET /audit – lists all messages successfully processed

GET /endpoints/:name/audit – lists all messages successfully processed for the given endpoint

GET /errors – lists all failed messages

GET /endpoints/:name/errors – lists all failed messages for the given endpoint

GET /messages/:id – gets the given message

GET /messages/search/:criteria – does a free text search with the given Lucene query

GET /endpoints – lists all the known endpoints

GET /endpoints/:name/messages – lists all messages (including failed ones) for the given endpoint

GET /conversations/:id – gets all messages for the given conversation id (I'll talk more about conversations in a follow-up post)

In terms of updates we only support one at the moment:

POST /errors/:id/retry – issues a retry for the given message. Returns HTTP Accepted (202) if successful, since the actual retry is performed asynchronously.
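
To show what calling the API looks like, here is a minimal PowerShell sketch against the default URI mentioned above; the message id is a placeholder you'd grab from one of the list calls:

$api = "http://localhost:33333/api"

# List all successfully processed messages
Invoke-RestMethod "$api/audit"

# List all failed messages
Invoke-RestMethod "$api/errors"

# Issue a retry for a failed message (expect a 202 Accepted, the retry itself runs async)
Invoke-RestMethod -Method Post "$api/errors/{message id}/retry"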


Security

There is no explicit security in place at the moment, and the service is expected to run on a secured server that only authorized personnel have access to.

Let's see it in action

I’ve created a few ScriptCs scripts to show the usage of some of the above calls.

The full repo can be found here:
Latest unstable Management API:

Please take it for a spin and tell me what you think!

You can use the Video store sample that comes with NServiceBus to generate some test data if you don’t have a v4 system up and running!


What is missing?

Which tools would you build NServiceBus plugins for?

Comments are most welcome!


Limitations:

* The first drop only supports the MSMQ transport, but support for the others is our top priority, so expect support for all other transports in the very near future.



A particular blog post

You might have wondered why this blog has been so silent. Yes, I know workload is a poor excuse for not writing blog posts, but the reason is that I've been heads down trying to get NServiceBus v4.0 out of the door. We released v3 on March 8, 2012, so it's been a little over a year since our last major release, but v4.0 is finally now available as a release candidate with a go-live license. You can read the release notes here.

Another major change is that we renamed the company to Particular Software, since we felt that we wanted to broaden our offerings to include a range of other products related to NServiceBus. But hey, don't worry, we're the same guys writing software particularly for you!

I’ll be ramping up my blogging both here and on our new company blog going forwards!


NServiceBus v4 – Beta1 is out

I just want to let you know that we released the first beta of the upcoming v4 of NServiceBus last Friday. The main focus for v4 has been to make NServiceBus run on a wider range of queuing infrastructures while still giving you the same developer experience.

Out of the box v4 will support ActiveMQ, RabbitMQ, WebSphereMQ and SqlServer – yes, using your old and trusty database as a queuing platform. While adding support for all those new transports we had to do quite a lot of refactoring deep in the bowels of NServiceBus, and the end result is a much cleaner codebase; it's my hope that adding new transports should be a breeze going forwards. There is already a Redis transport in the works by our amazing community.

We have of course made a lot of other improvements as well, so please take a look at the release notes for the full scope.

From now on we'll be focusing on stabilizing the release and ramping up on documentation, so hopefully you'll see an increase in v4-related blog posts here as well.

You can grab the new bits either as an MSI download over at our site or via NuGet.

If all goes well we hope to get the final version out of the door within the next 3-4 weeks.

Go ahead and take it for a spin!



Pluralsight interview

I was interviewed by the good folks over at Pluralsight a few weeks ago and the result is now online. If you’re interested in a few war stories from my dark past and also a glimpse into what’s coming up in NServiceBus vNext you can listen to the full interview here:

http://blog.pluralsight.com/2012/12/04/meet-the-author-andreas-ohlund-on-introduction-to-nservicebus/

