
New branching article in upcoming MSDN Magazine.

Jan 10, 2011 at 7:42 PM

After a few years' hiatus, I am returning to the world of *published author*. My next article will be published in the February 2011 issue of MSDN Magazine. The topic will be *Visual Studio ALM Rangers Branching Guidance for Team Foundation Server (TFS) Team Projects*. My co-author is Willy-Peter Schaub, also on the VS ALM Rangers team at Microsoft.

I am also blogging on this topic:

Feel free to post questions here on new topics you would like to see added to the next release of the Rangers' Branching Guidance. You may also post comments on my blog.

Bill Heys
VS ALM Ranger

Jan 12, 2011 at 12:16 AM
Edited Jan 12, 2011 at 2:50 AM

Looking forward to your article in MSDN, Bill.

I was reading TechCrunch today and saw this article:

The main part of the article that interests me is the following:

"Anthony LaForge, the technical program manager at Google overseeing Chrome development, created the presentation below (and posted it on Google Docs) to explain how Chrome’s development cycles work. Instead of a traditional software development cycle where features are crammed into each release or delay the release, Chrome puts out a new release no matter what every six weeks. If new features aren’t ready, they wait for the next release, just like waiting for the next scheduled train at Grand Central.

Another thing that speeds things along is that the Chrome browser is simultaneously developed along three different “channels” (dev, beta, and stable). Users can pick which one they are most comfortable with, and their browsers are updated automatically. New features are introduced first in the dev and beta channels, which merge with the stable channel as those features get patched and stabilized.

The versions start to blend together. The approach is more like updating a website than a piece of client software. The version numbers don’t really matter. What version of Amazon are you on? Exactly."

This model seems at odds with the branching guidance from the Rangers... BUT Google ships this browser to hundreds of millions of customers, so it HAS to work. I'd love to get the Rangers' insight into how Google's model could be applied to TFS. We do enterprise ClickOnce development, so our updates get pushed out to our employees rapidly, and while branching has streamlined our "release" process, I'm very interested (as is my entire executive chain, up to the CIO) in alternative models that produce and deliver software/value faster, better, and cheaper.

Looking forward to your article and your thoughts.

I found one quote on the slides that made my head spin: "The branch point is the end of our development cycle."  huh?!?!?

In the TFS world isn't the guidance from the rangers that the branch point (whether by feature, by team, etc) is at the beginning of the development cycle?

Oh, and just because Chrome isn't developed by Microsoft, please don't fall into NIH (Not Invented Here) syndrome. Could Google's brilliant engineers have come up with some newfangled, better branching model?

I've been thinking about this a little bit more, and we know that Google is using Perforce. Could Google be relying on how Perforce handles baseless merging to accomplish this feat of simultaneous development across multiple branches without regressions? My branching spider senses are tingling that somehow Google has figured out how to exploit baseless merges for good instead of evil...

Jan 12, 2011 at 3:01 AM

Thanks Allen.

You won't get a NIH response from me here. We can all learn from competition. My hope (and experience) is that the VS ALM Rangers Branching Guidance, while written for the TFS audience, is useful for users of other Source Code Management (SCM) tools. I am at a bit of a disadvantage regarding the Google deployment model, so I need to do a bit of research. I have been intending to blog on concepts similar to this. It is, I believe, simply a combination of branching flexibility (complexity) and release (deployment) flexibility. Look for a blog post soon, which I will link from this thread.


Bill Heys
VS ALM Ranger 


Jan 12, 2011 at 10:14 PM
Edited Jan 12, 2011 at 10:15 PM

Thanks for responding, Bill.
We've been looking at this for a while, and flexibility in deployment, plus rapid deployment, seems to be a key trait of large-scale, successful internet companies. Take Flickr, for example: on the bottom of the page you'll see another example of rapid deployment.

I'd love to learn more about how to manage rapid deployments (even simultaneous deployments of concurrent code lines) using TFS branching.

Jan 12, 2011 at 10:49 PM
Edited Jan 12, 2011 at 10:50 PM


As I think about your first comment (the Google Chrome release model), it seems there are two concepts here:

  • Development Process
  • Release Process

It seems Google Chrome uses a modified iterative, incremental (Agile) process for development. They have decided that each increment will be an overlapping eleven-week iteration (the first six weeks being development). Within that eleven-week timeframe they add new functionality (features), perhaps fix bugs, test, and release an increment of working features.

That process could easily be mapped to Scrum. Martin Hinshelwood has written an interesting blog post on branching for Scrum: You will note that I had the opportunity to review it before it was published. It draws heavily from the Rangers Branching Guidance.

If you are familiar with Scrum, you probably know that you need to start with a set of features to develop (the Product Backlog). This is prioritized, and as many of the top-priority features as can be developed and shipped in one Sprint are then moved onto the Sprint Backlog (during the Sprint planning session). Sprints are often 30 days in duration, but some are as short as two weeks, while some may go longer. You could argue that Google Chrome development takes place as six-week Sprints (after a fashion).

You will often see the activities of a Sprint depicted as a circle, to show the iterative nature of Scrum software development. It starts with a Sprint planning meeting, where the Sprint backlog is agreed to. It proceeds, unhindered by the stakeholders, until the Sprint backlog is developed and ready to ship. This is followed by a Sprint retrospective, which looks back at what went well and what did not. Following this, the next Sprint begins. If you wanted to look at this on a timeline, you could *unwind* the circle and lay it out straight. The length of this timeline would correspond to the length you have chosen for a Sprint (typically 30 days).

From a branching perspective, there are two schools of thought. One: create a single Development branch as a full child of Main, which is where the Sprint team works on development during a Sprint. At the end of testing, this branch is merged back down to Main (reverse integration, or RI) to be stabilized for release. If you look at Martin's post, you will see that the timeframe for a Sprint begins with the Sprint planning meeting. Most of the activities during the Sprint take place on the Development (Sprint) branch. But the Sprint does not end when the Development (Sprint) branch is merged to Main. It ends when the Main branch is tested (stabilized) and then branched for Release. You would have one Release branch per Sprint.

The second school of thought is the same as the first, EXCEPT you would have a new Development (Sprint) branch created each time a new Sprint begins. I happen to prefer the first approach (a Sprint branch that simply continues from one Sprint to another). Martin makes the point that by having a single Sprint branch, you have continuous history for that branch.
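
Under some assumptions, the first school can be sketched with git commands purely for illustration (in TFS the equivalent moves would be `tf branch` and `tf merge`; all branch names here are hypothetical):

```shell
# Sketch of school one: a single, long-lived Development (Sprint) branch.
# Illustrated with git; this is not the TFS syntax, just the same shape.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b Main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "baseline on Main"

# Create the Development (Sprint) branch as a full child of Main.
git checkout -q -b Dev
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Sprint 1 feature work"

# End of the Sprint: reverse-integrate (RI) the Sprint work to Main,
# stabilize Main, then branch for release (one Release branch per Sprint).
git checkout -q Main
git merge -q --no-edit Dev
git branch Release-Sprint1

# The Dev branch simply continues into Sprint 2, keeping continuous history.
git checkout -q Dev
```

Under the second school, you would instead retire the Dev branch at Sprint end and create a fresh Sprint branch each time, trading the continuous history away for per-Sprint isolation.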

The Sprint Branches in this example would correspond to the Dev branch in the Google Chrome presentation. The Main branch, as it is being stabilized, would correspond to the Beta branch in the Google Chrome presentation. Finally the Release branch would correspond to the Stable Release branch in the Google Chrome Presentation.

Next we come to deployment.

I have always viewed support for multiple environments, such as Feature Testing, Integration Testing, System Testing, and User Acceptance Testing, as deployment issues, not branching issues. By that I mean I don't necessarily need a QA branch in order to allow my QA team to test the code in my Feature branch. I deploy the code from the Feature branch to the QA environment. I can control how often and when I do this deployment. In this way I can also associate bugs from QA with the specific deployment (which might be labeled in the associated branch).

So, back to the Google Chrome scenario. They talk about the concept of *channels*. Channels, to me, are deployment options. Here, Google allows customers to subscribe to one of three channels: Dev, Beta, or Release. All this means, in my view, is that you have three drop locations to which customers can subscribe: a Dev drop location, a Beta drop location, and a Release drop location. When Google *drops* a new deployment into one of these locations, your subscription to the corresponding channel brings those changes to your environment.
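
As a toy illustration of channels-as-drop-locations (the folder layout and the build label below are entirely made up), the three channels are nothing more than three folders a deployment gets copied into:

```shell
# Hypothetical drop locations for the three channels.
set -e
drops=$(mktemp -d)
mkdir -p "$drops/Dev" "$drops/Beta" "$drops/Release"

# A build produced from the Dev branch is dropped into the Dev channel...
echo "build-6wk-cycle-001" > "$drops/Dev/current"

# ...and the same bits are promoted channel by channel as they stabilize.
cp "$drops/Dev/current"  "$drops/Beta/current"
cp "$drops/Beta/current" "$drops/Release/current"

# A subscriber's updater simply pulls whatever is current in its channel.
cat "$drops/Release/current"
```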

What is different here from a typical environment? They allow you to subscribe to features in Development *before* they have been Beta tested. Or to features in Beta test *before* they are ready for Release. Presumably there is increased risk and less stability when you subscribe to the Beta channel and even more risk and less stability when you subscribe to the Dev channel. This does not, in my view, mean you would do a daily drop from Dev into the Dev Drop location and make it available to the subscribers to the Dev channel. What it does mean is that roughly six weeks after a dev cycle begins, when the code is branched to Beta, the Dev code can be dropped into the Dev channel. Then after the code in Beta has been tested, perhaps it is dropped weekly into the Beta channel. Finally when the code is ready for release, it is branched for Release and dropped into the Release channel.

The key is that code must be feature-complete and tested in the Dev channel, while it can be dropped incrementally, on a weekly basis, from the Beta channel, and then dropped into the Release channel. The full timeline for a Google release is approximately eleven weeks (six in Dev and five in Beta). It is agile, but takes place on a longer cycle than typical Scrum projects.
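
The overlapping trains can be sketched with a little arithmetic (the week numbers are illustrative, assuming six weeks in Dev and five in Beta for every release):

```shell
# Each release takes ~11 weeks end to end, but a new train departs
# every 6 weeks, so Dev of release N+1 overlaps Beta of release N.
for release in 1 2 3; do
  dev_start=$(( (release - 1) * 6 ))
  beta_start=$(( dev_start + 6 ))
  ship_week=$(( dev_start + 11 ))
  echo "Release $release: Dev wk $dev_start, Beta wk $beta_start, ships wk $ship_week"
done
```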

I will take a look at Flickr to see how it influences my thinking next.

Bill Heys
VS ALM Ranger 


Jan 17, 2011 at 7:07 PM
Edited Jan 17, 2011 at 7:10 PM

Bill - I found another interesting article about how Facebook ships code. It might make great reference material for the next revision of the branching guidance.

I thought these four points were very interesting:

  • no QA at all, zero.  engineers responsible for testing, bug fixes, and post-launch maintenance of their own work.  there are some unit-testing and integration-testing frameworks available, but only sporadically used.
  • re: surprise at lack of QA or automated unit tests — “most engineers are capable of writing bug-free code.  it’s just that they don’t have an incentive to do so at most companies.  when there’s a QA department, it’s easy to just throw it over to them to find the errors.”
  • by default all code commits get packaged into weekly releases (tuesdays)
  • getting svn-blamed, publicly shamed, or slipping projects too often will result in an engineer getting fired.  ”it’s a very high performance culture”.  people that aren’t productive or aren’t super talented really stick out.  Managers will literally take poor performers aside within 6 months of hiring and say “this just isn’t working out, you’re not a good culture fit”.  this actually applies at every level of the company, even C-level and VP-level hires have been quickly dismissed if they aren’t super productive.

Jan 17, 2011 at 7:29 PM


Interesting article, but there is not much here that directly pertains to branching.

They are doing weekly releases, but the concept is similar to the blog post I just published (the primary differences being the number of engineering teams working in parallel and the frequency of releases).

There is not a lot of insight in this article (or the Flickr article) into how branching supports their development/release process.

Clearly Facebook, by design, is an intimidating engineering environment. They rely on individual excellence for their success, rather than formal processes. They will find this model increasingly difficult to sustain as they grow. Microsoft is at least 50x the size of Facebook. At some point you cannot manage 100,000 employees by expecting them all to be individually responsible for writing perfect (bug-free) code. In fact, I find the very concept arrogant and naive. Most engineers simply do not understand how to test, nor do they have the desire to do so. Engineers, at least those I have worked with, much prefer building something to finding bugs in what they build. When engineers test, they often focus on proving that something works, rather than proving where it does not. For example: do a check-out with a shopping cart, and you get a confirmation screen for your order. But did it log all of the correct information? What happens if you enter a negative quantity into a line item? Does the system consolidate multiple line items *of the same thing* into one line item on the order (by incrementing the quantity)? How does it handle dates in the past? And so on.

QA departments, by contrast, are motivated to find the things that don't work, and are perhaps less interested in finding the things that do; the engineers have already tested from that perspective.

This article, therefore, offers more insight into the (alleged) culture at Facebook: the lack of a formal QA process (not a branching discussion), weekly releases (more of a deployment discussion), the lack of automated testing (a silly and arrogant excuse for not having proper regression-testing processes in place), and so on.

The one concept that I do think has implications for branching is that *all code commits* get packaged into a release. The concept of releasing the latest version, rather than cherry-picking changes for release, is fundamentally part of our guidance.


Bill Heys
VS ALM Ranger


May 31, 2011 at 12:03 AM
Edited May 31, 2011 at 12:54 AM

Bill, I saw a really interesting presentation on Facebook's branching structure and pushes:

I was surprised that they have such a simplified branching model. I'm also surprised they are using Git, SVN, and Perforce... and not TFS. I'd love to learn more about why they aren't using TFS, given their close relationship with Microsoft ($100 million).


May 31, 2011 at 2:59 AM

Thanks for this, Allen.
I am working with a customer that seems to be operating under similar constraints: frequent releases, a rapidly growing engineering team.
This customer has been using Subversion, but they want to move to TFS to take advantage of the dashboard capabilities. They need to be more transparent with respect to reporting status to their senior management and stakeholders.
Interestingly, they don't use branching *at all*. They have one *trunk* in Subversion. When they want to do a release, each developer *checks in* their changes to the trunk (sounds like FB).

I would observe that the FB branching structure maps *somewhat* to the Basic branch plan. Their *Trunk* is what we call the development branch; it is where developers check in their code. Our Main branch is where FB does their stabilization and release.
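
That mapping can be sketched the same way (git used purely for illustration; the branch and tag names are hypothetical):

```shell
# FB's *Trunk* plays the role of our development branch; Main is where
# stabilization and release happen. Sketched with git, not their tooling.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b Main
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "baseline on Main"

# "Trunk" = the development branch where every engineer checks in.
git checkout -q -b Trunk
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "this week's checkins"

# Weekly release: merge Trunk to Main, stabilize there, tag the release.
git checkout -q Main
git merge -q --no-edit Trunk
git tag weekly-release-1
```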