Cumulative Benefit Tasks

Saturday, 15 March 2014 by haemoglobin

I’ve been struggling to think of a term for a concept that’s been on my mind recently, and have settled on “Cumulative Benefit Task” – if there is already a term for these, or a better one, let me know.

Basically, it’s anything that takes an initial outlay of effort, or has some inertia to overcome, but once done, rewards are reaped continuously from then on. These aren’t tasks you get the full benefit from immediately. Each regular “reward” for the initial effort may be quite small, but the total value grows and grows the longer that benefit is available. The net effect is that cumulative benefit tasks are always most valuable when set up early.

The concept is also based on the fact that work projects, and life itself, are finite in time. If you set these things up late in the piece, the benefit you get from them will be much less. The idea: do the initial effort now to set yourself up for stress- and problem-free, continual cumulative benefit from then on, and gain the maximum advantage.

The chart below also reflects the initial effort cost, which may potentially be substantial (although most of the time it is not so bad once you actually get around to it). The earlier it is done, the sooner it is “paid off” and made worth it in the long run. Put another way, if you put something off for too long because you are “too busy”, it might almost not be worth doing at all by the time it gets done.

[Chart: cumulative benefit over time – the earlier the initial effort is made, the sooner it is paid off and the greater the total benefit]

Let me give some seemingly random and simple examples just to give the idea: 

Work Life / Tech

  • Continuous Integration / Development Tools
    • Initial Effort: Potentially fairly involved setup of build process, build server and various automation of deployment / development tasks.
    • Cumulative Benefit: A much faster and more robust development process for the rest of the project’s lifetime – there is little point putting this off to the end.
  • Development Process
    • Initial Effort: Setup of the agile process early.
    • Cumulative Benefit: The advantages of agile software development for the entire project duration – this is also self-improving, so the longer it has to improve itself, the better.
  • Daily Workflow
    • Initial Effort: Decide on how best to structure your emails, calendar, tasks and categories within Outlook for example.
    • Cumulative Benefit: Day-to-day workflow is set early, runs in an efficient manner, and runs on auto-pilot from then on.
  • Kit
    • Initial Effort: Getting licenses for software tools and physical kit that increase productivity (extra monitor? wireless headset? comfortable chair? fast PC?).
    • Cumulative Benefit: Productivity gains have the biggest impact the longer they are in place.

Home Life

  • Habits / Fitness / Diet / Routine
    • Initial Effort: Time spent setting up good habits, routine, diet and fitness plan.
    • Cumulative Benefit: Once this is set up, you can continue your busy life knowing that the underlying good habits, now second nature, are keeping you on track and healthy.
  • Investments
    • Initial Effort: Time spent investigating financial investments.
    • Cumulative Benefit: The earlier this is done the better, of course – put the money to work rather than letting it lose value to inflation. This is one case where the benefit can be exponential, not just cumulative, for example when compound interest is involved (see the toy sketch after this list). This might also be buying a house.
  • Buying a car / bike
    • Initial Effort: Time for research, dealing with car sales people and initial cost.
    • Cumulative Benefit: The longer you have the car or bike, the more opportunities you have to take advantage of it.
  • Setup at home
    • Initial Effort: Investigating and buying a sound system, filing cabinet, digital picture frame, bread maker, seeding a garden, etc.
    • Cumulative Benefit: Every day you can take advantage of things around the home that you spent time setting up in the beginning.
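To make the cumulative-versus-exponential distinction concrete, here is a toy sketch in C# (mine, with entirely hypothetical numbers) comparing a fixed per-period reward with a compounding one:

using System;

class BenefitSketch
{
    static void Main()
    {
        double cumulative = 0;      // total from a fixed reward per period (linear)
        double compounding = 1000;  // hypothetical starting investment (exponential)
        const double reward = 50;   // hypothetical fixed benefit per period
        const double rate = 0.07;   // hypothetical 7% compounding rate per period

        for (int period = 1; period <= 30; period++)
        {
            cumulative += reward;    // the same reward every period
            compounding *= 1 + rate; // each period's growth builds on the last
        }

        Console.WriteLine($"Cumulative after 30 periods:  {cumulative:F0}");   // 1500
        Console.WriteLine($"Compounding after 30 periods: {compounding:F0}");  // ~7612
    }
}

Either way, the curves only have room to run if you start them early.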

These were just random examples, but I propose that these types of tasks be placed into the high-importance category of our to-do lists (maybe even tagged as Cumulative Benefit tasks) and set as a high priority in life, rather than being put off longer than they need to be.

Let’s get to it!

Update: @JasonGlover pointed me to the following xkcd sketch which also relates in many ways :)


Essential Application Install List

Saturday, 27 October 2012 by haemoglobin

Here are the applications on my essential install list. I install these immediately whenever I set up a brand new operating system (which I have been doing a bit of recently, going through all the Windows 8 developer preview / release candidate / RTM builds):

  • Firefox with Bookmark Sync
    • Addons:
      • Statusbar
      • Scroll Bar Anywhere
      • Read It Later / Pocket
      • Lastpass
  • Chrome
  • Visual Studio 2012 (Dark Theme) / Resharper / NCrunch
  • TortoiseGit / Github for Windows
  • Evernote
  • Launchy
  • Autohotkey
  • Windows Live Writer
  • Dynamic Bing Theme
  • Filezilla
  • Notepad++
  • VLC Media Player
  • Adobe Reader
  • Paint.NET
  • 7-zip

This will normally be enough to keep me going for quite a long time. Everything else I download as I need it.

What’s on your essentials list?

Categories:   Technology

Does RX Switch Dispose Old Subscriptions?

Saturday, 27 October 2012 by haemoglobin

I am currently lucky enough to be making use of RX at work, and – as with most people who get the chance to use it – I must say I love it.

It is amazing to be able to pass an observable around your application, combine it with other observables, subscribe to the result, and have events pumped straight to you when and how you want them. You hear the term “composable” bandied about, but without seeing it done it’s hard to appreciate.

A recent requirement was for a screen to subscribe to push updates from the server every time a set of data is retrieved. Whenever a new set of data arrives, a new subscription for push updates is set up; at the same time we no longer need the updates for the previous set, so we must ensure that the old subscription is torn down.

The RX Switch operator looked like the perfect fit for this – however, I wanted to make sure that the old subscription really is disposed of, so we are not keeping needless connections open.
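In Rx terms, the pattern looks something like the following runnable sketch – with a Subject standing in for the stream of data retrievals and Interval standing in for the server push updates (hypothetical stand-ins, not our production code):

using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using System.Threading;

class SwitchPattern
{
    static void Main()
    {
        // Hypothetical stand-in: each retrieved data set spawns a stream of push updates.
        var retrievals = new Subject<string>();

        IObservable<string> updates = retrievals
            .Select(dataSet => Observable
                .Interval(TimeSpan.FromMilliseconds(100))
                .Select(n => $"{dataSet}: update {n}"))
            .Switch(); // only the stream for the latest retrieval stays subscribed

        using (updates.Subscribe(Console.WriteLine))
        {
            retrievals.OnNext("data set 1");
            Thread.Sleep(350);               // prints updates for data set 1
            retrievals.OnNext("data set 2"); // data set 1's stream should be torn down here
            Thread.Sleep(350);               // prints updates for data set 2 only
        }
    }
}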

The behaviour of Switch() is easy to test:
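Something along these lines, using NUnit, Observable.Create and a disposal flag per inner observable:

using System;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using NUnit.Framework;

[TestFixture]
public class SwitchDisposalTests
{
    [Test]
    public void Switch_Disposes_Previous_Subscription()
    {
        var disposed = new bool[3]; // disposed[i] records whether observable i was disposed
        var outer = new Subject<IObservable<int>>();

        // Each inner observable emits its value of i and flags its own disposal.
        Func<int, IObservable<int>> createObservable = i =>
            Observable.Create<int>(observer =>
            {
                observer.OnNext(i);
                return Disposable.Create(() => disposed[i] = true);
            });

        using (outer.Switch().Subscribe())
        {
            outer.OnNext(createObservable(1));
            outer.OnNext(createObservable(2)); // switching should dispose observable 1

            Assert.IsTrue(disposed[1]);  // the first subscription was torn down
            Assert.IsFalse(disposed[2]); // the second is still live
        }
    }
}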

The above code creates two observables, i being 1 then 2 in each case. We are looking to see that the first observable – i.e. the one where i is 1 – is disposed.

This passes, showing that with the Switch operator the subscription to the first observable is indeed disposed of when the second observable is created and becomes the new subscription. If Switch() is replaced with Merge(), for example, then understandably both are kept alive and the test fails.

Categories:   Development

My Electronic Workflow

Saturday, 21 July 2012 by haemoglobin

After two years of being in London I have refined my electronic workflow to the point where it works so well that I’ve been wanting to document it. It has circles of information flowing seamlessly between different services, eventually settling in the right place for you to deal with at the right time. It also works very well if you spend periods of time with no internet connectivity, such as commuting underground or travelling.

The setup for me looks like this, devices on the outside with supporting apps, and cloud services down the middle:

[Diagram: devices around the outside with their supporting apps, cloud services down the middle]

A well-managed information flow between devices of course requires all information to be stored in the cloud. All the boxes down the middle are places on the internet where my life, in essence, is stored, and can be accessed easily from any device or any internet cafe.

These are:

  • Pocket (formerly known as Read it Later)

    • A brilliant service for saving internet articles to read later at a more suitable time. The really well-made iPhone/iPad app will sync these articles for you to read offline, anywhere and at any time. If offline, it remembers which items you mark as read, so everything syncs up again the next time you connect to the internet.

    • Both Byline (RSS) and Echofon (Twitter) have Pocket integration, so articles can be sent in from there to read later. Byline does a much better job of queuing links sent to Pocket when offline; Echofon annoyingly requires you to be online, which is not as convenient – hence the dotted line. You can, however, send links to Pocket via email, which always works offline on the iPhone/iPad.

    • For most long articles, I wait until they are inside Pocket rather than reading them directly from Twitter/RSS feeds, for the simple fact that if an article takes too long to read, I’m blocked from seeing any of the other twitter / rss action until I’m done.

  • Google Reader

    • Still the best RSS service around by far, and a great source of information. Byline is the app I have been using to process these for a while now, and it does the job well. As mentioned above, Byline integrates with Pocket very well.

  • Twitter

    • Another great source of information in our industry. Echofon supports online integration with Pocket.

  • Buffer

    • A service I have started using for queuing up tweets and sending them to Twitter at set times, to avoid spamming followers with flurries of tweets all at once and to spread them out more evenly. It is very sleek and allows you to have 10 items in your queue at any time for free. Buffer accepts email as input, which makes it very easy to integrate and send tweets from any application (and allows tweets to be sent offline).

  • Gmail

    • The best webmail service out there. Syncs great with everything.

  • Toodledo

    • My main to-do list service now, and I am very pleased with it. I have written about my conversion to Toodledo in a previous post. Its main role in the workflow is the ability to send tasks to Toodledo through email. Most apps in iOS can send email, which means you can send yourself to-do tasks from any application holding information you want to capture. This is a great way to get info off iOS and into PC land for “proper work” to be done on it. Emails also queue very nicely offline in iOS, and they all send automatically as soon as you walk into a wifi/3G connection again.

    • The Toodledo app will also sync 100% offline to your iOS device.

  • Evernote

    • A great place to keep your reference material and notes. Evernote makes great apps across all devices.

  • Dropbox

    • Fantastic online file storage that most people are already using. Not normally a big part of the information workflow, but it can be – for example, a to-do task or an Evernote entry may refer to a file in Dropbox, which you can access from wherever you happen to be.

Circles of Flow

As you can see in the diagram, a piece of information can come into the system, be captured, and be shifted around until it settles where it should be. For example, someone writes a blog post; it enters your system through Google Reader, then moves through Byline –> Pocket –> Buffer –> Twitter – and all of this can happen even if you are offline.

It’s a level of disconnection that I think is healthy: as opposed to always feeling the immediate need to stay on top of everything, you know it’s in the system and you can read it later, when time and priorities match.

It is also very easy to send myself to-do items to Toodledo via email (either from within various apps or directly), which then land in an inbox/processing bucket there. In true GTD style I then process this inbox weekly into the right area within Toodledo (i.e. give each task a due date, importance and context). This is another area I have refined – essentially a zoomed-in part of my overall GTD workflow above – which I can definitely save for a future post.

Bonus: the drawing above was made with Lucidchart. I’m always on the lookout for cheap and easy Visio replacements – this one has great integration with Google Drive and works well!

Categories:   Productivity

First Impressions of Windows 8

Saturday, 30 June 2012 by haemoglobin

Recently I switched over to the Windows 8 Release Candidate as my main OS at home and, despite all the negative words around the internet about missing start menus and the like, I’m actually quite liking it – I don’t feel like going back to Windows 7 at all!

What I Like

Here are just a few little things I have noticed immediately during my time using it:

  • Much faster to boot, and less time to actually get started working on things than with my Win7 setup.
    • Albeit this is a new install, so it hasn’t had time to accumulate the things that slow it down like my Win7 probably has – but I believe MS has devoted a fair bit of resource to performance here.

  • Better multi-monitor support.
    • A different desktop background on each monitor (highly recommend the Bing Rotating Wallpaper by the way).
    • The taskbar stretches across monitors and can be configured to display only the apps that are open on the screen it is on.

  • Deleting a file in Windows Explorer sends it straight to the Recycle Bin without the annoying prompt. This is a good example of something that sounds bad on the surface, but with some real thought: it’s more likely you aren’t making a mistake and don’t want to be nagged – and if you did, you just retrieve the file from the Recycle Bin. Simple things make a big difference.

  • If you move or delete files that are in use, it will display the name of the application that has them open.
    • Less need for applications such as LockHunter (although that is still a great tool and I will probably have it installed anyway).

  • Metro start page replacement for the start menu.
    • In Windows 7, I progressed towards typing the name of the program I was looking for into the start menu – by far the most efficient way of finding an app to open. The Metro start page is no different – you type to find your application – but it is much more sleek and visual, and very quick.
    • Searching for files here also has a very nice interface, much better than the Win7 start menu for this.

  • Metro apps definitely have their own advantage.
    • To be honest, I’m not currently sold on all of the out-of-the-box ones such as Mail & Calendar – I prefer to use the Gmail pages directly for those – but others, for example the Photos app, are absolutely great. For the first time ever I have actually enjoyed going back through all my photos, because the interface for doing so is just so nice. The People app I will also likely use for contacts.
    • I’ve also recently installed a Metro twitter client, which I think will be the best place to browse tweets – I always preferred to run my desktop twitter app in full screen anyway, and it is tidy and neat to have the Metro twitter app docked to the side of the screen.
    • As above, I think docking Metro apps to the side of the desktop is in general a great thing.
    • With two screens, it can be nice having a metro app full screen on one screen, and desktop on the other.
    • Aren’t we actually lucky that we can use all the same apps a tablet will have available, as well as full access to the desktop at the same time? That’s a first in my opinion.

  • The obvious and well-talked-about improvements to Task Manager for tracking the performance of the system.
    • I also like the Startup tab, which keeps track of items that start with Windows and tells you how much of an impact they have on your startup time.
    • File copy dialogs display progress using a chart of copy speed over time.

Keyboard shortcuts

Thankfully, Windows 8 was also developed with keyboard shortcuts in mind. The full list can be found here; I tend to use the following (as well as all the usual shortcuts I have been using with Win7 in desktop mode):

  • Windows key: To take me to the start page and type the application I want to open.
  • Windows - D: I use this quite a lot to bring me back into the desktop – this happens fast.
  • Windows - I: To bring up the metro settings side bar.
  • Windows - X: Pops up direct links to desktop related features.
  • Windows - Tab: To navigate between metro apps (and close them by right clicking). 
  • Windows - . : To snap a metro app to the right.

I’ll probably eventually also start using Windows - C to bring up the charms bar for sharing between metro apps.

There are plenty of other extra things Win8 does that can be found around the place, including here, but these are just what I’ve noticed from my initial usage – which has so far been positive, even for a primarily desktop user.

Categories:   Technology

Productivity Tip 5: Gmail to RSS

Saturday, 2 June 2012 by haemoglobin

I’m always finding new ways to refine my day-to-day workflow; one of my recent wins was deciding to convert many email newsletter subscriptions into RSS feeds and get them out of my inbox.

This suits me brilliantly because:

  1. I like to practice inbox zero – I don’t like having newsletters piling up in my inbox that I intend to read at some point but don’t have time to read now.
  2. The main bulk of my reading is done on my iPad on the London Underground, where there is no internet connection, so I rely on everything being synced, either as RSS through the Byline app or as saved articles through Read It Later (now known as Pocket).

To solve the first point, for a while I automatically tagged and archived the email subscriptions in Gmail, and set up reminders to read the unread items (say, at the weekend) – but this did not work at all, since I do most of my reading on the London Underground during my weekday commute.

I had a search around the internet and found a couple of options. The first option I tried was using the Gmail feed URL itself (for example https://mail.google.com/mail/feed/atom/LabelName/); however, this has two problems. First, it’s an authenticated feed, which means Google Reader will not be able to access it unless you route it through a third-party service such as http://freemyfeed.com/ which strips the authentication off – but that means handing over your username and password, which isn’t ideal. Second, the Gmail feed is truncated, so your RSS reader will only see the subject and a few starting lines, which is no good.

I then found http://emails2rss.appspot.com/ – this free little gem lets you auto-forward your subscription emails to your account there, and it will turn those emails into an RSS feed that you can subscribe to. This has worked perfectly for me: my RSS reader has the full email synced for offline reading, and even HTML email comes through correctly formatted – brilliant.

The instructions for setting this up with Gmail are on their site here – you need to log in to the service using your Gmail account, which lets them know your email address so they can match the emails as they come in from you – but at least there is no handing over of passwords (it uses Gmail’s login service). If you are extra paranoid about protecting your main email address, nothing is stopping you from creating a second Gmail account used solely for email subscriptions, forwarding everything through to the Emails2RSS service. I ended up doing this, since I had already created a second account when testing the first option with FreeMyFeed.

All up, I am very happy with this. Long live RSS – let’s hope it never dies!

Categories:   Productivity

Saying Toodledo to Remember the Milk

Sunday, 20 May 2012 by haemoglobin

As you can probably tell from some of my blog posts, I have been using Remember the Milk (or RTM for short) to manage my to-do list tasks.

For the reasons I am about to describe, I have stopped using Remember the Milk and converted all of my tasks over to the popular and, in my opinion, much better Toodledo service instead.

Here are my reasons:

Remember the Milk Negatives:

    1. The reason I started looking into alternatives in the first place was the poor customer support and RTM’s seeming complete disregard for customer requests. In my case, I was in the process of writing a beautiful (and free) desktop client UI for managing RTM tasks in a clean way. I made a good start on it, and proceeded to send RTM a support email with a very simple question about the API that I needed answered. I sent this a month ago, with still no reply to date. I followed up with a couple of very polite tweets to their twitter account a few days apart, asking if they had received the message, as this was a fairly urgent request for me… I still have no response even to my tweets, and I doubt I ever will. I sent one more follow-up email explaining my disappointment, doing the right thing by still allowing them more time to get back to me before I wrote anything publicly (still no response).

      What I find most disappointing in this is that I am a long-term paying (Pro) customer, and Pro customers are apparently entitled to priority support. One big frustration is that their twitter account is very active, happily sending tweets to people who casually mention @rememberthemilk (so mine would have been received), but there was simply no answer to my tweets, which asked actual questions I desperately needed answers to. I would even have been happy with a simple “we have received your email but the queue is currently long, give it a few more weeks, sorry!” – but not even a peep, which I find disconcerting and, to be honest, just plain rude.
    2. There have also been very few updates to Remember the Milk for years; the common theme in the forums is people desperately asking for features such as subtasks, calendar pickers for dates, calendar views etc., all of which seem to fall on deaf ears.
    3. Questionable ethics. The iPhone client Appigo Todo had its API access to RTM revoked without warning, affecting all of its users. Here is what they said in their blog entry:
      Unfortunately for users of Todo, rather than contact us about the problem upfront, RTM chose to immediately disable the sync service available in Todo, Todo Lite, and Todo for iPad, causing immediate interruption to the service. We strongly feel that pulling the plug on users without warning is never the way to deal with a potential business relationship issue.

 

Toodledo Positives:
    1. Toodledo seem to listen to their users and make an effort to work on the features that are important to people. Just have a look at this chart comparing the two services – Toodledo wins hands down, with a whole lot of features that people have been asking about for years in Remember the Milk to no avail.
    2. The Pro subscription is cheaper, at $15 a year (compared to $25 for RTM). Toodledo’s Pro subscription is also only really needed if you require the subtasks functionality and a few other bits. Most likely you will be fine with a one-off payment for the Toodledo iPhone app, with no further charges, and you will be set with a lot more features than RTM can ever provide.
      With RTM you really need the Pro subscription, because otherwise your mobile app will only be able to sync once every 24 hours and you will not be able to receive any push notifications for due tasks – both of which come to be essential.
    3. The Toodledo API is much more open and free.
    4. With Toodledo, extra features such as the start date, status and context fields have simplified my workflow a lot. Things that were starting to become very complicated and troublesome to do in RTM became much cleaner to implement in Toodledo. It is a much more powerful platform – to do the same thing in RTM you have to try and work around a lot of shortcomings. Subtasks can also potentially tidy things up quite a bit if you opt for a Pro account.

Toodledo admittedly has a slightly less friendly-looking UI (based more on a functional grid), but as a power user it takes little time to learn and the benefits are well worth it. To be honest, the RTM interface isn’t perfect either: it’s not usable for a power user without the “A Bit Better RTM” third-party addon, and I still found that the layout was buggy, with the floating details panel often getting stuck in odd places.

I know who I will be making my UI for now instead :) If you are serious about maintaining those to-do lists and using them day in, day out, I would definitely consider Toodledo over Remember the Milk – I have been very pleased with it so far.

Hamish

Categories:   Productivity

RE: IQueryable vs. IEnumerable in LINQ to SQL queries

Tuesday, 27 March 2012 by haemoglobin

I came across an interesting blog post today (admittedly an old one!) from Jon Kruger, who experimented with some interesting behaviour differences between IQueryable<T> and IEnumerable<T>. The blog post can be found here; a summary of the findings is below:

NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));
list = list.Take<Product>(10);
Debug.WriteLine(list.Count<Product>());  //Does not generate TOP 10 !!

and

NorthwindDataContext dc = new NorthwindDataContext();
IEnumerable<Product> list2 = dc.Products
     .Where(p => p.ProductName.StartsWith("A"))
     .Take<Product>(10);
Debug.WriteLine(list2.Count<Product>()); //Works correctly

and

NorthwindDataContext dc = new NorthwindDataContext();
IQueryable<Product> list3 = dc.Products
     .Where(p => p.ProductName.StartsWith("A"));
list3 = list3.Take<Product>(10);
Debug.WriteLine(list3.Count<Product>()); //Works correctly

The first example omits the very important TOP 10 clause from the generated SQL query, returning all rows into memory and then taking the first 10 from there (obviously not ideal). The next two correctly include the TOP 10 clause, returning only those rows from the database.

The reason the first example fails is that the call to Take is actually the IEnumerable<T> extension method from Enumerable. In ILSpy, this has the following implementation (TakeIterator being the private iterator method whose result Take returns):

private static IEnumerable<TSource> TakeIterator<TSource>(IEnumerable<TSource> source, int count)
{
    if (count > 0)
    {
        foreach (TSource current in source)
        {
            yield return current;
            if (--count == 0)
            {
                break;
            }
        }
    }
    yield break;
}
 

In the second and third examples, however, Take is called on an IQueryable<T>, which executes the extension method defined in Queryable. This has a totally different implementation, as ILSpy shows below:

public static IQueryable<TSource> Take<TSource>(this IQueryable<TSource> source, int count)
{
    if (source == null)
    {
        throw Error.ArgumentNull("source");
    }
    return source.Provider.CreateQuery<TSource>(Expression.Call(null, ((MethodInfo)MethodBase.GetCurrentMethod()).MakeGenericMethod(new Type[]
    {
        typeof(TSource)
    }), new Expression[]
    {
        source.Expression,
        Expression.Constant(count)
    }));
}

As per the MSDN documentation, calls made on IQueryable operate by building up the internal expression tree instead.
"These methods that extend IQueryable(Of T) do not perform any querying directly. Instead, their functionality is to build an Expression object, which is an expression tree that represents the cumulative query. "

When Count is called in the last two examples, Take has already been built into the expression tree, causing a TOP 10 to appear in the SQL statement. In the first example, however, Take on IEnumerable<T> starts iterating the IQueryable returned from Where, which does not have Take in its expression tree – hence the behaviour. If you are wondering why an IQueryable can be assigned to an IEnumerable as in the first two examples: IQueryable extends IEnumerable, so it is pretty much the same interface, with a few extra properties to house the LINQ provider and the internal expression tree. The actual extension methods for each of these interfaces, however, are quite different.

Another thing that helps when thinking about LINQ queries: they in effect execute from the last call inwards, not the other way around like most traditional method call chains. For example, calling Count starts enumerating over the IEnumerable returned from Take, which itself enumerates over the IEnumerable returned from Where, and so on, depending on how many LINQ operators you chain together.
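A quick way to see this pull-based chain in action (my own toy example, not from Jon’s post) is to put logging inside the source and the Where predicate:

using System;
using System.Collections.Generic;
using System.Linq;

class ChainDemo
{
    // A source that logs every value it yields.
    static IEnumerable<int> Source()
    {
        foreach (int i in new[] { 1, 2, 3, 4 })
        {
            Console.WriteLine($"Source yields {i}");
            yield return i;
        }
    }

    static void Main()
    {
        IEnumerable<int> query = Source()
            .Where(i =>
            {
                Console.WriteLine($"Where tests {i}");
                return i % 2 == 0;
            })
            .Take(1);

        // Nothing has executed yet - Count() starts the pull from the outside in.
        Console.WriteLine($"Count = {query.Count()}");

        // Output: Source yields 1, Where tests 1, Source yields 2, Where tests 2,
        // Count = 1. Take(1) short-circuits, so 3 and 4 are never even generated.
    }
}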

Categories:   Development

Enabling TeamCity Push to GitHub

Monday, 26 March 2012 by haemoglobin

The following documentation describes what is necessary to run commands such as git push against a GitHub repository from within a TeamCity build.
Without the correct SSH configuration, any call to a remote git repository in the build script will hang the build while git attempts to ask the user to add the remote host to the ~/.ssh/known_hosts file.
The prompt looks like the following and requires user input – interaction which is not possible from a build:

    The authenticity of host 'github.com (207.97.227.239)' can't be established.
    RSA key fingerprint is d2:80:ef:7a:71:4b:92:89:c7:3d:fb:e6:f5:26:44:e1.
    Are you sure you want to continue connecting (yes/no)?

This text will also not be written to any build output or logs (due to the blocking call), making it difficult to diagnose.

Business Need

When we build our NuGet packages, we build them off a branch, because they are released libraries and we need a reference back to the code that is in use in UAT/PROD etc. The source symbol paths that we write into the built PDBs (as per my last post on GitHub Source Symbol Indexing) are indexed with HTTP references back to GitHub on that branch, so that source symbol stepping into our libraries works for any developer using them (and without them needing to have Git installed).

The source files that are downloaded into the developer’s Visual Studio debugging session (when stepping into our code) have an auto-generated file header describing when the file was built and what version of the library it is from.
The header looks similar to the one below, varying depending on the file:

    // YourLibrary SDK
    // YourCompany.YourLibrary\Config\ConfigureWindsorBuilder.cs
    // YourCompany.YourLibrary, Version 3.20.0.114, Published 07/03/2012 17:06
    // (c) Copyright YourCompany 
    // ----

If you are interested, the powershell script we are using to do this can be downloaded from here.

Since we do this as part of the build, we need the build to push these changes back to the repository itself, so that source stepping downloads the file from GitHub with the added headers.
Build agents are able to interact with the repository if the TeamCity checkout mode is set to “Automatically on agent”.

The rest of this documentation describes the steps necessary to ensure the build agent is set up to communicate with the remote repository, and how TeamCity itself needs to be configured.

The official TeamCity 7 documentation regarding git support can be found here: http://confluence.jetbrains.net/display/TCD7/Git+%28JetBrains%29
Mike Nichols has also written about TeamCity/GitHub interaction in his blog post here.

Configuring TeamCity for Agent Side Checkout

Steps

  • Attach a Git VCS root, configure the VCS root name, Fetch URL, Ref name (branch), User Name Style as usual.
  • Authentication Method needs to be set to "Default Private Key".
    • This is the only supported method for agent-side checkout with SSH.
  • Ensure "Ignore Known Hosts Database" is ticked.
    • This saves any potential hassle with the Java/Windows home drive mismatch below.
  • Everything else can be left as default, including "Path to git" which should be set to %env.TEAMCITY_GIT_PATH%.
  • Back in the main VCS Settings, ensure "Automatically on agent" is selected for the VCS checkout mode.
  • Currently, "Clean all files before build" needs to be checked to avoid a hanging build due to a TeamCity issue.
    • This should be solved in TeamCity 7.1 and there is already a patch available.

Configuring TeamCity Build Agent

Any agent that does not have git installed will appear incompatible with this configuration; the incompatible-agent requirement message “env.TEAMCITY_GIT_PATH exists” will be displayed until git is installed on the agent and the TeamCity agent service is restarted.

Steps if Git is not already installed and configured on the Agent

  • Follow http://help.github.com/win-set-up-git/ to install msysgit on the build agent.
  • Choose the "Run Git from the Windows Command Prompt" option which enables us to call git commands easily from the build.
  • Using Git bash, generate ssh keys to the default "home directory" location (~/.ssh), with no passphrase (TeamCity does not support passphrases for Default Private Key authentication).
  • Add the public key to your GitHub account as per the instructions.
  • Run the following commands to ensure the agent appears correctly in the Git history when it makes commits (this is stored in the file ~/.gitconfig):
    • git config --global user.name "Build Agent"
    • git config --global user.email "buildagent@db.com"

NOTE: ~ above refers to the home directory, on Windows this is the concatenation of the HOMEDRIVE and HOMEPATH environment variables.

Bypass Known Hosts Check

It is the request to add the remote host to the list of known hosts that hangs the build during git calls involving remote repositories. If the remote host has not already been added manually on the agent (in which case an entry will exist in ~/.ssh/known_hosts), the following needs to be done so the host is added automatically, without user interaction, when the build makes its first request.

Steps

  • Create a file named "config" (no extension) under ~/.ssh and add the following lines:
    • Host github.dbatlas.gto.intranet.db.com
    • StrictHostKeyChecking no

Check Java/Windows Home Directory Mismatch

Since TeamCity uses a Java implementation of SSH for the initial checkout, Java considers the home directory (and hence the directory in which it looks for ssh keys) to be one level up from the value of the registry key HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders\Desktop.

In a corporate environment this can sometimes differ from the HOMEDRIVE and HOMEPATH environment variables that git used above when writing the ssh keys.

This is described here. If there is a mismatch, you will likely see an “Authentication failed” error during the agent-side git checkout, since TeamCity will be looking for the SSH keys in the wrong home directory location.

Steps

  • Within TeamCity, find the agent and view the "Agent Parameters". Under System Properties, check the variable user.home (this is the directory that Java considers the home directory to be).
  • If this is different to where the ssh keys were created in the previous steps you have two options:
    • Copy the .ssh folder to this location as well, or
    • Create a blank .ssh folder here and add a file "config" (no extension) including the following line which maps to the private key created previously:
      • IdentityFile {path_to}\id_rsa

NOTE: In the {path_to} replacement above, don’t use any mapped network drives; use the full network path instead. Relative directory locations also do not seem to work here.

Check Home Directory Variables Available to Agent

Since the git installation actually adds c:\Program Files (x86)\Git\cmd to the path, when you call git from a build script you are in effect really calling c:\Program Files (x86)\Git\cmd\git.cmd, which is another batch file.

Git.cmd combines the %HOMEDRIVE% and %HOMEPATH% environment variables into a %HOME% variable, which git.exe uses for the location of the .ssh keys, .gitconfig and the config file configured above.

If the TeamCity agent is running as a service (as opposed to being started from the command prompt through c:\BuildAgent\bin\agent start), these variables may not be available and our calls to git in the build script will fail.

Steps

  • In c:\BuildAgent\conf\buildAgent.properties add the line:
    • env.HOME={path to user home directory containing the .ssh folder and .gitconfig}

IMPORTANT NOTE: Remember to use escaped backslashes in this file for TeamCity to process these correctly, so c:\Users\MyHomeDirectory becomes c:\\Users\\MyHomeDirectory.
Also, do not use any mapped network drive names, as these do not seem to resolve – i.e. instead of X:\, use \\\\SERVERNAME\\NETWORKSHARE\\USERSHOMEDIR.

Considerations when calling Git from a batch file in the build

Always ensure you begin any calls to git in a batch file with call.

In a Windows batch file, any call to another batch file will not return unless you use the call keyword beforehand – and as noted above, git here resolves to git.cmd, which is a batch file. So prefer call git push over git push, for example.

Debugging

The log files in c:\BuildAgent\logs can all have useful information when trying to resolve the issues discussed above, specifically the files teamcity-vcs.log, teamcity-build.log and teamcity-agent.log.

Conclusion

As you might imagine, this all took quite a bit of fiddling around to get going – and I must say the available documentation does seem vague in a lot of areas. I think it is perfectly reasonable to argue that the extra build complication may outweigh the admittedly “nice to have” use case for pushing into the repository from the build, at least until this becomes less difficult.

But I hope that this is helpful for anyone doing the same thing!

Categories:   Development

TeamCity SpecFlow Integration

Wednesday, 11 January 2012 by haemoglobin

For a recent project I introduced SpecFlow to automate acceptance tests in true BDD style, and I have been rather pleased with it. SpecFlow’s HTML output report is very good for reporting the current behaviour and state of the system to stakeholders – an example SpecFlow report can be seen here.

We use TeamCity as our Continuous Integration platform (highly recommended, by the way – I’m a big fan of TeamCity, and it is free for smaller teams). What we wanted was to have the SpecFlow tests run by TeamCity after every check-in, and the SpecFlow output report displayed in an easily accessible tab of the build result.

In order to demonstrate how this can be done I’ve stripped it right back from the ground up using an example SpecFlow project which is available on GitHub.

These steps assume you have git installed, however the basic idea will work with any source control system.

  1. Download and install TeamCity, SpecFlow and NUnit.
  2. From a command prompt / git bash shell, navigate to where you want to clone the SpecFlow examples (e.g. c:\git), and run:
    git clone https://github.com/techtalk/SpecFlow-Examples.git
    This will create the directory c:\git\SpecFlow-Examples containing the repository’s source. Git being a DVCS, this directory is a repository in itself that we can point TeamCity directly at for our example.
  3. In BowlingKata\BowlingKata-Nunit add a new batch file RunSpecFlow.bat with the following contents:
    "C:\Program Files (x86)\NUnit 2.5.10\bin\net-2.0\nunit-console.exe" /labels /out=TestResult.txt /xml=TestResult.xml /framework=net-4.0 Bowling.Specflow\bin\Debug\Bowling.Specflow.dll 
    
    "C:\Program Files (x86)\TechTalk\SpecFlow\specflow.exe" nunitexecutionreport Bowling.Specflow\Bowling.SpecFlow.csproj
    
    IF NOT EXIST TestResult.xml GOTO FAIL
    IF NOT EXIST TestResult.html GOTO FAIL
    EXIT 
    
    :FAIL
    echo ##teamcity[buildStatus status='FAILURE']
    EXIT /B 1

    The first line runs the SpecFlow tests, which under the hood are standard NUnit tests. The reason we run them from the command line rather than using TeamCity’s inbuilt NUnit runner is that specflow.exe needs TestResult.txt and TestResult.xml available in order to generate the report on the next line.

    There are a couple of extra checks at the end for the existence of the output files, which fail the TeamCity build in case of a catastrophic error running the commands. Step 11 will let TeamCity fail the build if any individual test fails.

    Note that your installation locations for NUnit and SpecFlow may vary. In the real world, we actually included NUnit and SpecFlow as part of our repository so things would work on any build agent.
  4. Commit this new file to the repository with:
    git add RunSpecFlow.bat
    git commit -m "Added RunSpecFlow.bat"
  5. In TeamCity, add a new project “SpecFlow Integration Test”.
  6. Create a git VCS root called SpecFlow Examples with the Fetch URL C:\git\SpecFlow-Examples (or wherever you cloned the SpecFlow Examples repository in step 2 above). All other settings can be left as the default.
  7. In the VCS settings section, tick “Clean all files before build”.
    The reason for this is the check at the end of the batch file for the existence of the output files: if we don’t clean the agent first, old test output files will still be hanging around from the previous run (if it ran on the same build agent), so we won’t get a failure if the files fail to be created for some reason. It’s slightly slower having this option on, but in practice I’ve had fewer strange issues with build agents when always starting from a clean checkout.
  8. Create a configuration in the project called “Run SpecFlow Tests”. In the artifacts path, add the following:
    BowlingKata\BowlingKata-Nunit\TestResult.html=>.
    This will copy the SpecFlow output report into the artifacts for the build.
  9. Add an MSBuild step to compile the BowlingKata\BowlingKata-Nunit\Bowling.sln solution using the .NET 4.0 MSBuild version.
  10. Add a Command Line step using the working directory BowlingKata\BowlingKata-Nunit and our command executable RunSpecFlow.bat.
  11. Add a Build Feature of type XML report processing, with BowlingKata\BowlingKata-Nunit\TestResult.xml in the monitoring rules section. This allows TeamCity to display the number of tests run (with pass/fail counts) and to fail the build if any tests fail. Without this, the build will always be green.
  12. Navigate to TeamCity Administration (top right) –> Server Configuration –> Report Tabs.
    Create a new Report Tab titled SpecFlow Results with the start page TestResult.html.

    This tells TeamCity that whenever it sees an artifact named TestResult.html at the root level of the output artifacts, it should add a new tab called SpecFlow Results to the build output and display TestResult.html within it.
  13. Run the build!
    Have a look at the lovely output (I purposely made a test fail to show how it would look):
    [Screenshots: the build overview showing test pass/fail counts, and the SpecFlow Results tab displaying the HTML report]

Of course, the next step is to add a TeamCity trigger so that all of this runs on every check-in to the repository.

I hope this shows how you can get SpecFlow integrated into TeamCity – I think it is very effective, especially if your clients have direct access to your TeamCity server as a way of reporting progress and the state of the system.

Categories:   Development Process
