Greg’s

helping me remember what I figure out

Build Your Own Technology Radar


This week’s Ruby Rogues episode had Neal Ford on to talk about the ThoughtWorks Technology Radar. One of the things Neal discussed was creating your own Technology Radar.

I am always on the lookout for new ways of working, particularly ones that make my learning better. Neal recommends putting together such a radar to help focus how we learn by being more strategic, rather than tactical. That, and it serves as a reminder of the things we might want to look into :) (well, for me at least).

So here’s my first go at putting such a list together:

Hold

  • RequireJs (Tools)

Assess

  • Gulp (Tools)
  • Scala (Languages and Frameworks)
  • Swift (Languages and Frameworks)
  • ReactJs (Languages and Frameworks)
  • Living CSS Style Guides (Techniques)
  • Gradle (Tools)
  • Play Framework (Languages and Frameworks)
  • SnapCI (Tools)
  • ES6 Transpilers (Tools)

Trial

  • Browserify (Languages and Frameworks)
  • Functional programming (Techniques)
  • ES6 (Languages and Frameworks)
  • Dashing (Languages and Frameworks)
  • Phantomas (Tools)
  • Build your own Technology Radar (Techniques)
  • Docker (Tools)
  • Programming by Intention (Techniques)
  • AWS (Platforms)
  • CircleCi (Tools)

Adopt

  • Grunt (Tools)
  • Vagrant (Tools)
  • Heroku (Platforms)
  • Linode (Platforms)
  • CodeShip (Tools)
  • Git Pull Request Workflow (Techniques)

The next step is to experiment with this visual tool for displaying your own Technology Radar. Let’s revisit this post in 6 months to see how effective this technique was and what has changed in my technology bubble.

Have you put your own Technology Radar together yet?

Building Coder_wally Using Metrics - Part Deux


Earlier last week I posted a short piece on building Coder_Wally and some of the tools I used to improve the code. This post continues from where I left off, talking about Reek and RuboCop.

RuboCop was an interesting tool, as it helped me write code in a more idiomatic way, i.e. more like a Ruby developer would. Some of the methods I had written used prefixes like get_ or has_. The Ruby way is to not bother with the get_: get_something just becomes something, and has_something simply becomes something?.
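
As a quick illustration of that convention (a made-up class, not code from the gem):

```ruby
# Hypothetical example of the naming style RuboCop nudges you towards.
class User
  def initialize(name, badges)
    @name = name
    @badges = badges
  end

  # idiomatic Ruby: `name`, not `get_name`
  def name
    @name
  end

  # and `badges?`, not `has_badges`
  def badges?
    !@badges.empty?
  end
end

user = User.new('greg', ['Charity'])
user.name    # => "greg"
user.badges? # => true
```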

One thing that bit me though was changing " to ' when no string interpolation was happening. Changing this:

spec.files = `git ls-files -z`.split("\x0") 

to

spec.files  = `git ls-files -z`.split('\x0')

The gem would no longer build or install, throwing a ‘string contains null byte’ error (somewhat after the fact). The problem: inside single quotes, '\x0' is the literal three-character string rather than a null byte, so the file list was never split. I hope I am not being unfair to RuboCop, but its primary focus is to help your code follow the Ruby Style Guide, not to point out code hot spots.
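
The difference is easy to see in irb: in double quotes "\x0" is a single null byte, while in single quotes '\x0' is the three literal characters backslash, x and zero, so the split never happens:

```ruby
"\x0".bytes   # => [0]          a single null byte
'\x0'.bytes   # => [92, 120, 48] backslash, 'x', '0'

# with a real null byte separating two file names:
"a.rb\x00b.rb".split("\x0")  # => ["a.rb", "b.rb"]
"a.rb\x00b.rb".split('\x0')  # no split; the null byte stays embedded
```

Which is how the null bytes ended up in the gemspec’s file list.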

Reek on the other hand was awesome for finding hotspots. It flagged a bunch of potential code smells (duplicate calls and nested iterators) and pointed out where methods on certain classes were exhibiting feature envy and utility function behaviour. I am linking to the docs as they are really insightful.

The outcome of refactoring the code was a few more classes. I extracted the exception handling to its own class and created an error code finder. Methods that were manipulating objects to get their work done were updated to have less knowledge and only work on what they needed. Take this method as an example:

# parse account information from data
def parse_accounts(response)
  Account.new(response['accounts']) if response['accounts']
end

This method actually has two problems. First, it makes duplicate calls (response['accounts']). That could have been fixed by extracting the call to a variable; however, fixing the underlying problem (the utility function behaviour) also fixes that issue. The method knew too much about the response object and what it contains in order to get its work done. The change is quite simple: extract the knowledge to the calling method:

# parse account information from data
def parse_accounts(accounts)
  Account.new(accounts) if accounts
end

...

# build CoderWall object from API response
def build(response)
  badges = parse_badges(response['badges'])
  accounts = parse_accounts(response['accounts'])
  user = parse_user(response)

  CoderWall.new badges, user, accounts
end

Interestingly enough, after extracting my exception handler object, I was warned about a Control Parameter smell. The solution there was to revert the extraction of the helper methods for request errors that raised an exception based on the status code, by inlining them back again.
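
For reference, Reek’s Control Parameter smell is a parameter whose only job is to pick a branch inside the method. A made-up illustration (not code from the gem), along with the usual fix of splitting into intention-revealing methods:

```ruby
# smelly: `urgent` exists only to select a branch
def notify(message, urgent)
  urgent ? "SMS: #{message}" : "Email: #{message}"
end

# one fix: two intention-revealing methods, so callers choose
# the behaviour instead of passing a flag
def notify_urgently(message)
  "SMS: #{message}"
end

def notify_by_email(message)
  "Email: #{message}"
end
```
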

I found this iterative approach of running RuboCop and Reek really helpful, and it certainly led to cleaner looking code. At least I think so :). Again, do not blindly follow metrics, but use your judgement. In the absence of having someone to review your work, these tools certainly help. Overall an interesting and productive exercise.

Presence and a Future Worth Wanting


James Whittaker is pissed off, really pissed off. We are held hostage by our browser and we should not stand for it.

That’s how he opens his talk on his vision for the future. I stumbled across this again the other week, when I came across this post, where he also goes over some tips for stage presence.

Really quickly, let’s go over these as they are super handy. There are 5 (well, really 4.5) bullet points about stage presence worth keeping in mind:

  1. Come out swinging
  2. Attention span interlude
  3. Know your shit
  4. Make it epic
  5. Be brief, be right, be gone

He gives regular sessions on campus about stage presence. I was able to watch one of the recordings, thoroughly enjoyed it, and recommend watching it if you get the chance.

This post is not about stage presence though, rather his view of the future. It does provide a nice lead-in though :). Reading that post reminded me that I saw James Whittaker give this talk in person at our office, and he certainly came out swinging and kept on swinging for the whole duration. Below is a link to a similar talk he did:

A Future Worth Wanting - James Whittaker, Microsoft

For the most part I enjoyed his talk about his vision of the future. The gist was that we shouldn’t have to go to the web to find (to hunt) the information we are after. We shouldn’t need to context switch. We don’t need apps to do that either (to gather). Our tools should be context aware and fetch the information for us (to farm). His example centered around going to a concert with his daughter after having received an email from her, asking him to go with her to see Of Monsters and Men (loved that album and thanks for putting me on to them ;)).

His tools, in this case Outlook, should be context aware and smart enough to fetch maps, travel directions and restaurant suggestions, and book the tickets. He bemoans the context switch out of whatever tool you are in to open a browser or an app to complete those tasks. He firmly(?) believes that Microsoft is one of the few companies in a position to deliver on this proposition, based on the tools and services they offer. These tools are ‘Super apps’! Things like Outlook + ‘Bing knows’.

While I agree with the premise, at the time I came away from the talk feeling this was another walled garden in the making. Rather than building open APIs and services, it felt heavily slanted towards being embedded in the Microsoft ecosystem. Despite him mentioning Twitter and Facebook as well (and talking tongue in cheek about ‘this is branding’ when referring to the Xbox and Surface), I couldn’t shake that feeling. For what it’s worth I have similar sentiments towards Apple… but they do make lovely products…

I believe that in order for companies (particularly Microsoft) to remain relevant, they need to be more open and allow tools from all sources to build on these systems simply and efficiently. Innovation seems to happen more frequently and rapidly outside of large bureaucratic companies and while they have the resources to deliver, they are slow to do so. Just in case you didn’t realise, I work at Skype, at least for now.

The web was built on openness and I find it quite tragic that more and more we are seeing tiered service provisioning, vendor lock in and data lock in. Yes, yes, companies need to make money, I am not that naive, but there are surely better ways.

I don’t think we should be locked into a world where the only way to achieve this vision is with my Windows Phone (or iPhone for that matter), i.e. one ecosystem. Maybe I am the odd one out: at work I have a Windows machine, my phone is an Android device, and my home setup is a Mac+iPad. Building so-called ‘Super apps’ for all platforms is a big ask. And therein lies the crux of the matter… All of these devices already have a ‘Super App’ in common: the browser! We have had it for decades! Yes, it was a pain to build for all of the different makes and versions; and while there are still problems, the last 3 or so years have seen an incredible convergence in supported features and functionality.

We might have afforded the browser an ‘incredible’ amount of power; I use it for almost everything. I don’t mind using apps; however my context switch happens when I need to leave the browser to use Outlook, for example. I would argue that we need to invest more into making our ‘web apps’ better (services, UIs, browsers): turn these web apps into ‘Super apps’ that leverage the Ueber app that is the venerable browser, so that I don’t need to have Outlook or something else open to get what I need, rather than investing in a proprietary, closed walled garden of comfort. It should run on any device, anywhere I am connected, and that is the browser! All hail the Ueber app, home of the super apps.

Building My Coder_wally Gem Using Metrics


A few weeks back I decided to add CoderWall badges to the feed on my site. I could have just grabbed an existing gem but I decided to build my own. If you are truly keen you can also find it over at Rubygems.org and add to the other 1,245 downloads :).

To get the ball rolling I followed the steps described over at How I Start. The first stab ended up looking like this:

require "coder_wally/version"
require "open-uri"
require "json"

# All code in the gem is namespaced under this module.
module CoderWally
    # The URL of API we'll use.
    Url = "https://coderwall.com/%s.json"

    class Badge 
        attr_reader :name, :badge, :description, :created

        def initialize(hashed_badge)
            @name = hashed_badge.fetch("name")
            @badge = hashed_badge.fetch("badge")
            @description = hashed_badge.fetch("description") 
            @created = hashed_badge.fetch("created")
        end
    end

    def CoderWally.get_badges_for username
        raise(ArgumentError, "Please provide a username") if username.empty?
        url = URI.parse(Url % username)
        response = JSON.load(open(url))      

        response["badges"].map { |badge| Badge.new(badge) }
    end
end

It simply fetched JSON from the API for a given username and returned a collection of badges. Over the next couple of iterations I reworked a few things and added support for user details and accounts. For testing purposes (and to speed things up) I used WebMock to fake responses from the service. The most interesting thing to solve was how to dynamically assign attr_accessors to the account object. I eventually found that you could do so by using a combination of singleton_class.class_eval and instance_variable_set. With the features done, I looked around at other gems and what their READMEs and tool chains looked like.
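
A minimal sketch of that technique (the attribute names here are made up; the real gem builds them from the API response):

```ruby
# Define an attr_accessor per hash key on this one instance's
# singleton class, then set the backing instance variable.
class Account
  def initialize(attributes)
    attributes.each do |name, value|
      singleton_class.class_eval { attr_accessor name }
      instance_variable_set("@#{name}", value)
    end
  end
end

account = Account.new('github' => 'gregstewart', 'twitter' => 'gregstewart')
account.github   # => "gregstewart"
account.twitter  # => "gregstewart"
```

Because the accessors live on the singleton class, each instance only responds to the accounts its own response actually contained.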

The first tool I decided to investigate was Flog, which Sandi Metz used in a talk I saw recently. The initial run yielded the following output:

 find lib -name \*.rb | xargs flog                                                                                                                                    
 84.3: flog total
 6.0: flog/method average

19.1: CoderWally::Client#build_coder_wall lib/coder_wally/client.rb:47
16.2: CoderWally::Client#send_request  lib/coder_wally/client.rb:20
11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
 8.8: main#none

The Client object could use a little love. To start things off, I decided to pull all of the API related calls into their own class, i.e. anything relating to send_request:

 89.0: flog total
 5.2: flog/method average

16.4: CoderWally::Client#build_coder_wall lib/coder_wally/client.rb:27
16.2: CoderWally::API#send_request     lib/coder_wally/api.rb:13
11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
 9.9: main#none

That just moved the complexity around, but at least the Client object was now a little simpler. To fix the complexity I extracted methods from send_request, which started off as follows:

# Dispatch the request
def send_request(url)
  begin
    open(url)
  rescue OpenURI::HTTPError => error
    raise UserNotFoundError, 'User not found' if error.io.status[0] == '404'
    raise ServerError, 'Server error' if error.io.status[0] == '500'
  end
end

Not overly complicated, but let’s follow the advice and see if we can’t improve on it by extracting methods. I ended up with this:

def send_request(url)
  begin
    open(url)
  rescue OpenURI::HTTPError => error
    handle_user_not_found_error(error)
    handle_server_error(error)
  end
end

# Parse status code from error
def get_status_code(error)
  error.io.status[0]
end

# Raise server error
def handle_server_error(error)
  raise ServerError, 'Server error' if get_status_code(error) == '500'
end

# Raise user not found error
def handle_user_not_found_error(error)
  raise UserNotFoundError, 'User not found' if get_status_code(error) == '404'
end

Running flog showed that the code in the send_request method was no longer being flagged as complicated.

86.1: flog total
 4.3: flog/method average

15.1: CoderWally::Client#build_coder_wall lib/coder_wally/client.rb:27
11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
 9.9: main#none
 6.3: CoderWally::API#uri_for_user     lib/coder_wally/api.rb:36
 5.7: CoderWally::Badge#initialize     lib/coder_wally/badge.rb:9
 5.0: CoderWally::User#initialize      lib/coder_wally/user.rb:9

Next I tackled the CoderWally::Client#build_coder_wall method. This led to creating a CoderWall builder object with simpler, more single-purpose methods:

module CoderWally
  # Builds the CoderWall object from the response
  class Builder
    # parse badges from data
    def parse_badges(data)
      data['badges'].map { |badge| Badge.new(badge) } if data['badges']
    end

    # parse account information from data
    def parse_accounts(data)
      Account.new(data['accounts']) if data['accounts']
    end

    # parse user information from data
    def parse_user(data)
      User.new(data['name'], data['username'],
               data['location'], data['team'], data['endorsements'])
    end

    # build CoderWall object from API response
    def build(response)
      badges = parse_badges(response)
      accounts = parse_accounts(response)
      user = parse_user(response)

      CoderWall.new badges, user, accounts
    end
  end
end

Still not happy with all of the names, but it did feel and look better than this:

# Builds a CoderWall object
def build_coder_wall(username)
  json_response = JSON.load(send_request(uri_for_user(username)))
  badges = json_response['badges'].map { |badge| Badge.new(badge) }
  accounts = Account.new(json_response['accounts'])
  user = User.new(json_response['name'], json_response['username'],
                  json_response['location'], json_response['team'],
                  json_response['endorsements'])

  CoderWall.new badges, user, accounts
end

Flog agreed as well, and while the flog total was on the up, the method average kept going down (we started with a 6.0 average and ended up with 3.9):

93.4: flog total
 3.9: flog/method average

11.1: CoderWally::Account#initialize   lib/coder_wally/account.rb:6
11.0: main#none
 7.0: CoderWally::Builder#parse_user   lib/coder_wally/builder.rb:15
 6.3: CoderWally::API#uri_for_user     lib/coder_wally/api.rb:38
 5.7: CoderWally::Badge#initialize     lib/coder_wally/badge.rb:9
 5.0: CoderWally::Builder#build        lib/coder_wally/builder.rb:20
 5.0: CoderWally::User#initialize      lib/coder_wally/user.rb:9
 4.0: CoderWally::API#send_request     lib/coder_wally/api.rb:13
 3.9: CoderWally::API#get_status_code  lib/coder_wally/api.rb:23

I am going to stop here. In a follow-up post I will talk specifically about RuboCop and metric_fu and how they further impacted the design and readability of the code. Before I go though, I wanted to finish up with some thoughts on using Flog and how it changed my code.

I started with one object that did everything and, through a series of refactorings, ended up with several smaller, more cohesive objects that also follow the Single Responsibility Principle more closely (I wasn’t there yet and probably still am not).

I felt that my initial implementation was simple and readable enough. But that’s just the thing, isn’t it? We feel that our code is good enough, but statistics can back these ‘feelings’ up or indeed refute them. I am not saying that one should blindly follow these kinds of metrics and drive our code based off them, but they are a good source of information and, as this little experiment has shown, can help improve the code. In the absence of being able to pair with someone or have someone else review your code, Flog proved very useful. Overall I am happier having a class for API calls whose methods are more intention revealing. Likewise with my builder object and its methods. In the next post I will show how I continued on the improvement path for that particular class using metric_fu (Reek in particular) and RuboCop.

Publishing Blog Posts the Git and CI Way


I recently switched the source control of my blog from Bitbucket to GitHub, because I wanted to try out a new workflow for editing and publishing posts.

As I tend to create a new git branch for each post I am working on, I wanted to use the pull request approach to publishing posts. I first came across this idea thanks to those wonderful folks over at ThoughtBot. Now granted, I am not collaborating with others on posts; however I still find this review process handy. Reading the post in a different context has been beneficial. Furthermore, I now tend to give myself a few days between writing and posting as a result of this process. Using GitHub and their editor I can review, re-read and edit posts at my convenience. So far it’s worked well for me.

This got me thinking though: are there any other improvements I could make to my workflow? Well, yes there are. As mentioned, I tend to create a branch for each post, followed by running the new post rake task. I started by modifying the tasks for posts and pages to create a new branch for me using the title. Then I realised I could take it even further: create an initial commit and a new tracked remote branch. Here’s what the output looks like:

blog git:(change-workflow) rake 'new_post[test branch]'
mkdir -p source/_posts
Creating new post: source/_posts/2015-01-24-test-branch.markdown
Switched to a new branch 'test-branch'
[test-branch c5c786a] created new post entry: test-branch
2 files changed, 8 insertions(+), 4 deletions(-)
create mode 100644 source/_posts/2015-01-24-test-branch.markdown
Counting objects: 8, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (8/8), done.
Writing objects: 100% (8/8), 801 bytes | 0 bytes/s, done.
Total 8 (delta 6), reused 0 (delta 0)
To git@github.com:gregstewart/blog.git
* [new branch]      test-branch -> test-branch
Branch test-branch set up to track remote branch test-branch from origin.

The code for this is pretty straightforward and not foolproof, but it’s a good starting point:

# create git branch
def create_branch(branch_name)
  exec "git checkout -b #{branch_name}; " \
       "git add .; " \
       "git commit -m 'created new post entry: #{branch_name}'; " \
       "git push -u origin #{branch_name}"
end
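
One way to make it a little more defensive (a sketch, not what the blog currently runs) is to run each command separately and stop on the first failure, instead of firing the whole chain through exec:

```ruby
# run each git step separately; abort on the first failure
def create_branch(branch_name)
  [
    "git checkout -b #{branch_name}",
    'git add .',
    "git commit -m 'created new post entry: #{branch_name}'",
    "git push -u origin #{branch_name}"
  ].each do |command|
    system(command) || abort("failed: #{command}")
  end
end
```

With exec the shell keeps going after a failed step (and exec replaces the rake process entirely); system returns false on failure, so we can bail out before pushing a broken branch.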

The next thing I wanted to improve was the publishing step. Commit, push, generate and deploy were the steps I used in the past: a bit long winded and repetitive. Also, if I was not at home, I had to wait to publish an update. Given that I am now using pull requests and GitHub to sign off on and merge these, why not use CI to build and publish the blog on merge to master? So I created a new project over at CodeShip, left the test settings empty, but under deployment added:

bundle exec rake generate
bundle exec rake deploy

Now whenever I merge a pull request, CI takes over and publishes my post to my server! Note that if, like me, you use rsync, you will need to add CodeShip’s public key to your authorized_keys in order for Octopress’ rsync publishing to work. This post is the first to feature this new workflow!

Update: it turns out you can complete this workflow using Bitbucket as well!

Refactoring Your Grunt File


Things left unchecked over time just grow to be unwieldy. Take this Gruntfile for example:

module.exports = function (grunt) {
    'use strict';
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        },
        browserify: {
            code: {
                dest: 'app/js/main.min.js',
                src: 'node_modules/weatherly/js/**/*.js',
                options: {
                    transform: ['uglifyify']
                }
            },
            test: {
                dest: 'app/js/test.js',
                src: 'tests/unit/**/*.js'
            }
        },
        karma: {
            dev: {
                configFile: 'karma.conf.js'
            },
            ci: {
                configFile: 'karma.conf.js',
                singleRun: true,
                autoWatch: false,
                reporters: ['progress']
            }
        }
    });
    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-browserify');
    grunt.loadNpmTasks('grunt-bower-task');
    grunt.loadNpmTasks('grunt-karma');
    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);
    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);
    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);
    grunt.registerTask('heroku:production', 'build');
};

It’s the Gruntfile for the project in my book. I’ll be honest: the principal reason I want to refactor this file is that it makes editing the book quite painful, and it makes it quite difficult for the reader to make changes. The same can be said for anyone working with the file; it’s getting hard to see what is happening in it, so let’s make this better for all.

Let’s start with the karma tasks. These can be extracted to a file called test.js (I am keeping the name generic, just in case I decide to switch testing frameworks at a later stage) and saved under a folder called build:

(function (module) {
    'use strict';
    var config = {
        dev: {
            configFile: 'karma.conf.js'
        },
        ci: {
            configFile: 'karma.conf.js',
            singleRun: true,
            autoWatch: false,
            reporters: ['progress']
        }
    };
    module.exports = function (grunt) {
        grunt.loadNpmTasks('grunt-karma');

        grunt.config('karma', config);
    }
})(module);

I have extracted the task configuration and the loading of the task from the Gruntfile.js, leaving us with:

module.exports = function (grunt) {
    'use strict';
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        },
        browserify: {
            code: {
                dest: 'app/js/main.min.js',
                src: 'node_modules/weatherly/js/**/*.js',
                options: {
                    transform: ['uglifyify']
                }
            },
            test: {
                dest: 'app/js/test.js',
                src: 'tests/unit/**/*.js'
            }
        }
    });

    grunt.loadTasks('build');

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-browserify');
    grunt.loadNpmTasks('grunt-bower-task');

    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);

    grunt.registerTask('heroku:production', 'build');
};

Apart from removing the code for Karma, I also added the grunt.loadTasks directive, pointing it at our newly created build folder. To validate that everything is still OK, just run grunt karma:dev. Let’s do the same for our browserify task: once again create a new file (called browserify.js) and save it under our build folder:

(function(module) {
    'use strict';
    var config = {
        code: {
            dest: 'app/js/main.min.js',
            src: 'node_modules/weatherly/js/**/*.js',
            options: {
                transform: ['uglifyify']
            }
        },
        test: {
            dest: 'app/js/test.js',
            src: 'tests/unit/**/*.js'
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-browserify');
        grunt.config('browserify', config);
    }
})(module);

And remove the code from the Gruntfile.js:

module.exports = function (grunt) {
    'use strict';
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        }
    });

    grunt.loadTasks('build');

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-bower-task');


    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);
    grunt.registerTask('heroku:production', 'build');
};

Let’s test our task by running grunt browserify:code or grunt browserify:test. To speed things up a little, in the following I am just going to show the extracted code.

Bower.js

(function(module) {
    'use strict';
    var config = {
        install: {
            options: {
                cleanTargetDir: false,
                targetDir: './bower_components'
            }
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-bower-task');
        grunt.config('bower', config);
    }
})(module);

Copy.js

(function(module) {
    'use strict';
    var config = {
        fonts: {
            expand: true,
            src: ['bower_components/bootstrap/fonts/*'],
            dest: 'app/fonts/',
            filter: 'isFile',
            flatten: true
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-contrib-copy');

        grunt.config('copy', config);
    }
})(module);

Less.js

(function(module) {
    'use strict';
    var config = {
        production: {
            options: {
                paths: ['app/css/'],
                cleancss: true
            },
            files: {
                'app/css/main.css': 'src/less/main.less'
            }
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-contrib-less');

        grunt.config('less', config);
    }
})(module);

Cucumber.js

(function(module) {
    'use strict';
    var config = {
        src: 'tests/e2e/features/',
        options: {
            steps: 'tests/e2e/steps/'
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-selenium-webdriver');
        grunt.loadNpmTasks('grunt-cucumber');

        grunt.config('cucumberjs', config);
    }
})(module);

Express.js

(function(module) {
    'use strict';
    var config = {
        test: {
            options: {
                script: './server.js'
            }
        }
    };

    module.exports = function(grunt) {
        grunt.loadNpmTasks('grunt-express-server');

        grunt.config('express', config);
    }
})(module);

Leaving us now with a Gruntfile that is so much more lightweight and only concerns itself with loading and registering tasks:

module.exports = function (grunt) {
    'use strict';
    grunt.loadTasks('build');

    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify:code']);
    grunt.registerTask('build', ['bower:install', 'generate']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:ci', 'e2e']);
    grunt.registerTask('heroku:production', 'build');
};

New Year’s Resolution

| Comments

First post of 2015… Yay. It’s that time of the year for reflection and fresh starts. I guess most folks would have done that last month, and I have been feeling quite sheepish about not having made any resolutions for the new year. However, after listening to a few podcasts and reading a bunch of posts about what folks did in 2014 on the re-commenced daily commute, I feel even more sheepish… So I have pondered what I would like to get out of 2015 and do/achieve.

For starters I would like to be a tad more analytical/critical. I have realised that I tend to just consume content, without reflecting on it too deeply or applying it in earnest to see if it works for me. Kind of like consuming an entire box set of some TV series in one or a couple of sittings, to the point where all episodes just blend into one. To help with that I want to put up at least one blog post a week, and yes, I realise it’s the second week of January and I am already behind… The idea of course is to write more and get better at it, but I don’t just want to share something I learned, I also want to demonstrate why it’s useful. And should I read something that provokes some thoughts, I want to share and discuss those. Well, that’s the intention anyway…

I actually really, really want to make progress on some of my side projects, which have suffered from fits and starts over the years. Right at the top of that list is making good progress on my book. A short aside for those reading this and following its progress: I have started work on the model and testing it.

I have also been tinkering with a few apps/games over the years. For some reason a few months back I had been reminiscing about some old computer games from the Spectrum days that I used to play as a kid. One that sticks out is Football Manager. When you play it now, well, let’s just say… Nostalgia… Regardless, I thought it would be fun to try and build a clone, as I see the potential for some interesting challenges and applications of tools/technologies and services. I believe my friend Jack termed it over-engineered when I told him about all the things I wanted to try out as part of it.

The site itself also needs some love; it’s been two years, so it is time for a little refresh.

Let’s see how this all pans out.

Change of Tack

| Comments

Another one of those quick posts on the state of play of the book project. First off, while updates have been a little slow of late (summer, holidays, work, etc…), I have been busy-ish planning the next chapters of the book and hope to push some of these out by the end of the month.

I am also pleased to reveal that over at GitBook some 130 people have been viewing the book, which is just awesome and roughly 129 more people than I had hoped for! I also see that one person would be willing to buy the book over at LeanPub.

One thing though is that I have had zero feedback on the book and its content. I have pondered this for some time now and have decided to change the book from free to paid. The reasoning is that paying customers might speak up some more about any issues, or better yet, the things they like! So starting today I am changing the book to paid on GitBook, starting at $5.00 for the first section. If you purchase it now you will of course get the updates/fixes and subsequent chapters as they are written. I have also published a copy over at LeanPub. You can still get a free version of the book over at my GitHub repository, though reading it that way won’t be nearly as enjoyable as using GitBook’s reader or the many eBook options you get with GitBook or LeanPub. Of course there are always the blog posts of the chapters, however I’d suggest that if you like the book and its content, maybe buying it is not such a bad idea after all :)

As always please let me know your thoughts and feedback.

Development Guided by Tests

| Comments

Time for a sneak peek at some work in progress. Here we cover setting up Karma to run our unit tests. As always you can read the chapter here, as well as the full book.

Up until now we have been very much focused on setting up our build pipeline and writing a high-level feature test. And while I promised that it was time to write some code, we have a few more setup steps to complete before we can get stuck in. To get confidence in our code we will be writing our JavaScript modules guided by tests, and we want those tests to run all the time (i.e. with each save). To that end we need to set up some more tasks to run those tests for us. Furthermore, we want these tests to run during our build and deployment process.

Setting up our unit test runner using Karma

I have chosen Karma as our unit test runner. If you are new to Karma, I suggest you take a peek at some of the videos on the site. It comes with a variety of plugins and supports basically all of the popular unit test frameworks. As our testing framework we will use Jasmine.

Before going too far, let’s quickly create a few folders in the root of our project. src/js is where we will store all of our JavaScript source code; later on we will create a task to concatenate/minify it and move it to our app folder:

-> tests
    -> unit
-> src
    ->js


As with all tasks, let’s create a new branch:

> git checkout -b test-runner

And then let’s install the package and add it to our package.json file:

> npm install karma --save-dev

OK, time to create our Karma configuration file. Typically you would type the following in the root of your project:

> karma init karma.conf.js

This would guide you through the process of setting up your test runner, here’s how I answered the setup questions:

Which testing framework do you want to use ?
Press tab to list possible options. Enter to move to the next question.
> jasmine

Do you want to use Require.js ?
This will add Require.js plugin.
Press tab to list possible options. Enter to move to the next question.
> no

Do you want to capture any browsers automatically ?
Press tab to list possible options. Enter empty string to move to the next question.
> PhantomJS
>

What is the location of your source and test files ?
You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".
Enter empty string to move to the next question.
> src/js/**/*.js
WARN [init]: There is no file matching this pattern.

> tests/unit/**/*.js
WARN [init]: There is no file matching this pattern.

>

Should any of the files included by the previous patterns be excluded ?
You can use glob patterns, eg. "**/*.swp".
Enter empty string to move to the next question.
>

Do you want Karma to watch all the files and run the tests on change ?
Press tab to list possible options.
> no

Config file generated at "/Users/writer/Projects/github/weatherly/karma.conf.js".

And here’s the corresponding configuration that was generated:

// Karma configuration
// Generated on Sun Jul 20 2014 16:18:54 GMT+0100 (BST)

module.exports = function (config) {
    config.set({

        // base path that will be used to resolve all patterns (eg. files, exclude)
        basePath: '',


        // frameworks to use
        // available frameworks: https://npmjs.org/browse/keyword/karma-adapter
        frameworks: ['jasmine'],


        // list of files / patterns to load in the browser
        files: [
            'src/js/**/*.js',
            'tests/unit/**/*.js'
        ],


        // list of files to exclude
        exclude: [
        ],


        // preprocess matching files before serving them to the browser
        // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
        preprocessors: {
        },


        // test results reporter to use
        // possible values: 'dots', 'progress'
        // available reporters: https://npmjs.org/browse/keyword/karma-reporter
        reporters: ['progress'],


        // web server port
        port: 9876,


        // enable / disable colors in the output (reporters and logs)
        colors: true,


        // level of logging
        // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
        logLevel: config.LOG_INFO,


        // enable / disable watching file and executing tests whenever any file changes
        autoWatch: true,


        // start these browsers
        // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
        browsers: ['PhantomJS'],


        // Continuous Integration mode
        // if true, Karma captures browsers, runs the tests and exits
        singleRun: true
    });
};

Let’s take it for a spin:

> karma start
> INFO [karma]: Karma v0.12.17 server started at http://localhost:9876/
> INFO [launcher]: Starting browser PhantomJS
> WARN [watcher]: Pattern "/Users/writer/Projects/github/weatherly/tests/unit/**/*.js" does not match any file.
> INFO [PhantomJS 1.9.7 (Mac OS X)]: Connected on socket iqriF61DkEH0qp-sXlwR with id 10962078
> PhantomJS 1.9.7 (Mac OS X): Executed 0 of 0 ERROR (0.003 secs / 0 secs)

So we got an error, but that is because we have no tests. Let’s wrap this into a grunt task:

> npm install grunt-karma --save-dev

And update our Gruntfile to load the task:

module.exports = function (grunt) {
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        karma: {
            unit: {
                configFile: 'karma.conf.js'
            }
        }
    });

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-karma');

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);
};

Let’s try out our new grunt task:

> grunt karma:unit

> Running "karma:unit" (karma) task
> INFO [karma]: Karma v0.12.17 server started at http://localhost:9876/
> INFO [launcher]: Starting browser PhantomJS
> WARN [watcher]: Pattern "/Users/gregstewart/Projects/github/weatherly/src/js/**/*.js" does not match any file.
> WARN [watcher]: Pattern "/Users/gregstewart/Projects/github/weatherly/tests/unit/**/*.js" does not match any file.
> INFO [PhantomJS 1.9.7 (Mac OS X)]: Connected on socket QO4qLCSO-4DZVO7eaRky with id 9893379
> PhantomJS 1.9.7 (Mac OS X): Executed 0 of 0 ERROR (0.003 secs / 0 secs)
> Warning: Task "karma:unit" failed. Use --force to continue.

> Aborted due to warnings.

Similar output, with the difference that this time our process terminated because of the warnings about no files matching our patterns. We’ll fix this issue by writing our very first unit test!

Writing and running our first unit test

In the previous chapter we created a source folder and added a sample module, to confirm our build process for our JavaScript assets worked. Let’s go ahead and create one test file, as well as some of the folder structure for our project:

> mkdir tests/unit/
> mkdir tests/unit/model/
> touch tests/unit/model/TodaysWeather-spec.js

What we want to do now is validate our Karma configuration before we start writing our real tests, so let’s add a sample test to our TodaysWeather-spec.js:

describe('Today \'s weather', function () {
    it('should return 2', function () {
        expect(1+1).toBe(2);
    });
});

We could try to run our Karma task again, but this would only result in an error stating that module is not defined. That is because we are using the CommonJS module approach, and our module under test uses:

module.exports = TodaysWeather;
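For context, here’s a minimal sketch of what such a CommonJS module might look like (the TodaysWeather internals below are assumptions for illustration, not the book’s actual implementation):

```javascript
// Hypothetical sketch of src/js/model/TodaysWeather.js as a CommonJS module.
function TodaysWeather(data) {
    this.data = data || {};
}

TodaysWeather.prototype.temperature = function () {
    return this.data.temperature;
};

// `module` exists under Node/Browserify, but not in a plain browser page --
// which is exactly why Karma reports "module is not defined" when it loads
// the raw source file directly.
module.exports = TodaysWeather;
```

Browserify resolves these module.exports/require calls at build time and bundles everything into a single browser-friendly file, which is why we point Karma at the built output rather than the raw source.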

In order to fix this we need to run our Browserify task before our Karma task, so let’s register a new unit task in our Gruntfile to handle this:

module.exports = function (grunt) {
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        },
        browserify: {
            dist: {
                files: {
                    'app/js/main.min.js': ['src/js/**/*.js']
                }
            },
            options: {
                transform: ['uglifyify']
            }
        },
        karma: {
            unit: {
                configFile: 'karma.conf.js'
            }
        }
    });

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-browserify');
    grunt.loadNpmTasks('grunt-bower-task');
    grunt.loadNpmTasks('grunt-karma');

    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify']);
    grunt.registerTask('build', ['bower:install', 'generate']);
    grunt.registerTask('unit', ['browserify', 'karma:unit']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('heroku:production', 'build');
};

And modify our karma.conf.js to point to the built version of our JavaScript code by updating the files block to point to app/js/**/*.js instead of src/js/**/*.js.

// Karma configuration
// Generated on Sun Jul 20 2014 16:18:54 GMT+0100 (BST)

module.exports = function (config) {
    config.set({

        // base path that will be used to resolve all patterns (eg. files, exclude)
        basePath: '',


        // frameworks to use
        // available frameworks: https://npmjs.org/browse/keyword/karma-adapter
        frameworks: ['jasmine'],


        // list of files / patterns to load in the browser
        files: [
            'app/js/**/*.js',
            'tests/unit/**/*.js'
        ],


        // list of files to exclude
        exclude: [
        ],


        // preprocess matching files before serving them to the browser
        // available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
        preprocessors: {
        },


        // test results reporter to use
        // possible values: 'dots', 'progress'
        // available reporters: https://npmjs.org/browse/keyword/karma-reporter
        reporters: ['progress'],


        // web server port
        port: 9876,


        // enable / disable colors in the output (reporters and logs)
        colors: true,


        // level of logging
        // possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
        logLevel: config.LOG_INFO,


        // enable / disable watching file and executing tests whenever any file changes
        autoWatch: true,


        // start these browsers
        // available browser launchers: https://npmjs.org/browse/keyword/karma-launcher
        browsers: ['PhantomJS'],


        // Continuous Integration mode
        // if true, Karma captures browsers, runs the tests and exits
        singleRun: true
    });
};

Now let’s test our setup:

> grunt unit
> Running "browserify:dist" (browserify) task

> Running "karma:unit" (karma) task
> INFO [karma]: Karma v0.12.17 server started at http://localhost:9876/
> INFO [launcher]: Starting browser PhantomJS
> INFO [PhantomJS 1.9.7 (Mac OS X)]: Connected on socket 8eDrHt--bJFzVxSN36mN with id 14048651
> PhantomJS 1.9.7 (Mac OS X): Executed 1 of 1 SUCCESS (0.002 secs / 0.002 secs)

> Done, without errors.

Perfect!

Running our tests as part of the build

Now that we have our test runner set up, let’s add it to our build process. This is going to require us to register a new task, as we need to do a few things:

  • build our assets
  • run our unit tests
  • run our end to end tests

Let’s go ahead and create a task called test in our Gruntfile and configure it to execute these tasks:

module.exports = function (grunt) {
    grunt.initConfig({
        express: {
            test: {
                options: {
                    script: './server.js'
                }
            }
        },
        cucumberjs: {
            src: 'tests/e2e/features/',
            options: {
                steps: 'tests/e2e/steps/'
            }
        },
        less: {
            production: {
                options: {
                    paths: ['app/css/'],
                    cleancss: true
                },
                files: {
                    'app/css/main.css': 'src/less/main.less'
                }
            }
        },
        copy: {
            fonts: {
                expand: true,
                src: ['bower_components/bootstrap/fonts/*'],
                dest: 'app/fonts/',
                filter: 'isFile',
                flatten: true
            }
        },
        bower: {
            install: {
                options: {
                    cleanTargetDir:false,
                    targetDir: './bower_components'
                }
            }
        },
        browserify: {
            dist: {
                files: {
                    'app/js/main.min.js': ['src/js/**/*.js']
                }
            },
            options: {
                transform: ['uglifyify']
            }
        },
        karma: {
            unit: {
                configFile: 'karma.conf.js'
            }
        }
    });

    grunt.loadNpmTasks('grunt-express-server');
    grunt.loadNpmTasks('grunt-selenium-webdriver');
    grunt.loadNpmTasks('grunt-cucumber');
    grunt.loadNpmTasks('grunt-contrib-less');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-browserify');
    grunt.loadNpmTasks('grunt-bower-task');
    grunt.loadNpmTasks('grunt-karma');

    grunt.registerTask('generate', ['less:production', 'copy:fonts', 'browserify']);
    grunt.registerTask('build', ['bower:install', 'generate']);
    grunt.registerTask('unit', ['browserify', 'karma:unit']);

    grunt.registerTask('e2e', [
        'selenium_start',
        'express:test',
        'cucumberjs',
        'selenium_stop',
        'express:test:stop'
    ]);

    grunt.registerTask('test', ['build', 'karma:unit', 'e2e']);

    grunt.registerTask('heroku:production', 'build');
};

And let’s make sure everything runs as intended:

> grunt test
> Running "less:production" (less) task
> File app/css/main.css created: 131.45 kB → 108.43 kB

> Running "copy:fonts" (copy) task
> Copied 4 files

> Running "browserify:dist" (browserify) task

> Running "karma:unit" (karma) task
> INFO [karma]: Karma v0.12.17 server started at http://localhost:9876/
> INFO [launcher]: Starting browser PhantomJS
> INFO [PhantomJS 1.9.7 (Mac OS X)]: Connected on socket 1kikUD-UC4_Gd6Qh9T49 with id 53180162
> PhantomJS 1.9.7 (Mac OS X): Executed 1 of 1 SUCCESS (0.002 secs / 0.002 secs)

> Running "selenium_start" task
> seleniumrc webdriver ready on 127.0.0.1:4444

> Running "express:test" (express) task
> Starting background Express server
> Listening on port 3000

> Running "cucumberjs:src" (cucumberjs) task
> ...

> 1 scenario (1 passed)
> 3 steps (3 passed)

> Running "selenium_stop" task

> Running "express:test:stop" (express) task
> Stopping Express server

> Done, without errors.

If you recall, we configured our build to execute grunt e2e; we now need to update this to execute grunt test. Log in to your Codeship dashboard and edit the test configuration:

Codeship dashboard with updating test configuration

Ready to give this a spin?

> git status
> git add .
> git commit -m "Karma test configuration added and new build test task created"
> git checkout master
> git merge test-runner
> git push

If we keep an eye on our dashboard we should see a build kicked off and the test task being executed:

Codeship dashboard with updating test configuration


You can read the full book over at GitBook or LeanPub. Updated content for this chapter can be found on GitHub.

Continuous Delivery

| Comments

This is the chapter on setting up our continuous delivery pipeline. As always you can read the chapter of the book here, as well as the full book.

In the previous part we wrote our first functional test (also known as a feature test or end-to-end test) and automated running it using a set of Grunt tasks. Now we will put these tasks to good use and have our Continuous Integration server run the tests with each commit to our remote repository. There are two parts to Continuous Delivery: Continuous Integration and Continuous Deployment. These two best practices were well defined in a blog post over at Treehouse; do read the article, but here’s the tl;dr:

Continuous Integration is the practice of testing each change done to your codebase automatically and as early as possible. But this paves the way for the more important process: Continuous Deployment.

Continuous Deployment follows your tests to push your changes to either a staging or production system. This makes sure a version of your code is always accessible.

In this section we’ll focus on Continuous Integration. As always, before starting we’ll create a dedicated branch for our work:

git checkout -b ci

Setting up our Continuous Integration environment using Codeship

In the what you will need section I suggested signing up for a few services; if you haven’t yet created an account with GitHub and Codeship, now is the time! Also, if you haven’t already, now is the time to connect your GitHub account with Codeship. You can do this by looking under your account settings for connected services:

Link your Github account to Codeship

To get started we need to create a new project:

Create a new project

This starts a three-step process:

  1. Connect to your source code provider
  2. Choose your repository
  3. Setup test commands

The first step is easy: choose the GitHub option, then for step two choose the weatherly repository from the list.

If you haven’t already signed up for GitHub and pushed your changes to it, the repository won’t show up in the list. Link your local repository and push all changes up before continuing.

Now it’s time for the third step: setting up our test commands. From the drop down labelled Select your technology to prepopulate basic commands choose node.js.

Next we need to tackle the Modify your Setup Commands section. The instructions tell us that Codeship can use the Node.js version specified in our package.json file. Given that we have not added this information previously, let’s go ahead and do that now. If you are unsure of your version of Node.js, simply type:

node --version

In my case the output was 0.10.28. Below is my package.json file; look for the block labelled engines:

{
    "name": "weatherly",
    "version": "0.0.0",
    "description": "Building a web app guided by tests",
    "main": "index.js",
    "engines" : {
        "node" : "~0.10.28"
    },
    "scripts": {
        "test": "grunt test"
    },
    "repository": {
        "type": "git",
        "url": "https://github.com/gregstewart/weatherly.git"
    },
    "author": "Greg Stewart",
    "license": "MIT",
    "bugs": {
        "url": "https://github.com/gregstewart/weatherly/issues"
    },
    "homepage": "https://github.com/gregstewart/weatherly",
    "dependencies": {
        "express": "^4.4.5"
    },
    "devDependencies": {
        "chai": "^1.9.1",
        "cucumber": "^0.4.0",
        "grunt": "^0.4.5",
        "grunt-cucumber": "^0.2.3",
        "grunt-express-server": "^0.4.17",
        "grunt-selenium-webdriver": "^0.2.420",
        "webdriverjs": "^1.7.1"
    }
}

With that added we can edit the set up commands to look as follows:

npm install
npm install grunt-cli

Now let’s edit the Modify your Test Commands section. In the previous chapter we created a set of tasks to run our tests and wrapped them in a grunt command grunt e2e. Let’s add this command to our configuration:

grunt e2e

Let’s hit the big save button. Now we are ready to push some changes to our repository. Luckily we have a configuration change ready to push!

git add package.json
git commit -m "Added node version to the configuration for CI"
git checkout master
git merge ci
git push

And with that, go over to your Codeship dashboard; if it all went well, you should see something like this:

First CI run!

You have to admit that setting this up was a breeze. Now we are ready to configure our Continuous Deployment to Heroku.

Setting up Continuous Deployment to Heroku

Before we configure our CI server to deploy our code to Heroku on a successful build, we’ll need to create a new app through our Heroku dashboard:

Heroku dashboard

And click on the Create a new app link and complete the dialogue box.

Creating a new app

The name weatherly was already taken, so I left the field blank to get one assigned; if you do this as well, just be sure to make a note of the name as we’ll need it shortly. I chose Europe, well, because I live in Europe, so feel free to choose whatever region makes sense to you.

Confirmation screen

Armed with this information, let’s head back to our project on Codeship and configure our deployment. From the project settings choose the Deployment tab, and from the targets select Heroku. You will need your Heroku app name (see above) and your Heroku API key, which you can find under your account settings in the Heroku dashboard:

Codeship settings for heroku deployment

We will be deploying from our master branch. Once you are happy with your settings, click on the little green tick icon to save the information. Time to test our set up! We just need to make one little change to our app configuration, which is handy because it allows us to commit a change and verify the whole process from start to finish. In the previous section we configured our web server to listen on port 3000; however, Heroku assigns a port dynamically, so we need to account for that by editing our server.js file and adding process.env.PORT to our listen function:

var express = require('express');
var app = express();

app.use(express.static(__dirname + '/app'));

var server = app.listen(process.env.PORT || 3000, function() {
    console.log('Listening on port %d', server.address().port);
});
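The fallback in that listen call is worth spelling out: process.env.PORT || 3000 picks the environment-supplied port when Heroku provides one, and falls back to 3000 for local development. A tiny sketch of just that logic (the resolvePort helper is purely illustrative, not part of the app):

```javascript
// Hypothetical helper isolating the `process.env.PORT || 3000` fallback.
function resolvePort(env) {
    // Heroku injects PORT as a string; locally it is usually undefined,
    // so the || operator falls through to the default.
    return env.PORT || 3000;
}

console.log(resolvePort({}));               // locally: 3000
console.log(resolvePort({ PORT: '5000' })); // Heroku-style: "5000"
```

Express is happy with either a number or a numeric string, which is why no parsing is needed here.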

Now let’s commit the change:

git add server.js
git commit -m "Server configured to handle dynamic port allocation"
git push

If we check our build dashboard we should see a successful build and deployment to our Heroku instance:

Successful build and deployment

The build process checks that we get a 200 response back and marks the build as successful, so let’s open up our browser to see the results of our work:

Weatherly running on Heroku

And there you are: your Continuous Delivery pipeline has been created, and in less than a minute we go from commit to production!

Recap

In this last section we:

  • configured our CI environment
  • set it up to run our feature tests
  • created a Heroku app
  • configured our CI environment to deploy to that instance
  • modified our web server to handle dynamic port allocation

And now it’s time to write some code!


You can read the full book over at GitBook or LeanPub. Updated content for this chapter can be found on GitHub.