My Coding Standard for Rails Projects (Part 3)

Unit Testing

Unit Testing is the central piece of a project for two reasons:

  • Tests direct how the production code is written.
  • Tests assure that each execution path within a unit (method) produces the output the developer thinks it produces.

I am going to describe how tests direct how the code under test is written.

I have a class that takes a hash called settings and uses the settings to do something.

Like below:

class SomeClass
  attr_reader :settings
  private :settings
  def initialize(settings)
    @settings = settings
  end

  def deliver!(mail)
    list = Redis::List.new settings[:redis_key_name], :marshal => true
    settings[:marshallable_converters].each do |setting|
      mail = MarshallableConverterSetting.new(setting).marshallable_class.marshallable(mail)
    end
    list << mail
  end
end

Now the problem is that sometimes settings will be missing some keys. It may not have :redis_key_name, for example. In that case, I can assume a default for the missing keys.
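The failure mode is easy to reproduce in plain Ruby: a Hash silently returns nil for a missing key, so settings[:redis_key_name] would hand nil straight to Redis::List.new. A minimal illustration (the key names match the example above; the values are placeholders):

```ruby
settings = { :marshallable_converters => [] }

# A missing key silently yields nil rather than raising,
# so the error surfaces far away from its cause.
settings[:redis_key_name]                      # => nil

# Hash#fetch makes the gap explicit: raise, or supply a default inline.
settings.fetch(:redis_key_name, "some-name")   # => "some-name"
```

Scattering fetch-with-default calls throughout the class is one option, but as the rest of this section shows, it is worth asking who should own those defaults.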

The first attempt is to do something like this in the initialize:

def initialize(settings)
  @settings = default_settings.merge(settings)
end

private
def default_settings
  {:redis_key_name => "some-name", :marshallable_converters => :some_default}
end

First, let me say that although this change is relatively small, it breaks something important about OO design: the logic for inserting the defaults does not logically belong to this class, so supplying defaults should not be this class’s responsibility.

In the real world, I might not have realized this at all; it is such a small and innocent-looking addition. But because I wrote my tests first and tried to go down this path, I soon ran into the question: how am I going to test this? I could not check the settings hash directly, since it is private (and it should be). I could call deliver! and add contexts verifying that when a setting is missing the default takes over, but that would make the tests messy and irrelevant to what deliver! actually does. So I concluded I could not do this and should seek another implementation. The end result looks like this:

class SomeClass
  attr_reader :settings
  private :settings

  # @api private
  def initialize(settings)
    @settings = Settings.new(settings)
  end

  # @api private
  def deliver!(mail)
    list = Redis::List.new settings.redis_key_name, :marshal => true
    settings.marshallable_converters.each do |setting|
      mail = MarshallableConverterSetting.new(setting).marshallable_class.marshallable(mail)
    end
    list << mail
  end
end

A new class, Settings, was added to handle default values. Both the code and the tests became much simpler. As a bonus, the Settings class was later reused by other classes as well.
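A minimal sketch of such a Settings class follows. The defaults and method names here are my assumptions based on the earlier snippets, not the original implementation:

```ruby
# Hypothetical sketch: Settings owns the default-merging logic,
# so SomeClass no longer has to know anything about defaults.
class Settings
  DEFAULTS = {
    :redis_key_name          => "some-name",
    :marshallable_converters => []
  }.freeze

  def initialize(overrides = {})
    @settings = DEFAULTS.merge(overrides)
  end

  def redis_key_name
    @settings[:redis_key_name]
  end

  def marshallable_converters
    @settings[:marshallable_converters]
  end
end
```

Because defaulting now sits behind its own public interface, it can be unit-tested directly, e.g. asserting that Settings.new.redis_key_name returns the default, without ever going through deliver!.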

Integration Tests

A few words about the tools.

I don’t like cucumber since day one. It is messy with global steps; it claims writing cucumber tests is doing BDD, well, it is not. It is doing integration testing, pure and simple.

Regardless, Cucumber is widely used, and better alternatives have grown out of it, such as Spinach and Turnip.

The great thing about Cucumber is Gherkin, a great way of communicating among Business Analysts (BA), Product Managers (PM), QA, and developers.

Integration tests are essential to a project: unit tests should cover all the logic within a method, but there is no guarantee that calls between methods will work correctly. We cannot prove the correctness of a program in reasonable time, but we can increase the chance that it is correctly written by using integration tests.

When a developer starts on a new feature, he writes a feature file with several scenarios (which could also be written by a BA), and of course it won’t pass yet. He actively communicates with the business people about whether this is what they want. Having gained reasonable insight into what the feature is about, his task then becomes clearly defined: implement something that makes the feature go green.
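For illustration, such a feature file might look like the sketch below. The domain and wording are invented here purely to show the Gherkin shape, not taken from any real project:

```gherkin
Feature: Password reset
  Scenario: User requests a reset link
    Given a registered user with email "user@example.com"
    When the user requests a password reset
    Then a reset email is sent to "user@example.com"
```

Because each scenario reads as plain business language, a BA or PM can review (or write) it before a single line of production code exists.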

The same goes for bug fixes: first write a feature that fails because of the bug in question, then fix the bug and verify that the feature passes. This approach gives the developer a clearly defined goal, and in the end he can say with confidence: I fixed the bug, because this feature is now green. This is also a great way to give management faith in the development team.
