
The Buckblog

assorted ramblings by Jamis Buck


3 January 2007 — 2-minute read

Tip of the day: rcov.

Writing tests is all well and good, but how do you know when your application is sufficiently tested? Especially when you’re just learning how to do automated testing, it can sometimes feel pretty arbitrary. However, there are many different metrics for evaluating the effectiveness of your tests, and one of the simplest to measure is code coverage: how much of your code do your tests exercise?

Mauricio’s “rcov” utility does just that. You use it to run your tests, and it then reports the percentage (total, as well as per file) of lines of code that were executed. It even gives you a view of each file, with the untested lines in red! Really, really helpful. Your tests will run slower under rcov, but not much slower, and it is far faster than earlier coverage tools. Also, it works really well with Rails applications.
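To make the idea concrete, here is a minimal sketch of what line coverage measures, using the `Coverage` module that ships with modern Ruby (it arrived in the standard library after rcov did). The `sane_price?` method and the temp file are made up for illustration; rcov itself does this bookkeeping for your whole test suite.

```ruby
require 'coverage'
require 'tempfile'

# Write a tiny library to a temp file, because Coverage only tracks
# files loaded *after* Coverage.start.
lib = Tempfile.new(['price_check', '.rb'])
lib.write(<<~RUBY)
  def sane_price?(price)
    if price > 0
      true
    else
      false
    end
  end
RUBY
lib.close

Coverage.start
load lib.path
sane_price?(12.95)  # exercises only the positive branch

per_line = Coverage.result[lib.path]  # hit counts per line; nil for non-code lines
covered  = per_line.compact.count { |hits| hits > 0 }
total    = per_line.compact.size
puts "#{covered}/#{total} executable lines covered"
```

Because the negative branch is never taken, the report shows less than full coverage, which is exactly the kind of gap a tool like rcov highlights in red.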

Now, those of you that are testing gurus will be quick to point out that relying solely on code coverage can be dangerous, and I will agree. Code coverage should not be the only metric you use to evaluate your tests. Ensuring that every line of code has been executed at least once does not even come close to guaranteeing that your application is correct, but it is a lot better than shooting tests randomly into your domain and hoping for the best.

Besides simple code coverage, others in the Ruby community are working all the time on different techniques for testing. You could do a lot worse than to follow what Ryan Davis is concocting with his ZenTest suite of tools.

Reader Comments

Hi Jamis,

I am new to testing, but can you explain something to me? We do DNA testing, particle acceleration, launches into orbit, you name it.

Why don’t we have a solution for this programming problem?


Peter, what programming problem are you talking about? Code coverage analysis? rcov, presented in this article, satisfies that nicely. Am I misunderstanding your question?

In addition to test coverage, you can also use rcov to analyze code usage:

rcov script/server

Hi Jamis,

I should stick to one glass of wine ;-)

What I mean is the following (if it’s already out there somewhere, pardon me). There are all kinds of assumptions in Rails, for the good. Why can’t code for tests be written based on those assumptions? For example: in my model I have an attribute :price. In the real world I know I won’t find a book in a store with a price tag of zero or -4.17 dollars, and at the counter I would look like a fool trying to pay with a zero-dollar bill. So, could I assume some clever software being able to write tests based on, say, an attribute named :price? Then

book.price = -1
assert !book.valid?

book.price = 0
assert !book.valid?

would be written automatically. Now, I am a noob at testing, so this probably isn’t something rcov addresses. And for a noob it’s difficult to know what to choose for Rails testing.

Sorry if this is the wrong topic for my question

regards, peter

Peter: I’m sure Jamis could answer your question much better than I, but I think what you are alluding to is Behaviour Driven Development. RSpec is a great plugin/gem for doing this.

As far as the software writing tests for you, I don’t think that is a wise decision. Human behaviour is too unpredictable. What you call price, another could call amount, and another something else. Besides, tests are quite easy to write yourself.

You can either use TDD (Test Driven Development) that comes pre-packaged with Rails or use RSpec for BDD (Behaviour Driven Development). I tend to think testing is going to shift to behaviour driven development as you don’t need to test the Rails Framework, but you do need to test the behaviour (methods) you’ve written.
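Robert’s point that such tests are easy to write by hand can be sketched like this. The `Book` class below is a plain-Ruby stub standing in for an ActiveRecord model with a numericality validation on :price; the assertions mirror the ones Peter wrote out above.

```ruby
# Stub standing in for an ActiveRecord model that declares
#   validates_numericality_of :price, :greater_than => 0
class Book
  attr_accessor :price

  def valid?
    !price.nil? && price > 0
  end
end

book = Book.new

book.price = -1
raise "negative price should be invalid" if book.valid?

book.price = 0
raise "zero price should be invalid" if book.valid?

book.price = 12.95
raise "a normal price should be valid" unless book.valid?

puts "price validation behaves as expected"
```

The point is that the expectations (no zero or negative prices) come from you, not from the framework; a tool could guess at them from the attribute name, but only you know the business rule.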

In addition to Robert’s comments, it’s useful to point out that testing validations is the “uninteresting” part of testing. “Is this value what I expect?” It’s easy to query and easy to test.

The interesting cases are things like “is the correct branch being rendered in my partial,” or “are all the parts of this case statement being taken, with the correct results”. Things like that can’t really be tested using just convention, since the “expected” result differs from application to application.
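A small, hypothetical example of the “every branch, with the correct result” idea: the method and partial names below are invented for illustration, but the shape is the interesting part. Line coverage alone could be satisfied by fewer cases, and it would never tell you whether the *right* branch was chosen, so each expected value has to be asserted by hand.

```ruby
# Hypothetical helper: which partial should be rendered for an order status.
def partial_for(status)
  case status
  when :pending  then "orders/_pending"
  when :shipped  then "orders/_shipped"
  when :canceled then "orders/_canceled"
  else                "orders/_unknown"
  end
end

# One assertion per branch, each spelling out the expected result.
raise unless partial_for(:pending)  == "orders/_pending"
raise unless partial_for(:shipped)  == "orders/_shipped"
raise unless partial_for(:canceled) == "orders/_canceled"
raise unless partial_for(:other)    == "orders/_unknown"
puts "all branches verified"
```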

Also, it is worth mentioning here this computer science thingy known as the halting problem. Without getting into all the gory details, that result basically implies that it is not possible to prove that an arbitrary computer program will function correctly. In turn, it is not possible to write a program that can check the correct functioning of another arbitrary program. So we’re stuck finding better and better (but still imperfect) ways of testing our applications.

I cannot recommend strongly enough the importance of learning multiple different testing strategies. Using a single strategy is certainly better than using none at all, and it is a good place to start, but don’t stop there. The more ways you test your app, the more confident you can be that it is correct.

Hi Jamis and Robert,

Thanks for answering