I’m wondering when you drop down and write a controller spec, or a model spec.
Do you let the integration tests drive this entirely, letting their error messages drag you through, or do you write specs for each section of code individually? So you start with an integration test, then move to a route spec, then a controller spec, then a view spec, then a model spec?
After following the exercises it seemed like there was close to 100% coverage across all of those spec types. Was this because it was an exercise, or is this pretty normal practice?
I tend to have the majority of my tests at the model level, with a much smaller number at the feature level covering the main features of the app. I’ll typically have only a handful in between (controller, view, helper). Code Climate has a great blog post, The Rails Testing Pyramid, that describes a similar structure.
I will typically start with the feature spec and use it to drive things out for as long as it is giving me meaningful feedback. Say I were building a users index that needed to display a list of users with a particular bit of data about them; I would begin with a feature spec (there’s a sketch of one after the list below) and run it, following the failure messages:
“no route matches /users” clearly tells me to add the needed route.
“missing file UsersController” tells me to create the controller.
“missing action #index in UsersController” tells me to write the action.
“missing template” tells me to create the template.
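For concreteness, here’s a rough sketch of the kind of feature spec I’m describing. It assumes Capybara, FactoryBot syntax methods included in rails_helper, a user factory with name and sign_in_count attributes, and a users_path route; all of those details are invented for illustration:

```ruby
# spec/features/users_index_spec.rb
require "rails_helper"

RSpec.feature "Users index" do
  scenario "visitor sees each user with their sign-in count" do
    # Factory attributes here are assumptions, not from a real app
    create(:user, name: "Alice", sign_in_count: 3)
    create(:user, name: "Bob", sign_in_count: 1)

    visit users_path

    expect(page).to have_content "Alice"
    expect(page).to have_content "3"
    expect(page).to have_content "Bob"
  end
end
```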
I might be able to push a bit farther based on the desired content of the template, but at some point I’ll feel that the feature spec’s failures are no longer giving me useful feedback driving the next bit of code I need to write. When that happens, I’ll typically drop down to a unit / model spec to drive out the behavior I need on my User model. When that is done I will pop back up to the feature spec, and in theory it should now be passing as well.
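That drop-down might look something like the sketch below; the #display_name method is made up, standing in for whatever behavior the feature spec could no longer usefully drive out:

```ruby
# spec/models/user_spec.rb
require "rails_helper"

RSpec.describe User, type: :model do
  # Hypothetical unit-level behavior driven out at the model layer
  describe "#display_name" do
    it "combines first and last name" do
      user = User.new(first_name: "Alice", last_name: "Smith")

      expect(user.display_name).to eq "Alice Smith"
    end
  end
end
```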
I use controller specs sparingly, when there is logic unique to the controller. In general I try to avoid significant logic in the controller, instead opting to extract a class that contains the logic and have the controller collaborate with that object, but some things do belong in the controller. An example would be unique authorization or access control. I will typically write controller specs to drive out that sort of behavior. This is most common when I am working on any sort of API.
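A rough example of that kind of controller spec; the Admin namespace, the Devise sign_in test helper, and the admin flag on users are all assumptions made for illustration:

```ruby
# spec/controllers/admin/users_controller_spec.rb
require "rails_helper"

RSpec.describe Admin::UsersController, type: :controller do
  describe "GET #index" do
    it "redirects non-admin users away" do
      # Exercising access control directly at the controller level
      sign_in create(:user, admin: false)

      get :index

      expect(response).to redirect_to(root_path)
    end
  end
end
```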
View specs I use sparingly, if there is logic that lives mostly in the view but is hard to test from a different level. Again, I think this is not a great sign, and I would probably want to refactor to pull that logic out into an object or at least a view helper (which I would test independently), but it does come up.
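For example, a small bit of view logic pulled into a helper and tested on its own might look something like this; the helper and its behavior are invented for illustration:

```ruby
# app/helpers/users_helper.rb
module UsersHelper
  # Hypothetical presentation logic pulled out of the view
  def formatted_sign_in_count(user)
    pluralize(user.sign_in_count, "sign in")
  end
end

# spec/helpers/users_helper_spec.rb
require "rails_helper"

RSpec.describe UsersHelper, type: :helper do
  describe "#formatted_sign_in_count" do
    it "pluralizes the count" do
      user = double(sign_in_count: 2)

      expect(helper.formatted_sign_in_count(user)).to eq "2 sign ins"
    end
  end
end
```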
100% coverage is certainly not a bad thing, but I rarely find myself chasing it as a goal in and of itself. TDD will get you to the important 90%. Further, I would trust my intuition far above any coverage number. I’ve seen situations where I had close to 100% coverage, but didn’t feel confident that the tests covered everything. Similarly, I’ve had well less than 100% coverage but felt confident in the code.
@christoomey great write-up. Sometimes I feel there is a bit of duplication between my feature specs and my unit/model specs. For example, say we are building a blog engine and the homepage lists all published posts. We start from a feature spec: we set up data with both unpublished and published posts, then expect to see only the published posts when a user visits the homepage. Then, when we test at the unit/model level to implement a scope that returns only published posts, we do the same data setup with unpublished and published posts, call the scope, and expect the results to include only the published posts. That’s why it feels like duplicated knowledge. I know that in this example you might not even bother to test the scope, but in a real app the scenario might be more complicated than this and need tests at both levels, roughly like the pairing sketched below. What do you think?
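To make the duplication concrete, the pairing might look roughly like this; the Post model, its published flag and scope, and the factories are all assumed for the sake of the example:

```ruby
# spec/features/homepage_spec.rb -- the feature-level check
require "rails_helper"

RSpec.feature "Homepage" do
  scenario "visitor sees only published posts" do
    create(:post, title: "A published post", published: true)
    create(:post, title: "A draft post", published: false)

    visit root_path

    expect(page).to have_content "A published post"
    expect(page).not_to have_content "A draft post"
  end
end

# spec/models/post_spec.rb -- nearly the same setup repeated at the unit level
require "rails_helper"

RSpec.describe Post, type: :model do
  describe ".published" do
    it "returns only published posts" do
      published = create(:post, published: true)
      create(:post, published: false)

      expect(Post.published).to eq [published]
    end
  end
end
```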