What are Outside-in & Inside-out testing?

I am halfway through the Test-Driven Rails workshop and I have noticed outside-in testing being mentioned, but I am completely unaware of what it means, along with inside-out. I have tried looking back through the videos but I cannot find where it was introduced.

I had a go at Googling for a brief explanation, but I can't find anything that gives me a straight insight into either concept. Have any of you stumbled upon any documentation? Or, if anyone could spare a post explaining it, I would very much appreciate it.



Hi @Damien_Hogan, from what I’ve learned from the book Growing Object-Oriented Software Guided by Tests, outside-in means starting with a high-level acceptance/integration test and moving down to the unit test level when your higher-level test stops driving the implementation.

So, the opposite: inside-out testing starts with a unit test and gradually moves up to an acceptance/integration test, making sure the system works at both the unit level and the system level (this is why integration tests are sometimes called 'end-to-end' tests).

Hope that helps!

I will just explain my process and why I call it what I call it and let you do with that information what you like…

I build outside-in; however, I start with the REPL and unit tests. I also build modularly, so I think that changes my outcome compared to a more tightly coupled coder. Let me give an example…

I have a Rails app that already exists. In it, I need to add a new bit of functionality to process a file the user uploads, lint it, and store it in a persistence layer.

I would not just start building models and controllers here. Instead I would start with a PORO (plain old Ruby object). Which object, and how? I start with the entry point: the file the user is going to upload.

There I draw a boundary. My code lives here, while Rails lives up there somewhere. I do not need to care about Rails at this point, I only need to care about the file… There is no spoon.

So the requirements are: given a file, I need to make sure it is valid, then I need to store the values into a database structure of some kind.

That leads me to qualify what the file is and what it looks like, which means I need an example from the business, or a specification from which to build my sample file. The former is always better.

Now that I know what is valid and invalid, and what the structure of the data I am storing looks like, it's time for a test. Notice that up to now I have written zero production code.

My object would probably start life named UserSubmittedFilePersister, a name chosen from the nouns and verbs in the requirements, and it would evolve over time as I learn more about the domain I am solving. But back to the tests.

So my first unit test would verify the structure of the data. I would use the example as fixture 1, mutate it in one incorrect way, and save that as fixture 2. Then I simply assert that, given a valid file, the object's single public method will indeed validate the file.

Then I write the production code to make that pass. Then I write a test that fails based on the 'bad' fixture. I continue on until the validation part is fleshed out.

Since validation and persisting aren't the same concern, I would break that production code and its tests out into a new object: ValidateUserSubmittedFile. The tests will pass again after the rename.

Now I plug that validator into the original object UserSubmittedFilePersister and continue the cycle to define what persisting means.
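As a sketch in plain Ruby (the file structure, method names, and storage interface here are all my assumptions, not part of the original process):

```ruby
# Hypothetical sketch of the two collaborating POROs described above.
class ValidateUserSubmittedFile
  REQUIRED_KEYS = %w[name email].freeze # assumed file structure

  # Returns true when there is at least one row and every row
  # contains the required keys.
  def call(rows)
    !rows.empty? && rows.all? { |row| (REQUIRED_KEYS - row.keys).empty? }
  end
end

class UserSubmittedFilePersister
  def initialize(validator: ValidateUserSubmittedFile.new, store: [])
    @validator = validator
    @store = store
  end

  # Single public method: validate the parsed rows, then persist them.
  def persist(rows)
    return false unless @validator.call(rows)
    rows.each { |row| @store << row }
    true
  end
end
```

Because the validator is injected, Rails (or a real database-backed store) can be plugged in later without the domain objects ever knowing about it.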

Once my domain logic is all fleshed out, minus the file origin and the actual database save, it is time to write integration tests to help me integrate it into my Rails app. I already know the interface and the contract at that point, so it is easy.

Only at that point do I attempt to integrate it. The integration should not force design on the domain logic, in my not-so-humble opinion.

Some would call this bottom up. I of course, would disagree, and have. :slight_smile:


@Dreamr I really like that idea. It sounds like a process where you unit test your PORO boundary objects first, and then drive out the Rails side outside-in [1]. It means you focus on the OO way to do things and then, as you say, when you have to deal with Rails integration, you already know your API.

I think we always end up trying to extract POROs from Rails once we've already integrated, which feels more natural (and people are less likely to shout YAGNI!), but your process sounds equally good. It means that Rails is more of an afterthought, which obviously is a little risky, but it should lead to a much cleaner divide between Rails and your domain.

[1] We don’t always need to give processes a restrictive name anyway!

Thanks @aaronmcadam & @Dreamr, both good insights. I will have to keep plugging away at Test-Driven Rails as it is still fuzzy for me. I think practice, and maybe the book you mentioned, is the best way for me to get a true understanding.

@Damien_Hogan Yes, GOOS is up there with the best TDD books you can get! A good Ruby-based companion for that book is POODR as both books talk about modelling code with Roles and Responsibilities and concentrating on message passing and communication between objects.

Make sure to show us some of your attempts as you go!

@aaronmcadam I noticed you mentioned unit tests. As I go through the testing in the workshop with RSpec, I notice that we touch on integration tests and testing of the models, but I don't hear any reference to unit tests. Where do these fall in terms of using RSpec? I understand unit testing is a form of TDD in its own right.

@Damien_Hogan In the context of Rails, a model spec is a unit test. It's not really anything to do with RSpec parlance; it's just that thoughtbot and others tend to treat unit tests and model specs as the same thing. But when you extract a Service, for example, you should still write a unit test/spec for it.

@Damien_Hogan Outside-in means that you drive your development from high-level tests and work your way down to lower-level tests. This is what the process looks like at thoughtbot:

Say we have a user story that says that:

As a guest, I can add items to my shopping cart and see those items in my cart

Before thinking about objects, database tables, or controllers, we write a feature/integration spec (the two terms are roughly synonymous) to drive this behavior. This is a high-level test that describes the behavior from the user's point of view. We typically write these using RSpec and Capybara. There is no mocking/stubbing here. A test for the user story might look like this:

feature 'User views shopping cart' do
  scenario 'and sees items' do
    user = create(:user, password: 'secret')
    item = create(:item)

    visit root_path
    fill_in 'Search', with: item.name
    click_on 'Add to cart'
    click_on 'Shopping Cart'

    expect(page).to have_content(item.name)
  end
end

Depending on how much of the app is already built, this could break in multiple places. For example, we might get an error saying that there is no link with the text "Add to cart". We would then create a link in the appropriate view. The next error might prompt us to create a route. Eventually we might reach an error saying that there is no ShoppingCart class. At that point, we would write some unit tests for that object. These tests run in isolation; we mock/stub all calls to external objects.

describe ShoppingCart do
  it 'can have items added to it' do
    item = double
    cart = ShoppingCart.new

    cart.add(item)

    expect(cart.items).to eq [item]
  end
end
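A minimal implementation that would satisfy a unit test like this (the `add` method name is an assumption) might be:

```ruby
# Plain Ruby object; no Rails dependency needed at this level.
class ShoppingCart
  attr_reader :items

  def initialize
    @items = []
  end

  # Append an item; returns self so calls can be chained.
  def add(item)
    @items << item
    self
  end
end
```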

Once this test passes, we've probably got past the error in our high-level test as well. The next failure of our integration test may lead us to drop down a level and TDD a smaller component with another unit test. This is the general pattern of outside-in development: start with a high-level test, drop down to lower-level unit tests where necessary, and go back to your high-level test. Repeat until the high-level test is green. At that point, we've successfully implemented our user story.

A big advantage of outside-in testing is that you can start developing a new feature without having to know the eventual architecture of the solution. All you need to know is the user experience you want, and you then let your tests guide you toward the solution.


Hi @joelq, my app is completely API-driven, and I've always stubbed the API requests in my feature cukes. Would you recommend not stubbing those external requests and taking the slowness hit, partly because integration tests shouldn't know what calls the system will make?

VCR is a good compromise in the situation you’re describing, which will get you some independence from a network connection. If you’re dependent on some external service, though, I recommend having some tests that aren’t part of your standard test run (filtered out by default in spec_helper) that you can run on demand to confirm that your network-dependent functionality is indeed working.
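In RSpec, that on-demand filtering can be set up with an exclusion filter in spec_helper.rb (the `external` tag name here is just an illustrative convention):

```ruby
# spec_helper.rb
RSpec.configure do |config|
  # Skip network-dependent specs in the default run.
  config.filter_run_excluding external: true
end
```

Tag the network-dependent specs with `it 'fetches live data', external: true` and run them on demand with `rspec --tag external`.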

EDIT: If you are talking about your own internal API, then you shouldn’t be stubbing at all in integration tests. If the APIs in question are external services, I think it’s realistic to assume that sometimes you’ll use VCR or webmock.

Good point @aaronmcadam. While we don't stub out requests to other objects, the database, or the filesystem in an integration test, we do stub out requests to the internet. I've used VCR, WebMock, and fake APIs to do this on various projects.

@geoffharcourt Yeah, my app connects to our internal Rails API. I’ve used VCR extensively in the past, but found that the cassettes kept getting overwritten in weird ways. So now I just use WebMock directly and hand-make the JSON fixtures. It’s painful, but at least I can add edge cases where needed. I may go back to try VCR sometime, maybe I’ll get on a bit better with it.

I think VCR has an option to only write to a cassette if none exists. You might need to make the request matching more or less specific. Configuring it is kind of a drag.
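For reference, VCR's record modes cover this: the default, `:once`, records only when no cassette file exists, while `:none` never writes at all. The cassette name and the method call inside the block below are illustrative:

```ruby
VCR.use_cassette('internal_api/users', record: :once,
                 match_requests_on: [:method, :uri]) do
  # :once records the first run, then replays forever after.
  # If requests stop matching an existing cassette, adjust
  # match_requests_on rather than letting VCR re-record.
  fetch_users_from_api
end
```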

@geoffharcourt Yeah, that's part of the reason why I dropped it in the end: WebMock just wouldn't match the request; it can be quite strict.

@geoffharcourt @joelq have you guys ever looked at something like this: GitHub - rwz/mock5: Create and manage API mocks with Sinatra?

It’s a Sinatra app that you run your tests against. I’m not sure if it’s worth it or not. One other option myself and others on the team have thought about was running the cukes against a local instance of the API. Does that sound plausible?

My concern in cases like that is that the complexity of running another app in addition to the first one is probably larger than the complexity involved in maintaining VCR and tweaking the rules for your cassette matching.

I might be a bit biased here because some of my most productive programming moments have been on planes and trains when I haven’t had any Wifi.


@geoffharcourt Yeah that’s one of the worries I’ve got about a test suite depending on another app.

I guess I'll have to try VCR again and see what happens.

Neat gem.

I use a Sinatra app for testing external APIs myself and it works great - the biggest advantage over mocking is that it can also handle the JavaScript requests in your integration tests. (This is super useful when you’re dealing with getting tokens from the Stripe API, for example.) As long as you’re launching the Sinatra app from your test suite and not separately it’s not a hassle to deal with.
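The core idea can be sketched with nothing but the Ruby standard library: boot a tiny fake HTTP server from the test suite on a random port and point the client at it. (Sinatra gives you proper routing on top of the same idea; everything below, including the endpoint and response body, is a made-up example.)

```ruby
require "socket"
require "net/http"
require "json"

# Fake API: one canned JSON response, served from a background thread.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1] # port 0 asks the OS for any free port

thread = Thread.new do
  client = server.accept
  # Consume the request line and headers (a GET has no body).
  while (line = client.gets) && line.chomp != ""
  end
  body = { "token" => "fake-token" }.to_json
  client.write("HTTP/1.1 200 OK\r\n" \
               "Content-Type: application/json\r\n" \
               "Content-Length: #{body.bytesize}\r\n\r\n#{body}")
  client.close
end

# The app under test would be configured to hit this host/port
# instead of the real service.
response = Net::HTTP.get(URI("http://127.0.0.1:#{port}/tokens"))
thread.join
```

Because the server really listens on a socket, JavaScript-driven requests in a Capybara test can reach it too, which is the advantage over in-process mocking mentioned above.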

This was the blog post (also from thoughtbot) that inspired my own approach:

Using Capybara to Test JavaScript that Makes HTTP Requests


@gyardley Cool, thanks for your input!

As an aside, I use mocha and konacha/teaspoon for my JS tests. I don’t think it makes sense to be testing JS with Capybara.