What are outside-in and inside-out testing?

So I am halfway through the Test-Driven Rails workshop and I have noticed outside-in testing being mentioned, but I am completely unaware of what this means, along with inside-out. I have tried to look back through the videos but I cannot find where it was introduced.

I had a go at Googling around for a brief explanation but I can't find anything that gives me a straight insight into either of those concepts. Have any of you stumbled upon any documentation? Or if anyone could spare a post explaining it, I would very much appreciate it.

Damien

Hi @Damien_Hogan, from what I've learned from the book Growing Object-Oriented Software Guided by Tests, outside-in means starting with a high-level acceptance/integration test and moving down to the unit-test level when your higher-level test stops driving the implementation.

Inside-out testing is the opposite: it starts with a unit test and gradually moves up to an acceptance/integration test to make sure the system works both at a unit level and at a system level (this is why integration tests sometimes get called 'end-to-end' tests).

Hope that helps!

I will just explain my process and why I call it what I call it, and let you do with that information what you like…

I build outside-in; however, I start with the REPL and unit tests. I also build modularly, so I think that changes my outcome compared to a more coupled coder. Let me give an example…

I have a Rails app that already exists. In it, I need to add a new bit of functionality to process a file the user uploads, lint it, and store it in a persistence layer.

I would not just start building models and controllers here. Instead I would start with a PORO (plain old Ruby object). Which object, and how? I start with the entry point: the file the user is going to upload.

There I draw a boundary. My code lives here, while Rails lives up there somewhere. I do not need to care about Rails at this point, I only need to care about the file… There is no spoon.

So the requirements are that, given a file, I need to make sure it is valid, then I need to store the values in a database structure of some kind.

That leads me to qualify what the file is and what it looks like, which means I need an example from the business, or a specification from which to build my sample file. The former is always better.

Now that I know what is valid and invalid, and what the structure of the data I am storing looks like, it's time for a test. Notice that up until now I have written zero production code.

My object would probably start life named UserSubmittedFilePersister, and that would evolve over time as I learn more about the domain I am solving. But back to the tests, with the above name chosen based on the nouns and verbs in the requirements.

So my first unit test would verify the structure of the data. I would use the example as fixture 1, mutate it in one incorrect way, and save that as fixture 2. Then I simply assert that, given a valid file, the object's single public method will indeed validate the file.

Then I write the production code to make that pass. Then I write a test to fail it based on the 'bad fixture'. Then I continue on until the validation part is fleshed out.
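
A minimal sketch of what that first spec might look like (the fixture paths and the valid? method name are my assumptions, not something specified in the post):

describe UserSubmittedFilePersister do
  # Fixture 1 is the example from the business; fixture 2 is the same file
  # mutated in one incorrect way. Paths and #valid? are illustrative.
  let(:good_file) { File.new('spec/fixtures/files/submission_valid.csv') }
  let(:bad_file)  { File.new('spec/fixtures/files/submission_invalid.csv') }

  it 'accepts the example file from the business' do
    expect(UserSubmittedFilePersister.new(good_file)).to be_valid
  end

  it 'rejects the mutated file' do
    expect(UserSubmittedFilePersister.new(bad_file)).not_to be_valid
  end
end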

Since validation and persisting aren't the same concern, I would break that production code and its tests out into a new object… ValidateUserSubmittedFile. The tests will pass again after the rename.

Now I plug that validator into the original object UserSubmittedFilePersister and continue the cycle to define what persisting means.
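
Roughly, the collaboration might end up looking like this (a sketch; the constructor and method names are illustrative, not from the post):

class UserSubmittedFilePersister
  # The validator is injected so each object keeps a single concern and the
  # persister can be tested with a stubbed collaborator.
  def initialize(file, validator: ValidateUserSubmittedFile.new)
    @file = file
    @validator = validator
  end

  def persist
    return false unless @validator.valid?(@file)
    # ... map the file's values onto the persistence layer ...
    true
  end
end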

Once my domain logic is all fleshed out, minus the file origin and the actual database save, it is time to write integration tests to help me integrate it into my Rails app. I already know the interface and the contract at that point, so it is easy.

Only at that point do I attempt to integrate it. The integration should not force design on the domain logic, in my not-so-humble opinion.
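
When that integration finally happens, the Rails side can stay thin. A rough sketch, assuming a hypothetical controller, route, and param name:

class UserSubmittedFilesController < ApplicationController
  def create
    # Rails only hands the uploaded file across the boundary; the PORO
    # already defines the interface and the contract.
    persister = UserSubmittedFilePersister.new(params[:file].tempfile)

    if persister.persist
      redirect_to user_submitted_files_path, notice: 'File processed'
    else
      render :new
    end
  end
end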

Some would call this bottom-up. I, of course, would disagree, and have. 🙂

@Dreamr I really like that idea. It sounds like a process where you unit test your PORO boundary objects first, and then drive out the Rails side outside-in [1]. It means you focus on the OO way to do things and then, as you say, when you have to deal with Rails integration, you already know your API.

I think we always end up trying to extract POROs from Rails once we've already integrated, which feels more natural (and people are less likely to shout YAGNI!), but your process sounds equally good. It means that Rails is more of an afterthought, which obviously is a little bit risky, but it should lead to a much cleaner divide between Rails and your domain.

[1] We don't always need to give processes a restrictive name anyway!

Thanks @aaronmcadam & @Dreamr. Both are good insights. I will have to keep plugging away with Test-Driven Rails, as it is still fuzzy for me. I think practice, and maybe the book you mentioned, is the best way for me to get a true understanding.

@Damien_Hogan Yes, GOOS is up there with the best TDD books you can get! A good Ruby-based companion for that book is POODR, as both books talk about modelling code with roles and responsibilities and concentrating on message passing and communication between objects.

Make sure to show us some of your attempts as you go!

@aaronmcadam I noticed you mentioned unit tests. As I go through the testing in the workshop with RSpec, I notice that we touch on integration tests and testing of the models, but I don't hear a reference to unit tests. Where do unit tests fall in terms of using RSpec? I understand unit testing is a way of doing TDD in its own right.

@Damien_Hogan In the context of Rails, a model spec is a unit test. It's not really anything to do with RSpec parlance; it's just that thoughtbot and others tend to see unit tests and model specs as the same thing. But when you extract a Service, for example, you should still write a unit test/spec for it.
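
For example, an extracted service can get a plain unit spec that never loads Rails. The service and its names here are hypothetical, just to show the shape:

# A tiny service object and its spec, shown together for brevity.
class CartCheckout
  def initialize(gateway:)
    @gateway = gateway
  end

  def call(total_cents:)
    @gateway.charge(total_cents)
  end
end

describe CartCheckout do
  it 'charges the gateway for the cart total' do
    gateway = double('gateway', charge: true)

    CartCheckout.new(gateway: gateway).call(total_cents: 1_000)

    expect(gateway).to have_received(:charge).with(1_000)
  end
end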

@Damien_Hogan Outside-in means that you drive your development from high-level tests and work your way down to lower-level tests. This is what the process looks like at thoughtbot:

Say we have a user story that says:

As a guest, I can add items to my shopping cart and see those items in my cart

Before thinking about objects, database tables, or controllers, we write a feature/integration spec (the two terms are roughly synonymous) to drive this behavior. This is a high-level test that describes the behavior from the user's point of view. We typically write these using RSpec and Capybara. There is no mocking/stubbing here. A test for the user story might look like this:

feature 'User views shopping cart' do
  scenario 'and sees items' do
    item = create(:item)

    visit root_path
    fill_in 'Search', with: item.name
    click_on item.name
    click_on 'Add to cart'
    click_on 'Shopping Cart'

    expect(page).to have_content item.name
  end
end

Depending on how much of the app is already built, this could break in multiple places. For example, we might get an error saying that there is no link with the text "Add to cart". We would then add a link to the appropriate view. The next error might prompt us to create a route. Eventually we might reach an error saying that there is no class ShoppingCart. At that point, we would write some unit tests for that object. These tests run in isolation; we mock/stub all calls to external objects.

describe ShoppingCart do
  it 'can have items added to it' do
    item = double
    cart = ShoppingCart.new

    cart.add(item)

    expect(cart.items).to eq [item]
  end
end
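
One possible implementation that makes this spec pass is tiny:

class ShoppingCart
  attr_reader :items

  def initialize
    @items = []
  end

  def add(item)
    @items << item
  end
end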

Once this test passes, we've probably got past the corresponding error in our high-level test as well. The next failure of our integration test may lead us to drop down a level and TDD a smaller component again with a unit test. This is the general pattern of outside-in development: start with a high-level test, drop down to lower-level unit tests where necessary, and go back to your high-level test. Repeat until the high-level test is green. At that point, we've successfully implemented our user story.

A big advantage of outside-in testing is that you can start developing a new feature without having to know the eventual architecture of the solution. Instead, all you need to know is the user experience you want, and you then let your tests guide you towards the solution.

Hi @joelq, my app is completely API-driven, and I've always stubbed the API requests in my feature cukes. Would you recommend not stubbing those external requests and taking the slowness hit, partly because integration tests shouldn't know what calls the system will make?

VCR is a good compromise in the situation you're describing, which will get you some independence from a network connection. If you're dependent on some external service, though, I recommend having some tests that aren't part of your standard test run (filtered out by default in spec_helper) that you can run on demand to confirm that your network-dependent functionality is indeed working.

EDIT: If you are talking about your own internal API, then you shouldn't be stubbing at all in integration tests. If the APIs in question are external services, I think it's realistic to assume that sometimes you'll use VCR or WebMock.
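
A minimal sketch of the on-demand filtering mentioned above, using RSpec metadata (the external tag and the environment variable are my own naming):

# spec_helper.rb
RSpec.configure do |config|
  # Skip network-dependent specs by default; run them with EXTERNAL=1 rspec
  config.filter_run_excluding external: true unless ENV['EXTERNAL']
end

# A spec that really talks to the service:
describe 'payment gateway', external: true do
  it 'answers a live request' do
    # ... real HTTP call against the external service here ...
  end
end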

Good point @aaronmcadam. While we don't stub out requests to other objects, the database, or the filesystem in an integration test, we do stub out requests to the internet. I've used VCR, WebMock, and fake APIs to do this on various projects.

@geoffharcourt Yeah, my app connects to our internal Rails API. I've used VCR extensively in the past, but found that the cassettes kept getting overwritten in weird ways. So now I just use WebMock directly and hand-make the JSON fixtures. It's painful, but at least I can add edge cases where needed. I may go back and try VCR sometime; maybe I'll get on a bit better with it.

I think VCR has an option to only write to a cassette if none exists. You might need to make the request matching more or less specific. Configuring it is kind of a drag.
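
For reference, a minimal VCR setup along those lines (the cassette directory and matching rules here are illustrative):

# spec/support/vcr.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock
  # :once records a cassette only when none exists and replays it afterwards;
  # match_requests_on controls how strictly requests are matched.
  c.default_cassette_options = {
    record: :once,
    match_requests_on: [:method, :path]
  }
end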

@geoffharcourt Yeah, that's part of the reason why I dropped it in the end: WebMock just wouldn't match the request, and it can be quite strict.

@geoffharcourt @joelq have you guys ever looked at something like mock5 (rwz/mock5 on GitHub), which creates and manages API mocks with Sinatra?

It's a Sinatra app that you run your tests against. I'm not sure if it's worth it or not. One other option that others on the team and I have considered is running the cukes against a local instance of the API. Does that sound plausible?

My concern in cases like that is that the complexity of running another app in addition to the first one is probably greater than the complexity involved in maintaining VCR and tweaking the rules for your cassette matching.

I might be a bit biased here because some of my most productive programming moments have been on planes and trains when I haven't had any Wi-Fi.

@geoffharcourt Yeah, that's one of the worries I've got about a test suite depending on another app.

I guess I'll have to try VCR again and see what happens.

Neat gem.

I use a Sinatra app for testing external APIs myself and it works great: the biggest advantage over mocking is that it can also handle the JavaScript requests in your integration tests. (This is super useful when you're dealing with getting tokens from the Stripe API, for example.) As long as you're launching the Sinatra app from your test suite and not separately, it's not a hassle to deal with.

This was the blog post (also from thoughtbot) that inspired my own approach:

Using Capybara to Test JavaScript that Makes HTTP Requests
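
A rough sketch of that approach, assuming a hypothetical FakeStripe Sinatra app (the route and response body are illustrative, not Stripe's actual API):

require 'sinatra/base'
require 'json'
require 'capybara'

class FakeStripe < Sinatra::Base
  post '/v1/tokens' do
    content_type :json
    { id: 'tok_fake' }.to_json
  end
end

# Booting the fake from the test suite lets JavaScript-driven requests in
# feature specs hit it on localhost.
server = Capybara::Server.new(FakeStripe).boot
# Point the code under test at "http://#{server.host}:#{server.port}"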

@gyardley Cool, thanks for your input!

As an aside, I use Mocha and Konacha/Teaspoon for my JS tests. I don't think it makes sense to be testing JS with Capybara.