
Writing tests for a Rails application with models across multiple databases

(Orlando Del Aguila) #1

Hello Guys,

I have a Rails application that has models in multiple databases. It's like an admin interface that doesn't interact with a public API; instead, it consumes the databases directly.

For example, I have an Agent model that lives in the user database, and an Account model that lives in the account database.

class Agent < ApplicationRecord
  include Auditable

  establish_connection ENV['AGENTS_DATABASE_URL']
  self.table_name = 'users'
end

class Account < ApplicationRecord
  include Auditable

  establish_connection ENV['SALDO_DATABASE_URL']
  self.table_name = 'accounts'

  belongs_to :agent, foreign_key: :user_id
end
What would be the best approach to testing these models? I don't have a way to replicate all these databases locally, since they are external to my application.
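One thing worth noting about the setup above: because the connection URLs come from ENV, a test helper can point them at disposable local databases without touching the model code. A minimal sketch, assuming local test databases exist (the URLs below are hypothetical placeholders, not from the thread):

```ruby
# Hypothetical test-helper setup: point the ENV-driven connections at local
# throwaway databases before the models are loaded. The `||=` keeps any URL
# already exported (e.g. by CI) intact.
ENV['AGENTS_DATABASE_URL'] ||= 'postgres://localhost:5432/agents_test'
ENV['SALDO_DATABASE_URL']  ||= 'postgres://localhost:5432/saldo_test'
```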


(Emmanuel Delgado) #2

It depends on the aspects that you want to test. In a strict sense, and based on that piece of code, a few options could be:

  • Test associations
    • Only if you are adding custom code, not the framework code.
  • Test connections
    • Maybe a database-coupled test that just ensures you used the proper framework syntax.
  • Ensure the code is correct
    • As long as the code executes, that may suggest it is correct.
  • Test the Auditable module
    • There should be separate unit tests for it.
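On that last point, a minimal sketch of unit-testing the module in isolation. Auditable's real behavior isn't shown in the thread, so the in-memory audit log below is an invented stand-in; the point is that including the module in a plain Ruby object keeps the unit test free of any database:

```ruby
# Hypothetical Auditable: assume it records audit events. The real module's
# behavior is not shown in the thread, so this is illustration only.
module Auditable
  def audit_log
    @audit_log ||= []
  end

  def record_audit(event)
    audit_log << event
  end
end

# A bare PORO stands in for the ActiveRecord model under test:
class FakeAuditedModel
  include Auditable
end

record = FakeAuditedModel.new
record.record_audit(:reversed)
record.audit_log # => [:reversed]
```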

Some tests may not add that much value, and some may even overlap. So I would give priority to:

  1. Ensure Auditable has unit tests.
  2. Write an integration test that ensures that Account and Agent do become Auditable.
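The second check can be as cheap as asserting the mixin is actually in each model's ancestry. A sketch, with empty stand-ins for Agent, Account, and Auditable so it runs without the app loaded (in the real suite the same assertion would run against the app's own constants, inside an RSpec or Minitest case):

```ruby
# Stand-ins for the app's real module and models, for illustration only:
module Auditable; end

class Agent
  include Auditable
end

class Account
  include Auditable
end

# The actual check: both models must mix in Auditable.
[Agent, Account].each do |model|
  raise "#{model} is not Auditable" unless model.ancestors.include?(Auditable)
end
```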

You don’t have to have a copy of the real databases. What you want here is a set of repeatable tests that can be executed without depending on the external world. How much coupling your test suite has with external entities is something you will have to decide.

Hope this helps.

(Orlando Del Aguila) #3


The problem is that some of my methods make changes to the external DBs, like reversing transactions and things like that (which are not in the public API).

I ended up testing everything using Docker, creating a “test environment” with the database setup process from the external repo. That way I can test that the actual data gets written correctly (there are some other things, like database triggers and functions, that need to be executed after some transactions are persisted).

I think that replicating or mocking this locally is not the right approach for this case. I would like to hear your comments.

(Emmanuel Delgado) #4

That makes sense.

I have been in cases where my team has written scripts to do cross-functional tests between systems. We started testing manually and later automated our steps.

Some pieces of logic are not tested in integration. We tested smaller pieces of business logic, with defined inputs and outputs, in isolation, with external dependencies stubbed or injected.
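A minimal sketch of that "stubbed or injected" approach, with names invented for illustration (nothing here comes from the thread's actual code): the reversal logic takes its database gateway as a dependency, so a fake that records calls can replace the external DB in unit tests.

```ruby
# Hypothetical service: the gateway is injected, so tests never touch the
# external database.
class ReversalService
  def initialize(gateway:)
    @gateway = gateway
  end

  def reverse(transaction_id)
    @gateway.mark_reversed(transaction_id)
  end
end

# A fake gateway that records calls instead of writing to the external DB:
class FakeGateway
  attr_reader :reversed

  def initialize
    @reversed = []
  end

  def mark_reversed(id)
    @reversed << id
  end
end

gateway = FakeGateway.new
ReversalService.new(gateway: gateway).reverse(42)
gateway.reversed # => [42]
```

Only the Docker-backed suite then needs to cover the triggers and functions that fire inside the database itself.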

Some parts are easier to test in isolation; writing integration tests for multiple systems is a little bit harder, but not impossible.

Hope this helps.