My company has four systems that can hold data related to each other, though that is not a requirement; each system can also operate independently.
I am refactoring a view in one of the systems (a Rails project) that grabs data from all of the systems and pulls it together for a report. In the user workflow, the user selects a group of entities; I have seen selections of up to 300 at a time (and in theory there can be more). Some of these entities will have data in the other systems and some will not.
The original code made one API call per entity per system, which adds up to a lot of calls. In my refactoring I moved to calling each API once and retrieving all of that system's data in bulk, which in practice is faster by leaps and bounds (130 seconds total down to 18 seconds), so this feels like the right direction. However, the mashup of the data is messy: there are no shared keys between the systems (such as UUIDs), and each API returns a different key to match against.
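For context, here is a minimal sketch of the bulk-fetch-then-merge shape I ended up with (plain Ruby; the system names, field names, and the assumption that each system's key can be derived from the entity are all invented for illustration). Each bulk response gets indexed into a hash by that system's own key, so matching is a constant-time lookup per entity instead of a scan of the whole payload:

```ruby
# Pretend bulk responses from two of the systems (normally one API call apiece).
# "crm_ref" and "invoice_no" are hypothetical per-system keys.
crm_rows     = [{ "crm_ref"    => "A1", "owner"  => "sales" }]
billing_rows = [{ "invoice_no" => "A1", "amount" => 42 }]

# Index each payload by the key that system uses, so per-entity lookup is O(1).
crm_by_key     = crm_rows.to_h     { |row| [row["crm_ref"], row] }
billing_by_key = billing_rows.to_h { |row| [row["invoice_no"], row] }

entities = [{ id: "A1" }, { id: "B2" }] # B2 has no data in the other systems

report = entities.map do |entity|
  key = entity[:id] # assumes each system's key is derivable from the entity
  {
    entity:  entity,
    crm:     crm_by_key[key],     # nil when that system has no matching data
    billing: billing_by_key[key],
  }
end
```

The messy part is exactly that `key = entity[:id]` line: in reality each system needs its own key-extraction rule, which is where the mashup logic piles up.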
Given that the user requirements can't change and the external systems won't change, does anyone have strategies that would be useful in a scenario like this?