I walked into a project (meaning I didn't write the following code) and I need to speed up the run method in this code: walkthrough.rb · GitHub
Obviously without seeing the entire project it would be difficult to optimize it fully, but I’m interested in an analysis of the #create_walk_thru_table method. There’s a serious Law of Demeter violation, but I’m not sure if there is a way to speed that up.
I assume this is a Rails project and that this is the "slowest" part, the one that makes the most sense to optimize. (An underlying question: is this actually the slowest part, and is it what you should be optimizing?)
There are a few tools and talks I've found helpful:
Upcase video on Rails performance by Joe Ferris using New Relic - I used this and was able to keep the average response time of a Rails-based API to ~200 ms or less
a) Lines like facility.units.without_deleted can be worth looking at - in my experience they may lead to large arrays of objects, which always ended up dragging down performance in my API. What helped keep things lean was avoiding creating too many objects at once (the bang method can help)…
b) The other thing I recalled doing was finding more performant ways of getting the results I wanted - for example, using methods like pluck, which avoids the overhead of instantiating ActiveRecord objects in memory and queries faster (as opposed to doing something like Model.all.map { |x| x.id } to get a list of ids).
c) I see a lot of possible relations (e.g., unit.events) - are you using db indexes? (check out lol_dba)
d) Finally, I learned that sometimes what I thought was the problem turned out not to be the problem at all - profiling tools like New Relic quickly pointed me in the right direction and helped me trim down my SQL queries.