Unicorn 502 Bad Gateway?

I’ve been using Chef and Capistrano to deploy an Nginx + Rails stack to a DigitalOcean droplet for a while now, with Nginx acting as a reverse proxy in front of Unicorn upstreams. Recently I upgraded one of my applications to the latest versions of Ruby (2.1.2) and Rails (4.1.2) in a sandbox staging environment that is virtually identical to production in every way other than the Ruby and Rails versions.

With my application deployed and running under Upstart, I get a 502 Bad Gateway error when I try to navigate to the application in the browser. Similarly, a curl request against localhost:8080 returns curl: (52) Empty reply from server.

If I tail Unicorn’s log files, however, it looks like the requests are being picked up, but the workers are timing out:

Started GET "/" for 127.0.0.1 at 2014-06-12 14:38:17 -0400
Processing by ApplicationController#index as */*
Rendered news/latest.html.haml (55.2ms)
Rendered upload_stores/featured.html.haml (141.9ms)
Rendered application/index.html.haml within layouts/application (211.0ms)
E, [2014-06-12T14:38:48.300405 #30669] ERROR -- : worker=1 PID:329 timeout (31s > 30s), killing
E, [2014-06-12T14:38:48.328049 #30669] ERROR -- : reaped #<Process::Status: pid 329 SIGKILL (signal 9)> worker=1
I, [2014-06-12T14:38:48.340087 #971] INFO -- : worker=1 ready

So I tried doubling the timeout in my Unicorn configuration from 30 seconds to 1 minute, but met with similar results. If I load up my Rails production console and fire off some queries, they come back quickly enough that I don’t think the holdup is on my database server.
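For reference, the relevant part of my Unicorn config now looks roughly like this. The paths and worker count below are illustrative rather than my exact values, and I’m assuming Unicorn is what’s bound to port 8080, which is the port the curl test above hits:

# config/unicorn.rb -- sketch; paths and worker count are illustrative
working_directory "/var/www/myapp/current"   # placeholder deploy path
worker_processes 2
listen 8080                                  # same port the curl test above hits
timeout 60                                   # doubled from my original 30 seconds
preload_app true
stderr_path "log/unicorn.stderr.log"
stdout_path "log/unicorn.stdout.log"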

Can anybody with experience using Unicorn suggest how I can go about debugging these timeouts further? Note also that I get them on every page, not just the root page.

I noticed the server grind to a halt, with CPU usage spiking, whenever it has to serve JS files in particular. CSS files are OK, but any page that loads JavaScript simply times out.

To be more specific, it seemed to chug only on JavaScript files loaded from gems (e.g., jquery-rails). I have a hunch something is awry with my production configuration. Still, for CPU usage to spike from under 5% to 100% just from a few JavaScript files… it’s got to be something to do with the new Ruby. I’ll have to investigate further.
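For context, these are the asset-related settings in config/environments/production.rb that I plan to double-check first. The values below are the stock Rails 4.1 defaults, not necessarily what’s actually on my box:

# config/environments/production.rb -- asset settings to double-check (stock Rails 4.1 defaults shown)
config.serve_static_assets = false        # static files should be served by Nginx, not Rails
config.assets.js_compressor = :uglifier   # JS should be minified once, at precompile time
config.assets.compile = false             # don't compile missing assets on the fly in production
config.assets.digest = true               # use fingerprinted asset filenames

If config.assets.compile were accidentally true, Rails would fall back to compiling assets (including gem-provided JavaScript) on the fly in production, which is the kind of thing that could peg the CPU like this, but I haven’t confirmed that’s what’s happening here.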

As a short-term fix, though, moving all my assets over to CloudFront seems to have resolved the CPU spikes.
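In Rails terms, that mostly came down to pointing the asset host at the CloudFront distribution, roughly like this (the domain below is a placeholder, not my real distribution):

# config/environments/production.rb -- serve assets from CloudFront (placeholder domain)
config.action_controller.asset_host = "https://d1234abcd.cloudfront.net"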