Once you get rid of your N+1s, the bottleneck in my experience (working with Rails since v1.2) has always been Rails / Ruby itself. It is incredibly slow, even using just Rails Metal (or Sinatra, for that matter). The slowdown at the view layer is significant.
I always have a caching strategy (usually Varnish in front of nginx) with Rails unless it's literally only supporting a handful of users. Any time I need to support non-cacheable hits, like writes, from more than 50 or so concurrent users, I consider swapping in Node or Go or something reasonably performant to handle just those writes.
Lately I've been looking into Elixir as a Rails alternative for APIs, for performance and scalability. I am very intrigued by a PostgreSQL-based REST API.
The point is that when Rails gets too slow, it is very easy to switch the hot path to something like caching, or to Go or C or whatever. You can split it off at nginx, or use a worker pool in a faster language; if you need lots of concurrency, use Go. You can even replace the Ruby code with one fairly nasty SQL statement or a single stored procedure.
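As a sketch of that last option: the kind of per-record Ruby loop that shows up in a slow report can often collapse into one aggregate query. The schema here (customers, orders) is hypothetical, not from the original post.

```sql
-- Hypothetical replacement for a Ruby loop that walks each customer
-- and sums their orders: one statement, one round trip.
SELECT c.id,
       c.name,
       count(o.id)               AS order_count,
       coalesce(sum(o.total), 0) AS lifetime_value
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name
ORDER BY lifetime_value DESC;
```

The database does the aggregation where the data lives, instead of shipping every row to Ruby to be added up.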
The other 95% of your code can be slow Rails. You know those pages: where a user adds another email address, or reports a comment as hateful, or selects what language they want the app to be in; your teams page, your business partners page, your API docs and key registration / invalidation.
The database doesn't scale without pain, though. If you have joins, you're going to need to get rid of them. If you have one table on a machine, you're going to need to split it. If you have speedy, reliable writes, you're going to have to either make do with inconsistency (and possibly a whole strategy to clean up the data after the fact) or lose the speed.
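"Getting rid of joins" usually means denormalizing. A minimal sketch, with hypothetical comments/users tables:

```sql
-- Copy the author's name onto each comment so the hot comment-listing
-- query becomes a single-table read with no join to users.
ALTER TABLE comments ADD COLUMN author_name text;

UPDATE comments c
SET author_name = u.name
FROM users u
WHERE u.id = c.user_id;
```

The pain the comment describes is the flip side: author_name is now duplicated data that must be kept in sync (via a trigger or application code) whenever a user renames themselves.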
I'm intrigued by shuffling the JSON serialization off to Postgres, but that is different from what the OP was talking about.
By the same logic, though, at the point that heavy write load becomes a reality, it's just as feasible to move the heavy-write table to an isolated datastore and leave 95% of your data (structurally) in PG. You can even use a PG foreign data wrapper to connect to that new datastore, so PG can continue to run any necessary queries against it.
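A sketch of the foreign data wrapper approach, assuming the heavy-write table is an "events" table moved to a second Postgres instance. All names, hosts, and credentials here are hypothetical.

```sql
-- Attach the remote database that now owns the heavy-write table.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER events_store
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'events-db.internal', dbname 'events');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER events_store
  OPTIONS (user 'app', password 'secret');

-- Expose the remote table locally; it can now appear in local queries
-- and joins as if it lived in this database.
CREATE FOREIGN TABLE events (
  id         bigint,
  user_id    bigint,
  payload    jsonb,
  created_at timestamptz
) SERVER events_store OPTIONS (table_name 'events');
```

After this, a local query like `SELECT u.name, count(*) FROM users u JOIN events e ON e.user_id = u.id GROUP BY u.name` keeps working, even though the writes land elsewhere.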
I'm not ever going to argue for heavy stored procedure usage, but there are definitely times when it makes sense. There are more times still when it makes sense to use the features in your database, instead of setting up multiple standalone systems for pubsub, search, JSON data, etc., when your database can do it all.
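Two of those "database can do it all" features, sketched with hypothetical names:

```sql
-- Pub/sub without a separate broker: any connected client that has
-- previously run LISTEN new_order receives this notification.
NOTIFY new_order, '{"order_id": 42}';

-- Full-text search without a separate search engine. A GIN index makes
-- this fast:
--   CREATE INDEX articles_fts
--     ON articles USING gin (to_tsvector('english', body));
SELECT id, title
FROM articles
WHERE to_tsvector('english', body) @@ plainto_tsquery('english', 'caching strategy');
```

Neither replaces a dedicated broker or search cluster at large scale, but both cover a lot of ground before you need one.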
It's very similar to the "you can always move the slow parts to Go" point with Rails. You can do it all in PostgreSQL, and then, when you actually reach the point where you've grown something into a bottleneck, move it out.
Postgres isn't SQL Server, it isn't Oracle, and it isn't MySQL. It's Postgres. It's a tool that you choose because it fits your needs, not because somebody told you it was a good database. You choose it as part of your stack. If you are using PostgreSQL because you wanted a dumb datastore, then you chose the wrong database and should probably reevaluate your options. That's like getting a Lamborghini to make grocery runs.
I am a PostgreSQL novice, but I've used the JSON serialization and it is indeed fast. Here's my question, though:
When you do a 1-to-many join and return the same parent fields many times over, do the binary drivers optimize that, or are those values actually sent many times? With JSON serialization (or serializing to arrays), you only get the one row.
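I can't speak to what the binary drivers do internally, but here is the shape difference the question is about, with a hypothetical users/posts schema:

```sql
-- Plain 1-to-many join: the author columns (name, bio) are repeated
-- in every result row, once per matching post.
SELECT u.id, u.name, u.bio, p.title
FROM users u
JOIN posts p ON p.user_id = u.id;

-- JSON aggregation: one row per author, with the posts folded into
-- a JSON array, so the author columns appear exactly once.
SELECT u.id, u.name, u.bio,
       json_agg(json_build_object('title', p.title)) AS posts
FROM users u
JOIN posts p ON p.user_id = u.id
GROUP BY u.id, u.name, u.bio;
```

In the second form the deduplication happens in the query itself, regardless of what the driver does on the wire.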