It isn't that difficult to optimize relational SQL databases. I've dealt with databases with billions of rows across hundreds of tables, where the company's initial instinct was to flatten the schema because of performance complaints.
We got queries down from 30+ seconds to 5 ms simply by adding proper indexes, defragmenting, analyzing query plans, moving logic into stored procedures, etc.
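To make the "analyze the plan, add the index" loop concrete, here's a minimal sketch using Python's built-in sqlite3 and SQLite's `EXPLAIN QUERY PLAN` as a stand-in for `EXPLAIN` in Postgres or SQL Server. The table and column names are made up for illustration; the point is watching the planner switch from a full scan to an index lookup.

```python
import sqlite3

# Hypothetical schema: an orders table queried by customer_id.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(10_000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # join the detail text for a readable summary of the strategy.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"

before = plan(query)   # typically reports a full table SCAN
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # now a SEARCH using idx_orders_customer

print(before)
print(after)
```

The same loop applies in Postgres with `EXPLAIN (ANALYZE, BUFFERS)`, where you also get actual timings and buffer hits rather than just the chosen strategy.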
I see a lot of complaints from developers claiming they have to join a "million row table to a million row table" and that reports are slow, and this gets blamed on the DB. These joins should not be slow; when they are, the culprit is often the sheer amount of data being pushed over the wire, which the developers simply didn't see.
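A small sketch of that "slow report" pattern, again with sqlite3 and a hypothetical schema: the join itself is cheap when the join key is indexed, but pulling every joined row to the client and aggregating there transfers orders of magnitude more data than letting the database do the `GROUP BY`.

```python
import sqlite3

# Made-up schema: customers joined to their orders for a revenue report.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    CREATE INDEX idx_orders_customer ON orders (customer_id);
""")
conn.executemany(
    "INSERT INTO customers (id, region) VALUES (?, ?)",
    [(i, "EU" if i % 2 else "US") for i in range(1, 1001)],
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000 + 1, 10.0) for i in range(50_000)],
)

# Anti-pattern: ship every joined row to the client, sum in app code.
rows = conn.execute("""
    SELECT c.region, o.total
    FROM orders o JOIN customers c ON c.id = o.customer_id
""").fetchall()            # 50,000 rows cross the wire

# Better: aggregate server-side; one row per region comes back.
report = conn.execute("""
    SELECT c.region, SUM(o.total)
    FROM orders o JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
""").fetchall()            # 2 rows cross the wire
```

Same result, 25,000x fewer rows transferred; over a real network that difference, not the join, is usually what makes the report feel slow.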
I'm a huge supporter of PGhero (https://github.com/ankane/pghero) for this reason. Sometimes a single index is all that's required, but if DB design isn't your forte it's hard to know where to put it.
When you start seeing devs make the same mistakes over and over again with relational databases in certain contexts, mistakes that increase the cost of operational maintenance and scaling (e.g. by requiring them to sit around analyzing query plans, lock contention, sharding, etc.), then it makes sense to consider building a DB with constraints that prevent those mistakes and ensure every DB instance can meet those requirements at a lower overall cost. That is how I view NoSQL, graph databases, etc.
It is similar IMO to the diversity in programming languages. They are all Turing complete, at the end of the day. The difference is in the patterns they encourage and the constraints they impose.
I've always felt a more modular SQL database system is needed. SQL DBs are too black-box: you can't interact with the query planner. NoSQL is essentially doing just this. SQL DBs have great underlying layers such as storage, caching, transactions, indexing, concurrency, GIS support, backups, etc.
If anything, devs probably just need to be more familiar with DB internals and have tools set up to analyze queries more easily.