In Postgres land, I think most businesses work around the lack of temporal tables with audit tables, using triggers to dump jsonb or hstore snapshots. I wrote up how I used table inheritance here [1].
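For reference, the trigger approach usually looks something like this minimal sketch (the `accounts` table and all names here are hypothetical, not from the write-up):

```sql
-- Audit log table: one row per change, full row image as jsonb.
CREATE TABLE accounts_audit (
    audit_id   bigserial   PRIMARY KEY,
    operation  text        NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    row_data   jsonb       NOT NULL
);

CREATE OR REPLACE FUNCTION accounts_audit_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO accounts_audit (operation, row_data)
        VALUES (TG_OP, to_jsonb(OLD));
    ELSE
        INSERT INTO accounts_audit (operation, row_data)
        VALUES (TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;  -- AFTER trigger, so the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE FUNCTION accounts_audit_fn();
```

The jsonb dump means the audit table survives schema changes on the source table without migrations, at the cost of losing typed columns on the history side.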
I agree with your point. Postgres is starting to stick out compared to alternatives:
- MS SQL supports uni-temporal tables using system time.
- Snowflake has time travel which acts like temporal tables but with a limited retention window. Seems more like a restore mechanism.
- MariaDB has system-versioned tables (doesn't look like it's in MySQL).
- CockroachDB has uni-temporal support with system time, but limited to the garbage collection period. The docs indicate you don't want a long garbage collection period since all versions are stored in a single range.
- Oracle seems to have the best temporal support with their flashback tech. But it's hard to read between the lines to figure out what it actually does.
I've never seen a business actually use them, large or small. Any auditing requirements are usually fed from other sources, like Kafka event streams, files on S3, or an OLAP data warehouse.
How do you set up and feed the warehouse? Temporal-ish tables have been an obvious, simple, and mostly foolproof solution for many of our historical analytics and reporting needs.
Bitemporal stuff (enabling edited versions of history) is where things get hairy and I definitely question the utility outside of a dedicated use case.
Although temporal tables are a really good idea, it is possible to get away without them being a first-class feature. They aren't hard to mimic if you can give up the guarantee of catching every detail. In an ideal world (ha ha, silly thought) the tables would be designed to be append-only anyway, or the amount of data would be insignificant, either of which makes temporal tables somewhat moot.
They are really easy to mimic in PostgreSQL with range types (tstzrange) and an exclusion constraint, so no overlapping periods are allowed. I guess they won't add it to core if a developer can add support so easily.
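A minimal sketch of that pattern, assuming a hypothetical `prices` table where each row version carries its own validity period:

```sql
-- btree_gist lets us mix plain equality (=) with range overlap (&&)
-- in a single GiST exclusion constraint.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE prices (
    product_id   int       NOT NULL,
    price        numeric   NOT NULL,
    valid_period tstzrange NOT NULL DEFAULT tstzrange(now(), NULL),
    -- Reject two versions of the same product with overlapping periods.
    EXCLUDE USING gist (product_id WITH =, valid_period WITH &&)
);

-- "Updating" means closing the current version and inserting a new one:
BEGIN;
UPDATE prices
   SET valid_period = tstzrange(lower(valid_period), now())
 WHERE product_id = 1 AND upper_inf(valid_period);
INSERT INTO prices (product_id, price) VALUES (1, 9.99);
COMMIT;

-- Point-in-time query: which row was valid at a given moment?
SELECT * FROM prices
 WHERE product_id = 1
   AND valid_period @> timestamptz '2024-01-01';
```

The exclusion constraint is what gives you the temporal-table guarantee: any insert or update that would create two overlapping versions of the same key fails at commit time rather than silently corrupting history.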
I've very rarely found that using a full temporal table is the right choice for online analysis; a dedicated schema serves you better in the long run and helps you design your indexes, etc. appropriately. For compliance, point-in-time backups via WAL shipping should suffice, no?