
I never understood this argument. An ISO 8601 date string is as much a convention as a Unix timestamp is; without context, neither is decipherable. If anything, a Unix timestamp is easier to explain to an alien than a date string: it has a starting point (1970-01-01 00:00 UTC) and it counts seconds from then. Care to explain how an ISO 8601 date string is constructed?
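That epoch-counting rule is short enough to sketch in a few lines of Python (the specific timestamp value here is just an illustration):

```python
from datetime import datetime, timezone

# A Unix timestamp is just a count of seconds from the epoch,
# 1970-01-01 00:00:00 UTC (leap seconds ignored).
ts = 1_700_000_000

# Rendering it as an ISO 8601 string requires a calendar, which is
# where all the extra convention lives.
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2023-11-14T22:13:20+00:00
```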

Also, calculating the amount of time between two ISO 8601 strings without libraries is not trivial, nor is any other operation, actually. When dealing with dates, there is only one simple way to work with non-localized times, and it's not ISO-something.
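The contrast is easy to show: with raw timestamps, duration is plain subtraction, while ISO strings need parsing first (stdlib parsing below; a hand-rolled parser would have to handle offsets, fractional seconds, and calendar arithmetic itself). The values are illustrative:

```python
from datetime import datetime

# Duration between two Unix timestamps: plain subtraction.
t1, t2 = 1_700_000_000, 1_700_086_400
print(t2 - t1)  # 86400 seconds

# The same comparison on ISO 8601 strings needs a parser first.
a = datetime.fromisoformat("2023-11-14T22:13:20+00:00")
b = datetime.fromisoformat("2023-11-15T22:13:20+00:00")
print((b - a).total_seconds())  # 86400.0
```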

EDIT:

> ...and what epoch and timezone to use.

This is also not correct. A Unix timestamp has a well-defined epoch and doesn't deal with timezones (the epoch itself is, of course, defined as 1970-01-01 00:00:00 UTC). You are free to define other timestamps, but the Unix timestamp is well defined.



This is the biggest general problem: you can't accurately compare the duration between two date times without having proper context of how the datetimes were stored or generated, especially for future dates. For example, you can't say today what the Unix timestamp will be for 1 January 2030, 10:00:00 AM in Bucharest, Romania. Romania may very possibly change timezone by then. So, if a user sets a meeting for that time, converting to the Unix timestamp according to the current timezone info is wrong. You'll end up alerting them on 1 January 2030 at 09:00:00 AM, or perhaps 08:59:58 AM if some leap seconds get added. Or it may even be entirely different if there is some calendar change for whatever reason by then.
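The Bucharest example can be made concrete with Python's zoneinfo (assuming tzdata is available; the meeting details are the ones from the comment). Snapshotting to a timestamp bakes in today's rules:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical future meeting: 2030-01-01 10:00 in Bucharest.
# Converting it to a Unix timestamp *now* bakes in today's tzdata
# rules for Europe/Bucharest (EET, UTC+2 in winter).
meeting_local = datetime(2030, 1, 1, 10, 0, tzinfo=ZoneInfo("Europe/Bucharest"))
snapshot_ts = meeting_local.timestamp()

# If Romania changes its offset before 2030, this stored number is
# wrong. The robust storage is the wall-clock time plus the zone
# name, resolved to an instant only when the alert actually fires.
print(snapshot_ts)
```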

Note that UTC has a similar problem, which is why you actually need a local time string for this type of event.


Correct, neither ISO strings nor numbers can get you out of this dilemma.

While ISO 8601 uses UTC offset and time zone interchangeably, it is actually only defined over offsets, not locations.

I've had this problem discussed before. Typically the situation is firing events based on times at specific locations. It is neither straightforward with date libraries nor is it easy with numeric computation. Sometimes it makes sense to render it down to a specific point in time at storage and not expect the country to change timezone overnight. Sometimes the location itself may be changing randomly (moving objects), but in that case it typically just meant recalculating offsets on movement events.


Seconds since epoch requires careful and precise handling of datatypes and assumptions. This works great inside a system where that can be managed but for the purpose of exchange between systems the ISO formats contain all relevant information. Either side can use a library to parse the string into their local representation.

Comprehension by extraterrestrial life is a localization problem.


I'm not sure I understand. How is this different from any other quantity with an associated unit? What's the possible failure mode here, interpreting the "created_time_us" field as a distance in km instead of a timedelta in microseconds?


Is the unit seconds or microseconds? How are leap seconds handled? What base type is used? Integer, number, or float? How many bits? What is the precision? What is the CPU architecture? Is it signed? How should overflows be handled?

You can store weight as ohms of resistance on a load cell. But if you want to share that data you need to either normalize the number to a mutually understood unit or provide detailed information about your scale.
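The unit and base-type questions above are easy to demonstrate; these field values are hypothetical, and nothing in a bare number tells a consumer which convention the producer used:

```python
# The same instant, serialized three plausible ways:
t_s = 1_700_000_000             # integer seconds since the epoch
t_ms = 1_700_000_000_000        # integer milliseconds since the epoch
t_f = 1.7e9                     # float seconds (precision erodes for sub-second work)

# Guessing the wrong unit puts a consumer off by a factor of 1000:
assert t_ms // 1000 == t_s
```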


> How are leap seconds handled?

Unix time handles leap seconds uniformly (it ignores them), regardless of the precision.


Yes, that is true of Unix time. But is it true of every implementation? Are you sure? It might look like seconds since the Unix epoch, but is it?


Most stdlibs provide ways to work with Unix time; I don't think there is much room for significant issues here.


Aside from the obvious issue with engineering, physics, etc.

A unix second is allowed to be different from a "real" elapsed second.

The unix day has 86400 "unix seconds" which are actually 86401 real seconds (for days with added leap seconds).
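A stdlib sketch makes the mismatch visible: the day containing the 2016-12-31 leap second still spans exactly 86400 Unix seconds.

```python
from datetime import datetime, timezone

# 2016-12-31 ended with a leap second (23:59:60 UTC), so it was
# 86401 real SI seconds long. Unix time pretends otherwise:
start = datetime(2016, 12, 31, tzinfo=timezone.utc).timestamp()
end = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
print(end - start)  # 86400.0 "unix seconds", not 86401
```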

Any application logging real world events against unix time will screw up velocities, force, energy, etc when computing "instrument reading" per "unix second".

It might not matter to most people but it's an issue for surveyors, geophysicists, astrophysicists, engineers, etc.


> Any application logging real world events against unix time will screw up velocities, force, energy, etc when computing "instrument reading" per "unix second".

That is a fair point for situations that depend on short-term observations. But UTC has the same issues there as the Unix epoch. I think it is a valid edge case, but if you are doing, say, GPS-based speed calc, I would wager you are already pulling your time reference from a low-level source that you can depend on to run steadily for the X minutes/hours you need it.


Sure, this is what responsible STEM people do for any time-lapsed measurements, be those position, nine-axis magnetic field recordings, gravity, radiometrics, microwave return, etc - they use an independent epoch-based clock source that counts true lapsed time rather than conventional human time.

Point being, it's an area of UnixTime that many overlook. Since the 1980s I've had long-term multichannel data acquisition running throughout every leap second adjustment, which would have had data glitches had the time channel been UnixTime, UTC, etc.


There's a lot more context baked into an ISO string than a unix timestamp though. Write a regex to find all the strict ISO8601-RFC3339 strings in a directory. Now write one for unix timestamps.
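A rough sketch of such a matcher (this pattern is an assumption, not a full RFC 3339 validator: it won't reject impossible dates like month 13):

```python
import re

# Loose RFC 3339 shape: date, "T" (or "t"), time, optional fraction,
# then "Z" or a numeric offset.
RFC3339 = re.compile(
    r"\d{4}-\d{2}-\d{2}[Tt]\d{2}:\d{2}:\d{2}"
    r"(?:\.\d+)?(?:[Zz]|[+-]\d{2}:\d{2})"
)

text = "deployed at 2023-11-14T22:13:20Z, epoch 1700000000"
print(RFC3339.findall(text))  # ['2023-11-14T22:13:20Z']
# The bare "1700000000" is indistinguishable from any other integer.
```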

> Also calculating amount of time between two ISO8601 strings without libraries is not trivial, or any other operation actually.

That's true of any datetime that actual humans use (as opposed to computers, financial systems, sysadmins and programmers).


> Write a regex to find all the strict ISO8601-RFC3339 strings in a directory

That is a weird edge case. I've had this discussion before, and the person proposing it hinged on some forensic practice based on analyzing human-readable data. As I said in my original post, pretty-printing ints to ISO timestamps is actually my original opinion.



