It’s a perfectly well-defined integer function, and yet once you fix the axioms of mathematics, only finitely many values of the function can ever be proved, even in principle.
Statements like this make me suspicious that one day we will redefine the axioms of mathematics so that such a function is no longer considered “perfectly well-defined”. Just feels like it will lead to paradox, like the set of all sets which do not contain themselves. I am not a professional mathematician though ;-)
You need not be suspicious, for mathematicians have long since done that. What you're reading as a "math" statement is actually a meta-math statement. It doesn't say "For this set of axioms, only finitely many values of the function can be proved." It says "For any given axiomatization of mathematics, only finitely many values can be proved." The number of provable values may differ for different axioms, but it will always be finite.
There isn't "a" mathematics; at this level there's lots of them, and you need to state which one you are using, which is why the title mentions ZF set theory by name. ZF is a very popular one, and is more-or-less the underpinnings of what most people learn in K-12 and even into college [1], but it is not the only one.
We can be confident about this because if some axiom set "claimed" to be able to prove all of them, we could produce a contradiction from that claim and thereby prove the axiom set inconsistent.
[1]: I wouldn't say this is 100% true, but more because "school" mathematics simply accepts some contradictions and/or fuzziness in the interests of keeping it simple. For instance, in the probably-about-three-day intro to set theory you got in high school, Russell's Paradox ("the set of all sets that don't contain themselves") is probably best thought of not so much as a paradox but as simply ill-defined; school set theory isn't strong enough to hold up such a statement long enough for it to contradict itself, so to speak. If you take a good calculus course in high school you can also run up against some limitations of the somewhat informal definition of "real numbers" that you tend to get, but then you end up just sort of backing away, because the next step up in rigor from there is generally college-level-math-specialist (math major, comp sci masters, etc.)... even if a given student could handle it, it's not something we can ask for from our math teachers at scale.
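To make Russell's Paradox a bit more concrete, here's a hedged sketch (my illustration, not anything from the article): model a "set" as its characteristic function, a predicate that says what it contains. Russell's set R then contains exactly those "sets" that do not contain themselves, and asking whether R contains itself requires R(R) = not R(R). Python can't resolve that self-reference, so the paradox surfaces as unbounded recursion rather than a contradiction:

```python
# Naive-comprehension sketch: a "set" is a predicate x -> bool.
# Russell's set R contains exactly those "sets" that don't contain themselves.
def R(x):
    return not x(x)

# Does R contain itself? Evaluating R(R) demands "not R(R)", which never
# bottoms out -- Python reports it as a RecursionError.
try:
    R(R)
except RecursionError:
    print("R(R) cannot be evaluated")
```

This mirrors the footnote's point: the naive framework isn't strong enough to hold the statement up, so the question never gets a truth value at all.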
Many mathematicians have similar attitudes. The general philosophy is known as Constructivism[0]. Broadly, the idea is that we should only consider mathematical objects that we can actually "get our hands on".
Well, with Gödel's Incompleteness Theorems, such behavior appears to be an unavoidable property of any mathematical or computational system that can do anything of note. We may or may not be able to build systems where such properties are further removed from the 'practical' aspects... although such work would necessarily build upon the work in the linked post, which is finding just how far removed it is in the current system.
Not quite "anything of note": it has to be able to do arithmetic. There are very simple systems that are useful but not powerful enough for Gödel's trick, and so we think those would be safe.
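One concrete example of such a "safe" system (my illustration, not the commenter's): propositional logic is complete and decidable, because every sentence's truth can be settled by finitely enumerating the assignments to its variables, so Gödel's construction gets no traction. A brute-force decision procedure fits in a few lines:

```python
from itertools import product

def is_valid(formula, num_vars):
    """True iff `formula` (a bool function of num_vars variables) holds
    under every truth assignment -- a finite, always-terminating check."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# Excluded middle "p or not p" is valid; "p and not p" is not.
assert is_valid(lambda p: p or not p, 1)
assert not is_valid(lambda p: p and not p, 1)
```

Systems strong enough to encode arithmetic admit no such exhaustive check, which is exactly the opening Gödel exploits.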