That's a crude approximation of what Mathematica can do. Not all (in fact, very few) irrational numbers are computable. Yet even among the computable numbers, what our computers can represent in any language is only a small subset. (Although the computable numbers are equinumerous with the naturals.)
> Yet among the computable numbers, what our computers can
> represent in any language is but a small subset of them.
This is only true because of physical limitations of the machine (say, it has only finite memory). In the same way, not all Turing machines can be implemented on actual computers. This is not a restriction of the languages we use: there is nothing stopping an actual programming language from representing all computable real numbers.
> (Although computable numbers are equinumerous with the naturals.)
This is only true in classical mathematics. In constructive mathematics we are free to assume that all real numbers are computable (and hence, per Cantor's argument, not equinumerous with the naturals).
There are many possible representations of real numbers which could in theory be used in programming languages.
For instance, you could represent them as functions which, given a natural number, produce a rational approximation – say f(n) should be a rational number closer than 1/2ⁿ to the number you represent (Cauchy sequences). Then the addition of two numbers f and g would be[0] the function (f+g)(n) ≔ f(n) + g(n), where the + on the right-hand side denotes rational addition.
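As a minimal sketch of the Cauchy-sequence idea in Python, using exact rationals from the standard library: the names here (`const`, `add`, `sqrt2`) are my own, and `add` already uses the n+1 shift from footnote [0] so that the error bound works out.

```python
from fractions import Fraction

# A real number is represented as a function f where f(n) is a rational
# within 1/2**n of the real it denotes (a fast-converging Cauchy sequence).

def const(q):
    """The real equal to the rational q: every approximation is exact."""
    return lambda n: Fraction(q)

def add(f, g):
    """(f+g)(n) = f(n+1) + g(n+1): each term is within 1/2**(n+1),
    so the sum is within 1/2**n, as required."""
    return lambda n: f(n + 1) + g(n + 1)

def sqrt2(n):
    """A rational within 1/2**n of sqrt(2), found by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo  # the interval [lo, hi] still contains sqrt(2)

x = add(sqrt2, const(1))  # the real number 1 + sqrt(2)
print(x(10))              # a rational within 1/2**10 of 2.41421...
```

Note that "computing" the real number x does nothing until you ask for an approximation; all the work happens inside the closure when x(n) is called.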
Or you could have a function which, given a rational number as input, produces a boolean telling you whether the real number is less than the input (Dedekind cuts).
No mainstream language has real numbers as a primitive[2]. In fact, not many languages have even arbitrary integers as a primitive type. But many mainstream languages have functions as first class objects, and therefore there is nothing stopping you from using them to represent real numbers.
Not all of these representations are created equal. For instance, if you wanted to represent a real number as a function which gives you the n-th digit of its decimal expansion, you could not implement addition. This is because you would have to look arbitrarily far into the digits of each number to decide even the first digit of the sum.[1]
Someone is bound to say "Oh, but those are just the computable numbers, and there are only countably many of those". This is true if you accept classical logic. In constructive mathematics (which is most relevant to computer science), however, one can only prove that there is a surjection from the natural numbers (but no constructive bijection). In fact, it is consistent to assume that all real numbers are computable.
[0]: Most likely, you want (f+g)(n) ≔ f(n+1) + g(n+1) to get a close enough approximation, but this is a technicality.
[1]: Imagine 0.00000⋯ + 0.99999⋯. At any point further along, a digit of the first number could turn out to be non-zero, in which case the first digit of the sum would be 1. Or a digit of the second number could be smaller than 9, in which case the sum would have to start with 0. There is no way to tell which without looking at infinitely many digits.
[2]: I know of at least one, non-mainstream, language which had built-in support for real numbers. It is called RealPCF – and I am not sure whether it was ever implemented on a computer, or whether it was just a theoretical construct.
There is by definition no irrational number in any language implementation that runs on a finite machine unless it is either a stream (which is still finite when realized) or an approximation.