Hacker News

"APL, and its successor J [...] provide a notational interface to an interesting model of computation: loop-free, recursion-free array processing." How is APL loop-free, exactly?

Later they say: "Under this implicit lifting, the iteration space is the argument frame rather than a sequence of loop indices." So if I understand correctly, we have iteration, but no loop. But that doesn't seem like a really important distinction...what am I missing?



The key feature is abstract iteration, like functional maps, filters and folds, and implicit iteration, where operations "penetrate" to the items of a vector or matrix automatically, rather than explicit iteration like a "for" loop.

Abstract iteration is useful because it results in programs with fewer "moving parts": no loop induction variables to misplace or mutate in the middle of a loop body. The tradeoff is that programs must be expressed in a more rigid manner, and some irregular algorithms can be difficult to express.

Summing a vector with an explicit loop in K (very non-idiomatic!):

    r:0;i:0; do[#v;r+:v@i;i+:1]; r
The equivalent using the "over" adverb:

    +/v
Both examples perform the same calculation. The latter is more concise and easier to reason about.
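For readers who don't know K, here is a rough Python translation of the two snippets above: the explicit loop with its index bookkeeping, and the fold, which names no index at all.

```python
from functools import reduce
import operator

v = [3, 1, 4, 1, 5]

# Explicit loop, analogue of: r:0;i:0; do[#v;r+:v@i;i+:1]; r
r = 0
i = 0
while i < len(v):
    r += v[i]
    i += 1

# Abstract iteration, analogue of +/v : a fold with no index variable.
total = reduce(operator.add, v, 0)

assert r == total == 14
```

The second form has nothing to get wrong: no initialization, no bound, no increment.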


Stuff like +/v is easier to reason about, but it's also handled by special code which runs a lot faster. At some point I need to write a blog post on all the horrible things that happen inside your computer (cache misses, memory being swapped in and out, stacks popping) when you run an interpreted, explicit loop, whether it's bytecode, AST-walking or whatever. There are R&D interpreters which claim to remove this overhead for trivial for loops, which are the main kind that end up getting used in numeric code, but none of them ever seem to make it into production (I'm sure someone will correct me if I'm wrong; I'm pretty sure Art wasn't doing this in K4, though he was probably best positioned to do so).

The real reason we like +/v, besides less typing, is that it can be handled with special code which runs close to the theoretical machine speed. This fact can be exploited in lots of small places and languages: R and Matlab basically have a subset of operations you can do +/v-type things with, while APL is the main family of languages where this is built into the semantics. If you're dealing with numerics in an interpreted language, it should be built into the semantics of your language, and that's how you should do things. Really it should be in compiled languages too, and that's how people should reason about code, but that's probably asking too much since APL has only been around since the 70s...
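As a rough illustration (in Python rather than K, using the built-in sum as a stand-in for a vectorized +/ primitive), the gap between per-element interpretation and a single whole-array primitive looks like this:

```python
import timeit

v = list(range(100_000))

def loop_sum(xs):
    # Interpreted explicit loop: bytecode dispatch, index bookkeeping,
    # and bounds checks repeated for every single element.
    r = 0
    i = 0
    while i < len(xs):
        r += xs[i]
        i += 1
    return r

# sum() plays the role of +/ here: one dispatch, then a tight C loop.
t_loop = timeit.timeit(lambda: loop_sum(v), number=20)
t_prim = timeit.timeit(lambda: sum(v), number=20)

assert loop_sum(v) == sum(v)
# t_prim is typically an order of magnitude or more below t_loop.
```

The exact ratio depends on the interpreter, but the shape of the result is the point: the per-element overhead dominates the explicit loop.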


> handled by special code which runs a lot faster

This is just another way of saying "we don't have a compiler, so don't process data at the element level if you want speed".

It is not an advantage of the language, but a disadvantage.

If you have a compiler, that claim only remains valid for aggressive, machine-specific optimizations. This is articulated by statements like "the library function is marginally faster than an open-coded loop, because it uses some inline assembly that takes advantage of vectorized instructions on CPU X".

If I write an FFT routine in C myself, it will not lose that badly to some BLAS/LAPACK routine.


Forcing your compiler to figure out you're doing something trivial on a rank-n array is silly. So is writing all the overhead and logic (where a typo can break things) which goes into a for or while loop instead of two characters: +/

I encourage you to try writing an FFT routine in C yourself and compare it to FFTW, where they basically wrote a compiler for doing FFTs. It's also worth doing in an interpreted language in an array-wise fashion versus with a for-loop. You should get something like a factor of 100,000 speed up.
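Part of the gap the parent describes is algorithmic rather than compiler magic, and that part is easy to see even in plain Python. Here is a sketch (not FFTW, just the textbook algorithms) comparing a naive O(n²) DFT written as explicit loops against a radix-2 Cooley-Tukey FFT; lengths are assumed to be powers of two.

```python
import cmath

def dft_naive(x):
    # Textbook O(n^2) DFT: two nested explicit loops.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
            for k in range(n)]

def fft(x):
    # Radix-2 Cooley-Tukey, O(n log n); len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

x = [complex(v) for v in (1, 2, 3, 4, 5, 6, 7, 8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft_naive(x), fft(x)))
```

FFTW then stacks machine-specific code generation on top of the better algorithm, which is why hand-written C rarely keeps up.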


> Forcing your compiler to figure out you're doing something trivial on a rank-n array is silly.

What is the alternative, if there is no canned procedure for it?

The procedure has to be written somewhere, somehow, in some language.

If compilers are silly, assembly, I guess?


The analysis to recognize whether a "canned procedure" is applicable is nontrivial, to put it lightly.


I see. Good answer.


RodgerTheGreat has given a great answer; I'd like to add that implicit iteration also lends itself to automatic GPUization, automated SIMD and multithread parallelization; the semantics of the "rank/depth" penetration come with no a-priori guarantee about order of evaluation (and it shouldn't matter anyway unless you are extremely naughty with side effects, which the languages greatly discourage).

Compare that to vectorizing/parallelizing an explicit loop, a problem on which there have been tens if not hundreds of PhD dissertations, and which is still not solved.

This is why "for x in ..." in Python/C#, and even the "map" variants, are still inferior to APL/J/K: the facts that iteration order is predefined, and that the iteration itself may terminate prematurely, make it extremely hard for the compiler to optimize, whereas in most cases neither consideration matters.
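To make the ordering point concrete, here is a small Python sketch. Because the reduction operator is associative, a runtime with no ordering guarantee is free to split the work into chunks, reduce each chunk independently (on another core or SIMD lane), and combine the partials in any order; the sequential left-to-right fold that Python's semantics mandate forecloses exactly this freedom.

```python
from functools import reduce
import operator

v = list(range(1, 101))

# Sequential fold: combination order is fixed left-to-right.
sequential = reduce(operator.add, v)

# With order-free semantics, a runtime may reduce chunks independently
# and combine the partial results in whatever order is convenient.
chunks = [v[i:i + 25] for i in range(0, len(v), 25)]
partials = [sum(c) for c in chunks]
parallelish = sum(reversed(partials))  # deliberately a different order

assert sequential == parallelish == 5050
```

The same answer comes out either way only because + is associative and side-effect free, which is exactly the property the array languages' semantics bake in.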


I think "loop-free" just means the language encourages you to think without loops, even if the implementation may or may not use traditional loops under the hood.

This is basically common to all languages with a strong emphasis on functional programming. Instead of looping, you perform operations directly on the arrays/matrices, and in fact APL and J are focused on matrix manipulation.


> This is basically common to all languages with a strong emphasis on functional programming

Also modern Fortran falls into this category. (I mention it because it's far from a functional language.)


That makes sense. Thanks.



