
This seems poorly grounded. In fact, almost three decades after the release of the Java HotSpot runtime, we're still waiting for even one system to produce the promised advantages. I guess consensus is that V8 has come closest?

But the reality is that hand-optimized AoT builds remain the gold standard for performance work.



The benchmarks I have seen show HotSpot is ahead of V8. E.g. https://stefan-marr.de/papers/oopsla-larose-et-al-ast-vs-byt...

What makes this very complicated is that 1) language design plays a big part in performance and 2) CPUs change as well and this anecdotally seems to have more impact on interpreter than compiler performance.

With regards to 1), consider optimizing JavaScript. It doesn't have machine integers, so you have to do a bunch of analysis to figure out when something is being used as an integer and then you can make that code fast. There are many other cases. Python is even worse in this regard. In comparison, AOT-compiled languages are usually designed to be fast, so they make tradeoffs that favour performance at the cost of some level of abstraction / expressivity. The JVM is somewhere in the middle, and so is its performance.

With regards to 2) this paper is an example, as is https://inria.hal.science/hal-01100647/file/InterpIBr-hal.pd...


> you have to do a bunch of analysis to figure out when something is being used as an integer and then you can make that code fast

It doesn't get much attention now that WASM exists, but asm.js essentially solves this, so a more head-to-head comparison ought to be possible. (V8 has optimisations specific to asm.js.)

https://en.wikipedia.org/wiki/Asm.js


asm.js solves this in the specific case where somebody has compiled their C/C++ code to target asm.js. It doesn't solve it for arbitrary JS code.

asm.js is more like a weird frontend to wasm than a dialect of JS.


No, if you just use the standard JavaScript cast-to-integer incantation, |0, V8 will optimize it. asm.js is valid JavaScript.


Sure, but that was essentially my point. If we're trying to compare HotSpot and V8 for similar input code, Java and asm.js seem closer than Java and full-blown JavaScript with its dynamic typing.


With all respect that sounds like excuse-making. I mean, yeah, Javascript and JVM and .NET are slower runtimes than C or Rust[1]. Nonetheless that's the world we live in, and if you have a performance-sensitive problem to solve you pick up rustc or g++ and not a managed runtime. If that's wrong, someone's got to actually show that it's wrong.

[1] Maybe Go or Swift would be more apples-to-apples. But even then are there clear benchmarks showing Kotlin or C# beating similar AoT code? If anything the general sense of the community is that Go is faster than Java.


Excuses for what? I'm not the elected representative for JIT compiled languages, sworn to defend them. There are technical reasons they tend to be slower. I was sketching some of them.


I think the above comments are because JIT gets so much positive press that someone wandering in from outside could be forgiven for thinking that JIT isn't coming 2nd in a two-man race with AOT.

I've been around long enough to hear that Java and JIT are gonna overtake C++ any day now.

The title on this article doesn't help.


https://devblogs.microsoft.com/oldnewthing/20060731-15/?p=30...

https://blog.codinghorror.com/on-managed-code-performance-ag...

And that was 2005. Modern .NET is much, much faster.

> If anything the general sense of the community is that Go is faster than Java.

Faster where?


When things are performance-sensitive, you want things to be tunable and predictable. Good luck playing with the JIT if you rely on that for performance...


Good luck with AOT as well, unless you hardcode the target hardware, like game consoles.


> But the reality is that hand-optimized AoT builds remain the gold standard for performance work.

It's considerably more complicated than that. After working in this area for 25 years, I have vacillated between extremes over decades-long arcs. The reality is much more nuanced than a four sentence HN comment. Profile and measure and stare at machine code. If you don't do that daily, it's hand waving and having hunches.


I'd also point out that it's an ever-shifting landscape. What was slow yesterday might not be today.

In my experience, while there are some negatives to the runtime selected, the vast majority of performance is won or lost at the algorithm level. It really doesn't matter that rust can be faster than ruby if you choose an O(n^3) algorithm. Rust will run the O(n^3) algorithm faster than ruby, for sure, but ruby will beat the pants off of rust if someone converts it into an O(n) algorithm.

It only starts mattering once you already have an O(n) algorithm. However, in my experience, a LOT of programmers are happy writing an n^3 and moving on to the next task without considering what this will do:

    for (var i : foo) {
      for (var j : foo) {
        for (var k : foo) {
          bar(i, j, k);  // O(n^3): called once per (i, j, k) triple
        }
      }
    }


You may be underestimating the degree of difference in performance between Ruby and Rust.

Here's a comparison of Ruby with JS, and Rust is of course faster still: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

If the code runs 100 times faster, it might just offset even highly inefficient implementation.

> a LOT of programmers are happy writing an n^3

I have the same experience.

Unfortunately, and this is an issue I keep fighting with in some .NET communities, languages like C, C++ and Rust tend to select for engineers who are more likely to care about writing a reasonably efficient implementation.

At the same time, higher-level languages sometimes can almost encourage blindness to the real-world model of computation, the execution implications be damned. In such languages you will encounter way more people who will write an O(n^3) algorithm and will fight you tooth and nail to keep it that way, because they have zero understanding of the fundamentals, wasting the heroic effort by the runtime/compiler to keep it running acceptably well.


> At the same time, higher-level languages sometimes can almost encourage blindness to the real-world model of computation, the execution implications be damned. In such languages you will encounter way more people who will write an O(n^3) algorithm and will fight you tooth and nail to keep it that way, because they have zero understanding of the fundamentals, wasting the heroic effort by the runtime/compiler to keep it running acceptably well.

I would say this tracks. I spent some time doing research on JVMs and found that, for example, the Java community largely values building OO abstractions around program logic and structuring things in ways that generally require more runtime logic and safety checks. For example, Java generics are erased and replaced with casts in the bytecode. The JVM has to blindly perform those checks in the interpreter and in any lower compiler tiers that don't inline. Only when you get to the optimizing tiers does the compiler start to inline enough to see enough context to statically eliminate these checks.
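
A minimal sketch of the erasure point (illustrative code, not from any particular benchmark):

    import java.util.ArrayList;
    import java.util.List;

    class ErasureDemo {
        public static void main(String[] args) {
            // List<String> erases to plain List, so javac inserts a
            // checkcast at the use site; the interpreter and lower JIT
            // tiers execute it blindly on every call.
            List<String> names = new ArrayList<>();
            names.add("hi");
            String s = names.get(0);  // bytecode: invokeinterface get, then checkcast String
            System.out.println(s);
        }
    }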

Of course Java hides these checks because they should never fail, so it's easy to forget they are there. As API designers and budding library writers, Java programmers learn to use these abstractions, like the nicety of generics, in order to make things more general and usable. That's the higher priority, and when the decision comes down to performance versus reuse, programmers choose reuse all the time.


> that generally require more runtime logic and safety checks.

These safety checks and runtime logic are a constant factor in the performance of a given java application.

Further, they are mostly minuscule compared to other things you are paying for by using java. The class check requires loading the object from main memory/CPU cache, but the actual check is a single-cycle cmp. Considering that the object will then be immediately used by the following code (hence warm in cache), the price really isn't comparable to the already existing overhead of reaching down into RAM to fetch it.

I won't say there aren't algorithms that will suffer; particularly if you are doing really heavy data crunching, that extra check can be murder. However, in the very grand scheme of things, it's nothing compared to all the memory loading that goes on in a typical java application.

That is to say, the extra class cast on an `ArrayList<Point>` is nothing compared to the cost of the memory lookups when you do

    int sum = 0;
    for (var point : points) {
      // each `point` is a separate heap object: one pointer chase per element
      sum += point.x + point.y + point.z;
    }


> The class check requires loading the object from main memory/CPU cache, but the actual check is a single-cycle cmp.

Only a guard or, possibly, a final class type-check (at least it's the case for sealed classes or exact type comparisons in .NET). For anything else this will be more involved due to inheritance.

Obviously for any length above ~3 this won't dominate but JVM type system defaults don't make all this any easier.


I'm not an expert, but I think the compiler requires the exact class on insertion, so at use it's just a check.


> If the code runs 100 times faster, it might just offset even highly inefficient implementation.

That's the danger of algorithmic complexity. 100 is a constant factor. As n grows, the effects of that constant factor are overwhelmed by the algorithmic inefficiency. For something like an n^3, it really doesn't take long before the algorithm dominates the performance over any language considerations.

To put it in perspective: if the rust O(n^3) algorithm is 100x faster than the ruby O(n) algorithm at n=10, then rust's effective constant-factor advantage works out to 10^4 (100 from raw speed times 100 from n^2 at that size), and since the gap between the algorithms grows as n^2, the crossover is at n^2 = 10^4, i.e. around n=100. Past that, ruby wins and keeps pulling ahead.

For the most part, the runtime complexity of languages is a relatively fixed factor. That's why algorithmic complexity ends up being extremely important, more so than the language choice.

I used to not think this way, but the more I've dealt with performance tuning, the more I've come to appreciate the wisdom of Big O in day-to-day programming. Too many devs will justify an O(n^2) algorithm as being "simple" even though the O(n) algorithm is often just a matter of adding a hashtable to the mix.
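
To make the hashtable point concrete, a minimal sketch (hypothetical helper names):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class CommonCount {
        // O(n^2): rescans b for every element of a
        static long slow(List<Integer> a, List<Integer> b) {
            long count = 0;
            for (int x : a)
                if (b.contains(x)) count++;  // O(n) linear scan per lookup
            return count;
        }

        // O(n): one pass to build the set, one pass to probe it
        static long fast(List<Integer> a, List<Integer> b) {
            Set<Integer> seen = new HashSet<>(b);  // O(n) to build
            long count = 0;
            for (int x : a)
                if (seen.contains(x)) count++;     // O(1) expected per lookup
            return count;
        }
    }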


I've found this website provides different results: https://programming-language-benchmarks.vercel.app/typescrip...

It also shows different Ruby implementations. I've tried truffleruby myself and it's blazing fast on long-running CPU-intensive tasks.


The tests on this website run for very little time indeed. They use input values that, for example, the original BenchmarksGame suggests for validation before running longer to measure actual performance (another case in point: surely you want to run a web server for longer than a couple hundred milliseconds). In my experience the data there does not always replicate what you get in real-world scenarios. It's an unfortunate tradeoff, because benchmark runs take a very long time when you want to support so many languages, but in my opinion it's better to have numbers that are useful for making informed decisions than pure quantity.

If you have something specific in mind, it can be more interesting to build and measure the exact scenario you'd like to know about (standard caveats about benchmarking properly apply), which is much easier if you have, say, just two languages.


JVM implementations, especially those with a PGO feedback loop across runs, do quite well.

Likewise modern Android runs reasonably well with its mix of JIT, AOT with JIT PGO metadata, and baseline profiles shared across devices via the Play Store.

The gold standard for anyone who actually cares about ultimate performance is hand-written Assembly, naturally guided by a profiler capable of measuring everything the CPU is doing, like VTune.


I agree, the "JITs can be faster because X Y Z" arguments have never turned into "JITs are actually faster".

Maybe that's because JIT is almost always used in languages that were slow in the first place, e.g. due to GC.

Is there a JITing C compiler, or something like that? Would that even make sense?


Binary Translation could be seen as a generalized JIT for native code.

Dynamo: A Transparent Dynamic Optimization System https://dl.acm.org/doi/pdf/10.1145/358438.349303

> We describe the design and implementation of Dynamo, a software dynamic optimization system that is capable of transparently improving the performance of a native instruction stream as it executes on the processor. The input native instruction stream to Dynamo can be dynamically generated (by a JIT for example), or it can come from the execution of a statically compiled native binary. This paper evaluates the Dynamo system in the latter, more challenging situation, in order to emphasize the limits, rather than the potential, of the system. Our experiments demonstrate that even statically optimized native binaries can be accelerated by Dynamo, and often by a significant degree. For example, the average performance of -O optimized SpecInt95 benchmark binaries created by the HP product C compiler is improved to a level comparable to their -O4 optimized version running without Dynamo. Dynamo achieves this by focusing its efforts on optimization opportunities that tend to manifest only at runtime, and hence opportunities that might be difficult for a static compiler to exploit. Dynamo's operation is transparent in the sense that it does not depend on any user annotations or binary instrumentation, and does not require multiple runs, or any special compiler, operating system or hardware support. The Dynamo prototype presented here is a realistic implementation running on an HP PA-8000 workstation under the HPUX 10.20 operating system.

https://www.semanticscholar.org/paper/Dynamo%3A-a-transparen...


Maybe the "allocate as little as possible, use sun.misc.Unsafe a lot, have lots of long-lived global arrays" style that some high-performance Java programs use would get close to being a good stand-in.


I'm pretty sure the major penalty is the lack of inline objects (thus requiring lots of pointer-chasing), rather than GC. GC will give you unpredictable performance but allocation has a penalty regardless of approach.

For purely array-based code, JIT is the only factor and Java can seriously compete with C/C++. It's impossible to be competitive with idiomatic Java code though.
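
A minimal sketch of the array-based style (illustrative sizes):

    class Sum3 {
        public static void main(String[] args) {
            int n = 1_000_000;
            // Parallel primitive arrays: the hot loop streams contiguous
            // memory, with no per-element object headers or pointer chasing.
            double[] xs = new double[n], ys = new double[n], zs = new double[n];
            double sum = 0;
            for (int i = 0; i < n; i++) {
                sum += xs[i] + ys[i] + zs[i];
            }
            System.out.println(sum);
        }
    }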

C# has structs (value classes) if you bother to use them. Java has something allegedly similar with Project Valhalla, but my observation indicates they completely misunderstand the problem and their solution is worthless.


The lack of inline objects is a huge hit that hopefully gets solved soon.

But I'd posit that one programming pattern enabled by a GC is concurrent programming. Java can happily create a bunch of promises/futures, throw them at a thread pool, and let them be crunched without worrying about the lifetimes of the stuff sent into or returned from those futures.
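
For instance, a minimal sketch (illustrative names); the GC keeps everything a task captures or returns alive for exactly as long as someone holds the future:

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.IntStream;

    class Fanout {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            // Fan out 100 tasks; no lifetime annotations, no frees.
            List<CompletableFuture<Integer>> futures = IntStream.range(0, 100)
                .mapToObj(i -> CompletableFuture.supplyAsync(() -> i * i, pool))
                .toList();
            int sum = futures.stream().mapToInt(CompletableFuture::join).sum();
            System.out.println(sum);  // 328350
            pool.shutdown();
        }
    }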

For single threaded stuff, C probably has java beat on memory and runtime. However, for multithreading it's simply easier to crank out correct threaded code in Java than it is in C.

IMO, this is what has made Go so appealing. Go doesn't produce the fastest binaries on the planet, but it does have nice concurrency primitives and a GC that makes highly parallel processes easy.


I am extremely skeptical of any "concurrency made easy" claims. Rust has probably the best claim in that area but it's still pretty limited, and comes at the cost of making it hard to write normal code.


I wouldn't (and didn't) say "easy" just "easier". The thing that makes rust concurrency so gnarly to work with is the lifetime battles you have to do in order to make it work. That's still better than C/C++ because you aren't dealing with accidental memory corruption when the wrong thread frees memory at the wrong time.

For languages like rust/C/C++, thread-safe data structures are VERY hard to pull off. That's because tracking the lifetimes of the things the data structure holds introduces all sorts of heartburn.

What GCed languages buy you is not needing to track those lifetimes. Yes, you can still have data races and shared-memory mutation problems, but you can also write thread-safe data structures like caches without the herculean effort needed to communicate to the cache's users who owns what, and when that thing dies.
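
A minimal sketch of that kind of cache (illustrative, relying only on the standard library):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Thread-safe memoizing cache: callers can hold returned values as
    // long as they like; reclamation is the GC's job, not an ownership
    // protocol the cache's users have to follow.
    class MemoCache<K, V> {
        private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
        private final Function<K, V> loader;

        MemoCache(Function<K, V> loader) { this.loader = loader; }

        V get(K key) {
            return map.computeIfAbsent(key, loader);  // atomic per key
        }
    }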

The best that Rust and C++ can do to solve these problems is ARC and a LOT of copying.


> Java has something allegedly similar with Project Valhalla, but my observation indicates they completely misunderstand the problem and their solution is worthless.

Hahah spicy take, I'd be interested to hear more. It definitely might not bode well that they opened the "Generics Reification" talk at JVMLS 2024 with "we have no answers, only problems."


I'm not going to investigate it again, there was probably more than this. But from what I recall:

* The compiler isn't actually guaranteed to store them by value at all. Basically, they're written to be an "optional extension" rather than a first-class feature in their own right.

* Everything is forced to be immutable, so you can't actually write most of the code that would take advantage of value types. Hot take: functional programming is mainly a bad workaround for languages that don't support value types in the first place.


The immutable thing is actually being sold as a strength, i.e. "you write your nice clean immutable code, and if you've tagged it as a value type or flattenable, the compiler will figure out it doesn't need a new allocation and will update the existing value inline." I think they see it as in keeping with the Java culture of "you get very good performance for straightforward code" but I definitely agree there's a hazard of introducing an unnecessary impedance mismatch.


It will be a lot of work for the compiler to unspill modifications on any non-trivial data structure and reduce register pressure, especially since it's Java's first foray into structs :)

(I suppose if the list of things you can do with structs is very short, this will be nowhere near as useful but will also reduce the amount of compiler changes)


The whole point is to introduce value types without a .NET Framework vs .NET Core schism.

Random jars taken out of Maven Central should be able to continue to execute in a Valhalla-enabled JVM, without changes in their original semantics, while at the same time being able to somewhat take advantage of the Valhalla world.

Naturally there is always the issue of APIs that no longer exist like Thread.stop(), but that is orthogonal to the idea to have binary libraries keep working in a new value aware world.

There are tons of compiler changes, minimal semantic changes and keeping bytecode ABI as much as possible is the engineering challenge.


To be fair, .NET has way more than just structs. But yes, they are a starting point.


> Is there a JITing C compiler, or something like that?

Yes, for example, compiling C to JavaScript (or asm.js, etc. [0]) leads to the C code being JITed.

And yes, there are definitely benchmarks where this is actually faster. Any time that a typical C compiler can't see that inlining makes sense is such an opportunity, as the JIT compiler sees the runtime behavior. The speedup can be very large. However, in practice, most codebases get inlined well using clang/gcc/etc., leaving few such opportunities.

[0] This may also happen when compiling C to WebAssembly, but it depends on whether the wasm runtime does JIT optimizations - many do not and instead focus on static optimizations, for simplicity.


C++/CLI is one example; it is C++, not C, but the example holds.


Now the money question: can anyone come up with a benchmark where, due to the JIT, C++/CLI runs faster than normal C++ compiled for the same CPU?


Writing a program where a JIT version is faster than the AOT version is just an exercise in knowing the limitations of AOT.

People have been doing runtime code generation for a very long time for exactly this reason.

A general implementation faster than, say, g++ is a completely different beast.


It is not C++ (or C) but a Microsoft-invented language - which is OK, but don't confuse it with C++ any more than MS already has


I love how folks treat GCC and clang compiler extensions as if they were C and C++ (and likewise UNIX compiler vendors in general, including embedded RTOS toolchains), but when Microsoft does the same, for whatever reason it doesn't count.

Two weights, two measures.


I certainly don't "worship" any compiler, and am pretty quick to point out non-standard extensions in people's code. But C++/CLI goes far, far beyond extensions, and becomes a completely different language to C++, both syntactically and semantically.


Just like the Linux kernel can only be compiled with GCC, or with compilers that implement the same language extensions, which aren't C at all, not being part of C23 (ISO/IEC 9899:2024), including compiler switches that change C semantics, such as strict provenance.

If you want to further discuss what is what, let's see how up to date your ISO knowledge is, versus the plethora of extensions across C and C++ compilers.


If you pit virtual-call-heavy code written in C++ against C#, C# will come out on top every single time, especially if you consume dynamically-linked dependencies or if you can't afford to wait until the heat death of the universe when all the LTO plugins finish their job.

Or if you use a SIMD-heavy path and your binary is built against, say, x86-64-v2/3 and the target supports AVX512, .NET will happily use the entirety of AVX512 thanks to the JIT, even when still using 256b-wide operations (i.e. a bespoke path that uses Vector256) with AVX512VL. This tends to surpass what you can get out of runtime dispatch under LLVM.

re: Java challenges - those stem from JVM bytecode being a very difficult optimization target: every call is virtual by default with a complex dispatch strategy, everything is a heap-allocated object by default save for very few primitives, and generics lose type information and are never monomorphized. PGO through tiered compilation, and the resulting guarded devirtualization and object escape analysis, is what reclaims performance in Java and makes it acceptable. C and C++ with templates are a massively easier optimization target for GCC, and GCC does not operate under strict time constraints either. Therefore we have the results that we do.
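
To illustrate the guarded-devirtualization point, a sketch (not a claim about any specific JVM's heuristics):

    interface Shape { double area(); }

    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }

    class Total {
        // In bytecode, every s.area() is a virtual (invokeinterface) call.
        // If the runtime only ever observes Circle at this call site, a
        // tiered JIT can inline area() behind a cheap class-check guard,
        // deoptimizing if a different Shape implementation shows up later.
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) sum += s.area();
            return sum;
        }
    }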

Also interesting data points here if you'd like to look at AOT capabilities of higher-level languages:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> I guess consensus is that V8 has come closest?

V8 better than the JVM? Insanity; maybe it can come within an order of magnitude in terms of performance.


Comes closest to realizing the concept of a JIT that is better than AOT.


I think that's completely silly framing; you can AOT compile any code better—or at least, just as well—if you already know how you want it to perform at runtime. Any efficiency gain would necessarily need to be in the context of total productivity.


> I think that's completely silly framing

It's literally the framing of the linked article though, which takes as a prior that JIT compilers are already ahead of AoT toolchains. And... they aren't!


They are comparing Javascript JIT to Javascript AOT, to avoid the issue of language design.

"The fastest contemporary JavaScript implementations use JIT compilers [27]. ... However, JIT compilers may not be desirable or simply not available in some contexts, for instance if programs are to be executed on platforms with too limited resources or if the architecture forbids dynamic code generation. Ahead of time (AoT) compilers offer a response to these situations.

Hopc [25] is an AoT JavaScript-to-C compiler. Its performance is often in the same range as that of the fastest JIT compilers but its impossibility to adapt the code executed at runtime seems a handicap for some patterns and benchmarks [27]."

In the context of JS it's reasonable to think that JIT may have an advantage, as the language is difficult to statically analyse.


> This gives them an advantage when compared to Ahead-of-Time (AoT) compilers that must choose the code to generate once for all.

I assumed they were talking about the general case, which is nearly useless to discuss. I just kind of filtered it out as internecine bickering amongst academics. The actual data are still interesting tho.


> Java HotSpot runtime, we're still waiting for even one system to produce the promised advantages.

What promised advantages are you waiting on?

There are lots of systems that have architectures that are similar to HotSpot, or that surpass it in some way. V8 is just one.


There were many many statements made that JIT compilers could be faster than AOT compilers because they had more information to use at runtime - originally this was mostly aimed at Java/HotSpot which has not, in practice, significantly displaced languages like C or C++ (or these days Rust) from high-performance work.


Yeah those statements were overly optimistic and I don’t think they’re representative of what most people in the JIT field think. It’s also not what I as a JIT engineer would have promised you.

The actual promise is just: JITs make dynamic languages faster and they are better at doing that than AOTs. I think lots of systems have delivered on that promise.


I concur here. 20 years ago I was a JIT cheerleader, and in the intervening time I've realized that you're only going to get the super-optimized hot inner loop perfect after the JIT and runtime have chugged through a ton of other slop that tends to make programs bloated and slow. And the Java ecosystem in particular has a tendency to build a ton of ceremony and abstractions that the runtime system has to boil away, which it can only really manage with deep inlining and a lot of optimizations, many of which are speculative.

> JITs make dynamic languages faster and they are better at doing that than AOTs

Indeed.


Yeah, I'm curious how well JIT works on languages with less dynamism. Perhaps a combination of AOT + JIT on a strongly statically typed language might provide the best of both worlds. Though I suppose PGO kinda does that.


I think about this a bit in the context of Virgil. Virgil's compiler is a whole-program optimizing compiler that does a lot of devirtualization and constant folding. At higher optimization levels it does a bit of inlining, but I haven't found the huge 10X speedups that you get in, e.g., Java. More like 10-40% performance improvements from inlining.

I think Virgil could benefit a little from runtime information. For example, it could make better inlining and register allocation decisions, as well as code layout. I have a feeling that Virgil code would benefit a little from guarded inlining, but I don't think full-on speculation would help. In general, a lot of polymorphism can melt away if you can look at the whole program. Couple that also with Virgil's compiler doing monomorphization, which means that using parametric polymorphism costs only code space, and I think the gap is pretty small. I'd expect you could maybe get another 10-20% from these things all together--that's a lot of work to get a small amount.


Yup, agreed, in the case of dynamic languages it's much clearer and the evidence is a lot more favourable.

The linked article doesn't help here because the abstract only mentions Javascript in the context of their work to prove their concept, but the body of the paper is clearer that it is discussing JIT vs AOT in the context of Javascript specifically.


I think their findings are applicable to lots of languages where the fastest known implementation is JIT based.

Not all “JIT dominant” languages rely on ICs as part of the JIT’s performance story, but enough of them do that it’s worth studying.

And JS happens to be the language where ICs have been taken the furthest, in terms of just how many different ways have been investigated and how many person years went into tuning them. So in some sense they’re picking the hardest fight. I think that’s a good thing.


HotSpot definitely has delivered on that too. It's a super dynamic runtime with reflection and randomly loaded jars even if Java the language is terse.


It has in a bunch of places. C# is widely used in video games, and Java is widely used in financial trading including HFT scenarios where every millisecond matters. And obviously in Android it's used to write large parts of the OS.

There are places where it hasn't, but that's more due to missing features than JIT vs AOT. Java only got SIMD support recently and it's still in a preview mode, partly because it's all blocking on Valhalla value types.

PGO can make a big difference to C++ codebases, and as JIT is basically PGO with better deployment/developer ergonomics it could probably also work in C++ too. It's just that the most performance sensitive C++ codebases like Chrome prefer to take the build system complexity hit and get the benefits of PGO without the costs, and most C++ codebases just go without.


I guess distributed systems and OS GUI frameworks aren't it then.


> we're still waiting for even one system to produce the promised advantages

To be clear, successful JITs do runtime profiling+optimization, at significant benefit.

But on net, JIT languages are slower.

It is a valid question to ask whether AOT binaries can selectively use runtime optimizations, making them even faster.


Hand-optimized AoT builds with solid profile-based feedback, right?



