Hacker News

> Zig produces smaller, faster binaries than C++

This seems fairly significant to me, and it makes me wonder how it's possible given all the effort that has gone into optimizing C++.



Because C++ can't make breaking changes and has to live with legacy baggage, while Zig is free to use modern techniques without those constraints.


This is wrong, because you can strip out C++ code you don't need. Greenfield code doesn't need to worry about legacy baggage.


And how do you identify the unneeded code in a compiler that is used all over the world?


A smaller binary will not necessarily be faster - specialization can bloat code size, but having a specific version for a given subtype can be much faster. It’s a tradeoff.


Right after that sentence there's a link to a talk that explains one way this is true.


We can hardly be expected to watch a 46-minute video on "A Practical Guide to Applying Data-Oriented Design" to find our answer. It seems unpromising on the surface.


Data-oriented design is why the thing is faster.


I guess I'll wait for benchmarks comparing C++ and Zig across a variety of problems to see which one is faster.





