
And if you divide by 0 your program still crashes. The reality is that you can use vectors for memory allocations and you never have to worry about it. If you do need to wrap resource allocation, you do it once, test it and it will probably work from then on. This is much better than the alternative of having to remember to free the memory, close the file, unlock the mutex correctly every single time you need one.


I don't agree that "you never have to worry about it" unless you're also using smart pointers, which is rarely what I want.

I think that the alternative where all allocations and deallocations are made clear (in Zig, allocating routines are "coloured" by convention) is the better alternative, at least for the kind of low-level programming I do and for my way of thinking about low-level code.

When I write code where I don't want to see or worry about each implementation detail or see exactly where and when each operation is executed, I use Java.


> which is rarely what I want.

If you don't want memory leaks, it probably is what you want.

There isn't a ton of difference between putting a delete in a destructor and using a smart pointer, but the best approach is to go beyond smart pointers and just use a vector, which does everything for you.

A lot of this seems like you haven't done a lot of modern C++ to see how elegant and smooth it is.


> If you don't want memory leaks, it probably is what you want.

No, sorry.

> A lot of this seems like you haven't done a lot of modern C++ to see how elegant and smooth it is.

It's true I don't want to use modern C++ (except for certain compile-time tests), and that's because when I write low-level code what I care about the most is being able to see the machine instructions that will be emitted, especially the ones related to memory management (I don't care so much about the computational instructions; the compiler, and the CPU, will rewrite them dramatically anyway).

If I find myself needing pretty abstractions, I reconsider my use of a low-level language. That's also why I'm philosophically opposed to the concept of zero-cost abstractions. What I want from C++ that C doesn't give me is templates and some other compile-time stuff, which are much more convenient for me than C macros. I don't want any implicitness in my low-level code. Zig gives me exactly what I want from a language that specialises in low-level code.

My problem with zero-cost abstractions is that they result in code that looks high-level, while still only really having a low abstraction level (what I mean by a high or low abstraction level is the extent to which I can make local changes without influencing non-local code). The resulting code looks pretty on the page, but makes me work a lot harder to understand what is being executed and when. When I don't want to care about such details, I use Java.

Just last week I had an interesting discussion, that's somewhat similar to this, about Haskell with a colleague. He said something like, look how clearly you can see the algorithm on the screen. And I said, yes, it looks great when trying to understand what it does by reading, but it's terrible to understand when you try analysing it in the debugger or profiler. The point is that there's different kinds of information that code can communicate. Sometimes you want just the function of the algorithm to be clear and want the language to hide execution details, and sometimes you're just as interested in the execution details.


Oooh yeah, sorry no, smart pointers and destructors do prevent memory leaks.

You said no then didn't back it up with anything and just went on a tangent of your own personal preferences.


`smart pointers + destructors => no memory leaks` does not entail `no memory leaks => smart pointers + destructors`.

"My solution offers X, therefore, if you want X, you should use my solution" is just a logical fallacy; the conclusion doesn't follow from the premise.

Also, I'm not so sure smart pointers and destructors actually prevent memory leaks. E.g. cycles. You can deal with cycles, but memory leaks due to them are not "prevented" just by the use of smart pointers and destructors.

Java does prevent leaks due to cycles, but it still doesn't prevent leaks due to "forgotten objects", so you get memory leaks in Java, too, even though it has fewer leaks than C++/Rust with GC pointers. So given that you can have fewer leaks than with smart pointers and still not have them completely gone, I wouldn't say that smart pointers "prevent" leaks. But yes, they're one of the ways to reduce them.


> Also, I'm not so sure smart pointers and destructors actually prevent memory leaks

Well, they do; that's why people use them. I'm not sure why you would make the case for another language that makes memory management manual.

Also, reference-counting cycles are only even possible if you use reference counting in the first place, which isn't necessary for single-threaded, scope-based memory management.

> Java does prevent leaks due to cycles

Fantastic, but you said Zig is safer than C++; what does Java have to do with it?


> Well, they do, that's why people use them.

They're one of the ways of reducing, not preventing memory leaks. And they're pretty ok at that, but I find other approaches to work better for me.


You can do whatever you want, but systemically it is a lot better than doing it manually and anyone experienced with modern C++ will tell you it essentially stops being a problem.

Also, you say they don't prevent memory leaks, but your only example is cycles, which only happen with reference counting, which is only even necessary with multi-threading. It also implies a data structure that contains a bunch of shared pointers internally that end up referencing each other, which implies a linked list or tree made out of shared pointers; that is a huge mistake in the first place. In practice this doesn't really happen.


> You can do whatever you want, but systemically it is a lot better than doing it manually and anyone experienced with modern C++ will tell you it essentially stops being a problem.

Yeah, it's fine, but I think that systematically the Zig approach is a lot better for my needs and preferences.

> In practice this doesn't really happen.

I've only been programming low-level code for 25 years or so, including hard realtime safety-critical software, where a missed deadline or a stack overflow means dead people, so I have some grasp on what can really happen.

There's a clear tradeoff between forgetting to write some code and not noticing when code runs when you may not expect it to. Saying that one is universally better than the other is, at the very least, unsubstantiated.


You know when destructors run, they run at the end of a scope. It's very clear. Have you used modern C++?


Destructors are not modern C++. When I learnt C++, circa 1993, we had destructors. Saying you should just remember to always look up which destructors run sounds about as convincing to me as I must sound to you when I say you should just remember to put a defer statement wherever you need to free a resource.

In my low-level code I don't want any calls that I can't see as explicit calls in the code (BTW, it's not that I don't use destructors at all - I'm not a fanatic - it's that I try to avoid relying on RAII).


No one said they were, and that isn't the point (and you didn't answer the question). When destructors run isn't a mystery; it's deterministic, and you know resources need to be freed, so you know what they are going to do.

Hidden functions aren't a big deal in practice. You have functions calling other functions in C all the time and you have to know what they are doing under the hood, same with data structures and their operators.


> Hidden functions aren't a big deal in practice.

Again, that's like me saying "forgetting defer isn't a big deal in practice." You seem to disagree with that and I disagree with your assertion. It's largely a matter of personal preference.

And no, except for some compile-time tests/assertions and a smattering of lambdas, we don't use most of "modern C++" (or almost any of std). Sometimes we need to emit hand-crafted machine code, and sometimes we may need to mess around with stack frames directly (which means we can't use destructors in those cases). We do use destructors a fair bit, but I personally soured on RAII some ten years ago.


> Again, that's like me saying "forgetting defer isn't a big deal in practice."

It's the opposite, because for defer you need to be proactive and do it every time, whereas destructors are going to do the right thing and work; if you question what's happening, you can investigate.

One is happening automatically and already works, the other is manual and you always have to remember or your program is broken.

> we don't use most of "modern C++"

You might want to try it out, it would probably help with all these misunderstandings.


> It's the opposite, because for defer you need to be proactive and do it every time, but destructors are going to do the right thing and work

No, it's the opposite: with defer you always see all the operations that are happening, while with destructors you have to be proactive and check every time what operations your types' destructors perform.

Look, I've used destructors a lot for many, many years (and still do, as it's not always up to me), they have pros and cons, some people really like them, some don't, and it's okay. It's not like there's some universal truth here or empirical data that strongly favours one side over the other.

> You might want to try it out, it would probably help with all these misunderstandings.

Thank you for your suggestion, but being one of the most foundational pieces of C++ software in the world, I think we've got it covered.



