
To be clear, did he consider exceptions something to be avoided when possible, or did he consider them a natural element of programming? I know exceptions do cause slowdowns, and most programs do their best to minimize them within reason.


The downvotes are unfair for an extremely good question.

(I used to ask questions along this line when interviewing candidates.)

Exceptions are used to communicate an unexpected error. Specifically, an error that the caller doesn't expect to handle during normal operations of the program. These errors can range from unusual situations like a network failure, to even more perverse situations like true bugs in the program.

The example I discussed with job candidates was implementing a database access function called GetUserById. (In C#, before compiler-enforced null checking.) I would ask if the function should return null or throw an exception when there was no user with that ID.

What followed (with the candidates who passed) was a discussion about the trade-offs of returning null versus throwing an exception. Null allows the caller to know that there was no user with that ID without the overhead of an exception. But returning null increases the risk of a NullReferenceException, which is harder to debug than a strongly typed exception with a useful error message. Thus, the "right" approach depended on whether callers of GetUserById were expected to always pass the ID of a valid user.

When there was time, we'd even get into the TryGet pattern that the .NET dictionaries use.
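For concreteness, a rough sketch of the three shapes that discussion tends to land on (GetUserById, FindUserById, TryGetUserById and the QueryUser helper are hypothetical names, pre nullable-reference-types C#):

    // Throwing style: callers assume the ID always refers to a real user.
    public User GetUserById(int id)
    {
        User user = QueryUser(id);          // hypothetical data-access helper
        if (user == null)
            throw new InvalidOperationException($"No user with ID {id}");
        return user;
    }

    // Null-returning style: cheaper, but callers must remember to check.
    public User FindUserById(int id)
    {
        return QueryUser(id);               // may be null
    }

    // TryGet style, mirroring Dictionary<TKey, TValue>.TryGetValue.
    public bool TryGetUserById(int id, out User user)
    {
        user = QueryUser(id);
        return user != null;
    }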

(By the way, now is a good time to check out how Rust's enum type is used for error handling. It's really slick, with no overhead.)


I would like to firmly push for a new definition of what an exception is for. It's for _aborting a sub-task in your program, if it cannot complete its assigned goal_. Unlike the meaningless "it's for exceptional situations", this definition has the benefit of describing what they are for, and when you should use them.

Are you really not expecting an error, even as you are writing code to detect and report on it? No! You think it is unlikely to happen, but you are spending time preparing for it.

But after some errors, whatever sub-task the program was working on just isn't going to happen, and in that case the program needs to get back to a state where it can continue with the next sub-task. Exceptions do precisely that, letting you gracefully back out of the sub-task in a clean and clear manner.

Exceptions are not for aborting your entire program, as some people mistakenly hold (there's abort() for that). The fact that they are comparatively slow doesn't matter. Once you are not going to achieve your goal, it doesn't matter too much whether you'll do so at a rate of a thousand per second or a million per second, except perhaps for total system throughput.


> I would like to firmly push for a new definition of what an exception is for.

I have some sympathy with your argument, particularly the idea that “for exceptional situations” is an empty tautology.

However, I find it a little strange to characterise exceptions as being “for” any specific purpose. This seems a common theme in the programming community, yet we don’t feel the need to characterise variables or for-loops or function calls as being “for” something in that way. They are just tools that our programming language provides, which have certain behaviour if we use them. That behaviour is (hopefully) defined objectively by the language specification, but how we then employ each tool is an open-ended and subjective question, a matter of judgement or perhaps convention.

In the case of exceptions, as provided in most mainstream programming languages, it is objectively true that they immediately exit lower level code and transfer control back up the call stack until they are handled at some higher level (or not). There are at least two reasons we might want to do that: something can’t do its job or something has now done its job. Either way, the outcome for that part of our program is now known and we are ready to proceed accordingly.

The proposal in the parent comment, aborting a sub-task if it can’t complete its assigned goal, is in the former camp. This might be the most widespread interpretation of what throwing/raising an exception represents. There is still plenty of debate about whether this should be used for “expected” failures like failing to find a file or connect to a network and/or for “unexpected” failures that imply some logic error in the program itself, but there is a degree of consensus that an exception represents some form of error condition.

But then you have languages like Python, which also uses built-in exceptions like StopIteration to indicate routine, successful completion conditions. This might be anathema to the school of thought that says exceptions are “only for indicating exceptional failures”, yet here it is, a different style that is used every day in one of the most popular programming languages ever created, and the sun still came up this morning.

Possibly the most unusual idea for using exceptions that I have encountered personally was to indicate a positive outcome from a complicated search algorithm. There were many mutually recursive functions that collectively scanned a graph-like data structure. On identifying a match, an exception would be raised to report the details. Using an exception in this way was like a multi-level early return statement or using a labelled break to exit multiple loops at once. It guaranteed a clean and immediate exit from the search, regardless of how it had recursed to reach that point, and the exception was then neatly handled at the same level of the code that started the recursive search, thus avoiding cluttering one or more paths through every recursive function in that search code with if(done) conditions. To some programmers, this might be a controversial use of the tool, but perhaps the question we should be asking is why it should be, if the code was clearly correct according to the language rules and exceptions provided a neat, easily understood way to solve the design problem.
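In rough sketch form (C# here for consistency with the rest of the thread; Node, IsMatch and the method names are illustrative, not the original code):

    // The exception carries the match up the stack to whoever started the search.
    class MatchFoundException : Exception
    {
        public Node Match { get; }
        public MatchFoundException(Node match) { Match = match; }
    }

    Node FindMatch(Node root)
    {
        try
        {
            Search(root);                    // may recurse arbitrarily deep
            return null;                     // exhausted the graph, no match
        }
        catch (MatchFoundException found)
        {
            return found.Match;              // unwinds every recursive frame at once
        }
    }

    void Search(Node node)
    {
        if (IsMatch(node))
            throw new MatchFoundException(node);
        foreach (var child in node.Children)
            Search(child);                   // no if(done) checks needed on the way back
    }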


There are many ways to write correct programs within the bounds of any programming language. In languages that support goto, you don't even need functions and loops.

However, like any human discipline, there are good patterns for how to write a maintainable program, and there are bad patterns. There may be particular cases where a typically bad pattern is nevertheless the best available. But I believe GP is right, and has given a good definition of the most understandable and maintainable use for exceptions.


> However, like any human discipline, there are good patterns for how to write a maintainable program, and there are bad patterns.

Sure. What I’m questioning is whether there is any rational, objective basis for arguing that using exceptions to exit early in positive cases is a bad pattern. The programming world is full of opinions, sometimes strongly held by experienced practitioners, that a certain style is bad. The programming world is also full of other experienced practitioners who use some of those controversial styles very successfully. Exceptions are a common source of controversy, but you could just as well look at type systems, significant white space, OOP, functional programming, or a hundred other areas where reasonable people can differ. I believe it’s important to distinguish arguments based on dogma or convention from arguments based on rational logic or empirical evidence. One type helps us to improve as individuals and as a community, while the other can only hold us back.


FWIW, I wouldn't argue it is necessarily a bad pattern, just a highly uncommon one; the sort of thing someone with a lot of experience can do, but that you wouldn't go teaching to beginners. I would also demand to see a comment explaining the unusual usage.

What I sort-of expected to be called out on is the definition of what a sub-task really is, because that is still quite vague. I'm inclined to match these to what a user of the program would consider a thing he does with the program at the largest scale level (things like "print a document" or "send a mail"), but there is certainly room for smaller granularity sub-tasks as well. I.e. if you are rendering a web page, but can't display an image for some reason, you'd just abort the image render, not the whole page render.
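As a sketch of that granularity choice (Page, RenderImage and friends are made-up names), the handler sits around the image sub-task rather than the whole page:

    void RenderPage(Page page)
    {
        foreach (var image in page.Images)
        {
            try
            {
                RenderImage(image);              // the sub-task that may fail
            }
            catch (InvalidDataException e)       // e.g. a corrupt image
            {
                RenderPlaceholder(image, e);     // degrade gracefully, keep going
            }
        }
        RenderText(page);                        // the rest of the page still renders
    }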

How would I classify the example given (loading a user record from a database)? Well, I don't know! It depends on the context: if we fail at loading that user, are we going to have to give up on whatever other things we were doing as well, or can we continue, possibly with some degraded functionality? If the first, it's an exception. If the second, null (or whatever passes for null in your language of choice, like std::optional in C++).

I suppose it wouldn't surprise you too much to learn that I do in fact have database access routines in both styles... There's database.load_one_record ("query"), which throws if it can't find that one thing you are looking for, but also database.load_one_record ("query", default_value), which returns the specified default value if no record matches the query. Because really, this one depends very heavily on context...


> I would ask if the function should return null or throw an exception when there was no user with that ID.

My answer would be “neither”. Both of these options are bad. Null is maybe less bad because it at least roughly approximates the algebraic type that accurately represents the image of the function. Ideally you would return some type like Optional<User>. I suspect this is what you’re referring to with the Rust reference.

A slightly more generalized version of this is to use the Either/Result monad, which can nicely handle multiple types of exceptional behavior at once.
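A bare-bones sketch of what that could look like in C# (this Result type and the UserLookupError mentioned below are illustrative, not framework types):

    // Hand-rolled for illustration; either a success value or an error, never both.
    public sealed class Result<TOk, TErr>
    {
        public bool IsOk { get; }
        public TOk Value { get; }
        public TErr Error { get; }

        private Result(bool isOk, TOk value, TErr error)
        {
            IsOk = isOk;
            Value = value;
            Error = error;
        }

        public static Result<TOk, TErr> Ok(TOk value) =>
            new Result<TOk, TErr>(true, value, default);

        public static Result<TOk, TErr> Err(TErr error) =>
            new Result<TOk, TErr>(false, default, error);
    }

    // GetUserById might then be declared as
    //   Result<User, UserLookupError> GetUserById(int id)
    // so "not found" and "store unreachable" are distinct, typed error values
    // rather than a null or a thrown exception.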


Hrm, not the OP but, personally, it depends.

If you’re using a language without algebraic data types, you’d have to ask whether it’s perfectly normal for a record not to be found by ID at that point.

If it’s unusual, you should probably just throw the exception at that point because:

1. You check null and throw anyway, which gives you a stack trace just a little off the mark. That isn’t too bad but it’s more code for little gain.

2. There’s every chance you could forget to handle the null (It’s not the nil, it’s my discipline, I know, but it happens, we’ve all been there) and then you get a null pointer exception at some point later on in the flow, which is like the above but even worse.

If you’re using algebraic data types, you wouldn’t use an optional. There are a number of ways a DB look up can fail so Optional isn’t really ideal at all. Your suggestion of Either/Result is probably the ideal there.

The chances that you expect an explicit DB lookup by ID to return nothing are pretty slim. Where has the program or client code got that ID from in the first place? Either you want to know about the error in your own code or you want the client code to know about it, so you probably want some form of exception or Either.

In the rare case it is intended behaviour, yeah, you could use null and push the responsibility for handling it to the calling code. Still, at that point, point 2 above kicks in: it might be fine now, but that assumption could easily change as the code evolves, which it can and often does.

You could argue you can implement something like Optional in, say, Java w/o algebraic data types but they are simply not ergonomic when the language does not help you use them.

But I don’t think you can hold up a single approach and say that’s the ideal. I doubt it was your intention but it’s rather dogmatic and doesn’t really play out nicely in real world code, especially legacy projects (not everything is written in Haskell|Rust).


> it’s rather dogmatic

I found it correct. I hold the same "anti null" and "anti exception" beliefs, and think Either/ResultObject/Maybe/Optional is ideal compared to them...

Your range of programmer thought seems to be limited by what is possible/idiomatic in popular languages.

> not everything is written in Haskell|Rust

The Truth is not measured in mass appeal.


The convention in C# is to make two versions of the method: GetFoo that throws, and TryGetFoo that returns a bool and places the result in an out parameter.

For example, with the C# dictionary, you can either do foo = dictionary["foo"]; or if (dictionary.TryGetValue("foo", out foo)).
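A hand-rolled pair following the same convention might look like this (GetUser/TryGetUser and the _users dictionary are made up for illustration):

    // Throwing version: use when the caller treats a missing key as a bug.
    public User GetUser(string name)
    {
        return _users[name];                     // throws KeyNotFoundException if absent
    }

    // Try version: use when "not there" is a normal outcome.
    public bool TryGetUser(string name, out User user)
    {
        return _users.TryGetValue(name, out user);
    }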

If Option<T> is your preferred pattern then you should use a language where that is supported.

Trying to recreate Option<T> in languages without compiler null checking has issues because either there's a risk that the Option<T> is null, or the value inside of it is null.


Does C# support sum types? GP was talking about C# specifically.

Agree with your analysis though: it's essentially a partial function, so the range (codomain) of the function includes "no defined output for the specified input". It's therefore not a bug to return "not found". An algebraic datatype like Either/Optional/Result allows that to be encoded in the type system, and so provides explicit support syntactically and semantically for handling the situation.

I don't think C# supports sum types (though may well be wrong there, not so close to the language). Assuming not, the question is how to achieve the desired outcomes. Those might reasonably be one or more of:

* minimise the risk of null de-reference

* maintain performance

* be idiomatic

* be understandable

* be explicit

If (and it's a big if) the only language options are nulls and exceptions, there's not a clear winner there. So exploring the tradeoffs does seem like a reasonable discussion.


Effectively every OO language does. Inherit and override behavior. Add a piece of indirection that allows the result to specify the type of view, etc. Think of `UserSearchResult SearchUser()` as returning either `ARealUserResult { int userId; string display = "showUserView" }` or `UserNotFoundResult { string display = "notFoundView" }` (you could easily add MultipleUsersMatchResult or UserRequiresAdminPrivilegesResult ...)
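Spelled out in C#, that might look like the following sketch (same illustrative names, not a real API):

    abstract class UserSearchResult
    {
        public abstract string Display { get; }       // which view to render
    }

    sealed class ARealUserResult : UserSearchResult
    {
        public int UserId { get; set; }
        public override string Display => "showUserView";
    }

    sealed class UserNotFoundResult : UserSearchResult
    {
        public override string Display => "notFoundView";
    }

    // UserSearchResult SearchUser(string query) can then return either case,
    // and callers dispatch on the concrete type (or on Display) instead of
    // checking for null.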


Agreed. This is the most obvious way forward. Rust, Haskell, Elm, even Kotlin seem to be coming along nicely.

Exceptions and null for error cases are horrible and Java and Go are doomed. (semi-jokingly)


I would disagree that network failure is unusual. It is something you should expect to happen, and reporting such conditions should be as much a core feature of your API design as the happy path, not something bolted on the side.

Exceptions are best reserved for conditions that should be impossible, but become possible for an exceptional reason (bugs, cosmic rays flipping the wrong bits, etc.)


It's really critical that you don't have to litter every single function call and conditional in your code with "if network error." There are plenty of "one in a million" corner cases beyond network errors that an industrial-strength program needs to handle.

Exceptions let you put a single error handler higher up in the stack for the one in a million errors, like network errors.
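In sketch form (Account and the helper methods here are placeholders):

    // Lower-level code just does its job; no "if network error" after every call.
    void SyncAllAccounts(IEnumerable<Account> accounts)
    {
        foreach (var account in accounts)
        {
            FetchRemoteState(account);
            ApplyLocalChanges(account);
            PushUpdates(account);
        }
    }

    // One handler, high up, for the one-in-a-million failures.
    void RunSync(IEnumerable<Account> accounts)
    {
        try
        {
            SyncAllAccounts(accounts);
        }
        catch (IOException e)                // e.g. the network went away
        {
            Log(e);
            ScheduleRetry();
        }
    }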


If a network error is possible, there is no reason for it not to be a first-class feature of your API. You can bolt it on to the side using goto by another name, but you shouldn't.


I think he never considered exceptions to be something to be avoided, since he wrote Java for most of his professional career - which ended when he decided to quit programming and go sell ornamental flowers in his family business in his hometown.

The middleware we worked on had libraries from other companies (e.g. online prepaid transactions were handled by code from a jar/lib from Ericsson). But if the platform threw an exception, we could still charge the customer using a slower process (i.e. there was a window of time where the customer could use data/talk in excess until we realized, when processing the slower transaction, that there was no more credit).

For him it was important to write these exceptions well. If something went wrong in production, we couldn't turn off the system, and if we were not charging users when we should, the company would be losing millions (it was in Brazil, ~200 mi at the time, and the telco had ~60 mi users I think, with Mother's Day and Big Brother being the craziest days, with millions of messages per second).


Throwing exceptions is slow. Adding exception handling code itself has little to no overhead.

Only throw exceptions in exceptional cases, not for frequently expected outcomes.


Java exceptions build their stack trace at construction time, which makes for the fun fact that instantiating an exception is the majority of its cost. At least that was the case the last time I checked. Actually throwing it is around a fifth of the cost.

This is relevant in log-and-rethrow (throw e) situations.


Adding to that, assume any C++ code can throw exceptions unless it's specifically stated it won't.

If you're doing manual memory allocation in a function, your exception handling might be leaky. (gsl::finally is your friend.)


Imagine you're writing a function that parses an integer from a string. Would you let it throw an exception on failure?


Depends on whether you call the function `parse` or `tryParse`. :)

Throw if you expect that the input should never be invalid.

Don't throw, and do friendly logging and reporting of your parsing errors, if this is uncontrollable third party input.
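In C# the two stances map onto the standard pair int.Parse / int.TryParse (the surrounding names here are made up):

    // Trusted input: a bad value is a bug, so let int.Parse throw FormatException.
    int port = int.Parse(configuredPort);

    // Untrusted third-party input: failure is a normal outcome, so use TryParse.
    if (int.TryParse(userInput, out int parsed))
        UsePort(parsed);
    else
        ReportInvalidPort(userInput);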





