
The reason it's hard is that everything you do has to become a "transaction" with the ability to roll back. Say a function sends five messages: now you have to pre-allocate all of them before sending any, so that you can cancel them all if any allocation fails. But this pre-allocation tends to break abstraction barriers (every API might need separate "prepare" and "fire" calls). It doesn't sound that complicated at first, but it gets that way in a hurry. Almost every function can fail, and every operation needs the ability to roll back midstream... it makes a mess quickly.

It's also a LOT of extra code, really material bloat.

One experience I had doing this: http://blog.ometer.com/2008/02/04/out-of-memory-handling-d-b...

If you haven't written the test harness to test almost every malloc failing, you might think this is easier than it really is.
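The kind of harness meant here is typically built on allocation fault injection. A hedged sketch (the wrapper and its names are my own assumption, not the author's actual dbus harness): route allocations through a wrapper that fails the Nth call, then rerun the test with N = 1, 2, 3, ... so every rollback path gets exercised at least once.

```c
#include <stdlib.h>

static int alloc_count = 0;
static int fail_at = -1;   /* -1 means never inject a failure */

/* Arm the injector: the Nth allocation after this call returns NULL. */
void set_malloc_fail_at(int n) {
    alloc_count = 0;
    fail_at = n;
}

/* Code under test calls this instead of malloc directly. */
void *test_malloc(size_t size) {
    if (fail_at >= 0 && ++alloc_count == fail_at)
        return NULL;               /* simulate OOM on this one call */
    return malloc(size);
}
```

A driver then loops: arm failure at call 1, run the scenario, check invariants; arm failure at call 2; and so on, until a run completes without hitting the injected failure. The combinatorics of doing this across a whole codebase are a large part of why "not hard" undersells it.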

Adding 30-40% more code to your codebase, code that will almost never run in production and gets tested only by your unit tests, if at all... no thanks, not if it's avoidable for a given application.



I find it odd to see a reply written as if I hadn't written in a transactional style and seen it working well. But setting that aside for a second: your blog post says the error handlers did the wrong thing 5% of the time. Can I read from that that they did the right thing 95% of the time? And you'd dismiss the technique for that?

Not to mention that there are coding styles that make the transactional approach less difficult. (OK, so reverting your work gets hairy in the presence of certain side effects. In many cases I would rather choose some behavior and stick with it than take down an entire process by dereferencing a null pointer.)
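The comment doesn't name the styles it has in mind; one common candidate (an assumption on my part) is the C "goto cleanup" idiom, which centralizes the rollback in a single unwind path instead of duplicating it at every failure site:

```c
#include <stdlib.h>

/* Acquire three buffers; on any failure, release whatever was acquired.
   free(NULL) is a no-op, so one unwind path covers every exit point. */
int do_work(void) {
    char *a = NULL, *b = NULL, *c = NULL;
    int rc = -1;

    a = malloc(64);
    if (a == NULL) goto out;
    b = malloc(64);
    if (b == NULL) goto out;
    c = malloc(64);
    if (c == NULL) goto out;

    /* ... all resources acquired; do the actual work here ... */
    rc = 0;

out:
    free(c);
    free(b);
    free(a);
    return rc;
}
```

This doesn't eliminate the rollback code, but it keeps each function's failure handling in one place, which makes the 30-40% overhead the parent mentions easier to audit.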


All I'm saying is that "not hard," as you put it, and "30% more code, transactions, and a complicated test harness" don't go together for me. If they do for you, then enjoy :-)



