The rational actor model assumes that a person will behave optimally - using all information available to make and carry out the best decision possible for their goals.

I strongly suspect that a better model is that people, instead of optimizing their outcomes, optimize the ease of decision making while still arriving at an acceptable course of action. Most of our biases serve either to let us make decisions quicker or to minimize the odds of catastrophically bad outcomes, which fits nicely with this model. The fact is that indecision is often worse than a bad decision, and the evolutionary forces that shaped our brains are stochastic in nature and thus don't dock points for missed opportunities.
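A rough sketch of the contrast, in code (a toy model; the utilities and the "good enough" threshold are invented for illustration):

    import random

    # Toy model: 1000 options with random utilities (all numbers invented).
    options = [random.random() for _ in range(1000)]

    # Maximizer: evaluate every option and take the best
    # (cost grows with the number of options considered).
    best = max(options)

    # Satisficer: take the first option that clears an aspiration level.
    ACCEPTABLE = 0.8  # arbitrary threshold for "good enough"
    chosen = next((u for u in options if u >= ACCEPTABLE), options[-1])

The satisficer usually stops after a handful of evaluations; the maximizer always pays for all of them.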



The idea you’re describing sounds similar to Satisficing Theory [1]. I agree this approach does a much better job of describing real life decision making than the traditional rational actor model. Unfortunately, Satisficing rarely gets discussed (at least in my experience) in mainstream economics/psychology, despite having been around since the 1950s.

[1] https://en.wikipedia.org/wiki/Satisficing


Seems obvious that type 1 thinking satisfices and type 2 maximizes.


Nobody can maximize; that's impossible. It'd involve something like making perfect stock picks.


Finding the best possible choice is impossible, but selecting the choice that maximizes expectation is possible. The former would be driving a different route because you know that a specific driver would rear-end you, and is impossible because it requires knowing the future. The latter would be driving a different route because you know that there's an annoying left turn.

The distinction isn't whether you have access to all possible information for use in a decision. It's whether you use all the information available to you in a decision.
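A toy version of the route example (probabilities and times invented): maximizing expectation means choosing the best option given the distribution you know, not foreseeing the specific accident.

    routes = {
        # route: list of (probability, travel_minutes) outcomes
        "main_street": [(0.7, 20), (0.3, 35)],  # usually fast, sometimes a bad left turn
        "back_roads":  [(1.0, 25)],             # slower but predictable
    }

    def expected_minutes(outcomes):
        return sum(p * t for p, t in outcomes)

    best_route = min(routes, key=lambda r: expected_minutes(routes[r]))
    print(best_route)  # main_street: 24.5 expected minutes vs 25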


Nobody uses all available information; whoever did would probably start by collecting every Millennium Prize.


Not sure if you realize this is coming off as pedantic, but everybody realizes what you are getting at. It's just not useful or relevant.

Define "available information" as what people are able to load into working memory to make the decision. You can maximize over those factors easily.


I think the fact that you think this is pedantic rather than useful and relevant demonstrates that you don't realize what he's getting at, possibly because your definition of "information being available" is wrong; it would make type 1 thinking the same as type 2.


I can "load up" the axioms of set theory plus the necessary definitions into working memory, but I'm still not claiming any Millennium Prizes. I do not think that a model of a person limited only by information would be anything close to a person limited by computational ability.


Definitions != mental model

The representation isn’t the model and conflating the two makes it hard to give a meaningful response to your comments.


True, but even execution with literally zero unforced errors, given the information one does have, is something that can be pursued.

Or can it? Is it even possible, or are humans so fundamentally flawed that they inevitably fail on day one? Pointing to monks is a standard example, but they tend to isolate themselves from difficult environments.

The laws of physics don't stop us, but something does.


If I ask you to solve a difficult puzzle in one second, is your failure an unforced error?


I wouldn't say so... omnipotence is something else entirely.


This is "bounded rationality" [1], where people make the best decisions possible given computational constraints on how they make decisions. A lot of interesting work tries to derive human cognitive biases from this idea.

[1] https://en.wikipedia.org/wiki/Bounded_rationality
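A toy sketch of the bounded-rationality idea (the budget and utility function are invented for illustration): the agent is "optimal" only within a fixed computation budget.

    def bounded_choice(options, utility, budget):
        """Evaluate at most `budget` options and return the best one seen."""
        best, best_u = None, float("-inf")
        for opt in options[:budget]:
            u = utility(opt)  # each evaluation costs compute
            if u > best_u:
                best, best_u = opt, u
        return best

A budget of at least len(options) recovers the classic rational actor; a small budget produces systematic, bias-like deviations from the optimum.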


> The rational actor model assumes that a person will behave optimally - using all information available to make and carry out the best decision possible for their goals. I strongly suspect that a better model is that people, instead of optimizing their outcomes, optimize the ease of decision making while still arriving at an acceptable course of action.

This is a very profound insight that I completely agree with. I've noticed that exact phenomenon in my own life and in my peer groups. Basically disengaging, not looking for new local maxima (in fairness, because they are hard to detect as they are happening) because the current situation is good enough to keep coasting on.


> optimize the ease of decision making while still getting an acceptable course of action

This might explain some behavior, but how does this model explain why many people choose to hurt others out of spite even when it means hurting themselves? Those choices are neither easy, nor optimal, nor ultimately acceptable, as many people who do stupid things like that end up regretting them. It seems to me, and to most of historical humanity, that something is fundamentally broken in us beyond merely settling for an acceptable outcome instead of the optimal one. Sometimes we deliberately choose to do something very difficult that we know is wrong because we desire the bad outcome. That is messed up.


Punishing bad behavior is critical for any social group. If people know that no matter how much they break the social contract you're not going to do anything about it, the social contract no longer exists. This goes for both future interactions with the person who made the transgression, as well as third parties who are aware of the transgression.

Now a rational actor would carefully evaluate the consequences of possible responses to come up with an appropriate option, and if the cost of their feud were greater than the likely reward, they'd simply let it go. While it leads to better outcomes, this is a slow and draining process.

On the other hand, a simple "eye for an eye" response will often lead to suboptimal results, particularly when the perceived slight is very different from the actual transgression, but people will still be hesitant to mess with you all the same. While in our modern era of functional justice systems this approach is generally unnecessary, the overwhelming majority of our evolutionary history did not afford such a luxury.
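The "eye for an eye" heuristic is close to tit-for-tat from the iterated prisoner's dilemma (my framing, not the parent's): cooperate first, then mirror the other party's last move. No costly deliberation, yet it deters exploitation in repeated interactions.

    def tit_for_tat(opponent_history):
        if not opponent_history:
            return "cooperate"
        return opponent_history[-1]  # punish defection, reciprocate cooperation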


I tend to optimize for the least amount of perceived effort, most benefit, least actual productive output. I will spend 2 days writing a utility so I never have to do the same repetitive 30 second task twice.

I’m rationally irrational.


All of this is great; you just need to adjust your threshold for automation so that it results in a net savings of time.
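The threshold is just break-even arithmetic (all numbers below invented for illustration): automating wins only if the time it saves over the task's remaining lifetime exceeds the time spent building the tool.

    def worth_automating(build_hours, task_minutes, times_per_week, weeks_remaining):
        saved_hours = (task_minutes / 60) * times_per_week * weeks_remaining
        return saved_hours > build_hours

    # Two days (~16 hours) of tooling vs. a 30-second weekly task over a year:
    print(worth_automating(16, 0.5, 1, 52))  # False: ~0.43 hours saved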


I think there cannot be a single perspective for optimal behavior. If I'm working I want to be efficient; the opposite is true when I want to relax. When I want to have fun or be creative, rationality isn't necessarily good company.

I also don't want to take every opportunity I get; that would be pretty exhausting. I would have the opportunity to save some taxes if I invested a few hours into tax law. Certainly an opportunity, and pretty productive. But I just don't want to, because I hate doing taxes.

Sure, these models do not apply to individuals (although this fact is often neglected). Also, a model is always a simplification. Intrinsic to that is that it will by definition only ever be approximate. It neglects parts of reality, hopefully the less important ones, but you cannot be sure about the correctness and extent of the approximation.

For example, if a behavioral scientist I just don't like for whatever reason suggests I should exercise more, I might go eat an extra tub of ice cream. This would render "nudging" quite ineffective, or worse, have the opposite of the intended effect.

I think it is more constructive to accept the limitations of a model. It can still help with prognosis and diagnostics. Why is it, for example, that people exercise less? Probably workload, or distractions from entertainment, or whatever the reason. I think the field should concentrate on trying to get answers to such questions.

Psychology is interesting, and much of the content that cannot be replicated is probably still true under certain circumstances. But for generalization, those circumstances need to be known.


The basic form of the rational actor model assumes nothing more than that the prediction errors an actor makes should be assumed to be unsystematic [unless proven otherwise] for the purpose of modelling. (And by extension, that some empirically observed group-level systematic deviations from theoretically optimal behaviour might be better explained by constraints on the ability to act than by an inability to anticipate.)

Which is a pretty good null hypothesis, actually.

That's entirely consistent with people frequently optimising for ease of decision making; it's just not consistent with slavish adherence to a particular specified decision-making function an economist has designed policy around exploiting. The canonical example in macroeconomics is that if a government announced its intention to increase inflation, it would be unreasonable to assume that people weren't rational enough to consider asking for a pay rise.
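Toy arithmetic for the announced-inflation example (the rates are invented): if workers rationally demand a raise matching announced inflation, the real wage is unchanged and the policy gains nothing from surprise.

    inflation = 0.05
    nominal_raise = 0.05  # rational response: match the announcement
    real_wage_change = (1 + nominal_raise) / (1 + inflation) - 1
    print(real_wage_change)  # 0.0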

Epicycles were abandoned because we had a more parsimonious default model, not because we wanted to have a more complex idea of reality and handwaved about maybe being more multidisciplinary.


Also, evolution takes into account uncertainty of information. In contrast, when we reason intellectually, our first step is to clean up the data and get clear on what the question actually is - though we typically don't count that as "reasoning".

On the last point, evolution doesn't dock points for missed opportunities... provided someone else didn't miss them.


The deeper problem is modeling goal setting. We know people will hurt themselves to punish others, yet economics is stuck on the assumption that people only wish to maximize value. People are much more complex than that.



