The algorithms have this tendency: when they use counterfactual reasoning to make their decisions, they assume their opponent is a Nash player like themselves. Sometimes they don't have a Nash opponent, but they persist in the assumption anyway. In the cognitive bias framing this tendency is an error. In the game theoretic framing it corresponds to minimizing the degree to which you can be exploited. You can find games where the algorithm plays against something that isn't Nash, so it was operating according to a flawed model, and you can call it biased for assuming that others operated according to that model. From a complexity perspective, though, this assumption lets you drop an infinite number of continuous strategy distributions from consideration - with strong theoretical backing for why it won't hurt you to do so - since Nash is optimal according to some important metrics.
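As a toy illustration (rock-paper-scissors with a made-up always-rock opponent, not any real solver): the Nash equilibrium here is the uniform mix, and an agent that assumes a Nash opponent keeps playing that mix even when the actual opponent is trivially exploitable.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def nash_agent():
    # Plays the equilibrium mix regardless of observed history:
    # unexploitable, but it never exploits either.
    return random.choice(MOVES)

def always_rock():
    # A decidedly non-Nash opponent.
    return "rock"

def score(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

total = sum(score(nash_agent(), always_rock()) for _ in range(100_000))
print(total / 100_000)  # ~0.0: guaranteed value, exploitation forgone
```

A best-responder that modeled the opponent would play paper every round and average +1 instead of 0; the Nash assumption trades that upside for a guarantee against being exploited itself.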
- Attentional bias
The tendency to pay attention to some things and not others. Alpha-beta pruning is an example: you can find sacrificial moves - lines that look bad before they pay off - whose early pruning shows the existence of this bias. The conceit in the cognitive bias framing is that this is stupid because some of the ignored things might be important. The justification is that some things are more promising than others and we have a limited computational budget; better to stop exploring the things which are not promising, precisely because they are not promising, and direct effort where it pays off. In the cognitive bias model, something like an upper confidence bound tree search - which balances the explore/exploit dynamic as part of approximating the Nash equilibrium - becomes erroneous reasoning because it doesn't choose to explore everything: a lesser form of anchoring effects as they relate to attentional bias, since it weights the action values from promising rollouts more highly. (See the sketch below.)
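A minimal alpha-beta sketch over a hand-built toy tree (a nested list is an internal node, a number is a leaf value); the tree and values are illustrative assumptions, not from any real engine. Branches that cannot affect the result are never visited - the "attentional bias" in action:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        val = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if alpha >= beta:                # cutoff: stop paying attention
            break
    return best

# The 9 in the second subtree and the 1 in the third are never visited:
# once a branch can't change the answer, the search ignores the rest of it.
tree = [[3, 5], [2, 9], [0, 1]]
print(alphabeta(tree))  # 3
```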
- Apophenia
Hashing techniques are used to reduce dimensionality. There is an error term here, but you gain reasoning speed. This is seen in blueprint abstraction - the poker example I gave - since we hash down by similarity to bucket similar things together. This gives rise to things like selective attention (another bias, and one related to this general category of bias). A sketch of the bucketing idea follows.
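A minimal sketch of that bucketing, with made-up equity numbers and bucket count (not taken from any actual blueprint): hands with similar strength hash to the same bucket, so strategy is computed over a handful of buckets rather than every distinct hand - faster reasoning at the cost of an error term when two hands that deserve different play collide.

```python
NUM_BUCKETS = 5

def bucket(equity: float) -> int:
    # Dimensionality reduction: map a continuous equity in [0, 1]
    # down to one of NUM_BUCKETS discrete buckets.
    return min(int(equity * NUM_BUCKETS), NUM_BUCKETS - 1)

# Illustrative equities only.
hands = {"AA": 0.85, "KQs": 0.63, "KJs": 0.61, "72o": 0.32}
for hand, equity in hands.items():
    print(hand, "-> bucket", bucket(equity))
# KQs and KJs collide in bucket 3: the abstraction "selectively attends"
# to hand strength and discards whatever distinguishes them.
```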
Jumping ahead to something like confirmation bias: the heuristics all of these algorithms use are flawed in various ways. They see that a heuristic is flawed after a node expansion and update their beliefs about that node, but they don't update the heuristic itself. In fact, if a flawed heuristic was working well enough to win, we would have greater rather than lesser confidence in it, reinforcing the bias. The sketch below shows the asymmetry.
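Here is a sketch of that asymmetry, with a deliberately arbitrary `heuristic` and a hypothetical `expand` standing in for a real evaluator and move generator: expansion corrects the belief about a node, but the function that produced the wrong guess is never touched.

```python
def heuristic(state):
    # A fixed, flawed evaluator. It is never modified, no matter how
    # often expansion proves it wrong.
    return len(state) % 3  # arbitrary flawed scoring

def expand(state):
    # Hypothetical successor function, for illustration only.
    return [state + "a", state + "b"]

def search(state, depth):
    if depth == 0:
        return heuristic(state)          # belief = heuristic guess
    # Belief about *this* node is corrected by looking deeper...
    value = max(search(c, depth - 1) for c in expand(state))
    # ...but the heuristic that was wrong about it stays as it was.
    return value

print("heuristic guess:", heuristic("x"))   # 1
print("after expansion:", search("x", 2))   # 0: belief updated, heuristic not
```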
---
Putting all that aside, I would caution against specificity in understanding my point. Approaching it in this direction - very specific examples - is horrible because it directs attention to the wrong things. When you look at a specific example you're always in a more specific situation, and a more specific situation is more computationally tractable than the general one the algorithm was handling. So trying to focus on examples will give you weird inversions where the rules that applied in general don't apply to the specific situation.
You need to come at it from the opposite direction - from the problem description to the necessary constraints on your solution. Then the error in reasoning falls out as a natural result of trying to do well.
Could you give an example of this?