Another thing to note: ChatGPT is configured to respond concisely to reduce cost (every token it generates costs money), and this brevity reduces its effective reasoning ability. You literally have to tell it to think about what it is saying and to work through all of the possibilities iteratively. That is chain-of-thought prompting.
With that instruction added, GPT-3.5 figures out the correct solution on its first response:
"I am standing outside and observing the sun directly without goggles or filtering of any kind. The sun appears to be a shade of blue.
Where could I be standing? Think through all of the possibilities. After stating a list of possibilities, examine your response, and think of additional possibilities that are less realistic, more speculative, but scientifically plausible."
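The pattern in the prompt above, a question followed by explicit "enumerate, then re-examine" instructions, can be captured in a small helper. This is a sketch only; the function name and the exact instruction wording are illustrative choices of mine, not part of any API:

```python
# Chain-of-thought prompting sketch: append instructions that force the
# model to enumerate possibilities and then revisit its own answer.
# The suffix wording mirrors the example prompt above.
COT_SUFFIX = (
    " Think through all of the possibilities. After stating a list of"
    " possibilities, examine your response, and think of additional"
    " possibilities that are less realistic, more speculative, but"
    " scientifically plausible."
)

def with_chain_of_thought(question: str) -> str:
    """Return the question augmented with chain-of-thought instructions."""
    return question.rstrip() + COT_SUFFIX

prompt = with_chain_of_thought(
    "I am standing outside and observing the sun directly without goggles"
    " or filtering of any kind. The sun appears to be a shade of blue."
    " Where could I be standing?"
)
print(prompt)
```

The resulting string would then be sent as the user message to whichever chat model you are using; the key point is that the reasoning instructions travel with every question rather than being left implicit.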