It's pretty slow, though -- looks like up to 60 seconds for some of the answers -- and uses god knows how much compute, so there are probably going to be some trade-offs: you're going to want to make sure that much context is actually useful for what you want.
TBF: when talking about the first "superintelligence", I'd expect it to take unreasonable amounts of compute and/or be slow -- that can always be optimized. Bringing it into existence in the first place is the hardest part.
Yeah. Of course for some tasks we need speed, but I've been kinda surprised that we haven't seen very slow models which perform far better than faster ones. We're treading new territory, and everyone seems to make models that are just "fast enough".
I wanna see how far this tech can scale, regardless of speed. I don't care if it takes 24 hours to formulate a response. Are there "easy" variables that drastically improve output?
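For concreteness, one commonly cited "easy variable" is inference-time sampling: generate N candidate answers and keep the best-scoring one, trading wall-clock time roughly linearly for quality. A toy sketch of that idea in Python -- `generate` and `score` here are hypothetical stand-ins, not any real model API:

```python
import random

# Hypothetical stand-in for a model call: returns one sampled completion.
def generate(prompt: str, temperature: float = 1.0) -> str:
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"

# Hypothetical quality judge (in practice: a reward model, verifier,
# or the model grading its own output).
def score(prompt: str, completion: str) -> float:
    return random.random()

def best_of_n(prompt: str, n: int = 64) -> str:
    """Trade latency for quality: sample n completions, keep the best.
    Run serially, wall-clock time grows roughly linearly with n."""
    candidates = [generate(prompt, temperature=1.0) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("Prove the lemma...", n=8))
```

The catch, of course, is that this only helps as much as the scorer is good -- which may be part of why "just run it 1000x longer" hasn't obviously produced far better models.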
I suspect not. I imagine people have tried that. Though I'm still curious as to why not.