I mean the productivity paradox was only temporarily remedied. Around 2005 we entered a second version of the paradox, and it persists to this day. I'll note that 2005 was when the internet became dominated by walled gardens and social media, _and_ it was the last year that people got to use the internet without modern smartphones (LG announced its touchscreen smartphone in late 2006, and Apple released the iPhone in 2007).
The combination of attention-draining, walled-garden social media and high-performance pocket computers (which are really designed for consumption rather than productivity) created a positive feedback loop that helped destroy the productivity we won by defeating the paradox in the 1990s. And we have been struggling against this new paradox for twenty years since. AI seems like it should defeat the paradox because it is a kind of hands-free system, perfect for mobile phones -- but this is really just a very expensive solution to a problem that we created and allowed to fester. We could instead just shun the walled gardens and demand to be paid for our attention and data.
The new productivity paradox (which I do not think AI in its current form can fix[1][2]) is the price we pay for a prosperous and valuable advertising industry. And as long as the web is seen as an ad channel, and as long as the web is always vibrating in your pocket, we will keep paying that price. We will eventually end up (metaphorically) lobotomizing our children, families, and communities so that the grandchildren of ad executives and tech-bros and frat-bros can grow up healthy, psychologically stable, educated, and comfortably wealthy. (Brain drain: now available literally everywhere.)
[1]: It is telling that most LLMs are centralized and are most useful as search engines / information-retrieval systems. The centralization makes them _spyware_, and their ability to directly answer any question encourages users to ask direct questions instead of stringing search terms together. This makes the prompts high-signal advertising data (i.e. instead of inferring what you are looking for from a search string, these companies can see _exactly_ what you are looking for and why -- and with LLMs, they can probably turn these prompts into joint-probability tables, or whatever other serialization they need, to figure out which products to sell you, either on the web or directly in the response to your prompt).
[2]: As far as copyright infringement goes, LLM outputs may require mass clean-room rewrites (so your productivity, as pathetic as it already is, now gets _halved_ long term) of text, prose, code, and anything else produced with them, because of how copyright law works. In legal circles this is called _the fruit of the poisonous tree_, and any short-term productivity gains may become long-term liabilities that need to be replaced by _legal mandate_ -- so even if LLMs can eventually produce perfect and faultless outputs, the copyright laws _in all 200+ countries_ would have to be torn down and rebuilt (and that will certainly come at great expense).