Hacker News

This April 3rd Pew research study is some fairly interesting reading:

https://www.pewresearch.org/internet/2025/04/03/how-the-us-p...

They find that the general public is overall much more skeptical that AI will benefit anyone, much more likely to view it as harmful and much less excited about its potential than "AI experts". A majority of Americans are more concerned than excited. There is interestingly a large gender gap between men and women -- women are much less likely to view AI favorably, to use it frequently or to be excited about its potential than men.

There is some research to suggest that consumers are less likely to buy a product and less likely to trust it (less "emotional trust") when AI is used prominently to market it:

https://www.tandfonline.com/doi/full/10.1080/19368623.2024.2...

So I think the data suggests that while there is excitement around AI, overall consumers are much less excited about AI than people in the industry think and that it may actually impact their buying decisions negatively. Will this gap go away over time? I don't know. For any of you working in tech at the time, was there a similar gap in perceptions around the Internet back in the days of the dot com bubble?

The other problem as pointed out is that MANY things are labeled as AI, ranging from logistic regression to chatbots, and probably there is more enthusiasm around some of these things than others.



Prior to the dot-com bubble itself, hype for the growing potential of the internet was modest and mostly in line with organic adoption and exploration. People at large weren't anticipating a revolution. They were just enjoying the growing array of new products and opportunities that were appearing.

During the dot-com bubble, inasmuch as it represented a turning tide, this trickle had reached a tipping point and we witnessed a tsunami of innovative products that consumers were genuinely fascinated by. There were just too many of them for the market to sustain them all, and a correction followed, as you would expect.

This AI story is basically the opposite, much like the blockchain story. Many investors and some consumers who have living or borrowed memory of the dot-com bubble or the smartphone explosion really, really want another opportunity to cash in on an exponentially expanding market and/or live through a new technological revolution, and are basically trying to will the next one into existence as soon as possible, regardless of whether it grows organically or serves any practical need.

In contrast to blockchain hype, maybe it'll work here. Maybe it won't. But it's fundamentally a different scenario from the dot-com bubble either way.


> are basically trying to will the next one into existence

I think this hits the nail on the head. At least, it's the only explanation I've heard that makes any sense.


There've been a _few_ of these over the last decade: _two_ attempts at blockchain stuff (an initial "use blockchains for everything" one, and a "use NFTs for everything" one a couple years after the first crashed and burned). And then of course there was 'metaverse'.


What is the venture capitalist rhyme for "fake it until you make it" but starts with "hype it..."?


It is "pump and dump", hyping is pumping.


VCs just need to make sure there is enough hype by the time AI startups IPO, so that they can cash out. It's part of a bigger trend in finance, arguably caused by quantitative easing. It's why Uber could IPO despite never having made a profit; thanks to the hype, its stock price did great on day one.


"Market the hype till the market is ripe"?


I find it interesting how a lot of other comments are saying "HN users are a bubble, the public is actually really excited about AI", when the research indicates that the general public is even less interested in AI than HN is.

It's fine to express your opinion on AI, whether positive or negative. It's even fine to share anecdotes about how other people feel. Just don't say that's how "most people" feel without providing some actual evidence.


I think HN is a space where practically everyone has a grasp of what AI is and is not capable of, and of what tools could theoretically exist in the near future. I also think that HN is a space where there is not a consensus on whether AI is "good" or "bad," and there is a lot of discourse on the subject.

In my experience, this makes HN probably one of the most pro-AI spaces around. Most people in my life feel more negatively about AI, without much defense of it (even if they do use it). The only space in my life that is more pro-AI than HN is when people from the C-suite are speaking about it at work meetings :/


How is HN an anti-AI bubble? People post their AI projects here all the time. That's not something the general public is doing.


In the sense that many anti-AI articles are shared and upvoted, and many anti-AI or AI-skeptical comments are upvoted. Of course, a lot of pro-AI and AI-enthusiastic content is also shared and commented on.

HN has many, many users, and they're not all monolithic, which explains 90% of the questions along these lines that people raise.


ChatGPT has 400 million weekly users and you're under the impression most people don't want AI?


I believe most people don't want AI because I read the Pew Research report linked by the parent comment, which indicated most non-experts don't want AI. That report has a pretty large sample size, the methodology seems sound, and Pew is an organization that's historically pretty good at studying this sort of thing.

Obviously one report is not the end of the discussion. And if more research is done that indicates that most people really are interested in AI, I'll shift my beliefs on the matter.

I was interested in that 400 million weekly user number you posted, so I did a little digging and found this source [1] (I also looked through their linked sources and double checked elsewhere, and this info seems reasonably accurate). It seems like that 400 million figure is what OpenAI is self-reporting, with no indication how that number is being calculated. Weekly user count is a figure that's fairly easy to manipulate or over-count, which makes me skeptical of the data. For example, is this figure just counting users that are directly interacting with ChatGPT, or is it counting users of services that utilize the ChatGPT API?

In addition, someone can use ChatGPT while having a neutral or negative opinion of it. My linked source [1] indicates that around 10 million people are actively paying for a ChatGPT subscription, which is a much more modest number than 400 million weekly users. There clearly are a lot of people who use and like AI, but that doesn't mean the majority of the population feels positively about it.

[1]: https://backlinko.com/chatgpt-stats


I use an AI chat service, but would prefer that research and investment that might yield more powerful AIs be banned. Maybe that is what the survey respondents meant when they said that they don't want AI.


You don't really have to rely on self-reported numbers to see its scope. It has become that massive. ChatGPT was the 6th most visited site in the world in March and will likely be 5th in April.

https://x.com/Similarweb/status/1909544985629721070

The idea that most visitors to a site that consistently gets billions of visits every month hold a negative opinion of it seems more delusion than reality.


How many are paying?

That's the ultimate test: how many users will pay for something like this?

I also have usage pattern questions but I don’t think OpenAI publishes much data as to how their platform is most commonly used


Conversely the ultimate test may be that once the free plan is monetized with ads, how many users will continue to use ChatGPT.


Absolutely. People using it to write shitposts and spam and a first draft of something is one thing, but "fun toy" is not the same thing as "sea change"


The article is so stupid. AI is replacing traditional Google search. It's everywhere already and people not only want it, they use it every day. My Google phone replaced the old assistant a while ago.


When I buy a car I want to talk with a salesman, but outside of that I don't want salesmen ambushing me with car deals.

ChatGPT is like the former, while the AI features are like the latter.


> general public is even less interested in AI

Part of the problem is that the term has become utterly diluted to the point of becoming meaningless - any computer system can be called AI nowadays.

But what I think people dislike most is the genre of Generative AI aka AI slop: images and texts generated by machines, often of low quality, unchecked or barely checked by humans. Another one is cost cuts from replacing human support with automated responses, which can give you an abysmal experience even in trivial cases.


> For any of you working in tech at the time, was there a similar gap in perceptions around the Internet back in the days of the dot com bubble?

I wasn't drawing a paycheck from tech at the time, but I was a massive nerd, and from my recollection: Yes, absolutely. Dialup modems were slow, and you only had The Internet on a desktop computer. Websites were ugly (yes, the remaining 1.0 sites are charming, but that's mainly our nostalgia speaking), and frequently broke. It was (or could be) expensive: you had to pay for a second phone (land!) line (or else deal with the hassle of coordinating phone calls), and probably an "internet package" from your phone company, or else pay by the minute to connect; and, of course, rural phone providers were slow to offer any of those options. Commerce, pre-PayPal, was difficult - I remember ordering things online and then mailing a paper check to the address on the invoice!

Above all, we underestimate (especially in fora like this) how few people actually were online. I don't remember exact numbers at any particular times, but I remember being astonished a few times: the 'net was so ubiquitous in my and my friends' lives that my reaction was "What do you mean, only X minority of people have ever used the internet?" For people who weren't interested in tech (the vast majority), seeing web addresses and "e[Whatever]" all over the place was mainly irritating.

Those elements and attitudes are certainly analogous to AI Hype today. Whether everything else along that path will turn out roughly the same remains to be seen. From my point of view, looking back, the most-hyped (or maybe just most-memorable) 1.0 failures were fantastic ideas that just arrived ahead of their time. For instance, Webvan = InstaCart; Pets.com = Chewy; Netbank = any virtual bank you care to name; Broadcast.com = any streaming video company you care to name; honorable mention: Beenz (though this might be controversial) was the closest we ever came to a viable micro-payments model.

The necessary infrastructure for (love it or hate it) a commercialized web was the smart-phone, and 'always on' portable connectivity. By analogy, the necessary infrastructure for widespread, democratized AI (whether for good or for ill) may not yet exist.


For me it implies that we need a better way to quote credentials. We have plenty of skeptical experts, who were researching machine learning for decades.

AI experts are the same crowd as blockchain "experts". I used to work in the industry. Between me and my employer, I was the one who knew how the whole thing worked, but he was the one quoted as an "expert". And I can't even call myself an expert, because for sure the real experts at the time were writing protocols, not implementing parts of them.

If there is a rift on whether or not AI benefits people right now, and the rift is between users and experts, the problem is the experts that were asked. People don't need much convincing to find something helpful, we love to cut corners.


I think the concept that AI can be used to sell itself -- that a product is more valuable simply because it incorporates AI -- has to end, and soon.

If you can actually use it to build a better product on its own terms, great. But as has ALWAYS been true, a product has to actually be good.


>I think the concept that AI can be used to sell itself -- that a product is more valuable simply because it incorporates AI -- has to end, and soon.

I can't help but think of the iPhone 16 series's top-line marketing: "Built for Apple Intelligence." In practice, the use cases have been lackluster at best (e.g., Genmoji), if not outright garbage (e.g., misleading notification summaries).

I feel like a lot of AI use cases are solutions looking for a problem, and really sucking at solving those problems where the rubber meets the road. I can't even get something as low-stakes and well-bounded as accurate sports trivia and stats out of these systems reliably, and there's a plethora of good data on that out there.


I'm not skeptical about AI. I'm skeptical that the companies behind AI will deliver a product that makes my life better. Anything genuinely useful will be a toy or get bought and shut down, while the ones that survive will steal my personal data and serve me ads.


idk have you ever been stuck on some AI chatbot?

Some credit card companies have botched the chatbot process: "lost/forged credit card report" and "talk to a person" are essential support flows, but they require you to enter your PIN to get through.

(If the request for a new credit card is faked, you're out of luck.)


Incredible. Thanks for sharing.



