I was talking recently to someone who teaches AI-adjacent courses at a US university (not in a computer science department) and they said that enrollment in their class is lower than expected, which they think is likely due to the severity of the AI backlash among students on campus.
AI applications that would help normal people in a significant way are pretty lacking, so I'm not surprised. So much of the conversation about AI products cycles through "this tech will change everything" without material backup outside of coding agents.
How much of the workforce does organising and other information dissemination or transformation work?
I'm more on the skeptical side than the evangelist side, but I can see how large parts of such work could theoretically be shifted away from humans. Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking.... AI is a whole lot of hot air when you consider the "second 80%" of the work involved in any of these tasks, but that's still a lot of jobs that it may make little sense to start studying for in these years, until you have some idea how the field will develop, or whether there will be a giant surplus of, say, French-native Spanish language experts. At least for those for whom a given field of study is not a real passion and who might as well choose something else.
> Planning someone's agenda, preparing relevant documents, arranging and coordinating things, translations (speech or text), narration, grammar checking
The issue is, these things "lie" subtly and not so subtly (they make up issues, rename agenda items, forget questions, and change meanings all the time), and for me that is a deal-breaker for a business tool that I need to rely on.
Yes, for me as well, but large chunks of these tasks seem within the realm of what they can do when you break them up into small enough bits and control the prompt very tightly.
Particularly machine translations are no worse than what an untrained native speaker would come up with, and much better than traditional translators (due to some level of context "understanding", or a simulation thereof, at least). At 50x human speed, the energy consumption is also lower than keeping a human alive for that time. There is no scenario in which this capability goes unused.
Or grammar checking: if you catch 98% of errors (as even some of the weaker models seem to achieve), the editor who'd otherwise do this can spend time on more intellectually stimulating things.
It's not that there's no downsides but it also seems silly to dismiss it altogether
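The "small bits, tight prompt" approach above can be sketched roughly. This is only an illustration, not a real integration: `call_llm` is a hypothetical stand-in for whatever model API you'd actually use, and the chunk size and prompt wording are made up.

```python
# Rough sketch of the "small bits, tight prompt" idea.
# `call_llm` below is a hypothetical stand-in for a real model API.

def chunk_paragraphs(text: str, max_chars: int = 800) -> list[str]:
    """Split text on blank lines, greedily packing paragraphs into chunks."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# A deliberately narrow prompt: one task, no room for creative rewriting.
PROMPT = (
    "Fix grammar and spelling only. Do not rephrase, reorder, add, or "
    "remove content. Return the corrected text and nothing else.\n\n{chunk}"
)

def check_grammar(text: str, call_llm) -> str:
    # Each small chunk gets its own tightly scoped call,
    # limiting how far any one "lie" can drift.
    return "\n\n".join(
        call_llm(PROMPT.format(chunk=chunk))
        for chunk in chunk_paragraphs(text)
    )
```

Small, independent calls also make spot-checking easier: a bad output is confined to one chunk instead of silently rewriting a whole document.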
> Particularly machine translations are no worse than what an untrained native speaker would come up with, and much better than traditional translators
Sometimes. I use Google Translate (literally the same architecture, last I heard), and when it works, great. But every single time I've tried demonstrating that it can't do Chinese by quoting the output it gives me for English-to-Chinese, someone has replied to tell me that the translated text is gibberish*.
Even with an easier pair, English <-> German, I sometimes get duplicate paragraphs. And there are definitely still cases where even the context comprehension fails, as you can see by going to a random German website, e.g. https://www.bahn.de/, in e.g. Chrome and translating it into English: notice the out-of-place words, like a destination rendered as "goal", or tickets in "1st grade" and "2nd grade" instead of class.
* I'm curious if this is still true, so let's see:
I'm not sure if we're on the same page. I mean LLMs, right? Not whatever Google Translate and DeepL use. The latter was better than gtrans when it launched; nowadays it's probably similar, I don't know. Both are clearly machine learning, but the products (and their quality) predate LLMs. They're not LLMs, and they haven't noticeably improved since LLMs appeared. Asking an LLM produces better output (so long as the LLM doesn't get sidetracked by the text's contents), presumably at orders of magnitude higher energy consumption per word, even if you ignore training.
I agree that Google Translate, now on par with DeepL's free product AFAIK (but I'm not a gtrans user, so I don't know), is decent but not a full replacement for humans, and that LLMs aren't as good as human translators either (not just for attention reasons), but it's another big step forward, right?
I'm not sure what DeepL uses, but Google invented the Transformer architecture, the T in GPT, for Google Translate.
IIRC, the original difference between them was about the attention mask, which is akin to how the Mandelbrot and Julia fractals are the same formula but the variables mean different things; so I'd argue they're basically still the same thing, and you can model what an LLM does as translating a prompt into a response.
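The attention-mask point above can be illustrated numerically. This is a sketch, not either model's actual code: a GPT-style decoder uses a causal mask so each token only sees earlier positions, while a translation model's encoder lets every token attend to every other.

```python
import numpy as np

def causal_mask(n: int) -> np.ndarray:
    # Decoder-only (GPT-style): token i may attend only to positions <= i.
    return np.tril(np.ones((n, n), dtype=bool))

def full_mask(n: int) -> np.ndarray:
    # Encoder (translation-style): every token attends to every token.
    return np.ones((n, n), dtype=bool)

def masked_attention_weights(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
    # Standard masked softmax: disallowed positions get -inf before the
    # softmax, so they receive exactly zero attention weight.
    scores = np.where(mask, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform scores, the causal mask spreads attention over the visible
# prefix only: row 0 sees just position 0, row 2 sees all three positions.
w = masked_attention_weights(np.zeros((3, 3)), causal_mask(3))
```

It's the same softmax-attention formula in both cases; only which entries are masked out differs, which is the sense in which they're "the same formula with the variables meaning different things".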
I didn't know that! I had heard they made transformers and (then-Open)AI used them in GPT, but that explains why Google wasn't first to market with an LLM product: the intended application was translation.
> It's not that there's no downsides but it also seems silly to dismiss it altogether
Definitely silly to dismiss them altogether, but the issue is using it for everything, even where it's not appropriate or not reliable; so in the context of my posting, I can't rely on it for the things I outlined, that's all.
I assume "lie" is just a manner of speaking there, like a judgmental term for hallucination.
I remember HN piling on me for saying something along the lines of evolution causing a property (am I stupid, do I not understand that it's not intelligently chosen?) rather than making some unwieldy statement about a property being under positive selection pressure. I'm also much more familiar with the English phraseology of that non-tech topic now (so I can actually say it in the few words I just used); do we even have that vocabulary for LLMs?
You make it sound as if "coding" were a distinct thing with clear boundaries in the technical world. But this critically misses the fact that coding agents have dramatically lowered the barrier to controlling everything with a microchip in it. The only thing that exists "outside [the reach] of coding agents" is the purely analog world, and that boundary will get fuzzier than it is perceived to be.
If it's fundamentals of ML, I'm surprised to hear that.
If it's "how to use ChatGPT for creative writing" then I'm not surprised. Why would someone take a class from a teacher who has had only just as much experience with these tools as their students have?
Agreed… OP said “not CS”, so it doesn’t seem surprising. If we’re going by anecdotes, AI classes in the CS dept have risen in popularity in the past few years.
Where are you seeing CS classes with increasing enrollment? Everyone I know says they're seeing smaller classes. Maybe some upper-division courses from the last swell, but we're all definitely declining this year and last, from what I'm seeing.
I actually feel the opposite. I don't think people from outside CS will have much interest in the very basics of AI, because there is usually a huge gap between "this is how backpropagation works" and any AI model that is remotely useful. And if you are interested in the fundamentals themselves, you would probably be majoring in CS anyway.
A course on how to use existing AI tools would be pointless, but if there is anything I know about college students, it's that they love taking easy courses for easy credits.
Students avoid enrolling in a class for various reasons, but most likely because it's useless (or at least perceived as useless). At top universities, even notoriously challenging courses have a decent class size.
The biggest visible AI impact, for me, is vibe coding. There, I am convinced that the hype will collapse and set the most enthusiastic companies back by years.
On the downside we have untrustworthy, doom-or-glory-preaching CEOs, companies slashing jobs, AI companies going into the military business, hacks, spam, psychosis, and general anxiety and uncertainty.
Even if you don't believe the hype and know that AI is just statistics, there is nothing to be positive about. I can't blame anyone for dismissing it. Maybe the backlash is even the best thing that could happen: big tech won't take a sane route without civic supervision and calibration.
From what I know, there was progress in AI cancer detection before the hype; I consider the big-tech advances a sideshow for that field. I may be wrong.
I've heard nothing about the other stories. AI can code, write generic texts, and pull up a lot of knowledge. But the frontier models are general-purpose idiots, and any interesting specialization or innovation probably has nothing to do with them.