We had a mandatory ChatGPT training course at work. You had to sign up for limited-space classes. This is a large company; needless to say, it was chaos getting a significant number of people to participate.
I got a spot. We were shown how to copy and paste data from Excel and other data sources into the chat interface. We had sample data to work with, and there was always someone in class who would say "mine didn't work." The developers in the room asked about Codex; the instructor said she wasn't a developer.
We did get a certificate though. There was nothing they could teach that you couldn't learn by using the free version in your own time. Whatever they are doing with the Maltese government is just to increase the monthly active user count.
I’m now responsible for improving AI literacy in the organization I work for.
But the people in charge just want employees to answer some questions so they can hand over Claude or ChatGPT licenses and show that people are using AI to improve productivity.
There are people who don’t know when to use AI and when not to, and who think they can just Claude their way through everything. I wanted to change that, but when the whole idea is just to increase AI use, I guess they don’t care about how AI is used.
The leaders who mandate AI have no understanding of how to actually use it for productivity. They use it like a Magic 8-Ball to confirm whatever ignorance they have, and believe the hype that it can do anything.
I was on a quarterly demo the other day, and the project lead for AI innovation was talking about the things he's preparing for the company.
I will not address the things he pitched (as coming soon), as I'm a developer and (hopefully) not the target audience, but I was quite surprised when they ran a questionnaire asking how many people use AI and how frequently. (The target demographic was middle management, product owners, etc.)
75% of the people answering said they're using it daily and consider it an essential tool they need to work.
Considering it was anonymous I was expecting lower numbers, honestly.
In the recent past, my department received an email from on high with a list of people who were yet to complete the "anonymous" survey.
I always assume my work-survey answers are traceable back to me, whether via self-doxxing with my answers or via the rootkit-level MDM software that can record my screen, which they pinky-promise to only use for remote assistance in case I open a ticket with IT.
Most external survey providers claimed anonymity but stated in their T&Cs, in a very roundabout way, that they could provide some information to customers for quality purposes or something. Read: “we’ll deanonymize some users if the paying customer wants it.” Internal survey tools are subject to internal management pressure.
Even when you use a tool like Microsoft Forms, where MS really can’t be bothered to deanonymize users unless three-letter agencies get involved, it’s still possible to do timestamp matching between the proxy/VPN logs and the submission time.
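To illustrate how cheap that correlation is, here is a sketch with entirely made-up names and timestamps: any "anonymous" submission that lands within a few minutes of a user hitting the survey URL in the proxy log gets linked to that user.

```python
from datetime import datetime, timedelta

# Hypothetical proxy log: (user, time they requested the survey URL).
proxy_hits = [
    ("alice", datetime(2024, 5, 2, 9, 14, 3)),
    ("bob",   datetime(2024, 5, 2, 9, 41, 50)),
]

# Hypothetical "anonymous" submission timestamps from the survey backend.
submissions = [
    datetime(2024, 5, 2, 9, 15, 11),
    datetime(2024, 5, 2, 9, 42, 2),
]

def match(proxy_hits, submissions, window=timedelta(minutes=5)):
    """Pair each submission with the users seen hitting the URL shortly before it."""
    pairs = {}
    for sub in submissions:
        candidates = [u for u, t in proxy_hits if timedelta(0) <= sub - t <= window]
        pairs[sub] = candidates
    return pairs
```

With only a handful of respondents per time window, most submissions end up with exactly one candidate user, which is the whole problem.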
Assume real anonymity only if the URL is the same for everyone and you can fill in the survey from any computer on the internet.
But the explanation for why people overstate their AI usage is probably simpler. They want to keep their license because it’s a nice perk. They’ll use it to get the gist of a long email thread without bothering to read the details, to get some meeting minutes without validating that that’s actually what was said, to generate some crappy modern equivalent of WordArt graphics for their presentations, and feel like the time saved generating what most of the time is slop was worth it.
When I worked on this (outside of coding), it was a pain to find a use case that really benefited. The ones that did were all niche uses that fit an LLM like a glove. The rest was slop; I could see the usage reports and the BS self-reporting surveys. Everyone inflated the numbers and usage to justify keeping their license.
It wasn't, and it was visibly updating while people were submitting their answers. I just rounded it as I don't remember the exact number at the time they closed the submission.
Could still be faked ofc, but I don't think they did.
> 75% of people answering said they're using it daily and considered it an essential tool they need to work
> (The target demographic was middle management, product owners etc)
This leaves a fairly wide set of options for what "essential" entails.
Do 75% of middle management and product owners actually need AI for their job? Seems unlikely.
Do 75% of middle management and product owners use AI to slop up emails, meeting "summaries", and reports? That's quite possible. Would they declare it to be an "essential tool"? One imagines they are not too fond of actually doing meaningful work.
It's quite easy to get high percentages like this when the AI is involved in make-work and the costs are low if not zero. The moment inference costs go up, most of this usage will evaporate.
Never expect an anonymous vote/quiz/whatever to be fully anonymous in big corporations. If it's about a touchy topic and/or can affect a given person's employment or performance review, the results will be skewed. When a metric becomes the target, it ceases to be a good metric, and all that.
It all rests on the shoulders of the responsible manager(s) and how moral they are. Many are not.
My saddest interaction recently was with a friend with a 1st class degree in computing and several years experience in software engineering in many prestigious companies.
I asked if he had tried out Claude code or anything similar.
His answer: "My company has scheduled a training course on that, so I'll wait."
> We were shown how to copy and paste data from excel and other data sources into the chat interface.
Grnnnnnnnnnnnnnnnnnnguuurnnngh.
I remember the copy and paste drudgery from the early days of ChatGPT. It was a miserable and joyless experience. Nowadays (and for a long time) you can simply attach the file.
For everyone in the EU: copying and pasting sensitive data (like customer data) into AI tools is a violation of the GDPR, and potentially of the AI Act, which will be enforced soon.
I would be cautious to advocate these laws that strongly in the context of AI tools:
Companies and employees always make their decisions based on a risk/reward basis.
Sometimes a commercial contract (like Microsoft Copilot) is enough to cover your ass and to meet the needs of the regulator.
Even if the operator is exactly the same.
Laws are constraints to navigate, but if you are successful enough (ahem, rich) then they don’t apply to you.
At the moment what the EU wants is to make sure that in the long-term they can access your private information.
Realistically, if you are in the EU, you run a greater risk of the government arresting you for telling your darkest secrets to an EU-hosted model than to a Chinese model (whose operator doesn’t cooperate).
EU Chat Control is here to protect kids and protect you from terrorists; you don’t want to claim you support pedophiles, right?
So following these rules is always a matter of choice.
Comply, and you’re stuck with your shitty Mistral and no privacy; don’t comply, and you have your shiny Claude, though you have to think carefully about what you feed into it.
I agree with you; I could have made it more compact by making one point per paragraph. Sometimes it’s a bit difficult to cleanly articulate my ideas, and I try not to clean them up with GPT first, in order to keep the original tone.
As for people not liking it: I guess that when someone writes a long text, there is more chance of finding at least one point of disagreement than with a very short sentence.
You’re painting with too broad a brush: GDPR only covers personal information. There’s plenty of sensitive business information not covered by GDPR, for example per-customer revenue data, that is legal to put into an AI tool but that your employer may not want you to.
Oh I don't know, it seems like a good step forward towards regulatory capture. First partner, then certify, then require the certification. A limited regional beta, like launching your app in New Zealand first.
But if you can prove any kind of success with Malta then you can go to the next 10 "slightly bigger" nations out there and tell them "See? It worked very well with Malta". And then move to a bigger layer, and a bigger layer...
In practice, since this is valid for a year, it is essentially a free trial they are giving away, and they hope it may generate additional revenue at some point after that.
Maybe this is what will turn software engineering into an Engineering field.
Right now, prompters are setting up entire company infrastructures. I personally know one. He migrated the company's database to a newer Postgres version. He was successful in the end, but I was gritting my teeth as he described every step of the process.
It sounded like "And then, I poured gasoline on the servers while smoking a cigarette. But don't worry, I found a fire extinguisher in the basement. The gauge says it's empty, but I can still hear some liquid when I shake it..."
If he leaves the company, they will need an even more confident prompter to maintain their DB infrastructure.
As a junior dev, there is this pressure to produce code, add features, and investigate bugs within unprecedented time frames. I know the whole codebase is fked up, but I will still add that feature or do a sloppy bug fix without digging deeper.
In my experience, AI really lowered the bar for bad code in the name of delivering faster.
I have seen people write highly complex code where all the complexity was not necessary. Think: deep unnecessary branching, pointless error handling and retries which make no sense in our context, hand-coded parsing using regexps, haphazard data flow, functions which seem purely computational but slyly make API calls, pointlessly nullable model fields, verbose doc comments which describe the implementation instead of the contract. I could go on.
The worst part is, even when "prompted" by bad coders, it works in the end. Even has tests (ostensibly mock-ridden, a pet peeve of mine which always falls on deaf ears). So I cannot reject the PR without being an asshole.
I am no luddite. I make heavy use of AI, with all the skills / AGENTS.md / style guides and clear specs, then review every line of code, prefer testing with minimal mocking. I'd even say with right prompting, it can write better low level code than me (eg: anticipating common error conditions).
But my biggest fear about AI is how it enables normies with little to no understanding of CS principles to produce code faster which looks correct but slowly poisons the codebase.
> it works in the end. Even has tests (ostensibly mock-ridden, a pet peeve of mine which always falls on deaf ears). So I cannot reject the PR without being an asshole.
This is a social problem that I had thought the industry had solved a long time ago.
So many fallbacks. So many function_exists. So much pointless type casting. I swear it’s like the system prompt is designed to waste as many tokens as possible.
I have a friend, smart guy, who is writing web services and “connecting them together” for a large firm; he has absolutely no programming experience.
Talking to him, he told me he couldn’t even reverse a string. He is at once many times more valuable than ever before to his company, but also far more dangerous than ever before.
He's "smart" but he chooses to be in a business where he's presumptively willfully ignorant of the fundamentals (since he surely should be able to learn to reverse a string if he wanted to learn)? He doesn't have a more lucrative opportunity available? Or does he somehow have a skillset that makes him able to "connect web services together" by prompting AIs in ways that other people (including ones who can reverse strings, etc.) couldn't?
This form of being "smart" is a bit difficult for me to comprehend, I must admit.
This is what fascinates me. I have a friend, also a smart guy, who has made it to the point he’s at by being a kind of solutions expert. He’s an IT guy, basically. He’s very technical but has never claimed to be a software engineer. He’s writing software with Claude now. The other day he sent me a screenshot of some other team at his work asking him to shut off something he made that was brutalizing an API of theirs. I asked him if he had ever heard of a 429 or exponential backoff. He said no. How do you meta-prompt for that without knowledge?
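For anyone reading along who also hasn't met it: the fix being alluded to (back off exponentially when the server answers HTTP 429) is a few lines. This is a generic sketch, not any particular client library; `request` is a stand-in callable returning an object with a `.status_code`.

```python
import random
import time

def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry `request` on HTTP 429, sleeping exponentially longer each time.

    `request` is any zero-argument callable returning a response-like object
    with a `.status_code` attribute (an illustrative assumption).
    """
    for attempt in range(max_retries):
        resp = request()
        if resp.status_code != 429:
            return resp
        # Exponential backoff with jitter: ~1s, 2s, 4s, ... plus random noise,
        # so many clients don't all retry in lockstep.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError("still rate-limited after retries")
```

The point of the anecdote stands: if you don't know the pattern exists, you can't ask the model for it.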
You can create an agent in Claude with the role of Technical Lead / Architect and have it review your code. That depends on your agent specification. Just have ChatGPT generate that first.
If you get the logs you can feed them in and ask for improvements, that sometimes helps.
> In my experience, AI really lowered the bar for bad code in the name of delivering faster.
I would've believed that 6 months ago, but not now.
If you have a good codebase with proper rails, hygiene and architecture, AI will produce better code than most engineers out there.
People forget that 90% of the field has always been charlatans barely able to implement a fizz buzz or go much beyond trial and error googling.
I'll say even more. I'm in the 10%, and it's increasingly clear to me that AI writes in minutes code that's better than mine.
Even stellar and respected OSS engineers are nowadays leveraging AI and guiding it less and less every day, beyond giving indications of what kind of data structure they may want for a complex problem or the kind of architecture they are looking for.
In any case, I don't like this field anymore. I get no joy from it: way too much work, more changes than a human can cope with on both the product and technological level (not even counting AI and its tooling itself). The interesting parts, thinking for an entire afternoon or experimenting for a week to get a design right, disassembling the pros and cons, are gone.
Even if you want to do that, it's just faster to launch 6 or 7 worktrees with the different ideas and judge the results. But you don't get as intimate with the problem, and the amount of information is way more than you can process.
I'm hand-rolling a project right now because even the frontier models I use bloat things beyond comprehension. Because I'm intimately familiar with the domain, I know the shape of things, how the data should flow, and so on, and even if I spec it clearly, AI will write 2x to 5x the amount of code necessary to make something work.
"beyond comprehension" is a good way of putting it. I've been genuinely baffled by some of these AI designs - why any intelligent thing would write >10 lines of bloat for what should be a one-liner.
Proper rails, hygiene and architecture need to be actively maintained, they don’t just continue to exist in a developing codebase. Historically, a small proportion (the 10% as you say) had a disproportionate amount of influence on coding standards. When they can no longer keep up with that ongoing maintenance, which we’re seeing with the increased pressure to ship code, the hygiene will regress. We’re riding the tail of all the engineering practices we’ve developed as an industry.
This is what I’m seeing, anyways. Junior engineers are being rewarded for shipping so much code, it’s impossible to evaluate it all, and subtle changes in existing patterns are slipping through. Eventually all those subtle changes transform the rails.
Forgive my ignorance, but if the corpus of coding data was always 90% bad, isn't that the same data being used for training LLMs? How are they magically any better than that average?
When I read the discussions about AI making code worse, I keep bringing up the same argument: people made bad code even before AI. The average coder is barely functional, and that's a fact.
As others have elaborated, the problem is empowering them to ship mountains of bad code;
And yeah, many semi-technical M2s, or even M1s, can't distinguish bad code from good, or worse, bad architecture from good; this is a golden time for those willing to sacrifice the future for the present. Just burnnn'em tokenzzz.
People could, however, learn to not make bad code. LLMs are incapable of that feat because they do not have any understanding or ability to reason. They are strictly worse than a human.
And we were safe from them because they couldn’t produce a mountain of code every day. But soon many places will be buried under a planet of unmaintainable code. It’s adding friction and operational cost and often not adding value.
> Maybe this is what will turn software engineering into an Engineering field.
Oh man, I think you may have touched the third rail here.
My first job out of high school was as an AutoCAD/network admin at a large Civil & Structural firm. I later got further into tech, but after my initial experience with real Engineering, "software engineering" always made my eyes roll. Without real enforced standards, without consequences, it's been vibe engineering the whole time.
In Civil, Structural, and many other fields, Engineers have a path to Professional Engineer. That PE stamp means that you suffer actual legal consequences if you are found guilty of gross negligence in your field. This is why Engineering firms are a collective of actual Professional Engineer partners, and not your average corporate structure.
The issue is that in software dev, we move fast, SOC2 is screenshot theater, and actual Engineering would slow things way down. But, now that coding is fast, maybe you are correct! Maybe vibe coding is the forcing function for actual Software Engineering!
___
edit: I just searched to see if my comment was correct, and it turns out that Software PE was attempted! It was discontinued due to low participation.
> NCEES will discontinue the Principles and Practice of Engineering (PE) Software Engineering exam after the April 2019 exam administration. Since the original offering in 2013, the exam has been administered five times, with a total population of 81 candidates.
What makes it a profession is not just the certification, it's the burden of responsibility for consequences. Your lawyer, accountant, and real engineers carry "we need insurance for this" level of risk in their work, all the way up to "can go to prison for getting things really wrong".
Until and unless software is held to that standard, software will never be engineering and always just a craft that can be performed to any or no standard.
Note that other types of engineering are also often vibes based. The mechanical engineering for a rocket engine is extremely rigorous but the engineering for an injection molded housing for a cheap cell phone is a lot more about following a few heuristics and getting it out the door. Even in robotics where I work, it’s mostly about making parts that pass whatever acceptance tests you come up with. In civil engineering and aerospace failure costs human lives and millions or billions of dollars. In robotics maybe you have some machines fail in the field but in many instances you have one overarching safety system and many of the parts are irrelevant to that. The camera housing for example. So no paper trail or mathematical design validation is required to prove you designed it right. Often those are desirable but if you just manufacture it and test it a lot you’re probably fine.
This was something I noticed in my early career in mechanical engineering and later doing PCB design and software for robotics. It’s easy to find firms that just need adequate parts without the professional certifications or ass-covering calculations of other engineering fields.
All this to say, it’s not just software versus the rest of them. From my position, civil and aerospace seemed more like the exception while much of the rest of the engineering world is more vibes based.
Eh, writing software for healthcare, aircraft, or self-driving cars is more rigorous than an EE working on industrial lighting or toys.
I'm sure that, for the most part, engineers in the physical space deal with the same kinds of tradeoffs software engineers make, where you try your best based on industry standards and personal past experience, without some way to prove that what you've done is right.
> Eh writing software for healthcare, or aircraft or self driving cars is more rigorous than an EE working on industrial lighting or toys.
That’s a relatively small field within the software industry.
Most of the work being done (adding new fields to CRUD apps etc) is glorified clerical work, where the people doing it are rightfully fearful of being automated out of existence by AI.
> Maybe this is what will turn software engineering into an Engineering field
I think it’ll be the opposite. Maybe it’ll be what eventually cements this as a “talent”-based field. Just as it was difficult to quantify what makes one flute player better than another, how good you are at endlessly prompting a black-box machine would be the only measure. The engineers of old who developed kernels and drivers would be thought of as the “crazy people who put the flute against their temple to tune it.” LOL, we don’t need people like that. You can just buy a flute-tuning device. Who gives a fuck? Can you make the next “Shake It, Shake It”?
Now imagine if you’re one step removed. You don’t see the cigarettes, smell the gasoline, nor see the fire extinguisher gauge. You only see the servers running business-as-usual. Those “engineering” guys are always drama queens, you think. We have processes and fire extinguishers when shit hits the fan, right?
That’s basically every M2, and many if not most M1s, in the last 10 years. So fuck it. Why does any of it matter?
I work on software in a medical setting. We are piloting an integration with a startup measuring [some bodily variable relevant in an ICU setting]. They are obviously vibecoding (the docs are telling), and their API is failing in unexpected ways that they are not able to resolve. I am just waiting for this to harm somebody.
This is the pattern you will see when medium-successful ignorant people take over a system that was based on some kind of standard.
You can see the same approach is taken by Trump and other people.
“You have TDS!! He is actually doing good. He doesn’t follow rules because the system is rigged etc.”
These arguments border on religion because it is predicated on you believing their ignorant point of view in the first place.
Engineering and science are built on rigor and empirical evidence. They are not built by scammers/businessmen/ignorant people/politicians, because that is just not how it works.
For every argument against AI slop, you will get a variation of it's the future, or I'm 10x more productive now, I've shipped 3 applications in 2 days, etc.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the Show HN projects that are mostly vibecoded don't get much response. It's because they aren't any good. Comments that are AI generated are hollow. Videos that are AI generated are a hollow shell of their sources.
Obvious slop still makes it to the front page of HN, and sometimes farms GitHub stars.
These posts also usually get all these glowing comments from users who clearly haven't checked the code. It's even worse when authors get busted and claim "Okay, Claude wrote it, but the design is mine" despite clearly not understanding the output themselves.
Unfortunately, that makes high-effort projects less visible. The SNR will probably keep getting worse until slop can be flagged on HN.
One thing I learned is that AI-written text is not hard to spot. Usually, when I encounter slop, I close it one or two paragraphs in. Although tools like this will become more common, they usually serve to win an argument or to confirm what you already believe.
Also, it was painful to learn that my very first blog post I wrote in 2013 is AI generated. But I'm fine with it because I read this:
> A short punchy opener (≤10 words) followed by two or more substantially longer elaboration sentences — the LLM "hook then evidence pile" rhythm.
... and realized that the entire app is AI generated.
If you can spot it, an AI can spot it too. We have a website with some AI-generated content (about AI). I added a skill to correct AI slop, and the content got a lot better once I put that in place. I actually had Codex research slop patterns, and it came up with a list of known AI-slop linguistic anti-patterns. It now fixes its own content using that list. I also put a guard rail in place to do a critical review of all produced content as a final quality gate; that catches a lot of baseless claims and other slop. And there's another skill that ensures we use the right SEO-relevant language (a list produced by a separate agent).
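To give a flavor of what such a pattern list can look like, here is a minimal sketch. The actual list in my setup is longer and agent-generated; the few regexes below are invented examples of well-known slop tells, not the real list.

```python
import re

# Illustrative lexical slop tells only; a real list would be much longer
# and include structural patterns (e.g. the "hook then evidence pile" rhythm).
SLOP_PATTERNS = {
    "not_just_x_but_y": re.compile(r"\bnot just\b.{1,60}?\bbut\b", re.I | re.S),
    "delve": re.compile(r"\bdelve\b", re.I),
    "fast_paced_world": re.compile(r"in today's fast-paced world", re.I),
}

def flag_slop(text):
    """Return the names of slop patterns detected in a piece of text."""
    return sorted(name for name, pat in SLOP_PATTERNS.items() if pat.search(text))
```

An agent can then be instructed to rewrite any paragraph that trips one of these flags, which is essentially what the skill does.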
It's actually starting to generate interesting content based on me giving it a few bullets and ideas. I won't claim it's perfect but it does a decent enough job.
I have my reasons for doing this (we help people set up agentic workflows), and I appreciate that not everybody likes the idea of AI-generated content. But I think it will get harder and harder to spot AI slop. Basically, slop is what you get without guard rails and quality gates. Of course, most people still lack the skills to configure their AI tools properly, particularly non-technical people. But it's not that hard, and I bet there are a few handy journalists out there getting better at this. Also, for technical writers this is not going to be optional.
Voting is supposedly the power we have in the US. Does the public get to vote on this? If not...
> Voting, we might even say, is the next to last refuge of the politically impotent. The last refuge is, of course, giving your opinion to a pollster - Neil Postman
The US is a representative democracy, not a direct democracy. You don't get to vote on specific federal policies, you vote on the people who vote for those policies.
Voting with your wallet doesn't exist. Try to boycott Amazon by blocking the AWS IP ranges and see how unusable the internet becomes for everyday tasks. Corporations continue to push the personal responsibility narrative so they can externalize costs of unethical business practices.
How are you making them lose money by blocking their IP ranges? You are pretty much giving them money, because now they don't need to pay for bandwidth.
We can also engage in direct action the other 364.9 days of the year. Call/email your representatives, go to a town hall, call the leaders of both parties in both the Senate and the House, go to a march or protest. There are other ways we can be heard. Be substantive and thoughtful; they tally and track messages that are not hyperbole or copy-paste. If you can make it personal, even better. It only needs to be a few sentences.
You can look up Maroun Al-Ras [0] and its map coordinates [1]. If you search for the name, you find a garden of the same name, but not the village. The Instagram reel that was posted earlier had more context [2].
From wikipedia:
> In October 2024, IDF forces operated in the village as part of its invasion of southern Lebanon. The Israeli flag was raised, after the victory.
Which Apple might use as a justification: there is an Israeli flag, so it must belong to them.
One thing that would be incredibly useful is to limit comments from brand-new accounts: a combination of vouching, limiting post velocity (e.g. a 5-per-day limit), clear rules for new accounts, etc.
I understand we often see insightful comments from new accounts, but I always find it suspicious when non-throwaway accounts are created just in time to make a quip.
This was discussed before. People will age accounts and buy or hack inactive ones. Meanwhile, it often happens that a link gets posted, the project owner (or someone affiliated) finds out, and they make a new account to comment; it would be a shame to lose those people.
800 is a huge achievement. But I have to admit, around 2011 I had completely given up on The Simpsons. Story and content aside, they did something with the audio. The quality of the voices is so clear that it sounds unnatural. You can see the same effect on many shows from around that time: the voices are disconnected from the background music and SFX.
Anyway, they also improved the way the characters are drawn so much that it lost its crude nature.
The cel-layers constraint led to better art. The detail in the backgrounds was minimal, and the artists would compensate for it with interesting framing and lighting. Go back and watch "One Fish, Two Fish" or "Black Widower" from the early seasons: just incredible animation.
You can see the number of lines drawn go up like crazy around season 10 or so, making it feel less realistic. Coincidentally, the writing also started to get worse around this time.
The app didn't work for me. One that was shared right here on HN. I selected 25 miles radius, same ethnicity. Naturally I was matched with a person 700 miles away, of different ethnicity. So we got married... and deleted the app.
We were interviewed as a success story, and our faces are plastered all over the internet now. My friends didn't find the same success; I concluded that they didn't know how to date (wearing the right clothes, etiquette, conversation, navigating ghosting, etc.).
"What if the app could teach you how to do just that?" That's what I asked in our interview. That part was never published.