No one believes Grok will be top 2 in a couple months. OpenAI and Gemini, in one of the two orderings, will continue to be far ahead of Grok in "the next couple of months". I encourage you to bookmark your claim here and return in 2 months to take stock of your ability to predict/bluff.
Author here. First of all, thanks for the compliment! It’s tough to get myself to write these days, so any motivation is appreciated.
And yes, once all the usual tricks have been exhausted, the next step is looking at the cache/cache line sizes of the exact CPU you’re targeting and dividing the workload into units that fit inside the (lowest level possible) cache, so it’s always hot. And if you’re into this stuff, then you’re probably aware of cache-oblivious algorithms[0] as well :)
Personally, I almost never had the need to go too far into platform-specific code (except SIMD, of course); doing all the stuff in the post gets you 99% of the way there.
And yeah, C# is criminally underrated, I might write a post comparing high-perf code in C++ and C# in the future.
This approach can work for experienced speakers, in particular if you have spoken about the given topic before, but I'd strongly advise against skipping rehearsal for folks a bit newer to their speaking career. So often I have seen talks where folks either were done after half of their time slot, or ran out of time towards the end. Or they lost track of the plot, went off on a tangent for way too long, etc.
All this is not great for the audience (who have "invested" into your session, by paying for the ticket, spending time away from work and family, not attending other concurrent sessions, etc.), and it can so easily be avoided by rehearsing.
The most common reason I have seen for folks skipping rehearsal is the awkward feeling you might have when speaking out loud all by yourself. If that's the issue, it can help to do a dry run in front of colleagues. In any case, "winging it" is best reserved for later on, after having gathered quite a bit of speaking experience and having spoken about the same, or very similar, topics before.
I'd also recommend avoiding reading from slides during a talk as much as possible; it's not a great experience for the audience. There shouldn't be much text on slides to begin with, as folks will either read that, or listen to what you say, but typically have a hard time doing both at once.
(All this is a general recommendation, not a comment on your talks which I have not seen)
> ZII for local variables reminds me of the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know SmallTalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Yet, after a decade of embracing Swift, which tries to eliminate that aspect of Obj-C's nil handling, Apple software is buggier than it's ever been. Perhaps not crashing on every nil in large complex systems does lend itself to a more stable system.
> So who's going to maintain the packages? Who's going to test them against other packages? Against distro upgrades? Who's going to fix issues?
I feel like you're not reading what I'm writing. The community.
That's how open source works: if you use an open source project and it has a bug, you can fix it and open an MR. If the upstream project doesn't want your fix, you can fork. Nothing forces the upstream project to accept your contributions. When they do, they take the responsibility for them (to some extent, as in: it is now part of their codebase).
If your distribution doesn't have a package you want, you can make it for yourself, locally. You can contribute it to a community repo (most distros have that). Maybe at some point, the distro maintainers will decide to take over your package in a more official repo, maybe not. Even if you are not the official maintainer of a package, if you use it and see a problem, you can contribute a fix.
In the open source world, most people are freeriders. A (big) subset of those feel entitled and are simply jerks. And a minority of people are not freeriders and actually contribute. That's the deal.
> And their efforts are needlessly duplicated across several packaging systems.
No! No no no no! If they don't want to put effort into that, they don't have to. They could use Ubuntu, or Windows, or macOS. If they contribute to, say, Alpine or Gentoo, that's because they want to. I am not on Gentoo in the hope that it will become Ubuntu, that would be weird. But you sound like you want to solve "my Gentoo problems" by making it look more like Ubuntu (in spirit). Don't use Gentoo if you don't want to, and leave me alone! Don't try to solve my problems, you're not even a Gentoo user.
Following the public reporting timeline of builder.ai lays out a fascinating story akin to Theranos or WeWork. I'm sure we'll get an exhaustive account of exactly what happened in due course, and there will likely be other such cases that come out of the most recent AI investing boom.
Aug 2019 - WSJ report that for builder.ai the "AI" means "Actually people in India"
Americans need to get over their view of “Asia” as being about making shoes. When I was working in engineering in the early aughts, we mocked the Chinese as being able only to copy American technology. Today, China is competitive with or ahead of America in key technology areas, including nuclear power, AI, EVs, and batteries.
We need to anticipate a future where China is equal to America on a per capita basis, but four times bigger. Is that a world where “Designed by Apple in California, Made in China” still makes sense? What will be America’s competitive edge in that scenario?
What seems most likely to me in the future is that the US will find itself in the same position the UK is in now. Dominating finance and services won’t mean anything when both the IP and the physical products are being produced somewhere else.
Pulling the latest from git, running "dotnet build" and sending the artifacts to zip/S3 is now much easier than setting up and managing Jenkins et al. You also get the benefit of having 100% of your CI/CD pipeline under source control alongside the product.
In my last professional application of this (B2B/SaaS; customer hosts on-prem), we didn't even have to write the deployment piece. All we needed to do was email the S3 zip link to the customer and they learned a quick procedure to extract it on the server each time.
And then arguably every flexbox item ought to have min-width (and/or min-height) set to 0 because flexbox has a "min content sized" automatic minimum size built-in, which is rarely what you want. But if the content isn't overflowing or can be compressed in some way then you can get away without this.
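As a minimal sketch of that fix (the class names here are hypothetical):

```css
.row {
  display: flex;
}
.row > .item {
  /* Flex items default to min-width: auto, which resolves to the item's
     min-content size and stops it from shrinking below its content.
     Setting 0 restores the "just shrink" behavior most layouts expect. */
  min-width: 0;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}
```

Without `min-width: 0`, a long unbreakable string inside the item pushes the row wider than its container instead of truncating.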
My mom ran an adult foster care home. Half the population was elderly, half developmentally disabled. I liked helping out with the developmentally disabled folks. They were a handful, but they were basically enjoying life. But the elderly wing was mostly people warehoused and waiting to die. I remember happy moments, but I don't remember anybody who I'd call happy, and quite a lot of them were miserable.
There was one guy, an occasional escapee and reasonably physically healthy for his 80s, who had severe Alzheimer's. He just wanted to go home. All the time, that's what he wanted. I forget the details, but he didn't have a home anymore. Nobody came to visit him. We did our best for him, but what can you do with that?
We did our best for all of them. But I remember one evening over dinner where my mom and my brother and I were talking about getting our medical/legal paperwork in order. My mom said, "If I end up like that, just wheel me out to a field and leave me."
We couldn't, of course. But when her time came, we did move her to hospice as soon as there was no hope of recovery. She lived her life and bravely fought the end of it, but she didn't want to be kept around as a body, a shrine to her former self. A choice I deeply agree with.
I used to be a schoolteacher, and the Growth Mindset was one of the Theories of the Year that I was expected to wholeheartedly believe in. Educational psychologists keep coming up with this stuff, and teachers lap it up, because teaching is all about morality, because there's no fair way of measuring efficacy in teaching.
In fields like physics (which I taught) or computer science, there's an underlying mathematical formalism that ties the whole thing together and gives it structure. Do an image search for 'map of physics' and there'll be a fair few interpretations of that structure. But in the humanities and the social sciences, there's none of that. My friends who did physics PhDs were told by their supervisors what they would research; when I did my Masters of Education, I was expected to decide on my own research question. Everyone in the humanities is building their own little hovel, whereas in the mathematised sciences, everyone is working together to build a magnificent castle.
As a physicist, if I were doing serious research in psychology, the first thing I would do is try to work out everything about how a neuron works: how it takes input, how it produces output, what information storage capability it has, and so on. Having done that, the next thing would be to study two neurons working in connection, and then gradually increase the complexity of the system until it's an actual brain. This bottom-up approach is key to the success of the mathematised sciences.
But in the real world of psychology, everyone comes in with their top-down questions and tries to answer those. Studies tell us we can only keep seven things in working memory, but what does that even mean? How can we be sure of that - how is that information stored, where can we find it in the brain? No one is building any knowledge of how the brain actually functions; it's just an expensive game using (often inappropriate) statistical techniques to extract (or manufacture) correlations from noise. Any explanatory theories, like Schema Theory, are 'just-so' stories - none of these theories have been subjected to any real attempts to experimentally refute them, because there's nothing falsifiable about them.
Dweck's Growth Mindset is particularly attractive to teachers because it's ruthlessly pragmatic with respect to the work of teachers. Intelligence used to be regarded as a fixed, innate quality of a pupil, so there was no point in teachers engaging with it. On the other hand, teachers can affect a pupil's effort, so it makes sense to encourage teachers to do just that. But thanks to Dweck, intelligence has been demoted from 'useless' to 'non-concept'. When a teacher tells you a pupil is 'good', they're talking about behaviour and attitude, never aptitude.
But for Microsoft, none of this really matters. Microsoft is a huge organisation, which means the people have to do a lot of work to force the organisation into a structure that they can make sense of (because humans have limited cognitive capacity). They've chosen Dweck's Growth Mindset, but it could have been anything.
In 2016, when Dweck came out with her 'False Growth Mindset', I was in my first full-year teaching job. (Previously I'd just been employed one term at a time to cover teachers on leave.) At the end of the year, I was let go, alongside twenty other teachers. That was nothing to do with Dweck - the school had its own, long-standing cult. I still don't know what you had to do to be part of the 'in crowd', but it doesn't matter now. You can blame any management practice you want, but in the end, it's just managers punishing and rewarding according to inappropriate criteria.
The era of automakers chasing exceptionally high profits while their volumes kept shrinking seems to be coming to an end; the AI push, a clear bubble, seems ready to burst; and most tech-centric projects, from autonomous vehicles to smart cities, have shown poor outcomes. Essentially, in the West we still seem to be the best ONLY at PR and weapons. All other sectors are down.
BTW, just to show where we are: here in the EU a BYD Atto 3 costs ~€37,990 for the base version, while in Thailand (also imported from China) it costs ~€8,750. Meanwhile our small power-tool producers have started making only crap, or hyper-expensive non-crappy tools, some with DRM in the batteries that locks users out of swapping the same battery between two devices from the same vendor; in China, all batteries can be swapped across vendors simply because they chose standard connectors. These are simple examples, but good enough indicators of how bad things are.
We are still better at software, but with current management trends that won't last long, and with the current school reforms... well... I doubt the coming generations will be any more competent.
brew install dotnet-sdk
dotnet new console -o ConsoleApp --aot
cd ConsoleApp
dotnet publish -o .
./ConsoleApp
(it may take extra time to build it for the first time, particularly pulling in the ILCompiler dependency, but all subsequent compilations will be fast)
Feel free to comment out the `PublishAot` property in the .csproj file if you don't want AOT (i.e. if you're using libraries that need the JIT), or use the `dotnet watch` and `dotnet run` commands for quick iteration.
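For reference, the relevant part of the generated .csproj looks roughly like this (the exact target framework and other properties will depend on your SDK version):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <!-- Comment this out to fall back to the default JIT-based publish -->
    <PublishAot>true</PublishAot>
  </PropertyGroup>
</Project>
```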
Note that for a rich editing experience you do not need the C# Dev Kit (which requires an account), only the baseline C# extension, which has all the good bits. For running and debugging the project out of VS Code: on the first F5 press it will offer to generate config assets for the .NET project it finds in the opened folder; just agree and it will work on the next F5 press.
Our website https://hydraulic.dev/ covers the most important points but compared to jlink/jpackage:
• Delta software updates without code changes. On Windows apps update in the background even when not running.
• It can do web-style synchronous updates on app launch.
• Can build for every platform from any platform. Linux cloud CI workers can be 10x cheaper than Mac workers, so this feature can save you a lot of money!
• Lots of usability work and features around code signing/notarization. It can sign without MS/Apple tools; it knows how to use cloud signing services, remote HSMs, local HSMs, the Mac keychain (protected from other apps), and custom enterprise signing services that can only be accessed via scripts; it can check for many kinds of mistakes (a particular weakness of vendor tools); and it will help you buy certificates by generating CSRs for you.
• Generates a download HTML page that detects your user's OS and CPU to give them a big green download button. It can upload all the artifacts to S3, GitHub Pages/Releases, or any other server via SFTP. It renders icons for you, etc.
• Deployment by Windows IT is easy and installation doesn't require users to have admin rights on their system.
• It can publish to the MS Store which lets you avoid needing to buy signing certificates for a one off $19 fee.
• It comes with commercial support. With other tools if you get stuck you're on your own.
----
All the above works for Electron and native apps too. For JVM users specifically:
• It figures out which JDK modules you need using jdeps and bundles a minimized JDK.
• It provides a Gradle plugin that can read config out from your build configuration automatically.
• Easily bundle JCEF if you want an embedded Chromium.
• (soon) It will provide an API that lets you check for updates and trigger them manually. This is done, it just needs to be fully documented and published to Maven Central.
• It has way better support for native code than jpackage. It will dig through your JARs to find native libraries that don't match the target machine and delete them, it will sign them in-place, it can extract them ahead of time and then set up your system properties to make them be loaded from the right places and so on.
• It can bundle custom TLS root certificates into your JVM trust store. That's especially useful for enterprise settings.
That's not even a complete list by any means. Conveyor is packaged with itself and is a JVM app written mostly in Kotlin, so the above features get dogfooded by us.
I wrote this tool partly because I really wanted to close the gap between web and desktop dev, to let developers have more freedom around frameworks and languages than they do today. It's not right for everyone but if you aren't hog-tied to the browser then the pain of distribution is really removed by this thing, and that lets you focus on building your app.
Exactly because of much greater stakes. A lot of heavyweights involved, and nobody agrees to yield. The result is a compromise, aka a solution which is unsatisfactory to all parties to an equal degree, as they say.
Best designs are produced by small, tightly-knit, often one-person design teams. Examples: Unix, Clojure, SQLite, the Boeing 747, Westminster Palace in London.
Sometimes a small team with a cohesive vision keeps on attracting like-minded people or matching ideas, and the project grows with contributions from a large number of people. The key part of every such success is the vetting process. Examples: Python, Ruby, FreeBSD and OpenBSD.
Worst designs with most glaring, painful, expensive, even tragic shortcomings are produced by large committees of very important people from huge, very (self-)important companies, agencies, departments, etc. Each of them has an axe to grind or, worse, a pet peeve. Examples: PL/I, Algol-68, the Space Shuttle (killed two crews because the escape system was removed from the quite sane initial design); to a lesser degree, it's also HTML, CSS, and, well, C++ to a large degree :( The disease has a name [1].
Sometimes a relatively small and cohesive team in a large honking corporation produces a nice, cohesive design, as happened with TypeScript, certain better parts of CSS, and some of the better parts of C++. This may sometimes create the false impression that "design by committee" works.
I work closely with both companies quite a lot so you can either take my word for it or not - it doesn’t bother me either way to be honest. They are pricing inference to make money.
I don’t feel like you really have much experience using LLMs in business. One example of where they’re very powerful is summarization. For instance, we have a pretty complex customer-support model for our fraud and other cases, with various disparate data sets including prior cases, related possible fraudsters identified via our fraud models, etc. We built a copilot LLM multi-agent system that has access to various functions as sub-agents, each prompted and context-aware about how to summarize its specified data set. They also have the ability to render widgets on demand, or when their context implies it’s relevant. This allows quite a lot of complex, high-cognitive-load information to be distilled rapidly, and lets the investigators interrogate the copilot on a case. As the copilot develops “answers” as a summary, it dynamically renders an appropriate contextual dashboard with the relevant visualization.
By structuring the application as a multi-agent model, we can constrain each LLM to well-specified tasks, with fine-tuning and very specific contexts for its particular task. This almost entirely eliminates hallucination and forgetfulness. And even when those do occur, the actual ground truth is visualized for the investigator.
Prior systems either dumped massive amounts of cognitive load in the investigators' faces or took man-years of effort to create a specific workflow, and in an adversarial, dynamic space like fraud you need a much more dynamic approach to different types of new attacks.
We aren’t replacing anyone. That’s not our goal. In fact we grew our investigator footprint because both our precision and recall have grown dramatically making our losses much less. We hire more skilled investigators and greater number to address more suspected cases faster and better.
Listen. When John Henry battled the steam drill he did win, but it killed him. Go to any modern bore site and you won’t see fewer people working on the tunnel but more people: people who aren’t there for their strong back and ability to swing a pick, but because they’re highly trained experts. They’re just building more complex tunnels that don’t collapse and don’t lose dozens of workers per dig.
This form of automation is no different in my experience so far.
So, if all you can see is SEO and grift, it might be a lack of imagination and experience on your part and some magical AI thinking sprinkled in. All your points about LLMs failures are true but they also all have solutions that don’t require slot machines as you say or imply it’s all a scam. They’re a tool like any other and they require handling in specific ways to be most effective. Even if chatgpt is a pretty unconstrained interface and that leads to issues doesn’t mean that’s the only way to use the tech.
Using LLMs to generate software is dumb. That said, LLMs are actually pretty remarkable at generating Cucumber tests, as Gherkin is a natural-language grammar that plays to their native strengths better than computer-language grammars do. This is useful if, say, you have business people writing effectiveness testing, where they provide a specification of policy and a well-prompted LLM generates pretty exhaustive Cucumber tests (which tend to be redundant and formulaic when asserting positive and negative cases exhaustively), which can then be revised by hand as needed. Since the tests are natural language, the business people tend to be pretty good at debugging them up front, and with a large set of Cucumber tests written by hand you’ll see tons of errors anyway. The LLM-written tests tend to be much, much higher quality than the human-written ones.
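To illustrate the shape of that output, here's a hypothetical example of the kind of Gherkin an LLM might produce from a plain-language policy (the refund policy itself is invented for this sketch):

```gherkin
Feature: Refund approval policy

  Scenario: Refund below the approval threshold
    Given a refund request of $200
    When the request is submitted
    Then the refund is approved automatically

  Scenario: Refund at or above the approval threshold
    Given a refund request of $700
    When the request is submitted
    Then the refund is routed to a manager for approval
```

Because the scenarios read as plain English, domain experts can review and debug them directly, which is exactly the property that makes this workflow viable.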
Copilot+ runs locally, but has the quality of models from 2 years ago; it's basically useless for anything but shitposts and spam.
The cloud AI tools all burn hideous amounts of money, all run at a loss.
AGI is a red herring, and will not happen. The architecture of generative-AI systems simply doesn't permit the required logic and reasoning capability.
Even the "Actually, Indians" concept of outsourcing the tertiary sector to the developing world by way of having low-skill workers clean up AI generated trash is unviable. (It both doesn't work, and is politically doomed.)
What's going on here is that tech companies are tearing up everything to pump their stock prices after the covid-tech-boom and ZIRP ended. Burn down their core products to keep the bubble going just a bit longer.
We have a replacement for CUDA: it is called C++17 parallel algorithms. It has vendor support for running on the GPU from Intel, AMD, and NVIDIA, and it will also run on all your CPU cores. It uses the GPU vendor's compiler to convert your C++ into something that can run natively on the GPU. With unified memory support it becomes very fast to run computations on heap-allocated memory using the GPU, but implementations also support non-unified memory.
Select eggs/sperm and make embryos at fertility clinic.
Ship them all off to Genomic Prediction.
After you get their simplified test results, call and ask for the full report that is compatible with 23andme.
Search snpedia for keywords like intelligence, retardation, educational attainment and cross reference their existence/non-existence with the 23andme text file.
Have fertility doctor implant the embryo of your choosing.
If you have/are a female, the price is dramatically cheaper. Otherwise it's about $200k (all post-tax): embryo creation $45k, surrogacy agency fee $30k, surrogate $70k, baby delivery $25k, embryo testing $6k.
Clarifying, the specter of a hidden animal will usually take the form of a diffuse sparkle or blur, typically hovering off to the person's side and somewhat above them, and as a result when carried through to the "other side" cannot possess what remains of the person in that domain (because they are returned to the origin in turn).
I don't want to read too much into it, but the person (supposedly) submitting the PR seems to have worked at 1Password since December last year, per his LinkedIn. (And his LinkedIn page has a link to the GitHub profile that made the PR.)
Personally, I've given up on Gemini, as it seems to have been censored to the point of uselessness. I asked it yesterday [0] about C++ 20 Concepts, and it refused to give actual code because I'm under 18 (I'm 17, and AFAIK that's what the age on my Google account is set to). I just checked again, and it gave a similar answer [1]. When I tried ChatGPT 3.5, it did give an answer, although it was a little confused, and the code wasn't completely correct.
This seems to be a common experience, as apparently it refuses to give advice on copying memory in C# [2], and I tried to do what was suggested in this comment [3], but by the next prompt it was refusing again, so I had to stick to ChatGPT.
There are alleged cost savings elsewhere. I haven't seen a company that's actually realised them. Every company I know of that's gone all-in on cloud is paying a fortune and still has a big team to manage everything.
Free:
Cloudflare -- Free for most services
OVH Cloud -- Free and unlimited
Scaleway -- Free for most services
Great:
Hetzner 20-60 TB / mo per instance $1.08
Not bad:
Linode 1-20 TB / mo per instance $5.00
Oracle Cloud 10 TB / mo $8.50
A bit much:
Backblaze 3x the amount of data stored $10.00
Bunny CDN -- $10.00
DigitalOcean 100 GB - 10 TB / mo per instance $10.00
UpCloud 500 GB - 24 TB / mo per instance $10.77
Vultr 2 TB / mo for most services $10.00
Uh...
Fly.io 100 GB / mo $20.00
Are you actually serious?
Microsoft Azure 100 GB / mo $78.30
Amazon Web Services 100 GB / mo $92.16
Railway -- $100.00
Zeabur 10-100 GB, depends on plan $100.00
Google Cloud Depends on service $111.60
Screw you guys:
Render 100 GB - 1 TB, depends on plan $300.00
Vercel 100 GB - 1 TB, depends on plan $400.00
Netlify 100 GB - 1 TB, depends on plan $550.00
(We use Netlify and have well over 1TB of monthly traffic. They're insanely expensive for what they are. As soon as we have roadmap time to revisit it, we'll move away.)
I'm starting to think of cloud less as an asset and more as a liability. We can leverage it for temporary scale, but in no way will we tie ourselves to a particular vendor.
TLDR: Release GTK/Qt apps with menu bars and xdg-desktop-portal on Flathub, and use Stripe to implement any and all desirable business models. You can do this today and your app won't look any more out of place than the browsers and office suites. IAPs are the STDs of business models.
> You can't stop people from making crap apps, they exist even today.
The particular crapware that exists on Android is notably absent from the built-in software-management GUIs, so it looks like you CAN do this.
> To make an app feel at home, it needs to "blend" with the rest of the OS
68% of desktops are GTK-based, and 26% are KDE, which themes GTK apps the same as KDE apps. Superficially, apps blend well. Looking deeper, you will notice many similarities: many common shortcuts and the same common UI paradigms and idioms. Look a little closer and you'll note differences, especially GNOME with its client-side decorations, which cram a toolbar plus window controls into a single line with extra spacing, versus Qt apps with more traditional menu bars and certain shortcuts in common. Then there are incredibly common apps that are obviously not fully consistent with either: Firefox, Chrome, GIMP, LibreOffice, Thunderbird.
What one ought to realize shortly is that there is no singular Linux desktop to blend into and it works fine as is.
If you make a GTK app with a traditional menubar you will look reasonably at home on virtually all desktops. At least as at home as most of the most popular apps listed above.
> Then comes the question of system integrations - how do you offer a unified photo picker experience
xdg-desktop-portal
>how do you ensure you always ask for the right permission to access the camera, the clipboard or network APIs
In native apps none of this is controlled at all. Don't install things you think might look at you through the camera and upload your nudes to the cloud. In flatpak the permission to do so is front loaded into the permissions required by the app. Don't install things that require camera and network permission if you think they might upload your nudes to the cloud. Flatseal provides a way to modify this permission after the fact if you want to install something and modify what permissions it gets.
> I strongly disagree - what if you want to offer a "try before you buy",
The most obvious thing to do is time- or feature-limit usage and "unlock" your app by opening a URL, handling payment on your website with Stripe, and then having the user copy a code or open a link to communicate it to the app. This is relatively easy AND lets you keep 100% of the money, and no capricious app-store rejection can keep your users from using your app. If you ever had trouble distributing your work through the official source, both Flatpak and traditional package management have the concept of multiple sources: the customer adds your source, and your apps and the official apps are displayed in the same integrated app-store interface.
Flathub is supposed to introduce one-off paid apps/subscriptions. I'm not clear what the progress on that feature is. I do not believe there are any plans for in-app purchases, however, probably because 99.9% of that use case is dominated by porn, shitty games, and adware. Asking why non-gross environments don't implement IAP is like going to the whorehouse and shouting "where all the gonorrhea at!"
It is better from the user's perspective if payments/subscriptions are managed either on the website for the service or, better yet, in a single store interface where customers can make an intelligent decision, rather than having the dev lowball them and then ride the sunk-cost fallacy and FOMO to a fuckin' payday.