I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.
I like gptel because it's enormously extendable and exploitable - it lets me send LLM requests from just about anywhere. I could be typing a message (like this very one) and suddenly need ideas for how to phrase something better, explain it more simply, fact-check my assumptions, whatever. Quick & dirty interactions that get discarded in the same buffer. For longer investigations and research I use a dedicated gptel buffer; those get saved automatically.
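Something like this minimal sketch captures the idea, using gptel's programmatic entry point gptel-request (the command name and prompt handling here are made up for illustration, not my actual config):

    ;; Quick & dirty: ask about the region (or just the prompt) and
    ;; drop the reply below point in whatever buffer I'm in.
    (require 'gptel)

    (defun my/gptel-quick (prompt)
      "Send PROMPT (plus the active region, if any) to the default
    gptel backend and insert the reply below point."
      (interactive "sAsk: ")
      (let ((text (if (use-region-p)
                      (concat prompt "\n\n"
                              (buffer-substring-no-properties
                               (region-beginning) (region-end)))
                    prompt))
            (marker (point-marker)))
        (gptel-request text
          :callback (lambda (response _info)
                      ;; RESPONSE is a string on success, nil on error.
                      (when (stringp response)
                        (with-current-buffer (marker-buffer marker)
                          (save-excursion
                            (goto-char marker)
                            (insert "\n" response "\n"))))))))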
I don't use gptel as a coding assistant - you can do that, but it's not really optimized for that kind of work. For coding I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", rewriting the prompt in place, and only then send it.
All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case.
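As a rough illustration of the sharing trick from the gptel side (simplified, and the tool name and bb bridge here are made up for the example): gptel's tool API can register the same babashka script that an MCP server exposes to the other clients, so the actual logic lives in one place.

    ;; Hypothetical example: hand gptel a babashka evaluator. The same
    ;; script can sit behind an MCP server for Claude Code, Copilot
    ;; CLI, etc., so every client talks to the same Clojure code.
    (gptel-make-tool
     :name "bb_eval"
     :description "Evaluate a Clojure form with babashka and return the printed result."
     :args (list '(:name "form"
                   :type "string"
                   :description "A Clojure form to evaluate"))
     :function (lambda (form)
                 (with-temp-buffer
                   ;; `bb -e' evaluates a form and prints the result.
                   (call-process "bb" nil t nil "-e" form)
                   (buffer-string))))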
All the AI-related Emacs settings are in my config².
Is this helpful, or do you want some more concrete examples?
This is good stuff :) Honestly, I'm having a lot of fun just using org-mode files as prompts (and eventually output) for a Claude Code instance in vterm. All these files get saved along with the output, and magit is of course amazing: just good affordances and history keeping.
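Roughly, the loop can be sketched like this - simplified, assuming an already-running *vterm* buffer and Claude Code's -p/--print mode; the function name is made up:

    ;; Feed the current org file to Claude Code in print mode and
    ;; append the answer to the same file, so prompt and output are
    ;; saved together (and magit tracks the history).
    (defun my/org-to-claude ()
      (interactive)
      (save-buffer)
      (let ((file (shell-quote-argument (buffer-file-name))))
        (with-current-buffer "*vterm*"  ; assumes vterm is running
          (vterm-send-string
           (format "claude -p \"$(cat %s)\" >> %s" file file))
          (vterm-send-return))))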
For context, David Kriesel gave the infamous talk called “BahnMining” at 36C3 highlighting this. IIUC it’s only available in German: https://youtu.be/0rb9CfOvojk
Come to think of it, maybe they had a play on 4o being “40” and o4-mini being “04”, and had to append the “mini” to bring home the message that 04 < 40.
Nama is pretty good, and for more traditional DAW workflows it greatly simplifies using ecasound, at some cost. Would love to see ecasound start getting developed again.
Edit: I was comparing Nama to ecasound there, not to the more common graphical DAWs.
Funnily enough, AI already knows what stereotypical AI sounds like, so when I tell Claude to write a README but "make it not sound like AI, no buzzwords, to the point, no repetition, but also don't overdo it, keep it natural", it does a very decent job.
This actually drastically improves any kind of writing by AI, even if it's just for my own consumption.
I'm not saying it is or isn't written by an LLM, but Yegge writes a lot, and usually well. It somehow seems unlikely he'd outsource the front page to AI, even if he's a regular user of AI for coding and code docs.