
At the rate people claim AWS us-east goes down, folks will argue that OVH has a tendency to go up in flames!

Can you describe your setup on how you use LLMs within Emacs?

Of course.

I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.

I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.

I don't use gptel as a coding assistant. Even though you can do that, it's not really optimized for that kind of work. For coding I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in-place, and then send it.

All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case. All the AI-related Emacs settings are in my config².

Is this helpful, or do you want some more concrete examples?

¹ https://github.com/agzam/death-contraptions

² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai


I’d like a concrete example of how you’re actually controlling Emacs with LLMs. Is ECA the part that does that?

gptel has the built-in elisp eval tool. ECA doesn't have it built-in, I use my custom MCP (I posted the link in the comment above).
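For reference, tools in gptel are registered through its tool API. Here's a minimal sketch of what an elisp-eval tool could look like, assuming `gptel-make-tool`; the names, argument spec, and the exact shape of gptel's built-in variant are illustrative, so check gptel's documentation before relying on this:

```elisp
;; Hypothetical sketch: an elisp-eval tool registered via gptel-make-tool.
;; The tool name and arg spec here are illustrative, not gptel's built-in one.
(gptel-make-tool
 :name "elisp_eval"
 :description "Evaluate an Emacs Lisp form and return the printed result"
 :args (list '(:name "form"
               :type string
               :description "An Emacs Lisp expression to evaluate"))
 :function (lambda (form)
             ;; Read the string into a form, evaluate it lexically,
             ;; and return a printed representation for the model.
             (format "%S" (eval (read form) t))))
```

Giving the model an eval tool like this is what lets it drive Emacs directly, since any command or buffer manipulation is reachable from elisp.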

This is good stuff :) Honestly, I am having a lot of fun just using org mode files as prompts and eventually output for a Claude Code instance in vterm. All these files get saved with the output and magit is of course amazing, just good affordances and history keeping.

(setopt gptel-default-mode 'org-mode)

For persistence:

https://github.com/agzam/.doom.d/blob/main/modules/custom/ai...


For context, David Kriesel gave the infamous talk called “BahnMining” at 36C3 highlighting this. IIUC it’s only available in German: https://youtu.be/0rb9CfOvojk


Watch the original, there you can select an English simultaneous translation: https://media.ccc.de/v/36c3-10652-bahnmining_-_punktlichkeit...


What makes you think that? Genuine question, as I’ve not flagged it as such in my mind.


Ok, I’ll bite. Was it worth it? What have people who haven’t used it missed?



As an aside: have you thought about using agent-shell?

https://github.com/xenodium/agent-shell


Come to think of it, maybe they had a play on 4o being “40”, and o4-mini being “04”, and having to append the “mini” to bring home the message of 04<40


Mind sharing your DAW app?


Looks like it's nama: https://github.com/bolangi/nama


Nama is pretty good and, for more traditional DAW workflows, greatly simplifies using ecasound, at some cost. Would love to see ecasound start getting developed again.

Edit: was comparing nama to ecasound there, not the more common graphical DAWs.


A classic issue of AI-generated READMEs: never to the point, always repetitive and verbose.


Funnily, AI already knows what stereotypical AI sounds like, so when I tell Claude to write a README but "make it not sound like AI, no buzzwords, to the point, no repetition, but also don't overdo it, keep it natural", it does a very decent job.

Actually drastically improves any kind of writing by AI, even if just for my own consumption.


I'm not saying it is or isn't written by an LLM, but Yegge writes a lot, and usually well. It somehow seems unlikely he'd outsource the front page to AI, even if he's a regular user of AI for coding and code docs.


And full of marketing hyperbole. When I have an AI produce a README I always have to ask it to tone it down and keep it factual.

