I really dislike these AI middleman plans. The value-add that Microsoft brings to Github Copilot is near zero compared to buying directly from Anthropic or OpenAI, which deliver 99% of the value. I don't understand why anyone would want to deal with Microsoft as a vendor if they don't have to. The short period of discounted usage was always the obvious rug pull.
I would also add that the models they supply through Azure Foundry are covered under my employer's existing customer agreement, under which MS is not allowed to train models on our data (which might include IP of the company or its clients). For organizations worried about that, it's nice & cozy.
Bingo. Github Copilot is mostly for organizations that have an existing Azure bill and would rather see that go up then get a new vendor bill. Professional middlemen.
If you’ve ever had to run the frankly batshit-insane procurement gauntlet that some organizations force on you, this becomes a very obvious and appealing option.
It technically does indeed matter, because "then" means a totally different thing in that sentence, but using "then" in that way would be an odd enough way to construct that sentence that it's blindingly obvious that they meant "than".
What reasonable interpretation of the sentence is there if "then" is applied literally? I can only find validity using "than", and therefore the use of "then" doesn't matter as the author's intent isn't lost. That said, carrying the assumption that it does matter forward, how are you certain "then" isn't the correct interpretation of the author's intent?
Ah, the AWS Marketplace procurement model, where products mostly exist so that you can line item things through Amazon rather than going through a lengthy procurement process
Not surprised to see this is common. At my company basically everyone and their mother is using Claude Code via Bedrock, despite us having company-wide Windsurf, Copilot, and ChatGPT Enterprise accounts.
That sounds different. The parent is saying they use that because no new billing has to be negotiated or set up, but in your case everything is already set up and people have access; they just chose to use something else?
Indeed. The use case is like this: I'm a Devops/Platform/SRE/Infra/WhateverYouCallAWSAdminInYourOrg at BigCorp and end users are asking me to use software XYZ. It's on the AWS Marketplace. I have two choices. I could either
1. Go through a 1-2 month procurement process where I have to deal with not only the vendor's sales team but also probably multiple teams in my BigCorp. The vendor sales team wants to feel relevant, so I'm sitting in at least one meeting telling them: I just want to buy your shit, make it as fast as possible. Meanwhile the people in my BigCorp likely not only don't understand why the software is necessary, but also need to feel relevant, and as such will make me fight through bureaucratic hurdles. I have to get compliance involved. Finance involved. If there's a procurement team, I have to get them involved. Probably there's a security questionnaire that my BigCorp's security team uses. I have to send that to the vendor's sales people. They have to send it to their security folks. The security folks on their end have to complete it and send it back. I have to send approvals up the chain on my end, after I've successfully convinced some clueless nontechnical user why software XYZ is important and no, the half-baked shit we already have doesn't work.
OR alternatively:
2. I can go to the AWS Marketplace, click a button, and now my AWS bill goes up X thousand dollars per month, and none of the bullshit from 1 is required, because AWS is already an approved vendor. Everyone is happy and doesn't care, except perhaps someone monitoring the AWS bill for large increases (well, maybe the security team too, but hopefully they aren't tattling on you to the procurement people, who have nothing to do and want to stick their fingers in the process), and I just need to tell that person what we're doing.
It's not always the exact narrative I just laid out, but the gist of it is pretty much procurement at every bigcorp.
I disagree. I like the standard interface, being able to easily switch models as things invariably change from week to week, and having a relationship with one company. That's why I'm a big fan of openrouter and Cursor. Not too much experience with Copilot, but I think there's a huge value add in AI middlemen.
Because if you’re a vscode user up until a couple days ago you could hammer Opus 4.6 all day every day and pay nowhere close to the Claude Max plan. Many people exploited this and the subsidy is closing.
A suggestion: Don't invest in any new hardware to run an LLM locally until you've tried the model for a while through OpenRouter.
The Qwen models are cool, but if you're coming from Opus you will be somewhere between mildly to very disappointed depending on the complexity of your work.
Been having a ton of fun with Copilot CLI directed at local Qwen 3.6. If you’re willing to be more specific in your prompts, then delegating from GPT-5.4 or Opus to local Qwen has been great so far.
The Anthropic Pro plan cost double and gave you, I don't know, a tenth the usage, depending on how efficiently you used Copilot requests, and no access to a large set of models including GPT and Gemini and free ones.
Yes, I loved my $10 a month personal subscription for light coding tasks; it worked great. I'd use Claude Code Max for heavy lifting, but the $10 a month Copilot plan kept me off Cursor for the IDE-centric things.
Well, they charge per prompt, but with usage limits it's a mix of token- and prompt-based. If the prompt multiplier is higher, tokens are also multiplied, so the limit is reached sooner.
It's basically token-based pricing, but you also get a limit on prompts (you can't just randomly ask the models questions; you have to optimize to make them do the most work for, e.g., an hour or more without you replying, or ask them to use the question tool).
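To make the interaction concrete, here is a minimal sketch of how a prompt-multiplier scheme like the one described above could play out. The multipliers and monthly allowance are invented for illustration, not Copilot's actual numbers:

```python
# Hypothetical blended prompt/token billing: each prompt consumes quota
# equal to a model-specific multiplier, so a heavier model both costs
# more per prompt and exhausts the monthly limit sooner.

MONTHLY_ALLOWANCE = 300  # premium-request budget (made-up figure)

MODEL_MULTIPLIER = {     # made-up per-prompt multipliers
    "base-model": 1.0,
    "frontier-model": 10.0,
}

def prompts_until_limit(model: str, allowance: float = MONTHLY_ALLOWANCE) -> int:
    """How many prompts fit in the allowance for a given model."""
    return int(allowance // MODEL_MULTIPLIER[model])

# A 10x multiplier means one tenth the prompts before hitting the cap,
# which is why you want each prompt to do as much work as possible.
assert prompts_until_limit("base-model") == 300
assert prompts_until_limit("frontier-model") == 30
```

This is why the advice above is to batch work into fewer, larger prompts: the per-prompt multiplier, not raw tokens, is what drains the allowance fastest.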
Opus 4.6 is no longer available and Opus 4.7 chews through monthly limits with reckless abandon. The value-add of GH Copilot is basically gone (at least for individuals on the Pro or Pro+ plans.)
> The value-add that Microsoft brings to Github Copilot is near zero
You are not their target audience.
The value add is the GitHub integration. By far the best.
GH has cloud agents that can be kicked off from VS Code; deeply integrated with GH and very easy to set up. You can apply enterprise policies on model access, MCP white lists, model behavior, etc. from GitHub enterprise and layered down to org and repo (multiple layers of controls for enterprises and teams). It aggregates and collects metrics across the org.
It also has tight integration with Codespaces which is pretty damn amazing. `gh codespace code` and it's an entire standalone full-stack that runs our entire app on a unique URL and GH credentials flow through into the Codespace so everything "just works". Basically full preview environments for the full application at a unique URL conveniently integrated into GH. But also a better alternative to git worktrees. This is a pretty killer runtime environment for agents because you can fully preview and work on multiple streams at once in totally isolated environments.
If you are a solo engineer, none of this is relevant and probably doesn't make sense (except Codespaces, which is pretty sweet in any case), but for orgs using the GH stack is a huge, huge value add because Microsoft is going to have a better understanding of enterprise controls.
If you want to understand the value add of Copilot, I think you need to spend a bit of time digging into the enterprise account featureset in GH, try Codespaces, try Copilot cloud agents. Then it clicks.
I found the Copilot harness generally more buggy/dysfunctional. After seeing a "long" agent response get dropped (still counted against usage, of course) too many times, I gave up on the product.
It doesn't matter how competent the actual model is, or how long it's able to operate independently, if the harness can't handle it and drops responses. It made me wonder whether they're even using their own harness.
At least Anthropic is obviously dogfooding on Claude Code which keeps it mostly functional.
The value-add that Microsoft brings is checking the boxes that you want checked.
If you need some random Egyptian government compliance certification for your vendors or whatever, Microsoft probably has that, Anthropic probably doesn't. Microsoft's (as well as Oracle's) entire deal these days is figuring out what customers care about compliance-wise, and structuring their offerings to deliver exactly that. Whether they're selling their own products, or re-selling somebody else who doesn't have that kind of global footprint and clout, is secondary at best.
Copilot was first to AI-based development, with tab completions.
Now, it may be the right call to immediately give up and shut down after Opus 4.5, but models and subscriptions are in flux right now, so the right call is not at all obvious to me.
Agentic AI models could become commoditized; some models may excel in one area of SWE while others are good for another; local models may be good enough for at least 80% of work, with cloud usage falling to 20%; etc.
Staying in the market and providing multi-model and harness options (Claude and Codex usable in Copilot) is good for the market, even if you don't use it.
> The value-add that Microsoft brings to Github Copilot is near zero compared to directly buying from Anthropic or OpenAI
Over here in the EU, we need to store sensitive data on EU servers. Anthropic only offers US-hosted versions of its models, while G-cloud and Azure have EU-based servers.
It was so much cheaper! I subscribed to the monthly plan instead of the yearly one, thinking the deal wouldn't last. It lasted a bit longer than expected.
Even in solo development, the developer-experience benefits of Github Copilot's integration into Github itself (kick off agent work from your phone on the GitHub site), VS Code, and other tools are quite high.
For instance: Anthropic's Claude Code, despite shipping as a cross-platform TypeScript/Node app, didn't even properly support Windows until far too recently (people were told to use WSL instead), and even now it's not a great Windows experience: it requires Git Bash, and it's telling that, among other things, Claude's models themselves haven't trained enough on PowerShell; I still try to avoid Claude's models when working on PowerShell scripts, personally.
Meanwhile VS Code works everywhere I want it to, out of the box, and VS Code's GitHub Copilot integration does the same.
Also, your "near zero" value add includes engineers at Microsoft/Github following the "which is the smartest/most practical model" meta-game for you and silently updating Copilot's defaults without you needing to make conscious choices. Sure, you can follow that meta yourself by watching HN every day, sampling hundreds or thousands of opinions across dozens to hundreds of stories, and playing a "Netflix subscription game" of switching subscriptions every X months when the meta-game shifts. Or you can pay Microsoft to do all that research for you. (True, that also includes their professional business relationships/contracts with OpenAI and Anthropic, which is as much a feature as a bug in my opinion, because it's also a signal amid the opinion-war noise of choosing the "smartest model [for you, right now]", and it shows up as its own HN stories for meta-debate.) At least to me that's much more than a 0% or 1% value add. But maybe that's also because I don't trust either Anthropic or OpenAI directly, I don't entirely trust HN's comments as a guide to the meta-game, it's not a meta-game I want to play, and I'm happy to pay someone else to play it for me.
I exclusively use prepaid OAI tokens when doing copilot work in visual studio. It's really easy to set up a "custom" model. The consistency is hard to beat and I can use the latest model on day one. I also get to see how the magic happens in my provider logs. Every token accounted for.
I was accounting for that in the 1% of value. I don't see a ton of value in this for development; you end up just always using the smartest model, with maybe tuning subagents to use a slightly dumber but much faster model. You really only need one subscription to the provider of the smartest model, with maybe 30 minutes of setup time to switch over if SOTA ever shifts back to OpenAI.
Over the past month, I started a GHCP ~$12 Pro sub, and found I hit my quota about half way through March or so (but I also wasn't being very...frugal). So I signed up for Claude (~$20 Pro) for a month, and I liked it at first, but the 5 hour window was very annoying, and I hit it quite a bit. The first ~week of April was nice though, and I could use Claude to the limit, and then switch to GHCP. I've sym-linked my instructions so it was more or less easy switching back and forth when I hit a limit.
However, Claude changed their limits so I hit 100% very easily, and when I did, I wasn't given a window to snapshot my work into something another agent (either a future Claude session or a GHCP agent) could easily pick up mid-task.
I found the lack of visibility into what costs what very annoying. For $20/month, you get an arbitrary amount of usage that they kept changing without notice, alerts, or visuals. I didn't renew CC after it expired and just stuck with GHCP.
Even with this announcement from GHCP, I haven't run into a limit. I'm considering upgrading to Pro+ if I don't see a limit.
But I stick with Sonnet more or less in both environments. I only used Opus for a couple of planning sessions at the very beginning, but JIT planning is done good enough by the more mid-tier models.
Except Copilot doesn't bill you per token like all those companies do; they bill you per prompt, at least Copilot in Visual Studio 2026, which is insane to me. Are they just hosting all those models themselves and able to reduce the cost of doing so?
No, they are taking the massive L. That's why they paused new sign-ups.
Just for context on the insanity: they allow recursive subagents to, I believe, 5 levels deep.
You can write a prompt telling Copilot to dig through a codebase, with one subagent per file and one recursive subagent per function, to do some complex codebase-wide audit. Doing this with Opus 4.7 consumes a grand total of 0.5% of a Pro+ plan.
That's why this paragraph is here:
> it’s now common for a handful of requests to incur costs that exceed the plan price
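A back-of-the-envelope sketch of the fan-out described above (one subagent per file, one nested subagent per function); the file and function counts are invented, and only the multiplication is the point:

```python
# Count agent invocations for a "one subagent per file, one nested
# subagent per function" audit. Counts are hypothetical; the point is
# how fast the fan-out multiplies against per-prompt billing.

def total_agents(num_files: int, funcs_per_file: int) -> int:
    """1 top-level agent + one subagent per file + one per function."""
    return 1 + num_files + num_files * funcs_per_file

# Even a modest repo spawns hundreds of model invocations from one prompt.
assert total_agents(num_files=40, funcs_per_file=10) == 441
```

If each of those invocations is billed as a fraction of a single "prompt" rather than by its tokens, it's easy to see how a handful of such requests can cost the provider more than the plan price.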
I don't know what they have done to Claude, but when used through Copilot it's truly awful compared to using it straight through the API.
I have always just used the API, but I decided to give Copilot a go over the weekend because of the cheap price. And I am seeing weird behavior like I've never seen before: it will somehow fail to use the file-editing tool and then spend an absolutely huge amount of time/tokens building a Python script to apply the edit in a subprocess. And it will spin its wheels on stuff the API routinely gets right in one shot.
This might have been bad timing. The Copilot API broke things last weekend, which caused a lot of tool calls in various agent harnesses to start failing, like the edit tool.
It's the only one with really tight integration with Visual Studio. I'm stuck using Visual Studio for most WinForms development. You can do it with VSCode, or Codex, or Claude Code, but you lose the GUI designer, which is sometimes helpful.
1. They heavily subsidized their plans vs. paying for API.
2. They allowed me to use the subscription in every tool I wanted.
3. It covered both Anthropic and OpenAI.
I have thought about making a product out of something I'm building and pricing it as a percentage on top of whatever I could resell Anthropic or OpenAI (or whatever) tokens for. I get this may be unpopular; maybe I should just stick with BYO-key.