IX-103's comments

I work for a company that has been using Mythos for vulnerability detection in our software. The results we're getting are revolutionary to the point that our software security teams are heavily overloaded addressing the deluge of thousands of real bugs/vulnerabilities and design flaws across our billions of lines of code.

For comparison, we are invested heavily in the AI space, to the point where Anthropic is one of our competitors. We were already using state-of-the-art models to find flaws in our code, but Mythos was just so much better at finding real vulnerabilities that it's not even funny.


Read the above comment again. Both your comment and theirs are compatible.

They are directly contradicting the claim that if you ran other models on the same codebases you would get similar results.

Yeah, I’m a security researcher, and my colleagues who have access say it’s insanely good… but interestingly they also work for places like Nvidia, which have a deep vested interest in selling tokens and hardware. So of course they are pushing this narrative.

if you are invested heavily in the AI space, isn't it in your best interest for the froth around Mythos to be true and the comment you are responding to to be invalid? even if you are competing with Anthropic, a rising tide raises all ships

i'd like to see more facts and data one way or another!


This is the "circumstantial" version of the ad hominem fallacy. Just because the author may benefit from the argument being true doesn't mean it is invalid.

They are clearly disputing the assertion that Mythos is an incremental gain rather than a quantum leap. Of course objective, unbiased data would be nice, but these anecdotes are all we have right now.


> billions of lines of code.

Billions as in 10^9?



Yes, exactly. Moore's law says that in less than 10 years you will be able to fit today's state-of-the-art models on your phone. If you add in all of the compute- and memory-neutral improvements and breakthroughs that we will accumulate over the next 10 years, then it will be both far more capable and far more reliable than today's models.

An AI assistant you can trust and bring with you is coming, and almost nothing can stop it.
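
A rough back-of-envelope sketch of that claim, assuming the classic two-year doubling period (an assumption; the actual cadence has been slowing) and a hypothetical 1 TB model footprint:

```javascript
// Back-of-envelope Moore's-law scaling, assuming a doubling every 2 years.
const years = 10;
const doublingPeriodYears = 2;
const scaleFactor = 2 ** (years / doublingPeriodYears);
console.log(scaleFactor); // 32

// If a state-of-the-art model needs ~1 TB (1024 GB) of accelerator memory
// today (illustrative number), the same model would need roughly:
const todayModelGB = 1024;
console.log(todayModelGB / scaleFactor); // 32 -- i.e. ~32 GB, phone-adjacent
```

So even the naive version of the argument only needs a ~32x improvement over the decade, split however you like between hardware and model efficiency.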


Ah yes the -2nm node.

I'd like to see a full development of this idea. Something like a CPU that runs at -3 GHz. Or perhaps it generates power while it undoes computation?

It's too bad node size is a linear dimension rather than area. If it were area, we could get into its many complex/imaginary properties.


Ha, well, one that does computation with no power draw is theoretically possible, since computation as such does zero actual "work" physics-wise; it's all just various mechanical losses.

https://en.wikipedia.org/wiki/Reversible_computing
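
The physics behind this is Landauer's principle: only irreversible operations (erasing a bit) have a hard energy floor of kT·ln 2 per bit, and reversible logic avoids that floor by never erasing. A quick sketch of the floor at room temperature, using standard physical constants (values are not from the thread):

```javascript
// Landauer limit: minimum energy to erase one bit, E = k_B * T * ln(2).
const kB = 1.380649e-23; // Boltzmann constant, J/K (exact in SI since 2019)
const T = 300;           // room temperature, K
const landauerJoulesPerBit = kB * T * Math.log(2);
console.log(landauerJoulesPerBit); // ~2.87e-21 J per erased bit

// Reversible computing sidesteps even this tiny floor by not erasing bits;
// everything above it in a real chip is "mechanical losses" (resistance,
// leakage, switching) rather than thermodynamically required work.
```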


Control-C is the usual for that.

1. That doesn't work in Powershell.

2. The standard keybind to quit is in fact q. See less, info, and fzf for examples.


They're working on it. I think they even have a "beta" for Android/Chrome on CiderV. From what I heard it's slow and doesn't work with most of the existing tooling (want to reformat your source files? Too bad).

They just announced the Googlebook (a laptop), not to be confused with Google Books (their service for selling ebooks). It sounds like the mismanagement is right at normal levels.

It's going to administration overhead. If you have to document everything, argue for every medical procedure, and deal with 20+ different processes for filing claims, then it takes time. And, as a provider, you have to pay someone to spend that time if you want to get paid.

It doesn't help that our healthcare billing systems are so outdated and broken. I once had a doctor visit denied with a reason code saying it should be charged to the other insurance (for people on multiple plans). I was only on one plan, but my wife was on two. The doctor and I went through all the paperwork: my name was right, my birthday was right, my policy number was right, and when I got notice of the rejection it had my name on it. Eventually we traced it to an error - not in my insurance company, not in the company that handles claims in this area for my insurance, but in some middleman company responsible for transferring claims between the two. Never mind that all three companies claimed to be BlueCross BlueShield. This took over a year to resolve.


No it's not. There is absolutely no way to get from $360B of insurer admin and net cost of insurance to $2.5T --- two point five trillion --- in practitioner costs on paperwork overhead. That is not a plausible argument.

The numbers here are not close. They're stark.


https://news.cornell.edu/stories/2011/08/us-health-care-cost...

> A new study finds that the extra time and labor physician practices spend on interacting with insurance companies and government entities cost U.S. physicians $82,975 each per year, while doctors in Ontario spent $22,205.

> Canadian physicians follow a single set of rules, but U.S. doctors grapple with different sets of regulations, procedures, requirements, formularies and forms mandated by each health insurance plan or payer. The average U.S. doctor spent 3.4 hours per week interacting with health plans; Ontario doctors spent 2.2 hours. The bureaucratic burden falls heavily on U.S. nurses and medical practice staff, who spent 20.6 hours per physician per week on administrative duties; their Canadian counterparts spent only 2.5 hours on paperwork.

All that falls in your $2.5T bucket. And their cleaners, HR, etc. And insurers have had 15 years of innovation since that study.


You haven't done the math here. Multiply the numbers out. This is what I'm talking about. How are you supposed to engage with these topics if you're literally recoiling from 7th grade arithmetic? Congratulations, taken on your own terms, you just found 3.6% worth of savings from practitioner costs.

My local grocery store wouldn't even bother issuing a coupon for that small a discount.


This is one example of an aspect where insurance causes costs that are not directly attributable to the insurer in your numbers.

This isn’t seventh grade math. This is kindergarten level cause and effect.


Yes, as I said, if we accept your claim at face value, that every dollar of American practitioner-side insurance overhead --- not the delta from Canada, but every single dollar of it --- is mis-spent, you managed to identify 3.6% of the waste in the system. Congratulations.

I said earlier we'd gone round-and-round on this topic before, and I was a little burned out on it, but I didn't expect you to refute your own argument like this. I'm glad we gave it another run this time! This is a great statistic; I'll be using it elsewhere. Thank you.


Insurance has more than one way to run the costs up; this is but one of them. Weird rebate deals with drug manufacturers. Vertical integration. Buying practices and paying them higher rates.

> I was a little burned out on it

I just did my taxes and am a little burned out by the $49k in healthcare expenses I got to deduct on them.




> Fun fact: given your background and field, you probably come out significantly ahead of where you'd be in countries with single-payer health care.

Oh, absolutely not. I’ve done the math on that, for sure. Unfortunately, one family member has a condition that makes emigration infeasible.


The TI-85 also didn't have a lot of the built-in statistical functions that the TI-83 had.

I was also the one person with a TI-85 in a school of 83s. But by the time I took the statistics class, I knew enough BASIC to write my own programs to replicate the functionality that was missing.
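
Not the original TI-BASIC, but the same idea translated to JavaScript: a minimal replica of the TI-83's built-in "1-Var Stats" output (mean, plus sample and population standard deviation), the kind of thing a TI-85 owner had to write by hand:

```javascript
// Minimal replica of the TI-83's "1-Var Stats": x̄ (mean), Sx (sample
// standard deviation), and σx (population standard deviation).
function oneVarStats(xs) {
  const n = xs.length;
  const mean = xs.reduce((a, b) => a + b, 0) / n;
  const ss = xs.reduce((a, x) => a + (x - mean) ** 2, 0); // sum of squared deviations
  return {
    n,
    mean,
    Sx: Math.sqrt(ss / (n - 1)), // sample standard deviation
    sigmaX: Math.sqrt(ss / n),   // population standard deviation
  };
}

console.log(oneVarStats([2, 4, 4, 4, 5, 5, 7, 9]));
// mean 5, sigmaX 2, Sx ≈ 2.138
```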


The average person is slightly more female than male and has 2.1 children, but they do benefit from defense contracts since it makes up a small percentage of their salary.


You are a fun person. We should be friends.


'readSync' does two different things: it tells the OS we want to read some data, and then it waits for the data to be ready.

In a good API design, you should expose functions that each do one thing and can easily be composed together. The 'readSync' function doesn't meet that requirement, so it's arguably not necessary - it would be better to expose two separate functions.

This was not a big issue when computers only had a single processor, or when the OS relied on cooperative multi-threading to perform I/O. But these days the OS and the disk can both run in parallel with your program, so the requirement to block when you read is a design wart we shouldn't have to live with.


> tells the OS we want to read some data and then waits for the data to be ready

No, it tells the OS "schedule the current thread to wake up when the data read task is completed".

Having to implement that with other OS primitives is a) complex and error-prone, and b) not atomic.


The application in question is frozen for that period though, that's the wait they're referring to.

Even websites had this problem with freezing the browser in the early AJAX days, when people would do a synchronous XMLHttpRequest without understanding it.


he was referring to fs.readSync (node); the fs module also has fs.read, which is async. there is also no parallelism in node.

i don't see it as very useful or elegant to integrate some form of parallelism or concurrency into every imaginable api. depends on context of course. but generalized, just no. if a kind of io takes a microsecond, why bother.


They don't want the appearance of tampering. They don't really care about tampering itself because they make their money either way.

So as long as you are subtle about it they won't even investigate, as a public investigation would make it seem like tampering was common.

