
I'm surprised y'all stopped at the personal finance layer. I've been thinking for a while that LLMs would be really effective as personal financial advisers, and this kind of hookup (plus I guess another one for investment accounts?) seems like all that's needed to bootstrap reasoning.


Yup! Actually starting to experiment with that now.

Just this morning, we stood up a demo email agent (basically, emailing back and forth with Claude, with our MCP server connected and providing the data), and it's strangely comforting to chat with it. There's something about the medium of email that just works, because it's where you're already used to talking with your financial advisor.

There's a lot of nuance in how it's built though, and everyone has different preferences, so to start with the focus is really on building an agent-friendly MCP.


A company, Maybe.co, recently shut down after trying to do this exact same thing; they couldn't make the economics work.


Thanks for sharing! Would be curious to learn more.


> I've been thinking for a while that LLMs would be really effective as personal financial advisers

Why would you be thinking that?


I’ve thought the same. The main thing a financial advisor does is know the different financial instruments and pair them with your situation, right?

An LLM could do that extremely well, and also more often than once-a-year appointments. You could have active portfolio management for a negligible fee.


Who accepts the liability when the LLM makes one of its typical occasional massive judgement fuck-ups?

Asking that because even the very best, commercially available, state-of-the-art LLMs (presently Claude Opus 4.7 (1M) with Max effort enabled) still occasionally fuck up their decisions and judgement in significant ways.

So, it's kind of horrifying to me that people would consider this for potentially life-impacting decisions. Especially since it sounds like the idea is to advise people in areas where they don't themselves have the skill and knowledge to catch bad advice. :(


Yeah, investing over time in such a way as to beat the S&P is actually incredibly challenging (having tried it myself). I’m very skeptical an LLM can do better than that unless it has a very large, expensive firehose of data.

It may be that more mundane analysis ends up being the most useful. For example — for years, we had some money in a money market fund earning basically no interest. It just wasn’t on the radar.

Had even a not very smart LLM nudged us to put the money into a HYSA earlier, we would have made thousands per year in interest.

My wife also recently used our tool to do some simple investing optimization that’s going to save us a few thousand per year on taxes.

Things like this aren’t sexy, but they do have value, and they don’t take GPT Pro.


"You are absolutely right! This is a very deep, professional level insight. Yes, I have blown up your account but we can try more of my investment ideas. What would you like to do next?" /s


Yeah, I actually agree with you -- this is something that needs a ton of guardrails. It'll take a lot of thought to build correctly.


Would you be comfortable using this same logic to invest most of your net worth in lottery tickets/betting on black in a casino? If not, I'd be curious to hear what is different in that for you.


> Would you be comfortable using this same logic to invest most of your net worth in lottery tickets/betting on black in a casino?

I wouldn't "invest" in lottery tickets because for those p is far too small (exception: if I found a loophole in the lottery's system, which has happened for some lotteries). For casinos, there is additionally the very important aspect that the casino will scam you: if you start winning money (for example, by having found some clever strategy that gives you an advantage), security will escort you out of the building and ban you from entering the casino again.

So, to give an explanation of the differences:

- Because "the typical run" for such an investment will be losing, you should never invest your whole net worth (or a significant fraction thereof) in such an investment. The advice that I personally often give is to use index funds or stock investments to generate the money for investments that are much more risky but have huge possible payouts.

- You should only do such an "early investment" if you have a significant information advantage over the average person. Such an advantage is plausible, for example, if you are deeply interested in technology topics.

- Lottery tickets have an insanely small p (as defined in my comment). You only make "early investments" in areas where the p is still small, but not absurdly bad. The difference is that for lottery tickets the p is basically well known, whereas for "early investments" people can only estimate it. Because of your information advantage from the previous point, you can estimate the p much better than other people, which gives you a strong advantage in picking the right "early investments".

But be aware that this is a strategy for risk-affine people. If you aren't one, you're better off staying with, for example, index funds.


> this is a strategy for risk-affine people

If you’re paying a fair price for the risk, sure. Most of the examples you gave seemed to be in deep speculative territory, to the point that they don’t much resemble anything economic.


My FAANG employer launched a service ~6 months ago that today sees millions of DAUs. This service was 100% vibe coded. It was created 20x faster than the median launch and had notably fewer issues than the median launch. If AI stopped improving today, it would be a technological leap equivalent to a new high-level language paradigm for us.


What game is played? To me it seems pretty straightforward that for both the actual caloric content is ~0.


I believe it’s 0.4 calories per serving, which is less than one and rounds down to zero, but it’s not approximately zero by a long shot.


How is 0.4 kcal "not approximately zero by a long shot"?

Especially when compared to a standard coke with around 150 kcal.


Well, it’s almost half a calorie, to begin with.


By the time I finish the can, I'll have burned through more than 0.4 calories.


To me this is completely unrelated to the quality of the PRNG, because security is explicitly a non-goal of the design. A general-purpose non-cryptographically secure PRNG is evaluated primarily on speed and uniformity of output. Any other qualities can certainly be interesting, but they're orthogonal to (how I would evaluate) quality.


Right: put differently, why would you bother to select among the insecure RNGs an RNG whose "seed" was "harder" to recover? What beneficial property would that provide your system?


CSPRNGs have all of the desirable properties for the output.

All else being equal, I don't think it is possible for a trivially reversible generator to have better statistical properties than a generator whose output behaves more like a CSPRNG.

It can definitely be good enough and/or faster, though.


Right, I think defaulting to a CSPRNG is a pretty sane decision, and you'd know if you had need of a non-CS PRNG. But what does that say about the choice between PCG and Xoshiro?


Defaulting to a CSPRNG pre-seeded with system randomness is not a bad choice per se (especially given that many users don't know they need one), but current ones are much slower than the RNGs we are discussing.

If you're going to provide a non-CS one for general simulation purposes, you probably want the one whose output is as close to indistinguishable from random data as you can get without compromising performance, though.

Some people will have more than enough with a traditional LCG (MC isn't even using RNGs anymore), but others may be using more of the output in semantically relevant ways where that won't work.

If Xoshiro's state can be trivially recovered from a short span of the output, there is a local bias right there that PractRand lets through but that your application could accidentally uncover.

The choice is: Are the performance gains enough to justify that risk?


Why does it matter if the state can be trivially recovered? What does that have to do with the applications in which these generators are actually used? If the word "risk" applies to your situation, you can't use either Xoshiro or PCG.


This thread is too deep to reply further down, but if a bit is dependent on the value of a bit a couple of bytes back, then it is not acting randomly.

It's not about security.

I hope you can agree that if every time there is a treasure chest to the left of a door, a pink rabbit spawns on the top left of the room, that's not acting very random-like.

I'm not taking a position on the perceived added value of PCG over Xoshiro.


The property you're talking about (next bit unpredictability) is important for a CSPRNG, but it doesn't matter at all for a PRNG. A PRNG just needs to be fast and have a uniform output. LCGs, for instance, do not have next bit unpredictability and are a perfectly fine class of PRNG.


The paper that triggered this thread, which "breaks" PCG, sees it as potentially in the same class of issues as using RANDU.

> our results […] do mean that [PCG']s output has detectable properties. Whether these properties may affect the result of Monte-Carlo numerical simulations is another matter entirely.

Again, this is about PCG, which required a deliberate breaking effort.

The short version of Xorshift as originally presented by Marsaglia, which outputs its whole state, is bound to have behaviors like my room-generation example emerge fairly easily, particularly with low Hamming-weight states.

I doubt Xoshiro's output is that bad, but if it's presented as trivial to recover versus PCG, that to me indicates potential issues when using the output for simulation.


You replied to a claim that Telegram doesn't do E2EE for groups saying 'Neither does Whatsapp/Signal'.

That's wrong, as `tptacek noted. If you meant something else, that wasn't clear.


> E) (I believe) don't enable E2EE with more than one device

my response was:

> E) Neither does Signal/Whatsapp.

The thread of the "E" topic is relevant here; I'm not claiming that Signal/Whatsapp support (or do not support) encryption for group chats.

Sorry that it wasn't clear, I thought referring to them directly by letter would make it easier to differentiate.


Why not just read 64 bits off /dev/urandom and be done with it? All this additional complexity doesn't actually buy any "extra" randomness over this approach, and I'm skeptical that it improves speed either.


The problem is, there are around 2^62 double-precision numbers between 0 and 1, but they're not uniformly spaced: there are many, many more between 0 and 0.5 (nearly 2^62) than between 0.5 and 1 (around 2^52), for instance.

So, if you want a uniform variate, but you want every number in the range to be possible to generate, it's tricky. Each individual small number needs to be much less likely than each individual large number.

If you just draw from the 2^62 space randomly, you almost certainly get a very small number.


Seems to me that the simplest solution would be to repeatedly divide the range in half, randomly selecting either side, until it converges on a single value. In C this might look something like this:

  double random_real(double low, double high, int (*random_bit)(void)) {
    if (high < low)
      return random_real(high, low, random_bit);
    double halfway, previous = low;
    while (1) {
      halfway = low + (high - low) / 2;
      if (halfway == previous)
        break;
      if (random_bit() & 1)
        low = halfway;
      else
        high = halfway;
      previous = halfway;
    }
    return halfway;
  }
That should theoretically produce a uniformly-distributed value. (Although perhaps I've missed some finer point?)


So you have two doubles, halfway and previous, and a loop whose termination depends on if (halfway == previous), where halfway is the result of a floating-point calculation. You sure that’s going to work? I suspect it will frequently fail to terminate when you think it should.

Secondly, why does this generate a uniform random number? It’s not clear to me at all. It seems it would suffer the exact problem GP’s talking about here, that certain ranges of numbers would have a much higher probability than others on a weighted basis.


> Secondly, why does this generate a uniform random number?

Each interval of equal size occurs with equal likelihood at each step.

Consider that you want to generate a random number between 0 and 1024 (excl.). The midpoint would be 512, thus you choose randomly whether the lower interval [0, 512) or the higher interval [512, 1024) is selected. In each step, the range size is independent of the concrete numbers, i.e. after k steps it is exactly 2^(-k) * (high - low), and in each step each range has equal probability. Thus, if the algorithm terminates, the selected number was in fact uniformly sampled.

I would also presume it must terminate. Assume that the two endpoints are one ulp apart. The midpoint is then either of the two; there is no randomness involved (barring FPU flags, but they don't count). In the next step, the algorithm either terminates or sets the endpoints equal, which also fixes the midpoint. Thus the procedure always terminates with the desired result.


The issues that the GP is grappling with are largely due to the fact that they are trying to "construct" real numbers from a stream of bits, which is always going to lead to bias issues. With this particular algorithm (assuming a truly random source), on the other hand, the resulting number should be more or less completely uniform. It works because we are partitioning the search space itself in such a way that every number is as likely as any other. In fact, that the algorithm terminates rather predictably essentially demonstrates just that: over one million invocations, for example, the average number of iterations was something like 57 (with the minimum being 55 and the maximum outlier 74), which is to say you could pick any number whatsoever and expect to see it no more than once per ~2^57 invocations.


I was curious about this. On the one hand, comparing doubles with == is rarely a good idea but, on the other hand, your explanation seems valid.

After some testing I discovered a problem, but not with the comparison. The problem is with calculating the halfway value. There are pairs of doubles whose difference cannot be represented as a double:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <float.h>

  double random_real(double low, double high, int (*random_bit)(void)) {
    if (high < low)
      return random_real(high, low, random_bit);
    double halfway, previous = low;
    while (1) {
      halfway = low + (high - low) / 2;
      if (halfway == previous)
        break;
      if (random_bit() & 1)
        low = halfway;
      else
        high = halfway;
      previous = halfway;
    }
    return halfway;
  }


  int main(int argc, char *argv[]) {
    srand(time(NULL));
    for (int i = 0; i < 1000000; i++) {
      double r = random_real(-DBL_MAX, DBL_MAX, rand);
      printf("%f\n", r);
    }
  }


Actually, the problem is that the algorithm has to calculate DBL_MAX - (-DBL_MAX), i.e. DBL_MAX + DBL_MAX, which of course exceeds the maximum value for a double-precision number (by definition). That isn't a very realistic use case either, but in any case you could just clamp the inputs like so:

  double random_real(double low, double high, int (*random_bit)(void)) {
    if (high < low)
      return random_real(high, low, random_bit);
    const double max = DBL_MAX / 2;
    if (high > max)
      high = max;
    const double min = -max;
    if (low < min)
      low = min;
    double halfway, previous = low;
    while (1) {
      halfway = low + (high - low) / 2;
      if (halfway == previous)
        break;
      if (random_bit() & 1)
        low = halfway;
      else
        high = halfway;
      previous = halfway;
    }
    return halfway;
  }


Fixed it!

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <float.h>

  double random_real(double low, double high, int (*random_bit)(void)) {
    if (high < low)
      return random_real(high, low, random_bit);
    double halfway, previous = low;
    while (1) {
      halfway = low + (high/2 - low/2);
      if (halfway == previous)
        break;
      if (random_bit() & 1)
        low = halfway;
      else
        high = halfway;
      previous = halfway;
    }
    return halfway;
  }


  int main(int argc, char *argv[]) {
    srand(time(NULL));
    for (int i = 0; i < 1000000; i++) {
      double r = random_real(-DBL_MAX, DBL_MAX, rand);
      printf("%f\n", r);
    }
  }


Yep, that works too. You likely won't ever need a random number that large, of course, but if you did, that would be the way to do it.


That wouldn't produce uniformly distributed random floating-point numbers.


Amazon | Full-time | Security Engineering/Management | Austin, TX | On-Site

I am hiring a new Application Security team in Austin, focused on making the highest-privilege applications on the non-AWS side of the company the most secure on the planet.

This team will be joining a 9-month-old effort to collaborate with developers of key apps on security assessment, architecture improvement, design and code review, and automation of the security process.

The pros of our team: technical excellence, a culture of sustainable work (we work hard here, but strictly 9-5), the opportunity to have a significant influence on the security posture of the company as a whole, the chance to hack on applications operating at global scale, and low (1x/month) oncall expectations.

The cons of our team: moderate process debt (arising from our newness and some unexpected demand) and higher-than-normal ambiguity in tasks (we hold too many task definitions/bars in our heads and haven't written them down yet).

Please apply to these roles through the links below:

* Security Engineering Manager: https://www.amazon.jobs/en/jobs/2769965/security-engineering...

* Senior Security Engineer: https://www.amazon.jobs/en/jobs/2778970/senior-security-engi...

* Security Engineer: https://www.amazon.jobs/en/jobs/2777245/security-engineer-ii...

I'll check this post periodically and respond to any questions (concerning non-confidential info about this job) if people are interested.


Applied! I'm interviewing for another security role in the store department, but this seems to be another team.


Yep! We're lucky to be part of an org that's growing across a few teams, so there are several jobs up under the wider Stores AppSec umbrella.


Who do you think declassifies and releases information? Who do you think passed and enforces the Freedom of Information Act?


>Money breeding laziness ... killed ICOs

ICOs were killed by Solidity and the Ethereum ecosystem more generally being insufficiently expressive to create anything of value other than pyramid schemes (insofar as those have value).

