Hacker News

I feel weird being stubborn against free tier google gemini

I feel as though it 'extracts' some sort of "smartness" out of me (if any), and then whatever intelligence comes from me becomes part of Google Gemini

this is why I would never want to pay for using these tools: anything good that comes from me in the chat becomes Google's through AI training, which is OK so long as it's free to use

i.e. I won't pay to make their stuff better through my own work



Several LLM providers have solid promises that they won't train on your inputs to them. OpenAI have this if you are using their paid API (though frustratingly not for their paid ChatGPT users, at least to my knowledge), and Anthropic have that for input to their free apps as well: https://support.anthropic.com/en/articles/7996885-how-do-you...

I was hoping I could say the same for Gemini, but unfortunately their policy at https://support.google.com/gemini/answer/13594961?visit_id=6... says "Google uses this data, consistent with our Privacy Policy, to provide, improve, and develop Google products and services and machine-learning technologies"

My intuition is that Google don't directly train on user conversations (because user conversations are full of both junk and sensitive information that no model would want to train on), but I can't state that with any credibility.


I’m sure there’s absolutely zero chance that Sam Altman would lie about that, especially now that he’s gutted all oversight and senior-level opposition.


Ah yes. Solid promises you can never verify. That companies would benefit massively from violating.

That's worth literally nothing.


I know this sounds heretical, but companies generally do not go against what they say they are doing. They might use clever language or do slimy things, but it's very rare that they will say "We do not do xyz" while they are in fact doing xyz. Especially for big companies.

Reputation has far more value than whatever they gain by lying. Besides, they can just say "We do xyz," because fewer than 1% of users read the TOS and fewer than 0.1% care enough to stop using the service.


> Google: "Don't Be Evil" is our motto!

> Also Google: "Let's do all the evil things..." ~ heavily "paraphrased" ;)

My tongue-in-cheek point is that corporations beyond a certain point of "filthy-richness" just do as they please and say what they please; the two rarely have to agree, and neither seriously affects their profits or "bottom line." Most of your typical mega-corps can really only be touched by the laws and legal system, which they've been increasingly "capturing" in various ways, so that happens very rarely these days. And when it does, it's most often a "slap on the wrist" and a "don't do that!" sorta thing, followed by more business-as-usual.

You know the old worry about the "paperclip production maximizer AI" eating everything to create paperclips? That's kinda where we're pretty-much already at with mega-corps. They're so utterly laser-focused on maximizing to extract every last dime of profit out of everything that they're gonna end up literally consuming all matter in the universe if they don't just destroy us all in the process of trying to get there.


It looks like you're trying to maximize paperclips. Would you like help?

https://www.decisionproblem.com/paperclips/ (the game)


I mean from a non-subjective legal TOS perspective.

I'm not arguing that the grocery store saying "fresh produce" guarantees that the produce is fresh. Fresh, like evil, is subjective.

I'm saying that if the grocery puts "All our produce is no older than 10 days" you can be pretty sure they adhere to that and train employees to follow it. "10 days" is not subjective.


This is supremely naive, in my opinion.

Big companies not only lie, some of them do so routinely, including breaking the law. Look at the banking industry: Wells Fargo's fraudulent fake-account scandal, JPMorgan Chase's US Treasury and precious-metals futures fraud. Standard Chartered caught money laundering for Iran, twice. Deutsche Bank caught laundering for Russia, twice. UBS laundering and tax evasion. Credit Suisse caught laundering for Iran. And so on.

Really it comes down to what a company believes it can get away with, and what the consequences will be. If there are minimal consequences they'd be dumb not to try.

Oh I just remembered a funny one: remember when it came out that Uber employees were using "God view" to spy on ex-partners, etc? For years. Yeah I'm pretty sure the TOS didn't have a section "Our employees may, from time to time, spy on you at their discretion." Actually the opposite, Uber explicitly said they couldn't access ride information for its users.


The company can certainly take a calculated risk of going against its TOS and its promises to customers, at the cost of potential damage to its reputation.

Note that such reputation risks are both external and internal. The reputation reflects on the executive team, and there is a risk that executive team members may leave or attempt to get the unscrupulous employees fired.


It would also destroy these companies if they were ever caught lying.


That seems awfully optimistic, given what Sam Altman is getting away with in transforming the governing structure of OpenAI.


Not if the government required them to do it.


OpenAI also promised to remain open forever.


I totally sympathize with the sentiment. But how long until people who are taking a moral stand against AI are simply obsoleted by the people who don’t? Today it’s easy to code effectively without relying on AI. But in 10 years will you simply be too slow? Same argument can be made with nearly any industry.


That's the same logic as for frameworks like React.

With React you are more productive, but my web experience is worse than it was without those frameworks.

And LLMs get worse if they are trained on AI-generated text. At the current speed, I don't know if in 10 years AI will still be worth the high costs.


> With React you are more productive, but my web experience is worse than it was without those frameworks.

You cannot begin to know that for sure, and it really makes little to no sense if you think about it.

As with the anti-electron crowd the options are not:

* Electron app

or

* Bespoke, hand-crafted, made with love, native app

The options are normally “electron app” or “nothing”.

Same deal here. Taking away React/Angular/Vue won't magically make people write more performant websites. I'm sure people bitched (and continue to bitch) about PHP for making it easy to create websites that aren't performant, or about WordPress for all its flaws. It's the same story that's repeated over and over in tech circles, and I find it both silly and incredibly short-sighted. Actually I find it tiring, because you can always go one level deeper to one-up these absurd statements. It's No True Scotsman all the way down.


I feel like I (probably?) agree with what you are saying, but this is a very confusing comment. You started out with an epistemological argument, and then jumped into an analogy that's so close to what is being discussed that on first read I thought you were just confused. I'm not sure anyone can continue the discussion in a meaningful way from what you've written because so many aspects of your comment are ambiguous or contradictory.


I mean retrospectively.

In the time before all those frameworks like React, the UX was better for me than it is now.

Less flashy and animated, but faster.


I hate this analogy; even things from the RAD days, like VB, were better than Electron.


Pretty much like the people who don't care about privacy: you still get captured, tagged in their information, and uploaded to the web. As an individual, it's difficult to do much about it.


tbh your data would be too unstructured; it's not really being used for training unless you deliberately flag it with a feedback mechanism



