Adding to the litany of bad Google ideas from the past 10+ years:
- Killing google reader
- Pointless UI changes
- Multiple chat and videocall apps that cannibalize each other.
- Stadia fiasco
- Shoving AI down our throats in their MAIN PRODUCT
What's the source of this rot? I have a friend at Google who says the place is filled with smart people competing with each other. Perhaps this competition fuels a chaotic lack of coherence? It feels like they have no clear vision for the "Google Ecosystem" and are hopping on the AI bandwagon hoping it'll carry them into the future.
Google's Gemini is not mind-blowing, and probably not the top model, but it is without a doubt in the same ballpark as its competitors. Which I think is a pretty good sign. Just like Meta, Google did not drop the ball on AI, and it looks like they had their ear to the ground better than, say, MSFT, AAPL, or AMZN.
In that sense I can see why investors are happy. What matters is whether Google can continue to innovate, and at a faster rate than the competition.
I know that view is popular, but it seems so short-sighted. Most companies can increase profit whenever they want, but driving out innovators and abusing goodwill never works out long-term. Look at IBM, Oracle, Intel, Boeing, US Steel, GE, Commodore, Quiznos, etc. I feel like Google's stockholders are just having the wool pulled over their eyes. An increase in profit often means a cannibalization of value.
I'm mostly wondering why shareholders go along with it, or even pay a premium for shares that make a lot of money now but may be comparatively worthless in 20 years. Do they really believe this is sustainable? Do they expect Google to just start issuing massive dividends? Are they just hoping for a greater fool?
Wall St wants to see them keeping up with the latest tech. That's why Apple put billions into VR and we will hear nothing of it in 12-24 months. Another reason is talent retention/acquisition. AI will be history in 12-24 months when the bubble bursts and we are left with a mountain of cheap GPUs.
I think it’s safe to assume that in the next few years almost all consumer hardware will be able to run a local LLM comfortably. And no one knows if LLMs can go beyond GPT-4 in a way that will genuinely blow people away.
> (...) almost all consumer hardware will be able to run a local LLM comfortably.
That's the last thing we want. For example, car manufacturers tried to make voice commands work and the results are still unreliable; they also experimented with touch screens, and those are going away because they are a poor and unsafe way to operate a moving vehicle. People want tactile feedback and the ability to operate controls without taking their eyes off the road. Why would anybody want an LLM in their camera, phone, washing machine, or thermostat?
I personally don’t want that. In my opinion, this current iteration of LLMs is completely overblown, and I am perplexed as to how quickly it is getting adopted into everything.
Of course, LLMs are useful. For small tasks, there is a significant productivity boost to be had, but they are not trustworthy. And that is the main issue.
If we see them as untrustworthy, then perhaps it is necessary to accelerate their exposure in consumer technology as a way to show that an LLM can cause harm, in whatever form that may come.
It’s very easy to overlook that LLMs make things up, but they do (including GPT-4), and if that can’t be solved, then it’s safe to assume this hype will be short-lived.
All tech that becomes ubiquitous is based on giving humans answers that do not change on a coin toss. Your bank account statement shows the same balance for the same end-of-day query no matter when you request it; your GPS gives you directions to the same place every time you enter the same destination address, not to a place that merely looks like a probable destination you might want. The best uses for AI are not generative BS but ML that gives us answers, patterns, and action scripts. Our brains are wired for survival and constantly look for answers, patterns, and scripts that do not change at random. There is a reason movies and stories follow a hero's arc: we want order, and life is chaotic enough. Not realising that will be a rude awakening for all investors in generative AI.
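The determinism argument above can be made concrete with a toy sketch (the account names and the noisy "generative" stand-in are hypothetical, invented just for illustration): a lookup-based service answers the same query identically every time, while a sampling-based one may not.

```python
import random

# Deterministic service: the same query always yields the same answer,
# like a bank balance for a fixed end-of-day query.
balances = {"acct-001": 1250.00, "acct-002": 89.50}

def get_balance(account_id: str) -> float:
    return balances[account_id]

# Sampling-based stand-in for a generative answer: the response is
# drawn from a distribution, so repeated identical queries can disagree.
def sampled_balance(account_id: str, rng: random.Random) -> float:
    return round(balances[account_id] + rng.gauss(0, 5), 2)

# Repeatable: two identical deterministic queries always agree.
assert get_balance("acct-001") == get_balance("acct-001")

# Not repeatable: many sampled answers to the same query spread out.
rng = random.Random(0)
answers = {sampled_balance("acct-001", rng) for _ in range(50)}
print(len(answers) > 1)  # the sampled answers are not all identical
```

The point of the sketch is only the contrast: users build trust on the repeatable call, and a system whose answers drift between identical queries has to be held to a different standard.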
People who hide behind procedures, metrics, and competition driven by them, instead of looking out the window to see whether it's rainy or sunny.
MBA heads (which might be unfair here, because this is more of an engineer's clockwork mindset) that use "products launched" as a metric, even if you're actually taking down and launching the same product over and over.
Google killing "unprofitable" products like Google Reader because it makes sense on paper, except that a) it's a minor line item, b) they are too analytical to measure the impact on goodwill and the brand's "soft power", and c) the product's existence created demand for RSS producers; it was not simply another reader.
I've heard one link in the chain is that promotions are too tied to shipping new things, not to improving current products or keeping things from getting worse.
Accurate. Abject lack of a cohesive vision/strategy, and no leadership to articulate and drive it. It’s a motley collection of fiefdoms competing for perf points.
The relentless drive to make as much money as possible, as quickly as possible. Publicly traded companies tearing at the soft underbelly of society for a few pennies more each quarter.
JMP is also a criminally underrated and flexible tool for data analysis, both exploratory and modelling. It's my tool of choice over R, especially after they added a structural equation modelling platform.
It's a pity they got rid of pricey perpetual licensing quite a few years ago in favor of a no less pricey subscription model.
For those of you who are interested in this space (open source/affordable scientific equipment) I recommend the lab on the cheap blog[0]. They have a lot of links to interesting websites/papers in this space.
I know that qPCR is preferred for diagnostics, but is it actually necessary? As in, will the false positive/false negative rates suffer enough for standard RT-PCR to be useless?
No. qPCR is, well, quantitative, so it gives more information. Additionally, it requires less pipetting, and the data is easier to report. All of these things make it more desirable for diagnostics.
No, you just need more of those parts, and, depending on how much manual labor you want to do, a different number and configuration of them.
The process is:
- take sample
- amplify via thermocycler
- validate presence
That last step can be done a few different ways, depending on what you are looking for and how much time/money you have. (I'm just a hobbyist, so I won't get into specifics to avoid misspeaking, but I'm confident the 'crude' methods are possible via open tech; the more advanced ones are more proprietary, I'm sure.)