Hacker News
The AlphaFold moment for materials is not any time soon (lesswrong.com) | 8 points by gmays 2 days ago | discuss
Morale (lesswrong.com) | 2 points by jger15 2 days ago | discuss
You're gonna need a bigger benchmark, METR (lesswrong.com) | 3 points by frmsaul 4 days ago | discuss
Hypotheses for Why Models Fail on Long Tasks (lesswrong.com) | 1 point by joozio 4 days ago | discuss
Splitting Mounjaro pens for fun and profit (lesswrong.com) | 2 points by henryaj 4 days ago | discuss
We're running out of benchmarks to upper bound AI capabilities (lesswrong.com) | 15 points by gmays 6 days ago | 10 comments
AIs can now do easy-to-verify SWE tasks, I've shortened timelines (lesswrong.com) | 3 points by gmays 7 days ago | discuss
The effects of caffeine consumption do not decay with a ~5 hour half-life (lesswrong.com) | 101 points by swah 7 days ago | 105 comments
My Picture of the Present in AI (lesswrong.com) | 1 point by speckx 7 days ago | discuss
Most people can't juggle one ball (lesswrong.com) | 506 points by surprisetalk 8 days ago | 174 comments
"Alignment" and "Safety", Part One: What Is "AI Safety"? (lesswrong.com) | 1 point by joozio 10 days ago | discuss
Paper Close Reading: "Why Language Models Hallucinate" (lesswrong.com) | 2 points by joozio 11 days ago | discuss
Estimates of the expected utility gain of AI Safety Research (lesswrong.com) | 1 point by joozio 11 days ago | discuss
What I like about MATS and Research Management (lesswrong.com) | 2 points by joozio 12 days ago | discuss
Predicting When RL Training Breaks Chain-of-Thought Monitorability (lesswrong.com) | 2 points by gmays 12 days ago | discuss
AI Safety at the Frontier: Paper Highlights of February and March 2026 (lesswrong.com) | 2 points by joozio 13 days ago | discuss
How to emotionally grasp the risks of AI Safety (lesswrong.com) | 3 points by joozio 13 days ago | discuss
You can't imitation-learn how to continual-learn (lesswrong.com) | 2 points by paulpauper 14 days ago
A Mirror Test for LLMs (lesswrong.com) | 2 points by gmays 15 days ago
I'm Suing Anthropic for Unauthorized Use of My Personality (lesswrong.com) | 5 points by usrme 15 days ago | 2 comments
Why did everything take so long? (lesswrong.com) | 2 points by jstanley 16 days ago
The state of AI safety in four fake graphs (lesswrong.com) | 3 points by allenleee 16 days ago
Gyre (lesswrong.com) | 3 points by jstanley 16 days ago
Less Dead (lesswrong.com) | 2 points by paulpauper 17 days ago
Using complex polynomials to approximate arbitrary continuous functions (2025) (lesswrong.com) | 1 point by measurablefunc 17 days ago
The Terrarium (lesswrong.com) | 1 point by johnfn 17 days ago
AI's capability improvements haven't come from it getting less affordable (lesswrong.com) | 3 points by gmays 17 days ago
I am definitely missing the pre-AI writing era (lesswrong.com) | 322 points by joozio 18 days ago | 240 comments
Stanley Milgram wasn't pessimistic enough about human nature? (lesswrong.com) | 7 points by paulpauper 19 days ago | 1 comment
Anthropic Donations: Guesses and Uncertainties (lesswrong.com) | 2 points by joozio 19 days ago