
Surely there are enough trials on depression per year that a result with 15 participants in the treatment group is too small to start bringing to public attention?

With a sample size this small, I don't think it should be in the news at all.



Small sample size, but:

- Building upon mounting evidence of TMS efficacy in patients with severe, intractable depression.

- 79% (11 of the 14) no longer meeting diagnostic criteria for depression after treatment, and another one improved.

- It's an RCT showing both massive efficacy and statistical significance (only 2 of 14 in the placebo group entered remission; see the quick check below).

I know we like to chant "more n", but this is very likely to be both a real and a clinically useful finding.
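
For anyone who wants to sanity-check the significance claim, here's a minimal sketch using Fisher's exact test (Python/scipy); the 2x2 table is just my reading of the remission counts above:

    # Fisher's exact test on the reported remission counts.
    # Rows: treatment, placebo; columns: remitted, not remitted.
    # The counts come from this thread; the 2x2 layout is an assumption.
    from scipy.stats import fisher_exact

    table = [[11, 3],    # treatment: 11 of 14 remitted
             [2, 12]]    # placebo:    2 of 14 remitted
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio ~ {odds_ratio:.0f}, p = {p_value:.4f}")  # p should land well below 0.01

Even with only 14 per arm, a split this lopsided clears conventional significance thresholds comfortably.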


Thank you for saying this.

More n is required, yes, but small n results like this are a noteworthy step in the right direction.

There is often good reason why n is small. Statistically significant results like this one pave the way for future experiments with larger n.


I had occasion to deep-dive into hundreds of clinical trials about a year ago, talking to PIs and trial participants, etc. One thing I wasn't even aware of, let alone appreciated, was how essential good trial design is (or maybe how vulnerable an insufficiently considered trial design can be to catastrophic failure modes).

It seems that running these 'small batch' studies/trials can be tremendously helpful to inform and refine the approach before executing on a larger scale. Of course, if it takes 3 years to reach an endpoint it probably doesn't make sense to serialize them, but from my layman's perspective there appears to be clear value (and in this case, encouraging results).


I'd also like to chime in on "effect size matters". After all, you don't need particularly large N to conclude that decapitation is going to be fatal.
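
To put rough numbers on that intuition, here's a minimal sketch of the standard normal-approximation sample-size formula (alpha = 0.05, power = 0.8; the effect sizes are just illustrative):

    # Approximate per-group n for a two-sample comparison:
    # n ~ 2 * (z_{alpha/2} + z_beta)^2 / d^2, with d the effect in SD units.
    from scipy.stats import norm

    def n_per_group(d, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)  # ~1.96
        z_b = norm.ppf(power)          # ~0.84
        return 2 * (z_a + z_b) ** 2 / d ** 2

    for d in (0.2, 0.5, 1.0, 2.0):     # illustrative effect sizes
        print(f"d = {d}: ~{n_per_group(d):.0f} per group")
    # d = 0.2 needs ~390 per group; d = 2.0 needs only ~4.

The required n falls with the square of the effect size, which is why enormous effects can be detected in tiny samples.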


Yes:

Small n studies are often the most exciting. If you get a p<0.01 effect in 10 people vs. 10,000 people, the lower bound on the effect size is similar (near zero in both cases), and so for the same prior each study should change our belief by a similar amount... but the upper bound on the effect size is much larger in the small study.
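
To make that concrete, here's a minimal sketch for a one-sample z-test that lands exactly at p = 0.01 (two-sided), with the effect measured in SD units; the exact-threshold framing is my simplification:

    # If a z-test lands exactly at p = 0.01 (two-sided), the implied estimate
    # is d_hat = z / sqrt(n) in SD units, with 99% CI d_hat +/- z / sqrt(n):
    # the lower bound is ~0 in both cases, but the upper bound shrinks with n.
    from math import sqrt
    from scipy.stats import norm

    z = norm.ppf(1 - 0.01 / 2)         # ~2.576
    for n in (10, 10000):
        d_hat = z / sqrt(n)
        print(f"n = {n}: estimate ~{d_hat:.3f} SD, 99% CI ~ (0, {2 * d_hat:.3f})")
    # n = 10 gives a CI of roughly (0, 1.6 SD); n = 10000, roughly (0, 0.05 SD).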

The only thing that messes this logic of mine up a bit is publication bias. There's a much larger chance that small studies sit on the shelf unpublished if there's no clear effect.


I'd be curious to know whether any other executive/mental function changed, something that may not have been measured before or after the treatment; not feeling depressed, self-reported, is not necessarily the same as having the same functionality as someone who's not depressed.


> not feeling depressed, self-reported, is not necessarily the same as having the same functionality as someone who's not depressed.

Sure, though one aspect of the depression scales is (self-reported) ability to go about daily activities, get normal sleep, etc., not just "feeling down". That is, it's self-reported information on functionality, too.


To be clear: I definitely think this kind of study is valuable; I just don't think it should be in the news unless the headline is something like "small study shows promise".


In fairness, "in the news" here is a bulletin from Stanford Medicine about research at Stanford.


Agreed, the sample size is small; but in defense of the researchers, I don't think the general public appreciates how difficult it is to find research participants, never mind a lot of them, and ones that remain adherent to the study protocol throughout its entirety so their data points can be included.

It's also extremely expensive, and unless your PI is a god at writing grant proposals, most uni research labs are cash-poor.

There's a reason most studies don't have lots of participants, and in most cases it's not laziness on the part of the researchers.


Yes. It's asinine to run a large-N study before having run a small one. So small N doesn't mean bad; it means preliminary.

Internet comment sections, however, are not great at nuance, and the difference between a bad study and one that leaves unresolved uncertainty is often beyond them.


It's already a well-established treatment; this is just the accelerated form, confirming it still works.


Accelerated, with fine-tuned targeting. It seems it may work much better.


Accelerated worked better.


Correct. It's one step on the long road to proof, positive or negative, and will ideally provide information for meta-analysis.



