Hacker News

For the time being the work is still in the early stages of computational neuroscience research. Once things reach the point of commercial/medical viability, I'd imagine either the direct synthesis model will improve sufficiently or something akin to Parrotron's voice normalization may be of use: https://google.github.io/tacotron/publications/parrotron/ind... .

For actual patients using the system, however, I'd expect it would be beneficial to keep the output as low-latency and as unmodified as possible, since this will help the individual learn to control the system as their neural activity shifts over time.


