
Are there techniques to ensure temporal consistency when performing this kind of upscaling? Currently it seems like each frame was upscaled independently, leading to weird artifacts.


> it seems like each frame was upscaled independently leading to weird artifacts

Yep, they used an off-the-shelf NN (Gigapixel) to upscale each individual frame. You could try to achieve your proposal using a Gaussian convolution windowed in 3D across frames. I suspect it may make a mess of the video's frame rate, but the motion appears smooth enough that it might work.
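A minimal sketch of that idea, assuming the frames are already stacked into a NumPy array of shape (time, height, width); the sigma value is illustrative, not tuned:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_frames(frames: np.ndarray, t_sigma: float = 1.0) -> np.ndarray:
    """Gaussian-blur along the time axis only, leaving each frame
    spatially sharp -- a crude form of temporal consistency."""
    # One sigma per axis: (time, height, width); zero means no spatial blur.
    return gaussian_filter(frames.astype(np.float32), sigma=(t_sigma, 0, 0))

# Toy usage: 10 noisy frames of a static 32x32 scene.
rng = np.random.default_rng(0)
frames = 128 + rng.normal(0, 20, size=(10, 32, 32))
smoothed = smooth_frames(frames)
```

Since the filter only averages each pixel with its own values in neighboring frames, it damps frame-to-frame flicker but does nothing for misaligned motion, which is the "mess of the frame rate" risk mentioned above.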


It’s not as if the source material has a steady frame rate; there are a lot of places where it jumps about 2-4 frames.

Which is quite possibly due to lapses in the hand-cranking of the film through the camera back in 1896, or to frames getting lost when the printed reel got damaged and spliced in the hundred and twenty-odd years between then and now. Possibly both. One could probably work out some of the cranking-speed variations by looking at how the train’s frame-to-frame motion changes, at least until it comes to a stop.


I like that the two sibling comments suggest "train a model" and "model a train", respectively.


You could also train a model that conditions on the previous frame in addition to the surrounding pixels.
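A rough sketch of what "condition on the previous frame" could mean in practice, assuming PyTorch; the architecture, layer sizes, and class name are all illustrative, not from any specific paper:

```python
import torch
import torch.nn as nn

class PrevFrameSR(nn.Module):
    """Toy super-resolution net that stacks the previous frame as an
    extra input channel alongside the current low-res frame."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            # 2 input channels: current grayscale frame + previous frame.
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into the upscale
        )

    def forward(self, cur: torch.Tensor, prev: torch.Tensor) -> torch.Tensor:
        return self.body(torch.cat([cur, prev], dim=1))

# Toy usage: batch of 1, 64x64 grayscale frames -> 128x128 output.
cur = torch.rand(1, 1, 64, 64)
prev = torch.rand(1, 1, 64, 64)
out = PrevFrameSR()(cur, prev)
```

Whether this actually aligns frames (rather than just averaging them, as the reply below argues) depends entirely on what the loss and training data reward.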


That would just create a slightly better onion-skinning of overlaid frames; it wouldn't align them.




