This is an interesting approach, but from the article I don't see a huge difference between this method and traditional high-level synthesis (HLS). The same advantages apply: being able to test the design as both software and hardware with the same code, and "faster" development. The same methodology applies: a software-like language is synthesized to hardware, and while it's not an actual software language (as in HLS), it is still "Rust-like."
Still, for someone like Google, building ASICs with HLS makes a lot of sense. It should let you deploy very complex devices a lot faster, and they probably don't care very much about squeezing the last square mm of area or MHz of performance out of their devices.
I just looked briefly at the simple LFSR and FIR filter examples, and have to say they look more complex than a SystemVerilog version would be. Perhaps the benefit comes in the HLS implementation option...
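For reference, here's roughly the kind of logic being compared, as a minimal Galois LFSR sketch in plain Rust (not DSLX, and not the article's actual example; the 16-bit width and the 0xB400 tap mask are my own choices, a common maximal-length configuration):

```rust
// Minimal 16-bit Galois LFSR sketch. One step: shift right, and if the
// bit that fell off was 1, XOR in the tap mask. 0xB400 gives a
// maximal-length (65535-state) sequence for 16 bits.
fn lfsr_step(state: u16) -> u16 {
    let lsb = state & 1;
    let mut next = state >> 1;
    if lsb == 1 {
        next ^= 0xB400; // feedback taps
    }
    next
}

fn main() {
    // Any nonzero seed works; zero is the one lock-up state.
    let mut s: u16 = 0xACE1;
    for _ in 0..5 {
        s = lfsr_step(s);
        println!("{:#06x}", s);
    }
}
```

In SystemVerilog this is a few lines inside an always_ff block, which is why the comparison in the article's examples caught my eye.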