It really depends on the scale you're working at, and on whether you can assume someone will be available to supply credentials to an instance that has to restart.
I've used Fabric and Ansible to push configs out to small sets of hosts, and yes, I assumed the sensitive bits were OK sitting on the filesystem of the production host: if an attacker had access to the filesystem there would be bigger issues, and I'd have to invalidate those credentials anyway.
At a larger scale you'll want something like etcd or Consul, or even just a centralized key server that new instances call to ask for their configuration.
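As a rough sketch of that pull model, assuming Consul's KV HTTP API (which returns values base64-encoded in a JSON list; the key path and host here are made up for illustration):

```python
import base64
import json
import urllib.request

# Hypothetical key path on a local Consul agent.
CONSUL_URL = "http://localhost:8500/v1/kv/myapp/db_password"

def decode_consul_kv(body: bytes) -> str:
    """Consul's KV API returns a JSON list of entries whose Value is base64-encoded."""
    entries = json.loads(body)
    return base64.b64decode(entries[0]["Value"]).decode("utf-8")

def fetch_secret() -> str:
    # A new instance calls the config service at boot instead of
    # baking the secret into its image or config management repo.
    with urllib.request.urlopen(CONSUL_URL) as resp:
        return decode_consul_kv(resp.read())

# The response shape Consul returns, simulated here so the decode
# step can be shown without a running agent:
sample = json.dumps([{"Key": "myapp/db_password",
                      "Value": base64.b64encode(b"s3cret").decode()}]).encode()
print(decode_consul_kv(sample))  # → s3cret
```

The same shape works against etcd or a home-grown key server; only the URL and response parsing change.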
The thing is, anything predicated on HMAC secrets is vulnerable to those secrets being exposed. The secret has to exist in the clear at some point to perform authentication or signing, and a sufficiently determined attacker will be able to get that string.
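To make that concrete: computing or verifying an HMAC requires the raw key bytes in process memory, so anything that can read that memory (or the env var or file the key came from) gets the secret. A minimal sketch with Python's standard hmac module; the key and message are placeholders:

```python
import hashlib
import hmac

# The secret must be present in the clear to compute or verify the tag;
# there is no HMAC operation that works on an encrypted key.
secret = b"placeholder-shared-secret"   # hypothetical; sits in memory in the clear
message = b"GET /v1/instances/i-1234"

tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# The verifier recomputes the tag with the same plaintext secret,
# then compares in constant time.
expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```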
A system is only as secure as the humans running it can confirm it to be. That's why it's best to reduce your attack surface and make sure you can do access logging, process inventory, egress filtering, and the whole checklist of prevention, detection, and remediation. There is no magic pixie dust that makes a system fully secure; you're always making tradeoffs and managing risk rather than eliminating it.
How do the secrets get safely distributed to the machines that need them?
How do you revoke or rotate a secret, especially once a compromise is suspected?
How do you do all of this in DevOps-y, automated systems?
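One common answer to the rotation question is to keep a small keyring and, during a rollover window, accept tags made with either the current or the previous key, so signers can be updated gradually and revocation is just deleting an entry. A hedged sketch; the key IDs and keyring are invented:

```python
import hashlib
import hmac

# Hypothetical keyring: during rotation both old and new keys verify.
KEYRING = {
    "v1": b"old-secret",   # being retired
    "v2": b"new-secret",   # current signing key
}
CURRENT = "v2"

def sign(message: bytes) -> tuple[str, str]:
    """Sign with the current key; send the key ID along with the tag."""
    tag = hmac.new(KEYRING[CURRENT], message, hashlib.sha256).hexdigest()
    return CURRENT, tag

def verify(message: bytes, key_id: str, tag: str) -> bool:
    key = KEYRING.get(key_id)
    if key is None:          # key already revoked
        return False
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg = b"example payload"
kid, tag = sign(msg)
assert verify(msg, kid, tag)

# Revocation is deleting the compromised key from the ring:
old_tag = hmac.new(KEYRING["v1"], msg, hashlib.sha256).hexdigest()
assert verify(msg, "v1", old_tag)       # still accepted during the window
del KEYRING["v1"]
assert not verify(msg, "v1", old_tag)   # rejected once revoked
```

The automation part is then a matter of pushing keyring updates through whatever config channel you already trust, which circles back to the distribution question above.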
This is the problem space I work in.