That was just a hyperbolic example, but I can think of a few superficial reasons:
1. Because you end up with a more complex Dockerfile. I've seen curls, clones, etc. during the build - having a remote call pull in random tools mid-build _feels_ like a bad practice (a rough sketch of what I mean is below the footnote).
2. There is also the issue that a fleet of application servers will have different tooling available, which can be frustrating in a pinch. I've done some gnarly things across many hosts [2], expecting a common set of tools. At AWS, we were responsible for non-nuclear teams' services during oncall and would ssh to their boxes. Logging into a box with a different shell would be annoying.
[2] > for i in `cat hosts.txt`; do ssh $i '...'; done
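To make the Dockerfile point concrete, here's a rough sketch of the pattern I'm wary of versus what I'd rather see (hypothetical tool name and URL, not from any real project):

    # build-time fetch of whatever happens to be at the URL today - hard to reproduce later
    RUN curl -sSL https://example.com/install-sometool.sh | sh

    # vs. installing a pinned package so every image (and every host) gets the same tooling
    RUN apt-get update && apt-get install -y sometool=1.2.3 \
        && rm -rf /var/lib/apt/lists/*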