Every month a cool shell is posted on HN and I get all excited.
Then I remember I need to work on hundreds of corporate servers, half of which I cannot install anything on. So the only common denominator shell is bash.
I would be flattered if people found this invaluable enough to install on fleets of servers. But pragmatically that's not the problem I'm trying to solve. I think managing remote systems is largely a solved problem with tools like Ansible (of which there are many competitors), and the few occasions you want to drop into a shell on a remote machine are occasions when you want the least resistance. So Bash is a sensible choice. But the problem Sysadmins/DevOps/Developers face these days is that a lot of tooling starts out being run on local machines before being pushed out to CI/CD, Kubernetes, ELK stacks, GitHub/GitLab, and all the other wonderful enterprise tooling I've neglected to mention. And having a shell that glues all these local machine executables together can still give big productivity gains, even if it does diverge from Bash.
So that's the inspiration behind this shell. It's not there to replace Bash on the server, but it is designed to make engineers' lives easier when working locally with CLI tooling like Docker, git, Terraform, JSON log files, and so on.
Of course, you're free to use / not use this in any way you want :)
> I think managing remote systems is largely a solved problem with tools like Ansible...
Leaving a server in the dark and remotely prodding it via Salt/Ansible/Chef/Puppet makes sense only in a very narrow and limited set of scenarios.
Even if you manage all your systems like that (I generally install a fleet via xCAT and manage some of them via Salt), you probably need to log in to that server and poke around to see what went wrong with a user's job or why that server is acting wonky.
OTOH, I applaud you for all the work you did and hope this shell makes lots of people's lives much easier.
That’s where tools like Graylog, fluentd, and Prometheus, or “full service” offerings like New Relic et al., come into play. If you’re managing one or two servers, sure, it makes sense to jump in and poke around, but beyond a handful of servers that becomes a nightmare, and centralized visualization will do wonders for your sanity without your having to dive into individual machines on a semi-regular basis.
Thoroughly impressed by what I saw in the terminal session video! My inner voice kept saying: that's how it is supposed to be.
Totally understand where OP is coming from, though. Most of the automation tools you cited will often have you drop into the shell because, well, they can't do it all.
This is a great project and I will be following it. Even if it doesn't become widespread, I hope some of the ideas will catch on and live on.
I don't want to hunt for binaries or configuration paths. I want inline docs as I type, and I want un-paged docs in my terminal. I'm tired of googling documentation. HashiCorp, I am looking at you guys: "go doc" is the best thing ever, and yet you make us google for documentation all the time! I want the inline multi-line completion and variable expansion I saw in the video, etc.
If I can find the wherewithal to switch my native shell to one of these shell improvements such as yours, I will. I think the productivity gains could be really nice. Of late, though, I've been forcing myself to learn idiomatic bash for my shell-scripting duties.
Perhaps a feature to think about would be the ability to transpile to idiomatic bash, so that people could script in your shell but ship portable bash. Then, as more and more devs adopt your shell, it could replace bash.
Thanks for the feedback, and that's an interesting suggestion. The biggest hindrance to that is that murex has built-in support for structured data types, which would mean any transpiled code might then depend on non-standard tools like `jq`. But it's an interesting enough problem that I might have a play, even if just out of academic curiosity.
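For illustration, this is the kind of bash such a transpiler might have to emit for a single structured step (a sketch only; the URL and JSON shape are made up, and real murex output would look different):

```bash
# Hypothetical transpiler output: one "select a field from JSON"
# step becomes a hard runtime dependency on jq, which isn't
# guaranteed to exist on the target machine.
command -v jq >/dev/null 2>&1 || { echo "jq required" >&2; exit 1; }
curl -s https://example.com/status.json | jq -r '.services[].name'
```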
How much of the complex processing (e.g. selecting fields in JSON documents) could be done without having to pipe data back to the client (i.e. without relying on tools like jq on the server)?
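For a sense of the gap: with only POSIX tools on the server, even trivial field selection gets fragile fast. A sketch (assumes a flat, unescaped document; nested or quoted values break it):

```bash
# Fragile POSIX-only JSON field extraction: fine for a flat
# one-liner, hopeless once values are nested, escaped or reordered.
json='{"name":"web-01","status":"healthy"}'
printf '%s\n' "$json" |
  sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
# prints: healthy
```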
Adding to the other replies: because it’s not just about you. A large environment will include tens to hundreds of colleagues in the immediate blast radius of any change you make (and possibly thousands to tens of thousands beyond), including ringers from external contractors, and ranging in skill and disposition from ninja sorcerer to middle-of-the-road unimaginative plodder, none of whom particularly wishes to deal with someone’s idiosyncratic preferences.
If there’s a crisis in which your unexpected novelty is an impediment to resolution, or (worse) a direct contributing factor, it’ll be your head on a pike.
Conversely, if you introduce a tool that takes “only” fifteen minutes to learn, but a thousand people have to learn it, that’s roughly 250 working hours, or six weeks, of aggregate human productivity you just appropriated. So it’d better be worth it.
You absolutely can introduce new ideas and utilities and capabilities, but you have to bring everyone along with you, and it has to be a material benefit. Good news: the leadership skills required to do so are not innate; they can be learned.
Some organisations are better at fostering change as a matter of their overall strategy, and anyone whose professional disposition is towards constant reinvention would be well advised to seek them out.
Corporate servers can have strict rules about what software is allowed to be installed. It all depends on the corporation and what the servers are doing. Financial and health care companies are extremely risk averse. Even if the downside of installing something like murex is vanishingly tiny, the fact that there's any possible downside at all is enough to give them pause. Even if the new software is genuinely more productive, you may have to make the argument before a committee whose primary incentive is CYA above all else.
Eight years ago I was working at a huge manufacturing company in a technical role, although one related to a physical product rather than software. I had to ask corporate HQ for permission to install Python, and was denied...
Indeed. The problem isn't that you can't install anything; it's that the bar for getting software installation approved is so high that it's faster to reinvent the wheel using the tools you have.
The technical reason is that all accounts I can get access to (which doesn’t include root) are not able to call any package manager and any installed tools are “reverted” to a whitelisted set of tools every day.
The practical reason is that I’d need to convince some board of managers who have “more important shit to do” to change the default set of tools for all 10,000 servers (VMs), for no other apparent gain than “better scripting”, which to them will sound like making it easier for hackers to extract sensitive data :-)
1. When you ssh you get to continue with the benefits and syntax of your current shell. The closest I know of to this is eshell, which gets some access to remote hosts through tramp rather than ssh.
2. There’s a small family of commands that take commands as arguments, e.g. env, xargs, parallel, find, ssh. Maybe they should get special support, and the shell should allow you to write subshells to be passed to these commands in some first-class way, similar to the way bash magically converts <(...) into a special file name arg with a process running redirected to that file.
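For reference, what that bash magic looks like in practice today (host names are placeholders):

```bash
# Process substitution: bash rewrites each <(...) into a /dev/fd/N
# path, so diff just sees two ordinary file arguments.
diff <(sort local-hosts.txt) <(ssh remote 'sort /etc/hosts')

# The string-passing style for commands that take commands as
# arguments: the whole inner pipeline goes to ssh as one argument.
ssh remote 'ps aux | sort -rnk3 | head -5'
```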
Right now I just run commands remotely by prefixing ssh (actually I have aliases for the commonly connected machines). So it's `ls` for a local list and `ssh remote ls` for the remote's ls (actually I type `remote-name ls`).
So things like `$!` or `^` work as expected since they operate normally, but some quoting is indeed required (e.g. I usually want a pipeline to run entirely, or mostly, remotely).
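The aliases themselves are nothing clever, roughly this (host names made up):

```bash
# One alias per frequently used machine, so `web1 ls` runs ls there.
alias web1='ssh web1.example.com'
alias db1='ssh db1.example.com'
```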
But as each invocation spawns a fresh shell, the remote connection is stateless. It would be nice if the shell could track things like `remote cd` and prefix subsequent commands with `cd`. This is hard to get right; there are so many ways to cd or pushd/popd (especially in `()` subshells or with `!:xx`-style shortcuts) that they are hard to catch. Emacs shell tries to keep a shell buffer's cwd in sync with the shell's, but easily gets out of sync.
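A naive sketch of that tracking idea (the host name is hypothetical), mostly to show where it breaks down:

```bash
# Track the remote cwd locally and replay it on every invocation.
# Deliberately naive: relative paths, pushd/popd, subshells and
# history shortcuts all defeat the interception.
remote_cwd='~'
remote() {
  if [ "$1" = cd ]; then
    remote_cwd=$2                      # absolute paths only
  else
    ssh remote-host "cd $remote_cwd && $*"
  fi
}
```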
I have used the tramp interface and it's OK.
I wonder how useful this would really be; I typically run a bunch of things remotely and then have the result pipe back into a local executable. Thus anything that ran the pipeline remotely by default would need some quotology to differentiate the two, which is exactly what exists now.
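e.g. today the quotes are the differentiator: everything inside the single quotes runs on the remote host, and the tail of the pipeline runs locally:

```bash
# The quoted stage runs on the remote host; less runs locally.
ssh remote 'journalctl -u nginx --since today | grep " 500 "' | less
```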
> Then I remember I need to work on hundreds of corporate servers, half of which I cannot install anything on. So the only common denominator shell is bash.
So bash it is.