Hacker News | joaquin2020's comments

> The alternative is ansible. Everybody I know moved away from Chef to Ansible and never looked back.

That has been my experience/perspective as well. This is what I found in industry:

* ~2012: Puppet golden years
* ~2014: Chef golden years
* ~2016: Surge of popularity for Ansible and Salt
* ~2018+: Kubernetes ubiquity, Terraform for cloud, Ansible for systems

Related to this: SSR (server-side rendering) with Rails was popular through roughly 2012-2016, and after 2016 came the rise of SPAs (Angular, React, Vue) on top of micro-frameworks like Flask (Python), Express (Node), Go, and others. Combined with this are ML and other backend infrastructure workloads that require managing clusters, which scale better on Kubernetes, and Chef/Puppet have little presence on either K8S or backend distributed clusters.


I can give you the SF Bay Area perspective, where Ruby and especially Rails is quite popular. With the arrival of Docker and Kubernetes, immutable infrastructure patterns dramatically reduce costs, and the need for a centralized change management solution is nullified. At deploy time some config and templating is useful, but Chef becomes overkill. People might reach for Chef-Zero, but Ansible has become wildly popular in this scope.
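As a sketch of that deploy-time scope: a minimal, agentless Ansible play (file names, group names, and service name here are hypothetical) that renders a config from a template and restarts the service, with no central server involved:

```yaml
# deploy.yml — push-based; run with: ansible-playbook -i inventory deploy.yml
- hosts: app_servers
  become: true
  tasks:
    - name: Render application config from a template
      template:
        src: templates/app.conf.j2   # hypothetical Jinja2 template
        dest: /etc/myapp/app.conf
      notify: restart myapp

  handlers:
    - name: restart myapp
      service:
        name: myapp
        state: restarted
```

That's the whole lifecycle for immutable deploys: template once at deploy time, done. There's no agent to install and no server-side state to converge against, which is much of why Ansible won this niche.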

Chef tried to adapt with Chef Workstation to do push-based config, but it has no way to extract state from the system being configured. The target system would have to update state to a server, from which it is fetched indirectly. This doesn't scale, which is another reason why Ansible and Salt are popular.

Puppet also experiences some of the same issues, with Ansible eating their lunch and the popularity of push-based config for immutable infra. They attempted to respond with Bolt, but Bolt is based on static hostnames or DNS names, which won't scale given the dynamic nature of cloud-native, ephemeral systems.

In the case of managing fleets of systems that are not atomic stateless nodes, where you need to maintain state across nodes within a set, both Puppet and Chef do not scale and create outages (though the window is small), because they have to synchronously push state to a server and rely on eventual convergence. This doesn't work in cloud computing. With push-based config, you can set the cluster into the proper state and then use service discovery (asynchronous updates) to maintain the state of the cluster. In K8S, kubectl/helm fill the push role, and etcd is used to maintain state. Outside of K8S, such as Lambda in the cloud, Pulumi/Terraform can push state, with discovery through cloud metadata (labels, tags, etc.) or a service-discovery system like Consul to maintain the state.
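To illustrate the push-then-discover pattern in K8S terms (resource names and image are made up): kubectl pushes the desired state once into etcd, and the Service keeps cluster membership current via label selectors as pods come and go, with no per-node convergence runs:

```yaml
# Pushed once with `kubectl apply -f cluster.yaml`; etcd holds the desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker              # the label is the discovery mechanism
    spec:
      containers:
        - name: worker
          image: example.com/worker:1.0   # hypothetical image
---
# Service membership converges asynchronously as pods matching the label appear.
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  selector:
    app: worker
  ports:
    - port: 8080
```

The same split shows up outside K8S: Terraform/Pulumi play the `kubectl apply` role, and cloud tags or Consul play the label-selector role.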

Chef and Puppet could have responded, but couldn't see past their own platforms, which are built around managing desired state for groups of individual atomic systems. They also failed to monetize things like inventory management, which enterprises fork over a lot of money for.


Chocolatey is a proxy package manager: it automates fetching packages off the Internet and installing them (often through the package's own .msi or other installer). So it doesn't compete against Chef any more than yum, apt-get, or Homebrew compete with Chef. Chef supports Chocolatey directly.

That being said, the experience on Windows is abysmal. The Ruby implementation on Windows is sloooooooow. You wait seconds just to get something like the version.


> I toyed with a simple puppet-alike, written in golang, called marionette (in hindsight a terrible name)

Well, yeah, especially as Puppet has a well-known solution named Marionette Collective:

* https://puppet.com/docs/mcollective/current/index.html
* https://choria.io/docs/about/mcollective/


In the scope of config state management across mutable systems, SaltStack is the closest.

When doing immutable infra, where desired state is managed only at deploy time, Ansible is by far the more popular choice. Beyond that you get into container scheduling/orchestration platforms.

Others mentioned Terraform and Pulumi, which manage cloud resources, where Chef manages mostly system resources. Once upon a time there was an attempt with Chef Metal, later rebranded as Chef Provisioning, which was driven by chef-solo (or chef-zero) and used a fog driver. The end result was gawd-awful terribad: unstable and unpopular. Instead of investing to improve it, Chef started promoting HashiCorp Terraform over their own solution.


Now I think I remember them. Weren't they an embedded database on Windows popular with Peachtree and such?


Progress ran on Unix varieties and Windows, and had its own 4GL language you built apps in to talk to the relational side. It ran some pretty big shops here in Aus, like Ideal Electrical and Burson Automotive. This was 20 years ago, mind you...


> Puppet is a great tool for managing containers, and so is Chef. There is nothing inherently good about YAML.

I am not sure what a complex config state management solution brings to the table for immutable containers. The state is immutable, so there's nothing to manage after deployment. It's like using an atom bomb to take out a zit.

Puppet/Chef in their glory days, pre-cloud and pre-container, excelled at managing desired state across numerous mutable, static systems. With immutable solutions, Puppet/Chef are both cumbersome and expensive.


An application running as a container basically carries the same configuration as one running without the container bits would. That may include things like endpoint addresses, certificates, and secrets, but also things like feature flags, authorizations, and API tokens. Basically anything that's not compiled in is configuration.

The deployment side also carries configuration, including things like the desired number of instances, request routing and filtering, log destinations, log retention, persistent volume sizes and locations, backup rules, metrics and monitoring rules, etc.

If anything, application deployments today carry more configuration, not less. Fifteen years ago, half of the above didn't even exist. Perhaps you pointed your application to a syslog server and that was it.

All of this configuration lives in disparate tools (JSON files, YAML files, firewalls, metrics dashboards, cloud providers' proprietary APIs) and will slowly turn into a sprawling mess over time. Bringing it under control in a central repository is a good thing.

Not sure about expensive, as the aim is maintainability and reduced complexity, but these tools do tend to get cumbersome. After all, they want to do everything. It's a fundamental problem, and it's not surprising that this has led to a surge of less capable tools.


> That may include things like endpoint addresses, certificates, secrets, but can also be things like feature flags, authorizations, API tokens. Basically anything that's not compiled in is configuration.

But Kubernetes solves most of this in an easier way with Pods, ConfigMaps, Secrets, Services, and Endpoints.
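For example (resource names and image are hypothetical), a ConfigMap and Secret injected into a Pod's environment cover most of the endpoint/flag/token cases above without a separate config-management tool:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  FEATURE_X: "true"                        # feature flag
  API_ENDPOINT: "https://api.example.com"  # endpoint address
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets      # API tokens/certs, created separately
```

Change the ConfigMap and roll the pods; the config travels with the cluster's desired state rather than with a fleet of agents.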


That is your configuration, which you (may) want to manage.

Your whole infrastructure contains much more than that, and managing all that as a coherent whole is what these tools do.


Yeah, but what's left after you slim down your VMs to a Linux kernel, an SSH daemon, and a container runtime? Is a CM system really justified at that point? Why not bake VM images and treat the VMs as immutable?

