I may be a sentimental old fool, but I feel a vague sense of sadness at the removal of yum. Seth Vidal, the original author of yum (or, the fellow who forked it from Yellow Dog Updater, and made it yum), was one of the sweetest, brightest, and most helpful developers I've interacted with in my long history with Open Source. He was killed a few years back when he was hit by a car while cycling, and yum has never quite been the same without him, but I occasionally think of him when using yum.
I wish they'd keep the yum name, since dnf is still based on yum, even if a lot of the innards have been replaced. Also, "dnf" is not at all awesome to say out loud, while "yum" is among the most awesome commands to say out loud.
Ditto. I miss Seth a great deal, and this makes me a bit sad too -- though his code and advice live on in scores of people.
His belief in Ansible being a decent idea really kept me going with it after the first 2 months or so, otherwise I might not have continued it as a side project, and over the course of several years, he helped me a ton with various issues and was an amazing sounding board, including sharing a lot of advice when I was being plagued by nuisance CVE reports and wondering what to do about them. We had occasional arguments and he was usually right, and we always resolved things. (We had also previously worked on Func together with Adrian Likins, who wrote most of up2date, which was also a great experience - Seth knocked out most of the fun SSL parts). Back then we used to meet around Duke to come up with crazy patent ideas to supplement our low Red Hat salaries (mine was, anyway), and ... yeah, so much fun.
At his service, what was super amazing was pretty much everyone said the same thing about how he had helped them, dozens upon dozens of people, and it was really one of the most moving things I've ever seen. One of the few that made you come out saying "whoa" and wanting to be a better person and care about helping other people more than I did. And I think I did. Or I'm trying hard to.
Seth got a lot of hell for yum and shared many of my frustrations with the nature of the OSS community at times, but I still prefer it to a lot of the idioms in apt. Things like yumdownloader, createrepo, --enablerepo, --disablerepo, and many things like them just made it click for me, and it worked so well. And working with people like him at Red Hat made it really feel special to be there and to be able to learn from them.
I think about him every time I run yum, and every time something in apt doesn't work the way I want. Folks should just enjoy the metadata refresh time as a reason to get coffee, and blame PackageKit for your silly yum locks. (ALWAYS. REMOVE. PACKAGEKIT.)
And I agree on keeping the name, and it doesn't clash with "disjunctive normal form" that way too :) .. not that I remember what that is so much.
I suspect he would have been right out front on integrating these new ideas into yum; he had a long history of pushing yum forward (which is why it became the standard for RHEL and Fedora despite Red Hat having developed one of their own; but, it was clear to everyone, including Red Hat, that up2date was inferior in every way to yum). So, I tend to think if he were still around, we wouldn't be talking about something replacing yum, we would instead be talking about new features in yum. I think it is unfortunate that people felt the need to fork rather than enhance.
Out of curiosity, why not fix something if it's broken? Why is there always a half-baked alternative that has a different set of problems? Instead of replacing what is perceived as broken, you can just fix it while keeping the good parts. I have several systems deeply invested in the yum/rpm ecosystem, and I see very little chance that a new package manager is going to offer that many new features I need while keeping the features I already like from yum.
From the article:
"undocumented API, broken dependency solving algorithm and inability to refactor internal functions. The last mentioned issue is connected with the lack of documentation. "
Well, some of those I don't care about as a user of yum, some I can verify aren't true (documentation), and some I care about but it works for my use cases (the dependency solving algorithm).
dnf is a very heavily modified yum. Parts of dnf have a Duke University copyright (because Duke was the place yum came from originally). They took the opportunity to do a very big rewrite however so a lot of code has changed, and the command line is also quite a bit different.
It's kind of sad if most programmers think that debugging isn't fun. I thought that was a very large part of the programming process? Or maybe people just do OSS in order to re-balance their programming more to the side of clean-slate programming.
Serious question, because I am admittedly ignorant to the plusses and minuses of the different package managers.
If you're going to swap out: why not switch to apt? What does apt lack that DNF is going to provide? This seems like one of those low hanging fruit where standardization across distributions would make sense.
It would have been better to switch to zypper. Both dnf and zypper use a SAT solver for very fast depsolving. Yum and apt don't and both have very slow depsolving. Zypper also directly supports rpm, and would have unified the package managers on Fedora, OpenSUSE and the enterprise Linux distros RHEL and SUSE.
As far as I can tell the Fedora guys wrote libhawkey and dnf to reach some sort of middle ground, as switching to zypper would have implied breaking everything (i.e. all the scripts admins use, rewriting parts of anaconda, etc). The compatibility is not perfect, and in fact a fair amount of people complained (see some discussion on Phoronix). As far as the parent post is concerned, switching to apt would imply switching to deb, which would be a massive change. RPM is simply too different, both in the positive (the possibility of changing the target directory and of rolling back) and in the negative (dependency specification is still rudimentary compared to .deb, see https://www.youtube.com/watch?v=FNwNF19oFqM). And as rwmj14 said, libsolv is better.
apt-rpm has been around for more than a decade, and was considered for Fedora back in the day (it lost out due to a combination of lack of multilib support back then, and considerations over complexity). Last time I used Fedora it was available in the repository and worked just fine - I always preferred it over yum, despite also preferring RPM over Deb.
I thought apt-rpm was basically dead, happy to read that it is still around. And yes, I reckon the second part of my comment was wrong, you can have apt on top of RPM.
Well, you may be right it's basically dead - checking out the repository there appears to have only been one commit or so in the last two years. It may still work, but it's certainly not getting more features.
Is "very slow depsolving" even relevant? In my experience depsolving is a very small part of the overall installation or removal time, and it goes by so fast I've never really noticed it.
Even if dnf and zypper are 1000x faster than apt there, that alone wouldn't convince me to use them.
As well as being a lot faster than apt, the biggest advantage of openSUSE package management stack to me is that it plays nicely with third party vendors.
Packages have sticky vendors, so you only get updates from the same vendor unless you choose to switch, even if the update is available in a different package repository. This means you don't get flip-flopping package vendors when the same package is available from two vendors and they release updates one after the other.
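As a toy illustration of that sticky-vendor rule (package names, vendors and versions below are all made up, not real zypper data), an updater would filter candidates like this:

```python
def pick_update(installed, candidates):
    """Return the newest update candidate from the same vendor as the
    installed package; candidates from other vendors are ignored unless
    the user explicitly switches vendors."""
    same_vendor = [c for c in candidates
                   if c["name"] == installed["name"]
                   and c["vendor"] == installed["vendor"]
                   and c["version"] > installed["version"]]
    return max(same_vendor, key=lambda c: c["version"], default=None)

installed = {"name": "ffmpeg", "version": (4, 1), "vendor": "packman"}
candidates = [
    {"name": "ffmpeg", "version": (4, 3), "vendor": "openSUSE"},  # other vendor: skipped
    {"name": "ffmpeg", "version": (4, 2), "vendor": "packman"},   # same vendor: eligible
]
chosen = pick_update(installed, candidates)
print(chosen["version"])  # (4, 2) -- the newer build from the other vendor is not considered
```

Without the vendor check, the two repos above would leapfrog each other on every release; with it, the package stays with one vendor until you deliberately switch.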
Another example is the tooling around package/repository signing - Letting the user decide whom to trust without making them jump through hoops to manually import keys.
There's also a lot of things in the packaging itself designed to help support third party vendors. Dependencies tend to be configured with capabilities rather than package names. e.g. The kernel provides information about the binary api version which third party drivers can depend on. Third party kernel modules can also supplement/enhance (soft dependencies - inverse recommends/suggests) specific PCI ids so the solver will suggest installing them if available and compatible with your hardware.
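A tiny sketch of what depending on a capability rather than a package name means (the metadata here is hypothetical, not real openSUSE packaging): the driver's requirement never mentions the kernel package by name, and the solver finds a provider by searching Provides sets.

```python
# Hypothetical package metadata: requirements name capabilities, and any
# package whose Provides set contains the capability can satisfy them.
packages = {
    "kernel-default":  {"provides": {"kernel", "kernel-api(4.2)"}, "requires": set()},
    "broadcom-wl-kmp": {"provides": {"broadcom-wl"}, "requires": {"kernel-api(4.2)"}},
}

def providers(capability):
    """Packages whose Provides set satisfies the given capability."""
    return sorted(name for name, meta in packages.items()
                  if capability in meta["provides"])

# The driver depends on "kernel-api(4.2)", not on "kernel-default";
# a renamed or swapped-out kernel package still satisfies it as long
# as it provides the same capability.
print(providers("kernel-api(4.2)"))  # ['kernel-default']
```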
In my view debian package management "just works" because the set of packages is carefully curated and tested, there's a high standard of package quality, and everything is available in one place. PPAs can present problems.
The rpm distro ecosystem philosophy is somewhat different. It's impractical to expect a single organisation to package all the software you will ever need, so let's provide tools to help software from multiple vendors to get along.
Not only that, but it sounds like someone can just add a solver to apt, whereas repackaging the entire linux ecosystem to use a different format is going to be a bit more difficult.
Red Hat (the company) and RPM (the format) predate Debian, although only by a few months. Also RPM is a much better format than deb, and that's speaking as someone who regularly makes packages using both.
I would also say that dnf and zypper are better package managers than apt, in as much as they are faster and easier to automate. Compare the code here:
Primarily the all-in-one spec file is a lot easier to read and write than the scattered files of deb. Also the build system is considerably simpler and more coherent -- you don't have the mess of dh vs cdbs vs flavour of the month. RPM has a nice language and macro system. It's not that deb is bad, just that when I have to package for both, I find the RPM one simpler and easier.
Here is a relatively simple package, done for both RPM and .deb. The RPM spec is 141 lines (excluding the changelog):
The .deb is actually shorter in this case, but split over several files, and uses cdbs which I find infuriating with its lack of documentation and multiple hidden implicit rules. If you have a Debian machine around, try reading the /usr/share/cdbs/1/ files some time. Remember also that for most Debian packages, the files come in a tarball or even a patch, which makes them hard to manipulate without obscure deb-* commands.
What? How is this better? All this amounts to is apt making users say yes to everything, such as config changes. I'm totally fine with that. Sure, it's an extra line or two when automating, but I don't see it as bad.
I don't know about you, but I'd rather use the tool that I don't have to hack the training wheels off of first when I'm trying to update 100,000 machines to mitigate a critical security issue.
1. Security updates don't add debconf prompts
2. Supporting large numbers of systems is exactly when a persistent local configure database becomes useful for avoiding unanticipated regressions
These distros have used RPM for decades (or at least more than one) now; for them, switching to apt would mean repackaging everything. There seems to be an apt fork for RPM, but that hasn't been updated in years.
As mentioned, in server land, servers tend to be rebuilt rather than upgraded. In workstation land, major versions don't upgrade that often.
When I do an apt-based dist-upgrade, the long part of the upgrade is not the single-or-low-double-digit seconds it takes to solve the deps, but the download and installation of the new packages.
I've worked on a large website where all backend servers were running RHEL5. We had pretty good tooling, deployment, config management. We could launch and kill new physical and virtual servers with particular "roles". We did not, however, re-image servers for every deploy. Our deployment/configuration-management would make sure the correct versions of all things were installed, including stand-alone code, services under daemontools, crontab entries, system users, packages, and more.
In most cases, no-longer-specified items would be removed. BTW ansible sucks for this aspect (I use it extensively at a new place) ;)
My point is that, the modern attitude of "you can't trust what's on a server unless you build it from scratch or apply an image" is not the only valid way to do things, and is somewhat defeatist, like we're no better than windows where you can't guarantee that anything can be cleanly uninstalled / replaced. Is your package manager that bad?
Yum being dog-slow was very annoying. Updating all server lists for every yum command was very annoying. I customized various modules and scripts to use "makecache" and "-C" where appropriate, but that was an annoying task. And yum was still slower than I'm accustomed to a linux package manager being.
Finally having a couple of years' experience with yum, I can confidently say that pacman (Arch Linux) and apt (Debian) are worlds better. Maybe zypper and dnf make rpm not suck; I probably won't find out for a few years.
> "Updating all server lists for every yum command was very annoying."
This isn't the case. It updates information when the cache expires, so this will only happen about every 30 minutes or so. It should also (usually) take way less than a minute.
> BTW ansible sucks for this aspect
Yep, it's not meant to understand how to remove resources it doesn't know about. That being said, it feels like I've created PHP at times, and I don't mind people not liking it. Many of the ways we have to automate Linux systems (due to lack of structure and API) are kludgy at times, but removal of packages not present in a master manifest seems dangerous to me for various reasons (group installs, tools self-bootstrapping, work happening out of band). Fair enough.
> is not the only valid way to do things
This was not the argument for immutable systems (though I like them), but rather that you need a good disaster recovery strategy and this is likely the best way to handle a major distribution version upgrade.
Minor versions? Continue to do what you do. Major upgrades between EL versions are full of all sorts of fun.
I've typically seen them done with upgrade kickstarts and the like, and you don't get good error reporting there at all when things go wrong. There were some advances in pre-downloading and then doing upgrades but... yeah... not a fan of doing them in an automated context.
> " I can confidently say "
Trying to drive it programmatically, yum seems much better engineered to me than apt. One recent example (and maybe there's a way in apt to do this) was selecting just one repo to grab packages from during an update, and I was missing --disablerepo=* --enablerepo=X. But there are a lot of things like that. Config files also seem a lot more capable. My opinion there too, of course.
> This isn't the case. It updates information when the cache expires, so this will only happen about every 30 minutes or so.
It doesn't do full downloads of files, but (as of RHEL5) it still makes HEAD requests or something, re-processes package lists (I guess apt does this too), I dunno. "-C" makes a significant difference. I want it to take less than a second to make a decision or spit out information; other package managers can do that.
EDIT: also want to say, I appreciate ansible overall, and I appreciate your attitude towards it. I've been close to a project that was good for a couple of uses, got kinda popular, and then people wanted it to be good for all purposes...
It all depends on what kind of apps you are deploying and supporting. While I wasn't even discussing immutable systems in this capacity, some workflows work better for .com style applications. In a typical bank environment where you have thousands of legacy applications floating around, you are more apt to not be able to control the architecture and need to push out security updates.
In-place updates here are fine; however, I still wouldn't want to do an in-place dist-upgrade across all of those systems and then find out which of those thousands of applications had problems. In this case, it's better to redeploy those applications if they need a new OS and the old OS is no longer receiving security updates - and try to shift some of that burden onto those who maintain the application.
If you are just deploying a .com app though, you need a good backup/DR strategy, and it helps to be able to redeploy everything and take steps to not get attached to state on that machine.
Rather than forcing us all to install an outdated distro, do a major version upgrade, and then try to guess the particular element of the process you find disconcerting, could you maybe tell us?
Because switching just the package manager will yield no benefit at all. Having apt as the common package manager among Debian derivatives is meaningful because those distributions get most of their packages from Debian, so they use not only the same package manager, but also same packaging format, same file system hierarchy, same package names/package splitting. All Fedora derivatives, OpenSUSE and Mandriva(?) would agree on using dnf but that would provide no significant benefit at all (other than reusing muscle memory).
Most of APT packages' superiority comes less from the packaging format than from the entire philosophy surrounding them, most especially in the case of Debian, Debian Policy. Quite simply, so long as I stay within distro, the quality of Debian packages greatly exceeds that of RPM packages (long-standing, direct, multiple-instance observation of Debian, RHEL/Red Hat, CentOS, Fedora, and SUSE systems).
Some of the benefit also derives from the packaging format itself: the ability to unpack DEBs using nothing but shell tools (busybox within a Debian system's pre-boot ramfs is sufficient, and has been successfully used by me). Red Hat loses in this instance by using a ramfs shell that's both 1) larger than Debian's dash and 2) doesn't offer interactive use -- it's a scripting-only shell, FML.
Joey Hess at one time had a detailed comparison of various packaging systems. He's pulled it apparently due to political / fanboi bickering, which is a shame. He's author of 'alien', a tool for converting between packaging formats.
But Policy (and fucking enforcing the fucking hell out of it) trumps.
Source: 18 years' use of both distros and many other Linuxen. 30 years' experience on further Unixen.
The portage ebuild system for describing package dependencies and build procedures is awesome. The portage program for solving dependencies and building packages is mediocre at best—it's slow and too prone to not finding a solution even when the constraints of source compatibility are looser than binary compatibility. The gentoo portage repository of packages is clearly understaffed and orphaned packages are all too easy to run across.
All three of those things have gotten better over the years, and I have little doubt that given the attention and effort that RedHat and Debian package managers get, portage could be a clear winner. But the portage we have now has too many pitfalls to be the best all-around choice.
Correct, the all-around best choice is Exherbo's package management. It is extremely similar to Gentoo (after all, many of us used to work on Gentoo), but I'd like to think it has fixed all the problems Gentoo had.
There's no way that Exherbo's package repos are anywhere near as well maintained and broad as the distros that people have actually heard of. There's no silver bullet for that problem; the only solution is manpower that they don't have.
Exherbo's package repos are incredibly well-maintained. What is provided generally always works and things are kept very much up to date (KDE/gnome/chrome/firefox updates within 24 hours usually). Lack of public awareness doesn't always mean the system isn't as well maintained as a system like gentoo (which often breaks!)
The broadness of the system isn't quite as vast as many distributions but running a desktop / dev workstation I have never encountered a package not available that I needed.
If you haven't encountered packages missing from their repo, then their search must be broken. In just a few minutes of searching, I found that they seem to be missing anything GIS-related, netperf, smokeping, targetcli, any daap server. That's just stuff I've been using my Linux box for in the past month, but it seems like Exherbo would make me do at least as much work as something like MacPorts!
Perhaps my needs are just different from yours. What I did say was "I have never encountered a package not available that I needed". That is not contradicted by your example. Your needs are different, that's cool. What isn't cool is claiming that the search is broken because I haven't found the need to search for those packages.
Exherbo may not be for you. It values users who are willing to be developers as well, and augment the system with the packages they need. You want others to do the work for you, that's not what Exherbo is about.
Besides, the original discussion concerned what package management system was best, not if it had tool X, Y or Z that you claim is very often needed.
In reply to a comment that listed the quality of the package repos as one of three major areas of concern, you said that "the all-around best choice is Exherbo's package management" and that "I'd like to think it has fixed all the problems Gentoo had".
If you can't be honest about its shortcomings, you won't be able to convince anyone to try out your pet project. It doesn't matter how reliable and trouble-free it is at managing the core of the system if it immediately degrades to "build it yourself" anytime you want to use something that's not popular enough to make the cut for a live CD.
Perhaps I was unclear, and if that's the reason for any confusion I am sorry. I was referring to the majority of the comment which was about portage's shortcomings (though it is also true that ebuild quality is a major problem for gentoo). I specifically was comparing portage/the Gentoo package management infrastructure (NOT the package repos per se) with Exherbo's package management infrastructure (by which I mean the package manager, alternatives handling, repositories). This is what I meant by "Exherbo's package management"; that does not mean the breadth of the repositories.
I like to think my comments were honest: I admitted that the system while technically superior does not have the breadth that larger distributions do, but that for my purposes it was sufficient. You ignored that and found some packages not currently packaged in an attempt to disprove my experiences. Furthermore I admitted that the project may not be for you since you expect different things from a distribution than many of us do. What is dishonest about any of this? I have been incredibly frank.
Besides, one of the nice things about Exherbo is that it handles the nonexistence of a package rather seamlessly. You can compile it by hand, install to a tempdir, and then have the package manager merge it directly while giving you the ability to specify information about the package (metadata, dependencies, etc.). And then of course the package manager can uninstall it when you no longer want it. This makes the problem of "build it yourself" kinda moot.
I'm not going to bother responding to the "make the cut for a live CD" remark since obviously there are far more packages than would fit on a live CD or live DVD.
If we're going to be one-upping each other, then let me suggest Nix, a superset of Portage.
It has many of the strengths of Portage, and even goes beyond them (it builds from source, and users may configure package dependencies, like USE flags on steroids -- its declarative language used to write packages is also used to configure them, so you can do more than just pass flags to a package). But it offers a substantial advantage, because builds are deterministic. The set of installed software (with all needed configuration) is determined from a config file (or many), and from this it's always possible to build the same system.
This means that upgrading always ends up in the same state as installing from scratch. It also means that common packages can be cached as binaries, without risk of breakage - it only downloads a binary package when building from source would produce exactly the same binary.
Nix also features atomic upgrades and rollbacks: it only touches the running environment as the very last step of the upgrade (setting up a symlink), and stores the previous versions of packages until garbage collected, so an upgrade stopped in the middle can't break your system (the exception here being kernel upgrades). Indeed, if you interrupt an upgrade or install at any stage, just issue the command again. (This architecture also makes it incredibly concurrent.)
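That atomic last step can be sketched in a few lines of Python (throwaway temp directories here, not real Nix store paths): build the new generation off to the side, then repoint a symlink with a single rename.

```python
import os, tempfile

root = tempfile.mkdtemp()
gen1 = os.path.join(root, "generation-1"); os.mkdir(gen1)
gen2 = os.path.join(root, "generation-2"); os.mkdir(gen2)
current = os.path.join(root, "current")
os.symlink(gen1, current)        # the live profile starts at generation 1

def switch(link, new_target):
    """Atomically repoint `link` at `new_target`: create a new symlink
    beside it, then rename over the old one in a single step."""
    tmp = link + ".tmp"
    os.symlink(new_target, tmp)
    os.replace(tmp, link)        # rename(2) is atomic on POSIX

switch(current, gen2)            # the "upgrade": one atomic operation
assert os.readlink(current) == gen2
switch(current, gen1)            # rollback is the very same operation
```

At no point does `current` dangle or point at a half-built tree, which is the property that makes interrupted upgrades harmless.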
NixOS is a distro that uses Nix. It can provide a GRUB menu to boot previous versions of the system. When you upgrade, you can have it affect the running system or only take effect after a reboot. Either way, when you reboot, GRUB will give the option to also boot the system you were using before the upgrade. On a technical level, Nix works a bit like a git repository: each package is addressed by the hash of its derivation (which says how to build it, along with all its dependencies), and if more than one system version uses the same package, it gets stored only once.
Coupled with NixOps, a deployment tool, and Disnix, which does service-oriented deployments (like Docker), NixOS can help build more repeatable systems for production servers too.
No, it really isn't. Portage poorly solves, or neglects to solve, most of the problems dnf is intended to solve with yum, and is, on many fronts inferior to yum, as well.
If the portage/gentoo guys were not treating portage as if it were running on a gazillion-GHz CPU, they would have a shot at making portage compete with other systems. Don't say it is the best. Not by any stretch.
I feel ashamed that for the last ten years I could not find a kind word for the portage developers. For what it's worth, they don't care.
As long as all of the alternatives work, and they do, it is pretty low stakes. Realistically, how much difference does the choice between yum/dnf/apt-get really make to my everday life installing and upgrading packages? Very, very little.
Apt is part of a tool chain that is extreme Unix philosophy. With this we benefit from backwards compatibility and continuing to use our muscle memory, but it also means that if, say, you want to find out which uninstalled package provides a specific file, you have to install another tool.
It was not mentioned in the post that dnf is based on the openSUSE dependency SAT solver (libsolv), created by Michael Schroeder years ago, which has powered libzypp and zypper since openSUSE 11.0.
The dnf developers built a thin layer on top of it, called hawkey, and then built dnf in Python on top of hawkey.
One of the biggest innovations of libsolv is not only the SAT-based solver but also the .solv format, which allows storing metadata for large numbers of packages efficiently and running the solver directly on it.
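For flavor, here is a toy brute-force version of the decision a SAT-based depsolver makes: find a set of packages that contains the request, satisfies every dependency, and violates no conflict. libsolv encodes the same rules as boolean clauses and solves them vastly faster; the package names below are invented.

```python
from itertools import combinations

# Tiny hypothetical repo: installing "app" needs "liba",
# and "liba"/"libb" conflict with each other.
repo = {
    "app":  {"requires": {"liba"}, "conflicts": set()},
    "liba": {"requires": set(),    "conflicts": {"libb"}},
    "libb": {"requires": set(),    "conflicts": {"liba"}},
}

def solve(want):
    """Smallest package set containing `want` with all requires met
    and no conflicts -- a brute-force stand-in for a SAT solver."""
    names = sorted(repo)
    for size in range(1, len(names) + 1):          # prefer small solutions
        for subset in combinations(names, size):
            chosen = set(subset)
            if want not in chosen:
                continue
            deps_ok = all(repo[p]["requires"] <= chosen for p in chosen)
            no_conflict = all(not (repo[p]["conflicts"] & chosen) for p in chosen)
            if deps_ok and no_conflict:
                return chosen
    return None

print(sorted(solve("app")))  # ['app', 'liba'] -- libb is excluded by the conflict
```

The brute force is exponential in repo size; the whole point of libsolv's SAT encoding is doing this search efficiently over tens of thousands of packages.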
So, like, what is a package manager and what does it need to do? All the distros have one. All the languages have one. Why do we have so many of these things, and is all the complexity necessary? All our configuration management tools try to abstract the differences. The list goes on and on with these things. As I start down the path of doing more and more operations/systems administration, I find myself asking the question over and over: what are we doing with our lives?
People don't know what they want package managers to do, and they had much less of a clue back when the fragmentation started. So we got rpm and deb. Arch decided these were too complicated when you wanted a rolling release, so they have their own package manager. Programming languages need their package managers to work on more than just Linux, so they make their own cross-platform ones, but since the devs are only thinking about their own language, they don't make something cross-language.
Yum is "as dead" as one might think Fedora 22 is "production ready!" The reality is RHEL 6, RHEL 7 and many other distributions use Yum and they are not going to go away anytime soon.
Good to know, thanks Hacker News. Better "dependency solving" is appreciated. Though I stopped using Fedora, I assume this will make it to CentOS eventually.
As someone who has rewritten pacman for fun, it's not great. The tool itself works well, and quickly, but the package format is... interesting. Provides seem like a bad afterthought, and package signing being opt-out and constantly breaking is interesting, at best.
I like it way more than apt at least. I don't have to use apt-cache, apt-get, apt-kitchensink to do what I want. It's all just subcommands of pacman. It's like vim vs nano - the latter has the commands in the shell, but they aren't nearly as fast or powerful to use as vim if you take the time to learn it. Ironically, I use nano most of the time because I'm not terminal-locked enough to get vim muscle memory, but I use pacman enough to remember everything.
Zypper is actually fairly good. Still way more verbose than it needs to be, but it seems like a fair compromise between "apt-get stupidly-oversized-function-name" and "why the fuck does -Sy mean grab updated package lists from the repo".
Syntax isn't really the problem, given that one could easily write a wrapper around pacman, or apt-get, or dnf/zypper (at least for the subset of common operations supported by all). The problem with pacman as a package manager is that it simply does not have the features needed by Fedora or Debian. With the infrastructure given by Arch there is basically no way to have automated upgrades without potentially breaking the system. The same goes for the AUR: it's great when you know the user is reading the script and making sure it's not malicious; it's not a model that scales for servers (just like FreeBSD ports don't cut it if you have lots of machines), or if you imagine 10% of computers running Arch (malware bonanza). Still, automating apt-get is messy (and here dnf/zypper do a better job), and building .deb packages and setting up your own repo is too complicated.
Arch always posts breaking changes to its mailing list. If you are complaining about the bleeding edge part, that really has nothing to do immediately with pacman as a package manager, you could always pull a Manjaro with real package introspection to keep tabs on incoming software.
The only real missing piece of pacman is that there is no distinction between a feature and security / bugfix update. But then you should be more stringent with upstream to beta test their feature releases better.
Really if you wanted to automate pacman nothing stops you from creating your own custom repo as a gateway to some workstation or server cluster you have and setting it up to use your custom repo exclusively for all packages. It has delta support, does checksums, uses the best compression format available, and is a lot easier to use for custom applications than the OBS or alternative packaging tools.
To be clear: I also am an Arch user (albeit not exclusively). I'm not complaining about the bleeding edge part (from your answer I get the feeling you already had similar discussions, with people complaining "upgrade X broke my system").
What I was driving at is that pacman, as it is, does not cover the whole use case of other distros.
Would you take debian, with its three branches, and move it to pacman?
No, because the lack of pinning and of differentiation between security upgrades and normal upgrades would wreak havoc ("get a better upstream" is a nice suggestion, but an impractical one). The same could be said for Fedora (actually, for Red Hat, but Fedora is Red Hat's testbed after all).
I could set up my own repository, clearly, it may very well be a good solution, but to me that is not automation anymore. Similarly, there is nothing preventing one from setting up one's own repo of old packages, and reinstalling those. Still, I see the value in having the rollback features inside DNF (call it laziness, if you wish). I guess my comment came out as random pissing on Arch, when what I wanted to point out was that other distros simply have different needs.
You can pin packages in pacman, you just add them to the IgnorePkg list. And while pacman does not have a native rollback command, it does not delete any historical version of a downloaded package by default; you can set it to keep only, say, the last 3 versions if you want. But you can just reinstall an old version and blacklist the package until it's fixed if something goes wrong.
And that kind of operation could be automated: it's just a `pacman -U` on the old version and an append to the IgnorePkg line in pacman.conf.
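That rollback-and-pin step could be sketched like this; the package name and cached version are hypothetical, and a local demo copy of pacman.conf stands in for /etc/pacman.conf:

```shell
set -e
# Hypothetical package and cached old version; adjust to your system.
PKG=firefox
OLD=/var/cache/pacman/pkg/$PKG-40.0.3-1-x86_64.pkg.tar.xz

CONF=./pacman.conf   # demo copy; the real file is /etc/pacman.conf
printf '[options]\nIgnorePkg =\n' > "$CONF"

# 1. Reinstall the old version from the cache (needs root, so commented out):
#    pacman -U "$OLD"
# 2. Pin it by appending the package to the IgnorePkg line:
sed -i "s/^IgnorePkg *=.*/& $PKG/" "$CONF"

grep '^IgnorePkg' "$CONF"   # prints: IgnorePkg = firefox
```

Reverse the sed (remove the name from IgnorePkg) once the fixed version lands and the next `-Syu` picks it up again.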
And nothing really stops you from having, with pacman, repos the way Debian and Ubuntu do, because really, it's not that security and feature updates are hugely segregated; they are usually just a boolean in the package description. What happens is they have repositories of software they will not update with feature releases, shipping only bugfix and security patches instead, and they just call them jessie / wheezy / vivid / wily etc. You could use pacman to the same end, making a repository of software you don't push feature changes to but only bugfixes, and again replace Arch's repos.
The point I'm trying to make is there is a distinction between Archlinux the repository and pacman the package manager. You can get around a lot of the unfavorable aspects of how Arch does packaging by doing it yourself. Of course it makes no sense to actually do that when Debian, CentOS, and Ubuntu LTS exist to do that exact same job without all the work, but it isn't because pacman is crippled in one aspect of package management to such a degree that it's unusable for that purpose.
We are basically agreeing. The point is not that you cannot automate pacman, but rather that other package managers automate for you, which is bound to be a virtue for some people. I personally never found the "integrated" rollback in dnf/yum particularly useful, but I've heard of enough people who used it to accept that it is a desired feature. Same goes for "dnf config-manager" for managing repositories with one command.
As far as pinning goes, however, I disagree: if you mark a package as IgnorePkg it simply does not get updated. You could use either naming conventions or split repos to track different repos for different packages, but that starts looking like an antipattern to me (i.e. the way you would have firefox track jessie while you are on wheezy would be to set up your own repo just for firefox, a bit of a hassle). It's fine if you are the packager; it's a bit cumbersome if you want a stable Debian box with a fresher version of django and nginx.
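For contrast, here is roughly what apt's built-in version of that looks like: on a wheezy box you can have a single package follow jessie with a preferences fragment (package and release names follow the example above; the priority value is illustrative, and jessie must also appear in sources.list):

```
# /etc/apt/preferences.d/pin-iceweasel  (illustrative)
# The system tracks wheezy, but this one package follows jessie:
Package: iceweasel
Pin: release n=jessie
Pin-Priority: 990
```

No extra repository to maintain; apt resolves the mixed releases itself.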
After all, the roadmap for pacman 5.0 proposes hooks and better handling of optdepends. Both can already be automated via scripts, but both would be nice to have out of the box.
Yes the `-Sy` bit is a bit cumbersome / annoying. I would have much preferred `pacman update -s` and `pacman remove <package>`.
When I have to install a font on my machine that isn't in AUR I'll just roll up a PKGBUILD, install it and dust my hands off. Makes it incredibly easy to remove the packages later if I no longer need them.
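A minimal sketch of what such a font PKGBUILD can look like; the font name, file, and checksum handling here are made up for illustration:

```bash
# Illustrative PKGBUILD for a local font package; names are made up.
pkgname=ttf-example-font
pkgver=1.0
pkgrel=1
pkgdesc="Example TTF font, packaged locally"
arch=('any')
license=('custom')
source=("example-font.ttf")
sha256sums=('SKIP')

package() {
  install -Dm644 "$srcdir/example-font.ttf" \
    "$pkgdir/usr/share/fonts/TTF/example-font.ttf"
}
```

`makepkg -si` builds and installs it, and `pacman -R ttf-example-font` removes it cleanly later, which is exactly the "dust my hands off" part.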
It's not cumbersome; the alternative is more cumbersome. It just requires memorization. Like I said, zypper is a great compromise: you can do `zypper update` and `zypper install`, or you can do `zypper up` and `zypper in`. A bit more intuitive for the newbie, but it doesn't make you write words like you're writing a poem in bash.
It gets so incredibly slow when you have aptitude on top of apt on top of dpkg. It's a minor complaint, but when pacman does everything I want and is probably a tenth the codebase of yum, zypper, dnf, or apt, I'm content.
Because I have been using Archlinux full time for a while now. It's dead simple to make PKGBUILDs and maintain your own packages. Yum and RPM? Forget it.
I just made my first pkgbuild yesterday and threw it on the AUR. Having come from, first, using launchpad (eech) to using the OBS (blargh) how are pkgbuilds not "the final solution" again? They are incredibly fantastic to write and use.
PKGBUILDs and the AUR are great, but I really wish they had a spot on the AUR page for people to add extra info particular to that package. For instance, if you come across the package 'firefox-toast' in the AUR (made up example), there is no place on the AUR page to add a description that says something like "firefox built with support for toasters." The only place to describe package customizations is in the name, and that's often not enough. You could put it in the PKGBUILD, but it seems people almost never do.
The AUR page is always generated entirely from the PKGBUILD. You cannot edit any of the fields yourself; you just have to upload a new source archive. What you are talking about is the pkgdesc field in the PKGBUILD, and it is not that hard to paste a quoted string on a line in there for a description...
Quite true, I know how they're generated. More like, I wish you could include something like an optional README file or something. Particularly for packages that require further setup. I know people often use the `install` function to do it, but it seems like I often run into packages that leave me wondering either what they do or what else I have to do to make them work.
This of course isn't necessarily a flaw in the AUR and is really more of an issue with the packagers doing a bad job.
You could add a README as an external source if one is not included with the binary package or source repo itself.
And yeah, I know that isn't the same as a longform description field on the AUR page, but comments can do that job too. I don't see it as some crippling issue, and honestly I would probably not even change it: the way the AUR is laid out now makes it easy to pull down PKGBUILDs and display package information agnostic to the environment. Adding paragraphs of detail into the PKGBUILD or onto the website limits that.
I've built debs by hand before. It's not hard, and it's trivial to automate with shell scripts.
And once you've built a deb, you can use alien to convert to RPM (the only gotcha is that RPM runs preinst/postinst scripts with an empty environment and dpkg doesn't).
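A sketch of that by-hand deb layout, with the actual build and conversion commands left as comments since they need dpkg-deb and alien; the package name and contents are made up:

```shell
set -e
PKG=hello-demo_1.0_all
mkdir -p "$PKG/DEBIAN" "$PKG/usr/local/bin"

# Minimal control file; this is all dpkg-deb strictly requires.
cat > "$PKG/DEBIAN/control" <<'EOF'
Package: hello-demo
Version: 1.0
Architecture: all
Maintainer: Nobody <nobody@example.com>
Description: minimal demo package
EOF

printf '#!/bin/sh\necho hello\n' > "$PKG/usr/local/bin/hello-demo"
chmod +x "$PKG/usr/local/bin/hello-demo"

# The actual build and conversion steps (need dpkg-deb and alien):
#   dpkg-deb --build "$PKG"            # -> hello-demo_1.0_all.deb
#   sudo alien --to-rpm "$PKG.deb"     # -> hello-demo-*.rpm

grep '^Package:' "$PKG/DEBIAN/control"
```

Preinst/postinst scripts would go next to the control file in DEBIAN/, which is where the empty-environment gotcha mentioned above bites after conversion.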
(and I say this as a diehard Arch user who came from Gentoo)
Linux is a horrible mess. The more I try to do with it, the more the mess reveals itself. Wayland vs X11, Python 2 vs Python 3, init vs systemd, ifconfig vs ip addr: these are all examples where, as a new user, you have to learn both the new and the old system just to be on a level playing field, and it is far, far too much.