One objection I have to this is that no matter how much someone believes they can guarantee a certain job or task will be confined within a subset of general computing, inevitably the software gets pressured by business use cases to 'break free' and do more general purpose computing.
An example might be saying something like "Things you do in a spreadsheet will never benefit from object orientation to such a degree as to justify implementing that ability for the sake of the business's bottom line."
As a former analyst in a financial firm, I encountered this idea all the time. You can use Excel's "slope" to do regressions. You can even do matrix arithmetic if you're willing to deal with the syntax. You can program basic functions.
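The spreadsheet version of that regression is a single `SLOPE()` call, and the plain-code equivalent is barely longer. A minimal sketch, with made-up data standing in for the two columns an analyst would select in Excel:

```python
import numpy as np

# Hypothetical data: the kind of ad hoc regression an analyst might
# run with Excel's SLOPE()/INTERCEPT() over two columns of cells.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# SLOPE(y, x) is cov(x, y) / var(x); np.polyfit with degree 1
# computes the same least-squares line as the spreadsheet formula.
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)
```

The point isn't that this is harder or easier than the spreadsheet formula; it's that once the calculation lives in code, it can be versioned, tested, and reused without copy/paste.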
In a strictly time-to-market sense (the time from some ad hoc financial analysis in a spreadsheet to 'market' -- either producing a report for your boss, a strategy implementation, or some other deliverable based on what your spreadsheet calculations showed you), throwing stuff in Excel, copy/pasting code from templates, etc., can't be beaten.
But this is a very narrow view. For example, data provenance is extremely difficult if your work is a directory of spreadsheet files. Even if they are version controlled, there is no automatic way to understand how the spreadsheet programmer intends to copy/paste some functionality out of one template and into another at the moment a new analysis is performed. The dependencies, if documented at all, are documented only in natural language descriptions, rather than overt 'import' or 'include' style statements, or static analysis of what gets used where.
Unit testing is often ignored in a spreadsheet environment and it's a huge pain to do in the rare cases when someone actually tries to do it. The focus on superficial aspects of 'time-to-market' also puts pressure to avoid version control, even if the spreadsheet paradigm as a whole doesn't necessarily have to reduce use of version control.
But far beyond any of these items, there are (for example) tools like Pandas, or even xlwings, in Python, or Frames/hmatrix in Haskell, and I'm sure many other things in many other languages, which present you with what is effectively a spreadsheet as an abstract data type.
You can programmatically perform the spreadsheet interactions that would otherwise have been manually reproduced, and you can include more advanced software techniques, like logging or unit testing, since you are simply working in a full-featured programming environment, rather than an environment where the feature set is purposely limited.
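To make "spreadsheet as an abstract data type" concrete, here is a small sketch using Pandas. The data and function names (`trades`, `position_value`) are invented for illustration; the point is that the same tabular computation a workbook would do becomes an ordinary function you can log from and unit test:

```python
import logging
import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("position_report")

# Hypothetical trade data -- in the spreadsheet world this would be
# a block of cells; here it is a DataFrame, a spreadsheet-as-ADT.
trades = pd.DataFrame({
    "ticker":   ["AAA", "BBB", "AAA", "CCC"],
    "quantity": [100, 50, -40, 25],
    "price":    [10.0, 20.0, 11.0, 8.0],
})

def position_value(df: pd.DataFrame) -> pd.Series:
    """Net value per ticker -- the reusable 'template' logic that
    would otherwise be copy/pasted between workbooks."""
    df = df.assign(value=df["quantity"] * df["price"])
    return df.groupby("ticker")["value"].sum()

values = position_value(trades)
log.info("computed positions for %d tickers", len(values))

# Because this is ordinary code, it is trivially unit-testable:
assert values["AAA"] == 100 * 10.0 + (-40) * 11.0
```

Every manual spreadsheet interaction (filling a column, summing a block by group) has a programmatic counterpart, and the "template" is now an importable function rather than cells to be copied.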
In this case, I've never heard any compelling argument for why a direct spreadsheet is better than a spreadsheet-like abstraction in a full programming language.
The arguments I have heard to justify continued direct use of a spreadsheet are:
(a) the people using it don't know how to write code and the business doesn't believe it's worthwhile to hire/train programming talent for such a role.
(b) someone who makes the decisions is using a hyperbolic discounting function when they assess the future returns of compounding automation through proper software design vs. a just-get-it-done-today-by-copy/pasting-in-a-spreadsheet-if-you-have-to attitude.
(c) "Programming" is a low-status activity, and so business/marketing/financial analysts must perform some activity that is plausibly different than "programming" so that "programming" doesn't experience a status rise within the organization, and potentially affect people's raises/bonuses/promotions or project allocation.
Let's take a step back from this example of spreadsheets. What is the general phenomenon here? I would argue that it is about automation and productivity. But then, general purpose computing is also about automation and productivity. There are counterexamples for sure, but most often newer layers of abstraction are introduced because they create genuine value: they make it easier to automate something, or they provide a more generalized lever that lifts whole categories of heavy things instead of just lifting this or that specific heavy thing.
No matter what limited-scope tool you start out with, over time lots of the tasks performed within the scope of that tool will be repetitive and/or will agglomerate into clumps of similar work that can be factored out into an abstraction.
When the tool also has general purpose computing abilities, you can take advantage of these opportunities to automate or factor out clumps of work and write generic, reusable solutions. The process of doing this almost always has huge positive effects on productivity, especially as it accrues and compounds over time. It's overwhelmingly worth it to pay the generally modest short-term costs of working in this manner, rather than trying to hack in ways of coping with bottlenecks in a tool that can't support general computing.
Part of the problem, though, is that many middle-managers and up within any given organization do not understand how this works. The syntax of their brains only manipulates the "programming language" of the business. Deliver X to Y; ship Z by Friday; give me a forecast of W. They don't unpack "Deliver X to Y" into its atomic components and ask to what extent general programming can help, and whether or not it will have returns on "Deliver X1 to Y1" next week and "Deliver X2 to Y2" the week after that.
When programmers throw an exception within the business's programming language, Exception('We can't ship Z by Friday if we also write sufficient unit tests this week.'), what happens? Let's just use unit tests as the example of a "general computing" behavior that might sometimes be axed in favor of justifying a narrow-scope tool environment for business reasons.
In some organizations, this is considered very carefully, and a lot of attention is paid to the engineering assessment. The managers may ultimately come back and say, "You know, we really looked it over and did some careful thinking, and we still must ship Z by Friday, so skip the tests." In that case, probably the managers and engineers alike both agree that you don't want a limited programming environment for the task. Unit tests mattered to the engineers, and they also mattered to the managers even though they had valid reasons to skip it this time. But all parties probably agree that, in principle, unit testing would have been better and should be at least a possibility.
In other organizations (a lot), the Exception is simply caught and never handled. Managers don't like hearing about something that sounds like a whiny and low-status issue ("programmers want unit tests"), and so they mandate that such things be skipped, and support using tooling environments where such things are not even a possibility. And at the end of the day, they justify this as a necessary business reality, when it's pretty questionable whether they truly ran any numbers to decide if the longer term gains from unit testing more than offset any short term slowdown to write the tests.
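The two organizational responses can be written out literally in the exception metaphor. Everything here is invented for illustration (`ShipDeadlineConflict`, the two org functions); the only point is the difference between handling a concern and swallowing it:

```python
# Illustrative only: class and function names are made up.

class ShipDeadlineConflict(Exception):
    pass

def raise_concern():
    raise ShipDeadlineConflict(
        "We can't ship Z by Friday if we also write sufficient unit tests."
    )

def careful_org():
    # First kind of organization: the exception is examined and an
    # explicit, reasoned decision comes back to the engineers.
    try:
        raise_concern()
    except ShipDeadlineConflict as exc:
        return f"considered: {exc} decision: skip tests this sprint only"

def dismissive_org():
    # Second kind: caught and never handled -- the concern silently
    # disappears, and nothing comes back at all.
    try:
        raise_concern()
    except ShipDeadlineConflict:
        pass
```

The `except: pass` body is the whole pathology in two lines: the concern was technically "received," but no decision, trade-off analysis, or answer ever propagates back.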
In practice, it's always some gray-area mixture of all of these things, and sometimes there definitely is a legitimate reason to forego general computing options for business reasons.
But I am very skeptical of the more far-reaching claim that it helps bottom-line business productivity to reduce the possibility of automation and software productivity practices enabled by general computing capabilities.
Great comment, thanks. We both know that corp managers prefer to strain systems, reap the rewards, and then disband: no legacy, no reuse, new projects. I have one question, though: would you suggest a 2016 startup focus on best practices or on product-market fit? That's the value we're considering here, imho.
My feeling is that it is indeed worth it to be pedantic about these kinds of best practices in a start-up. In fact, when I left finance to join a start-up, literally the whole reason for doing so was that the start-up was supposedly a place where engineering best practices are valued more than incremental short-term business progress, in contrast to the stodgy bureaucratic finance firm where political in-fighting prevents best practices from materializing.
I don't personally see any reason to believe working for a start-up could possibly be a good idea otherwise. (And since most start-ups only pay lip service to supporting best practices, while really supporting only short term incremental business gains like any other kind of organization, this is generally a good reason to default to believing that it's a bad deal to work at any given start-up.)
For me, this also has a lot to do with vision and consistency. If you create a start-up and your only goal is to sell it (or, more realistically, you are expected to do this because it is the goal of the VC firm you got into bed with), then you're not only not going to focus on best practices, but you're also going to jettison any part of your mission or vision any time it's not convenient to some incremental short-term growth opportunity. By the time you reach a point to sell the business, it may be completely unrecognizable from the vision you started with.
If your only goal is to make money, this might not be a problem. But probably not very many people who agreed to work with you will share the feeling -- and they certainly won't stand to make meaningful amounts of money -- and so they are unlikely to be happy workers through most of the process. That means you haven't been getting their best effort all along, and the product is probably shoddy.
Generally (but not always), good engineers don't want to work for something that is explicitly a hype machine where maybe later quality will be hacked back into it, but probably not. So start-ups that fundamentally take an approach of short term product-market fit are pretty much by definition staffed by bad and/or unhappy engineers.
Contrast this with something like 37Signals/Basecamp. Of course they had to make engineering sacrifices along the way and didn't always do everything in some pedantically best-practices-adherent way. But their ethos/vision was to be much more about best practices than about unreasonable growth or winning an acquisition lottery. Yes, they were driven by succeeding with product/market fit, but they didn't ever become a slave to it or turn into zombies about it.
Given that there's such a poor success rate among start-ups (a large base-rate bias towards failure mode), it's hard to draw much from outliers of any kind, whether they are like 37Signals/Basecamp, or they are like some always-pivoting shop that always favored short term business gains but still succeeded despite it. But my perspective is that it's way, way better to be on the side that pedantically clings to an engineering vision and has a general ethos of turning down short-term business gains as a means of investing in longer-term best-practices-focused infrastructure.