I look at the code I'm writing (mostly C# these days) and consider the _information-theoretic_ view of it, and I see a major problem: even syntactically terse languages (like C# compared to its stablemate VB.NET) still require excessive, even redundant, code in many places. For example, prior to C# 9.0, defining an immutable class required you to repeat names three times and types twice (constructor parameters, class properties, and assignment in the constructor body), which alone is a huge time-sink.
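Java has the same repetition problem; a minimal sketch with a hypothetical `Point` class shows each member's name appearing four times and its type twice:

```java
// Hypothetical example: an immutable class written the classic way.
// Each member's name appears in the field, the constructor parameter,
// the constructor body, and the getter; each type appears twice.
public final class Point {
    private final int x;              // 1: field declaration
    private final int y;

    public Point(int x, int y) {      // 2: constructor parameter
        this.x = x;                   // 3: assignment
        this.y = y;
    }

    public int getX() { return x; }   // 4: getter
    public int getY() { return y; }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.getX() + "," + p.getY()); // prints 3,4
    }
}
```

(Much like C# 9 records, Java 16+ records collapse all of this to `record Point(int x, int y) {}`.)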
The most tedious work I do right now is adding "one more" scalar or complex data-member that has to travel from Point A in Method 1 to Point B in Method 2 - I wish I could ctrl+click in my IDE and say "magically write code that expresses the movement of this scalar piece of data from here to here" and that would save me so much time.
At least in modern languages like C# 9.0, Kotlin, and Swift (and C++ with heavy abuse of templates) a lot of the tedium can be eliminated - but not in the granddaddy of OOP languages: Java. Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
> but not in the granddaddy of OOP languages: Java. Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
Because... We don't. IDEs and code generators have replaced a lot of the more stupid boilerplate. Not that there isn't a lot of stupid boilerplate in Java but it's been greatly reduced by tooling.
Still, I don't work with Java because I like it; I work with Java because it works and has a great ecosystem. The tooling around it makes it bearable, and without it I'd definitely have jumped ship a long time ago.
I'm not only a Java developer; I've worked with Go, Python, Clojure, Ruby, JavaScript, Objective-C, PHP and down to ASP 3.0. Java is still the language that has employed me the most and the longest. I have no love for it apart from the huge ecosystem (and the JVM gets some kudos), but it works well for larger codebases with fluid teams.
Ward Cunningham once noted that because a programmer can input incomplete, or minimal, code, then push a few buttons to autogenerate the rest, there's a smaller language inside Java. Since he made that remark a number of other JVM languages have sprung up that try to be less verbose. One of them is Groovy, which uses type inference and other semantics to reduce duplication (or "stuttering", as Go calls it).
The issue now with Java is that it's such a big language and it's accumulated so much from many different paradigms that experience in "Java" doesn't always transfer across different companies or teams. Some teams hew closely to OO design and patterns, others use a lot of annotations and dependency injection, still others have gone fully functional since Java 8.
And then there are shops like one of my employers, where a large codebase and poor teamwork have resulted in a mishmash of all of the above, plus some sections that have no organizing principles at all.
> Some teams hew closely to OO design and patterns
I maintain that design-patterns are just well-established workarounds for limitations baked into the language - and we get used to them so easily that we rarely question why we've built up an entire mechanism for programming on top of a programming language, instead of improving the underlying language to render those design-patterns obsolete. (I guess the language vendors are in cahoots with Addison-Wesley...)
For example, we wouldn’t need the Visitor Pattern if a language supported dynamic double-dispatch. We wouldn’t need the adapter, facade, or decorator patterns if a language supported structural typing and/or interface forwarding. We wouldn’t even need to ever use inheritance as a concept if languages separated interface from implementation, and so on.
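The Visitor case is easy to make concrete in Java, which dispatches only on the receiver's runtime type: the pattern simulates the missing second dispatch with an `accept` round-trip. A minimal sketch with hypothetical shape classes:

```java
// Minimal sketch: the Visitor pattern exists because Java picks an
// overload at compile time (single dispatch). The accept() round-trip
// recovers dispatch on the shape's runtime type.
interface Shape { void accept(Visitor v); }
interface Visitor { void visit(Circle c); void visit(Square s); }

class Circle implements Shape {
    public void accept(Visitor v) { v.visit(this); } // 'this' is statically Circle here
}
class Square implements Shape {
    public void accept(Visitor v) { v.visit(this); }
}

public class Demo {
    public static void main(String[] args) {
        Visitor printer = new Visitor() {
            public void visit(Circle c) { System.out.println("circle"); }
            public void visit(Square s) { System.out.println("square"); }
        };
        Shape s = new Circle();   // static type Shape, runtime type Circle
        // printer.visit(s);      // would not compile: no visit(Shape) overload
        s.accept(printer);        // prints "circle" via double dispatch
    }
}
```

With language-level double dispatch, `visit(s)` would simply pick the right overload at runtime and the whole ceremony would disappear.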
Better answer: the only real OOP system is Smalltalk.
> design-patterns are just well-established workarounds for limitations baked into the language
Strict functional programmers have been saying that for years. They may be workarounds, but as patterns they have value in allowing one programmer to structure code in a way that is recognizable to another, even months later. You could say that a steering wheel, gas pedal, and brakes are workarounds for limitations baked into the automobile - ones we wouldn't need if cars could drive themselves - but we still value the fact that the steering wheel and the rest of the controls for a driver generally look and work the same across vehicles.
Right you are - but my point is that language designers (especially Java's) aren't evolving their languages to render the more tedious design-patterns obsolete - instead they seem to accept that DPs are here to stay.
Take the singleton pattern for example. It’s not perfect: it only works when constructors can be marked as private and/or when reflection can’t be used to invoke the constructor to create a runtime-legal second instance. A better long-term solution is to have the language itself natively support passing a reference to static-state, which completely eliminates the risk of a private ctor invocation - but that hasn’t happened.
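The fragility is easy to demonstrate. In this sketch (hypothetical `Config` class), reflection creates a perfectly "runtime-legal" second instance despite the private constructor, while an enum constant - the closest thing Java has to language-level static state - cannot be instantiated reflectively:

```java
import java.lang.reflect.Constructor;

// Classic singleton: only as safe as its private constructor.
class Config {
    public static final Config INSTANCE = new Config();
    private Config() {}
}

// The enum form is the nearest native answer: the JVM guarantees a
// single instance, and reflective construction of enums is rejected.
enum SafeConfig { INSTANCE; }

public class SingletonDemo {
    public static void main(String[] args) throws Exception {
        Constructor<Config> ctor = Config.class.getDeclaredConstructor();
        ctor.setAccessible(true);                     // defeat the private modifier
        Config second = ctor.newInstance();           // a second "singleton"
        System.out.println(second == Config.INSTANCE); // prints false
    }
}
```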
OOP Design Patterns are like JavaScript polyfills: they enable things that should be part of the native platform. They’re fine to keep around for a few years when they’re new, but when you’re still using what is essentially an aftermarket add-on for 5+, 10+ or even 25+ years you need to ask if it’s the way things should be or not...
Patterns are commonly (but not only) symptoms of missing features in the language, but at the same time they are just vocabulary.
Patterns exist in functional programming as well, any map/reduce operation is a pattern, any monad is a pattern. It's a proven way to achieve a goal, it's easy to compartmentalise under a moniker and refer to the whole repeatable chunk with a name.
Unfortunately a lot of people only learn how to properly apply design patterns after doing it wrong and/or overdoing it (mea culpa here!). It's easy to spot the bad smells after you've been burnt 2-3 times.
If map and reduce were design patterns, you’d be writing out the iteration bits every time you used them. Instead map and reduce are abstractions, and you only have to plug in your unique functions and values.
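Java streams illustrate the distinction: the iteration machinery lives in the library, and you supply only the unique functions:

```java
import java.util.List;

public class MapReduceDemo {
    public static void main(String[] args) {
        // map and reduce as abstractions: no hand-written loop,
        // just the transformation and the combiner plugged in.
        int sumOfSquares = List.of(1, 2, 3, 4).stream()
                .map(n -> n * n)           // plug in: the transformation
                .reduce(0, Integer::sum);  // plug in: identity and combiner
        System.out.println(sumOfSquares);  // prints 30
    }
}
```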
This reminds me of a talk by Rich Hickey [0], where he introduces the Transducer pattern, which is actually an abstraction for map/reduce/fold/filter etc.
(But I'm not trying to invalidate your claim that patterns exist in FP in general, only that specific case. Afaik, the Transducer abstraction isn't even widely-known nor used.)
By writing enough code using design-patterns that you see a pattern to the usage (and abuse) of design-patterns - and by getting plenty of experience with other languages and paradigms where said patterns are irrelevant.
I'd agree with that statement. I feel that I have settled in a few frameworks that I consider modern but mature and stable, with a good job market. For that I had to learn these smaller languages inside Java.
And not only smaller languages, but the tool set as well. Maven and Gradle for a start, and Gradle is its own universe of little quirks and "what the fuck" moments. IDEs have a learning curve (but I learned vim and emacs before using any IDE, so I know how steep learning curves work), and if you manage to use their features they can help your productivity immensely. Frameworks such as Spring have enough pull that it's easy to find interesting projects using them; I think the direction of Spring Boot is pretty good for modern tech enterprises, at least for part of the stack.
Boring technology has its place. It's a hassle that you have to learn a pretty big set of tools to be productive in Java, but when you do, you can actually accomplish a lot with multiple teams and a scale of hundreds to thousands of engineers.
You can also shoot yourself in the foot pretty easily, at massive scale, if the people making decisions are architecture astronauts rather than battle-hardened engineers who have suffered through incomprehensible code and pile upon pile of mishmashed technologies and failed frameworks. Keeping the tech stack simple, boring and focused on a small set of tools has its benefits at scale.
Groovy is a nightmare. We now reject candidates who advocate for using it in production during interviews, and we warn those who say they'd never dare use it there but do use it for testing.
Who cares if you have to write setters and getters by clicking a button in IntelliJ, or have to make your types explicit rather than asking every subsequent reader to use their brain as a type-inference compiler?
Typing clear code isn't a problem Groovy should have solved by making it all implicit, at the cost of having non-compiled production code. Code should never fail at runtime because of a function-name typo that the compiler could have told you about earlier for free.
A lot of this also applies to other interpreted languages widely used in production, such as Ruby, Python and Clojure.
You don’t like interpreted languages in production, fine. But rejecting candidates who think differently from you just creates a monoculture and reduces the chance that you learn anything new, beyond reinforcing your own convictions.
How rational this is depends on your testing culture. With a comprehensive test suite, the guarantees offered by a compiler are not very important. The software goes through all its runtime paces before production anyway.
If you’re not going to write any tests, then obviously compile time is a crucial line of defense.
Most shops will be somewhere in the middle where compiler guarantees offer a real but marginal benefit, to be weighed against other tradeoffs.
The point is not "let's just not write any tests". With a compiler that offers meaningful guarantees, you can write more worthwhile tests than "does this function always take/return integers".
If you’re passing or returning values of the wrong type, it’s going to blow up one of your tests. Asserting on a value implicitly asserts its type. Passing a value and not getting a runtime error for your trouble pretty strongly indicates that it’s the right type.
Writing tests instead of utilizing the compiler is wasted time and effort. And it is one of the worst kinds of code duplication, because you are reimplementing all the type-checking, bounds-checking, etc. Usually badly, buggy and again and again. And since usually the test suite doesn't have tests for the tests, you will only notice if something breaks in the most inopportune occasion possible.
In unit testing for dynamically typed languages, very rarely do you make explicit type checks. The type checking naturally falls out of testing for correct behavior.
If the language doesn't help you with a function name typo, that's a crap dynamic language. Not only is that not a feature or benefit of dynamic typing, but it fuels unfair strawman arguments against it.
Here is something I made largely for my own use:
    This is the TXR Lisp interactive listener of TXR 257.
    Quit with :quit or Ctrl-D on an empty line. Ctrl-X ? for cheatsheet.
    TXR is enteric coated to release over 24 hours of lasting relief.
    1> (file-put-string "test.tl" "(foo (cons a))")
    t
    2> (compile-file "test.tl")
    * test.tl:1: warning: cons: too few arguments: needs 2, given 1
    * test.tl:1: warning: unbound variable a
    * test.tl:1: warning: unbound function foo
    * expr-2:1: variable a is not defined
That's still a strawman; it only scratches the surface of what can be diagnosed.
I see where you're coming from, having used Groovy for a few years (in Grails, mostly) but I think you're also overstating your case.
Groovy is very quirky, and it's good at turning compile errors into runtime errors (@CompileStatic negates many of its advantages, like multiple dispatch) and making IDE refactoring less effective. But then again, so is Spring. This got better when Java configuration was introduced... and then came Spring Boot, which is a super leaky abstraction, with default configuration that automatically backs off depending on arbitrary conditions (!). And yet people find it valuable, because it reduces boilerplate.
These days, I use Groovy mostly in testing (and Gradle), and it can really make tests more expressive.
> IDEs and code generators have replaced a lot of the more stupid boilerplate. Not that there isn't a lot of stupid boilerplate in Java but it's been greatly reduced by tooling.
I've been dealing with Java for about four years in an academic setting, not professional. When observing Java code in workplaces, the code bases have universally been bloated monstrosities composed mainly of anti-patterns - but that was hopefully down to the era these applications were written in (J2EE, Struts, etc).
My experience is that:
* Auto-generated code is still visual (and debugging) noise
* Annotations like Lombok's are great, but are "magic" abstractions, which I find to be problematic. They add cognitive load, because each specific library has its own assumptions about how you want to use your code, as opposed to built-in language constructs.
* Especially for Lombok, I can't help but think adding @Getter and @Setter to otherwise private fields is a poor workaround for a simple language deficiency: not having C#'s properties { get; set; }. I feel the same about libraries that try to circumvent type erasure of generics.
* Compared to C#'s ASP.NET (core), I find fluent syntax configuration with sane defaults and "manual" DI configuration much more manageable and maintainable than auto-magic DI as in Spring, because at least it's explicit code.
* Java tooling (and if previous points weren't, this is definitely a subjective stance) just seems inferior and to set a low bar in terms of developer experience - Maven or Gradle compared to NuGet, javac compared to the dotnet CLI, executable packaging... As to other tooling, I suppose you're mainly referring to the IntelliJ suite?
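To make the Lombok point above concrete, here is a sketch of what `@Getter @Setter` expands to for a hypothetical `User` class (the annotation is shown in a comment so the snippet stands alone without the Lombok jar):

```java
public class User {
    // With Lombok this whole class body would be:
    //   @Getter @Setter private String name;
    // The library generates exactly the boilerplate below, whereas
    // C#'s built-in syntax needs one line:
    //   public string Name { get; set; }
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        User u = new User();
        u.setName("Ada");
        System.out.println(u.getName()); // prints Ada
    }
}
```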
I think C# (and by extension, Kotlin) had the right idea in seeking a balance through removing as much boilerplate as possible in base language constructs. Adding libraries is fine, but shouldn't be a workaround to evolving the language.
I agree with all of your points, including C# and Kotlin approach.
At the same time, a lot of that is an artefact of decisions made early in Java, such as backwards compatibility.
The same goes for tooling; a lot of the tools are products of their time: Ant, Maven and now Gradle.
Lombok has its pros and cons, I use it sometimes but had issues with Lombok and Java upgrades due to how Lombok writes out its bytecode.
I haven't touched javac in more than a decade so I don't really care about it, it's been abstracted away by my build tools (and I agree, the build tools are less-than-optimal).
Again, I agree with all the criticisms. At the same time, given how large the Java installation base is, and having had to go through the Python 2 vs Python 3 migration path, debacles and all, I still prefer to have all this cruft that is known, well discussed online, and documented in its quirks, rather than a moving target of language features over the last 20-25 years.
Java is too big due to all the evolution it went through. Could it have taken different paths and modernised the language? Yes, at the expense of some core design decisions made at the beginning. Do I agree with all those decisions? Nope, but who agrees with every design decision made by their programming language's designers?
> Compared to C#'s ASP.NET (core), I find fluent syntax configuration with sane defaults and "manual" DI configuration much more manageable and maintainable than auto-magic DI as in Spring, because at least it's explicit code.
Fluent syntax, as it exists today, needs to die in a fire. It's horrible. It abuses a core computer-science concept (return values), turning it into something that exists only to save a few keystrokes.
1. You have no way of knowing if the return-value is the same object as the subject or a new instance or something else.
2. It doesn't work with return-type covariance.
3. You can't use it with methods that return void.
4. You can't (easily) save an intermediate result to a separate variable.
5. You can't (easily) conditionally call some methods at runtime.
6. There is no transparency about to what extent a method mutates its subject or not. This is a huge problem with the `ConfigureX`/`UseY`/`AddZ` methods in .NET Core - I always have to whip-out ILSpy so I can see what's really going on inside the method.
Some libraries, like Linq and Roslyn's config use immutable builder objects - but others like ConfigureServices use mutable builders. Sometimes you'll find both types in the same method-call chain (e.g. Serilog and ImageProcessor).
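Points 1 and 6 are easy to demonstrate with two hypothetical Java builders whose call sites look identical even though only one of them mutates its receiver:

```java
// Two builders with identical-looking fluent call sites; nothing in
// the chain itself tells you which one mutates its receiver.
class MutableBuilder {
    int value;
    MutableBuilder with(int v) { this.value = v; return this; } // returns same object
}

class ImmutableBuilder {
    final int value;
    ImmutableBuilder(int v) { this.value = v; }
    ImmutableBuilder with(int v) { return new ImmutableBuilder(v); } // returns new object
}

public class BuilderDemo {
    public static void main(String[] args) {
        MutableBuilder m = new MutableBuilder();
        MutableBuilder m2 = m.with(42);
        System.out.println(m == m2);  // prints true: the original was mutated

        ImmutableBuilder i = new ImmutableBuilder(0);
        ImmutableBuilder i2 = i.with(42);
        System.out.println(i == i2);  // prints false: original left untouched
        System.out.println(i.value);  // prints 0
    }
}
```

You only find out which behaviour you got by reading the implementation - exactly the ILSpy problem described above.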
What languages need is to bring back the "With" syntax that JavaScript and VB used to have - and better annotations or flow-analysis so that the compiler/editor/IDE can warn you if you're introducing unwanted mutations or unintentionally discarding a new immutable return value.
It does that, but it also makes your code read more like natural language. Perhaps I was careless in my wording, as I meant to point to manual, explicit configuration rather than fluent syntax per se.
As to your bullet points: I can see where you're coming from. I still think it's better than the invisible side effects and invisible method calls you get with annotations.
> What languages need is to bring back the "With" syntax that JavaScript and VB used to have
As far as I know, With... End With is a weird cross between "using" in C# and object initialisers. How does that help prevent mutations? One of the code examples (0) even explicitly mentions:
    With theCustomer
        .Name = "Coho Vineyard"
        .URL = "http://www.cohovineyard.com/"
        .City = "Redmond"
    End With
I honestly don't see the big difference with either:
    var customer = new Customer {
        Name = "Coho Vineyard",
        URL = "http://www.cohovineyard.com/",
        City = "Redmond"
    };
or:
    var customer = Customer
        .Name("Coho Vineyard")
        .URL("http://www.cohovineyard.com/")
        .City("Redmond")
        .Build();
"The most tedious work I do right now is adding "one more" scalar or complex data-member that has to travel from Point A in Method 1 to Point B in Method 2 - I wish I could ctrl+click in my IDE and say "magically write code that expresses the movement of this scalar piece of data from here to here" and that would save me so much time."
All of this is possible today and not even that hard (though it's harder than meets the eye, there's a lot of issues that description glosses over that you have to deal with, especially in conventionally-imperative languages). The main problem you face is that the resulting code base is so far up the abstraction ladder that you need above-average programmers to even touch it. (I am assuming that this is merely a particular example of a class of such improvements you would like made.) This is essentially the same reason why Haskell isn't ever going to break out of its niche. You can easily create things like this with it, but you're not going to be hiring off the street to get people to work with it.
Or, to put it another way, a non-trivial reason if not the dominant reason we don't see code written to this level of abstraction is the cognitive limitations of the humans writing it.
I know HN doesn't really like this point sometimes, to which I'd ask anyone complaining if they've mentored someone a year or two out of college and fairly average. You can't build a software engineering industry out of the assumption that John Carmack is your minimum skill level.
Rich Hickey and Clojure have a low-tech solution for you: use maps. This “wiring through all the layers” problem is basically self-imposed by the use of static typing for data objects. Instead you should mostly pass around maps, validate that the keys you care about are present, and pass along the rest.
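The same low-tech approach can be sketched even in Java (hypothetical `handle` function and keys): each layer validates only the keys it cares about and forwards the whole map untouched, so adding "one more" field changes no signatures in between.

```java
import java.util.Map;

public class MapPassingDemo {
    // Validate the keys this layer needs; pass along the rest as-is.
    static Object handle(Map<String, Object> msg) {
        if (!msg.containsKey("count")) {
            throw new IllegalArgumentException("missing key: count");
        }
        return msg.get("count");
    }

    public static void main(String[] args) {
        // "rubric" travels through handle() without being declared anywhere.
        Map<String, Object> msg = Map.of("count", 7, "rubric", "demo");
        System.out.println(handle(msg)); // prints 7
    }
}
```

The trade-off, of course, is that the compiler no longer checks any of it - which is the Clojure-vs-static-typing argument in miniature.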
Of course your Java peers aren’t going to be happy about this, so in some ways a new language is needed to establish a context for this norm. But the limitation isn’t physical, not even in Java.
It's weird to me how you find the magic of spring sad while you find the magic of Lombok acceptable.
Lombok requires that you use a supported build system and IDE and while all the currently relevant ones are supported that is no guarantee. Needs plugins and agents that support your various tools' versions including the JVM itself. I've been in that hell before with AspectJ and the aspectJ compiler vs eclipse plugin (version incompatibilities that made it impossible to work efficiently until they fixed it all up).
Disclaimer: last company we used Lombok. Current company we are switching certain things to Kotlin instead. data classes FTW for example. I do miss magic builders. Builders are awesome. Building the builder is tedious ;)
Lombok magic doesn’t span across files. Look at the class, see the annotations, and as long as you have even a trivial understanding of what Lombok is, you can grok it. It’s basically like a syntax extension.
Spring on the other hand... autowired values everywhere, and at least for me (who doesn’t work with Spring day in and day out) it’s very difficult to understand where they come from.
Don't get me wrong, I've used Lombok and liked it from the working with it and what it saves you aspect.
We do use spring and I've used it for a very long time now. Nothing is magic and not understandable about wiring if you do it right. Unfortunately there are a lot of projects out there that use it in exactly the wrong way if you ask me and then I'd agree with you.
I used to be in a company where we used XML config and everything was wired explicitly. The XML part sucked but with SpringIDE (eclipse at the time) it was Ctrl-clickable to find what's what.
We use Java config with Spring at my current company and I can Ctrl-click my way through it all and find what's what. There's a small corner of 'package-scan'ed stuff that is evil but we are cleaning that up.
FWIW I think that whether someone wants to use mutable objects or swears by immutability should be their choice, especially for interoperability with legacy code. It can be much easier to 'just go with the flow and be careful' in a legacy code base vs trying to have a clear separation of where immutability has been introduced already and where we still don't use it. Not everything is green field (in fact most stuff isn't) and not every company gives you enough time to always Do The Right Thing (TM).
Copying objects is a well known need and there are countless libraries that try to help you with it. All with their own problems, notably runtime errors vs. compile time safety or pervasive use of reflection.
When applying events, for instance. In F#, you could do:
    match msg with
    | IncreaseCounter cnt ->
        { model with Count = model.Count + cnt }
    | DecreaseCounter cnt ->
        { model with Count = model.Count - cnt }
    | ResetCounter ->
        { model with Count = 0 }
    | ChangeRubric name ->
        { model with Rubric = name; Count = 0 }
The "with" says: copy the original record by value, but change these fields. For completeness' sake: F# also has implicit returns, so the bit between braces is the function's return value.
Why do you think copy methods are "presumably wrong and misguided"?
For the rest, I agree that in 99% of the cases inheritance and mutability are not needed if you're using greenfield Kotlin libraries. But they are unfortunately often necessary in the Java world.
Mutable data classes are especially quite useful for reducing boilerplate when creating classes that implement the Fluent Builder pattern, which is unfortunately quite necessary if you don't have a copy method...
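A hand-rolled fluent builder of the kind meant here (hypothetical `Person` class) shows how the boilerplate adds up: every field appears again in the builder, in a setter, and in `build()` - exactly what Kotlin's data classes and `copy()` make unnecessary.

```java
// Hand-rolled fluent builder: each field is declared twice and
// shepherded through a setter and build(), all by hand.
final class Person {
    final String name;
    final int age;
    private Person(String name, int age) { this.name = name; this.age = age; }

    static Builder builder() { return new Builder(); }

    static final class Builder {
        private String name;
        private int age;
        Builder name(String name) { this.name = name; return this; }
        Builder age(int age) { this.age = age; return this; }
        Person build() { return new Person(name, age); }
    }
}

public class PersonDemo {
    public static void main(String[] args) {
        Person p = Person.builder().name("Ada").age(36).build();
        System.out.println(p.name + " " + p.age); // prints Ada 36
    }
}
```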
> Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
Lombok, at the very least, eliminates “manually writing by-hand getter-and-setter methods” (https://projectlombok.org/).
Thank you. I stopped using Java on a regular basis around late-2009-ish, which was before Lombok became really popular (as far as I know) so it is encouraging to hear that writing Java in-practice isn’t as bad as I feared.
Still... I feel strongly that Java eventually needs to adopt object-properties and reified-generics for it to stay relevant - otherwise it offers fewer and fewer advantages over competing languages - at least for greenfield projects at first, and eventually it’ll start being _uncool_ and fail to attract newer and younger devs to keep the ecosystem alive. Then we’ll end up with the next COBOL (well, more like the next Pascal/Delphi...)
That seems unnecessarily harsh towards Pascal/Delphi. I’d stick with the COBOL metaphor.
I’d also add that while Java does suck in some fairly obvious ways, most languages suck and at least Java can actually run concurrent threads in parallel.
Lombok ends up having all the costs of using a better JVM language (you still need to integrate it into your coverage tools, code analyzers etc.) but with few of the benefits. I used to use Lombok but in the end it was easier and better to just use Scala.
That’s fair. When I was writing Java code I wanted desperately to evaluate Kotlin, for the obvious reasons you’d expect, but there was not an easy Lombok-to-Kotlin migration path.
I probably would not choose Java with Lombok for a greenfield project today, were it up to me. But if I was forced to use Java, I would use Lombok. I was forced to use Java and I did use Lombok, and it didn’t really suck that bad.
I don't think any reasonable decisionmaker would approve Lombok and not Kotlin or Scala. (But I'm aware that many large organisations end up making unreasonable decisions).
The gap between post-8 Java and Kotlin is pretty small yeah. Though you have to write a lot of async plumbing yourself, and not having delegation is a real pain.
> I don't think any reasonable decisionmaker would approve Lombok and not Kotlin or Scala. (But I'm aware that many large organisations end up making unreasonable decisions).
For whatever reason, it’s a lot easier for most organizations to sign off on using a specific library for an existing programming language, even one as transformative as Lombok, than to sign off on using a different programming language, even one as backwards-compatible as Kotlin. Often they are categorically different decisions in terms of management’s interest in micromanaging them: they might default-allow you to include libraries and default-disallow you to write code in a different language.
In this respect, Lombok is really handy for a very common form of unreasonable organization :)
But when you read a novel, it's full of excessive and redundant code.
You don't write code for the machine, you write it for your team. It's fine if it's nice and comfortable, a bit repeated and fluffy, rather than terse and to the point.
This is exactly my grudge with boilerplate. Code is being read much more often than written.
I don't care if you hand-coded all those buckets of accessors or your IDE generated them - that's irrelevant to the fact that they're still overwhelmingly useless noise. Noise I need to read through, review in PR diffs, and skim in "find symbol usage" output, class interface outlines, javadocs, etc. - all of that 10 times as often as during writing. Somehow I'm expected to learn to ignore meaningless code, while producing it is fine?..
Remember the point made in "green languages, brown languages" recent post here on HN? The insight for me there was the source of "rewrite it from scratch" urge which should be very familiar to engineers working in the field. It comes from incomprehensible code or weak code reading skills. Either way, boilerplate does nothing but harm.
So no, while I agree on your point that code exists principally to be read by humans (and as a nice secondary bonus, executed by machines) -- I disagree that boilerplate is "fine" whatever its incarnation. It's not, because it directly damages the primary purpose of code: its readability.
> Still to this day, I have absolutely no idea how people can write Java for line-of-business applications (its bread-and-butter!) and remain sane from having to manually manage data-flow, implementing Java Beans, and manually writing by-hand getter-and-setter methods...
My long-standing view is that Java's strict and verbose OOP syntax and semantics are an interface for IDEs. People who are hand-coding are practically guaranteed to be baffled by the verbosity, but they forget that Java development was the driver for IDE evolution (afaik), such that now we have ‘extract method’ and similar magic that understands code structure and semantics.
More specifically, OOP works, or should work, as an interface for IDEs that allows to (semi-)programmatically manipulate entities on a higher level, closer to the architecture or the problem domain.
Like you, I wondered if this manipulation can be harnessed and customized, preferably in a simpler way than giving in to the whole OOP/IDE/static-typing tangle and without writing AST-manipulating plugins for the IDE. In those musings I ended up with the feeling that Lisps might answer this very wish, with their macros and hopefully some kind of static transformations. Which are of course manipulating ASTs, but it seems to be done somewhat easier. Alas, predictably I've had no chance of doing any significant work in a Lisp, so far.
FYI, for C# you have ReSharper, which lets you add a parameter to a method and automatically propagate it through several call levels based on references. I guess sometimes all you need to know are the right tools.
Nowhere near. There was a lot of OOP hype in the early 90s with Smalltalk and C++, so Java just went all-in (everything is in a class) on that trend of the times.
Could you give a concrete example of your problem in a gist or something? I'm curious if it's solvable in C# as is. Sounds like it may be the kind of thing I'm approaching with reflection and attributes right now.
If you're writing in Java, why not write in Kotlin? They're sufficiently compatible you can have Java and Kotlin files in the same directory structure, compiled with the same compiler.
Well, in the case of Java there definitely are ways to minimize the boilerplate. Some of the more common ones that I use:
- Lombok library ( https://projectlombok.org/ ) generates all of the getters and setters, toString, equals, hashCode and other methods that classes typically should have (JetBrains IDEs allow you to generate them with a few clicks as well)
- MapStruct library ( https://mapstruct.org/ ) allows mapping between two types, such as a UserEntity and a UserDto - objects that are largely the same yet should be kept separate for domain purposes
- Spring Boot framework ( https://spring.io/projects/spring-boot ) allows getting rid of some of the XML that's so prevalent in enterprise Java, even regular Spring, and allows more configuration to be done within the code itself (as well as offers a variety of pluggable packages, such as a Tomcat starter to launch the app inside of an embedded Tomcat instance)
- JetBrains IDE ( https://www.jetbrains.com/ ) allows generating constructors, setters/getters, equals/hashCode, toString (well, those are covered by Lombok), tests, as well as a variety of refactoring actions, such as extracting interfaces, extracting selected code into its own method and replacing duplicated bits, extracting variables, converting between lambdas and anonymous implementations of functional interfaces, and generating all of the methods that must be implemented for interfaces etc.
- Codota plugin ( https://www.codota.com/ ) offers some autocomplete improvements, ordering suggestions by how often other people chose each of the available options, though personally I saw a not-insignificant performance hit when using it
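To make the boilerplate problem concrete: since Java 16 the language itself also offers records, which generate roughly what Lombok's @Value annotation has long provided (constructor, accessors, equals/hashCode, toString) from a single declaration. A minimal sketch, with a hypothetical User type:

```java
public class RecordDemo {
    // One line replaces dozens of lines of conventional bean boilerplate:
    // fields, constructor, getters, equals, hashCode, and toString.
    record User(long id, String name, String email) {}

    public static void main(String[] args) {
        User a = new User(1L, "Ada", "ada@example.com");
        User b = new User(1L, "Ada", "ada@example.com");
        System.out.println(a.name());    // generated accessor
        System.out.println(a.equals(b)); // generated value equality -> true
        System.out.println(a);           // generated toString
    }
}
```

Records are immutable by design, so they cover the "immutable class" case; Lombok's @Data or @Builder still fill the gaps (mutable beans, builders) that records deliberately leave open.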
As far as I know, there is a rich ecosystem around Java for treating the codebase as a live collection of abstractions that can be interacted with in more or less automated ways, as opposed to just bunches of overly verbose code (which it can still be at the same time). Personally, I really like it, since I can generate JPA annotations for database objects after feeding some tools information about where the DB is and letting them do the rest, as well as generate web service code from WSDL (though I haven't used SOAP in a while, and sadly no one uses WADL, though OpenAPI will get there).
And then there are attempts like JHipster ( https://www.jhipster.tech/ ), which are more opinionated but still interesting to look at. I think model-driven development and generative tooling are a bit underrated, though I also believe much of this could be done in other, less structured languages like Python (though that may take more effort), and probably done better in more sophisticated languages such as Rust.
Low Code/No Code solutions don't work because the people implementing them are rarely engineers themselves. Most (good) engineers have learned, through training and/or experience, to actually engineer things: edge cases, error handling, user experience, efficiency, maintainability, automated testing, and a plethora of other subtle and obvious aspects of system design. I know this quite well because I've worked with these so-called low-code and no-code platforms, and every one I have seen ended up having to be taken over by experienced engineers brought in to fix (or in some cases completely rebuild) a poorly designed system. These platforms typically suffer from the "last mile" problem as well, requiring someone to write actual code.
And there's been the business process engine craze in between. BPEL comes to mind which also has 'visual editors' for the business people to use.
It's too complex for them, and then you pay software engineers to use BPEL instead - which is just a worse language to actually program in than the underlying system.
Or any other number of 'process engines' which give you a worse language to describe your actual process in and then you need to do stupidly convoluted things to do simple things. But hey, we didn't have to code!
I worked on a Pega project once. There was nothing in there that the business people would be able to touch, especially after the requirements exceeded the capabilities of Pega’s primitives. One of the local friendly FTEs (the dev work was contracted out) would’ve been happy to use C#/ASP.Net web forms like everything else in the org.
Some people see past tries at something as proof that something will never work. Others see past tries at someone having the right idea, but wrong implementation.
Imagine how many tried flying before we "invented flight", and how many said "oh how they won't learn from the past".
I think that's a fair point. The way I see it, going from requirements (even visual ones) to working system would require strong AI, as any sufficiently powerful visual environment would wind up being Turing complete.
Which means that no-code is either use-case-bounded or claiming something roughly on par with a breakthrough. The first is common enough, and it's where I imagine most low/no-code offerings land when the hype is stripped away. The hype seems to promise something on par with the second, and I think that's where the dismissive attitude comes from.
Functional and declarative programming are mostly specifying requirements directly. You don't need an AI to do it; in fact, that would be the wrong tool (AI is good for fuzzy problems and inference, not following a spec).
An extreme example of this are logic and verification systems like prolog and TLA+.
There is a sweet spot of low code I haven't seen explored yet, which is a declarative system that is not Turing complete. That would be an interesting avenue to explore.
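The "specifying requirements directly" point can be sketched even in a mainstream language. Below, a hypothetical requirement ("sum the orders over 100") is written both imperatively (how to loop) and declaratively (what is wanted); the amounts are made-up example data:

```java
import java.util.List;

public class DeclarativeDemo {
    public static void main(String[] args) {
        List<Integer> orders = List.of(40, 120, 250, 90);

        // Imperative: spell out the mechanism step by step.
        int total = 0;
        for (int amount : orders) {
            if (amount > 100) total += amount;
        }

        // Declarative: state the requirement, not the loop.
        int total2 = orders.stream()
                           .filter(a -> a > 100)
                           .mapToInt(Integer::intValue)
                           .sum();

        System.out.println(total);  // 370
        System.out.println(total2); // 370
    }
}
```

A fully declarative logic language like Prolog pushes this further still, but the same shift - from mechanism to specification - is visible even here.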
Business requirements still have a measure of ambiguity and "do what I mean" to them. They are more formal than natural language, sure, but fall far short of the formalism of a declarative programming language. This is a big part of the business partnership underlying most Agile methodologies. If the formal spec could be handed off, then it would be and Waterfall would work better in enterprise settings. Instead, the team is constantly requiring feedback on the requirements.
So I guess I still see declarative languages as being part of the tech stack and something tantamount to AI being needed to handle all the "do what I mean" that accompanies business process documentation.
I think honestly the problem is a lack of tech literacy. I've seen spec sheets that are glorified database and json schemas in a spreadsheet, put together by BAs and translated by hand.
It could be done directly if every BA had enough programming knowledge to put together schemas and run CLI tools to verify them.
> had enough programming knowledge to put together schemas and run CLI tools to verify them.
That's quite a lot of programming knowledge. It makes some sense to decouple the business-oriented roles from the more technical ones - BAs trying their hand at coding is how you get mammoth Excel spreadsheets and other big balls of mud.
Not sure if Prolog or formal methods are good examples here, as they are pretty hard programming languages. Yes, they can be used to specify a system, but they also require human ingenuity, aka strong intelligence, to get right. Prolog may be easy for some people, but I spent an inordinate amount of time understanding how to use cut properly and how to avoid infinite loops caused by ill-specified conditions in my mutually recursive definitions.
As for formal methods, oh, where shall I even begin? The amount of time it takes to turn something intuitive into correct predicate logic can be prohibitive for most professionals. HN used to feature Eric Hehner's Practical Theory of Programming; I actually read through his book. I could easily spend hours specifying a search condition even though I could solve the search problem itself in a few minutes. And have you checked out the model-checking patterns (http://people.cs.ksu.edu/~dwyer/spec-patterns.ORIGINAL)? I honestly don't know how a mere mortal like me could spend my best days figuring out how to correctly specify something as simple as "an event will eventually occur between an event Q and an event P". Just for fun, go look at the CTL specification for that pattern.
I want to see formal methods used more in lieu of writing standards documents. If you go read a standards document, say for 5G wireless, you'll find it's largely formal specification awkwardly stated in English and ad hoc diagrams. It would be better to just write out something formal (with textual annotations as a fallback) and have a way to translate that to readable language.
They were right. The previous attempts did in fact use the wrong approach, and people have now successfully turned lead into gold. The only problem is that it’s too expensive to be worth doing.
I don't agree. If you could ask a prime Newton if he'd be satisfied converting lead into gold in a cost prohibitive manner I would bet any amount of money his answer would be a quick "no". The goal of alchemy was to convert lead into gold in a way that made the discoverer rich, it's just not proper to say the second part explicitly, but I believe most people understand it that way.
> The goal of alchemy was to convert lead into gold in a way that made the discoverer rich
That definitely needs a citation. The Wikipedia page mentions no such motivation, and describes Alchemy as a proto-scientific endeavor aimed at understanding the natural world.
However, the analogy is still accurate, because the right approach involved several steps which no one thought were conceivably part of the solution: "understand how forces work at macro scales", "understand electricity", "understand magnetism", "develop a mathematical framework for summing tiny localized effects over large and irregular shapes", "develop a mathematical framework for understanding how continuous distributions evolve based on simple rules", "learn to look accurately at extremely small things", "learn to distinguish between approximate and exact numerical relationships", "develop a mathematical framework for understanding the large-scale interaction of huge numbers of tiny components", and so on.
If you went back in time to an age where people were working hard on changing lead into gold and your mission was to help them succeed as soon as possible, your best bet would probably be something like teaching them the decimal place value system, or how to express algebraic problems as geometric ones. But if you also told people that this knowledge was the key to solving the two problems they were working on, "how to make very pure versions of a substance", and "how to understand what makes specific types of matter different" you would reasonably have been regarded as deluded.
> However, the analogy is still accurate, because the right approach involved several steps which no one thought were conceivably part of the solution
I don’t see how that follows. It’s just a truism that nobody figured out how to do it until someone finally did. The fact that the path wasn’t obvious at various points in the past seems irrelevant.
> But if you also told people that this knowledge was the key to solving the two problems they were working on, "how to make very pure versions of a substance", and "how to understand what makes specific types of matter different" you would reasonably have been regarded as deluded.
If they were listening to you at all, it’s not at all obvious why this part would sound deluded.
How is it any more exotic than any of the failed alchemies?
'It' (most of 17th, 18th and 19th and some early 20th century mathematics, chemistry and physics) is clearly a lot more abstract than the failed alchemies.
The point is that 'just keep trying' would not have been a good strategy.
Many alchemists tried to turn copper to gold as well. They might as well think that their predecessors are just unlucky by using the wrong implementation.
No Code is mostly a code word for outsourcing. You use their app to get it started, realize it won’t meet your requirements, and then pay them to work on it forever. Unless it’s just a marketing Website.
No Code could also be viewed as a compromise for selling dev tools to the mass market. You can sell a "cheap, complete*" solution, with the overhead of also dealing with customer issues and inevitably helping train a dedicated person to maintain their app. Then you have customers who need dev-tool support, as intended.
I think this is only partially true. There are aspects of coding which can be abstracted away, either because they're essentially boilerplate or because a simpler description of the solution is sufficient. Ideally if a more complex description is required, one can drill down into the simplified low-code description and add sufficient complexity to solve the problem.
I mean, couldn't many of the existing frameworks be described as low-code wrappers around more complex work flows and concepts?
> many of the existing frameworks be described as low-code wrappers around more complex work flows and concepts
Using frameworks, you are still using the language itself to command the framework. For example, if someone claims to be a React programmer, nobody would assume they don't know Javascript.
So to use a framework efficiently, you have to master both the language and the framework. In other words, the complexity not only remains, it accumulates.
But this is contradictory to low/no code's selling point, as they are targeting non-programmers.
This only goes so far, though, with frameworks. In my experience, the vast majority of people that make claims about a particular framework do not understand the abstractions they are building upon. In fact it is sufficiently reliable that I have found this to be an excellent hiring signal, to probe how well a person understands the abstractions in frameworks they use, but also to probe how they diagnose and fix holes in their own knowledge.
Everybody promises their no-code solution is going to adapt to the way your enterprise already works, but the truth is you kind of have to go the other way around if you don't want misery.
I work at Stacker (YC S20) [0] and the approach we're taking to deliver on this promise is to start with the data.
That is, we let you take spreadsheets you already use and build more powerful collaborative tools on top of them, without code.
If you take the premise that a tool has a "data" part and an "app" part - the data models the process, while the app mostly controls how the data is accessed and presented, the UX, etc. - you might see why I'm so excited about this approach: if you take the data from spreadsheets that are already being used to model a process in action, then by definition you don't have to change your process at all.
About 30 years ago one of my managers used to say "get the data model right and the application writes itself" and I have found that to be mostly true. What I have also often found is that people who create spreadsheets in business don't understand data modeling and even if the spreadsheet solves some business problem it's often very brittle and hard to change and adapt or generalize.
The spreadsheet structure point is an interesting challenge - I think a spreadsheet often ends up as the de facto model of a process, but often with, as you say, some redundancy, excessive flattening, and other structural issues that can make it more difficult to build an app around.
The nice thing, though, is that shifting this structure around does not mean changing the process being modelled - it's more just a necessary part of making a more powerful tool to support it.
It's as you say: since the process is known, it's usually very clear exactly what the app should be, which under our model can inform how to shift the structure of the spreadsheet accordingly in a pretty practical way. It's cool to see the same thing work in both directions!
From my experience working with some business-side users of spreadsheets: yes, spreadsheets usually end up as the de facto model of a process, but not necessarily an efficient model or an easily replicable one.
In banks I know of long-living spreadsheets that have been patched so much that it takes a real human weeks to months of work to disentangle the mess of macros and recalculations into a streamlined script/process. Sometimes the resulting model diverges in quirky ways that are manually patched; I've seen divergences due to datetime issues (time zones, summer time, leap days, incorrect assumptions about dates, etc.) that were only noticed through side effects of side effects, and the manual patching didn't help at all to debug the root cause.
I think spreadsheets are incredibly powerful, but the main reason for that power is that they are quite free-flowing, and that invites human creativity to solve problems with the limited set of tools and knowledge some users have - and some of those users are in quite high technical positions, using spreadsheets daily for years.
I believe you might have a killer product but I had so many headaches with spreadsheets that I wouldn't like to be working in that space.
My first project out of college was working on an internal metrics tool for a company. Their prior one was basically Excel; a guy who was due to retire had, back in the 90s, written an entire DSL with VBA, that could load files, run various functions on them, and output graphs.
Thing is, no one except him knew the DSL; everyone in the company relied on it, but they relied on him to write the DSL to compile their inputs into the output they wanted.
The rewrite included an actual database, proper data ingestion, and a nice clean frontend. The methods of aggregating data were reduced and standardized, as were the types of simulations and metrics that could be reported on; the flexibility was drastically reduced. However, the practical usefulness for everyone was drastically increased, because it moved from "we can't do anything different unless we get (the expensive, soon-to-retire person's) time" to "we can do it ourselves."
I'm very jaded toward no/low code, in general, and that experience is partly the reason why. There isn't a sweet spot, that I've seen, that allows for non-technical people to have the control they want. And that was true even with spreadsheets.
The less nice thing, though, is that the model of the process you're starting from -- the actual spreadsheet -- has, as you say, these structural problems. And since (some speculation here, but I very much suspect that) many different processes, after having been so mangled, will end up in the same redundant, excessively-flattened structure, you can't determine from the spreadsheet alone which of these different processes it is supposed to encapsulate.
So before you can start "shifting this structure around" you'll still have to go through a standard business analysis process to find out what you are going to shift it into. And if you're already doing that... Well, then most of your promise of automation is out the window already, so what's the use of having the actual implementation done in some weird newfangled "no-code" or "low-code" tool?
Some strange comments in here about Low Code, as if they weren’t already successful. There are easily hundreds of apps successfully making use of Low Code to solve problems for people. Some Marketing Automation tools have had them for 10+ Years. Integration tools are also often Low Code.
Microsoft's Power Platform is a low code framework which works well, and generates a massive amount of revenue, as does Salesforce and some others. I recently designed and implemented a complex 500 user child protection application with PP that has been live for a year now. It was highly successful, and the time and cost taken to deliver it was far less than the cost of a hand written solution. That said, there is still quite a lot of custom code required for most enterprise level solutions even with the most mature low code platforms. Low code is not a panacea, and the same issues of how to represent requirements and design arise in the low code world as in the high code world. Low code platforms will continue to mature and improve. Maybe AI will catch them one day, but I'd be surprised if that happens anytime soon.
In 7 months, over 8,000 apps were created with Budibase. Many of these are either data based (admin panels on top of databases), or process based (approval apps).
Budibase is built for internal tools so the apps are behind a log in / portal.
Microsoft Access and Claris FileMaker have nearly 3 decades of success at low-code.
I genuinely think they get a bad rap because you never hear about the really successful apps, while the problematic ones need a real programmer to sort out.
Nonsense, No/lo-code is the polar opposite of UML, No-code is the ultimate Agile, it’s exciting and fun to build with, because it’s alive, because it executes immediately, resulting in a powerful iterative feedback loop. UML is static and tedious to build, because any gratification is delayed so far into the future.
I know because I’ve done both, in fact I’ve invested all of my wealth (many millions) into developing a No-code ERP/CRM platform and it’s incredible - will launch this year.
Comparing UML and No-code is apples to oranges. UML is about generic abstraction without actual implementation. No-code is about domain-specific implementation done using simplified high-level visual constructs instead of general-purpose programming languages. In other words, No-code is programming (just not text-based), while UML is modelling.
Low-code/no-code is simply the CASE tools of the late 80s, or the UML->production system fully automated pipeline of the late 90s/early 2000s, given new life and a shiny coat of paint. The same problems apply and the same people keep buying the same damn snake oil.
Once you can specify your procedures, requirements, and constraints in a way that is specific enough for a computer to read and act meaningfully on, the elements of your specification method become isomorphic to constructs in some programming language. So you've replaced typed-in keywords with clickable symbols or buttons or controls -- but you've in no wise reduced the headwork of programming or made the programming go away.
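That isomorphism is easy to demonstrate. Take a hypothetical no-code "rule block" - say, "WHEN order total > 500 THEN apply 10% discount AND notify manager", assembled by clicking widgets. Whatever the visual editor shows, what it encodes maps one-to-one onto ordinary code (the rule, names, and numbers here are invented for illustration):

```java
public class RuleDemo {
    record Order(double total) {}

    // The clickable rule and this method carry identical information:
    // same condition, same actions, same ordering.
    static double applyRule(Order o) {
        if (o.total() > 500) {       // the "WHEN" block
            notifyManager(o);        // the "notify manager" block
            return o.total() * 0.90; // the "10% discount" block
        }
        return o.total();
    }

    static void notifyManager(Order o) {
        System.out.println("notify: order of " + o.total());
    }

    public static void main(String[] args) {
        System.out.println(applyRule(new Order(600))); // 540.0
        System.out.println(applyRule(new Order(100))); // 100.0
    }
}
```

The visual form changes the input device, not the information content - anyone maintaining the rule still has to reason about the same condition and the same two actions.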
With the recent Lambda announcement, Excel is going to have a cool escape hatch for doing more complicated stuff right in the sheet without needing to dive into VBA. I honestly thought I would never get excited about a feature being added to Excel, but I was wrong. This looks friggin’ awesome!
I strongly believe that every tool, every invention, can only innovate/be innovative in one area.
As such, what we need is the right building blocks and abstraction layers, which in the end will mean that something like no- or low-code will work. It will only work once all the tools that underpin it have crystallised over the years.
This happens at every layer, and it is fundamentally why we see so much work repeated, but slightly differently. Every language makes different trade-offs, therefore all the libraries implement the same functionality, just a bit differently each time.
Every once in a while something like UML (too complicated, e.g. due to the use of EJBs) or BusinessWorks (too slow) comes along that has promise and offers value at the time, but just misses the boat to survive until the next generation of revised underlying tools.
I think a certain set of problems requires the thinking of someone who knows how complex systems work and where the pitfalls are.
You can even see this with stepped covid restrictions that rely on infection numbers crossing a certain threshold. Most engineers would immediately see the real-world consequences that arise from the lack of hysteresis.
Similar things happen with input sanitization, deciding on formats etc.
Some stuff just takes experience. The actual writing of the code is not the problem, the knowledge of what to do and what to avoid is.
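The hysteresis point can be made concrete. A naive single-threshold rule flips state every time the number wobbles across the line, while two separated thresholds (turn on above one, off only below a lower one) absorb the noise. The thresholds and case counts here are invented for illustration:

```java
public class HysteresisDemo {
    // Naive rule: a single threshold, toggles on every crossing.
    static boolean naive(int cases) {
        return cases > 100;
    }

    // With hysteresis: restrictions activate above 120 and only
    // deactivate below 80, so small wobbles don't flip the state.
    static boolean withHysteresis(boolean active, int cases) {
        if (!active && cases > 120) return true;
        if (active && cases < 80) return false;
        return active;
    }

    public static void main(String[] args) {
        int[] series = {95, 105, 98, 130, 110, 95, 70};
        boolean state = false;
        for (int c : series) {
            state = withHysteresis(state, c);
            System.out.printf("cases=%d naive=%b hysteresis=%b%n",
                              c, naive(c), state);
        }
        // The naive rule flips four times on this series;
        // the hysteresis rule flips only twice.
    }
}
```

This is exactly the kind of "obvious once you've been burned" design knowledge the comment describes: nothing about writing the code is hard, knowing that you need the second threshold is.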
They're trying to commodify writing code. It would bring down the cost of programmers significantly. They won't succeed, I think. Instead AI will probably beat them to it.
That is why I am fundamentally skeptical with the current push for Low Code or even No Code. Seems like people just don't really learn from the past.