So you want to create a new abstraction
There’s a recurring urge to provide value in the software development lifecycle by creating a new abstraction. That abstraction may be a simple microservice, a UI, or even a whole new programming language.
The promise is always the same: use my abstraction, save time, and gain features that would otherwise have been difficult to build yourself.
But is every abstraction actually useful? We need to stop and ask: is the abstraction I’m trying to build even going to pay for itself in the long term?
It might seem obvious, but I just don’t think people think this through completely. The result is a lot of wasted effort, not only for the developers building the abstraction but also for the end users who adopt it hoping it will make their lives easier.
The selling points are usually twofold: you get things done faster by using our abstraction, and by adopting it now you put yourself in a position to use more features in the future.
An easy example is Kubernetes. It is a complete solution - a platform, even - and it genuinely lives up to its promise. Of course, it suits some contexts and not others; you should not just plug it in anywhere.
But have you considered the other side of new abstractions?
Every new abstraction comes with an overhead to learn it. This is especially true for abstractions created in-house - say, a custom extension to a popular database - because no documentation, blog posts, or community answers exist for them outside your own walls.
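To make this concrete, here is a minimal Go sketch of the kind of in-house wrapper I mean. Every name in it (the orderstore package, the Store type, FindPending) is invented for illustration; the point is that engineers who already know SQL and database/sql must now learn a bespoke layer on top of both.

```go
package orderstore

// Hypothetical in-house wrapper around database/sql. All names here
// are invented for this sketch.

import (
	"context"
	"database/sql"
)

// Store hides the familiar *sql.DB behind a custom interface.
type Store struct {
	db *sql.DB
}

func New(db *sql.DB) *Store {
	return &Store{db: db}
}

// FindPending replaces a one-line query the team already knew how to
// write. Every new filter, join, or pagination need now requires
// either another method here or an escape hatch back to raw SQL.
// Uses a Postgres-style placeholder; callers must Close the rows.
func (s *Store) FindPending(ctx context.Context, limit int) (*sql.Rows, error) {
	return s.db.QueryContext(ctx,
		"SELECT id, total FROM orders WHERE status = 'pending' LIMIT $1", limit)
}
```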
This downside is kind of obvious but the next one is not.
The other downside is edge cases, both known and unknown. New abstractions inevitably come with places where they simply cannot be used, or can be used only with a pile of hacks. Some of these gaps are known up front; the worse ones are discovered only later.
Example: you choose a garbage-collected language that abstracts away memory management. At some point you realise that manual memory management is necessary for latency reasons. Your whole company is all-in on this language. What do you do?
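In practice, teams end up fighting the abstraction from inside it. Here is a minimal Go sketch, assuming a latency-sensitive request path (the buffer size and workload are invented): buffers are recycled through sync.Pool so the hot path stops allocating, which is effectively hand-managing memory inside a language that promised you would never have to.

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool reuses 64 KiB scratch buffers so hot-path requests avoid
// fresh allocations, reducing GC pressure and pause-related tail latency.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 64*1024) },
}

// handle simulates a latency-sensitive request handler.
func handle(payload []byte) int {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf) // hand the memory back instead of letting the GC reclaim it

	n := copy(buf, payload)
	return n
}

func main() {
	fmt.Println(handle([]byte("example request")))
}
```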
Another example: low-code tools. The promise is always that you can get things done quickly without learning a serious language. But the biggest problem is, again, the edge cases: the moment a requirement falls outside what the tool anticipated, you are back to writing real code inside a platform that was never designed for it.
My point is that not all abstractions live up to their promise. The learning curve and the possibility of edge cases are costs that are rarely weighed against the time the abstraction is supposed to save.