Software Architecture

OOP: past, present, future

Object-Oriented Programming (OOP) has been a mainstream programming paradigm for about 40 years now. That’s quite a bit of time. So it’s worth asking: how did the paradigm evolve over time? Looking back, I would say there have been three eras.

Era 1: Modelling the world with objects (1980-1995)

The idea of the object-oriented paradigm is to model the world as objects. The poster child of the object paradigm in this era is Smalltalk, where everything is an object. Objects send messages to each other and live in a permanent, persisted state. This approach is great for modelling stand-alone applications with GUI components. The problem is that it doesn’t work very well for business entities. In Smalltalk everything is an object and everything is persistent. In other languages, regular programs are started and stopped, and you only want to persist the business entities. Persisting a subset of the objects is possible with object databases. This is a challenging problem though, similar to the serialization of object graphs. Searching and navigating heaps of objects is also not so easy. There’s also no easy way to share the business entities across instances of the application. For these reasons, business applications have frequently relied on relational databases to persist their state.

Era 2: Objects and enterprise applications (1995-2005)

Using a relational database to persist object graphs leads to the so-called “object-relational impedance mismatch”. It manifests itself in the difficulty of having a rich domain model that is persistent at the same time. The simplest approach to reduce the mismatch is to have a simpler domain model – just data structures – that can be persisted easily. But this in turn means that you move back towards a more procedural style of programming. This style is well captured by Martin Fowler’s “anemic domain model” pattern.
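
To make the pattern concrete, here is a minimal, purely illustrative sketch (the class names are made up): the entity is a bare data holder, and the business logic lives in a separate, procedural service.

    import java.math.BigDecimal;

    // Anemic domain model: the entity is just a data holder with getters and setters.
    class Account {
        private long id;
        private BigDecimal balance = BigDecimal.ZERO;
        long getId() { return id; }
        void setId(long id) { this.id = id; }
        BigDecimal getBalance() { return balance; }
        void setBalance(BigDecimal balance) { this.balance = balance; }
    }

    // The behaviour sits in a procedural service, not in the domain object itself.
    class AccountService {
        void withdraw(Account account, BigDecimal amount) {
            if (account.getBalance().compareTo(amount) < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            account.setBalance(account.getBalance().subtract(amount));
        }
    }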

The Java Enterprise Platform is a major technology of this era. It embraced object-orientation with the concept of Enterprise JavaBeans (EJB). Prior to EJB 3, entity beans and session beans were objects persisted by the application server itself, with a vague resemblance to an object database. Every operation was carried out synchronously over the network. The technology proved hard to use, however, because of the associated network costs. Another mismatch. Starting with EJB 3, entity beans were turned into regular objects persisted with an object-relational mapper.

Other approaches to object-orientation exist. Domain-driven design promotes rich domain models, in line with the idea of “modelling the world as objects”. To solve the object-relational mismatch, the domain model is kept separate from the model used for persistence. So-called repositories take care of dealing with the mismatch.
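
A rough sketch of that separation, with illustrative names: the domain entity carries its behaviour and knows nothing about persistence, while a repository interface hides the mapping behind it.

    import java.util.Optional;

    // Rich, persistence-ignorant domain entity: behaviour lives on the object.
    class Order {
        private final String id;
        private boolean shipped;
        Order(String id) { this.id = id; }
        String id() { return id; }
        boolean isShipped() { return shipped; }
        void ship() { shipped = true; }
    }

    // The repository deals with the object-relational mismatch behind this interface.
    interface OrderRepository {
        Optional<Order> findById(String id);
        void save(Order order);
    }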

The actor paradigm can be seen as a special form of object-orientation where objects communicate asynchronously. Actors are stateful domain entities. This avoids the networking problem but doesn’t provide an out-of-the-box solution for persistence. One way to solve it is through event sourcing or object serialisation.
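
A toy sketch of the actor idea in plain Java (not a real actor framework, just an illustration): state is confined to the actor, and messages are processed asynchronously, one at a time, from a mailbox.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Toy actor: a single-threaded executor plays the role of the mailbox,
    // so the mutable state is only ever touched by one thread.
    class CounterActor {
        private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
        private long count = 0;

        void send(long increment) {                   // asynchronous message, returns immediately
            mailbox.execute(() -> count += increment);
        }

        void shutdown() {
            mailbox.shutdown();
        }
    }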

The heavy use of inheritance is also a characteristic of this era. Object-orientation promised reuse, which we thought meant inheritance. This missed the point. Reuse is promoted by good interfaces, which don’t strictly need inheritance. With time, we learned to use interfaces, inheritance, composition, and parametric polymorphism (aka generics) in a sane way.
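
As a small illustration of that shift (hypothetical names): reuse comes from an interface and composition, with generics for the data structures, rather than from a base class.

    import java.util.List;

    // Reuse through an interface rather than an inheritance hierarchy.
    interface Notifier {
        void send(String message);
    }

    class EmailNotifier implements Notifier {
        public void send(String message) { /* send an e-mail */ }
    }

    // Composition: the processor uses any Notifier without extending it.
    // The List<String> parameter shows parametric polymorphism (generics) at work.
    class OrderProcessor {
        private final Notifier notifier;
        OrderProcessor(Notifier notifier) { this.notifier = notifier; }
        void process(List<String> orderIds) {
            orderIds.forEach(id -> notifier.send("processed " + id));
        }
    }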

With these learnings, the use of object orientation stabilized into a form mixing object-oriented data structures (think lists, data transfer objects, etc.) and object-oriented components (think of a business service, an HTTP server, or a framework). This isn’t the revolution promised in the first era, but it makes good use of objects and encapsulation to improve upon procedural programming.

Era 3: Objects and functional programming (2005-now)

Scala started exploring object-orientation and functional programming in more detail around 2005. The two paradigms might look contradictory at first (object-oriented programming is about mutability, functional programming about immutability). But they actually blend quite well, at least for the basics like list transformations (think map and flatMap). This isn’t a big surprise, given that lambdas have been there from the beginning in Smalltalk; it’s just that Java didn’t have them initially.

This exploration continued with Kotlin and with Java itself. Java finally added lambdas to the language, and many more explorations are going on in incubating projects. For instance, pattern matching and OO play along quite well too. Developers found an appropriate balance between immutable constructs and mutable ones.
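
As a small, hypothetical example of how the two blend (it assumes a recent Java version with sealed types and pattern matching for switch):

    // A closed hierarchy of shapes, matched exhaustively with a pattern-matching switch.
    sealed interface Shape permits Circle, Rectangle {}
    record Circle(double radius) implements Shape {}
    record Rectangle(double width, double height) implements Shape {}

    class Areas {
        static double area(Shape shape) {
            return switch (shape) {
                case Circle c    -> Math.PI * c.radius() * c.radius();
                case Rectangle r -> r.width() * r.height();
            };
        }
    }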

What we have now is what we can call “FP in the small, OO in the large”. Objects shine at encapsulating whole components or services. The object-oriented data structures used internally don’t necessarily need to be mutable, though. They can be transformed and manipulated using idioms from functional programming.
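
A short sketch of the idea in recent Java, with illustrative names: the service is an object encapsulating a component, while the data flowing through it is transformed with an immutable, functional-style pipeline.

    import java.util.List;

    // Immutable data structure (a record).
    record Invoice(String customer, boolean overdue) {}

    // "OO in the large": the service is an object encapsulating a component.
    class InvoiceService {
        // "FP in the small": the data is transformed with filter/map instead of mutation.
        List<String> overdueCustomers(List<Invoice> invoices) {
            return invoices.stream()
                    .filter(Invoice::overdue)
                    .map(Invoice::customer)
                    .distinct()
                    .toList();
        }
    }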

That’s, I think, where we stand now, and where we will stay for a few more years until we’ve fully explored this space.

More

Software Architecture

Metaphors in Software Engineering

One metaphor frequently used in the field of software development is the metaphor of software architecture. The architecture of a software system consists, like the architecture of a building, of the main structures of the system.

For a software system, the term “structure” can mean structures that are logical or conceptual. They don’t necessarily match tangible system boundaries. But the term “structure” does mean that the metaphor is biased towards expressing the static aspects of the software system.

Unlike buildings, software systems also have dynamic aspects. Information flows in a software system, and systems communicate with each other. Therefore, other metaphors can be useful to explain the nature of software systems.

Here are a few that I find interesting.

A city

As said, the architecture metaphor is limited in that it focuses too much on the statics. The city metaphor is better in this regard, since it simultaneously evokes static structures (roads, bridges, buildings) and dynamic aspects (traffic flow, people living in the city). Good city planning deals with both. The metaphor can be used for a software system, but also for collections of software systems.

Enterprise architecture is the field of IT that addresses IT strategy at the enterprise level. The city metaphor is a good one for enterprise architecture. Changes of IT strategy (for instance, moving to the cloud) impact many systems and take years to achieve. They significantly and durably change the way software systems are built for the enterprise. If Haussmann’s renovations gave a new face to Paris, moving to the cloud will give a new face to the IT of your enterprise.

A garden

The architecture metaphor is also limited in that it conveys the impression that software is built once and then never changes. That may be true for a building, but it isn’t for software systems. According to the laws of software evolution, a software system must constantly be maintained and adapted to the needs of its users, or it will become useless. As software systems are developed and grow, they tend to accumulate inconsistencies that must be actively removed. This is much like a garden, which must be constantly maintained and weeded.

It’s possible to convey something similar with the architecture metaphor too, since buildings suffer wear and tear. We sometimes speak of architecture erosion to denote the degrading quality of the architecture. By the way, buildings do change over time, sometimes quite significantly.

A book

Software is expressed using programming languages, and its source code consists of text. A software system can thus be compared to a book, albeit a very special one. You can’t read it linearly and everything is interlinked. But there is a sense of style in a given code base, and code can be more or less elegant. There is something artful to programming. Given that developers spend a lot more time reading code than writing code, taking care of software as text makes sense. With development approaches like literate programming, developers are supposed to write the source code like a story to explain their thoughts. It didn’t catch on, but it’s still worth a look.

A living organism

A running software system can also be compared to a living organism: it needs energy to run and do something useful. In some way, functions of the runtime, like memory management or thread scheduling, can be seen as a form of metabolism. Interestingly, some software systems like blockchains are explicitly designed to have an inefficient metabolism and consume large amounts of energy. A running software system also has a health, which indicates how well the system works. Millions of things can go wrong at run time, degrading its health and behavior. For instance, a memory leak will over time degrade the performance of the system until it simply dies. Some components of a software system have multiple instances at run time, so a failure of one instance doesn’t break the whole system, just like we can live with one kidney. A running software system can be compromised by hostile inputs, the equivalent of a pathogen. The immune system of a running software system consists of mechanisms like SQL sanitization, managed memory, safe pointers, etc., which aim at making software more robust. Usually software systems do not reproduce, though. Except for software viruses.

An asset

IT has long been seen as a cost center, detached from the business units that are profit centers. With digitalization, this perception is changing. Software is an enabler for the business and goes hand in hand with it. It is an asset and generates value. But with software, more code doesn’t mean more returns. More code means more maintenance, and only some features of the system might actually deliver value.

There are of course more metaphors. Just have a look at the links below. The city, the garden, and the book metaphors are somewhat popular. The metaphor of the living organism is surprisingly uncommon. The asset metaphor isn’t really a metaphor, more like a mindset. The architecture metaphor is sometimes critiqued, but if we assume that software development is an engineering discipline, it’s the only metaphor that resonates with engineers. So it’s unlikely to change.

More

Software Architecture

Chasing the Perfect Technology

The goal of pretty much any framework/platform that you use — from a PaaS offering to an application server and everything in between — is to make you more productive by taking care of some technical complexity for you: “Focus on the business logic, not the technology”.

Frameworks and platforms speed up development so that you can ship faster. And it’s true that you can ship faster: with current technologies, you can build an internet-scale service, highly available, able to handle millions of transactions per second, in a few months. That would have been unimaginable a decade ago.

The peak of productivity is achieved when you master your stack completely. You can then spend significant time working on business features with little friction around the technology itself.

Sadly, the peak of productivity is rarely reached.

One of the reasons is that developers get bored too early. Once the technical groundwork is in place and you just have to use it, it becomes boring. It’s fun to set up a whole analytics pipeline to solve that first analytics problem. Using the exact same pipeline to solve another problem? Boring.

Go ask a developer to use your existing infrastructure and stack as is and simply implement new features. I bet they will be lukewarm if they don’t see any technological problem to solve, or at least some technology to learn.

I speak from experience. The project I’m working on is an application for which a dedicated platform was built. This platform provides all sorts of things to write applications: messaging, message processing, fault tolerance, configuration management, job scheduling. You can reuse these building blocks to design new features. As long as a feature requires a new combination of building blocks, it’s interesting. But once it feels like using the same pattern every time, it becomes boring, even though that’s actually the moment you’re the most productive.

What motivates developers is leveraging technologies to solve a problem. They are interested in figuring out how to use technology for the problem at hand, not in actually having more time to write business logic. Engineers studied computer science because they like technology more than other business domains.

Technology platforms and frameworks – app servers, cloud, data pipelines, web frameworks, etc. – are so amazingly complex that you need to solve several problems with them before you feel like you master them. And even if you master the technologies individually, the combination of technologies might pose new challenges. At the same time, technology changes very fast. This is another reason why we rarely reach peak productivity: technologies change before we truly master them. Technology evolves fast and we’re always playing catch-up.

A VM, for instance, is way easier to deal with than physical hardware. Using VMs definitely improves productivity. But as soon as you have VMs you want to become elastic, and for this you need a whole new set of tools to learn and master. Progress in technology takes the form of layers that pile up. When you’ve barely mastered the first layer, the second one already arrives. These new layers, once mastered, enable new jumps in productivity though.

Not reaching peak productivity isn’t in itself a problem, since productivity grows nevertheless. Curiosity is what makes us push technology. What’s interesting is to realize that productivity and curiosity are actually at odds. It’s because we are curious that we never truly master our technologies and don’t reach peak productivity. But it’s also because we are curious that productivity always increases in the long term.

More

In fact, we anticipate that there will soon be a whole generation of developers who have never touched a server and only write business logic.

Technology

The Inevitability of Superintelligence

If we assume that the brain is a kind of computer, artificial intelligence is the process of reproducing its functioning. Based on this hypothesis, it’s easy to dismiss the possibility of above-human intelligence by arguing that we can only program what we understand, which would mean the intelligence in the machine is bounded by our own. But it’s also very easy to refute this limitation by arguing that we can encode learning processes in the machine. These learning processes would work at a scale and speed that we can’t match. The machine will beat us.

This latter argument definitely seems to hold if we look at recent achievements in deep learning. Computers achieve tasks that very much resemble some form of intelligence. Looking more carefully, it’s however questionable whether we should speak of intelligence or simply knowledge. Techniques like deep learning enable computers to learn facts based on large amounts of data. These facts might be very sophisticated, ranging from recoloring images correctly to impersonating the artistic style of a painter. But the computer isn’t intelligent, because no reasoning really happens.

This actually leads to an interesting question about intelligence. How much of intelligence is simply about predicting things based on experience? If an object falls, you predict its position in the future to catch it, based on other experiences with falling objects. If someone asks you “what’s up?”, you can predict that they expect to learn about what’s going on. With GPT-3, which works according to this principle, you can almost have a conversation. I say almost, because we also see the limits of the approach. There are some classes of questions that don’t work, like basic arithmetic.

Current artificial intelligence is able to learn, either by analysing large quantities of data (deep learning) or by simulating an environment and learning what works and what doesn’t (reinforcement learning). But we’re still far from sentient, thinking machines.

If we assume that our brain is some kind of computer performing a computation, there’s however nothing that prevents us from replicating it. With this line of thought, it’s only a matter of time until we “crack” the nature of intelligence and find the right way to express this computation. When this breakthrough will happen is unknown – maybe in a decade, maybe much later – but there’s nothing that makes it impossible. With sufficient perseverance, this breakthrough is inevitable.

Speaking in terms of computation and data, a system can become smarter in two ways. The first one is what we have now: systems that learn over time through the accumulation of data. The computation remains the same, though. A deep learning network is programmed once (by humans!) and then trained on large quantities of data to adjust its parameters. But maybe a second class of systems exists: systems that self-improve by changing their computation. Systems able to inspect and change themselves do exist and are called reflective systems. In such a system, data can be turned into computation and computation into data. The system can thus modify itself.
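
As a toy illustration of the difference (everything here is made up): in the first kind of system only the data accumulates, while in a (very limited) reflective one the rules themselves are data that the system can rewrite at run time.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.UnaryOperator;

    // Toy "reflective" system: its behaviour is stored as data (a rule table),
    // and the system can replace parts of that behaviour while running.
    class SelfAdjustingSystem {
        private final Map<String, UnaryOperator<Integer>> rules = new HashMap<>();

        SelfAdjustingSystem() {
            rules.put("step", x -> x + 1);           // initial behaviour
        }

        int run(int input) {
            return rules.get("step").apply(input);   // computation driven by the rule data
        }

        void rewriteRule() {
            rules.put("step", x -> x * 2);           // the system changes its own computation
        }
    }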

Some people believe that with artificial intelligence, we risk being outsmarted by an “explosion” of intelligence. Systems of the first class learn within the bounds of the computation that defines them – however complex this computation is. The possibility of an explosion is limited. With systems of the second class, we’re free to speculate, including about the possibility of an explosion of intelligence. Such a system could outsmart us and lead to superintelligence.

If we assume that our brain is a computation: is it self-improving or not? Children acquire novel cognitive capabilities over time, which at least gives the illusion of self-improvement. But maybe these learnings are only a very complex form of data accumulation. Also, the boundary between reflective and non-reflective systems is not black and white. A fully reflective system can change any aspect of its computation, whereas a non-reflective system processes input data according to fixed rules that never change. A system that is able to infer and define some rules for itself would fall in between both categories: the rules can change, but only for some aspects of the computation. The adaptive nature of neural networks could, in some way, be seen as a limited form of rule changing: the rules are fixed, but the “weights” given to them change over time due to feedback loops.

Learning requires data provided by an environment. We’re able to learn only because we interact with the world and other people. If we were to replicate the computation in our brain and the learning process that takes place, we would also need to simulate the environment. The computational complexity of all this is probably enormous. Maybe we can replicate the computation in our brain, but not the environment, or only limited forms of it. In that case, it’s hard to tell what kind of intelligence could be achieved.

Depending on the computation and environment that we simulate, the intelligence won’t resemble human intelligence much. The algorithm of AlphaGo learns in an environment that consists only of the rules of Go. We cannot even imagine what that world would be like. Assuming that artificial intelligence is human-like misjudges the nature of human intelligence. Intelligence is not one quantity that we can weigh based on clear criteria. Intelligence has many facets and is contextual.

For some facets, like arithmetic, machines are for sure already superintelligent.

More

Software Architecture

Great Articles on Software Engineering

Sometimes I read an article, and some idea deeply resonates with me and makes a long-lasting impression. It changes the way I approach some topic.

Fred Brooks’ essay “No Silver Bullet” was one of the very first articles I read that had this effect. The concepts of essential and accidental complexity are very powerful, deeply resonate with me, and shaped the way I see software engineering. This essay is a classic because it had the same effect on many people.

But there are many other great articles that influenced me. Let’s recap some of them:

No Silver Bullet, Fred Brooks

A software system consists of essential and accidental (or implementation) complexity. We should reduce accidental complexity as much as possible, but essential complexity will remain the dominant factor.

The Law of Leaky Abstractions, Joel Spolsky

It’s very hard to devise abstractions that completely hide the underlying complexity. Often, you will need to understand some internal details no matter what.

Simple Made Easy, Rich Hickey

This is a great talk about complexity. The key takeaway is that simplicity comes from not mixing together things that shouldn’t be mixed; it’s independent of your prior knowledge. Easiness comes from habits and convention; it depends on prior knowledge.

Choose Boring Technology, Dan McKinley

A reminder that using shiny new tools isn’t always the best option and that established, mature tools have their place if they suffice to get the job done.

There is no now, Justin Sheehy

An exploration of the way to handle time in distributed systems, where there’s no global notion of time or consistency.

Beating the Averages, Paul Graham

A classic from Paul Graham where he describes how using Lisp and macros gave his company an advantage over its competitors.

Life Beyond Distributed Transactions, Pat Helland

An article about giving up distributed transactions and designing internet-scale systems using simpler data models (e.g. key-value stores).

Everything You Know About Latency Is Wrong, Tyler Treat

In short: using averages or percentiles hides your outliers, which are an important signal for understanding the real behavior of the system.

A Note on Distributed Computing, S. Kendall et al.

An article explaining that trying to abstract remote boundaries is bound to fail.

Smalltalk: A Reflective Language, F. Rivard

A very nice explanation of Smalltalk and its reflective capabilities, showing how to adapt the language to add pre/post conditions. The reification of the stack, and the fact that the debugger is just a normal tool, are also explained.

Reuse: is the dream dead?, Kirk Knoernschild

An exploration of the use/reuse paradox: “Maximizing reuse complicates use”.

Reflections on Trusting Trust, Ken Thompson

A wicked experiment on bootstrapping.

The Log: What Every Software Engineer Should Know About Real-time Data’s Unifying Abstraction, Jay Kreps

A fantastic analysis of the distributed log as the basic building block to integrate real-time systems. So good that it was later turned into a book: I Heart Logs.

Most of these authors have written several other articles that are great as well.

Organisation

Team Structures

A main responsibility of management is to make sure that teams function well. This implies defining adequate organisational structures and finding adequate people.

The Boss Team

Sometimes the structure is easy. There’s a boss and there are employees. The boss decides how the team should function, what should be done, and who should do it, and is finally the one evaluating the employees according to his/her expectations. The employees can be involved in all these aspects, but the boss has the final authority. This model sometimes gets a bad press, because if the boss sucks, it’s hell.

Scrum Team

There are other ways to organise teams. In Scrum, for instance, the product owner decides what should be done. The development team decides which member does what, and how. How the team works is partly given by the Scrum framework itself, but the team can also adapt its way of working through retrospectives. Performance evaluation is not part of Scrum, and there’s often a line manager, not working closely with the team, in charge of it. There’s less concentration of power in a “boss”.

Self-Organising Team

Following the decentralisation of power further, we land on so-called “self-organising” teams. Given a high-level mission, the team is in charge of figuring out how to work (“working on the system, not in the system”), what to do, and who does what. Performance evaluation is still outside the team. In any group, a power structure will establish itself; in this case, it’s informal.

In any of the above models, there can be more or less specialisation among the team members. If all employees of a boss team have roughly the same skills, tasks can be distributed arbitrarily. If a high degree of specialisation exists, who does what is fixed in advance. How specialisation happens is a side effect of how the team functions. It can be actively encouraged (e.g. T-shaped skills) or discouraged. It can be formal (e.g. through job titles) or informal.

Staffing is usually outside of the power of the team, except for a boss team. But like other budget issues, it could become part of the team’s responsibilities.

Recap:

                         Boss Team   Scrum Team              Self-Organising Team
How the team works       Boss        Scrum + Team (Retro)    Team
What to do               Boss        Product Owner           Team
Who does what            Boss        Team                    Team
How to do it             Team        Team                    Team
Performance Evaluation   Boss        Line Manager            Line Manager
Staffing                 Boss        Line Manager            Line Manager

Needless to say, this isn’t an exhaustive listing of all possible structures we find in organisations. In a complex organisation you might have “what to do” actually being split between a project manager, a business analyst, and some high-level management. Or “how to do it” can be split between someone specialised as an architect and the rest of the team.

There might also be certain processes to follow, e.g. for compliance or governance reasons.

Each time you have more than one item in a cell of the table, it means that either responsibilities are diluted or some specialisation is going on. A bit of both might be justified, like a review process for some decisions or having experts in some domain. Dilution of responsibilities and specialisation can go hand in hand if the expert acts as a reviewer. But having too many items in the cells is probably a sign of organizational dysfunction.

The number of items in the cells reflects the complexity of the organization, a bit like function points reflect the complexity of a feature. I don’t know what an acceptable complexity is here. As a rough threshold, I would say there should be no more than twelve points in a column.

More

Organisation

Academia vs. Industry

A post-doc contacted me to learn more about the differences between academia and industry (in software engineering). That was an interesting conversation. I tried to summarize the main points as I see them:

Day-to-day work – In academia, you are more alone and focused on one thing for a long stretch of time. In industry, chances are that you will work either in a team, on the stories in your sprint, or outside of a team, in the role of a facilitator/coordinator, switching between many smaller tasks.

Collaboration – In academia, your main contact is your supervisor, then the peers in your group and some peers in external groups you collaborate with. In industry, your main peers are your teammates, plus all the external stakeholders you must occasionally synchronize with.

Skills – In academia, you focus on conceptual work and data analysis. The engineering part comes second (the code of my prototypes wouldn’t pass a code review). In industry, engineering comes first. Software must have high quality if it’s developed collaboratively. Conceptual work and data analysis of course also happen in industry, just not all the time (except, of course, if you’re a data scientist!). There are plenty of conceptual challenges in industry, but the engineering part will take an even more significant bulk of the time than in academia. In both cases, high attention to detail is required, just to different details.

Writing – In academia, part of your job is to sell your ideas in the form of papers. This takes significant time. In industry, writing is less important, unless you work in a role like facilitator/coordinator. Writing skills are still an asset in industry, because you will occasionally have to sell your ideas to your colleagues too, and writing can help.

Gratification – In academia, results come after many months of work on prototypes or data gathering. The work might get a few citations and resonate with a few people, but the impact is usually limited, except of course if you’re very talented and lucky enough to make a breakthrough somewhere. Research bears fruit in the long term, through the accumulation of contributions that enrich the body of knowledge. In industry, the result is more immediate. Broadly, you can work either on software products, where your users are end users, or on platforms, where your users are other teams. Stories and features are shipped regularly and users will use them. Significant impact for your company can be achieved with ambitious projects over a few years. Impact reaching beyond your company is quite rare, like a breakthrough in research.

Engagement – In academia, you pick your topics yourself, so this tends to engage you. On the other hand, if you are stuck with your research and don’t progress much, it might be boring if not depressing. In industry, you have less autonomy to do whatever you want, but in a good company everybody should have their share of challenges and stay engaged. Software products have different phases: incubating, expanding, maintaining, or decommissioning software all have their challenges. A good company takes care to match the expertise of the team to the challenge at hand so that people feel they learn something. There are also many types of software, from applications developed solo over a couple of weeks, to multi-year, multi-team systems. I like large-scale software, but some prefer smaller systems where you can iterate faster.

Growth – In academia, growing mostly means getting tenure, which requires a substantial commitment over a long time and a bit of luck. In industry, you can reach senior positions relatively quickly if you’re talented. In both cases you can switch course over time, say to reorient your research interests or the technologies/industries you focus on. Many companies provide dual ladders that let you grow on the technical side or on the management side. Nowadays there are many opportunities for leadership besides the traditional “manager” role.

Organisation

Drive

The key message of “Drive”, by Daniel H. Pink, is that people are best engaged in a task when they understand the purpose of the job, when they can develop the necessary mastery to do the job, and when they have sufficient autonomy to direct things their way.

That’s a very simple framework — purpose, mastery, autonomy — and I very much like it.

The problem with the book is that besides this simple message, there’s little more. The book presents scientific studies to back up the ideas and tries to trace their evolution. It’s entertaining but very anecdotal. The second half of the book has a “toolkit” to reflect on motivation, summaries of the book, a glossary, and even conversation starters. In themselves, these are creative ideas. But it did feel a bit like milking the content.

One idea that is somewhat developed in the book and that goes beyond the main framework is the distinction between intrinsic and extrinsic motivation. Intrinsic motivation is way stronger. If the activity, like playing a game, is the reward itself, there’s no need for an external reward. Or maybe the activity is not the reward itself, but we understand the reward ourselves, without external input. We know how to kill intrinsic motivation (pay somebody to do what they would do for free), but how to foster it is very challenging.

The question of how to develop engagement is actually a very interesting one. It’s not only interesting in the context of the workplace, but also in education or social communities.

This book has a nice message. I just wish it had more depth.

Technology

Mastering Technology

Things move fast in the IT industry. Half of the technologies I use today didn’t exist ten years ago. There’s a constant push to switch to new technologies. But when it comes to technologies, I’m kind of conservative. I like proven technologies.

What makes me conservative is that mastering a technology takes a lot longer than we think.

Most advocates of new technologies massively underestimate the learning curve. Sure, you get something working quickly. But truly understanding a new technology takes years.

Take object-oriented programming. On the surface it’s easy to grasp quickly and become productive. But it took the industry something like 20 years to figure out that inheritance isn’t such a great idea. The result is that early object-oriented systems overused inheritance hoping it would favor reuse, whereas it just made maintenance hard.

The same holds, for instance, for the idea of distributed objects. It’s easy to grasp and appealing. Yet it took the industry decades to realize that abstracting remote boundaries is a flawed idea. We instead need to explicitly embrace and expose asynchronous APIs.

Another one of my favorite easy-to-grasp-but-hard-to-master technologies is object-relational mappers (e.g. Hibernate). Ten years of experience and I’m still struggling with them as soon as the mapping isn’t entirely trivial.

Want another example? XA transactions. Updating a database row and sending a message seems to be the poster child for XA transactions. Well, it turns out that this simple scenario is already problematic. When it didn’t work, I learned that I was experiencing the classic “XA 2-PC race condition”.

There are lots of new technologies being developed right now, like distributed log systems, container schedulers, and web frameworks. I perfectly understand why they are being developed and what problems they supposedly solve.

But don’t try to convince me that they are silver bullets. Every technology choice is a trade-off, because you can never fully abstract the underlying complexity away. There’s a price somewhere. Things will break in unexpected ways and nobody will know why. Performance will be hard to figure out. Subtle misuse of the technology will only be detected later and be hard to correct. It takes time to figure these things out.

In the end, it’s all about risk management. If the technology might provide a strategic advantage, we can talk; the investment might be worth it. But if it’s not even strategic, I would seriously question whether the risk of using a new technology is worth it.

More