(or the Attraction to Complexity) There is a very common tendency in computer science: complicating solutions. This complication is often referred to as incidental/accidental complexity, i.e. anything we coders/designers do to make a simple matter more complex. Sometimes it is called over-engineering, and it stems from the best intentions (a small sketch contrasting a simple and an over-engineered solution follows this list):
Attraction to Complexity: there’s often a misconception that more complex solutions are inherently better or more sophisticated. This can lead to choosing complicated approaches over simpler, more effective ones.
Technological Enthusiasm: developers might be eager to try out new technologies, patterns, or architectures. While innovation is important, using new tech for its own sake can lead to unnecessary complexity.
Anticipating Future Needs: developers may try to build solutions that are overly flexible to accommodate potential future requirements. This often leads to complex designs that are not needed for the current scope of the project.
Lack of Experience or Misjudgment: less experienced developers might not yet have the insight to choose the simplest effective solution, while even seasoned developers can sometimes overestimate what’s necessary for a project.
Avoiding Refactoring: in an attempt to avoid refactoring in the future, developers might add layers of abstraction or additional features they think might be needed later, resulting in over-engineered solutions.
Miscommunication or Lack of Clear Requirements: without clear requirements or effective communication within a team, developers might make assumptions about what’s needed, leading to solutions that are more complex than necessary.
Premature Optimization: trying to optimize every aspect of a solution from the beginning can lead to complexity. The adage “premature optimization is the root of all evil” highlights the pitfalls of optimizing before it’s clear that performance is an issue.
Unclear Problem Definition: not fully understanding the problem that needs to be solved can result in solutions that are more complicated than needed. A clear problem definition is essential for a simple and effective solution.
Personal Preference or Style: sometimes, the preference for certain coding styles, architectures, or patterns can lead to more complex solutions, even if simpler alternatives would suffice.
Fear of Under-Engineering: there can be a fear of delivering a solution that appears under-engineered or too simplistic, leading to adding unnecessary features or layers of abstraction.
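To make this concrete, here is a small, purely hypothetical Python sketch (the scenario and names are mine, not from any of the sources mentioned here): the same requirement, "apply a 10% discount to an order total", solved simply and then over-engineered with speculative abstraction "for future needs" that the current scope does not ask for.

```python
from abc import ABC, abstractmethod

# Requirement: apply a 10% discount to an order total.

# Simple solution: does exactly what is asked, easy to read and test.
def discounted_total(total: float) -> float:
    return total * 0.90


# Over-engineered solution: a strategy hierarchy plus a factory/registry,
# added "just in case" other discount types show up one day. None of this
# is needed for the current requirement.
class DiscountStrategy(ABC):
    @abstractmethod
    def apply(self, total: float) -> float: ...


class PercentageDiscount(DiscountStrategy):
    def __init__(self, rate: float) -> None:
        self.rate = rate

    def apply(self, total: float) -> float:
        return total * (1 - self.rate)


class DiscountStrategyFactory:
    _registry = {"percentage": PercentageDiscount}

    @classmethod
    def create(cls, kind: str, **kwargs) -> DiscountStrategy:
        return cls._registry[kind](**kwargs)


def discounted_total_enterprise(total: float) -> float:
    strategy = DiscountStrategyFactory.create("percentage", rate=0.10)
    return strategy.apply(total)


if __name__ == "__main__":
    print(discounted_total(100.0))             # 90.0
    print(discounted_total_enterprise(100.0))  # 90.0, same result, 10x the code
```

Both functions produce the same result; the second one just costs more to read, test and maintain, which is the whole point of the list above.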
We all know the Pascal language (fewer of us know about Algol or the Oberon OS), and we might have heard that
“software is getting slower more rapidly than hardware is becoming faster”
or that
“What Intel giveth, Microsoft taketh away”
I prefer this quote:
“Reducing complexity and size must be the goal in every step—in system specification, design, and in detailed programming. A programmer’s competence should be judged by the ability to find simple solutions, certainly not by productivity measured in number of lines ejected per day. Prolific programmers contribute to certain disaster.”
You can find them all in “A Plea for Lean Software”. If you write code, please read this 1995 paper by Niklaus Wirth.
Agile has failed? I don’t think so: https://medium.com/developer-rants/agile-has-failed-officially-8136b0522c49 Anything applied as a religion is doomed to fail, and the same goes for Agile. You can’t take any methodology and apply it “as is” to a company, project or dev team; you need to adapt it, not make the company/team adapt to it. What I hope is that we don’t throw away the good ideas of Agile (iterations/continuous delivery, attention to technical excellence, simplicity (avoiding over-engineering), just to mention a few).
Reasons why you burn out a software engineer: https://engineercodex.substack.com/p/how-to-burnout-a-software-engineer My list, in order of priority: 1. Not shipping code (there is nothing worse than working for months on something that is not going to prod or is lingering in the deploy queue). 2. Not trusting your engineers, telling them in fine detail how to do things. 3. Lack of recognition (recognition, not reward: there is a difference).
From a friend’s Substack, Chris Hedges talking about war: “But these words give me a balm to my grief, a momentary solace, a little understanding, as I stumble forward into the void.”
Yoshua Bengio (CA) and John Bunzl (UK), moderated by Nico A. Heller, on the strengths and limitations of current artificial intelligence, why it may become a dangerous instrument of disinformation, why superintelligent AI may be closer (years) than most previously expected (decades) and how this could lead to catastrophic outcomes – from AI-driven wars to the extreme risk of extinction. https://www.youtube.com/watch?v=07c1ZRUQOeY. Notes and general concepts from Bengio’s talk:
AI currently perceives the world and makes sense of it through images, sound and text, generating content in all three areas. Current systems are not at the level of human intelligence: they master what psychologists call System 1 intelligence (intuition), reacting to any question/context with little or no reasoning. Arithmetic is an example: simple operations with numbers are fine, but on more complex ones (the kind we would need paper and pencil for) they make mistakes. System 2 intelligence is explicit reasoning: you can plan and imagine. Example: driving on left-hand roads after having driven on right-hand roads all your life; we take about an hour to adapt because we use reasoning rather than intuition. AI will get there: how long it will take to bridge the gap between System 1 and System 2, nobody knows; it could be close or take 10 years. Training data currently needs to be filtered to remove content that is insulting, homophobic, racist, dangerous or otherwise inadequate; AI will get there too, removing the need to prepare training material. Machines that are as intelligent as we are will inevitably be more intelligent than us, because they are machines: immortal, they don’t sleep, and they can exchange information at high speed as if they were one huge brain. Humans have culture to try to simulate this.
Pure intellectual power is completely detached from the goal; the goal will make the difference between a “good” AI and an “evil” one.
Humans cannot turn off compassion (or at least most people can’t), as it has been hardwired into us by evolution; machines can.
Stating a goal in a precise way is probably impossible, so even an AI with good goals might behave in evil ways.
Goals are not fully expressible; we can only give partial specifications.
We’ll get to the point where the game will be: whose AI is bigger/better/faster?
The most important thing to reduce the probability of bad behavior connected to AI is to limit the actors, materials and information, i.e. proliferation (as we did for atomic bombs). Regulatory frameworks are necessary.
On crypto and thieves: “In February 2022, a hacker stole 120,000 wrapped Ethereum from Wormhole, a cross-blockchain bridge” https://newsletter.mollywhite.net/p/oasis-defi-centralization – subscribe to Molly White’s newsletter for unbiased crypto news.
This last view seems to be shared by many (https://signalvnoise.com/posts/591-brainstorm-the-software-garden ; Jeff Atwood likes it too, https://blog.codinghorror.com/tending-your-software-garden/, and I feel close to him on a great many things), and I tend to see it as closer to what software development is today: a continuously changing (growing) artifact, not totally manageable, requiring constant maintenance (here the building metaphor fails a bit), designed and redesigned over time like a garden, subject to seasons (a “lots of new requirements” season, a “robustness and reliability” season).