[Header image: the interior of a vehicle with orange and green seven-segment displays showing the date and time, surrounded by metal plates, switches, and cables – a time travel console.]
A personal journey through four decades of software development

Early influences and BASIC

At the end of the 1980s, I didn't have my own computer yet, but I had already developed a keen interest in programming. This period had a decisive influence on my later path into software development. Robert C. Martin describes this universal awakening experience very aptly in one of his talks:

There came a moment in your life when maybe you were in a store somewhere and you walked up to a Commodore 64 [...] and you typed a little BASIC program that would just print your name infinitely and you would go "YES! I am a god!" And you wanted to be a programmer.

I had exactly that kind of experience around the time I saw the film "WarGames" for the first time. There was probably a Commodore 64 in a Kaufhof department store, but it could also have been an 8-bit Atari machine. In any case, I typed in my first lines of BASIC.

Since I didn't have my own computer, I had to rely on the city library, which had a few books on computers in general and programming in particular. I devoured these books and began programming on paper – a practice that seems completely unusual today but was quite common back then, and one I hope has since become rare. During my computer science studies in the late 1990s, I unfortunately still had to program on paper in some exams. Once, I even had to retake an exam because of a missing semicolon (or one that was not recognisable as such – syntax error!).

BASIC was the dominant programming language at the time and the first encounter with programming for millions of people. The Commodore 64 had BASIC stored directly in its ROM as a built-in programming language. Programming could begin immediately after switching on the computer.

Amiga, Demoscene, ARexx

In 1990, I got my first computer, an Amiga 500, which was a huge leap forward in technical terms. With 512 KB of RAM, a Motorola 68000 processor running at roughly 7 MHz, three custom chips (Agnus, Denise, and Paula) delivering high-resolution graphics (up to 640 × 512 pixels), up to 4096 colours simultaneously on screen, and digital four-channel sound, it was far ahead of its time.

On the Amiga, I started programming with AmigaBASIC, a BASIC implementation developed by Microsoft specifically for the Amiga. In addition to the usual BASIC language constructs, AmigaBASIC also offered an easy-to-use programming interface for the Amiga's unique graphics and sound capabilities. For example, the OBJECT commands made it easy to create and animate moving objects – so-called sprites and bobs.

As a child of that era, I naturally came into contact with cracktros and later with real demoscene productions. The demoscene is an international computer art subculture whose members focus on producing "demos": standalone, sometimes extremely small computer programs that generate audiovisual presentations.

The roots of this scene lie with the software crackers of the early 1980s, who removed copy protection mechanisms and added intro screens (cracktros) to leave their signature. The competition for the best visual presentation gave rise to an international community that exists independently of the gaming and software-sharing scene. It is still active today and has been recognised as intangible cultural heritage in several countries under the UNESCO convention.

I wanted to be able to program such impressive productions myself. So I started teaching myself C first and then Assembly. Assembly was the language of choice for demoscene productions because it offered maximum control over the hardware and thus enabled optimal performance.

At the time, my friends were all 5 to 10 years older than me. Some of them were very active in the Amiga scene and ran mailboxes (bulletin board systems, BBS), for example. We attended demoscene events together, where I was able to exchange ideas with others and experience the lively culture of that time first-hand.

However, my programming only really took off on my second computer, an Amiga 1200 running the new AmigaOS 3 operating system, which included the ARexx programming language. In terms of closeness to the hardware, ARexx was a step backwards compared to Assembly, but it opened up completely new possibilities.

ARexx was an implementation of the REXX language developed specifically for the Amiga. What was special about it was that it made it very easy to automate and extend software. Thanks to the multitasking-capable AmigaOS, every running program could act as a "host" for external commands, making ARexx the ideal macro language.

During this time, I wrote several customisations and extensions for mailbox software such as AmiExpress and FAME (Final Amiga Mailbox Engine). These programs formed the backbone of online communication at the time, years before the internet became accessible to private individuals. Of course, my first time "on the internet" was via modem and with my Amiga 1200.

These practical projects taught me important lessons about software development: how to extend existing software, how to collaborate with others, and the importance of clean, maintainable code. Even on the Amiga, I used version control, including for projects I worked on alone. Being shown RCS, the predecessor of CVS, for the first time was another revelation.

The path from paper-based programming to the first steps in BASIC on other people's computers to my own computers and finally to serious projects reflects the typical development of many of my generation. We did not learn programming in structured courses, but on our own initiative, driven by curiosity and the desire to understand and master the machines.

My experience with various programming languages, from BASIC via C and Assembly to ARexx, taught me early on how important it is to choose the right tool for the job. BASIC was suitable for beginners, Assembly for maximum performance, C for system-oriented programming, and ARexx for automation and integration. Each language had its place and its justification.

These early years laid the foundation for my lifelong fascination with programming and shaped my understanding that good software should not only work, but also be elegant, extensible and maintainable.

University, OOP, PHP, PHPUnit

Even though I had already programmed in C++ on my Amiga 1200 in the mid-1990s, I only really understood the concept of object-oriented programming when I studied computer science at university. And not in a lecture: we did have lectures, but they taught neither object-oriented thinking nor object-oriented programming. We basically programmed procedurally in Java. I finally learned what object-oriented programming really means, and how to program in Java, from and with other students.

Nowadays, I wouldn't want to develop software without object orientation, although I like to borrow concepts from functional programming. The immutability of data structures, for example, has proven to be an indispensable tool: immutability and the use of functions without side effects are fundamental concepts that also have their place in the object-oriented world.
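As an illustration, here is a minimal sketch of an immutable value object in modern PHP (the Money class and its methods are hypothetical examples; readonly classes require PHP 8.2 or later):

    <?php declare(strict_types=1);

    // Hypothetical immutable value object: all properties are readonly,
    // so "changing" a value means constructing a new object.
    final readonly class Money
    {
        public function __construct(
            public int $amount,
            public string $currency,
        ) {}

        // Free of side effects: returns a new instance instead of mutating $this
        public function add(self $other): self
        {
            if ($other->currency !== $this->currency) {
                throw new InvalidArgumentException('Currency mismatch');
            }

            return new self($this->amount + $other->amount, $this->currency);
        }
    }

Because such objects never change after construction, they can be passed around freely without defensive copying.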

Shortly after I began studying computer science in 1998 and purchased my first x86-based PC, I was contacted by a graphic designer with whom I had previously collaborated in the Amiga scene. He was now designing websites and asked me if I could implement a solution for one of his clients using Perl or PHP. Since I had no prior knowledge of either language, I decided to try both.

I tried Perl first, but gave up after a few hours – I can't remember the exact reason why. However, within a weekend, I was not only able to learn enough PHP to meet the client's requirements, but I was also able to successfully complete the project. I am convinced that this says more about Perl and PHP than it does about me.

Later, I subscribed first to the German and then to the English PHP mailing list. Asking questions quickly turned into answering questions. It wasn't long before I helped translate the PHP manual from English into German. Eventually, I started working directly on PHP, sometimes to fix bugs or implement small features but mostly to discuss the concept and design of PHP 5, which was being worked on at the time.

My journey with PHPUnit began in 2000 when I first encountered unit testing while working with JUnit at university. The idea for PHPUnit arose from a discussion with a professor who doubted that a tool like JUnit could be implemented for PHP. This challenge was the catalyst for what would become one of the most important tools in the PHP ecosystem.

In November 2001, I finally dared to share the result of over a year's work with the world on cvs.php.net as part of the PEAR project. This first version was modest, but it laid the foundation for automated testing of PHP code. From the outset, PHPUnit was based on the xUnit architecture, which began with SUnit and became popular with JUnit.
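The xUnit architecture is still clearly recognisable in PHPUnit today: a test case is a class, and its test methods make assertions about the code under test. A minimal sketch (the Calculator class is a hypothetical example):

    <?php declare(strict_types=1);

    use PHPUnit\Framework\TestCase;

    // Hypothetical code under test
    final class Calculator
    {
        public function add(int $a, int $b): int
        {
            return $a + $b;
        }
    }

    final class CalculatorTest extends TestCase
    {
        // Test methods assert the expected behaviour of the code under test
        public function testAddingTwoIntegers(): void
        {
            $this->assertSame(4, (new Calculator)->add(2, 2));
        }
    }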

PHPUnit later broke away from PEAR and has since been available as a standalone project under the BSD Licence. This independence allowed me to develop PHPUnit faster and better respond to the needs of the PHP community. PHPUnit quickly established itself as the standard testing framework for PHP and was rapidly adopted by large projects such as CakePHP, Drupal, Symfony, WordPress, and the Zend Framework. This broad acceptance confirms the necessity and value of automated testing in PHP development.

When I started with PHP, deploying a new software version consisted of uploading all (changed) PHP files to the server via FTP. It was also not uncommon to change files directly on the server. Over time, the deployment process became more professionalised through the building of packages, their distribution and installation, and the activation of new software versions. Dividing the process into these separate steps made deployment more traceable, robust, and flexible. Nowadays, of course, packages are often no longer built; container images are used directly instead.

Modern continuous integration pipelines have taken this development even further. These automated workflows can not only run tests, but also cover the entire software development lifecycle: from code push to deployment in the production environment.
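As a sketch of such a pipeline, here is a deliberately minimal workflow in GitHub Actions syntax (the provider, file name, and steps are illustrative assumptions, not a recommendation):

    # .github/workflows/ci.yml – a hypothetical, minimal pipeline
    name: CI

    on: [push, pull_request]

    jobs:
      tests:
        runs-on: ubuntu-latest
        steps:
          # Check out the repository and set up the PHP runtime
          - uses: actions/checkout@v4
          - uses: shivammathur/setup-php@v2
            with:
              php-version: '8.3'

          # Install dependencies and run the test suite
          - run: composer install --no-interaction
          - run: ./vendor/bin/phpunit

A real pipeline would add further stages – static analysis, building a container image, deployment – behind the same trigger.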

At the beginning of my PHP career, it was not common to test software automatically. I hope I am not exaggerating when I say that I am responsible for automated testing becoming the norm in the PHP world today. In most cases, this is done with PHPUnit.

Over the years, these dynamic tests have been supplemented by static tests using tools such as Phan, Psalm and, more recently, PHPStan. A modern PHP project without static and dynamic tests and a corresponding continuous integration pipeline is unthinkable today.
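To give a feel for what static analysis adds, here is a hypothetical function that a tool such as PHPStan rejects without ever executing it:

    <?php declare(strict_types=1);

    // Both findings below are reported statically, before any test runs:
    function firstKey(array $items): int
    {
        foreach ($items as $key => $value) {
            return $key;   // array keys may be int|string – possible type error
        }

        return null;       // null is not an int – definite type error
    }

A dynamic test would only catch these defects if it happened to exercise the offending path; static analysis flags them on every run of the pipeline.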

How Software Development Has Changed

Over time, the way software is developed has changed fundamentally. This applies not only to technology, but also and especially to processes and architecture. First, test-driven development became important, then domain-driven design, and now event-driven systems.

When the Amiga 500 was launched in 1987, anyone who was interested in it had the opportunity to fully understand the architecture of this computer, including the CPU and custom chips. The hardware was simpler. I don't think that applies to modern computers, where we have separate CPUs in our CPUs and sometimes even separate CPUs in other hardware components. Well, it's not for nothing that we say that in software development, everything is recursive. So why not hardware, too?

In the past, it wasn't just the hardware that was simpler. The way we developed software was also simpler. Today, we face completely different challenges. Agile methods are good, and we talk to our customers. But today, we do so much more than just programming. We take on almost all other tasks, which makes sense because it brings us closer to the business and helps us deliver the right things faster. But it also puts us as developers under enormous pressure to take care of everything that used to be done by other specialists.

The development from the simple beginnings on the Amiga 500 to complex, modern software architectures reflects the dramatic change in our industry. What once began with simple FTP deployments has evolved into highly complex CI/CD pipelines with automated testing, container orchestration, and event-driven architectures.

The challenge for today's developers lies not only in the technical complexity, but also in the fact that they have to work as generalists in an environment that used to be dominated by specialists. While university education continues to teach fundamental concepts, practical knowledge of modern technologies often has to be acquired independently or supplemented by practice-oriented training formats.

The future of software development will most likely become even more complex, but the basic principles – clean code, automated testing, and good architecture – will remain the same. Tools such as PHPUnit have shown how a single tool can change an entire industry and improve the quality of software in the long term.

The Impact of AI

You have just completed your university education or training and are faced with a problem: you have theoretical knowledge, but you lack practical experience. This is completely normal. In the past, your path was clearly mapped out: you were given simple tasks, perhaps needing a week to do something that an expert could do in half a day, gaining valuable experience and eventually becoming an expert yourself.

But artificial intelligence is fundamentally changing this tried-and-tested learning curve. What used to serve as a springboard for starting a career, the time-consuming but instructive basic tasks, is now increasingly being automated. Tasks performed by junior developers are increasingly being handled by AI. This development raises an existential question: Where will experience and expertise come from in the future if we automate the very tasks through which beginners acquire them?

In 1999, Martin Fowler laid the foundation for a revolution in software development with his groundbreaking book "Refactoring: Improving the Design of Existing Code". His systematic cataloguing of code improvements defined a controlled approach to improving existing software without changing its external behaviour.

However, the real transformation only came with the industrial implementation of these ideas. Shortly afterwards, JetBrains released IntelliJ IDEA, one of the first integrated development environments with comprehensive automated refactoring capabilities. This IDE made Fowler's concepts accessible to everyone: what was previously manual and error-prone became a simple click with deterministic, reproducible results.

Today's generation of programming tools takes a fundamentally different approach. AI assistants aim to generate code directly and assist humans in creating new features. These tools use large language models trained on millions upon millions of code repositories to provide context-aware suggestions.

However, these AI-powered approaches pose fundamental challenges that did not exist with traditional refactoring tools. AI assistants are inherently non-deterministic: the same prompt can lead to different code outputs. This unpredictability stands in direct contrast to the precise and reproducible transformations of refactoring tools.

While refactoring operations represent traceable changes with clear transformation rules, AI-generated code blocks often result in extensive changes of unknown origin. This makes it much more difficult to track changes and their rationale.

In this context, test-driven development (TDD) takes on a new, central importance: it acts as a stabiliser in a world increasingly dominated by AI-generated code. Tests specify the desired behaviour of the software. Written by humans, they pin down that behaviour deterministically, regardless of how the implementation was produced.
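In concrete terms, that might look like this hypothetical sketch: the test is written first, by a human; the implementation – typed by hand or generated by an assistant – must then satisfy this fixed specification:

    <?php declare(strict_types=1);

    use PHPUnit\Framework\TestCase;

    // Written first, by a human: the executable specification
    final class SlugifierTest extends TestCase
    {
        public function testLowercasesAndReplacesSpacesWithDashes(): void
        {
            $this->assertSame(
                'hello-world',
                (new Slugifier)->slugify('Hello World'),
            );
        }
    }

    // Written second, by hand or by an AI assistant, until the test passes
    final class Slugifier
    {
        public function slugify(string $text): string
        {
            return strtolower(str_replace(' ', '-', $text));
        }
    }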

The different assessments of these technological developments can be explained, among other things, by generation-related acceptance patterns. Douglas Adams once aptly put it this way:

Anything that is in the world when you're born is normal. Anything invented between when you're fifteen and thirty-five is exciting and innovative. Anything invented after you're thirty-five is against the natural order of things.

This observation explains why experienced developers may be more sceptical about AI assistants than their younger colleagues. Their scepticism is not unfounded, but is based on their experience with proven, deterministic tools and their understanding of the risks of non-deterministic systems.

Artificial intelligence will likely have a lasting impact on software development and is therefore a permanent factor. The challenge is to find a way for both approaches to coexist productively and leverage their respective strengths. AI assistants can support humans in rapid prototyping and the automation of repetitive tasks, while established practices such as TDD provide the necessary stability and quality assurance.

However, over-reliance on AI assistants can lead to a creeping "dumbing down" of developers. Those who rely too heavily on automatically generated code lose basic programming skills. This dependency is intentional: the more developers rely on AI assistants, the more difficult it becomes for them to work without these tools.

The current low prices for AI assistants belie the true long-term costs. These supposedly favourable conditions follow a proven market strategy: first, low prices are offered to create dependencies. Once users can no longer work without the services – or no longer want to, or are no longer allowed to – prices are increased.

The rapid increase in AI usage is leading to a dramatic rise in global electricity consumption. This massive demand for energy is exacerbating the climate crisis and making it even more difficult to achieve climate targets.

The AI market is dominated by a handful of US companies, and this dependency fundamentally jeopardises Europe's digital sovereignty. The US CLOUD Act allows US authorities to access data stored by American technology companies, even if that data is located outside the US. This turns Europe into a digital colony of the US, with no control over its own data and infrastructure.

Most large AI models have also been trained on copyrighted code without the rights holders' consent. The Free Software Foundation has described GitHub Copilot as "unacceptable and unjust", and there are already several class action lawsuits against AI providers. This legal uncertainty shows that the foundations of current AI development are on shaky ground.

The current low prices for AI assistants are a trap. They create dependencies that will later have to be paid for dearly, not only financially, but also with the loss of digital sovereignty. Europe must not allow a handful of US corporations to determine the future of artificial intelligence.

The ecological costs of centralised AI infrastructures, the creeping loss of developer skills and the questionable legal basis of current AI development show that this system is not sustainable. Europe still has the opportunity to build an alternative. But with every day that dependence on US providers continues to grow, this opportunity is dwindling.

Decentralised and local AI systems are not only a technical alternative, but also a matter of political and economic common sense. Only in this way can Europe determine its own digital future and free itself from the stranglehold of US technology giants before they fully exploit their market power.