28 Years of Web Development

Mistakes were made. Lessons learned.

Published on May 31, 2025.

Hop in your DeLorean and rewind the clock to the summer of 1997. I had just landed my first job as a web developer. HTML and images, nothing more. A few years pass.

As the turn of the century approaches, we’re using JavaScript and ColdFusion to make websites dynamic! But there were no automated tests and no “cloud” to deploy to. We didn’t have version control, and the only collaboration among developers was playing hacky sack in the parking lot.

A lot has changed.

“the job of the software engineer is constantly being reinvented…”

- Kevin Scott (Microsoft CTO) via Tiff in Tech

Today we work remotely thanks to tools like Zoom, Slack, GitHub, and Shortcut. Our team uses test-driven development (TDD), distributed version control, code reviews, and continuous integration. We rely on open source and deploy Docker containers to the cloud.

When deciding what to build, we hop on a video call, share our screens, and collaborate. We’ll reach for Claude Sonnet 4 with questions on how to best write or optimize code, to avoid a tedious task, or just out of curiosity. When pushing code, CodeRabbit (AI) will be the first to request changes, occasionally catching bugs that human reviewers may not spot.

The tools and technologies have changed over time, but not all changes are equal. At times we replaced one tool with another, yet we still accomplished similar results in similar ways. At other times, our way of working fundamentally shifted:

  • Improving collaboration among developers.
  • Increasing confidence in our changes through automated tests.
  • Spending less time chasing down trivial bugs thanks to continuous integration.

At times, Generative AI is like the former, substituting one tool for another while providing responses more tailored to our specific context. AI can also be a collaborator that’s more readily available than our colleagues, allowing them to focus on their own tasks with fewer interruptions. The more we adopt AI, including innovations like the Model Context Protocol, the more it fundamentally shifts how we work as software developers.

Even as we adopt new tools and technologies, there are good practices that remain. Those practices were born from failure – making mistakes and learning lessons over many years.

Mistakes were made

“The only real mistake is one from which we learn nothing.” - Henry Ford

I thought it would be fun to share some of my biggest fails from the past 28 years. These stories illustrate how today’s tools, when used effectively, can prevent many past mistakes – but the tools alone are insufficient. We must constantly improve how we work, not just the tools we work with.

That time I deleted the users table

One day I was assigned the task of cleaning up the users_archive table and associated data. Why we had this table, I don’t know, but apparently it was getting big. I was to delete some of the archived users, presumably the oldest ones, though I don’t remember exactly.

While I was writing the SQL queries – directly on production – the CTO came by my desk and asked for a different query. A query that required the users table. I think you can see where this is going. 😬

My wires got crossed. I ended up running my “cleanup” query against the wrong table.

People in the office started asking why production was down. With thousands of active users, deleting a large number of rows from a fundamental table like users was a struggle for the database.

That struggle was a blessing in disguise. Once I realized my mistake, I was able to cancel the query. The transaction slowly rolled back. It was as if the data was never deleted!

Crisis averted. I had caused a temporary outage, but at least no data was lost.

Lessons learned

For me, the main lesson is to avoid multi-tasking, especially when making changes on production. Many mistakes are caused by doing too many things at once.

To avoid similar mistakes:

  • I have a to-do list for each day, so I can add a new request to my list and come back to it later, instead of jumping back and forth.
  • I prefer to avoid mutating data directly in production. If the problem isn’t urgent and can be solved with a data migration, then it will go through the regular code review and deployment process. That can help catch mistakes.
  • Though not mandated, I think it’s a good idea to pair program whenever mutating production data directly.
  • For my local database, I write DELETE queries as a comment. That way it must be selected to run, adding friction so that I won’t accidentally run a query that deletes data.
  • My phone has a work focus mode. I’m not checking my phone while performing surgery on the production database!
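
Beyond these habits, the transaction that saved me that day can be made deliberate rather than lucky: when a destructive query must be run by hand, wrap it in an explicit transaction and check the affected row count before committing. Here’s a minimal sketch in Go using database/sql – the users_archive cleanup, cutoff date, and row threshold are hypothetical, not the original queries:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // any database/sql driver works; Postgres assumed here
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Begin an explicit transaction so the delete can still be rolled back.
        tx, err := db.Begin()
        if err != nil {
            log.Fatal(err)
        }
        defer tx.Rollback() // harmless after a successful Commit

        // Hypothetical cleanup: delete archived users older than a cutoff date.
        res, err := tx.Exec(`DELETE FROM users_archive WHERE archived_at < $1`, "2015-01-01")
        if err != nil {
            log.Fatal(err)
        }

        // Sanity-check the damage before it becomes permanent.
        n, err := res.RowsAffected()
        if err != nil {
            log.Fatal(err)
        }
        if n > 10000 {
            log.Fatalf("refusing to commit: %d rows affected, expected far fewer", n)
        }

        if err := tx.Commit(); err != nil {
            log.Fatal(err)
        }
        log.Printf("deleted %d archived users", n)
    }

If the count looks wrong, the program bails out before the commit and the database rolls the delete back – the same rollback that saved me, only on purpose this time.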

Too many pending transactions

It was a rough time for the company. Our payment gateway didn’t support verification numbers (CVV/CVC) or address verification (AVS). We had lost access to our merchant account due to an uptick in scammers, but the paperwork hadn’t gone through for a new payment gateway with support for CVV/CVC and AVS to help block the fraud. In the meantime, the best we could do was store the encrypted card information for later.

I was new to working with payment gateways in general, to say nothing of the specific system that was in place. But I was assigned to work on it during this transition.

Though I don’t remember the specifics, I do remember fixing a bug that day – or so I thought. 🤦🏼‍♂️

After deploying my “fix”, I caught up with the other developers who were heading out for lunch. It was a lengthy lunch too.

When I finally got back, I got an earful! Something about “pending transactions” – whatever those were? A little shaken, I immediately got to work. As it turned out, I had turned a small bug into a bigger, nastier bug! 🦟

Later I learned what a pending transaction was – it’s actually pretty clever. Whenever a purchase was made, a database record was created and marked as pending, before the request was sent to the payment gateway. After the gateway responded, the transaction was marked as approved or declined – no longer pending.

If the number of recent pending transactions exceeded a threshold, internal emails would be sent. It meant that either the payment gateway was not responding or there was a bug in our code. Unfortunately, I was out to lunch, with no cell phone and no idea.
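
A rough sketch of that pattern in Go – the schema, function names, and 15-minute window are hypothetical, and the original system long predates Go, but the shape is the same: insert a pending row, call the gateway, record the outcome, and alert when too many rows stay pending.

    package payments

    import (
        "database/sql"
        "log"
    )

    // Gateway is a hypothetical stand-in for the payment gateway client.
    type Gateway interface {
        Charge(orderID, amountCents int64) (approved bool, err error)
    }

    // ChargeCard records the transaction as pending *before* calling the
    // gateway, then marks it approved or declined once the gateway responds.
    func ChargeCard(db *sql.DB, gw Gateway, orderID, amountCents int64) error {
        var txnID int64
        err := db.QueryRow(
            `INSERT INTO transactions (order_id, amount_cents, status)
             VALUES ($1, $2, 'pending') RETURNING id`,
            orderID, amountCents,
        ).Scan(&txnID)
        if err != nil {
            return err
        }

        status := "declined"
        if approved, err := gw.Charge(orderID, amountCents); err == nil && approved {
            status = "approved"
        }
        // If the gateway call hangs or errors out, the row stays 'pending',
        // which is exactly what the monitor below watches for.
        _, err = db.Exec(`UPDATE transactions SET status = $1 WHERE id = $2`, status, txnID)
        return err
    }

    // AlertIfStuck raises an alert when too many recent transactions are still
    // pending: either the gateway is not responding or a bug was just deployed.
    func AlertIfStuck(db *sql.DB, threshold int) error {
        var pending int
        err := db.QueryRow(
            `SELECT count(*) FROM transactions
             WHERE status = 'pending' AND created_at > now() - interval '15 minutes'`,
        ).Scan(&pending)
        if err != nil {
            return err
        }
        if pending > threshold {
            // The real system sent internal emails; logging keeps the sketch self-contained.
            log.Printf("ALERT: %d pending transactions in the last 15 minutes", pending)
        }
        return nil
    }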

Lessons learned

  • Never deploy right before leaving.
  • Monitoring is a valuable time investment.
  • Share knowledge with your team. Preferably before an emergency. 😅

To avoid similar mistakes:

This mistake occurred years before automated testing was common practice. However, knowing about unit tests and TDD is insufficient on its own. We still need to actually write the tests, even when there’s a seemingly urgent bug in production!

Taking 10 minutes to write a failing test doesn’t just prevent a regression from reintroducing the bug in the future; it may also prevent introducing another “new and exciting” bug as part of the “fix”.
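
As a sketch of what that looks like today, here’s a hypothetical stand-in in Go (not the original payment code): the test is written first, it fails against the buggy code, and it passes once the fix is in place.

    package payments

    import "testing"

    // transactionStatus is a tiny, hypothetical stand-in for the real payment
    // code, shown here after the fix so these tests compile and pass.
    func transactionStatus(gatewayResponded, gatewayApproved bool) string {
        if !gatewayResponded {
            return "pending"
        }
        if gatewayApproved {
            return "approved"
        }
        return "declined"
    }

    // Written first, this test fails against the buggy code and passes once the
    // fix is in place, preventing the same regression from slipping back in.
    func TestDeclinedChargeIsNotLeftPending(t *testing.T) {
        if got := transactionStatus(true, false); got != "declined" {
            t.Fatalf("declined charge: want status %q, got %q", "declined", got)
        }
    }

    // A second case documents the intended behavior when the gateway never responds.
    func TestUnresponsiveGatewayLeavesChargePending(t *testing.T) {
        if got := transactionStatus(false, false); got != "pending" {
            t.Fatalf("unresponsive gateway: want status %q, got %q", "pending", got)
        }
    }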

Accidentally invoicing a military airbase

As is fairly common, we had a few different environments: local development servers, a QA environment, and production. From what I recall, QA was a clone of production.

I was deploying our work, and decided to do some manual testing. Everything worked fine, and I didn’t even realize the issue until it was brought to my attention the next day.

With so many browser tabs and windows open, it was an easy mistake to make. My manual testing triggered some emails, and because of the environment I was testing on (QA), those emails went to a real user! Not just to a customer, but to a customer’s customer. This was bad. This was very bad! 🤦🏼‍♂️

Lessons learned

To avoid similar mistakes, we made some changes after that event:

  • We visually demarcated QA and development environments in our app, making it obvious which server we were looking at.
  • We found a service to capture emails sent from QA or development, preventing them from reaching end users.

Visually demarcating different environments can be extended to Terminals and database access:

  • When I log into production today, I use the Red Sands theme in Terminal to clearly demarcate it from a local shell.
  • For database access, TablePlus can be configured with a red status color too, though I wish it were more prominent. As soon as I’m done, I shut down my connection to production to avoid using the wrong database by accident.

These days, we don’t consider it a good practice to make QA a replica of production data. We want the data in QA to be representative – enough data for testing load and so forth – but there should be no contact information from outside our own company on the QA servers.

In addition, to combat the explosion of browser tabs:

  • I use OneTab to collapse the tabs I’m not using.
  • I use a separate browser window for production to segregate it from development tabs.
  • I use a dedicated documentation browser like Dash, and other desktop apps, to reduce how many browser tabs I need in the first place.

This isn’t one of my own fails, but it’s a fun one, so I’m going to share it anyway.

My colleague was working on a new real-time chat feature for an online dating website. When he deployed his changes, he had forgotten to remove some debug code.

Shortly afterward he got a request to chat. And then another request. And another. He quickly realized that this was a little unusual.

While testing locally, he had all the requests go to his profile. That was useful locally, but it wasn’t intended to go to production. Whoops!

Lessons learned

What is the lesson here? Working for an online dating company has its perks? 😅

To avoid similar mistakes:

  • Automated tests and continuous integration could have caught it.
  • We have linters that will prevent some debug code from going to production.
  • These days we have configurations for each environment, so a chat debug mode could be a permanent option, one that could only be activated locally (see the sketch after this list).
  • I have a habit of reviewing my own changes before committing them.
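
As a sketch of that environment-specific configuration, here’s a minimal example in Go – the variable names, fields, and package are hypothetical: the debug override is only read in development, so it can’t follow a deploy to production.

    package chat

    import "os"

    // Config is a hypothetical per-environment configuration.
    type Config struct {
        Env                string // "development", "qa", or "production"
        DebugChatRecipient string // route all chat requests to one profile while testing
    }

    // LoadConfig reads the environment. The debug override is only honored in
    // development, so it can never follow a deploy to production.
    func LoadConfig() Config {
        cfg := Config{Env: os.Getenv("APP_ENV")}
        if cfg.Env == "development" {
            cfg.DebugChatRecipient = os.Getenv("DEBUG_CHAT_RECIPIENT")
        }
        return cfg
    }

    // recipientFor returns who should receive a chat request.
    func recipientFor(cfg Config, requested string) string {
        if cfg.DebugChatRecipient != "" {
            return cfg.DebugChatRecipient // only ever set in development
        }
        return requested
    }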

In summary

It goes without saying that I’ve made numerous mistakes throughout my career; these are just a few. Yet I managed to hold onto this career despite some pretty serious blunders. Nobody is perfect, and nobody should expect perfection. All we can do is learn from our mistakes.

I hope this glimpse into the past has given you a greater appreciation for the tools that can help prevent bugs today. At the same time, tools alone aren’t sufficient. Improving our own process of working may be the best thing we can do. Even in this era of Generative AI, good practices remain good practices.

What mistakes have you made in your career, and what lessons have you learned? Maybe you have your own list of good practices that you could share as a blog post of your own. After all, sharing your knowledge and experience is itself a good practice. 😉

Nathan Youngman

Software Developer and Author