Software Engineering

Backend Architecture Mistakes That Are Silently Killing Your Product

Samuel Isaiah
8 min read

Let me paint you a picture.

It's a Tuesday. Nothing special about this Tuesday. You're sipping your third coffee — not because you want it, just because it exists and you are weak — and then Slack pings. A user is complaining that the dashboard is slow. You check. It's not slow. It's dead. What used to take 40 milliseconds now takes 14 seconds, and nobody touched anything, and everything is on fire, and somehow this is your problem even though you were on leave last week.

This is not bad luck. This is architecture debt collecting its invoice. With interest. In public.

I've seen this happen more times than I'd like to admit — sometimes to other people, sometimes embarrassingly close to home. The thing about backend systems is they don't fail dramatically. They don't explode. They just quietly accumulate terrible decisions until one random Tuesday the whole thing sits down and refuses to get up. And the worst part? Almost every single one of these disasters was avoidable. Not with genius. Not with a PhD. Just with a few good habits and someone asking "but what happens when this table has a million rows" before writing the migration.

So here are the mistakes. The real ones. The ones nobody puts in the job description but everybody eventually pays for.

Designing Your Database Around Your UI (aka The Designer Trap)

This one has taken down more backends than any security vulnerability ever will, and it happens like this: a designer produces a beautiful Figma screen, a frontend developer asks what data they need, and the backend developer — exhausted, under deadline, just trying to make the tests pass — creates a table that looks exactly like that screen.

The profile page needs name, avatar, role, and last login? Cool, one table with those four columns. The dashboard needs monthly sales grouped by product with color-coded categories? Sure, one view that joins seven tables and returns it pre-formatted for that exact screen.

Congratulations. You have just married your database to your UI. And like all rushed marriages, this one will eventually become very expensive.

The problem is your UI changes constantly. Your database should not have to hold its hand through every design revision. When your data model is built to serve one screen rather than to model actual reality — actual entities, actual relationships, the actual truth of your business domain — you end up with a schema so rigid that adding a new feature requires touching things that have absolutely nothing to do with that feature. You'll be in a migration at 11pm wondering how adding a "notes" field to an order somehow broke the user profile.

Model your database around your domain. What are the real things in your system? Users. Products. Orders. Transactions. What are the real relationships between them, independent of any screen? Start there. Then build your API to translate between that clean model and whatever shape the frontend needs today. The frontend shape is a mood. The database is the truth. Do not let the mood dictate the truth.
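To make the separation concrete, here is a minimal sketch in plain Python (dataclasses standing in for real tables, with hypothetical names): the domain entities model the business, and a thin translation layer produces whatever shape today's screen wants.

```python
from dataclasses import dataclass, field

# Domain entities: the "truth" of the system, independent of any screen.
@dataclass
class Customer:
    id: int
    name: str

@dataclass
class Order:
    id: int
    customer: Customer
    total_cents: int
    notes: list = field(default_factory=list)

def order_row(order: Order) -> dict:
    """Translate the domain model into the shape today's UI wants.
    When the design changes, this function changes -- the schema doesn't."""
    return {
        "orderId": order.id,
        "customer": order.customer.name,
        "total": f"${order.total_cents / 100:.2f}",
    }
```

When the dashboard redesign arrives, you write a new translation function; the `Order` and `Customer` entities, and the tables behind them, stay put.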

Not Adding Indexes Until Users Start Complaining

Somewhere right now, a developer is looking at a table with 800,000 rows and a WHERE clause filtering on an unindexed column, and they are watching the query planner scan every single one of those rows one by one, like a person looking for their keys by checking every pocket of every jacket they have ever owned.

Indexes are one of those things that feel like premature optimisation when your table has 200 rows and feel like a personal failure when your table has 200,000 rows and your endpoint is timing out in production. The database does not warn you. It just silently does more and more work until it can't anymore.

The habit is simple: every time you write a query that filters, sorts, or joins on a column — any column — ask yourself whether that column is indexed. Foreign keys? Index them. Columns in WHERE clauses that run frequently? Index them. Columns you ORDER BY on large tables? Definitely index them. This takes about 30 seconds at migration time and approximately one therapy session to fix reactively under production load.

The rule I live by: if you're writing a query, you're also writing the index that makes it fast. They come as a pair.
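You can check the pairing yourself with `EXPLAIN QUERY PLAN` (or `EXPLAIN ANALYZE` in Postgres). A small SQLite sketch, with hypothetical table and index names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, created_at TEXT)"
)
# The query below filters on customer_id, so it gets an index --
# written in the same migration as the table, not six months later.
conn.execute("CREATE INDEX idx_orders_customer_id ON orders (customer_id)")

# EXPLAIN QUERY PLAN shows whether SQLite will scan every row or use the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = ?", (42,)
).fetchall()
detail = " ".join(row[3] for row in plan)
print(detail)  # mentions idx_orders_customer_id rather than a full scan of orders
```

Make checking the plan part of writing the query, not part of the incident retro.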

Putting Business Logic in the Controller (The Classic Move of Someone Who Was in a Hurry)

We have all done this. A feature needs to ship. The controller is right there. You just need to add a few conditions, call an external service, update two tables, fire a notification, and return a response. So you do it all in the controller. It works. You ship. You move on.

Six months later a colleague needs the same logic from a background job. They find your controller. They copy the logic. Now it lives in two places. A third person comes along and updates one copy but not the other. Now you have a bug that only appears on Tuesdays when the background job runs and not on the main endpoint that users hit and nobody can figure out why for three days.

Controllers handle HTTP. That is their job and their only job: receive a request, call the appropriate service, return a response. The business logic — the actual rules of your application, the things that make your product what it is — belongs in a service layer. Not because it's elegant, but because logic that lives in a service can be tested without simulating an HTTP request, called from anywhere in the codebase, and found by the next developer without them having to grep through route handlers.
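Framework aside, the shape is simple. A plain-Python sketch (hypothetical names; your framework's request/response types would replace the dicts):

```python
class OrderError(Exception):
    pass

def add_note_to_order(order: dict, note: str) -> dict:
    """Service layer: the actual business rule lives here, callable from
    a view, a background job, or a test -- no HTTP required."""
    if order.get("status") == "cancelled":
        raise OrderError("cannot annotate a cancelled order")
    return {**order, "notes": order.get("notes", []) + [note]}

def add_note_view(request_json: dict, order: dict) -> tuple[int, dict]:
    """Controller: parse input, delegate to the service, map result to a response."""
    try:
        updated = add_note_to_order(order, request_json["note"])
        return 200, updated
    except OrderError as exc:
        return 422, {"error": str(exc)}
```

The background job six months from now imports `add_note_to_order` directly. One copy of the rule, two callers, zero Tuesday-only bugs.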

The same goes for the other direction: please do not put business logic in your database triggers. Yes, it feels clever. Yes, it runs fast. But when something breaks at 2am and you're tracing a bug through application code and there's a trigger firing silently in the database that nobody remembers writing, clever will not feel like the right word.

The N+1 Query Problem (Or: How to Make 51 Database Calls When You Only Needed 1)

This one is so common it has a name, and yet it keeps appearing in codebases like a stray cat that everyone feeds without admitting they feed it.

Here is how it happens. You fetch a list of 50 orders. Then in your serializer, or your template, or your response builder, you access each order's customer name. Your ORM helpfully fires a separate database query for each one. That is 50 queries to get customer names, plus the 1 query to get the orders. 51 round trips to the database for what should have been 1 query with a join.

On your laptop, with a local database and 50 rows, this takes maybe 80ms and feels fine. In production, with 500 rows and a database server that is not sitting next to your application server, this is several seconds of pure database overhead, and the worst part is the code looks completely normal. There is no error. There is no warning. Just a loop and some property accesses and a database that is quietly exhausted.

The fix in Django is select_related for foreign-key relationships and prefetch_related for many-to-many and reverse relations. Most ORMs have an equivalent. The habit is to always ask, when fetching a list of things, whether anything you'll do with each item will trigger another query. If yes, fetch it upfront. One trip, not fifty.
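You can watch the 51-versus-1 happen with nothing but SQLite and a query counter. A sketch (the join is roughly what select_related does on your behalf):

```python
import sqlite3

class CountingConnection:
    """Wrap a connection to count round trips to the database."""
    def __init__(self, conn):
        self.conn, self.queries = conn, 0
    def execute(self, sql, params=()):
        self.queries += 1
        return self.conn.execute(sql, params)

raw = sqlite3.connect(":memory:")
raw.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
raw.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
raw.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Lin')")
raw.executemany("INSERT INTO orders (customer_id) VALUES (?)", [(1,), (2,)] * 25)

db = CountingConnection(raw)

# N+1: one query for the 50 orders, then one more per order for the name.
orders = db.execute("SELECT id, customer_id FROM orders").fetchall()
names = [db.execute("SELECT name FROM customers WHERE id = ?", (cid,)).fetchone()[0]
         for _, cid in orders]
print(db.queries)  # 51

# The fix: one round trip with a join.
db.queries = 0
joined = db.execute(
    "SELECT o.id, c.name FROM orders o JOIN customers c ON c.id = o.customer_id"
).fetchall()
print(db.queries)  # 1
```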

Confusing Authentication With Authorisation (They Are Not the Same Thing and This Mistake Is Costly)

Authentication is "who are you." Authorisation is "are you allowed to do this." These are two different questions and conflating them is one of the most expensive security mistakes a backend can make — not expensive like "we had to refactor some code", expensive like "we had a data breach and now we're explaining it to regulators."

The failure mode looks like this. Your endpoint at "/api/invoices/2847/" correctly checks that the user is logged in. Authentication: passing. But it does not check whether invoice 2847 belongs to the user making the request. So any logged-in user who increments the ID in the URL can access anyone else's invoices. This vulnerability has its own name — Broken Object Level Authorisation — and OWASP has listed it as the number one API security risk for years running because it is everywhere.

The fix is not complicated. Every endpoint that returns or modifies a specific resource must verify not just that the user is authenticated, but that the authenticated user has the right to that specific object. This check belongs in your service layer so it cannot be accidentally skipped. It is one extra condition. It prevents a category of breach that has cost companies millions.

Check who they are. Then check what they're allowed to touch. Both. Every time.
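The check itself is one condition. A plain-Python sketch of the service-layer version, with a hypothetical in-memory store standing in for the real database:

```python
class Forbidden(Exception):
    pass

# Hypothetical data standing in for a real invoices table.
INVOICES = {
    2847: {"id": 2847, "owner_id": 7, "amount_cents": 125_00},
}

def get_invoice(requesting_user_id: int, invoice_id: int) -> dict:
    """Service-layer fetch that enforces object-level authorisation.
    Authentication happened upstream; this is the 'are you allowed' half."""
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise KeyError(invoice_id)
    if invoice["owner_id"] != requesting_user_id:
        # Same logged-in user, wrong object: deny.
        raise Forbidden(f"user {requesting_user_id} cannot read invoice {invoice_id}")
    return invoice
```

Because the check lives in the fetch function, every caller gets it for free; there is no endpoint that can forget it.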

Not Planning for External Services to Fail (They Will Fail)

Your payment gateway will go down. Your SMS provider will time out. Your third-party analytics endpoint will return a 500 at the worst possible moment. This is not pessimism — this is how distributed systems work. The question is not whether your external calls will fail. The question is what your application does when they do.

The most dangerous version of this mistake is a synchronous external call with no timeout sitting inside a request/response cycle. Your user clicks "Pay Now." Your server calls the payment gateway. The gateway is slow. Your thread waits. Thirty seconds pass. Your thread is still waiting. Meanwhile, 40 other users click "Pay Now." Now 40 threads are waiting. Your server is out of threads. Your entire application is down because one external service was slow, and your users are seeing a 503 while their money is potentially in a limbo state that will take your team hours to untangle.

Always set timeouts on external calls. Move non-critical external calls to background jobs. Design your system to degrade gracefully — a failed analytics call should be logged and retried silently, not surfaced as an error to the user who is just trying to complete a purchase. External services are unreliable guests in your system. Plan accordingly.
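One way to encode "degrade gracefully" is a small wrapper that only re-raises when the call actually matters, sketched here with hypothetical names:

```python
import logging

logger = logging.getLogger("external")

def degrade_gracefully(call, *, critical: bool, default=None):
    """Run an external call; re-raise failures only when the call is critical.
    Non-critical failures (analytics, tracking) are logged and swallowed
    so they never surface as errors to the user."""
    try:
        return call()
    except (TimeoutError, ConnectionError, OSError) as exc:
        if critical:
            raise
        logger.warning("non-critical external call failed: %s", exc)
        return default

# And with a real HTTP client, always pass an explicit timeout, e.g. with
# requests: requests.get(url, timeout=(3, 10))  # 3s to connect, 10s to read
```

The payment call stays `critical=True` and fails loudly; the analytics call becomes `critical=False` and the purchase completes regardless.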

Hardcoding Secrets and Config (Please. I'm Begging You.)

I have seen database URLs in source code. I have seen API keys in config files committed to public GitHub repositories. I have seen a production secret token in a comment — in a comment — left there "temporarily" eighteen months ago.

The immediate risk is obvious: secrets in version control are permanent. You can delete the file. You can push a new commit. The git history remembers. Automated scanning tools — the kind threat actors run against public repositories every hour — will find it. A secret committed even for five minutes and then removed is still compromised.

The less dramatic but equally real cost is operational. When your database host changes, when your API key rotates, when you need to deploy to staging with different credentials, hardcoded values mean code changes and redeployments instead of a five-second config update. Environment variables for all secrets. A secrets manager for production credentials. Your ".env" file is documentation, not a file to commit. This is not a senior engineer thing. This is a day-one thing.
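A small habit that pairs well with environment variables is failing fast at startup when a required one is missing, instead of mysteriously at first use. A sketch (the variable names are hypothetical):

```python
import os

def require_env(name: str) -> str:
    """Read a required setting from the environment, or refuse to boot."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Hypothetical settings, read once at application startup:
# DATABASE_URL = require_env("DATABASE_URL")
# PAYMENT_API_KEY = require_env("PAYMENT_API_KEY")
```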

Building a Messy Monolith and Blaming the Monolith

Here is a cycle I have watched play out across more teams than I can count. The monolith is slow. It's tangled. No one wants to touch the orders module because last time someone did, the notifications broke and nobody knew why. So the team decides: microservices. Split it all up. Six months later the system is slower, harder to debug, and the same tangled dependencies now exist over a network instead of a function call.

The conclusion the team draws: they split the services wrong. The actual problem: the monolith had no meaningful internal structure, and they faithfully reproduced all of its chaos in a distributed system and then added network latency, eventual consistency, and distributed tracing on top.

A well-structured monolith with clear module boundaries and proper separation of concerns will outperform a poorly architected microservice system for most products at most stages. The discipline that makes a monolith healthy — clean interfaces between modules, each part doing one thing, logic in the right layer — is the exact same discipline you need for microservices. Do it inside the monolith first. If you ever genuinely need to split, the seams will already be there and the split will take days, not months.

So. What Now.

None of this is advanced. None of it requires a systems architecture certification or a whiteboard interview at a FAANG company. It requires slowing down for five minutes before writing the migration, asking what happens at ten times the scale, and treating your future self — the one who will be debugging this at 11pm on a Tuesday — as someone who deserves a bit of consideration.

The backend is where correctness lives. A beautiful product on a fragile backend is a countdown timer, not a company. Build the foundation like it matters. Because eventually, it will be the only thing that does.

#backend #software-engineering #architecture #database #api #security