You’ve done everything right. Your code is clean, your functions are optimized, and you’ve used the latest frameworks. But your application is still slow.
Why?
Because your database is struggling to keep up.
This is a common problem in backend development—your code executes in milliseconds, but your database queries take seconds. If you don’t address this, you’ll end up with sluggish APIs, frustrated users, and a system that crumbles under load.
Let’s explore why databases slow down and how to fix them.
Why the Database Can Become a Bottleneck
Database access is slower than code execution.
Your application code runs in memory, but your database reads and writes data from disks or over a network. Even with SSDs and optimized database engines, disk access and network calls are still much slower than in-memory operations.
But how do you know if your database is causing performance issues?
Look for these symptoms:
Your API requests take a long time despite minimal application logic.
Your database CPU usage is always high.
Queries take longer as your data grows.
Your application struggles under high user traffic.
Diagnosing the Problem
Before you start optimizing, you need to identify the actual bottlenecks. Here’s how:
Use Query Profiling
Every major database has a way to analyze slow queries:
MySQL & PostgreSQL: Use EXPLAIN ANALYZE to check execution plans.
MongoDB: Use .explain() to analyze queries.
Slow Query Logs: Identify the queries that are taking the most time.
For example:
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
This will show how your database processes the query and where the delays are happening.
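The slow query log mentioned above is often the quickest way to surface problem queries. Here is a minimal sketch of turning it on at runtime in MySQL (the one-second threshold is just an example; tune it for your workload):
-- Enable the slow query log dynamically (can also be set in my.cnf)
SET GLOBAL slow_query_log = 'ON';
-- Log any statement that takes longer than 1 second
SET GLOBAL long_query_time = 1;
-- Optionally also log statements that use no index at all
SET GLOBAL log_queries_not_using_indexes = 'ON';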
Look at Key Performance Metrics
Monitor the following database metrics:
Query execution time: Are some queries consistently slow?
Read/write latency: Is your database struggling with high read or write operations?
Locking issues: Are queries blocking each other?
Connection pooling: Are too many connections being opened and closed?
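On PostgreSQL, one way to see query execution times is the pg_stat_statements extension. A hedged sketch, assuming the extension is installed and enabled (column names such as mean_exec_time apply to PostgreSQL 13+):
-- Top 10 statements by average execution time
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;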
Once you’ve identified the slow queries, you can start optimizing them.
Database Locking & Deadlocks
When multiple queries or transactions run concurrently, databases use locking mechanisms to ensure data consistency.
However, poor transaction management can lead to deadlocks, where two or more queries wait indefinitely for each other to release a lock, causing performance bottlenecks.
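As a simplified illustration with a hypothetical accounts table, two concurrent transactions that update the same rows in opposite order can deadlock:
-- Session 1
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- locks row 1

-- Session 2 (running at the same time)
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- locks row 2

-- Session 1 now waits for the lock on row 2
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- Session 2 now waits for the lock on row 1: deadlock.
-- Most databases detect this and abort one of the transactions.
UPDATE accounts SET balance = balance - 100 WHERE id = 1;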
There are different types of locks:
Row-level locks: Lock individual rows, allowing higher concurrency. (Ideal for OLTP applications)
Table-level locks: Lock the entire table, preventing modifications while a transaction is running. (Can cause slowdowns)
Shared vs. Exclusive locks: Shared locks allow multiple reads, while exclusive locks block all access until completion.
Some best practices to deal with deadlocks are as follows:
Keep transactions short and efficient (commit early).
Access tables in a consistent order across transactions.
Use indexing to reduce unnecessary locks.
Monitor for deadlocks using SHOW ENGINE INNODB STATUS; (MySQL) or pg_stat_activity (PostgreSQL).
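For example, here is a minimal PostgreSQL sketch for spotting sessions that are currently blocked by another session (pg_blocking_pids is available from PostgreSQL 9.6 onwards):
-- List queries that are waiting on a lock held by another session
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;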
Optimizing Your Queries
Optimizing queries is a quick win. Here are some common tips that can help:
1 - Add Indexes (The Right Way!)
Indexes speed up lookups by avoiding full table scans.
For example, without an index on the email column, the query below forces a full table scan.
SELECT * FROM users WHERE email = 'user@example.com';
A simple fix is to create an index on email.
CREATE INDEX idx_email ON users(email);
With an index, the database can locate the data instantly instead of scanning every row. However, too many indexes slow down writes, so there is a trade-off involved. Make sure to only index the columns you frequently search on.
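To confirm the index is actually being used, re-run the query under EXPLAIN (the exact plan output differs between MySQL and PostgreSQL):
-- The plan should now show an index lookup on idx_email instead of a full table scan
EXPLAIN SELECT * FROM users WHERE email = 'user@example.com';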
2 - Avoid SELECT * (Be Specific)
Fetching all columns when you only need a few wastes resources. For example, avoid a query like the one below:
SELECT * FROM orders WHERE status = 'pending';
A better option is the query below:
SELECT id, customer_id, total_price FROM orders WHERE status = 'pending';
This reduces the amount of data transferred and speeds up queries.
3 - Use Joins & Reduce Nested Queries
Avoid nested queries. For example:
SELECT * FROM orders WHERE customer_id IN (SELECT id FROM customers WHERE city = 'New York');
Using joins like the example below can be a much better strategy.
SELECT orders.order_id
FROM orders
JOIN customers ON orders.customer_id = customers.id
WHERE customers.city = 'New York';
The database's query planner can usually optimize joins well, whereas nested subqueries, especially correlated ones, can trigger redundant lookups.
Scaling Beyond Queries: Architectural Optimizations
Some optimizations happen at the architectural level rather than in individual queries. Here are some examples:
Schema Design
Database schema design directly impacts query speed, scalability, and storage efficiency. Poor schema decisions lead to slow queries, high storage costs, and scaling bottlenecks.
Here’s how to optimize schema design for better performance:
1 - Normalization vs. Denormalization: Trade-offs
Normalization (3NF and above) reduces data redundancy by breaking tables into smaller, related entities.
Pros: Saves space and avoids update anomalies.
Cons: Can slow down read-heavy queries due to excessive JOINs.
Denormalization merges frequently used data to reduce JOINs, making reads faster at the cost of extra storage and potential update issues.
Pros: Great for analytics & high-read workloads.
Cons: Can lead to data inconsistency if not managed properly.
Rule of Thumb: Normalize when you expect frequent writes, denormalize when reads dominate.
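As a hedged illustration of this trade-off, the hypothetical tables below contrast a normalized pair of tables with a denormalized reporting table that avoids the JOIN on reads:
-- Normalized: customer data lives in one place, orders reference it
CREATE TABLE customers (
    id BIGINT PRIMARY KEY,
    name TEXT NOT NULL,
    city TEXT NOT NULL
);

CREATE TABLE orders (
    id BIGINT PRIMARY KEY,
    customer_id BIGINT REFERENCES customers(id),
    total_price NUMERIC(10, 2) NOT NULL
);

-- Denormalized (read-heavy reporting): the city is copied onto each order row,
-- so reports skip the JOIN at the cost of duplicated data and extra update work
CREATE TABLE orders_reporting (
    id BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    customer_city TEXT NOT NULL,
    total_price NUMERIC(10, 2) NOT NULL
);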
2 - Choosing the Right Data Types
Use BIGINT instead of UUID for primary keys unless uniqueness across distributed systems is needed.
Use BOOLEAN for true/false values.
Use TEXT/JSONB sparingly, because indexable, strongly typed columns are faster to query.
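A minimal sketch of these choices on a hypothetical table:
CREATE TABLE users (
    id BIGINT PRIMARY KEY,        -- BIGINT keys are smaller and faster to index than UUIDs
    is_active BOOLEAN NOT NULL,   -- a real BOOLEAN instead of a string flag
    email VARCHAR(255) NOT NULL,  -- a bounded, indexable column for lookups
    preferences JSONB             -- JSONB reserved for genuinely unstructured data
);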
Use Caching
Instead of hitting the database every time, store frequently accessed data in a cache (Redis, Memcached). This reduces database load significantly.
For example, cache the result of expensive queries.
cache.set("user_123_orders", query_result, expire=300) # Expires in 5 minutes
Optimize Database Connections
Opening and closing database connections is expensive. Use connection pooling to reuse existing connections instead of creating new ones for every request.
For example, use pgbouncer for PostgreSQL connection pooling.
Materialized Views for Precomputed Expensive Queries
A Materialized View is a precomputed query result stored as a physical table. Unlike a regular database view, which runs a query every time it's accessed, a materialized view stores the result persistently, making expensive queries much faster.
For example, suppose we frequently calculate total sales per day from a huge orders table:
SELECT order_date, SUM(total_price) AS daily_sales
FROM orders
GROUP BY order_date;
If the orders table is massive, this query slows down dashboards and reports. To handle this, you can create a materialized view, so queries read directly from the view instead of hitting the orders table.
However, materialized views don’t usually auto-update when the underlying data changes, so you need a mechanism to refresh them. This also makes them a poor fit for real-time data needs.
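A hedged sketch of what this could look like in PostgreSQL (the view name is illustrative, and the refresh schedule is up to you):
-- Precompute daily sales once instead of aggregating the full orders table per request
CREATE MATERIALIZED VIEW daily_sales_summary AS
SELECT order_date, SUM(total_price) AS daily_sales
FROM orders
GROUP BY order_date;

-- Dashboards now read from the small precomputed result
SELECT daily_sales FROM daily_sales_summary WHERE order_date = '2024-06-01';

-- Re-run the underlying query on a schedule (for example, nightly) to keep it current
REFRESH MATERIALIZED VIEW daily_sales_summary;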
Sharding & Partitioning for Large Datasets
When your database gets huge, split data across multiple servers (sharding) or distribute tables across partitions.
For example:
Shard users based on country (users_us, users_eu)
Partition orders by date (orders_2023, orders_2024)
This reduces the load on a single table and speeds up lookups.
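For instance, here is a minimal sketch of date-based partitioning using PostgreSQL's declarative partitioning (the table definition and ranges are illustrative):
-- The parent table declares the partition key
CREATE TABLE orders (
    id BIGINT,
    order_date DATE NOT NULL,
    total_price NUMERIC(10, 2)
) PARTITION BY RANGE (order_date);

-- One partition per year; queries filtered on order_date only touch the relevant partition
CREATE TABLE orders_2023 PARTITION OF orders
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

CREATE TABLE orders_2024 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');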
Use Read Replicas for Scaling Reads
If you have millions of reads per second, create read replicas to distribute the load.
For example:
Primary DB: Handles writes and critical reads.
Read Replicas: Handle read queries.
Many large-scale systems use this approach for redundancy and improved availability as well.
Conclusion: Diagnose Before You Optimize
Here are some tips to keep in mind:
Don’t blindly optimize—use logs and profiling to find real bottlenecks.
Start with query optimization—indexes, joins, and batching solve 80% of problems.
Use caching and read replicas—offload repetitive queries.
Think about scaling architecture—sharding, partitioning, and hybrid databases.
A slow database doesn’t have to hold you back. With the right optimizations, you can make your system blazing fast and scalable.
👉 So, which other tips would you add to optimize database access?
Shoutout
Here are some interesting articles I’ve read recently:
Database sharding case study: PostgreSQL by
How to Save 10 Hours Per Week as a Developer by
5 Developer Tools That Saved My Team 10 Hours Per Week (And They're All Free) by
That’s it for today! ☀️
Enjoyed this issue of the newsletter?
Share with your friends and colleagues.