4 Comments

Normalization and denormalization are two key concepts in database design, each serving a specific purpose. Great write-up.


Thanks, Fernando!


I have to admit, I've been in web development for almost two decades, but this is the first project where we're actually considering duplicating some data.

And our use case fits perfectly what you described as ideal for this: heavy read operations. We actually have just a single import that creates the graph DB structure, and everything after that is reads (there's a sketch of the pattern below).

Before this I couldn't think of a situation where I would have duplicated data, but it looks like I was working with a limited scope of apps. :) Thanks for the mention, Saurabh!
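A minimal sketch of that write-once, read-many duplication pattern, with plain Python dicts standing in for graph nodes; all names here are illustrative, not from the actual project:

```python
# Write-once import: build the graph, copying the author's name onto
# each post node so reads never have to follow the edge back to authors.
# Names and structure are illustrative, not the project's real schema.

authors = {"a1": {"name": "Fernando"}}

# Single import step: duplicate author_name into every post node.
posts = {
    "p1": {
        "author_id": "a1",
        "author_name": authors["a1"]["name"],  # duplicated at import time
        "title": "On Normalization",
    },
}

# Read path: a single node lookup, no traversal into `authors` needed.
for post in posts.values():
    print(post["title"], "by", post["author_name"])
```

Since the data is written once and never updated, the usual cost of duplication (keeping copies in sync) never comes due, which is what makes this use case such a good fit.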


I often look at the decision as being about integrity vs. speed.

Normalization: Optimizes for data integrity and minimizes redundancy, ideal for transactional systems.

Denormalization: Optimizes for read performance by reducing joins, ideal for analytical queries (see the sketch below).
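To make the trade-off concrete, here's a minimal sketch using Python's built-in sqlite3; the tables and column names are illustrative assumptions, not anything from the article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized: the author's name lives in exactly one row, so updates
# are safe and cheap, but every read pays for a join.
cur.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, "
    "author_id INTEGER REFERENCES authors(id), title TEXT)"
)
cur.execute("INSERT INTO authors VALUES (1, 'Fernando')")
cur.execute("INSERT INTO posts VALUES (1, 1, 'On Normalization')")

# Read path: a join is required to resolve the author's name.
cur.execute(
    "SELECT p.title, a.name FROM posts p "
    "JOIN authors a ON a.id = p.author_id"
)
print(cur.fetchall())  # [('On Normalization', 'Fernando')]

# Denormalized: the name is copied onto each post, so the read is a
# single-table scan, at the cost of updating every copy on a rename.
cur.execute(
    "CREATE TABLE posts_denorm (id INTEGER PRIMARY KEY, "
    "author_name TEXT, title TEXT)"
)
cur.execute("INSERT INTO posts_denorm VALUES (1, 'Fernando', 'On Normalization')")
cur.execute("SELECT title, author_name FROM posts_denorm")
print(cur.fetchall())  # [('On Normalization', 'Fernando')]
```

The second schema is what the analytical/read-heavy side of the trade-off looks like: faster reads, redundant copies, and integrity pushed onto the writer.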

Simply put, Saurabh!

Thanks for mentioning my article on EDA!
