13 Comments
Jul 21 · edited Jul 21 · Liked by Saurabh Dashora

It's quite incredible to see what kind of problems occur at such a scale. I have no idea how I would have done it. On the other hand, I'm relieved that the cat videos I upload go through Kafka, pgBouncer, and Aurora. 😃

And of course, thanks for the shoutout! 🙇‍♂️ Glad you liked this article.

author

Haha...the cat videos are the most important ones.

Thanks Akos!

Jul 18 · Liked by Saurabh Dashora

Interesting read. Thanks for sharing!

author

Thank you


This was a great example of replacing a production service.

Dual writes and backfill was the right move!

Thanks for explaining this and mentioning my article, Saurabh.

author

Thanks Raul!

Reddit's solution is pretty neat I agree.


Interesting article, Saurabh! Migrating data is always a big challenge.

author

Thanks Fernando.

Migrations can go either way and it's important to have safeguards in place.

Jul 16 · Liked by Saurabh Dashora

Great article! Thanks for writing!

author

Thank you Devarshi!


100K is a hell of a performance, solid post my friend!

author

Thank you Daniel!

It was indeed a great case study to learn about.


Since the latest posts (let's say the last 1K) would reside on a single partition because of range-based partitioning, won't that create a hot partition for reads, given that Reddit would mostly serve the latest posts?
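A minimal sketch of the concern, assuming a simple range-partitioning scheme where post IDs are bucketed into fixed-size ID ranges (the `PARTITION_SIZE` value and IDs here are made up, not Reddit's actual layout):

```python
# Assumed scheme: range-based partitioning by post_id, where each
# partition covers a fixed, contiguous block of IDs.
PARTITION_SIZE = 1_000_000

def partition_for(post_id: int) -> int:
    """Map a post_id to its range partition by integer division."""
    return post_id // PARTITION_SIZE

# The newest 1K posts have the highest, contiguous IDs, so they all
# fall into the same (latest) partition.
latest_ids = range(42_000_000, 42_001_000)
hit_partitions = {partition_for(pid) for pid in latest_ids}
print(hit_partitions)  # a single partition absorbs all "latest posts" reads
```

Because every "latest posts" query maps to that one trailing partition, read traffic concentrates there instead of spreading across partitions, which is exactly the hot-partition risk the question raises.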
