13 Comments
Akos Komuves

It's quite incredible to see what kind of problems occur at such a scale. I have no idea how I would have done it. On the other hand, I'm relieved that the cat videos I upload go through Kafka, pgBouncer, and Aurora. 😃

And of course, thanks for the shoutout! 🙇‍♂️ Glad you liked this article.

Saurabh Dashora

Haha...the cat videos are the most important ones.

Thanks Akos!

Kisan Tamang

Interesting read. Thanks for sharing!

Raul Junco

This was a great example of replacing a production service.

Dual writes and backfill was the right move!

Thanks for explaining this and mentioning my article, Saurabh.
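For readers skimming the thread, the dual-write + backfill pattern mentioned above can be sketched roughly like this (a toy in-memory version; `DualWriteStore` and the dict-backed stores are illustrative, not Reddit's actual code):

```python
# Toy sketch of the dual-write + backfill migration pattern.
# Class and variable names are made up for illustration.

class DualWriteStore:
    """New writes go to both stores; reads stay on the old store
    until the backfill catches up and the new store is verified."""

    def __init__(self, old, new):
        self.old = old
        self.new = new

    def write(self, key, value):
        self.old[key] = value  # old store remains the source of truth
        self.new[key] = value  # shadow write to the new store

    def backfill(self):
        # Copy historical rows that predate the dual-write cutover.
        # setdefault avoids overwriting newer dual-written values.
        for key, value in self.old.items():
            self.new.setdefault(key, value)

old = {"post:1": "hello"}  # row written before the migration started
new = {}
store = DualWriteStore(old, new)
store.write("post:2", "world")  # dual-written to both stores
store.backfill()                # historical row copied over
print(new)  # both posts now present in the new store
```

Once the backfill finishes and a verification pass shows the two stores agree, reads can be flipped to the new store and the old one retired.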

Saurabh Dashora

Thanks Raul!

Reddit's solution is pretty neat, I agree.

Franco Fernando

Interesting article, Saurabh! Migrating data is always a big challenge.

Saurabh Dashora

Thanks Fernando.

Migrations can go either way and it's important to have safeguards in place.

Devarshi Shimpi

Great article! Thanks for writing!

Saurabh Dashora

Thank you Devarshi!

Daniel Moka

100K is a hell of a performance. Solid post, my friend!

Saurabh Dashora

Thank you Daniel!

It was indeed a great case study to learn about.

Tarun Pahuja

Since the latest posts (let's say the last 1K) would reside on a single partition because of range-based partitioning, won't that create a hot partition for reads, given that Reddit returns the latest posts first?
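The hot-partition concern in the question can be shown with a toy sketch (the partition count and range width are made up; this is not Reddit's scheme). With range partitioning on a monotonically increasing post ID, the newest posts all land in the last partition, while a hash scheme spreads the same reads across partitions:

```python
# Toy illustration of the hot-partition concern with range-based
# partitioning on a monotonically increasing post ID.

NUM_PARTITIONS = 4
RANGE_WIDTH = 250  # IDs per range partition (illustrative)

def range_partition(post_id):
    # IDs 0-249 -> partition 0, 250-499 -> 1, etc.
    return min(post_id // RANGE_WIDTH, NUM_PARTITIONS - 1)

def hash_partition(post_id):
    # A simple modulo "hash" spreads IDs across partitions.
    return post_id % NUM_PARTITIONS

latest_ids = range(900, 1000)  # the most recent 100 posts

range_hits = {range_partition(i) for i in latest_ids}
hash_hits = {hash_partition(i) for i in latest_ids}

print(sorted(range_hits))  # every "latest posts" read hits one partition
print(sorted(hash_hits))   # the same reads are spread across all partitions
```

The usual mitigations are a read cache in front of the hot range, replicas of the latest partition, or a partition key that mixes in something non-monotonic.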