
Are you still using DynamoDB?

Back in 2019, when I first worked with DynamoDB on a project, I found it super interesting. It felt clean, fast, and very cloud-native. No servers to patch, no failover runbooks at 3am, and scaling looked almost magical. For a startup team trying to move quickly, it was honestly a great fit.

At the beginning, DynamoDB solved real problems for us:

  • very low operational overhead
  • easy infra setup in AWS
  • predictable API surface
  • good integration with Lambda and event-driven flows
  • solid durability story out of the box

It was one of those tools that made us feel “we can ship this now and fix details later.”

And then the bills started coming.

The Real Bottleneck

DynamoDB is not “expensive” by default. It becomes expensive when your access patterns are unclear, or when your product evolves faster than your table design.

What hurt us most was not one single thing. It was the combination:

  • growing read/write volume
  • more GSIs added over time
  • hot partitions from uneven keys
  • scans sneaking into prod paths
  • retries and backoffs increasing request pressure
  • team members not fully understanding RCU/WCU behavior

A lot of teams think the hardest part is schema design. I’d argue the hardest part is workload discipline. If your app starts doing accidental scans or wide fan-out queries, DynamoDB will remind you with a nice monthly invoice.
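
To make the "accidental scan" point concrete, here is a minimal sketch of the difference, assuming a boto3 table with a generic pk/sk key schema. The table name, key names, and key formats are placeholders for illustration, not our actual schema:

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("orders")  # placeholder table name

# The accidental pattern: Scan reads every item in the table and filters
# afterwards, so you consume read capacity for the whole table on each call
# (and still have to paginate through 1 MB pages).
def orders_for_customer_scan(customer_id: str) -> list[dict]:
    resp = table.scan(FilterExpression=Attr("pk").eq(f"CUSTOMER#{customer_id}"))
    return resp["Items"]

# The intended pattern: Query only touches the items under one partition key,
# so read cost scales with the result set, not with the table size.
def orders_for_customer_query(customer_id: str) -> list[dict]:
    resp = table.query(KeyConditionExpression=Key("pk").eq(f"CUSTOMER#{customer_id}"))
    return resp["Items"]
```

Both return the same items on a small table, which is exactly why the scan version tends to slip into prod unnoticed until the bill arrives.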

Why People Still Use It

Even with cost pressure, I still think DynamoDB is very strong in specific cases:

  1. You are all-in on AWS and want speed over infra complexity.
  2. Your access patterns are stable and well-tested.
  3. You can model data around keys and query paths early (see the sketch below).
  4. You actually monitor capacity, throttling, and partition health.

If those are true, DynamoDB can be excellent. Not just “good enough,” actually excellent.
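
For point 3, “modeling around keys and query paths” in practice meant writing the access patterns down first and only then choosing key shapes. A hypothetical single-table sketch, with entity names and key formats made up for illustration:

```python
# Access patterns decided up front, then encoded into the keys:
#
#   Access pattern                   Key condition
#   get customer profile             pk = CUSTOMER#<id>, sk = PROFILE
#   list orders for a customer       pk = CUSTOMER#<id>, sk begins_with ORDER#
#   get order by id (via a GSI)      gsi1pk = ORDER#<id>

def customer_profile_item(customer_id: str, name: str) -> dict:
    return {
        "pk": f"CUSTOMER#{customer_id}",
        "sk": "PROFILE",
        "name": name,
    }

def order_item(customer_id: str, order_id: str, total: int) -> dict:
    return {
        "pk": f"CUSTOMER#{customer_id}",
        "sk": f"ORDER#{order_id}",
        "gsi1pk": f"ORDER#{order_id}",  # projected into a hypothetical GSI
        "total": total,
    }
```

The exact layout matters less than the discipline: every new feature either fits one of the listed patterns or forces a deliberate conversation about a new one.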

What Changed For Us

As systems grew, we needed more control over:

  • indexing strategy
  • compaction behavior
  • async indexing and backlog visibility
  • migration flows
  • ETL from external sources
  • predictable internal costs

That pushed us toward building an internal database layer focused on service-to-service persistent and transactional data.

The key lesson: the base data must remain the source of truth; indexes are derived views. Once we locked this rule in, architecture decisions became simpler. Rebuilds, repairs, retries, and conformance tests all got easier to reason about.
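
In code, “indexes are derived views” roughly means every index entry is a pure function of a base record, so a full rebuild is always possible. A minimal sketch with hypothetical record shapes and a placeholder index_store interface (not a real library):

```python
from typing import Iterable

def derive_index_entry(order: dict) -> dict:
    # The projection is a pure function of the base record: no extra state,
    # nothing that cannot be recomputed later.
    return {
        "index_key": f"STATUS#{order['status']}",
        "order_id": order["order_id"],
        "updated_at": order["updated_at"],
    }

def rebuild_index(base_records: Iterable[dict], index_store) -> None:
    # index_store stands in for whatever holds the derived view; because the
    # base data is the source of truth, dropping and recreating it is safe.
    index_store.clear()
    for record in base_records:
        index_store.put(derive_index_entry(record))
```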

We also got stricter about invariants:

  • what is linearizable vs eventual
  • what retry means (sketched below)
  • how queue IDs behave
  • what pagination tokens mean
  • what “done” means for async jobs

This part is boring on paper, but it saves you later when prod starts doing weird things.
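
As one example of pinning down “what retry means”: we leaned on conditional writes so a retried request cannot create a duplicate. A rough sketch with boto3, where the table name and key layout are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("jobs")  # placeholder table name

def record_job_once(job_id: str, payload: dict) -> bool:
    """Write a job record exactly once; retrying the same job_id is a no-op."""
    try:
        table.put_item(
            Item={"pk": f"JOB#{job_id}", "sk": "STATE", **payload},
            # The write only succeeds if no item with this key exists yet,
            # so a retry after a timeout cannot create a second record.
            ConditionExpression="attribute_not_exists(pk)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # already recorded: the caller treats the retry as done
        raise
```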

If You’re Staying on DynamoDB

If I had to give one checklist to teams still on DynamoDB, it would be this:

  1. Write access-pattern contracts before adding new features.
  2. Reject scans in hot paths unless there is a very explicit reason.
  3. Track cost per endpoint/use-case, not only per table.
  4. Treat every new GSI as a product decision, not a dev convenience.
  5. Build load tests that mirror real key distributions, not uniform fake data (see the sketch after this list).
  6. Have clear rules for retries and idempotency.
  7. Keep an eye on “small” features that add hidden read amplification.
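
For point 5, the cheapest trick we found was a skewed key generator instead of uniform random IDs, so hot partitions show up in the load test the same way they do in production. A minimal sketch; the Zipf-style exponent, customer count, and key format are assumptions you would tune to your own traffic:

```python
import random

NUM_CUSTOMERS = 10_000
SKEW_EXPONENT = 1.1  # higher = more traffic concentrated on a few keys

# Zipf-like weights: rank 1 gets the most traffic, the tail gets very little.
weights = [1 / (rank ** SKEW_EXPONENT) for rank in range(1, NUM_CUSTOMERS + 1)]

def next_request_key() -> str:
    customer = random.choices(range(1, NUM_CUSTOMERS + 1), weights=weights, k=1)[0]
    return f"CUSTOMER#{customer:06d}"

# A handful of customers dominate the sample, just like a real hot partition.
sample = [next_request_key() for _ in range(1000)]
```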

Final Thought

I’m not anti-DynamoDB. It helped us move fast when we needed speed most. But if your workload matures and your bill curve becomes the loudest signal in the room, you probably need to revisit the architecture, not just tune capacity.

Sometimes “managed” is the right answer. Sometimes “managed” becomes “we don’t control enough.”

The trick is to know when that line is crossed, before the bottleneck becomes your roadmap.
