Our Database Architecture: Why We Use Four Different Systems
Maya Chen
Question Everything You Know About Databases
Our database setup looks nothing like what I learned in school. We’ve made choices that would make a DBA from 2015 nervous. Those same choices have let us scale to millions of requests without hiring a database team.
The Old Rules
Traditional database wisdom says:
- Normalize your data to avoid duplication
- Use foreign keys to maintain integrity
- Design your schema before writing code
- Optimize queries, not tables
These rules made sense when disk was expensive, memory was scarce, and applications talked to one database. That’s not our world anymore.
Our Actual Architecture
We run four systems, each optimized for a specific access pattern:
PostgreSQL for transactions. User accounts, billing, anything that needs ACID guarantees. We don’t fight Postgres—we use it for what it’s good at.
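The point of "anything that needs ACID guarantees" is that related writes commit or roll back as one unit. Here's a minimal sketch of that pattern; it uses Python's stdlib sqlite3 so it's self-contained, but the same shape applies with Postgres via psycopg. The schema and `transfer` helper are illustrative, not our actual billing code.

```python
import sqlite3

# sqlite3 stands in for Postgres so the example runs anywhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: both updates commit, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False  # the rollback already undid the debit

transfer(conn, 1, 2, 30)   # succeeds: balances become 70 and 30
transfer(conn, 1, 2, 500)  # rolls back: would overdraw account 1
```

The second call is the whole reason to use a transactional store here: the debit happens, the check fails, and the rollback leaves no half-applied state behind.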
Redis for hot data. Session state, rate limiting, feature flags. Anything accessed on every request lives here. Memory is cheap; latency is expensive.
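Rate limiting is the clearest example of why this data belongs in memory. A fixed-window limiter is one counter increment per request; with real Redis that's `INCR` plus `EXPIRE` on a per-user, per-window key. The sketch below uses a plain dict as a stand-in for Redis so it's runnable; the class and key scheme are illustrative.

```python
import time

class RateLimiter:
    """Fixed-window rate limiter. A dict stands in for Redis here;
    in production each bucket would be a Redis key bumped with INCR
    and expired with EXPIRE."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (key, window number) -> request count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

rl = RateLimiter(limit=3, window_seconds=60)
results = [rl.allow("user:42", now=100) for _ in range(5)]
# first three requests in the window pass, the rest are rejected
```

The work per request is one hash lookup and an increment, which is why putting it on the request path is cheap.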
Elasticsearch for search. We denormalize aggressively into Elasticsearch. Yes, we store data twice. The alternative—complex joins on read—was slower and more fragile.
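"Denormalize aggressively" means doing the join once, at write time, and indexing a flat document. A sketch of that shape, with made-up field names rather than our actual schema:

```python
def build_search_doc(user, org, tags):
    """Flatten normalized rows into one search document so reads
    need no joins. Org fields are deliberately duplicated."""
    return {
        "id": user["id"],
        "name": user["name"],
        "email": user["email"],
        "org_name": org["name"],   # copied from the org row
        "org_plan": org["plan"],   # stored twice, on purpose
        "tags": [t["label"] for t in tags],
    }

user = {"id": 7, "name": "Ada", "email": "ada@example.com", "org_id": 3}
org = {"id": 3, "name": "Acme", "plan": "pro"}
tags = [{"label": "admin"}, {"label": "beta"}]

doc = build_search_doc(user, org, tags)
# against a real cluster, roughly:
# es.index(index="users", id=doc["id"], document=doc)
```

The cost shows up on the write path: when an org renames itself, every user document that copied `org_name` has to be reindexed. We accepted that to keep reads simple.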
S3 for blobs. Images, exports, anything over 1MB. Databases are terrible at binary data. S3 is purpose-built for it.
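The routing rule is mechanical: past the size threshold, the object goes to S3 and the database keeps only a pointer. A sketch of that decision, with a dict standing in for the bucket; the key scheme and helper name are illustrative:

```python
import hashlib

BLOB_THRESHOLD = 1 * 1024 * 1024  # 1MB, per the rule above

def store(payload: bytes, s3: dict) -> dict:
    """Return the value to persist in the database: small payloads
    inline, large ones as an S3 key reference."""
    if len(payload) <= BLOB_THRESHOLD:
        return {"inline": payload}
    # content-addressed key; uploads of identical blobs dedupe for free
    key = "blobs/" + hashlib.sha256(payload).hexdigest()
    s3[key] = payload  # with boto3, roughly: s3.put_object(Bucket=..., Key=key, Body=payload)
    return {"s3_key": key}

bucket = {}
small = store(b"a short export", bucket)
big = store(b"x" * (2 * 1024 * 1024), bucket)
```

The database row stays tiny either way, which is the point: table scans and backups never drag multi-megabyte blobs along.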
The Tradeoffs We Made
This architecture has costs:
Eventual consistency. When you update user data, the search index might be stale for a few seconds. We decided this was acceptable. Your use case might differ.
Operational complexity. Four systems instead of one. More monitoring, more potential failure points. We mitigate with managed services and good alerting.
Sync logic. Keeping data consistent across systems requires careful thought. We use event sourcing—every change emits an event that downstream systems consume.
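The flow above can be sketched in a few lines: every change appends an event, and each downstream system consumes it to maintain its own view. This is a minimal in-process illustration; in our real system the log is durable and consumers run asynchronously (which is exactly where the eventual consistency comes from). Event shape and handler names are made up for the example.

```python
event_log = []       # durable log in production; a list here
search_index = {}    # stands in for Elasticsearch
cache = {}           # stands in for Redis

def update_search(event):
    """Projection: keep the search document in sync."""
    if event["type"] == "user_updated":
        search_index[event["user_id"]] = event["fields"]

def update_cache(event):
    """Projection: drop the now-stale cached entry."""
    if event["type"] == "user_updated":
        cache.pop(event["user_id"], None)

def emit(event):
    """Append the event, then let each consumer process it."""
    event_log.append(event)
    for handler in (update_search, update_cache):
        handler(event)

cache[7] = {"name": "Old Name"}
emit({"type": "user_updated", "user_id": 7, "fields": {"name": "Ada"}})
```

Because consumers only ever read events, adding a fifth system later means writing one more handler, not touching every write path.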
What We’d Do Differently
If I started over:
Start simpler. We split too early. A single Postgres database would have been fine for our first year. Premature optimization cost us development time.
Event sourcing from day one. Retrofitting was painful. If you think you might need multiple data stores eventually, build the event infrastructure early.
Invest in observability. Understanding what’s actually happening across distributed systems is hard. We underinvested in tooling and paid for it in debugging time.
The Principle
There’s no universal “right” database architecture. There’s only the architecture that fits your access patterns, your scale, and your team’s ability to operate it.
Anyone who tells you otherwise is selling something.