The Myth of Needing Multiple Databases Early
We regularly see early-stage teams introduce Redis, Elasticsearch, and MongoDB before reaching 1,000 users. The operational cost of running multiple databases and keeping them consistent is almost always underestimated, and the PostgreSQL capabilities that would have made them unnecessary are almost always overlooked.
Full-Text Search
GIN indexes on tsvector columns provide fast, relevance-ranked full-text search for most real-world use cases. Using this approach, we have replaced Elasticsearch for three production clients with no user-perceptible degradation in result quality.
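A minimal sketch of the pattern on PostgreSQL 12 or later, which supports generated columns; the articles table and its column names here are illustrative, not from any client project:

```sql
-- Hypothetical table; keep the tsvector in sync via a generated column.
ALTER TABLE articles
  ADD COLUMN search_vector tsvector
  GENERATED ALWAYS AS (
    to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
  ) STORED;

-- GIN index makes @@ matches fast.
CREATE INDEX articles_search_idx ON articles USING GIN (search_vector);

-- Relevance-ranked query using websearch-style syntax.
SELECT title, ts_rank(search_vector, query) AS rank
FROM articles,
     websearch_to_tsquery('english', 'postgres full text') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 10;
```

On older PostgreSQL versions, a trigger maintaining the tsvector column achieves the same result.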
Time-Series Data
With the TimescaleDB extension, PostgreSQL handles workloads that most teams assume require a dedicated time-series database. Automatic partitioning by time interval, columnar compression, and continuous aggregates handle 10M+ events per day on modest hardware.
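The three features named above map to a few statements. A sketch assuming the timescaledb extension is installed; the events table and its columns are illustrative:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Hypothetical event stream.
CREATE TABLE events (
  time      timestamptz NOT NULL,
  device_id int         NOT NULL,
  value     double precision
);

-- Convert to a hypertable: automatic partitioning by time interval.
SELECT create_hypertable('events', 'time');

-- Continuous aggregate: hourly averages maintained incrementally,
-- so dashboards never scan raw rows.
CREATE MATERIALIZED VIEW events_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM events
GROUP BY bucket, device_id;

-- Enable columnar compression on older chunks.
ALTER TABLE events SET (timescaledb.compress);
SELECT add_compression_policy('events', INTERVAL '7 days');
```

Queries against events_hourly stay fast as raw volume grows, because the aggregate is refreshed incrementally rather than recomputed.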
JSON Documents
JSONB with GIN indexing handles document storage patterns well. We use it regularly for configuration storage, feature flag payloads, and event log data that would otherwise justify a MongoDB dependency.
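A sketch of the feature-flag case; the table, column, and payload shape are assumptions for illustration:

```sql
-- Hypothetical flag store.
CREATE TABLE feature_flags (
  id      serial PRIMARY KEY,
  payload jsonb NOT NULL
);

-- GIN index accelerates containment (@>) and key-existence (?) queries.
CREATE INDEX feature_flags_payload_idx ON feature_flags USING GIN (payload);

INSERT INTO feature_flags (payload)
VALUES ('{"flag": "new_checkout", "enabled": true, "cohorts": ["beta"]}');

-- Containment query: find all enabled flags via the index.
SELECT payload->>'flag' AS flag_name
FROM feature_flags
WHERE payload @> '{"enabled": true}';
```

For workloads that only ever query a handful of keys, an expression index on those keys (e.g. on payload->>'flag') is smaller and faster than a whole-document GIN index.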
Where the Real Limits Are
Full-text search at very high query volume with complex relevance tuning eventually warrants Elasticsearch. Real-time pub/sub at massive scale warrants Redis. Analytics over petabytes of data warrants BigQuery or Redshift. In 12 years, fewer than 20% of the projects that introduced a second database actually needed it at their current scale.
