🚀 We Just Hit 1M+ Records! What Changed?

In one of my projects, we recently crossed ~1 million order records, and that came with new performance challenges. So… what happened?

👉 The Order List View started to slow down. Why? It relied on JOINs across 4–5 tables, plus filtering, searching, and pagination — all putting load on the backend.

✅ How We Tackled It Step by Step:

  1. Profiling & Monitoring
    Used tools like Sentry and New Relic to find slow endpoints and trace the ORM queries behind them (see the first sketch after this list).
  2. ORM Optimization (Django)
    Added select_related and prefetch_related, fetched only the fields the list view actually needs, and indexed high-usage filter fields (second sketch below).
  3. Frontend Optimization
    Reduced the number of search-triggered API requests.
    Result: Less noise, faster backend responses.
  4. Vertical Scaling
    Upgraded from a low-spec instance to a higher-spec one.
    Example: (X GB RAM / Y vCPU) → (4X GB RAM / 2Y vCPU)
  5. Elasticsearch Integration (in progress)
    Moving from JOIN-heavy queries to denormalized, document-based indexing. Data that used to span 4–5 tables is now fetched in milliseconds (third sketch below).
  6. Horizontal Scaling (future plan)
    Planning to split the load across multiple servers as data grows.
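
First, step 1. Here's a minimal sketch of wiring Sentry's performance tracing into Django; the DSN and sample rate below are placeholders, not our real config:

```python
# settings.py: hypothetical Sentry setup for tracing slow requests
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://<key>@sentry.example.com/1",  # placeholder DSN
    integrations=[DjangoIntegration()],
    traces_sample_rate=0.1,  # trace 10% of requests; tune to your traffic
)
```

With tracing enabled, each slow request shows its ORM queries as spans, which makes the JOIN-heavy ones easy to spot.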
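
Next, a rough sketch of the step 2 ORM changes. The Order model and field names here are invented for illustration; the real models are more involved:

```python
from django.db import models

class Order(models.Model):
    # db_index on the fields the list view filters and sorts by
    status = models.CharField(max_length=20, db_index=True)
    total = models.DecimalField(max_digits=10, decimal_places=2)
    created_at = models.DateTimeField(auto_now_add=True, db_index=True)
    customer = models.ForeignKey("Customer", on_delete=models.CASCADE)
    shipping_address = models.ForeignKey("Address", on_delete=models.CASCADE)

# Before: N+1 queries, every column fetched
orders = Order.objects.all()

# After: related rows joined/prefetched up front, only needed columns loaded
orders = (
    Order.objects
    .select_related("customer", "shipping_address")  # FK rows via SQL JOIN
    .prefetch_related("items")  # hypothetical reverse FK to order items
    .only("id", "status", "total", "created_at",
          "customer__name", "shipping_address__city")
)
```

The point of select_related/prefetch_related is turning an N+1 pattern into a small, fixed number of queries per page.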
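
Finally, one way to express the step 5 denormalization (we're still mid-migration), sketched here with the django-elasticsearch-dsl package; model and field names are again placeholders:

```python
from django_elasticsearch_dsl import Document, fields
from django_elasticsearch_dsl.registries import registry

from orders.models import Order  # hypothetical app/model

@registry.register_document
class OrderDocument(Document):
    # Related-table data is copied in at index time, so the list/search
    # endpoint reads one flat document instead of JOINing 4-5 tables.
    customer_name = fields.TextField(attr="customer.name")
    city = fields.KeywordField(attr="shipping_address.city")

    class Index:
        name = "orders"

    class Django:
        model = Order
        fields = ["status", "total", "created_at"]
```

Search and filtering then hit the index directly, e.g. OrderDocument.search().query("match", customer_name="..."), instead of a multi-JOIN SQL query.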

📊 Performance Boost

⏱️ Before:
Sluggish and inconsistent — sometimes unbearably slow (20s) 😬

⚡ Now:
Smooth and responsive in most cases 🫠
✅ Up to 3x faster than before

🎯 Next Goal:
Snappy even under load 😊
🔜 Targeting up to 5x improvement on the worst cases

💡 Takeaway:

  • ✅ Profiling & Monitoring
  • ✅ ORM Optimization
  • ✅ Frontend Control
  • ✅ Vertical Scaling
  • ✅ Elasticsearch

Together, these changes brought:

  • ✅ Happier users
  • ✅ Faster backend
  • ✅ Scalable architecture


💬 What else should we consider as we scale?
Caching? Asynchronous processing? DB replicas?
Would love to hear your thoughts: Join the discussion on Telegram