Yes, the tried and true advice holds: don't scale until you must. Scaling takes time and energy away from creating something that people want, so make sure you do that first. Once you have people banging down your door, read on.
Monitoring/Logging
Related to the previous note: if you don't know what your current systems are doing, you can't effectively plan or design for the next stage. I wasted two weeks rewriting a Ruby component in Java only to realize the effort was unnecessary; Ruby was performing just fine, thank you very much. Metrics like 99th-percentile requests per second, reported once per hour, can easily be produced from request logs and standard Unix tools.
cron, grep, awk or a small ruby script, Nagios
cat request.log | grep "POST /messages" | ruby times.rb
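The original times.rb isn't shown here; one possible sketch, assuming the request duration in milliseconds is the last whitespace-separated field of each matching log line, would report the 99th-percentile request time like so:

# times.rb (sketch): read durations from stdin, print count and 99th percentile
times = STDIN.readlines.map { |line| line.split.last.to_f }.sort
if times.empty?
  puts "no requests"
else
  p99 = times[(times.size * 0.99).ceil - 1]
  puts "requests: #{times.size}  99th percentile: #{p99} ms"
end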
Procrastinate
From your logging and planning you should know what kinds of load you have. Some tasks roll in all at once but can be done later; push these to the background or schedule them for later. Background threads, work queues, and simple file drops with cron jobs are great ways to spread out the load. Many types of processing are more efficient when done in batch mode. If it's an option, do it.
JMS, ActiveMQ, cron
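For the simplest cases, a background thread and an in-process queue already go a long way. A minimal Ruby sketch (a stand-in for a real JMS/ActiveMQ setup; deliver_message and the job fields are made up for illustration):

require 'thread'

WORK = Queue.new

# stand-in for the real slow task (sending mail, resizing images, ...)
def deliver_message(job)
  puts "delivered #{job[:body]} to #{job[:to]}"
end

# background thread drains the queue whenever jobs show up
worker = Thread.new do
  while job = WORK.pop          # blocks until a job (or the nil sentinel) arrives
    deliver_message(job)
  end
end

# in the request handler: enqueue and return immediately
WORK << { :to => "user@example.com", :body => "hello" }

WORK << nil     # sentinel so this little demo can shut the worker down cleanly
worker.join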
Share the load
Embrace the stateless! A world without side effects lets you easily add nodes and load balance across them. Sometimes you can fake statelessness through sharding and replication. The goal is to get as much work as possible done by small, short-running tasks that can be handled on any one of many nodes.
HAProxy, ec2, rsync
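As a sketch of the load-balancing half, here is roughly what an HAProxy config for two identical stateless app nodes could look like (host addresses, ports, and timeouts are made-up placeholders):

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend web
    bind *:80
    default_backend app_nodes

backend app_nodes
    balance roundrobin
    server app1 10.0.0.1:8080 check   # any stateless node can serve any request
    server app2 10.0.0.2:8080 check   # adding capacity is just another server line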
Memoize (aka cache)
Just as functional languages have shown that stateless calls can see drastic performance gains from memoization (recording work already done so you don't have to do it again), your system of components can benefit from stateless services and caching.
- memcache
- HTTP caching proxy
- almost cache (cache all but the most recent changes to minimize work and focus optimization)
memcache, squid, disk
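The core idea in a few lines of Ruby (a single-process hash standing in for memcache, so this sketch only helps within one node; a shared cache lets every node reuse the work):

# memoize an expensive, side-effect-free lookup in a plain hash
CACHE = {}

def expensive_lookup(key)
  sleep 1                 # stands in for a slow query or computation
  key.to_s.reverse
end

def cached_lookup(key)
  CACHE[key] ||= expensive_lookup(key)   # compute once, reuse thereafter
end

cached_lookup("hello")    # slow the first time
cached_lookup("hello")    # served from the cache from then on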
Partition/shard
Think about how to partition your data and algorithms. This comes into play in many areas, from database query size to full-text search index size. Eventually you will outrun the CPU, memory, or storage capacity of an individual node. You need to determine your requirements for each resource and prioritize them in a triage-like manner.
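As a sketch of the idea, assuming numeric user ids and made-up host names, routing each key to a fixed shard can be as simple as:

# naive sharding: map each user id onto one of N database hosts
SHARDS = %w[db1.example.com db2.example.com db3.example.com db4.example.com]

def shard_for(user_id)
  SHARDS[user_id % SHARDS.size]   # lookup and store must both use the same rule
end

shard_for(42)   # => "db3.example.com"

The catch is that adding a shard changes the mapping, so plan how you will rebalance before you need to.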
Disk I/O
Before you waste time worrying about disk I/O, RAID schemes, or SAN tactics, first ask yourself: should this be on mechanical disk at all? If you need to serve thousands of requests per second or more, the answer is likely NO! Fancy disk subsystems buy you time; they do not buy you orders of magnitude. A typical seek time on a drive is around 10ms, which limits you to around 100 random requests per second per spindle. At best you are looking at one order of magnitude over this with many 15k SCSI drives, expensive controllers, and lots of ops setup and monitoring. If you are limited to 4 spindles per instance, you will need at least 10 times as many instances to match the performance of an in-memory service. So you need to find out: how big is my data? How many instances would it take to fit it in RAM? Is it partitionable (both lookup and store)? A cost-benefit analysis here can be enlightening.
iostat, tmpfs, MySQL, memcache
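A back-of-envelope version of that cost-benefit question, with made-up numbers you would swap for your own measurements:

# rough arithmetic from the section above: disk seeks/sec vs. fitting data in RAM
data_gb         = 200    # assumed working-set size
ram_per_node_gb = 32     # assumed usable RAM per instance
seek_ms         = 10     # typical mechanical seek time
spindles        = 4      # spindles per instance

reads_per_sec_per_node = spindles * (1000 / seek_ms)       # ~100 per spindle
nodes_to_fit_in_ram    = (data_gb.to_f / ram_per_node_gb).ceil

puts "a disk-bound node handles ~#{reads_per_sec_per_node} random reads/sec"
puts "#{nodes_to_fit_in_ram} node(s) would hold the whole data set in RAM"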