Re: "Perlod's guidelines seem reasonable"
2xRAM never applied to Linux (nor to Solaris 2); it's an old BSD (and hence SunOS 4) thing: if you had n MB of RAM you needed n MB of swap just as backing store, so 2n MB of swap gave you a total of 2n MB of virtual memory (and if swap was smaller than RAM you couldn't even use all your RAM). Yuck.
So to match the old SunOS rule of thumb of "swap = twice RAM" you'd use "swap = RAM" on Linux to get the same effective result. But that rule comes from a time when memory was small; 4MB of RAM was a luxury. Now we have gigs of RAM; do we really want to be doing slow slow slow swapping of gigabytes to disk?
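The arithmetic behind "old swap = 2x RAM equals Linux swap = RAM" can be sketched as a toy function (simplified; real kernels reserve some of both, and "VM" here ignores overcommit entirely):

```python
def effective_vm_mb(ram_mb: int, swap_mb: int, old_bsd: bool) -> int:
    """Usable virtual memory in MB under two allocation models.

    Old BSD / SunOS 4: every virtual page needs a backing page in
    swap, so usable VM is simply the swap size (RAM beyond swap is
    wasted).  Modern Linux: swap extends RAM, so VM = RAM + swap.
    """
    return swap_mb if old_bsd else ram_mb + swap_mb

# Old rule of thumb: 4 MB RAM, 8 MB swap -> 8 MB of virtual memory...
assert effective_vm_mb(4, 8, old_bsd=True) == 8
# ...matched on Linux by "swap = RAM":
assert effective_vm_mb(4, 4, old_bsd=False) == 8
```

Note how the old model also shows the "swap smaller than RAM" pathology: 16 MB RAM with 8 MB swap still only gives you 8 MB of usable VM.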
It gets even more complicated. Old-school swapping doesn't really happen these days; it's pretty much all paging ("swapping" used to mean placing a whole process into swap; "paging" works at the level of individual memory pages). Paging is more effective because a program can keep running and never needs to page back in its swapped-out pages if it never touches them. Note, though, that we still call it "swapping in/out pages of memory", so the terminology is less than clear :-) But this means that really infrequently used pages can be evicted from RAM and placed into swap with pretty much zero performance impact; they might never be used again!
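On Linux you can actually see how much of a given process sits in swap via the VmSwap field of /proc/&lt;pid&gt;/status. A small parser (the field name is real; the helper function is just my own sketch):

```python
def vm_swap_kb(status_text: str) -> int:
    """Extract VmSwap (kB of this process's pages currently in swap)
    from the contents of a Linux /proc/<pid>/status file.
    Returns 0 if the kernel doesn't report the field."""
    for line in status_text.splitlines():
        if line.startswith("VmSwap:"):
            return int(line.split()[1])
    return 0
```

Usage on a live system would be something like `vm_swap_kb(open("/proc/self/status").read())`; a long-running daemon with a large VmSwap and no performance complaints is exactly the "cold pages parked in swap for free" case.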
It gets yet more complicated when we look at demand paging of applications: even apps don't get loaded into memory in one chunk; the pages get loaded in as needed. Processes start faster, less RAM is used, and shared pages (eg common pages of libc that might be mapped into many processes) become even more efficient (the page is already in RAM). It also means the kernel can just _drop_ such pages (not send them to swap) and reload them from the original executable if they're ever needed again.
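Demand paging is easy to demo with mmap: mapping a file moves no data at all, and only the page you actually touch gets faulted in (a sketch; the function name is mine):

```python
import mmap
import os
import tempfile

def touch_first_byte(path: str) -> int:
    """Map a file and read one byte.  The mmap() call itself moves no
    data; the read below demand-faults in just the single page
    containing offset 0 -- the rest of the file never enters RAM."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            return m[0]

# Demo: a 4-page file of zero bytes; only page 0 ever gets paged in.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (4 * mmap.PAGESIZE))
    path = f.name
assert touch_first_byte(path) == 0
os.remove(path)
```

This is exactly what the kernel does with executables and shared libraries: the file itself is the backing store, so clean pages can be dropped and re-read later instead of occupying swap.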
And then we run into what failure mode you want to tolerate. For some people, having the OOM killer take out a process and generate a hard failure is sometimes preferable to degraded performance ("why is it taking me 10 seconds just to get the login prompt on the console? Oh, the machine is swapping to death..."). Think of a cluster of web servers behind a load balancer: having the webserver die (and so drop out of the load balancer much more quickly) might be preferable to having it take forever to respond to an API request (a load balancer health check might succeed where an API call would time out because it needs more backend resources).
So how much swap do you need? It depends :-)