19" rack-mounted chassis for 8 or even 16 CPU cards, with a Gigabit Ethernet switch and front-loading hot-swappable eSATA drives (one per CPU card), maybe even a built-in load balancer on the Gigabit switch. Power consumption would be ridiculously low, yet CPU horsepower ridiculously high. Cluster/blade/rack/backplane.
Could easily fit 64 of those standing up, top to bottom, in a 19" rack, with shared /usr/ over the network. That would give ~3.75 Gflops using ~300 watts. Another idea would be a dedicated disk rack with one CPU card which slots into a RAID module, with space for 32 2.5" hard drives, standing up, top to bottom, with about 2 GB/s aggregate throughput. Maybe both, plus a 10GigE switch, could be put in one rack enclosure, as neither the CPU boards nor 2.5" hard drives are very deep. The entire combined rack would use only about 700 watts. The only major problem i see is cooling. An entire rack full of (24 of) those 2U enclosures would provide 0.09 Tflops and 50 GB/s for $125K. Was first off by a factor of 100 wrt flops; not as impressive now compared to a multi-GPU workhorse.
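A quick script to sanity-check the arithmetic above; the per-card figure is inferred by dividing the stated totals, so treat it as an estimate rather than a measured number:

```python
# Back-of-envelope check of the cluster numbers above.
# Per-card Mflops is inferred from the stated totals, not measured.
cards_per_enclosure = 64
gflops_per_enclosure = 3.75          # stated: ~3.75 Gflops per 64-card unit
watts_per_enclosure = 300            # stated: ~300 W
gflops_per_card = gflops_per_enclosure / cards_per_enclosure

enclosures_per_rack = 24             # 2U combined enclosures in a full rack
rack_tflops = enclosures_per_rack * gflops_per_enclosure / 1000

gbps_per_disk_module = 2             # ~2 GB/s from 32 x 2.5" drives
rack_gbps = enclosures_per_rack * gbps_per_disk_module

print(f"per card: {gflops_per_card * 1000:.1f} Mflops")
print(f"rack compute: {rack_tflops:.2f} Tflops")
print(f"rack throughput: ~{rack_gbps} GB/s")
```

This reproduces the 0.09 Tflops figure, and 24 disk modules at 2 GB/s each give 48 GB/s, i.e. the "about 50 GB/s" quoted.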
The 100 megabit switch chip RTL8309 (http://realtek.info/pdf/rtl8309sb.pdf) is available in quantity 1 for experimentation from Future Electronics: http://www.futureelectronics.com/en/Technologies/Product.aspx?ProductID=RTL8309SBLFREALTEKSEMICONDUCTOR7787054 It is capable of VLANs, 802.1q, and trunking two ports together.
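For reference, the 802.1q support means the switch inserts or strips a 4-byte VLAN tag after the source MAC of each frame. A minimal sketch of that tag's wire layout (standard 802.1q, not anything RTL8309-specific):

```python
import struct

def vlan_tag(vid, pcp=0, dei=0):
    """Build the 4-byte 802.1q tag: 16-bit TPID 0x8100 marking a tagged
    frame, then a 16-bit TCI packing priority (3 bits), drop-eligible
    indicator (1 bit), and the 12-bit VLAN ID."""
    assert 0 <= vid < 4096, "VLAN ID is 12 bits"
    tci = (pcp << 13) | (dei << 12) | vid
    return struct.pack("!HH", 0x8100, tci)

# VLAN 42 at priority 3 -> bytes 81 00 60 2a on the wire
print(vlan_tag(vid=42, pcp=3).hex())  # prints "8100602a"
```

The 12-bit VID field is why such switches top out at 4094 usable VLANs (0 and 4095 are reserved).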