Sunday, March 16, 2008

Slashdot's Setup, Hardware

As part of our 10-year anniversary coverage, we intend to update our insanely dated FAQ entry that describes our system setup. Today is Part 1, where we talk mostly about the hardware that powers Slashdot. Next week we'll run Part 2, where we'll talk mostly about software. Read on to learn about our routers, our databases, our webservers, and more. And as a reminder, don't forget to bid on our charity auction for the EFF, and if you are in Ann Arbor, our anniversary party is tomorrow night.

CT: Most of the following was written by Uriah Welcome, famed sysadmin extraordinaire, responsible for our corporate intertubes. He writes...

Many of you have asked about the infrastructure that supports your favorite time sink... err, news site. The question even made the list of the top ten questions to ask CmdrTaco. So I've been asked to share our secrets on how we keep the site up and running, as well as a look towards the future of Slashdot's infrastructure. Please keep in mind that this infrastructure not only runs Slashdot, but also all the other sites owned by SourceForge, Inc.: SourceForge.net, Thinkgeek.com, Freshmeat.net, Linux.com, Newsforge.com, et al.

Well, let's begin with the most boring and basic details. We're hosted at a Savvis data center in the Bay Area. Our data center is pretty much like every other one: raised floors, UPSs, giant diesel generators, 24x7 security, mantraps, the works. Really, once you've seen one Class A data center, you've seen them all. (CT: I've still never seen one. And they won't let us take pictures. Boo, Savvis.)

Next, our bandwidth and network. We currently have two active-active gigabit uplinks; again, nothing unique here, no crazy routing, just symmetric, equal-cost uplinks. The uplinks terminate in our cage at a pair of Cisco 7301s that we use as our gateway/border routers. We do some basic filtering here, but nothing too outrageous; we tier our filtering to try to spread the load. From the border routers, the bits hit our core switches/routers, a pair of Foundry BigIron 8000s. They have been our workhorses throughout the years. The BigIron 8000s have been in production since we built this data center in 2002 and actually, having just checked... they haven't been rebooted since. These guys used to be our border routers, but alas... their CPUs just weren't up to the task after all these years of growth. Many machines plug directly into these core switches; however, for certain self-contained racks we branch off to Foundry FastIron 9604s. They are basically switches and do nothing but save us ports on the cores.

Now onto the meat: the actual systems. We've gone through many vendors over the years. Some good, some... not so much. We've had our share of problems with everyone. Currently in production we have the following: HP, Dell, IBM, Rackable, and, I kid you not, VA Linux Systems. Since this article is about Slashdot, I'll stick to its hardware. The first hop on the way to Slashdot is the load-balancing firewalls, which are a pair of Rackable Systems 1Us: P4 Xeon 2.66GHz, 2GB RAM, 2x80GB IDE, running CentOS and LVS. These guys distribute the traffic to the next hop, which are the web servers.

Slashdot currently has 16 web servers, all of which are running Red Hat 9. Two serve static content: javascript, images, and the front page for non-logged-in users. Four serve the front page to logged-in users. And the remaining ten handle comment pages. All web servers are Rackable 1U servers with 2 Xeon 2.66GHz processors, 2GB of RAM, and 2x80GB IDE hard drives. The web servers all NFS-mount the NFS server, which is a Rackable 2U with 2 Xeon 2.4GHz processors, 2GB of RAM, and 4x36GB 15K RPM SCSI drives. (CT: Just as a note, we frequently shuffle these 16 servers from one task to another to handle changes in load or performance. Next week's software story will explain in much more detail exactly what we do with those machines. Also as a note: the NFS export is read-only, which was really the only safe way to use NFS around 1999 when we started doing it this way.)

Besides the 16 web servers, we have 7 database servers, all currently running CentOS 4. They break down as follows: 2 dual Opteron 270s with 16GB RAM and 4x36GB 15K RPM SCSI drives. These are doing multiple-master replication, with one acting as the single DB that Slashdot writes to and the other acting as a reader. We have the ability to swap their roles dynamically at any time, providing an acceptable level of failover.

2 dual Opteron 270s with 8GB RAM and 4x36GB 15K RPM SCSI drives. These are Slashdot's reader DBs; each replicates from a specific one of the master databases listed above. The idea is that we can add more reader databases as we need to scale. These boxes are barely a year old and are still plenty fast for our needs.

Lastly, we have 3 quad P3 Xeon 700MHz boxes with 4GB RAM and 8x36GB 10K RPM SCSI drives, which are sort of our miscellaneous 'other' boxes. They host our accesslog writer, an accesslog reader, and Slashdot's search database. We need this much hardware for the accesslogs because moderation and stats require a lot of CPU time for computation.
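
To make the writer/reader split above a little more concrete, here is a minimal sketch in Perl with DBI. It is not our code: the hostnames, credentials, and table layout are made up, and the real logic lives in Slash's database layer (covered in Part 2). The point is just that writes always go to whichever master currently holds the write role, while reads can go to any reader box.

    #!/usr/bin/perl
    # Rough sketch only -- not Slash code. Hostnames, credentials, and the
    # schema are placeholders; this just shows the writer/reader split.
    use strict;
    use warnings;
    use DBI;

    # All writes go to whichever master is currently acting as the write DB.
    my $writer = DBI->connect(
        'DBI:mysql:database=slashdot;host=db-writer.example.internal',
        'slash', 'secret', { RaiseError => 1 },
    );

    # Reads can go to any of the reader boxes.
    my $reader = DBI->connect(
        'DBI:mysql:database=slashdot;host=db-reader1.example.internal',
        'slash', 'secret', { RaiseError => 1 },
    );

    # A hypothetical comment insert hits the writer...
    $writer->do(
        'INSERT INTO comments (sid, uid, comment) VALUES (?, ?, ?)',
        undef, 42, 1337, 'First post!',
    );

    # ...and the page that displays it reads from a replicated reader.
    my $rows = $reader->selectall_arrayref(
        'SELECT uid, comment FROM comments WHERE sid = ?',
        { Slice => {} }, 42,
    );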

And that is basically it, in a nutshell. There isn't anything too terribly crazy about the infrastructure. We like to keep things as simple as possible. This design is also very similar to what all the other SourceForge, Inc. sites use, and has proved to scale quite well.


Slashdot's Setup, Software

Today we have Part 2 in our exciting two-part series about the infrastructure that powers Slashdot. Last week Uriah told us all about the hardware powering the system. This week, Jamie McCarthy picks up the story and tells us about the software... from pound to memcached to MySQL and more. Read on for the details.

The software side of Slashdot takes over at the point where our load balancers -- described in Friday's hardware story -- hand off your incoming HTTP request to our pound servers.

Pound is a reverse proxy, which means it doesn't service the request itself; it just chooses which web server to hand it off to. We run 6 pounds: one for HTTPS traffic and the other 5 for regular HTTP. (Didn't know we support HTTPS, did ya? It's one of the perks for subscribers: you get to read Slashdot on the same webhead that admins use, which is always going to be responsive even during a crush of traffic -- because if it isn't, Rob's going to breathe down our necks!)

The pounds send traffic to one of the 16 apaches on our 16 webheads -- 15 regular, and the 1 HTTPS. Now, pound itself is so undemanding that we run it side-by-side with the apaches. The HTTPS pound handles SSL itself, handing off a plaintext HTTP request to its machine's apache, so the apache it redirects traffic to doesn't need mod_ssl compiled in. One less headache! Of our other 15 webheads, 5 also run a pound, not to distribute load but just for redundancy.

(Trivia: pound normally adds an X-Forwarded-For header, which Slash::Apache uses in place of the (internal) IP of the pound box as the client's address. But sometimes, if you use a proxy on the internet to do something bad, that proxy will send us an X-Forwarded-For header too, and we use it to try to track abuse. So we patched pound to insert a special X-Forward-Pound header instead, so it doesn't overwrite whatever may have come from an abuser's proxy.)
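
Here's a rough sketch of the idea in mod_perl 1.x terms. This is not the actual Slash::Apache code; the package name, the notes key, and the handler wiring are invented, and only the two header names come from the paragraph above.

    package My::PoundHeaders;
    # Sketch only: treat the pound-supplied X-Forward-Pound value as the
    # client address, while leaving any client-side X-Forwarded-For alone
    # so it can be used for abuse tracking.
    use strict;
    use warnings;
    use Apache::Constants qw(OK);

    sub handler {
        my $r = shift;

        # Our patched pound tells us the real client address here...
        if (my $client_ip = $r->header_in('X-Forward-Pound')) {
            # ...so use it instead of pound's internal IP.
            $r->connection->remote_ip($client_ip);
        }

        # ...while X-Forwarded-For, if present, came from the client side
        # (say, an open proxy) and gets stashed for the abuse trackers.
        if (my $proxy_chain = $r->header_in('X-Forwarded-For')) {
            $r->notes('proxy_forwarded_for' => $proxy_chain);
        }

        return OK;
    }

    1;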

The other 15 webheads are segregated by type. This segregation is mostly what pound is for. We have 2 webheads for static (.shtml) requests, 4 for the dynamic homepage, 6 for dynamic comment-delivery pages (comments, article, pollBooth.pl), and 3 for all other dynamic scripts (ajax, tags, bookmarks, firehose). We segregate partly so that if there's a performance problem or a DDoS on a specific page, the rest of the site will remain functional. We're constantly changing the code, and this sets up "performance firewalls" for when we silly coders decide to write infinite loops.

But we also segregate for efficiency reasons, like httpd-level caching and MaxClients tuning. Our webhead bottleneck is CPU, not RAM. We run MaxClients values that might seem absurdly low (5-15 for dynamic webheads, 25 for static), but our philosophy is that if we're not turning over requests quickly anyway, something's wrong, and stacking up more requests won't help the CPU chew through them any faster.

All the webheads run the same software, which they mount from a /usr/local exported by a read-only NFS machine. Everyone I've ever met outside of this company gives an involuntary shudder when NFS is mentioned, and yet we haven't had any problems since shortly after it was set up (2002-ish). I attribute this to a combination of our brilliant sysadmins and the fact that we only export read-only. The backend task that writes to /usr/local (to update index.shtml every minute, for example) runs on the NFS server itself.
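
For what it's worth, the safe way to do that kind of update over read-only NFS is to write the new file next to the old one and rename() it into place, so a webhead reading the export never sees a half-written page. A minimal sketch (the path and the page-building call are placeholders, not our actual task):

    #!/usr/bin/perl
    # Sketch of an atomic "publish a static page" step, the way a backend
    # task on the NFS server might refresh index.shtml. Not the real task.
    use strict;
    use warnings;

    my $target = '/usr/local/slash/site/index.shtml';   # hypothetical path
    my $html   = build_homepage();   # placeholder for real page generation

    # Write a temp file in the same directory, then rename() it over the
    # old file. rename() is atomic on the same filesystem, so NFS clients
    # always see either the old page or the new one, never a mix.
    my $tmp = "$target.new.$$";
    open my $fh, '>', $tmp or die "open $tmp: $!";
    print {$fh} $html;
    close $fh or die "close $tmp: $!";
    rename $tmp, $target or die "rename $tmp -> $target: $!";

    sub build_homepage { return "<html><!-- rendered stories --></html>\n" }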

The apaches are version 1.3, because there's never been a reason for us to switch to 2.0. We compile in mod_perl, and use lingerd to free up RAM during delivery, but the only other nonstandard module we use is mod_auth_useragent, which keeps unfriendly bots away. Slash does make extensive use of each phase of the request loop (largely so we can send our 403s to out-of-control bots using a minimum of resources, and so your page is fully on its way while we write to the logging DB).
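
To give a flavor of why the early phases matter: a handler that runs at access-control time can turn away a misbehaving bot before any real work happens. This sketch is not mod_auth_useragent (that's a C module) and not Slash's own 403 logic; the package name and patterns are invented, and it just shows the shape of a cheap early rejection under mod_perl 1.x.

    package My::BlockBots;
    # Reject abusive user agents in the access-control phase so the 403
    # costs almost nothing: no content handler, no DB work.
    use strict;
    use warnings;
    use Apache::Constants qw(OK FORBIDDEN);

    my @blocked = (
        qr/BadBot/i,          # hypothetical out-of-control crawler
        qr/EmailSiphon/i,     # hypothetical address harvester
    );

    sub handler {
        my $r  = shift;
        my $ua = $r->header_in('User-Agent') || '';

        for my $pattern (@blocked) {
            return FORBIDDEN if $ua =~ $pattern;   # cheap 403, early exit
        }
        return OK;
    }

    1;

    # httpd.conf (apache 1.3 + mod_perl) would wire it in with something like:
    #   PerlAccessHandler My::BlockBots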

Slash, of course, is the open-source perl code that runs Slashdot. If you're thinking of playing around with it, grab a recent copy from CVS: it's been years since we got around to a tarball release. The various scripts that handle web requests access the database through Slash's SQL API, implemented on top of DBD::mysql (now maintained, incidentally, by one of the original Slash 1.0 coders) and of course DBI.pm. The most interesting parts of this layer might be:

(a) We don't use Apache::DBI. We use connect_cached, but actually our main connection cache is the global objects that hold the connections. Some small chunks of data are so frequently used that we keep them around in those objects.

(b) We almost never use statement handles. We have eleven ways of doing a SELECT, and the differences are mostly in how we massage the results into the perl data structure they return.

(c) We don't use placeholders. Originally that was because DBD::mysql didn't take advantage of them, and now it's because we think any speed increase in a reasonably optimized web app is a trivial payoff for non-self-documenting argument order. Discuss!

(d) We built in replication support. A database object requested as a reader picks a random slave to read from for the duration of your HTTP request (or the backend task). We can weight them manually, and we have a task that reweights them automatically. (If we do something stupid and wedge a slave's replication thread, every Slash process, across 17 machines, starts throttling back its connections to that machine within 10 seconds. This was originally written to handle slave DBs getting bogged down by load, but with our new faster DBs, that just never happens, so if a slave falls behind, one of us probably typed something dumb at the mysql> prompt.)

(e) We bolted on memcached support. Why bolted on? Because back when we first tried memcached, we got a huge performance boost just by caching our three big data types (users, stories, and comment text), and we're pretty sure additional caching would provide minimal benefit at this point. Memcached's main use is to get and set discrete data objects, and beyond those three types Slash doesn't really bottleneck that way.
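
For readers who want to picture (d) and (e), here is a loose sketch using plain DBI and the Cache::Memcached client. This is not the Slash API: the hostnames, weights, key names, and table layout are all invented, and the real code handles weighting, throttling, and cache invalidation with far more care.

    # Loose sketch of (d) and (e) -- not the Slash API, just the shape of it.
    use strict;
    use warnings;
    use DBI;
    use Cache::Memcached;

    # (d) pick a reader for this request: weighted random choice of slave.
    my %reader_weight = (
        'db-reader1.example.internal' => 3,   # faster box, gets more traffic
        'db-reader2.example.internal' => 1,
    );

    sub pick_reader {
        my $total = 0;
        $total += $_ for values %reader_weight;
        my $roll = rand($total);
        for my $host (sort keys %reader_weight) {
            $roll -= $reader_weight{$host};
            return $host if $roll < 0;
        }
    }

    my $reader = DBI->connect_cached(
        'DBI:mysql:database=slashdot;host=' . pick_reader(),
        'slash', 'secret', { RaiseError => 1 },
    );

    # (e) memcached in front of one of the big data types (a user row here).
    my $memd = Cache::Memcached->new({
        servers => [ 'mc1.example.internal:11211', 'mc2.example.internal:11211' ],
    });

    sub get_user {
        my ($uid) = @_;
        return unless $uid =~ /^\d+$/;            # keep the raw SQL below safe
        my $key = "user:$uid";

        my $user = $memd->get($key);
        return $user if $user;                    # cache hit

        $user = $reader->selectrow_hashref(       # cache miss: ask a reader
            "SELECT * FROM users WHERE uid = $uid"   # no placeholders, per (c)
        );
        $memd->set($key, $user, 300) if $user;    # cache for five minutes
        return $user;
    }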

Slash 1.0 was written way back in early 2000 with decent support for get and set methods to abstract objects out of a database (getDescriptions, subclassed _wheresql) -- but over the years we've only used them a few times. Most data types that are candidates to be objectified either get processed in large numbers (like tags and comments), in ways that would be hard to make efficient by subclassing, or have complicated table structures and pre- and post-processing (like users) that would make any generic objectification code pretty convoluted. So most data access is done through get and set methods written custom for each data type, or, just as often, through methods that perform one specific update or select.

Overall, we're pretty happy with the database side of things. Most tables are fairly well normalized, not fully but mostly, and we've found this improves performance in most cases. Even on a fairly large site like Slashdot, with modern hardware and a little thinking ahead, we're able to push code and schema changes live quickly. Thanks to running multiple-master replication, we can keep the site fully live even during blocking queries like ALTER TABLE. After changes go live, we can find performance problem spots and optimize (which usually means caching, caching, caching, and occasionally multi-pass log processing for things like detecting abuse and picking out of a hat which users get mod points).
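
The ALTER TABLE trick deserves a sketch, since it's the kind of thing multiple-master replication buys you. This is the general pattern rather than our exact procedure, and the hostnames, credentials, and the ALTER itself are placeholders; it also assumes the change is compatible with statements still replicating from the old schema (adding a column, say).

    #!/usr/bin/perl
    # General pattern for a blocking schema change under multiple-master
    # replication -- a sketch, not Slashdot's actual procedure.
    use strict;
    use warnings;
    use DBI;

    my @masters = ('db1.example.internal', 'db2.example.internal');
    my $alter   = 'ALTER TABLE comments ADD COLUMN flags INT NOT NULL DEFAULT 0';

    for my $host (@masters) {
        # Outside this script: make sure $host is currently the idle master,
        # i.e. the site is writing to the *other* one.

        my $dbh = DBI->connect("DBI:mysql:database=slashdot;host=$host",
                               'admin', 'secret', { RaiseError => 1 });

        # Keep the ALTER out of this box's binlog so it doesn't replicate
        # over to (and block) the live master. Needs the SUPER privilege.
        $dbh->do('SET SESSION sql_log_bin = 0');
        $dbh->do($alter);              # blocks, but only on the idle box
        $dbh->do('SET SESSION sql_log_bin = 1');
        $dbh->disconnect;

        # Outside this script: wait for replication to catch up, swap the
        # write role over to the freshly altered box, then loop to do the
        # other master.
    }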

In fact, I'll go further than "pretty happy." Writing a database-backed web site has changed dramatically over the past seven years. The database used to be the bottleneck: centralized, hard to expand, slow. Now even a cheap DB server can run a pretty big site if you code defensively, and thanks to Moore's Law, memcached, and improvements in open-source database software, that part of the scaling issue isn't really a problem until you're practically the size of eBay. It's an exciting time to be coding web applications.


