
Scaling Databases

One of the most difficult challenges in building large-scale services is the scaling of databases. This applies not only to RDBMSs but to almost any kind of central data store. The obvious solution to scaling data stores is to approach them as you would any other service: partition and cluster. Unfortunately, RDBMSs are usually much more difficult to scale this way than other services.

Partitioning actually works wonderfully as a database scaling method. There are a number of degrees of partitioning. On the most basic level, you can partition by breaking the data objects for separate services into distinct schemas. Assuming that a complete (or at least mostly complete) separation of the dependent data for the applications can be achieved, the schemas can be moved onto separate physical database instances with no problems.
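When each service accesses its schema through its own wrapper class, moving that schema onto dedicated hardware is just a change of connection parameters. Here is a minimal sketch, assuming the DB_Mysql wrapper class that is extended later in this chapter; the class names and hostnames are illustrative:

class DB_Mysql_Users extends DB_Mysql {
  // The users schema, moved onto its own database instance
  protected $dbhost = "userdb.example.com";
  protected $dbname = "users";
}

class DB_Mysql_Orders extends DB_Mysql {
  // The orders schema, on separate hardware
  protected $dbhost = "orderdb.example.com";
  protected $dbname = "orders";
}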

Sometimes, however, you have a database-intensive application where a single schema sees so much DML (Data Manipulation Language: SQL that causes changes in the database) that it needs to be scaled as well. Purchasing more powerful hardware is an easy way out and is not a bad option in this case. However, sometimes simply buying larger hardware is not an option:

  • Hardware pricing is not linear with capacity. High-powered machines can be very expensive.

  • I/O bottlenecks are hard (read: expensive) to overcome.

  • Commercial applications often run on a per-processor licensing scale and, like hardware, scale nonlinearly with the number of processors. (Oracle, for instance, does not allow standard edition licensing on machines that can hold more than four processors.)

Common Bandwidth Problems

You saw in Chapter 12, "Interacting with Databases," that selecting more rows than you actually need can result in your queries being slow because all that information needs to be pulled over the network from the RDBMS to the requesting host. In high-volume applications, it's very easy for this query load to put a significant strain on your network. Consider this: If you request 100 rows to generate a page and your average row width is 1KB, then you are pulling 100KB of data across your local network per page. If that page is requested 100 times per second, then just for database data, you need to fetch 100KB x 100 = 10MB of data per second. That's bytes, not bits. In bits, it is 80Mbps. That will effectively saturate a 100Mb Ethernet link.
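The arithmetic is easy to sanity-check in a few lines of PHP, using the numbers from the example above:

// Back-of-the-envelope bandwidth check for the example above
$rows_per_page = 100;
$row_width     = 1024;   // 1KB average row width, in bytes
$pages_per_sec = 100;

$bytes_per_sec = $rows_per_page * $row_width * $pages_per_sec;
$mbps          = ($bytes_per_sec * 8) / (1000 * 1000);

// Prints roughly "9.8 MB/sec, 82 Mbps" -- enough to saturate
// a 100Mb Ethernet link
printf("%.1f MB/sec, %.0f Mbps\n", $bytes_per_sec / (1024 * 1024), $mbps);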

This example is a bit contrived. Pulling that much data over in a single request is a sure sign that you are doing something wrong, but it illustrates the point that it is easy to have back-end processes consume large amounts of bandwidth. Database queries aren't the only actions that require bandwidth. These are some other traditional large consumers:

  • Networked file systems: Although most developers will quickly recognize that requesting 100KB of data per request from a database is a bad idea, many seemingly forget that requesting 100KB files over NFS or another network file system requires just as much bandwidth and puts a huge strain on the network.

  • Backups: Backups have a particular knack for saturating networks. They have almost no computational overhead, so they are traditionally network bound. That means that a backup system will easily grab whatever bandwidth you have available.

For large systems, the solution to these ever-growing bandwidth demands is to separate out the large consumers so that they do not step on each other. The first step is often to dedicate separate networks to Web traffic and to database traffic. This involves putting multiple network cards in your servers. Many network switches support being divided into multiple logical networks (that is, virtual LANs [VLANs]). This is not technically necessary, but it is more efficient (and secure) to manage. You will want to conduct all Web traffic over one of these virtual networks and all database traffic over the other. Purely internal networks (such as your database network) should always use private network space. Many load balancers also support network address translation, meaning that you can have your Web traffic network on private address space as well, with only the load balancer bound to public addresses.

As systems grow, you should separate out functionality that is expensive. If you have a network-available backup system, putting in a dedicated network for hosts that will use it can be a big win. Some systems may eventually need to go to Gigabit Ethernet or trunked Ethernet. Backup systems, high-throughput NFS servers, and databases are common applications that end up being network bound on 100Mb Ethernet networks. Some Web systems, such as static image servers running high-speed Web servers such as Tux or thttpd, can be network bound on Ethernet networks.

Finally, never forget that the first step in guaranteeing scalability is to be careful when executing expensive tasks. Use content compression to keep your Web bandwidth small. Keep your database queries small. Cache data that never changes on your local server. If you need to back up four different databases, stagger the backups so that they do not overlap.
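For the Web-traffic piece, content compression in PHP is a one-line change, thanks to output buffering. This minimal sketch uses the standard ob_gzhandler callback (it requires the zlib extension):

// Compress output for clients that advertise gzip support.
// ob_gzhandler falls back to uncompressed output for clients
// that cannot handle it.
ob_start("ob_gzhandler");

echo $page_content;  // the generated page, sent compressed when possible

Because HTML compresses extremely well, this can cut Web bandwidth substantially for the cost of a little CPU.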


There are two common solutions to this scenario: replication and object partitioning.

Replication comes in two flavors: master/master and master/slave. Despite what any vendor might tell you in order to sell its product, no master/master solution currently performs very well. Most require shared storage to operate properly, which means that I/O bottlenecks are not eliminated. In addition, there is overhead introduced in keeping the multiple instances in sync (so that you can provide consistent reads during updates).

The master/master schemes that do not use shared storage have to handle the overhead of synchronizing transactions and handling two-phase commits across a network (plus the read consistency issues). These solutions tend to be slow as well. (Slow here is a relative term. Many of these systems can be made blazingly fast, but not as fast as a doubly powerful single system and often not even as fast as an equally powerful single system.)

The problem with master/master schemes shows up in write-intensive applications. When a database is bottlenecked doing writes, the overhead of a two-phase commit can be crippling. Two-phase commit guarantees consistency by breaking the commit into two phases:

  • The promissory phase, in which the database the client is committing to asks all its peers to promise to perform the commit.

  • The commit phase, in which the commit actually occurs.

As you can probably guess, this process adds significant overhead to every write operation, which spells trouble if the application is already having trouble handling the volume of writes.
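In schematic PHP, the coordinator's side of the protocol looks something like the following sketch. The $peers objects and their prepare(), commit(), and rollback() methods are hypothetical stand-ins for whatever the clustering layer actually provides:

// Schematic two-phase commit coordinator. $peers and its methods are
// hypothetical; a real implementation also needs timeout and
// failure-recovery handling.
function two_phase_commit(array $peers, $txn) {
  // Phase 1: promissory -- every peer must promise it can commit
  foreach ($peers as $peer) {
    if (!$peer->prepare($txn)) {
      // Any refusal aborts the transaction everywhere
      foreach ($peers as $p) {
        $p->rollback($txn);
      }
      return false;
    }
  }
  // Phase 2: commit -- all peers promised, so the commit proceeds
  foreach ($peers as $peer) {
    $peer->commit($txn);
  }
  return true;
}

Every write pays at least one full network round-trip in the promissory phase before any data becomes permanent, which is exactly the overhead described above.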

In the case of a severely CPU-bound database server (which is often an indication of poor SQL tuning anyway), it might be possible to see performance gains from clustered systems. In general, though, multimaster clustering will not yield the performance gains you might expect. This doesn't mean that multimaster systems don't have their uses. They are a great tool for crafting high-availability solutions.

That leaves us with master/slave replication. Master/slave replication poses fewer technical challenges than master/master replication and can yield good speed benefits. A critical difference between master/master and master/slave setups is that in master/master architectures, state must be globally synchronized: every copy of the database must be in complete synchronization with every other. In master/slave replication, updates often do not even happen in real time. For example, in both MySQL replication and Oracle's snapshot-based replication, updates are propagated asynchronously from the data change. Although in both cases the degree of staleness can be tightly regulated, the allowance for even slightly stale data radically reduces the cost overhead involved.

The major constraint in dealing with master/slave databases is that you need to separate read-only from write operations.

Figure 15.9 shows a cluster of MySQL servers set up for master/slave replication. The application can read data from any of the slave servers but must make any updates to replicated tables to the master server.

Figure 15.9. Overview of MySQL master/slave replication.



MySQL does not have a corner on the replication market, of course. Many databases have built-in support for replicating entire databases or individual tables. In Oracle, for example, you can replicate tables individually by using snapshots, or materialized views. Consult your database documentation (or your friendly neighborhood database administrator) for details on how to implement replication in your RDBMS.

Master/slave replication relies on transmitting and applying all write operations across the interested machines. In applications with high-volume read and write concurrency, this can cause slowdowns (due to read consistency issues). Thus, master/slave replication is best applied in situations that have a higher read volume than write volume.

Writing Applications to Use Master/Slave Setups

In MySQL version 4.1 or later, there are built-in functions to magically handle query distribution over a master/slave setup. This is implemented at the level of the MySQL client libraries, which means that it is extremely efficient. To utilize these functions in PHP, you need to be using the new mysqli extension, which breaks backward compatibility with the standard mysql extension and does not support MySQL prior to version 4.1.

If you're feeling lucky, you can turn on completely automagical query dispatching, like this:

$dbh = mysqli_init();
mysqli_real_connect($dbh, $host, $user, $password, $dbname);
mysqli_rpl_parse_enable($dbh);
// prepare and execute queries as per usual

The mysqli_rpl_parse_enable() function instructs the client libraries to attempt to automatically determine whether a query can be dispatched to a slave or must be serviced by the master.

Reliance on auto-detection is discouraged, though. As the developer, you have a much better idea of where a query should be serviced than auto-detection does. The mysqli interface provides assistance in this case as well. Acting on a single resource, you can also specify a query to be executed either on a slave or on the master:

$dbh = mysqli_init();
mysqli_real_connect($dbh, $host, $user, $password, $dbname);
mysqli_slave_query($dbh, $readonly_query);
mysqli_master_query($dbh, $write_query);

You can, of course, conceal these routines inside your wrapper classes. If you are running MySQL prior to 4.1 or another RDBMS that does not seamlessly support automatic query dispatching, you can emulate this interface inside the wrapper as well:

class Mysql_Replicated extends DB_Mysql {
  protected $slave_dbhost;
  protected $slave_dbname;
  protected $slave_dbh;

  public function __construct($user, $pass, $dbhost, $dbname,
                              $slave_dbhost, $slave_dbname)
  {
    $this->user = $user;
    $this->pass = $pass;
    $this->dbhost = $dbhost;
    $this->dbname = $dbname;
    $this->slave_dbhost = $slave_dbhost;
    $this->slave_dbname = $slave_dbname;
  }

  protected function connect_master() {
    $this->dbh = mysql_connect($this->dbhost, $this->user, $this->pass);
    mysql_select_db($this->dbname, $this->dbh);
  }
  protected function connect_slave() {
    $this->slave_dbh = mysql_connect($this->slave_dbhost,
                                     $this->user, $this->pass);
    mysql_select_db($this->slave_dbname, $this->slave_dbh);
  }
  protected function _execute($dbh, $query) {
    $ret = mysql_query($query, $dbh);
    if(is_resource($ret)) {
      return new DB_MysqlStatement($ret);
    }
    return false;
  }
  public function master_execute($query) {
    // Connect lazily so that a purely read-only request never
    // opens a connection to the master
    if(!is_resource($this->dbh)) {
      $this->connect_master();
    }
    return $this->_execute($this->dbh, $query);
  }
  public function slave_execute($query) {
    if(!is_resource($this->slave_dbh)) {
      $this->connect_slave();
    }
    return $this->_execute($this->slave_dbh, $query);
  }
}

You could even incorporate query auto-dispatching into your API by attempting to detect queries that are read-only or that must be dispatched to the master. In general, though, auto-detection is less desirable than manually determining where a query should be directed. When attempting to port a large code base to use a replicated database, auto-dispatch services can be useful but should not be chosen over manual determination when time and resources permit.
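If you do want a stopgap auto-dispatcher while porting, a crude heuristic is to route anything that begins with SELECT to a slave and everything else to the master. Here is a minimal sketch of a method that could be added to the Mysql_Replicated class above; the heuristic is deliberately naive:

// Crude auto-dispatch heuristic: treat statements that start with
// SELECT as read-only. This is only an approximation -- for example,
// SELECT ... FOR UPDATE, or reads that must see the session's own
// just-written data, still belong on the master.
public function execute($query) {
  if (preg_match('/^\s*SELECT\b/i', $query)) {
    return $this->slave_execute($query);
  }
  return $this->master_execute($query);
}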

Alternatives to Replication

As noted earlier in this chapter, master/slave replication is not the answer to everyone's database scalability problems. For highly write-intensive applications, setting up slave replication may actually detract from performance. In this case, you must look for idiosyncrasies of the application that you can exploit.

An example would be data that is easily partitionable. Partitioning data involves breaking a single logical schema across multiple physical databases by a primary key. The critical trick to efficient partitioning of data is that queries that will span multiple databases must be avoided at all costs.

An email system is an ideal candidate for partitioning. Email messages are accessed only by their recipient, so you never need to worry about making joins across multiple recipients. Thus you can split email messages across, say, four databases with ease:

class Email {
  public $recipient;
  public $sender;
  public $body;
  /* ... */
}

class PartitionedEmailDB {
  public $databases;

You start out by setting up connections for the four databases. Here you use wrapper classes that you've written to hide all the connection details for each:

public function __construct() {
  $this->databases[0] = new DB_Mysql_Email0;
  $this->databases[1] = new DB_Mysql_Email1;
  $this->databases[2] = new DB_Mysql_Email2;
  $this->databases[3] = new DB_Mysql_Email3;
}

On both insertion and retrieval, you hash the recipient to determine which database his or her data belongs in. crc32 is used because it is faster than any of the cryptographic hash functions (md5, sha1, and so on) and because you are only looking for a function to distribute the users over databases and don't need any of the security the stronger one-way hashes provide. Here are both insertion and retrieval functions, which use a crc32-based hashing scheme to spread load across multiple databases:

public function insertEmail(Email $email) {
  $query = "INSERT INTO emails
                (recipient, sender, body)
                VALUES(:1, :2, :3)";
  // abs() guards against crc32() returning a negative integer
  // on 32-bit platforms
  $hash = abs(crc32($email->recipient)) % count($this->databases);
  $this->databases[$hash]->prepare($query)->execute($email->recipient,
                                            $email->sender, $email->body);
}
public function retrieveEmails($recipient) {
  $query = "SELECT * FROM emails WHERE recipient = :1";
  $hash = abs(crc32($recipient)) % count($this->databases);
  $result = $this->databases[$hash]->prepare($query)->execute($recipient);
  $retval = array();
  while($hr = $result->fetch_assoc()) {
    $retval[] = new Email($hr);
  }
  return $retval;
}
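Using the class is then no different from using a single-database wrapper; the partitioning is invisible to the caller. A hypothetical usage sketch:

$mailstore = new PartitionedEmailDB();
// $email is assumed to be a populated Email object
$mailstore->insertEmail($email);
// Reads for a recipient go to whichever database that
// recipient hashes to
$inbox = $mailstore->retrieveEmails("george@example.com");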

Alternatives to RDBMS Systems

This chapter focuses on RDBMS-backed systems. This should not leave you with the impression that all applications are backed by an RDBMS. Many applications are not ideally suited to working in a relational system, and they benefit from interacting with custom-written application servers.

Consider an instant messaging service. Messaging is essentially a queuing system: sending users push messages onto a queue, and receiving users pop them off. Although you can model this in an RDBMS, it is not ideal. A more efficient solution is to have an application server built specifically to handle the task.

Such a server can be implemented in any language and can be communicated with over whatever protocol you build into it. In Chapter 16, "RPC: Interacting with Remote Services," you will see a sample of so-called Web services-oriented protocols. You can also devise your own protocol and talk over low-level network sockets by using the sockets extension in PHP.
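To make that concrete, here is a minimal sketch of an in-memory message-queue server built on the sockets extension. The PUSH/POP line protocol is invented for illustration, and a production server would need multiplexing (socket_select()), error handling, authentication, and persistence:

// Minimal single-client message-queue server. The PUSH/POP wire
// protocol is invented for illustration only.
$queues = array();

$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, "127.0.0.1", 4000);
socket_listen($server);

while($conn = socket_accept($server)) {
  while($line = socket_read($conn, 1024, PHP_NORMAL_READ)) {
    $line = trim($line);
    if(preg_match('/^PUSH (\S+) (.*)/', $line, $m)) {
      $queues[$m[1]][] = $m[2];          // enqueue a message for a user
      socket_write($conn, "OK\n");
    } elseif(preg_match('/^POP (\S+)/', $line, $m)) {
      $msg = empty($queues[$m[1]]) ? "" : array_shift($queues[$m[1]]);
      socket_write($conn, "$msg\n");     // dequeue the oldest message
    } else {
      socket_write($conn, "ERR\n");
    }
  }
  socket_close($conn);
}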

An interesting development in PHP-oriented application servers is the SRM project, which is headed up by Derick Rethans. SRM is an application server framework built around an embedded PHP interpreter. Application services are scripted in PHP and are interacted with using a bundled communication extension. Of course, the maxim of maximum code reuse means that having the flexibility to write a persistent application server in PHP is very nice.

