Tech Blog :: drupal


Jun 20 '12 2:06pm

Understanding MapReduce in MongoDB, with Node.js, PHP (and Drupal)

MongoDB's query language is good at extracting whole documents or whole elements of a document, but on its own it can't pull specific items from deeply embedded arrays, or calculate relationships between data points, or calculate aggregates. To do that, MongoDB uses an implementation of the MapReduce methodology to iterate over the dataset and extract the desired data points. Unlike SQL joins in relational databases, which essentially create a massive combined dataset and then extract pieces of it, MapReduce iterates over each document in the set, "reducing" the data piecemeal to the desired results. The name was popularized by Google, which needed to scale beyond SQL to index the web. Imagine trying to build the data structure for Facebook, with near-instantaneous calculation of the significance of every friend's friend's friend's posts, with SQL, and you see why MapReduce makes sense.

I've been using MongoDB for two years, but only in the last few months have I started using MapReduce heavily. MongoDB is also introducing a new Aggregation framework in 2.1 that is supposed to simplify many operations that previously needed MapReduce. However, the latest stable release as of this writing is still 2.0.6, so Aggregation isn't officially ready for prime time (and I haven't used it yet).

This post is not meant to substitute for the copious documentation and examples you can find across the web. After reading those, it still took me some time to wrap my head around the concepts, so I want to explain them here as I came to understand them.

The Steps

A MapReduce operation consists of a map, a reduce, and optionally a finalize function. Key to understanding MapReduce is understanding what each of these functions iterates over.

Map

First, map runs for every document retrieved in the initial query passed to the operation. If you have 1000 documents and pass an empty query object, it will run 1000 times.

Inside your map function, you emit a key-value pair, where the key is whatever you want to group by (_id, author, category, etc), and the value contains whatever pieces of the document you want to pass along. The map function doesn't return anything, because you can emit multiple key-value pairs per document, whereas a function can only return one result.

The purpose of map is to extract small pieces of data from each document. For example, if you're counting articles per author, you could emit the author as the key and the number 1 as the value, to be summed in the next step.
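For instance, a map function for that per-author count might look like this (a minimal sketch; it runs inside MongoDB, where 'this' is the current document, and the author field name is an assumption about your schema):

var map = function() {
  // 'this' is the document currently being mapped
  emit(this.author, 1);
};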

Reduce

The reduce function then receives each key emitted from map, along with an array of all the values emitted for that key. Its purpose is to reduce multiple values-per-key to a single value-per-key. At the end of each iteration of your reduce function, you return (not emit this time) a single variable.

The number of times reduce runs for a given operation isn't easy to predict. (I asked about it on Stack Overflow and the consensus so far is, there's no simple formula.) Essentially reduce runs as many times as it needs to, until each key appears only once. If you emit each key only once, reduce never runs. If you emit most keys once but one special key twice, reduce will run once, getting (special key, [ value, value ]).

A rule of thumb with reduce is that the returned value's structure has to be the same as the structure emitted from map. If you emit an object as the value from map, every key in that object has to be present in the object returned from reduce, and vice-versa. If you return an integer from map, return an integer from reduce, and so on. The basic reason is that (as noted above), reduce shouldn't be necessary if a key only appears once. The results of an entire map-reduce operation, run back through the same operation, should return the same results (that way huge operations can be sharded and map/reduced many times). And the output of any given reduce function, plugged back into reduce (as a single-item array), needs to return the same value as went in. (In CS lingo, reduce has to be idempotent. The documentation explains this in more technical detail.)

Here's a simple JS test, using Node.js' assertion API, to verify this. To use it, have your mapReduce operation export its methods for a separate test script to import and test:

// this should export the map, reduce, [finalize] functions passed to MongoDB.
var assert = require('assert');
var mr = require('./mapreduce-query');
 
// override emit() to capture locally
var emitted = [];
 
// (in global scope so map can access it)
global.emit = function(key, val) {
  emitted.push({key:key, value:val});
};
 
// reduce input should be the same as its output for a single-item array.
// dummyItems can be fake documents or loaded from the DB, e.g.:
var dummyItems = [ { author: 'Jane Doe', category: 'sample' } ];
mr.map.call(dummyItems[0]);
 
var reduceRes = mr.reduce(emitted[0].key, [ emitted[0].value ]);
assert.deepEqual(reduceRes, emitted[0].value, 'reduce is idempotent');

A simple MapReduce example is to count the number of posts per author. So in map you could emit('author name', 1) for each document, then in reduce loop over each value and add it to a total. Make sure reduce is adding the actual number in the value, not just 1, because that won't be idempotent. Similarly, you can't just return values.length and assume each value represents 1 document.
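A matching reduce function for that example, written to respect the idempotence rule above, might look like this (a sketch, not production code; note that it adds the emitted values, which may already be partial sums from an earlier reduce pass):

var reduce = function(key, values) {
  var total = 0;
  for (var i = 0; i < values.length; i++) {
    // add the actual emitted value, which may be a partial sum - never a hard-coded 1
    total += values[i];
  }
  return total;  // same structure (an integer) as what map emitted
};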

Finalize

Now you have a single reduced value per key, and each of these is run through the finalize function once.

To understand finalize, consider that this is essentially the same as not having a finalize function at all:

var finalize = function(key, value) {
  return value;
}

finalize is not necessary in every MapReduce operation, but it's very useful, for example, for calculating averages. You can't calculate the average in reduce because it can run multiple times per key, so each iteration doesn't have enough data to calculate with.
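One common pattern (a sketch, with illustrative field names like wordCount) is to carry a running sum and count through map and reduce, and only divide in finalize:

var map = function() {
  // emit a partial sum and count per key, never a computed average
  emit(this.author, { sum: this.wordCount, count: 1 });
};

var reduce = function(key, values) {
  var result = { sum: 0, count: 0 };
  for (var i = 0; i < values.length; i++) {
    result.sum += values[i].sum;
    result.count += values[i].count;
  }
  return result;  // same structure as the mapped value, so it stays idempotent
};

var finalize = function(key, value) {
  // runs once per key, so it's safe to compute the average here
  value.average = value.count > 0 ? value.sum / value.count : 0;
  return value;
};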

The final results returned from the operation will have one value per key, as returned from finalize if it exists, or from reduce if finalize doesn't exist.

MapReduce in PHP and Drupal

The MongoDB library for PHP does not include any special functions for MapReduce, so operations have to be run as a generic database command, which takes a lot of code. I found a MongoDB-MapReduce-PHP library on Github which makes it easier. It works, but hasn't been updated in two years, so I forked the library and created my own version with what I think are some improvements.

The original library by infynyxx created an abstract class XMongoCollection that was meant to be sub-classed for every collection. I found it more useful to make XMongoCollection directly instantiable, as an extended replacement for the basic MongoCollection class. I added a mapReduceData method which returns the data from the MapReduce operation. For my Drupal application, I added a mapReduceDrupal method which wraps the results and error handling in Drupal API functions.

I could then load every collection with XMongoCollection and run mapReduce operations on it directly, like any other query. Note that the actual functions passed to MongoDB are still written in Javascript. For example:

// (this should be statically cached in a separate function)
$mongo = new Mongo($server_name);      // connection
$mongodb = $mongo->selectDB($db_name); // MongoDB instance
 
// use the new XMongoCollection class. make it available with an __autoloader.
$collection = new XMongoCollection($mongodb, $collection_name);
 
$map = <<<MAP
  function() {
    // doc is 'this'
    emit(this.category, 1);
  }
MAP;
 
$reduce = <<<REDUCE
  function(key, vals) {
    // `variable` is available here, passed in via setScope() below
    return something;  // placeholder: return a single reduced value
  }
REDUCE;
 
$mr = new MongoMapReduce($map, $reduce, array( /* limit initial document set with a query here */ ));
 
// optionally pass variables to the functions. (e.g. to apply user-specified filters)
$mr->setScope(array('variable' => $variable));
 
// 2nd param becomes the temporary collection name, so tmp_mapreduce_example.
// (This is a little messy and could be improved. The stated limitation of v1.8+ not supporting "inline" results is not entirely clear.)
// 3rd param is $collapse_value, see code
$result = $collection->mapReduceData($mr, 'example', FALSE);

MapReduce in Node.js

The MongoDB-Native driver for Node.js, now an official 10Gen-sponsored project, includes a collection.mapReduce() method. The syntax is like this:

 
var db = new mongodb.Db(dbName, new mongodb.Server(mongoHost, mongoPort, {}));
db.open(function(error, dbClient) {
  if (error) throw error;  
  dbClient.collection(collectionName, function(err, collection) {
    collection.mapReduce(map, reduce, { 
        out : { inline : 1 },
        query: { ... },     // limit the initial set (optional)
        finalize: finalize,  // function (optional)
        verbose: true        // include stats
      },
      function(error, results, stats) {   // stats provided by verbose
        // ...
      }
    );
  });
});

It's mostly similar to the command-line syntax, except in the CLI, the results are returned from the mapReduce function, while in Node.js they are passed (asynchronously) to the callback.
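For comparison, the equivalent call in the mongo shell returns its results directly (a minimal sketch; posts is an illustrative collection name, and map/reduce are functions like the ones above):

var res = db.posts.mapReduce(map, reduce, { out: { inline: 1 } });
// with inline output, the returned object holds the results array
printjson(res.results);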

MapReduce in Mongoose

Mongoose is a modeling layer on top of the MongoDB-native Node.js driver, and in the latest 2.x release does not have its own support for MapReduce. (It's supposed to be coming in 3.x.) But the underlying collection is still available:

var db = mongoose.connect('mongodb://dbHost/dbName');
// (db.connection.db is the native MongoDB driver)
 
// build a model (`Book` is a schema object)
// model is called 'Book' but collection is 'books'
mongoose.model('Book', Book, 'books');
 
...
 
var Book = db.model('Book');
Book.collection.mapReduce(...);

(I actually think this is a case of Mongoose being better without its own abstraction on top of the existing driver, so I hope the new release doesn't make it more complex.)

In sum

I initially found MapReduce very confusing, so hopefully this helps clarify rather than increase the confusion. Please write in the comments below if I've misstated or mixed up anything above.

Apr 29 '12 10:06am

Liberate your Drupal data for a service-oriented architecture (using Redis, Node.js, and MongoDB)

Drupal's basic content unit is a "node," and to build a single node (or to perform any other Drupal activity), the codebase has to be bootstrapped, and everything needed to respond to the request (configuration, database and cache connections, etc) has to be initialized and loaded into memory from scratch. Then node_load runs through the NodeAPI hooks, multiple database queries are run, and the node is built into a single PHP object.

This is fine if your web application runs entirely through Drupal, and always will, but what if you want to move toward a more flexible Service-oriented architecture (SOA), and share your content (and users) with other applications? For example, build a mobile app with a Node.js backend like LinkedIn did; or calculate analytics for business intelligence; or have customer service reps talk to your customers in real-time; or integrate with a ticketing system; or do anything else that doesn't play to Drupal's content-publishing strengths. Maybe you just want to make your data (which is the core of your business, not the server stack) technology-agnostic. Maybe you want to migrate a legacy Drupal application to a different system, but the cost of refactoring all the business logic is prohibitive; with an SOA you could change that calculation and get the best of both worlds.

The traditional way of doing this was setting up a web service in Drupal using something like the Services module. External applications could request data over HTTP, and Drupal would respond in JSON. Each request has to wait for Drupal to bootstrap, which uses a lot of memory (every enterprise Drupal site I've ever seen has been bogged down by legacy code that runs on every request), so it's slow and doesn't scale well. Rather than relieving some load from Drupal's LAMP stack by building a separate application, you're just adding more load to both apps. To spread the load, you have to keep adding PHP/Apache/Mysql instances horizontally. Every module added to Drupal compounds the latency of Drupal's hook architecture (running thousands of function_exists calls, for example), so the stakeholders involved in changing the Drupal app have to include the users of every secondary application requesting the data. With a Drupal-Services approach, other apps will always be second-class citizens, dependent on the legacy system, which defeats the "loose coupling" principle of SOA.

I've been shifting my own work from Drupal to Node.js over the last year, but I still have large Drupal applications (such as Antiques Near Me) which can't be easily moved away, and frankly don't need to be for most use cases. Overall, I tend to think of Drupal as a legacy system, burdened by too much cruft and inconsistent architecture, and no longer the best platform for most applications. I've been giving a lot of thought to ways to keep these apps future-proof without rebuilding all the parts that work well as-is.

That led me to build what I've called the "Drupal Liberator". It consists of a Drupal module and a Node.js app, and uses Redis (a very fast key-value store) for a middleman queue and MongoDB for the final storage. Here's how it works:

  • When a node (or user, or other entity type) is saved in Drupal, the module encodes it to JSON (a cross-platform format that's also native to Node.js and MongoDB), and puts it, along with metadata (an md5 checksum of the JSON, timestamp, etc), into a Redis hash (a simple key-value object, containing the metadata and the object as a JSON string). It also notifies a Redis pub/sub channel of the new hash key. (This uses 13KB of additional memory and 2ms of time for Drupal on the first node, and 1KB/1ms for subsequent node saves on the same request. If Redis is down, Drupal goes on as usual.)

  • The Node.js app, running completely independently of Drupal, listens to the pub/sub channel. When it's pinged with a hash key, it retrieves the hash, JSON.parse's the string into a native object, possibly alters it a little (e.g., adding the checksum and timestamp into the object), and saves it into MongoDB (which also speaks JSON natively). The data type (node, user, etc) and other information in the metadata direct where it's saved. Under normal conditions, this whole process from node_save to MongoDB takes less than a second. Even if it were to bottleneck at some point in the flow, the Node.js app runs asynchronously, so it never blocks or strains Drupal. (A minimal sketch of this listener follows the list below.)

  • For redundancy, the Node.js app also polls the hash namespace every few minutes. If any part of the mechanism breaks at any time, or to catch up when first installing it, the timestamp and checksum stored in each saved object allow the two systems to easily find the last synchronized item and continue synchronizing from there.
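Here's a minimal sketch of what that Node.js listener looks like, assuming the node_redis and mongodb-native modules; the channel name, hash fields, and database/collection names are illustrative, not the actual Liberator code:

var redis = require('redis');
var mongodb = require('mongodb');

// Redis pub/sub needs its own connection, separate from the one used for reads
var subscriber = redis.createClient();
var client = redis.createClient();

var db = new mongodb.Db('liberator', new mongodb.Server('localhost', 27017, {}));
db.open(function(error, dbClient) {
  if (error) throw error;

  subscriber.subscribe('drupal:saved');
  subscriber.on('message', function(channel, hashKey) {
    // the hash holds the JSON-encoded entity plus metadata (checksum, timestamp, type)
    client.hgetall(hashKey, function(err, hash) {
      if (err || !hash) return;

      var doc = JSON.parse(hash.json);
      doc._checksum = hash.md5;       // carry the metadata into the stored object
      doc._synced = hash.timestamp;

      // the entity type in the metadata directs where the document is saved
      dbClient.collection(hash.type, function(err, collection) {
        if (err) return;
        collection.insert(doc, { safe: true }, function(err) {
          // log errors here; the periodic polling will catch anything that slips through
        });
      });
    });
  });
});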

The result is a read-only clone of the data, synchronized almost instantaneously with MongoDB. Individual nodes can be loaded without bootstrapping Drupal (or touching Apache-MySql-PHP at all), as fully-built objects. New apps utilizing the data can be built in any framework or language. The whole Drupal site could go down and the data needed for the other applications would still be usable. Complex queries (for node retrieval or aggregate statistics) that would otherwise require enormous SQL joins can be built using MapReduce and run without affecting the Drupal database.

One example of a simple use case this enables: Utilize the CMS backend to edit your content, but publish it using a thin MongoDB layer and client-side templates. (And outsource comments and other user-write interactions to a service like Disqus.) Suddenly your content displays much faster and under higher traffic with less server capacity, and you don't have to worry about Varnish or your Drupal site being "Slashdotted".

A few caveats worth mentioning: First, it's read-only. If a separate app wants to modify the data in any way (and maintain data integrity across systems), it has to communicate with Drupal, or a synchronization bridge has to be built in the other direction. (This could be the logical next step in developing this approach, and truly make Drupal a co-equal player in an SOA.)

Second, you could have Drupal write to MongoDB directly and cut out the middlemen. (And indeed that might make more sense in a lot of cases.) But I built this with the premise of an already strained Drupal site, where adding another database connection would slow it down even further. This aims to put as little additional load on Drupal as possible, with the "Liberator" acting itself as an independent app.

Third, if all you needed was instant node retrieval - for example, if your app could query MySql for node ID's, but didn't want to bootstrap Drupal to build the node objects - you could leave them in Redis and take Node.js and MongoDB out of the picture.

I've just started exploring the potential of where this can go, so I've run this mostly as a proof-of-concept so far (successfully). I'm also not releasing the code at this stage: If you want to adopt this approach to evolve your Drupal system to a service-oriented architecture, I am available as a consultant to help you do so. I've started building separate apps in Node.js that tie into Drupal sites with Ajax and found the speed and flexibility very liberating. There's also a world of non-Drupal developers who can help you leverage your data, if it could be easily liberated. I see this as opening a whole new set of doors for where legacy Drupal sites can go.

Dec 14 '11 4:53pm

Unconventional unit testing in Drupal 6 with PhpUnit, upal, and Jenkins

Unit testing in Drupal using the standard SimpleTest approach has long been one of my pain points with Drupal apps. The main obstacle was setting up a realistic test "sandbox": The SimpleTest module builds a virtual site with a temporary database (within the existing database), from scratch, for every test suite. To accurately test the complex interactions of a real application, you need dozens of modules enabled in the sandbox, and installing all their database schemas takes a long time. If your site's components are exported to Features, the tests gain another level of complexity. You could have the test turn on every module that's enabled on the real site, but then each suite takes 10 minutes to run. And that still isn't enough; you also need settings from the variables table, content types, real nodes and users, etc.

So until recently, it came down to a choice: make simple but unrealistic sandboxes that tested minutiae but not the big-picture interactions; or build massive sandboxes for each test that made the testing workflow impossible. After weeks of trying to get a SimpleTest environment working on a Drupal 6 application with a lot of custom code, and dozens of hours debugging the tests or the sandbox setups rather than building new functionality, I couldn't justify the time investment, and shelved the whole effort.

Then Moshe Weizman pointed me to his alternate upal project, which aims to bring the PHPUnit testing framework to Drupal, with backwards compatibility for SimpleTest assertions, but not the baggage of SimpleTest's Drupal implementation. Moshe recently introduced upal as a proposed testing framework for Drupal 8, especially for core. Separately, a few weeks ago, I started using upal for a different purpose: as a unit testing framework for custom applications in Drupal 6.

I forked the Github repo, started a backport to D6 (copying from SimpleTest-6 where upal was identical to SimpleTest-7), and fixed some of the holes. More importantly, I'm taking a very different approach to the testing sandbox: I've set up an entirely separate test site, copied wholesale from the dev site (which itself is copied from the production site). This means:

  • I can visually check the test sandbox at any time, because it runs as a virtualhost just like the dev site.
  • All the modules, settings, users, and content are in place for each test, and don't need to be created or torn down.
  • Rebuilding the sandbox is a single operation (with shell scripts to sync MySql, MongoDB, and files, manually triggered in Jenkins)
  • Cleanup of test-created objects occurs (if desired) on a piecemeal basis in tearDown() - drupalCreateNode() (modified) and drupalVariableSet() (added) optionally undo their changes when the test ends.
  • setUp() is not needed for most tests at all.
  • dumpContentsToFile() (added) replicates SimpleTest's ability to save curl'd files, but on a piecemeal basis in the test code.
  • Tests run fast, and accurately reflect the entirety of the site with all its actual interactions.
  • Tests are run by the Jenkins continuous-integration tool and the results are visible in Jenkins using the JUnit xml format.

How to set it up (with Jenkins, aka Hudson)

(Note: the following are not comprehensive instructions, and assume familiarity with shell scripting and an existing installation of Jenkins.)

  1. Install upal from Moshe's repo (D7) or mine (D6). (Some of the details below I added recently, and apply only to the D6 fork.)
  2. Install PHPUnit. The pear approach is easiest.
  3. Upgrade drush: the notes say, "You currently need 'master' branch of drush after 2011.07.21. Drush 4.6 will be OK - http://drupal.org/node/1105514" - this seems to correspond to the HEAD of the 7.x-4.x branch in the Drush repository.
  4. Set up a webroot, database, virtualhost, DNS, etc for your test sandbox, and any scripts you need to build/sync it.
  5. Configure phpunit.xml. Start with upal's readme, then (D6/fork only) add DUMP_DIR (if wanted), and if HTTP authentication to the test site is needed, UPAL_HTTP_USER and UPAL_HTTP_PASS. In my version I've split the DrupalTestCase class to its own file, and renamed drupal_test_case.php to upal.php, so rename the "bootstrap" parameter accordingly. (Note: the upal notes say it must run at a URL ending in /upal - this is no longer necessary with this approach.)
  6. PHPUnit expects the files to be named .php rather than .test - however if you explicitly call an individual .test file (rather than traversing a directory, the approach I took), it might work. You can also remove the getInfo() functions from your SimpleTests, as they don't do anything anymore.
  7. If Jenkins is on a different server than the test site (as in my case), make sure Jenkins can SSH over.
  8. To use dumpContentsToFile() or the XML results, you'll want a dump directory (set in phpunit.xml), and your test script should wipe the directory before each run, and rsync the files to the build workspace afterwards.
  9. To convert PHPUnit's JUnit output to the format Jenkins understands, you'll need the xUnit plugin for Jenkins. Then point the Jenkins job to read the XML file (after rsync'ing if running remotely). [Note: the last 3 steps have to be done with SimpleTest and Jenkins too.]
  10. Code any wrapper scripts around the above steps as needed.
  11. Write some tests! (Consult the PHPUnit documentation.)
  12. Run the tests!

Some issues I ran into (which you might also run into)

  1. PHPUnit, unlike SimpleTest, stops a test function after the first failure. This isn't a bug, it's expected behavior, even with --stop-on-failure disabled. I'd prefer it the other way, but that's how it is.
  2. Make sure your test site - like any dev site - does not send any outbound mail to customers, run unnecessary feed imports, or otherwise perform operations not meant for a non-production site.
  3. In my case, Jenkins takes 15 minutes to restart (after installing xUnit for example). I don't know why, but keep an eye on the Jenkins log if it's taking you a while too.
  4. Also in my case, Jenkins runs behind an Apache reverse-proxy; in that case when Jenkins restarts, it's usually necessary to restart Apache, or else it gets stuck thinking the proxy endpoint is down.
  5. I ran into a bug with Jenkins stopping its shell script commands arbitrarily before the end. I worked around it by moving the whole job to a shell script on the Jenkins server (which in turn delegates to a script on the test/dev server).

There is a pending pull request to pull some of the fixes and changes I made back into the original repo. In the pull request I've tried to separate what are merely fixes from what goes with the different test-site approach I've taken, but it's still a tricky merge. Feel free to help there, or make your own fork with a separate test site for D7.

I now have a working test environment with PHPUnit and upal, with all of the tests I wrote months ago working again (minus their enormous setUp() functions), and I've started writing tests for new code going forward. Success!

(If you are looking for a professional implementation of any of the above, please contact me.)

Dec 12 '11 3:52pm

Making sense of Varnish caching rules

Varnish is a reverse-proxy cache that allows a site with a heavy backend (such as a Drupal site) and mostly consistent content to handle very high traffic load. The “cache” part refers to Varnish storing the entire output of a page in its memory, and the “reverse proxy” part means it functions as its own server, sitting in front of Apache and passing requests back to Apache only when necessary.

One of the challenges with implementing Varnish, however, is the complex “VCL” protocol it uses to process requests with custom logic. The syntax is unusual, the documentation relies heavily on complex examples, and there don’t seem to be any books or other comprehensive resources on the software. A recent link on the project site to Varnish Training is just a pitch for a paid course. Searching more specifically for Drupal + Varnish will bring up many good results - including Lullabot’s fantastic tutorial from April, and older examples for Mercury - but the latest stable release is now 3.x and many of the examples (written for 2.x) don’t work as written anymore. So it takes a lot of trial and error to get it all working.

I’ve been running Varnish on AntiquesNearMe.com, partly to keep our hosting costs down by getting more power out of less [virtual] hardware. A side benefit is the site’s ability to respond very nicely if the backend Apache server ever goes down. They’re on separate VPS's (connected via internal private networking), and if the Apache server completely explodes from memory overload, or I simply need to upgrade a server-related package, Varnish will display a themed “We’re down for a little while” message.

But it wasn’t until recently that I got Varnish’s primary function, caching, really tuned. I spent several days under the hood recently, and while I don’t want to rehash what’s already been well covered in Lullabot’s tutorial, here are some other things I learned:

Check syntax before restarting

After you update your VCL, you need to restart Varnish - using sudo /etc/init.d/varnish restart for instance - for the changes to take effect. If you have a syntax error, however, this will take down your site. So check the syntax first (change the path to your VCL as needed):
varnishd -C -f /etc/varnish/default.vcl > /dev/null

If there are errors, it will display them; if not, it shows nothing. Use that as a visual check before restarting. (Unfortunately the exit code of that command is always 0, so you can’t do check-then-restart as simply as check-varnish-syntax && /etc/init.d/varnish restart, but you could grep the output for the words “exit 1” to accomplish the same.)

Logging

The std.log function allows you to generate arbitrary messages about Varnish’s processing. Add import std; at the top of your VCL file, and then std.log("DEV: some useful message") anywhere you want. The “DEV” prefix is an arbitrary way of differentiating your logs from all the others. So you can then run in the shell, varnishlog | grep "DEV" and watch only the information you’ve chosen to see.

How I use this:
- At the top of vcl_recv() I put std.log("DEV: Request to URL: " + req.url);, to put all the other logs in context.
- When I pipe back to apache, I put std.log("DEV: piping " + req.url + " straight back to apache"); before the return (pipe);
- On blocked URLs (cron, install), the same
- On static files (images, JS, CSS), I put std.log("DEV: Always caching " + req.url);
- To understand all the regex madness going on with cookies, I log req.http.Cookie at every step to see what’s changed.

Plug some of these in, check the syntax, restart Varnish, run varnishlog|grep PREFIX as above, and watch as you hit a bunch of URLs in your browser. Varnish’s internal logic will quickly start making more sense.

Watch Varnish work with your browser

(Image: Varnish headers in Chrome Inspector)
The Chrome/Safari Inspector and Firebug show the headers for every request made on a page. With Varnish running, look at the Response Headers for one of them: you’ll see “Via: Varnish” if the page was processed through Varnish, or “Server:Apache” if it went through Apache. (Using Chrome, for instance, login to your Drupal site and the page should load via Apache (assuming you see page elements not available to anonymous users), then open an Incognito window and it should run through Varnish.)

Add hit/miss headers

  • When a page is supposed to be cached (not pipe’d immediately), Varnish checks whether it already has the page cached (a hit) or not (a miss). To watch this in your Inspector, use this logic:
sub vcl_deliver {
  std.log("DEV: Hits on " + req.url + ": " + obj.hits);
 
  if (obj.hits > 0) {
    set resp.http.X-Varnish-Cache = "HIT";
  }
  else {
    set resp.http.X-Varnish-Cache = "MISS";
  }
 
  return (deliver);
}

Then you can clear the caches, hit a page (using the browser technique above), see “via Varnish” and a MISS, hit it again, see a HIT (or not), and know if everything is working.

Clear Varnish when aggregated CSS+JS are rebuilt

If you have CSS/JS aggregation enabled (as recommended), your HTML source will reference long hash-string files. Varnish caches that HTML with the hash string. If you clear only those caches (“requisites” via Admin Menu or cc css+js via Drush), Varnish will still have the old references, but the files will have been deleted. Not good. You could simply never use that operation again, but that’s a little silly.

The heavy-handed solution I came up with (I welcome alternatives) is to wipe the Varnish cache when CSS+JS resets. That operation is not hook-able, however, so you have to patch core. In common.inc, in _drupal_flush_css_js(), add:

if (module_exists('varnish') && function_exists('varnish_purge_all_pages')) {
  varnish_purge_all_pages();
}

This still keeps Memcache and other in-Drupal caches intact, avoiding an unnecessary “clear all caches” operation, but makes sure Varnish doesn’t point to dead files. (You could take it a step further and purge only URLs that are Drupal-generated and not static; if you figure out the regex for that, please share.)

Per-page cookie logic

On AntiquesNearMe.com we have a cookie that remembers the last location you searched, which makes for a nicer UX. That cookie gets added to Varnish’s page “hash” and (correctly) bypasses the cache on pages that take that cookie into account. The cookie is not relevant to the rest of the site, however, so it should be ignored in those cases. How to handle this?

There are two ways to handle cookies in Varnish: strip cookies you know you don’t want, as in this old Pressflow example, or leave only the cookies you know you do want, as in Lullabot’s example. Each strategy has its pros and cons and works on its own, but it’s not advisable to combine them. I’m using Lullabot’s technique on this site, so to deal with the location cookie, I use if-else logic: if the cookie is available but not needed (determined by regex like req.url !~ "PATTERN" || ...), then strip it; otherwise keep it. If the cookie logic you need is more varied but still linear, you could create a series of elsif statements to handle all the use cases. (Just make sure to roast a huge pot of coffee first.)

Useful add-ons to varnish.module

  • Added watchdog('varnish', ...) commands in varnish.module on cache-clearing operations, so I could look at the logs and spot problems.
  • Added a block to varnish.module with a “Purge this page” button for each URL, shown only for admins. (I saw this in an Akamai module and it made a lot of sense to copy. I’d be happy to post a patch if others want this.)
  • The Expire module offers plug-n-play intelligence to selectively clear Varnish URLs only when necessary (clearing a landing page of blog posts only if a blog post is modified, for example). Much better than the default behavior of aggressive clearing “just in case”.

I hope this helps people adopt Varnish. I am also available via my consulting business New Leaf Digital for paid implementation, strategic advice, or support for Varnish-aided sites.

Nov 29 '11 1:06pm

Parse Drupal watchdog logs in syslog (using node.js script)

Drupal has the option of outputting its watchdog logs to syslog, the file-based core Unix logging mechanism. The log in most cases lives at /var/log/messages, and Drupal's logs get mixed in with all the others, so you need to cat /var/log/messages | grep drupal to filter.

But then you still have a big text file that's hard to parse. This is probably a "solved problem" many times over, but recently I had to parse the file specifically for 404'd URLs, and decided to do it (partly out of convenience but mostly to learn how) using Node.js (as a scripting language). Javascript is much easier than Bash at simple text parsing.

I put the code in a Gist, node.js script to parse Drupal logs in linux syslog (and find distinct 404'd URLs). The last few lines of URL filtering can be changed to any other specific use case you might have for reading the logs out of syslog. (This could also be used for reading non-Drupal syslogs, but the mapping applies keys like "URL" which wouldn't apply then.)

Note the comment at the top: to run it you'll need node.js and 2 NPM modules as dependencies. Then take your filtered log (using the grep method above) and pass it as a parameter, then read the output on screen or redirect it with > to another file.
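As a rough sketch of the approach (not the Gist itself), the core of such a script just reads the filtered file, splits Drupal's pipe-delimited watchdog fields, and prints each distinct 404'd URL once; the field index below is an assumption that depends on your syslog format string:

var fs = require('fs');

var lines = fs.readFileSync(process.argv[2], 'utf8').split('\n');
var seen = {};

lines.forEach(function(line) {
  if (line.indexOf('page not found') === -1) return;  // only 404 watchdog entries
  var fields = line.split('|');
  var url = fields[4];  // request_uri in the default format (illustrative index)
  if (url && !seen[url]) {
    seen[url] = true;
    console.log(url);
  }
});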

Nov 25 '11 11:59am

Migrating a static 960.gs grid to a responsive, semantic grid with LessCSS

The layout of Antiques Near Me (a startup I co-founded) has long been built using the sturdy 960.gs grid system (implemented in Drupal 6 using the Clean base theme). Grids are very helpful: They allow layouts to be created quickly; they allow elements to be fit into layouts easily; they keep dimensions consistent; they look clean. But they have a major drawback that always bothered me: the grid-X classes that determine an element's width are in the HTML. That mixes up markup/content and layout/style, which should ideally be completely separated between the HTML and CSS.

The rigidity of an in-markup grid becomes especially apparent when trying to implement "responsive" design principles. I'm not a designer, but the basic idea of responsive design for the web, as I understand it, is that a site's layout should adapt automagically to the device it's viewed in. For a nice mobile experience, for example, rather than create a separate mobile site - which I always thought was a poor use of resources, duplicating the content-generating backend - the same HTML can be used with @media queries in the CSS to make the layout look "native".

(I've put together some useful links on Responsive Design and @media queries using Delicious. The best implementation of a responsive layout that I've seen is on the site of FourKitchens.)

Besides the 960 grid, I was using LessCSS to generate my styles: it supports variables, mix-ins, nested styles, etc; it generally makes stylesheet coding much more intuitive. So for a while the thought simmered, why not move the static 960 grid into Less (using mixins), and apply the equivalent of grid-X classes directly in the CSS? Then I read this article in Smashing on The Semantic Grid System, which prescribed pretty much the same thing - using Less with a library called Semantic.gs - and I realized it was time to actually make it happen.

To make the transition, I forked semantic.gs and made some modifications: I added .alpha and .omega mixins (to cancel out side margins); for nested styles, I ditched semantic.gs's .row() approach (which seems to be buggy anyway) and created a .nested-column mixin instead. I added clear:both to the .clearfix mixin (seemed to make sense, though maybe there was a reason it wasn't already in).

To maintain the 960.gs dimensions and classes (as an intermediary step), I made a transitional-960gs.less stylesheet with these rules: @columns: 16; @column-width: 40; @gutter-width: 20;. Then I made equivalents of the .grid_X classes (as Clean's implementation had them) with an s_ prefix:

.s_container, .s_container_16 {
  margin-left: auto;
  margin-right: auto;
  width: @total-width;
  .clearfix();
}
.s_grid_1 {
  .column(1);
}
.s_grid_2 {
  .column(2);
}
...
.s_grid_16 {
  .column(16);
}

The s_grid_X classes were purely transitional: they allowed me to do a search-and-replace from grid_ to s_grid_ and remove the 960.gs stylesheet, before migrating all the styles into semantic equivalents. Once that was done, the s_grid_ classes could be removed.

960.gs and semantic.gs also implement their columns a little differently, one with padding and the other with margins, so what was actually a 1000px-wide layout with 960.gs became a 960px layout with semantic.gs. To compensate for this, I made a wrapper mixin applied to all the top-level wrappers:

.wide-wrapper {
  .s_container;
  padding-right: 20px;
  padding-left: 20px;
  .clearfix();
}

With the groundwork laid, I went through all the grid_/s_grid_ classes in use and replaced them with purely in-CSS semantic mixins. So if a block had a grid class before, now it only had a semantic ID or class, with the grid mixins applied to that selector.

Once the primary layout was replicated, I could make it "respond" to @media queries, using a responsive.less sheet. For example:

/* iPad in portrait, or any screen below 1000px */
@media only screen and (max-device-width: 1024px) and (orientation: portrait), screen and (max-width: 999px) {
  ...
}
 
/* very narrow browser, or iPhone -- note that <1000px styles above will apply here too! 
note: iPhone in portrait is 320px wide, in landscape is 480px wide */
@media only screen and (max-device-width: 480px), only screen and (-webkit-min-device-pixel-ratio: 2), screen and (max-width: 499px) {
  ...
}
 
/* iPhone - portrait */
@media only screen and (max-device-width: 480px) and (max-width: 320px) {
  ...
}

Some vital tools for the process:

  • Less.app (for Mac), or even better, the new CodeKit by the same author, compiles and minifies the Less files instantly, so the HTML can refer to normal CSS files.
  • The iOS Simulator (part of XCode) and Android Emulator (with the Android SDK), to simulate how your responsive styles work on different devices. (Getting these set up is a project in itself).
  • To understand what various screen dimensions looked like, I added a simple viewport debugger to show the screen size in the corner of the page (written as a Drupal6/jQuery document-ready "behavior"; fills a #viewport-size element put separately in the template):
    Drupal.behaviors.viewportSize = function() {
      if (!$('#viewport-size').size()) return;
     
      Drupal.fillViewportSize = function() {
        $('#viewport-size').text( $(window).width() + 'x' + $(window).height() )
          .css('top', $('#admin-menu').height());
      };
      Drupal.fillViewportSize();
      $(window).bind('resize', function(event){
        Drupal.fillViewportSize();
      });  
    };

After three days of work, the layout is now entirely semantic, and the 960.gs stylesheet is gone. On a wide-screen monitor it looks exactly the same as before, but it now adapts to narrower screen sizes (you can see this by shrinking the window's width), and has special styles for iPad and iPhone (portrait and landscape), and was confirmed to work on a popular Android tablet. It'll be a continuing work in progress, but the experience is now much better on small devices, and the groundwork is laid for future tweaks or redesigns.

There are some downsides to this approach worth considering:

  • Mobile devices still load the full CSS and HTML needed for the "desktop" layout, even if not all the elements are shown. This is a problem for performance.
  • The stylesheets are enormous with all the mixins, compounding the previous issue. I haven't examined in depth how much of a problem this actually is, but I'll need to at some point.
  • The contents of the page can only change as much as the stylesheets allow. The order of elements can't change (unless their visible order can be manipulated with CSS floats).

To mitigate these and modify the actual content on mobile devices - to reduce the performance overhead, load smaller images, or put less HTML on the page - would probably require backend modifications that detect the user agent (perhaps using Browscap). I've been avoiding that approach until now, but with most of the work done on the CSS side, a hybrid backend solution is probably the next logical step. (For the images, Responsive Images could also help on the client side.)

See the new layout at work, and my links on responsive design. I'm curious to hear what other people do to solve these issues.

Added: It appears the javascript analog to media queries is media query lists, which are event-able. And here's an approach with media queries and CSS transition events.
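A minimal sketch of listening to a media query list, using the browser's matchMedia API with the same 499px breakpoint as responsive.less (what you do inside the handler is up to you):

var narrow = window.matchMedia('(max-width: 499px)');

function onBreakpointChange(mql) {
  if (mql.matches) {
    // the narrow layout is active - e.g. collapse a sidebar, swap in smaller images
  } else {
    // the wide layout is active
  }
}

narrow.addListener(onBreakpointChange);  // fires whenever the query starts or stops matching
onBreakpointChange(narrow);              // run once for the initial state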

Aug 10 '11 3:23pm

Drupal’s increasing complexity is becoming a turnoff for developers

I’ve been developing custom applications with Drupal for three years, a little with 4.7 and 5, primarily with 6, and lately more with 7. Lately I’ve become concerned with the trend in Drupal’s code base toward increasing complexity, which I believe is becoming a danger to Drupal’s adoption.

In general when writing code, a solution can solve the current scenario in front of us right now, or it can try to account for future scenarios in advance. I’ve seen this referred to as N-case or N+1 development. N-case code is efficient, but not robust; N+1 code is abstract and complex, and theoretically allows for an economy of scale, allowing more to be done with less code/work. In practice, it also shifts the burden: as non-developers want the code to accommodate more use cases, the developers write more code, with more complexity and abstraction.

Suppose you want to record a date with a form and save it to a database. You’d need an HTML form, a timestamp (integer) field in your schema, and a few lines of code. Throw in a stable jQuery date popup widget and you have more code but not much more complexity. Or you could imagine every possible date permutation, all theoretically accessible to non-developers, and you end up with the 14,673 lines in Drupal’s Date module.

Drupal is primarily a content management system, not simply a framework for efficient development, so it needs to account for the myriad use cases of non-developer site builders. This calls for abstracting everything into user interfaces, which takes a lot of code. However, there needs to be a countervailing force in the development process, pushing back against increasing abstraction (in the name of end-user simplicity) for the sake of preserving underlying simplicity. In other words, there is an inherent tension in Drupal (like any big software project) between keeping the UI both robust and simple, and keeping the code robust and simple - and increasingly Drupal, rather than trying to maintain a balance, has tended to sacrifice the latter.

User interfaces are one form of abstraction; N+infinity APIs - which I’m more concerned with - are another, which particularly increase underlying complexity. Drupal has a legacy code base built with partly outdated assumptions, and developers adding new functionality have to make a choice: rewrite the old code to be more robust but less complex, or add additional abstraction layers on top? The latter takes less time but easily creates a mess. For example: Drupal 7 tries to abstract nodes, user profiles, actions, etc into “entities” and attach fields to any kind of entity. Each of these still has its legacy ID, but now there is an additional layer in between tying these “entity IDs” to their types, and then another layer for “bundles,” which apply to some entity types but not others. The result from a development cycle perspective was a Drupal 7 release that, even delayed a year, lacked components of the Entity system in core (they moved to “contrib”). The result from a systems perspective is an architecture that has too many layers to make sense if it were built from scratch. Why not, for example, have everything be a node? Content as nodes, users as nodes, profiles as nodes, etc. The node table would need to lose legacy columns like “sticky” - they would become fields - and some node types like “user” might need fixed meanings in core. Then three structures get merged into one, and the system gets simpler without compromising flexibility.

I recently tried to programmatically use the Activity module - which used to be a simple way to record user activity - and had to “implement” the Entities and Trigger APIs to do it, requiring hundreds of lines of code. I gave up on that approach and instead used the elegant core module Watchdog - which, with a simple custom report pulling from the existing system, produced the same end-user effect as Activity with a tiny fraction of the code and complexity. The fact that Views doesn’t natively generate Watchdog reports and Rules doesn’t report to Watchdog as an action says a lot, I think, about the way Drupal has developed over the last few years.

On a Drupal 7 site I’m building now, I’ve worked with the Node API, Fields API, Entities API, Form API, Activity API, Rules API, Token API... I could have also worked with the Schema, Views, Exportables, Features, and Batch APIs, and on and on. The best definition I’ve heard for an API (I believe by Larry Garfield at Drupalcon Chicago) is “the wall between 2 systems.” In a very real way, rather than feeling open and flexible, Drupal’s code base increasingly feels like it’s erecting barriers and fighting with itself. When it’s necessary to write so much code for so many APIs to accomplish simple tasks, the framework is no longer developer-friendly. The irony is, the premise of that same Drupalcon talk was the ways APIs create “power and flexibility” - but that power has come at great cost to the developer experience.

I’m aware of all these APIs under the hood because I’ve seen them develop for a few years. But how is someone new to Drupal supposed to learn all this? (They could start with the Definitive Guide to Drupal 7, which sounds like a massive tome.) Greater abstraction and complexity lead to a steeper learning curve. Debugging Drupal - which requires “wrapping your head” around its architecture - has become a Herculean task. Good developer documentation is scarce because it takes so much time to explain something so complex.

There is a cycle: the code gets bigger and harder to understand; the bugs get more use-case-specific and harder to nail down; the issue queues get bloated; the developers have less time to devote to code quality improvement and big-picture architecture decisions. But someone wants all those use cases handled, so the code gets bigger and bigger and harder to understand... as of this writing, Drupal core has 9166 open issues, the Date module has 813, Rules has 494. Queues that big need a staff of dozens to manage effectively, and even if those resources existed, the business case for devoting them can’t be easy to make. The challenge here is not simply in maintaining our work; it’s in building projects from the get-go that aren’t so complicated as to need endless maintenance.

Some other examples of excessive complexity and abstraction in Drupal 7:

  • Field Tokens. This worked in Drupal 6 with contrib modules; to date with Drupal 7, this can’t be done. The APIs driving all these separate systems have gotten so complex, that either no one knows how to do this anymore, or the architecture doesn’t allow it.
  • The Media module was supposed to be an uber-abstracted API for handling audio, video, photos, etc. As of a few weeks ago, basic YouTube and Vimeo integration didn’t work. The parts of Media that did work (sponsored largely by Acquia) didn’t conform to long-standing Drupal standards. Fortunately there were workarounds for the site I was building, but their existence is a testament to the unrealistic ambition and excessive complexity of the master project.
  • The Render API, intended to increase flexibility, has compounded the old problem in Drupal of business logic being spread out all over the place. The point in the flow where structured data gets rendered into HTML strings isn’t standardized, so knowing how to modify one type of output doesn’t help with modifying another. (Recently I tried to modify a date_select field at the code level to show the date parts in a different order - as someone else tried to do a year ago - and gave up after hours. The solution ended up being in the UI - so the end-user was given code-free power at the expense of the development experience and overall flexibility.)

Drupal 8 has an “Initiatives” structure for prioritizing effort. I’d like to see a new initiative, Simplification: Drupal 8 should have fewer lines of code, fewer APIs, and fewer database tables than Drupal 7. Every component should be re-justified and eliminated if it duplicates an existing function. And the Drupal 8 contrib space should follow the same principles. I submit that this is more important than any single new feature that can be built, and that if the codebase becomes simpler, adding new features will be easier.

A few examples of places I think are ripe for simplifying:

  • The Form API has too much redundancy. #process handlers are a bear to work with (try altering the #process flow of a date field) and do much the same as #after_build.
  • The render API now has hook_page_build, hook_page_alter, hook_form_alter, hook_preprocess, hook_process, hook_node_view, hook_entity_view, (probably several more for field-level rendering), etc. This makes understanding even a well-architected site built by anyone else an enormous challenge. Somewhere in that mix there’s bound to be unnecessary redundancy.

Usable code isn’t a luxury, it’s critical to attracting and keeping developers in the project. I saw a presentation recently on Rapid Prototyping and it reminded me how far Drupal has come from being able to do anything like that. (I don’t mean the rapid prototype I did of a job listing site - I mean application development, building something new.) The demo included a massive data migration accomplished with 4 lines of javascript in the MongoDB terminal; by comparison, I recently tried to change a dropdown field to a text field (both identical strings in the database) and Drupal told me it couldn’t do that because “the field already had data.”

My own experience is that Drupal is becoming more frustrating and less rewarding to work with. Backend expertise is also harder to learn and find (at the last meetup in Boston, a very large Drupal community, only one other person did freelance custom development). Big firms like Acquia are hiring most of the rest, which is great for Acquia, but skews the product toward enterprise clients, and increases the cost of development for everyone else. If that’s the direction Drupal is headed - a project understood and maintained only by large enterprise vendors, for large enterprise users, giving the end-user enormous power but the developer a migraine - let’s at least make sure we go that way deliberately and with our eyes open. If we want the product to stay usable for newbie developers, or even people with years of experience - and ultimately, if we want the end-user experience to work - then the trend has to be reversed toward a better balance.

Aug 2 '11 10:51am

Drupal 7 / Drush tip: Find all field content using a text format

I'm working on a Drupal 7 site and decided one of the text formats ("input formats" in D6) was redundant. So I disabled it, and was warned that "any content stored with that format will not be displayed." How do I know what content is using that format? This little shell snippet told me:

drush sql-query "show tables like 'field_data_%'" | tail -n+2 | while read TABLE; do
  FIELD=`drush sql-query "show fields in $TABLE like '%format%';" | tail -n+2 | awk '{ print $1 }'`; 
  echo "$TABLE - $FIELD";
    if [[ "$FIELD" != "" ]]; then
     drush sql-query "select * from ${TABLE} where ${FIELD}='old_format''";
    fi
done

You'll need to run that in the terminal from your site's webroot and have Drush installed. Rename old_format to the code name of your text format. (drush sql-query "select * from {filter_format}" will show you that.) It'll work as a single command if you copy and paste it (as multiple lines or with line breaks stripped - the semi-colons indicate the end of each statement).

Breaking it down:

  1. Find all the tables used for content storage.
  2. Find all the 'format' fields in those tables. (They'll only exist if the field uses formats.)
  3. Find all the rows in those tables matching the format you want to delete. Alternatively, if you want everything to be in one format, you can see what does not use that format by changing the ${FIELD}=... condition to ${FIELD}<>'new_format'.

This won't fix anything for you, it'll just show you where to go - look at the entity_id columns (that's the nid if the content is nodes) and go edit that content.

Also note, this is checking the field_data_ tables, which (as far as I can tell) track the latest revision. If you are using content revisions you might want to change the first query to show tables like 'field_revision_%'. I'm not sure why D7 duplicates so much data, but that's for another post.

Update: I modified the title from Find all content to Find all field content because of the comment by David Rothstein below.

May 18 '11 11:48pm

Workaround to variables cache bug in Drupal 6

I run my Drupal crons with Drush and Jenkins, and have been running into a race condition frequently where it tells me "Attempting to re-run cron while it is already running" and fails to run.

That error is triggered when the cron_semaphore variable is found. It's set when cron starts, and is deleted when cron ends, so if it's still there, cron is presumably still running. Except it wasn't really - the logs show the previous crons ended successfully.

I dug into it a little further: drush vget cron_semaphore brought up the timestamp value of the last cron, like it was still set. But querying the `variables` table directly for cron_semaphore brought up nothing! That tipped me off to the problem - it was caching the variables array for too long.

Searching the issue brought up a bunch of posts acknowledging the issue, and suggesting that people clear their whole cache to fix it. I care about performance on the site in question, so clearing the whole cache every 15 minutes to run cron is not an option.

The underlying solution to the problem is very complex, and is the subject of several ongoing Drupal.org threads.

Following Drupal core dev policy now (which is foolish IMHO), if this bug is resolved, it has to be resolved first in 8.x (which won't be released for another 2-3 years), then 7.x, then 6.x. So waiting for that to work for my D6 site in production isn't feasible.

As a stopgap, I have Jenkins clear only the 'variables' cache entry before running cron:
drush php-eval "cache_clear_all('variables', 'cache');"

That seems to fix the immediate problem of cron not running. It's not ideal, but at least it doesn't clear the entire site cache every 15 minutes.

May 8 '11 2:45pm

Three Quirks of Drupal Database Syntax

Database query syntax in Drupal can be finicky, but doing it right - following the coding standards as a matter of habit - is very important. Here are three "gotchas" I've run into or avoided recently:

1. Curly braces around tables: Unit testing with SimpleTest absolutely requires that table names in all your queries be wrapped in {curly braces}. SimpleTest runs in a sandbox with its own, clean database tables, so you can create nodes and users without messing up actual content. It does this by using the existing table prefix concept. If you write a query in a module like this,
$result = db_query("SELECT nid from node");
when that runs in test, it will load from the regular node table, not the sandboxed one (assuming you have no prefix on your regular database). Having tests write to actual database tables can make your tests break, or real content get lost. Instead, all queries (not just in tests) should be written like:

$result = db_query("SELECT nid from {node} node");
(The 2nd node being an optional alias to use later in the query, for example as node.nid JOINed to another table with a nid column.) When Drupal runs the query, it will prefix {node} by context as site_node, or simpletestXXnode, to keep the sandboxes separate. Make sure to always curly-brace your table names!

2. New string token syntax: Quotation marks around string tokens are different in Drupal 6 and 7. D7 uses the new "DBTNG" abstraction layer (backported to D6 as the DBTNG module). In Drupal 6, you'd write a query with a string token like this:
$result = db_query("SELECT nid from {node} where title='%s'", 'My Favorite Node');
Note the single quotation marks around the placeholder %s.

With D7 or DBTNG, however, the same static query would be written:
$result = db_query("SELECT nid from {node} WHERE title = :title", array(':title' => 'My Favorite Node'));
No more quotes around the :title token - DBTNG puts it in for you when it replaces the placeholder with the string value.

3. Uppercase SQL commands: Make sure to use UPPERCASE SQL commands (SELECT, FROM, ORDER BY, etc) in queries. Not doing so is valid syntax 99% of the time, but will occasionally trip you up. For example: the db_query_range function (in D6) does not like lowercase from. I was using it recently to paginate the results of a big query, like select * from {table}. The pagination was all messed up, and I didn't know why. Then I changed it to SELECT * FROM {table} and it worked. Using uppercase like that is a good habit, and in the few cases where it matters, I'll be glad I'm doing it from now on.