Tag1 Consulting

Performance and Scalability Experts

Blogs

How to Maintain Contrib Modules for Drupal and Backdrop at the Same Time - Part 3

This is the third in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

How to Maintain Contrib Modules for Drupal and Backdrop at the Same Time - Part 2

This is the second in a series of blog posts about the relationship between Drupal and Backdrop CMS, a recently-released fork of Drupal. The goal of the series is to explain how a module (or theme) developer can take a Drupal project they currently maintain and support it for Backdrop as well, while keeping duplicate work to a minimum.

When All Else Fails, Reflect on the Fail

While coding the MongoDB integration for Drupal 8, I hit a wall, first with the InstallerKernel, which was easy to remedy with a simple core patch, but then a similar problem occurred with the TestRunnerKernel, and that one is not so simple to fix: these kernels were not made with extensibility in mind. You might hit other walls too -- the code below is not MongoDB-specific. But note how unusual this is: you won’t hit similar problems often. Drupal 8 is very extensible, but it has its limits.
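For context, here is a minimal sketch of the extension point Drupal 8 normally offers: a module can ship a service provider class that alters the container as it is built. The module name and the swapped-in class below are hypothetical, and this is not code from the MongoDB integration itself; the point of the post is that the special-purpose kernels were not built with this kind of extensibility in mind.

    <?php

    // A minimal, hypothetical sketch: Drupal 8 discovers a class named
    // Drupal\{module}\{Module}ServiceProvider and lets it alter service
    // definitions while the container is compiled.

    namespace Drupal\mymodule;

    use Drupal\Core\DependencyInjection\ContainerBuilder;
    use Drupal\Core\DependencyInjection\ServiceProviderBase;

    class MymoduleServiceProvider extends ServiceProviderBase {

      /**
       * Points an existing service at an alternative implementation.
       */
      public function alter(ContainerBuilder $container) {
        if ($container->hasDefinition('keyvalue')) {
          // MongoKeyValueFactory is a made-up class name for illustration.
          $container->getDefinition('keyvalue')
            ->setClass('Drupal\mymodule\MongoKeyValueFactory');
        }
      }

    }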

How to Maintain Contrib Modules for Drupal and Backdrop at the Same Time

Part 1 - Reuse the Same Code

In mid-January, the first version of Backdrop CMS was released. Backdrop is a fork of Drupal that adds some highly-anticipated features and API improvements to the core Drupal platform while focusing on performance, usability, and developer experience.

yumrepos Puppet Module

Earlier this year we undertook a project to upgrade a client's infrastructure to all new servers, including a migration away from old Puppet scripts that were starting to show their age after many years of server and service changes. During this process, we created a new set of Puppet scripts using Hiera to separate configuration data from modules.

BDD: It's about value

I was drawn to Behavior Driven Development the moment I was pointed toward Behat, not just for the automation but because it systematized some things I already did pretty well and gave me a vocabulary for them. It let me teach some of those skills instead of just using them. At DrupalCon Amsterdam, Behat and Mink architect Konstantin Kudryashov gave a whole new dimension to that.

Watching remote tests run

It can be incredibly helpful when you're troubleshooting Behat tests to watch the tests execute. It's fairly straightforward to install Selenium locally and watch @javascript tests execute in your browser of choice, but a bit more challenging to do so remotely.

Here's how I set up to do that on a remote Ubuntu 14.04 server.

VNC on the Server

  1. Install dependencies:

    sudo apt-get install xvfb tightvncserver xterm firefox


Not enough entropy

I was writing documentation for using VNC to watch Behat tests being executed with the selenium2 driver on a remote server, when I ran into a strange behavior.

I'd set up Behat 3 on my desktop and was successfully running Selenium Server 2.42.2 with Firefox 31. But after following the same setup process on a clean Digital Ocean VM, the Behat tests wouldn't run.

Drush RPMs

I was recently working on scripting some OS installs of CentOS 5 and 6. As part of the deployment, I required that drush be installed. Now, I’ve considered using the drush package found in EPEL, but it doesn’t meet my needs for a number of reasons:

  • It is built for Drupal 6.
  • It has a dependency on the Drupal 6 package in EPEL, meaning I have to install that as well if I want to pull in drush.

Tackling oversized cache items in Drupal

Drupal’s highly dynamic and modular nature means that many of the central core and contrib subsystems and modules need to maintain a large amount of meta-data.

Rebuilding the data on every request would be very expensive, and usually when one part of the data is needed during a request, another part will be needed later in the same request. Since just about every request needs to check variable_get(), load entities with fields attached, etc., the meta-data needs to be loaded too.

The pattern followed by most subsystems is to put the data into a single large cache item and statically cache it. The more modules you have, the larger these cache items become — since more modules mean more variables, hook_schema() and hook_theme() implementations, etc. And the same happens via configuration with field instances, content types and default views.
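As a rough illustration of that pattern, here is a hypothetical Drupal 7-style example; the hook name and cache ID are made up, and no single subsystem looks exactly like this:

    <?php

    // Build the meta-data once, store it as a single cache item, and keep
    // it in a static so repeated calls in the same request are free.
    function mymodule_get_registry() {
      $registry = &drupal_static(__FUNCTION__);
      if (!isset($registry)) {
        if ($cache = cache_get('mymodule_registry')) {
          $registry = $cache->data;
        }
        else {
          // One entry per module implementing the hook, so the item grows
          // with every module installed on the site.
          $registry = array();
          foreach (module_implements('mymodule_info') as $module) {
            $registry[$module] = module_invoke($module, 'mymodule_info');
          }
          cache_set('mymodule_registry', $registry);
        }
      }
      return $registry;
    }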

This affects many of the central core subsystems — without which it’s impossible to run a Drupal site — as well as some of the most popular contrib modules. The theme, schema, path alias, variables, field API and modules system all have similar issues in core. Views and CCK have similar issues in contrib.

With just a stock Drupal core install, none of this is too noticeable, but once you hit 100 or 200 installed modules, suddenly every request needs to fetch and unserialize() potentially dozens of megabytes of data. Some of the largest cache items, like the theme registry, can grow too large for MySQL’s max_allowed_packet or the memcache default slab size. Since the items are statically cached, these caches can easily add 30MB or 40MB to PHP memory usage combined.
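One quick way to check whether a site is affected (a sketch, assuming MySQL and the default database cache backend) is to look at the largest rows in a cache table and compare them against max_allowed_packet and memcached's 1MB default item size:

    <?php

    // A diagnostic sketch in Drupal 6 style: list the ten largest items
    // in the default cache table, sizes in megabytes.
    $result = db_query("SELECT cid, LENGTH(data) AS size FROM {cache} ORDER BY size DESC LIMIT 10");
    while ($row = db_fetch_object($result)) {
      printf("%s: %.2f MB\n", $row->cid, $row->size / 1048576);
    }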

The full extent of this problem became apparent when I profiled the WebWise Symantec Connect site (Drupal.org case study). Symantec Connect currently runs on Drupal 6 and, as a complex site with a lot of social functionality, has a reasonably large number of installed modules.
