Our Blog
Solutions. Made Simple.

Christmas celebration with Eledecks

We were kindly invited out earlier this week to join the Eledecks team for a Christmas dinner (www.eledecks.com). We have been working for them for nearly 6 years now and have watched their business go from strength to strength. Eledecks provides “white label” HR services to companies via a network of resellers, and the back-end programming has been very interesting with a lot of variety. Because we came together with them after they had already been established for a number of years, we have had to deal with a substantial amount of legacy systems support. Juggling their constant desire to add new features to their system with our goal to improve the existing code, while always ensuring security and quality, has taken some fine balancing, and we’re very proud of the work we have done for them. They are also a lovely team to work with, so thanks again to Carolyn and her team for taking us out, and long may we continue to work together!



Eledecks is one of the many clients that we have picked up because they have chosen to part company with their previous developers. If you are fed up with developers who try to dazzle you with technobabble, who charge a lot for doing very little, and who dodge your calls or prefer to point fingers rather than solve problems, then perhaps your New Year’s Resolution could be to talk to us too. Life is too short to stick with bad suppliers, and the short term pain of a sudden switch is often better than the “death by a thousand cuts” of sticking with the devil you know. If this sounds like your circumstances, we would love to hear from you.

Drupal: Two Steps Forward, One Step Back

The Drupal developers have a lovely habit of using the word “regression” in their release notes to refer politely to fixing bugs that were introduced in previous versions. So where Linus Torvalds might famously record “I fixed this ****ing idiot’s code”, the more genteel Drupal community will record a phrase such as “Fixed regression in the link widget where help text does not show”. It all boils down to the same thing though – going back to fix something that somebody just broke.

My question was: is this a case of “two steps forward, one step back”? Fortunately I could see a simple way to quantify things. All of the release notes are listed on pages starting from here: https://www.drupal.org/project/drupal/releases … so with a quick bit of command-line coding, I was able to knock together a few data trawls and count the number of times the phrase has been used in updates to the last three major versions of Drupal – 8, 7 and 6.

The code for Drupal 6 was easy – there were 38 revisions posted for this major version, and the development team was very consistent about numbering them. These few bash commands were all I needed to pull down the release notes and save every line mentioning “regression”, while filtering out the few that said “no known regressions”.

for start in {0..38}; do
  curl -s "https://www.drupal.org/project/drupal/releases/6.$start" | grep regression | grep -v "no known regressions" >> drupal_6_regression_mentions
done

I will save the actual figures for the end of the article.
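With the mentions saved one per line, the counting itself is just a matter of wc. Here is a quick sanity check of that step, using a small stand-in file so the example works without re-running the downloads (the real file comes from the loop above):

```shell
# A stand-in for the real drupal_6_regression_mentions file, so this
# example is self-contained.
printf 'Fixed regression in the link widget\nFixed regression in menu links\n' > sample_mentions
# One mention per line, so a line count is a mention count:
wc -l < sample_mentions
```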

The code for Drupal 7 was very similar, except there were 57 revisions for that version:

for start in {0..57}; do
  curl -s "https://www.drupal.org/project/drupal/releases/7.$start" | grep regression | grep -v "no known regressions" >> drupal_7_regression_mentions
done

For Drupal 8, the current version, they have moved the goalposts a bit – revision numbers now have major and minor parts, and scattered in amongst them are “dev”, “alpha”, “beta” and “rc” versions. In the interest of fairness I decided to ignore all of those. To do this, I looped across all of the pages of the Drupal 8 releases, created a list of all the links to release note pages that didn’t use those phrases in their names, and then looped across that list to pull down each page and then look for my keyword, “regression”. That resulted in a list of 38 separate revisions so far in Drupal 8, and my code looks like this:

for start in {0..4}; do
  curl -s "https://www.drupal.org/project/drupal/releases?api_version%5B0%5D=7234&page=$start" | grep "/project/drupal/releases/" | egrep -v "(beta|alpha|rc|dev)" | sed -e "s:.*href=\"\([^\"]*\)\".*:\1:g" >> drupal_8_notes.tmp
done
sed -e 's#https://www.drupal.org##g' < drupal_8_notes.tmp | sort | uniq > drupal_8_notes
while read url; do
  curl -s "https://www.drupal.org${url}" | grep regression | grep -v "no known regressions" >> drupal_8_regression_mentions
done < drupal_8_notes

So now for the fun part, the figures. I’ll lay them out in table form and then give a little commentary:

Drupal Version   Number of Releases   Number of Regressions   Ratio of Progressions to Regressions
6                38                   5                       7 and 1/2
7                57                   37                      1 and 1/2
8                39                   17                      2 and 1/2

The last figure gives the number of “steps forward” that are taken before each single step back, with the figures rounded to the nearest “half step” (or “stumble”, as I prefer to think of them).
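For the curious, that rounding to the nearest half step is easy to reproduce on the command line. This is just a sketch using awk (the `ratio` helper is my own invention for this illustration), fed with the figures from the table:

```shell
# Divide releases by regressions, then round to the nearest half step:
# double the ratio, round to the nearest whole number, halve it again.
ratio() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%g\n", int(a/b * 2 + 0.5) / 2 }'; }
ratio 38 5    # Drupal 6: prints 7.5
ratio 57 37   # Drupal 7: prints 1.5
ratio 39 17   # Drupal 8: prints 2.5
```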

So what do we see?  Well, Drupal 6 looks like it was really successful, with 38 steps forward and only 5 steps back.  Of course, those of us who used it in anger know this definitely was not the case (D6 was, in technical terms, a pig!), so presumably they simply did not use the term “regression” until late in its life.  Perhaps someone else would care to investigate further…

Drupal 7 was to all intents and purposes still a bit of a mess – with only 1 and 1/2 steps forward for each step back. It certainly did sometimes feel like that at the time – we apply each of these patches to our clients’ sites as they arrive, and with the best will in the world, Drupal is fundamentally a community-led project and there is not all that much real accountability. Getting Drupal developers to agree on how something should be done is like herding cats, so errors and contradictions do often slip through.

In essence, some of what you gain in the short term from the flexibility and low cost of the Drupal platform you go on to lose in the longer term because of its greater maintenance requirements. This harks back to my previous blog – I would still never recommend Drupal or Magento to small companies because their maintenance requirements are just too high. Compared with WordPress, for instance, the update cycle and mechanism seem archaic: modern WordPress installations auto-update, and they very rarely fail if they have been built on popular, standard plugins, while Magento and Drupal still require a lot of ongoing developer support. While companies like SteamDesk are happy to make a living providing this, we really feel for the smaller clients who were pushed down the Magento/Drupal route in the past by inexperienced developers when it really was not the best choice for their requirements.

Coming back to the story, we finish with Drupal 8, which certainly appears to be a reasonable improvement over Drupal 7. There are now 2 and 1/2 steps forward for each step back – still uncomfortably close to the proverbial “two steps forward, one step back”, but at least moving in the right direction. So apparently things are currently looking up in the Drupal community, hooray!

Caveat: the figures above are a bit silly really, because a new release is very rarely a “single step forward” (it usually includes multiple, sometimes even dozens of improvements), while what I am counting as a regression typically is a single step backward. So the actual ratio of regressions to progressions is going to be much lower than I’ve shown above. But that wouldn’t have made for such a fun story!

Magento Hacks Revisited


We were recently approached to help a client with their Magento website, and as a matter of course we first started with a check of what version they were running, what their current patch status was, and so on.  We were immediately alerted to the fact that the site had not been patched for some time, and had recently been hacked.  There’s a great online tool to help with this at MageReport.com:


It was showing lots of red marks on the report, most of which were warnings that patches had not been installed, but our eyes were drawn to the worst: “Credit Card Hijack detected?”, which meant that the site wasn’t just vulnerable, it was already compromised. This was easily confirmed by looking at the source code for the home page. At the end of it, just before the closing </html> tag, there’s an obfuscated bit of JavaScript (please excuse the ham-fisted redacting of the actual domain name)…



I could easily work out what the script does by using the debugging console in my browser (I’m using Chrome here, but you can do this in pretty much any browser). Copy the JavaScript, all except for the ‘eval’ command at the end…



Then paste it into the JavaScript debug console in the (F12) developer tools:



Then for this particular script when you hit return it shows you what the variable x evaluated to (variations on this technique pretty much always work, although you do have to know what you are doing to avoid triggering some malware by mistake):
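The same swap works outside the browser too: replace the final eval with a print statement before evaluating the script anywhere. A sketch with a toy snippet standing in for the real malware (the real thing should only ever be handled in a sandbox):

```shell
# A toy obfuscated script with the same shape as the real one: it
# builds its payload in a variable x and then eval()s it.
printf 'var x = ["al", "er", "t(1)"].join("");\neval(x);\n' > obfuscated.js
# Swap the eval for console.log so pasting the result into a debug
# console prints the payload instead of executing it:
sed 's/eval(x)/console.log(x)/' obfuscated.js
```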



We’ve seen very similar code before; our assumption is that this is getting dropped on as part of a scripted “drive-by” attack. If this succeeds, the script presumably keeps a note that your site is vulnerable and moves on to the next one. Some time later, the hackers visit again and drop back doors all over the place (we found more with just a cursory glance around the site).

In the screen grab above you can see the “send” function posts the form data to a destination page which has been named to sound like an innocuous jQuery library download ( trafficanalyzer.biz/lib/jquery-1.9.1.min.php ) but actually isn’t that at all – it is a script to capture the stolen data (the .php is another good clue; jQuery is usually distributed as a .js library file). I haven’t got the whole script in view in the screenshot above – there is a bit which has scrolled off the top that limits it to triggering only on the site’s checkout/cart pages. To add insult to injury, there’s also a simple bit of code which double-checks whether any credit card numbers it is stealing look valid, using a simple regexp check. If it thinks it might have found a good credit card number, it alters the destination URL slightly. This makes me laugh a bit, to be honest – it is so cheeky of the hacker to use the victim’s processing capacity when it would have been trivial to do this check at the receiving end. They really do seem to be rubbing the victim’s nose in it.
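A “looks valid” check of that sort can be as crude as a single regular expression. This is not the hacker’s actual code – just a sketch of the kind of pattern such scripts use, a standalone run of 13 to 16 digits, the shape of most card numbers (the strings below are well-known test values, not real cards; `looks_like_card` is my own name for the helper):

```shell
# Match a standalone run of 13-16 digits, the length of most card numbers.
looks_like_card() { echo "$1" | grep -Eq '(^|[^0-9])[0-9]{13,16}([^0-9]|$)'; }
looks_like_card "4111111111111111" && echo "possible card number"   # 16 digits: matches
looks_like_card "12345" || echo "too short, ignored"                # no match
```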

We know that the hacker must have acquired admin-level access to Magento in order to have dropped this hack in place. They had also turned on the on-site Sage Pay mechanism, so that rather than the two off-site ones which the client *thought* were enabled, there were now three options when you got to the checkout step:


The third option has been turned on because the other two would take the user off-site to enter credit card details. This is no good for the hacker (their script stops working when you go off to the remote payment gateway). By keeping you on-site they ensure that they will definitely get your credit card details, as well as the personal details (address, phone number, email, etc.) that they would get whichever payment mechanism you used.

This hack almost certainly happened because of missing patches and/or poor password security. You might think you just need to clean up the site – indeed there are tips at this web address for how to remove the script via the Magento control panel:


… (plus of course, you should remove the on-site payment mechanism). But our experience is that once someone has access to the Magento admin area they will have compromised the site in multiple ways and left themselves back doors – if there isn’t a clean copy of the site in source control to review against, I would always strongly advise re-building rather than attempting repairs. Magento is a heavy beast (30,000-plus files in a typical install); the best programmer in the world couldn’t sensibly check all of those for hacks without the appropriate reference copies to check against, which means having the original versions of all of the plugins/extensions that the site uses. Without source control, the chances of getting the right versions of all those files and making full comparisons in anything like a sensible timescale are absolutely zero. I wrote a blog entry detailing our processes a year or so ago:
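Mechanically, the check against a reference copy boils down to a recursive diff. A sketch with tiny hypothetical directories standing in for the 30,000-file real thing:

```shell
# "reference" is the pristine copy, "site" the possibly-hacked one.
# Both are mocked up here purely for illustration.
mkdir -p reference site
echo '<?php // clean code' > reference/index.php
echo '<?php // clean code plus injected script' > site/index.php
echo '<?php // backdoor' > site/shell.php        # a dropped Trojan file
# -r recurses, -q just names files that differ or exist on one side only.
diff -rq reference site || true   # diff exits non-zero when it finds differences
```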


It gets worse… In this particular instance we were quickly able to see that the hacker had also dropped a similar script into the “prototype.js” library which all site pages used (even the admin areas), so it was clear that if someone just followed the “how to fix credit card hijack” instructions above, they would have leaked their new admin passwords to the hacker as soon as they next logged in.
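Spotting that kind of library injection quickly usually comes down to grepping the bundled JavaScript for constructs that have no business in a stock library. A sketch with a small mocked-up file (the base64 string is just the word “hijack”, not real malware):

```shell
# Mock up an "infected" copy of a bundled library, purely for illustration.
mkdir -p js/prototype
printf 'var Prototype = {};\neval(atob("aGlqYWNr"));\n' > js/prototype/prototype.js
# eval(atob(...)) almost never appears in legitimate stock libraries,
# so any hit here deserves immediate attention:
grep -rn 'eval(atob' js/
```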

Of course, the real solution is to put a company like SteamDesk on retainer (our packages start from 4 hours/month), and we will monitor your site, update it straight away whenever new security patches are released, and add source control/backup mechanisms so that we can *recover* a site after it has been damaged (because with the best will in the world, you might still suffer from a zero-day attack, or a malicious insider, or a member of staff exercising poor password control) rather than have to start all over again.

If This Then That!

We’re just playing with If This Then That ( https://ifttt.com )

We are working on a new AI project to monitor things and give you warnings when they might be going wrong. If This Then That is a great web service which lets you connect things together, and we’re demoing it to someone by making it send a text message when a new blog post appears.

So this is the blog post.

And here’s a gratuitous picture of everyone at SteamDesk, to make it more interesting.

Rose Tinted Glasses

A client just emailed me:
> Thanks 🙂 … life was a lot easier with a beige box with IE installed!

I had to laugh a little. It’s easy to look at how complicated things have become nowadays and look back to when things were easier/better in the past.  Probably around the same time that America used to be “great”, and everyone was happy, fit and well.

But it doesn’t always hold up to closer inspection, does it?  Here is how I replied:

Hehe, you’ve got your rose-tinted nostalgia glasses on, and you’re forgetting back around 2000 when IE 2.0, 4.0 and 6.0 were all being used in similar numbers at the same time, and they had wildly different capabilities – plus there were Netscape 6, Firefox, Konqueror and Opera… and by 2008 things had become really, really messy, with about a dozen major browser/OS combinations to worry about. It wasn’t until about 2012 that issues with differences in rendering more or less went away and CSS libraries like Bootstrap started doing the heavy lifting. But that’s also when the current proliferation of screen sizes happened. Something nasty happens about every 5-6 years to make life as a developer difficult. I don’t know what is next (maybe strict enforcement of Content Security Policy rules, making it very hard to mix and match external resources?), but I do suspect that we’re about due for another big headache.

There’s a great graph here: https://en.wikipedia.org/wiki/Timeline_of_web_browsers

Maybe I’m being a Grinch, and things really were better back then, but I still prefer living in the now than the past.  We’re working with some great clients, on some really interesting projects, and I think that the coming year is going to be the most exciting yet for the company.  We have a new Artificial Intelligence / Agents project getting off the ground, and another one which got as far as a smart prototype earlier this year being picked up for some serious investment.  We’re meeting with a new major client tomorrow, Hull City Council.  Things have never looked so good!

Magento Hack Recovery (and prevention!)

We’ve just been asked to document what we do when we are approached by someone whose Magento website has been compromised. While Magento is great, just as with any other software system there are bad guys out there looking to exploit it, and sometimes they do find a chink in the armour. A successful website hack is for many people the start of a really bad day, but for us at SteamDesk it can make for an interesting few hours as we work out what has gone wrong, how to fix it, and what we can do to stop it from happening again. It all begins like this…

When someone new approaches us to do a one-off repair job we start with the following routine:

1. Take a copy of the current site and database so that we can examine it on our servers using suitable tools.

2. Compare your site with a “bare” reference copy of the same Magento version.
a. This will let us identify whether code modifications have been made correctly (e.g. within private theme folders) or incorrectly (changes to the Magento core files).
b. Check the patch history and ensure that the site is up to date with the latest security patches.

3. Identify and list any third party components that have been installed. Disable/remove any that can be identified as having been installed but not put into use. Try to get source code (with matched version numbers) for each of the components to compare against. (This is often not possible because developers do not always archive older versions of their components, especially if it is a paid-for extension).

4. Using the reference copies, make further comparisons to identify hacked/compromised files in the Magento source and/or independent “Trojan” files that have been dropped into the website.

5. Check for and remove “hidden” admin users which exist in the database but, because of hacked code, are not shown in the Magento admin panels.

6. Apply new, strong passwords to all admin users (most Magento break-ins are attributed to guessable admin passwords).

7. Relocate the entire Magento admin panel away from its standard /admin address, moving it to a non-standard address on the server to prevent “drive-by hacking”. (We would also do this for various other parts of a standard Magento system).

8. Check for malicious JavaScript that has been forced into Magento products, categories and static blocks so that it compromises site visitors.

9. Attempt to install any necessary security patches and/or upgrade the Magento version. (But we would not attempt an upgrade from a Magento 1.x site to Magento 2.x, as there is no reliable, simple upgrade mechanism to do this.)

10. When we are confident that the site is “clean”, re-deploy it to your live server.
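One trick we lean on for the comparisons in steps 2 and 4, especially when reference copies of paid-for extensions cannot be found: on a stable site, recently modified files often betray an intrusion all by themselves. A sketch using find, with a tiny mocked-up site tree standing in for a real install:

```shell
# Mock up a site tree: one old, untouched core file and one freshly
# dropped file (both are placeholders for illustration).
mkdir -p shop/app shop/js
touch -t 202001010000 shop/app/Mage.php      # old, legitimate core file
echo 'dropped' > shop/js/backdoor.php        # recently planted file
# List PHP files modified in the last 30 days; on a site that is not
# under active development, anything here deserves a close look:
find shop -name '*.php' -mtime -30
```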

What we really like is to impress people with our dedication so much that they become “retainer clients”. We charge from £200/month to act on retainer, for which we would also do the following:

1. Create a source code repository for your site so that we could compare the current site against previous versions as time goes on.

2. Set up notifications from a scanning service such as MageReport.com to warn of site problems.

3. Routinely review the security of your system, applying “hardening” guidelines and proactively watching for signs of intrusion (both attempted and successful).

4. Set up and maintain a reference copy of your site on our development servers ready for patch-testing (we never patch a live site until we have tested the patch on a development copy).

5. Monitor the Magento notification services so that we are ready to apply security patches as soon as they are announced.

6. Provide up to 4 hours of maintenance or development work every month as part of the retainer fee.

7. Besides having the reference copy of your Magento installation in our source code repository, if necessary we would also set up off-site nightly/weekly/monthly backups of your site’s database and product image assets.

8. Provide on-call priority support during office hours. (Mon-Fri 9am-5pm excluding public holidays).

The list above isn’t set in stone – we adapt to meet the demands of the ever-changing security environment. It’s one of the more interesting things about being technical web developers, and to my mind the principal reason why you should always contract your designers separately from your developers. It’s a sad fact that most of the broken sites we come up against got that way because a client contracted for a new website directly with a design company who produced flashy visuals but then sub-contracted the actual development work to the cheapest company they could find…