remaildr.com is back!

So, remaildr.com had been in a pretty sorry state for a couple of months, and I kept thinking I should go have a look and get to the bottom of the issue.

And the bottom of the issue was the 6000 spam emails sitting in the inbox, making the server crash at startup.

They’re now deleted, and everything is back up and happy. I’m currently thinking about different monitoring options, but given it’s all email-based, no solution that I know of seems overly practical to me. Any ideas would be appreciated. :)

“So, the tests sometimes fail inexplicably” is a sentence you probably hear pretty often if you’re running any type of full-stack, browser-driven integration tests over a large-enough code base, especially when customising on top of an existing product.

Today’s instance was puzzling at first – the tests would sometimes fail to log in at all. That is, open the login page, fill in the username and password, wait until the URL changes and assert that we’re now on the dashboard – nope, failure. It happens about 5% of the time, fails straight away (no timeout), and hits seemingly random users at different points of the test run.

Well, time to output some debug information. First, let’s see whether that “wait until URL change” logic is working properly by outputting the URL after it changes – yes, the URL has indeed changed, and is now back on the login page with a little ?error=true tacked to the end.

An error at login? Let’s check the username and password… No, they’re definitely correct. And the problem is intermittent anyway. Could it be the backend rejecting valid credentials every so often? I’d never heard of that happening, but who knows. Let’s keep that in mind and possibly come back to it later.

As an added debugging step, I made the test output the values of the username and password fields before they get submitted, and now we’ve got something interesting – the values filled in aren’t the values we told Cucumber to fill in! Instead of e.g. “username” / “password”, the username is filled in and sent as “usernameassword” and the password as “p”.

Uh? Ah!

And lo and behold, the culprit is unmasked – as a useful convenience, the login page executes a tiny bit of JavaScript on page load that sets the focus to the “username” field, so that a user doesn’t have to click or tab through it but can start typing straight away. And Cucumber+Phantom, faithfully emulating a browser and executing the JavaScript on that page, runs into a race condition: it starts entering keystrokes into one field, only to get its focus stolen by the script and write the rest of the keystrokes into the other field.

And the bug was of course intermittent because it would only happen when Cucumber was writing the password concurrently with page load. Any time before that and the fields would be correct, any time after that and the fields would be just as correct.


Our solution? Watir::Wait.until { browser.username_field.focused? } before actually filling in the fields and logging in. Works like a charm!
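For reference, here is a minimal sketch of what the fixed step looks like in Watir; the locators, URL, and step wording are hypothetical placeholders to adapt to your own page objects:

# Hypothetical Cucumber step definition using Watir.
When(/^I log in as "([^"]*)" with password "([^"]*)"$/) do |username, password|
  browser.goto 'https://example.com/login'

  # Wait for the page's onload script to move focus to the username
  # field; only then is it safe to start typing.
  Watir::Wait.until { browser.text_field(id: 'username').focused? }

  browser.text_field(id: 'username').set username
  browser.text_field(id: 'password').set password
  browser.button(type: 'submit').click
end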

In Maven, LESS is less

Sorry, this is a rant.

I was recently investigating Maven plugins for LESS compilation. The use-case is pretty run-of-the-mill (I think?): I want to be able to write a .less file anywhere in my project src/ folder and have Maven compile it to CSS in the corresponding folder in target/ at some point of the build pipeline.

I first looked into lesscss-maven-plugin, a short-and-sweet kind of tool that looks perfect if you have one (and *only one*) target folder for all of your CSS. On a large project with separate templates and per-plugin CSS, this was not going to work for us.
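To illustrate, a typical configuration looks roughly like this (a sketch from memory, so double-check the coordinates and parameter names); note the single output directory:

<plugin>
  <groupId>org.lesscss</groupId>
  <artifactId>lesscss-maven-plugin</artifactId>
  <configuration>
    <!-- One source tree in, one output folder out: fine for a single
         stylesheet bundle, unworkable for per-plugin CSS. -->
    <sourceDirectory>${project.basedir}/src/main/less</sourceDirectory>
    <outputDirectory>${project.build.directory}/css</outputDirectory>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>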

More reading and research led me to Wro4j, Web Resources Optimization for Java. Once you decipher the cryptic acronym, it sounds like the kind of all-encompassing tool that the Gods of Bandwidth and Simplicity would bestow upon us mere mortals, provided we made the required server sacrifices. A noble idea, but after actually using the thing, it didn’t take me long to drop the religious metaphor.

Wro4j is a horrible mess. There’s absolutely no justification in any way, shape or form for the complexity involved in this plugin. As a perfect example of being too clever for its own good, Wro4j includes several parsers for the configuration file: an XML parser, a JSON parser, and for some reason a Groovy parser. Why you would need three different ways to configure a poor Maven plugin —which should get its configuration from the pom.xml anyway— is beyond me.
And the implementation is the most horrific part: when supplied with a config file, Wro4j tries each parser in turn (yes, even when the file extension is .xml¹) and, if parsing fails with every ‘Factory’, ends up with an absolutely undecipherable error message. Which will happen whenever Wro4j doesn’t like your XML configuration file for some reason.

I ended up using a bash script to find .less files and call lessc on them. No, it’s not portable, and no, it’s not “the Maven way”, but at least it works and it’s maintainable.
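The script boils down to something like this (a sketch, assuming lessc is on the PATH and the .less files live under src/):

#!/usr/bin/env bash
# Compile every .less file under src/ to CSS at the mirrored path
# under target/, preserving the directory structure.
set -euo pipefail

find src -name '*.less' | while read -r less_file; do
  css_file="target/${less_file#src/}"   # mirror the path under target/
  css_file="${css_file%.less}.css"      # swap the extension
  mkdir -p "$(dirname "$css_file")"
  lessc "$less_file" "$css_file"
done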


1: it takes a special kind of crazy, YAGNI-oblivious Java helicopter architect to consider that a Groovy file saved as ‘config.json’ should be a supported feature. In a Maven plugin of all places, when Maven itself is so heavily XML-based!

Pushing bookmarklets to their limits

I recently had to implement a new feature for an internal web application: a button to download a specially-formatted file. The right way to do it is, of course, to deploy server-side code generating the needed file in the backend and make it accessible to the user via the front-end. But the application in question is an important company-wide production system and I was in a hurry, so I decided to go the Quick way rather than the Right way¹.

Luckily, all the necessary information to create the file turned out to be accessible on a single web page, which meant I could create a user-triggered script to gather it from this page.

There are several ways for a user to inject and run custom (as in non-server-provided) JavaScript on their machine, such as using a dev console or a dedicated tool like Greasemonkey. But the best balance between ease of development and (comparatively) good user experience comes from bookmarklets.

A bookmarklet is an executable bookmark. In practice, rather than being a normal URI such as:

http://example.com

They are of the form:

javascript: /* arbitrary js code */

All a user has to do is add the bookmarklet to the bookmarks bar or menu of their browser of choice; clicking it runs the specified snippet of code against the current page.

To come back to our problem, the task at hand was to gather information on the page and make it downloadable as a text file with a custom extension.

Gathering the information was the easy part — a few strategically-placed querySelectorAll calls and some string wrangling gave me exactly what was needed. The tricky part turned out to be creating a file.

How indeed do you create a downloadable file when you have no access to a server to download it from?

As it turns out, there exists a little-known provision of HTML anchor tags (<a>) that allows embedding a file as a base64-encoded data: URI in the tag’s href attribute itself. We can control what goes into this attribute through JavaScript, hence we can generate and embed the file on the fly!

The last roadblock was initiating the download action automatically², rather than requiring the user to click on the bookmarklet, then on a ‘Download’ link. One would think it’s just a matter of clicking the newly-created link. Yes, but… in JavaScript, creating a link element is one thing, and clicking it is another — Firefox and IE allow the straightforward link.click(), but Chrome has historically only allowed click() on input elements. We have to dig deeper then, and manually generate a mouse event and dispatch it to the link element³.

The following bookmarklet is what I ended up using as a prototype. It extracts information from a page, generates a specifically-formatted file containing the information, and fires up the download window:
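Here is a minimal sketch of that prototype; the .record selector, file name, and extension are hypothetical placeholders to adapt to the page at hand:

javascript:(function () {
  // 1. Gather information from the page (the selector is a placeholder).
  var nodes = document.querySelectorAll('.record');
  var lines = [];
  for (var i = 0; i < nodes.length; i++) {
    lines.push(nodes[i].textContent.trim());
  }

  // 2. Embed the generated file in a link as a base64 data: URI.
  var link = document.createElement('a');
  link.href = 'data:text/plain;base64,' + btoa(lines.join('\n'));
  link.download = 'export.custom'; // suggested file name and extension

  // 3. Manually fire a mouse event, since older Chrome ignores click()
  //    on anything but input elements.
  var event = document.createEvent('MouseEvents');
  event.initMouseEvent('click', true, true, window,
                       0, 0, 0, 0, 0, false, false, false, false, 0, null);
  link.dispatchEvent(event);
})();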

If you wish to use this code sample to do something similar, remember to strip off comments and delete linebreaks, as the bookmarklet must be on one line.


Footnotes

1: the prototype worked, but the use-case I imagined for it turned out to be quite far removed from the true need. I did end up going the Right way ;-), though I learned a bit in the process.

2: Respectively presenting the Open with/Save as dialog (Firefox) and downloading the file directly (Chrome).

3: Hat tip to StackOverflow for this technique.

Simulating bad network conditions on Linux

Sometimes, your network is just too good.

Today I ran into this issue as I was testing an application running off a VM in the local network. Latency and bandwidth were excellent, as you’d expect, but nowhere near the conditions you’d encounter over the internet. Testing in these conditions is unrealistic and can lead to underestimating issues your users will experience with your app once it’s deployed.

So let’s change that and add artificial latency, bandwidth limitations, and even drop a few packets, using tc.

Just put the following script in /etc/init.d, modify the values to fit your needs, make it executable, and run /etc/init.d/traffic_shaping.sh start to degrade performance accordingly.
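A trimmed-down sketch of such a script, assuming eth0 is the interface to shape (all values are illustrative):

#!/bin/bash
# /etc/init.d/traffic_shaping.sh: degrade network performance with tc.

IFACE=eth0            # interface to shape
DELAY="120ms 30ms"    # added latency and jitter
LOSS="1%"             # packet loss
RATE="1mbit"          # bandwidth cap

case "$1" in
  start)
    # netem adds latency, jitter and loss; a tbf qdisc chained under it
    # caps the bandwidth.
    tc qdisc add dev $IFACE root handle 1: netem delay $DELAY loss $LOSS
    tc qdisc add dev $IFACE parent 1:1 handle 10: tbf rate $RATE buffer 1600 limit 3000
    ;;
  stop)
    # Remove the whole qdisc tree, restoring normal behaviour.
    tc qdisc del dev $IFACE root
    ;;
  status)
    tc qdisc show dev $IFACE
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac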

I originally found the script on top web hosts’ website, and added a few things. Props!