Programming – Upon my shoulder

Apple Contact & Calendar Server – dockerised

I’ve been using Radicale as a contacts / calendar server (CardDAV / CalDAV) for some time now, and it worked flawlessly across macOS and Windows Phone for contacts and calendars.

However, I recently got an iPhone, and synchronising calendars from Radicale crashed the iPhone calendar app. It worked some of the time, but most of the time it just crashed, which is not great.

Therefore, I went on the search for a better self-hosted calendaring server. Onwards! To the internets! Who promptly gave me a list of great self-hosted calendaring software.

Off that list, I tried to install Baikal, but the claimed Docker installation is broken by default: the container fails to build. The Dockerfile is pretty messy and complex, and as I’d rather not rely on PHP (it’s probably not the best filter, but… you know… PHP [1]), I gave up on it and looked into CalendarServer instead.

CalendarServer is an Apple open-source project providing calendar and contacts, written in Python under the Apache licence. The software is still regularly updated (the 9.1 release dates from last month), backed by a huge corporation, and from the looks of it should sync with iPhone and macOS pretty easily :)

Unfortunately, the docs are a bit sparse, but the install script was a great starting point to write up a Dockerfile. In the end, the whole Dockerfile is little more than apt-get installing all the dependencies and running the setup script.

Pretty helpful: thanks to the built-in scripts, this container embeds all its external dependencies and runs Postgres and memcached itself. As long as you mount the data folder as a volume, you’re good to go with a completely contained calendar/contacts server that saves everything to a single folder!

The server refuses to start as root though, which is a bit annoying given that Docker volumes cannot be edited by a non-root user… That is, unless you give that user a specific UID, and you make sure to give that UID ownership of the volume on the Docker host. A bit hacky, but it works fine.
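In practice it boils down to something like this (the UID 8000 and the host path are arbitrary examples, not necessarily what the image uses):

# in the Dockerfile: create the service user with a fixed, known UID
RUN useradd --uid 8000 --no-create-home ccs
USER ccs

# on the Docker host, before the first start: hand the volume to that UID
sudo mkdir -p /opt/ccs/data
sudo chown -R 8000:8000 /opt/ccs/data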

As an example of how to run it, here is the systemd unit file I use to run the server:
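(The unit below is representative rather than verbatim: the image name, port and in-container data path are assumptions.)

[Unit]
Description=Apple Calendar and Contacts Server (Docker)
Requires=docker.service
After=docker.service

[Service]
# container and image names are placeholders
ExecStartPre=-/usr/bin/docker rm -f ccs
ExecStart=/usr/bin/docker run --name ccs -p 8008:8008 -v /opt/ccs/data:/var/db/caldavd example/ccs
ExecStop=/usr/bin/docker stop ccs
Restart=on-failure

[Install]
WantedBy=multi-user.target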

You’ll have to create that /opt/ccs/data folder yourself, and also create the config and user list as described in the QuickStart guide.

The code is on GitHub, and the built container is available on the Docker Hub.

Sync with iPhone and macOS has been absolutely faultless so far, so I’m pretty happy with it!


[1] slightly ironic given this blog is WordPress, but nvm.

Updating a tiny Rails app from Rails 3.1 to Rails 4.2

In 2011 I wrote a small Rails app in order to learn Ruby better and see what all the fuss was about – this was Antipodes, a website that shows you the antipodes of a given point or country/state using Google Maps.

I built it using the latest and greatest version of Rails available at the time, which was 3.1. It has since fallen to various security issues, been superseded by newer versions, and is currently unsupported.

I’d been aware of these limitations and decided not to carry on hosting such an old version, so I just stopped the nginx instance that was powering it and left it aside.

Until now! I had some time to spare recently, so I decided to upgrade.

Updating to Rails 3.2

The Rails Guides documentation about upgrading suggests not updating straight to the latest version of Rails, but doing it in steps instead. The first step for me was to update from 3.1 to 3.2.

First up, let’s update our Gemfile to pick up the new Rails version, and let’s dive in!
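That is, a one-line change (the exact patch version here is a guess):

# Gemfile
gem 'rails', '3.2.22'   # previously a 3.1.x release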

The first issue I ran into was that the :timestamps model fields are now created as NOT NULL. I wasn’t actively using these, so I decided to remove them from the DB rather than fixing the DB import code.
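The cleanup was a one-off migration along these lines (the table name is a made-up example):

class RemoveTimestamps < ActiveRecord::Migration
  def up
    # drop the unused created_at/updated_at columns
    remove_column :countries, :created_at
    remove_column :countries, :updated_at
  end
end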

My next issue was that some gems would not install properly – I decided to use the newest version of Ruby available on the system, 2.2, and it was not happy with my Gemfile requiring ruby-debug19. Fair enough, let’s remove it.

My next problem didn’t come from Rails itself, but rather from the gem I used to generate Google Maps, Gmaps4Rails. It underwent a serious rewrite in the past four years and now needed very different code under the hood – no problem. It also allowed me to clean up some of the .coffee files and make better use of the asset pipeline.

And lo, the website was running under Rails 3.2!

Upgrading to Rails 4.0

The next step was to upgrade to Rails 4.0. This was very straightforward: a quick change in the Gemfile and a change to route handling (match was retired in favour of the actual HTTP verb of the route being handled, e.g. get) made it work, and a couple of new config options silenced the deprecation warnings.
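For illustration, with a made-up route:

# Rails 3, config/routes.rb – no longer accepted in Rails 4:
match 'search' => 'antipodes#search'

# Rails 4 – name the HTTP verb explicitly:
get 'search' => 'antipodes#search'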

Upgrading to Rails 4.2

And finally, the upgrade from Rails 4.0 to Rails 4.2 was made through the Gemfile only; no update was needed to the code itself.


And here we are! Antipodes is now up to date with its dependencies, and waiting for a new nginx+passenger to run again (more on that soon!).

“So, the tests sometimes fail inexplicably” is a sentence you probably hear pretty often if you’re running any type of full-stack, browser-driven integration tests over a large-enough code base, especially when customising on top of an existing product.

Today’s instance was puzzling at first – the tests would sometimes fail to log in at all. That is, open the login page, fill in the username and password, wait until the URL changes and assert that we’re now on the dashboard – nope, failure. It happens about 5% of the time, breaks straight away (no timeout), but hits seemingly random users at different points of testing.

Well, time to output some debug information. First, let’s see whether that “wait until URL change” logic is working properly by outputting the URL after it changes – yes, the URL has indeed changed, and is now back at the login page with a little ?error=true tacked onto the end.

An error at login? Let’s check the username and password… No, they’re definitely correct. And the problem is intermittent anyway. Could it be the backend rejecting valid credentials every so often? I’d never heard of that happening, but who knows. Let’s keep that in mind and possibly come back to it later.

As added debugging, I made the test output the values of the username and password fields before they get submitted, and now we’ve got something interesting – the values filled in aren’t the values we told Cucumber to fill in! Instead of e.g. “username” / “password”, the username is filled in and sent as “usernameassword” and the password as “p”.

Uh? Ah!

And lo and behold, the culprit is unmasked – as a useful convenience, the login page executes a tiny bit of javascript on page load that sets the focus to the “username” field, so that a user doesn’t have to click or tab into it but can start typing straight away. And Cucumber+Phantom, faithfully emulating a browser and executing the javascript on that page, runs into a race condition: it starts entering keystrokes into one field, gets its focus stolen by the javascript, and writes the rest of the keystrokes into the other field.

And the bug was of course intermittent because it would only happen when Cucumber was writing the password concurrently with page load. Any time before that and the fields would be correct, any time after that and the fields would be just as correct.


Our solution? Watir::Wait.until { browser.username_field.focused? } before actually filling in the fields and logging in. Works like a charm!
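In step-definition form, the fix looks roughly like this (the locators and credentials are placeholders):

# wait for the page's onload focus() to fire before typing anything
username = browser.text_field(:id => 'username')
Watir::Wait.until { username.focused? }

username.set 'some_user'
browser.text_field(:id => 'password').set 'some_password'
browser.button(:type => 'submit').click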

In Maven, LESS is less

Sorry, this is a rant.

I was recently investigating Maven plugins for LESS compilation. The use-case is pretty run-of-the-mill (I think?): I want to be able to write a .less file anywhere in my project src/ folder and have Maven compile it to CSS in the corresponding folder in target/ at some point of the build pipeline.

I first looked into lesscss-maven-plugin, a short-and-sweet kind of tool that looks perfect if you have one (and *only one*) target folder for all of your CSS. Working on a large project including things such as separate templates and per-plugin CSS, this would not work for us.

More reading and research led me to Wro4j, Web Resources Optimization for Java. Once you understand the cryptic acronym, it sounds like the kind of all-encompassing tool that the Gods of Bandwidth and Simplicity would bestow upon us mere mortals, provided we made the required server sacrifices. A noble idea, but after actually using the thing, it didn’t take me long to drop the religious metaphor.

Wro4j is a horrible mess. There’s absolutely no justification in any way, shape or form for the complexity involved in this plugin. As a perfect example of being too clever for its own good, Wro4j includes several parsers for the configuration file: an XML parser, a JSON parser, and for some reason a Groovy parser. Why you would need three different ways to configure a poor Maven plugin —which should get its configuration from the pom.xml anyway— is beyond me.
And the implementation is the most horrific part: when supplied with a config file, Wro4j will try all successive parsers (yes, even when the file extension is .xml [1]) on the file, and end up with an absolutely undecipherable error message if parsing failed with each ‘Factory’. Which will happen when Wro4j doesn’t like your XML configuration file for some reason.

I ended up using a bash script to find .less files and call lessc on them. No it’s not portable, no it’s not “the maven way”, but at least it works and it’s maintainable.
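The script boils down to a find loop along these lines (assuming sources under src/ are mirrored into target/):

#!/bin/bash
# compile each src/**/*.less into the corresponding target/**/*.css
find src -name '*.less' | while read -r less; do
  css="target/${less#src/}"   # swap the src/ prefix for target/
  css="${css%.less}.css"      # swap the extension
  mkdir -p "$(dirname "$css")"
  lessc "$less" "$css"
done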


[1] it takes a special kind of crazy YAGNI-oblivious Java helicopter architect to consider that a Groovy file saved as ‘config.json’ should be a supported feature. In a Maven (which is so heavily XML-based) plugin of all places!

Pushing bookmarklets to their limits

I recently had to implement a new feature for an internal web application: a button to download a specially-formatted file. The right way to do it is, of course, to deploy server-side code generating the needed file in the backend and make it accessible to the user via the front-end. The application in question is an important company-wide production system and I was in a hurry, so I decided to go the Quick way rather than the Right way [1].

Luckily, all the necessary information to create the file turned out to be accessible on a single web page, which meant I could create a user-triggered script to gather it from this page.

There are several ways for a user to inject and run custom (as in non-server-provided) javascript on their machine, such as using a dev console or a dedicated tool like Greasemonkey. But the best balance between ease of development and —comparatively— good user-experience is bookmarklets.

A bookmarklet is an executable bookmark. In practice, rather than being a normal URI such as:

http://example.com

They are of the form:

javascript: /* arbitrary js code */

All a user has to do is add the bookmarklet to the bookmarks bar or menu of their browser of choice and click on it whenever they want to run the specified snippet of code against the current page – and said snippet will be run.

To come back to our problem, the task at hand was to gather information on the page and make it downloadable as a text file with a custom extension.

Gathering the information was the easy part — a few strategically-placed querySelectorAll and string wrangling gave me exactly what was needed. The tricky part turned out to be creating a file.

How indeed do you create a downloadable file when you have no access to a server to download it from?

As it turns out, there exists a little-known provision of HTML link tags (<a>) that allows for embedding files as base64-encoded strings in the tag’s href attribute itself. We can control what goes into this attribute through javascript, hence we can generate and embed the file on the fly!

The last roadblock was to initiate the download action automatically [2], rather than requiring the user to click on the bookmarklet, then on a ‘Download’ link. One would think it’s just a matter of clicking the newly-created link. Yes, but… In javascript, creating a link element is one thing, but clicking it is another — Firefox and IE allow the straightforward link.click(), but Chrome has historically only allowed click() on input elements. We have to dig deeper then, and manually generate a mouse event and propagate it to the link element [3].

The following bookmarklet is what I ended up using as a prototype. It extracts information from a page, generates a specifically-formatted file containing the information, and fires up the download window:
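A sketch of its shape (the selector, file name and format are stand-ins for the real, work-specific ones):

javascript:(function () {
    // 1. gather the information from the page
    var cells = document.querySelectorAll('table.report td');
    var lines = [];
    for (var i = 0; i < cells.length; i++) {
        lines.push(cells[i].textContent.trim());
    }

    // 2. embed the file as a base64 data: URI in a brand-new link
    var link = document.createElement('a');
    link.href = 'data:text/plain;base64,' + btoa(lines.join('\n'));
    link.download = 'export.myformat';

    // 3. fire a synthetic mouse event, since Chrome ignores link.click()
    var event = document.createEvent('MouseEvents');
    event.initMouseEvent('click', true, true, window, 0, 0, 0, 0, 0,
                         false, false, false, false, 0, null);
    link.dispatchEvent(event);
}());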

If you wish to use this code sample to do something similar, remember to strip off comments and delete linebreaks, as the bookmarklet must be on one line.


Footnotes

[1] the prototype worked, but the use-case I imagined for it turned out to be quite far removed from the true need. I did end up going the Right way ;-), learning a bit in the process though.

[2] Respectively presenting the Open with/Save as dialog (Firefox) and downloading the file directly (Chrome).

[3] Hat tip to StackOverflow for this technique.

On overflowing stacks

I recently set out to implement a few basic data structures in C for the hell of it (and to reassure myself that I can still code C), and ran into an interesting compiler wart…

I was trying to instantiate a static array of 10 million integers (who doesn’t?), in order to test insertions and deletions in my tree. However, as you can astutely deduce from the title of this post, this was too much for the stack of my poor program and ended up in a segfault – a textbook stack overflow.

I did not think of that at first though, and tried to isolate the offending piece of code by inserting a return 0; in main() after a piece of code I knew to be working, then working my way down to pinpoint the issue.

Much to my dismay, this didn’t really work out. Why? Check the following code:
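In essence, this (a reconstruction – the array size is the 10 million from above):

#include <stdio.h>

int main(void)
{
    printf("Hello world!\n");
    return 0;
    /* the last line in question -- ~40 MB of stack, after the return: */
    /* int big_array[10000000]; */
}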

Do you think it works with that last line uncommented? You’d be wrong!

[15:20:35]florent@Air:~/Experiments/minefield$ gcc boom.c
[15:20:40]florent@Air:~/Experiments/minefield$ ./a.out
Segmentation fault

GCC (4.2.1) reserves stack space for the array in the function prologue, even though it’s declared after the function returns!

Interestingly enough, when you tell GCC to optimise the code, it realises the array will never get reached and prunes it away.

[15:26:06]florent@Air:~/Experiments/minefield$ gcc -O2 boom.c
[15:26:16]florent@Air:~/Experiments/minefield$ ./a.out
Hello world!

Clang (1.7) exhibits exactly the same behaviour.

Lessons learnt? return is no way of debugging a program.

Javascript closures as a way to retain state

Long time no blog! Let’s get back into it with a nifty and clean way of retaining state in Javascript – closures.

I was recently looking for an easy way to call a specific function after two separate/unrelated AJAX calls to two remote endpoints have both completed. The naive method would be to chain them – first AJAX call -> callback fires the second AJAX call -> callback fires doSomething – but since the two AJAX calls are not related, we can run them concurrently. An easy way to achieve that is to:

1. set flags, say initialising two global variables at the beginning of the file:

var callback_one_done = false;
var callback_two_done = false;

2. have each callback set its own flag to ‘true’ upon completion and call doSomething
3. check both flags in doSomething:

var doSomething = function() {
    if (callback_one_done && callback_two_done){
        // actually do something
    }
}

But this is a bit ugly, as it litters the global namespace with two flags that are only used there. Instead, thanks to Javascript’s lexical scope we can declare doSomething as a closure and have the flags live inside the function itself:

var doSomething = (function(){
        var callback_one_done = false;
        var callback_two_done = false;
        return function(source) {
            if (source === 'callback_one') { callback_one_done = true; }
            if (source === 'callback_two') { callback_two_done = true; }
            if (callback_one_done && callback_two_done) {
                // actually do something
            }
        };
    }());

What we’ve done here is declare an anonymous function that returns a function. This newly created function, which gets assigned to doSomething, is a closure that contains both the code to run and the flag variables. The state is set and kept inside the function itself, without leaking into the outside environment.

Now we just need to call doSomething('callback_one') from the first AJAX call’s callback and doSomething('callback_two') from the second’s, and doSomething will wait until both calls are complete to run the “actually do something” part.
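For instance, with two plain XMLHttpRequests (the endpoints are made up):

var xhr1 = new XMLHttpRequest();
xhr1.onload = function () { doSomething('callback_one'); };
xhr1.open('GET', '/endpoint/one');
xhr1.send();

var xhr2 = new XMLHttpRequest();
xhr2.onload = function () { doSomething('callback_two'); };
xhr2.open('GET', '/endpoint/two');
xhr2.send();

Both requests run concurrently; whichever callback fires last is the one that actually runs the “actually do something” part.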

Bits of javascript goodness

(This blog post is better followed with the associated github repo. Run the Sinatra server with ‘ruby slowserver’, pop up a browser, and follow the revisions to see how the code gradually gets better. :) )

Recently at work, we wanted to modify some js ad code to include weather data for better ad targeting. For certain caching reasons, weather data has to be fetched by an AJAX call, then fed to the ad code.

The existing ad code was (schematically) as follows.
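Schematically, something like this (the ad function and slot names are invented):

<script type="text/javascript" src="http://ads.example.com/showAd.js"></script>
...
<div id="banner-top"><script type="text/javascript">showAd('banner-top');</script></div>
...
<div id="banner-side"><script type="text/javascript">showAd('banner-side');</script></div>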

My first idea was to replace those calls by a call to a single js method, which would stall the ad calls using, say, a setTimeout, until the weather data is retrieved. The js looked like this.

Before you go get the pitchforks, I knew that this code was crap, and asked one of my amazing colleagues for help. He advised me to get rid of the timeOuts and instead use a function that would either, depending if the data was retrieved or not, shelf the task in an array, or execute it.

Using an array of tasks (actually callbacks, or anonymous functions, or closures) allows us to have actual event-driven javascript. This means executing the ad loading code only once, not using resources for timeouts when the resource is not available yet, and maybe most importantly not looking stupid when code review time comes.

Additionally, my code littered the global namespace with its variables. We could instead create a module, and have only that module’s name public – we clean up the global namespace and also get private variables for free!
The module pattern is an amazingly useful javascript pattern, implemented as a self-executing anonymous function. If that sounds clear as mud, Ben Cherry has a better, in-depth explanation on his blog.

The js code now looked like this.
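Its shape at that point was roughly this (a sketch with invented names – the real code lives in the repo):

var adManager = (function () {
    var weatherData = null;  // private state, invisible from outside
    var pending = [];        // tasks shelved until the data arrives

    return {
        // run the task now if the data is here, shelf it otherwise
        run: function (task) {
            if (weatherData !== null) {
                task(weatherData);
            } else {
                pending.push(task);
            }
        },
        // the AJAX callback hands the weather data over here
        weatherReady: function (data) {
            weatherData = data;
            while (pending.length > 0) {
                pending.shift()(data);
            }
        }
    };
}());

Each ad slot then queues itself with adManager.run(function (weather) { /* ad call */ });, and every shelved task fires exactly once, as soon as the data lands.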

But the HTML still contained <script> tags, and separation of concerns taught us it’s better to keep HTML and JS as separate as possible. Cool beans, let’s just add a couple of lines to our js file.

Result: a pure event-driven wait for a callback, a single variable in the global js namespace, and a cleaned-up HTML.

And of course… We ended up not using it. The ad code actually expects a document.write, making this delayed approach impossible (document.write appends data inline while the page is still loading, but completely rewrites the page if called after page load). Good learning experience still!

Format perlcritic output as TAP to integrate with Jenkins

Perl::Critic is a nifty syntax analyzer able to parse your Perl code, warn you against common mistakes and hint you towards best practices. It’s available either as a Perl module or a standalone shell script (perlcritic). Unfortunately, there is no standard way to integrate it with Jenkins.

Jenkins, the continuous-integration-tool-formerly-known-as-Hudson, is the cornerstone of our continuous building process at work. It checks out the latest build from Git, runs a bunch of tests (mainly Selenium, as we develop a website) and keeps track of what goes wrong and what goes right. We wanted to integrate Perl::Critic to Jenkins’ diagnostics to keep an eye on some errors that could creep in our codebase.

So Jenkins doesn’t do Perl::Critic. However, Jenkins supports TAP. TAP, the Test Anything Protocol, is an awesomely simple format for expressing test results that goes as follows:

1..4
ok 1 my first test
ok 2 another successful test
not ok 3 oh, this one failed
ok 4 the last one's ok

The first line (the plan) announces how many tests there are, and each following line is a test result beginning with either “ok” or “not ok” depending on how it went.

Based on such a simple format, we can use a bit of shell scripting to mangle perlcritic’s output to be TAP-compatible:

# Perl::Critic
# with line numbers (nl)...
# prepend 'ok' for passed files...
# and 'not ok' for each error...
# and put everything in a temp file

perlcritic $WORKSPACE                      \
  | nl -nln                                \
  | sed 's/\(.*source OK\)$/ok \1/'        \
  | sed '/source OK$/!s/^.*$/not ok &/'    \
  > $WORKSPACE/perlcritic_tap.results.tmp

# Formatting: add the TAP plan, and output the tap results file.
# TAP plan is a line "1..N", N being the number of tests
echo 1..`wc -l < $WORKSPACE/perlcritic_tap.results.tmp` \
 |cat - $WORKSPACE/perlcritic_tap.results.tmp > $WORKSPACE/perlcritic_tap.results

# Cleanup
rm -f $WORKSPACE/perlcritic_tap.results.tmp

($WORKSPACE is a Jenkins-set variable referring to where the current build is being worked on.)

And voilà! Jenkins reads the TAP result file and everything runs smoothly.

The only downside of this approach is that the number of tests will vary. For example, a single .pm file containing 5 Perl::Critic violations will show up as 5 failed tests, but fixing them turns those 5 into a single successful test.

Optimising a video editor plugin

During the past few weeks, I have been writing a C++ plugin to grade C41 digital intermediates in Cinelerra, an open-source Linux video editor. C41 is the most common chemical process for negatives, resulting in films that look like this — you probably know it if you’ve ever shot film cameras.

Of course, after scanning those negatives, you have to process (“grade”) them to turn them back to positive. And it’s not as simple as merely inverting the values of each channel for each pixel; C41 has a very pronounced orange shift that you have to take into account.

The algorithm

The core algorithm for this plugin was lifted from a script written by JaZ99wro for still photographs, based on ImageMagick, which does two things:
– Compute “magic” values for the image
– Apply a transformation to each channel (R, G, B) based on those magic values

The problem with film is that, due to tiny changes between the images, the magic values were all over the place from one frame to the next. Merely applying JaZ’s script on a series of frames gave a sort of “flickering” effect, with colours varying from one frame to the other – an unacceptable effect for video editing.

The plugin computes those magic values for each frame of the scene, but lets you pick and fix specific values for the duration of the scene. The values are therefore not “optimal” for each frame, but the end result is visually very good.

However, doing things this way is slow: less than 1 image/second for 1624*1234 frames.

Optimising: do less

The first idea was to make the computation of the magic values optional: after all, when you’re batch-processing a scene with fixed magic values, you don’t need to compute them again for each frame.

It was a bit faster, but not by much. A tad more than an image/second maybe.

Optimising: measure

The next step —which should have been the first!— was to actually benchmark the plugin, and see where the time was spent. Using clock_gettime() for maximum precision, the results were:

~0.3 seconds to compute magic values (0.2s to apply a box blur to smooth out noise, and 0.1s to actually compute the values)

~0.9 seconds to apply the transformation

Making the computation of the magic values optional was indeed a step in the right direction, but the core of the algorithm was definitely the more computationally expensive part. Here’s what has to be computed for each pixel:

row[0] = (magic1 / row[0]) - magic4;
row[1] = pow((magic2 / row[1]),1/magic5) - magic4;
row[2] = pow((magic3 / row[2]),1/magic6) - magic4;

With row[0] being the red channel, row[1] the green channel, and row[2] the blue channel.

The most expensive call here is pow(), part of math.h. We don’t need to be extremely precise for each pixel value, so maybe we can trade some accuracy for raw speed?

Optimising: do better

Our faithful friend Google, tasked with searching for a fast float pow() implementation, gives back Ian Stephenson’s implementation, a short and clear (and more importantly, working) version of pow().
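His exact code isn’t reproduced here, but such approximations generally take this bit-trick shape (a sketch, assuming IEEE-754 single-precision floats and positive inputs – not Stephenson’s implementation):

/* approximate pow(a, b) as exp2(b * log2(a)), using the float bit layout */
#include <stdint.h>

static inline float fast_log2(float x)
{
    union { float f; uint32_t i; } v = { x };
    /* the raw bits of a float, scaled by 2^-23, approximate log2(x) + bias */
    return v.i * (1.0f / (1 << 23)) - 126.94269504f;
}

static inline float fast_exp2(float x)
{
    union { float f; uint32_t i; } v;
    v.i = (uint32_t)((x + 126.94269504f) * (1 << 23));  /* inverse of the above */
    return v.f;
}

static inline float fast_pow(float a, float b)
{
    return fast_exp2(b * fast_log2(a));
}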

But we can’t just throw that in without analysing how it affects the resulting frame. The next thing to do was to add a button that would switch between the “exact” version and the approximation: the results were visually identical.

Just to be sure, I measured the difference between the two methods: an average 0.2% difference, going as high as 5% in the worst case – acceptable values.

And the good news is that the plugin now takes only 0.15 to 0.20 seconds to treat each image, i.e. between 5 and 6 images/second — an 8-fold gain over the first version. Mission accomplished!
