Automating tasks with Makefiles

Almost 20 years ago, one of the first posts on this blog (hosted elsewhere at the time) was about documentation.

Since then, I’ve written about documentation and checklists and the like sporadically. The problem is that although I know documentation and checklists are a good thing, I don’t use them enough.

It is more fun to write code.

At the same time, I have a hidden perfectionist in me (trust me, he’s there), so if I write code to perform some process, I can spend a lot of time making sure it works just right.

So (part of) the cure for my lack of documentation is to write code that performs a task and let the code be the documentation. (I’ve even used this as an excuse to practice literate programming, because then I can write code and readable documentation at the same time in Emacs.)

Anyway, back to code as documentation.

With my background in setting up systems, I know all too well the pain of having to repeat something over and over. At the same time, because I’m so old, I don’t want to learn a new tool when the tools I already have will do. So, while friends of mine have used Ansible and similar tools to set up complete MediaWiki systems, I’m opinionated enough about how I do things that, try as I might, I couldn’t just use their setup.

Which brings us to Make. GNU Make in particular. I could get into the byzantine differences between the various makes, but I tend to be on Linux and, hey, GNU Make is available on the other systems.

For the past year or so, I’ve been working on deploying MediaWiki with Make. I just used it to stage a major upgrade at a client of mine. Today, I have a small project I need to deploy, so I decided to try and use my Makefile method. Over the next few days, I’ll document this.

Get the makefile skeleton

Obviously the first thing to do is get my makefile skeleton set up. I’ve learned that I only need a stub of a file to do this and I’ve been adapting it over the years. Here’s what I have so far:

include makeutil/baseConfig.mk
baseConfigGitRepo=https://phabricator.nichework.com/source/makefile-skeleton

.git:
    echo This is obviously not a git repository! Run 'git init .'
    exit 2

#
makeutil/baseConfig.mk: /usr/bin/git .git
    test -f $@                                                                                                                              ||      \
        git submodule add ${baseConfigGitRepo} makeutil

With that in place as my Makefile, I just run make and the magic happens:

$ make
echo This is obviously not a git repository!
This is obviously not a git repository!
exit 2
Makefile:1: makeutil/baseConfig.mk: No such file or directory
make: *** [Makefile:6: .git] Error 2

Ok, fair enough. I run git init && make and this time the magic happens:

$ git init && make
Initialized empty Git repository in /home/mah/client/client/.git/
test -f makeutil/baseConfig.mk                                                                                     ||       \
    git submodule add https://phabricator.nichework.com/source/makefile-skeleton makeutil
Cloning into '/home/mah/client/client/makeutil'...
remote: Enumerating objects: 106, done.
remote: Counting objects: 100% (106/106), done.
remote: Compressing objects: 100% (106/106), done.
remote: Total 106 (delta 52), reused 0 (delta 0)
Receiving objects: 100% (106/106), 36.38 KiB | 18.19 MiB/s, done.
Resolving deltas: 100% (52/52), done.

  Usage:

    make <target> [flags...]

  Targets:

    composer   Download composer and verify binary
    help       Show this help prompt
    morehelp   Show more targets and flags

  Flags: (current value in parenthesis)

    NOSSL      Turn off SSL checks -- !!INSECURE!! ()
    VERBOSE    Print out every command ()

Better.

Set up DNS

I want to put the client domain on its own IP with its own DNS record. I don’t have “spin up a VM” anywhere close to automated, but I have been using bind and nsupdate to maintain my zone files, so I’ve automated that part.

# DNS server to update
dnsserver ?= 

# List of all DNS servers
allDNSServers ?=

# Keyfile to use
keyfile ?= K${domain}.private

# DNS name to update
name ?=

# IP address to use
ip ?=

# Time to live
ttl ?= 604800

# Domain being updated
domain = $(shell echo ${name} | sed 's,.*\(\.\([^.]\+\.[^.]\+\)\)\.*$$,\2,')
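# (For example, name=example.winkyfrown.com. gives domain=winkyfrown.com,
# which makes the keyfile default above Kwinkyfrown.com.private.)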

NSUPDATE=/usr/bin/nsupdate
DIG=/usr/bin/dig

#
verifyName:
    test -n "${name}"                                                                                                       ||      (       \
        echo Please set name!                                                                                           &&      \
        exit 1                                                                                                                          )

#
verifyIP:
    test -n "${ip}"                                                                                                         ||      (       \
        echo Please set ip!                                                                                                     &&      \
        exit 1                                                                                                                          )

#
verifyDomain:
    test -n "${domain}"                                                                                                     ||      (       \
        echo Could not determine domain. Please set domain!                                     &&      \
        exit 1                                                                                                                          )
    test "${domain}" != "${name}"                                                                            ||      (       \
        echo Problem parsing domain from name. Please set domain!                       &&      \
        exit 1                                                                                                                          )

#
verifyKeyfile:
    test -n "${keyfile}"                                                                                            ||      (       \
        echo No keyfile. Please set keyfile!                                                            &&      \
        exit 1                                                                                                                          )
    test -f "${keyfile}"                                                                                            ||      (       \
        echo "Keyfile (${keyfile}) does not exist!"                                                     &&      \
        exit 1                                                                                                                          )

# Add host with IP
addHost: verifyName verifyIP verifyDomain verifyKeyfile ${NSUPDATE}
    printf "server %s\nupdate add %s %d in A %s\nsend\n" "${dnsserver}"                     \
        "${name}" "${ttl}" "${ip}" | ${NSUPDATE} -k ${keyfile}
    ${make} checkDNSUpdate ip=${ip} name=${name}

# Check a record across all servers
checkDNSUpdate: verifyName verifyIP
    for server in ${allDNSServers}; do                                                                              \
        ${make} checkAddr ip=${ip} name=${name} dnsserver=$$server             ||      \
            exit 10                                                                                                         ;       \
    done

# Check host has IP
checkAddr: verifyName verifyIP ${DIG}
    echo -n ${indent}Checking for A record of ${name} on ${dnsserver}...
    ${DIG} ${name} @${dnsserver} A | grep -q ^${name}.*IN.*A.*${ip}         ||      (       \
        echo " FAIL!"                                                                                                           &&      \
        echo ${name} is not set to ${ip} on ${dnsserver}!                                       &&      \
        exit 1                                                                                                                          )
    echo " OK"

Now, I’ll just add the IP that I got for the virtual machine to the DNS:

$ make addHost name=example.winkyfrown.com. ip=999.999.999.999
> > Checking for A record of example.winkyfrown.com. on web.nichework.com... OK
> > Checking for A record of example.winkyfrown.com. on ns1.worldwidedns.net... OK
> > Checking for A record of example.winkyfrown.com. on ns2.worldwidedns.net... OK
> > Checking for A record of example.winkyfrown.com. on ns3.worldwidedns.net... OK
> > Checking for A record of example.winkyfrown.com. on 1.1.1.1... OK

(Of note: this goes back to the checklist bit. When I first tested this, I found that my nsupdate changes weren’t propagating to one of my secondaries. That prompted me to check which hosts were allowed to do zone transfers from the primary and fix the problem.)
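
A check like that is easy to script, too. Here is a rough sketch, not part of the Makefile, that asks each server for the zone’s SOA serial; a secondary showing a stale serial usually means zone transfers or notifies are broken. The server list and zone are the ones from the example above:

for server in web.nichework.com ns1.worldwidedns.net ns2.worldwidedns.net ns3.worldwidedns.net; do
    printf '%s: ' "$server"
    dig +short @"$server" winkyfrown.com SOA | awk '{print $3}'
done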

Basic server setup

I believe in versioning (wherever it is easy). So the first thing we’ll do is install etckeeper.

#
verifyHost:
    test -n "${REMOTE_HOST}"                                                                                        ||      (       \
        echo Please set REMOTE_HOST!                                                                            &&      \
        exit 10                                                                                                                         )

#
verifyCmd:
    test -n "${cmd}"                                                                                                        ||      (       \
        echo Please set cmd!                                                                                            &&      \
        exit 10                                                                                                                         )

doRemote: verifyHost verifyCmd
    echo ${indent}running '"'${cmd}'"' on ${REMOTE_HOST}
    ssh ${REMOTE_HOST} "${cmd}"


# Set up etckeeper on host
initEtckeeper:
    ${make} doRemote cmd="sh -c 'test -d /etc/.git || sudo apt install -y etckeeper'"
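
All of these remote targets are driven the same way: REMOTE_HOST is just another flag on the command line, like NOSSL and VERBOSE in the help output above. A hypothetical invocation for the host we just added to DNS:

$ make initEtckeeper REMOTE_HOST=example.winkyfrown.com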

Initial installation of Apache+PHP on the server

Finally, let’s set up a webserver!

# Install the basic LAMP stack
initLamp:
    ${make} doRemote cmd="sh -c 'test -d /etc/apache2 || sudo apt install -y        \
        php-mysql php-curl php-gd php-intl php-mbstring php-xml php-zip                 \
        libapache2-mod-php'"
    ${make} doRemote cmd="sh -c 'test -d /var/lib/mysql || sudo apt install -y mariadb-server'"
    ${make} doRemote cmd="sh -c 'sudo systemctl enable apache2'"
    ${make} doRemote cmd="sh -c 'sudo systemctl enable mariadb'"
    ${make} doRemote cmd="sh -c 'sudo systemctl start apache2'"
    ${make} doRemote cmd="sh -c 'sudo systemctl start mariadb'"

    curl -s -I ${REMOTE_HOST} | grep -q ^.*200.OK                                           ||      (       \
        echo Did not get "'200 OK'" from ${REMOTE_HOST}                                         &&      \
        exit 1                                                                                                                          )
    touch $@

And the basic website:

setupSite: initLamp verifyRemotePath
    ${make} doRemote cmd="sh -c 'test -x /usr/bin/tee || sudo apt install -y        \
            coreutils'"
    (                                                                                                                                                       \
        echo "<VirtualHost *:80>"                                                                 &&      \
        echo "  ServerName ${REMOTE_HOST}"                                                &&      \
        echo "  DocumentRoot ${REMOTE_PATH}/html"                                 &&      \
        echo "  ErrorLog ${REMOTE_PATH}/logs/error.log"                   &&      \
        echo "  CustomLog ${REMOTE_PATH}/logs/access.log combined"                      &&      \
        echo "  <Directory ${REMOTE_PATH}/html>"                                                        &&      \
        echo "          Options FollowSymlinks Indexes"                                                 &&      \
        echo "          Require all granted"                                                                    &&      \
        echo "          AllowOverride All"                                                                              &&      \
        echo "  </Directory>"                                                                                           &&      \
        echo "</VirtualHost>"                                                                                                   \
    ) | ${make} doRemote                                                                                                            \
        cmd="sh -c 'test -f /etc/apache2/sites-available/${REMOTE_HOST}.conf || \
                sudo tee /etc/apache2/sites-available/${REMOTE_HOST}.conf'"
    ${make} doRemote                                                                                                                        \
        cmd="sh -c 'test -L /etc/apache2/sites-enabled/${REMOTE_HOST}.conf      ||      \
            sudo a2ensite ${REMOTE_HOST}'"
    ${make} doRemote                                                                                                                        \
        cmd="sh -c 'test ! -L /etc/apache2/sites-enabled/${REMOTE_HOST}.conf || \
            sudo systemctl reload apache2                                                                   ||      \
            ( sudo systemctl status apache2 && false )'"
    touch $@
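
And a hypothetical invocation that pulls it all together (REMOTE_PATH is wherever the site’s html and logs directories should live; the path here is just an example):

$ make setupSite REMOTE_HOST=example.winkyfrown.com REMOTE_PATH=/var/www/example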

Finally, let’s deploy MediaWiki!

MABS status report: Updating MediaWiki::API

For the past couple of weeks, I’ve had a significant amount of time to spend on Multilateral, Asynchronous, Bidirectional Synchronisation of wikis, or MABS for short.

This is all built on the git remote for MediaWiki work that was started almost a decade ago by some students. Since the initial effort, there have been some significant changes in the MediaWiki API and, in the meantime, the MediaWiki::API Perl module that does a lot of the heavy lifting in this project hasn’t seen much work. For example, the last commit on the GitHub repository was to fix a typo in 2015.

So, I’ve been working this past week on updating the Perl module. This has been a lot of fun since I used to be quite the Perl snob—and by that I mean I looked down on people who didn’t love Perl, not that I looked down on Perl. Times have changed for me in the past ten or eleven years, so I’ve acquired some humility and begun doing a lot of work in what I would have considered to be the bottom-of-the-barrel language: PHP. Coming back to Perl is a lot of fun.

That said, Perl has continued to grow while I’ve been gone and I need some advice. I’ve become a huge fan of linters, so one change has been adhering pretty closely to almost every criticism Perl::Critic throws at me. I’ve gone as far as adding “smx” after almost every regular expression and incompatibly changing use constant to Readonly. You might say I’m getting a little carried away.

This, along with fixing the tests to use a Docker instance (if available) rather than sending every tester to the test wiki, and fixing some bugs I found along the way, has helped me understand this vital piece of the MABS project.

Still, coming back to Perl has made me realize just how ad hoc Perl’s object system is. I’ve heard of Moose and Mus (which I’m leaning towards), but I was wondering what best practices the Perl community has for updating an existing code base.

Update 1: I asked for some feedback on the Perl object system to use and got some great feedback.

Update 2: I contacted the original author (Jools Wills) of the MediaWiki::API module and talked to him about what direction to take with it. I’ll have to do some more work on it to make it work well for my purposes, but I may end up sending him a bunch of pull requests.

Photo by Roger McLassus [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons

Creating an external auto-completion provider for Page Forms

(The picture on this post is from Pilgrim’s Progress, as Christian struggles in the Slough of Despond. I feel his pain here.)

I have a couple of use cases that require pulling from an external data source into MediaWiki. Specifically, they need to pull information such as employee data from Active Directory or a company-wide taxonomy that is maintained outside of MediaWiki. Lucky for me, there is the Page Forms extension.

Page Forms provides a couple of ways to do this: directly from an outside source or using the methods provided by the External Data extension.

Since I was going to have to write some sort of PHP shim, anyway, I decided to go with the first method.

Writing the PHP script to provide possible completions when it was given a string was the easy part. As a proof of concept, I took the list of words in /usr/share/dict/words on my laptop, trimmed it down to a fraction of its size using

sed -n '0~20p' /usr/share/dict/words > short.txt

and used a simple PHP script (hosted on winkyfrown.com) to provide the data.
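
To sanity-check a script like that outside of MediaWiki, you can hit it directly with curl. This is a hypothetical invocation; the URL and its parameters (l= is the substring to match, f= presumably picks the word list) are the same ones that show up in the transcript below:

$ curl -s 'http://example.dom/custom-autocomplete.php?l=lines&f=words'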

That script is the result of a bit of a struggle. Despite the fact that the documentation pointed to a working example (after I updated it, natch), that wasn’t clear enough for me. I had to spend a few hours poking through the source and instrumenting the code to find the answer.

And that is the reason for this weblog post. I posted the same thing earlier today to the Semantic MediaWiki users mailing list after an earlier plea for help. What resulted is the following stream-of-consciousness short story:

I must be doing something wrong because I keep seeing this error in the
js console (in addition to not seeing any results):

    TypeError: text is undefined 1 ext.sf.select2.base.js:251:4
        removeDiacritics https://example.dom/w/extensions/SemanticForms/libs/ext.sf.select2.base.js:251:4
        textHighlight https://example.dom/w/extensions/SemanticForms/libs/ext.sf.select2.base.js:258:23
        formatResult https://example.dom/w/extensions/SemanticForms/libs/ext.sf.select2.base.js:100:15
        populate https://example.dom/w/extensions/SemanticForms/libs/select2.js:920:39
        populateResults https://example.dom/w/extensions/SemanticForms/libs/select2.js:942:21
        updateResults/<.callback< https://example.dom/w/extensions/SemanticForms/libs/select2.js:1732:17
        bind/< https://example.dom/w/extensions/SemanticForms/libs/select2.js:672:17
        success https://example.dom/w/extensions/SemanticForms/libs/select2.js:460:25
        fire https://example.dom/w/load.php:3148:10
        fireWith https://example.dom/w/load.php:3260:7
        done https://example.dom/w/load.php:9314:5
        callback https://example.dom/w/load.php:9718:8

The URL http://example.dom/custom-autocomplete.php?l=lines&f=words
shows all the lines from the source (in this case, every 10th line from
/usr/share/dict/words) that matches “lines”. This example results in:

        {"sfautocomplete":
            {"2435":{"id":"borderlines",
                            "value":"borderlines",
                            "label":"borderlines",
                            "text":"borderlines"},
                            …

In my PHP script, I blatted the value over the keys “id”, “value”, “label” and “text”
because I saw each of them being used, but not why.

Anyway, PF is configured to read this correctly, so I can see that when
the user types “lines” an XHR request is made for
https://example.dom/w/api.php?action=sfautocomplete&format=json&external_url=tempURL&substr=lines&_=1494345628246
and it returns

    {"warnings": {
        "main": {
              "*": "Unrecognized parameter: '_'"
        }
    },
     "sfautocomplete": [
        {
          "id": "borderlines",
          "value": "borderlines",
          "label": "borderlines",
          "text": "borderlines"
        }, ....

So far, so good.

I’m instrumenting the code for select2.js (console.log() is your friend!) and I can see that by the time we get to its populate() method we have a list of objects that look like this:

Object { id: 0, value: "borderlines", label: "borderlines", text: undefined }

Ok, I can see it substituting its own id so I’ll take that out of my
results.

There is no difference. (Well, the ordering is different — id now comes
at the end — but that is to be expected.)

Now, what happens if I take out text?

Same thing. Ordering is different, but still shows up as undefined.

Output from my custom autocompleter now looks like this:

        {"sfautocomplete":
            {"2435":{"value":"borderlines",
                     "label":"borderlines"},
                     …

and the SMWApi is now giving

    {"warnings": {
        "main": {
              "*": "Unrecognized parameter: '_'"
        }
    },
     "sfautocomplete": [
        {
          "value": "borderlines",
          "label": "borderlines"
        }, ....

Still the same problem. Let me try Hermann’s suggestion and make my
output look like:

        {"sfautocomplete":
            [
                {"borderlines":”borderlines”},
                ....

Still, no results. The resulting object does look like this, though:

Object { id: 0, borderline: "borderlines", label: "borderlines", text: undefined }

Looking at my instrumented code and the traceback, I have found that the
transformation takes place in the call

options.results(data, query.page);

at the success callback around line 460 in select2.js. This leads us back to ajaxOpts.results() at line 251 in ext.sf.select2.tokens.js (since this is the token input method I’m looking at) and, yep, it looks like I should be putting something in the title attribute.

And, yay!, after changing the output of my custom autocomplete script to:

        {"sfautocomplete":
            [
                {"title":”borderlines”,
                 “value”: ”borderlines”},
                ....

the autocompletes start working. In fact, putting

        {"sfautocomplete":
            [
                {"title":”borderlines”}
                ....

is enough.

If you made it this far, you’ll know that I should have just copied the example I found when I updated the page on MW.o, but then I wouldn’t have understood this as well as I do now. Instead, I used what I learned to provide an example in the documentation that even I wouldn’t miss.

(Image is public domain from the Henry Altemus edition of John Bunyan’s Pilgrim’s Progress, Philadelphia, PA, 1890. Illustrations by Frederick Barnard, J.D. Linton, W. Small, etc. Engraved by Dalziel Brothers. Enlarged detail from Wikimedia Commons, uploaded by User:Dr_Jorgen.)


Improving watchlists for corporate MediaWiki use

I’ve learned, from listening to corporate users of MediaWiki, that watchlists are very important for maintaining the quality of the wiki.

The guys running the EVA wiki at NASA, for example, have done a lot of work on the Watchlist Analytics extension to ensure that the articles on their wiki have watchers spread out over the entire wiki.

Installing this extension for a client increased their awareness of the usefulness of watchlists and, since they had been using WhoIsWatching in the past, they asked me to fix a problem they had encountered.

The client takes a somewhat more proactive approach to managing their wiki. It might not seem like “the wiki way” to people who have only used MediaWiki on Wikipedia, but they wanted to use WhoIsWatching’s ability to put pages on editors’ watchlists.

In the past, when they used the page, it showed a list of users and allowed those with permission to add the page to anyone’s watchlist. It limited the list to people who had provided an email address, which kept it manageable.

Since then, I’ve implemented Single Sign-On for them and auto-populated email addresses from Active Directory. As a result, the number of users with an email address has jumped from a handful to over 10,000.

So, now WhoIsWatching was trying to load thousands of rows and display them all at once on a single page.

It was trying, but the requests were timing out and the page was unusable.

The extension had other problems. It practiced security through obscurity. While you could disable the ability to add pages to other people’s watchlists, the only thing keeping anyone from doing so was the fact that its administrative page (or “Special Page” in MediaWiki parlance) was not on the list of special pages. If you knew about the page, you could visit it and add an article you were hard at work on to everyone’s watchlists, spamming them with notifications of all your changes.

That, and if you visited the special page without providing any arguments, you’d get a cryptic “usage” message.

To address the first problem, I decided to put an auto-complete form on the page so that a user could start typing a username and MediaWiki would provide a list of matching usernames. I wondered how I would do this until I noticed that the Special:UserRights page was now providing this sort of auto-completion. Adding that functionality was as easy as providing a text field with the class mw-autocomplete-user.

I addressed the security issue by adding a couple of rights that could be given to users through the user interface (instead of by updating the value of global variables in a php file).

Finally, the frosting on the cake was to make WhoIsWatching’s special page useful if you visited it all by itself.

I already knew that the search bar provided auto-completion for article names and, based on what I discovered with mw-autocomplete-user, I thought I might be able to do something similar with page completion.

I was right. Thanks to a bug in a minor skin discovered back in 2012, you can add the class mw-search-input to a text field and it just works.

I hadn’t been aware of all the great auto-completion work that MediaWiki developers like krinkle and MetaMax have been doing, but I’m pleased with what I see. And the improvements they implemented made adding the features I needed to WhoIsWatching about a thousand percent easier.

Oh, and I did miscellaneous code cleanup and i18n wrangling (with Siebrand’s guidance, naturally). Now many changes sit ready for review.

There are still things I’d like to fix, but those will have to wait.

Image credit: Livrustkammaren (The Royal Armoury) / Erik Lernestål / CC BY-SA [CC BY-SA 3.0 or Public domain], via Wikimedia Commons

Emacs for MediaWiki

Tyler Romeo wrote:

If I had the time, I would definitely put together some sort of .dir-locals.el for MediaWiki, that way we could make better use of Emacs, since it has a bunch of IDE-like functionality, even if it’s not IDEA-level powerful.

A client wanted me to help train someone to take over the work of maintaining their MediaWiki installation. As part of that work, they asked for an IDE and, knowing that other MW devs used PHPStorm, I recommended it and they bought a copy for me and the person I was to train.

PHPStorm has “emacs keybindings”, but these are just replacements for the CUA keybindings. Some things that I expected the keybindings to invoke didn’t work. (It’s been a while since I’ve used PHPStorm, so I’ve forgotten the details.)

In any case, I’ve found that a lot of what I wanted from PHPStorm could be implemented in Emacs using the following .dir-locals.el (which I put above my core and extensions checkouts):

((nil . ((flycheck-phpcs-standard .
        "…/mediawiki/codesniffer/MediaWiki")
     (flycheck-phpmd-rulesets .
        ("…/mediawiki/messdetector/phpmd-ruleset.xml"))
     (mode . flycheck)
     (magit-gerrit-ssh-creds . "mah@gerrit.wikimedia.org"))))

The above is in addition to the code-sniffing I already had set up to put Emacs’ php-mode into the MW style.

The one thing that PHPStorm lacked (and where Emacs’ magit excels) was dealing with git submodules. Since I make extensive use of submodules for my MediaWiki work, this setup makes Emacs a much better tool for working with MediaWiki.
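
For the curious, the usual shape of such a setup is a core checkout with each extension added as a submodule, something like this (the extension name is a placeholder, and the URL assumes Gerrit’s usual path for extension repositories):

$ cd core
$ git submodule add https://gerrit.wikimedia.org/r/mediawiki/extensions/ExampleExtension extensions/ExampleExtension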

Naturally, I won’t claim that what works for me will work for anyone else. I’ve spent every day of the last 15 years in Emacs. I was first exposed to Emacs in the late 80s(!!), so the virus has had a long time to work its way into my psyche and, by now, I’m incurable.

2014 Summer of Code

Google Summer of Code has ended and, with it, my first chance to mentor a student, together with Markus Glaser, in implementing a new service for MediaWiki users.

At the beginning of the summer, Markus and I worked with Quim Gil to outline the project and find a student to work on it.

Aditya Chaturvedi, a student from the Indian Institute of Technology (“India’s MIT”) saw the project, applied for our mentorship, and, soon after, we began working with him.

We all worked to outline a goal of creating a rating system on WikiApiary with the intention of using a bot to copy the ratings over to MediaWiki.org.

I’m very happy to say that Aditya’s work can now be seen on WikiApiary. We don’t have the ratings showing up on MediaWiki.org yet (more on that in a bit), but since that wasn’t part of the deliverables listed as a success factor for this project, this GSoC project is a success.

As a result of his hard work, the ball is now in our court — Markus and I have to evangelize his ratings and, hopefully, get them displayed on MediaWiki.org.

Unlike some other projects, this project’s intent is to help provide feedback on MediaWiki extensions rather than change how MediaWiki itself behaves. To do this, Aditya and I worked with Jamie Thingelstad to create a way for users to rate the extensions that they use.

We worked with Jamie for a few reasons. First, Jamie has already created an infrastructure on WikiApiary for surveying MediaWiki sites. He is actively maintaining and improving the site. Pairing user ratings with his current usage statistics makes a lot of sense.

Another reason we worked with Jamie instead of trying to deploy any code on a Wikimedia site is that the process of deploying code on WikiApiary only requires Jamie’s approval.

The wisdom of this decision really became apparent at the end, when Aditya requested help getting his ratings to show up using the MediaWiki Extension template.

Thank you, Aditya. It was a pleasure working with you. Your hard work this summer will help to invigorate the ecosystem for MediaWiki extensions. Good luck on your future endeavors. I hope we can work together again on MediaWiki.

A freetard apologizes for Google

(For those not familiar with the term “freetard”, it is a derogatory term that Fake Steve Jobs coined for free software fanatics like myself. I’m reappropriating it here.)

A friend of mine posted a question on Facebook about backing up his Mac, asking what would happen if he decided to switch to Windows later. Instead of answering his question, I picked up on the bit about photos and responded with the following non-answer:

Give your life to Google. My phone is my camera and it syncs all photos automatically to “the cloud”. Everything is on the web, of course, and sometimes Google will surprise me with bits of scrap-booking that its bots send me.

For example, here is a movie made out of one of the breakfasts we had in London a few weeks ago.

And here is the “story” Google’s bots made of our whole trip to London.

So, yeah, Google is a multi-billion dollar corporation, but so are Apple and Microsoft. The difference is that Google doesn’t care if you are on a Mac or a PC.

But they would prefer you to use an Android phone, I’m sure, instead of an iPhone. Even there, you have more options because, like Microsoft, Google isn’t focused on controlling the delivery of their software to the same degree that Apple is.

This means you can have a crappy Android device, just the same way you can have a crappy PC. So, yes, there is a higher chance you will be dissatisfied. But it also means you are less limited than you are on an iOS device, and more people will be able to contribute to giving you a better experience via software or cloud services, because Google (like Microsoft on Windows) doesn’t exert the same control over the Android ecosystem that Apple does over the iOS ecosystem.

MediaWiki support

Monday, I announced MediaWiki 1.20.0, affirmed a six-month release cycle, and stated a plan for long-term support for the 1.19 series of MediaWiki. This is the first release that has been managed by a non-WMF employee, and I think it bodes well for third party users of MediaWiki.

I’m hoping that by working with Debian and other Linux distributors on 1.19 support, we can make MediaWiki more welcoming to new and old users. For example, after looking at some of the older MediaWiki installations recorded on WikiStats, I contacted a few wikis and encouraged them to upgrade to 1.19, especially some that were running ancient versions.

Long term support is especially important for people who customize MediaWiki for their own use. Of course, I would encourage anyone who adapts MediaWiki like this to use hooks and, ideally, share their modifications with us. But, as Linus Torvalds says, “reality is complicated”.

So, instead of telling users of MediaWiki “If you modify MediaWiki, we can’t help you at all”, I would rather say, “We’re going to support this version for 2 years, but you’re responsible for upgrading to the next release when the time comes.”

This gives people something that they’re able to plan around more easily than something that changes every six months. Using WikiStats, I’ll contact more MediaWiki installations that are out of date, encourage them to upgrade, and let them know how they can be notified of security updates and later long term support updates.

We have a really good tool, but we need to do a better job of supporting users who aren’t the Wikimedia Foundation itself. This is a start that should encourage the users of MediaWiki to keep their installations up-to-date as well as encourage wider use of MediaWiki.

MediaWiki 1.20 RC

A week and a half ago, the Platform Engineering Director for Wikimedia clarified how he would like to see volunteers helping with MediaWiki tarball releases.

Instead of doing some other work I had planned for this weekend (Yay, procrastination!), I managed to put together a 1.20 RC tarball and announce it.

If you get a chance to test this, let me know. If you find a bug, file it in bugzilla. Hopefully we’ll have something ready for release in a couple of weeks.

Why your javascript on Wikipedia will break

This week we did our first rollout of MediaWiki 1.19 on some of the smaller project sites. This staged rollout is a great way to find out how you are using the software in ways we didn’t expect and to give you a warning: “Beware! This thing you are doing is going to break!” Of course, I would prefer to avoid that wherever possible, but there are things I can’t control.

So now, I get to say “Beware!”:

Beware!

If you are using document.write() in some JavaScript, whether in a gadget, your common.js, vector.js, monobook.js, or even global.js, you need to change it. In the cases that I saw, people had used a code fragment like the following:

function importAnyScript(lang,family,script) {
document.write('<script type="text/javascript" src="' + 'http://'
        + lang + '.'
        + family + '.org/w/index.php?title='
        + script + '&action=raw&ctype=text/javascript"></script>');
}

This has to be changed to something like the following:

function importAnyScript(lang,family,script) {
mw.loader.load('//' + lang + '.' + family
        + '.org/w/index.php?title='
        + script + '&action=raw&ctype=text/javascript');
}