The party of the people

Sometimes, I think I should use my weblog as a place to aggregate my writing. Not that I write a lot, but if I answer a question or write a letter, I should post a copy here if it is public enough. In that vein, I’m posting this letter to the editor of my local paper that I wrote this past Sunday.


I was struck by Chris DelVecchio’s statements in the front page article “Trump to remain key figure”. Mr. DelVecchio said “The Republican Party is becoming the party of the people” which sounds strange to me.

He is saying that the party that put up a national candidate that didn’t get a majority of the vote and lost the election is the people’s party?

Someone should tell the people that.

He also pointed to the rise of younger representatives like Matt Gaetz and said it was part of a generational shift in the Republican party. I suppose he thinks this is going to help attract younger voters. Reading exit polls may be a pastime of the elites, but DelVecchio should at least glance at them: NBC’s, for example, showed that Biden won the majority of every age group under 50.

But, sure, maybe their political leaning will change when they turn 50 and grow up a little.

While party members like DelVecchio may be enamored with Trump’s populism and think it is a sign that he listens to the people, I think the real story is the record turnout for this past election—over 66% of the eligible population, the highest in the past 120 years—and that the majority voted to remove Trump from office.

Finally, I hope the events of January 6th will provide compelling evidence for those of us who love law and order that populism and firebrands are a dangerous combination.


Photo is CC BY-NC-ND 2.0 from peacearena on Flickr.


Ensuring people cannot complain about voting security

A couple of weeks ago, my cousin wrote up a blog post about “Fraud-Free Electronic Voting” where he described a system that would let people vote online in such a way that would “prevent the issues we currently see with accusations of voter fraud and inaccurate counting”. He flattered me by asking for my opinion so I thought I would deign to give mine.

(But I should point out that my opinion here is worth about as much as the paper it is printed on. So you, dear reader, can determine its value by printing it on pricey paper or not.)

The statement “prevent the issues we currently see with accusations of voter fraud and inaccurate counting” is not practical. There were a number of audits and recounts in various states during this past election. The audits and recounts did not find any instances of widespread (defined as affecting 1000s of ballots) fraud.

As long as there are winners and losers, especially sore losers, there will always be people questioning the results of elections. — LittleAncestor

There were individual instances of misrepresentation, double-voting, and other forms of voter fraud, but nothing that could be called widespread.

Which leaves us with the claims of fraud that people insisted had happened despite these audits and recounts. These are claims made in bad faith or in response to claims by bad actors. There isn’t anything you can do that will keep a bad actor from claiming fraud in the face of the evidence.

If someone with a deep-seated need to avoid any hint of loss has managed to gather a bunch of hangers-on, then, when that person loses, they will cast about for anything to blame for their loss.

No system, no matter how rationally constructed, is going to get around that problem.

Humans are driven, not by facts or rational argument, but by emotion.

As long as we inform our emotion with rational thought and are humble enough to admit we can be wrong, that is how it should be.

Of course, many of us are not humble and do not inform our emotion with rational thought. And some of us are narcissists who have managed to exploit the emotional drive of those around us.

Photo source: NATO Training Mission-Afghanistan Mass Communication Specialist 2nd Class Ernesto Hernandez Fonte/Public Affairs Specialist, Public domain, via Wikimedia Commons

The single issue voter

I remember sitting in a journalism class at the University of New Orleans almost 30 years ago listening to an old hand from the Times Picayune regaling us with stories about his work at the paper. One that stood out to me was how the church excommunicated a politician for their stance on integration (and then physically blocked him from entering the church for his daughter’s wedding).

The exact details of what he said escape me and so I’m probably wrong on parts, but looking at Wikipedia, I think he may have been talking about the excommunication (and later, after a public retraction, reinstatement) of Leander Perez in 1962 who was the secretary of the Citizens Council of South Louisiana for aggressively opposing the integration of Catholic schools.

I mention this because I was reminded about this when reading the letters to the editor for my local paper. There seems to be some confusion about Biden, the Catholic running for president of these United States. Many people have written into the local paper over the last month and pointed to a single issue—abortion—as a reason no Christian should vote for Biden.

Biden certainly has his faults. A reader was gracious enough to give us a few of them in the Letters page recently. She isn’t wrong.

But as she said, Biden has had 47 years in politics and the issues she managed to find were a molehill compared with the mountain Trump has managed to accumulate during just four (4!) years in political office.

One question—why are some good Catholics willing to support Biden?—really got me to write.

Abortion is a real issue, but no matter if you think it should be outlawed or kept legal, it should not be the only reason a Christian uses to pick a candidate for president.

It is a fact that Biden supports abortion and the Catholic church is against abortion. But, thinking Catholics might also recall that in 2018 the Church reinforced its Culture of Life when it updated the catechism to include these words from a speech Pope Francis made: “the death penalty is inadmissible because it is an attack on the inviolability and dignity of the person”.

This thinking Catholic might also remember that, in 1989, Trump paid $85,000 for a full page ad in four New York papers calling for a return of the death penalty—something that just happened to coincide with the trial of the Central Park Five, black men who were ultimately exonerated despite Trump’s best efforts.

Thinking Catholics might also recall that the catechism of the Church says “Every form of social or cultural discrimination … on the grounds of sex, race, color, social conditions, language, or religion must be curbed and eradicated…”.

Then they may remember that Trump’s first appearance in the New York Times was on the front page on October 16, 1973, under the headline “Major Landlord Accused of Antiblack Bias in City”.

Christians should use their faith to help them make decisions, but we would be wrong to dictate who any Christian (Catholic or otherwise) should vote for.

Photo: Choices are hard (Public Domain image from Commons)

Automating tasks with Makefiles

Almost 20 years ago, one of the first posts on this blog (hosted elsewhere at the time) was about documentation.

Since then, I’ve written about documentation and checklists and the like sporadically. The problem is that although I know documentation and checklists are a good thing, I don’t use them enough.

It is more fun to write code.

At the same time, I have a hidden perfectionist in me (trust me, he’s there), so if I write code to perform some process, I can spend a lot of time making sure it works just right.

So, (part of) the cure for my lack of documentation, is to write code that performs a task and let the code be the documentation. (I’ve even used this as an excuse to practice literate programming because then I can write code and readable documentation at the same time in Emacs.)

Anyway, back to code as documentation.

With my background in setting up systems, I know all too well the pain of having to repeat something over and over. At the same time, because I’m so old, I don’t want to learn a new tool when the tools I have already work. So, while friends of mine have used Ansible and similar tools to set up complete MediaWiki systems, I’m so opinionated about how I do things that, try as I might, I couldn’t just use their systems.

Which brings us to Make. GNU Make in particular. I could get into the byzantine differences between the various makes, but I tend to be on Linux and, hey, GNU Make is available on the other systems.

For the past year or so, I’ve been working on deploying MediaWiki with Make. I just used it to stage a major upgrade at a client of mine. Today, I have a small project I need to deploy, so I decided to try to use my Makefile method. Over the next few days, I’ll document this.

Get the makefile skeleton

Obviously the first thing to do is get my makefile skeleton set up. I’ve learned that I only need a stub of a file to do this and I’ve been adapting it over the years. Here’s what I have so far:

include makeutil/baseConfig.mk
baseConfigGitRepo=https://phabricator.nichework.com/source/makefile-skeleton

.git:
    echo This is obviously not a git repository! Run 'git init .'
    exit 2

#
makeutil/baseConfig.mk: /usr/bin/git .git
    test -f $@                                                                                                                              ||      \
        git submodule add ${baseConfigGitRepo} makeutil

With that in place as my Makefile, I just run make and the magic happens:

$ make
echo This is obviously not a git repository!
This is obviously not a git repository!
exit 2
Makefile:1: makeutil/baseConfig.mk: No such file or directory
make: *** [Makefile:6: .git] Error 2

Ok, well, I run git init && make and the magic happens:

$ git init && make
Initialized empty Git repository in /home/mah/client/client/.git/
test -f makeutil/baseConfig.mk                                                                                     ||       \
    git submodule add https://phabricator.nichework.com/source/makefile-skeleton makeutil
Cloning into '/home/mah/client/client/makeutil'...
remote: Enumerating objects: 106, done.
remote: Counting objects: 100% (106/106), done.
remote: Compressing objects: 100% (106/106), done.
remote: Total 106 (delta 52), reused 0 (delta 0)
Receiving objects: 100% (106/106), 36.38 KiB | 18.19 MiB/s, done.
Resolving deltas: 100% (52/52), done.

  Usage:

    make <target> [flags...]

  Targets:

    composer   Download composer and verify binary
    help       Show this help prompt
    morehelp   Show more targets and flags

  Flags: (current value in parenthesis)

    NOSSL      Turn off SSL checks -- !!INSECURE!! ()
    VERBOSE    Print out every command ()

Better.

Set up DNS

I want to put the client domain on its own IP with its own DNS record. I don’t have “spin up a VM” anywhere close to automated, but I have been using bind and nsupdate to update my zone files, so I’ve automated that.

# DNS server to update
dnsserver ?= 

# List of all DNS servers
allDNSServers ?=

# Keyfile to use
keyfile ?= K${domain}.private

# DNS name to update
name ?=

# IP address to use
ip ?=

# Time to live
ttl ?= 604800

# Domain being updated
domain = $(shell echo ${name} | sed 's,.*\(\.\([^.]\+\.[^.]\+\)\)\.*$$,\2,')

NSUPDATE=/usr/bin/nsupdate
DIG=/usr/bin/dig

#
verifyName:
    test -n "${name}"                                                                                                       ||      (       \
        echo Please set name!                                                                                           &&      \
        exit 1                                                                                                                          )

#
verifyIP:
    test -n "${ip}"                                                                                                         ||      (       \
        echo Please set ip!                                                                                                     &&      \
        exit 1                                                                                                                          )

#
verifyDomain:
    test -n "${domain}"                                                                                                     ||      (       \
        echo Could not determine domain. Please set domain!                                     &&      \
        exit 1                                                                                                                          )
    test "${domain}" != "${name}"                                                                            ||      (       \
        echo Problem parsing domain from name. Please set domain!                       &&      \
        exit 1                                                                                                                          )

#
verifyKeyfile:
    test -n "${keyfile}"                                                                                            ||      (       \
        echo No keyfile. Please set keyfile!                                                            &&      \
        exit 1                                                                                                                          )
    test -f "${keyfile}"                                                                                            ||      (       \
        echo "Keyfile (${keyfile}) does not exist!"                                                     &&      \
        exit 1                                                                                                                          )

# Add host with IP
addHost: verifyName verifyIP verifyDomain verifyKeyfile ${NSUPDATE}
    printf "server %s\nupdate add %s %d in A %s\nsend\n" "${dnsserver}"                     \
        "${name}" "${ttl}" "${ip}" | ${NSUPDATE} -k ${keyfile}
    ${make} checkDNSUpdate ip=${ip} name=${name}

# Check a record across all servers
checkDNSUpdate: verifyName verifyIP
    for server in ${allDNSServers}; do                                                                              \
        ${make} checkAddr ip=${ip} name=${name} dnsserver=$$server             ||      \
            exit 10                                                                                                         ;       \
    done

# Check host has IP
checkAddr: verifyName verifyIP ${DIG}
    echo -n ${indent}Checking for A record of ${name} on ${dnsserver}...
    ${DIG} ${name} @${dnsserver} A | grep -q ^${name}.*IN.*A.*${ip}         ||      (       \
        echo " FAIL!"                                                                                                           &&      \
        echo ${name} is not set to ${ip} on ${dnsserver}!                                       &&      \
        exit 1                                                                                                                          )
    echo " OK"

Now, I’ll just add the IP that I got for the virtual machine to the DNS:

$ make addHost name=example.winkyfrown.com. ip=999.999.999.999
> > Checking for A record of example.winkyfrown.com. on web.nichework.com... OK
> > Checking for A record of example.winkyfrown.com. on ns1.worldwidedns.net... OK
> > Checking for A record of example.winkyfrown.com. on ns2.worldwidedns.net... OK
> > Checking for A record of example.winkyfrown.com. on ns3.worldwidedns.net... OK
> > Checking for A record of example.winkyfrown.com. on 1.1.1.1... OK

(Of note: this goes back to the checklist bit. When I first tested this, I found that my nsupdate wasn’t propagating to one of my secondaries. It prompted me to check who was allowed to do zone transfers from the host and fix the problem.)
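By the way, the sed expression behind the domain variable is worth a sanity check of its own. Here it is run directly in the shell (where, unlike inside a Makefile, the $ anchor needs no doubling), using the same hostname as above:

```shell
# Extract the registrable domain from a (possibly dot-terminated) host name,
# using the same sed expression as the Makefile's "domain" variable.
name=example.winkyfrown.com.
domain=$(echo "${name}" | sed 's,.*\(\.\([^.]\+\.[^.]\+\)\)\.*$,\2,')
echo "${domain}"   # prints "winkyfrown.com"
```

Note the `\+` repetition is GNU sed syntax; a strictly POSIX sed would want `\{1,\}` instead.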

Basic server setup

I believe in versioning (wherever it is easy). So the first thing we’ll do is install etckeeper.

#
verifyHost:
    test -n "${REMOTE_HOST}"                                                                                        ||      (       \
        echo Please set REMOTE_HOST!                                                                            &&      \
        exit 10                                                                                                                         )

#
verifyCmd:
    test -n "${cmd}"                                                                                                        ||      (       \
        echo Please set cmd!                                                                                            &&      \
        exit 10                                                                                                                         )

doRemote: verifyHost verifyCmd
    echo ${indent}running '"'${cmd}'"' on ${REMOTE_HOST}
    ssh ${REMOTE_HOST} "${cmd}"


# Set up etckeeper on host
initEtckeeper:
    ${make} doRemote cmd="sh -c 'test -d /etc/.git || sudo apt install -y etckeeper'"
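The `test … || …` guard is what keeps that target idempotent: the install only runs when its marker is absent. A quick sketch of the pattern with throwaway stand-ins (here /tmp/guard-demo plays the role of /etc/.git, and mkdir plays the role of apt):

```shell
# Idempotent guard pattern: run the "install" only when the marker is absent.
# /tmp/guard-demo is a stand-in for /etc/.git; mkdir stands in for apt install.
marker=/tmp/guard-demo
rm -rf "$marker"
test -d "$marker" || { mkdir "$marker"; echo "installed"; }   # first run: installs
test -d "$marker" || { mkdir "$marker"; echo "installed"; }   # second run: no-op
```

Because the whole guard lives in one shell command, re-running make (or the ssh command) costs nothing once the marker exists.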

Initial installation of Apache+PHP on the server

Finally, let’s set up a webserver!

# Install the basic LAMP stack
initLamp:
    ${make} doRemote cmd="sh -c 'test -d /etc/apache2 || sudo apt install -y        \
        php-mysql php-curl php-gd php-intl php-mbstring php-xml php-zip                 \
        libapache2-mod-php'"
    ${make} doRemote cmd="sh -c 'test -d /var/lib/mysql || sudo apt install -y mariadb-server'"
    ${make} doRemote cmd="sh -c 'sudo systemctl enable apache2'"
    ${make} doRemote cmd="sh -c 'sudo systemctl enable mariadb'"
    ${make} doRemote cmd="sh -c 'sudo systemctl start apache2'"
    ${make} doRemote cmd="sh -c 'sudo systemctl start mariadb'"

    curl -s -I ${REMOTE_HOST} | grep -q ^.*200.OK                                           ||      (       \
        echo Did not get "'200 OK'" from ${REMOTE_HOST}                                         &&      \
        exit 1                                                                                                                          )
    touch $@
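That final grep is the whole health check: any header block containing “200 OK” passes. Fed a canned response instead of a live curl, the check looks like this:

```shell
# The "did we get a 200?" check from the recipe above, with a canned header
# standing in for the live "curl -s -I ${REMOTE_HOST}" output.
response='HTTP/1.1 200 OK'
echo "$response" | grep -q '^.*200.OK' && echo "server up"   # prints "server up"
```

(The `.` in `200.OK` happens to match the space; `grep -q '200 OK'` would be a tighter way to say the same thing.)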

And the basic website:

setupSite: initLamp verifyRemotePath
    ${make} doRemote cmd="sh -c 'test -x /usr/bin/tee || sudo apt install -y        \
            coreutils'"
    (                                                                                                                                                       \
        echo "<VirtualHost *:80>"                                                                 &&      \
        echo "  ServerName ${REMOTE_HOST}"                                                &&      \
        echo "  DocumentRoot ${REMOTE_PATH}/html"                                 &&      \
        echo "  ErrorLog ${REMOTE_PATH}/logs/error.log"                   &&      \
        echo "  CustomLog ${REMOTE_PATH}/logs/access.log combined"                      &&      \
        echo "  <Directory ${REMOTE_PATH}/html>"                                                        &&      \
        echo "          Options FollowSymlinks Indexes"                                                 &&      \
        echo "          Require all granted"                                                                    &&      \
        echo "          AllowOverride All"                                                                              &&      \
        echo "  </Directory>"                                                                                           &&      \
        echo "</VirtualHost>"                                                                                                   \
    ) | ${make} doRemote                                                                                                            \
        cmd="sh -c 'test -f /etc/apache2/sites-available/${REMOTE_HOST}.conf || \
                sudo tee /etc/apache2/sites-available/${REMOTE_HOST}.conf'"
    ${make} doRemote                                                                                                                        \
        cmd="sh -c 'test -L /etc/apache2/sites-enabled/${REMOTE_HOST}.conf      ||      \
            sudo a2ensite ${REMOTE_HOST}'"
    ${make} doRemote                                                                                                                        \
        cmd="sh -c 'test ! -L /etc/apache2/sites-enabled/${REMOTE_HOST}.conf || \
            sudo systemctl reload apache2                                                                   ||      \
            ( sudo systemctl status apache2 && false )'"
        touch $@

Finally, let’s deploy MediaWiki!

Purging whole namespaces of pages in MediaWiki

So, I was asked to purge all the pages in several categories. The smaller categories are relatively easy to do using the API sandbox.

  1. Visit the Special:ApiSandbox page on your wiki.
  2. Select the action purge.
  3. Select action=purge from the sidebar.
  4. Look for the generator option and then select allpages from the drop-down.
  5. Return to the top of the page and select generator=allpages from the sidebar.
  6. Look for the gapnamespace option and select the namespace you want to purge.
  7. Execute the request using the “Make request” button at the top of the page.
  8. When the request is complete, there may be the opportunity to repeat the request with the next batch of pages. You’ll see a button at the bottom of the JSON output that says “Continue”. Click it until the entire namespace has been purged.

The API sandbox will let you play around with different parameters. For example, in the last screenshot, I set gaplimit (under generator=allpages) to 3 but I could have set it as high as 500 if I wanted.
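Outside the sandbox, this is just a POST to api.php. A sketch of the equivalent request with curl, where wiki.example.org is a placeholder for your own wiki:

```shell
# Hypothetical sketch of the request the sandbox builds for you.
# wiki.example.org is a placeholder; gaplimit=500 is the usual per-user maximum.
api="https://wiki.example.org/w/api.php"
query="action=purge&generator=allpages&gapnamespace=0&gaplimit=500&format=json"
# curl -s "${api}" --data "${query}"   # uncomment to run against a real wiki
echo "${api}?${query}"
```

A real run would also need to follow the gapcontinue value returned in the JSON, which is exactly what the sandbox’s “Continue” button does for you.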

So for namespaces that don’t have too many pages (say, less than 1000), this is do-able. But on your average-sized wiki, a namespace is likely to hold tens of thousands of pages. Something more is needed.

Next, purging namespaces programmatically.

MABS status report: Updating MediaWiki::API

For the past couple of weeks, I’ve had a significant amount of time to spend on Multilateral, Asynchronous, Bidirectional Synchronisation of wikis or MABS for short.

This is all built on the git remote for MediaWiki work that was started almost a decade ago by some students. Since the initial effort there have been some significant changes in the MediaWiki API and, in the meantime, the MediaWiki::API Perl module that is doing a lot of heavy lifting in this project hasn’t seen a lot of work. For example, the last commit on the GitHub repository was to fix a typo in 2015.

So, I’ve been working this past week on updating the Perl module. This has been a lot of fun since I used to be quite the Perl snob—and by that I mean I looked down on people who didn’t love Perl, not that I looked down on Perl. Times have changed for me in the past ten or eleven years, so I’ve acquired some humility and begun doing a lot of work in what I would have considered to be the bottom-of-the-barrel language: PHP. Coming back to Perl is a lot of fun.

That said, Perl has continued to grow while I’ve been gone and I need some advice. I’ve become a huge fan of linters, so one change has been adhering pretty closely to almost every criticism Perl::Critic throws at me. I’ve gone as far as adding “smx” after almost every regular expression and incompatibly changing use constant to Readonly. You might say I’m getting a little carried away.

This and fixing the tests to use a docker instance (if available) rather than just sending every tester to the testwiki, as well as fixing some bugs I found along the way, has helped me understand this vital piece of the MABS project.

Still, coming back to Perl has made me realize just how ad hoc Perl’s object system is. I’ve heard of Moose and Mus (which I’m leaning towards), but I was wondering what best-practices the Perl community has for updating an existing code base.

Update 1: I asked for some feedback on the Perl object system to use and got some great feedback.

Update 2: I contacted the original author (Jools Wills) of the MediaWiki::API module and talked to him about what direction to take with it. I’ll have to do some more work on it to make it work well for my purposes, but I may end up sending him a bunch of pull requests.

Photo by Roger McLassus [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons

Rule #0 of any checklist


A while back I mentioned Atul Gawande‘s book The Checklist Manifesto. Today, I got another example of how to improve my checklists.

The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg.

So, the takeaway for me is this: any checklist should start off verifying that what you “know” to be true is true. (Thankfully, my errors can be backed out with very few long-term consequences, but I shouldn’t use this as an excuse to forego checklists.)

Ask the “Is it plugged in?” question first. What happened today was an example of when asking it would have helped.

Today I was testing the thumbnailing of some MediaWiki code and trying to understand the $wgLocalFileRepo variable. I copied part of an /images/ directory over from another wiki to my test wiki. I verified that it thumbnailed correctly.

So far so good.

Then I changed the directory parameter and tested. No thumbnail. Later, I realized this is to be expected because I didn’t copy over the original images. So that is one issue.

I erased (what I thought was) the thumbnail image and tried again on the main repo. It worked again–I got a thumbnail.

I tried copying the images directory over to the new location, but the new thumbnail directory structure didn’t produce a thumbnail.

I tried over and over with the same thumbnail and was confused because it kept telling me the same thing.

I added debugging statements and still got nowhere.

Finally, I just did an ls on the directory to verify it was there. It was. And it had files in it.

But not the file I was trying to produce a thumbnail of.

The system that “worked” had the thumbnail, but not the original file.

So, moral of the story: Make sure that your understanding of the current state is correct. If you’re a developer trying to fix a problem, make sure that you are actually able to understand the problem first.

Maybe your perception of reality is wrong. Mine was. I was sure that the thumbnails were being generated each time until I discovered that I hadn’t deleted the thumbnails, I had deleted the original.

(Photo CC-BY 2.0 David Pursehouse: Earthquake survival kit checklist from Japan.

Translation:
– 1.5 liter bottle of water
– Canned bread
– Rice
– Pack of disposable toilet bags
– Foil sheet (to keep body dry/warm))

Go and do likewise

While many people are becoming more comfortable with single-payer healthcare (thanks to Bernie Sanders), many of my Christian compatriots live in a socially conservative milieu that has so totally embraced the myth of the bootstraps that it has turned the call for personal responsibility (an inarguable good) into an excuse to escape caring for other people when we have the means.

This was made clear to me when I shared Jessica Kantrowitz‘s post on twitter:

Understandably, some people objected. For example, my mother, a careful reader of scripture, commented: “???? Never read that.” In the discussion that followed, she said Christians are to be personally involved: “A real neighbor sees a need and gets personally involved.”

And I totally agree with that.

However, it ends up being an excuse not to use taxes for social welfare since there is no “personal involvement.” But, the story of the Good Samaritan does not say the only way we are to help others is through personal involvement.

So, let me return to the original statement that provoked this discussion. It is a hyperbolic statement.  Jesus did not literally say “Pay for other people’s healthcare.”

But it would be a valid conclusion to draw from the story of the Good Samaritan.

Jesus was asked “who is my neighbor?” by a man trying to make sure he met all the legal requirements of the command to “love your neighbor as yourself.” He was trying to make sure he would merit eternal life.

In response, Jesus told a story that ended with a Samaritan paying for the care of the man he rescued (after two other “holy” men before him had passed by) and then promising to pay for any further costs when he was able to return.  After this story, Jesus asked, in the Socratic style of teaching, “Which of these three do you think was a neighbor to the man who fell into the hands of robbers?”

So, yes, Jesus didn’t say “Pay for other people’s health care” but he also did not say “Go be personally involved.” In fact, the story clearly shows the opposite: the Samaritan was personally involved, but when he couldn’t stay and personally take care of the man, he left him with someone else and left money to care for him.

And in the end, Jesus didn’t give the man asking him for spiritual advice an easy answer. He didn’t give any explicit direction. He said “Go and do likewise.” What that is in any situation differs.

Sure, like the Good Samaritan, Christians are called to get dirty helping others.

But, also like the Good Samaritan, we have to continue with our own business.

This doesn’t excuse us from caring for others when we cannot be personally involved. When we have other pressing matters we can give others the resources to care in our place, just as the Good Samaritan left the man with the innkeeper.

(Photograph by jean-louis Zimmermann from Moulins, FRANCE [CC BY 2.0], via Wikimedia Commons.)

Creating an external auto-completion provider for Page Forms

(The picture on this post is from Pilgrim’s Progress as Christian struggles in the slough of despond. I feel his pain here.)

I have a couple of use cases that require pulling from an external data source into MediaWiki. Specifically, they need to pull information such as employee data in Active Directory or a company-wide taxonomy that is maintained outside of MediaWiki. Lucky for me, there is the Page Forms extension.

Page Forms provides a couple of ways to do this: directly from an outside source or using the methods provided by the External Data extension.

Since I was going to have to write some sort of PHP shim, anyway, I decided to go with the first method.

Writing the PHP script to provide possible completions when it was given a string was the easy part. As a proof of concept, I took the list of words in /usr/share/dict/words on my laptop, trimmed it to 1/20th its size using

sed -n '0~20p' /usr/share/dict/words > short.txt

and used a simple PHP script (hosted on winkyfrown.com) to provide the data.
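(As an aside, 0~20p is GNU sed’s “first~step” addressing, printing every 20th line. On a tiny inline list the same idea looks like this:)

```shell
# GNU sed "first~step" addressing: print every 2nd line of a 5-word list.
printf '%s\n' alpha bravo charlie delta echo | sed -n '0~2p'
# prints "bravo" and "delta"
```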

That script is the result of a bit of a struggle. Despite the fact that the documentation pointed to a working example (after I updated it, natch), that wasn’t clear enough for me. I had to spend a few hours poking through the source and instrumenting the code to find the answer.

And that is the reason for this weblog post. I posted the same thing earlier today to the Semantic MediaWiki Users mailing list after an earlier plea for help. What resulted is the following stream-of-consciousness short story:

I must be doing something wrong because I keep seeing this error in the
js console (in addition to not seeing any results):

    TypeError: text is undefined 1 ext.sf.select2.base.js:251:4
        removeDiacritics https://example.dom/w/extensions/SemanticForms/libs/ext.sf.select2.base.js:251:4
        textHighlight https://example.dom/w/extensions/SemanticForms/libs/ext.sf.select2.base.js:258:23
        formatResult https://example.dom/w/extensions/SemanticForms/libs/ext.sf.select2.base.js:100:15
        populate https://example.dom/w/extensions/SemanticForms/libs/select2.js:920:39
        populateResults https://example.dom/w/extensions/SemanticForms/libs/select2.js:942:21
        updateResults/<.callback< https://example.dom/w/extensions/SemanticForms/libs/select2.js:1732:17
        bind/< https://example.dom/w/extensions/SemanticForms/libs/select2.js:672:17
        success https://example.dom/w/extensions/SemanticForms/libs/select2.js:460:25
        fire https://example.dom/w/load.php:3148:10
        fireWith https://example.dom/w/load.php:3260:7
        done https://example.dom/w/load.php:9314:5
        callback https://example.dom/w/load.php:9718:8

The URL http://example.dom/custom-autocomplete.php?l=lines&f=words
shows all the lines from the source (in this case, every 20th line from
/usr/share/dict/words) that match “lines”. This example results in:

        {"sfautocomplete":
            {"2435":{"id":"borderlines",
                            "value":"borderlines",
                            "label":"borderlines",
                            "text":"borderlines"},
                            …

In my PHP script, I blatted the value over the keys “id”, “value”, “label” and “text”
because I saw each of them being used, but not why.

Anyway, PF is configured to read this correctly, so I can see that when
the user types “lines” an XHR request is made for
https://example.dom/w/api.php?action=sfautocomplete&format=json&external_url=tempURL&substr=lines&_=1494345628246
and it returns

    {"warnings": {
        "main": {
              "*": "Unrecognized parameter: '_'"
        }
    },
     "sfautocomplete": [
        {
          "id": "borderlines",
          "value": "borderlines",
          "label": "borderlines",
          "text": "borderlines"
        }, ....

So far, so good.

I’m instrumenting the code for select2.js (console.log() is your friend!) and I can see that by the time we get to its populate() method we have a list of objects that look like this:

Object { id: 0, value: "borderlines", label: "borderlines", text: undefined }

Ok, I can see it substituting its own id so I’ll take that out of my
results.

There is no difference. (Well, the ordering is different — id now comes
at the end — but that is to be expected.)

Now, what happens if I take out text?

Same thing. Ordering is different, but still shows up as undefined.

Output from my custom autocompleter now looks like this:

        {"sfautocomplete":
            {"2435":{"value":"borderlines",
                     "label":"borderlines"},
                     …

and the SMWApi is now giving

    {"warnings": {
        "main": {
              "*": "Unrecognized parameter: '_'"
        }
    },
     "sfautocomplete": [
        {
          "value": "borderlines",
          "label": "borderlines"
        }, ....

Still the same problem. Let me try Hermann’s suggestion and make my
output look like:

        {"sfautocomplete":
            [
                {"borderlines":"borderlines"},
                ....

Still, no results. The resulting object does look like this, though:

Object { id: 0, borderline: "borderlines", label: "borderlines", text: undefined }

Looking at my instrumented code and the traceback, I have found that the
transformation takes place in the call

options.results(data, query.page);

at the success callback around line 460 in select2.js. This leads us back to ajaxOpts.results() at line 251 in ext.sf.select2.tokens.js (since this is the token input method I’m looking at) and, yep, it looks like I should be putting something in the title attribute.

And, yay!, after changing the output of my custom autocomplete script to:

        {"sfautocomplete":
            [
                {"title":"borderlines",
                 "value":"borderlines"},
                ....

the autocompletes start working. In fact, putting

        {"sfautocomplete":
            [
                {"title":"borderlines"}
                ....

is enough.
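With that discovery, the shim boils down to very little. This is a sketch of the fixed version (again my reconstruction, not the hosted script): Page Forms’ token input reads the “title” key, so that is all the response needs to carry.

```php
<?php
// Revised sketch of the autocompletion shim: only "title" is required
// by the token input in Page Forms, so emit just that.
function sfautocomplete(array $words, string $substr): array {
    $matches = [];
    foreach ($words as $word) {
        if (stripos($word, $substr) !== false) {
            $matches[] = ['title' => $word];
        }
    }
    return ['sfautocomplete' => $matches];
}

echo json_encode(sfautocomplete(['borderlines', 'outline'], 'lines'));
// {"sfautocomplete":[{"title":"borderlines"}]}
```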

If you made it this far, you’ll know that I should have just copied the example I found when I updated the page on MW.o, but then I wouldn’t have understood this as well as I do now. Instead, I used what I learned to provide an example in the documentation that even I wouldn’t miss.

(Image is public domain from the Henry Altemus edition of John Bunyan’s Pilgrim’s Progress, Philadelphia, PA, 1890. Illustrations by Frederick Barnard, J.D. Linton, W. Small, etc. Engraved by Dalziel Brothers. Enlarged detail from Wikimedia Commons uploaded by User:Dr_Jorgen.)

 

More tragic middle aged white men

Just as I hit middle age (my 44th birthday is this year), stories start coming out about how tragic white, middle-aged men’s lives are becoming. And, unlike many other people who have spent their whole lives fighting, this is a new experience for many middle-aged white men in the States.

It started shortly after Bill Clinton helped Republicans in Congress enact a bunch of welfare reforms in the mid-90s. Of course, those reforms targeted people that white men with jobs would see as moochers.

We started seeing the effects a few years later as disability claims more than doubled in the 10 years after 2000 and the mortality rate for middle-aged white folks went up.

Tragedy begets tragedy and, into this environment, a divorced, isolated, middle-aged, church-going white man falls on desperate times. His criminal background probably didn’t help, but he saw an opportunity in targeting romantic men in their 50s who had been divorced and become isolated and desperate.

The Atlantic story with the click-bait title “Murder by Craigslist” does a great job of telling the story of these middle-aged guys in a compassionate way. It manages to use the story of a serial murderer in a depressed area of Ohio to help us see the tragedy in his life and that of his victims.

When I read stories like this, they hit close to home. I’ve been very, very blessed, but I still see that I am but a step or two away from being one of the romantic white guys described in this story.

(Image of Craigslist World Headquarters in San Francisco‘s Sunset District from Wikimedia Commons by User:Calton. CC-BY-SA-3.0. Used by permission.)