
Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Michael Still: Giving serial devices meaningful names

Sun 15th Apr 2018 21:04

This is a hack I’ve been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking in the right place.

For the trivial case, this is pretty easy with udev:

$ cat /etc/udev/rules.d/60-local.rules
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"

This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID, and serial number match the relevant values, to symlink the device to “/dev/radish”.

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can play with inserting and removing the device to determine which of these entries is the device you care about.
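
The serial number the rule matches on comes from the device's USB descriptor. udevadm will show it for a given device node, along with the vendor and product IDs:

$ udevadm info -a -n /dev/ttyUSB0 | grep -E 'idVendor|idProduct|serial'
    ATTRS{idVendor}=="0403"
    ATTRS{idProduct}=="6001"
    ATTRS{serial}=="A8003Ye7"

(udevadm prints these attributes for each parent device as well, so look for the values at the USB device level.)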

So that’s great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more… difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"

This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is — in my case either a currentcost or a solar panel inverter.
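
A minimal sketch of such a probe script might look something like this (Python with pyserial; the specific banner check is illustrative only, not the real usbtest):

#!/usr/bin/env python
# Sketch of a probe script for udev's PROGRAM option: udev passes the
# device path as the first argument, and whatever we print to stdout
# becomes the symlink name via %c. Illustrative only.
import sys
import serial  # pyserial

def identify(path):
    # Listen briefly: a currentcost meter chatters XML unprompted,
    # so a short read is enough to recognise one. The '<msg>' check
    # here is an assumption about the device's output.
    port = serial.Serial(path, 9600, timeout=3)
    sample = port.read(512)
    port.close()
    if b'<msg>' in sample:
        return 'currentcost'
    # Otherwise assume the solar panel inverter.
    return 'inverter'

if __name__ == '__main__':
    print(identify(sys.argv[1]))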


Michael Still: Hugo nominees for 2018

Sun 15th Apr 2018 21:04

Lifehacker kindly pointed out that the Hugo nominees are out for 2018. They are:

  • The Collapsing Empire, by John Scalzi. I’ve read this one and liked it.
  • New York 2140, by Kim Stanley Robinson. I’ve had a difficult time with Kim’s work in the past, but perhaps I’ll one day read this.
  • Provenance, by Ann Leckie. I liked Ancillary Justice, but failed to fully read the sequel, so I guess we’ll wait and see on this one.
  • Raven Stratagem, by Yoon Ha Lee. I know nothing!
  • Six Wakes, by Mur Lafferty. Again, I know nothing about this book or this author.

So a few there to consider in the future.


Michael Still: The Collapsing Empire

Sun 15th Apr 2018 21:04

This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don’t know that and are busy having petty trade wars instead. It isn’t a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire…

Title: The Collapsing Empire
Author: John Scalzi
Genre: Fiction
Publisher: Tor Books
Release Date: March 21, 2017
Pages: 336

Our universe is ruled by physics and faster than light travel is not possible—until the discovery of The Flow, an extra-dimensional field we can access at certain points in space-time that transport us to other worlds, around other stars. Humanity flows away from Earth, into space, and in time forgets our home world and creates a new empire, the Interdependency, whose ethos requires that no one human outpost can survive without the others. It’s a hedge against interstellar war—and a system of control for the rulers of the empire.

The Flow is eternal—but it is not static. Just as a river changes course, The Flow changes as well, cutting off worlds from the rest of humanity. When it’s discovered that The Flow is moving, possibly cutting off all human worlds from faster than light travel forever, three individuals—a scientist, a starship captain and the Empress of the Interdependency—are in a race against time to discover what, if anything, can be salvaged from an interstellar empire on the brink of collapse.

“John Scalzi is the most entertaining, accessible writer working in SF today.” —Joe Hill

“If anyone stands at the core of the American science fiction tradition at the moment, it is Scalzi.” —The Encyclopedia of Science Fiction, Third Edition


Michael Still: Things I read today: the best description I’ve seen of metadata routing in neutron

Sun 15th Apr 2018 21:04

I happened upon a thread about OVN’s proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I’m just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.


Michael Still: Escaping from blosxom

Sun 15th Apr 2018 21:04

I’ve been running my personal blog on a very hacked version of blosxom for a hilariously long time, and it’s time to escape. I’ve therefore started converting all of the content to wordpress here, and will eventually redirect the old domain here as well.

Why blog when it’s so 2000? I’m increasingly uninterested in social media like Facebook and Twitter. I figure if I’m going to note something down that looks like it might be useful to others, I’ll put it on ye olde blog instead.

I’m sure the conversion isn’t perfect, and I’ve decided not to migrate very old content that is simply not interesting any more (linux kernel patches from 2004, for example). If you find a post which has converted badly, just comment on it and I’ll do something about it. I am very sure that pretty much no one will do that thing however.


Michael Still: Nova vendordata deployment, an excessively detailed guide

Sun 15th Apr 2018 21:04

Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot — the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user’s behalf.

Nova supports a mechanism to add “vendordata” to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don’t change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance. (A configuration sketch enabling both modules follows.)
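
As a sketch, enabling both providers might look like this in nova.conf, with vendordata_jsonfile_path pointing StaticJSON at its file; treat the exact option names and paths as things to verify against your nova release:

[api]
vendordata_providers = StaticJSON, DynamicJSON
vendordata_dynamic_targets = testing@http://127.0.0.1:8888
vendordata_jsonfile_path = /etc/nova/vendor_data.json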

Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it’s the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add “DynamicJSON” to the vendordata_providers configuration option. This can also include “StaticJSON” if you’d like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.

The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>

Where name is a short string not including the ‘@’ character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125

Metadata fetched from this target will appear in the metadata service at a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json
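
As a usage example, from inside a booted instance you can fetch that path from the metadata service (169.254.169.254 is the standard metadata address):

$ curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json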

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

{ "testing": { "value1": 1, "value2": 2, "value3": "three" } }

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST (a sketch of such a service follows the list):

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time
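
To make the request and response shapes concrete, here is a minimal sketch of such a REST service (Python with Flask, no keystone middleware, and the returned field names are invented for illustration rather than anything nova mandates):

# vendordata_sketch.py: a minimal external vendordata service.
# Illustrative only: no authentication, invented field names.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def vendordata():
    # Nova POSTs a JSON document with the fields listed above.
    payload = request.get_json(force=True)
    # Whatever JSON we return ends up under this target's name
    # (for example "testing") in vendor_data2.json.
    return jsonify({
        'hostname': payload.get('hostname'),
        'project': payload.get('project-id'),
        # A real deployment would mint something like an AD token here.
        'example-token': 'token-for-%s' % payload.get('instance-id'),
    })

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8888)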

Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request — you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behaviour is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper
(only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt

We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it’s configured to work with an openstack-ansible all in one install that I set up for my private testing, which probably isn’t what you’re using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne

Per the README file in the vendordata sample repository, you can test the vendordata server in a standalone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`

We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/

Configuring nova to use the external metadata service

Now we’re ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers = DynamicJSON
vendordata_dynamic_targets = testing@http://metadatathingie.example.com:8888

Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo

We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}


Michael Still: So you want to setup a Ceph dev environment using OSA

Sun 15th Apr 2018 21:04

Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I have a need for a Ceph development environment it seems logical that I would build it by building an openstack-ansible Ocata AIO. There were a few gotchas there, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I’ve never seen before called a “Scenario”. Basically this means that you need to export an environment variable called “SCENARIO” before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml  2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml  2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #   foo: 1234
     #   bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8


     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS. Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false

That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.
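
For reference, the overall ordering ends up something like this (script names as I recall them from the Ocata openstack-ansible tree, with the patch above saved as pg_num.patch; adjust to your checkout):

    $ export SCENARIO=ceph
    $ ./scripts/bootstrap-ansible.sh
    $ ./scripts/bootstrap-aio.sh
    $ patch /etc/ansible/roles/ceph.ceph-common/defaults/main.yml < pg_num.patch
    $ ./scripts/run-playbooks.sh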

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I’ll never need to think about it again which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 2.99817 root default
    -2 2.99817     host labosa
     0 0.99939         osd.0        up  1.00000          1.00000
     1 0.99939         osd.1        up  1.00000          1.00000
     2 0.99939         osd.2        up  1.00000          1.00000


OpenSTEM: Australia and the Commonwealth Games

Sun 15th Apr 2018 15:04
Australia has been doing exceptionally well at the 2018 Commonwealth Games, held at the Gold Coast, Queensland. We can be very proud of our athletes, not only for their sporting prowess, but also because of their friendly demeanour and wonderful examples of the spirit of sportsmanship. I’m sure we all felt proud when the Australian […]

Ben Martin: My little robotic pals

Fri 13th Apr 2018 15:04
Years ago I decided to build an indoor robot with multiple kinects for navigation and a robotic arm for manipulation. It was an interesting time working out how to do this and what is needed to get a mobile base to map and navigate a static and dynamic indoor space. Any young players reading this might think that ROS can just magically make this all happen. There are some interesting issues to discover building your own base and some, um, "issues" shall we say that you will need to address that are not in the books or docs. I won't spoil it here for the new players other than to say be prepared to be persistent. 


There are two active wheels at the front and a single drag wheel at the back about 12 inches behind the front wheels. I wrote the code to control the arm myself as custom ROS nodes. A great trick here is you can inject sinusoidal movement by injecting a shim ROS node to take one target and smoothly move towards it.
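
A toy version of that shim idea, to make it concrete (Python with rospy; the topic names and the one second ease duration are made up):

#!/usr/bin/env python
# Toy shim node: sits between whatever publishes a raw joint target
# and the controller, easing toward each new target with a cosine
# profile (sinusoidal velocity) instead of jumping. Illustrative only.
import math
import rospy
from std_msgs.msg import Float64

class SinusoidalShim(object):
    def __init__(self):
        self.start = self.current = self.target = 0.0
        self.t = 1.0  # progress through the current move, 0..1
        self.pub = rospy.Publisher('joint_smoothed', Float64, queue_size=1)
        rospy.Subscriber('joint_target', Float64, self.on_target)

    def on_target(self, msg):
        # Begin a fresh eased move from wherever we are now.
        self.start = self.current
        self.target = msg.data
        self.t = 0.0

    def step(self):
        if self.t < 1.0:
            self.t = min(1.0, self.t + 0.02)  # roughly a 1s move at 50Hz
            # Cosine easing: velocity follows a sine, so the joint
            # accelerates and decelerates smoothly.
            alpha = (1.0 - math.cos(math.pi * self.t)) / 2.0
            self.current = self.start + (self.target - self.start) * alpha
        self.pub.publish(self.current)

if __name__ == '__main__':
    rospy.init_node('sinusoidal_shim')
    shim = SinusoidalShim()
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        shim.step()
        rate.sleep()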

Now I have a new friend for outdoor activity, the "hound bot". The little furry friend is still sans hair but has gps, imu, rc control override, and a ps4 eye camera mounted for depth perception and mapping. I'm taking a leaf out of the big car makers' books and only using cameras for navigation. But for me it is about cost, since a good lidar is still much too expensive for the hound.


The hound is a sort of monocoque, where the copper looking square part at the front is part of a 1/4 inch aircraft grade alloy solid welded chassis that extends the length of the robot. The hound can do about 20km/h and is around 20kg in heft. The electronics bay in the middle is protected by a reinforced carbon fibre layup that I did. Mixing materials for fun and slight weight loss.

One great part about doing this "because I want to" is that I am unbounded. Academic institutions might say that building robust alloy shells is not a worthwhile task and only the abstract algorithms matter. I get to pick and choose what matters based purely on what is interesting, what is hard to do (yay!), and what will help me get the robot to perform a task that I want.

The hound will get gripper(s) so it can autonomously "fetch" things for me such as the mail or go find and pick up objects on the lawn.

Donna Benjamin: Leadership, and teamwork.

Fri 13th Apr 2018 07:04

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us, that Hitler was elected. I too, was elected to the board of the Drupal Association, I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, right now, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.


James Morris: Linux Security Summit North America 2018 CFP Announced

Thu 12th Apr 2018 11:04

The CFP for the 2018 Linux Security Summit North America (LSS-NA) is announced.

LSS will be held this year as two separate events, one in North America (LSS-NA), and one in Europe (LSS-EU), to facilitate broader participation in Linux Security development. Note that this CFP is for LSS-NA; a separate CFP will be announced for LSS-EU in May. We encourage everyone to attend both events.

LSS-NA 2018 will be held in Vancouver, Canada, co-located with the Open Source Summit.

The CFP closes on June 3rd and the event runs from 27th-28th August.

To make a CFP submission, click here.


BlueHackers: Post-work: the radical idea of a world without jobs | The Guardian

Tue 10th Apr 2018 16:04
The long read: Work has ruled our lives for centuries, and it does so today more than ever. But a new generation of thinkers insists there is an alternative

Lev Lafayette: Net Promoter Score: The Most Useless Metric of All

Tue 03rd Apr 2018 17:04

A number of organisations use a customer service metric known as "Net Promoter", first suggested in the Harvard Business Review. Indeed, it is so common that apparently two-thirds of Fortune 500 companies are using the metric. It simply asks a single question: "How likely is it that you would recommend [company X] to a friend or colleague?". The typical scoring for the answer is a one to ten scale, with a value of 9 or 10 considered a "promoter" score, a 7 or 8 a "neutral" score, and a 0 to 6 a "detractor" score. The Net Promoter Score is calculated by subtracting the percentage of responders who are Detractors from the percentage of responders who are Promoters. It is a simple and blunt instrument and it's entirely the wrong tool to use.

To begin with, it fails at the most elementary mathematics. There is nothing to be gained from collecting responses on an 11 point range from 0-10 and then collapsing them to the three values of promoter, neutral, and detractor. In the Net Promoter system, a score of 6 is just as much a detractor as a responder who provides a score of 0, despite what should be a glaringly obvious difference in reaction. It is stunning that a journal with the alleged quality of the Harvard Business Review didn't notice this - let alone the authors of the article.
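To see how blunt the instrument is, here is the calculation in a few lines of Python; a room full of 6s scores exactly the same as a room full of 0s:

# The NPS bucketing as described above.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([0, 0, 0, 0]))  # -100.0
print(nps([6, 6, 6, 6]))  # -100.0, identical despite very different feedback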

Secondly, it conflates subjective responses with a quantitative value. What does a score of "6" mean anyway? According to the designers of the NPS, it's a detractor, a fail. Yet there is no guarantee that a responder interprets the value that way. In most assessment systems a "6" is a pass - and more to the point a "7" or "8" is considered a distinction grade; the latter would result in a cum laude or even magna cum laude in most universities. But in the NPS, it is merely a "neutral" result. The problem is, of course, that unless the individual is provided qualitative guidance with the values (which most organisations or applications don't do), there is no way of determining what their subjective score of 0-10 really reflects. Numerical values cannot be translated to qualitative values unless all parties are provided a means for correlation.

Thirdly, a single-value NPS provides no information to act upon. What does it mean that a respondent would or would not recommend a company, product, or service? Even assuming that a gradation is in place that matches values with the scale, and qualitative assessment with numerical values, the answers still provide nothing to act upon. Is it the company or service as a whole that has resulted in the evaluation? Is it a part of the company or service? Could it be, for a detractor, that the product or service was something that they thought they needed, but actually didn't? Unless the score is supplemented with an opportunity for the responder to explain their evaluation, there is no way that it creates an opportunity for action.

Given these errors, it is perhaps unsurprising that an unmodified "Net Promoter" method of measuring customer satisfaction ranked last in terms of predictive capability in an extensive study by Schneider et al. Granted, some information is better than no information, and people do prefer shorter surveys to longer surveys. But as designed, in its pure form, using a Net Promoter score is almost as bad as not collecting respondent data at all. A short survey which breaks up the item being reviewed into equal composite components, which guides subjective values to numerical values, which provides an opportunity for free-text qualitative information, and which measures metrics along the scale (with mean and distribution) will always be a far more effective measurement of both a respondent's satisfaction and an organisation's opportunity for action. As it is writ, the NPS should be avoided in all circumstances.


Francois Marier: Looking back on starting Libravatar

Tue 03rd Apr 2018 10:04

As noted on the official Libravatar blog, I will be shutting the service down on 2018-09-01.

It has been an incredible journey but Libravatar has been more-or-less in maintenance mode for 5 years, so it's somewhat outdated in its technological stack and I no longer have much interest in doing the work that's required every two years when migrating to a new version of Debian/Django. The free software community prides itself on transparency and so while it is a difficult decision to make, it's time to be upfront with the users who depend on the project and admit that the project is not sustainable in its current form.

Many things worked well

The most motivating aspect of running Libravatar has been the steady organic growth within the FOSS community. Both in terms of traffic (in March 2018, we served a total of 5 GB of images and 12 GB of 302 redirects to Gravatar), integration with other sites and projects (Fedora, Debian, Mozilla, Linux kernel, Gitlab, Liberapay and many others), but also in terms of users.

In addition, I wanted to validate that it is possible to run a FOSS service without having to pay for anything out-of-pocket, so that it would be financially sustainable. Hosting and domain registrations have been entirely funded by the community, thanks to the generosity of sponsors and donors. Most of the donations came through Gittip/Gratipay and Liberapay. While Gratipay has now shut down, I encourage you to support Liberapay.

Finally, I made an effort to host Libravatar on FOSS infrastructure. That meant shying away from popular proprietary services in order to make a point that these convenient and well-known services aren't actually needed to run a successful project.

A few things didn't pan out

On the other hand, there were also a few disappointments.

A lot of the libraries and plugins never implemented DNS federation. That was the key part of the protocol that made Libravatar a decentralized service, but unfortunately the rest of the protocol was much easier to implement and therefore many clients stopped there.
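
To make concrete what DNS federation involved: a client takes the domain of the user's email address and checks for an SRV record advertising that domain's own avatar server, falling back to the central service if none exists. Something like this (the record name is per the Libravatar API spec as I understand it, and example.com stands in for the email domain; the answer shown is invented):

$ dig +short srv _avatars._tcp.example.com
0 0 80 avatars.example.com.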

In addition, it turns out that while the DNS system is essentially a federated caching system for IP addresses, many DNS resolvers aren't doing a good job caching records and that created unnecessary latency for clients that chose to support DNS federation.

The main disappointment was that very few people stepped up to run mirrors. I designed the service so that it could scale easily in the same way that Linux distributions have coped with increasing user bases: "ftp" mirrors. By making the actual serving of images only require Apache and mod_rewrite, I had hoped that anybody running Apache would be able to add an extra vhost to their setup and start serving our static files. A few people did sign up for this over the years, but it mostly didn't work. Right now, there are no third-party mirrors online.

The other aspect that was a little disappointing was the lack of code contributions. There were a handful from friends in the first couple of months, but it's otherwise been a one-man project. I suppose that when a service works well for what people use it for, there are fewer opportunities for contributions (or less desire for them). The fact that the dev environment setup was not the easiest could definitely be a contributing factor, but I've only ever had a single person ask about it, so it's not clear that this was the limiting factor. Also, while our source code repository was hosted on Github and open for pull requests, we never received a single drive-by contribution, hinting at the fact that Github is not the magic bullet for community contributions that many people think it is.

Finally, it turns out that it is harder to delegate sysadmin work (you need root, for one thing) which consumes the majority of the time in a mature project. The general administration and maintenance of Libravatar has never moved on beyond its core team of one. I don't have a lot of ideas here, but I do want to join others who have flagged this as an area for "future work" in terms of project sustainability.

Personal goals

While I was originally inspired by Evan Prodromou's vision of a suite of FOSS services to replace the proprietary stack that everybody relies on, starting a free software project is an inherently personal endeavour: the shape of the project will be influenced by the personal goals of the founder.

When I started the project in 2011, I had a few goals.

This project personally taught me a lot of different technologies and allowed me to try out various web development techniques I wanted to explore at the time. That was intentional: I chose my technologies so that even if the project was a complete failure, I would still have gotten something out of it.

A few things I've learned

I learned many things along the way, but here are a few that might be useful to other people starting a new free software project:

  • Speak about your new project at every user group you can. It's important to validate that you can get other people excited about your project. User groups are a great (and cheap) way to kickstart your word of mouth marketing.

  • When speaking about your project, ask simple things of the attendees (e.g. create an account today, join the IRC channel). Often people want to support you but they can't commit to big tasks. Make sure to take advantage of all of the support you can get, especially early on.

  • Having your friends join (or lurk on!) an IRC channel means it's vibrant, instead of empty, and there are people around to field simple questions or tell people to wait until you're around. Nobody wants to be alone in a channel with a stranger.

Thank you

I do want to sincerely thank all of the people who contributed to the project over the years:

  • Jonathan Harker and Brett Wilkins for productive hack sessions in the Catalyst office.
  • Lars Wirzenius, Andy Chilton and Jesse Noller for graciously hosting the service.
  • Christian Weiske, Melissa Draper, Thomas Goirand and Kai Hendry for running mirrors on their servers.
  • Chris Forbes, fr33domlover, Kang-min Liu and strk for writing and maintaining client libraries.
  • The Wellington Perl Mongers for their invaluable feedback on an early prototype.
  • The #equifoss group for their ongoing support and numerous ideas.
  • Nigel Babu and Melissa Draper for producing the first (and only) project stickers, as well as Chris Cormack for spreading them so effectively.
  • Adolfo Jayme, Alfredo Hernández, Anthony Harrington, Asier Iturralde Sarasola, Besnik, Beto1917, Daniel Neis, Eduardo Battaglia, Fernando P Silveira, Gabriele Castagneti, Heimen Stoffels, Iñaki Arenaza, Jakob Kramer, Jorge Luis Gomez, Kristina Hoeppner, Laura Arjona Reina, Léo POUGHON, Marc Coll Carrillo, Mehmet Keçeci, Milan Horák, Mitsuhiro Yoshida, Oleg Koptev, Rodrigo Díaz, Simone G, Stanislas Michalak, Volkan Gezer, VPablo, Xuacu Saturio, Yuri Chornoivan, yurchor and zapman for making Libravatar speak so many languages.

I'm sure I have forgotten people who have helped over the years. If your name belongs in here and it's not, please email me or leave a comment.


Simon Lyall: Audiobooks – March 2018

Mon 02nd Apr 2018 11:04

The Actor’s Life: A survival guide by Jenna Fischer

Combination of advice for making it as an actor and a memoir of her experiences. Interesting and enjoyable 8/10

One Man’s Wilderness: An Alaskan Odyssey by Sam Keith

Based on the journals of Richard Proenneke, who built a cabin in the Alaskan wilderness and lived there for 16 months (& returned in later years). Interesting & I’m a little inspired 7/10

The Interstellar Age: The Story of the NASA Men and Women Who Flew the Forty-Year Voyager Mission by Jim Bell

Pretty much what the title says. Very positive throughout and switching between the science and profiles of the people smoothly. 8/10

Richard Nixon: The Life by John A Farrell

Comprehensive but balanced biography. Doesn’t shy away from Nixon’s many many problems but also covers his accomplishments and positive side (especially early in his career). 8/10

The Adventures of Sherlock Holmes, Book I – Arthur Conan Doyle – Read by David Timson

4 Stories unabridged. Reading is good but drop a point since the music is distracting at fast playback. 7/10

Death by Black Hole: And Other Cosmic Quandaries by Neil deGrasse Tyson

42 Essays on mainly space-related topics. Some overlap but pretty good, 10 years old so missing a few newer developments but good introduction. 8/10

The Sports Gene: Inside the Science of Extraordinary Athletic Performance by David Epstein

Good wide-ranging book on nature vs nurture in sports performance, how genes for athletic performance are not that simple & how little we know. 9/10

The Residence: Inside the Private World of the White House by Kate Andersen Brower

Gossipy account from interviewing various ex-staff (maids, cooks, butlers). A different angle than I get from other accounts. 7/10

Tanker Pilot: Lessons from the Cockpit by Mark Hasara

Account of the author flying & planning aerial refueling operations during the Gulf wars & elsewhere. A bit of business advice but that is unobtrusive. No actual politics 7/10

The Big Short: Inside the Doomsday Machine by Michael Lewis

Account of various people who made billions shorting the mortgage market in the run up to 2008. Fun and easy for layman to follow. 8/10

Driverless: Intelligent Cars and the Road Ahead by Hod Lipson

Listening to it the week a driverless car first killed a pedestrian. Fairly good intro/history/overview although fast changing topic so will go out of date quickly. 7/10

Journeys in English by Bill Bryson

A series of radio shows. I found the music & random locations annoying. Had to slow it down due to varied voices, accents and words. Interesting despite that, 7/10


Ben Martin: The Gantry is attached!

Mon 02nd Apr 2018 10:04
Now the fourth axis finally looks at home on the CNC plate. The new gantry sides are almost 100mm taller than the old ones and share a similar shape. While the gantry was off the machine it was a good time to attach the new Z-Axis, which gains a similar amount of Z travel. Final adjustment of where the spindle sits in its holder is still needed, but it makes sense for the cutting edge to be fairly high up when the Z-Axis is fully retracted as shown.




After an early day of great success, a day of great problem solving arrived before the attachment was possible. The day of great success involved testing the two new sides to see if, or how well, they attached to the mount points at the base of the machine. These holes in the gantry were hand marked, drilled, and tapped, so there was some good chance that they were off target enough to not work well. But those all went fine.

The second success was mounting the Z-Axis to the existing points on the gantry. I had in the back of my mind the thought that one side (the three holes on the bottom of the mount) would line up and attach fine, but that the top holes would be out of alignment. Both of these plates, seen horizontal in the image above, were made by CNC so the holes should be where I intended. Though these plates were both mounted to the Z-Axis, and the bottom plate goes right through to the lower steel bracket, so the alignment might not have been 100%. I registered both plates to the smooth side of the spindle backing plate so the alignment in that axis should have been ok. To great surprise and joy the top holes also aligned perfectly and the second phase fell into place.

It was only when putting the new sides onto the gantry that interesting things started to happen. I will have a new blog post on that part soon and likely a video of the problems and solutions for that part. One thing I will say now is that it helps to have washers, bolts, and spare skate bearings on hand for this process depending on how you have designed your far side gantry upright.




Matthew Oliver: Keystone Federated Swift – False Federation

Tue 27th Mar 2018 21:03

This is the second post in my series of posts on Swift in a Keystone federated environment, and the first where I'll walk through an actual environment, the one I'm calling 'False Federation'. For details on this series of posts, including the rationale, see my introductory post.

False Federation

This first environment doesn't actually use Keystone federation; instead it uses an existing ability of Swift to have more than one authentication middleware in the proxy pipeline, which is why I'm calling this 'False Federation'.

Swift resellers and the reseller_prefix

Swift, in an OpenStack environment, talks to Keystone for identity management through Keystone's authtoken and Swift's keystoneauth middlewares. However Keystone isn't required: Swift was designed to be a complete standalone storage solution, and in fact many Swift deployments use different (like swauth) and sometimes custom authentication middlewares. This way people can easily integrate Swift into their own environments.

If you've spent any time setting up authentication middlewares (like keystoneauth) in Swift, you've undoubtedly come across Swift's reseller_prefix option, and maybe thought to yourself: why?

As I mentioned earlier, Swift was designed from the start to be an end to end standalone storage system. One of the features it has always supported is the idea of more than one authentication middleware in the pipeline. And if you have more than one, then you need a way to distinguish which authentication middleware handles which account. This is what the reseller_prefix does. Swift will match the reseller_prefix prefixed to the account name with the authentication middleware that is to handle it.

This is actually a really powerful feature. It means you could resell your storage solution to other parties to manage accounts, or connect up different parts of your organisation if, say, for some reason you have more than one source you want to use as an authentication service.

Some authentication middlewares, like keystoneauth, can even cover more than one reseller_prefix; this is how service tokens tend to be deployed, so a service can have its own namespace of users for isolation and the data is safe from accidental deletion.
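
As a sketch, a keystoneauth section covering two prefixes looks something like this (option names per Swift's keystoneauth middleware documentation; verify against your Swift version):

[filter:keystoneauth]
use = egg:swift#keystoneauth
# Normal user accounts plus a SERVICE_ namespace for service tokens.
reseller_prefix = AUTH, SERVICE
SERVICE_service_roles = service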

And yes, it’s also possible to set an empty reseller_prefix.


Multiple Keystone middlewares

Having got the idea of reseller_prefixes out of the way, this is the first potential solution and the idea behind 'False Federation'. If you have a large Swift cluster, you could place in the proxy pipeline the required authentication middlewares for each separate OpenStack environment you want to connect it to.

NOTE: There are two middlewares needed to connect to a single Keystone instance: Keystone's authtoken and then Swift's keystoneauth. Other authentication middlewares, like swauth and many custom ones, are only one middleware. So a little less confusing.

Before I get into the configuration I should also mention, before you run off and give it a go: the current upstream keystoneauth in Swift doesn't support being placed multiple times in a pipeline. Why? Because of the way it places itself in the wsgi environment. But never fear, I have written a patch to correct this behaviour specifically for these sets of experiments, and when I get a chance to clean it up and write some tests I'll push it upstream. In the meantime you can grab hold of the patch here.

I'm not going into huge amounts of detail on how to connect to Keystone; the Swift documentation and installation guides cover that well. And really you're just duplicating exactly that, but for each Keystone endpoint you want to connect to. If you need detailed instructions, then let me know. They say an image is worth more than a thousand words, so here is how it's done in one pretty diagram:

The run down is:

  • Edit your proxy-server.conf on each node, and create ‘[filter:authtoken]’ and ‘[filter:keystoneauth]’ sections for each Keystone endpoint, noting that the names of the filters have to be different.
  • Each ‘[filter:authtoken]’ will point to an endpoint, and its corresponding ‘[filter:keystoneauth]’ will have a different reseller_prefix, which will need to be matched in the Object Storage endpoint in that keystone server’s service catalog (see the project documentation).
  • You then place these filters in the proxy pipeline. When placing a pair, the authtoken must come before its keystoneauth partner, and the pair’s keystoneauth must also appear before the next authtoken (like in the picture; a configuration sketch follows below).
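
Putting that together, an abridged proxy-server.conf sketch might look like this (endpoint names and credentials invented for illustration, most middleware options omitted; delay_auth_decision is needed so each authtoken passes on tokens it doesn't recognise):

[pipeline:main]
pipeline = catch_errors cache authtoken1 keystoneauth1 authtoken2 keystoneauth2 proxy-server

[filter:authtoken1]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://blue-keystone:5000
auth_url = http://blue-keystone:35357
delay_auth_decision = True
# ... credentials for the blue keystone ...

[filter:keystoneauth1]
use = egg:swift#keystoneauth
reseller_prefix = KEY_

[filter:authtoken2]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://green-keystone:5000
auth_url = http://green-keystone:35357
delay_auth_decision = True
# ... credentials for the green keystone ...

[filter:keystoneauth2]
use = egg:swift#keystoneauth
reseller_prefix = AUTH_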


NOTE: I’ve left off a bunch of middleware options in the picture to keep it small and readable.

Now if I send the following GET requests:
GET /v1/KEY_matt/pictures/cat.png
GET /v1/AUTH_matt/pictures/cat.png


The first would be authenticated on the blue keystone (or via ‘authtoken1 keystoneauth1’) and the second with the green keystone (or via ‘authtoken2 keystoneauth2’).


Cons

This approach was to demonstrate what Swift can already do, but there are some limitations, which as always depend on your situation. Keystone's authtoken middleware will always go and try to authenticate, so it would add a bunch of latency to each request going through the proxy. If the keystones are close, maybe that's ok. But if this was a geographical cluster with keystones all around the world then... ouch. A custom middleware would just skip reseller_prefixes that don't relate to it (like keystoneauth does).

Maybe you could have a different Swift proxy in each “region” that only points to the local keystone, so you are only authenticating locally... ok. But then a user can't come and access their data if they happen to be in a different region... even though you're talking to the same cluster.

So really what we want to do is take advantage of Keystone federation, where we only ever have to talk to one instance: the local one for the region a Swift proxy lives in. That way we get the speed and the ability to access our data from anywhere.

Next time…

So in the next post we'll add real keystone federation, but assume each federated environment is its own cloud, each with its own Swift cluster. In which case we could take advantage of another Swift feature, container sync.

Then the final post will cover what we really want: one large Swift cluster with multiple federated keystone OpenStack clouds. But that will involve fiddling with the federation sync metadata and needs a more detailed explanation of how Swift authentication works. So first I want to cover what Swift can do simply with the tools it comes with!


OpenSTEM: New Mirobot v3 arrival in Australia

Tue 27th Mar 2018 01:03
Here’s our batch of brand new Mirobot v3 kits on their arrival in Australia, dozens stacked. Since the v3 have a neat acrylic frame, I think I’ll do a proper “unboxing” and first build video of one soon, so you can see for yourself what this is about. Many classes of year 5 and 6 […]

Linux Users of Victoria (LUV) Announce: LUV Main April 2018 Meeting: Write docs like a software developer using the Linux toolchain

Sat 24th Mar 2018 23:03
Tuesday, April 3, 2018, 6:30 PM to 8:30 PM
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053
Link: http://www.melbourne.vic.gov.au/community/hubs-bookable-spaces/kathleen-syme-lib...

PLEASE NOTE NEW LOCATION

Linux Users of Victoria is a subcommittee of Linux Australia.


Matthew Oliver: Keystone Federated Swift – A series of posts

Fri 23rd Mar 2018 15:03

Matt Treinish and I proposed a presentation at the OpenStack Summit in Vancouver in May. It was accepted, but on standby, which simply means we have a lightning talk slot (10 minutes) but may be bumped up to a full slot based on how other presenters go (visa issues, pull outs, etc).

Anyway, 10 minutes won't do the topic justice, so I thought what better than to also post details here as I work through them. Some of what I say may end up in the presentation, or may not. All I know is I've been asked a few times how to set up Swift in a Keystone federated environment. Let's face it, Swift scales to a global cluster no worries, however other OpenStack components may have trouble doing the same. So federating a bunch of different regions and treating them as their own clouds makes heaps of sense. Great, then what's the best way of integrating Swift into this federated environment?

My current idea is to walk through three initial topologies. The first I'll call 'false federation', where we simply use Swift's ability to run multiple authentication middlewares as different resellers in order to authenticate to multiple keystone endpoints. For those playing along at home, the keystone middleware currently doesn't let you do this, but I have a trivial patch that fixes it... and I plan to push it upstream as soon as I have a chance to clean it up and add tests.

The second is separate Swift clusters in each cloud, using Swift's container sync to move objects so you still have access to your data on any cloud you visit… eventually.

And finally the third is what we'd all want: one large Swift cluster that all clouds talk to, so no matter where you are, there your data is. Plus it gives better durability, dispersion, and everything we want out of a Swift cluster. The trick here will be making sure the same Swift account name is used no matter which keystone you talk to, and I assume this will come down to how you configure what you share during federated token exchange. I'll leave this as the last post; we still need to play to iron it out... but obviously it is the dream.

These diagrams are obviously overly simplistic, but I hope you get the idea.

The next post will be the ‘False federation’ approach seeing as I already have a swift keystoneauth middleware patch that solves this.

