thinktime

Clinton Roy: South Coast Track Report

Planet Linux Australia - Sun 29th Jan 2017 23:01

Please note this is a work in progress

I had previously stated my intention to walk the South Coast Track. I have now completed this walk and now want a space where I can collect all my thoughts.

Photos: Google Photos album

The sections I’m referring to here come straight from the guide book. With the walking weather and tides all in our favour, we managed to do the walk in six days. We flew in late on the first day and did not finish section one of the walk. On the second day we finished section one and then completed sections two and three. Day three was just the Ironbound range. Day four was just section five. On day five we completed section six and the tiny section seven. Day six was section eight, and day seven was Cockle Creek. (TODO: something’s not adding up here)

The hardest day, not surprisingly, was day three, where we tackled the Ironbound range: 900m up, then down. The surprising bit was how easy the ascent was and how god damn hard the descent was. The guide book says there are three rest camps on the descent, with one just below the peak, a perfect spot for lunch. Either this camp is hidden (e.g. you have to look behind you) or it’s overgrown, as we all missed it. This meant we ended up skipping lunch and were slipping down the wet, muddy, awful descent side for hours. When we came across the mid rest camp, because we’d been walking for so long, everyone assumed we were at the lower camp and that we were therefore only an hour or so away from camp. Another three hours or so later we actually came across the lower camp site, and by that time all sense of proportion was lost and I was starting to get worried that somehow we’d gotten lost, were not on the right trail, and would run out of light. In the end I got into camp about an hour before sundown (approx eight) and B&R got in about half an hour before sundown. I was utterly exhausted: got some water, pitched the tent, collapsed in it and fell asleep. I woke up close to midnight, realised I hadn’t had any lunch or dinner, and still wasn’t actually feeling hungry. I forced myself to eat a hot meal, then collapsed in bed again.

TODO: very easy to follow trail.
TODO: just about everything worked.
TODO: spork
TODO: solar panel
TODO: not eating properly
TODO: needing more warmth

I could not have asked for better walking companions, Richard and Bec.


Filed under: camping, Uncategorized
Categories: thinktime

Friction and traction

Seth Godin - Sun 29th Jan 2017 20:01
It's fashionable for designers and marketers to want to reduce friction in the way they engage with users. And sometimes, that's smart. If someone knows what they want, get out of their way and help them get it. One-click, done... Seth Godin

Julien Goodwin: Charging my ThinkPad X1 Gen4 using USB-C

Planet Linux Australia - Sun 29th Jan 2017 01:01
As with many massive time-sucking rabbit holes in my life, this one starts with one of my silly ideas getting egged on by some of my colleagues in London (who know full well who they are), but for a nice change, this is something I can talk about.

I have a rather excessive number of laptops; at the moment my three main ones are a rather ancient Lenovo T430 (personal), a Lenovo X1 Gen4, and a Chromebook Pixel 2 (both work).

At the start of last year I had a T430s in place of the X1, and was planning on replacing both it and my personal ThinkPad mid-year. However, both of those older laptops used Lenovo's long-held (back to the IBM days) barrel charger, which led to me having a heap of them in various locations at home and work. All the newer machines switched to their newer rectangular "slim" style power connector, and while adapters exist, I decided to go in a different direction.

One of the less-touted features of USB-C is USB-PD[1], which allows devices to be fed up to 100W of power, and can do so while using the port for data (or the other great feature of USB-C, alternate modes, such as DisplayPort, great for docks), which is starting to be used as a way to charge laptops, such as the Chromebook Pixel 2, various models of the Apple MacBook line, and more.

Instead of buying a heap of slim-style Lenovo chargers, or a load of adapters (which would inevitably disappear over time) I decided to bridge towards the future by making an adapter to allow me to charge slim-type ThinkPads (at least the smaller ones, not the portable workstations which demand 120W or more).

After doing some research on what USB-PD platforms were available at the time I settled on the TI TPS65986 chip, which, with only an external flash chip, would do all that I needed.

Devkits were ordered to experiment with, and prove the concept, which they did very quickly, so I started on building the circuit, since just reusing the devkit boards would lead to an adapter larger than would be sensible. As the TI chip is a many-pin BGA, and breaking it out on 2-layers would probably be too hard for my meager PCB design skills, I needed a 4-layer board, so I decided to use KiCad for the project.

It took me about a week of evenings to get the schematic fully sorted, with much of the time spent reading the chip datasheet, or digging through the devkit schematic to see what they did there for some cases that weren't clear, then almost a month for the actual PCB layout, with much of the time being sucked up learning a tool that was brand new to me, and also fairly obtuse.

By mid-June I had a PCB which should (but, spoiler, wouldn't) work. However, as mentioned, the TI chip is a 96-ball 3x3mm BGA, something I had no hope of manually placing for reflow, and of course no hope of hand soldering, so I would need to get these manufactured commercially. Luckily there are several options for small scale assembly at very reasonable prices, and I decided to try a new company still (at the time of ordering) in closed trials, PCB.NG. They have a nice simple procedure to upload board files, and a slightly custom pick & place file that includes references to the exact component I want by Digikey part number. Best of all, the pricing was completely reasonable, with a first test run of six boards only costing me US$30 each.

Late in June I received a mail from PCB.NG telling me that they'd built my boards, but that I had made a mistake with the footprint I'd used for the USB-C connector, and that they were posting my boards along with the connectors. As I'd had them ship the order to California (at the time they didn't seem to offer international shipping) it took a while for them to arrive in Sydney, courtesy of a coworker.

I tried to modify a connector by removing all the through-hole board locks, keeping just the surface mount pins, but I was unsuccessful, and that's where the project stalled until mid-October, when I was in California myself and was able to get help from a coworker who can perform miracles of surface mount soldering (while they were working on my board they were also dead-bug mounting a BGA). Sadly, while I now had a board I could test, it simply dropped off my priority list for months.

At the start of January another of my colleagues (a US-based teammate of the London rabble-rousers) asked for a status update, which prompted me to get off my butt and perform the testing. The next day I added some reinforcement to the connector, which was only really held on by the surface mount pins and highly likely to rip off the board, by covering it in epoxy. Then I knocked up some USB A plug/socket to bare-wire test adapters using some stuff from the junk bin we have at the office maker space for just this sort of occasion (the socket was actually a front panel USB port from an old IBM x-series server). With some trepidation I plugged the board into my newly built & tested adapter, and powered the board from a lab supply set to limit current in case I had any shorts on the board. It all came up straight away, and even lit the LEDs I'd added for some user feedback.

Next was to load a firmware for the chip. I'd previously used TI's tool to create a firmware image, and after some messing around with the SPI flash programmer I'd purchased I managed to get the board programmed. However the behaviour of the board didn't change with (what I thought was) real firmware. I used an oscilloscope to verify the flash was being read, and a Twinkie to sniff the PD negotiation, which confirmed that no request for 20v was being sent. This was where I finished that day.

Over the weekend that followed I dug into what I'd seen and determined that either I'd killed the SPI MISO port (the programmer I used was 5v, not 3.3v), or I just had bad firmware and the chip had some good defaults. I created a new firmware image from scratch, and loaded that.

Sure enough it worked first try. Once I confirmed 20v was coming from the output ports I attached it to my recently acquired HP 6051A DC load where it happily sank 45W for a while, then I attached the cable part of a Lenovo barrel to slim adapter and plugged it into my X1 where it started charging right away.

At linux.conf.au last week I gave (part of) a hardware miniconf talk about USB-C & USB-PD, which open source hardware folk might be interested in. Over the last few days while visiting my dad down in Gippsland I made the edits to fix the footprint and sent a new rev to the manufacturer for some new experiments.

Of course, at CES Lenovo announced that this year's ThinkPads would feature USB-C ports and allow charging through them, and due to laziness I never got around to replacing my T430, so I'm planning to order a T470 as soon as they're available, making my adapter obsolete.





Rough timeline:
  • April 21st 2016, decide to start working on the project
  • April 28th, devkits arrive
  • May 8th, schematic largely complete, work starts on PCB layout
  • June 14th, order sent to CM
  • ~July 6th, CM ships order to me (to California, then hand carried to me by a coworker)
  • early August, boards arrive from California
  • Somewhere here I try, and fail, to reflow a modified connector onto a board
  • October 13th, California coworker helps to (successfully) reflow a USB-C connector onto a board for testing
  • January 6th 2017, finally got around to reinforce the connector with epoxy and started testing, try loading firmware but no dice
  • January 10th, redo firmware, it works, test on DC load, then modify a ThinkPad-slim adapter and test on a real ThinkPad
  • January 25/26th, fixed USB-C connector footprint, made one more minor tweak, sent order for rev2 to CM, then some back & forth over some tolerance issues they're now stricter on.


1: There's a previous variant of USB-PD that works on the older A/B connector, but, as far as I'm aware, was never implemented in any notable products.

Just the right amount of data

Seth Godin - Sat 28th Jan 2017 20:01
The digital sign at the train station near my home could show me what time it is. It could tell us how many more minutes until the next train. Or it could announce if the train was running on time... Seth Godin

Michael Still: Recording performance information from short lived processes with prometheus

Planet Linux Australia - Sat 28th Jan 2017 17:01
Now that I'm recording basic statistics about the behavior of my machines, I now want to start tracking some statistics from various scripts I have lying around in cron jobs. In order to make myself sound smarter, I'm going to call these short lived scripts "ephemeral scripts" throughout this document. You're welcome.

The promethean way of doing this is to have a relay process. Prometheus really wants to know where to find web servers to learn things from, and my ephemeral scripts are both not permanently around and also not running web servers. Luckily, prometheus has a thing called the pushgateway which is designed to handle this situation. I can run just one of these, and then have all my little scripts just tell it things to add to its metrics. Then prometheus regularly scrapes this one process and learns things about those scripts. It's like a game of Telephone, but for processes.

First off, let's get the pushgateway running. This is basically the same as the node_exporter from last time:

$ wget https://github.com/prometheus/pushgateway/releases/download/v0.3.1/pushgateway-0.3.1.linux-386.tar.gz
$ tar xvzf pushgateway-0.3.1.linux-386.tar.gz
$ cd pushgateway-0.3.1.linux-386
$ ./pushgateway

Let's assume once again that we're all adults and did something nicer than that involving configuration management and init scripts.

The pushgateway implements a relatively simple HTTP protocol to add values to the metrics that it reports. Note that the values won't change once set until you change them again; they're not garbage collected or aged out or anything fancy. Here's a trivial example of adding a value to the pushgateway:

echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job

This is stolen straight from the pushgateway README of course. The above command will have the pushgateway start to report a metric called "some_metric" with the value "3.14", for a job called "some_job". In other words, we'll get this in the pushgateway metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14

You can see that this isn't perfect because the metric is untyped (what types exist? we haven't covered that yet!), and has these confusing instance and job labels. One tangent at a time, so let's explain instances and jobs first.

On jobs and instances

Prometheus is built for a universe a little bit unlike my home lab. Specifically, it expects there to be groups of processes doing a thing instead of just one. This is especially true because it doesn't really expect things like the pushgateway to be proxying your metrics for you because there is an assumption that every process will be running its own metrics server. This leads to some warts, which I'll explain in a second. Let's start by explaining jobs and instances.

For a moment, assume that we're running the world's most popular wordpress site. The basic architecture for our site is web frontends which run wordpress, and database servers which store the content that wordpress is going to render. When we first started our site it was all easy, as they could both be on the same machine or cloud instance. As we grew, we were first forced to split apart the frontend and the database into separate instances, and then forced to scale those two independently -- perhaps we have reasonable database performance so we ended up with more web frontends than we did database servers.

So, we go from something like this:



To an architecture which looks a bit like this:



Now, in prometheus (i.e. google) terms, there are three jobs here. We have web frontends, database masters (the top one which is getting all the writes), and database slaves (the bottom one which everyone is reading from). For one of the jobs, the frontends, there is more than one instance of the job. To put that into pictures:



So, the topmost frontend job would be job="fe" and instance="0". Google also had a cool way to lookup jobs and instances via DNS, but that's a story for another day.

To harp on a point here, all of these processes would be running a web server exporting metrics in google land -- that means that prometheus would know that it's monitoring a frontend job because it would be listed in the configuration file as such. You can see this in the configuration file from the previous post. Here's the relevant snippet again:

- job_name: 'node'
  static_configs:
    - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']

The job "node" runs on three targets (instances), named "molokai:9100", "dell:9100", and "eeebox:9100".

However, we live in the ghetto for these ephemeral scripts and want to use the pushgateway for more than one such script, so we have to tell lies via the pushgateway. So for my simple ephemeral script, we'll tell the pushgateway that the job is the script name and the instance can be an empty string. If we don't do that, then prometheus will think that the metric relates to the pushgateway process itself, instead of the ephemeral process.

We tell the pushgateway what job and instance to use like this:

echo "some_metric 3.14" | curl --data-binary @- http://localhost:9091/metrics/job/frontend/instance/0

Now we'll get this at the metrics URL:

# TYPE some_metric untyped
some_metric{instance="",job="some_job"} 3.14
some_metric{instance="0",job="frontend"} 3.14

The first metric there is from our previous attempt (remember when I said that values are never cleared out?), and the second one is from our second attempt. To clear out values you'll need to restart the pushgateway process. For simple ephemeral scripts, I think it's ok to leave the instance empty and just set a job name -- as long as that job name is globally unique.

We also need to tell prometheus to believe our lies about the job and instance for things reported by the pushgateway. The scrape configuration for the pushgateway therefore ends up looking like this:

- job_name: 'pushgateway' honor_labels: true static_configs: - targets: ['molokai:9091']

Note the honor_labels there, that's the believing the lies bit.

There is one thing to remember here before we can move on: job names are blindly trusted from our reporting, so it's now up to us to keep them unique. If we export a metric on every machine, we might want to keep the job name specific to the machine. That said, it really depends on what you're trying to do -- so just pay attention when picking job and instance names.
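To make that concrete, here is a minimal sketch of deriving a per-machine job name before pushing. The script name disk_report is hypothetical, not something from this post, and I've used uname -n in place of the hostname command:

```shell
#!/bin/sh
# Embed the short hostname in the job name so the same cron script
# running on two machines doesn't clobber the other's metrics.
# "disk_report" is a hypothetical script name used for illustration.
host=$(uname -n | cut -f 1 -d ".")
job="disk_report_${host}"
# The push would then target /metrics/job/$job on the pushgateway:
echo "http://localhost:9091/metrics/job/${job}"
```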

On metric types

Prometheus supports a few different types for the metrics which are exported. We'll discuss the first two now, and cover the third in more detail later. The types are:

  • Gauge: a value which goes up and down over time, like the fuel gauge in your car. Non-motoring examples would include the amount of free disk space on a given partition, the amount of CPU in use, and so forth.
  • Counter: a value which always increases. This might be something like the number of bytes sent by a network card -- the value only resets when the network card is reset (probably by a reboot). These only-increasing types are valuable because it's easier to do maths on them in the monitoring system.
  • Histograms: a set of values broken into buckets. For example, the response time for a given web page would probably be reported as a histogram. We'll discuss histograms in more detail in a later post.


I don't really want to dig too deeply into the value types right now, apart from explaining that our previous examples haven't specified a type for the metrics being provided, and that this is undesirable. For now we just need to decide if the value goes up and down (a gauge) or just up (a counter). You can read more about prometheus types at https://prometheus.io/docs/concepts/metric_types/ if you want to.
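As a toy illustration of why only-increasing counters are easy to do maths on (the numbers here are invented for the example): given two samples of a counter taken fifteen seconds apart, the rate is just the increase divided by the interval, which is essentially what prometheus's rate() function computes for you.

```shell
#!/bin/sh
# Two made-up samples of a counter (say, bytes sent), 15 seconds apart.
bytes_t0=1000
bytes_t1=4000
interval=15
# Rate = increase / interval; integer shell arithmetic is fine here.
rate=$(( (bytes_t1 - bytes_t0) / interval ))
echo "${rate} bytes/second" # prints: 200 bytes/second
```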

A typed example

So now we can go back and do the same thing as before, but we can do it with typing like adults would. Let's assume that the value of pi is a gauge, and goes up and down depending on the vagaries of space time. Let's also show that we can add a second metric at the same time because we're fancy like that. We'd therefore need to end up doing something like (again heavily based on the contents of the README):

cat <<EOF | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/frontend/instance/0
# TYPE some_metric gauge
# HELP some_metric Approximate value of pi in the current space time continuum.
some_metric 3.14
# TYPE another_metric counter
# HELP another_metric Just an example.
another_metric 2398
EOF

And we'd end up with values like this in the pushgateway metrics URL:

# TYPE some_metric gauge
some_metric{instance="0",job="frontend"} 3.14
# HELP another_metric Just an example.
# TYPE another_metric counter
another_metric{instance="0",job="frontend"} 2398

A tangible example

So that's a lot of talking. Let's deploy this in my home lab for something actually useful. The node_exporter does not report any SMART health details for disks, and that's probably a thing I'd want to alert on. So I wrote this simple script:

#!/bin/bash

hostname=`hostname | cut -f 1 -d "."`

for disk in /dev/sd[a-z]
do
  disk=`basename $disk`

  # Is this a USB thumb drive?
  if [ `/usr/sbin/smartctl -H /dev/$disk | grep -c "Unknown USB bridge"` -gt 0 ]
  then
    result=1
  else
    result=`/usr/sbin/smartctl -H /dev/$disk | grep -c "overall-health self-assessment test result: PASSED"`
  fi

  cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/$hostname/instance/$disk
# TYPE smart_health_passed gauge
# HELP smart_health_passed Whether or not a disk passed a "smartctl -H /dev/sdX"
smart_health_passed $result
EOF
done

Now, that's not perfect and I am sure that I'll re-write this in python later, but it is actually quite useful already. It will report if a SMART health check failed, and now I could write an alerting rule which looks for disks with a health value of 0 and send myself an email to go to the hard disk shop. Once your pushgateways are being scraped by prometheus, you'll end up with something like this in the console:



I'll explain how to turn this into alerting later.
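As a preview, and heavily hedged: in the prometheus 1.x rule-file syntax, such a rule might look something like this untested sketch, based on the metric name from the script above:

```
ALERT SmartHealthFailed
  IF smart_health_passed == 0
  FOR 10m
  ANNOTATIONS {
    summary = "SMART health check failing on {{ $labels.instance }}",
  }
```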

Tags for this post: prometheus monitoring ephemeral_script pushgateway
Related posts: A pythonic example of recording metrics about ephemeral scripts with prometheus; Basic prometheus setup; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World


Shared reality, diverse opinions

Seth Godin - Fri 27th Jan 2017 20:01
We're not having a lot of trouble with the "diverse opinions" part. But they're worthless without shared reality. At a chess tournament, when the newcomer tries to move his rook diagonally, it's not permitted. "Hey, that's just your opinion," is... Seth Godin

Michael Still: Basic prometheus setup

Planet Linux Australia - Fri 27th Jan 2017 17:01
I've been playing with prometheus for monitoring. It feels quite familiar to me because it's based on an internal google technology called borgmon, but I suspect that means it feels really weird to everyone else.

The first thing to realize is that everything at google is a web server. Your short lived tool that copies some files around probably runs a web server. All of these web servers have built in URLs which report the progress and status of the task at hand. Prometheus is built to: scrape those web servers; aggregate the data; store the data into a time series database; and then perform dashboarding, trending and alerting on that data.

The most basic example is to just export metrics for each machine on my home network. This is the easiest first step, because we don't need to build any software to do this. First off, let's install node_exporter on each machine. node_exporter is the tool which runs a web server to export metrics for each node. Everything in prometheus land is written in go, which is new to me. However, it does make running node exporter easy -- just grab the relevant binary from https://prometheus.io/download/, untar, and run. Let's do it in a command line script example thing:

$ wget https://github.com/prometheus/node_exporter/releases/download/v0.14.0-rc.1/node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ tar xvzf node_exporter-0.14.0-rc.1.linux-386.tar.gz
$ cd node_exporter-0.14.0-rc.1.linux-386
$ ./node_exporter

That's all it takes to run the node_exporter. This runs a web server at port 9100, which exposes the following metrics:

$ curl -s http://localhost:9100/metrics | grep filesystem_free | grep 'mountpoint="/data"'
node_filesystem_free{device="/dev/mapper/raidvg-srvlv",fstype="xfs",mountpoint="/data"} 6.811044864e+11

Here you can see that the system I'm running on is exporting a filesystem_free value for the filesystem mounted at /data. There's a lot more than that exported, and I'd encourage you to poke around at that URL a little before continuing on.

So that's lovely, but we really want to record that over time. So let's assume that you have one of those running on each of your machines, and that you have it set up to start on boot. I'll leave the details of that out of this post, but let's just say I used my existing puppet infrastructure.
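For anyone without existing configuration management, a minimal systemd unit along these lines is one way to get node_exporter starting on boot. The install path is an assumption, and this is a sketch rather than a tested unit:

```
[Unit]
Description=Prometheus node_exporter

[Service]
ExecStart=/usr/local/bin/node_exporter
Restart=always

[Install]
WantedBy=multi-user.target
```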

Now we need the central process which collects and records the values. That's the actual prometheus binary. Installation is again trivial:

$ wget https://github.com/prometheus/prometheus/releases/download/v1.5.0/prometheus-1.5.0.linux-386.tar.gz
$ tar xvzf prometheus-1.5.0.linux-386.tar.gz
$ cd prometheus-1.5.0.linux-386

Now we need to move some things around to install this nicely. I did the puppet equivalent of:

  • Moving the prometheus file to /usr/bin
  • Creating an /etc/prometheus directory and moving console_libraries and consoles into it
  • Creating a /etc/prometheus/prometheus.yml config file; more on the contents of this one in a second
  • And creating an empty data directory, in my case at /data/prometheus


The config file needs to list all of your machines. I am sure this could be generated with puppet templating or something like that, but for now here's my simple hard coded one:

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'stillhq'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['molokai:9090']

  - job_name: 'node'
    static_configs:
      - targets: ['molokai:9100', 'dell:9100', 'eeebox:9100']

Here you can see that I want to scrape each of my web servers which exports metrics every 15 seconds, and I also want to calculate values (such as firing alerts) every 15 seconds too. This might not scale if you have bajillions of processes or machines to monitor. I also label all of my values as coming from my domain, so that if I ever aggregate these values with another prometheus from somewhere else the origin will be clear.

The other interesting bit for now is the scrape configuration. This lists the metrics exporters to monitor. In this case it's prometheus itself (molokai:9090), and then each of my machines in the home lab (molokai, dell, and eeebox -- all on port 9100). Remember, port 9090 is the prometheus binary itself and port 9100 is that node_exporter binary we now have running on all of our machines.

Now if we start prometheus, it will do its thing. There is some configuration which needs to be passed on the command line here (instead of in the configuration file), so my command line looks like this:

/usr/bin/prometheus -config.file=/etc/prometheus/prometheus.yml \
    -web.console.libraries=/etc/prometheus/console_libraries \
    -web.console.templates=/etc/prometheus/consoles \
    -storage.local.path=/data/prometheus

Prometheus also presents an interactive user interface on port 9090, which is handy. Here's an example of it graphing the load average on each of my machines (it was something which caused a nice jaggy line):



You can see here that the user interface has a drop down for selecting values that are known, and that the key at the bottom tells you things about each time series in the graph. So for example, if we added {instance="eeebox:9100"} to the end of the value in the text box at the top, then we'd be filtering for values with that label set, and would as a result only show one value in the graph (the one for eeebox).
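For example, node_load1 is the name node_exporter gives the one-minute load average, so a filtered expression for just the eeebox would look something like:

```
node_load1{instance="eeebox:9100"}
```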

If you're interested in very simple dashboarding of basic system metrics, that's actually all you need to do. In my next post about prometheus I'm going to show how to write your own binary which exports values to be graphed. In my case, the temperature outside my house.

Tags for this post: prometheus monitoring node_exporter
Related posts: Recording performance information from short lived processes with prometheus; A pythonic example of recording metrics about ephemeral scripts with prometheus; Mona Lisa Overdrive; The Diamond Age ; Buying Time; The System of the World


Binh Nguyen: Life in the Philippines, Duterte Background, and More

Planet Linux Australia - Fri 27th Jan 2017 05:01
- interesting history. The Aeta people were the original inhabitants of the Philippines; now a minority group. Malaysian and Indonesian people settled there thereafter. Then Chinese and Muslims came. Finally, Spanish and US colonialism, and now relative independence... Two main wars changed its history: that between its colonisers (the US and Spain), and that between itself and the US, which helped them

Appropriate complexity and risk

Seth Godin - Thu 26th Jan 2017 20:01
The best time to experiment in the kitchen is if you don't have 11 guests coming for dinner in three hours. Or, at the very least, be sure to have some decent frozen pizzas on hand, just in case. We... Seth Godin

Lev Lafayette: Data Centre Preparation: A Summary

Planet Linux Australia - Thu 26th Jan 2017 19:01

When installing a new system (whether HPC, cloud, or just a bunch of servers, disks, etc), they must be housed. Certainly this can be without any specialist environment, especially if one is building a small test cluster; for example with half-a-dozen old but homogeneous systems, each connected with 100BASE-TX ethernet to a switch etc.

read more


Binh Nguyen: Does the Military-Industrial Complex Make Sense?, Random Thoughts, and More

Planet Linux Australia - Thu 26th Jan 2017 04:01
Given the importance the Military-Industrial Complex now plays in many economies, I thought it'd be worthwhile to take a look at whether it actually makes sense. - the military industrial complex doesn't work the way that a lot of people think. It's actually linked in to the way the financial system works and the way the world panned out after the World Wars. Since the US/West came out

Tridge on UAVs: ArduPilot on the CL84 TiltWing VTOL

Planet Linux Australia - Wed 25th Jan 2017 18:01

ArduPilot now supports TiltWing VTOL aircraft! First test flights done earlier today in Canberra

We did the first test flights of the Unique Models CL84 TiltWing VTOL with ArduPilot today. Support for tilt-wing aircraft is in the 3.7.1 stable release, with enhancements to support the tilt mechanism of the CL84 added for the upcoming 3.8.0 release.

This CL84 model operates as a tricopter in VTOL flight, and as a normal aileron/elevator plane in forward flight. The unusual thing from the point of view of ArduPilot is that the tilt mechanism uses a retract style servo, which means it can be commanded to be fully up or fully down, but you can't ask it to hold any angle in between. That makes for some interesting challenges in the VTOL transition code.

For the 3.8.0 release there is a new parameter Q_TILT_TYPE that controls whether the tilt mechanism for tiltrotors and tiltwings is a continuous servo or a binary (retract style) servo. In either case the Q_TILT_RATE parameter sets the rate at which the servo changes angle (in degrees/second).

This aircraft had previously been tested in hover by Greg Covey (see http://discuss.ardupilot.org/t/tiltrotor-support-for-plane/8805), but not with automatic transitions.

Many thanks to Grant, Peter, James and Jack from CanberraUAV for their assistance with testing the CL84 today, and for the videos and photos!


Gaming the System…and Winning

a list apart - Wed 25th Jan 2017 02:01

Good intentions usually drive the “gamification” of websites—adding points, badges, and leaderboards to make them more engaging. It sounds like a great idea, but borrowing game design elements out of context is a risky way to design experiences, especially experiences intended to bring users back to a site.

Not everyone wants to spend more time on your site just to get a badge; some people simply aren’t motivated by such extrinsic rewards. When game designers include elements intended to promote deeper engagement, they look at their users through a very different lens than those of us in web design. Understanding how and why they do this can help us craft more engaging web experiences.

A game designer and a web designer looking at the same user research will most likely come up with totally different personas. Talking about many people as though they were one person can be a strength; it gives a clear vision of who we need to serve. But it’s also a limitation because this method of clustering requires us to pick certain axes and disregard others. Web designers cluster people by their needs and abilities, but in doing so, we tend to disregard their personalities. Game designers are more likely to fixate on interaction styles; they embrace personality.

Let me show you what I mean. Pokémon Go has seen a meteoric rise in popularity since its release, possibly because of how well it caters to people with different personalities. In Pokémon Go, people who like to compete can “battle” in gyms; those who prefer to collaborate can go on “pokéwalks” together; and anyone with a drive to explore and push boundaries can try to “catch ’em all.” Even if a player enjoys all of these options, they’re likely to find one more motivating than the others. Ensuring that the game caters to each interaction preference enhances its appeal to a wide audience.

The same is true of web design. Adding gamification elements to draw in more visitors—but only elements that cater to competitive people—can overlook the wider audience. UX design that doesn’t target a variety of interaction styles is going to be hit-or-miss.

Game designers (and educators and psychologists) segment people in ways that complement web design and user experience. I’ve put together this guide to show you how they think and how (and where and why) we can use their models. Along the way, I’ll show how those models might overlap to create one bigger framework that we can use to pinpoint strengths and weaknesses in our designs.

Making it fun

Emotional design—the practice of moving interfaces beyond merely usable—is a growing field of interest among UX designers. (I highly recommend Designing for Emotion and Seductive Interaction Design.) Unsurprisingly, game designers have a theory of fun, too.

A recent study performed a contextual inquiry of hardcore and casual gamers, plus interviews with their friends and family members, identifying four types of fun (PDF) that relate to interaction preference.

  • Hard Fun comes from pursuing a goal—earning rewards according to progress.
  • Easy Fun focuses on something other than winning. It encourages learning and emphasizes feelings of wonder, awe, and mystery.
  • The People Factor caters to interaction with others; the interface becomes a mechanism for social engagement.
  • Altered States attract people to engage in a social context to feel something different (e.g., to gain pleasure from acting upon or inciting those around them).

These styles are easy to spot on social networking sites:

  • Competing for likes or followers (as on Twitter) is Hard Fun.
  • Casually browsing to find humorous new posts (as on Tumblr) is Easy Fun.
  • Engaging with a social network to connect with other people (as on Facebook) is a People Factor.
  • People who experience excitement from messing with other people (as on Reddit) are enjoying Altered States.
Bartle’s four player types

There are numerous ways of grouping people by interaction style, including Hallford and Hallford’s six categories of player behavior and Yee’s statistical clustering of three player types (PDF), but I find that these approaches roughly map to Richard Bartle’s four player types (particularly the three non-Troll ones). Bartle’s theory is among the most common approaches chosen by game designers (Fig 1).

Fig 1: Bartle’s player types are one common way that game designers segment player experiences.

Suppose you have an ecommerce site for people to buy snakes. An Achiever may be motivated to engage by extrinsic factors: How many snakes can she collect? How many reviews of snakes can she write? The Explorer may be motivated by learning more about snakes: What’s the length specification on the new python? Where can she read more about novel uses of snakeskin? The Socializer will be motivated by interactions with others: What types of snakes are her friends buying? Is there a discussion forum where she can connect with others over her love of snakes? And you’ll probably have to deal with a couple of teenaged Trolls who just want to make fun of everyone else’s snake-buying experience.

Bartle also examines pairwise interactions between player types. This leads him to the following sorts of conclusions:

  • The ratio of Achievers to Trolls is like the ratio of rabbits to wolves. When there are too many Trolls, the Achievers leave, but without interesting victims, the Trolls leave and the Achievers return.
  • Providing opportunities for exploration is key. When there aren’t enough Explorers finding new things and telling others about them, the Achievers eventually get bored and leave, causing the Socializers and Trolls to leave, as well.
  • Trolls keep the Socializer population in check. To reach the widest audience possible, a few Trolls are necessary because they keep the Socializer population from expanding exponentially and pushing out the Achievers and Explorers. (The only other stable equilibria involve fewer player types: either a balance of just Trolls and Achievers or else a community almost exclusively composed of Socializers.)
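The rabbits-and-wolves dynamic in the first point can be made concrete with a toy predator-prey simulation (the coefficients below are invented for illustration and are not from Bartle’s work):

```python
# Toy predator-prey sketch of the Achiever/Troll dynamic described above
# (hypothetical rates, purely illustrative). Achievers grow when Trolls are
# scarce; Trolls grow when there are Achievers to prey on, and starve otherwise.

def step(achievers, trolls, dt=0.01):
    # Lotka-Volterra-style forward-Euler update; coefficients are invented
    da = achievers * (0.8 - 0.04 * trolls)       # Achievers leave as Trolls rise
    dtrolls = trolls * (0.02 * achievers - 0.6)  # Trolls starve without victims
    return achievers + da * dt, trolls + dtrolls * dt

a, t = 50.0, 10.0
history = []
for _ in range(2000):
    a, t = step(a, t)
    history.append((a, t))
```

Run for long enough, the populations cycle rather than settle: Troll peaks are followed by Achiever crashes, which starve the Trolls and let the Achievers return.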
Motivating people

Another way of segmenting people is by identifying what motivates them. At a recent conference, I saw a presentation of a study of a mobile fitness coach. The presenters said that people’s motivations for exercise tend to cluster into distinct categories. Bartle himself didn’t focus much on motivation, so it was fascinating to see how closely the presenters’ categories align with Bartle’s: one extrinsically motivated, one self-motivated, one socially motivated.

In addition to the two dimensions considered by Bartle (Players versus World and Acting versus Interacting), a third axis should be taken into account: the players’ motivations for choosing to engage. Unlike the binary nature of acting OR interacting with the player OR the world, motivations fall along a spectrum of intrinsic and extrinsic motivation (Fig 2).

Fig 2: Each player type falls on a spectrum from intrinsic to extrinsic motivation.

Ryan and Deci’s Self Determination Theory (PDF) is a framework illustrating how player types vary according to source of motivation (Fig 3).

Fig 3: Ryan and Deci represent levels of motivation.
(adapted from Ryan & Deci, 2000, p. 61)

How we can motivate people to engage with a site or app depends on where their interaction style falls along this spectrum. What player type we cater to also determines how likely people are to stick around.

  • We can motivate Achievers (or the Achiever side of people) by adding extrinsic metrics they can use to compete with others. Being extrinsically motivated, Achievers will eventually grow bored and leave. Gamification techniques like points, badges, or leaderboards motivate Achievers extrinsically but can also give them a way to measure their growing competence (encouraging intrinsic motivation).
  • Explorers set their own goals and are self-motivated to interact with your website. Because they’re intrinsically motivated by enjoyment of the task itself, they tend to stick around longer. (Bartle says that changes in the numbers of other player types don’t usually impact the number of Explorers.) To promote long-term engagement of Explorers, it helps to introduce unexpected achievements. Hinting that there are some Easter eggs hidden in your site could be a great way to encourage Explorers to stick around.
  • Socializers will stick around as long as there’s enough social interaction for them. We can motivate them by adding ways to add their own thoughts to site content, such as comments on an epublishing site or reviews on an ecommerce site. If they don’t get enough external feedback from other people, they’ll leave. Adding a way to “Like” reviews or respond to other users’ comments can help to provide this feedback.
  • Trolls set their own goal of annoying other users, but they require feedback from others to know that they’re actually being annoying. The expression “Don’t feed the Trolls” means removing this feedback loop, which we can do with moderation tools and community posting guidelines.

How you choose to present rewards will encourage people with different preferred interaction styles to engage with your product. Conventional cognitive psychology wisdom suggests that irregular rewards at irregular intervals motivate continued engagement most effectively. (Expected rewards can be problematic because if people know that a reward is coming, they may work for the reward rather than focusing on the task itself.) Yet, having a variety of expected rewards can help people set goals, thus promoting Achievers.

Helping people learn to use your site

When we make fairly complicated sites, people may cycle through different player types as they learn to engage with them.

To see what I mean, consider the Experiential Learning Cycle of Kurt Lewin, which contains four stages (Fig 4).

Fig 4: Lewin shows how we learn.

Viewing this cycle through Bartle’s player types, it’s possible to align player characteristics with aspects of Lewin’s cycle: Achievers concretely experience the world, Explorers reflect upon what they see, Socializers abstract the world into a context they can discuss, and Trolls premeditate their approach to interactions. This mapping also works with Bartle’s axes (Fig 5).

Fig 5: Lewin’s learning model can be overlaid upon Bartle’s player types.

For example, someone coming to a large content site focused on do-it-yourself projects might initially be motivated to engage by their own experience with home improvement. Once they’ve engaged regularly with the site for a time, they might move to a more reflective mode, exploring more of the site. If they really get interested, they might join in the discussion forum, discussing projects with others, and this discussion will give them new ideas for projects they can create in their own life. Application of the ideas to the real world moves back to the Achiever quadrant, but may impact people in that world as well (the Troll quadrant). Thus, if we want people to engage deeply with a site or app over time (learning different aspects of it) it helps to support the many different interaction styles that people may use to engage while learning.

Making it engaging

Of course, when we make websites, we often talk about supporting immersion through an experience of flow. Game designers again bring light to this form of interaction. Salen and Zimmerman, authors of the seminal textbook on game design, note that four parts of Mihaly Csikszentmihalyi’s flow model are necessary for flow to occur:

  • Challenge
  • Goals
  • Feedback
  • Control

All four player types need all four of these to engage fully, yet each prerequisite of flow might be aligned with a player type and learning stage (Fig 6). 

Fig 6: Csikszentmihalyi’s four prerequisites of flow can be mapped to the same axes as Bartle’s player types.

  • Achievers seek challenge. (They don’t set their own goals as much as Explorers, but will tackle whatever extrinsic challenge you put in front of them.) Challenge is actively experienced.
  • Explorers set their own goals to create challenge for themselves. These goals project into the future with reflection upon observation.
  • All players need feedback from the game, yet Socializers thrive upon feedback from other players.
  • Trolls seek control. (To prevent trolls, building in some form of moderation system is a way of taking away their control.)

For example, a photo-sharing site might create an immersive experience by challenging people to upload a certain number of photos, letting them set their own goals for organizing content, allowing other people to give feedback about the quality of the photos or collections, and giving everyone a comfortable sense of control over their own photos and collections. Introducing artificial difficulties, like saying “Try only uploading black and white photos this week” can make the experience more game-like, presenting a challenge that people can choose to use as their own goal or not.

Bringing it all together

Here is a fully overlaid model based on my own interpretation of how they’re related (Fig 7).

Fig 7: All of the models can be overlaid upon each other to form a diagnostic tool.
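As a sketch of how this overlay might be used in practice, the quadrant alignments described in this article can be written down as a small lookup table (the code structure and function names are my own, hypothetical):

```python
# Hypothetical sketch of the overlaid "diagnostic tool" as a lookup table.
# The quadrant alignments follow the article; the code itself is invented.

QUADRANTS = {
    "Achiever":   {"fun": "Hard Fun",      "learning_stage": "Concrete Experience",
                   "flow_prerequisite": "Challenge"},
    "Explorer":   {"fun": "Easy Fun",      "learning_stage": "Reflective Observation",
                   "flow_prerequisite": "Goals"},
    "Socializer": {"fun": "People Factor", "learning_stage": "Abstract Conceptualization",
                   "flow_prerequisite": "Feedback"},
    "Troll":      {"fun": "Altered States", "learning_stage": "Premeditated (active) experimentation",
                   "flow_prerequisite": "Control"},
}

def diagnose(supported_player_types):
    """Return the flow prerequisites a design leaves unserved."""
    missing = set(QUADRANTS) - set(supported_player_types)
    return sorted(QUADRANTS[q]["flow_prerequisite"] for q in missing)

# A site offering only points and leaderboards mostly serves Achievers:
# diagnose(["Achiever"]) -> ["Control", "Feedback", "Goals"]
```

A table like this makes the gaps visible at a glance: each quadrant a design ignores corresponds to a type of fun, a learning stage, and a flow prerequisite that go unserved.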

This model might be expanded further using other four-part models from game design such as Hallford and Hallford’s four reward types or Caillois’s four game types. To examine how people progress through the learning stages, we might overlay David Kolb’s Experiential Learning Cycle (which is built upon Lewin’s model for learning).

I haven’t tested all the variables here, but I hope that this grouped model can be a useful diagnostic tool, as I’ll show with the following three examples. (If you encounter evidence supporting or refuting this way of overlaying the models in your own work, please leave a comment at the end of the article explaining what you’ve found.)

The model suggests that if a new feature does not succeed right away, it may largely be due to the interaction styles of the people engaging with the site or app. For example, on a start-up site I worked on, the most engaged users were the ones who had created their own reviews. When we added a discussion forum, it initially didn’t get a lot of use. One possible diagnosis of this slow beginning is that most of the people we’d attracted were Achievers, who learned through their own experiences and enjoyed the challenging hard fun of seeing who could write the most reviews. We dialed back the discussion experience to focus the site more directly on these Achievers. Another approach might have been to add a competitive element to the discussion itself; you’ve probably seen forums that rank their participants, which is a way of luring Achievers to be more social (and thus creating a more diverse community for Socializers).

The Troll quadrant is certainly the one least directly mapped to by other models. One possible explanation is that we try not to design for Trolls. An explanation I consider more likely is that in a socially moderated setting, there’s little opportunity for Trolls to exist. If your site is having difficulty with Trolls, the model suggests that making it harder for people to act upon other people can mitigate the problem. Moderation tools can take away the control that Trolls require, making it difficult for them to create altered emotional states.

Bartle found that Explorers are rare and difficult to attract, but necessary for the long-term survival of a site. They’re the ones who try out new features to give the rest of the community things to do and discuss. One conventional approach to trying to increase site engagement is to add “gamified” elements, such as points, badges, and leaderboards. But such an approach may actually harm long-term engagement because such extrinsic motivators are unappealing to the Explorers who help the community evolve. Looking at what interaction types a site appeals to and then adding elements to appeal to others can help the site to grow a well-rounded, sustainable audience.

There’s much more that can be brought from games to make the web fun and meaningful. I hope that a greater understanding of how game designers make experiences engaging can empower web designers to craft a more intriguing and inviting web for a wide variety of people.

Categories: thinktime

Gaming the System&#8230;and Winning

a list apart - Wed 25th Jan 2017 02:01

Good intentions usually drive the “gamification” of websites—adding points, badges, and leaderboards to make them more engaging. It sounds like a great idea, but borrowing game design elements out of context is a risky way to design experiences, especially experiences intended to bring users back to a site.

Not everyone wants to spend more time on your site just to get a badge; some people simply aren’t motivated by such extrinsic rewards. When game designers include elements intended to promote deeper engagement, they look at their users through a very different lens than those of us in web design. Understanding how and why they do this can help us craft more engaging web experiences.

A game designer and a web designer looking at the same user research will most likely come up with totally different personas. Talking about many people as though they were one person can be a strength;it gives a clear vision of who we need to serve. But it’s also a limitation because this method of clustering requires us to pick certain axes and disregard others. Web designers cluster people by their needs and abilities, but in doing so, we tend to disregard their personalities. Game designers are more likely to fixate on interaction styles; they embrace personality.

Let me show you what I mean. Pokémon Go has seen a meteoric rise in popularity since its release, possibly because of how well it caters to people with different personalities. In Pokémon Go, people who like to compete can “battle” in gyms; those who prefer to collaborate can go on “pokéwalks” together; and anyone with a drive to explore and push boundaries can try to “catch ’em all.” Even if a player enjoys all of these options, they’re likely to find one more motivating than the others. Ensuring that the game caters to each interaction preference enhances its appeal to a wide audience.

The same is true of web design. Adding gamification elements to draw in more visitors—but only elements that cater to competitive people—can overlook the wider audience. UX design that doesn’t target a variety of interaction styles is going to be hit-or-miss.

Game designers (and educators and psychologists) segment people in ways that complement web design and user experience. I’ve put together this guide to show you how they think and how (and where and why) we can use their models. Along the way, I’ll show how those models might overlap to create one bigger framework that we can use to pinpoint strengths and weaknesses in our designs.

Making it fun

Emotional design—the practice of moving interfaces beyond merely usable—is a growing field of interest among UX designers. (I highly recommend Designing for Emotion and Seductive Interaction Design.) Unsurprisingly, game designers have a theory of fun, too.

A recent study performed a contextual inquiry of hardcore and casual gamers, plus interviews with their friends and family members, identifying four types of fun (PDF) that relate to interaction preference.

  • Hard Fun comes from pursuing a goal—earning rewards according to progress.
  • Easy Fun focuses on something other than winning. It encourages learning and emphasizes feelings of wonder, awe, and mystery.
  • The People Factor caters to interaction with others; the interface becomes a mechanism for social engagement.
  • Altered States attract people to engage in a social context to feel something different (e.g., to gain pleasure from acting upon or inciting those around them).

These styles are easy to spot on social networking sites:

  • Competing for likes or followers (as on Twitter) is Hard Fun.
  • Casually browsing to find humorous new posts (as on Tumblr) is Easy Fun.
  • Engaging with a social network to connect with other people (as on Facebook) is a People Factor.
  • People who experience excitement from messing with other people (as on Reddit) are enjoying Altered States.
Bartle’s four player types

There are numerous ways of grouping people by interaction style, including Hallford and Hallford’s six categories of player behavior and Yee’s statistical clustering of three player types (PDF), but I find that these approaches roughly map to Richard Bartle’s 4 player types (particularly the three non-Troll ones). Bartle’s theory is among the most common approaches chosen by game designers (Fig 1).

Fig 1: Bartle’s player types are one common way that game designers segment player experiences.

Suppose you have an ecommerce site for people to buy snakes. An Achiever may be motivated to engage by extrinsic factors: How many snakes can she collect? How many reviews of snakes can she write? The Explorer may be motivated by learning more about snakes: What’s the length specification on the new python? Where can she read more about novel uses of snakeskin? The Socializer will be motivated by interactions with others: What types of snakes are her friends buying? Is there a discussion forum where she can connect with others over her love of snakes? And you’ll probably have to deal with a couple of teenaged Trolls who just want to make fun of everyone else’s snake-buying experience.

Bartle also examines pairwise interactions between player types. This leads him to the following sorts of conclusions:

  • The ratio of Achievers to Trolls is like the ratio of rabbits to wolves. When there are too many Trolls, the Achievers leave, but without interesting victims, the Trolls leave and the Achievers return.
  • Providing opportunities for exploration is key. When there aren’t enough Explorers finding new things and telling others about them, the Achievers eventually get bored and leave, causing the Socializers and Trolls to leave, as well.
  • Trolls keep the Socializer population in check. To reach the widest audience possible, a few Trolls are necessary because they keep the Socializer population from expanding exponentially and pushing out the Achievers and Explorers. (The only other stable equilibria involve fewer player types: either a balance of just Trolls and Achievers or else a community almost exclusively composed of Socializers.)
Motivating people

Another way of segmenting people is by identifying what motivates them. At a recent conference, I saw a presentation of a study of a mobile fitness coach. The presenters said that people’s motivations for exercise tend to cluster into distinct categories. Bartle himself didn’t focus much on motivation, so it was fascinating to see how closely the presenters’ categories align with Bartle’s: one extrinsically motivated, one self-motivated, one socially motivated.

In addition to the two dimensions considered by Bartle (Players versus World and Acting versus Interacting), a third axis should be taken into account: the players’ motivations for choosing to engage. Unlike the binary nature of acting OR interacting with the player OR the world, motivations fall along a spectrum of intrinsic and extrinsic motivation (Fig 2).

Fig 2: Each player type falls on a spectrum from intrinsic to extrinsic motivation.

Ryan and Deci’s Self Determination Theory (PDF) is a framework illustrating how player types vary according to source of motivation (Fig 3).

Fig 3: Ryan and Deci represent levels of motivation.
(adapted from Ryan & Deci, 2000, p. 61)

How we can motivate people to engage with a site or app depends on where their interaction style falls along this spectrum. What player type we cater to also determines how likely people are to stick around.

  • We can motivate Achievers (or the Achiever side of people) by adding extrinsic metrics they can use to compete with others. Achievers, being extrinsically-motivated, will eventually grow bored and leave. Gamification techniques like points, badges, or leaderboards motivate Achievers extrinsically but can also give them a way to measure their growing competence (encouraging intrinsic motivation).
  • Explorers set their own goals and are self-motivated to interact with your website. Because they’re intrinsically motivated by enjoyment of the task itself, they tend to stick around longer. (Bartle says that changes in numbers of other player types doesn’t usually impact the number of Explorers.) To promote long-term engagement of Explorers, it helps to introduce unexpected achievements. Hinting that there are some Easter Eggs hidden in your site could be a great way to encourage explorers to stick around.
  • Socializers will stick around as long as there’s enough social interaction for them. We can motivate them by adding ways to add their own thoughts to site content, such as comments on an epublishing site or reviews on an ecommerce site. If they don’t get enough external feedback from other people, they’ll leave. Adding a way to “Like” reviews or respond to other users’ comments can help to provide this feedback.
  • Trolls set their own goal of annoying other users, but they require feedback from others to know that they’re actually being annoying. The expression “Don’t feed the Trolls” means removing this feedback loop, which we can do with moderation tools and community posting guidelines.

How you choose to present rewards will encourage people with different preferred interaction styles to engage with your product. Conventional cognitive psychology wisdom suggests that irregular rewards at irregular intervals motivate continued engagement most effectively. (Expected rewards can be problematic because if people know that a reward is coming, they may work for the reward rather than focusing on the task itself.) Yet, having a variety of expected rewards can help people set goals, thus promoting Achievers.

Helping people learn to use your site

When we make fairly complicated sites, people may cycle through different player types as they learn to engage with them.

To see what I mean, consider the Experiential Learning Cycle of Kurt Lewin, which contains four stages (Fig 4).

Fig 4: Lewin shows how we learn.

Viewing this cycle through Bartle’s player types, it’s possible to align player characteristics with aspects of Lewin’s cycle: Achievers concretely experience the world, Explorers reflect upon what they see, Socializers abstract the world into a context they can discuss, and Trolls premeditate their approach to interactions. This mapping also works with Bartle’s axes (Fig 5).

Fig 5: Lewin’s learning model can be overlaid upon Bartle’s player types.

For example, someone coming to a large content site focused on do-it-yourself projects might initially be motivated to engage by their own experience with home improvement. Once they’ve engaged regularly with the site for a time, they might move to a more reflective mode, exploring more of the site. If they really get interested, they might join in the discussion forum, discussing projects with others, and this discussion will give them new ideas for projects they can create in their own life. Application of the ideas to the real world moves back to the Achiever quadrant, but may impact people in that world as well (the Troll quadrant). Thus, if we want people to engage deeply with a site or app over time (learning different aspects of it) it helps to support the many different interaction styles that people may use to engage while learning.

Making it engaging

Of course, when we make websites, we often talk about supporting immersion through an experience of flow. Game designers again bring light to this form of interaction. Salen and Zimmerman, authors of the seminal textbook on game design, note that four parts of Mihaly Csikszentmihalyi’s flow model are necessary for flow to occur:

  • Challenge
  • Goals
  • Feedback
  • Control

All four player types need all four of these to engage fully, yet each prerequisite of flow might be aligned with a player type and learning stage (Fig 6). 

Fig 6: Csikszentmihalyi’s four prerequisites of flow can be mapped to the same axes as Bartle’s player types.
  • Achievers seek challenge. (They don’t set their own goals as much as Explorers, but will tackle whatever extrinsic challenge you put in front of them.) Challenge is actively experienced.
  • Explorers set their own goals to create challenge for themselves. These goals project into the future with reflection upon observation.
  • All players need feedback from the game, yet Socializers thrive upon feedback from other players.
  • Trolls seek control. (To prevent trolls, building in some form of moderation system is a way of taking away their control.)

For example, a photo-sharing site might create an immersive experience by challenging people to upload a certain number of photos, letting them set their own goals for organizing content, allowing other people to give feedback about the quality of the photos or collections, and giving everyone a comfortable sense of control over their own photos and collections. Introducing artificial difficulties, like saying “Try only uploading black and white photos this week” can make the experience more game-like, presenting a challenge that people can choose to use as their own goal or not.

Bringing it all together

Here is a fully overlaid model based on my own interpretation of how they’re related (Fig 7).

Fig 7: All of the models can be overlaid upon each other to form a diagnostic tool.


Gaming the System…and Winning

a list apart - Wed 25th Jan 2017 02:01

Good intentions usually drive the “gamification” of websites—adding points, badges, and leaderboards to make them more engaging. It sounds like a great idea, but borrowing game design elements out of context is a risky way to design experiences, especially experiences intended to bring users back to a site.

Not everyone wants to spend more time on your site just to get a badge; some people simply aren’t motivated by such extrinsic rewards. When game designers include elements intended to promote deeper engagement, they look at their users through a very different lens than those of us in web design. Understanding how and why they do this can help us craft more engaging web experiences.

A game designer and a web designer looking at the same user research will most likely come up with totally different personas. Talking about many people as though they were one person can be a strength; it gives a clear vision of who we need to serve. But it’s also a limitation, because this method of clustering requires us to pick certain axes and disregard others. Web designers cluster people by their needs and abilities, but in doing so, we tend to disregard their personalities. Game designers are more likely to fixate on interaction styles; they embrace personality.

Let me show you what I mean. Pokémon Go has seen a meteoric rise in popularity since its release, possibly because of how well it caters to people with different personalities. In Pokémon Go, people who like to compete can “battle” in gyms; those who prefer to collaborate can go on “pokéwalks” together; and anyone with a drive to explore and push boundaries can try to “catch ’em all.” Even if a player enjoys all of these options, they’re likely to find one more motivating than the others. Ensuring that the game caters to each interaction preference enhances its appeal to a wide audience.

The same is true of web design. Adding gamification elements to draw in more visitors—but only elements that cater to competitive people—can overlook the wider audience. UX design that doesn’t target a variety of interaction styles is going to be hit-or-miss.

Game designers (and educators and psychologists) segment people in ways that complement web design and user experience. I’ve put together this guide to show you how they think and how (and where and why) we can use their models. Along the way, I’ll show how those models might overlap to create one bigger framework that we can use to pinpoint strengths and weaknesses in our designs.

Making it fun

Emotional design—the practice of moving interfaces beyond merely usable—is a growing field of interest among UX designers. (I highly recommend Designing for Emotion and Seductive Interaction Design.) Unsurprisingly, game designers have a theory of fun, too.

A recent study performed a contextual inquiry of hardcore and casual gamers, plus interviews with their friends and family members, identifying four types of fun (PDF) that relate to interaction preference.

  • Hard Fun comes from pursuing a goal—earning rewards according to progress.
  • Easy Fun focuses on something other than winning. It encourages learning and emphasizes feelings of wonder, awe, and mystery.
  • The People Factor caters to interaction with others; the interface becomes a mechanism for social engagement.
  • Altered States attract people to engage in a social context to feel something different (e.g., to gain pleasure from acting upon or inciting those around them).

These styles are easy to spot on social networking sites:

  • Competing for likes or followers (as on Twitter) is Hard Fun.
  • Casually browsing to find humorous new posts (as on Tumblr) is Easy Fun.
  • Engaging with a social network to connect with other people (as on Facebook) is a People Factor.
  • People who experience excitement from messing with other people (as on Reddit) are enjoying Altered States.

Bartle’s four player types

There are numerous ways of grouping people by interaction style, including Hallford and Hallford’s six categories of player behavior and Yee’s statistical clustering of three player types (PDF), but I find that these approaches roughly map to Richard Bartle’s four player types (particularly the three non-Troll ones). Bartle’s theory is among the most common approaches chosen by game designers (Fig 1).

Fig 1: Bartle’s player types are one common way that game designers segment player experiences.

Suppose you have an ecommerce site for people to buy snakes. An Achiever may be motivated to engage by extrinsic factors: How many snakes can she collect? How many reviews of snakes can she write? The Explorer may be motivated by learning more about snakes: What’s the length specification on the new python? Where can she read more about novel uses of snakeskin? The Socializer will be motivated by interactions with others: What types of snakes are her friends buying? Is there a discussion forum where she can connect with others over her love of snakes? And you’ll probably have to deal with a couple of teenaged Trolls who just want to make fun of everyone else’s snake-buying experience.
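To make this kind of diagnosis concrete, a feature inventory could be tagged with the player types each feature serves; any type left uncovered is a gap. A minimal sketch (the feature names and the mapping below are my own illustration, not part of Bartle’s model):

```python
# Hypothetical sketch: tag each feature of the snake-shop site with the
# Bartle player types it serves, then report any types left uncovered.
FEATURES = {
    "review leaderboard": {"Achiever"},
    "species spec sheets": {"Explorer"},
    "care-guide wiki": {"Explorer"},
    "friends' purchases feed": {"Socializer"},
}

ALL_TYPES = {"Achiever", "Explorer", "Socializer", "Troll"}

def underserved(features, all_types=ALL_TYPES):
    """Return the player types that no current feature caters to."""
    covered = set().union(*features.values()) if features else set()
    return all_types - covered

print(sorted(underserved(FEATURES)))  # → ['Troll']
```

Here the only uncovered type is the Troll, usually the one type a site deliberately leaves unserved.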

Bartle also examines pairwise interactions between player types. This leads him to the following sorts of conclusions:

  • The ratio of Achievers to Trolls is like the ratio of rabbits to wolves. When there are too many Trolls, the Achievers leave, but without interesting victims, the Trolls leave and the Achievers return.
  • Providing opportunities for exploration is key. When there aren’t enough Explorers finding new things and telling others about them, the Achievers eventually get bored and leave, causing the Socializers and Trolls to leave, as well.
  • Trolls keep the Socializer population in check. To reach the widest audience possible, a few Trolls are necessary because they keep the Socializer population from expanding exponentially and pushing out the Achievers and Explorers. (The only other stable equilibria involve fewer player types: either a balance of just Trolls and Achievers or else a community almost exclusively composed of Socializers.)

Motivating people

Another way of segmenting people is by identifying what motivates them. At a recent conference, I saw a presentation of a study of a mobile fitness coach. The presenters said that people’s motivations for exercise tend to cluster into distinct categories. Bartle himself didn’t focus much on motivation, so it was fascinating to see how closely the presenters’ categories align with Bartle’s: one extrinsically motivated, one self-motivated, one socially motivated.

In addition to the two dimensions considered by Bartle (Players versus World and Acting versus Interacting), a third axis should be taken into account: the players’ motivations for choosing to engage. Unlike the binary nature of acting OR interacting with the player OR the world, motivations fall along a spectrum of intrinsic and extrinsic motivation (Fig 2).

Fig 2: Each player type falls on a spectrum from intrinsic to extrinsic motivation.

Ryan and Deci’s Self Determination Theory (PDF) is a framework illustrating how player types vary according to source of motivation (Fig 3).

Fig 3: Ryan and Deci represent levels of motivation.
(adapted from Ryan & Deci, 2000, p. 61)

How we can motivate people to engage with a site or app depends on where their interaction style falls along this spectrum. What player type we cater to also determines how likely people are to stick around.

  • We can motivate Achievers (or the Achiever side of people) by adding extrinsic metrics they can use to compete with others. Achievers, being extrinsically motivated, will eventually grow bored and leave. Gamification techniques like points, badges, or leaderboards motivate Achievers extrinsically but can also give them a way to measure their growing competence (encouraging intrinsic motivation).
  • Explorers set their own goals and are self-motivated to interact with your website. Because they’re intrinsically motivated by enjoyment of the task itself, they tend to stick around longer. (Bartle says that changes in the numbers of other player types don’t usually impact the number of Explorers.) To promote long-term engagement of Explorers, it helps to introduce unexpected achievements. Hinting that there are some Easter Eggs hidden in your site could be a great way to encourage Explorers to stick around.
  • Socializers will stick around as long as there’s enough social interaction for them. We can motivate them by adding ways to add their own thoughts to site content, such as comments on an epublishing site or reviews on an ecommerce site. If they don’t get enough external feedback from other people, they’ll leave. Adding a way to “Like” reviews or respond to other users’ comments can help to provide this feedback.
  • Trolls set their own goal of annoying other users, but they require feedback from others to know that they’re actually being annoying. The expression “Don’t feed the Trolls” means removing this feedback loop, which we can do with moderation tools and community posting guidelines.
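The Achiever bullet above, an extrinsic metric that doubles as a competence measure, might look like this in practice; the thresholds and badge names here are invented for illustration:

```python
# Hypothetical badge milestones: each badge is both an extrinsic reward
# and a visible marker of the user's growing competence.
BADGE_THRESHOLDS = [
    (1, "First Review"),
    (10, "Regular Reviewer"),
    (50, "Top Critic"),
]

def badges_for(review_count, thresholds=BADGE_THRESHOLDS):
    """Return every badge earned at the given review count."""
    return [name for needed, name in thresholds if review_count >= needed]

print(badges_for(12))  # → ['First Review', 'Regular Reviewer']
```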

How you choose to present rewards will encourage people with different preferred interaction styles to engage with your product. Conventional cognitive psychology wisdom suggests that irregular rewards at irregular intervals motivate continued engagement most effectively. (Expected rewards can be problematic because if people know that a reward is coming, they may work for the reward rather than focusing on the task itself.) Yet, having a variety of expected rewards can help people set goals, thus promoting Achievers.
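One common way to produce irregular rewards is a variable-ratio schedule: the next reward fires after a randomly chosen number of actions, so people can’t predict exactly when it will arrive. A sketch, with arbitrary band limits:

```python
import random

def reward_schedule(n_actions, lo=3, hi=8, seed=None):
    """Return the action indices at which a reward fires.

    Each gap between rewards is drawn uniformly from [lo, hi], making
    the schedule irregular but never too sparse or too dense.
    """
    rng = random.Random(seed)
    points = []
    nxt = rng.randint(lo, hi)
    while nxt <= n_actions:
        points.append(nxt)
        nxt += rng.randint(lo, hi)
    return points
```

Fixing `seed` makes the schedule reproducible for testing while still feeling unpredictable to users.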

Helping people learn to use your site

When we make fairly complicated sites, people may cycle through different player types as they learn to engage with them.

To see what I mean, consider the Experiential Learning Cycle of Kurt Lewin, which contains four stages (Fig 4).

Fig 4: Lewin shows how we learn.

Viewing this cycle through Bartle’s player types, it’s possible to align player characteristics with aspects of Lewin’s cycle: Achievers concretely experience the world, Explorers reflect upon what they see, Socializers abstract the world into a context they can discuss, and Trolls premeditate their approach to interactions. This mapping also works with Bartle’s axes (Fig 5).

Fig 5: Lewin’s learning model can be overlaid upon Bartle’s player types.

For example, someone coming to a large content site focused on do-it-yourself projects might initially be motivated to engage by their own experience with home improvement. Once they’ve engaged regularly with the site for a time, they might move to a more reflective mode, exploring more of the site. If they really get interested, they might join in the discussion forum, discussing projects with others, and this discussion will give them new ideas for projects they can create in their own life. Application of the ideas to the real world moves back to the Achiever quadrant, but may impact people in that world as well (the Troll quadrant). Thus, if we want people to engage deeply with a site or app over time (learning different aspects of it) it helps to support the many different interaction styles that people may use to engage while learning.

Making it engaging

Of course, when we make websites, we often talk about supporting immersion through an experience of flow. Game designers again bring light to this form of interaction. Salen and Zimmerman, authors of the seminal textbook on game design, note that four parts of Mihaly Csikszentmihalyi’s flow model are necessary for flow to occur:

  • Challenge
  • Goals
  • Feedback
  • Control

All four player types need all four of these to engage fully, yet each prerequisite of flow might be aligned with a player type and learning stage (Fig 6). 

Fig 6: Csikszentmihalyi’s four prerequisites of flow can be mapped to the same axes as Bartle’s player types.

  • Achievers seek challenge. (They don’t set their own goals as much as Explorers, but will tackle whatever extrinsic challenge you put in front of them.) Challenge is actively experienced.
  • Explorers set their own goals to create challenge for themselves. These goals project into the future with reflection upon observation.
  • All players need feedback from the game, yet Socializers thrive upon feedback from other players.
  • Trolls seek control. (To prevent trolls, building in some form of moderation system is a way of taking away their control.)

For example, a photo-sharing site might create an immersive experience by challenging people to upload a certain number of photos, letting them set their own goals for organizing content, allowing other people to give feedback about the quality of the photos or collections, and giving everyone a comfortable sense of control over their own photos and collections. Introducing artificial difficulties, like saying “Try only uploading black and white photos this week,” can make the experience more game-like, presenting a challenge that people can choose to adopt as their own goal or not.

Bringing it all together

Here is a fully overlaid model based on my own interpretation of how they’re related (Fig 7).

Fig 7: All of the models can be overlaid upon each other to form a diagnostic tool.

This model might be expanded further using other four-part models from game design such as Hallford and Hallford’s four reward types or Caillois’s four game types. To examine how people progress through the learning stages, we might overlay David Kolb’s Experiential Learning Cycle (which is built upon Lewin’s model for learning).

I haven’t tested all the variables here, but I hope that this grouped model can be a useful diagnostic tool, as I’ll show with the following three examples. (If you encounter evidence supporting or refuting this way of overlaying the models in your own work, please leave a comment at the end of the article explaining what you’ve found.)

The model suggests that if a new feature does not succeed right away, it may largely be due to the interaction styles of the people engaging with the site or app. For example, on a start-up site I worked on, the most engaged users were the ones who had created their own reviews. When we added a discussion forum, it initially didn’t get a lot of use. One possible diagnosis of this slow beginning is that most of the people we’d attracted were Achievers, who learned through their own experiences and enjoyed the challenging hard fun of seeing who could write the most reviews. We dialed back the discussion experience to focus the site more directly on these Achievers. Another approach might have been to add a competitive element to the discussion itself; you’ve probably seen forums that rank their participants, which is a way of luring Achievers to be more social (and thus creating a more diverse community for Socializers).

The Troll quadrant is certainly the one least directly mapped to by other models. One possible explanation of this is that we try not to design for Trolls. An explanation I consider more likely is that in a socially-moderated setting, there’s little opportunity for Trolls to exist. If your site is having difficulty with Trolls, the model suggests that making it more difficult for people to act upon people can mitigate the problem. Moderation tools can take away the control that trolls require, making it difficult for them to create altered emotional states.
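One concrete form of “making it more difficult for people to act upon people” is a shadow-mute: the muted user’s posts remain visible to themselves (so there is no moderation decision to contest) but are hidden from everyone else, which cuts off the emotional feedback Trolls depend on. A hypothetical sketch:

```python
# Hypothetical shadow-mute: posts by muted users are delivered only to
# their own author, cutting the feedback loop that Trolls rely on.
def visible_posts(posts, viewer, muted):
    """posts: list of (author, text) pairs; return what `viewer` sees."""
    return [(author, text) for author, text in posts
            if author not in muted or author == viewer]

timeline = [("troll", "bait"), ("amy", "nice python!")]
print(visible_posts(timeline, "amy", muted={"troll"}))  # → [('amy', 'nice python!')]
```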

Bartle found that Explorers are rare and difficult to attract, but necessary for the long-term survival of a site. They’re the ones who try out new features to give the rest of the community things to do and discuss. One conventional approach to trying to increase site engagement is to add “gamified” elements, such as points, badges, and leaderboards. But such an approach may actually harm long-term engagement because such extrinsic motivators are unappealing to the Explorers who help the community evolve. Looking at what interaction types a site appeals to and then adding elements to appeal to others can help the site to grow a well-rounded, sustainable audience.

There’s much more that can be brought from games to make the web fun and meaningful. I hope that a greater understanding of how game designers make experiences engaging can empower web designers to craft a more intriguing and inviting web for a wide variety of people.

Categories: thinktime

Gaming the System&#8230;and Winning

a list apart - Wed 25th Jan 2017 02:01

Good intentions usually drive the “gamification” of websites—adding points, badges, and leaderboards to make them more engaging. It sounds like a great idea, but borrowing game design elements out of context is a risky way to design experiences, especially experiences intended to bring users back to a site.

Not everyone wants to spend more time on your site just to get a badge; some people simply aren’t motivated by such extrinsic rewards. When game designers include elements intended to promote deeper engagement, they look at their users through a very different lens than those of us in web design. Understanding how and why they do this can help us craft more engaging web experiences.

A game designer and a web designer looking at the same user research will most likely come up with totally different personas. Talking about many people as though they were one person can be a strength;it gives a clear vision of who we need to serve. But it’s also a limitation because this method of clustering requires us to pick certain axes and disregard others. Web designers cluster people by their needs and abilities, but in doing so, we tend to disregard their personalities. Game designers are more likely to fixate on interaction styles; they embrace personality.

Let me show you what I mean. Pokémon Go has seen a meteoric rise in popularity since its release, possibly because of how well it caters to people with different personalities. In Pokémon Go, people who like to compete can “battle” in gyms; those who prefer to collaborate can go on “pokéwalks” together; and anyone with a drive to explore and push boundaries can try to “catch ’em all.” Even if a player enjoys all of these options, they’re likely to find one more motivating than the others. Ensuring that the game caters to each interaction preference enhances its appeal to a wide audience.

The same is true of web design. Adding gamification elements to draw in more visitors—but only elements that cater to competitive people—can overlook the wider audience. UX design that doesn’t target a variety of interaction styles is going to be hit-or-miss.

Game designers (and educators and psychologists) segment people in ways that complement web design and user experience. I’ve put together this guide to show you how they think and how (and where and why) we can use their models. Along the way, I’ll show how those models might overlap to create one bigger framework that we can use to pinpoint strengths and weaknesses in our designs.

Making it fun

Emotional design—the practice of moving interfaces beyond merely usable—is a growing field of interest among UX designers. (I highly recommend Designing for Emotion and Seductive Interaction Design.) Unsurprisingly, game designers have a theory of fun, too.

A recent study performed a contextual inquiry of hardcore and casual gamers, plus interviews with their friends and family members, identifying four types of fun (PDF) that relate to interaction preference.

  • Hard Fun comes from pursuing a goal—earning rewards according to progress.
  • Easy Fun focuses on something other than winning. It encourages learning and emphasizes feelings of wonder, awe, and mystery.
  • The People Factor caters to interaction with others; the interface becomes a mechanism for social engagement.
  • Altered States attract people to engage in a social context to feel something different (e.g., to gain pleasure from acting upon or inciting those around them).

These styles are easy to spot on social networking sites:

  • Competing for likes or followers (as on Twitter) is Hard Fun.
  • Casually browsing to find humorous new posts (as on Tumblr) is Easy Fun.
  • Engaging with a social network to connect with other people (as on Facebook) is a People Factor.
  • People who experience excitement from messing with other people (as on Reddit) are enjoying Altered States.
Bartle’s four player types

There are numerous ways of grouping people by interaction style, including Hallford and Hallford’s six categories of player behavior and Yee’s statistical clustering of three player types (PDF), but I find that these approaches roughly map to Richard Bartle’s 4 player types (particularly the three non-Troll ones). Bartle’s theory is among the most common approaches chosen by game designers (Fig 1).

Fig 1: Bartle’s player types are one common way that game designers segment player experiences.

Suppose you have an ecommerce site for people to buy snakes. An Achiever may be motivated to engage by extrinsic factors: How many snakes can she collect? How many reviews of snakes can she write? The Explorer may be motivated by learning more about snakes: What’s the length specification on the new python? Where can she read more about novel uses of snakeskin? The Socializer will be motivated by interactions with others: What types of snakes are her friends buying? Is there a discussion forum where she can connect with others over her love of snakes? And you’ll probably have to deal with a couple of teenaged Trolls who just want to make fun of everyone else’s snake-buying experience.

Bartle also examines pairwise interactions between player types. This leads him to the following sorts of conclusions:

  • The ratio of Achievers to Trolls is like the ratio of rabbits to wolves. When there are too many Trolls, the Achievers leave, but without interesting victims, the Trolls leave and the Achievers return.
  • Providing opportunities for exploration is key. When there aren’t enough Explorers finding new things and telling others about them, the Achievers eventually get bored and leave, causing the Socializers and Trolls to leave, as well.
  • Trolls keep the Socializer population in check. To reach the widest audience possible, a few Trolls are necessary because they keep the Socializer population from expanding exponentially and pushing out the Achievers and Explorers. (The only other stable equilibria involve fewer player types: either a balance of just Trolls and Achievers or else a community almost exclusively composed of Socializers.)
Motivating people

Another way of segmenting people is by identifying what motivates them. At a recent conference, I saw a presentation of a study of a mobile fitness coach. The presenters said that people’s motivations for exercise tend to cluster into distinct categories. Bartle himself didn’t focus much on motivation, so it was fascinating to see how closely the presenters’ categories align with Bartle’s: one extrinsically motivated, one self-motivated, one socially motivated.

In addition to the two dimensions considered by Bartle (Players versus World and Acting versus Interacting), a third axis should be taken into account: the players’ motivations for choosing to engage. Unlike the binary nature of acting OR interacting with the player OR the world, motivations fall along a spectrum of intrinsic and extrinsic motivation (Fig 2).

Fig 2: Each player type falls on a spectrum from intrinsic to extrinsic motivation.

Ryan and Deci’s Self Determination Theory (PDF) is a framework illustrating how player types vary according to source of motivation (Fig 3).

Fig 3: Ryan and Deci represent levels of motivation.
(adapted from Ryan & Deci, 2000, p. 61)

How we can motivate people to engage with a site or app depends on where their interaction style falls along this spectrum. What player type we cater to also determines how likely people are to stick around.

  • We can motivate Achievers (or the Achiever side of people) by adding extrinsic metrics they can use to compete with others. Achievers, being extrinsically-motivated, will eventually grow bored and leave. Gamification techniques like points, badges, or leaderboards motivate Achievers extrinsically but can also give them a way to measure their growing competence (encouraging intrinsic motivation).
  • Explorers set their own goals and are self-motivated to interact with your website. Because they’re intrinsically motivated by enjoyment of the task itself, they tend to stick around longer. (Bartle says that changes in numbers of other player types doesn’t usually impact the number of Explorers.) To promote long-term engagement of Explorers, it helps to introduce unexpected achievements. Hinting that there are some Easter Eggs hidden in your site could be a great way to encourage explorers to stick around.
  • Socializers will stick around as long as there’s enough social interaction for them. We can motivate them by adding ways to add their own thoughts to site content, such as comments on an epublishing site or reviews on an ecommerce site. If they don’t get enough external feedback from other people, they’ll leave. Adding a way to “Like” reviews or respond to other users’ comments can help to provide this feedback.
  • Trolls set their own goal of annoying other users, but they require feedback from others to know that they’re actually being annoying. The expression “Don’t feed the Trolls” means removing this feedback loop, which we can do with moderation tools and community posting guidelines.

How you choose to present rewards will encourage people with different preferred interaction styles to engage with your product. Conventional cognitive psychology wisdom suggests that irregular rewards at irregular intervals motivate continued engagement most effectively. (Expected rewards can be problematic because if people know that a reward is coming, they may work for the reward rather than focusing on the task itself.) Yet, having a variety of expected rewards can help people set goals, thus promoting Achievers.

Helping people learn to use your site

When we make fairly complicated sites, people may cycle through different player types as they learn to engage with them.

To see what I mean, consider the Experiential Learning Cycle of Kurt Lewin, which contains four stages (Fig 4).

Fig 4: Lewin shows how we learn.

Viewing this cycle through Bartle’s player types, it’s possible to align player characteristics with aspects of Lewin’s cycle: Achievers concretely experience the world, Explorers reflect upon what they see, Socializers abstract the world into a context they can discuss, and Trolls premeditate their approach to interactions. This mapping also works with Bartle’s axes (Fig 5).

Fig 5: Lewin’s learning model can be overlaid upon Bartle’s player types.

For example, someone coming to a large content site focused on do-it-yourself projects might initially be motivated to engage by their own experience with home improvement. Once they’ve engaged regularly with the site for a time, they might move to a more reflective mode, exploring more of the site. If they really get interested, they might join in the discussion forum, discussing projects with others, and this discussion will give them new ideas for projects they can create in their own life. Application of the ideas to the real world moves back to the Achiever quadrant, but may impact people in that world as well (the Troll quadrant). Thus, if we want people to engage deeply with a site or app over time (learning different aspects of it) it helps to support the many different interaction styles that people may use to engage while learning.

Making it engaging

Of course, when we make websites, we often talk about supporting immersion through an experience of flow. Game designers again bring light to this form of interaction. Salen and Zimmerman, authors of the seminal textbook on game design, note that four parts of Mihaly Csikszentmihalyi’s flow model are necessary for flow to occur:

  • Challenge
  • Goals
  • Feedback
  • Control

All four player types need all four of these to engage fully, yet each prerequisite of flow might be aligned with a player type and learning stage (Fig 6). 

Fig 6: Csikszentmihalyi’s four prerequisites of flow can be mapped to the same axes as Bartle’s player types.
  • Achievers seek challenge. (They don’t set their own goals as much as Explorers, but will tackle whatever extrinsic challenge you put in front of them.) Challenge is actively experienced.
  • Explorers set their own goals to create challenge for themselves. These goals project into the future with reflection upon observation.
  • All players need feedback from the game, yet Socializers thrive upon feedback from other players.
  • Trolls seek control. (To prevent trolls, building in some form of moderation system is a way of taking away their control.)

For example, a photo-sharing site might create an immersive experience by challenging people to upload a certain number of photos, letting them set their own goals for organizing content, allowing other people to give feedback about the quality of the photos or collections, and giving everyone a comfortable sense of control over their own photos and collections. Introducing artificial difficulties, like saying “Try only uploading black and white photos this week” can make the experience more game-like, presenting a challenge that people can choose to use as their own goal or not.

Bringing it all together

Here is a fully overlaid model based on my own interpretation of how they’re related (Fig 7).

Fig 7: All of the models can be overlaid upon each other to form a diagnostic tool.

This model might be expanded further using other four-part models from game design such as Hallford and Hallford’s four reward types or Callois’s four game types. To examine how people progress through the learning stages, we might overlay David Kolb’s Experiential Learning Cycle (which is built upon Lewin’s model for learning).

I haven’t tested all the variables here, but I hope that this grouped model can be a useful diagnostic tool, as I’ll show with the following three examples. (If you encounter evidence supporting or refuting this way of overlaying the models in your own work, please leave a comment at the end of the article explaining what you’ve found.)

The model suggests that if a new feature does not succeed right away, it may largely be due to the interaction styles of the people engaging with the site or app. For example, on a start-up site I worked on, the most engaged users were the ones who had created their own reviews. When we added a discussion forum, it initially didn’t get a lot of use. One possible diagnosis of this slow beginning is that most of the people we’d attracted were Achievers, who learned through their own experiences and enjoyed the challenging hard fun of seeing who could write the most reviews. We dialed back the discussion experience to focus the site more directly on these Achievers. Another approach might have been to add a competitive element to the discussion itself; you’ve probably seen forums that rank their participants, which is a way of luring Achievers to be more social (and thus creating a more diverse community for Socializers).

The Troll quadrant is certainly the one least directly mapped to by the other models. One possible explanation is that we try not to design for Trolls. An explanation I consider more likely is that in a socially-moderated setting, there's little opportunity for Trolls to exist. If your site is having difficulty with Trolls, the model suggests that making it more difficult for people to act upon other people can mitigate the problem. Moderation tools can take away the control that Trolls require, making it difficult for them to create altered emotional states.

Bartle found that Explorers are rare and difficult to attract, but necessary for the long-term survival of a site. They’re the ones who try out new features to give the rest of the community things to do and discuss. One conventional approach to trying to increase site engagement is to add “gamified” elements, such as points, badges, and leaderboards. But such an approach may actually harm long-term engagement because such extrinsic motivators are unappealing to the Explorers who help the community evolve. Looking at what interaction types a site appeals to and then adding elements to appeal to others can help the site to grow a well-rounded, sustainable audience.

There’s much more that can be brought from games to make the web fun and meaningful. I hope that a greater understanding of how game designers make experiences engaging can empower web designers to craft a more intriguing and inviting web for a wide variety of people.

Categories: thinktime

Lev Lafayette: Keeping The Build Directory in EasyBuild and Paraview Plugins

Planet Linux Australia - Tue 24th Jan 2017 15:01

By default, EasyBuild will delete the build directory of a successful installation, while saving the build directories of failed installs for diagnostic purposes. In some cases, however, one may want to keep the build directory. This can be useful, for example, for diagnosing *successful* builds. Another example is the installation of plugins for applications such as Paraview, which *require* access to the successful build directory.
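A minimal sketch of how this can be done from the command line, using EasyBuild's `--disable-cleanup-builddir` option (the Paraview easyconfig filename shown is illustrative and may differ on your system):

```shell
# Build Paraview but ask EasyBuild not to remove the build directory
# on success, so plugins can later be compiled against it.
# (Easyconfig name is an example; substitute the one for your toolchain.)
eb ParaView-5.1.2-foss-2016b.eb --disable-cleanup-builddir
```

The same behaviour can also be set persistently in an EasyBuild configuration file rather than per invocation.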

read more

Categories: thinktime

Lev Lafayette: The Provision of HPC Resources to Top Universities

Planet Linux Australia - Tue 24th Jan 2017 13:01

Recently, the University of Melbourne was ranked #1 in Australia and #33 in the world, according to the Times Higher Education World University Rankings of 2015-2016 [1]. The rankings are based on a balanced metric across citations, industry income, international outlook, research, and teaching.

read more

Categories: thinktime

Binh Nguyen: Saving Capitalist Democracy 2, Random Thoughts, and More

Planet Linux Australia - Tue 24th Jan 2017 01:01
Obviously, this next part is a continuation of one of my other posts: http://dtbnguyen.blogspot.com/2016/11/saving-capitalist-democracy-random.html - just get better at doing things. If countries such as Cuba can deliver good healthcare on limited resources, surely more prosperous countries can as well? http://www.smh.com.au/lifestyle/health-and-wellbeing/
Categories: thinktime

Ben Martin: OHC2017 zero to firmware in < 2 hours

Planet Linux Australia - Mon 23rd Jan 2017 23:01
I thought I'd make some modifications along the way in the build, so I really couldn't do a head-to-head with the build time I had heard about (a lowish number of minutes). The on/off switch being where it was didn't fit my plans, so I moved that off-board, and also moved the battery off-board so that I might use the space below the screen for something, perhaps where the stylus lives in the case.


I did manage to go from opening the packet to having the firmware environment set up, built, and uploaded in less than 2 hours total. No solder bridges, no hassles, heat shrink on the cables, and 90 degree headers across the bottom of the board for future tinkering.

This is going to look extremely stylish in a CNCed hardwood case. My current plan is to turn it into a smart remote control. Rotary encoder for volume, maybe modal so that the desired "program" can be selected quickly from a list without needing to flick or page through things.

Categories: thinktime
