

How many hops?

Seth Godin - Fri 23rd Mar 2018 19:03
Some things, like your next job, might happen as the direct result of one meeting. Here I am, here's my resume, okay, you're hired. But most of the time, that's not the way it works. You meet someone. You do...        Seth Godin
Categories: thinktime

Matthew Oliver: Keystone Federated Swift – A series of posts

Planet Linux Australia - Fri 23rd Mar 2018 15:03

Matt Treinish and I proposed a presentation at the OpenStack Summit in Vancouver in May; it was accepted, but placed on standby. Which simply means we have a lightning talk slot (10 minutes), but may be bumped up to a full slot based on how other presenters go (visa issues, pull-outs, etc).

Anyway, 10 minutes won't do the topic justice, so I thought what better than to also post details here as I work through them. Some of what I say may end up in the presentation, or may not. All I know is I’ve been asked a few times how to set up Swift in a Keystone federated environment. Let’s face it, Swift scales to a global cluster no worries; however, other OpenStack components may have trouble doing the same. So federating a bunch of different regions and treating them as their own clouds makes heaps of sense. Great, then what’s the best way of integrating Swift into this federated environment?


My current idea is to walk through three initial topologies. The first I’ll call ‘false federation’, where we simply use Swift’s ability to stack multiple authentication middlewares as different resellers in order to authenticate against multiple Keystone endpoints. For those playing along at home, the Keystone middleware currently doesn’t let you do this, but I have a trivial patch that fixes it, and I plan to push it upstream as soon as I have a chance to clean it up and add tests.
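To make that concrete, here’s a rough sketch of what a proxy-server.conf pipeline could look like with two keystoneauth instances acting as separate resellers. All filter names, endpoints, and prefixes below are illustrative, and chaining two authtoken filters like this relies on the patch mentioned above:

[pipeline:main]
pipeline = catch_errors cache authtoken1 keystoneauth1 authtoken2 keystoneauth2 proxy-server

[filter:authtoken1]
use = egg:keystonemiddleware#auth_token
auth_url = https://keystone.cloud1.example.com:5000
# let unauthenticated requests fall through to the second authtoken
delay_auth_decision = True

[filter:keystoneauth1]
use = egg:swift#keystoneauth
reseller_prefix = AUTH1

[filter:authtoken2]
use = egg:keystonemiddleware#auth_token
auth_url = https://keystone.cloud2.example.com:5000

[filter:keystoneauth2]
use = egg:swift#keystoneauth
reseller_prefix = AUTH2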


The second is separate Swift clusters in each cloud, using Swift’s container sync to move objects so you still have access to your data on any cloud you visit… eventually.
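For reference, container sync is driven by a realms file shared between the clusters plus a sync target set on the container itself. A minimal sketch, with made-up realm, cluster, and key names:

# /etc/swift/container-sync-realms.conf (the same on both clusters)
[fedrealm]
key = sharedrealmsecret
cluster_cloud1 = https://swift.cloud1.example.com/v1/
cluster_cloud2 = https://swift.cloud2.example.com/v1/

# tell a container in cloud1 to sync to its peer in cloud2
swift post -t '//fedrealm/cloud2/AUTH_account/mycontainer' -k 'containersecret' mycontainer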


And finally, the third is what we’d all want: one large Swift cluster that all clouds talk to, so no matter where you are, there your data is. It also gives better durability, dispersion, and everything else we want out of a Swift cluster. The trick here will be making sure the same Swift account name is used no matter which Keystone you talk to, and I assume this will come down to how you configure what you share during the federated token exchange. I’ll leave this as the last post, and we still need to play with it to iron it out… but it’s obviously the dream.
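Whatever the mapping ends up looking like, a quick way to sanity-check it would be to authenticate via each cloud’s Keystone in turn and compare the account Swift reports, e.g.:

# with cloud1 credentials loaded in the environment
swift stat | grep Account:
# with cloud2 credentials loaded
swift stat | grep Account:

If federation is doing the right thing, both should print the same AUTH_ account name.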

These diagrams are obviously overly simplistic, but I hope you get the idea.

The next post will cover the ‘false federation’ approach, seeing as I already have a Swift keystoneauth middleware patch that solves this.

Categories: thinktime

Overpinnings when the underpinnings go away

Seth Godin - Thu 22nd Mar 2018 19:03
Years ago, most middle class people had a huge, expensive piece of furniture in their living room. It played music and captured radio broadcasts. The high-end stereo business was the overpinning built on this underpinning. "If you're already going to...        Seth Godin
Categories: thinktime

Mobile blindness

Seth Godin - Wed 21st Mar 2018 21:03
You don't need a peer-reviewed study to know that when people surf the web on their smartphones, they're not going as deep. We swipe instead of click. We scan instead of read. Even our personal email... We get exposure to...        Seth Godin
Categories: thinktime

Tim Serong: You won’t find us on Facebook

Planet Linux Australia - Tue 20th Mar 2018 23:03

I made these back in August 2016 (complete with lovingly hand-drawn thumb and middle finger icons), but it seems appropriate to share them again now. The images are CC-BY-SA, so go nuts, or you can grab them in sticker form from Redbubble.

Categories: thinktime

The beat goes on

Seth Godin - Tue 20th Mar 2018 19:03
That's what makes it the beat. There are other things that stop. That start. That go faster or slower. But don't worry about the beat. We can't change the beat. The beat continues. When we're watching it, it continues, and...        Seth Godin
Categories: thinktime

Getting the default settings right

Seth Godin - Mon 19th Mar 2018 20:03
We know that the default settings determine the behavior of the group. Organ donation, 401k allocations, the typeface on our word processor--the way it's set to act if we don't override it is often the way we act. Because often,...        Seth Godin
Categories: thinktime

Ben Martin: Waiting for the other Gantry to drop

Planet Linux Australia - Mon 19th Mar 2018 18:03
When cutting the first side of the gantry I used the "traditional" hold-down method of clamps and the like on the edges. That works OK when you have a much larger piece of alloy than the part you are cutting. In this case there isn't much spare in the height axis (left/right as shown) and, as you can see, very little in the x axis (up/down in the below image). My clamping allowed more vibration on the first cut than I'd like, so I changed how I went about the second side of the gantry.

For the second gantry, after flipping things in the software so that I was coming in from the other side, I drilled out four M6 holes and countersank them.

This way the bolts (M6x40) were almost flush with the workpiece. These bolts go straight through the plywood and connect with T-slot nuts in the alloy bed of the CNC, so there isn't much scope to use bolts that are too long for this application. Countersinking the bolts helps on a machine with limited Z travel, as using non-stubby drill bits really locks down the amount of free play and clearance you can get. The downside of this workholding is that you are left with four M6 holes that don't really need to be in the final product.

In this case it doesn't matter as I can use them and a new plate to mount one or two cameras on the back gantry facing forwards. I have found that the best vantage for CNC viewing is when not in the same room and looking at the video streams.

In future jobs I might move the countersunk bolts to the edge so they are not on the final work piece.

So now all I have to do is free this piece from the waste, tap a bunch of M5 holes, drill and tap five holes on three sides of the new gantry pieces, and I'm getting close to loading it on.
Categories: thinktime

Francois Marier: Dynamic DNS on your own domain

Planet Linux Australia - Mon 19th Mar 2018 07:03

I recently moved my dynamic DNS hostnames from Dyn (now owned by Oracle) to No-IP. In the process, I moved all of my hostnames under a sub-domain that I control, in case I ever want to self-host the authoritative DNS server for it.

Creating an account

In order to use my own existing domain, I registered for the Plus Managed DNS service and provided my top-level domain.

Then I created a support ticket to ask for the sub-domain feature. Without that, No-IP expects you to delegate your entire domain to them, whereas I only wanted to delegate a single sub-domain (dyn).

Once that got enabled, I was able to create hostnames like machine.dyn in the No-IP control panel. Without the sub-domain feature, you can't have dots in hostnames.

I used a bogus IP address for all of the hostnames I created in order to easily confirm that the client software is working.

DNS setup

On my registrar’s side, here are the DNS records I had to add to delegate everything under the dyn sub-domain to No-IP:

dyn NS
dyn NS
dyn NS
dyn NS
dyn NS

Client setup

In order to update its IP address whenever it changes, I installed ddclient on each of my machines:

apt install ddclient

While the ddclient package won't help you configure your No-IP service during installation or enable the web IP lookup method, this can all be done by editing the configuration after the fact.

I put the following in /etc/ddclient.conf:

ssl=yes
use=web,, web-skip='IP Address'

and the following in /etc/default/ddclient:

run_dhclient="false"
run_ipup="false"
run_daemon="true"
daemon_interval="3600"

Then restart the service:

systemctl restart ddclient.service

Note that you do need to change the default update interval or the server will ban your IP address.


To test that the client software is working, wait 6 minutes (there is an internal check which cancels any client invocations within 5 minutes of another), then run it manually:

ddclient --verbose --debug

The IP for that machine should now be visible on the No-IP control panel and in DNS lookups:

dig +short
Categories: thinktime

Yes, there's a free lunch

Seth Godin - Sun 18th Mar 2018 18:03
In a physical economy in which scarcity is the fundamental driver, eating lunch means someone else gets less. But in a society where ideas lead to trust and connection and productivity, where working together is better than working apart, where...        Seth Godin
Categories: thinktime

Chris Smart: Auto apply latest package updates on OpenWrt (LEDE Project)

Planet Linux Australia - Sun 18th Mar 2018 11:03

Running Linux on your router and wifi devices is fantastic, but it’s important to keep them up-to-date. This is how I auto-update my devices with the latest packages from OpenWrt (but not firmware, I still do that manually when there’s a new release).

This is a very simple shell script which uses OpenWrt’s package manager to fetch a list of updates and then install them, rebooting the machine if that was successful. The log file is served up over HTTP, in case you want to grab it easily to see what’s been happening (assuming you’re running the uhttpd service).

Make a directory to hold the script.
root@firewall:~# mkdir -p /usr/local/sbin

Make the script.
root@firewall:~# cat > /usr/local/sbin/ << \EOF
#!/bin/sh
# refresh package lists, upgrade anything upgradable, reboot on success
opkg update
PACKAGES="$(opkg list-upgradable | awk '{print $1}')"
if [ -n "${PACKAGES}" ]; then
  opkg upgrade ${PACKAGES}
  if [ "$?" -eq 0 ]; then
    echo "$(date -I"seconds") - update success, rebooting" \
>> /www/update.result
    exec reboot
  else
    echo "$(date -I"seconds") - update failed" >> /www/update.result
  fi
else
  echo "$(date -I"seconds") - nothing to update" >> /www/update.result
fi
EOF

Make the script executable and touch the log file.
root@firewall:~# chmod u+x /usr/local/sbin/
root@firewall:~# touch /www/update.result

Give it a run manually, if you want.
root@firewall:~# /usr/local/sbin/

Next schedule the script in cron.
root@firewall:~# crontab -e

My cron entry looks like this, to run at 2am every day.

0 2 * * * /usr/local/sbin/

Now just start and enable cron.
root@firewall:~# /etc/init.d/cron start
root@firewall:~# /etc/init.d/cron enable

Download a copy of the log from another machine.
chris@box:~$ curl http://router/update.result
2018-03-18T10:14:49+1100 - nothing to update

That’s it! Now if you have multiple devices you can do the same, but maybe just set the cron entry for a different time of the night.
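For example, with a hypothetical script name of auto-update.sh (the path above is truncated, so substitute your own), the crontabs on two devices might be staggered like this:

# on the router
0 2 * * * /usr/local/sbin/auto-update.sh
# on the wifi access point
0 3 * * * /usr/local/sbin/auto-update.sh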

Categories: thinktime

Donna Benjamin: DrupalCon Nashville

Planet Linux Australia - Sat 17th Mar 2018 23:03

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

Categories: thinktime

"What does this remind you of?"

Seth Godin - Sat 17th Mar 2018 19:03
That's a much more useful way to get feedback than asking if we like it. We make first impressions and long-term judgments based on the smallest of clues. We scan before we dive in, we see the surface before we...        Seth Godin
Categories: thinktime

Russell Coker: Racism in the Office

Planet Linux Australia - Sat 17th Mar 2018 01:03

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty, and that racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways, it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected, there was no support for his ideas about DNA modification. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes; one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past, maybe he would have made similar claims about other non-white races and maybe he wouldn’t, I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.

Related posts:

  1. Anarchy in the Office Some of the best examples I’ve seen of anarchy working...
  2. Servers in the Office I just had a conversation with someone who thinks that...
  3. a good security design for an office One issue that is rarely considered is how to deal...
Categories: thinktime

Everyone has an accent

Seth Godin - Fri 16th Mar 2018 18:03
The fact that we think the way we speak is normal is the first clue that empathy is quite difficult. You might also notice how easy it is to notice people who are much worse at driving than you are--but...        Seth Godin
Categories: thinktime

OpenSTEM: Vale Stephen Hawking

Planet Linux Australia - Fri 16th Mar 2018 13:03
Stephen Hawking was born on the 300th anniversary of Galileo Galilei‘s death (8 March 1942), and died on the anniversary of Albert Einstein‘s birth (14 March).   Having both reached the age of 76, Hawking actually lived a few months longer than Einstein, in spite of his health problems.  By the way, what do you call it when […]
Categories: thinktime

Conversational Design

a list apart - Fri 16th Mar 2018 01:03

A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Erika Hall’s new book, Conversational Design, available now from A Book Apart.

Texting is how we talk now. We talk by tapping tiny messages on touchscreens—we message using SMS via mobile data networks, or through apps like Facebook Messenger or WhatsApp.


In 2015, the Pew Research Center found that 64% of American adults owned a smartphone of some kind, up from 35% in 2011. We still refer to these personal, pocket-sized computers as phones, but “Phone” is now just one of many communication apps we neglect in favor of texting. Texting is the most widely used mobile data service in America. And in the wider world, four billion people have mobile phones, so four billion people have access to SMS or other messaging apps. For some, dictating messages into a wristwatch offers an appealing alternative to placing a call.

The popularity of texting can be partially explained by the medium’s ability to offer the easy give-and-take of conversation without requiring continuous attention. Texting feels like direct human connection, made even more captivating by unpredictable lag and irregular breaks. Any typing is incidental because the experience of texting barely resembles “writing,” a term that carries associations of considered composition. In his TED talk, Columbia University linguist John McWhorter called texting “fingered conversation”—terminology I find awkward, but accurate. The physical act—typing—isn’t what defines the form or its conventions. Technology is breaking down our traditional categories of communication.

By the numbers, texting is the most compelling computer-human interaction going. When we text, we become immersed and forget our exchanges are computer-mediated at all. We can learn a lot about digital design from the inescapable draw of these bite-sized interactions, specifically the use of language.

What Texting Teaches Us

Texting is a revealing example of what makes computer-mediated interaction compelling. The reasons people are compelled to attend to their text messages—even at risk to their own health and safety—aren’t high production values, so-called rich media, or the complexity of the feature set.

Texting, and other forms of social media, tap into something very primitive in the human brain. These systems offer always-available social connection. The brevity and unpredictability of the messages themselves trigger the release of dopamine that motivates seeking behavior and keeps people coming back for more. What makes interactions interesting may start on a screen, but the really interesting stuff happens in the mind. And language is a critical part of that. Our conscious minds are made of language, so it’s easy to perceive the messages you read not just as words but as the thoughts of another mingled with your own. Loneliness seems impossible with so many voices in your head.

With minimal visual embellishment, texts can deliver personality, pathos, humor, and narrative. This is apparent in “Texts from Dog,” which, as the title indicates, is a series of imagined text exchanges between a man and his dog. (Fig 1.1). With just a few words, and some considered capitalization, Joe Butcher (writing as October Jones) creates a vivid picture of the relationship between a neurotic canine and his weary owner.

Fig 1.1: “Texts from Dog” shows how lively a simple text exchange can be.

Using words is key to connecting with other humans online, just as it is in the so-called “real world.” Imbuing interfaces with the attributes of conversation can be powerful. I’m far from the first person to suggest this. However, as computers mediate more and more relationships, including customer relationships, anyone thinking about digital products and services is in a challenging place. We’re caught between tried-and-true past practices and the urge to adopt the “next big thing,” sometimes at the exclusion of all else.

Being intentionally conversational isn’t easy. This is especially true in business and at scale, such as in digital systems. Professional writers use different types of writing for different purposes, and each has rules that can be learned. The love of language is often fueled by a passion for rules — rules we received in the classroom and revisit in manuals of style, and rules that offer writers the comfort of being correct outside of any specific context. Also, there is the comfort of being finished with a piece of writing and moving on. Conversation, on the other hand, is a context-dependent social activity that implies a potentially terrifying immediacy.

Moving from the idea of publishing content to engaging in conversation can be uncomfortable for businesses and professional writers alike. There are no rules. There is no done. It all feels more personal. Using colloquial language, even in “simplifying” interactive experiences, can conflict with a desire to appear authoritative. Or the pendulum swings to the other extreme and a breezy style gets applied to a laborious process like a thin coat of paint.

As a material for design and an ingredient in interactions, words need to emerge from the content shed and be considered from the start.  The way humans use language—easily, joyfully, sometimes painfully—should anchor the foundation of all interactions with digital systems.

The way we use language and the way we socialize are what make us human; our past contains the key to what commands our attention in the present, and what will command it in the future. To understand how we came to be so perplexed by our most human quality, it’s worth taking a quick look at, oh!, the entire known history of communication technology.

The Mother Tongue

Accustomed to eyeballing type, we can forget language began in our mouths as a series of sounds, like the calls and growls of other animals. We’ll never know for sure how long we’ve been talking—speech itself leaves no trace—but we do know it’s been a mighty long time.

Archaeologist Natalie Thais Uomini and psychologist Georg Friedrich Meyer concluded that our ancestors began to develop language as early as 1.75 million years ago. Per the fossil record, modern humans emerged at least 190,000 years ago in the African savannah. Evidence of cave painting goes back 30,000 years (Fig 1.2).

Then, a mere 6,000 years ago, ancient Sumerian commodity traders grew tired of getting ripped off. Around 3200 BCE, one of them had the idea to track accounts by scratching wedges in wet clay tablets. Cuneiform was born.

So, don’t feel bad about procrastinating when you need to write—humanity put the whole thing off for a couple hundred thousand years! By a conservative estimate, we’ve had writing for about 4% of the time we’ve been human. Chatting is easy; writing is an arduous chore.

Prior to mechanical reproduction, literacy was limited to the elite by the time and cost of hand-copying manuscripts. It was the rise of printing that led to widespread literacy; mass distribution of text allowed information and revolutionary ideas to circulate across borders and class divisions. The sharp increase in literacy bolstered an emerging middle class. And the ability to record and share knowledge accelerated all other advances in technology: photography, radio, TV, computers, internet, and now the mobile web. And our talking speakers.

Fig 1.2: In hindsight, “literate culture” now seems like an annoying phase we had to go through so we could get to texting.

Every time our communication technology advances and changes, so does the surrounding culture—then it disrupts the power structure and upsets the people in charge. Catholic archbishops railed against mechanical movable type in the fifteenth century. Today, English teachers deplore texting emoji. Resistance is, as always, futile. OMG is now listed in the Oxford English Dictionary.

But while these developments have changed the world and how we relate to one another, they haven’t altered our deep oral core.

Orality, Say It with Me

Orality knits persons into community. Walter Ong

Today, when we record everything in all media without much thought, it’s almost impossible to conceive of a world in which the sum of our culture existed only as thoughts.

Before literacy, words were ephemeral and all knowledge was social and communal. There was no “save” option and no intellectual property. The only way to sustain an idea was to share it, speaking aloud to another person in a way that made it easy for them to remember. This was orality—the first interface.

We can never know for certain what purely oral cultures were like. People without writing are terrible at keeping records. But we can examine oral traditions that persist for clues.

The oral formula

Reading and writing remained elite activities for centuries after their invention. In cultures without a writing system, oral characteristics persisted to help transmit poetry, history, law and other knowledge across generations.

The epic poems of Homer rely on meter, formulas, and repetition to aid memory:

Far as a man with his eyes sees into the mist of the distance
Sitting aloft on a crag to gaze over the wine-dark seaway,
Just so far were the loud-neighing steeds of the gods overleaping.
Iliad, 5.770

Concrete images like rosy-fingered dawn, loud-neighing steeds, wine-dark seaway, and swift-footed Achilles served to aid the teller and to sear the story into the listener’s memory.

Biblical proverbs also encode wisdom in a memorable format:

As a dog returns to its vomit, so fools repeat their folly. Proverbs 26:11

That is vivid.

And a saying that originated in China hundreds of years ago can prove sufficiently durable to adorn a few hundred Etsy items:

A journey of a thousand miles begins with a single step. Tao Te Ching, Chapter 64, ascribed to Lao Tzu

The labor of literature

Literacy created distance in time and space and decoupled shared knowledge from social interaction. Human thought escaped the existential present. The reader doesn’t need to be alive at the same time as the writer, let alone hanging out around the same fire pit or agora. 

Freed from the constraints of orality, thinkers explored new forms to preserve their thoughts. And what verbose and convoluted forms these could take:

The Reader will I doubt too soon discover that so large an interval of time was not spent in writing this discourse; the very length of it will convince him, that the writer had not time enough to make a shorter. George Tullie, An Answer to a Discourse Concerning the Celibacy of the Clergy, 1688

There’s no such thing as an oral semicolon. And George Tullie has no way of knowing anything about his future audience. He addresses himself to a generic reader he will never see, nor receive feedback from. Writing in this manner is terrific for precision, but not good at all for interaction.

Writing allowed literate people to become hermits and hoarders, able to record and consume knowledge in total solitude, invest authority in them, and defend ownership of them. Though much writing preserved the dullest of records, the small minority of language communities that made the leap to literacy also gained the ability to compose, revise, and perfect works of magnificent complexity, utility, and beauty.

The qualities of oral culture

In Orality and Literacy: The Technologizing of the Word, Walter Ong explored the “psychodynamics of orality,” which is, coincidentally, quite a mouthful.  Through his research, he found that the ability to preserve ideas in writing not only increased knowledge, it altered values and behavior. People who grow up and live in a community that has never known writing are different from literate people—they depend upon one another to preserve and share knowledge. This makes for a completely different, and much more intimate, relationship between ideas and communities.

Oral culture is immediate and social

In a society without writing, communication can happen only in the moment and face-to-face. It sounds like the introvert’s nightmare! Oral culture has several other hallmarks as well:

  • Spoken words are events that exist in time. It’s impossible to step back and examine a spoken word or phrase. While the speaker can try to repeat, there’s no way to capture or replay an utterance.
  • All knowledge is social, and lives in memory. Formulas and patterns are essential to transmitting and retaining knowledge. When the knowledge stops being interesting to the audience, it stops existing.
  • Individuals need to be present to exchange knowledge or communicate. All communication is participatory and immediate. The speaker can adjust the message to the context. Conversation, contention, and struggle help to retain this new knowledge.
  • The community owns knowledge, not individuals. Everyone draws on the same themes, so not only is originality not helpful, it’s nonsensical to claim an idea as your own.
  • There are no dictionaries or authoritative sources. The right use of a word is determined by how it’s being used right now.
Literate culture promotes authority and ownership

Printed books enabled mass distribution and dispensed with the handicraft of manuscripts, alienating readers from the source of the ideas, and from each other (Ong, p. 100):

  • The printed text is an independent physical object. Ideas can be preserved as a thing, completely apart from the thinker.
  • Portable printed works enable individual consumption. The need and desire for private space accompanied the emergence of silent, solo reading.
  • Print creates a sense of private ownership of words. Plagiarism is possible.
  • Individual attribution is possible. The ability to identify a sole author increases the value of originality and creativity.
  • Print fosters a sense of closure. Once a work is printed, it is final and closed.

Print-based literacy ascended to a position of authority and cultural dominance, but it didn’t eliminate oral culture completely.

Technology brought us together again

All that studying allowed people to accumulate and share knowledge, speeding up the pace of technological change. And technology transformed communication in turn. It took less than 150 years to get from the telegraph to the World Wide Web. And with the web—a technology that requires literacy—Ong identified a return to the values of the earlier oral culture. He called this secondary orality. Then he died in 2003, before the rise of the mobile internet, when things really got interesting.

Secondary orality is:

  • Immediate. There is no necessary delay between the expression of an idea and its reception. Physical distance is meaningless.
  • Socially aware and group-minded. The number of people who can hear and see the same thing simultaneously is in the billions.
  • Conversational. This is in the sense of being both more interactive and less formal.
  • Collaborative. Communication invites and enables a response, which may then become part of the message.
  • Intertextual. The products of our culture reflect and influence one another.

Social, ephemeral, participatory, anti-authoritarian, and opposed to individual ownership of ideas—these qualities sound a lot like internet culture.

Wikipedia: Knowledge Talks

When someone mentions a genre of music you’re unfamiliar with—electroclash, say, or plainsong—what do you do to find out more? It’s quite possible you type the term into Google and end up on Wikipedia, the improbably successful, collaborative encyclopedia that could not exist without the internet.

According to Wikipedia, encyclopedias have existed for around two thousand years. Wikipedia has existed since 2001, and it’s the fifth most-popular site on the web. Wikipedia is not a publication so much as a society that provides access to knowledge. A volunteer community of “Wikipedians” continuously adds to and improves millions of articles in over 200 languages. It’s a phenomenon manifesting all the values of secondary orality:

  • Anyone can contribute anonymously and anyone can modify the contributions of another.
  • The output is free.
  • The encyclopedia articles are not attributed to any sole creator. A single article might have 2 editors or 1,000.
  • Each article has an accompanying “talk” page where editors discuss potential improvements, and a “history” page that tracks all revisions. Heated arguments are not documented. They take place as revisions within documents.

Wikipedia is disruptive in the true Clayton Christensen sense. It’s created immense value and wrecked an existing business model. Traditional encyclopedias are publications governed by authority, and created by experts and fact checkers. A volunteer project collaboratively run by unpaid amateurs shows that conversation is more powerful than authority, and that human knowledge is immense and dynamic.

In an interview with The Guardian, a British librarian expressed some disdain about Wikipedia.

The main problem is the lack of authority. With printed publications, the publishers must ensure that their data are reliable, as their livelihood depends on it. But with something like this, all that goes out the window. Philip Bradley, “Who knows?”, The Guardian, October 26, 2004

Wikipedia is immediate, group-minded, conversational, collaborative, and intertextual—secondary orality in action—but it relies on traditionally published sources for its authority. After all, anything new that changes the world does so by fitting into the world. As we design for new methods of communication, we should remember that nothing is more valuable simply because it’s new; rather, technology is valuable when it brings us more of what’s already meaningful.

From Documents to Events

Pages and documents organize information in space. Space used to be more of a constraint back when we printed conversation out. Now that the internet has given us virtually infinite space, we need to mind how conversation moves through time. Thinking about serving the needs of people in an internet-based culture requires a shift from thinking about how information occupies space—documents—to how it occupies time—events.

Texting means that we’ve never been more lively (yet silent) in our communications. While we still have plenty of in-person interactions, it’s gotten easy to go without. We text grocery requests to our spouses. We click through a menu in a mobile app to summon dinner (the order may still arrive at the restaurant by fax, proving William Gibson’s maxim that the future is unevenly distributed). We exchange messages on Twitter and Facebook instead of visiting friends in person, or even while visiting friends in person. We work at home and Slack our colleagues.

We’re rapidly approaching a future where humans text other humans and only speak aloud to computers. A text-based interaction with a machine that’s standing in for a human should feel like a text-based interaction with a human. Words are a fundamental part of the experience, and they are part of the design. Words should be the basis for defining and creating the design.

We’re participating in a radical cultural transformation. The possibilities manifest in systems like Wikipedia that succeed in changing the world by using technology to connect people in a single collaborative effort. And even those of us creating the change suffer from some lag. The dominant educational and professional culture remains based in literary values. We’ve been rewarded for individual achievement rather than collaboration. We seek to “make our mark,” even when designing changeable systems too complex for any one person to claim authorship. We look for approval from an authority figure. Working in a social, interactive way should feel like the most natural thing in the world, but it will probably take some doing.

Literary writing—any writing that emerges from the culture and standards of literacy—is inherently not interactive. We need to approach the verbal design not as a literary work, but as a conversation. Designing human-centered interactive systems requires us to reflect on our deep-seated orientation around artifacts and ownership. We must alienate ourselves from a set of standards that no longer apply.

Most advice on “writing for the web” or “creating content” starts from the presumption that we are “writing,” just for a different medium. But when we approach communication as an assembly of pieces of content rather than an interaction, customers who might have been expecting a conversation end up feeling like they’ve been handed a manual instead.

Software is on a path to participating in our culture as a peer.  So, it should behave like a person—alive and present. It doesn’t matter how much so-called machine intelligence is under the hood—a perceptive set of programmatic responses, rather than a series of documents, can be enough if they have the qualities of conversation.

Interactive systems should evoke the best qualities of living human communities—active, social, simple, and present—not passive, isolated, complex, or closed off.

Life Beyond Literacy

Indeed, language changes lives. It builds society, expresses our highest aspirations, our basest thoughts, our emotions and our philosophies of life. But all language is ultimately at the service of human interaction. Other components of language—things like grammar and stories—are secondary to conversation. Daniel L. Everett, How Language Began

Literacy has gotten us far. It’s gotten you this far in this book. So, it’s not surprising we’re attached to the idea. Writing has allowed us to create technologies that give us the ability to interact with one another across time and space, and have instantaneous access to knowledge in a way our ancestors would equate with magic. However, creating and exchanging documents, while powerful, is not a good model for lively interaction. Misplaced literate values can lead to misery—working alone and worrying too much about posterity.

So, it’s time to let go and live a little! We’re at an exciting moment. The computer screen that once stood for a page can offer a window into a continuous present that still remembers everything. Or, the screen might disappear completely.

Now we can start imagining, in an open-ended way, what constellation of connected devices any given person will have around them, and how we can deliver a meaningful, memorable experience on any one of them. We can step away from the screen and consider what set of inputs, outputs, events, and information add up to the best experience.

This is daunting for designers, sure, yet phenomenal for people. Thinking about human-computer interactions from a screen-based perspective was never truly human-centered from the start. The ideal interface is an interface that’s not noticeable at all—a world in which the distance from thought to action has collapsed and merely uttering a phrase can make it so.

We’re fast moving past “computer literacy.” It’s on us to ensure all systems speak human fluently.

Categories: thinktime

We make our own taste, and call it reality

Seth Godin - Thu 15th Mar 2018 19:03
Most of us say, "this is better, therefore I like it." In fact, the converse is what actually happens. "I like it, therefore I'm assuring you (and me) that this is better."        Seth Godin
Categories: thinktime


Seth Godin - Wed 14th Mar 2018 20:03
The latest episode of my Akimbo podcast is about hits. (Click then scroll down to see all the episodes, or, even better, subscribe for free...) The hit that sweeps an industry, like a thresher through a wheat field. The one that...        Seth Godin
Categories: thinktime

A DIY Web Accessibility Blueprint

a list apart - Wed 14th Mar 2018 00:03

The summer of 2017 marked a monumental victory for the millions of Americans living with a disability. On June 13th, a Southern District of Florida Judge ruled that Winn-Dixie’s inaccessible website violated Title III of the Americans with Disabilities Act. This case marks the first trial under the ADA, which was passed into law in 1990.

Despite spending more than $7 million to revamp its website in 2016, Winn-Dixie neglected to include design considerations for users with disabilities. Some of the features that were added include online prescription refills, digital coupons, rewards card integration, and a store locator function. However, it appears that inclusivity didn’t make the cut.

Because Winn-Dixie’s new website wasn’t developed to WCAG 2.0 standards, the new features it boasted were in effect only available to sighted, able-bodied users. When Florida resident Juan Carlos Gil, who is legally blind, visited the Winn-Dixie website to refill his prescriptions, he found it to be almost completely inaccessible using the same screen reader software he uses to access hundreds of other sites.

Juan stated in his original complaint that he “felt as if another door had been slammed in his face.” But Juan wasn’t alone. Intentionally or not, Winn-Dixie was denying an entire group of people access to their new website and, in turn, each of the time-saving features it had to offer.

What makes this case unique is that it marks the first time in history in which a public accommodations case went to trial, meaning the judge ruled the website to be a “place of public accommodation” under the ADA and therefore subject to ADA regulations. Since there are no specific ADA regulations regarding the internet, Judge Scola decided the adoption of the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA to be appropriate. (Thanks to the hard work of the Web Accessibility Initiative (WAI) at the W3C, WCAG 2.0 has found widespread adoption throughout the globe, either as law or policy.)

Learning to have empathy

Anyone with a product subscription service (think diapers, razors, or pet food) knows the feeling of gratitude that accompanies the delivery of a much needed product that arrives just in the nick of time. Imagine how much more grateful you’d be for this service if you, for whatever reason, were unable to drive and lived hours from the nearest store. It’s a service that would greatly improve your life. But now imagine that the service gets overhauled and redesigned in such a way that it is only usable by people who own cars. You’d probably be pretty upset.

This subscription service example is hypothetical, yet in the United States, despite federal web accessibility requirements instituted to protect the rights of disabled Americans, this sort of discrimination happens frequently. In fact, anyone assuming the Winn-Dixie case was an isolated incident would be wrong. Web accessibility lawsuits are rising in number. The increase from 2015 to 2016 was 37%. While some of these may be what’s known as “drive-by lawsuits,” many of them represent plaintiffs like Juan Gil who simply want equal rights. Scott Dinin, Juan’s attorney, explained, “We’re not suing for damages. We’re only suing them to follow the laws that have been in this nation for twenty-seven years.”

For this reason and many others, now is the best time to take a proactive approach to web accessibility. In this article I’ll help you create a blueprint for getting your website up to snuff.

The accessibility blueprint

If you’ll be dealing with remediation, I won’t sugarcoat it: successfully meeting web accessibility standards is a big undertaking, one that is achieved only when every page of a site adheres to all the guidelines you are attempting to comply with. As I mentioned earlier, those guidelines are usually WCAG 2.0 Level AA, which means meeting every Level A and AA requirement. Tight deadlines, small budgets, and competing priorities may increase the stress that accompanies a web accessibility remediation project, but with a little planning and research, making a website accessible is both reasonable and achievable.

My intention is that you may use this article as a blueprint to guide you as you undertake a DIY accessibility remediation project. Before you begin, you’ll need to increase your accessibility know-how, familiarize yourself with the principles of universal design, and learn about the benefits of an accessible website. Then you may begin to evangelize the benefits of web accessibility to those you work with.

Have the conversation with leadership

Securing support from company leadership is imperative to the long-term success of your efforts. There are numerous ways to broach the subject of accessibility, but, sadly, in the world of business, substantiated claims top ethics and moral obligation. Therefore I’ve found one of the most effective ways to build a business case for web accessibility is to highlight the benefits.

Here are just a few to speak of:

  • Accessible websites are inherently more usable, and consequently they get more traffic. Additionally, better user experiences result in lower bounce rates, higher conversions, and less negative feedback, which in turn typically make accessible websites rank higher in search engines.
  • Like assistive technology, web crawlers (such as Googlebot) leverage HTML to get their information from websites, so a well marked-up, accessible website is easier to index, which makes it easier to find in search results.
  • There are a number of potential risks for not having an accessible website, one of which is accessibility lawsuits.
  • Small businesses in the US that improve the accessibility of their website may be eligible for a tax credit from the IRS.
Start the movement

If you can’t secure leadership backing right away, you can still form a grassroots accessibility movement within the company. Begin slowly and build momentum as you work to improve usability for all users. Though you may not have the authority to make company-wide changes, you can strategically and systematically lead the charge for web accessibility improvements.

My advice is to start small. For example, begin by pushing for site-wide improvements to color contrast ratios (which would help color-blind, low-vision, and aging users) or work on making the site keyboard accessible (which would help users with mobility impairments or broken touchpads, and people such as myself who prefer not using a mouse whenever possible). Incorporate user research and A/B testing into these updates, and document the results. Use the results to champion further accessibility improvements.
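As a point of reference for those contrast fixes, WCAG 2.0 defines the contrast ratio between two colors as:

contrast ratio = (L1 + 0.05) / (L2 + 0.05)

where L1 and L2 are the relative luminances of the lighter and darker colors, respectively. Level AA requires at least 4.5:1 for normal text and 3:1 for large text, so you can verify a proposed palette with any contrast checker before pushing for the change.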

Read and re-read the guidelines

Build your knowledge base as you go. Learning which laws, rules, or guidelines apply to you, and understanding them, is a prerequisite to writing an accessibility plan. Web accessibility guidelines vary throughout the world. There may be other guidelines that apply to you, and in some cases, additional rules, regulations, or mandates specific to your industry.

Not understanding which rules apply to you, not reading them in full, or not understanding what they mean can create huge problems down the road, including excessive rework once you learn you need to make changes.

Build a team

Before you can start remediating your website, you’ll need to assemble a team. The number of people will vary depending on the size of your organization and website. I previously worked for a very large company with a very large website, yet the accessibility team they assembled was small in comparison to the thousands of pages we were tasked to remediate. This team included a project manager, visual designers, user experience designers, front-end developers, content editors, a couple requirements folks, and a few QA testers. Most of these people had been pulled from their full-time roles and instructed to quickly become familiar with WCAG 2.0. To help you create your own accessibility team, I will explain in detail some of the top responsibilities of the key players:

  • Project manager is responsible for coordinating the entire remediation process. They will help run planning sessions, keep everyone on schedule, and report the progress being made. Working closely with the requirements people, their goal is to keep every part of this new machine running smoothly.
  • Visual designers will mainly address issues of color usage and text alternatives. In its present form, WCAG 2.0 contrast minimums only apply to text; however, the much-anticipated WCAG 2.1 update (due to be released in mid-2018) contains a new success criterion for Non-text Contrast, which covers contrast minimums of all interactive elements and “graphics required to understand the content.” Visual designers should also steer clear of design trends that ruin usability.
  • UX designers should be checking for consistent, logical navigation and reading order. They’ll need to test that pages are using heading tags appropriately (headings are for semantic structure, not for visual styling). They’ll be checking to see that page designs are structured to appear and operate in predictable ways.
  • Developers have the potential to make or break an accessible website because even the best designs will fail if implemented incorrectly. If your developers are unfamiliar with WAI-ARIA, accessible coding practices, or accessible JavaScript, then they have a few things to learn. Developers should think of themselves as designers because they play a very important role in designing an inclusive user experience. Luckily, Google offers a short, free Introduction to Web Accessibility course and, via Udacity, a free, advanced two-week accessibility course. Additionally, The A11Y Project is a one-stop shop loaded with free pattern libraries, checklists, and accessibility resources for front-end developers.
  • Editorial reviews the copy for verbosity. Avoid using phrases that will confuse people who aren’t native language speakers. Don’t “beat around the bush” (see what I did there?). Keep content simple, concise, and easy to understand. No writing degree? No worries. There are apps that can help you improve the clarity of your writing and that correct your grammar like a middle school English teacher. Score bonus points by making sure link text is understandable out of context. While this is a WCAG 2.0 Level AAA guideline, it’s also easily fixed and it greatly improves the user experience for individuals with varying learning and cognitive abilities.
  • Analysts work in tandem with editorial, design, UX, and QA. They coordinate the work being done by these groups and document the changes needed. As they work with these teams, they manage the action items and follow up on any outstanding tasks, questions, or requests. The analysts also deliver the requirements specifications to the developers. If the changes are numerous and complex, the developers may need the analysts to provide further clarification and to help them properly implement the changes as described in the specs.
  • QA will need to be trained to the same degree as the other accessibility specialists since they will be responsible for testing the changes that are being made and catching any issues that arise. They will need to learn how to navigate a website using only a keyboard and also by properly using a screen reader (ideally a variety of screen readers). I emphasized “properly” because while anyone can download NVDA or turn on VoiceOver, it takes another level of skill to understand the difference between “getting through a page” and “getting through a page with standard keyboard controls.” Having individuals with visual, auditory, or mobility impairments on the QA team can be a real advantage, as they are more familiar with assistive technology and can test in tandem with others. Additionally, there are a variety of automated accessibility testing tools you can use alongside manual testing. These tools typically catch only around 30% of common accessibility issues, so they do not replace ongoing human testing. But they can be extremely useful in helping QA learn when an update has negatively affected the accessibility of your website.
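As a concrete example of the automated tools mentioned above, pa11y is one free command-line checker that can be pointed at a page or wired into a build. The URL below is a placeholder; treat this as a sketch of one tool among many, not a full testing strategy:

npm install -g pa11y
pa11y --standard WCAG2AA https://www.example.com/

As noted, results from tools like this complement manual keyboard and screen reader testing rather than replace it.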
Start your engines!

Divide your task into pieces that make sense. You may wish to tackle all the global elements first, then work your way through the rest of the site, section by section. Keep in mind that every page must adhere to the accessibility standards you’re following for it to be deemed “accessible.” (This includes PDFs.)

Use what you’ve learned so far by way of accessibility videos, articles, and guidelines to perform an audit of your current site. While some manual testing may seem difficult at first, you’ll be happy to learn that some manual testing is very simple. Regardless of the testing being performed, keep in mind that it should always be done thoroughly and by considering a variety of users, including:

  • keyboard users;
  • blind users;
  • color-blind users;
  • low-vision users;
  • deaf and hard-of-hearing users;
  • users with learning disabilities and cognitive limitations;
  • mobility-impaired users;
  • users with speech disabilities;
  • and users with seizure disorders.
When you are in the weeds, document the patterns

As you get deep in the weeds of remediation, keep track of the patterns being used. Start a knowledge repository for elements and situations. Lock down the designs and colors, code each element to be accessible, and test these patterns across various platforms, browsers, screen readers, and devices. When you know the elements are bulletproof, save them in a pattern library that you can pull from later. Having a pattern library at your fingertips will improve consistency and compliance, and help you meet tight deadlines later on, especially when working in an agile environment. You’ll need to keep this online knowledge repository and pattern library up-to-date. It should be a living, breathing document.

Cross the finish line … and keep going!

Some people mistakenly believe accessibility is a set-it-and-forget-it solution. It isn’t. Accessibility is an ongoing challenge to continually improve the user experience the way any good UX practitioner does. This is why it’s crucial to get leadership on board. Once your site is fully accessible, you must begin working on the backlogs of continuous improvements. If you aren’t vigilant about accessibility, people making even small site updates can unknowingly strip the site of the accessibility features you worked so hard to put in place. You’d be surprised how quickly it can happen, so educate everyone you work with about the importance of accessibility. When everyone working on your site understands and evangelizes accessibility, your chances of protecting the accessibility of the site are much higher.

It’s about the experience, not the law

In December of 2017, Winn-Dixie appealed the ruling in the case brought by blind patron Juan Carlos Gil. Their argument is that a website does not constitute a place of accommodation, and therefore, their case should have been dismissed. This case, and others, illustrate that the legality of web accessibility is still very much in flux. However, as web developers and designers, our motivation to build accessible websites should have nothing to do with the law and everything to do with the user experience.

Good accessibility is good UX. We should seek to create the best user experience for all. And we shouldn’t settle for simply meeting accessibility standards but rather strive to create an experience that delights users of all abilities.

Additional resources and articles

If you are ready to learn more about web accessibility standards and become the accessibility evangelist on your team, here are some additional resources that can help.

Categories: thinktime

