

The chance of a lifetime

Seth Godin - Fri 07th Oct 2016 19:10
That would be today. And every day, if you're up for it. The things that change our lives (and the lives of others) are rarely the long-scheduled events, the much-practiced speeches or the annual gala. No, it's almost certain that...        Seth Godin
Categories: thinktime

Do what you're good at, or...

Seth Godin - Thu 06th Oct 2016 20:10
get really good at what you do. You have nearly unlimited strategic choices and options about your career and what your organization does. Which means you can focus on doing things you are truly good at. Or, if a particular...        Seth Godin
Categories: thinktime

Breakage vs. references

Seth Godin - Wed 05th Oct 2016 20:10
Years ago, I asked fabled direct marketer Joe Sugarman about the money-back guarantee he offered on the stuff he sold through magazine ads. He said 10% of the people who bought asked for their money back... and if any product...        Seth Godin
Categories: thinktime

A Redesign with CSS Shapes

a list apart - Wed 05th Oct 2016 01:10

Here at An Event Apart (an A List Apart sibling) we recently refreshed the design of our “Why Should You Attend?” page, which had retained an older version of our site design and needed to be brought into alignment with the rest of the site. Along the way, we decided to enhance the page with some cutting-edge design techniques: non-rectangular float shapes and feature queries.

To be clear, we didn’t set out to create a Cutting Edge Technical Example™; rather, our designer (Mike Pick of Monkey Do) gave us a design, and we realized that his vision happened to align nicely with new CSS features that are coming into mainstream support. We were pleased enough with the results and the techniques that we decided to share them with the community.

Styling bubbles

Here are some excerpts from an earlier stage of the designs (Fig. 1). (The end-stage designs weren’t created as comps, so I can’t show their final form, but these are pretty close.)

Fig 1: Late-stage design comps showing “desktop” and “mobile” views.

What interested me was the use of the circular images, which at one point we called “portholes,” but I came to think of as “bubbles.” As I prepared to implement the design in code, I thought back to the talk Jen Simmons has been giving throughout the year at An Event Apart. Specifically, I thought about CSS Shapes and how I might be able to use them to let text flow along the circles’ edges—something like Fig. 2.

Fig 2: Flowing around a circular shape.

This layout technique used to be sort of possible by using crude float hacks like Ragged Float and Sliced Sandbags, but now we have float shapes! We can define a circle—or even a polygon—that describes how text should flow past a floated element.

“Wait a minute,” you may be saying, “I haven’t heard about widespread support for Shapes!” Indeed, you have not. They’re currently supported only in the WebKit/Blink family—Chrome, Safari, and Opera. But that’s no problem: in other browsers, the text will flow past the boxy floats the same way it always has. The same way it does in the design comps, in fact.

The basic CSS looks something like this:

img.bubble.left {
  float: left;
  margin: 0 40px 0 0;
  shape-outside: circle(150px at 130px 130px);
}

img.bubble.right {
  float: right;
  margin: 0 0 0 40px;
  shape-outside: circle(150px at 170px 130px);
}

Each of those bubble images, by the way, is intrinsically 260px wide by 260px tall. In wide views like desktops, they’re left to that size; at smaller widths, they’re scaled to 30% of the viewport’s width.
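The article doesn’t show the sizing rule itself, so here is a minimal sketch of how that responsive scaling might be written; the 40em breakpoint is an assumption, not taken from the original stylesheet:

```css
/* Hypothetical sketch: bubbles render at their intrinsic 260x260 size
   on wide viewports, and scale to 30% of the viewport width below an
   assumed 40em breakpoint. */
img.bubble {
  width: 260px;
  height: 260px;
}

@media (max-width: 40em) {
  img.bubble {
    width: 30vw;
    height: 30vw;
  }
}
```

Note that once the image scales, pixel-based circle() values no longer track it; percentage-based shape values can keep the circle proportional to the scaled image.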

Shape placement

To understand the shape setup, look at the left-side bubbles. They’re 260×260, with an extra 40 pixels of right margin. That means the margin box (that is, the box described by the outer edge of the margins) is 300 pixels wide by 260 pixels tall, with the actual image filling the left side of that box.

This is why the circular shape is centered at the point 130px 130px—it’s the midpoint of the image in question. So the circle is now centered on the image, and has a radius of 150px. That means it extends 20 pixels beyond the visible outer edge of the circle, as shown here (Fig. 3).

Fig 3: The 150px radius of the shape covers the entire visible part of the image, plus an extra 20px

In order to center the circles on the right-side bubbles, the center point has to be shifted to 170px 130px—traversing the 40-pixel left margin, and half the width of the image, to once again land on the center. The result is illustrated here, with annotations to show how each of the circles’ centerpoints are placed (Fig. 4).

Fig 4: Two of the circular shapes, as highlighted by Chrome’s Inspector and annotated in Keynote (!)

It’s worth examining that screenshot closely. For each image, the light blue box shows the element itself—the img element. The light orange is the basic margin area, 40 pixels wide in each case. The purple circle shows the shape-outside circle. Notice how the text flows into the orange area to come right up against the purple circle. That’s the effect of shape-outside. Areas of the margin outside that shape, and even areas of the element’s content outside the shape, are available for normal-flow content to flow into.

The other thing to notice is the purple circle extending outside the margin area.  This is misleading: any shape defined by shape-outside is clipped at the edge of the element’s margin box. So if I were to increase the circle’s radius to, say, 400 pixels, it would cover half the page in Chrome’s inspector view, but the actual layout of text would be around the margin edges of the floated image—as if there were no shape at all. I’d really like to see Chrome show this by fading the parts of the shape that extend past the margin box. (Firefox and Edge should of course follow suit!)
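You can verify that clipping with a throwaway rule like this (illustrative only; the 400px value is the oversized radius mentioned above):

```css
/* The shape is clipped at the margin box: despite the huge radius,
   text lays out exactly as it would against the plain margin edges,
   even though the inspector draws the full 400px circle. */
img.bubble.left {
  float: left;
  margin: 0 40px 0 0;
  shape-outside: circle(400px at 130px 130px);
}
```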

Being responsive

At this point, things seem great; the text flows past circular float shapes in Chrome/Safari/Opera, and past the standard boxy margin boxes in Firefox/Edge/etc. That’s fine as long as the page never gets so narrow as to let text wrap between bubbles—but, of course, it will, as we see in this screenshot (Fig. 5).

Fig 5: The perils of floats on smaller displays

For the right-floating images, it’s not so bad—but for the left floaters, things aren’t as nice. This particular situation is passably tolerable, but in a situation where just one or two words wrap under the bubble, it will look awful.

An obvious first step is to set some margins on the paragraphs so that they don’t wrap under the accompanying bubbles. For example:

.complex-content div:nth-child(even):not(:last-child) p {
  margin-right: 20%;
}

.complex-content div:nth-child(odd):not(:last-child) p {
  margin-left: 20%;
}

The point here being, for all even-numbered child divs (that aren’t the last child) in a complex-content context, add a 20% right margin; for the odd-numbered divs, a similar left margin.

That’s pretty good in Chrome (Fig. 6) (with the circular float shapes) because the text wraps along the bubble and then pushes off at a sensible point. But in Firefox, which still has the boxy floats, it creates a displeasing stairstep effect (Fig. 7).

Fig 6: Chrome (with float shapes)
Fig 7: Firefox (without float shapes)

On the flip side, increasing the margin to the point that the text all lines up in Firefox (33% margins) would mean that the float shape in Chrome would be mostly pointless, since the text would never flow down along the bottom half of the circles.

Querying feature support

This is where @supports came into play. By using @supports to run a feature query, I could set the margins for all browsers to the 33% needed when shapes aren’t supported, and then reduce it for browsers that do understand shapes. It goes something like this:

.complex-content div:nth-child(even):not(:last-child) p {
  margin-right: 33%;
}

.complex-content div:nth-child(odd):not(:last-child) p {
  margin-left: 33%;
}

@supports (shape-outside: circle()) {
  .complex-content div:nth-child(even):not(:last-child) p {
    margin-right: 20%;
  }

  .complex-content div:nth-child(odd):not(:last-child) p {
    margin-left: 20%;
  }
}

With that, everything is fine in the two worlds (Fig. 8 and Fig. 9). There are still a few things that could be tweaked, but overall, the effect is pleasing in browsers that support float shapes, and also those that don’t. The two experiences are shown in the following videos. (They don’t autoplay, so click at your leisure.)

ALA CSS Bubble Example, captured in Chrome; higher resolution available (mp4, 3.3MB)
ALA CSS Bubble Example, captured in Firefox; higher resolution available (mp4, 2.9MB)

Thanks to feature queries, as browsers like Firefox and MS Edge add support for float shapes, they’ll seamlessly get the experience that currently belongs only to Chrome and its brethren. There’s no browser detection to adjust later, no hacks to clear out. There’s only silent progressive enhancement baked right into the CSS itself. It’s pretty much “style and forget.”

Though this is arguably a minor enhancement, I really enjoyed the process of working with shapes and making them progressively and responsively enhanced. It’s a nice little illustration of how we can use advanced features of CSS right now, without the usual wait for widespread support. This is a pattern we’ll see much more of as we adopt shapes, flexbox, grid, and other cutting-edge layout tools, and I’m glad to be able to offer this case study.

Further reading

If you’d like to know more about float shapes and feature queries, I can do little better than to recommend the following articles.

Categories: thinktime

Indomitable is a mirage

Seth Godin - Tue 04th Oct 2016 19:10
One seductive brand position is the posture of being indomitable. Unable to be subdued, incapable of loss, the irresistible force and the immovable object, all in one. The public enjoys rooting for this macho ideal. Superman in real life, but...        Seth Godin
Categories: thinktime

Enough ethics?

Seth Godin - Mon 03rd Oct 2016 19:10
Most companies seek to be more profitable. They seek to increase their Key Performance Indicators. More referrals, more satisfaction, more loyalty. They seek to increase their market share, their dividends, their stock price. But ethics? In fact, most companies strive...        Seth Godin
Categories: thinktime


Seth Godin - Sun 02nd Oct 2016 18:10
Weasel words damage trust. And weasels are worth avoiding. There are two traps to look out for: Promotional weasel words. Every experienced marketing copywriter knows how to use them. "As much as half off," means, "There is at least one...        Seth Godin
Categories: thinktime

Overdraft protection

Seth Godin - Sat 01st Oct 2016 18:10
The problem with taking all we can get away with is that we fail to invest in a cushion, in goodwill, in a reserve for when things don't go the way we expect. Short-term thinking pays no attention to the...        Seth Godin
Categories: thinktime

Dropping the narrative

Seth Godin - Fri 30th Sep 2016 19:09
Okay, you don't like what your boss did yesterday or last week or last month. But today, right now, sitting across the table, what's happening? Narrating our lives, the little play-by-play we can't help carrying around, that's a survival mechanism....        Seth Godin
Categories: thinktime

Fully baked

Seth Godin - Thu 29th Sep 2016 18:09
In medical school, an ongoing lesson is that there will be ongoing lessons. You're never done. Surgeons and internists are expected to keep studying for their entire career—in fact, it's required to keep a license valid. Knowledge workers, though, the...        Seth Godin
Categories: thinktime

The ripples

Seth Godin - Wed 28th Sep 2016 19:09
Every decision we make changes things. The people we befriend, the examples we set, the problems we solve... Sometimes, if we're lucky, we get to glimpse those ripples as we stand at the crossroads. Instead of merely addressing the urgency...        Seth Godin
Categories: thinktime

Task Performance Indicator: A Management Metric for Customer Experience

a list apart - Wed 28th Sep 2016 00:09

It’s hard to quantify the customer experience. “Simpler and faster for users” is a tough sell when the value of our work doesn’t make sense to management. We have to prove we’re delivering real value—an increased success rate or reduced time-on-task, for example—to get their attention. Management understands metrics that link with other organizational metrics, such as lost revenue, support calls, or repeat visits. So we need to describe our environment with metrics of our own.

For the team I work with, that meant developing a remote testing method that would measure the impact of changes on customer experience—assessing alterations to an app or website in relation to a defined set of customer “top tasks.” The resulting metric is stable, reliable, and repeatable over time. We call it the Task Performance Indicator (TPI).

For example, if a task has a TPI score of 40 (out of 100), it has major issues. If you measure again in 6 months’ time but nothing has been done to address the issues, the testing score will again result in a TPI of 40.

In traditional usability testing, it has long been established that if you test with between three and eight people, you’ll find out if significant problems exist. Unfortunately, that’s not enough to reveal precise success rates or time-on-task measurements. What we’ve discovered from hundreds of tests over many years is that reliable and stable patterns aren’t apparent until you’re testing with between 13 and 18 people. Why is that?

When the number of participants ranges anywhere from 13–18 people, testing results begin to stabilize and you’re left with a reliable baseline TPI metric.
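That stabilization can be illustrated with a toy simulation; the numbers below are entirely made up and are not the article’s data:

```python
import random

# Illustrative simulation with invented numbers (not the article's data):
# each participant's task score is a noisy draw, and the running mean
# settles down as the sample grows toward 13-18 participants.

def running_means(scores):
    """Mean of the first n scores, for n = 1..len(scores)."""
    means, total = [], 0.0
    for n, score in enumerate(scores, start=1):
        total += score
        means.append(total / n)
    return means

random.seed(1)
scores = [random.gauss(62, 15) for _ in range(18)]
for n, mean in enumerate(running_means(scores), start=1):
    print(f"after {n:2d} participants: running mean = {mean:5.1f}")
```

Early estimates jump around with each new participant; by the mid-teens the running mean barely moves, which is what makes the score repeatable.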

The following chart shows why we can do this (Fig. 1).

Fig 1: TPI scores start to level out and stabilize as more participants are tested.

How TPI scores are calculated

We’ve spent years developing a single score that we believe is a true reflection of the customer experience when completing a task.

For each task, we present the user with a “task question” via live chat. Once they understand what they have to do, the user indicates that they are starting the task. At the end of the task, they must provide an answer to the question. We then ask people how confident they are in their answer.

A number of factors affect the resulting TPI score.

Time: We establish what we call the “Target Time”—how long it should take to complete the task under best practice conditions. The more they exceed the target time, the more it affects the TPI.

Time out: The person takes longer than the maximum time allocated. We set it at 5 minutes.

Confidence: At the end of each task, people are asked how confident they are. For example, low confidence in a correct answer would have a slight negative impact on the TPI score.

Minor wrong: The person is unsure; their answer is almost correct.

Disaster: The person has high confidence, but the wrong result; acting on this wrong answer could have serious consequences.

Gives up: The person gives up on the task.

A TPI of 100 means that the user has successfully completed the task within the agreed target times.
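The article doesn’t publish the exact scoring formula, so the weights below are invented purely to illustrate how these factors could combine into a single 0–100 score:

```python
# Hypothetical sketch of a TPI-style score. The actual weighting is not
# published, so every penalty value below is an assumption.

def tpi_score(seconds, target_seconds, outcome, confident, timeout=300):
    """Score one task attempt on a 0-100 scale (illustrative weights)."""
    score = 100.0

    # A confidently wrong answer ("disaster") scores zero, as does
    # giving up or hitting the 5-minute timeout.
    if outcome == "disaster":
        return 0.0
    if outcome == "gave_up" or seconds >= timeout:
        return 0.0

    # An almost-correct answer costs points (assumed penalty).
    if outcome == "minor_wrong":
        score -= 30.0

    # Exceeding the target time costs points in proportion to the
    # overrun (assumed 10 points per multiple of the target).
    if seconds > target_seconds:
        score -= 10.0 * (seconds / target_seconds - 1.0)

    # Low confidence in a correct answer has a slight negative impact.
    if outcome == "success" and not confident:
        score -= 5.0

    return max(score, 0.0)

# A task completed within target time, correctly and confidently:
print(tpi_score(60, 90, "success", confident=True))  # 100.0
```

An overall TPI for a site would then be the average of such per-task scores across the 13–18 participants and the full set of task questions.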

In the following chart, the TPI score is 61 (Fig. 2).

Fig 2: A visual breakdown of sample results for Overall Task Performance, Mean Completion Times, and Mean Target Times.

Developing task questions

Questions are the greatest source of potential noise in TPI testing. If a question is not worded correctly, it will invalidate the results. To get an overall TPI for a particular website or app, we typically test 10-12 task questions. In choosing a question, keep in mind the following:

Based on customer top tasks. You must choose task questions that are examples of top tasks. If you measure and then seek to improve the performance of tiny tasks (low demand tasks) you may be contributing to a decline in the overall customer experience.

Repeatable. Create task questions that you can test again in 6 to 12 months.

Representative and typical. Don’t make the task questions particularly difficult. Start off with reasonably basic, typical questions.

Universal, everyone can do it. Every one of your test participants must be able to do each task. If you’re going to be testing a mixture of technical, marketing, and sales people, don’t choose a task question that only a salesperson can do.

One task, one unique answer. Limit each task question to only one actual thing you want people to do, and one unique answer.

Does not contain clues. The participant will examine the task question like Sherlock Holmes would hunt for a clue. Make sure it doesn’t contain any obvious keywords that could be answered by conducting a search.

Short—30 words or less. Remember, the participant is seeing each task question for the first time, so aim to keep its length at less than 20 words (and definitely less than 30).

No change within testing period. Choose questions where the website or app is not likely to change during the testing period. Otherwise, you’re not going to be testing like with like.

Case Study: Task questions for OECD

Let’s look at some top tasks for the customers of Organisation for Economic Co-operation and Development (OECD), an economic and policy advice organization.

  1. Access and submit country surveys, reviews, and reports.
  2. Compare country statistical data.
  3. Retrieve statistics on a particular topic.
  4. Browse a publication online for free.
  5. Access, submit, and review working papers.

Based on that list, these task questions were developed:

  1. What are OECD’s latest recommendations regarding Japan’s healthcare system?
  2. In 2008, was Vietnam on the list of countries that received official development assistance?
  3. Did more males per capita die of heart attacks in Canada than in France in 2004?
  4. What is the latest average starting salary, in US dollars, of a primary school teacher across OECD countries?
  5. What is the title of Box 1.2 on page 73 of OECD Employment Outlook 2009?
  6. Find the title of the latest working paper about improvements to New Zealand’s tax system.

Running the test

To test 10-12 task questions usually takes about one hour, and you’ll need between 13 and 18 participants (we average 15). Make sure that they’re representative of your typical customers. 

We’ve found that remote testing is better, faster, and cheaper than traditional lab-based measurement for TPI testing. With remote testing, people are more likely to behave in a natural way because they are in their normal environment—at home or in the office—and using their own computer. That makes it much easier for someone to give you an hour of their time, rather than spend the morning at your lab. And since the cost is much lower than lab-based tests, we can set them up more quickly and more often. It’s even convenient to schedule them using Webex, GoToMeeting, Skype, etc.

The key to a successful test is that you are confident, calm, and quiet. You’re there to facilitate the test—not to guide it or give opinions. Aim to become as invisible as possible.

Prior to beginning the test, introduce yourself and make sure the participant gives you permission to record the session. Next, ask that they share their screen. Remember to stress that you are only testing the website or app—not them. Ask them to go to an agreed start point where all the tasks will originate. (We typically choose the homepage for the site/app, or a blank tab in the browser.)

Explain that for each task, you will paste a question into the chat box found on their screen. Test the chat box to confirm that the participant can read it, and tell them that you will also read the task aloud a couple of times. Once they understand what they have to do, ask them to indicate when they start the task, and that they must give an answer once they’ve finished. After they’ve completed the task, ask the participant how confident they are in their answer.

Analyzing the results

As you observe the tests, you’re looking for patterns. In particular, look for the major reasons people give for selecting the wrong answer or exceeding the target time.

Video recordings of your customers as they try—and often fail—to complete their tasks have powerful potential. They are the raw material of empathy. When we identify a major problem area during a particular test, we compile a video of three to six affected participants, selecting less than a minute’s worth of footage showing each of them struggling with the problem. We edit these snippets into a combined video (which we try to keep under three minutes) and get as many stakeholders as possible to watch it. Seek to distribute these videos as widely and as often as possible.

How Cisco uses the Task Performance Indicator

Every six months or so, we measure several tasks for Cisco, including the following:

Task: Download the latest firmware for the RV042 router.

The top task of Cisco customers is downloading software. When we started the Task Performance Indicator for software downloads in 2010, a typical customer might take 15 steps and more than 300 seconds to download a piece of software. It was a very frustrating and annoying experience. The Cisco team implemented a continuous improvement process based on the TPI results. Every six months, the Task Performance Indicator was carried out again to see what had been improved and what still needed fixing. By 2012—for a significant percentage of software—the number of steps to download software had been reduced from 15 to 4, and the time on task had dropped from 300 seconds to 40 seconds. Customers were getting a much faster and better experience.

According to Bill Skeet, Senior Manager of Customer Experience for Cisco Digital Support, implementing the TPI has had a dramatic impact on how people think about their jobs:

We now track the score of each task and set goals for each task. We have assigned tasks and goals to product managers to make sure we have a person responsible for managing the quality of the experience ... Decisions in the past were driven primarily by what customers said and not what they did. Of course, that sometimes didn’t yield great results because what users say and what they do can be quite different.

Troubleshooting and bug fixing are also top tasks for Cisco customers. Since 2012, we’ve tested the following.

Task: Ports 2 and 3 on your ASR 9001 router, running v4.3.0 software, intermittently stop functioning for no apparent reason. Find the Cisco recommended fix or workaround for this issue.

Fig 3: Bug Task Success Rate Comparisons, February 2012 through December 2014.

For a variety of reasons, it was difficult to solve the underlying problems connected with finding the right bug fix information on the Cisco website. Thus, the scores from February 2012 to February 2013 did not improve in any significant way.

For the May 2013 measurement, the team ran a pilot to show how (with the proper investment) it could be much easier to find bug fix information. As we can see in the preceding image, the success rate jumped. However, it was only a pilot and by the next measurement it had been removed and the score dropped again. The evidence was there, though, and the team soon obtained resources to work on a permanent fix. The initial implementation was for the July 2014 measurement, where we see a significant improvement. More refinements were made, then we see a major turnaround by December 2014.

Task: Create a new guest account to access the website and log in with this new account.

Fig 4: Success/Failure rates from March 2015 through June 2015

This task was initially measured in 2014; the results were not good.

In fact, nobody succeeded in completing the task during the March 2014 measurements, resulting in three specific design improvements to the sign-up form. These involved:

  1. Clearly labelling mandatory fields
  2. Improving password guidance
  3. Eliminating address mismatch errors.

A shorter pilot form was also launched as a proof of concept. Success jumped by 50% in the July 2014 measurements, but dropped 21% by December 2014 because the pilot form was no longer there. By June 2015, a shorter, simpler form was fully implemented, and the success again reached 50%.

The team was able to show that because of their work:

  • The three design improvements improved the success rate by 29%.
  • The shorter form improved the success rate by 21%.

That’s very powerful. You can isolate a piece of work and link it to a specific increase in the TPI. You can start predicting that if a company invests X it will get a Y TPI increase. This is control and the route to power and respect within your organization, or to trust and credibility with your client.

If you can link it with other key performance indicators, that’s even more powerful.

The following table shows that improvements to the registration form halved the support requests connected with guest account registration (Fig. 5).

Fig 5: Registration Support Requests, Q1 2014, Q2 2015, and Q3 2015.

A more simplified guest registration process resulted in:

  • Support requests reduced from 1,500 a quarter to fewer than 700
  • Three fewer people required to support customer registration
  • An 80% productivity improvement
  • Registration time cut from 3:25 to 2 minutes

Task: Pretend you have forgotten the password for the Cisco account and take whatever actions are required to log in.

When we measured this change-passwords task, we found a 37% failure rate.

A process of improvement was undertaken, as can be seen by the following chart, and by December 2013, we had a 100% success rate (Fig. 6).

Fig 6: Progression of success rate improvement from November 2012 to December 2013.

A 100% success rate is a fantastic result. Job done, right? Wrong. In digital, the job is never done; it is an ever-evolving environment. You must keep measuring the top tasks, because the digital environment they exist within is constantly changing. Stuff gets added, stuff gets removed, and stuff just breaks (Fig. 7).

Fig 7: Comparison of success rates, March 2014 and July 2014.

When we measured again in March 2014, the success rate had dropped to 59% because of a technical glitch. It was quickly dealt with, so the rate shot back up to 100% by July.

At every step of the way, the TPI gave us evidence about how well we were doing our job. It has really helped us fight against “bright shiny object” disease and the tendency for everyone to have an opinion on what we put on our webpages, because we have data to back it up. It also gave us more insight into how content organization played a role in our work for Cisco, something that Jeanne Quinn (senior manager responsible for the Cisco Partner) told us kept things clear and simple while working with the client.

The TPI allows you to express the value of your work in ways that makes sense to management. If it makes sense to management—and if you can prove you’re delivering value—then you get more resources and more respect.

Categories: thinktime

Wedding syndrome

Seth Godin - Tue 27th Sep 2016 19:09
Running a business is a lot more important than starting one. Choosing and preparing for the job you'll do for the next career is a much more important task than getting that job. Serving is more important than the campaign....        Seth Godin
Categories: thinktime

Spectator sports

Seth Godin - Mon 26th Sep 2016 19:09
Every year, we spend more than a trillion dollars worth of time and attention on organized spectator sports. The half-life of a sporting event is incredibly short. Far more people are still talking about the Godfather movie or the Nixon...        Seth Godin
Categories: thinktime

Anxiety loves company

Seth Godin - Sun 25th Sep 2016 18:09
Somehow, at least in our culture, we find relief when others are anxious too. So we spread our anxiety, stoking it in other people, looking for solace in the fear in their eyes. And thanks to the media, to the...        Seth Godin
Categories: thinktime

Looking for the trick

Seth Godin - Sat 24th Sep 2016 19:09
When you find a trick, a shortcut, a hack that gets you from here to there without a lot of sweat or risk, it's really quite rewarding. So much so that many successful people are hooked on the trick, always...        Seth Godin
Categories: thinktime

Skills vs. talents

Seth Godin - Fri 23rd Sep 2016 19:09
If you can learn it, it's a skill. If it's important, but innate, it's a talent. The thing is, almost everything that matters is a skill. If even one person is able to learn it, if even one person is...        Seth Godin
Categories: thinktime

For the weekend...

Seth Godin - Fri 23rd Sep 2016 03:09
New podcast with Brian Koppelman Classic podcast with Krista Tippett Unmistakable Creative from 2015 And a video of Creative Mornings and their podcast The Your Turn book continues to spread. Have you seen it yet? Early-bird pricing on the huge...        Seth Godin
Categories: thinktime

Widespread confusion about what it takes to be strong

Seth Godin - Thu 22nd Sep 2016 18:09
Sometimes we confuse strength with: Loudness Brusqueness An inability to listen A resistance to seeing the world as it is An unwillingness to compromise small things to accomplish big ones Fast talking Bullying External unflappability Callousness Lying Policies instead of...        Seth Godin
Categories: thinktime

Big fish in a little pond

Seth Godin - Wed 21st Sep 2016 19:09
There's no doubt that the big fish gets respect, more attention and more than its fair share of business as a result. The hard part of being a big fish in a little pond isn't about being the right fish....        Seth Godin
Categories: thinktime

