
thinktime

Contempt is contagious

Seth Godin - Sun 30th Aug 2015 18:08
The only emotion that spreads more reliably is panic. Contempt is caused by fear and by shame and it looks like disgust. It's very hard to recover once you receive contempt from someone else, and often, our response is to...        Seth Godin
Categories: thinktime

Francois Marier: Letting someone ssh into your laptop using Pagekite

Planet Linux Australia - Sun 30th Aug 2015 07:08

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop, set up a pagekite frontend on my Linode server, and set up a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org. 3600 IN CNAME fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
    CheckHostIP no
    ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was quite happy setting things up temporarily on the command line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.
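As a rough sketch only (the Debian and Fedora packages document their own preferred mechanisms, which should be used instead), the backend command above could be kept running across reboots with a small systemd unit; the unit name, file location and pagekite install path here are assumptions for illustration:

# /etc/systemd/system/pagekite-backend.service (hypothetical unit name)
[Unit]
Description=pagekite backend exposing local ssh
After=network-online.target

[Service]
ExecStart=/usr/bin/pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1
Restart=on-failure

[Install]
WantedBy=multi-user.target

It could then be enabled with systemctl enable pagekite-backend.service followed by systemctl start pagekite-backend.service.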

Categories: thinktime

The average

Seth Godin - Sat 29th Aug 2015 19:08
Everything you do is either going to raise your average or lower it. The next hire. The quality of the chickpeas you serve. The service experience on register 4. Each interaction is a choice. A choice to raise your average...        Seth Godin
Categories: thinktime

Scientific Management 2.0

Seth Godin - Fri 28th Aug 2015 18:08
130 years ago, Frederick Taylor changed the world forever. Scientific Management is the now-obvious idea that factories would measure precisely what their workers were doing. Use a stopwatch. Watch every movement. Adjust the movements until productivity goes up. Re-organize the...        Seth Godin
Categories: thinktime

Stewart Smith: Running OPAL in qemu – the powernv platform

Planet Linux Australia - Fri 28th Aug 2015 13:08

Ben has a qemu tree up with some work-in-progress patches to qemu to support the PowerNV platform. This is the “bare metal” platform like you’d get on real POWER8 hardware running OPAL, and it allows us to use qemu like my previous post used the POWER8 Functional Simulator – to boot OpenPower firmware.

To build qemu for this, follow these steps:

apt-get -y install gcc python g++ pkg-config libz-dev libglib2.0-dev \
  libpixman-1-dev libfdt-dev git
git clone https://github.com/ozbenh/qemu.git
cd qemu
./configure --target-list=ppc64-softmmu
make -j `grep -c processor /proc/cpuinfo`

This will leave you with a ppc64-softmmu/qemu-system-ppc64 binary. Once you’ve built your OpenPower firmware to run in a simulator, you can boot it!

Note that this qemu branch is under development, and is likely to move/change or even break.

I do it like this:

cd ~/op-build/output/images; # so skiboot.lid is in pwd
~/qemu/ppc64-softmmu/qemu-system-ppc64 -m 1G -M powernv \
  -kernel zImage.epapr -nographic \
  -cdrom ~/ubuntu-vivid-ppc64el-mini.iso

and this lets me test that we launch the Ubuntu Vivid installer correctly.

You can easily add other qemu options such as additional disks or networking and verify that it works correctly. This way, you can do development on some skiboot functionality or a variety of kernel and op-build userspace (such as the petitboot bootloader) without needing either real hardware or using the simulator.
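For instance, as a hedged sketch only (the disk image name is made up, and I haven't verified exactly which virtio devices this work-in-progress powernv model accepts), extra storage and user-mode networking might be attached with the usual qemu flags:

~/qemu/ppc64-softmmu/qemu-system-ppc64 -m 1G -M powernv \
  -kernel zImage.epapr -nographic \
  -drive file=test-disk.img,format=raw,if=virtio \
  -netdev user,id=net0 -device virtio-net-pci,netdev=net0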

This is useful if, say, you’re running on ppc64el, for which the POWER8 functional simulator is currently not available.

Categories: thinktime

Stewart Smith: doing nothing on modern CPUs

Planet Linux Australia - Fri 28th Aug 2015 12:08

Sometimes you don’t want to do anything. This is understandably human, and probably a sign you should either relax or get up and do something.

For processors, you sometimes do actually want to do absolutely nothing. Often this will be while waiting for a lock. You want to do nothing until the lock is free, but you want to be quick about it, you want to start work once that lock is free as soon as possible.

On CPU cores with more than one thread (e.g. hyperthreading on Intel, SMT on POWER) you likely want to let the other threads have all of the resources of the core if you’re sitting there waiting for something.

So, what do you do? On x86 there’s been the PAUSE instruction for a while, and on POWER there are the SMT priority instructions.

The x86 PAUSE instruction delays execution of the next instruction for some amount of time. On POWER, each executing thread in a core has a priority, and this is how chip resources are handed out (you can set different priorities using special no-op instructions, as well as setting the Relative Priority Register to map how these coarse-grained priorities are interpreted by the chip).
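As a rough illustration (these are not the exact macros InnoDB or the kernel use, and the function names are made up), the two mechanisms look something like this in GCC-style inline assembly:

/* x86: hint to the core that we're sitting in a spin-wait loop */
static inline void cpu_relax_x86(void)
{
        __asm__ __volatile__("pause");
}

/* POWER: the special no-op encodings that change SMT thread priority */
static inline void smt_low(void)    { __asm__ __volatile__("or 1,1,1"); } /* drop to low priority */
static inline void smt_medium(void) { __asm__ __volatile__("or 2,2,2"); } /* back to normal priority */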

So, when you’re writing spinlock code (or similar, such as the implementation of mutexes in InnoDB) you want to check if the lock is free, and if not, spin for a bit, but at a lower priority than the code running in the other thread that’s doing actual work. The idea being that when you do finally acquire the lock, you bump your priority back up and go do actual work.

Usually, you don’t continually check the lock, you do a bit of nothing in between checking. This is so that when the lock is contended, you don’t just jam every thread in the system up with trying to read a single bit of memory.

So you need a trick to do nothing that the compiler isn’t going to optimize away.

Current (well, MySQL 5.7.5, but it’s current in MariaDB 10.0.17+ too, and other MySQL versions) code in InnoDB to “do nothing” looks something like this:

ulint
ut_delay(ulint delay)
{
        ulint   i, j;

        UT_LOW_PRIORITY_CPU();

        j = 0;

        for (i = 0; i < delay * 50; i++) {
                j += i;
                UT_RELAX_CPU();
        }

        if (ut_always_false) {
                ut_always_false = (ibool) j;
        }

        UT_RESUME_PRIORITY_CPU();

        return(j);
}

On x86, UT_RELAX_CPU() ends up being the PAUSE instruction.

On POWER, the UT_LOW_PRIORITY_CPU() and UT_RESUME_PRIORITY_CPU() macros tune the SMT thread priority (and on x86 they’re defined as nothing).

If you want an idea of when this was all written, this comment may be a hint:

/*!< in: delay in microseconds on 100 MHz Pentium */

But, if you’re not on x86 you don’t have the PAUSE instruction, instead, you end up getting this code:

# elif defined(HAVE_ATOMIC_BUILTINS)
# define UT_RELAX_CPU() do { \
        volatile lint   volatile_var; \
        os_compare_and_swap_lint(&volatile_var, 0, 1); \
        } while (0)

Which you may think “yep, that does nothing and is not optimized away by the compiler”. Except you’d be wrong! What it actually does is generates a lot of memory traffic. You’re now sitting in a tight loop doing atomic operations, which have to be synchronized between cores (and sockets) since there’s no real way that the hardware is going to be able to work out that this is only a local variable that is never accessed from anywhere.

Additionally, the ut_always_false and j variables there are also attempts to trick the compiler into not optimizing the loop away, and since ut_always_false is a global, you’re generating traffic to a single global variable too.

Instead, what’s needed is a compiler barrier. This simple bit of nothing tells the compiler “pretend memory has changed, so you can’t optimize around this point”.

__asm__ __volatile__ ("":::"memory")

So we can eliminate all sorts of useless non-work and instead do what we want: nothing (a for loop of X iterations that isn’t optimized away by the compiler), with no side effects.
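Putting that together, a minimal sketch of the kind of delay loop this is aiming for (not the actual patch attached to the bug report; the names here are made up) could look like:

#define cpu_barrier() __asm__ __volatile__("" ::: "memory")

/* Spin for roughly 'delay' iterations without generating memory traffic
   and without the compiler being able to optimise the loop away. */
static void
relaxed_delay(unsigned long delay)
{
        unsigned long   i;

        for (i = 0; i < delay * 50; i++) {
                cpu_barrier();  /* compiler must assume memory may have changed */
        }
}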

In MySQL bug 74832 I detailed this with the appropriately produced POWER assembler. Unfortunately, this patch (submitted under the OCA) has sat since November 2014 (so, over 9 months) with no action. I’m a bit disappointed by that to be honest.

Anyway, the real moral of this story is: don’t implement your own locking primitives. You’re either going to get it wrong or you’ll be wrong in a few years when everything changes under you.


Categories: thinktime

Nishant Kothary on the Human Web: “Buy Him A Coffee”

a list apart - Thu 27th Aug 2015 22:08

My first job out of college was as a program manager. Program Manager is one of those job titles that sounds important because it implies that there exists a Program, and you have been anointed to Manage it. Who doesn’t want to be boss!

As with all impressive-sounding things, program management job descriptions are littered with laughable bullets like:

Must be proficient at influencing others without authority.

Which may as well be written as:

Life is.

Or:

Thing is Thing.

Pretty much every freshman PM ignores that qualification, and interviewers rarely test for it. We take for granted that the ability to influence people is important (true), and that we are all acceptably good at it (false).

For most of us, the first time our ability to influence people is truly tested is at our first job. And most of us fail that first test.

When I first realized I was terrible at influencing people, I projected the problem outward and saw it as a product of the environment I worked in. “It’s not me, it’s them,” I’d tell my friends at work and my management chain. As I wrote in my first column, my boss would say to me, “It is what it is.” This would instantly make me want to either have at the world with an axe or drive my Outback straight up into the North Cascades, hike until I ran into a grizzly, give her cub a wet willy, and submit to the fateful paw of death.

I also blamed my nature. If you are to believe the results of the informal quiz I took in Susan Cain’s Quiet: The Power of Introverts in a World That Can’t Stop Talking, my score of 18/20 suggests I am as introverted as they come. And while I come across as an extrovert now—behavior I’ve practiced over years—nothing about interacting with people feels natural to me. This is not to say that introverts (or I) dislike people. It’s more like what Dr. Seuss said about children, “In mass, [they] terrify me.”

My first breakthrough came when a colleague at work saw me having a particularly difficult struggle to convince an individual from another team to expedite some work for mine, and suggested, “Buy him a coffee.” The kind of advice that feels like it fell out of a Dale Carnegie book into an inspirational poster of two penguins holding hands. PENGUINS DON’T EVEN HAVE HANDS. But I did it anyway because I was at my wit’s end.

I met him at Starbucks, and picked up the tab for his latte. We grabbed some chairs and awkwardly, wordlessly stared at our coffees.

Panicked at the mounting silence, I tried the first thing that came to mind. What I didn’t know then was that it’s a cornerstone technique of people who are good at influencing others: I asked him something about himself.

“So, are you from Seattle?”

“Indiana, actually.”

“No way. I attended college in Indiana!”

Soon enough, we realized we had far more in common than we’d expected; including cats that, judging by their attitudes, probably came from the same satanic litter. While I still wasn’t able to get him to commit to our team’s deadline, I did walk away with a commitment that he’d do his best to come close to it.

More importantly, I’d inadvertently happened upon a whole new set of tools to help me achieve my goals. I didn’t realize it then, but I had just learned the first important thing about influencing people: it’s a skill—it can be learned, it can be practiced, and it can be perfected.

I became aware of a deficit in my skillset, and eventually I started working on it proactively. It’s been a decade since that first coffee. While I’m still (and suspect, always will be) a work in progress, I have come a long way.

You can’t learn how to influence people overnight, because (as is true for all sophisticated skills) there’s a knowledge component that’s required. It often differs from person to person, but it does take time and investment. Generally speaking, it involves filling gaps about your knowledge of humans: how we think, what motivates us, and as a result, how we behave. I keep a list of the books that helped me along the way, including Carnegie’s almost-century-old classic, How to Win Friends and Influence People. But as Carnegie himself wrote, “Knowledge isn’t power until it is applied.”

What will ultimately decide whether you become someone who can influence others is your commitment to practice. Depending on your nature, it will either come easier to you, or be excruciatingly hard. But even if you’re an extrovert, it will take practice. There is no substitute for the field work.

What I can promise you is that learning how to earn trust, be liked, and subsequently influence people will be a worthwhile investment not only for your career, but also for your life. And even if you don’t get all the way there—I am far from it—you’ll be ahead of most people for just having tried.

So my advice to you is: instead of avoiding that curmudgeon at work, go buy them a coffee.

Categories: thinktime

The strawberry conundrum

Seth Godin - Thu 27th Aug 2015 19:08
Every grocer has to decide: when packing a quart of strawberries, should your people put the best ones on top? If you do, you'll sell more and disappoint people when they get to the moldy ones on the bottom. Or,...        Seth Godin
Categories: thinktime

Hear for health

Teaser:  ‘Blindness separates people from things, deafness separates people from people’ – Immanuel Kant. The World Health Organization considers participating in society one of the three key parameters of healthy living and ageing, along with physical health, mental health, and security. Hearing impairment can create obstacles to engaging with others for children, adults and the ageing population; many deaf children are academically behind their peers, and hearing difficulties in the elderly are related to cognitive decline in the e


Categories: thinktime

Donna Benjamin: D8 Accelerate - Game over?

Planet Linux Australia - Thu 27th Aug 2015 11:08
Thursday, August 27, 2015 - 10:47

The Drupal 8 Accelerate campaign has raised over two hundred and thirty thousand dollars ($233,519!!).  That's a lot of money! But our goal was to raise US$250,000 and we're running out of time. I've personally helped raise $12,500 and I'm aiming to raise 8% of the whole amount, which equals $20,000. I've got less than $7500 now to raise. Can you help me? Please chip in.

Most of my colleagues on the board have contributed anchor funding via their companies. As a micro-enterprise, my company Creative Contingencies is not in a position to do that, so I set out to crowdfund my share of the fundraising effort.

I'd really like to shout out and thank EVERYONE who has made a contribution to get me this far. Whether you donated cash, or helped to amplify my voice, thank you SO so soooo much. I am deeply grateful for your support.

If you can't, or don't want to, contribute because you do enough for Drupal, that's OK! I completely understand. You're awesome. :) But perhaps you know someone else who is using Drupal, or who will be using Drupal, whom you could ask to help us? Do you know someone, or an organisation, who gets untold value from the effort of our global community? Please ask them, on my behalf, to Make a Donation.

If you don't know anyone, perhaps you can help simply by sharing my plea? I'd love that help. I really would!

And if you, like some others I've spoken with, don't think people should be paid to make Free Software then I urge you to read Ashe Dryden's piece on the ethics of unpaid labor in the Open Source Community. It made me think again.

Do you want to know more about how the money is being spent? 

See: https://assoc.drupal.org/d8-accelerate-awarded-grants

Perhaps you want to find out how to apply to spend it on getting Drupal8 done?

See: https://assoc.drupal.org/d8-accelerate-application

Are you curious about the governance of the program?

See: https://www.drupal.org/governance/d8accelerate

And just once more, with feeling, I ask you to please consider making a donation.

So how much more do I need to get it done? To get to GAME OVER?

  • 1 donation x $7500 = game over!
  • 3 donations x $2500
  • 5 donations x $1500
  • 10 donations x $750
  • 15 donations x $500 <== average donation
  • 75 donations x $100 <== most common donation
  • 100 donations x $75
  • 150 donations x $50
  • 500 donations x $15
  • 750 donations x $10 <== minimum donation

Thank you for reading this far. Really :-)

Categories: thinktime

James Morris: Linux Security Summit 2015 – Wrapup, slides

Planet Linux Australia - Thu 27th Aug 2015 05:08

The slides for all of the presentations at last week’s Linux Security Summit are now available at the schedule page.

Thanks to all of those who participated, and to all the events folk at Linux Foundation, who handle the logistics for us each year, so we can focus on the event itself.

As with the previous year, we followed a two-day format, with most of the refereed presentations on the first day, with more of a developer focus on the second day.  We had good attendance, and also this year had participants from a wider field than the more typical kernel security developer group.  We hope to continue expanding the scope of participation next year, as it’s a good opportunity for people from different areas of security, and FOSS, to get together and learn from each other.  This was the first year, for example, that we had a presentation on Incident Response, thanks to Sean Gillespie who presented on GRR, a live remote forensics tool initially developed at Google.

The keynote by kernel.org sysadmin, Konstantin Ryabitsev, was another highlight, one of the best talks I’ve seen at any conference.

Overall, it seems the adoption of Linux kernel security features is increasing rapidly, especially via mobile devices and IoT, where we now have billions of Linux deployments out there, connected to everything else.  It’s interesting to see SELinux increasingly play a role here, on the Android platform, in protecting user privacy, as highlighted in Jeffrey Vander Stoep’s presentation on whitelisting ioctls.  Apparently, some major corporate app vendors, who were not named, have been secretly tracking users via hardware MAC addresses, obtained via ioctl.

We’re also seeing a lot of deployment activity around platform Integrity, including TPMs, secure boot and other integrity management schemes.  It’s gratifying to see the work our community has been doing in the kernel security/ tree being used in so many different ways to help solve large scale security and privacy problems.  Many of us have been working for 10 years or more on our various projects  — it seems to take about that long for a major security feature to mature.

One area, though, that I feel needs significantly more work is kernel self-protection: hardening the kernel against exploitation of coding flaws.  I’m hoping that we can find ways to work with the security research community on incorporating more hardening into the mainline kernel.  I’ve proposed this as a topic for the upcoming Kernel Summit, as we need buy-in from core kernel developers.  I hope we’ll have topics to cover on this, then, at next year’s LSS.

We overlapped with Linux Plumbers, so LWN was not able to provide any coverage of the summit.  Paul Moore, however, has published an excellent write-up on his blog. Thanks, Paul!

The committee would appreciate feedback on the event, so we can make it even better for next year.  We may be contacted via email per the contact info at the bottom of the event page.

Categories: thinktime

This week's sponsor: Craft

a list apart - Thu 27th Aug 2015 00:08

Time to look for a new CMS? Our sponsor Craft keeps the editing experience simple, flexible, and responsive.

Categories: thinktime

Embarrassed

Seth Godin - Wed 26th Aug 2015 19:08
It’s a tool or a curse, and it comes down to the sentence, “I’d be embarrassed to do that.” If you’re using it to mean, “I would feel the emotion of embarrassment,” you’re recognizing one of the most powerful forces...        Seth Godin
Categories: thinktime

Thinking Responsively: A Framework for Future Learning

a list apart - Wed 26th Aug 2015 00:08

Before the arrival of smartphones and tablets, many of us took a position of blissful ignorance. Believing we could tame the web’s inherent unpredictability, we prescribed requirements for access, prioritizing our own needs above those of users.

As our prescriptions grew ever more detailed, responsive web design signaled a way out. Beyond offering a means of building device-agnostic layouts, RWD initiated a period of reappraisal; not since the adoption of web standards has our industry seen such radical realignment of thought and practice.

In the five years since Ethan Marcotte’s article first graced these pages, thousands of websites have launched with responsive layouts at their core. During this time, we’ve experimented with new ways of working, and refined our design and development practice so that it’s more suited to a fluid, messy medium.

As we emerge from this period of enlightenment, we need to consolidate our learning and consider how we build upon it.

A responsive framework

When we think of frameworks, we often imagine software libraries and other abstractions concerned with execution and code. But this type of framework distances us from the difficulties we face designing for the medium. Last year, when Ethan spoke about the need for a framework, he proposed one focused on our approach—a framework to help us model ongoing discussion and measure the quality and appropriateness of our work.

I believe we can conceive this framework by first agreeing on a set of underlying design principles. You may be familiar with the concept. Organizations like GOV.UK and Google use them to define the characteristics of their products and even their organizations. Kate Rutter describes design principles as:

…short, insightful phrases that act as guiding lights and support the development of great product experiences. Design principles enable you to be true to your users and true to your strategy over the long term. (emphasis mine)

The long-term strategy of the web is to enable universal access to information and services. This noble goal is fundamental to the web’s continued relevance. Our design principles must operate in the service of this vision, addressing:

  • Our users: By building inclusive teams that listen to—and even work alongside—users, we can achieve wider reach.
  • Our medium: By making fewer assumptions about context and interface, focusing more on users’ tasks and goals, we can create more adaptable products.
  • Ourselves: By choosing tools that are approachable, simple to use, and open to change, we can elicit greater collaboration within teams.
Reflecting diversity in our practice

In surveying the landscape of web-enabled devices, attempting to categorize common characteristics can prove foolhardy. While this breadth and fluidity can be frustrating at times, device fragmentation is merely a reflection of human diversity and consumers exercising their right to choose.

Until recently, empathy for consumers has largely been the domain of user experience designers and researchers. Yet while a badly designed interface can adversely affect a site’s usability, so can a poorly considered technology choice. We all have a responsibility to consider how our work may affect the resulting experience.

Designing for everyone

Universal design promotes the creation of products that are usable by anyone, regardless of age, ability, or status. While these ideas enjoy greater recognition among architects and product designers, they are just as relevant to our own practice.

Consider OXO Good Grips kitchen utensils. In 1989, Sam Farber, inspired by his wife’s arthritis, redesigned the conventional vegetable peeler, replacing its metal handles with softer grips. Now anyone, regardless of strength or manual dexterity, could use this tool—and Farber’s consideration for aesthetics ensured broader appeal as well. His approach was applied to a range of products; Good Grips became an internationally recognized, award-winning brand, while Farber’s company, OXO, went on to generate significant sales.

This work brought the concept of inclusive design into the mainstream. OXO remains an advocate of designing inherently accessible products, noting that:

When all users’ needs are taken into consideration in the initial design process, the result is a product that can be used by the broadest spectrum of users. In the case of OXO, it means designing products for young and old, male and female, left- and right-handed and many with special needs.

Many of the technologies and specifications we use already feature aspects of universal design. Beyond specifications like WAI-ARIA that increase the accessibility of dynamic interfaces, HTML has long included basic accessibility features: the alt attribute allows authors to add textual alternatives to images, while the object element allows fallback content to be provided if a particular media plug-in or codec isn’t available.

Examples can also be found within the W3C and WHATWG. A key principle used in the design of HTML5 concerns itself with how authors should assess additions or changes to the specification. Called the priority of constituencies, it states that:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

We can use this prioritization when making choices on our own projects. While a client-side MVC framework might provide a degree of developer convenience, if it means users need to download a large JavaScript file before an application can be accessed, then we should look for an alternative approach.

Bridging the gap

When makers are attached to high-resolution displays and super-fast broadband connections, it can be difficult for them to visualize how users may experience the resulting product on a low-powered mobile device and an unreliable cellular network. The wider the gap between those making a product and those using it, the greater the likelihood that the former will make the wrong choice. We must prioritize getting closer to our users.

User research and usability testing help us see how users interact with our products. Having different disciplines (developers, interface and content designers, product managers) participate can ensure this learning is widely shared. But we can always do more. Susan Robertson recently wrote about how spending a week answering support emails gave her new insights into how customers were using the application she was building:

Rather than a faceless person typing away on a keyboard, users become people with names who want to use what you are helping to create.

Having the entire team collectively responsible for the end experience means usability and accessibility considerations will remain key attributes of the final product—but what if that team is more inclusive, too? In her article “Universal Design IRL,” Sara Wachter-Boettcher notes that:

[T]he best way to understand the audiences we design for is to know those audiences. And the best way to know people is to have them, with all their differences of perspective and background—and, yes, age and gender and race and language, too—right alongside us.

Perhaps it’s no coincidence that as we learn more about the diversity of our customers, we’ve started to acknowledge the lack of diversity within our own industry. By striving to reflect the real world, we can build more empathetic and effective teams, and in turn, better products.

Building on adaptable foundations

By empathizing with users, we can make smarter choices. Yet the resulting decisions will need to travel across unreliable networks before being consumed by different devices with unknown characteristics. It’s hard to make decisions if you’re unable to predict the outcomes.

By looking at websites through different lenses, we can uncover areas of constraint that offer the greatest degree of reach and adaptability. If an interface works on a mobile device, it’ll work on a larger display. If data can be interrogated when there’s no network, an unreliable connection will be of little hindrance. If content forms the basis of our design, information will be available regardless of styling. Optimizations based on more uncertain assumptions can be layered on afterwards, safe in the knowledge that we’ve provided fallbacks.

Interface first

In 2009, Luke Wroblewski asked us to consider how interfaces could take advantage of mobile device capabilities before thinking about their manifestation in desktop browsers. Mobile-first thinking encourages us to focus: phone displays leave little room for extraneous interface or content, so we need to know what matters most. By asking questions about which parts of an interface are critical and which are not, we can decide whether those non-critical parts are loaded conditionally or lazily—or perhaps not at all.

Network first

In 2013, in considering the realities of network reliability, Alex Feyerke proposed an offline-first approach. Rather than treat offline access as an edge case, we can create seamless experiences that work regardless of connectivity—by preemptively downloading assets and synchronizing data when online, and using aggressive caching and client-side computation when not. Others have suggested starting with URLs or an API-first approach, using these lenses to think about where content lives within a system. Each approach embraces the underlying fabric of the web to help us build more robust and resilient products.

Content first

In 2011, Mark Boulton signaled a move away from a canvas-in approach, to one where layouts are designed from the content out. By defining visual relationships based on page elements, and using ratios instead of fixed values, we can imbue a page with connectedness, independent of its dimensions.

Recognizing that having content available before we design a page can be an unreasonable request, Mark later suggested we consider structure first, content always. This fits in well with the Core Model, an idea first introduced by Are Halland at the IA Summit in 2007. By asking questions about a site’s core content—what task it needs to perform, what information it should convey—we can help clients think more critically about their strategic objectives, business goals, and user needs. Ida Aalen recently noted:

The core model is first and foremost a thinking tool. It helps the content strategist identify the most important pages on the site. It helps the UX designer identify which modules she needs on a page. It helps the graphic designer know which are the most important elements to emphasize in the design. It helps include clients or stakeholders who are less web-savvy in your project strategy. It helps the copywriters and editors leave silo thinking behind and create better content.

Sharing the toolbox

Having empathized with our users and navigated an unpredictable medium, we need to ensure that our decisions and discoveries are shared across teams.

As responsive design becomes embedded within organizations, these teams are increasingly collaborative and cross-functional. Previously well-defined roles are beginning to merge, the boundaries between them blurring. Job titles and career opportunities are starting to reflect this change too: see “full-stack developer” or “product designer.” Tools that were once the preserve of specific disciplines are being borrowed, shared, and repurposed; prototyping an animation may require writing JavaScript, while building a modular component library may require understanding visual language and design theories.

If the tools used are too opaque, and processes difficult to adopt, then opportunities for collaboration will diminish. Make a system too complex, and onboarding new members of a team will become difficult and time-consuming. We need to constantly make sure our work is accessible to others.

Considerate code

The growing use of front-end style guides is one example of a maturing relationship between disciplines. Rather than producing static, bespoke layouts, designers are employing more systematic design approaches. Front-end developers are taking these and building pattern libraries and modular components, a form of delivery that fits in better with backend development approaches.

Component-driven development has seen a succession of tools introduced to meet this need. Tools like Less and Sass allow us to modularize, concatenate, and minify stylesheets, yet they can also introduce procedural functionality into CSS, a language deliberately designed to be declarative and easier to reason with. However, if consideration is given to other members of the team, this new functionality can be used to extend CSS’s existing declarative feature set. By using mixins and functions, we can embed a design language within code, and propagate naming conventions that are understood by the whole team.

Common conventions

Quite often, problems of process are not a limitation of technology, but an unwillingness to apply critical thought. Trying to solve technology problems by employing more technology ignores the fact that establishing conventions can be just as helpful, and easier for others to adopt.

The BEM naming methodology helps CSS styles remain scoped, encapsulated, and easier to maintain, yet this approach has no dependency on a particular technology; it is purely a set of documented conventions. Had we foreseen the need, we could have been using BEM in 2005. A similar convention is that of CSS namespaces, as advocated by Harry Roberts. Using single-letter coded prefixes means everyone working on a project can understand the purpose of different classes, and know how they should be used.

A common complaint for those wishing to use software like preprocessors and task runners is that they often require knowledge of the command line. Tools tease new recruits with one-line install instructions, but the reality often involves much hair-pulling and shaving of yaks. To counter this, GitHub created Boxen, a tool that means anyone in their company can run a local instance of GitHub.com on their own computer by typing a single command. GitHub, and other companies like Bocoup and the Financial Times, also advocate using standard commands for installing, testing, and running new projects.

Responsive principles, responsive to change

Since responsive web design invited us to create interfaces that better meet the needs of users, it’s unsurprising that related discussion has increasingly focused on having greater consideration for users, the medium, and each other.

If we want to build a web that is truly universal, then we must embrace its unpredictable nature. We must listen more closely to the full spectrum of our audience. We must see opportunities for optimization where we previously saw barriers to entry. And we must consider our fellow makers in this process by building tools that will help us navigate these challenges together.

These principles should shape our approach to responsive design—and they, in turn, may need to adapt as well. This framework can guide us, but it, too, should be open to change as we continue to build, experiment, and learn.

Categories: thinktime

Multimodal Perception: When Multitasking Works

a list apart - Wed 26th Aug 2015 00:08

Word on the street is that multitasking is impossible. The negative press may have started with HCI pioneer Clifford Nass, who published studies showing that people who identify as multitaskers are worse at context switching, worse at filtering out extraneous information, worse at remembering things over the short term, and have worse emotional development than unitaskers.

With so much critical attention given to multitasking, it’s easy to forget that there are things our brains can do simultaneously. We’re quite good at multimodal communication: communication that engages multiple senses, such as visual-tactile or audio-visual. Understanding how we process mixed input can influence the design of multimedia presentations, tutorials, and games.

When I began researching multimodal communication, I discovered a field brimming with theories. The discipline is still too new for much standardization to have evolved, but many studies of multimodality begin with Wickens’s multiple resource theory (MRT). And it’s that theory that will serve as a launch point for bringing multimodality into our work.

Wickens’s multiple resource theory

Luckily, Wickens saved us some heavy lifting by writing a paper summarizing the decades of research (PDF) he spent developing MRT. Its philosophical roots, he explains, are in the 1940s through 1960s, when psychologists theorized that time is a bottleneck; according to this view, people can’t process two things simultaneously. But, Wickens explains, such theories don’t hold up when considering “mindless” tasks, like walking or humming, that occupy all of a person’s time but nevertheless leave the person free to think about other things.

Several works from the late 1960s and early 1970s redefine the bottleneck theory, proposing that what is limited is, in fact, cognitive processing power. Following this train of thought, humans are like computers with a CPU that can only deal with a finite amount of information at once. This is the “resource” part of MRT: the limitation of cognitive resources to deal with incoming streams of information. (MRT thus gives credence to the “mobile first” approach; it’s often best to present only key information up front because of people’s limited processing power.)

The “multiple” part of the theory deals with how processing is shared between somewhat separate cognitive resources. I say somewhat separate because even for tasks using seemingly separate resources, there is still a cost of executive control over the concurrent tasks. This is again similar to computer multiprocessing, where running a program on two processors is not twice as efficient as running it on one, because some processing capacity must be allocated to dividing the work and combining the results.

To date, Wickens and others have examined four cognitive resource divisions.

Processing stage

Perception and cognition share a structure separate from the structure used for responding. Someone can listen while formulating a response, but cannot listen very well while thinking very hard. Thus, time-based presentations need ample pauses to let listeners process the message. Video players should have prominent pause buttons; content should be structured to include breaks after key parts of a message.

Visual channel

Focal and ambient visual signals do not drain the same pool of cognitive resources. This difference may result from ambient vision seemingly requiring no processing at all. Timed puzzle games such as Tetris use flashing in peripheral vision to let people know that their previous action was successful—the row was cleared!—even while they’re focusing on the next piece falling.

Processing code

Spatial and verbal processing codes use resources based in separate hemispheres of the brain. This may account for the popularity of grid-based word games, which use both pools of resources simultaneously.

Perceptual modes

It’s easier to process two simultaneous streams of information if they are presented in two different modes—one visual and one auditory, for example. Wickens notes that this relative ease may result from the difficulties of scanning (between two visual stimuli) and masking (of one auditory stimulus by another) rather than from us actually having separate mental structures. Tower defense games are faster paced (and presumably more engaging) when accompanied by an audio component; players can look forward to the next wave of attackers while listening for warning signals near their tower base. Perceptual modes is the cognitive division most applicable to designing multimedia, so it’s the one we’ll look at further.

A million and one other theories

Now that we’ve covered Wickens’s multiple resource theory, let’s look at some of the other theories vying for dominance to explain how people understand multimodal information.

The modality effect (PDF) focuses on the mode (visual, auditory, or tactile) of incoming information and states that we process incoming information in different modes using separate sensory systems. Information is not only perceived in different modes, but is also stored separately; the contiguity effect states that the simultaneous presentation of information in multiple modes supports learning by helping to construct connections between the modes’ different storage areas. An educational technology video, for instance, will be more effective if it includes an audio track to reinforce the visual information.

This effect corresponds with the integration step of Richard Mayer’s generative theory of multimedia learning (PDF), which states that we learn by selecting relevant information, organizing it, and then integrating it. Mayer’s theory in turn depends upon other theories. (If you’re hungry for more background, you can explore Baddeley’s theory of working memory, Sweller’s cognitive load theory, Paivo’s dual-coding theory, and Penney’s separate stream hypothesis.) Dizzy yet? I remember saying something about how this field has too many theories…

What all these theories point to is that people generally understand better, remember better, and suffer less cognitive strain if information is presented in multiple perceptual modes simultaneously. The theories provide academic support for incorporating video into your content, for example, rather than providing only text or text with supporting images (says, ahem, the guy writing only text).

Visual-tactile vs. visual-auditory communication

Theories are all well and good, but application is even better. You may well be wondering how to put the research on multimodal communication to use. The key is to recognize that certain combinations of modes are better suited to some tasks than to others.

Visual-tactile

Use visual-tactile presentation to support quick responses. It will:

  • reduce reaction time
  • increase performance (measured by completion time)
  • capture attention effectively (for an alert or notification)
  • support physical navigation (by vibrating more when you near a target, for example)
Visual-auditory

Use visual-auditory presentation to prevent errors and support communication. “Wait, visual-auditory?” you may be thinking. “I don’t want to annoy my users with sound!” It’s worth noting, though, that one of the studies (PDF) found that as long as sounds are useful, they are not perceived as annoying. Visual-auditory presentation will:

Mode combination

You might also select a combination of modes depending on how busy your users are:

  • Visual-tactile presentation is more effective with a high workload or when multitasking.
  • Visual-auditory presentation is more effective with a single task and with a normal workload.
Multimodal tension

A multimodal tug-of-war goes on between the split-attention effect and the redundancy effect. Understanding these effects can help us walk the line between baffling novices with split attention and boring experts with redundancy:

  • The split-attention effect states that sequential presentation in multiple modes is bad for memory, while simultaneous presentation is good. Simultaneity helps memorization because it is necessary to encode information in two modes simultaneously in order to store cross-references between the two in memory.
  • In contrast, presenting redundant information through multiple channels simultaneously can hinder learning by increasing cognitive load without increasing the amount of information presented. Ever try reading a long quote on a slide while a presenter reads the same thing aloud? The two streams of information undermine each other because of the redundancy effect.

Which effect occurs is partially determined by whether users are novices or experts (PDF). Information that is necessary to a novice (suggesting that it should be presented simultaneously to avoid a split-attention effect) could appear redundant to an expert (suggesting that it should be removed to avoid a redundancy effect).

Additionally, modality effects appear only when limiting visual presentation time. When people are allowed to set their own time (examining visual information after the end of the auditory presentation), studied differences disappear. It is thus particularly important to add a secondary modality to your presentation if your users are likely to be in a hurry.

Go forth, multiprocessing human, and prosper

So the next time you hear someone talking about how multitasking is impossible, pause. Consider how multitasking is defined. Consider how multiprocessing may be defined separately. And recognize that sometimes we can make something simpler to learn, understand, or notice by making it more complex to perceive. Sometimes the key to simplifying presentation isn’t to remove information—it’s to add more.

And occasionally, some things are better done one after another. The time has come for you to move on to the next processing stage. Now that you’ve finished reading this article, you have the mental resources to think about it.

Categories: thinktime

The one thing that will change everything

Seth Godin - Tue 25th Aug 2015 19:08
That introduction you need. The capital that your organization is trying to raise. The breakthrough in what you're building... Have you noticed that as soon as you get that one thing, everything doesn't change? In fact, the only thing that...        Seth Godin
Categories: thinktime

BlueHackers: The Legacy of Autism and the Future of Neurodiversity

Planet Linux Australia - Tue 25th Aug 2015 09:08

The New York Times published an interesting review of a book entitled “NeuroTribes: The Legacy of Autism and the Future of Neurodiversity”, authored by Steve Silberman (534 pp. Avery/Penguin Random House).

Silberman describes how autism was discovered by a few different people around the same time, but in each case the publicity around their work was warped by their environment and political situation.

This means that we mainly know the angle that one of those people took, which in turn warps our view of Asperger's and autism. Ironically, the lesser known story is actually that of Hans Asperger.

I reckon it’s an interesting read.

Categories: thinktime

James Purser: Mark got a booboo

Planet Linux Australia - Mon 24th Aug 2015 23:08

Mark Latham losing his AFR column because an advertiser thought his abusive tweets and articles weren't worth being associated with isn't actually a freedom of speech issue.

Nope, not even close to it.

Do you know why?

Because freedom of speech DOES NOT MEAN YOU'RE ENTITLED TO A GODS DAMNED NEWSPAPER COLUMN!!

No one is stopping Latho from spouting his particular down-home "outer suburban dad" brand of putrescence.

Hell, all he has to do to get back up and running is go and set up a wordpress account, and he can be back emptying his bile duct on the internet along with the rest of us who don't get cushy newspaper jobs because we managed to completely screw over our political careers in a most spectacular way.

Hey, he could set up a Patreon account and everyone who wants to can support him directly, either via a monthly sub or at a per-flatulence rate.

This whole thing reeks of a massive sense of entitlement, both with Latho himself and his media supporters. Bolt, Devine and others who have leapt to his defence all push this idea that any move to expose writers to consequences arising from their rantings is some sort of mortal offence against democracy and freedom. Of course, while they do this, they demand the scalps of anyone who dares to write abusive rants against their own positions.

Sigh.

Oh and as I've been reminded, Australia doesn't actually have Freedom of Speech as they do in the US.

Blog Categories: media
Categories: thinktime

Sooner or later, the critics move on

Seth Godin - Mon 24th Aug 2015 19:08
Sooner or later, the ones who told you that this isn't the way it's done, the ones who found time to sneer, they will find someone else to hassle. Sooner or later, they stop pointing out how much hubris you've...        Seth Godin
Categories: thinktime

Vision of immune cells rallying to destroy invaders captured for the first time

Teaser:  Dr Scott Mueller and colleagues from the Department of Microbiology and Immunology at the Peter Doherty Institute for Infection and Immunity used state-of-the-art microscopy to capture images of the interactions of three crucial types of immune cells rallying to destroy an infection. This article originally appeared in the Newsroom on 21 August. View the original here. The intricate interplay between immune cells working to defeat infection has been seen and photographed for the first time.


Categories: thinktime
