Andy Clarke experimented with the max decoded size for images on iOS and found that file size isn’t the only thing to keep an eye on.
I’m sure this isn’t an original thought of mine, but it just popped into my head and I think it’s something of a “fundamental truth” that all software developers need to keep in mind:
Writing software is easy. The hard part is writing software that works.
All too often, we get so caught up in the rush of building something that we forget that it has to work – and, all too often, we fail in some fundamental fashion, whether it’s “doesn’t satisfy the user’s needs” or “you just broke my $FEATURE!” (which is the context I was thinking of).
With over three years of responsive web design in our collective portfolios, we now have a solid set of design patterns for making websites work on small devices. But what about larger screens?
It’s become common for sites to employ a liquid design for smaller breakpoints, which allows the content to expand and contract as necessary to make the most of the available screen width. At the opposite end of the spectrum, however, many of those same sites have a maximum width of 960 pixels or so, which can leave a lot of unused pixels on a contemporary desktop display.
Designing for the big screen can be complicated—negative space, scale, density, and layout devices such as grids, modules, and columns can be factors in managing hierarchy and emphasis.
Large screens are also generally shaped in a wide landscape orientation, a poor fit for the traditional vertically scrolled webpage. As with smaller screens, there are a wide variety of screen sizes and resolutions—but in the case of larger screens, the differences are often magnified, ranging from ultra-light 11-inch laptops to 30-inch desktop monitors.
Given these conditions, it’s not surprising that many desktop layouts (like this one) are designed to suit a 1024x768 resolution. It’s a leftover from an earlier era, when designs were constrained to the screen resolution that was most prevalent amongst users. Today, with the majority of desktop users on screens that are wider than 1024 pixels, a maximized browser window can turn that carefully considered 960-pixel layout into a monolith in a field of whitespace.
More people are accessing the internet with a mobile device every year, and so it makes sense to concentrate budgets and timelines on creating good user experiences for smaller screens. Mobile layouts can be perfectly usable on larger devices, but the same cannot always be said for desktop layouts viewed on small screens.
But by embracing large screens, designers have the opportunity to work within a larger fold, presenting the user with more content at once, lessening scrolling on longer pages, and creating a richer, more expansive user experience. And by using the same practices we developed to adapt layouts to smaller screens, and by identifying some common patterns for large screens, we need not introduce extra cost or time to our projects.

Content challenges
As with any design, the first consideration when approaching larger breakpoints is content. Long- and short-form writing, photography, ecommerce, video, and web applications may each benefit from a different approach.
Photography, search results, and other content presented in grid format are easy candidates for wide screens. Showing as much content as the screen can accommodate allows a user to quickly scan and compare results.
On the other hand, long-form reading is a challenge for wider breakpoints. Long line lengths can make it difficult to follow the text from line to line, while short line lengths can introduce a sense of jumpiness or acceleration, breaking a reader’s rhythm and pacing.
To make reading more comfortable, a designer needs to balance the width of the text column (the measure) against the size and line-height (leading) of each line of text. Classically, an appropriate measure for a single column of text is seven to 10 words (Josef Müller-Brockmann) or 45 to 75 characters (Robert Bringhurst). Put another way, Bringhurst also notes that the measure of a conventional book column is about 30 times the type size, though this number may range from 20 to 40 times the size of the type.
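For what it’s worth, these rules of thumb translate quite directly into CSS. A minimal sketch, with illustrative values rather than prescriptions:

```css
/* Hold the measure near Bringhurst's 45-75 character range; the ch
   unit approximates the width of one character. */
article {
  max-width: 65ch;
  margin: 0 auto;      /* center the column on wide viewports */
  line-height: 1.5;    /* comfortable leading at this measure */
}

@media (min-width: 1400px) {
  article {
    font-size: 1.125em;  /* slightly larger type on big screens... */
    line-height: 1.6;    /* ...with a touch more leading to match */
  }
}
```

Because ch scales with the font size, the character count per line stays roughly constant even as the type grows.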
Wider columns can use more line-height to make it easier to follow the text from line to line, but too much line-height can cause lines to drift apart, resembling a college research paper. Similarly, as the text size in a column grows larger, the number of lines that can be presented vertically on the screen grows smaller, increasing the need for scrolling and breaking the reader’s immersion. Simply scaling the text for larger breakpoints is a limited solution.

Working with long reads
The Great Discontent demonstrates how a site can use art direction to adapt to larger screens without necessarily filling every single pixel in the browser window. Each article expands its feature art at the top to fill the viewport, resulting in a striking full-bleed effect upon first viewing. The main content of each article is set in a relatively narrow main column, but sidebars, pull quotes, and inline art expand beyond the central column. Breaking the content out of the main column creates an asymmetrical shape which complements the full-width artwork at the top—creating the illusion of a full-window experience without compromising legibility. Large images like these can come at a cost, though, as a balance between image quality and the overall page weight needs to be considered.

The Great Discontent uses bold feature art at the top of each article to fill the viewport. The Roger Ebert site scales up most page elements, reducing the visible content.
The recently relaunched Roger Ebert site deals with large breakpoints by simply scaling up the maximum width of the page and the page elements proportionately. In theory this might work, but the execution is not entirely successful. Elements such as headers scale up vertically as well as horizontally, meaning the amount of content displayed within the fold is drastically reduced. Inexplicably, main body copy on the more text-heavy pages does not scale up in proportion to the other page elements, so it seems dwarfed in comparison, in addition to being set in a size that is too small for the main column measure.

Medium places comments contextually, in the extended right margin.
Using the extended margins of larger screens for related or tangential content, such as Medium’s comments layout, is another idea that seems well suited for long-form publishing. When the main content column is maximized on smaller screens, it moves aside to reveal the comments area; on larger screens, the comments are revealed in the available margin space.
I’ve also always liked Grantland’s use of the lower right column for footnotes, which takes advantage of wider screens while maintaining focus on a readable central column. Photographs, figures, asides, quotes, and other related content can be extended out into the margins of wider viewports. This allows a designer to extend the vertical grid outward to create variety while preserving the flow of the main text.
Newer CSS features like columns and regions could be useful tools to enhance long-form reading on wider screens. CSS-based columns are now supported across most new browsers, and could be deployed within sections of an article to maximize screen usage while maintaining a good measure for text readability. If you have a large screen, for example, see my column-based demo of this article.
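As a rough sketch of how that might look, with illustrative breakpoint and measure values:

```css
/* Columnize article sections only when the viewport can hold at
   least two readable measures; column-width lets the browser decide
   how many columns actually fit. */
@media (min-width: 1200px) {
  .article-section {
    -webkit-column-width: 24em;  /* vendor prefixes for current browsers */
    -moz-column-width: 24em;
    column-width: 24em;
    column-gap: 2.5em;
  }
}
```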
As a progressive enhancement measure, older browsers that do not support these features could be restricted to a single column of appropriate measure.

Chunking content on large screens
Breaking content into chunks allows users to quickly and efficiently process information on content-heavy pages, and it’s a natural fit for responsive designs, because it allows content to be easily stacked hierarchically or arranged in columns for different breakpoints.
The advantage of this technique for large screens is that each chunk or band of content can use a different layout to optimize for legibility or impact. A good example of this approach is the Manchester City Council site, which uses different groups of modules in restricted widths together with a full-width photography chunk to create impact and emotion. The layout adapts fluidly to different viewports while retaining an appropriate width and layout for the content of each chunk.

Manchester City Council and content chunks. Juliana Bicycles treats content chunks more visually.
Juliana Bicycles uses a more visual approach to content chunking, combining horizontal bands with flexible tiles to create a rich and compelling responsive site that also scales to large screen widths. Navigation is recast as a full-window carousel with rich background photographs. Content is presented in modular blocks, and the gutters that appear between tiles are removed at tablet and mobile screen sizes. A paper-texture background fills in empty tile spaces and also helps fill out the screen at the largest breakpoint. Using image-based modules in this way can be expensive from a bandwidth perspective, but it is a great way to get the user to navigate quickly by showing rather than telling.

Tiling modular content
The obvious advantage of a big screen is the ability to see a lot of content at one time.

Google Images shows as many images as possible in the viewport.
With collection-based content such as photos, tiling can be an effective way to fill large screens. We see this every day when searching Google Images—the results spread out to fill the viewport, presenting a large variety to choose from in a single scan.

Pinterest’s tiled pages play to the scrapbooking or collector metaphor.
Pinterest also uses a tiling layout for images, with the addition of text and whitespace to mitigate what could be an overly busy layout. On larger screens the image preview modules seem to tile indefinitely. For a collection site, where the user experience is about quickly collecting and marking favorites, filling the viewport with thumbnails makes it easier to scan and creates a satisfying sense of fullness.

Uniqlo’s wide view allows shoppers to compare items visually.
Uniqlo uses a wide, tiled-image layout that also looks well-designed and spacious. Items are chunked together with large headers acting as bumpers between sets to add breathing room. Tiling the products across a wide area allows shoppers to quickly compare items visually. At the same time, the whitespace, model photos, and variety in scale add refinement to the overall look and feel and help reinforce the point that design is an important differentiator in Uniqlo’s product line.
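The tiling itself needs very little code. One way to sketch it with flexbox, assuming illustrative class names:

```css
/* A fluid tile grid: tiles wrap to fill the available width, so a
   wider screen simply shows more items per row. */
.tile-grid {
  display: flex;
  flex-wrap: wrap;
}
.tile-grid .tile {
  flex: 1 1 16em;   /* grow to fill each row, never start below ~16em */
  margin: 0.5em;
}
```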
Neither Pinterest nor Google Images is a responsive or adaptive site—they both employ a separate site for mobile users. Uniqlo is adaptive only toward larger screens; small screens get the narrowest desktop layout. While these sites may not be complete models for responsive design, we can look to them as examples for expanding this type of content.

Graphic techniques
Another interesting technique for larger screens draws more on classic print design than on restructuring or manipulating content to fill the browser.

Elements in the grid extend to the edges of the window at Institut Choiseul.
Institut Choiseul confines the content of each page to a structured grid in the center of the window, but effectively stakes out a large-screen presence by extending a field of color from the logo and main page content outward toward the left edge of the viewport. The Back to Top link appears in the lower left corner of the viewport when the page is scrolled, a small touch that claims the entire window for the page. The strong grid and large fields of color give the site a sober, logical tone that evokes the International Design style of the 1950s and 1960s.

Kanselarij der Nederlandse Orden frames a flexible central grid in asymmetrical bands of color.
Kanselarij der Nederlandse Orden has a similar style, with asymmetrical bands of color that provide the background to a centered flexible grid. Because the grid expands as a percentage of the total window width, the content also plays a part in filling the screen, but the boxy color fields add a level of sophistication to what is otherwise a fairly ordinary layout.
Small effects such as a color tone or texture in the background, or removing boxy lines from the edges of a layout, can go a long way toward creating a sense of completeness in the maximized window. Creative use of asymmetry instead of skinny, tower-like layouts can also keep readers from drowning in white margins.

Finally
By simply extending common techniques for adapting content to smaller breakpoints, we can see plenty of opportunities for larger breakpoints as well. Sites that use a strong grid will have an easier time of it, as a well-structured grid should have no problem expanding into a wider space.
Obviously the most important consideration in any design is the content, and so that must be the basis for any effort to expand a design to fill a wide screen. For long reads, it’s more important to create a good rhythm and flow so that the text can be read without distraction. For photographs or graphics, space and scale contribute directly to impact. Government and service-oriented sites must provide easy access to tasks and information. Ecommerce sites need to make it easy for consumers to evaluate and purchase products. A layout’s density should reflect the site’s tone—more density for a more active experience, less for a slower, more thoughtful tone. Much like framing a photograph, filling out the viewport can make a design seem bigger and bolder, just as framing a design in generous whitespace can make it seem more elegant or precious.
It may be true that desktop users have the luxury of resizing the browser window if all that whitespace makes them uncomfortable, unlike users of smaller devices. It may also be true that not all desktop users browse with large or full-screen windows. But as with mobile, we shouldn’t make assumptions about which devices are used to view our content now, and especially in the future. Large screens, in some cases, can provide both enhanced usability for users and a richer palette for designers. It’s up to us to take advantage of these expanded borders.
When it comes to building apps, we often assume our users are very much like us. We picture them with the latest devices, the most recent software, and the fastest connections. And while we may maintain a veritable zoo of older devices and browsers for testing, we spend most of our time building from the comfort of our modern, always-online desktop devices.
For us, a connection failure or slow service is a temporary problem that warrants nothing more than an error message. From this perspective, it is tempting to think of connectivity, mobile or otherwise, as something that will solve itself over time, as we get more network coverage and faster service. And that works, as long as our users stay above ground in large, well-developed—but not overly crowded—cities.
But what happens once they descend into the subway, board a plane, travel over land a bit, or go live in the countryside? Or when they stand in the wrong corner of a room or simply find themselves part of a huge crowd? Our carefully constructed app experiences become sources of frustration, because we rely so fully on that ephemeral link back to the servers.
This reliance ignores a fundamental truth: Offline is simply a fact of life. If you’re mobile, you’ll be offline at some point. It’s okay, though. There are ways to deal with it.

Taking stock
Web apps used to be completely dependent on the server: it did all the work, and the client just displayed the result. Any disruption in your connection was a major problem: if you were offline, you couldn’t use your app.
That problem is solved, in part, by richer clients, where more of the application logic runs in the browser—like Google Docs, for example. But for a proper offline-first experience, you also want to store the data on the front end, and you want it to sync with the server’s data store. Happily, in-browser databases are maturing, and there is an increasing number of solutions for this—like derby.js, Lawnchair, Hoodie, Firebase, remotestorage.io, Sencha Touch, and others—so solving the technical aspects is getting easier.
But we have bigger, and much weirder, fish to fry: designing apps and their interfaces for intermittent connectivity leads to an abundance of new scenarios and problems.
There are of course a few precedents for offline-first UX, and one of them is especially prevalent: your email inbox and outbox. Emails go into your outbox, even when you’re offline. Once you’re online, they get sent. It’s simple and unobtrusive, and it just works.
For incoming email, the experience is similarly smooth: once you reconnect, new emails from the server appear at the top of your inbox. In between, you’ve got a more or less complete local copy of all your emails up to this point, so you’re never stuck with an empty app. All three scenarios (client push fails, client pull/server push fails, availability of local data when offline) are well handled.
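That outbox behavior is straightforward to approximate in a web app. A minimal sketch, where sendToServer() is a hypothetical stand-in for whatever transport your app uses:

```js
// A minimal outbox: messages queue locally and are flushed whenever
// connectivity returns.
var outbox = JSON.parse(localStorage.getItem('outbox') || '[]');

function send(message) {
  outbox.push(message);
  localStorage.setItem('outbox', JSON.stringify(outbox));
  flushOutbox(); // try immediately; this is a no-op while offline
}

function flushOutbox() {
  if (!navigator.onLine || outbox.length === 0) return;
  sendToServer(outbox[0], function onSuccess() {
    outbox.shift(); // dequeue only once the server confirms receipt
    localStorage.setItem('outbox', JSON.stringify(outbox));
    flushOutbox(); // keep draining the queue
  });
}

// Flush whenever the browser reports that connectivity is back.
window.addEventListener('online', flushOutbox);
```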
The experience of using a website or app when offline should be a lot better, less frustrating, and more empowering. We need the UX equivalent of responsive web design: a strong catalog of guides and patterns for a disconnected, multi-device world.

The connectivity lifecycle
Most web apps have two connectivity-related points of failure: client push and client pull/server push. Depending on what your app does, you might want to
- communicate or explicitly hide the connectivity state and its changes (for instance, a chat client would inform users that any new messages typed are not sent immediately);
- enable client-side creation and editing features even when offline, and reassure users that their data is safe and will eventually make it to the server (think of a photo sharing app that lets you take and post pictures under any circumstances); and
- disable, modify, or possibly even hide features that cannot work offline, instead of letting people fail at using them (imagine a “send” button that knows when to turn into a “send later, when online” button; a sketch of this follows the list).
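That last point, the self-aware “send” button, is the easiest to sketch. Here the element ID is illustrative:

```js
// Relabel the send button as connectivity changes, instead of
// letting the user fail at sending.
var sendButton = document.getElementById('send');

function updateConnectivityUI() {
  sendButton.textContent = navigator.onLine
    ? 'Send'
    : 'Send later, when online';
}

window.addEventListener('online', updateConnectivityUI);
window.addEventListener('offline', updateConnectivityUI);
updateConnectivityUI(); // set the initial state on load
```

Note that navigator.onLine only reports whether the browser believes it has a network connection; your server may still be unreachable, so treat it as a hint rather than a guarantee.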
Other issues arise when the connectivity state changes during use, e.g., the server wants to push a change to the object or view that the user is currently looking at, or even editing. This would require you to
- notify the user that newer, possibly conflicting data is available; and
- give the user a pleasant conflict-resolution tool, if necessary.
Let’s take a look at some real-world examples.

Problematic connectivity scenarios

Losing local data
Going offline while using Google Docs in a browser other than Chrome can be quite frustrating: you can’t edit your document. And while reading is still possible, copying parts of it isn’t. You can’t do anything with your text or spreadsheet—not even copy it to another program to continue working there. And yet, this is actually an improvement over past versions, where a large overlay would notify you of the offline state and prevent you from even seeing your work.
This is a common occurrence in both native and web apps: data you’ve only just accessed suddenly becomes unavailable when you lose your connection. If possible, apps should retain their last state and make their data available, even if it can’t be modified. This requires keeping local data to fall back to if the server can’t be reached, so your users are never stranded with an empty app.

Treating offline like an error
Stop treating a lack of connectivity like an error. Your app should be able to handle disconnections and get on with business as gracefully as possible. Don’t show views you can’t fill with data, and make sure error messages hit the right tone. Take Instagram: when a person can’t post a photo, Instagram calls it a failure—instead of reassuring the user that the image isn’t lost, it’s just going to be posted later. No big deal. You might even want to reword your interface depending on the app’s connection state, such as turning “save” into “save locally.”
You might sometimes need to block whole features completely, but more often, you won’t need to. For example:
- If you can’t update a feed, show the old feed and a corresponding message. Don’t throw out the old data, then attempt to fetch the new data, fail, and end up with an empty, useless view. (A sketch of this fallback follows the list.)
- If your app lets users create data locally, let them do so, and inform them it will be saved and sent later. Optionally, ping them for confirmation before you do send it. Again, Instagram comes to mind: it knows where a photo was taken, but, when offline, can’t ask Foursquare what the place is called. Instagram could, however, ask users to come back to a picture and pick the Foursquare location once they’re online.
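The feed fallback from the first item might look something like this, where fetchFeed(), renderFeed(), and showNotice() are hypothetical app functions:

```js
// Fall back to a cached copy of the feed instead of an empty view.
function loadFeed() {
  fetchFeed(function onSuccess(items) {
    localStorage.setItem('feed-cache', JSON.stringify(items));
    renderFeed(items);
  }, function onError() {
    var cached = JSON.parse(localStorage.getItem('feed-cache') || 'null');
    if (cached) {
      renderFeed(cached);
      showNotice('Showing the feed from the last time you were online.');
    } else {
      showNotice('No connection, and nothing cached yet.');
    }
  });
}
```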
If your app offers collaborative editing or some other form of simultaneous use on multiple devices, you will likely create conflicting versions of objects at some point. We can’t prevent this, but we can provide easily usable conflict-resolution UIs for people who might not even understand what a sync conflict is.
Take Evernote, whose business is heavily based on syncing notes: conflicts are resolved by simply concatenating both versions of the note. On anything longer than a couple of lines, this requires an inordinate amount of cognitive effort and subsequent editing.
Draft, on the other hand, has managed to make conflict resolution between collaborators simple and beautiful. It shows both versions and their differences in three separate columns, and each difference has an “accept” and an “ignore” button. Intuitive and visually appealing conflict resolution, at least for text, is definitely possible.
Detailed change-by-change resolution isn’t always necessary. In many cases you just need to provide a nice interface for highlighting differences and allowing the user to choose which version wins in this specific conflict.
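A resolution UI doesn’t have to start any fancier than a dialog that lets the user pick a winner. A sketch, with showDialog() as a hypothetical UI helper:

```js
// Offer both versions of a conflicted object and let the user decide;
// resolve() receives the winning version to persist.
function resolveConflict(local, remote, resolve) {
  showDialog({
    title: 'This item was changed on another device',
    options: [
      { label: 'Keep my version', value: local },
      { label: 'Use the other version', value: remote },
    ],
    onChoose: resolve, // persist the winner, discard the loser
  });
}
```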
There are other types of conflicts awaiting us, however, and many of them won’t be text based: disputed map marker positions, bar chart colors, lines in a drawing, and endless other things we haven’t even thought of yet.
But not all technical problems need technical solutions. Consider two waiters with wireless ordering devices in a large, multi-story restaurant. One is connected to the restaurant’s server. The other is on the very top floor, where his connection fails temporarily. Both wait tables that order the same bottle of rare, expensive wine. The offline waiter’s device cannot know about this conflict. What it can do, however, is be aware of the risk of conflict (low stock number, its own offline state) and advise the waiter to give an appropriate reply to the table (“Oh, exquisite choice. Let me see if that’s still available”).

Preempting users’ needs
In some cases, apps can preemptively take low-overhead action to give users a better experience later. When Google Maps detects I’m on wifi in a different country than usual, it could quickly cache my surroundings for the likely case of later offline or roaming use.
In many cases, however, content is too large to cache preemptively—for example, a video from a news site. In these cases, users must make the explicit decision to sync locally, which would require them to download the video to their device and view it in a different application. Any context that video had online—like related information or relevant comment threads—is now lost, as is the opportunity for users themselves to comment.

Refreshing chronological data
All of these examples were client push, but there’s the server push aspect as well: what can we do when the server updates a user’s active view, and pushes data that can’t conveniently be added to the top of a list? Chronological data often causes this problem.
For example, if you use iMessage on several devices, messages are sometimes displayed out of chronological order when syncing. iMessage could sort them in the correct order—they are timestamped, after all—but instead it shows them in the order in which they arrived on the device. This makes them highly noticeable, but is also terribly confusing.
Imagine the more intuitive way of doing it: messages are always shown in chronological order, regardless of when they arrive. This sounds more sensible at first, but means you may have to scroll back in time to read a message that just arrived, because it was sent in response to something much older. Worse, you might not even notice it, since it pops into existence somewhere you’re probably not looking.
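A middle path is to keep chronological order but flag anything that lands above the bottom of the conversation. A sketch, assuming each message carries a sentAt timestamp and highlightNewMessage() is a hypothetical UI hook (a “new message above” indicator, say):

```js
// Insert a late-arriving message at its chronological position, and
// flag it if it lands anywhere the user probably isn't looking.
function insertMessage(messages, incoming) {
  var i = messages.length;
  while (i > 0 && messages[i - 1].sentAt > incoming.sentAt) {
    i--; // walk back past anything sent after the incoming message
  }
  messages.splice(i, 0, incoming);
  if (i < messages.length - 1) {
    highlightNewMessage(incoming); // it didn't land at the bottom
  }
  return messages;
}
```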
If you display data chronologically and the sequence of the data itself is meaningful, as in a chat (as opposed to email, which can be threaded), offline capabilities pose a problem: the most recently transmitted data is not necessarily the newest, and may therefore appear somewhere users won’t expect it. You can maintain context and sequence, but your interface also needs to let users know where in time the new content is.

Preparing for diverse data types
Many of these examples are text-based, and even if they aren’t (like a map marker), some of them could conceivably have a text-based helper (like a list of map markers next to the map), which can simplify sync-related updates and notifications.
However, we know the amount, diversity, and complexity of web applications will continue to increase, as will the types of data their users handle. Some will be collaborative, most will be usable on multiple devices, and many will introduce new and exciting syncing issues. It makes sense to study them, and to develop a common vocabulary for offline scenarios and their solutions.

Offliners Anonymous
As we started asking developers from all over the world about these issues, we were surprised at how many people suddenly opened up about their tales of offline woe—realizing they’d had similar problems in the past, but never spoken to others about them. Most battled it out alone, gave up, or put it off, but all secretly wished they had somewhere to turn for offline app advice.
We don’t need to be anonymous, though. We can look to John Allsopp’s call, 13 years ago, to embrace the web as a fluid medium full of unknowns, and to “accept the ebb and flow of things.” Today we realize this extends beyond screen sizes and aspect ratios, feature support and rendering implementations, and holds true even for our work’s very connection to the web itself.
In this even more fluid and somewhat more daunting reality, we’ll all need each other’s help. We should make sure that we, and those who follow us, are equipped with reliable tools and patterns for the uncertainties of the increasingly mobile world—both for the sake of our users and for our own. Web development is complicated enough without wasting extra time reinventing wheels.
To help each other and future generations of designers, developers, and user interface experts, we invite you to join the discussion at offlinefirst.org. Our eventual goal is to create an offline handbook that includes UX patterns and anti-patterns, technology options, and research on users’ mental models—creating a repository of knowledge to draw from and contribute to, so our collective efforts and experiences don’t go to waste.
For now, we need to hear from you: about your experiences in this field, your knowledge of tools, your tips and tricks, or even just your challenges. Solving them won’t be easy, but it will improve our users’ experiences—wherever and whenever they need our services. Isn’t that why we’re here?
Previous versions of Korora shipped with Ubuntu’s Jockey for installing drivers like NVIDIA and Catalyst, but now we have Pharlap. Jockey has been deprecated upstream in favour of ubuntu-drivers-common, so we thought we’d see if we could make use of that code base and create a version for Korora/Fedora+RPMFusion.
It seemed to work out well, so it’s included by default in the Korora 20 beta release, although Jockey is still available to install if you so desire. Pharlap is much more lightweight and uses yum-daemon for package management. The packages are pharlap and pharlap-modaliases, both of which are available for Fedora 20 from the Korora repo (including source RPMs).
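If you want to try it on Korora 20, or on Fedora 20 with the Korora repo enabled, installation should be the usual yum affair:

```
sudo yum install pharlap pharlap-modaliases
```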
We need lots of testing on Pharlap so if you’re keen to help, we would appreciate any feedback. Hopefully at some point we can get it into RPMFusion.
Where did the name come from?
Well, since we were replacing Jockey, we thought we’d go with a horse theme. Born in 1926, Phar Lap was a New Zealand-foaled, Australian-trained thoroughbred racehorse, one of the greatest of all time. Although Phar Lap came last in his first-ever race, towards the end of his career he won 32 of 35 races (coming second in two of the three he lost).
In 1932 Phar Lap raced in the Agua Caliente Handicap at Tijuana, Mexico, which was North America’s then richest race. It was his first time racing in America, his first race on dirt tracks and his first start from barrier stalls. He was last out of the gate and started ten lengths behind the leaders, but by the half-mile he was in front and then won the race by three lengths, setting a new track record.
Two weeks later Phar Lap was dead after someone poisoned him with arsenic.
So to honour this great horse, we’ve named the project Pharlap.
Press Release. For Immediate Release.
Announcing the third linux.conf.au 2014 keynote speaker
The lca2014.linux.org.au team is pleased to announce Matthew Garrett as the third keynote speaker for the conference.
Matthew is a cloud security developer working at Nebula, a prominent cloud hardware and services vendor.
Matthew has been hacking on Secure Boot for quite a while: he was deeply involved with it during his time at Red Hat, and more recently at Nebula, where he has taken a broader view of its security implications for the cloud initiative.
Matthew will be speaking to us about the current state of the debate concerning Secure Boot, concerns over the security of material held in the cloud, the influence of the activities of security agencies, and issues relating to people maintaining traditional freedoms within cloud environments. These factors are key to the success of the cloud initiative that has captured the interest of computing communities throughout the world.
We are really pleased to have Matthew speaking to us about these key topics that have such an important impact on the success of the cloud computing initiative.
linux.conf.au is an annual open source conference organised by Linux Australia. In 2014 the conference will be held in Perth, Western Australia, from 6 to 10 January.
We would like to thank our Emperor sponsors IBM and HP for supporting lca2014.linux.org.au and helping our community.
lca2014.linux.org.au #lca2014 #linux.conf.au
I’ve just read Bartleby the Scrivener, a short story about a scrivener who refuses to work, saying “I’d prefer not to”.
It reminded me of some situations in the computer industry. I’ve never seen a single case where someone preferred not to work when everyone around them (colleagues and management) wanted them to work. But then, an entire team and its management all wanting to work efficiently isn’t nearly as common as one might imagine.
In some cases it’s desired that someone not work, such as a former colleague who was hired as a sysadmin but did nothing except change backup tapes (a few hours of work per week). Not having him log in as root improved the general reliability of the servers, but it was fortunate that we never needed to restore from backups…
One time I had a colleague who preferred to spend most of his time in the office searching the Internet for videos of street fights. I have often told colleagues that I would prefer them to work, but in the case of a guy whose only hobby was street fighting, I decided to let it go.
Managing people can be difficult, particularly for someone who doesn’t like disagreements. Some managers that I’ve reported to seemed to prefer not to manage, in an apparent attempt to avoid disputes. One time, when I complained about a colleague not even having a suitable computer to do any work on, a manager responded with the rhetorical question “what do you expect me to do?”. That manager didn’t do any annual reviews of staff for over a year; he eventually did some only because he was told that his scheduled promotion was on hold until he got them done. I got the impression that at least two levels of management preferred not to work at that company.
Sometimes it just gets weird though, such as the occasion when I was the only member of a team and the manager who was supposedly managing no-one but me never seemed to have time to have a meeting with me. But he didn’t want me to bypass him and talk directly to other people in the company, so he preferred not to work and not to have anyone else do his job.
Most of the companies that I’ve worked for in a full-time capacity didn’t seem to have any effective technical interviews (note that I’ve mostly worked for financial companies and ISPs, not free software companies). So it seems that anyone with minimal computer skills who wants a well-paying job could just send out a CV to a bunch of recruiting agencies, get interviewed by enough companies to eventually hit one without a technical interview process, and then find a job that doesn’t require work.

Depression
The Wikipedia page about Bartleby the Scrivener suggests that Bartleby was depressed. I wonder how much of the lack of performance I’ve witnessed has been due to depression. There appears to be a strong correlation between work environments that cause depression and people preferring not to work.
Maybe managers should consider how to make work less depressing, to get more effective employees (in terms of both quality and quantity of work). One example of this is the sysadmin-team death spiral I’ve witnessed, where no one can automate solving problems (e.g., cron jobs to manage resource usage, or analysis tools to find minor problems before they become major ones) because everyone is dedicated to fixing things that break needlessly (e.g., systems crashing due to lack of disk space). When people start getting control over recurring problems and automating things, the work becomes increasingly about solving problems rather than repeating the same manual processes every day or week, and it’s more fun and effective for everyone.
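Even something as small as a scheduled disk-space check helps break that spiral. An illustrative crontab entry (the threshold and recipient are assumptions; adjust to taste):

```
# Once an hour, mail root a list of any filesystems over 90% full,
# so disk-space problems surface before systems start crashing.
0 * * * * out=$(df -P | awk '0+$5 >= 90 {print $6, $5}'); [ -n "$out" ] && echo "$out" | mail -s "disk space warning" root
```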
At the BoF on depression at LCA 2013, one delegate stated that many companies have people in HR who can arrange support for depressed employees. Apparently, if you are depressed and you work for a company that’s large enough to have an HR department, it can be beneficial to talk to HR about it. That probably works well in the case where an employee is depressed but the company is working well. But in the case where the company isn’t working well, it seems unlikely to help.
David Graeber wrote an interesting article about “Bullshit Jobs”. He goes a bit far; I don’t think that late-night pizza delivery is a bullshit job, and actuaries are useful to society. But his points about the existence of useless jobs are reasonable.

Management Levels
I sometimes wonder whether there is some benefit in establishing social norms about working and then having management take little interest in how it happens. If a team works well together, then management could just set deadlines (negotiated with the employees, who know what’s possible) and let the team work out how to do it. Then, instead of having one manager for each team of ~10 people who theoretically tracks what everyone is doing, you could have one manager for a dozen teams who just tracks overall team performance – essentially removing a layer of management.
Valve is famous for having no formal management structure and for getting things done; unfortunately, that apparently allows school-style cliques to block actions. But I think that the Valve experiment is useful and provides some ideas that can be used by other companies. Maybe if, instead of requiring consensus of the entire company for hiring decisions, they only required consensus of the team, things would have worked better.
Of course, another downside to such things is that hierarchical management can be good for avoiding discrimination and bullying. The article I cited about Valve compares it to high school. It could be that Valve employees were all nice people who only hired other nice people. But if similar systems were implemented in many companies, some would surely end up being like a typical high school, with all the bullying and mistreatment of minority groups that entails.
Michael O. Church wrote an interesting article in which he divides employees into four categories: “losers”, “clueless”, “psychopaths”, and “technocrats” (note that he didn’t invent the first three names). In his model, the “clueless” category includes most middle management. I think that there are some problems with Michael’s model, and I’m not arguing for a “technocracy” (which is how this post might be interpreted in terms of his ideas). But I think he demonstrates some of the real problems in the way companies are managed, and in his model the “losers” prefer not to work as long as they can get paid.

Conclusion
I don’t have any good solutions to these problems to offer. It seems that the best we can hope for is incremental change to make work less depressing, to have the minimal amount of management, and to avoid “bullshit jobs”.
- http://www.gutenberg.org/ebooks/11231
- http://en.wikipedia.org/wiki/Bartleby,_the_Scrivener
- http://www.strikemag.org/bullshit-jobs/
- http://www.wired.com/gamelife/2013/07/wireduk-valve-jeri-ellsworth/
- http://tinyurl.com/kxcduch
A little under three months ago, Hamish did what all mothers dread: ignored the repeated safety messages and put himself in danger. Not by running across the road, but by jumping a fence and running behind a now-startled pony (Buddy) that was busy being fed.
The result was a jaw snapped clean through in two places, four days in hospital, eight weeks wired shut, and enough blood over him and his mother for this four-year-old to ask if he was going to die.
Out of the blue on Sunday, Hamish asked if he could ride Buddy, so today he did:
Needless to say I'm pretty proud of this little man's courage.
Three weeks ago I attended TPAC, the annual meeting of W3C Working Groups. One of the meetings was of the Timed Text Working Group (TT-WG), which has been specifying TTML, the Timed Text Markup Language. It is now proposed that WebVTT also be standardised through the same Working Group.
How did that happen, you may ask, in particular since WebVTT and TTML have in the past been portrayed as rival caption formats? How will the WebVTT spec that is currently under development in the Text Track Community Group (TT-CG) move through a Working Group process?
I’ll explain first why there is a need for WebVTT to become a W3C Recommendation, then how this is proposed to become part of the Timed Text Working Group’s deliverables, and finally how I can see this working between the TT-CG and the TT-WG.

Advantages of a W3C Recommendation
Because of its Recommendation status, TTML has become the basis for several other caption standards that other SDOs have picked up: SMPTE’s SMPTE-TT format, the EBU’s EBU-TT format, and the DASH Industry Forum’s use of SMPTE-TT. SMPTE-TT has also become the “safe harbour” format for the US legislation on captioning, as decided by the FCC. (Note that the FCC requirements for captions on the Web are actually based on a list of features rather than requiring a specific format. But that will be the topic of a different blog post…)
WebVTT is much younger than TTML. TTML was developed as an interchange format among caption authoring systems. WebVTT was built for rendering in Web browsers and with HTML5 in mind. It meets the requirements of the <track> element and supports more than just captions/subtitles. WebVTT is popular with browser developers and has already been implemented in all major browsers (Firefox Nightly is the last to implement it – all others have support already released).
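For anyone who hasn’t seen it, WebVTT is a deliberately simple format: a header followed by timestamped cues, wired into a page via the <track> element. The file names here are illustrative:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
A cue is just a time range plus its text.

00:00:04.000 --> 00:00:07.500
Blank lines separate one cue from the next.
```

```html
<video src="talk.webm" controls>
  <track kind="captions" src="talk.vtt" srclang="en" label="English" default>
</video>
```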
As we can see and as has been proven by the HTML spec and multiple other specs: browsers don’t wait for specifications to have W3C Recommendation status before they implement them. Nor do they really care about the status of a spec – what they care about is whether a spec makes sense for the Web developer and user communities and whether it fits in the Web platform. WebVTT has obviously achieved this status, even with an evolving spec. (Note that the spec tries very hard not to break backwards compatibility, thus all past implementations will at least be compatible with the more basic features of the spec.)
Given that Web browsers don’t need WebVTT to become a W3C standard, why then should we spend effort in moving the spec through the W3C process to become a W3C Recommendation?
The modern Web is now much bigger than just Web browsers. Web specifications are being used in all kinds of devices, including TV set-top boxes, phone and tablet apps, and even unexpected devices such as white goods. Videos are increasingly omnipresent, exposing deaf and hard-of-hearing users to ever-growing challenges in interacting with content on diverse devices. Some of these devices will not use auto-updating software but fixed versions, so they can’t easily adapt to new features. Thus, caption producers (both commercial and community) need to be able to author captions (and other video accessibility content as defined by the HTML5 <track> element) towards a feature set that is clearly defined to be supported by such non-updating devices.
Understandably, device vendors in this space have a need to build their technology on standardised specifications. SDOs for such device technologies like to reference fixed specifications so the feature set is not continually updating. To reference WebVTT, they could use a snapshot of the specification at any time and reference that, but that’s not how SDOs work. They prefer referencing an officially sanctioned and tested version of a specification – for a W3C specification that means creating a W3C Recommendation of the WebVTT spec.
Taking WebVTT onto the W3C Recommendation track is actually advantageous for browsers, too, because a test suite will have to be developed to prove that features are implemented in an interoperable manner. In summary, I can see the advantages, and I personally support the effort to take WebVTT through to a W3C Recommendation.

Choice of Working Group
AFAIK this is the first time that a specification developed in a Community Group is being moved onto the Recommendation track. This is something the W3C expected to happen when it created CGs, but there is no established process for it yet.
The first question of course is which WG would take it through to Recommendation? Would we create a new Working Group or find an existing one to move the specification through? Since WGs involve a lot of overhead, the preference was to add WebVTT to the charter of an existing WG. The two obvious candidates were the HTML WG and the TT-WG – the first because it’s where WebVTT originated and the latter because it’s the closest thematically.
Adding a deliverable to a WG is a major undertaking. The TT-WG is currently in the process of re-chartering and thus a suggestion was made to add WebVTT to the milestones of this WG. TBH that was not my first choice. Since I’m already an editor in the HTML WG and WebVTT is very closely related to HTML and can be tested extensively as part of HTML, I preferred the HTML WG. However, adding WebVTT to the TT-WG has some advantages, too.
Since TTML is an interchange format, lots of captions that will be created (at least professionally) will be in TTML and TTML-related formats. It makes sense to create a mapping from TTML to WebVTT for rendering in browsers. The expertise of both TTML and WebVTT experts is required to develop a good mapping – as was shown when we developed the mapping from CEA-608/708 to WebVTT. Also, captioning experts are already in the TT-WG, so it helps to get a second set of eyes onto WebVTT.
A disadvantage of moving a specification out of a CG into a WG is, however, that you potentially lose a lot of the expertise that is already involved in the development of the spec. People don’t easily re-subscribe to additional mailing lists or want the additional complexity of involving another community (see e.g. this email).
So, a good process needs to be developed to allow everyone to contribute to the spec in the best way possible without requiring duplicate work. How can we do that?

The forthcoming process
At TPAC the TT-WG discussed for several hours what the next steps are in taking WebVTT through the TT-WG to recommendation status (agenda with slides). I won’t bore you with the different views – if you are keen, you can read the minutes.
What I came away with is the following process:
- Fix a few more bugs in the CG until we’re happy with the feature set in the CG. This should match the feature set that we realistically expect devices to implement for a first version of the WebVTT spec.
- Make a FSA (Final Specification Agreement) in the CG to create a stable reference and a clean IPR position.
- Assuming that the TT-WG’s charter has been approved with WebVTT as a milestone, we would next bring the FSA specification into the TT-WG as FPWD (First Public Working Draft) and immediately do a Last Call which effectively freezes the feature set (this is possible because there has already been wide community review of the WebVTT spec); in parallel, the CG can continue to develop the next version of the WebVTT spec with new features (just like it is happening with the HTML5 and HTML5.1 specifications).
- Develop a test suite and address any issues in the Last Call document (of course, also fix these issues in the CG version of the spec).
- As per W3C process, substantive and minor changes to Last Call documents have to be reported and raised issues addressed before the spec can progress to the next level: Candidate Recommendation status.
- For the next step – Proposed Recommendation status – an implementation report is necessary, and thus the test suite needs to be finalized for the given feature set. The feature set may also be reduced at this stage to just the ones implemented interoperably, leaving any other features for the next version of the spec.
- The final step is Recommendation status, which simply requires sufficient support and endorsement by W3C members.
The first version of the WebVTT spec naturally has a focus on captioning (and subtitling), since this has been the dominant use case we have focused on thus far, and it’s the most interoperably implemented feature set of WebVTT in browsers. It’s my expectation that the next version of WebVTT will have a lot more features related to audio descriptions, chapters, and metadata. Thus, this seems a good time for a first-version feature freeze.
There are still several obstacles to progressing WebVTT as a milestone of the TT-WG. Apart from the need to get buy-in from the TT-WG, the TT-CG, and the AC (the Advisory Committee, who have to approve the new charter), we’re also looking at the license of the specification document.
The CG specification has an open license that allows creating derivative works as long as there is attribution, while the W3C document license for documents on the Recommendation track does not allow the creation of derivative works unless given explicit exceptions. This is an issue currently being discussed in the W3C, with a proposal for a CC-BY license on the Recommendation track. However, my view is that it’s probably OK to use the different document licenses: the TT-WG will work on WebVTT 1.0 and give it a W3C document license, while the CG starts working on the next WebVTT version under the open CG license. It probably makes sense to have a less open license on a frozen spec.

Making the best of a complicated world
WebVTT is now proposed as part of the recharter of the TT-WG. I have no idea how complicated the process will become to achieve a W3C WebVTT 1.0 Recommendation, but I am hoping that what is outlined above will be workable in such a way that all of us get to focus on progressing the technology.
At TPAC I got the impression that the TT-WG is committed to progressing WebVTT to Recommendation status. I know that the TT-CG is committed to continue developing WebVTT to its full potential for all kinds of media-time aligned content with new kinds already discussed at FOMS. Let’s enable both groups to achieve their goals. As a consequence, we will allow the two formats to excel where they do: TTML as an interchange format and WebVTT as a browser rendering format.
Recently, MariaDB 10 has been gaining many new storage engines. We’ve seen TokuDB, CONNECT, SEQUENCE, SPIDER, and CassandraSE for various use cases. For a long time MariaDB shipped OQGRAPH, but it was disabled in MariaDB 5.5. It will make a comeback, as OQGRAPH v3 is being actively worked on by Andrew McDonnell. Keep track of this via MDEV-5319.
Another engine, being worked on by Kentoku Shiba and team, is the mroonga engine, which gives you full-text search. It is optimised for CJK languages and is supposedly very fast. To track this, follow MDEV-5222.
What this means is that since the start of the MariaDB project, the only engine we have disabled and no longer include (from 5.5 onwards) is PBXT. That’s a pretty good record of shipping many storage engines, most of which have come from the community.
Stewart Smith: FINAL CALL: CFP: Developer, Testing, Release and CI Automation miniconf @ linux.conf.au 2014
I’ve sneakily kept the CFP open for a while after the “deadline” and will be closing it for good on Wednesday, December 4th. SUBMIT NOW!
- Jamaican Ministry of Health is the first to adopt free and open source health system nationwide http://t.co/TUoAGZ6c7w 17:27:06, 2013-12-01
- Abbott & Libs delete speeches from websites to hide views & policies http://t.co/J2wrdkvk94 #auspol 15:33:03, 2013-12-01
- United Nations predicts HIV-AIDS could be eradicated in the Asia-Pacific within 15 years http://t.co/c3YitGH5UP 13:19:04, 2013-12-01
- Aardvark Founder Max Ventilla Is Trying To Turn Education On Its Head With AltSchool http://t.co/BPyOe0afOD 13:19:04, 2013-11-30
- High-speed rail network $30 billion cheaper than first thought: study http://t.co/ESi8kC3xSF 13:19:05, 2013-11-29
- Low-paid jobs are still paid too much, senior Abbott advisor says http://t.co/bkfSNJLUea #auspol 22:59:04, 2013-11-28
- How Pisa became the world’s most important exam http://t.co/dCVm73wBEk #education 17:27:02, 2013-11-28
- More than one broken promise or policy surprise a week! That must be some kind of record! http://t.co/6ZsK9ijGNg #auspol 15:33:06, 2013-11-28
- RT @groklearning: Let’s change this, starting with one #HourofCode. http://t.co/YZauzUx2vF http://t.co/6Xb89GcURJ 14:33:11, 2013-11-28
- NAPLAN, HSC will not help students succeed in real life http://t.co/ZwF5LiaB5b #education 13:19:05, 2013-11-28
- Pope Francis calls for power to be taken away from Vatican http://t.co/TFPuCczTXZ 19:32:05, 2013-11-27
- Wow, the broken promises just keep on coming http://t.co/iD81WpeFaJ #auspol 17:27:05, 2013-11-27
- Parents more supportive of NAPLAN school tests than teachers: survey http://t.co/y95BOe8huH 13:19:09, 2013-11-27
- Unspeakable horrors in a country on the verge of genocide http://t.co/oIBZs8mTOT 13:19:04, 2013-11-25
- RT @misscmorrison: Being able to admit we don’t know stuff gives us more credibility than knowing all answers. @gcouros #elhst13 08:45:07, 2013-11-25
On the 27th of October I met up with some LEAF drivers from the Australian LEAF Owners Forum at Sun Valley Tourist Park; see the above picture for a line-up of their LEAFs. It was a good meet-up: most of them had not met before, and the whole gathering was basically built around a shared interest in the cars. So naturally talk of the next meet-up came up, and I suggested next year’s 2014 Hunter EV Festival – it’s actually down for a whole weekend this time: day 1 at the race track, day 2 at the expo on the foreshore.
So, in an attempt to help out, I sent this email to accommodation providers around Newcastle:
In the lead-up to the 2014 Hunter Electric Vehicle Festival weekend, I’m canvassing Newcastle accommodation for places that are Electric Vehicle friendly, so I can provide a list to out-of-town EV drivers.
So the first question is: would you like to be added to the list?
If you are interested, I would like to know how many power points you have in close proximity to a car park, and what type they are (most will be 240 volt, 10 amp).
The last piece of information I need is whether there is a surcharge on the accommodation cost for EV charging, and if so, how much it will be.
Thank you for your time
What I forgot to put in the email was the dates – August 16th and 17th – and the approximate cost of power, which I estimated at 15 kWh (a 240 volt, 10 amp point delivers about 2.4 kW, so roughly 6 hours of charging); at $0.25 per kWh that comes to $3.75. So far I’ve only had one reply, which was positive, but you can keep an eye on my progress by checking the spreadsheet on Google Drive.