
Andrew Pollock: [life] Day 268: Science Friday, TumbleTastics, haircuts and a big bike outing

Planet Linux Australia - Fri 24th Oct 2014 23:10

I didn't realise how jam-packed today was until we sat down at dinner time and recounted what we'd done today.

I started the day pretty early, because Anshu had to be up for an early flight. I pottered around at home cleaning up a bit until Sarah dropped Zoe off.

After Zoe had watched a bit of TV, I thought we'd try some bottle rocket launching for Science Friday. I'd impulse-purchased an AquaPod at Jaycar last year, but hadn't gotten around to using it until now.

We wandered down to Hawthorne Park with the AquaPod, an empty 2 litre Sprite bottle, the bicycle pump and a funnel.

My one complaint with the AquaPod would have to be that the feet are too smooth. If you don't tug the string strongly enough you end up just dragging the whole thing across the ground, which isn't really what you want to be doing. Once Zoe figured out how to yank the string the right way, we were all good.

We launched the bottle a few times, but I didn't want to waste a huge amount of water, so we stopped after about half a dozen launches. Zoe wanted to have a play in the playground, so we wandered over to that side of the park for a bit.

It was getting close to time for TumbleTastics, and we needed to go via home to get changed, so we started the longish walk back home. It was slow going in the mid-morning heat with no scooter, but we got there eventually. We had another mad rush to get to TumbleTastics on time, and miraculously managed to make it there just as they were calling her name.

Lachlan wasn't there today, and I was feeling lazy, and Zoe was keen for a milkshake, so we dropped into Ooniverse on the way home. Zoe had a great old time playing with everything there.

After we got home again, we biked down to the Bulimba post office to collect some mail, and then biked over for a haircut.

After our haircuts, Zoe wanted to play in Hardcastle Park, so we biked over there for a bit. I'd been wanting to go and check out the newly opened Riverwalk and try taking the bike and trailer on a CityCat. A CityCat just happened to be arriving when we got to the park, but Zoe wasn't initially up for it. As luck would have it, she changed her mind as the CityCat docked, but it was too late to try and get on that one. We got on the next one instead.

I wasn't sure how the bike and the trailer were going to work out on the CityCat, but it worked out pretty well going from Hawthorne to New Farm Park. We boarded at Hawthorne from the front left hand side, and disembarked at New Farm Park from the front right hand side, so I basically just rolled the bike on and off again, without needing to worry about turning it around. It was a bit tight cornering from the pontoon to the gangway, but the deckhand helped me manoeuvre the trailer.

It was quite a nice little ride through the back streets of New Farm to get to the start of the Riverwalk, and we had a nice quick ride into the city. We biked all the way along the riverside through to the Old Botanic Gardens. We stopped for a little play in the playground that Zoe had played in the other weekend when we were wandering around for Brisbane Open House, and then continued through the gardens, over the Goodwill Bridge, and along the bottom of the Kangaroo Point cliffs.

We wound our way back home through Dockside, and Mowbray Park and along the bikeway alongside Wynnum Road. It was a pretty huge ride, and I'm excited that it's opened up an easy way to access Southbank by bicycle. I'm looking forward to some bigger forays in the near future.


Tim Serong: Watching Grass Grow

Planet Linux Australia - Fri 24th Oct 2014 19:10

For Hackweek 11 I thought it’d be fun to learn something about creating Android apps. The basic training is pretty straightforward, and the auto-completion (and auto-just-about-everything-else) in Android Studio is excellent. So having created a “hello world” app, and having learned something about activities and application lifecycle, I figured it was time to create something else. Something fun, but something I could reasonably complete in a few days. Given that Android devices are essentially just high res handheld screens with a bit of phone hardware tacked on, it seemed a crime not to write an app that draws something pretty.

The openSUSE desktop wallpaper, with its happy little Geeko sitting on a vine, combined with all the green growing stuff outside my house (it’s spring here) made me wonder if I couldn’t grow a little vine jungle on my phone, with many happy Geekos inhabiting it.

Android has OpenGL ES, so thinking that might be the way to go I went through the relevant lesson, and was surprised to see nothing on the screen where there should have been a triangle. Turns out the view is wrong in the sample code. I also realised I’d probably have to be generating triangle strips from curvy lines, then animating them, and the brain cells I have that were once devoted to this sort of graphical trickery are so covered in rust that I decided I’d probably be better off fiddling around with beziers on a canvas.

So, I created an app with a SurfaceView and a rendering thread which draws one vine after another, up from the bottom of the screen. Depending on Math.random() it extends a branch out to one side, or the other, or both, and might draw a Geeko sitting on the bottom most branch. Originally the thread lifecycle was tied to the Activity (started in onResume(), killed in onPause()), but this causes problems when you blank the screen while the app is running. So I simplified the implementation by tying the thread lifecycle to Surface create/destroy, at the probable expense of continuing to chew battery if you blank the screen while the app is active.

Then I realised that it would make much more sense to implement this as live wallpaper, rather than as a separate app, because then I’d see it running any time I used my phone. Turns out this simplified the implementation further. Goodbye annoying thread logic and lifecycle problems (although I did keep the previous source just in case). Here’s a screenshot:

The final source is on github, and I’ve put up a release build APK too in case anyone would like to try it out – assuming of course that you trust me not to have built a malicious binary, trust github to host it, and trust SSL to deliver it safely.

Enjoy!


Michael Still: Specs for Kilo

Planet Linux Australia - Fri 24th Oct 2014 14:10
Here's an updated list of the specs currently proposed for Kilo. I wanted to produce this before I start travelling for the summit in the next couple of days because I think many of these will be required reading for the Nova track at the summit.



API



  • Add instance administrative lock status to the instance detail results: review 127139 (abandoned).
  • Add more detailed network information to the metadata server: review 85673.
  • Add separated policy rule for each v2.1 api: review 127863.
  • Add user limits to the limits API (as well as project limits): review 127094.
  • Allow all printable characters in resource names: review 126696.
  • Expose the lock status of an instance as a queryable item: review 85928 (approved).
  • Implement instance tagging: review 127281 (fast tracked, approved).
  • Implement tags for volumes and snapshots with the EC2 API: review 126553 (fast tracked, approved).
  • Implement the v2.1 API: review 126452 (fast tracked, approved).
  • Microversion support: review 127127.
  • Move policy validation to just the API layer: review 127160.
  • Provide a policy statement on the goals of our API policies: review 128560.
  • Support X509 keypairs: review 105034.




Administrative



  • Enable the nova metadata cache to be a shared resource to improve the hit rate: review 126705 (abandoned).
  • Enforce instance uuid uniqueness in the SQL database: review 128097 (fast tracked, approved).




Containers Service







Hypervisor: Docker







Hypervisor: FreeBSD



  • Implement support for FreeBSD networking in nova-network: review 127827.




Hypervisor: Hyper-V



  • Allow volumes to be stored on SMB shares instead of just iSCSI: review 102190 (approved).




Hypervisor: Ironic







Hypervisor: VMWare



  • Add ephemeral disk support to the VMware driver: review 126527 (fast tracked, approved).
  • Add support for the HTML5 console: review 127283.
  • Allow Nova to access a VMWare image store over NFS: review 126866.
  • Enable administrators and tenants to take advantage of backend storage policies: review 126547 (fast tracked, approved).
  • Enable the mapping of raw cinder devices to instances: review 128697.
  • Implement vSAN support: review 128600 (fast tracked, approved).
  • Support multiple disks inside a single OVA file: review 128691.
  • Support the OVA image format: review 127054 (fast tracked, approved).




Hypervisor: libvirt







Instance features







Internal



  • Move flavor data out of the system_metadata table in the SQL database: review 126620 (approved).
  • Transition Nova to using the Glance v2 API: review 84887.




Internationalization



  • Enable lazy translations of strings: review 126717 (fast tracked).




Performance



  • Dynamically alter the interval nova polls components at based on load and expected time for an operation to complete: review 122705.




Scheduler



  • Add an IOPS weigher: review 127123 (approved).
  • Add instance count on the hypervisor as a weight: review 127871 (abandoned).
  • Allow limiting the flavors that can be scheduled on certain host aggregates: review 122530 (abandoned).
  • Convert the resource tracker to objects: review 128964 (fast tracked, approved).
  • Create an object model to represent a request to boot an instance: review 127610.
  • Decouple services and compute nodes in the SQL database: review 126895.
  • Implement resource objects in the resource tracker: review 127609.
  • Isolate the scheduler's use of the Nova SQL database: review 89893.
  • Move select_destinations() to using a request object: review 127612.




Security



  • Provide a reference implementation for console proxies that uses TLS: review 126958 (fast tracked).
  • Strongly validate the tenant and user for quota consuming requests with keystone: review 92507.




Tags for this post: openstack kilo blueprint spec





Andrew Pollock: [life] Day 267: An outing to the Valley for lunch, and swim class

Planet Linux Australia - Fri 24th Oct 2014 09:10

I was supposed to go to yoga in the morning, but I just couldn't drag my sorry arse out of bed with my man cold.

Sarah dropped Zoe around, and she watched a bit of TV while we were waiting for a structural engineer to come and take a look at the building's movement-related issues.

While I was downstairs showing the engineer around, Zoe decided she'd watched enough TV and, remembering that I'd said we needed to tidy up her room the previous morning, but not had time to, took herself off to her room and tidied it up. I was so impressed.

After the engineer was finished, we walked to the ferry terminal to take the cross-river ferry over to Teneriffe, and catch the CityGlider bus to the Valley for another one of the group lunches I get invited to.

After lunch, we reversed our travel, dropping into the hairdresser on the way home to make an appointment for the next day. We grabbed a few things from the Hawthorne Garage on the way through.

We pottered around at home for a little bit before it was time to bike to swim class.

After swim class, we biked home, and Zoe watched some TV while I got organised for a demonstration that night.

Sarah picked up Zoe, and I headed out to my demo. Another full day.


linux.conf.au News: Call for Volunteers

Planet Linux Australia - Fri 24th Oct 2014 08:10

The Earlybird registrations are going extremely well – over 50% of the available tickets have sold in just two weeks! This is no longer a conference we are planning – this is a conference that is happening and that makes the Organisation Team very happy!

Speakers have been scheduled. Delegates are coming. We now urgently need to expand our team of volunteers to manage and assist all these wonderful visitors to ensure that LCA 2015 is unforgettable – for all the right reasons.

Volunteers are needed to register our delegates, show them to their accommodation, guide them around the University and transport them here and there. They will also manage our speakers by making sure that their presentations don't overrun, recording their presentations and assisting them in many other ways during their time at the conference.

Anyone who has been a volunteer before will tell you that it’s an extremely busy time, but so worthwhile. It’s rewarding to know that you’ve helped everybody at the conference to get the most out of it. There's nothing quite like knowing that you've made a difference.

But there is more: membership has other privileges and advantages! You don't just get to meet the delegates and speakers, you get to know many of them while helping them as well. You get a unique opportunity to get behind the scenes and close to the action. You can forge new relationships with amazing, interesting, wonderful people you might not ever get the chance to meet any other way.

Every volunteer's contribution is valued and vital to the overall running and success of the conference. We need all kinds of skills too – not just the technically savvy ones (although knowing which is the noisy end of a walkie-talkie may help). We want you! We need you! It just wouldn't be the same without you! If you would like to be an LCA 2015 volunteer it's easy to register. Just go to our volunteer page for more information. We review volunteer registrations regularly and if you’re based in Auckland (or would like a break away from wherever you are) then we would love to meet you at one of our regular meetings. Registered volunteers will receive information about these via email.


Rian van der Merwe on A View from a Different Valley: How to Do What You Love, the Right Way

a list apart - Thu 23rd Oct 2014 22:10

Every time I start a new job I take my dad to see my office. He loves seeing where I work, and I love showing him. It’s a thing. As much as I enjoy this unspoken ritual of ours, there’s always a predictable response from my dad that serves as a clear indicator of our large generation gap. At some point he’ll ask a question along the lines of, “So… no one has an office? You just sit out here in the open?” I’ve tried many times to explain the idea of colocation and collaborative work, but I don’t think it’s something that will ever compute for him.

This isn’t a criticism of how he’s used to doing things (especially if he’s reading this… Hi Dad!). But it shows how our generation’s career goals have changed from “I want the corner office!” to “I just want a space where I’m able to do good work.” We’ve mostly gotten over our obsession with the size and location of our physical workspaces. But we haven’t completely managed to let go of that corner office in our minds: the job title.

Even that’s starting to change, though. This tweet from Jack Dorsey has received over 1,700 retweets so far:

Titles, like "CEO", get in the way of doing the right thing. Respect to the people who ignore titles, and fight like hell for what is right.

— Jack (@jack) September 29, 2012

In episode 60 of Back to Work, Merlin Mann and Dan Benjamin discuss what they call “work as platform.” The basic idea is that we need to stop looking at work as a thing you do for a company. If you view your career like that, your success will always be linked to the success of the company, as well as your ability to survive within that particular culture. You will be at the mercy of people who are concerned about their own careers, not yours.

Instead, if you think about your work as platform, your attention starts to shift to using whatever job you are doing to develop your skills further, so that you’re never at the mercy of one company. Here’s Merlin, from about 31 minutes into that episode of Back to Work (edited down slightly):

If you think just in terms of jobs, you become a little bit short-sighted, because you tend to think in terms of, “What’s my next job?”, or “If I want good jobs in my career, what do I put on my resume?” So in terms of what you can do to make the kinds of things you want, and have the kind of career you like, I think it’s very interesting to think about what you do in terms of having a platform for what you do.

There’s always this thing about “doing what you love.” Well, doing what you love might not ever make you a nickel. And if doing what you love sucks, no one is ever going to see it, like it, and buy it, which is problematic. That’s not a branding problem, that’s a “you suck” problem. So the platform part is thinking about what you do not simply in terms of what your next job is — it’s a way of thinking about how all of the things that you do can and should and do feed into each other.

I think it’s worth giving yourself permission to take a dip into the douche-pool, and think a little bit about what platform thinking might mean to you. Because if you are just thinking about how unhappy you are with your job your horizons are going to become pretty short, and your options are going to be very limited.

So here’s how I want to pull this all together. Just like we’ve moved on from the idea that the big office is a big deal, we have to let go of the idea that a big enough title is equal to a successful career. Much more important is that we figure out what it is that we want to spend our time and attention on — and then work at our craft to make that our platform. Take a realistic look at how much agency you have at work — it may be more than you realize — and try to get the responsibilities that interest you most, just to see where it takes you.

This is also why side projects are so important. They help you use the areas you’re truly interested in to hone your skills by making something real, just for you, because you want to. And as you get really good, you’ll be able to use those skills more in your current role, which will almost certainly make for a more enjoyable job. But it could even turn into a new role at your company — or who knows, maybe even your own startup.

If you go down this path, little by little you’ll discover that you suddenly start loving what you do more and more. Doing what you love doesn’t necessarily mean quitting your job and starting a coffee shop. Most often, it means building your own platform, and crafting your own work, one step at a time.


Handshakes and contracts, the future and the past

Seth Godin - Thu 23rd Oct 2014 19:10
If you lease a car, borrow money for school or engage in some other complex transaction, there's a contract to sign. It's filled with rules and obligations, and the profit-maximizing finance organization does everything it can to do as little...         Seth Godin

Jonathan Adamczewski: Assembly Primer Part 7 — Working with Strings — ARM

Planet Linux Australia - Thu 23rd Oct 2014 16:10

These are my notes for where I can see ARM varying from IA32, as presented in the video Part 7 — Working with Strings.

I’ve not remotely attempted to implement anything approximating optimal string operations for this part — I’m just working my way through the examples and finding obvious mappings to the ARM arch (or, at least what seem to be obvious). When I do something particularly stupid, leave a comment and let me know :)

Working with Strings

.data
HelloWorldString:
    .asciz "Hello World of Assembly!"
H3110:
    .asciz "H3110"

.bss
.lcomm Destination, 100
.lcomm DestinationUsingRep, 100
.lcomm DestinationUsingStos, 100

Here’s the storage that the provided example StringBasics.s uses. No changes are required to compile this for ARM.

1. Simple copying using movsb, movsw, movsl

@movl $HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString

@movl $Destination, %edi
movw r1, #:lower16:Destination
movt r1, #:upper16:Destination

@movsb
ldrb r2, [r0], #1
strb r2, [r1], #1

@movsw
ldrh r3, [r0], #2
strh r3, [r1], #2

@movsl
ldr r4, [r0], #4
str r4, [r1], #4

More visible complexity than IA32, but not too bad overall.

IA32’s movs instructions implicitly take their source and destination addresses from %esi and %edi, and increment/decrement both. Because of ARM’s load/store architecture, separate load and store instructions are required in each case, but there is support for indexing of these registers:

ARM addressing modes

According to ARM A8.5, memory access instructions commonly support three addressing modes:

  • Offset addressing — An offset is applied to an address from a base register and the result is used to perform the memory access. It’s the form of addressing I’ve used in previous parts and looks like [rN, offset]
  • Pre-indexed addressing — An offset is applied to an address from a base register, the result is used to perform the memory access and also written back into the base register. It looks like [rN, offset]!
  • Post-indexed addressing — An address is used as-is from a base register for memory access. The offset is applied and the result is stored back to the base register. It looks like [rN], offset and is what I’ve used in the example above.
2. Setting / Clearing the DF flag

ARM doesn’t have a DF flag (to the best of my understanding). It could perhaps be simulated through the use of two instructions and conditional execution to select the right direction. I’ll look further into conditional execution of instructions on ARM in a later post.

3. Using Rep

ARM also doesn’t appear to have an instruction quite like IA32’s rep instruction. A conditional branch and a decrement will be the long-form equivalent. As branches are part of a later section, I’ll skip them for now.

@movl $HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString

@movl $DestinationUsingRep, %edi
movw r1, #:lower16:DestinationUsingRep
movt r1, #:upper16:DestinationUsingRep

@movl $25, %ecx # set the string length in ECX
@cld # clear the DF
@rep movsb
@std
ldm r0!, {r2,r3,r4,r5,r6,r7}
ldrb r8, [r0,#0]
stm r1!, {r2,r3,r4,r5,r6,r7}
strb r8, [r1,#0]

To avoid conditional branches, I’ll start with the assumption that the string length is known (25 bytes). One approach would be using multiple load instructions, but the load multiple (ldm) instruction makes it somewhat easier for us — one instruction to fetch 24 bytes, and a load register byte (ldrb) for the last one. Using the ! after the source-address register indicates that it should be updated with the address of the next byte after those that have been read.

The storing of the data back to memory is done analogously. Store multiple (stm) writes 6 registers×4 bytes = 24 bytes (with the ! to have the destination address updated). The final byte is written using strb.

4. Loading string from memory into EAX register

@cld
@leal HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString

@lodsb
ldrb r1, [r0, #0]
@movb $0, %al
mov r1, #0
@dec %esi @ unneeded. equiv: sub r0, r0, #1

@lodsw
ldrh r1, [r0, #0]
@movw $0, %ax
mov r1, #0
@subl $2, %esi @ Make ESI point back to the original string. unneeded. equiv: sub r0, r0, #2

@lodsl
ldr r1, [r0, #0]

In this section, we are shown how the IA32 lodsb, lodsw and lodsl instructions work. Again, they have implicitly assigned register usage, which isn’t how ARM operates.

So, instead of a simple, no-operand instruction like lodsb, we have a ldrb r1, [r0, #0] loading a byte from the address in r0 into r1. Because I didn’t use post indexed addressing, there’s no need to dec or subl the address after the load. If I were to do so, it could look like this:

ldrb r1, [r0], #1
sub r0, r0, #1

ldrh r1, [r0], #2
sub r0, r0, #2

ldr r1, [r0], #4

If you trace through it in gdb, look at how the value in r0 changes after each instruction.

5. Storing strings from EAX to memory

@leal DestinationUsingStos, %edi
movw r0, #:lower16:DestinationUsingStos
movt r0, #:upper16:DestinationUsingStos

@stosb
strb r1, [r0], #1
@stosw
strh r1, [r0], #2
@stosl
str r1, [r0], #4

Same kind of thing as for the loads. Writes the letters in r1 (being “Hell” — leftovers from the previous section) into DestinationUsingStos (the result being “HHeHell”). String processing on little endian architectures has its appeal.

6. Comparing Strings

@cld
@leal HelloWorldString, %esi
movw r0, #:lower16:HelloWorldString
movt r0, #:upper16:HelloWorldString
@leal H3110, %edi
movw r1, #:lower16:H3110
movt r1, #:upper16:H3110

@cmpsb
ldrb r2, [r0,#0]
ldrb r3, [r1,#0]
cmp r2, r3
@dec %esi
@dec %edi
@not needed because of the addressing mode used

@cmpsw
ldrh r2, [r0,#0]
ldrh r3, [r1,#0]
cmp r2, r3
@subl $2, %esi
@subl $2, %edi
@not needed because of the addressing mode used

@cmpsl
ldr r2, [r0,#0]
ldr r3, [r1,#0]
cmp r2, r3

Where IA32’s cmps instructions implicitly load through the pointers in %edi and %esi, explicit loads are needed for ARM. The compare then works in pretty much the same way as for IA32, setting condition code flags in the current program status register (cpsr). If you run the above code, and check the status registers before and after execution of the cmp instructions, you’ll see the zero flag set and unset in the same way as is demonstrated in the video.

The condition code flags are:

  • bit 31 — negative (N)
  • bit 30 — zero (Z)
  • bit 29 — carry (C)
  • bit 28 — overflow (V)

There’s other flags in that register — all the details are on page B1-16 and B1-17 in the ARM Architecture Reference Manual.

And with that, I think we’ve made it (finally) to the end of this part for ARM.

Other assembly primer notes are linked here.

Stewart Smith: CFP for Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015

Planet Linux Australia - Thu 23rd Oct 2014 10:10

This is the Call for Papers for the Developer, Testing, Release and Continuous Integration Automation Miniconf at linux.conf.au 2015 in Auckland.

This miniconf is all about improving the way we produce, collaborate, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

– code review tools and techniques (e.g. gerrit)

– continuous integration tools (e.g. jenkins)

– CI techniques (e.g. gated trunk, zuul)

– testing tools and techniques (e.g. subunit, fuzz testing tools)

– release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.

– applying CI in your workplace/project

We’re looking for talks about technology *and* the human side of this.

Speakers at this miniconf can get a miniconf only pass, but to attend the main conference, you’ll need to organize that yourself.

There will be a projector, and there is a possibility the talk will be recorded (depending on if the conference A/V is up and running) – if recorded, talks will be posted with the same place with the same CC license as main LCA talks are.

CFP is open until midnight November 21st 2014.

http://goo.gl/forms/KZI1YDDw8n


Andrew Pollock: [life] Day 266: Prep play date, shopping and a play date

Planet Linux Australia - Thu 23rd Oct 2014 10:10

Zoe's sleep seems a bit messed up lately. She yelled out for me at 3:53am, and I resettled her, but she wound up in bed with me at 4:15am anyway. It took me a while to get back to sleep, maybe around 5am, but then we slept in until about 7:30am.

That made for a bit of a mad rush to get out the door to Zoe's primary school for her "Prep Play Date" orientation. We managed to make it out the door by a bit after 8:30am.

It appears to take about 15 minutes to scooter to school, which is okay. With local traffic being what it is, I think this will be a nice way to get to and from school next year, weather permitting.

We signed in, and Zoe got paired up with an existing (extremely tall) Prep student to be her buddy. The other girl was very keen to hold Zoe's hand, which Zoe was a bit dubious about at first, but they got there eventually.

The kids spent about 20 minutes rotating through the three classrooms, with a different buddy in each classroom. They were all given a 9-station name badge when they signed in, and they got a sticker for each station that they visited in each classroom.

It was a really nice morning, and I discovered there's one other girl from Zoe's Kindergarten going to her school, so I made a point of introducing myself to her mother.

I've got a really great vibe about the school, and Zoe enjoyed the morning. I'm looking forward to the next stage of her education.

We scootered home afterwards, and Zoe got the speed wobbles going down the hill and had a spectacular crash, luckily without any injuries thanks to all of her safety gear.

Once we got home, we headed out to the food wholesaler at West End to pick up a few bits and pieces, and then I had to get to Kindergarten to chair the monthly PAG meeting. I dropped Zoe at Megan's place for a play date while I was at the Kindergarten.

After the meeting, I picked up Zoe and we headed over to Westfield Carindale to buy a birthday present for Zoe's Kindergarten friend, Ivy, who is having a birthday party on Saturday.

We got home from Carindale with just enough time to spare before Sarah arrived to pick Zoe up.

I then headed over to Anshu's place for a Diwali dinner.


linux.conf.au News: Speaker Feature: Audrey Lobo-Pulo, Jack Moffitt

Planet Linux Australia - Thu 23rd Oct 2014 08:10
Audrey Lobo-Pulo
Evaluating government policies using open source models

10:40am Wednesday 14th January 2015

Dr. Audrey Lobo-Pulo is a passionate advocate of open government and the use of open source software in government modelling. Having started out as a physicist developing theoretical models in the field of high speed data transmission, she moved into the economic policy modelling sphere and worked at the Australian Treasury from 2005 till 2011.

Currently working at the Australian Taxation Office in Sydney, Audrey enjoys discussions on modelling economic policy.

For more information on Audrey and her presentation, see here. You can follow her as @AudreyMatty and don’t forget to mention #LCA2015.



Jack Moffitt
Servo: Building a Parallel Browser

10:40am Friday 16th January 2015

Jack's current project, Chesspark, is an online community for chess players built on top of technologies like XMPP (aka Jabber), AJAX, and Python.

He previously created the Icecast Streaming Media Server, spent a lot of time developing and managing the Ogg Vorbis project, and helped create and run the Xiph.org Foundation. All these efforts exist to create a common, royalty-free, and open standard for multimedia on the Internet.

Jack is also passionate about Free Software and Open Source, technology, music, and photography.

For more information on Jack and his presentation, see here. You can follow him as @metajack and don’t forget to mention #LCA2015.


Learning to Be Flexible

a list apart - Wed 22nd Oct 2014 23:10

As a freelancer, I work in a lot of different code repos. Almost every team I work with has different ideas of how code should be organized, maintained, and structured.

Now, I’m not here to start a battle about tabs versus spaces or alphabetical order of CSS properties versus organizing in terms of concerns (positioning styles, then element layout styles, then whatever else), because I’m honestly not attached to any one system anymore. I used to be a one-tab kind of person, along with not really even thinking about the ordering of my properties, but slowly, over time, I’ve realized that most of that doesn’t really matter. In all the projects I’ve worked on, the code got written and the product or site worked for the users—which is really the most important thing. What gets me excited about projects now is the code, making something work, seeing it work across different devices, seeing people use something I built, not getting upset about how it’s written.

Since I went down the freelance route again earlier this year, I’m working with many different teams and they all have different standards for how their code should be written. What I really want to know when I start a project is what the standards are, so I can adhere to them. For many teams that means a quick look through their documentation (when they have it, it’s a dream come true—there are no questions and I can just get to work). For other teams, it means I ask a lot of questions after I’ve taken a look at the code to verify how they prefer to do things.

Even more so than just thinking about how to write code, there’s the fact that I may be working in straight CSS, Sass, Stylus, Handlebars, plain old HTML, or Jade and I usually roll right along with that as well. Every team makes decisions that suit them and their way of working—I’m there to make life easier by coming in and helping them get a job done, not tell them their whole setup is wrong. The variety keeps me on my toes, but it also helps me remember that there isn’t just one way to do any of this.

What has this really done for me? I’ve started letting go of some things. I have opinions on how to structure and write CSS, but whether it’s written with a pre-processor or not, I don’t always care, and which pre-processor matters less to me as well. Any way you do it, you can get the job done. Choosing what works best for your team is what’s most important, not what anyone outside the team says is the “right” or “only” way to do something.


Taking the plunge

Seth Godin - Wed 22nd Oct 2014 20:10
Maybe that's the problem. Perhaps it's better to commit to wading instead. Ship, sure. Not the giant life-changing, risk-it-all-venture, but the small. When you do a small thing, when you finish it, polish it, put it into the world, you've...         Seth Godin

linux.conf.au News: Speaker Feature: Denise Paolucci, Gernot Heiser

Planet Linux Australia - Wed 22nd Oct 2014 08:10
Denise Paolucci
When Your Codebase Is Nearly Old Enough To Vote

11:35 am Friday 16th January 2015

Denise is one of the founders of Dreamwidth, a journalling site and open source project forked from Livejournal, and one of only two majority-female open source projects.

Denise has appeared at multiple open source conferences to speak about Dreamwidth, including OSCON 2010 and linux.conf.au 2010.

For more information on Denise and her presentation, see here.



Gernot Heiser
seL4 Is Free - What Does This Mean For You?

4:35pm Thursday 15th January 2015

Gernot is a Scientia Professor and the John Lions Chair for operating systems at the University of New South Wales (UNSW).

He is also leader of the Software Systems Research Group (SSRG) at NICTA. In 2006 he co-founded Open Kernel Labs (OK Labs, acquired in 2012 by General Dynamics) to commercialise his L4 microkernel technology.

For more information on Gernot and his presentation, see here. You can follow him as @GernotHeiser and don’t forget to mention #LCA2015.


Joshua Hesketh: OpenStack infrastructure swift logs and performance

Planet Linux Australia - Wed 22nd Oct 2014 01:10

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into an object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication for how things compare.
The set up

Fetching files from object storage is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze giving the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher, we use a bash script to grab the job’s own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).
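
To make the mechanics concrete, here’s a minimal Python sketch of the same upload idea. The real implementation is a bash script; the SWIFT_* variable names, the "logs" container and the log path below are illustrative placeholders, not our actual job configuration.

# Sketch only: grab the job's console log from Jenkins and push it to swift.
# BUILD_URL is a standard Jenkins environment variable; everything else here
# is made up for illustration.
import os
import urllib.request

from swiftclient.client import Connection

console = urllib.request.urlopen(os.environ["BUILD_URL"] + "consoleText").read()

conn = Connection(authurl=os.environ["SWIFT_AUTH_URL"],
                  user=os.environ["SWIFT_USER"],
                  key=os.environ["SWIFT_KEY"])
conn.put_object("logs", os.environ["LOG_PATH"] + "/console.html",
                contents=console, content_type="text/html")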

This approach does, however, mean part of the logs are missing. For example, the fetching and upload processes write to Jenkins’ console log, but because the log has already been fetched, these entries are missing. Therefore the upload wants to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log, but we’ll look at that another time.

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up, it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift, the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly. Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance, so this feature is necessary.
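
As a rough illustration of that ordering, here’s a condensed Python sketch. It is a simplification for this post, not os-loganalyze’s actual code, and it assumes a swiftclient connection has been set up elsewhere.

# Simplified sketch of the disk-then-swift lookup described above.
import os

def open_log(path, conn, container, source=None):
    # Step 2: serve from disk, unless ?source=swift forces swift.
    if source != "swift" and os.path.isfile(path):
        return open(path, "rb")
    # Step 3: fall back to swift, fetching the object in chunks so it
    # can be streamed to the user on the fly.
    try:
        _headers, body = conn.get_object(container, path,
                                         resp_chunk_size=64 * 1024)
        return body  # a generator of chunks
    except Exception:
        return None  # step 5: the caller turns this into a 404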

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked up) and times the results. It does this in two scenarios (a rough sketch of the idea follows the list):

  1. Repeatably fetching the same file over again (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those
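
Here’s a rough reconstruction of the timing loop for illustration. The real script is at the paste link above; the URL is a placeholder, and unlike the real script this lumps the request and response times together.

# Reconstruction of the benchmark idea, not the actual script.
import time
import urllib.request

def fetch(url):
    """Return (time to first response, transfer time, bytes fetched)."""
    t0 = time.time()
    resp = urllib.request.urlopen(url)  # send request, wait for headers
    t1 = time.time()
    body = resp.read()                  # transfer the marked-up log
    t2 = time.time()
    return t1 - t0, t2 - t1, len(body)

log = "http://logs.openstack.org/some/job/console.html"  # placeholder path
for url in (log, log + "?source=swift"):  # disk is the default, then force swift
    print(url, fetch(url))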

I then ran this in two environments.

  1. On my local network the other side of the world to the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.
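
As a single-machine approximation of those five parallel streams (the real runs used ansible to drive five separate cloud servers), a thread pool does the job, reusing fetch() from the sketch above:

# Five concurrent request streams against the log server.
from concurrent.futures import ThreadPoolExecutor

log = "http://logs.openstack.org/some/job/console.html?source=swift"

with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch, [log] * 100))  # 100 requests, 5 at a time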

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.

 

Results

Home computer, sequential requests of one file

 

The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect, the requests for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a connection is made to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so let’s keep analyzing deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t do this because, when requesting files of different sizes (in the next scenario), there is nothing worth comparing (as the file sizes differ). Arguably we could compare them anyway, as the log sizes for identical jobs are similar, but I didn’t think it was interesting.

The file sizes are there for interest sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

Metric                       Standard Deviation   Mean
request sent (ms) – disk     54.89516183          283.9594368
request sent (ms) – swift    43.71917948          282.5074598
response (ms) – disk         56.74750291          373.7328851
response (ms) – swift        194.7547117          531.8043908
transfer (ms) – disk         849.8545127          5091.536092
transfer (ms) – swift        838.9172066          5122.686897
size (KB) – disk             7.121600095          1219.804598
size (KB) – swift            7.311125275          1220.735632

 

I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.

 

Metric                       Standard Deviation   Mean
request sent (ms) – disk     13.88664039          274.9291111
request sent (ms) – swift    14.84054789          276.2813889
response (ms) – disk         44.0860569           364.6289583
response (ms) – swift        115.5299781          503.9393472
transfer (ms) – disk         541.3912899          5008.439028
transfer (ms) – swift        515.4364601          5013.627083
size (KB) – disk             7.038111654          1220.013889
size (KB) – swift            6.98399691           1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.
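
A small sketch of that filtering, assuming the timings are plain Python lists of milliseconds (the sample numbers are made up):

# Move >2-sigma outliers to the end of the series and use the recomputed
# (outlier-free) standard deviation as the error margin. One pass only,
# as described above.
from statistics import mean, stdev

def shuffle_outliers(samples):
    mu, sigma = mean(samples), stdev(samples)
    keep = [s for s in samples if abs(s - mu) <= 2 * sigma]
    outliers = [s for s in samples if abs(s - mu) > 2 * sigma]
    return keep + outliers, stdev(keep)

series, error_margin = shuffle_outliers([370, 365, 372, 368, 2100, 366, 371])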

Then, to get a better indication of the average times, I plotted histograms of each of these metrics.

Here we can see a similar request time.

 

Here it is quite clear that swift is slower at actually responding.

 

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

Metric                       Std Dev       Mean          Std Dev (no outliers)   Mean (no outliers)
request sent (ms) – disk     54.89516183   283.9594368   13.88664039             274.9291111
request sent (ms) – swift    43.71917948   282.5074598   14.84054789             276.2813889
response (ms) – disk         194.7547117   531.8043908   115.5299781             503.9393472
response (ms) – swift        56.74750291   373.7328851   44.0860569              364.6289583
transfer (ms) – disk         849.8545127   5091.536092   541.3912899             5008.439028
transfer (ms) – swift        838.9172066   5122.686897   515.4364601             5013.627083
size (KB) – disk             7.121600095   1219.804598   7.038111654             1220.013889
size (KB) – swift            7.311125275   1220.735632   6.98399691              1220.888889

 

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift, and so on, this evens things out, causing latency in both sources as seen.

 

Swift is very much slower here.

 

Although comparable in transfer times. Again this is likely due to my network limitation.

 

The size histograms don’t really add much here.

 

Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise), but I didn’t do that at the time unfortunately. Also we need to keep in mind that others may be accessing the log server, and therefore any request in both my testing and normal use is going to have competing load.

 

I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).

 

You can see now that we’re closer to the server that swift is noticeably slower. This is confirmed by the averages:

 

Metric                       Std Dev        Mean          Std Dev (no outliers)   Mean (no outliers)
request sent (ms) – disk     32.42528982    4.87337544    1.375875503             3.487575109
request sent (ms) – swift    9.749368282    4.05191168    0.8390193564            3.418433003
response (ms) – disk         245.3197219    39.51898688   28.38377158             7.550682037
response (ms) – swift        781.8807534    245.0792916   191.4744331             96.65978872
transfer (ms) – disk         1082.253253    1553.098063   878.6703183             1389.405618
transfer (ms) – swift        2737.059103    4167.07851    2132.654898             3660.501404
size (KB) – disk             0              1226          0                       1226
size (KB) – swift            0              1232          0                       1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation in the requests has now gotten very small. We’ve clearly made a difference moving closer to the logserver.

 

Very nice and close.

 

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.

 

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

Again the graph is too crowded to see what is happening so I took a rolling average.

 

 

Metric                       Std Dev        Mean          Std Dev (no outliers)   Mean (no outliers)
request sent (ms) – disk     0.7227904332   3.515711867   0.4798803247            3.379718381
request sent (ms) – swift    0.8900549012   3.56191383    0.4966553679            3.405770445
response (ms) – disk         434.8600827    145.5941102   109.6540634             70.31323922
response (ms) – swift        909.095546     189.947818    171.1102999             86.16522485
transfer (ms) – disk         1913.9587      2427.776165   1348.939342             2016.900047
transfer (ms) – swift        2132.992773    2875.289455   1440.2851               2426.312363
size (KB) – disk             6.341238774    1219.940039   6.137625464             1220.318912
size (KB) – swift            7.659678352    1221.384913   7.565931993             1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.

 

I’m not sure why we have a sinc function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis other than the fact that both disk and swift match.

 

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.

 

Swift still loses out on transfers but again does a much better job of keeping up.

 

Error sources

I haven’t accounted for any of the following swift intricacies (in terms of caches etc.):

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to stay authenticated with swift; however:

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow down, the effect was only on the order of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.


Axiomatic CSS and Lobotomized Owls

a list apart - Wed 22nd Oct 2014 01:10

At CSS Day last June I introduced, with some trepidation, a peculiar three-character CSS selector. Called the “lobotomized owl selector” for its resemblance to an owl’s vacant stare, it proved to be the most popular section of my talk.

I couldn’t tell you whether the attendees were applauding the thinking behind the invention or were, instead, nervously laughing at my audacity for including such an odd and seemingly useless construct. Perhaps I was unwittingly speaking to a room full of paid-up owl sanctuary supporters. I don’t know.

The lobotomized owl selector looks like this:

* + *

Despite its irreverent name and precarious form, the lobotomized owl selector is no mere thought experiment for me. It is the result of ongoing experimentation into automating the layout of flow content. The owl selector is an “axiomatic” selector with a voracious purview. As such, many will be hesitant to use it, and it will terrify some that I include it in production code. I aim to demonstrate how the selector can reduce bloat, speed up development, and help automate the styling of arbitrary, dynamic content.

Styling by prescription

Almost universally, professional web interface designers (engineers, whatever) have accustomed themselves to styling HTML elements prescriptively. We conceive of an interface object, then author styles for the object that are inscribed manually in the markup as “hooks.”

Despite only pertaining to presentation, not semantic interoperability, the class selector is what we reach for most often. While elements and most attributes are predetermined and standardized, classes are the placeholders that gift us with the freedom of authorship. Classes give us control.

.my-module { /* ... */ }

CSS frameworks are essentially libraries of non-standard class-based ciphers, intended for forming explicit relationships between styles and their elements. They are vaunted for their ability to help designers produce attractive interfaces quickly, and criticized for the inevitable accessibility shortcomings that result from leading with style (form) rather than content (function).

<!-- An unfocusable, semantically inaccurate "button" -->
<a class="ui-button">press me</a>

Whether you use a framework or your own methodology, the prescriptive styling mode also prohibits non-technical content editors. It requires not just knowledge of presentational markup, but also access to that markup to encode the prescribed styles. WYSIWYG editors and tools like Markdown necessarily lack this complexity so that styling does not impede the editorial process.

Bloat

Regardless of whether you can create and maintain presentational markup, the question of whether you should remains. Adding presentational ciphers to your previously terse markup necessarily engorges it, but what’s the tradeoff? Does this allow us to reduce bloat in the stylesheet?

By choosing to style entirely in terms of named elements, we make the mistake of asserting that HTML elements exist in a vacuum, not subject to inheritance or commonality. By treating the element as “this thing that needs to be styled,” we are liable to redundantly set some values for the element in hand that should have already been defined higher in the cascade. Adding new modules to a project invites bloat, which is a hard thing to keep in check.

.module-new { /* So… what’s actually new here? */ }

From pre-processors with their addition of variables to object-based CSS methodologies and their application of reusable class “objects,” we are grappling with sandbags to stem this tide of bloat. It is our industry’s obsession. However, few remedies actually eschew the prescriptive philosophy that invites bloat in the first place. Some interpretations of object-oriented CSS even insist on a flattened hierarchy of styles, citing specificity as a problem to be overcome—effectively reducing CSS to SS and denying one of its key features.

I am not writing to condemn these approaches and technologies outright, but there are other methods that just may be more effective for certain conditions. Hold onto your hats.

Selector performance

I’m happy to concede that when some of you saw the two asterisks in * + * at the beginning of this article, you started shaking your head with vigorous disapproval. There is a precedent for that. The universal selector is indeed a powerful tool. But it can be good powerful, not just bad powerful. Before we get into that, though, I want to address the perceived performance issue.

All the studies I’ve read, including Steve Souders’ and Ben Frain’s, have concluded that the comparative performance of different CSS selector types is negligible. In fact, Frain concludes that “sweating over the selectors used in modern browsers is futile.” I’ve yet to read any compelling evidence to counter these findings.

According to Frain, it is, instead, the quantity of CSS selectors—the bloat—that may cause issues; he mentions unused declarations specifically. In other words, embracing class selectors for their “speed” is of little use when their proliferation is causing the real performance issue. Well, that and the giant JPEGs and un-subsetted web fonts.

Contrariwise, the * selector’s simultaneous control of multiple elements increases brevity, helping to reduce file size and improve performance.

The real trouble with the universal selector is that it alone doesn’t represent a very compelling axiom—nothing more intelligent than “style whatever,” anyway. The trick is in harnessing this basic selector and forming more complex expressions that are context-aware.

Dispensing with margins

The trouble with confining styles to objects is that not everything should be considered a property of an object per se. Take margins: margins are something that exist between elements. Simply giving an element a top margin makes no sense, no matter how few or how many times you do it. It’s like applying glue to one side of an object before you’ve determined whether you actually want to stick it to something or what that something might be.

.module-new { margin-bottom: 3em; /* what, all the time? */ }

What we need is an expression (a selector) that matches elements only in need of margin. That is, only elements in a contextual relationship with other sibling elements. The adjacent sibling combinator does just this: using the form x + n, we can add a top margin to any n where x has come before it.
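As a minimal sketch, with concrete elements standing in for x and n (the heading-and-paragraph pairing is my own example, not a rule from any particular project):

h2 + p { margin-top: 1.5em; } /* a paragraph directly following an h2 gets one line of space above it */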

This would, as with standard prescriptive styling, become verbose very quickly if we were to create rules for each different element pairing within the interface. Hence, we adopt the aforementioned universal selector, creating our owl face. The axiom is as follows: “All elements in the flow of the document that follow other elements must receive a top margin of one line.”

* + * { margin-top: 1.5em; }

Completeness

Assuming that your paragraphs’ font-size is 1em and their line-height is 1.5, we just set a default margin of one line between all successive flow elements of all varieties occurring in any order. Neither we developers nor the folks building content for the project have to worry about any elements being forgotten and failing to adopt at least a standard margin when rendered one after the other. To achieve this the prescriptive way, we’d have to anticipate specific elements and give them individual margin values. Boring, verbose, and liable to be incomplete.

Instead of writing styles, we’ve created a style axiom: an overarching principle for the layout of flow content. It’s highly maintainable, too; if you change the line-height, just change this singular margin-top value to match.
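For instance (the values here are illustrative): if the body line-height drops from 1.5 to 1.4, the owl value simply follows it.

body { font-size: 1em; line-height: 1.4; }

* + * { margin-top: 1.4em; /* one line of space under the new rhythm */ }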

Contextual awareness

It’s better than that, though. By applying margin between elements only, we don’t generate any redundant margin (exposed glue) destined to combine with the padding of parent elements. Compare solution (a), which adds a top margin to all elements, with solution (b), which uses the owl selector.
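In code, the comparison is roughly this (a minimal sketch):

* { margin-top: 1.5em; } /* (a) every element, including first children, whose margins collide with parent padding */

* + * { margin-top: 1.5em; } /* (b) only elements that follow a sibling */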

The diagrams accompanying the original article show margin in dark gray and padding in light gray.

Now consider how this behaves in regard to nesting. As illustrated, using the owl selector and just a margin-top value, no first or last element of a set will ever present redundant margin. Whenever you create a subset of these elements, by wrapping them in a nested parent, the same rules that apply to the superset will apply to the subset. No margin, regardless of nesting level, will ever meet padding. With a sort of algorithmic elegance, we protect against compound whitespace throughout our interface.

This is considerably less verbose and more robust than approaching the problem unaxiomatically and removing the leftover glue after the fact, as Chris Coyier reluctantly proposed in “Spacing The Bottom of Modules”. It was this article, I should point out, that helped give me the idea for the lobotomized owl.

.module > *:last-child,
.module > *:last-child > *:last-child,
.module > *:last-child > *:last-child > *:last-child {
  margin: 0;
}

Note that this only works having defined a “module” context (a big ask of a content editor), and requires estimating possible nesting levels. Here, it supports up to three.

Exception-driven design

So far, we’ve not named a single element. We’ve simply written a rule. Now we can take advantage of the owl selector’s low specificity and start judiciously building in exceptions, taking advantage of the cascade rather than condemning it as other methods do.

Book-like, justified paragraphs

p { text-align: justify; }

p + p { margin-top: 0; text-indent: 2em; }

Note that only successive paragraphs are indented, which is conventional—another win for the adjacent sibling combinator.

Compact modules

.compact * + * { margin-top: 0.75em; }

You can employ a little class-based object orientation if you like, to create a reusable style for more compact modules. In this example, all elements that need margin receive a margin of only half a line.
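Applied in markup, that might look like this (the aside and its contents are hypothetical):

<aside class="compact">
  <h3>Related links</h3>
  <p>Half-line spacing now applies between these children.</p>
</aside>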

Widgets with positioning

.margins-off > * { margin-top: 0; }

The owl selector is an expressive selector and will affect widgets like maps, where everything is positioned exactly. This is a simple off switch. Increasingly, widgets like these will occur as web components where our margin algorithm will not be inherited anyway. This is thanks to the style encapsulation feature of Shadow DOM.
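In markup, the off switch might be applied like so (the map example is hypothetical):

<div class="margins-off">
  <!-- exactly positioned widget content; direct children here no longer receive the owl's top margin -->
</div>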

The beauty of ems

Although a few exceptions are inevitable, because we harness the em unit in our margin value, margins already adjust automatically according to another property: font-size. In any instance where we adjust font-size, the margin will adapt to it: one-line spaces remain one-line spaces. This is especially helpful when setting an increased or reduced body font-size via a @media query.

When it comes to headings, there’s still more good fortune. Having set heading font sizes in your stylesheet in ems, appropriate margin (leading whitespace) for each heading has been set without you writing a single line of additional code.
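A quick sketch of why (the sizes are illustrative): em margins compute against the element’s own font-size, so a larger heading earns proportionally more space above it.

h2 { font-size: 2em; } /* the owl's margin-top: 1.5em now computes to 3em in root terms: 1.5 × the h2's own font-size */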

Phrasing elements

This style declaration is intended to be inherited. That is how it, and CSS in general, is designed to work. However, I appreciate that some will be uncomfortable with just how voracious this selector is, especially after they have become accustomed to avoiding inheritance wherever possible.

I have already covered the few exceptions you may wish to employ, but, if it helps further, remember that phrasing elements with a typical display value of inline will receive the top margin but be unaffected in terms of layout. Inline elements respect only horizontal margin; this is specified, standard behavior across all browsers.

If you find yourself overriding the owl selector frequently, there may be deeper systemic issues with the design. The owl selector deals with flow content, and flow content should make up the majority of your content. I don’t advise depending heavily on positioned content in most interfaces, because it breaks implicit flow relationships. Even grid systems, with their floated columns, should require no more than a simple .row > * selector applying margin-top: 0 to reset them.
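Rendered as code, that reset is simply the selector named above (the .row class is an assumption about the grid system in use):

.row > * { margin-top: 0; /* columns sit side by side; the grid's own floats handle horizontal layout */ }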

Conclusion

I am a very poor mathematician, but I have a great fondness for Euclid’s postulates: a set of irreducible rules, or axioms, that form the basis for complex and beautiful geometries. Thanks to Euclid, I understand that even the most complex systems must depend on foundational rules, and CSS is no different. Although modularization of a complex interface is a necessary step in its maturation, any interface that does not follow basic governing tenets is going to lack clarity.

The owl selector allows you to control flow content, but it is also a way of relinquishing control. By styling elements according to context and circumstance, we accept that the structure of content is—and should be—mutable. Instead of prescribing the appearance of individual items, we build systems to anticipate them. Instead of prescribing the appearance of the interface as a whole, we let the content determine it. We give control back to the people who would make it.

When you turn off CSS for a webpage altogether, you should notice two things. First, the page is unfalteringly flexible: the content fits the viewport regardless of its dimensions. Second—provided you have written standard, accessible markup—you should see that the content is already styled in a way that is, if not highly attractive, then reasonably traversable. The browser’s user agent styles take care of that.

Our endeavors to reclaim and enhance the innate device independence offered by user agents are ongoing. It’s time we worked on reinstating content independence as well.

Categories: thinktime

The Specialized Web: Working with Subject-Matter Experts

a list apart - Wed 22nd Oct 2014 01:10

The time had come for The Big Departmental Website Redesign, and my content strategist heart was all aflutter. Since I work at a research university, the scope wasn’t just the department’s site—there were also 20 microsites focusing on specific faculty projects. Each one got an audit, an inventory, and a new strategy proposal.

I met one-on-one with each faculty member to go over the plans, and they loved them. Specific strategy related to their users and their work! Streamlined and clarified content to help people do what needed doing! “Somebody pinch me,” I enthused after another successful and energizing meeting.

Don’t worry, the pinch came.

I waltzed into my next microsite meeting, proud of my work and capabilities. I outlined my grand plan to this professor, but instead of being met with the enthusiasm I expected, I promptly received a brick wall of “not interested.” She dismissed my big strategy with frowns and flat refusals, without elaboration. Not to be deterred, I took a more specific tack, pointing out that the photos on the site felt disconnected from the research. No dice: she insisted that the photos not only needed to stay, but were critical to understanding the heart of the research itself.

She shot down idea after idea, all the while maintaining that the site should somehow be better yet not change. My frustration mounted, and I finally pulled my papers together and asked, “Do you really even need a website?!” Of course, she scoffed. Meeting over.

Struggles with subject-matter experts (SMEs) are as diverse as the subject-matter experts themselves. Whether they’re surgeons, C-level executives, engineers, policy makers, faculty—we as web workers need SMEs for their specialized content knowledge. Arming yourself with the right tools, skills, and mentalities will make your work and projects run smoother for everyone on your team—SME included.

The right frame of mind

Know that nobody comes to the table with a clean slate. While the particulars may be new—a web presence, a social media campaign, a new database-driven tool—projects aren’t.

When starting off a project, I’ll ask each person why they’re at the table. Even though it may be obvious why the SME is on the team, each person gets equal time (no more than a minute or two) to state how what they do relates to the project or outcome. You’re all qualified to be there, and stating those qualifications not only builds familiarity, but provides everyone with a picture of the team as a whole.

I see SMEs as colleagues and co-collaborators, no matter what they may think of me and my lack of similar specialized knowledge. I don’t come to them from a service mentality—that they give me their great ideas and I humbly craft them to be web-ready. We’re working together to create something that can serve and help the user.

Listening for context

After my disastrous initial meeting with the prickly professor, I gave myself some time to calm down, and scheduled another meeting with one thing on my agenda: listening. I knew I was missing part of the story, and when we sat down again, I told the SME that I only wanted her to talk to me about the site, the content, and the research.

I wasn’t satisfied with her initial surface-level responses, because they weren’t solvable problems. To find the deeper root causes of her reluctance, I busted out my friends the Five Ws (and that tagalong how). When she insisted something couldn’t be removed or changed, I breezed right past why, because it wasn’t getting me anywhere. Instead, I asked: when did you choose this image? Where did this image come from? If it’s so essential to the site, it must have a history. I kept asking questions until I understood the context that existed already around this site. Once I understood the context, I could identify the need that content served, and could make sure that need was addressed, rather than just cutting it out.

Through this deeper line of questioning, I learned that the SME had been through an earlier redesign process with a different web team. They started off much in the same way I had—with big plans for her content, and not a lot of time for her. The design elements and photos that she was determined to hang on to? That was all that she had been able to control in the process before.

By swooping in with my ideas, I was just another Web Person to her, with mistrust feeding off old—but still very real—feelings of being ignored and passed over. It was my responsibility to build the working relationship back up and make it productive.

In the end, the SME and I agreed to start off with only a few changes—moving to a 960-pixel width and removing dead links—and left the rest of the content and structure as-is in the migration. This helped build her trust that I would not only listen to her, but be a good steward of her content. When we revisited the content later on, she was much more receptive to all my big ideas.

If someone seems afraid, ornery, reluctant, distrustful, or any other work-hampering trait, they’re likely not doing it just to be a jerk—there are histories, insecurities, and fears at work beneath the less-than-ideal behavior, as Kerry-Anne Gilowey points out in her presentation “The People Puzzle: Making the Pieces Fit.”

Listening is a key skill here: let them be heard, and try to uncover what’s at the root of their resistance. Some people may have a natural affinity for these people skills, but anyone will benefit from spending time practicing and working on them.

Tools before strategy, heading for tragedy

Being a good listener, however, is not a simple Underpants Gnome scheme toward project success:

  1. Listen to your frustrating SME
  2. ???
  3. PROFIT

Sometimes you and your SME are on the same page, ready to hop right in to Shiny New Project World! And hey, they have a great idea of what that new project is, and it is totally a Facebook page. Or a Twitter feed. Or an Instagram account, even though there is nothing to take photographs of.

This doesn’t necessarily indicate a mentality of “Social media, how hard can it be!” Instead, such exuberance signals your SME’s desire to be involved in the work.

In the case of social media like Facebook or Twitter, the SME knows there is a conversation, a connection, happening somewhere, and they want to be a part of it. They may latch onto the thing they’ve heard of—maybe they check out photos of their friend’s kids on Facebook, or saw the use of hashtags mentioned during a big event like the World Cup. They’re not picking a specific tool just to be stubborn—they often just don’t have a clue as to how many options they actually have.

Sometimes the web is a little freaky, so we might as well stare it in the face together:

The Conversation Prism, a visual map of social media.


Each wedge of this map is a different type of service, and inside the wedge are the sites or tools or apps that offer that service. This is a great way to show the SME the large toolbox at our disposal, and the need to be mindful and strategic in our selection.

After peering at the glorious toolbox of possible options, it becomes clear we’ll need a strategy to pick the right tool—an Allen wrench is great for building an IKEA bookshelf, but is lousy for tearing down drywall. I start my SME off with homework—a few simple, top-level questions:

  1. Who is this project/site/page for?
  2. Who is this project/site/page not for?

Oftentimes, this is the first time the SME has really thought about audience. If the answer to Question 1 is “everyone,” I start to ask about specific stakeholder groups: Customers? Instructors? Board Members? Legislators? Volunteers? As soon as we can get one group in the “not for” column, the conversation moves forward more easily.

An SME who says a website is “for everyone” is not coming from a place of laziness or obstinacy; the SMEs I work with simply want their website to be the most helpful to the most people.

  3. What other sites out there are like, or related to, the one we hope to make?

SMEs know who their peers, their competitors, and their colleagues are. While you may toil for hours, days, or weeks looking at material you think is comparable, your SME will be able to rattle off people or projects for you to check out in the blink of an eye. Their expertise saves you time.

There are obviously a lot more questions that get asked about a project, but these two are a great start to collaborative work, and function independently of specific tools. They facilitate an ongoing discussion about the Five Ws, and lay a good foundation to think about the practical side of the how.

Get yourself (and your project, and your team) right

It is possible to have a great working relationship with an SME. The place to start is with people—meet your SME, and introduce yourself! Meet as soon as the project starts, or even earlier.

Go to their stomping grounds

If you work with a large group of SMEs, find out where and when they gather (board meetings, staff retreats, weekly team check-ins) and get on the agenda. I managed to grab five minutes of a faculty meeting and told everyone who I was, a very basic overview of what I did, and that I was looking forward to working together—which sped up putting faces to names.

Find other avenues

If you’re having trouble locating the SMEs—either because they’re phenomenally busy or reluctant to work together—try tracking down an assistant. These assistants might have access to some of the same specialized knowledge as your SME (in the case of a research assistant), or they could have more access to your SME themselves (in the case of an executive assistant or other calendar-wrangler). Assistants are phenomenal people to know and respect in either of these cases; build a good, trusting relationship with them, and projects will move forward without having to wait for one person’s calendar to clear.

Make yourself available

Similarly, making yourself easier to find can open doors. I told people it was fine to “drop by any time,” which, while true, actually left people with no sense of my availability. When I started establishing set “office hours” instead, I found that drop-in meetings happened more often and more predictably. People knew that from 9 to 11 on Tuesdays and Thursdays, I was happy to talk about any random thing on their mind. I ended up having more impromptu meetings that led to better work.

For those of you about to collaborate, we salute you

SMEs, as stated in their very name, have specialized knowledge I don’t. However, the flip side is also true: my specialized knowledge is something they need so their content can be useful and usable on the web. Though you and your SMEs may be coming from different places, with different approaches to your work, and different skill sets and knowledge, you’ve got to work together to advance the project.

Do the hard work to understand each other, and move forward even if the steps seem tiny (don’t let perfect be the enemy of the good!). Find a seed of respect for each other’s knowledge, nurture it as it grows, and bask in the fruits of your labor—together.

Categories: thinktime

Andrew Pollock: [life] Day 265: Kindergarten and startup stuff

Planet Linux Australia - Tue 21st Oct 2014 21:10

Zoe yelled out for me at 5:15am for some reason, but went back to sleep after I resettled her, and we had a slow start to the day a bit after 7am. I've got a mild version of whatever cold she's currently got, so I'm not feeling quite as chipper as usual.

We biked to Kindergarten, which was a bit of a slog up Hawthorne Road, given the aforementioned cold, but we got there in the end.

I left the trailer at the Kindergarten and biked home again.

I finally managed to get some more work done on my real estate course, and after a little more obsessing over one unit, got it into the post. I've almost got another unit finished as well. I'll try to get it finished in the evenings or something, because I'm feeling very behind, and I'd like to get it into the mail too. I'm due to get the second half of my course material, and I still have one more unit to do after this one I've almost finished.

I biked back to Kindergarten to pick up Zoe. She wanted to watch Megan's tennis class, but I needed to grab some stuff for dinner, so it took a bit of coaxing to get her to leave. I think she may have been a bit tired from her cold as well.

We biked home, and jumped in the car. I'd heard from Matthew's Dad that FoodWorks in Morningside had a good meat selection, so I wanted to check it out.

They had some good roasting meat, but that was about it. I gave up trying to mince my own pork and bought some pork mince instead.

We had a really nice dinner together, and I tried to get her to bed a little bit early. Every time I try to start the bedtime routine early, the spare time manages to disappear anyway.

Categories: thinktime

Biggest vs. best

Seth Godin - Tue 21st Oct 2014 20:10
There's not much overlap. Regardless of how you measure 'best' (elegance, deluxeness, impact, profitability, ROI, meaningfulness, memorability), it's almost never present in the thing that is the most popular. The best restaurant, Seinfeld episode, political candidate, brand of beer, ski...
Categories: thinktime
