
David Rowe: SM1000 Part 3 – Rx Working

Planet Linux Australia - Thu 21st Aug 2014 13:08

After an hour of messing about it turns out a bad solder joint meant U6 wasn’t connected to the ADC1 pin on the STM32F4 (schematic). This was probably the source of “noise” in some of my earlier unit tests. I found it useful to write a program to connect the ADC1 input to the DAC2 output (loudspeaker) and “listen” to the noise. Software signal tracer. Note to self: I must add that sort of analog loopback as a SM1000 menu option. I “cooked” the bad joint for 10 seconds with the soldering iron and some fresh flux and the rx side burst into life.
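
The loopback program itself is tiny. Here is a minimal sketch, assuming the same adc1_read()/dac2_write() driver calls used in the rx code below; the block size is illustrative, not the SM1000 value.

/* "Software signal tracer" sketch: copy the radio input (ADC1) straight
 * to the speaker (DAC2) so you can listen to whatever the ADC is seeing.
 */
#define N 320                         /* samples per block, illustrative   */

short buf[N];

while (1) {
    if (adc1_read(buf, N) == 0)       /* grab a block of samples from ADC1 */
        dac2_write(buf, N);           /* ...and play it out the speaker    */
}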

Here’s a video walk through of the FreeDV Rx demo:

I am really excited by the “analog” feel of the SM1000. Power up, and “off air” speech is coming out of the speaker a few hundred milliseconds later! That’s the benefit of having no operating system (so no boot delay), and of the low latency, fast sync FreeDV design that veterans like Mel Whitten K0PFX have developed after years of pioneering HF DV.

The SM1000 latency is significantly lower than that of the PC version of FreeDV. It’s easy to get “hard” real time performance without an operating system, so it’s safe to use nice small audio buffers. Although to be fair, optimising latency in x86 FreeDV is not something I have explored to date.

The top level of the receive code is pretty simple:



/* ADC1 is the demod in signal from the radio rx, DAC2 is the SM1000 speaker */

nin = freedv_nin(f);                 /* modem samples the demod wants this frame */
nout = nin;
f->total_bit_errors = 0;

if (adc1_read(&adc16k[FDMDV_OS_TAPS_16K], 2*nin) == 0) {
  GPIOE->ODR = (1 << 3);                                        /* TP8 high: processing starts   */
  fdmdv_16_to_8_short(adc8k, &adc16k[FDMDV_OS_TAPS_16K], nin);  /* 16 kHz -> 8 kHz               */
  nout = freedv_rx(f, &dac8k[FDMDV_OS_TAPS_8K], adc8k);         /* demodulate and decode speech  */
  //for(i=0; i<FREEDV_NSAMPLES; i++)                            /* analog loopback for testing   */
  //   dac8k[FDMDV_OS_TAPS_8K+i] = adc8k[i];
  fdmdv_8_to_16_short(dac16k, &dac8k[FDMDV_OS_TAPS_8K], nout);  /* 8 kHz -> 16 kHz               */
  dac2_write(dac16k, 2*nout);                                   /* decoded speech to the speaker */
  //led_ptt(0); led_rt(f->fdmdv_stats.sync); led_err(f->total_bit_errors);
  GPIOE->ODR &= ~(1 << 3);                                      /* TP8 low: processing done      */
}



We read “nin” modem samples from the ADC, convert the sample rate from 16 to 8 kHz, then call freedv_rx(). We then re-sample the “nout” decoded speech samples back up to 16 kHz and send them to the DAC, where they are played out of the loudspeaker.

The commented out “for” loop is the analog loopback code I used to “listen” to the ADC1 noise. There is also some commented out code for blinking LEDs (e.g. if we have sync, bit errors) that I haven’t tested yet (indeed the LEDs haven’t been loaded onto the PCB). I like to hit the highest risk tasks on the check list first.

“GPIOE->ODR” is the GPIO Port E output data register; that’s the code that takes the TP8 line high and low so I can measure the real time CPU load on the oscilloscope. The fraction of each frame that TP8 spends high gives the CPU load straight off the scope.

Running the ADC and DAC at 16 kHz means I can get away without analog anti-aliasing or reconstruction filters. I figure the SSB radio’s filtering can take care of that.
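
For context on why that works: the converters run at twice the codec rate, and the 16 kHz to 8 kHz decimation is done with a digital low-pass filter, so the analog side only has to keep out energy the radio’s own filtering already removes. Below is a generic sketch of a 2:1 FIR decimator in the spirit of fdmdv_16_to_8_short; the tap values, filter length and buffer convention are placeholders, not the codec2 ones.

/* Generic 2:1 FIR decimator sketch.  Illustrative layout: in16k[0..NTAPS-1]
 * holds filter memory from the previous call, followed by 2*n fresh
 * 16 kHz samples.
 */
#define NTAPS 8   /* illustrative; codec2 uses FDMDV_OS_TAPS_16K taps */

static const float lpf[NTAPS] = {   /* low-pass prototype, ~4 kHz cutoff */
    0.02f, 0.08f, 0.17f, 0.23f, 0.23f, 0.17f, 0.08f, 0.02f
};

void decimate_16_to_8(short out8k[], short in16k[], int n)
{
    int i, k;

    for (i = 0; i < n; i++) {
        float acc = 0.0f;
        for (k = 0; k < NTAPS; k++)                  /* low-pass filter ...      */
            acc += lpf[k] * in16k[NTAPS + 2*i - k];  /* ... keeping every second */
        out8k[i] = (short)acc;                       /*     output sample        */
    }

    /* save the last NTAPS input samples as filter memory for the next block */
    for (k = 0; k < NTAPS; k++)
        in16k[k] = in16k[2*n + k];
}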

OK. Time to load up the switches and LEDs and get the SM1000 switching between Tx and Rx via the PTT button.

I used this line to compress the 250MB monster 1080p video from my phone to an 8MB file that was fast to upload to YouTube:



david@bear:~/Desktop$ ffmpeg -i VID_20140821_113318.mp4 -ab 56k -ar 22050 -b 300k -r 15 -s 480x360 VID_20140821_113318.flv

Categories: thinktime

David Rowe: SM1000 Part 2 – Embedded FreeDV Tx Working

Planet Linux Australia - Thu 21st Aug 2014 08:08

Just now I fired up the full, embedded FreeDV “tx side”. So speech is sampled from the SM1000 microphone, processed by the Codec 2 encoder, then sent to the FDMDV modulator, then out of the DAC as modem tones. It worked, and used only about 25% of the STM32F4 CPU! A laptop running the PC version of FreeDV is the receiver.
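
The tx loop presumably mirrors the rx loop shown in the Part 3 post above: read 16 kHz microphone samples, decimate to 8 kHz, run the encoder and modulator, then interpolate back to 16 kHz and write to the DAC. Here is a hedged sketch reusing the buffer and helper names from the rx code; freedv_tx() is the codec2 API call, while adc2_read() and dac1_write() are assumed names for the microphone and radio-output channels.

/* Hedged tx sketch: microphone in, Codec 2 + FDMDV modem samples out.
 * adc2_read()/dac1_write() are assumed channel names; f is the
 * struct freedv * set up at boot, as in the rx code.
 */
short adc16k[FDMDV_OS_TAPS_16K + 2*FREEDV_NSAMPLES];
short adc8k[FREEDV_NSAMPLES];
short dac8k[FDMDV_OS_TAPS_8K + FREEDV_NSAMPLES];
short dac16k[2*FREEDV_NSAMPLES];

if (adc2_read(&adc16k[FDMDV_OS_TAPS_16K], 2*FREEDV_NSAMPLES) == 0) {
  /* 16 kHz mic samples down to the 8 kHz rate Codec 2 expects */
  fdmdv_16_to_8_short(adc8k, &adc16k[FDMDV_OS_TAPS_16K], FREEDV_NSAMPLES);

  /* speech in, FDMDV modem tones out */
  freedv_tx(f, &dac8k[FDMDV_OS_TAPS_8K], adc8k);

  /* back up to 16 kHz and out to the radio via the DAC */
  fdmdv_8_to_16_short(dac16k, &dac8k[FDMDV_OS_TAPS_8K], FREEDV_NSAMPLES);
  dac1_write(dac16k, 2*FREEDV_NSAMPLES);
}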

Here is the decoded speech from a test “transmission” which to my ear sounds about the same as FreeDV running on a PC. I am relieved that there aren’t too many funny noises apart from artefacts of the Codec itself (which are funny enough).

The scatter plot is really good – better than I expected. Nice tight points and a SNR of 25 dB. This shows that the DAC and line interface hardware is working well:

For the past few weeks I have been gradually building up the software for the SM1000. Codec 2 and the FDMDV modem needed a little tweaking to reduce the memory and CPU load required. It’s really handy that I am the author of both!

The hardware seems to be OK although there is some noise in the analog side (e.g. microphone amplifier, switching power supply) that I am still looking into. Thanks Rick Barnich KA8BMA for an excellent job on the hardware design.

I have also been working on various drivers (ADC, DAC, switches and LEDs), and getting my head around developing on a “bare metal” platform (no operating system). For example if I run out of memory it just hangs, and when I Ctrl-C in gdb the stack is corrupted and it’s in an infinite loop. Anyway, it’s all starting to make sense now, and I’m nearing the finish line.

The STM32F4 is a curious combination: a “router” class CPU, but with no operating system. By “router” class I mean a CPU found inside a DSL router, like a WRT54G, that runs embedded Linux. The STM32F4 is much faster (168MHz) and more capable than the smaller chips we usually call a “uC” (e.g. a PIC or AVR). Much to my surprise I’m not missing embedded Linux. In some ways an operating system complicates life, for example random context switches, i-cache thrashing, needing lots of RAM and Flash, large and complex build systems, and on the hardware side an external address and data bus, which means high speed digital signals and PCB area.

I am now working on the Rx side. I need to work out a way to extract demod information so I can determine that the analog line in, ADC, and demod are working correctly. At this stage nothing is coming out of U6, the line interface op-amp (schematic here). Oh well, I will take a look at that tomorrow.

Categories: thinktime

Andrew Pollock: [life] Day 203: Kindergarten, a run and cleaning

Planet Linux Australia - Wed 20th Aug 2014 21:08

I started the day off with a run. It was just a very crappy 5 km run, but it was nice to be feeling well enough to go for a run, and have the weather cooperate as well. I look forward to getting back to 10 km in my near future.

I had my chiropractic adjustment and then got stuck into cleaning the house.

Due to some sort of scheduling SNAFU, I didn't have a massage today. I'm still not quite sure what happened there, but I biked over and everything. The upside was it gave me some more time to clean.

It also worked out well, because I'd booked a doctor's appointment pretty close after my massage, so it was going to be tight to get from one place to the other.

With my rediscovered enthusiasm for exercise, and cooperative weather, I decided to bike to Kindergarten for pick up. Zoe was very excited. I'd also forgotten that Zoe had a swim class this afternoon, so we only had about 30 minutes at home before we had to head out again (again by bike) to go to swim class. I used the time to finish cleaning, and Zoe helped mop her bathroom.

Zoe wanted to hang around and watch Megan do her swim class, so we didn't get away straight away, which made for a slightly late dinner.

Zoe was pretty tired by bath time. Hopefully she'll have a good sleep tonight.

Categories: thinktime

Totally and completely out of my control

Seth Godin - Wed 20th Aug 2014 18:08
Gravity, for example. I can't do a thing about gravity. Even if I wanted to move to Jupiter or the moon for a change in gravity, it's inconceivable that I could. On the other hand, there are lots of things...         Seth Godin
Categories: thinktime

Michael Still: Juno nova mid-cycle meetup summary: nova-network to Neutron migration

Planet Linux Australia - Wed 20th Aug 2014 14:08
This will be my second last post about the Juno Nova mid-cycle meetup; it covers the state of play for work on the nova-network to Neutron upgrade.



First off, some background information. Neutron (formerly Quantum) was developed over a long period of time to replace nova-network, and added to the OpenStack Folsom release. The development of new features for nova-network was frozen in the Nova code base, so that users would transition to Neutron. Unfortunately the transition period took longer than expected. We ended up having to unfreeze development of nova-network, in order to fix reliability problems that were affecting our CI gating and the reliability of deployments for existing nova-network users. Also, at least two OpenStack companies were carrying significant feature patches for nova-network, which we wanted to merge into the main code base.



You can see the announcement at http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html. The main enhancements post-freeze were a conversion to use our new objects infrastructure (and therefore conductor), as well as features that were being developed by Nebula. I can't find any contributions from the other OpenStack company in the code base at this time, so I assume they haven't been proposed.



The nova-network to Neutron migration path has come to the attention of the OpenStack Technical Committee, who have asked for a more formal plan to address Neutron feature gaps and deprecate nova-network. That plan is tracked at https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage. As you can see, there are still some things to be merged which are targeted for juno-3. At the time of writing this includes grenade testing; Neutron being the devstack default; a replacement for nova-network multi-host; a migration plan; and some documentation. They are all making good progress, but until these action items are completed, Nova can't start the process of deprecating nova-network.



The discussion at the Nova mid-cycle meetup was around the migration planning item in the plan. There is a Nova specification that outlines one possible plan for live upgrading instances (i.e., no instance downtime) at https://review.openstack.org/#/c/101921/, but this will probably now be replaced with a simpler migration path involving cold migrations. This is prompted by not being able to find a user that absolutely has to have a live upgrade. There was some confusion because of a belief that the TC was requiring a live upgrade plan. But as Russell Bryant says in the meetup etherpad:



"Note that the TC has made no such statement on migration expectations other than a migration path must exist, both projects must agree on the plan, and that plan must be submitted to the TC as a part of the project's graduation review (or project gap review in this case). I wouldn't expect the TC to make much of a fuss about the plan if both Nova and Neutron teams are in agreement."



The current plan is to go forward with a cold upgrade path, unless a user comes forward with an absolute hard requirement for a live upgrade, and a plan to fund developers to work on it.



At this point, it looks like we are on track to get all of the functionality we need from Neutron in the Juno release. If that happens, we will start the nova-network deprecation timer in Kilo, with my expectation being that nova-network would be removed in the "M" release. There is also an option to change the default networking implementation to Neutron before the deprecation of nova-network is complete, which will mean that new deployments are defaulting to the long term supported option.



In the next (and probably final) post in this series, I'll talk about the API formerly known as Nova API v3.



Tags for this post: openstack juno nova mid-cycle summary nova-network neutron migration

Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: containers



Categories: thinktime

Tridge on UAVs: First flight of ArduPilot on Linux

Planet Linux Australia - Wed 20th Aug 2014 14:08

I'm delighted to announce that the effort to port ArduPilot to Linux reached an important milestone yesterday with the first flight of a fixed wing aircraft on a Linux based autopilot running ArduPilot.

As I mentioned in a previous blog post, we've been working on porting ArduPilot to Linux for a while now. There are lots of reasons for wanting to do this port, not least of which is that it is an interesting challenge!

For the test flight yesterday I used a PXF cape, an add-on to a BeagleBoneBlack board, designed by Philip Rowse from 3DRobotics. The PXF cape was designed as a development test platform for Linux based autopilots, and includes a rich array of sensors. It has 3 IMUs (an MPU6000, an MPU9250 and an LSM9DS0), plus two barometers (MS5611 on SPI and I2C), 3 I2C connectors for things like magnetometers and airspeed sensors, plus a pile of UARTs, analog ports etc.

All of this sits on top of a BeagleBoneBlack, which is a widely used embedded Linux board with 512M of RAM, a 1GHz ARM CPU and 2 GByte of eMMC for storage. We're running the Debian Linux distribution on the BeagleBoneBlack, with a 3.8.13-PREEMPT kernel. The BBB also has two nice co-processors called PRUs (programmable realtime units) which are ideal for timing critical tasks. In the flight yesterday we used one PRU for capturing PPM-SUM input from a R/C receiver, and the other PRU for PWM output to the aircraft's servos.

Summer of code project

The effort to port ArduPilot to Linux got a big boost a few months ago when we teamed up with Victor, Sid and Anuj from the BeaglePilot project as part of a Google Summer of Code project. Victor was sponsored by GSoC, while Sid and Anuj were sponsored as summer students in 3DRobotics. Together they have put a huge amount of effort in over the last few months, which culminated in the flight yesterday. The timing was perfect, as yesterday was also the day that student evaluations were due for the GSoC!

PXF on a SkyWalker

For the flight yesterday I used a 3DR SkyWalker, with the BBB+PXF replacing the usual Pixhawk. Because the port of ArduPilot to Linux used the AP_HAL hardware abstraction layer, all of the hardware specific code is abstracted below the flight code, which meant I was able to fly the SkyWalker with exactly the same parameters loaded as I have previously used with the Pixhawk on the same plane.

For this flight we didn't use all of the sensors on the PXF however. Some issues with the build of the initial test boards meant that only the MPU9250 was fully functional, but that was quite sufficient. Future revisions of the PXF will fix up the other two IMUs, allowing us to gain the advantages of multiple IMUs (specifically it gains considerable robustness to accelerometer aliasing).

I also had a digital airspeed sensor (on I2C) and an external GPS/Compass combo to give the full set of sensors needed for good fixed wing flight.

Debugging at the field

As with any experimental hardware you have to expect some issues, and the PXF indeed showed up a problem when I arrived at the flying field. At home I don't get GPS lock due to my metal roof, so I hadn't done much testing of the GPS, and when I was doing pre-flight ground testing yesterday I found that I frequently lost the GPS. With a bit of work using valgrind and gdb I found the bug, and the GPS started to work correctly. It was an interesting bug in the UART layer in AP_HAL_Linux which may also affect the AP_HAL_PX4 code used on a Pixhawk (although with much more subtle effect), so it was an important fix, and it really shows the development benefit of testing on multiple platforms.

After that issue was fixed the SkyWalker was launched, and as expected it flew perfectly, exactly as it would fly with any other ArduPilot based autopilot. There was quite a strong wind (about 15 knots, gusting to 20) which was a challenge for such a small foam plane, but it handled it nicely.

Lots more photos of the first flight are available here. Thanks to Darrell Burkey for braving a cold Canberra morning to come out and take some photos!

Next Steps

Now that we have ArduPilot on PXF flying nicely the next step is a test flight with a multi-copter (I'll probably use an Iris). I'm also looking forward to hearing about first flight reports from other groups working on porting ArduPilot to other Linux based boards, such as the NavIO.

This project follows in the footsteps of quite a few existing autopilots that run on Linux, both open source and proprietary, including such well known projects as Paparazzi, the AR-Drone and many research autopilots at universities around the world. Having the ability to run ArduPilot on Linux opens up some interesting possibilities for the ArduPilot project, including things like ROS integration, tightly integrated SLAM and lots of computationally intensive vision algorithms. I'm really looking forward to ArduPilot on Linux being widely available for everyone to try.

All of the code needed to fly ArduPilot on Linux is in the standard ArduPilot git repository.

Thanks

Many thanks to 3DRobotics for sponsoring the development of the PXF cape, and to Victor, Sid and Anuj for their efforts over the last few months! Special thanks to Philip Rowse for designing the board, and for putting up with lots of questions as we worked on the port, and to Craig Elder and Jeff Wurzbach for providing engineering support from the 3DR US offices.

Categories: thinktime

Andrew Pollock: [life] Day 202: Kindergarten, a lot of administrative running around, tennis

Planet Linux Australia - Wed 20th Aug 2014 12:08

Yesterday was a pretty busy day. I hardly stopped, and on top of a poor night's sleep, I was pretty exhausted by the end of the day.

I started the day with a yoga class, because a few extraordinary things had popped up on my schedule, meaning this was the only time I could get to a class this week. It was a beginner's class, but it was nice to have a slower pace for a change, and an excellent way to start the day off.

I drove to Sarah's place to pick up Zoe and take her to Kindergarten, and made a bad choice for the route, and the traffic was particularly bad, and we got to Kindergarten a bit later than normal.

After I dropped Zoe off, I headed straight to the post office to get some passport photos for the application for my certificate of registration. I also noticed that they now had some post office boxes available (I was a bit miffed because I'd been actively discouraged from putting my name down for one earlier in the year because of the purported length of the wait list). I discovered that one does not simply open a PO box in the name of a business; one needs letters of authority and printouts of ABNs and whatnot. So after I got my passport photos and made a few other impulse purchases (USB speakers for $1.99?!), I headed back home to gather the other documentation I needed.

By the time I'd done that and a few other bits and pieces at home, it was time to pop back to get my yoga teacher to certify my photos. Then I headed into the city to lodge the application in person. I should get the piece of paper in 6 weeks or so.

Then I swung past the post office to complete my PO box application (successfully this time) and grab some lunch, and update my mailing address with the bank. By the time I'd done all that, I had enough time to swing past home to grab Zoe's tennis racquet and a snack for her and head to Kindergarten to pick her up.

Today's tennis class went much better. Giving her a snack before the class started was definitely the way to go. She'd also eaten a good lunch, which would have helped. I just need to remember to get her to go to the toilet, then she should be all good for an interruption-free class.

I dropped Zoe directly back to Sarah after tennis class today, and then swung by OfficeWorks to pick up some stationery on the way home.

Categories: thinktime

Arjen Lentz: Two Spaces After a Period: Why You Should Never, Ever Do It | Slate.com

Planet Linux Australia - Wed 20th Aug 2014 11:08

The cause of the double space may be manual typewriters with their monospace font. But we all use proportional fonts these days.

Categories: thinktime

Squidthanks

Seth Godin - Wed 20th Aug 2014 04:08
Nine years ago last month, a few of us sat down in my office and started working on Squidoo. Since then, there have been billions of visits to our site, and many of you have clicked, written, and contributed to...         Seth Godin
Categories: thinktime

Rusty Russell: POLLOUT doesn’t mean write(2) won’t block: Part II

Planet Linux Australia - Wed 20th Aug 2014 00:08

My previous discovery that poll() indicating an fd was writable didn’t mean write() wouldn’t block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; eg. Linus thought read() was like write() whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but they had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.

But write() is different. Presumably so simple UNIX filters didn’t have to check the return and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule.  Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.
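
In other words, if you genuinely can't afford to block, the boring pattern is the right one: put the descriptor into non-blocking mode and cope with short writes and EAGAIN yourself. A minimal sketch of that pattern (not from the original post; it assumes O_NONBLOCK has already been set on the fd with fcntl()):

#include <errno.h>
#include <unistd.h>

/* Try to write len bytes to a non-blocking fd.  Returns how many bytes
 * were actually written (possibly 0 if the socket buffer is full), or -1
 * on a real error.  The caller keeps the unwritten tail and retries
 * after the next POLLOUT.
 */
static ssize_t write_some(int fd, const char *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n >= 0) {
            done += (size_t)n;
            if (n == 0)
                break;              /* nothing accepted, try again later     */
            continue;
        }
        if (errno == EINTR)
            continue;               /* interrupted, just retry               */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            break;                  /* socket buffer full, wait for POLLOUT  */
        return -1;                  /* genuine error                         */
    }
    return (ssize_t)done;
}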

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.

What’s worse is the Linux select() man page:

A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking. ... those in writefds will be watched to see if a write will not block...

And poll():

POLLOUT Writing now will not block.

Man page patches have been submitted…

Categories: thinktime

Dependence Day: The Power and Peril of Third-Party Solutions

a list apart - Wed 20th Aug 2014 00:08

“Why don’t we just use this plugin?” That’s a question I started hearing a lot in the heady days of the 2000s, when open-source CMSes were becoming really popular. We asked it optimistically, full of hope about the myriad solutions only a download away. As the years passed, we gained trustworthy libraries and powerful communities, but the graveyard of crufty code and abandoned services grew deep. Many solutions were easy to install, but difficult to debug. Some providers were eager to sell, but loath to support.

Years later, we’re still asking that same question—only now we’re less optimistic and even more dependent, and I’m scared to engage with anyone smart enough to build something I can’t. The emerging challenge for today’s dev shop is knowing how to take control of third-party relationships—and when to avoid them. I’ll show you my approach, which is to ask a different set of questions entirely.

A web of third parties

I should start with a broad definition of what it is to be third party: If it’s a person and I don’t compensate them for the bulk of their workload, they’re third party. If it’s a company or service and I don’t control it, it’s third party. If it’s code and my team doesn’t grasp every line of it, it’s third party.

The third-party landscape is rapidly expanding. Github has grown to almost 7 million users and the WordPress plugin repo is approaching 1 billion downloads. Many of these solutions are easy for clients and competitors to implement; meanwhile, I’m still in the lab debugging my custom code. The idea of selling original work seems oddly…old-fashioned.

Yet with so many third-party options to choose from, there are more chances than ever to veer off-course.

What could go wrong?

At a meeting a couple of years ago, I argued against using an external service to power a search widget on a client project. “We should do things ourselves,” I said. Not long after this, on the very same project, I argued in favor of a using a third party to consolidate RSS feeds into a single document. “Why do all this work ourselves,” I said, “when this problem has already been solved?” My inconsistency was obvious to everyone. Being dogmatic about not using a third party is no better than flippantly jumping in with one, and I had managed to do both at once!

But in one case, I believed the third party was worth the risk. In the other, it wasn’t. I just didn’t know how to communicate those thoughts to my team.

I needed, in the parlance of our times, a decision-making framework. To that end, I’ve been maintaining a collection of points to think through at various stages of engagement with third parties. I’ll tour through these ideas using the search widget and the RSS digest as examples.

The difference between a request and a goal

This point often reveals false assumptions about what a client or stakeholder wants. In the case of the search widget, we began researching a service that our client specifically requested. Fitted with ajax navigation, full-text searching, and automated crawls to index content, it seemed like a lot to live up to. But when we asked our clients what exactly they were trying to do, we were surprised: they were entirely taken by the typeahead functionality; the other features were of very little perceived value.

In the case of the RSS “smusher,” we already had an in-house tool that took an array of feed URLs and looped through them in order, outputting x posts per feed in some bespoke format. They’re too good for our beloved multi-feed widget? But actually, the client had a distinctly different and worthwhile vision: they wanted x results from their array of sites in total, and they wanted them ordered by publication date, not grouped by site. I conceded.

It might seem like an obvious first step, but I have seen projects set off in the wrong direction because the end goal is unknown. In both our examples now, we’re clear about that and we’re ready to evaluate solutions.

To dev or to download

Before deciding to use a third party, I find that I first need to examine my own organization, often in four particular ways: strengths, weaknesses, betterment, and mission.

Strengths and weaknesses

The search task aligned well with our strengths because we had good front-end developers and were skilled at extending our CMS. So when asked to make a typeahead search, we felt comfortable betting on ourselves. Had we done it before? Not exactly, but we could think through it.

At that same time, backend infrastructure was a weakness for our team. We had happened to have a lot of turnover among our sysadmins, and at times it felt like we weren’t equipped to hire that sort of talent. As I was thinking through how we might build a feed-smusher of our own, I felt like I was tempting a weak underbelly. Maybe we’d have to set up a cron job to poll the desired URLs, grab feed content, and store that on our servers. Not rocket science, but cron tasks in particular were an albatross for us.

Betterment of the team

When we set out to achieve a goal for a client, it’s more than us doing work: it’s an opportunity for our team to better themselves by learning new skills. The best opportunities for this are the ones that present challenging but attainable tasks, which create incremental rewards. Some researchers cite this effect as a factor in gaming addiction. I’ve felt this myself when learning new things on a project, and those are some of my favorite work moments ever. Teams appreciate this and there is an organizational cost in missing a chance to pay them to learn. The typeahead search project looked like it could be a perfect opportunity to boost our skill level.

Organizational mission

If a new project aligns well with our mission, we’re going to resell it many times. It’s likely that we’ll want our in-house dev team to iterate on it, tailoring it to our needs. Indeed, we’ll have the budget to do so if we’re selling it a lot. No one had asked us for a feed-smusher before, so it didn’t seem reasonable to dedicate an R&D budget to it. In contrast, several other clients were interested in more powerful site search, so it looked like it would be time well spent.

We’ve now clarified our end goals and we’ve looked at how these projects align with our team. Based on that, we’re doing the search widget ourselves, and we’re outsourcing the feed-smusher. Now let’s look more closely at what happens next for both cases.

Evaluating the unknown

The frustrating thing about working with third parties is that the most important decisions take place when we have the least information. But there are some things we can determine before committing. Familiarity, vitality, extensibility, branding, and Service Level Agreements (SLAs) are all observable from afar.

Familiarity: is there a provider we already work with?

Although we’re going to increase the number of third-party dependencies, we’ll try to avoid increasing the number of third-party relationships.

Working with a known vendor has several potential benefits: they may give us volume pricing. Markup and style are likely to be consistent between solutions. And we just know them better than we’d know a new service.

Vitality: will this service stick around?

The worst thing we could do is get behind a service, only to have it shut down next month. A service with high vitality will likely (and rightfully) brag about enterprise clients by name. If it’s open source, it will have a passionate community of contributors. On the other hand, it could be advertising a shutdown. More often, it’s somewhere in the middle. Noting how often the service is updated is a good starting point in determining vitality.

Extensibility: can this service adapt as our needs change?

Not only do we have to evaluate the core service, we have to see how extensible it is by digging into its API. If a service is extensible, it’s more likely to fit for the long haul.

APIs can also present new opportunities. For example, imagine selecting an email-marketing provider with an API that exposes campaign data. This might allow us to build a dashboard for campaign performance in our CMS—a unique value-add for our clients, and a chance to keep our in-house developers invested and excited about the service.

Branding: is theirs strong, or can you use your own?

White-labeling is the practice of reselling a service with your branding instead of that of the original provider. For some companies, this might make good sense for marketing. I tend to dislike white-labeling. Our clients trust us to make choices, and we should be proud to display what those choices are. Either way, you want to ensure you’re comfortable with the brand you’ll be using.

SLAs: what are you getting, beyond uptime?

For client-side products, browser support is a factor: every external dependency represents another layer that could abandon older browsers before we’re ready. There’s also accessibility. Does this new third-party support users with accessibility needs to the degree that we require? Perhaps most important of all is support. Can we purchase a priority support plan that offers fast and in-depth help?

In the case of our feed-smusher service, there was no solution that ran the table. The most popular solution actually had a shutdown notice! There were a couple of smaller providers available, but we hadn’t worked with either before. Browser support and accessibility were moot since we’d be parsing the data and displaying it ourselves. The uptime concern was also diminished because we’d be sure to cache the results locally. Anyway, with viable candidates in hand, we can move on to more productive concerns than dithering between two similar solutions.

Relationship maintenance

If someone else is going to do the heavy lifting, I want to assume as much of the remaining burden as possible. Piloting, data collection, documentation, and in-house support are all valuable opportunities to buttress this new relationship.

As exciting as this new relationship is, we don’t want to go dashing out of the gates just yet. Instead, we’ll target clients for piloting and quarantine them before unleashing it any further. Cull suggestions from team members to determine good candidates for piloting, garnering a mix of edge-cases and the norm.

If the third party happens to collect data of any kind, we should also have an automated way to import a copy of it—not just as a backup, but also as a cached version we can serve to minimize latency. If we are serving a popular dependency from a CDN, we want to send a local version if that call should fail.

If our team doesn’t have a well-traveled directory of provider relationships, the backstory can get lost. Let a few months pass, throw in some personnel turnover, and we might forget why we even use a service, or why we opted for a particular package. Everyone on our team should know where and how to learn about our third-party relationships.

We don’t need every team member to be an expert on the service, yet we don’t want to wait for a third-party support staff to respond to simple questions. Therefore, we should elect an in-house subject-matter expert. It doesn’t have to be a developer. We just need somebody tasked with monitoring the service at regular intervals for API changes, shutdown notices, or new features. They should be able to train new employees and route more complex support requests to the third party.

In our RSS feed example, we knew we’d read their output into our database. We documented this relationship in our team’s most active bulletin, our CRM software. And we made managing external dependencies a primary part of one team member’s job.

DIY: a third party waiting to happen?

Stop me if you’ve heard this one before: a prideful developer assures the team that they can do something themselves. It’s a complex project. They make something and the company comes to rely on it. Time goes by and the in-house product is doing fine, though there is a maintenance burden. Eventually, the developer leaves the company. Their old product needs maintenance, no one knows what to do, and since it’s totally custom, there is no such thing as a community for it.

Once you decide to build something in-house, how can you prevent that work from devolving into a resented, alien dependency? 

  • Consider pair-programming. What better way to ensure that multiple people understand a product, than to have multiple people build it?
  • “Job-switch Tuesdays.” When feasible, we have developers switch roles for an entire day. Literally, in our ticketing system, it’s as though one person is another. It’s a way to force cross-training without doubling the hours needed for a task.
  • Hold code reviews before new code is pushed. This might feel slightly intrusive at first, but that passes. If it’s not readable, it’s not deployable. If you have project managers with a technical bent, empower them to ask questions about the code, too.
  • Bring moldy code into light by displaying it as phpDoc, JSDoc, or similar.
  • Beware the big. Create hourly estimates in Fibonacci increments. As a project gets bigger, so does its level of uncertainty. The Fibonacci steps are biased against under-budgeting, and also provide a cue to opt out of projects that are too difficult to estimate. In that case, it’s likely better to toe-in with a third party instead of blazing into the unknown by yourself.

All of these considerations apply to our earlier example, the typeahead search widget. Most germane is the provision to “beware the big.” When I say “big,” I mean that relative to what usually works for a given team. In this case, it was a deliverable that felt very familiar in size and scope: we were being asked to extend an open-source CMS. If instead we had been asked to make a CMS, alarms would have gone off.

Look before you leap, and after you land

It’s not that third parties are bad per se. It’s just that the modern web team strikes me as a strange place: not only do we stand on the shoulders of giants, we do so without getting to know them first—and we hoist our organizations and clients up there, too.

Granted, there are many things you shouldn’t do yourself, and it’s possible to hurt your company by trying to do them—NIH is a problem, not a goal. But when teams err too far in the other direction, developers become disenfranchised, components start to look like spare parts, and clients pay for solutions that aren’t quite right. Using a third party versus staying in-house is a big decision, and we need to think hard before we make it. Use my line of questions, or come up with one that fits your team better. After all, you’re your own best dependency.

Categories: thinktime

One Step Ahead: Improving Performance with Prebrowsing

a list apart - Wed 20th Aug 2014 00:08

We all want our websites to be fast. We optimize images, create CSS sprites, use CDNs, cache aggressively, and gzip and minimize static content. We use every trick in the book.

But we can still do more. If we want faster outcomes, we have to think differently. What if, instead of leaving our users to stare at a spinning wheel, waiting for content to be delivered, we could predict where they wanted to go next? What if we could have that content ready for them before they even ask for it?

We tend to see the web as a reactive model, where every action causes a reaction. Users click, then we take them to a new page. They click again, and we open another page. But we can do better. We can be proactive with prebrowsing.

The three big techniques

Steve Souders coined the term prebrowsing (from predictive browsing) in one of his articles late last year. Prebrowsing is all about anticipating where users want to go and preparing the content ahead of time. It’s a big step toward a faster and less visible internet.

Browsers can analyze patterns to predict where users are going to go next, and start DNS resolution and TCP handshakes as soon as users hover over links. But to get the most out of these improvements, we can enable prebrowsing on our web pages, with three techniques at our disposal:

  • DNS prefetching
  • Resource prefetching
  • Prerendering

Now let’s dive into each of these separately.

DNS prefetching

Whenever we know our users are likely to request a resource from a different domain than our site, we can use DNS prefetching to warm the machinery for opening the new URL. The browser can pre-resolve the DNS for the new domain ahead of time, saving several milliseconds when the user actually requests it. We are anticipating, and preparing for an action.

Modern browsers are very good at parsing our pages, looking ahead to pre-resolve all necessary domains ahead of time. Chrome goes as far as keeping an internal list with all related domains every time a user visits a site, pre-resolving them when the user returns (you can see this list by navigating to chrome://dns/ in your Chrome browser). However, sometimes access to new URLs may be hidden behind redirects or embedded in JavaScript, and that’s our opportunity to help the browser.

Let’s say we are downloading a set of resources from the domain cdn.example.com using a JavaScript call after a user clicks a button. Normally, the browser would have to resolve the DNS at the time of the click, but we can speed up the process by including a dns-prefetch directive in the head section of our page:

<link rel="dns-prefetch" href="http://cdn.example.com">

Doing this informs the browser of the existence of the new domain, and it will combine this hint with its own pre-resolution algorithm to start a DNS resolution as soon as possible. The entire process will be faster for the user, since we are shaving off the time for DNS resolution from the operation. (Note that browsers do not guarantee that DNS resolution will occur ahead of time; they simply use our hint as a signal for their own internal pre-resolution algorithm.)

But exactly how much faster will pre-resolving the DNS make things? In your Chrome browser, open chrome://histograms/DNS and search for DNS.PrefetchResolution. You’ll see a table like this:

This histogram shows my personal distribution of latencies for DNS prefetch requests. On my computer, for 335 samples, the average time is 88 milliseconds, with a median of approximately 60 milliseconds. Shaving 88 milliseconds off every request our website makes to an external domain? That’s something to celebrate.

But what happens if the user never clicks the button to access the cdn.example.com domain? Aren’t we pre-resolving a domain in vain? We are, but luckily for us, DNS prefetching is a very low-cost operation; the browser will need to send only a few hundred bytes over the network, so the risk incurred by a preemptive DNS lookup is very low. That being said, don’t go overboard when using this feature; prefetch only domains that you are confident the user will access, and let the browser handle the rest.

Look for situations that might be good candidates to introduce DNS prefetching on your site:

  • Resources on different domains hidden behind 301 redirects
  • Resources accessed from JavaScript code
  • Resources for analytics and social sharing (which usually come from different domains)

DNS prefetching is currently supported on IE11, Chrome, Chrome Mobile, Safari, Firefox, and Firefox Mobile, which makes this feature widespread among current browsers. Browsers that don’t currently support DNS prefetching will simply ignore the hint, and DNS resolution will happen in a regular fashion.

Resource prefetching

We can go a little bit further and predict that our users will open a specific page in our own site. If we know some of the critical resources used by this page, we can instruct the browser to prefetch them ahead of time:

<link rel="prefetch" href="http://cdn.example.com/library.js">

The browser will use this instruction to prefetch the indicated resources and store them on the local cache. This way, as soon as the resources are actually needed, the browser will have them ready to serve.

Unlike DNS prefetching, resource prefetching is a more expensive operation; be mindful of how and when to use it. Prefetching resources can speed up our websites in ways we would never get by merely prefetching new domains—but if we abuse it, our users will pay for the unused overhead.

Let’s take a look at the average response size of some of the most popular resources on a web page, courtesy of the HTTP Archive:

On average, prefetching a script file (like we are doing on the example above) will cause 16kB to be transmitted over the network (without including the size of the request itself). This means that we will save 16kB of downloading time from the process, plus server response time, which is amazing—provided it’s later accessed by the user. If the user never accesses the file, we actually made the entire workflow slower by introducing an unnecessary delay.

If you decide to use this technique, prefetch only the most important resources, and make sure they are cacheable by the browser. Images, CSS, JavaScript, and font files are usually good candidates for prefetching, but HTML responses are not since they aren’t cacheable.

Here are some situations where, due to the likelihood of the user visiting a specific page, you can prefetch resources ahead of time:

  • On a login page, since users are usually redirected to a welcome or dashboard page after logging in
  • On each page of a linear questionnaire or survey workflow, where users are visiting subsequent pages in a specific order
  • On a multi-step animation, since you know ahead of time which images are needed on subsequent scenes

Resource prefetching is currently supported on IE11, Chrome, Chrome Mobile, Firefox, and Firefox Mobile. (To determine browser compatibility, you can run a quick browser test on prebrowsing.com.)

Prerendering

What about going even further and asking for an entire page? Let’s say we are absolutely sure that our users are going to visit the about.html page in our site. We can give the browser a hint:

<link rel="prerender" href="http://example.com/about.html">

This time the browser will download and render the page in the background ahead of time, and have it ready for the user as soon as they ask for it. The transition from the current page to the prerendered one would be instantaneous.

Needless to say, prerendering is the most risky and costly of these three techniques. Misusing it can cause major bandwidth waste—especially harmful for users on mobile devices. To illustrate this, let’s take a look at this chart, also courtesy of the HTTP Archive:

In June of this year, the average number of requests to render a web page was 96, with a total size of 1,808kB. So if your user ends up accessing your prerendered page, then you’ve hit the jackpot: you’ll save the time of downloading almost 2,000kB, plus server response time. But if you’re wrong and your user never accesses the prerendered page, you’ll make them pay a very high cost.

When deciding whether to prerender entire pages ahead of time, consider that Google prerenders the top results on its search page, and Chrome prerenders pages based on the historical navigation patterns of users. Using the same principle, you can detect common usage patterns and prerender target pages accordingly. You can also use it, just like resource prefetching, on questionnaires or surveys where you know users will complete the workflow in a particular order.

At this time, prerendering is only supported on IE11, Chrome, and Chrome Mobile. Neither Firefox nor Safari have added support for this technique yet. (And as with resource prefetching, you can check prebrowsing.com to test whether this technique is supported in your browser.)

A final word

Sites like Google and Bing are using these techniques extensively to make search instant for their users. Now it’s time for us to go back to our own sites and take another look. Can we make our experiences better and faster with prefetching and prerendering?

Browsers are already working behind the scenes, looking for patterns in our sites to make navigation as fast as possible. Prebrowsing builds on that: we can combine the insight we have on our own pages with further analysis of user patterns. By helping browsers do a better job, we speed up and improve the experience for our users.

Categories: thinktime

Andrew McDonnell: Unleashed GovHack – an Adelaide Adventure in Open Data

Planet Linux Australia - Tue 19th Aug 2014 22:08
Last month I attended Unleashed Govhack, our local contribution to the Australian GovHack hackathon. Essentially GovHack is a chance for makers, hackers, designers, artists, and researchers to team up with government ‘data custodians’ and build proof of concept applications (web or mobile), software tools, video productions or presentations (data journalism) in a way that […]
Categories: thinktime

Lev Lafayette: A Source Installation of gzip

Planet Linux Australia - Tue 19th Aug 2014 21:08

GNU zip is a compression utility free from patented algorithms. Software patents are stupid, and patented compression algorithms are especially stupid.


Categories: thinktime

Slacktivism

Seth Godin - Tue 19th Aug 2014 19:08
This is far from a new phenomenon. Hundreds of years ago there were holier-than-thou people standing in the village square, wringing their hands, ringing their bells and talking about how urgent a problem was. They did little more than wring...         Seth Godin
Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Main September 2014 Meeting: AGM + lightning talks

Planet Linux Australia - Tue 19th Aug 2014 18:08
Start: Sep 2 2014 19:00  End: Sep 2 2014 21:00  Location:

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Link:  http://luv.asn.au/meetings/map

AGM + lightning talks

Notice of LUV Annual General Meeting, 2nd September 2014, 19:00.

Linux Users of Victoria, Inc., registration number A0040056C, will be holding its Annual General Meeting at 7pm on Tuesday, 2nd September 2014, in the Buzzard Lecture Theatre, Trinity College.

The AGM will be held in conjunction with our usual September Main Meeting. As is customary, after the AGM business we will have a series of lightning talks by members on a recent Linux experience or project.

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus, Parkville. Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (Get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 2, 2014 - 19:00


Categories: thinktime

Michael Still: Juno nova mid-cycle meetup summary: slots

Planet Linux Australia - Tue 19th Aug 2014 18:08
If I had to guess what would be a controversial topic from the mid-cycle meetup, it would have to be this slots proposal. I was actually in a Technical Committee meeting when this proposal was first made, but I'm told there were plenty of people in the room keen to give this idea a try. Since the mid-cycle Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.



If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.



If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say 'no' more often, but that's not the intent, it's more about bringing focus to what we're reviewing, so that we can get patches through the process completely. There's nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.



The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term 'slot' was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).



The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute core's focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn't always a single code review. If the VMWare refactor was in a slot for example, we might find that there were also ten code reviews associated with that single slot.



How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered 'ready for review' once the specification is merged, and the code is complete and ready for intensive code review.



What happens if code is in a slot and something goes wrong? Imagine if a proposer goes on vacation and stops responding to review comments. If that happened we would bump the code out of the slot, but would put it back on the backlog in the location dictated by its priority. In other words there is no penalty for being bumped, you just need to wait for a slot to reappear when you're available again.



We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I'm not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it's something I'm happy to talk about.



We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge amount of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, then it would help us to get people to work on those aspects, because they wouldn't spend time on a less interesting problem and then discover they can't even get their code reviewed. We even talked about having an alternating focus for Nova releases; we could have a release focused on paying down technical debt and stability, and then the next release focused on new features. The Linux kernel does something quite similar to this and it seems to work well for them.



Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren't important to OpenStack. Considering that right now we're not landing many features at all, this would be an improvement.



This proposal is obviously complicated, and everyone will have an opinion. We haven't really thought through all the mechanics fully, yet, and it's certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it's not clear how that process would work. Regardless, I feel like we need to be more systematic about what code we're trying to land. It's embarrassing how little has landed in Juno for Nova, and we need to be working on that. I would like to continue discussing this as a community to make sure that we end up with something that works well and that everyone is happy with.



This series is nearly done, but in the next post I'll cover the current status of the nova-network to neutron upgrade path.



Tags for this post: openstack juno nova mid-cycle summary review slots blueprint priority project management

Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: containers; Juno nova mid-cycle meetup summary: cells



Categories: thinktime

Valediction

a list apart - Mon 18th Aug 2014 22:08

When I first met Kevin Cornell in the early 2000s, he was employing his illustration talent mainly to draw caricatures of his fellow designers at a small Philadelphia design studio. Even in that rough, dashed-off state, his work floored me. It was as if Charles Addams and my favorite Mad Magazine illustrators from the 1960s had blended their DNA to spawn the perfect artist.

Kevin would deny that label, but artist he is. For there is a vision in his mind, a way of seeing the world, that is unlike anyone else’s—and he has the gift to make you see it too, and to delight, inspire, and challenge you with what he makes you see.

Kevin was part of a small group of young designers and artists who had recently completed college and were beginning to establish careers. Others from that group included Rob Weychert, Matt Sutter, and Jason Santa Maria. They would all go on to do fine things in our industry.

It was Jason who brought Kevin on as house illustrator during the A List Apart 4.0 brand overhaul in 2005, and Kevin has worked his strange magic for us ever since. If you’re an ALA reader, you know how he translates the abstract web design concepts of our articles into concrete, witty, and frequently absurd situations. Above all, he is a storyteller—if pretentious designers and marketers haven’t sucked all the meaning out of that word.

For nearly 10 years, Kevin has taken our well-vetted, practical, frequently technical web design and development pieces, and elevated them to the status of classic New Yorker articles. Tomorrow he publishes his last new illustrations with us. There will never be another like him. And for whatever good it does him, Kevin Cornell has my undying thanks, love, and gratitude.

Categories: thinktime

My Favorite Kevin Cornell

a list apart - Mon 18th Aug 2014 22:08

After 200 issues—yes, two hundred—Kevin Cornell is retiring from his post as A List Apart’s staff illustrator. Tomorrow’s issue will be the last one featuring new illustrations from him.

Sob.

For years now, we’ve eagerly awaited Kevin’s illustrations each issue, opening his files with all the patience of a kid tearing into a new LEGO set.

But after nine years and more than a few lols, it’s time to give Kevin’s beautifully deranged brain a rest.

We’re still figuring out what comes next for ALA, but while we do, we’re sending Kevin off the best way we know how: by sharing a few of our favorite illustrations. Read on for stories from ALA staff, past and present—and join us in thanking Kevin for his talent, his commitment, and his uncanny ability to depict seemingly any concept using animals, madmen, and circus figures.

Of all the things I enjoyed about working on A List Apart, I loved anticipating the reveal: seeing Kevin’s illos for each piece, just before the issue went live. Every illustration was always a surprise—even to the staff. My favorite, hands-down, was his artwork for “The Discipline of Content Strategy,” by Kristina Halvorson. In 2008, content was web design’s “elephant in the room” and Kevin’s visual metaphor nailed it. In a drawing, he encapsulated thoughts and feelings many had within the industry but were unable to articulate. That’s the mark of a master.

—Krista Stevens, Editor-in-chief, 2006–2012

In the fall of 2011, I submitted my first article to A List Apart. I was terrified: I didn’t know anyone on staff. The authors’ list read like a who’s who of web design. The archives were intimidating. But I had ideas, dammit. I hit send.

I told just one friend what I’d done. His eyes lit up. “Whoa. You’d get a Kevin Cornell!” he said.

Whoa indeed. I might get a Kevin Cornell?! I hadn’t even thought about that yet.

Like Krista, I fell in love with Kevin’s illustration for “The Discipline of Content Strategy”—an illustration that meant the world to me as I helped my clients see their own content elephants. The idea of having a Cornell of my own was exciting, but terrifying. Could I possibly write something worthy of his illustration?

Months later, there it was on the screen: little modular sandcastles illustrating my article on modular content. I was floored.

Now, after two years as ALA’s editor in chief, I’ve worked with Kevin through dozens of issues. But you know what? I’m just as floored as ever.

Thank you, Kevin, you brilliant, bizarre, wonderful friend.

—Sara Wachter-Boettcher, Editor-in-chief

It’s impossible for me to choose a favorite of Kevin’s body of work for ALA, because my favorite Cornell illustration is the witty, adaptable, humane language of characters and symbols underlying his years of work. If I had to pick a single illustration to represent the evolution of his visual language, I think it would be the hat-wearing nested egg with the winning smile that opened Andy Hagen’s “High Accessibility is Effective Search Engine Optimization.” An important article but not, perhaps, the juiciest title A List Apart has ever run…and yet there’s that little egg, grinning in his slightly dopey way.

If my memory doesn’t fail me, this is the second appearance of the nested Cornell egg—we saw the first a few issues before in Issue 201, where it represented the nested components of an HTML page. When it shows up here, in Issue 207, we realize that the egg wasn’t a cute one-off, but the first syllable of a visual language that we’ll see again and again through the years. And what a language! Who else could make semantic markup seem not just clever, but shyly adorable?

A wander through the ALA archives provides a view of Kevin’s changing style, but something visible only backstage was his startlingly quick progression from reading an article to sketching initial ideas in conversation with then-creative director Jason Santa Maria to turning out a lovely miniature—and each illustration never failed to make me appreciate the article it introduced in a slightly different way. When I was at ALA, Kevin’s unerring eye for the important detail as a reader astonished me almost as much as his ability to give that (often highly technical, sometimes very dry) idea a playful and memorable visual incarnation. From the very first time his illustrations hit the A List Apart servers he’s shared an extraordinary gift with its readers, and as a reader, writer, and editor, I will always count myself in his debt.

—Erin Kissane, Editor-in-chief, contributing editor, 1999–2009

So much of what makes Kevin's illustrations work lies in the gestures. The way the figure sits a bit slouched, but still perched on gentle tippy toes, determinedly occupied pecking away on his phone. With just a few lines, Kevin captures a mood and a moment anyone can feel.

—Jason Santa Maria, Former creative director

I’ve had the pleasure of working with Kevin on the illustrations for each issue of A List Apart since we launched the latest site redesign in early 2013. By working, I mean replying to his email with something along the lines of “Amazing!” when he sent over the illustrations every couple of weeks.

Prior to launching the new design, I had to go through the backlog of Kevin’s work for ALA and do the production work needed for the new layout. This bird’s eye view gave me an appreciation of the ongoing metaphorical world he had created for the magazine—the birds, elephants, weebles, mad scientists, ACME products, and other bits of amusing weirdness that breathed life into the (admittedly, sometimes) dry topics covered.

If I had to pick a favorite, it would probably be the illustration that accompanied the unveiling of the redesign, A List Apart 5.0. The shoe-shine man carefully working on his own shoes was the perfect metaphor for both the idea of design as craft and the back-stage nature of the profession—working to make others shine, so to speak. It was a simple and humble concept, and I thought it created the perfect tone for the launch.

—Mike Pick, Creative director

So I can’t pick one favorite illustration that Kevin’s done. I just can’t. I could prattle on about this, that, or that other one, and tell you everything I love about each of ’em. I mean, hell: I still have a print of the illustration he did for my very first ALA article. (The illustration is, of course, far stronger than the essay that follows it.)

But his illustration for James Christie’s excellent “Sustainable Web Design” is a perfect example of everything I love about Kevin’s ALA work: how he conveys emotion with a few deceptively simple lines; the humor he finds in contrast; the occasional chicken. Like most of Kevin’s illustrations, I’ve seen it whenever I reread the article it accompanies, and I find something new to enjoy each time.

It’s been an honor working alongside your art, Kevin—and, on a few lucky occasions, having my words appear below it.

Thanks, Kevin.

—Ethan Marcotte, Technical editor

Kevin’s illustration for Cameron Koczon’s “Orbital Content” is one of the best examples I can think of to show off his considerable talent. Those balloons are just perfect: vaguely reminiscent of cloud computing, but tethered and within arm’s reach, and evoking the fun and chaos of carnivals and county fairs. No other illustrator I’ve ever worked with is as good at translating abstract concepts into compact, visual stories. A List Apart won’t be the same without him.

—Mandy Brown, Former contributing editor

Kevin has always had what seems like a preternatural ability to take an abstract technical concept and turn it into a clear and accessible illustration.

For me, my favorite pieces are the ones he did for the 3rd anniversary of the original “Responsive Web Design” article…the web’s first “responsive” illustration? Try squishing your browser here to see it in action—Ed

—Tim Murtaugh, Technical director

I think it may be impossible for me to pick just one illustration of Kevin’s that I really like. Much like trying to pick your one favorite album or that absolutely perfect movie, picking a true favorite is simply folly. You can whittle down the choices, but it’s guaranteed that the list will be sadly incomplete and longer (much longer) than one.

If held at gunpoint, however ridiculous that sounds, and asked which of Kevin’s illustrations is my favorite, close to the top of the list would definitely be “12 Lessons for Those Afraid of CSS Standards.” It’s just so subtle, and yet so pointed.

What I personally love the most about Kevin’s work is the overall impact it can have on people seeing it for the first time. It has become commonplace within our ranks to hear the phrase, “This is my new favorite Kevin Cornell illustration” with the publishing of each issue. And rightly so. His wonderfully simple style (which is also deceptively clever and just so smart) paired with the fluidity that comes through in his brush work is magical. Case in point for me would be his piece for “The Problem with Passwords” which just speaks volumes about the difficulty and utter ridiculousness of selecting a password and security question.

We, as a team, have truly been spoiled by having him in our ranks for as long as we have. Thank you, Kevin.

—Erin Lynch, Production manager

The elephant was my first glimpse at Kevin’s elegantly whimsical visual language. I first spotted it, a patient behemoth being studied by nonplussed little figures, atop Kristina Halvorson’s “The Discipline of Content Strategy,” which made no mention of elephants at all. Yet the elephant added to my understanding: content owners from different departments focus on what’s nearest to them. The content strategist steps back to see the entire thing.

When Rachel Lovinger wrote about “Content Modelling,” the elephant made a reappearance as a yet-to-be-assembled, stylized elephant doll. The unflappable elephant has also been the mascot of product development at the hands of a team trying to construct it from user research, strutted its stuff as curated content, enjoyed the diplomatic guidance of a ringmaster, and been impersonated by a snake to tell us that busting silos is helped by a better understanding of others’ discourse conventions.

The delight in discovering Kevin’s visual rhetoric doesn’t end there. With doghouses, birdhouses, and fishbowls, Kevin speaks of environments for users and workers. With owls he represents the mobile experience and smartphones. With a team arranging themselves to fit into a group photo, he makes the concept of responsive design easier to grasp.

Not only has Kevin trained his hand and eye to produce the gestures, textures, and compositions that are uniquely his, but he has trained his mind to speak in a distinctive visual language—and he can do it on deadline. That is some serious mastery of the art.

—Rose Weisburd, Columns editor

Categories: thinktime

Andrew Pollock: [life] Day 201: Kindergarten, some startup stuff, car wash and a trip to the vet ophthalmologist

Planet Linux Australia - Mon 18th Aug 2014 22:08

Zoe woke up at some point in the night and ended up in bed with me. I don't even remember when it was.

We got going reasonably quickly this morning, and Zoe wanted porridge for breakfast, so I made a batch of that in the Thermomix.

She was a little bit clingy at Kindergarten for drop off. She didn't feel like practising writing her name in her sign-in book. Fortunately Miss Sarah was back from having done her prac elsewhere, and Zoe had been talking about missing her on the way to Kindergarten, so that was a good distraction, and she was happy to hang out with her while I left.

I used the morning to do some knot practice for the rock climbing course I've signed up for next month. It was actually really satisfying doing some knots that previously I'd found to be mysterious.

I had a lunch meeting over at the Royal Brisbane Women's Hospital to bounce a startup idea off a couple of people, so I headed over there for a very useful lunch discussion and then briefly stopped off at home before picking up Zoe from Kindergarten.

Zoe had just woken up from a nap before I arrived, and was a bit out of sorts. She perked up a bit when we went to the car wash and had a babyccino while the car got cleaned. Sarah was available early, so I dropped Zoe around to her straight after that.

I'd booked Smudge in for a consult with a vet ophthalmologist to get her eyes looked at, so I got back home again, crated her, and drove to Underwood to see the vet. He said that she had the most impressive case of eyelid agenesis he'd ever seen. She also had persistent pupillary membranes in each eye. He said that eyelid agenesis is a pretty common birth defect in what he called "dumpster cats", which, for all we know, is exactly what Smudge (or, more importantly, her mother) was. He also said that other eye defects, like the membranes she had, are common in cases where there is eyelid agenesis.

The surgical fix was going to come in at something in the order of $2,000 an eye, be a pretty long surgery, and involve some crazy transplanting of lip tissue. Cost aside, it didn't sound like a lot of fun for Smudge, and given her age and that she's been surviving the way she is, I can't see myself wanting to spend the money to put her through it. The significantly cheaper option is to just religiously use some lubricating eye gel.

After that, I got home with enough time to eat some dinner and then head out to crash my Thermomix Group Leader's monthly team meeting to see what it was like. I got a good vibe from that, so I'm happy to continue with my consultant application.

Categories: thinktime
