
Joshua Hesketh: Introducing turbo-hipster for testing nova db migrations

Planet Linux Australia - Mon 23rd May 2016 13:05

Zuul is the continuous integration utility used by OpenStack to gate patchsets against tests. It takes care of communicating with gerrit (the code review system) and the test workers – usually Jenkins. You can read more about how the systems tie together on the OpenStack Project Infrastructure page.

The nice thing is that zuul doesn’t require you to use Jenkins. Anybody can provide a worker to zuul using the gearman protocol (gearman is a simple job server). Enter turbo-hipster*.

“Turbo-hipster is a CI worker with pluggable tasks initially designed to test OpenStack’s database migrations against copies of real databases.”

This will hopefully catch scenarios where changes to the database schema may not work due to outliers in real datasets and also help find where a migration may take an unreasonable amount of time against a large database.

In zuul’s layout configuration we are able to specify which jobs should be run against which projects in which pipelines. For example, for nova we want to run tests when a patchset is created, but we don’t (necessarily) need to run tests against it once it is merged etc. So in zuul we specify a new gate (aka job) to test nova against real databases.

turbo-hipster then listens for jobs created on that gate using the gearman protocol. Once it receives a patchset from zuul it creates a virtual environment and tests the upgrades. It then compiles and sends back the results.
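
To give an idea of what such a worker involves, here is a minimal sketch using the python-gearman library; the server address, job name and result handling are placeholders rather than turbo-hipster’s actual code:

import gearman  # python-gearman

def handle_job(worker, job):
    # job.data carries the parameters zuul sends for the change; a real
    # worker would check out the patchset, build a virtualenv and run the
    # migrations here, then return a result for zuul to report.
    return 'SUCCESS'

# Server address and job name are placeholders for illustration only.
worker = gearman.GearmanWorker(['zuul.example.org:4730'])
worker.register_task('build:gate-real-db-upgrade_nova_mysql', handle_job)
worker.work()  # block and process jobs as they arrive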

At the moment turbo-hipster is still under heavy development but I hope to have it reporting results back to gerrit patchsets soon as part of zuul’s report summary. For the moment I have a separate zuul instance running to test new nova patches and email the results back to me. Here is an example result report:

Build succeeded. - http://thw01.rcbops.com/logviewer/?q=/results/47/47162/9/check/gate-real-db-upgrade_nova_mysql/c4bc35c/index.html : SUCCESS in 13m 31s

*The name was randomly generated and does not necessarily contain meaning.

Categories: thinktime

Joshua Hesketh: LinuxCon Europe

Planet Linux Australia - Mon 23rd May 2016 13:05

After travelling very close to literally the other side of the world[0] I’m in Edinburgh for LinuxCon EU recovering from jetlag and getting ready to attend. I’m very much looking forward to my first LinuxCon, meeting new people and learning lots :-).

If you’re around and would like to catch up drop me a comment here. Otherwise I’ll see you at the conference!

[0] http://goo.gl/maps/JeJO2

Categories: thinktime

Joshua Hesketh: OpenStack infrastructure swift logs and performance

Planet Linux Australia - Mon 23rd May 2016 13:05

Turns out I’m not very good at blogging very often. However I thought I would put what I’ve been working on for the last few days here out of interest.

For a while the OpenStack Infrastructure team have wanted to move away from storing logs on disk to something more cloudy – namely, swift. I’ve been working on this on and off for a while and we’re nearly there.

For the last few weeks the openstack-infra/project-config repository has been uploading its CI test logs to swift as well as storing them on disk. This has given us the opportunity to compare the last few weeks of data and see what kind of effects we can expect as we move assets into an object storage.

  • I should add a disclaimer/warning, before you read, that my methods here will likely make statisticians cringe horribly. For the moment though I’m just getting an indication of how things compare.

The set up

Fetching files from an object store is nothing particularly new or special (CDNs have been doing it for ages). However, for our usage we want to serve logs with os-loganalyze, giving the opportunity to hyperlink to timestamp anchors or filter by log severity.

First though we need to get the logs into swift somehow. This is done by having the job upload its own logs. Rather than using (or writing) a Jenkins publisher we use a bash script to grab the job’s own console log (pulled from the Jenkins web UI) and then upload it to swift using credentials supplied to the job as environment variables (see my zuul-swift contributions).
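
As a rough sketch of that upload step (the environment variable names and container here are illustrative, not the ones the job actually uses):

import os

from swiftclient import client as swift_client

# Credentials supplied to the job as environment variables
# (variable names are placeholders for illustration).
conn = swift_client.Connection(
    authurl=os.environ['SWIFT_AUTH_URL'],
    user=os.environ['SWIFT_USERNAME'],
    key=os.environ['SWIFT_PASSWORD'])

# Upload the console log previously fetched from the Jenkins web UI.
with open('console.html', 'rb') as log:
    conn.put_object('logs', 'check/gate-example-job/console.html',
                    contents=log, content_type='text/html')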

This does, however, mean part of the logs are missing. For example, the fetching and upload processes write to Jenkins’ console log, but because it has already been fetched these entries are missing. Therefore this needs to be the very last thing you do in a job. I did see somebody do something similar where they keep the download process running in a fork so that they can fetch the full log, but we’ll look at that another time.

When a request comes into logs.openstack.org, it is handled like so:

  1. apache vhost matches the server
  2. if the request ends in .txt.gz, console.html or console.html.gz rewrite the url to prepend /htmlify/
  3. if the requested filename is a file or folder on disk, serve it up with apache as per normal
  4. otherwise rewrite the requested file to prepend /htmlify/ anyway

os-loganalyze is set up as a WSGIScriptAlias at /htmlify/. This means all files that aren’t on disk are sent to os-loganalyze (or if the file is on disk but matches a file we want to mark up it is also sent to os-loganalyze). os-loganalyze then does the following:

  1. Checks the requested file path is legitimate (or throws a 400 error)
  2. Checks if the file is on disk
  3. Checks if the file is stored in swift
  4. If the file is found, markup (such as anchors) is optionally added and the request is served
    1. When serving from swift the file is fetched via the swiftclient by os-loganalyze in chunks and streamed to the user on the fly. Obviously fetching from swift will have larger network consequences.
  5. If no file is found, 404 is returned

If the file exists both on disk and in swift then step #2 can be skipped by passing ?source=swift as a parameter (thus only attempting to serve from swift). In our case the files exist both on disk and in swift since we want to compare the performance so this feature is necessary.
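
Put together, the disk-first, swift-second lookup boils down to something like this sketch (a simplification of os-loganalyze; LOG_ROOT, CONTAINER, swift_conn and add_markup are stand-ins rather than the real names):

import os

# LOG_ROOT, CONTAINER and swift_conn (a swiftclient Connection) are assumed
# to be configured elsewhere; add_markup stands in for the os-loganalyze
# filters that insert anchors and severity classes.

def fetch_log(path, source=None):
    """Serve a log from disk if present, otherwise stream it from swift."""
    on_disk = os.path.join(LOG_ROOT, path)
    if source != 'swift' and os.path.isfile(on_disk):
        with open(on_disk, 'rb') as f:            # step 2: found on disk
            return add_markup(f.read())
    # steps 3 and 4.1: fetch from swift in chunks and stream to the client
    headers, body = swift_conn.get_object(CONTAINER, path,
                                          resp_chunk_size=64 * 1024)
    return (add_markup(chunk) for chunk in body)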

So now that we have the logs uploaded into swift and stored on disk we can get into some more interesting comparisons.

Testing performance process

My first attempt at this was simply to fetch the files from disk and then from swift and compare the results. A crude little python script did this for me: http://paste.openstack.org/show/122630/

The script fetches a copy of the log from disk and then from swift (both through os-loganalyze and therefore marked-up) and times the results. It does this in two scenarios:

  1. Repeatedly fetching the same file (to get a good average)
  2. Fetching a list of recent logs from gerrit (using the gerrit api) and timing those
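
The timing of each fetch can be approximated with something like this sketch (not the actual script linked above; the URL is a placeholder):

import time

import requests

def time_fetch(url):
    """Return (response, transfer, size) timings for a single request."""
    start = time.time()
    resp = requests.get(url, stream=True)
    response_time = time.time() - start          # until the headers arrive
    size = 0
    for chunk in resp.iter_content(chunk_size=64 * 1024):
        size += len(chunk)
    transfer_time = time.time() - start - response_time
    return response_time, transfer_time, size

# Repeatedly fetch the same file from each source to build an average.
url = 'http://logs.openstack.org/some/job/console.html'   # placeholder path
disk_runs = [time_fetch(url) for _ in range(50)]
swift_runs = [time_fetch(url + '?source=swift') for _ in range(50)]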

I then ran this in two environments.

  1. On my local network, on the other side of the world from the logserver
  2. On 5 parallel servers in the same DC as the logserver

Running on my home computer likely introduced a lot of errors due to my limited bandwidth, noisy network and large network latency. To help eliminate these errors I also tested it on 5 performance servers in the Rackspace cloud next to the log server itself. In this case I used ansible to orchestrate the test nodes thus running the benchmarks in parallel. I did this since in real world use there will often be many parallel requests at once affecting performance.

The following metrics are measured for both disk and swift:

  1. request sent – time taken to send the http request from my test computer
  2. response – time taken for a response from the server to arrive at the test computer
  3. transfer – time taken to transfer the file
  4. size – filesize of the requested file

The total time can be found by adding the first 3 metrics together.

 

Results

Home computer, sequential requests of one file

 

The complementary colours are the same metric and the darker line represents swift’s performance (over the lighter disk performance line). The vertical lines over the plots are the error bars while the fetched filesize is the column graph down the bottom. Note that the transfer and file size metrics use the right axis for scale while the rest use the left.

As you would expect the requests for both disk and swift files are more or less comparable. We see a more noticeable difference in the responses though, with swift being slower. This is because disk is checked first, and if the file isn’t found on disk then a connection is sent to swift to check there. Clearly this is going to be slower.

The transfer times are erratic and varied. We can’t draw much from these, so let’s keep analyzing deeper.

The total time from request to transfer can be seen by adding the times together. I didn’t do this because, when requesting files of different sizes (as in the next scenario), there is nothing worth comparing (the file sizes are different). Arguably we could compare them anyway as the log sizes for identical jobs are similar, but I didn’t think it was interesting.

The file sizes are there for interest’s sake but as expected they never change in this case.

You might notice that the end of the graph is much noisier. That is because I’ve applied some rudimentary data filtering.

                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    54.89516183    43.71917948     283.9594368    282.5074598
response (ms)        56.74750291    194.7547117     373.7328851    531.8043908
transfer (ms)        849.8545127    838.9172066     5091.536092    5122.686897
size (KB)            7.121600095    7.311125275     1219.804598    1220.735632

 

I know it’s argued as poor practice to remove outliers using twice the standard deviation, but I did it anyway to see how it would look. I only did one pass at this even though I calculated new standard deviations.

 

                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    13.88664039    14.84054789     274.9291111    276.2813889
response (ms)        44.0860569     115.5299781     364.6289583    503.9393472
transfer (ms)        541.3912899    515.4364601     5008.439028    5013.627083
size (KB)            7.038111654    6.98399691      1220.013889    1220.888889

 

I then moved the outliers to the end of the results list instead of removing them completely and used the newly calculated standard deviation (ie without the outliers) as the error margin.
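
In code, that handling of outliers amounts to something like this sketch (response_times_ms stands in for one column of the collected samples):

import statistics

def split_outliers(samples):
    """Separate values more than two standard deviations from the mean."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    kept = [s for s in samples if abs(s - mean) <= 2 * sd]
    outliers = [s for s in samples if abs(s - mean) > 2 * sd]
    return kept, outliers

kept, outliers = split_outliers(response_times_ms)  # one metric at a time
ordered = kept + outliers                 # outliers moved to the end of the list
error_margin = statistics.stdev(kept)     # recalculated without the outliers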

Then to get a better indication of the average times I plotted histograms of each of these metrics.

Here we can see a similar request time.
 

Here it is quite clear that swift is slower at actually responding.
 

Interestingly both disk and swift sources have a similar total transfer time. This is perhaps an indication of my network limitation in downloading the files.

 

Home computer, sequential requests of recent logs

Next from my home computer I fetched a bunch of files in sequence from recent job runs.

 

 

Again I calculated the standard deviation and average to move the outliers to the end and get smaller error margins.

First pass
                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    54.89516183    43.71917948     283.9594368    282.5074598
response (ms)        194.7547117    56.74750291     531.8043908    373.7328851
transfer (ms)        849.8545127    838.9172066     5091.536092    5122.686897
size (KB)            7.121600095    7.311125275     1219.804598    1220.735632

Second pass without outliers
                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    13.88664039    14.84054789     274.9291111    276.2813889
response (ms)        115.5299781    44.0860569      503.9393472    364.6289583
transfer (ms)        541.3912899    515.4364601     5008.439028    5013.627083
size (KB)            7.038111654    6.98399691      1220.013889    1220.888889

 

What we are probably seeing here with the large number of slower requests is network congestion in my house. Since the script requests disk, swift, disk, swift, disk and so on, this evens out, causing latency in both sources as seen.
 

Swift is very much slower here.

 

Although comparable in transfer times. Again this is likely due to my network limitation.
 

The size histograms don’t really add much here.
 

Rackspace Cloud, parallel requests of same log

Now to reduce latency and other network effects I tested fetching the same log over again in 5 parallel streams. Granted, it may have been interesting to see a machine close to the log server do a bunch of sequential requests for the one file (with little other noise) but I didn’t do it at the time unfortunately. Also we need to keep in mind that others may be accessing the log server and therefore any request, in both my testing and normal use, is going to have competing load.
 

I collected a much larger amount of data here making it harder to visualise through all the noise and error margins etc. (Sadly I couldn’t find a way of linking to a larger google spreadsheet graph). The histograms below give a much better picture of what is going on. However out of interest I created a rolling average graph. This graph won’t mean much in reality but hopefully will show which is faster on average (disk or swift).
 

You can see that now we’re closer to the server, swift is noticeably slower. This is confirmed by the averages:

 

First pass
                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    32.42528982    9.749368282     4.87337544     4.05191168
response (ms)        245.3197219    781.8807534     39.51898688    245.0792916
transfer (ms)        1082.253253    2737.059103     1553.098063    4167.07851
size (KB)            0              0               1226           1232

Second pass without outliers
                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    1.375875503    0.8390193564    3.487575109    3.418433003
response (ms)        28.38377158    191.4744331     7.550682037    96.65978872
transfer (ms)        878.6703183    2132.654898     1389.405618    3660.501404
size (KB)            0              0               1226           1232

 

Even once outliers are removed we’re still seeing a large latency from swift’s response.

The standard deviation in the requests has now gotten very small. We’ve clearly made a difference moving closer to the logserver.

 

Very nice and close.
 

Here we can see that for roughly half the requests the response time was the same for swift as for the disk. It’s the other half of the requests bringing things down.
 

The transfer for swift is consistently slower.

 

Rackspace Cloud, parallel requests of recent logs

Finally I ran just over a thousand requests in 5 parallel streams from computers near the logserver for recent logs.

 

Again the graph is too crowded to see what is happening so I took a rolling average.

 

 

First pass
                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    0.7227904332   0.8900549012    3.515711867    3.56191383
response (ms)        434.8600827    909.095546      145.5941102    189.947818
transfer (ms)        1913.9587      2132.992773     2427.776165    2875.289455
size (KB)            6.341238774    7.659678352     1219.940039    1221.384913

Second pass without outliers
                     Standard deviation             Mean
                     disk           swift           disk           swift
request sent (ms)    0.4798803247   0.4966553679    3.379718381    3.405770445
response (ms)        109.6540634    171.1102999     70.31323922    86.16522485
transfer (ms)        1348.939342    1440.2851       2016.900047    2426.312363
size (KB)            6.137625464    7.565931993     1220.318912    1221.881335

 

The averages here are much more reasonable than when we continually tried to request the same file. Perhaps we’re hitting limitations with swift’s serving abilities.

 

I’m not sure why we have a sinc function here. A network expert may be able to tell you more. As far as I know this isn’t important to our analysis other than the fact that both disk and swift match.
 

Here we can now see swift keeping a lot closer to disk results than when we only requested the one file in parallel. Swift is still, unsurprisingly, slower overall.
 

Swift still loses out on transfers but again does a much better job of keeping up.
 

Error sources

I haven’t accounted for any swift intricacies (in terms of caches etc.) when doing the following:

  • Fetching random objects
  • Fetching the same object over and over
  • Fetching in parallel multiple different objects
  • Fetching the same object in parallel

I also haven’t done anything to account for things like file system caching, network profiling, noisy neighbours etc etc.

os-loganalyze tries to stay authenticated with swift, however:

  • This can timeout (causes delays while reconnecting, possibly accounting for some spikes?)
  • This isn’t thread safe (are we hitting those edge cases?)

We could possibly explore getting longer authentication tokens or having os-loganalyze pull from an unauthenticated CDN to add the markup and then serve. I haven’t explored those here though.

os-loganalyze also handles all of the requests not just from my testing but also from anybody looking at OpenStack CI logs. In addition to this it also needs to deflate the gzip stream if required. As such there is potentially a large unknown (to me) load on the log server.

In other words, there are plenty of sources of errors. However I just wanted to get a feel for the general responsiveness compared to fetching from disk. Both sources had noise in their results so it should be expected in the real world when downloading logs that it’ll never be consistent.

Conclusions

As you would expect the request times are pretty much the same for both disk and swift (as mentioned earlier) especially when sitting next to the log server.

The response times vary, but looking at the averages and the histograms these are rarely large. Even in the case where requesting the same file over and over in parallel caused responses to slow down, these were only in the order of 100ms.

The response time is the important one as it indicates how soon a download will start for the user. The total time to stream the contents of the whole log is seemingly less important if the user is able to start reading the file.

One thing that wasn’t tested was streaming of different file sizes. All of the files were roughly the same size (being logs of the same job). For example, what if the asset was a few gigabytes in size, would swift have any significant differences there? In general swift was slower to stream the file but only by a few hundred milliseconds for a megabyte. It’s hard to say (without further testing) if this would be noticeable on large files where there are many other factors contributing to the variance.

Whether or not these latencies are an issue is relative to how the user is using/consuming the logs. For example, if they are just looking at the logs in their web browser on occasion they probably aren’t going to notice a large difference. However if the logs are being fetched and scraped by a bot then it may see a decrease in performance.

Overall I’ll leave deciding on whether or not these latencies are acceptable as an exercise for the reader.

Categories: thinktime

Michael Still: Potato Point

Planet Linux Australia - Mon 23rd May 2016 13:05
I went to Potato Point with the Scouts for a weekend wide game. Very nice location, apart from the ticks!


Tags for this post: blog pictures 20160523 photo coast scouts bushwalk
Related posts: Exploring the Jagungal; Scout activity: orienteering at Mount Stranger

Categories: thinktime

Richard Jones: PyCon Australia 2016: Registration Opens!

Planet Linux Australia - Mon 23rd May 2016 09:05

We are delighted to announce that online registration is now open for PyCon Australia 2016. The seventh PyCon Australia, being held in Melbourne, Victoria from August 12th to 16th at the Melbourne Convention and Exhibition Centre, will draw hundreds of Python developers, enthusiasts and students from Australasia and afar.

Starting today, early bird offers are up for grabs. To take advantage of these discounted ticket rates, be among the first 90 to register. Early bird registration starts from $60 for full-time students, $190 for enthusiasts and $495 for professionals. Offers this good won’t last long, so register right away.

We strongly encourage attendees to organise their accommodation as early as possible, as demand for cheaper rooms is very strong during the AFL season.

PyCon Australia has endeavoured to keep tickets as affordable as possible. Financial assistance is also available: for information about eligibility, head to our financial assistance page and apply. We are able to make such offers thanks to our Sponsors and Contributors.

To begin the registration process, and find out more about each level of ticket, visit our registration information page.

Important Dates to Help You Plan

  • 22 May: Registration opens - ‘Early bird’ prices for the first 90 tickets
  • 17 June: Last day to apply for financial assistance
  • 26 June: Last day to purchase conference dinner tickets
  • 9 July: Last day to order conference t-shirts
  • 12 August: PyCon Australia 2016 begins!

About PyCon Australia

PyCon Australia is the national conference for the Python programming community. The seventh PyCon Australia will be held on August 12-16 2016 in Melbourne, bringing together professional, student and enthusiast developers with a love for programming in Python. PyCon Australia informs the country’s developers with presentations by experts and core developers of Python, as well as the libraries and frameworks that they rely on.

To find out more about PyCon Australia 2016, visit our website at pycon-au.org, follow us at @pyconau or e-mail us at contact@pycon-au.org.

PyCon Australia is presented by Linux Australia (www.linux.org.au) and acknowledges the support of our Platinum Sponsors, DevDemand.co and IRESS; and our Gold sponsors, Google Australia and Optiver. For full details of our sponsors, see our website.

Categories: thinktime

Danielle Madeley: Django and PostgreSQL composite types

Planet Linux Australia - Sun 22nd May 2016 23:05

PostgreSQL has this nifty feature called composite types that you can use to create your own types from the built-in PostgreSQL types. It’s a bit like hstore, only structured, which makes it great for structured data that you might reuse multiple times in a model, like addresses.

Unfortunately, to date they were pretty much a pain to use in Django. There were some older implementations for versions of Django before 1.7, but they tended to do things like create surprise new objects in the namespace, not be migratable, and require a connection to the DB at all times (i.e. even during your build).

Anyway, after reading a bunch of their implementations and then the Django source code I wrote django-postgres-composite-types.

Install with:

pip install django-postgres-composite-types

Then you can define a composite type declaratively:

from django.db import models
from postgres_composite_type import CompositeType


class Address(CompositeType):
    """An address."""

    address_1 = models.CharField(max_length=255)
    address_2 = models.CharField(max_length=255)
    suburb = models.CharField(max_length=50)
    state = models.CharField(max_length=50)
    postcode = models.CharField(max_length=10)
    country = models.CharField(max_length=50)

    class Meta:
        db_type = 'x_address'  # Required

And use it in a model:

class Person(models.Model):
    """A person."""

    address = Address.Field()

The field should provide all of the things you need, including formfield etc., and you can even inherit from this field to extend it in your own way:

class AddressField(Address.Field):

    def __init__(self, in_australia=True, **kwargs):
        self.in_australia = in_australia
        super().__init__(**kwargs)

Finally to set up the DB there is a migration operation that will create the type that you can add:

import address

from django.db import migrations


class Migration(migrations.Migration):

    operations = [
        # Registers the type
        address.Address.Operation(),
        migrations.AddField(
            model_name='person',
            name='address',
            field=address.Address.Field(blank=True, null=True),
        ),
    ]
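
Using the model then looks roughly like this (a sketch, assuming the composite type is constructed with keyword arguments matching its fields; check the project README for the exact API):

# Assumes the Address and Person definitions above are importable.
person = Person(address=Address(
    address_1='12 Example Street',
    address_2='',
    suburb='Fitzroy',
    state='VIC',
    postcode='3065',
    country='Australia',
))
person.save()
print(person.address.suburb)  # composite values read back as attributes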

It’s not smart enough to add it itself (can you do that?). Nor would it be smart enough to write the operations to alter a type. That would be a pretty cool trick. But it’s useful functionality all the same, especially when the alternative is creating lots of 1:1 models that are hard to work with and hard to garbage collect.

It’s still pretty early days, so the APIs are subject to change. PRs accepted of course.

Categories: thinktime

More than ten is too many

Seth Godin - Sun 22nd May 2016 19:05
Human beings suffer from scope insensitivity. Time and again, we're unable to put more urgency or more value on choices that have more impact. We don't donate ten times as much to a charity that's serving 10 times (or even...        Seth Godin
Categories: thinktime

Maxim Zakharov: Restoring gitstats

Planet Linux Australia - Sat 21st May 2016 23:05

The gitstats tool stopped working on our project after an upgrade to Ubuntu 16.04. Finally I have got time to have a look. There were two issues with it:

  1. We do not need to use process wait, as process communicate waits until process termination, and the last process in the pipeline does not finish until all processes before it in the pipeline terminate; plus, process wait may deadlock on pipes with huge output, see the notice at https://docs.python.org/2/library/subprocess.html and the sketch below this list.
  2. On Ubuntu 16.04 grep has started writing a "Binary file (standard input) matches" notice into the pipe, which breaks parsing.
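
A minimal sketch of the first fix (illustrative only, not the actual gitstats code):

import subprocess

# communicate() reads stdout to EOF and then waits for the process, so it
# cannot deadlock on a full pipe the way wait() can with huge output.
pipeline = subprocess.Popen(['git', 'rev-list', 'HEAD'],
                            stdout=subprocess.PIPE)
output, _ = pipeline.communicate()
print(len(output.splitlines()), 'commits')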

I have made a pull request which fixes this issue: https://github.com/hoxu/gitstats/pull/65
You can also clone the fixed version from my account: https://github.com/Maxime2/gitstats

Categories: thinktime

Metaphors aren't true

Seth Godin - Sat 21st May 2016 18:05
But they're useful. That's why professionals use them to teach, to learn and to understand. A metaphor takes what we know and uses it as a lever to understand something else. And the only way we can do that is...        Seth Godin
Categories: thinktime

The other kind of harm

Seth Godin - Fri 20th May 2016 19:05
Pop culture is enamored with the Bond villain, the psycho, the truly evil character intent on destruction. It lets us off the hook, because it makes it easy to see that bad guys are other people. But most of the...        Seth Godin
Categories: thinktime

Glen Turner: Heatsink for RPi3

Planet Linux Australia - Fri 20th May 2016 18:05

I ordered a passive heatsink for the system-on-chip of the Raspberry Pi 3 model B. Since it fits well I'll share the details:

Order
  • Fischer Elektronik ICK S 14 X 14 X 10 heatsink (Element 14 catalogue 1850054, AUD3.70).

  • Fischer Elektronik WLFT 404 23X23 thermally conductive foil, adhesive (Element 14 catalogue 1211707, AUD2.42 ).

Install

To install you need these parts: two lint-free isopropyl alcohol swabs; and these tools: a sharp craft knife and an anti-static wrist strap.

Prepare the heatsink: Swab the base of the heatsink. Wait for it to dry. Remove the firm clear plastic from the thermal foil, taking care not to get fingerprints in the centre of the exposed sticky side. Put the foil on the bench, sticky side up. Plonk the heatsink base onto the sticky side, rolling slightly to avoid air bubbles and then pressing hard. Trim around the edges of the heatsink with the craft knife.

Prepare the Raspberry Pi 3 system-on-chip: Unplug everything from the RPi3, turn off the power, wait a bit, plug the USB power lead back in but don't reapply power (this gives us a ground reference). If the RPi3 is in a case, just remove the lid. Attach the wrist strap and clamp it to the ethernet port surround or some other convenient ground. Swab the largest of the chips on the board, ensuring no lint remains.

Attach heat sink: Remove the plastic protection from the thermal foil, exposing the other sticky side. Do not touch the sticky side. With care place the heatsink squarely and snugly on the chip. Press down firmly with a finger of your grounded hand for a few seconds. Don't press too hard: we're just ensuring the glue binds.

Is it worth it?

This little passive heatsink won't stop the RPi3 from throttling under sustained full load, despite this being one of the more effective passive heatsinks on the market. You'll need a fan blowing air across the heatsink to prevent that happening, and you might well need a heatsink on the RAM too.

But the days of CPUs being able to run at full rate continuously are numbered. Throttling the CPU performance under load is common in phones and tablets, and is not rare in laptops.

What the heatsink allows is for a delay to the moment of throttling. So a peaky load can have more chance of not causing throttling. Since we're only talking AUD7.12 in parts a passive heatsink is worth it if you are going to use the RPi3 for serious purposes.

Of course the heatsink is also a more effective radiator. When running cpuburn-a53 the CPU core temperature stabilises at 80C with a CPU clock of 700MHz (out of 1200MHz). It's plain that 80C is the target core temperature for this version of the RPi3's firmware. That's some 400MHz higher than without the heatsink. But if your task needs sustained raw CPU performance then you are much better off with even the cheapest of desktops, let alone a server.

Categories: thinktime

Steven Hanley: [mtb/events] UTA100 - The big dance through the blue mountains again

Planet Linux Australia - Fri 20th May 2016 17:05
Back at Ultra Trail Australia running through the Blue Mountains wilderness

I am still fascinated by seeing how I can improve in this event; after running in pairs twice and now solo twice, I signed up to come back this year, still seeing how much time I can lop off my lap of the course. Though I continually claim I am not a runner with my mountain biking and adventure racing background, I have been getting out on foot a lot since I got into doing this event. With 12 hours as the arbitrary time around this course before I may admit I am a runner, I was coming back to see how close to this goal I would get.

My first year solo in 2014 I was positive I would finish, just not sure how fast; thinking on the day I may take around 15 hours, I managed 13:44, which at the time had me happy and a little surprised. In 2015 I had a few things interrupt my lead up and not everything felt great, so though I hoped to go under 13 hours I was not sure; managing 13:15 was not what I wanted but I got around the loop again anyway.

In 2016 I continued to not have a training program and simply work toward goals by judging effort in my head and race schedule leading up to the event. However most running science seems to suggest the more you can run without getting injured the better. So on January 1st 2016 I kicked off a running streak to see how long it would last. I managed to run every day in 2016 until Wednesday before UTA100, so 132 days in a row with a minimum distance of 5km. This included the days before and after efforts such as the razorback ultra in Victoria and the Six Foot Track marathon in the Blue Mountains.

I never really managed to get much speed work into my prep again this year, however I had definitely upped my volume, doing between 70 and 125km every week of the year, with most of it on trails with some good altitude gain at times. I also remained uninjured and able to run every day, which was great; even with the odd fall or problem I could work around and keep moving through, I was feeling good before the event. Due to my tendency to waste time at the checkpoints on course I also had my sister here to support me this year, so I would be able to run into CP 3, 4 and 5, grab new bottles, have food shoved at me and head on out.

All was looking fairly good and I was sure I could go under 13 hours this year; the question remained how far under I could get. Then Wednesday night before the race I got home feeling awful and shivering and needed to crawl into bed early and get sleep. Waking up Thursday I felt worse if possible and was worried it was all over: I had gotten sick and nothing would help. I left work at 2pm that day and headed home to sleep the rest of the day. Fortunately by the time I woke on Friday morning I no longer felt so awful, and actually felt I may be able to run the next day. I had stopped my running streak on Wednesday; there was no real need to continue it, and feeling so bad for two days it definitely had to stop.

I arrived Friday afternoon, spent money with Graham and Hanny in their Find Your Feet store for some stuff I needed, and headed to the briefing. The welcome to country from David King was once again a highlight of the runners briefing; it is a fantastic part of the race every year and really heartfelt, genuine and funny. I met my sister Jane at our accommodation and discussed the race day and estimated times while eating dinner. Fortunately I finally felt ready to run again by the time I went to sleep Friday night. I had a few runs the week before with what I call Happy Legs, where you feel awesome running and light and happy on your feet. Though I hoped for that on Saturday I knew I just had to get out on the track and keep moving well.

I was in wave 1 and starting at 6:20am. I had a chat with my mate Tom Reeve on the start line and then we got moving. Taking it easy on the 5km bitumen loop I had a chat with Phil Whitten, who was worried that, after stomach issues caused him problems in Six Foot, he may have issues today too (in the end he did, alas); still, it was nice to be moving and cruising along the out and back before the steps. In wave 1 it was nice and open and even the descent down Furber steps was pretty open. I ran through toward the golden stairs feeling OK, never awesome, but not like it was going to be a horrible day out.

I got onto the fire road out Narrow Neck and realised I was probably a few beats higher than I should have been HR wise, however I decided to stay with it and ensure I did not push too hard on the hill climbs along here. With the start out and back slightly extended this year it was good to pass through CP1 in the same time as last year, so on course for slightly faster, however I would not have a proper idea of time and how I was going until Dunphys camp. On the climb from Cedar gap I noticed some people around me seemed to be pushing harder than I thought they should, however that had nothing to do with me so I kept moving and hoped I survived. On the descent down to the camp I had my left adductor cramp a bit, which seems to happen here every year, so I have to manage it and keep going.

At Dunphys CP I had a chat to Myf happy to actually see her or Matt this year (I missed seeing them here last year) and got moving aware I would need to take it easy on iron pot to keep the cramps at bay. I got onto Iron Pot and loved being able to say thanks to David King and his colleagues welcoming us to country with Didgeridoo and clap sticks up there, the short out and back made it easier this year and then I took it really easy on the loose ski slope sort of descent down due to cramps being close to the surface. Continued taking it easy chatting with other runners as we went back past the outgoing track on our right and then we dropped down to the bottom of the valley to start heading up Megalong Rd.

Looking at my watch I was probably behind time to do sub 12 hours already at this point, but would have a much better idea once I got to the Six Foot CP in a little while. I took it easy climbing the road at a strong power walk and then managed a comfortable 4 to 5 minute pace along the road into the CP. I got out of CP3 just before the 5 hour mark; this confirmed I was unlikely to go under 12 hours, as I expected I needed to be gone from here in 4h40m to manage sub 12 knowing how I was feeling. I grabbed some risotto and baked potatoes with salt from Jane to see if I could eat these for some variety rather than sweet crap while climbing to Katoomba. On the way into the CP I passed Etienne, who had an injury, so I asked her to see if he needed help when he got in (though that made it harder for her to get to me in time at Katoomba; fortunately Etienne had his parents there to help him out when he had to withdraw there).

Trying to eat the solid food was difficult and slowing me down, so I gave up by the time I hit the single track just before the stairs. I had a chat with a blonde woman (it may have been Daniela Burton) and it was her first 100, so I told her not to get discouraged by how long the next leg (CP4 to CP5) takes and to keep focusing on moving forward. I also had a chat with Ben Grimshaw a few times on the way up Nellies, as I was passed by him while trying to eat solid food and then caught him again on the stairs once I started pushing up there reasonably fast once more. We cruised through the single track at the top passing a few runners and got into CP4 pretty much together.

I had to refill my water bladder here as well as get two new bottles; still, with Jane's help I got out of here fast and left by 6 hours 30 minutes on the race clock, though behind Ben now as he was quicker in the CP. I was happy to hit my race goal of feeling pretty good at Katoomba and still being keen to run, which is always the way I think you need to feel at this point, as the next leg is the crux of the race: the half marathon of stairs is really a tough mental and physical barrier to get through.

I headed along to echo point through some crowds on the walk way near the cliff edge and it was nice to have some of the tourists cheering us on, a few other runners were near by and we got through nicely. On the descent down the giant stair case I seemed to pass a few people pretty comfortably and then on to Dardanelle's pass and it was nice running through there for a while. Of course getting down to Leura forest we got to see some 50km runners coming the other way (a few asked me where I was going worried they had made a wrong turn, when I said I was a 100km runner they realised all was cool told me well done and kept going).

I caught Ben again on the way up the stairs from Leura forest and we were near each other a bit for a while, then I seemed to pull ahead on the stairs a bit, so over the next while I got away from him (he caught me later in the race anyway though). Last year I had a diabetic low blood sugar incident in this leg, somewhere just before the Wentworth Falls lookout carpark I think. So I was paying more attention through the day on constant calorie intake with lots of clif shot blocks and gu gels. I kept moving well enough through this whole leg so that turned out well. I said hi to Graham (Hammond), who was cheering runners on at the Fairmont resort water station, and ran on for a few more stairs.

Running in to CP5 on King Tableland Road I still felt alright and managed to eat another three cubes of shot block there. I had run out of plain water (bladder) again so had not had a salt tablet for a little while. This year I had decided to run with more salt consumption and had bought hammer enduralyte salt tablets; I was downing 1 or 2 of them every time I ate all day, which I think may have helped, though I still had cramps around Dunphys (that happens every year) and I knew I had run a bit hard early anyway (hoping to hit splits needed for sub 12). However, even though it was a hot day and many people struggled more in the heat than in other years, I seemed to deal with it well. I had discovered, though, that I struggled to down the tablets with electrolyte drink from my bottles (high 5 tablets, usually berry flavour), so I needed plain water from the camelback for them.

I got more food from Jane at CP5, re-lubed myself a bit, refilled the bladder and got moving. I also grabbed a second head torch; though I was carrying one already, I liked the beam pattern more on the one I grabbed here. With full water, bottles and the extra torch I felt pretty heavy running out of CP5. Still, just 3 hours to go now I expected. I got out of there at 9h25m on the race clock which was good, thus if I could have a good run through here I may be able to get in under 12h20m (a 2h50m run would be nice for this leg at this point). I got moving on the approach to the Kedumba descent, joking with a few others around me that it was time to smash the quads and say goodbye to them as they were no longer needed after this really (only one short sort of descent to Leura creek). I was asked if we needed quads on the stairs; my response was they were a glute fest and allowed use of arms due to the railing, so who needs quads after Kedumba.

However as I got on to the descent and passed under the overhang I noticed my legs were a bit off and I could not open up well, I thought about it and realised I was probably low on sugar and needed to eat, eating at this sort of downhill pace was a bit hard (especially as some food was making me feel like throwing up (gels)). I thought I would try to hang on until the bottom as I could walk up out of Jamisons creek eating. However I needed to slow to a walk just after passing the Mt Solitary turn off and down a gel. Then a few minutes later trying to run still did not work so I had to stop and walk and eat for a while again rather than descending at full speed. Doing all of that I was passed by a few people (I think the woman who came 5th, the guy I joked about not needing Quads with and a few others).

Oh well I should have eaten more while stopped at the CP or on the flat at the top, oops, lost time (in the results comparing with people I ran similar splits all day to I may have lost as much as 15 minutes here with this issue). Once I got onto the climb out of Jamisons creek I ate some more and focused on holding a reasonably strong hike, the people who passed me were long gone and I could not motivate myself to push hard to see if I would catch them or not. I was passing a number of 50km runners by this point (I think the sweep must have been at CP5 when I went through). They were fun to cheer on and chat with as I caught and passed them, getting down to Leura creek was good as it was still day light and I could get moving up there to the last aid and onto the finish before I thought about lights.

Ben caught me again here, saying he had really pushed hard on the Kedumba descent, and he was looking good so sat a little ahead of me up to the aid station. I refilled my bottles and kept going, chatting with 50km runners as I passed them. I got to the poo farm a bit quicker than I expected (going on feeling, as I was not looking at my watch much), however it was good to finally be up on Federal pass not long after that and this is where I decided to focus on moving fast. The last two years I crawled along here and I think I lost a lot of time; I know last year I had mentally given up so was crawling, and the year before I think I was just a bit stuffed by then. This time I focused on running whenever it was not a steep up and on getting over to the stairs as quickly as possible.

It was still fun cheering on the 50km runners and chatting with them as I passed. I even saw some women in awesome pink outfits I had seen here a few weeks earlier while training, so it was good to cheer them on; when I asked them about it they said it was them and they recognised me ("it's pinky" they exclaimed) as I passed. I got to the base of the stairs at 12:14 so knew I had to work hard to finish in under 12:30, but it was time to get that done if possible. On the climb up the stairs it felt like I was getting stuck behind 50km runners on many of the narrow sections, however it probably did not slow the pace much (on one occasion a race doctor was walking up the stairs with a runner just to help them get to the finish). I managed to get across the finish line in 12:29:51 (57th overall), which was a good result all things considered.

Thanks go to Jane for coming up from Sydney and supporting me all day, Tom, Al and AROC for keeping the fun happening for all the runners, Dave and co for some excellent course markings, all the other AROC people and volunteers. David, Julie, Alex and others for company on lots of the training the last few months. I have a few ideas for what I need to work on next to faster on this course, however am thinking I may have a year off UTA100 to go do something else. The Hubert race in South Australia at the start of may looks like it could be awesome (running in the wilpena pound area through the Flinders ranges) and it will probably be good to develop my base and speed a bit more over time before my next attempt to see if I can become a runner (crack 12 hours on this course).

UTA100 really is the pinnacle of trail running in Australia with the level of competition, the quality and fun of the course, the vibe on course, the welcome to country, the event history and everything else, so I highly recommend it to anyone keen to challenge themselves. Even if, so far this year, the event that has really grabbed my attention the most is probably the Razorback Ultra; it is a very different day out to UTA100, so it is all good fun to get outdoors and enjoy the Australian wilderness.

Categories: thinktime

Gary Pendergast: Introducing: Linkify for Chrome

Planet Linux Australia - Thu 19th May 2016 23:05

In WordPress 4.2, a fun little feature was quietly snuck into Core, and I’m always delighted to see people’s reactions when they discover it.

Thank you kind WordPress Devs for introducing the paste-a-link-on-highlighted-text feature. It's already saved me half an hour this week!

— Meagan Hanes (@mhanes) May 10, 2016

I love being able to highlight text in @WordPress, paste a URL, and a link appearing. Thanks to @ellaiseulde for leading recent changes!

— Morgan Estes (@morganestes) April 29, 2016

But there’s still a problem – WordPress is only ~26% of the internet, how can you get the same feature on the other 74%? Well, that problem has now been rectified. Introducing, Linkify for Chrome:



Linkify

Automatically transform pasted URLs into links.

chrome.google.com

Thank you to Davide for creating Linkify’s excellent icon!

Linkify is a Chrome extension to automatically turn a pasted URL into a link, just like you’re used to in WordPress. It also supports Trac and Markdown-style links, so you can paste links on your favourite bug trackers, too.

Speaking of bug trackers, if there are any other link formats you’d like to see, post a ticket over on the Linkify GitHub repo!

Oh, and speaking of Chrome extensions, you might be like me, and find the word “emojis” to be extraordinarily awkward. If so, I have another little extension, just for you.

Categories: thinktime

Our bias for paid marketing

Seth Godin - Thu 19th May 2016 18:05
A few rhetorical questions: Is a physical therapist with a professional logo better than one with a handmade sign? Are you more likely to stay at a hotel that you've heard of as opposed to an unknown one, even if...        Seth Godin
Categories: thinktime

Stewart Smith: Fuzzing Firmware – afl-fuzz + skiboot

Planet Linux Australia - Thu 19th May 2016 09:05

In what is likely to be a series on how firmware makes some normal tools harder to use, first I’m going to look at american fuzzy lop – a fuzz testing tool that, if you’re not using it, you most certainly have bugs it’ll find for you.

I first got interested in afl-fuzz during Erik de Castro Lopo’s excellent talk at linux.conf.au 2016 in Geelong earlier this year: “Fuzz all the things!“. In a previous life, the Random Query Generator managed to find a heck of a lot of bugs in MySQL (and Drizzle). For randgen info, see Philip Stoev’s talk on it from way back in 2009, a recent (2014) blog post on how Tokutek uses it and some notes on how it was being used at Oracle from 2013. Basically, the randgen was a specialized fuzzer that (given a grammar) would randomly generate SQL queries, and then (if the server didn’t crash), compare the result to some other database server (e.g. your previous version).

The afl-fuzz fuzzer takes a different approach – it’s a much more generic fuzzer rather than a targeted tool. Also, while tools such as the random query generator are extremely powerful and find specialized bugs, they’re hard to get started with. A huge benefit of afl-fuzz is that it’s really, really simple to get started with.

Basically, if you have a binary that takes input on stdin or as a (relatively small) file, afl-fuzz will just work and find bugs for you – read the Quick Start Guide and you’ll be finding bugs in no time!

For firmware of course, we’re a little different than a simple command line program as, well, we aren’t one! Luckily though, we have unit tests. These are just standard binaries that include a bunch of firmware code and get run in user space as part of “make check”. Also, just like unit tests for any project, people do send me patches that break tests (which I reject).

Some of these tests act on data we get from somewhere else – maybe reading other parts of firmware off PNOR or interacting with data structures we get from other bits of firmware. For testing this code, it can be relatively easy to read these off disk for the test.

For skiboot, there’s a data structure we get from the service processor on FSP machines called HDAT. Basically, it’s just like the device tree, but different. Because yet another binary format is always a good idea (yes, that is laced with a heavy dose of sarcasm). One of the steps in early boot is to parse the HDAT data structure and convert it to a device tree. Luckily, we structured our code so that creating a unit test that can run in userspace was relatively easy, we just needed to dump this data structure out from a running machine. You can see the test case here. Basically, hdat_to_dt is a binary that reads the HDAT structure out of a pair of files and prints out a device tree. One of the regression tests we have is that we always produce the same output from the same input.

So… throwing that into AFL yielded a couple of pretty simple bugs, especially around aborting out on invalid data (it’s better to exit the process with failure rather than hit an assert). Nothing too interesting here on my simple input file, but it does mean that our parsing code exits “gracefully” on invalid data.

Another utility we have is actually a userspace utility for accessing the gard records in the flash. A GARD record is a record of a piece of hardware that has been deconfigured due to a fault (or a suspected fault). Usually this utility operates on PNOR flash through /dev/mtd – but really what it’s doing is talking to the libflash library, that we also use inside skiboot (and on OpenBMC) to read/write from flash directly, via /dev/mtd or just from a file. The good news? I haven’t been able to crash this utility yet!

So I modified the pflash utility to read from a file to attempt to fuzz the partition reading code we have for the partitioning format that’s on PNOR. So far, no crashes – although to even get it going I did have to fix a bug in the file handling code in pflash, so that’s already a win!

But crashing bugs aren’t the only type of bugs – afl-fuzz has exposed several cases where we act on uninitialized data. How? Well, we run some test cases under valgrind! This is the joy of user space unit tests for firmware – valgrind becomes a tool that you can run! Unfortunately, these bugs have been sitting in my “todo” pile (which is, of course, incredibly long).

Where to next? Fuzzing the firmware calls themselves would be nice – although that’s going to require a targeted tool that knows about what to pass each of the calls. Another round of afl-fuzz running would also be good, I’ve fixed a bunch of the simple things and having a better set of starting input files would be great (and likely expose more bugs).

Categories: thinktime

The short run and the long run

Seth Godin - Wed 18th May 2016 18:05
It's about scale. Pick a long enough one (or a short enough one) and you can see the edges. In the short run, there's never enough time. In the long run, constrained resources become available. In the short run, you...        Seth Godin
Categories: thinktime

The Rich (Typefaces) Get Richer

a list apart - Wed 18th May 2016 00:05

There are over 1,200 font families available on Typekit. Anyone with a Typekit plan can freely use any of those typefaces, and yet we see the same small selection used absolutely everywhere on the web. Ever wonder why?

The same phenomenon happens with other font services like Google Fonts and MyFonts. Google Fonts offers 708 font families, but we can’t browse the web for 15 minutes without encountering Open Sans and Lato. MyFonts has over 20,000 families available as web fonts, yet designers consistently reach for only a narrow selection of those.

On my side project Typewolf, I curate daily examples of nice type in the wild. Here are the ten most popular fonts from 2015:

  1. Futura
  2. Aperçu
  3. Proxima Nova
  4. Gotham
  5. Brown
  6. Avenir
  7. Caslon
  8. Brandon Grotesque
  9. GT Walsheim
  10. Circular

And here are the ten most popular from 2014:

  1. Brandon Grotesque
  2. Futura
  3. Avenir
  4. Aperçu
  5. Proxima Nova
  6. Franklin Gothic
  7. GT Walsheim
  8. Gotham
  9. Circular
  10. Caslon

Notice any similarities? Nine out of the ten fonts from 2014 made the top ten again in 2015. Admittedly, Typewolf is a curated showcase, so there is bound to be some bias in the site selection process. But with 365 sites featured in a year, I think Typewolf is a solid representation of what is popular in the design community.

Other lists of popular fonts show similar results. Or simply look around the web and take a peek at the CSS—Proxima Nova, Futura, and Brandon Grotesque dominate sites today. And these fonts aren’t just a little more popular than other fonts—they are orders of magnitude more popular.

When it comes to typefaces, the rich get richer

I don’t mean to imply that type designers are getting rich like Fortune 500 CEOs and flying around to type conferences in their private Learjets (although some type designers are certainly doing quite well). I’m just pointing out that a tiny percentage of fonts get the lion’s share of usage and that these “chosen few” continue to become even more popular.

The rich get richer phenomenon (also known as the Matthew Effect) refers to something that grows in popularity due to a positive feedback loop. An app that reaches number one in the App Store will receive press because it is number one, which in turn will give it even more downloads and even more press. Popularity breeds popularity. For a cogent book that discusses this topic much more eloquently than I ever could, check out Nicholas Taleb’s The Black Swan.

But back to typefaces.

Designers tend to copy other designers. There’s nothing wrong with that—designers should certainly try to build upon the best practices of others. And they shouldn’t be culturally isolated and unaware of current trends. But designers also shouldn’t just mimic everything they see without putting thought into what they are doing. Unfortunately, I think this is what often happens with typeface selection.

How does a typeface first become popular, anyway?

I think it all begins with a forward-thinking designer who takes a chance on a new typeface. She uses it in a design that goes on to garner a lot of attention. Maybe it wins an award and is featured prominently in the design community. Another designer sees it and thinks, “Wow, I’ve never seen that typeface before—I should try using it for something.” From there it just cascades into more and more designers using this “new” typeface. But with each use, less and less thought goes into why they are choosing that particular typeface. In the end, it’s just copying.

Or, a typeface initially becomes popular simply from being in the right place at the right time. When you hear stories about famous YouTubers, there is one thing almost all of them have in common: they got in early. Before the market is saturated, there’s a much greater chance of standing out; your popularity is much more likely to snowball. A few of the most popular typefaces on the web, such as Proxima Nova and Brandon Grotesque, tell a similar story.

The typeface Gotham skyrocketed in popularity after its use in Obama’s 2008 presidential campaign. But although it gained enormous steam in the print world, it wasn’t available as a web font until 2013, when the company then known as Hoefler & Frere-Jones launched its subscription web font service. Proxima Nova, a typeface with a similar look, became available as a web font early, when Typekit launched in 2009. Proxima Nova is far from a Gotham knockoff—an early version, Proxima Sans, was developed before Gotham—but the two typefaces share a related, geometric aesthetic. Many corporate identities used Gotham, so when it came time to bring that identity to the web, Proxima Nova was the closest available option. This pushed Proxima Nova to the top of the bestseller charts, where it remains to this day.

Brandon Grotesque probably gained traction for similar reasons. It has quite a bit in common with Neutraface, a typeface that is ubiquitous in the offline world—walk into any bookstore and you’ll see it everywhere. Brandon Grotesque was available early on as a web font with simple licensing, whereas Neutraface was not. If you wanted an art-deco-inspired geometric sans serif with a small x-height for your website, Brandon Grotesque was the obvious choice. It beat Neutraface to market on the web and is now one of the most sought-after web fonts.

Once a typeface reaches a certain level of popularity, it seems likely that a psychological phenomenon known as the availability heuristic kicks in. According to the availability heuristic, people place much more importance on things that they are easily able to recall. So if a certain typeface immediately comes to mind, then people assume it must be the best option.

For example, Proxima Nova is often thought of as incredibly readable for a sans serif due to its large x-height, low stroke contrast, open apertures, and large counters. And indeed, it works very well for setting body copy. However, there are many other sans serifs that fit that description—Avenir, FF Mark, Gibson, Texta, Averta, Museo Sans, Sofia, Lasiver, and Filson, to name a few. There’s nothing magical about Proxima Nova that makes it more readable than similar typefaces; it’s simply the first one that comes to mind for many designers, so they can’t help but assume it must be the best.

On top of that, the mere-exposure effect suggests that people tend to prefer things simply because they are more familiar with them—the more someone encounters Proxima Nova, the more appealing they tend to find it.

So if we are stuck in a positive feedback loop where popular fonts keep becoming even more popular, how do we break the cycle? There are a few things designers can do.

Strive to make your brand identifiable by just your body text

Even if it’s just something subtle, aim to make the type on your site unique in some way. If a reader can tell they are interacting with your brand solely by looking at the body of an article, then you are doing it right. This doesn’t mean that you should completely lose control and use type just for the sole purpose of standing out. Good type, some say, should be invisible. (Some say otherwise.) Show restraint and discernment. There are many small things you can do to make your type distinctive.

Besides going with a lesser-used typeface for your body text, you can try combining two typefaces (or perhaps three, if you’re feeling frisky) in a unique way. Headlines, dates, bylines, intros, subheads, captions, pull quotes, and block quotes all offer ample opportunity for experimentation. Try using heavier and lighter weights, italics and all-caps. Using color is another option. A subtle background color or a contrasting subhead color can go a long way in making your type memorable.
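To make that concrete, here is a rough sketch of what such a pairing might look like in CSS. The typefaces (Maiola and Adelle, both mentioned below), the class names, and the color values are purely illustrative, and the rules assume the fonts are already being served via a font service or @font-face.

<code>
/* A hypothetical pairing: typefaces, selectors, and colors are
   illustrative only, and assume the fonts are already loaded. */
body {
  font-family: "Maiola", Georgia, serif;   /* distinctive body text */
  font-size: 1.125rem;
  line-height: 1.6;
}

h1, h2 {
  font-family: "Adelle", Georgia, serif;   /* slab serif for headlines */
  font-weight: 700;
}

.subhead {
  font-family: "Adelle", Georgia, serif;
  font-weight: 300;                        /* lighter weight */
  text-transform: uppercase;               /* all-caps for contrast */
  letter-spacing: 0.06em;
  color: #9c4221;                          /* a contrasting subhead color */
}

blockquote {
  font-style: italic;                      /* italics for pull quotes */
}
</code>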

Don’t make your site look like a generic website template. Be a brand.

Dig deeper on Typekit

There are many other high-quality typefaces available on Typekit besides Proxima Nova and Brandon Grotesque. Spend some time browsing through their library and try experimenting with different options in your mockups. The free plan that comes with your Adobe Creative Cloud subscription gives you access to every single font in their library, so you have no excuse not to at least try to discover something that not everyone else is using.

A good tip is to start with a designer or foundry you like and then explore other typefaces in their catalog. For example, if you’re a fan of the popular slab serif Adelle from TypeTogether, simply click the name of their foundry and you’ll discover gems like Maiola and Karmina Sans. Don’t be afraid to try something that you haven’t seen used before.

Dig deeper on Google Fonts (but not too deep)

As of this writing, there are 708 font families available for free on Google Fonts. There are a few dozen or so really great choices. And then there are many, many more not-so-great choices that lack italics and additional weights and that are plagued by poor kerning. So, while you should be wary of digging too deep on Google Fonts, there are definitely some less frequently used options, such as Alegreya and Fira Sans, that can hold their own against any commercial font.
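If you want to experiment with those, pulling them in takes only a couple of lines. Here is a sketch using the standard Google Fonts CSS API via @import (a link tag in your page's head is generally better for performance); the weights listed are just examples.

<code>
/* A sketch using the standard Google Fonts CSS API. The @import must
   sit at the top of the stylesheet; weights shown are examples only. */
@import url("https://fonts.googleapis.com/css?family=Alegreya:400,400italic,700|Fira+Sans:400,700");

body {
  font-family: "Alegreya", Georgia, serif;
}

h1, h2, h3 {
  font-family: "Fira Sans", Helvetica, Arial, sans-serif;
}
</code>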

I fully support the open-source nature of Google Fonts and think that making good type accessible to the world for free is a noble mission. As time goes by, though, the good fonts available on Google Fonts will simply become the next Times New Romans and Arials—fonts that have become so overused that they feel like mindless defaults. So if you rely on Google Fonts, there will always be a limit to how unique and distinctive your brand can be.

Try another web font service such as Fonts.com, Cloud.typography or Webtype

It may have a great selection, but Typekit certainly doesn’t have everything. The Fonts.com library dwarfs the Typekit library, with over 40,000 fonts available. Hoefler & Co.’s high-quality collection of typefaces is only available through their Cloud.typography service. And Webtype offers selections not available on other services.

Self-host fonts from MyFonts, FontShop or Fontspring

Don’t be afraid to self-host web fonts. Serving fonts from your own website really isn’t that difficult and it’s still possible to have a fast-loading website if you self-host. I self-host fonts on Typewolf and my Google PageSpeed Insights scores are 90/100 for mobile and 97/100 for desktop—not bad for an image-heavy site.

MyFonts, FontShop, and Fontspring all offer self-hosting kits that are surprisingly easy to set up. Self-hosting also offers the added benefit of not having to rely on a third-party service that could potentially go down (and take your beautiful typography with it).
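If you have never done it before, the CSS involved is minimal. Here is a rough sketch of the @font-face rules a typical self-hosting kit generates; the family name and file paths are hypothetical placeholders for whatever files your kit actually provides.

<code>
/* A rough sketch of self-hosted @font-face rules. "Brand Sans" and the
   file paths are hypothetical; substitute the files from your kit. */
@font-face {
  font-family: "Brand Sans";
  src: url("/fonts/brand-sans-regular.woff2") format("woff2"),
       url("/fonts/brand-sans-regular.woff") format("woff");
  font-weight: 400;
  font-style: normal;
}

@font-face {
  font-family: "Brand Sans";
  src: url("/fonts/brand-sans-bold.woff2") format("woff2"),
       url("/fonts/brand-sans-bold.woff") format("woff");
  font-weight: 700;
  font-style: normal;
}

body {
  font-family: "Brand Sans", Helvetica, Arial, sans-serif;
}
</code>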

Explore indie foundries

Many small and/or independent foundries don’t make their fonts available through the major distributors, instead choosing to offer licensing directly through their own sites. In most cases, self-hosting is the only available option. But again, self-hosting isn’t difficult and most foundries will provide you with all the sample code you need to get up and running.

Here are some great places to start, in no particular order:

What about Massimo Vignelli?

Before I wrap this up, I think it’s worth briefly discussing famed designer Massimo Vignelli’s infamous handful-of-basic-typefaces advice (PDF). John Boardley of I Love Typography has written an excellent critique of Vignelli’s dogma. The main points are that humans have a constant desire for improvement and refinement; we will always need new typefaces, not just so that brands can differentiate themselves from competitors, but to meet the ever-shifting demands of new technologies. And a limited variety of type would create a very bland world.

No doubt there were those in the 16th century who shared Vignelli’s views. Every age is populated by those who think we’ve reached the apogee of progress… Vignelli’s beloved Helvetica … would never have existed but for our desire to do better, to progress, to create.
John Boardley, “The Vignelli Twelve”

Are web fonts the best choice for every website?

Not necessarily. There are some instances where accessibility and site speed considerations may trump branding—in that case, it may be best just to go with system fonts. Georgia is still a pretty great typeface, and so are newer system UI fonts like San Francisco, Roboto/Noto, and Segoe.
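For reference, a system font stack is nothing more than a font-family declaration that leans on whatever the operating system already ships, so no font files need to be downloaded at all. A minimal sketch (the .longform selector is just an illustrative example):

<code>
/* A sketch of a system font stack: San Francisco on Apple platforms
   (-apple-system / BlinkMacSystemFont), Segoe UI on Windows, Roboto and
   Noto on Android and Chrome OS, then a generic sans-serif fallback. */
body {
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI",
               Roboto, "Noto Sans", Helvetica, Arial, sans-serif;
}

/* Georgia still holds up well if a serif better suits the body copy. */
.longform {
  font-family: Georgia, "Times New Roman", serif;
}
</code>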

But if you’re working on a project where branding is important, don’t ignore the importance of type. We’re bombarded by more content now than at any other time in history; having a distinctive brand is more critical than ever.

90 percent of design is typography. And the other 90 percent is whitespace.
Jeffrey Zeldman, “The Year in Design”

As designers, ask yourselves: “Is this truly the best typeface for my project? Or am I just using it to be safe, or out of laziness? Will it make my brand memorable, or will my site blend in with every other site out there?” The choice is yours. Dig deep, push your boundaries, and experiment. There are thousands of beautiful and functional typefaces out there—go use them!

Categories: thinktime

Binh Nguyen: More PSYOPS, Social Systems, and More

Planet Linux Australia - Tue 17th May 2016 22:05
- I think that most people would agree that the best social systems revolve around the idea that we have fair and just laws. If the size of the security apparatus exceeds a certain point (which seems to be happening in a lot of places), are we certain that we have the correct laws and societal laws in place? If they can't convince through standard argumentation, then the policy is probably not
Categories: thinktime

Identity vs. logic

Seth Godin - Tue 17th May 2016 19:05
Before we start laying out the logical argument for a course of action, it's worth considering whether a logical argument is what's needed. It may be that the person you're engaging with cares more about symbols, about tribal identity, about...        Seth Godin
Categories: thinktime

Using video well

Seth Godin - Mon 16th May 2016 20:05
The web was built on words. And words, of course, are available to anyone who can type. They're cheap, easy to edit and incredibly powerful when used well. Today's internet, though, is built on video. Much more difficult to create...        Seth Godin
Categories: thinktime
