
thinktime

Squidthanks

Seth Godin - Wed 20th Aug 2014 04:08
Nine years ago last month, a few of us sat down in my office and started working on Squidoo. Since then, there have been billions of visits to our site, and many of you have clicked, written, and contributed to...
Categories: thinktime

Rusty Russell: POLLOUT doesn’t mean write(2) won’t block: Part II

Planet Linux Australia - Wed 20th Aug 2014 00:08

My previous discovery that poll() indicating an fd was writable didn’t mean write() wouldn’t block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; e.g., Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but it had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.

But write() is different. Presumably so that simple UNIX filters didn’t have to check the return value and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule. Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else on my laptop was making this mistake:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.

What’s worse is the Linux select() man page:

A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking. ... those in writefds will be watched to see if a write will not block...

And poll():

POLLOUT Writing now will not block.

Man page patches have been submitted…
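To make the rule concrete, here is a minimal C sketch (mine, not from the original post) of the pattern Linus describes: put the descriptor into non-blocking mode, treat POLLOUT purely as a hint, and handle short writes and EAGAIN in your own loop. The function name and error handling are illustrative only.

/* Sketch: non-blocking write loop where POLLOUT is only a hint. */
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

static int write_all_nonblocking(int fd, const char *buf, size_t len)
{
    /* Without O_NONBLOCK, write() may still block even after POLLOUT. */
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0)
        return -1;

    while (len > 0) {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };

        if (poll(&pfd, 1, -1) < 0) {
            if (errno == EINTR)
                continue;
            return -1;
        }

        ssize_t n = write(fd, buf, len);
        if (n < 0) {
            /* POLLOUT was a hint only: EAGAIN is still possible, so go
             * back to poll() rather than treating it as an error. */
            if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR)
                continue;
            return -1;
        }
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}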

Categories: thinktime

Dependence Day: The Power and Peril of Third-Party Solutions

a list apart - Wed 20th Aug 2014 00:08

“Why don’t we just use this plugin?” That’s a question I started hearing a lot in the heady days of the 2000s, when open-source CMSes were becoming really popular. We asked it optimistically, full of hope about the myriad solutions only a download away. As the years passed, we gained trustworthy libraries and powerful communities, but the graveyard of crufty code and abandoned services grew deep. Many solutions were easy to install, but difficult to debug. Some providers were eager to sell, but loath to support.

Years later, we’re still asking that same question—only now we’re less optimistic and even more dependent, and I’m scared to engage with anyone smart enough to build something I can’t. The emerging challenge for today’s dev shop is knowing how to take control of third-party relationships—and when to avoid them. I’ll show you my approach, which is to ask a different set of questions entirely.

A web of third parties

I should start with a broad definition of what it is to be third party: If it’s a person and I don’t compensate them for the bulk of their workload, they’re third party. If it’s a company or service and I don’t control it, it’s third party. If it’s code and my team doesn’t grasp every line of it, it’s third party.

The third-party landscape is rapidly expanding. GitHub has grown to almost 7 million users, and the WordPress plugin repo is approaching 1 billion downloads. Many of these solutions are easy for clients and competitors to implement; meanwhile, I’m still in the lab debugging my custom code. The idea of selling original work seems oddly…old-fashioned.

Yet with so many third-party options to choose from, there are more chances than ever to veer off-course.

What could go wrong?

At a meeting a couple of years ago, I argued against using an external service to power a search widget on a client project. “We should do things ourselves,” I said. Not long after this, on the very same project, I argued in favor of using a third party to consolidate RSS feeds into a single document. “Why do all this work ourselves,” I said, “when this problem has already been solved?” My inconsistency was obvious to everyone. Being dogmatic about not using a third party is no better than flippantly jumping in with one, and I had managed to do both at once!

But in one case, I believed the third party was worth the risk. In the other, it wasn’t. I just didn’t know how to communicate those thoughts to my team.

I needed, in the parlance of our times, a decision-making framework. To that end, I’ve been maintaining a collection of points to think through at various stages of engagement with third parties. I’ll tour through these ideas using the search widget and the RSS digest as examples.

The difference between a request and a goal

This point often reveals false assumptions about what a client or stakeholder wants. In the case of the search widget, we began researching a service that our client specifically requested. Fitted with ajax navigation, full-text searching, and automated crawls to index content, it seemed like a lot to live up to. But when we asked our clients what exactly they were trying to do, we were surprised: they were entirely taken by the typeahead functionality; the other features were of very little perceived value.

In the case of the RSS “smusher,” we already had an in-house tool that took an array of feed URLs and looped through them in order, outputting x posts per feed in some bespoke format. They’re too good for our beloved multi-feed widget, I thought. But the client actually had a distinctly different and worthwhile vision: they wanted x results from their array of sites in total, and they wanted them ordered by publication date, not grouped by site. I conceded.

It might seem like an obvious first step, but I have seen projects set off in the wrong direction because the end goal is unknown. In both our examples now, we’re clear about that and we’re ready to evaluate solutions.

To dev or to download

Before deciding to use a third party, I find that I first need to examine my own organization, often in four particular ways: strengths, weaknesses, betterment, and mission.

Strengths and weaknesses

The search task aligned well with our strengths because we had good front-end developers and were skilled at extending our CMS. So when asked to make a typeahead search, we felt comfortable betting on ourselves. Had we done it before? Not exactly, but we could think through it.

At that same time, backend infrastructure was a weakness for our team. We happened to have a lot of turnover among our sysadmins, and at times it felt like we weren’t equipped to hire that sort of talent. As I was thinking through how we might build a feed-smusher of our own, I felt like I was tempting a weak underbelly. Maybe we’d have to set up a cron job to poll the desired URLs, grab feed content, and store that on our servers. Not rocket science, but cron tasks in particular were an albatross for us.

Betterment of the team

When we set out to achieve a goal for a client, it’s more than us doing work: it’s an opportunity for our team to better themselves by learning new skills. The best opportunities for this are the ones that present challenging but attainable tasks, which create incremental rewards. Some researchers cite this effect as a factor in gaming addiction. I’ve felt this myself when learning new things on a project, and those are some of my favorite work moments ever. Teams appreciate this and there is an organizational cost in missing a chance to pay them to learn. The typeahead search project looked like it could be a perfect opportunity to boost our skill level.

Organizational mission

If a new project aligns well with our mission, we’re going to resell it many times. It’s likely that we’ll want our in-house dev team to iterate on it, tailoring it to our needs. Indeed, we’ll have the budget to do so if we’re selling it a lot. No one had asked us for a feed-smusher before, so it didn’t seem reasonable to dedicate an R&D budget to it. In contrast, several other clients were interested in more powerful site search, so it looked like it would be time well spent.

We’ve now clarified our end goals and we’ve looked at how these projects align with our team. Based on that, we’re doing the search widget ourselves, and we’re outsourcing the feed-smusher. Now let’s look more closely at what happens next for both cases.

Evaluating the unknown

The frustrating thing about working with third parties is that the most important decisions take place when we have the least information. But there are some things we can determine before committing. Familiarity, vitality, extensibility, branding, and Service Level Agreements (SLAs) are all observable from afar.

Familiarity: is there a provider we already work with?

Although we’re going to increase the number of third-party dependencies, we’ll try to avoid increasing the number of third-party relationships.

Working with a known vendor has several potential benefits: they may give us volume pricing. Markup and style are likely to be consistent between solutions. And we just know them better than we’d know a new service.

Vitality: will this service stick around?

The worst thing we could do is get behind a service, only to have it shut down next month. A service with high vitality will likely (and rightfully) brag about enterprise clients by name. If it’s open source, it will have a passionate community of contributors. On the other hand, it could be advertising a shutdown. More often, it’s somewhere in the middle. Noting how often the service is updated is a good starting point in determining vitality.

Extensibility: can this service adapt as our needs change?

Not only do we have to evaluate the core service, we have to see how extensible it is by digging into its API. If a service is extensible, it’s more likely to fit for the long haul.

APIs can also present new opportunities. For example, imagine selecting an email-marketing provider with an API that exposes campaign data. This might allow us to build a dashboard for campaign performance in our CMS—a unique value-add for our clients, and a chance to keep our in-house developers invested and excited about the service.

Branding: is theirs strong, or can you use your own?

White-labeling is the practice of reselling a service with your branding instead of that of the original provider. For some companies, this might make good sense for marketing. I tend to dislike white-labeling. Our clients trust us to make choices, and we should be proud to display what those choices are. Either way, you want to ensure you’re comfortable with the brand you’ll be using.

SLAs: what are you getting, beyond uptime?

For client-side products, browser support is a factor: every external dependency represents another layer that could abandon older browsers before we’re ready. There’s also accessibility. Does this new third-party support users with accessibility needs to the degree that we require? Perhaps most important of all is support. Can we purchase a priority support plan that offers fast and in-depth help?

In the case of our feed-smusher service, there was no solution that ran the table. The most popular solution actually had a shutdown notice! There were a couple of smaller providers available, but we hadn’t worked with either before. Browser support and accessibility were moot since we’d be parsing the data and displaying it ourselves. The uptime concern was also diminished because we’d be sure to cache the results locally. Anyway, with viable candidates in hand, we can move on to more productive concerns than dithering between two similar solutions.

Relationship maintenance

If someone else is going to do the heavy lifting, I want to assume as much of the remaining burden as possible. Piloting, data collection, documentation, and in-house support are all valuable opportunities to buttress this new relationship.

As exciting as this new relationship is, we don’t want to go dashing out of the gates just yet. Instead, we’ll target clients for piloting and quarantine them before unleashing it any further. Cull suggestions from team members to determine good candidates for piloting, garnering a mix of edge-cases and the norm.

If the third party happens to collect data of any kind, we should also have an automated way to import a copy of it—not just as a backup, but also as a cached version we can serve to minimize latency. If we are serving a popular dependency from a CDN, we want to send a local version if that call should fail.

If our team doesn’t have a well-traveled directory of provider relationships, the backstory can get lost. Let a few months pass, throw in some personnel turnover, and we might forget why we even use a service, or why we opted for a particular package. Everyone on our team should know where and how to learn about our third-party relationships.

We don’t need every team member to be an expert on the service, yet we don’t want to wait for a third-party support staff to respond to simple questions. Therefore, we should elect an in-house subject-matter expert. It doesn’t have to be a developer. We just need somebody tasked with monitoring the service at regular intervals for API changes, shutdown notices, or new features. They should be able to train new employees and route more complex support requests to the third party.

In our RSS feed example, we knew we’d read their output into our database. We documented this relationship in our team’s most active bulletin, our CRM software. And we made managing external dependencies a primary part of one team member’s job.

DIY: a third party waiting to happen?

Stop me if you’ve heard this one before: a prideful developer assures the team that they can do something themselves. It’s a complex project. They make something and the company comes to rely on it. Time goes by and the in-house product is doing fine, though there is a maintenance burden. Eventually, the developer leaves the company. Their old product needs maintenance, no one knows what to do, and since it’s totally custom, there is no such thing as a community for it.

Once you decide to build something in-house, how can you prevent that work from devolving into a resented, alien dependency? 

  • Consider pair-programming. What better way to ensure that multiple people understand a product, than to have multiple people build it?
  • “Job-switch Tuesdays.” When feasible, we have developers switch roles for an entire day. Literally, in our ticketing system, it’s as though one person is another. It’s a way to force cross-training without doubling the hours needed for a task.
  • Hold code reviews before new code is pushed. This might feel slightly intrusive at first, but that passes. If it’s not readable, it’s not deployable. If you have project managers with a technical bent, empower them to ask questions about the code, too.
  • Bring moldy code into light by displaying it as phpDoc, JSDoc, or similar.
  • Beware the big. Create hourly estimates in Fibonacci increments. As a project gets bigger, so does its level of uncertainty. The Fibonacci steps are biased against under-budgeting, and also provide a cue to opt out of projects that are too difficult to estimate. In that case, it’s likely better to toe-in with a third party instead of blazing into the unknown by yourself.

All of these considerations apply to our earlier example, the typeahead search widget. Most germane is the provision to “beware the big.” When I say “big,” I mean that relative to what usually works for a given team. In this case, it was a deliverable that felt very familiar in size and scope: we were being asked to extend an open-source CMS. If instead we had been asked to make a CMS, alarms would have gone off.

Look before you leap, and after you land

It’s not that third parties are bad per se. It’s just that the modern web team strikes me as a strange place: not only do we stand on the shoulders of giants, we do so without getting to know them first—and we hoist our organizations and clients up there, too.

Granted, there are many things you shouldn’t do yourself, and it’s possible to hurt your company by trying to do them—NIH is a problem, not a goal. But when teams err too far in the other direction, developers become disenfranchised, components start to look like spare parts, and clients pay for solutions that aren’t quite right. Using a third party versus staying in-house is a big decision, and we need to think hard before we make it. Use my line of questions, or come up with one that fits your team better. After all, you’re your own best dependency.

Categories: thinktime

One Step Ahead: Improving Performance with Prebrowsing

a list apart - Wed 20th Aug 2014 00:08

We all want our websites to be fast. We optimize images, create CSS sprites, use CDNs, cache aggressively, and gzip and minimize static content. We use every trick in the book.

But we can still do more. If we want faster outcomes, we have to think differently. What if, instead of leaving our users to stare at a spinning wheel, waiting for content to be delivered, we could predict where they wanted to go next? What if we could have that content ready for them before they even ask for it?

We tend to see the web as a reactive model, where every action causes a reaction. Users click, then we take them to a new page. They click again, and we open another page. But we can do better. We can be proactive with prebrowsing.

The three big techniques

Steve Souders coined the term prebrowsing (from predictive browsing) in one of his articles late last year. Prebrowsing is all about anticipating where users want to go and preparing the content ahead of time. It’s a big step toward a faster and less visible internet.

Browsers can analyze patterns to predict where users are going to go next, and start DNS resolution and TCP handshakes as soon as users hover over links. But to get the most out of these improvements, we can enable prebrowsing on our web pages, with three techniques at our disposal:

  • DNS prefetching
  • Resource prefetching
  • Prerendering

Now let’s dive into each of these separately.

DNS prefetching

Whenever we know our users are likely to request a resource from a different domain than our site, we can use DNS prefetching to warm the machinery for opening the new URL. The browser can pre-resolve the DNS for the new domain ahead of time, saving several milliseconds when the user actually requests it. We are anticipating, and preparing for an action.

Modern browsers are very good at parsing our pages, looking ahead to pre-resolve all necessary domains ahead of time. Chrome goes as far as keeping an internal list with all related domains every time a user visits a site, pre-resolving them when the user returns (you can see this list by navigating to chrome://dns/ in your Chrome browser). However, sometimes access to new URLs may be hidden behind redirects or embedded in JavaScript, and that’s our opportunity to help the browser.

Let’s say we are downloading a set of resources from the domain cdn.example.com using a JavaScript call after a user clicks a button. Normally, the browser would have to resolve the DNS at the time of the click, but we can speed up the process by including a dns-prefetch directive in the head section of our page:

<link rel="dns-prefetch" href="http://cdn.example.com">

Doing this informs the browser of the existence of the new domain, and it will combine this hint with its own pre-resolution algorithm to start a DNS resolution as soon as possible. The entire process will be faster for the user, since we are shaving off the time for DNS resolution from the operation. (Note that browsers do not guarantee that DNS resolution will occur ahead of time; they simply use our hint as a signal for their own internal pre-resolution algorithm.)

But exactly how much faster will pre-resolving the DNS make things? In your Chrome browser, open chrome://histograms/DNS and search for DNS.PrefetchResolution. You’ll see a table like this:

This histogram shows my personal distribution of latencies for DNS prefetch requests. On my computer, for 335 samples, the average time is 88 milliseconds, with a median of approximately 60 milliseconds. Shaving 88 milliseconds off every request our website makes to an external domain? That’s something to celebrate.

But what happens if the user never clicks the button to access the cdn.example.com domain? Aren’t we pre-resolving a domain in vain? We are, but luckily for us, DNS prefetching is a very low-cost operation; the browser will need to send only a few hundred bytes over the network, so the risk incurred by a preemptive DNS lookup is very low. That being said, don’t go overboard when using this feature; prefetch only domains that you are confident the user will access, and let the browser handle the rest.

Look for situations that might be good candidates to introduce DNS prefetching on your site:

  • Resources on different domains hidden behind 301 redirects
  • Resources accessed from JavaScript code
  • Resources for analytics and social sharing (which usually come from different domains)

DNS prefetching is currently supported on IE11, Chrome, Chrome Mobile, Safari, Firefox, and Firefox Mobile, which makes this feature widespread among current browsers. Browsers that don’t currently support DNS prefetching will simply ignore the hint, and DNS resolution will happen in a regular fashion.

Resource prefetching

We can go a little bit further and predict that our users will open a specific page in our own site. If we know some of the critical resources used by this page, we can instruct the browser to prefetch them ahead of time:

<link rel="prefetch" href="http://cdn.example.com/library.js">

The browser will use this instruction to prefetch the indicated resources and store them on the local cache. This way, as soon as the resources are actually needed, the browser will have them ready to serve.

Unlike DNS prefetching, resource prefetching is a more expensive operation; be mindful of how and when to use it. Prefetching resources can speed up our websites in ways we would never get by merely prefetching new domains—but if we abuse it, our users will pay for the unused overhead.

Let’s take a look at the average response size of some of the most popular resources on a web page, courtesy of the HTTP Archive:

On average, prefetching a script file (as we are doing in the example above) will cause 16kB to be transmitted over the network (not including the size of the request itself). This means that we save that 16kB download, plus server response time, which is amazing—provided the user later accesses the file. If the user never accesses the file, we have actually made the entire workflow slower by introducing an unnecessary delay.

If you decide to use this technique, prefetch only the most important resources, and make sure they are cacheable by the browser. Images, CSS, JavaScript, and font files are usually good candidates for prefetching, but HTML responses are not since they aren’t cacheable.

Here are some situations where, due to the likelihood of the user visiting a specific page, you can prefetch resources ahead of time:

  • On a login page, since users are usually redirected to a welcome or dashboard page after logging in
  • On each page of a linear questionnaire or survey workflow, where users are visiting subsequent pages in a specific order
  • On a multi-step animation, since you know ahead of time which images are needed on subsequent scenes

Resource prefetching is currently supported on IE11, Chrome, Chrome Mobile, Firefox, and Firefox Mobile. (To determine browser compatibility, you can run a quick browser test on prebrowsing.com.)

Prerendering

What about going even further and asking for an entire page? Let’s say we are absolutely sure that our users are going to visit the about.html page in our site. We can give the browser a hint:

<link rel="prerender" href="http://example.com/about.html">

This time the browser will download and render the page in the background ahead of time, and have it ready for the user as soon as they ask for it. The transition from the current page to the prerendered one would be instantaneous.

Needless to say, prerendering is the most risky and costly of these three techniques. Misusing it can cause major bandwidth waste—especially harmful for users on mobile devices. To illustrate this, let’s take a look at this chart, also courtesy of the HTTP Archive:

In June of this year, the average number of requests to render a web page was 96, with a total size of 1,808kB. So if your user ends up accessing your prerendered page, then you’ve hit the jackpot: you’ll save the time of downloading almost 2,000kB, plus server response time. But if you’re wrong and your user never accesses the prerendered page, you’ll make them pay a very high cost.

When deciding whether to prerender entire pages ahead of time, consider that Google prerenders the top results on its search page, and Chrome prerenders pages based on the historical navigation patterns of users. Using the same principle, you can detect common usage patterns and prerender target pages accordingly. You can also use it, just like resource prefetching, on questionnaires or surveys where you know users will complete the workflow in a particular order.

At this time, prerendering is only supported on IE11, Chrome, and Chrome Mobile. Neither Firefox nor Safari have added support for this technique yet. (And as with resource prefetching, you can check prebrowsing.com to test whether this technique is supported in your browser.)

A final word

Sites like Google and Bing are using these techniques extensively to make search instant for their users. Now it’s time for us to go back to our own sites and take another look. Can we make our experiences better and faster with prefetching and prerendering?

Browsers are already working behind the scenes, looking for patterns in our sites to make navigation as fast as possible. Prebrowsing builds on that: we can combine the insight we have on our own pages with further analysis of user patterns. By helping browsers do a better job, we speed up and improve the experience for our users.

Categories: thinktime

Andrew McDonnell: Unleashed GovHack – an Adelaide Adventure in Open Data

Planet Linux Australia - Tue 19th Aug 2014 22:08
Last month I attended Unleashed GovHack, our local contribution to the Australian GovHack hackathon. Essentially, GovHack is a chance for makers, hackers, designers, artists, and researchers to team up with government ‘data custodians’ and build proof of concept applications (web or mobile), software tools, video productions or presentations (data journalism) in a way that […]
Categories: thinktime

Lev Lafayette: A Source Installation of gzip

Planet Linux Australia - Tue 19th Aug 2014 21:08

GNU zip is a compression utility free from patented algorithms. Software patents are stupid, and patented compression algorithms are especially stupid.


Categories: thinktime

Slacktivism

Seth Godin - Tue 19th Aug 2014 19:08
This is far from a new phenomenon. Hundreds of years ago there were holier-than-thou people standing in the village square, wringing their hands, ringing their bells and talking about how urgent a problem was. They did little more than wring...
Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Main September 2014 Meeting: AGM + lightning talks

Planet Linux Australia - Tue 19th Aug 2014 18:08
Start: Sep 2 2014 19:00  End: Sep 2 2014 21:00  Location:

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Link:  http://luv.asn.au/meetings/map

AGM + lightning talks

Notice of LUV Annual General Meeting, 2nd September 2014, 19:00.

Linux Users of Victoria, Inc., registration number A0040056C, will be holding its Annual General Meeting at 7pm on Tuesday, 2nd September 2014, in the Buzzard Lecture Theatre, Trinity College.

The AGM will be held in conjunction with our usual September Main Meeting. As is customary, after the AGM business we will have a series of lightning talks by members on a recent Linux experience or project.

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus Parkville Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

September 2, 2014 - 19:00


Categories: thinktime

Michael Still: Juno nova mid-cycle meetup summary: slots

Planet Linux Australia - Tue 19th Aug 2014 18:08
If I had to guess what would be a controversial topic from the mid-cycle meetup, it would have to be this slots proposal. I was actually in a Technical Committee meeting when this proposal was first made, but I'm told there were plenty of people in the room keen to give this idea a try. Since the mid-cycle Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.



If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.



If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say 'no' more often, but that's not the intent, it's more about bringing focus to what we're reviewing, so that we can get patches through the process completely. There's nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.



The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term 'slot' was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).



The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute cores' focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn't always a single code review. If the VMware refactor was in a slot, for example, we might find that there were also ten code reviews associated with that single slot.



How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered 'ready for review' once the specification is merged, and the code is complete and ready for intensive code review.



What happens if code is in a slot and something goes wrong? Imagine if a proposer goes on vacation and stops responding to review comments. If that happened we would bump the code out of the slot, but would put it back on the backlog in the location dictated by its priority. In other words there is no penalty for being bumped, you just need to wait for a slot to reappear when you're available again.



We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I'm not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it's something I'm happy to talk about.



We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge amount of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, then it would help us to get people to work on those aspects, because they wouldn't spend time on a less interesting problem and then discover they can't even get their code reviewed. We even talked about having an alternating focus for Nova releases; we could have a release focused on paying down technical debt and stability, and then the next release focused on new features. The Linux kernel does something quite similar to this and it seems to work well for them.



Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren't important to OpenStack. Considering that right now we're not landing many features at all, this would be an improvement.



This proposal is obviously complicated, and everyone will have an opinion. We haven't really thought through all the mechanics fully, yet, and it's certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it's not clear how that process would work. Regardless, I feel like we need to be more systematic about what code we're trying to land. It's embarrassing how little has landed in Juno for Nova, and we need to be working on that. I would like to continue discussing this as a community to make sure that we end up with something that works well and that everyone is happy with.



This series is nearly done, but in the next post I'll cover the current status of the nova-network to neutron upgrade path.



Tags for this post: openstack juno nova mid-cycle summary review slots blueprint priority project management

Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: containers; Juno nova mid-cycle meetup summary: cells



Categories: thinktime

Valediction

a list apart - Mon 18th Aug 2014 22:08

When I first met Kevin Cornell in the early 2000s, he was employing his illustration talent mainly to draw caricatures of his fellow designers at a small Philadelphia design studio. Even in that rough, dashed-off state, his work floored me. It was as if Charles Addams and my favorite Mad Magazine illustrators from the 1960s had blended their DNA to spawn the perfect artist.

Kevin would deny that label, but artist he is. For there is a vision in his mind, a way of seeing the world, that is unlike anyone else’s—and he has the gift to make you see it too, and to delight, inspire, and challenge you with what he makes you see.

Kevin was part of a small group of young designers and artists who had recently completed college and were beginning to establish careers. Others from that group included Rob Weychert, Matt Sutter, and Jason Santa Maria. They would all go on to do fine things in our industry.

It was Jason who brought Kevin on as house illustrator during the A List Apart 4.0 brand overhaul in 2005, and Kevin has worked his strange magic for us ever since. If you’re an ALA reader, you know how he translates the abstract web design concepts of our articles into concrete, witty, and frequently absurd situations. Above all, he is a storyteller—if pretentious designers and marketers haven’t sucked all the meaning out of that word.

For nearly 10 years, Kevin has taken our well-vetted, practical, frequently technical web design and development pieces, and elevated them to the status of classic New Yorker articles. Tomorrow he publishes his last new illustrations with us. There will never be another like him. And for whatever good it does him, Kevin Cornell has my undying thanks, love, and gratitude.

Categories: thinktime

My Favorite Kevin Cornell

a list apart - Mon 18th Aug 2014 22:08

After 200 issues—yes, two hundred—Kevin Cornell is retiring from his post as A List Apart’s staff illustrator. Tomorrow’s issue will be the last one featuring new illustrations from him.

Sob.

For years now, we’ve eagerly awaited Kevin’s illustrations each issue, opening his files with all the patience of a kid tearing into a new LEGO set.

But after nine years and more than a few lols, it’s time to give Kevin’s beautifully deranged brain a rest.

We’re still figuring out what comes next for ALA, but while we do, we’re sending Kevin off the best way we know how: by sharing a few of our favorite illustrations. Read on for stories from ALA staff, past and present—and join us in thanking Kevin for his talent, his commitment, and his uncanny ability to depict seemingly any concept using animals, madmen, and circus figures.

Of all the things I enjoyed about working on A List Apart, I loved anticipating the reveal: seeing Kevin’s illos for each piece, just before the issue went live. Every illustration was always a surprise—even to the staff. My favorite, hands-down, was his artwork for “The Discipline of Content Strategy,” by Kristina Halvorson. In 2008, content was web design’s “elephant in the room” and Kevin’s visual metaphor nailed it. In a drawing, he encapsulated thoughts and feelings many had within the industry but were unable to articulate. That’s the mark of a master.

—Krista Stevens, Editor-in-chief, 2006–2012

In the fall of 2011, I submitted my first article to A List Apart. I was terrified: I didn’t know anyone on staff. The authors’ list read like a who’s who of web design. The archives were intimidating. But I had ideas, dammit. I hit send.

I told just one friend what I’d done. His eyes lit up. “Whoa. You’d get a Kevin Cornell!” he said.

Whoa indeed. I might get a Kevin Cornell?! I hadn’t even thought about that yet.

Like Krista, I fell in love with Kevin’s illustration for “The Discipline of Content Strategy”—an illustration that meant the world to me as I helped my clients see their own content elephants. The idea of having a Cornell of my own was exciting, but terrifying. Could I possibly write something worthy of his illustration?

Months later, there it was on the screen: little modular sandcastles illustrating my article on modular content. I was floored.

Now, after two years as ALA’s editor in chief, I’ve worked with Kevin through dozens of issues. But you know what? I’m just as floored as ever.

Thank you, Kevin, you brilliant, bizarre, wonderful friend.

—Sara Wachter-Boettcher, Editor-in-chief

It’s impossible for me to choose a favorite of Kevin’s body of work for ALA, because my favorite Cornell illustration is the witty, adaptable, humane language of characters and symbols underlying his years of work. If I had to pick a single illustration to represent the evolution of his visual language, I think it would be the hat-wearing nested egg with the winning smile that opened Andy Hagen’s “High Accessibility is Effective Search Engine Optimization.” An important article but not, perhaps, the juiciest title A List Apart has ever run…and yet there’s that little egg, grinning in his slightly dopey way.

If my memory doesn’t fail me, this is the second appearance of the nested Cornell egg—we saw the first a few issues before in Issue 201, where it represented the nested components of an HTML page. When it shows up here, in Issue 207, we realize that the egg wasn’t a cute one-off, but the first syllable of a visual language that we’ll see again and again through the years. And what a language! Who else could make semantic markup seem not just clever, but shyly adorable?

A wander through the ALA archives provides a view of Kevin’s changing style, but something visible only backstage was his startlingly quick progression from reading an article to sketching initial ideas in conversation with then-creative director Jason Santa Maria to turning out a lovely miniature—and each illustration never failed to make me appreciate the article it introduced in a slightly different way. When I was at ALA, Kevin’s unerring eye for the important detail as a reader astonished me almost as much as his ability to give that (often highly technical, sometimes very dry) idea a playful and memorable visual incarnation. From the very first time his illustrations hit the A List Apart servers he’s shared an extraordinary gift with its readers, and as a reader, writer, and editor, I will always count myself in his debt.

—Erin Kissane, Editor-in-chief, contributing editor, 1999–2009

So much of what makes Kevin’s illustrations work are the gestures. The way the figure sits a bit slouched, but still perched on gentle tippy toes, determinedly occupied pecking away on his phone. With just a few lines, Kevin captures a mood and moment anyone can feel.

—Jason Santa Maria, Former creative director

I’ve had the pleasure of working with Kevin on the illustrations for each issue of A List Apart since we launched the latest site redesign in early 2013. By working, I mean replying to his email with something along the lines of “Amazing!” when he sent over the illustrations every couple of weeks.

Prior to launching the new design, I had to go through the backlog of Kevin’s work for ALA and do the production work needed for the new layout. This bird’s eye view gave me an appreciation of the ongoing metaphorical world he had created for the magazine—the birds, elephants, weebles, mad scientists, ACME products, and other bits of amusing weirdness that breathed life into the (admittedly, sometimes) dry topics covered.

If I had to pick a favorite, it would probably be the illustration that accompanied the unveiling of the redesign, A List Apart 5.0. The shoe-shine man carefully working on his own shoes was the perfect metaphor for both the idea of design as craft and the back-stage nature of the profession—working to make others shine, so to speak. It was a simple and humble concept, and I thought it created the perfect tone for the launch.

—Mike Pick, Creative director

So I can’t pick one favorite illustration that Kevin’s done. I just can’t. I could prattle on about this, that, or that other one, and tell you everything I love about each of ’em. I mean, hell: I still have a print of the illustration he did for my very first ALA article. (The illustration is, of course, far stronger than the essay that follows it.)

But his illustration for James Christie’s excellent “Sustainable Web Design” is a perfect example of everything I love about Kevin’s ALA work: how he conveys emotion with a few deceptively simple lines; the humor he finds in contrast; the occasional chicken. Like most of Kevin’s illustrations, I’ve seen it whenever I reread the article it accompanies, and I find something new to enjoy each time.

It’s been an honor working alongside your art, Kevin—and, on a few lucky occasions, having my words appear below it.

Thanks, Kevin.

—Ethan Marcotte, Technical editor

Kevin’s illustration for Cameron Koczon’s “Orbital Content” is one of the best examples I can think of to show off his considerable talent. Those balloons are just perfect: vaguely reminiscent of cloud computing, but tethered and within arm’s reach, and evoking the fun and chaos of carnivals and county fairs. No other illustrator I’ve ever worked with is as good at translating abstract concepts into compact, visual stories. A List Apart won’t be the same without him.

—Mandy Brown, Former contributing editor

Kevin has always had what seems like a preternatural ability to take an abstract technical concept and turn it into a clear and accessible illustration.

For me, my favorite pieces are the ones he did for the 3rd anniversary of the original “Responsive Web Design” article…the web’s first “responsive” illustration? Try squishing your browser here to see it in action—Ed

—Tim Murtaugh, Technical director

I think it may be impossible for me to pick just one illustration of Kevin’s that I really like. Much like trying to pick your one favorite album or that absolutely perfect movie, picking a true favorite is simply folly. You can whittle down the choices, but it’s guaranteed that the list will be sadly incomplete and longer (much longer) than one.

If held at gunpoint, however ridiculous that sounds, and asked which of Kevin’s illustrations is my favorite, close to the top of the list would definitely be “12 Lessons for Those Afraid of CSS Standards.” It’s just so subtle, and yet so pointed.

What I personally love the most about Kevin’s work is the overall impact it can have on people seeing it for the first time. It has become commonplace within our ranks to hear the phrase, “This is my new favorite Kevin Cornell illustration” with the publishing of each issue. And rightly so. His wonderfully simple style (which is also deceptively clever and just so smart) paired with the fluidity that comes through in his brush work is magical. Case in point for me would be his piece for “The Problem with Passwords” which just speaks volumes about the difficulty and utter ridiculousness of selecting a password and security question.

We, as a team, have truly been spoiled by having him in our ranks for as long as we have. Thank you Kevin.

—Erin Lynch, Production manager

The elephant was my first glimpse at Kevin’s elegantly whimsical visual language. I first spotted it, a patient behemoth being studied by nonplussed little figures, atop Kristina Halvorson’s “The Discipline of Content Strategy,” which made no mention of elephants at all. Yet the elephant added to my understanding: content owners from different departments focus on what’s nearest to them. The content strategist steps back to see the entire thing.

When Rachel Lovinger wrote about “Content Modelling,” the elephant made a reappearance as a yet-to-be-assembled, stylized elephant doll. The unflappable elephant has also been the mascot of product development at the hands of a team trying to construct it from user research, strutted its stuff as curated content, enjoyed the diplomatic guidance of a ringmaster, and been impersonated by a snake to tell us that busting silos is helped by a better understanding of others’ discourse conventions.

The delight in discovering Kevin’s visual rhetoric doesn’t end there. With doghouses, birdhouses, and fishbowls, Kevin speaks of environments for users and workers. With owls he represents the mobile experience and smartphones. With a team arranging themselves to fit into a group photo, he makes the concept of responsive design easier to grasp.

Not only has Kevin trained his hand and eye to produce the gestures, textures, and compositions that are uniquely his, but he has trained his mind to speak in a distinctive visual language—and he can do it on deadline. That is some serious mastery of the art.

—Rose Weisburd, Columns editor

Categories: thinktime

Andrew Pollock: [life] Day 201: Kindergarten, some startup stuff, car wash and a trip to the vet ophthalmologist

Planet Linux Australia - Mon 18th Aug 2014 22:08

Zoe woke up at some point in the night and ended up in bed with me. I don't even remember when it was.

We got going reasonably quickly this morning, and Zoe wanted porridge for breakfast, so I made a batch of that in the Thermomix.

She was a little bit clingy at Kindergarten for drop off. She didn't feel like practising writing her name in her sign-in book. Fortunately Miss Sarah was back from having done her prac elsewhere, and Zoe had been talking about missing her on the way to Kindergarten, so that was a good distraction, and she was happy to hang out with her while I left.

I used the morning to do some knot practice for the rock climbing course I've signed up for next month. It was actually really satisfying doing some knots that previously I'd found to be mysterious.

I had a lunch meeting over at the Royal Brisbane Women's Hospital to bounce a startup idea off a couple of people, so I headed over there for a very useful lunch discussion and then briefly stopped off at home before picking up Zoe from Kindergarten.

Zoe had just woken up from a nap before I arrived, and was a bit out of sorts. She perked up a bit when we went to the car wash and had a babyccino while the car got cleaned. Sarah was available early, so I dropped Zoe around to her straight after that.

I'd booked Smudge in for a consult with a vet ophthalmologist to get her eyes looked at, so I got back home again, and crated her and drove to Underwood to see the vet. He said that she had the most impressive case of eyelid agenesis he'd ever seen. She also had persistent pupillary membranes in each eye. He said that the eyelid agenesis is a pretty common birth defect in what he called "dumpster cats", which for all we know, is exactly what Smudge (or more importantly, her mother) were. He also said that other eye defects, like the membranes she had, were common in cases where there was eyelid agenesis.

The surgical fix was going to come in at something in the order of $2,000 an eye, be a pretty long surgery, and involve some crazy transplanting of lip tissue. Cost aside, it didn't sound like a lot of fun for Smudge, and given her age and that she's been surviving the way she is, I can't see myself wanting to spend the money to put her through it. The significantly cheaper option is to just religiously use some lubricating eye gel.

After that, I got home with enough time to eat some dinner and then head out to crash my Thermomix Group Leader's monthly team meeting to see what it was like. I got a good vibe from that, so I'm happy to continue with my consultant application.

Categories: thinktime

Who named the colors?

Seth Godin - Mon 18th Aug 2014 19:08
We did. It's not a silly question. It has a lot to do with culture and crowds and the way we decide, as a group, what's right and what's not. A quick look at some colors confirms that there is...
Categories: thinktime

Craige McWhirter: Introduction to Managing OpenStack Via the CLI

Planet Linux Australia - Mon 18th Aug 2014 14:08
Assumptions:

Introduction:

There's a great deal of ugliness in OpenStack, but what I enjoy the most is the relative elegance of driving an OpenStack deployment from the comfort of my own workstation.

Once you've configured your workstation as an OpenStack Management Client, these are some of the commands you can run from your workstation against an OpenStack deployment.

There are client commands for each of the projects, which makes it rather simple to relate the commands you want to run to the service you need to work with, i.e.:

$ PROJECT --version

$ cinder --version
1.0.8
$ glance --version
0.12.0
$ heat --version
0.2.9
$ keystone --version
0.9.0
$ neutron --version
2.3.5
$ nova --version
2.17.0

Getting by With a Little Help From Your Friends

The first slice of CLI joy when using these OpenStack clients is the CLI help that is available for each of them. When a client is called with --help, a comprehensive list of options and subcommands is dumped to STDOUT, which is useful but not unexpected.

The question you usually find yourself asking is, "How do I use those subcommands?", which is answered by utilising the following syntax:

$ PROJECT help subcommand

This will dump all the arguments for the specified subcommand to STDOUT. I've used the example below for its brevity:

$ keystone help user-create
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]

Create new user

Arguments:
  --name <user-name>      New user name (must be unique).
  --tenant <tenant>, --tenant-id <tenant>
                          New user default tenant.
  --pass [<pass>]         New user password; required for some auth backends.
  --email <email>         New user email address.
  --enabled <true|false>  Initial user enabled status. Default is true.

Getting Behind the Wheel

Before you can use these commands, you will need to set some appropriate environment variables. When you completed configuring your workstation as an OpenStack Management Client, you will have created a short file that sets the username, password, tenant name and authentication URL for your OpenStack clients. Now is the time to source that file:

$ source <username-tenant>.sh

I have one of these for each OpenStack deployment, user account and tenant that I wish to work with, and I source the relevant one before I commence a body of work.

Turning the Keystone

Keystone provides the authentication service. Assuming you have appropriate privileges, you will need to:

Create a Tenant (referred to as a Project via the Web UI).

$ keystone tenant-create --name DemoTenant --description "Don't forget to \
delete this tenant"
+-------------+------------------------------------+
|   Property  |                Value               |
+-------------+------------------------------------+
| description | Don't forget to delete this tenant |
|   enabled   |                True                |
|      id     |  painguPhahchoh2oh7Oeth2jeh4ahMie  |
|     name    |             DemoTenant             |
+-------------+------------------------------------+

$ keystone tenant-list
+----------------------------------+----------------+---------+
|                id                |      name      | enabled |
+----------------------------------+----------------+---------+
| painguPhahchoh2oh7Oeth2jeh4ahMie |   DemoTenant   |   True  |
+----------------------------------+----------------+---------+

Create / add a user to that tenant.

$ keystone user-create --name DemoUser --tenant DemoTenant --pass \
  Tahh9teih3To --email demo.tenant@example.tld
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |     demo.tenant@example.tld      |
| enabled  |               True               |
|    id    | ji5wuVaTouD0ohshoChoohien3Thaibu |
|   name   |             DemoUser             |
| tenantId | painguPhahchoh2oh7Oeth2jeh4ahMie |
| username |             DemoUser             |
+----------+----------------------------------+

$ keystone user-role-list --user DemoUser --tenant DemoTenant
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| eiChu2Lochui7aiHu5OF2leiPhai6nai | _member_ | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
+----------------------------------+----------+----------------------------------+----------------------------------+

Provide that user with an appropriate role (new users default to _member_).

$ keystone user-role-add --user DemoUser --role admin --tenant DemoTenant
$ keystone user-role-list --user DemoUser --tenant DemoTenant
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| eiChu2Lochui7aiHu5OF2leiPhai6nai | _member_ | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
| ieDieph0iteidahjuxaifi6BaeTh2Joh |  admin   | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
+----------------------------------+----------+----------------------------------+----------------------------------+

Taking a Glance at Images

Glance provides the service for discovering, registering and retrieving virtual machine images. It is via Glance that you will be uploading VM images to OpenStack. Here's how you can upload a pre-existing image to Glance:

Note: If your back end is Ceph then the images must be in RAW format.
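If you do need to convert an existing qcow2 image to raw first, qemu-img can do that (the file names below are just examples); you would then pass --disk-format raw when uploading:

$ qemu-img convert -f qcow2 -O raw /tmp/debian-7-amd64-vm.qcow2 /tmp/debian-7-amd64-vm.raw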

$ glance image-create --name DemoImage --file /tmp/debian-7-amd64-vm.qcow2 \
  --progress --disk-format qcow2 --container-format bare \
  --checksum 05a0b9904ba491346a39e18789414724
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 05a0b9904ba491346a39e18789414724     |
| container_format | bare                                 |
| created_at       | 2014-08-13T06:00:01                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | mie1iegauchaeGohghayooghie3Zaichd1e5 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | DemoImage                            |
| owner            | gei6chiC3hei8oochoquieDai9voo0ve     |
| protected        | False                                |
| size             | 2040856576                           |
| status           | active                               |
| updated_at       | 2014-08-13T06:00:28                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

$ glance image-list
+--------------------------------------+-----------+-------------+------------------+------------+--------+
| ID                                   | Name      | Disk Format | Container Format | Size       | Status |
+--------------------------------------+-----------+-------------+------------------+------------+--------+
| mie1iegauchaeGohghayooghie3Zaichd1e5 | DemoImage | qcow2       | bare             | 2040856576 | active |
+--------------------------------------+-----------+-------------+------------------+------------+--------+

Starting Something New With Nova

Build yourself an environment file with the new credentials:

export OS_USERNAME=DemoUser
export OS_PASSWORD=Tahh9teih3To
export OS_TENANT_NAME=DemoTenant
export OS_AUTH_URL=http://horizon.my.domain.tld:35357/v2.0

Then source it.

By now you just want a VM, so let's knock one up for the user and tenant you just created:

$ nova boot --flavor m1.small --image DemoImage DemoVM
+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000009f                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | W3kd5WzYD2tE                                     |
| config_drive                         |                                                  |
| created                              | 2014-08-13T06:41:19Z                             |
| flavor                               | m1.small (2)                                     |
| hostId                               |                                                  |
| id                                   | 248a247d-83ff-4a52-b9b4-4b3961050e94             |
| image                                | DemoImage (d51001c2-bfe3-4e8a-86d8-e2e35898c0f3) |
| key_name                             | -                                                |
| metadata                             | {}                                               |
| name                                 | DemoVM                                           |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | c6c88f8dbff34b60b4c8e7fad1bda869                 |
| updated                              | 2014-08-13T06:41:19Z                             |
| user_id                              | cb040e80138c4374b46f4d31da38be68                 |
+--------------------------------------+--------------------------------------------------+

Now you can use nova show DemoVM to find the IP address of the host, or access the console via the Horizon dashboard.
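For example, something like the following will confirm the build has finished and show the instance's details, including its IP address (the exact fields you see will depend on your deployment's networking setup):

# Confirm the instance has gone from BUILD to ACTIVE
$ nova list

# Dump the instance's properties; the "... network" row contains its IP address
$ nova show DemoVM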

Categories: thinktime

Michael Still: Juno nova mid-cycle meetup summary: scheduler

Planet Linux Australia - Mon 18th Aug 2014 13:08
This post is in a series covering the discussions at the Juno Nova mid-cycle meetup. This post will cover the current state of play of our scheduler refactoring efforts. The scheduler refactor has been running for a fair while now, dating back to at least the Hong Kong summit (so about 1.5 release cycles ago).



The original intent of the scheduler sub-team's effort was to pull the scheduling code out of Nova so that it could be rapidly iterated on its own, with the eventual goal being to support a single scheduler across the various OpenStack services. For example, the scheduler that makes placement decisions about your instances could also be making decisions about the placement of your storage resources and could therefore ensure that they are co-located as much as possible.



During this process we realized that a big bang replacement is actually much harder than we thought, and the plan has morphed into being a multi-phase effort. The first step is to make the interface for the scheduler more clearly defined inside the Nova code base. For example, in previous releases, it was the scheduler that launched instances: the API would ask the scheduler to find available hypervisor nodes, and then the scheduler would instruct those nodes to boot the instances. We need to refactor this so that the scheduler picks a set of nodes, but then the API is the one which actually does the instance launch. That way, when the scheduler does move out it's not trusted to perform actions that change hypervisor state, and the Nova code does that for it. This refactoring work is under way, along with work to isolate the SQL database accesses inside the scheduler.



I would like to set expectations that this work is what will land in Juno. It has little visible impact for users, but positions us to better solve these problems in Kilo.



We discussed the need to ensure that any new scheduler is at least as fast and accurate as the current one. Jay Pipes has volunteered to work with the scheduler sub-team to build a testing framework to validate this work. Jay also has some concerns about the resource tracker work that is being done at the moment that he is going to discuss with the scheduler sub-team. Since the mid-cycle meetup there has been a thread on the openstack-dev mailing list about similar resource tracker concerns (here), which might be of interest to people interested in scheduler work.



We also need to test our assumption at some point that other OpenStack services such as Neutron and Cinder would be even willing to share a scheduler service if a central one was implemented. We believe that Neutron is interested, but we shouldn't be surprising our fellow OpenStack projects by just appearing with a complete solution. There is a plan to propose a cross-project session at the Paris summit to cover this work.



In the next post in this series we'll discuss possibly the most controversial part of the mid-cycle meetup: the proposal for "slots" for landing blueprints during Kilo.



Tags for this post: openstack juno nova mid-cycle summary scheduler

Related posts: Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Juno nova mid-cycle meetup summary: cells; Michael's surprisingly unreliable predictions for the Havana Nova release



Categories: thinktime

Michael Still: Juno nova mid-cycle meetup summary: bug management

Planet Linux Australia - Mon 18th Aug 2014 13:08
Welcome to the next exciting installment of the Nova Juno mid-cycle meetup summary. In the previous chapter, our hero battled a partially complete cells implementation, by using his +2 smile of good intentions. In this next exciting chapter, watch him battle our seemingly never ending pile of bugs! Sorry, now that I'm on to my sixth post in this series I feel like it's time to get more adventurous in the introductions.



For at least the last cycle, and probably longer, Nova has been struggling with the number of bugs filed in Launchpad. I don't think the problem is that Nova has terrible code; it is instead that we have a lot of users filing bugs, and the team working on triaging and closing bugs is small. The complexity of the deployment options with Nova makes this problem worse, and that complexity increases as we allow new drivers for things like different storage engines to land in the code base.



The increasing number of permutations possible with Nova configurations is a problem for our CI systems as well, as we don't cover all of these options and this sometimes leads us to discover that they don't work as expected in the field. CI is a tangent from the main intent of this post though, so I will reserve further discussion of our CI system until a later post.



Tracy Jones and Joe Gordon have been doing good work in this cycle trying to get a grip on the state of the bugs filed against Nova. For example, a very large number of bugs (hundreds) were for problems we'd fixed, but where the bug bot had failed to close the bug when the fix merged. Many other bugs were waiting for feedback from users, but had been waiting for longer than six months. In both those cases the response was to close the bug, with the understanding that the user can always reopen it if they come back to talk to us again. Doing "quick hit" things like this has reduced our open bug count to about one thousand bugs. You can see a dashboard that Tracy has produced showing the state of our bugs at http://54.201.139.117/nova-bugs.html. I believe that Joe has been working towards moving this onto OpenStack-hosted infrastructure, but this hasn't happened yet.



At the mid-cycle meetup, the goal of the conversation was to try and find other ways to get our bug queue further under control. Some of the suggestions were largely mechanical, like tightening up our definitions of the confirmed (we agree this is a bug) and triaged (and we know how to fix it) bug states. Others were things like auto-abandoning bugs which are marked incomplete for more than 60 days without a reply from the person who filed the bug, or unassigning bugs when the review that proposed a fix is abandoned in Gerrit.



Unfortunately, we have more ideas for how to automate dealing with bugs than we have people writing that automation. If there's someone out there who wants to have a big impact on Nova but isn't sure where to begin, helping us out with this automation would be a super helpful way to get started. Let Tracy or me know if you're interested.



We also talked about having more targeted bug days. This was prompted by our last bug day being largely unsuccessful. Instead we're proposing that the next bug day have a really well defined theme, such as moving things from the "undecided" to the "confirmed" state, or similar. I believe the current plan is to run a bug day like this after J-3 when we're winding down from feature development and starting to focus on stabilization.



Finally, I would encourage people fixing bugs in Nova to do a quick search for duplicate bugs when they are closing a bug. I wouldn't be at all surprised to discover that there are many bugs where you can close duplicates at the same time with minimal effort.



In the next post I'll cover our discussions of the state of the current scheduler work in Nova.



Tags for this post: openstack juno nova mid-cycle summary bugs

Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers



Categories: thinktime

Andrew Pollock: [tech] Solar follow up

Planet Linux Australia - Mon 18th Aug 2014 11:08

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter, things are less exciting. It measures both how much power I'm buying from the grid and how much excess power I'm exporting back to it. So far, I've bought 32 kWh and exported 53 kWh of excess energy. Ideally I want to minimise the excess, because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to shift as much of my consumption as possible to the daylight hours, so that I'm using the energy rather than exporting it.

On a good day, it seems I'm generating about 10 kWh of energy.
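To put some very rough numbers on that (the 25c/kWh purchase price below is purely an assumption for illustration, with the feed-in rate taken as a third of it), shifting a good day's 10 kWh from export to self-consumption is worth about:

$ echo 'scale=2; 10 * (0.25 - 0.25/3)' | bc
1.70

or roughly $1.70 a day under those assumed rates.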

I'm still impatiently waiting for PowerOne to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.
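When that finally lands, the upload side could be as simple as a cron job firing curl at PVOutput's Add Status service. A sketch of what I have in mind (the API key, system ID, date, time and values are placeholders, and the parameter names are from PVOutput's API documentation as I understand it, with v1 being the day's generation in watt hours):

$ curl -H "X-Pvoutput-Apikey: <your-api-key>" \
       -H "X-Pvoutput-SystemId: <your-system-id>" \
       -d "d=20140818" -d "t=15:00" -d "v1=10000" \
       http://pvoutput.org/service/r2/addstatus.jsp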

Categories: thinktime

Sridhar Dhanapalan: Twitter posts: 2014-08-11 to 2014-08-17

Planet Linux Australia - Mon 18th Aug 2014 09:08
Categories: thinktime

Andrew Pollock: [life] Day 198: Dentist, play date and some Science Friday

Planet Linux Australia - Sun 17th Aug 2014 21:08

First up on Friday morning was Zoe's dentist appointment. When Sarah dropped her off, we jumped in the car straight away and headed out. It's a long way to go for a 10 minute appointment, but it's worth it for the "Yay! I can't wait!" reaction I got when I told her she was going a few days prior.

Having a positive view of dental care is something that's very important to me, as teeth are too permanent to muck around with. The dentist was very happy with her teeth, and sealed her back molars. Apparently it's all the rage now, as these are the ones that hang around until she's 12.

Despite this being her third appointment with this dentist, Zoe was feeling a bit shy this time, so she spent the whole time reclining on me in the chair. She otherwise handled the appointment like a trooper.

After we got home, I had a bit of a clean up before Zoe's friend from Kindergarten, Vaeda, and her Mum came over for lunch. The girls played together really nicely for a couple of hours afterwards.

I was flicking through 365 Science Experiments, looking for something physics-related for a change, when I happened on the perfect thing. The girls were already playing with a bunch of balloons that I'd blown up for them, so I just charged one up with static electricity, made their hair stand on end, and used it to pick up some torn-up paper. Easy.

After Vaeda left, we did the weekend grocery shop early, since we were going away for the bulk of the weekend.

It was getting close to time to start preparing dinner after that. Anshu came over, and we all had a nice dinner together.

Categories: thinktime

Escalators, elevators and the ferry

Seth Godin - Sun 17th Aug 2014 19:08
Escalators make people happy. They're ready when you are, there is almost never a line, and you can see progress happening the entire time. Elevators are faster, particularly for long distances, but we get frustrated when we just miss one,...         Seth Godin
Categories: thinktime


Subscribe to KatteKrab aggregator - thinktime