Planet Linux Australia
I started the day with a yoga class. It was good to be back. I missed last week's class because I was under the weather, and was really missing yoga. It was just me and one other student this morning, so it was nice.
I picked up Zoe from Sarah's place this morning. She was a bit wrecked from an outing to the Ekka yesterday with Sarah, but adamant that she wasn't tired, and still wanted to go to the Ekka with me today.
I decided that given she was up for it, and tomorrow's schedule didn't really permit going, it had to be today or bust, so I figured we'd just go and take it gently, and go home at the first sign of trouble.
The day worked out perfectly fine. We caught the train in, and stopped at the animal nursery first. Zoe got to hand feed some lambs, goats and calves, as well as hold a baby chicken.
The main goal for the day was the rides, so we headed over there, after I'd gotten propositioned by the Surf Life Savers for a raffle ticket, and located some fairy floss for Zoe. Suitably sugared up, we hit the kids rides area.
I'd prepurchased a $40 ride card, at a $5 discount, and tried to impress upon Zoe that once it was exhausted, we were done with the rides. She seemed to get that. I did less well with convincing her to check out everything on offer before we started blowing money on rides.
The first thing she wanted to go on was the Magic Circus, which they mercifully only charged us entry for one on. It was a pretty cool multi-level physical sensory sort of thing. It was fun to go with her.
After that we waited in line for an eternity for bungy trampoline. This was where I wished we'd scouted around a bit first, because we waited in line for ages for a single trampoline, where there were another four standing idle a bit further down. I used the wait to grab a bit of food and share it with Zoe.
Next, we went on the dodgem cars. That was heaps of fun. Zoe couldn't reach the pedal, but she could steer (with a little bit of help occasionally). She seems to really enjoy rides where she gets thrown around. She's going to be a total adrenaline junkie when she's bigger I think.
What I thought was going to be her last ride was the Big Bubble Bump, those big air-inflated balls on a wading pool. She was very keen for that one, and the line was short. She had lots of fun tumbling all over the place.
With a little bit of extra assistance, we managed to squeeze one more go on the Magic Circus out of the ride card, which made her very happy.
After the obligatory strawberry sundae, she was pretty much done, and we'd managed to avoid the rain, so we headed home. I thought she was going to fall asleep on the train on the way home, but she didn't, and perked up by the time we got home. I tried to convince her to nap in my bed while I read a book, but she ended up just playing with Smudge while I read for a bit.
After the quiet time, we went for a scooter ride around the block, via the Hawthorne Garage, to collect some produce for making a fresh batch of vegetable stock concentrate, and then Sarah arrived to pick Zoe up.
It was a really good day, and Zoe went really well. I bet she crashes tonight.
This post will cover the progress of the ironic nova driver. This driver is interesting as an example of a large contribution to the nova code base for a couple of reasons -- it's an official OpenStack project instead of a vendor driver, which means we should already have well aligned goals. The driver has been written entirely using our development process, so it's already been reviewed to OpenStack standards, instead of being a large code dump from a separate development process. Finally, it's forced us to think through what merging a non-trivial code contribution should look like, and I think that formula will be useful for later similar efforts, the Docker driver for example.
One of the sticking points with getting the ironic driver landed is exactly how upgrade for baremetal driver users will work. The nova team has been unwilling to just remove the baremetal driver, as we know that it has been deployed by at least a few OpenStack users -- the largest deployment I am aware of is over 1,000 machines. Now, this is unfortunate because the baremetal driver was always intended to be experimental. I think what we've learnt from this is that any driver which merges into the nova code base has to be supported for a reasonable period of time -- nova isn't the right place for experiments. Now that we have the stackforge driver model I don't think that's too terrible, because people can iterate quickly in stackforge, and when they have something stable and supportable they can merge it into nova. This gives us the best of both worlds, while providing a strong signal to deployers about what the nova team is willing to support for long periods of time.
The solution we came up with for upgrades from baremetal to ironic is that the deployer will upgrade to juno, and then run a script which converts their baremetal nodes to ironic nodes. This script is "off line" in the sense that we do not expect new baremetal nodes to be launchable during this process, nor after it is completed. All further launches would be via the ironic driver.
These nodes that are upgraded to ironic will exist in a degraded state. We are not requiring ironic to support their full set of functionality on these nodes, just the bare minimum that baremetal did, which is listing instances, rebooting them, and deleting them. Launch is excluded for the reasoning described above.
We have also asked the ironic team to help us provide a baremetal API extension which knows how to talk to ironic, but this was identified as a need fairly late in the cycle and I expect it to be a request for a feature freeze exception when the time comes.
The current plan is to remove the baremetal driver in the Kilo release.
Previously in this post I alluded to the review mechanism we're using for the ironic driver. What does that actually look like? Well, what we've done is ask the ironic team to propose the driver as a series of smallish (500 line) changes. These changes are broken up by functionality, for example the code to boot an instance might be in one of these changes. However, because of the complexity of splitting existing code up, we're not requiring a tempest pass on each step in the chain of reviews. We're instead only requiring this for the final member in the chain. This means that we're not compromising our CI requirements, while maximizing the readability of what would otherwise be a very large review. To stop the reviews from merging before we're comfortable with them, there's a marker review at the beginning of the chain which is currently -2'ed. When all the code is ready to go, I remove the -2 and approve that first review and they should all merge together.
In the next post I'll cover the state of adding DB2 support to nova.
Tags for this post: openstack juno nova mid-cycle summary ironic
Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: containers; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy; Thoughts from the PTL; Merged in Havana: fixed ip listing for single hosts
Nova has had container support for a while in the form of libvirt LXC. While it can be argued that this support isn't feature complete and needs more testing, it's certainly been around for a while. There is renewed interest in testing libvirt LXC in the gate, and a team at Rackspace appears to be working on this as I write this. We have already seen patches from this team as they fix issues they find on the way. There are no plans to remove libvirt LXC from nova at this time.
The plan going forward for LXC tempest testing is to add it as an experimental job, so that people reviewing libvirt changes can request the CI system to test LXC by using "check experimental". This hasn't been implemented yet, but will be advertised when it is ready. Once we've seen good stable results from this experimental check we will talk about promoting it to be a full blown check job in our CI system.
We have also had prototype support for Docker for some time, and by all reports Eric Windisch has been doing good work at getting this driver into a good place since it moved to stackforge. We haven't started talking about specifics for when this driver will return to the nova code base, but I think at this stage we're talking about Kilo at the earliest. The driver has CI now (although it's still working through stability issues, to my understanding) and is progressing well. I expect there to be a session at the Kilo summit in the nova track on the current state of this driver, and we'll decide whether to merge it back into nova then.
There was also representation from the containers sub-team at the meetup, and they spent most of their time in a break out room coming up with a concrete proposal for what container support should look like going forward. The plan looks a bit like this:
Nova will continue to support "lowest common denominator containers": by this I mean that things like the libvirt LXC and docker driver will be allowed to exist, and will expose the parts of containers that can be made to look like virtual machines. That is, a caller to the nova API should not need to know if they are interacting with a virtual machine or a container, it should be opaque to them as much as possible. There is some ongoing discussion about the minimum functionality we should expect from a hypervisor driver, so we can expect this minimum level of functionality to move over time.
The containers sub-team will also write a separate service which exposes a more full featured container experience. This service will work by taking a nova instance UUID, and interacting with an agent within that instance to create containers and manage them. This is interesting because it is the first time that a compute project will have an in operating system agent, although other projects have had these for a while. There was also talk about the service being able to start an instance if the user didn't already have one, or being able to declare an existing instance to be "full" and then create a new one for the next incremental container. These are interesting design issues, and I'd like to see them explored more in a specification.
This plan met with general approval within the room at the meetup, with the suggestion being that it move forward as a stackforge project as part of the compute program. I don't think much code has been implemented yet, but I hope to see something come of these plans soon. The first step here is to create some specifications for the containers service, which we will presumably create in the nova-specs repository for want of a better place.
Thanks for reading my second post in this series. In the next post I will cover progress with the Ironic nova driver.
Tags for this post: openstack juno nova mid-cycle summary containers docker lxc
Related posts: Juno nova mid-cycle meetup summary: social issues; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno Nova PTL Candidacy; Thoughts from the PTL; Merged in Havana: fixed ip listing for single hosts; Merged in Havana: configurable iptables drop actions in nova
First off, some words about the mechanics of the meetup. The meetup was held in Beaverton, Oregon at an Intel campus. Many thanks to Intel for hosting the event -- it is much appreciated. We discussed possible locations and attendance for future mid-cycle meetups, and the consensus is that these events should "always" be in the US because that's where the vast majority of our developers are. We will consider other host countries when the mix of Nova developers changes. Additionally, we talked about the expectations of attendance at these events. The Icehouse mid-cycle was an experiment, but now that we've run two of these I think they're clearly useful events. I want to be clear that we expect nova-drivers members to attend these events if at all possible, and strongly prefer to have all nova-cores at the event.
I understand that sometimes life gets in the way, but that's the general expectation. To assist with this, I am going to work on advertising these events much earlier than we have in the past to give time for people to get travel approval. If any core needs me to go to the Foundation and ask for travel assistance, please let me know.
I think that co-locating the event with the Ironic and Containers teams helped us a lot this cycle too. We can't co-locate with every other team working on OpenStack, but I'd like to see us pick a couple of teams -- who we might be blocking -- each cycle and invite them to co-locate with us. It's easy at this point for Nova to become a blocker for other projects, and we need to be careful not to get in the way unless we absolutely need to.
The process was the same for each of the three days: we met at Intel at 9am, and started each day by trying to cherry-pick the most important topics from our grab bag of items at the top of the etherpad. I feel this worked really well for us.
We started off talking about core reviewer burnout, and what we expect from core. We've previously been clear that we expect a minimum level of reviews from cores, but we are increasingly concerned about keeping cores "on the same page". The consensus is that, at least, cores should be expected to attend summits. There is a strong preference for cores making it to the mid-cycle if at all possible. It was agreed that I will approach the OpenStack Foundation and request funding for cores who are experiencing budget constraints if needed. I was asked to communicate these thoughts on the openstack-dev mailing list. This openstack-dev mailing list thread is me completing that action item.
The conversation also covered whether it was reasonable to make trivial updates to a patch that was close to being acceptable. For example, consider a patch which is ready to merge apart from its commit message needing a trivial tweak. It was agreed that it is reasonable for the second core reviewer to fix the commit message, upload a new version of the patch, and then approve that for merge. It is a good idea to leave a note in the review history about this when these cases occur.
We expect cores to use their judgement about what is a trivial change.
I have an action item to remind cores that this is acceptable behavior. I'm going to hold off on sending that email for a little bit because there are a couple of big conversations happening about Nova on openstack-dev. I don't want to drown people in email all at once.
We also took a look at the Juno release, with j-3 rapidly approaching. One outcome was to try to find a way to focus reviewers on landing code that is a project priority. At the moment we signal priority with the priority field in the launchpad blueprint, which can be seen in action for j-3 here. However, high priority code often slips away because we currently let reviewers review whatever seems important to them.
There was talk about picking project sponsored "themes" for each release -- with the obvious examples being "stability" and "features". One problem here is that we haven't had a lot of luck convincing developers and reviewers to actually work on things we've specified as project goals for a release. The focus needs to move past specific features important to reviewers. Contributors and reviewers need to spend time fixing bugs and reviewing priority code. The harsh reality is that this hasn't been a glowing success.
One solution we're going to try is using more of the Nova weekly meeting to discuss the status of important blueprints. The meeting discussion should then be turned into a reminder on openstack-dev of the current important blueprints in need of review. The side effect of rearranging the weekly meeting is that we'll have less time for the current sub-team updates, but people seem ok with that.
A few people have also suggested various interpretations of a "review day". One interpretation is a rotation through nova-core of reviewers who spend a week of their time reviewing blueprint work. I think these ideas have merit. I have an action item to call for volunteers to sign up for blueprint-focused reviewing.
As I mentioned earlier, this is the first in a series of posts. In this post I've tried to cover the social aspects of nova -- the mechanics of the Nova Juno mid-cycle meetup, and reviewer burnout -- and our current position in the Juno release cycle. There was also discussion of how to manage our workload in Kilo, but I'll leave that for another post. It's already been alluded to on the openstack-dev mailing list in this post, and in the subsequent proposal in gerrit. If you're dying to know more about what we talked about, don't forget the relatively comprehensive notes in our etherpad.
Tags for this post: openstack juno nova mid-cycle summary core review social
Related posts: Michael's surprisingly unreliable predictions for the Havana Nova release; More reviews; Book reviews; Juno Nova PTL Candidacy; What US address should I give?; Working on review comments for Chapters 2, 3 and 4 tonight
RMIT Building 91, 110 Victoria Street, Carlton South. Link: http://luv.asn.au/meetings/map
MythTV is a free and open source home entertainment application with a simplified "10-foot user interface" design for the living-room TV, and turns a computer with the necessary hardware into a network streaming digital video recorder, a digital multimedia home entertainment system, or home theatre personal computer. It runs on various operating systems, primarily Linux, Mac OS X and FreeBSD.
This introduction to MythTV with live examples, will be presented by LUV Committee member Deb Henry.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
August 16, 2014 - 12:30
The attraction of pens is that I can churn out a pen in about 30 minutes, whereas a bowl can take twice that. Therefore when I have a small chance to play in the garage I'll do a pen, whereas when I have more time I might do a bowl.
Tags for this post: wood turning 20140718-woodturning photo
Tags for this post: wood turning 20140628-woodturning photo
First off, thanks for electing me as the Nova PTL for Juno. I find the outcome of the election both flattering and daunting. I'd like to thank Dan and John for running as PTL candidates as well -- I strongly believe that a solid democratic process is part of what makes OpenStack so successful, and that isn't possible without people being willing to stand up during the election cycle. I'm hoping to send out regular emails to this list with my thoughts about our current position in the release process. It's early in the cycle, so the ideas here aren't fully formed yet -- however I'd rather get feedback early and often, in case I'm off on the wrong path. What am I thinking about at the moment? The following things:

* a mid-cycle meetup. I think the Icehouse meetup was a great success, and I'd like to see us do this again in Juno. I'd also like to get the location and venue nailed down as early as possible, so that people who have complex travel approval processes have a chance to get travel sorted out. I think it's pretty much a foregone conclusion this meetup will be somewhere in the continental US. If you're interested in hosting a meetup in approximately August, please mail me privately so we can chat.

* specs review. The new blueprint process is a work of genius, and I think it's already working better than what we've had in previous releases. However, there are a lot of blueprints there in review, and we need to focus on making sure these get looked at sooner rather than later. I'd especially like to encourage operators to take a look at blueprints relevant to their interests. Phil Day from HP has been doing a really good job at this, and I'd like to see more of it.

* mentoring newcomers. I promised to look at mentoring newcomers. The first step there is working out which newcomers to mentor, and who mentors them. There's not a lot of point in mentoring someone who writes a single drive-by patch, so working out who to invest in isn't as obvious as it might seem at first. Discussing this process for identifying mentoring targets is a good candidate for a summit session, so have a ponder. However, if you have ideas let's get talking about them now instead of waiting for the summit.

* summit session proposals. The deadline for proposing summit sessions for Nova is April 20, which means we only have a little under a week to get that done. So, if you're sitting on a summit session proposal, now is the time to get it in.

* business as usual. We also need to find the time for bug fix code review, blueprint implementation code review, bug triage and so forth. Personally, I'm going to focus on bug fix code review more than I have in the past. I'd like to see cores spend 50% of their code review time reviewing bug fixes, to make the Juno release as solid as possible. However, I don't intend to enforce that, it's just me asking real nice.

Thanks for taking the time to read this email, and please do let me know if you think this sort of communication is useful.
Tags for this post: openstack juno ptl nova
Related posts: Juno Nova PTL Candidacy; Havana Nova PTL elections; Expectations of core reviewers; Merged in Havana: fixed ip listing for single hosts; Merged in Havana: configurable iptables drop actions in nova; Michael's surprisingly unreliable predictions for the Havana Nova release
Hi. I would like to run for the OpenStack Compute PTL position as well. I have been an active nova developer since late 2011, and have been a core reviewer for quite a while. I am currently serving on the Technical Committee, where I have recently been spending my time liaising with the board about how to define what software should be able to use the OpenStack trade mark. I've also served on the vulnerability management team, and as nova bug czar in the past. I have extensive experience running Open Source community groups, having served on the TC, been the Director for linux.conf.au 2013, as well as serving on the boards of various community groups over the years. In Icehouse I hired a team of nine software engineers who are all working 100% on OpenStack at Rackspace Australia, developed and deployed the turbo hipster third party CI system along with Joshua Hesketh, as well as writing nova code. I recognize that if I am successful I will need to rearrange my work responsibilities, and my management is supportive of that.

The future
--------------

To be honest, I've thought for a while that the PTL role in OpenStack is poorly named. Specifically, it's the T that bothers me. Sure, we need strong technical direction for our programs, but putting it in the title raises technical direction above the other aspects of the job. Compute at the moment is in an interesting position -- we're actually pretty good on technical direction and we're doing interesting things. What we're not doing well on is the social aspects of the PTL role. When I first started hacking on nova I came from an operations background where I hadn't written open source code in quite a while. I feel like I'm reasonably smart, but nova was certainly the largest python project I'd ever seen. I submitted my first patch, and it was rejected -- as it should have been. However, Vishy then took the time to sit down with me and chat about what needed to change, and how to improve the patch. That's really why I'm still involved with OpenStack: Vishy took an interest and was always happy to chat. I'm told by others that they have had similar experiences. I think that's what compute is lacking at the moment. For the last few cycles we've focused on the technical, and now the social aspects are our biggest problem. I think this is a pendulum, and perhaps in a release or two we'll swing back to needing to re-emphasise technical aspects, but for now we're doing poorly on social things. Some examples:

- we're not keeping up with code reviews because we're reviewing the wrong things. We have a high volume of patches which are unlikely to ever land, but we just reject them. So far in the Icehouse cycle we've seen 2,334 patchsets proposed, of which we approved 1,233. Along the way, we needed to review 11,747 revisions. We don't spend enough time working with the proposers to improve the quality of their code so that it will land. Specifically, whilst review comments in gerrit are helpful, we need to identify up and coming contributors and help them build a relationship with a mentor outside gerrit. We can reduce the number of reviews we need to do by improving the quality of initial proposals.

- we're not keeping up with bug triage, or worse, actually closing bugs. I think part of this is that people want to land their features, but part of it is also that closing bugs is super frustrating at the moment. It can take hours (or days) to replicate and then diagnose a bug. You propose a fix, and then it takes weeks to get reviewed. I'd like to see us tweak the code review process to prioritise bug fixes over new features for the Juno cycle. We should still land features, but we should obsessively track review latency for bug fixes. Compute fails if we're not producing reliable production grade code.

- I'd like to see us focus more on consensus building. We're a team after all, and when we argue about solely the technical aspects of a problem we ignore the fact that we're teaching the people involved a behaviour that will continue on. Ultimately if we're not a welcoming project that people want to code on, we'll run out of developers. I personally want to be working on compute in five years, and I want the compute of the future to be a vibrant, friendly, supportive place. We get there by modelling the behaviour we want to see in the future.

So, some specific actions I think we should take:

- when we reject a review from a relatively new contributor, we should try and pair them up with a more experienced developer to get some coaching. That experienced dev should take point on code reviews for the new person so that they receive low-latency feedback as they learn. Once the experienced dev is ok with a review, nova-core can pile on to actually get the code approved. This will reduce the workload for nova-core (we're only reviewing things which are of a known good standard), while improving the experience for new contributors.

- we should obsessively track review performance for bug fixes, and prioritise them where possible. Let's not ignore features, but let's agree that each core should spend at least 50% of their review time reviewing bug fixes.

- we should work on consensus building, and tracking the progress of large blueprints. We should not wait until the end of the cycle to re-assess the v3 API and discover we have concerns. We should be talking about progress in the weekly meetings and making sure we're all on the same page. Let's reduce the level of surprise. This also flows into being clearer about the types of patches we don't want to see proposed -- for example, if we think that patches that only change whitespace are a bad idea, then let's document that somewhere so people know before they put a lot of effort in.

Thanks for taking the time to read this email!
Tags for this post: openstack juno ptl nova election
Related posts: Havana Nova PTL elections; Thoughts from the PTL; Expectations of core reviewers; Merged in Havana: fixed ip listing for single hosts; Merged in Havana: configurable iptables drop actions in nova; Michael's surprisingly unreliable predictions for the Havana Nova release
Nova expects a minimum level of sustained code reviews from cores. In the past this has been generally held to be in the order of two code reviews a day, which is a pretty low bar compared to the review workload of many cores. I feel that existing cores understand this requirement well, and I am mostly stating it here for completeness.
Additionally, there are increasing levels of concern that cores need to be on the same page about the criteria we hold code to, as well as the overall direction of nova. While the weekly meetings help here, it was agreed that summit attendance is really important to cores. It's the way we decide where we're going for the next cycle, as well as a chance to make sure that people are all pulling in the same direction and trust each other.
There is also a strong preference for midcycle meetup attendance, although I understand that can sometimes be hard to arrange. My stance is that I'd like cores to try to attend, but understand that sometimes people will miss one. In response to the increasing importance of midcycles over time, I commit to trying to get the dates for these events announced further in advance.
Given that we consider these physical events so important, I'd like people to let me know if they have travel funding issues. I can then approach the Foundation about funding travel if that is required.
Tags for this post: openstack juno ptl nova
Related posts: Juno Nova PTL Candidacy; Thoughts from the PTL; Havana Nova PTL elections; Merged in Havana: fixed ip listing for single hosts; Merged in Havana: configurable iptables drop actions in nova; Michael's surprisingly unreliable predictions for the Havana Nova release
I'd also like to announce my TC candidacy. I am currently a member of the TC, and I would like to continue to serve. I first started hacking on Nova during the Diablo release, with my first code contributions appearing in the Essex release. Since then I've hacked mostly on Nova and Oslo, although I have also contributed to many other projects as my travels have required. For example, I've tried hard to keep various projects in sync with their imports of parts of Oslo I maintain. I work full time on OpenStack at Rackspace, leading a team of developers who work solely on upstream open source OpenStack. I am a Nova and Oslo core reviewer and the Nova PTL. I have been serving on the TC for the last year, and in the Icehouse release started acting as the liaison for the board "defcore" committee along with Anne Gentle. "defcore" is the board effort to define what parts of OpenStack we require vendors to ship in order to be able to use the OpenStack trade mark, so it involves both the board and the TC. That liaison relationship is very new and only starting to be effective now, so I'd like to keep working on that if you're willing to allow it.
Tags for this post: openstack juno tc election
Related posts: Juno Nova PTL Candidacy; Havana Nova PTL elections
Today should have been more focused on my real estate licence training than it was, but life intruded.
Zoe had a good sleep, almost bang on 11 hours. She was happy but very congested when she woke up. Today was pajama day at Kindergarten, and she was very excited. As a result, I got her to Kindergarten quite quickly.
After I got home, I gave Zoe's bunk bed drawers another coat of sealant and by the time I was done with that, I pretty much had to jump on a bus to the city for a lunch meeting. I got a tiny bit of work done on my next course assessment on the bus.
After my lunch meeting I jumped in a taxi back home to meet with my Thermomix Group Leader to give her my application to become a consultant and then it was time to pick up Zoe from Kindergarten.
I almost got to Kindergarten before I realised that in my haste, I hadn't grabbed a change of clothes for Zoe to do tennis in. Fortunately I'd repacked her spare clothes for Kindergarten that morning, so after some brief dithering, I decided to run with that set of clothes. It worked out okay.
I think with tennis, it's a battle between being hungry after Kindergarten and tired after Kindergarten. Zoe did a better job of being focused, but still was ready to pack it in a little bit before the end of class. Still fighting off a cold can't have helped either. Next week I'll try and remember to bring a quick snack for her to eat between Kindergarten and tennis.
Zoe was desperate for Megan to come over for a play date after tennis. Jason had some stuff to do first, so we came home, and Zoe watched a bit of TV, and then Jason dropped Megan off and dashed off to Bunnings for a bit.
I ended up having dinner ready by the time he was due back, so I suggested they all stay for dinner, which they did, and then they went home afterwards.
Zoe was pretty tired, so I got her to bed a bit early.
Putting that talk together reminded me about how far we have come in the last year both with the progress of WebRTC, its standards and browser implementations, as well as with our own small team at NICTA and our rtc.io WebRTC toolbox.
One of the most exciting opportunities is still under-exploited: the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and also later WebTorrent, that’s where I really started to get Web Developers excited about WebRTC. They can totally see the shift in paradigm to peer-to-peer applications away from the Server-based architecture of the current Web.
For those that – like myself – found it difficult to understand how to tap into the sheer power of npm modules as a front end developer, simply use browserify. npm modules are prepared following the CommonJS module definition spec. Browserify works natively with that and “compiles” all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag as you would in plain HTML. You can learn more about browserify, module definitions, and how to use browserify.
So, I hope you enjoy rtc.io and I hope you enjoy my slides and large collection of interesting links inside the deck, and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan – you’re an awesome team!
On a side note, I was really excited to meet the author of browserify, James Halliday (@substack) at WDCNZ, whose talk on “building your own tools” seemed to take me back to the times where everything was done on the command-line. I think James is using Node and the Web in a way that would appeal to a Linux Kernel developer. Fascinating!!
A while ago, I was contacted by MobileZap, a reseller of mobile phone accessories, and asked if I was interested in reviewing an iPhone zoom lens attachment. Unfortunately the widget only attached to the iPhone 5S - mine's a 5c - so I wasn't able to. I also mostly do wide angle photography (landscapes) so a zoom lens would be sort of wasted on me anyway.
When browsing the website, I did stumble across the olloclip wide-angle/fisheye/macro lens kit, which piqued my interest. When I mentioned this, they sent me one and asked me to write a blog about my experiences using it. Happily, I was about to go on a road trip past some very large holes in the ground where it would come in very handy indeed!
The (to give it its full and proper name) olloclip iPhone 5S / 5 Fisheye, Wide-angle, Macro Lens Kit comes in a fully recyclable plastic and paper package. It includes the phone adapter with lenses, an insert to make the adapter fit iPods and a fabric pouch to keep the lenses free of scratches when not in use. The pouch also doubles as a lens cloth.
First Light: Royal Park
I've been taking an image of Melbourne's CBD once a day (when I'm in the country) from the same spot in Royal Park for close to a year, so I thought I'd start by using the olloclip for the same image:
iPhone 5c standard.
iPhone 5c with olloclip wide angle lens.
iPhone 5c with the olloclip fisheye lens.
Oops, it turns out the lens kit doesn't really fit the iPhone 5c! The adapter is made for the 5s model and the rounded edge on the 5c means it doesn't slide all the way on, so the lens and the camera don't quite align. Mind you, a little bit of image editing to trim this image still results in something useable for blogs and twitter :-)
To give you an idea of the field of view of each of the lens adapters, I've stacked the three images on top of each other at the approximate same size:
Relative sizes of the fields of view of the olloclip wide angle and fish eye lenses.
Big Things: Road Trip
The main reason I agreed to review this lens kit was to play on the road trip, which was through the south eastern USA, from Los Angeles to Austin. Happily, that included a few choice large holes in the ground subjects for wide angle photography, as well as a friend with an iPhone 5S, on which the olloclip fit just fine.
Cathedral, Sedona, AZ. iPhone 5S with olloclip fish-eye lens.
Barringer Crater, Flagstaff, AZ. iPhone 5S with olloclip fish-eye lens.
Grand Canyon South Rim, Desert View, AZ. iPhone 5c with olloclip wide angle lens.
As you can see, the olloclip fits just fine on the iPhone 5S - there is no asymmetric distortion like there was in the iPhone 5c fish-eye image.
Small Things: Macro
The wide angle lens consists of two lenses, the top one of which you can unscrew and remove to make a macro lens. I didn't really have anything to take photos of, until a house move left me with a large pile of small change to sort through.
It turns out that some old Australian coins have minting errors or oversights, which make them sought after by collectors. Specifically, some of the 2 cent coins are missing the designer's initials (S.D.).
Here was a lovely way to try out the macro lens. It works fine as a magnifying glass, too!
Australian 2 cent piece with 'SD' initials (just left of the lizard's toe), iPhone 5c with olloclip macro lens.
Australian 2 cent piece without initials, iPhone 5c with olloclip macro lens.
Australian 1917 penny, iPhone 5c with olloclip macro lens.
The macro attachment has come in incredibly handy for close-up images of items to put on eBay. The fact that it doesn't quite fit the iPhone 5c has not been a hindrance, in the way it was for the fish-eye lens.
All in all, I've found the olloclip to be a nifty little attachment and nice to have handy.
I am not affiliated with olloclip or MobileZap. MobileZap provided me with a free olloclip to review.

Tags: Advertorial, photography, iPhone, Olloclip, iPhotography
I had grand plans of going for a run this morning before I got stuck into my day. The fact that I don't even remember my alarm going off, or turning it off and then waking up an hour later shows that I'm still not quite over my cold. I did feel better for the sleep in though.
After I got started, I applied the last coat of sealant to the drawers under Zoe's bed (that's a story for another time) and then decided to get stuck into cleaning the front balcony, which looks a bit like a craft shop exploded on it.
I stopped for lunch and then did some more mopping before it was time to pick up Zoe from Kindergarten.
After I picked her up, we popped over to Bunnings to grab some more sealant and a drop cloth, and also dropped past Overflow. I found the sort of cat water thing that I've been wanting for Smudge, and Zoe got all excited about a number 4 sparkler.
I'm also trying to track down a plush soccer ball-sized ball for kicking around the garage downstairs, without luck. We dropped into K Mart and couldn't find one, but Zoe wanted every doll in sight. We bumped into one of Zoe's Kindergarten teachers on the way out of the shopping centre. Zoe's very excited about pajama day tomorrow.
By the time we got home, it was pretty much time to start dinner. I finished off mopping the balcony first, while Zoe watched some TV.
Zoe had a really good dinner tonight. She tends to do pretty well with my Meatless Monday dinners. The zucchini fritters are always popular, and go well in her lunch box the next day too.
She's still a bit congested, but her cough is definitely improving.
Tags for this post: openstack pip cache devpi wheel
Related posts: Faster pip installs; Water, wheels, tyres (tires?) and computers; Wanted: a rear wheel bike computer; Dirty wheel rims
Lennart Regebro has provided me with a free copy of his self-published book "Porting to Python 3" so that I may review it. He has considerable experience in porting code from Python 2 to Python 3, and it shows in much of his advice and examples. Regebro has assembled an excellent cast of helpers: the technical reviewer Martin von Löwis also has much expertise with Python 3 (he implemented the first port of the popular Django web framework). Brett Cannon's introduction also provides a good starting point for someone new to the story of Python 3.
The book's structure is well thought-out. The first chapter immediately invites the reader to make sure they really need to port their code, and what considerations might be taken into account when deciding to do so. The book even has advice, backed up with useful information, for those who are currently unable to port their code but may start preparing for doing so in the future.
The second chapter discusses strategies for moving to Python 3 (view it online). These are presented clearly with one section per strategy. Within each section there are clear references to the other, more detailed chapters of the book which may be used to implement each strategy.
Once you've decided which strategy to apply you can focus on the pertinent remaining chapters. These follow a pretty logical progression:
- firstly preparing the way by making your code more modern,
- introducing the 2to3 tool for automatically converting code to Python 3 and the various options for how to use it,
- talking through some common migration problems and some simple solutions to them,
- presenting some modern Python (2.6+) idioms that your code may use in Python 2 and 3, and
- supporting Python 2 and 3 without using the 2to3 tool.
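To give a flavour of the kind of idioms in question, here are a few that run unchanged on Python 2.6+ and Python 3 (my own illustrative snippet, not an excerpt from the book):

```python
# Forward-compatible idioms (illustrative examples, not from the book).
from __future__ import print_function, division

print("print as a function works on 2.6+ and 3")
half = 7 / 2             # true division everywhere: 3.5
text = u"caf\xe9"        # explicit unicode literal (valid in 2.x and 3.3+)
items = list(range(3))   # wrap range() so both versions yield a real list
```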
The final two chapters are for more limited audiences and cover migrating C extensions and writing custom 2to3 tool fixers. Even though these are topics which most readers won't need to worry about they are still covered in good detail with minimum fuss. It is in this final chapter that Regebro's experience with porting gives weight to the advice and examples he presents.
Finally there are a couple of pretty comprehensive appendices covering the main language incompatibilities and library changes, which make the book a good reference.
As previously mentioned, the book is very well-organised giving the reader an easy path through the material depending on their situation. Throughout the book there are many pertinent and clear references to software written by others. The book suffers from some typesetting and grammatical errors, though these are minor and none could cause any confusion regarding the content.
There's a lot of information out there for porting from Python 2 to 3, but Regebro has produced a concise, well-organised and complete reference for doing so. He hasn't skimped on any detail though; the book covers all areas related to moving to Python 3. It's not just about porting code; it's also a handy book for any programmer who's grown up with Python 2 (or 1!) and is looking to move to Python 3.
(devpi is the caching proxy for PyPI which does a bunch of other things too but mostly just speeds up "pip install" and isolates you from network issues - or complete lack of connectivity)
devpi in 60 seconds
Step 1: create a virtualenv

    mkvirtualenv devpi

Step 2: install devpi

    pip install devpi

Step 3: run devpi

    devpi-server --start

Step 4: use devpi (noting the URL from the previous command output)

    devpi use --set-cfg [URL]/root/pypi

Step 5: profit!

    pip install yarg
The first installation will call out to the Internet but subsequent installs will use the cached version.
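Behind the scenes, "devpi use --set-cfg" simply points pip at the local index. The effect is roughly this pip.conf fragment (assuming devpi-server's default port of 3141 on localhost):

```ini
# ~/.pip/pip.conf - roughly what "devpi use --set-cfg" arranges;
# 3141 is devpi-server's default port.
[global]
index-url = http://localhost:3141/root/pypi/+simple/
```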
I needed support for "priv" instead of Fabric's built-in "sudo" support. I went through a number of (sometimes quite horrific) iterations before I settled on this relatively simple solution:

    import contextlib

    @contextlib.contextmanager
    def priv(user):
        '''Context manager to cause all run()'ed operations to be
        priv('user')'ed.

        Replaces env.shell with the priv command for the duration of the
        context.
        '''
        save_shell = env.shell
        env.shell = 'priv su - %s -c' % user
        yield
        env.shell = save_shell
This is then used in a fabfile like so:

    with priv('remote_user'):
        run('do some remote command as remote_user')
        run('another remote command as remote_user')
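The same save/restore pattern works outside Fabric too. Here's a self-contained sketch with a stand-in env object (and a try/finally so the shell is restored even if a command in the block raises, which the original version doesn't guard against):

```python
import contextlib


class _Env(object):
    """Stand-in for Fabric's global env object, just for illustration."""
    shell = '/bin/bash -l -c'


env = _Env()


@contextlib.contextmanager
def priv(user):
    """Run the enclosed commands through priv as `user`.

    try/finally guarantees env.shell is restored even when a command
    inside the block raises.
    """
    save_shell = env.shell
    env.shell = 'priv su - %s -c' % user
    try:
        yield
    finally:
        env.shell = save_shell


with priv('deploy'):
    inside = env.shell      # swapped to the priv wrapper inside the block
after = env.shell           # restored afterwards
```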
To let your deferreds run (and make sure they do finish) wrap your test functions in the "@deferred" decorator from nose.twistedtools:

    @deferred(1)
This will run the reactor for 1 second before declaring it hung.
If you want to see deferreds that are not behaving you will need to enable logging of Twisted events if they're not already enabled:

    defer.setDebugging(1)
    observer = log.PythonLoggingObserver()
    observer.start()
and then patch nose.twistedtools as follows:

    --- original/nose/twistedtools.py
    +++ installed/nose/twistedtools.py
    @@ -39,12 +39,19 @@
     _twisted_thread = None

    +_deferreds = []
    +
     def threaded_reactor():
         """
         Start the Twisted reactor in a separate thread, if not already done.
         Returns the reactor.
         The thread will automatically be destroyed when all the tests are done.
         """
    +    from twisted.internet import defer
    +
    +    def __init__(self):
    +        _deferreds.append(self)
    +    defer.DebugInfo.__init__ = __init__
    +
         global _twisted_thread
         try:
             from twisted.internet import reactor
    @@ -135,6 +142,8 @@
         def errback(failure):
             # Retrieve and save full exception info
             try:
    +            if issubclass(failure.type, StandardError):
    +                print failure
                 failure.raiseException()
             except:
                 q.put(sys.exc_info())
    @@ -156,6 +165,9 @@
         try:
             error = q.get(timeout=timeout)
         except Empty:
    +        for d in _deferreds:
    +            if hasattr(d, 'creator') and not hasattr(d, 'invoker'):
    +                print d._getDebugTracebacks()
             raise TimeExpired("timeout expired before end of test (%f s.)"
                               % timeout)
         # Re-raise all exceptions
(I tried a couple of times to figure out how trial does the above but failed miserably)
This will result in a display of all the deferreds still pending when the deferred decorator timeout fires. Note that the deferred decorator introduces a deferred that will not have terminated.
One outstanding problem I've not solved is that inlineCallbacks aren't displayed usefully in the failure debug output - you get told that it is some inline callback that is still hanging, but not where it was created.

Twisted Web and Selenium
Testing Twisted web using Selenium is made more complicated by the need to allow the Twisted reactor to run at the same time as the selenium webdriver is trying to poke at it.
In other applications I've tested like this I just run up the server being tested in a separate thread but I can't do that in the case of Twisted.
To make this work any selenium operation that will cause a Twisted reactor event must be deferred to a thread, hence the liberal sprinkling of deferToThread throughout this code. Some sections do not touch the reactor (for example, filling out a form once the form page has been loaded doesn't), so it can be straight selenium calls.
Note also that the pages are generated in a server dynamically using deferreds so I use a "done" tag marker at the end to indicate to the tests that the page has been fully generated. Selenium's timeout on element access allows me to cause the test to fail if the done marker does not appear.
For example this step function comes directly from a behave feature implementation:

    @then('the message should be sent')
    @deferred(1)
    @inlineCallbacks
    def step(context):
        # wait until we're all done
        context.browser.implicitly_wait(1)
        yield threads.deferToThread(
            lambda: get_element(context.browser, id="done"))

        # there should be a result container
        results = yield threads.deferToThread(
            lambda: find_elements(context.browser, id="result"))
        result = results[-1]
        status = yield threads.deferToThread(
            lambda: get_element(result, id="status"))

        # the status could be a number of values depending on
        # synchronisation of the tests / runner
        assert status.text in 'created new accepted done'.split()

        message = context.db.messages.values()
        if message.status != message.STATUS_DONE:
            yield esme._wait_until_done(context)
More tips as I happen to think of them or discover them :-)