Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Sridhar Dhanapalan: HTML5 support in Browse

Fri 01st Jul 2016 19:07

One of the most exciting improvements in OLPC OS 12.1.0 is a revamped Browse activity:

Browse, Wikipedia and Help have been moved from Mozilla to WebKit internally, as the Mozilla engine can no longer be embedded into other applications (like Browse) and Mozilla has stated officially that it is unsupported. WebKit has proven to be a far superior alternative and this represents a valuable step forward for Sugar’s future. As a user, you will notice faster activity startup time and a smoother browsing experience. Also, form elements on webpages are now themed according to the system theme, so you’ll see Sugar’s UI design blending more into the web forms that you access.

In short, the Web will be a nicer place on XOs. These improvements (and more!) will be making their way onto One Education XOs (such as those in Australia) in 2013.

Here are the results from the HTML5 Test using Browse 140 on OLPC OS 12.1.0 on an XO-1.75. The final score (345 and 15 bonus points) compares favourably against other Web browsers. Firefox 14 running on my Fedora 17 desktop scores 345 and 9 bonus points.

Update: Rafael Ortiz writes, “For the record, previous non-WebKit versions of Browse only got 187 points on html5test; my beta Chrome has 400 points, so it’s a great advance!”

Categories: thinktime

Sridhar Dhanapalan: Interviews from the field

Fri 01st Jul 2016 19:07

Oracle, a sponsor of OLPC Australia, have posted some video interviews of a child and a teacher involved in the One Education programme.

Categories: thinktime

Sridhar Dhanapalan: A Complete Literacy Experience For Young Children

Fri 01st Jul 2016 19:07

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set-up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, one fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. Neither reflects how young children learn the English alphabet or build literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I still was not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows that they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

Categories: thinktime

Sridhar Dhanapalan: XO-1 Training Pack

Fri 01st Jul 2016 19:07

Our One Education programme is growing like crazy, and many existing deployments are showing interest. We wanted to give them a choice of using their own XOs to participate in the teacher training, rather than requiring them to purchase new hardware. Many have developer-locked XO-1s, necessitating a different approach than our official One Education OS.

The solution is our XO-1 Training Pack. This is a reconfiguration of OLPC OS 10.1.3 to be largely consistent with our 10.1.3-au release. It has been packaged for easy installation.

Note that this is not a formal One Education OS release, and hence is not officially supported by OLPC Australia.

If you’d like to take part in the One Education programme, or have questions, use the contact form on the front page.

Update: We have a list of improvements in 10.1.3-au builds over the OLPC OS 10.1.3 release. Note that some features are not available in the XO-1 Training Pack owing to the lesser storage space available on XO-1 hardware. The release notes have been updated with more detail.

Update: More information on our One News site.

Categories: thinktime

Sridhar Dhanapalan: OLPC Australia Education Newsletter, Edition 9

Fri 01st Jul 2016 19:07

Edition 9 of the OLPC Australia Education Newsletter is now available.

In this edition, we provide a few classroom ideas for mathematics, profile the Jigsaw activity, de-mystify the Home views in Sugar and hear about the OLPC journey of Girraween Primary School.

To subscribe to receive future updates, send an e-mail to education-newsletter+subscribe@laptop.org.au.

Categories: thinktime

Maxim Zakharov: Apache + YAJL

Fri 01st Jul 2016 13:07

My answer to the Stan code challenge is here on GitHub: //github.com/Maxime2/stan-challenge. It is an example of how one can use a SAX-like streaming parser inside an Apache module to process JSON with minimal delay.

A custom-made Apache module saves some request-processing time by avoiding the invocation of an interpreter for another programming language (such as PHP, Python or Go). The streaming parser can start processing JSON as soon as the first buffer is filled with data, while the rest of the request is still in transmission. And again, since it is an Apache module, the response can start being constructed while the request is still being processed (and still transmitting).
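The module itself is C built on YAJL's per-token callbacks. As a rough illustration of the streaming idea only, here is a small Python sketch (all names hypothetical, and coarser-grained than YAJL: it fires a callback per complete top-level JSON value as buffers arrive, rather than per token):

```python
import json

class StreamingParser:
    """SAX-like streaming consumer: invoke a callback for each complete
    top-level JSON value as soon as it arrives, without waiting for the
    whole request body to be transmitted."""

    def __init__(self, on_value):
        self._decoder = json.JSONDecoder()
        self._buffer = ""
        self._on_value = on_value

    def feed(self, chunk):
        """Call once per network buffer as data trickles in."""
        self._buffer += chunk
        while True:
            text = self._buffer.lstrip()
            try:
                value, end = self._decoder.raw_decode(text)
            except ValueError:
                # Incomplete value so far: keep it for the next chunk.
                self._buffer = text
                return
            self._on_value(value)
            self._buffer = text[end:]
```

Feeding `'{"user": "al'` and then `'ice"} {"n": 1}'` delivers the two objects as soon as each is complete, which is the property that lets response construction begin mid-transmission.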

Categories: thinktime

sthbrx - a POWER technical blog: A Taste of IBM

Fri 01st Jul 2016 11:07

As a hobbyist programmer and Linux user, I was pretty stoked to be able to experience real work in the IT field that interests me most, Linux. With a mainly disconnected understanding of computer hardware and software, I braced myself to entirely relearn everything and anything I thought I knew. Furthermore, I worried that my usefulness in a world of maintainers, developers and testers would not be enough to provide any real contribution to the company. In actual fact however, the employees at OzLabs (IBM ADL) put a really great effort into making use of my existing skills, were attentive to my current knowledge and just filled in the gaps! The knowledge they've given me is practical, interlinked with hardware and provided me with the foot-up that I'd been itching for to establish my own portfolio as a programmer. I was both honoured and astonished by their dedication to helping me make a truly meaningful contribution!

On applying for the placement, I listed my skills and interests. Having a Mathematics, Science background, I listed among my greatest interests development of scientific simulation and graphics using libraries such as Python matplotlib and R. By the first day they got me to work, researching and implementing a routine in R that would qualitatively model the ability of a system to perform common tasks - a benchmark. A series of these microbenchmarks were made; I was in my element and actually able to contribute to a corporation much larger than I could imagine. The team at IBM reinforced my knowledge from the ground up, introducing the rigorous hardware and corporate elements at a level I was comfortable with.

I would say that my greatest single piece of take-home knowledge over the two weeks was knowledge of the Linux Kernel project, Git and GitHub. Having met the arch/powerpc and linux-next maintainers in person placed the Linux and Open Source development cycle in an entirely new perspective. I was introduced to the world of GitHub, and thanks to a few rigorous lessons of Git, I now have access to tools that empower me to safely and efficiently write code, and to build a public portfolio I can be proud of. Most members of the office donated their time to instruct me on all fronts, whether to do with career paths, programming expertise or conceptual knowledge, and the rest were all very good for a chat.

Approaching the tail-end of Year Twelve, I was blessed with some really good feedback and recommendations regarding further study. If during the two weeks I had any query regarding anything ranging from work-life to programming expertise even to which code editor I should use (a source of much contention) the people in the office were very happy to help me. Several employees donated their time to teach me really very intensive and long lessons regarding the software development concepts, including (but not limited to!) a thorough and helpful lesson on Git that was just on my level of understanding.

Working at IBM these past two weeks has not only bridged the gap between my hobby and my professional prospects, but more importantly established friendships with professionals in the field of Software Development. Without a doubt this really great experience of an environment that rewards my enthusiasm will fondly stay in my mind as I enter the next chapter of my life!

Categories: thinktime

Tridge on UAVs: Using X-Plane 10 with ArduPilot SITL

Fri 01st Jul 2016 10:07

ArduPilot has been able to use X-Plane as a HIL (hardware in the loop) backend for quite some time, but it never worked particularly well as the limitations of the USB interface to the hardware prevented good sensor timings.

We have recently added the ability to use X-Plane 10 as a SITL backend, which works much better. The SITL (software in the loop) system runs ArduPilot natively on your desktop machine, and talks to X-Plane directly using UDP packets.
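For a sense of what those UDP datagrams look like: the classic X-Plane 10 "DATA" packet is commonly documented as a 5-byte header followed by 36-byte records, each a 4-byte group index plus eight 32-bit floats. A small Python sketch (the index and values below are illustrative, not a specific X-Plane group):

```python
import socket
import struct

# X-Plane 10 "DATA" datagram layout (as commonly documented): a 5-byte
# header (b"DATA" plus one internal-use byte), then any number of
# 36-byte records: a little-endian int32 group index and eight float32s.
HEADER = b"DATA\x00"
RECORD = struct.Struct("<i8f")

def pack_data_packet(index, values):
    """Build a one-record DATA datagram, padding values to 8 floats."""
    vals = (list(values) + [0.0] * 8)[:8]
    return HEADER + RECORD.pack(index, *vals)

def unpack_data_packet(datagram):
    """Yield (index, values) for each record in a DATA datagram."""
    assert datagram[:4] == b"DATA"
    body = datagram[5:]
    for offset in range(0, len(body), RECORD.size):
        fields = RECORD.unpack_from(body, offset)
        yield fields[0], list(fields[1:])

# Sending to a simulator (49000 is X-Plane's usual UDP port; adjust to
# your setup):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(pack_data_packet(25, [0.8]), ("127.0.0.1", 49000))
```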

The above video demonstrates flying a Boeing 747-400 in X-Plane 10 using ArduPilot SITL. It flies nicely, and does an automatic takeoff and landing quite well. You can use almost any of the fixed wing aircraft in X-Plane with ArduPilot SITL, which opens up a whole world of simulation to explore. Many people create models of their own aircraft in order to test out how they will fly or to test them in conditions (such as very high wind) that may be dangerous to test with a real model.

I have written up some documentation on how to use X-Plane 10 with SITL to help people get started. Right now it only works with X-Plane 10 although I may add support for X-Plane 9 in the future.

Michael Oborne has added nice support for using X-Plane with SITL in the latest beta of MissionPlanner, and does nightly builds of the SITL binary for Windows. That avoids the need to build ArduPilot yourself if you just want to fly the standard code and not modify it yourself.

Limitations

There are some limitations to the X-Plane SITL backend. First off, X-Plane has quite slow network support. On my machine I typically get a sensor data rate of around 27Hz, which is far below the 1200 Hz we normally use for simulation. To overcome this the ArduPilot SITL code does sensor extrapolation to bring the rate up to around 900Hz, which is plenty for SITL to run. That extrapolation introduces small errors which can make the ArduPilot EKF state estimator unhappy. To avoid that problem we run with "EKF type 10" which is a fake AHRS interface that gets all state information directly from the simulator. That means you can't use the X-Plane SITL backend to test EKF settings.
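A minimal sketch of the extrapolation idea (ArduPilot's actual SITL code is more sophisticated; this is just linear extrapolation from the two most recent samples, with illustrative numbers):

```python
def extrapolate(samples, t):
    """Linearly extrapolate a sensor value to time t from the two most
    recent (time, value) samples. The small errors this introduces are
    the reason the real EKF is bypassed ("EKF type 10") in this mode."""
    (t0, v0), (t1, v1) = samples[-2], samples[-1]
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * (t - t1)

# X-Plane delivers roughly 27 Hz; fill in ~900 Hz ticks between updates:
samples = [(0.000, 10.0), (0.037, 10.5)]   # two real ~27 Hz samples
high_rate = [extrapolate(samples, 0.037 + i / 900.0) for i in range(3)]
```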

The next limitation is that the simulation fidelity depends somewhat on the CPU load on your machine. That is an unfortunate consequence of X-Plane not supporting lock-step scheduling. So you may notice that simulated aircraft on your machine may not fly identically to the same aircraft on someone elses machine. You can reduce this effect by lowering the graphics settings in X-Plane.

We can currently only get joystick input from X-Plane for aileron, elevator, rudder and throttle. It would be nice to support flight mode switches, flaps and other controls that are normally used with ArduPilot. That is probably possible, but isn't implemented yet. So if you want a full controller then you can instead connect a joystick to SITL directly instead of via X-Plane (for example using the MissionPlanner joystick module or the mavproxy joystick module).

Finally, we only support fixed wing aircraft in X-Plane at the moment. I have been able to fly a helicopter, but I needed to give manual collective control from a joystick as we don't yet have a way to provide collective pitch input over the X-Plane data interface.

Manned Aircraft and ArduPilot

Please don't assume that because ArduPilot can fly full sized aircraft in a simulator that you should use ArduPilot to fly real manned aircraft. ArduPilot is not suitable for manned applications and the development team would appreciate it if you did not try to use it for manned aircraft.

Happy Flying

I hope you enjoy flying X-Plane 10 with ArduPilot SITL!

Categories: thinktime

Russell Coker: Coalitions

Fri 01st Jul 2016 03:07

In Australia we are about to have a federal election, so we inevitably have a lot of stupid commentary and propaganda about politics.

One thing that always annoys me is the claim that we shouldn’t have small parties. We have two large parties, Liberal (right-wing, somewhat between the Democrats and Republicans in the US) and Labor which is somewhat similar to Democrats in the US. In the US the first past the post voting system means that votes for smaller parties usually don’t affect the outcome. In Australia we have Instant Runoff Voting (sometimes known as “The Australian Ballot”) which has the side effect of encouraging votes for small parties.
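The mechanics are easy to sketch. In this toy instant-runoff count (real AEC counting has many more rules, and the party names are purely illustrative), a first preference for a small party flows to a later preference once that party is eliminated, so the vote still influences the result rather than being wasted:

```python
from collections import Counter

def instant_runoff(ballots):
    """Minimal IRV count: ballots are preference-ordered candidate
    lists. Repeatedly eliminate the last-placed candidate, transferring
    those ballots to their next surviving preference, until someone
    holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        tally = Counter(next(c for c in b if c in remaining)
                        for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.discard(tally.most_common()[-1][0])

ballots = (
    [["Small", "Labor", "Liberal"]] * 2
    + [["Labor", "Small", "Liberal"]] * 4
    + [["Liberal", "Labor", "Small"]] * 5
)
# Under first-past-the-post "Liberal" leads on primaries (5 of 11), but
# once "Small" is eliminated its two ballots flow to "Labor", who wins.
```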

The Liberal party almost never wins enough seats to form government on its own, so it forms a coalition with the National party. Election campaigns are often based on the term “The Coalition” being used to describe a Liberal-National coalition, and the expected result if “The Coalition” wins the election is that the leader of the Liberal party will be Prime Minister and the leader of the National party will be the Deputy Prime Minister. Liberal party representatives and supporters often try to convince people that they shouldn’t vote for small parties and that small parties are somehow “undemocratic”, seemingly unaware of the irony of advocating for “The Coalition” while opposing the idea of a coalition.

If the Liberal and Labor parties wanted to form a coalition they could do so in any election where no party has a clear majority, and do it without even needing the National party. Some people claim that it’s best to have the major parties take turns in having full control of the government without having to make a deal with smaller parties and independent candidates but that’s obviously a bogus claim. The reason we have Labor allying with the Greens and independents is that the Liberal party opposes them at every turn and the Liberal party has a lot of unpalatable policies that make alliances difficult.

One thing that would be a good development in Australian politics is to have the National party actually represent rural voters rather than big corporations. Liberal policies on mining are always opposed to the best interests of farmers and the Liberal policies on trade aren’t much better. If “The Coalition” wins the election then the National party could insist on a better deal for farmers in exchange for their continued support of Liberal policies.

If Labor wins more seats than “The Coalition” but not enough to win government directly then a National-Labor coalition is something that could work. I think that the traditional interests of Labor in representing workers and the National party in representing farmers have significant overlap. The people who whinge about a possible Green-Labor alliance should explain why they aren’t advocating a National-Labor alliance. I think that the Labor party would rather make a deal with the National party; it’s just a question of whether the National party is going to do what it takes to help farmers. They could make the position of Deputy Prime Minister part of the deal so the leader of the National party won’t miss out.

Categories: thinktime

Binh Nguyen: Are we now the USSR?, Brexit, and More

Fri 01st Jul 2016 02:07
Look at what’s happened and you’ll see the parallels: in many parts of the world, the past and current social and economic policies on offer basically aren’t delivering. It is clear that there is a democratic deficit; the policies at the top aren’t dealing with enough of the population’s problems.

CrossTalk: BREXIT - GOAL! (Recorded 24 June)
https://www.youtube.com/watch?v=kgKIc0bobO4

The Schulz Brexit
Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Main July 2016 Meeting: ICT in Education / To Search Perchance to Find

Wed 29th Jun 2016 23:06
Start: Jul 5 2016 18:30
End: Jul 5 2016 20:30
Location:

6th Floor, 200 Victoria St. Carlton VIC 3053

Link:  http://luv.asn.au/meetings/map

Speakers:

  • Dr Gill Lunniss and Daniel Jitnah, ICT in Education
  • Tim Baldwin, To Search Perchance to Find: Improving Information Access over
    Technical Web User Forums

Late arrivals, please call (0490) 049 589 for access to the venue.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat and Infoxchange for their help in obtaining the meeting venues.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Beginners July Meeting: GNU COBOL

Wed 29th Jun 2016 23:06
Start: Jul 16 2016 12:30
End: Jul 16 2016 16:30
Location:

Infoxchange, 33 Elizabeth St. Richmond

Link:  http://luv.asn.au/meetings/map

Lev Lafayette will talk about GNU COBOL.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

Categories: thinktime

sthbrx - a POWER technical blog: Kernel interfaces and vDSO test

Fri 24th Jun 2016 16:06
Getting Suckered

Last week a colleague of mine came up to me, showed me some of the vDSO on PowerPC and asked why on earth it fails vdsotest. I should come clean at this point and admit that I knew very little about the vDSO and hadn't heard of vdsotest. I had to admit to this colleague that I had no idea; everything looked super sane to me.

Unfortunately (for me) I got hooked. vdsotest was saying it was getting '22' instead of '-1' in the case where the vDSO would call into the kernel. It plagued me all night; 22 is so suspicious. Right before I got to work the next morning I had an epiphany: "I bet 22 is EINVAL".

Virtual Dynamically linked Shared Objects

The vDSO is a mechanism to expose some kernel functionality into userspace to avoid the cost of a context switch into kernel mode. This is a great feat of engineering, avoiding the context switch can have a dramatic speedup for userspace code. Obviously not all kernel functionality can be placed into userspace and even for the functionality which can, there may be edge cases in which the vDSO needs to ask the kernel.

Who tests the vDSO? The portion that lies exclusively in userspace will escape all testing of the syscall interface, which is really what kernel developers are so focused on not breaking. Enter Nathan Lynch with vdsotest, who has done some great work!

The Kernel

When the vDSO can't get the correct value without the kernel, it simply calls into the kernel because the kernel is the definitive reference for every syscall. On PowerPC something like this happens (sorry, our vDSO is 100% asm): 1

/*
 * Exact prototype of clock_gettime()
 *
 * int __kernel_clock_gettime(clockid_t clock_id, struct timespec *tp);
 *
 */
V_FUNCTION_BEGIN(__kernel_clock_gettime)
  .cfi_startproc
    /* Check for supported clock IDs */
    cmpwi   cr0,r3,CLOCK_REALTIME
    cmpwi   cr1,r3,CLOCK_MONOTONIC
    cror    cr0*4+eq,cr0*4+eq,cr1*4+eq
    bne     cr0,99f

    /* [snip] */

    /*
     * syscall fallback
     */
99:
    li      r0,__NR_clock_gettime
    sc
    blr

For those not familiar, this couldn't be simpler. The code starts by checking whether it is a clock id that the vDSO can handle, and if not it jumps to the 99 label. From there it simply loads the syscall number, jumps to the kernel and branches to the link register, aka 'return'. In this case the 'return' would return to the userspace code which called the vDSO function.

Wait, the vDSO calling into the kernel gets us the wrong result? Of course it does: vdsotest is assuming a C ABI with return values and errno, but the kernel doesn't do that; the kernel ABI is different. How does this even work on x86? Ohhhhh, vdsotest does 2

static inline void record_syscall_result(struct syscall_result *res,
                                         int sr_ret, int sr_errno)
{
    /* Calling the vDSO directly instead of through libc can lead to:
     * - The vDSO code punts to the kernel (e.g. unrecognized clock id).
     * - The kernel returns an error (e.g. -22 (-EINVAL))
     * So we need to recognize this situation and fix things up.
     * Fortunately we're dealing only with syscalls that return -ve values
     * on error.
     */
    if (sr_ret < 0 && sr_errno == 0) {
        sr_errno = -sr_ret;
        sr_ret = -1;
    }

    *res = (struct syscall_result) {
        .sr_ret = sr_ret,
        .sr_errno = sr_errno,
    };
}

That little hack isn't working on PowerPC and here's why:

The kernel puts the return value in the ABI-specified return register (r3) and flags errors with a condition register bit (condition register field 0, SO bit), so unlike x86, on error the return value isn't negative. To make matters worse, the condition register is very difficult to access from C. Depending on your definition of 'access from C' you might consider it impossible; either way, a fixup like that isn't workable.
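Restating the vdsotest fixup in Python makes the PowerPC problem concrete: with the error flagged in a condition-register bit and a *positive* errno value sitting in r3, the negative-return heuristic never fires (a sketch of the convention only, not actual kernel code):

```python
import errno

def record_syscall_result(sr_ret, sr_errno=0):
    """The vdsotest hack in Python: a raw x86-style kernel return
    encodes errors as negative values, so translate them into the
    C-ABI (-1, errno) pair that userspace callers expect."""
    if sr_ret < 0 and sr_errno == 0:
        sr_errno = -sr_ret
        sr_ret = -1
    return sr_ret, sr_errno

# x86 raw kernel return for an invalid clock id: -22 is recognised.
assert record_syscall_result(-22) == (-1, errno.EINVAL)

# On PowerPC the kernel instead returns a positive 22 in r3 and flags
# the error in CR0's SO bit, which C code cannot see; the same fixup
# wrongly treats 22 as a successful return value.
assert record_syscall_result(22) == (22, 0)   # looks like success!
```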

Lessons learnt
  • vDSO supplied functions aren't quite the same as their libc counterparts. Unless you have a very good reason (and, to be fair, vdsotest does have a very good reason), always access the vDSO through libc
  • Kernel interfaces aren't C interfaces, yep, they're close but they aren't the same
  • 22 is in fact EINVAL
  • Different architectures are... Different!
  • Variety is the spice of life

P.S. I have a hacky patch awaiting review.

  1. arch/powerpc/kernel/vdso64/gettimeofday.S 

  2. src/vdsotest.h 

Categories: thinktime

Ian Wienand: Zuul and Ansible in OpenStack CI

Wed 22nd Jun 2016 08:06

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview is the same; this time the focus is on the launchers that run the tests.

  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their part and the developer simply waits for the results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the job wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally, with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
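The queueing and matching in steps 4-6 can be sketched with a toy, in-memory stand-in for gearman. This is purely illustrative: the class, method names, and the job/node-type strings below are ours, not Zuul's or gearman's real API, and real gearman semantics are much richer.

```python
class ToyGearman:
    """A toy stand-in for the gearman queue described above.

    Zuul submits (job-name, node-type) tuples; a launcher consumes a job
    only if it both knows the job definition and has a fresh node of the
    requested type registered to it."""

    def __init__(self):
        self.queue = []  # pending (job_name, node_type) tuples

    def submit(self, job_name, node_type):
        # Zuul puts a job request into the queue (step 4).
        self.queue.append((job_name, node_type))

    def consume(self, known_jobs, registered_node_types):
        # A launcher takes the first waiting job it knows about whose
        # requested node type matches a registered fresh node (step 6).
        for i, (job, node) in enumerate(self.queue):
            if job in known_jobs and node in registered_node_types:
                return self.queue.pop(i)
        return None  # nothing this launcher can run right now
```

A launcher with only a centos-7 node registered would skip an ubuntu-trusty job and leave it queued for another launcher.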

Work will continue within OpenStack Infrastructure to further enhance Zuul, including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details, see the spec.
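The playbook construction in step 6 — common setup and teardown concatenated around the job's own commands — can be sketched as below. The task names and shell commands are made up for illustration; they are not the launcher's actual common tasks.

```python
def build_playbook(job_commands):
    """Sketch of the launcher's dynamic playbook assembly: wrap the job's
    test commands in common setup and teardown tasks. Returns the
    list-of-plays structure Ansible expects, before YAML serialisation."""
    common_setup = [
        {"name": "prepare workspace", "shell": "mkdir -p ~/workspace"},
    ]
    common_teardown = [
        {"name": "archive logs", "shell": "tar czf ~/logs.tgz ~/workspace"},
    ]
    job_tasks = [{"name": "job step %d" % i, "shell": cmd}
                 for i, cmd in enumerate(job_commands, 1)]
    # One play targeting the fresh node, tasks in setup -> test -> teardown order.
    return [{"hosts": "all", "tasks": common_setup + job_tasks + common_teardown}]
```

Serialising the returned structure to YAML (plus the inventory naming the fresh node) gives something Ansible can run directly — the files you see under _zuul_ansible on logs.openstack.org are the real equivalents of this.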

Categories: thinktime

Pia Waugh: Pia, Thomas and Little A’s Excellent Adventure – Week 1

Tue 21st Jun 2016 11:06

We arrived in Auckland after a fairly difficult flight. Little A had a mild cold and did NOT cope with the cabin pressure well, so there was a lot of walking cuddles around the plane kitchen to not disturb other passengers. After a restful night we picked up our rental car, a roomy 4 wheel drive, and drove to Turangi, a beautiful scenic introduction to our 3 month adventure! Our plan is to spend 3 months in Turangi as a bit of a babymoon: to get to know Little A as she goes through that lovely 6-9 month development stage which includes crawling, learning to eat and other fun stuff. We are also planning to catch a LOT of trout (and even keep some!), catch up on some studies and reading, and take the time to plan out the next chapter of our life. I’m also hoping to write a book if I can, but more on that later.

So each week we’ll blog some highlights! Photos will be added every few days to the flickr album.

Arrival

The weather in Turangi has been gorgeous all week. Sunny and much warmer than Canberra, but of course Thomas would rather rain, as that would get the trout moving in the river. We are renting a 3 bedroom house with woodfire heating which is toasty warm and very comfortable. The only downside is that we have no internet at the house, and the data plan on my phone doesn’t work at all there. So we are fairly offline, which has its pros and cons. Good for relaxing, reflection, studying, writing and planning. Bad for Pia, who feels like she has lost a limb! Meanwhile, the local library has reasonable WiFi and we have become regular visitors.

Little A

Little A has made some new steps this week. She learned how to do raspberries, which she now does frequently. She also rolled over completely unassisted for the first time and spends a lot of time trying to roll more. Finally, she decided she wanted to start on solids. We know this because when Thomas was holding her whilst eating a banana, he turned away for a second to speak to me and she launched herself onto the banana, gumming furiously! So we have now tried some mashed potato, pumpkin and some water from the sippy cup. In all cases she insists on grabbing the spoon or sippy cup to feed herself.

Studies

Both of us are doing some extra studies whilst on this trip. I’m finishing off my degree this semester with a subject on policy and law, and another on white collar crime. Both are fascinating! Thomas is reading up on some areas of law he wants to brush up on for work and fun.

Book

My book preparations are going well, and I will be blogging about that in a few weeks once I get a bit more done. Basically I’m writing a book about the history and future of our species, focusing on the major philosophical and technological changes that have come and are coming, and the key things we need to carefully think about and change if we are to take advantage of how the world itself has fundamentally changed. It is a culmination of things I’ve been thinking about and exploring for the last 15 years, so I hope it proves useful in making a better world for everyone.

Fishing

Part of the reason we have based this little sabbatical at Turangi is that it arguably has the best trout fishing in the world, and is one of Thomas’ favourite places. It is a quaint and sleepy little country town with everything we need. The season hasn’t really kicked off and the fish aren’t running upstream yet, but we still netted 12 fish this week, of which we kept one rainbow trout for a delicious meal of Manuka-smoked fish.

Categories: thinktime

Stewart Smith: Building OPAL firmware for POWER9

Mon 20th Jun 2016 13:06

Recently, we merged into the op-build project (the build scripts for OpenPOWER Firmware) a defconfig for building OPAL for (certain) POWER9 simulators. I won’t bother linking over to articles on the POWER9 chip or schedule (there’s search engines for that), but with this commit – if you happen to be able to get your hands on a POWER9 simulator, you can now boot to the petitboot bootloader on it!

We’re using upstream Linux 4.7.0-rc3 and upstream skiboot (master), so all of this code is already upstream!

Now, by no means is this complete. There are some fairly fundamental things that are missing (e.g. PCI) – but how many other platforms can you build open source firmware for before you can even get your hands on a simulator?

Categories: thinktime

Binh Nguyen: Religious Conspiracies, Is Capitalism Collapsing 2?, and More

Fri 17th Jun 2016 18:06
This is obviously a continuation of my past post, http://dtbnguyen.blogspot.com/2016/06/is-capitalism-collapsing-random.html. You're probably wondering how on earth we've moved on to religious conspiracies. You'll figure this out in a second: - look back far enough and you'll realise that the way religion was practised and embraced in society a long time ago was very different from now. In fact,
Categories: thinktime

Chris Smart: Booting Fedora 24 cloud image with KVM

Fri 17th Jun 2016 17:06

Fedora 24 is on the way; here’s how you can play with the cloud image on your local machine.

Download the image:
wget https://alt.fedoraproject.org/pub/alt/stage/24_RC-1.2/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2

Make a new local backing image (so that we don’t write to our downloaded image) called my-disk.qcow2:
qemu-img create -f qcow2 -b Fedora-Cloud-Base-24-1.2.x86_64.qcow2 my-disk.qcow2

The cloud image uses cloud-init to configure itself on boot which sets things like hostname, usernames, passwords and ssh keys, etc. You can also run specific commands at two stages of the boot process (see bootcmd and runcmd below) and output messages (see final_message below) which is useful for scripted testing.

Create a file called meta-data with the following content:
instance-id: FedoraCloud00
local-hostname: fedoracloud-00

Next, create a file called user-data with the following content:
#cloud-config
password: password
chpasswd: { expire: False }
ssh_pwauth: True
 
bootcmd:
 - [ sh, -c, echo "=========bootcmd=========" ]
 
runcmd:
 - [ sh, -c, echo "=========runcmd=========" ]
 
# add any ssh public keys
ssh_authorized_keys:
  - ssh-rsa AAA...example...SDvZ user1@domain.com
 
# This is for pexpect so that it knows when to log in and begin tests
final_message: "SYSTEM READY TO LOG IN"

Cloud-init mounts a CD-ROM on boot, so create an ISO image out of those files:
genisoimage -output my-seed.iso -volid cidata -joliet -rock user-data meta-data

If you want to SSH in you will need a bridge of some kind. If you’re already running libvirtd then you should have a virbr0 network device (used in the example below) to provide a local network for your cloud instance. If you don’t have a bridge set up, you can still boot it without network support (leave off the -netdev and -device lines below).

Now we are ready to boot this!
qemu-kvm -name fedora-cloud \
-m 1024 \
-hda my-disk.qcow2 \
-cdrom my-seed.iso \
-netdev bridge,br=virbr0,id=net0 \
-device virtio-net-pci,netdev=net0 \
-display sdl

You should see a window pop up and Fedora loading and cloud-init configuring the instance. At the login prompt you should be able to log in with the username fedora and password that you set in user-data.
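The final_message in user-data above gives a scripted test harness (such as one built on pexpect) a deterministic marker to wait for. A minimal self-contained sketch of that wait, scanning captured console output — the helper name is ours, not part of cloud-init or pexpect:

```python
def line_of_marker(console_lines, marker="SYSTEM READY TO LOG IN"):
    """Return the 1-based line number where the boot marker appears in
    captured console output, or None if boot never finished. A harness
    would log in and start its tests once this returns a line number."""
    for n, line in enumerate(console_lines, 1):
        if marker in line:
            return n
    return None
```

In practice pexpect does this matching against the live serial console rather than a captured list, but the decision — "don't log in until the marker is seen" — is the same.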

Categories: thinktime

sthbrx - a POWER technical blog: Introducing snowpatch: continuous integration for patches

Wed 15th Jun 2016 15:06

Continuous integration has changed the way we develop software. The ability to make a code change and be notified quickly and automatically whether or not it works allows for faster iteration and higher quality. These processes and technologies allow products to quickly and consistently release new versions, driving continuous improvement to their users. For a web app, it's all pretty simple: write some tests, someone makes a pull request, you build it and run the tests. Tools like GitHub, Travis CI and Jenkins have made this process simple and efficient.

Let's throw some spanners in the works. What if instead of a desktop or web application, you're dealing with an operating system? What if your tests can only be run when booted on physical hardware? What if, instead of something like a GitHub pull request, code changes were sent as plain-text emails to a mailing list? What if you didn't control the development of this project, and you had to work with an existing, open community?

These are some of the problems faced by the Linux kernel, and many other open source projects. Mailing lists, along with tools like git send-email, have become core development infrastructure for many large open source projects. The idea of sending code via a plain-text email is simple, well-defined, not reliant on a proprietary service, and built on universal technology. It does have shortcomings, though. How do you take a plain-text patch, sent as an email to a mailing list, and achieve the continuous integration that other tools provide trivially?

Out of this problem snowpatch was born: a continuous integration tool designed to enable these practices for projects that use mailing lists and plain-text patches. By taking patch metadata organised by Patchwork, performing a number of git operations and shipping them off to Jenkins, snowpatch can enable continuous integration for any mailing list-based project. At IBM OzLabs, we're using snowpatch to automatically test new patches for Linux on POWER, skiboot, snowpatch itself, and more.
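The git side of that workflow — base out a throwaway branch, apply the patch, push it where Jenkins can test it — can be sketched as below. This is a hedged illustration in Python for brevity: the branch and remote names are invented, and the real implementation lives in snowpatch itself (which, as discussed below, is written in Rust).

```python
def plan_git_ops(mbox_path, base_branch="master", workdir="linux",
                 remote="ci-mirror"):
    """Return the sequence of git commands (as argv lists) to test one
    patch: reset a throwaway branch to the project's base, apply the
    patch (already downloaded from Patchwork as an mbox file), and
    force-push the result to a remote that Jenkins watches."""
    test_branch = "snowpatch/test"
    return [
        ["git", "-C", workdir, "checkout", "-B", test_branch, base_branch],
        ["git", "-C", workdir, "am", mbox_path],
        ["git", "-C", workdir, "push", "-f", remote, test_branch],
    ]
```

Each argv list could then be handed to a process runner; keeping the plan as data makes it easy to log, test, or replay a failed run.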

snowpatch is written in Rust, an exciting new systems programming language with a focus on speed and safety. Rust's amazing software ecosystem, enabled by its package manager Cargo, made development of snowpatch a breeze. Using Rust has been a lot of fun, along with the practical benefits of (in our experience) faster development, and confidence in the runtime stability of our code. It's still a young language, but it's quickly growing and has an amazing community that has always been happy to help.

We still have a lot of ideas for snowpatch that haven't been implemented yet. Once we've tested a patch and sent the results back to a patchwork instance, what if the project's maintainer (or a trusted contributor) could manually trigger some more intensive tests? How would we handle it if the traffic on the mailing list of a project is too fast for us to test? If we were running snowpatch on multiple machines on the same project, how would we avoid duplicating effort? These are unsolved problems, and if you'd like to help us with these or anything else you think would be good for snowpatch, we take contributions and ideas via our mailing list, which you can subscribe to here. For more details, view our documentation on GitHub.

Thanks for taking your time to learn a bit about snowpatch. In future, we'll be talking about how we tie all these technologies together to build a continuous integration workflow for the Linux kernel and OpenPOWER firmware. Watch this space!

This article was originally posted on IBM developerWorks Open. Check that out for more open source from IBM, and look out for more content in their snowpatch section.

Categories: thinktime

Rusty Russell: Minor update on transaction fees: users still don’t care.

Wed 15th Jun 2016 13:06

I ran some quick numbers on the last retargeting period (blocks 415296 through 416346 inclusive) which is roughly a week’s worth.

Blocks were full: median 998k, mean 818k (some miners blind mining on top of unknown blocks). Yet of the 1,618,170 non-coinbase transactions, 48% were still paying dumb, round fees (like 5000 satoshis). Another 5% were paying dumb, round-numbered per-byte fees (like 80 satoshi per byte).

The mean fee was 24051 satoshi (~16c), the mean fee rate 60 satoshi per byte. But if we look at the amount you needed to pay to get into a block (using the second cheapest tx which got in), the mean was 16.81 satoshis per byte, or about 5c.
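This kind of classification is easy to reproduce from raw fee data. A minimal sketch — note the 1000-satoshi "roundness" cutoff is our assumption for illustration, not necessarily the one behind the 48% figure:

```python
def is_round_fee(fee_satoshi, granularity=1000):
    """A fee like 5000 satoshis, paid regardless of transaction size,
    counts as a 'dumb, round' absolute fee."""
    return fee_satoshi > 0 and fee_satoshi % granularity == 0

def admission_rate(included_fee_rates):
    """The 'price of admission' proxy used above: the fee rate
    (satoshi/byte) of the second-cheapest transaction that made it
    into a block."""
    return sorted(included_fee_rates)[1]
```

Running is_round_fee over each transaction's total fee and averaging admission_rate over the period's blocks gives the 48% and 16.81 satoshi/byte style of figures quoted.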

tl;dr: It’s like a tollbridge charging vehicles 7c per ton, but half the drivers are just throwing a quarter as they drive past and hoping it’s enough. It really shows fees aren’t high enough to notice, and transactions don’t get stuck often enough to notice. That’s surprising; at what level will they notice? What wallets or services are they using?

Categories: thinktime
