Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Chris Smart: Single emergency mode with systemd

Fri 10th Oct 2014 19:10

Just to remind myself... add systemd.unit=emergency.target to the kernel line, or if that fails, try init=/sbin/sh and remove both the quiet and rhgb options.
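
For example (the kernel image and root device shown here are placeholders, not taken from the original note), the edited GRUB kernel line ends up looking something like:

linux /vmlinuz-3.16.3 root=/dev/sda2 ro systemd.unit=emergency.target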

Afterwards, exit or:

exec /sbin/init

Can also enable debug mode to help investigate problems with systemd.log_level=debug

You can get a console early on in the boot process by enabling debug-shell:

systemctl enable debug-shell.service
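
If memory serves, that shell is spawned on virtual console 9, so once the system has booted far enough you can switch to it with:

chvt 9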

Categories: thinktime

BlueHackers: BlueHackers @ Open Source Developers’ Conference 2014

Fri 10th Oct 2014 16:10

This year, OSDC’s first afternoon plenary will be a specific BlueHackers related topic: Stress and Anxiety, presented by Neville Starick – an experienced Brisbane based counsellor.

We’ll also have our traditional BlueHackers “BoF” (birds-of-a-feather) session in the evening, usually featuring some general information, as well as the opportunity for safe lightning talks. Some people talk, some people don’t. That’s all fine.

The Open Source Developers’ Conference 2014 is being held at the beautiful Griffith University Gold Coast Campus, 4-7 November. It features a fine program, and if you use this magic link you get a special ticket price, but the regular registration is only around $300 anyhow, $180 for students! This includes all lunches and the conference dinner. Fabulous value.

Categories: thinktime

Andrew Pollock: [life] Day 253: Bike riding practice and swim class

Fri 10th Oct 2014 16:10

It's been a while since we've done any bike riding practice, and as the day was a bit cooler and more overcast, I thought we could squeeze in a little bit first thing.

We went to the Minnippi Parklands again, and did a little bit of practice. Zoe actually (really briefly) rode her bike for the first time successfully, which was really exciting. We'll have to have another go next week.

After that, we dropped past the supermarket to get a few things for lunch, and then had lunch.

I've managed to forget what happened after lunch, so it can't have been particularly memorable. I miscalculated when swim class was by 30 minutes, and thought we were in more of a rush than we really were. We biked to the Post Office to mail off my US taxes, but I thought we didn't have time to do all the extra paperwork for the mailing, so I paid for the postage, but took all of the stuff with me to swim class to finish filling out. On the way there, I realised I had an extra 30 minutes up our sleeves, which was nice. It gave Zoe some time to have some fruit before class.

Sarah picked up Zoe from swim class, and I biked home to ditch the bike and meet Anshu at the movie theatre to watch Gone Girl. Hoo boy. Good movie.

Categories: thinktime

Andrew Pollock: [life] Day 252: A poorly executed spontaneous outing to Wet and Wild

Fri 10th Oct 2014 16:10

The forecast maximum was supposed to be 32°C, so I rather spontaneously decided to go for our first visit to Wet and Wild for the season.

I whipped up some lunch, and got our swim gear together, and after a quick phone call with my REIQ tutor to discuss some questions I had about the current unit, I grabbed our gear and we headed off.

I thought that because school had gone back this week, Wet and Wild would be nice and quiet, but boy, did I miscalculate. I think the place was the busiest I've ever seen it, even on a weekend visit. It would appear that a lot of people have decided to just take an extra week off school with the Labour Day public holiday on Monday.

Once we arrived, I discovered that I'd left my hat on my bed at home, so I had to buy a hat from the gift shop while I was queuing up to pay for a locker. Then we went to get changed, and I discovered I'd also left my swimming stuff on my bed as well, so I had to line up again to purchase a pair of board shorts and a rashie. I was pretty annoyed with myself at all the unnecessary added expense, because I'd left in too much of a hurry.

After we'd got ourselves appropriately attired, we had a fantastic day. Zoe's now tall enough for a few of the "big kid" slides, so we went on them together as well. The first one just involved us sitting in a big inflatable tube going down a wide, curving slide, so that was pretty easy. The second one I wasn't sure about, because we had to go separately. I went first, and waited at the bottom to catch her. As I went down, I was a bit worried it was going to be too much for Zoe, but she came down fine, and once she surfaced, her first word was "Again!", so it was all good.

She napped on the way home, so I swung by the Valley to check my post office box, and then we got home and Sarah picked her up. In spite of the extra expense, it was a really good day out.

Categories: thinktime

Lev Lafayette: Free and Open Source Software Business Applications

Thu 09th Oct 2014 09:10

Free and open source software is based on key principles that are firmly grounded in instrumental advantages and ethical principles. It is heavily used in infrastructure and in new hardware forms; it is less used in user-end applications and traditional forms (laptops and desktops); however there is a good range of mature and feature-rich business applications available. The development of increasingly ubiquitous and more powerful computing is particularly well-suited for FOSS and workplaces that make use of such software will gain financial and positional advantages.

Presentation to Young Professionals CPA, October 8, 2014

Categories: thinktime

Michael Still: Lock In

Wed 08th Oct 2014 21:10

ISBN: 0765375869

LibraryThing

I know I like Scalzi stuff, but each series is so different that I like them all in different ways. I don't think he's written a murder mystery before, and this book was just as good as Old Man's War, which is a pretty high bar. This book revolves around a murder being investigated by someone who can only interact with the real world via personal androids. It's different from anything else I've seen, and a unique idea is pretty rare these days.

Highly recommended.

Tags for this post: book john_scalzi robot murder mystery

Related posts: Isaac Asimov's Robot Short Stories; Prelude To Foundation; Isaac Asimov's Foundation Series; Caves of Steel; Robots and Empire; A Talent for War
Categories: thinktime

linux.conf.au News: Earlybird registrations are now open!

Wed 08th Oct 2014 18:10

The moment that you have been waiting for has finally arrived! It is with many hugs of sincere gratitude to the team that we can announce that Earlybird Registration for LCA2015 is now open.

Now is the time to choose - are you a Professional, a Hobbyist or a Student attending LCA2015? Will you be one of our fantastic army of volunteers? Go to lca2015.org.au to register and buy your Earlybird ticket or register as a volunteer.

All of the information that you need is there - what you receive for your ticket price, accommodation options, THE Penguin Dinner, as well as Partners Program and creche options for those of you who are bringing your family with you. You can also register as a volunteer right now, and begin to get involved with our wonderful conference.

There have been months of anticipation, and several sleepless nights, but we are now at a truly exciting stage of the conference organising process - registration!

We look forward to seeing you all in January 2015 in Auckland.

The LCA2015 team

Categories: thinktime

Simon Horms: kexec-tools 2.0.8 Released

Wed 08th Oct 2014 17:10

I have released version 2.0.8 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility of Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

Categories: thinktime

Simon Horms: kexec-tools 2.0.7 Released

Wed 08th Oct 2014 17:10

I have released version 2.0.7 of kexec-tools, the user-space portion of kexec, a soft-reboot and crash-dump facility of Linux.

This is a feature release.

The code is available as a tarball here and in git here.

More information is available in the announcement email.

Categories: thinktime

Stewart Smith: Quick MySQL 5.7.5 thoughts

Wed 08th Oct 2014 10:10

It was great to see the recent announcement of MySQL 5.7.5 over at the MySQL Server Team blog. I’m looking forward to throwing this release at some of the POWER8 systems we have for a couple of really good reasons: 1) Does it work better than previous MySQL 5.7 releases “out of the box” on POWER? 2) What do the scalability improvements in 5.7.5 mean for peak QPS on POWER (and can I set a new record?).

Looking through the list of changes, I’m (casually not) surprised as to the number of features and the amount of work that echoes what we were working on in Drizzle a few years ago.

A closer look at the source for 5.7.5 may also prove enlightening; I wonder how the MySQL team is coping with a lot of the code rot legacy and the absolutely atrocious internal APIs they inherited…

Categories: thinktime

Andrew Pollock: [life] Day 251: Kindergarten, some more training and other general running around

Wed 08th Oct 2014 09:10

Yesterday was the first day of Term 4. I can't believe we're at Term 4 already. This year has flown by.

I had a letter from the PAG to all of the Kindergarten parents to put into the notice pockets at Kindergarten first up, so I drove to the Kindergarten not long after opening time, and quickly put them in all the pockets.

Then I headed over to Beaurepaires for a free tyre checkup.

After that, I headed over to my Group Leader's house for some more practical training. I'm feeling pretty confident about my ability to conduct a Thermomix demonstration now, especially after having done my first "real" one on Sunday night.

After that, it was time to pick up Zoe from Kindergarten. It was wonderful to see her after a week away. She wanted to go to Megan's house for a play date, but Megan had tennis, so we went and did the grocery shopping first.

After the grocery shopping, we popped around to Megan's for a bit, and then went home.

I made dinner, and Zoe seemed pretty tired, so I got her to bed a little bit early.

Categories: thinktime

Craige McWhirter: Post Receive Git Hook to Push to Github

Tue 07th Oct 2014 22:10

I self-host my own git repositories. I also use github as a backup for many of them. I have a use case for a number of them where the data changes can be quite large, and pushing to both my own and github's remote services doubles my bandwidth usage in my mostly bandwidth-constrained environments.

To get around those constraints, I wanted to only push to my git repository and have that service then push to github. This is how I went about it, courtesy of Matt Palmer.

Assumptions:
  • You have your own git server
  • You have a github account
  • You're fairly familiar with git
Authentication

It's likely you have not used your remote server to connect to github. To make sure everything happens smoothly, you need to:

  • Add the SSH key for your user account on your server to the authorised keys on your github account
  • SSH just once from your server to github to accept the key from github.
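
For example, a single test connection (a standard github connectivity check, not part of the original instructions) is enough to accept the host key and confirm the SSH key works:

ssh -T git@github.com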
Add a Remote for github

In the bare git repository on your server, you need to add the remote configuration. On a Debian server using gitweb, this file would be located at /var/cache/git/MYREPO/config. Add the lines below to it:

[remote "github"]
    url = git@github.com:MYACCOUNT/MYREPO.git
    fetch = +refs/heads/*:refs/remotes/github/*
    autopush = true

Add a post-receive Hook

Now we need to create a post-receive hook to process the push to github. Going with the previous example, edit /var/cache/git/MYREPO/hooks/post-receive

#!/bin/bash

for remote in $(git remote); do
    if [ "$(git config "remote.${remote}.autopush")" = "true" ]; then
        git push "$remote"
    fi
done
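
One detail worth adding (not spelled out in the original): git will only run the hook if it is executable, so remember to do something like:

chmod +x /var/cache/git/MYREPO/hooks/post-receive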

Happy automated pushing to github.

Categories: thinktime

Stewart Smith: Things that are not news

Tue 07th Oct 2014 09:10

The following are not news:

  • Human has genitals (which may/may not have been exposed)
  • Absolutely anything about The Bachelor
  • Anything any celebrity is wearing, anyone they’re dating or if they are currently wearing underwear.
  • Any list of "top 10" things.
  • TEENAGERS DO THINGS THAT TEENAGERS DO!

(feel free to extend the list)

Categories: thinktime

linux.conf.au News: Miniconf Call for Presentations - Linux.conf.au 2015 Systems Administration Miniconf

Tue 07th Oct 2014 07:10

As part of the linux.conf.au conference in Auckland, New Zealand in January 2015 we will be holding a one day mini conference oriented to Linux Systems Administration.

The organisers of the Systems Administration Miniconf would like to invite proposals for presentations to be delivered at the Miniconf. Please forward this CFP to your colleagues, social networks, and other relevant mailing lists.

This is our 9th year at Linux.conf.au. To see presentations from our previous years, see the miniconf website: http://sysadmin.miniconf.org/.

Presentation Topics

Topics for presentations could include (but are not limited to):

Systems Administration Best Practice, Infrastructure as a Service (IAAS), Platform as a Service (PAAS), Docker, Containerisation, Software as a Service (SAAS), Virtualisation, "Cloud" Computing, Service Migration, Dealing with Extranets, Coping with the shortage of IPv4 addresses, Software Defined Networking (SDN), DevOps, Configuration Management, Bootstrapping systems, Lifecycle Management, Monitoring, Dealing with BYOD, Backups in a virtual/distributed world, Security in a world where you cannot even trust your shell, Troubleshooting, Log Management, Buying Decisions, Identity Management, Multi-Factor Authentication, Web and Email management, Keeping legacy systems functioning, and other War stories from the Real World.

We strongly welcome topics on best practice, new developments in systems administration and cutting edge techniques to better manage Linux environments both virtual and physical.

Presentations should be of a technical nature and speakers should assume that members of the audience have at least a couple of years' experience in Unix/Linux administration.

Format of Presentations

We are now seeking proposals for presentations at the mini-conference.

We have openings for:

  • 45 minute double-length presentations
  • 20 minute full presentations
  • 10-20 minute "Life in the Real World" presentations
  • 5-10 minute "lightning talks"

Please note, due to the single day available (and whole-LCA keynote before morning tea), we expect the majority of available timeslots to be 20 minutes long or less.

The 10-20 minute "Life in the Real World" presentations are intended for people to talk about their sites or real world projects/problems. Information could include: Servers, talks, tools, people and roles, experiences, software, operating systems, monitoring, alerting, provisioning etc. The intent is to give attendees a picture of what is happening in the "real world" and some understanding of how other sites work, as well as offer you a chance to get suggestions on other tools and practices that might help your site. Discussion of "lessons learned" through really trying to use some of these things is especially welcome.

Submitting talks

Please note that in order to give a presentation or attend the miniconf you must be registered (and paid up) for the main linux.conf.au conference. Presenting at the Miniconf does not entitle you to discounted or free registration at the main conference nor priority with registration. Unfortunately the Miniconf has no budget for sponsorship of speakers.

Submissions should be made via: http://sysadmin.miniconf.org/proposal15.html

Questions should be sent to: lca2015 at sysadmin.miniconf.org

Dates and Deadlines

To encourage early submissions, priority (both of inclusion and scheduling) will be given to presentations submitted before the 19th of October 2014.

  • 2014-10-19 - Deadline for early submissions
  • 2014-10-26 - Early submissions confirmation
  • 2014-11-16 - Deadline for submissions
  • 2014-11-30 - Confirmation of all presentations
  • 2015-01-13 - Start of Miniconf and 2nd day of linux.conf.au 2015
Contact and Questions

Please see our website for more information on the miniconf, past presentations and presenting at it. If you have any questions please feel free to email the organisers at: lca2015 at sysadmin.miniconf.org

Ewen McNeill

LCA2015 Sysadmin Miniconf Convener

Categories: thinktime

Stewart Smith: MariaDB 10.0 on POWER

Mon 06th Oct 2014 18:10

Good news for those wanting to run MariaDB on POWER systems, the latest 10.0 bzr tree (as of a couple of weeks ago) builds and runs well!

I recently pulled the latest MariaDB 10.0 from BZR and built it on a POWER8 system in the lab to run some quick tests. The MariaDB team has done some work on getting MariaDB to run on POWER recently, a bunch of which is based off my work on MySQL on POWER.

There’s obviously still some work in progress going on, but my initial results show performance within around 10% of MySQL, so with a bit of work we will hopefully see MariaDB reach performance parity.

One interesting find was that the code to account for thread memory usage uses a single atomic variable: this does not scale, and does end up showing up on profiles.

I’ll comment more on the code in a future post, but it looks like we will have MariaDB being functional on POWER in an upcoming release.

Categories: thinktime

Stewart Smith: MariaDB & Trademarks, and advice for your project

Mon 06th Oct 2014 11:10

I want to emphasize this for those who have not spent time near trademarks: trademarks are trouble and another one of those things where no matter what, the lawyers always win. If you are starting a company or an open source project, you are going to have to spend a whole bunch of time with lawyers on trademarks or you are going to get properly, properly screwed.

MySQL AB always held the trademark for MySQL. There’s this strange thing with trademarks and free software, where while you can easily say “use and modify this code however you want” and retain copyright on it (for, say, selling your own version of it), this does not translate too well to trademarks as there’s a whole “if you don’t defend it, you lose it” thing.

The law is, in effect, telling you that at some point you have to be an arsehole to not lose your trademark. (You can be various degrees of arsehole about it when you have to, and whenever you do, you should assume that people are acting in good faith and just have not spent the last 40,000 years of their life talking to trademark lawyers like you have). Basically, you get to spend time telling people that they have to rename their product from “MySQL Headbut” to “Headbut for MySQL” and that this is, in fact, a really important difference.

You also, at some point, get to spend a lot of time talking about when the modifications made by a Linux distribution to package your software constitute sufficient changes that it shouldn’t be using your trademark (basically so that you’re never stuck if some arse comes along, forks it, makes it awful and keeps using your name, to the detriment of your project and business).

If you’re wondering why Firefox isn’t called Firefox in Debian, you can read the Mozilla trademark policy and probably some giant thread on debian-legal I won’t point to.

Of course, there’s ‘ MySQL trademark policy and when I was at Percona, I spent some non-trivial amount of time attempting to ensure we had a trademark policy that would work from a legal angle, a corporate angle, and a get-our-software-into-linux-distros-happily angle.

So, back in 2010, Monty started talking about a draft MariaDB trademark policy (see also, Ubuntu trademark policy, WordPress trademark policy). If you are aiming to create a development community around an open source project, this is something you need to get right. There is a big difference between contributing to a corporate open source product and an open source project – both for individuals and corporations. If you are going to spend some of your spare time contributing to something, the motivation goes down when somebody else is going to directly profit off it (corporate project) versus a community of contributors and companies who will all profit off it (open source project). The most successful hybrid of these two is likely Ubuntu, and I am struggling to think of another (maybe Fedora?).

Linux is an open source project, RedHat Enterprise Linux is an open source product and in case it wasn’t obvious when OpenSolaris was no longer Open, OpenSolaris was an open source product (and some open source projects have sprung up around the code base, which is great to see!). When a corporation controls the destiny of the name and the entire source code and project infrastructure – it’s a product of that corporation, it’s not a community around a project.

From the start, it seemed that one of the purposes of MariaDB was to create a developer community around a database server that was compatible with MySQL, and eventually, to replace it. MySQL AB was not very good at having an external developer community; it was very much an open source product and not an open source project (one of the downsides to hiring just about anyone who ever submitted a patch). Things struggled further at Sun and (I think) have actually gotten better for MySQL at Oracle – not perfect, I could pick holes in it all day if I wanted, but certainly better.

When we were doing Drizzle, we were really careful about making sure there was a development community. Ultimately, with Drizzle we made a different fatal error, and one that we knew had happened to another open source project and nearly killed it: all the key developers went to work for a single company. Looking back, this is easily my biggest professional regret and one day I’ll talk about it more.

Brian Aker observed (way back in 2010) that MariaDB was, essentially, just Monty Program. In 2013, I did my own analysis on the source tree of MariaDB 5.5.31 and MariaDB 10.0.3-ish to see if indeed there was a development community (tl;dr; there wasn’t, and I had the numbers to prove it). If you look back at the idea of the Open Database Alliance and the MariaDB Foundation, actually, I’m just going to quote Henrik here from his blog post about leaving MariaDB/Monty Program:

When I joined the company over a year ago I was immediately involved in drafting a project plan for the Open Database Alliance and its relation to MariaDB. We wanted to imitate the model of the Linux Foundation and Linux project, where the MariaDB project would be hosted by a non-profit organization where multiple vendors would collaborate and contribute. We wanted MariaDB to be a true community project, like most successful open source projects are – such as all other parts of the LAMP stack.

….

The reality today, confirmed to me during last week, is that:

Those in charge at Monty Program have decided to keep ownership of the MariaDB trademark, logo and mariadb.org domain, since this will make the company more valuable to investors and eventually to potential buyers.

Now, with Monty Program being sold to/merged into (I’m really not sure) SkySQL, it was SkySQL who had those things. So instead of having Monty Program being (at least in theory) one of the companies working on MariaDB and following the Hacker Business Model, you now have a single corporation with all the developers, all of the trademarks, that is, essentially a startup with VC looking to be valuable to potential buyers (whatever their motives).

Again, I’m going to just quote Henrik on the us-vs-them on community here:

Some may already have observed that the 5.2 release was not announced at all on mariadb.org, rather on the Monty Program blog. It is even intact with the “us vs them” attitude also MySQL AB had of its community, where the company is one entity and “outside community contributors” is another. This is repeated in other communication, such as the recent Recently in MariaDB newsletter.

This was, again, back in 2010.

More recently, Jeremy Cole, someone who has pumped a fair bit of personal and professional effort into MySQL and MariaDB over the past (many) years, asked what seemed to be a really simple question on the maria-discuss mailing list. Basically, “What’s going on with the MariaDB trademark? Isn’t this something that should be under the MariaDB foundation?”

The subsequent email thread was as confusing as ever and should be held up as a perfect example about what not to do. Some of us had by now, for years, smelt something fishy going on around the talk of a community project versus the reality. At the time (October 2013), Rasmus Johansson (VP of Engineering at SkySQL and Board Member of MariaDB foundation) said this:

The MariaDB Foundation and SkySQL are currently working on the trademark issue to come up with a solution on what rights to the trademark each entity should have. Expect to hear more about this in a fairly near future.

 

MariaDB has from its beginning been a very community friendly project and much of the success of MariaDB relies in that fact. SkySQL of course respects that.

(and at the same time, there were pages that were “Copyright MariaDB” which, as it was pointed out, was not an actual entity… so somebody just wasn’t paying attention). Also, just to make things even less clear about where SkySQL the corporation, Monty Program the corporation and the MariaDB Foundation all fit together, Mark Callaghan noticed this text up on mariadb.com:

The MariaDB Foundation also holds the trademark of the MariaDB server and owns mariadb.org. This ensures that the official MariaDB development tree <https://code.launchpad.net/maria> will always be open for the MariaDB developer community.

So…. there’s no actual clarity here. I can imagine attempting to get involved with MariaDB inside a corporation and spending literally weeks talking to a legal department – which thrills significantly less than standing in lines at security in an airport does.

So, if you started off as yay! MariaDB is going to be a developer community around an open source project that’s all about participation, you may have even gotten code into MariaDB at various times… and then started to notice a bit of a shift… there may have been some intent to make that happen, to correct what some saw as some of the failings of MySQL, but the reality has shown something different.

Most recently, SkySQL has renamed themselves to MariaDB. Good luck to anyone who isn’t directly involved with the legal processes around all this differentiating between MariaDB the project, MariaDB Foundation and MariaDB the company and who owns what. Urgh. This is, in no way, like the Linux Foundation and Linux.

Personally, I prefer to spend my personal time contributing to open source projects rather than products. I have spent the vast majority of my professional life closer to the corporate side of open source, some of which you could better describe as closer to the open source product end of the spectrum. I think it is completely and totally valid to produce an open source product. Making successful companies, products and a butt-ton of money from open source software is an absolutely awesome thing to do and I, personally, have benefited greatly from it.

MariaDB is a corporate open source product. It is no different to Oracle MySQL in that way. Oracle has been up front and honest about it the entire time MySQL has been part of Oracle, everybody knew where they stood (even if you sometimes didn’t like it). The whole MariaDB/Monty Program/SkySQL/MariaDB Foundation/Open Database Alliance/MariaDB Corporation thing has left me with a really bitter taste in my mouth – where the opportunity to create a foundation around a true community project with successful business based on it has been completely squandered and mismanaged.

I’d much rather deal with those who are honest and true about their intentions than those who aren’t.

My guess is that this factored heavily into Henrik’s decision to leave in 2010 and (more recently) Simon Phipps’s decision to leave in August of this year. These are two people who I both highly respect, never have enough time to hang out with and I would completely trust to do the right thing and be honest when running anything in relation to free and open source software.

Maybe WebScaleSQL will succeed here – it’s a community with a purpose and several corporate contributors. A branch rather than a fork may be the best way to do this (Percona is rather successful with their branch too).

Categories: thinktime

Sridhar Dhanapalan: Twitter posts: 2014-09-29 to 2014-10-05

Mon 06th Oct 2014 01:10
Categories: thinktime

Tridge on UAVs: CanberraUAV Outback Challenge 2014 Debrief

Sat 04th Oct 2014 12:10

We've now had a few days to get back into a normal routine after the excitement of the Outback Challenge last week, so I thought this was a good time to write up a report on how things went for the team I was part of, CanberraUAV. There have been plenty of posts already about the competition results, so I will concentrate on the technical details of our OBC entry, what went right and what went badly wrong.



For comparison, you might like to have a look at the debrief I wrote two years ago for the 2012 OBC. In 2012 there were far fewer teams, and nobody won the grand prize, although CanberraUAV went quite close. This year there were more than 3 times as many teams and 4 teams completed the criterion for the challenge, with the winner coming down to points. That means the Outback Challenge is well and truly "done", which reflects the huge advances in amateur UAV technology over the last few years.



The drama for CanberraUAV really started several weeks before the challenge. Our primary competition aircraft was the "Bushmaster", a custom built tail dragger with 50cc petrol motor designed by our chief pilot, Jack Pittar. Jack had designed the aircraft to have both plenty of room inside for whatever payload the rest of the team came up with, and to easily fit in the back of a station wagon. To achieve this he had designed it with a novel folding fuselage. Jack (along with several other team members) had put hundreds of hours into building the Bushmaster and had done a great job. It was beautifully laid out inside, and really was a purpose built Outback Challenge aircraft.



Just a couple of days after our D3 deliverable was sent in we had an unfortunate accident. A member of our local flying club was flying his CAP232 aerobatic plane at the same time we were doing a test mission, and he misjudged a loop. The CAP232 loop went right through the rear of the Bushmaster fuselage, slicing it off. The Bushmaster hit the ground at 180 km/h, with predictable results.



Jack and Greg set to work on a huge effort to build a new Bushmaster, but we didn't manage to get it done in time. Luckily the OBC organisers allowed us to switch to our backup aircraft, a VQ Porter 2.7m RTF with some customisations. We had included the Porter in our D2 deliverable just in case it was needed, and already had plenty of autonomous hours up on it as we had been using it as a test aircraft for all our systems. It was the same basic style as the Bushmaster (a petrol powered tail dragger), but was a bit more cramped inside for fitting our equipment and used a somewhat smaller engine (a DLE35).



Strategy



Our basic strategy this year was the same as in 2012. We would search at a relatively low altitude (100m AGL this year, compared to 90m AGL in 2012), with an interleaved "mow the lawn" pattern. This year we set up the search with 60% overlap compared with 50% last year, and we had a longer turn around area at the end of each search leg to ensure the aircraft got back on track fully before it entered the search area. With that search pattern and a target airspeed of 28m/s we expected to cover the whole search area in 20 minutes, then we would cover it again in the reverse direction (with a 50m offset) if we didn't find Joe on the first pass.



As with 2012 we used an on-board computer to autonomously find Joe. This year we used an Odroid-XU, which is a quad core system running at 1.6GHz. That gave us a lot more CPU power than in 2012 (when we used a pandaboard), which allowed us to use more CPU intensive image recognition code. We did the first level histogram scanning at the full camera resolution this year (1280x960), whereas in 2012 we had run the first level scan at 640x480 to save CPU. That is why we were happy to fly a bit higher this year.



While the basic approach to image recognition was the same, we had improved the details of the implementation a lot in the last two years, with hundreds of small changes to the image scoring, communications and user interface. Using our 2012 image data as a guide (along with numerous test flights at our local flying field) we had refined the cuav code to provide much better object discrimination, and also to cope better with communication failures. We were pretty confident it could find Joe very reliably.



The takeoff



When the running order was drawn from a hat we were the 2nd last team on the list, so we ended up flying on the Friday. We'd been up early each morning anyway in case the order was changed (which does happen sometimes), and we'd actually been called out to the airfield on the Thursday afternoon, but didn't end up flying then due to high wind.



Our time to takeoff finally came just before 8am Friday morning. As with 2012 we did an auto takeoff, using the new tail-dragger takeoff code added to APM:Plane just a couple of months before.



Unfortunately the takeoff did not go as planned. In fact, we were darn lucky the plane got into the air at all! As soon as Jack flicked the switch on the transmitter to put the plane into AUTO it started veering left on the runway, and nearly scraped a wing as it limped its way into the air. A couple of seconds later it came quite close to the tents where the OBC organisers and our GCS were located.



It did get off the ground though, and missed the tents while it climbed, finally switching to normal navigation to the 2nd waypoint once it got to an altitude of 40m. Jack had his finger on the switch on the transmitter which would have taken manual control and very nearly aborted the takeoff. This would have to go down as one of the worst takeoffs in OBC history.



So why did it go so badly wrong? My initial analysis when I looked at the logs later was that the wind had pushed the plane sideways. After examining the logs more carefully though I discovered that while the wind did play a part, the biggest issue was the compass. For some reason the compass offsets we had loaded were quite different from the ones we had been testing with in all our flights in Canberra. I still don't know why the offsets changed, although the fact that we had compass learning enabled almost certainly played a part. We'd done dozens of auto takeoffs in Canberra with no issues, with the plane tracking beautifully down the center line of the runway. To have it go so badly wrong for the flight that matters was a disappointment.



I've decided that the best way to fix the issue for future flights is to make the auto takeoff code completely independent of the compass. Normally the compass is needed to get an initial yaw for the aircraft while it is not moving (as our hobby-grade GPS sensors can't give yaw when not moving), and that initial yaw is used to set the ground heading for the takeoff. That means that with the current code, any compass error at the time you change into AUTO will directly impact the takeoff.



Because takeoff is quite a short process (usually 20s or less), we can take an alternative approach. The gyros won't drift much in 20s, so what we can do is just keep a constant gyro heading until the aircraft is moving fast enough to guarantee a good GPS ground track. At that point we can add whatever gyro based yaw error has built up to the GPS ground course and use that as our navigation heading for the rest of the takeoff. That should make us completely independent of compass for takeoff, which should solve the problem for everyone, rather than just fixing it for our aircraft. I'm working on a set of patches to implement this, and expect it to be in the next release.



Note that once the initial takeoff is complete the compass plays almost no role in the flight of a fixed wing plane if you have the EKF enabled, unless you lose GPS lock. The EKF rejected our compass as being inconsistent, and happily got correct yaw from the fusion of GPS velocity and other sensors for the rest of the flight.



The search



After the takeoff things went much more smoothly. The plane headed off to the search area as planned, and tracked the mission extremely well. It is interesting to compare the navigation accuracy of this years mission compared to the 2012 mission. In 2012 we were using a simple vector field navigation algorithm, whereas we now use the L1 navigation code. This year we also used Paul's EKF for attitude and position estimation, and the TECS controller for speed/height control. The differences are really remarkable. We were quite pleased with how our Mugin flew in 2012, but this year it was massively better. The tracking along each leg of the search was right down the line, despite the 15 knot winds.



Another big difference from 2012 is that we were using the new terrain following capability that we had added this year. In 2012 we used a python script to generate our waypoints and that script automatically added intermediate waypoints to follow the terrain of the search area. This year we just set all waypoints as 100 meters AGL and let the autopilot do its job. That made things a lot simpler and also resulted in better geo-referencing and search area coverage.



On our GCS imaging display we had 4 main windows up. One is a "live" view from the camera. That is setup to only update if there is plenty of spare bandwidth on our radio links, and is really just there to give us something to watch while the imaging code does its job, plus to give us some situational awareness of what the aircraft is tracking over.

The second window is the map, which shows the mission flight path, the geo-fence, two plane icons (AHRS position estimate and GPS position estimate) plus overlays of thumbnails from the image recognition system of any "interesting" objects the Odroid has found.



The 3rd window is the "Mosaic" window. That shows a grid of thumbnails from the image recognition system, and has menus and hot-keys to allow us to control the image recognition process and sort the thumbnails in various ways. We expected Joe to have an image score of over 1000, but we set the threshold for thumbnails to display in the mosaic as 500. Setting a lower threshold means we get shown a lot of not very interesting thumbnails (things like oddly shaped tree stumps and patches of dirt that are a bit out of the ordinary), but at least it means we can see the system is working, and it stops the search being quite so boring.



The final window is a still image window. That is filled with a full resolution image of whatever image we have selected for download from the aircraft, allowing us to get some context around a thumbnail if we want it. Often it is much easier to work out what an object is if you can see the surroundings.



On top of those imaging windows we also have the usual set of flight control windows. We had 3 laptops setup in our GCS tent, along with a 40" LCD TV for one of the laptops (my thinkpad). Stephen ran the flight monitoring on his laptop, and Matt helped with the imaging and the antenna tracker from his laptop. Splitting the task up in this way helped prevent overload of any one person, which made the whole experience more manageable.



On Stephen's laptop he had the main MAVProxy console showing the status of the key parts of the autopilot, plus graphs showing the consistency of the various subsystems. The ardupilot code on Pixhawk has a lot of redundancy at both the sensor level and at the higher level algorithm level, and it is useful to plot graphs showing if we get any significant discrepancies, giving us a warning of a potential failure. For this flight everything went smoothly (apart from the takeoff) but it is nice to have enough screens to show this sort of information anyway. It also makes the whole setup look a bit more professional to have a few fancy graphs going :-)



About 20 minutes after takeoff the imaging system spotted Joe lying in his favourite position in a peanut field. We had covered nearly all of the first pass across the search area so we were glad to finally spot him. The images were very clear, so we didn't need to spend any time discussing if it really was Joe or not.



We clicked the "Show Position" menu item we'd added to the MAVProxy map, which pops up a window giving the position in various coordinate systems. Greg passed this to the judges who quickly confirmed it was within the required 100 meters of Joe.



The bottle drop



That initial position was only a rough estimate though. To refine the position we used a new trick we'd added to MAVProxy. The "wp movemulti" command allowed us to move and rotate a pre-defined confirmation flight pattern over the top of Joe. That setup the plane to fly a butterfly pattern over Joe at an altitude of 80 meters, passing over him with wings level. That gives an optimal set of images for our image recognition system to work with, and the tight turns allow the wind estimation algorithm in ardupilot to get an accurate idea of wind direction and speed.



In the weeks leading up to the competition we had spent a lot of time refining our bottle drop system. We realised the key to a good drop is timing consistency and accurate wind drift estimation.



To improve the timing consistency we had changed our bottle release mechanism to be in two stages. The bottle was held to the bottom of the Porter by two servos. One servo held the main weight of the bottle inside a plywood harness, while the second servo was attached to the top of the parachute by a wire, using a glider release mechanism.



The idea is that when approaching Joe for the drop the main servo releases first, which leaves the bottle hanging beneath the plane held by the glider release servo with the parachute unfurled. The wind drags the bottle at an angle behind the plane for 2 seconds before the 2nd servo is released. The result is much more consistent timing of the release, as there is no uncertainty in how long the bottle tumbles before the chute unfolds.



The second part of the bottle drop problem is good wind drift estimation. We had decided to use a small parachute as we were not certain the bottle would withstand the impact with the ground without a chute. That meant significant wind drift, which meant we really had to know the wind direction and speed quite accurately, and also needed some data on how fast the bottle would drift in the wind.



In the weeks before the competition we did a lot of bottle drops, but the really key one was a drop just a week before we left for Kingaroy. That was the first drop in completely still conditions, which meant it finally gave us a "zero wind" data point, which was important in calculating the wind drift. Combining that drop with some previous drop results we came up with a very simple formula giving the wind drift distance for a drop from 80 meters as 5 times the wind speed in meters/second, as long as we dropped directly into the wind.



We had added a new feature to APM:Plane to allow an exact "acceptance radius" for an individual waypoint to be set, overriding the global WP_RADIUS parameter. So once we had the wind speed and direction it was just a matter of asking MAVProxy to rotate the drop mission waypoints to match the wind (which was coming from -121 degrees) and to set the acceptance radius for the drop to 35 meters (70 meters for zero wind, minus 7 times 5 for the 7m/s wind the EKF had estimated).
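
Spelling that arithmetic out: the drift rule gives 5 × 7 = 35 meters of drift for the estimated 7m/s wind, so the zero-wind acceptance radius of 70 meters shrinks to the 35 meter radius used for the drop.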



The plane then slowed to 20m/s for the drop, and did a 350m approach to ensure the wings were nice and level at the drop point. As the drop happened we could see the parachute unfurl in the camera view from the aircraft, which was a nice confirmation that the bottle had been successfully dropped.



The mission was setup for the aircraft to go back to the butterfly confirmation pattern after the drop. We had done that to allow us to see where the bottle had actually landed relative to Joe, in case we wanted to do a 2nd drop. We had 3 bottles ready (one on the plane, two back at the airfield), and were ready to fit a new bottle and adjust the drop point if we'd been too far off.



As soon as we did the confirmation pass it became clear we didn't need to drop a 2nd bottle. We measured the distance of the bottle to Joe as under 3 meters using the imaging system (the judges measured it officially as 2.6m), so we asked the plane to come back and land.



The landing



This year we used a laser rangefinder (a SF/02 from LightWare) to help with the landing. Using a rangefinder really helps ensure the flare is at the right altitude and produces much more consistent landings.



The only real drama we had with the landing was that we came in a bit fast, and the plane ballooned more than it should have on the flare. The issue was that we hadn't used a shallow enough approach. Combined with the quite strong (14 knot) cross-wind it was an "interesting" landing.



We also should have set the landing point a bit closer to the end of the runway. We had put it quite a way along the runway as we weren't sure if the laser rangefinder would pick up anything strange as it crossed over the road and airport boundary fence, but in hindsight we'd put the touchdown point a bit too close to the geofence. Full size glider operations running at the same time meant only part of the runway was available for OBC teams to use.



The landing was entirely successful, and was probably better than a manual landing would have been by me in the same wind conditions (I'm only a mediocre pilot, and landings are my weakest point), but we certainly can do better. Paul Riseborough is already looking at ways to improve the autoland to hopefully produce something that produces a round of applause from spectators in future landings.



Radio performance



Another part of the mission that is worth looking at is the radio performance. We had two radio links to the aircraft - one was a RFD900 900MHz radio, and that performed absolutely flawlessly as usual. We had around 40 dB of fade margin at a range of over 6km, which is absolutely huge. Every team that flew in the OBC this year used a RFD900, which is a great credit to Seppo and the team at RFDesign.



The 2nd radio was a Ubiquity Rocket M5, which is a 5.8GHz ethernet bridge. We used an active antenna tracker this year for the 5.8GHz link, with a 28dBi MIMO antenna on the ground, and a 10dBi MIMO omni antenna in the aircraft (the protrusion from the top of the fuselage is for the antenna). The 5.8GHz link gave us lots of bandwidth for the images, but was not nearly as reliable as the RFD900 link. It dropped out 6 times over the course of the mission, with the longest dropout lasting just over a minute. The dropouts were primarily caused by magnetometer calibration on the antenna tracker - during the mission we had to add some manual trim to the tracker to improve the alignment. That worked, but really we should have used a ground antenna with a bit less gain (maybe around 24dBi) to give us a wider beam width.



Another alternative would have been to use a lower frequency. The 5.8GHz Rocket gives fantastic bandwidth, but we don't really need that much bandwidth for our system. The Robota team used 2.4GHz Ubiquity radios and much simpler antennas and ended up with a much better link than we had. The difference in path loss between 2.4GHz and 5.8GHz is quite significant.



The reason we didn't use the 2.4GHz gear is that we do most of our testing at a local MAAA flying club, and we know that if someone crashes their expensive model while we have a powerful 2.4GHz radio running then there will always be the thought that our radio may have caused interference with their 2.4GHz RC link.



So we're now looking into the possibility of using a 900MHz ethernet bridge. The Ubiquity M900 looks like a real possibility. It doesn't offer nearly as much bandwidth as the 5.8GHz or 2.4GHz radios as Australia only has 13MHz of spectrum available in the 900MHz band for ISM use, but that should still be enough for our application. We have heard that the spread spectrum M900 doesn't significantly interfere with the RFD900 in the same band (as the RFD900 is a TDM frequency hopping radio), but we have yet to test that theory.



Alternatively we may use two RFD900s in the same aircraft, with different numbers of hopping channels and different air data rates to minimise interference. One would be dedicated to telemetry and the other to image data. A RFD900 at 128kbps should be plenty for our cuav imaging system as long as the "live camera view" window is set to quite a low resolution and update rate.



Team cooperation



One of the most notable things about this years competition was just how friendly the discussions between the teams were. The competition has a great spirit of cooperation and it really is a fantastic experience to work closely with so many UAV developers from all over the world.



I don't really have time to go through all the teams that attended, but I do want to mention some of the highlights for me. Top of the list would have to be meeting Ben and Daniel Dyer from team SFWA. They put in an absolutely incredible effort to build their own autopilot from scratch. Their build log at http://au.tono.my/log/index.html is incredible to read, and shows just what can be achieved in a short time with enough talent. It was fantastic that they completed the challenge (the first team to ever do so) and I look forward to seeing how they take their system forward.



I'd also like to offer a tip of the hat to Team Swiss Fang. They used the PX4 native stack on a Pixhawk and it was fantastic to see how far they pushed that autopilot stack in the lead up to the competition. That is one of the things that competitions like the OBC can do for an autopilot - push it to much higher levels.



Team OpenUAS also deserves a mention, and I was especially pleased to meet Christophe who is one of the key people behind the Paparrazzi autopilot. Paparrazzi is a real pioneer in the field of amateur autopilots. Many of the posts we make on "ardupilot has just achieved X" on diydrones could reasonably be responded to by saying "Paparrazzi did that 3 years ago". The OpenUAS team had bad luck in both the 2012 competition and again this year. This time round it was an airspeed sensor failure that led to a crash soon after takeoff, which is really tragic given the effort they have put in and the pedigree of their autopilot stack.



The Robota team also did very well, coming in second behind our team. Particularly impressive was the performance of the Goose autopilot on a quite small foam plane in the wind over Kingaroy. The automatic landing was fantastic. The Robota team used a much simpler approach, just using a 2.4GHz Ubiquity link to send a digital video stream to 3 video monitors and having 3 people staring at those screens to find Joe. Extremely simple, but it worked. They were let down a bit by the drop accuracy in the wind, but a great effort done with style.



I was absolutely delighted when Team Thunder, who were also running APM:Plane, completed the challenge, coming in 4th place. They built their system partly on the image recognition code we had released, which is exactly what we hoped would happen. We want to see UAV developers building on each others work to make better and better systems, so having Team Thunder complete the mission was great.



Overall ardupilot really shone in the competition. Over half the teams that flew in the competition were running ardupilot. Our community has shown that we can put together systems that compete with the best in the world. We've come a long way in the last few years and I'm sure there is a bright future for more developments in the UAV S&R space that will see ardupilot saving lives on a regular basis.



Thank you



On behalf of CanberraUAV I'd like to offer a huge thank you to the OBC organisers for the massive effort they have put in over so many years to run the competition. Back in 2007 when the competition started it took real insight for Rod Walker and Jon Roberts to see that a competition of this nature could push amateur UAV technology ahead so much, and it took a massive amount of perseverance to keep it going to the point that teams were able to finally complete the competition. The OBC has had a huge impact on amateur UAV technology.



We'd also like to thank our sponsors, 3DRobotics, who have been a huge help for CanberraUAV. We really couldn't have done it without you. Working with 3DR on this sort of technology is a great pleasure.



Next steps



Completing the Outback Challenge isn't the end for CanberraUAV and we are already starting discussions on what we want to do next. I've posted some ideas on our mailing list and we would welcome suggestions from anyone who wants to take part. We've come a long way, but we're not yet at the point where putting together an effective S&R aircraft is easy.

Categories: thinktime

Andrew McDonnell: (UPDATE) Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

Sat 04th Oct 2014 10:10

Of course, there are more things I had known but not fully internalised yet. Of course.

Many MIPS architectures, and specifically, most common router architectures, don't have hardware support for NX.

 

Yet. It surely won't be long.

 

My own feeling in this age of Heartbleed and Shellshock is we should get ahead of the curve if we can – if a distribution supports NX in the toolchain then when a newer SOC arrives there is one less thing that we can forget about.

If I had bothered to keep reading instead of hacking I might have rediscovered this sooner. But then I would know significantly less about embedded toolchains and how to debug and patch them. Anyway, a determined user could also cherry-pick emulated NX protection from PAX.

When they Google this problem they will at least find my work.

 

How else to  learn?  :-)

Categories: thinktime

Andrew McDonnell: Evaluating the security of OpenWRT (part 3) adventures in NOEXECSTACK’land

Sat 04th Oct 2014 01:10

To recap our experiments to date, out of the box OpenWRT seems to have sporadic coverage of various Linux hardening measures unless you do a bit of extra work.

One metric of interest not closely examined to date is the NOEXECSTACK attribute on executable binaries and libraries. When coupled with kernel support, this attribute, if enabled, disallows execution of code in the stack memory area of a program, thus preventing an entire class of vulnerabilities from working. I mentioned NOEXECSTACK in passing previously; from the checksec report we saw that the x86 build has 100% coverage of NOEXECSTACK, whereas the MIPS build was almost completely lacking.

For a quick introduction to NOEXECSTACK, see http://wiki.gentoo.org/wiki/Hardened/GNU_stack_quickstart.
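
As a quick way of checking an individual binary (my own example, not from the original post), the GNU_STACK program header records whether the stack is executable:

readelf -lW /path/to/binary | grep GNU_STACK

A flags field of RW means the stack is marked non-executable; RWE means NOEXECSTACK is missing.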

Down the Toolchain Rabbit Hole

As far as challenging detective exercises go, this one was a bit of a doozy, for me at least. Essentially I had to turn the OpenWRT build system inside out to understand how it worked, and then do the same with uClibc, just to learn where to start. After rather a few false starts, the culprit turned out to be somewhere completely different.

First, OpenWRT by default uses uClibc as the C library, which is the bedrock upon which the rest of the user space is built. The C library is not just a standard package, however. OpenWRT, like the majority of typical embedded Linux systems, employs a “toolchain” or “buildroot” architecture. Simply put, this packages together the C/C++ compiler, the assembler, the linker, the C library and various other core components in a way that lets the remainder of the firmware be built without any knowledge of how this layer is put together.

This is a Good Thing as it turns out, especially when cross-compiling, i.e. when building the OpenWRT firmware for a CPU or platform (the TARGET) that is different from that where the firmware build happens (the HOST.)  Especially where everything is bootstrapped from source, as OpenWRT is.

The toolchain is combined from the following components:

  • binutils — provides the linker (ld), the assembler, and code for manipulating ELF files (Linux binaries) and object libraries
  • gcc — provides the C/C++ compiler and, often overlooked, libgcc, a library of various “intrinsic” functions such as optimisations for certain C library routines, amongst others
  • A C library — in this case, uClibc
  • gdb — a debugger

Now all these elements need to be built in concert, and installed to the correct locations, and to complicate matters, the toolchain actually has multiple categories of output:

  • Programs that run on the HOST that produce programs and libraries that run on the TARGET (such as the cross compiler)
  • Programs that run on the TARGET (e.g. ldd, used for scanning dependencies)
  • Programs and libraries that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build other programs that run on the HOST to perform various tasks related to the above
  • Header files that are needed to build programs and libraries that run on the TARGET
  • Even programs that run on the TARGET to produce programs and libraries that run on the TARGET (a target-hosted C compiler!)

Confused yet?

All this magic is managed by the OpenWRT build system in the following way:

  • The toolchain programs are unpacked and built individually under the build_dir/toolchain directory
  • The results of the toolchain build that are designed to run on the host are installed under staging_dir/toolchain
  • The partial toolchain under staging_dir is used to build the remaining items under build_dir, which are finally installed to staging_dir/target/blah-rootfs
  • (this is an approximation; build OpenWRT for yourself to find the exact naming conventions, and see the sketch after this list)
  • The kernel headers are an intrinsic part of this because of the C library, so along the way a pass over the target Linux kernel source is required as well.
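
For anyone who wants to poke at this layout themselves, a rough sketch of the commands involved, assuming an OpenWRT tree is already checked out and configured via make menuconfig (the exact targets and directory names can vary between releases, so treat this as illustrative):

# build just the toolchain, then look at the directories described above
make toolchain/install V=s
ls -d build_dir/toolchain-* staging_dir/toolchain-*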

OpenWRT is flexible enough to allow the C compiler to be changed (e.g. between stock gcc 4.6 and Linaro gcc 4.8), the binutils version to be changed, and even the C library to be switched between different implementations (uClibc vs eglibc vs musl).

OpenWRT fetches the sources for all these things, then applies a number of local patches, before building.

We will need to refer to this later.

Confirming the Problem and Fishing for Red Herrings.

The first thing to note is that x86 has no problem, but MIPS does, and I want to run OpenWRT on various embedded devices with MIPS SOCs. Without that I might never have bothered digging deeper!

Of course I dived in initially and took the naive brute-force approach: I patched OpenWRT to apply the override flag to the linker, -Wl,-z,noexecstack. This was a bit unthinking; after all, x86 did not need it.
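
For context, the kind of override I mean is a one-line addition to the target link flags. This is a hedged sketch only, since I am not reproducing the exact file I patched, and TARGET_LDFLAGS is simply how OpenWRT-era build systems typically spell that variable:

# ask the linker to mark the stack non-executable in everything it produces
TARGET_LDFLAGS += -Wl,-z,noexecstack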

Doing this gave partial success. In fact most programs gained NOEXECSTACK, except for a chunk of the uClibc components, busybox and, tellingly as it turned out, libgcc_s.so. That is, core components used by nearly everything. Of course.

(Spoiler: modern Linux toolchain implementations actually enable NOEXECSTACK by DEFAULT for C code! Which was an important fact I forgot at this point! Silly me.)

At this point, I managed to overlook libgcc_s.so and decided to focus on uClibc. This decision would greatly expand my knowledge of OpenWRT, uClibc and embedded build systems, and do nothing to solve the problem the proper way!

OpenWRT builds uClibc as a toolchain package, which basically means it abuses Makefiles to generate a uClibc configuration file partly derived from the OpenWRT config settings, and eventually calls the uClibc top-level makefile to build uClibc. This can only be done after building binutils and two of the three stages of gcc.
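
As a sanity check along the way, it is worth confirming that the generated uClibc configuration really does ask for the hardening option. A sketch, assuming the usual uClibc option name UCLIBC_BUILD_NOEXECSTACK and a Barrier-Breaker-style build_dir layout (both may differ on your tree):

# should print UCLIBC_BUILD_NOEXECSTACK=y if OpenWRT requested it
grep NOEXECSTACK build_dir/toolchain-*/uClibc-*/.config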

At this point I still did not fully understand how NOEXECSTACK really should be employed, which is probably an artefact of working on this stuff late at night and not reading everything as carefully as I might have. So I did the obvious and incorrect thing and worked out how to patch uClibc to force -Wl,-z,noexecstack through its build as well. What I had to do to achieve that could almost fill another blog article, so I’ll skip it for brevity. Anyway, this did not solve the problem.

Finally I turned on all the debug output and examined the build:

make V=csw toolchain/uClibc/compile

(Aside: the OpenWRT documentation mentions using V=s to turn on some verbosity, but to get the actual compiler and linker commands of the toolchain build you need the extra flags. I should probably try to update the OpenWRT wiki, but I have invested so much time in this that I might have to leave that as an exercise for the reader.)

All the libraries were being linked using the -Wl,-z,noexecstack flag, yet some still failed checksec. Argh!
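
For the record, a crude way to repeat that kind of scan without checksec is to grep the GNU_STACK program headers directly. A sketch, assuming the target root lands somewhere under staging_dir (the exact path naming varies by target and release):

# list every shared object in the target root whose stack is still marked executable (RWE)
for f in staging_dir/target-*/root-*/lib/*.so*; do
    readelf -lW "$f" 2>/dev/null | grep -q 'GNU_STACK.*RWE' && echo "execstack: $f"
done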

I should also note that repeating this process over and over gets tedious, taking about 20 minutes to build the toolchain plus a minimal target firmware on my quad-core AMD Phenom. Don't delete the build_dir/host and staging_dir/host directories or it doubles!

So something else was going on.

Upgrades and trampolines, or not.

I sought help from the uClibc developers' mailing list, who suggested I first try using up-to-date software. This was a fair point, as OpenWRT is using a two-year-old release of uClibc, a one-year-old release of binutils, and so on.

This of course entailed having to learn how to patch OpenWRT to give me that choice.

So another week later, around work and family and life, I found some time to do this, and discovered that the problem persisted.

At this point I revisited the Gentoo hardening guide. After some detective work I discovered that several MIPS assembler files inside uClibc did not actually have the recommended code fragments. Aha! I thought. Incorrectly again, as I should have realised: uClibc has already thought of this, and when NOEXECSTACK is configured, as it is for OpenWRT, uClibc passes a different flag to the assembler that has the same effect for assembler files. And of course, after I patched about 17 .S files and waited another half hour, the checksec scan was still unsuccessful. Red herring again!
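
For reference, the Gentoo-recommended fragment is a one-line section directive at the end of each assembly file, and the big-hammer equivalent is to pass --noexecstack to the assembler, which is effectively what uClibc's own build does. A sketch using a hypothetical file name, and noting that the @progbits versus %progbits spelling differs between architectures:

# what the guide suggests appending to each hand-written .S file
printf '.section .note.GNU-stack,"",@progbits\n' >> some_helper.S

# or, from the compiler driver, mark the object clean without touching the source
gcc -Wa,--noexecstack -c some_helper.S -o some_helper.o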

I started to get desperate when I read about some compiler systems that use something called a ‘trampoline’. So I went back to the uClibc mailing list.

At this point I would like to thank the uClibc developers for being so patient with me, as the solution was now near at hand.

Cutting edge patches and a wrinkle in time.

One of the uClibc developers pointed me to a patch posted on the gcc mailing list. As fate would have it, the patch was dated 10 September 2014, which was _after_ I started on these investigations. This actually went full circle back to libgcc_s.so, the small library I had passed over in order to focus on uClibc. This target library itself has some assembly files, which were not built with the noexecstack option and did not include the Gentoo-suggested assembly pragma. The patch also applies to gcc, not to uClibc, and of course is outside binutils, which was the other component I had to upgrade. The fact that libgcc_s.so was not clean should maybe have pointed me to look at gcc, and it did cross my mind. But we all have to learn somehow. Without all the above, my knowledge of the internals of all these systems would be the poorer.

So I applied this patch, and finally, bliss: a sea of green NX-enabled flags, without, in the end, any modifications actually required to the older version of uClibc used inside OpenWRT.

This even fixed busybox's NX status without modification. So this empirically confirms what I had read previously and also overlooked to my detriment: NOEXECSTACK is aggregated across all linked libraries, and if one is missing the marking it pollutes the lot.
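
If you want to see that aggregation rule in miniature, one assembly object with no GNU-stack note is enough to taint an otherwise clean program. A small demonstration for an ordinary desktop toolchain; note that newer linkers may warn about the missing note and their defaults can differ, so treat the RWE result as typical of the toolchains discussed here rather than guaranteed:

# an assembly file with no .note.GNU-stack section at all
printf '/* deliberately empty */\n' > empty.S
printf 'int main(void) { return 0; }\n' > main.c
gcc -c empty.S -o empty.o
gcc main.c empty.o -o demo
readelf -lW demo | grep GNU_STACK    # typically shows RWE, because of empty.o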

Postscript

Now I have to package the patch so that it can be submitted to OpenWRT, initially against Barrier Breaker, given that it has just been released!

Then I will probably need to repeat it against all the supported gcc versions and submit separate patches for those. That will get a bit tedious; thankfully I can start a test build and then go away…

Categories: thinktime
