
Lev Lafayette: Password Praise in the Future Tense

Planet Linux Australia - Fri 08th Apr 2016 22:04

Apropos the previous post, I am coming to the conclusion that universities are very strange places when it comes to password policies. Mind you, it shouldn't really come as much of a surprise - the choices of technology adopted are often so mind-bogglingly strange that one is tempted to conclude that the decisions are more political than technical. Of course, that would never happen in the commercial world. All this aside, consider the password policy of a certain Victorian university.

Categories: thinktime

Colin Charles: FOSDEM 2016 notes

Planet Linux Australia - Fri 08th Apr 2016 20:04

While I was on the committee for the FOSDEM MySQL & Friends devroom, I didn’t speak in that devroom (instead I spoke in the distributions devroom). But when I had time to pop in, I did take some notes on sessions that were interesting to me, so here they are. I really did enjoy Yoshinori Matsunobu’s session (outside the devroom) on RocksDB and MyRocks, and I highly recommend you watch the video, as these notes can’t be complete without the great explanation available in the slide deck. Anyway, there are videos from the MySQL & Friends devroom.

MySQL & Friends Devroom
MySQL Group Replication or how good theory gets into better practice – Tiago Jorge
  • Multi-master update everywhere with built-in automatic distributed recovery, conflict detection and group membership
  • Group replication added 3 PERFORMANCE_SCHEMA tables
  • If a server leaves the group, the others will be automatically informed (either via a crash or if you execute STOP GROUP REPLICATION)
  • Cloud friendly, and it is self-healing. Integrated with server core via a well-defined API. GTIDs, row-based replication, PERFORMANCE_SCHEMA. Works with MySQL Router as well.
  • Multi-master update everywhere. Conflicts will be detected and dealt with via the first-committer-wins rule. Any 2 transactions on different servers can write to the same tuple (a conceptual sketch of this rule follows these notes).
  • labs.mysql.com / mysqlhighavailability.com
  • Q: When a node leaves a group, will it still accept writes? A: If you leave voluntarily, it can still accept writes as a regular MySQL server (this needs to be checked)
  • Online DDL is not supported
  • Check out the video
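
As a purely conceptual illustration of the first-committer-wins rule mentioned above (this is a toy Python model, not Group Replication’s actual certification code, and every name in it is invented):

# Toy "first committer wins" certifier. Each transaction carries the version
# it saw at start and the set of rows it wants to write; a write only passes
# certification if nobody committed a newer version of those rows first.
class ToyCertifier:
    def __init__(self):
        self.committed_version = {}  # row key -> version of the last committed write

    def certify(self, txn):
        for key in txn["writes"]:
            if self.committed_version.get(key, 0) > txn["snapshot"]:
                return False  # someone else committed first: roll this transaction back
        for key in txn["writes"]:
            self.committed_version[key] = txn["snapshot"] + 1
        return True

certifier = ToyCertifier()
t1 = {"snapshot": 0, "writes": ["row:42"]}
t2 = {"snapshot": 0, "writes": ["row:42"]}   # concurrent write to the same tuple
print(certifier.certify(t1))  # True  -> the first committer wins
print(certifier.certify(t2))  # False -> conflict detected, transaction rolled back
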
ANALYZE for statements – Sergei Petrunia
  • a lot like EXPLAIN ANALYZE (in PostgreSQL) or PLAN_STATISTICS (in Oracle)
  • Looks like explain output with execution statistics
  • slides and video
Preparse Query Rewrite Plugins – Sveta Smirnova / Martin Hansson
  • martin.hansson@oracle.com
  • Query rewriting with a proxy might be too complex, so they thought of doing it inside the server. There is a pre-parse (string-to-string) and a post-parse (parse tree) API. Pre-parse: low overhead, but no structure. Post-parse: retains structure, but requires re-parsing (no destructive editing), you need to traverse the parse tree, and it only works on SELECT statements (a conceptual sketch of the pre-parse idea follows these notes)
  • The query rewrite API builds on top of the Audit API, and then you’ve got the pre-parse/post-parse APIs on top that call out to the plugins
  • video
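
To picture what a pre-parse, string-to-string rewrite means, here is a toy Python sketch. It is conceptual only: the real MySQL query rewrite plugin API is a C interface, and the rule format and function name below are invented for illustration.

import re

# A pre-parse rewrite sees only the raw SQL text (cheap, but with no parse-tree
# structure to lean on), so a toy version is just pattern substitution.
REWRITE_RULES = [
    (re.compile(r"\bSELECT\s+\*\s+FROM\s+orders\b", re.IGNORECASE),
     "SELECT id, total FROM orders"),
]

def preparse_rewrite(statement):
    for pattern, replacement in REWRITE_RULES:
        statement = pattern.sub(replacement, statement)
    return statement

print(preparse_rewrite("select * from orders WHERE id = 1"))
# -> "SELECT id, total FROM orders WHERE id = 1"
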
Fedora by the Numbers – Remy DeCausemaker
MyRocks: RocksDB Storage Engine for MySQL (LSM Databases at Facebook) – Yoshinori Matsunobu
  • SSD/Flash is getting affordable but MLC Flash is still expensive. HDD has large capacity but limited IOPS (reducing rw IOPS is very important and reducing write is harder). SSD/Flash has great read iops but limited space and write endurance (reducing space here is higher priority)
  • Punch-hole compression in 5.7 is aligned to the sector size of your device; a flash device is basically 4KB, not 512 bytes, so you’re wasting a lot of space and the compression is inefficient
  • LSM tends to have a read penalty compared to a B-Tree engine like InnoDB. A good way to reduce the read penalty is a Bloom filter (check whether a key may exist without reading data, and skip the read I/O if it definitely does not exist); see the sketch after these notes
  • Another penalty is for deletes: they are written as tombstones, so there is a workaround called SingleDelete.
  • LSMs are ideal for write heavy applications
  • Similar features as InnoDB, transactions: atomicity, MVCC/non-locking consistent read, read committed, repeatable read (PostgreSQL-style), crash-safe slave and master. It also has online backup (logical backup by mysqldump and binary backup by myrocks_hotbackup).
  • Much smaller space and write amplification compared to InnoDB
  • Reverse order index (Reverse Column Family). SingleDelete. Prefix bloom filter. Mem-comparable keys when using case sensitive collations. Optimizer statistics for diving into pages.
  • RocksDB is great for scanning forward but ORDER BY DESC queries are slow, hence they use reverse column families to make descending scan a lot faster
  • watch the video
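
As a minimal illustration of the Bloom filter idea above (this is not RocksDB’s actual prefix bloom filter; the size and hashing scheme below are arbitrary):

import hashlib

# Before paying the read I/O to look for a key, ask the filter:
# "definitely not present" lets you skip the read; "maybe present" does not.
class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

bf = BloomFilter()
bf.add(b"user:1001")
print(bf.might_contain(b"user:1001"))  # True: may exist, so go do the read
print(bf.might_contain(b"user:9999"))  # almost certainly False: skip the read I/O
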
Categories: thinktime

Colin Charles: (tweet) Summary of Percona Live 2015

Planet Linux Australia - Fri 08th Apr 2016 20:04

The problem with Twitter is that we talk about something and before you know it, people forget. (e.g. does WebScaleSQL have an async client library?) How many blog posts are there about Percona Live Santa Clara 2015? This time (2016), I’m going to endeavour to write more than just tweet – I want to remember this stuff, and search the archives (and also note the changes that happen in this ecosystem). And maybe you do too. So look forward to more blogs from the Percona Live Data Performance Conference 2016. In the meantime, here are the tweets in chronological order from my Twitter search.

  • crowd filling up the keynote room for #perconalive
  • beginning shortly, we’ll see @peterzaitsev at #perconalive doing his keynote
  • #perconalive has over 1,200 attendees – oracle has 20 folk, with 22 folk from facebook
  • #perconalive is going to be in Amsterdam sept 21-22 2015 (not in London this year). And in 2016, April 18-21!
  • We have @PeterZaitsev on stage now at #perconalive
  • 5 of the 5 top websites are powered by MySQL – an Oracle ad – alexa rankings? http://www.alexa.com/topsites #perconalive
  • now we have Harrison Fisk on polyglot persistence at facebook #perconalive
  • make it work / make it fast / make it efficient – the facebook hacker way #perconalive
  • a lot of FB innovation goes into having large data sizes with short query time response #perconalive
  • “small data” to facebook? 10’s of petabytes with <5ms response times. and yes, this all sits in mysql #perconalive
  • messages eventually lands in hbase for long term storage for disk #perconalive they like it for LSM
  • Harrison introduces @RocksDB to be fast for memory/flash/disk, and its also LSM based. Goto choice for 100’s of services @ FB #perconalive
  • Facebook Newsfeed is pulled from RocksDB. 9 billion QPS at peak! #perconalive
  • Presto works all in memory on a streaming basis, whereas Hive uses map/reduce. Queries are much faster in Presto #perconalive
  • Scuba isn’t opensource – real time analysis tool to debug/understand whats going on @ FB. https://research.facebook.com/publications/456106467831449/scuba-diving-into-data-at-facebook/ … #perconalive
  • InnoDB as a read-optimized store and RocksDB as a write-optimized store — so RocksDB as storage engine for MySQL #perconalive
  • Presto + MySQL shards is something else FB is focused on – in production @ FB #perconalive
  • loving the woz keynote @ #perconalive – wondering if like apple keynotes, we’ll see a “one more thing” after this ;)
  • “i’m only a genius at one thing: that’s making people think i’m a genius” — steve wozniak #perconalive
  • Happiness = Smiles – Frowns (H=S-F) & Happiness = Food, Fun, Friends (H=F³) Woz’s philosophy on being happy + having fun daily #perconalive
  • .@Percona has acquired @Tokutek in a move that provides some consolidation in the MySQL database market and takes..
  • MySQL Percona snaps up Tokutek to move onto MongoDB and NoSQL turf http://zd.net/1ct6PEI by @wolpe
  • One more thing – congrats @percona @peterzaitsev #perconalive Percona has acquired Tokutek with storage engines for MySQL & MongoDB – @PeterZaitsev #perconalive
  • Percona is now a player in the MongoDB space with TokuMX! #perconalive
  • The tokumx mongodb logo is a mongoose… #perconalive Percona will continue to support TokuDB/TokuMX to customers + new investments in it
  • @Percona “the company driving MySQL today” and “the brains behind MySQL”. New marketing angle? http://www.datanami.com/2015/04/14/mysql-leader-percona-takes-aim-at-mongodb/ …
  • We have Steaphan Greene from @facebook talk about @WebScaleSQL at #perconalive
  • what is @webscalesql? its a collaboration between Alibaba, Facebook, Google, LinkedIn, and Twitter to hack on mysql #perconalive
  • close collaboration with @mariadb @mysql @percona teams on @webscalesql. today? upstream 5.6.24 today #perconalive
  • whats new in @WebScaleSQL ? asynchronous mysql client, with support from within HHVM, from FB & LinkedIn #perconalive
  • smaller @webscalesql change (w/big difference) – lower innodb buffer pool memory footprint from FB & Google #perconalive
  • reduce double-write mode while still preserving safety. query throttling, server side statement timeouts, threadpooling #perconalive
  • logical readahead to make full table scans as much as 10x fast. @WebScaleSQL #perconalive
  • whats coming to @WebScaleSQL – online innodb defragmentation, DocStore (JSON style document database using mysql) #perconalive
  • MySQL & RocksDB coming to @WebScaleSQL thanks to facebook & @MariaDB #perconalive
  • So, @webscalesql will skip 5.7 – they will backport interesting features into the 5.6 branch! #perconalive
  • likely what will be next to @webscalesql ? will be mysql-5.8, but can’t push major changes upstream. so might not be an option #perconalive
  • Why only minor changes from @WebScaleSQL to @MySQL upstream? #perconalive
  • Only thing not solved with @webscalesql & upstream @mysql – the Contributor license agreement #perconalive
  • All @WebScaleSQL features under Apache CCLA if oracle can accept it. Same with @MariaDB @percona #perconalive
  • Steaphan Greene says tell Oracle you want @webscalesql features in @mysql. Pressure in public to use the Apache CLA! #perconalive
  • We now have Patrik Sallner CEO from @MariaDB doing the #perconalive keynote ==> 1+1 > 2 (the power of collaboration)
  • “contributors make mariadb” – patrik sallner #perconalive
  • Patrik Sallner tells the story about the CONNECT storage engine and how the retired Olivier Bertrand writes it #perconalive
  • Google contributes table/tablespace encryption to @MariaDB 10.1 #perconalive
  • Patrik talks about the threadpool – how #MariaDB made it, #Percona improved it, and all benefit from opensource development #perconalive
  • and now we have Tomas Ulin from @mysql @oracle for his #perconalive keynote
  • 20 years of MySQL. 10 years of Oracle stewardship of InnoDB. 5 years of Oracle stewardship of @MySQL #perconalive
  • Tomas Ulin on the @mysql 5.7 release candidate. It’s gonna be a great release. Congrats Team #MySQL #perconalive
  • MySQL 5.7 has new optimizer hint frameworks. New cost based optimiser. Generated (virtual) columns. EXPLAIN for running thread #perconalive
  • MySQL 5.7 comes with the query rewrite plugin (pre/post parse). Good for ORMs. “Eliminates many legacy use cases for proxies” #perconalive
  • MySQL 5.7 – native JSON datatypes, built-in JSON functions, JSON comparator, indexing of documents using generated columns #perconalive
  • InnoDB has native full-text search including full CJK support. Does anyone know how FTS compares to MyISAM in speed? #perconalive
  • MySQL 5.7 group replication is unlikely to make it into 5.7 GA. Designed as a plugin #perconalive
  • Robert Hodges believes more enterprises will use MySQL thanks to the encryption features (great news for @mariadb) #perconalive
  • Domas on FB Messenger powered by MySQL. Goals: response time, reliability, and consistency for mobile messaging #perconalive
  • FB Messenger: Iris (in-memory pub-sub service – like a queue with cache semantics). And MySQL as persistence layer #perconalive
  • FB focuses on tiered storage: minutes (in memory), days (flash) and longterm (on disks). #perconalive
  • Gotta keep I/O devices for 4-5 years, so don’t waste endurance capacity of device (so you don’t write as fast as a benchmark) #perconalive
  • Why MySQL+InnoDB? B-Tree: cheap overwrites, I/O has high perf on flash, its also quick and proven @ FB #perconalive
  • What did FB face as issues to address with MySQL? Write throughput. Asynchronous replication. and Failover time. #perconalive
  • HA at Facebook: <30s failover, <1s switchover, > 99.999% query success rate
  • Learning a lot about LSM databases at Facebook from Yoshinori Matsunobu – check out @rocksdb + MyRocks https://github.com/MySQLOnRocksDB/mysql-5.6 …
  • The #mysqlawards 2015 winners #PerconaLive
  • Percona has a Customer Advisory Board now – Rob Young #perconalive
  • craigslist: mysql for active, mongodb for archives. online alter took long. that’s why @mariadb has https://mariadb.com/kb/en/mariadb/progress-reporting/ … #perconalive
  • can’t quite believe @percona is using db-engines rankings in a keynote… le sigh #perconalive
  • “Innovation distinguishes between a leader and a follower” – Steve Jobs #perconalive
  • Percona TokuDB: “only alternative to MySQL + InnoDB” #perconalive
  • “Now that we have the rights to TokuDB, we can add all the cool features ontop of Percona XtraDB Cluster (PXC)” – Rob Young #perconalive
  • New Percona Cloud Tools. Try it out. Helps remote DBA/support too. Wonder what the folk at VividCortex are thinking about now #perconalive
  • So @MariaDB isn’t production ready FOSS? I guess 3/6 top sites on Alexa rank must disagree #perconalive
  • Enjoying Encrypting MySQL data at Google by @jeremycole & Jonas — you can try this in @mariadb 10.1.4 https://mariadb.com/kb/en/mariadb/mariadb-1014-release-notes/ … #perconalive
  • google encryption: mariadb uses the api to have a plugin to store the keys locally; but you really need a key management server #perconalive
  • Google encryption: temporary tables during query execution for the Aria storage engine in #MariaDB #perconalive
  • find out more about google mysql encryption — https://code.google.com/p/google-mysql/ or just use it at 10.1.4! https://downloads.mariadb.org/mariadb/10.1.4/ #perconalive
  • Encrypting MySQL data at Google – Percona Live 2015 #perconalive http://wp.me/p5WPkh-5F
  • The @WebScaleSQL goals are still just to provide access to the code, as opposed to supporting it or making releases #perconalive
  • There is a reason DocStore & Oracle/MySQL JSON 5.7 – they were designed together. But @WebScaleSQL goes forward with DocStore #perconalive
  • So @WebScaleSQL will skip 5.7, and backport things like live resize of the InnoDB buffer pool #perconalive
  • How to view @WebScaleSQL? Default GitHub branch is the active one. Ignore -clean branches, just reference for rebase #perconalive
  • All info you need should be in the commit messages @WebScaleSQL #perconalive
  • Phabricator is what @WebScaleSQL uses as a code review system. All diffs are public, anyone can follow reviews #perconalive
  • automated testing with jenkins/phabricator for @WebScaleSQL – run mtr on every commit, proposed diffs, & every night #perconalive
  • There is feature documentation, and its a work in progress for @WebScaleSQL. Tells you where its included, etc. #perconalive
  • Checked out the new ANALYZE statement feature in #MariaDB to analyze JOINs? Sergei Petrunia tells all #perconalive https://mariadb.com/kb/en/mariadb/analyze-statement/ …
Categories: thinktime

None of the above

Seth Godin - Fri 08th Apr 2016 18:04
In a world where nuance, uncertainty and shades of grey are ever more common, becoming comfortable with ambiguity is one of the most valuable skills you can acquire. If you view your job as taking multiple choice tests, you will...        Seth Godin
Categories: thinktime

Rusty Russell: Bitcoin Generic Address Format Proposal

Planet Linux Australia - Fri 08th Apr 2016 12:04

I’ve been implementing segregated witness support for c-lightning; it’s interesting that there’s no address format for the new form of addresses.  There’s a segregated-witness-inside-p2sh which uses the existing p2sh format, but if you want raw segregated witness (which is simply a “0” followed by a 20-byte or 32-byte hash), the only proposal is BIP142 which has been deferred.

If we’re going to have a new address format, I’d like to make the case for shifting away from bitcoin’s base58 (eg. 1At1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2):

  1. base58 is not trivial to parse.  I used the bignum library to do it, though you can open-code it as bitcoin-core does (see the sketch after this list).
  2. base58 addresses are variable-length.  That makes webforms and software mildly harder, but also eliminates a simple sanity check.
  3. base58 addresses are hard to read over the phone.  Greg Maxwell points out that the upper and lower case mix is particularly annoying.
  4. The 4-byte SHA check does not guarantee to catch the most common forms of errors (transposed or single incorrect letters), though it’s pretty good (a 1 in 4 billion chance of random errors passing).
  5. At around 34 letters, it’s fairly compact (36 for the BIP141 P2WPKH).
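
To make point 1 concrete, here is a minimal, illustrative base58 decoder in Python. Arbitrary-precision integers stand in for the bignum library mentioned above, the 4-byte checksum is not verified, and this is not the bitcoin-core implementation.

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_decode(address):
    # The whole string is one big base-58 number, hence the need for bignums.
    value = 0
    for char in address:
        value = value * 58 + B58_ALPHABET.index(char)
    decoded = value.to_bytes((value.bit_length() + 7) // 8, "big")
    # Leading '1' characters encode leading zero bytes.
    leading_zeroes = len(address) - len(address.lstrip("1"))
    return b"\x00" * leading_zeroes + decoded

print(base58_decode("1At1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2").hex())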

This is my proposal for a generic replacement (thanks to CodeShark for generalizing my previous proposal) which covers all possible future address types (as well as being usable for current ones):

  1. Prefix for type, followed by colon.  Currently “btc:” or “testnet:”.
  2. The full scriptPubkey using base 32 encoding as per http://philzimmermann.com/docs/human-oriented-base-32-encoding.txt.
  3. At least 30 bits for crc64-ecma, padded up to a multiple of 5 bits to reach a letter boundary.  This covers the prefix (as ASCII) plus the scriptPubKey.
  4. The final letter is the Damm algorithm check digit of the entire previous string, using this 32-way quasigroup. This protects against single-letter errors as well as single transpositions.

These addresses look like btc:ybndrfg8ejkmcpqxot1uwisza345h769ybndrrfg (41 digits for a P2WPKH) or btc:yybndrfg8ejkmcpqxot1uwisza345h769ybndrfg8ejkmcpqxot1uwisza34 (60 digits for a P2WSH) (note: neither of these has the correct CRC or check letter, I just made them up).  A classic P2PKH would be 45 digits, like btc:ybndrfg8ejkmcpqxot1uwisza345h769wiszybndrrfg, and a P2SH would be 42 digits.
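
As a rough sketch of item 2 of the proposal, here is just the base-32 encoding step in Python, using the alphabet from the human-oriented base-32 document linked above. The CRC from item 3 and the Damm check letter from item 4 are omitted, and the 20-byte hash is made up, so the output is not a complete, valid address in the proposed format.

ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32_encode(data):
    bits = "".join(format(byte, "08b") for byte in data)
    if len(bits) % 5:
        bits += "0" * (5 - len(bits) % 5)  # pad to a 5-bit letter boundary
    return "".join(ZBASE32[int(bits[i:i + 5], 2)] for i in range(0, len(bits), 5))

# A raw segregated witness P2WPKH scriptPubkey: a "0" followed by a push of a
# 20-byte hash (the hash bytes here are made up for illustration).
script_pubkey = bytes([0x00, 0x14]) + bytes(range(20))
print("btc:" + zbase32_encode(script_pubkey))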

While manually copying addresses is something which should be avoided, it does happen, and the cost of making them robust against common typographic errors is small.  The CRC is a good idea even for machine-based systems: it will let through less than 1 in a billion mistakes.  Distinguishing which blockchain is a nice catchall for mistakes, too.

We can, of course, bikeshed this forever, but I wanted to anchor the discussion with something I consider fairly sane.

Categories: thinktime

An interesting alternative to primaries

Seth Godin - Thu 07th Apr 2016 18:04
Presidential primaries in the US have several problems. We do it the way we do it because that's the usual way, not because it works particularly well. The biggest problem is that the people who vote are usually the most...        Seth Godin
Categories: thinktime

Jonathan Adamczewski: Aside: Over-engineered Min() [C++, variadic templates, constexpr, fold left]

Planet Linux Australia - Thu 07th Apr 2016 16:04

Q: Given a function constexpr int Min(int a, int b), construct a function constexpr int Min(Args... args) that returns the minimum of all the provided args. Fail to justify your over-engineering.

A: Rename Min(int, int) as MinImpl(int, int) or stick it in a namespace. Overloading the function is not only unnecessary, it gets in the way of the implementation.

constexpr int MinImpl(int a, int b) { return a < b ? a : b; }

Implement a constexpr fold left function. If we can use it for Min(), we should be able to do the same for Max(), and other similar functions. Should we be able to find any (#prematuregeneralization).

template<typename ArgA, typename ArgB, typename Func>
constexpr auto foldl(Func func, ArgA a, ArgB b) { return func(a, b); }

template<typename ArgA, typename ArgB, typename Func, typename ...Args>
constexpr auto foldl(Func func, ArgA a, ArgB b, Args... args) { return foldl(func, func(a, b), args...); }

Combine the two.

template<typename ...Args> constexpr int Min(Args... args) { return foldl(MinImpl, args...); }

Add the bare minimum amount of testing for a constexpr function: slap a static_assert() on it.

static_assert(Min(6, 4, 5, 3, 9) == 3, "Nope");

I did so with Visual Studio 2015 Update 2. It did not object.

Categories: thinktime

Kristy Wagner: Panama Papers – what does it mean for me?

Planet Linux Australia - Thu 07th Apr 2016 11:04

It is just a little insane that, in the process of setting up this web site, the motivation to launch it with a ‘real’ blog post simply had not transpired. Then along came the Panama Papers, which took me past the point of wanting to write to simply wanting to bury myself in research and never resurface. Data and I have a very loving relationship: I love to surround myself in it and it loves to share its secrets with me.

So, I buried myself and looked at all the tattles on big business and large personalities, and woke up this morning with two questions:

  • How did they manage to mine what they have and how much did they just hand over on platters to government agencies?
  • What does this mean to me?

The former is already being unveiled in the media.  So, I thought I would address the latter.  What do you think this means for you?

Well, the times are changing and, if it is successful in protecting its source, the International Consortium of Investigative Journalists has opened up a brand spanking new precedent for whistle-blowers. The survival of this whistle-blower is now completely dependent on not having given away their identity in their day-to-day interactions; dare a bead of sweat even think about beading near an eyebrow and their fate shall likely be sealed, along with that of anyone else with a vaguely guilty conscience. Some of the ‘powers’ named in the documents have the power to make you disappear and have no issue reaching onto foreign soil to achieve that goal.

So whilst they are on the chase, we innocents stand by and watch the ‘crazy’ unfold. In other countries with greater questionability over political ethics there is going to be political change; in some countries there shall be protest and the destabilisation of leaders, in others solidarity as the political spin continues. Here in Australia, though, it comes down to a bunch of corporations and about 800 individuals. In the scheme of things, without impact figures, this seems relatively small. Only time will reveal the size of the tax pie that got carved by offshore money movements.

Spineless ATO settlements

As the Australian Tax Office (ATO) tracks down these individuals and hits them up for tax evasion, we are going to see many, many quiet settlements. To avoid the difficulties and costs of due Court process the ATO is known to take settlements at as little as one tenth of the known debt, including historic debts by organisations now named as being associated with the Panama Papers. We are going to see a lot of these, but don’t hold your breath; it will be some time before they start building cases. As a result of settlements, these people rarely face non-financial punishments such as actual gaol time. I hope someone in political oversight gives the ATO a giant kick up the pants and expects that every one of these people gets prosecuted to the full extent of the law. But hope rarely equates to action in political spheres.

More Leaks

Once one person gets away with whistleblowing, others follow suit, somewhat like media attention on suicides. Being free of the offshore banking set-up doesn’t mean you are off the hook. If your actions are shady then know that even a sniff of it from others may lead to you being caught. Are you shuffling money, paying family members who are not actual employees, or have a raft of false invoices in your tax deductions? Yep, if you are then you are now more likely to have someone pull the rug out from under you, and it is going to be someone you know.

A company that holds nearly nothing on the public web has lost over two terabytes of data; what can someone with system access do to you? My first suggestion is to clean up your act and confess any sins before you get caught. Otherwise, you shall be at the mercy of others….

Sorry, I trailed off topic, reminiscing about the movie scene with Andy Garcia in Ocean’s 11, where his character, Terry Benedict, throws a tantrum demanding that staff find out how they (Ocean’s 11) hacked into his system. Whistle-blowers are going to do it if they are given strong enough encouragement and they suspect they know what they might find.

Dark Web Understanding

Disclosure of how the journalists went about mining this giant slab of data, and communicating about it securely over international boundaries, is going to give way to a better understanding of the Dark Web. Suddenly the fear associated with secure anonymised communication is going to make sense as a safe haven for those pursuing truth (and not just those peddling unsavoury wares). I think somewhere in the last 3-5 years we forgot about those proxies set up for people in foreign countries to be able to share what is ‘really’ going on with the rest of the world. I believe that as this story unfolds people will come to understand the differences between the dark web, the deep web and the internet and, rather than fear them, will simply accept each for what they are, knowing that all three are destined to evolve over time.

Closing Loopholes

The Australian Government can’t change international law or laws governing other sovereign nations, so what will they do to close loopholes? The easiest way is to tax the shit out of every dollar that heads offshore; some of that can be controlled in the first instance, but other, shadier means will be found around it. The way around that is to make moving money offshore more expensive than keeping it here. Some of the obvious tax avoidance avenues that will need to be closed through taxation will be:

  • dividends payments to foreign shareholders,
  • payments to benefactors of trusts who are not paying tax in Australia on the amount,
  • revenues paid back into foreign parent/holding companies,
  • fees paid on goods and services, possibly inflated or non-substantiable items, also through tariffs and duties, and
  • taxation on loans taken by Australian companies from overseas sources, or the limitation of interest deductibility from the same, including from parent/holding companies.

These are just the ones that spring to mind and the list is in no way comprehensive. The key thing in this, though, is that such measures would make our economy quite protectionist. (Not that this is necessarily always a bad thing.) It means that we will see the cost of imports rise, giving way for local business to be a preferred provider of products for our citizens, which is awesome, other than for industries that we have allowed to collapse over the last 30 years. Implementing protectionist tax reform ought to be worth it too; I actually figure that now would be the time to remove exemptions for mining whilst we are at it. Everyone knows that the international buy prices from Australia to the sister company that sells it on are just another loophole to be closed. Let’s face it, people still need the commodities that are being ripped from our land. If we are going to let our resources be removed from our own use, then it should benefit our nation far more than it does now.

In reality, though, implementing the closure of loopholes like those above has some serious economic impacts on international trade and our supply chains. It is going to take a long time for Cabinet parties to get their heads around the impact, let alone to debate the cost and benefit of such a decision. Don’t hold your breath for every loophole to be closed, but be sure that we are going to see some changes. Maybe, though, we can move the pressure off Mum and Dad investors who are using negative gearing to finance their retirement and instead focus on catching the real tax that is being let slide for the benefit of large multinationals and the one-percenters who are prepared to dabble in the potentially unlawful.

What about me?

Well, watch the tax space, and have a chat to people who understand the economy and international trade about what is happening politically. The means of gap closure will provide more opportunities in our country for us to build economies anew. It also means that we may see somewhat of a market crash as foreign investors are taxed for taking their money home, but we will find our way through, and the crash could open up opportunities that Generation Y and I are missing in respect of owning our own homes and securing personal investment in a meaningful way for the first time.

If we closed these gaps quickly enough for businesses to not have opportunity to jump loopholes, then we would also have a significant revenue impact which one would hope would be pushed to the citizens as investment into education and health.  The potential though is quite amazing because this eye opener is also a good means to revisit the short sightedness of the sale of public assets.  If we temporarily crashed the market by restricting foreign investment and cutting the ability for foreign ownership by sale, we have the opportunity for the Commonwealth as well as States and Territories to change the way they manage economics and asset allocation.

Also, for business owners, this is potentially a great time to consider your capability to be a provider to Government for goods and services.  There is going to be a counterbalance from this event which may just put your tender into the preference pool because as the owner you have a visible Australian face that pays tax right here in Australia.  Right now, that is a really good thing for you.

I am not a bad guy, but I do have offshore funds, what about me?

All I can say is come clean. Make sure you declare your offshore funds (whether through Mossack Fonseca or not) to the Australian Tax Office and make sure you take advice directly from them on the tax liabilities you face from your choice of tax minimisation, so that you do not cross into the realms of tax avoidance. If there is an infraction that has already occurred then upfront disclosure is your best option, especially before the ATO pulls your file together. In respect to this whistle-blower incident, Deputy Commissioner Michael Cranston detailed in an ATO press statement:

“The information we have includes some taxpayers who we have previously investigated, as well as a small number who disclosed their arrangements with us under the Project DO IT initiative. It also includes a large number of taxpayers who haven’t previously come forward, including high wealth individuals, and we are already taking action on those cases.”

So, just come clean.  (By the way, the ATO have already offered to treat you nicely if you do.)

Political Ethics has changed

Just before you start thinking that this is all that is going to happen…ponder this statement, also from Mr Cranston:

“The message is clear – taxpayers can’t rely on these secret arrangements being kept secret and we will act on any information that is provided to us.”

Does anyone else here notice a giant ethical shift from the events of Wikileaks, where the Government condemned such actions? In the era of the Wikileaks controversies, Governments were scared of the capability of insiders to release sensitive data, and they worked on logging and permissioning tools to deter anyone who might be tempted. But now? What is going on?

Okay, granted, in times of cutbacks it could be questioned whether our political leadership have any ethics at all in respect to where tax dollars are given and taken. However, the statement above flies in the face of our deontological political comfort zone, where even if a political leader messed up we have been encouraged to judge them on the intent of their actions rather than the outcome. Now, because the leak is corporate and to the Government’s benefit, the tables have turned to whistle-blower as hero. The fact that this act, given the role that Mossack Fonseca played, would likely be illegal white-collar crime in Australia shows a giant swing of political view that our Government has permitted to come through the taxation office: now is a time when our Government will embrace consequentialism – that the ‘rightness’ of an act shall be determined by the consequences it produces rather than the morality of the act itself.

How does that play out into politics and society?  That is a whole new conversation for another day.  But if you are keen then share your thoughts on what it will look like in the comments below, along with your thoughts and opinions on how the Panama Papers will affect you.

____

Featured image source: CC-By only: https://www.flickr.com/photos/famzoo/

Categories: thinktime

Kristy Wagner: Hello world!

Planet Linux Australia - Thu 07th Apr 2016 11:04

Sorry interwebs, I know it was peaceful without me being around blogging for the last 5 years, but it is too late for you.  I am back!

New domain, new site, new content.  I look forward to sharing some thoughts with you.

Categories: thinktime

And what else will you lie about?

Seth Godin - Wed 06th Apr 2016 19:04
When did companies start talking about, "unexpectedly high call volume?" Are they really so inept at planning that the call volume is unexpected? For months at a time? Even non-legacy companies like OpenTable are using it to describe their email...        Seth Godin
Categories: thinktime

The User’s Journey

a list apart - Wed 06th Apr 2016 00:04

A note from the editors: We’re pleased to share an excerpt from Chapter 5 of Donna Lichaw’s new book, The User’s Journey: Storymapping Products That People Love, available now from Rosenfeld Media.

Both analytics funnels and stories describe a series of steps that users take over the course of a set period of time. In fact, as many data scientists and product people will tell you, data tells a story, and it’s our job to look at data within a narrative structure to piece together, extrapolate, troubleshoot, and optimize that story.

In the case of FitCounter, our gut-check analysis and further in-person testing with potential users uncovered that the reason our analytics showed a broken funnel with drop-off at key points was because people experienced a story that read something like this:

  • Exposition: The potential user is interested in getting fit or training others.
  • Inciting Incident: She sees the “start training” button and gets started.
  • Rising Action:
    • She enters her username and password. (A tiny percentage of people would drop off here, but most completed this step.)
    • She’s asked to “follow” some topics, like running and basketball. She’s not really sure what this means or what she gets out of doing this. She wants to train for a marathon, not follow things. (This is where the first drop-off happened.)
  • Crisis: This is where the cliffhanger happens. She’s asked to “follow” friends. She has to enter sensitive Gmail or Facebook log-in credentials to do this, which she doesn’t like to do unless she completely trusts the product or service and sees value in following her friends. Why would she follow them in this case? To see how they’re training? She’s not sure she totally understands what she’s getting into, and at this point, has spent so much brain energy on this step that she’s just going to bail on this sign-up flow.
  • Climax/Resolution: If she does continue on to the next step, there would be no climax.
  • Falling Action: Eh. There is no takeaway or value to having gotten this far.
  • End: If she does complete the sign-up flow, she ends up home. She’d be able to search for videos now or browse what’s new and popular. Searching and browsing is a lot of work for someone who can’t even remember why they’re there in the first place. Hmmm…in reality, if she got this far, maybe she would click on something and interact with the product. The data told us that this was unlikely. In the end, she didn’t meet her goal of getting fit, and the business doesn’t meet its goal of engaging a new user.

Why was it so important for FitCounter to get people to complete this flow during their first session? Couldn’t the business employ the marketing team to get new users to come back later with a fancy email or promotion? In this case, marketing tried that. For months. It barely worked.

With FitCounter, as with most products and services, the first session is your best and often only chance to engage new users. Once you grab them the first time and get them to see the value in using your product or service, it’s easier to get them to return in the future. While I anecdotally knew this to be true with consumer-facing products and services, I also saw it in our data.

Those superfans I told you about earlier rarely became superfans without using the product within their first session. In fact, we found a sweet spot: most of our superfans performed at least three actions within their first session. These actions were things like watching or sharing videos, creating playlists, and adding videos to lists. These were high-quality interactions and didn’t include other things you might do on a website or app, such as search, browse, or generally click around.

With all of our quantitative data in hand, we set out to fix our broken usage flow. It all, as you can imagine, started with some (more) data…oh, and a story. Of course.

The Plan

At this point, our goals with this project were two-fold:

  • To get new users to complete the sign-up flow;
  • To acquire more “high-quality” users who were more likely to return and use the product over time.

As you can see, getting people to pay to upgrade to premium wasn’t in our immediate strategic roadmap or plan. We needed to get this product operational and making sense before we could figure out how to monetize. We did, however, feel confident that our strategy was headed in the right direction because the stories we were designing and planning were ones that we extrapolated from actual paying customers who loved the product. We had also been testing our concept and origin stories and knew that we were on the right track, because when we weren’t, we maneuvered and adapted to get back on track. So what, in this case, did the data tell us that we should do to transform this story of use from a cliffhanger, with drop-off at the crisis moment, to a more complete and successful story?

Getting to “Why”

While our quantitative analytics told us a “what” (that people were dropping off during our sign-up funnel), it couldn’t tell us the “why.” To better answer that question, we used story structure to figure out why people might drop off when they dropped off. Doing so helped us better localize, diagnose, and troubleshoot the problem. Using narrative structure as our guide, we outlined a set of hypotheses that could explain why there was this cliffhanger.

For example, if people dropped off when we asked them to find their friends, did people not want to trust a new service with their login credentials? Or did they not want to add their friends? Was training not social? We thought it was. To figure this out better, once we had a better idea of what our questions were, we talked to existing and potential customers first about our sign-up flow and then about how they trained (for example, alone or with others). We were pretty sure training was social, so we just needed to figure out why this step was a hurdle.

What we found with our sign-up flow was similar to what we expected. Potential users didn’t want to follow friends because of trust, but more so because it broke their mental model of how they could use this product. “Start training” was a strong call to action that resonated with potential users. In contrast, “follow friends,” was not. Even something as seemingly minute as microcopy has to fit a user’s mental model of what the narrative structure is. Furthermore, they didn’t always think of training as social. There were a plethora of factors that played into whether or not they trained alone or with others.

What we found were two distinct behaviors: people tend to train alone half the time and with others half the time. Training alone or with others depended on a series of factors:

  • Activity (team versus solitary sport, for example)
  • Time (during the week versus weekend, for example)
  • Location (gym versus home, for example)
  • Goals (planning to run a 5k versus looking to lose pounds, for example).

This was too complex of a math equation for potential users to do when thinking about whether or not they wanted to “follow” people. Frankly, it was more math than anyone should have to do when signing up for something. That said, after our customer interviews, we were convinced of the value of keeping the product social and giving people the opportunity to train with others early on. Yes, the business wanted new users to invite their friends so that the product could acquire new users. And, yes, I could have convinced the business to remove this step in the sign-up process so that we could remove the crisis and more successfully convert new users. However, when people behave in a certain way 50% of the time, you typically want to build a product that helps them continue to behave that way, especially if it can help the business grow its user base.

So instead of removing this troublesome cliffhanger-inducing step in the sign-up flow, we did what any good filmmaker or screenwriter would do: we used that crisis to our advantage and built a story with tension and conflict. A story that we hoped would be more compelling than what we had.

The Story

In order to determine how our new sign-up flow would work, we first mapped it out onto a narrative arc. Our lead designer and engineer wanted to jump straight into screen UI sketches and flow charts and our CEO wanted to see a fully clickable prototype yesterday, but we started the way I always make teams and students start: with a story diagram. As a team, we mapped out a redesigned sign-up flow on a whiteboard as a hypothesis, brick by brick (see Figure 5.20).

Fig. 5.20 A story map from a similar project with the storyline on top and requirements below.

This was the story, we posited, that a new user and potential customer should have during her first session with our product (see Figure 5.21). As you can see, we tried to keep it much the same as before so that we could localize and troubleshoot what parts were or weren’t working.

  • Exposition: She’s interested in getting fit or training others. (Same as before.)
  • Inciting Incident: She sees the “start training” button and gets started. (Same as before.)
  • Rising Action:
    • She enters her username and password. (This step performed surprisingly great, so we kept it.)
    • Build a training plan. Instead of “following” topics, she answers a series of questions so that the system can build her a customized training plan. Many questions—ultimately extending the on-boarding flow by 15 screens. 15! There is a method to this madness. Even though there are now many more questions, they get more engaging, and more relevant, question by question, screen by screen. The questions start broad and get more focused as they progress, feeling more and more relevant and personal. Designing the questionnaire for rising action prevents what could be two crises: boredom and lack of value.
  • Crisis: One of the last questions she answers is whether or not she wants to use this training plan to train with or help train anyone else. If so, she can add them to the plan right then and there. And if not, no problem—she can skip this step and always add people later.
  • Climax/Resolution: She gets a personalized training plan. This is also the point at which we want her to experience the value of her new training plan. She sees a graph of what her progress will look like if she sticks with the training plan she just got.
  • Falling Action: Then what? What happens after she gets her plan and sees how she might progress if she uses FitCounter? This story isn’t complete unless she actually starts training. So…
  • End: She’s home. Now she can start training. This initially involves watching a video, doing a quick exercise, and logging the results. She gets a taste of what it’s like to be asked to do something, to do it, and to get feedback in the on-boarding flow and now she can do it with her body and not just a click of the mouse. Instead of saying how many sit-ups she can do by answering a questionnaire, she watches a short video that shows her how to best do sit-ups, she does the exercise, and she logs her results. While humanly impossible to fully meet her goal of getting fit in one session, completing the story with this ending gets her that much closer to feeling like she will eventually meet her goal. Our hope was that this ending would function as a teaser for her next story with the product, when she continued to train. We wanted this story to be part of a string of stories, also known as a serial story, which continued and got better over time.

Once we plotted out this usage story, we ran a series of planning sessions to brainstorm and prioritize requirements, as well as plan a strategic roadmap and project plan. After we had our requirements fleshed out, we then sketched out screens, comics, storyboards, and even role-played the flow internally and in person with potential customers. We did those activities to ideate, prototype, and test everything every step of the way so that we could minimize our risk and know if and when we were on the right path.

We were quite proud of our newly crafted narrative sign-up flow. But before we could celebrate, we had to see how it performed.

The Results

On this project and every project since, we tested everything. We tested our concept story, origin story, and everything that came after and in between. While we were very confident about all of the work we did before we conceived of our new usage story for the sign-up flow, we still tested that. Constantly. We knew that we were on the right path during design and in-person testing because at the right point in the flow, we started getting reactions that sounded something like: “Oh, cool. I see how this could be useful.”

Once we heard that from the third, fourth, and then fifth person during our in-person tests, we started to feel like we had an MVP that we were not only learning from, but also learning good things from. During our concept-testing phase, it seemed like we had a product that people might want to use. Our origin story phase and subsequent testing told us that the data supported that story. And now, with a usage story, we actually had a product that people not only could use, but wanted to use. Lots.

Fig. 5.21 The story of what we wanted new users to experience in their first session with FitCounter.

As planned, that reaction came during our in-person tests, unprompted, near the end of the flow, right after people received their training plan. What we didn’t expect was that once people got the plan and went to their new home screen, they started to tap and click around. A lot. And they kept commenting on how they were surprised to learn something new. And they would not only watch videos, but then do things with them, like share them or add and remove them from plans.

But this was all in person. What about when we launched the new sign-up flow and accompanying product? This new thing that existed behind the front door. The redesign we all dreaded to do, but that had to be done.

I wish I could say that something went wrong. This would be a great time to insert a crisis moment into this story to keep you on the edge of your seat.

But the relaunch was a success.

The story resonated not just with our in-person testers, but also with a broader audience. So much so that the new sign-up flow now had almost double the completion rate of new users. This was amazing, and it was a number that we could and would improve on with further iterations down the line. Plus, we almost doubled our rate of new user engagement. We hoped that by creating a sign-up flow that functioned like a story, the result would be more engagement among new users, and it worked. We not only had a product that helped users meet their goals, but it also helped the business meet its goals of engaging new users. What we didn’t expect to happen so soon was the side effect of this increased, high-quality engagement: these new users were more likely to pay to use the product. Ten times more likely.

We were ecstatic with the results. For now.

A business cannot survive on first-time use and engagement alone. While we were proud of the product we built and the results it was getting, this was just one usage story: the first-time usage story. What about the rest? What might be the next inciting incident to kick off a new story? What would be the next beginning, middle, and end? Then what? What if someone did not return? Cliffhangers can happen during a flow that lasts a few minutes or over a period of days, months, or years. Over time, we developed stories big and small, one-offs and serials, improving the story for both customers and the business. Since we started building story-first, FitCounter has tripled in size and tripled its valuation. It is now a profitable business and recently closed yet another successful round of financing so that it can continue this growth.

Categories: thinktime

Depth of field

Seth Godin - Tue 05th Apr 2016 19:04
Focus is a choice. The runner who is concentrating on how much his left toe hurts will be left in the dust by the runner who is focusing on winning. Even if the winner's toe hurts just as much. Hurt,...        Seth Godin
Categories: thinktime

Ian Wienand: Image building in OpenStack CI

Planet Linux Australia - Tue 05th Apr 2016 15:04

Also titled minimal images - maximal effort!

A large part of the OpenStack Infrastructure team's recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins master to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Jenkins masters are subscribed to gearman as workers. It is these Jenkins masters that will consume the job requests from the queue and actually get the tests running. However, Jenkins needs two things to be able to run a job — a job definition (what to actually do) and a slave node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) in to job configurations for Jenkins. Each Jenkins master gets these definitions pushed to it constantly by Puppet, thus each Jenkins master instance knows about all the jobs it can run automatically. Zuul also knows about these job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customised orchestration tool called nodepool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue and decides what type of nodes need to start in what clouds to satisfy demand. Nodepool will monitor the start-up of the virtual-machines and register the new nodes to the Jenkins master instances.

  6. At this point, the Jenkins master has what it needs to actually get jobs started. When nodepool registers a host to a Jenkins master as a slave, the Jenkins master can now advertise its ability to consume jobs. For example, if a ubuntu-trusty node is provided to the Jenkins master instance by nodepool, Jenkins can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty slave. Jenkins will run the job as defined in the job-definition on that host — ssh-ing in, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. The Jenkins master will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused and also have no sensitive details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.
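
As a rough mental model of that hand-off (and only that: this is not how Zuul, gearman, nodepool or Jenkins are actually implemented, and every name below is invented), here is a toy Python sketch of jobs queued as (job-name, node-type) tuples being matched to single-use nodes.

from collections import deque

job_queue = deque()      # what the scheduler ("Zuul") puts into the job server ("gearman")
available_nodes = []     # what the node provider ("nodepool") registers with the masters

def schedule(job_name, node_type):
    job_queue.append((job_name, node_type))

def provide_node(node_type):
    available_nodes.append(node_type)

def run_ready_jobs():
    finished = []
    for job in list(job_queue):
        job_name, node_type = job
        if node_type in available_nodes:
            available_nodes.remove(node_type)  # a node is used once, then thrown away
            job_queue.remove(job)
            finished.append((job_name, node_type, "SUCCESS"))
    return finished

schedule("gate-tempest-dsvm-full", "ubuntu-trusty")
schedule("gate-project-unit-tests", "centos-7")
provide_node("ubuntu-trusty")
print(run_ready_jobs())  # only the job with a matching node type runs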

Image builds

So far we have glossed over how nodepool actually creates the images that it hands out for testing. Image creation, illustrated in step 8 above, contains a lot of important details.

Firstly, what are these images and why build them at all? These images are where the "rubber hits the road" — they are instantiated into the virtual-machines that will run DevStack, unit-testing or whatever else someone might want to test.

The main goal is to provide a stable and consistent environment in which to run a wide range of tests. A full OpenStack deployment results in hundreds of libraries and millions of lines of code all being exercised at once. The testing images are right at the bottom of all this, so any instability or inconsistency affects everyone, leading to constant fire-fighting and major inconvenience as all forward progress stops when CI fails. We want to support a wide number of platforms interesting to developers, such as Ubuntu, Debian, CentOS and Fedora, and we also want to make it easy to handle new releases and add other platforms. We want to ensure this can be maintained without too much day-to-day hands-on work.

Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug failures. We want jobs to rely on as few external resources as possible so tests are consistent and stable. This means caching things like the git trees tests might use (OpenStack just broke the 1000 repository mark), VM images, packages and other common bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.
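
As a rough sketch of the git-caching idea (the repository list, cache path and update strategy below are illustrative assumptions, not the actual element used by the real image builds), a daily refresh might look something like this.

import os
import subprocess

# Mirror each repository into the image so test jobs never need a full clone
# over the network; re-running this just fetches whatever is new.
REPOSITORIES = [
    "https://git.openstack.org/openstack/nova",
    "https://git.openstack.org/openstack/neutron",
]
CACHE_DIR = "/opt/git-cache"

def refresh_cache():
    for url in REPOSITORIES:
        target = os.path.join(CACHE_DIR, url.rsplit("/", 1)[-1] + ".git")
        if os.path.isdir(target):
            # Mirror already exists: just pick up new commits.
            subprocess.check_call(["git", "--git-dir", target, "fetch", "--prune", "origin"])
        else:
            subprocess.check_call(["git", "clone", "--mirror", url, target])

if __name__ == "__main__":
    refresh_cache()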

Snapshot images

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time, there are a number of issues that emerge:

  1. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  2. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  3. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.
  4. Providers don't have some images you want (like the latest Fedora), or have different versions, or different point releases. All of them update asynchronously, whenever they get around to it.

So the original incarnations of OpenStack CI images were based on these public images. Nodepool would start one of these provider images and then run a series of scripts on it — these scripts would firstly try to work around any quirks to make the images look as similar as possible across providers, and then do the caching, set up things like authorized keys and finish other configuration tasks. Nodepool would then snapshot this prepared image and start instantiating VMs based on these images into the pool for testing. If you hear someone talking about a "snapshot image" in an OpenStack CI context, that's likely what they are referring to.

Apart from the stability of the underlying images, the other issue you hit with this approach is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. Even with just Rackspace and the (now defunct) HP Cloud we would end up creating snapshot images for 4 or 5 platforms across a total of about 8 regions — meaning anywhere up to 40 separate image builds happening daily (you can see how ridiculous it was getting in the logging configuration used at the time). It was almost a fait accompli that some of these would fail every day — nodepool can deal with this by reusing old snapshots — but this leads to an inconsistent and heterogeneous testing environment.

Naturally there was a desire for something more consistent — a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads, and handles final steps like conversion to qcow2 or vhd.
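
For a rough feel of the element model, the toy below composes an image build out of elements, each contributing hooks to named build phases that run in a fixed order. Real diskimage-builder elements are directories of executable hook scripts and the phase names here are only approximate, so treat this as a sketch of the composition idea rather than the tool's actual interface.

    # Toy model of diskimage-builder's "element" idea: each element contributes
    # hooks to named build phases, and the build runs the phases in order.
    # Real elements are directories of hook scripts; this is only a sketch and
    # the phase and element names are approximate or invented.
    PHASES = ["root", "install", "post-install", "finalise"]

    class Element:
        def __init__(self, name, hooks):
            self.name = name
            self.hooks = hooks  # maps a phase name to a callable

    def build_image(elements):
        for phase in PHASES:
            for element in elements:
                hook = element.hooks.get(phase)
                if hook:
                    print(f"[{phase}] {element.name}")
                    hook()

    base = Element("ubuntu-minimal", {"root": lambda: None})      # e.g. debootstrap the base system
    net = Element("simple-init", {"install": lambda: None})       # e.g. provider-agnostic networking
    cache = Element("cache-git", {"post-install": lambda: None})  # e.g. pre-seed the git cache

    build_image([base, net, cache])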

nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still start with a fair amount of "stuff" on them. For example, cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can either conflict with parts of OpenStack or end up tacitly hiding real test requirements (the test doesn't specify a package, but it is present as part of another base dependency; things then break when the base dependencies change). The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests — you want everything explicitly included.
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers all have their various different networking schemes and configuration methods, which need to be handled consistently.

If you were starting this whole thing again, things like LXC/Docker to keep "systems within systems" might come into play and help alleviate some of the packaging conflicts. Indeed they may play a role in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron going on. And there are things like iSCSI kernel drivers that containers don't support well. And you need to support Ubuntu, Debian, CentOS and Fedora. And you have hundreds of developers already relying on what's there. So change happens incrementally, and in the meantime there is a clear need for a stable, consistent environment.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really just that — systems with essentially nothing on them. For Debian and Ubuntu this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important elements that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see the range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool; see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few things needed to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.
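
To give a sense of what nodepool is told about these images, the snippet below shows roughly the shape of a disk-image definition: a name, the list of elements to compose, the release, and environment variables passed to the build. nodepool's real configuration is YAML (see the diskimages section of the config file mentioned above); this Python rendering and the specific values in it are illustrative only.

    # Roughly the information a nodepool disk-image definition carries.  The
    # real configuration is YAML in project-config; this dict and its values
    # are an illustrative approximation, not the actual schema.
    diskimage = {
        "name": "ubuntu-trusty",
        "elements": [
            "ubuntu-minimal",   # debootstrap-based minimal base
            "simple-init",      # provider-agnostic network bring-up
            "nodepool-base",    # keys, users and other CI plumbing (example name)
            "cache-devstack",   # pre-seed caches used by jobs (example name)
        ],
        "release": "trusty",
        "env-vars": {
            "DIB_CHECKSUM": "1",  # example build-time switch
        },
    }

    # A daily image-build loop would hand this to diskimage-builder and then
    # upload the result to every provider.
    print(" ".join(diskimage["elements"]))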

So now, each day at 14:14 UTC, nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own), at which point it will start being used for CI jobs. If you wish to replicate this entire process, the build-image.sh script, run on an Ubuntu Trusty host in a virtualenv with diskimage-builder, will get you pretty close (let us know of any issues!).

DevStack and bare nodes

There are two major ways OpenStack projects test their changes:

  1. Running with DevStack, which brings up a small but fully-functional OpenStack cloud with the change-under-test applied. Generally tempest is then used to ensure that big-picture things like creating VMs, networks and storage are all working.
  2. Unit-testing within the project; i.e. what you do when you type tox -e py27 in basically any OpenStack project.

To support this testing, OpenStack CI ended up with the concept of bare nodes and devstack nodes.

  • A bare node was made for unit-testing. While tox has plenty of information about installing required Python packages into the virtualenv for testing, it doesn't know anything about the system packages required to build those Python packages. This means things like gcc and the library -devel packages that many Python packages use to build bindings. Thus the bare nodes had an ever-growing and not well-defined list of packages that were pre-installed during the image-build to support unit-testing. Worse still, projects didn't really know their dependencies but just relied on their tests working with this global list that was pre-installed on the image.
  • In contrast to this, DevStack has always been able to bootstrap itself from a blank system to a working OpenStack deployment by ensuring it has the right dependencies installed. We don't want any packages pre-installed here, because that hides actual dependencies that we want explicitly defined within DevStack — otherwise, when a user goes to deploy DevStack for their development work, things break because their environment differs slightly from the CI one. If you look at all the job definitions in OpenStack, by convention any job running DevStack has a dsvm in the job name — this referred to running on a "DevStack Virtual Machine", or a devstack node. As the CI environment has grown, we have more and more testing that isn't DevStack based (puppet apply tests, for example) that rather confusingly wants to run on a devstack node because it does not want dependencies pre-installed. While it's just a name, it can be difficult to explain!

Thus we ended up maintaining two node-types, where the difference between them is what was pre-installed on the host — and yes, the bare node had more installed than a devstack node, so it wasn't that bare at all!

Specifying Dependencies

Clearly it is useful to unify these node types, but we still need to provide a way for the unit-test environments to have their dependencies installed. This is where a tool called bindep comes in. This tool gives project authors a way to specify their system requirements in a similar manner to the way their Python requirements are kept. For example, OpenStack has the concept of global requirements — those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This project now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages in the global-requirements list.
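
To illustrate, bindep-style entries pair a package name with an optional selector that restricts it to particular platforms or profiles, so a single list can serve Ubuntu, Debian, CentOS and Fedora at once. The toy below shows the general idea; the matching logic is deliberately simplified and the package names are placeholders, so consult bindep's own documentation for the real syntax and semantics.

    # Toy illustration of the bindep idea: one list of system packages, with
    # bracketed selectors choosing which entries apply on which platform.
    # This simplified matcher and the example package names are NOT bindep itself.
    ENTRIES = """\
    gcc
    libffi-dev [platform:dpkg]
    libffi-devel [platform:rpm]
    libmysqlclient-dev [platform:dpkg]
    """

    def packages_for(profile):
        """Return entries with no selector, or whose selector is in 'profile'."""
        wanted = []
        for line in ENTRIES.strip().splitlines():
            name, _, selector = line.strip().partition(" [")
            selector = selector.rstrip("]")
            if not selector or selector in profile:
                wanted.append(name)
        return wanted

    print(packages_for({"platform:dpkg"}))  # ['gcc', 'libffi-dev', 'libmysqlclient-dev']
    print(packages_for({"platform:rpm"}))   # ['gcc', 'libffi-devel']

In the real workflow the equivalent lookup happens as part of the install-distro-packages job step described below.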

bindep knows how to look at these lists provided by projects and get the right packages for the platform it is running on. As part of the image-build, we have a cache-bindep element that can go through every project and build a list of the packages it requires. We can thus pre-cache all of these packages onto the images, knowing that they are required by jobs. This reduces the dependency on external mirrors and improves job performance (as the packages are locally cached), without polluting the system by having everything pre-installed.

Package installation can now happen the way we really should be doing it — as part of the CI job. There is a job-macro called install-distro-packages which a test can use to call bindep to install the packages specified by the project before the run. You might notice the script has a "fallback" list of packages for projects that do not yet specify their own dependencies — this essentially replicates the environment of a bare node as we transition to projects more strictly specifying their system requirements.

We can now start with a blank image, and all the dependencies to run the job can be expressed by and within the project — leading to a consistent and reproducible environment without any hidden dependencies. Several things have broken as part of removing bare nodes — this is actually a good thing, because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides. There are a few other job-macros that can do things like provide MySQL/Postgres instances for testing or set up other common job requirements. By splitting these types of things out from the base images we also improve the performance of jobs, which no longer waste time doing things like setting up databases they don't need.

As of this writing, the bindep work is new and still a work-in-progress. But the end result is that we have no more need for a separate bare node type to run unit-tests. This essentially halves the number of image-builds required and brings us to the goal of a single image for each platform running all CI.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers and runs all testing environments equally. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burden, better consistency and better performance; all great things to bring to OpenStack CI!

Categories: thinktime

More powerful than you know

Seth Godin - Mon 04th Apr 2016 18:04
I think that's always been a little true, but now it's a lot true. Everyone reading this has an enormous amount of power. Cultural power, mostly. The ability to speak up, to paint a picture of a different way, to...        Seth Godin
Categories: thinktime

Treating your talk as a gift

Seth Godin - Sun 03rd Apr 2016 19:04
In a few weeks, Chris Anderson's much awaited book on TED Talks comes out. I've just finished reading it, and it's well worth a pre-order. When Chris took the leap 11 years ago and published the first online TED talks,...        Seth Godin
Categories: thinktime

All the events you weren't there to control...

Seth Godin - Sat 02nd Apr 2016 19:04
Yesterday, thousands of people got married. Just about every one of these weddings went beautifully. Amazingly, you weren't there, on-site, making sure everything was perfect. Last week, a letter to investors went out from the CFO of a hot public...        Seth Godin
Categories: thinktime

OpenSTEM: New Viking Site in North America

Planet Linux Australia - Sat 02nd Apr 2016 16:04

The Vikings were the first Europeans to reach North America, more than 1000 years ago. The Vikings established settlements and traded with indigenous people in North America for about 400 years, finally abandoning the continent less than 100 years before Columbus’ voyage.

The story of the Vikings’ exploits in North America provides not only additional context to the history of human exploration, but also matches ideally to the study of the Geography of North America, as the names used by the Vikings for areas in North America provide a perfect match to the biomes in these regions.

Long consigned to the realm of myth within the Norse sagas, the old stories of "Vinland" (Newfoundland) were first backed by archaeological evidence when a Norwegian archaeologist uncovered a site in 1960. In recent years archaeologists have uncovered yet more evidence of Viking settlements in North America. OpenSTEM is delighted to share this story of how satellite technology is assisting this process, as we publish our own resource on the Vikings in North America.

http://www.nytimes.com/2016/04/01/science/vikings-archaeology-north-america-newfoundland.html

The site was identified last summer after satellite images showed possible man-made shapes under discoloured vegetation on the Newfoundland coast.

Categories: thinktime

BlueHackers: OSMI Mental Health in Tech Survey 2016

Planet Linux Australia - Sat 02nd Apr 2016 13:04

Participate in a survey about how mental health is viewed within the tech/IT workplace, and the prevalence of certain mental health disorders within the tech industry.

Categories: thinktime
