Planet Linux Australia
http://planet.linux.org.au

Colin Charles: MariaDB Berlin Meetup Notes & Slides

Thu 14th Apr 2016 17:04

We had the first MariaDB Berlin Meetup on Tuesday 12.04.2016 at the Wikimedia Berlin offices at 7pm. Around 54 people attended, a mix of MariaDB Corporation employees and community members. We competed with the entertainment at the AWS Summit Berlin, which was apparently about 400m away! Food and drink were enjoyed by all, and most importantly there were many, many lightning talks (minimum 5 minutes, maximum 10 minutes – most were about 6-7 minutes long).

The bonus of all of this? Lots and lots of slides for you to see. Grab them from the Google Drive folder MariaDB Berlin meetup April 2016.

  1. Monty talked about improving the speed of connections to MariaDB Server, some work he’s just pushed fairly recently to the 10.2 tree.
  2. Dipti spoke about MariaDB ColumnStore and it is now clear we’ll see some source/binary drop by the end of May 2016.
  3. Sergei Petrunia and Vicentiu Ciorbaru spoke about the upcoming window functions that MariaDB Server 10.2.0 will see (yes, the alpha should be out real soon now).
  4. Jan spoke about InnoDB in 10.2.
  5. Lixun Peng spoke about a fairly interesting feature, the idea to flashback via mysqlbinlog and how you can have a “Time Machine”. I can’t wait for flashback/time machine to appear in 10.2. The demo for this is extremely good.
  6. Kolbe spoke about data at rest encryption using the MariaDB Amazon AWS KMS plugin.
  7. Sanja and Georg went up together to speak about 10.2 protocol enhancements as well as what you’ll see in Connector/C 3.0.
  8. Wlad gave us a good rundown on authenticating with GSSAPI, something you will notice is also available in MariaDB Server 10.1’s later releases.
  9. Johan Wikman gave us an introduction to MariaDB MaxScale, which started off the talks on MaxScale.
  10. Markus talked about the readwritesplit plugin.
  11. Massimiliano went into the Binlog server.
  12. Martin didn’t use slides but gave us an amazing talk titled “Rival concepts of SQL Proxy”; it was very well given and I’ve encouraged him to write a blog post about it.
  13. Community member Ben Kochie, an SRE at SoundCloud, gave us a quick talk on Monitoring MySQL with Prometheus and how much they depend on the PERFORMANCE_SCHEMA.
  14. Diego Dupin spoke a little about the MariaDB Java Connector, and the idea was to do a demo but the projector via HDMI seemed to be a bit wonky (this was also true of using my Mac; the VGA output however worked fine). So it was just a quick talk without any deck.

We ended with a quick Q&A session with Monty dominating it. Lots of interesting questions around why the name Maria, licensing thoughts, ensuring all the software we have is in distributions, etc. Some ended up going for pizza while others ended up in a hotel bar at the Crowne Plaza Potsdamer Platz — and the chatter went on till at least 11pm.

Thanks again to Georg Richter who found us the venue and also did a lot of the legwork with Wikimedia Foundation.

Categories: thinktime

Colin Charles: Major post-GA features in the 5.7 release!

Thu 14th Apr 2016 16:04

Interesting developments in the MySQL world – it can now be used as a document store and you can query the database using JavaScript instead of SQL (via the MySQL Shell). There is also a new X Plugin (see mysql-5.7.12/rapid/), which makes use of protocol buffers (see mysql-5.7.12/extra/protobuf/). I will agree, this is more than just a maintenance release.

Do get started playing with MySQL Shell. If you’re using the yum repository, remember to ensure you have enabled the mysql-tools-preview repository in /etc/yum.repos.d/mysql-community.repo. And don’t forget to load the X Plugin in the server! I can’t wait for the rest of the blog posts in the series; today I just took a cursory look at all of this — kudos Team MySQL @ Oracle.
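
On a yum-based system that amounts to roughly the following. This is a hedged sketch only: the repository id comes from the paragraph above, while the package name and the plugin install statement are assumptions based on the 5.7.12 documentation, so verify them against your setup:

# Enable the preview repository that carries MySQL Shell (needs yum-utils).
sudo yum-config-manager --enable mysql-tools-preview

# Install the shell; the package name here is an assumption worth checking.
sudo yum install mysql-shell

# Load the X Plugin into the running 5.7.12 server.
mysql -u root -p -e "INSTALL PLUGIN mysqlx SONAME 'mysqlx.so';"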

However, I’m concerned that the GA is getting what you would think of as more than just a maintenance release. We saw 5.7.11 get data-at-rest encryption for InnoDB, and now 5.7.12 is getting even more changes. This is, for example, going to ship in the next Ubuntu LTS, Xenial Xerus. Today it has 5.7.11, but presumably after release it will be upgraded to 5.7.12. I am not a huge fan of surprises in LTS releases (predictability over 5 years is a nice thing; this probably explains why I still have a 5.0.95 server running), but I guess this small band-aid is what we need to ensure this doesn’t happen going forward?

As for the other question I’ve seen via email from several folk so far: will MariaDB Server support this? I don’t see why not in the future, so why not file a Jira?

Categories: thinktime

OpenSTEM: Launch of OpenSTEM Digital Technologies Program

Thu 14th Apr 2016 15:04

As promised, we delivered the OpenSTEM Digital Technologies Program for Primary Schools (F-6) to schools and individual teachers who had already signed up: initial units for each year level, resource PDFs and activities, free software, a board game, optional incursions and workshops, and other useful resources.

“Our goal is to make sure our students are at the cutting edge of innovation through the development of skills to become the technology architects of the digital age,” Queensland Premier Annastacia Palaszczuk said, “This will include an assessment of coding and computer science, as well as early stage robotics, something I firmly believe should be a part of our education system.”

‘Advance Queensland’ package announcement (July 2015)

Appreciating the very full schedule that teachers have, we have gone beyond regular integration with the initial materials for Digital Technologies (Australian Curriculum v8.1).  Instead, the base fits directly within existing curricula, particularly Maths and English.  So, doing the basics doesn’t cost any extra time!

That said, we also have some catching up to do. It’s no good tossing older students (or their teachers!) at more complicated problems when they don’t yet have the base level understanding or skills covered in the earlier years.  So we have a catch up plan integral to our initial units.

Today’s students have been immersed in the stream of new technologies since they were born. They have much to learn, but they regard the technology itself as an entirely normal part of life and society.

To be able to guide the students, all educators now also need to go beyond using specific technologies to understanding how things work on a broader scale, and how it all fits together.  So uniquely, the journey is very much a joint one and in some parts the teachers are learning along with (slightly ahead of) the students.

The more I see our teachers and students work with the programs, the more convinced I am that we have a great partnership and are doing the right thing by the kids.

— Cheryl Rowe, Principal

OpenSTEM’s related Robotics Program was recently featured on Channel TEN @ Schools coverage in Brisbane.

With schools already signed up and implementing this program in 2016, you can start any time and in a form that suits you (school wide, or individual teachers or year levels). Contact us for more details, and any questions you might have.

Feel free to ask us for a reference (teacher or principal of a school we’ve worked with).

Categories: thinktime

Glen Turner: There are only two ethernet settings

Thu 14th Apr 2016 12:04

I can't believe I have to write this in 2016, more than twenty years after the bug in the DEC "Tulip" ethernet controller chip which created this mess.

There are only two ethernet speed and autonegotiation settings you should configure on a switch port or host:

1. Auto negotiation = on

2. Auto negotiation = off
   Speed = 10Mbps
   Duplex = half

These are the only two settings which work when the partner interface is set to autonegotiation = on.
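
On a Linux host, those two configurations map onto ethtool roughly as follows (a sketch only; eth0 stands in for whatever your interface is called):

# Setting 1: leave autonegotiation on (the default, and almost always what you want).
ethtool -s eth0 autoneg on

# Setting 2: with autonegotiation off the partner can only parallel-detect,
# so 10Mbps half duplex is the only safe forced setting.
ethtool -s eth0 autoneg off speed 10 duplex half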

If you are considering other settings then buy new hardware. It will work out cheaper.

That is all.

But...

Oh, so you know what you are doing. You know that explicitly setting a speed or duplex implicitly disables autonegotiation and therefore you need to explicitly set the partner interface's speed and duplex as well.

But if you know all that then you also know the world is not a perfect place. Equipment breaks. Operating systems get reinstalled. And you've left a landmine there, waiting for an opportunity...

A goal of modern network and systems administration is to push down the cost of overhead. That means being ruthless with exceptions which store away trouble for the future.

Categories: thinktime

Binh Nguyen: Hybrid Warfare, More PSYOPS, and More

Wed 13th Apr 2016 23:04
- the tactics of anti-West groups and states make no sense until you delve a bit deeper. Basically, they're all saying the same thing. If you don't co-operate with the West you're in trouble, but if you do co-operate with the West you still lose because of your loss of autonomy. The reason why South America/Ecuador is supportive of Assange is basically because he's opened up about the fact that
Categories: thinktime

OpenSTEM: OpenSTEM Robotics at Seville Rd on Ten News

Tue 12th Apr 2016 11:04

Seville Road State School and OpenSTEM got coverage on Channel TEN News yesterday afternoon with the Robotics Program, in their “TEN at Schools” segment. Good exposure for a great school.

(Photo captions: Ten News at Seville Road school library; Year 5/6 teacher Trent Perry talking with Josh; Students + Mirobots; Signing off.)
Categories: thinktime

BlueHackers: Explainer: what’s the link between insomnia and mental illness?

Tue 12th Apr 2016 11:04
The relationship between insomnia and mental illness is bidirectional: about 50% of adults with insomnia have a mental health problem; up to 90% of adults with depression have sleep problems.
Categories: thinktime

Colin Charles: Trip Report: Bulgarian Web Summit

Tue 12th Apr 2016 01:04

I had never been to Sofia, Bulgaria until this past February 2016, and boy did I enjoy myself. I visited the Bulgaria Web Summit and spoke there amongst many others. A few notes:

  • Almost 800 people attended (more than last year), and the event was sold out
  • Missed the RocksDB talk due to the massive Q&A session that went on afterwards.
  • Very interesting messaging
LinvoDB
  • LinvoDB (embeddable MongoDB alternative) — LinvoDB / www.strem.io
  • Library written entirely in JavaScript without any dependencies. Converts any KV store to a MongoDB-like API, with Mongoose-like models, and live queries
  • Use case: < 1 million objects (indexes are in memory using a binary search tree; so don’t use it for more). HTML5/Electron/NW.js. Best used with AngularJS/React and maybe Meteor. Can also use NativeScript or React Native. You can use it with node.js but it’s not recommended for server use cases.
  • Works with SQLite or LevelDB (why not RocksDB yet?). Can also use with IndexedDB/LocalStorage
  • Implemented almost entirely the MongoDB query language. Gives you automatic indexes.
  • FTS in memory (linvodb-fts) – uses trie/metaphone modules for node.js. Can also do p2p replication, persistent indexes, compound indexes
My talk

I enjoyed speaking about MariaDB Server as always, and it’s clear that many people had a lot of questions about it. Slides. Video. It was tweeted that I spent about as long answering questions afterwards as I did giving the talk, and it was true :)

I got to meet Robert Nyman at the social event (small world, since he works at the office where Jonas of ex-MySQL fame does). Also met someone very interested in contributing to InfiniDB. It was nice having a beer with my current colleague Salle too. Speaking with the track moderator, Alexander Todorov, was also a highlight – we had many common topics, and he does an amazing amount of work around automation and QA. His blog is worth a read.

Categories: thinktime

Michael Still: Exploring the Jagungal

Mon 11th Apr 2016 17:04
Peter Thomas kindly arranged for a variety of ACT Scout leaders to take a tour of the Jagungal portion of Kosciuszko National Park under the guidance of Robert Green. Robert is very experienced with this area, and has recently written a book. Five leaders from the Macarthur Scout Group decided to go along on this tour and take a look at our hiking options in the area.



The first challenge is getting to the area. The campsite we used for the first day is only accessible to four wheel drive vehicles -- the slope down to the camp site from Nimmo Plain is quite rocky and has some loose sections. That said, the Landcruiser I was in had no trouble making the trip, and the group managed to get two car style four wheel drives into the area without problems as well. The route to Nimmo Plain from the south of Canberra is as follows:



Interactive map for this route.



We explored two areas which are both a short drive from Nimmo Plain. We in fact didn't explore anything at Nimmo Plain itself, but as the intermediate point where the road forks it makes sense to show that bit of route first. From Nimmo Plain, if you turn left you end up where we camped for the first day, which is a lovely NPWS camp site with fire pits, a pit toilet, and trout in the river.



The route to that camp site is like this:



Interactive map for this route.



From this campsite we did a 14km loop walk, which took in a series of huts and ruins along relatively flat and easy terrain. There are certainly good walking options here for Scouts, especially those which don't particularly like hills. The route for the first day was like this:



Interactive map for this route.



It's a fantastic area, very scenic without being difficult terrain...






As you can see from the pictures, life around the camp fire that evening was pretty hard. One note on the weather though -- even at the start of April we're already starting to see very cool overnight weather in this area, with a definite frost on the tents and cars in the morning. I wouldn't want to be hiking in this area much later in the season than this without being prepared for serious cold weather.






The next day we drove back to Nimmo Plain and turned right. You then proceed down a dirt road that is marked as private property, but has a public right of way through to the national park. At the border of the park you can leave the car again and go for another walk. The route to this second entrance to the park is like this:



Interactive map for this route.






This drive on the second morning involved a couple of river crossings, with some representative pictures below. Why does the red Landcruiser get to do the crossing three times? Well that's what happens when you forget to shut the gate...






Following that we did a short 5km return walk to Cesjack's Hut, which again wasn't scenic at all...



Interactive map for this route.






I took some pictures on the drive home too of course...






Tags for this post: blog pictures 20160409-jagungal photo kosciuszko scouts bushwalk

Related posts: Scout activity: orienteering at Mount Stranger



Categories: thinktime

OpenSTEM: New shirts for OpenSTEM people

Mon 11th Apr 2016 11:04

Hooray, our new polo shirts have arrived!  We’re very happy with how they came out with the embroidered owl logo.

We go out & about quite a bit to schools and other events, so it’s useful to be easily recognisable in those environments. The shirts standardise that effort and the colour scheme matches our branding very well.

Categories: thinktime

Glen Turner: Embedding files into the executable

Sat 09th Apr 2016 14:04

Say you've got a file you want to put into an executable. Some help text, a copyright notice. Putting these into the source code is painful:

static char *copyright_notice[] = {
    "This program is free software; you can redistribute it and/or modify",
    "it under the terms of the GNU General Public License as published by",
    "the Free Software Foundation; either version 2 of the License, or (at",
    "your option) any later version.",
    NULL  /* Marks end of text. */
};

#include <stdio.h>

char **s;
for (s = copyright_notice; *s != NULL; s++) {
    puts(*s);
}

It's much easier to insert the file copyright.txt using the linker:

$ ld --relocatable --format=binary --output=copyright.o copyright.txt
$ cc -c helloworld.c
$ cc -o helloworld helloworld.o copyright.o

The final cc doesn't compile anything: it runs ld to link the object files of C programs on this particular architecture and operating system.

The linker defines some symbols in the object file marking the start, end and size of the copied copyright.txt:

$ nm copyright.o
000003bb D _binary_copyright_txt_end
000003bb A _binary_copyright_txt_size
00000000 D _binary_copyright_txt_start

Ignore the address of 00000000; this is a relocatable object file and the final linkage will assign a final address and clean up references to it.

A C program can access these symbols with:

extern const unsigned char _binary_copyright_txt_start[];
extern const unsigned char _binary_copyright_txt_end[];
extern const size_t *_binary_copyright_txt_size;

Don't rush ahead and puts() this variable. The copyright.txt file has no final ASCII NUL character which C uses to mark the end of strings. Perhaps use the old-fashioned UNIX write():

#include <stdio.h>
#include <unistd.h>

fflush(stdout);  /* Synchronise C's stdio and UNIX's I/O. */
write(fileno(stdout), _binary_copyright_txt_start, (size_t)&_binary_copyright_txt_size);

Alternatively, add a final NUL to the copyright.txt file:

$ echo -e -n "\x00" >> copyright.txt

and program:

#include <stdio.h>

extern const unsigned char _binary_copyright_txt_start[];

fputs((const char *)_binary_copyright_txt_start, stdout);

There's one small wrinkle:

$ objdump -s copyright.o

copyright.o:     file format elf32-littlearm

Contents of section .data:
 0000 54686973 2070726f 6772616d 20697320  This program is
 0010 66726565 20736f66 74776172 653b2079  free software; y
 0020 6f752063 616e2072 65646973 74726962  ou can redistrib
 0030 75746520 69742061 6e642f6f 72206d6f  ute it and/or mo

The .data section is copied into memory for all running instances of the executable. We really want the contents of the copyright.txt file to be in the .rodata section so that there is only ever one copy in memory no matter how many copies are running.

Objcopy could have copied an input 'binary' copyright.txt file to a particular section in an output object file. But objcopy's options require us to state the architecture of the output object file. We really don't want a different command for compiling on x86, x86_64, ARM and so on.

So here's a hack: let ld set the architecture details when it generates its default output and then use objcopy to rename the section from .data to .rodata. Remember that .data contains only the three _binary_* symbols, so no other symbols — which may need to be written to — will move from .data to .rodata:

$ ld --relocatable --format=binary --output=copyright.tmp.o copyright.txt
$ objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents copyright.tmp.o copyright.o
$ objdump -s copyright.o

copyright.o:     file format elf32-littlearm

Contents of section .rodata:
 0000 54686973 2070726f 6772616d 20697320  This program is
 0010 66726565 20736f66 74776172 653b2079  free software; y
 0020 6f752063 616e2072 65646973 74726962  ou can redistrib
 0030 75746520 69742061 6e642f6f 72206d6f  ute it and/or mo
Categories: thinktime

Lev Lafayette: Password Praise in the Future Tense

Fri 08th Apr 2016 22:04

Apropos the previous post, I am coming to the conclusion that universities are very strange places when it comes to password policies. Mind you, it shouldn't really come as much of a surprise - the choice of technologies adopted is often so mind-bogglingly strange one is tempted to conclude that the decisions are more political than technical. Of course, that would never happen in the commercial world. All this aside, consider the password policy of a certain Victorian university.


Categories: thinktime

Colin Charles: FOSDEM 2016 notes

Fri 08th Apr 2016 20:04

While being on the committee for the FOSDEM MySQL & Friends devroom, I didn’t speak in that devroom (instead I spoke in the distributions devroom). But when I had time to pop in, I did take some notes on sessions that were interesting to me, so here are the notes. I really did enjoy Yoshinori Matsunobu’s session (out of the devroom) on RocksDB and MyRocks, and I highly recommend you watch the video, as the notes can’t be very complete without the great explanation available in the slide deck. Anyway, there are videos from the MySQL and Friends devroom.

MySQL & Friends Devroom

MySQL Group Replication or how good theory gets into better practice – Tiago Jorge
  • Multi-master update everywhere with built-in automatic distributed recovery, conflict detection and group membership
  • Group replication added 3 PERFORMANCE_SCHEMA tables
  • If a server leaves the group, the others will be automatically informed (either via a crash or if you execute STOP GROUP REPLICATION)
  • Cloud friendly, and it is self-healing. Integrated with server core via a well-defined API. GTIDs, row-based replication, PERFORMANCE_SCHEMA. Works with MySQL Router as well.
  • Multi-master update everywhere. Conflicts will be detected and dealt with, via the first committer wins rule. Any 2 transactions on different servers can write to the same tuple.
  • labs.mysql.com / mysqlhighavailability.com
  • Q: When a node leaves a group, will it still accept writes? A: If you leave voluntarily, it can still accept writes as a regular MySQL server (this needs to be checked)
  • Online DDL is not supported
  • Checkout the video
ANALYZE for statements – Sergei Petrunia
  • a lot like EXPLAIN ANALYZE (in PostgreSQL) or PLAN_STATISTICS (in Oracle)
  • Looks like explain output with execution statistics
  • slides and video
Preparse Query Rewrite Plugins – Sveta Smirnova / Martin Hansson
  • martin.hansson@oracle.com
  • Query rewriting with a proxy might be too complex, so they thought of doing it inside the server. There is a pre-parse (string-to-string) and a post-parse (parse tree) API. Pre-parse: low overhead, but no structure. Post-parse: retains structure, but requires re-parsing (no destructive editing), need to traverse parse tree and will only work on select statements
  • Query rewrite API builds on top of the Audit API, and then you’ve got the pre-parse/post-parse APIs on the top that call out to the plugins
  • video
Fedora by the Numbers – Remy DeCausemaker

MyRocks: RocksDB Storage Engine for MySQL (LSM Databases at Facebook) – Yoshinori Matsunobu
  • SSD/Flash is getting affordable but MLC Flash is still expensive. HDD has large capacity but limited IOPS (reducing rw IOPS is very important and reducing write is harder). SSD/Flash has great read iops but limited space and write endurance (reducing space here is higher priority)
  • Punch hole compression in 5.7, it is aligned to the sector size of your device. Flash device is basically 4KB. Not 512 bytes. So you’re basically wasting a lot of space and the compression is inefficient
  • LSM tends to have a read penalty compared to B-Tree, like InnoDB. So a good way to reduce the read penalty is to use a Bloom Filter (check key may exist or not without reading data, and skipping read i/o if it definitely does not exist)
  • Another penalty is for delete. It puts them into tombstones. So there is the workaround called SingleDelete.
  • LSMs are ideal for write heavy applications
  • Similar features as InnoDB, transactions: atomicity, MVCC/non-locking consistent read, read committed repeatable read (PostgreSQL-style), Crash safe slave and master. It also has online backup (logical backup by mysqldump and binary backup by myrocks_hotbackup).
  • Much smaller space and write amplification compared to InnoDB
  • Reverse order index (Reverse Column Family). SingleDelete. Prefix bloom filter. Mem-comparable keys when using case sensitive collations. Optimizer statistics for diving into pages.
  • RocksDB is great for scanning forward but ORDER BY DESC queries are slow, hence they use reverse column families to make descending scan a lot faster
  • watch the video
Categories: thinktime

Colin Charles: (tweet) Summary of Percona Live 2015

Fri 08th Apr 2016 20:04

The problem with Twitter is that we talk about something and before you know it, people forget. (e.g. does WebScaleSQL have an async client library?) How many blog posts are there about Percona Live Santa Clara 2015? This time (2016), I’m going to endeavour to write more than just tweet – I want to remember this stuff, and search archives (and also note the changes that happen in this ecosystem). And maybe you do too. So look forward to more blogs from Percona Live Data Performance Conference 2016. In the meantime, here are the tweets in chronological order from my Twitter search.

  • crowd filling up the keynote room for #perconalive
  • beginning shortly, we’ll see @peterzaitsev at #perconalive doing his keynote
  • #perconalive has over 1,200 attendees – oracle has 20 folk, with 22 folk from facebook
  • #perconalive is going to be in Amsterdam sept 21-22 2015 (not in London this year). And in 2015, April 18-21 2016!
  • We have @PeterZaitsev on stage now at #perconalive
  • 5 of the 5 top websites are powered by MySQL – an Oracle ad – alexa rankings? http://www.alexa.com/topsites #perconalive
  • now we have Harrison Fisk on ployglot persistence at facebook #perconalive
  • make it work / make it fast / make it efficient – the facebook hacker way #perconalive
  • a lot of FB innovation goes into having large data sizes with short query time response #perconalive
  • “small data” to facebook? 10’s of petabytes with <5ms response times. and yes, this all sits in mysql #perconalive
  • messages eventually lands in hbase for long term storage for disk #perconalive they like it for LSM
  • Harrison introduces @RocksDB to be fast for memory/flash/disk, and its also LSM based. Goto choice for 100’s of services @ FB #perconalive
  • Facebook Newsfeed is pulled from RocksDB. 9 billion QPS at peak! #perconalive
  • Presto works all in memory on a streaming basis, whereas Hive uses map/reduce. Queries are much faster in Presto #perconalive
  • Scuba isn’t opensource – real time analysis tool to debug/understand whats going on @ FB. https://research.facebook.com/publications/456106467831449/scuba-diving-into-data-at-facebook/ … #perconalive
  • InnoDB as a read-optimized store and RocksDB as a write-optimized store — so RocksDB as storage engine for MySQL #perconalive
  • Presto + MySQL shards is something else FB is focused on – in production @ FB #perconalive
  • loving the woz keynote @ #perconalive – wondering if like apple keynotes, we’ll see a “one more thing” after this ;)
  • “i’m only a genius at one thing: that’s making people think i’m a genius” — steve wozniak #perconalive
  • Happiness = Smiles – Frowns (H=S-F) & Happiness = Food, Fun, Friends (H=F³) Woz’s philosophy on being happy + having fun daily #perconalive
  • .@Percona has acquired @Tokutek in a move that provides some consolidation in the MySQL database market and takes..
  • MySQL Percona snaps up Tokutek to move onto MongoDB and NoSQL turf http://zd.net/1ct6PEI by @wolpe
  • One more thing – congrats @percona @peterzaitsev #perconalive Percona has acquired Tokutek with storage engines for MySQL & MongoDB – @PeterZaitsev #perconalive
  • Percona is now a player in the MongoDB space with TokuMX! #perconalive
  • The tokumx mongodb logo is a mongoose… #perconalive Percona will continue to support TokuDB/TokuMX to customers + new investments in it
  • @Percona “the company driving MySQL today” and “the brains behind MySQL”. New marketing angle? http://www.datanami.com/2015/04/14/mysql-leader-percona-takes-aim-at-mongodb/ …
  • We have Steaphan Greene from @facebook talk about @WebScaleSQL at #perconalive
  • what is @webscalesql? its a collaboration between Alibaba, Facebook, Google, LinkedIn, and Twitter to hack on mysql #perconalive
  • close collaboration with @mariadb @mysql @percona teams on @webscalesql. today? upstream 5.6.24 today #perconalive
  • whats new in @WebScaleSQL ? asynchronous mysql client, with support from within HHVM, from FB & LinkedIn #perconalive
  • smaller @webscalesql change (w/big difference) – lower innodb buffer pool memory footprint from FB & Google #perconalive
  • reduce double-write mode while still preserving safety. query throttling, server side statement timeouts, threadpooling #perconalive
  • logical readahead to make full table scans as much as 10x fast. @WebScaleSQL #perconalive
  • whats coming to @WebScaleSQL – online innodb defragmentation, DocStore (JSON style document database using mysql) #perconalive
  • MySQL & RocksDB coming to @WebScaleSQL thanks to facebook & @MariaDB #perconalive
  • So, @webscalesql will skip 5.7 – they will backport interesting features into the 5.6 branch! #perconalive
  • likely what will be next to @webscalesql ? will be mysql-5.8, but can’t push major changes upstream. so might not be an option #perconalive
  • Why only minor changes from @WebScaleSQL to @MySQL upstream? #perconalive
  • Only thing not solved with @webscalesql & upstream @mysql – the Contributor license agreement #perconalive
  • All @WebScaleSQL features under Apache CCLA if oracle can accept it. Same with @MariaDB @percona #perconalive
  • Steaphan Greene says tell Oracle you want @webscalesql features in @mysql. Pressure in public to use the Apache CLA! #perconalive
  • We now have Patrik Sallner CEO from @MariaDB doing the #perconalive keynote ==> 1+1 > 2 (the power of collaboration)
  • “contributors make mariadb” – patrik sallner #perconalive
  • Patrik Sallner tells the story about the CONNECT storage engine and how the retired Olivier Bertrand writes it #perconalive
  • Google contributes table/tablespace encryption to @MariaDB 10.1 #perconalive
  • Patrik talks about the threadpool – how #MariaDB made it, #Percona improved it, and all benefit from opensource development #perconalive
  • and now we have Tomas Ulin from @mysql @oracle for his #perconalive keynote
  • 20 years of MySQL. 10 years of Oracle stewardship of InnoDB. 5 years of Oracle stewardship of @MySQL #perconalive
  • Tomas Ulin on the @mysql 5.7 release candidate. It’s gonna be a great release. Congrats Team #MySQL #perconalive
  • MySQL 5.7 has new optimizer hint frameworks. New cost based optimiser. Generated (virtual) columns. EXPLAIN for running thread #perconalive
  • MySQL 5.7 comes with the query rewrite plugin (pre/post parse). Good for ORMs. “Eliminates many legacy use cases for proxies” #perconalive
  • MySQL 5.7 – native JSON datatypes, built-in JSON functions, JSON comparator, indexing of documents using generated columns #perconalive
  • InnoDB has native full-text search including full CJK support. Does anyone know how FTS compares to MyISAM in speed? #perconalive
  • MySQL 5.7 group replication is unlikely to make it into 5.7 GA. Designed as a plugin #perconalive
  • Robert Hodges believes more enterprises will use MySQL thanks to the encryption features (great news for @mariadb) #perconalive
  • Domas on FB Messenger powered by MySQL. Goals: response time, reliability, and consistency for mobile messaging #perconalive
  • FB Messenger: Iris (in-memory pub-sub service – like a queue with cache semantics). And MySQL as persistence layer #perconalive
  • FB focuses on tiered storage: minutes (in memory), days (flash) and longterm (on disks). #perconalive
  • Gotta keep I/O devices for 4-5 years, so don’t waste endurance capacity of device (so you don’t write as fast as a benchmark) #perconalive
  • Why MySQL+InnoDB? B-Tree: cheap overwrites, I/O has high perf on flash, its also quick and proven @ FB #perconalive
  • What did FB face as issues to address with MySQL? Write throughput. Asynchronous replication. and Failover time. #perconalive
  • HA at Facebook: <30s failover, <1s switchover, > 99.999% query success rate
  • Learning a lot about LSM databases at Facebook from Yoshinori Matsunobu – check out @rocksdb + MyRocks https://github.com/MySQLOnRocksDB/mysql-5.6 …
  • The #mysqlawards 2015 winners #PerconaLive
  • Percona has a Customer Advisory Board now – Rob Young #perconalive
  • craigslist: mysql for active, mongodb for archives. online alter took long. that’s why @mariadb has https://mariadb.com/kb/en/mariadb/progress-reporting/ … #perconalive
  • can’t quite believe @percona is using db-engines rankings in a keynote… le sigh #perconalive
  • “Innovation distinguishes between a leader and a follower” – Steve Jobs #perconalive
  • Percona TokuDB: “only alternative to MySQL + InnoDB” #perconalive
  • “Now that we have the rights to TokuDB, we can add all the cool features ontop of Percona XtraDB Cluster (PXC)” – Rob Young #perconalive
  • New Percona Cloud Tools. Try it out. Helps remote DBA/support too. Wonder what the folk at VividCortex are thinking about now #perconalive
  • So @MariaDB isn’t production ready FOSS? I guess 3/6 top sites on Alexa rank must disagree #perconalive
  • Enjoying Encrypting MySQL data at Google by @jeremycole & Jonas — you can try this in @mariadb 10.1.4 https://mariadb.com/kb/en/mariadb/mariadb-1014-release-notes/ … #perconalive
  • google encryption: mariadb uses the api to have a plugin to store the keys locally; but you really need a key management server #perconalive
  • Google encryption: temporary tables during query execution for the Aria storage engine in #MariaDB #perconalive
  • find out more about google mysql encryption — https://code.google.com/p/google-mysql/ or just use it at 10.1.4! https://downloads.mariadb.org/mariadb/10.1.4/ #perconalive
  • Encrypting MySQL data at Google – Percona Live 2015 #perconalive http://wp.me/p5WPkh-5F
  • The @WebScaleSQL goals are still just to provide access to the code, as opposed to supporting it or making releases #perconalive
  • There is a reason DocStore & Oracle/MySQL JSON 5.7 – they were designed together. But @WebScaleSQL goes forward with DocStore #perconalive
  • So @WebScaleSQL will skip 5.7, and backport things like live resize of the InnoDB buffer pool #perconalive
  • How to view @WebScaleSQL? Default GitHub branch is the active one. Ignore -clean branches, just reference for rebase #perconalive
  • All info you need should be in the commit messages @WebScaleSQL #perconalive
  • Phabricator is what @WebScaleSQL uses as a code review system. All diffs are public, anyone can follow reviews #perconalive
  • automated testing with jenkins/phabricator for @WebScaleSQL – run mtr on ever commit, proposed diffs, & every night #perconalive
  • There is feature documentation, and its a work in progress for @WebScaleSQL. Tells you where its included, etc. #perconalive
  • Checked out the new ANALYZE statement feature in #MariaDB to analyze JOINs? Sergei Petrunia tells all #perconalive https://mariadb.com/kb/en/mariadb/analyze-statement/ …
Categories: thinktime

Rusty Russell: Bitcoin Generic Address Format Proposal

Fri 08th Apr 2016 12:04

I’ve been implementing segregated witness support for c-lightning; it’s interesting that there’s no address format for the new form of addresses.  There’s a segregated-witness-inside-p2sh which uses the existing p2sh format, but if you want raw segregated witness (which is simply a “0” followed by a 20-byte or 32-byte hash), the only proposal is BIP142 which has been deferred.

If we’re going to have a new address format, I’d like to make the case for shifting away from bitcoin’s base58 (eg. 1At1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2):

  1. base58 is not trivial to parse.  I used the bignum library to do it, though you can open-code it as bitcoin-core does.
  2. base58 addresses are variable-length.  That makes webforms and software mildly harder, but also eliminates a simple sanity check.
  3. base58 addresses are hard to read over the phone.  Greg Maxwell points out that the upper and lower case mix is particularly annoying.
  4. The 4-byte SHA check does not guarantee to catch the most common form of errors; transposed or single incorrect letters, though it’s pretty good (1 in 4 billion chance of random errors passing).
  5. At around 34 letters, it’s fairly compact (36 for the BIP141 P2WPKH).

This is my proposal for a generic replacement (thanks to CodeShark for generalizing my previous proposal) which covers all possible future address types (as well as being usable for current ones):

  1. Prefix for type, followed by colon.  Currently “btc:” or “testnet:“.
  2. The full scriptPubkey using base 32 encoding as per http://philzimmermann.com/docs/human-oriented-base-32-encoding.txt.
  3. At least 30 bits for crc64-ecma, up to a multiple of 5 to reach a letter boundary.  This covers the prefix (as ascii), plus the scriptPubKey.
  4. The final letter is the Damm algorithm check digit of the entire previous string, using this 32-way quasigroup. This protects against single-letter errors as well as single transpositions.

These addresses look like btc:ybndrfg8ejkmcpqxot1uwisza345h769ybndrrfg (41 digits for a P2WPKH) or btc:yybndrfg8ejkmcpqxot1uwisza345h769ybndrfg8ejkmcpqxot1uwisza34 (60 digits for a P2WSH) (note: neither of these has the correct CRC or check letter, I just made them up).  A classic P2PKH would be 45 digits, like btc:ybndrfg8ejkmcpqxot1uwisza345h769wiszybndrrfg, and a P2SH would be 42 digits.

While manually copying addresses is something which should be avoided, it does happen, and the cost of making them robust against common typographic errors is small.  The CRC is a good idea even for machine-based systems: it will let through less than 1 in a billion mistakes.  Distinguishing which blockchain is a nice catchall for mistakes, too.

We can, of course, bikeshed this forever, but I wanted to anchor the discussion with something I consider fairly sane.

Categories: thinktime

Jonathan Adamczewski: Aside: Over-engineered Min() [C++, variadic templates, constexpr, fold left]

Thu 07th Apr 2016 16:04

Q: Given a function constexpr int Min(int a, int b), construct a function constexpr int Min(Args... args) that returns the minimum of all the provided args. Fail to justify your over-engineering.

A: Rename Min(int, int) as MinImpl(int, int) or stick it in a namespace. Overloading the function is not only unnecessary, it gets in the way of the implementation.

constexpr int MinImpl(int a, int b) { return a < b ? a : b; }

Implement a constexpr fold left function. If we can use it for Min(), we should be able to do the same for Max(), and other similar functions. Should we be able to find any (#prematuregeneralization).

template<typename ArgA, typename ArgB, typename Func>
constexpr auto foldl(Func func, ArgA a, ArgB b)
{
  return func(a, b);
}

template<typename ArgA, typename ArgB, typename Func, typename ...Args>
constexpr auto foldl(Func func, ArgA a, ArgB b, Args... args)
{
  return foldl(func, func(a, b), args...);
}

Combine the two.

template<typename ...Args>
constexpr int Min(Args... args)
{
  return foldl(MinImpl, args...);
}

Add the bare minimum amount of testing for a constexpr function: slap a static_assert() on it.

static_assert(Min(6, 4, 5, 3, 9) == 3, "Nope");

I did so with Visual Studio 2015 Update 2. It did not object.

 

Categories: thinktime

Kristy Wagner: Panama Papers – what does it mean for me?

Thu 07th Apr 2016 11:04

It is just a little insane that in the process of my setting up this web site my motivation to launch it with a ‘real’ blog post simply had not transpired.  Then came along Panama Papers, which took me past that point of wanting to write to simply want to bury myself in research and to never resurface.  Data and I have a very loving relationship, I love to surround myself in it and it loves to share its secrets with me.

So, I buried myself and looked at all the tattles on big business and large personalities, and woke up this morning with two questions:

  • How did they manage to mine what they have and how much did they just hand over on platters to government agencies?
  • What does this mean to me?

The former is already being unveiled in the media.  So, I thought I would address the latter.  What do you think this means for you?

Well, the times are changing and, if it is successful in protecting its source, The International Consortium of Investigative Journalists has opened up a brand spanking new precedent for whistle-blowers.  The survival of this whistle-blower is now completely dependent on not having given away their identity in their day to day interactions; dare a bead of sweat even think about beading near an eyebrow, then their fate shall likely be sealed, along with anyone else with a vaguely guilty conscience.  Some of the ‘powers’ named in the document have the power to make you disappear and have no issue reaching to foreign soil to reach that goal.

So whilst they are on the chase, we innocents stand by and watch the ‘crazy’ unfold.  In other countries with greater questionability over political ethics there is going to be political change, in some countries there shall be protest and the destabilisation of leaders, in others solidarity as the political spin continues.  Here in Australia, though, it comes down to a bunch of corporations and about 800 individuals. In the scheme of things, without impact figures, this seems relatively small.  Only time will reveal the size of the tax pie that got carved by offshore money movements.

Spineless ATO settlements

As the Australian Tax Office (ATO) tracks down these individuals and hits them up for tax evasion, we are going to see many, many quiet settlements.  To avoid the difficulties and costs of due Court process the ATO is known to take settlements at as little as one tenth the known debt, including historic debts by organisations now named as being associated with the Panama Papers.  We are going to see a lot of these, but don’t hold your breath; it will be some time before they start building cases.  As a result of settlements, these people are not known to face non-financial punishments such as actual gaol time.  I hope someone in political oversight gives the ATO a giant kick up the pants, expecting that every one of these people gets prosecuted to the full extent of the law.  But hope rarely equates to action in political spheres.

More Leaks

Once one person gets away with whistleblowing, others follow suit, somewhat like media attention on suicides.  If you are free of the offshore banking set up it doesn’t mean you are off the hook.  If your actions are shady then know that even a sniff of it from others may lead to you being caught.  Are you shuffling money, paying family members who are not actual employees, or have a raft of false invoices in your tax deductions?  Yep, if you are then you are now more likely to have someone pull the rug out from under you and it is going to be someone you know.

A company that holds nearly nothing on the public web has lost over two terabytes of data, so what can someone with system access do to you?  My first suggestion is to clean up your act and confess any sins before you get caught.  Lest you be at the mercy of others…

Sorry, I trailed off topic reminiscing of a movie scene with Andy Garcia in Oceans 11 as his character, Terry Benedict, throws a tantrum demanding that staff find out how they (Ocean’s 11) hacked into his system.  Whistle-blowers are going to do it if they are given a strong enough encouragement and they suspect they know what they might find.

Dark Web Understanding

The disclosure of how the journalists went about mining this giant slab of data, and communicating about it securely across international boundaries, is going to give way to a better understanding of the Dark Web.  Suddenly the fear associated with secure anonymised communication is going to make sense as a safe haven for those pursuing truth (and not just those peddling unsavoury wares).  I think somewhere in the last 3-5 years we forgot about those proxies set up for people in foreign countries to be able to share what is ‘really’ going on with the rest of the world.  I believe as this story unfolds people will come to understand the differences between the dark web, the deep web and the internet and rather than fear it, will simply accept each for what they are, knowing that all three are destined to evolve over time also.

Closing Loopholes

The Australian Government can’t change international law or laws governing other sovereign nations, so what will they do to close loopholes?  The easiest way is to tax the shit out of every dollar that heads offshore; some of that can be controlled in the first instance, but other, shadier means will be found around it.  The goal is to make moving the money offshore more expensive than keeping it here.  Some of the obvious tax avoidance avenues that will need to be closed through taxation will be:

  • dividends payments to foreign shareholders,
  • payments to benefactors of trusts who are not paying tax in Australia on the amount,
  • revenues paid back into foreign parent/holding companies,
  • fees paid on goods and services, possibly inflated or non-substantiable items, also through tariffs and duties, and
  • taxation on loans taken by Australian companies from overseas sources, or the limitation of interest deductibility from the same, including from parent/holding companies.

These are just the ones that spring to mind, and the list is in no way comprehensive.  The key thing in this, though, is that such measures would make our economy quite protectionist.  (Not that this is necessarily always a bad thing.)  It means that we will see costs of imports rise, giving way for local business to be a preferred provider of products for our citizens, which is awesome, other than for industries that we have allowed to collapse over the last 30 years.  Implementing protectionist tax reform ought to be worth it too; I actually figure that now would be the time to remove exemptions for mining whilst we are at it.  Everyone knows that their international buy prices from Australia to the sister company that sells it is just another loophole to be closed.  Let’s face it, people still need the commodities that are being ripped from our land.  If we are going to let our resources be removed from our own use, then it should benefit our nation far more than it does now.

In reality, though, implementing the closure of loopholes like those above has some serious economic impacts on international trade and our supply chains.  It is going to take a long time for Cabinet parties to get their heads around the impact, let alone to debate the cost versus benefit of such a decision.  Don’t hold your breath for every loophole to be closed, but be sure that we are going to see some changes.  Maybe, though, we can move the pressure off Mum and Dad investors who are using negative gearing to finance their retirement and instead focus on catching real tax that is being let slide through for the benefit of large multi-nationals and the one percenters who are prepared to dabble in the potentially unlawful.

What about me?

Well, watch the tax space, have a chat to people who understand the economy and international trade about what is happening politically.  The means of gap closure will provide more opportunities in our country for us to build economies anew.  It also means that we may see somewhat of a market crash as foreign investors are taxed for taking their money home but we will find our way through and the crash could open up opportunities that Generation Y and I are missing in respect to owning their own homes and securing personal investment in a meaningful way for the first time.

If we closed these gaps quickly enough for businesses to not have opportunity to jump loopholes, then we would also have a significant revenue impact which one would hope would be pushed to the citizens as investment into education and health.  The potential though is quite amazing because this eye opener is also a good means to revisit the short sightedness of the sale of public assets.  If we temporarily crashed the market by restricting foreign investment and cutting the ability for foreign ownership by sale, we have the opportunity for the Commonwealth as well as States and Territories to change the way they manage economics and asset allocation.

Also, for business owners, this is potentially a great time to consider your capability to be a provider to Government for goods and services.  There is going to be a counterbalance from this event which may just put your tender into the preference pool because as the owner you have a visible Australian face that pays tax right here in Australia.  Right now, that is a really good thing for you.

I am not a bad guy, but I do have offshore funds, what about me?

All I can say is come clean.  Make sure you declare your offshore funds (whether through Mossack Fonseca or not) to the Australian Tax Office and make sure you take advice directly from them on the tax liabilities you face from your choice of tax minimisation, so that you do not cross into the realms of tax avoidance.  If there is an infraction that has already occurred then upfront disclosure is your best option, especially before the ATO pulls your file together.  In respect to this one whistle-blower incident, Deputy Commissioner Mr Michael Cranston has detailed in an ATO press statement:

“The information we have includes some taxpayers who we have previously investigated, as well as a small number who disclosed their arrangements with us under the Project DO IT initiative. It also includes a large number of taxpayers who haven’t previously come forward, including high wealth individuals, and we are already taking action on those cases.”

So, just come clean.  (By the way, the ATO have already offered to treat you nicely if you do.)

Political Ethics has changed

Just before you start thinking that this is all that is going to happen…ponder this statement, also from Mr Cranston:

“The message is clear – taxpayers can’t rely on these secret arrangements being kept secret and we will act on any information that is provided to us.”

Does anyone else here notice a giant ethical shift from the events of Wikileaks where the Government condemned such actions?  In the era of wikileaks controversies Governments were scared of the capability of insiders to release sensitive data and they worked on logging and permissioning tools to deter anyone who might be tempted, but now?  But now what is going on?

Okay, granted, in times of cutbacks it could be questioned if our political leadership have any ethics at all in respect to where tax dollars are given and taken.  However, the statement above flies in the face of our deontological political comfort zone where, even if a political leader messed up, we have been encouraged to judge them on the intent of their actions rather than the outcome.  Now, because the leak is corporate and to the Government's benefit, the tables have turned to whistle-blower as hero.  The fact that this act in Australia, given the role that Mossack Fonseca played, would likely be illegal white collar crime shows a giant swing of political view: our Government has permitted, through the taxation office, the signal that now is a time when it will embrace consequentialism – that the ‘rightness’ of an act shall be determined by the consequences it produces rather than the morality of the act itself.

How does that play out into politics and society?  That is a whole new conversation for another day.  But if you are keen then share your thoughts on what it will look like in the comments below, along with your thoughts and opinions on how the Panama Papers will affect you.

____

Featured image source: CC-By only: https://www.flickr.com/photos/famzoo/

Categories: thinktime

Kristy Wagner: Hello world!

Thu 07th Apr 2016 11:04

Sorry interwebs, I know it was peaceful without me being around blogging for the last 5 years, but it is too late for you.  I am back!

New domain, new site, new content.  I look forward to sharing some thoughts with you.

Categories: thinktime

Ian Wienand: Image building in OpenStack CI

Tue 05th Apr 2016 15:04

Also titled minimal images - maximal effort!

A large part of the OpenStack Infrastructure team's recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins master to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on.

  4. A group of Jenkins masters are subscribed to gearman as workers. It is these Jenkins masters that will consume the job requests from the queue and actually get the tests running. However, Jenkins needs two things to be able to run a job — a job definition (what to actually do) and a slave node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) in to job configurations for Jenkins. Each Jenkins master gets these definitions pushed to it constantly by Puppet, thus each Jenkins master instance knows about all the jobs it can run automatically. Zuul also knows about these job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customised orchestration tool called nodepool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue and decides what type of nodes need to start in what clouds to satisfy demand. Nodepool will monitor the start-up of the virtual-machines and register the new nodes to the Jenkins master instances.

  6. At this point, the Jenkins master has what it needs to actually get jobs started. When nodepool registers a host to a Jenkins master as a slave, the Jenkins master can now advertise its ability to consume jobs. For example, if a ubuntu-trusty node is provided to the Jenkins master instance by nodepool, Jenkins can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty slave. Jenkins will run the job as defined in the job-definition on that host — ssh-ing in, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. The Jenkins master will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused and also have no sensitive details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.
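
To make the hand-off a little more concrete, here is a minimal Python sketch of the flow described above. It is purely illustrative: it models the gearman queue with the standard library rather than the real protocol, and all job and node names are made up, but it shows the shape of the (job-name, node-type) requests that travel from the scheduler to a worker.

    import json
    import queue

    # Stand-in for the gearman job-server: the scheduler enqueues work,
    # a Jenkins-like worker dequeues it. All names below are illustrative.
    job_server = queue.Queue()

    def schedule_jobs_for_change(change_id):
        # The scheduler's configuration maps a change to the jobs it needs;
        # each request is essentially a (job-name, node-type) tuple.
        for job_name, node_type in [
            ("gate-nova-python27", "bare-trusty"),
            ("gate-tempest-dsvm-full", "devstack-trusty"),
        ]:
            job_server.put(json.dumps(
                {"change": change_id, "job": job_name, "node": node_type}))

    def jenkins_worker(available_node_types):
        # Here we simply drop requests we cannot service; the real
        # job-server would leave them queued for another master.
        while not job_server.empty():
            request = json.loads(job_server.get())
            if request["node"] in available_node_types:
                print("running %(job)s for change %(change)s" % request)

    schedule_jobs_for_change("12345,1")
    jenkins_worker({"bare-trusty", "devstack-trusty"})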

Image builds

So far we have glossed over how nodepool actually creates the images that it hands out for testing. Image creation, illustrated in step 8 above, contains a lot of important details.

Firstly, what are these images and why build them at all? These images are where the "rubber hits the road" — they are instantiated into the virtual-machines that will run DevStack, unit-testing or whatever else someone might want to test.

The main goal is to provide a stable and consistent environment in which to run a wide range of tests. A full OpenStack deployment results in hundreds of libraries and millions of lines of code all being exercised at once. The testing images are right at the bottom of all this, so any instability or inconsistency affects everyone, leading to constant fire-fighting and major inconvenience as all forward progress stops when CI fails. We want to support a wide range of platforms interesting to developers, such as Ubuntu, Debian, CentOS and Fedora, and we also want to make it easy to handle new releases and add other platforms. We want to ensure this can be maintained without too much day-to-day hands-on intervention.

Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug failures. We want jobs to rely on as few external resources as possible so tests are consistent and stable. This means caching things like the git trees tests might use (OpenStack just broke the 1000 repository mark), VM images, packages and other common bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.
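
As a rough illustration of that caching step (not the actual diskimage-builder elements the images use; the real repository list comes from project configuration and the paths here are invented), a daily build might refresh a local mirror of every repository a test could need:

    import os
    import subprocess

    # Hypothetical cache location and repository list, for illustration only.
    CACHE_DIR = "/opt/git-cache"
    REPOS = [
        "https://git.openstack.org/openstack/nova",
        "https://git.openstack.org/openstack-dev/devstack",
    ]

    for url in REPOS:
        target = os.path.join(CACHE_DIR, url.rsplit("/", 1)[-1])
        if os.path.isdir(target):
            # Refresh an existing mirror so the daily image stays current.
            subprocess.check_call(["git", "-C", target, "remote", "update"])
        else:
            # First time through: create a bare mirror clone.
            subprocess.check_call(["git", "clone", "--mirror", url, target])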

Snapshot images

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time, there are a number of issues that emerge:

  1. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  2. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  3. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.
  4. Providers don't have some images you want (like a latest Fedora), or have different versions, or different point releases. All update asynchronously whenever they get around to it.

So the original incarnations of OpenStack CI images were based on these public images. Nodepool would start one of these provider images and then run a series of scripts on it — these scripts would first try to work around any quirks to make the images look as similar as possible across providers, and then do the caching, set up things like authorized keys and finish other configuration tasks. Nodepool would then snapshot this prepared image and start instantiating VMs based on these images into the pool for testing. If you hear someone talking about a "snapshot image" in an OpenStack CI context, that's likely what they are referring to.

Apart from the stability of the underlying images, the other issue you hit with this approach is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. Even with just Rackspace and the (now defunct) HP Cloud we would end up creating snapshot images for 4 or 5 platforms across a total of about 8 regions — meaning anywhere up to 40 separate image builds happening daily (you can see how ridiculous it was getting in the logging configuration used at the time). It was almost a fait accompli that some of these would fail every day — nodepool can deal with this by reusing old snapshots — but this leads to an inconsistent and heterogeneous testing environment.

Naturally there was a desire for something more consistent — a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads, and handles final things like conversion to qcow2 or vhd.

nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still start with a fair amount of "stuff" on them. For example, cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can conflict with parts of OpenStack, or end up tacitly hiding real test requirements (the test doesn't specify it, but the package is there as part of another base dependency; things then break when the base dependencies change). The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests — you want everything explicitly included.
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers all have their various different networking schemes or configuration methods, which need to be handled consistently.

If you were starting this whole thing again, things like LXC/Docker to keep "systems within systems" might come into play and help alleviate some of the packaging conflicts. Indeed they may play a role in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron going on. And things like iSCSI kernel drivers that containers don't support well. And you need to support Ubuntu, Debian, CentOS and Fedora. And you have hundreds of developers already relying on what's there. So change happens incrementally, and in the meantime, there is a clear need for a stable, consistent environment.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that — systems with essentially nothing on them. For Debian and Ubuntu this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important elements that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool; see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.
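
As a hedged sketch of how such an image might be composed (the element set and environment variable here are only indicative; the authoritative list lives in the project-config/nodepool configuration), a minimal Ubuntu build could be driven along these lines:

    import os
    import subprocess

    # Illustrative only: build a minimal Ubuntu Trusty image from
    # diskimage-builder elements. ubuntu-minimal, vm and simple-init are
    # real elements, but the production element list and settings come
    # from the nodepool configuration, not from this snippet.
    env = dict(os.environ, DIB_RELEASE="trusty")
    subprocess.check_call(
        ["disk-image-create", "-o", "ubuntu-trusty-minimal",
         "ubuntu-minimal", "vm", "simple-init"],
        env=env)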

So now, each day at 14:14 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own), at which point it will start being used for CI jobs. If you wish to replicate this entire process, the build-image.sh script, run on an Ubuntu Trusty host in a virtualenv with diskimage-builder, will get you pretty close (let us know of any issues!).
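
For a rough idea of the upload step (a sketch only; the provider and image names below are invented, and the real process is driven by nodepool itself), something like the following shade calls would push a freshly built image to each provider:

    import shade

    # Hypothetical provider names; the real list comes from nodepool's config.
    PROVIDERS = ["provider-one", "provider-two"]

    for cloud_name in PROVIDERS:
        cloud = shade.openstack_cloud(cloud=cloud_name)
        # Upload the freshly built image so new CI nodes can boot from it.
        cloud.create_image("ubuntu-trusty-minimal-20160404",
                           filename="ubuntu-trusty-minimal.qcow2",
                           wait=True)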

DevStack and bare nodes

There are two major ways OpenStack projects test their changes:

  1. Running with DevStack, which brings up a small, but fully-functional, OpenStack cloud with the change-under-test applied. Generally tempest is then used to ensure the big-picture things like creating VMs, networks and storage are all working.
  2. Unit-testing within the project; i.e. what you do when you type tox -e py27 in basically any OpenStack project.

To support this testing, OpenStack CI ended up with the concept of bare nodes and devstack nodes.

  • A bare node was made for unit-testing. While tox has plenty of information about installing required Python packages into the virtualenv for testing, it doesn't know anything about the system packages required to build those Python packages. This means things like gcc and library -devel packages which many Python packages use to build bindings. Thus the bare nodes had an ever-growing and not well-defined list of packages that were pre-installed during the image-build to support unit-testing. Worse still, projects didn't really know their dependencies but just relied on their testing working with this global list that was pre-installed on the image.
  • In contrast to this, DevStack has always been able to bootstrap itself from a blank system to a working OpenStack deployment by ensuring it has the right dependencies installed. We don't want any packages pre-installed here because it hides actual dependencies that we want explicitly defined within DevStack — otherwise, when a user goes to deploy DevStack for their development work, things break because their environment differs slightly from the CI one. If you look at all the job definitions in OpenStack, by convention any job running DevStack has a dsvm in the job name — this referred to running on a "DevStack Virtual Machine" or a devstack node. As the CI environment has grown, we have more and more testing that isn't DevStack-based (puppet apply tests, for example) but rather confusingly wants to run on a devstack node because it does not want dependencies installed. While it's just a name, it can be difficult to explain!

Thus we ended up maintaining two node-types, where the difference between them is what was pre-installed on the host — and yes, the bare node had more installed than a devstack node, so it wasn't that bare at all!

Specifying Dependencies

Clearly it is useful to unify these node types, but we still need to provide a way for the unit-test environments to have their dependencies installed. This is where a tool called bindep comes in. This tool gives project authors a way to specify their system requirements in a similar manner to the way their Python requirements are kept. For example, OpenStack has the concept of global requirements — those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This project now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages in the global-requirements list.

bindep knows how to look at these lists provided by projects and get the right packages for the platform it is running on. As part of the image-build, we have a cache-bindep element that can go through every project and build a list of the packages it requires. We can thus pre-cache all of these packages onto the images, knowing that they are required by jobs. This reduces the dependency on external mirrors and improves job performance (as the packages are locally cached), without polluting the system by having everything pre-installed.
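
To give a flavour of the per-platform filtering bindep performs, here is a toy sketch; the entry format and package names are simplified stand-ins rather than bindep's real parser:

    # Simplified stand-in for bindep's behaviour: each entry names a package
    # and, optionally, the platform profile it applies to. Entries are
    # illustrative only.
    ENTRIES = [
        ("gcc", None),                     # needed everywhere
        ("libffi-dev", "platform:dpkg"),   # Debian/Ubuntu spelling
        ("libffi-devel", "platform:rpm"),  # Fedora/CentOS spelling
    ]

    def packages_for(platform_tag):
        """Return the package names that apply on the given platform."""
        return [name for name, tag in ENTRIES
                if tag is None or tag == platform_tag]

    print(packages_for("platform:dpkg"))   # ['gcc', 'libffi-dev']
    print(packages_for("platform:rpm"))    # ['gcc', 'libffi-devel']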

Package installation can now happen via the way we really should be doing it — as part of the CI job. There is a job-macro called install-distro-packages which a test can use to call bindep to install the packages specified by the project before the run. You might notice the script has a "fallback" list of packages if the project does not specify its own dependencies — this essentially replicates the environment of a bare node as we transition to projects more strictly specifying their system requirements.
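
A hedged sketch of that flow, assuming bindep's brief output mode (which lists missing packages one per line) and a Debian-style host, might look like the following; the real job-macro handles far more cases:

    import subprocess

    # Ask bindep which declared packages are missing on this node.
    # "bindep -b" prints missing package names one per line and exits
    # non-zero when anything is missing, so we don't check the return code.
    result = subprocess.run(["bindep", "-b"], capture_output=True, text=True)
    missing = [line for line in result.stdout.splitlines() if line.strip()]

    if missing:
        # Install whatever the project declared but the image lacks.
        subprocess.check_call(["sudo", "apt-get", "install", "-y"] + missing)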

We can now start with a blank image, and all the dependencies to run the job can be expressed by and within the project — leading to a consistent and reproducible environment without any hidden dependencies. Several things have broken as part of removing bare nodes — this is actually a good thing because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides. There are a few other job-macros that can do things like provide MySQL/Postgres instances for testing or set up other common job requirements. By splitting these types of things out from the base images, we also improve job performance: jobs that don't need, say, a database no longer waste time setting one up.

As of this writing, the bindep work is new and still a work-in-progress. But the end result is that we have no more need for a separate bare node type to run unit-tests. This essentially halves the number of image-builds required and brings us to the goal of a single image for each platform running all CI.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers and runs all testing environments equally. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burden, better consistency and better performance; all great things to bring to OpenStack CI!

Categories: thinktime

Ian Wienand: Image building in OpenStack CI

Mon 04th Apr 2016 15:04

Also titled minimal images - maximal effort!

A large part of OpenStack Infrastructure teams recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins host to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run, and what type of node it should be run on.

  4. A group of Jenkins hosts are subscribed to gearman as workers. It is these Jenkins hosts that will consume the job requests from the queue and actually get the tests running. However, Jenkins needs two things to be able to run a job — a job definition (what to actually do) and a slave node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) into job configurations for Jenkins. Each Jenkins node gets these definitions pushed to it constantly by Puppet, so each Jenkins instance automatically knows about all the jobs it can run. Zuul also knows about these job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customised orchestration tool called nodepool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue and decides what type of nodes need to start in what clouds to satisfy demand. Nodepool will monitor the start-up of the virtual-machines and register the new nodes to the Jenkins instances.

  6. At this point, Jenkins has what it needs to actually get jobs started. When nodepool registers a host to Jenkins as a slave, the Jenkins host can now advertise its ability to consume jobs. For example, if an ubuntu-trusty node is provided to the Jenkins instance by nodepool, Jenkins can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty host. Jenkins will run the job as defined in the job-definition on that host — ssh-ing in, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but Jenkins is pretty much a glorified ssh/scp wrapper for OpenStack CI; a sketch of this follows after the list. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. Jenkins will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused and also have no sensitive details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.
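
Since Jenkins here is essentially just driving a remote host over ssh, the heart of a job boils down to something like the sketch below (the slave address, script path and log locations are invented for illustration):

    import subprocess

    SLAVE = "jenkins@node.example.org"  # hypothetical slave address

    # Run the job's script on the slave, then copy its logs back; roughly
    # all the orchestration Jenkins provides in this setup.
    subprocess.check_call(["ssh", SLAVE, "bash", "/opt/job/run-tests.sh"])
    subprocess.check_call(["scp", "-r", SLAVE + ":/opt/job/logs", "./job-logs"])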

Image builds

So far we have just glossed over how nodepool actually creates the images that it hands out for testing. Image creation, illustrated in step 8 above, contains a lot of important details.

Firstly, what are these images, and why build them at all? These images are where the "rubber hits the road" — they are instantiated into the virtual-machines that will run DevStack, functional testing or whatever else someone might want to test. Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult-to-debug CI failures. We want CI jobs to rely on as few external resources as possible so tests are as consistent and stable as possible. This means caching all the git trees tests might use (OpenStack just broke the 1000 repository mark), VM images consumed by various tests and other common bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.

Provider images

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time, there are a number of issues that emerge:

  1. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  2. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  3. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.
  4. Providers don't have some images you want (like a latest Fedora), or have different versions, or different point releases. All update asynchronously whenever they get around to it.

So the original incarnation of image building was that nodepool would start one of these provider images, run a bunch of scripts on it to make a base image (do the caching, set up keys, etc.), snapshot it and then start putting VMs based on these images into the pool for testing. If you hear someone talking about a "snapshot image" in an OpenStack CI context, that's likely what they are referring to.

Apart from the stability of the underlying images, the other issue you hit with this approach is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. Even with just Rackspace and the (now defunct) HP Cloud we would end up creating snapshot images for 4 or 5 platforms across a total of about 8 regions — meaning anywhere up to 40 separate image builds happening daily. It was almost a fait accompli that some of these would fail every day — nodepool can deal with this by reusing old snapshots — but this leads to an inconsistent and heterogeneous testing environment.

OpenStack is like a gigantic Jenga tower, with a full DevStack deployment resulting in hundreds of libraries and millions of lines of code all being exercised at once. The testing images are right at the bottom of all this, and it doesn't take much to make the whole thing fall over (see points about providers not leaving images alone). This leads to constant fire-fighting and everyone annoyed as all CI stops. Naturally there was a desire for something much more consistent — a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads, and handles final things like conversion to qcow2 or vhd.

nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still start with a fair amount of "stuff" on them. For example cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can both conflict with parts of OpenStack or end up tacitly hiding real test requirements (the test doesn't specify it, but the package is there as part of another base dependency. Things then break when the base dependencies change). The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests — you want everything explicitly included.
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers all have their various different networking schemes or configuration methods which needs to be handled consistently.

If you were starting this whole thing again, things like LXC/Docker to keep "systems within systems" might come into play and help alleviate some of the packaging conflicts. Indeed they may play a role in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron going on. And you need to support Ubuntu, Debian, CentOS and Fedora. And you have hundreds of developers already relying on what's there. So change happens incrementally, and in the meantime, there is a clear need for a stable, consistent environment.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that — systems with essentially nothing on them. For Debian and Ubuntu, this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important elements that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool; see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.

So now, each day at 14:00 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own) at which point it will start being used for CI jobs. If you wish to replicate this entire process, the build-image.sh script, run on an Ubuntu Trusty host in a virtualenv with diskimage-builder will get you pretty close (let us know of any issues!).

Dependencies

But guess what, there's more! Along the way, OpenStack CI ended up with the concept of bare nodes and devstack nodes. A bare node was one that was used for functional testing; i.e. what you do when you type tox -e py27 in basically any OpenStack project. The problem here is that tox has plenty of information about installing required Python packages into the virtualenv for testing; but it doesn't know anything about the system packages required to build the Python libraries. This means things like gcc and -devel packages which many Python libraries use to build library bindings.

In contrast to this, DevStack has always been able to bootstrap itself from a blank system to a working OpenStack deployment, ensuring it has the right libraries, etc., installed to get everything working. If you look at all the job definitions, anything running DevStack has a dsvm in the job name, which refers to a "DevStack Virtual Machine" — basically specifying what was installed on the host (and yes, the bare node had more installed, so it wasn't that bare at all!). Thus we don't want packages pre-installed for DevStack, because it hides actual DevStack dependencies that we want explicitly defined. But the bare nodes, used for functional testing, were different — there was an ever-growing and not well-defined list of packages that were pre-installed on those nodes to make sure functional testing worked. In general, you don't want jobs relying on something like this; we want to be sure that if jobs have a dependency, they require it explicitly.

This is where a tool called bindep comes in. OpenStack has the concept of global requirements — those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages in the requirements list. bindep knows how to look at this and get the right packages for the platform it is running on. Remember how it was previously mentioned that we want to minimise dependencies on external resources at runtime? Well, we can pre-cache all of these packages onto the images, knowing that they are likely to be required by jobs.

How do we get the packages installed? The way we really should be doing it — as part of the CI job. There is a macro called install-distro-packages which uses bindep to install those packages as required by the global-requirements list. The result — no more need for the bare node type to run functional tests! In all cases we can start with essentially a blank image, and all the dependencies to run the job are expressed by and within the job — leading to a consistent and reproducible environment.

Several things have broken as part of removing bare nodes — this is actually a good thing, because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides; issues that get fixed by thinking about, and ensuring we have, the correct dependencies when bringing up jobs. There are a few other macros there that do things like provide MySQL/Postgres instances or set up other common job requirements. By splitting these out we also improve the performance of jobs, which now only bring in the dependencies they need — we don't waste time doing things like setting up databases for jobs that don't need them.

Conclusion

While dealing with multiple providers, image-types and dependency chains has been a great effort for the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burden, better consistency and better performance; all great things to bring to OpenStack CI!

Categories: thinktime
