Planet Linux Australia
http://planet.linux.org.au

Russell Coker: Another Broken Nexus 5

Sat 22nd Oct 2016 17:10

In late 2013 I bought a Nexus 5 for my wife [1]. It’s a good phone and I generally have no complaints about the way it works. In the middle of 2016 I had to make a warranty claim when the original Nexus 5 stopped working [2]. Google’s warranty support was OK and the call-back was good, but unfortunately there was some confusion which delayed the replacement.

Once the confusion about the IMEI was resolved the warranty replacement method was to bill my credit card for a replacement phone and reverse the charge if/when they got the original phone back and found it to have a defect covered by warranty. This policy meant that I got a new phone sooner as they didn’t need to get the old phone first. This is a huge benefit for defects that don’t make the phone unusable as you will never be without a phone. Also if the user determines that the breakage was their fault they can just refrain from sending in the old phone.

Today my wife’s latest Nexus 5 developed a problem. It turned itself off and went into a reboot loop when connected to the charger. Also one of the clips on the rear case had popped out and other clips popped out when I pushed it back in. It appears (without opening the phone) that the battery may have grown larger (which is a common symptom of battery related problems). The phone is slightly less than 3 years old, so if I had got the extended warranty then I would have got a replacement.

Now I’m about to buy a Nexus 6P (because the Pixel is ridiculously expensive) which is $700 including postage. Kogan offers me a 3 year warranty for an extra $108. Obviously in retrospect spending an extra $100 would have been a benefit for the Nexus 5. But the first question is whether the new phone is going to have a probability greater than 1/7 of failing due to something other than user error in years 2 and 3. For an extended warranty to provide any benefit the phone has to have a problem that doesn’t occur in the first year (or a problem in a replacement phone after the first phone was replaced). The phone also has to not be lost, stolen, or dropped in a pool by its owner. While my wife and I have a good record of not losing or breaking phones, the probability of it happening isn’t zero.

The Nexus 5 that just died can be replaced for 2/3 of the original price. The value of the old Nexus 5 to me is less than 2/3 of the original price as buying a newer better phone is the option I want. The value of an old phone to me decreases faster than the replacement cost because I don’t want to buy an old phone.

For an extended warranty to be a good deal for me I think it would have to cost significantly less than 1/10 of the purchase price due to the low probability of failure in that time period and the decreasing value of a replacement outdated phone. So even though my last choice to skip an extended warranty ended up not paying out I expect that overall I will be financially ahead if I keep self-insuring, and I’m sure that I have already saved money by self-insuring all my previous devices.
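
As a rough worked example using the numbers above: a 1/7 chance of a covered failure in years 2 and 3, multiplied by a replacement value of at most 2/3 of the $700 purchase price, gives an expected payout of about 1/7 × $467 ≈ $67 – noticeably less than the $108 Kogan quote, even before discounting for the declining value of an outdated replacement phone.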


Stewart Smith: Workaround for opal-prd using 100% CPU

Thu 20th Oct 2016 19:10

opal-prd is the Processor RunTime Diagnostics daemon, the userspace process that on OpenPower systems is responsible for some of the runtime diagnostics. Although a userspace process, it memory maps (as in mmap) in some code loaded by early firmware (Hostboot) called the HostBoot RunTime (HBRT) and runs it, using calls to the kernel to accomplish any needed operations (e.g. reading/writing registers inside the chip). Running this in user space gives us benefits such as being able to attach gdb, recover from segfaults etc.

The reason this code is shipped as part of firmware rather than as an OS package is that it is very system specific, and it would be a giant pain to update a package in every Linux distribution every time a new chip or machine was introduced.

Anyway, there’s a bug in the HBRT code that means if there’s an ECC error in the HBEL (HostBoot Error Log) partition in the system flash (“bios” or “pnor”… the flash where your system firmware lives), the opal-prd process may get stuck chewing up 100% CPU and not doing anything useful. There’s an upstream issue tracking this: https://github.com/open-power/hostboot/issues/67

You will notice a problem if the opal-prd process is using 100% CPU and the last log messages are something like:

HBRT: ERRL:>>ErrlManager::ErrlManager constructor.
HBRT: ERRL:iv_hiddenErrorLogsEnable = 0x0
HBRT: ERRL:>>setupPnorInfo
HBRT: PNOR:>>RtPnor::getSectionInfo
HBRT: PNOR:>>RtPnor::readFromDevice: i_offset=0x0, i_procId=0 sec=11 size=0x20000 ecc=1
HBRT: PNOR:RtPnor::readFromDevice: removing ECC...
HBRT: PNOR:RtPnor::readFromDevice> Uncorrectable ECC error : chip=0,offset=0x0

(the parameters to readFromDevice may differ)

Luckily, there’s a simple workaround to fix it all up! You will need the pflash utility. pflash is primarily meant for developers and those who know what they’re doing – you can turn your computer into a brick with it.

pflash is packaged in Ubuntu 16.10 and RHEL 7.3, but you can otherwise build it from source easily enough:

git clone https://github.com/open-power/skiboot.git
cd skiboot/external/pflash
make

Now that you have pflash, you just need to erase the HBEL partition and write (ECC) zeros:

dd if=/dev/zero of=/tmp/hbel bs=1 count=147456
pflash -P HBEL -e
pflash -P HBEL -p /tmp/hbel
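
The 147456 byte count above corresponds to a 144 KB HBEL partition. If you want to confirm the size on your system first, pflash can print the flash partition table (assuming your build has the -i/--info option):

pflash -i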

Note: you cannot just erase the partition or use the pflash option to do an ECC erase; you may render your system unbootable if you get it wrong.

After that, restart opal-prd however your distro handles restarting daemons (e.g. systemctl restart opal-prd.service) and all should be well.


Binh Nguyen: Common Russian Media Themes, Has Western Liberal Capitalist Democracy Failed?, and More

Tue 18th Oct 2016 03:10
After watching international media for a while (particularly those who aren't part of the standard 'Western Alliance') you'll realise that there are common themes: they are clearly against the current international order, believe that things will be better if changed, and want the rules changed (especially as they seemed to have favoured some countries who went through the World Wars relatively

Tridge on UAVs: CanberraUAV Outback Challenge 2016 Debrief

Mon 17th Oct 2016 18:10

I have finally written up an article on our successful Outback Challenge 2016 entry

The members of CanberraUAV are home from the Outback Challenge and life is starting to return to normal after an extremely hectic (and fun!) time preparing our aircraft for this year's challenge. It is time to write up our usual debrief article to give those of you who weren't able to be there some idea of what happened.

For reference here are the articles from the 2012 and 2014 challenges:

http://diydrones.com/profiles/blogs/canberrauav-outback-challenge-2012-debrief
http://diydrones.com/profiles/blogs/canberrauav-outback-challenge-2014-debrief

Medical Express

The Outback Challenge is held every two years in Queensland, Australia. As the challenge was completed by multiple teams in 2014 the organisers needed to come up with a new challenge. The new challenge for 2016 was called "Medical Express" and the challenge was to retrieve a blood sample from Joe at a remote landing site.

[Image: outback-joe.jpg]

The back-story is that poor Outback Joe is trapped behind flood waters on his remote property in Queensland. Unfortunately he is ill, and doctors at a nearby hospital need a blood sample to diagnose his illness. A UAV is called in to fly a 23km path to a place where Joe is waiting. We only know Joe's approximate position (within 100 meters), so first off the UAV needs to find Joe using an on-board camera. After finding Joe the aircraft needs to find a good landing site in an area littered with obstacles. The landing site needs to be more than 30 meters from Joe (to meet CASA safety requirements) but less than 80 meters (so Joe doesn't have to walk too far).

The aircraft then needs to perform an automatic landing, and then wait for Joe to load the blood sample into an easily accessible carrier. Joe then presses a button to indicate he is done loading the blood sample. The aircraft needs to wait for one minute for Joe to get clear, and then perform an automatic takeoff and flight back to the home location to deliver the blood sample to waiting hospital staff.

That story hides a lot of very challenging detail. For example, the UAV must maintain continuous telemetry contact with the operators back at the base. That needs to be done despite not knowing exactly where the landing site will be until the day before the challenge starts.

Also, the landing area has trees around it and no landing strip, so a normal fixed wing landing and takeoff is very problematic. The organisers wanted teams to come up with a VTOL solution, and in this they were very successful, kickstarting a huge effort to develop the VTOL capabilities of multiple open source autopilot systems.

The organisers also provided a strict flight path that the teams had to follow to reach the search area where Joe is located. The winding path over the rural terrain of Dalby is strictly enforced, with any aircraft breaching the geofence required to immediately and automatically terminate by crashing into the ground.

The organisers also gave quite a wide range of flight distance and weather conditions that the teams had to be able to cope with. The distance to the search area could be up to 30km, meaning a round trip distance of 60km without taking into account all the time spent above the search area trying to find Joe. The teams had to be able to fly in up to 25 knots average wind on the ground, which could mean well over 30 knots in the air.

The mission also needed to be completed in one hour, including the time spent loading the blood sample and circling above Joe.
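
As a rough sanity check on those numbers: a search area up to 30km away means up to 60km of transit flying, so finishing inside the hour – after subtracting the time circling above Joe, the landing and takeoff sequences, and the one minute wait – implies an average ground speed well over 60km/h, before the up to 25 knot winds are factored in.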


Russell Coker: Improving Memory

Mon 17th Oct 2016 17:10

I’ve just attended a lecture about improving memory, mostly about mnemonic techniques. I’m not against learning techniques to improve memory, and I think it’s good to teach kids a variety of things (many of which they won’t end up needing) as you never know which kids will need which skills. But I disagree with the assertion that we are losing valuable skills due to “digital amnesia”.

Nowadays we have programs to check spelling so we can avoid the effort of remembering to spell difficult words like mnemonic, calendar apps on our phones that link to addresses and phone numbers, and the ability to Google the world’s knowledge from the bathroom. So the question is, what do we need to remember?

For remembering phone numbers it seems that all we need is to remember numbers that we might call in the event of a mobile phone being lost or running out of battery charge. That would be a close friend or relative and maybe a taxi company (and 13CABS isn’t difficult to remember).

Remembering addresses (street numbers etc) doesn’t seem very useful in any situation. Remembering the way to get to a place is useful, and it seems to me that the way navigation programs operate works against this. To remember a route you would want to travel the same way on multiple occasions and use a relatively simple route. The way that Google Maps tends to give the more confusing routes (i.e. routes varying by the day and routes which take all the shortcuts) works against this.

I think that spending time improving memory skills is useful, but it will either take time away from learning other skills that are more useful to most people nowadays or take time away from leisure activities. If improving memory skills is fun for you then it’s probably better than most hobbies (it’s cheap and provides some minor benefits in life).

When I was in primary school it was considered important to make kids memorise their “times tables”. I’m sure that memorising the multiplication of all numbers less than 13 is useful to some people, but I never felt a need to do it. When I was young I could multiply any pair of 2 digit numbers as quickly as most kids could remember the result. The big difference was that most kids needed a calculator to multiply any number by 13, which is a significant disadvantage.

What We Must Memorise

Nowadays the biggest memory issue is with passwords (the Correct Horse Battery Staple XKCD comic is worth reading [1]). Teaching mnemonic techniques for the purpose of memorising passwords would probably be a good idea – and would probably get more interest from the audience.

One interesting corner-case of passwords is the ATM PIN. The Wikipedia page about PINs states that 4-12 digits can be used [2]. The 4 digit PIN was initially chosen because John Adrian Shepherd-Barron (who is credited with inventing the ATM) was convinced by his wife that 6 digits would be too difficult to memorise. The fact that hardly any banks outside Switzerland use more than 4 digits suggests that Mrs Shepherd-Barron had a point. The fact that this was decided in the 60s proves that it’s not “digital amnesia”.

We also have to memorise how to use various supposedly user-friendly programs. If you observe an iPhone or Mac being used by someone who hasn’t used one before it becomes obvious that they really aren’t so user friendly and users need to memorise many operations. This is not a criticism of Apple, some tasks are inherently complex and require some complexity of the user interface. The limitations of the basic UI facilities become more obvious when there are operations like palm-swiping the screen for a screen-shot and a double-tap plus drag for a 1 finger zoom on Android.

What else do we need to memorise?


Clinton Roy: In Memory of Gary Curtis

Sun 16th Oct 2016 17:10

This week we learnt of the sad passing of a long term regular attendee of Humbug, Gary Curtis. Gary was often early, and nearly always the last to leave.

One of Gary’s prized possessions was his car, more specifically his LINUX number plate. Gary was very happy to be our official airport-conference shuttle for linux.conf.au keynote speakers in 2011 with this number plate.

Gary always had very strong opinions about how Humbug and our Humbug organised conferences should be run, but rarely took to running the events himself. It became a perennial joke at Humbug AGMs that we would always nominate Gary for positions, and he would always decline. Eventually we worked out that Humbug was one of the few times Gary wasn’t in charge of a group, and that was relaxing for him.

A topic that Gary always came back to was genealogy, especially the phone app he was working on.

A peculiar quirk of Humbug meetings is that they run on Saturday nights, and thus we often have meetings at the same time as Australian elections. Gary was always keen to keep up with the election on the night, often with interesting insights.

My most personal memory of Gary was our road trip after OSDC New Zealand, we did something like three days of driving around in a rental car, staying at hotels along the way. Gary’s driving did little to impress me, but he was certainly enjoying himself.

Gary will be missed.


Glen Turner: Activating IPv6 stable privacy addressing from RFC7217

Thu 13th Oct 2016 11:10
Understand stable privacy addressing

In Three new things to know about deploying IPv6 I described the new IPv6 Interface Identifier creation scheme in RFC7217.* This scheme results in an IPv6 address which is stable, and yet has no relationship to the device's MAC address, nor can an address generated by the scheme be used to track the machine as it moves to other subnets.

This isn't the same as RFC4941 IP privacy addressing. RFC4941 addresses are more private, as they change regularly. But that instability makes attaching to a service on the host very painful. It's also not a great scheme for support staff: an unstable address complicates network fault finding. RFC7217 seeks a compromise position which provides an address which is difficult to use for host tracking, whilst retaining a stable address within a subnet to simplify fault finding and make for easy hosting of services such as SSH.

The older RFC4291 EUI-64 Interface Identifier scheme is being deprecated in favour of RFC7217 stable privacy addressing.

For servers you probably want to continue to use static addressing with a unique address per service. That is, a server running multiple services will hold multiple IPv6 addresses, and each service on the server bind()s to its address.
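
For example, a server might hold one address per service, added along these lines (the addresses are from the 2001:db8 documentation prefix; substitute your own):

ip addr add 2001:db8:1:2::25/64 dev eth0    # address for the SMTP service
ip addr add 2001:db8:1:2::80/64 dev eth0    # address for the HTTP service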

Configure stable privacy addressing

To activate the RFC7217 stable privacy addressing scheme in a Linux which uses Network Manager (Fedora, Ubuntu, etc) create a file /etc/NetworkManager/conf.d/99-local.conf containing:

[connection]
ipv6.ip6-privacy=0
ipv6.addr-gen-mode=stable-privacy

Then restart Network Manager, so that the configuration file is read, and restart the interface. You can restart an interface by physically unplugging it or by:

systemctl restart NetworkManager
ip link set dev eth0 down && ip link set dev eth0 up

This may drop your SSH session if you are accessing the host remotely.

Verify stable privacy addressing

Check the results with:

ip --family inet6 addr show dev eth0 scope global
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:1:2:b03a:86e8:e163:2714/64 scope global noprefixroute dynamic
       valid_lft 2591932sec preferred_lft 604732sec

The Interface Identifier part of the IPv6 address (the final 64 bits, b03a:86e8:e163:2714 in the example above) should have changed from the EUI-64 Interface Identifier; that is, the Interface Identifier should not contain any bytes of the interface's MAC address. The other parts of the IPv6 address — the Network Prefix, Subnet Identifier and Prefix Length — should not have changed.

If you repeat the test on a different subnet then the Interface Identifier should change. Upon returning to the original subnet the Interface Identifier should return to the original value.


Maxim Zakharov: One more fix for AMP WordPress plugin

Thu 13th Oct 2016 11:10

With the recent AMP update at Google you may notice an increased number of AMP parsing errors in your search console. They look like:

The mandatory tag 'html ⚡ for top-level html' is missing or incorrect.

Some plugins, e.g. Add Meta Tags, may alter language_attributes() using the 'language_attributes' filter, adding XML-related attributes which are disallowed (see www.ampproject.org/docs/reference/spec#required-markup), and that causes the error mentioned above.

I have made a fix solving this problem and made a pull request for the WordPress AMP plugin; you may see it here:
github.com/Automattic/amp-wp/pull/531


Linux Users of Victoria (LUV) Announce: LUV Main November 2016 Meeting: The Internet of Toys / Special General Meeting / Functional Programming

Tue 11th Oct 2016 03:10
Start: Nov 2 2016 18:30
End: Nov 2 2016 20:30
Location:

6th Floor, 200 Victoria St. Carlton VIC 3053

Link:  http://luv.asn.au/meetings/map

Speakers:

• Nick Moore, The Internet of Toys: ESP8266 and MicroPython
• Special General Meeting
• Les Kitchen, Functional Programming

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 627 326.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Linux Users of Victoria (LUV) Announce: LUV Beginners October Meeting: Build a Simple RC Bot!

Sun 09th Oct 2016 21:10
Start: Oct 15 2016 12:30
End: Oct 15 2016 16:30
Location:

Infoxchange, 33 Elizabeth St. Richmond

Link:  http://luv.asn.au/meetings/map

Build a Simple RC Bot! Getting started with Arduino and Android

In this introductory talk, Ivan Lim Siu Kee will take you through the process of building a simple remote controlled bot. Find out how you can get started on building simple remote controlled bots of your own. While effort has been made to keep the presentation as beginner friendly as possible, some programming experience is still recommended to get the most out of this talk.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Craig Sanders: Converting to a ZFS rootfs

Sun 09th Oct 2016 17:10

My main desktop/server machine (running Debian sid) at home has been running XFS on mdadm raid-1 on a pair of SSDs for the last few years. A few days ago, one of the SSDs died.

I’ve been planning to switch to ZFS as the root filesystem for a while now, so instead of just replacing the failed drive, I took the opportunity to convert it.

NOTE: at this point in time, ZFS On Linux does NOT support TRIM for either datasets or zvols on SSD. There’s a patch almost ready (TRIM/Discard support from Nexenta #3656), so I’m betting on that getting merged before it becomes an issue for me.

Here’s the procedure I came up with:

1. Buy new disks, shutdown machine, install new disks, reboot.

The details of this stage are unimportant, and the only thing to note is that I’m switching from mdadm RAID-1 with two SSDs to ZFS with two mirrored pairs (RAID-10) on four SSDs (Crucial MX300 275G – at around $100 AUD each, they’re hard to resist). Buying four 275G SSDs is slightly more expensive than buying two of the 525G models, but will perform a lot better.

When installed in the machine, they ended up as /dev/sdp, /dev/sdq, /dev/sdr, and /dev/sds. I’ll be using the symlinks in /dev/disk/by-id/ for the zpool, but for partition and setup, it’s easiest to use the /dev/sd? device nodes.
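
A quick way to check which by-id symlink maps to which device node (the name prefix here matches these Crucial drives):

ls -l /dev/disk/by-id/ata-Crucial_CT275MX300SSD1_*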

2. Partition the disks identically with gpt partition tables, using gdisk and sgdisk.

I need:

  • A small partition (type EF02, 1MB) for grub to install itself in. Needed on gpt.
  • A small partition (type EF00, 1MB) for EFI System. I’m not currently booting with UEFI but I want the option to move to it later.
  • A small partition (type 8300, 2GB) for /boot.

    I want /boot on a separate partition to make it easier to recover from problems that might occur with future upgrades. 2GB might seem excessive, but as this is my tftp & dhcp server I can’t rely on network boot for rescues, so I want to be able to put rescue ISO images in there and boot them with grub and memdisk.

    This will be mdadm RAID-1, with 4 copies.

  • A larger partition (type 8200, 4GB) for swap. With 4 identically partitioned SSDs, I’ll end up with 16GB swap (using zswap for block-device backed compressed RAM swap)

  • A large partition (type bf07, 210GB) for my rootfs

  • A small partition (type bf08, 2GB) to provide ZIL for my HDD zpools

  • A larger partition (type bf09, 32GB) to provide L2ARC for my HDD zpools

ZFS On Linux uses partition type bf07 (“Solaris Reserved 1”) natively, but doesn’t seem to care what the partition types are for ZIL and L2ARC. I arbitrarily used bf08 (“Solaris Reserved 2”) and bf09 (“Solaris Reserved 3”) for easy identification. I’ll set these up later, once I’ve got the system booted – I don’t want to risk breaking my existing zpools by taking away their ZIL and L2ARC (and forgetting to zpool remove them, which I might possibly have done once) if I have to repartition.

I used gdisk to interactively set up the partitions:

# gdisk -l /dev/sdp
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdp: 537234768 sectors, 256.2 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4234FE49-FCF0-48AE-828B-3C52448E8CBD
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 537234734
Partitions will be aligned on 8-sector boundaries
Total free space is 6 sectors (3.0 KiB)

Number  Start (sector)  End (sector)  Size        Code  Name
   1                40          2047  1004.0 KiB  EF02  BIOS boot partition
   2              2048       2099199  1024.0 MiB  EF00  EFI System
   3           2099200       6293503  2.0 GiB     8300  Linux filesystem
   4           6293504      14682111  4.0 GiB     8200  Linux swap
   5          14682112     455084031  210.0 GiB   BF07  Solaris Reserved 1
   6         455084032     459278335  2.0 GiB     BF08  Solaris Reserved 2
   7         459278336     537234734  37.2 GiB    BF09  Solaris Reserved 3

I then cloned the partition table to the other three SSDs with this little script:

clone-partitions.sh

#! /bin/bash

src='sdp'
targets=( 'sdq' 'sdr' 'sds' )

for tgt in "${targets[@]}"; do
  sgdisk --replicate="/dev/$tgt" "/dev/$src"
  sgdisk --randomize-guids "/dev/$tgt"
done

3. Create the mdadm array for /boot, the zpool, and the root filesystem.

Most rootfs on ZFS guides that I’ve seen say to call the pool rpool, then create a dataset called "$(hostname)-1" and then create a ROOT dataset under that. so on my machine, that would be rpool/ganesh-1/ROOT. Some reverse the order of hostname and the rootfs dataset, for rpool/ROOT/ganesh-1.

There might be uses for this naming scheme in other environments but not in mine. And, to me, it looks ugly. So I’ll use just $(hostname)/root for the rootfs. i.e. ganesh/root

I wrote a script to automate it, figuring I’d probably have to do it several times in order to optimise performance. Also, I wanted to document the procedure for future reference, and have scripts that would be trivial to modify for other machines.

create.sh

#! /bin/bash

exec &> ./create.log

hn="$(hostname -s)"
base='ata-Crucial_CT275MX300SSD1_'
md='/dev/md0'
md_part=3
md_parts=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${md_part}) )

zfs_part=5

# 4 disks, so use the top half and bottom half for the two mirrors.
zmirror1=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | head -n 2) )
zmirror2=( $(/bin/ls -1 /dev/disk/by-id/${base}*-part${zfs_part} | tail -n 2) )

# create /boot raid array
mdadm --create "$md" \
  --bitmap=internal \
  --raid-devices=4 \
  --level 1 \
  --metadata=0.90 \
  "${md_parts[@]}"

mkfs.ext4 "$md"

# create zpool
zpool create -o ashift=12 "$hn" \
  mirror "${zmirror1[@]}" \
  mirror "${zmirror2[@]}"

# create zfs rootfs
zfs set compression=on "$hn"
zfs set atime=off "$hn"
zfs create "$hn/root"
# zpool set bootfs needs the pool name as its final argument
zpool set bootfs="$hn/root" "$hn"

# mount the new /boot under the zfs root
mkdir -p "/$hn/root/boot"
mount "$md" "/$hn/root/boot"

If you want or need other ZFS datasets (e.g. for /home, /var etc) then create them here in this script. Or you can do that later after you’ve got the system up and running on ZFS.

If you run mysql or postgresql, read the various tuning guides for how to get best performance for databases on ZFS (they both need their own datasets with particular recordsize and other settings). If you download Linux ISOs or anything with bit-torrent, avoid COW fragmentation by setting up a dataset to download into with recordsize=16K and configure your BT client to move the downloads to another directory on completion.
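
For example, a bit-torrent download dataset might look something like this (dataset name and mountpoint are made up):

zfs create -o recordsize=16K -o mountpoint=/data/torrents ganesh/torrents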

I did this after I got my system booted on ZFS. For my db, I stopped the postgres service, renamed /var/lib/postgresql to /var/lib/p, and created the new datasets with:

zfs create -o recordsize=8K -o logbias=throughput -o mountpoint=/var/lib/postgresql \
  -o primarycache=metadata ganesh/postgres

zfs create -o recordsize=128k -o logbias=latency -o mountpoint=/var/lib/postgresql/9.6/main/pg_xlog \
  -o primarycache=metadata ganesh/pg-xlog

followed by rsync and then started postgres again.

4. rsync my current system to it.

Log out all user sessions, shut down all services that write to the disk (postfix, postgresql, mysql, apache, asterisk, docker, etc). If you haven’t booted into recovery/rescue/single-user mode, then you should be as close to it as possible – everything non-essential should be stopped. I chose not to boot to single-user in case I needed access to the web to look things up while I did all this (this machine is my internet gateway).

Then:

hn="$(hostname -s)"

time rsync -avxHAXS -h -h --progress --stats --delete / /boot/ "/$hn/root/"

After the rsync, my 130GB of data from XFS was compressed to 91GB on ZFS with transparent lz4 compression.

Run the rsync again if (as I did) you realise you forgot to shut down postfix (causing newly arrived mail to not be on the new setup) or something.

You can do a (very quick & dirty) performance test now, by running zpool scrub "$hn". Then run watch zpool status "$hn". As there should be no errors to correct, you should get scrub speeds approximating the combined sequential read speed of all vdevs in the pool. In my case, I got around 500-600M/s – I was kind of expecting closer to 800M/s but that’s good enough. The Crucial MX300s aren’t the fastest drives available (but they’re great for the price), and ZFS is optimised for reliability more than speed. The scrub took about 3 minutes to scan all 91GB. My HDD zpools get around 150 to 250M/s, depending on whether they have mirror or RAID-Z vdevs and on what kind of drives they have.

For real benchmarking, use bonnie++ or fio.

5. Prepare the new rootfs for chroot, chroot into it, edit /etc/fstab and /etc/default/grub.

This script bind mounts /proc, /sys, /dev, and /dev/pts before chrooting:

chroot.sh

#! /bin/sh

hn="$(hostname -s)"

for i in proc sys dev dev/pts ; do
  mount -o bind "/$i" "/${hn}/root/$i"
done

chroot "/${hn}/root"

Change /etc/fstab (on the new zfs root) to have the zfs root and ext4 on raid-1 /boot:

ganesh/root  /      zfs   defaults  0  0
/dev/md0     /boot  ext4  defaults,relatime,nodiratime,errors=remount-ro  0  2

I haven’t bothered with setting up the swap at this point. That’s trivial and I can do it after I’ve got the system rebooted with its new ZFS rootfs (which reminds me, I still haven’t done that :).
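
For reference, a sketch of what that swap setup might look like (partition 4 on each of the four disks, per the layout above; this ignores the zswap configuration):

for d in /dev/sd[pqrs]4 ; do
    mkswap "$d"    # format each 4GB swap partition
    swapon "$d"    # 16GB of swap in total
done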

Add boot=zfs to the GRUB_CMDLINE_LINUX variable in /etc/default/grub. On my system, that’s:

GRUB_CMDLINE_LINUX="iommu=noagp usbhid.quirks=0x1B1C:0x1B20:0x408 boot=zfs"

NOTE: If you end up needing to run rsync again as in step 4 above, copy /etc/fstab and /etc/default/grub to the old root filesystem first. I suggest copying them to /etc/fstab.zfs and /etc/default/grub.zfs.

6. Install grub

Here’s where things get a little complicated. Running grub-install on /dev/sd[pqrs] is fine; we created the type ef02 partition for it to install itself into.
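
A sketch of that step – grub-install run once per disk:

for d in /dev/sd[pqrs] ; do
    grub-install "$d"
done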

But running update-grub to generate the new /boot/grub/grub.cfg will fail with an error like this:

/usr/sbin/grub-probe: error: failed to get canonical path of `/dev/ata-Crucial_CT275MX300SSD1_163313AADD8A-part5'.

IMO, that’s a bug in grub-probe – it should look in /dev/disk/by-id/ if it can’t find what it’s looking for in /dev/

I fixed that problem with this script:

fix-ata-links.sh

#! /bin/sh

cd /dev
ln -s /dev/disk/by-id/ata-Crucial* .

After that, update-grub works fine.

NOTE: you will have to add udev rules to create these symlinks, or run this script on every boot; otherwise you’ll get that error every time you run update-grub in future.
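
I haven't tested this, but a sketch of what such udev rules might look like (the file name and match patterns are assumptions):

# /etc/udev/rules.d/99-ata-links.rules (hypothetical)
# recreate the /dev/ata-* symlinks that grub-probe expects
KERNEL=="sd?", ENV{ID_BUS}=="ata", ENV{ID_SERIAL}=="Crucial_CT275MX300SSD1_*", SYMLINK+="ata-$env{ID_SERIAL}"
KERNEL=="sd?[0-9]*", ENV{ID_BUS}=="ata", ENV{ID_SERIAL}=="Crucial_CT275MX300SSD1_*", SYMLINK+="ata-$env{ID_SERIAL}-part%n"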

7. Prepare to reboot

Unmount proc, sys, dev/pts, dev, the new raid /boot, and the new zfs filesystems. Set the mount point for the new rootfs to /

umount-zfs-root.sh

#! /bin/sh

hn="$(hostname -s)"
md="/dev/md0"

for i in dev/pts dev sys proc ; do
  umount "/${hn}/root/$i"
done

umount "$md"

zfs umount "${hn}/root"
zfs umount "${hn}"
zfs set mountpoint=/ "${hn}/root"
zfs set canmount=off "${hn}"

8. Reboot

Remember to configure the BIOS to boot from your new disks.

The system should boot up with the new rootfs, no rescue disk required as in some other guides – the rsync and chroot stuff has already been done.

9. Other notes
  • If you’re adding partition(s) to a zpool for ZIL, remember that ashift is per vdev, not per zpool. So remember to specify ashift=12 when adding them. e.g.

    zpool add -o ashift=12 export log \
        mirror ata-Crucial_CT275MX300SSD1_163313AAEE5F-part6 \
        ata-Crucial_CT275MX300SSD1_163313AB002C-part6

    Check that all vdevs in all pools have the correct ashift value with:

    zdb | grep -E 'ashift|vdev|type' | grep -v disk
10. Useful references

Reading these made it much easier to come up with my own method. Highly recommended.


Maxim Zakharov: Data structure for word relative cooccurence frequencies, counts and prefix tree

Sat 08th Oct 2016 19:10

Trying to solve the task of calculating word cooccurrence relative frequencies fast, I have created an interesting data structure, which also allows calculating counts for the first word in each pair to check; and it creates a word prefix tree of the processed text, which can be used for further text analysis.

The source code is available on GitHub: github.com/Maxime2/cooccurrences

When you execute make command you should see the following output:

cc -O3 -funsigned-char cooccur.c -o cooccur -lm
Example 1
./cooccur a.txt 2 < a.in | tee a.out
Checking pair d e Count:3 cocount:3 Relative frequency: 1.00
Checking pair a b Count:3 cocount:1 Relative frequency: 0.33
Example 2
./cooccur b.txt 3 < b.in | tee b.out
Checking pair a penny Count:3 cocount:3 Relative frequency: 1.00
Checking pair penny earned Count:4 cocount:1 Relative frequency: 0.25

The cooccur program takes two arguments: the filename of a text file to process, and the size of the window of words within which to calculate relative frequencies. The program then takes pairs of words from its standard input, one pair per line, and calculates the count of appearances of the first word in the processed text and the cooccurrence count for the pair in that text. If the second word appears more than once in the window, only one appearance is counted.

Examples were taken here:


Michael Davies: Fixing broken Debian packages

Fri 07th Oct 2016 11:10
In my job we make use of Vidyo for videoconferencing, but today I ran into an issue after re-imaging my Ubuntu 16.04 desktop.

The latest version of vidyodesktop requires libqt4-gui, which doesn't exist in Ubuntu anymore. This always seems to be a problem with non-free software targeting multiple versions of multiple operating systems.

You can work around the issue, doing something like:

sudo dpkg -i --ignore-depends=libqt4-gui VidyoDesktopInstaller-*.deb

but then you get the dreaded unmet dependencies roadblock which prevents you from future package manager updates and operations. i.e.

You might want to run 'apt-get -f install' to correct these:
 vidyodesktop : Depends: libqt4-gui (>= 4.8.1) but it is not installable
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

It's a known problem, and it's been well documented. The suggested solution was to modify the VidyoDesktopInstaller-*.deb package, but I didn't want to do that (because when the next version comes out, it will need to be handraulicly fixed too - and that's an ongoing burden I'm not prepared to live with). So I went looking for another solution - and found Debian's equivs package (and thanks to tonyb for pointing me in the right direction!)

So what we want to do is create a dummy Debian package that will satisfy the libqt4-gui requirement. First off, let's uninstall vidyodesktop and install equivs:

sudo apt-get -f install
sudo apt-get install equivs

Next, let's make a fake package:

mkdir -p ~/src/fake-libqt4-gui
cd  ~/src/fake-libqt4-gui
cat << EOF > fake-libqt4-gui
Section: misc
Priority: optional
Standards-Version: 3.9.2

Package: libqt4-gui
Version: 1:100
Maintainer: Michael Davies <michael@the-davies.net>
Architecture: all
Description: fake libqt4-gui to keep vidyodesktop happy
EOF
And now, let's build and install the dummy package:

equivs-build fake-libqt4-gui
sudo dpkg -i libqt4-gui_100_all.deb

And now vidyodesktop installs cleanly:

sudo dpkg -i VidyoDesktopInstaller-*.deb
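
To double-check that the dummy package keeps apt happy, something like the following should now report no broken dependencies:

dpkg -l libqt4-gui
sudo apt-get check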

James Morris: LinuxCon Europe Kernel Security Slides

Fri 07th Oct 2016 01:10

Yesterday I gave an update on the Linux kernel security subsystem at LinuxCon Europe, in Berlin.

The slides are available here: http://namei.org/presentations/linux_kernel_security_linuxconeu2016.pdf

The talk began with a brief overview and history of the Linux kernel security subsystem, and then I provided an update on significant changes in the v4 kernel series, up to v4.8.  Some expected upcoming features were also covered.  Skip to slide 31 if you just want to see the changes.  There are quite a few!

It’s my first visit to Berlin, and it’s been fascinating to see the remnants of the Cold War, which dominated life in 1980s when I was at school, but which also seemed so impossibly far from Australia.

Brandenburg Gate, Berlin. Unity Day 2016.

I hope to visit again with more time to explore.


Russell Coker: 10 Years of Glasses

Mon 03rd Oct 2016 23:10

10 years ago I first blogged about getting glasses [1]. I’ve just ordered my 4th pair of glasses. When you buy new glasses the first step is to scan your old glasses to use as a base point for assessing your eyes; instead of going in cold and trying lots of different lenses they can just try small variations on your current glasses. Any good optometrist will give you a print-out of the specs of your old glasses and your new prescription after you buy glasses, though they may be hesitant to do so if you don’t buy, because some people get a prescription at an optometrist and then buy cheap glasses online. Here are the specs of my new glasses, the ones I’m wearing now that are about 4 years old, and the ones before that which are probably about 8 years old:

        New    4 Years Old  Really Old
R-SPH   0.00    0.00        -0.25
R-CYL  -1.50   -1.50        -1.50
R-AXS    180     179          180
L-SPH   0.00   -0.25        -0.25
L-CYL  -1.00   -1.00        -1.00
L-AXS      5      10          179

The Specsavers website has a good description of what this means [2]. In summary, SPH is whether you are long-sighted (positive) or short-sighted (negative). CYL is for astigmatism, which is where the focal lengths for horizontal and vertical aren’t equal. AXS is the angle for astigmatism. There are other fields which you can read about on the Specsavers page, but they aren’t relevant for me.

The first thing I learned when I looked at these numbers is that until recently I was apparently slightly short-sighted. In a way this isn’t a great surprise given that I spend so much time doing computer work and very little time focusing on things further away. What is a surprise is that I don’t recall optometrists mentioning it to me. Apparently it’s common to become more long-sighted as you get older so being slightly short-sighted when you are young is probably a good thing.

Astigmatism is the reason why I wear glasses (the Wikipedia page has a very good explanation of this [3]). For the configuration of my web browser and GUI (which I believe to be default in terms of fonts for Debian/Unstable running KDE and Google-Chrome on a Thinkpad T420 with a 1600×900 screen) I can read my blog posts very clearly while wearing glasses. Without glasses I can read them with my left eye, but it is fuzzy, and reading with my right eye is like reading the last line of an eye test – something I can do if I concentrate a lot for test purposes but would never do by choice. If I turn my glasses 90 degrees (so that they make my vision worse not better) then my ability to read the text with my left eye is worse than my right eye without glasses. This is as expected, as the 1.00 level of astigmatism in my left eye is doubled when I use the lens in my glasses at 90 degrees to its intended angle.

The AXS numbers are for the angle of astigmatism. I don’t know why some of them are listed as 180 degrees or why that would be different from 0 degrees (if I turn my glasses so that one lens is rotated 180 degrees it works in exactly the same way). The numbers from 179 degrees to 5 degrees may be just a measurement error.


Colin Charles: Speaking in October 2016

Sun 02nd Oct 2016 21:10
  • I’m naturally thrilled to be at Percona Live Europe Amsterdam from Oct 3-5 2016. I have previously talked about some of my sessions, but I think there’s another one on the schedule already.
  • LinuxCon Europe – Oct 4-6 2016. I won’t be there for the whole conference, but hope to make the most of my day on Oct 6th.
  • MariaDB Developer’s meeting – Oct 6-8 2016 – skipping the first day, but will be there all day 2 and 3. I even have a session on day 3, focused on compatibility with MySQL, a topic I deeply care about (session schedule)
  • OSCON London – Oct 17-20 2016 – a bit of a late entrant, I have a talk titled “Forking successfully”, wondering whether a branch makes more sense, how to fork, and what happens when parity comes.
  • October MySQL London Meetup – Oct 17 2016 – I’m already in London, I wouldn’t miss this meetup for the world! There’s no agenda yet, but I think the discussion should be fun.

Russell Coker: Hostile Web Sites

Sun 02nd Oct 2016 17:10

I was asked whether it would be safe to open a link in a spam message with wget. So here are some thoughts about wget security and web browser security in general.

Wget Overview

Some spam messages are designed to attack the recipient’s computer. They can exploit bugs in the MUA, applications that may be launched to process attachments (e.g. MS Office), or a web browser. Wget is a very simple command-line program to download web pages; it doesn’t attempt to interpret or display them.

As with any network facing software there is a possibility of exploitable bugs in wget. It is theoretically possible for an attacker to have a web server that detects the client and has attacks for multiple HTTP clients including wget.

In practice wget is a very simple program and simplicity makes security easier. A large portion of security flaws in web browsers are related to plugins such as flash, rendering the page for display on a GUI system, and javascript – features that wget lacks.

The Profit Motive

An attacker that aims to compromise online banking accounts probably isn’t going to bother developing or buying an exploit against wget. The number of potential victims is extremely low and the potential revenue benefit from improving attacks against other web browsers is going to be a lot larger than developing an attack on the small number of people who use wget. In fact the potential revenue increase of targeting the most common Linux web browsers (Iceweasel and Chromium) might still be lower than that of targeting Mac users.

However if the attacker doesn’t have a profit motive then this may not apply. There are people and organisations who have deliberately attacked sysadmins to gain access to servers (here is an article by Bruce Schneier about the attack on Hacking Team [1]). It is plausible that someone who is targeting a sysadmin could discover that they use wget and then launch a targeted attack against them. But such an attack won’t look like regular spam. For more information about targeted attacks Brian Krebs’ article about CEO scams is worth reading [2].

Privilege Separation

If you run wget in a regular Xterm in the same session you use for reading email etc, then if there is an exploitable bug in wget it can be used to access all of your secret data. But it is very easy to run wget from another account. You can run “ssh otheraccount@localhost” and then run the wget command so that it can’t attack you. Don’t run “su – otheraccount” as it is possible for a compromised program to escape from that.
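
A minimal sketch of that pattern (otheraccount is assumed to be an existing unprivileged account, and the URL is a placeholder):

ssh otheraccount@localhost
wget --tries=1 --timeout=30 'http://example.com/suspicious-link'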

I think that most Linux distributions have supported a “switch user” functionality in the X login system for a number of years. So you should be able to lock your session and then change to a session for another user to run potentially dangerous programs.

It is also possible to use a separate PC for online banking and other high value operations. A 10yo PC is more than adequate for such tasks so you could just use an old PC that has been replaced for regular use for online banking etc. You could boot it from a CD or DVD if you are particularly paranoid about attack.

Browser Features

Google Chrome has a feature to not run plugins unless specifically permitted. This requires a couple of extra mouse actions when watching a TV program on the Internet, but prevents random web sites from using Flash and Java, which are two of the most common vectors of attack. Chrome also has a feature to check a web site against a Google black list before connecting. When I was running a medium size mail server I often had to determine whether URLs being sent out by customers were legitimate or spam; if a user sent out a URL that’s on Google’s blacklist I would lock their account without doing any further checks.

Conclusion

I think that even among Linux users (who tend to be more careful about security than users of other OSs) using a separate PC and booting from a CD/DVD will generally be regarded as too much effort. Running a full featured web browser like Google Chrome and updating it whenever a new version is released will avoid most problems.

Using wget when you have reason to be concerned is a possibility, but not only is it slightly inconvenient, it also often won’t download the content that you want (e.g. in the case of HTML frames).


Simon Lyall: DevOpsDays Wellington 2016 – Day 2, Session 3

Sat 01st Oct 2016 11:10
Ignite talks

Mrinal Mukherjee – How to choose a DevOps tool

Right Tool
– Does the job
– People will accept

Wrong tool
– Never-ending PoC
– Doesn’t do the job

How to pick
– Budget / Licensing
– does it address your pain points
– Learning cliff
– Community support
– API
– Enterprise acceptability
– Config in version control?

Central tooling team
– Pros: standardisation, education
– Cons: constant bottleneck, delays, stifles innovation, not in sync with teams

DevOps != Tool
Tools != DevOps

Tools facilitate it not define it.

Howard Duff – Eric and his blue boxes

Physical example of KanBan in an underwear factory

Lindsey Holmwood – Deepening people to weather the organisation

Note: Lindsey presents really fast so I missed recording a lot from the talk

His Happy, High performing Team -> He left -> 6 months later half of team had left

How do you create a resilient culture?

What is culture?
– Lots of research in organisation psychology
– Edgar Schein – 3 levels of culture
– Artefacts, Values, Assumptions

Artefacts
– Physical manifestations of our culture
– Standups, Org charts, desk layout, documentation
– actual software written
– Easiest to see and adopt

Values
– Goals, strategies and philosophies
– “we will dominate the market”
– “Management if available”
– “nobody is going to be fired for making a mistake”
– lived values vs aspirational values (people have a good nose for bullshit)
– Example: core values of Enron vs reality
– Work as imagined vs work as actually done

Assumptions
– beliefs, perceptions, thoughts and feelings
– exist on an unconscious level
– hard to discern
– “bad outcomes come from bad people”
– “it is okay to withhold information”
– “we can’t trust that team”
– “profits over people”

If we can change our people, we can change our culture

What makes a good team member?

Trust
– Vulnerability
– Assume the best of others
– Aware of their cognitive bias
– Aware of the fundamental attribution error (judge others by actions, judge ourselves by our intentions)
– Aware of hindsight bias. Hindsight bias is your culture killer
– When bad things happen explain in terms of foresight
– Regular 1:1s
– Eliminate performance reviews
– Willing to play devil's advocate

Commit and acting
– Shared goal settings
– Don’t solutioneer
– Provide context about strategy, about desired outcome
What makes a good team?

Influence of hiring process
– Willingness to adapt and adopt working in new team
– Qualify team fit, tech talent then rubber stamp from team lead
– have a consistent script, but be prepared to improvise
– Everyone has the veto power
– If leadership is vetoing at the last minute, that's a systemic problem with team alignment, not the system
– Benefit: team talks to candidate (without leadership present)
– Many different perspectives
– unblock management bottlenecks
– Risk: uncovering dysfunctions and misalignment in your teams
– Hire good people, get out of their way

Diversity and inclusion
– includes: race, gender, sexual orientation, location, disability, level of experience, work hours
– Seek out diverse candidates.
– Sponsor events and meetups
– Make job description clear you are looking for diverse background
– Must include and embrace differences once they actually join
– Safe mechanism for people to raise criticisms, and acting on them

Leadership and Absence of leadership
– Having a title isn’t required
– If the leader steps away things should continue working right
– Team is their own shit umbrella
– empowerment vs authority
– empowerment is giving permission from above (potentially temporary)
– authority is giving power (granting autonomy)

Part of something bigger than the team
– help people build up for the next job
– Guilds in the Spotify model
– Run them like meetups
– Get senior management to come and observe
– What we’re talking about is tech culture

We can change tech culture
– How to make it resist the culture of the rest of the organisation
– Artefacts influence behaviour
– Artefact: fast builds -> Value: make better quality
– Artefact: post incident reviews -> Value: failure is an opportunity for learning

Q: What is a pre-incident review
A: Brainstorm beforehand (eg before a big rollout) what you think might go wrong if something is coming up
then afterwards do another review of what just went wrong

Q: what replaces performance reviews
A: One on ones

Q: Overcoming Resistance
A: Do it and point back at the evidence. Hard to argue with an artifact

Q: First step?
A: One on 1s

Getting started, reading books by Patrick Lencioni:
– Silos, Politics and Turf Wars
– 5 Dysfunctions of a team


Simon Lyall: DevOpsDays Wellington 2016 – Day 2, Session 2

Sat 01st Oct 2016 09:10
Troy Cornwall & Alex Corkin – Health is hard: A Story about making healthcare less hard, and faster!

Maybe title should be “Culture is Hard”

@devtroy @4lexNZ

Working at HealthLink
– Windows running Java stuff
– Out of date and poorly managed
– Deployments manual, thrown over the wall by devs to ops

Team Death Star
– Destroy bad processes
– Change deployment process

Existing Stack
– VMware
– Windows
– Puppet
– PRTG

CD and CI Requirements
– Goal: Time to regression test under 2 mins, time to deploy under 2 mins (from 2 weeks each)
– Puppet too slow to deploy code in a minute or two. App deployment vs config management
– Can’t use (then) containers on Windows so not an option

New Stack
– VMware
– Ubuntu
– Puppet for Server config
– Docker
– Rancher

Smashed the 2 minute target!

But…
– We focused on the tech side and let the people side slip
– Windows shop, hard work even to get a Linux VM at the start
– Devs scared to run on Linux. Some initial deploy problems burnt people
– Lots of different new technologies at once all pushed to devs, no pull from them.

Blackout where we weren’t allowed to talk to them for four weeks
– Should have been a warning sign…

We thought we were ready.
– Ops was not ready

“5 dysfunctions of a team”
– Trust as at the bottom, we didn’t have that

Empathy
– We were aware of this, but didn’t follow though
– We were used to disruption but other teams were not

Note: I’m not sure how the story ended up, they sort of left it hanging.

Pavel Jelinek – Kubernetes in production

Works at Movio
– Software for Cinema chains (eg Loyalty cards)
– 100 million emails per month, millions of SMS and push notifications (fewer push because people hate those)

Old Stack
– Started with mysql and php application
– AWS from the beginning
– On largest aws instance but still slow.

Decided to go with Microservices
– Put stuff in Docker
– Used Jenkins, Puppet, own Docker registry, Rundeck (see blog post)
– Devs didn’t like writing puppet code and other manual setup

Decided to go to new container management at start of 2016
– Was pushing for Nomad but devs liked Kubernetes

Kubernetes
– Built in ports, HA, LB, Health-checks

Concepts in Kub
– POD – one or more containers
– Deployment, Daemon, Pet Set – Scaling of a POD
– Service – resolvable name, load balancing
– ConfigMap, Volume, Secret – Extended Docker Volume

Devs look after some kub config files
– Brings them closer to how stuff is really working

Demo
– Using kubectl to create pod in his work’s lab env
– Add load balancer in front of it
– Add a configmap to update the container’s nginx config
– Make it public
– LB replicas, Rolling updates
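
A rough sketch of those demo steps with kubectl of that era (deployment name, image and config file are made up):

kubectl run demo --image=nginx --replicas=2                     # create a deployment and PODs
kubectl expose deployment demo --port=80 --type=LoadBalancer    # service with a load balancer
kubectl create configmap demo-nginx --from-file=nginx.conf      # config for the container
kubectl scale deployment demo --replicas=4                      # more replicas behind the LB
kubectl set image deployment/demo demo=nginx:1.11               # rolling update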

Best Practices
– lots of small containers are better
– log to container stdout, preferably as JSON
– Test and know your resource requirements (at Movio, dev teams specify, check and adjust)
– Be aware of the node sizes
– Stateless please
– if not stateless then clustered please
– Must handle unexpected immediate restarts


James Morris: Linux Security Summit 2016 Wrapup

Fri 30th Sep 2016 23:09

Here’s a summary of the 2016 Linux Security Summit, which was held last month in Toronto.

Presentation slides are available at http://events.linuxfoundation.org/events/archive/2016/linux-security-summit/program/slides.

This year, videos were made of the sessions, and they may be viewed at https://www.linux.com/news/linux-security-summit-videos — many thanks to Intel for sponsoring the recordings!

LWN has published some excellent coverage:

This is a pretty good representation of the main themes which emerged in the conference: container security, kernel self-protection, and integrity / secure boot.

Many of the core or low level security technologies (such as access control, integrity measurement, crypto, and key management) are now fairly mature. There’s more focus now on how to integrate these components into higher-level systems and architectures.

One talk I found particularly interesting was Design and Implementation of a Security Architecture for Critical Infrastructure Industrial Control Systems in the Era of Nation State Cyber Warfare. (The title, it turns out, was a hack to bypass limited space for the abstract in the cfp system).  David Safford presented an architecture being developed by GE to protect a significant portion of the world’s electrical grid from attack.  This is being done with Linux, and is a great example of how the kernel’s security mechanisms are being utilized for such purposes.  See the slides or the video.  David outlined gaps in the kernel in relation to their requirements, and a TPM BoF was held later in the day to work on these.  The BoF was reportedly very successful, as several key developers in the area of TPM and Integrity were present.

#linuxsecuritysummit TPM BOF session pic.twitter.com/l1ko9Meiud

— LinuxSecuritySummit (@LinuxSecSummit) August 25, 2016

Attendance at LSS was the highest yet with well over a hundred security developers, researchers and end users.

Special thanks to all of the LF folk who manage the logistics for the event.  There’s no way we could stage something on this scale without their help.

Stay tuned for the announcement of next year’s event!

