Planet Linux Australia
This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of
choice at both was overwhelming.
Some time ago I read Skunk Works, a very good “engineering” read.
In the section on the SR-71, the author Ben Rich made a statement that has puzzled me ever since, something like: “Most of the engine's thrust is developed by the intake”. I didn't get it – surely an intake is a source of drag rather than thrust? I have since read the same statement about the Concorde and its inlets.
Lately I've been watching a lot of AgentJayZ Gas Turbine videos. This guy services gas turbines for a living and is kind enough to present a lot of intricate detail and answer questions from people. I find his presentation style and personality really engaging, and get a buzz out of his enthusiasm, love for his work, and willingness to share all sorts of geeky, intricate details.
So inspired by AgentJayZ I did some furious Googling and finally worked out why supersonic planes develop thrust from their inlets. I don’t feel it’s well explained elsewhere so here is my attempt:
- Gas turbine jet engines only work if the air is moving into the compressor at subsonic speeds. So the job of the inlet is to slow the air down from say Mach 2 to Mach 0.5.
- When you slow down a stream of air, the pressure increases. Like when you feel the wind pushing on your face on a bike. Imagine (don't try) the pressure on your arm hanging out of a car window at 100 km/hr. Now imagine the pressure at 3000 km/hr. Lots. Around a 40 times increase for the inlets used in supersonic aircraft (see the worked example after this list).
- So now we have this big box (the inlet chamber) full of high pressure air. Like a balloon this pressure is pushing equally on all sides of the box. Net thrust is zero.
- If we untie the balloon neck, the air can escape, and the balloon shoots off in the opposite direction.
- Back to the inlet on the supersonic aircraft. It has a big vacuum cleaner at the back – the compressor inlet of the gas turbine. It is sucking air out of the inlet as fast as it can. So – the air can get out, just like the balloon, and the inlet and the aircraft attached to it are thrust in the opposite direction. That's how an inlet generates thrust.
- While there is also thrust from the gas turbine and its afterburner, it turns out that pressure release in the inlet contributes the majority of the thrust. I don't know why it's the majority. Guess I need to do some more reading and get my gas equations on.
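Here's a quick sanity check on that "40 times" figure, assuming ideal isentropic compression (a real inlet recovers somewhat less total pressure). The ram pressure rise for a stream slowed from Mach $M$ to rest is

$$\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\gamma/(\gamma - 1)}, \qquad \gamma = 1.4$$

so at $M = 3$ we get $(1 + 0.2 \times 9)^{3.5} = 2.8^{3.5} \approx 37$ – right in the ballpark of the 40-fold increase quoted for these inlets.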
Another important point – the aircraft really does experience that extra thrust from the inlet, i.e. it is transmitted to the aircraft by the engine mounts on the inlet, and the mounts must be designed with those loads in mind. This helps me understand the definition of “thrust from the inlet”.
I recently went to Ottawa for the FWD50 conference run by Rebecca and Alistair Croll. It was my first time in Canada, and it combined a number of my favourite things. I was at an incredible conference with a visionary and enthusiastic crowd made up of government (international, federal, provincial and municipal), technologists, civil society, industry and academia, and the calibre of the discussions and planning for greatness was inspiring.
There were a number of people I have known for years but never met in meatspace, and equally there were a lot of new faces doing amazing things. I got to spend time with the excellent people at the Treasury Board of Canada Secretariat, including the Canadian Digital Service and the Office of the CIO, and by wonderful coincidence I got to see (briefly) the folk from the Open Government Partnership who happened to be in town. Finally I got to visit the gorgeous Canadian Parliament, see their extraordinary library, and wander past some Parliamentary activity, which always helps me feel more connected to (and therefore empowered to contribute to) democracy in action.
Thank you to Alistair Croll, who invited me to keynote and who, with Rebecca Croll, created a truly excellent event with a diverse range of ideas and voices exploring where we could or should go as a society in future. I hope it is a catalyst for great things to come in Canada and beyond.
For those in Canada who are interested in the work in New Zealand, I strongly encourage you to tune into the D5 event in February, which will have some of our best initiatives on display, and to follow our new Minister for Broadband, Digital and Open Government (such an incredible combination in a single portfolio), Minister Clare Curran. You can follow our “Service Innovation” work at our blog or by subscribing to our mailing list. I also encourage you to read the inspiring “People's Agenda” by a civil society organisation in NZ, which co-designed a vision for the type of society desired in New Zealand's future.
- One of the great delights of this trip was seeing in person, for the first time, a number of people I know from the early “Gov 2.0” days (10 years ago!). It was particularly great to see Thom Kearney from Canada's TBS and his team, Alex Howard (@digiphile) who is now a thought leader at the Sunlight Foundation, Olivia Neal (@livneal) from the UK CTO office/GDS, and Joe Powell from OGP, as well as a few friends from Linux and Open Source (Matt and Danielle amongst others).
- The speech by the Hon Scott Brison, the Canadian minister responsible for the Treasury Board Secretariat (which is responsible for digital government), was quite interesting, and I had the chance to briefly chat to him and his advisor at the speakers' drinks afterwards about the challenges of changing government.
- Meeting with Canadian public servants from a variety of departments including the transport department, innovation and science, as well as the Treasury Board Secretariat and of course the newly formed Canadian Digital Service.
- Meeting people from a range of sub-national governments, including the excellent folk from Peel and Hillary Hartley from Ontario, and hearing about the quite inspiring work to transform organisational structures, digital and other services, the adoption of microservice-based infrastructure, and the use of “labs” for experimentation.
- It was fun meeting some CIO/CTOs from Canada, Estonia, UK and other jurisdictions, and sharing ideas about where to from here. I was particularly impressed with Alex Benay (Canadian CIO) who is doing great things, and with Siim Sikkut (Estonian CIO) who was taking the digitisation of Estonia into a new stage of being a broader enabler for Estonians and for the world. I shared with them some of my personal lessons learned around digital iteration vs transformation, including from the DTO in Australia (which has changed substantially, including a name change since I was there). Some notes of my lessons learned are at http://pipka.org/2017/04/03/iteration-or-transformation-in-government-paint-jobs-and-engines/.
- My final highlight was how well my keynote and other talks were received. People were really inspired to think big picture, and I hope it was useful in driving some of those conversations about where we want to collectively go and how we can better collaborate across geopolitical lines.
Below are some photos from the trip, and some observations from specific events/meetings.

My FWD50 Keynote – the Tipping Point
I was invited to give a keynote at FWD50 about the tipping point we have gone through and how we, as a species, need to embrace the major paradigm shifts that have already happened, and decide what sort of future we want and work towards that. I also suggested some predictions about the future and examined the potential roles of governments (and public sectors specifically) in the 21st century. The slides are at https://docs.google.com/presentation/d/1coe4Sl0vVA-gBHQsByrh2awZLa0Nsm6gYEqHn9ppezA/edit?usp=sharing and the full speech is on my personal blog at http://pipka.org/2017/11/08/fwd50-keynote-the-tipping-point.
I also gave a similar keynote speech at the NetHui conference in New Zealand the week after, which was recorded for those who want to see or hear the content at https://2017.nethui.nz/friday-livestream

The Canadian Digital Service
The Canadian Digital Service was only set up about a year ago and has a focus on building great services for users, with service design and user needs at the heart of their work. They have some excellent people with diverse skills, and we spoke about what is needed to do “digital government” and what that even means, and about the parallels and interdependencies between open government and digital government. They spoke about an early piece of work they did, before getting set up, to run a national consultation about the needs of Canadians (https://digital.canada.ca/beginning-the-conversation/), which had some interesting insights. They were very focused on open source, standards, building better ways to collaborate across government(s), and building useful things. They also spoke about their initial work around capability assessment and development across the public sector. I spoke about my experience in Australia and New Zealand, but also about working with and talking to teams around the world. I gave an informal outline of the work of our Service Innovation and Service Integration team in DIA, which was helpful for getting some feedback and peer review, and they were very supportive and positive. It was an excellent discussion, thank you all!

CivicTech meetup
I was invited to talk to the CivicTech meetup group in Ottawa (https://www.meetup.com/YOW_CT/events/243891738/) about the roles of government and citizens into the future. I gave a quick version of the keynote I gave at linux.conf.au 2017 (pipka.org/2017/02/18/choose-your-own-adventure-keynote/), which explores paradigm shifts and the roles of civic hackers and activists in helping forge the future whilst also considering what we should (and shouldn't) take into the future with us. It included my amusing change.log of the history of humans and threw down the gauntlet for civic hackers to lead the way and be the light.

CDS Halloween Mixer
The Canadian Digital Service does a “mixer” social event every 6 weeks, and this one landed on Halloween, which was also my first ever Halloween celebration. I had a traditional “beavertail”, which is a flat cinnamon doughnut with lemon – amazing! It was fun to hang out, but of course I had to retire early from jet lag.

Workshop with Alistair
On the first day of FWD50 I helped Alistair Croll with a day-long workshop exploring the future. We thought we'd have a small interactive group and ended up getting 300, so it was a great mind meld across different ideas, sectors, technologies, challenges and opportunities. I gave a talk on culture change in government, largely influenced by a talk from a few years ago called “Collaborative innovation in the public service: Game of Thrones style” (http://pipka.org/2015/01/04/collaborative-innovation-in-the-public-service-game-of-thrones-style/). People responded well and it created a lot of discussion about the cultural challenges and barriers in government.

Thanks
Finally, just a quick shout out and thanks to Alistair for inviting me to such an amazing conference, to Rebecca for getting me organised, to Danielle and Matthew for your companionship and support, to everyone for making me feel so welcome, and to the following folk who inspired, amazed and colluded with me, in chronological order of meeting: Sean Boots, Stéphane Tourangeau, Ryan Androsoff, Mike Williamson, Lena Trudeau, Alex Benay (Canadian Gov CIO), Thom Kearney and all the TBS folk, Siim Sikkut from Estonia, James Steward from the UK, and all the other folk I met at FWD50, in between feeling so extremely unwell!
Thank you Canada, I had a magnificent time and am feeling inspired!
In order to be able to use Certbot's webroot plugin, I need to be able to serve a randomly-named file from the webroot of each mirror simultaneously. The reason is that the verifier will connect to seccdn.libravatar.org, but there's no way to know which of the DNS entries it will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.
Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration
In order to serve the certbot validation files separately from the main service, I created a new hostname, acme.libravatar.org, pointing to the main Libravatar server:

    CNAME acme libravatar.org.

Mirror Configuration
On each mirror, I created a new Apache vhost on port 80 to proxy the acme challenge files by putting the following in the existing port 443 vhost config (/etc/apache2/sites-available/libravatar-seccdn.conf):

    <VirtualHost *:80>
        ServerName __SECCDNSERVERNAME__
        ServerAdmin __WEBMASTEREMAIL__
        ProxyPass /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
        ProxyPassReverse /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
    </VirtualHost>
Then I enabled the right modules and restarted Apache:

    a2enmod proxy
    a2enmod proxy_http
    systemctl restart apache2.service
Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

    #!/bin/sh
    cd /etc/libravatar
    /usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true

Main Configuration
On the main server, I created a new webroot:

    mkdir -p /var/www/acme/.well-known
and a new vhost in /etc/apache2/sites-available/acme.conf:

    <VirtualHost *:80>
        ServerName acme.libravatar.org
        ServerAdmin email@example.com
        DocumentRoot /var/www/acme
        <Directory /var/www/acme>
            Options -Indexes
        </Directory>
    </VirtualHost>
before enabling it and restarting Apache:

    a2ensite acme
    systemctl restart apache2.service

Registering a new TLS certificate
With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

    certbot certonly --webroot -w /var/www/acme -d seccdn.libravatar.org
The resulting certificate will then be automatically renewed before it expires.
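As a side note, certbot's dry-run mode is a handy way to confirm that renewal through the proxy chain will actually work. This isn't part of the original setup, just a sanity check worth running once:

    certbot renew --dry-run

It performs the full challenge against the Let's Encrypt staging servers without touching the real certificate.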
Earlier this year I asked for some help. Steve Sampson K5OKC stepped up, and has done some fine work in porting the OFDM modem from Octave to C. I was so happy with his work I asked him to write a guest post on my blog on his experience and here it is!
On a personal level, working with Steve was a great experience for me. I always enjoy and appreciate other people working on FreeDV with me; however, it is quite rare to have people help out with programming. As you will see, Steve enjoyed the work and learned a great deal in the process.
The Problem with Porting
But first some background on the process involved. In signal processing it is common to develop algorithms in a convenient domain-specific scripting language such as GNU Octave. These languages can do a lot with one line of code and have powerful visualisation tools.
Usually, the algorithm then needs to be ported to a language suitable for real time implementation. For most of my career that has been C. For high speed operation on FPGAs it might be VHDL. It is also common to port algorithms from floating point to fixed point so they can run on low cost hardware.
We don't develop algorithms directly in the target real-time language as signal processing is hard. Bugs are difficult to find and correct. They may be 10 or 100 times harder (in terms of person-hours) to find in C or VHDL than in, say, GNU Octave.
So a common task in my industry is porting an algorithm from one language to another. Generally the process involves taking a working simulation and injecting a bunch of hard to find bugs into the real time implementation. It’s an excellent way for engineering companies to go bankrupt and upset customers. I have seen and indeed participated in this process (screwing up real time implementations) many times.
The other problem is that algorithm development is hard, and not many people can do it. Such people are hard to find, cost a lot of money to employ, and can be very nerdy (like me). So if you can find a way to get people with C, but not high-level DSP, skills to work on these ports – then it's a huge win from a resourcing perspective. The person doing the C port learns a lot, and managers are happy as there is some predictability in the engineering process and schedule.
The process I have developed allows people with C coding (but not DSP) skills to port complex signal processing algorithms from one language to another. In this case it's from GNU Octave to floating point C. The figures below show how it all fits together.
Here is a sample output plot, in this case a buffer of received samples in the demodulator. This signal is plotted in green, and the difference between C and Octave in red. The red line is all zeros, as it should be.
This particular test generates 12 plots. Running it is easy:

    $ cd codec2-dev/octave
    $ ../build_linux/unittest/tofdm
    $ octave
    >> tofdm
    W........................: OK
    tx_bits..................: OK
    tx.......................: OK
    rx.......................: OK
    rxbuf in.................: OK
    rxbuf....................: OK
    rx_sym...................: FAIL (0.002037)
    phase_est_pilot..........: FAIL (0.001318)
    rx_amp...................: OK
    timing_est...............: OK
    sample_point.............: OK
    foff_est_hz..............: OK
    rx_bits..................: OK
This shows a fail case – two vectors just failed, so some further inspection is required.
Key points are:
- We make sure the C and Octave versions are identical. Near enough is not good enough. For floating point I set a tolerance like 1 part in 1000 (see the sketch after this list). For fixed point ports it can be bit exact – zero difference.
- We dump a lot of internal states, not just the inputs and outputs. This helps point us at exactly where the problem is.
- There is an automatic checklist to give us pass/fail reports of each stage.
- This process is not particularly original. It's not rocket science, but getting people (especially managers) to support and follow such a process is. This part – the human factor – is really hard to get right.
- The same process can be used between any two versions of an algorithm. Fixed and float point, fixed point C and VHDL, or a reference implementation and another one that has memory or CPU optimisations. The same basic idea: take a reference version and use software to compare it.
- It makes porting fun and strangely satisfying. You get constant forward progress and no hard to find bugs. Things work when they hit real time. After months of tough, brain hurting, algorithm development, I find myself looking forward to the productivity of the porting phase.
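To illustrate the tolerance check, here is a minimal sketch of the comparison idea in C. The names are hypothetical (the real checks live in tofdm.c and tofdm.m); the 1e-3 tolerance matches the "1 part in 1000" figure above:

    #include <math.h>
    #include <stdio.h>

    /* Compare a C output vector against an Octave reference dump.
       Prints a checklist line and returns 1 on pass, 0 on fail. */
    static int check_vector(const char *name, const float *c_out,
                            const float *octave_ref, int n, float tol) {
        float max_rel = 0.0f;
        for (int i = 0; i < n; i++) {
            float denom = fabsf(octave_ref[i]) > 1e-12f ? fabsf(octave_ref[i]) : 1.0f;
            float rel = fabsf(c_out[i] - octave_ref[i]) / denom;
            if (rel > max_rel) max_rel = rel;
        }
        if (max_rel < tol) { printf("%s: OK\n", name); return 1; }
        printf("%s: FAIL (%f)\n", name, max_rel);
        return 0;
    }

Run over every dumped internal state, this produces exactly the kind of pass/fail checklist shown above.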
In this case Steve was the man doing the C port. Here is his story…..
Initial Code Construction
I'm a big fan of the Integrated Development Environment (IDE). I've used various versions over the years, but these days mostly use the NetBeans IDE. It is my current favorite, as it works well with C and Java.
When I take on a new programming project I just create a new IDE project and paste in whatever I want to translate, and start filling-in the Java or C code. In the OFDM modem case, it was the Octave source code ofdm_lib.m.
Obviously this code won’t do anything or compile, but it allows me to write C functions for each of the Octave code blocks. Sooner or later, all the Octave code is gone, and only C code remains.
I have very little experience with Octave, but I did use some Matlab in college. It was a new system just being introduced when I was near graduation. I spent a little time trying to make the C program as dynamic as the Octave code, but it became mired in memory allocation.
Once David approved the decision for me to go with fixed configuration values (Symbol rate, Sample rate, etc), I was able to quickly create the header files. We could adjust these header files as we went along.
One thing about Octave is that you don't have to specify the array sizes. So for the C port, one of my tasks was to figure out the array sizes for all the data structures. In some cases I just typed the array name in Octave, it printed out its value, and presto, I now knew the size. Inspector Clouseau wins again!
The include files were pretty much patterned on those of the FDMDV and COHPSK modems.
Code Starting Point
When it comes to modems, the easiest thing to create first is the modulator. That proved true in this case as well. I did have some trouble early on, because of a bug I created in my testing code. My spectrum looked different than David's. Once this bug was ironed out, the spectrums looked similar. David recommended I create a test program, like he had done for other modems.
The output may look similar, but who knows really? I’m certainly not going to go line by line through comma-separated values, and anyway Octave floating point values aren’t the same as C values past some number of decimal points.
This testing program was a little over my head, and since David has written many of these before, he decided to just crank it out and save me the learning curve.
We made a few data structure changes to the C program, but generally it was straightforward. Basically we had the outputs of the C and Octave modulators, and the difference was shown by their different colors. Luckily, we finally got no differences.
As I was writing the modulator, I also had to try and understand this particular OFDM design. I deduced that it was basically eighteen (18) carriers that were grouped into eight (8) rows. The first row was the complex “pilot” symbols (BPSK), and the remaining 7 rows were the 112 complex “data” symbols (QPSK).
But there was a little magic going on, in that the pilots were 18 columns, but the data was only using 16. So in the 7 rows of data, the first and last columns were set to a fixed complex “zero.”
This produces the 16 x 7 or 112 complex data symbols. Each QPSK symbol is two-bits, so each OFDM frame represents 224 bits of data. It wasn’t until I began working on the receiver code that all of this started to make sense.
With this information, I was able to drive the modulator with the correct number of bits, and collect the output and convert it to PCM for testing with Audacity.
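The frame geometry above boils down to a few constants. Here is an illustrative sketch of the sort of header this produces; the names are hypothetical (the real codec2 headers use their own naming), but the numbers follow directly from the description:

    /* Illustrative OFDM frame constants; names are hypothetical. */
    #define OFDM_CARRIERS       18              /* columns incl. edge zeros  */
    #define OFDM_DATA_COLS      16              /* columns carrying data     */
    #define OFDM_ROWS           8               /* 1 pilot row + 7 data rows */
    #define OFDM_DATA_SYMBOLS   (OFDM_DATA_COLS * 7)      /* 112 QPSK syms   */
    #define OFDM_BITS_PER_FRAME (OFDM_DATA_SYMBOLS * 2)   /* 224 bits        */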
DFT Versus FFT
This OFDM modem uses a DFT and IDFT. This greatly simplifies things. All I have to do is a multiply and summation. With only 18 carriers, this is easily fast enough for the task. We just zip through the 18 carriers, and return the frequency or time domain. Obviously this code can be optimized for firmware later on.
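Since there are only 18 carriers, the direct DFT really is just a multiply and summation per carrier. A minimal sketch of the idea in C99 (the names and the carrier-frequency parameterisation are illustrative, not the modem's actual code):

    #include <complex.h>
    #include <math.h>

    /* Direct DFT: correlate one row of time samples against each carrier. */
    static void direct_dft(complex float *freq, const complex float *time,
                           const float *carrier_hz, int n_carriers,
                           int n_samples, float fs) {
        for (int c = 0; c < n_carriers; c++) {
            complex float acc = 0.0f;
            for (int n = 0; n < n_samples; n++)
                acc += time[n] * cexpf(-I * 2.0f * (float)M_PI * carrier_hz[c] * n / fs);
            freq[c] = acc / n_samples;
        }
    }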
The final part of the modulator is the need for a guard period called the Cyclic Prefix (CP). By making a copy of the last 16 of the 144 complex time-domain samples and putting them at the head, we produce 160 complex samples for each row, giving us 160 x 8 rows, or 1280 complex samples every OFDM frame. We send this to the transmitter.
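The CP insertion is just a couple of copies; a sketch under the frame sizes above (hypothetical names again):

    #include <complex.h>
    #include <string.h>

    /* Prepend the last 16 of 144 time samples as a cyclic prefix,
       yielding 160 samples per row, 160 x 8 = 1280 per OFDM frame. */
    static void add_cyclic_prefix(complex float out[160],
                                  const complex float in[144]) {
        memcpy(out, &in[144 - 16], 16 * sizeof(complex float));   /* guard  */
        memcpy(&out[16], in, 144 * sizeof(complex float));        /* symbol */
    }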
There will probably need to be some filtering, and a function for adjusting gain, in the API.
That left the demodulator, which looked much more complex. It took me quite a long time just to get the Octave into some semblance of C. One problem was that Octave arrays start at 1 and C arrays start at 0. In my initial translation, I just ignored this. I told myself we would find the right numbers when we started pushing data through it.
I won't kid anyone, I had no idea what was going on, but it didn't matter. Slowly, after the basic code was doing something, I began to figure out the function of various parts. Again though, we had no idea if the C code was producing the same data as the Octave code. We needed some testing functions, and these were added to tofdm.m and tofdm.c. David wrote this part of the code, and I massaged the C modem code until one day the data were the same. It was pretty exciting to see it passing tests.
One thing I found was that you can reach an underflow with single precision. Whenever I was really stumped, I would change the single precision to a double, and then see where the problem was. I was trying to stay completely within single precision floating point, because this modem is going to be embedded firmware someday.
There was no way that I could have reached a successful conclusion without the testing code. As a matter of fact, a lot of programming errors were found. You would be surprised at how much damage a misplaced parenthesis can do to a math equation! I've had enough math to know how to do the basic operations involved in DSP. I'm sure that as this code is ported to firmware, it can be simplified, optimized, and unrolled a bit for added speed. At this point, we just want valid waveforms.
C99 and Complex Math
Working with David was pretty easy, even though we are almost 16 time-zones apart. We don’t need an answer right now, and we aren’t working on a deadline. Sometimes I would send an email, and then four hours later I would find the problem myself, and the morning was still hours away in his time zone. So he sometimes got some strange emails from me that didn’t require an answer.
David was hands-off on this project, and doesn't seem to be a control freak, so he just let me go at it, and then teamed up when we had to merge things to get comparable output. Sometimes a simple answer was all I needed to blow through an Octave brain teaser.
I've been working in C99 for the past year. For those who haven't kept up: 1999 was a long time ago, but still, we tend to program C in the same old way. In working with complex numbers though, the C library has been greatly expanded. For example, to multiply two complex numbers, you type “A * B”. That's it. No need to worry about a simulated complex number using a structure. If you need a complex exponent, you type “cexp(I * W)” where “I” is the sqrt(-1). All of this is hidden away inside the compiler.
For me, this became useful when translating Octave to C. Most of the complex functions have the same name. The only thing I had to do was create a matrix multiply and a summation function for the DFT. The rest was straightforward. Still a lot of work, but it was enjoyable work.
Where we might have problems interfacing to legacy code, there are functions in the library to extract the real and imaginary parts. We can easily interface to the old structure method. You can see examples of this in the testing code.
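A tiny self-contained example of what this looks like in practice, including the creal()/cimag() accessors mentioned above for interfacing with struct-based legacy code (this is just a demo, not modem code):

    #include <complex.h>
    #include <stdio.h>

    int main(void) {
        complex float a = 1.0f + 2.0f * I;  /* no struct needed        */
        complex float w = cexpf(I * 0.5f);  /* unit phasor e^{j0.5}    */
        complex float c = a * w;            /* native complex multiply */
        printf("c = %f + j%f\n", crealf(c), cimagf(c));
        return 0;
    }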
Looking back, I don’t think I would do anything different. Translating code is tedious no matter how you go. In this case Octave is 10 times easier than translating Fortran to C, or C to Java.
The best course is where you can start seeing some output early on. This keeps you motivated. I was a happy camper when I could look and listen to the modem using Audacity. Once you see progress, you can’t give up, and want to press on.
The Bit Exact Fairy Tale is a story of fixed point porting. Writing this helped me vent a lot of steam at the time – I’d just left a company that was really good at messing up these sorts of projects.
The cohpsk_frame_design spreadsheet includes some design calculations on the OFDM modem and a map of where the data and pilot symbols go in time and frequency.
Fixed Point Scaling – Low Pass Filter example – is consistently one of the most popular posts on this blog. It’s a worked example of a fixed point port of a low pass filter.
Linux Users of Victoria (LUV) Announce: LUV December 2017 end of year celebration: Meetup Mixup Melbourne
There will be no December workshop, but there will be an end of year party in conjunction with other Melbourne groups including Buzzconf, Electronic Frontiers Australia, Hack for Privacy, the Melbourne PHP Users Group, Open Knowledge Australia, PyLadies Melbourne and R-Ladies Melbourne.
Please note that there's an $8.80 cover fee, which includes a drink and nibbles, and bookings are essential as spaces are limited. Tickets are available at https://melbourne.meetupmixup.com/
Linux Users of Victoria is a subcommittee of Linux Australia.

December 21, 2017 - 18:00
Communication is a skill most of us practice every day.
Often without realising we're doing it.
I take my communication skills for granted. I'm not a brilliant communicator, not the best by any means, but probably, yes, I'm a bit above average. It wasn't until a colleague remarked on my presentation skills in particular that I remembered I'd actually been taught a thing or two about being on a stage. First as a dancer, then as a performer, and finally as a theatre director.
It's called Stagecraft. There's a lot to it, but when mastering stagecraft, you learn to know yourself. To use your very self as a tool to amplify your message. Where and how you stand, awareness of the space, of the light, of the size of the room, and of how to project your voice so all will hear you. All these facets need polish if you want your message to shine.
But you also need to learn to know your audience. Why are they there? What have they come to hear? What do they need to learn? How will they be transformed? Tuning your message to serve your audience is the real secret to giving a great presentation.
But presenting is just one of many communication skills. It's probably the one people tell me instils the most fear. Then there's writing, of course. I envy writers! I would love to write more. I think of these as the "broadcast" skills. The "loud" skills. But the most important communication skill, in my view, is Listening.
As I've developed new skills as a business analyst, I've come to understand that Listening is the communication skill I need to improve most.
I was delighted to read this article by Tammy Lenski on the very morning I was to give this comms skills talk at DrupalSouth. Tammy refers to 5 Types of Listening identified in a talk given by Stephen Covey some years back. She says:
"He described a listening continuum that runs from ignoring all the way over on the left, to pretend listening (patronizing), then selective listening, then attentive listening, and finally to empathic listening on the right."
I think this is really useful. If we are to get better at listening, we need to study it. But more importantly, we need to practice it. "Practice makes perfect". Kathy Sierra talks a lot about the power of intentional practice in her book Badass: Making Users Awesome.
Communication is a huge, huge topic to try and cover in a conference talk, so I tried to distil it down into three elements:
- The what.
- The how.
- And the why.
The what is the message itself. The how is the channel, the method, the style, or the medium, as Marshall McLuhan said. And finally there's the why: the intent, the purpose, or the reason for communicating. I believe we need to understand the "why" of what we're saying, or hearing, if it is to be of any benefit.
Here are my slides: Communication skills for everyone (DrupalSouth edition) from Donna Benjamin.
We recently had a 5.94kW solar PV system installed – twenty-two 270W panels (14 on the northish side of the house, 8 on the eastish side), with an ABB PVI-6000TL-OUTD inverter. Naturally I want to be able to monitor the system, but this model inverter doesn't have an inbuilt web server (which, given the state of IoT devices, I'm actually kind of happy about); rather, it has an RS-485 serial interface. ABB sell addon data logger cards for several hundred dollars, but Rick from Affordable Solar Tasmania mentioned he had another client who was doing monitoring with a little Linux box and an RS-485 to USB adapter. As I had a Raspberry Pi 3 handy, I decided to do the same.
Step one: Obtain an RS-485 to USB adapter. I got one of these from Jaycar. Yeah, I know I could have got one off eBay for a tenth the price, but Jaycar was only a fifteen minute drive away, so I could start immediately (I later discovered various RS-485 shields and adapters exist specifically for the Raspberry Pi – in retrospect one of these may have been more elegant, but by then I already had the USB adapter working).
Step two: Make sure the adapter works. It can do RS-485 and RS-422, so it’s got five screw terminals: T/R-, T/R+, RXD-, RXD+ and GND. The RXD lines can be ignored (they’re for RS-422). The other three connect to matching terminals on the inverter, although what the adapter labels GND, the inverter labels RTN. I plugged the adapter into my laptop, compiled Curt Blank’s aurora program, then asked the inverter to tell me something about itself:
Interestingly, the comms seem slightly glitchy. Just running aurora -a 2 -e /dev/ttyUSB0 always results in either “No response after 1 attempts” or “CRC receive error (1 attempts made)”. Adding “-Y 4” makes it retry four times, which is generally rather more successful. Ten retries is even more reliable, although still not perfect. Clearly there's some tweaking/debugging to do here somewhere, but at least I'd confirmed that this was going to work.
So, on to the Raspberry Pi. I grabbed the openSUSE Leap 42.3 JeOS image and dd'd that onto a 16GB SD card. Booted the Pi, waited a couple of minutes with a blank screen while it did its firstboot filesystem expansion thing, logged in, fiddled with network and hostname configuration, rebooted, and then got stuck at GRUB saying “error: attempt to read or write outside of partition”.
Next I needed an RPM of the aurora CLI, so I built one on OBS, installed it on the Pi, plugged the Pi into the USB adapter, and politely asked the inverter to tell me a bit more about itself:
Everything looked good, except that the booster temperature was reported as being 4294967296°C, which seemed a little high. Given that translates to 0x100000000, and that the south wall of my house wasn't on fire, I rather suspected another comms glitch. Running aurora -a 2 -Y 4 -d 0 /dev/ttyUSB0 a few more times showed that this was an intermittent problem, so it was time to make a case for the Pi that I could mount under the house on the other side of the wall from the inverter.
I picked up a wall mount snap fit black plastic box, some 15mm x 3mm screws, matching nuts, and 9mm spacers. The Pi I would mount inside the box part, rather than on the back, meaning I can just snap the box-and-Pi off the mount if I need to bring it back inside to fiddle with it.
Then I had to measure up and cut holes in the box for the ethernet and USB ports. The walls of the box are 2.5mm thick, which plus 9mm for the spacers meant the bottom of the Pi had to be 11.5mm from the bottom of the box. I measured up, then used a Dremel tool to make the holes, then cleaned them up with a file. The hole for the power connector I did by eye later, after the board was in about the right place.
I didn’t measure for the screw holes at all, I simply drilled through the holes in the board while it was balanced in there, hanging from the edge with the ports. I initially put the screws in from the bottom of the box, dropped the spacers on top, slid the Pi in place, then discovered a problem: if the nuts were on top of the board, they’d rub up against a couple of components:
So I had to put the screws through the board, stick them there with Blu Tack, turn the Pi upside down, drop the spacers on top, and slide it upwards into the box, getting the screws as close as possible to the screw holes, flip the box the right way up, remove the Blu Tack and jiggle the screws into place before securing the nuts. More fiddly than I’d have liked, but it worked fine.
One other kink with this design is that it’s probably impossible to remove the SD card from the Pi without removing the Pi from the box, unless your fingers are incredibly thin and dexterous. I could have made another hole to provide access, but decided against it as I’m quite happy with the sleek look, this thing is going to be living under my house indefinitely, and I have no plans to replace the SD card any time soon.
All that remained was to mount it under the house. Here’s the finished install:
After that, I set up a cron job to scrape data from the inverter every five minutes and dump it to a log file. So far I've discovered that there's enough sunlight by about 05:30 to wake the inverter up. This morning we'd generated 1kWh by 08:35, 2kWh by 09:10, 8kWh by midday, and as I'm writing this at 18:25, a total of 27.134kWh so far today.
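The cron job itself is nothing fancy. Here is a sketch of the sort of entry involved, reusing the retry flags that worked above (the log path and output handling are illustrative, not my exact setup):

    # /etc/crontab: poll the inverter every five minutes, append to a log
    */5 * * * * root /usr/bin/aurora -a 2 -Y 4 -d 0 /dev/ttyUSB0 >> /var/log/solar.log 2>&1

Still to do: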
- Figure out WTF is up with the comms glitches
- Graph everything and/or feed the raw data to pvoutput.org
There will be a new European version of the Linux Security Summit for 2018, in addition to the established North American event.
The dates and locations are as follows:
- North America: August 27-28, Vancouver, Canada
- Europe: October 25-26, Edinburgh, UK
Stay tuned for CFP announcements!
I wanted to setup a mail service on a staging server that would send all outgoing emails to a local mailbox. This avoids sending emails out to real users when running the staging server using production data.
First, install the postfix mail server:

    apt install postfix
and choose the "Local only" mail server configuration type.
Then change the following in /etc/postfix/main.cf:

    default_transport = error
to:

    default_transport = local:root
and restart postfix:

    systemctl restart postfix.service
Once that's done, you can find all of the emails in /var/mail/root.
So you can install mutt:

    apt install mutt
and then view the mailbox like this:

    mutt -f /var/mail/root
The following is a short tutorial on using BLAST with Slurm, using fasta nucleic acid (fna) FASTA formatted sequence files for Rattus norvegicus. It assumes that BLAST (Basic Local Alignment Search Tool) is already installed.
First, create a database directory, download the datafile, extract, and load the environment variables for BLAST.
mkdir -p ~/applicationtests/BLAST/dbs
module load BLAST/2.2.26-Linux_x86_64
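The download and extract steps referred to above would look something like the following; the exact NCBI RefSeq path is an assumption (the layout changes over time), so adjust to wherever you fetch the file from:

    cd ~/applicationtests/BLAST/dbs
    # Hypothetical mirror path for the R. norvegicus RefSeq RNA file
    wget ftp://ftp.ncbi.nlm.nih.gov/refseq/R_norvegicus/mRNA_Prot/rat.1.rna.fna.gz
    gunzip rat.1.rna.fna.gz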
Having extracted the file, there will be an fna formatted sequence file, rat.1.rna.fna. An example header line for a sequence:
>NM_175581.3 Rattus norvegicus cathepsin R (Ctsr), mRNA
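To actually run this under Slurm, a minimal job script would look something like the sketch below. The resources, database name and query file are all assumptions for illustration; note too that this uses BLAST+ style commands (makeblastdb/blastn), whereas a legacy 2.2.26 install would use formatdb/blastall instead:

    #!/bin/bash
    #SBATCH --job-name=blast-rat
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=01:00:00

    module load BLAST/2.2.26-Linux_x86_64
    cd ~/applicationtests/BLAST/dbs

    # Build a nucleotide database from the sequence file...
    makeblastdb -in rat.1.rna.fna -dbtype nucl -out rat_rna

    # ...then search a query against it (query.fna is a placeholder).
    blastn -db rat_rna -query query.fna -outfmt 6 -out results.tsv

Submit it with sbatch in the usual way.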
PLEASE NOTE NEW LOCATION
Speakers to be announced.
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000
Food and drinks will be available on premises.
Linux Users of Victoria is a subcommittee of Linux Australia.

December 5, 2017 - 18:30
Topic to be announced.
There will also be the usual casual hands-on workshop, with Linux installation, configuration, assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks, from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.
The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.
LUV would like to acknowledge Infoxchange for the venue.
Linux Users of Victoria is a subcommittee of Linux Australia.

November 18, 2017 - 12:30
Over the years of my involvement with library projects, like Coder Dojo, programming workshops and such, I've struggled to nail down the intersection between libraries and open source. At this year's linux.conf.au in Sydney (my seventeenth!) I'm helping to put together a miniconf to answer this question: Open GLAM. If you work at the intersection of galleries, libraries, archives, museums and open source, we'd love to hear from you.
I was invited to an incredible and inaugural conference in Canada called FWD50, which was looking at the next 50 days, months and years for society. It had a digital government flavour to it, but had participants and content from various international, national and sub-national governments, civil society, academia, industry and advocacy groups. The diversity of voices in the room was good, and the organisers committed to greater diversity next year. I gave my keynote as an independent expert, and my goal was to get people thinking bigger than websites and mobile apps, to dream about the sort of future we want as a society (as a species!) and work towards that. As part of my talk I also explored the big paradigm shifts that have happened (note the past tense) and potential roles for government (particularly the public sector) in a hyper connected, distributed network of powerful individuals. My slides are available here (simple though they are). The talk wasn't recorded, but I did an audio recording and transcribed it. I was unwell and had lost my voice, so this is probably better anyway.
I’ve been thinking a lot over many years about change and the difference between iteration and transformation, about systems, about what is going on in the big picture, because what I’m seeing around the world is a lot of people iterating away from pain but not actually iterating towards a future. Looking for ways to solve the current problem but not rethinking or reframing in the current context. I want to talk to you about the tipping point.
We invented all of this. This is worth taking a moment to think about. We invented every system, every government, every means of production; we organised ourselves into structures and companies; all the things we know, we invented. By understanding that we invented it, we can embrace the notion that we aren't stuck with it. A lot of people start from the normative perspective that it is how it is, and ask only how to improve it slightly, but we don't have to be constrained by assumption, because *we* invented it. We can take a formative approach.
The reason this is important is that the world has fundamentally changed, and our systems started from a lot of assumptions. This (slide) is a map of the world as it was known at the time, and for a long time the world was known to be flat. At some point it became known that the world was not flat, and people had to change their perspective. If we don't challenge the assumptions that underpin our systems, we run the significant risk of recreating the past with shiny new things. If we take whatever the shiny thing is today – blockchain now, social media 10 years ago – and use that shiny thing to do what we have always done, then how are we progressing? We are just “lifting and shifting”, as they like to say, which as a technologist is almost the worst thing I can hear.
Understanding the assumptions that underpin what we do, understanding the goal we have and what we are trying to achieve, and making sure we intentionally choose the assumptions we take forward is important, because a lot of the biases and assumptions that underpin the systems we have today were forged centuries or even millennia ago – a long time before the significant paradigm shifts we have seen.
So I'm going to talk a little bit about how things have changed. It's not that the tipping point is happening; the tipping point has already happened. We have seen paradigm shifts with legacy systems of power and control. Individuals are more individually powerful than ever in the history of our species. If you think way back to hunter and gatherer times, everyone was individually pretty powerful then, but it didn't scale. When we moved to cities we started to highly specialise and become interdependent and individually less powerful, because we made these systems of control that were necessary to manage the surplus of resources, necessary to manage information. But what has happened, first through the independence movements creating a culture of everyone being individually powerful, of every individual being worthy of rights, and then more recently through the internet becoming a distributor, enabler and catalyst of that, is that we are now seeing power massively distributed.
Think about it. Any individual around the world that can get online – admittedly that's only two thirds of us, but it's growing every day – has the power to publish, to create, to share, to collaborate, to collude, to monitor. It's not just the state monitoring the people, but the people monitoring the state and people monitoring other people. There is the power to enforce your own perspective. And it doesn't actually matter whether you think it's a good or bad thing; it is the reality. It's the shift. And if we don't learn to embrace, understand and participate in it, particularly in government, then we actually make ourselves less relevant. Because one of the main things about this distribution of power, which the internet has taught us fundamentally as part of our culture and which we have all started to adopt, is that you can route around damage. The internet was set up to be able to route around damage where damage was physical or technical. We started to internalise that socially, and if you, in government, are seen to be damage, then people route around you. This is why we have to learn to work as a node in a network, not just a king in a castle, because kings don't last anymore.
So which way is forward? The priority now needs to be deciding what sort of future we want, not what sort of past we want to escape. The 21st century sees many communities emerging. They are hyper connected, transnational, multicultural, heavily interdependent, heavily specialised, rapidly changing and disconnected from their geopolitical roots. Some people see that as a reason to move away from having geopolitically formed states. Personally, I believe there will always be a role for a geographic state, because I need a way to scale a quality of life for my family along with my fellow citizens and neighbours. But what does that mean in an international sense? Are my rights as a human being realised in a transnational sense? There are some really interesting questions about the needs of users beyond the individual services that we deliver, particularly when you look in a transnational way.
So a lot of these assumptions have become like a rusty anchor that kept us in place at high tide, but are drawing us towards a dangerous reef as the water lowers. We need to figure out how to float on the water without rusty anchors, to adapt to the tides of change.
There are a lot of pressures that are driving these changes of course. We are all feeling those pressures, those of us that are working in government. There’s the pressure of changing expectations, of history, from politics and the power shift. The pressure of the role of government in the 21st century. Pressure is a wonderful thing as it can be a catalyst of change, so we shouldn’t shy away from pressure, but recognising that we’re under pressure is important.
So let’s explore some of those power shifts and then what role could government play moving forward.
Paradigm #1: central to distributed. This is about that shift in power, the independence movements and the internet. It is something people talk about but don't necessarily apply to their work. Governments will talk about wanting to take a more distributed approach, but follow up by setting up “my” website expecting everyone to join, or “everyone come to my office”, or creating “my” own lab. Distributed, when you start to really internalise what it means, is different. I was lucky, as I forged a lot of my assumptions and habits of working when I was involved in the Open Source community, and the Open Source community has a lot of lessons for the rest of society because it is on the bleeding edge of a lot of these paradigm shifts. Working in a distributed way is to assume that you are not at the centre, to assume that you're not needed. To assume that if you make yourself useful, people will rely on you, but also to assume that you rely on others, and to build what you do in a way that strengthens the whole system. I like to talk about it as “Gov as a Platform”; sometimes that is confusing to people, so let's talk about it as “Gov as an enabler”. It's not just government as a central command and controller anymore, because the moment you create a choke point, people route around it. How do we become a government that enables good things, and how can we use other mechanisms to create the controls in society? Rather than try to protect people from themselves, why not enable people to protect themselves? There are so many natural motivations in the community, in industry, in the broader sector that we serve, that we can tap into, but traditionally we haven't, because traditionally we saw ourselves as the enforcer, as the one-to-many choke point. So working in a distributed way is not just about talking the talk; it's about integrating it into the way we think.
Some other aspects of this include localised to globalised, keeping in mind that large multinational companies have become quite good at jurisdiction shopping to improve profits, which you can't say is either a good or bad thing; it's just how they're naturally motivated. But citizens are increasingly starting to jurisdiction shop too. So I would suggest a role for government in the 21st century is to create the best possible quality of life for people, because then you'll attract the best from around the world.
The second part of central to distributed is simple to complex. I have this curve (on the slide) which shows complexity in green and government's response to user needs in red. The green climbs exponentially whilst the red is pretty linear, with small increases or decreases over time, but not an exponential response by any means. Individual needs are no longer heavily localised. They are subject to local, national and transnational complexities, with every extra complexity compounding rather than adding linearly. So the complexities in people's lives, and the obligations, taxation, services and entitlements that go with them, are all going up, and a delta is forming between what government can directly do and what people need. So again, I contend that the opportunity here, particularly for the public sector, is to be an enabler for all those service intermediaries – for profit, non profit, civic tech – to help them help themselves and help their customers, by merit of making government a platform upon which they can build. We have a habit and a history of creating public infrastructure; Australia, New Zealand and Canada have been very good at building public infrastructure. Why have we not focused on digital infrastructure? Why do we see digital infrastructure as something that has to be cost recovered to be sustainable, when we don't do cost recovery for every public road? I think the cost benefits and value creation of digital public infrastructure need to be looked at in the same way, and we need to start investing in digital public infrastructure.
The next paradigm shift is analog to digital, or slow to very fast. I like to joke that we use lawyers as modems. If you think about regulation and policy: we write it, it is translated by a lawyer or drafter into regulation or policy, and it is then translated by a lawyer or drafter or anyone into operational systems, business systems, helpdesk systems or other systems in society. Why wouldn't we make our regulation available as code, with the intent of our regulation and legislative regimes able to be directly consumed (by the systems), so that we can speed up, automate and improve the consistency of application through the system, and have a feedback loop to understand whether policy changes are having the intended policy effect?
There are so many great things we can do when we start thinking about digital as something new, not just digitising an analog process. For too long, innovation was interpreted as digitisation of a process, as basic process improvement. But real digitisation should be a transformation where you change the thing to better achieve its purpose or intent.
The next paradigm is scarcity to surplus. I think this is critical. Why do we still have so many systems that assume scarcity when surplus is the opportunity? Between 3D printing and nanotech, we could be deconstructing and reconstructing new materials to print into goods and food, and yet a large inhibitor of 3D printing progress is copyright. So the question I have for you is: do we care more about an 18th century business model, or do we care about solving the problems of our society? We need to make these choices. If we have moved to an era of surplus but are getting increasing inequality, perhaps the systems of distribution are the problem? Perhaps in assuming scarcity we are protecting scarcity for the few at the cost of the many.
The next paradigm is normative to formative: “please comply”. For the last hundred years in particular we have perfected the art of broadcasting images of normal into our houses, particularly with radio and television. We have the concept of setting a standard or rule, and if you don't follow it, we punish you, so a lot of culture in society is about compliance. Compliance is important for stability, but blind compliance can create millstones. A formative paradigm is not about saying how it is, but about exploring where you want to go. In the public service we are particularly good at compliance culture, but I suggest that if we got more people thinking formatively – not change for change's sake, but bringing people together on their genuinely shared purpose of serving the public – then we might be able to take a more formative approach to the work we do, for the betterment of society, rather than ticking the box because it is the process we have to follow. Formative takes us away from being consumers and towards being makers. As an example, the most basic form of normative human behaviour is in how we see and conform to being human. You are either normal, or you are not, based on some externally projected vision of normal. But the internet has shown us that no one is normal. So embracing that it is through our differences that we are more powerful and able to adapt is an important part of our story and culture moving forward. If we are confident enough to be formative, we can keep trying to create a better world whilst applying a critical eye to compliance, so we don't comply for compliance's sake.
Now, on the back of these paradigm shifts, I'd like to talk briefly about the future. I spoke about the opportunity through surplus, with 3D printing and nanotech, to address poverty and hunger. What about the opportunities of rockets for domestic travel? It takes half an hour to get into space, an hour to traverse the world and half an hour to come down, which means domestic retail transport by rocket – being developed right now – could see me go from New Zealand to Canada to work for the day and be home for tea. That shift is going to be enormous in so many ways, and it could drive real changes in how we see work and internationalism. How many people remember Total Recall? The right hand picture is a self driving car from that 90s movie, and it is becoming normal now. Interesting fact: some of the car designs will tint the windows when they go through intersections, because the passengers are deeply uncomfortable with the speed and closeness of self driving cars, which can miss each other very narrowly compared to human driving. Obviously there are opportunities around AI, bots and automation, but I think it gets interesting when we think about the future of work. We are still working on the industrial, scarcity-paradigm assumption that I have to sell the number of hours that I work: 40, 50, 60 hours. Why wouldn't we work 20 hours a week at a higher rate to meet our basic needs? Why wouldn't we have 1 or 2 days a week where we could contribute to our civic duties, or art, or education? Perhaps we could jump start an inclusive renaissance, and I don't mean cat pictures. People can't thrive if they're struggling to survive, and yet we keep putting pressure on people just to survive. Again, we are from countries with quite strong safety nets, but even those safety nets put huge pressure, paperwork and bureaucracy on our most vulnerable just to meet their basic needs. Often the process of getting access to services and entitlements is so hard and traumatic that they can't, so how do we close that gap so all our citizens can move from surviving to thriving?
The last picture is a bit cheeky. The science fiction author William Gibson wrote Johnny Mnemonic, which has a character called Jones, a cyborg dolphin used to sniff out underwater mines in warfare. Very dark, but the interesting concept is in how Jones was received after the war: “he was more than a dolphin, but from another dolphin's point of view he might have seemed like something less.” What does it mean to be human? If I lose a leg, right now it is assumed I need to replace that leg to be somehow “whole”. What if I want 4 legs? The human brain is able to adapt to new input. I knew a woman who got a small sphere, filled with mercury and a free floating magnet, implanted in her finger; the magnet spins according to frequency, and she found over a short period of time she was able to detect changes in frequency. Why is that cool and interesting? Because the brain can adapt to foreign, non-evolved input. I think that is mind blowing. We have the opportunity to augment ourselves, not just to conform to normal or be slightly better, faster humans; we can actually change what it means to be human altogether. I think this will be one of the next big social challenges for society, but because we are naturally so attracted to “shiny”, I think that discomfort will pass within a couple of generations. One prediction is that the normal Olympics will become boring and we will move to a transhuman Olympics where we take the leash off and explore the 100m sprint with rockets, or judo with cyborgs. Where the interest goes, the sponsorship goes, and more professional athletes compete. And what's going to happen when your child says they want to be a professional transhuman Olympian, and that they will add wings or remove their legs for their professional career, to add them (or not) later? That's a bit scary for many, but at the same time it's very interesting. And it's ok to be uncomfortable; it's ok to look at change, be uncomfortable, and ask yourself “why am I uncomfortable?” rather than just pushing back on discomfort. It's critical, more than ever, particularly in the public service, that we get away from this dualistic good or bad, in or out, yours or mine, and start embracing the grey.
So what is the role of government in all this, in the future? Again, these are just some thoughts, a conversation starter.
I think one of our roles is to ensure that individuals have the ability to thrive. Now, I acknowledge I’m very privileged to come from a social libertarian country that believes this, where people broadly want their taxes to go to the betterment of society; not all countries have that assumption. But if we accept the idea that people can’t thrive if they can’t survive, then the baseline quality of life provided by the state, assuming an individual starts from nothing, with no privilege, benefits or family, needs to be good enough for that person to thrive. Otherwise we have a basic structural problem. Part of that is becoming master builders again and, to return to the Rawls example from Alistair earlier, bringing empathy into what we do in government. Time and again we build systems without empathy and they go terribly wrong, because we didn’t think about what it would be like to be on the other side of that service, policy or idea. User centred design is just a systematisation of empathy, which is fantastic, but bringing empathy into everything we do is very important.
Leadership is a very important role for government. Part of our role is to represent the best interests of society. I feel very strongly that we in the public sector have a natural role to serve the public, as distinct from the political sector (though citizens see us as the same thing). A strong, independent public sector is more important than ever in a post-fact, “fake news” world, because it is one of the only actors on the stage that is naturally, systemically motivated to serve the best interests of the public. That’s why open government is so important, and why digital and open government initiatives align directly.
Because open without digital doesn’t scale, and digital without open doesn’t last.
Stability, predictability and balance. It is certainly a role of government to create confidence in our communities, and confidence creates thriving. It is one thing to address Maslow’s hierarchy of needs, but if you don’t feel confident, if you don’t feel safe, you still end up behaving in strange and unpredictable ways. So this is part of what communities need to thrive. This relates to regulation, and there is a theory that regulation is bad because it is hard. I would suggest that regulation is important for stability and predictability in society, but that we have to change the way we deliver it. Regulation as code gets the balance right: you keep the settings and levers in the economy, but gain the ability for regulation to be automated, consumable, consistent, monitored and innovative.
I imagine a future where I have a personal AI which I can trust, because of quantum cryptography and because it is tethered in purpose to my best interests. I wouldn’t have to rely on whether my interests happen to align with the purpose of a department, company or non-profit to get the services I need, because my personal bot could figure out what I need and give me options to make decisions about my life. It could deal with the government AI to figure out the rules, my taxation, obligations, services and entitlements. Where is the website in all that? I ask because the web was a 1990s paradigm, and we need more people to realise, and plan around, the idea that the future of service delivery lies in building the backend of what we do (the business rules, transactions, data, content and models) in a modular, consumable way, so we can shift channels or modes of delivery, whether that is a person, a digital service, or AI-to-AI interaction.
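To make the “regulation as code” idea concrete, here is a purely illustrative sketch; the rule, names and thresholds below are invented, not any real legislation. Once a rule is published as code rather than prose, a web form, a personal bot or another agency’s system can all consume the same rule and get the same, consistent answer:

    # Hypothetical example: an eligibility rule expressed as consumable code.
    # The rule and its thresholds are invented for illustration only.
    def eligible_for_support(annual_income: float, dependants: int) -> bool:
        """Return True if a (fictional) support payment applies."""
        threshold = 30000 + 5000 * dependants  # base threshold rises per dependant
        return annual_income < threshold

    # Any channel (website, personal AI, agency system) calls the same rule:
    print(eligible_for_support(annual_income=32000, dependants=1))  # True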
Another role of government is driving the skills we need for the 21st century. Coding is critical, not because everyone needs to code (though maybe they will), but because coding teaches you an assumption, an instinct, that technology is something you can shape rather than something you are intrinsically bound to. Minecraft is the saviour of a generation, because all those kids are growing up believing they can shape the world around them rather than be shaped by it. This harks back to the normative/formative shift. But we also need to teach critical thinking, self awareness, bias awareness, maker skills and community awareness. It has been delightful to move to New Zealand, which has a culture of assumed community awareness.
We need, of course, a strong focus on participatory democracy, where government isn’t just doing something to you, but where we are all building the future we need together. This is how we create a multi-processor world rather than a single-processor government. This is how we scale and develop a better society, but we need to move beyond “consultation” and into actual co-design, with governments working collaboratively across sectors and with civil society to shape the world.
I’ll finish on this note: government as an enabler, a platform upon which society can build. We need to build a way of working that assumes we are a node in a network, that assumes we have to work collaboratively, that assumes people are naturally motivated to make good decisions for their lives, and that asks how government can enable and support them.
So embrace the tipping point; don’t just react. What future do you want? What society do you want to move towards? I’ve got to a point in my life where I see everything as a system, and if I can’t connect the dots between what I’m doing and the purpose, then I try not to do that thing. In the first public service job I had, I got in and automated a large proportion of the work within a couple of weeks, then asked for data.gov.au, and they gave it to me because I was motivated to make it better.
So I challenge you to be thinking about this every day, to consider your own assumptions and biases, to consider whether you are being normative or formative, to evaluate whether you are being iterative or transformative, to evaluate whether you are moving away from something or towards something. And to always keep in mind where you want to be, how you are contributing to a better society and to actively leave behind those legacy ideas that simply don’t serve us anymore.
I started web development around late 1994. Some of my earliest paid web work is still online (dated June 1995). Clearly, that was a simpler time for content! I went on to be ‘Webmaster’ (yes, for those joining us in the last decade, that was a job title once) for UWA, and then for Hartley Poynton/JDV.com at a time when security became important as commerce boomed online.
At the dawn of the web era, the consideration of backwards compatibility with older web clients (browsers) was deemed to be important; content had to degrade nicely, even without any CSS being applied. As the years stretched out, the legacy became longer and longer. Until now.
In mid-2018, the Payment Card Industry (PCI) Data Security Standard (DSS) 3.2 comes into effect, requiring card holder environments to use (at minimum) TLS 1.2 for the encrypted transfer of data. Of course, that’s also the maximum version typically available today (TLS 1.3 is at draft 21 at the time of writing). This effort by the PCI is forcing people to adopt new browsers that can speak the TLS 1.2 protocol (and the stronger ciphers it permits), typically by running modern/recent Chrome, Firefox, Safari or Edge browsers. For the majority of people that choice is Chrome, and the majority of those auto-update on every release.
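If you want to see what a given server actually negotiates with a modern client, a few lines of Python are enough. This is a minimal sketch; the hostname is illustrative, and it assumes Python 3 linked against a reasonably current OpenSSL:

    import socket
    import ssl

    host = "www.example.com"  # illustrative hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # e.g. 'TLSv1.2'
            print(tls.cipher())   # e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)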
Many are pushing to be compliant with the 2018 PCI DSS 3.2 requirements as early as possible; your logs of negotiated protocols and ciphers will show whether your client base is ready as well. I’ve already worked with one government agency to demonstrate they were ready, and have helped disable TLS 1.0 and 1.1 on their public facing web sites (having previously disabled SSLv3). We’ve removed RC4 and 3DES ciphers, and enabled ephemeral-key ciphers to provide forward secrecy.
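For the curious, the server-side settings involved look something like this minimal nginx sketch. The directives are real nginx ones, but the cipher list is illustrative and deliberately short; Apache and other servers have equivalents, and you should derive a list that matches your own risk profile:

    # Illustrative nginx TLS settings: TLS 1.2 only, with ephemeral-key
    # (ECDHE) ciphers for forward secrecy; no SSLv3/TLS 1.0/1.1, RC4 or 3DES.
    ssl_protocols TLSv1.2;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;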
But as we find ourselves with modern clients, we can now ask those clients to be complicit in our attempts to secure the content we serve. They understand modern security constructs such as Content Security Policies and other HTTP security-related headers.
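As a sketch of what that looks like on the wire, a response might carry headers along these lines (the values are illustrative only; in particular, a real Content-Security-Policy has to be built around your actual content):

    Strict-Transport-Security: max-age=31536000; includeSubDomains
    Content-Security-Policy: default-src 'self'
    X-Content-Type-Options: nosniff
    X-Frame-Options: DENY
    Referrer-Policy: no-referrer-when-downgrade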
There are two tools I am currently using to help in this battle to improve web security. One is SSLLabs.com, the work of Ivan Ristić (and now owned/sponsored by Qualys). This tool gives a good view of the encryption in flight (protocols, ciphers), the chain of trust (certificate), and has recently added checking DNS for CAA records (support for which I and others piled onto a feature request for AWS Route53). The second tool is Scott Helme’s SecurityHeaders.io, which looks at the HTTP headers that web content uses to ask browsers to enforce security on the client side.
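A CAA record itself is a single DNS entry. For example (domain and CA illustrative), the following restricts certificate issuance for a zone to one nominated CA:

    example.com.  IN  CAA  0 issue "letsencrypt.org"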
There’s a really important reason why these tools are good: they are maintained. As new recommendations emerge on ciphers, protocols, signature algorithms and other measures, the tools are updated. And they are produced by very small but agile teams (one-person teams, in fact), without the bureaucracy (and lag) associated with large enterprise tools. But they shouldn’t be used blindly. These services make suggestions, and you should research them yourself; not all of the recommendations may meet your personal risk profile. Personally, I’m uncomfortable with Public-Key-Pins, so that can wait for a while; indeed, Chrome has now signalled it will drop support for it.
So while PCI is hitting merchants with their DSS-compliance stick (and making it plainly obvious what they have to do), we get a useful side-effect: a concrete reason to draw a line under how far back our backward compatibility must stretch, and the ability to have the web client assist in ensuring the security of the content we serve.