Planet Linux Australia


Stewart Smith: Microsoft Chicago – retro in qemu!

Sat 13th Aug 2016 15:08

So, way back when (sometime in the early 1990s) there was Windows 3.11 and times were… for Workgroups. There was this Windows NT thing, this OS/2 thing and something brewing at Microsoft to attempt to make the PC less… well, bloody awful for a user.

Again, thanks to abandonware sites, it’s possible now to try out very early builds of Microsoft Chicago – what would become Windows 95. With the earliest build I could find (build 56), I set to work. The installer worked from an existing Windows 3.11 install.

I ended up using full system emulation rather than normal qemu later on, as things, well, booted in full emulation and didn’t otherwise (I was building from qemu master… so it could have actually been a bug fix).

Mmmm… Windows 3.11 File Manager. The fact that I can still use this is a testament to something – possibly too much time spent with Windows 3.11.

Unfortunately, I didn’t have the Plus Pack components (remember Microsoft Plus!? Yes, the exclamation mark was part of the product name; it was the 1990s) and I’m not sure if they even would have existed back then (but the installer did ask).

Obviously if you were testing Chicago, you probably did not want to upgrade your working Windows install if this was a computer you at all cared about. I installed into C:\CHICAGO because, well – how could I not!

The installation went fairly quickly – after all, this isn’t a real 386 PC and it doesn’t have disks of the era; everything was likely just sitting in the Linux page cache.

I didn’t really try to get networking going; it may not have been fully baked in this build, or maybe just not really baked in this copy of it. The installer there looks a bit familiar, but not like the Windows 95 one – maybe more like NT 3.1/3.51?

But at the end… it installed and it was time to reboot into Chicago:
So… this is what Windows 95 looked like during development back in July 1993 – nearly exactly two years before release. There’s some Windows logos that appear/disappear around the place, which are arguably much cooler than the eventual Windows 95 boot screen animation. The first boot experience was kind of interesting too:
Luckily, there was nothing restricting the beta site ID or anything. I just entered the number 1, and was then told it needed to be 6 digits – so beta site ID 123456 it is! The desktop is obviously different both from Windows 3.x and what ended up in Windows 95.

Those who remember Windows 3.1 may remember Dr Watson as an actual thing you could run, but it was part of the whole diagnostics infrastructure in Windows, and here (as you can see), it runs by default. More odd is the “Switch To Chicago” task (which does nothing if opened) and “Tracker”. My guess is that the “Switch to Chicago” is the product of some internal thing for launching the new UI. I have no idea what the “Tracker” is, but I think I found a clue in the “Find File” app:

Not only can you search with regular expressions, but there’s “Containing text” – could it be indexing? No, it totally isn’t. It’s all about tracking/reporting problems:

Well, that wasn’t as exciting as I was hoping for (after all, weren’t there interesting database like file systems being researched at Microsoft in the early 1990s?). It’s about here I should show the obligatory About box:
It’s… not polished, and there’s certainly that feel throughout the OS – but two years from release, that’s likely fair enough. Speaking of not perfect:

When something does crash, it asks you to describe what went wrong, i.e. provide a Clue for Dr. Watson:

But, most importantly, Solitaire is present! You can browse the Programs folder and head into Games and play it! One odd thing is that applications have two >> at the end, and there’s a “Parent Folder” entry too.

Solitaire itself? Just as I remember it.

Notably, what is missing is anything like the Start menu, which is probably the key UI element introduced in Windows 95 that’s still with us today. Instead, you have this:

That’s about the least exciting Windows menu possible. There’s the eye menu too, which is this:

More unfinished things are found in the “File cabinet”, such as properties for anything:
But let’s jump into Control Panels, which I managed to get to by heading to C:\CHICAGO\Control.sys – which isn’t exactly obvious, but I think you can find it through Programs as well. The “Window Metrics” application is really interesting! It’s obvious that the UI was not solidified yet, that there was a lot of experimenting to do. This application lets you change all sorts of things about the UI:

My guess is that this was used a lot internally to twiddle things to see what worked well.

Another unfinished thing? That familiar Properties for My Computer, which is actually “Advanced System Features” in the control panel, and from the [Sample Information] at the bottom left, it looks like we may not be getting information about the machine it’s running on.

You do get some information in the System control panel, but a lot of it is unfinished. It seems as if Microsoft was experimenting with a few ways to express information and modify settings.

But check out this awesome picture of a hard disk for Virtual Memory:

The presence of the 386 Enhanced control panel shows how close this build still was to Windows 3.1:

At the same time, we see hints of things going 32 bit – check out the fact that we have both Clock and Clock32! Notepad, in its transition to 32bit, even dropped the pad and is just Note32!

Well, that’s enough for today, time to shut down the machine:


Craige McWhirter: Python for science, side projects and stuff! - PyConAu 2016

Sat 13th Aug 2016 11:08

By Andrew Lonsdale.

  • Talked about using python-pptx for collaborating on PowerPoint presentations.
  • Covered his journey so far and the lessons he learned.
  • Gave some great examples of re-creating XKCD comics in Python (matplotlib_venn).
  • Claimed the diversion into Python and Matplotlib has helped his actual research.
  • Spoke about how using Python is great for Scientific research.
  • Summarised that side projects are good for Science and Python.
  • Recommended Elegant SciPy
  • Demoed using emoji to represent bioinformatics data using FASTQE (FASTQ as Emoji).


Craige McWhirter: MicroPython: a journey from Kickstarter to Space by Damien George - PyConAu 2016

Sat 13th Aug 2016 10:08

Damien George.

Motivations for MicroPython:
  • To provide a high level language to control sophisticated micro-controllers.
  • Approached it as an intellectually stimulating research problem.
  • Wasn't even sure it was possible.
  • Chose Python because:
    • It was a high level language with powerful features.
    • Large existing community.
    • Naively thought it would be easy.
    • Found Python easy to learn.
    • Shallow but long learning curve of python makes it good for beginners and advanced programmers.
    • Bitwise operations make it useful for micro-controllers.
Why Not Use CPython?
  • CPython pre-allocates memory, resulting in inefficient memory usage which is problematic for low RAM devices like micro controllers.
  • If you know Python, you know MicroPython - it's implemented the same

Damien covered his experiences with Kickstarter.

Internals of MicroPython:
  • Damien covered the parser, lexer, compiler and runtime.
  • Walked us through the workflows of the internals.
  • Spoke about object representation and the three machine word object forms:
    • Integers.
    • Strings.
    • Objects.
  • Covered the emitters:
    • Bytecode.
    • Native (machine code).
    • Inline assembler.
Coding Style:

Coding was based more on a physicist trying to make things work than on a computer engineer's approach.

  • There's a code dashboard
  • Hosted on GitHub
  • Noted that he could not have done this without the support of the community.

Listed some of the micro-controller boards that it runs on and larger computers that currently run OpenWRT.

Spoke about the BBC micro:bit project. Demoed speech synthesis and image display running on it.

MicroPython in Space:

Spoke about the port to LEON / SPARC / RTEMS for the European Space agency for satellite control, particularly the application layer.

Damien closed with an overview of current applications and ongoing software and hardware development.



Craige McWhirter: Doing Math with Python - Amit Saha - PyConAu 2016

Fri 12th Aug 2016 15:08

Amit Saha.

Slides and demos.

Why Math with Python?
  • Provides an interactive learning experience.
  • Provides a great base for future programming (ie: data science, machine learning).
  • Python 3
  • SymPy
  • matplotlib

Amit's book: Doing Math with Python


Craige McWhirter: The Internet of Not Great Things - Nick Moore - PyConAu 2016

Fri 12th Aug 2016 14:08

Nick Moore.

aka "The Internet of (Better) Things".

  • Abuse of IoT is not a technical issue.
  • The problem is who controls the data.
  • Need better analysis of the ways it is used that are bad.
  • "If you're not the customer, you're the product."
    • by accepting advertising.
    • by having your privacy sold.
  • Led to a conflation of IoT and Big Data.
  • Product end of life by vendors ceasing support.
  • Very little cross vendor compatibility.
  • Many devices useless if the Internet is not available.
  • Consumer grade devices often fail.
  • Weak crypto support.
  • Often due to lack of entropy, RAM, CPU.
  • Poorly thought out update cycles.
Turning Complaints into Requirements:

We need:

  • Internet independence.
  • Generic interfaces.
  • Simplified Cryptography.
  • Easier Development.
Some Solutions:
  • Peer to peer services.
  • Standards based hardware description language.
  • Shared secrets, initialised by QR code.
  • Simpler development with MicroPython.


Craige McWhirter: OpenBMC - Boot your server with Python - Joel Stanley - PyConAu 2016

Fri 12th Aug 2016 14:08

Joel Stanley.

  • OpenBMC is a Free Software BMC
  • Running embedded Linux.
  • Developed an API before developing other interfaces.
  • A modern kernel.
  • Up to date userspace.
  • Security patches.
  • Better interfaces.
  • Reliable performance.
    • REST interface.
    • SSH instead of strange tools.
The Future:
  • Support more home devices.
  • Add a web interface.
  • Secure boot, trusted boot, more security features.
  • Upstream all of the things.
  • Support more hardware.


Craige McWhirter: Teaching Python with Minecraft - Digital K - PyConAu 2016

Fri 12th Aug 2016 12:08

by Digital K.

The video of the talk is here.

  • Recommended for ages 10 - 16
  • Why Minecraft?
    • Kids familiarity is highly engaging.
    • Relatively low cost.
    • Code their own creations.
    • Kids already use the command line in Minecraft
  • Use the Minecraft API to receive commands from Python.
    • Place blocks
    • Move players
    • Build faster
    • Build larger structures and shapes
    • Easy duplication
    • Animate blocks (ie: colour change)
    • Create games
Option 1:

How it works:

  • Import Minecraft API libraries to your code.
  • Push it to the server.
  • Run the Minecraft client.

What you can Teach:

  • Co-ordinates
  • Time
  • Multiplications
  • Data
  • Art works with maths
  • Trigonometry
  • Geo fencing
  • Design
  • Geography

Connect to External Devices:

  • Connect to Raspberry Pi or Arduino.
  • Connect the game to events in the real world.

Other Resources:


Craige McWhirter: Scripting the Internet of Things - Damien George - PyConAu 2016

Fri 12th Aug 2016 12:08

Damien George

Damien gave an excellent overview of using MicroPython with microcontrollers, particularly the ESP8266 board.

Damien’s talk was excellent and covered a broad and interesting history of the project and its current efforts.


Craige McWhirter: ESP8266 and MicroPython - Nick Moore - PyConAu 2016

Fri 12th Aug 2016 12:08

Nick Moore


  • Price and feature set are a game changer for hobbyists.
  • Makes for a more playful platform.
  • Uses serial programming mode to flash memory
  • Strict power requirements
  • The easy way to use them is with a NodeMCU for only a little more.
  • Tool kits:
    • Lua (NodeMCU).
    • JavaScript (Espruino).
    • Forth, Lisp, Basic(?!).
  • MicroPython works on the ESP8266:
    • Drives micro controllers.
    • The onboard Wifi.
    • Can run a small webserver to view and control devices.
    • WebRepl can be used to copy files, as can mpy-utils.
    • Lacks:
      • An operating system.
      • Multiprocessing.
      • A debugger / profiler.
  • Flobot:
    • Compiles via MicroPython.
    • A visual dataflow language for robots.

ESP8266 and MicroPython provide an accessible entry into working with micro-controllers.


Chris Smart: Command line password management with pass

Wed 10th Aug 2016 21:08

Why use a password manager in the first place? Well, they make it easy to have strong, unique passwords for each of your accounts on every system you use (and that’s a good thing).

For years I’ve stored my passwords in Firefox, because it’s convenient, and I never bothered with all those other fancy password managers. The problem is that it locked me into Firefox, and I found myself still needing to remember passwords for servers and things.

So a few months ago I decided to give command line tool Pass a try. It’s essentially a shell script wrapper for GnuPG and stores your passwords (with any notes) in individually encrypted files.

I love it.

Pass is less convenient in terms of web browsing, but it’s more convenient for everything else that I do (which is often on the command line). For example, I have painlessly integrated Pass into Mutt (my email client) so that passwords are not stored in the configuration files.
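
For example, Mutt can read a password at startup via backtick command substitution in its config file. This is a minimal sketch of my own (not from the original post), assuming a hypothetical mail/ entry:
set imap_pass = `pass mail/`
set smtp_pass = `pass mail/`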

As a side-note, I installed the Password Exporter Firefox Add-on and exported my passwords. I then added this whole file to Pass so that I can start copying old passwords as needed (I didn’t want them all).

About Pass

Pass uses public-key cryptography to encrypt each password that you want to store as an individual file. To access the password you need the private key and passphrase.

So, some nice things about it are:

  • Short and simple shell script
  • Uses standard GnuPG to encrypt each password into individual files
  • Password files are stored on disk in a hierarchy of your own choosing
  • Stored in Git repo (if desired)
  • Can also store notes
  • Can copy the password temporarily to copy/paste buffer
  • Can show, edit, or copy password
  • Can also generate a password
  • Integrates with anything that can call it
  • Tab completion!

So it’s nothing super fancy, “just” a great little wrapper for good old GnuPG and text files, backed by git. Perfect!

Install Pass

Installation of Pass (and Git) is easy:
sudo dnf -y install git pass

Prepare keys

You’ll need a pair of keys, so generate these if you haven’t already (this creates the keys under ~/.gnupg). I’d probably recommend RSA and RSA, 4096 bits long, using a decent passphrase and setting a valid email address (you can also separately use these keys to send signed emails and receive encrypted emails).
gpg2 --full-gen-key

We will need the key’s fingerprint to give to pass. It should be a string of 40 characters, something like 16CA211ACF6DC8586D6747417407C4045DF7E9A2.
gpg2 --list-secret-keys

Note: Your fingerprint (and public keys) can be public, but please make sure that you keep your private keys secure! For example, don’t copy the ~/.gnupg directory to a public place (even though they are protected by a nice long passphrase, right? Right?).

Initialise pass

Before we can use Pass, we need to initialise it. Use the fingerprint you got from the output of gpg2 --list-secret-keys above (e.g. 5DF7E9A2).
pass init 5DF7E9A2

This creates the basic directory structure in the .password-store directory in your home directory. At this point it just has a plain text file (.password-store/.gpg-id) with the fingerprint of the public key that it should use.

Adding git backing

If you haven’t already, you’ll need to tell Git who you are. Using the email address that you used when creating the GPG key is probably good.
git config --global ""
git config --global "Your Name"

Now, go into the password-store directory and initialise it as a Git repository.
cd ~/.password-store
git init
git add .
git commit -m "initial commit"
cd -

Pass will now automatically commit changes for you!


As mentioned, you can create any hierarchy you like. I quite like to use subdirectories and sort by function first (like mail, web, server), then by domain, and then by server or username. This seems to work quite nicely with tab completion, too.

You can rearrange this at any time, so don’t worry too much!

Storing a password

Adding a password is simple and you can create any hierarchy that you want; you just tell pass to add a new password and where to store it. Pass will prompt you to enter the password.

For example, you might want to store your password for a server (let’s call it server1) – you could do that like so:
pass add servers/server1

This creates the directory structure on disk and your first encrypted file!
~/.password-store
└── servers
    └── server1.gpg

2 directories, 1 file

Run the file command on that file and it should tell you that it’s encrypted.
file ~/.password-store/servers/server1.gpg

But is it really? Go ahead, cat that gpg file, you’ll see it’s encrypted (your terminal will probably go crazy – you can blindly enter the reset command to get it back).
cat ~/.password-store/servers/server1.gpg

So this file is encrypted – you can safely copy it anywhere (again, please just keep your private key secure).

Git history

Browse to the .password-store dir and run some git commands; you’ll see your history, and git show will prompt for your GPG passphrase to decrypt the files stored in Git.

cd ~/.password-store
git log
git show
cd -

If you wanted to, you could push this to another computer as a backup (perhaps even via a git-hook!).
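
For example (a sketch of my own, not from the original post – the remote name and URL are placeholders), you can use Pass’s built-in git passthrough to add a remote and push to it:
pass git remote add backup user@backuphost:password-store.git
pass git push -u backup master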

Storing a password, with notes

By default Pass just prompts for the password, but if you want to add notes at the same time you can do that also. Note that the password should still be on its own on the first line, however. For example, for a (hypothetical) mail account:
pass add -m mail/

If you use two-factor authentication (which you should be), this is useful for also storing the account password and recovery codes.
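
Such an entry might look something like this (made-up values, just to show the layout – password on the first line, free-form notes below):
s3cretpassw0rd
username: myusername
recovery-codes: 12345678 87654321 13572468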

Generating and storing a password

As I mentioned, one of the benefits of using a password manager is to have strong, unique passwords. Pass makes this easy by including the ability to generate one for you and store it in the hierarchy of your choosing. For example, you could generate a 32 character password (without special characters) for a website you often log into, like so:
pass generate -n web/ 32

Getting a password out

Getting a password out is easy; just tell Pass which one you want. It will prompt you for your passphrase, decrypt the file for you, read the first line and print it to the screen. This can be useful for scripting (more on that below).

pass web/

Most of the time though, you’ll probably want to copy the password to the copy/paste buffer; this is also easy, just add the -c option. Passwords are automatically cleared from the buffer after 45 seconds.
pass -c web/

Now you can log into Twitter by entering your username and pasting the password.

Editing a password

Similarly you can edit an existing password to change it, or add as many notes as you like. Just tell Pass which password to edit!
pass edit web/

Copying and moving a password

It’s easy to copy an existing password to a new one; just specify both the original and the new name.
pass copy servers/server1 servers/server2

If the hierarchy you created is not to your liking, it’s easy to move passwords around.
pass mv servers/server2 computers/server2

Of course, you could script this!
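
A rough sketch of what that could look like (my own illustration, using the servers/ and computers/ layout from above) – move every entry under servers/ into computers/:
#!/usr/bin/env bash
# Walk the encrypted files on disk and rename the corresponding entries.
cd ~/.password-store
for f in servers/*.gpg; do
    entry="${f%.gpg}"
    pass mv "$entry" "computers/${entry#servers/}"
done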

Listing all passwords

Pass will list all your passwords in a tree nicely for you.
pass list

Interacting with Pass

As pass is a nice standard shell program, you can interact with it easily. For example, to get a password from a script you could do something like this.
#!/usr/bin/env bash
echo "Getting password.."
PASSWORD="$(pass servers/server1)"
if [[ $? -ne 0 ]]; then
    echo "Sorry, failed to get the password"
    exit 1
fi
echo "..and we got it, ${PASSWORD}"

Try it!

There’s lots more you can do with Pass, why not check it out yourself!


Chris Smart: Setting up OpenStack Ansible All-in-one behind a proxy

Tue 09th Aug 2016 09:08

Setting up OpenStack Ansible (OSA) All-in-one (AIO) behind a proxy requires a couple of settings, but it should work fine (we’ll also configure the wider system). There are two types of git repos that we should configure for (unless you’re an OpenStack developer), those that use http (or https) and those that use the git protocol.

Firstly, this assumes an Ubuntu 14.04 server install (with at least 60GB of free space on / partition).

All commands are run as the root user, so switch to root first.

sudo -i

Export variables for ease of setup

Setting these variables here means that you can copy and paste the relevant commands from the rest of this blog post.

Note: Make sure that your proxy is fully resolvable and then replace the settings below with your actual proxy details (leave out user:password if you don’t use one).

export PROXY_PROTO="http"
export PROXY_HOST="user:password@proxy"
export PROXY_PORT="3128"
export PROXY="${PROXY_PROTO}://${PROXY_HOST}:${PROXY_PORT}"

First, install some essentials (reboot after upgrade if you like).
echo "Acquire::http::Proxy \"${PROXY}\";" \
> /etc/apt/apt.conf.d/90proxy
apt-get update && apt-get upgrade
apt-get install git openssh-server rsync socat screen vim

Configure global proxies

For any http:// or https:// repositories we can just set a shell environment variable. We’ll set this in /etc/environment so that all future shells have it automatically.

cat >> /etc/environment << EOF
export http_proxy="${PROXY}"
export https_proxy="${PROXY}"
export HTTP_PROXY="${PROXY}"
export ftp_proxy="${PROXY}"
export FTP_PROXY="${PROXY}"
export no_proxy=localhost
export NO_PROXY=localhost
EOF

Source this to set the proxy variables in your current shell.
source /etc/environment
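
As a quick sanity check (my own addition, not in the original post), confirm that the variables are set and that the proxy answers:
env | grep -i proxy
curl -sI | head -n 1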

Tell sudo to keep these environment variables
echo 'Defaults env_keep = "http_proxy https_proxy ftp_proxy no_proxy"' \
> /etc/sudoers.d/01_proxy

Configure Git

For any git:// repositories we need to make a small script that uses socat (you could use netcat) and tell Git to use it as the proxy. The path and name of the script are up to you; here I’ll call it /usr/local/bin/gitproxy.

cat > /usr/local/bin/gitproxy << EOF
#!/bin/bash
# \$1 = hostname, \$2 = port
exec socat STDIO PROXY:${PROXY_HOST}:\${1}:\${2},proxyport=${PROXY_PORT}
EOF

Make it executable.
chmod a+x /usr/local/bin/gitproxy

Tell Git to proxy connections through this script.
git config --global core.gitProxy /usr/local/bin/gitproxy

Clone OpenStack Ansible

OK, let’s clone the OpenStack Ansible repository! We’re living on the edge and so will build from the tip of the master branch.
git clone git:// \
    /opt/openstack-ansible
cd /opt/openstack-ansible/

If you would prefer to build from a specific release, such as the latest stable, feel free to now check out the appropriate tag. For example, at the time of writing this is tag 13.3.1. You can get a list of tags by running the git tag command.

# Only run this if you want to build the 13.3.1 release
git checkout -b tag-13.3.1 13.3.1

Or if you prefer, you can checkout the tip of the stable branch which prepares for the upcoming stable minor release.

# Only run this if you want to build the latest stable code
git checkout -b stable/mitaka origin/stable/mitaka

Prepare log location

If something goes wrong, it’s handy to be able to have the log available.

export ANSIBLE_LOG_PATH=/root/ansible-log

Bootstrap Ansible

Now we can kick off the Ansible bootstrap. This prepares the system with all of the Ansible roles that make up an OpenStack environment.

./scripts/

Upon success, you should see:

System is bootstrapped and ready for use.

Bootstrap OpenStack Ansible All In One

Now let’s bootstrap the all-in-one system. This configures the host with appropriate disks and network configuration, etc., ready to run the OpenStack environment in containers.

./scripts/

Run the Ansible playbooks

The final task is to run the playbooks, which sets up all of the OpenStack components on the host and containers. Before we proceed, however, this requires some additional configuration for the proxy.

The user_variables.yml file under the root filesystem at /etc/openstack_deploy/user_variables.yml is where we configure environment variables for OSA to export and set some other options (again, note the leading / before etc – do not modify the template file at /opt/openstack-ansible/etc/openstack_deploy by mistake).

cat >> /etc/openstack_deploy/user_variables.yml << EOF
## Proxy settings
proxy_env_url: "\"${PROXY}\""
no_proxy_env: "\"localhost,,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}\""
global_environment_variables:
  HTTP_PROXY: "{{ proxy_env_url }}"
  HTTPS_PROXY: "{{ proxy_env_url }}"
  NO_PROXY: "{{ no_proxy_env }}"
  http_proxy: "{{ proxy_env_url }}"
  https_proxy: "{{ proxy_env_url }}"
  no_proxy: "{{ no_proxy_env }}"
EOF

Secondly, if you’re running the latest stable, 13.3.x, you will need to make a small change to the pip package list for the keystone (authentication component) container. Currently it pulls in httplib2 version 0.8; however, this does not appear to respect the NO_PROXY variable and so keystone provisioning fails. Version 0.9 seems to fix this problem.

sed -i 's/state: present/state: latest/' \

Now run the playbooks!

Note: This will take a long time, perhaps a few hours, so run it in a screen or tmux session.

time ./scripts/

Verify containers

Once the playbooks complete, you should be able to list your running containers and see their status (there will be a couple of dozen).
lxc-ls -f
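
If you want to poke around inside one of the containers, lxc-attach works – a sketch of my own, assuming the usual OSA container naming so that grep finds the utility container:
lxc-attach -n "$(lxc-ls -1 | grep utility | head -n 1)"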

Log into OpenStack

Now that the system is complete, we can start using OpenStack!

You should be able to use your web browser to log into Horizon, the OpenStack Dashboard, at your AIO host’s IP address.

If you’re not sure what IP that is, you can find out by looking at which address port 443 is running on.

netstat -ltnp |grep 443

The admin user’s password is available in the user_secrets.yml file on the AIO host.
grep keystone_auth_admin_password /etc/openstack_deploy/user_secrets.yml

A successful login should reveal the admin dashboard.

Enjoy your OpenStack Ansible All-in-one!


Stewart Smith: Windows 3.11 nostalgia

Mon 08th Aug 2016 21:08

Because OS/2 didn’t go so well… let’s try something I’m a lot more familiar with. To be honest, the last time I used Windows in earnest on the desktop was around 3.11, so I kind of know it back to front (fun fact: I’ve read the entire Windows 3.0 manual).

It turns out that once you have MS-DOS installed in qemu, installing Windows 3.11 is trivial. I didn’t change any special settings for qemu; I just specced everything to be very minimal (50MB RAM, 512MB disk).
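
Something along these lines is enough (a sketch of my own, not the exact invocation from the post – the image names are placeholders):
qemu-img create -f qcow2 win311.qcow2 512M
qemu-system-i386 -m 50 -hda win311.qcow2 -fda dos-boot.img -boot a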

Windows 3.11 was not a fun time as soon as you had to do anything… nightmares of drivers, CONFIG.SYS and AUTOEXEC.BAT plague my mind. But hey, it’s damn fast on a modern processor.


Matthew Oliver: Swift + Xena would make the perfect digital preservation solution

Mon 08th Aug 2016 15:08

Some of you might not know, but for some years I worked at the National Archives of Australia on what was, at the time, their leading digital preservation platform. It was awesome, opensource, and they paid me to hack on it.
The most important parts of the platform were Xena and Digital Preservation Recorder (DPR). Xena was, and hopefully still is, amazing. It takes in a file and guesses the format. If it’s a closed proprietary format and Xena has the right plugin, it converts it to an open standard and optionally turns it into a .xena file ready to be ingested into the digital repository for long term storage.

We did this knowing that proprietary formats change so quickly that if you want to store a file long term (20, 40, 100 years) you won’t be able to open it. An open format, on the other hand, even if there is no software that can read it any more, is open, so you can get your data back.

Once a file had passed through Xena, we’d use DPR to ingest it into the archive. Once in the archive, we had other opensource daemons we wrote which ensured we didn’t lose things to bitrot, we’d keep things duplicated and separated. It was a lot of work, and the size of the space required kept growing.

Anyway, now I’m an OpenStack Swift core developer, and wow, I wish Swift was around back then, because it’s exactly what is required for the DPR side. It duplicates, infinitely scales, checks checksums, quarantines and corrects. It keeps everything replicated and separated, and does it all automatically. Swift is also highly customisable. You can create your own middleware and insert it in the proxy pipeline or in any of the storage nodes’ pipelines, and have it do whatever you need: add metadata, do something to the object on ingest or whenever the object is read, update some other system… really you can do whatever you want. Maybe even wrap Xena into some middleware.

Going one step further, IBM have been working on a thing called storlets, which uses Swift and Docker to do some work on objects and is now in the OpenStack namespace. Currently storlets are written in Java, and so is Xena… so this might also be a perfect fit.

Anyway, I got talking with Chris Smart, a mate who also used to work in the same team at NAA, and it got my mind thinking about all this, so I thought I’d place my rambling thoughts somewhere in case other archives or libraries are interested in digital preservation and need some ideas. Best part: the software is open source and also free!

Happy preserving.


David Rowe: SM2000 – Part 8 – Gippstech 2016 Presentation

Mon 08th Aug 2016 07:08

Justin, VK7TW, has published a video of my SM2000 presentation at Gippstech, which was held in July 2016.

Brady O’Brien, KC9TPA, visited me in June. Together we brought the SM2000 up to the point where it is decoding FreeDV 2400A waveforms at 10.7MHz IF, which we demonstrate in this video. I’m currently busy with another project but will get back to the SM2000 (and other FreeDV projects) later this year.

Thanks Justin and Brady!

FreeDV and this video were also mentioned in this interesting Reddit post/debate from Gary KN4AQ on VHF/UHF Digital Voice – a peek into the future.


Stewart Smith: OS/2 Warp Nostalgia

Sun 07th Aug 2016 21:08

Thanks to the joys of abandonware websites, you can play with some interesting things from the 1990s and before. One of those things is OS/2 Warp. Now, I had a go at OS/2 sometime in the 1990s after being warned by a friend that it was “pretty much impossible” to get networking going. My experience of OS/2 then was not revolutionary… It was, well, something else on a PC that wasn’t that exciting and didn’t really add a huge amount over Windows.

Now, I’m nowhere near insane enough to try this on my actual computer, and I’ve managed to not accumulate any ancient PCs….

Luckily, qemu provides an emulator! If you don’t set your CPU to Pentium (or possibly something one or two generations newer) then things don’t go well. The same goes for the disk: it needs to be one that by today’s standards would be considered beyond tiny. Also, if you dare to try to use an unpartitioned hard disk – OH MY are you in trouble.
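
In qemu terms that means something like this (a sketch of my own; the image and floppy names are placeholders):
qemu-img create -f qcow2 os2warp.qcow2 750M
qemu-system-i386 -cpu pentium -m 64 -hda os2warp.qcow2 -fda disk1.img -boot a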

Also, try to boot off “Disk 1” and you get this:
Possibly the most friendly error message ever! But, once you get going (by booting the Installation floppy)… you get to see this:

and indeed, you are doing the time warp of Operating Systems right here. After a bit of fun, you end up in FDISK:

Why I can’t create a partition… WHO KNOWS. But, I tried again with a 750MB disk that already had a partition on it and…. FAIL. I think this one was due to partition type, so I tried again with partition type of 6 – plain FAT16, and not W95 FAT16 (LBA). Some memory is coming back to me of larger drives and LBA and nightmares…

But that worked!

Then, the OS/2 WARP boot screen… which seems to stick around for a long time…..

and maybe I could get networking….

Ladies and Gentlemen, the wonders of having to select DHCP:

It still asked me for some config, but I gleefully ignored it (because that must be safe, right!?) and then I needed to select a network adapter! Due to a poor choice on my part, I started with a rtl8139, which is conspicuously absent from this fine list of Token Ring adapters:

and then, more installing……

before finally rebooting into….

and that, is where I realized there was beer in the fridge and that was going to be a lot more fun.


OpenSTEM: Remembering Seymour Papert

Thu 04th Aug 2016 17:08

Today we’re remembering Seymour Papert, as we’ve received news that he died a few days ago (31st July 2016) at the age of 88. Throughout his life, Papert did so much for computing and education; he even worked with the famous Jean Piaget, who helped Papert further develop his views on children and learning.

For us at OpenSTEM, Papert is also special because in the late 1960s (yep that far back) he invented the Logo programming language, used to control drawing “turtles”.  The Mirobot drawing turtle we use in our Robotics Program is a modern descendant of those early (then costly) adventures.

I sadly never met him, but what a wonderful person he was.

For more information, see the media release at MIT’s Media Lab (which he co-founded) or search for his name online.



Lev Lafayette: Supercomputers: Current Status and Future Trends

Thu 04th Aug 2016 17:08

The somewhat nebulous term "supercomputer" has a long history. Although first coined in the 1920s to refer to IBM's tabulators, in electronic computing the most important initial contribution was the CDC 6600 in the 1960s, due to its advanced performance over competitors. Over time, major technological advancements have included vector processing, cluster architecture, massive processor counts, GPGPU technologies, and multidimensional torus architectures for interconnect.



Simon Lyall: Putting Prometheus node_exporter behind apache proxy

Tue 02nd Aug 2016 09:08

I’ve been playing with Prometheus monitoring lately. It is fairly new software that is getting popular. Prometheus works using a pull architecture. A central server connects to each thing you want to monitor every few seconds and grabs stats from it.

In the simplest case you run the node_exporter on each machine, which gathers about 600-800 (!) metrics such as load, disk space and interface stats. This exporter listens on port 9100 and effectively works as an http server that responds to “GET /metrics HTTP/1.1” and spits out several hundred lines like:

node_forks 7916
node_intr 3.8090539e+07
node_load1 0.47
node_load15 0.21
node_load5 0.31
node_memory_Active 6.23935488e+08

Other exporters listen on different ports and export stats for apache or mysql while more complicated ones will act as proxies for outgoing tests (via snmp, icmp, http). The full list of them is on the Prometheus website.

So my problem was that I wanted to check my virtual machine that is on Linode. The machine only has a public IP and I didn’t want to:

  1. Allow random people to check my servers stats
  2. Have to setup some sort of VPN.

So I decided that the best way was to just put a user/password on the exporter.

However, the node_exporter does not implement authentication itself, since the authors wanted to avoid maintaining lots of security code. So I decided to put it behind a reverse proxy using apache mod_proxy.

Step 1 – Install node_exporter

Node_exporter is a single binary that I started via an upstart script. As part of the upstart script I told it to listen on localhost port 19100 instead of port 9100 on all interfaces

# cat /etc/init/prometheus_node_exporter.conf
description "Prometheus Node Exporter"
start on startup
chdir /home/prometheus/
script
    /home/prometheus/node_exporter -web.listen-address=
end script

Once I start the exporter, a simple curl makes sure it is working and returning data.
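
For example (my own sketch, matching the localhost:19100 listen address above):
curl -s http://localhost:19100/metrics | head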

Step 2 – Add Apache proxy entry

First make sure apache is listening on port 9100. On Ubuntu, edit the /etc/apache2/ports.conf file and add the line:

Listen 9100

Next create a simple apache proxy without authentication (don’t forget to enable mod_proxy too):

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
    ServerName prometheus
    CustomLog /var/log/apache2/prometheus_access.log combined
    ErrorLog /var/log/apache2/prometheus_error.log
    ProxyRequests Off
    <Proxy *>
        Allow from all
    </Proxy>
    ProxyErrorOverride On
    ProxyPass / http://localhost:19100/
    ProxyPassReverse / http://localhost:19100/
</VirtualHost>

This simply takes requests on port 9100 and forwards them to localhost port 19100 . Now reload apache and test via curl to port 9100. You can also use netstat to see what is listening on which ports:

Proto Recv-Q Send-Q Local Address     Foreign Address   State   PID/Program name
tcp        0      0*         LISTEN  8416/node_exporter
tcp6       0      0 :::9100           :::*              LISTEN  8725/apache2


Step 3 – Get Prometheus working

I’ll assume at this point you have other servers working. What you need to do now is add the following entries for your server in your prometheus.yml file.

First add basic_auth into your scrape config for “node” and then add your servers, e.g.:

- job_name: 'node'
  scrape_interval: 15s
  basic_auth:
    username: prom
    password: mypassword
  static_configs:
    - targets: ['myserver.example.org:9100']
      labels:
        group: 'servers'
        alias: 'myserver'

Now restart Prometheus and make sure it is working. You should see the following lines in your apache logs plus stats for the server should start appearing: - - [31/Jul/2016:11:31:38 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1" - - [31/Jul/2016:11:31:53 +0000] "GET /metrics HTTP/1.1" 200 11398 "-" "Go-http-client/1.1" - - [31/Jul/2016:11:32:08 +0000] "GET /metrics HTTP/1.1" 200 11377 "-" "Go-http-client/1.1"

Notice that connections are 15 seconds apart, get http code 200 and are 11k in size. The Prometheus server is sending authentication, but apache doesn’t require it yet.

Step 4 – Enable Authentication.

Now create an apache password file:

htpasswd -cb /home/prometheus/passwd prom mypassword

and update your apache entry to the following to enable authentication:

# more /etc/apache2/sites-available/prometheus.conf
<VirtualHost *:9100>
    ServerName prometheus
    CustomLog /var/log/apache2/prometheus_access.log combined
    ErrorLog /var/log/apache2/prometheus_error.log
    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
        AuthType Basic
        AuthName "Password Required"
        AuthBasicProvider file
        AuthUserFile "/home/prometheus/passwd"
        Require valid-user
    </Proxy>
    ProxyErrorOverride On
    ProxyPass / http://localhost:19100/
    ProxyPassReverse / http://localhost:19100/
</VirtualHost>

After you reload apache you should see the following: - prom [01/Aug/2016:04:42:08 +0000] "GET /metrics HTTP/1.1" 200 11394 "-" "Go-http-client/1.1" - prom [01/Aug/2016:04:42:23 +0000] "GET /metrics HTTP/1.1" 200 11392 "-" "Go-http-client/1.1" - prom [01/Aug/2016:04:42:38 +0000] "GET /metrics HTTP/1.1" 200 11391 "-" "Go-http-client/1.1"

Note that the “prom” in field 3 indicates that we are logging in for each connection. If you try to connect to the port without authentication you will get:

Unauthorized

This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.

That is pretty much it. Note that you will need to add additional VirtualHost entries for more ports if you run other exporters on the server.




Chris Samuel: Playing with Shifter – NERSC’s tool to use Docker containers in HPC

Tue 02nd Aug 2016 01:08

Early days yet, but playing with NERSC’s Shifter to let us use Docker containers safely on our test RHEL6 cluster is looking really interesting (given you can’t use Docker itself under RHEL6, and if you could the security concerns would cancel it out anyway).

To use a pre-built Ubuntu Xenial image, for instance, you tell it to pull the image:

[samuel@bruce ~]$ shifterimg pull ubuntu:16.04

There’s a number of steps it goes through, first retrieving the container from the Docker Hub:

2016-08-01T18:19:57 Pulling Image: docker:ubuntu:16.04, status: PULLING

Then disarming the Docker container by removing any setuid/setgid bits, etc, and repacking as a Shifter image:

2016-08-01T18:20:41 Pulling Image: docker:ubuntu:16.04, status: CONVERSION

…and then it’s ready to go:

2016-08-01T18:21:04 Pulling Image: docker:ubuntu:16.04, status: READY

Using the image from the command line is pretty easy:

[samuel@bruce ~]$ cat /etc/lsb-release
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
[samuel@bruce ~]$ shifter --image=ubuntu:16.04 cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04 LTS"

and the Shifter runtime will copy in site-specified /etc/passwd, /etc/group and /etc/nsswitch.conf files so that you can do user/group lookups easily, as well as map in site-specified filesystems, so your home directory is just where it would normally be on the cluster.

[samuel@bruce ~]$ shifter --image=debian:wheezy bash --login
samuel@bruce:~$ pwd
/vlsci/VLSCI/samuel

I’ve not yet got to the point of configuring the Slurm plugin so you can queue up a Slurm job that will execute inside a Docker container, but very promising so far!

Correction: a misconception on my part – Shifter doesn’t put a Slurm batch job inside the container. It could, but there are good reasons why it’s better to leave that to the user (soon to be documented on the Shifter wiki page for Slurm integration).

This item originally posted here:

Playing with Shifter – NERSC’s tool to use Docker containers in HPC


OpenSTEM: Australia moves fast: North-West actually

Fri 29th Jul 2016 15:07

This story is about the tectonic plate on which we reside.  Tectonic plates move, and so continents shift over time.  They generally go pretty slow though.

What about Australia?  It appears that every year, we move 11 centimetres West and 7 centimetres North.  For a tectonic plate, that’s very fast.

The last time scientists marked our location on the globe was in 1994, with the Geocentric Datum of Australia 1994 (GDA1994) – generally called GDA94 in geo-spatial tools (such as QGIS).  So that datum came into force 22 years ago.  Since then, we’ve moved an astonishing 1.5 metres!  You may not think much of this, but right now it actually means that if you use a GPS in Australia to get coordinates, and plot it onto a map that doesn’t correct for this, you’re currently going to be off by 1.5 metres.  Depending on what you’re measuring/marking, you’ll appreciate this can be very significant and cause problems.

Bear in mind that, within Australia, GDA94 is not wrong as such, as its coordinates are relative to points within Australia. However, the positioning of Australia in relation to the rest of the globe is now outdated.  Positioning technologies have also improved.  So there’s a new datum planned for Australia, GDA2020.  By the time it comes into force, we’ll have shifted by 1.8 metres relative to GDA94.

We can have some fun with all this:

  • If you stand and stretch both your arms out, the tips of your fingers are about 1.5 metres apart – of course this depends a bit on the length of your arms, but it’ll give you a rough idea.  Now imagine a pipe or cable in the ground at a particular GPS position, then move 1.5 metres. You could clean miss that pipe or cable… oops!  Unless your GPS is configured to use a datum that gets updated, such as WGS84.  However, if you had the pipe or cable plotted on a map that’s in GDA94, it becomes messy again.
  • If you use a tool such as Google Earth, where is Australia actually?  That is, will a point be plotted accurately, or be 1.5 metres out, or somewhere in between?
    Well, that would depend on when the most recent broad scale photos were taken, and what corrections the Google Earth team possibly applies during processing of its data (for example, Google Earth uses a different datum – WGS 84 for its calculations).
    Interesting question, isn’t it…
  • Now for a little science/maths challenge.  The Northern most tip of Australia, Cape York, is just 150km South of Papua New Guinea (PNG).  Presuming our plate maintains its present course and speed, roughly how many years until the visible bits (above sea level) of Australia and PNG collide?  Post your answer with working/reasoning in a comment to this post!  Think about this carefully and do your research.  Good luck!