
Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au
Updated: 3 weeks 2 days ago

Lev Lafayette: OpenStack and the OpenStack Barcelona Summit

Tue 07th Feb 2017 17:02

Presentation to Linux Users of Victoria, 7th February, 2017

An overview of cloud computing platforms in general, and OpenStack in particular, introduces this presentation. Cloud computing is one of the most significant changes to IT infrastructure and employment in the past decade, with major corporate services (Amazon, Microsoft) gaining particular significance in the late 2000s. In mid-2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack, with initial code coming from NASA's Nebula project and Rackspace's Cloud Files project; it soon gained prominence as the largest open-source cloud platform. Although a cross-platform service, it was quickly available on various Linux distributions including Debian, Ubuntu, SuSE (2011), and Red Hat (2012).

OpenStack is governed by the OpenStack Foundation, a non-profit corporate entity established in September 2012. Correlating with the release cycle of the product, OpenStack Summits are held every six months for developers, users and managers. The most recent Summit was held in Barcelona in late October 2016, with over 5000 attendees, almost 1000 organisations and companies, and 500 sessions spread out over three days, plus one day of "Upstream University" prior to the main schedule and one day after it for contributor working parties. The presentation will cover the major announcements of the conference, a brief overview of the major streams, and the direction of OpenStack as the November Sydney Summit approaches.

read more

Categories: thinktime

Binh Nguyen: Life in Venezuela, Examining Prophets/Pre-Cogs 5, and More

Mon 06th Feb 2017 21:02
Wanted to see what life was like in Venezuela given their recent problems: - complicated colonial history with conflict between Spaniards and local indigenous people (led by Native caciques, such as Guaicaipuro and Tamanaco). One of the first to declare independence in Latin America. History of military strongmen and corruption? Political and economic instability over many years... venezuela
Categories: thinktime

Russell Coker: SE Linux in Debian/Stretch

Mon 06th Feb 2017 15:02

Debian/Stretch has been frozen. Before the freeze I got almost all the bugs in policy fixed, both bugs reported in the Debian BTS and bugs that I know about. This is going to be one of the best Debian releases for SE Linux ever.

Systemd with SE Linux is working nicely. The support isn’t as good as I would like; there is still work to be done for systemd-nspawn. But it’s close enough that anyone who needs to use it can use audit2allow to generate the extra rules needed. Systemd-nspawn is not used by default and it’s not something that a new Linux user is going to use; I think that expert users who are capable of using such features are capable of doing the extra work to get them going.

In terms of systemd-nspawn and some other rough edges, the issue is the difference between writing policy for a single system vs writing policy that works for everyone. If you write policy for your own system you can allow access for a corner case without a lot of effort. But if I wrote policy to allow access for every corner case then they might add up to a combination that can be exploited. I don’t recommend blindly adding the output of audit2allow to your local policy (be particularly wary of access to shadow_t and write access to etc_t, lib_t, etc). But OTOH if you have a system that’s running in enforcing mode that happens to have one daemon with more access than is ideal then all the other daemons will still be restricted.

As with previous releases, I plan to keep releasing updates to policy packages in my own apt repository. I’m also considering releasing source for policy updates that can be applied on existing Stretch systems. So if you want to run the official Debian packages but need updates that came after Stretch, you can get them. Suggestions on how to distribute such policy source are welcome.

Please enjoy SE Linux on Stretch. It’s too late for most bug reports regarding Stretch as most of them won’t be sufficiently important to justify a Stretch update. The vast majority of SE Linux policy bugs are issues of denying wanted access, not permitting unwanted access (so not a security issue), and can be easily fixed by local configuration, so it’s really difficult to make a case for an update to Stable. But feel free to send bug reports for Buster (Stretch+1).

Related posts:

  1. Debian SE Linux Status June 2012 It’s almost the Wheezy freeze time and I’ve been working...
  2. SE Linux Status in Debian 2012-01 Since my last SE Linux in Debian status report [1]...
  3. Debian SSH and SE Linux I have just filed Debian bug report #556644 against the...
Categories: thinktime

Francois Marier: IPv6 and OpenVPN on Linode Debian/Ubuntu VPS

Mon 06th Feb 2017 00:02

Here is how I managed to extend my OpenVPN setup on my Linode VPS to include IPv6 traffic. This ensures that clients can route all of their traffic through the VPN and avoid leaking IPv6 traffic, for example. It also enables clients on IPv4-only networks to receive a routable IPv6 address and connect to IPv6-only servers (i.e. running your own IPv6 broker).

Request an additional IPv6 block

The first thing you need to do is get a new IPv6 address block (or "pool" as Linode calls it) from which you can allocate a single address to each VPN client that connects to the server.

If you are using a Linode VPS, there are instructions on how to request a new IPv6 pool. Note that you need to get an address block between /64 and /112. A /116 like Linode offers won't work in OpenVPN. Thankfully, Linode is happy to allocate you an extra /64 for free.

Set up the new IPv6 address

If your server only has a single IPv4 address and a single IPv6 address, then a simple DHCP-backed network configuration will work fine. To add the second IPv6 block, on the other hand, I had to change my network configuration (/etc/network/interfaces) to this:

auto lo
iface lo inet loopback

allow-hotplug eth0
iface eth0 inet dhcp
    pre-up iptables-restore /etc/network/iptables.up.rules

iface eth0 inet6 static
    address 2600:3c01::xxxx:xxxx:xxxx:939f/64
    gateway fe80::1
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

iface tun0 inet6 static
    address 2600:3c01:xxxx:xxxx::/64
    pre-up ip6tables-restore /etc/network/ip6tables.up.rules

where 2600:3c01::xxxx:xxxx:xxxx:939f/64 (bound to eth0) is your main IPv6 address and 2600:3c01:xxxx:xxxx::/64 (bound to tun0) is the new block you requested.

Once you've set up the new IPv6 block, test it from another IPv6-enabled host using:

ping6 2600:3c01:xxxx:xxxx::1

OpenVPN configuration

The only change I had to make in my OpenVPN configuration (/etc/openvpn/server.conf) was to switch:

proto udp

to:

proto udp6

in order to make the VPN server available over both IPv4 and IPv6, and to add the following lines:

server-ipv6 2600:3c01:xxxx:xxxx::/64
push "route-ipv6 2000::/3"

to bind to the right V6 address and to tell clients to tunnel all V6 Internet traffic through the VPN.

In addition to updating the OpenVPN config, you will need to add the following line to /etc/sysctl.d/openvpn.conf:

net.ipv6.conf.all.forwarding=1

and the following to your firewall (e.g. /etc/network/ip6tables.up.rules):

# openvpn
-A INPUT -p udp --dport 1194 -j ACCEPT
-A FORWARD -m state --state NEW -i tun0 -o eth0 -s 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state NEW -i eth0 -o tun0 -d 2600:3c01:xxxx:xxxx::/64 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT

in order to ensure that IPv6 packets are forwarded from the eth0 network interface to tun0 on the VPN server.

With all of this done, apply the settings by running:

sysctl -p /etc/sysctl.d/openvpn.conf
ip6tables-apply
systemctl restart openvpn.service

Testing the connection

Now connect to the VPN using your desktop client and check that the default IPv6 route is set correctly using ip -6 route.

Then you can ping the server's new IP address:

ping6 2600:3c01:xxxx:xxxx::1

and from the server, you can ping the client's IP (which you can see in the network settings):

ping6 2600:3c01:xxxx:xxxx::1002

Once both ends of the tunnel can talk to each other, you can try pinging an IPv6-only server from your client:

ping6 ipv6.google.com

and then pinging your client from an IPv6-enabled host somewhere:

ping6 2600:3c01:xxxx:xxxx::1002

If that works, other online tests should also work.

Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Main February 2017 Meeting: OpenStack Summit/Data Structures and Algorithms

Sun 05th Feb 2017 21:02
Start: Feb 7 2017 18:30
Location: 6th Floor, Trinity College (EPA Victoria building), 200 Victoria St., Carlton
Link: http://luv.asn.au/meetings/map

Tuesday, February 7, 2017
6:30 PM to 8:30 PM
6th Floor, Trinity College (EPA Victoria building)
200 Victoria St., Carlton

Speakers:

• Lev Lafayette, OpenStack and the OpenStack Barcelona Summit
• Jacinta Richardson, Data Structures and Algorithms in the 21st Century

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 049 589.
 

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

February 7, 2017 - 18:30

read more

Categories: thinktime

Peter Lieverdink: Astrophotography with Mac OS X

Sun 05th Feb 2017 13:02

It's been a good three years now since I swapped my HP laptop for a Macbook Pro. In the meantime, I've started doing a bit more astrophotography and of course the change of operating system has affected the tools I use to obtain and process photos.

Amateur astronomers have traditionally mostly used Windows, so there are a lot of Windows tools, both freeware and payware, to help. I used to run the freeware ones in Wine on Ubuntu with varying levels of success.

When I first got the Mac, I had a lot of trouble getting Wine to run reliably and eventually ended up doing my alignment and processing manually in The Gimp. However, that's time-consuming, rather fiddly, and limited to stacking static exposures.

However, I've recently started finding quite a bit of Mac OS based astrophotography software. I don't know if that means it's all fairly new or whether my Google skills failed me over the past years :-)

Software

I thought I'd document what I use, in the hope that I can save others who want to use their Macs some searching.

Some are Windows software, but run OK on Mac OS X. You can turn them into normal double click applications using a utility called WineSkin Winery.

Obtaining data from video camera:

Format-converting video data:

Processing video data:

  • AutoStakkert! (Windows + Wine, free for non-commercial use, donationware)

Obtaining data from DSLR:

Processing and stacking DSLR files and post-processing video stacks:

Post-processing:

Telescope guiding:

  • AstroGuider (Mac OS X, payware, free trial)
  • PHD2 (Mac OS X, free, open source)
Hardware

A few weeks ago I bought a ZWO ASI120MC-S astro camera, as that was on sale and listed by Nebulosity as supported by OSX. Until then I'd messed around with a hacked up Logitech webcam, which seemed to only be supported by the Photo Booth app.

I've not done any guiding yet (I need a way to mount the guide scope on the main scope - d'oh) but the camera works well with Nebulosity 4 and oaCapture. I'm looking forward to being able to grab Jupiter with it in a month or so and Saturn and Mars later this year.

The image to the right is a stack of 24x5 second unguided exposures of the trapezium in M42. Not too bad for a quick test on a half-moon night.

Settings

I've been fiddling with Nebulosity a bit, to try and get it to stack the RAW images from my Nikon D750 as colour. I found a conversion matrix that was supposed to be decent, but as it turned out it made all images far too blue.

The current matrix I use is listed below. If you find a better one, please let me know.

      R     G     B
R   0.50  0.00  1.00
G   0.00  1.00  0.00
B   1.00  0.00  0.50

Tags: astronomy astrophotography MacOSX software hardware
Categories: thinktime

Lev Lafayette: Career Opportunities

Sat 04th Feb 2017 11:02

Had a friendly meeting a few days ago with a young person debating their future career path. They had a very good IT-orientated resume (give this person a job, seriously) but were debating whether they should go down the path of a Business Analyst. It was fairly clear that they lived and breathed IT, whereas the BA choice was one of some indifference. In reverse, there was a year when VPAC took on a group of summer school graduates and it quickly became obvious that none of them had any passion for IT.

read more

Categories: thinktime

David Rowe: CMA Equalisation of FSK

Sat 04th Feb 2017 11:02

We’ve just released a new experimental mode for Digital Voice called FreeDV 800XA. This uses the Codec 2 700C mode, 100 bit/s for synchronisation, and a 4FSK modem - actually the same modem that has been so successful for sending images from High Altitude Balloons.

FSK has the advantage of being a constant amplitude waveform, so efficient class C amplifiers can be used. However, as it currently stands, 800XA has no real protection against the multipath common on HF channels, for example symbols that have an echo delayed by a few ms.

So I decided to start looking at equalisers. Some Googling suggested the Constant Modulus Algorithm (CMA) Equaliser might be a suitable choice for FSK, and turned up some sample code on DSP stack exchange.

I had a bit of trouble getting the algorithm to work for bandpass FSK signals, so posted this question on CMA equalisation for FSK. I received some kind help, and eventually made the equaliser work on a simulated HF channel. Here is the Octave simulation cma.m

How it works

The equaliser attempts to correct for the channel using the received signal, which is corrupted by noise.

There is a “gotcha” in using a FIR filter to equalise a channel response. Consider a channel H(z) with a simple 3 sample impulse response h(n). Now we could equalise this with the exact inverse 1/H(z). Here is a plot of our example channel frequency response and the ideal equaliser which is exactly the inverse:

Now here is a plot of the impulse responses of the channel h(n), and equaliser h'(n):

The ideal equaliser response h'(n) is much longer than the 3 samples of the channel impulse response h(n). The CMA algorithm requires our equaliser to be a FIR filter. Counter-intuitively, we need to use an FIR equaliser with a number of taps significantly larger than the expected channel impulse response we are trying to equalise.

One explanation for this – the channel response can be considered to be a Finite Impulse response (FIR) filter H(z). The exact inverse 1/H(z), when expressed in the time domain, is an Infinite Impulse Response (IIR) filter, which have, you know, an infinitely long impulse response!
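
For readers who want to see the shape of the algorithm itself, here is a minimal sketch of the CMA update loop in Python/numpy. This is not the cma.m simulation from the post; the tap count, step size and toy two-path channel are illustrative assumptions only.

import numpy as np

def cma_equalise(rx, ntaps=21, mu=1e-3, r2=1.0):
    """Real-valued CMA FIR equaliser: minimise E[(y^2 - r2)^2] by stochastic gradient."""
    w = np.zeros(ntaps)
    w[ntaps // 2] = 1.0                # start from a centre spike
    out = np.zeros(len(rx))
    for n in range(ntaps, len(rx)):
        x = rx[n - ntaps:n][::-1]      # most recent sample first
        y = np.dot(w, x)               # equaliser output
        err = y * (y * y - r2)         # gradient term of the CM cost (constant folded into mu)
        w -= mu * err * x              # adapt the taps
        out[n] = y
    return out, w

# Toy example: a unit-amplitude FSK-like tone through a two-path (echo) channel
t = np.arange(2000)
tx = np.cos(2 * np.pi * 0.05 * t)
rx = tx + 0.5 * np.roll(tx, 7)         # echo delayed by 7 samples
eq, taps = cma_equalise(rx)

Note how the equaliser length (ntaps) is deliberately longer than the channel's impulse response, for the reason given above.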

Simulation

The figures below show the CMA equaliser doing its thing in a multipath channel with AWGN noise. In Figure 1 the error is reduced over time, and the lower plot shows the combined channel-equaliser impulse response. If the equaliser were perfect the combined channel-equaliser response would be 1.

Figure 2 below shows the CMA going to work on a FSK signal. The top subplot is the transmitted FSK signal, you can see the two different frequencies in the waveform. The middle plot shows the received signal, after it has been messed up by the multipath channel. It’s clear that the tone amplitudes are different. Looking carefully at the point where the tones transition (e.g. around sample 25 and 65) there is intersymbol interference due to multipath echoes, messing up the start of each FSK symbol.

However in the bottom subplot the equaliser has worked its magic and the waveform is looking quite nice. The tone levels are nearly equal and much of the ISI has been removed. Yayyyyyy.

Figure 3 shows the magnitude frequency response at several stages in the simulation. The top subplot is the channel response. It’s a comb filter, typical of multipath channels. The middle subplot is the equaliser response. Ideally, this should be the exact inverse of the channel. It’s pretty close at the low end but seems to lose its way at very low and high frequencies. The lower plot is the combined response, which is close to 0dB at the low frequencies. Cool.

Figure 4 is the transmit spectrum of the modem signal (top), and the spectrum after the channel has mangled it (lower). Note one tone is now lower than the other. Also note that the modem signal only has energy in the low-mid range of the spectrum. This might explain why the equaliser does a good job in that region of the spectrum – it’s where we have energy to drive the adaption.

Problems for HF Digital Voice

Unfortunately the CMA equaliser only works well at high SNRs, and takes seconds to converge. I am interested in low SNR (around 0dB in a 3000 Hz noise bandwidth) and it’s Push To Talk (PTT) radio, so we need fast initial training, around 100ms. Then it must follow the time varying HF channel, continually retraining on the fly.

For further work I really should measure BER versus Eb/No for a variety of SNRs and convergence times, and measure what BER improvement we are buying with equalisation. BER is King, much easier than squinting at time domain waveforms.

If the CMA cost function was used with known information (like pilot symbols or the Unique Word we have in 800XA) it might be able to work faster. This would involve deconvolution on the fly, rather than using iterative or adaptive techniques.

Categories: thinktime

Binh Nguyen: Trump Background, Random Stuff, and More

Fri 03rd Feb 2017 21:02
Given his recent inauguration, I thought it would be interesting to take a look at the background of the new US president, Donald Trump: https://www.bloomberg.com/politics/articles/2017-01-21/merkel-said-to-scour-trump-archive-for-clues-on-how-to-read-him https://www.rt.com/viral/374666-twitter-gifts-trump-followers/?utm_source=rss&utm_medium=rss&utm_campaign=RSS - well known background,
Categories: thinktime

Michael Still: Nova vendordata deployment, an excessively detailed guide

Fri 03rd Feb 2017 15:02
Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot -- the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user's behalf.

Nova supports a mechanism to add "vendordata" to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don't change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.


Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it's the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add "DynamicJSON" to the vendordata_providers configuration option. This can also include "StaticJSON" if you'd like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.


The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>

Where name is a short string not including the '@' character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125

Metadata fetched from this target will appear in the metadata service as a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

{ "testing": { "value1": 1, "value2": 2, "value3": "three" } }

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST:

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time
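
To make the "REST service" side concrete, here is a minimal sketch of an endpoint that accepts that POST and returns per-instance metadata. This is not the sample vendordata service discussed below; Flask and the returned keys are assumptions for illustration only.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/', methods=['POST'])
def vendordata():
    # Nova POSTs the JSON document described in the list above
    payload = request.get_json(force=True)
    # Use payload['project-id'], payload['instance-id'], etc. to decide what
    # to hand back, e.g. generate an Active Directory token here.
    return jsonify({
        'hostname_echo': payload.get('hostname'),
    })

if __name__ == '__main__':
    app.run(port=8888)

Whatever JSON you return ends up under your target's name in vendor_data2.json, as shown above.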


Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request -- you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behavior is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt

We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it's configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn't what you're using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne

Per the README file in the vendordata sample repository, you can test the vendordata server in a stand alone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`

We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/

Configuring nova to use the external metadata service

Now we're ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888

Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo

We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}

Tags for this post: openstack nova metadata vendordata configdrive cloud-init
Related posts: One week of Nova Kilo specifications; Specs for Kilo; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno Nova PTL Candidacy; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic

Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Beginners February Meeting: Static websites with Jekyll, Hugo and Forestry

Wed 01st Feb 2017 23:02
Start: Feb 25 2017 12:30
End: Feb 25 2017 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond
Link: http://luv.asn.au/meetings/map

PLEASE NOTE CHANGE OF DATE THIS MONTH ONLY

Static websites with Jekyll, Hugo and Forestry

Andrew Pam will demonstrate a new way to make websites complete with content management that doesn't require software running on a web server.  This technique enhances both performance and security.  More information at:

 

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

February 25, 2017 - 12:30

read more

Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Main February 2017 Meeting: OpenStack Barcelona Summit / Data Structures and Algorithms

Wed 01st Feb 2017 23:02
Start: Feb 7 2017 18:30
End: Feb 7 2017 20:30
Location: 6th Floor, 200 Victoria St. Carlton VIC 3053
Link: http://luv.asn.au/meetings/map

Speakers:

• Lev Lafayette, OpenStack and the OpenStack Barcelona Summit
• Jacinta Richardson, Data Structures and Algorithms in the 21st Century

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 049 589.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

February 7, 2017 - 18:30

read more

Categories: thinktime

Michael Still: Giving serial devices meaningful names

Wed 01st Feb 2017 09:02
This is a hack I've been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices, one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking to the right place.

For the trivial case, this is pretty easy with udev:

$ cat /etc/udev/rules.d/60-local.rules
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"

This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor ID, product ID and serial number match the relevant values, symlink the device to "/dev/radish".

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can play with inserting and removing the device to determine which of these entries is the device you care about.

So that's great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more... difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \ ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \ PROGRAM="/usr/bin/usbtest /dev/%k", \ SYMLINK+="%c"

This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is -- in my case either a currentcost or a solar panel inverter.
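
The usbtest script itself isn't included in the post, but the idea is simple. As a rough sketch in Python (pyserial assumed; the probe and response strings here are entirely hypothetical), it might look something like:

import sys
import serial  # pyserial

def identify(path):
    # Poke the device and decide what it is from its response
    with serial.Serial(path, 9600, timeout=1) as port:
        port.write(b'\r\n')
        reply = port.read(64)
    if b'CC128' in reply:      # hypothetical currentcost banner
        return 'currentcost'
    return 'inverter'          # otherwise assume the solar inverter

if __name__ == '__main__':
    print(identify(sys.argv[1]))

udev takes whatever the script prints on stdout and uses it as the symlink name, so the only contract is "print one sensible name and exit".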

Tags for this post: linux udev serial usb usbserial
Related posts: SMART and USB storage; Video4Linux, ov511, and RGB24 palettes; ov511 hackery; Ubuntu, Dapper Drake, and that difficult Dell e310; Roomba serial cables; Via M10000, video, and a Belkin wireless USB thing

Categories: thinktime

sthbrx - a POWER technical blog: NAMD on NVLink

Wed 01st Feb 2017 08:02

NAMD is a molecular dynamics program that can use GPU acceleration to speed up its calculations. Recent OpenPOWER machines like the IBM Power Systems S822LC for High Performance Computing (Minsky) come with a new interconnect for GPUs called NVLink, which offers extremely high bandwidth to a number of very powerful Nvidia Pascal P100 GPUs. So they're ideal machines for this sort of workload.

Here's how to set up NAMD 2.12 on your Minsky, and how to debug some common issues. We've targeted this script for CentOS, but we've successfully compiled NAMD on Ubuntu as well.

Prerequisites

GPU Drivers and CUDA

Firstly, you'll need CUDA and the NVidia drivers.

You can install CUDA by following the instructions on NVidia's CUDA Downloads page.

yum install epel-release
yum install dkms
# download the rpm from the NVidia website
rpm -i cuda-repo-rhel7-8-0-local-ga2-8.0.54-1.ppc64le.rpm
yum clean expire-cache
yum install cuda # this will take a while...

Then, we set up a profile file to automatically load CUDA into our path:

cat > /etc/profile.d/cuda_path.sh <<EOF
# From http://developer.download.nvidia.com/compute/cuda/8.0/secure/prod/docs/sidebar/CUDA_Quick_Start_Guide.pdf - 4.4.2.1
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF

Now, open a new terminal session and check to see if it works:

cuda-install-samples-8.0.sh ~
cd ~/NVIDIA_CUDA-8.0_Samples/1_Utilities/bandwidthTest
make && ./bandwidthTest

If you see a figure of ~32GB/s, that means NVLink is working as expected. A figure of ~7-8GB/s indicates that only PCI is working, and more debugging is required.

Compilers

You need a c++ compiler:

yum install gcc-c++

Building NAMD

Once CUDA and the compilers are installed, building NAMD is reasonably straightforward. The one hitch is that because we're using CUDA 8.0, and the NAMD build scripts assume CUDA 7.5, we need to supply an updated Linux-POWER.cuda file. (We also enable code generation for the Pascal in this file.)

We've documented the entire process as a script which you can download. We'd recommend executing the commands one by one, but if you're brave you can run the script directly.

The script will fetch NAMD 2.12 and build it for you, but won't install it. It will look for the CUDA override file in the directory you are running the script from, and will automatically move it into the correct place so it is picked up by the build system.

The script compiles for a single multicore machine setup, rather than for a cluster. However, it should be a good start for an Ethernet or Infiniband setup.

If you're doing things by hand, you may see some errors during the compilation of charm - as long as you get charm++ built successfully at the end, you should be OK.

Testing NAMD

We have been testing NAMD using the STMV files available from the NAMD website:

cd NAMD_2.12_Source/Linux-POWER-g++
wget http://www.ks.uiuc.edu/Research/namd/utilities/stmv.tar.gz
tar -xf stmv.tar.gz
sudo ./charmrun +p80 ./namd2 +pemap 0-159:2 +idlepoll +commthread stmv/stmv.namd

This binds a namd worker thread to every second hardware thread. This is because hardware threads share resources, so using every hardware thread costs overhead and doesn't give us access to any more physical resources.

You should see messages about finding and using GPUs:

Pe 0 physical rank 0 binding to CUDA device 0 on <hostname>: 'Graphics Device' Mem: 4042MB Rev: 6.0

This should be significantly faster than on non-NVLink machines - we saw a gain of about 2x in speed going from a machine with Nvidia K80s to a Minsky. If things aren't faster for you, let us know!

Downloads

Other notes

NAMD requires some libraries, some of which they supply as binary downloads on their website. Make sure you get the ppc64le versions, not the ppc64 versions, otherwise you'll get errors like:

/bin/ld: failed to merge target specific data of file .rootdir/tcl/lib/libtcl8.5.a(regfree.o)
/bin/ld: .rootdir/tcl/lib/libtcl8.5.a(regerror.o): compiled for a big endian system and target is little endian
/bin/ld: failed to merge target specific data of file .rootdir/tcl/lib/libtcl8.5.a(regerror.o)
/bin/ld: .rootdir/tcl/lib/libtcl8.5.a(tclAlloc.o): compiled for a big endian system and target is little endian

The script we supply should get these right automatically.

Categories: thinktime

sthbrx - a POWER technical blog: linux.conf.au 2017 review

Tue 31st Jan 2017 16:01

I recently attended LCA 2017, where I gave a talk at the Linux Kernel miniconf (run by fellow sthbrx blogger Andrew Donnellan!) and a talk at the main conference.

I received some really interesting feedback so I've taken the opportunity to write some of it down to complement the talk videos and slides that are online. (And to remind me to follow up on it!)

Miniconf talk: Sparse Warnings

My kernel miniconf talk was on sparse warnings (pdf slides, 23m video).

The abstract read (in part):

sparse is a semantic parser for C, and is one of the static analysis tools available to kernel devs.

Sparse is a powerful tool with good integration into the kernel build system. However, we suffer from warning overload - there are too many sparse warnings to spot the serious issues amongst the trivial. This makes it difficult to use, both for developers and maintainers.

Happily, I received some feedback that suggests it's not all doom and gloom like I had thought!

  • Dave Chinner told me that the xfs team uses sparse regularly to make sure that the file system is endian-safe. This is good news - we really would like that to be endian-safe!

  • Paul McKenney let me know that the 0day bot does do some sparse checking - it would just seem that it's not done on PowerPC.

Main talk: 400,000 Ephemeral Containers

My main talk was entitled "400,000 Ephemeral Containers: testing entire ecosystems with Docker". You can read the abstract for full details, but it boils down to:

What if you want to test how all the packages in a given ecosystem work in a given situation?

My main example was testing how many of the Ruby packages successfully install on Power, but I also talk about other languages and other cool tests you could run.

The 44m video is online. I haven't put the slides up yet but they should be available on GitHub soonish.

Unlike with the kernel talk, I didn't catch the names of most of the people with feedback.

Docker memory issues

One of the questions I received during the talk was about running into memory issues in Docker. I attempted to answer that during the Q&A. The person who asked the question then had a chat with me afterwards, and it turns out I had completely misunderstood the question. I thought it was about memory usage of running containers in parallel. It was actually about memory usage in the docker daemon when running lots of containers in serial. Apparently the docker daemon doesn't free memory during the life of the process, and the question was whether or not I had observed that during my runs.

I didn't have a good answer for this at the time other than "it worked for me", so I have gone back and looked at the docker daemon memory usage.

After a full Ruby run, the daemon is using about 13.9G of virtual memory, and 1.975G of resident memory. If I restart it, the memory usage drops to 1.6G of virtual and 43M of resident memory. So it would appear that the person asking the question was right, and I'm just not seeing it have an effect.

Other interesting feedback
  • Someone was quite interested in testing on Sparc, once they got their Go runtime nailed down.

  • A Rackspacer was quite interested in Python testing for OpenStack - this has some intricacies around Py2/Py3, but we had an interesting discussion around just testing to see if packages that claim Py3 support provide Py3 support.

  • A large jobs site mentioned using this technique to help them migrate their dependencies between versions of Go.

  • I was 'gently encouraged' to try to do better with how long the process takes to run - if for no other reason than to avoid burning more coal. This is a fair point. I did not explain very well what I meant by diminishing returns in the talk: there's lots you could do to make the process faster, it just comes at the cost of the simplicity that I really wanted when I first started the project. I am working (on and off) on better ways to deal with this by considering the dependency graph.

Categories: thinktime

Binh Nguyen: Linux BASH CLI RSS Reader, Explaining Prophets 4, and More

Tue 31st Jan 2017 05:01
- built my own RSS feed reader yesterday. It actually took a lot less time than going out to search for one that suited my needs. It's based on someone else's code (credit given in the code), but since that code was so buggy that it wouldn't work, I guess it's mine now? https://sites.google.com/site/dtbnguyen/rssread-1.11.tar.gz https://sites.google.com/site/dtbnguyen/ - code to extract from
Categories: thinktime

Tim Serong: My Personal Travel Ban

Mon 30th Jan 2017 21:01

I plan to avoid any and all travel to the USA for the foreseeable future due to the complete mess unfolding there with Trump’s executive orders banning immigration from some Muslim-majority countries, related protests, illegal detainment, etc. etc. (the list goes on, and I expect it to get longer).

It’s not that I’m from one of the blacklist countries, and I’m not a Muslim. I’m even white. But I no longer consider travel to the USA safe (especially bearing in mind my ridiculous beard and long hair), and even if I did, I’d want to stand in solidarity with the people who are currently being screwed. The notion of banning entire groups of people based on a single shared trait (in this case, probable adherence to a particular religion) is abhorrent; it demonizes our fellow humans, divides us and builds walls – whether metaphorical or physical – between our various communities. The fact that this immigration ban will impact refugees and asylum seekers just makes matters worse. I am deeply ashamed by Australia’s record on that front too, and concerned that our government will not do much better.

So I won’t be putting in any talks for Cephalocon - which is a damn shame, as I’m working on Ceph – or for any other US-based tech conference unless and until the situation over there changes.

I realise this post may not make much difference in the grander scheme of things, but one more voice is one more voice.

Categories: thinktime

Michael Still: A pythonic example of recording metrics about ephemeral scripts with prometheus

Mon 30th Jan 2017 21:01
In my previous post we talked about how to record information from short lived scripts (I call them ephemeral scripts by the way) with prometheus. The example there was a script which checked the SMART status of each of the disks in a machine and reported that via pushgateway. I now want to work through a slightly more complicated example.

I think you hit the limits of reporting simple values in shell scripts via curl requests fairly quickly. For example with the SMART monitoring script, SMART is capable of returning a whole heap of metrics about the performance of a disk, but we boiled that down to a single "health" value. This is largely because writing a parser for all the other values that smartctl returns would be inefficient and fragile in shell. So for this post, we're going to work through an example of how to report a variety of values from a python script. Those values could be the parsed output of smartctl, but to mix things up a bit, I'm going to use a different script I wrote recently.
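
As an aside, this is exactly the kind of parsing that is painful in shell but straightforward in python. A hedged sketch (not the script from the previous post; the column handling assumes smartctl's usual "-A" attribute table, and it typically needs root):

import subprocess

def smart_attributes(device):
    """Return a dict of SMART attribute name -> raw value for a device."""
    out = subprocess.check_output(['smartctl', '-A', device]).decode()
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID and have at least ten columns
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = fields[9]
    return attrs

print(smart_attributes('/dev/sda'))

Anyway, back to the script I actually want to talk about.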

This new script uses the Weather Underground API to lookup weather stations near my house, and then generate graphics of the weather forecast. These graphics are displayed on the various Cisco SIP phones I already had around the house. The forecasts look like this:



The script to generate these weather forecasts is relatively simple python, and you can see the source code on github.

My cunning plan here is to use prometheus' time series database and alert capabilities to drive home automation around my house. The first step for that is to start gathering some simple facts about the home environment so that we can do trending and decision making on them. The code to do this isn't all that complicated. First off, we need to add the python prometheus client to our python environment, which is hopefully a venv:

pip install prometheus_client
pip install six

That second dependency isn't a strict requirement for prometheus, but the script I'm working on needs it (because it needs to work out what's a text value, and python 3 is bonkers).

Next we import the prometheus client in our code and setup the counter registry. At the same time I record when the script was run:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()

And then we just add gauges for any values we want to add to the pushgateway:

Gauge('_'.join(field), '', registry=registry).set(value)

Finally, the values don't exist in the pushgateway until we actually push them there, which we do like this:

push_to_gateway('localhost:9091', job='weather', registry=registry)
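
Putting those three pieces together, a complete ephemeral script is only a handful of lines. This is a self-contained sketch rather than the actual weather script; the gauge names and values below are made up:

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
Gauge('job_last_success_unixtime', 'Last time the weather job ran',
      registry=registry).set_to_current_time()

# Pretend these came from the Weather Underground lookup
readings = {'weather_temperature_c': 21.5, 'weather_humidity_percent': 63}
for name, value in readings.items():
    Gauge(name, '', registry=registry).set(value)

push_to_gateway('localhost:9091', job='weather', registry=registry)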

You can see the entire patch I wrote to add prometheus support on github if you're interested in an example with more context.

Now we can have pretty graphs of temperature and stuff!

Tags for this post: prometheus monitoring python pushgateway
Related posts: Recording performance information from short lived processes with prometheus; Basic prometheus setup; Implementing SCP with paramiko; Mona Lisa Overdrive; Packet capture in python; mbot: new hotness in Google Talk bots

Categories: thinktime

Francois Marier: Creating a home music server using mpd

Mon 30th Jan 2017 17:01

I recently setup a music server on my home server using the Music Player Daemon, a cross-platform free software project which has been around for a long time.

Basic setup

Start by installing the server and the client package:

apt install mpd mpc

then open /etc/mpd.conf and set these:

music_directory "/path/to/music/" bind_to_address "192.168.1.2" bind_to_address "/run/mpd/socket" zeroconf_enabled "yes" password "Password1"

before replacing the alsa output:

audio_output { type "alsa" name "My ALSA Device" }

with a pulseaudio one:

audio_output { type "pulse" name "Pulseaudio Output" }

In order for the automatic detection (zeroconf) of your music server to work, you need to prevent systemd from creating the network socket:

systemctl stop mpd.service
systemctl stop mpd.socket
systemctl disable mpd.socket

otherwise you'll see this in /var/log/mpd/mpd.log:

zeroconf: No global port, disabling zeroconf

Once all of that is in place, start the mpd daemon:

systemctl start mpd.service

and create an index of your music files:

MPD_HOST=Password1@/run/mpd/socket mpc update

while watching the logs to notice any files that the mpd user doesn't have access to:

tail -f /var/log/mpd/mpd.log

Enhancements

I also added the following in /etc/logcheck/ignore.server.d/local-mpd to silence unnecessary log messages in logcheck emails:

^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Started Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopped Music Player Daemon.$
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ systemd\[1\]: Stopping Music Player Daemon...$

and created a cronjob in /etc/cron.d/mpd-francois to update the database daily and stop the music automatically in the evening:

# Refresh DB once a day
5 1 * * * mpd MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet update

# Think of the neighbours
0 22 * * 0-4 mpd MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop
0 23 * * 5-6 mpd MPD_HOST=Password1@/run/mpd/socket /usr/bin/mpc --quiet stop

Clients

To let anybody on the local network connect, I opened port 6600 on the firewall (/etc/network/iptables.up.rules since I'm using Debian's iptables-apply):

-A INPUT -s 192.168.1.0/24 -p tcp --dport 6600 -j ACCEPT

Then I looked at the long list of clients on the mpd wiki.

Desktop

The official website suggests two clients which are available in Debian and Ubuntu:

Both of them work well, but haven't had a release since 2011, even though there is some activity in 2013 and 2015 in their respective source control repositories.

Ario has a simpler user interface but gmpc has cover art download working out of the box, which is why I might stick with it.

In both cases, it is possible to configure a polipo proxy so that any external resources are fetched via Tor.

Android

On Android, I got these two to work:

I picked M.A.L.P. since it includes a nice widget for the homescreen.

iOS

On iOS, these are the most promising clients I found:

since MPoD and MPaD don't appear to be available on the AppStore anymore.

Categories: thinktime

sthbrx - a POWER technical blog: Extracting Early Boot Messages in QEMU

Mon 30th Jan 2017 16:01

Picture this: you're a kernel hacker, you make some changes to your kernel, you boot test it in QEMU, and it fails to boot. Even worse is the fact that it just hangs without any failure message, no stack trace, no nothing. "Now what?" you think to yourself.

You probably do the first thing you learnt in debugging101 and add abundant print statements all over the place to try and make some sense of what's happening and where it is that you're actually crashing. So you do this, you recompile your kernel, boot it in QEMU and lo and behold, nothing... What happened? You added all these shiny new print statements, where did the output go? The kernel still failed to boot (obviously), but where you were hoping to get some clue to go on, you were again left with an empty screen. "Maybe I didn't print early enough" or "maybe I got the code paths wrong" you think, "maybe I just need more prints" even. So let's delve a bit deeper: why didn't you see those prints, where did they go, and how can you get at them?

__log_buf

So what happens when you call printk()? Well what normally happens is, depending on the log level you set, the output is sent to the console or logged so you can see it in dmesg. But what happens if we haven't registered a console yet? Well then we can't print the message, can we? So it's logged in a buffer - the kernel log buffer to be exact, helpfully named __log_buf.

Console Registration

So how come I eventually see print statements on my screen? Well at some point during the boot process a console is registered with the printk system, and any buffered output can now be displayed. On ppc it happens that this occurs in register_early_udbg_console() called in setup_arch() from start_kernel(), which is the generic kernel entry point. From this point forward when you print something it will be displayed on the console, but what if you crash before this? What are you supposed to do then?

Extracting Early Boot Messages in QEMU

And now the moment you've all been waiting for: how do I extract those early boot messages in QEMU if my kernel crashes before the console is registered? Well it's quite simple really. QEMU is nice enough to allow us to dump guest memory, and we know the log buffer is in there somewhere, so we just need to dump the part of memory which corresponds to the log buffer.

Locating __log_buf

Before we can dump the log buffer we need to know where it is. Luckily for us this is fairly simple, we just need to dump all the kernel symbols and look for the right one.

> nm vmlinux > tmp; grep __log_buf tmp
c000000000f5e3dc b __log_buf

We use the nm tool to list all the kernel symbols and output this into a temporary file, which we can then grep for the log buffer (which we know to be named __log_buf). Presto: we are told that it's at kernel virtual address 0xc000000000f5e3dc, which on ppc64 (where the kernel's linear mapping starts at 0xc000000000000000) corresponds to guest physical address 0xf5e3dc.

Dumping Guest Memory

It's then simply a case of dumping guest memory from the QEMU console. So first we press ^a+c to get us to the QEMU console, then we can use the aptly named dump-guest-memory.

> help dump-guest-memory
dump-guest-memory [-p] [-d] [-z|-l|-s] filename [begin length] -- dump guest memory into file 'filename'.
        -p: do paging to get guest's memory mapping.
        -d: return immediately (do not wait for completion).
        -z: dump in kdump-compressed format, with zlib compression.
        -l: dump in kdump-compressed format, with lzo compression.
        -s: dump in kdump-compressed format, with snappy compression.
        begin: the starting physical address.
        length: the memory size, in bytes.

We just give it a filename for where we want our output to go, we know the starting address, we just don't know the length. We could choose some arbitrary length, but inspection of the kernel code shows us that:

#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);

Looking at the pseries_defconfig file shows us that the LOG_BUF_SHIFT is set to 18, and thus we know that the buffer is 2^18 bytes or 256kb. So now we run:

> dump-guest-memory tmp 0xf5e3dc 262144

And we now get our log buffer in the file tmp. This can simply be viewed with:

> hexdump -C tmp

This gives a readable, if poorly formatted output. I'm sure you can find something better but I'll leave that as an exercise for the reader.
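
One possible quick improvement, if you'd rather not squint at hexdump: strip the dump down to its printable characters with a few lines of python 3 (the filename matches the dump-guest-memory example above):

# Keep printable ASCII plus newlines, drop everything else
with open('tmp', 'rb') as f:
    raw = f.read()

text = ''.join(chr(b) for b in raw if 32 <= b < 127 or b in (10, 13))
print(text)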

Conclusion

So if, like me, your kernel hangs somewhere early in the boot process and you're left without any console output, you are now fully equipped to extract the log buffer in QEMU - and hopefully therein lies the answer to why you failed to boot.

Categories: thinktime
