
Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

OpenSTEM: This Week in HASS – term 3, week 8

Mon 28th Aug 2017 09:08

This week our younger students are putting together a Class Museum, while older students are completing their Scientific Report.

Foundation/Prep/Kindy to Year 3

Students in Foundation/Prep/Kindy (Units F.3 and F-1.3), as well as those in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are all putting together Class Museums of items of historical interest, either found at school or brought from home. Since the activity is similar (although explored to different depths by different year levels), there is the option for teachers to combine efforts across classes, and even across year levels, to make a more substantial Museum display. The Class Museum is an activity designed to assist students to consider how life has changed and what aspects are similar and different. Students should consider which items are easily recognisable and which are harder to identify. They can practise different points of view by imagining themselves using these objects and living in the past. Teachers can link this back to the stories read in the first weeks of term and allow students to compare their own lives with different types of past experiences of daily and family life. Museum Labels and a resource on Museums are provided to gain an understanding of how and why objects are displayed in museums.

Years 3 to 6

Older students are completing their main term research projects by finishing their Scientific Reports. This week students are concentrating on finishing their reports, drawing their Conclusions, making sure that their Bibliography is correct and formatting their report, including images, graphs and tables. For Year 3 students (Unit 3.7), the report will cover an aspect of the history of their capital city or local community. Year 4 students (Unit 4.3) are reporting on an investigation into Australia at the time of European contact and the start of European settlement. Students in Year 5 (Unit 5.3) are examining topics from Australian colonial history, and students in Year 6 (Unit 6.3) are researching topics from Federation and early 20th century Australian history. There is plenty of scope for incorporating digital technologies into the final version of the scientific report, especially for students in the upper year levels.  Formatting a document correctly is an essential skill and addresses many aspects of the digital technologies curriculum, adding the possibility of another curriculum section for the teacher to mark as done for the term.

OpenSTEM’s Understanding Our World program ensures that students’ work for assessment is completed well before the end of term, decreasing the rush to get everything assessed in the final weeks of term. It is our aim to support teachers and facilitate the processes involved in both teaching and assessment.

Categories: thinktime

BlueHackers: Mental Health Resources for New Dads

Thu 24th Aug 2017 10:08

Right now, one in seven new fathers experiences high levels of psychological distress and as many as one in ten experience depression or anxiety. Often distressed fathers remain unidentified and unsupported due to both a reluctance to seek help for themselves and low levels of community understanding that the transition to parenthood is a difficult period for fathers, as well as mothers.

The project is hoping to both increase understanding of stress and distress in new fathers and encourage new fathers to take action to manage their mental health.

This work is being informed by research commissioned by beyondblue into men’s experiences of psychological distress in the perinatal period.

Informed by the findings of the Healthy Dads research, three projects are underway to provide men with the knowledge, tools and support to stay resilient during the transition to fatherhood.

https://www.medicalert.org.au/news-and-resources/becoming-a-healthy-dad

Categories: thinktime

BlueHackers: The Attention Economy

Tue 22nd Aug 2017 16:08

In May 2017, James Williams, a former Google employee and doctoral candidate researching design ethics at Oxford University, won the inaugural Nine Dots Prize.

James argues that digital technologies privilege our impulses over our intentions, and are gradually diminishing our ability to engage with the issues we most care about.

Possibly a neat follow-up to our earlier post on “busy-ness”.

Categories: thinktime

OpenSTEM: This Week in HASS – term 3, week 7

Mon 21st Aug 2017 09:08

This week students are starting the final sections of their research projects and Scientific Reports. Our younger students are also preparing to set up a Class Museum.

Foundation/Prep/Kindy to Year 3

Our youngest students (Unit F.3) also complete a Scientific Report. Becoming familiar with the overall layout and skills associated with the scientific process at a young age means that, by the time students reach high school, the process will be second nature and their skills fine-tuned. This week teachers discuss how Science helps us find out things about the world. Teachers and students are also collecting material to form a Class Museum. Students in integrated, multi-age classes (Unit F-1.3) and Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are undertaking a similar set of activities this week, albeit in increasing depth as appropriate for each year level, and with different subject matter, according to the class focus. By Year 3 (Unit 3.3), students are writing full sentences and even short paragraphs in their Scientific Report, focusing on a topic in the local history of their community or capital city.

Years 3 to 6

Students in integrated Year 3/4 classes (Unit 3.7) and those in Year 4 (Unit 4.3), 5 (Unit 5.3) and 6 (Unit 6.3) are concentrating on analysis of data this week, for the final stages of their Scientific Report. It is expected that students have gathered information on their chosen research topic on an aspect of Australian history for the term by now and are analysing this information in order to answer their research questions and start to draw conclusions about their topic. This week’s lessons focus on pulling everything together towards a full, final report. Teachers are able to quickly identify which students need extra guidance by referring to the Student Workbook, which tracks each student’s progress on a weekly basis. Thus feedback, intervention and additional support can be offered in good time, before the term marks are collated, allowing each student the chance to achieve their best.

Each year level focuses on a different aspect of Australian history and enough topics are supplied to ensure that each student is working on new information, even in multi-age classes. Instead of finding a continual stream of new, novel HASS units, or repeating material some students have covered before, OpenSTEM’s Understanding Our World® program allows teachers to tailor the same units to look different for each year level, thus ensuring that students are practising their skills on new material, as well as covering year-level appropriate skills and content. By the time students are in Year 6, they will have covered the full suite of Australian History up to the 20th century, as well as having studied each continent in turn. Civics and Citizenship and Economics and Business form part of this integrated whole and do not have to be taught separately. They will be ready to enter high school with a full suite of honed research and problem-solving skills, as well as having covered the core material necessary.

Categories: thinktime

Ben Martin: CNC Z Axis with 150mm or more of travel

Mon 21st Aug 2017 08:08
Many of the hobby-priced CNC machines have limited Z axis movement. This, coupled with limited clearance on the gantry, leaves only a limited number of options for work fixtures. For example, it is very unlikely that there will be clearance for a vice on the cutting bed of a cheap machine.

I started tinkering around with a Z Axis assembly which offers around 150mm of travel. The assembly also uses bearing blocks that should help it withstand the forces that drilling and cutting can generate.


The assembly is designed to be as thin as possible. The spindle mount is a little wider which allows easy bolting onto the spindle mount plate which attaches to these bearings and drive nut. The width of the assembly is important because it will limit the travel in the Y axis if it can interact with the gantry in any way.

Construction is mainly done in 1/4 and 1/2 inch 6061 alloy. The black bracket at the bottom is steel. This seemed like a reasonable choice since that bracket was going to be key to holding the weight and attachment to the gantry.

The Z axis shown above needs to be combined with a gantry height extension when attaching to a hobby CNC to be really effective. Using a longer travel Z axis like this would allow higher gantries which, combined, allow for easier fixturing and also pave the way for a 4th/5th axis to fit under the cutter.

Categories: thinktime

OpenSTEM: This Week in HASS – term 3, week 6

Mon 14th Aug 2017 09:08

This week all our students are hard at work examining the objects they are using for their research projects. For the younger students these are objects that will be used to generate a Class Museum. For the older students, the objects of study relate to their chosen topic in Australian History.

Foundation / Prep / Kindy to Year 3

Students in Foundation/Prep/Kindy (Unit F.3) are examining items from the past and completing their Scientific Report by drawing these items in the Method section of the report. We also ask students to analyse their Data by drawing a picture of how people would have used that item in the past. Students in combined Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), as well as students in Year 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are also addressing the Method, Data and Analysis sections of their report by listing, describing and drawing the sources and information which the teacher has helped them to locate. The sources should include items which can be used to make a Class Museum, as well as old photographs, paintings, books, newspapers etc. Teachers can guide class discussions around how items were used in the past – which are familiar, and which are not and compare with the stories read in the first weeks of term.

Years 3 to 6

Older students are expected to analyse their Data in increasing detail relevant to their year-level, as well as listing sources in the Method section of their Scientific Reports. Students in Year 4 (Unit 4.3) are researching a topic from Australia at the time of contact with Europeans, which includes topics in Aboriginal and early colonial history. Students should consider each source and what information they can get from the source. In addition students should think about how objects, pictures and texts were used in the past and what inherent biases might be present. Students in Year 5 (Unit 5.3) are researching a topic from Australian colonial history. Teachers should guide students through the process of determining whether they are dealing with a primary or secondary source, as well as how to use that source to learn more about the past. Inherent bias in different sources should be discussed. Students in Year 6 (Unit 6.3) are researching a topic surrounding Federation and events in Australia in the early 20th century. Many of the sources available contain both primary and secondary information and students should be starting to develop an understanding of how to use, analyse and reference these sources. In preparation for the requirements of high school, teachers should guide these students through the process of building an interpretation of their analysis which is substantiated through reference to their sources (listed in the Bibliography of their report). Students should be able to show where they got their information and how they are interpreting that information. For students in Year 6, the Student Workbook is more of a guide for writing a complete Scientific Report, which they are expected to compile more or less independently.

Categories: thinktime

Donna Benjamin: Tools for talking

Fri 11th Aug 2017 03:08

I gave a talk a couple of years ago called Tools for Talking.

I'm preparing a new talk, which, in some ways, is a sequel to this one. As part of that prep, I thought it might be useful to write some short summaries of each of the tools outlined here, with links to resources on them.

  • Powerful Non Defensive Communication
  • Non Violent Communication
  • Active Listening
  • Appreciative Inquiry
  • Transactional Analysis
  • The Drama Triangle vs The Empowerment Dynamic
  • The 7 Cs

So I might try to make a start on that over the next week or so.

 

In the meantime, here's the slides:

Tools for talking from Donna Benjamin

And here's the video of the presentation at DrupalCon Barcelona

Categories: thinktime

Ben Martin: Larger format CNC

Thu 10th Aug 2017 22:08
Having access to a wood cutting CNC machine that can do a full sheet of plywood at once has led me to an initial project for a large sconce stand. The sconce is 210mm square at the base and the DAR ash I used was 140mm across. This led to the four edge grain glue ups in the middle of the stand.


The design was created in Fusion 360 by just seeing what might look good. Unfortunately the sketch export as DXF presented some issues on the import side. This was part of why a smaller project like this was a good first choice rather than a more complex whole sheet of ply.

To get around the DXF issue the tip was to select a face of a body and create a sketch from that face, then export the created sketch as DXF, which seemed to work much better. I don't know what it was in the original sketch (the one I created the body from) that the DXF export/import didn't like. Maybe the dimensions, maybe the guide lines; it's hard to know without a bisect. The CNC was using the EnRoute software, so I had to work out how to bounce things from Fusion over to EnRoute and then get some help to reCAM things on that side and set up tabs et al.

One tip for others would be to use the DAR timber to form a glue up before arriving at a facility with a larger cut surface. Fewer pieces means fewer tabs/bridges and easier reCAM. A preformed glued panel would also have let me use more advanced designs such as n and u slots to connect two pieces instead of edge grains to connect four.

Overall it was a fun build and the owner of the sconce will love having it slightly off the table top so it can more easily be seen.

Categories: thinktime

Francois Marier: pristine-tar and git-buildpackage Work-arounds

Thu 10th Aug 2017 15:08

I recently ran into problems trying to package the latest version of my planetfilter tool.

This is how I was able to temporarily work-around bugs in my tools and still produce a package that can be built reproducibly from source and that contains a verifiable upstream signature.

pristine-tar is unable to reproduce a tarball

After importing the latest upstream tarball using gbp import-orig, I tried to build the package but ran into this pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

So I decided to throw away what I had, re-import the tarball and try again. This time, I got a different pristine-tar error:

$ gbp buildpackage
gbp:error: Pristine-tar couldn't checkout "planetfilter_0.7.4.orig.tar.gz": xdelta3: target window checksum mismatch: XD3_INVALID_INPUT
xdelta3: normally this indicates that the source file is incorrect
xdelta3: please verify the source file with sha1sum or equivalent
xdelta3 decode failed! at /usr/share/perl5/Pristine/Tar/DeltaTools.pm line 56.
pristine-tar: command failed: pristine-gz --no-verbose --no-debug --no-keep gengz /tmp/user/1000/pristine-tar.mgnaMjnwlk/wrapper /tmp/user/1000/pristine-tar.EV5aXIPWfn/planetfilter_0.7.4.orig.tar.gz.tmp
pristine-tar: failed to generate tarball

After looking through the list of open bugs, I thought it was probably not worth filing a bug given how many similar ones are waiting to be addressed.

So as a work-around, I simply symlinked the upstream tarball I already had and then built the package using the tarball directly instead of the upstream git branch:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz ../planetfilter_0.7.4.orig.tar.gz
gbp buildpackage --git-tarball-dir=..

Given that only the upstream and master branches are signed, the .delta file on the pristine-tar branch could be fixed at any time in the future by committing a new .delta file once pristine-tar gets fixed. This therefore seems like a reasonable work-around.
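
For reference, once pristine-tar itself is fixed, regenerating the delta would presumably just be a matter of committing the tarball again against the upstream tag, along these lines (a sketch only; the tag name depends on how gbp is configured in the repository):

pristine-tar commit ../planetfilter_0.7.4.orig.tar.gz upstream/0.7.4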

git-buildpackage doesn't import the upstream tarball signature

The second problem I ran into was a missing upstream signature after building the package with git-buildpackage:

$ lintian -i planetfilter_0.7.4-1_amd64.changes
E: planetfilter changes: orig-tarball-missing-upstream-signature planetfilter_0.7.4.orig.tar.gz
N:
N:   The packaging includes an upstream signing key but the corresponding
N:   .asc signature for one or more source tarballs are not included in your
N:   .changes file.
N:
N:   Severity: important, Certainty: certain
N:
N:   Check: changes-file, Type: changes
N:

This problem (and the lintian error I suspect) is fairly new and hasn't been solved yet.

So until gbp import-orig gets proper support for upstream signatures, my work-around was to copy the upstream signature in the export-dir output directory (which I set in ~/.gbp.conf) so that it can be picked up by the final stages of gbp buildpackage:

ln -s ~/deve/remote/planetfilter/dist/planetfilter-0.7.4.tar.gz.asc ../build-area/planetfilter_0.7.4.orig.tar.gz.asc
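
For context, the export-dir mentioned above lives in ~/.gbp.conf; a minimal sketch of the relevant stanza (the path is only an example, not necessarily what was used here):

[buildpackage]
export-dir = ../build-area/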

If there's a better way to do this, please feel free to leave a comment (authentication not required)!

Categories: thinktime

Tim Serong: NBN Fixed Wireless – Four Years On

Mon 07th Aug 2017 23:08

It’s getting close to the fourth anniversary of our NBN fixed wireless connection. Over that time, speaking as someone who works from home, it’s been generally quite good. 22-24 Mbps down and 4-4.5 Mbps up is very nice. That said, there have been a few problems along the way, and more recently evenings have become significantly irritating.

There were some initial teething problems, and at least three or four occasions where someone was performing “upgrades” during business hours over the course of several consecutive days. These upgrade periods wouldn’t have affected people who are away at work or school or whatever during the day, as by the time they got home, the connection would have been back up. But for me, I had to either tether my mobile phone to my laptop, or go down to a cafe or friend’s place to get connectivity.

There’s also the icing problem, which occurs a couple of times a year when snow falls below 200-300 metres for a few days. No internet, and also no mobile phone.

These are all relatively isolated incidents though. What’s been happening more recently is our connection speed in the evenings has gone to hell. I don’t tend to do streaming video, and my syncing several GB of software mirrors happens automatically in the wee hours while I’m asleep, so my subjective impression for some time has just been that “things were kinda slower during the evenings” (web browsing, pushing/pulling from already cloned git repos, etc.). I vented about this on Twitter in mid-June but didn’t take any further action at the time.

Several weeks later, on the evening of July 28, I needed to update and rebuild a Ceph package for openSUSE and SLES. The specifics aren’t terribly relevant to this post, but the process (which is reasonably automated) involves running something like `git clone git@github.com:SUSE/ceph.git && cd ceph && git submodule update --init --recursive`, which in turn downloads a few GB of data. I’ve done this several times in the past, and it usually takes an hour, or maybe a bit more. So you start it up, then go make a meal, come back and you’re done.

Not so on that Friday evening. It took six hours.

I ran a couple of speed tests:

I looked at my smokeping graphs:

That’s awfully close to 20% packet loss in the evenings. It happens every night:

And it’s been happening for a long time:

Right now, as I’m writing this, the last three hours show an average of 15.57% packet loss:

So I’ve finally opened a support ticket with iiNet. We’ll see what they say. It seems unlikely that this is a problem with my equipment, as my neighbour on the same wireless tower has also had noticeable speed problems for at least the last couple of months. I’m guessing it’s either not enough backhaul, or the local NBN wireless tower is underprovisioned (or oversubscribed). I’m leaning towards the latter, as in recent times the signal strength indicators on the NTD flick between two amber and three green lights in the evenings, whereas during the day it’s three green lights all the time.

Categories: thinktime

sthbrx - a POWER technical blog: memcmp() for POWER8

Mon 07th Aug 2017 12:08
Userspace

When writing C programs in userspace there is libc which does so much of the heavy lifting. One important thing libc provides is portability in performing syscalls, that is, you don't need to know the architectural details of performing a syscall on each architecture your program might be compiled for. Another important feature that libc provides for the average userspace programmer is highly optimised routines to do things that are usually performance critical. It would be extremely inefficient for each userspace programmer if they had to implement even the naive version of these functions, let alone optimised versions. Let us take memcmp() for example; I could trivially implement it in C like this:

int memcmp(uint8_t *p1, uint8_t *p2, int n)
{
    int i;

    for (i = 0; i < n; i++) {
        if (p1[i] < p2[i])
            return -1;
        if (p1[i] > p2[i])
            return 1;
    }

    return 0;
}

However, while it is incredibly portable it is simply not going to perform, which is why the nice people who write libc have highly optimised ones in assembly for each architecture.

Kernel

When writing code for the Linux kernel, there isn't the luxury of a fully featured libc since it expects (and needs) to be in userspace, so we need to implement the features we need ourselves. Linux doesn't need all the features but something like memcmp() is definitely a requirement.

There have been some recent optimisations in glibc from which the kernel could benefit too! The question to be asked is, does the glibc optimised power8_memcmp() actually go faster or is it all smoke and mirrors?

Benchmarking memcmp()

With things like memcmp() it is actually quite easy to choose datasets which can make any implementation look good. For example, the new power8_memcmp() makes use of the vector unit of the POWER8 processor; in order to do so in the kernel there must be a small amount of setup code so that the rest of the kernel knows that the vector unit has been used and it correctly saves and restores the userspace vector registers. This means that power8_memcmp() has a slightly larger overhead than the current one, so for small compares, or compares which differ early on, the newer 'faster' power8_memcmp() might actually not perform as well. For any kind of large compare however, using the vector unit should outperform a CPU register load and compare loop. It is for this reason that I wanted to avoid using micro benchmarks and use a 'real world' test as much as possible.

The biggest user of memcmp() in the kernel, at least on POWER is Kernel Samepage Merging (KSM). KSM provides code to inspect all the pages of a running system to determine if they're identical and deduplicate them if possible. This kind of feature allows for memory overcommit when used in a KVM host environment as guest kernels are likely to have a lot of similar, readonly pages which can be merged with no overhead afterwards. In order to determine if the pages are the same KSM must do a lot of page sized memcmp().
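
As an aside, KSM's merging activity is exposed under sysfs, which is a handy way of confirming that it is actually finding identical pages during a test like this; a minimal check (standard sysfs paths, values are whatever your system reports):

grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing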

Performance

Performing a lot of page sized memcmp() is the one flaw with this test: the sizes of the memcmp() calls don't vary. Hopefully the data will be 'random' enough that we can still observe differences between the two approaches.

My approach for testing involved getting the delta of ktime_get() across calls to memcmp() in memcmp_pages() (mm/ksm.c). This actually generated massive amounts of data, so, for consistency the following analysis is performed on the first 400MB of deltas collected.

The host was compiled with powernv_defconfig and run out of a ramdisk. For consistency the host was rebooted between each run so as to not have any previous tests affect the next. The host was rebooted a total of six times: the first three times my 'patched' power8_memcmp() kernel was booted, and the second three times just my data collection patch was applied, the 'vanilla' kernel. Both kernels are based off 4.13-rc3.

On each boot the following script was run and the resulting deltas file saved somewhere before reboot. The command line argument was always 15.

#!/bin/sh
ppc64_cpu --smt=off
#Host actually boots with ksm off but be sure
echo 0 > /sys/kernel/mm/ksm/run
#Scan a lot of pages
echo 999999 > /sys/kernel/mm/ksm/pages_to_scan
echo "Starting QEMUs"
i=0
while [ "$i" -lt "$1" ] ; do
    qemu-system-ppc64 -smp 1 -m 1G -nographic -vga none \
        -machine pseries,accel=kvm,kvm-type=HV \
        -kernel guest.kernel -initrd guest.initrd \
        -monitor pty -serial pty &
    i=$(expr $i + 1);
done
echo "Letting all the VMs boot"
sleep 30
echo "Turning KSM on"
echo 1 > /sys/kernel/mm/ksm/run
echo "Letting KSM do its thing"
sleep 2m
echo 0 > /sys/kernel/mm/ksm/run
dd if=/sys/kernel/debug/ksm/memcmp_deltas of=deltas bs=4096 count=100

The guest kernel was a pseries_le_defconfig 4.13-rc3 with the same ramdisk the host used. It booted to the login prompt and was left to idle.

Analysis

A variety of histograms were then generated in an attempt to see how the behaviour of memcmp() changed between the two implementations. It should be noted here that the y axis in the following graphs is a log scale as there were a lot of small deltas. The first observation is that the vanilla kernel had more small deltas; this is made particularly evident by the 'tally' points, which are a running total of all deltas less than the tally value.

Graph 1 depicting the vanilla kernel having a greater amount of small (sub 20ns) deltas than the patched kernel. The green points rise faster (left to right) and higher than the yellow points.

Still looking at the tallies, graph 1 also shows that the tally of deltas is very close by the 100ns mark, which means that the overhead of power8_memcmp() is not too great.

The problem with looking at only deltas under 200ns is that the performance results we want, that is, the difference between the algorithms, is being masked by things like cache effects. To avoid this problem it may be wise to look at longer running (larger delta) memcmp() calls.

The following graph plots all deltas below 5000ns - still relatively short calls to memcmp() but an interesting trend emerges: Graph 2 shows that above 500ns the blue (patched kernel) points appear to have all shifted left with respect to the purple (vanilla kernel) points. This shows that for any memcmp() which will take more than 500ns to get a result it is favourable to use power8_memcmp() and it is only detrimental to use power8_memcmp() if the time will be under 50ns (a conservative estimate).

It is worth noting that graph 1 and graph 2 are generated by combining the first run of data collected from the vanilla and patched kernels. All the deltas for both runs can be viewed separately here for vanilla and here for patched. Finally, the results from the other four runs look virtually identical and provide me with a fair amount of confidence that these results make sense.

Conclusions

It is important to separate possible KSM optimisations from generic memcmp() optimisations; for example, perhaps KSM shouldn't be calling memcmp() if it suspects the first byte will differ. On the other hand, one thing that power8_memcmp() could do (which it currently doesn't) is check the length parameter and perhaps avoid the overhead of enabling kernel vector if the compare is less than some small number of bytes.

It does seem like, at least for the 'average case', glibc's power8_memcmp() is an improvement over what we have now.

Future work

A second round of data collection and plotting of delta vs position of first byte to differ should confirm these results; this would mean a more invasive patch to KSM.

Categories: thinktime

OpenSTEM: This Week in HASS – term 3, week 5

Mon 07th Aug 2017 09:08

This week students in all year levels are working on their research project for the term. Our youngest students are looking at items and pictures from the past, while our older students are collecting source material for their project on Australian history.

Foundation/Prep/Kindy to Year 3

The focus of this term is an investigation into the past and how we can find out about past events. For students in Foundation/Prep/Kindy (Units F.1 and F-1.3), Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) it is recommended that the teacher bring in sources of information about the past for the students to examine. Teachers can tailor these to suit a particular direction for their class. Examples of possible sources include old toys, old books, historic photographs, texts and items about local history (including the school itself), images of old paintings, old newspaper articles which can be accessed online etc. OpenSTEM provides resources which can be used for these investigations: e.g. Historic Photographs of Families, Modes of Transport 100 Years Ago, Brisbane Through the Years, Perth Through the Years, resources on floods in Brisbane and Gundagai, bush fires in Victoria, on the different colonies in Australia etc. Teachers can also use the national and state resources such as the State Library of Queensland, particularly their Picture Archive; the State Library of NSW; the State Library of South Australia, particularly their images collection; the National Archives of Australia; Trove, which archives old newspapers in Australia; Museums Victoria, and many similar sites. Students should also be encouraged to bring material from home, which can be built up into a Class Museum.

Years 3 to 6

As students in Years 3 (Unit 3.7), 4 (Unit 4.3), 5 (Unit 5.3) and 6 (Unit 6.3) move into the period of gathering information from sources to address their research question, teachers should guide them to consider the nature of each source and how to record it. Resources such as Primary and Secondary Sources and Historical Sources aid in understanding the context of different kinds of sources and teachers should assist students to record the details of each source for their Method section of their Scientific Report. Recording these sources in detail is also essential for being able to compile a Bibliography, which is required to accompany the report. OpenSTEM resources are listed for each research topic for these units, but students (and teachers) should feel free to complement these with any additional material such as online collections of images and newspaper articles (such as those listed in the paragraph above). These will help students to achieve a more unique presentation for their report and demonstrate the ability to collate a variety of information, thus earning a higher grade. Using a wide range of sources will also give students a wider appreciation for their chosen topic in Australian history.

Categories: thinktime

Francois Marier: Time Synchronization with NTP and systemd

Mon 07th Aug 2017 06:08

I recently ran into problems with generating TOTP 2-factor codes on my laptop. The fact that some of the codes would work and some wouldn't suggested a problem with time keeping on my laptop.

This was surprising since I've been running NTP for many years and have therefore never had to think about time synchronization. After looking into this though, I realized that the move to systemd had changed how this is meant to be done.

The new systemd time synchronization daemon

On a machine running systemd, there is no need to run the full-fledged ntpd daemon anymore. The built-in systemd-timesyncd can do the basic time synchronization job just fine.

However, I noticed that the daemon wasn't actually running:

$ systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: inactive (dead)
Condition: start condition failed at Thu 2017-08-03 21:48:13 PDT; 1 day 20h ago
     Docs: man:systemd-timesyncd.service(8)

referring instead to a mysterious "failed condition". Attempting to restart the service did provide more details though:

$ systemctl restart systemd-timesyncd.service
$ systemctl status systemd-timesyncd.service
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: inactive (dead)
Condition: start condition failed at Sat 2017-08-05 18:19:12 PDT; 1s ago
           └─ ConditionFileIsExecutable=!/usr/sbin/ntpd was not met
     Docs: man:systemd-timesyncd.service(8)

The above check for the presence of /usr/sbin/ntpd points to a conflict between ntpd and systemd-timesyncd. The solution of course is to remove the former before enabling the latter:

apt purge ntp
systemctl enable systemd-timesyncd.service
systemctl restart systemd-timesyncd.service

Enabling time synchronization with NTP

Once the ntp package has been removed, it is time to enable NTP support in timesyncd.

Start by choosing the NTP server pool nearest you and put it in /etc/systemd/timesyncd.conf. For example, mine reads like this:

[Time]
NTP=ca.pool.ntp.org

before restarting the daemon:

systemctl restart systemd-timesyncd.service

That may not be enough on your machine though. To check whether or not the time has been synchronized with NTP servers, run the following:

$ timedatectl status
...
Network time on: yes
NTP synchronized: no
RTC in local TZ: no

If NTP is not enabled, then you can enable it by running this command:

timedatectl set-ntp true

Once that's done, everything should be in place and time should be kept correctly:

$ timedatectl status
...
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
Categories: thinktime

Russell Coker: QEMU for ARM Processes

Tue 01st Aug 2017 19:08

I’m currently doing some embedded work on ARM systems. Having a virtual ARM environment is of course helpful. For the i586 class embedded systems that I run it’s very easy to set up a virtual environment: I just have a chroot run from systemd-nspawn with the --personality=x86 option. I run it on my laptop for my own development and on a server my client owns so that they can deal with the “hit by a bus” scenario. I also occasionally run KVM virtual machines to test the boot image of i586 embedded systems (they use GRUB etc and are just like any other 32bit Intel system).

ARM systems have a different boot setup, there is a uBoot loader that is fairly tightly coupled with the kernel. ARM systems also tend to have more unusual hardware choices. While the i586 embedded systems I support turned out to work well with standard Debian kernels (even though the reference OS for the hardware has a custom kernel) the ARM systems need a special kernel. I spent a reasonable amount of time playing with QEMU and was unable to make it boot from a uBoot ARM image. The Google searches I performed didn’t turn up anything that helped me. If anyone has good references for getting QEMU to work for an ARM system image on an AMD64 platform then please let me know in the comments. While I am currently surviving without that facility it would be a handy thing to have if it was relatively easy to do (my client isn’t going to pay me to spend a week working on this and I’m not inclined to devote that much of my hobby time to it).

QEMU for Process Emulation

I’ve given up on emulating an entire system and now I’m using a chroot environment with systemd-nspawn.

The package qemu-user-static has statically linked programs for emulating various CPUs on a per-process basis. You can run this as “/usr/bin/qemu-arm-static ./statically-linked-arm-program“. The Debian package qemu-user-static uses the binfmt_misc support in the kernel to automatically run /usr/bin/qemu-arm-static when an ARM binary is executed. So if you have copied the image of an ARM system to /chroot/arm you can run commands like the following to enter the chroot:

cp /usr/bin/qemu-arm-static /chroot/arm/usr/bin/qemu-arm-static
chroot /chroot/arm bin/bash

Then you can create a full virtual environment with “/usr/bin/systemd-nspawn -D /chroot/arm” if you have systemd-container installed.
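
If ARM binaries inside the chroot fail with “Exec format error”, it is worth confirming that the binfmt_misc handler is actually registered; a minimal check (the entry name comes from the Debian qemu-user-static packaging and may differ elsewhere):

cat /proc/sys/fs/binfmt_misc/qemu-arm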

Selecting the CPU Type

There is a huge range of ARM CPUs with different capabilities. How this compares to the range of x86 and AMD64 CPUs depends on how you are counting (the i5 system I’m using now has 76 CPU capability flags). The default CPU type for qemu-arm-static is armv7l and I need to emulate a system with an armv5tejl. Setting the environment variable QEMU_CPU=pxa250 gives me armv5tel emulation.
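
As a quick sanity check (a sketch, assuming the /chroot/arm image from the previous section), the machine type reported inside the chroot should follow the CPU model, since qemu-user derives its emulated uname from it:

QEMU_CPU=pxa250 chroot /chroot/arm bin/uname -m   # should report armv5tel
chroot /chroot/arm bin/uname -m                   # default, reports armv7l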

The ARM Architecture Wikipedia page [2] says that in armv5tejl the T stands for Thumb instructions (which I don’t think Debian uses), the E stands for DSP enhancements (which probably isn’t relevant for me as I’m only doing integer maths), the J stands for supporting special Java instructions (which I definitely don’t need) and I’m still trying to work out what L means (comments appreciated).

So it seems clear that the armv5tel emulation provided by QEMU_CPU=pxa250 will do everything I need for building and testing ARM embedded software. The issue is how to enable it. For a user shell I can just put export QEMU_CPU=pxa250 in .login or something, but I want to emulate an entire system (cron jobs, ssh logins, etc).

I’ve filed Debian bug #870329 requesting a configuration file for this [1]. If I put such a configuration file in the chroot everything would work as desired.

To get things working in the meantime I wrote the below wrapper for /usr/bin/qemu-arm-static that calls /usr/bin/qemu-arm-static.orig (the renamed version of the original program). It’s ugly (I would use a config file if I needed to support more than one type of CPU) but it works.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  if(setenv("QEMU_CPU", "pxa250", 1))
  {
    printf("Can't set $QEMU_CPU\n");
    return 1;
  }
  execv("/usr/bin/qemu-arm-static.orig", argv);
  printf("Can't execute \"%s\" because of qemu failure\n", argv[0]);
  return 1;
}
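
One possible way to drop the wrapper into place, compiled statically so that the copy inside the ARM chroot still runs (file names are mine for illustration, not from the original setup):

gcc -O2 -static -o qemu-arm-wrapper qemu-arm-wrapper.c
mv /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static.orig
cp qemu-arm-wrapper /usr/bin/qemu-arm-static
cp /usr/bin/qemu-arm-static /usr/bin/qemu-arm-static.orig /chroot/arm/usr/bin/   # the chroot has its own copy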

Related posts:

  1. SE Linux vs chroot A question that is often asked is whether to use...
  2. Video Mode and KVM I recently changed my KVM servers to use the kernel...
  3. Creating a SE Linux Chroot environment Why use a Chroot environment? A large part of the...
Categories: thinktime

Gabriel Noronha: NBN FTTN

Tue 01st Aug 2017 11:08

Unfortunately for us, our home only got an FTTN NBN connection. But like others, I thought I would share the speed improvement results from cleaning up the wiring inside your own home. We have 2 phone sockets, 1 in the bedroom and one in the kitchen. By removing the cable from the kitchen to the bedroom, we managed to increase our maximum line rate from 14.2Mbps upload and 35.21Mbps download to 20Mbps upload and 47Mbps download.

Bedroom Phone Line connected. Line Statistics Post Wiring clean up

We’ve also put in a speed change request from the 12/5 plan to the 50/20 plan, so next month we should be enjoying a bit more of an NBN.

To think that with FTTH you could have had up to 4 100/40 connections, and you wouldn’t have had to pay someone to rewire your phone sockets.

Categories: thinktime

Russell Coker: Running a Tor Relay

Mon 31st Jul 2017 23:07

I previously wrote about running my SE Linux Play Machine over Tor [1] which involved configuring ssh to use Tor.

Since then I have installed a Tor hidden service for ssh on many systems I run for clients. The reason is that it is fairly common for them to allow a server to get a new IP address by DHCP or accidentally set their firewall to deny inbound connections. Without some sort of VPN this results in difficult phone calls talking non-technical people through the process of setting up a tunnel or discovering an IP address. While I can run my own VPN for them I don’t want their infrastructure tied to mine and they don’t want to pay for a 3rd party VPN service. Tor provides a free VPN service and works really well for this purpose.

As I believe in giving back to the community I decided to run my own Tor relay. I have no plans to ever run a Tor Exit Node because that involves more legal problems than I am willing or able to deal with. A good overview of how Tor works is the EFF page about it [2]. The main point of a “Middle Relay” (or just “Relay”) is that it only sends and receives encrypted data from other systems. As the Relay software (and the sysadmin if they choose to examine traffic) only sees encrypted data without any knowledge of the source or final destination the legal risk is negligible.

Running a Tor relay is quite easy to do. The Tor project has a document on running relays [3], which basically involves changing 4 lines in the torrc file and restarting Tor.
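
For the curious, the sort of torrc lines involved look something like the following; this is only a sketch with placeholder values, not the author's actual configuration (the Tor relay guide [3] has the details):

ORPort 9001
Nickname myrelayname
ContactInfo tor-admin@example.com
ExitPolicy reject *:*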

If you are running on Debian you should install the package tor-geoipdb to allow Tor to determine where connections come from (and to not whinge in the log files).

ORPort [IPV6ADDR]:9001

If you want to use IPv6 then you need a line like the above with IPV6ADDR replaced by the address you want to use. Currently Tor only supports IPv6 for connections between Tor servers and only for the data transfer not the directory services.

Data Transfer

I currently have 2 systems running as Tor relays; both of them are well connected in a European DC and they are each transferring about 10GB of data per day, which isn’t a lot by server standards. I don’t know if there is a sufficient number of relays around the world that the share of the load is small or if there is some geographic dispersion algorithm which determined that there are too many relays in operation in that region.

Related posts:

  1. RPC and SE Linux One ongoing problem with TCP networking is the combination of...
  2. The Most Important things for running a Reliable Internet Service One of my clients is currently investigating new hosting arrangements....
  3. how to run dynamic ssh tunnels service smtps { disable = no socket_type = stream wait...
Categories: thinktime

OpenSTEM: This Week in HASS – term 3, week 4

Mon 31st Jul 2017 09:07

This week younger students start investigating how we can find out about the past. This investigation will be conducted over the next 3 weeks and will culminate in a Scientific Report. Older students are considering different sources of historical information and how they will use these sources in their research.

Foundation/Prep/Kindy to Year 3

Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), as well as those in integrated classes (Unit F-1.3) and Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are all starting to think about how we can find out about the past. This is a great opportunity for teachers to encourage students to think about how we know about the past and brainstorm ideas, as well as coming up with their own avenues of inquiry. Teachers may wish to hold a Question and Answer session in class to help guide students to examine many different aspects of this topic. The resource Finding Out About The Past contains core information to help the teacher guide the discussion to cover different ways of examining the past. This discussion can be tailored to the level and individual circumstances of each class. Foundation/Prep/Kindy students are just starting to think about the past as a time before the present and how this affects what we know about past events. The discussion can be developed in higher years, and the teacher can start to introduce the notion of sources of information, including texts and material culture. This investigation forms the basis for the Method section of the Scientific Report, which is included in the Student Workbook.

Years 3 to 6

Students in Years 3 (Unit 3.7), 4 (Unit 4.3), 5 (Unit 5.3) and 6 (Unit 6.3) are following a similar line of investigation this week, but examining Historical Sources specifically. As well as Primary and Secondary Sources, students are encouraged to think about Oral Sources, Textual Sources and Material Culture (artefacts such as stone tools or historical items). This discussion forms the basis for students completing the Method section of their Scientific Report, where they will list the sources of information and how these contributed to their research. Older students might be able to self-direct this process, although teachers may wish to guide the process through an initial class discussion. Teachers may wish to take the class through a discussion of the sources they are using for their research and discuss how students will use and report on these sources in their report for their topic.

Categories: thinktime

David Rowe: QSO Today Podcast

Sun 30th Jul 2017 09:07

Eric, 4Z1UG, has kindly interviewed me for his fine QSO Today Podcast.

Categories: thinktime

Russell Coker: Apache Mesos on Debian

Sat 29th Jul 2017 21:07

I decided to try packaging Mesos for Debian/Stretch. I had a spare system with a i7-930 CPU, 48G of RAM, and SSDs to use for building. The i7-930 isn’t really fast by today’s standards, but 48G of RAM and SSD storage mean that overall it’s a decent build system – faster than most systems I run (for myself and for clients) and probably faster than most systems used by Debian Developers for build purposes.

There’s a github issue about the lack of an upstream package for Debian/Stretch [1]. That upstream issue could probably be worked around by adding Jessie sources to the APT sources.list file, but a package for Stretch is what is needed anyway.

Here is the documentation on building for Debian [2]. The list of packages it gives as build dependencies is incomplete; it also needs zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev. So BUILDING this software requires Java + Maven, Ruby, and Python along with autoconf, libtool, and all the usual Unix build tools. It also requires the FPM (Fucking Package Management) tool; I take the choice of name as an indication of the professionalism of the author.
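
For convenience, the extra build dependencies listed above can be pulled in with a single apt command (Debian/Stretch; package names exactly as given in the post):

apt install zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev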

Building the software on my i7 system took 79 minutes which includes 76 minutes of CPU time (I didn’t use the -j option to make). At the end of the build it turned out that I had mistakenly failed to install the Fucking Package Management “gem” and it aborted. At this stage I gave up on Mesos; the pain involved exceeds my interest in trying it out.

How to do it Better

One of the aims of Free Software is that bugs are more likely to get solved if many people look at them. There aren’t many people who will devote 76 minutes of CPU time on a moderately fast system to investigate a single bug. To deal with this software should be prepared as components. An example of this is the SE Linux project which has 13 source modules in the latest release [3]. Of those 13 only 5 are really required. So anyone who wants to start on SE Linux from source (without considering a distribution like Debian or Fedora that has it packaged) can build the 5 most important ones. Also anyone who has an issue with SE Linux on their system can find the one source package that is relevant and study it with a short compile time. As an aside I’ve been working on SE Linux since long before it was split into so many separate source packages and know the code well, but I still find the separation convenient – I rarely need to work on more than a small subset of the code at one time.

The requirement of Java, Ruby, and Python to build Mesos could be partly due to language interfaces to call Mesos interfaces from Ruby and Python. One solution to that is to have the C libraries and header files to call Mesos and have separate packages that depend on those libraries and headers to provide the bindings for other languages. Another solution is to have autoconf detect that some languages aren’t installed and just not try to compile bindings for them (this is one of the purposes of autoconf).

The use of a tool like Fucking Package Management means that you don’t get help from experts in the various distributions in making better packages. When there is a FOSS project with a debian subdirectory that makes barely functional packages then you will be likely to have an experienced Debian Developer offer a patch to improve it (I’ve offered patches for such things on many occasions). When there is a FOSS project that uses a tool that is never used by Debian developers (or developers of Fedora and other distributions) then the only patches you will get will be from inexperienced people.

A software build process should not download anything from the Internet. The source archive should contain everything that is needed and there should be dependencies for external software. Any downloads from the Internet need to be protected from MITM attacks which means that a responsible software developer has to read through the build system and make sure that appropriate PGP signature checks etc are performed. It could be that the files that the Mesos build downloaded from the Apache site had appropriate PGP checks performed – but it would take me extra time and effort to verify this and I can’t distribute software without being sure of this. Also reproducible builds are one of the latest things we aim for in the Debian project, this means we can’t just download files from web sites because the next build might get a different version.

Finally the fpm (Fucking Package Management) tool is a Ruby Gem that has to be installed with the “gem install” command. Any time you specify a gem install command you should include the -v option to ensure that everyone is using the same version of that gem, otherwise there is no guarantee that people who follow your documentation will get the same results. Also a quick Google search didn’t indicate whether gem install checks PGP keys or verifies data integrity in other ways. If I’m going to compile software for other people to use I’m concerned about getting unexpected results with such things. A Google search indicates that Ruby people were worried about such things in 2013 but doesn’t indicate whether they solved the problem properly.
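
In concrete terms, pinning the gem version looks like this (the version number is only a placeholder to show the syntax, not a recommendation):

gem install fpm -v 1.9.3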

Related posts:

  1. SE Linux Status in Debian 2011-10 Debian/Unstable Development deb http://www.coker.com.au wheezy selinux The above APT sources.list...
  2. Debian Work and Upstream Steve Kemp writes about security issues with C programs [1]....
  3. Getting Started with Amazon EC2 The first thing you need to do to get started...
Categories: thinktime

Pia Waugh: RegTech – a primer for the uninitiated

Fri 28th Jul 2017 13:07

Whilst working at AUSTRAC I wrote a brief about RegTech which was quite helpful. I was given permission to blog the generically useful parts of it for general consumption. Thanks Leanne!

Overview – This brief is the most important thing you will read in planning transformation! Government can’t regulate in the way we have traditionally done. Traditional approaches are too small, too slow and too ineffective. We need to explore new ways to regulate and achieve the goal of a financial sector more resistant to abuse, leveraging data, automation, machine learning, technology and collaboration. We are here to help!

The key here is to put technology at the heart of the business strategy, rather than as simply an implementation mechanism. By embracing technology thinking, which means getting geeks into the strategy and policy rooms, we can build the foundation of a modern, responsive, agile, proactive and interactive regulator that can properly scale.

The automation of compliance with RegTech has the potential to overcome individual foibles and human error in a way that provides the quantum leap in culture and compliance that our regulators, customers, policy makers and the community are increasingly demanding… The Holy Grail is when we start to actually write regulation and legislation in code. Imagine the productivity gains and compliance savings of instantaneous certified compliance… We are now in one of the most exciting phases in the development of FinTech since the inception of e-banking.
– Treasurer Morrison, FinTech Australia Summit, Nov 2016

On the back of the FinTech boom, there is a growth in companies focused on “RegTech” solutions and services to merge technology and regulation/compliance needs for a more 21st century approach to the problem space. It is seen as a logical next step to the FinTech boom, given the high costs and complexity of regulation in the financial sector, but the implications for the broader regulatory sector are significant. The term only started being widely used in 2015. Other governments have started exploring this space, with the UK Government investing significantly.

Core themes of RegTech can be summarised as: data; automation; security; disruption; and enabling collaboration. There is also an overall drive towards everything being closer to real-time, with new data or information informing models, responses and risk in an ongoing self-adjusting fashion.

  • Data driven regulation – better monitoring, better use of available big and small data holdings to inform modelling and analysis (rather than always asking a human to give new information), assessment on the fly, shared data and modelling, trends and forecasting, data analytics for forward looking projections rather than just retrospective analysis, data driven risk and adaptive modelling, programmatic delivery of regulations (regulation as a platform).
  • Automation – reporting, compliance, risk modelling of transactions to determine what should be reported as “suspicious”, system to system registration and escalation, use of machine learning and AI, a more blended approach to work combining humans and machines.
  • Security – biometrics, customer checks, new approaches to KYC, digital identification and assurance, sharing of identity information for greater validation and integrity checking.
  • Disruptive technologies – blockchain, cloud, machine learning, APIs, cryptography, augmented reality and crypto-currencies just to start!
  • Enabling collaboration – for-profit regulation activities, regulation/compliance services and products built on the back of government rules/systems/data, access to distributed ledgers, distributed risk models and shared data/systems, broader private sector innovation on the back of regulator open data and systems.

Some useful references for the more curious:

Categories: thinktime
