Planet Linux Australia - http://planet.linux.org.au

OpenSTEM: Soldering: if it smells like chicken, you’re holding it wrong!

Wed 23rd Mar 2016 10:03

This T-shirt sums up soldering basics quite well. Funny too. But I hear you say, surely you don’t need to really explain that?

I’d agree, and in our experience with soldering with primary school students in classrooms, we’ve never had any such fuss.

However, in stock photography, we find the following “examples”…

This stock photo model (they appear in many other photos) is holding a hot air gun of a soldering rework station, by the metal part! If the station were turned on, there’d be third degree burns and a distinct nasty smell…

The open hard disk assembly near the front is also quite shiny…

As if one isn’t enough, here’s another stock photo sample, again held by the metal part:

On a practical level, it’s very unlikely you’d be dealing with a modern computer main board using a regular soldering iron, on the component side.

But what actually annoyed me most about this photo is something else: the original title goes something like “beautiful woman … soldering …”. Relevance? The other photo doesn’t say “hot spunk soldering”, and although that would be just as irrelevant, the fact is that in articles and photos of professional women, their appearance is more often than not made a key part of the description. That is just sexist garbage, bad journalism and bad copy-writing.

Which brings us to this final soldering stock photo sample. Just What The?

Female body selling soldering iron? Come on now. “Bad taste” doesn’t even remotely sum up the wrongness of it all.

Note: the low-res stock photo samples in this article are shown in a satirical fair-use context.


sthbrx - a POWER technical blog: Getting logs out of things

Tue 22nd Mar 2016 18:03

Here at OzLabs, we have an unfortunate habit of making our shiny Power computers very sad, which is a common problem in systems programming and kernel hacking. When this happens, we like having logs. In particular, we like to have the kernel log and the OPAL firmware log, which are, very surprisingly, rather helpful when debugging kernel and firmware issues.

Here's how to get them.

From userspace

You're lucky enough that your machine is still up, yay! As every Linux sysadmin knows, you can just grab the kernel log using dmesg.

As for the OPAL log: we can simply ask OPAL to tell us where its log is located in memory, copy it from there, and hand it over to userspace. In Linux, as per standard Unix conventions, we do this by exposing the log as a file, which can be found in /sys/firmware/opal/msglog.

Annoyingly, the msglog file reports itself as size 0 (I'm not sure exactly why, but I think it's due to limitations in sysfs), so if you try to copy the file with cp, you end up with just a blank file. However, you can read it with cat or less.
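For example, a quick way to capture and browse it (the output path here is just an illustration):

cat /sys/firmware/opal/msglog > /tmp/opal-msglog.txt
less /tmp/opal-msglog.txt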

From xmon

xmon is a really handy in-kernel debugger for PowerPC that allows you to do basic debugging over the console without hooking up a second machine to use with kgdb. On our development systems, we often configure xmon to automatically begin debugging whenever we hit an oops or panic (using xmon=on on the kernel command line, or the XMON_DEFAULT Kconfig option). It can also be manually triggered:

root@p86:~# echo x > /proc/sysrq-trigger
sysrq: SysRq : Entering xmon
cpu 0x7: Vector: 0  at [c000000fcd717a80]
    pc: c000000000085ad8: sysrq_handle_xmon+0x68/0x80
    lr: c000000000085ad8: sysrq_handle_xmon+0x68/0x80
    sp: c000000fcd717be0
   msr: 9000000000009033
  current = 0xc000000fcd689200
  paca    = 0xc00000000fe01c00   softe: 0   irq_happened: 0x01
    pid   = 7127, comm = bash
Linux version 4.5.0-ajd-11118-g968f3e3 (ajd@ka1) (gcc version 5.2.1 20150930 (GCC) ) #1 SMP Tue Mar 22 17:01:58 AEDT 2016
enter ? for help
7:mon>

From xmon, simply type dl to dump out the kernel log. If you'd like to page through the log rather than dump the entire thing at once, use #<n> to split it into groups of n lines.

Until recently, it wasn't as easy to extract the OPAL log without knowing magic offsets. A couple of months ago, I was debugging a nasty CAPI issue and got rather frustrated by this, so one day when I had a couple of hours free I refactored the existing sysfs interface and added the do command to xmon. These patches will be included from kernel 4.6-rc1 onwards.

When you're done, x will attempt to recover the machine and continue, zr will reboot, and zh will halt.

From the FSP

Sometimes, not even xmon will help you. In production environments, you're not generally going to start a debugger every time you have an incident. Additionally, a serious hardware error can cause a 'checkstop', which completely halts the system. (Thankfully, end users don't see this very often, but kernel developers, on the other hand...)

This is where the Flexible Service Processor, or FSP, comes in. The FSP is an IBM-developed baseboard management controller used on most IBM-branded Power Systems machines, and is responsible for a whole range of things, including monitoring system health. Among its many capabilities, the FSP can automatically take "system dumps" when fatal errors occur, capturing designated regions of memory for later debugging. System dumps can be configured and triggered via the FSP's web interface, which is beyond the scope of this post but is documented in IBM Power Systems user manuals.

How does the FSP know what to capture? As it turns out, skiboot (the firmware which implements OPAL) maintains a Memory Dump Store Table which tells the FSP which memory regions to dump. MDST updates are recorded in the OPAL log:

[2690088026,5] MDST: Max entries in MDST table : 256
[2690090666,5] MDST: Addr = 0x31000000 [size : 0x100000 bytes] added to MDST table.
[2690093767,5] MDST: Addr = 0x31100000 [size : 0x100000 bytes] added to MDST table.
[2750378890,5] MDST: Table updated.
[11199672771,5] MDST: Addr = 0x1fff772780 [size : 0x200000 bytes] added to MDST table.
[11215193760,5] MDST: Table updated.
[28031311971,5] MDST: Table updated.
[28411709421,5] MDST: Addr = 0x1fff830000 [size : 0x100000 bytes] added to MDST table.
[28417251110,5] MDST: Table updated.

In the above log, we see four entries: the skiboot/OPAL log, the hostboot runtime log, the petitboot Linux kernel log (which doesn't make it into the final dump) and the real Linux kernel log. skiboot obviously adds the OPAL and hostboot logs to the MDST early in boot, but it also exposes the OPAL_REGISTER_DUMP_REGION call which can be used by the operating system to register additional regions. Linux uses this to register the kernel log buffer. If you're a kernel developer, you could potentially use the OPAL call to register your own interesting bits of memory.

So, the MDST is all set up, we go about doing our business, and suddenly we checkstop. The FSP does its sysdump magic and a few minutes later it reboots the system. What now?

  • After we come back up, the FSP notifies OPAL that a new dump is available. Linux exposes the dump to userspace under /sys/firmware/opal/dump/.

  • ppc64-diag is a suite of utilities that assist in manipulating FSP dumps, including the opal_errd daemon. opal_errd monitors new dumps and saves them in /var/log/dump/ for later analysis.

  • opal-dump-parse (also in the ppc64-diag suite) can be used to extract the sections we care about from the dump:

    root@p86:/var/log/dump# opal-dump-parse -l SYSDUMP.842EA8A.00000001.20160322063051
    |---------------------------------------------------------|
    |ID              SECTION                               SIZE|
    |---------------------------------------------------------|
    |1               Opal-log                           1048576|
    |2               HostBoot-Runtime-log               1048576|
    |128             printk                             1048576|
    |---------------------------------------------------------|
    List completed
    root@p86:/var/log/dump# opal-dump-parse -s 1 SYSDUMP.842EA8A.00000001.20160322063051
    Captured log to file Opal-log.842EA8A.00000001.20160322063051
    root@p86:/var/log/dump# opal-dump-parse -s 2 SYSDUMP.842EA8A.00000001.20160322063051
    Captured log to file HostBoot-Runtime-log.842EA8A.00000001.20160322063051
    root@p86:/var/log/dump# opal-dump-parse -s 128 SYSDUMP.842EA8A.00000001.20160322063051
    Captured log to file printk.842EA8A.00000001.20160322063051

There are various other types of dumps and logs that I won't get into here. I'm probably obliged to say that if you're having problems out in the wild, you should probably contact your friendly local IBM Service Representative...

Acknowledgements

Thanks to Stewart Smith for pointing me in the right direction regarding FSP sysdumps and related tools.


sthbrx - a POWER technical blog: The Elegance of the Plaintext Patch

Tue 22nd Mar 2016 13:03

I've only been working on the Linux kernel for a few months. Before that, I worked with proprietary source control at work and common tools like GitHub at home. The concept of the mailing list seemed obtuse to me. If I noticed a problem with some program, I'd be willing to open an issue on GitHub but not to send an email to a mailing list. Who still uses those, anyway?

Starting out with the kernel meant I had to figure this email thing out. git format-patch and git send-email take most of the pain out of formatting and submitting a patch, which is nice. The patch files generated by format-patch open nicely in Emacs by default, showing all whitespace and letting you pick up any irregularities. send-email means you can send it to yourself or a friend first, finding anything that looks stupid before being exposed to the public.
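For example, a minimal round trip looks something like this (the recipient address is a placeholder; as above, send it to yourself first):

git format-patch -1 -o outgoing/
git send-email --to=me@example.com outgoing/*.patch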

And then what? You've sent an email. It gets sent to hundreds or thousands of people. Nowhere near that many will read it. Some might miss it due to their mail server going down, or the list flagging your post as spam, or requiring moderation. Some recipients will be bots that archive mail on the list, or publish information about the patch. If you haven't formatted it correctly, someone will let you know quickly. If your patch is important or controversial, you'll have all sorts of responses. If your patch is small or niche, you might not ever hear anything back.

I remember when I sent my first patch. I was talking to a former colleague who didn't understand the patch/mailing list workflow at all. I sent him a link to my patch on a mail archive. I explained it like a pull request - here's my code, you can find the responses. What's missing from a GitHub-esque pull request? We don't know what tests it passed. We don't know if it's been merged yet, or if the maintainer has looked at it. It takes a bit of digging around to find out who's commented on it. If it's part of a series, that's awkward to find out as well. What about revisions of a series? That's another pain point.

Luckily, these problems do have solutions. Patchwork, written by fellow OzLabs member Jeremy Kerr, changes the way we work with patches. Project maintainers rely on Patchwork instances, such as https://patchwork.ozlabs.org, for their day-to-day workflow: tagging reviewers, marking the status of patches, keeping track of tests, acks, reviews and comments in one place. Missing from this picture is support for series and revisions, which is a feature that's being developed by the freedesktop project. You can check out their changes in action here.

So, Patchwork helps patches and email catch up to what GitHub has in terms of ease of information. We're still missing testing and other hooks. What about review? What can we do with email, compared to GitHub and the like?

In my opinion, the biggest feature of email is the ease of review. Just reply inline and you're done. There's inline commenting on GitHub and GitLab, which works well but is a bit tacky: people commenting on the same thing overlap and conflict, and each comment generates a notification (which can be an email, until you turn that off). Plus, since it's email, it's really easy to bring additional people into the conversation as necessary. If there's a super lengthy technical discussion in the kernel, it might just take Linus to resolve it.

There are alternatives to just replying to email, too, such as Gerrit. Gerrit's pretty popular, and has a huge amount of features. I understand why people use it, though I'm not much of a fan. Reason being, it doesn't add to the email workflow, it replaces it. Plaintext email is supported on pretty much any device, with a bunch of different programs. From the goals of Patchwork: "patchwork should supplement mailing lists, not replace them".

Linus Torvalds famously explained why he prefers email over GitHub pull requests here, using this pull request from Ben Herrenschmidt as an example of why git's own pull request format is superior to that of GitHub. Damien Lespiau, who is working on the freedesktop Patchwork fork, outlines on his blog all the issues he has with mailing list workflows and why he thinks mailing lists are a relic of the past. His work on Patchwork has gone a long way towards fixing those problems; however, I don't think mailing lists are outdated and superseded, I think they are timeless. They are a technology-agnostic, simple and free system that will still be around if GitHub dies or alienates its community.

That said, there's still the case of the missing features. What about automated testing? What about developer feedback? What about making a maintainer's life easier? We've been working on improving these issues, and I'll outline how we're approaching them in a future post.


Binh Nguyen: Psychological Warfare/Mind Control, More Economic Warfare, and More

Tue 22nd Mar 2016 03:03
Before we start, a lot of the following seems absolutely crazy but there are reasons for it and there is a history behind it. Moreover, all of these programs have been publicly acknowledged or de-classified...One of the things I've been curious about is the interplay between broadcast media, some commonly distributed substances and their relation to population control as well as mind control. I

sthbrx - a POWER technical blog: No Network For You

Mon 21st Mar 2016 15:03

In POWER land IPMI is mostly known as the method to access the machine's console and start interacting with Petitboot. However it also has a plethora of other features, handily described in the 600ish page IPMI specification (which you can go read yourself).

One feature especially relevant to Petitboot, however, is the 'chassis bootdev' command, which you can use to tell Petitboot to ignore any existing boot order and only consider boot options of the type you specify (eg. 'network', 'disk', or 'setup' to not boot at all). Support for this has been in Petitboot for a while and should work on just about any machine you can get your hands on.
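For example, to restrict the next boot to network boot options (note that stock ipmitool spells the network option 'pxe'; the BMC address and credentials are placeholders):

ipmitool -I lanplus -H $yourbmc -U $user -P $pass chassis bootdev pxe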

Network Overrides

Over in OpenPOWER[1] land, however, someone took this idea and pushed it further - why not allow the network configuration to be overridden too? This isn't in the IPMI spec, but if you cast your gaze down to page 398, where the spec lays out the entire format of the IPMI request, there is a certain field named "OEM Parameters". This is an optional amount of space set aside for whatever you like, which in this case is going to be data describing an override of the network config.

This allows a user to tell Petitboot over IPMI to either:

  • Disable the network completely,
  • Set a particular interface to use DHCP, or
  • Set a particular interface to use a specific static configuration.

Any of these options will cause any existing network configurations to be ignored.

Building the Request

Since this is an OEM-specific command, your average ipmitool package isn't going to have a nice way of making this request, such as 'chassis bootdev network'. Rather you need to do something like this:

ipmitool -I lanplus -H $yourbmc -U $user -P $pass raw 0x00 0x08 0x61 0x80 0x21 0x70 0x62 0x21 0x00 0x01 0x06 0x04 0xf4 0x52 0x14 0xf3 0x01 0xdf 0x00 0x01 0x0a 0x3d 0xa1 0x42 0x10 0x0a 0x3d 0x2 0x1

Horrific, right? In the near future the Petitboot tree will include a helper program to format this request for you, but in the meantime (and for future reference), let's lay out how to put this together:

Specify the "chassis bootdev" command, field 96, data field 1: 0x00 0x08 0x61 0x80 Unique value that Petitboot recognises: 0x21 0x70 0x62 0x21 Version field (1) 0x00 0x01 .. .. Size of the hardware address (6): .. .. 0x06 .. Size of the IP address (IPv4/IPv6): .. .. .. 0x04 Hardware (MAC) address: 0xf4 0x52 0x14 0xf3 0x01 0xdf .. .. 'Ignore flag' and DHCP/Static flag (DHCP is 0) .. .. 0x00 0x01 (Below fields only required if setting a static IP) IP Address: 0x0a 0x3d 0xa1 0x42 Subnet Mask (eg, /16): 0x10 .. .. .. Gateway IP Address: .. 0x0a 0x3d 0x02 0x01

Clearing a network override is as simple as making a request empty aside from the header:

0x00 0x08 0x61 0x80 0x21 0x70 0x62 0x21 0x00 0x01 0x00 0x00

You can also read back the request over IPMI with this request:

0x00 0x09 0x61 0x00 0x00
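Both of these are passed to ipmitool raw in exactly the same way as the full request above, for example:

ipmitool -I lanplus -H $yourbmc -U $user -P $pass raw 0x00 0x08 0x61 0x80 0x21 0x70 0x62 0x21 0x00 0x01 0x00 0x00
ipmitool -I lanplus -H $yourbmc -U $user -P $pass raw 0x00 0x09 0x61 0x00 0x00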

That's it! Ideally this is something you would be scripting rather than bashing out on the keyboard - the main use case at the moment is as a way to force a machine to netboot against a known good source, rather than whatever may be available on its other interfaces.

[1] The reason this is only available on OpenPOWER machines at the moment is that support for the IPMI command itself depends on the BMC firmware, and non-OpenPOWER machines use an FSP which is a different platform.


Chris Smart: Providing git:// (protocol) access to repos using GitLab

Mon 21st Mar 2016 12:03

I mirror a bunch of open source projects in a local GitLab instance which works well.

However, by default, GitLab only provides https and ssh access to repositories, which can be a pain for continuous integration (especially if you were to use self-signed certificates).

However, it’s relatively easy to configure your GitLab server to run a git daemon and provide read-only access to anyone on any repos that you choose.

On my CentOS box, I just installed git-daemon and then edited the startup script at /usr/lib/systemd/system/git@.service like so:

[Unit]
Description=Git Repositories Server Daemon
Documentation=man:git-daemon(1)

[Service]
User=git
ExecStart=-/usr/libexec/git-core/git-daemon \
        --base-path=/var/opt/gitlab/git-data/repositories/ \
        --syslog --inetd --verbose
StandardInput=socket

The important part here is the base path /var/opt/gitlab/git-data/repositories/, which is specified at the default location that git repos are stored when using the GitLab omnibus package.

Now start and enable the service:

[root@gitlab ~]# systemctl start git.socket && systemctl enable git.socket

As per the git.service systemd file, you should now have git-daemon listening on port 9418, however you may need to open the port through the firewall:

[root@gitlab ~]# firewall-cmd --permanent --zone=public --add-port=9418/tcp

[root@gitlab ~]# systemctl reload firewalld

Now, to enable git:// access to any given repository, you need to touch a file called git-daemon-export-ok in that repo’s git dir (it should be owned by your gitlab user, which is probably git). For example, a mirror of the Linux kernel:



-sh-4.2$ touch /var/opt/gitlab/git-data/repositories/mirror/linux.git/git-daemon-export-ok

From your local machine, test your git:// access!



[12:15 chris ~]$ git ls-remote git://gitlab/mirror/linux.git |head -1

46e595a17dcf11404f713845ecb5b06b92a94e43 HEAD

Success!
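If you mirror a lot of repositories, a small (illustrative) loop run as the git user will export the whole mirror namespace in one go:

for repo in /var/opt/gitlab/git-data/repositories/mirror/*.git ; do
    touch "${repo}/git-daemon-export-ok"
done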


Chris Smart: Mirroring git repositories (to GitLab)

Mon 21st Mar 2016 12:03

There are several open source git repos that I mirror in order to provide speedy local access. Pushing those to a local GitLab server also means people can easily fork them and carry on.

On the GitLab server I have a local posix mrmirror user who also owns a group called mirror in GitLab (this user cannot be called “mirror” as the user and group would conflict in GitLab).

In mrmirror’s home directory there’s a ~/git/mirror directory which stores all the repos that I want to mirror. The mrmirror user also has a cronjob that runs every few hours to pull down any updates and push them to the appropriate project in the GitLab mirror group.

So for example, to mirror Linux, I first create a new project in the GitLab mirror group called linux (this would be accessed at something like https://gitlab/mirror/linux.git).

Then as the mrmirror user on GitLab I run a mirror clone:

[mrmirror@gitlab ~]$ cd ~/git/mirror

[mrmirror@gitlab mirror]$ git clone --mirror git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Then the script takes care of future updates and pushes them directly to GitLab via localhost:

#!/bin/bash

# Setup proxy for any https remotes
export http_proxy=http://proxy:3128
export https_proxy=http://proxy:3128

cd ~/git/mirror

for x in $(ls -d *.git) ; do
    pushd ${x}
    git remote prune origin
    git remote update -p
    git push --mirror git@localhost:mirror/${x}
    popd
done

echo $(date) > /tmp/git_mirror_update.timestamp

That’s managed by a simple cronjob that the mrmirror user has on the GitLab server:

[mrmirror@gitlab mirror]$ crontab -l

0 */4 * * * /usr/local/bin/git_mirror_update.sh
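To check the job by hand, you can run the script once and look at the timestamp file it leaves behind:

[mrmirror@gitlab ~]$ /usr/local/bin/git_mirror_update.sh
[mrmirror@gitlab ~]$ cat /tmp/git_mirror_update.timestamp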

And that seems to be working really well.


Tridge on UAVs: APM:Plane 3.5.1 released

Mon 21st Mar 2016 09:03

The ArduPilot development team is proud to announce the release of version 3.5.1 of APM:Plane. This is a minor release with primarily small changes.



The changes in this release are:

  • update uavcan to new protocol
  • always exit loiter in AUTO towards next waypoint
  • support more multicopter types in quadplane
  • added support for reverse thrust landings
  • added LAND_THR_SLEW parameter
  • added LAND_THEN_NEUTRL parameter
  • fixed reporting of armed state with safety switch
  • added optional arming check for minimum voltage
  • support motor test for quadplanes
  • added QLAND flight mode (quadplane land mode)
  • added TECS_LAND_SRC (land sink rate change)
  • use throttle slew in quadplane transition
  • added PID tuning for quadplane
  • improved text message queueing to ground stations
  • re-organisation of HAL_Linux bus API
  • improved NMEA parsing in GPS driver
  • changed TECS_LAND_SPDWGT default to -1
  • improved autoconfig of uBlox GPS driver
  • support a wider range of Lightware serial Lidars
  • improved non-GPS performance of EKF2
  • allow for indoor flight of quadplanes
  • improved compass fusion in EKF2
  • improved support for Pixracer board
  • improved NavIO2 support
  • added BATT_WATT_MAX parameter



The reverse thrust landing is particularly exciting as that adds a whole new range of possibilities for landing in restricted areas. Many thanks to Tom for the great work on getting this done.



The uavcan change to the new protocol has been a long time coming, and I'd like to thank Holger for both his great work on this and his patience given how long it has taken to be in a release. This adds support for automatic canbus node assignment which makes setup much easier, and also supports the latest versions of the Zubax canbus GPS.



My apologies if your favourite feature didn't make it into this release! There are a lot more changes pending but we needed to call a halt for the release eventually. This release has had a lot of flight testing and I'm confident it will be a great release.



Happy flying!




Lev Lafayette: The constellation is changed, the disposition is the same

Sat 19th Mar 2016 23:03

Ars Technica has reported on a relatively small GPU-Linux cluster which can crack standard eight-character MS-Windows passwords by brute force in under six hours. There are, of course, reasons and caveats. Firstly, as online servers will typically block repeated password attempts, this system is most effective against offline password hashes, which can then of course be used for online exploits.


Leon Brooks

Sat 19th Mar 2016 22:03
“What do you actually need it to do?” That one question can simplify a process so very much.



The outcome as a whole can become simpler, when features which are not necessary for what it actually needs to do are discarded.



The outcome can become cheaper, as fewer resources are required to perform fewer functions.



The outcome can become untangled from ‘political’ factors such as who might have a vested interest in things happening a certain way, or who might expect to derive consequential benefits of various kinds.



The outcome can arrive sooner, as less needs to be done — in simpler ways — with fewer dependencies — to make it happen.



The final result is likely to be more flexible, as it is less burdened by specific (unnecessary) features — and so by implied limitations — than a poorly-targeted or very generalised solution.



For a very simplistic result, rather than buy a new PC, this 3rd-hand desktop box over here, with this video card plugged into it, plus these two 3rd-hand screens, this mouse, this keyboard, these two hard-disk drives (all free) and this Linux distribution will do absolutely everything required to source (and edit) words and images to reliably make a newsletter at regular intervals.



It will also do other things (flexibility as a kind of a bonus), however by staying true to purpose it does not need expensive hardware, expensive software, a virus scanner, constant maintenance, or any one of a dozen other complex and/or pricey components to continue operating indefinitely.



As another bonus, some computer hardware which would otherwise have become partly scrap metal but mostly land-fill continues to provide utility without any additional input in terms of energy, finance or transport.


Russell Coker: Ethernet Interface Naming With Systemd

Sat 19th Mar 2016 14:03

Systemd has a new way of specifying names for Ethernet interfaces as documented in systemd.link(5). The Debian package should keep working with the old 70-persistent-net.rules file, but I had a problem with this that forced me to learn about systemd.link(5).

Below is a little shell script I wrote to convert a basic 70-persistent-net.rules (that only matches on MAC address) to systemd.link files.

#!/bin/bash

RULES=/etc/udev/rules.d/70-persistent-net.rules

for n in $(grep ^SUB $RULES|sed -e s/^.*NAME..// -e s/.$//) ; do
  NAME=/etc/systemd/network/10-$n.link
  LINE=$(grep $n $RULES)
  MAC=$(echo $LINE|sed -e s/^.*address....// -e s/...ATTR.*$//)
  echo "[Match]" > $NAME
  echo "MACAddress=$MAC" >> $NAME
  echo "[Link]" >> $NAME
  echo "Name=$n" >> $NAME
done
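For a rules entry naming eth0 with a hypothetical MAC address of 00:11:22:33:44:55, the script writes /etc/systemd/network/10-eth0.link containing:

[Match]
MACAddress=00:11:22:33:44:55
[Link]
Name=eth0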


David Rowe: Making my 32kHz Crystal Oscillator Actually Oscillate

Sat 19th Mar 2016 11:03

For Project Whack a Mole I need a 32.768kHz crystal oscillator. I found this circuit on the Interwebs and gave it a try:

It wouldn’t go. I messed about changing component values for a while, then decided to actually try to understand the circuit. Now, for an oscillator to work, we need an amplifier with a gain greater than 1, and a phase shift of 360 degrees to get positive feedback.

The circuit above is an amplifier, with the crystal network connected between the collector output and base input. We get half of the 360 degree phase shift by using a common emitter topology, which is an inverting amplifier. So the crystal network must provide the other 180 degrees. On a good day. If it’s working.

First problem – the transistor was saturated, with Vc stuck near 0V. For an oscillator to start, noise gets amplified, filtered by the crystal, amplified again, and so on. I reasoned that if the amplifier wasn’t biased to be linear, the oscillations couldn’t build up. So I reduced the collector resistor to 6.8k and changed the base bias resistor to 1.8M to get the collector voltage into a linear region. Now we have Vc=3.2V with a 5V supply.

But it still wouldn’t go. On a whim I adjusted the supply voltage up and then down and found it would start with a supply voltage beneath 3V, but not any higher. Huh?

Much fiddling with pencil and paper followed. Time for a LT Spice simulation of the “AC model” of the circuit:

I’ve “opened the loop”, to model the collector driving the crystal network which then drives the base impedance. On the left is a voltage source and 6.8k resistor that represents the collector driving the 330k resistor and an equivalent model of the crystal.

The values Lm, Cm, Rm, are the “motional” parameters. They are what the mechanical properties of the crystal look like to this circuit. The values are amazing, unrealizable if you are used to regular electronic parts. I found Cm = 1fF (1E-15 Farads, or 0.001 pF) in a 32kHz crystal data sheet, then solved f=1/(2*pi*sqrt(LC)) for Lm to get the remarkable value of 24,000 Henrys. Wow.
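Rearranging that formula for Lm with f = 32768 Hz and Cm = 1 fF gives roughly:

L_m = \frac{1}{(2\pi f)^2 C_m} = \frac{1}{(2\pi \times 32768)^2 \times 10^{-15}} \approx 2.4 \times 10^{4}\ \mathrm{H}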

Phase Shift

With Vcc=5V, we have Vc=3.2V, so a collector current Ic = (5-3.2)/6800 = 0.265mA. I’m using a small signal transistor model with the emitter resistance re=26/Ic = 26/0.265 = 100 ohms. The effective impedance looking into the base rb=beta*re = 100*100 = 10k ohms (2N3904’s have a minimum beta of 100).

OK, so here is the phase response near 32kHz:

Well it looks about right, a phase shift of 170 degrees, which is close to the target of 180 degrees.

Now, can we explain why the oscillator starts with a reduced supply voltage? Well, reducing Vcc would reduce Ic and hence increase rb, the base impedance the crystal network is driving. So let's double rb to 20k and see what happens to the phase:

It gets closer to 180 degrees! Wow, that means it is more likely to oscillate. Just like the actual circuit.

So – can I induce it to oscillate on a 5V supply? Setting rb back to 10k, I messed about with C1 and C2. Increasing them to 82pF moved the phase shift to just on 180 degrees. I soldered 82pF capacitors into the circuit and it started on a 5V rail. Yayyyyy. Go Spice simulations.

Loop Gain

But what about the loop gain? Well here is the magnitude plot near 32kHz:

The maximum gain is -22dB at series resonance, followed by a minimum gain at parallel resonance. We need a net gain around the loop of 1 or 0dB. So the gain of the amplifier must be at least +22dB to get a net gain of 0dB around the loop.

The gain of the common emitter amplifier is Rc/re = 20*log10(6800/100) = 36dB. So we have enough gain. At the reduced supply voltage, let's say Ic is halved, so re doubles. This would reduce the loop gain to 30dB. However, rb=beta*re would also double to 20k. Spice tells me the maximum gain of the crystal network is now -16dB, as rb=20k loads the circuit less. So there is still plenty of margin for oscillation – which is what happens in the real hardware.

Increasing C1 and C2 to 82pF produced a crystal network gain of -24dB. With a 5V supply the amplifier gain is 36dB, so we have enough loop gain to make this puppy oscillate. Which it does, eventually. It takes about 10 seconds for the oscillations on the collector to hit the supply rails. From some reading I understand a slow start, in the order of seconds, is common for these oscillators.

Matt, VK5ZM, suggested the function of the 330k resistor is to limit the power through the crystal. These tiny crystals are rated at just 1uW maximum power. With 1Vrms AC drive, Spice measured a current of 7.2uA through the crystal series resistance Rm=35k at the resonant frequency, which is a power of 35E3*(7.2E-6)^2 = 1.8uW. Oops, a bit much. However I think increasing the 330k resistor might reduce the loop gain. And I have a big bag of spare crystals.

Matt also pointed out there are some parasitic capacitances from the transistor that could be included in the model.

Here is the final circuit, that works on 5V:

Links

Open Loop LT Spice simulation of the crystal oscillator network.


Glen Turner: Raspberry Pi 3 performance, power and heat

Fri 18th Mar 2016 17:03

When you order a Raspberry Pi 3 then do yourself a favour and also order the matching 5.1VDC 2.5A power supply (eg: STONTRONICS T5875DV, Element 14 item 2520785). The RPi3 is four cores of 64-bit ARM with an impressive GPU -- that's a lot to power. If you present it with too little power the circuitry will make the red "power" LED blink and the software will reduce the CPU's clock rate.

You'll notice the clever use of tolerances to allow the RPi3 power supply to charge a phone, as you might expect from its Micro USB connector (5.0V + 10% = 5.5V, 5.1V + 5% ≅ 5.4V). The cable on the RPi3 power supply has an impressive amount of copper, so they are serious about avoiding voltage drop due to thin cables.

You can argue that this is poor design, that the RPi should really use one of the higher power delivery solutions designed for mobile phones. But with Google, Apple and Samsung all choosing different solutions, whatever the RPi's designers chose to do, most purchasers would have to buy the matching power supply. At least this design makes it simple for makers and hobbyists to power the RPi3 (simply provide the specified voltage and current; no USB signalling is needed).

The RPi3 will also slow down when it gets too hot; this is called throttling and is a feature of all modern CPUs. People are currently experimenting with heat sinks. Even a traditional aluminium 10mm heat sink seems to make a worthwhile difference in preventing throttling on CPU-intensive tasks; although how often such tasks occur in practice is another question. The newer ceramic heat sinks are about four times more effective than the classic black aluminium heat sinks, so keep your eyes out for someone offering a kit of those for the RPi3. This is a further complication when looking at cases, as the airflow through most RPi2 cases is quite poor. I've simply taken a drill to the plastic RPi2 case I am using, although there are ugly industrial cases and expensive attractive cases with good airflow.
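If you want to see whether your Pi is actually throttling, the vcgencmd tool shipped with Raspbian reports the temperature, the current ARM clock and a throttling/under-voltage bitmask (throttled=0x0 means neither has been detected):

vcgencmd measure_temp
vcgencmd measure_clock arm
vcgencmd get_throttled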

Further reading: Raspberry Pi 3 Cooling / Heat Sink Ideas, Pi3B thermal throttling.


Peter Lieverdink: 2014 DrupalCon Austin Road Trip

Thu 17th Mar 2016 18:03

It's been nearly two years since I blogged a DrupalCon road trip, so it's high time for another one. Not that there weren't other DrupalCons in between, but the trip to Sydney was rather short. And although we did drive a fair bit before Portland, we ended up where we started, so it was not really a road trip. To get to Prague we took a train, so that was out too. Thus, DrupalCon Austin.

Prior to moving to Boston, @beejeebus house-sat for us for a few weeks whilst @kattekrab and I were in Prague. Courtesy of the US government shut-down, his visa was delayed by several months, so he ended up staying for a fair bit longer and introduced us to a scary food show called Diners, Drive-ins and Dives.

We also did a few short road trips around Victoria to show him how pretty it was, and so was born a plan to road trip from Los Angeles to Austin, eating ourselves silly along the way.

To get to Austin from Melbourne, you would usually fly to LAX and then onward for several more hours. Since LAX domestic is pretty shithouse (er, not a particularly nice airport to hang out at), we decided we would like to escape it.

To accomplish this, we planned to meet up with beejeebus who would be flying in from Portland. We would then pick up our rental car and leave Los Angeles as soon as possible.

Unfortunately beejeebus missed his flight, so we were stuck at LAX for three hours anyway, waiting for him. Aaargh! *twitch* *twitch*

Las Vegas

Once you leave Los Angeles, there is mostly desert with the odd strip-mall town, which is convenient for lunch. Inspired by The Big Lebowski and lacking DD&D venues, we decided on an in-n-out burger, which actually turned out to be pretty tasty.

After our lunch stop we hit some proper desert highway, as witnessed by the turn-off to Death Valley and the largest solar thermal power plant in the world.

Apart from that, the only thing to see was desolate landscape (but pretty) and giant bill-boards for casinos and jesus (less pretty). You can spot the Nevada state line from miles away - there's a giant casino built right on it.

Las Vegas was ... interesting. A modern shiny city in the middle of nowhere and purely there to provide gambling. We drove up the strip, eyeballing all the famous casinos and the people moving between them. This was the start of the Memorial Day long weekend in the US, and Vegas was positively overflowing with The Wrong Kind Of People™.

Horrified, we stopped. After a quick chat where we all admitted to being quite happy to be somewhere else, we decided to give Las Vegas a miss and push on to our next stop instead.

The landscape changes pretty much immediately south-east of Las Vegas - that's where the Hoover Dam is and the Grand Canyon sort of ends. We didn't stop to look at the dam, as the sun was getting low and we had a lot of driving to do, but we got glimpses of the Colorado River as we drove along.

There are a lot of odd conglomerations of shacks and trailers a few miles off the highway for hundreds of miles either side of Kingman. We weren't sure what they were, but were later told these may be survivalists, living in the middle of nowhere. There are a LOT of them though, so they must like neighbours.

They also all have a flagpole with an American flag out the front, which led to our first imaginary (because driving) drinking game of the trip. Whenever you see a flag out the front of a house, drink.

As a side-note, this flag-in-front-of-house also happens in Australia and I don't understand it here either. Do people get confused about what the national flag looks like so they need constant reminders?

Every other "town" in this area offered the opportunity to shoot an AK-47 for $35 or had a casino. Weird. We drove ever onward to Flagstaff and then to Sedona.

A bush/forest fire had broken out between Sedona and Flagstaff a few days earlier and our hosts had bought an RV and were camping in the next town over to escape the smoke. As Australians, we decided we were used to fire smoke and gratefully took up residence in the accommodation offered. (Thanks Megan & Scott!!!) Kattekrab and I had stayed here for a few days last year as well.

Jerome, Prescott and Camp Verde

Just like a year earlier, we planned to use Sedona as our base for daytrips to local(-ish) attractions. Unfortunately, Oak Creek Canyon was out of bounds, as that was on fire.

Our hosts suggested we do a little drive into the hills in order to escape the smoke. First to the old mining town of Jerome. This is a very cute little town, with some interesting looking shops and bars. It also boasts the deepest mine shaft on the continent.

Being up in the hills, Jerome offered a rather dramatic view of the Slide Fires north of Sedona. This being the holiday weekend, downtown was overrun by tourists, so we decided to drive over the top of Mingus Mountain and on to Prescott, where we would either visit a whisky bar or press on to Camp Verde to see some native american cliff dwellings and a deep well.

It turns out that to one side of Prescott, there's a beautiful granite formation and we passed through this on our way into town. 

Due to jetlag, we left Sedona quite late and thus got to Prescott pretty late as well. We decided to give the whisky bar a miss in favour of some more cultural pursuits and after buying some insanely hot corn chips to snack on (try to make beejeebus cry by whispering "tacky fuego" to him) we set off to Camp Verde. Sadly, we left it too late and we didn't arrive until just after 5pm. We got a wave from the ranger who had obviously just closed the gate :-(

So we decided to head back to Sedona for an early night. On the way, we stopped at Bell Rock just south of Sedona for a leg-stretch and some pretty amazing scenery.

Though beejeebus really wanted to see a snake, we didn't this time around. What we did see was a rather impressive smoke column to the north...

Grand Canyon - Desert View

About a two hour drive north of Sedona is the Grand Canyon south rim. Kattekrab and I had been to the village part last year, so this time, with agreement from beejeebus, we decided to drive a bit further and go to Desert View instead. This turns out to have been a good decision, as we got to see a bunch of wildlife along (and on) the road - mostly elk.

Desert View itself is a rather pretty look-out over the widening eastern end of the Grand Canyon, with mesas in the distance.

An artist has built an olde-looking tower which offers beautiful views of the Canyon, if you can get past the people taking selfies on the narrow staircase.

After Desert View, we backtracked a little bit to park the car and go for a walk along the south rim towards the South Kaibab trail head, which is very pretty indeed!

Unfortunately, the trail head itself is extremely unsuitable for people with vertigo. Walking along the start of the trail turns out to be pretty tricky when you aren't able to move your legs.

We got our second encounter with local wildlife at the trail head too. A squirrel heard me unwrap a muesli bar and clearly heard that sound before, as it made straight for me.

It is my policy to never feed plague-squirrels though, so it missed out.

On the way back to Sedona, we hit our first DD&D venue: Salsa Brava. It's a rather unassuming looking place along the highway, but it's packed inside. We had no booking, but luckily we didn't have too long to wait for a table. After ordering some delicious margaritas, I decided to order "What Guy had" - a navajo fry-bread pulled pork taco. And omg, it was delicious! +10 would go again :-)

Sedona

Due to a wind change, the smoke had mostly cleared for our final day in Sedona and after waiting out the hottest part of the day we decided to climb Cathedral Rock. I'd been up this rock the year before, but kattekrab injured herself and wasn't able to climb the steep middle section that time. It's absolutely stunning though, so happily she got a second chance.

I may have gone slightly overboard with the photos of Cathedral, but it really is very pretty and offers views of the yellow and red mountains all around. We climbed in the afternoon, so got sunset colours at the top and on our way back down.

We finished a rather lovely day with dinner with our hosts at an incongruous german restaurant in Cornville followed by a drink at the local pub, where a scary local discovered we "we'en't from 'round here" ;-)

Winslow (Barringer Crater)

One of the things that has been on my bucket list for a long time is Barringer Crater (probably better known as Meteor Crater). We didn't get to see it last year because we ran out of time. Happily, this year our next stop after Sedona was Albuquerque and the crater is on the way!

Not far out of Flagstaff the landscape changes from pine forest into shrubby desert and becomes pretty flat. That means you can see the crater rim from a long way off and appreciate how huge it is. The rim might not be tall, but it is very very wide!

The crater is privately owned and the owners have built a small museum on the north rim. I ran straight through it for a view of the second largest hole in the ground I'd see on this trip.

There is a bit of a hill on one corner and from there you get a good sense of the crater and the flat desert landscape beyond.

At the bottom of the crater are the remains of a few drilling attempts (to find the meteorite) and a life-size model of an astronaut.

If the stomping tourists disappear and you just hang out for a while, the locals will come out to say hello :-)

The museum is a bit daggy, but I suppose that was to be expected. Not all visitors are obsessed or would know the history of the crater. I did get a t-shirt though, and various delicious cactus-flavoured margarita ingredients!

The crater is most impressive, but like the Grand Canyon it's so big that it's just impossible to get a good sense of its size when you're standing on the edge.

Albuquerque

Our planned stop for that evening was Albuquerque and kattekrab had enquired about local Drupalistas. @teampoop replied that there was a local tech meetup that evening, so we drove along route 66 all day to get there in time (which we failed to do, as there was apparently a 1 hour time difference between Arizona and New Mexico)

Route 66 was a bit sad, as it turned out. Apart from a bunch of places clustered around very blingy casinos, most towns are now ghost towns or fast on the way to becoming ghost towns.

We were told later that the combination of cheap flights and the raising of the speed limit from 55 to 75mph means that people no longer need to stop overnight at a small town, they can drive from major city to major city in a day. As we did, I suppose. Still, no diner lunch along the way for us :-(  Very pretty country-side though in that part of New Mexico, straight out of cowboy movies.

We didn't have any accommodation booked for Albuquerque, but the Rio Grande had rooms with hot showers and is very nice indeed!

After a quick freshening-up, we headed into town to meet the friendly locals for a snack and drink at beer.js. We didn't talk much about JavaScript, but we did meet @teampoop and @helennoat for a fun evening. Just before we left, we got a breakfast recommendation for the next morning.

And an excellent recommendation it was - Frontier Restaurant. It hasn't been on DD&D, but maybe it should be. The huevos rancheros set me right for the rest of a long morning of driving.

Roswell

For no other reason than to be able to say I've been to Roswell, New Mexico, I decided we had to go to Roswell, New Mexico. So we did. 

Roswell is a pretty normal small town and apart from the UFO museum (we passed) and the odd green alien on a bill-board it looks pretty normal. We went on a bit of a wild goose-chase looking for the UFO crash site (which someone had helpfully added to FourSquare) but that turned out to be the stadium out the back of some religious compound.

It wasn't a total bust though, on the way to the supposed crash site we passed the abandoned old Roswell airport terminal, which was due to be demolished shortly and it made for a lovely photo opportunity :-)

After that detour, we had philly cheesesteaks for lunch at Big D's Downtown Dive (delicious!) before hitting the road again.

We decided to get as close to Austin as we could that day, leaving a relatively small amount of driving for the final day of our trip.

Leaving Roswell, we could tell we were getting closer to Texas, as the oil pumps started to get pretty thick on the ground. As not many people live out that way, the pumps replaced the flag poles for our drinking game.

There wasn't much to see or do along the way, apart from taking a mocking vanity photo for @texas as the state line, so we mostly just drove, until we ran very low on fuel at Lamesa. Luckily, Lamesa had a service station. Unluckily, it took us four attempts plus the help of a puzzled local to get the fuel pump going :-)

We finished the day's drive at San Angelo, only a few hours from Austin.

San Angelo

What can I say about San Angelo? It has a road.

Actually, it has a road with a classic 60s diner that hasn't changed since the 60s, and that's where we had a disgusting (er, delicious) diner breakfast, served by none other than Roxie.

After breakfast we eventually drove down from the desert highlands, past a lot more oil pumps (Drink!). I noticed that enterprising oil barons have started planting wind turbines on their oil fields!

As we neared Austin, the vegetation got a lot lusher and greener, which was pretty nice after a week of driving through desert. In the end, we did around 2100 miles in six days. Pretty impressive :-)

Austin

I decided to give the actual Drupal conference a miss, but do a bit of work in the coders lounge/sprint rooms during the day and catch up with people in the evenings.

That gave me a chance to explore Austin a bit in the mornings and eat my way around as required. 

The river that runs through the city, also named Colorado, but not the same as the one that goes through the Grand Canyon (actually, this one seems to start at Lamesa where we had fuel troubles) has walking paths along it, and I spent most mornings wandering around there, looking at turtles and graffiti.

A new walking/cycling track that actually sits on the river was under construction across from the hotel and I managed to sneak past the construction workers for a sneak preview on Friday :-)

As for the other part, eat I did. A lot of BBQ, at Lamberts, Moonshine and Ironworks. I also finally got the opportunity to sample the famous "chicken and waffles" that @stephelhajj had kept talking about, at Diner 24.

And it was delicious, with maple syrup and tabasco - breakfast of champions! And lunch and dinner too!

Other stand-outs were the deliciously disgusting chilli cheese waffle fries and the notorious p.i.g (a hot-dog with mac & cheese on it) at Franks.

I managed to get beejeebus to one more DD&D venue as well, in Austin. We had very tasty burgers (twice!) at Casino El Camino, which turned out to be just behind the conference center.

Now, back to planning the next road trip!

Tags: drupalcon, road trip, rocks, space

Peter Lieverdink: DrupalCon Austin - Drupal Trivia Scores

Thu 17th Mar 2016 18:03
Many of the DrupalCon Trivia teams appear to have had rivalries (or actual feuds) with teams on tables nearby, as I've had a lot of people ask where they finished in relation to another team.

To help you to find out where your team finished, here is the final score table.

[Final score table: columns are Rank, Team Name, Bonus, rounds ONE to SIX, Tiebreak and Score; 65 teams are listed, with data-steak='cactus' first and The Drop Bears second.]

One bonus point was assigned per DrupalCon newbie on each team. The judges also arbitrarily awarded bonuses for amusing answers, pretty artwork or headless Drupalize.me pony stickers.

Tags: drupal, drupalcon

Peter Lieverdink: Add images to Drupal from your mobile device

Thu 17th Mar 2016 18:03

You can add images to Drupal, but mobile devices don't allow you to upload any photos to image fields. This is something that sort of irked me from time to time in the past, but recently came up for a website project, so I thought it would be good to see if it could be worked around.

HTML5 allows for this, but sadly that's mostly a no go with Drupal 7 at the moment. However, it turns out the fix is nice and easy via the HTML Media Capture method. Add the following snippet of jQuery, so it runs when pages load:

$('div.image-widget-data input[type="file"]').each(function(idx, item) {
  $(item).attr('accept', 'image/*;capture=camera');
});

That will look for <input type="file"> items inside image widgets and add the "accept" attribute. This then tells mobile devices they can upload image data and are allowed to grab it from their on-board camera.

To make that a spot simpler on Drupal, you can grab the Image Mobile Camera module from its sandbox on Drupal.org.

Now hit your blog via your tablet or phone (using a browser that supports this - Chrome is fine) and start uploading photos :-)

Tags: drupal, image, mobile, html5

Peter Lieverdink: Operation Hubble

Thu 17th Mar 2016 18:03

I got a telescope a few years back and though it works well for looking through with human eyes, it's been close to impossible to use with a digital SLR camera mounted at the eyepiece. The problem is that the camera body can't move close enough to the tube to obtain focus on objects further away than about 20 metres. Of course, that's not very useful for a telescope (unless you're into bird-watching).

The camera can be made to focus with the addition of a barlow lens, but the only one of those I have magnifies by a factor of two and adds some blurring, so that's not really an ideal solution either. What I really want is to put the camera at prime focus using only the primary and secondary mirror.

On one of my bi-annual google searches for a solution I stumbled across the suggestion of a Hubble style operation to mount the telescope's primary mirror a bit closer to the secondary mirror, making the focal plane move a bit further away from the tube. However, the original post is rather low on detail.

From the images added to the original post, it looked like the poster had used book binding screws known as "chicago screws" or "sex bolts" (not to be confused with Andrew) to replace the thumb screws on the end of the telescope, to give the mirror assembly more inward travel, and longer springs to prevent vibration.

Components

I found chicago screws at a craft store, but they turned out to be a bit short and made it nigh impossible to collimate the mirror, as they gave close to no purchase (compared to the thumb screws).

On my quest to find a matching longer screw head for the chicago screws, I ended up at a hardware store where one of the clerks actually found some used 5mm × 45mm machine screws that appeared perfect for my needs, but unfortunately he couldn't find any springs to match.

A bit more googling on the tram home though, led me to the RS Components website, which lists a plethora of varied size and strength springs. Including one that appears to fit :-) I ordered a set and a few days later I had everything I needed for my Hubble style telescope surgery. 

Disassembly

To remove the primary mirror assembly, unscrew the small black screws from the bottom of the telescope (you can stand it on its front for this) and carefully lift the entire assembly out of the tube. Unfortunately, the screws that keep the assembly attached to the backing plate are half obscured by the primary mirror, so that will need to be removed too. You need a small screw driver to carefully undo the six screws that keep the mirror in place. Carefully lift the mirror off, put it in a safe place and cover it to keep dust off.

Remove the thumb screws and the backing plate, then turn the assembly over. You can now remove the screws that attach the mirror assembly to the backing plate and replace them with your longer ones.

Provided you got the correct springs (mine are 11mm diameter, 56mm long, 1mm piano wire), they should fit perfectly and push the mirror away from the backing plate with a fair bit of force. Add the backing plate and put the thumb screws back on. You may need to compress the springs quite firmly to accomplish this.

When that's done, all that remains is to reinstall the mirror and gently reinsert the whole assembly back into the tube.

Insert error

When I performed this last step I found that it was close to impossible to insert the mirror back into the tube. On closer inspection, some scratches on the mirror assembly implied it was catching on the small screws that keep the end cap in place. These would appear to be just the slightest bit too long. D'oh!

I wasn't about to go off again and find some more screws, so instead I simply reversed these. The screw head is now on the inside of the tube and the hex nut is on the outside, allowing the mirror assembly free travel. If you do this, just be sure to not have any clothing catch on that screw when you're out in the field.

I've not yet had the time to properly try this new setup to see if it makes any difference to focusing, fingers crossed!

Tags: telescope, hubble, mirror, DIY
Categories: thinktime

Peter Lieverdink: Sponsorship Success Metric

Thu 17th Mar 2016 18:03

Just after DrupalCon Sydney at the start of February this year, I overheard some people wondering why they should sponsor a DrupalCon. Considering the people who attend, there's not a lot of product selling you can do if you're a Drupal shop, and unless you're looking to hire delegates as new staff, there's not a lot of direct benefit from having a sponsor booth or table.

Obviously, helping to fund a DrupalCon and the Drupal Association via a sponsorship is a good thing to be doing for the community, but the payoff isn't necessarily immediately apparent. However, there definitely is one. There just hasn't been a metric for it, let alone a testable one.

Most Drupal development is done by people in disparate locations around the world. They communicate via IRC, email and the issue queue, but don't necessarily meet face to face.

We all know that face-to-face communication is far more efficient. Intonation, facial expressions and body language all make it much harder to misunderstand each other. Additionally, meeting face to face allows for social non-code interactions such as having breakfast, lunch or dinner, parties or just "hanging out", which all help build the team.

With better team spirit and less misunderstanding between team members, I propose that developers have fewer arguments (hissy-fits, if you will) the more often they meet face to face. And the more sponsorship there is, the easier it is to run events where people can meet face to face and have a good time.

My sponsorship metric then would be a decrease in hissy-fits per core release.

Since it's nice to measure something and have a larger number be better, I think a little bit of elementary algebra would give us the inverse release hissy-fit (RH⁻¹).
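
Spelled out as a (tongue-in-cheek) formula, with entirely made-up numbers since nobody has actually measured any of this yet:

# Sketch of the proposed metric. RH is hissy-fits per core release; the
# sponsorship metric is its inverse, so a bigger number means a happier,
# presumably better-funded, community. The figures below are invented.

def inverse_release_hissy_fit(core_releases, hissy_fits):
    """Return RH^-1: core releases per hissy-fit."""
    if hissy_fits == 0:
        return float("inf")  # utopia; sponsor more events anyway
    return core_releases / hissy_fits

# Hypothetical: 2 core releases and 7 recorded hissy-fits in a given period.
print(inverse_release_hissy_fit(2, 7))  # ~0.29 releases per hissy-fit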

Now there are only three things left to do: find a common name for the unit, give it a symbol, and actually measure it over time :-)

Image by-nc-sa by Vaughan.

Tags: drupal, drupalcon, sponsorship
Categories: thinktime

Peter Lieverdink: Accidental Space Tourist - SocialSpaceWA

Thu 17th Mar 2016 18:03

Like many people, I love the beautiful images we receive from space telescopes and spacecraft that orbit other worlds in the solar system. Also like many other people, I expect, I never really stop to think about how we get those images, just assuming they get sent to Earth via some magic space internet.

However, there is no internet (magic or otherwise, yet) in space and getting the data to create these pretty images (and to do science) is rather involved.

Quite by accident I got a chance to learn a lot more about that process.

SocialSpaceWA

Whilst not working, I stumbled across a retweet by the European Space Agency, asking for people to apply to visit their deep space tracking station in New Norcia, Western Australia (NNO) as part of their SocialSpace programme. I didn't really have anything on, qualified by way of having an ESA member nation passport, and don't live more than 16 hours' flying away, so I thought "why not?".

Why not indeed. I applied a few days before the closing date and only a week later I got the happy news I'd been selected to attend. I immediately grabbed some return tickets to Perth and then started fretting about doing this thing with 15 total strangers. Eep!



Time-lapse of land-fall over southern WA, after crossing the Great Australian Bight.

Of course, fretting was totally unwarranted. ESA had organised a bus to drive us all from Perth to New Norcia, and a bunch of delegates arranged to meet up with Daniel (the ESA chef de mission) before heading to the bus pick-up. My fellow delegates were all space geeks too and we all got on really well (especially once Daniel started handing out ESA swag :-)

The trip to New Norcia was in a lovely air-conditioned bus, which made coping with the heat wave rather easy.

Introductions

As an ice-breaker, we all shared a group dinner that evening at the New Norcia hotel. After a round of 140-character introductions, we split into groups and each group was joined by an ESA engineer, who talked a little about who they were and the work they did on the site.

After dinner, John Goldsmith gave a talk about astrophotography and the sights of the night sky in preparation for an observing session with some people from the Perth Observatory, who'd driven up with cars full of (rather lovely) telescopes. Sadly I missed the talk because I was volunteered to help out with the telescopes. On the up-side, that resulted in my first TV appearance ever on Channel 10 in Perth.

The seeing was excellent (New Norcia has proper dark skies) so it ended up being a fairly late night.

Unfortunately, that meant the morning wasn't quite as early as I'd hoped it would be. Because of the dark skies, and the three-hour time difference with home, I had planned not to go back to Perth for the night. Instead, I wanted to stay in New Norcia and then get up early to catch the planetary alignment in action. I ended up seeing it just fine, but it was getting a little bit too light at that stage to easily capture all the planets on camera.

Because most delegates elected to stay back in Perth overnight (where the hotels have air-con), they wouldn't be back before 10am, which gave me time to have a nice and relaxing early morning at the hotel, with fresh coffee.



Aaaah, the serenity.

Down to business

Once my partners in crime had arrived, we all moved to the ESA education room at the New Norcia monastery for some enlightening sessions by ESA engineers about the ESA Tracking Network (ESTRACK) and NNO.

ESTRACK

Yves Doat spoke about why the ESTRACK network is needed and what it currently consists of. He showed us highlights of some of the missions they've supported over the past decades, from the Giotto mission past Halley's Comet in 1986 through to the current Rosetta/Philae mission to comet 67P Churyumov-Gerasimenko.

Deep Space Comms

Klaus-Jürgen Schulz dove into the details of deep space communications and paid particular attention to the difficulties of communicating with spacecraft that are close to the sun (which is an issue for the BepiColombo mission to Mercury, of course!). He finished his presentation by telling us about the future of deep space communications, using light rather than radio to obtain much higher rates of data transmission.

Ground Station Operations

Next, Marc Roubert explained the operational intricacies of running ground stations. Since they are generally located in relatively remote radio-silent areas, getting construction materials and equipment to the site can pose a real problem. Bush fires, sand storms, snow and the occasional leopard (for the Argentinian site) can interfere with operations as well.

Their location can also pose problems for the power supply. The sites use a lot of power to cryo-cool the amplifiers. Fire can cut power lines, so generators are needed.

All delegates became very excited when he said that due to the cost of power in Australia, NNO was actually going solar. ESA have built a 250kW solar plant on the New Norcia site, which will pay for itself in only 7 years and save about 400 tons of CO2 per year.

They're not yet allowed to feed power back into the grid, because the infrastructure wouldn't be able to cope. But they built the plant to produce only as much power as they need, so there isn't that much to feed back currently anyway.

The trouble with big antennas

Gunther Sessler then gave us the low-down on the new NNO-2 antenna: how it was constructed and what it can do that the 35m NNO-1 antenna can't, which is mainly to acquire a signal from spacecraft even if they're slightly off-course (which can happen easily if a rocket slightly over- or under-performs at launch).

As it turns out, the 35m NNO-1 antenna has a beam width of 60 millidegrees, and to acquire a signal from a spacecraft, it has to be somewhere within that beam. I did the maths on that, and 60 millidegrees equates to a circle with a diameter of only about a kilometre at a distance of 1000km (eg: a spacecraft on its way to orbit, just clearing the horizon). Now a kilometre sounds like a lot, but when you realise a spacecraft is doing upwards of 5km/sec at that point, locking on to it becomes a much harder problem!
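
For the curious, here's that back-of-envelope sum written out, using the small-angle approximation (my own arithmetic, not an ESA figure):

import math

# Rough footprint of a 60 millidegree beam at a given distance.
# Back-of-envelope maths only, not ESA data.

def beam_footprint_m(beam_width_deg, distance_km):
    """Approximate diameter (metres) of the beam cone at distance_km."""
    theta = math.radians(beam_width_deg)
    return 2 * distance_km * 1000 * math.tan(theta / 2)

print(round(beam_footprint_m(0.060, 1000)))  # ~1047 m at 1000 km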

That's where the wider beam width of the 4.5m NNO-2 antenna comes in. It can see a larger part of the sky, so it can pick up spacecraft that are slightly off-course a lot more easily. And if the spacecraft is even further off-course, the 0.75m antenna has a wider beam width still.

With some smarts, once the 0.75m antenna locks on to a spacecraft, it can be used to centre the 4.5m dish on it in turn. And once the 4.5m antenna is locked, its data can in turn be used to lock the 35m NNO-1 onto the craft.
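
Just to illustrate the idea, here's a toy sketch of that hand-off (my own model, not ESA's actual control logic, and the beam widths for the two smaller antennas are guesses):

# Toy model of the acquisition cascade: each wider-beam antenna hands its
# pointing solution to the next, narrower one, until the 35m dish is locked.
# Beam widths for the 0.75m and 4.5m antennas are made-up placeholder values.

antennas = [
    ("0.75m acquisition aid", 3.0),   # widest beam (assumed figure)
    ("4.5m NNO-2", 0.5),              # narrower (assumed figure)
    ("35m NNO-1", 0.06),              # 60 millidegrees, from the talk
]

def acquire(spacecraft_deg, pointing_deg):
    """Walk the cascade: once an antenna sees the craft inside its beam,
    its measured direction is used to point the next, narrower antenna."""
    for name, beam_deg in antennas:
        if abs(spacecraft_deg - pointing_deg) <= beam_deg / 2:
            print(f"{name}: locked on")
            pointing_deg = spacecraft_deg  # hand the refined pointing along
        else:
            print(f"{name}: target outside beam, cascade stops")
            return False
    return True

acquire(spacecraft_deg=10.3, pointing_deg=11.0)  # craft 0.7 degrees off the predicted course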

Putting it all together

The final presentation was by Peter Droll, who put it all together and gave us an overview of how ESTRACK was used to send the LISA Pathfinder mission on its way to the L1 Lagrange point. That was done by boosting its orbit with several engine burns, after each of which the craft's position needed to be known exactly in order to calculate the next burn.

LPF is trialing equipment for detecting gravitational waves in space and should have started science operations today. Fittingly, this presentation was on the morning of the LIGO announcement :-)

Tour

We had a quick lunch after the presentations and then hopped back on the bus to go see the NNO dishes. The Inmarsat Cricket Team had prepared well and gave us a tour of NNO-1, allowing us to stick our heads absolutely everywhere.

The only spanner in the works in terms of social media was that the inside of the dish is really well shielded against radio interference, so all our phones stopped working! Luckily, the Nikon with borrowed fish-eye lens worked fine.

You can see all of my SocialSpaceWA photos on Flickr.

We toured the NNO-1 dish, as well as the generator and battery buildings and the control room. Two lucky souls managed to score the chance to actually operate NNO-1, and I grabbed a bit of video whilst Matt took the dish for a joyride. I am assured that New Norcia doesn't do hayrides like Parkes, and that nobody plays cricket in the dish either (but they do play football!)



Taking NNO-1 for a joyride.

Inauguration

After the tour, VIPs started arriving for the formal inauguration ceremony. After a Welcome to Country, we heard talks from the WA Deputy Premier and the European Union Ambassador to Australia, praising the virtues of scientific cooperation. I definitely hope there will be more of that in the future, if only to make more space infrastructure more readily accessible for visiting! :-)

Speeches over, we all hopped back on the bus to finally go and see the new NNO-2 antenna. It's located a few hundred metres away from the main complex, and since we were still enjoying the heat wave, the transport was most welcome. That is, until the smaller of the buses couldn't cope with the rather steep hill and we all had to do the last hundred metres or so on foot.

The sun was setting as we arrived at the NNO-2 site, and with the thin crescent moon it made a rather lovely backdrop for the blessing of the new facility by three monks from the New Norcia Monastery, followed by the antenna doing a little dance.

Good luck on your mission, NNO-2!



Image: Vaughan Puddey.

Wrap-Up

The formal proceedings over, we were all bused back to the monastery, where ESA treated us to a delicious dinner as the stars came out. The monks at New Norcia turn out to make a rather decent drop of wine as well. I'm not a fan of beer, but I'm told their ale is pretty good too :-)

Finally it was time to hop on the bus and head back to Perth, and after a final farewell drink, all delegates went their separate ways again.

But one thing we did all agree on: if you ever get the chance to do some accidental space tourism, take that chance with both hands and don't let go!



Thumbs up for New Norcia!

Thank you, ESA, Inmarsat and New Norcia!

Tags: SocialSpaceWA, deep space, space, adventure, ESA
Categories: thinktime

Peter Lieverdink: Social! Space! Western Australia!

Thu 17th Mar 2016 18:03

A few weeks ago I noticed a retweet by ESA, asking for expressions of interest from space enthusiasts to attend and social-media (verb) the inauguration of a new antenna at their New Norcia deep space tracking site in Western Australia.

That site is used to communicate with deep space missions such as Rosetta and Gaia.

After some um-ing and ah-ing, I decided to apply. After all, when I'm on holiday elsewhere I try to visit observatories and other space-related things, and am always a bit disappointed when a fence keeps me at a distance.

Last week I got an email with the happy news that I was one of the fifteen lucky people selected to attend!

 

So, over the next week you'll probably see a lot of space tweets from me, with impressive radio hardware, behind-the-scenes looks at things, and a lot of excited people.

You can read more about #SocialSpaceWA on the ESA Social Space blog.

 

Tags: space, SocialSpaceWA, ESA, deep space, astronomy
Categories: thinktime
