The use of cloud computing as an alternative implementation for high performance computing (HPC) initially seems appealing, especially to IT managers and to users who may find the jump from their desktop application to the command line interface challenging. However, a careful and nuanced review of metrics should lead to a reconsideration of these assumptions.
Job control is a basic feature of popular UNIX and Linux shells, such as “bash”.
It can be very useful, so I thought I’d make a little tutorial on it…
You can also use fg and bg with a job number, if you have several jobs in the list.
You can start a job in the background: put an &-symbol at the end of the command. This works well for jobs that write to a file, but not for interactive jobs. Things might get messy if you have a background job that writes to the terminal.
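For instance, here is a sketch of starting a job in the background and collecting its result later (find /etc is just a stand-in for any long-running command):

```shell
# Start a long-running command in the background with "&".
# Its output is redirected to a file so it doesn't write to the
# terminal while you keep working (stderr is redirected too).
find /etc -name '*.conf' > find.out 2>&1 &
pid=$!          # $! holds the PID of the most recent background job
wait "$pid"     # block until the job finishes; $? is its exit status
wc -l find.out
```

Redirecting both stdout and stderr is what keeps a background job from making a mess of your terminal.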
If you forget the % with kill, it will try to kill by process-id instead of job number. You don’t want to accidentally kill PID 1!
An example:

    vi /etc/apache2/vhosts.d/ids.conf
    ^Z
    jobs
    find / >find.out &
    jobs
    fg 2
    ^Z
    jobs
    bg 2
    jobs
    kill %2
    fg
I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options
In my opinion, the first step in starting a new free software project should be to look for a reason not to do it. So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.
It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.
A better option that other people have suggested is to avoid subscribing to the planet feeds and instead to subscribe to each of the author feeds separately, pruning them as you go. Unfortunately, this whitelist approach is a high-maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter
PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.
You can either:
- add file:///var/cache/planetfilter/planetname.xml to your local feed reader
- serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
- host it on a server somewhere on the Internet.
The software will fetch new posts every hour and overwrite the local copy of each feed.
A basic configuration file looks like this:

    [feed]
    url = http://planet.debian.org/atom.xml

    [blacklist]

Filters
There are currently two ways of filtering posts out. The main one is by author name:

    [blacklist]
    authors =
      Alice Jones
      John Doe
and the other one is by title:

    [blacklist]
    titles =
      This week in review
      Wednesday meeting for
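Putting both filter types together, a full configuration file might look like the following. This is a sketch reconstructed from the fragments above (check planetfilter's own documentation for the exact multi-line syntax):

    [feed]
    url = http://planet.debian.org/atom.xml

    [blacklist]
    authors =
      Alice Jones
      John Doe
    titles =
      This week in review
      Wednesday meeting for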
In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.

Tor support
Since blog updates happen asynchronously in the background, they can work very well over Tor.
In order to set that up in the Debian version of planetfilter:
- Install the tor and polipo packages.
- Set the following in /etc/polipo/config:

    proxyAddress = "127.0.0.1"
    proxyPort = 8008
    allowedClients = 127.0.0.1
    allowedPorts = 1-65535
    proxyName = "localhost"
    cacheIsShared = false
    socksParentProxy = "localhost:9050"
    socksProxyType = socks5
    chunkHighMark = 67108864
    diskCacheRoot = ""
    localDocumentRoot = ""
    disableLocalInterface = true
    disableConfiguration = true
    dnsQueryIPv6 = no
    dnsUseGethostbyname = yes
    disableVia = true
    censoredHeaders = from,accept-language,x-pad,link
    censorReferer = maybe
- Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

    export http_proxy="localhost:8008"
    export https_proxy="localhost:8008"
The source code is available on repo.or.cz.
I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!
I'm also interested in any suggestions you may have.
A guest blog post I wrote on managing git branches when doing devops.
When doing Devops we all know that using source code control is a “good thing” — indeed, it would be hard to imagine doing Devops without it. But if you’re using Puppet and R10K for your configuration management, you can end up with hundreds of old branches lying around — branches like XYZ-123, XYZ-123.fixed, XYZ-123.fixed.old and so on. Which branches should you clean up, and which should you keep? How can you easily clean up the old branches? This article demonstrates some git configurations and scripts that make working with hundreds of git branches easier…
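As a taste of the kind of cleanup involved, here is one common generic pattern for pruning fully-merged branches (this is not the article's R10K-specific tooling, and "main" stands in for whatever your primary branch is):

```shell
# Demo in a throwaway repo: create a merged ticket branch, then
# prune every local branch already merged into main.
cd "$(mktemp -d)"
git init -q
git checkout -qb main
git config user.email you@example.com
git config user.name you
echo one > file; git add file; git commit -qm init
git checkout -qb XYZ-123
echo two >> file; git commit -aqm "ticket work"
git checkout -q main
git merge -q XYZ-123            # XYZ-123 is now fully merged

# The cleanup itself: list branches merged into main, drop the
# current/main entries, and delete the rest. Lowercase "-d"
# refuses to delete unmerged branches, so this is safe.
git branch --merged main \
  | grep -vE '^\*|^[[:space:]]*main$' \
  | xargs -r git branch -d
git branch                      # only main remains
```

Because `git branch -d` bails out on anything unmerged, you can run this pattern regularly without risking in-flight work.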
Go to Devops and Old Git Branches to read the full article.
The increasing size of datasets is a critical issue for eResearch, especially given that they are expanding at a rate greater than improvements in desktop application speed, which suggests that HPC knowledge is a requisite. However, knowledge of such systems is not common.
I didn't realize there had been a flash flood in Canberra in 1971 that killed seven people, probably because I wasn't born then. However, when I ask people who were around then, they don't remember without prompting either, which I think is sad. I only learnt about the flood because of the geocache I found hidden at the (not very well advertised) memorial today.
Interactive map for this route.
Tags for this post: blog pictures 20150323-curtin photo canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs
- curry powder
- chilli sauce
- soy sauce
- tomato sauce
- chicken, prawns, and/or seafood mix
- egg noodles (any kind)
- lemon juice (optional)
- oyster sauce (optional)
- garlic (optional)
- ginger (optional)
- onion (optional)
- tomatoes (optional)
- tofu (optional)
- vegetables (optional, type is your choice)
Coat the chicken with bicarbonate of soda if desired (it acts as a meat tenderiser; this step is not required at all if the chicken is diced into small enough pieces and cooked well), then wash it off in cold water. Marinade the chicken in fish sauce, sugar, garlic and pepper (optional step). Fry off the chicken, tofu, onion, garlic, ginger, etc. in a pan. Create the sauce using tomato sauce, soy sauce, chilli sauce, curry powder, sugar, etc. Cook the sauce and add the noodles/rice when ready. Garnish everything with chopped lettuce and fried shallots if desired.
The following is what it looks like.
- 11 facts about the changing face of the Australian workforce http://t.co/5GuEcwtatb 15:33:00, 2015-03-20
- Fact check: Did the Government inherit the ‘worst set of accounts’ in history? http://t.co/gaaxll2Pgi #auspol 13:19:12, 2015-03-20
- Brazilian study of 3,500 newborns over 30 years finds link between breastfeeding and intelligence http://t.co/yHhNOVUDj7 11:20:03, 2015-03-20
- Ten companies directly responsible for third of Australia’s greenhouse gas pollution, report finds http://t.co/j8RJDH0HMB 09:42:01, 2015-03-20
- Australia’s plain packaging laws successful, studies show http://t.co/3VH3EC8iX5 19:32:04, 2015-03-19
- Nestlé: “as a human being you should have no right to water” http://t.co/hEuLTNNhza 19:32:04, 2015-03-17
I really like this area. It's scenic, has nice trails, and you can't tell you're in Canberra unless you really look for it. It seemed lightly used, to be honest; I think I saw three other people the entire time I was there. I encountered more dogs off lead than people.
Interactive map for this route.
Tags for this post: blog pictures 20150321-narrabundah photo canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; Cooleman and Arawang Trigs; Point Hut Cross to Pine Island
PSA: If you are a web professional, work in a digital agency or build mobile apps, please read this article now: Taking the social model of disability online
"The social model of disability reframes discussion of disability as a problem of the world, rather than of the individual. The stairs at the train station are the problem, rather than using a wheelchair."
El Gibbs has reminded me of question time during Gian Wild's keynote at Drupal Downunder in 2012, where Gian asserted that accessibility guidelines are a legal requirement for everyone, not just Government. There was an audible gasp from the audience.
It's true that our physical environment needs to include ramps, lifts, accessible toilets, reserved parking spaces, etc in order to accommodate those with mobility needs. Multi-lingual societies require multi-lingual signage. There are hearing loops - but for some reason, this "social model" of accessibility doesn't seem to have extended online.
Making the digital world accessible, and counteracting the systemic discriminatory impact of failing to do so is something we must take seriously. We must build this in during planning and design, we must make it easy for content editors to maintain WCAG compliance AFTER a site or app is delivered.
Building accessibility features in from the beginning also means it costs less to implement, and delivers a double win of making the whole team more mindful of these issues to begin with. It should be part of the acceptance criteria, it should be part of the definition of done.
I'd like to see us tackle these issues directly in Drupal core. If you're interested in keeping track of accessibility issues in Drupal, you might like to follow drupala11y on twitter, and check out issues on drupal.org that have been tagged with "accessibility".
Accessibility traps might not affect you now, but they will. This is probably affecting people you know right now. People who silently struggle with small font sizes, poor contrast, cognitive load, keyboard traps, video without captions.
My own eyesight and hearing is not what it was. My once able parents now require mobility aids. My cousin requires an electric wheelchair. A friend uses a braille reader, and yet I still forget. It's not front and centre for me, but it should be. Let's all take a moment to think about how we can focus on making our online and digital world more accessible for everyone. It really does benefit us all.
Very much a minor update to the presentation I gave in 2013, this talk provides a definition of supercomputers, high performance computing, and parallel programming, their use and current metrics, the importance and dominance of the Linux operating system in these areas, as well as some practical hands-on examples.
An Introduction to Supercomputers. Presentation to Linux Users of Victoria Beginners Workshop, 21st March, 2015