
thinktime

Sridhar Dhanapalan: Twitter posts: 2015-03-23 to 2015-03-29

Planet Linux Australia - 18 hours 52 min ago
Categories: thinktime

Andrew McDonnell: Challenge for 2015: hackaday prize competition

Planet Linux Australia - Sun 29th Mar 2015 23:03

So the 2015 Hackaday prize is happening, running until at least August.

Somehow I’ve currently ended up involved with not one, but two entries!  The good thing is that, with four months to go until the first round submission, I have been careful not to bite off more than can be chewed in the time available on weekends and after the kids go to bed, around other commitments. Along the way though it should be educational and fun, and with any luck I might at least win a T-shirt or something (some electronics test gear would be nice) … I’m under no illusion we will get anywhere near winning a trip to space!

The theme this year is “Build Something that Matters”, around environment, agriculture and energy, with the related facet of solving a problem, though not necessarily a world-scale one.

So my first project, on which I am making good progress, is a farm crop monitoring system for Australian conditions.  This utilises the ESP8266 wifi module and will exercise its deep sleep mode and solar power, along with a yet-to-be-determined Linux module for a local base station, and hopefully ISM band telemetry over long distances. I will also be helped by my neighbour, a farmer who can use this system.

The second project is not my idea but that of a close friend (though I am presently responsible for maintaining its hackaday.io page): an Algorithmic Composting machine built out of repurposed parts and cheap electronics.  I’ll probably end up assisting with the embedded electronics, as well as keeping the documentation up to date.

I won’t be posting here in a lot of detail as the contest progresses, as there is a project log built into the hackaday.io site intended for that purpose.  So follow along at http://hackaday.io/project/4758 and http://hackaday.io/project/4991 instead! (And please like our projects if you have a hackaday account!)


Categories: thinktime

Self talk

Seth Godin - Sun 29th Mar 2015 20:03
There's no more important criticism than self criticism. There's no amount of external validation that can undo the constant drone of internal criticism. And negative self talk is hungry for external corroboration. One little voice in the ether that agrees...         Seth Godin
Categories: thinktime

Glen Turner: Fedora 21: automatic software updates

Planet Linux Australia - Sun 29th Mar 2015 09:03

The way Fedora does automatic software updates has changed with the replacement of yum(8) with dnf(8).

Start by disabling yum's automatic updates, if installed:

# dnf remove yum-cron yum-cron-daily

Then install the dnf automatic update software:

# dnf install dnf-automatic

Alter /etc/dnf/automatic.conf to change the "apply_updates" line:

apply_updates = yes
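For orientation, in the dnf-automatic versions I have looked at this line lives in the [commands] section of automatic.conf; double-check your own file, as the layout has shifted between releases:

[commands]
apply_updates = yes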

Instruct systemd to run the updates periodically:

# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer
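
To confirm that systemd has actually scheduled the job, a generic timer query is enough (nothing here is specific to dnf):

# systemctl list-timers dnf-automatic.timer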
Categories: thinktime

'Pick yourself' and taking responsibility

Seth Godin - Sat 28th Mar 2015 20:03
Perhaps you've decided that the idea of Pick Yourself is sort of a new-age mantra, a promise that everyone is entitled to what they want, right now. What a shortcut it seems to be. A false promise, holding out that...         Seth Godin
Categories: thinktime

Clinton Roy: clintonroy

Planet Linux Australia - Sat 28th Mar 2015 17:03

PyCon Australia 2015 is pleased to announce that its Call for Proposals is now open!

The conference this year will be held on Saturday 1st and Sunday 2nd August 2015 in Brisbane. We’ll also be featuring a day of Miniconfs on Friday 31st July.

The deadline for proposal submission is Friday 8th May, 2015.

PyCon Australia attracts professional developers from all walks of life, including industry, government, and science, as well as enthusiast and student developers. We’re looking for proposals for presentations and tutorials on any aspect of Python programming, at all skill levels from novice to advanced.

Presentation subjects may range from reports on open source, academic or commercial projects to tutorials and case studies. If a presentation is interesting and useful to the Python community, it will be considered for inclusion in the program.

We’re especially interested in short presentations that will teach conference-goers something new and useful. Can you show attendees how to use a module? Explore a Python language feature? Package an application?

Miniconfs

Four Miniconfs will be held on Friday 31st July, as a prelude to the main conference. Miniconfs are run by community members and are separate to the main conference. If you are a first time speaker, or your talk is targeted to a particular field, the Miniconfs might be a better fit than the main part of the conference. If your proposal is not selected for the main part of the conference, it may be selected for one of our Miniconfs:

DjangoCon AU is the annual conference of Django users in the Southern Hemisphere. It covers all aspects of web software development, from design to deployment – and, of course, the use of the Django framework itself. It provides an excellent opportunity to discuss the state of the art of web software development with other developers and designers.

The Python in Education Miniconf aims to bring together community workshop organisers, professional Python instructors and professional educators across primary, secondary and tertiary levels to share their experiences and requirements, and identify areas of potential collaboration with each other and also with the broader Python community.

The Science and Data Miniconf is a forum for people using Python to tackle problems in science and data analysis. It aims to cover commercial and research interests in applications of science, engineering, mathematics, finance, and data analysis using Python, including AI and ‘big data’ topics.

The OpenStack Miniconf is dedicated to talks related to the OpenStack project and we welcome proposals of all kinds: technical, community, infrastructure or code talks/discussions; academic or commercial applications; or even tutorials and case studies. If a presentation is interesting and useful to the OpenStack community, it will be considered for inclusion. We also welcome talks that have been given previously in different events.

Full details: http://2015.pycon-au.org/cfp



Filed under: Uncategorized
Categories: thinktime

Simon Lyall: Parallel Importing vs The Economist

Planet Linux Australia - Sat 28th Mar 2015 11:03

For the last few years I have subscribed to the online edition of The Economist magazine. Previously I read it via their website, but for the last year or two I have used their mobile app. Both feature the full text of each week’s magazine. Since I subscribed nearly 15 years ago I have paid:

Jun 1997 (launched)  US$ 48
Jun 1999             US$ 48
Oct 2002             US$ 69
Oct 2003             US$ 69
Dec 2006             US$ 79
Oct 2009             US$ 79
Oct 2010             US$ 95
Oct 2011             US$ 95
Mar 2014             NZ$ 400 (approx US$ 300)

You will note the steady creep for a few years followed by the huge jump in 2014.

Note: I reviewed my credit card bills for 2012 and 2013 and didn’t see any payments, so it is possible I was getting it for free for two years. Possibly this was due to the transition from using an outside card processor (Worldpay) to doing the subscriptions in-house.

Last year I paid the bill in a bit of a rush and, while I was surprised at the amount, I didn’t think too hard. This year, however, I had a closer look. What seems to have happened is that The Economist has changed their online pricing model from “cheap online product” to “discount from the printed price”. This means that instead of online subscribers paying the same everywhere, they now pay slightly less than it would cost to get the printed magazine delivered to their home.

Unfortunately the New Zealand price is very high to (I assume) cover the cost of shipping a relatively small number of magazines via air all the way from the nearest printing location.

So readers in New Zealand are now charged NZ$ 736 for a two-year digital subscription while readers in the US are charged US$ 223 (NZ$ 293) for the same product. Thus New Zealanders pay 2.5 times as much as Americans.

Fortunately, since I am a globe-trotting member of the world elite® I was able to change my subscription address to my US office and save a bunch of cash. However, for a magazine that publishes the Big Mac Index comparing prices of products around the world, the huge difference in prices for the same digital product seems a little weird.

Categories: thinktime

On Our Radar: Self-Centered Edition

a list apart - Sat 28th Mar 2015 03:03

Okay, we admit it: it’s all about us. From steps to sleep to social activities, we’re counting every kind of personal data you can think of. But what does all that data add up to? How could we look at it—and ourselves—differently? This week, we’re asking ourselves—and our self—the tough questions.

My so-called lifelog

While waiting for an invite from gyrosco.pe, which promises to help me lead a healthier and happier life by harnessing my personal data, I started reading about life resource planning: the idea that we can administer every aspect of our lives using our timeline, our life feed, as a tool. LRP isn’t just the lifelogging data gathered by all the apps we use (health, finance, commuting, social graph, etc.). It’s about a user interface to make sense of it—a personal agent telling my story.

This has me thinking, how can I ever reinvent myself if my life feed becomes part of a documented history? The answer seems to lie in the notion of storytelling, becoming active autobiographers ourselves, using the same tools that tell our history, only to tell it better. When people are prompted to “tell a story” rather than state “what’s on their mind,” a character emerges—a qualified self (as opposed to the notion of the quantified self)—that may defy “big” data.

Michelle Kondou, developer

Mirror, mirror

A couple of days ago, I came across dear-data.com, a project by data visualization pros Giorgia Lupi and Stefanie Posavec. Instead of building digital charts and graphs, they’re documenting details of their lives onto handmade postcards—translating quiet moments of the everyday into colors and lines of self-awareness, and reinventing the rules each week. With a flickering edge of whimsy and objectivity, those moments are real life—through a filter.

What I love about Dear Data is that their conditions create new filters; they end up with a different view of themselves each week. Getting out of their usual medium and having to create new ways to tell each story is a tactic for hunting down catalysts. I also like how they went with something so square-one: paper and colored pens, with no expectation of being fancy and no need for neat lines.

Dear Data has me thinking about how we can all gain momentum from reimagining our digital selves every once in a while, by ditching our habitual means of describing and defining. If I could so easily show myself a new mirror and allow a situation to filter through me, I’d discover a different result each time. Those moments are grounding: they’re a sharp instant of humility, a moment of recognition that you’ll never see anything in the same way again.

Mica McPheeters, submissions and events manager

My birthday, my self

Ah, spring—that special time of year when a young developer’s fancy soon turns to thoughts of lexical scoping, and I’ve got ECMAScript 6 arrow functions on the brain.

Defining a function as usual introduces a new value for the this keyword, meaning we sometimes need to write code like the following:

function Wilto() {
  var self = this;
  self.age = 32;
  setInterval( function constantBirthdays() {
    self.age++;
    console.log( "I am now " + self.age + " years old");
  }, 3000 );
}

Since the meaning of this is going to change inside the constantBirthdays function, we alias the enclosing function’s this value as the variable self—or sometimes as that, depending on your own preference.

Arrow functions will maintain the this value of the enclosing context, however, so we can do away with that variable altogether:

function Wilto() {
  this.age = 32;
  setInterval(() => {
    this.age++;
    console.log( "I am now " + this.age + " years old");
  }, 3000 );
}

Thanks to ES6, we can finally start getting over our selfs.

Mat Marquis, technical editor

A gif about: self(ie) love
Haters gonna hate.
Categories: thinktime

Hypergrowth

Seth Godin - Fri 27th Mar 2015 20:03
Fast growth comes from overwhelming the smallest possible audience with a product or service that so delights that they insist that their friends and colleagues use it. And hypergrowth is a version of the same thing, except those friends and...         Seth Godin
Categories: thinktime

Laura Kalbag on Freelance Design: The Illusion of Free

a list apart - Thu 26th Mar 2015 23:03

Our data is out of our control. We might (wisely or unwisely) choose to publicly share our statuses, personal information, media and locations, or we might choose to only share this data with our friends. But it’s just an illusion of choice—however we share, we’re exposing ourselves to a wide audience. We have so much more to worry about than future employers seeing photos of us when we’ve had too much to drink.

Corporations hold a lot of information about us. They store the stuff we share on their sites and apps, and provide us with data storage for our emails, files, and much more. When we or our friends share stuff on their services, either publicly or privately, clever algorithms can derive a lot of detailed knowledge from a small amount of information. Did you know that you’re pregnant? Did you know that you’re not considered intelligent? Did you know that your relationship is about to end? The algorithms know us better than our families and only need to know ten of our Facebook Likes before they know us better than our average work colleague.

A combination of analytics and big data can be used in a huge variety of ways. Many sites use our data just to ensure a web page is in the language we speak. Recommendation engines are used by companies like Netflix to deliver fantastic personalized experiences. Google creates profiles of us to understand what makes us tick and sell us the right products. 23andme analyzes our DNA for genetic risk factors and sells the data to pharmaceutical companies. Ecommerce sites like Amazon know how to appeal to you as an individual, and whether you’re more persuaded by social proof when your friends also buy a product, or authority when an expert recommends a product. Facebook can predict the likelihood that you drink alcohol or do drugs, or determine if you’re physically and mentally healthy. It also experiments on us and influences our emotions. What can be done with all this data varies wildly, from the incredibly convenient and useful to the downright terrifying.

This data has a huge value to people who may not have your best interests at heart. What if this information is sold to your boss? Your insurance company? Your potential partner?

As Tim Cook said, “Some companies are not transparent that the connection of these data points produces five other things that you didn’t know that you gave up. It becomes a gigantic trove of data.” The data is so valuable that cognitive scientists are giddy with excitement at the size of studies they can conduct using Facebook. For neuroscience studies, a sample of twenty white undergraduates used to be considered sufficient to say something general about how brains work. Now Facebook works with scientists on sample sizes of hundreds of thousands to millions. The difference between more traditional scientific studies and Facebook’s studies is that Facebook’s users don’t know that they’re probably taking part in ten “experiments” at any given time. (Of course, you give your consent when you agree to the terms and conditions. But very few people ever read the terms and conditions, or privacy policies. They’re not designed to be read or understood.)

There is the potential for big data to be collected and used for good. Apple’s ResearchKit is supported by an open source framework that makes it easy for researchers and developers to create apps to collect iPhone users’ health data on a huge scale. Apple says they’ve designed ResearchKit with people’s privacy values in mind, “You choose what studies you want to join, you are in control of what information you provide to which apps, and you can see the data you’re sharing.”

But the allure of capturing huge, valuable amounts of data may encourage developers to design without ethics. An app may pressure users to quickly sign the consent form when they first open the app, without considering the consequences. The same way we’re encouraged to quickly hit “Agree” when we’re presented with terms and conditions. Or how apps tell us we need to allow constant access to our location so the app can, they tell us, provide us with the best experience.

The intent of the developers, their bosses, and the corporations as a whole, is key. They didn’t just decide to utilize this data because they could. They can’t afford to provide free services for nothing, and that was never their intention. It’s a lucrative business. The business model of these companies is to exploit our data, to be our corporate surveillers. It’s their good fortune that we share it like—as Zuckerberg said—dumb fucks.

To say that this is a privacy issue is to give it a loaded term. The word “privacy” has been hijacked to suggest that you’re hiding things you’re ashamed about. That’s why Google’s Eric Schmidt said “if you’ve got something to hide, you shouldn’t be doing it in the first place.” (That line is immortalized in the fantastic song, Sergey Says.) But privacy is our right to choose what we do and don’t share. It’s enshrined in the Universal Declaration of Human Rights.

So when we’re deciding which cool new tools and services to use, how are we supposed to make the right decision? Those of us who vaguely understand the technology live in a tech bubble where we value convenience and a good user experience so highly that we’re willing to trade it for our information, privacy and future security. It’s the same argument I hear again and again from people who choose to use Gmail. But will the tracking and algorithmic analysis of our data give us a good user experience? We just don’t know enough about what the companies are doing with our data to judge whether it’s a worthwhile risk. What we do know is horrifying enough. And whatever corporations are doing with our data now, who knows how they’re going to use it in the future.

And what about people outside the bubble, who aren’t as well-informed when it comes to the consequences of using services that exploit our data? The everyday consumer will choose a product based on free and fantastic user experiences. They don’t know about the cost of running, and the data required to sustain, such businesses.

We need to be aware that our choice of communication tools, such as Gmail or Facebook, doesn’t just affect us, but also those who want to communicate with us.

We need tools and services that enable us to own our own data, and give us the option to share it however we like, without conditions attached. I’m not an Apple fangirl, but Tim Cook is at least talking about privacy in the right way:

None of us should accept that the government or a company or anybody should have access to all of our private information. This is a basic human right. We all have a right to privacy. We shouldn’t give it up.

“Apple has a very straightforward business model,” he said. “We make money if you buy one of these [pointing at an iPhone]. That’s our product. You [the consumer] are not our product. We design our products such that we keep a very minimal level of information on our customers.”

But Apple is only one potential alternative to corporate surveillance. Their services may have some security benefits if our data is encrypted and can’t be read by Apple, but our data is still locked into their proprietary system. We need more genuine alternatives.

What can we do?

It’s a big scary issue. And that’s why I think people don’t talk about it. When you don’t know the solution, you don’t want to talk about the problem. We’re so entrenched in using Google’s tools, communicating via Facebook, and benefitting from a multitude of other services that feed on our data, it feels wildly out of our control. When we feel like we’ve lost control, we don’t want to admit it was our mistake. We’re naturally defensive of the choices of our past selves.

The first step is understanding and acknowledging that there’s a problem. There’s a lot of research, articles, and information out there if you want to learn how to regain control.

The second step is questioning the corporations and their motives. Speak up and ask these companies to be transparent about the data they collect, and how they use it. Encourage government oversight and regulation to protect our data. Have the heart to stand up against a model you think is toxic to our privacy and human rights.

The third, and hardest, step is doing something about it. We need to take control of our data, and begin an exodus from the services and tools that don’t respect our human rights. We need to demand, find and fund alternatives where we can be together without being an algorithm’s cash crop. It’s the only way we can prove we care about our data, and create a viable environment for the alternatives to exist.

Categories: thinktime

Lev Lafayette: The Cloud : An Inferior Implementation of HPC

Planet Linux Australia - Thu 26th Mar 2015 21:03

The use of cloud computing as an alternative implementation for high performance computing (HPC) initially seems appealing, especially to IT managers and to users who may find the jump from their desktop application to the command line interface challenging. However, a careful and nuanced review of metrics should lead to a reconsideration of these assumptions.

read more

Categories: thinktime

Active listening

Seth Godin - Thu 26th Mar 2015 20:03
The kind of listening we're trained to do in school and at work is passive listening. Sit still. Get through it. Figure out what's going to be on the test and ignore the rest. Your eyes can glaze over, but...         Seth Godin
Categories: thinktime

Sam Watkins: sswam

Planet Linux Australia - Thu 26th Mar 2015 15:03

Job control is a basic feature of popular UNIX and Linux shells, such as “bash”.

It can be very useful, so I thought I’d make a little tutorial on it…

^C    press Ctrl-C to interrupt a running job (you know this one!)
^\    press Ctrl-\ (backslash) to QUIT a running job (stronger)
^Z    press Ctrl-Z to STOP a running job, it can be resumed later
jobs  type jobs for a list of stopped jobs (and background jobs)
fg    type fg to continue a job in the foreground
bg    type bg to continue a job in the background
kill  kill a job, e.g. kill %1, or kill -KILL %2
wait  wait for all background jobs to finish

You can also use fg and bg with a job number, if you have several jobs in the list.

You can start a job in the background: put an &-symbol at the end of the command. This works well for jobs that write to a file, but not for interactive jobs. Things might get messy if you have a background job that writes to the terminal.
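
For example (the command name is made up for illustration), redirecting both stdout and stderr into a file keeps a background job from scribbling on your terminal:

some_long_job >some_long_job.log 2>&1 &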

If you forget the % with kill, it will try to kill by process-id instead of job number.  You don’t want to accidentally kill PID 1!
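
One way to stay safe is to list jobs together with their process IDs before killing anything; in bash, jobs -l does exactly that:

jobs -l     # job numbers alongside their process IDs
kill %1     # the % means job number 1, not PID 1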

An example:

vi /etc/apache2/vhosts.d/ids.conf
^Z
jobs
find / >find.out &
jobs
fg 2
^Z
jobs
bg 2
jobs
kill %2
fg

Categories: thinktime

This week's sponsor: Inbound.org

a list apart - Thu 26th Mar 2015 03:03

Thanks to Inbound.org for sponsoring A List Apart this week! Check out their community where inbound designers, developers, and marketers come together to connect, learn, and grow.

Categories: thinktime

Francois Marier: Keeping up with noisy blog aggregators using PlanetFilter

Planet Linux Australia - Wed 25th Mar 2015 20:03

I follow a few blog aggregators (or "planets") and it's always a struggle to keep up with the amount of posts that some of these get. The best strategy I have found so far is to filter them so that I remove the blogs I am not interested in, which is why I wrote PlanetFilter.

Other options

In my opinion, the first step in starting a new free software project should be to look for a reason not to do it. So I started by looking for another approach and by asking people around me how they dealt with the firehoses that are Planet Debian and Planet Mozilla.

It seems like a lot of people choose to "randomly sample" planet feeds and only read a fraction of the posts that are sent through there. Personally however, I find there are a lot of authors whose posts I never want to miss so this option doesn't work for me.

A better option that other people have suggested is to avoid subscribing to the planet feeds, but rather to subscribe to each of the author feeds separately and prune them as you go. Unfortunately, this whitelist approach is a high maintenance one since planets constantly add and remove feeds. I decided that I wanted to follow a blacklist approach instead.

PlanetFilter

PlanetFilter is a local application that you can configure to fetch your favorite planets and filter the posts you see.

If you get it via Debian or Ubuntu, it comes with a cronjob that looks at all configuration files in /etc/planetfilter.d/ and outputs filtered feeds in /var/cache/planetfilter/.

You can either:

  • add file:///var/cache/planetfilter/planetname.xml to your local feed reader
  • serve it locally (e.g. http://localhost/planetname.xml) using a webserver, or
  • host it on a server somewhere on the Internet.

The software will fetch new posts every hour and overwrite the local copy of each feed.

A basic configuration file looks like this:

[feed]
url = http://planet.debian.org/atom.xml

[blacklist]

Filters

There are currently two ways of filtering posts out. The main one is by author name:

[blacklist]
authors =
  Alice Jones
  John Doe

and the other one is by title:

[blacklist]
titles =
  This week in review
  Wednesday meeting for

In both cases, if a blog entry contains one of the blacklisted authors or titles, it will be discarded from the generated feed.
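
Putting the fragments above together, a complete configuration file might look something like this sketch (the planet URL is just the earlier example, and the blacklist entries are the same illustrative names):

[feed]
url = http://planet.debian.org/atom.xml

[blacklist]
authors =
  Alice Jones
  John Doe
titles =
  This week in review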

Tor support

Since blog updates happen asynchronously in the background, they can work very well over Tor.

In order to set that up in the Debian version of planetfilter:

  1. Install the tor and polipo packages.
  2. Set the following in /etc/polipo/config:

    proxyAddress = "127.0.0.1"
    proxyPort = 8008
    allowedClients = 127.0.0.1
    allowedPorts = 1-65535
    proxyName = "localhost"
    cacheIsShared = false
    socksParentProxy = "localhost:9050"
    socksProxyType = socks5
    chunkHighMark = 67108864
    diskCacheRoot = ""
    localDocumentRoot = ""
    disableLocalInterface = true
    disableConfiguration = true
    dnsQueryIPv6 = no
    dnsUseGethostbyname = yes
    disableVia = true
    censoredHeaders = from,accept-language,x-pad,link
    censorReferer = maybe
  3. Tell planetfilter to use the polipo proxy by adding the following to /etc/default/planetfilter:

    export http_proxy="localhost:8008"
    export https_proxy="localhost:8008"
Bugs and suggestions

The source code is available on repo.or.cz.

I've been using this for over a month and it's been working quite well for me. If you give it a go and run into any problems, please file a bug!

I'm also interested in any suggestions you may have.

Categories: thinktime

What is customer service for?

Seth Godin - Wed 25th Mar 2015 20:03
Customer service is difficult, expensive and unpredictable. But it's a mistake to assume that any particular example is automatically either good or bad. A company might spend almost nothing on customer service but still succeed in reaching its goals. Customer...         Seth Godin
Categories: thinktime

Sonia Hamilton: Devops and Old Git Branches

Planet Linux Australia - Wed 25th Mar 2015 11:03

A guest blog post I wrote on managing git branches when doing devops.

When doing Devops we all know that using source code control is a “good thing” — indeed it would be hard to imagine doing Devops without it. But if you’re using Puppet and R10K for your configuration management you can end up with hundreds of old branches lying around — branches like XYZ-123, XYZ-123.fixed, XYZ-123.fixed.old and so on. Which branches should you clean up, and which should you keep? How do you easily clean up the old ones? This article demonstrates some git configurations and scripts that make working with hundreds of git branches easier…
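
As a flavour of the kind of cleanup involved (these are standard git commands, not taken from the linked article, and they assume your integration branch is called master):

git fetch --prune                  # drop remote-tracking refs for branches deleted upstream
git branch --merged master         # list local branches already merged into master
git branch -d XYZ-123.fixed.old    # delete a merged branch; -d refuses if it is unmerged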

Go to Devops and Old Git Branches to read the full article.

Categories: thinktime

Lev Lafayette: Skill Improvements versus Interface Designs for eResearchers

Planet Linux Australia - Wed 25th Mar 2015 10:03

The increasing size of datasets is a critical issue for eResearch, especially given that they are expanding at a rate greater than improvements in desktop application speed, suggesting that HPC knowledge is requisite. However, knowledge of such systems is not common.

read more

Categories: thinktime

Of course it's difficult...

Seth Godin - Tue 24th Mar 2015 20:03
Students choose to attend expensive colleges but don't major in engineering because the courses are killer. Doing more than the customary amount of customer service is expensive, time-consuming and hard to sustain. Raising money for short-term urgent projects is easier...         Seth Godin
Categories: thinktime

Global Health Information Infrastructure: $100m Philanthropic Grant from Bloomberg Philanthropies

Teaser: Professor Alan Lopez leads a team of experts to establish a global health information infrastructure.

This article originally appeared on the 

read more

Categories: thinktime


Subscribe to KatteKrab aggregator - thinktime