Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

Ian Wienand: Image building for OpenStack CI -- Minimal images, big effort

Mon 04th Apr 2016 13:04

A large part of the OpenStack Infrastructure team's recent efforts has been focused on moving towards more stable and maintainable CI environments for testing.

OpenStack CI Overview

Before getting into details, it's a good idea to get a basic big-picture conceptual model of how OpenStack CI testing works. If you look at the following diagram and follow the numbers with the explanation below, hopefully you'll have all the context you need.

  1. The developer uploads their code to gerrit via the git-review tool. They wait.

  2. Gerrit provides a JSON-encoded "firehose" output of everything happening to it. New reviews, votes, updates, etc. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a Jenkins host to consume. gearman is a job-server, and as they explain it "provides a generic application framework to farm out work to other machines or processes that are better suited to do the work".

  4. A group of Jenkins hosts are subscribed to gearman as workers. It is these hosts that will consume the job requests from the queue and actually get the tests running. Jenkins needs two things to be able to run a job -- a job definition (something to do) and a slave node (somewhere to do it).

    The first part -- what to do -- is provided by job-definitions stored in external YAML files and processed by Jenkins Job Builder (jjb) into job configurations for Jenkins. Thus each Jenkins instance knows about all the jobs it might need to run. Zuul also knows about these job definitions, so you can see how we now have a mapping where Zuul can put a job into gearman saying "run test foo-bar-baz" and a Jenkins host can consume that request and know what to do.

    The second part -- somewhere to run the test -- takes some more explaining. Let's skip to the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by nodepool -- a customised orchestration tool. Nodepool watches the gearman queue and sees what requests are coming out of Zuul, and decides what type of capacity to provide and in what clouds to satisfy the outstanding job queue. Nodepool will start up virtual machines as required, and register those nodes to the Jenkins instances.

  6. At this point, Jenkins has what it needs to actually get jobs started. When nodepool registers a host to Jenkins as a slave, the Jenkins host can now advertise its ability to consume jobs. For example, if a ubuntu-trusty node is provided to the Jenkins instance by nodepool, Jenkins can now consume a job from the queue intended to run on a ubuntu-trusty host. It will run the job as defined in the job-definition -- doing what Jenkins does by ssh-ing into the host, running scripts, copying the logs and waiting for the result. (It is a gross oversimplification, but Jenkins is pretty much a glorified ssh/scp wrapper for OpenStack CI. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

  7. Eventually, the test will finish. Jenkins will put the result back into gearman, which Zuul will consume. The slave will be released back to nodepool, which destroys it and starts all over again (slaves are not reused, and also have no important details on them, as they are essentially publicly accessible). Zuul will wait for the results of all jobs and post the result back to Gerrit and give either a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but we'll ignore that bit for now).

In a nutshell, that is the CI work-flow that happens thousands-upon-thousands of times a day keeping OpenStack humming along.
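
To make the hand-off in the middle of that workflow concrete, here is a deliberately tiny Python sketch of the pattern just described: a scheduler puts labelled job requests onto a shared queue, and a worker registered for a matching node label consumes them. It is a toy model only -- an in-process queue stands in for gearman, and the job, node and change names are invented -- not actual Zuul, gearman or Jenkins code.

  # Toy model of the Zuul -> gearman -> Jenkins hand-off described above.
  # A plain in-process queue stands in for gearman; the job and node names
  # are made up for illustration, not OpenStack's actual values.
  import queue
  import threading

  job_queue = queue.Queue()          # "gearman": decouples scheduler from workers

  def scheduler():
      """Plays the role of Zuul: decide which jobs to run and queue them."""
      for change in ("I1234abcd", "I5678efgh"):
          job_queue.put({"job": "foo-bar-baz", "node": "ubuntu-trusty",
                         "change": change})
      job_queue.put(None)            # sentinel: nothing more to schedule

  def worker(label):
      """Plays the role of a Jenkins worker registered for one node label."""
      while True:
          request = job_queue.get()
          if request is None:
              break
          # A real worker only takes jobs whose node label it can satisfy.
          if request["node"] == label:
              print("running %s for %s on a %s node"
                    % (request["job"], request["change"], label))

  threading.Thread(target=scheduler).start()
  worker("ubuntu-trusty")

The property worth noticing, which the real system shares, is that the scheduler never needs to know which worker (or how many) will eventually pick a job up; the queue in the middle decouples the two sides.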

Image builds

There is, however, another more asynchronous part of the process that hides a lot of details the rest of the system relies on. Illustrated in step 8 above, this is the management of the images that tests are being run upon. Above we said that a test runs on a ubuntu-trusty, centos, fedora or some other type of node, but glossed over where these images come from.

Firstly, what are these images, and why build them at all? These images are where the "rubber hits the road" -- where DevStack, functional testing or whatever else someone might want to test is actually run for real. Caching is a big part of the role of these images. With thousands of jobs going on every day, an occasional network blip is not a minor annoyance, but creates constant and difficult to debug CI failures. We want the images that CI runs on to rely on as few external resources as possible so test runs are as stable as possible. This means caching all the git trees tests might use, things like images consumed during tests and other bits and pieces. Obviously a cache is only as useful as the data in it, so we build these images up every day to keep them fresh.

If you log into almost any cloud-provider's interface, they almost certainly have a range of pre-canned images of common distributions for you to use. At first, the base images for OpenStack CI testing came from what the cloud-providers had as their public image types. However, over time a number of issues emerged:

  1. Providers rarely leave these images alone. One day you would boot the image to find a bunch of Python libraries pip-installed, or a mount-point moved, or base packages removed (all happened).
  2. Providers don't have some images you want (like a latest Fedora), or have different versions, or different point releases. All update asynchronously whenever they get around to it.
  3. No two images, even for the same distribution or platform, are the same. Every provider seems to do something "helpful" to the images which requires some sort of workaround.
  4. Even if the changes are helpful, it does not make for consistent and reproducible testing if every time you run, you're on a slightly different base system.

So the original incarnation of image building was that nodepool would start one of these provider images, run a bunch of scripts on it to make a base-image (do the caching, setup keys, etc), snapshot it and then start putting VMs based on these images into the pool for testing. The first problem you hit here is that the number of images being built starts to explode when you take into account multiple providers and multiple regions. With Rackspace and the (now defunct) HP Cloud there was a situation where we were building 4 or 5 images across a total of about 8 regions -- meaning anywhere up to 40 separate image builds happening. It was almost a fait accompli that some images would fail every day -- nodepool can deal with this by reusing old snapshots; but this leads to an inconsistent and heterogeneous testing environment.

OpenStack is like a gigantic Jenga tower, with a full DevStack deployment resulting in hundreds of libraries and millions of lines of code all being exercised at once. The testing images are right at the bottom of all this, and it doesn't take much to make the whole thing fall over (see points about providers not leaving images alone). This leads to constant fire-fighting and everyone annoyed as all CI stops. Naturally there was a desire for something much more consistent -- a single image that could run across multiple providers in a much more tightly controlled manner.

Upstream-based builds

Upstream distributions do provide their "cloud-images", which are usually pre-canned .qcow2 format files suitable for uploading to your average cloud. So the diskimage-builder tool was put into use creating images for nodepool, based on these upstream-provided images. In essence, diskimage-builder uses a series of elements (each, as the name suggests, designed to do one thing) that allow you to build a completely customised image. It handles all the messy bits of laying out the image file, tries to be smart about caching large downloads, and takes care of final steps like conversion to qcow2 or vhd or whatever your cloud requires. So nodepool has used diskimage-builder to create customised images based upon the upstream releases for some time. These are better, but still have some issues for the CI environment:

  1. You still really have no control over what does or does not go into the upstream base images. You don't notice a change until you deploy a new image based on an updated version and things break.
  2. The images still have a fair amount of "stuff" on them. For example cloud-init is a rather large Python program and has a fair few dependencies. These dependencies can either conflict with what parts of OpenStack want, or inversely end up hiding requirements because we end up with a dependency tacitly provided by some part of the base-image. The whole idea of the CI is that (as much as possible) you're not making any assumptions about what is required to run your tests -- you want everything explicitly included. (If you were starting this whole thing again, things like Docker might come into play. Indeed they may be in the future. But don't forget that DevStack, the major CI deployment mechanism, was started before Docker existed. And there's tricky stuff with networking and Neutron etc going on).
  3. An image that "works everywhere" across multiple cloud-providers is quite a chore. cloud-init hasn't always had support for config-drive and Rackspace's DHCP-less environment, for example. Providers seem to like providing different networking schemes or configuration methods.

Minimal builds

To this end, diskimage-builder now has a series of "minimal" builds that are really that -- systems with essentially nothing on them. For Debian and Ubuntu, this is achieved via debootstrap; for Fedora and CentOS we replicate this with manual installs of base packages into a clean chroot environment. We add on a range of important "elements" that make the image useful; for example, for networking, we have simple-init which brings up the network consistently across all our providers but has no dependencies to mess with the base system. If you check the elements provided by project-config you can see a range of specific elements that OpenStack Infra runs at each image build (these are actually specified in arguments to nodepool, see the config file, particularly the diskimages section). These custom elements do things like caching, using puppet to install the right authorized_keys files and setting up a few needed things to connect to the host. In general, you can see the logs of an image build provided by nodepool for each daily build.

So now, each day at 14:00 UTC nodepool builds the daily images that will be used for CI testing. We have one image of each type that (theoretically) works across all our providers. After it finishes building, nodepool uploads the image to all providers (p.s. the process of doing this is so insanely terrible it spawned shade; this deserves many posts of its own) at which point it will start being used for CI jobs.

Dependencies

But, as they say, there's more! Personally I'm not sure how it started, but OpenStack CI ended up with the concept of bare nodes and devstack nodes. A bare node was one that was used for functional testing; i.e. what you do when you type tox -e py27 in basically any OpenStack project. The problem with this is that tox has plenty of information about installing required Python packages into the virtualenv for testing; but it doesn't know anything about the system packages required to build the Python libraries. This means things like gcc and -devel packages which many Python libraries use to build library bindings. In contrast to this, DevStack has always been able to bootstrap itself from a blank system, ensuring it has the right libraries, etc, installed to be able to get a functional OpenStack environment up and running.

If you remember the previous comments, we don't want things pre-installed for DevStack, because it hides actual devstack dependencies that we want explicitly defined. But the bare nodes, used for functional testing, were different -- we had an ever-growing and not well-defined list of packages that were installed on those nodes to make sure functional testing worked. You don't want jobs relying on this; we want to be sure if jobs have a dependency, they require it explicitly.

So this is where a tool called bindep comes in. OpenStack has the concept of global requirements -- those Python dependencies that are common across all projects so version skew becomes somewhat manageable. This now has some extra information in the other-requirements.txt file, which lists the system packages required to build the Python packages. bindep knows how to look at this and get the right packages for the platform it is running on. Indeed -- remember how it was previously mentioned we want to minimise dependencies on external resources at runtime? Well, we can pre-cache all of these packages onto the images, knowing that jobs are likely to require them. How do we get the packages installed? The way we really should be doing it -- as part of the CI job. There is a macro called install-distro-packages which uses bindep to install those packages as required by the global-requirements list. The result -- no more need for this bare node type! In all cases we can start with essentially a blank image and all the dependencies to run the job are expressed by and within the job -- leading to a consistent and reproducible environment. Several things have broken as part of removing bare nodes -- this is actually a good thing, because it means we have revealed areas where we were making assumptions in jobs about what the underlying platform provides; issues that get fixed by thinking about, and explicitly declaring, the dependencies needed to bring jobs up.
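
As a rough illustration of that job-time step (this is not bindep's real file format or command-line interface; the package list below is invented), the idea reduces to: read the system packages a job has declared it needs and install them before the tests start.

  # Sketch of the "install declared system packages at job start" idea.
  # The package list here is invented for illustration; real jobs use bindep
  # and the project's other-requirements.txt file instead.
  import subprocess

  def install_system_packages(packages):
      """Install the build dependencies a job has declared it needs."""
      if packages:
          subprocess.check_call(["sudo", "apt-get", "install", "-y"] + packages)

  declared = ["gcc", "libffi-dev", "libssl-dev"]   # hypothetical declaration
  install_system_packages(declared)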

There's a few other macros there that do things like provide MySQL/Postgres instances or set up other common job requirements. By splitting these out we also improve the performance of jobs, which now only bring in the dependencies they need -- we don't waste time doing things like setting up databases for jobs that don't need them.

Conclusion

While dealing with multiple providers, image-types and dependency chains has taken a great deal of effort from the infra team, to everyone's credit I don't think the project has really noticed much going on underneath.

OpenStack CI has transitioned to a situation where there is a single image type for each platform we test that deploys unmodified across all our providers. We have better insight into our dependencies and better tools to manage them. This leads to greatly decreased maintenance burdens, better consistency and better performance; all great things to bring to OpenStack CI!


OpenSTEM: New Viking Site in North America

Sat 02nd Apr 2016 16:04

The Vikings were the first Europeans to reach North America, more than 1000 years ago. The Vikings established settlements and traded with indigenous people in North America for about 400 years, finally abandoning the continent less than 100 years before Columbus’ voyage.

The story of the Vikings’ exploits in North America provides not only additional context to the history of human exploration, but also matches ideally to the study of the Geography of North America, as the names used by the Vikings for areas in North America provide a perfect match to the biomes in these regions.

Long consigned to the realms of myth within Norse sagas, the first archaeological evidence of the truth of the old stories of “Vinland” (Newfoundland) was uncovered by a Norwegian archaeologist in 1960. In recent years archaeologists have uncovered yet more evidence of Viking settlements in North America. OpenSTEM is delighted to share this story of how satellite technology is assisting this process, as we publish our own resource on the Vikings in North America.

http://www.nytimes.com/2016/04/01/science/vikings-archaeology-north-america-newfoundland.html

The site was identified last summer after satellite images showed possible man-made shapes under discoloured vegetation on the Newfoundland coast.


BlueHackers: OSMI Mental Health in Tech Survey 2016

Sat 02nd Apr 2016 13:04

Participate in a survey about how mental health is viewed within the tech/IT workplace, and the prevalence of certain mental health disorders within the tech industry.


Glen Turner: Getting started with Northbound Networks' Zodiac FX OpenFlow switch

Sat 02nd Apr 2016 12:04

Yesterday I received a Zodiac FX four 100Base-TX port OpenFlow switch as a result of Northbound Networks' KickStarter. Today I put the Zodiac FX through its paces.

Plug the supplied USB cable into the Zodiac FX and into a PC. The Zodiac FX will appear in Debian as the serial device /dev/ttyACM0. The kernel log says:

  debian:~ $ dmesg
  usb 1-1.1.1: new full-speed USB device number 1 using dwc_otg
  usb 1-1.1.1: New USB device found, idVendor=03eb, idProduct=2404
  usb 1-1.1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
  usb 1-1.1.1: Product: Zodiac
  usb 1-1.1.1: Manufacturer: Northbound Networks
  cdc_acm 1-1.1.1:1.0: ttyACM0: USB ACM device

You can use Minicom (obtained with sudo apt-get install minicom) to speak to that serial port by starting it with minicom --device /dev/ttyACM0. You'll want to be in the "dialout" group; you can add yourself with sudo usermod --append --groups dialout $USER, but you'll need to log in again for that to take effect. The serial parameters are speed = 115,200bps, data bits = 8, parity = none, stop bits = 1, CTS/RTS = off, XON/XOFF = off.

The entry text is:

  [ASCII-art "Zodiac FX" banner]
  by Northbound Networks

  Type 'help' for a list of available commands

  Zodiac_FX#

Typing "help" gives:

  The following commands are currently available:

  Base:
    config
    openflow
    debug
    show ports
    show status
    show version

  Config:
    save
    show config
    show vlans
    set name <name>
    set mac-address <mac address>
    set ip-address <ip address>
    set netmask <netmasks>
    set gateway <gateway ip address>
    set of-controller <openflow controller ip address>
    set of-port <openflow controller tcp port>
    set failstate <secure|safe>
    add vlan <vlan id> <vlan name>
    delete vlan <vlan id>
    set vlan-type <openflow|native>
    add vlan-port <vlan id> <port>
    delete vlan-port <port>
    factory reset
    set of-version <version(0|1|4)>
    exit

  OpenFlow:
    show status
    show flows
    enable
    disable
    clear flows
    exit

  Debug:
    read <register>
    write <register> <value>
    exit

Some baseline messing about:

  Zodiac_FX# show ports

  Port 1
    Status: DOWN
    VLAN type: OpenFlow
    VLAN ID: 100
  Port 2
    Status: DOWN
    VLAN type: OpenFlow
    VLAN ID: 100
  Port 3
    Status: DOWN
    VLAN type: OpenFlow
    VLAN ID: 100
  Port 4
    Status: DOWN
    VLAN type: Native
    VLAN ID: 200

  Zodiac_FX# show status

  Device Status
    Firmware Version: 0.57
    CPU Temp: 37 C
    Uptime: 00:00:01

  Zodiac_FX# show version
  Firmware version: 0.57

  Zodiac_FX# config
  Zodiac_FX(config)# show config

  Configuration
    Name: Zodiac_FX
    MAC Address: 70:B3:D5:00:00:00
    IP Address: 10.0.1.99
    Netmask: 255.255.255.0
    Gateway: 10.0.1.1
    OpenFlow Controller: 10.0.1.8
    OpenFlow Port: 6633
    Openflow Status: Enabled
    Failstate: Secure
    Force OpenFlow version: Disabled
    Stacking Select: MASTER
    Stacking Status: Unavailable

  Zodiac_FX(config)# show vlans

    VLAN ID    Name          Type
    100        'Openflow'    OpenFlow
    200        'Controller'  Native

  Zodiac_FX(config)# exit
  Zodiac_FX# openflow
  Zodiac_FX(openflow)# show status

  OpenFlow Status
    Status: Disconnected
    No tables: 1
    No flows: 0
    Table Lookups: 0
    Table Matches: 0

  Zodiac_FX(openflow)# show flows
  No Flows installed!

  Zodiac_FX(openflow)# exit

We want to use the controller address on our PC and connect eth0 on the PC to Port 4 of the switch (probably by plugging them both into the same local area network).

  Zodiac_FX# show ports
  …
  Port 4
    Status: UP
    VLAN type: Native
    VLAN ID: 200

  debian:~ $ sudo ip addr add 10.0.1.8/24 label eth0:zodiacfx dev eth0
  debian:~ $ ip addr show label eth0:zodiacfx
      inet 10.0.1.8/24 scope global eth0:zodiacfx
         valid_lft forever preferred_lft forever
  debian:~ $ ping 10.0.1.99
  PING 10.0.1.99 (10.0.1.99) 56(84) bytes of data.
  64 bytes from 10.0.1.99: icmp_seq=1 ttl=255 time=0.287 ms
  64 bytes from 10.0.1.99: icmp_seq=2 ttl=255 time=0.296 ms
  64 bytes from 10.0.1.99: icmp_seq=3 ttl=255 time=0.271 ms
  ^C
  --- 10.0.1.99 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 1998ms
  rtt min/avg/max/mdev = 0.271/0.284/0.296/0.022 ms

Now to check the OpenFlow basics. We'll use the POX controller, which is a simple controller written in Python 2.7.

  debian:~ $ git clone https://github.com/noxrepo/pox.git
  debian:~ $ cd pox
  debian:~ $ ./pox.py openflow.of_01 --address=10.0.1.8 --port=6633 --verbose
  POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
  DEBUG:core:POX 0.2.0 (carp) going up...
  DEBUG:core:Running on CPython (2.7.9/Mar 8 2015 00:52:26)
  DEBUG:core:Platform is Linux-4.1.19-v7+-armv7l-with-debian-8.0
  INFO:core:POX 0.2.0 (carp) is up.
  DEBUG:openflow.of_01:Listening on 10.0.1.8:6633
  INFO:openflow.of_01:[70-b3-d5-00-00-00 1] connected

  Zodiac_FX(openflow)# show status

  OpenFlow Status
    Status: Connected
    Version: 1.0 (0x01)
    No tables: 1
    No flows: 0
    Table Lookups: 0
    Table Matches: 0

You can then load POX programs to manipulate the network. A popular first choice might be to turn the Zodiac FX into a flooding hub.

  debian:~ $ ./pox.py --verbose openflow.of_01 --address=10.0.1.8 --port=6633 forwarding.hub
  POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
  INFO:forwarding.hub:Hub running.
  DEBUG:core:POX 0.2.0 (carp) going up...
  DEBUG:core:Running on CPython (2.7.9/Mar 8 2015 00:52:26)
  DEBUG:core:Platform is Linux-4.1.19-v7+-armv7l-with-debian-8.0
  INFO:core:POX 0.2.0 (carp) is up.
  DEBUG:openflow.of_01:Listening on 10.0.1.8:6633
  INFO:openflow.of_01:[70-b3-d5-00-00-00 1] connected
  INFO:forwarding.hub:Hubifying 70-b3-d5-00-00-00

  Zodiac_FX(openflow)# show flows

  Flow 1
    Match:
      Incoming Port: 0
      Ethernet Type: 0x0000
      Source MAC: 00:00:00:00:00:00
      Destination MAC: 00:00:00:00:00:00
      VLAN ID: 0
      VLAN Priority: 0x0
      IP Protocol: 0
      IP ToS Bits: 0x00
      TCP Source Address: 0.0.0.0
      TCP Destination Address: 0.0.0.0
      TCP/UDP Source Port: 0
      TCP/UDP Destination Port: 0
      Wildcards: 0x0010001f
      Cookie: 0x0
    Attributes:
      Priority: 32768
      Duration: 9 secs
      Hard Timeout: 0 secs
      Idle Timeout: 0 secs
      Byte Count: 0
      Packet Count: 0
    Actions:
      Action 1:
        Output: FLOOD
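
For reference, the component behind a flooding hub amounts to very little code. The sketch below follows the stock POX hub example (on each new switch connection, install a single match-everything flow whose action is to flood); the API details are written from memory rather than copied from the POX source, so treat it as a sketch.

  # Minimal POX component in the style of forwarding.hub: when a switch
  # connects, install one catch-all flow that floods every packet.
  from pox.core import core
  import pox.openflow.libopenflow_01 as of

  log = core.getLogger()

  def _handle_ConnectionUp(event):
      msg = of.ofp_flow_mod()                              # match-all flow
      msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
      event.connection.send(msg)
      log.info("Hubifying %s", event.connection)

  def launch():
      core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)

Dropped into POX's ext/ directory as, say, myhub.py, it should be loadable by name in much the same way forwarding.hub is loaded above.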

If we now send a packet into Port 1 we see it flooded to Port 2 and Port 3.

We also see it flooded to Port 4 (which is in 'native' mode). Flooding the packet up the same port as the OpenFlow controller isn't a great design choice. It would be better if the switch had four possible modes for ports with traffic kept distinct between them: native switch forwarding, OpenFlow forwarding, OpenFlow control, and switch management. The strict separation of forwarding, control and management is one of the benefits of software defined networks (that does lead to questions around how to bootstrap a remote switch, but the Zodiac FX isn't the class of equipment where that is a realistic issue).

VLANs between ports only seem to matter for native mode. An OpenFlow program can — and will — happily ignore the port's VLAN assignment.

The Zodiac FX is currently an OpenFlow 1.0 switch. So it can currently manipulate MAC addresses but not other packet headers. That still gives a surprising number of applications. Northbound Networks say OpenFlow 1.3 -- with its manipulation of IP addresses -- is imminent.

The Zodiac FX is an interesting bit of kit. It is well worth buying one even at this early stage of development because it is much better for getting your hands dirty (and thus learning) than software-only simulated OpenFlow networks.

The source code is open source. It is on Github in some Atmel programming workbench format. I suppose it's time to unpack that, see if there's a free software Atmel toolchain, and set about fixing this port mode bug. I do hope simple modification of the switch's software is possible: a switch to teach people OpenFlow is great; a switch to teach people embedded network programming would be magnificent.


Francois Marier: How Safe Browsing works in Firefox

Sat 02nd Apr 2016 09:04

Firefox has had support for Google's Safe Browsing since 2005 when it started as a stand-alone Firefox extension. At first it was only available in the USA, but it was opened up to the rest of the world in 2006 and moved to the Google Toolbar. It then got integrated directly into Firefox 2.0 before the public launch of the service in 2007.

Many people seem confused by this phishing and malware protection system and while there is a pretty good explanation of how it works on our support site, it doesn't go into technical details. This will hopefully be of interest to those who have more questions about it.

Browsing Protection

The main part of the Safe Browsing system is the one that watches for bad URLs as you're browsing. Browsing protection currently protects users from:

  • malware sites
  • sites distributing unwanted software
  • phishing sites

If a Firefox user attempts to visit one of these sites, a warning page will show up instead, which you can see for yourself here:

The first two warnings can be toggled using the browser.safebrowsing.malware.enabled preference (in about:config) whereas the last one is controlled by browser.safebrowsing.enabled.

List updates

It would be too slow (and privacy-invasive) to contact a trusted server every time the browser wants to establish a connection with a web server. Instead, Firefox downloads a list of bad URLs every 30 minutes from the server (browser.safebrowsing.provider.google.updateURL) and does a lookup against its local database before displaying a page to the user.

Downloading the entire list of sites flagged by Safe Browsing would be impractical due to its size, so the following transformations are applied (a short sketch follows the list):

  1. each URL on the list is canonicalized,
  2. then hashed,
  3. of which only the first 32 bits of the hash are kept.
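
In concrete terms the prefix computation looks something like the sketch below (the Safe Browsing protocol uses SHA-256 for the hashing step; the canonicalisation step is heavily simplified here):

  # Compute the 32-bit prefix that gets looked up in the local database.
  # Canonicalisation is the fiddly part and is only hinted at here;
  # the hash itself is SHA-256, truncated to its first 4 bytes.
  import hashlib

  def prefix_for(url):
      canonical = url.strip().lower()     # stand-in for full canonicalisation
      digest = hashlib.sha256(canonical.encode("utf-8")).digest()
      return digest[:4]                   # first 32 bits only

  print(prefix_for("http://example.com/").hex())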

The lists that are requested from the Safe Browsing server and used to flag pages as malware/unwanted or phishing can be found in urlclassifier.malwareTable and urlclassifier.phishTable respectively.

If you want to see some debugging information in your terminal while Firefox is downloading updated lists, turn on browser.safebrowsing.debug.

Once downloaded, the lists can be found in the cache directory:

  • ~/.cache/mozilla/firefox/XXXX/safebrowsing/ on Linux
  • ~/Library/Caches/Firefox/Profiles/XXXX/safebrowsing/ on Mac
  • C:\Users\XXXX\AppData\Local\mozilla\firefox\profiles\XXXX\safebrowsing\ on Windows

Resolving partial hash conflicts

Because the Safe Browsing database only contains partial hashes, it is possible for a safe page to share the same 32-bit hash prefix as a bad page. Therefore when a URL matches the local list, the browser needs to know whether or not the rest of the hash matches the entry on the Safe Browsing list.

In order to resolve such conflicts, Firefox requests from the Safe Browsing server (browser.safebrowsing.provider.mozilla.gethashURL) all of the hashes that start with the affected 32-bit prefix and adds these full-length hashes to its local database. Turn on browser.safebrowsing.debug to see some debugging information on the terminal while these "completion" requests are made.

If the current URL doesn't match any of these full hashes, the load proceeds as normal. If it does match one of them, a warning interstitial page is shown and the load is canceled.
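
Putting the prefix list and the completion requests together, the local decision amounts to something like this sketch, in which fetch_completions() is a hypothetical stand-in for the gethash request and the canonicalisation is again simplified:

  # A 32-bit prefix hit is only a "maybe"; the verdict needs the
  # full-length hashes for that prefix.
  import hashlib

  def fetch_completions(prefix):
      """Hypothetical stub: ask the server for all full hashes with this prefix."""
      raise NotImplementedError("network request, omitted from this sketch")

  def is_blocked(url, local_prefixes, full_hash_cache):
      full = hashlib.sha256(url.lower().encode("utf-8")).digest()
      prefix = full[:4]
      if prefix not in local_prefixes:
          return False                        # no prefix match: load proceeds
      if prefix not in full_hash_cache:
          full_hash_cache[prefix] = fetch_completions(prefix)
      return full in full_hash_cache[prefix]  # warn only on a full-hash match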

Download Protection

The second part of the Safe Browsing system protects users against malicious downloads. It was launched in 2011, implemented in Firefox 31 on Windows and enabled in Firefox 39 on Mac and Linux.

It roughly works like this (a sketch of the decision flow follows the list):

  1. Download the file.
  2. Check the main URL, referrer and redirect chain against a local blocklist (urlclassifier.downloadBlockTable) and block the download in case of a match.
  3. On Windows, if the binary is signed, check the signature against a local whitelist (urlclassifier.downloadAllowTable) of known good publishers and release the download if a match is found.
  4. If the file is not a binary file then release the download.
  5. Otherwise, send the binary file's metadata to the remote application reputation server (browser.safebrowsing.downloads.remote.url) and block the download if the server indicates that the file isn't safe.
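
As a sketch, that sequence maps onto a decision function along the following lines; every helper in it is a hypothetical stand-in for the real check, not Firefox code.

  # Sketch of the download-protection decision flow described above.
  # All four helpers are hypothetical stubs standing in for the real checks.

  def url_matches_blocklist(url_chain):             # step 2: local blocklist
      return False                                  # stub

  def signed_by_whitelisted_publisher(download):    # step 3: Windows whitelist
      return False                                  # stub

  def is_binary(download):                          # step 4: binary check
      return True                                   # stub

  def remote_verdict(metadata):                     # step 5: reputation lookup
      return "safe"                                 # stub

  def should_block(download, on_windows):
      if url_matches_blocklist(download["url_chain"]):
          return True
      if on_windows and signed_by_whitelisted_publisher(download):
          return False
      if not is_binary(download):
          return False
      return remote_verdict(download["metadata"]) != "safe"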

Blocked downloads can be unblocked by right-clicking on them in the download manager and selecting "Unblock".

While the download protection feature is automatically disabled when malware protection (browser.safebrowsing.malware.enabled) is turned off, it can also be disabled independently via the browser.safebrowsing.downloads.enabled preference.

Note that Step 5 is the only point at which any information about the download is shared with Google. That remote lookup can be suppressed via the browser.safebrowsing.downloads.remote.enabled preference for those users concerned about sending that metadata to a third party.

Types of malware

The original application reputation service would protect users against "dangerous" downloads, but it has recently been expanded to also warn users about unwanted software and software that's not commonly downloaded.

These various warnings can be turned on and off in Firefox through the following preferences:

  • browser.safebrowsing.downloads.remote.block_dangerous
  • browser.safebrowsing.downloads.remote.block_dangerous_host
  • browser.safebrowsing.downloads.remote.block_potentially_unwanted
  • browser.safebrowsing.downloads.remote.block_uncommon

and tested using Google's test page.

If you want to see how often each "verdict" is returned by the server, you can have a look at the telemetry results for Firefox Beta.

Privacy

One of the most persistent misunderstandings about Safe Browsing is the idea that the browser needs to send all visited URLs to Google in order to verify whether or not they are safe.

While this was an option in version 1 of the Safe Browsing protocol (as disclosed in their privacy policy at the time), support for this "enhanced mode" was removed in Firefox 3 and the version 1 server was decommissioned in late 2011 in favor of version 2 of the Safe Browsing API which doesn't offer this type of real-time lookup.

Google explicitly states that the information collected as part of operating the Safe Browsing service "is only used to flag malicious activity and is never used anywhere else at Google" and that "Safe Browsing requests won't be associated with your Google Account". In addition, Firefox adds a few privacy protections:

  • Query string parameters are stripped from URLs we check as part of the download protection feature.
  • Cookies set by the Safe Browsing servers to protect the service from abuse are stored in a separate cookie jar so that they are not mixed with regular browsing/session cookies.
  • When requesting complete hashes for a 32-bit prefix, Firefox throws in a number of extra "noise" entries to obfuscate the original URL further.

On balance, we believe that most users will want to keep Safe Browsing enabled, but we also make it easy for users with particular needs to turn it off.

Learn More

If you want to learn more about how Safe Browsing works in Firefox, you can find all of the technical details on the Safe Browsing and Application Reputation pages of the Mozilla wiki or you can ask questions on our mailing list.

Google provides some interesting statistics about what their systems detect in their transparency report and offers a tool to find out why a particular page has been blocked. Some information on how phishing sites are detected is also available on the Google Security blog, but for more detailed information about all parts of the Safe Browsing system, see the following papers:


Rusty Russell: BIP9: versionbits In a Nutshell

Fri 01st Apr 2016 12:04

Hi, I was one of the authors/bikeshedders of BIP9, which Pieter Wuille recently refined (and implemented) into its final form.  The bitcoin core plan is to use BIP9 for activations from now on, so let’s look at how it works!

Some background:

  • Blocks have a 32-bit “version” field.  If the top three bits are “001”, the other 29 bits represent possible soft forks.
  • BIP9 uses the same 2016-block periods (roughly 2 weeks) as the difficulty adjustment does.

So, let’s look at BIP68 & 112 (Sequence locks and OP_CHECKSEQUENCEVERIFY) which are being activated together:

  • Every soft fork chooses an unused bit: these are using bit 1 (not bit 0), so expect to see blocks with version 536870914 (see the sketch after this list).
  • Every soft fork chooses a start date: these use May 1st, 2016, and time out a year later if it fails.
  • Every period, we look back to see if 95% have a bit set (75% for testnet).
    • If so, and that bit is for a known soft fork, and we’re within its start time, that soft fork is locked-in: it will activate after another 2016 blocks, giving the stragglers time to upgrade.
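
To make the bit arithmetic concrete, here is a small Python sketch of the version value and the signalling check; the 536870914 figure and the 95% (75% on testnet) threshold follow directly from the numbers above.

  # BIP9 version-bits arithmetic from the description above.
  VERSIONBITS_TOP_BITS = 0x20000000   # top three bits are "001"
  PERIOD = 2016                       # blocks per period

  def signalling_version(bit):
      """Block version a miner sets to signal support for a soft fork bit."""
      return VERSIONBITS_TOP_BITS | (1 << bit)

  def signals(version, bit):
      """Does this block version signal the given soft fork bit?"""
      return (version >> 29) == 0b001 and bool(version & (1 << bit))

  assert signalling_version(1) == 536870914      # BIP68/112 use bit 1

  def locked_in(period_versions, bit, threshold=0.95):
      """Check one 2016-block period against the signalling threshold."""
      assert len(period_versions) == PERIOD
      return sum(signals(v, bit) for v in period_versions) >= threshold * PERIOD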

There are also two alerts in the bitcoin core implementation:

  • If at any stage 50 of the last 100 blocks have unexpected bits set, you get Warning: Unknown block versions being mined! It’s possible unknown rules are in effect.
  • If we see an unknown softfork bit activate: you get Warning: unknown new rules activated (versionbit X).

Now, when could the OP_CSV soft forks activate? bitcoin-core will only start setting the bit in the first period after the start date, so somewhere between 1st and 15th of May[1], then will take another period to lock-in (even if 95% of miners are already upgraded), then another period to activate.  So early June would be the earliest possible date, but we’ll get two weeks notice for sure.

The Old Algorithm

For historical purposes, I’ll describe how the old soft-fork code worked.  It used version as a simple counter, eg. 3 or above meant BIP66, 4 or above meant BIP65 support.  Every block, it examined the last 1000 blocks to see if more than 75% had the new version.  If so, then the new softfork rules were enforced on new version blocks: old version blocks would still be accepted, and use the old rules.  If more than 95% had the new version, old version blocks would be rejected outright.
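
For comparison, the old counting rules can be sketched like this (again, a sketch of the description above rather than the actual bitcoin core code):

  # Old-style soft fork activation: count how many of the last 1000 blocks
  # carry at least the new version number.

  def supermajority(last_1000_versions, new_version, fraction):
      count = sum(v >= new_version for v in last_1000_versions)
      return count > fraction * len(last_1000_versions)

  def rules_for_block(block_version, last_1000_versions, new_version):
      enforce_on_new = (block_version >= new_version and
                        supermajority(last_1000_versions, new_version, 0.75))
      reject_old_blocks = supermajority(last_1000_versions, new_version, 0.95)
      return enforce_on_new, reject_old_blocks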

I remember Gregory Maxwell and other core devs stayed up late several nights because BIP66 was almost activated, but not quite.  And as a miner there was no guarantee on how long before you had to upgrade: one smaller miner kept producing invalid blocks for weeks after the BIP66 soft fork.  Now you get two weeks’ notice (probably more if you’re watching the network).

Finally, this change allows for miners to reject a particular soft fork without rejecting them all.  If we’re going to see more contentious or competing proposals in the future, this kind of plumbing allows it.

Hope that answers all your questions!

 

[1] It would be legal for an implementation to start setting it on the very first block past the start date, though it’s easier to only think about version bits once every two weeks as bitcoin-core does.


Colin Charles: MariaDB Berlin meetup

Fri 01st Apr 2016 12:04

Come meet the MariaDB Server and MariaDB MaxScale developers in Berlin (April 12 2016), talk about new upcoming things in MariaDB Server 10.2, as well as the next MariaDB MaxScale releases. Let’s not forget the talks about the upcoming developments with the Connectors.

It will be a fun-filled, information-packed night. Food & drink will be provided.

For more information and the opportunity to RSVP, check out our meetup page. RSVP soon, as we only have 99 spaces available.


Colin Charles: Speaking in April 2016

Thu 31st Mar 2016 15:03

I have a few speaking engagements coming up in April 2016, and I hope to see you at some of these events. I’m always available to talk shop (opensource, MariaDB Server, MySQL, etc.) so looking forward to saying hi.

  • A short talk at the MariaDB Berlin Meetup on April 12 2016 – this should be fun if you’re in Berlin as you’ll see many people from the MariaDB Server and MariaDB MaxScale world talk about what they’re doing for the next releases.
  • rootconf.in – April 14-15 2016, tutorial day on 16 – I’ve not been to India since about 2011, so I’m looking forward to this trip to Bangalore (and my first time to a HasGeek event). Getting the email from the conference chair was very nice, and I believe I’m giving a keynote and a tutorial.
  • Percona Live Data Performance Conference 2016 – April 18-21 2016 – this is obviously the event for the MySQL ecosystem, and I’m happy to state that I’m giving a tutorial and a talk at this event.
  • Open Source Data Centre Conference – April 26-28 2016 – It’s been a few years since I’ve been here, but I’m looking forward to presenting to the audience again.

There’s some prep work for some internal presentations and tutorials that I’ll be running in Berlin at the company meeting as well.


Arjen Lentz: Australian Lawyers And Scholars Are Encouraging Civil Disobedience In This Year

Thu 31st Mar 2016 12:03

The 2016 Australian Census will not be anonymous, and a lot of people aren’t happy about that. A group of Australian privacy advocates, includin…


OpenSTEM: Substance over Assumptions: Use of Technology in Education | Eric Sheninger

Thu 31st Mar 2016 11:03
http://esheninger.blogspot.com.au/2016/03/substance-over-assumptions-and.html

For educational technology to be fully embraced as a powerful teaching and learning tool there must be a focus on substance over assumptions and generalizations. There is a great deal of evidence to make educators reflect upon their use of technology. The most glaring was the OECD Report that came out last fall. Here is an excerpt:

“Schools have yet to take advantage of the potential of technology in the classroom to tackle the digital divide and give every student the skills they need in today’s connected world, according to the first OECD PISA assessment of digital skills.

Even countries which have invested heavily in information and communication technologies (ICT) for education have seen no noticeable improvement in their performances in PISA results for reading, mathematics, or science.”

Read the full article on Eric Sheninger’s blog site.

Ian Wienand: Durable photo workflow

Wed 30th Mar 2016 16:03

Ever since my kids were born I have accumulated thousands of digital happy-snaps and I have finally gotten to a point where I'm quite happy with my work-flow. I have always been extremely dubious of using any sort of external all-in-one solution to managing my photos; so many things seem to shut down, cease development or disappear, all leaving you to have to figure out how to migrate to the next latest thing (e.g. Picasa shutting down). So while there is nothing complicated or even generic about them, I have a few things in my photo-scripts repo that I use that might help others who like to keep a self-contained archive like myself.

Firstly I have a simple script to copy the latest photos from the SD card (i.e. those new since the last copy -- this is obviously very camera specific). I then split by date so I have a simple flat directory layout with each week's photos in it. With the price of SD cards and my rate of filling them up, I don't even bother wiping them at this point, but just keep them in the safe as a backup.
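
My own script is camera specific, but the date-splitting idea is simple enough to sketch with just the standard library; the version below keys off file modification time rather than camera metadata, and the directory naming and example paths are invented for illustration:

  # Sketch of "split photos into one directory per week". The real script is
  # camera specific; this one keys off mtime and an invented YYYY-Wnn layout.
  import datetime
  import os
  import shutil

  def sort_into_weeks(src_dir, dest_root):
      for name in sorted(os.listdir(src_dir)):
          if not name.lower().endswith((".jpg", ".jpeg")):
              continue
          path = os.path.join(src_dir, name)
          taken = datetime.datetime.fromtimestamp(os.path.getmtime(path))
          year, week, _ = taken.isocalendar()
          week_dir = os.path.join(dest_root, "%04d-W%02d" % (year, week))
          os.makedirs(week_dir, exist_ok=True)
          shutil.copy2(path, os.path.join(week_dir, name))

  # e.g. sort_into_weeks("/media/sdcard/DCIM/100CANON", "/srv/photos")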

For some reason I have a bit of a thing about geotagging all the photos so I know where I took them. Certainly some cameras do this today, but mine does not. So I have a two-pronged approach; I have a geotag script and then a small website easygeotag.info which basically lets me translate a point on Google maps to exiv2 command-line syntax quickly. Since I take a lot of photos in the same place, I can store points by name in a small file sourced by the script.

Then I want to add comments to the photos, which can be done with the perhaps lesser-known cousin to EXIF -- IPTC. Some time ago I wrote python bindings for libiptcdata and it has been working just fine ever since. Debian's python-iptcdata comes with an inbuilt script to set title and caption, which is easily wrapped.

So what I like about this is that I have my photos in a simple directory layout, with all metadata embedded within the actual image files themselves in very standardised formats that should be readable anywhere I choose to host them.

For sharing, I then upload to Flickr, which reads the IPTC data for titles and comments and the geotag info for nice map displays. I manually corral them into albums, and the Flickr "guest pass" is perfect for then sharing albums to friends and family without making them jump through hoops to register on a site to get access to the photos, or worse, host them myself. I consider Flickr a cache, because honestly (even though I pay) I expect it to shut down or turn evil at any time. Interestingly, their AI tagging is often quite accurate, and I imagine will only get more so. This is nice extra meta-data that you don't have to spend time on yourself.

The last piece has always been the "hit by a bus" component of all this. Can anyone figure out access to all these photos if I suddenly disappear? I've tried many things here -- at one point I was using rdiff-backup to sync encrypted bundles up to AWS for example -- but I very clearly found the problem in that when I forgot to keep the key safe I couldn't decrypt any of my backups (let alone anyone else figuring all this out).

Finally Google Nearline seems to be just what I want. It's offsite, redundant and the price is right; but more importantly I can very easily give access to the backup bucket to anyone with a Google address, who can then just hit a website to download the originals from the bucket (I left the link with my other "hit by a bus" bits and pieces). Of course what they then do with this data is their problem, but at least I feel like they have a chance. This even has an rsync like interface in the client, so I can quickly upload the new stuff from my home NAS (where I keep the photos in a RAID0).

I've been doing this now for 350 weeks, and have some 25,000 photos. I used to get an album up every week, but as the kids get older and we're closer to family I now do it about once a month. I do wonder if my kids will ever be interested in tagged and commented photos with pretty much their exact location from their childhood ... I doubt it, but it's nice to have.


James Morris: Linux Security Summit 2016 – CFP Announced!

Tue 29th Mar 2016 22:03

The 2016 Linux Security Summit (LSS) will be held in Toronto, Canada, on 25th and 26th, co-located with LinuxCon North America.  See the full announcement.

The Call for Participation (CFP) is now open, and submissions must be made by June 17th.  As with recent years, the committee is looking for refereed presentations and discussion topics.

This year, there will be a $100 registration fee for LSS, and you do not need to be registered for LinuxCon to attend LSS.

There’s also now an event twitter feed, for updates and announcements.

If you’ve been doing any interesting development, or deployment, of Linux security systems, please consider submitting a proposal!


Simon Lyall: Wellington Open 2016

Tue 29th Mar 2016 09:03

Over Easter 2016 (March 25th – 27th) I played in the Wellington Open Chess Tournament. I play in the tournament about half of the time. This year it was again being played at the CQ Hotel in Cuba Street, so I was able to stay at the venue and also visit my favorite Wellington cafes.

There were 43 players entered (the highest for several years) with around 9 coming down from Auckland. I was ranked 16th with a rating of 1988, and the top 4 Wellington players (Dive, Wastney, Ker & Croad), who are all ranked in the Top 10 in NZ, were playing.

See the Tournament’s page for details and downloads for the games. Photos by Lin Nah and me are also up on Flickr for Days one, two and three.

Round 1 – White vs Dominic Leman (unrated) – Result win

This game was over fairly quickly after my opponent's 5th move (Nf6), which let me win a free Bishop after (5.. Nf6 6.Nxc6 bxc6 7.Bxc5), and then they played (7.. Nxe4) to take the pawn, which loses the Knight since I just pin it against the King with Qe2 and pick it up a move or two later.

Round 2 – Black vs Michael Steadman (2338) – Result loss

Mike plays at my club and is rated well above me. However I put on a pretty poor show and made a mistake early in the Opening (which was one of my lines rather than something Mike usually plays). An error on move 5 lost me a pawn and left my position poor. I failed to improve and resigned on move 21.

Round 3 – White vs Kate Song (1701) – Result win

After 6. ..a5

I was very keen on beating Kate. While she is rated almost 200 points lower than me, she is improving faster and beat me in the last round of the Major Open at the NZ Champs at the start of this year.

We were the same colours as our game in January so I spent some time prepping the opening to avoid my previous mistakes.

In that game Black played 6.. a5 (see diagram) and I replied with the inaccurate Be2 and got tied into knots on the Queen side. This time I played 7. Bd3, which is a better line. However after 7. ..Nh6 8. dxc5 Bxc5 9. O-O black plays Ng4, which gives me some problems. After some back and forth Black ended up with a bit of a mid-game advantage, with a developed bishop pair and control of the open c-file.

27. Bg5 and I offer a draw

However on move 27, after the rooks had been swapped, I was able to play Bg5, which threatened to swap Black's good Bishop or push it backwards. I offered a draw.

Luckily for me Kate chose to swap the Bishops and Queens with 27. ..Bxg5 28.Nxg5 Qd1+ 29.Qxd1 Bxd1, which left me with almost all my pawns on black squares and pretty safe from her white-squared bishop. I then was able to march my King over to the Queenside while my Kingside was safe from the Bishop. After picking up the a-pawn when the Knight and Bishops swapped, I was left with a King plus a- and b-pawns vs King and b-pawn, with around 3 tempi in reserve for pushing back the Black king.

Round 3 – Michael Nyberg vs Leighton Nicholls

Position after 71. Kxg4

Another game during round 3 went very long. This was the position after move 71; White has just taken Black's last pawn. The game kept going till move 125! White kept trying to force Black to the edge of the board while Black kept his king close to the centre and the Knight nearby (keeping the king away with checks and fork threats).

At move 125 Black (Nicholls) claimed a draw under the 50-move rule, at which point Michael Nyberg asked “are you sure” and “are you prepared for any penalties?”. After Leighton confirmed he wanted to go ahead with the claim, Michael claimed that the draw rules were changed a couple of years ago and that King+Rook vs King+Knight was allowed 75 moves, and that since the draw claim was incorrect Leighton should lose.

However a check of the official FIDE rules online showed that there is no such special limit for that material; the rule is always 50 moves (Rule 9.3). The penalty for incorrectly claiming a draw would also have been 2 minutes added to Michael's time, not Leighton losing the game (Rule 9.5b).

The Arbiter checked the rules and declared the game a draw, while Michael grumbled about appealing it (which did not happen). Not a good way to end the game, since I thought Leighton defended very well, especially given the way Michael was very aggressive while being completely in the wrong.

There have been exceptions to the 50-move draw rule in the past, but it has been a flat 50 moves since at least 2001: while some positions take longer in theory, no human would actually be able to play them perfectly.

Round 4 – Black vs David Paul – Result win

Another game against somebody close to my rating but a little below, so while I should win it could be hard. I didn't play the opening right, however, and ended up in a slightly poor position a couple of tempi down.

After 32 Re4 draw offered

After some maneuvering (and the odd missed move by both sides) white offered a draw after move 32. I decided to press on with f6 and was rewarded when, after 32. ..f6 33.Kf2 Kf7, White played 34.b4?, which allowed me to play Nc3 and bounce my Knight to b5 and then take the Bishop on d6 along with an extra pawn.

After 44. ..Kd6

A few moves later I’m a pawn up and with a clear path to the win although I made a mistake at the ended it wasn’t bad enough to be fatal.

 

 

 

Round 5 – White vs Russell Dive – Game lost

After getting to 3 points from 4 rounds I was rewarded with playing the top seed. As often happens with stronger players, he just seemed to make 2 threats with every move and my position slowly (well, not that slowly) got worse and worse as I couldn't counter them all (let alone make my own threats).

Eventually I resigned 3 pawns down with no play (computer assessed my position as -5.0)

Round 6 – Black vs Brian Nijman – Game Lost

Last round was once again against a higher-rated player, but one I had a reasonable chance against.

After 10. ..Bg6

I prepped a bit of the opening but he played something different and we ended up in a messy position with White better developed but not a huge advantage.

We both had bishops cutting through the position and Queens stuck to the side, but it would be hard for me to develop my pieces. I was going to have to work hard at getting them out into good positions.

After 23. d5

After some swaps white ended up charging through my centre with lots of threats. I spent a lot of time looking at this position working out what to do.

White has the Bishop ready to take the pawn on b5 and offer check, possibly grab the Knight or pin the rook, while the Knight can also attack the rook, and the pawns can even promote.

I ended up giving up the exchange for a pawn but promptly lost a pawn when white castled and took on f7.

After 32. Ne2

I decided to push forward hoping to generate some threats, and managed to when I threatened to mate with two Knights or win a rook after 32. Ne2.

34.Rxc5+ Kxc5 35.Be1 Rd8 36.Rc7+ followed, but I played 36. ..Kd4 and blocked my Rook, rather than Kb6 which would have given me a tempo to move my rook to d1. This would have probably picked up another exchange and should have been enough for the win.

After 47. g6

And then I found another win. All I had to do was push the pawn. On move 47 I just have to put a piece on f2 to block the bishop from taking my pawn on g1. If 47. ..Nf2 48. Bxf2 Rxf2 49. g1=Q leaves me a Queen vs a rook and I can take the pawn on g6 straight away.

But instead I got chess blindness and just swapped the pawn for the Bishop. I then tried to mate (or perpetual check) the King instead of trying to stop the pawns (the computer says 50. ..Nf4 is just in time). A few moves later I ran out of King-chasing moves and resigned, at which point everybody told me the move I missed.

So I ended up with 3/6 or 50% in the tournament. I lost to the players better than me and beat the lower rated ones. I'm a little disappointed with the last game and the games against Russell Dive and Mike Steadman, but happy with the others. Definitely need to keep working on things though.


Pia Waugh: My personal OGPau submission

Mon 28th Mar 2016 12:03

I have been fascinated and passionate about good government since I started exploring the role of government in society about 15 years ago. I decided to go work in both the political and public service arenas specifically to get a better understanding of how government and democracy work in Australia, and it has been an incredible journey, learning a lot, with a lot of good mentors and experiences along the way.

When I learned about the Open Government Partnership I was extremely excited about the opportunity it presented to have genuine public collaboration on the future of open government in Australia, and to collaborate with other governments on important initiatives like transparency, democracy and citizen rights. Once the government gave the go ahead, I felt privileged to be part of kicking the process off, and secure in my confidence in the team left to run the consultation as I left to be on maternity leave (returning to work in 2017). Amelia, Toby and the whole team are doing a great job, as are the various groups and individuals contributing to the consultation. I think it can be very tempting to be cynical about such things but it is so important we take the steering wheel offered, to drive this where we want to go. Otherwise it is a wasted opportunity.

So now, as a citizen who cares about this topic, and completely independently of my work, I’d like to contribute some additional ideas to the Australian OGP consultation and I encourage you all to contribute ideas too. There have already been a lot of great ideas I support, so these are just a few I think deserve a little extra attention. I’ve laid out some problems and then some actions for each problem. I’ve also got a 9 week old baby, so this has been a bit tricky to write in between baby duties. I’m keen to explore these and other ideas in more detail throughout the process, but these are just the high level ideas to start.

Problem 1: democratic engagement. I think it is hard for a lot of citizens to engage in the range of activities of our democracy. Voting is usually considered the extent to which the average person considers participating, but there are so many ways to be involved in the decisions and actions of governments, which affect us in our every day lives! These actions are about making the business of government easier for the people it serves to get involved in.

Action (theme: public participation): Establish a single place to discover all consultations, publications, policies – it is currently difficult for people to contribute meaningfully to government because it is hard to find what is going on, what has already been decided, what the priorities of the government of the day are, and what research has been conducted to date.

Action: (theme: public participation): Establish a participatory budget approach. Each year there should be a way for the public to give ideas and feedback to the budget process, to help identify community priorities and potential savings.

Action: (theme: public participation): Establish a regular Community Estimates session. Senate Estimates is a way for the Senate to hold the government and departments to account; however, often the politics of the individuals involved dominates the approach. What if we implemented an opportunity for the public to do the same? There would need to be a rigorous way to collect and prioritise questions from the public that was fair and representative, but it could be an excellent way to provide greater accountability which is not (or should not be) politicised.

Problem 2: analogue government. Because so much of the reporting, information, decisions and outcomes of government are published (or not published) in an analogue format (not digital or machine readable), it is very hard to discover and analyse, and thus very hard to monitor. If government was more digitally accessible, more mashable, then it would be easier to monitor the work of government.

Action: (theme: open data) XML feeds for all parliamentary data including Hansard, ComLaw, annual reports, PBSs, MP expenses and declarations of interests in data form with notifications of changes. This would make this important democratic content more accessible, easier to analyse and easier to monitor.

Action: (theme: open data) Publishing of all the federal budget information in data format on budget night, including the tables throughout the budget papers, the data from the Portfolio Budget Statements (PBSs) and anything else of relevance. This would make analysing the budget easier. There have been some efforts in this space but it has not been fully implemented.

Action: (theme: Freedom of Information): Adoption of the righttoknow platform for whole of gov with a central FOI register and publications, and a central FOI team to work across all departments consistently for responding to requests. Currently doing an FOI request can be tricky to figure out (unless you can find community initiatives like righttoknow which has automated the process externally) and the approach to FOI requests varies quite dramatically across departments. A single official way to submit requests, track them, and see reports published, as well as a single mechanism to respond to requests, would be better for the citizen experience and far more efficient for government.

Action: (theme: government integrity): Retrospective open calendars of all Parliamentarians business calendars. Constituents deserve to know how their representatives are using their time and, in particular, who they are meeting with. This helps improve transparency around potential influencers of public policy, and helps encourage Parliamentarians to consider how they spend their time in office.

Problem 3: limits for reporting transparency. A lot of the rules about reporting of expenditure in Australia are better than those in most other countries in the world; however, we can still do better. We could lower the thresholds for reporting expenditure, for instance, and others have covered expanding the reporting around political donations, so I’ll stick to what I know and consider useful from direct experience.

Action: (theme: fiscal transparency): Regular publishing of government expenditure records down to $1000. Currently federal government contracts over $10k are reported in Australia through the AusTender website and on data.gov.au; however, there are a lot of expenses below $10k that it would arguably be useful to know about. In the UK they introduced monthly expenditure reporting per department at https://data.gov.uk/data/openspending-report/index

Action: (theme: fiscal transparency): A public register of all gov funded major projects (all types) along with status, project manager and regular reporting. This would make it easier to track major projects and to intervene when they are not delivering.

Action: (theme: fiscal transparency): Update of PBS and Annual Report templates for comparative budget and program information with common key performance indicators and reporting for programs and departmental functions. Right now agencies do their reporting in PDF documents that provide no easy way to compare outcomes, programs, expenditure, etc. If common XML templates were used for common reports, comparative assessment would be easier and information about government as a whole much more available for assessment.

Problem 4: stovepipe and siloed government impedes citizen centric service delivery. Right now each agency is motivated to deliver their specific mandate with a limited (and ever restricted) budget and so we end up with systems (human, technology, etc) for service delivery that are siloed from other systems and departments. If departments took a more modular approach, it would be more possible to mash up government data, content and services for dramatically improved service delivery across government, and indeed across different jurisdictions.

Action: (theme: public service delivery): Mandated open Application Programmable Interfaces (APIs) for all citizen and business facing services delivered or commissioned by government, to comply to appropriately defined standards and security. This would enable different data, content and services to be mashed up by agencies for better service delivery, but also enables an ecosystem of service delivery beyond government.

Action: (theme: government integrity): a consistent reporting approach and public access to details of outsourced contract work, with greater consistency of confidentiality rules in procurement. A lot of work is outsourced by government to third parties. This can be a good way to deliver some things (and there are many arguments as to how much outsourcing is too much); however, it introduces a serious transparency issue when the information about contracted work cannot be monitored, with the excuse of “commercial in confidence”. All contracts should have minimum reporting requirements and should make publicly available the details of what exactly is contracted, with the exception of contracts involving national security where such disclosure creates a significant risk. This would also help in creating a motivation for contractors to deliver on their contractual obligations. Finally, if procurement officers across government had enhanced training to correctly apply the existing confidentiality test from the Commonwealth Procurement Rules, it would be reasonable to expect that there would be less information hidden behind commercial in confidence.

I also wholeheartedly support the recommendations of the Independent Parliamentary Entitlements System Report (https://www.dpmc.gov.au/taskforces/review-parliamentary-entitlements), in particular:

  • Recommendation 24: publish all key documents online;
  • Recommendation 25: more frequent reporting (of work expenses of parliamentarians and their staff) on data.gov.au as a dataset;
  • Recommendation 26: improved travel reporting by Parliamentarians.

I hope this feedback is useful and I look forward to participating in the rest of the consultation. I’m adding the ideas to the ogpau wiki and look forward to feedback and discussion. Just to be crystal clear, these are my own thoughts, based on my own passion and experience, and are not in any way representative of my employer or the government. I have nothing to do with the running of the consultation now and expect my ideas to hold no more weight than the ideas of any other contributor.

Good luck everyone, let’s do this

Categories: thinktime

Jonathan Adamczewski: floats, bits, and constant expressions

Sun 27th Mar 2016 17:03

Can you access the bits that represent an IEEE754 single precision float in a C++14 constant expression (constexpr)?

(Why would you want to do that? Maybe you want to run a fast inverse square root at compile time. Or maybe you want to do something that is actually useful. I wanted to know if it could be done.)

For context: this article is based on experiences using gcc-5.3.0 and clang-3.7.1 with -std=c++14 -march=native on a Sandy Bridge Intel i7. Where I reference sections from the C++ standard, I’m referring to the November 2014 draft.

Before going further, I’ll quote 5.20.6 from the standard:

Since this International Standard imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution. [88]

88) Nonetheless, implementations are encouraged to provide consistent results, irrespective of whether the evaluation was performed during translation and/or during program execution.

In this post, I document things that worked (and didn’t work) for me. You may have a different experience.

Methods of conversion that won’t work

(Error text from g++-5.3.0)

You can’t access the bits of a float via a typecast pointer [which is undefined behavior, and covered by 5.20.2.5]:

constexpr uint32_t bits_cast(float f) {
  return *(uint32_t*)&f;
}

error: accessing value of 'f' through a 'uint32_t {aka unsigned int}' glvalue in a constant expression

You can’t convert it via a reinterpret cast [5.20.2.13]

constexpr uint32_t bits_reinterpret_cast(float f) {
  const unsigned char* cf = reinterpret_cast<const unsigned char*>(&f);
  // endianness notwithstanding
  return (cf[3] << 24) | (cf[2] << 16) | (cf[1] << 8) | cf[0];
}

error: '*(cf + 3u)' is not a constant expression

(gcc reports an error with the memory access, but does not object to the reinterpret_cast. clang produces a specific error for the cast.)

You can’t convert it through a union [gcc, for example, permits this for non-constant expressions, but the standard forbids it in 5.20.2.8]:

constexpr uint32_t bits_union(float f) {
  union Convert {
    uint32_t u;
    float f;
    constexpr Convert(float f_) : f(f_) {}
  };
  return Convert(f).u;
}

error: accessing 'bits_union(float)::Convert::u' member instead of initialized 'bits_union(float)::Convert::f' member in constant expression

You can’t use memcpy() [5.20.2.2]:

constexpr uint32_t bits_memcpy(float f) {
  uint32_t u = 0;
  memcpy(&u, &f, sizeof f);
  return u;
}

error: 'memcpy(((void*)(&u)), ((const void*)(&f)), 4ul)' is not a constant expression

And you can’t define a constexpr memcpy()-like function that is capable of the task [5.20.2.11]:

constexpr void* memcpy(void* dest, const void* src, size_t n) {
  char* d = (char*)dest;
  const char* s = (const char*)src;
  while (n-- > 0)
    *d++ = *s++;
  return dest;
}

constexpr uint32_t bits_memcpy(float f) {
  uint32_t u = 0;
  memcpy(&u, &f, sizeof f);
  return u;
}

error: accessing value of 'u' through a 'char' glvalue in a constant expression

So what can you do?

Floating point operations in constant expressions

For constexpr float f = 2.0f, g = 2.0f the following operations are available [as they are not ruled out by anything I can see in 5.20]:

  • Comparison of floating point values e.g.

    static_assert(f == g, "not equal");
  • Floating point arithmetic operations e.g.

    static_assert(f * 2.0f == 4.0f, "arithmetic failed");
  • Casts from float to integral value, often with well-defined semantics e.g.

    constexpr int i = (int)2.0f; static_assert(i == 2, "conversion failed");

So I wrote a function (uint32_t bits(float)) that will return the binary representation of an IEEE754 single precision float. The full function is at the end of this post. I’ll go through the various steps required to produce (my best approximation of) the desired result.

1. Zero

When bits() is passed the value zero, we want this behavior:

static_assert(bits(0.0f) == 0x00000000, "bits of zero");

And we can have it:

if (f == 0.0f) return 0;

Nothing difficult about that.

2. Negative zero

In IEEE754 land, negative zero is a thing. Ideally, we’d like this behavior:

static_assert(bits(-0.0f) == 0x80000000, "bits of negative zero");

But the check for zero also matches negative zero. Negative zero is not something that the C++ standard has anything to say about, given that IEEE754 is an implementation choice [3.9.1.8: “The value representation of floating-point types is implementation defined”]. My compilers treat negative zero the same as zero for all comparisons and arithmetic operations. As such, bits() returns the wrong value when considering negative zero, returning 0x00000000 rather than the desired 0x80000000.

I did look into other methods for detecting negative zero in C++, without finding something that would work in a constant expression. I have seen divide by zero used as a way to detect negative zero (resulting in ±infinity, depending on the sign of the zero), but that doesn’t compile in a constant expression:

constexpr float r = 1.0f / -0.0f;

error: '(1.0e+0f / -0.0f)' is not a constant expression

and divide by zero is explicitly named as undefined behavior in 5.6.4, and so by 5.20.2.5 is unusable in a constant expression.

Result: negative zero becomes positive zero.
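For completeness, telling the zeroes apart at run time is easy enough. A minimal sketch (std::signbit will distinguish them, but as far as I can tell nothing in C++14 requires it to work in a constant expression, so it doesn’t rescue bits()):

#include <cmath>

// Run time only: bits() relies on the == comparison below, which cannot see the sign.
inline bool is_negative_zero(float f) {
  return f == 0.0f && std::signbit(f);
}

static_assert(0.0f == -0.0f, "the two zeroes compare equal in a constant expression");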

3. Infinity

We want this:

static_assert(bits(INFINITY) == 0x7f800000, "bits of infinity");

And this:

else if (f == INFINITY) return 0x7f800000;

works as expected.

4. Negative Infinity

Same idea, different sign:

static_assert(bits(-INFINITY) == 0xff800000, "bits of negative infinity");

else if (f == -INFINITY) return 0xff800000;

Also works.

5. NaNs

There’s no way to generate arbitrary NaN constants in a constant expression that I can see (not least because casting bits to floats isn’t possible in a constant expression, either), so it seems impossible to get this right in general.

In practice, maybe this is good enough:

static_assert(bits(NAN) == 0x7fc00000, "bits of NaN");

NaN values can be anywhere in the range of 0x7f800001 -- 0x7fffffff and 0xff800001 -- 0xffffffff. I have no idea as to the specific values that are seen in practice, nor what they mean. 0x7fc00000 shows up in /usr/include/bits/nan.h on the system I’m using to write this, so — right or wrong — I’ve chosen that as the reference value.

It is possible to detect a NaN value in a constant expression, but not its payload (at least, not that I’ve been able to find). So there’s this:

else if (f != f) // NaN
  return 0x7fc00000; // This is my NaN...

Which means that of the 2×(2^23 − 1) possible NaNs, one will be handled correctly (in this case, 0x7fc00000). For the other 16,777,213 values, the wrong value will be returned (in this case, 0x7fc00000).

So… partial success? NaNs are correctly detected, but the bits for only one NaN value will be returned correctly.

(On the other hand, the probability that it will ever matter could be stored as a denormalized float)

6. Normalized Values

// pseudo-code
static_assert(bits({ 0x1p-126f, ..., 0x1.fffffep127f }) == { 0x00800000, ..., 0x7f7fffff });
static_assert(bits({ -0x1p-126f, ..., -0x1.fffffep127f }) == { 0x80800000, ..., 0xff7fffff });

[That 0x1pnnnf format happens to be a convenient way to represent exact values that can be stored as binary floating point numbers]
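A couple of quick sanity checks on that notation (a sketch; these hex float literals come from C99 and gcc and clang accept them in C++14 mode, and <limits> is assumed for the comparison against the smallest normalized value):

#include <limits>

static_assert(0x1p0f == 1.0f, "1.0 * 2^0");
static_assert(0x1.8p1f == 3.0f, "1.5 * 2^1");
static_assert(0x1p-126f == std::numeric_limits<float>::min(), "the smallest normalized single precision value");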

It is possible to detect and correctly construct bits for every normalized value. It does require a little care to avoid truncation and undefined behavior. I wrote a few different implementations — the one that I describe here requires relatively little code, and doesn’t perform terribly [0].

The first step is to find and clear the sign bit. This simplifies subsequent steps.

bool sign = f < 0.0f;
float abs_f = sign ? -f : f;

Now we have abs_f — it’s positive, non-zero, non-infinite, and not a NaN.

What happens when a float is cast to an integral type?

uint64_t i = (uint64_t)f;

The value of f will be stored in i, according to the following rules:

  • The value will be rounded towards zero which, for positive values, means truncation of any fractional part.
  • If the value in f is too large to be represented as a uint64_t (i.e. f > 2^64 − 1) the result is undefined.

If truncation takes place, data is lost. If the number is too large, the result is (probably) meaningless.

For our conversion function, if we can scale abs_f into a range where it is not larger than (2^64 − 1), and it has no fractional part, we have access to an exact representation of the bits that make up the float. We just need to keep track of the amount of scaling being done.

Single precision IEEE 754 floating point numbers have, at most, (23+1) bits of precision (23 in the significand, 1 implicit). This means that we can scale down large numbers and scale up small numbers into the required range.
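A quick way to convince yourself of that bit count, at least with the compilers mentioned above (the standard, as quoted earlier, makes no promise that compile-time evaluation matches run time):

static_assert(16777215.0f + 1.0f == 16777216.0f, "2^24 - 1 and 2^24 are exactly representable");
static_assert(16777216.0f + 1.0f == 16777216.0f, "2^24 + 1 is not, and rounds back down");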

Multiplying by a power of two changes only the exponent of the float, and leaves the significand unmodified. As such, we can arbitrarily scale a float by a power of two and — so long as we don’t over- or under-flow the float — we will not lose any of the bits in the significand.
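For example (again, a sketch): 0x1.fffffep0f has all 24 significand bits set, and scaling it by 2^41 and back is exact:

static_assert(0x1.fffffep0f * 0x1p41f == 0x1.fffffep41f, "power-of-two scaling is exact");
static_assert(0x1.fffffep41f * 0x1p-41f == 0x1.fffffep0f, "and it round-trips");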

For the sake of simplicity (believe it or not [1]), my approach is to scale abs_f in steps of 2^41 so that abs_f ≥ 2^87, like so:

int exponent = 254;
while (abs_f < 0x1p87f) {
  abs_f *= 0x1p41f;
  exponent -= 41;
}

If abs_f ≥ 2^87, the least significant bit of abs_f, if set, is at least 2^(87-23) == 2^64.

Next, abs_f is scaled back down by 2^64 (which adds no fractional part, as the least significant bit is worth at least 2^64) and converted to an unsigned 64 bit integer.

uint64_t a = (uint64_t)(abs_f * 0x1p-64f);

All of the bits of abs_f are now present in a, without overflow or truncation. All that is needed now is to determine where they are:

int lz = count_leading_zeroes(a);

adjust the exponent accordingly:

exponent -= lz;

and construct the result:

uint32_t significand = (a << (lz + 1)) >> (64 - 23);
return (sign << 31) | (exponent << 23) | significand;

With this, we have correct results for every normalized float.

7. Denormalized Values

// pseudo-code
static_assert(bits({ 0x1.0p-149f, ..., 0x1.fffffcp-127f }) == { 0x00000001, ..., 0x007fffff });
static_assert(bits({ -0x1.0p-149f, ..., -0x1.fffffcp-127f }) == { 0x80000001, ..., 0x807fffff });

The final detail is denormalized values. Handling of normalized values as presented so far fails because denormals will have additional leading zeroes. They are fairly easy to account for:

if (exponent <= 0) { exponent = 0; lz = 8 - 1; }

To attempt to demystify that lz = 8 - 1 a little: after the repeated 2^41 scaling that has taken place, there are 8 leading bits that aren’t part of the significand of a denormalized single precision float. There is also no leading 1 bit of the kind present in all normalized numbers (which is accounted for in the calculation of significand above as (lz + 1)). So the leading zero count (lz) is set to account for the 8 bits of offset to the start of the denormalized significand, minus the one that the subsequent calculation assumes it needs to skip over.
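As a worked check of the denormal path, here is the smallest denormal traced through the steps above (a sketch, using the bits() function defined at the end of this post):

// f = 0x1p-149f, the smallest denormal:
//  - the loop runs six times: 2^-149 * (2^41)^6 = 2^97 >= 2^87, leaving exponent = 254 - 6*41 = 8
//  - a = (uint64_t)(2^97 * 2^-64) = 2^33, so lz = 63 - 33 = 30
//  - exponent - lz = -22 <= 0, so exponent is clamped to 0 and lz is reset to 8 - 1 = 7
//  - significand = (2^33 << 8) >> (64 - 23) = 2^41 >> 41 = 1
//  - result = (0 << 31) | (0 << 23) | 1 = 0x00000001
static_assert(bits(0x1p-149f) == 0x00000001, "smallest denormal");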

And that’s it. All the possible values of a float are accounted for.

(Side note: If you’re compiling with -ffast-math, passing denormalized numbers to bits() will return invalid results. That’s -ffast-math for you. With gcc or clang, you could add an #ifdef __FAST_MATH__ around the test for negative exponent.)

Conclusion

You can indeed obtain the bit representation of a floating point number at compile time. Mostly. Negative zero is wrong, NaNs are detected but otherwise not accurately converted.

Enjoy your compile-time bit-twiddling!

The whole deal:

// Based on code from
// https://graphics.stanford.edu/~seander/bithacks.html
constexpr int count_leading_zeroes(uint64_t v) {
  constexpr char bit_position[64] = {
     0,  1,  2,  7,  3, 13,  8, 19,
     4, 25, 14, 28,  9, 34, 20, 40,
     5, 17, 26, 38, 15, 46, 29, 48,
    10, 31, 35, 54, 21, 50, 41, 57,
    63,  6, 12, 18, 24, 27, 33, 39,
    16, 37, 45, 47, 30, 53, 49, 56,
    62, 11, 23, 32, 36, 44, 52, 55,
    61, 22, 43, 51, 60, 42, 59, 58 };

  v |= v >> 1; // first round down to one less than a power of 2
  v |= v >> 2;
  v |= v >> 4;
  v |= v >> 8;
  v |= v >> 16;
  v |= v >> 32;
  v = (v >> 1) + 1; // isolate the most significant bit

  return 63 - bit_position[(v * 0x0218a392cd3d5dbf) >> 58];
}

constexpr uint32_t bits(float f) {
  if (f == 0.0f)
    return 0; // also matches -0.0f and gives wrong result
  else if (f == INFINITY)
    return 0x7f800000;
  else if (f == -INFINITY)
    return 0xff800000;
  else if (f != f) // NaN
    return 0x7fc00000; // This is my NaN...

  bool sign = f < 0.0f;
  float abs_f = sign ? -f : f;

  int exponent = 254;
  while (abs_f < 0x1p87f) {
    abs_f *= 0x1p41f;
    exponent -= 41;
  }

  uint64_t a = (uint64_t)(abs_f * 0x1p-64f);
  int lz = count_leading_zeroes(a);
  exponent -= lz;

  if (exponent <= 0) {
    exponent = 0;
    lz = 8 - 1;
  }

  uint32_t significand = (a << (lz + 1)) >> (64 - 23);
  return (sign << 31) | (exponent << 23) | significand;
}

[0] Why does runtime performance matter? Because that’s how I tested the conversion function while implementing it. I was applying Bruce Dawson’s advice for testing floats and the quicker I found out that I’d broken the conversion the better. For the implementation described in this post, it takes about 97 seconds to test all four billion float values on my laptop — half that time if I wasn’t testing negative numbers (which are unlikely to cause problems due to the way I handle the sign bit). The implementation I’ve described in this post is not the fastest solution to the problem, but it is relatively compact, and well behaved in the face of -ffast-math.
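For the curious, the run-time check amounts to something like the sketch below (it assumes the bits() definition above is in scope; memcpy is fine here because this runs at run time, outside any constant expression):

#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
  uint64_t mismatches = 0;
  for (uint64_t i = 0; i <= 0xffffffffull; ++i) {
    const uint32_t u = (uint32_t)i;
    float f;
    std::memcpy(&f, &u, sizeof f);   // run-time type pun, well-defined via memcpy
    if (f != f) continue;            // NaNs: only one payload is preserved by bits()
    if (u == 0x80000000u) continue;  // negative zero comes back as +0.0f
    if (bits(f) != u) ++mismatches;
  }
  std::printf("%llu mismatches\n", (unsigned long long)mismatches);
  return 0;
}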

Admission buried in a footnote: I have not validated correct behavior of this code for every floating point number in actual compile-time constant expressions. Compile-time evaluation of four billion invocations of bits() takes more time than I’ve been willing to invest so far.

[1] It is conceptually simpler to multiply abs_f by two (or one half) until the result is exactly positioned so that no leading zero count is required after the cast — at least, that was what I did in my first attempt. The approach described here was found to be significantly faster. I have no doubt that better-performing constant-expression-friendly approaches exist.

 

Categories: thinktime

Tridge on UAVs: APM:Plane 3.5.2 released

Sat 26th Mar 2016 15:03

The ArduPilot development team is proud to announce the release of version 3.5.2 of APM:Plane. This is a minor release with small changes.



The main reason for this release over the recent 3.5.1 release is a fix for a bug where the px4io co-processor on a Pixhawk can run out of memory while booting, leaving the board unresponsive on boot. It only happens if you have a more complex servo setup, and is caused by the IO failsafe mixer using too much memory.



The second motivation for this release is to fix an issue where, during a geofence altitude failsafe triggered at low speed, an aircraft may dive much further than it should in order to gain speed. This only happened if the thrust line of the aircraft, combined with a low pitch integrator gain, led to the aircraft not compensating sufficiently with elevator at full throttle in a TECS underspeed state. To fix this, two changes have been made:



  • a minimum level of integrator in the pitch controller has been added. This level has a sufficiently small time constant to avoid the problem with the TECS controller in an underspeed state.
  • the underspeed state in TECS has been modified so that underspeed can end before the full target altitude has been reached, as long as the airspeed has risen sufficiently past the minimum airspeed for long enough (15% above minimum airspeed for 3 seconds; a rough sketch of this condition follows the list).
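To make the second change a little more concrete, here is a rough sketch of the sort of exit condition described above. This is not the actual ArduPilot/TECS source; the names, types and units are assumptions for illustration only.

#include <cstdint>

// Underspeed may end once airspeed has stayed 15% above the minimum for a
// continuous 3 seconds, even if the target altitude has not yet been reached.
struct UnderspeedExitCheck {
  bool timing = false;
  uint32_t above_margin_since_ms = 0;

  bool can_exit_underspeed(float airspeed, float airspeed_min, uint32_t now_ms) {
    if (airspeed < airspeed_min * 1.15f) {
      timing = false;                  // dropped below the margin, restart the timer
      return false;
    }
    if (!timing) {
      timing = true;
      above_margin_since_ms = now_ms;  // first sample above the margin
    }
    return (now_ms - above_margin_since_ms) >= 3000;  // 3 seconds above the margin
  }
};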

Many thanks to Marc Merlin for reporting this bug!

The default P gains for both roll and pitch have also been raised from 0.4 to 0.6. This is to help users who fly with the default parameters. A value of 0.6 is safe for all aircraft that I have analysed logs for.



The default gains and filter frequencies of the QuadPlane code have also been adjusted to better reflect the types of aircraft users have been building.



Other changes include:

  • improved QuadPlane logging for better analysis and tuning (adding RATE and QTUN messages)
  • fixed a bug introduced in 3.5.1 in rangefinder landing
  • added TECS logging of speed_weight and flags
  • improvements to the lsm303d driver for Linux
  • improvements to the waf build system



Happy flying!

Categories: thinktime

James Purser: Changes, they are a happening

Fri 25th Mar 2016 15:03

So, if you've been following my social medias then you'll have noticed that I have very rapidly (in the space of maybe two weeks?) gone through the process of being retrenched, looking for work and acquiring a new job. In this I have been actually very, very lucky, and unlike some others I'm not going to claim that being unemployed is some sort of "freedom" or relaxing time. Instead it's a period where you immediately go "Okay, well shit, I have a family to support, so no time for relaxing, get back out there".

As I said, I've been EXTREMELY lucky in that I have managed to snag a good job so soon after being retrenched. I won't say who yet, but will say it's back in the media industry (an industry I haven't worked in since leaving WIN TV back in 2004). I will be leaving the moodle space and returning to Drupalland with forays into new areas (I really do like forays into new areas).

With any luck this won't affect my plans for rebooting Purser Explores The World and my other plans for Angry Beanie. I've already started working on a new episode of PETW (actually interviewed someone the other day, it felt awesome), and am in the process of organising more. Also have sekrit podcast project to get going as well.

On the tech side, I'm going to be moving this blog to Drupal 8 (because, well while the contrib modules aren't there yet for something as complex as Angry Beanie, it's certainly there for a blog like this), I'm also going to be delving more into the MVC side of things. I've played around with django and the like, but it's probably about time I get it knocked over.

Well that's it for the moment, hopefully will blog a bit more in the future, we'll see.

Oh, and if you're reading this on medium, I am thinking about a module that allows you to actually publish from Drupal to medium, but that's at the "It's an idea I had on the train" stage.

Categories: thinktime

David Rowe: Project Whack a Mole Part 2

Fri 25th Mar 2016 10:03

I’ve been progressing this project steadily since Part 1 which describes how this direction finding system works.

I managed to get repeatable phase angles using two antennas with an RF signal, first in my office using a signal generator, then with a real signal from a local repeater. However the experimental set up was delicate and the software slow and cumbersome. So I’ve taken a step back to make the system easier to use and more robust.

New RF Head

I’ve built a new RF Head based on a NE602 active mixer:



The mixer has an impedance of about 3000 ohms across its balanced inputs and outputs, so I’ve coupled the 50 ohm signals with a single turn loop to make some sort of impedance match. The tuned circuits also give some selectivity. This is important, as I am afraid the untuned HackRF front end will collapse with overload when I poke a real antenna up above the Adelaide Plains and it can see every signal on the VHF and UHF spectrum.

Antenna 1 (A1) is coupled using a tapped tuned circuit, and with the mixer output forms a 3 winding transformer. Overall gain for the A1 and A2 signals is about -6dB, which is OK. The carrier feed through from the A2 mixer is 14dB down. Need to make sure this carrier feed through stays well down on A1, which is on the same frequency. Otherwise the DSP breaks – it assumes there is no carrier feed through. In practice the levels of A1 and A2 will bob about due to multipath, so some attenuation of A2 relative to A1 is a good idea.

Real Time-ish Software

I refactored the df_mixer.m Octave code to make it run faster and make repeated system calls to hackrf_transfer. So now it runs real time (ish); grabs a second of samples, does the DSP foo, plots, then repeats about once every 2 seconds. Much easier to see what’s going on now; here it is working with an FM signal:

You can “view image” in your browser for a larger image. I really like my “propeller plot”. It’s a polar histogram of the angles the DSP comes up with. It has two “blades” due to the 180 degree ambiguity of the system. The propeller gets fatter with low SNR as there is more uncertainty, and thinner with higher SNR. It simultaneously tells me the angle and the quality of the angle. I think that’s a neat innovation.

Note the “Rx signal at SDR Input” plot. The signals we want are centered on 48kHz (A1), 16 and 80kHz (A2 mixer products). Above 80kHz you can see the higher order mixer products, more on that below.

Reflections

As per Part 1 the first step is a bench test. I used my sig gen to supply a test signal which I split and fed into A1 and A2. By adding a small length of transmission line (38mm of SMA adapters screwed together), I could induce known amounts of phase shift.

Only I was getting dud results, 10 degrees one way then 30 the other when I swapped the 38mm segment from A1 to A2. It should be symmetrical, same phase difference but opposite.

I thought about the A1 and A2 ports. It’s unlikely they are 50 ohms with my crude matching system. Maybe this is causing some mysterious reflections that are messing up the phase at each port? Wild guess but I inserted some 10dB SMA attenuators into A1 and A2 and it started working! I measured +/- 30 +/-1 degrees as I swapped the 38mm segment. Plugging 38mm into my spreadsheet the expected phase shift is 30.03 degrees. Yayyyyyyy…..
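For anyone wanting to reproduce that spreadsheet number, the calculation boils down to the phase delay of the extra line length at the test frequency. A sketch only – the test frequency and the velocity factor of the SMA adapter chain are assumptions here, so the result will be in the right ballpark rather than matching 30.03 degrees exactly:

#include <cstdio>

int main() {
  const double c = 299792458.0;    // speed of light, m/s
  const double freq_hz = 439.0e6;  // assumed UHF test frequency
  const double vf = 0.7;           // assumed velocity factor of the adapter chain
  const double length_m = 0.038;   // the 38mm of SMA adapters

  const double wavelength = c / freq_hz;        // free-space wavelength
  const double electrical_len = length_m / vf;  // electrical length of the extra line
  const double phase_deg = 360.0 * electrical_len / wavelength;

  std::printf("expected phase shift: %.1f degrees\n", phase_deg);
  return 0;
}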

So I need to add some built-in termination impedance for each port, like a 6dB “pad”. Why are they called “pads” BTW?

The near-real time software and propeller plot made it really easy to see what was going on and I could see and avoid any silly errors. Visualisation helps.

Potential Problems

I can see some potential problems with this mixer based method for direction finding:

  1. If the spectrum is “busy” and other nearby channels are in use the mixer will plonk them right on top of our signals. Oh dear.
  2. The mixer has high order output products – at multiples of the LO (32, 64, 96 ….. kHz) away from the input frequency. So any strong signal some distance away could potentially be mixed into our pass band. A strong BPF and resonant antennas might help. Yet to see if this is a real problem.

Next Steps

Anyway, onward and upwards. I’ll add some “pads” to A1 and A2, then assemble the RF head with a couple of antennas so I can mount the whole thing outdoors on a mast.

Mark has given me a small beacon transmitter that I will use for local testing, before trying it on a repeater. If I get repeatable repeater-bearings (lol) I will take the system to a mountain overlooking the city and see if it blows up with strong signals. Gold star if I can pull bearings off the repeater input, as that’s where our elusive mole lives.

Categories: thinktime

Tridge on UAVs: New ArduPilot documentation site

Thu 24th Mar 2016 19:03

If you have visited ardupilot.com recently you may have noticed you are redirected to our new documentation system on ardupilot.org. This is part of our on-going transformation of the ardupilot project that we announced in a previous post.

The new documentation system is based on sphinx and was designed by Hamish Willee. I think Hamish has done a fantastic job with the new site, creating something that will be easier to manage and update, while using less server resources which should make it more responsive for users.

Updates to the documentation will now be done via GitHub pull requests, using the new ardupilot_wiki repository. That repository will also host documentation issues, and includes all the issues imported from the old tracking repository.

Many thanks to everyone who has helped with this conversion, including Hamish, Buzz, Jani and Peter.

We have endeavoured to make as many existing URLs as possible auto-redirect to the correct URL on the new site, but there are bound to be some errors, for which we apologise. If you find issues with the new site, please either respond here or open an issue on the repository.

Happy flying!

Categories: thinktime

David Rowe: Organic Potato Chips Scam

Thu 24th Mar 2016 08:03

I don’t keep much junk food in my pantry, as I don’t like my kids eating too much high calorie food. Also if I know it’s there I will invariably eat it and get fat. Fortunately, I’m generally too lazy to go shopping when an urge to eat junk food hits. So if it’s not here at home I won’t do anything about it.

Instead, every Tuesday at my house is “Junk Food Night”. My kids get to have anything they want, and I will go out and buy it. My 17 year old will choose something like a family size meat-lovers pizza with BBQ sauce. My 10 year old usually wants a “slushie”, a frozen-coke, sugar-laden thing, so last Tuesday off we went to the local all-night petrol (gas) station. It was there I spied some “Organic” potato chips. My skeptical “spidey senses” started to tingle.

Lets break it down from the information on the pack:



OK so they are made from organic grains. This means they are chemically and nutritionally equivalent to scientifically farmed grains but we need to cut down twice as much rain forest to grow them and they cost more. There is no scientifically proven health advantage to organic food. Just a profit advantage if you happen to sell it.

There is nothing wrong with Gluten. Nothing at all. It makes our bread have a nice texture. Humans have been consuming it from the dawn of agriculture. Like most marketing, the Gluten fad is just a way to make us feel bad and choose more expensive options.

And soy is suddenly evil? Please. Likewise dairy is a choice, not a question of nutrition. I’ve never met a cow I didn’t like. Especially served medium rare.

Whole grain is good, if the micro-nutrients survive deep frying in boiling oil.

There is nothing wrong with GMO. Another scam where scientifically proven benefits are being held back by fear, uncertainty, and doubt. We have been modifying the genetic material in everything we eat for centuries through selection.

Kosher is a religious choice and has nothing to do with nutrition.

Speaking of nutrition, let’s compare the nutritional content per 100g to a Big Mac:

Item            Big Mac    Organic Chips
Energy          1030 kJ    1996 kJ
Protein         12.5 g     12.5 g
Carbohydrates   17.6 g     66 g
Fat             13.5 g     22.4 g
Sodium          427 mg     343 mg

This is very high energy food. It is exactly this sort of food that is responsible for first world health problems like cardio-vascular disease and diabetes. The link between high calorie snack food and harm is proven – unlike the perceived benefits of organic food. The organic label is dangerous, irresponsible marketing hype to make us pay more and encourage consumption of food that will hurt us.

Links

Give Us Our Daily Bread – A visit to a modern wheat farm.

Energy Equivalents of a Krispy Kreme Factory – How many homes can you run on a donut?

Categories: thinktime
