thinktime

Pia Waugh: Exploring change and how to scale it

Planet Linux Australia - 13 hours 14 min ago

Over the past decade I have been involved in several efforts trying to make governments better. A key challenge I repeatedly see is people trying to change things without an idea of what they are trying to change to, trying to fix individual problems (a deficit view) rather than recognising and fixing the systems that created the problems in the first place. So you end up getting a lot of symptomatic relief and iterative improvements of antiquated paradigms without necessarily getting transformation of the systems that generated the problems. A lot of effort goes into applying traditional models of working, which often produces the same old results, so we also need to consider new ways to work, not just what needs to be done.

With life getting faster and (arguably) exponentially more complicated, we need to take a whole of system view if we are to improve ‘the system’ for people. People sometimes balk when I say this, thinking it too hard, too big or too embedded. But we made this, we can remake it, and if it isn’t working for us, we need to adapt like we always have.

I also see a lot of slogans used without the nuanced discussion they invite. Such (often ideological) assumptions can subtly play out without evidence, discussion or agreement on common purpose. For instance, whenever people say smaller or bigger government I try to ask what they think the role of government is, to have a discussion. Size is assumed to correlate to services, productivity or waste depending on your view, but shouldn’t we talk about what the public service should do, and then let the size be whatever is appropriate to do what is needed? People don’t talk about a bigger or smaller jacket or shoes; they get the right one for their needs, and the size can change over time as the need changes. Indeed, perhaps the public service of the future could be a dramatically different workforce: a smaller group of professional public servants complemented by a large, demographically representative group of part time citizens doing their self nominated and paid “civic duty year of service” as a form of participatory democracy, which would bring new skills and perspectives into governance, policy and programs.

We urgently need to think about the big picture, to collectively talk about the 50 or 100 year view for society, and only then can we confidently plan and transform the structures, roles, programs and approaches around us. This doesn’t mean we all have to agree on all things, but we do need to identify the common scaffolding upon which we can all build.

This blog post challenges you to think systemically, critically and practically about five things:

    • What future do you want? Not what could be a bit better, or what the next few years might hold, or how that shiny new toy you have could solve the world’s problems (policy innovation, data, blockchain, genomics or any tool or method). What is the future you want to work towards, and what does good look like? Forget about your particular passion or area of interest for a moment. What does your better life look like for all people, not just people like you?
    • What do we need to get there? What concepts, cultural values, paradigms and assumptions should we take with us, and what should we leave behind? What new tools do we need and how do we collectively design where we are going?
    • What is the role of gov, academia, other sectors and people in that future? If we could create a better collective understanding of our roles in society and some of the future ideals we are heading towards, then we would see a natural convergence of effort, goals and strategy across the community.
    • What will you do today? Seriously. Are you differentiating between symptomatic relief and causal factors? Are you perpetuating the status quo or challenging it? Are you being critically aware of your bias, of the system around you, of the people affected by your work? Are you reaching out to collaborate with others outside your team, outside your organisation and outside your comfort zone? Are you finding natural partners in what you are doing, and are you differentiating between activities worthy of collaboration and activities only of value to you (the former being ripe for collaboration, the latter less so)?
    • How do we scale change? I believe we need to consider how to best scale “innovation” and “transformation”. Scaling innovation is about scaling how we do things differently, such as the ability to take a more agile, experimental, evidence based, creative and collaborative approach to the design, delivery and continuous improvement of stuff, be it policy, legislation or services. Scaling transformation is about how we create systemic and structural change that naturally drives and motivates better societal outcomes. Each without the other is not sustainable or practical.
How to scale innovation and transformation?

I’ll focus the rest of this post on the question of scaling. I wrote this in the context of scaling innovation and transformation in government, but it applies to any large system. I also believe that empowering people is the greatest way to scale anything.

  • I’ll firstly say that openness is key to scaling everything. It is how we influence the system, how we inspire and enable people to individually engage with and take responsibility for better outcomes and innovate at a grassroots level. It is how we ensure our work is evidence based, better informed and better tested, through public peer review. Being open not only influences the entire public service, but the rest of the economy and society. It is how we build trust, improve collaboration, send indicators to vendors and influence academics. Working openly, open sourcing our research and code, being public about projects that would benefit from collaboration, and sharing most of what we do (because most of the work of the public service is not secretive by any stretch) is one of the greatest tools in trying to scale our work, our influence and our impact. Openness is also the best way to ensure both a better supply chain and a better demand for things that are demonstrably better.

A quick side note to those who argue that transparency isn’t an answer because not everyone has the tools to understand data/information/etc and hold others accountable: that doesn’t mean you don’t do transparency at all. There will always be groups or people naturally motivated to hold you to account, whether it is your competitors, clients, the media, citizens or even your own staff. Transparency is partly about accountability and partly about reinforcing a natural motivation to do the right thing.

Scaling innovation – some ideas:
  • Neutral, safe, well resourced and collaborative sandpits are critical for agencies to quickly test and experiment outside the limitations of their agencies (technical, structural, political, functional and procurement). Such places should be engaged with the sectors around them. Neutral spaces that take a systems view also start to normalise a systems view across agencies in their other work, which has huge ramifications for transformation as well as innovation.
  • Seeking and sharing – sharing knowledge, reusable systems/code, research, infrastructure and basically making it easier for people to build on the shoulders of each other rather than every single team starting from scratch every single time. We already have some communities of practice but we need to prioritise sharing things people can actually use and apply in their work. We also need to extend this approach across sectors to raise all boats. Imagine if there was a broad commons across all society to share and benefit from each other’s efforts. We’ve seen the success and benefits of Open Source Software, of Wikipedia, of the Data Commons project in New Zealand, and yet we keep building sector or organisational silos for things that could be public assets for public good.
  • Require user research in budget bids – this would require agencies to do user research before bidding for money, creating an incentive to build things people actually need. It would drive both a user centred approach to programs and the innovation necessary to shift from current practices. Treasury would require user research experts and a user research hub to contrast and compare over time.
  • Staff mobility – people should be supported to move around departments and business units to get different experiences and to share and learn. Not everyone will want to, but when people stay in the same job for 20 years, it can be harder to engage in new thinking. Exchange programs are good but again, if the outcomes and lessons are not broadly shared, then they are linear in impact (individuals) rather than scalable (beyond the individuals).
  • Support operational leadership – not everyone wants to be a leader, disruptor, maker, innovator or intrapreneur. We need a program to support such people in the context of operational leadership that isn’t reliant upon their managers putting them forward or approving it. Even just recognising leadership as something that doesn’t happen exclusively in senior management would be a huge cultural shift. Many managers will naturally want to keep great people to themselves, which can become stifling, and eventually we lose them. When people can work on meaningful, great stuff, they stay in the public service.
  • A public ‘Innovation Hub’ – if we had a simple public platform for people to register projects that they want to collaborate on, from any sector, we could stimulate and support innovation across the public sector. Projects that would benefit from collaboration would be surfaced, publicly visible and open for others to engage with, so it would support and encourage innovation across government. It would also provide a good pipeline for investment, and a way to stimulate and support real collaboration across sectors, which is substantially lacking at the moment.
  • Emerging tech and big vision guidance - we need a team, I suggest cross agency and cross sector, of operational people who keep their fingers on the pulse of technology to create ongoing guidance for New Zealand on emerging technologies, trends and ideas that anyone can draw from. For government, this would help agencies engage constructively with new opportunities, rather than no one having the time or motivation until emerging technologies come crashing down on them as urgent change programs. This could be captured in a constantly updated toolkit with distributed authorship to keep it real.
Scaling transformation – some ideas:
  • Convergence of effort across sectors – right now in many countries every organisation, and to a lesser degree every sector, is diverging in purpose and effort because there is no shared vision to converge on. We have myriad strategies, papers and guidance, but no overarching vision. If there were an overarching vision for New Zealand Aotearoa, for instance, co-developed with all sectors and the community – one that looks at what sort of society we want into the future and what role different entities have in achieving those ends – then we would have the possibility of natural convergence of effort and strategy.
    • Obviously when you have a cohesive vision, then you can align all your organisational and other strategies to that vision, so our (government) guidance and practices would need to align over time. For the public sector the Digital Service Standard would be a critical thing to get right, as is how we implement the Higher Living Standards Framework, both of which would drive some significant transformation in culture, behaviours, incentives and approaches across government.
  • Funding “Digital Public Infrastructure” – technology is currently funded as projects with start and end dates, and almost all tech projects across government are bespoke to particular agency requirements or motivations, so we build loads of technologies but very little infrastructure that others can rely upon. If we took all the models we have for funding other forms of public infrastructure (roads, health, education) and saw some types of digital infrastructure as public infrastructure, perhaps they could be built and funded in ways that are more beneficial to the entire economy (and society).
  • Agile budgeting – we need to fund small experiments that inform business cases, rather than starting with big business cases. Ideally we shouldn’t have multi 100 million dollar projects at all, because technology projects simply don’t cost that anymore, and anyone saying otherwise is trying to sell you something. If we collectively adopted an agile budgeting process, it would create a systemic impact on motivations, on design and development, on implementation, on procurement, on myriad things. It would also put more responsibility on agencies for the outcomes of their work in short, sharp cycles, and would create the possibility of pivoting early to avoid throwing good money after bad. This is key, as no transformative project truly survives the current budgeting model.
  • Gov as a platform/API/enabler (closely related to DPI above) – obviously making all government data, content, business rules (including but not just legislation) and transactional systems available as APIs for building upon across the economy is key. This is how we scale transformation across the public sector, because agencies are naturally motivated to deliver what they need to cheaper, faster and better, so when there are genuinely useful reusable components, agencies will reuse them. Agencies are now more naturally motivated to take an API driven modular architecture, which creates the bedrock for government as an API. Digital legislation (which is necessary for service delivery to be integrated across agency boundaries) would also drive huge transformation in regulation and compliance, as well as in government automation and AI.
  • Exchange programs across sectors – to share knowledge, but all done openly so as not to create perverse incentives or commercial capture. We also need to consider that large companies can often afford to jump through hoops and provide spare capacity, but small to medium sized companies cannot, so we’d need a pool of funding for exchange programs with experts from the small to medium enterprises that make up the large proportion of industry.
  • All of system service delivery evidence base – what you measure drives how you behave. Agencies are motivated to do only what they need to within their mandates and have very few all of system motivations. If we have an all of government anonymised evidence base of user research, service analytics and other service delivery indicators, it would create an accountability to all of system which would drive all of system behaviours. In New Zealand we already have the IDI (an awesome statistical evidence base) but what other evidence do we need? Shared user research, deidentified service analytics, reporting from major projects, etc. And how do we make that evidence more publicly transparent (where possible) and available beyond the walls of government to be used by other sectors?  More broadly, having an all of government evidence base beyond services would help ensure a greater evidence based approach to investment, strategic planning and behaviours.
Categories: thinktime

Tighter

Seth Godin - Sat 21st Apr 2018 20:04
Since the dawn of the industrial age, tighter has been the goal. A tighter system, with less slack. Tighter connection with customers. Even plastic surgeons deliver tighter skin. No one ever goes seeking more folds and flab. The thing is, tighter...        Seth Godin
Categories: thinktime

David Rowe: WaveNet and Codec 2

Planet Linux Australia - Sat 21st Apr 2018 19:04

Yesterday my friend and fellow open source speech coder Jean-Marc Valin (of Speex and Opus fame) emailed me with some exciting news. W. Bastiaan Kleijn and friends have published a paper called “Wavenet based low rate speech coding“. Basically they take the bit stream of Codec 2 running at 2400 bit/s, and replace the Codec 2 decoder with the WaveNet deep learning generative model.

What is amazing is the quality – it sounds as good as an 8000 bit/s wideband speech codec! They have generated wideband audio from the narrowband Codec 2 model parameters. Here are the samples – compare “Parametrics WaveNet” to Codec 2!

This is a game changer for low bit rate speech coding.

I’m also happy that Codec 2 has been useful for academic research (Yay open source), and that the MOS scores in the paper show it’s close to MELP at 2400 bit/s. Not bad for an open source codec written (more or less) by one person.

Now I need to do some reading on Deep Learning!

Reading Further

Wavenet based low rate speech coding
Wavenet Speech Samples

Categories: thinktime

The placebo ratchet

Seth Godin - Fri 20th Apr 2018 18:04
A placebo that works becomes more powerful. Which makes it more likely to work next time. It's that simple, but it's magic. Placebos work for two reasons: The confidence they create makes it more likely our body will respond, our...        Seth Godin
Categories: thinktime

OpenSTEM: NAPLAN and vocabulary

Planet Linux Australia - Fri 20th Apr 2018 17:04
It is the time of year when the thoughts of teachers of students in years 3, 5, 7 and 9 turn (not so) lightly to NAPLAN. I’m sure many of you are aware of the controversial review of NAPLAN by Les Perelman, a retired professor from MIT in the United States. Perelman conducted a similar […]
Categories: thinktime

Francois Marier: Using a Kenwood TH-D72A with Pat on Linux and ax25

Planet Linux Australia - Fri 20th Apr 2018 15:04

Here is how I managed to get my Kenwood TH-D72A radio working with Pat on Linux, using the built-in TNC and the AX.25 mode.

Installing Pat

First of all, download and install the latest Pat package from the GitHub project page.

dpkg -i pat_x.y.z_amd64.deb

Then, follow the installation instructions for the AX.25 mode and install the necessary packages:

apt install ax25-tools ax25-apps

along with the systemd script that comes with Pat:

/usr/share/pat/ax25/install-systemd-ax25-unit.bash

Configuration

Once the packages are installed, it's time to configure everything correctly:

  1. Power cycle the radio.
  2. Enable TNC in packet12 mode (band A*).
  3. Tune band A to VECTOR channel 420 (or 421 if you can't reach VA7EOC on simplex).
  4. Put the following in /etc/ax25/axports (replacing CALLSIGN with your own callsign):

    wl2k CALLSIGN 9600 128 4 Winlink
  5. Set HBAUD to 1200 in /etc/default/ax25.

  6. Download and compile the tmd710_tncsetup script mentioned in a comment in /etc/default/ax25:

    gcc -o tmd710_tncsetup tmd710_tncsetup.c
  7. Add the tmd710_tncsetup script in /etc/default/ax25 and use these command line parameters (-B 0 specifies band A, use -B 1 for band B):

    tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
  8. Start ax25 driver:

    systemctl start ax25.service
Connecting to a winlink gateway

To monitor what is being received and transmitted:

axlisten -cart

Then create aliases like these in ~/.wl2k/config.json:

{
  "connect_aliases": {
    "ax25-VA7EOC": "ax25://wl2k/VA7EOC-10",
    "ax25-VE7LAN": "ax25://wl2k/VE7LAN-10"
  },
}

and use them to connect to your preferred Winlink gateways.

Troubleshooting

If it doesn't look like ax25 can talk to the radio (i.e. the TX light doesn't turn ON), then it's possible that the tmd710_tncsetup script isn't being run at all, in which case the TNC isn't initialized correctly.

On the other hand, if you can see the radio transmitting but are not seeing any incoming packets in axlisten then double check that the speed is set correctly:

  • HBAUD in /etc/default/ax25 should be set to 1200
  • line speed in /etc/ax25/axports should be set to 9600
  • SERIAL_SPEED in tmd710_tncsetup should be set to 9600
  • radio displays packet12 in the top-left corner, not packet96

If you can establish a connection, but it's very unreliable, make sure that you have enabled software flow control (the -s option in tmd710_tncsetup).

If you can't connect to VA7EOC-10 on UHF, you could also try the VHF BCFM repeater on Mt Seymour, VE7LAN (VECTOR channel 65).

Categories: thinktime

Michael Still: Art with condiments

Planet Linux Australia - Thu 19th Apr 2018 19:04

Mr 15 just made me watch this video, it’s pretty awesome…

You’re welcome.

Categories: thinktime

A slow motion trainwreck

Seth Godin - Thu 19th Apr 2018 18:04
We like the flawed hero, bad behavior, tragedy and drama in our fictional characters. Batman and Deadpool sell far more tickets than Superman does. If we use social media to attract a crowd, we will, at some level, become a...        Seth Godin
Categories: thinktime

The words that work

Seth Godin - Wed 18th Apr 2018 18:04
We're bad at empathy. As a result, when we're arguing a point with someone, we tend to use words and images that work on us, not necessarily that help the other person. So, if you want to understand how to...        Seth Godin
Categories: thinktime

Michael Still: City2Surf 2018

Planet Linux Australia - Wed 18th Apr 2018 15:04

I registered for city2surf this morning, which will be the third time I’ve run in the event. In 2016 my employer sponsored a bunch of us to enter, and I ran the course in 86 minutes and 54 seconds. 2017 was a bit more exciting, because in hindsight I did the final part of my training and the race itself with a torn achilles tendon. Regardless, I finished the course in 79 minutes and 39 seconds — a 7 minute and 15 second improvement despite the injury.

This year I’ve done a few things differently — I’ve started training much earlier, mostly as a side effect to recovering from the achilles injury; and secondly I’ve decided to try and raise some money for charity during the run.

Specifically, I’m raising money for the Black Dog Institute. They were selected because I’ve struggled with depression on and off over my adult life, and that’s especially true for the last twelve months or so. I figure that raising money for a resource that I’ve found personally useful makes a lot of sense.

I’d love for you to donate to the Black Dog Institute, but I understand that’s not always possible. Either way, thanks for reading this far!

Categories: thinktime

David Rowe: Lithium Cell Amp Hour Tester and Electric Sailing

Planet Linux Australia - Wed 18th Apr 2018 09:04

I recently electrocuted my little sail boat. I built the battery pack using some second hand Lithium cells donated by my EV. However, after 8 years of abuse from my kids and me, those cells are of varying quality. So I set about developing an Amp-Hour tester to determine the capacity of the cells.

The system has a relay that switches a low value power resistor (OK some coat hanger wire) across the 3.2V cell terminals, loading it up at about 27A, roughly the cruise current for my e-boat. It’s about 0.12 ohms once it heats up. This gets too hot to touch but not red hot, it’s only 86W being dissipated along about 1m of wire. When I built my EV I used the coat hanger wire load trick to test 3kW loads, that was a bit more exciting!

The empty beer can in the background makes a useful insulated stand off. Might need to make more of those.

When I first installed Lithium cells in my EV I developed a charge controller. I borrowed a small part of that circuit – a two transistor flip flop – plus a Battery Management System (BMS) module:

Across the cell under test is a CM090 BMS module from EV Power. That’s the good looking red PCB in the photos, onto which I have tacked the circuit above. These modules have a switch that opens when the cell voltage drops beneath 2.5V.

Taking the base of either transistor to ground switches on the other transistor. In logic terms, it’s a “not set” and “not reset” operation. When power is applied, the BMS module switch is closed. The 10uF capacitor is discharged, so provides a momentary short to ground, turning Q1 off, and Q2 on. Current flows through the automotive relay, switching on the load to the battery.

After a few hours the cell discharges beneath 2.5V, the BMS switch opens and Q2 is switched off. The collector voltage on Q2 rises, switching on Q1. Due to the latching operation of the flip flop – it stays in this state. This is important, as when the relay opens, the cell will be unloaded and its voltage will rise again and the BMS module switch will close. In the initial design without a flip flop, this caused the relay to buzz as the cell voltage oscillated about 2.5V as the relay opened and closed! I need the test to stop and stay stopped – it will be operating unattended so I don’t want to damage the cell by completely discharging it.

The LED was inserted to ensure the base voltage on Q1 was low enough to switch Q1 off when Q2 was on (Vce of Q2 is not zero), and has the neat side effect of lighting the LED when the test is complete!

In operation, I point a cell phone taking time lapse video of the LED and some multi-meters, and start the test:

I wander back after 3 hours and jog-shuttle the time lapse video to determine the time when the LED came on:

The time lapse feature on this phone runs in 1/10 of real time. For example Cell #9 discharged in 12:12 on the time lapse video. So we convert that time to seconds, multiply by 10 to get “seconds of real time”, then divide by 3600 to get the run time in hours. Multiplying by the discharge current of 27(ish) Amps we get the cell capacity:

12:12 time lapse, 27*(12*60+12)*10/3600 = 55AH
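
Here’s the same arithmetic as a small Python sketch (the 10x time lapse speed up and 27A load current are the values from this test setup):

    def cell_capacity_ah(lapse_min, lapse_sec, speedup=10, current_a=27.0):
        """Convert a time lapse run time into cell capacity in Amp-Hours."""
        real_seconds = (lapse_min * 60 + lapse_sec) * speedup
        return current_a * real_seconds / 3600.0

    print(cell_capacity_ah(12, 12))  # Cell #9: 54.9, call it 55AH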

So this cell’s a bit low, and won’t be finding its way onto my boat!

Another alternative is a logging multimeter; one could even measure and integrate the discharge current over time. Or I could have just bought or borrowed a proper discharge tester, but where’s the fun in that?

Results

It was fun to develop, a few Saturday afternoons of sitting in the driveway soldering, occasional burns from 86W of hot wire, and a little head scratching while I figured out how to take the design from an expensive buzzer to a working circuit. Nice to do some soldering after months of software based DSP. I’m also happy that I could develop a transistor circuit from first principles.

I’ve now tested 12 cells (I have 40 to work through), and measured capacities of 50 to 75AH (they are rated at 100AH new). Some cells have odd behavior under load; dipping beneath 3V right at the start of the test rather than holding 3.2V for a few hours – indicating high internal resistance.

My beloved sail e-boat is already doing better. Last weekend, using the best cells I had tested at that point, I e-motored all day on varying power levels.

One neat trick, explained to me by Matt, is motor-sailing. Using a little bit of outboard power, the boat overcomes hydrodynamic friction (it gets moving in the water) and the sail is moved out of stall (like an airplane wing moving to just above stall speed). This means the boat moves a lot faster than under motor or sail alone in light winds. For example the motor was registering just 80W, but we were doing 3 knots in light winds. This same trick can be done with a stink-motor and dinosaur juice, but the e-motor is completely silent; we forgot it was on for hours at a time!

Reading Further

Electric Car BMS Controller
New Lithium Battery Pack for my EV
Engage the Silent Drive
EV Bugs

Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV April 2018 Workshop: Linux and Drupal mentoring and troubleshooting

Planet Linux Australia - Tue 17th Apr 2018 21:04
Start: Apr 21 2018 12:00
End: Apr 21 2018 16:00
Location: Room B2:11, State Library of Victoria, 328 Swanston St, Melbourne
Link: https://www.meetup.com/drupalmelbourne/events/qsvdwcyxgbcc/

As our usual venue at Infoxchange is not available this month due to construction work, we'll be joining forces with DrupalMelbourne at the State Library of Victoria.

Linux Users of Victoria is a subcommittee of Linux Australia.

April 21, 2018 - 12:00
Categories: thinktime

Powerful metrics with hidden variables

Seth Godin - Tue 17th Apr 2018 19:04
What factors lead to a search result showing up on page 1 or page 5 of Google? What about the popularity bar in iTunes? How does it work? Who decides what your salary is compared to the person down the...        Seth Godin
Categories: thinktime

Gary Pendergast: Introducing: Click Sync

Planet Linux Australia - Tue 17th Apr 2018 17:04

Chrome’s syncing is pretty magical: you can see your browsing history from your phone, tablet, and computers, all in one place. When you install Chrome on a new computer, it automatically downloads your extensions. You can see your bookmarks everywhere, it even lets you open a tab from another device.


There’s one thing that’s always bugged me, however. When you click a link, it turns purple, as all visited links should. But it doesn’t turn purple on your other devices. Google have had this bug on their radar for ages, but it hasn’t made much progress. There’s already an extension that kind of fixes this, but it works by hashing every URL you visit and sending them to a server run by the extension author: not something I’m particularly comfortable with.


And so, I wrote Click Sync!


https://chrome.google.com/webstore/detail/click-sync/occoadgobmeenmclllmpbnkfcgkjkoel


When you click a link, it’ll use Chrome’s inbuilt sync service to tell all your other computers to mark it as visited. If you like watching videos of links turn purple without being clicked, I have just the thing for you:


While you’re thinking about how Chrome syncs between all your devices, it’s good to set up a Chrome Passphrase, if you haven’t already. This encrypts your personal data before it passes through Google’s servers.


Unfortunately, Chrome mobile doesn’t support extensions, so this is only good for syncing between computers. If you run into any bugs, head on over to the Click Sync repository, and let me know!

Categories: thinktime

You'll pay a lot, but you'll get more than you pay for

Seth Godin - Mon 16th Apr 2018 19:04
That's as useful a freelancer marketing strategy as you can fit in a single sentence.        Seth Godin
Categories: thinktime

David Rowe: Testing HAB Telemetry Protocols

Planet Linux Australia - Mon 16th Apr 2018 09:04

On Saturday Mark and I had a pleasant day bench testing High Altitude Balloon (HAB) Telemetry protocols and demodulators.

Project Horus HAB flights use a low power transmitter to send regular updates of the balloons position and status. To date, this has been sent using RTTY, and demodulated using Fldigi, or a special version modified for HAB work called dl-Fldigi.

Lora is becoming common in HAB circles; however, I am confident we can do better using a custom protocol and well engineered – and most importantly, open source – modems. While very well designed and conveniently packaged, Lora is not magic – modem performance is defined by physics.

A few years ago, Mark and I developed and flight tested a binary protocol (Horus Binary) for HAB flights. We have dusted this off, and I’ve written a C callable API (horus_api.c) to make Horus RTTY and Binary easy to use. The plan is to release a cross platform GUI application that supports Horus Binary, so anyone with a SSB receiver can join in the fun of tracking Horus flights using Horus Binary.

A good HAB telemetry protocol works at low SNRs, and has fast updates to allow accurate positioning of the payload during the final descent. A way of measuring the performance is Packet Error Rate (PER) – how many telemetry packets get through at a given Signal to Noise Ratio (SNR).

So we generated some synthetic Horus RTTY and Binary packets at calibrated SNRs using GNU Octave simulation code (fsk_horus.m), then played the wave files through several modems.
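
To give a flavour of what “calibrated SNRs” involves, here is a small Python/numpy sketch of the idea (an illustration with assumed variable names, not the actual fsk_horus.m Octave code) of scaling white noise so the result sits at a requested Eb/No:

    import numpy as np

    def add_noise_at_ebno(tx, ebno_db, rb, fs):
        """Add white Gaussian noise to real modem samples tx to hit a
        target Eb/No, where rb is the bit rate and fs the sample rate."""
        ebno = 10 ** (ebno_db / 10.0)
        power = np.mean(tx ** 2)  # per-sample signal power
        # Real noise of variance sigma2 sampled at fs has No = 2*sigma2/fs,
        # and Eb = power/rb, so Eb/No = power*fs/(2*rb*sigma2).
        sigma2 = power * fs / (2.0 * rb * ebno)
        return tx + np.sqrt(sigma2) * np.random.randn(len(tx))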

Here are the results (click for a larger version):

The X-axis is in Eb/No, which is proportional to SNR:

SNR = Eb/No(dB) + 10*log10(Rb/BW)

where Rb is the bit rate and BW is the noise bandwidth you want to measure SNR in. Eb/No is handy as it normalises for the effect of bit rate and noise bandwidth, making modem comparison easier.
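
As a worked example, here is that conversion in a few lines of Python; it reproduces (within rounding) the SNR row of the results table below:

    import math

    def ebno_to_snr(ebno_db, rb, bw=3000.0):
        """Convert Eb/No (dB) to SNR (dB) measured in noise bandwidth bw."""
        return ebno_db + 10 * math.log10(rb / bw)

    print(ebno_to_snr(13.0, 100))  # dl-Fldigi RTTY: about -1.8
    print(ebno_to_snr(4.5, 200))   # Horus Binary:  about -7.3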

Protocol          dl-Fldigi RTTY   Fldigi RTTY   Horus RTTY   Horus Binary
Eb/No (50% PER)   13.0             12.0          11.5         4.5
Rb (bits/s)       100              100           100          200
SNR (3000Hz, dB)  -1.7             -2.7          -3.2         -7.2
Packet Duration   6s               6s            6s           1.6s
Wave File         Listen           Listen        Listen       Listen

Discussion

The older dl-Fldigi is a few dB behind the more modern Fldigi. Our Horus RTTY and especially Binary protocols are doing very well. At the same bit rate (Eb/No curve), Horus Binary is 9dB ahead of dl-Fldigi, which is a very useful gain; at least double the Line of Sight (LOS) range, and equivalent to having nearly 10x the transmit power. The Binary packets are fast as well, allowing for rapid position updates in the final descent.

Trade offs are possible, for example if we slowed Horus Binary to 50 bits/s, its packet duration would be 6.4s (about the same as RTTY), however 50% PER would occur at an SNR of -13dB, a 15dB improvement over dl-Fldigi.

Reading Further

Project Horus
Binary Telemetry Protocol
All Your Modem are Belong To Us
SNR and Eb/No Worked Example

Categories: thinktime

Michael Still: On Selecting a Well Engaged Open Source Vendor

Planet Linux Australia - Sun 15th Apr 2018 23:04

Aptira is in an interesting position in the Open Source market, because we don’t usually sell software. Instead, our customers come to us seeking assistance with deciding which OpenStack to use, or how to embed ONAP into their nationwide networks, or how to move their legacy networks to the software defined future. Therefore, our most common role is as a trusted advisor to help our customers decide which Open Source products to buy.

(My boss would insist that I point out here that we do customisation of Open Source for our customers, and have assisted many in the past with deploying pure upstream solutions. Basically, we do what is the right fit for the customer, and aren’t obsessed with fitting customers into pre-defined moulds that suit our partners.)

That makes it important that we recommend products from companies that are well engaged with their upstream Open Source communities. That might be OpenStack, or ONAP, or even something like Open Daylight. This raises the obvious question – what makes a company well engaged with an upstream project?

Read more over at my employer’s blog

Categories: thinktime

Michael Still: Configuring docker to use rexray and Ceph for persistent storage

Planet Linux Australia - Sun 15th Apr 2018 21:04

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working…

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    rexray has been installed to /usr/bin/rexray

    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST

    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST

Which is of course horrid. What that script seems to have done is install a deb’d version of rexray based on an alien’d package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that allows local visibility
     and management from cloud and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)

If I was building anything more than a test environment I think I’d want to do a better job of installing rexray than this, so you’ve been warned.

Next to configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren’t mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the ceph we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                             100%   92    0.1KB/s   00:00
    ceph.conf                          100%  681    0.7KB/s   00:00
    ceph.client.admin.keyring          100%   63    0.1KB/s   00:00
    ceph.client.glance.keyring         100%   64    0.1KB/s   00:00
    ceph.client.cinder.keyring         100%   64    0.1KB/s   00:00
    ceph.client.cinder-backup.keyring  100%   71    0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd

You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph

And the rexray output sure made it look like it worked…

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f

    May 29 10:14:07 labosa systemd[1]: Started rexray.

Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error msg="error starting rex-ray" error.driver=ceph time=1496016848216

That’s because the service is called rbd it seems. So, the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd
    rbd:
      defaultPool: rbd

Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
      linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
      ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
      sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) \
      stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce

Now let’s make a rexray volume.

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
      --opt=size=1    # a size of 1 here means 1gb
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1

Let’s start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
      -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete
    c2dcc7bb2a88: Pull complete
    17e7a0445698: Pull complete
    9a61839a176f: Pull complete
    a1033d2f1825: Pull complete
    0d6792140dcc: Pull complete
    cd3adf03d6e6: Pull complete
    d79d216fd92b: Pull complete
    b3c25bdeb4f4: Pull complete
    02556e8f331f: Pull complete
    4bed508a9e77: Pull complete
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b

And now to prove that persistence works and that there’s nothing up my sleeve…

    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
      sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
      -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)

    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)

    mysql> use demo;
    Database changed

    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)

    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)

Now let’s re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
      -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
      sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
      -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)

    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)

So there you go.

Categories: thinktime

Michael Still: I think I found a bug in python’s unittest.mock library

Planet Linux Australia - Sun 15th Apr 2018 21:04

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we’ve used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that Python mocks are magical. A mock is an object where you can call any method name, and it will happily pretend it has that method, and return None. You can then later ask what “methods” were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein lies the problem — the mock object doesn’t know if you’re the code under test, or the code that’s making assertions. So, if you fat finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your code will pass.

Here’s an example:

#!/usr/bin/python3

from unittest import mock


class foo(object):
    def dummy(a, b):
        return a + b


@mock.patch.object(foo, 'dummy')
def call_dummy(mock_dummy):
    f = foo()
    f.dummy(1, 2)

    print('Asserting a call should work if the call was made')
    mock_dummy.assert_has_calls([mock.call(1, 2)])
    print('Assertion for expected call passed')
    print()

    print('Asserting a call should raise an exception if the call wasn\'t made')
    mock_worked = False
    try:
        mock_dummy.assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
    if not mock_worked:
        print('*** Assertion should have failed ***')
    print()

    print('Asserting a call where the assertion has a typo should fail, but '
          'doesn\'t')
    mock_worked = False
    try:
        mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
    print()
    if not mock_worked:
        print('*** Assertion should have failed ***')
        print(mock_dummy.mock_calls)
        print()


if __name__ == '__main__':
    call_dummy()

If I run that code, I get this:

$ python3 mock_assert_errors.py
Asserting a call should work if the call was made
Assertion for expected call passed

Asserting a call should raise an exception if the call wasn't made
Expected failure, Calls not found.
Expected: [call(3, 4)]
Actual: [call(1, 2)]

Asserting a call where the assertion has a typo should fail, but doesn't

*** Assertion should have failed ***
[call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn’t a thing, but we didn’t notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don’t really have a solution to this right now (I’m home sick and not thinking straight), but it would be interesting to see what other people think.
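
For what it’s worth, one partial mitigation might be to give mocks a spec, so that unknown attribute lookups fail loudly. Here’s a sketch of the idea using create_autospec (it only helps when there is a real object to spec against, and I haven’t chased down all the corner cases):

    #!/usr/bin/python3

    from unittest import mock

    def dummy(a, b):
        return a + b

    # create_autospec limits the mock to attributes that exist on either
    # dummy or the mock machinery itself, so unknown names raise.
    mock_dummy = mock.create_autospec(dummy)
    mock_dummy(1, 2)

    # The real assertion methods still work...
    mock_dummy.assert_has_calls([mock.call(1, 2)])

    # ...but a typo'd assertion now fails loudly instead of silently passing.
    try:
        mock_dummy.typo_assert_has_calls([mock.call(1, 2)])
    except AttributeError as e:
        print('Typo caught: %s' % e)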

Categories: thinktime

Michael Still: Python3 venvs for people who are old and grumpy

Planet Linux Australia - Sun 15th Apr 2018 21:04

I’ve been using virtualenvwrapper to make venvs for python2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do python3, and virtualenvwrapper just isn’t a thing over there as best as I can tell.

So how do I make a venv? It’s really not too bad…

First, install the dependencies:

    git clone git://github.com/yyuu/pyenv.git .pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc

Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot

You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot

Where system is the system installed python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate

I’ll probably write wrappers at some point so that this looks like virtualenvwrapper, but it’s good enough for now.

Categories: thinktime
