- Everyone will bend over backwards for you if you’re famous, but if you’re not then you’ll be left out to die http://t.co/v7EvUWXorx 10:42:13, 2015-07-26
- Now this is real news. Shame that many don’t care because most of the victims are poor. http://t.co/CtkLyfnlnb 14:19:00, 2015-07-25
- Your taste in music says a lot about how you think: Cambridge study http://t.co/0rAmN3GqY8 10:42:05, 2015-07-25
- …and so the Age of Entitlement continues unapologetically http://t.co/3P7pUd0sC6 #auspol 18:27:04, 2015-07-24
- The effects of everyday sexism on young children http://t.co/2oswzu7bJH 14:19:01, 2015-07-24
- Your Smartphone Usage is Linked to Your Level of Depression http://t.co/qngN5Nwa6Q 10:42:00, 2015-07-24
- Country kids have better work ethic than urban, affluent kids; youth disconnected from the workforce http://t.co/tL9i9K0EOJ 16:33:16, 2015-07-21
- RT @TeenMogul: 8 online classes that will make you smarter about business http://t.co/sjz4hcQfSp http://t.co/X9xws6PBIW 19:14:49, 2015-07-20
- Australia one of the only advanced economies without dedicated youth entrepreneurship initiatives supported by govt http://t.co/TaYAJs8LMS 16:33:04, 2015-07-20
- Dropping out of education is more likely to kill you than smoking: study http://t.co/Vb1FctGmjD 14:19:05, 2015-07-20
No, I didn’t attend
- Raspberry Pi hacks
- Microservices – Why, what and how to get there
- Introduction to planning and running tech events
- Docker in production: Reality, not hype
- Continuous delivery and large microservice architectures: Reflections on Ioncannon
- Choose boring technology
- 9 Big-Picture Takeaways From OSCON as Open Source Goes Mainstream
- Heard Around the Web: OSCON 2015
- 2015 OSCON Interview collection
- Signals from OSCON 2015
This week I have been looking at the effect different speech samples have on the performance of Codec 2. One factor is microphone placement. In radio (from broadcast to two-way HF/VHF) we tend to use microphones placed close to our lips. In telephony, hands-free or more distant microphone placement has become common.
People trying FreeDV over the air have obtained poor results from using built-in laptop microphones, but good results from USB headsets.
So why does microphone placement matter?
Today I put this question to the codec2-dev and digital voice mailing lists, and received many fine ideas. I also chatted to such luminaries as Matt VK5ZM and Mark VK5QI on the morning drive time 70cm net. I’ve also been having an ongoing discussion with Glen, VK1XX, on this and other Codec 2 source audio conundrums.
A microphone is a bit like a radio front end:
We assume linearity (the microphone signal isn’t clipping).
Imagine we take exactly the same mic and try it 2cm and then 50cm away from the speaker's lips. As we move it away the signal power drops and (given the same noise figure) SNR must decrease.
Adding extra gain after the microphone doesn’t help the SNR, just like adding gain down the track in a radio receiver doesn’t help the SNR.
When we are very close to a microphone, the low frequencies tend to be boosted; this is known as the proximity effect. This is where the analogy to radio signals falls over. Oh well.
A microphone 50cm away picks up multi-path reflections from the room, laptop case, and other surfaces that start to become significant compared to the direct path. Summing a delayed version of the original signal will have an impact on the frequency response and add reverb – just like a HF or VHF radio signal. These effects may be really hard to remove.
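The distance/SNR argument above can be checked with back-of-envelope numbers. This is a sketch with made-up signal and noise powers; the 1/r² free-field fall-off and the gain figure are assumptions for illustration:

```python
import math

def db(power_ratio):
    """Power ratio expressed in decibels."""
    return 10 * math.log10(power_ratio)

signal, noise = 1e-3, 1e-6          # mic signal and noise powers at 2 cm (hypothetical)
snr_close = db(signal / noise)      # 30 dB

# Move the mic from 2 cm to 50 cm: free-field power falls as 1/r^2
atten = (0.02 / 0.50) ** 2
snr_far = db(signal * atten / noise)    # ~28 dB worse than up close

# Post-microphone gain amplifies signal and noise equally: SNR unchanged
gain = 100.0
snr_amplified = db((signal * atten * gain) / (noise * gain))
```

Whatever the actual powers are, the gain term cancels out of the last expression, which is the whole point: amplification after the microphone cannot buy back the lost SNR.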
Science in my Lounge Room 1 – Proximity Effect
I couldn’t resist – I wanted to demonstrate this model in the real world. So I dreamed up some tests using a couple of laptops, a loudspeaker, and a microphone.
To test the proximity effect I constructed a wave file with two sine waves at 100Hz and 1000Hz, and played it through the speaker. I then sampled using the microphone at different distances from a speaker. The proximity effect predicts the 100Hz tone should fall off faster than the 1000Hz tone with distance. I measured each tone power using Audacity (spectrum feature).
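For anyone wanting to reproduce the test, a two-tone wave file like this can be generated with a few lines of stdlib Python. The tone levels, sample rate, and duration here are my guesses, not necessarily what was used in the original experiment:

```python
import math
import struct
import wave

RATE = 8000       # samples per second; anything comfortably above 2 kHz works here
SECONDS = 5

frames = []
for n in range(RATE * SECONDS):
    t = n / RATE
    # two equal-amplitude tones at 100 Hz and 1000 Hz, kept below full scale
    s = 0.4 * math.sin(2 * math.pi * 100 * t) + 0.4 * math.sin(2 * math.pi * 1000 * t)
    frames.append(struct.pack('<h', int(s * 32767)))   # 16-bit signed little-endian

with wave.open('two_tone.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b''.join(frames))
```

Play the resulting file through the speaker, record at each mic distance, and compare the two tone powers in Audacity's spectrum view as described above.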
This spreadsheet shows the results over a couple of runs (levels in dB).
So in Test 1, we can see the 100Hz tone falls off 4dB faster than the 1000Hz tone. That seems a bit small, could be experimental error. So I tried again with the mic just inside the speaker aperture (hence -1cm) and the difference increased to 8dB, just as expected. Yayyy, it worked!
Apparently this effect can be as large as 16dB for some microphones. Radio announcers use it to add gravitas to their voice, e.g. leaning closer to the mic when they want to add drama.
In my case it means unwanted extra low frequency energy messing with Codec 2 with some closely placed microphones.
Science in my Lounge Room 2 – Multipath
So how can I test the multipath component of my model above? Can I actually see the effects of reflections? I set up my loudspeaker on a coffee table and played a 300 to 3000 Hz swept sine wave through it. I sampled close up and with the mic 25cm away.
The idea is to get a reflection off the coffee table. The direct and reflected wave will be half a wavelength out of phase at some frequency, which should cause a notch in the spectrum.
Let's take a look at the frequency response close up and at 25cm:
Hmm, they are both a bit of a mess. Apparently I don't live in an anechoic chamber. Hmmm, that might be handy for kids' parties. Anyway, I can observe:
- The signal falls off a cliff at about 1000Hz. Well that will teach me to use a speaker with an active cross over for these sorts of tests. It’s part of a system that normally has two other little speakers plugged into the back.
- They both have a resonance around 500Hz.
- The close sample is about 18dB stronger. Given both have the same noise level, that's 18dB better SNR than the far sample. Any additional gain after the microphone will increase the noise as much as the signal, so the SNR won't improve.
OK, let's look at the reflections:
A bit of Googling reveals reflections of acoustic waves from solid surfaces are in phase (not reversed 180 degrees). Also, the angle of incidence is the same as reflection. Just like light.
Now the microphone and speaker aperture are 16cm off the table, and the mic 25cm away. A couple of right angle triangles, a bit of Pythagoras, and I make the reflected path length 40.6cm. This means a path difference of 40.6 - 25 = 15.6cm. So when wavelength/2 = 15.6cm, we should get a notch in the spectrum, as the two waves will cancel. Now v=f(wavelength), and v=340m/s, so we expect a notch at f = 340/(2 x 0.156) = 1090Hz.
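The same calculation in a few lines of Python, using the image-source trick (reflect the speaker through the table surface, then the bounce path is just the straight line from the image to the mic):

```python
import math

v = 340.0    # speed of sound, m/s
h = 0.16     # speaker aperture and mic height above the table, m
d = 0.25     # direct path, m

# Image-source method: the table-bounce path equals the straight-line
# distance from the speaker's mirror image (2h below the mic line) to the mic.
reflected = math.sqrt(d**2 + (2 * h)**2)   # ~0.406 m
diff = reflected - d                        # ~0.156 m path difference
f_notch = v / (2 * diff)                    # first cancellation, ~1090 Hz
```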
Looking at a zoomed version of the 25cm spectrum:
I can see several notches: 460Hz, 1050Hz, 1120Hz, and 1300Hz. I’d like to think the 1050Hz notch is the one predicted above.
Can we explain the other notches? I looked around the room to see what else could be reflecting. The walls and ceiling are a bit far away (which means low freq notches). Hmm, what about the floor? It’s big, and it’s flat. I measured the path length directly under the table as 1.3m. This table summarises the possible notch frequencies:
Note that notches will occur at any frequency where the path difference is an odd number of half wavelengths, so wavelength/2, 3 x wavelength/2, 5 x wavelength/2, ... hence we get a comb effect along the frequency axis.
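For the floor bounce, with the measured 1.3m reflected path against the 25cm direct path, the comb works out as follows (same assumptions as above):

```python
v = 340.0                    # speed of sound, m/s
diff = 1.30 - 0.25           # floor-bounce path minus direct path, m
f1 = v / (2 * diff)          # first notch, ~162 Hz

# Notches repeat at odd multiples of the first: wavelength/2, 3*wavelength/2, ...
notches = [round((2 * k + 1) * f1) for k in range(5)]
# -> [162, 486, 810, 1133, 1457]
```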
OK I can see the predicted notch at 486Hz, and 1133Hz, which means the 1050 Hz is probably the one off the table. I can’t explain the 1300Hz notch, and no sign of the predicted notch at 810Hz. With a little imagination we can see a notch around 1460Hz. Hey, that’s not bad at all for a first go!
If I was super keen I’d try a few variations like the height above the table and see if the 1050Hz notch moves. But it’s Friday, and nearly time to drink red wine and eat pizza with my friends. So that’s enough lounge room acoustics for now.
How to break a low bit rate speech codec
Low bit rate speech codecs make certain assumptions about the speech signal they compress. For example the time varying filter used to transmit the speech spectrum assumes the spectrum varies slowly in frequency, and doesn’t have any notches. In fact, as this filter is “all pole” (IIR), it can only model resonances (peaks) well, not zeros (notches). Codecs like mine tend to fall apart (the decoded speech sounds bad) when the input speech violates these assumptions.
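The "peaks only" limitation is easy to demonstrate numerically: an all-pole H(z) = 1/A(z) has no zeros, so its magnitude response can resonate sharply but can never drop to a deep null. A small sketch with a hypothetical pole pair (radius ~0.92 at an angle corresponding to roughly 1000Hz at an assumed 8kHz sample rate; the coefficients are illustrative, not Codec 2's):

```python
import cmath
import math

def mag_db(w):
    """Magnitude of H(z) = 1/A(z) at digital frequency w (radians), in dB."""
    z = cmath.exp(1j * w)
    A = 1 - 1.3 / z + 0.845 / z**2   # poles at 0.65 +/- 0.65j, radius ~0.92
    return 20 * math.log10(abs(1.0 / A))

ws = [math.pi * k / 512 for k in range(1, 512)]
peak_w = max(ws, key=mag_db)
peak_hz = peak_w / math.pi * 4000    # map to Hz assuming fs = 8 kHz

# The response peaks near the pole angle (~1000 Hz) but never notches deeply:
# 1/A(z) could only reach zero where A(z) blows up, and it never does.
```

A spectral notch in the input (from multipath, say) is exactly the feature this filter structure cannot represent, so the quantised model fights the data and the decoded speech suffers.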
This helps explain why clean speech from a nicely placed microphone is good for low bit rate speech codecs.
Now Skype and (mobile) phones do work quite well in "hands free" mode, with rather distant microphone placement. I often use Skype with my internal laptop microphone. Why is this OK?
Well, the codecs used have a much higher bit rate, e.g. 10,000 bits/s rather than 1,000 bits/s. This gives them the luxury of coding, to some extent, arbitrary waveforms as well as speech. They employ algorithms like CELP that use a hybrid of model-based (like Codec 2) and waveform-based (like PCM) coding. So they faithfully follow the crappy mic signal, and don't fall over completely.
In Sep 2014 I had some interesting discussions around the effect of microphones, small speakers, and speech samples with Mike, OH2FCZ, who is an audio professional. Thanks Mike!
In previous years, attending the Linux Security Summit (LSS) has required full registration as a LinuxCon attendee. This year, LSS has been upgraded to a hosted event. I didn’t realize that this meant that LSS registration was available entirely standalone. To quote an email thread:
If you are only planning on attending The Linux Security Summit, there is no need to register for LinuxCon North America. That being said, you will not have access to any of the booths, keynotes, breakout sessions, or breaks that come with the LinuxCon North America registration. You will only have access to The Linux Security Summit.
Thus, if you wish to attend only LSS, then you may register for that alone, at no cost.
There may be a number of people who registered for LinuxCon but who only wanted to attend LSS. In that case, please contact the program committee at lss-pc_AT_lists.linuxfoundation.org.
Apologies for any confusion.
Today, for whatever reason, inside a venv inside a brand new Ubuntu 14.04 install, I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman).
mrda@host:~$ python -c 'import pywsman'
mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?

Let's try something else that's installed system-wide:

(venv)mrda@host:~$ python -c 'import six' # Works

Why does six work, and pywsman not?

(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/six.py
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/pywsman.py
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/_pywsman.so
The only thing that comes to mind is that pywsman wraps a .so (the _pywsman.so above), while six is pure Python.
A work-around is to tell venv that it should use the system-wide install of pywsman, like this:
# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv

# Now start over
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!
When I was starting out as a web designer, one of my chief joys was simply observing how my mentors went about their job—the way they prepared for projects, the way they organized their work. I knew that it would take a while for my skills to catch up to theirs, but I had an inkling that developing a foundation of good work habits was something that would stay with me throughout my career.
Many of those habits centered around creating a personal system for organizing all the associated bits and pieces that contributed to the actual code I wrote. These days as I mentor Bluecadet's dev apprentices, I frequently get asked how I keep all this information in my head. And my answer is always: I don't. It's simply not possible for me. I don't have a "memory palace" like you'd see onscreen in Sherlock (or described in Hilary Mantel's Wolf Hall). But I have tried a few things over the years, and what follows are a few habits and tools that have helped me.

Extend your memory
Remember this: you will forget. It may not seem like it, hammering away with everything so freshly-imprinted in your mind. But you will forget, at least long enough to drive you absolutely batty—or you’ll remember too late to do any good. So the trick is figuring out a way to augment your fickle memory.
The core of my personal memory system has remained fairly stable over the years: networked notes, lots of bookmarks, and a couple of buffer utilities. I've mixed and matched many different tools on top of those components, like a chef trying out new knives, but the general setup remains the same. I describe some OS X/iOS tools that I use as part of my system, but those are not a requirement and can be substituted with applications for your preferred operating system.

Networked notes
Think of these as breadcrumbs for yourself. You want to be able to quickly jot things down, true—but more importantly, you have to be able to find them once some time has passed.
I use a loose system of text notes, hooked up to a single folder in Dropbox. I settled on text for a number of reasons:
- It’s not strongly tied to any piece of software. I use nvALT to create, name, and search through most of my notes, but I tend to edit them in Byword, which is available on both OS X and iOS.
- It’s easily searchable, it’s extremely portable, and it’s lightweight.
- It’s easily backed up.
- I can scan my notes at the file system level in addition to within an app.
- It’s fast. Start typing a word in the nvALT search bar and it whittles down the results. I use a system of “tags” when naming my files, where each tag is preceded by an @ symbol, like so: @bluecadet. Multiple tags can be chained together, for example: @bluecadet @laphamsquarterly. Generally I use anywhere from one to four tags per note. Common ones are a project tag, or a subject (say, @drupal or @wordpress). So a note about setting up Drupal on a project could be named “@bluecadet @drupal @projectname Setup Notes.txt.” There are lots of naming systems. I used this nvALT 101 primer by Michael Schechter as a jumping-off point, but I found it useful to just put my tags directly into the filename. Try a few conventions out and see what sticks for you.
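The payoff of putting tags directly in the filename is that tag lookup is just a filename match. A toy sketch of the idea (the note names are invented, and the directory here is a temp dir standing in for the real Dropbox folder):

```python
import pathlib
import tempfile

# Stand-in for the notes folder; in real use this would be the Dropbox directory
notes = pathlib.Path(tempfile.mkdtemp())
for name in ["@bluecadet @drupal Setup Notes.txt",
             "@bluecadet @laphamsquarterly Deploy.txt",
             "@wordpress Snippets.txt"]:
    (notes / name).touch()

def with_tag(tag):
    """All notes whose filename carries the given @tag."""
    return sorted(p.name for p in notes.glob(f"*@{tag}*"))

print(with_tag("drupal"))     # ['@bluecadet @drupal Setup Notes.txt']
```

This is essentially what nvALT's search bar does as you type, which is why the convention stays useful even if you switch tools.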
What do I use notes for? Every time I run into anything on a project, whether it’s something that confuses me, or something I just figured out, I put that in a note. If I have a commonly-used snippet for a project (say, a deploy command), then I put that in a note too. I try to keep the notes short and specific—if I find myself adding more and more to a note I will often break it out into separate notes that are related by a shared tag. This makes it easier to find things when searching (or even just scanning the file directory of all the notes).
Later on those notes could form the basis for a blog post, a talk, or simply a lunch-and-learn session with my coworkers.

Scratch pad
I have one special note that I keep open during the day, a "scratch pad" for things that pop into my brain while I'm focusing on a specific task. (Ironically, this is a tip that I read somewhere and failed to bookmark.) These aren't necessarily things that are related to what I'm doing at that moment—in fact, they might be things that could potentially distract me from my current task. I jot a quick line in the scratch pad and when I have a break I can follow up on those items. I like to write this as a note in nvALT instead of in a notebook because I can later copy-and-paste bits and pieces into specific, tagged notes.

Bookmarking: Pinboard
So notes cover my stuff, but what about everyone else’s? Bookmarks can be extremely useful for building up a body of links around a subject, but like my text notes they only started to have value when I could access them anywhere. I save my bookmarks to Pinboard. I used to use Delicious, but after its near-death, I imported my links to Pinboard when a friend gave me a gift subscription. I like that Pinboard gives you a (paid) option to archive your bookmarks, so you can retrieve a cached copy of a page if link rot has set in with the original.
Anything that could potentially help me down the line gets tagged and saved. When I’m doing research in the browser, I will follow links off Google searches, skim them quickly, and bookmark things for later, in-depth reading. When I’m following links off Twitter I dump stuff to Pocket, since I have Pinboard set to automatically grab all my Pocket articles. Before I enabled that last feature, I had some links in Pocket and some in Pinboard, so I had to look for things in two separate places.
Whatever system you use, make sure it's accessible from your mobile devices. I use Pinner for iOS, which works pretty well with iOS 8's share sheets. Every few days I sit down with my iPad and sift through the links that are auto-saved from Pocket and add more tags to them.

Buffers: clipboard history and command line lookup
These last two tips are both very small, but they’ve saved me so much time (and countless keystrokes) over the years, especially given how often cut-and-paste figures into my job.
Find a clipboard history tool that works for you. I suggest using the clipboard history in your launcher application of choice (I use Launchbar since it has one built in, but Alfred has one as part of its Powerpack). On iOS I use Clips (although it does require an in-app purchase to store unlimited items and sync them across all your devices). Having multiple items available means less time spent moving between windows and applications—you can grab several items, and then paste them back from your history. I’m excited to see how the recently-announced multitasking features in iOS 9 help in this regard. (It also looks like Android M will have multiple window support.) If you don’t use a launcher, Macworld has a fairly recent roundup of standalone Mac apps.
If you use the command line bash shell, CTRL+R is your friend: it will allow you to do a string search through your recent commands. Hit CTRL+R repeatedly to cycle through multiple matches in your command history. When you deal with repetitive terminal commands like I do (deploying to remote servers, for instance), it's even faster than copying-and-pasting from a clipboard history. (zsh users: it looks like there are some key bindings involved.)

Finding your way
I like to tell Bluecadet’s dev apprentices that they should pay close attention to the little pieces that form the “glue” of their mentor’s process. Developing a personal way of working that transcends projects and code can assist you through many changes in roles and skills over the course of your career.
Rather than opting in to a single do-it-all tool, I’ve found it helpful to craft my own system out of pieces that are lightweight, simple, flexible, and low-maintenance. The tools I use are just layers on top of that system. For example, as I wrote this column I tested out two Markdown text editors without having to change how I organize my notes.
Your personal system may look very different from the one I’ve described here. I have colleagues who use Evernote, Google Docs, or Simplenote as their primary tool. The common thread is that they invested some time and found something that worked for them.
Binh Nguyen: Self Replacing Secure Code, our Strange World, Mac OS X Images Online, Password Recovery Software, and Python Code Obfuscation
If you're curious, I also looked at fully automated network defense (as in the CGC (Cyber Grand Challenge)) in all of my three reports, 'Building a Cloud Computing Service', 'Convergence Effect', and 'Cloud and Internet Security' (I also looked at a lot of other concepts such as 'Active Defense' systems which involve automated network response/attack, but there are a lot of legal, ethical, technical, and other conundrums that we need to think about if we proceed further down this path...). I'll be curious to see what the final implementations will be like...
If you've ever worked in the computer security industry you'll realise that it can be incredibly frustrating at times. As I've stated previously, it can sometimes be easier to get samples from countries under sanction than to obtain them legitimately (even in a professional setting, in a 'safe environment') for study. I find this perspective very difficult to understand, especially when search engines give independent researchers easy access to adequate samples, and when you can hardly defend against an attack system or piece of code if you (and many others around you) have little idea of how it works.
It's interesting how the West views China and Russia via diplomatic cables (WikiLeaks). They say that China is being overly aggressive particularly with regards to economics and defense. Russia is viewed as a hybrid criminal state. When you think about it carefully the world is just shades of grey. A lot of what we do in the West is very difficult to defend when you look behind the scenes and realise that we straddle such a fine line and much of what they do we also engage in. We're just more subtle about it. If the general public were to realise that Obama once held off on seizing money from the financial system (proceeds of crime and terrorism) because there was so much locked up in US banks that it would cause the whole system to crash would they see things differently? If the world in general knew that much of southern Italy's economy was from crime would they view it in the same way as they saw Russia? If the world knew exactly how much 'economic intelligence' seems to play a role in 'national security' would we think about the role of state security differently?
If you develop across multiple platforms you'll have discovered that it is just easier to have a copy of Mac OS X running in a Virtual Machine rather than having to shuffle back and forth between different machines. Copies of the ISO/DMG image (technically, Mac OS X is free for those who don't know) are widely available and as many have discovered most of the time setup is reasonably easy.
If you've ever lost your password to an archive, password recovery programs can save a lot of time. Most of the free password recovery tools deal only with a limited number of filetypes and passwords.
There are some Python bytecode obfuscation utilities out there but like standard obfuscators they are of limited utility against skilled programmers.
Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.
The competing product is EndNote. Frankly, EndNote belongs to a previous era of research methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.
Anyway, this article isn't a Zotero versus EndNote smackdown; there are plenty of those on the web. This article is to show how to configure Zotero's full text indexing for the Raspberry Pi and other Debian machines.

Installing Zotero
There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)
Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project, you may one day have trouble with Firefox, and someone will suggest clearing your Firefox profile, after which Firefox once again works fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry, before carefully trying to re-sync.
So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero on Firefox press the "Z" button, press the Gear icon, and select "Preferences" from the drop-down menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.
I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.
After making the change quit from Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files
Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.
Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.
The page does have instructions for manual configuration, but lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian
Install the pdftotext and pdfinfo programs:

$ sudo apt-get install poppler-utils
Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l
In the Zotero data directory create a symbolic link to the installed programs. The printed kernel-name and machine are part of the link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)
Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O redirect.sh https://raw.githubusercontent.com/zotero/zotero/4.0/resource/redirect.sh
$ chmod a+x redirect.sh
Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version
Start Firefox; Zotero's gear icon, "Preferences", "Search" should then report something like:

PDF indexing
pdftotext version 0.26.5 is installed
pdfinfo version 0.26.5 is installed
Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.
I recently wrote about how to have empathy for our teammates when working to make a great site or application. I care a lot about this because being able to understand and relate to others is vital to creating teams that work well together and makes it easier for us to reach people we don’t know.
I see a lot of talk about empathy, but I find it hard to take the more theory-driven talk and boil that down into things that I can do in my day-to-day work. In my last post, I talked about how I practice empathy with my team members, but after writing that piece I got to thinking about how I, as a developer in particular, can practice empathy with the users of the things I make as well.
Since my work is a bit removed from the design and user experience layer, I don’t always have interactions and usability front of mind while coding. Sometimes I get lost in the code as I focus on making the design work across various screen sizes in a compact, modular way. I have to continually remind myself of ways I can work to make sure the application will be easy to use.
To that end, there are things I've started thinking about as I code and even ways I've gone outside the traditional developer role to ensure I understand how people are using the software and sites I help make.

Accessibility
From a pure coding standpoint, I do as much as I can to make sure the things I make are accessible to everyone. This is still a work in progress for me, as I try to learn more and more about accessibility. Keeping the A11Y Project checklist open while I work means I can keep accessibility in mind. Because all the people who want to use what I’m building should be able to.
In addition to focusing on what I can do with code to make sure I'm thinking about all users, I've also tried a few other things.

Support
In a job I had a few years ago, the entire team was expected to be involved with support. One of the best ways to understand how people were using our product was to read through the questions and issues they were having.
I was quite nervous at first, feeling like I didn’t have the knowledge or experience to adequately answer user emails, but I came to really enjoy it. I was lucky to be mentored by my boss on how to write those support messages better, by acknowledging and listening to the people writing in, and hopefully, helping them out when I could.
Just recently I spent a week doing support work for an application while my coworker was on vacation, reminding me yet again how much I learn from it. Since this was the first time I’d been involved with the app, I learned about the ways our users were getting tripped up, and saw pitfalls which I may never have thought about otherwise.
As I’ve done support, I’ve learned quite a bit. I’ve seen browser and operating system bugs, especially on devices that I may not test or use regularly. I’ve learned that having things like receipts on demand and easy flows for renewal is crucial to paid application models. I’ve found out about issues when English may not be the users’ native language—internationalization is huge and also hard. Whatever comes up, I’m always reminded (in a good way!), that not everyone uses an application or computer in the same ways that I do.
For developers specifically, support work also helps jolt us out of our worlds and reminds us that not everyone thinks the same way, nor should they. I've found that while answering questions, or having to explain how to do certain tasks, I come to realizations of ways we can make things better. It's also an important reminder that not everyone has the technical know-how I do, so helping someone learn to use Fluid to make a web app behave more like a native app, or even just showing how to dock a URL in the OS X dock, can make a difference. And best of all? When you do help someone out, they're usually so grateful for it—it's always great to get the happy email in response.

Usability testing
Another way I’ve found to get a better sense of what users are doing with the application is to sit in on usability testing when possible. I’ve only been able to do this once, but it was eye opening. There’s nothing better than watching someone use the software you’re making, or in my case, stumble through trying to use it.
In the one instance where I was able to watch usability testing, I found it fascinating on several levels. We were testing a mobile website for an industry that has a lot of jargon. So, people were stumbling not just with the application itself, but also with the language—it wasn’t just the UI that caused problems, but the words the industry uses regularly that people didn’t understand. With limited space on a small screen, we’d shortened things up too much, and it was not working for many of the people trying to use the site.
Since I’m not doing user experience work myself, I don’t get the opportunity to watch usability testing often, but I’m grateful for the time I was able to, and I’m hopeful that I’ll be able to observe it again in the future. Like answering support emails, it puts you on the front lines with your users and helps you understand how to make things better for them.
Getting in touch with users, in whatever ways are available to you, makes a big difference in how you think about them. Rather than a faceless person typing away on a keyboard, users become people with names who want to use what you are helping to create, but they may not think exactly the same way you do, and things may not work as they expect.
Even though many of us have roles where we aren’t directly involved in designing the interfaces of the sites and apps we build, we can all learn to be more empathetic to users. This matters. It makes us better at what we do and we create better applications and sites because of it. When you care about the person at the other end, you want to write more performant, accessible code to make their lives easier. And when the entire team cares, not just the people who interact with users most on a day-to-day basis, then the application can only get better as you iterate and improve it for your users.
Linux Users of Victoria (LUV) Announce: LUV Main August 2015 Meeting: Open Machines Building Open Hardware / VLSCI: Supercomputing for Life Sciences
200 Victoria St. Carlton VIC 3053
Link: http://luv.asn.au/meetings/map
• Jon Oxer, Open Machines Building Open Hardware
• Chris Samuel, VLSCI: Supercomputing for Life Sciences
200 Victoria St. Carlton VIC 3053 (formerly the EPA building)
Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.
Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.
August 4, 2015 - 18:30
Each time I booted the network, a different IP address from the pool was being allocated (i.e. the next one in the DHCP address pool).
There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (including the same MAC address), but this was even worse: on closer examination, a different Client UID was being presented as part of the DHCPDISCOVER for the node each time. (Fortunately the NUC's BMC doesn't do this as well.)
So I couldn't really find a solution online, but the answer was there all the time in the man page - there's a cute little option, "ignore-client-uids true;", that ensures only the MAC address is used for DHCP lease matching, and not the Client UID. Turning this on means that on each deploy the NUC receives the same IP address - and not just for the node, but also for the BMC - so it works around the aforementioned bug as well. Woohoo!
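For reference, here's a minimal sketch of what that looks like in /etc/dhcp/dhcpd.conf (the subnet and range values here are illustrative, not the actual test network's):

```
# Illustrative test-network values; "ignore-client-uids" is the fix.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.150;

  # Match leases on MAC address alone, ignoring the Client UID
  # that this hardware changes on every DHCPDISCOVER.
  ignore-client-uids true;
}
```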
There's still one remaining problem, I can't seem to get a fixed IP address returned in the DHCPOFFER, I have to configure a dynamic pool instead (which is fine because this is a test network with limited nodes in it). One to resolve another day...
I’m a believer in self driving car technology, and predict it will have enormous effects, for example:
- Our cars currently spend most of the time doing nothing. They could be out making money for us as taxis while we are at work.
- How much infrastructure and frustration (home garage, driveways, car parks, finding a park) do we devote to cars that are standing still? We could park them a few km away in a “car hive” and arrange to have them turn up only when we need them.
- I can make interstate trips lying down, sleeping or working.
- Electric cars can recharge themselves.
- It throws personal car ownership into question. I can just summon a car on my smart phone then send the thing away when I’m finished. No need for parking, central maintenance. If they are electric, and driverless, then very low running costs.
- It will decimate the major cause of accidental deaths, saving untold misery. Imagine if your car knew the GPS coordinates of every car within 1000m, even if outside of visual range, like around a corner. No more t-boning, or even car doors opening in the path of my bike.
- Speeding and traffic fines go away, which will present a revenue problem for governments like mine that depend on the statistical likelihood of people accidentally speeding.
- My red wine consumption can set impressive new records as the car can drive me home and pour me into bed.
I think the time will come when computers do a lot better than we can at driving. The record of these cars in the US is impressive. The record for humans in car accidents is dismal (a leading cause of death).
We already have driverless planes (autopilot, anti-collision radar, autoland), that do a pretty good job with up to 500 lives at a time.
I can see a time (say 20 years) when there will be penalties (like a large insurance excess) if a human is at the wheel during an accident. Meat bags like me really shouldn’t be in control of 1000kg of steel hurtling along at 60 km/hr. Incidentally that’s roughly 139 kJ of kinetic energy. A 9mm bullet exits a pistol with about 0.519 kJ of energy. No wonder cars hurt people.
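That arithmetic is easy to check. The bullet figures below (a ~7.5 g 9mm round at ~372 m/s muzzle velocity) are typical published values, assumed here for illustration:

```python
def kinetic_energy_kj(mass_kg, speed_m_s):
    """KE = 1/2 * m * v^2, returned in kilojoules."""
    return 0.5 * mass_kg * speed_m_s ** 2 / 1000.0

car = kinetic_energy_kj(1000, 60 / 3.6)   # 1000 kg car at 60 km/h
bullet = kinetic_energy_kj(0.0075, 372)   # ~7.5 g 9mm round at ~372 m/s

print(round(car, 1), round(bullet, 3))    # 138.9 0.519
```

So the car at suburban speed carries over 250 times the energy of the bullet.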
However many people are concerned about “blue screens of death”. I recently had an email exchange on a mailing list, here are some key points for and against:
- The cars might be hacked. My response is that computers and micro-controllers have been in cars for 30 years. Hacking of safety critical systems (ABS or EFI or cruise control) is unheard of. However, unlike a 1980s EFI system, self driving cars will have operating systems and connectivity, so this does need to be addressed. The technology will (initially at least) be closed source, increasing the security risk. Here is a recent example of a modern car being hacked.
- Planes are not really “driverless”, they have controls and pilots present. My response is that long distance commercial aircraft are under autonomous control for the majority of their flying hours, even if manual controls are present. Given the large number of people on board an aircraft it is of course prudent to have manual control/pilot back up, even if rarely used.
- The drivers of planes are sometimes a weak link. As we saw last year and on Sep 11 2001, there are issues when a malicious pilot gains control. Human error is also behind a large number of airplane incidents, and most car accidents. It was noted that software has been behind some airplane accidents too – a fair point.
- Compared to aircraft the scale is much different for cars (billions rather than 1000s). The passenger payload is also very different (1.5 people in a car on average?), and the safety record of cars much much worse – it’s crying out for improvement via automation. So I think automation of cars will eventually be a public safety issue (like vaccinations) and controls will disappear.
- Insurance companies may refuse a claim if the car is driverless. My response is that insurance companies will look at the actuarial data as that’s how they make money. So far all of the accidents involving Google driverless cars have been caused by meat bags, not silicon.
I have put my money where my mouth is and invested a modest amount in Google shares based on my belief in this technology. This is also an ethical buy for me. I’d rather have some involvement in an exciting future that saves lives and makes the world a better place than invest in banks and mining companies which don’t.
- every single defense analyst knows that compromises had to be made in order to achieve a blend of cost effectiveness, stealth, agility, etc... in the F-22 and F-35. What's also clear is that once things get up close and personal, things mightn't be as clear cut as we're being told. I was of the impression that the F-22 would basically outdo anything and everything in the sky all of the time. It's clear based on training exercises that, unless the F-22s have been backing off, it may not be as phenomenal as we're being led to believe (one possible reason to deliberately back off is to not provide intelligence on the maximum performance envelope, providing less of a target for near peer threats with regards to research and engineering). There are actually a lot of low speed manoeuvres that I've seen a late model 3D-vectored Sukhoi perform that a 2D-vectored F-22 has not demonstrated. The F-35 is dead on arrival in many areas at the moment (definitely from a WVR perspective), as many people have stated. My hope and expectation is that it will have significant upgrades throughout its lifetime
F22 vs Rafale dogfight video
Dogfight: Rafale vs F22 (Close combat)
F-22 RAPTOR vs F-15 EAGLE
- in the past, public information/intelligence regarding some defense programs/equipment has been limited to reduce the chances of setting off an arms race. That way the side which has disseminated the mis-information can be guaranteed an advantage should there be a conflict. Here's the problem though: while some of this may be such, I doubt that all of it is. My expectation is that some of the intelligence leaks (many terabytes; some details of the breach are available publicly) regarding designs of the ATF (F-22) and JSF (F-35) programs are also causing some problems. They need to overcome technical problems as well as problems posed by previous intelligence leaks. Some of what is being said makes no sense as well. Most of what we're being sold on doesn't actually work (yet) (fusion, radar, passive sensors, identification friend-or-foe, etc...)...
- if production is really as problematic as they say it could be, without possible recourse, then the only thing left is to bluff. Deterrence is based on the notion that your opponent will not attack because you have a qualitative or quantitative advantage... Obviously, if there is an actual conflict we have a huge problem. We purportedly want to be able to defend ourselves should anything potentially bad occur. The irony is that our notion of self defense often incorporates force projection in far off, distant lands...
F22 Raptor Exposed - Why the F22 Was Cancelled
F-35 - a trillion dollar disaster
4/6 F-35 JOINT STRIKE FIGHTER IS A LEMON
JSF 35 vs F18 superhornet
- we keep on giving Lockheed Martin a tough time regarding development and implementation but we keep on forgetting that they have delivered many successful platforms including the U-2, the Lockheed SR-71 Blackbird, the Lockheed F-117 Nighthawk, and the Lockheed Martin F-22 Raptor
f-22 raptor crash landing
- SIGINT/COMINT often produces a lot of false positives. Imagine overhearing every single conversation about you. Would you be concerned about your security? Probably more than usual, despite whatever you might say. As I said previously in posts on this blog, it doesn't make sense that we would have so much money invested in SIGINT/COMINT without a return on investment. I believe that we may be involved in far more 'economic intelligence' than we may be led to believe
- despite what is said about the US (and what they say about themselves), they do tell half-truths/falsehoods. They said that the Patriot missile defense systems were a complete success upon release, with ~80% success rates. Subsequent revisions of past performance have indicated an actual success rate of about half that. It has been said that the US has enjoyed substantive qualitative and quantitative advantages over Soviet/Russian aircraft for a long time. Recently released data seems to indicate that it is closer to parity (not 100% sure about the validity of this data) when pilots are properly trained. There seem to be indications that Russian pilots may have been involved in conflicts where they shouldn't have been, or weren't known to be involved...
- the irony between the Russians and the US is that each denies the other's technology is worth pursuing, and yet time seems to indicate otherwise. A long time ago Russian scientists didn't bother with stealth because they thought it was overly expensive without enough of a gain (especially in light of updated sensor technology), and yet the PAK-FA/T-50 is clearly a test bed for such technology. Previously, the US denied that thrust vectoring was worth pursuing, and yet the F-22 clearly makes use of it
- based on some estimates that I've seen the F-22 may be capable of close to Mach 3 (~2.5 based on some of the estimates that I've seen) under limited circumstances
- people keep on saying maintaining a larger, indigenous defense program is simply too expensive. I say otherwise. Based on what has been leaked regarding the bidding process many people basically signed on without necessarily knowing everything about the JSF program. If we had more knowledge we may have proceeded a little bit differently
- a lot of people who would/should have classified knowledge of the program are basically implying that it will work and will give us a massive advantage given more development time. The problem is that there is so much core functionality that is so problematic that this is difficult to believe...
- the fact that pilots are being briefed not to allow for particular circumstances tells us that there are genuine problems with the JSF
- judging by the opinions in the US military many people are guarded regarding the future performance of the aircraft. We just don't know until it's deployed and see how others react from a technological perspective
- proponents of the ATF/JSF programs keep on saying that since you can't see it, you can't shoot it. If that's the case, I just don't understand why we don't push up development of 5.5/6th gen fighters (stealth drones, basically) and run a hybrid force composed of ATF, JSF, and armed drones (some countries, including France, are already doing this). Drones are somewhat of a better known quantity and, without life support issues to worry about, should be able to go head to head with any manned fighter even with limited AI and computing power. Look at the following videos and you'll notice that the pilot is right on the physical limit in a 4.5 gen fighter during an exercise with an F-22. A lot of stories are floating around indicating that the F-22 enjoys a big advantage but that under certain circumstances it can be mitigated. Imagine going up against a drone where you don't have to worry about the pilot blacking out or pilot training (incredibly expensive; experience has also told us that pilots need genuine flight time, not just simulation time, to maintain their skills), which may have a hybrid propulsion system (for momentary speed changes/bursts, beyond those provided by afterburner systems, to avoid being hit by a weapon or acquired by a targeting system), and which has more space for weapons and sensors. I just don't understand how you would be better off with a mostly manned fleet as opposed to a hybrid fleet unless there are technological/technical issues to worry about (I find this highly unlikely given some of the prototypes and deployments that are already out there)
F22 vs Rafale dogfight video
Dogfight: Rafale vs F22 (Close combat)
F-22 RAPTOR vs F-15 EAGLE
- if I were a near peer aggressor, or looking to defend against 5th gen threats, I'd just go straight to 5.5/6th gen armed drone fighter development. You wouldn't need to fulfil all the requirements, and with the additional lead time you may be able to achieve not just parity but actual advantages, while possibly being cheaper with regards to TCO (Total Cost of Ownership). There are added benefits in going straight to 5.5/6th gen armed drone development. You don't have to compromise so much on design. The bubble shaped (or not) canopy to aid dogfighting affects aerodynamic efficiency and is actually one of the main causes of increased RCS (Radar Cross Section) on a modern fighter jet. The pilot and additional equipment (ejector seat, user interface equipment, life support systems, etc...) add a large amount of weight which can now be removed. With the loss in weight and increase in aerodynamic design flexibility you could save a huge amount of money. You also have a lot more flexibility in reducing RCS. For instance, some of the biggest reflectors of RADAR signals are the canopy (a film is used to deal with this) and the pilot's helmet, and one of the biggest supposed selling points of stealth aircraft is RAM coatings. They're incredibly expensive though, and wear out (look up the history of the B-2 Spirit and the F-22 Raptor). If you have a smaller aircraft to begin with, though, you have less area to paint, leading to lower costs of ownership while retaining the advantages of low observable technology
- the fact that it has already been speculated that 6th gen fighters may focus less on stealth and speed and more on weapons capability means that the US is aware of increasingly effective defense systems against 5th gen fighters such as the F-22 Raptor and F-35 JSF which rely heavily on low observability
- based on Wikileaks and other OSINT (Open Source Intelligence) everyone involved with the United States seems to acknowledge that they get a raw end of the deal to a certain extent but they also seem to acknowledge/imply that life is easier with them than without them. Read enough and you'll realise that even when classified as a closer partner rather than just a purchaser of their equipment you sometimes don't/won't receive much extra help
- if we had the ability I'd be looking to develop our own indigenous defense programs. At least when we make procurements we'd be in a better position to judge whether what was being presented to us was good or bad. We've been burnt on so many different programs with so many different countries... The only issue that I may see is that the US may attempt to block us from this. It has happened in the past with other supposed allies...
- I just don't get it sometimes. Most of the operations and deployments that the US and allied countries engage in are counter-insurgency and CAS, with significant parts of our operations involving mostly un-manned drones (armed or not). 5th gen fighters help, but they're overkill. Based on some of what I've seen, the only two genuine near peer threats are China and Russia, both of whom have known limitations in their hardware (RAM coatings/films, engine performance/endurance, materials design and manufacturing, etc...). Sometimes it feels as though the US looks for enemies that mightn't even exist. Even a former Australian Prime-Ministerial adviser said that China doesn't want to lead the world: "China will get in the way or get out of the way." The only thing I can possibly think of is that the US has intelligence that may suggest that China intends to project force further outwards (which it has done), or else they're overly paranoid. Russia is a slightly different story though... I'm guessing it would be interesting reading up more about how the US (overall) interprets Russian and Chinese actions behind the scenes (look up training manuals for allied intelligence officers for an idea of our interpretation of what their intelligence services are like)
- sometimes people say that the F-111 was a great plane but in reality there was no great use of it in combat. It could be the exact same circumstance with the F-35
- there could be a chance the aircraft could become like the B-2 and the F-22: seldom used because the actual, true cost of running it is horribly high. Also imagine the ramifications/blowback of losing such an expensive piece of machinery should there be a chance that it can be avoided
- defending against 5th gen fighters isn't easy, but it isn't impossible. Sensor upgrades, sensor blinding/jamming technology, integrated networks, artificial manipulation of weather (increased condensation levels increase RCS), faster and more effective weapons, layered defense (with strategic use of disposable (and non-disposable) decoys so that you can hunt down departing, basically unarmed fighters), experimentation with cloud seeding with substances that may help to speed up RAM coating removal or else reduce the effectiveness of stealth technology (the less you have to deal with, the easier your battles will be), forcing the battle into unfavourable conditions, etc... Interestingly, there have been some accounts/leaks of being able to detect US stealth bombers (B-1) lifting off from some US air bases from Australia using long range RADAR. Obviously, it's one thing to be able to detect and track versus achieving a weapons quality lock on a possible target
RUSSIAN RADAR CAN NOW SEE F-22 AND F-35 Says top US Aircraft designer
- following are rough estimates of the RCS of various modern defense aircraft. It's clear that while Chinese and Russian technology isn't entirely on par, they make the contest uncomfortably close. Estimates of the PAK-FA/T-50 indicate an RCS somewhere between the F-35 and F-22. Ultimately this comes back down to a sensor game. Rough estimates seem to indicate a slight edge to the F-22 in most areas. Part of me thinks that the RCS of the PAK-FA/T-50 must be propaganda; the other part leads me to believe that there is no way countries would consider purchase of the aircraft if it didn't offer a competitive RCS
- it's somewhat bemusing that you can't take pictures/videos of the JSF from certain angles in some of the videos mentioned here, and yet there are heaps of pictures of LOAN systems online, including high resolution images of the back end of the F-35 and F-22
http://defence.pk/threads/low-observable-nozzles-exhausts-on-stealth-aircraft.328253/
F 22 Raptor F 35 real shoot super clear
- people keep on saying that if you can't see stealth aircraft and you can't lock on to them, they'll basically be gone by the time you detect them. The converse is true. Without some form of targeting system, the fighter in question can't lock on to his target either. Once you understand how AESA RADAR works, you also understand that, given sufficient computing power, good implementation skills, etc..., it's subject to the same issue that faces the other side. You can't shoot what you can't see, and by targeting you give away your position. My guess is that detection of tracking by RADAR is somewhat similar to a lot of de-cluttering/de-noising algorithms (while making use of wireless communication/encryption & information theories as well) but much more complex... which is why there has been such heavy investment and interest in more passive systems (infra-red, light, sound, etc...)
F-35 JSF Distributed Aperture System (EO DAS)
Lockheed Martin F-35 Lightning II- The Joint Strike Fighter- Full Documentary.
4195: The Final F-22 Raptor
Rafale beats F 35 & F 22 in Flight International
Eurofighter Typhoon fighter jet Full Documentary
Eurofighter Typhoon vs Dassault Rafale
DOCUMENTARY - SUKHOI Fighter Jet Aircrafts Family History - From Su-27 to PAK FA 50
Green Lantern : F35 v/s UCAVs