I recently wrote about how to have empathy for our teammates when working to make a great site or application. I care a lot about this because being able to understand and relate to others is vital to creating teams that work well together and makes it easier for us to reach people we don’t know.
I see a lot of talk about empathy, but I find it hard to take the more theory-driven talk and boil that down into things that I can do in my day-to-day work. In my last post, I talked about how I practice empathy with my team members, but after writing that piece I got to thinking about how I, as a developer in particular, can practice empathy with the users of the things I make as well.
Since my work is a bit removed from the design and user experience layer, I don’t always have interactions and usability front of mind while coding. Sometimes I get lost in the code as I focus on making the design work across various screen sizes in a compact, modular way. I have to continually remind myself of ways I can work to make sure the application will be easy to use.
To that end, there are things I’ve started thinking about as I code and even ways I’ve gone outside the traditional developer role to ensure I understand how people are using the software and sites I help make.

Accessibility
From a pure coding standpoint, I do as much as I can to make sure the things I make are accessible to everyone. This is still a work in progress for me, as I try to learn more and more about accessibility. Keeping the A11Y Project checklist open while I work means I can keep accessibility in mind. Because all the people who want to use what I’m building should be able to.
In addition to focusing on what I can do with code to make sure I’m thinking about all users, I’ve also tried a few other things.

Support
In a job I had a few years ago, the entire team was expected to be involved with support. One of the best ways to understand how people were using our product was to read through the questions and issues they were having.
I was quite nervous at first, feeling like I didn’t have the knowledge or experience to adequately answer user emails, but I came to really enjoy it. I was lucky to be mentored by my boss on how to write those support messages better, by acknowledging and listening to the people writing in, and hopefully, helping them out when I could.
Just recently I spent a week doing support work for an application while my coworker was on vacation, reminding me yet again how much I learn from it. Since this was the first time I’d been involved with the app, I learned about the ways our users were getting tripped up, and saw pitfalls which I may never have thought about otherwise.
As I’ve done support, I’ve learned quite a bit. I’ve seen browser and operating system bugs, especially on devices that I may not test or use regularly. I’ve learned that having things like receipts on demand and easy flows for renewal is crucial to paid application models. I’ve found out about issues when English may not be the users’ native language—internationalization is huge and also hard. Whatever comes up, I’m always reminded (in a good way!), that not everyone uses an application or computer in the same ways that I do.
For developers specifically, support work also helps jolt us out of our worlds and reminds us that not everyone thinks the same way, nor should they. I’ve found that while answering questions, or having to explain how to do certain tasks, I come to realizations of ways we can make things better. It’s also an important reminder that not everyone has the technical know-how I do, so helping someone learn to use Fluid to make a web app behave more like a native app, or even just showing how to dock a URL in the OS X dock, can make a difference. And best of all? When you do help someone out, they’re usually so grateful for it—it’s always great to get the happy email in response.

Usability testing
Another way I’ve found to get a better sense of what users are doing with the application is to sit in on usability testing when possible. I’ve only been able to do this once, but it was eye opening. There’s nothing better than watching someone use the software you’re making, or in my case, stumble through trying to use it.
In the one instance where I was able to watch usability testing, I found it fascinating on several levels. We were testing a mobile website for an industry that has a lot of jargon. So, people were stumbling not just with the application itself, but also with the language—it wasn’t just the UI that caused problems, but the words the industry uses regularly that people didn’t understand. With limited space on a small screen, we’d shortened things up too much, and it was not working for many of the people trying to use the site.
Since I’m not doing user experience work myself, I don’t get the opportunity to watch usability testing often, but I’m grateful for the time I was able to, and I’m hopeful that I’ll be able to observe it again in the future. Like answering support emails, it puts you on the front lines with your users and helps you understand how to make things better for them.
Getting in touch with users, in whatever ways are available to you, makes a big difference in how you think about them. Rather than a faceless person typing away on a keyboard, users become people with names who want to use what you are helping to create, but they may not think exactly the same way you do, and things may not work as they expect.
Even though many of us have roles where we aren’t directly involved in designing the interfaces of the sites and apps we build, we can all learn to be more empathetic to users. This matters. It makes us better at what we do and we create better applications and sites because of it. When you care about the person at the other end, you want to write more performant, accessible code to make their lives easier. And when the entire team cares, not just the people who interact with users most on a day-to-day basis, then the application can only get better as you iterate and improve it for your users.
I don’t remember the exact moment I fell in love with the web, but I distinctly remember the website that had a lot to do with it: whatdoiknow.org. It was the personal website of Todd Dominey, a web designer from Georgia (I wrote that from memory without the help of Google).
By most colloquial measures, whatdoiknow.org wasn’t anything spectacular. But to me, it felt perfect: fixed two-column layout, black text, white background, great typography, lovely little details, and good writing. And, it had this background tile—check it out here, compliments of Wayback Machine (“Give it a second!” to load)—that tapped into some primordial part of the brain that erupts in a dopamine fireworks show at the sight of such things. I’m sure Π is somehow involved.
It was 2001 (maybe 2002?), I was in college, and I was considering transferring out of computer science into interactive media when I found Dominey’s site. I immediately knew I wanted to make sites like that (even if I wasn’t sure why), so I submitted my CODO documentation, and walked across campus to the Computer Graphics department.
The universe pushed back, of course.
“Inadvisable,” advised my academic advisor at the time, because “how can you make money designing things?” It was a different time: B.S.—Before Steve. User experience was in its third trimester, and we’d just started feeling its first kicks. The argument against transferring was effectively the same one faced by liberal arts majors when they tell their parents they are going to major in English and minor in Sociology. The data suggested that I would be far less attractive to employers. But I was drawn in.
I had no choice but to succumb to the first, but certainly not the last, Dominey moment of my professional life.
And thus I was introduced to HTML and CSS. It was love at first sight. But unlike a lot of kids who found their home with standards-based front end web design, I’d just walked into a hyperlinked world of Dominey moments. And over the years, I clicked—maybe “tapped” is the more appropriate word for our generation—and followed many of them.
One by one, the domineys were falling.
What’s fascinating about these moments, in hindsight, is they were inextricably linked. And much like the web, even when the links disappeared into the horizon as I moved to the next, they affected my career trajectory for the better. It feels magical that my ability to produce letterpress business cards (a Dominey moment) could have any bearing on my public relations skills for convincing the web community that Internet Explorer had had a heart transplant (a part of one of my past jobs). But there’s nothing really magical about that, is there?
After all, feeling excitement for something new, learning it, getting somewhat good at it, and broadening your horizons can positively affect your career, no matter what you do (h/t to every post written about the benefits of side projects).

Signal vs. signal
All that said, the highs from experiencing these moments were inevitably followed by their characteristic comedowns: a mixture of fear, challenge, prejudice, and even dogma. Take my foray into Flash, for instance.
For a standards-based web guy like me, embracing Flash felt like an either-or proposition as I looked around for mentorship. This phenomenon was a byproduct of the Flash vs. Web debate. You just didn’t come across web standardistas who were equally and openly into Flash—Shaun Inman and Dominey (who created SlideShowPro, a ubiquitous Flash-based slideshow app for a time) were prominent exceptions. Unsurprisingly, what Gruber writes about Apps vs. Web applies almost word for word to Flash vs. Web: “If you expand your view of ‘the web’ from merely that which renders inside the confines of a web browser to instead encompass all network traffic sent over HTTP/S, the explosive growth of native mobile apps is just another stage in the growth of the web. Far from killing it, native apps have made the open web even stronger.”
When you take these sort of necessary but paralyzing debates and couple them with the insecurity you feel in answering the tap of a Dominey moment, it doesn’t take much to talk yourself out of it. And that problem never really goes away.
Even today, having quit my job to go out on my own, the pushback is strong. What has changed though, thanks to a healthy amount of experience, reading, thinking, and counsel, is my ability to negotiate the arguments from others and myself as I embrace my next moment, inspired by the ongoing app revolution and the pleasure I derive from developing in Apple’s new language: Swift.
My ability to steer around the doldrums of doubt wasn’t always there, though. Long ago, and often along the way as well, I needed a little nudge from a friend or mentor to get me going.
So finally, on the topic of apps and Swift: let me give you a quick nudge.

A Swift nudge
If you’re a web programmer (or a budding programmer of any kind, or just someone who wants to get into programming), and looking at an app on your device (Instagram, Pinterest, Paper, and iMovie come to mind for me) made you think, “I want to build something like that,” I have this to say to you: learn Swift today.
Don’t think twice about it.
This must seem like a bizarre recommendation to make to a group of “people who make websites.” But I’ve never quite taken that tagline literally, and I know far too many of you feel the same (whether you’ll admit it in public or not). We are people who make the web, and as luck would have it we are people who particularly understand designing for mobile.
After immersing myself in it for a year, I find Swift to be deeply web in its soul. From its expressive, functional syntax and its interpretive playgrounds to its runtime performance and recent foray into open source, Swift is the web developer’s compiled language (with the mature and convenient safeguards characteristic of the compiled environment).
As you may expect, the universe will push back on you embracing this moment. It will manifest in myriad ways—from the age old question of web vs. native to debates about Swift performance, syntax, and Apple’s intentions.
Ignore that pushback. (Or don’t: do your due diligence, and form your own opinion.)
But whatever you do, if you’ve got that thumping feeling, don’t ignore it. And try to not forget that you’re here because long ago you too embraced a Dominey moment.
As far as I can tell, it’s worked out pretty well for all of us.

Footnotes
1. Full disclosure: I do not know nor have I ever met Todd Dominey (but I’d buy him a drink anytime).
When discussing performance, we tend to focus on technical challenges. But the social work—getting our colleagues to care about performance—is in fact the hardest and most crucial component of performance optimization. We simply can’t maintain the speed of our sites’ user experience over time without organizational leaders, design and developer peers, and clients who recognize the importance of performance work.
We have to show two groups of people why they should care about performance:
- Very Important People—like managers, executives, and clients—care about engagement metrics. They also take a lot of pride in their business and how it performs compared to competitors.
- Fellow developers and designers—your peers—care about their workflow and shipping good work. They want to create a user experience that they’re proud to show off.
The two groups definitely overlap; developers and designers care a lot about engagement metrics, for example. There are a number of great ways to showcase performance numbers, like Lonely Planet’s public dashboards. At Etsy, we even publish a quarterly report showing performance changes over time, and what triggered those changes. When convincing others to care about performance, however, it’s easy to get mired in numbers and graphs. Effecting a genuine culture shift is much more difficult.
Furthermore, performance is unfortunately a fairly invisible part of the user experience. When you’ve done a good job, users don’t even notice it! But slowness creates a really painful experience. In his book High Performance Browser Networking, Ilya Grigorik outlines metrics for humans’ perception of speed:
- 100 milliseconds of response time feels instantaneous to a user.
- 100–300 milliseconds creates a small, but perceptible, delay.
- 300 milliseconds–1 second feels like “the machine is working.”
- 1 second constitutes a noticeable delay for a user. Just a single second of wait time will interrupt thought flow, and the user will probably start mentally context-switching.
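Those thresholds are easier to internalize as a worked example than as a list. Here is a throwaway sketch that maps a response time onto Grigorik's buckets (the bucket labels are my paraphrases):

```python
def perceived_speed(ms):
    """Map a response time in milliseconds onto Grigorik's perception buckets."""
    if ms <= 100:
        return "instantaneous"
    elif ms <= 300:
        return "small but perceptible delay"
    elif ms <= 1000:
        return "the machine is working"
    else:
        return "noticeable delay; user starts context-switching"

for t in (80, 250, 700, 2500):
    print(t, "ms ->", perceived_speed(t))
```

Running a few of your own page-load timings through a function like this can make a dry metrics report suddenly feel personal.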
Put yourself in your fellow designers’ or developers’ shoes. How boring would it be for them to scan that list of numbers? If a Very Important Person doesn’t already care about performance as a part of the overall user experience, how would numbers (or charts that represent those numbers) help them start to care?
Take your message to the next level. Help those around you feel the impact that performance has on your overall user experience. Showing is so much more compelling than telling; showcasing the real user experience is much more powerful than staring at numbers or bar charts. So how can we show performance?

The power of visualizing performance
Etsy’s Performance team commandeered a wall monitor in the office to tell our performance story. Its fullscreen Chrome window historically has cycled through the typical performance-related information: metrics, graphs, some explanation. But this dashboard’s newest superpower is the ability to show videos of how etsy.com loads on different connection speeds, and how our users around the world experience our site.
Screenshot from WebPageTest.org with example settings selected for a performance test run.
To create this kind of dashboard, visit WebPageTest.org and kick off a test of your site.
You can choose:
- Test location. Compare the geographical location where your files are hosted to a location halfway around the world!
- Connection speed. Compare a cable connection to a shaped 3G connection!
- Capturing video. This is what we’ll be downloading next!
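If you want to kick off those tests automatically instead of through the web form, WebPageTest also exposes an HTTP API. The sketch below only builds the request URL; the parameter names (url, location, video, f, k) reflect WebPageTest's documented API as I understand it, so verify them against the current docs before relying on this:

```python
from urllib.parse import urlencode

def build_wpt_request(site, location="Dulles:Chrome", connectivity="3G",
                      api_key="YOUR_API_KEY"):
    """Build a WebPageTest 'runtest' URL that captures video.

    The location string combines a test agent, a browser, and a traffic-shaping
    profile (e.g. "Dulles:Chrome.3G"); valid names come from WebPageTest itself.
    """
    params = {
        "url": site,
        "location": f"{location}.{connectivity}",
        "video": 1,        # record the filmstrip/video for this run
        "f": "json",       # ask for a machine-readable response
        "k": api_key,
    }
    return "https://www.webpagetest.org/runtest.php?" + urlencode(params)

print(build_wpt_request("https://example.com"))
```

A script like this makes it easy to run the same page through several locations and connection speeds in one go, which is exactly what you need to collect the comparison videos discussed below.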
When you see the results of the test, including a waterfall chart and metrics explaining how long it took for your page to render, you can choose to watch a video of the test, or examine the test results in Filmstrip View.

How to share in an email
The Filmstrip View allows you to see a PNG of how your site loads over time, with thumbnails of each time interval. There are additional customization options for your Filmstrip so you can best convey the story of how your site is loading over time. Filmstrips are great for showing the performance of your site in a static medium like email.

Screenshot from WebPageTest.org in Filmstrip View, showing how the site loads over half-second intervals.

How to create a video dashboard
Meanwhile, on the Watch Video page, you have options for downloading and embedding your video. Go ahead and download it, then run additional tests set with other location and connection speed options, and grab those videos, too.
At Etsy, we’ve saved the videos to a central location and created a simple HTML page that reloads every few seconds to show the videos on repeat. Whenever Etsy’s homepage undergoes a major change, we update the videos manually.
So how can we target our two main audiences in need of convincing that performance work is important? Create different sets of videos that resonate.
- Compare your site to your competitors’ sites. Few emotions are more powerful than pride. How will the Very Important People feel if a competitor’s site is faster?
- Compare the before and after of performance work. Want to demonstrate the payoff of performance work to Very Important People? Need to convince your peers that performance work is worth celebrating? Show a page before its performance was improved, and have it load right next to a post-improvement video.
- Compare the mobile and desktop experiences, and the global experience. Help those around you feel how your users on different platforms, all over the world, experience your site. Choose pages that particular designers or developers you speak with are working on, touching again on that pride point.
Moving away from raw numbers and graphs and toward showing video may help you in your quest to create a culture of performance. Helping those around you feel the effects of performance on the overall user experience makes it much easier to get the buy-in you need to make your site fast—and keep it fast. Many people have had success convincing their peers and VIPs to care about performance using comparison videos; I can’t wait to hear your success story, too.
Introducing potential new users to a product can be tricky. Visitors are just passing by, only willing to interact if they can immediately see a new product’s value. And even if they do sign up, they may not come back. It’s not enough simply to capture an email address—you have to thoughtfully design a process that gets and holds attention, turning new visits into repeated engagement.
I encountered just how difficult onboarding can be when we decided to redesign our signup process at Blendle, a startup that sells individual articles from newspapers and magazines. Our old signup was full of bugs, falling behind on design changes made in the rest of the product, and, most importantly, both data and our users told us that new users were left confused after signing up. Our redesigns looked good, but I wanted to dig deeper. I wanted to know what makes a truly great onboarding process.

A 30-day experiment
I’m a builder. I like fast iterations. I like to ship. I don’t naturally gravitate toward long research processes, reports, complex numbers, and spreadsheets. So to collect as much research as quickly (and interestingly) as possible, I decided to inspect and review one onboarding experience every day for 30 days in a row. The questions I wanted to answer:
- Is there such a thing as the perfect onboarding experience?
- What are the most important things to consider when designing and building a smooth signup process?
- What can Blendle (and everyone else) learn from products that are already out there?
A month later, I had 30 blog posts reviewing well-known products like Facebook, Twitter, Vimeo, and Instagram, alongside newer products like Botify, Staply, and Meerkat. I took screenshots and wrote down my thoughts for each step in every product. I also built a spreadsheet to quantify the experiences through the number of steps, required fields, supported social media platforms, and other parameters. (While this method obviously would not yield significant and statistically correct data, it is informative, and a huge improvement over decisions based on gut feelings.)
With that data, I was able to create a framework for onboarding experiences in three stages: identifying, teaching, and engaging. With this framework, you can reevaluate your onboarding experience based on observable patterns, just as we did at Blendle—hopefully attracting, and retaining, more new users.

Stage 1: Identifying
If you stripped down an onboarding process as far as you could, what would be the one thing you would keep? The user needs a way to sign up—a form to collect a name or email address, a way for users to identify themselves.
While the signup form may be the most boring part of your product to work on, a skillfully crafted form can help users set up a new account in seconds, and even bring some joy to their day. Take a quick look at how GitHub is doing this:
Notice how I get feedback as I’m completing the fields. It is really gratifying to watch the red error flag turn green when my input is correct. It also gives me really specific feedback about the errors in my password.
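That kind of instant, specific feedback boils down to simple per-rule checks that each produce their own message. A sketch of the idea (the rules here are invented for illustration, not GitHub's actual policy; in a real signup form this logic would run client-side as you type):

```python
import re

def validate_password(pw):
    """Return a list of specific problems; an empty list means the password passes.

    These rules are illustrative examples, not any real site's policy.
    """
    problems = []
    if len(pw) < 8:
        problems.append("at least 8 characters")
    if not re.search(r"\d", pw):
        problems.append("at least one number")
    if not re.search(r"[a-z]", pw):
        problems.append("at least one lowercase letter")
    return problems

print(validate_password("abc1"))         # -> ['at least 8 characters']
print(validate_password("longenough1"))  # -> []
```

Because each rule reports its own message, the user sees exactly which requirement is still unmet, rather than a generic "invalid password" error.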
Some products request user identification as a single field, while others use a multipage form. Multiple studies have been done on the positive impact of having a shorter form and how form fields impact conversion. When encouraging potential users to set up a new account, is it really necessary to ask for a birth date or preferred country? Probably not—and short and sweet seems to get better results.
One way to reduce mandatory fields is to allow users to sign up with a service profile they already have. For example, Prismatic enables potential new users to sign up via their Twitter or Facebook accounts:

Crystal-clear first step: either fill out an email address or pick one of the social networks. No fluff.
Adding these social login buttons should always be a conscious choice. Research your audience to find out which networks they use, and keep a close watch on your conversion rate to see how it compares to a normal signup. Mailchimp, for example, found that social login buttons didn’t actually help their onboarding process, so they removed them.
Ultimately, the identifying phase is about determining how much—or ideally how little—information you need from users. Here’s the technique we used to figure out what questions to ask in Blendle’s onboarding:
- Start by making a list of everything you would like to know from new users.
- For every field, write down why you need this information.
- Next, write down why a user would benefit from sharing that data with you.
- Cross off everything that doesn’t show a clear benefit to both you and your users.
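The checklist above can be sketched as a tiny filter: keep a field only when both the "why we need it" and "why the user benefits" columns are filled in. (The field names and reasons below are hypothetical examples, not Blendle's actual list.)

```python
# Each candidate field: (name, why we need it, why the user benefits).
# An empty string means no clear reason was found for that column.
candidates = [
    ("email",      "account recovery, receipts",   "can reset password, gets receipts"),
    ("birth date", "",                             ""),
    ("country",    "marketing segmentation",       ""),
    ("interests",  "personalised recommendations", "sees relevant content immediately"),
]

# Cross off everything that doesn't show a clear benefit to BOTH sides.
keep = [name for name, ours, theirs in candidates if ours and theirs]
print(keep)  # -> ['email', 'interests']
```

Forcing yourself to write down both reasons, and deleting any field where either one stays blank, is what keeps the form honest.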
Alternatively, ask yourself this question: what is the absolute minimum you need to know from your users in order to let them in?

Evaluate your onboarding process
- Try to look at your form with a fresh pair of eyes. Are any fields potentially unclear or confusing to new users?
- Do you absolutely need to know everything you’re asking new users? Could some fields be removed, or reserved for later in the use cycle?
- Test and validate your error messages. Do they explain what’s wrong and how your user can fix the problem?
- If a user triggers a validation error on one of the fields, is the data on all other fields preserved? Users don’t like to start over because of a single error.
- Can you include instant feedback for each field to mark progress and prevent wasted time? This is especially important for users on slower internet connections.
- Does it make sense to incorporate social media logins for your product?
Stage 2: Teaching
Teaching is the second layer of the onboarding experience. Guiding a new user’s first dive into your product helps ensure they aren’t overwhelmed, and that they know how to get the most value out of what you are offering.
The signup process at GitHub does this well, by keeping things simple and ending on this screen:
Nothing is mandatory here, but they’ve added explanatory content to show new users how to get started. It’s short and can be dismissed with a single click.
Instapaper’s onboarding uses Instapaper to explain how Instapaper works: they have folded their introduction into an article that appears in the user’s account after a signup:
Since Instapaper is targeted at people who like to read, this seems to be a good fit. It would be even better to have a trigger in place to suggest clicking to read the article, demonstrating the actions for using the service.
Slack’s onboarding takes that idea even further. An interactive tutorial via Slackbot, the service’s help entity, shows a new user how to complete their profile while also learning how to use the tool’s chat functions:
Slackbot’s lighthearted, personal tone creates a positive first experience while simultaneously filling out the user’s data fields.

Evaluate your onboarding process
- What are your product’s most important or most frequently used features?
- Which features are the most difficult to use?
- Which features might make a new user feel comfortable?
- Can the functionality of your product actually be used to introduce the product or demonstrate its features (à la Instapaper or Slack)?
- Which feature has the most impact on growing the product?
- Which feature best shows off the product’s value?
Stage 3: Engaging
Finding yourself as a new user in an application with thousands of options, buttons, settings, and tooltips can be overwhelming. Simultaneously, users who haven’t begun to use the product yet can end up in zero data state. This combination can make it really hard for users to discover the value of your product—and keep coming back to it.
This is an opportunity to separate a great onboarding process from a basic one. By asking a couple of strategic questions while signing up, it is possible to skip zero data state and make the user feel right at home at the end of creating their account.
For example, Meetup asks new users to select categories of interest at the end of the onboarding process:
Meetup can then use this data to recommend meetups that are most likely to appeal to the user.
The more data you collect during onboarding, the more you’ll be able to demonstrate value, and the more likely users will return. In other words: a good onboarding might be the very best thing you can do to set yourself up for success.
Triggers are another potential method for encouraging repeated use. In Hooked, Nir Eyal explains how to build habit-forming products using internal and external triggers. External triggers can be push notifications, emails, overheard conversations, phone calls, Facebook messages, or tweets. Internal triggers, such as wanting to check our inboxes to see if we have new email, originate in our own brains. (If you manage to add an internal trigger, people will return to your product without you having to nudge them.)
A carefully crafted onboarding helps the user open their account while setting up meaningful external triggers to be used post-signup. Prismatic does this by asking the user to select topics of interest during signup, much like Meetup does:
They then use this data to send the new user a weekly email with a well-curated selection of articles, encouraging return visits.
Another way to potentially drive engagement is through social connections. Interactions with friends can be just as meaningful or even better than recommendations based on selected interests. Twitter does this by regularly emailing users with activity from their network. Facebook and Twitter both let users know when someone from their network signs up for their products. The more friends a user has on a shared product, the more relevant and personal their interactions are, and the more likely they are to continue to use the product.
This brings us to a difficult part in the framework: on the one hand, you want to eliminate as many form fields as possible. On the other hand, you want to collect users’ data and expand their networks in order to enhance their experience. You can balance these competing goals by clearly communicating why you need each piece of data. Keep the complete process as short as possible, but spread out separate questions over multiple pages. Make each step as enjoyable as possible; you might even try adding some meaningful, functional animation where this makes sense.

Evaluate your onboarding process
- What information do you need in order to eliminate zero data state?
- What sorts of external triggers can you build into the product?
- What data does the user need to share in order to set up meaningful triggers?
- Can you integrate activity from users’ friends or network to create additional triggers?
- If you need to collect a lot of data to provide better engagement, can you spread your fields across multiple pages or signup stages? Can you present the information requests in a more engaging way?
Blendle’s new onboarding process is doing well, with a steady conversion rate of 5 percent (if you’re interested, I wrote a review). Of course, this is not enough for us; we’re continuing to improve and explore how we can keep delivering a delightful experience to our new users. The three-stage framework helps us check our design decisions going forward.
Onboarding experiences should be continuously improved and updated, just like any other aspect of a product. The fourth product I reviewed during my experiment was Twitter; I hadn’t seen their onboarding since signing up for my account five years ago, and the process felt like it hadn’t changed since. Facebook seems to have the exact same problem. This was a common pattern I saw: the product may be moving forward with great speed, but the onboarding process is left to collect dust because it isn’t deemed as worthy as other, more visible design concerns.
Take the time to go back to your own product onboarding experience and evaluate where improvements can be made—especially if you haven’t looked at it in a while. Use the questions in the framework to examine how you’re dealing with identifying new users, teaching them how to use your product, and engaging them to return. By reevaluating your process—and ensuring that it continually evolves—you can continue to improve users’ experience with your product from the moment they first encounter it.
OK, so FreeDV 700 was released a few weeks ago and I’m working on some ideas to improve it. Especially those annoying R2D2 noises due to bit errors at low SNRs.
I’m trying some ideas to improve the speech quality without the use of Forward Error Correction (FEC).
Speech coding is the art of “what can I throw away?”. Speech codecs remove as much redundant information as they can. Hopefully, with what’s left, you can still understand the reconstructed speech.
However there is still a bit of left over redundancy. One sample of a model parameter can look a lot like the previous and next sample. If our codec quantisation was really clever, adjacent samples would look like noise. The previous and next samples would look nothing like the current one. They would be totally uncorrelated, and our codec bit rate would be minimised.
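One quick way to see that leftover redundancy is to measure the lag-1 correlation of a parameter track: how much each sample looks like the next. This toy sketch (my own illustration, not Codec 2 internals) compares a slowly varying parameter with the uncorrelated track that ideal quantisation would leave behind:

```python
import math
import random

def lag1_corr(x):
    """Pearson correlation between x[n] and x[n+1]."""
    a, b = x[:-1], x[1:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((i - ma) * (j - mb) for i, j in zip(a, b))
    sd_a = math.sqrt(sum((i - ma) ** 2 for i in a))
    sd_b = math.sqrt(sum((j - mb) ** 2 for j in b))
    return cov / (sd_a * sd_b)

random.seed(1)

# A slowly varying "model parameter": each sample close to the last.
slow = [0.0]
for _ in range(999):
    slow.append(0.95 * slow[-1] + random.gauss(0, 1))

# An uncorrelated track, as really clever quantisation would leave it.
noise = [random.gauss(0, 1) for _ in range(1000)]

print(round(lag1_corr(slow), 2))   # typically close to 0.95
print(round(lag1_corr(noise), 2))  # typically close to 0
```

A correlation near 1 means the bit stream is still carrying predictable (redundant) information; a correlation near 0 means the codec has squeezed that redundancy out.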
This leads to a couple of different approaches to the problem of sending coded speech over a channel with bit errors:
The first, conventional approach is to compress the speech as much as we can. This lowers the bit rate but makes the coded speech very susceptible to bit errors; one bit error can make a lot of speech sound bad. So we insert Forward Error Correction (FEC) bits, raising the overall bit rate (not so great), but protecting the delicate coded speech bits.
This is also a common approach for sending data over dodgy channels. For data, we cannot tolerate any bit errors, so we use FEC, which can correct every error (or die trying).
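To make the conventional approach concrete, here's the simplest possible FEC sketch in Python: a (3,1) repetition code with majority-vote decoding. This is purely illustrative – it is not a code FreeDV uses – but it shows the trade-off: one channel error per group is corrected, at the cost of tripling the bit rate.

```python
# The conventional approach: protect fragile coded bits with FEC at
# the cost of a higher bit rate.  Illustrated with a (3,1) repetition
# code -- purely a toy example, not what FreeDV actually uses.

def fec_encode(bits):
    """Repeat each bit three times, tripling the bit rate."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each group of three received bits."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
tx = fec_encode(msg)          # 3x the bit rate
rx = tx[:]
rx[1] ^= 1                    # channel flips one bit
assert fec_decode(rx) == msg  # single error per group is corrected
```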
However speech is not like data. If we get a click or a pop in the decoded speech we don’t care much. As long as we can sorta make out what was said. Our “Brain FEC” will then work out what the message was.
Which leads us to another approach. If we leave a little redundancy in the coded speech, we can use it to help correct, or at least smooth out, the received speech. Remember that for speech it doesn’t have to be perfect – near enough is good enough. That can be exploited to get us gain over a system that uses FEC.
Turns out that in the Bit Error Rate (BER) ranges we are playing with (5-10%) it’s hard to get a good FEC code. Many of the short ones break – they introduce more errors than they correct. The really good ones are complex with large block sizes (1000s of bits) that introduce unacceptable delay. For example at 700 bit/s, a 7000 bit FEC codeword is 10 seconds of coded speech. Ooops. Not exactly push to talk. And don’t get me started on the memory, MIPs, implementation complexity, and modem synchronisation issues.
These ideas are not new, and I have been influenced by some guys I know who have worked in this area (Philip and Wade if you’re out there). But not influenced enough to actually look up and read their work yet, lol.
So the idea is to exploit the fact that each codec model parameter changes fairly slowly. Another way of looking at this is the probability of a big change is low. Take a look at the “trellis” diagram below, drawn for a parameter that is represented by a 2 bit “codeword”:
Let’s say we know our current received codeword at time n is 00. We happen to know it’s fairly likely (50%) that the next received bits at time n+1 will be 00. A 11, however, is very unlikely (0%), so if we receive a 11 after a 00 there is very probably an error, which we can correct.
The model I am using works like this:
- We examine three received codewords: the previous, current, and next.
- Given a received codeword we can work out the probability of each possible transmitted codeword. For example we might BPSK modulate the two bit codeword 00 as -1 -1. However when we add noise the receiver will see -1.5 -0.25. So the receiver can then say, well … it’s most likely -1 -1 was sent, but it also could have been a -1 1, and maybe the noise messed up the last bit.
- So we work out the probability of each sequence of three codewords, given the probability of jumping from one codeword to the next. For example here is one possible “path”, 00-11-00:
total prob =
(prob a 00 was sent at time n-1) AND
(prob of a jump from 00 at time n-1 to 11 at time n) AND
(prob a 11 was sent at time n) AND
(prob of a jump from 11 at time n to 00 at time n+1) AND
(prob a 00 was sent at time n+1)
- All possible paths of the three received values are examined, and the most likely one chosen.
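The whole model above can be sketched in a few lines of Python (again a toy, not the actual trellis.m – the noise variance and the transition table are made-up illustrative numbers). It computes the likelihood of each codeword from the received BPSK symbols, multiplies in the transition probabilities along each of the 64 possible 3-codeword paths, and picks the middle codeword of the winner:

```python
# Minimal sketch of the three-codeword path search described above.
# Codewords are 2 bits, BPSK mapped as 0 -> -1, 1 -> +1.  The noise
# variance and transition table are illustrative assumptions, not
# values from the real trellis.m simulation.
import math
from itertools import product

CODEWORDS = [0b00, 0b01, 0b10, 0b11]

def symbol_likelihood(rx, cw, noise_var=0.5):
    """P(rx | cw) for a 2 bit codeword, Gaussian channel."""
    tx = [1.0 if (cw >> b) & 1 else -1.0 for b in (1, 0)]
    d2 = sum((r - t) ** 2 for r, t in zip(rx, tx))
    return math.exp(-d2 / (2 * noise_var))

# trans[prev][next]: probability of jumping from one codeword to the
# next, normally trained from a database of coded speech.
trans = {
    0b00: {0b00: 0.50, 0b01: 0.25, 0b10: 0.25, 0b11: 0.00},
    0b01: {0b00: 0.25, 0b01: 0.50, 0b10: 0.00, 0b11: 0.25},
    0b10: {0b00: 0.25, 0b01: 0.00, 0b10: 0.50, 0b11: 0.25},
    0b11: {0b00: 0.00, 0b01: 0.25, 0b10: 0.25, 0b11: 0.50},
}

def decode_middle(rx_prev, rx_curr, rx_next):
    """Try all paths through three received codewords and return the
    most likely middle codeword."""
    best_prob, best_mid = -1.0, None
    for a, b, c in product(CODEWORDS, repeat=3):
        p = (symbol_likelihood(rx_prev, a) * trans[a][b] *
             symbol_likelihood(rx_curr, b) * trans[b][c] *
             symbol_likelihood(rx_next, c))
        if p > best_prob:
            best_prob, best_mid = p, b
    return best_mid

# Noise has pushed the middle codeword's second bit positive (a hard
# decision alone would say 01), but the neighbours are strongly 00,
# so the path search recovers 00.
print(decode_middle([-1.1, -0.9], [-0.3, 0.2], [-1.0, -1.2]))  # prints 0
```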
The transition probabilities are pre-computed using a training database of coded speech, although it would also be possible to measure them on the fly, training up to each speaker.
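Pre-computing the transition probabilities is just counting and normalising. A Python sketch (the `training` sequence here is a made-up stand-in for a real database of coded speech):

```python
# Train transition probabilities from a database of coded parameter
# values: count codeword-to-codeword jumps and normalise each row.
from collections import Counter

def train_transitions(training, nbits=2):
    nstates = 1 << nbits
    counts = [Counter() for _ in range(nstates)]
    for prev, nxt in zip(training, training[1:]):
        counts[prev][nxt] += 1
    # normalise each row into a probability distribution; fall back
    # to uniform for states never seen in training
    tp = []
    for row in counts:
        total = sum(row.values())
        tp.append([row[s] / total if total else 1.0 / nstates
                   for s in range(nstates)])
    return tp

# made-up stand-in for a real training database
training = [0, 0, 1, 1, 0, 0, 1, 2, 2, 1, 0]
tp = train_transitions(training)
print(tp[0])  # P(next | prev = 0) -> [0.5, 0.5, 0.0, 0.0]
```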
I think this technique is called maximum likelihood decoding.
Demo and Walk through
To test this idea I wrote a GNU Octave simulation called trellis.m
Here is a test run for a single trellis decode. The internal states are dumped for your viewing pleasure. You can see the probability calculations for each received codeword, the transition probabilities for each state, and the exhaustive search of all possible paths through the 3 received codewords. At the end, it gets the right answer: the middle codeword is decoded as a 00.
For convenience the probability calculations are done in the log domain, so rather than multiplies we can use adds.
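Why the log domain helps: a path probability is a product of many small numbers, which underflows for long paths, whereas a sum of log probabilities stays well behaved. A quick sketch (made-up per-step probabilities):

```python
# Path probabilities are products of many small numbers; working in
# the log domain turns multiplies into adds and avoids underflow.
import math

probs = [1e-3, 5e-2, 1e-4, 2e-1]   # made-up per-step probabilities

product = 1.0
for p in probs:
    product *= p                    # risks underflow for long paths

log_sum = sum(math.log(p) for p in probs)

# Both forms rank paths identically; here they agree numerically too.
assert math.isclose(math.exp(log_sum), product)
```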
Here is a plot of 10 seconds of a 4 bit LSP parameter:
You can see some segments where it is relatively stable, and others where it’s bouncing around. This is a mesh plot of the transition probabilities, generated from a small training database:
It’s pretty close to an “eye” (identity) matrix. For example, if you are in state 10, it’s fairly likely the next state will be close by, and less likely you will jump to a remote state like 0 or 15.
Here is test run using data from several seconds of coded speech:
loading training database and generating tp .... done
loading test database .... done
Eb/No: 3.01 dB nerrors 28 29 BER: 0.03 0.03 std dev: 0.69 1.76
We are decoding using both trellis based decoding and simple hard decision decoding. Note how the number of errors and the BER are the same? However the std dev (distance) between the transmitted and decoded codewords is much better for trellis based decoding. This plot shows 10 seconds of a 4 bit decoded parameter:
See how the trellis based decoding produces smaller errors?
Not all bit errors are created equal. The trellis based decoding favours small errors that have a smaller perceptual effect (we can’t hear them). Simple hard decision decoding has a random distribution of errors; sometimes you get the most significant bit of the binary codeword flipped, which is bad news. You can see this effect above: with a 4 bit codeword, a flipped MSB means a jump of +/- 8. These large errors are far less likely with trellis decoding.
Here are some samples that compare trellis based decoding to simple hard decision decoding, when applied to Codec2 at 700 bit/s on an AWGN channel using PSK. Only the 6 LSP parameters are tested (short term spectrum); no errors or correction are applied to the excitation parameters (voicing, pitch, energy).

Eb/No (dB)  BER   Trellis  Simple (hard dec)
big         0.00  Listen   Listen
3.0         0.02  Listen   Listen
0.0         0.08  Listen   Listen
At 3dB, the trellis based decoding removes most of the effects of bit errors, and it sounds similar to the no error reference. At 0dB Eb/No, the speech quality is improved, with some exceptions. Fast changes, like the “W” in double-you, and the “B” in Bruce become indistinct. This is because when the channel noise is high, the probability model favours slow changes in the parameters.
Still – getting any sort of speech at 8% bit error rates with no FEC is pretty cool.
These techniques could be applied to FreeDV 1600, improving the speech quality with no additional overhead. Further work is required to extend these ideas to all the codec parameters, such as pitch, energy, and voicing.
I need to train the transition probabilities with a larger database, or make it train in real time using off air data.
We could include other information in the model, like the relationship of adjacent LSPs, or how energy and pitch change slowly in strongly voiced speech.
Now 10% BER is an interesting, rarely explored area. The data guys start to sweat above 1E-6, and assume everyone else does. At 10% BER FEC codes don’t work well; you need a really long block size or a low FEC rate, and modems struggle due to synchronisation issues. However at 10% the Eb/No versus BER curves start to get flat, so a few dB either way doesn’t change the BER much. This suggests small changes in intelligibility (not much of a threshold effect). Like analog.
However for speech, we don’t need to correct all errors, we just need to make it sound like they are corrected. By leaving some residual redundancy in the coded speech parameters we can use probability models to correct errors in the decoded speech with no FEC overhead.
This work is another example of experimental work we can do with an open source codec. It combines knowledge of the channel, the demodulator and the codec parameters to produce a remarkable result – improved performance with no FEC.
This work is in its early stages. But the gains all add up – a few more dB here and there.
Binh Nguyen: Electronics (TV) Repair, Working at Amazon, and Dealing With a Malfunctioning Apple iDevice
- take precautions. If you've ever watched some of those guys on YouTube, you'll realise that they are probably amateur electricians and have probably never been shocked/electrocuted before. It's one thing to work with small electronic devices. It's an entirely different matter to be working with mains voltage. Be careful...
- a lot of the time electronic failure will occur gradually over time (although the amount of time can obviously vary drastically)
- don't just focus on repairing it so that power can flow through the circuit once more; it's possible that it will simply fail again. Home in on the problem area, and make sure everything's working. That way you don't have to keep dealing with other difficulties down the track
- it may only be possible to test some components out of circuit. While testing components with a multimeter will help, you may need to purchase more advanced and expensive diagnostic equipment to really figure out the true cause of the problem
- set up a proper test environment. Ideally, one where you have a separate circuit and where there are safety mechanisms in place to reduce the chances of a total blackout in your house and to increase your personal safety
- any information that you take from this is at your own risk. Please don't think that any of the information here will turn you into a qualified electronics technician or will allow you to solve most problems that you will face
- a lot of the time information on the Internet can be helpful but only applies to particular conditions. Try to understand and work the problem rather than just blindly following what other people do. It may save you a bit of money over the long term
Philips 32PFL5522D/05 - Completely dead (no power LED or signs of life) - Diagnosis and repair
https://www.youtube.com/all_comments?v=TrphsEw8slw - electronics repair is becoming increasingly un-economical. Parts may be impossible to find, and replacing the TV rather than fixing it may actually be cheaper (especially when the screen is cracked; it's almost certain that a replacement is going to cost more than the set itself). The only circumstances where it's likely to be worth it is if you have cheap spare parts on hand or the failure involves a relatively small, minor component. The other thing you should know is that while the device may be physically structured in such a way as to appear modularised, it may not fail in such a fashion. I've been reading about boards which fail but have no mechanism to stop the failure from bleeding into other modules, which means you end up in an infinite failure loop: replace one bad component with a good one, and the leftover, apparently good, component eventually fails and takes out the new, good board. The cycle continues until the technician realises this or news of such a design spreads. You may have to replace both boards at the same time, which then makes the repair un-economical
- spare parts can be extremely difficult to source or are incredibly expensive. Moreover, the quality of the replacement parts can vary drastically in quality. If at all possible work with a source of known quality. Else, ask for demo parts particularly with Asian suppliers who may provide them for free and as a means of establishing a longer term business relationship
- be careful when replacing parts. Try to do your best to replace like for like. Certain systems will operate in a degraded state if/when using sub-par replacements but will ultimately fail down the line
- use all your senses (and head) to track down a failure more quickly (sight and smell in particular, for burnt out components). Sometimes it may not be obvious where the actual failure is, as opposed to where it appears to be coming from. For instance, one set I looked at had a chirping power supply. It had actually suffered failures of multiple components, which made it appear/sound as though the transformer had failed. Replacement of all the relevant components (not the transformer) resulted in a functional power supply unit and stopped the chirping sound
- as with musical instruments, teardowns may be the best that you can get with regards to details of how a device should work
- components may be shared across different manufacturers. It doesn't mean that they will work if swapped, though. They could be using different versions of the same base reference board (similar to the way graphics and network cards rely on reference designs in the computing world)
Magnavox has a very similar layout to a similar size Philips LCD TV
Apparently, Amazon are interested in some local talent.
There are some bemusing tales of recruitment and the experience of working there though.
If your iPhone, iPad, or iPod touch doesn't respond or doesn't turn on
Identify your iPad model