
thinktime

Tim Serong: It’s OK to be Wrong in Public

Planet Linux Australia - Wed 08th Jun 2016 03:06

I’ve spent a reasonably long time with computers. I’ve been doing something with either software or hardware (mostly software) for pretty close to three quarters of my current lifespan. I started when I was about 10, but (perhaps unsurprisingly) nobody was paying me for my work back then. Flash forward a few decades, and I have a gig I rather enjoy with SUSE, working on storage stuff.

OK, “yay Tim”. Enough of the backstory, what’s the point?

The point (if I can ball up my years of experience, and the experience of the world at large), is that, in aggregate, we write better software if we do it in the open. There’s a whole Free Software vs. Open Source thing, and the nuances of that discussion are interesting and definitely important, but to my mind this is all rather less interesting than the mechanics of how F/OSS projects actually work in practice. In particular, given that projects are essentially communities, and communities are made up of individuals, how does an individual join an existing project, and become accepted and confident in that space?

If you’re an individual looking for something to work on, whether or not you think about it in these terms, you’re effectively looking for a community to join. You’re hopefully going to be there for a while.

But you’re one little person, and there’s a big established community that already knows how everything works. Whatever you’re proposing has probably already been thought of by someone else, and your approach is probably wrong. It’s utterly terrifying, especially when anything you push to a git repo or public mailing list will probably be online for the rest of your life.

Fuck that line of thinking. It’s logical reasoning, but it’s utterly unhelpful in terms of joining a project. It might be correct in broad strokes if you squint at it just right, but you’re bringing new eyes to something. You’ll probably see things established community members didn’t, or if not, you’ll be able to help smooth the way for the next newcomer. One of the kinks though is speaking up about $WEIRD_THING_IN_PROJECT. Is it actually broken, or do you just have no idea what’s going on yet because you’re new? Do you speak up? Do you raise a bug? Put in a pull request? Risk shame in public if you’re wrong?

I might be slightly biased. This is either because I’ve been doing this long enough that I no longer suffer too much if someone tells me I’ve made a mistake (I’ve made lots of them, and hopefully learned from all of them), or it’s because I’m a scary looking white dude, dripping with privilege. Probably it’s a mix of both, but the most important thing I think I ever learned is that it’s OK to be wrong in public. If in doubt, you should:

  • Listen for long enough to get a feel for the mailing list (or forum, or whatever).
  • Ask the question you think is stupid.
  • Submit the pull request you hope is helpful, but are actually sure is incomplete or inadequate.
  • Propose the new architecture you’re certain will be shot down.
  • Don’t take it personally if you do get shot down. This can be a crawling horror of difficulty, and only goes away with either arrogance or time (hopefully time, which I’m assured will eventually compost into wisdom).

If you don’t get a helpful answer to the stupid question, if you don’t get constructive feedback for the pull request or new architecture, if some asshole does shoot you down, this is not the project or community for you.

If someone helps you, you might have found something worth pursuing. If that pans out, keep asking stupid questions, and keep submitting pull requests you’re worried about. You’ll learn something, and so will everyone else, and the world will eventually be a better place.

Categories: thinktime

Making your JavaScript Pure

a list apart - Wed 08th Jun 2016 00:06

Once your website or application goes past a small number of lines, it will inevitably contain bugs of some sort. This isn’t specific to JavaScript but is shared by nearly all languages—it’s very tricky, if not impossible, to thoroughly rule out the chance of any bugs in your application. However, that doesn’t mean we can’t take precautions by coding in a way that lessens our vulnerability to bugs.

Pure and impure functions

A pure function is defined as one that doesn’t depend on or modify variables outside of its scope. That’s a bit of a mouthful, so let’s dive into some code for a more practical example.

Take this function that calculates whether a user’s mouse is on the left-hand side of a page, and logs true if it is and false otherwise. In reality your function would probably be more complex and do more work, but this example does a great job of demonstrating:

function mouseOnLeftSide(mouseX) {
  return mouseX < window.innerWidth / 2;
}

document.onmousemove = function(e) {
  console.log(mouseOnLeftSide(e.pageX));
};

mouseOnLeftSide() takes an X coordinate and checks to see if it’s less than half the window width—which would place it on the left side. However, mouseOnLeftSide() is not a pure function. We know this because within the body of the function, it refers to a value that it wasn’t explicitly given:

return mouseX < window.innerWidth / 2;

The function is given mouseX, but not window.innerWidth. This means the function is reaching out to access data it wasn’t given, and hence it’s not pure.

The problem with impure functions

You might ask why this is an issue—this piece of code works just fine and does the job expected of it. Imagine that you get a bug report from a user: when the window is less than 500 pixels wide, the function is incorrect. How do you test this? You’ve got two options:

  • You could manually test by loading up your browser and moving your mouse around until you’ve found the problem.
  • You could write some unit tests (Rebecca Murphey’s Writing Testable JavaScript is a great introduction) to not only track down the bug, but also ensure that it doesn’t happen again.

Keen to have a test in place to avoid this bug recurring, we pick the second option and get writing. Now we face a new problem, though: how do we set up our test correctly? We know we need to set up our test with the window width set to less than 500 pixels, but how? The function relies on window.innerWidth, and making sure that’s at a particular value is going to be a pain.

Benefits of pure functions

Simpler testing

With that issue of how to test in mind, imagine we’d instead written the code like so:

function mouseOnLeftSide(mouseX, windowWidth) {
  return mouseX < windowWidth / 2;
}

document.onmousemove = function(e) {
  console.log(mouseOnLeftSide(e.pageX, window.innerWidth));
};

The key difference here is that mouseOnLeftSide() now takes two arguments: the mouse X position and the window width. This means that mouseOnLeftSide() is now a pure function; all the data it needs is explicitly given as inputs, and it never has to reach out to access any data.

In terms of functionality, it’s identical to our previous example, but we’ve dramatically improved its maintainability and testability. Now we don’t have to hack around to fake window.innerWidth for any tests, but instead just call mouseOnLeftSide() with the exact arguments we need:

mouseOnLeftSide(5, 499) // ensure it works with width < 500

Self-documenting

Besides being easier to test, pure functions have other characteristics that make them worth using whenever possible. By their very nature, pure functions are self-documenting. If you know that a function doesn’t reach out of its scope to get data, you know the only data it can possibly touch is passed in as arguments. Consider the following function definition:

function mouseOnLeftSide(mouseX, windowWidth)

You know that this function deals with two pieces of data, and if the arguments are well named it should be clear what they are. We all have to deal with the pain of revisiting code that’s lain untouched for six months, and being able to regain familiarity with it quickly is a key skill.

Avoiding globals in functions

The problem of global variables is well documented in JavaScript—the language makes it trivial to store data globally where all functions can access it. This is a common source of bugs, too, because anything could have changed the value of a global variable, and hence the function could now behave differently.

An additional property of pure functions is referential transparency. This is a rather complex term with a simple meaning: given the same inputs, the output is always the same. Going back to mouseOnLeftSide, let’s look at the first definition we had:

function mouseOnLeftSide(mouseX) {
  return mouseX < window.innerWidth / 2;
}

This function is not referentially transparent. I could call it with the input 5 multiple times, resize the window between calls, and the result would be different every time. This is a slightly contrived example, but functions that return different values even when their inputs are the same are always harder to work with. Reasoning about them is harder because you can’t guarantee their behavior. For the same reason, testing is trickier, because you don’t have full control over the data the function needs.

On the other hand, our improved mouseOnLeftSide function is referentially transparent because all its data comes from inputs and it never reaches outside itself:

function mouseOnLeftSide(mouseX, windowWidth) {
  return mouseX < windowWidth / 2;
}

You get referential transparency for free when following the rule of declaring all your data as inputs, and by doing this you eliminate an entire class of bugs around side effects and functions acting unexpectedly. If you have full control over the data, you can hunt down and replicate bugs much more quickly and reliably without chancing the lottery of global variables that could interfere.

Choosing which functions to make pure

It’s impossible to have pure functions consistently—there will always be a time when you need to reach out and fetch data, the most common example of which is reaching into the DOM to grab a specific element to interact with. It’s a fact of JavaScript that you’ll have to do this, and you shouldn’t feel bad about reaching outside of your function. Instead, carefully consider if there is a way to structure your code so that impure functions can be isolated. Prevent them from having broad effects throughout your codebase, and try to use pure functions whenever appropriate.

Let’s take a look at the code below, which grabs an element from the DOM and changes its background color to red:

function changeElementToRed() {
  var foo = document.getElementById('foo');
  foo.style.backgroundColor = "red";
}

changeElementToRed();

There are two problems with this piece of code, both solvable by transitioning to a pure function:

  1. This function is not reusable at all; it’s directly tied to a specific DOM element. If we wanted to reuse it to change a different element, we couldn’t.
  2. This function is hard to test because it’s not pure. To test it, we would have to create an element with a specific ID rather than any generic element.

Given the two points above, I would rewrite this function to:

function changeElementToRed(elem) {
  elem.style.backgroundColor = "red";
}

function changeFooToRed() {
  var foo = document.getElementById('foo');
  changeElementToRed(foo);
}

changeFooToRed();

We’ve now changed changeElementToRed() to not be tied to a specific DOM element and to be more generic. At the same time, we’ve made it pure, bringing us all the benefits discussed previously.

It’s important to note, though, that I’ve still got some impure code—changeFooToRed() is impure. You can never avoid this, but it’s about spotting opportunities where turning a function pure would increase its readability, reusability, and testability. By keeping the places where you’re impure to a minimum and creating as many pure, reusable functions as you can, you’ll save yourself a huge amount of pain in the future and write better code.

Conclusion

“Pure functions,” “side effects,” and “referential transparency” are terms usually associated with purely functional languages, but that doesn’t mean we can’t take the principles and apply them to our JavaScript, too. By being mindful of these principles and applying them wisely when your code could benefit from them, you’ll gain more reliable, self-documenting codebases that are easier to work with and that break less often. I encourage you to keep this in mind next time you’re writing new code, or even revisiting some existing code. It will take some time to get used to these ideas, but soon you’ll find yourself applying them without even thinking about it. Your fellow developers and your future self will thank you.

Categories: thinktime

Try better

Seth Godin - Tue 07th Jun 2016 18:06
'Try harder' is something we hear a lot. After a while, though, we run out of energy for 'harder.' You can harangue people about trying harder all you like, but sooner or later, they come up empty. Perhaps it's worth...        Seth Godin
Categories: thinktime

sthbrx - a POWER technical blog: Using the Atom editor for Linux kernel development

Planet Linux Australia - Tue 07th Jun 2016 17:06

Atom is a text editor. It's new, it's shiny, and it has a lot of good and bad sides. I work in a lab full of kernel developers, and in the kernel, there are no IDEs. There's no real metadata you can get out of your compiler (given the kernel isn't very clang friendly), there's certainly nothing like that you can get out of your build system, so "plain old" text editors reign supreme. It's a vim or Emacs show.

And so Atom comes along. Unlike other shiny new text editors to emerge in the past 10 or so years, it's open source (unlike Sublime Text), it works well on Linux, and it's very configurable. When it first came out, Atom was an absolute mess. There was a noticeable delay whenever you typed a key. That has gone, but the sour impression that comes from replacing a native application with a web browser in a frame remains.

Like the curious person I am, I'm always trying out new things to see if they're any good. I'm not particularly tied to any editor; I prefer modal editing, but I'm no vim wizard. I eventually settled on using Emacs with evil-mode (which I assumed would make both Emacs and vim people like me, but the opposite happened), which was decent. It was configurable, it was good, but it had issues.

So, let's have a look at how Atom stacks up for low-level work. First of all, it's X only. You wouldn't use it to change one line of a file in /etc/, and a lot of kernel developers only edit code inside a terminal emulator. Most vim people do this since gvim is a bit wonky, and Emacs people can double-dip; using Emacs without X for small things and Emacs with X for programming. You don't want to do that with Atom, if nothing else because of its slow startup time.

Now let's look at configurability. In my opinion, no editor will ever match the level of configurability of Emacs, however the barrier to entry is much lower here. Atom has lots of options exposed in a config file, and you can set them there or you can use an equivalent GUI. In addition, a perk of being a browser in a frame is that you can customise a lot of UI things with CSS, for those inclined. Overall, I'd say Emacs > Atom > vim here, but for a newbie, it's probably Atom > Emacs > vim.

Okay, package management. Atom is the clear winner here. The package repository is very easy to use, for users and developers. I wrote my own package, typed apm publish and within a minute a friend could install it. For kernel development though, you don't really need to install anything, Atom is pretty batteries-included. This includes good syntax highlighting, ctags support, and a few themes. In this respect, Atom feels like an editor that was created this century.
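For example, publishing a package and having someone else install it is roughly the following (my own sketch; the package name is made up, not the author’s actual package):

# from the package's directory: bump the version, tag, and push to atom.io
apm publish minor

# anyone else can then install it by name
apm install my-power-asm-package   # hypothetical package name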

What about actually editing text? Well, I only use modal editing, and Atom is very far from being the best vim. I think evil-mode in Emacs is the best vim, followed closely by vim itself. Atom has a vim-mode, and it’s fine for insert/normal/visual mode, but anything involving a : is a no-go. There’s a plugin for it, but it’s entirely useless. If I tried to do a replacement with :s, Atom would lock up and fail to replace the text; vim replaced thousands of occurrences within a second. Other than that, Atom’s pretty good. I can move around pretty much just as well as I could in vim or Emacs, but not quite. Also, it supports ligatures! The first kernel-usable editor that does.

Autocompletions feel very good in Atom. It completes within a local scope automatically, without any knowledge of the type of file you're working on. As far as intelligence goes, Atom's support for tags outside of ctags is very lacking, and ctags is stupid. Go-to definition sometimes works, but it lags when dealing with something as big as the Linux kernel. Return-from definition is very good, though. Another downside is that it can complete from any open buffer, which is a huge problem if you're writing Rust in one tab and C in the other.

An experience I've had with Atom that I haven't had with other editors is actually writing a plugin. It was really easy, mostly because I stole a lot of it from an existing plugin, but it was easy. I wrote a syntax highlighting package for POWER assembly, which was much more fighting with regular expressions than it was fighting with anything in Atom. Once I had it working, it was very easy to publish; just push to GitHub and run a command.

Sometimes, Atom can get too clever for its own good. For some completely insane reason, it automatically “fixes” whitespace in every file you open, leading to a huge number of unintended git changes. That’s easy to disable, but I don’t want my editor doing that; it’d be much better if it highlighted whitespace it didn’t like by default, like you can get vim and Emacs to do. For an editor designed around git, I can’t comprehend that decision.

Speaking of git, the editor pretty much has everything you'd expect for an editor written at GitHub. The sidebar shows you what lines you've added, removed and modified, and the gutter shows you what branch you're on and how much you've changed all-up. There's no in-built support for doing git things inside the editor, but there's a package for it. It's pretty nice to get something "for free" that you'd have to tinker with in other editors.

Overall, Atom has come a long way and still has a long way to go. I've been using it for a few weeks and I'll continue to use it. I'll encourage new developers to use it, but it needs to be better for experienced programmers who are used to their current workflow to consider switching. If you're in the market for a new editor, Atom might just be for you.

Categories: thinktime

Clinton Roy: Software Carpentry

Planet Linux Australia - Mon 06th Jun 2016 23:06

Today I taught my first Software Carpentry talk, specifically the Intro to Shell. By most accounts it went well.

After going through the course today I think I’ve spotted two issues that I’ll try to fix upstream.

Firstly, command substitution is a concept that is covered, and used incorrectly IMO. Command substitution is fine when you know you’re only going to get back one value, e.g. running identify on an image to get its dimensions. But when you’re getting back an arbitrarily long list of files, your only option is to use xargs (as sketched below). Using xargs also means that we can drop another concept to teach.
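For illustration only, a minimal sketch of the distinction (the identify flags and file names are my own examples, not taken from the lesson):

# Command substitution is fine when you expect a single, small result,
# e.g. asking ImageMagick's identify for an image's dimensions:
dimensions=$(identify -format '%wx%h' photo.jpg)
echo "$dimensions"

# With an arbitrarily long list of files, the substituted text can blow
# past command-line limits, so pipe the list into xargs instead:
find . -name '*.dat' | xargs wc -l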

The other thing that isn’t covered, but I think should be, is reverse i-search of the history buffer (Ctrl-R). It’s something that I use in my day-to-day use of the shell, not quite as much as tab completion, but it’s certainly up there.

A third, minor issue that I need to check: I don’t think brace expansion was shown in the loop example. I think this should be added, as the example I ended up using showed looping over strings, numbers and file globs, which is everything you ever really end up using.
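Something along these lines is what I have in mind (a sketch, not the lesson’s actual example):

# Brace expansion generates the list of names before the loop runs:
for filename in results-{1..5}.txt
do
    echo "processing $filename"
done
# equivalent to looping over results-1.txt ... results-5.txt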

Software Carpentry uses different coloured sticky notes attached to learners’ laptops to indicate how they’re going. It’s really useful as a presenter out the front: if there’s a sea of green you’re good to go; if there are a few reds with helpers you’re probably OK to continue; but if there are too many reds, it’s time to stop and fix the problem. At the end of the session we ask people to give feedback, here for posterity:

Red (bad):

  • Course really should be called Intro to Unix rather than bash
  • use of microphone might be good (difficult to hear, especially when helpers answer questions around)
  • Could have provided an intro into why  unix is advantageous over other programs
  • grep(?) got a bit complicated, could have explained more
  • start session with overview to set context eg. a graphic
  • why does unix shell suck so much, I blame you personally

Orange(not so bad):

  • maybe use the example data a bit more

Green(good):

  • patient, very knowledgeable
  • really knew his stuff
  • information generally easy to follow. good pacing overall good
  • good. referred to help files, real world this as go to for finding stuff out (mistranscribed i’m sure)
  • good pace, good basis knowledge is taught

Filed under: Uncategorized
Categories: thinktime

On knowing it can be done

Seth Godin - Mon 06th Jun 2016 18:06
Can you imagine how difficult the crossword puzzle would be if any given answer might be, "there is no such word"? The reason puzzles work at all is that we know we should keep working on them until we figure...        Seth Godin
Categories: thinktime

This week's sponsor: FULLSTORY

a list apart - Mon 06th Jun 2016 14:06

FullStory, a pixel-perfect session playback tool that captures everything about your customer experience with one easy-to-install script.

Categories: thinktime

A ten-year plan is absurd

Seth Godin - Sun 05th Jun 2016 18:06
Impossible, not particularly worth wasting time on. On the other hand, a ten-year commitment is precisely what's required if you want to be sure to make an impact.        Seth Godin
Categories: thinktime

Linux Users of Victoria (LUV) Announce: LUV Beginners June Meeting: MPE on SIMH / Programming with Python

Planet Linux Australia - Sat 04th Jun 2016 23:06
Start: Jun 18 2016 12:30 End: Jun 18 2016 16:30 Location:

Infoxchange, 33 Elizabeth St. Richmond

Link:  http://luv.asn.au/meetings/map

Please note change of topic

Rodney Brown will demonstrate the HP minicomputer OS "Multi-Programming Executive" running on the SIMH emulator.

Andrew Pam will lead a hands-on Python programming class using the learnpython.org website. Suitable for people with no programming skills or with existing skills in other languages. Bring your own laptop or use the desktop machines on site.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.)

Late arrivals, please call (0490) 049 589 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Categories: thinktime

Neophilia and ennui

Seth Godin - Sat 04th Jun 2016 18:06
These are two sides of the same coin. Neophilia pushes us forward with wonder, eager for the next frontier. And ennui is the exhaustion we feel when we fall too in love with what might (should?) be next and ignore...        Seth Godin
Categories: thinktime

Add engines until airborne

Seth Godin - Fri 03rd Jun 2016 18:06
That's certainly one way to get through a thorny problem. The most direct way to get a jet to fly is to add bigger engines. And the easiest way to gain attention is to run more ads, or yell more...        Seth Godin
Categories: thinktime

All mirrors are broken

Seth Godin - Thu 02nd Jun 2016 19:06
It's impossible to see yourself as others do. Not merely because the medium is imperfect, but, when it comes to ourselves, we process what we see differently than everyone else in the world does. We make this mistake with physical...        Seth Godin
Categories: thinktime

Commit to Contribute

a list apart - Thu 02nd Jun 2016 00:06

One morning I found a little time to work on nodemon and saw a new pull request that fixed a small bug. The only problem with the pull request was that it didn’t have tests and didn’t follow the contributing guidelines, which results in the automated deploy not running.

The contributor was obviously extremely new to Git and GitHub, and even the small change was well out of their comfort zone, so when I asked for the changes to adhere to the way the project works, it all kind of fell apart.

How do I change this? How do I make it easier and more welcoming for outside developers to contribute? How do I make sure contributors don’t feel like they’re being asked to do more than necessary?

This last point is important.

The real cost of a one-line change

Many times in my own code, I’ve made a single-line change that could be a matter of a few characters, and this alone fixes an issue. Except that’s never enough. (In fact, there’s usually a correlation between the maturity and/or age of the project and the amount of additional work to complete the change due to the growing complexity of systems over time.)

A recent issue in my Snyk work was fixed with a single-line change.

In this particular example, I had solved the problem in my head very quickly and realized that this was the fix. Except that I had to then write the test to support the change, not only to prove that it works but to prevent regression in the future.

My projects (and Snyk’s) all use semantic release to automate releases by commit message. In this particular case, I had to bump the dependencies in the Snyk command line and then commit that with the right message format to ensure a release would inherit the fix.

All in all, the one-line fix turned into this: one line, one new test, tested across four versions of node, bump dependencies in a secondary project, ensure commit messages were right, and then wait for the secondary project’s tests to all pass before it was automatically published.

Put simply: it’s never just a one-line fix.

Helping those first pull requests

Doing a pull request (PR) into another project can be pretty daunting. I’ve got a fair amount of experience and even I’ve started and aborted pull requests because I found the chain of events leading up to a complete PR too complex.

So how can I change my projects and GitHub repositories to be more welcoming to new contributors and, most important, how can I make that first PR easy and safe?

Issue and pull request templates

GitHub recently announced support for issue and PR templates. These are a great start because now I can specifically ask for items to be checked off, or information to be filled out to help diagnose issues.

Here’s what the PR template looks like for Snyk’s command line interface (CLI):

- [ ] Ready for review
- [ ] Follows CONTRIBUTING rules
- [ ] Reviewed by @remy (Snyk internal team)

#### What does this PR do?
#### Where should the reviewer start?
#### How should this be manually tested?
#### Any background context you want to provide?
#### What are the relevant tickets?
#### Screenshots
#### Additional questions

This is partly based on QuickLeft’s PR template. These items are not hard prerequisites on the actual PR, but it does help in getting full information. I’m slowly adding these to all my repos.

In addition, having a CONTRIBUTING.md file in the root of the repo (or in .github) means new issues and PRs include the notice in the header.

Automated checks

For context: semantic release will read the commits in a push to master, and if there’s a feat: commit, it’ll do a minor version bump. If there’s a fix: it’ll do a patch version bump. If the text BREAKING CHANGE: appears in the body of a commit, it’ll do a major version bump.
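As an illustration (these are my own made-up commit messages, not commits from the project), the three kinds of release-driving commits look roughly like this:

# patch release (e.g. 1.2.3 -> 1.2.4)
git commit -m "fix: handle empty config files without crashing"

# minor release (e.g. 1.2.3 -> 1.3.0)
git commit -m "feat: add --json output flag"

# major release (e.g. 1.2.3 -> 2.0.0); the second -m adds a body paragraph
git commit -m "feat: drop node 0.10 support" -m "BREAKING CHANGE: node >= 4 is now required"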

I’ve been using semantic release in all of my projects. As long as the commit message format is right, there’s no work involved in creating a release, and no work in deciding what the version is going to be.

Something that none of my repos historically had was the ability to validate contributed commits for formatting. In reality, semantic release doesn’t mind if you don’t follow the commit format; such commits are simply ignored and don’t drive releases (to npm).

I’ve since come across ghooks, which will run commands on Git hooks, in particular using a commit-msg hook to run validate-commit-msg. The installation is relatively straightforward, and the feedback to the user is really good, because if the commit needs tweaking to follow the commit format, I can include examples and links.
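A typical setup looks roughly like the following (a sketch of how these packages are usually wired together; check their docs and the commit linked below for the exact details):

# install both as dev dependencies
npm install --save-dev ghooks validate-commit-msg

# then point the commit-msg hook at the validator in package.json, e.g.:
#   "config": {
#     "ghooks": {
#       "commit-msg": "./node_modules/.bin/validate-commit-msg"
#     }
#   }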

The hook’s feedback appears both on the command line and in the GitHub desktop app.

This is work that I can load on myself to make contributing easier, which in turn makes my job easier when it comes to managing and merging contributions into the project. In addition, for my projects, I’m also adding a pre-push hook that runs all the tests before the push to GitHub is allowed. That way if new code has broken the tests, the author is aware.

To see the changes required to get the output above, see this commit in my current tinker project.

There are two further areas worth investigating. The first is the commitizen project. Second, what I’d really like to see is a GitHub bot that could automatically comment on pull requests to say whether the commits are okay (and if not, direct the contributor on how to fix that problem) and also to show how the PR would affect the release (i.e., whether it would trigger a release, either as a bug patch or a minor version change).

Including example tests

I think this might be the crux of the problem: the lack of example tests in any project. A test can be a minefield of challenges, such as these:

  • knowing the test framework
  • knowing the application code
  • knowing about testing methodology (unit tests, integration, something else)
  • replicating the test environment

Another project of mine, inliner, has a disproportionately high rate of PRs that include tests. I put that down to the ease with which users can add tests.

The contributing guide makes it clear that contributing doesn’t even require that you write test code. Authors just create a source HTML file and the expected output, and the test automatically includes the file and checks that the output is as expected.

Adding specific examples of how to write tests will, I believe, lower the barrier of entry. I might link to some sort of sample test in the contributing doc, or create some kind of harness (like inliner does) to make it easy to add input and expected output.

Fixing common mistakes

Something I’ve also come to accept is that developers don’t read contributing docs. It’s okay, we’re all busy, we don’t always have time to pore over documentation. Heck, contributing to open source isn’t easy.

I’m going to start including a short document on how to fix common problems in pull requests. Often it’s amending a commit message or rebasing the commits. This is easy for me to document, and will allow me to point new users to a walkthrough of how to fix their commits.
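That walkthrough would cover commands along these lines (generic git usage; the branch name here is hypothetical):

# reword the most recent commit message, e.g. to match the fix:/feat: format
git commit --amend

# reword or squash earlier commits on the branch, then update the pull request
git rebase -i master
git push --force-with-lease origin my-branch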

What’s next?

In truth, most of these items are straightforward and not much work to implement. Sure, I wouldn’t drop everything I’m doing and add them to all my projects at once, but certainly I’d include them in each active project as I work on it.

  1. Add issue and pull request templates.
  2. Add ghooks and validate-commit-msg with standard language (most if not all of my projects are node-based).
  3. Either make adding a test super easy, or at least include sample tests (for unit testing and potentially for integration testing).
  4. Add a contributing document that includes notes about commit format, tests, and anything that can make the contributing process smoother.

Finally, I (and we) always need to keep in mind that when someone has taken time out of their day to contribute code to our projects—whatever the state of the pull request—it’s a big deal.

It takes commitment to contribute. Let’s show some love for that.

Categories: thinktime

Read more blogs

Seth Godin - Wed 01st Jun 2016 18:06
Other than writing a daily blog (a practice that's free, and priceless), reading more blogs is one of the best ways to become smarter, more effective and more engaged in what's going on. The last great online bargain. Good blogs...        Seth Godin
Categories: thinktime

Russell Coker: I Just Ordered a Nexus 6P

Planet Linux Australia - Wed 01st Jun 2016 17:06

Last year I wrote a long-term review of Android phones [1]. I noted that my Galaxy Note 3 only needed to last another 4 months to be the longest I’ve been happily using a phone.

Last month (just over 7 months after writing that) I fell on my Note 3 and cracked the screen. The Amourdillo case is good for protecting the phone [2] so it would have been fine if I had just dropped it. But I fell with the phone in my hand, the phone landed face down and about half my body weight ended up in the middle of the phone which apparently bent it enough to crack the screen. As a result of this the GPS seems to be less reliable than it used to be so there might be some damage to the antenna too.

I was quoted $149 to repair the screen; I could possibly have found a cheaper quote if I had shopped around, but it was a good starting point for comparison. The Note 3 originally cost $550 including postage in 2014. A new Note 4 costs $550 + postage now from Shopping Square, and a new Note 3 is on ebay with a buy-it-now price of $380 with free postage.

It seems like bad value to pay 40% of the price of a new Note 3 or 25% the price of a Note 4 to fix my old phone (which is a little worn and has some other minor issues). So I decided to spend a bit more and have a better phone and give my old phone to one of my relatives who doesn’t mind having a cracked screen.

I really like the S-Pen stylus on the Samsung Galaxy Note series of phones and tablets. I also like having a hardware home button and separate screen space reserved for the settings and back buttons. The downsides to the Note series are that they are getting really expensive nowadays and the support for new OS updates (and presumably security fixes) is lacking. So when Kogan offered a good price on a Nexus 6P [3] with 64G of storage I ordered one. I’m going to give the Note 3 to my father, he wants a phone with a bigger screen and a stylus and isn’t worried about cracks in the screen.

I previously wrote about Android device service life [4]. My main conclusion in that post was that storage space is a major factor limiting service life. I hope that 64G in the Nexus 6P will solve that problem, giving me 3 years of use and making it useful to my relatives afterwards. Currently I have 32G of storage of which about 8G is used by my music video collection and about 3G is free, so 64G should last me for a long time. Having only 3G of RAM might be a problem, but I’m thinking of trying CyanogenMod again so maybe with root access I can reduce the amount of RAM use.

Related posts:

  1. Samsung Galaxy Note 3 In June last year I bought a Samsung Galaxy Note...
  2. Nexus 4 My wife has had a LG Nexus 4 for about...
  3. The Nexus 5 The Nexus 5 is the latest Android phone to be...
Categories: thinktime

This week's sponsor: JIRA

a list apart - Wed 01st Jun 2016 07:06

Thanks to our sponsor Jira. Plan, track, and release world-class software with the #1 software development tool used by agile teams. Try JIRA for free today.

Categories: thinktime

Wasting our technology surplus

Seth Godin - Tue 31st May 2016 23:05
When someone handed you a calculator for the first time, it meant that long division was never going to be required of you ever again. A huge savings in time, a decrease in the cognitive load of decision making. Now...        Seth Godin
Categories: thinktime

Colin Charles: Speaking in June 2016

Planet Linux Australia - Tue 31st May 2016 19:05

I have a few upcoming speaking engagements in June 2016:

  • Nerdear.la – June 9-10 2016 – Buenos Aires, Argentina – never been to this event but MariaDB Corporation are sponsors and I’m quite excited to be back in Buenos Aires. I’m going to talk about the MySQL ecosystem in 2016.
  • SouthEast LinuxFest – June 10-12 2016 – Charlotte, NC, USA – I have a few talks here, a bit bummed that I’m going to be missing the speaker dinner, but I expect this to be another great year. Learn about MariaDB Server/MySQL Security Essentials, the MySQL ecosystem in 2016, and about distributions from the view of a package.
  • NYC MySQL Meetup – June 27 2016 – New York, USA – I’m going to give a talk on lessons you can learn from other people’s database failures. I did this at rootconf.in, and it was well received so I’m quite excited to give this again.
  • Community Open House for MongoDB – June 30 2016 – New York, USA – I’m going to give my first MongoDB talk at the Community Open House for MongoDB – My First Moments with MongoDB, from the view of someone who’s been using MySQL for a very long time.

So if you’re in Buenos Aires, Charlotte or New York, I’m looking forward to seeing you to talk all things databases and open source.

Categories: thinktime

Binh Nguyen: Is Western Leadership Required? More Social Systems, and More

Planet Linux Australia - Tue 31st May 2016 06:05
Western Leadership stuff: From time to time you hear things about Western leadership being 'required'. Some of it sounds borderline authoritarian/dictatorial at times. I wanted to take a look further at this: - examples of such quotes include "Instead, America should write the rules. America should call the shots. Other countries should play by the rules that America and our partners set, and
Categories: thinktime

Maxim Zakharov: plsh2

Planet Linux Australia - Tue 31st May 2016 01:05

PL/sh is a nice extension to PostgreSQL that allows writing stored procedures in an interpreted language, e.g. bash, python, perl, php, etc.

I found it useful, though it has a major drawback: the amount of data you can pass via the arguments of such procedures may hit command line limitations, i.e. no more than 254 spaces and no more than 2MB (or even less).

So I have made a change so that the value of the first argument is passed via stdin to the script implementing the stored procedure, and the rest of the arguments are passed as $1, $2, $3, etc. This change allows overcoming the above-mentioned limitations when a large amount of data is passed via one parameter.

Here is a tiny example I have added to the test suite for the new functionality:

CREATE FUNCTION perl_concat2(text, text) RETURNS text LANGUAGE plsh2 AS '
#!/usr/bin/perl
print while (<STDIN>);
print $ARGV[0];
';

SELECT perl_concat2('pe', 'rl');

You may get the modified PL/sh from my repository on GitHub: github.com/Maxime2/plsh. It has been implemented as a new procedural language, plsh2, so you do not need to change anything in already created procedures/functions using plsh (and you can continue to use it as before).

Categories: thinktime
