Archive for August, 2013

Nicholas Skaggs

Feature freeze coming? Let’s test!

We're approaching feature freeze at a quickening pace, which makes the next few weeks especially important to us as a testing community. 13.10 lands in October, which is now rapidly approaching (where did the summer go?!).

What?
Let's run through some manual tests for Ubuntu and its flavors. I'd like to ask for a special focus on Mir/xMir. We plan to test the package rigorously again in about a week, once all features have landed. In the interim, let's try to catch any additional bugs.

When?
This week: Saturday, August 17th through Saturday, August 24th. It's week 5 of our cadence tests.

Ok, I'm sold, what do I need to do?
Execute some testcases against the latest version of Saucy; in particular, the xMir test.

Got any instructions?
You bet! Have a look at the Cadence Week testing walkthrough on the wiki, or watch it on YouTube. If you get stuck, contact us.

Where are the tests?
You can find the Mir test in its own milestone here. Remember to read and follow the installation instructions link at the top of the page!
The rest of the applications and packages can be found here.

I don't want to run/install the unstable version of Ubuntu; can I still help?
YES! Boot the live CD on your hardware and run the live session, or use a virtual machine to test (install Ubuntu or use a live session). The video demonstrates using a virtual machine booted into a live session to run through the tests. For the Mir/xMir tests, however, we'd really like results from real hardware.

But virtual machines are scary; I don't know how to set one up!
There's a tool called testdrive that makes setting up a VM with the Ubuntu development release a point-and-click operation. You can then use it to test. Seriously, check out the video and the walkthrough for more details.

Thank you for your contributions! Good luck and Happy Testing Everyone!

Brian Murray

Phasing of Stable Release Updates

For some time we’ve wanted to phase updates (that is, gradually roll them out) to expanding subsets of Ubuntu users, so that we can monitor for regressions and stop the update process if any appear. The support for phased updates has existed in update-manager for a while, but we did not have the server-side part implemented. Thanks to the work of Colin Watson, Evan Dandrea, and myself, this is now done.

Who is participating?

Users of Ubuntu 13.04 who install updates with update-manager are automatically participating in this process. For every package in -updates, update-manager will generate a random number; if that number is less than the package’s Phased-Update-Percentage, the package will be installed. One can opt out of the phased update process by adding ‘Update-Manager::Never-Include-Phased-Updates “True”;’ to the configuration file “/etc/apt/apt.conf”.
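A rough sketch of that client-side decision (the function below is mine; random.randint just stands in for update-manager's internal number generation, but the comparison is the same):

    import random

    def should_install(phased_update_percentage):
        # No Phased-Update-Percentage field means the update is
        # fully phased; everyone installs it.
        if phased_update_percentage is None:
            return True
        # Install only if our number falls below the package's percentage.
        return random.randint(0, 100) < phased_update_percentage

    # A package phased at 10% is picked up by roughly 1 in 10 users.
    print(should_install(10))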

How does the Phased Update process work?

When a Stable Release Update is released to 13.04, its phased update percentage is initially set to 10%. A job runs every 6 hours in the data center and checks whether there are any regressions in the package; if there are none, the phased update percentage is incremented by 10%. The phased update percentage for a binary package is available on its publishing history page. Here is an example with apport. If there is no value for “Phased updates”, then the update is fully phased at 100%.
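Sketched as code, the job's core decision might look like this (the names are hypothetical; the real phased-updater runs in the data center):

    PHASE_STEP = 10  # percentage points added on each successful check

    def advance_phase(current_percentage, regressions_found):
        # Any detected regression halts the rollout entirely.
        if regressions_found:
            return 0
        # Otherwise widen the rollout until fully phased at 100%.
        return min(current_percentage + PHASE_STEP, 100)

    # Starting at 10% with no regressions, an update reaches 100% after
    # nine checks, i.e. just over two days at one check every 6 hours.
    print(advance_phase(10, False))  # -> 20
    print(advance_phase(90, True))   # -> 0, rollout stopped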

What are the regression checks?

The Ubuntu Error Tracker (errors.ubuntu.com) has been modified to help us determine whether there are any regressions in the package. We do this by checking whether any crashes have been reported about the new version of the package that were not reported about the previous version. (You can check this yourself using a query like: https://errors.ubuntu.com/api/1.0/package-version-new-buckets/?format=json&package=unattended-upgrades&previous_version=0.76&new_version=0.76ubuntu1) Additionally, we check the error tracker for an increased rate of crashes about the package. This is done by comparing the number of errors reported today against the average number of crashes per day over the past two weeks, multiplied by the portion of the day that has passed.
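The rate check boils down to a little arithmetic; here it is as a sketch (function and parameter names are my own):

    def rate_regression(errors_today, daily_counts_two_weeks, fraction_of_day):
        # Average crashes per day over the past two weeks...
        average_per_day = sum(daily_counts_two_weeks) / float(len(daily_counts_two_weeks))
        # ...scaled by how much of today has already passed, so a package
        # is not flagged in the morning merely for having any crashes.
        expected_so_far = average_per_day * fraction_of_day
        return errors_today > expected_so_far

    # 40 errors by midday against a 50-per-day average: 40 > 25, flagged.
    print(rate_regression(40, [50] * 14, 0.5))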

If either of these types of regressions is detected, the phasing of the update is stopped by setting its percentage to 0. This prevents further users from receiving the updated version of the package. There is also a report of packages currently undergoing phasing that displays each package’s phased update percentage and any detected regressions. Additionally, an email is sent to the signer of the package (the uploader) and its creator (the uploader or sponsee), notifying them of the problem and that phasing of the update has been stopped.

There is support in the phased-updater for overriding specific problems, for example if we determine that a regression was not actually introduced in a specific version of a package. It also keeps track of the emails it has sent, so that we do not send an email about the same problem more than once.

If you encounter any issues, whether as a user installing updates or as a developer who has uploaded packages, please let me know.

Nicholas Skaggs

Autopilot best practices

I've now had the pleasure of writing autopilot tests for about 9 months, and along the way I've learned or been taught some of the things that are important to remember.

Use Eventually
The Eventually matcher provided by autopilot is your best friend. Use it liberally to ensure your code doesn't fail because of a millisecond difference at runtime. Eventually will retry your assert until it passes or it times out. When combined with examining or selecting an object, Eventually ensures your test failure is a true failure and not a timing issue. Also remember you can wrap a plain callable in a lambda if you need to make it something Eventually can re-evaluate.
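Here's a minimal sketch; the application binary and objectNames are made up for illustration:

    from autopilot.matchers import Eventually
    from autopilot.testcase import AutopilotTestCase
    from testtools.matchers import Equals, NotEquals


    class SaveButtonTests(AutopilotTestCase):

        def setUp(self):
            super(SaveButtonTests, self).setUp()
            # 'my-app' is a placeholder for your application.
            self.app = self.launch_test_application('my-app')

        def test_save_dialog_appears(self):
            button = self.app.select_single('Button', objectName='saveButton')
            # Eventually retries the match until it passes or times out,
            # absorbing small timing differences between runs.
            self.assertThat(button.visible, Eventually(Equals(True)))
            # Wrap a callable in a lambda so Eventually can re-evaluate it.
            self.assertThat(
                lambda: self.app.select_single('Dialog',
                                               objectName='saveDialog'),
                Eventually(NotEquals(None)))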

Assert more!
Every test can use more asserts -- even my own! Timing issues can rear their ugly heads again when you fail to assert after performing an action. (See the sketch after this list.)
  • Every time you grab an object, assert you received the object
    • You can do this by asserting the object NotEquals(None); remember to use Eventually: Eventually(NotEquals(None))!
  • Every time you interact with the screen, try an assert to confirm your action
    • Click a button, then assert
    • Click a field to type, then assert you have focus first
      • You can do this by reading the .focus property and asserting it to be True
      • Finished typing? Assert your text matches what you typed
        • You can do this by reading the .text property and asserting it to be Equal to your input
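Put together, inside a test method the pattern looks something like this (the objectNames are hypothetical):

    # Grab an object, then assert you actually received it.
    field = self.app.select_single('TextField', objectName='nameField')
    self.assertThat(field, NotEquals(None))

    # Click the field, then assert it has focus before typing.
    self.pointing_device.click_object(field)
    self.assertThat(field.focus, Eventually(Equals(True)))

    # Type, then assert the text matches what you typed.
    self.keyboard.type('Alice')
    self.assertThat(field.text, Eventually(Equals('Alice')))
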
Don't use strings, use objectNames
We all get lazy and just issue selects with English label names. These selections will break when run in a non-English language. They will also break when we decide to update the string to something more verbose or just different. Don't do it! That includes things like tab names, button names and label names -- all common rulebreakers.
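For example, selecting a hypothetical 'Save' button:

    # Fragile: breaks under translation and whenever the label changes.
    save_button = self.app.select_single('Button', text='Save')

    # Robust: the objectName is set by the developer and survives
    # string changes and translation.
    save_button = self.app.select_single('Button', objectName='saveButton')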

Use object properties
They will help you add more asserts about what's happening. For instance, you can use the .animating or .moving property (if it exists) to wait out animations before you continue your actions! I already mentioned the .focus property above, and you might find things like .selected, .state, .width, .height, .text, etc. useful while writing your test. Check out your objects and see what might be helpful to you.
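For example, proxy-object properties support wait_for(), which blocks until the property reaches the given value or a timeout expires; the toolbar object and its properties here are hypothetical:

    toolbar = self.app.select_single('Toolbar', objectName='mainToolbar')
    # Wait for any animation to finish before the next interaction.
    toolbar.animating.wait_for(False)
    self.assertThat(toolbar.selected, Eventually(Equals(True)))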

Interact with objects, not coordinates
Whenever possible, ensure your application interactions specify an object, not coordinates. If the UI changes, the screen size changes, etc., your test will fail if you're using coordinates. If your interaction emulates something like a swipe, drag, or pinch action, be sure to use relative coordinates based on the current screen size.
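For instance, here's a bottom-edge swipe expressed in relative coordinates; a sketch assuming autopilot's Display class and the test case's touch device:

    from autopilot.display import Display

    display = Display.create()
    width = display.get_screen_width()
    height = display.get_screen_height()

    # Swipe from the bottom edge up to 40% of the screen height,
    # horizontally centered -- the same gesture on any screen size.
    self.touch.drag(width // 2, height - 1, width // 2, int(height * 0.4))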

Use the Ubuntu SDK emulator if you are writing an Ubuntu SDK application
It will save you time and ensure your testcase gets updated if any bugs or changes happen in the SDK, all without you having to touch your code. Check it out!

Read the documentation best practices
Yes, I know documentation is boring, but at least skim over this page on writing good tests. There are a lot of useful tidbits lurking in there. The gist is that your tests should be self-contained, repeatable, and test one thing or one idea.

Looking over this list, you'll notice many of these best practices involve avoiding bugs related to timing. You know the drill: run your testcase and it passes. Run it again, or run it in a virtual machine or on a slower device, and it fails. It's likely you have already experienced this.

Why does this happen? It's because your test is clicking and interacting without verifying the changes occurring in the application. Many times it doesn't matter, and the built-in delay between your actions will be enough to cover you. However, that is not always the case.

So adopt these practices, and you will find your testcases are more reliable, easier to read, and run without a hitch day in and day out. That's the sign of a good automated testcase.

Got more suggestions? Leave a comment!