August 16th, 2013 by Nicholas Skaggs
Let's run through some manual tests for ubuntu and flavors. I'd like to ask for a special focus to be given to Mir/xMir. We plan to have a rigorous test of the package again in about a week once all features have landed. In the interim, let's try and catch any additional bugs.
This week runs from Saturday, August 17th through Saturday, August 24th. It's week 5 of our cadence tests.
Ok, I'm sold, what do I need to do?
Execute some testcases against the latest version of saucy; in particular the xMir test.
Got any instructions?
You bet, have a look at the Cadence Week testing walkthrough on the wiki, or watch it on youtube. If you get stuck, contact us.
Where are the tests?
You can find the Mir test in its own milestone here. Remember to read and follow the installation instructions link at the top of the page!
The rest of the applications and packages can be found here.
I don't want to run/install the unstable version of ubuntu, can I still help?
YES! Boot up the livecd on your hardware and run the live session, or use a virtual machine to test (install ubuntu or use a live session). The video demonstrates using a virtual machine booting into a live session to run through the tests. For the Mir/xMir tests, however, we'd really like results from real hardware.
But, virtual machines are scary; I don't know how to set one up!
There's a tool called testdrive that makes setting up a vm with ubuntu development a point and click operation. You can then use it to test. Seriously, check out the video and the walkthrough for more details.
Thank you for your contributions! Good luck and Happy Testing Everyone!
August 7th, 2013 by Brian Murray
For some time we’ve wanted to phase updates, that is, gradually roll them out to expanding subsets of Ubuntu users so that we can monitor for regressions and stop the update process if there are any. The support for phased updates has existed in update-manager for a while, but we did not have the server side part implemented. Thanks to the work of Colin Watson, Evan Dandrea, and myself this is now done.
Who is participating?
Users of Ubuntu 13.04 who install updates with update-manager are automatically participating in this process. For every package, in -updates, update-manager will generate a random number and if that number is less than the Phased-Update-Percentage the package will be installed. One can opt out of the Phased Update process by adding ‘Update-Manager::Never-Include-Phased-Updates “True”;’ to the configuration file “/etc/apt/apt.conf”.
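The client-side decision described above can be sketched in a few lines. This is an illustrative model, not the actual update-manager code; the function name and parameters are my own.

```python
import random

def should_install_phased_update(phased_update_percentage,
                                 never_include_phased=False):
    """Illustrative model of update-manager's phased-update decision.

    A random number is drawn per package; the update is installed only
    if that number falls below the Phased-Update-Percentage. Setting
    never_include_phased mirrors the apt.conf opt-out described above.
    """
    if never_include_phased:
        return False
    if phased_update_percentage is None:
        # No value means the update is fully phased (100%).
        return True
    return random.randint(0, 100) < phased_update_percentage
```

With a percentage of 0 no machine takes the update, with no value every machine does, and the opt-out always wins.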
How does the Phased Update process work?
When a Stable Release Update is released to 13.04 it will have its phased update percentage initially set to 10%. A job is run, every 6 hours, in the data center that checks to see if there are any regressions in the package, and if there are none then the phased update percentage will be incremented by 10%. The phased update percentage for a binary package is available at the publishing history page for it. Here is an example with apport. If there is no value for “Phased updates” then the update is fully phased at 100%.
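One pass of that six-hourly job can be modelled like this; the function is a hypothetical sketch of the behaviour described above, not the phased-updater's real code.

```python
def next_phase(current_percentage, has_regression):
    """One pass of the (hypothetical) six-hourly phasing job.

    If a regression was detected, phasing is stopped by setting the
    percentage to 0; otherwise it is bumped by 10, capped at 100.
    """
    if has_regression:
        return 0
    return min(current_percentage + 10, 100)
```

Starting from the initial 10%, a trouble-free update reaches 100% after nine passes, i.e. a little over two days.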
What are the regression checks?
The Ubuntu Error Tracker (errors.ubuntu.com) has been modified to help us determine if there are any regressions about the package. We do this by checking to see if there are any crashes reported about the new version of the package that were not reported about the previous version of the package. (You can actually check this yourself using a query like: https://errors.ubuntu.com/api/1.0/package-version-new-buckets/?format=json&package=unattended-upgrades&previous_version=0.76&new_version=0.76ubuntu1) Additionally, we check the error tracker to see if there is an increased rate of crashes about the package. This is done by examining the quantity of errors reported today and comparing it to the average number of crashes per day for the past two weeks multiplied by the portion of the day that has passed.
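The increased-rate check in particular is easy to express as arithmetic. This is a simplified sketch under my own naming, assuming the daily counts come from the error tracker as plain numbers:

```python
def rate_regression(errors_today, daily_counts_past_two_weeks,
                    fraction_of_day_elapsed):
    """Sketch of the increased-rate-of-crashes check described above.

    Compares today's error count against the average number of crashes
    per day over the past two weeks, scaled by how much of today has
    already passed.
    """
    average = sum(daily_counts_past_two_weeks) / len(daily_counts_past_two_weeks)
    expected_so_far = average * fraction_of_day_elapsed
    return errors_today > expected_so_far
```

For example, if a package averages 4 crashes a day and has already seen 10 by midday, the check flags it.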
If either of these types of regression is detected then the phasing of the update is stopped by setting it to 0. This will prevent other users from receiving the updated version of the package. There is also a report of packages currently undergoing phasing that displays the phased update percentage for the package and any detected regressions. Additionally, an email is sent to the signer of the package (uploader) and its creator (uploader or sponsee). The email notifies them of the problem and that phasing of the update has been stopped.
There is support in the phased-updater for overriding specific problems, for example if we determine that a regression was not introduced in a specific version of a package. It also keeps track of emails sent so that we do not send an email about the same problem more than once.
If you encounter any issues as a user installing updates or as a developer who has uploaded packages please let me know.
August 7th, 2013 by Nicholas Skaggs
The Eventually matcher provided by autopilot is your best friend. Use it liberally to ensure your code doesn't fail because of a millisecond difference during your runtime. Eventually will retry your assert until it's true or it times out. When combined with examining an object or selecting one, Eventually will ensure your test failure is a true failure and not a timing issue. Also remember you can wrap an expression in a lambda if you need to make it callable for your assert.
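The retry idea behind Eventually is simple enough to sketch on its own. This is a minimal stand-in, not autopilot's implementation; in a real test you would write something like `self.assertThat(obj.text, Eventually(Equals("Done")))`.

```python
import time

def eventually(predicate, timeout=10.0, interval=0.1):
    """Minimal retry loop in the spirit of autopilot's Eventually matcher.

    Re-evaluates `predicate` until it returns True or the timeout
    expires, so a millisecond-scale timing difference does not turn
    into a spurious test failure.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

Passing a lambda such as `eventually(lambda: field.text == "Done")` is the same trick mentioned above: it turns a plain attribute lookup into something the matcher can re-query.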
Every test can use more asserts -- even my own! Timing issues can rear their ugly heads again when you fail to assert after performing an action.
- Every time you grab an object, assert you received the object
- You can do this by asserting the object NotEquals(None); remember to use Eventually(NotEquals(None))!
- Every time you interact with the screen, use an assert to confirm your action
- Click a button, assert
- Click a field to type, assert you have focus first
- You can do this by using the .focus property and asserting it is True
- Finished typing? Assert your text matches what you typed
- You can do this by using the .text property and asserting it is Equal to your input
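The list above can be demonstrated with a tiny stand-in widget; in a real autopilot test these would be proxy objects returned by something like select_single(), and the asserts would use Eventually. The class here is purely illustrative.

```python
class FakeTextField:
    """Stand-in for a UI text field, to illustrate assert-after-action."""

    def __init__(self):
        self.focus = False
        self.text = ""

    def click(self):
        self.focus = True

    def type(self, value):
        if self.focus:
            self.text = value

field = FakeTextField()
assert field is not None        # grabbed an object? assert you received it
field.click()
assert field.focus is True      # clicked a field to type? assert focus first
field.type("hello")
assert field.text == "hello"    # finished typing? assert the text matches
```

Drop any one of those asserts and a timing or focus bug slips through silently; with them, the test fails at the exact step that went wrong.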
We all get lazy and just issue selects with English label names. These will break when run in a non-English language. They will also break when we decide to update the string to something more verbose or just different. Don't do it! That includes things like tab names, button names and label names -- all common rulebreakers.
Use object properties
They will help you add more asserts about what's happening. For instance, you can use the .animating property or .moving property (if they exist) to wait out animations before you continue your actions! I already mentioned the .focus property above, and you might find things like .selected, .state, .width, .height, .text, etc to be useful to you while writing your test. Check out your objects and see what might be helpful to you.
Interact with objects, not coordinates
Whenever possible, you should ensure your application interactions specify an object, not coordinates. If the UI changes, the screen size changes, etc, your test will fail if you're using coordinates. If your interaction emulates something like a swipe, drag, or pinch action, ensure you use relative coordinates based upon the current screen size.
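Relative coordinates just means expressing the gesture as fractions of the screen and converting at runtime. A small sketch, with my own function name and defaults:

```python
def swipe_coordinates(screen_width, screen_height,
                      start_frac=(0.9, 0.5), end_frac=(0.1, 0.5)):
    """Compute swipe start/end points relative to the screen size.

    Expressing gestures as fractions of the current screen keeps the
    test working when the resolution or form factor changes.
    """
    start = (int(screen_width * start_frac[0]),
             int(screen_height * start_frac[1]))
    end = (int(screen_width * end_frac[0]),
           int(screen_height * end_frac[1]))
    return start, end
```

The defaults describe a right-to-left swipe across the vertical middle of the screen, and they scale correctly whether the device is 480 pixels wide or 1920.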
Use the Ubuntu SDK emulator if you are writing an Ubuntu SDK application
It will save you time, and ensure your testcase gets updated if any bugs or changes happen to the sdk; all without you having to touch your code. Check it out!
Read the documentation on best practices
Yes, I know documentation is boring. But at least skim over this page on writing good tests. There are a lot of useful tidbits lurking in there. The gist is that your tests should be self-contained, repeatable and test one thing or one idea.
Looking over this list, many of these best practices involve avoiding bugs related to timing. You know the drill; run your testcase and it passes. Run it again, or run it in a virtual machine, a slower device, etc, and it fails. It's likely you have already experienced this.
Why does this happen? Well, it's because your test is clicking and interacting without verifying the changes occurring in the application. Many times it doesn't matter, and the built-in delay between your actions will be enough to cover you. However, that is not always the case.
So, adopt these practices and you will find your testcases are more reliable, easier to read and run without a hitch day in and day out. That's the sign of a good automated testcase.
Got more suggestions? Leave a comment!
July 31st, 2013 by David Murphy (schwuk)
As part of our self-improvement and knowledge sharing within Canonical, within our group (Professional and Engineering Services) we regularly – at least once a month – run what we call an “InfoSession”. Basically it is a Google Hangout on Air with a single presenter on a topic that is of interest/relevance to others, and one of my responsibilities is organising them. Previously we have had sessions on:
- Go (a couple of sessions in fact)
- Localization (l10n) and internationalization (i18n)
- …and many others…
Merge requests and code reviews are a fact of life in Canonical. Most projects start by manually merging approved requests, including running a test suite prior to merging.
This infosession will talk about tools that automate this workflow (Tarmac), while leveraging your project’s test suite to ensure quality, and virtual machines (using Vagrant) to provide multi-release, repeatable testing.
Like most of our sessions it is publicly available, so here it is for your viewing pleasure:
July 30th, 2013 by David Murphy (schwuk)
Nine years, one month.
That’s how long I’ve had one server running with Linode. It has been through a number of versions of Ubuntu, and been re-installed at least twice (once to switch from 32-bit to 64-bit). It has operated as a LugRadio mirror; hosted many websites, both static and dynamic; hosted my blog for many years; operated as a Jenkins server; and done more general duties as an IRC bouncer, and general dogsbody.
Why the sentimentality? I’m shutting the server down today. Not that anyone will notice of course (unless you’re paying close attention to IP addresses or SSH host keys) since it has already been replaced with a DigitalOcean droplet (still running Ubuntu of course).
Linode have done absolutely nothing wrong – in fact just the opposite. I have been regularly rewarded with extra storage/memory/bandwidth, and they have always been responsive to my few needs. So much so that I am still remaining a customer: so far I am only moving one server to DigitalOcean.
So why the change? A few reasons: that server now does very little besides running my IRC bouncer; I wanted to try DigitalOcean out (I have heard a lot of good things); finally, perhaps most importantly considering the first reason – the droplet is half the price of the linode. In fact if I had gone for the $5 per month droplet instead of the $10 one, I could have had four servers for the price of one!