What we'll cover here:
Actioning Bugs (Accept/Reject)
Known Bug List
Following your initial test, we'll review how many bugs were found, what their severity classifications were, and which Features were tested.
We will discuss our findings to help find the right balance of quantity and quality; we don’t want to overwhelm you with so many bugs that triage becomes difficult, but we do want to surface the types of bugs you're most intent on fixing.
Does the number seem manageable, or would you expect more?
Knowing your current build, are these findings in line with the bugs you were expecting? Did we find bugs that you didn’t know about?
Do you have other feedback in terms of where we looked and what we found?
Note: when you finish reviewing a test, please give it a rating. We use these ratings to improve your test cycles; consistent ratings make it easier for us to improve the next one.
You should "Accept" or "Reject" bugs based on their value to you. As we begin interacting, there will be a variety of reasons to reject a bug, so we’re going to work together to tune the Feature descriptions and tester instructions, as well as other best practices, based on the ones you accept and reject.
If you select "Accept and Export," the bug will be accepted and exported to your bug tracker of choice (how to set up an export to your bug tracker).
If you "Reject" a bug, you will be asked for a rejection reason, and the bug's status will be updated to "Rejected."
"Change Severity" button - if you find that the severity definitions we've shared with our testers and Team Leads don't match yours, you can make changes with a simple click.
Although every submitted bug is carefully reviewed by a Team Lead before it is forwarded to you, there is a chance you might need extra information from a tester to investigate a particular bug: browser version, Session ID, UDID, etc. If you want to get in touch with a tester, you can select "Request Information."
Note on Rejecting Bugs:
Choosing the appropriate rejection reason will help us to better refine your tests moving forward. Only when you reject bugs can we see which bugs are important to you and which are less relevant.
Two of the most common rejection reasons are marking as a "Known bug" or "Intended behavior."
When rejecting as "Intended behavior," we recommend adding that expected behavior to the Feature. This is how we make sure the crowd is aware that a product should or shouldn't work a certain way.
You can also mark the reason as "Device not relevant" if you're not interested in the device reported on, or "Not able to reproduce," after which you can leverage "Request Information" to find out more.
Testers are incentivized to respond to questions within 18 hours; otherwise, their bugs will be auto-rejected.
Additionally, if more than 10 days have passed, bugs will be auto-accepted in fairness to testers.
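The two timing rules above can be sketched as a small decision function. This is an illustrative sketch only: the function name, parameters, and status strings are hypothetical and not part of any real platform API; only the 18-hour response window and 10-day auto-accept threshold come from the policy described above.

```python
from datetime import datetime, timedelta

# Thresholds taken from the policy above; everything else is hypothetical.
RESPONSE_WINDOW = timedelta(hours=18)   # tester must answer within 18 hours
AUTO_ACCEPT_AFTER = timedelta(days=10)  # untriaged bugs auto-accept after 10 days

def resolve_bug_status(submitted_at, info_requested_at=None,
                       tester_responded=False, now=None):
    """Return the automatic status a still-pending bug would fall into."""
    now = now or datetime.utcnow()
    # A tester who ignores a "Request Information" query past the
    # response window has the bug auto-rejected.
    if info_requested_at and not tester_responded:
        if now - info_requested_at > RESPONSE_WINDOW:
            return "auto-rejected"
    # Bugs left without an accept/reject decision past the deadline
    # are accepted in fairness to testers.
    if now - submitted_at > AUTO_ACCEPT_AFTER:
        return "auto-accepted"
    return "pending"
```

For example, a bug submitted yesterday with no open information request would still be "pending", while one with an 19-hour-old unanswered information request would resolve to "auto-rejected".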
It’s also important for you to mark bugs as "known" if they won't be fixed before the next test cycle, as testers can see which bugs are known and won't report them again (more below).
Known Bug List
For any known bug, simply mark it as "known" in the table. This prevents it from being reported again in future tests. If/when it's fixed, you can remove it from the known list, and it'll be open to the crowd to report on again.
Note: Known Bug List Feature Flag - if you reject a bug “as known," it’ll automatically be added to the known bug list.
If you fix the issue, remove it from the list, and it'll be open to the crowd to report again; be sure to keep the list updated as bugs are fixed.
Discuss whether you're ready to run a fresh test, or whether you should do a follow-up on anything you've recently fixed. "Do you think we’re ready to do a [x] test?"
Do you have new features coming along? Is there a new/upcoming build you want to be tested next?
If you're looking to have the same Features tested on a regular cadence, it can be very beneficial to run recurring, scheduled tests. For example, some customers run a Coverage Test scanning their entire application every week.
We just need to know which test to base it on -- whether the last Coverage Test you ran provides a good template -- and which day you want the test to run.