A QA Onboarding Doc For NetNewsWire

12/26/2018 · 22 Min Read — In Testing, Career

Now that I've spent some time walking down memory lane thinking about Kaleidoscope, I dredged up another nice document, this one for NetNewsWire. It is less of a test plan and more of an onboarding document I wrote up for a new-to-the-field tester taking over the project. There are warts and things I don't feel great about, but it is an interesting read all these years later.

NetNewsWire QA – Getting Started

Passing along knowledge and ideas about NetNewsWire and its app ecosystem to get the new project Test Lead up and running fast.

Dear Friend of NetNewsWire,

You are going to love this project and beg to never, ever be taken away from it. The team, product and skills here are unmatched in the field, yourself included. Each tester brings something unique and highly valuable to the table, so while I'm sad to have only had a brief stint this time around, I am so excited to have your fresh eyes and sharp mind on the team, helping guide the project out of Beta.

I have tried to look back and consider the things I wish I had known when I was starting out, and I attempted to document most of that here (even the bad stuff), coupled with my view on testing; pardon the editorializing.

Please don't take this document as the be-all, end-all; I know you will find better ways of doing things.

Thanks for your time and attention. Now get to work!

—Sheree

First things first, know your roots.

0. History

NetNewsWire was acquired by Black Pixel circa July 2011. Beyond a few maintenance releases, there was no active development on the 3.x lineage or the NetNewsWire iOS offerings.

For some time after the acquisition, development was picked up on a lineage from NetNewsWire Lite 4 and, if I recall correctly, a wholly new iOS line with a shared Core library between the two. Design was led by Phil and shaped much of what you see in NetNewsWire Beta 4 public release 1. Rick and Daniel were the lead developers at the time, with no project management or Agile-ish practices invoked. A flourish of new features and ideas were implemented while the syncing solution was being worked on. There was a time of much consternation with iCloud sync services, and ultimately that course had to be abandoned. Daniel and the team tried very hard for a very long time, but it was just not dependable at that time. The product team was interrupted and, for the most part, reorganized to focus on the release of Kaleidoscope 2, with a brief update to Versions seeing a 1.2 release. The Kaleidoscope roadmap was in flux and the team had made some maintenance releases.

Somewhere between then and March came the announcement of the end of the Google Reader service that NetNewsWire 3.3 and the iOS apps used for sync. The NetNewsWire torch was picked up again and many features were ripped out so that we could ship a viable product before July. Olivier led most of the UI updates, with John having a very strong influence all around, notably kicking butt on the Web site and icons, naturally. Rick was made the lead developer, as right as rain. Rudy was brought on board somewhere in the interim and has been a real asset. Michael's hands on any project will never steer you wrong. Rob was sent on a sync reconnaissance mission; see Sync Note 1 & Sync Note 2. The picking up of this project brought some important changes along with it: a more Agile-focused team (the Scrum variant of Agile, with sprints, scrums, estimates, etc.) with a dedicated Product Owner (and acting Scrum Master), Hernan. At the time of the project's renewal, there was no assigned QA staff (as all bodies were on client work at the time). This has left the project in something of a debt, as far as robust testing goes.

Most of the testing was done by staff in a dogfooding manner. Sometime in June, Sheree was assigned and integrated with the team. She regrets having many other IT, Support & administrative duties that kept her from devoting a fully focused QA effort. Shortly after, NetNewsWire saw a semi-public Alpha release around the time of WWDC, with a fully public Beta released at the end of June. With the release of this Beta, our support staff of one, Jack, went from 50ish incoming / 250 backlog to 150ish incoming with a 1000+ backlog, the current status of which is in flux and needs as much help as you are able to give it (it is also a treasure trove of information that is far too often overlooked, but you're wiser than that). Assigned QA personnel are intermittent given other duties or assigned projects.

You, Dear Reader, are starting off without such baggage and histrionics and have the freshest of eyes. You will find failures with a ferocity the team and project desperately needs.

Let's get started...

1. Basics

The absolute minimum facts you need to dip your toes into the water...

Who's running this show?
  • Stakeholder Daniel Pasco (+council)
  • Product Owner/Scrum Master Hernan Pelassini
  • Tech Lead Rick Fillion
  • Dev Michael Gorbach
  • Dev Rudy Richter
  • Design Lead John Marstall
  • DevOps CJ Calabrese
  • DevOps Nathaniel Irons
  • QA Emeritus Sheree Pena
  • QA Lead George Rix
  • Support Jack Brewster
What exactly am I supposed to be testing?
  • NetNewsWire 4 Mac currently in public beta
  • NetNewsWire iOS currently in design phase, moving to updates and sync integration
  • NetNewsWire Sync currently in groundwork churn phase
  • NetNewsWire Site currently in post-beta, pre-sync/iOS phase
Where can I get more information and tools to help me?
  • Listserv [redacted]
    • goes to NetNewsWire team, Council & interested parties
  • Chatroom [redacted]
    • attendance is open to anyone; you are expected to have a presence there while on the clock
  • Wiki [redacted]
    • Lots of info, historical and not; you are expected to update pages intermittently as you learn and grow, and to collect project data such as test files or the like
  • Sourcecode [redacted]
    • have a copy of the repository that you can pull and build from at any time
  • Test Documents [redacted]
    • You are expected to use them, keep them updated and expand them throughout the course of testing
  • Jenkins [redacted]
    • Continuous Integration server; you are not expected to maintain jobs, but you should utilize this to aid in testing, especially regression
  • Dropbox [redacted]
    • not in widest usage w/r/t testing but does have project related documents
When do things happen?
  • Product Team Daily Scrum/Standup, meets after the All Hands Scrum, daily, PDT
    • as with regular scrum, do not miss and do not be 1 minute late.
    • jot check-ins down and mail the scrum summary to the [redacted] list with the date and any special notes ex: "release day, sprint planning meeting soon" etc
  • See Sprint planning for further schedule details
  • Now!
Why?
  • To bring NetNewsWire to the forefront of all things RSS where it belongs!

Enough Git to get you going
  • Navigate to your preferred repo-holding folder in your terminal and run a git clone [redacted] into it
  • Navigate into your repo with cd nnw and git checkout develop to switch to the main development branch
  • Make sure we are all up to date by running a git pull, and then you can use open *.xcworkspace, which will get you to Xcode...
  • to get future updates git pull for the latest from GitHub
  • to use a branch other than your develop default, track it from the remote and check it out as your HEAD
  • to switch to another branch, just checkout that one
  • to see where you are, git status should let you know the current branch
  • to put away any changes you may have made, just run a git stash for safekeeping
  • Review the Git glossary for more info; the sketch below strings these commands together
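A minimal sketch of that whole flow, assuming the clone URL stays redacted here and the remote branch name is only illustrative:

```sh
# one-time setup
git clone [redacted] nnw      # copy the repository down
cd nnw
git checkout develop          # switch to the main development branch
git pull                      # make sure you are up to date
open *.xcworkspace            # hand off to Xcode

# day to day
git pull                                # grab the latest from GitHub
git status                              # confirm which branch you are on
git stash                               # put local changes away for safekeeping
git checkout -t origin/feature/example  # track a remote branch and check it out
git checkout develop                    # and back to your default
```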
Enough Xcode to get you going
  • Use the Direct Debug / 64 scheme to run local builds
  • For hygiene's sake, run a Clean, then a Build without Running to see where there is trouble
  • If you cleaned and built successfully, you can then Run without Building
  • For the most part, you can simply use Run, but breaking these down into steps lets us better diagnose trouble at first; the sketch below shows the same pass from the command line
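This is a command-line equivalent of that clean-then-build pass, handy for Jenkins too; the workspace filename is an assumption, so match it to whatever open *.xcworkspace found for you:

```sh
# clean, then build without running — any trouble shows up here first
xcodebuild -workspace NetNewsWire.xcworkspace \
           -scheme "Direct Debug / 64" \
           clean build
```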
Enough Agile to get you going
  • Cases have Estimates for time in hours to complete the bug/task
  • Cases are filed into a Backlog for inclusion on a Sprint
  • The Sprint Planning meeting pulls cases out of the Backlog
  • Sprints are made up of Estimated Cases that fill the time based on those estimates
  • Releases are made at the end of Sprints to contain the items in the Sprint
  • Review Agile links on the BPXL wiki
Enough FogBugz to get you going
  • Your My Cases filter should be your default review queue to process incoming cases daily.
  • You can save searches and queries by saving your query to a Filter.
  • You can stay on top of incoming issues by searching like so: project:"NetNewsWire 4.0" -status:"Closed" -viewedby:"me" -area:"Support" -area:"DevOps" orderby:"LastEdited"
  • You can pluck unattended issues from the Support queue by running a search like -status:"Closed" project:"Inbox" title:"NetNewsWire Feedback" -dateclosed:"*" orderby:"-LastEdited" type:"cases"

2. Your Role As Tester

You are riding in the front passenger seat of a police car, spotlight shining all over the streets and sidewalks, in search of foes, speed bumps, traffic and small children! In pitch dark no less!

The point is to warn your drivers, passengers, detainees, car-owners, and the public at large about that oncoming semi-truck barreling down the highway with two full tankers of gasoline and malfunctioning brakes. Sometimes this means letting the team hit an innocent and adorable feral bunny that happened onto the road. Be comfortable with that.

  • Aim to be as effective as possible: Test the big scary important stuff first and often, be vigilant!
  • Write your bug reports so thoroughly no one has to ask you for any further information to act and resolve the case.
  • Come to the team with questions but only after you have done your part of investigating and documenting the issue so that everyone is working with the same knowledge you have and they can choose to let you fragment their time/mind or not. Simply put: show your work and respect everyone's time.
  • Be able to put on any hat (user, customer, hacker, developer, designer, tester, sales, marketing, etc) and see the product from their point of view: anticipate needs and devise ways to please, and consider everyone from any angle at any time, volunteer possible risks when you hear about proposed changes.
  • Be constantly up to date with the status of the project just as much as any project manager or developer. You are the weather balloon of the project!
Responsibilities:
  • Creation & ongoing maintenance of lightweight test plans

    • Master test coverage plan
    • feature testing checklist
    • Regression test plan
    • pre-release test plan
  • Creation of a suite of comprehensive test scenarios/cases

    • the content of your test plans
    • your exploratory test charters
  • Manual verification of a feature

    • Exploratory and negative testing of a feature
  • FogBugz ticket filing for any issues found

    • bug reports are our main public output
  • Reproduction of issues, closing resolved cases

    • reproducing & filing customer reported defects
    • reproducing & verifying resolved cases regardless of who may have filed them
  • Verification of feature completeness

    • using your feature reconnaissance missions and exploratory feature testing sessions to inform the team of feature completion status
  • Attendance & Participation

    • attend team and product scrums alike
    • serve as project reporter, sending scrum notes daily and pertinent issues to the dev list as they come up, especially in regards to policy and procedural changes
    • read and review devlist notes, assessing QA's role in the information contained therein
    • active involvement in group chat, our main source of communication
  • Proactiveness

    • Test when you are aware it exists; ask for access so you can test; do not wait to be asked to test. If they don't want you to test something, they will let you know!
    • Sharpen your saw at every turn, aim to get better, faster and more effective at all times

3. Testing NetNewsWire

Testing is two things: Checking and Discovering.

When you run down a list of test cases to ensure nothing is discrepant against your expectations, that is Checking, and it is the main tool of Smoke, Regression, Claims, and Pre-Release testing. This is where your organization, documentation, and automation skills shine.

When you encounter a new feature, see an OS update, or have completed your regression checklists, Discovery is the path you must take to ensure coverage and learn new details, perhaps toward discovering new bugs or documenting test cases for your Checklisting. This is where your curiosity, platform knowledge, testing skills, and known heuristics come into play in a large way.

Exploring the application and its ecosystem should be something that you are constantly doing but focused exploring should also be done. Instead of specific test cases, you prepare Questions aimed at a feature or theory or known vulnerable area and you test around that area, using the knowledge you are learning and seeing on the spot to inform your next test.

The flow is something like the following: map out a charter, obtain the software at the specified state, and time-box your testing session so you have uninterrupted brain space. Enter your session and document all of your findings, ad hoc, as they come up; note what you tried and whatever comes to mind; note bugs to file or questions you have or other areas and test ideas to explore later. At the end of your session, cull your notes into bugs and use your discovered information in whatever way furthers you or your team.

To aid in a more structured testing path, please consider implementing your charter statements to look approximately like the following, or at least cover the intention:

Inspect/explore/poke/prod [THING/FEATURE/AREA] with/using [RESOURCES/METHODS/TOOLS/HEURISTICS] to discover [INFORMATION/RESULTS/DAMAGE/CHARACTERISTICS]

For example: explore feed subscription using malformed feeds and the Network Link Conditioner to discover parsing and timeout failures. Much in the same way delving into the customer support queue will deepen and expand your understanding of the product, I believe approaching the product in this fact-finding, exploratory (yet structured!) way will lay the foundation to better inform your testing efforts when you are ready to dig deeper. Truly consider the strategy, its implications, its reasoning, and how you can do it better.

Getting deep into methods of testing and heuristics (performance, accessibility, claims, negative, security etc) is far beyond the scope of this document but the following list should give you some ideas and help inform your strategic planning.

### Smoke Tests
Identify and *automate* the core functions that need to pass in order for a build to be ready for testing. Perhaps passing unit tests first. Perhaps just confirming the build can open and close. Perhaps any automated tests you have lying around. Figure this out and attach it to the build process.
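As a sketch of the open-and-close half of that, assuming the freshly built app is installed somewhere `open -a` can find it (names are placeholders):

```sh
#!/bin/sh
# smallest possible smoke check: launch, wait, confirm the process
# survived, then quit cleanly via AppleScript
open -a NetNewsWire || { echo "SMOKE FAIL: could not launch"; exit 1; }
sleep 10    # give it time to settle (or crash)
pgrep -x NetNewsWire >/dev/null || { echo "SMOKE FAIL: app exited after launch"; exit 1; }
osascript -e 'tell application "NetNewsWire" to quit'
echo "SMOKE PASS"
```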
### Unit Testing
Describe any unit testing in the project, and how it relates to you, the reader of this document, here.
### Automated Testing
Give a census of all automated tests and point to instructions for maintenance or additions.
### Basic CRUD Testing
Create, Read, Update, Delete. Perhaps this is also automated and attached to the build process. Write a script that will do all four in one swoop and also check each action.
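A stubbed skeleton for that sweep; the four actions are left as TODOs because the app's scripting surface (AppleScript dictionary? URL scheme?) needs confirming before you can fill them in honestly:

```sh
#!/bin/sh
# CRUD sweep skeleton — run each action, then verify it took effect
FEED="https://example.com/feed.xml"   # illustrative feed URL

create() { echo "TODO: subscribe to $FEED"; }
read_()  { echo "TODO: open the feed and render an article"; }
update() { echo "TODO: rename or refresh the feed"; }
delete() { echo "TODO: unsubscribe from $FEED"; }

for step in create read_ update delete; do
    "$step"
    echo "CHECK: verify '$step' actually took effect before moving on"
done
```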
### Hardware Testing
Maximum and minimum supported hardware. Document and test against them. CPU, RAM, et al. Make sure to do a check on virtual machines (faked hardware) and graphics drivers/cards.
### Input Method Testing
Keyboard, touchscreen, trackpad, voice, pen, foreign language devices, assistive devices, bluetooth enabled devices, other supported peripherals.
### Bounds Testing
Wedging for fun and for profit.
### Feature Testing
Map each application-specific feature hierarchically and verify each performs as expected. Go as deep as you are able. Functionality.
### UI/UX Testing
Layouts. Sections. Buttons. etc. Define views and pages and their respective parts. Cover animations & transitions here. Get flows and storyboards so you have known outcomes.
### Screens Testing
Resolution, screen size, pixel density, window size, responsiveness, orientations, external screens.
### Search Testing
Searching, finding, highlighting and all things related to search within your application.
### Preferences Testing
User configurable application preferences. In application and behind the scenes.
### Communication & Dialogs Testing
Interactivity, clarity, spelling, context, behavior. Feedback.
### Menus, Key Equivalent Testing
Checking each and every menu item, contextual menu item, each keyboard shortcut and all variations and states.
### Configuration Testing
Test each and every internal setting and the features they touch.
### Internationalization Testing
How does it handle another locale or language setting? And inputs?
### Localization Testing
Is the app localized? How does it hold up?
### Claims Testing
Review marketing materials and ensure that each thing is true. Review release notes and ensure each new entry is accurate.
### Beta Testing
Exploratory black box testing from non-stakeholders. Outlay your plans here.
### Documentation Testing
Help guide, about page, getting started – any piece of written stuff that is available in-application.
### Security Testing
Buffer overruns? SQL injection? Gatekeeper, signing, sandboxing, injecting, licensing, etc
### Binaries Testing
Inspect the application bundle in the Finder as deep as you are able. Do the same in Xcode and with any known inspection tools you have. Pass a `strings` over it. Try to fiddle with the innards and see if there is anything inappropriate.
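A quick pass over the bundle might look like this sketch; the app path is a placeholder and the grep terms are just examples of "inappropriate":

```sh
APP="/Applications/NetNewsWire.app"
codesign --verify --verbose=2 "$APP"   # is the signature intact?
spctl --assess "$APP"                  # does Gatekeeper accept it?
strings "$APP/Contents/MacOS/NetNewsWire" \
  | grep -iE 'debug|staging|password' | head   # anything baked in that shouldn't be?
```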
### Risk Testing
All points where data is saved and changes are executed. Make a list and test cases for each and every area.
### Upgrade & Installation Testing
Installs, updates and upgrades. Cover un-installations here.
### Analytics Testing
Any analytics support and confirmation in test/prod.
### Third Party Tools Testing
App Store, Sparkle, TestFlight, frameworks, ad libraries, etc
### External Obligations Testing
Are you contractually obligated to show a client logo in certain areas? Do you need to have license credits in the about page? Are you using the correct social media logos?
### Debug Testing
Have you left any debug nonsense laying around? Do your logs get shuffled off to an email? Do you point to a temporary server? Is the console spewing things it should not? Have a developer help you audit these items.
### First Run Experience Testing
Welcome screens, EULAs, data migrations, prefs, licensing, updates, system checks, how-to guides, use reporting, configuration, walkthroughs, anything and everything tied to the first run of a fresh install. The first run of an update.
### Interoperability Testing
Interoperability with other apps, OSes, or services.
### Modes and States Testing
Dirty environs, messed up settings, sleep mode, safe mode, restarts etc.
### Friendly Neighbor Testing
Determine apps and tools that users may use with your application and make sure they play nice.
### Network Testing
Configs, failures, and events. Get busy with the Network Link Conditioner. Jam a proxy into the mix. Cover network-on-first-launch and relaunch issues.
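For the proxy part, a hedged sketch using the stock networksetup tool; the service name ("Wi-Fi") and Charles' default port (8888) are assumptions to adjust for your machine:

```sh
# route HTTP/HTTPS through a local proxy, test, then undo
networksetup -setwebproxy "Wi-Fi" 127.0.0.1 8888
networksetup -setsecurewebproxy "Wi-Fi" 127.0.0.1 8888
# ...exercise refresh, first launch, relaunch...
networksetup -setwebproxystate "Wi-Fi" off
networksetup -setsecurewebproxystate "Wi-Fi" off
```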
### Project Legacy Testing
Review bug reports from previous versions and beta tests from early incarnations.
### Accessibility Testing
Mouse-less, sight-less, sound-less, colorblind, enlarged text, inverted color, zoom view. Is the experience perceivable, operable, understandable, and robust? Truly sightless VO testing, etc.
### Stress & Performance Testing
Load, endurance, boundaries, interruptions, starvation etc. Establish numbers and then push them.
### Chaos Monkey Testing
Research Netflix's Chaos Monkey, take your cue from the monkey, and go nutzo-smasho on some software. You will find a lot of timing bugs this way, and the cases where errors are shown are a trove of bugs.
### Scenario Testing
Develop use cases and stories. Extract examples from the team, from the support queue and our potential customers. Your stakeholders from all points of entry. As a _$USER_ I need to _$ACTION_ so that I can _$RESULT_.
### Internal Stakeholders Testing
Pick an area and ask an interested team member what *they* would like tested, or if they have any particular concerns about their areas.
### Support Advocacy Testing
Identify weak points in prior versions that have caused support load and tackle these here. Try to identify weak points that have carried over and any new ones. Beta & exploratory ad hoc testing are your friends here.
### Mockups & Design Track Testing
Are we in line with published mock-ups? Has the design diverged? Create test cases that can flow with the constant change of direction. This is more back-end stuff that you should do with the aid of design & dev, not general UI/UX testing.
### Competitor Testing
Identify existing competitors. Run actions in their software that we can accomplish in ours. Compare and contrast and report findings. Review their release notes and support FAQs for test ideas.
### Core Values Testing
Refer to project mission statement. Refer to software values ethos of product owner. Take specific statements and create test cases against them.
### Regression Testing
Performance vs last public release. Versus last beta, alpha, build, update, etc.
### Personal Testing
Identify yourself as a stakeholder, what needs to be tested by you and for you? What are areas you think no one is paying attention to? Document and share and test those items. You have a passion for software quality and are a champion for the product customer, right? What are you doing above and beyond to fight for the end user?

4. Bugs!

BUG REPORT FLOW
  1. Test — use the product, test it, check it out, or otherwise approach it looking for unexpected issues or anomalies
  2. Discover — use your tacit and explicit knowledge of the software and basic platform knowledge to determine a defect, whether it's a logic error, UI defect or unfriendly user experience
  3. Capture — make a screenshot, grab a video, copy the logging, grab example files, note the build type, branch, commit, OS, hardware, etc
  4. Reproduce — review what you did to get the bug to show itself, try again on the same machine, try again on another machine, ensure it can be reproduced, if you can't find the reproduction steps and you feel it is serious enough to be reported anyway, note that and what you tried or thought happened, sometimes this is enough but be prepared to have your case rejected
  5. Regression Check — determine whether this bug was in a previous release or build, or whether it may have come after another bug or feature has had work done on it
  6. Describe — write up your defect report (see below)
  7. Estimate — use your best judgment about how long this issue might take to resolve including coding, testing, documenting, merging etc
  8. Submit — collect all your details and submit them to the bug tracker and move on to the next bug!
  9. Verify — accept a resolved case and verify the issue has been sufficiently resolved
  10. Test — any other things you can think of to ensure that the issue is indeed sorted and hasn't caused any regressions or was perhaps misunderstood or fixed a symptom and not a cause
  11. Add Test Case — now that you have an issue that came up, consider adding a test case in your test suite to ensure you will find the bug if it creeps up again
  12. Submit Release Notes — in the bug tracker, write release notes in the style and format of a public-facing release note, these may be re-written or coalesced but should be clear and concise
  13. Close Case — document the case in the bug tracker with all your notes and things you tried and what you found, and close the case if it is resolved; if the issue is not resolved, leave your findings but mark the case as Unresolved, assign it to Backlog, and notify the person who marked it Resolved originally.

If the thought "Should I file a case on this?" comes into your head, the answer is always Yes; let the team decide what to do with the information. This is, of course, after you have done your due diligence and are reasonably sure about your testing, theory, and have searched and found the issue not already filed.

Writing the actual report:

What Makes A Good Bug Report? and Determining Bug Priority.

We have a pre-formed bug to fill out in the snippets section in FogBugz; it is based on these articles, Apple's infamous Radar form, and my brain. Following a format will go a long way toward a clear and usable report. You don't have to use mine, but using a standard will help you and your team parse your reporting. Here is my personal take on that:

# Title of Your Bug Report Should Sum Up The Whole Thing
I title-case my bug titles because they are titles and I like to be able to pick them out of a lineup.
### Summary
* This is where you expand your issue in more of a paragraph form. If the user never reads past this part, they should still have an understanding of what the main issue is.
### Steps to reproduce
1. you open the app
2. you do a thing
3. you log it
4. you number these steps for clarity
### Results expected
* what you were hoping, wishing, dreaming, and expecting to see as the result of your above numbered actions
### Actual results
* what you observed after the steps that was not in line with your expectations
### Frequency
This list should cover what you found when attempting to reproduce
* Repeatable
* Non-repeatable
* Situational
* Unknown
* Always
* Often
* Sometimes
* Once
### Severity
This is the proposed severity based on your understanding of the issue; prepare to be overridden by your favorite Designer, Developer, Product Owner or Project Manager
* Crasher
* Blocker
* Critical
* Major
* Minor
* Trivial
* Enhancement
### Workaround
* If this is a customer-facing issue, ensure support has a way around things. Being able to see a path around may help the bug report reviewer figure out the source and severity.
### Proposed Solution
* Anything outside of the Expected section above that you feel would resolve the issue.
### Documentation / External References
* This is where things like the HIG, Wikipedia, Design Patterns, Specs, or the like can be noted if you think it might be helpful to move the case forward.
### Existing Cases / Internal References
* Tangentially related bugs you have already filed or documents and requirements from the client or whatever occurs to you.
### Regression Status
* In your repro you may have checked a prior build or release, branch or commit; this can be enormously helpful to your resolver.
### Configuration Details
* **Hardware** – be explicit
* **OS** – be explicit, harder
* **Build Info** – down to the SHA or Jenkins link or whathaveyou
* **Pertinent Settings** – network issues, proxies, OS settings, etc
### Enclosures
* Sample files, mock data, test accounts, the build, screenshots, videos or whatever source you have to reproduce the issue
### Personal Notes
* The personal appeal, confessions of known biases, love notes and curious musings.

5. Your Balance Sheet

Nothing's really handed to you on a gold-spray-painted paper plate. What you do have is something like the following:

Assets
  • your beautiful brain
  • friendly and enormously capable project team
  • band of QA goons ready to prove their mettle
  • knowledge of RSS and platform
  • other RSS clients to compare to
  • continuous integration
  • some test documentation, test plans
  • source code
  • industry heuristics
  • bug tracker history
  • Instruments is a wonderful tool
Liabilities
  • no unit tests, not a TDD project
  • no automated tests to speak of
  • no real test mock data
  • no requirements documentation
  • upcoming sync feature pending
  • upcoming iOS offerings pending

6. Yes, but what do I do all day long?

You are an Executive Software Tester which means your work is never done but you are the master of your domain. What you do is dictated by your team, the sprint at hand, known deadlines, and your own insights. Your days will generally consist of:

  • attend team and product scrum meetings
  • attend your other meetings and prepare for them
  • clear your Resolved queue in FogBugz
  • attend to any new features; testing & documenting
  • dip into support queue for bugs and to lighten their load
  • plan out test charter or test plan execution
  • perform testing session
  • file cases from your testing sessions
  • eyeball the day's pushes and commits to the code repo for testing ideas
  • build and run the top of your elected branch and get to testing!
  • lightly monitor chatroom to see what the team is up to
  • review newly filed cases from people who aren't you (doing a FogBugz search on project:"NetNewsWire 4.0" -status:"Closed" -viewedby:"me" -area:"Support" -area:"DevOps" orderby:"LastEdited" is a great way to keep on top of things)
  • inquire with the team or a person and ask what they want tested (doubtless you will rarely need to do this, as you will be asked to perform branch testing ad hoc)
  • update test plans, the wiki, etc, maintain your documents and log your charters
  • automate something, or generally find ways to make your test setup efficient and streamlined

7. Workflows

These are the general things you need to know from a QA, Support, and Build perspective.

Git Process

You probably won't be committing much but this is the general status of the way things are and why. Git usage training and coverage is something we can do but beyond the scope of this introduction guide. Ping your team members at any time, we're all happy to help with Git things.

MASTER
  • public release of the product; these should coincide, by precise SHA, with the publicly available point at which we cut an RC build
DEVELOP
  • active working branch, default QA branch, generally stable; you are its weather balloon, so pull and run this a lot and let anyone know ASAP if it's crashy or too buggy to use. It's your underlying duty amidst all the other chaos.
PENDING RELEASE BRANCHES
  • when cutting a sprint or RC build, etc., QA moves to a public release branch; these are temporary and strategic, and you should announce when you are moving on or off these branches.
FEATURE BRANCHES
  • per developer and per feature; they can be dicey, and QA only goes into them as requested, generally for smoke, stability, and regression testing before being merged back into develop. If a developer pulls you aside for assistance, please give it your utmost attention.

Case life cycles

In short: a case is created, assessed by the team or Product Owner, a guesstimated time is attached to it, an importance level is set, a user is assigned, a sprint is assigned. When the submitter is QA, most of these things should not be set, the exception being Severity for Blockers, Crashers, and The Sky Is Falling-ers. If you've done a good job writing up the report and keep your bugs in generally the same standard layout, this eases the triage process greatly. Do more than consider it.

QA Role In Case Management

What each state means and how you got there in the first place.

TRIAGE
  • This is the default state you submit your case in. This signals that the team or Product Owner needs to assess and adjust the case.
BLOCKED
  • Any case that can't be acted upon for any reason. The case is set to Blocked and generally assigned to the person who's blocked it, or to the PO, to assist in the unblocking thereof.
RESOLVED
  • This means something has happened with the case in order that QA may review and either volley back or close. This is the bulk of your workload when not working off your own test schedule. You will have a rolling series of cases coming your way in a resolved state. For the most part, they are bugs you yourself filed and will need to assess whether they are addressed sufficiently or not. Other states are Won't Fix or the like and generally self-evident. Respond to these as swiftly as you are able to minimize the randomization (context switching) of your developer friends and ensure action is made on the appropriate sprint. If you find new bugs in the validation of a Resolved case, file them separately and connect them to the original but don't add them to the same case.
CLOSED
  • This is where you have sufficiently determined the case is no longer an issue. You enter in Release Notes up to the snuff of public scrutiny, write up your findings in the case, perhaps note your own documents with the enclosed test cases, and consider the matter Closed. Woohoo.
Case Types

These aren't always apparent by eyeballing the individual cases in the bug tracker. This is just the general type of case you may run into and how they may differentiate. Context is crucial.

USER REPORTED ISSUES
  • These come in by way of the support queue, usually scraped into their own ticket with fresh reproduction steps and supporting documentation attached. The format of these is generally taken from the customer.
TEAM REPORTED DEFECTS
  • These generally come in by way of the product team itself, when the product does not meet the specifications. This can be from the Product Owner clarifying expected behavior, new features being hashed out, design discrepancies, or platform issues. The format of these is generally from the eye of that particular internal stakeholder, so consider their purview when reviewing.
QA REPORTED BUGS
  • These come in by way of your formal testing. These are our bread and butter. These are our most important deliverables. These are covered in depth in the previous Bug Reporting section of this document. Be really good at writing these and swift at filing these. When they are urgent, ensure the team is aware of them. In the case of Regressions, well, those are extra special and we'll cover those later in this document.
USER STORIES
  • This is a Scrum Methodology tool that doesn't get much use with this particular team at this time. When you are asked to start tracking your hours, this will be a large part of that. If you feel these help you now, please do implement them in a way that serves the team and doesn't step on anyone's toes during sprint planning. Review the Agile/Scrum portions of this document for more details, and/or contact your friendly local Product Owner for guidance.
TASK CASES
  • Many times these are technical debt or TODOs and QA doesn't see them unless it's a task for us! Feel free to make your own and use the bug tracker to meet your needs. Much in the same way you would use a User Story except almost no one frames their cases like that on this project. Again, confer with the Product Owner to avoid collisions of course!
FEATURE CASES
  • These will usually come in by way of the product owner or tech lead. They are implementation cases that may be deceptively simple-looking but will be a whole new feature that you must test and document and weave into your brain. These should have a dedicated reconnaissance session to explore the feature and what it is, so you can test it, followed by as many exploratory test sessions as it takes to provide sufficient test coverage and documentation. Then these knowledge-gathering missions can be rolled into the standard set of known-features lean test cases.

Release & Sprint cycles for QA

MONDAY
  • release branch is published and/or a release candidate is made available
  • smoke, acceptance & regression, feature and claims testing on supported systems
TUESDAY
  • smoke, acceptance & regression, feature and claims testing on supported systems
  • filing cases
  • lobbying for fixes of those cases
WEDNESDAY
  • your testing continues
  • developers push fixes
  • last minute testing of those fixes
THURSDAY
  • follow up on any missing areas
  • focus on risk and regressions
  • test, test, test, panic
FRIDAY
  • last minute testing and following up of lingering cases
  • should arrive at a Pass/Fail for the release
MONDAY
  • release!
  • flinch as you watch the support queue, Twitter, and the crash catcher for anything you missed
  • Move back to the Develop branch for ad hoc testing
  • figure out what features are being worked on
TUESDAY
  • feature testing [see below section regarding the subset of feature testing]
  • bug filing
  • Bug validation
WEDNESDAY
  • feature testing
  • bug filing
  • Bug validation
THURSDAY
  • feature testing
  • bug filing
  • Bug validation
FRIDAY
  • testing continues
  • Release branch created
  • Build cut by dev
MONDAY
  • See first Monday!

Paradeveloping Features: Roles

The flow of a feature in the eyes of QA

  • PROPOSED FEATURE
    • dev role: feature branch created
    • qa role: observers, user advocates
  • UNDERSTAND FEATURE USE
    • dev role: feature implemented
    • qa role: research, theorize, document, plan
  • TEST FEATURE MANUALLY
    • dev role: merge feature branch into working branch
    • qa role: exploratory testing et al
  • PUBLISH FEATURE
    • dev role: feature released in public release branch
    • qa role: build and release assistance, more testing
  • MATURE FEATURE
    • dev role: public release branch goes to release candidate and out into the wild
    • qa role: monitor public feedback, bugs ahoy
  • PUBLISH TEST CASES FOR THAT FEATURE
    • dev role: eat cake
    • qa role: oversee tests and validation, automate if applicable

8. Working Smarter: Filling the margins

If you're actively using the app and weaving it into your daily workflow, the coverage for testing and the in-depth user knowledge will grow without having to devote extra time and attention. You could:

  • Use NetNewsWire to track commits in GitHub
  • Subscribe to FogBugz cases for your projects
  • Use the in-app web view to manage FogBugz
  • Track Jenkins builds
  • Subscribe to testing blogs
  • Subscribe to our competitors
  • Subscribe to unusual feeds with unusual content
  • Subscribe to the popular feeds and generally popular Internet content.

9. Required Reading & Resources

Some things that might be helpful for when you're in a rut or need a nudge, a hand, or when your eyes glaze over.

APPS
  • Find a Markdown app you like, as you will be writing test plans and test cases in it; Mou suits me well
  • Find a mind mapping program you like and use it too; MindNode Pro was the best for me
  • Find an image capture tool you like, I use Skitch 1x for images and SnapzPro for video
  • Find a capture workflow that works for you, I've had good luck with Cloudly and Evernote
  • Settle in on network tools, I like Little Snitch for general traffic monitoring and blocking, Charles Proxy for monitoring and fiddling with HTTP traffic, HTTPClient for quickly checking HTTP responses on feeds.
HEURISTICS AND IDEAS
BOOKS
SOFTWARE PROCESS

10. Frequently Asked Questions

Not by you, of you...

  • Q.) Have you looked at my branch yet?
  • A.) No!
    • You will be asked to test feature branches, but don't do this without the whole team knowing you are diverging or being randomized. I.e.: do not do this in private, as everyone is assuming you are working off of develop, and if you've been hunting down issues somewhere else you are not on the same page as your team
  • Q.) Is QA satisfied with amount of testing done on this?
  • A.) Never!
    • OK, maybe not Never, but you should be comfortable with your testing coverage when features are merged in, pre-release, post-bug-fixes, and the like. You should be able to show your work at any moment for any part of the application or project
  • Q.) Have you heard that no one really reads documentation?
  • A.) Only from the incurious!
    • this is a joke, but be able and ready to show and share your test plans and documents with any member of the team at any time. You may feel that your documents don't get used or read, but they will and do intermittently, and they are of course in heavy usage by you, tester.

11. Glossary

Terms and abbreviations that may be opaque.

  • Agile umbrella term for various agile-manifesto influenced methodologies
  • Alpha pre-beta rough release, stable but perhaps not feature complete
  • Backlog holding bucket for cases that have not been swept into a sprint
  • Beta rough and stable and feature complete but not Golden Master ready
  • Blocked someone or something has an issue outside of your control that is disallowing you from completing your work or task
  • Blocker bug or issue large enough to delay release
  • Branch a strategic divergence of the code for the purpose of working on the code without affecting another branch (ex: main branch versus a feature branch)
  • Bug Report anomaly or defect report with enough information for the team to resolve the matter
  • Build command in Xcode to compile your app
  • Burndown rate at which sprints are completing backlog tasks
  • Checking verifying the results of a test
  • Checkout switching a branch in git
  • Clean Install wiping all support files to run as if it's the first time
  • Clean Xcode command to wipe previous build detritus from build directory
  • Clone a copy of the repository, a command in Git to do so
  • Closing verifying a resolved case is no longer an issue and noting FogBugz with the details
  • Commit intermittent save of changes to a branch/repository
  • Develop main working branch that feature branches are merged with
  • Debug a scheme in your project that will generally have more verbose logging and some test or non-public-facing features
  • Defect Report a bug report, including STR and additional information to reproduce the issue in question
  • Dogfooding using your software in the course of your normal machine usage
  • Estimates means of guessing the time or level of effort needed to complete a task
  • Feature Branch working branches diverged from Develop for the purpose of isolated feature work
  • Git our chosen version control system
  • Git flow an agreed upon system of interacting with a code repository, also a formalized tool
  • GitHub where our Git repository is hosted
  • Head A named reference to the commit at the tip of a git branch.
  • Heuristic experience-based techniques for problem-solving, learning, and discovery
  • Instruments advanced Xcode testing tool
  • Jenkins the continuous integration server
  • Markdown lightweight method of marking up plain text
  • Master git branch that coincides with the public release (for NetNewsWire, sometimes this branch is the only branch in a repo)
  • Merge taking two branches in Git and merging them, very often a feature branch back into develop
  • MVP Minimum Viable Product
  • Product Owner end decider of the project, the boss
  • Pull Fetch from and merge with another repository or a local branch
  • Push Update remote refs along with associated objects
  • QA Question Averything (mwuahaha!)
  • RC release candidate
  • Release Candidate a build that is prospectively going to be released to the public
  • Release Scheme build type that will generally have lessened logging and debugging features removed
  • Remote the remote location where your repository lives
  • Repository where the version control system stores your project
  • Resolved the resolver believes the issue has been addressed enough to be verified for completeness and accuracy
  • Revision a particular commit which maps to a SHA aka Hash
  • RSS really sloppy syndication, ha
  • Scrum the Agile group
  • SHA hash of a particular git commit
  • Simulator iOS simulator app in Xcode to run in an iPhone/iPod/iPad-like environment
  • Steps to Reproduce exact actions needed to demonstrate issue or behavior
  • STR steps to reproduce
  • TDD test driven development
  • Test Case umbrella term for sample data, reproducible steps or examples
  • Testing umbrella term for verifying and exploring
  • TLF top level feature
  • Track tell git you want to track the commits of a branch
  • UI user interface
  • UIAutomation tool to automate iOS GUI testing
  • UX user experience
  • Verification ensuring, to the best of your ability, the expected outcomes
  • VM virtual machine, à la Parallels or VMware
  • VO VoiceOver, OS X screenreader
  • Xcode OS X IDE par excellence

cobbled together by Sheree Pena with loving kindness on Monday, July 8, 2013 (updated Saturday, August 3, 2013, and Sunday, August 11, 2013) out of parts of previous guides, her blog, and her brain