QA Talk For Code Schoolers

10/31/2017 · 6 Min Read — In Testing

This is a stub based on a talk aimed at junior devs. It's a bit of a mess in this format and I've got some attributions to work out, but, warts and all, here is a talk I gave on QA/Test.


QA is a misnomer.

  • QA isn't a verb

  • Quality is everyone's responsibility

  • If you can't control all variables, you can't assure the quality.

  • You can't control all the variables.

  • Neither can we.

  • We're here to mitigate risk.

  • We're here to test and advocate for quality.

  • That means we're poking holes in code, in/output and the process itself.

Sometimes test & QA can only make a small dent but:

  • 10% better can make 100% of the difference.

  • 10% faster can also make 100% of the difference.

Maybe QA stands for:

  • Question Asking

  • Quality Analyst

  • Quagmire Adjuster

  • Quantity Assistance

  • Qualitative Analysis

  • Question Averything

Quality is value to some person.

  • QA is an umbrella term that includes testing, but mostly it's about building quality in and trying to prevent bugs before they occur.

  • Testing is a technical investigation to reveal facts about the quality of the system on behalf of stakeholders so they may make business decisions.

Developers Testing Their Own Work

Imagine an NYT reporter cranking out a story on deadline and sending it straight to press.

  • With no fact checking.

  • Or proofreading.

  • etc.

silly, right?

We have to tell you your baby is ugly. This is not a value judgment of you as a babymaker. Or even the baby. We want to birth the best baby we can get.

QA & Test is here to help you and the project as a whole. The relationship should never be adversarial.

Stated another way:

We care about the work, not the credit.

Things We Want You To Know

Please perish the "it's not ready to be tested" thought.

If there is a spec, req, brief, artifact, plan, code, design, idea, or executable, it does not matter: it is testable.


You cannot test quality into a system.


We're here to dispel the illusion that things were working in the first place.

you may have heard some other terms like...

It's not "manual" testing, it's just testing.


You wouldn't call it manual programming, would you? All testing is done by a human. Automated tests are called 'checks', the way automated programming is called 'compiling'. Sometimes "manual" still comes out of my mouth when I have to talk with stakeholders who only understand testing as "not coding".

It's not "exploratory" testing, it's just testing.


That's almost as bad as saying 'ATM machine'. Testing by its very nature is exploratory. Like wet water or delicious pizza. Sometimes people think exploratory amounts to ad hoc. This is inaccurate.

It's not "scripted" testing, it's just testing.


Did you mean automated verification? We just call that 'checking'. A tester may or may not employ documented tests for their testing if it suits the context.

It's not "standard" testing, it's just testing.


Whose standards? Which standards? Maybe you meant IEEE or ISO 29119? It's true that some software out there is regulated, but I assure you this reskin and most of your future projects are not.

It's not "formal" testing, it's just testing.


There is no formal testing process, although testers understand why the idea is so appealing to stakeholders. Sometimes a bit of rigor and formality is the right thing to lean on for test coverage. Most of the time, people just mean documented testing.

one more time for the people in the back

If you know the expected result beforehand, it’s a check.

If you do not know, it’s a test.

  • You will find this to be a useful distinction when talking about testing.

  • Checks are typically automated or automatable.

  • Tests are not automated because they are typically not automatable.

  • Checking is a somewhat measurable deliverable.

  • Testing is a performance and a service.

You don't have to use these terms but they are leaking out of the test world and into the dev world.
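To make the check/test split concrete, here's a minimal sketch in Python (the function and values are invented for illustration): the expected results are known up front, so a machine can evaluate pass/fail with no human judgment.

```python
# A "check": the expected result is known before we run it,
# so a machine can evaluate pass/fail with no human judgment.
def add(a, b):
    return a + b

def check_add():
    # Known input, known expected output -> automatable check.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

check_add()
print("all checks passed")
```

A test, by contrast, is the human act of exploring what happens with inputs and behaviors nobody anticipated, which is exactly the part you can't write down in advance.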


Automation supplements good functional testing. Automation can't think outside the box, use gut instinct, take side paths, or discover new areas on the fly. It's good for automating API and web service checks, and less good for things that change or are non-linear.

Test cases

Test cases are a tool, not a deliverable. If you are thinking in test cases, you are necessarily talking about checks and those should be thought of in terms of automation.

The Testing Pyramid


The pyramid is made up of checks:

  • Unit tests at the base.

  • Integration tests in the middle band.

  • The tippy top of the pyramid is for UI testing.

  • Above and around the pyramid is an orb which is human testing and hands-on QA. ORB!

Typically, Domain, User, Scenario, Claims, Risk, and Security testing live in the orb above the pyramid, while Stress, Performance, Function, and Flow testing live in the pyramid itself. It's a heuristic-based pyramid and therefore fallible. Things like Accessibility can appear in both.
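As a hedged sketch of the pyramid's lower layers (all names here are hypothetical, not from the talk): a unit check exercises one function in isolation, while an integration check exercises the seam between two units working together.

```python
# Unit check: one function in isolation -- fast and plentiful (the base).
def to_cents(dollars):
    return round(dollars * 100)

def check_to_cents_unit():
    assert to_cents(1.5) == 150

# Integration check: two units together (the middle band).
def apply_discount(cents, percent):
    return cents - cents * percent // 100

def check_checkout_integration():
    # to_cents feeding apply_discount -- exercises the seam between units.
    assert apply_discount(to_cents(2.0), 10) == 180

check_to_cents_unit()
check_checkout_integration()
print("pyramid checks passed")
```

The orb is the part no fence of asserts can capture: a human poking at the checkout flow and noticing something nobody wrote a check for.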

More On Automation

Automation is fantastic, and we embrace and value agile and DevOps-style automation. HOWEVER! Some stakeholders think automating all the testing is a thing that can and should be done.


Here's what it means to "automate all the testing": automate all the evaluation and learning and exploration and experimentation and modeling and studying of the specs and observation of the product and inference-drawing and questioning and risk assessment and prioritization and coverage analysis and pattern recognition and decision making and design of the test lab and preparation of the test lab and sensemaking and test code development and tool selection and recruiting of helpers and making test notes and preparing simulations and bug advocacy and triage and relationship building and product configuration and application of oracles and spontaneous playful interaction with the product and discovery of new information and preparation of reports for management and recording of problems and investigation of problems and working out puzzling situations and building the test team and analyzing competitors and resolving conflicting information and benchmarking

  • We are pro-automation.

  • We believe it's a part of a robust test coverage strategy.

  • It's simply not going to meet the need alone.

  • Where we can whittle away needless toil, we will strive to.

  • When we can make tools and checks that lighten the load or help deployment or whatever we need, that's where we will focus programmatic test attention.


When programmers talk about testability, they're usually talking about abstraction, determinism, coupling, etc. All of those things apply to testers who are writing programmatic checks, and they help with general testability. From the tester's end of things, we're mostly concerned with:

  • observability

  • manipulability

  • stability

Can we inspect it, can we exercise it, is it usable enough to not crash and block testing? Cool.
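Here's a minimal, illustrative sketch (names invented, not from the talk) of how small design choices buy those properties: injecting the clock gives a check manipulability (it can set any time), and returning a result object gives it observability (it can inspect state instead of guessing).

```python
# Illustrative sketch: small design choices that help testers.
from dataclasses import dataclass
import time

@dataclass
class SessionResult:
    started_at: float
    expired: bool

def check_session(started_at, now=None, ttl=3600):
    # `now` is injectable, so a check can exercise any point in time
    # instead of sleeping for an hour (manipulability).
    now = time.time() if now is None else now
    # Returning a result object exposes state for inspection (observability).
    return SessionResult(started_at, expired=(now - started_at) > ttl)

# A check can manipulate time directly: session started at t=0, now=7200s.
result = check_session(started_at=0, now=7200)
assert result.expired
print("session expired as expected")
```

The same function with a hardcoded `time.time()` and a bare `True`/`False` return would be neither manipulable nor particularly observable, which is the difference testers feel every day.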

Modern shift-left Agiley JIT Testing

You may have heard "mobile first" or "shift-left security" or "accessibility first" and on and on. The problem is not building these things in from the start, and the trouble that causes down the line. The folks with a seat at the table eating that first meal aren't bringing all of these things to the potluck.

This is a billion dollar problem we can't solve but we gon' try.





Building Quality Into the Product


Catching a bug in:

  • the spec is cheaper than in the design.

  • the design is cheaper than in the code.

  • the code is cheaper than in test.

  • test is cheaper than in production.

  • Most users who find your bugs do not report them, they simply stop using the product.

  • Or they are the client and we're all in deep shit.

Bugs should be hard to find. It should be the goal of the developer for QA to find nothing.

If the happy path isn't working as expected, it's not ready for deep scrutiny or bug reports but it is testable.
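A happy-path smoke check might look like this minimal sketch (the login logic is a hypothetical stand-in): it covers only the expected, well-formed flow, with no edge cases yet.

```python
# Illustrative "happy path" smoke check: confirm the basic flow works
# before inviting deep scrutiny or filing detailed bug reports.
def login(username, password):
    # Toy stand-in for a real auth flow (hypothetical logic).
    return bool(username) and bool(password)

def smoke_check():
    # Only the expected, well-formed flow -- no edge cases yet.
    assert login("alice", "s3cret") is True
    return "happy path ok"

print(smoke_check())
```

If even this fails, the build goes back rather than into the bug tracker; if it passes, the exploratory work can begin.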

the primacy of feedback loops

in the ideal lifecycle, a bug never makes it into the bug tracker

If it bugs you, it's a bug.

  • having testers on the team doesn't excuse you from preventing and reporting bugs

  • if you see a bug, file it

  • if you can fix it in less than 2min, just fix it

if you SEE a bug

but do not REPORT the bug

you ARE a bug

"Works on my machine." We're not shipping your machine.

Bad culture and process can lead to QA being seen as gatekeepers. Worse, testers can end up being the sin-eaters, trying at the very end to make up for all the failures and foibles of the project.

Let's try to avoid that here.

What do we do all day long? It generally falls into two buckets: proactive and reactive.

Proactive:


  • Human rubber ducking

  • Planning

  • Documenting

Reactive:

  • Testing the system and process

  • Code review and output inspection

  • Triaging our work

Test Planning: reconnaissance & research, strategic planning, finding and utilizing oracles, establishing heuristics, utilizing common approach tactics.

Test Documents: create and maintain documents, write simple test cases, write complex user stories, generate crafty exploratory test charters.

Test Data: manage test environments, generate mock data, realistic contrived scenarios, repeatable and reusable test case data.

Test Tools: research appropriate tools, write testing and test environment tools, implement automation as applicable.

Testing Software: interpreting customer-reported issues, logging defects against specifications, hunting down bugs & reproducing them.

Testing Project Administration: case validation and verification, defect case filing & triaging, progress reporting, self-assessment.

  • Include us. If you're having a technical meeting and there is no test presence, something has gone wrong.

  • Communicate status of testability so we're not wasting cycles and we can give things appropriate attention.


If...

  • you disagree with me on any of these points

  • you want to point out a typo


Please reach out to your friendly neighborhood software tester. We'd love to talk.

You should know...

The testing role is so varied and wide that you can count on the next tester you interact with disagreeing with a bunch of this stuff. And we're both wrong. And we're both correct.