During my last gig, I put together a set of principles for our QA practices.
They helped me communicate what I was doing as a QA, and I used the rationale behind each principle to move quality forward in the department.
Depending on the maturity level of the practices followed in your team, you might need some voice-over to explain the points below.
1. Principle: Testing at the right level
Rationale: The QA strategy will follow the “testing pyramid” approach. This principle emphasizes testing the right amount of functionality at the right level. As we climb the testing pyramid, writing, running and maintaining tests becomes more expensive. Higher-level tests are also more brittle. The general approach will be “push tests to lower levels whenever it can be done”.
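As a minimal illustration of pushing tests down the pyramid (all names below are hypothetical), business rules like this can be verified with fast unit-level checks instead of a slow end-to-end run through the UI:

```java
// Pure business logic: no browser, server or database needed to test it.
public class OrderRules {
    // Discount rule: large orders get 10% off, medium orders 5%, small orders none.
    public static int discountPercent(int orderTotal) {
        if (orderTotal >= 1000) return 10;
        if (orderTotal >= 500)  return 5;
        return 0;
    }

    public static void main(String[] args) {
        // Unit-level checks: milliseconds to run, cheap to maintain.
        assert discountPercent(1200) == 10;
        assert discountPercent(600) == 5;
        assert discountPercent(100) == 0;
        System.out.println("all unit checks passed");
    }
}
```

The same rule could be exercised through the UI, but at a much higher cost per check; keeping it at the unit level leaves only the wiring for the higher levels.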
2. Principle: Mock testing end-points whenever it makes sense
Rationale: Mock testing is not a solution for all your integration problems. The cost of mock testing should be considered before moving forward.
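As a minimal sketch of what mocking an end-point looks like (all class and method names here are hypothetical), the real call goes behind an interface and the test substitutes a canned response, so it runs fast and independently of the third-party service:

```java
// The contract the production code depends on.
interface PriceEndpoint {
    int priceInCents(String sku);
}

// Mock implementation: returns canned data instead of making a network call.
class MockPriceEndpoint implements PriceEndpoint {
    public int priceInCents(String sku) {
        return "SKU-1".equals(sku) ? 499 : 0;
    }
}

public class Checkout {
    private final PriceEndpoint endpoint;

    public Checkout(PriceEndpoint endpoint) { this.endpoint = endpoint; }

    public int totalInCents(String sku, int quantity) {
        return endpoint.priceInCents(sku) * quantity;
    }

    public static void main(String[] args) {
        // Test the checkout logic against the mock, not the live service.
        Checkout checkout = new Checkout(new MockPriceEndpoint());
        System.out.println(checkout.totalInCents("SKU-1", 3)); // prints 1497
    }
}
```

The trade-off named in the rationale applies here too: the mock must be kept in sync with the real end-point, which is maintenance you only want to pay for when it makes sense.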
3. Principle: Do not propagate defects in the value stream
Rationale: Quality is not something that can be achieved at the end of the development process with some testing magic; it should be built in throughout the process. QAs should not wait until the very end of the value stream to catch defects. The target should be not generating these defects at all, and any defect that is found in the value stream should not be propagated downstream. https://leanqa.wordpress.com/2012/06/28/dont-get-it-dont-make-it-dont-send-it/
4. Principle: Keep test data clean and broadcast information
Rationale: The health of the test data is important for the quality of testing and the velocity of delivery. Incomplete, undocumented and problematic test data should be fixed immediately. Test data should be accessible by everyone, as we want a team where everybody tests. Broadcast the test data in a medium that everyone can access.
5. Principle: Automate when it pays back
Rationale: Pros: Automation saves time, takes care of the tedious and boring work, and pays back handsomely if done properly. Spend time on and invest in automation.
Cons: Adding automation also means adding maintenance. Do not automate for the sake of automation. Automation frameworks are tools that will enable the team to deliver a product with the desired quality. Automation is a vehicle to reach that target, get on it if it makes sense.
6. Principle: QAs pair with other team members
Rationale: A team with quality awareness is a key factor in having an effective, high-achieving team. If the QAs in a team are continuously on a treadmill because they are writing test automation and testing cards, this is a bad smell. It means that testing is not embraced by the team and probably all of the testing falls on the QA team. To help the team understand the QA mindset and embrace quality practices, developers should have hands-on testing experience. This can be achieved by QAs pairing with devs occasionally. The main idea is diffusing the QA mindset through the team so that testing is seen as everybody’s responsibility.
7. Principle: QA mindset over tester’s mindset
Rationale: Tester mind-set: Sits at the end of the delivery process. Acts as quality police. Behaves like a traffic light – signals what can pass and what must stop.
QA mind-set: Oversees the whole delivery process. Acts as a quality champion / consultant. Behaves like a street lamp – sheds light on the risks so that the team can act on them. Asks “What is in it for the business?” while testing.
Effective kick-offs and hand-overs: Kick-offs and hand-overs are good course-correction points to check whether what we deliver meets the customer requirements. Hold them with someone from the business, the developer who played the story and a UX designer. Finding points that need clarification, fixing or redesign is far cheaper at these stages than later. Kick-offs and hand-overs also encourage business collaboration.
8. Principle: Beware of the 7 deadly wastes
- Transportation: Transportation waste is the movement of materials and goods that is not actually required for testing. E.g., do we really need to pull these gems before we run the tests? What test data do we need to move around to have a meaningful environment?
- Inventory: Do not keep batches of tests in WIP or To-Do. Run test automation in smaller batches (daily, nightly, weekly runs).
- Movement: This is about the movement of people. Do you need to go somewhere else to test? Do you need to go to meetings to test?
- Waiting: Such as waiting for the end of a functional test run because it takes a long time, or stories waiting for hand-over.
- Over-production: Do we really need that many tests at the functional level?
- Over-engineering: Implement what is expected; do not over-engineer your implementation.
- Defects: Such as defects in functionality and design.
Logging is important for understanding what is going on in your app. But without guidelines, it can be yet another place that leaks important information.
- Do not log tokens
- Do not log passwords or user names.
- Do not log any PII.
The best way to avoid this is not logging the data but logging what happened. A minimal sketch (the loader and logger names are hypothetical):
SensitiveObject sensitiveObject = loadSensitiveObject(); // hypothetical loader
String safeSummary = sensitiveObject.thisInfoDoesNotHurt(); // exposes no secrets
logger.info("here is what happened but I am not leaking any sensitive info: " + safeSummary);
From an agile perspective, quality is basically the fitness for the intended use.
A high quality Agile delivery process:
- Enables the creation of value that the customer is willing to pay for.
- Eliminates failure demand.
- Helps to move the stories downstream correctly without any context loss.
- Ensures the robustness of the product by creating test automation.
- Eliminates waste.
- Tests for the risks first, not for the coverage.
- Makes the risks visible.
It is not possible to achieve the goals above with a waterfall methodology.
“Building quality in” by involving QAs at each and every stage of the SDLC and making the whole team responsible for testing will help us achieve our goals.
Quality levels are also defined by the context; this is called context-driven quality. The quality targets of airplane navigation software will probably be different from those of a mobile gaming app.
I’ve been in some teams in the past where people saw stand-ups as a burden. I’ve heard various complaints:
“I wish I could do some real work in the time I spent in stand-ups”
“I don’t care what the team does, I have my isolated work to do”
“I can’t wake up in the mornings.”
First of all, stand-ups are not unique to software development. Toyota has been using a different kind of stand-up for years, and years of practice have proved that it works. These meetings are called Obeya meetings. Pete Abilla wrote a very good article on Obeya and how it improves your process.
“..Again, basic combinatorics teaches us that as the number of agents involved in a process increases, the communication links between those agents increases exponentially, thus allowing for a potentialy Nx communication-link breakdown. To manage that, scheduled but quick Obeya meetings can help, as well as as-needed informal meetings between individuals and groups...”
So as your team grows, the number of communication links between team members grows rapidly – n people have n(n-1)/2 potential links. With stand-ups, you cover a lot of these links in a short time.
Another myth about stand-ups is whether a team needs them every day. A 20-minute stand-up consumes about 4.2% of an 8-hour day, which adds up to roughly 1/5 of a day per week. So the question about the necessity of having stand-ups every day is valid.
It is not easy to realize this immediately, but stand-ups actually free up your day: because of them, the number of other meetings you need to have decreases. So having them every day is eventually a big time-saver.
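The arithmetic above can be double-checked with a quick back-of-the-envelope sketch (assuming an 8-hour, 480-minute workday):

```java
// Back-of-the-envelope cost of a daily stand-up (assumes an 8-hour workday).
public class StandupCost {
    public static double dailyShare(int standupMinutes, int workdayMinutes) {
        return 100.0 * standupMinutes / workdayMinutes;
    }

    public static void main(String[] args) {
        // 20 minutes out of 480 is ~4.2% of the day...
        System.out.printf("%.1f%% of the day%n", dailyShare(20, 480));
        // ...which adds up to 100 minutes, roughly 1/5 of a day, per week.
        System.out.println(20 * 5 + " minutes per week");
    }
}
```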
“Don’t get it, don’t make it, don’t send it” is a slogan that emphasizes the “quality first” practice in gemba kaizen. It was first formulated by Masaaki Imai, and you can read more about it in his book, Gemba Kaizen.
Though it was first formulated for production/manufacturing-focused industries, as most lean principles were, it can easily be applied to agile projects.
Having the QA team as a bottleneck is not uncommon in agile projects. Although there might be various reasons behind it – such as estimating stories without including the testing effort, tech debt, etc. – one of the most important contributors is the “debt” that QAs inherit from upstream. Poor analysis upstream will have a considerable impact on development, but an even bigger impact on a downstream process like QA. The cost of the lack of quality in a story increases as the story propagates downstream.
Following the “don’t get it, don’t make it, don’t send it” principle will have a positive effect on the quality of your product and will decrease your delivery cycle.
Don’t get it: If you think that the story you are getting from upstream (upstream is “analysis” for developers and “development” for QAs) does not have the quality built in, do not get it.
Do not accept it.
If the story does not have clear or enough acceptance criteria to start development, do not get it to develop. If the unit tests are not properly implemented, do not get it to do the functional testing.
Don’t make it: Remember, “quality first”. Always. Do not sacrifice quality for the sake of cost or delivery. Keep in mind that delivering a product without meeting the quality requirements does not make any sense. Also remember the cost of the lack of quality: if you sacrifice quality and decrease the first, short-term, visible cost, are you really decreasing your cost, or are you increasing it in total?
Don’t send it: Do not send a batch of work downstream (i.e. from analysis to development, or from development to QA) if you think that you did not build the quality in. If your downstream is starving for work, that is a symptom of failed planning and bad process management. You should not rush your work downstream to feed the starving process; instead of solving the starvation problem, it will cause more problems. Learn your lesson and focus on solving the root cause. Starving downstream processes are not a root cause but a symptom of the problems in your overall process management.
To follow these practices, you can use some tools or create some guidelines. Having hand-overs between processes might help to create awareness when you first start. Try not to make these hand-overs too constraining, so that you do not alienate people.
But also, keep in mind that you need to create some standards.
No standards, no improvement.
I am not a believer in pure coaching or pure delivery projects.
All coaching projects have some delivery in them (i.e. leading by example) and all delivery projects have some coaching in them.
I think coaching your client is not even optional in delivery projects. It is something you should do to make both your life and your client’s life easier.
Imagine that you are delivering your stories with the velocity you are expected to, but your bottleneck is the acceptance of these stories by the client. Somehow, your client is not able to accept your stories as fast as you deliver them.
You can do two things:
- You can say: it is my client’s internal process, and I am only responsible for the delivery of the product. Their broken story-acceptance process is not my concern.
- Or you can say: their broken process really hurts our whole pipeline. Because they wait some time to accept the stories after our delivery, they come up with change requests, which are more expensive to fix than fixing issues just after implementation. If we help them improve their process, both of us will have fewer problems.
Believe me, life will be easier. Solving upstream problems will not solve all your downstream problems but it will help a lot.
I think instead of talking about acceptable response times, talking about thresholds might make more sense if we are talking about high volume and high traffic websites.
Including the project I am on now, I think one of the most painful parts of NFT (non-functional testing) is getting benchmark numbers from the business. Especially if the client did not have a structured approach to performance testing in the past, most of the time the best you can hear from them is “We don’t want our users to get frustrated; that is what we expect from a performance perspective”.
There is no agreed-upon industry standard for response times, but there is an industry standard for calculating response-time performance. Apdex is widely used in the industry, and most monitoring tools (including New Relic) have built-in Apdex support.
Basically, a “T-value” is defined and your Apdex score is calculated against this T-value. If load_time is your page load time, then:
- load_time ≤ T-value : user is satisfied
- T-value < load_time ≤ 4 × T-value : user is tolerating
- load_time > 4 × T-value : user is frustrated
So if our T-value is 2 seconds, 8 seconds will be our frustration threshold. We might have some users frustrated and still have an acceptable performance. Here is how APDEX score is calculated:
Apdex_t = (Satisfied Count + (Tolerating Count / 2)) / Total Samples
Scores over 0.75 are considered acceptable and scores over 0.95 good, according to the Apdex Alliance.
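As a quick sketch, the calculation above can be expressed in code (the sample counts are made up for illustration):

```java
// Apdex score from satisfied / tolerating / frustrated sample counts,
// following the formula above: (satisfied + tolerating / 2) / total samples.
public class Apdex {
    public static double score(int satisfied, int tolerating, int frustrated) {
        int totalSamples = satisfied + tolerating + frustrated;
        return (satisfied + tolerating / 2.0) / totalSamples;
    }

    public static void main(String[] args) {
        // 60 satisfied, 30 tolerating, 10 frustrated out of 100 samples:
        System.out.println(score(60, 30, 10)); // prints 0.75 -> barely acceptable
    }
}
```

With a T-value of 2 seconds, a sample with a 5-second load time would land in the “tolerating” bucket and count as half a satisfied sample.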
Your T-value might depend on different factors, including the network infrastructure of your country, your production environment hardware and historical data from previous applications.
Apdex is not “the” perfect method for benchmarking and measuring your performance, but I’ve found it pretty useful for creating a common ground with the client from a performance perspective.
Ask why: We all like the story of the 5 monkeys and the water hose, but sometimes habits have good reasoning behind them. These reasons might not be instantly clear. So before commenting on something, ask why. Maybe there is a valid reason behind it… maybe not. But by asking why, we listen to our client, and being a good listener is a good way to create rapport.
Show respect: Just as every person has a story, every project has a story. Respecting that story will not hurt. On the contrary, it will help our client develop trust and feel that they are on the journey together with us.
But to give the right answers, the first thing you need to do is ask the right questions. Sometimes, as consultants, we forget to ask questions.
Because we are already experienced with the process or technology, and have maybe seen the problem a couple of times before, we might think that we have the correct answers right away. Most probably, we are wrong.
Kaizen tells us to “go see, ask why, show respect”. Embracing this mindset is essential to adding value to your client as a consultant.
Frustration is part of our jobs. It is inevitable. If you are absolutely sure that you never get frustrated during office hours, I think you are a unique case.
We get frustrated for a long list of reasons, but the most significant culprit is being forced to repeat the same trivial thing, and watching people easily relapse, over and over again, from what had been agreed upon.
This relapse could be about anything…
You might need to tell your developers about the importance of unit testing again and again, or you might need to keep telling your QAs the importance of getting involved in story discussions from the very start. You might need to get into endless discussions about including testing time in your estimations, or about check-in etiquette.
I am sure you can create a longer list.
We mostly get frustrated more than we should because we think that getting frustrated is not written in our job description as a core responsibility.
Let me tell you: It is in your job description. Especially if you are a senior person.
You are expected to get frustrated by telling the same thing repeatedly and watching people slack, fret and relapse from what you have agreed upon. With no or limited complaints, you need to tell the same thing again and again…
You have to keep on explaining and doing the same thing… Until when? Well, I don’t know. But you need to…
You need to do so because that junior member of your team might just be gathering their courage to push back against the learned helplessness in the team. If they see you fretting once, they may lose that courage and never get vocal. This can be traumatic for that person, and broken courage can take a long time to repair.
Of course, avoiding frustration as much as you can should be your target; but as soon as you accept the fact that you are expected to get frustrated, you will start to see frustration not as an extra cause of suffering but as something your job demands. This kind of approach might have a lenitive effect.
Sometimes you need to do the same thing again and again to create engrams in the muscle memory of the team. It’s like learning how to play ping-pong. After some practice, your team will learn how to hit the ball. But until that point is reached, you might need to do the same thing again and again and again and again…
After that, it is in the muscle memory of your team. Enjoy!