Logging is important for understanding what is going on in your app. But without guidelines, it can be yet another place that leaks sensitive information.
- Do not log tokens
- Do not log passwords or usernames.
- Do not log any personally identifiable information (PII).
The best way to avoid this is to log what happened, not the data itself. For example (assuming a hypothetical SensitiveObject and a helper that returns only non-sensitive details):

SensitiveObject sensitiveObject = getSensitiveObject(); // hypothetical lookup
String s = sensitiveObject.thisInfoDoesNotHurt();       // safe, non-sensitive summary
logger.info("here is what happened but I am not leaking any sensitive info: " + s);
From an agile perspective, quality is essentially fitness for the intended use.
A high quality Agile delivery process:
- Enables the creation of value that the customer is willing to pay for.
- Eliminates failure demand.
- Helps stories move downstream without any context loss.
- Ensures the robustness of the product by creating test automation.
- Eliminates waste.
- Tests for the risks first, not for the coverage.
- Makes the risks visible.
It is not possible to achieve the goals above with a waterfall methodology. “Building quality in” by involving QAs at every stage of the SDLC and making the whole team responsible for testing will help us achieve them.
Quality levels are also defined by the context; this is called context-driven quality. The quality targets of airplane navigation software will probably be different from those of a mobile gaming app.
I’ve been on teams in the past where people saw stand-ups as a burden. I’ve heard various complaints:
“I wish I could do some real work in the time I spent in stand-ups”
“I don’t care what the team does, I have my isolated work to do”
“I can’t wake up in the mornings.“
First of all, stand-ups are not unique to software development. Toyota has been using a different kind of stand-up for years, and the years have proven that it works. These meetings are called Obeya meetings. Pete Abilla wrote a very good article on Obeya and how it improves your process.
“..Again, basic combinatorics teaches us that as the number of agents involved in a process increases, the communication links between those agents increases exponentially, thus allowing for a potentially Nx communication-link breakdown. To manage that, scheduled but quick Obeya meetings can help, as well as as-needed informal meetings between individuals and groups...”
So as your team grows, the number of communication links grows combinatorially: n people have n(n-1)/2 possible links. With stand-ups, you cover a lot of these links in a short time.
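The growth of communication links is easy to check for yourself. A minimal sketch (the class and method names are mine, purely for illustration):

```java
public class CommunicationLinks {

    // Pairwise communication links among n people: "n choose 2".
    static int links(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        // Roughly doubling the team size roughly quadruples the links.
        for (int n : new int[] {3, 5, 10, 20}) {
            System.out.println(n + " people -> " + links(n) + " links");
        }
    }
}
```

A team of 5 already has 10 links to cover; a team of 20 has 190, which is why a single short meeting that touches most of them is so efficient.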
Another myth about stand-ups is whether a team really needs them every day. A 20-minute stand-up consumes about 4.2% of an 8-hour day (20 of 480 minutes), which adds up to roughly a fifth of a day over a week. So the question of whether daily stand-ups are necessary is a valid one.
It is not easy to realize this immediately, but stand-ups actually free up your day: because of them, the number of other meetings you need decreases. So having them every day is eventually a big time-saver.
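The arithmetic above can be sanity-checked in a few lines (assuming an 8-hour, 480-minute working day; names are mine):

```java
public class StandupCost {

    // Fraction of the working day spent in the stand-up.
    static double dailyFraction(int standupMinutes, int workdayMinutes) {
        return (double) standupMinutes / workdayMinutes;
    }

    public static void main(String[] args) {
        double perDay = dailyFraction(20, 480); // ~0.042, i.e. ~4.2% of a day
        double perWeek = perDay * 5;            // ~0.21, roughly a fifth of a day
        System.out.printf("Per day: %.1f%%, per 5-day week: %.1f%% of a day%n",
                perDay * 100, perWeek * 100);
    }
}
```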
“Don’t get it, don’t make it, don’t send it” is a slogan that emphasizes the “quality first” practice in gemba kaizen. It was first formulated by Masaaki Imai, and you can read more about it in his book, Gemba Kaizen.
Though it was first formulated for production and manufacturing industries, like most lean principles, it can easily be applied to agile projects.
Having the QA team as a bottleneck is not uncommon in agile projects. Although there might be various reasons behind it (such as estimating stories without including the testing effort, tech debt, etc.), one of the biggest contributors is the “debt” that QAs inherit from upstream. Poor analysis upstream has a considerable impact on development, and a bigger impact on a downstream process like QA. The cost of a story’s lack of quality increases as the story propagates downstream.
Following the “don’t get it, don’t make it, don’t send it” principle will have a positive effect on the quality of your product and will decrease your delivery cycle.
Don’t get it: If you think the story you are getting from upstream (upstream is “analysis” for developers and “development” for QAs) does not have quality built in, do not get it.
Do not accept it.
If the story does not have clear and sufficient acceptance criteria to start development, do not pick it up for development. If the unit tests are not properly implemented, do not pick it up for functional testing.
Don’t make it: Remember, “quality first”. Always. Do not sacrifice quality for the sake of cost or delivery. Keep in mind that delivering a product that does not meet its quality requirements makes no sense. Also remember the cost of the lack of quality: if you sacrifice quality and decrease the first, short-term, visible cost, are you really decreasing your cost, or are you increasing it in total?
Don’t send it: Do not send a batch of work downstream (i.e. from analysis to development, or from development to QA) if you think you have not built the quality in. If your downstream is starving for work, that is a symptom of failed planning and bad process management. Do not rush your work downstream just to feed it; instead of solving the starvation problem, that will cause more problems. Learn your lesson and focus on the root cause. A starving downstream process is not a root cause but a symptom of problems in your overall process management.
To follow these practices you can use some tools or create some guidelines. Having hand-overs between processes might help create awareness when you first start. Try to keep these hand-overs lightweight so that they do not alienate people.
But also, keep in mind that you need to create some standards.
No standards, no improvement.
I am not a believer in pure coaching or pure delivery projects.
All coaching projects have some delivery in them (i.e. leading by example) and all delivery projects have some coaching in them.
I think coaching your client is not even optional in delivery projects; it is something you should do to make both your life and your client’s easier.
Imagine that you are delivering your stories at the velocity you are expected to, but your bottleneck is the client’s acceptance of those stories: somehow, your client cannot accept stories as fast as you deliver them.
You can do two things:
- You can say: it is my client’s internal process, and I am only responsible for the delivery of the product. Their broken story-acceptance process is not my concern.
- Or you can say: their broken process hurts our whole pipeline. Because they wait some time to accept stories after our delivery, they come up with change requests that are more expensive to fix than they would have been right after implementation. If we help them improve their process, both of us will have fewer problems.
Believe me, life will be easier. Solving upstream problems will not solve all your downstream problems but it will help a lot.
For high-volume, high-traffic websites, I think talking about thresholds makes more sense than talking about acceptable response times.
Including the project I am on now, I think one of the most painful parts of NFT testing is getting benchmark numbers from the business. Especially if the client has never had a structured approach to performance testing, the best you can usually hear from them is “We don’t want our users to get frustrated, that is what we expect from a performance perspective”.
There is no agreed-upon industry standard for response times, but there is an industry standard for calculating response-time performance: APDEX is widely used, and most monitoring tools (including New Relic) have built-in APDEX support.
Basically, a “T-value” is defined and your APDEX score is calculated against it. If load_time is your page load time, then:
- load_time <= T-value : user is satisfied
- T-value < load_time <= 4*(T-value) : user is tolerating
- load_time > 4*(T-value) : user is frustrated
So if our T-value is 2 seconds, 8 seconds will be our frustration threshold. We might have some users frustrated and still have an acceptable performance. Here is how APDEX score is calculated:
Apdex_T = (Satisfied Count + (Tolerating Count / 2)) / Total Samples
Scores over 0.75 are considered acceptable, and scores over 0.95 are considered good, according to the APDEX alliance.
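The score above is straightforward to compute. A minimal sketch (class and method names are mine, load times in seconds; satisfied samples count fully, tolerating samples count half, frustrated samples count zero):

```java
import java.util.List;

public class ApdexExample {

    // Apdex score for a given T-value over a list of page load times.
    static double apdex(double tValue, List<Double> loadTimes) {
        int satisfied = 0;
        int tolerating = 0;
        for (double lt : loadTimes) {
            if (lt <= tValue) {
                satisfied++;
            } else if (lt <= 4 * tValue) {
                tolerating++;
            }
            // lt > 4 * tValue: frustrated, contributes nothing to the score
        }
        return (satisfied + tolerating / 2.0) / loadTimes.size();
    }

    public static void main(String[] args) {
        // T-value of 2s: two satisfied, one tolerating, one frustrated sample.
        List<Double> samples = List.of(1.2, 1.8, 3.5, 9.0);
        System.out.println(apdex(2.0, samples)); // (2 + 0.5) / 4 = 0.625
    }
}
```

Note how a single frustrated sample drags the score down: with the samples above, the score of 0.625 is already below the 0.75 “acceptable” line.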
Your T-value may depend on several factors, including the infrastructure of your country, your production environment hardware, and historical data from previous applications.
APDEX is not “the” perfect method to benchmark or measure your performance, but I’ve found it pretty useful for creating a common ground with the client on performance.
Ask why: We all like the story of the five monkeys and the water hose, but sometimes habits have good reasoning behind them, and those reasons might not be immediately clear. So before commenting on something, ask why. Maybe there is a valid reason behind it… maybe not. But by asking why, we listen to our client, and being a good listener is a good way to build rapport.
Show respect: Just as every person has a story, every project has a story. Respecting that story will not hurt. On the contrary, it will help our clients develop trust and feel that they are on the journey together with us.