Ok, let's talk about testing.
Recently, I had lunch with a QA manager. We spent the lion's share of our time commiserating over the pathetic state of quality control testing.
The root of the issue is this "magic" idea that shifting left will solve everyone's problems.
The promise of shift-left practitioners is fewer bugs, better client buy-in, and (most importantly) lower costs. Having sat on calls with companies that sell shift-left training, I can tell you that half of their pitch is "significant" cost savings.
My friend took me through a recent quality-of-life update for their software. The list of bugs submitted exceeded 200 individual requests. Of those requests:
48 were tickets duplicating other tickets
31 were recommendations for future improvements
61 were questions regarding functionality
5 were legitimate bugs (submitted by the QA on the project)
The rest didn't include enough information to be of any use.
The uselessness of the data being captured in the issue tracker (linked to Jira automatically) piqued my interest, so I decided to reach out to QA professionals for their feedback. Overwhelmingly, they reported that shift-left has slowed down testing and made things overly complicated. Many also mentioned having to test and verify that submitted tickets were legitimate bugs, which they frequently were not.
What's fucked up about it is that a Google search for "shift-left criticism" or "shift-left sucks" returns nearly three full pages of results saying how great it is, and (of course) offering training courses so you too can implement this magic solution. It isn't until you near the bottom of page three that you find a result that's actually critical of it.

Shift-Left is a Specific Solution to a General Problem
The problem... testing takes time.
The recommendation frequently thrown around is that testing should be 40% of your development process. This is, for the most part, accurate in my experience.
The problem is that testing takes time, and time is money. But there is no single, specific reason why testing takes so much time, other than to say that it does.
A specific solution to a general problem isn't going to solve the problem; it's only going to introduce new ones.
Now, I'm not here to shit on shift-left; we're already taking on big Agile, and the last thing I need is to piss off the shift-left people too.
What is there to do then?
First off, stop being stupid, stupid.
One of the things I want to stress is that there is no such thing as a fix-all, full-fucking-stop.
So let's discuss ways to improve your QA/testing team.
Stop Trying to Rush Shit
In a round table discussion I took part in, an IT manager asked how we could cut QA time down by 40%. This was during a time when legacy code was being updated, and a brand new CRM was being developed. The QAs were working 60+ hours a week. So asking them to cut the work nearly in half was a serious fucking insult.

The answer was to perform fewer tests. He didn't like that. He demanded the same quality with half the effort. His attitude came straight from the shift-left practitioner he had previously worked with. The service had claimed it could cut testing time by as much as 50%. Is it the fault of the sales pitch? Well, yes.
You see, by promising a cut of up to 50%, that becomes the goal. It encourages the business to hit that metric at all costs. Plus, it's a goal without a constraint. Hit the 50%, period, even if that means you have to skip tests (just don't bring it to anyone's attention).
A better goal is something achievable: "We want to cut testing time by 10% over the next three months without sacrificing quality." See how fucking easy that is?
But, no. They led off with a reduction of 50%, which puts the QA analyst in a position where they feel the need to prioritize the testing, creating a hierarchy where tests get skipped. Testers need reassurance that they won't be forced into that position, possibly okaying a shitty update because shift-left is supposed to save time.
You know what can save time without sacrificing the quality of testing? Automation. This, however, costs money, which goes against the whole point of shifting left: saved costs in the form of saved time. Who wants to pay a test automation specialist when we can just... shift left?
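To make the automation point concrete, here's a minimal sketch of the kind of check that runs in seconds on every build instead of eating a tester's afternoon. Everything here is hypothetical (the staging URL and the /api/tickets endpoint are stand-ins, not anything from my friend's project); it's just pytest and requests doing a simple round-trip check.

```python
# A minimal automated smoke test: a sketch, not a full suite.
# BASE_URL and the /api/tickets endpoint are hypothetical stand-ins.
import requests

BASE_URL = "https://staging.example.com"

def test_ticket_submission_roundtrip():
    # Submit a ticket, then verify it can be read back intact.
    payload = {"title": "smoke test", "category": "bug"}
    created = requests.post(f"{BASE_URL}/api/tickets", json=payload, timeout=10)
    assert created.status_code == 201

    ticket_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/api/tickets/{ticket_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["title"] == "smoke test"
```

Write a few dozen of these once, run them on every build forever. That's the investment part.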
Testing isn't an Expense, It's an Investment
First of all, we need to change our thinking when we approach testing within our projects.
Testing is an investment we make to prevent future expenses or loss of income. It shouldn't be treated as a precaution against something that might never happen.
Think of it like running an analysis to decide whether to hurricane-proof your new construction. Why would you hurricane-proof the building if you don't think it'll be hit by a hurricane before you recoup your investment? That's exactly how many companies see QA testing: proofing against something that probably won't happen. Framed that way, testing will always look like a bad investment.
So, are we saying that there's no reason to try and save on costs as part of testing? No, of course not. There are ways to cut down on expenses and make your QA team better at what they do. You can allow them to test fully while meeting those cost reductions you so desperately seek.
Introduce Targeted Responses
What do I mean?
Instead of having one big bucket that everyone reports to, have multiple buckets that capture what the feedback specifically deals with. When I was talking through the various "bugs" and other "issues" that were submitted for this quality-of-life upgrade, I was dumbstruck by how many were related to a future update and/or a problem with verbiage.
Does QA need to review these tickets? No. Does Jira automatically create a ticket that QA then has to review anyway? Yes.
The company my friend works for has a submission form that they use. It runs on Google Forms and uses some plug-ins to automatically generate a Jira ticket once the issue is submitted. Why then, I asked, can't you just set up additional routes that redirect non-bug issues elsewhere? For example, why can't verbiage and future enhancements go to the UX/CX boards? Why can't questions about functionality be directed to a BA, so that no one has to review a question that should have already been answered?
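For the curious, the routing logic really is that simple. Here's a hedged sketch (the category names, project keys, and credentials are all hypothetical; the POST goes to Jira's standard REST issue-creation endpoint):

```python
# Sketch: route form submissions to the right Jira project by category.
# Project keys, categories, and credentials are hypothetical placeholders.
import requests

JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"
AUTH = ("bot@example.com", "api-token")  # placeholder credentials

# Non-bug feedback never lands on the QA board.
CATEGORY_TO_PROJECT = {
    "bug": "QA",
    "verbiage": "UX",
    "future enhancement": "UX",
    "functionality question": "BA",
}

def route_submission(category: str, summary: str, description: str) -> None:
    project_key = CATEGORY_TO_PROJECT.get(category, "QA")  # unknowns go to QA triage
    issue = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }
    response = requests.post(JIRA_URL, json=issue, auth=AUTH, timeout=10)
    response.raise_for_status()
```

One lookup table, and QA only ever sees bugs.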
The first thing that needs to be done to unfuck your QA testing is to ensure that the people who review the bugs are reviewing things that are within their purview.

Introduce Targeted Tests
The days of telling user testers to just do what they always do and tell us what they think are over.
Much of what we discuss here at Astutely Obtuse deals with being specific in your dealings and avoiding any ambiguity that might introduce itself as part of your process. Testing should be no different.
I know the idea of creating hundreds of use cases to pass out to various user testers, covering every possible problem that might come up, sounds exhausting, but I'm here to tell you that it is a lot easier than you might think.
There are three groups you will need in order to streamline your approach to targeted testing.
First, customer/user experience testing. CX/UX testers should focus only on things like wording, continuity between experiences, and spotting candidates for future enhancements. Their feedback would be reviewed by a UX team.
Second, workflow testing. Workflow testing should entail testing the workflows (no fucking shit), both typical and atypical. This should also include some superficial edge cases. Creating testing documents for this team will take up a lot of time, but remember, much of the work is simply copying and pasting anything that is duplicated across multiple documents (see the sketch after this list). It seems daunting, but it gets easier with time.
Third, the Professor and Mary Anne testers. That is to say, "and the rest." PaMA testers should be staffed by only the most competent testers, those who have proven themselves worthy of being called "the rest." They are the ones who fill in the gaps between the other testing being done. They are not QA testers, but they flirt with the role a bit. They shouldn't be digging into anything technical from a technical point of view, but they should be giving those things a superficial once-over.
The reason the PaMA testers should only be the most competent is that you are trusting them to work without detailed direction. Why not just have everyone test this way? Well, the first two groups are testing specific aspects, the important stuff that will be a common occurrence or will directly affect the customer experience. The PaMAs, therefore, can focus on the less important stuff that still might affect the customer experience, which is why they have carte blanche to test whatever the fuck they think they should test. These are the fuckers who have war stories about the one time they discovered a bug that caused a bunny to die in Southern Oregon every time they searched for I.P. Freely in the CRM.
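As promised, the copy-and-paste sketch. This is only an illustration of how workflow test documents can share material (the workflows and steps here are invented, not from any real project):

```python
# Sketch: reusable blocks for workflow test documents.
# Workflow names and steps are hypothetical examples.
LOGIN_STEPS = [
    "Navigate to the login page",
    "Enter valid credentials",
    "Verify the dashboard loads",
]

# Each workflow document reuses the shared block; only the
# workflow-specific steps need to be written fresh.
WORKFLOW_TESTS = {
    "create customer": LOGIN_STEPS + [
        "Open the Customers tab",
        "Create a new customer with all required fields",
        "Verify the new record appears in search",
    ],
    "escalate ticket": LOGIN_STEPS + [
        "Open an existing support ticket",
        "Escalate it to tier 2",
        "Verify the assignee and status update",
    ],
}
```

Build the shared blocks once, and every new workflow document is mostly assembly.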
Then, obviously, you have the QA team, doing their own series of tests. With all of these people doing their own thing, how do you keep everyone in alignment?
Introduce Testing Visibility
In the process of our stitch and bitch, we talked about our experiences with testing visibility. In my last role as a QA specialist, we had zero visibility. Tickets went into a test swimlane and eventually came out. From the outside, QA was a mysterious box: tickets went in, bugs came out.
It's the same for his organization. While testing, no one can see what new issues have been created, or what has been solved. This is probably why about 25% of the tickets that came in were duplicates of each other.
There are a number of tools that can be used to make testing visible to those outside the testing team. What we want to capitalize on are the targeted testing and targeted responses we just set up.
While it is true that we can build graphs, charts, dashboards, and bubbling cauldrons whose smoke foretells cost overruns, what we want is a way to communicate what work has been submitted or is in progress without being overwhelming. A wall of text that links to all 200+ tickets is not going to be very helpful. Again, the point is to not be over-fucking-whelming.
Let me introduce you to the test maestro/maestra. To orchestrate all the incoming test results, you need someone managing the flow. In my comrade's case, he'll be leveraging the product owner on the project, primarily because they have the greatest insight into what's needed; however, any team member with the time can handle this role.
The Maestro(a) should take the targeted responses and distill them down to bite-size, consumable info: a two-to-five-word phrase describing what was reported, categorized so that submitters can see whether their ticket is a duplicate. The key is to be short, concise, and easy to consume... fucking short... fucking concise... and fucking easy to consume.
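If the maestro(a) wants help, this part is trivially scriptable. A minimal sketch (the ticket fields and categories are made up; swap in however your tracker actually exports data):

```python
# Sketch: distill raw tickets into a short, categorized digest.
# Ticket structure and categories are hypothetical examples.
from collections import defaultdict

tickets = [
    {"id": "QA-101", "category": "IVR routing", "summary": "chatbot misroutes name searches"},
    {"id": "QA-102", "category": "IVR routing", "summary": "chatbot misroutes name searches"},
    {"id": "QA-103", "category": "verbiage", "summary": "typo on checkout page"},
]

def build_digest(tickets):
    """Group tickets by category and summary so duplicates are obvious."""
    groups = defaultdict(list)
    for t in tickets:
        groups[(t["category"], t["summary"])].append(t["id"])
    lines = []
    for (category, summary), ids in sorted(groups.items()):
        dup_note = f" (x{len(ids)}, duplicates!)" if len(ids) > 1 else ""
        lines.append(f"[{category}] {summary}{dup_note}")
    return "\n".join(lines)

print(build_digest(tickets))
```

Short, categorized lines; anyone scanning the digest can spot their issue before filing it again.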
The second part of the Maestra(o)'s job should be communicating when things are being duplicated a lot. For example, they should send out communications letting the testers know they can stop reporting that "the new chatbot IVR sends everyone to customer support when they ask to connect with Sun Hardick." It would also be helpful to let testers know what is being done, as in, "We've warned Dr. Hardick that she might see some issues having calls routed to her office, but it's being looked into."
Lastly, the Maestro(a) should send out a daily communiqué to let all the testers and stakeholders know where we stand with testing.
I want to emphasize that this role sounds daunting, but it really isn't. It should take an hour a day for a regular project, at the absolute most. As testing progresses, it becomes easier. Further, as people get used to the system, the maestro(a)'s job gets significantly easier.
Only Bring in Testers When They Have Something to Fucking Test
My QA friend sent me the infographic they used to sell his boss on shift left. The claim is that testing only begins after the work has been implemented. Yeah... these fuckers honestly argued that shit was built, then pushed to production, and then tested. Of course, this set off alarms in the senior management team. Oh, no, why are we waiting to test things until after they've caused problems?!? It's ridiculous, absolutely ridiculous.
The argument behind this damn infographic was to encourage user testing (not QA testing) at every small step in the process. Every ticket that gets merged gets a user test to verify it meets the user's standards before it's ever regression tested. My problem with this is that there are way too many tickets that can't be tested by someone who is just an everyday user. Like building out an API endpoint. How is a user going to test that? Plenty of tickets that pass a PR live in the black box, unseen by the user, so why bring users in to test them?
To counter this, one of our beta readers asked if we had ever heard of Shift-Right. While we couldn't find a lot of info on it, he sent us some material, including a sales sheet touting the benefits of Shift-Right. The documentation came to him through his master's program, where he is studying Supply Chain Management.
So, where shift-left was about moving people left (earlier in the process), shift-right is about moving the entire process right, thereby moving people left. In software engineering, this would make the QA team responsible for approving pull requests; dump regression, stress, edge, corner, you-sunk-my-battleship, smoke, functional/performance, and load testing on the shoulders of the users; and have acceptance and use-case testing occur with all the users in production. The argument is that it frees up the developers to do more development or something, I don't know.
These are both really fucking stupid.

You don't want people performing tests that they aren't comfortable with or knowledgeable enough to perform. There is a trap in both shifting left and shifting right: you might be performing the tests faster, but are they of the same quality they were before? Probably not.
There is a benefit to bringing in testers earlier, although never in my entire career have I heard of testing occurring only after everything has been implemented and pushed to a production environment.
There are things that can be tested as they are completed, but only by a QA who is familiar enough with the technology to perform those tests. Bring them in to test tickets as they're PR'd and merged. But don't push to bring in a user to test this shit just because you want that promised buy-in from the client.
When do you bring them in? When they have something they can fucking test, dipshit! It's hard enough to wrangle user testers without asking them to test things they can't fucking test.
The Never-Ending Pursuit to Ignore Testing Entirely
This is going to really piss off a lot of people, but here it goes.
Developers, specifically, those with a ton of experience and degrees out of their pee holes are fucking idiots, who are wrong about everything, and anyone with more than two brain cells to rub together shouldn't even give any goddamn thought to listening to these self-aggrandizing pieces of shit.
Wow... that was a bit much.
As someone with an undergrad in computer and information sciences, focusing on technology and enterprise application integrations, it really does pain me to say it, but here goes. The push from many developers in the tech industry to let devs do all of their own testing, and as such, push QA entirely out of the development process, is dumber than shift-left and shift-right combined. And anyone who has worked with dev teams without being a developer themselves knows this to be true. How do I know? Five... fucking... words.
It worked on my machine
But... but... we can train the devs to be better testers!
Bullshit. Any software engineer worth the shitty beer in their desktop fridge will tell you that devs should be testing their work before submitting a pull request anyway. If this is an expected practice, why does it need to be said? Why isn't it being done already? Why is "oh, it worked when I ran it locally, so there's something wrong with your computer" a statement common enough that it shows up near hourly on r/programmerhumor?
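If you genuinely don't trust it to happen (you shouldn't), enforce it with tooling instead of promises. A minimal sketch of a git pre-push hook, assuming a pytest suite (save it as .git/hooks/pre-push and make it executable; adapt the command to your stack):

```python
#!/usr/bin/env python3
# Sketch: a pre-push hook that refuses to push when tests fail.
# Assumes the project uses pytest; swap in your own test command.
import subprocess
import sys

result = subprocess.run(["pytest", "--quiet"])
if result.returncode != 0:
    print("Tests failed; push rejected. 'It worked on my machine' is not a test plan.")
    sys.exit(1)
```

Note that this still doesn't replace QA; it just makes "I tested it locally" slightly less of a lie.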
Well... we'll just make it part of their job!
Did you not fucking read the previous paragraph, imaginary combatant? Further, you're just pushing the testing around. You don't want to pay a QA specialist to perform testing, but you'll pay a developer to do it when they are already under significant pressure to push out shitty code?
The real reason so many IT and software managers want to push developers to do their own testing is that they know the devs won't do it. Testing takes time, and when bugs are found, they create more work and make the dev team look bad. Any bugs that go out become a new project, and not in any way a reflection on the shitty work that was just pushed out.
It takes a page out of the flawed idea that moving fast and breaking things is a healthy dev process. Software is the only line of business that encourages people to put out a broken, terrible product that could potentially drive away customers, and argues that doing so is a good thing.
Former contributor Christina offers this analogy: In professional kitchens, there is a very good reason whoever is running the pass/expediting verifies the quality of every dish being plated. It's because chefs de partie can't be trusted to do their job consistently, and as the head chef or sous chef, it's your responsibility to maintain the quality of the restaurant's brand.
Beyond Just Testing
We've talked about how testing is a worthwhile investment. We've talked about the need to get client buy-in, and when to bring clients into the fold. We've discussed bringing testing out of the dank basement and onto the kitchen table where it can be seen by all. We even threw some shade at developers, which will probably lose me a number of friends and get my undergrad degree revoked.
What I want to conclude on, however, touches on what our good friend shared. I want to talk about quality.
The purpose of testing shouldn't be only finding bugs or getting buy-in from the client. It should be about maintaining the level of quality your organization has a reputation for. I would go further than that, however: I would insist that QA be empowered to hold projects to higher quality standards. They should be a force that drives excellence on your team.
Yeah, finding bugs sucks; it's more work, and it makes the dev look bad. Instead of allowing it to frustrate you, use it as an opportunity to learn. Don't punish your devs because a QA found a couple of bugs; instead, incentivize them to learn from it. Things can always be better, and those who are testing should be seen as facilitators of that excellence.
Most importantly, dear reader, don't join the never-ending pursuit to eliminate testing; embrace testing, and all the annoyances it brings.