How to Block PR Merges Until QA Approves in Jira

QA Gate · GitHub · GitLab · Bitbucket · CI/CD

We kept merging bugs

I'll be honest, this whole project started because we were embarrassed. We had 90% unit test coverage, a solid CI pipeline, green checks across the board. And we still shipped a checkout page where the "Pay Now" button didn't do anything on Safari.

A two-minute manual test would've caught it. But nobody ran one, because there was no process for it. The PR had approvals, tests passed, someone hit merge. That was Friday. We found out Monday morning from a customer.

So we asked ourselves: what if the PR literally could not be merged until someone actually clicked through the feature?

What we mean by "QA gate"

It's simpler than it sounds. A QA gate is just another required check on your PR, like your build check or your linting check, except this one stays yellow until a human being says "yes, I tested this, it works."

If they say it failed, the check goes red and you can't merge. Same as a failing test suite, except the "test suite" is a person with eyeballs looking at the actual feature.

This isn't code review, by the way. Code review answers "is this well-written?" QA answers "does this actually work when I use it?"

How we wired this up

We built TrueStory partly because nothing else did this cleanly. Here's the actual flow we use every day:

A dev opens a PR with ticket keys in the title, something like SHOP-142 SHOP-143: Checkout flow. Most teams already do this for Jira linking, so it's not a new habit.

TrueStory picks up the webhook from GitHub (or GitLab, or Bitbucket, all three). It parses those ticket keys out of the title, looks up all the test cases we've linked to those tickets in Jira, and bundles them into a testing session. At the same time, it creates a pending check on the PR.
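That parsing step is simpler than it sounds. Here's a minimal sketch of what "parse the ticket keys out of the title" means in practice (hypothetical Python; TrueStory's actual implementation isn't public):

```python
import re

# Jira issue keys look like PROJECT-123: an uppercase project key,
# a hyphen, and a numeric issue ID.
TICKET_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def extract_ticket_keys(pr_title: str) -> list[str]:
    """Return the Jira ticket keys found in a PR title, in order, de-duplicated."""
    seen: dict[str, None] = {}
    for key in TICKET_KEY.findall(pr_title):
        seen.setdefault(key)
    return list(seen)
```

So a title like SHOP-142 SHOP-143: Checkout flow yields the keys SHOP-142 and SHOP-143, and each key's linked test cases get pulled into the session.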

The tester gets an email, opens the session in Jira, clicks through each test case, and marks them pass or fail. When they hit "Complete," TrueStory pushes the result back to the Git provider. Green check if everything passed. Red if something failed.

That's it. The developer doesn't need to do anything different. The tester doesn't need to learn a new tool. It's all inside Jira. And the merge button stays disabled until QA gives the thumbs up.

The polling problem we refused to have

Before we built the push-based approach, we prototyped with a polling workflow. It was awful. A GitHub Actions job would sit there in a loop, hitting our API every 30 seconds: "Is QA done yet? Is QA done yet? Is QA done yet?"

That loop ran for as long as QA took. Sometimes 20 minutes. Sometimes 2 hours if the tester went to lunch. At GitHub Actions' per-minute billing rates, we were burning real money on a runner that did nothing but wait.
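For the record, the prototype boiled down to a loop like this (a hypothetical sketch; poll_qa_status stands in for the API call the Actions job was making):

```python
import time

def wait_for_qa(poll_qa_status, interval_seconds=30, sleep=time.sleep):
    """Block until QA finishes, polling on a fixed interval.

    This is the pattern we abandoned: the loop occupies a CI runner
    (and bills for it) the entire time a human is testing.
    """
    attempts = 0
    while True:
        attempts += 1
        status = poll_qa_status()  # "pending", "passed", or "failed"
        if status != "pending":
            return status, attempts
        sleep(interval_seconds)
```

At a 30-second interval, a 2-hour QA session means 240 polls and 2 hours of billed runner time, all spent doing nothing.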

So we ripped it out. TrueStory is push-based now. Your CI finishes in about 2 minutes (deploy the preview, notify TrueStory, done). Later (could be 10 minutes, could be 3 hours) when QA finishes, we push the status directly to GitHub's Check Runs API. Or GitLab's Commit Status API. Or Bitbucket's Build Status API.
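The push side maps onto a single API call. This sketch shows the shape for GitHub's Check Runs endpoint (POST /repos/{owner}/{repo}/check-runs); the function names are illustrative, and note that real check runs can only be created with GitHub App credentials, not a personal access token:

```python
import json
import urllib.request

def build_check_run(head_sha: str, all_passed: bool) -> dict:
    """Build the Check Runs payload for a finished QA session."""
    return {
        "name": "TrueStory QA Gate",  # must match the required check's name
        "head_sha": head_sha,
        "status": "completed",
        "conclusion": "success" if all_passed else "failure",
    }

def push_check_run(owner: str, repo: str, token: str, payload: dict) -> None:
    """POST the check run to GitHub (requires GitHub App credentials)."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/check-runs",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
```

GitLab's Commit Status API and Bitbucket's Build Status API take the same basic shape: a commit SHA, a state, and a name, pushed once when the result is known.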

Zero runner minutes wasted. Your CI bill doesn't care how long QA takes.

Setting this up takes about 5 minutes

Not an exaggeration. Here are the actual steps:

  1. Install TrueStory from the Atlassian Marketplace
  2. Open Project Settings in Jira, go to CI/CD Integration
  3. Click "Connect" next to GitHub (or GitLab, or Bitbucket)
  4. OAuth pop-up, authorize, pick your repos
  5. Done

For GitHub specifically, you'll also want to go to your repo's branch protection settings and add "TrueStory QA Gate" as a required check. That's what actually prevents the merge.
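If you'd rather script that last step, GitHub's REST API exposes branch protection too. This is a hedged sketch of the call (PATCH /repos/{owner}/{repo}/branches/{branch}/protection/required_status_checks); the branch must already have protection enabled, and the contexts field, while still accepted, has a newer checks-based alternative in GitHub's docs:

```python
import json
import urllib.request

def qa_gate_protection(check_name: str = "TrueStory QA Gate") -> dict:
    """Payload that adds a named check to a branch's required status checks."""
    return {
        "strict": True,            # branch must be up to date with base before merge
        "contexts": [check_name],  # check names that block the merge button
    }

def require_qa_gate(owner: str, repo: str, branch: str, token: str) -> None:
    """Make the QA gate a required check on the given branch."""
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/branches/{branch}/protection/required_status_checks")
    req = urllib.request.Request(
        url,
        data=json.dumps(qa_gate_protection()).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="PATCH",
    )
    urllib.request.urlopen(req)
```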

After that, every PR that mentions a Jira ticket key in its title will automatically get a QA gate.

When this is overkill

I don't think every team needs this, and I don't think every PR needs it. If you're updating a README, changing an env variable, or refactoring internal code that doesn't affect the UI, you probably don't need a human to click around before merging.

Where it really pays off is user-facing features. Anything where the end result is something a person will look at and interact with. Login flows, payment pages, dashboards, onboarding wizards. The stuff where "it passes tests" and "it actually works" are different statements.

We use it on every PR that touches our frontend. Backend-only changes get a pass.

The real benefit isn't catching bugs

It's accountability. When every session is logged (who tested, what they tested, when, what passed, what failed), you stop having that meeting where everyone looks at each other and says "I thought you tested that."

You also get an audit trail, which matters if your company cares about compliance or if you just want to know what happened after something breaks in production.

Anyway, if you want to try it, it's on the Atlassian Marketplace. Free tier available. Setup instructions are in the docs.