
Raising Bug Cards

The aim of this document is to provide a set of standards for defining bug cards at Audacia, so that issues can be communicated clearly and work can be tracked effectively. There may be other standards for specific clients or third parties; because these are project specific, they should live in the project wiki.

People from all job families (Software Engineers, Test Engineers & Delivery) should be empowered to raise bugs for projects they are actively working on - this is not the sole responsibility of the project's tester(s). Other stakeholders should confirm the intended behaviour with the development team, who will raise bugs on their behalf.

Definition

A software bug is an error, flaw or fault in the design, development, or operation of software that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.

As a result, anybody can observe a bug and should use this guide to report it.

Raising Bugs - Card Structure

Bug cards should describe a single issue.

Title

The title of a bug must contain a concise statement of the expected behaviour that has not been met. This ensures stakeholders looking at the board or backlog can identify the functionality achieved by completing the card.

✔️ Users who have failed to pay must be shown “Payment failed, please try again” regardless of the failure reason(s)

❌ The “Payment failed” messages are inconsistent

If the card from which the bug is being raised has numbered Acceptance Criteria, include the AC reference at the end of the title:

Users who have failed to pay must be shown “Payment failed, please try again” regardless of the failure reason(s) (AC 3.a.ii, AC 7.h.i)

If the bug has been observed in a specific environment (i.e. the card has not just been deployed to test/QA), include the environment at the start of the title:

UAT - Admin users must be shown a summary of payment failure reasons that updates hourly (AC 8.c)

Repro Steps

Within the Repro Steps section of the bug card, document the steps taken to reproduce the observed issue as a numbered list.

  1. Start with a step describing how to navigate to the page or area under test; this indicates where the bug has been observed.
  2. Use as many steps as needed to navigate to the part of the system where the bug is present - this is especially important if there are multiple routes through the application.

Still within Repro Steps, add the heading “Observed Behaviour” and detail what happened after following the steps above, including any error messages or logs. If possible, include relevant screenshots, videos, or a unique data ID to help explain what is happening.

Listing the observed/actual behaviour first means it directly follows the steps to reproduce.

Following this, add the heading “Expected Behaviour” and detail the desired behaviour, as per the original Acceptance Criteria. Use the same notation as above to reference the AC, e.g. AC 3.a.ii. If required, add a sub-point to document why this is the expected behaviour.

The observed behaviour and expected behaviour should mirror each other.

If the bug has both customer and technical implications, the bug card should document the customer's observed and expected actions. A technical note may be added, but be aware that asserting the root cause of an issue may send the developer down an incorrect path of enquiry.

Example Steps to Reproduce

  1. Log into Olympus QA as a project admin
  2. From the navigation menu, select reports -> timesheets summary
  3. Select the filter icon 🔻
  4. Select missing timesheets from the filter

Observed Behaviour

  1. A request is sent to the API to fetch timesheet data and this has a 200 status
  2. All users are returned (users who are up to date on their timesheets and those with missing timesheets)

Expected Behaviour

  1. A request is sent to the API including the timesheetMissing property and this has a 200 status
  2. The timesheet summary is filtered to only show users who have missing timesheets (AC 3.h)
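
The sketch below makes the difference between the two requests concrete. It is illustrative only: the endpoint path and response handling are assumptions rather than the project's actual API; only the timesheetMissing property comes from the acceptance criteria above.

    // Illustrative TypeScript sketch - the endpoint path is assumed for this example.
    // Observed: selecting the filter does not change the request, so every user is returned.
    const observed = await fetch('/api/reports/timesheet-summary');

    // Expected: the request includes the timesheetMissing property, so only users with
    // missing timesheets are returned (AC 3.h).
    const expected = await fetch('/api/reports/timesheet-summary?timesheetMissing=true');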

Acceptance Criteria

If the bug's observed behaviour contradicts a card's Acceptance Criteria, the Acceptance Criteria should be copied from the original card.

If the desired behaviour has not yet been defined, the bug card's Acceptance Criteria should be left blank. The exceptions to this rule are obvious cases, e.g. where a typo appears in some text on a page and the correct spelling can be applied. Where the expected behaviour of a bug card is as yet undefined, the Scrum Master is responsible for adding it following their discussions with the client.

Acceptance criteria should be written in such a way that they can be understood by a project stakeholder who has not read the steps to reproduce. Please see the separate guide on Writing Acceptance Criteria for more information on this.

Tags

Where the Acceptance Criteria have not been signed off, the bug should be tagged with AC not signed off and the Scrum Master should review the AC, which may involve seeking clarity from the Product Owner.

Linked Cards

Any card which contains Acceptance Criteria relevant to the bug card being raised should be linked, so that all relevant information is attached. This can be achieved whether the project manages bugs as children or as related cards, as a link between the Product Backlog Item and the bug card is established either way.

In the case that a Product Backlog Item undergoing testing has directly resulted in a bug card being created, the PBI card should be moved to QA Failed.

Discussion

Any agreements relating to the card's functionality should be captured in the discussion section, so that it can be used as a single point of truth. Maintaining this audit trail allows people to understand the rationale behind past decisions when reviewing the card.

After raising a bug, if the acceptance criteria are not defined or well understood, it is best practice to tag the project's Scrum Master. This ensures the card's AC are added and signed off with the client, the bug is added to the appropriate sprints/releases, and visibility of the PBI's progress is maintained.

"Environment Found In" and "Root Cause"

Assuming these fields are being used for the project in question, they should be populated.

Concessioned Bugs

The term concessioned bugs refers to bug cards where it is agreed that a fix will not be actioned. More generally, a concession is an agreement to allow something, especially in order to end an argument or conflict. Concessioned bugs may exist for any number of reasons, including budget or time constraints, or because the functionality described is an accepted side-effect of the system under test.

Management

Provided all bugs are raised following the standards defined above, the project backlog documents all known bugs.

If a bug is marked as “Removed”, a comment should be added to explain why.

If it is determined that a bug card will not be actioned in the current phase:

  1. Move the card into an epic representing a future phase of work or an epic called Archived, as appropriate.
  2. Add a comment in the discussion to indicate the reason the card is not in-scope, ideally tagging the client who made this decision.

It is important to maintain these cards, rather than deleting them, to ensure:

  • All bugs that won’t be fixed can be seen in one place i.e. there is a single source of truth for agreed project functionality.
  • There is not a duplication of effort e.g. if the same issue is reported again.

Depending on the project, it may be appropriate to create a page in the project wiki that details all known signed-off bugs and project intricacies so that a wider description of outstanding issues can be maintained. Wiki pages can also embed query results, such as all bug cards in ‘Phase 2’. This page contains a guide on how to do this. Wiki pages can be exported as PDFs and shared with a wider group of user acceptance testers, who may not have access to the project board. However, as the project grows this method of documentation does not scale.
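
For example, assuming the project wiki is hosted in Azure DevOps (the embed syntax below is that platform's query-results feature, and the query ID is a placeholder for a saved query returning all bug cards in ‘Phase 2’), the results can be embedded in a wiki page with:

    ::: query-table <query-id>
    :::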

Responding to Bug Reports

When the client identifies a bug, it is crucial to empathise and respond accordingly. They may be frustrated that the issue wasn't identified via QA, or they might need some reassurance that the issue will be taken seriously and resolved quickly.

❌ DO NOT simply respond with “Please raise a bug card following the bug raising standards”

✔️ DO consider the suggestions below

The former should only happen if we have a very close and well-established relationship with the individual, and we know they will take this message in the spirit intended.

Considerations when responding to bug reports:

  1. Thank them for spotting the issue.
     • Acknowledge that their effort is adding value to the process.
  2. Confirm we will investigate.
     • For us this might be a given - we know that as part of our process we will investigate bugs - but the client may not be aware of this.
  3. Provide context on the testing that has been done to date.
     • Explain why this issue may not have been spotted earlier in the process. Note the client might wrongly (but reasonably) assume that we have not done sufficient testing, when in fact the issue may be an edge case or related to specific data.
     • Where appropriate, we can explain the types of testing that have been done (e.g. manual, automated, exploratory), why this issue may not have been caught, and (if applicable) what we'd do to catch this in future.
     • Reassure them that, where appropriate, we will add automated test coverage to mitigate against recurrence.
  4. Provide an expected timeline for when the bug will be triaged and fixed, if possible. This helps manage expectations.
  5. If the bug has been raised by a non-technical stakeholder, offer to help them raise the bug card. This ensures the issue is captured correctly and demonstrates our commitment to resolving it.
  6. Show compassion. We might experience frustration if people highlight bugs verbally or in Teams messages, despite us having outlined the process for raising work items on the board.
     • The person raising the bug may not be a tester by trade, and it's reasonable to expect they'll need time to acclimatise to our process.
     • It's also reasonable for them to want to check whether a bug has already been captured, or whether the behaviour they're seeing is a false negative.
     • We should guide them towards the right process without making them feel bad for not following it initially. We can do this by re-articulating the value the bug report gives us, e.g.:

       “Thanks for flagging this issue, it's really helpful. Would it be possible to raise a bug card to describe the specific scenario in which you saw the issue, and include a timestamp please? This will help us correlate the issue with our logs and get to the bottom of it quickly. We'll also use this when re-testing and look at adding automated coverage to prevent future regression.”