
Tester Standards

These standards outline the expectations for testers at Audacia - covering test strategy, execution, automation and security.

DevOps & Delivery

Use YAML pipelines · DEVOPS-01.1 · MUST · DEV/TEST

Always use YAML (YAML Ain’t Markup Language) when creating DevOps pipelines.
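
A minimal sketch of what a YAML pipeline file might look like (the trigger branch, agent image and step shown are illustrative assumptions, not a prescribed template):

trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: echo "Hello from a YAML pipeline"
    displayName: Example step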

Use lowercase kebab-case for file names, e.g. 'build-app.job.yaml'. · DEVOPS-01.2 · MUST · DEV/TEST

Always use lowercase kebab-case for file names. For example: build-app.job.yaml

Use Pascal snake case for IDs, including the type, e.g. 'Job_Build_App'. · DEVOPS-01.3 · MUST · DEV/TEST

Always use Pascal snake case for IDs, including the type. For example: Job_Build_App
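
For example, a hypothetical job template saved as build-app.job.yaml (the job contents are illustrative only) would combine both conventions:

jobs:
  - job: Job_Build_App
    displayName: Build App
    steps:
      - script: echo "Building the app"
        displayName: Build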

Include a blank line between steps. · DEVOPS-01.4 · MUST · DEV/TEST

Always include a blank line between steps.

steps:
  - task: PowerShell@2
    inputs:
      targetType: inline
      script: Write-Host "Hello world"

  - task: PowerShell@2
    inputs:
      targetType: inline
      script: Write-Host "Goodbye world"

Include an approval gate for deployments to customer-facing environments. · DEVOPS-01.6 · MUST · DEV/TEST

Always include an approval gate for deployments to customer-facing environments. View documentation
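
In Azure DevOps, approval gates are usually configured as approvals/checks on the target environment rather than in the YAML itself; a deployment job that targets that environment will then wait for approval. A sketch, assuming an environment named 'production' exists with approvals configured:

jobs:
  - deployment: Job_Deploy_App
    displayName: Deploy App
    environment: production # approvals/checks are configured on this environment in Azure DevOps
    strategy:
      runOnce:
        deploy:
          steps:
            - script: echo "Deploying to the customer-facing environment"
              displayName: Deploy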

Use folders to group pipelines. · DEVOPS-01.10 · MUST · DEV/TEST

Always use a sensible folder structure for grouping pipelines (e.g., /deployment or /testing).

Sensible defaults can be found in the wiki here

Include an approval gate for deployments to internal environments. · DEVOPS-01.7 · SHOULD · DEV/TEST

It is strongly recommended that you include an approval gate for deployments to internal environments.

Split stages and jobs into their own file. · DEVOPS-01.8 · SHOULD · DEV/TEST

It is strongly recommended that you split stages and jobs into their own file.
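
As an illustrative sketch (the file names and stage/job contents are assumptions), a top-level pipeline can reference stages and jobs defined in their own template files:

# azure-pipelines.yaml
stages:
  - template: build.stage.yaml

# build.stage.yaml
stages:
  - stage: Stage_Build
    displayName: Build
    jobs:
      - template: build-app.job.yaml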

Include explicit 'Use Node' and 'Use .NET' steps. · DEVOPS-01.9 · SHOULD · DEV/TEST

It is strongly recommended that you include explicit Use Node and Use .NET steps.

# Use Node.js ecosystem v1
# Set up a Node.js environment and add it to the PATH, additionally providing proxy support.
- task: UseNode@1
  inputs:
    #version: '10.x' # string. Version. Default: 10.x.
    #checkLatest: false # boolean. Check for Latest Version. Default: false.
    #force32bit: false # boolean. Use 32 bit version on x64 agents. Default: false.
  # advanced
    #retryCountOnDownloadFails: '5' # string. Set retry count when nodes downloads failed. Default: 5.
    #delayBetweenRetries: '1000' # string. Set delay between retries. Default: 1000.
# Use .NET Core v2
# Acquires a specific version of the .NET Core SDK from the internet or the local cache and adds it to the PATH. Use this task to change the version of .NET Core used in subsequent tasks. Additionally provides proxy support.
- task: UseDotNet@2
  inputs:
    #packageType: 'sdk' # 'runtime' | 'sdk'. Package to install. Default: sdk.
    #useGlobalJson: false # boolean. Optional. Use when packageType = sdk. Use global json. Default: false.
    #workingDirectory: # string. Optional. Use when useGlobalJson = true. Working Directory.
    #version: # string. Optional. Use when useGlobalJson = false || packageType = runtime. Version.
    #includePreviewVersions: false # boolean. Optional. Use when useGlobalJson = false || packageType = runtime. Include Preview Versions. Default: false.
  # Advanced
    #vsVersion: # string. Compatible Visual Studio version.
    #installationPath: '$(Agent.ToolsDirectory)/dotnet' # string. Path To Install .Net Core. Default: $(Agent.ToolsDirectory)/dotnet.
    #performMultiLevelLookup: false # boolean. Perform Multi Level Lookup. Default: false.
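
A minimal concrete sketch of both steps (the versions shown are assumptions; pin whichever versions the project actually targets):

steps:
  - task: UseNode@1
    displayName: Use Node
    inputs:
      version: '20.x'

  - task: UseDotNet@2
    displayName: Use .NET SDK
    inputs:
      packageType: 'sdk'
      version: '8.x'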

Parameters must be explicitly and unambiguously named · DEVOPS-03.1 · MUST · DEV/TEST

The name must clearly describe the parameter’s purpose without requiring additional context, ensuring that anyone can reliably identify and reference it.

Each parameter must be clearly and individually declared · DEVOPS-03.2 · MUST · DEV/TEST

To support clarity and prevent misinterpretation, each parameter must be declared on its own rather than embedded in compact or nested structures.

Parameter definitions must include essential metadata · DEVOPS-03.3 · MUST · DEV/TEST

All parameters must provide the necessary context to function predictably. This includes core attributes such as identifier, type, and default behaviour / value.

Parameter names must follow consistent naming conventions · DEVOPS-03.4 · MUST · DEV/TEST

Use a consistent casing convention (e.g. camelCase) for parameter names to ensure readability and familiarity across pipelines and projects.
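
Taken together, DEVOPS-03.1 to DEVOPS-03.4 might look like the following sketch (the parameter names, types and defaults are illustrative only):

parameters:
  - name: environmentName # explicit, unambiguous, camelCase
    displayName: Environment Name
    type: string
    default: test

  - name: runSmokeTests
    displayName: Run Smoke Tests
    type: boolean
    default: true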

Variables must be defined in a controlled and traceable manner · DEVOPS-03.5 · MUST · DEV/TEST

Avoid scattering variable definitions across pipeline stages. Use centralized declarations (e.g., variable groups, templates) to promote transparency and version control.

Secure variables must be stored in approved secret management tools · DEVOPS-03.6 · MUST · DEV/TEST

Sensitive values (e.g., connection strings, API keys) must be stored using secure mechanisms such as:

  • Azure DevOps Library Secure Files
  • Secret variable within a variable group
  • Azure Key Vault integration
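
For example (the variable group, secret and script names are assumptions), a Key Vault-backed variable group can be referenced in YAML and its secrets mapped explicitly into the steps that need them:

variables:
  - group: project-keyvault-secrets # variable group linked to Azure Key Vault

steps:
  - script: ./run-integration-tests.sh
    displayName: Run integration tests
    env:
      DB_CONNECTION_STRING: $(dbConnectionString) # secret variables must be mapped into env explicitly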

Variable scope must match usage context · DEVOPS-03.7 · MUST · DEV/TEST

Define variables at the most appropriate scope:

  • Pipeline-level for global use
  • Stage/job-level for isolated logic
  • Environment-level for deployment scenarios

Avoid overly broad scoping, which can lead to unexpected overrides or conflicts.
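
A short sketch of scoping (the names and values are illustrative): pipeline-level variables for values used everywhere, with a job-level variable visible only to the job that needs it:

variables:
  buildConfiguration: Release # pipeline-level: available to all stages and jobs

jobs:
  - job: Job_Run_Tests
    variables:
      testTimeoutMinutes: 10 # job-level: only visible within this job
    steps:
      - script: echo "Configuration $(buildConfiguration), timeout $(testTimeoutMinutes) minutes"
        displayName: Show variables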

Variable names must use a consistent format · DEVOPS-03.8 · MUST · DEV/TEST

Use a clear naming convention such as UPPER_SNAKE_CASE or camelCase, depending on the team standard. Prefix values logically where relevant (e.g., dbConnectionString, testTimeoutMinutes).

Common variables must be centralized for reuse · DEVOPS-03.9 · MUST · DEV/TEST

Frequent or shared values must be placed in reusable variable groups or YAML includes. This minimizes duplication and improves maintainability.
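
As a sketch covering DEVOPS-03.5, 03.8 and 03.9 (the file, group and variable names are assumptions), shared values can live in a variables template or variable group and be included wherever they are needed:

# common.variables.yaml
variables:
  nodeVersion: '20.x'
  dotnetVersion: '8.x'

# azure-pipelines.yaml
variables:
  - template: common.variables.yaml # reusable YAML variables template
  - group: shared-pipeline-variables # reusable variable group defined in the Library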


Quality

Manually tested work items must have linked test cases · MT-01 · MUST · TEST

All work items that include front-end changes must be manually tested · MT-02 · MUST · TEST

Manual testing must have test evidence provided · MT-03 · MUST · TEST

Provide clear evidence for all manual testing performed. This must be in the form of test cases that include detailed test steps.

Each test case must be marked as either passed or failed to confirm that testing has been completed and recorded.

Manual tests must follow the end-to-end user journey · MT-04 · MUST · TEST

The person testing a card (PBI/bug) must ensure they have followed the full end-to-end user journey during their testing. Testing end-to-end mirrors how a real user will use the system, helping to identify usability issues and highlight bugs that users are likely to encounter. Sometimes a process can work in isolation but break if you try to run it end to end.

Manual testing must be performed on representative devices and browsers · MT-05 · MUST · TEST

Web applications tend to suffer from rendering inconsistencies when accessed through devices with different screen sizes.

Testing should be performed on relevant OS, devices and browsers to ensure usability is maintained. These requirements should be signed off by the client before the project starts.

A regression pack must be created at the beginning of a project · RT-01 · MUST · TEST

Start creating the regression pack as soon as you write the initial tests. Continue to develop and expand the pack throughout the project to prevent complications and delays.

The regression pack must be maintained throughout a project · RT-02 · MUST · TEST

Regularly review and update test logic. Add new tests as needed to ensure the regression pack reflects current business logic and expected behaviours. This reduces the risk of re-work. Remove tests that are no longer relevant or do not provide sufficient value to keep the regression pack efficient and focused.

All test cases must be reviewed for inclusion in regression packs · RT-03 · MUST · TEST

Evaluate each test case to determine if it should be included in the regression pack. Focus on tests that cover core system functionality and areas most likely to be affected by changes.

Conventional Commits must be used for commit messages · QUAL-04.1 · MUST · DEV/TEST

Commit messages must follow the Conventional Commits specification.

It is strongly recommended that projects at least use types from the specification to communicate the main purpose of the commit, e.g.:

  • feat: for new features
  • fix: for bug fixes
  • docs: for documentation changes
  • refactor: for refactoring
  • etc. (any type from the specification can be used)

Projects can optionally go beyond just using the type and follow the full specification, which includes additional features such as:

  • Including a scope after the type, e.g. the scope api after a feature would be feat(api):
  • Communicating breaking changes by appending a ! after the type/scope, e.g. feat!:
  • Including a footer

Commit messages must include the work item number · QUAL-04.2 · MUST · DEV/TEST

The work item number (e.g. PBI or Bug) must be added to the end of the commit message. For example: fix: added phone number validation #xxxx, where #xxxx is the card related to the commit.

A branch policy must be set on main development branches · QUAL-05.1 · MUST · DEV/TEST

Code reviews must be enforced by setting a branch policy on the main development branches (e.g. dev and/or main) with the following settings:

  1. Require a minimum of at least 1 reviewer (an individual project can require more than 1 reviewer if desired).
  2. Do not allow users to approve their own changes (unless the minimum number of reviewers is more than 1).
  3. ‘Reset code reviewer votes when there are new changes’ should be selected.
  4. ‘Check for linked work items’ should be required.
  5. ‘Check for comment resolution’ should be required.
  6. ‘Limit merge types’ should generally be set to ‘Basic merge (no fast-forward)’.
  7. ‘Build validation’ should be set for the relevant build pipeline(s) to ensure that the pull request doesn’t break the build, with the expiration set to immediately.

Projects can diverge from this policy provided the divergence makes the policy stricter rather than more relaxed, for example by requiring more than 1 reviewer.

Technical debt must be documented · QUAL-01.1 · MUST · DEV/TEST

All projects must have a documented test strategy · TSY-01 · MUST · TEST

Document a test strategy for every project. This ensures that testing activities are planned, communicated, and understood by all stakeholders.

A project’s test strategy must be maintained throughout the project · TSY-02 · MUST · TEST

Update and maintain the test strategy as the project evolves. Review the strategy regularly to ensure it remains relevant and effective.

Test strategies must contain information on all areas of testing · TSY-03 · MUST · TEST

Include details for all relevant testing areas, such as unit, integration, system, acceptance, and performance testing.
Always include developer unit testing as part of the strategy. The lead developer is responsible for ensuring that unit testing is planned and executed.

Refer to the template test strategy for an example.

You can also find further guidelines in our Test Strategy Best Practice Wiki Pages.


Documentation

Delivery team roles and responsibilities must be documented · DOC-01.1 · MUST · DEV/TEST

These roles and responsibilities can be captured as a RACI matrix, or something less formal if preferable.

Client key contacts must be documented · DOC-01.2 · MUST · DEV/TEST

Important information to capture includes each person’s contact details, role and decision-making authority on the project.

An overview of each delivery phase must be documented · DOC-01.3 · MUST · DEV/TEST

At a minimum, the following information must be included:

  • List of each phase delivered
  • Delivery dates
  • High-level summary of what was delivered in each phase.

EXCEPTION: This is not applicable to projects that have not had multiple delivery phases.

README files must contain a title and introduction to detail the purpose of the repository · DOC-04.1 · MUST · DEV/TEST

README files must document the technologies used within the repository · DOC-04.2 · MUST · DEV/TEST

The main technologies used and any pre-requisite software that must be installed must be listed in the README: for example .NET 10, Node.js 20, Playwright.

README files must document how to use the repository locally · DOC-04.3 · MUST · DEV/TEST

Application Code

  1. Document any required installs, such as SSL Certificates
  2. Document any configuration files such as appsettings.Development.json or nuget.config
  3. Document how to set up local development, including any local databases
  4. Document how to run the application locally, explaining any npm scripts and any command-line options or parameters that must be provided
  5. Document any additional applications (such as IDEs, or IDE extensions/plugins) which are needed for debugging etc.

Automated Tests

  1. Document any required installs
  2. Document any credentials required to run the tests and the file they should be added to. For example, contributors should populate cypress.env.json from the UI Cypress Tests record in Keeper.
  3. Document how to run the tests locally, explaining any npm scripts and any command-line options or parameters that must be provided
  4. Document any additional applications which may be used. For example, Playwright Test for VSCode by Microsoft.

Key system workflows must be documented · DOC-02.1 · MUST · DEV/TEST

A glossary of terms and acronyms used within a project must be provided · DOC-02.2 · MUST · DEV/TEST

A known issues log must be maintained · DOC-02.3 · MUST · DEV/TEST

System roles and permissions must be documented · DOC-02.4 · MUST · DEV/TEST

Architecture decision records (ADRs) must be written to record key technical decisions · DOC-03.2 · MUST · DEV/TEST

ADRs can be included in the main project wiki or in the same repo as the source code (and linked to from the main wiki).

Sensible default rules for ADRs can be found in the wiki.

Non-functional requirements must be documented · DOC-03.5 · MUST · DEV/TEST

Examples of non-functional requirements are:

  • Performance and response times
  • Accessibility
  • Device testing requirements (these should be captured in the Test Strategy)
  • Logging and observability

The branching strategy must be documented · DOC-03.7 · MUST · DEV/TEST

This should include the overall strategy, the reasoning behind it, and any specific details such as naming conventions, e.g. release/.

External resources must be listed and signposted · DOC-03.9 · MUST · DEV/TEST

For example, Postman collections or JMeter scripts.

A regression testing guide must be provided · DOC-03.11 · MUST · TEST

This should be some signposting to the relevant resources, e.g. an automated test repo and/or Azure DevOps test plan.

A ‘state of play’ of the automated tests must be documented, covering test coverage, the system areas that have been targeted and any known gaps.


Automated Testing

API tests must test only one endpoint · API-01 · MUST · TEST

Each API test must focus on a single endpoint. This ensures that tests are isolated, failures are easier to diagnose, and the intent of each test is clear.

API tests must test happy and unhappy paths · API-02 · MUST · TEST

API tests must cover both:

  • Happy paths: Valid requests that should succeed and return expected results.
  • Unhappy paths: Invalid or unexpected requests that should fail gracefully (e.g., missing required fields, unauthorized access, invalid data).

API tests should outnumber UI tests · API-03 · SHOULD · TEST

API tests should be prioritized over UI tests, and in most cases, there should be more API tests than UI tests. API tests are generally faster, less brittle, and make it easier to narrow down the point of failure when issues arise, as they test the system at a lower level with fewer dependencies.

Regression tests must be automated · AT-01 · MUST · TEST

Automate all regression tests to ensure repeatability and reduce manual effort.

Tests must be run automatically when new functionality is released · AT-02 · MUST · TEST

Configure automated tests to run as part of the release process for new functionality.

Tests must be tagged by regression priority and system area · AT-03 · MUST · TEST

Tag each test with its regression priority (such as high, medium, or low) and the system area it covers. This enables targeted execution and reporting, and allows only high-priority or relevant tests to be run when new functionality is released.
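
For example (this sketch assumes a Playwright test suite and the tag names shown; adapt the command to the project’s test runner), a pipeline stage can run only the high-priority regression tests after a deployment:

stages:
  - stage: Stage_Regression_Tests
    dependsOn: Stage_Deploy_Test # run automatically once the deployment stage completes
    jobs:
      - job: Job_Run_High_Priority_Tests
        steps:
          - script: npx playwright test --grep "@regression-high"
            displayName: Run high-priority regression tests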

Tests must be atomic and idempotent · AT-04 · MUST · TEST

Design each test to perform its own setup and teardown, including creating and deleting data as needed.
Atomic and idempotent tests can run independently and in any order, without relying on the outcome or side effects of other tests. This ensures reliability and repeatability.

Tests must be actively maintained · AT-05 · MUST · TEST

Actively maintain all tests as the system evolves.
This includes:

  • Checking pipelines after each run.
  • Addressing any failing tests as soon as possible, following the process laid out in the project’s test strategy.
  • Reprioritising tests as project priorities change.
  • Deleting tests that are no longer relevant or valuable.

Tests must have a descriptive title · AT-06 · MUST · TEST

Give each test a clear and descriptive title that explains what is being tested.

Test titles should include work item ID · AT-07 · SHOULD · TEST

Include the work item ID in the test title to trace tests back to requirements or user stories.
This may be a project-specific decision.

There must be a link between automated test pull requests and work item(s) they test · AT-08 · MUST · TEST

Reference the relevant work item(s) in pull requests containing automated tests to ensure traceability between code changes and requirements.

Tests must have a strong assertion · AT-09 · MUST · TEST

Include a strong assertion in each test to verify the expected outcome.

A strong assertion ensures the test fails if the application does not behave as expected.

Tests must assert against the behaviour in the test title · AT-10 · MUST · TEST

The assertion in each test must directly verify the behaviour described in the test title, ensuring clarity and alignment between test intent and implementation.

UI tests must cover the main user journeys only · UI-01 · MUST · TEST

Automate only the most important user workflows. Exclude minor features and edge cases from UI automation so that the test run time stays low.

UI tests for unhappy paths should be kept to a minimum · UI-02 · SHOULD · TEST

Limit UI tests for negative or error scenarios. Cover these cases with API automated tests where possible.

The data setup for UI tests should be done via the API · UI-03 · SHOULD · TEST

Set up and manage test data using APIs instead of the UI. This approach increases test speed, reliability, and maintainability.


User Experience

Leverage AI for UX Testing · UX-01.8 · SHOULD · DEV/TEST

Use AI tools like ChatGPT for usability feedback. Simulate user perspectives to reveal blind spots. Example prompts:

✓ "Critique this sign-up flow: where could users get stuck?"
✓ "Act like a distracted user—what's confusing in this screen?"
✓ "How do top SaaS products design this flow?"

AI feedback isn’t final, but it helps prioritize areas for further testing or validation.


Security & Availability

Applications must use Mailtrap for emails in test environments · DS-01 · MUST · DEV/TEST

Always use Mailtrap for sending emails in test environments so that test emails are not delivered to real recipients.

EXCEPTION: This requirement does not apply if the client has their own service or process in place for handling test emails.

Data must be anonymised when moved between environments · DS-02 · MUST · DEV/TEST

Sensitive data must be anonymised before being transferred between environments (e.g., from production to development or test).

For guidance on reporting data movements, refer to the Security Infrastructure FAQs.

EXCEPTION: This requirement may be waived if the data is essential for reproducing a specific issue, provided there is a documented process in place to delete the data from the destination environment once it is no longer required.

Sensitive data must be stored securely · DS-03 · MUST · DEV/TEST

Sensitive data includes credentials, API keys, connection strings and certificates. Store only in approved secret stores and encrypted services. Never commit to source control or place in plain text in config.

  • Use a managed cloud secret store for secrets, keys and certificates (e.g. Azure Key Vault or AWS Secrets Manager).
  • Store pipeline secrets in your platform’s secure secret management (e.g. Azure DevOps Variable Groups or GitHub Actions secrets). Restrict access via RBAC and least privilege. See DEVOPS-03.6.
  • For local development, store .env secrets in Keeper as a “Secure Note”. Do not share as plain text.

The scope of the penetration test must be defined clearly in advance · PEN-01 · MUST · DEV/TEST

Define the scope explicitly before testing begins, specifying systems, applications, and components to be tested.

Recognised scope types include:

  • External Network Testing
  • Internal Network Testing
  • Web Application Testing
  • API Testing
  • Cloud Infrastructure Testing
  • Social Engineering (only if explicitly agreed)

The penetration test must have formal sign-off from the System Owner and Relevant Stakeholders before it begins · PEN-02 · MUST · DEV/TEST

An agreed contract or Statement of Work (SoW) from the testing provider must be in place and cover: scope, methodology, test windows, escalation paths, impact management, and legal statements.

Testing should be performed on a representative environment · PEN-03 · SHOULD · DEV/TEST

Prefer a non‑production environment that mirrors production configuration (e.g. Azure Front Door, Web Application Firewall). Avoid testing in production unless absolutely necessary and explicitly approved.

System access to conduct the penetration test must be controlled and monitored · PEN-04 · MUST · DEV/TEST

Grant the least privilege required for the duration of testing. Access must be time‑bound and revoked immediately after testing concludes. Maintain audit logs of access and tester activity, and rotate any credentials or tokens post‑test.

Penetration testers must not have access to production data · PEN-05 · MUST · DEV/TEST

Do not provide testers with access to live production data. If production must be used, ensure no production data is present, or anonymise sensitive data to an acceptable standard. After testing, reset or roll back changes and revoke access.

Penetration testers must provide a formal report of findings and remediation recommendations · PEN-06 · MUST · DEV/TEST

Require a formal report including scope and methodology, evidence of findings with severity ratings, impacted assets, reproducible steps, and clear remediation recommendations.

Use long-term support framework versions · SEC-01.1.1 · MUST · DEV/TEST

Applications must target long-term support (LTS) versions of major frameworks (e.g. .NET or Angular).

Using LTS versions of frameworks improves an application’s security, as LTS releases receive security patches and updates for a longer period than standard term support versions. For .NET applications, Microsoft provides LTS releases with 3 years of patching support.

Avoid the use of libraries with known vulnerabilities · SEC-01.1.2 · MUST · DEV/TEST

Applications must not utilise external libraries with known vulnerabilities (e.g. older versions of jQuery).

The use of external libraries should be carefully monitored, reviewed, and virus scanned in order to reduce the risk of malicious code injection (for example, cross-site scripting injection).

Use secure package versions · SEC-01.1.3 · SHOULD · DEV/TEST

Applications should maintain the security and reliability of all libraries and packages by upgrading to the latest secure version.

If a vulnerability is discovered, the affected package should be upgraded to the latest patched version as soon as one becomes available.

Vulnerabilities can propagate through indirect dependencies or outdated packages - any packages which cannot be upgraded to secure versions must be recorded as a risk within the project tracker.
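
Vulnerability checks can also be surfaced in the build pipeline. A sketch using standard package-manager tooling (which steps apply depends on the project’s package managers):

steps:
  - script: dotnet list package --vulnerable --include-transitive
    displayName: Check NuGet packages for known vulnerabilities

  - script: npm audit --audit-level=high
    displayName: Check npm packages for known vulnerabilities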

All application credentials must follow the Password and Authentication Policy · SEC-01.2.1 · MUST · DEV/TEST

To ensure password complexity, passwords must adhere to Audacia’s ‘Password and Authentication Policy’.


Application credentials must not be re-used · SEC-01.2.2 · MUST · DEV/TEST

Unique, auto-generated passwords should be used to ensure the same password cannot be reused for accounts across multiple applications.

Un-encrypted credentials must not be stored in source control · SEC-01.2.3 · MUST · DEV/TEST

All credentials (including but not limited to API keys, default application logins and database config) must not be stored in source control.

If encrypted credentials are stored in source control, the necessary decryption key(s) must be stored securely elsewhere.

Validate, encode or escape user input to prevent cross-site scripting (XSS) · SEC-01.3.1 · MUST · DEV/TEST

Any data entered by a user must be treated as untrusted.

All untrusted input must be validated, escaped or encoded appropriately when being rendered in an HTML page, as this defends against cross-site scripting (XSS).

Server-side input validation must be performed to ensure that malicious or malformed data is not processed. This is in addition to client-side validation, which can be bypassed.

Automated API tests should be written for input validation.

Functional API tests can validate business logic rules around field length, required fields and expected data types faster than UI tests, which need to log in or navigate to the form or page under test.

Testing should ensure it is not possible to access resources using URL manipulation · SEC-01.5.4 · SHOULD · TEST

Users must not be able to manipulate URLs and entity IDs in API requests to gain access to data they are not authorised to view.

Testing should ensure data is not exposed · SEC-01.5.5 · SHOULD · TEST

The system should only return the data that is required by the end user. It should not return any data the user is not authorised to view or any data the user does not need to fulfil their current action.

Data minimisation can be achieved by ensuring the system does not return more table columns and/or data than will be shown to the user.

Applications must only share sensitive information with the relevant audience.

In some cases, applications should deliberately limit the information provided to the end user; for example, data exposure can be avoided in authentication messages. If a user enters an incorrect username or password, the system should not tell the user which is incorrect. This helps reduce the likelihood of a bad actor being able to guess valid username/password combinations.

The appropriate OWASP guidance must be followed · SEC-01.7.1 · MUST · DEV/TEST

Teams must follow current OWASP guidance relevant to the system and document the risks considered and the controls applied.

  • OWASP Top 10: Common web application risks (e.g. injection, broken access control, cryptographic failures, insecure design, vulnerable and outdated components, SSRF). See the OWASP Top 10.
  • OWASP Top 10 for LLM Applications: Risks specific to LLM systems (e.g. prompt injection, data poisoning, supply chain/model integrity, sensitive information disclosure). See the OWASP Top 10 for LLM Applications.
  • OWASP Top 10 for Business Logic Abuse: Abuse of workflows and rules (e.g. bypassing business processes, DoS, privilege escalation through logic flaws, quota/metering circumvention). See the OWASP Top 10 for Business Logic Abuse.

Security scans must be run on a regular basis · ST-01 · MUST · TEST

Security scans must be run on a regular basis (e.g. weekly) to identify vulnerabilities in the system. These scans should include:

  • Baseline Scans: Scans performed without authentication to identify publicly exposed vulnerabilities.
  • Authenticated Scans: Scans performed with valid credentials to identify vulnerabilities within the authenticated areas of the system.
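
A sketch of scheduling such scans in a pipeline (the cron expression, target URL and use of the OWASP ZAP baseline scan are assumptions; substitute the scanning tool the project actually uses):

schedules:
  - cron: '0 2 * * 1' # every Monday at 02:00 UTC
    displayName: Weekly security scan
    branches:
      include:
        - main
    always: true # run even if there have been no code changes

steps:
  - script: docker run -t ghcr.io/zaproxy/zaproxy:stable zap-baseline.py -t https://test.example.com
    displayName: Run baseline (unauthenticated) scan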

A vulnerability management process must be documented and followed · ST-02 · MUST · TEST

A vulnerability management process must be documented and followed to ensure vulnerabilities are tracked, prioritized, and resolved. This process should include:

  • Identification: Regular scans and manual reviews to identify vulnerabilities, including how to raise bugs for them.
  • Assessment: Prioritization of vulnerabilities based on severity and impact.
  • Remediation: Timely resolution of identified vulnerabilities.
For more details, refer to the Vulnerability Management Process.

Authorization tests must be included in automated tests · ST-03 · MUST · TEST

Authorization tests must be included in automated tests to ensure that users can only access resources they are authorized to access. Examples of such tests include:

  • Verifying that users with specific roles cannot access restricted endpoints or web pages.
  • Ensuring that users without an admin role cannot perform administrative actions.
  • Confirming that sensitive data is not exposed to users without proper permissions.

Due diligence must be performed before introducing a new third-party dependency · SEC-04.1 · MUST · DEV/TEST

The due diligence must, at a minimum, cover the following questions (with an aim to answer “Yes” to every question):

  1. Is the dependency stable, popular, etc. (using metrics like GitHub stars and downloads)?
  2. Is the dependency actively maintained (using metrics like number of contributors and last update)?
  3. Is the latest version secure (i.e. no unpatched vulnerabilities)?
  4. Is the dependency solving a well-scoped, common problem (e.g. Excel integration or date parsing)?
  5. Does the cost to build outweigh the risk of importing?
  6. Is the dependency solving a non-trivial problem or one that is likely to evolve in future (i.e. it would be harder to write our own code)?
  7. Is the license acceptable for our intended use (e.g. copyleft licenses are generally to be avoided)?

Performance & Scalability

Load tests must use a representative number of users · PT-01 · MUST · TEST

Load tests must simulate the expected number of concurrent users accessing the system to ensure the results reflect real-world usage.

Performance tests must use realistic data volumes · PT-02 · MUST · TEST

Performance tests must use data volumes that are representative of production environments to ensure accurate results.

Load tests must cover the most common API calls and user journeys · PT-03 · MUST · TEST

Load tests must include the most frequently used API calls and user workflows to ensure critical paths are tested.

Performance tests should run on a regular schedule · PT-04 · SHOULD · TEST

Performance tests should be scheduled to run regularly (e.g. weekly or before a release) to identify performance issues early.
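
A sketch of the test step for such a scheduled run, assuming a JMeter test plan checked into the repository and JMeter available on the agent (file paths are illustrative); a cron schedule like the one shown under ST-01 can trigger it:

steps:
  - script: jmeter -n -t load-tests/main-journeys.jmx -l results/load-test-results.jtl
    displayName: Run JMeter load test in non-GUI mode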

Performance insights should be gathered from diverse methods · PT-05 · SHOULD · TEST

Do not rely solely on load testing to assess system performance. Load testing is essential for understanding how a system behaves under expected user traffic, but it does not provide a complete picture.

Use a combination of performance testing methods to evaluate different aspects of system behavior. Examples include:

  • UI performance testing – to audit front-end performance, including page load speed, interactivity, and visual stability. Various tools can be used for this purpose, such as Lighthouse.
  • Stress testing - to determine how the system performs under extreme conditions and how it recovers from failure.
  • Endurance testing - to identify issues such as memory leaks or performance degradation over time.
  • Spike testing - to evaluate how the system handles sudden increases in load.
  • Scalability testing - to evaluate how the system performs as the workload increases.