Software Eligibility: beyond the myths
Software claims are often rejected (or delayed) for one of two reasons: they read like a product roadmap, or they lean on myths such as “it’s innovative because it’s new to us”, “we used AI”, “we built a SaaS”, or “we migrated to the cloud”. None of those is, by itself, R&D.
This guide shows what actually drives eligibility in software: where the real technological uncertainty sits, how to describe it clearly, and how to separate qualifying development from routine build, implementation, and BAU change.
1) Common myths (and what HMRC actually needs)
Let’s separate what sounds persuasive in a pitch deck from what makes a software claim work in practice. These myths often lead to narratives that are broad, vague, or misaligned to the core eligibility tests.
Myth: “It’s new to us”
Being new to the company is not the test. The question is whether the outcome was not readily deducible and required systematic work to resolve a technological uncertainty.
Myth: “We used AI / ML”
Using an API or fine-tuning a standard model is not automatically R&D. The “signal” is where you had to overcome non-obvious constraints: evaluation, drift, performance, safety, data limitations, or integration at scale.
Myth: “We built a SaaS platform”
Many SaaS builds are straightforward implementations of known patterns. Eligibility arises when the build required experimentation to achieve a non-trivial technical capability (performance, reliability, scalability, correctness).
Myth: “We migrated to cloud / microservices”
Migrations can be BAU. They can also involve qualifying uncertainty (e.g., consistency models, latency budgets, data integrity, zero-downtime constraints), but only if those unknowns drove systematic engineering.
What HMRC actually needs to see
A clear description of the technological uncertainty, why it wasn’t readily deducible, and what systematic work was performed to resolve it, with evidence that ties back to how software teams genuinely build and test.
2) What “qualifying” looks like in software
Strong software claims tend to cluster around a small number of genuinely hard workstreams where the team pushed beyond routine delivery. These are often about non-functional requirements and systems constraints that aren’t solved by simply writing more code.
Performance & scalability
Achieving latency/throughput targets at scale; optimising query plans; designing caching/invalidation strategies; controlling cost without degrading correctness.
Reliability & resilience
Handling failure modes; consistency guarantees; idempotency; graceful degradation; recovery strategies; system observability that materially changes how incidents are prevented.
Data correctness & integrity
Non-trivial transformations; reconciliation at scale; schema evolution; drift detection; reducing silent data corruption; validating pipelines under real-world variability.
In software, “advance” is often the ability to reliably achieve a capability under constraints (scale, latency, correctness, safety), not a shiny new UI feature.
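To ground this, here is a minimal sketch of the kind of engineering the “reliability & resilience” strand points at: an event handler that must stay correct under retries and duplicate delivery. Everything in it (the store, the handler, the event shape) is hypothetical; in a real claim the qualifying work would be in proving that the guarantee holds under concurrency and partial failure at scale, not in writing a snippet like this.

```python
# Illustrative sketch only: an idempotent event handler of the kind described
# under "reliability & resilience". The names (InMemoryEventStore,
# handle_payment_event) are hypothetical stand-ins.

class InMemoryEventStore:
    """Stand-in for a durable store with a processed-event ledger."""

    def __init__(self):
        self.processed_ids = set()   # ledger of event IDs already applied
        self.balances = {}           # account -> balance

    def apply_once(self, event_id, account, amount):
        # Idempotency guard: re-delivery of the same event must not
        # change state a second time.
        if event_id in self.processed_ids:
            return False
        self.balances[account] = self.balances.get(account, 0) + amount
        self.processed_ids.add(event_id)
        return True


def handle_payment_event(store, event):
    """Apply a payment event exactly once, tolerating duplicate delivery."""
    return store.apply_once(event["id"], event["account"], event["amount"])


if __name__ == "__main__":
    store = InMemoryEventStore()
    event = {"id": "evt-1", "account": "acc-42", "amount": 100}
    assert handle_payment_event(store, event) is True
    # Simulated duplicate delivery after a retry: state must not change.
    assert handle_payment_event(store, event) is False
    assert store.balances["acc-42"] == 100
```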
3) Examples: high-signal vs low-signal software work
These examples aren’t definitive (context matters), but they illustrate how the same “headline” activity can be either qualifying or non-qualifying depending on the underlying uncertainty.
High-signal (often qualifying, given genuine uncertainty)
- Designing a new caching + invalidation strategy because existing approaches caused correctness failures under concurrency.
- Building an event processing pipeline that maintains ordering/consistency under partial failure and high throughput.
- Developing a new evaluation framework to quantify model drift and maintain quality under changing real-world data distributions (see the sketch after this list).
- Achieving near real-time sync across heterogeneous systems where eventual consistency created unacceptable business risk.
Low-signal (usually routine, absent deeper uncertainty)
- Implementing a standard CRUD feature or workflow using known patterns.
- Integrating a third-party API using published documentation and conventional error handling.
- Cloud migration carried out with established reference architectures without non-obvious constraints driving experimentation.
- Routine bug fixing and QA testing (unless tied to deeper uncertainty and systematic resolution).
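Taking the model-drift example above, here is a minimal, illustrative sketch of one check an evaluation framework might contain: a Population Stability Index (PSI) comparison between a baseline and a current sample. The data, bucketing, and threshold are hypothetical; the genuinely uncertain work usually sits in choosing metrics, sampling strategy, and thresholds that hold up on real-world data.

```python
# Illustrative sketch only: a simple Population Stability Index (PSI) check of
# the kind a drift-evaluation framework might include. Buckets, sample data
# and the 0.25 threshold are hypothetical.

import math
from collections import Counter


def psi(baseline, current, buckets):
    """Population Stability Index between two samples over fixed buckets."""
    def distribution(values):
        counts = Counter(min(int(v * buckets), buckets - 1) for v in values)
        total = len(values)
        # Small floor avoids log(0) for empty buckets.
        return [max(counts.get(b, 0) / total, 1e-6) for b in range(buckets)]

    p = distribution(baseline)
    q = distribution(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))


if __name__ == "__main__":
    baseline = [0.1, 0.2, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8]
    current = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]   # shifted distribution
    score = psi(baseline, current, buckets=5)
    print(f"PSI = {score:.3f}")   # > 0.25 is often treated as material drift
```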
4) Framing projects: uncertainty-first, not feature-first
Many software narratives read like: “We built feature A, then B, then C.” That makes it hard for a reviewer to see the technical unknown. A better approach is to frame projects around the uncertainty you were trying to resolve.
Weak project title
“Platform improvements”
Stronger project title
“Reducing end-to-end latency under spike load while preserving data integrity”
A structure that consistently works
Objective (capability) → Constraints (real-world conditions) → Uncertainty (what wasn’t known) → Approach (tests/iterations) → Outcome (what you proved/learned).
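One way to keep narratives in this shape is to capture each project as a structured record before any prose is written. A minimal sketch follows, reusing the latency example above; every field value is hypothetical.

```python
# Minimal sketch: capturing a project uncertainty-first before drafting the
# narrative. All content below is hypothetical and for illustration only.

project = {
    "objective":   "Serve results in under 200 ms at the 95th percentile "
                   "during spike load, without stale or inconsistent data.",
    "constraints": ["10x traffic spikes within minutes",
                    "strict read-after-write consistency for pricing data",
                    "fixed infrastructure budget"],
    "uncertainty": "No established caching/invalidation pattern met both the "
                   "latency target and the consistency requirement at this scale.",
    "approach":    ["load-tested three invalidation strategies",
                    "profiled hot paths and query plans",
                    "iterated on partitioning after each failed benchmark"],
    "outcome":     "Met the latency target; documented which strategies failed "
                   "and why, which informed the final design.",
}

for field, value in project.items():
    print(f"{field}: {value}")
```

Writing the record first makes it harder to slide back into a feature list, because each field forces a statement about what was unknown rather than what was shipped.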
5) Evidence that works for software teams
Software teams already generate most of the evidence HMRC would expect; it’s just scattered across tools. The goal is to pick artefacts that support both the uncertainty and the systematic work.
Benchmarks & test results
Load tests, latency charts, profiling outputs, A/B results, model evaluation metrics, ablation tests.
Design & decisions
Architecture diagrams, ADRs, trade-off notes, spike outcomes, migration plans and learnings.
Delivery trail
Sprint goals tied to uncertainty, PR threads, incident post-mortems, tickets that show iterations and rework.
The best evidence is the evidence your team used to decide what to do next, because it proves the work wasn’t obvious and required iteration.
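As a concrete example of the first category, here is a minimal, hypothetical benchmark sketch that measures latency percentiles for a function under test and emits a dated result. Real evidence will usually come from your existing load-testing, profiling, or CI tooling; the point is that even simple, repeatable measurements like this, kept alongside their results, demonstrate systematic work.

```python
# Minimal, hypothetical benchmark sketch: measuring latency percentiles for a
# function under test and recording them as a dated artefact. Illustrative of
# the "benchmarks & test results" evidence category only.

import json
import statistics
import time
from datetime import date


def operation_under_test():
    """Stand-in for the code path being optimised (hypothetical)."""
    sum(i * i for i in range(10_000))


def benchmark(fn, runs=200):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)  # milliseconds
    samples.sort()
    return {
        "date": date.today().isoformat(),
        "runs": runs,
        "p50_ms": round(statistics.median(samples), 3),
        "p95_ms": round(samples[int(0.95 * runs) - 1], 3),
        "p99_ms": round(samples[int(0.99 * runs) - 1], 3),
    }


if __name__ == "__main__":
    # Printing (or writing) dated results creates a simple, auditable trail.
    print(json.dumps(benchmark(operation_under_test), indent=2))
```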
6) Mixed work: separating R&D from BAU
Most software teams do a blend of qualifying and non-qualifying work. A defensible claim doesn’t pretend everything is R&D. Instead, it explains where the uncertainty sits and how much time/cost realistically related to resolving it.
Practical approach
Identify the “R&D core” (the hard part), then explicitly ring-fence routine build (UI work, standard integration, implementation and BAU support). Reviewers generally prefer a claim that is focused and plausible.
7) Methodology: defensible apportionments
A credible methodology is consistent, explainable, and aligned to how the business actually operates. In software, we commonly see two layers:
- Workstream split: allocate costs to the small number of qualifying workstreams (and explain why those workstreams are qualifying).
- Role/time logic: reflect reality. Senior engineers may be heavily involved in uncertain work, while support/ops/QA may be mostly BAU unless tied to experimentation.
If you apply a global percentage for certain overhead-like costs (tools, licences, some cloud), ensure the rationale is explicit and consistent with your delivery model.
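To illustrate the two layers numerically, here is a deliberately simplified, hypothetical worked example. Every figure, role, and percentage is illustrative only, not a benchmark or a recommended rate; a defensible apportionment has to come from your own delivery records.

```python
# Deliberately simplified, hypothetical apportionment example. Every figure and
# percentage here is illustrative only; a real methodology must be grounded in
# your own delivery records and explained consistently.

staff = [
    # (role, annual cost, share of time on the qualifying workstreams)
    ("Senior engineer", 90_000, 0.60),
    ("Senior engineer", 85_000, 0.50),
    ("Mid engineer",    65_000, 0.30),
    ("QA / support",    50_000, 0.10),  # mostly BAU unless tied to experimentation
]

qualifying_staff = sum(cost * share for _, cost, share in staff)

# Overhead-like costs (e.g. some cloud spend) with an explicit global rate.
cloud_spend = 40_000
cloud_rd_rate = 0.25          # the rationale for this rate must be documented
qualifying_cloud = cloud_spend * cloud_rd_rate

total_qualifying = qualifying_staff + qualifying_cloud
print(f"Qualifying staff costs: £{qualifying_staff:,.0f}")
print(f"Qualifying cloud costs: £{qualifying_cloud:,.0f}")
print(f"Total qualifying spend: £{total_qualifying:,.0f}")
```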
8) Practical checklist for your next claim
- Can we articulate the technological uncertainty for each project in plain English?
- Is it clear why the solution wasn’t readily deducible at the outset?
- Do we show systematic work (tests/iterations) rather than a feature list?
- Have we separated BAU, implementation, and routine change from the R&D core?
- Do the AIF (Additional Information Form), report, schedules and computation tie together cleanly and consistently?
Want an eligibility sense check for software?
We can review your projects and methodology, identify the strongest qualifying areas, and help you present a clear, technically grounded narrative.
Request a call back