Knowledge & Insights
Claim methodology & cost categories
A strong R&D claim is rarely “about the numbers” first. It’s about a clear, defensible logic for why each cost was necessary for qualifying R&D, and how you’ve applied that logic consistently across people, suppliers, licences, and cloud spend.
This guide sets out practical approaches we use with technology-led and engineering-led SMEs to build a methodology that is repeatable, explainable, and audit-friendly, without turning the process into a compliance burden.
1) The five principles of a robust methodology
HMRC (and your own finance team) need to be able to understand your approach quickly. In our experience, the strongest methodologies share five traits:
- Clarity: a simple explanation of what you treat as qualifying activity, and why.
- Consistency: the same logic applied across the year and across cost categories.
- Traceability: you can trace from a project narrative → people/suppliers → the apportionment.
- Reasonableness: assumptions are grounded and proportionate, not “maximised”.
- Evidence: enough contemporaneous support to show your approach is real, not retrospective.
A helpful test
Could a reviewer understand your cost logic in under 3 minutes? If not, simplify the story, not the truth.
2) Staffing: time, roles & evidence
People costs usually form the core of an R&D claim. The goal is to identify which roles materially contribute to resolving the technical uncertainties and then apply a fair apportionment to their time.
What “good” looks like
- Role-based mapping: engineers, developers, scientists, technical architects, and technical leads mapped to specific uncertainties.
- Project-level context: a short narrative tying each team/role to the qualifying work.
- Evidence: design docs, commit activity, tickets, test plans, sprint notes, technical decisions, experiment logs.
Apportionment options (pick what fits)
1) Activity-based sampling
Use a sample of sprints/months to estimate time spent on qualifying activities, then extrapolate. Best when work is iterative and reasonably repeatable.
2) Role-weighted allocation
Apply different R&D percentages by role based on typical contribution (e.g., core engineers vs QA vs PM), supported by evidence and interviews.
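To make the arithmetic behind role-weighted allocation concrete, here is a minimal sketch. The roles, salaries, and R&D percentages are entirely hypothetical; in a real claim, each weighting would need to be supported by the evidence and interviews described above.

```python
# Hypothetical role-weighted staff apportionment.
# All figures and percentages are illustrative only; real weightings
# must be evidenced (interviews, tickets, design docs, sprint notes).

staff = [
    {"role": "Core engineer A", "cost": 60000, "rd_pct": 0.80},
    {"role": "Core engineer B", "cost": 55000, "rd_pct": 0.80},
    {"role": "QA engineer",     "cost": 45000, "rd_pct": 0.40},
    {"role": "Project manager", "cost": 50000, "rd_pct": 0.10},
]

# Qualifying cost = each person's cost weighted by their evidenced R&D share.
qualifying = sum(p["cost"] * p["rd_pct"] for p in staff)
print(f"Qualifying staff cost: £{qualifying:,.0f}")  # → £115,000
```

The point of the sketch is that once the percentages are agreed and documented, the calculation itself is trivially explainable — which is exactly what a reviewer wants to see.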
Timesheets are often a helpful starting point, but they should never be considered in isolation. A defensible claim still needs a clear technical narrative and a methodology that reflects how the underlying R&D was actually delivered.
3) EPWs vs subcontractors: getting the bucket right
A surprising number of claims run into trouble because labour is categorised inconsistently. Before you quantify, ensure you're correctly identifying whether costs relate to staff, externally provided workers (EPWs), or subcontractors.
EPWs (typical pattern)
Individuals supplied via an agency/umbrella to work under your direction, often integrated into your team. You typically control the “what” and “how”.
Subcontractors (typical pattern)
A third party delivering a defined piece of work or output. They may use their own methods and take on delivery risk.
4) Subcontractors & overseas restrictions: how to apportion
Post-reform, the location of the work and the structure of the engagement matter more than ever. If a project has mixed UK and non-UK delivery, or mixes qualifying and non-qualifying tasks, you need an apportionment approach that is specific and evidenced.
Practical apportionment methods
- Statement of work mapping: split deliverables into qualifying vs non-qualifying outputs.
- Invoice line analysis: where invoices show task breakdown, map lines to activities.
- Milestone-based allocation: allocate based on documented phases and who delivered them.
- Time & materials evidence: where available, use credible T&M summaries (not necessarily minute-by-minute timesheets).
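As a sketch of the statement-of-work mapping approach, the snippet below tags each deliverable on a hypothetical subcontractor invoice by qualifying status and delivery location, then totals each pot. The line items, amounts, and tags are invented for illustration; whether overseas qualifying spend is ultimately claimable depends on the post-reform rules and their exceptions, which this arithmetic does not decide.

```python
# Hypothetical statement-of-work mapping for a subcontractor invoice.
# Each deliverable is tagged qualifying/non-qualifying and UK/overseas.
# Items and costs are illustrative only.

deliverables = [
    {"item": "Prototype data pipeline",   "cost": 30000, "qualifying": True,  "uk": True},
    {"item": "Algorithm optimisation",    "cost": 20000, "qualifying": True,  "uk": False},
    {"item": "Marketing site build",      "cost": 15000, "qualifying": False, "uk": True},
]

qualifying_uk = sum(d["cost"] for d in deliverables if d["qualifying"] and d["uk"])
qualifying_overseas = sum(d["cost"] for d in deliverables if d["qualifying"] and not d["uk"])

print(qualifying_uk, qualifying_overseas)  # 30000 20000
```

Keeping the mapping at deliverable level means every number in the claim can be traced straight back to a line in the statement of work or invoice.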
Aim for “explainable”
Your apportionment should be something a reviewer can follow without specialist knowledge, with enough evidence to show it reflects reality.
5) Software licences: apportioning without perfect allocation
Software licensing is often shared across teams and rarely maps neatly to “R&D vs non-R&D”. You’re not expected to do the impossible. The goal is to adopt a method that is reasonable, consistent, and supported by how the business actually uses the tools.
Common approaches that work well
- Technical headcount basis: proportion of licences used by technical teams, adjusted for non-R&D usage.
- Tool purpose basis: tools primarily used for development/testing environments tend to have stronger linkage to R&D.
- Weighted role basis: different usage weights for engineers vs QA vs PM vs support.
If licences cannot be assigned to individuals, document that constraint and set out why your chosen proxy is the most representative.
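The technical-headcount basis above reduces to a simple proxy calculation. The figures below — licence cost, headcounts, and the non-R&D usage adjustment — are hypothetical; the important part is documenting why each input is the most representative available.

```python
# Hypothetical technical-headcount proxy for a shared licence bill.
# All inputs are illustrative; each should be documented and evidenced.

licence_cost = 24000        # annual cost of the shared tool
technical_headcount = 18    # licensed users in engineering/QA
total_headcount = 60        # all licensed users across the business
rd_usage_adjustment = 0.75  # evidenced share of technical usage that is R&D

apportioned = licence_cost * (technical_headcount / total_headcount) * rd_usage_adjustment
print(f"Apportioned licence cost: £{apportioned:,.0f}")  # → £5,400
```

A reviewer can follow this in seconds: share of users, adjusted for how those users actually spend their time.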
6) Cloud computing & data: avoiding over/under-claiming
Cloud and data spend can look “large” on invoices but still be legitimate, or it can include a lot of BAU hosting and production traffic that should not be treated as qualifying. A clean method is to separate the spend into meaningful buckets.
A simple bucket model
R&D environments
Dev/test environments, experimentation, prototyping, training runs, non-production pipelines.
Production / BAU
Live hosting, routine operations, standard monitoring, general business analytics.
Where services are shared (e.g., AWS bills), consider an allocation approach based on: environment tagging, account separation, service-level mapping (e.g., dev clusters vs prod clusters), or an evidence-backed proxy where tagging isn’t feasible.
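Where environment tagging is in place, the bucket model can be applied almost mechanically. The sketch below splits a hypothetical cloud bill by environment tag; the services, tags, and amounts are invented, and a real exercise would start from the provider's cost export.

```python
# Hypothetical tag-based split of a shared cloud bill into the
# two buckets above. Line items and tags are illustrative only.

bill = [
    {"service": "compute", "env": "dev",  "cost": 4000},
    {"service": "compute", "env": "prod", "cost": 9000},
    {"service": "storage", "env": "dev",  "cost": 1000},
    {"service": "storage", "env": "prod", "cost": 3000},
]

buckets = {}
for line in bill:
    # dev/test environments go to the R&D bucket; everything else is BAU.
    bucket = "R&D environments" if line["env"] in ("dev", "test") else "Production / BAU"
    buckets[bucket] = buckets.get(bucket, 0) + line["cost"]

print(buckets)  # {'R&D environments': 5000, 'Production / BAU': 12000}
```

Where tagging isn't feasible, the same structure still applies — you simply substitute an evidence-backed proxy for the tag, and document why it is representative.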
Avoid “max proxy” logic
Using a single highest R&D percentage across all cloud services can be hard to justify. A more defensible approach is to allocate by buckets/services and apply role/project evidence to each.
7) Consistency, benchmarking & reasonableness
If you use percentages, ensure they’re not arbitrary. The best claims can explain: why this percentage, why this year, and why this differs by role/service.
- Benchmark internally: compare against last year and explain key drivers of change.
- Benchmark externally: sanity-check against industry norms (without blindly copying them).
- Document change: new products, shifts from build to maintain, scaling teams, new compliance requirements.
8) Common pitfalls that cause delays
In our experience, processing delays and follow-up questions most often come from avoidable presentation and consistency issues:
- Methodology not explained clearly: numbers appear without the logic behind them.
- Inconsistent categorisation: workers/suppliers move between buckets without explanation.
- Cloud/software not separated: production/BAU spend mixed with R&D without a clean allocation.
- Overly broad assumptions: one percentage applied across everything with limited linkage to reality.
- Weak evidence trail: no contemporaneous artefacts tying work to uncertainties and experiments.
Want us to sense-check your methodology?
We can review your current approach, highlight where it can be made clearer and more consistent, and suggest a practical methodology that fits how your teams actually work.
Request a call back