Assistive Tech: How to Prevent Returns and Credit Erosion in Publicly Funded Cases
Protect your cash flow by building clean evidence trails and preventing avoidable returns in publicly funded assistive technology cases.
The Hidden Cost of Returns in Publicly Funded AT
Returns and credit notes aren't just annoying operational noise in assistive technology—they're a direct hit to cash you already mentally "counted." In publicly funded or reimbursed cases, that hit is often worse because the file is gated: what gets paid isn't just what was delivered, it's what can be validated against an approved need, an agreed configuration, and a clean evidence trail.
That's the trap: you can do great work for the end user and still lose money if the paper trail can't explain why a change happened, who approved it, and what was actually provided.
What you want is boring and reliable: fewer returns, fewer partial credits, and fewer "we'll pay once this is clarified" holds—without adding a bunch of bureaucracy.
What "returns" and "credit erosion" mean in publicly funded AT
Returns
In this context, a return isn't only a device physically coming back. Returns include swaps, replacements, accessory changes, reconfigurations, and "we can't use it" outcomes that force you to unwind a sale you expected to collect.
Credit Erosion
Credit erosion is the quieter version: you still keep the case, but you give up margin or principal through partial credits, free replacements, unbillable revisit time, or "goodwill" add-ons that never make it into reimbursable documentation. The end result is the same—less cash from the case than planned, and more time spent defending the file.

The most important mindset shift is this: in publicly funded cases, a return is rarely "just a customer satisfaction issue." It's usually a requirements and evidence issue—the file no longer cleanly matches the approval, the delivery, and the acceptance story.
The gates your case must pass to stay paid
Most credit erosion happens when a case fails one of five gates. You can use these gates as a diagnostic: whenever a return or credit note shows up, ask which gate broke.
1. Authorization: what was approved (and for what purpose)
2. Configuration: what you ordered and why it matches the approved need
3. Delivery/Setup: what was provided, installed, trained, and when
4. Acceptance Evidence: what proves the user/case contact accepted the delivered configuration
5. Validation/Payment: what the payor can match and clear without a query
This framing matters because "doing the right thing" operationally only protects your cash if the file can prove it quickly. The validator is trying to answer simple questions: Is this the approved item? If it changed, is there a clear, dated reason and sign-off? Do the identifiers, dates, and quantities match across documents?
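To show how gate-tagging can work as a diagnostic, here's a minimal Python sketch of an internal log that tallies returns by failed gate. The gate names come from the list above; the ReturnEvent fields, case IDs, and sample reasons are hypothetical illustrations, not a prescribed schema.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum


class Gate(Enum):
    """The five gates a publicly funded AT case must pass to stay paid."""
    AUTHORIZATION = "authorization"
    CONFIGURATION = "configuration"
    DELIVERY_SETUP = "delivery/setup"
    ACCEPTANCE_EVIDENCE = "acceptance evidence"
    VALIDATION_PAYMENT = "validation/payment"


@dataclass
class ReturnEvent:
    """One return or credit note, tagged to the gate that broke."""
    case_id: str       # internal case reference (hypothetical field)
    failed_gate: Gate  # which of the five gates broke
    note: str          # plain-language reason, kept for pattern review


def gate_breakdown(events: list[ReturnEvent]) -> Counter:
    """Tally returns/credits by failed gate so patterns surface quickly."""
    return Counter(e.failed_gate for e in events)


# Hypothetical sample: three credits, two of them configuration failures.
events = [
    ReturnEvent("AT-1041", Gate.CONFIGURATION, "accessory pack assumed, never signed off"),
    ReturnEvent("AT-1058", Gate.CONFIGURATION, "wrong mounting option ordered"),
    ReturnEvent("AT-1062", Gate.ACCEPTANCE_EVIDENCE, "no acceptance matching delivered config"),
]
print(gate_breakdown(events).most_common())
```

Even a spreadsheet with the same three columns does the job; the design choice that matters is that every return gets tagged to exactly one gate.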
Why publicly funded AT returns happen
Most avoidable returns come from a mismatch between what was recommended and what actually works in the user's real environment. The device may be technically correct, but practically wrong: the layout at home, the user's routine, transport constraints, compatibility with existing equipment, or a misunderstood accessory requirement. When that gap shows up after delivery, the fix often looks like a return, a swap, or "just send the missing accessory"—which is where credit erosion starts.
The second driver is insufficient setup and training, especially when responsibility is fuzzy. If the user (or a caregiver) doesn't know how to use or maintain the equipment, "non-use" can get interpreted as "not fit for purpose," and the path of least resistance becomes taking it back.
The third driver is documentation that's too vague to defend. Publicly funded workflows punish ambiguity. If "what was agreed" isn't pinned down (configuration, accessories, constraints, expectations on maintenance and upgrades), then every post-delivery adjustment becomes arguable—and arguable files get held.

A common example is the accessory pack that was "assumed" but never spelled out in the signed configuration summary. The result is a predictable cycle: delivery happens, the user tries to use the device in the real world, something critical is missing, and the team either ships parts for free or issues a credit to close the loop. The fix is rarely complicated; it's usually a one-page sign-off that lists the exact configuration and accessories, plus a short note of known constraints (space, transport, storage, compatibility) so everyone is aligned before anything ships.
Pre-supply controls that prevent returns before they start
If you want fewer returns, you win before ordering—not after delivery. The goal is to reduce "surprise" by making requirements explicit, forcing an internal check, and aligning expectations with the professional or case contact who will later be asked, "Why did this change?"
1. Requirements Record: Start with a requirements record that's more concrete than a sales note. You're not doing clinical work; you're capturing operational facts: environment constraints, compatibility assumptions, accessory dependencies, and what "success" looks like in plain language.
2. Configuration Validation: Make configuration a controlled step: someone other than the original salesperson should validate that the ordered configuration matches the requirements record and the approval summary.
3. Configuration Summary: At a practical level, the strongest pre-supply control is a short, standardized configuration summary that goes to the relevant professional or case contact before the order is placed. It doesn't need to be long. It needs to be precise: item, model, key options, accessories, and anything that would later become a dispute ("includes mounting kit," "includes spare battery," "requires doorway clearance," "compatible with X"). If you can get a written "confirmed" back (email is fine), you've already removed a big chunk of downstream return risk.
This is also where you prevent the most common validator failure mode: a file where the invoice describes one thing, the delivery evidence implies another, and the approval text sounds like a third. That mismatch is what creates queries, holds, and pressure to "just issue a credit so we can move on."
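As an illustration, here's a minimal sketch of that one-page summary rendered as plain text for an email sign-off. The field names and sample content are hypothetical; the point is precision, not this particular format.

```python
from dataclasses import dataclass, field


@dataclass
class ConfigSummary:
    """The one-page pre-order summary sent to the case contact for sign-off."""
    item: str
    model: str
    options: list[str] = field(default_factory=list)
    accessories: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)  # space, transport, compatibility


def render(summary: ConfigSummary) -> str:
    """Render the summary as plain text, ready to paste into an email."""
    lines = [
        f"Item: {summary.item}",
        f"Model: {summary.model}",
        "Options: " + (", ".join(summary.options) or "none"),
        "Accessories included: " + (", ".join(summary.accessories) or "none"),
        "Known constraints: " + ("; ".join(summary.constraints) or "none noted"),
        "",
        "Please reply 'confirmed' if this matches the approved need.",
    ]
    return "\n".join(lines)


# Hypothetical example mirroring the accessory-pack scenario above.
print(render(ConfigSummary(
    item="Powered wheelchair",
    model="Model X-200",
    options=["tilt-in-space seat"],
    accessories=["mounting kit", "spare battery"],
    constraints=["requires 80 cm doorway clearance"],
)))
```

A reply of "confirmed" to this exact text becomes the artifact you point to if a mismatch surfaces later.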

Even with a tighter process, publicly funded cases can still take time to validate and pay—especially when multiple parties touch the file. Some vendors choose to separate operational quality from cash timing by financing approved, well-documented receivables once delivery and acceptance evidence are in place. If you're seeing strong demand but cash is getting trapped in the validation window, it may be worth exploring a specialist like MFFG that focuses on advancing funds against public-body receivables—so ops can stay disciplined without forcing your growth to match the pay cycle.
Post-delivery practices that reduce credit erosion
Post-delivery is where small issues either get resolved cheaply—or turn into expensive reversals. Most credit erosion isn't a dramatic return; it's a drip: extra visits, "quick" swaps, and replacement parts you don't invoice because the file feels too messy to defend.
Lightweight Follow-Up Cadence
A lightweight follow-up cadence is one of the highest ROI moves you can make. A short check-in soon after delivery catches misunderstandings early ("we didn't know how to adjust X," "we're missing Y," "it doesn't fit here") when you can solve them with support, a minor adjustment, or a documented add-on instead of a return. The key is to treat the follow-up as evidence creation, not just customer care: log what was reported, what was advised, and what was accepted.
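A minimal sketch of what that evidence-oriented follow-up log could look like, with hypothetical field names and a 14-day check-in window chosen purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class FollowUpNote:
    """One post-delivery check-in, treated as evidence, not just customer care."""
    case_id: str
    checked_on: date
    reported: str  # what the user or caregiver said
    advised: str   # what your team recommended
    accepted: str  # what was agreed as the outcome


def follow_up_due(delivered_on: date, days: int = 14) -> date:
    """When the first check-in should happen (14 days is an assumption, not a rule)."""
    return delivered_on + timedelta(days=days)


note = FollowUpNote(
    case_id="AT-1041",
    checked_on=follow_up_due(date(2024, 3, 1)),
    reported="user unsure how to adjust the armrest height",
    advised="walked through adjustment by phone; sent quickstart sheet",
    accepted="no hardware change needed; issue resolved in the call",
)
```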
Support and Troubleshooting
The second habit is basic support and troubleshooting that's designed to prevent "non-use" from turning into "not fit for purpose." That can be as simple as a one-page quickstart, a short video, or a phone check-in. The goal isn't to build a call center; it's to stop preventable confusion from becoming a return request.
Clean Change Log
The third habit is a clean "change log" for any adjustment, replacement, or swap. Publicly funded workflows don't like surprises, but they can tolerate change if the story is consistent and documented. When something changes, capture four things in one place: the date, the reason (in plain language), what changed, and who approved it. Then attach the evidence (email, portal note, signed summary). When a validator later asks why a replacement happened, you don't want a long narrative—you want a neat chain of artifacts.
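Here's a minimal sketch of a change-log entry that enforces those four facts plus an evidence pointer; all field names and the sample data are hypothetical:

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ChangeEntry:
    """One adjustment, replacement, or swap: date, reason, what, who, plus proof."""
    changed_on: date
    reason: str        # plain-language reason for the change
    what_changed: str  # the delta against the signed configuration
    approved_by: str   # name/role of whoever signed off
    evidence: str      # pointer to the artifact (email, portal note, signed summary)

    def __post_init__(self) -> None:
        # Refuse to log a change with any piece of the story missing.
        for field_name in ("reason", "what_changed", "approved_by", "evidence"):
            if not getattr(self, field_name).strip():
                raise ValueError(f"change entry is missing '{field_name}'")


change_log: list[ChangeEntry] = []
change_log.append(ChangeEntry(
    changed_on=date(2024, 3, 20),
    reason="joystick guard did not fit user's transport setup",
    what_changed="swapped guard A for low-profile guard B",
    approved_by="case contact (occupational therapist)",
    evidence="email 'RE: guard swap', 2024-03-20",
))
```

Refusing to record an incomplete entry is the design choice that matters: a change either has its full story captured at the moment it happens, or it doesn't get logged as done.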
Two short scenarios where small process changes prevented a return or credit note
Scenario 1: Preventing a return with a pre-order sign-off
A vendor had repeat issues where a device came back because the user expected a different configuration than what arrived. The fix was a one-page configuration summary sent to the case contact before ordering, with a simple "reply confirmed." The next time a mismatch surfaced, the vendor could show the confirmation and the exact configuration that was approved. Instead of a return, the case was handled as a documented change request—protecting both the relationship and the expected cash.
Scenario 2: Preventing credit erosion with a follow-up and change log
Another team kept issuing partial credits after "minor adjustments" because they didn't want to trigger a payor review. They added two steps: a scheduled post-delivery check-in and a single change log page in the file. When an accessory swap was requested, it was documented as an approved delta with a date and reason, rather than informal free work. The next validation question was answered in one email with the change log and supporting proof—no credit note required.
A simple checklist to reduce returns and stop credit erosion
Use this checklist when reviewing why previous credits or returns occurred and what to fix. The goal is not perfection; it's repeatable control.
Requirements clarity
Were environment constraints and compatibility assumptions captured in plain language (space, transport, existing equipment, caregiver involvement)?
Configuration precision
Did the file include a clear list of what was being supplied, including accessories and options—not just a generic description?
Independent internal check
Did someone other than the original salesperson validate that the configuration matched the requirements record and the approval summary?
Expectation alignment
Was the relevant professional or case contact asked to confirm the configuration before ordering?
Delivery/setup evidence
Do you have dated proof of delivery and a simple record of what was installed/trained?
Acceptance evidence
Is there a clear acceptance confirmation that matches the delivered configuration (not just "delivered")?
Follow-up cadence
Did you check in shortly after delivery to catch issues early, and did you log the outcome?
Change documentation
For any adjustment or replacement, is there a dated reason, approval, and a clear record of what changed?
Root cause tagging
For each return/credit, can you tag it to one of the five gates (authorization, configuration, delivery/setup, acceptance evidence, validation/payment)?
Process fix
What single workflow change would have prevented it (a form, a sign-off step, a second check, a follow-up trigger)?

Returns and credits are rarely random. If you treat each one as a data point against the same gates and artifacts, patterns show up fast—and the fixes tend to be simple. The payoff is not only fewer devices coming back. It's fewer time-sinks, fewer uncomfortable "goodwill" concessions, and a smoother path from delivery to cash.