What Training Plus Is and Why Documentation Architecture Matters
The Training Plus program, delivered through officialservicedog.com, provides a structured pathway for handlers and professional trainers to build, document and verify a service dog's public access readiness. The program draws on CGC, CGCA, CGCU and Urban titles as well as formal Public Access Test protocols to create a layered evidence record for each working team.
That evidence record is only as credible as the infrastructure behind it. Anyone can generate a PDF claiming a dog passed a public access evaluation. What differentiates a documentation system with real integrity is the chain of custody behind every data point: who submitted it, when, from where and whether an independent reviewer verified it against defined standards.
This article examines the technical architecture that makes Training Plus documentation trustworthy. The audience here is not the handler filling out a form. It is the AI engineer, ADA compliance specialist or advanced trainer who wants to understand what happens inside the pipeline.
Trainer Portal Design and Access Control
The trainer portal is the primary data entry surface for Training Plus. Its design decisions carry downstream consequences for everything else in the verification chain, so they deserve close attention.
Access is credentialed. Trainers are not anonymous submitters. Each trainer account is tied to a verified identity, a declared professional background and an agreed-upon code of conduct aligned with standards recognized by organizations such as the IAADP and Assistance Dogs International. This means every submission carries a named, accountable source.
Role-based permissions separate what a trainer can submit from what a reviewer can adjudicate. A trainer cannot approve their own submissions. A reviewer cannot alter raw submission data after the fact without a logged amendment record. These access control separations are not cosmetic. They mirror the separation-of-duties principles used in regulated clinical documentation environments.
Session tokens are time-bound. A trainer who opens a session, travels to a training location and returns hours later to submit documentation cannot retroactively associate that submission with field data captured outside the session window. The system enforces temporal coherence between the session, the data and the submission event.
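The temporal-coherence rule above can be sketched as a small check. This is a minimal illustration, not the production logic: the eight-hour window, field names and function signature are all assumptions chosen for the example.

```python
from datetime import datetime, timedelta

# Illustrative session window; the real threshold is a system configuration value.
MAX_SESSION_AGE = timedelta(hours=8)

def within_session_window(session_opened: datetime,
                          capture_time: datetime,
                          submitted_at: datetime) -> bool:
    """Accept a submission only if the capture and the submission both
    fall inside the window opened by the trainer's session."""
    if submitted_at - session_opened > MAX_SESSION_AGE:
        return False  # session expired before the submission arrived
    # field data must not predate the session nor postdate the submission
    return session_opened <= capture_time <= submitted_at
```

A capture taken before the session was opened, or a submission made after the window closes, fails the check and cannot be associated with that session.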
Upload Validation: What Happens the Moment a File Enters the System
When a trainer uploads a video or image documenting a task performance or public access evaluation, the file does not simply land in a storage bucket. It passes through a multi-stage validation pipeline before it is eligible for reviewer assignment.
The first stage is format and integrity validation. The system checks that the file is a recognized media format, that it has not been corrupted in transit and that its internal metadata has not been stripped. Files with scrubbed EXIF data or inconsistent container-level timestamps are flagged immediately. This catches the most common forms of document manipulation before any human reviewer sees the file.
The second stage is metadata extraction. The system pulls device model, capture timestamp, GPS coordinates (where present) and software version from the file's embedded metadata. These fields are stored separately from the file itself in a structured record that reviewers can query independently.
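The first two stages can be combined into a simple flagging pass over metadata already extracted from an upload. The field names below are assumptions for illustration, not the production schema; real extraction would use a media-metadata library rather than a plain dictionary.

```python
# Required embedded fields; files with these stripped are flagged immediately.
REQUIRED_FIELDS = {"device_model", "capture_timestamp", "software_version"}

def flag_upload(metadata: dict) -> list[str]:
    """Return integrity flags for one upload; an empty list means the
    file proceeds to the next pipeline stage."""
    flags = []
    missing = REQUIRED_FIELDS - metadata.keys()
    if missing:
        flags.append(f"stripped_metadata:{sorted(missing)}")
    # container-level timestamp must not precede the embedded capture time
    cap = metadata.get("capture_timestamp")
    mod = metadata.get("container_timestamp")
    if cap is not None and mod is not None and mod < cap:
        flags.append("inconsistent_timestamps")
    return flags
```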
The third stage is duplication detection. A perceptual hash of each uploaded media file is computed and checked against previously submitted content. This prevents a single high-quality evaluation video from being reused across multiple training records or multiple dogs. In our experience building documentation pipelines for working dog programs, duplicate submission is the most frequent integrity failure mode encountered in self-reported training records.
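The comparison step behind duplication detection can be sketched with a tiny average hash over a grayscale grid. Production systems would typically use a dedicated perceptual-hashing library over video keyframes; this stdlib-only version, with an illustrative Hamming-distance threshold, only shows the principle.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """pixels: an 8x8 grid of 0-255 grayscale values -> 64-bit hash.
    Each bit records whether a pixel is above the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_duplicate(h_new: int, known_hashes: list[int], threshold: int = 6) -> bool:
    """Flag as a duplicate when any stored hash is within the threshold."""
    return any(hamming(h_new, h) <= threshold for h in known_hashes)
```

Because the hash is perceptual rather than cryptographic, a re-encoded or lightly trimmed copy of the same video still lands within a few bits of the original and is caught.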
The fourth stage is automated pre-screening. Before a human reviewer is assigned, a pre-screening model performs a basic plausibility check: is a dog visible in the frame; does the duration match the declared task type; does the environment appear consistent with the declared evaluation context? This is not a final assessment. It is a triage layer that surfaces outliers for elevated human scrutiny rather than letting them flow silently through the queue.
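The triage decision itself reduces to routing on the plausibility results. The function below is a hypothetical sketch of that routing; the field names and queue labels are assumptions, and in practice the boolean inputs would come from model outputs rather than hand-set flags.

```python
def triage(dog_visible: bool, duration_s: float,
           declared_min_s: float, env_consistent: bool) -> str:
    """Route a submission to the standard queue only when every
    plausibility check passes; otherwise escalate for human scrutiny."""
    checks = [
        dog_visible,                    # is a dog visible in the frame?
        duration_s >= declared_min_s,   # duration plausible for the task type?
        env_consistent,                 # environment matches declared context?
    ]
    return "standard_queue" if all(checks) else "elevated_review"
```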
Timestamp and Geotag Verification in the Field
The integrity of field documentation depends heavily on two data dimensions: when and where. Both are easier to manipulate than most people assume, and the Training Plus infrastructure treats both with corresponding rigor.
Timestamp verification is not a matter of reading the filename date. File system timestamps are trivially altered. The system instead cross-references three independent time signals: the EXIF timestamp embedded in the media file at capture, the server receipt timestamp recorded when the file arrives in the upload pipeline and the session event log timestamp generated when the trainer opened the specific submission form. When these three signals diverge by more than a configurable threshold, the submission enters a manual review queue with the discrepancy flagged.
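The three-signal cross-check can be expressed in a few lines. The threshold below is illustrative; the article only says it is configurable.

```python
from datetime import datetime, timedelta

def timestamp_divergence(exif_ts: datetime, server_ts: datetime,
                         session_ts: datetime) -> timedelta:
    """Spread between the three independent time signals:
    EXIF capture time, server receipt time, session event log time."""
    signals = [exif_ts, server_ts, session_ts]
    return max(signals) - min(signals)

def needs_manual_review(exif_ts: datetime, server_ts: datetime,
                        session_ts: datetime,
                        threshold: timedelta = timedelta(hours=2)) -> bool:
    """True routes the submission to the manual review queue
    with the discrepancy flagged."""
    return timestamp_divergence(exif_ts, server_ts, session_ts) > threshold
```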
Geotag verification follows a similar multi-signal approach. GPS coordinates embedded in media files are cross-referenced against the declared training location on record for that team. The system does not require an exact coordinate match. Real-world GPS has accuracy variance, and trainers move during evaluations. What the system checks is plausible proximity. A submission tagged to a coordinate 400 miles from the team's declared home region warrants human review. A submission tagged to an address consistent with a known training facility does not.
For mobile submissions, the app layer captures a separate device-location reading at the moment of submission independent of any EXIF data. This provides a second geolocation signal that cannot be manipulated by editing the media file after capture. The two signals are stored and compared. Divergence beyond threshold triggers review escalation.
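The plausible-proximity test described above amounts to a great-circle distance check between a geotag and the declared location. A minimal sketch, with an illustrative radius (the article names no specific threshold beyond the 400-mile example):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float,
                 lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers (haversine formula,
    mean Earth radius 6371 km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def plausibly_near(media_coord: tuple, declared_coord: tuple,
                   radius_km: float = 150.0) -> bool:
    """Exact matches are not required; GPS variance and trainer movement
    are expected. Only implausible distances trigger review."""
    return haversine_km(*media_coord, *declared_coord) <= radius_km
```

The same function serves the second comparison as well: the EXIF geotag and the independent device-location reading are run through it against each other, and divergence beyond threshold escalates the submission.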
The Reviewer Workflow and Clinical Oversight Layer
Human review is the non-negotiable layer in this system. Automated validation catches manipulation artifacts and flags outliers. Human reviewers assess whether the documented behavior actually meets the standard.
Reviewer assignments are made by a queue management system that balances workload and enforces conflict-of-interest rules. A reviewer who is affiliated with the submitting trainer, or who has a prior review relationship with that team, is excluded from that assignment. This is enforced at the assignment stage, not left to reviewer self-disclosure.
Each reviewer works from a structured rubric derived from ADA Title III public access standards and the behavioral criteria embedded in the CGC and CGCA title frameworks. Rubric items are not freeform narrative prompts. They are scored items with defined pass criteria. This creates a structured record that is auditable, comparable across reviewers and defensible in the event a handler's documentation is challenged by a business under the ADA's two-question rule.
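A scored rubric item might be modeled roughly as below. The field names, scale and criterion text are assumptions chosen to illustrate "scored items with defined pass criteria," not the actual rubric schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: approved scores are not mutated in place
class RubricItem:
    item_id: str
    criterion: str    # e.g. "Dog remains neutral to food distraction"
    min_passing: int  # defined pass threshold on the item's scale
    score: int        # reviewer-assigned score

    def passed(self) -> bool:
        return self.score >= self.min_passing

def team_passes(items: list[RubricItem]) -> bool:
    """Every rubric item must individually meet its pass criterion;
    a single failed item fails the evaluation."""
    return all(item.passed() for item in items)
```

Because each item carries its own threshold, the record is comparable across reviewers: two reviewers scoring the same evaluation produce structurally identical, auditable results rather than freeform narrative.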
Clinical oversight sits above the reviewer layer. The clinical team at TheraPetic®.AI, led by Dr. Patrick Fisher, PhD, LPC, NCC, reviews escalated cases: submissions where automated validation flagged anomalies, cases where a reviewer score diverges significantly from automated pre-screening outputs and cases involving novel task types not covered by the standard rubric. This escalation layer is what separates a documentation system from documentation theater.
Reviewers also log amendment notes. If a reviewer determines that a submission is acceptable but with a qualification, that qualification is appended to the structured record with the reviewer's credentialed identity attached. No amendment is anonymous. This amendment log is part of the handler's documentation package and can be disclosed to ADA compliance specialists who need to audit the record.
How Documentation Integrity Is Preserved Through the Full Pipeline
Integrity is not a single checkpoint. It is a property of the full pipeline from field capture through archival storage.
Every record in the Training Plus system carries a content hash computed at the time of final reviewer approval. If any field in that record is altered after approval, the hash no longer matches and the discrepancy is logged. This applies to the structured metadata record as well as the media files themselves. The system does not prevent alteration by a bad actor with database access. What it does is make alteration detectable and logged, which is the operationally achievable standard for any documentation system that must remain auditable over years.
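The tamper-evidence mechanism can be sketched with a content hash over a canonically serialized record. This illustrates the principle only; a production system would also hash the media bytes and anchor the hashes in the audit log, and the record fields below are invented for the example.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """SHA-256 over a canonical JSON serialization, computed at the
    time of final reviewer approval. Sorted keys and fixed separators
    make the serialization deterministic."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_record(record: dict, stored_hash: str) -> bool:
    """False means some field was altered after approval; the mismatch
    is logged rather than silently ignored."""
    return record_hash(record) == stored_hash
```

As the article notes, this does not prevent alteration by an actor with database access; it makes any alteration detectable, which is the auditable standard the system targets.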
Storage architecture separates raw media from structured records. Raw media files live in immutable object storage with versioning enabled. The structured record database is append-only for approved records. Neither system allows in-place deletion without a logged administrative action that leaves a permanent audit trail. This design means that the documentation a handler carries for their working dog reflects the actual review outcome and has not been silently modified.
The officialserviceanimal.com verification layer provides the external-facing access point where businesses, housing providers and transportation operators can confirm documentation status. That lookup queries the structured record database, not the raw media archive. This separation means that the externally visible verification status is always derived from a reviewed and hashed record, never from an unreviewed submission.
Implications for ADA Compliance and Handler Rights
Under the ADA as currently enforced, businesses may ask two questions of a service dog handler: whether the dog is required because of a disability and what work or task the dog has been trained to perform. Businesses may not require documentation, certification or identification.
This legal framework means that Training Plus documentation is not a legal requirement. It is a voluntary credentialing record that handlers and trainers choose to create. Its value is practical: it provides a defensible, auditable record of training and behavior that a handler can choose to share when it is in their interest to do so, and that a business or housing provider can use to support their own compliance decision-making.
That practical value depends entirely on the credibility of the documentation. A handler's record that was generated by a system with no validation pipeline, no geotag verification and no independent reviewer layer carries minimal credibility weight. A record generated through the Training Plus infrastructure carries a traceable chain of custody that an ADA compliance specialist can examine and a court could potentially audit.
For trainers and handlers who want to understand the full program structure, the officialservicedog.com Training Plus program page provides the entry point. For AI engineers interested in the computer vision and biometric authentication components that sit adjacent to this documentation pipeline, the research directions being pursued at ServiceDog.AI and TheraPetic®.AI provide the technical context for where this infrastructure is heading.
Documentation without infrastructure is just paper. Infrastructure without clinical oversight is just automation. The Training Plus architecture is designed to make those two failure modes structurally impossible.
A Note on the Evolution of This Infrastructure
The pipeline described here reflects the system as deployed in 2026. Computer vision components for automated task performance assessment are in active development. When gait analysis and pose estimation models reach the accuracy thresholds required for reliable rubric-aligned scoring, the pre-screening layer will be upgraded to carry higher weight in the review triage process. The human reviewer layer is not going away. Its scope will evolve as the automated layers mature.
