Effective date: January 14, 2026 · Last updated: January 14, 2026 · Version 1.0
Supplement to the MVPR Service Level Agreement
Parties
This AI Transparency & Governance Addendum (“Addendum”) supplements the MVPR Service Level Agreement (“SLA”) and forms part of the Master Services Agreement (“MSA”) between:
MV Public Relations Ltd (“MVPR,” “We,” “Us,” “Our”) and its Customers (“Customer,” “Client,” “You,” “Your”)
1. Purpose and scope
1.1 Purpose
This Addendum provides comprehensive information regarding MVPR’s use of Artificial Intelligence (AI) technologies in the provision of Services, including detailed specifications of AI models, governance frameworks, transparency commitments, and compliance measures. This Addendum expands upon Section 7 (AI-Enabled Services) of the SLA.
1.2 Integration with other documents
This Addendum is intended to be read in conjunction with:
- Section 7 of the SLA (AI-Enabled Services)
- Section 8 of the MSA (Data Protection and Customer Data)
- The Data Processing Agreement (DPA)
- MVPR’s Operations Security Policy
- MVPR’s Secure Development Policy
1.3 Updates and versioning
MVPR reviews and updates this Addendum quarterly to reflect material changes in AI capabilities, models, or regulatory requirements. Updated versions are made available via email notification to Customer’s administrative contact, at https://mvpr.io/ai-governance, and upon request by emailing tom@mvpr.io.
2. AI system architecture and models
2.1 AI models currently deployed
MVPR uses the following enterprise-grade AI models in the provision of Services:
Claude (Anthropic)
- Provider: Anthropic Ireland Limited (EEA entity for European customers)
- Primary use cases: Long-form content creation and editing; research synthesis and analysis; interview transcription processing; talking point generation; quality assessment and editorial review
- Context window: Up to 200,000 tokens
- Key capabilities: Advanced reasoning, nuanced understanding, citation accuracy
- Training data cutoff: Updated quarterly — current: January 2025
GPT-4 and GPT-4 Turbo (OpenAI)
- Provider: OpenAI Ireland Ltd. (EEA entity for European customers)
- Primary use cases: Content outline development; headline and hook generation; journalist query response drafting; structured data extraction
- Context window: Up to 128,000 tokens (GPT-4 Turbo)
- Key capabilities: Creative content generation, structured output, consistency
- Training data cutoff: Updated quarterly — current: April 2024
Specialised AI tools
Natural Language Processing (NLP): Sentiment analysis for media coverage; entity extraction for journalist targeting; topic modelling for content relevance.
AI-assisted search and matching: Journalist database semantic search; content-to-opportunity matching algorithms; beat coverage analysis.
2.2 Model selection criteria
- Task suitability: Matching model capabilities to specific use cases
- Quality and accuracy: Demonstrated performance on relevant benchmarks
- Security and privacy: Provider’s data handling and security practices
- Compliance: Provider’s adherence to relevant regulations (GDPR, EU AI Act)
- Cost-effectiveness: Balancing performance with operational efficiency
- EU data residency: Preference for EEA-based processing
2.3 Model update and change management
MVPR will notify Customer of material changes to AI models, including: migration to a fundamentally different model architecture; changes that materially affect output quality or capabilities; and changes required by regulatory compliance.
Notification timeline: Major model migrations — 30 days advance notice. Version updates — notification with monthly service updates. Security patches — immediate implementation with retroactive notification.
Customer opt-out: If a material model change is unacceptable, Customer may request continuation of the previous model version (if technically feasible), terminate affected Services with 30 days’ written notice, or work with MVPR to develop an alternative solution.
2.4 Model training and data usage
- Customer data is not used to train AI models without explicit written consent
- Zero-data retention: Anthropic and OpenAI do not use Customer inputs/outputs for model training
- No model fine-tuning on Customer data unless explicitly contracted
- IP protection: Customer retains all intellectual property rights to inputs and outputs
AI provider terms: Anthropic Commercial Terms (Section B) — full customer ownership of outputs, no training on customer data. OpenAI Services Agreement (Sections 4.1–4.2) — customer owns inputs and outputs; OpenAI does not use customer data for model improvement unless customer explicitly opts in.
3. AI-enabled workflow and human oversight
3.1 Content creation workflow
MVPR employs a structured human-in-the-loop workflow for all AI-assisted content creation:
STAGE 1: STRATEGIC PLANNING (Human-led)
- Client objectives analysis
- Target audience identification
- Messaging framework development
- Approval of content strategy
- [HUMAN DECISION CHECKPOINT]

STAGE 2: RESEARCH & PREPARATION (AI-assisted)
- AI: Industry research synthesis
- AI: Competitive analysis
- AI: Interview transcription
- Human: Research validation and augmentation
- [HUMAN REVIEW CHECKPOINT]

STAGE 3: CONTENT OUTLINE (AI-assisted)
- AI: Structure and outline generation
- AI: Key points and arguments identification
- Human: Outline review and refinement
- Human: Angle and narrative adjustment
- [HUMAN APPROVAL CHECKPOINT]

STAGE 4: DRAFT CREATION (AI-assisted)
- AI: Initial draft generation based on approved outline
- AI: Incorporation of research and talking points
- Human: Comprehensive editorial review
- Human: Tone, voice, and style refinement
- Human: Fact-checking and accuracy verification
- [HUMAN QUALITY CHECKPOINT]

STAGE 5: QUALITY ASSESSMENT (AI-assisted)
- AI: Grammar and readability analysis
- AI: Brand voice consistency check
- AI: SEO and structure optimisation suggestions
- Human: Evaluation of AI recommendations
- Human: Final editorial polish
- [HUMAN FINAL REVIEW]

STAGE 6: CLIENT REVIEW (Human-led)
- Human: Client presentation and explanation
- Client: Feedback and revision requests
- Human: Incorporation of client feedback
- Human: Final approval process
- [CLIENT APPROVAL REQUIRED]

STAGE 7: PUBLICATION (Human-led)
- Human: Final compliance and legal review (if applicable)
- Human: Publication or distribution
- AI: Performance monitoring and optimisation
Mandatory human oversight points:
- Strategic planning: 100% human-driven; AI not involved in strategic decisions
- Research validation: all AI-generated research must be verified by human staff
- Outline approval: no content drafted without human-approved outline
- Editorial review: every AI-generated draft reviewed by professional editor
- Fact-checking: all factual claims verified against authoritative sources
- Client approval: no content published without explicit client authorisation
3.2 Journalist query response workflow
STAGE 1: QUERY INGESTION & FILTERING (AI-assisted)
- AI: Daily monitoring of journalist request feeds (HARO, etc.)
- AI: Keyword-based relevance matching
- AI: Preliminary opportunity assessment
- Human: Review of AI-flagged opportunities
- Human: Accept/decline decision with feedback loop
- [HUMAN DECISION CHECKPOINT]

STAGE 2: RESPONSE DRAFTING (AI-assisted)
- AI: Analysis of journalist query requirements
- AI: Review of customer resources and past pitches
- AI: Identification of relevant spokesperson
- AI: Draft response generation with citations
- Human: Contextual reasoning validation
- Human: Spokesperson suitability confirmation
- Human: Response refinement and personalisation
- [HUMAN REVIEW CHECKPOINT]

STAGE 3: CLIENT APPROVAL & SENDING (Human-led)
- AI: Populate client inbox with draft
- Human: Review and modify response
- Client: Review and approve response
- Client or MVPR: Send response to journalist
- AI: Track and monitor outcomes
AI reasoning transparency: For each AI-generated opportunity, MVPR displays why the opportunity is relevant to customer’s objectives, which spokesperson should respond and why, which resources support the pitch angle, and a quality score based on relevance and fit. This contextual reasoning is reviewed and validated by human staff before presentation to Client.
3.3 AI-assisted features with limited human oversight
Certain platform features use AI with minimal human oversight for efficiency:
- Journalist database search: Semantic search of journalist profiles and beat coverage
- Real-time feed monitoring: Daily ingestion and keyword filtering of journalist requests
- Sentiment analysis: Automated sentiment scoring of media coverage
- Performance analytics: Automated campaign metrics and reporting
Safeguards: Regular quality audits by human staff; customer feedback mechanisms; ability to escalate issues to human review; transparent AI confidence scores where applicable.
3.4 When AI is not used
MVPR does not use AI for:
- Strategic planning and decision-making — all strategic recommendations are human-generated
- Client relationship management — all client communications are personally handled by account teams
- Legal or compliance advice — no AI-generated legal or regulatory recommendations
- Final publishing decisions — all publication decisions made by humans
- Contract or business negotiations — all business terms negotiated by humans
- Crisis communications — critical or time-sensitive communications require human expertise
4. Quality control and accuracy
4.1 Quality assurance framework
Layer 1 — AI self-assessment: AI models provide confidence scores for outputs; flagging of uncertain or potentially inaccurate content; self-citation of sources where possible.
Layer 2 — Automated quality checks: Grammar and syntax validation; plagiarism detection; brand voice consistency scoring; readability analysis (Flesch-Kincaid, etc.).
Layer 3 — Human editorial review: Professional editor review of all AI-generated content; fact-checking against authoritative sources; verification of citations and references; tone and style alignment with client voice; cultural sensitivity review.
Layer 4 — Client review and approval: All final deliverables require explicit client approval; client feedback incorporated before publication; iterative refinement based on client preferences.
4.2 Accuracy and fact-checking
- Zero tolerance for fabrication — all factual claims must be verified
- Source verification — citations must link to authoritative, verifiable sources
- Correction protocol — immediate correction of identified errors
- Transparency — clear disclosure when AI may have limitations in accuracy
For all factual content: cross-reference against authoritative sources; verify statistics, dates, and numerical claims; confirm proper names, titles, and organisational affiliations; check for temporal accuracy; flag any unverifiable claims for client discussion.
Known AI limitations and mitigation
AI models may hallucinate (generate plausible but incorrect information), have knowledge cutoffs, misinterpret nuanced contexts, and exhibit biases present in training data. MVPR mitigates these through human editorial review, supplementary research for recent events, contextual briefings for nuanced situations, and bias detection procedures (see Section 5).
4.3 Quality metrics and continuous improvement
Tracked metrics: Client satisfaction scores for AI-assisted deliverables; revision rates; factual error rates (post-human review); time-to-delivery improvements; AI confidence score accuracy.
Continuous improvement: Monthly review of quality metrics; quarterly analysis of client feedback; annual AI model performance evaluation; iterative refinement of prompts and workflows.
5. Bias detection and fairness
5.1 Potential sources of bias
- Training data biases — historical data may reflect societal biases
- Algorithmic biases — model design choices may introduce systematic biases
- Deployment biases — context of use may amplify certain biases
- Interaction biases — user inputs and feedback loops may reinforce biases
5.2 Bias mitigation strategies
Diverse training data (provider-level): MVPR partners with AI providers (Anthropic, OpenAI) that prioritise diverse and representative training data, and regularly reviews provider commitments to fairness.
Human oversight: Editorial teams trained to identify and correct biased outputs; cultural sensitivity review for content targeting diverse audiences; stakeholder diversity in content review process.
Inclusive prompt engineering: Prompts designed to elicit balanced, representative perspectives; explicit instructions to avoid stereotypes and generalisations; requests for diverse examples and viewpoints.
5.3 Fairness in journalist targeting
MVPR’s journalist database and targeting algorithms are designed to prioritise relevance based on beat coverage and past work, avoid discriminatory filtering based on protected characteristics, promote diversity in media outreach opportunities, and provide transparency in matching logic.
Prohibited practices: Filtering journalists by gender, race, ethnicity, or other protected characteristics (except when targeting diversity-focused publications); algorithmic redlining or systematic exclusion of certain media outlets; bias in opportunity allocation among clients.
5.4 Reporting and remediation
- Immediate reporting — contact Account Director or tom@mvpr.io
- Investigation — MVPR will investigate within 3 business days
- Remediation — correct the specific output and identify systemic issues
- Prevention — update prompts, workflows, or training to prevent recurrence
- Transparency — document findings and share learnings with Customer
MVPR conducts an annual internal bias audit of AI-assisted outputs. Results and improvement plans are available upon Customer request.
6. Explainability and transparency
6.1 Explainability methods
Level 1 — Model-level transparency: Clear documentation of which AI models are used for which tasks; publication of model capabilities, limitations, and training data characteristics; regular updates on model versions and changes.
Level 2 — Decision-level transparency: AI-generated reasoning for journalist opportunity matching; explanation of why specific content structures or arguments were suggested; confidence scores for AI recommendations.
Level 3 — Output-level transparency: Upon request, identification of which sections of content were AI-generated vs. human-written; disclosure of AI’s role in research, outlining, drafting, or editing; citations and sources used by AI in content generation.
Level 4 — Workflow-level transparency: Documentation of human oversight checkpoints (see Section 3); clear delineation of AI-assisted vs. human-led activities; process flow diagrams showing AI integration points.
6.2 Customer’s right to explanation
Upon request, MVPR will provide: identification of specific AI models used for a particular deliverable; explanation of AI’s contribution to specific outputs; human review and oversight activities performed; alternative approaches considered; and limitations or uncertainties in AI-generated components.
Request process: Submit request to Account Director or tom@mvpr.io, specifying the deliverable in question. MVPR will respond within 5 business days.
6.3 Limitations of explainability
Large language models have complex internal representations that are not fully interpretable, even by their creators. Explanations represent approximate reasoning, not exact replication of model logic. Despite these limitations, MVPR is committed to maximum practical transparency and will provide the most detailed explanations feasible given current technology.
7. Data privacy and AI processing
7.1 Data flow in AI systems
Customer input (via Platform or Services)
↓
MVPR Systems (GCP EU Multi-Region)
↓
Preprocessing & anonymisation (where appropriate)
↓
AI Provider API (Anthropic IE / OpenAI IE — EEA)
↓
AI model processing (EEA data centres)
↓
AI-generated output
↓
MVPR Systems (post-processing, human review)
↓
Customer delivery (via Platform or Email)

Data residency: Primary storage — Google Cloud Platform EU multi-region (EEA). AI processing — Anthropic Ireland Limited / OpenAI Ireland Ltd. (EEA). No systematic transfers to non-EEA jurisdictions.
7.2 Data minimisation in AI processing
- Selective data submission: Only data necessary for the specific AI task is sent to AI models
- Anonymisation where possible: Personal identifiers removed when not required for task completion
- Contextual filtering: Irrelevant data stripped from prompts to minimise exposure
- Ephemeral processing: AI providers do not retain customer data post-processing (per provider agreements)
| Task | Data sent to AI | Data not sent |
|---|---|---|
| Content outline generation | Topic, key messages, audience profile | Customer’s full database, historical projects |
| Journalist query response | Query text, relevant resources | Entire resource library, unrelated contacts |
| Quality assessment | Draft content only | Client names, account metadata |
7.3 AI and sensitive data
MVPR does not process Sensitive Personal Data (GDPR Article 9 — including racial or ethnic origin, political opinions, religious beliefs, health data, biometric data, etc.) through AI models unless explicitly required and authorised by Customer in writing.
If sensitive data processing is required: additional Data Processing Agreement terms will be executed; enhanced security and access controls implemented; AI processing limited to necessity and appropriateness; additional human oversight and review.
7.4 AI provider data handling
MVPR’s AI providers (Anthropic, OpenAI) contractually commit to: no training on customer data; no long-term storage beyond processing session (with exceptions for abuse prevention as disclosed in provider terms); no cross-customer contamination; customer ownership of all rights to inputs and outputs.
8. Intellectual property and AI-generated content
8.1 Ownership of AI outputs
Consistent with Section 6.2 of the MSA, Customer is and remains the exclusive owner of all AI-generated content produced under this Agreement, including: all AI-generated drafts, outlines, and suggestions; all human-refined versions of AI outputs; all final deliverables incorporating AI-generated components; and all modifications, updates, and derivatives.
MVPR does not retain any proprietary rights to Customer’s AI-generated content. MVPR’s rights are limited to using AI-generated content solely to provide Services to Customer and retaining non-identifiable, aggregated data for service improvement (with anonymisation).
8.2 Third-party IP considerations
- Plagiarism detection — all AI-generated content run through plagiarism detection tools
- Human editorial review — professional editors trained to identify derivative works
- Proper attribution — citations and references included where appropriate
- Originality focus — prompts engineered to encourage original expression, not reproduction
Both Anthropic and OpenAI provide IP indemnification to customers for AI-generated outputs, covering claims that outputs infringe third-party intellectual property rights (subject to provider terms).
8.3 Similarity of outputs across customers
Customer acknowledges that, due to the nature of AI, similar prompts may produce similar outputs for different customers, and common expressions may appear across outputs. MVPR maximises uniqueness for each customer through detailed client briefings, human editorial refinement ensuring brand voice differentiation, customer-specific resources incorporated into content, and multiple rounds of iteration and personalisation. Customer has full rights to modify AI-generated outputs to ensure uniqueness.
9. Compliance and regulatory alignment
9.1 EU AI Act compliance
MVPR’s AI use falls primarily into low-risk or minimal-risk categories under the EU Artificial Intelligence Act. MVPR’s systems do not involve critical infrastructure, education, employment decisions, law enforcement, or biometric identification.
Compliance measures: This Addendum provides comprehensive disclosure of AI use; mandatory human review for all significant outputs (see Section 3); formal risk category in MVPR’s Risk Management Policy; multi-layer quality control (see Section 4); comprehensive Data Processing Agreement with privacy safeguards; engagement only with reputable, compliant AI providers.
9.2 GDPR and data protection compliance
- Article 5 (Principles): Lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality
- Article 6 (Legal basis): Processing based on contract, legitimate interests, or consent
- Article 22 (Automated decision-making): MVPR’s AI does not make solely automated decisions with legal or similarly significant effects; all such decisions involve human oversight
- Article 25 (Privacy by design): Seven privacy-by-design principles embedded in MVPR’s Secure Development Policy
- Article 28 (Processor obligations): Formal Data Processing Agreement incorporating Standard Contractual Clauses
- Article 32 (Security): Technical and organisational security measures, underpinned by ISO 27001:2022 certification
- Articles 33–34 (Breach notification): Defined procedures for personal data breach notification
- Article 35 (DPIA): Assistance with Data Protection Impact Assessments upon request
9.3 Other regulatory frameworks
ISO 27001:2022: MVPR maintains ISO 27001:2022 certification covering AI systems; annual external audits; ISMS includes AI as an explicit risk category.
NIST AI Risk Management Framework: Risk Management Policy aligns with NIST RMF principles; AI risks assessed using NIST 800-30 and ISO 27005 methodologies.
Industry-specific compliance: Healthcare (HIPAA) — MVPR does not process Protected Health Information through AI without explicit Healthcare Addendum. Financial services — enhanced due diligence for financial services clients. Other regulated industries — custom compliance measures upon request.
10. Limitations and customer responsibilities
10.1 Known limitations of AI systems
- Hallucinations: AI may generate plausible but factually incorrect information
- Knowledge cutoffs: AI models lack real-time information
- Context misunderstanding: AI may occasionally misinterpret complex or nuanced contexts
- Bias potential: AI may exhibit biases present in training data
- Inconsistency: AI outputs may vary for similar inputs
- Lack of true understanding: AI does not “understand” content in the human sense
- Creativity limits: AI may produce formulaic outputs without human refinement
- Specialised domain gaps: AI may lack deep expertise in highly specialised or emerging topics
10.2 Customer’s role and responsibilities
- Review and approve: Carefully review all AI-assisted deliverables before approval or publication
- Provide context: Supply detailed briefings, brand guidelines, and contextual information
- Timely feedback: Provide feedback on AI-assisted content to enable continuous improvement
- Legal compliance: Ensure Customer’s use of AI-generated content complies with applicable laws
- IP verification: Conduct own IP due diligence in sensitive or high-stakes contexts
- Final accountability: Accept ultimate responsibility for published content
10.3 Prohibited uses
Customer agrees not to:
- Use Services to generate content that violates applicable laws or regulations
- Use Services to generate harmful, misleading, or inappropriate content
- Attempt to reverse-engineer or extract proprietary elements of MVPR’s AI implementation
- Use AI-generated content to train competing AI models or services
- Publish AI-generated content without required disclosures if mandated by law or industry standards
10.4 Disclaimer of warranties specific to AI
MVPR disclaims warranties that AI-generated outputs will be error-free or completely accurate; meet Customer’s subjective quality standards without human review; perform identically across all use cases; or remain unchanged as models are updated. Customer’s sole remedy for dissatisfaction with AI-assisted deliverables is to request revision in accordance with the MSA and SLA terms.
11. Incident response and issue escalation
11.1 AI-related incident categories
- Accuracy failures — significant factual errors that passed human review
- Bias incidents — outputs reflecting unacceptable bias or discrimination
- Privacy breaches — unauthorised disclosure of Personal Data through AI processing
- IP infringement — potential third-party IP infringement in AI-generated outputs
- System failures — AI service outages or significant performance degradation
- Security incidents — unauthorised access to AI systems or data
11.2 Reporting and response timelines
Contact your Account Director or tom@mvpr.io with details of the issue, affected deliverables, potential impact, and severity (Critical, High, Medium, Low).
| Severity | Initial response | Investigation | Resolution plan |
|---|---|---|---|
| Critical (e.g. privacy breach, published inaccuracy causing harm) | 2 hours | 24 hours | 48 hours |
| High (e.g. significant bias, pre-publication error) | 1 business day | 3 business days | 5 business days |
| Medium (e.g. quality concern, minor inaccuracy) | 2 business days | 5 business days | 10 business days |
| Low (e.g. stylistic preference, suggestion) | 3 business days | 10 business days | As appropriate |
11.3 Continuous improvement
Customer feedback on AI-related issues informs prompt engineering refinements, workflow adjustments, staff training updates, model selection decisions, and policy updates. MVPR conducts quarterly reviews of AI incident reports and resolutions, customer satisfaction metrics, quality trends, and emerging AI risks.
12. Governance and accountability
12.1 AI governance structure
Strategic oversight: ISMS Governance Council (strategic oversight of AI risks); CEO Tom Lawrence (ultimate accountability for AI-related risks and decisions); CTO Konrad Fuger (technical implementation and security).
Operational management: Operations Team (day-to-day monitoring of AI systems); Editorial Team (quality control and human oversight of AI-assisted content); Account Directors (client-facing responsibility for AI deliverables).
Risk management: AI included as explicit risk category in Risk Management Policy; quarterly reporting of AI metrics to ISMS Governance Council; annual AI risk assessments and penetration testing.
12.2 Accountability and escalation
| Role | Accountability |
|---|---|
| CEO | Ultimate AI governance and risk decisions |
| CTO | Technical security, AI provider management, system integrity |
| Editorial team | Content quality, accuracy, human oversight |
| Operations team | Platform performance, incident response, monitoring |
| Account Directors | Client satisfaction, deliverable quality, feedback collection |
Customer escalation path: Level 1 — Account Director (day-to-day issues); Level 2 — COO/CEO (unresolved concerns, policy questions); Level 3 — Independent Review (if requested, for disputes).
12.3 AI ethics principles
- Human-centric: AI augments human expertise; humans retain decision-making authority
- Transparency: Clear disclosure of AI’s role and capabilities
- Fairness: Proactive bias detection and mitigation
- Privacy: Data minimisation and privacy-by-design
- Accountability: Clear lines of responsibility for AI outcomes
- Safety: Robust quality control and risk management
- Sustainability: Responsible use of AI resources
13. Customer support and resources
13.1 AI-related support
- General inquiries: tom@mvpr.io
- Account support: Contact your Account Director
- Security/privacy concerns: Escalate through Account Director
- Documentation requests: Request via email or Account Director
Support scope includes: explanation of AI’s role in specific deliverables; assistance with understanding AI outputs; guidance on providing effective input for AI-assisted tasks; troubleshooting AI-related issues; updates on AI model changes; training on AI transparency features.
13.2 Training and education
Upon request, MVPR provides: overview of MVPR’s AI implementation and governance; best practices for collaborating with AI-assisted services; understanding AI limitations and how to work around them; interpreting AI transparency features. Resources available via this Addendum, the MVPR Feature Guide, quarterly AI development updates, and on-request consultations with the MVPR technical team.
13.3 Feedback and improvement suggestions
MVPR welcomes customer feedback on AI output quality and accuracy, user experience with AI-assisted features, suggestions for new AI capabilities, and concerns about AI ethics or transparency. Submit feedback to your Account Director or tom@mvpr.io.
14. Amendment and version control
14.1 Update policy
MVPR reviews this Addendum quarterly. Material changes include: addition or removal of AI models; significant changes to data handling practices; new regulatory requirements; changes to human oversight processes. For material changes: 30 days advance notice via email; summary of changes and effective date provided; option to discuss concerns with Account Director. Continued use of Services after the effective date constitutes acceptance.
14.2 Version history
| Version | Date | Summary of changes |
|---|---|---|
| 1.0 | January 14, 2026 | Initial publication of AI Transparency & Governance Addendum |
15. Contact information
For questions, concerns, or requests regarding this Addendum:
AI governance inquiries
Email: tom@mvpr.io
Subject line: “AI Governance Inquiry”
Acknowledgment
By continuing to use MVPR’s Services, Customer acknowledges that it has read, understood, and agrees to the terms of this AI Transparency & Governance Addendum.
End of AI Transparency & Governance Addendum — MV Public Relations Ltd — Version 1.0 — January 2026
