
Mia Information Security Q&A


Mia (Mesma Intelligent Assistant) Data Governance and Privacy


Q1: How do we manage the types of data collected, processed, and stored by our AI systems?

A: The Mia Assistant processes client data, which may include user-entered text, selected ratings, file uploads, and metadata such as timestamps and user IDs. Conversations with the Mia Assistant are processed temporarily for response generation and audit logging.



Q2: How do we use client data in model training, fine-tuning, or inference processes?

A: Data is used for inference only; it is not used for model training or fine-tuning.



Q3: How do we use client data in inference processes?

A: The data is used to generate contextually relevant AI responses during inference, without storing it for training purposes.



Q4: How do we enforce data segregation to prevent cross-tenant data leakage?

A: Tenant data is not used to train or fine-tune models without explicit opt-in; it is used only to produce curated responses during inference. Multi-tenant data isolation is enforced via unique customer identifiers and tenant-based access controls.
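The tenant-based access controls described above can be sketched as follows. This is an illustrative example only, not Mesma's implementation; all class and field names (`Document`, `TenantStore`, `tenant_id`) are hypothetical.

```python
# Hypothetical sketch: every read is scoped to the caller's tenant
# identifier, so one tenant can never retrieve another tenant's data.
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    tenant_id: str
    content: str


class TenantStore:
    """In-memory stand-in for a multi-tenant document store."""

    def __init__(self) -> None:
        self._docs: list[Document] = []

    def add(self, doc: Document) -> None:
        self._docs.append(doc)

    def fetch_for_tenant(self, tenant_id: str) -> list[Document]:
        # The filter is applied on every query, not left to the caller.
        return [d for d in self._docs if d.tenant_id == tenant_id]


store = TenantStore()
store.add(Document("tenant-a", "Tenant A policy text"))
store.add(Document("tenant-b", "Tenant B policy text"))
results = store.fetch_for_tenant("tenant-a")
```

Scoping the filter inside the store, rather than trusting each caller to add it, is what makes the isolation an enforced control rather than a convention.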



Q5: How do we manage data retention, deletion, and anonymisation for AI data?

A: All inference requests are processed through a stateless API that prevents cross-tenant data persistence.



Q6: How do we ensure compliance with applicable privacy regulations (e.g., GDPR, CCPA)?

A: Mesma follows the principles of ISO 27001/27002, Cyber Essentials Plus, GDPR, and the NCSC Cloud Security Principles to ensure statutory and regulatory compliance.


AI Model Governance

Q1: How do we use third-party or open-source AI models, and how do we vet and validate them?

A: We use only enterprise-grade AI models from OpenAI via Microsoft Azure OpenAI Service. Models are validated internally through contextual accuracy testing and privacy compliance review.



Q2: How do we manage model updates and retraining?

A: Mesma does not train or fine-tune models. Updates and retraining are managed by Microsoft and OpenAI. Mesma performs regression testing to assess any impact from model or API version updates.



Q3: How do we ensure human review or intervention in critical AI decision-making?

A: AI outputs are advisory and designed to support, not replace, human decision-making. Users can review, edit, and override AI-generated content.


Transparency


Q1: How do we generate and validate AI decisions and outputs?

A: The Mia Assistant uses retrieval-augmented generation (RAG) to ground responses in tenant-provided content. Outputs are validated through internal testing for factual and linguistic consistency.
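The RAG flow mentioned above can be sketched in miniature: retrieve the tenant documents most relevant to a query, then build a prompt that instructs the model to answer only from that context. This is a simplified illustration (naive keyword-overlap ranking instead of a real retriever); the function names and documents are hypothetical.

```python
# Illustrative retrieval-augmented generation (RAG) sketch:
# ground the prompt in retrieved tenant content.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a prompt that restricts the model to the given context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


docs = [
    "Leave policy: staff receive 25 days annual leave.",
    "Security policy: access keys are rotated quarterly.",
]
context = retrieve("annual leave days", docs)
prompt = build_prompt("How many days of annual leave?", context)
```

Grounding the prompt in retrieved content is also what makes the explainability described below possible: the contextual references shown to users are simply the retrieved documents.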



Q2: How do we provide explainability for AI outputs?

A: Users can view the contextual references that informed a response.



Security


Q1: How do we protect AI data pipelines, models, and inference endpoints?

A: All endpoints are secured by Azure API Management and authenticated using OAuth 2.0 with role-based access control (RBAC).
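A role check of the kind described above can be sketched as follows. In production the claims would come from a validated OAuth 2.0 token; here they are represented as a plain dict, and the role name `assistant:query` is hypothetical.

```python
# Hypothetical sketch: role-based access control (RBAC) on an
# inference endpoint. Token validation is assumed to have already
# happened upstream (e.g. at the API gateway).

ALLOWED_ROLES = {"assistant:query"}  # roles permitted to call the endpoint


def authorize(claims: dict) -> bool:
    """Allow the call only if the token carries a permitted role."""
    granted = set(claims.get("roles", []))
    return bool(granted & ALLOWED_ROLES)


allowed = authorize({"sub": "user-1", "roles": ["assistant:query"]})
denied = authorize({"sub": "user-2", "roles": ["viewer"]})
```

Checking roles at the endpoint, in addition to gateway authentication, gives defence in depth: a valid token without the right role is still rejected.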



Q2: How do we defend against AI-specific threats?

A: We employ input validation, prompt-injection protection, content filtering via Azure OpenAI moderation, and abuse/anomaly monitoring.
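The input-validation layer mentioned above might look like the following deliberately simplified pass: reject overlong inputs and strings matching a few common prompt-injection phrases before they reach the model. Real filtering (such as Azure OpenAI content moderation) is far broader; the patterns and limit here are hypothetical.

```python
# Illustrative (and intentionally minimal) input-validation pass
# applied before a request reaches the model.
import re

MAX_LEN = 4000  # hypothetical input length limit
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the|your) system prompt", re.IGNORECASE),
]


def validate_input(text: str) -> bool:
    """Return True if the input passes the basic checks."""
    if len(text) > MAX_LEN:
        return False
    return not any(p.search(text) for p in INJECTION_PATTERNS)


ok = validate_input("What is our leave policy?")
blocked = validate_input("Please ignore previous instructions")
```

Pattern lists like this are only a first line of defence; they complement, rather than replace, provider-side moderation and monitoring.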



Q3: How do we control and monitor API access to AI models?

A: Access is managed via Auth0-based API keys. Every API call is logged with timestamp, user ID, and metadata. Usage analytics are periodically reviewed to detect anomalies.
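The per-call logging described above can be sketched as a structured audit record: each API call is serialised with a timestamp, user ID, and metadata so it can be searched during anomaly review. The field names and endpoint path are illustrative, not Mesma's actual log schema.

```python
# Hypothetical sketch: one structured audit-log line per API call.
import json
from datetime import datetime, timezone


def audit_record(user_id: str, endpoint: str, metadata: dict) -> str:
    """Serialise one API call as a JSON log line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "endpoint": endpoint,
        "metadata": metadata,
    }
    return json.dumps(entry)


line = audit_record("user-42", "/assistant/query", {"tenant": "tenant-a"})
```

Emitting one machine-readable line per call is what makes the periodic usage-analytics review practical: anomalies can be detected by querying the log rather than reading it by hand.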

 

Ethical AI and Bias Management


Q1: How do we detect and mitigate bias in AI outputs?

A: We use only models provided by OpenAI via Microsoft Azure OpenAI Service, which include fairness and bias-mitigation protocols. All AI-generated outputs are advisory only, may contain errors, and users remain fully responsible for reviewing and making decisions based on them.



Q2: How do we provide oversight for AI ethics in our use of AI models?

A: SDN Mesma Group does not develop proprietary AI models; we use vetted, enterprise-grade models from OpenAI.



Q3: How do we embed fairness, accountability, and transparency principles into AI workflows?

A: All features undergo privacy, ethics, and risk assessments before deployment. AI-generated content remains auditable and human-reviewable.



Compliance and Certifications


Q1: How do we ensure our AI systems meet assurance standards (e.g., ISO 42001, SOC 2 with AI criteria)?

A: OpenAI models used via Microsoft Azure OpenAI Service are ISO 42001 certified.



Q2: How do we conduct assessments or audits on our AI systems?

A: No formal third-party AI audits are conducted, but internal testing is frequent, and independent penetration testing of the wider system occurs every 24–36 months.

 


Incident Management and Resilience


Q1: How do we detect, report, and respond to AI-specific incidents (e.g., model drift, hallucination, degradation)?

A: AI functionality undergoes continuous testing in an isolated environment. Responses are reviewed for suitability, and changes are tracked. Users can provide feedback either in the app or via the helpdesk.



Q2: How do we inform customers about material changes or incidents affecting AI features?

A: Release notes are added to the Mesma platform for tenant review.

 


Customer Control and Transparency


Q1: How do we allow customers to manage, opt-out, or customise AI-driven features?

A: AI-powered features can be enabled or disabled by tenant administrators in the Admin control panel preferences section.



Q2: How do we allow customers to exclude their data from AI model training?

A: Tenant data is not used for training or fine-tuning models without explicit opt-in.

 


Third-Party Dependencies


Q1: How do we integrate third-party AI tools, platforms, or APIs into our SaaS offering?

A: The system leverages vetted third-party LLMs (Azure OpenAI models) under enterprise agreements.



Q2: How do we manage and monitor third-party AI risks?

A: Third-party providers undergo vendor risk assessments covering data protection, security, and compliance. Mesma monitors release notes from providers to stay informed of vulnerabilities or risks.


Updated on: 06/02/2026
