ISO/IEC TR 24028 – AI Trustworthiness Assessment
Overview
ISO/IEC TR 24028 provides an overview of trustworthiness in artificial intelligence systems and of approaches for identifying and mitigating AI-specific vulnerabilities. Our assessment helps organizations:
Evaluate AI systems against international trustworthiness principles
Identify risks in AI decision-making processes
Improve transparency and accountability of AI implementations
Align with emerging AI governance frameworks
Who It's For
Developers and deployers of AI systems
Organizations using AI for critical decision-making
Regulatory compliance teams addressing AI risks
Procurement teams evaluating AI vendor solutions
Ethics committees overseeing AI implementations
Why an AI Trustworthiness Assessment Matters
Risk Mitigation: Identify and address AI system vulnerabilities
Regulatory Preparedness: Stay ahead of evolving AI regulations
Stakeholder Trust: Demonstrate responsible AI practices
System Improvement: Enhance AI reliability and performance
Scope of Our Assessment
AI System Documentation: Review of development processes
Algorithmic Transparency: Explainability and interpretability
Data Quality: Training data representativeness and bias (see the illustrative sketch after this list)
Decision Auditing: Output validation and monitoring
Human Oversight: Control mechanisms and fallback procedures
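To make the data-quality and bias dimension concrete, the sketch below shows the kind of group-fairness check such an evaluation can include: it computes per-group selection rates from model predictions, derives the demographic parity difference and disparate impact ratio, and compares the ratio against the commonly cited four-fifths threshold. This is a minimal illustration built on our own assumptions; the function names, metric choices, and threshold are not prescribed by ISO/IEC TR 24028.

```python
# Illustrative sketch only: a minimal group-fairness check of the kind a bias
# evaluation might include. The metric choices and the 0.8 ("four-fifths rule")
# threshold are assumptions, not requirements of ISO/IEC TR 24028.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def fairness_summary(predictions, groups, threshold=0.8):
    """Summarise group disparities in a model's positive predictions."""
    rates = selection_rates(predictions, groups)
    high, low = max(rates.values()), min(rates.values())
    ratio = low / high if high else float("nan")
    return {
        "selection_rates": rates,
        "demographic_parity_difference": high - low,
        "disparate_impact_ratio": ratio,
        "passes_four_fifths_rule": ratio >= threshold,
    }

if __name__ == "__main__":
    # Toy example: binary predictions for two protected groups, A and B.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print(fairness_summary(preds, groups))
```

A full evaluation would also consider further metrics (for example, error-rate disparities) and statistical uncertainty, but even a simple check like this surfaces obvious selection-rate gaps early in the process.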
Our 6-Step Assessment Process
Scoping Call: Define AI systems and use cases
Document Review: Technical documentation and policies
Technical Evaluation: Algorithm and data pipeline analysis
Stakeholder Interviews: Developers, users, and affected parties
Impact Assessment: Potential harms and mitigation strategies
Final Report: Conformity assessment findings with an improvement plan
Deliverables
Trustworthiness Assessment Certificate
AI Risk Profile Report
Bias and Fairness Evaluation
Governance Improvement Plan
Executive Summary Presentation
Why Company Certification Int.?
AI Ethics Experts: Assessors with technical and ethical expertise
Multidisciplinary Approach: Combines technical and governance perspectives
Practical Framework: Actionable recommendations for improvement
Future-Ready: Aligns with emerging global AI standards
FAQ
Q: Is this a certification of our AI system?
A: Not in the formal sense. ISO/IEC TR 24028 is a technical report rather than a certifiable requirements standard, so this is a conformity assessment that provides independent validation of your AI system's trustworthiness characteristics, not a product certification.
Q: How does this relate to EU AI Act requirements?
A: Our assessment helps you prepare for the EU AI Act's requirements for high-risk AI systems, including risk management, transparency, and human oversight obligations.
Q: What types of AI systems can be assessed?
A: We assess machine learning, deep learning, and other AI approaches across a wide range of application domains.
Q: How long does the assessment take?
A: Typically 3-5 weeks depending on system complexity.
Q: Do you need access to our source code?
A: We require appropriate technical documentation but typically don't need full source code access.
Get Started
Ready to demonstrate your AI's trustworthiness?
[Request AI Assessment] [Download Trustworthiness Checklist]