AI_TAF: A Human-Centric Trustworthiness Risk Assessment Framework for AI Systems

Eleni Seralidou, Kitty Kioskli, Theofanis Fotis, Nineta Polemi

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents the AI Trustworthiness Assessment Framework (AI_TAF), a comprehensive methodology for evaluating and mitigating trustworthiness risks across all stages of an AI system’s lifecycle. The framework accounts for the criticality of the system based on its intended application, the maturity level of the AI teams responsible for ensuring trust, and the organisation’s risk tolerance regarding trustworthiness. By integrating both technical safeguards and sociopsychological considerations, AI_TAF adopts a human-centric approach to risk management, supporting the development of trustworthy AI systems across diverse organisational contexts and at varying levels of human–AI maturity. Crucially, the framework underscores that achieving trust in AI requires a rigorous assessment and advancement of the trustworthiness maturity of the human actors involved in the AI lifecycle. Only through this human-centric enhancement can AI teams be adequately prepared to provide effective oversight of AI systems.
Original language: English
Article number: 243
Number of pages: 23
Journal: Computers
Volume: 14
Issue number: 7
Publication status: Published - 22 Jun 2025

Keywords

  • Artificial Intelligence
  • Trustworthiness
  • human-centric
  • framework

