Semiotic analysis of human and artificial intelligence – knowing the limitations and building trustworthy AI
Liu, K., Bai, J., Noussia, K.
Abstract

The rapid advancement of artificial intelligence (AI), particularly large language models (LLMs) and generative AI (GenAI), raises pressing questions about trustworthiness and ethical alignment. This paper addresses these challenges through a semiotic and normative lens. It contrasts human and artificial intelligence, focusing on differences in sense-making and reasoning capabilities. While humans engage in contextual, interpretive, and abductive reasoning, current AI systems rely primarily on statistical associations, lacking contextual understanding and normative judgment. To mitigate these limitations, we propose the Epistemic-Deontic-Axiological (EDA) architecture, which integrates symbolic reasoning with neural models to improve interpretability, ethical alignment, and trustworthiness. Future work will focus on operationalizing this framework in applied settings, bridging normative theory and practical AI development.
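The abstract describes the EDA architecture only at a high level. As a minimal illustrative sketch, assuming the architecture works as a symbolic gate that post-checks a neural model's output against epistemic, deontic, and axiological tests, the idea might be expressed as follows. Every name here (eda_gate, Verdict, DEONTIC_RULES, axiological_score, evidence_support) is a hypothetical placeholder for illustration and is not taken from the paper.

# Hypothetical sketch only; not the authors' published implementation.
# Illustrates an EDA-style gate: symbolic checks applied to a neural
# model's candidate output before it is released to the user.

from dataclasses import dataclass, field

@dataclass
class Verdict:
    approved: bool
    reasons: list = field(default_factory=list)

# Deontic layer: hard obligations/prohibitions as symbolic predicates
# (illustrative rules, not drawn from the paper).
DEONTIC_RULES = [
    ("must not give medical dosage advice", lambda t: "dosage" not in t.lower()),
    ("must flag factual claims with a source", lambda t: "[source]" in t.lower()),
]

def axiological_score(text: str) -> float:
    # Axiological layer: soft value preference scored in [0, 1];
    # here, a toy preference for epistemically hedged phrasing.
    hedged = any(w in text.lower() for w in ("may", "might", "uncertain"))
    return 1.0 if hedged else 0.5

def eda_gate(candidate: str, evidence_support: float,
             value_threshold: float = 0.6) -> Verdict:
    reasons = []
    # Epistemic layer: require grounding above a support threshold;
    # evidence_support would come from a retrieval/verification module.
    if evidence_support < 0.7:
        reasons.append(f"epistemic: support {evidence_support:.2f} < 0.70")
    for label, rule in DEONTIC_RULES:
        if not rule(candidate):
            reasons.append(f"deontic: violated '{label}'")
    if axiological_score(candidate) < value_threshold:
        reasons.append("axiological: value score below threshold")
    return Verdict(approved=not reasons, reasons=reasons)

if __name__ == "__main__":
    verdict = eda_gate("The drug dosage is 50mg daily.", evidence_support=0.9)
    print(verdict)  # rejected: the deontic dosage rule fires

The point of the sketch is the separation of concerns the abstract names: the neural model proposes, while distinct symbolic layers for knowledge (epistemic), rules (deontic), and values (axiological) each contribute an auditable accept/reject reason, which is one plausible route to the interpretability and ethical alignment the paper argues for.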