Artificial Intelligence and Corporate Accountability in International Law: A Conceptual Exploration
Keywords:
artificial intelligence, corporate accountability, international law, transnational liability, human rights compliance

Abstract
This paper explores the intersection of artificial intelligence (AI) and corporate accountability within the framework of international law. As AI increasingly assumes decision-making roles in global business operations, traditional legal doctrines—designed around human agency and state-centric liability—struggle to address harms arising from autonomous, opaque, and transnational AI systems. Drawing upon scholarship in international corporate law, human rights law, and technology regulation, this study develops a conceptual framework to guide corporate accountability in the age of AI. The proposed framework integrates three interrelated dimensions: responsibility and liability, transparency and auditability, and governance and ethical oversight. Responsibility is distributed across corporate, managerial, and technical levels, ensuring accountability for algorithmic harms. Transparency emphasizes explainable AI, documentation, continuous monitoring, and stakeholder engagement, while governance embeds ethical standards into organizational structures, aligning practices with international norms. By synthesizing legal, ethical, and organizational approaches, the framework provides a roadmap for accountable AI deployment, risk mitigation, and compliance with global standards. The paper highlights implications for corporate practice, international law, and policy development, emphasizing the need for adaptive, multi-level mechanisms to ensure responsible technological innovation in transnational contexts.
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
