Copyright (c) 2023 Authors

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors retain the copyright without restrictions for their published content in this journal. HSSR is a SHERPA ROMEO Green Journal.
A Range of Legal Obligations for Operators of Artificial Intelligence Systems in the Context of Criminal Law
Corresponding Author(s) : Julita Skowrońska
Humanities & Social Sciences Reviews,
Vol. 11 No. 5 (2023): September
Abstract
Purpose of the study: This article aims to investigate and define the legal obligations of artificial intelligence (AI) system operators under existing criminal law. It seeks to identify the key provisions, analyze questions of legal liability, and examine the interaction between AI ethics and criminal law. The article also assesses the implications of using AI in criminal law and proposes possible improvements to current regulations.
Methodology: The article takes an interdisciplinary approach combining law, computer science, and ethics. The methodology includes a literature review on the legal obligations of AI operators and a legal analysis of the key provisions.
Main findings: The analysis shows that the liability of AI system operators under criminal law centers on the legal, technological, and ethical dimensions of operating AI. These issues arise across areas such as the design, implementation, control, and use of AI systems, where operators make the key decisions.
Application of the study: The article addresses the liability of AI system operators in the context of criminal law, with significant implications for disciplines such as law, AI ethics, cybersecurity, and computer science. Studying how criminal law applies to AI is particularly important because it exposes gaps in current legal frameworks that can lead to accountability problems when an AI system causes damage or error.
Originality/Novelty of the study: The liability of AI operators under criminal law is a relatively new area of research, especially in Poland. The complexity and rapid development of AI technology create numerous interpretive challenges that deserve further analysis.
References
Amodei D., Olah C., Steinhardt J., Christiano P., Schulman J., Mané D. (2016). Concrete problems in AI safety, arXiv preprint arXiv:1606.06565.
Boddington P. (2017). Towards a Code of Ethics for Artificial Intelligence, Springer, pp. 44-58.
Bostrom N. (2014). Superintelligence: Paths, Dangers, Strategies, OUP Oxford.
Bryson J. (2016). Artificial intelligence and pro-social behaviour, In Artificial Intelligence and Ethics, MIT Press.
Bryson J. (2016). Artificial intelligence and pro-social behaviour, In Artificial Intelligence Safety and Security (pp. 5-14), CRC Press.
Bryson J. (2020). AI and the appearance of deception, In Ethical Artificial Intelligence (pp. 85-90), Springer, Berlin, Heidelberg.
Bryson J. J., Diamantis M. E. (2019). The need for a legal definition of artificial intelligence, Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 95-101.
Calo R. (2017). Artificial Intelligence policy: a primer and roadmap, UC Davis Law Review, 51, 399.
Cath C. (2018). Governing artificial intelligence: ethical, legal and technical opportunities and challenges, Philosophical Transactions of the Royal Society A, pp. 76-90.
Dignum V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, Springer, pp. 67-85.
Etzioni A., Etzioni O. (2019). Incorporating Ethics into Artificial Intelligence, The Journal of Ethics, pp. 101-121.
Floridi L., Cowls J., Beltrametti M., Chatila R., Chazerand P., Dignum V., Luetge C., Madelin R., Pagallo U., Rossi F., Schafer B., Valcke P., Vayena E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, Minds and Machines, pp. 170-190.
Goodman B., Flaxman S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”, AI Magazine, 38(3), 50-57.
Gunning D. (2017). Explainable artificial intelligence (XAI), Defense Advanced Research Projects Agency (DARPA), n.d., Web.
Hagendorff T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines, Minds and Machines, pp. 35-50.
Jobin A., Ienca M., Vayena E. (2019). The global landscape of AI ethics guidelines, Nature Machine Intelligence, pp. 110-130.
Mittelstadt B., Allo P., Taddeo M., Wachter S., Floridi L. (2016). The ethics of algorithms: Mapping the debate, Big Data & Society, 3(2), 2053951716679679.
Rahwan I. (2019). Society-in-the-loop: programming the algorithmic social contract, Ethics and Information Technology, 20(1), 5-14.
Russell S., Dewey D., Tegmark M. (2015). Research priorities for robust and beneficial artificial intelligence, AI Magazine, 36(4), 105-114.
Russell S., Norvig P. (2016). Artificial Intelligence: A Modern Approach, Pearson Education Limited, pp. 52-60.
Scherer M. U. (2016). Regulating artificial intelligence systems: risks, challenges, competencies, and strategies, Harvard Journal of Law & Technology, 29, pp. 348-365.
Schwartz P. M. (2020). The EU Privacy Directive, the GDPR, and the US Privacy Shield, Cambridge Handbook of Consumer Privacy, 13, 103.
Siciliano B., Khatib O. (2016). Springer Handbook of Robotics, Springer.
Tegmark M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence, Knopf.
Vladeck D. C. (2014). Machines without Principals: Liability Rules and Artificial Intelligence, Washington Law Review, 89, 117.
Wu T. (2020). The Trouble with Artificial Intelligence, In "Future Imperfect: Technology and Freedom in an Uncertain World", Cambridge University Press, pp. 111-135.
Yampolskiy R. V. (2020). Unpredictability of AI, arXiv preprint arXiv:2002.10432.
Zuckerberg M. (2019). Speech at Georgetown University, Georgetown University, October 17, 2019.