Maciej Kobiałka
Kobiałka M., AI in the Service of Justice: Opportunities and Threats in the Application of Language Models (LM) and Large Multimodal Models (LMM) in Judicial and Extrajudicial Dispute Resolution with Specific Consideration of Civil Law. Reflections on the Current Dispute Resolution Model, “Polish Journal of Political Science”, 2025, Vol. 11, Issue 4, pp. 53–68, DOI: 10.58183/pjps.04042025.
ABSTRACT
The Polish justice system is hampered by the protracted nature of court proceedings, urgently requiring a reassessment of its current model. The paper examines the potential benefits and severe risks of integrating AI and LLMs into dispute resolution. The core opportunity lies in data-driven optimization, offering greater efficiency, standardization of jurisprudence, and administrative support (e.g., the digital judge’s assistant). However, the implementation of AI as an adjudicator faces critical threats: ethical issues surrounding the dehumanization of law and the inability to interpret human-centric concepts such as “good faith”; technical risks from “black box” algorithms and the replication of historical biases; and sovereignty concerns tied to strategic infrastructure and compliance with the EU AI Act. While AI is not ready to replace human judges, its inevitable role in supportive functions and ADR mandates immediate structural reform to leverage its benefits while safeguarding the constitutional principles of impartiality, human dignity, and the right to a fair trial.
Keywords: AI in justice, dispute resolution, algorithmic bias, judicial efficiency, AI Act
Introduction
“Everyone shall have the right to a fair and public hearing of their case, without undue delay, before a competent, independent, impartial, and lawful court” – states Article 45 of the Constitution of the Republic of Poland of April 2, 1997.[1] The administration of justice is an indispensable element of all state systems,[2] and its exercise in Poland, constitutionally reserved for the Supreme Court, common courts, administrative courts, and military courts as the domain of the judiciary, is one of the most significant activities of the state in the sphere of imperium (sovereign power). Justice is simultaneously a good that the state (and historically, any authority) must provide to resolve conflicts peacefully and to guarantee individuals the ability to interact with others with the assurance of seeking protection should their rights be violated. Much ink has already been spilled on the organization and axiology of the justice system, yet on one point, everyone, including those outside the legal and academic communities, seems to agree: the administration of justice should operate efficiently and with the utmost diligence, avoiding the protracted nature of proceedings, which has become its main ailment. In an ideal world, courts issue fair, substantively correct rulings quickly and efficiently. Due to a combination of various factors, we are extremely far from this state, and nothing indicates that the situation in the justice system will be remedied in the near future. 
Both caseloads and the waiting time for a case to be heard are swelling at an alarming rate; one can now wait up to two years for a hearing in civil cases.[3] For the purpose of this text, civil cases are understood in accordance with the Act of November 17, 1964, Code of Civil Procedure (hereinafter: “k.p.c.”)[4] as cases arising from relationships governed by civil, family, and guardianship law, as well as labor law, and also cases concerning social insurance and other cases to which the provisions of the Code apply by virtue of specific statutes (hereinafter: “civil cases”). Analogously, a civil dispute will be understood as a dispute conducted in accordance with the k.p.c. regulations, thus encompassing civil cases in both their substantive and formal sense (hereinafter: “civil dispute”).[5]
The aim of this study is a critical analysis of the possibilities and justification for the application of artificial intelligence, particularly large language models, in the process of dispute resolution, with specific consideration of civil law. The analysis is conducted using theoretical-legal and comparative methods and attempts to examine the potential advantages and drawbacks of implementing language models in certain fields of dispute resolution both outside and within state justice.
Research into the application of artificial intelligence in the administration of justice and dispute resolution is a relatively new phenomenon, which has received increased attention in the last few years. This correlates with the pace of the technology’s development and the extent of its implementation and presence. Nevertheless, the state of research on this issue is becoming increasingly advanced, having even seen the publication of the first Polish monographic study in 2024.[6] However, it is necessary to point out that, with the dynamic development of this technology, our knowledge and the resulting studies may quickly become outdated.
Current Presence of AI in Dispute Resolution and its Limitations
When considering the application of Language Models (LM and LLM)[7] in dispute resolution, or more broadly, the participation of AI in the functioning of the justice system, it is important to note that we are not discussing a theoretical situation. AI is already present in the justice system, though perhaps not yet in the role of a decision-maker. An example of the unprecedented use of such tools is China’s “intelligent courts.” In 2017, the central authorities, led by the Central Committee of the Communist Party of China, commissioned the Supreme Court in Shanghai to develop a program intended to support the Chinese justice system. “System 206” envisaged the creation of software capable of integrating artificial intelligence with every stage of criminal proceedings, from the initiation of the investigation to the issuance of a verdict and supervision of its execution. It is also worth mentioning that within the Chinese justice system, parties are able to participate in hearings entirely remotely (all hearings in Chinese courts are also streamed and available online),[8] and to handle almost all stages of court proceedings via the official website, as well as through the China Mobile WeCourt application, which is linked to the WeChat messenger.[9] Even in this new reality, however, the role of judges remains the essence of the administration of justice: dispute resolution. The question of how much longer this will be the case remains open.
This leads us to a fundamental question: What exactly is dispute resolution? Is a human actually necessary to perform subsumption? The process of fitting a set of facts to a legal norm is, theoretically, a logical operation, one that the devices surrounding us perform millions of times per second. Crucially, Artificial Intelligence would not suffer from better or worse days; it would not engage in what Willard L. King termed “Breakfast Theory of Jurisprudence.”[10] It could handle cases without breaks for sleep or rest, with memory and computing power limited only by external technical factors like access to electricity and server farms. This would be an unprecedented, hard-to-imagine revolution. Artificial Intelligence would be able to familiarize itself with vast amounts of procedural documents (pleadings, evidence) incomparably faster than a human, identifying key arguments and facts. It could compare these with the existing legal framework and case law and, based on that, issue its judgment.
This vision, however, comes with a whole range of current problems and potential future ones. The first stems from the very nature of so-called Artificial Intelligence. Language Models (LMs) can be classified as Generative AI, meaning they are used to create content in textual, graphic, audio, and other forms. It is absolutely critical to note here that Artificial Intelligence does not constitute “intelligence” in our understanding of the word; at least for now, it lacks autonomy in what it creates. It is a tool, closest to a compiler combined with a randomizing machine. A language model, trained on an enormous amount of data – in this case, texts produced by humans – predicts which word is most likely to appear next. This has far-reaching consequences. A model resolving disputes would have to be trained only on substantively correct data, which already poses difficulties. Is there a sufficient quantity of Polish-language legal provisions, court rulings, and commentaries to train even a very basic dispute-resolving model? What would its quality be? Even if we overlook this problem, such a model would only be as good as the human creations it was trained on, and nothing more.
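The prediction mechanism described above can be illustrated with a deliberately primitive sketch. A real language model uses a neural network with billions of parameters; the bigram frequency table below, built on an invented three-sentence “corpus,” merely demonstrates the principle that the next word is chosen by statistical likelihood and that the output can never exceed the training material:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, the core mechanism described
# in the text. A simple bigram frequency table stands in for the neural
# network of a real LLM. The training "corpus" is invented for
# demonstration purposes only.
corpus = (
    "the court dismissed the claim . "
    "the court upheld the claim . "
    "the court dismissed the appeal ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    followers = bigrams[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("court"))  # prints "dismissed" (the majority continuation)
```

Trained on such a corpus, the model can only ever reproduce continuations it has already seen; fed a skewed or insufficient body of rulings, it would be exactly as limited as that body.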
Philosophical and Doctrinal Perspective of AI Implementation
In this context, it is worth recalling the reflections on law by one of its most eminent theorists, Gustav Radbruch, whose ideas are, paradoxically, highly relevant to the issue at hand. He made most of his key observations after 1945, analyzing the role of law and the normative system since 1933. At that time, dispute resolution (as well as the justice system and the law itself) was based solely on applying the letter of the law, without taking into account non-legal, non-normative humanistic values. Based on those experiences, Radbruch formulated a critique of certain tendencies within the legal system: conventionalism, hard positivism, utilitarianism, and the ignorance of those applying and engaging with the law. According to Radbruch, within institutions, thinking about values is eradicated (the concept of positivism in its institutional variant).[11] If we want to have a stable and trustworthy social institution in the form of law, it must possess specific features to achieve this goal: security, justice, and expediency (purposefulness). Can these values be realized without a human as the decision-maker? Would deciding on human affairs without human participation not be the most extreme manifestation of the dehumanization of the institution that is law? On teleological grounds, one can argue about the purpose of law, but it cannot be denied that law is meant to serve and satisfy humanity, the society it creates, and its sense of justice. There is no basis to claim that this purpose would be more knowable or more intuitive for a machine than for a human being.
In the Polish legal system, there is a constitutionally adopted closed catalog of sources of law, and precedent de jure does not exist as it does in the United States. Nevertheless, the role of case law in interpreting provisions and de facto defining the law is significant and cannot be ignored. There are entire areas of civil law where the role of the courts is nearly law-making (quasi-legislative), for example, in the scope of protection of personal rights, where case law continually “discovers” new personal rights subject to legal protection, thus defining their open-ended catalog contained in the Civil Code. How would Artificial Intelligence interpret general clauses and indeterminate phrases? Concepts like “principles of community life” (pol. “zasady współżycia społecznego”), “good customs” (pol. “dobre obyczaje”), or “good faith” (pol. “dobra wiara”) are not terms with precisely defined content. Their purpose is to provide greater flexibility to the provision and allow discretion for the adjudicating body. Their content is not fixed once and for all; moreover, their understanding is human in nature and inherently linked to the prevailing conventional patterns of life within the community. Even greater difficulty would be posed by “established customs” (pol. “ustalone zwyczaje”), which, under the Civil Code, are interpreted as customs binding within a specific group or relationship.[12] This would require training the model with very general knowledge about the world, which would additionally need to be updated in some way on an ongoing basis. Otherwise, the AI might uncritically accept the parties’ arguments regarding various issues and be unable to verify them.
A language model would also not be entirely immune to manipulation, and not necessarily “through its own fault.” Naturally, any attempts at traditionally understood bribery or pressure on such a decision-maker would be futile and fail. Nevertheless, the models’ creators could probably interfere with them or instill desired inclinations in one way or another. Even if this were not possible, it would be extremely difficult to defend against such an accusation, especially since computer programming and algorithmics are not fields where the general public has widespread knowledge. This would not positively affect the social perception of the administration of justice. Furthermore, a digital judge would not be impartial at all. A model trained on past rulings would adopt the understanding of the law that was most prevalent in the data it was fed. Groups that were historically treated more favorably or advantageously in disputes would remain privileged. For instance, the model would most likely have a predilection for granting custody of a child to mothers or would exhibit distrust toward representatives of the Romani or Muslim communities. In the United States, a language model trained this way would inherit the burden of racially biased case law. The fight against discrimination and prejudice is a social process, not a matter of machine learning. Judicial proceedings where the fate of children is decided would have to be subject to particular attention and control regarding the use of AI. Only a human being, as a social, humanitarian entity, is capable of discerning the nuances and complexity of a given case. In this context, the question also arises as to how the complete resolution of a case by Artificial Intelligence would relate to the constitutional principle of human dignity – a question to which, at least for the moment, we do not know the answer.
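The mechanism by which historical bias survives training can be shown with an intentionally simplified sketch on synthetic data. The 80% figure and the custody scenario below are invented purely for illustration and are not drawn from any real statistics:

```python
import random

# Simplified sketch of how a model trained on historically biased rulings
# reproduces that bias. The "historical record" is synthetic: we assume,
# purely for illustration, that custody was granted to mothers in 80% of
# past cases. A naive model that learns only from recorded outcomes
# inherits exactly this skew.
random.seed(42)
history = ["mother" if random.random() < 0.8 else "father" for _ in range(1000)]

# "Training": the model learns the majority outcome from past data.
granted_to_mother = history.count("mother") / len(history)
model_prediction = "mother" if granted_to_mother > 0.5 else "father"

print(f"historical rate favouring mothers: {granted_to_mother:.0%}")
print(f"model's default prediction for a new case: {model_prediction}")
```

The point of the sketch is that the skew is not a bug in the code but a faithful summary of the data; removing it is a social and curatorial task, not a simple technical patch, which is precisely the argument made above.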
Technical Issues Concerning AI Usage
Turning to strictly technical matters, algorithms, including those behind language models, possess the characteristic of a so-called “black box.” This is a system that can be viewed in terms of its input and output data, without any knowledge of its internal workings. In neural networks or heuristic algorithms (terms used to describe computers that are “learning” or “simulating artificial intelligence”),[13] the black box describes the constantly changing part of the program’s environment that programmers cannot easily test. It follows from this that even if the model were to show its line of reasoning, it would remain opaque at the most fundamental level. Arbitrary assessment is unacceptable when carried out by a human, so why should it be acceptable when an algorithm is rendering a judgment? Furthermore, legal inference is governed by a specific set of rules, stemming from the currently accepted Kelsenian understanding of law as a coherent system, free of gaps and having a hierarchical structure. It would be possible to teach an algorithm these rules, but this would narrow the scope of thinking about legal interpretation. Some concepts regarding the reconstruction and interpretation of norms are explicitly mutually exclusive, for example, the derivational concept and the clarificatory concept. The decision as to which is more desirable would presumably have to be made by a human during the AI’s development stage.
It is also undeniable that language models experience hallucinations,[14] and due to their structural nature as “black boxes,” it is impossible to determine why this happens. For example, the most well-known language model once recommended that people consume one stone per day.[15] It is absolutely inevitable that a dispute-resolving model would also be afflicted by this phenomenon. How would one deal with a judgment that is not merely issued in violation of the law, but is entirely devoid of meaning? Would an appeal (or any other appellate measure) be available against it, perhaps due to a technical defect in the adjudicating model? If so, would the panel of judges in the higher-instance court then be composed of humans who would verify the algorithm’s ruling? Would the Supreme Court also be replaced by a language model (setting aside that the Supreme Court handles cases beyond just civil law)? Would appellate review of judgments even make any sense in such a situation if the same language model “sat” on every judicial panel? According to Article 48 § 1 point 5 of the k.p.c., this is illegal and constitutes grounds for the nullity of proceedings by operation of law (Article 379 point 4 of the k.p.c.). Regardless of these speculations, it is obvious that civil proceedings would require deep structural reform to create the legal framework for such changes.
Language models resolving disputes via the judicial route would also present problems related to their technical foundation (hardware) and the software itself. Server farms supporting Artificial Intelligence that administers justice (of any, even the lowest, level) would undoubtedly be considered absolutely strategic infrastructure for the state and its sovereignty. In a period of geopolitical upheaval and general instability, they would be vulnerable to attacks and sabotage. On one hand, in the event of any conflict, not necessarily an armed one, they would certainly be among the first targets of attack, as a straightforward way to paralyze the state and deprive it of the ability to perform one of its main functions. On the other hand, it is difficult to imagine locating such “decisive” infrastructure on the territory of a foreign state, even an allied one, as this would be a very real exposure of sovereignty and a reliance on the goodwill of the host. States more advanced in the sphere of digitalization than Poland possess so-called “digital embassies” (e.g., Estonia in Luxembourg), but these only store backup copies of public databases.[16] Simultaneously, using the services of commercial entities for this purpose is also precluded, especially given their nature and their entanglement, along with that of their creators, in ongoing disputes and politics. For example, American regulations like the CLOUD Act or FISA enable US services to access data stored by American technology companies, regardless of the location of the servers. Outsourcing public services to private commercial entities can also be more costly than delivering them within public structures, as is the case, for example, with the prison system.
One could also analyze another scenario where dispute resolution by AI would constitute a pre-judicial stage of proceedings or would function as a “Justice of the Peace.” Perhaps it could resolve disputes non-bindingly, as a kind of “trial run.” Here, we are moving closer to arbitration and mediation[17] and are closest to the current reality, as there are already Polish commercial entities[18] and institutions[19] that use AI in the context of arbitration or amicable courts, but, similar to China, not for adjudication sensu stricto. The judiciary has a constitutional monopoly only on the binding resolution of disputes, and dispute resolution does not necessarily have to take place judicially and does not have to be binding. Furthermore, being aware that the civil departments of common courts are in a state of effective “clinical death,”[20] this path to enforcing one’s rights is extremely inefficient, and for actions requiring immediate reaction (e.g., the court issuing an injunction), its effectiveness is merely illusory. Judicial dispute resolution, despite being the only method secured by state coercion, should unfortunately be advised only as a last resort after exhausting all other options, as it is a measure that is grossly inefficient, incredibly time-consuming, and expensive – so much so that for disputes with low values, it is outright uneconomical, especially if professional representation costs are taken into account.
If we were to seek an answer to the question of whether it is time to reflect on the current dispute resolution model, the answer would undoubtedly be affirmative. Dispute resolution is a derivative of broader, long-term global changes that have recently accelerated significantly. Conventional methods of seeking justice must respond to this accelerated pace of affairs and the intensified volume of legal transactions; otherwise, the public will turn to unconventional methods or even unlawful self-help, as is done, for example, by property owners who cannot regain possession from non-paying tenants. The volume of legal disputes will only increase as a direct result of the extensive juridification and regulation encompassing essentially all spheres of social life; ironically, even deregulation proposals have resulted in adding new provisions to the codes, ultimately increasing the volume of the supposedly deregulated legal acts.[21]
Implications of EU’s AI Act
Potential new methods of dispute resolution, including for civil disputes, that utilize Artificial Intelligence no longer exist in a legal vacuum. The European Parliament adopted Regulation (EU) 2024/1689, known as the AI Act,[22] which establishes a general ethical framework for AI systems and encourages the development of ethical codes. These provisions concern all providers and users of AI systems. The AI Act classifies AI systems in the administration of justice as high-risk systems due to their “potentially significant impact on democracy, the rule of law, and personal freedoms.” Classification as a “high-risk” system entails an obligation for conformity assessment, certification before deployment, risk management, performance monitoring, and algorithmic transparency. Courts and prosecutors’ offices will have to obtain certification of conformity with the AI Act and ensure that the system’s operation can be explained.[23] The AI Act itself has numerous shortcomings; for instance, it does not provide a simple answer to the question of whether judges should disclose to the parties the use of AI in the adjudicatory process.[24] When considering AI in the context of adjudication, it is impossible to ignore the fact that it would also have to be proficient in EU law, as many individual rights originate from Union law, and Polish national courts are, by treaty, simultaneously EU courts. On a side note, one can only imagine the hysteria in the public discourse (or perhaps: political discourse) if a Polish adjudicating AI model were required to recognize the primacy of EU law, even though this is currently the case. One may also wonder whether a dispute-resolving language model could refer a preliminary question to the Court of Justice of the European Union and what the answer would look like.
Implementation of AI in Polish Judiciary System
AI is a revolution taking shape in the labor market, and there is nothing to indicate that it will bypass courts and the justice system. Professions such as judge’s assistant will be largely replaced by AI, which can perform similar work incomparably faster and more efficiently than a human.[25] A pilot program for a digital judge’s assistant is already underway, currently in “Swiss franc loan” cases, and the creation of the Central Repertory and Office System (pol. “Centralny System Repertoryjno-Biurowy”)[26] is also planned, among other things. Very often, judges already use AI on their own initiative to streamline their work, which is hardly surprising; however, this is concerning and potentially dangerous, given the current lack of procedures regulating such AI use. Who bears legal responsibility for an erroneous decision made or suggested by the AI? Concerns are particularly raised by the sharing of sensitive information and personal data with the AI. Other countries have already begun the process of standardizing such rules – in the United Kingdom, such standards began to be created based on what was developed for representatives of other legal professions. These guidelines are contained in the Artificial Intelligence (AI) Guidance for Judicial Office Holders from 2023.[27]
The threats associated with AI can be multiplied, as new dilemmas and potential problems will emerge with the development of this technology, whose capabilities will grow exponentially, developed on unimaginable databases acquired through our increasingly frequent interaction with the models. One can already speak of a feedback loop mechanism at work: the more we, as users, use AI, the better and more widespread it becomes, and the more widespread and better it is, the more willingly we reach for it. This process also has financial backing, government support, and is being managed by some of the sharpest minds of our time. This is no longer the future, but a nascent present.
However, looking at these issues from a distance, it is impossible not to notice that AI in court is an opportunity for genuinely tectonic and revolutionary changes for the better. As mentioned at the outset, the main ailment of the justice system, especially civil disputes, is the protracted nature of proceedings, and the scope for optimization using Artificial Intelligence is immense. LLMs can be used for the rapid analysis of enormous amounts of procedural documents, identifying key arguments, facts, and relevant rulings. Work on these types of improvements is already underway, such as the aforementioned “digital judge’s assistant.” AI could also analyze incoming pleadings for formal errors. A native Polish language model, PLLuM (Polish Language Model), has also been developed for Polish needs, intended for use in public administration and the justice system, among other areas.[28] The model is meant to support judges[29] and their assistants during tasks such as drafting justifications. The entire court modernization program, “Digital Court” (“Cyfrowy Sąd”), being implemented by the Ministry of Justice, is planned to last until 2029 and includes a pilot program for the complete digitalization of the work of the Court of Competition and Consumer Protection, the introduction of a central repertory and office system for all common courts, the implementation of the Electronic Enforcement Procedure (EPU) version 3.0, electronic filing of pleadings, the hybridizing of case files, digital document circulation, and digital procedures.[30] The use of AI in courts and digitalization will initially require huge financial outlays for the development and implementation of these solutions, but in the long term, this will save time and money and improve the level of satisfaction with the “services” of the administration of justice, which is, to put it mildly, currently low. 
In the longer term, the automation of simpler tasks can lower the administrative costs of courts, potentially leading to lower court fees and reduced burdens for the parties. Unfortunately, from the perspective of concern for the climate and environment, neither the use of the highly energy-intensive technical infrastructure for artificial intelligence nor the storage of kilometers of paper files is commendable.
While AI in the resolution of civil disputes is unlikely to violate the principle of equality of the parties in civil proceedings, it may pose a serious threat to the principle of “equality of arms” in criminal proceedings.[31] The use of AI by courts will also necessitate the increasingly widespread adoption of these tools by commercial entities, including those providing legal services. The “equality of arms” in this matter is, moreover, highly desirable, as pleadings generated using AI, often of questionable quality, are already being submitted to law firms and court filing offices. For instance, the Labor Court in Turin ruled that a complaint prepared with the support of artificial intelligence and limited to a chaotic set of abstract legal citations constituted an act of bad faith or gross procedural negligence, resulting in the imposition of additional financial sanctions based on Article 96 of the Italian Code of Civil Procedure.[32] Many law firms already possess or plan to implement their own internal AI systems. The market for providers of such services is experiencing an absolute boom. The widespread use of these tools will also signify a revolution in employment and work methods within the legal sector. The role of the lawyer – be it a judge, prosecutor, or professional counsel – will increasingly resemble the work of a reviewer and verifier of the increasingly perfect outputs of artificial intelligence. A decline in employment will be noticeable in both the public and private sectors, as many tasks will simply require significantly less labor input. A particularly difficult future is looming for trainees, legal apprentices, and young lawyers. Simultaneously, legal assistance and dispute resolution will become more accessible, as some legal problems and questions will bypass law firm desks, let alone court dockets, because their authors will be able to find a solution by asking Artificial Intelligence for a legal opinion. 
This will also mean that lawyers will focus on more complex problems, or, to be optimistic, more interesting ones, where such simple AI advice and solutions are insufficient.
Simultaneously, adjudicatory practice, broadly understood, will become standardized. By relying on vast datasets of case law, AI can support judges in striving for greater consistency and predictability of judgments in similar cases, minimizing errors resulting from subjective factors (data-driven justice). AI would also open an entirely new chapter in the assessment of regulatory impact, both ex ante and ex post. The model could help optimize existing legislation by gathering data from proceedings, and it could, quasi de lege ferenda (proposals for future law), propose new regulations that would prove most effective. AI could also predict social conflicts on legal grounds by analyzing which cases are filed with the courts and which resolutions are subsequently handed down in them. AI could also provide relatively impartial tools for monitoring the efficiency of courts, applying objective criteria such as the waiting time for a hearing or the stability and durability of rulings issued at a given instance. With good will, this would allow for at least a partial depoliticization of the supervision of the justice system and ground it in transparent and objective criteria.
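The objective monitoring criteria mentioned above (waiting time for a hearing, stability of rulings) are straightforward to compute once case data is digitized. The sketch below uses invented field names and three invented sample records purely to illustrate the idea:

```python
from datetime import date
from statistics import mean

# Sketch of two objective court-monitoring metrics: average waiting time
# for a first hearing and the "stability" of rulings (the share of
# first-instance judgments upheld on appeal). The field names and the
# sample records are invented for illustration only.
cases = [
    {"filed": date(2023, 1, 10), "first_hearing": date(2024, 2, 1),  "upheld": True},
    {"filed": date(2023, 3, 5),  "first_hearing": date(2024, 6, 20), "upheld": False},
    {"filed": date(2023, 5, 15), "first_hearing": date(2024, 1, 30), "upheld": True},
]

# Date subtraction yields a timedelta; .days gives the waiting period.
avg_wait_days = mean((c["first_hearing"] - c["filed"]).days for c in cases)
stability = sum(c["upheld"] for c in cases) / len(cases)

print(f"average wait for first hearing: {avg_wait_days:.0f} days")
print(f"ruling stability (upheld on appeal): {stability:.0%}")
```

Because both measures are mechanical aggregates over case records, they can be published and audited, which is what would permit the depoliticized, criteria-based supervision the text envisages.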
On a side note, it is worth mentioning that AI in court is also a huge opportunity for improving translations from foreign languages, especially rare languages. Artificial Intelligence could perform real-time translations, eliminating the need for professional translators, who generate considerable costs. Certified translators could, for example, verify the translation and explain nuances. They would still be indispensable, as many languages, especially those outside our cultural sphere, have very different ways of describing the world through language, which can be of considerable importance, for example, in family law or the law of obligations. This type of mechanization of the translation process would certainly be beneficial from the perspective of reducing the duration of proceedings in cases where the use of a translator for a rare language is necessary.
Artificial Intelligence also offers broad prospects for the development of education and professional improvement for judges and assessors. The possibilities are nearly limitless, from the implementation of training systems for legal trainees and law students to the creation of fully personalized professional development paths for court employees, adapting material to their specific needs. For didactic purposes, AI can generate examples of difficult cases and design highly realistic simulations of civil proceedings, drawing on current court practice. Furthermore, the technology can significantly facilitate access to rulings and commentaries, making legal resources more transparent. Finally, AI can act as a proofreader and reviewer, supporting students and lawyers in writing precise justifications and opinions.
Conclusion
The analysis confirms that while AI may be an inevitable and necessary tool for combating the crisis of protracted court proceedings in the Polish justice system, its role must remain supportive, not adjudicatory. AI offers immense potential for efficiency, standardization of case law, and administrative optimization, driving projects such as the “Digital Court.” The prospect of AI as a sole decision-maker, however, must be rejected on account of fundamental flaws: the dehumanization of law when humanistic values and general clauses are ignored, the technical opacity of the “black box” and its tendency to perpetuate historical biases, and critical sovereignty risks. Although the EU’s AI Act provides a regulatory framework, reform of court procedures is urgently needed to channel AI’s benefits into support functions and alternative dispute resolution. Ultimate responsibility for any legally binding decision affecting human lives must therefore remain exclusively with the human judge.
Perhaps it is worth concluding by asking Artificial Intelligence itself: “Would you be ready to resolve disputes in courts?” The ChatGPT 5.0 model, to which I directed this question, replied: “That’s an interesting question. No, I would not be able to resolve civil disputes in courts – that task belongs exclusively to human beings holding the authority of a judge, operating within the legal system of a given state. However, I can assist in case analysis, e.g., by summarizing facts, regulations, and case law; prepare legal arguments (e.g., for a lawsuit, answer to a lawsuit, appeal); simulate possible judgment scenarios to better understand risks; and facilitate mediation, helping parties find a compromise. But the final resolution of the dispute – meaning the decision with legal force – must always be made by a human, a judge.”
References
[1] Constitution of the Republic of Poland of April 2, 1997, Journal of Laws 1997, No. 78, item 483 as amended [Konstytucja Rzeczypospolitej Polskiej z dnia 2 kwietnia 1997 r., Dz.U. 1997 nr 78 poz. 483 ze zm.].
[2] B. Stępień-Załucka, Sprawowanie wymiaru sprawiedliwości przez Sąd Najwyższy w Polsce, C.H. Beck 2016, p. 1.
[3] P. Rojek-Socha, A. Partyk, In the judge’s office, 700 cases, the first hearing, and after two years [W referacie sędziego 700 spraw, pierwsza rozprawa i po dwóch latach], Prawo.pl 2024, https://www.prawo.pl/prawnicy-sady/sytuacjach-w-wydzialach-cywilnych-czas-oczekiwania-na-rozprawy,528682.html, (access 18.10.2025); K. Michałowski, Two years of waiting for the date of the first court hearing. The RPO intervenes at the Ministry of Justice. There is a response [2 lata czekania na termin pierwszej rozprawy sądowej. RPO interweniuje w Ministerstwie Sprawiedliwości. Jest odpowiedź], Biuletyn Informacji Publicznej RPO 2022, https://bip.brpo.gov.pl/pl/content/rpo-ms-dlugie-terminy-na-rozpoznanie-sprawy-przez-sady-odpowiedz, (access 18.10.2025).
[4] Announcement of the Marshal of the Sejm of the Republic of Poland of October 10, 2024, on the publication of the consolidated text of the Act – Code of Civil Procedure, Journal of Laws 2024, item 1568 [Obwieszczenie Marszałka Sejmu Rzeczypospolitej Polskiej z dnia 10 października 2024 r. w sprawie ogłoszenia jednolitego tekstu ustawy – Kodeks postępowania cywilnego, Dz.U. 2024 poz. 1568].
[5] A. Marciniak, Dział I. Zagadnienia ogólne, in: Postępowanie cywilne w zarysie, eds. A. Marciniak, W. Broniewicz, I. Kunicki, Wolters Kluwer Polska 2023, pp. 43–44.
[6] P. Dolniak, et al., Sztuczna inteligencja w wymiarze sprawiedliwości. Między prawem a algorytmami, Wolters Kluwer Polska 2024.
[7] LM – language model; LLM – large language model.
[8] China Court Trial Online, https://tingshen.court.gov.cn/, (access 19.10.2025).
[9] M. Szmidt-Husarz, The cyber revolution in China, or digital courts “Made in China” [Cyberrewolucja w Chinach, czyli cyfrowe sądy “Made in China”], Chiny24 2020, https://chiny24.com/prawo/cyfrowe-sady-made-in-china, (access 19.10.2025).
[10] W.L. King, Breakfast Theory of Jurisprudence, 14 Dicta 143 (1937).
[11] G. Radbruch, Filozofia prawa, Wydawnictwo Naukowe PWN 2009, pp. 241–254.
[12] A. Brzozowski, W.J. Kocot, E. Skowrońska-Bocian, Prawo cywilne. Część ogólna, Wolters Kluwer Polska 2022, p. 63.
[13] A. Kuchta, Uncovering the secrets of the AI black box. Why is understanding how artificial intelligence works crucial for the future? [Odkrywanie tajemnic czarnej skrzynki AI. Dlaczego zrozumienie działania sztucznej inteligencji jest kluczowe dla przyszłości?], Infor 2023, https://ai.infor.pl/sztuczna-inteligencja/5757192,czarna-skrzynka-ai-wyzwania-i-korzysci-dla-przyszlosci.html, (access 17.09.2025).
[14] Situations in which Artificial Intelligence generates responses containing false or misleading information, presenting it as fact.
[15] N. Faleńczyk, Artificial intelligence recommends eating one stone per day. A real revolution in Google search! [Sztuczna inteligencja zaleca jedzenie jednego kamienia dziennie. Istna rewolucja w wyszukiwarce Google!], PurePC 2024, https://www.purepc.pl/sztuczna-inteligencja-zaleca-jedzenie-jednego-kamienia-dziennie-istna-rewolucja-w-wyszukiwarce-google, (access 18.10.2025).
[16] J. Kibitlewski, In their country, it only takes a few clicks to get divorced. The problem is that Russians know this too [Żeby się rozwieść, w ich kraju wystarczy parę kliknięć. Kłopot w tym, że Rosjanie też to wiedzą], Gazeta Wyborcza 2025, https://wyborcza.pl/magazyn/7,124059,32279422,zeby-sie-rozwiesc-w-ich-kraju-wystarczy-pare-klikniec-klopot.html, (access 17.10.2025).
[17] In the case of mediation, we are not dealing with dispute resolution sensu stricto, as the mediator merely helps the parties reach an agreement on their own.
[18] ENOIK Arbitration Court [ENOIK Sąd Arbitrażowy], https://enoik.pl/#steps, (access 15.10.2025).
[19] P. Rojek-Socha, The notaries’ e-court of arbitration is already using artificial intelligence [E-sąd polubowny notariuszy już korzysta ze sztucznej inteligencji], Prawo.pl 2025, https://www.prawo.pl/prawnicy-sady/e-sad-polubowny-notariuszy-juz-korzysta-ze-sztucznej-inteligencji,533732.html, (access 18.10.2025).
[20] P. Rojek-Socha, A. Partyk, In the judge’s office…, op. cit.
[21] J. Magulska, Map of deregulation projects relevant from the perspective of a professional attorney [Mapa projektów deregulacyjnych istotnych z perspektywy profesjonalnego pełnomocnika], Wolters Kluwer 2025, https://www.wolterskluwer.com/pl-pl/expert-insights/mapa-projektow-deregulacyjnych, (access 18.10.2025).
[22] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689, (access 17.11.2025).
[23] W. Wołoszyk, AI in legal translations – between technological optimism and practical reality [AI w tłumaczeniach sądowych – między technologicznym optymizmem a rzeczywistością praktyki], Prawo.pl 2025, https://www.prawo.pl/prawnicy-sady/sztuczna-inteligencja-w-tlumaczeniach-sadowych-miedzy-technologicznym-optymizmem-a-rzeczywistoscia-praktyki,535194.html, (access 18.10.2025).
[24] K. Wasik, AI in the justice system: public or covered by professional secrecy? [AI w wymiarze sprawiedliwości: jawna czy objęta tajemnicą zawodową?], Rzeczpospolita 2025, https://pro.rp.pl/sady-i-prokuratura/art43137151-ai-w-wymiarze-sprawiedliwosci-jawna-czy-objeta-tajemnica-zawodowa, (access 18.10.2025).
[25] Artificial intelligence in courts. A judge’s assistant is no longer fiction [Sztuczna inteligencja w sądach. Asystent sędziego to już nie fikcja], pulsHR.pl 2025, https://www.pulshr.pl/zarzadzanie/sztuczna-inteligencja-w-sadach-asystent-sedziego-to-juz-nie-fikcja,114801.html, (access 19.10.2025).
[26] P. Rojek-Socha, AI is knocking on the doors of courts and prosecutors’ offices—where, for now, there are no standards in place [AI puka do sądów i prokuratur – a tam na razie standardów brak], Prawo.pl 2025, https://www.prawo.pl/prawnicy-sady/ai-w-sadach-i-prokuraturach-czy-jest-wykorzystywane-co-ze-standardami,533509.html, (access 19.10.2025).
[27] Artificial Intelligence (AI) – Judicial Guidance, Courts and Tribunals Judiciary 2025, https://www.judiciary.uk/guidance-and-resources/artificial-intelligence-ai-judicial-guidance-2/, (access 20.10.2025).
[28] PLLuM – Polish Language Model. How does it work and what can it be used for? [PLLuM – polski model językowy. Jak działa i do czego może się przydać?], Ministerstwo Cyfryzacji 2025, https://www.gov.pl/web/cyfryzacja/pllum--polski-model-jezykowy-jak-dziala-i-do-czego-moze-sie-przydac, (access 20.10.2025).
[29] M. Kobylański, Artificial intelligence will help judges, but it will not replace them [Sztuczna inteligencja pomoże sędziom, ale ich nie zastąpi], Legalis 2024, https://legalis.pl/sztuczna-inteligencja-pomoze-sedziom-ale-ich-nie-zastapi/, (access 18.10.2025).
[30] The “Digital Court” program will change the justice system [Program „Cyfrowy Sąd” zmieni wymiar sprawiedliwości], Ministerstwo Sprawiedliwości 2025, https://www.gov.pl/web/sprawiedliwosc/program-cyfrowy-sad-zmieni-wymiar-sprawiedliwosci, (access 19.10.2025).
[31] A fundamental principle in criminal law, enshrined primarily in the case law of the European Court of Human Rights, ensuring a fair balance between the prosecution and the defense. See also: M. Lech, Możliwość wykorzystywania narzędzi opartych o sztuczną inteligencję w postępowaniu karnym w Polsce, “Wrocławskie Studia Erasmiańskie = Studia Erasmiana Wratislaviensia”, 2023, Vol. 17, pp. 209–226.
[32] M. Drażbo, Ruling on the use of AI in drafting pleadings [Wyrok w sprawie wykorzystania AI przy sporządzaniu pism procesowych], Legalis 2025, https://legalis.pl/wyrok-w-sprawie-wykorzystania-ai-przy-sporzadzaniu-pism-procesowych/, (access 20.10.2025).