Harnessing Artificial Intelligence for Efficiency and Risk Management: Navigating the Pitfalls of Incorrect Decisions and Lack of Transparency
By Kilian Pfahl*
Assessing the Legitimacy of AI Integration in Corporate Operations
The arrival of advanced Large Language Models (LLMs) like ChatGPT presents businesses with a significant technological milestone. These models can process vast amounts of data rapidly and generate precise responses, unlocking countless opportunities for corporations. However, they also pose substantial legal and operational risks, particularly in matters of corporate due diligence. For business leaders, three questions linger: How can we responsibly wield this technology? Can we trust its output? And does corporate due diligence itself necessitate employing LLMs in operational processes?
Key Role in M&A Deals and Beyond
LLMs thrive in data-intensive areas with demanding legal requirements, which opens up a broad range of applications. In M&A transactions, for example, these models drastically accelerate due diligence by swiftly analyzing extensive documents and identifying critical contract clauses or risks early on. A more efficient, less error-prone review phase gives decision-makers a solid foundation for informed choices.
LLMs are also invaluable in contract management, where they locate ineffective clauses and suggest improvements. In areas like non-compete agreements or liability limitations, they help ensure contracts meet legal requirements and sidestep strategic missteps. Moreover, LLMs find their place in compliance monitoring, enabling companies to stay current with evolving legal domains like data protection and financial regulation and to implement adjustments in a timely manner.
The Mirage of Accuracy
Despite the numerous advantages of LLMs, their potential pitfalls must not be overlooked. Business leaders must recognize the limitations of this technology to mitigate errors and liability risks. It is essential to understand that LLMs are not genuinely "strong AI." They are advanced text generators that calculate, from colossal amounts of training data, which sequence of words is statistically most likely to come next. They mimic intelligence rather than possess it. The result is answers that often seem plausible but are not always correct.
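To make this concrete, the toy model below picks the next word purely by counting how often words follow one another in a tiny corpus. It is an illustrative sketch only, orders of magnitude simpler than a real LLM, but it shows the same principle: output is chosen by statistical frequency, not by any judgment about what is true.

```python
# Illustrative sketch only: a toy bigram model that picks the most
# probable next word, mimicking (at a vastly smaller scale) how an LLM
# generates text from probabilities rather than from understanding.
from collections import Counter, defaultdict

corpus = "the contract is valid the contract is void the clause is valid".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most frequent successor of `word`."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else "<end>"

print(most_likely_next("contract"))  # -> "is": chosen by frequency alone
print(most_likely_next("is"))        # -> "valid": a statistical guess, not a legal finding
```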
LLMs have no human capacity to understand or know; they cannot validate content or engage in genuine reasoning. Their strength lies in discerning statistical relationships between words and weaving them coherently into text. The outcome is a derivative of probability calculations, with no guarantee of accuracy. Leaders should therefore never presume that LLMs deliver complete or infallible answers, especially in pivotal, legally relevant areas. Instead, LLMs should function as supportive tools, not as exclusive decision-makers. Relying solely on their output, without review by human experts or qualified professionals, might be deemed a breach of due diligence.
The Shrouded Decision-Making Process
Another danger revolves around the opaque nature of LLM decision-making processes. These models often function as black boxes, making their inner workings and logic challenging for users to comprehend. Yet business leaders are bound by law to base decisions on traceable and transparent information. The Business Judgement Rule (§ 93 Abs. 1 of the German Stock Corporation Act, AktG) demands that business decisions be well-founded and their basis transparent. Without such transparency, liability claims may arise, as an insufficiently documented decision-making process might be viewed as a breach of the duty of care.
Furthermore, LLMs can be influenced by biases in their training data. These models are trained on vast datasets that often harbor historical biases or unequal representations. Such biases may unwittingly permeate and distort generated answers, leading to flawed or even discriminatory decisions.
Liability and Duty of Care
The legal responsibility of business leaders regarding the use of LLMs represents a significant challenge. Under the Business Judgement Rule, leaders are shielded from liability if their decisions are based on appropriate information and made with the necessary care. The question arises whether LLM-derived results meet this criterion. The mirage of accuracy discussed above might lead business leaders to make decisions based on seemingly plausible but fundamentally incorrect data.
In such scenarios, leaders must ensure that LLMs function as supporting tools, not as exclusive decision-makers. Decisions predicated solely on AI-generated content could be seen as negligent.
Documentation Conundrums
Another challenge is documenting decision-making processes. In the event of a liability claim, business leaders must provide evidence that their decisions were based on sound and verifiable information. The black-box nature of LLMs, however, hinders the presentation of such evidence. A court might perceive the lack of transparency as a breach of the duty of care.
Additionally, the "Caremark Duty" mandates that business leaders implement suitable monitoring mechanisms to promptly identify and address risks. If LLMs are integrated into risk management, the systems used must remain transparent and reliable. This encompasses controlling potential risks such as bias or misjudgments. A lack of control over, or transparency into, AI decisions could prove problematic in liability proceedings.
Beyond these risks, the question persists whether business leaders are obligated, as part of their duty of care, to integrate LLMs into their decision-making processes. The duty of care requires the use of all suitable means to enhance efficiency and improve quality. An obligation to employ LLMs may therefore emerge in scenarios where the technology demonstrably improves decision-making efficiency and quality. Failing to deploy it in such cases could be considered a breach of the duty of care, potentially leading to suboptimal decisions that would have been avoidable with LLM support.
Responsible Employment of LLMs
Business leaders face the task of maximizing the advantages of LLMs without compromising their responsibility and duty of care. To do so, these tools should become integral components of comprehensive decision-making processes that, at today's state of the art, necessarily incorporate human judgment. In certain contexts, using AI may not merely be optional but necessary, namely where it significantly enhances decision-making efficiency and quality. This, however, requires clear governance, defined processes, and employee training to meet legal requirements and minimize the associated risks. Only then can AI be employed responsibly and in accordance with the duty of care.
Dr. Kilian Pfahl is a Senior Associate at Hogan Lovells.
Enrichment Insights:
To responsibly integrate LLMs like ChatGPT while enhancing efficiency and managing risks, businesses should adopt a structured approach:
- Define Clear Objectives
- Focus: Specify use cases (e.g., contract summarization[1], customer support automation[3]) to avoid scope creep and ensure measurable outcomes.
- Example: Narrowing an LLM’s role to extracting key clauses from legal documents improves accuracy and relevance[1] (a minimal sketch follows below).
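A minimal sketch of such a narrowly scoped extraction task, in Python. The `call_llm` function is a hypothetical wrapper around whichever LLM provider a company uses, and the clause types are illustrative assumptions; this is a sketch of the pattern, not a vendor-specific implementation.

```python
from typing import Callable

# Illustrative clause types; adjust to the transaction at hand.
CLAUSE_TYPES = ["non-compete", "limitation of liability", "change of control"]

def extract_key_clauses(contract_text: str,
                        call_llm: Callable[[str], str]) -> dict[str, str]:
    """Query one narrowly scoped clause type per request.

    Single-purpose prompts reduce scope creep and make each answer
    easy for counsel to verify against the source document.
    """
    results = {}
    for clause in CLAUSE_TYPES:
        prompt = (
            f"Quote verbatim any '{clause}' clause in the contract below. "
            "If none exists, answer exactly 'NOT FOUND'.\n\n" + contract_text
        )
        results[clause] = call_llm(prompt)  # hypothetical provider wrapper
    return results
```

Every extracted clause must still be verified against the source document by a qualified professional before it informs a decision.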
- Implement Security and Compliance Safeguards
- Data protection: Regularly update security protocols, encrypt sensitive data, and conduct penetration testing[1][4].
- Employee training: Educate staff on data privacy, including handling confidential information and recognizing legal boundaries[1][4].
- Auditable outputs: Ensure LLM-generated content is traceable and reviewable for compliance[4] (see the logging sketch after this list).
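One way to make outputs traceable is to wrap every model call in an audit logger. The sketch below, with the same hypothetical `call_llm` wrapper and an assumed log path, appends each prompt/response pair with timestamps and content hashes to an append-only JSONL file; real deployments would use a tamper-evident store.

```python
import datetime
import hashlib
import json
from typing import Callable

AUDIT_LOG = "llm_audit.jsonl"  # assumed path; use a tamper-evident store in production

def audited_call(prompt: str, call_llm: Callable[[str], str],
                 model_id: str = "internal-llm-v1") -> str:
    """Call the model and persist a reviewable record of the exchange."""
    response = call_llm(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_id,
        # Hashes allow later integrity checks on the logged text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt": prompt,
        "response": response,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return response
```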
- Optimize Workflows with Strategic LLM Deployment
- Task assessment: Identify repetitive, logic-based tasks (e.g., customer query resolution) where LLMs add value without introducing risk[3].
- Model training: Fine-tune LLMs on domain-specific data to minimize errors[5], or use RAG (Retrieval-Augmented Generation) to ground responses in verified sources[5] (a minimal RAG sketch follows after this list).
- Feedback loops: Continuously refine models using performance metrics and user feedback[3][5].
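A minimal sketch of the RAG pattern referenced above: retrieve the most relevant passages from a verified internal corpus and instruct the model to answer only from them. The word-overlap retriever is deliberately naive and purely illustrative; production systems would use embedding-based search. `call_llm` is again a hypothetical provider wrapper.

```python
from typing import Callable

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank verified documents by crude word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_answer(query: str, documents: list[str],
                    call_llm: Callable[[str], str]) -> str:
    """Force the model to answer only from retrieved, verified sources."""
    context = "\n---\n".join(retrieve(query, documents))
    prompt = (
        "Answer ONLY from the sources below. If they do not contain the "
        f"answer, say so explicitly.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)  # hypothetical provider wrapper
```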
- Legal Due Diligence and Risk Mitigation
- Bias checks: Audit outputs for discriminatory language or inaccuracies that could lead to liability.
- Compliance frameworks: Align LLM usage with regulations like GDPR, ensuring transparency in data usage and consent[1][4].
- Fallback protocols: Establish human oversight for high-stakes decisions (e.g., legal advice, financial forecasting)[1][3], as sketched below.
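A fallback protocol can be as simple as a routing gate: output in defined high-stakes categories is never released automatically but queued for human review. The category names and in-memory queue below are illustrative assumptions standing in for a real ticketing system.

```python
# Illustrative category names; align these with the company's risk taxonomy.
HIGH_STAKES = {"legal_advice", "financial_forecast", "compliance_assessment"}

review_queue: list[dict] = []  # stand-in for a real review/ticketing system

def route_output(category: str, llm_output: str) -> str | None:
    """Release low-stakes output; escalate everything high-stakes to a human."""
    if category in HIGH_STAKES:
        review_queue.append({"category": category, "draft": llm_output})
        return None  # nothing is released without human sign-off
    return llm_output

print(route_output("customer_faq", "Our office hours are 9-17."))  # released
print(route_output("legal_advice", "The clause is enforceable."))  # escalated -> None
```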
- Modular and Scalable Integration
- Component-based architecture: Decouple LLM services from core systems to enable updates without operational disruption[2] (see the interface sketch after this list).
- Scalability testing: Ensure infrastructure can handle increased demand while maintaining performance and security[2][4].
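The sketch below illustrates component-based decoupling: business logic depends only on a narrow, self-defined interface, so the underlying model or vendor can be swapped or upgraded without touching operational code. The class and function names are illustrative assumptions, not any specific vendor's SDK.

```python
from abc import ABC, abstractmethod

class LLMService(ABC):
    """Narrow interface the core systems depend on; providers plug in behind it."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoStub(LLMService):
    """Trivial stand-in used for tests and scalability drills."""
    def complete(self, prompt: str) -> str:
        return f"[stub] {prompt[:40]}"

def summarize_contract(text: str, llm: LLMService) -> str:
    # Core logic knows only the interface, never the concrete vendor SDK.
    return llm.complete("Summarize the key obligations:\n" + text)

print(summarize_contract("The seller warrants that ...", EchoStub()))
```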
By embracing these practices, businesses can leverage LLMs to streamline workflows while proactively addressing legal, operational, and ethical risks. Prioritizing transparency, security, and iterative improvement ensures alignment with corporate due diligence requirements[1][4][5].
- Business leaders must acknowledge that LLMs are not truly "strong AI," but rather advanced text generators that base their responses on the probability of the next most likely sequence of words.
- The potential pitfalls of LLMs should not be ignored, as their answers may seem plausible but are not always factually correct. Therefore, leaders should use LLMs as supportive tools, not as exclusive decision-makers, and rely on human expertise and qualified professionals to verify the information.
- Due to the opaque nature of LLM decision-making processes, leaders must ensure that their use of these models is transparent and well-documented to comply with the Business Judgement Rule. This means maintaining accountability for decisions based on data generated by LLMs.
- Incorporating LLMs into the decision-making process might itself become part of the duty of care where the technology demonstrably improves decision-making efficiency and quality. A lack of diligence in adopting LLMs could then lead to suboptimal decisions that would have been avoided with their support, which in turn may constitute a breach of that duty.
