How to draft an AI policy for your accounting firm

As the accounting profession increasingly adopts generative artificial intelligence (GenAI), firms must establish a framework for responsible AI use.

In my previous article, “ChatGPT for Accountants: 3 Tips to Enhance Firm Productivity,” I explored AI’s power to transform work in accounting firms. But with great power comes great responsibility! We must consider the impact of AI on our team, clients, and the public.

This article will explore formulating AI principles tailored to an accounting firm, including the impact on all stakeholders, how GenAI differs from previous technologies, and what to consider when drafting your policy.

Everyone is affected by AI

With AI, firms can redefine workflows to handle complex tasks with remarkable efficiency. This will alter many jobs, and accounting teams will require continuing education to keep pace.

Clients will expect more personalized, accurate, and timely insights. And they’ll demand faster responses as AI-powered client service becomes the norm. To keep clients happy, firms will need to become more nimble.

When it comes to the public, the shift to AI will redefine personal relationships that have historically underpinned trust in the accounting profession. To maintain that trust, we must integrate AI into our firms with care and responsibility, ensuring that AI complements our expertise and values rather than overshadowing them.

How GenAI is different from past innovations

Many firms already have a technology-use policy as a foundation for ethical technology deployment. However, GenAI operates differently than past technological advancements. You must understand this fundamental difference before drafting an effective AI use policy.

Unlike traditional software that follows predetermined, linear processes, GenAI operates on a dynamic, non-linear model. This allows GenAI to process vast amounts of data, identify patterns, and generate insights faster than traditional tools, but these benefits also come with new risks.

GenAI is non-deterministic—meaning the same input may result in a different output. This is because the foundation of GenAI is a statistical model. You can provide an AI chatbot with guidelines, but, as with humans, there is no guarantee that it will follow the rules or that your prompt will get you the correct answer.

However, I’ve found that a well-written prompt to a well-trained AI can give you an extremely high probability of success. It may even surpass humans given the same tasks, thanks to AI’s power to analyze vast amounts of data and uncover insights that remain hidden to the untrained eye. In addition, GenAI can handle far more complex tasks than traditional software, and writing a prompt takes a fraction of the time needed to hand-code rules for those tasks.
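
You can see this non-determinism firsthand by sending the same prompt to a model several times and comparing the answers. The minimal sketch below assumes the OpenAI Python SDK, an OPENAI_API_KEY environment variable, and an example model name; substitute whatever tool your firm has approved.

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()
prompt = "Summarize the tax treatment of home-office deductions in two sentences."

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use your firm's approved model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # higher temperature generally means more variation between runs
    )
    print(f"Run {attempt + 1}: {response.choices[0].message.content}")

Running this will typically produce three slightly different summaries, which is exactly why the policy suggestions below emphasize review and verification.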

How to address the unique challenges of AI in your firm’s policy

When drafting your AI policy, remember the critical difference between AI and other tools in your firm. The non-determinism of AI outputs is one of the most significant risks, and as a result, your policy should be designed to mitigate it.

Here’s a list of challenges to consider and suggestions for how to solve them:

Non-determinism

Challenge: GenAI is non-deterministic; you can get different results from the same input, and sometimes the outputs are simply wrong. These so-called “hallucinations” challenge our ability to provide accurate information.

Solution:

  • Implement quality control checks: Establish a quality control system to verify the accuracy of AI outputs. This might involve cross-checking AI-generated data against traditional methods or established benchmarks (a minimal sketch follows this list).
  • Regular testing and updates: AI models are constantly improving. Keep evaluating new options, do your due diligence, and consider adopting the latest models to improve accuracy and reliability.
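
As one illustration of a quality control check, the sketch below reconciles AI-extracted invoice totals against the firm’s ledger and flags anything that doesn’t match for human review. All names and figures are hypothetical placeholders, not a specific product’s API.

from decimal import Decimal

def reconcile(ai_extracted: dict[str, Decimal], ledger: dict[str, Decimal],
              tolerance: Decimal = Decimal("0.01")) -> list[str]:
    """Return invoice IDs whose AI-extracted total differs from the ledger."""
    exceptions = []
    for invoice_id, ledger_total in ledger.items():
        ai_total = ai_extracted.get(invoice_id)
        if ai_total is None or abs(ai_total - ledger_total) > tolerance:
            exceptions.append(invoice_id)
    return exceptions

# Anything returned here goes to a human reviewer before it reaches a client.
ai_extracted = {"INV-1001": Decimal("1250.00"), "INV-1002": Decimal("87.50")}
ledger = {"INV-1001": Decimal("1250.00"), "INV-1002": Decimal("875.00")}
print(reconcile(ai_extracted, ledger))  # ['INV-1002']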

Autonomous decision-making

Challenge: AI’s ability to make decisions without human oversight raises significant questions about accountability and control.

Solution:

  • Set clear boundaries for AI use: Define areas where AI decision-making is appropriate and human judgment remains paramount. This includes setting thresholds for AI autonomy in client interactions, financial analysis, and decision support.
  • Audit trails and accountability: Maintain detailed audit trails for decisions made by AI systems. This ensures traceability and accountability, allowing for the review and correction of AI decisions (a minimal logging sketch follows this list).
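
One simple way to keep such a trail is to append a JSON record for every AI interaction, with a timestamp, the user, the prompt, and a hash of the output. In the sketch below, the field names and file location are assumptions for illustration, not a standard.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # hypothetical location; store it wherever your firm keeps audit records

def log_ai_interaction(user: str, tool: str, prompt: str, output: str) -> None:
    """Append one tamper-evident record per AI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_interaction("jdoe", "chat-assistant", "Draft a client email about filing deadlines.", "Dear client, ...")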

Data privacy

Challenge: GenAI stores and retrieves information fundamentally differently from traditional databases. This presents challenges for protecting the privacy of client data.

Solution:

  • Review terms of service: Use AI tools that protect the privacy of client data just as you would any other technology in your practice. Don’t allow staff to put personally identifiable information into systems that use prompts to train the model, such as the free version of ChatGPT (a simple screening sketch follows this list). Consider signing up for services that do not retain data for training purposes, such as ChatGPT Team, Rightworks Spark, or Copilot for Microsoft 365.
  • Conduct regular audits: Regularly audit AI systems and their terms of service to ensure compliance with data privacy laws and internal data protection policies.
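
To make the PII rule enforceable rather than aspirational, a firm could screen prompts before they leave the building. The sketch below uses a few simple regular expressions as an assumed example; a production policy would rely on a vetted data-loss-prevention tool rather than hand-rolled patterns.

import re

# Illustrative patterns only; real PII detection is far broader than this.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the types of PII detected; an empty list means the prompt may be sent."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

print(screen_prompt("Client SSN is 123-45-6789, please draft a letter."))  # ['SSN']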

Regulatory compliance

Challenge: The legal status of GenAI outputs is still in flux.

Solution:

  • Stay informed and agile: Keep abreast of regulatory changes affecting AI in accounting. Be prepared to quickly adapt in response to new regulations or legal interpretations regarding AI-generated outputs. I recommend subscribing to The Rundown AI newsletter to stay up-to-date.
  • Get legal review: Regularly assess the legal compliance of AI outputs. This might involve consultations with legal experts to ensure AI-generated advice or reports adhere to current laws and standards.

Scalability

Challenge: Most accounting, tax, and audit software does not integrate with GenAI tools. Integrating and scaling AI solutions into existing systems and workflows, while maintaining data consistency, compliance, and user adoption, is a significant challenge.

Solution:

  • Phased implementation: Introduce AI systems gradually, starting with low-risk, low-effort areas. This phased approach helps manage the impact on workflows and allows for adjustments based on initial feedback.
  • Training and support: Provide comprehensive training for staff to understand and effectively use AI tools. This includes technical training and guidelines on when and how to use AI in their workflows.

Dependence and overreliance

Challenge: Striking the right balance in using AI is crucial to preserving professional judgment and skills, and preventing excessive dependence.

Solution:

  • Balanced AI utilization: Encourage a symbiotic relationship between AI tools and human expertise. AI should augment rather than replace human judgment.
  • Skill development: Invest in training programs that keep staff updated with emerging AI technologies and their applications, fostering a culture of continuous learning.

Client trust

Challenge: When using AI, it is essential to consider whether to disclose that use to clients. Transparency is vital; it builds trust and helps clients understand the capabilities and limitations of AI technology.

Solution:

  • Transparent AI usage: Clearly explain to clients how AI is used in the firm’s services, highlighting its benefits for accuracy and efficiency. Being open and up front about AI usage helps businesses build trust and establish strong client relationships.
  • Ethical disclosure policy: Develop a policy for ethical disclosure of AI use in client engagements. This includes explaining the role of AI in processing their data and ensuring clients are comfortable with its usage.

Job disruption

Challenge: AI has the potential to automate complex tasks and redefine workflows, which can disrupt the roles of our staff, managers, and partners. Some positions will transform, while others may no longer exist. It is essential to manage this transition effectively to prevent significant stress among our people.

Solution: 

  • Strategic workforce planning: Assess the impact of AI on various roles within the firm and develop strategies to manage changes. This could include redefining job descriptions or identifying new opportunities created by AI.
  • Training for emerging roles: Proactively train staff for new roles that emerge as AI changes workflows. Focus on upskilling and reskilling employees to harness the capabilities of AI in their new roles.

Avoid cookie-cutter policies

To implement these recommendations, accounting firms should adopt a thoughtful approach that balances the innovative potential of AI with the foundational principles of accounting. It is crucial to adapt to the technical aspects of GenAI and to consider its broader implications for ethics, client relations, and workforce dynamics. By systematically addressing these challenges, firms can leverage the benefits of AI while upholding their commitment to professional excellence and ethical standards.

Remember that there are no cookie-cutter policies. Every firm is different, so carefully consider your clients, staff, and services. After all, you wouldn’t want to have ChatGPT write your AI policy for you, would you?

If you enjoyed this article, please subscribe to my newsletter to get future updates sent directly to your inbox.

Editor’s note: This article was originally published by Firm of the Future.
