Does your practice need an AI policy and how will it help?

As AI adoption accelerates well ahead of the governance standards needed to manage its ethical, legal and risk implications, how can an AI policy help accounting businesses and their clients?

  • Adopting AI tools without considering the impacts creates risks for practices and clients.
  • An appropriate AI policy can help manage risks related to confidentiality, responsible use, and ethical considerations.
  • Effective AI policies should be tailored to each practice and cover data privacy, accuracy, employee training, compliance, and communication to promote safe and responsible AI use.

16 Jul, 2024


As artificial intelligence (AI) adoption continues to grow exponentially, touching businesses large and small and outpacing regulation, implementing adequate governance to manage the associated risks remains a challenge.

Dr Niran Subramaniam, an Associate Professor in Financial Management and Systems at Henley Business School, cautions that businesses have yet to fully grasp the scale of risk associated with such a powerful technology, which uses masses of customer data.

Subramaniam, who held senior finance and information systems roles in the financial services, telecommunications and higher education sectors before entering the academic sector, recently published a journal article titled Digital transformation and artificial intelligence in organisations.

The paper explores how businesses are discovering the revolutionary and transformative power of AI and proposes a framework for successful digital transformation. 

An AI policy is essential, Subramaniam says.

“They’re embracing technological toolsets and systems without having thought through the risks inherent in them,” he says of businesses adopting AI.

He advises companies to proceed with caution by developing an AI policy that addresses the risks and opportunities and provides a framework and training for technology use that aligns with the needs of a practice.

What is in an AI policy?

An AI policy outlines the behaviours and frameworks around the use of AI and the data, information and insights it produces, driven by the business’s values and principles.

In the report What Internal Policies Should I Update to Reflect AI Use in My Business?, LegalVision lawyer Danielle Pedersen says an AI policy should set guidelines for the business’s responsible use of AI.

The policy should outline how the business will ensure the technology does not breach its legal obligations, particularly around:

  • Privacy
  • Copyright
  • Work health and safety

It should also address ethics as it relates to the business, demonstrating the business’s values through rules on handling sensitive information, using fair and clean data, and being transparent about processes so customers can understand how AI is being used.

In developing an AI policy, small accounting firms should first decide on the core values that will underpin their policies.

“For example, if we were to look at data privacy and security, a policy might be to ensure the protection of sensitive client data,” Subramaniam says. “When it comes to accounting practices, that could be about maintaining compliance with data privacy laws.”

What should an AI policy cover?

The content of a specific AI policy should reflect the unique qualities of a business, Subramaniam says. There is no such thing as a one-size-fits-all approach.

“Do we want to look at patterns in data?” Subramaniam says. “Do we want to understand trends in the market? Do we want to make forecasts and predictions based on trends? Artificial intelligence is very good at assimilating data from individual data points to synthesise that data.”

And therein lies the catch when individual client data is analysed with AI.

“Perhaps you’re trying to understand the types of accounting challenges each client might have during the year-end audit, or during the course of their business,” he says. 

“When I log all that client or product data together and run an AI algorithm, it gives me a pattern that shows every client has bad debt of around two per cent, for example. In that context, it is important that accountants and IT personnel do not spill that data outside of the organisation, because that still affects data privacy.”
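
To make that concrete, here is a minimal sketch, in Python, of how a practice might pseudonymise client records before pooling them for the kind of pattern analysis Subramaniam describes. The field names, salt and figures are hypothetical illustrations, not part of his method:

```python
import hashlib

# Hypothetical client records; field names are assumptions for illustration.
clients = [
    {"client_name": "Acme Pty Ltd", "bad_debt_pct": 2.1},
    {"client_name": "Smith & Co", "bad_debt_pct": 1.9},
]

def pseudonymise(records, salt="practice-secret"):
    """Replace client names with stable one-way hashes before analysis."""
    masked = []
    for record in records:
        digest = hashlib.sha256((salt + record["client_name"]).encode()).hexdigest()
        masked.append({"client_id": digest[:12], "bad_debt_pct": record["bad_debt_pct"]})
    return masked

# The aggregate insight survives, but no client identity leaves the firm.
anonymised = pseudonymise(clients)
average = sum(r["bad_debt_pct"] for r in anonymised) / len(anonymised)
print(f"Average bad debt across clients: {average:.1f}%")
```

The design choice here is pseudonymisation rather than deletion: the pattern (average bad debt of around two per cent) remains usable, while the identifying detail stays inside the organisation.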

The policy must cover accuracy and integrity of data, data validation, data verification, employee training programs, and compliance and regulatory adherence.
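
As a simple illustration of what a data-validation requirement could translate to in practice, the sketch below checks client records for completeness and plausible ranges before they are passed to any AI tool. The fields and thresholds are assumptions made for the example, not requirements from the article:

```python
def validate_record(record):
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    # Completeness: every expected field must be present and non-empty.
    for field in ("client_id", "period", "bad_debt_pct"):
        if record.get(field) in (None, ""):
            problems.append(f"missing field: {field}")
    # Plausibility: a percentage should fall between 0 and 100.
    pct = record.get("bad_debt_pct")
    if isinstance(pct, (int, float)) and not 0 <= pct <= 100:
        problems.append(f"bad_debt_pct out of range: {pct}")
    return problems

record = {"client_id": "a1b2c3", "period": "FY24", "bad_debt_pct": 2.1}
issues = validate_record(record)
print("record OK" if not issues else issues)
```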

Even seemingly unrelated departments such as marketing should be involved, he says, as AI will inevitably influence the work of everyone in the business.

“Think of customer service and client communication,” Subramaniam says. “Who is going to communicate properly, promptly and adequately with clients to ensure they are well informed around the use of AI tools, including what you’re using, how you’re using it and how personal data are being used and protected?”

Examples of AI policies

An appropriate AI policy gives a business a structure for minimising the risk of harm, for example by articulating the decision-making process and specifying when to monitor technology use and output.

Concerns around data privacy might also mean setting benchmarks for how any AI tool the business engages treats data, and blocking the use of personal data in those tools.

Reviewing terms of service and choosing AI tools that reflect the policy is crucial. For instance, the free version of ChatGPT trains its model on user data, whereas paid tiers offer the option of turning this off.

“We have to think in terms of policies and protocols because of the unknown,” Subramaniam says. 

“With AI, we are dealing with something that we have not seen before and that we actually have little knowledge of. We’re not fully aware of the risk and opportunity offered by AI, so when we work with it, we have to be as careful as we can.”

AI policy training is vital

An AI policy should also explain clearly how all staff will be kept informed of procedures and processes through regular training.

An employee or stakeholder who has access to data but does not follow accepted processes, or who uses AI technology in a way that does not align with the current AI policy, raises the risk of a data breach or unethical use of sensitive information.

“Capturing and using your customer data can create sales opportunities, and automating customer service processes can cut costs and boost profit, but both practices also carry … risks related to privacy, fairness, transparency and discrimination,” said Dr Nicole Hartley, MBA Director and Associate Professor at The University of Queensland, in the report The ethics of AI: why you need to embrace corporate digital responsibility.

Hartley says it is vital that small businesses analyse, understand and address the various ethical dilemmas that are packaged with AI systems, including around “data disclosure, surveillance and privacy violations, biased and discriminatory outcomes, dehumanised services and disempowerment”.

Without a strong policy, the ramifications of AI missteps can be serious, both reputationally and legally, she says.

Most importantly, Hartley says, businesses should recognise that there will be a trade-off between AI opportunity and risk, and a policy will help businesses reap the benefits without getting into hot water.
