Comment: Five steps to responsible generative AI adoption

Safe generative AI builds customer trust, reduces business risk, and drives long-term growth, says Malcolm Ross, SVP, Product Strategy at Appian.


Generative AI is transforming industries. Since ChatGPT’s launch, generative AI adoption has outpaced the internet’s early growth. AI brings big benefits but also big risks. To maximise value, organisations must balance the potential for innovation with responsible use.

Follow these five steps to use AI safely and effectively.

1. Ensure transparency

Generative AI models don’t always explain their decisions. Organisations must track AI actions, monitor behaviour, and create clear audit trails. A process platform helps by assigning tasks to AI agents and bots first, and routing to humans for final approval.
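The pattern above can be sketched in code. This is a minimal, hypothetical illustration of an audit-trailed task that an AI agent acts on first, with a human holding final approval; the class and field names are assumptions, not any particular platform's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedTask:
    """A task whose every step is recorded in an audit trail."""
    task_id: str
    payload: str
    trail: list = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        # Timestamp each step so the full decision path can be audited later.
        self.trail.append({
            "actor": actor,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        })

def route(task: AuditedTask, ai_draft: str) -> AuditedTask:
    # AI agent acts first; a human reviewer always gets the final say.
    task.log("ai-agent", f"drafted: {ai_draft}")
    task.log("human-reviewer", "pending final approval")
    return task

task = route(AuditedTask("T-1", "refund request"), "approve refund")
print([step["actor"] for step in task.trail])  # ['ai-agent', 'human-reviewer']
```

The point of the sketch is the ordering: the trail always ends with a human actor, so no AI action ships without a recorded approval step.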

AI should also cite its sources so users can verify the accuracy of outputs. For example, the University of South Florida uses generative AI chatbots to give academic advisors personalised case data on each student. The system pulls data from student records, generates meeting agendas, and drafts follow-up emails, providing advisors with links to the underlying student data for verification.

2. Use private AI for better data protection

AI policies must prevent privacy risks and regulatory breaches. Public AI models use vast public data sets, which can produce biased results and expose sensitive data. Worse, they may incorporate your data into their learning process—potentially helping competitors.

Private AI keeps your data in-house, allowing organisations to train models within compliance boundaries. This protects intellectual property and lets you maintain full control.

3. Prevent AI bias

AI bias happens when training data or algorithms create unfair outcomes. To reduce bias, remove sensitive identifiers like race, gender, and age from datasets. Use diverse, representative data and regularly review AI decisions to catch issues early.
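Stripping sensitive identifiers can be as simple as a filter over each record before it reaches the model. This is an illustrative sketch; the field names are assumptions, and dropping columns alone does not remove proxy variables, which is why the regular review of AI decisions mentioned above still matters.

```python
# Protected attributes to exclude from training data (assumed field names).
SENSITIVE_FIELDS = {"race", "gender", "age"}

def scrub(record: dict) -> dict:
    """Return a copy of the record without protected attributes."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicant = {"income": 52000, "tenure_years": 4, "gender": "F", "age": 37}
print(scrub(applicant))  # {'income': 52000, 'tenure_years': 4}
```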

Building private AI models also helps. When AI is trained only on your data, you’re in control and can more easily prevent external biases from creeping in.

4. Choose AI use cases wisely

Regulations are emerging with guidelines on when and how to use AI. The EU AI Act, for example, sets strict rules on how to use AI in high-risk areas like employment and healthcare. For lower-risk AI applications like chatbots and recommendation systems, the guidelines require transparency to inform users they are interacting with AI. Determining risk levels and implementing the appropriate protocols are essential steps to ensure safety and security.
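The risk-tiering step can be expressed as a simple lookup from use case to tier to required protocol. The tiers and obligations below are simplified illustrations in the spirit of the EU AI Act, not legal text.

```python
# Assumed, simplified mapping of use cases to risk tiers.
RISK_TIERS = {
    "recruitment screening": "high",
    "medical triage": "high",
    "customer chatbot": "limited",
    "product recommendations": "limited",
}

# Assumed, simplified obligations per tier.
PROTOCOLS = {
    "high": "human oversight, conformity assessment, full audit trail",
    "limited": "transparency notice that users are interacting with AI",
}

def required_protocol(use_case: str) -> str:
    """Look up the risk tier for a use case and return its protocol."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return PROTOCOLS.get(tier, "assess before deployment")

print(required_protocol("customer chatbot"))
```

An unclassified use case deliberately falls through to "assess before deployment": when the risk level is unknown, the safe default is to pause, not to proceed.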

Use AI where it adds value, but keep human oversight for critical decisions. For instance, AI shouldn’t approve mortgages—it could unfairly deny loans. Instead, AI should assist by gathering data and making recommendations, escalating high-risk cases to humans for a final decision.

Here’s another example. A leading insurance company streamlined underwriting with AI. Before, manual data entry led to inconsistent address formats. Now, AI standardises addresses with 80-90 per cent accuracy while underwriters make final decisions.
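The "AI assists, humans decide" pattern in both examples comes down to a confidence gate: the AI's output is accepted only above a threshold, and anything below it is escalated. The normalisation rules and the threshold below are toy assumptions, not the insurer's actual model.

```python
# Toy abbreviation rules standing in for a real address-standardisation model.
ABBREVIATIONS = {"st.": "Street", "rd.": "Road", "ave.": "Avenue"}
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for auto-acceptance

def standardise(address: str) -> tuple:
    """Normalise an address and return it with a confidence score."""
    words = address.split()
    hits = sum(1 for w in words if w.lower() in ABBREVIATIONS)
    cleaned = " ".join(ABBREVIATIONS.get(w.lower(), w) for w in words)
    # Toy confidence: high if the rule set recognised something, low otherwise.
    confidence = 1.0 if hits else 0.5
    return cleaned, confidence

def process(address: str) -> str:
    cleaned, confidence = standardise(address)
    if confidence >= CONFIDENCE_THRESHOLD:
        return cleaned                    # AI output accepted automatically
    return f"ESCALATE: {address}"         # human underwriter decides

print(process("12 High st."))   # 12 High Street
print(process("12 Hgh Stret"))  # ESCALATE: 12 Hgh Stret
```

The same gate applies to the mortgage example: a high-risk or low-confidence case never gets an automated decision, only a recommendation routed to a person.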

5. Embed AI into processes

AI must operate with clear guidelines. A process platform gives AI structure, sets guardrails, and flags tasks needing human review. It also helps organisations train private AI models and control where AI is deployed.

Responsible AI isn’t just ethical—it’s a competitive advantage. Safe generative AI builds customer trust, reduces business risk, and drives long-term growth.

A process platform for responsible AI

The safest way to build AI into your processes is with a platform. Everest Group recently ranked the leading providers of intelligent automation platforms in its annual report and discussed the strengths and limitations of each. If you want to learn more about the current AI market, it’s a great resource.

