The Responsible AI Roadmap for Marketers

How To Develop a Reliable AI Strategy

Foreword

I’ve worked in the software engineering space for over twenty years, and between the rise of the internet, mobile platforms, and the cloud, I’ve been part of many rapid technological advancements. But never before have I witnessed a transformation as drastic, or an adoption as rapid, as the one brought by AI.

And I believe it’s just getting started.

Sooner rather than later, AI will reinvent how we work, collaborate, invent, and discover altogether. In fact, a 2023 IBM survey found that 42% of enterprise-scale companies say they have already deployed AI in their businesses, with an additional 40% saying that they are currently exploring or experimenting with different AI products.

Companies leverage AI to reduce overhead costs, improve customer service, and increase personalization and productivity. While AI has proven it can do all of that, it has also proven that it only delivers when companies adopt it responsibly.

Because while AI offers massive opportunities for every industry, it doesn’t do so without risk. Already, we’ve seen the results of unfettered implementation: AI tools providing irrelevant and even incorrect advice, exposing companies to data breaches, and even perpetuating discriminatory human biases. But despite the near-universal acknowledgment that significant threats exist, according to a report by Riskonnect, just 9% of companies are ready to manage those threats.

With the race to adopt AI in full swing, companies can’t afford to wait around for regulations to be put into place by larger organizations, but they also can’t afford to adopt bad AI. Instead, each company needs to take responsibility for doing its own due diligence, establishing its own AI guardrails, and figuring out which AI is right for it.

That’s a lot to ask, so to help you get started, we compiled our best practices and research into one Responsible AI Roadmap.

Below, you’ll find our step-by-step guide to creating an AI strategy that spans all functions of your business in a safe and sustainable way.

As AI continues to evolve at breakneck speeds, I hope that you can keep this roadmap on-hand to help you and your business find your footing and confidently transform with AI.

Bernard Kiyanda, VP of Engineering

Understand the risks associated with AI

Just like any new technology adoption, AI tools don’t come without risk, and acknowledging these risks upfront is the best way to prevent them from becoming a reality.

While each tool comes with its own set of considerations, there are three key risk factors that span most industries and tools:

1. Bias and discrimination

It’s easy to assume that machines, which are built on logic, can’t hold prejudices against individuals or groups. But, as AI products have become more mainstream, we’ve seen how they can reflect and perpetuate even the worst biases of our society. That’s why it’s crucial that you not only understand these biases but also actively avoid AI systems that are vulnerable to them.

There are three main types of bias in AI systems you should know about:

  • Biased training data: When you feed AI systems biased training data, the system will learn to be biased, too.
  • Algorithmic bias: When bias gets into the underlying logic of AI, it hardwires discrimination into systems.
  • Cognitive bias: Because human engineers build AI systems, their cognitive biases can slip into the code and architecture of the AI.

If left unchecked, biased AI systems will produce unfair and inaccurate outputs, which can undermine your credibility and harm your customers. With over 60% of survey respondents admitting that they are still wary of AI, ensuring that your AI provides unbiased information isn’t just a nice-to-have; it’s a must-have for maintaining trust with your buyers.

2. Hallucinations

Not only can AI produce biased results based on its training, but it can also hallucinate when it’s not trained to answer the right questions. When an AI hallucinates, it generates a false or misleading response that it presents as fact. Hallucinations are a huge concern for companies because, again, if an end user receives one false response, how can they trust that any information from the company is true?

To pick an AI product that mitigates the risk of these hallucinations, you need to understand how the AI is trained. Consider things like how much data the AI is trained on. Is it enough to answer most questions? Additionally, how is the data kept up to date? Is it something your team will need to stay on top of, or does it update automatically?

In the B2B marketing space, we recommend looking for products that allow you to add your own information to the training. Often, this is done through a technique called retrieval-augmented generation (RAG). In addition to being trained on a large, general dataset, like the open web, the AI can also draw on your own data, like website collateral, PDFs, and help documentation, to provide more specific and relevant answers to anyone using the product.
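
To make this concrete, here is a toy sketch of the RAG pattern in Python. The keyword-overlap retriever and the `call_llm` stub are illustrative placeholders (real products use vector embeddings and an actual model API), so treat this as a picture of the idea rather than anyone’s implementation.

```python
# A toy illustration of retrieval-augmented generation (RAG).
# The keyword-overlap retriever and call_llm stub are placeholders;
# real systems use vector embeddings and a model provider's API.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank your own documents by word overlap with the query; return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your vendor's chat/completion call here."""
    return f"[model answer grounded in {len(prompt)} characters of prompt]"

def answer_with_rag(query: str, documents: list[str]) -> str:
    """Augment the prompt with retrieved company content before generating."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below. If the answer isn't there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

docs = ["Our pricing starts at $50/month.", "Support hours are 9-5 ET."]
print(answer_with_rag("What are your support hours?", docs))
```

Because the model is instructed to answer only from the retrieved context, it is far less likely to hallucinate an answer it was never given.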

3. Security

Any security requirements your company has for the technology that already exists in your tech stack should apply to your AI systems, too. Consider these two guardrails, sketched in code below, to help mitigate the risks mentioned above:

  • Input guardrails: The ability to flag inappropriate inputs before the AI interprets them, verifying that the person asking for the information should actually receive it.
  • Output guardrails: The ability to validate a response generated by a large language model before it’s sent. This is often accomplished through human-in-the-loop functionality.
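
As a minimal sketch of both guardrails, here is what the pattern can look like in code. The blocklist, the SSN-style regex, and the `human_review` hook are illustrative assumptions; production systems use far more sophisticated classifiers and review workflows.

```python
# A toy illustration of input and output guardrails. The blocklist,
# the SSN-style regex, and the human_review hook are all placeholders.
import re

BLOCKED_TOPICS = ("password", "credit card")  # illustrative blocklist

def passes_input_guardrail(user_message: str) -> bool:
    """Flag inappropriate inputs before the model ever interprets them."""
    lowered = user_message.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def apply_output_guardrail(draft: str, human_review) -> str | None:
    """Validate a model-generated draft before it reaches the end user."""
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", draft):  # e.g. a US SSN pattern
        return None  # never send drafts containing sensitive patterns
    # Human in the loop: a rep approves or rejects the draft before sending.
    return draft if human_review(draft) else None

def always_approve(draft: str) -> bool:
    return True  # stand-in for a real review UI

print(passes_input_guardrail("What's the CEO's password?"))            # False
print(apply_output_guardrail("Happy to help with pricing!", always_approve))
```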

Additionally, just like anything on the internet, interacting with AI means risking privacy breaches. That’s why a responsible AI strategy relies heavily on ensuring that the AI you adopt takes security seriously. This could include scrubbing any personally identifiable information (PII) from AI integrations, just like Drift does with its OpenAI integration. By choosing AI providers that actively address these security risks, you can be confident that you are on the right path to adopting AI responsibly, and in a way that continues to build trust with your buyers.
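
To illustrate what scrubbing PII can look like, here is a toy regex-based sketch. The patterns below are simplistic assumptions, not a description of Drift’s actual implementation; dedicated PII-detection tooling catches far more cases.

```python
# A toy illustration of scrubbing PII before text leaves your systems.
# These regexes are simplistic placeholders, not a production detector.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace detected PII with labeled placeholders before any vendor call."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub_pii("Reach me at jane@example.com or 555-123-4567."))
# -> Reach me at [email removed] or [phone removed].
```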

Without governance and ethics, AI can harm us. But with governance and ethics, it can help us — tremendously.

Dr. Ayesha Khanna
Co-founder & CEO, Addo

Align on a company code of conduct

Now that you’re familiar with the risks of AI, you’re ready to set up a code of conduct that will keep you and your company safe when using the technology. With your code of conduct, you establish rules for what people at your company can and cannot do with AI in order to minimize the risk of accidents.

While this might sound restrictive, in reality, it’s not. By setting robust guidelines, you actually empower your teams to experiment and innovate with AI — and to do so safely. Here’s how:

1. Recruit your AI council

Drafting a code of conduct is a big project. That’s why you need a team, or an AI council, to act as your strategic voice on AI. The code of conduct should be a company-wide initiative, so it’s important that your council includes people from across your entire organization. Here are some of the roles we recommend you start with:

  • Early Adopters: Find the people who signed up for ChatGPT on day one and the ones who experiment with every new tool. You can harness their enthusiasm to kickstart and sustain change.
  • Technical Leadership: Recruiting engineers, data scientists, and other technical roles is a must for your council. You need people who can work out what you can (and can’t) practically achieve.
  • Functional Representation: Eventually, AI will revolutionize everyone’s jobs — so get ahead of it by bringing together representation from across all functions. That way, you can test use cases and products with multiple teams and ensure that every purchase benefits the company as a whole.
  • Legal Experts: With AI regulation ramping up across the world (see: the EU Artificial Intelligence Act), recruiting strong legal voices to your council is non-negotiable. Staying on the right side of regulation isn’t just about staying legally compliant, it’s about doing right by your users, too — and your legal experts will help you ensure that you do both.

Once you’ve recruited your AI council, you can begin to work on your code of conduct.

2. Find your "north star"

Before you get into the weeds of how you will use AI, your council’s first job is to work out what you as a company want to achieve with AI. To do this, you should start with “why.” Why is your company driving AI transformation?

Your “why” will depend on your organization’s values and purpose. At Drift, for example, we’re committed to supporting an environment where generative AI is used to drive innovation, increase agility, streamline processes, and empower our workforce. While your “why” won’t be the same as ours, the important thing is that you decide on a mission that your entire council — and, by extension, your entire company — can get behind.

By setting an overarching mission, your company will have a North Star that you can rally behind and make progress toward as you adopt AI.

Your mission sets the destination. Your code of conduct defines how you’ll get there.

3. Draft your code of conduct

A code of conduct is a public declaration of your company’s position on and use of AI. It sets out what you will and won’t do with AI, guidelines for how employees can use or experiment with AI, and any requirements for security and privacy when leveraging the technology.

No two codes of conduct look alike — nor should they, as they represent the unique interests, goals, and concerns of each organization. That said, as you start writing your code of conduct, we recommend you include these common components:

  • Mission and Purpose: Put your mission front and center. This will serve as a reminder for your entire company (and customers, if your code is public) of what you’re trying to achieve.
  • Security and Privacy: Lay out minimum standards and expectations for security when using AI. Be clear about how people should approach data usage, privacy, and user consent.
  • Governance: Explain who at your company is accountable for your AI systems and their output. Here, you can detail your compliance processes and any procedures you have in place to file complaints.
  • AI Usage: Specify how you will — or won’t — use AI systems in your business. This can be as broad or narrow as you like, whether that’s deciding that you won’t use a certain type of AI or listing out all possible AI use cases for your company.
  • Ethics: Cover your ethical redlines. This includes, for example, discussing how you will mitigate bias, avoid discrimination, and ensure fairness with your use of AI.

If you need some extra inspiration, check out Jasper’s Ethics and Responsible AI or Capgemini’s Code of Ethics for AI.

4. Get your teams to put the code of conduct into action

Once you have your code of conduct written down, encourage your teams to think about how it applies to their work. This process isn’t about rewriting your code from scratch. It’s simply about working out how the code applies to each team’s unique operating environment: their specific goals, processes, and technologies.

A good way to get the ball rolling is to use AI council members as educators and change agents. This is where it’s useful to have representatives from every team on your AI council. You can send them out to engage their teams, educate other employees on the code of conduct, and encourage them to experiment with AI. From there, AI council members can monitor how their teams are applying the code and step in quickly if they see experiments going astray.

With your AI council members leading the charge in each team’s AI adoption, you will be able to create the impetus for change, all while making sure there is sufficient oversight.

5. Revisit your code of conduct (again and again)

Once you’ve got your code of conduct in action, the final thing to remember is this: Don’t carve your code of conduct into stone. Fixed policies will limit your ability to react to economic, technological, or regulatory change. So, task your AI council to stay alert to possible threats and potential opportunities that can affect your code of conduct.

Finally, make sure to revisit your code of conduct regularly. This will give you the opportunity to evaluate your guardrails, tighten where you think they’re too loose, and loosen where you think you need more flexibility.

After recruiting a team of AI enthusiasts and setting a robust foundation for how you will adopt AI, it’s time to consider where exactly your organization will deploy AI.

If you wouldn’t do it without AI, you shouldn’t do it with AI.

Chris Penn
Co-founder and Chief Data Scientist at Trustinsights.ai

Determine your AI use cases

While AI will eventually touch every single facet of a business, that doesn’t mean you should go from zero to 100% implementation in one leap. Instead, you need to start small and then expand.

That’s why the third step of responsible AI adoption is for your AI council to pick a handful of use cases to kick off your AI transformation with. Your council should look for recurring problems that devour your team’s time, common blockers that stop you from shipping work, and inefficient (usually manual) processes that are ripe for automation. Then, you can brainstorm how AI can help solve those problems.

That’s easily said, but jumping from problem to solution can be tricky. So if you’re getting stuck identifying your company’s best AI use cases, follow Drift’s lead and run an AI Hackathon. During these events, entire functional teams work together to get creative with AI and think about how it can help them solve problems more efficiently.

And let’s be honest, a little healthy competition helps, too!

The sky’s the limit with AI, but here are some AI use cases for sales, marketing, and customer success to get you started:

[Table: Example AI use cases for sales, marketing, and customer success]

Once you start compiling all possible use cases, you’ll find that your ideas start to snowball. After all, good ideas breed good ideas. Before you know it, you’ll probably end up with a mile-long list of potential use cases — far too many to tackle at once.

To home in on your best use cases, think back to your North Star: what you hope to achieve with AI. Combine your North Star with other criteria like customer impact, time-to-value, and speed of implementation to evaluate your use cases. A good method is to build a matrix and score each use case against those criteria. This will give you a quantitative way to evaluate your options.
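
As a minimal sketch, here is what that scoring matrix could look like in code; the criteria, weights, and 1-5 ratings below are illustrative placeholders, and a shared spreadsheet works just as well.

```python
# A toy scoring matrix for prioritizing AI use cases. The criteria,
# weights, and 1-5 ratings are placeholders; adapt them to your North Star.
WEIGHTS = {"customer_impact": 0.5, "time_to_value": 0.3, "ease_of_rollout": 0.2}

use_cases = {
    "SEO content drafting":     {"customer_impact": 4, "time_to_value": 5, "ease_of_rollout": 4},
    "Real-time sales coaching": {"customer_impact": 5, "time_to_value": 3, "ease_of_rollout": 2},
    "Churn-risk summaries":     {"customer_impact": 4, "time_to_value": 2, "ease_of_rollout": 3},
}

def score(ratings: dict[str, int]) -> float:
    """Weighted sum of the 1-5 ratings across all criteria."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Rank use cases from highest to lowest score to find your launch candidate.
for name, ratings in sorted(use_cases.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(ratings):.2f}")
```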

Once you’ve landed on the best use cases, launch your AI transformation with one outstanding idea. By starting with one, you can build up your confidence and capability with AI, so you can more easily expand later on. With one promising use case in mind, you’re only missing one thing to kickstart your transformation: technology.

Pick the product that suits your needs

Ambitious plans require great technology. Being both responsible and results-driven in your AI strategy means finding the right AI products to fix your problems, empower your teams, and give your company a competitive edge — all while mitigating the risks that come with AI.

This is where all the knowledge you acquired in step one comes in. As you start to research and evaluate potential AI products, you want to ensure you’re choosing a product that is safe and reliable. So, between reading case studies and consulting sales reps, it’s important that you look into how the vendor and their product think about and address AI risks.

Here, you can use your code of conduct as a go/no-go checklist. By speaking to the vendor or consulting their code of conduct, you can see whether their position on AI aligns with yours. If a vendor or product doesn’t match your values, don’t be afraid to pass on them.

In particular, you should insist that vendors hit your minimum security standards. In our current digital era, especially with the lack of overarching AI regulation, it’s crucial that you work with vendors that take data security seriously. SOC 2 accreditation is a must-have, as is GDPR compliance, and transparency around data security policies is non-negotiable.

Beyond the company’s values, you can also examine the actions they have taken to address risks in their products and the guardrails they have in place to prevent them. For generative AI products, for example, you can investigate the product’s hallucination rates to eliminate options that regularly put out falsehoods.

Similarly, you want to consider if the AI product has a human-in-the-loop system. At least for now, AI operates best as a partner to humans — not a replacement for them.

For example, a generative AI tool can automatically generate a relevant and customized response in a live chat conversation, but it’s best for a human rep to always have the option to read through the response before it is sent. By opting for an AI tool that empowers you to keep a knowledgeable human in the loop, you can fact-check outputs and decisions, and catch hallucinations or mistakes before they make it out into the world.

While these aren’t the be-all and end-all of what you should consider when choosing an AI product, these considerations are key to ensuring that no matter what product you end up adopting, it will be safe and effective. That will help make sure that there is smooth sailing once you actually put the AI to work.

Looking for a list of AI tools to get you started? Take these AI recommendations from the attendees of our CONVERGE event.

Pilot your AI and report your findings

Now that you’ve identified your use cases and found some great tech, it’s time to pilot your AI.

Here, it’s important to remember: Great change management leaders are as empirical as scientists. They don’t assume things will work; instead, they set hypotheses and test them with data. This is true for AI adoption as well, which is why you need to treat your AI pilot like an experiment.

First, you can treat your North Star (the mission you laid out at the beginning) as your hypothesis. This is the goal you will ultimately want to accomplish with your AI experiment. From there, you need to work out how you’ll measure your success using key performance indicators (KPIs). This is crucial because, if you can’t prove how the AI experiment is helping your company, budget holders are never going to sign off on the investment.

When it comes to KPIs, prioritize the metrics that prove the impact of the AI. For example, if your use case is SEO optimization, focus on organic traffic volume and pipeline generated.

If you’re using AI for real-time sales coaching, look at conversation intelligence scores and close rates. Wherever possible, pair output metrics (e.g. shipping blogs faster, landing more calls, preventing customer churn) with outcomes — dollars and cents. Because, let’s face it, it all comes back to revenue.

After running the pilot for a couple of months, measure the results against your performance without AI. This is where you will be able to prove or disprove your hypothesis. At the end of the experiment, report your findings to the rest of your team, your AI council, and your company.
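
For instance, a minimal sketch of that before-and-after comparison might look like the following; the metric names and numbers are invented purely for illustration.

```python
# A toy before/after comparison for an AI pilot; all numbers are invented.
baseline = {"organic_traffic": 12_000, "pipeline_usd": 80_000}  # pre-AI quarter
pilot = {"organic_traffic": 15_600, "pipeline_usd": 95_000}     # pilot quarter

for metric, before in baseline.items():
    after = pilot[metric]
    lift_pct = (after - before) / before * 100
    print(f"{metric}: {before:,} -> {after:,} ({lift_pct:+.1f}%)")
```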

By doing so, you will not only be able to broadcast successes across your entire company to inspire AI transformation, but you can also recruit more voices to provide new ideas, use cases, and strategies for using AI.

Once you wrap up an experiment, make sure to establish next steps. If an experiment didn’t work or produced lackluster results, you can either iterate on your approach or go back to the drawing board. (Don’t get stuck on unproductive ideas because there’s a world of opportunities out there!)

If an experiment worked, make sure to figure out how you can replicate its success — not just with your team, but also with other teams. One way to do this is to build a database of AI experiments and results. This research library can help other employees and teams learn from your mistakes and take shortcuts through your successes. As a result, your company as a whole will be able to double down on your wins, ultimately multiplying your impact with AI and accelerating your AI transformation.
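
As a sketch of what one record in that research library might capture, here is an illustrative schema. Every field is a suggestion rather than a standard, and a shared spreadsheet or wiki works just as well as code.

```python
# A toy schema for logging AI experiments so other teams can learn from them.
from dataclasses import dataclass, field

@dataclass
class AIExperiment:
    team: str
    use_case: str
    tool: str
    hypothesis: str
    kpi_lift_pct: dict[str, float]  # KPI name -> measured lift in percent
    verdict: str                    # "scale", "iterate", or "shelve"
    lessons: list[str] = field(default_factory=list)

experiment_log = [
    AIExperiment(
        team="Marketing",
        use_case="SEO content drafting",
        tool="(your chosen product)",
        hypothesis="AI-assisted drafts double our publishing cadence",
        kpi_lift_pct={"organic_traffic": 30.0},
        verdict="scale",
        lessons=["A human editing pass is still essential"],
    ),
]
```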

AI in action: How Tenable generates 3x pipeline with AI

For years, cybersecurity company Tenable relied on email marketing to get in front of their target buyers, Chief Information Security Officers (CISOs). But, as the noise from their competition increased, Tenable found it more and more difficult to reach them — which is why they needed a better way to get those CISOs in front of their sales reps.

So, in 2018, Tenable rolled out their AI strategy with Drift. The AI-powered chatbot stepped in ahead of human sales reps to qualify buyers, deflect unnecessary traffic, and allow the sales team to focus on only the best leads. It created a flexible and more enjoyable buying experience — one available every minute of the day. As a result, Tenable delivered qualified leads three times faster to their team and converted 20% of all AI leads.

Welcome to the future of work

With the AI revolution well underway, change is coming for us all. But that’s not a bad thing — not at all. Because AI promises a brave new world of work where humans can work more efficiently, be more agile, and unlock more innovative and creative ideas.

But that future lies in safe, responsible, and effective AI. While it’s easy to get swept up in the hype of the most groundbreaking new model, tool, or feature, companies that want to succeed in the future must put safety first now to ensure the sustainability of their AI transformation. The winners won’t necessarily be those who get there first, but those who move swiftly and safely, implementing responsible technology that works for both them and their customers.

By following our five steps, we’re confident that you can drive responsible AI transformation in your organization. You’ll mitigate threats, lay a firm foundation for AI governance, identify the most impactful use cases, select the best AI technologies, and pilot AI transformation to prepare your organization for broader and deeper AI experimentation. By doing all of this (and continuing to do so), you will be able to lead the way into an AI-augmented future that is as safe as it is transformative.