Written by
Argos Multilingual
Published on
16 Sep 2025

Having an idea for AI transformation is one thing; turning it into a concrete, successful initiative is another challenge entirely. In this episode of Field Notes, Stephanie Harris-Yee speaks with Erik Vogt to uncover why many AI projects stall before launch, and how leaders can transform broad ambitions into measurable outcomes.

Erik shares practical steps for avoiding common pitfalls in AI adoption, from establishing clear business objectives to identifying appropriate roles, data, and frameworks. If your team faces the vague directive to “do something with AI” but lacks direction, this conversation provides a roadmap to start small, prioritize impact, and build sustainable solutions.

Key topics covered include:

  • Why 50–70% of AI initiatives fail, and how to avoid common mistakes
  • How to set clear business objectives that drive real outcomes
  • The importance of starting small and focusing on MVP use cases
  • The role of data quality and “fit for use” in AI success
  • Who inside your company is best positioned to lead AI initiatives
  • Frameworks and tools (like Lean Canvas) to structure your AI projects
  • How vendors and solution providers can better support client teams

Watch the full video to learn how to transform your AI ideas from vision to action.

Stephanie Harris-Yee: Hello. I’m here with Erik Vogt once again for our next Field Notes session, and this time we’re going a bit broader. We’ve talked about actual applications in a lot of areas, we’ve talked about AI in marketing, so this time we’re addressing a more general question.

A lot of people are getting orders, mandates, or suggestions from the C-suite to implement AI somehow. They’ve been doing their research, they’ve been hearing about all of these different things, and they may even have ideas. But how do you turn those ideas around AI into an actionable, scopable solution for your company?

So Erik, let’s go ahead and start with the first challenge: why do so many business leaders struggle to move from an AI idea to an actual project?

Erik Vogt: Yeah, this is a very interesting space, and it’s all over the place. We’re being barraged by all kinds of solution candidates from people asking, how can I offer some kind of solution? They’re trying to monetize a particular solution they already have.

But in the real world, especially in B2B, you usually start with a problem, not a bunch of solution fragments, and you have to figure out how to solve your business problem within the constraints you have. There’s been a lot of research on this. You’ll see a lot of statistics out there; Gartner and others have surfaced estimates for how often these initiatives fail.

The estimates are somewhere in the ballpark of 50 to 70% of AI initiatives failing. Why do they fail? Usually because they’re too vague, too ambitious, and too ambiguous about what they’re trying to achieve. You start off with a general, vague goal, you don’t reach it, money gets spent, nothing happens, and you end up withdrawing the whole thing.

Successful deployments of AI generally need to zero in on a much more precise focus. What’s interesting to me about this, having worked as a manager for 25 or 30 years, is that it really parallels the goals you need to set for humans. If you’re managing a person, you use SMART goals: specific, measurable, achievable, relevant, time-bound.

The same things help a developer of an AI application better understand the intended outcome. Very often I’ve had calls with folks saying, we want to use AI for something or other, automatic translation or the like. And generally what you really need to do is start by asking, what’s the purpose of this?

What’s the outcome of this? What are you trying to reduce? Whose time are you trying to save? Is this a differentiator for your company, or is it a cost reduction initiative? I really like to think in terms of the investment proposition as a core anchor structure.

So either you’re saving money, you’re accessing new revenue streams, you’re differentiating, or you’re reducing risk. To use a super simple example, let’s imagine we say we want to use AI to automate email responses. That’s great, if you can. We get hundreds of emails every day.

Let’s get an AI to respond to all of them. Okay, follow-up questions: which emails are you talking about? What’s your actual business goal? Are you trying to get faster replies so that you’re more attentive? For example, whether it’s B2B or B2C, maybe you have an automatic response that says, hey, thank you for your email.

We’ve got it, we’re working on it. Or are you trying to reduce the cost of the people who are responding to those emails, meaning human agents in this case? Also, what does good look like? There have been chatbot deployments that look great on paper: maybe 60% of people can solve their problems with an automatic agent. But if the other 40% have already had to go through the process of interacting with the bot before they get bounced up to level two, you’ve got 40% of people who are already upset, because they fought their way through the agent and didn’t get a result. Now you’re dealing with a CSAT score that can actually go negative, because you’re starting with people who already have a bad attitude.

Another thing to consider, and this is very important, maybe it’s a subject for a future conversation, is where the data is coming from. It’s great to say we’re going to use AI, but the AI needs to be based on a model, that model is based on data, and it depends on the quality of that data. And not just the quality of the data, but whether that data is fit for use.

So even if you have 10,000 emails to draw from, if half of them are spam, or if there aren’t clear patterns to detect, you’re not going to get very far. Thinking about our email example, instead of just responding to all of them, let’s set a goal for ourselves: figure out the most common refund requests, and then maybe use AI to build a model that detects refund requests. If an email falls into that category, and you’ve tested whether your recognition of the request is actually working, then you can use data from the last six months, for example, to train that model.

And you can then have the AI pre-respond, and maybe have a human review it afterward. But something like that is very specific. It’s based on a very narrow use case. And I think at that point you’re much more able to assemble a great solution.
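To make the shape of that narrow use case concrete, here is a minimal sketch in Python of what a refund-request detector with a human-review step could look like. The training examples, labels, confidence threshold, and reply template are all hypothetical placeholders invented for illustration, not Erik’s pipeline or a production design; the point is only that the task becomes a specific classification fork with a measurable output.

```python
# Minimal sketch: detect refund requests and queue a drafted reply for human review.
# All data, labels, and the threshold below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pretend these came from the last six months of labeled support email.
emails = [
    "I was charged twice, please refund the duplicate payment",
    "Requesting a refund for order 1042, the item arrived broken",
    "Can you reset my password?",
    "What are your office hours over the holidays?",
]
labels = ["refund_request", "refund_request", "other", "other"]

# Simple text classifier: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

DRAFT_THRESHOLD = 0.8  # only pre-draft a reply when the model is fairly confident

def triage(email_text: str) -> dict:
    """Classify an incoming email and, if it looks like a refund request,
    attach a drafted reply that a human agent reviews before anything is sent."""
    confidence = max(model.predict_proba([email_text])[0])
    label = model.predict([email_text])[0]
    draft = None
    if label == "refund_request" and confidence >= DRAFT_THRESHOLD:
        draft = ("Thanks for reaching out. We've logged your refund request "
                 "and will confirm the amount and timeline shortly.")
    return {"label": label, "confidence": round(float(confidence), 2),
            "draft_reply": draft, "needs_human_review": True}

print(triage("Please refund me, the product never arrived"))
```

Because the fork in the road is explicit (refund request or not), you can measure precision on six months of held-out email before anyone trusts the drafts.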

Stephanie Harris-Yee: So what kind of structure or framework will help people, or is there one? Is it something where you have to create a new framework for each specific use case, or is there something more general, where you can say, hey, make sure you check these boxes and put it in this basic framework, in order to turn that idea of “I want fewer emails,” or whatever it is, into an actionable AI solution?

Erik Vogt: For sure. First off, always start with a business objective. Somebody’s spending money on this: what do they want for that money? What is the outcome you’re trying to change or improve? Then you look at the current process and the actual pain points. Is it too manual, too slow, too repetitive?

That’s very important, because those are all extremely different objectives, whether it’s error classification or speed. Another step is to classify the task type, and this is helpful. My years in the data annotation space taught me that when you set up a data annotation task, you have to be very, very clear about what you’re actually trying to test for.

So is it a classification task? In other words, the input is an email, and we want to classify it into “we have to respond now” or “which department does it go to,” some fork in the road like that. If you’re clear about which fork in the road this model is intended to resolve, you’re much more likely to get a meaningful result.

So: your goal, your inputs, your outputs, your constraints, and your metrics for success. Start small, start with an MVP, then take on task one and think about the assumptions you’re making. I think that structure really helps. One thing I notice in conversations about AI deployment, which is interesting, borrows from the Six Thinking Hats, a theory about the different ways people think.

People start ideating on what’s possible, the yellow hat stuff, but then they also start ideating on the blockers, and they defeat themselves before they even begin by thinking about all the things that could go wrong. It’s good to think about that, black hat thinking is important, but it’s also important to say, slow down.

If we have a business objective and a nominal target value for it, what’s the best path to get there? Then you can start breaking it down, and you can look at where the problem areas are as part of that process. I think a canvas is a great tool for this too, like the Business Model Canvas or a Lean Canvas, to help you articulate the activities, the partners, and the value you’re really trying to offer, how you’re delivering it, and whether automating a particular activity is going to deliver the value you’re trying to achieve.
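As a rough illustration of that checklist, here is one way the goal / inputs / outputs / constraints / metrics structure could be captured in code, using the email example from earlier. The class name and every field value are assumptions made up for this sketch; it is not a standard Lean Canvas template, just a way of forcing the brief to be explicit.

```python
# Sketch of a structured use-case brief: goal, inputs, outputs, constraints, metrics.
# Field names and example values are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class AIUseCaseBrief:
    business_objective: str           # why someone is spending money on this
    inputs: list[str]                 # data the model will actually see
    outputs: list[str]                # the fork in the road it must resolve
    constraints: list[str]            # technical, legal, or budget limits
    success_metrics: dict[str, str]   # how "good" will be measured
    assumptions: list[str] = field(default_factory=list)

brief = AIUseCaseBrief(
    business_objective="Cut first-response time for refund requests",
    inputs=["Last six months of inbound support email, cleaned of spam"],
    outputs=["refund_request vs. other", "drafted reply queued for human review"],
    constraints=["Customer data stays in region", "A human approves every send"],
    success_metrics={"precision on refund_request": ">= 0.9",
                     "median first-response time": "< 1 hour"},
    assumptions=["Refund requests follow detectable patterns in the data"],
)
print(brief.business_objective)
```

Writing the brief down this way also makes the black-hat objections concrete: each one becomes a named constraint or assumption you can test, rather than a reason to stop before you start.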

Stephanie Harris-Yee: So it sounds like a lot of people are getting these pressures, but who do you think is best equipped, the right person at the company, the right job title or role, to actually lead this type of program? To get this up and running, who do you think is well positioned for that?

Erik Vogt: Yeah, that’s an interesting question. I’ve got a solutions mindset when I answer it, but it really requires somebody with both business context awareness and technical constraints awareness. If you find people who are one or the other, they’re usually missing the other piece.

If you’re only aware of the technical constraints, you don’t know the purpose or the business value you’re trying to achieve, and if you’re only aware of the business problem, you’ll tend to dismiss or lack awareness of the likely constraints on the technical side.

And this is maybe an opportunity to make some predictions. I suspect there’s going to be an emerging job description, something like an AI solutions strategist or an AI opportunity manager, for people who look at AI capabilities and map them to business problems.

I think one of the traits that’s going to be important here is curiosity. You have to recognize that there’s going to be ambiguity, but you need to put brackets around it and understand what it means: what are the possible values inside this ambiguous block, this assumption?

You have to be able to talk to executives, because you have to talk about business value, but you also have to be able to talk engineering, which is concrete and specific. An engineer just can’t take a vague definition of something; they’re going to need to code something that does either A or B, not something in between.

Also, you need to be a generalist and know when to bring in the specialists; no one person is going to be able to do this on their own. I don’t really see this being a hard engineering role, but you need to be part PM, part operations, part experience and solutions designer, and also just really aware of what the AI capabilities are and where the limitations are.

This is an amazing time, because there are so many magical things out there, apparently magical. As Arthur C. Clarke said, any sufficiently advanced technology is indistinguishable from magic. So yeah, we want magic; we start to expect it. But you have to be able to look through the veil a little bit and say: an image recognition system that differentiates between cats and croissants is feasible, and we can imagine something like an 80% success rate. We also need to think about how we handle exception management for the cases the filter can’t decide. Then you can start having a practical conversation about the real commercial value of what we’re doing.
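Here is a minimal sketch of that exception-management idea: accept the model’s answer only when it is confident enough, and send everything else to a person. The classify_image stub, the Prediction type, and the 0.8 threshold are all illustrative assumptions standing in for whatever real model and tolerance a project would actually use.

```python
# Sketch of exception management around an imperfect classifier.
# classify_image() is a stub; the threshold mirrors the rough 80% figure above.
from dataclasses import dataclass
import random

@dataclass
class Prediction:
    label: str         # "cat" or "croissant"
    confidence: float  # model's confidence in that label, 0..1

def classify_image(image_path: str) -> Prediction:
    """Stub standing in for a real image classifier."""
    return Prediction(label=random.choice(["cat", "croissant"]),
                      confidence=random.uniform(0.5, 1.0))

CONFIDENCE_THRESHOLD = 0.8
human_review_queue: list[str] = []

def route(image_path: str) -> str:
    pred = classify_image(image_path)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return f"{image_path}: auto-labeled as {pred.label} ({pred.confidence:.2f})"
    human_review_queue.append(image_path)  # exception path: a person decides
    return f"{image_path}: sent to human review ({pred.confidence:.2f})"

for path in ["img_001.jpg", "img_002.jpg", "img_003.jpg"]:
    print(route(path))
```

The commercial conversation then becomes concrete: what does each auto-label save, what does each human review cost, and is the gap worth the build?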

Stephanie Harris-Yee: So maybe to wrap this up, do you have any advice for solutions providers, vendors who are trying to support folks on the client side who aren’t sure where to start with this whole process?

Erik Vogt: This borrows from a lot of history in solution-oriented selling, or solution selling. Don’t pitch your solution first. As I think Covey said in his book, seek first to understand, then to be understood. So if you come in with the presumption that you have a tool or a capability to offer, be very, very careful about that.

The only reason those exist should be to hint at what’s possible; you really need to focus on a deep understanding of what the problem is. In my background as a solutions professional over the years, something that comes up over and over again is this image of all the tools a vendor works with.

It’s this whole map full of tools: we could use Corel, we can do stuff in OS too. And that is the worst possible way to confront this ambiguity, just “here are all the things.” Instead, tell stories: here’s a problem we had, and here’s the solution.

And here are the results. Try to tell it from the perspective of the individuals who are going to benefit. If it’s an end user, like a translator, tell the story through the lens of the translator’s experience. Make sure they’re benefiting from this. How are they benefiting from it?

How much do they benefit? Or if it’s a project manager, how much do they benefit? Are they saving time? Are they hitting deadlines? Are they getting better visibility? With all of these, be really careful about overpromising. And this is a tough one, because a lot of procurement folks out there, and I’m talking to you, procurement folks, I know you want guarantees, because that’s your job.

You need to quantify the thing you’re buying. But very often AI development has unpredictable outcomes; it’s hard to be deterministic about it. It’s a little bit like a doctor: you can go in, get a diagnosis, and they can recommend a treatment, but they’re not accountable for the outcome of that treatment.

They can only say they did the treatment, and they’ll charge you for it. But whether or not you get better is an unknown; there’s some ambiguity there. So I think it’s really important to have a perspective of openness, and that’s where that ambiguity comes in again. We can say, we can’t guarantee you’re going to get perfect email alignment, but we can say it’s going to be better than what you have right now. And if you’ve done your homework and defined exactly what you’re looking for, you can measure it. That’s where your KPIs come in. So it’s a lot about being a thought partner.

It’s about empathizing with the problem. Don’t be a vendor; put yourself in the shoes of the person who’s describing the problem. Clarify those requirements. At that stage, you have some hope of delivering something that actually changes the world.

Stephanie Harris-Yee: Thanks, Erik. And yeah, we’ll be back with you again soon, I’m sure.

Erik Vogt: Thanks, Steph. Really enjoyed these. Talk soon.
