How We Decide to Trust the AI We Use
There are no guarantees. Here's what we look at instead.
There’s a question we get from clients more often than you’d think: how do you know you can trust the AI you’re using?
It’s a fair question. And our honest answer is: the same way you decide to trust anyone.
You look at their record.
When we made the decision to build our AI-assisted workflows around Anthropic’s Claude, it wasn’t because someone showed us an attractive sales deck. It was because Anthropic has actually done things that cost them something.
They employ ethicists to shape how their models reason. They held back a model release until organizations had time to prepare. They turned down a Pentagon contract for surveillance and weapons targeting, and absorbed the pushback that came with it. That’s the kind of thing you only see when you’re actually paying attention to how a company behaves over time.
Is there a guarantee they won’t change? No. There are no guarantees anywhere — in vendor relationships, in partnerships, in anything that involves another organization making decisions over time. You either trust based on track record, or you don’t. The standard isn’t certainty — it’s accountability, and a track record worth examining.
What matters is how you build with it
Choosing a trustworthy AI provider is a starting point. The harder work is deciding what role AI plays in your organization — and what it doesn’t.
At inWorks, we’ve been deliberate about that boundary. Our founder Ilya Lehrman has thought carefully about where that line sits, and it shapes how we build.
AI handles the work that benefits from speed, consistency, and scale: drafting, summarizing, pattern recognition, first-pass analysis.
Humans handle the work that requires judgment, relationship, and accountability: decisions, client conversations, anything where the stakes are real.
This came from watching what happens when organizations skip that step. When AI output gets treated as final output. When speed gets confused with quality.
The multi-level businesses we work with can’t afford those mistakes. They don’t have the margin to recover from a trust breakdown with a client, or a process that scales the wrong behavior faster. So when we recommend AI tools or build AI-assisted workflows for clients, the first questions we ask aren’t about features. They’re about ownership: who reviews this, who’s responsible when it’s wrong, and what does the human in this loop actually do?
What we tell clients
When a client asks whether they should be using AI, the answer is usually: it depends on whether you’ve thought about what you’re handing off and what you’re keeping.
AI is genuinely useful for reducing friction in work that’s repetitive, high-volume, and well-defined. It’s less useful, and sometimes actively harmful, when it’s deployed in situations that require nuance, context, or care that the tool can’t provide. The mistake most organizations make isn’t adopting AI too slowly. It’s adopting it without a framework for where human judgment still has to live.
At inWorks, that framework comes down to three questions.
What is this tool replacing, and was that a good use of human time to begin with?
Who reviews the output before it touches a client or a decision?
And if this tool disappeared tomorrow, do we still understand the process well enough to run it manually?
If you can answer those three questions clearly, you’re probably building with AI in a way that holds up. If you can’t, you’re probably moving fast in a direction you haven’t fully thought through.
Why this matters more now than it did two years ago
The AI landscape has shifted quickly enough that the decisions organizations make right now are going to shape their practices for a while. The tools are good enough that it’s easy to over-rely on them. The marketing around them is good enough that it’s easy to mistake adoption for strategy.
What doesn’t change, regardless of how good the tools get, is the underlying question of trust. Trust in the companies building the models. Trust in the processes your organization builds around them. Trust that the humans in your workflow are still doing the thing that humans need to do.
We chose the tools we use because the companies behind them have, so far, shown us a track record we can stand behind. We built our processes the way we did because our clients deserve workflows where accountability has a face.
That’s the standard we hold ourselves to at inWorks LLC. And it’s the standard Ilya would encourage any small business to ask of the technology they’re bringing into their work.
If you’re uncertain, that’s worth a conversation. We’d be glad to have it.
inWorks LLC is a technology and systems company helping small businesses, creatives, and growing organizations build infrastructure they can actually use. Founded by Ilya Lehrman, inWorks has spent over 15 years thinking carefully about how technology should serve people — not the other way around.
📞 Call 267-857-8066 to start the conversation about building AI workflows your organization can trust.
For ongoing insights on current events in tech, cyber security, and protection best practices, follow inWorks LLC on LinkedIn for practical guidance designed for founders, operators, and leadership teams.