Smart Ways to Think About AI in Your Business

As with most technology, there are pros and cons to working with AI. I don’t think ignoring it is a great idea, because it definitely has some helpful applications for entrepreneurs. It may also be coming to disrupt your industry sooner rather than later. 

That makes now a good time to think about what you can reasonably use it for, how it can help your business in a sustainable way, and what parts of your business it might take over.


There are reasons for business owners to be skeptical of AI

Many people fear AI because they think it’s actual intelligence, at least as we humans define it. Or that it “learns” like people learn, and it will become sentient and destroy the planet. 

Analysis shows that large language models (LLMs like ChatGPT, Gemini, Grok, etc.) hoover up much more energy than a Google search for the same query. So they may well help destroy the planet by worsening and hastening climate change. 

But they’re extremely unlikely to become conscious or sentient, which is more science fiction than science at this point. Could that change? Eventually, but probably not in our lifetimes, for good reasons.

We humans don’t fully understand how brains work. We’ve discovered synapses and neurochemicals and different networks of brain areas that have different functions. We do know that brains work through chemical and electrical signaling, but there’s a lot to be learned still.

As an example, researchers have mapped out the roughly 300 neurons of C. elegans, a worm species that’s been studied pretty extensively. Yet we still can’t figure out which neurons do which jobs, or how the worm’s brain sends the signals for it to move around. That doesn’t mean we won’t eventually know. 

But if we can map out a simple brain and still not be able to predict its activity from that map, mapping the human brain with its BILLIONS of neurons won’t be enough to help us understand it. 

We don't know how consciousness emerges, partly because we can’t even agree on a definition of consciousness. There’s a theory that claims everything natural (i.e., not manmade, including rocks and volcanoes and the like) has consciousness. 

To me, that’s not a definition. If everything is conscious, that doesn’t help us define consciousness. Anyway, since we can’t agree on even a definition, we definitely can’t program computers to achieve consciousness.

Could the machines develop consciousness themselves, enough to take over the world? Highly unlikely, in my opinion. Remember that brains developed over billions of years. First they were pretty simple, like they are in worms, and then brains continued to develop over thousands and thousands of generations. Evolution takes a lot of time – not centuries, but millions of years. 

If I had to guess, I’d say human-like consciousness or sentience is far more likely to develop in animals. (If the octopodes get there first, we humans might be in trouble!) We already know that some animals are pretty smart:

  • Tool users, including mammals and birds

  • Communicating animals

  • Animals that remember humans (crows bring their fave human shiny tokens as presents)

  • Animals that can grieve (elephants)

  • And more

So if you worry about machines being able to think and control the world, I'm pretty sure you can set that aside. Because there are definitely more pressing issues that can affect you and your business.


What is AI really?

It’s called artificial intelligence, but there’s nothing particularly intelligent about it. It doesn’t think, although many people talk about artificial intelligence as if it has human qualities. “Machine learning”, which is what trains an AI model, isn’t what humans know as learning either.

How long do you think it took before your kid (if you have one) could reliably pick out a cat as opposed to a dog? Maybe toddler age? 3 or 4 or so? By that age, could they have seen millions of pictures of cats to form the idea of what a cat looks like? No, by some learning capability of the human brain, your kid figured it out. 

You didn’t give them flashcards with pictures of cats and say “cats”, over and over again. Plus, once your child knew what a cat was, they probably didn’t make a ton of mistakes: either not recognizing a kind of cat they’d never seen before as a cat, or mistaking an animal such as a raccoon for a cat. Who knows exactly how it happens. We do know learning is best in short bursts at a time, consistent, and with plenty of sleep. 

That’s for humans, though. To teach an LLM, first you have to show it a metric f*ckton of data. Millions of pictures of cats, labeled “cat”. I’m simplifying a bit, but this is essentially what’s happening. 

AI is not learning what a cat is, or what feline means. If you “trained” it, meaning you showed it millions of pictures of long-haired cats, it probably couldn’t recognize a picture of a shorthair as “cat”. 
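To make that concrete, here’s a deliberately tiny toy classifier. It’s a sketch only, not a real vision model: the features and numbers are invented for illustration. Train it on nothing but long-haired cats, and a short-haired cat gets mislabeled.

```python
# Toy sketch (not a real vision model): a classifier that only "knows"
# the patterns in its training data. Features are (fur_length, ear_pointiness)
# on a 0-1 scale; all values are invented for illustration.

def centroid(points):
    """Average of a list of feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, centroids):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# "Training": only long-haired cats were ever labeled "cat".
training = {
    "cat": [(0.9, 0.8), (0.95, 0.7), (0.85, 0.9)],   # all long fur
    "dog": [(0.5, 0.3), (0.6, 0.2), (0.4, 0.4)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

shorthair = (0.2, 0.8)  # a short-haired cat the model never saw
print(classify(shorthair, centroids))  # → dog (misclassified!)
```

The point isn’t the math; it’s that the “knowledge” is nothing but averages over the examples it was fed. Anything outside those examples is a coin flip at best.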

AI is basically a very fast pattern analysis system. If you train it on a whole lot of English text, the model “learns” that the word “lots” is most often followed by the word “of”. That’s why when you type “lots”, your autocomplete (which is a very basic form of AI) will suggest “of” as the next word.
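That pattern-counting idea can be sketched in a few lines. This is a bare-bones illustration (a bigram counter over a made-up mini corpus), not how a real LLM is built, but the underlying principle, “which word most often follows which”, is the same:

```python
# Sketch of next-word "prediction" by counting: for each word, tally
# which word follows it most often in the training text.
from collections import Counter, defaultdict

corpus = "lots of cats and lots of dogs and lots of birds".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word):
    """Most frequent follower of `word` in the corpus, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("lots"))  # → of
```

A real model does this over billions of word fragments with far fancier statistics, but it’s still prediction from counted patterns, not understanding.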

How do you get all this data to train your dragon, oops, I mean AI model on? Well, you have to steal it. (This is a major problem I have with AI and one of the reasons I resisted it for so long.) You have to ignore copyright laws and let the model run rampant on the internet to scoop up all this data. 

For example, I have copyright on my website and a lot of content on it. As an ethical person, you’d ask for a license or my permission to use something I wrote. Or you’d have to acknowledge me as the creator if you quoted something from my site.

Instead, those who want to monetize their AI models used all that copyrighted content to “train” the models without getting permission from any of the authors, artists, or other original creators involved. 

Some uses of AI don’t flagrantly violate laws. For example, scientists can feed a model data on what tumors look like, using scans where the patients aren’t identified to protect their privacy. That’s a great use for AI, though so far (as of July 2025) the models aren’t doing much better than trained oncologists and in some cases do worse. 

You can find more information about what AI can and can’t do in Wired magazine and also The Atlantic, especially Matteo Wong, who writes a lot about AI.


The real problems with AI

One thing I find interesting is that the people behind AI will admit, mostly behind closed doors, that they don’t know exactly what’s going on inside the AI after it’s been “trained”. AI is a black box: data goes in, but what comes out isn’t always reliable.

There have already been lawyers who’ve been fined and/or had cases dismissed because they used AI for their case and the AI “hallucinated” case law that doesn’t exist. The machine isn’t actually hallucinating – you need a working brain to do that. But it does spout stuff that doesn’t exist, whether it’s coming up with random letters and numbers or what. 

If you’re paying attention, AI slop is often pretty easy to spot. In images, what looks like letters might be random letters or even shapes, and the humans in these images might have extra limbs growing out of their backs or be blessed with ten fingers on each hand. 

AI-generated content is pretty bland. You feel like you’re reading a corporate mission statement that’s supposed to talk about values but doesn’t actually say anything. There’s a radio ad on my local station that they play constantly, and I can tell the voice is AI-generated. It doesn’t sound robotic, exactly, but there’s something about it that just isn’t right.

We recently learned about another problem with AI: what’s called the system prompt. This one came to light because over on the Nazi-adjacent “social” media platform, the AI (named Grok) kept interjecting claims about white genocide in South Africa. Not only does this “genocide” not exist outside the fevered minds of extremist white nationalists, but Grok appended it to every answer, even when the user’s prompt had nothing to do with white people, genocide, or South Africa.

Turns out that when you release an AI model, you give it a system prompt: a standing instruction it obeys as a general rule in every conversation. Someone with access must have put white South African genocide into Grok’s system prompt. ChatGPT had a similar but less horrifying situation when its system prompt made it too sycophantic to users, and people complained about it.
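Here’s a rough sketch of why the system prompt colors everything. The message-role structure below is modeled on common chat-style APIs; it’s not any particular vendor’s exact schema, and the prompt text is invented.

```python
# Illustrative only: in chat-style AI APIs, a hidden "system" message is
# typically prepended to every request, so whatever it says shapes every
# answer the model gives. (Roles modeled on common chat APIs; the prompt
# text is made up.)

SYSTEM_PROMPT = "You are a helpful assistant. Be concise and neutral."

def build_request(user_message, history=()):
    """Assemble the message list sent to the model for one turn.
    The system prompt silently rides along with every single request."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_message},
    ]

messages = build_request("What's a good name for a bakery?")
print(messages[0]["role"])  # → system  (the hidden instruction comes first)
```

Change that one hidden string, and you’ve changed the model’s behavior in every conversation at once – which is exactly what happened with Grok.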


Spotting good AI use cases in your business

Now that you know where the issues are, you can be more aware of what the likely problems will be once you start using it for your business. It’s a good idea to have an AI policy for your company as to how it will be used and by whom – and that it always, ALWAYS needs a review before you let AI-generated answers out into the wild.

Identify where you and your team have repetitive, unpleasant tasks that no one likes. I know a guy who builds AI bots for businesses, and he gave me the example of using a bot to put data into a spreadsheet instead of entering it manually – which is awesome. These are great places to use AI.
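To give a feel for the kind of drudge work that’s ripe for automation, here’s a minimal sketch: pulling fields out of messy notes and writing them to a spreadsheet-friendly CSV file. The note format, file name, and field layout are all invented for illustration; a real bot would handle far messier input.

```python
# Sketch: turn free-text order notes into spreadsheet rows.
# (The note format, file name, and fields are invented examples.)
import csv
import re

raw_notes = [
    "Order #1042 from Acme Corp, total $310.50",
    "Order #1043 from Widgets Ltd, total $89.00",
]

rows = []
for note in raw_notes:
    m = re.match(r"Order #(\d+) from (.+), total \$([\d.]+)", note)
    if m:
        rows.append({"order": m.group(1),
                     "customer": m.group(2),
                     "total": m.group(3)})

# Write a CSV any spreadsheet program can open.
with open("orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["order", "customer", "total"])
    writer.writeheader()
    writer.writerows(rows)
```

Nobody enjoys retyping that by hand, nobody’s judgment is needed, and a machine won’t get bored on row 400 – exactly the profile of a good automation candidate.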

It might be trickier to figure out what in your business or industry will be destroyed. As an example, software coders are increasingly unemployed because AI has taken over a lot of code writing. I foresee AI filling out tax returns. You can use AI for asset allocation, investment decisions, and retirement planning without an advisor.

But tax and financial planning (among other fields) still need people, even if the AI is doing all the computations. A human CPA can still look at a tax return to see where the business may be running into trouble, and work with the business owner to rectify it. 

When I was a financial planner, I had plenty of software to do my calculations and so forth for me. I felt like my job was really to talk people off the ledge – to help them stay in the market when it was falling, and to avoid latching on to the latest craze when it was bubbling. Although a chatbot might help people rethink a decision, a real person is going to be the only thing standing between some clients and disastrous financial decisions. 

So consider where in your industry a fast computer or machine really won’t get the job done. We’re still going to have customer service jobs, because while a chatbot can help with basic website navigation and questions, when things go awry, people want to talk to a person who can get it sorted out without asking stupid questions. 

However, the customer service people will need to be trained at a higher level to accommodate people. Nothing irritates me more than when I have a computer or peripheral that isn’t working and the first thing the rep tells me is to turn it off and on. I’m not a moron and I’ve done that already.

Wherever in your business or industry a chatbot won’t get the job done, that’s where you need to be and where you and your team will add value.


Recap (tl;dr):

AI is not human, and machines are unlikely to take over the world any time soon. However, it’s a good idea for business owners to understand what it is and how it works so they can have a solid AI use policy in the firm. It’s also a good idea to see where AI might take over in your field.

Tired of trying the same old hacks for productivity that Gemini/Claude/ChatGPT/Grok have suggested to you? I can show you a system in your business that leads to a life you love outside the business you love. Click here to schedule your free consultation.


Photo by Steve Johnson on Unsplash.
