Challenging AI

There’s plenty of hype around the advent of AI, its use and its prevalence. If you believed everything that’s said, you would think the world of business was about to be totally disrupted.

However, all is not so clear-cut, particularly in the knowledge-based industries.

Getting ChatGPT to Write the Script?

If you listen to people working in the social media space they will tell you, incessantly, that producing content is key. Yet that implies a huge overhead if that content is to be both useful and informed. Use ChatGPT, they say, and get it written in a fraction of the time.

Unfortunately this overlooks a key component of how AI and machine learning work. There has to be an adequate database of source material in the first place. Otherwise it’s the GIGO principle – Garbage In, Garbage Out.

To the casual observer an article written by an AI algorithm may look fine. Yet we have learnt over recent years the risks of fake news. So what do you do if you want your content to be believed and valued? I’d suggest that you bin ChatGPT or its equivalents and focus on the quality of your output. By all means use the technology to source some material, but then it’s down to YOU to check it out, as any good journalist would do when verifying their sources.

It’s a problem spreading across the software world, where AI-generated text is being inserted into material without proper understanding. My other half does a lot of work for Microsoft, translating their messaging, menus, instructions and the like into another language, but frankly the input is becoming increasingly opaque and a complete mess because AI has been used to produce the relevant dialogue strings in the first place. So don’t expect AI to make things better overnight – it’s simply not happening. More to the point, the management at Microsoft seem oblivious to the chaos that results. So if they aren’t going to get this right, then who is?

The AI Benefits Case

I was recently at a summit discussing digital innovation and communications media. The topic of AI was raised throughout the day, and I listened to some of the use-cases where it is currently being exploited.

  • For instance, curating a specific article targeting a small, highly developed audience with media that is not of general interest. Yet that requires a sufficient underlying dataset to provide the information, and it is still down to a human to give the algorithm enough tags to let the AI assemble the output. It may be cost-effective to get a complex message to a couple of hundred recipients by curating masses of information to distil a particular point of interest, but that’s not a general use-case. In this instance it is at least likely that the human author would be able to check the result through.

    Broaden things out to a mass audience on a frequent basis, and it’s questionable whether the output will avoid confirmation bias. If you target a specific audience then, as we know from social media, they hear what they want to hear rather than a balanced viewpoint. There are plenty of troll factories pumping out material that proves the point. Moreover it becomes difficult for the lay person to discern what’s real and what is not, and that’s not a good thing in my view.

    This is a two-way street, because the consumer may be relying on AI to drive content to their inbox. Yet how does AI know whether the underlying data is even true? It doesn’t have a conscience. Nor, to my knowledge, does AI (in its current state) have the power to fact-check everything it curates. Over time the AI algorithms will be trawling content that was itself written by AI. Talk about building on shaky foundations.

    Regular users of Wikipedia are well aware that there’s plenty of bias in many of the articles, but at least you can check the edit history to get a feel for where the content came from. An article written by AI is effectively a black box in this regard, and therefore I would not inherently trust it. Does that make me a cynic? I don’t think so. I would prefer to be described as discerning, but I accept that many people may lack the skills, knowledge or time to get behind the truth that is being presented to them. So relying on AI to write your scripts is a massive threat to your brand integrity.
  • Another instance being discussed was the use of AI in fashion design. Yet even there it faces criticism for producing images that are racially or otherwise biased or insensitive. It may have solved one problem (speed to market) but it is causing others in its place. Fashion design may require very rapid responses to the market and to public opinion and taste; however, that doesn’t apply to many areas of business. Good management principles haven’t changed hugely over the last 30 years, apart from greater awareness of mental well-being and the need to deal equitably with stakeholders. I’m still using basic principles first learned in the 1980s, and they are as good as they ever were.

So where does that leave the benefits case? I suggest “under the microscope” would be a good response.

What Not to Do

In the first place, take the hype with a massive pinch of salt. Yes, AI is going to drive innovation in processes and in other ways. Machine learning can do a lot of what we would expect from a Lean Six Sigma approach to production. However, that’s only part of how the world goes round.

In YOUR business, what might benefit from that type of approach? Do you currently collect adequate data to power an AI intervention? If not, then relying on mass data from other businesses may be a fatal flaw. Processes designed around collective data may highlight some common areas for improvement, but they aren’t everything.

What is required is some human intelligence in the equation – so don’t abdicate responsibility.

If you ask Alexa or Siri to find something for you, the response will be based on your past inputs and preferences. That’s not a great basis for taking an objective stand on a business issue. So step back and THINK!
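The point is easy to demonstrate in miniature. The sketch below is purely illustrative – the data, topics and scoring rule are all hypothetical, and no real assistant works this simply – but it shows how ranking results only by overlap with your past choices keeps reinforcing those choices and buries anything new:

```python
# Toy sketch of preference-reinforced ranking (hypothetical data).
# Items are scored purely by how often the user clicked their topics before.
from collections import Counter

past_clicks = ["golf", "golf", "sales", "golf"]  # the user's history
profile = Counter(past_clicks)                   # preference weights

candidates = {
    "golf tips":       ["golf"],
    "sales playbook":  ["sales"],
    "risk management": ["risk"],  # new topic: no overlap with history
}

def score(tags):
    # Sum of past-click counts for each tag: pure preference reinforcement.
    return sum(profile[t] for t in tags)

ranked = sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
print(ranked)
# "risk management" always lands last – zero overlap with past clicks –
# no matter how important the topic might be to the business.
```

However important the unseen topic is, this kind of scoring can never surface it, which is exactly why a personalised feed is a poor basis for an objective view.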

What to Believe

At the moment, content that has been written by AI is fairly easy to spot and is often acknowledged as such. Take news articles that collate information from a variety of sources for publication on the internet: they are fairly brief and may not offer much to be challenged.

However, you should treat professional advice with a much greater degree of suspicion. We simply don’t, and will not, use AI in writing our content, for a simple reason: we stand behind what is written and can verify the content. It may be an opinion piece, like this one, but it is still drawn from personal experience, inquiry and research.

Using AI to write content – such as legal terms and conditions, or frameworks for organisational change – is bad practice. So, if you are being offered them, I suggest you be sceptical about how good they really are. Sites offering the top 200 templates for X or Y are frankly useless (although masses of people seem to believe they are actually good). Yet those have been curated through one algorithm or another, and the curator who publishes them wants you to believe in their expertise. Well, what expertise have they exhibited, other than the ability to trawl for material and put it into a pile? Where’s the quality assurance?

As for the current trend of producing Cheat Sheets (for just about anything), I’d heave them straight into the trash.

Nuance is a key component of good advice. A simplistic approach isn’t going to deliver that no matter how complex the algorithm behind the AI.

Want to Get to the Truth?

In situations like this it is a good idea to be able to call on objective advice. Nobody expects you to be the expert on all matters in business. On the other hand, we’ve got decades of experience in questioning and advising, helping clients get onto the right track and take control. So contact us to get the conversation started on how you can learn to deal with the challenges facing you. Let’s make this an intelligent conversation and leave out the artificiality.

Master Coach can be contacted at

© 2023. All rights reserved