Over the last year, using artificial intelligence in everyday life has become more and more normal, often without us even noticing, as various apps build this technology into features you wouldn’t even think of. Did those who first started researching and developing this kind of software more than 50 years ago ever imagine it would come this far?
Introduction
Back then, a simple test (the Turing Test) was used to evaluate progress: a person was asked to hold two conversations, one with a machine and one with another person. If they couldn’t tell the two conversations apart, it was considered proof that the machine was able to “think”.
Since then, the development of artificial intelligence has taken a huge step forward and goes way beyond that simple test. The progress in automated creation of texts and visuals is so impressive that it’s understandable some may think they could delegate their tasks to software. After the initial hype, though, it’s becoming more than clear that even though artificial intelligence can complete more and more complex tasks, it won’t replace the expertise and the ability to develop an idea with an individual character that a marketing team can provide your company.
Especially since its launch in November 2022, ChatGPT has had a huge impact on the world of digital marketing. Even though earlier OpenAI models were already able to generate texts based on their training, it was ChatGPT that triggered a real wow effect, and GPT-4 continues that line. All of a sudden, a chatbot could actually give a personalized answer and provide you with information drawn from what it had learned from across the web, almost as if it were a person.
A lot of people loved the idea, others were rather skeptical, and for others still it felt like a real threat. Why? Because, especially in the beginning, many companies were amazed by the prospect of creating more content, faster and apparently “just with a few clicks”. Luckily, it now seems that this was just the initial hype around this revolutionary software and other tools that use artificial intelligence. You know when kids get a new toy and bring it literally everywhere, and try, for example, to feed a new teddy their spaghetti when clearly that mouth won’t open… but who wants to destroy a kid’s illusion, right? Well, that’s probably what happened with ChatGPT and other tools that work with artificial intelligence.
What does ChatGPT deserve its bonus points for?
This isn’t meant to dismiss the value of ChatGPT, but the notion of relying on its output without question is fading. Let’s take a quick look at the advantages that ChatGPT is in fact able to provide:
- Time saver: For tasks like summarizing information we already have or drafting emails we need to send out, it can save time. In these cases there’s no harm in using artificial intelligence as a sort of virtual assistant: the information is already given, so there’s no risk of it fabricating facts and presenting them as genuine.
- Fight the blank page syndrome: We probably all know the situation of sitting in front of a blank page without a single idea to start with. To brainstorm ideas, get inspiration or gather information, ChatGPT can be a good ally.
- Win the character count: Yes, it can be quite annoying to find alternatives for ad copy that actually fit the maximum number of characters, and this is another task ChatGPT can help with. Still, don’t assume you’ll be done in no time: you should definitely double-check the results and may have to give it a few more tries or adapt them yourself, as ChatGPT tends to prioritize the character count over grammar, which doesn’t always look good (see the sketch after this list).
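As a rough illustration of that last point, here is a minimal sketch of how one might ask ChatGPT for ad-copy variants through the OpenAI API and then verify the character limit in code rather than trusting the model to respect it. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, the limit and the headline are made up for the example.

```python
# A sketch only: generate headline variants with the OpenAI API and check
# the character limit ourselves. Model name, limit and headline are
# placeholders; requires `pip install openai` and an OPENAI_API_KEY.
from openai import OpenAI

MAX_CHARS = 90  # e.g. the headline limit of an ad platform

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You write concise ad copy with correct grammar."},
        {"role": "user",
         "content": (
             f"Rewrite this headline in 5 different ways, each under {MAX_CHARS} "
             "characters: 'Discover the refreshing new flavour everyone is talking about.'"
         )},
    ],
)

# The model may still overshoot the limit or bend the grammar, so filter by
# length here and review the remaining lines by hand before publishing.
for line in response.choices[0].message.content.splitlines():
    text = line.strip("-•*0123456789. ").strip()
    if text and len(text) <= MAX_CHARS:
        print(f"{len(text):3d} chars | {text}")
```

The point of the length check at the end is exactly the caveat above: the model often favours sounding right over counting right, so the human (or a small script) still has the last word.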
So, even when talking about the advantages that artificial intelligence provides, we should be aware that it’s not a dog that walks itself: you still need to hold the leash and give some extra commands and corrections here and there. That takes time and knowledge that goes beyond what the software is able to learn, for now. So basically it’s only as clever and creative as the person who uses it.
In the example above, we compare a fictional case: a social media caption for a new Coca-Cola flavour written by us and one written by ChatGPT, set against an original caption from the Coca-Cola news feed. Which one would you say hits the brand’s style?
As you can see, simple prompts without any insights lead to standardized answers and messages that don’t reflect the character of a brand and lack any distinctive factor. To get those insights, we need a person who puts all the pieces together, such as the brand’s tone of voice, the target group and the purpose behind the message, in order to shape a creative idea of what the text should be like. Whether you then use ChatGPT to produce the copy or write it yourself is, of course, a free choice. But without an expert providing the necessary information, AI won’t be able to do the job.
4 Reasons AI tools shouldn’t do all the work
These tools, even though they’re quite revolutionary, do have some real issues to face. Of course, as they’re constantly developing and improving, these could be resolved in the future, but so far there are still some important aspects that should be considered when using ChatGPT, Animoto, Descript and others:
Be aware of hallucinations
To describe this phenomenon in a few words, let’s have a look at ChatGPT. Sometimes it seems so eager to give us an answer that it just makes things up. That may even sound amusing, but it can lead to actual problems. How come? Basically, ChatGPT is trained on certain data; if the prompt it receives is not clear enough, includes slang or idioms it hasn’t learned, or requires an answer that goes beyond the data it was trained on, it will generate a response as best it can. That could in fact sound quite creative, but:
As this software is not able to reason or evaluate whether it should or shouldn’t provide incorrect information, its responses can be false, which is what we call hallucinations. That’s why it’s important to always double-check and evaluate whether you can actually use the text generated by ChatGPT, OpenAI Playground, Microsoft Bing and others.
In order to prevent hallucinations, it’s important to include as much precise information as you have in your prompt and to state what you want and what you don’t want. Literally, as if you were telling a kid not to lie. In most cases it works, and instead of providing you with false information, your AI tool admits that it doesn’t know the answer.
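As a sketch of what such a precise prompt can look like in practice, the example below passes the known facts along with explicit instructions about what not to invent and permission to admit ignorance. It assumes the OpenAI Python SDK and an OPENAI_API_KEY; the product and its details are entirely made up.

```python
# Sketch of a hallucination-conscious prompt: supply the facts, say what you
# want and don't want, and explicitly allow "I don't know". The product and
# its details are invented for illustration; requires the OpenAI Python SDK.
from openai import OpenAI

facts = """\
Product: Example Cola Zero Lime (hypothetical)
Launch date: 1 June
Key claim: zero sugar, natural lime aroma
Tone of voice: playful, short sentences, no exclamation marks
"""

prompt = (
    "Using ONLY the facts below, draft a three-sentence product announcement.\n"
    "Do not invent prices, ingredients or availability details.\n"
    "If a required fact is missing, reply exactly: 'I don't have that information.'\n\n"
    f"Facts:\n{facts}"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```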
Consider the context of your content
One should always consider the kind of tasks artificial intelligence is used for and how much supervision they require. Of course, if you want to, as mentioned above, get your own day-to-day tasks done or sum up information that you have gathered yourself and already know is correct, the risk involved in using AI software is probably quite low; with a simple double-check, you should be fine. However, if you want AI to get more complex jobs done, you should be more careful. Texts that will be published, or that you provide to colleagues, clients and so on, should definitely be correct and verified in order to protect the quality of your service and your reputation. That goes in line with the next point.
Copyright issues
Generative AIs get better the more data they have been trained on, so you can imagine the amount of material from the internet needed for this purpose. It’s almost impossible to verify that all this data, including texts and images, is free of copyright. So again, the risk depends on what you use artificial intelligence for. When drafting a few lines for an email to a colleague or gathering ideas for social media captions, the risk is quite low. If you rely on AI tools for more extensive texts, though, and depending on how you use them, you should definitely be careful, as there are already quite a few cases in court.
Ethical issues
As the data used to train AI software is created by human beings, it can obviously also contain stereotyped, racist or otherwise harmful content, which these tools are not able to recognize or evaluate. This, again, may lead to serious issues depending on where you use the generated content. To avoid problematic content, the same advice we gave for avoiding hallucinations applies: when composing your prompt, be as precise as possible, use a clean structure and state exactly what you want and what you don’t want. Beyond that, there’s no one better than a person, ideally one hired because they are capable and aligned with your company’s values, to double-check whether the content created by artificial intelligence is trustworthy and won’t cause any trouble.
What does that mean for marketing departments?
Like any other tool we use in digital marketing, AI can be very useful, makes a lot of tasks easier to deal with and is constantly developing and improving. Nevertheless, for now, a professional is still needed to make use of this software, evaluate its work, and adjust and improve the content it creates. No doubt, it’s a very powerful tool, but it’s only as clever and creative as the person who types the prompts and checks the answers.
Furthermore, nowadays it’s more important than ever to build unique brands that have their own tone of voice and convey authenticity to their target audience. Yes, you can give ChatGPT examples and tell it to create content in a certain style, but figuring out the best way to do so, and the constant supervision of the outcome, is time-consuming and requires marketing knowledge and experience. The idea that it would simply be cheaper and faster to create content and get work done is misleading. It is beneficial to make ChatGPT your ally, but it shouldn’t do all the work on its own.