Artificial intelligence has been used in working life for a long time. Nowadays, AI applications like ChatGPT are available to everyone and are starting to take their share of our daily time. AI is also increasingly integrated into everyday applications and devices, such as the Internet and mobile phones.

The potential benefits of using artificial intelligence are staggering. Still, it also carries huge risks, from criminal scams and manipulation using fake information and news all the way up to various doomsday scenarios.

AI may indeed become one of our success factors in working life, but it also has the potential to stop us from succeeding, depending on how we use it. From the limited perspective of the little bubble I live in, I have noticed three risk factors related to the use of artificial intelligence that, in my mind, can have a significant impact on what happens:

  • How easily people outsource their independent thinking and problem-solving to AI.
  • How easily people perceive AI’s answers as absolute truth when they are not.
  • How easily people present AI outputs as their thoughts without realizing that this can involve significant risks, even at the company level.

Thinking, “I am so smart; this does not concern me”, does not make anyone immune to these risks.

1) How easily people outsource their independent thinking and problem-solving to AI.

Increasingly, people consult AI tools like ChatGPT about issues or problems on their minds. As with any tool, the result depends both on the tool (the AI) and on the user’s ability to use it. The user must have sufficient competence and understanding of the matter they are asking the AI about, and the ability to assess the “correctness” of the answer critically. It is also important that the starting point is the person’s own thinking and ideas, and that their input, goals, and intended destination are included. If the input is bad, or perhaps doesn’t exist at all, the output will be too. And if everything from start to finish is based only on the AI’s thinking, we are treading dangerous waters.

Artificial intelligence can be an excellent tool for studying and learning new things. But excessive use of AI can also slow down learning and impair the development of one’s own thinking. What if, in the future, our schoolchildren, and why not working adults too, put their intelligence on the shelf, never even try to learn to solve problems themselves, but always immediately ask the AI for an answer and make it do all their written work for them?

Artificial intelligence does not represent anyone’s thinking. Our “own” thinking, even if it sometimes contains flaws and mistakes, is what sets us apart from others and from the masses, and at best it is perhaps even one of the cornerstones of our success. All of this can quickly disappear if we play that game. If you only repeat the AI’s answers like a parrot, chances are that, over time, you will lose the ability to think independently and critically, be creative, solve problems, brainstorm, improvise, and express yourself in your own words.

Companies are not immune to this kind of development either. By acting this way, companies that are respected and successful for their original thinking and actions can suddenly find that their business, brand, and success have disappeared into stories written by artificial intelligence.

You don’t learn how to use a hammer by asking the AI, but by using a hammer. After banging your finger a couple of times, you are already a slightly more advanced hammer operator. You become a good driver by driving. Counting and reading are learned by counting and reading. Education and professional qualifications are obtained by studying and doing. The other day a newspaper asked, “Have you always wiped your butt wrong?” I did not open the link and read the answer. A lot of people probably did. Maybe someone is even asking AI about this! Here, too, I believe that personal empirical learning and experience are the best teachers. When your finger has gone through the paper a few times at the right moment, I wonder if things are not already starting to go better. This premise applies to all learning and doing in life. To paraphrase an advertisement, you could say that “always letting someone else do everything for you teaches you a little; doing it yourself can teach you a lot”.

Effort, thinking, studying, pondering, solving problems, practising, doing, experimenting, and having experiences, both good and bad, are always needed for us to develop. You must learn how to think and cope with “your own brain”. Because, in life, you often have to. And you must be in control – not AI.

Don’t outsource this to AI. Otherwise, one day you may notice that artificial intelligence has robbed you of your intelligence. How, then, can you stand out to your advantage and succeed in the labour market?

2) How easily people perceive AI’s answers as absolute truth when they are not.

One of the characteristics of artificial intelligence is that it can sometimes speak pure rubbish. AI can write answers that seem very clever and convincing but where nothing is true. However, a surprising number of people appear to believe these answers. Serious examples of this kind have already been widely publicized.

For example, some readers may have noticed a news story a while back about a lawyer in the US who asked AI to list specific types of court cases and received a long, seemingly convincing list in response, which he then presented at a court hearing (Forbes 8/6/2023). However, all the court cases turned out to be non-existent; the AI had simply invented everything. Everyone can guess what happened next!

A slightly lighter version of the same thing is when, according to the media, Paavo Väyrynen (an old-timer and well-known politician here in Finland) recently asked artificial intelligence whether he should return his medals of honour because he felt he had been badly treated. This could be seen as a small joke, but what if, in the future, other politicians also no longer know how, or dare, to form their own opinions, and instead ask artificial intelligence what they should say and do for voters? It is not an unthinkable thing. Then it is no small joke anymore.

It is good to be aware that artificial intelligence is a powerful tool for manipulation. Every day, someone tries to manipulate us with fake information and news, for example on the Internet, on social media, or elsewhere. Serious examples appear in the media daily. If we are not aware of this and do not pay attention, we can be manipulated without even realizing it. This applies not only to politicians but to all of us. Studies also show that over-reliance on AI can reduce our tendency to question information and think independently. This is exactly where the famous “own brain” is needed!

3) How easily people use AI outputs as their thoughts, without realizing that this can involve risks, including at the enterprise level.

Artificial intelligence is good at writing text that looks great and convincing, which is why it is used in a wide variety of situations to improve “our message and story”. Sometimes, however, the AI writes text that is just a little too fancy and too convincing for the person in question and does not correspond to reality, or it “borrows” text from another person without telling us. Here are a couple of examples of the risks.

When using artificial intelligence in your studies, you must be careful about what you present as your own thinking in work samples and master’s theses and what is “borrowed” text. A trained teacher who knows the students and their competence can easily tell whether the text and style of presentation represent the student or whether the information is “borrowed” from someone else. Deliberately omitting source citations is plagiarism. This is not only a serious offence but also downright stupid. After all, borrowing or utilizing the ideas of others does not in itself make a document worse; in my opinion, the opposite is true. References to sources show that the person has looked at the subject more broadly, which is surely better than not doing so.

When you use artificial intelligence to create a “perfect story”, cover letter, and CV when applying for a job, the other party may get a very different impression of you than who you really are. This is no advantage when you go to an interview and the reality does not match the impression you have given.

In companies, artificial intelligence has until now mainly been used by “AI professionals”. Now they have been joined by many newcomers who have no previous experience with artificial intelligence, who have started using ChatGPT and similar applications, and who are not always aware of the risks.

Artificial intelligence seems to be eagerly used for writing sales offers. If there is a discrepancy between the salesperson and the substance of the text, this may cause problems. Sometimes it is obvious to others that the person does not possess the expressiveness and professional competence that the fine-sounding, AI-written text suggests. Then the credibility of the offer collapses, not to mention trust. This can be evident from the person’s background information alone, without ever meeting them, at its simplest on LinkedIn, for example.

The same can happen to a company that lets AI create impressive, too-good-looking website texts, campaigns, offers, brands, etc., that do not correspond to the company’s actual know-how, actions, and accomplishments. There is a risk that the gap between the customer promise and the company’s real resources and delivery capacity becomes obvious. This is something to be wary of, especially in B2B trading, where it is customary to check the background and references of a new partner. Few people want to deal with a company that promises more than it can deliver.
