AI Explained in Simple Words and How to Start Using It

AI Artificial Brain Drawing

Most of the time when you read about AI you find lots of technical explanations; finding a non-technical explanation of AI in simple words is rare.

At the end of last month I presented for Nautilus Cyberneering at a webinar we were invited to collaborate on. I talked about AI; the other presenter talked about the Metaverse. Both presentations aimed to clarify terms and concepts for non-technical business people. The goal was to create a good base understanding, simplify these topics, give examples, and share some advice on where to start.

Here is a link to the webinar in case you are interested. If you would like a copy of the slides, here they are.

In this post I intend to do the same, but I will also share how we are approaching the development of our own AI. This insight will give you a better understanding of what you need to consider.

So, if you are looking for technical knowledge, this article is not for you. Nevertheless, if you want to learn the basics and get a non-technical explanation of what is possible and where to start, keep reading.

How did I end up knowing a little about AI?

To start, I am not a technical expert, but I am interested in technology. I have worked in different business roles for the last 20 years and have often used technology to overcome challenges wherever I worked.

As for AI, I started dabbling with it about two years ago. My first contact was a Udacity course on AI product management. It taught me about AI's potential, training methods, challenges, costs, etc., as well as how to look at it from a business point of view. It was a good entry point.

After that, I continued reading on the topic, and by coincidence I ended up in a team developing an AI product, where I now work as a business developer.

It is an ambitious open-source project in which we create the necessary base product, infrastructure, and community around the concept of a simple, expandable command line AI assistant. As you can imagine, AI is a constant discussion topic there and I am learning something new every day.

Here is what we do in this project, explained visually:

Nautilus Cyberneering AI Assistant Conceptual Project

AI in our daily lives

AI is already everywhere in our daily lives. It is seamlessly integrated into your day-to-day when you use your desktop computer, tablet, or mobile. You may be using it when you take a picture, search online, buy at your favorite online store, watch a movie, or use your bank account.

Companies are using it all over the place. Netflix, Amazon, Google, your bank, etc. are just a few of them.

AI necessary basic understanding, conceptual differentiation, and types

To understand how AI works and how you can use it, you first need a basic understanding of what AI is. It is also important to understand the difference between AI and the human mind.

What is AI? – In simple words

AI is a computer system that can learn and work on its own. It does this by using algorithms that have been optimized, or "taught", so to say, to the machine.

However, what is an algorithm? In the case of AI, an algorithm is a set of rules created to solve a problem or accomplish some end. These sets of rules are what enable the computer system to learn and work on its own, whether for image recognition or any other task.

These algorithms are what make AI special and different from traditional computer systems. The tasks an AI algorithm can perform are many, though it is usually specialized to accomplish them in a specific context or domain.
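
To make this concrete, here is a toy sketch of my own (purely illustrative, not production code) contrasting a hand-written rule set with a rule that is "learned" from examples. The spam-filter scenario and all the names in it are made up for this post:

```python
# A hand-written algorithm: fixed rules chosen by a programmer.
def is_spam_rules(message):
    suspicious = ["free money", "winner", "click here"]
    return any(phrase in message.lower() for phrase in suspicious)

# A "learned" algorithm: the rule (a word-count threshold here) is
# derived from labeled examples instead of being written by hand.
def learn_threshold(examples):
    # examples: list of (word_count, is_spam) pairs
    spam_lengths = [n for n, spam in examples if spam]
    ham_lengths = [n for n, spam in examples if not spam]
    # the midpoint between the two class averages becomes the rule
    return (sum(spam_lengths) / len(spam_lengths) +
            sum(ham_lengths) / len(ham_lengths)) / 2

examples = [(50, True), (60, True), (10, False), (14, False)]
threshold = learn_threshold(examples)
print(is_spam_rules("You are a WINNER, click here!"))  # True
print(threshold)  # 33.5
```

The second function is the interesting one: nobody wrote the number 33.5 into the program; it came out of the data, which is the basic idea behind "teaching" a machine.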

Conceptual AI – Strong vs Weak

There are two different types of AI: Strong AI and Weak AI.

Strong AI, also called artificial general intelligence (AGI), can solve any problem that a human mind can solve. It can think creatively, reason, and learn on its own. Strong AI does not exist yet, but scientists are working on it.

Weak AI, on the other hand, also called artificial narrow intelligence (ANI), is AI focused on solving specific tasks. This is the kind of AI we have today; most of the AI applications we use fall into this category. It is described as "weak", but it can be very powerful.

Machine learning versus Deep learning – The differences

Now that we have a basic understanding of AI, let’s take a closer look at the two most popular types of AI: machine learning and deep learning.

As mentioned before, both deep learning and machine learning are sub-fields of artificial intelligence. And as you might have guessed, deep learning is a sub-field of machine learning.

So, what is the difference between machine learning and deep learning?

The main differences between machine learning and deep learning can be simplified as two:

  1. Machine learning depends more on human supervision to learn from prepared, structured data, while deep learning can learn unsupervised and even from unstructured data.
  2. The complexity of the layers of interrelated algorithms, known as nodes: deep learning networks usually have more than three layers of hidden nodes.

Here is an example setup of the layers in a Deep Learning Neural Network Algorithm:

Example set up of a Neural Network Layer setup (from left to right: Input layer, multiple hidden layers, and the output layer)

Rather complicated, if you consider that such networks can have hundreds of layers and thousands or even millions of nodes.
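
If you are curious how data actually flows through such layers, here is a minimal sketch in Python with NumPy. The layer sizes and random weights are made up purely for illustration; a real network would learn its weights during training:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # each node sums its weighted inputs and applies an activation (ReLU)
    return np.maximum(0.0, inputs @ weights + biases)

# Toy network: 4 inputs -> two hidden layers of 5 nodes each -> 2 outputs.
w1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
w2, b2 = rng.normal(size=(5, 5)), np.zeros(5)
w3, b3 = rng.normal(size=(5, 2)), np.zeros(2)

x = np.array([0.2, -0.1, 0.5, 0.3])   # the input layer
h1 = layer(x, w1, b1)                  # hidden layer 1
h2 = layer(h1, w2, b2)                 # hidden layer 2
output = h2 @ w3 + b3                  # the output layer
print(output.shape)  # (2,)
```

Each arrow in the diagram above corresponds to one of the weights in these matrices, which is why the numbers add up so quickly.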

Difference between the Human Mind and AI

AI vs Human Brain

Now, considering the previous discussion, we can say that AI is good at processing data from specific, preset contexts: it runs data through its algorithms, which it adjusts to improve its accuracy and outcomes.

The human mind, on the other hand, though not as fast at processing data, can learn and adapt to new contexts using abstraction and creativity. Thanks to this ability we have survived over the millennia and evolved to the present day.

To better understand this difference, let’s take a look at an example.

Say you want to buy a new car. You would go online, research the different models, their prices, features, etc. This is all data processable by an AI system.

Now, let’s say you want to buy a new house. The research part would be the same, but you would also need to consider the location, the schools in the area, the commute, etc. You would probably imagine life in that area: how it would be, how convenient, and so on. This is where abstraction and creativity come into play.

The human mind can take all this data and more, process it, and come up with a decision. The AI system, on the other hand, would need to be “taught” how to do this.

So, while AI is good at processing lots of data, the human mind is better at making decisions based on the available data, and possibly better with little data if the setting or context is completely new. And this is just one example of the difference between the human mind and AI.

Business use cases for AI in different industries

As I mentioned before, there are many real-world applications of artificial intelligence today. The most common, also in business, are:

  • Fraud detection, such as discovering unusual bank transactions
  • Predicting consumer behavior based on historic consumption and personal information
  • Speech recognition, used in many different industries such as healthcare, law, and customer service
  • Image recognition, used in business for tasks such as product identification, facial recognition, security, and document automation
  • Customer service, an industry that is increasingly using AI to provide better service to customers

This is just a small list of examples, but there are many other possible applications for AI in business today, and in the future, there will be even more.

AI is rapidly evolving and its potential uses are endless. I am convinced that before long we will see more and more businesses using AI to automate tasks, improve efficiency, and make better decisions.

How much can it cost and how complex is it?

Well, according to an article I read, implementing AI in your company can cost anywhere from 0 € to about 300,000 €.

The most complex option is usually creating a custom AI model. For instance, GPT-3, a Natural Language Processing model that is currently “on fire”, is said to have cost almost 5 million USD to train. Do you need to do this, though? No! There are cheaper and even free models out there.

Now, here is a little overview of a cost chart.

In the image above you may have wondered about “Open-Source”. Open-Source stands for publicly available software, data sets, and models, which are frequently used in development since they are either free or cost much less than their so-called closed-source equivalents. For your information, closed-source software very often makes heavy use of open-source components in its development.

Recommendations on how to start using AI in your business or organization

Now, the companies that are already using AI heavily share some common characteristics. They usually:

  • Produce lots of data
  • Have complex processes
  • Operate in variable settings or industries
  • Are large companies

So, if you are a small business or startup it is likely harder for you to start using AI. But that doesn’t mean it’s impossible.

There are many different ways to start using AI in your business today. My advice here is to be practical and take an incremental approach, rather than running off to look for “experts” right away. I would do that later, not now.

First, I would recommend the following to do in-house:

  1. Assign a person or a team to the task of getting familiar with AI.
  2. Invest in training them on the basics.
  3. Have them look at what data you are producing.
  4. Review your internal processes in search of complexity and large required time efforts.
  5. Have them research simple AI-empowered tools in the different areas.
  6. Check to which processes you can apply such tools.
  7. Implement these tools.

Second, now that you have an internal person or team with some practice, knowledge, and understanding of what can be achieved, I would suggest that you:

  1. Review the data and processes which require a custom approach due to the nature of your business.
  2. Contact external AI consultants and have them analyze your case and make a proposal and cost estimate of the required investment.

What is important in this second phase is that, when you approach them, you have them consider Open-Source and not only large AI models, since according to several benchmarks smaller specialized models beat larger general models in predetermined contexts.


So there you have it. I hope that I have explained AI to you in simple words and given you some insight into its use cases. If so, you are now a little more knowledgeable about AI, how you can start using it in your business, and the ways it is impacting our lives.

To me it is important to understand the basics so that one can conceptualize what is happening around us and differentiate between hype and potential. With AI, there are always going to be people who over-hype things and claim that AI will take over the world or do away with certain jobs.

I am sure that changes are coming, but remember: as humans, we have an incredible ability to learn and adapt, something on which artificial intelligence still has a lot of catching up to do. It is also evident from the examples that AI, used as a complementary tool, lets people be much more productive.

Now, with all of this in mind, go forth and experiment with some AI tools! There are many great ones out there waiting to help you do your work. I leave you a list here of some that I am trying out right now.

Also if you want to read a little more on AI these are some links to articles that I wrote or re-shared:

Natural Language Processing AI Technology a Quick and Brief Intro with Examples

Natural Language Processing AI

The company I currently work for, Nautilus Cyberneering, has a five-year project for which so-called Natural Language Processing AI is key. We essentially want to create a virtual artificial intelligence assistant that you can run on your own local computer and that communicates with you through a command line interface.

The assistant we envision will do all sorts of things that a private user may consider valuable. The user will basically interact with the “machine”, indicating what they want to achieve or do, and the “machine” will respond to their input.

As you can imagine such an application will require a good understanding of human language and it could look like this:

Clearly not exactly like in the picture but you get the point. : )

Human communication and understanding is rather complex, as you well know. Hence, to achieve this, we will employ “Natural Language Processing” artificial intelligence models, abbreviated as NLP.

Starting My Research

Given this, I wanted to begin forming my own opinion, test a few models, and ask around whether anyone in my network had used a Natural Language Processing AI so far. It happened to be the case.

Some good friends of mine were already using GPT-3. They told me that, to them, it was like another employee in their company. Knowing them, I knew it was no overstatement when they told me they used it for code review and research, especially since their business happens to be consulting on machine learning, AI, and automation solutions. Consequently, I became even more interested.

However, before you keep reading, let me start by saying that I do not consider myself an expert in this field, so please forgive any mistakes I may make in this post. Still, you may find it interesting if you are also new to the topic.

In this post I will do the following:

  • Briefly explain what NLP is
  • Explain how NLPs work
  • Describe NLP creation techniques
  • List known NLP models
  • Share some links to the ones I found most interesting
  • Give you some examples of their replies to my input
  • Share some already usable tools

What is Natural Language Processing (NLP)?

These are two definitions from different sources:


“Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of “understanding” the contents of documents, including the contextual nuances of the language within them.”


“Natural language processing strives to build machines that understand and respond to text or voice data—and respond with text or speech of their own—in much the same way humans do.”

So to sum up:

NLP is an artificial intelligence technology meant to power machines. It processes human language input, written or spoken, understanding it and responding to it.

How do NLPs Work

The summary above sounds very simple, but it is not. An NLP system needs to:

  • Recognize speech, that is, convert voice data into text data, no matter how people speak, where they come from, or what accents or mistakes they make.
  • Tag words, be they nouns, verbs, articles, etc.
  • Decide on the intended meaning of a word given many possible meanings based on the context.
  • Differentiate between block elements such as sentences.
  • Establish relevant words, for example names of a person, state, etc.
  • Make contextual cross references, from pronouns or descriptive words, etc.
  • Infer the emotional load within a text, such as subjectivity, objectivity, sarcasm, etc.
  • Generate “human” responses from structured information.
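
To give you a feel for a couple of these steps, here is a deliberately naive Python sketch of my own making. Real NLP systems are vastly more sophisticated; this toy just splits sentences, tokenizes, and tags words using a tiny hand-made lexicon I invented for the example:

```python
# A tiny hand-made lexicon standing in for a real tagging model.
TAGS = {"alice": "NAME", "flew": "VERB", "to": "PREP",
        "paris": "PLACE", "she": "PRONOUN", "loved": "VERB", "it": "PRONOUN"}

def split_sentences(text):
    # differentiate block elements such as sentences
    return [s.strip() for s in text.split(".") if s.strip()]

def tokenize(sentence):
    # break a sentence into individual word tokens
    return sentence.lower().replace(",", "").split()

def tag(tokens):
    # tag each word using the lexicon; unknown words become "OTHER"
    return [(t, TAGS.get(t, "OTHER")) for t in tokens]

text = "Alice flew to Paris. She loved it."
sentences = split_sentences(text)
tagged = [tag(tokenize(s)) for s in sentences]
# a real system would now also resolve "She" -> Alice and "it" -> Paris
print(tagged[0])
```

Notice that the really hard steps from the list above, such as cross-referencing pronouns or inferring sarcasm, are exactly the ones this toy cannot do.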

I do not know about you, but I think this is difficult even for a human. Recall, for instance, learning a language: all the different accents, double meanings, differing senses of humor, etc. Complex indeed.

If you are curious you can read more here.

Natural Language Processing AI Model Creation Techniques

Creating a single working NLP model is difficult and evidently takes a lot of effort. Over many years, different approaches have come into existence to optimize and test this process; research in this field has been going on for over half a century. You can get a brief overview of past models on Wikipedia.

Two machine learning methods are currently in use. Both require extensive computational power, and they can be used in combination.

One could write a book on each of them, but that is not my intent, so I will briefly describe how I have understood them and include a link to more information.

Feature or Representation Learning

A system is set up to automatically discover and learn from prepared sets of labeled or unlabeled data. It essentially learns to recognize and associate features, i.e., common patterns within a context, and to make associations of meaning. More information here.
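
As an illustrative toy of my own (using plain NumPy and principal component analysis as a stand-in, not any specific feature-learning framework), the following sketch automatically "discovers" the dominant direction of variation in unlabeled 2-D data, which you can think of as one learned feature:

```python
import numpy as np

rng = np.random.default_rng(1)

# 2-D points that mostly vary along one hidden direction
direction = np.array([0.6, 0.8])
data = rng.normal(size=(200, 1)) * direction \
       + rng.normal(scale=0.05, size=(200, 2))

# "discover" the feature: the direction of greatest variation (PCA via SVD)
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
learned_feature = vt[0]

# the learned direction lines up with the hidden one (up to sign)
alignment = abs(learned_feature @ direction)
print(round(alignment, 2))  # close to 1.0
```

Nobody told the system which direction mattered; it found that structure in the raw data on its own, which is the spirit of representation learning.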

Deep Neural Network Learning

This is an approach in which there are different layers of interconnected nodes. Nodes are computational sets of rules whose weights get adjusted during the training phase. The data you input into the system passes through this network of decision rules, progressing through the different layers like a decision tree. More information here.
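
To illustrate the idea of weights being adjusted during training, here is a toy sketch of a single "node" learning the rule y = 2x from labeled examples. It is my own simplified example; a real network adjusts millions of weights at once in the same spirit:

```python
# one "node": prediction = weight * input; training nudges the weight
# so that predictions move toward the labeled answers
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true rule: y = 2x

weight = 0.0
learning_rate = 0.1
for _ in range(100):                          # training phase
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= learning_rate * error * x   # adjust the weight

print(round(weight, 3))  # converges to 2.0
```

The "learning" is nothing mystical: each pass nudges the weight a little in the direction that reduces the error, until the node embodies the rule hidden in the data.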

Neural Network

Known Natural Language Processing AI Models

There currently exist many NLP models. It would seem that there is a race to develop the most powerful one. You will find Wu Dao, GPT-3, GPT-J, Meta AI, BERT, etc.

One of the challenges researchers are facing with such models is whether the models have learned reasoning or simply memorize training examples.

Clearly, as you can imagine, some are Open-Source and others are not. Through the use of and access to these available models, many solutions are being developed. I will briefly highlight some facts about the ones I have looked at most and for which I found demo implementations, or solutions built on them that you can try.

GPT Group

GPT stands for “Generative Pre-trained Transformer”. These are models trained to autonomously predict the next token in a sequence of tokens, a token being a set of characters when it comes to text.
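
As a toy illustration of next-token prediction (vastly simplified compared to a real GPT model, and entirely my own example), here is a tiny bigram predictor that always picks the most frequent follower it saw in "training":

```python
# count which token follows which in a tiny "training" corpus
corpus = "the cat sat on the mat the cat ran".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def predict_next(token):
    # pick the most frequently observed follower
    followers = counts.get(token, {})
    return max(followers, key=followers.get) if followers else None

# generate text one token at a time
token, generated = "the", ["the"]
for _ in range(3):
    token = predict_next(token)
    generated.append(token)
print(" ".join(generated))  # "the cat sat on"
```

A real GPT model does conceptually the same thing, generating one token at a time, but with billions of learned parameters instead of a simple frequency table.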


This is the model that has created a lot of buzz since it came out in 2020, when it was the largest model ever trained. It has already been used to implement marketed solutions by different companies.

The model was developed by OpenAI. It started out as an open-source project; however, its code base has since been licensed exclusively to Microsoft.

It has been trained to perform generalist and niche tasks, such as writing code in different programming languages like Python.

Parameters: 1.5 Billion → 125 Million – 175 Billion
Training Data: 10 Billion tokens → 499 Billion tokens
Model Progression OpenAI

Here are two interesting links:


These three models have been developed by EleutherAI, an open-source, grassroots collective of researchers working on open AI research. From what I read, the models can be considered generalist models, good for most purposes.

Parameters: 1.3 to 2.7 Billion → 6 Billion → 20 Billion
Model Progression EleutherAI
Interesting Responses from GPT-J

Below you will find several screenshots of the responses I got from their online test interface, so you can judge for yourself.

AI responding to “Who is the greatest musician of all times?”
AI responding to “which is the best beginner programming language in your opinion?”
AI responding to “what is more important to work or to live?”

Here is the link to the online test instance where I got the responses from if you are interested:

On the other hand, you can also get paid access and test the different EleutherAI models at very reasonable prices.

Wu Dao 2.0 – China’s Monster Natural Language Processing AI

This Natural Language Processing AI model is considered the “monster” and the largest NLP model ever. It was created by the Beijing Academy in June 2021. Its code base is open-source, based on PyTorch, and it is “multi-modal”: it can process images and text at the same time and learn from both, something the others are not capable of.

It was trained on:

  • 1.2TB Chinese text data in Wu Dao Corpora.
  • 2.5TB Chinese graphic data.
  • 1.2TB English text data in the Pile dataset.

It is supposedly capable of all the standard tasks, such as translation, but also of composing poetry, drawing, singing, etc.

Wu Dao 2.0
Parameters: 1.75 Trillion
Training Data: 4.9 TB
Model Specs Wu Dao 2.0

Some Implemented Solutions

Here you will find some interesting implementations that you can start using today if you want.


This is a tool that I think many digital copywriters will find handy to ease their work.


The same applies to this solution, which helps you speed up writing your tweets in your own style.


This is a solution for developers to write code faster and easier.

Nevertheless, these are just three of many more. Here is a more extensive list of such solutions.

Final Reflections

As with the examples above, technology never ceases to amaze me. Evidently, there is great potential in their use. Yet, what are the resulting disadvantages?

OpenAI, for instance, decided when developing its GPT-2 model not to make it fully available, due to its potential for creating fake news. Later, OpenAI went one step further and called for general collaboration on AI safety in this post.

I agree with this line of thought. We have to weigh AI’s possibilities and dangers and check them against our values and beliefs. Technology, in the end, is nothing but a tool, though a powerful one. Which is why this old adage from before Christ rings true again:

“With great power comes great responsibility.”

Not from Marvel Comics : )

AI has only started and we are still to see much more of it in the coming years. If you want to read another interesting example of Natural Language Processing AI at work, here is another post of mine.



Tackling Problems: Plastic – An Example from Guatemala

Tackling Problems: Plastic - An Example from Guatemala

I saw this video yesterday; it impressed me for two reasons.

  1. It shows that modern societies’ problems can be tackled with common sense and a joint will.
  2. It shows the importance of clear priorities no matter the economic cost, in this case respecting nature and safeguarding it.

A Remarkable Example

These people have given it a shot: an entire community joining together and banning the use of plastic, so that future generations can enjoy their living place as they do.

In essence, they are going back to former consumption habits, valuing nature and the environment, and acting accordingly out of respect for its resources. It is an example of an entire community, including its leaders, wholeheartedly supporting this commitment.

Technologically innovative?

Not at all, since they are going back to the former ways of their people. In essence, by looking back to former ways, they tackle a problem without sophisticated technologies.

The innovativeness here in my opinion stems from the fact that they have banned the use of plastic in an entire community. Radical but see and judge for yourself.


Do you know of other examples? What do you think of this?

Good Reads: A small town in Japan doubles its fertility rate

Some time ago I read this article and found it a good one; yesterday I bumped into it again.

I enjoy looking at how people, groups, institutions, etc. tackle important problems. This is an interesting article for me, since I believe it focuses on an important problem that more mature societies face: low fertility or birth rates.

There is usually much buzz about innovation, and many believe that only very novel things or approaches deserve the name, when sometimes it is actually just common sense that is different only because nobody else has executed it.

A great article of 9 January 2018 by The Economist: “A small town in Japan doubles its fertility rate”.

This case is a perfect example of how you can make an impact with common sense incentives, something countries like Spain could learn from.