But What is AI… really?

3 types of AI and what’s next?

Shaw Talebi
5 min read · Sep 17, 2024

It’s hard to imagine life without technologies like our laptops, smartphones, and internet access. In the not-so-distant future, another technology will join this list — AI. Although there are clear examples of powerful tools like ChatGPT, the term “AI” gets thrown around so casually these days that it can be hard to know exactly what people mean by it. Here, I share a beginner-friendly overview of the technology (thus far) and some thoughts on where it is heading.

Image from Canva.

What is AI?

AI is short for artificial intelligence, but what does that actually mean?

The trouble with defining this term is its latter half—intelligence. We (i.e., humans) haven’t settled on a singular definition for this term. However, the definition I like to use (and one relevant to most situations) is intelligence = the ability to solve problems and make decisions.

Therefore, from this view, artificial intelligence (AI) is simply a computer's ability to solve problems and make decisions.

Why should I care?

Offloading our problem-solving and decision-making to computers is nothing new (that’s why we created them). However, what is changing (rapidly) is how we do this.

28 years ago, if I wanted to translate a French love letter from my fiancé, I could have used the rule-based software tool Babel Fish. 8 years ago, I might have used the ML-based Google Translate. And today, I’d probably use the LLM-based ChatGPT.

There are two things to note from this example. First, each iteration brought technological innovations, thus producing better and more capable translations. Second, it took about 20 years to go from Babel Fish to (ML-powered) Google Translate. However, it only took 6 years to go from (ML-powered) Google Translate to ChatGPT.

In other words, not only is AI getting better, but it’s getting better at an accelerating pace. Understanding this evolution helps us use AI more effectively and see where it’s headed.

3 Types of AI

The three translation tools mentioned above exemplify three major types of AI developed over the past several decades. To frame each type, I borrow the language of ML researcher Andrej Karpathy [1]. Note: decade ranges are crude approximations.

Summary of 3 types of AI. Image by author.

Software 1.0 (~1950–2000)

If X then Y

Software 1.0 refers to programs that follow explicit instructions written by humans, i.e., rule-based systems. This means the programmer must anticipate and account for every possible scenario.

For example, Babel Fish (the early translation software) would have used predefined rules and dictionaries to match words from one language to another. If a word or phrase didn’t fit into its logic, the translation would be inaccurate or fail.

Although these systems were a breakthrough (at the time), they were rigid and limited by developers' ability to define every possible rule or edge case in advance.
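
To make the “if X then Y” idea concrete, here is a toy rule-based translator in Python. This is not how Babel Fish actually worked, just a sketch of the paradigm: every mapping is hand-written, and anything outside the rules fails.

```python
# Toy rule-based (Software 1.0) translator: every mapping is written by hand.
RULES = {
    "bonjour": "hello",
    "mon amour": "my love",
    "je t'aime": "I love you",
}

def translate(phrase: str) -> str:
    """Look the phrase up in the hand-written rules; fail if no rule matches."""
    key = phrase.lower().strip()
    if key in RULES:          # if X then Y
        return RULES[key]
    return "[no rule found]"  # anything the programmer didn't anticipate breaks

print(translate("Je t'aime"))      # -> I love you
print(translate("Tu me manques"))  # -> [no rule found]
```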

Software 2.0 (~2000–2020)

Model see, model do.

Software 2.0 marks the shift toward machine learning (ML) systems, where instead of being explicitly programmed, computers learn to perform tasks by example. This unlocked the ability to create programs for which the Software 1.0 paradigm was difficult or insufficient.

For example, in 2016, Google Translate began using neural networks to learn from vast amounts of multilingual text [2]. Instead of relying on predefined rules, it could analyze patterns in data to provide translations. The more examples the system was shown, the better it became at understanding nuances in language, context, and grammar.
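
Here is a minimal sketch of the same idea using scikit-learn. It tackles a far simpler task than translation (guessing whether a short phrase is French or English), but it shows the core shift: the program learns from labeled examples instead of hand-written rules.

```python
# Software 2.0 sketch: the program learns the rules from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled dataset: phrases and their language labels.
phrases = ["je t'aime", "bonjour mon amour", "tu me manques",
           "i love you", "hello my love", "i miss you"]
labels = ["fr", "fr", "fr", "en", "en", "en"]

# Character n-grams are a simple, effective feature for language identification.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    LogisticRegression(),
)
model.fit(phrases, labels)  # no hand-written rules, just examples

print(model.predict(["je pense à toi"]))  # likely -> ['fr']
```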

While Software 2.0 unlocked an ocean of new problems that could be solved with computers, these systems were largely specialized and required large volumes of labeled data to perform well, which limited their potential.

Software 3.0 (~2020-Now)

AI hacker’s paradise.

Software 3.0 is today’s state-of-the-art AI paradigm. Rather than building specialized task-specific models, researchers have developed general-purpose models that can be readily adapted to a variety of tasks.

OpenAI’s ChatGPT, for instance, is a single model that can perform an outstanding range of complex and esoteric tasks. It achieves this through so-called prompting, which is when a user adjusts the model’s behavior via natural language inputs. Not only can such a model translate French love letters to English, but it can also translate them to Spanish or Arabic, summarize them, and write responses.
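
As a rough sketch of what prompting looks like in code, here is a call to a general-purpose model via the openai Python package. The model name is illustrative, and an API key is assumed to be set in the environment.

```python
# Software 3.0 sketch: one general-purpose model, steered with a prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful translator."},
        {"role": "user", "content": "Translate to English, then summarize in "
                                    "one sentence: 'Je pense à toi chaque jour.'"},
    ],
)
print(response.choices[0].message.content)
```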

However, prompting is just one dimension of using Software 3.0. Another way we can adapt these powerful general-purpose models is via fine-tuning. This is where we train the model on a specific task (Software 2.0 style). The key benefit of fine-tuning a model, rather than training one from scratch, is that we can develop state-of-the-art models with far fewer training examples.
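
Here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries. The base model and dataset are stand-ins chosen for illustration, not anything specific from this article.

```python
# Fine-tuning sketch: adapt a small pre-trained model to a specific task.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "distilbert-base-uncased"  # illustrative pre-trained model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# A small labeled dataset stands in for your task-specific examples.
train_data = load_dataset("imdb", split="train[:1000]")
train_data = train_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=train_data,
)
trainer.train()  # far fewer examples needed than training from scratch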

Another way to think about this is that we are “hacking” these pre-trained AI models. This is effective because the models gain a substantial understanding of the world during their pre-training. Other techniques in the “AI hacker’s” toolkit are model pruning, quantization, distillation, and fusion.
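
For a taste of one of these techniques, here is a minimal sketch of dynamic quantization in PyTorch, applied to a toy model (real workflows also evaluate the accuracy impact of the smaller weights).

```python
# Quantization sketch: shrink a model by storing weights in 8-bit integers.
import torch

# A toy model stands in for a real pre-trained network.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 2))

# Dynamically quantize the Linear layers' weights to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)
```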

Iteration > Intelligence

AI’s fundamental benefit, like that of all technology, is that it allows us to move faster.

If you wanted to build a simple web app in the late 1990s, it would probably have taken weeks of research and programming work. Twenty years later, the same project might have taken days with the help of Google. Today, it might be done in hours with the help of ChatGPT or other coding co-pilots.

This is valuable because, often, what determines a successful enterprise is not the number of PhDs on the team but rather how quickly they can make mistakes, learn from them, and try again.

How do I use it?

In the spirit of speed and iteration, the best starting point for AI today is tools like ChatGPT and Claude. They can readily help you get unstuck in a wide range of contexts.

Here are some of the most common things I use ChatGPT for.

  • Research paper question answering
  • Writing boilerplate code
  • Explaining code errors
  • Explaining new topics and fields (recent examples: drip campaign, marketing analytics, front-end development)
  • Brainstorming analogies and examples for blog posts
  • Recommendations on how to implement software projects

Software 4.0?

Given the accelerating pace of AI innovation, it’s natural to wonder, what’s next? While it’s almost impossible to say what the next major AI “software update” will be, its timing might be easier to anticipate.

Given that it took us ~50 years to go from Software 1.0 to 2.0 and ~20 years from 2.0 to 3.0, I’d bet that Software 4.0 will arrive in the next 5 years.

Of course, I’m probably wrong, but I am curious to hear what you think Software 4.0 might look like. Let me know in the comments :)
