What is Artificial Intelligence (AI)? Definition and use cases

What is Artificial Intelligence?

Artificial Intelligence (AI) is a scientific field focused on creating machines and computers able to think, learn and act in a human-like manner. Although still at an early stage and surrounded by considerable hype, AI is already having a significant impact on society, and as the industry matures, the number of products and services powered by AI will continue to grow. The Artificial Intelligence market is expected to exceed €512 billion by 2024, according to IDC.

AI adoption is also among the goals of the EU’s Digital Decade policy program, according to which 75% of EU enterprises should be using AI by 2030. This target is part of Europe’s strategy to boost digital sovereignty and strengthen its role in the digital economy.

Artificial Intelligence definition

Artificial Intelligence is a discipline focused on creating intelligent machines and computer programs that are able to imitate problem-solving and decision-making capabilities inherent to human intelligence. This favors automation and innovation, thus boosting customer experience, competitiveness and growth.

AI comprises a set of technologies used to extract valuable information from large data sets in order to automate processes and perform a wide range of tasks. To do so, it combines many disciplines, such as computer science, data analytics, linguistics, neuroscience and psychology, among others.

Machine Learning and Deep Learning

Machine Learning (ML) is a subfield of AI, while Deep Learning (DL), although the two terms are often used interchangeably, is actually a subfield of Machine Learning.

Both rely on Artificial Intelligence algorithms to create systems able to make predictions and classifications based on input data. But while Machine Learning requires more manual human intervention, Deep Learning increases automation and scalability. AI enterprise applications, such as data intelligence, are mainly based on these subfields of Artificial Intelligence.
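To make the idea of "predictions based on input data" concrete, here is a toy Machine Learning sketch in plain Python: a nearest-centroid classifier "learns" one mean point per class from labeled examples, then labels new points by proximity. The data, class names and function names are all illustrative, not part of any particular library.

```python
# Minimal illustration of the Machine Learning idea: a model "learns"
# parameters from labeled examples, then predicts labels for new data.
# This nearest-centroid classifier is a toy sketch, not a production model.

def train(samples, labels):
    """Compute one centroid (mean point) per class from the training data."""
    groups = {}
    for point, label in zip(samples, labels):
        groups.setdefault(label, []).append(point)
    return {
        label: tuple(sum(coords) / len(points) for coords in zip(*points))
        for label, points in groups.items()
    }

def predict(centroids, point):
    """Assign the class whose centroid is closest to the new point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], point))

# Two clusters of 2-D points with known labels.
samples = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (7.8, 8.2)]
labels = ["small", "small", "large", "large"]

model = train(samples, labels)
print(predict(model, (1.1, 0.9)))  # → small
print(predict(model, (8.1, 7.9)))  # → large
```

In a Deep Learning system, by contrast, the features and decision boundaries would be learned automatically by a multi-layer neural network rather than hand-coded as a distance rule.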

Types of Artificial Intelligence

Artificial Intelligence can be classified into three main categories: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).

Artificial Narrow Intelligence

Artificial Narrow Intelligence, also known as Narrow or Weak AI, is trained to perform specific tasks. It is the most widespread type of AI today. For instance, it is behind many everyday applications such as Apple’s and Amazon’s virtual assistants, Siri and Alexa.

Artificial General Intelligence

Artificial General Intelligence, also known as General or Strong AI, is a theoretical approach in which machines would equal human intelligence. It theorizes about the possibility of machines achieving self-consciousness.

Artificial Super Intelligence

Artificial Super Intelligence, also known as Super Intelligence, theorizes about the possibility of creating a super intelligence that would surpass even the human brain.

Furthermore, depending on the degree of replication of human capabilities, there is also another classification system that establishes four types of Artificial Intelligence: reactive AI, limited memory AI, theory of mind AI and self-aware AI.

AI applications

Since 2018, AI has become increasingly affordable and optimized, according to Stanford University’s Artificial Intelligence Index Report 2022. For instance, the cost of training an image classification system has decreased by 63.6%, while training times have improved by 94.4%.

Nowadays, companies can more easily access the necessary storage and processing capabilities to run AI workloads. GPU cloud and bare-metal servers offer an affordable and scalable way for any kind of company to develop Artificial Intelligence products and services, without big capital investments.

As a consequence, the adoption of AI technologies has not stopped growing. Here is a list of some of the current applications of AI:

  • Speech and image recognition.
  • Messaging bots in customer service. 
  • Recommendation engines in advertisement.
  • Data analytics.
  • Self-driving car technology.

History of Artificial Intelligence

Although its history can be traced back to antiquity, Artificial Intelligence truly started to take shape during the 40s and 50s. During these decades:

  • Walter Pitts and Warren McCulloch first described what later became known as “neural networks”.
  • Marvin Minsky and Dean Edmonds built the first neural net machine, the SNARC (Stochastic Neural Analog Reinforcement Calculator).
  • Alan Turing, widely considered the father of AI, published his paper Computing Machinery and Intelligence in 1950, in which he posed the question “Can machines think?” and introduced the Turing Test.
  • John McCarthy coined the term “Artificial Intelligence” during the first-ever AI conference in 1956, the Dartmouth Workshop, which is widely considered the birth of AI as a field.
  • Arthur Samuel created a checkers program that achieved sufficient skill to challenge respectable amateur players. He also popularized the term “Machine Learning”. It is worth highlighting that, ever since, game-playing Artificial Intelligence has been used as a measure of progress in the field.

During the 60s and early 70s, developments in AI astonished most people, as computers started to be able to perform complex tasks such as solving algebra word problems or learning to speak English. Interesting milestones from this period include:

  • The natural language processing program ELIZA, able to hold such realistic conversations that some users even thought they were talking to another human being.
  • The creation of the first full-scale, intelligent android in 1972 in Japan.

However, critiques and setbacks also arose, leading to what is now known as the first “AI winter”. For instance, the lack of computational power and storage capacity needed to solve relevant problems led to a loss of funding: AI researchers had set such high, unachievable expectations that investors became frustrated by the lack of progress.

By contrast, the 1980s were a flourishing period for Artificial Intelligence, thanks to “expert systems”. These AI programs were able to answer questions and solve problems using logical rules derived from experts’ knowledge. Expert systems offered an emulation of human decision-making that was very attractive to organizations looking to boost productivity. To name an example, the XCON expert system, created at Carnegie Mellon University in Pittsburgh, Pennsylvania, saved Digital Equipment Corporation millions of dollars annually.
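The "logical rules derived from experts’ knowledge" mentioned above can be sketched as a tiny forward-chaining rule engine. The rules and facts below are illustrative stand-ins for expert knowledge, not taken from any real system such as XCON.

```python
# Toy sketch of how an expert system applies "if-then" rules derived from
# expert knowledge (forward chaining): rules fire whenever their conditions
# hold, adding new facts, until no rule produces anything new.

rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def infer(facts, rules):
    """Derive all conclusions reachable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fur", "gives_milk", "eats_meat", "has_stripes"}, rules)
print("tiger" in derived)  # → True
```

Real 1980s expert systems worked at a vastly larger scale, with thousands of rules, but the core loop of matching conditions against a growing fact base is the same.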

As a result of this success, corporations started to develop these programs and the AI industry started to grow again, even though the boom did not last long. During the late 80s and early 90s came the second AI winter. Among other challenges, the industry had to face the rise of desktop computers, which became more powerful and affordable than specialized AI machines.

Nevertheless, after this second AI winter and despite numerous ups and downs, the adoption of Artificial Intelligence has grown steadily over the last few decades, mainly thanks to improvements in costs, training times and computing capabilities.

These are some of the achievements from the late 90s to now:

  • IBM’s Deep Blue chess-playing expert system beat world chess champion Garry Kasparov in 1997.
  • NASA designed the Nomad rover to traverse the Atacama Desert of Northern Chile in order to test technologies critical to planetary exploration in 1997.
  • NASA’s autonomous rovers Spirit and Opportunity landed on Mars in 2004 and navigated its surface for six and fourteen years respectively.
  • Apple launched its virtual assistant Siri in 2011.
  • DeepMind’s AlphaGo program beat world champion Go player Lee Sedol in a five-game match in 2016.
  • OpenAI launched GPT-3 (Generative Pre-trained Transformer 3), an autoregressive language model that uses deep learning to produce human-like text, in 2020.
  • OpenAI launched the prototype chatbot ChatGPT (Chat Generative Pre-trained Transformer) in November 2022.

To conclude, it is worth mentioning that as AI enters more and more everyday activities, Artificial Intelligence ethics and regulations are also becoming increasingly important. Organizations and institutions must work on building trust and defining responsible AI practices.

