Everything you need to know about AI

Artificial intelligence is the buzzword of 2023. Learn all about the technology, and how it could alter the future of data.

Blue AI brain with binary digits background

Unless you have been living under a rock, a buzzword you are sure to have come across is Artificial Intelligence, or AI. It is a technology shrouded in controversy, yet constantly making headlines for its capabilities.

So what exactly is Artificial Intelligence? And what's making it the Number 1 topic of discussion across technological circles?

Here is a comprehensive guide to this phenomenon that will help you understand and navigate it better.

Use of AI in different industries

Demystifying AI

By 2030, AI is projected to add 15.7 trillion dollars to the world's GDP, boosting it by 14 percent. Numbers that large are bound to make you curious: what's all the hype about artificial intelligence?

Let's examine the history, advantages and scope of Artificial Intelligence in the years to come.

Artificial intelligence can be defined as a branch of computer science that deals with building intelligent machines that can perform tasks generally requiring human intelligence. In other words, it acts as an ally, allowing machines to complement the capabilities of the human mind.

But it would be unfair to limit the scope of AI to this definition. That's because it is an interdisciplinary science with several approaches and multiple applications. Plus, advancements in machine learning and deep learning are adding new use cases to AI as we speak.

Each one of us has used AI in some form or the other. From self-driving cars to bots and smart assistants like ChatGPT, Siri and Alexa, most technology we rely upon today uses AI. In fact, nearly 77 percent of devices today use AI technology in one form or another, according to Simplilearn.

Is Artificial Intelligence synonymous with Machine Learning and Deep Learning?

You may often have heard the terms Artificial Intelligence, Machine Learning and Deep Learning used interchangeably. Although AI, ML and DL are all related, AI is the larger umbrella: the overall effort to create intelligent machines capable of emulating human cognition.

AI imitating the human brain

On the other hand, machine learning is a subset of AI that allows machines to analyze data, recognize patterns and keep improving without being explicitly programmed. Machine learning draws on massive pools of structured and semi-structured data to help the machine generate accurate results and predictions.

Finally, deep learning, a subset of machine learning built on layered neural networks, automates much of the feature-extraction process, eliminating some of the manual human intervention. This also allows the use of much larger data sets.
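To make "learning from data without being explicitly programmed" concrete, here is a minimal sketch in Python. The example data and the learned rule (y = 2x) are entirely made up for illustration; the point is that no rule is programmed in, only a way for the machine to reduce its own error:

```python
# A machine "learns" the relationship y = 2x from examples alone.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, correct output) pairs

w = 0.0              # the machine's current guess at the pattern
learning_rate = 0.01

for _ in range(1000):                     # review the examples many times
    for x, y in examples:
        prediction = w * x
        error = prediction - y            # how wrong was the guess?
        w -= learning_rate * error * x    # nudge the guess to shrink the error

print(round(w, 2))  # prints 2.0 -- the pattern was learned, not programmed
```

This is the simplest possible instance of the idea described above: the same loop, scaled up to millions of adjustable numbers instead of one, is essentially how modern machine learning systems are trained.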

The brief history of AI

How did we get here? And where did we begin? AI has been in development for a long time and has undergone continuous upgrades to reach its current scope. Let's take a look at its fascinating journey.

AI connecting with Humans

When was AI invented?

The earliest successful AI program was written in 1951 by Christopher Strachey at the University of Manchester, England. Strachey, a prolific scientist who went on to direct the Programming Research Group at the University of Oxford, ran his program on the Ferranti Mark I computer. Within a year, the program could play a complete game of checkers.

Next in line was Shopper, written by Cambridge researcher Anthony Oettinger in 1952. It ran on the EDSAC computer and simulated a shopping mall. When asked to purchase an item, it would search for it systematically, exploring shops at random until the item was found. Better still, every time Shopper made a successful purchase, it stored the item's location in its memory, making repeat purchases of the same item much faster.

Our journey now moves to the United States, where Arthur Samuel wrote a checkers program for the prototype of the IBM 701. It built upon Strachey's checkers model and added the ability to learn from experience.

It was in 1956 that researcher John McCarthy coined the term 'artificial intelligence'. Since then, AI has been in continuous development, and efforts have been made to integrate it into everyday workflows. For instance, in 1969, Shakey became the first general-purpose mobile robot ever built.

Understanding AI

At the level of a regular user, artificially intelligent systems are capable of performing tasks generally associated with basic human cognition. These include interpreting speech, recognizing patterns, creating artwork and playing games.

But are machines supposed to do that?

Typically, no. But AI-powered machines are quick learners. They process massive amounts of data and identify patterns in it. As they add more and more experience to their 'consciousness', they get better at modeling their own decision-making. For instance, an AI can play a video game repeatedly until it can anticipate every possible line of play and beat the best human players.
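That trial-and-error style of learning can be sketched in a few lines of Python. This is a toy "game" invented purely for illustration: three possible moves with hidden average payoffs that the program does not know, and discovers only by playing thousands of rounds:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Toy game: three moves with hidden win rates the AI must discover by playing.
hidden_payoffs = [0.2, 0.8, 0.5]

estimates = [0.0, 0.0, 0.0]   # the AI's learned value of each move
plays = [0, 0, 0]

for _ in range(5000):
    if random.random() < 0.1:                  # occasionally try a random move
        move = random.randrange(3)
    else:                                      # otherwise play the best known move
        move = estimates.index(max(estimates))
    reward = 1 if random.random() < hidden_payoffs[move] else 0
    plays[move] += 1
    # update the running average of rewards observed for this move
    estimates[move] += (reward - estimates[move]) / plays[move]

best = estimates.index(max(estimates))
print(best)  # after enough play, the AI settles on move 1, the best one
```

Real game-playing AIs are vastly more sophisticated, but the core loop is the same: play, observe the outcome, and shift future decisions toward what worked.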

Man playing chess with an AI powered robot

Are all AIs equally intelligent?

Believe it or not, AIs are classified as strong or weak. Let's take a look at what makes some AIs more capable than others.

Strong AI

Strong AI, or artificial general intelligence, is intelligent enough to analyze and solve problems it has never been programmed to work on. You can throw it the most complex, arbitrary question, and it will present an answer. Every time.

Seems scary, right? Breathe easy, for this kind of AI is only a theory. It does not exist (yet) and is merely fodder for killer-robot fiction like Westworld, I, Robot and Star Trek: The Next Generation.

Weak AI

Weak AI, also called narrow AI, is specifically created to perform tasks within a limited context. Hence, the element of independent decision making is omitted.

Weak AI powered machines operate under far more constraints, but can perform certain tasks extremely well.

We’ve all used weak AI. Some examples include:

  1. Siri, Google Assistant and Alexa
  2. Self-driving cars
  3. Chatbots
  4. Spotify and Netflix recommendations

Types of AI

Now that we have classified AIs by capability, let's explore the broad categories they can be placed under.

Artificial intelligence can be organized in many different ways, based on stages of development or utility.

Broadly, there are four stages of AI development that are commonly recognized.

  1. Reactive machines
  2. Limited memory
  3. Theory of mind
  4. Self-aware

  • Reactive machines

Weak AI powered machines fall under this category. They only react to available stimuli and generate responses according to preprogrammed rules. For example, IBM's Deep Blue, the famous AI that beat chess legend Garry Kasparov in 1997, was a reactive machine.

  • Limited memory

This is the most common form of AI today. These machines can take in data and continue learning and improving, typically through an artificial neural network.

  • Theory of mind

This is where the dreaded strong AI would begin. Such an AI would emulate human consciousness and make nuanced decisions the way we do, including recognizing and remembering emotions and mimicking reactions in social situations. Like strong AI itself, this stage remains theoretical.

  • Self-aware

And finally, the stuff of sci-fi films: a mythical machine that is aware of its own existence (and, if films are to be believed, is really angry about it!). This form of AI does not exist either, and hopefully never will.

How is AI changing society?

Human and robot fingers touching

As AI grows more capable, it is increasingly recognized as key to helping society advance through the digital revolution. AI systems can perceive their surroundings, assess their observations, resolve challenges and make decisions.

If used responsibly, AI can drastically reduce human effort and provide accurate results within shorter spans of time. Mundane tasks can be transferred to these machines, which can replicate the results without becoming bored or careless.

Benefits of AI

AI can automate workflows and processes or work independently and autonomously from a human team. Some of its major benefits include:

  1. Reduced human error
  2. Performing repetitive tasks
  3. Adding intelligence to existing products
  4. Reduced risk
  5. Infinite availability
  6. Objective decision making

1. Reduced human error

A major advantage of artificial intelligence is that it can drastically reduce errors and boost precision. Because an AI-powered machine relies on prior data and algorithms to make decisions, correct programming can cut these errors dramatically.

From data processing to assembly lines, AI automation follows the same process every single time, offsetting manual errors.

2. Performing repetitive tasks

AI can help free human capital by taking care of mundane tasks. Humans can instead focus on higher impact problems without having to manually verify documents or transcribe phone calls.

3. Adding intelligence to existing products

AI capabilities have greatly improved existing products. Conversational platforms and chatbots like ChatGPT and Bard, along with smart devices, can handle large amounts of data to upgrade the home and the workplace.

4. Reduced risk

AI can find patterns and relationships in data much faster than a human can. Hence, the results it produces tend to be more accurate and less risky than manual reports.

5. Infinite availability

It's plain and simple: humans get tired. Some studies suggest we can be truly productive for only about three to four hours a day, and we need vacations and time off to prevent burnout.

Not with AI. It can work endlessly, and even multitask, making it the perfect solution to handle tedious or repetitive jobs.

6. Objective decision making

Humans are inherently driven by biases and emotions, which seep into the workplace and affect our decision-making. AI, by contrast, relies solely on its programming and data, so the decisions it makes are far less subjective and highly practical.

Industries using AI

Use of AI in the healthcare industry

Several industries have recognized the potential of AI and integrated it into their workflows. Here are some applications of AI in major industries:

1. Health Care

Technology giants like IBM and Microsoft have made significant strides in the healthcare sector. AI is being applied for an entire gamut of healthcare services, including personalized medicine, preoperative preparation, medical imaging and data mining for identifying patterns. For example, IBM Watson is capable of deriving the context of a set of data and suggesting a holistic treatment plan (plus, it has also won the US quiz show Jeopardy!).

2. Retail

The retail and FMCG industry has benefited immensely from AI, which powers virtual shopping experiences and guides shoppers through personalized recommendations. Warehousing, stock management and labor management are also increasingly handled by AI.

3. Manufacturing

In manufacturing, AI can be applied to improve failure prediction and aid in maintenance planning. This drastically reduces the maintenance costs for production lines. There is also the scope for more accurate demand forecasting and minimal material waste using AI for analytics.

4. Banking

AI has greatly improved cybersecurity and strengthened banking services by helping identify fraudulent transactions, speed up credit scoring, and automate manually intensive data-management tasks.

What's making headlines?

Since the potential of AI became known and more organizations started jumping on the bandwagon, we’re seeing some incredible developments in AI. Let's take a look at some of the most exciting ones:


ChatGPT

ChatGPT is an artificial intelligence chatbot developed by OpenAI that has taken the world by storm. Although the core goal of a chatbot is to mimic human conversation, ChatGPT goes well beyond that. It can write and debug code, play games like tic-tac-toe, compose music, answer test questions, write blogs, tweets and poetry, and even simulate an entire chat room.

Midjourney

Midjourney is an artificial intelligence program created by Midjourney, Inc., a San Francisco-based independent research lab. It generates images from natural-language descriptions, called "prompts." The results are impressive, and everyone from The Economist to presenters like John Oliver has used it to create front covers, comics and logos.


Copy.ai

Copy AI is built on the GPT-3 platform and uses machine learning and neural networks to produce long-form content from text prompts. Leveraging GPT-3's more than 175 billion parameters, Copy AI produces copy that comes remarkably close to human writing.

What's in store for the future of AI?

Artificial intelligence has faced several waves of optimism, disappointment and resurgence. It has survived periods of lost funding, infamously known as "AI winters," and seen new approaches emerge on the way to its current status.

AI has always been shadowed by the fear that if machines keep evolving and learning, they may one day replace human workers and make their roles redundant.

Considering AI's ease of use and its falling cost once deployed at scale, this is a legitimate concern. However, as companies build AI-literate cultures, they will recognize that it makes sense to work alongside machines. AI-powered assistants can provide smart, cognitive support to help make informed predictions and arrive at critical decisions.

Final Words

Will an AI kill us all?

Probably not.

But machine learning experts have made some pretty uncomfortable predictions. According to one survey of researchers, AI may match human-written essays by 2026, render truck drivers redundant by 2027, write a best-seller by 2049, and replace surgeons by 2053.

But on the flip side, there is no denying AI's immense contribution to making our daily lives easier.

So, while the predictions seem scary, history suggests that, as with every technological revolution, new jobs will be created to replace those lost. Either way, AI is here to stay.