Artificial Intelligence Explained


The term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem-solving”.

Artificial Intelligence is a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.

Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.

Artificial Intelligence can be classified in any number of ways:

  • There are two main categories or Levels of A.I.:
    (Weak and Strong A.I.)
  • There are four Types of A.I. based on Functionalities:
    (Reactive Machines, Limited Memory, Theory of Mind, and Self-Awareness).
  • There is an ever-growing number of fields or Subsets of A.I. such as Natural Language Processing, Neural Networks, Machine learning, Robotics, and others.

“Weak Artificial Intelligence” or “Artificial Narrow Intelligence” is focused on One Narrow Task

Artificial Narrow Intelligence (ANI), also known as “Weak” AI, is the AI that exists in our world today.

Narrow AI is AI that is programmed to perform a single task; whether it’s checking the weather, being able to play chess, or analyzing raw data to write journalistic reports.

ANI systems can attend to a task in real-time, but they pull information from a specific data-set. As a result, these systems don’t perform outside of the single task that they are designed to perform.

Narrow AI is not conscious, sentient, or driven by emotion the way that humans are. Narrow AI operates within a pre-determined, pre-defined range, even if it appears to be much more sophisticated than that.

Every sort of machine intelligence that surrounds us today is Narrow AI. Google Assistant, Google Translate, Siri and other natural language processing tools are examples of Narrow AI. Some might assume that these tools aren’t “weak” because of their ability to interact with us and process human language, but the reason that we call it “Weak” AI is because these machines are nowhere close to having human-like intelligence. They lack self-awareness, consciousness, and genuine intelligence to match human intelligence; they can’t think for themselves.

When we converse with Google Assistant, it isn’t a conscious machine responding to our queries. Instead, Google Assistant is able to process human language, enter it into a search engine (Google), and return to us with results.

This explains why when we pose abstract questions about things like the meaning of life or how to approach a personal problem to Google Assistant, we get vague responses that often don’t make sense, or we get links to existing articles from the Internet that address these questions.

On the other hand, when we ask Google Assistant what the weather is like outside, we get an accurate response. That’s because answering basic questions about the weather is within the range of intelligence that Google Assistant is designed to operate in.

As humans, we have the capacity to assess our surroundings, to be sentient creatures, and to have emotionally-driven responses to situations. The AI that exists around us doesn’t have the fluidity or flexibility to think as we do. Even something as complex as a self-driving car is considered Weak AI, although it is really a collection of multiple ANI systems working together.

Benefits of Narrow AI

Narrow AI by itself is a great feat in human innovation and intelligence. ANI systems are able to process data and complete tasks at a significantly quicker pace than any human being can, which has enabled us to improve our overall productivity, efficiency, and quality of life.

ANI systems like IBM’s Watson are able to harness the power of AI to assist doctors in making data-driven decisions, making healthcare better, quicker, and safer.

Additionally, Narrow AI has relieved us of many of the boring, routine, mundane tasks that we don’t want to do, from increasing efficiency in our personal lives, like having Google Assistant order a pizza, to sifting through mounds of data and analyzing it to produce results.

Narrow AI is making our lives significantly better

With the advent of advanced technologies like self-driving cars, ANI systems will also relieve us of frustrating realities like being stuck in traffic, which will inevitably provide us with more leisure time. ANI systems are important building blocks of the more intelligent AI that will be the future of Artificial Intelligence.

“Artificial General Intelligence” (AGI) or “Strong AI” refers to machines that exhibit human intelligence

Artificial General Intelligence or “Strong” AI refers to machines that exhibit human intelligence. In other words, AGI can successfully perform any intellectual task that a human can.

This is AI that we see in movies like “Her” or other sci-fi movies in which humans interact with machines and operating systems that are conscious, sentient, and driven by emotion and self-awareness.

Currently, machines are able to process data faster than we can. But as humans, we have the ability to think abstractly, strategize, and tap into our thoughts and memories to make informed decisions or come up with creative ideas. This type of intelligence makes us superior to machines, but it’s hard to define because it’s primarily driven by our ability to be sentient creatures. Therefore, it’s something that is very difficult to replicate in machines.

AGI is expected to be able to reason, solve problems, make judgments under uncertainty, plan, learn, integrate prior knowledge in decision-making, and be innovative, imaginative, and creative. For machines to achieve true human-like intelligence, they will need to be capable of experiencing consciousness.

“Artificial Super Intelligence” (ASI)

Artificial Super Intelligence (ASI) will surpass human intelligence in all aspects: creativity, general wisdom, and problem-solving.

Machines will be capable of exhibiting intelligence that we haven’t seen in the brightest human beings. This is the type of AI that worries people, and the type of AI that Elon Musk believes could cause humans to become an endangered species.

There are Four Types of Artificial Intelligence: Reactive Machines, Limited Memory, Theory of Mind, and Self-Awareness

Type I: Reactive Machines

Reactive machines are the most basic type of AI system. This means that they cannot form memories or use past experiences to influence present decisions. These machines can only react to currently existing situations, hence the term “reactive.” An existing form of a reactive machine is Deep Blue, a chess-playing supercomputer developed by IBM in the 1990s.

Deep Blue was created to play chess against a human competitor and defeat them. It was programmed with the ability to identify a chessboard and its pieces and to understand each piece’s function.

Deep Blue could make predictions about what moves it should make and the moves its opponent might make, thus having an enhanced ability to predict, select, and win. Deep Blue lost its first match against Russian chess grandmaster Garry Kasparov in 1996 but won the 1997 rematch 3½ games to 2½, becoming the first computer to defeat a reigning world chess champion in a match played under standard tournament conditions.

Deep Blue’s unique skill of accurately and successfully playing chess matches highlights its reactive abilities. In the same vein, its reactive mind also indicates that it has no concept of past or future; it only comprehends and acts on the presently-existing world and its components. To simplify, reactive machines are programmed for the here and now, but not the before and after.
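
To make the search-based, memoryless style of play concrete, here is a minimal Python sketch of minimax move selection. It is only an illustration of the general technique of scoring hypothetical moves, not IBM’s actual Deep Blue program; the tiny hand-built game tree and its scores are invented for the example.

    # Minimal sketch of reactive, search-based decision making (minimax).
    # The "game" is a hand-built tree of outcome scores, a toy stand-in for the
    # vastly larger search Deep Blue performed over real chess positions.

    def minimax(node, maximizing=True):
        """Return the best achievable score assuming both sides play optimally."""
        if isinstance(node, (int, float)):    # leaf: a scored final position
            return node
        child_scores = [minimax(child, not maximizing) for child in node]
        return max(child_scores) if maximizing else min(child_scores)

    # Each sublist holds the opponent's possible replies to one of our candidate
    # moves; the leaves are evaluations of the resulting positions.
    game_tree = [
        [3, 12],    # outcomes after our first candidate move
        [2, 4],     # outcomes after our second candidate move
        [14, 1],    # outcomes after our third candidate move
    ]

    # Pick the move whose subtree scores best when the opponent replies optimally.
    best = max(range(len(game_tree)), key=lambda i: minimax(game_tree[i], maximizing=False))
    print("best move index:", best, "guaranteed score:", minimax(game_tree[best], False))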

Reactive machines have no concept of the world and therefore cannot function beyond the simple tasks for which they are programmed. A characteristic of reactive machines is that no matter the time or place, these machines will always behave the way they were programmed. There is no growth with reactive machines, only stagnation in recurring actions and behaviors. 

Type II: Limited Memory

Limited Memory machines can retain data for a short period of time. While they can use this data temporarily (for a specified period), they cannot add it to a library of experiences.

Although limited memory builds on observational data in conjunction with pre-programmed data the machines already contain, these sample pieces of information are fleeting. An existing form of limited memory is autonomous vehicles.

Many self-driving cars use Limited Memory technology: they store data such as the recent speed of nearby cars, the distance of such cars, the speed limit, and other similar types of information that can help them navigate roads.
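
As a rough illustration, the Python sketch below keeps only a short rolling window of recent observations, the way a Limited Memory system would, and lets anything older fall away. The observation fields, window size, and following-distance rule are invented for the example and are far simpler than what a real autonomous vehicle uses.

    # Minimal sketch of "limited memory": keep a short, rolling window of recent
    # observations instead of a permanent library of experiences.
    from collections import deque
    from statistics import mean

    RECENT_WINDOW = 20                           # keep roughly the last 20 readings
    recent_speeds = deque(maxlen=RECENT_WINDOW)  # older readings fall out automatically

    def observe(lead_car_speed_mps):
        """Record the latest speed of the car ahead; old data silently expires."""
        recent_speeds.append(lead_car_speed_mps)

    def choose_following_gap(speed_limit_mps):
        """Pick a target gap from the short-term window only; nothing is kept long-term."""
        if not recent_speeds:
            return 30.0                          # conservative default gap in meters
        expected_speed = min(mean(recent_speeds), speed_limit_mps)
        return max(10.0, 2.0 * expected_speed)   # crude "two-second rule" style gap

    for reading in [24.0, 25.5, 23.8, 26.1]:     # lead-car speeds in meters per second
        observe(reading)
    print("target gap (m):", round(choose_following_gap(speed_limit_mps=27.0), 1))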

Not only do autonomous vehicles observe their environment, but they also observe the movement of other vehicles and people in their line of vision. Previously, driverless cars without limited memory AI took as long as 100 seconds to react and make judgments on external factors. Since the introduction of limited memory, reaction time on machine-based observations has dropped sharply, demonstrating the value of limited memory AI.

Type III: Theory of Mind

Theory of Mind researchers hope to build computers that imitate our mental models by forming representations about the world and about other agents and entities in it.

One goal of these researchers is to build computers that relate to humans and perceive human intelligence. While plenty of computers use models, a computer with a ‘mind’ does not yet exist.

Theory of Mind would give machines decision-making ability comparable to that of a human mind.

While there are some machines that currently exhibit humanlike capabilities (voice assistants, for instance), none are fully capable of holding conversations relative to human standards.

One component of human conversation is having emotional capacity or sounding and behaving like a person would in standard conventions of conversation.

This future class of machine ability would include understanding that people have thoughts and emotions that affect behavioral output and thus influence a “Theory of Mind” machine’s thought process.

Social interaction is a key facet of human interaction, so to make theory of mind machines tangible, the AI systems that control the now-hypothetical machines would have to identify, understand, retain, and remember emotional output and behaviors while knowing how to respond to them.

Theory of Mind machines would have to be able to use the information derived from people and incorporate it into their learning so that they know how to communicate with people and handle different situations.

Theory of Mind is a highly advanced form of proposed artificial intelligence that would require machines to thoroughly acknowledge rapid shifts in emotional and behavioral patterns in humans, and also understand that human behavior is fluid; thus, Theory of Mind machines would have to be able to learn rapidly at a moment’s notice.

Some elements of Theory of Mind AI currently exist or have existed in the recent past. Two notable examples are the robots Kismet and Sophia, created in the late 1990s and 2016, respectively.

Kismet, developed by Professor Cynthia Breazeal, was capable of recognizing human facial signals (emotions) and could replicate said emotions with its face, which was structured with human facial features: eyes, lips, ears, eyebrows, and eyelids. 

Sophia, on the other hand, is a humanoid bot created by Hanson Robotics. What distinguishes her from previous robots is her physical likeness to a human being as well as her ability to see (image recognition) and respond to interactions with appropriate facial expressions.

These two humanlike robots are early steps toward the full implementation of Theory of Mind AI systems in the near future.

While neither fully holds the ability to have a complete human conversation with an actual person, both robots have aspects of emotional ability to interact with their human counterparts.

Type IV: Self-Awareness

Self-aware machines are still figments of science fiction, though many AI enthusiasts believe them to be the ultimate goal of AI development.

Self-aware AI involves machines that have human-level consciousness. This form of AI is not currently in existence but would be considered the most advanced form of artificial intelligence known to man. 

Facets of self-aware AI include the ability to not only recognize and replicate human-like actions, but also to think for itself, have desires, and understand its feelings. Self-aware AI is, in essence, an advancement and extension of Theory of Mind AI. Where Theory of Mind focuses only on the comprehension and replication of human practices, self-aware AI takes it a step further by implying that it can and will have self-guided thoughts and reactions.

Even if a machine could operate as a person does, preserving itself, predicting its own needs and demands, and relating to others as an equal, the question of whether a machine can become truly self-aware, or ‘conscious’, remains one for philosophers to debate.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”. This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction, and philosophy since antiquity. Some people also consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.

Many Futurists believe that we will become one with Machines

According to futurist Ray Kurzweil, if the technological singularity happens, then there won’t be a machine takeover. Instead, we’ll be able to co-exist with AI in a world where machines reinforce human abilities.

Kurzweil predicts that by 2045, we will be able to multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud. This will essentially cause a melding of humans and machines. Not only will we be able to connect with machines via the cloud, but we’ll also be able to connect to another person’s neocortex. This could enhance the overall human experience and allow us to discover various unexplored aspects of humanity.

Dr. Ben Goertzel, the founder and CEO of SingularityNET, a blockchain-based AI marketplace, is one of the premier pioneers in the A.I. field.

AI is incorporated into a variety of different types of technology.

Here are some examples.

  • Autonomous Vehicles
    This area of AI has gathered a lot of attention. The list of vehicles includes cars, buses, trucks, trains, ships, submarines, and autonomous flying drones. These use a combination of computer vision, image recognition, and deep learning to build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as pedestrians.
  • Evolutionary Computations
    Evolutionary algorithms are inspired by biological evolution and use mechanisms that imitate the evolutionary concepts of reproduction, mutation, recombination, and selection. Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings. Examples include genetic algorithms and genetic programming; a minimal genetic-algorithm sketch appears after this list.
  • Expert Systems
    An Expert System is a program that is designed to solve problems that typically require human expertise or experience. By mimicking the thinking of human experts, the system can perform monitoring, analysis, and design, and make decisions based on this data. Expert Systems can also stay abreast of modern inventions, new studies, and discoveries related to the job. An Expert System can be of great help by offering knowledge of similar cases and serving as a self-check tool.
  • Neural Networks
    Neural Networks are inspired by the human brain and mimic the way it processes information. They are based on a collection of connected units or nodes called artificial neurons, or perceptrons. The objective of this approach is to solve problems in the same way that a human brain does. Examples include brain modeling, time series prediction, and classification; a single-neuron sketch appears after this list.
  • Machine Learning
    Machine Learning is the science of getting a computer to act without being explicitly programmed: the capability of Artificial Intelligence systems to learn by extracting patterns from data. It is an approach, or subset, of Artificial Intelligence based on the idea that machines can be given access to data along with the ability to learn from it.

    Machine Learning is a method where the target (goal) is defined and the steps to reach that target are learned by the machine itself through training (gaining experience).

    For example, to identify a simple object such as an apple or an orange, the target is achieved not by explicitly specifying the object’s details and coding them, but through a process similar to that of a child who learns from seeing different pictures of the object. This allows the machine to define for itself the steps needed to identify the apple or the orange; a small supervised-learning sketch along these lines appears after this list.

    Deep learning is a subset of machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. 

    There are three types of machine learning algorithms:
  • Supervised learning: Data sets are labeled so that patterns can be detected and used to label new data sets
  • Unsupervised learning: Data sets aren’t labeled and are sorted according to similarities or differences
  • Reinforcement learning: Data sets aren’t labeled but, after performing an action or several actions, the AI system is given feedback
  • Machine Vision
    The science of allowing computers to see. This technology captures and analyzes visual information using a camera, analog-to-digital conversion, and digital signal processing. It is often compared to human eyesight, but machine vision isn’t bound by biology and can be programmed to even see through walls. It is used in a range of applications, from signature identification to medical image analysis.
  • Natural Language Processing (NLP)
    In the field of natural language processing, we mainly focus on the interactions between human language and computers. NLP is a way for computers to analyze, understand, and derive meaning from human language in a smart and useful way. One of the older and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides whether it’s junk. By utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation. Current approaches to NLP are based on machine learning; a toy spam-filter sketch appears after this list.
  • Robotics
    It is a field of engineering focused on the design and manufacturing of robots. Robots are often used to perform tasks that are difficult for humans to perform or to perform consistently. Examples include car assembly lines, work in hospitals, office cleaning, serving and preparing food in hotels, patrolling farm areas, even serving as police officers, and moving large objects in space for NASA. Robots are artificial agents that behave like humans and are built to manipulate objects by perceiving, picking, moving, and modifying their physical properties, thereby freeing people from repetitive work, which robots can carry out without getting bored, distracted, or exhausted. Robotics is not only a part of Computer Science; Mechanical and Electrical Engineering also play a big role:

    a) AI robots have a mechanical construction and form designed to accomplish a particular task; this is the domain of Mechanical Engineering.

    b) Robots have electrical components that power and control the machinery; this is the domain of Electrical Engineering.

    c) Robots also contain some level of computer programming, which determines what, when, and how a robot does something; this is where Computer Science comes in.
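
As promised under “Evolutionary Computations”, here is a minimal genetic-algorithm sketch in Python. It evolves bitstrings toward all ones (the classic “OneMax” toy problem); the population size, mutation rate, and fitness function are arbitrary choices made for the example, not a production setup.

    # Minimal genetic algorithm: selection, crossover (recombination), and mutation
    # evolve a population of bitstrings toward the all-ones string ("OneMax").
    import random

    GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

    def fitness(individual):
        return sum(individual)                  # number of 1 bits: higher is better

    def mutate(individual):
        return [1 - g if random.random() < MUTATION_RATE else g for g in individual]

    def crossover(a, b):
        cut = random.randrange(1, GENES)        # single-point recombination
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half of the population becomes the parent pool.
        parents = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print("best fitness found:", fitness(max(population, key=fitness)), "out of", GENES)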
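
For the “Neural Networks” item, the sketch below trains a single artificial neuron (a perceptron) to reproduce the logical AND function from examples. Real neural networks stack many such units and train them with backpropagation; the learning rate and epoch count here are arbitrary illustrative values.

    # Minimal single-neuron (perceptron) sketch: learn logical AND from examples.
    def step(x):
        return 1 if x >= 0 else 0

    # Training data: input pairs and the desired AND output.
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                         # a few passes over the data suffice
        for (x1, x2), target in samples:
            output = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - output             # perceptron learning rule
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    for (x1, x2), target in samples:
        print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias), "expected", target)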
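
For the apple-versus-orange example under “Machine Learning”, here is a small supervised-learning sketch: a 1-nearest-neighbour rule that labels a new fruit by copying the label of the most similar training example. The features (weight in grams and a made-up surface-roughness score) and all the numbers are invented for illustration.

    # Minimal supervised learning: classify fruit from labelled examples rather
    # than hand-coded rules, using a 1-nearest-neighbour decision.

    training_data = [
        ((140, 1.0), "apple"),
        ((130, 1.2), "apple"),
        ((150, 8.5), "orange"),
        ((170, 9.0), "orange"),
    ]

    def distance(a, b):
        """Plain Euclidean distance between two feature tuples."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def predict(features):
        """Label a new fruit by copying the label of its closest training example."""
        _, label = min(training_data, key=lambda example: distance(example[0], features))
        return label

    print(predict((145, 8.0)))   # expected: "orange" (heavier, rough skin)
    print(predict((135, 1.1)))   # expected: "apple"  (lighter, smooth skin)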
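
Finally, for the “Natural Language Processing” item, this toy spam filter uses a tiny Naive Bayes classifier trained on a handful of made-up subject lines. Real spam filters learn from far larger corpora and richer features; this only shows the overall shape of the approach.

    # Minimal NLP sketch: Naive Bayes spam detection over email subject lines.
    from collections import Counter
    import math

    training = [
        ("win a free prize now", "spam"),
        ("limited offer claim your prize", "spam"),
        ("meeting agenda for tomorrow", "ham"),
        ("lunch plans this friday", "ham"),
    ]

    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in training:
        label_counts[label] += 1
        word_counts[label].update(text.split())

    vocab = {w for counts in word_counts.values() for w in counts}

    def classify(subject_line):
        """Pick the label with the higher log-probability under a bag-of-words model."""
        scores = {}
        for label in word_counts:
            score = math.log(label_counts[label] / sum(label_counts.values()))
            total = sum(word_counts[label].values())
            for word in subject_line.split():
                # Laplace smoothing so unseen words don't zero out the probability.
                score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

    print(classify("claim your free prize"))        # expected: spam
    print(classify("agenda for friday meeting"))    # expected: ham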

Implementation of Artificial Intelligence can be found in almost all sectors of society

  • AI in healthcare
    The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and is capable of responding to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include chatbots (computer programs used online to answer questions, assist customers, help schedule follow-up appointments, or guide patients through the billing process) and virtual health assistants that provide basic medical feedback.
  • AI in business
    Robotic process automation is being applied to highly repetitive tasks normally performed by humans. Machine learning algorithms are being integrated into analytics and CRM platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.
  • AI in education
    AI can automate grading, giving educators more time. AI can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. AI could also change where and how students learn, perhaps even replacing some teachers.
  • AI in finance
    AI in personal finance applications, such as Mint or TurboTax, is disrupting financial institutions. Applications like these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, software performs much of the trading on Wall Street.
  • AI in law
    The discovery process, sifting through documents, is often overwhelming for humans in law. Automating this process is a more efficient use of time. Startups are also building question-and-answer computer assistants that can parse questions and answer them by examining the taxonomy and ontology associated with a database.
  • AI in manufacturing
    This is an area that has been at the forefront of incorporating robots into the workflow. Industrial robots used to perform single tasks and were separated from human workers, but as technology advanced that changed.

Security and Ethical Concerns 

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called “deepfakes”, convincingly fabricated videos of public figures saying or doing things that never took place.

Regulation of AI technology

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe’s GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

Mollify is a software development firm. With the use of analysis, algorithms, software, and Artificial Intelligence, Mollify helps companies enhance their digital business.

