The Dezeen guide to AI


Dezeen’s new editorial series, AItopia, is all about artificial intelligence. In this guide, we explain the key terms in the field and why they matter.

Artificial intelligence (AI) is usually summarised as computers performing tasks that would otherwise require a human brain, such as playing chess, recognising faces, driving a car or reading a piece of text and picking out the important information.

It is an area filled with jargon, so Dezeen has produced this glossary of some of the most common and important terms. All of the definitions were written by a human.


Machine learning

Machine learning is when computers use experience to improve their performance. It underpins most of the advanced capabilities of AI systems.

Rather than a human programming the computer with specific step-by-step instructions for completing a task, in machine learning the AI is given data and an algorithm and asked to work out for itself how to achieve a certain outcome.

Through a process of ultra-fast trial and error, the AI can very quickly start to spot patterns – including those that humans may not be able to identify – and use them to make predictions about what is likely to help it achieve the desired outcome.

For example, an AI may be taught the rules and objectives of a game and be left to try millions of moves and work out what is most effective. In a short period of time, the system would transition from making random moves to mastering the game.
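
To make that trial-and-error process concrete, the short Python sketch below is a minimal, illustrative example of a technique called reinforcement learning. It uses a made-up "walk to the goal" game in which the system starts out moving largely at random and, by updating its estimates after every move, quickly learns that stepping right is always the best move. The game, rewards and numbers are invented for illustration rather than taken from any real AI system.

```python
import random

# A toy game: the player starts on square 0 of a five-square track
# and wins a reward of 1 by reaching square 4. Actions: 0 = step left, 1 = step right.
N_SQUARES = 5
ACTIONS = [0, 1]

# The Q-table stores the AI's current estimate of how good each action is on each square.
q_table = [[0.0, 0.0] for _ in range(N_SQUARES)]

def step(square, action):
    """Apply an action and return the new square, the reward and whether the game ended."""
    new_square = max(0, min(N_SQUARES - 1, square + (1 if action == 1 else -1)))
    done = new_square == N_SQUARES - 1
    return new_square, (1.0 if done else 0.0), done

# Trial and error: play the game many times, nudging the estimates after every move.
for episode in range(500):
    square = 0
    while True:
        # Usually pick the move that currently looks best; 30% of the time explore at random.
        if random.random() < 0.3:
            action = random.choice(ACTIONS)
        else:
            action = q_table[square].index(max(q_table[square]))
        new_square, reward, done = step(square, action)
        # Update the estimate for (square, action) towards the reward plus future value.
        best_next = max(q_table[new_square])
        q_table[square][action] += 0.5 * (reward + 0.9 * best_next - q_table[square][action])
        square = new_square
        if done:
            break

# After training, the learned strategy should simply be "step right" on every square.
print([q.index(max(q)) for q in q_table[:-1]])  # expected: [1, 1, 1, 1]
```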


Deep learning

Deep learning is a specific type of machine learning used in the most powerful AI systems. It imitates how the human brain works using artificial neural networks (explained below), allowing the AI to learn highly complex patterns in data.

While conventional machine-learning systems improve at the specific tasks they have been trained on, deep learning allows computers to discover complex patterns in raw data for themselves, helping them generalise to situations and tasks they were never explicitly trained for.

As a result, it opens the door for machines capable of performing many different tasks significantly better than humans.

Deep learning came to prominence between 2010 and 2015, notably through DeepMind, a company founded in London by UCL researchers Demis Hassabis and Shane Legg and acquired by Google in 2014, which used it to build systems that mastered video games and the board game Go.


Neural networks

Neural networks are found in the human brain: webs of interconnected neurons that exchange information, strengthening their connections as they do so and enabling us to learn.

Advanced AI systems use artificial neural networks that mimic these structures, processing data through layers of interconnected artificial neurons to become better at making predictions.
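
As a rough illustration, the Python sketch below (using the NumPy library) trains a tiny artificial neural network with a single hidden layer to reproduce the XOR logic function, repeatedly adjusting the strength of its connections to reduce its errors. It is a toy example of the structure described above, not how any production AI system is actually built.

```python
import numpy as np

# Four examples of the XOR logic problem and the answers the network should learn.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input layer -> 4 hidden neurons
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden neurons -> 1 output neuron

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(20_000):
    # Forward pass: data flows through the layers of artificial neurons.
    hidden = sigmoid(inputs @ w1 + b1)
    output = sigmoid(hidden @ w2 + b2)

    # Backward pass: measure the error and nudge every connection to shrink it.
    grad_output = (output - targets) * output * (1 - output)
    grad_hidden = (grad_output @ w2.T) * hidden * (1 - hidden)
    w2 -= 0.5 * hidden.T @ grad_output
    b2 -= 0.5 * grad_output.sum(axis=0)
    w1 -= 0.5 * inputs.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(np.round(output, 2))  # should end up close to [[0], [1], [1], [0]]
```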


Narrow AI

Sometimes called weak AI, narrow AI refers to AI systems that are only able to complete specific tasks, such as autonomous driving or image recognition. They may perform these tasks much better than humans, but cannot apply their intelligence to different problems and situations.

All AI systems currently in existence are narrow AI.


Artificial general intelligence

The definition of artificial general intelligence (AGI) is a matter of debate among experts, but at the most basic level it typically refers to a computer being able to perform any intellectual task that a human can.

Such a computer would likely be able to perform these tasks much faster and better than a human would be able to, meaning that the emergence of AGI could have enormous implications for society. Ian Hogarth, co-author of the annual State of AI Report, calls it “God-like AI”: “A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it.”

It is often what people are talking about when they discuss the dangers of AI, worrying that AGI could lead to humans becoming obsolete or even extinct. A much-publicised open letter issued in March that called for a moratorium on developing increasingly powerful AI systems warned about “non-human minds that might eventually outnumber, outsmart, obsolete and replace us” and “loss of control of our civilization”.

Some experts argue that the warnings are overblown, and there is debate about whether the hypothetical existential risks posed by AGI should be a primary concern, as well as how powerful such a system would be when it arrives. The companies trying to develop AGI, including Google DeepMind and OpenAI, claim that it could help solve the world’s biggest problems such as disease, climate change and poverty, and investment in the field has rocketed in recent years.

Until recently, most people believed that AGI was a long way off. However, in March OpenAI launched the powerful chatbot GPT-4, which has proven itself capable of human-level performance at a large number of reasoning and knowledge tasks, passing the bar exam, beating humans at various games, using reasoning to improve and even deceiving people. A paper by researchers from Microsoft, which is a major investor in OpenAI, concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an AGI system”.

Estimates of how long it will be before a true AGI system emerges range from five to 50 years or more.


Superintelligence

In his influential 2014 book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom defined superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. Some researchers predict that superintelligence would emerge shortly after the advent of AGI, but others are doubtful it will ever become a reality.

If a superintelligent AGI were to be created, there are concerns over whether humans would be able to control it. Bostrom and University of California, Berkeley professor Stuart Russell have warned that even giving a superintelligent AGI an ostensibly sensible, straightforward task could have unexpected and devastating results.

“If we put the wrong objective into a superintelligent machine, we create a conflict that we are bound to lose,” Russell said in a 2021 lecture. “The machine stops at nothing to achieve the specified objective.”

The classic example used to explain the difficulty of programming a superintelligent machine without human values is Bostrom’s “paperclip maximiser” thought experiment. A superintelligent AI tasked with producing paperclips uses its abilities of self-improvement to become extremely efficient at making them, flooding the world with paperclips using anything it can find, including the atoms in our bodies. If humans tried to turn off the machine, it would probably find a way to stop us.


Alignment

In the context of AI, alignment refers to attempts to make sure that systems have goals that match human values, in order to reduce the risk that they could harm us. Currently, it often involves ensuring that AI chatbots do not produce harmful content.

However, creating computers with truly human ethics is very difficult and there is not yet scientific consensus on how this would work, even in theory. Alignment has received much less funding than research focused on making AI systems more powerful, and is advancing more slowly as a field.


Singularity

A term borrowed from mathematics, the singularity is a hypothetical future moment at which technological growth becomes uncontrollable and irreversible.

Some consider the emergence of an AGI more intelligent than humans to be the most likely moment of singularity. A popular interpretation of the theory holds that after this point, technology would enter a period of rapid, exponential advancements difficult to comprehend, with humans usurped as the dominant beings on Earth.


Generative AI

Generative AI systems are those that can create different types of content, including images, text, videos, music, voice audio and code. They are trained on vast quantities of data and, through machine learning, learn to extrapolate from it to produce new material.

Examples include text-to-image generators such as DALL-E 2, Midjourney and Stable Diffusion, in which users input a text prompt and the model quickly produces a corresponding image. Chatbots such as OpenAI’s ChatGPT and Google’s Bard are also forms of generative AI.
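
As a simple illustration of how such tools can be used from code, the sketch below shows roughly what a text-to-image request looks like using OpenAI's Python library. Model names, parameters and responses vary between providers and library versions, so treat it as indicative rather than definitive.

```python
# A rough sketch of calling a text-to-image model from code.
# Assumes the `openai` Python package is installed and an API key is set in the
# OPENAI_API_KEY environment variable; details differ between providers and versions.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",                      # the text-to-image model to use
    prompt="A curving timber pavilion in a misty park, photorealistic",
    n=1,                                   # how many images to generate
    size="1024x1024",
)

print(response.data[0].url)  # a link to the generated image
```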

The emergence of these easy-to-use tools over the past two years has dramatically increased interest in generative AI. The technology has the potential to boost productivity in a wide range of industries, including architecture and design. Zaha Hadid Architects principal Patrik Schumacher recently revealed that the firm is using text-to-image generators to come up with early designs for projects.

However, commentators have also expressed concerns about the potential for generative AI to spread misinformation, both inadvertently and maliciously. The most advanced version of Midjourney is capable of producing near-photorealistic but completely fabricated images.

In addition, the technology can be susceptible to biases and stigma embedded within training data. Others have suggested its ability to mimic existing media so convincingly could have major implications for copyright holders.


Large language models

Large language models (LLMs) are AI systems that use deep learning to understand and generate language. An example is OpenAI's Generative Pre-trained Transformer (GPT) family of models, which powers ChatGPT.

LLMs are trained on enormous quantities of text to become very good at recognising language patterns, like a highly advanced form of predictive text. Because they predict plausible words rather than reason from first principles, they can be less reliable at tasks such as solving maths problems.
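
As a toy illustration of the "advanced predictive text" idea, the Python snippet below counts which word tends to follow which in a tiny sample text and then generates a new sentence by repeatedly picking a likely next word. Real LLMs use deep neural networks and vastly more data, but the core task of predicting the next word is the same.

```python
import random
from collections import Counter, defaultdict

# A tiny training corpus; real LLMs are trained on trillions of words.
text = (
    "the architect drew the plan and the client liked the plan "
    "so the architect drew the section and the client liked the section"
).split()

# Count which word follows which (a "bigram" model, the simplest next-word predictor).
next_words = defaultdict(Counter)
for current, following in zip(text, text[1:]):
    next_words[current][following] += 1

# Generate text by repeatedly sampling a likely next word, starting from "the".
random.seed(3)
word, sentence = "the", ["the"]
for _ in range(10):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choices(list(candidates), weights=candidates.values())[0]
    sentence.append(word)

print(" ".join(sentence))
```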


Hallucinations

LLM chatbots are prone to stating falsehoods – whether getting facts wrong or making things up entirely. In the industry, these episodes are called hallucinations, though some argue that this term makes AI systems seem more human-like than they really are.

Their tendency to generate casual and convincing mistruths remains a major shortcoming for LLMs like ChatGPT. “No one in the field has yet solved the hallucination problems,” Google CEO Sundar Pichai recently said in an interview.

The photo is by Michael Dziedzic via Unsplash.


AItopia

Illustration by Selina Yau

This article is part of Dezeen’s AItopia series, which explores the impact of artificial intelligence (AI) on design, architecture and humanity, both now and in the future.
