A Two-Minute Guide To Artificial Intelligence


If you keep hearing about artificial intelligence but aren’t quite sure what it means or how it works, you’re not alone. 

There’s been much confusion among the general public about the term, not helped by dramatic news stories about how “AI” will destroy jobs, or by companies that overstate their ability to “use AI.” 

A lot of that confusion comes from the misuse of terms like AI and machine learning. So here’s a short guide to explain them:  

What’s the difference between AI and machine learning?

Think of it like the difference between economics and accounting. 

Economics is a field of study, but you wouldn’t hire a Nobel Prize-winning economist to do your taxes. Likewise, artificial intelligence is the field of science that covers how computers can make decisions as well as humans do. Machine learning, by contrast, refers to the popular, modern-day technique for creating software that learns from data.   

The difference becomes important when money is at stake. Venture capital investors often dismiss AI as full of hype because they’ve got skin in the game. They prefer startups that make machine-learning software with a clear, commercial application, like a platform that can filter company emails with natural language processing, or track customers in a store with facial recognition (these are real businesses). 

On the other hand, universities and some large tech companies like Facebook and Google have large labs carrying out research that drives the wider field of AI forward. A lot of the tools they invent, like TensorFlow from Google or PyTorch from Facebook, are freely available online.  

Why does the term “learning” (e.g. deep learning) crop up everywhere? 

Because the most exciting application of AI today gives computers the ability to “learn” how to carry out a task from data, without being programmed to do that task. 

The terminology is confusing because this involves a mishmash of different techniques, many of which also have the word “learning” in their names. 

There are, for instance, three core types of machine learning: unsupervised, supervised and reinforcement learning. Each can be carried out in different ways, with techniques such as statistical machine learning, Bayesian machine learning or symbolic machine learning.
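
To make “learning from data” concrete, here is a minimal sketch in Python. It uses scikit-learn, a free machine-learning library (our pick for illustration; it isn’t one of the tools named above), and borrows the email-filtering idea from earlier. All the numbers are invented:

from sklearn.linear_model import LogisticRegression

# Each email is summarized by two numbers: how many hype words
# ("FREE", "WINNER") it contains, and how many exclamation marks.
emails = [[0, 0], [1, 0], [0, 1], [5, 4], [7, 2], [6, 6]]
labels = [0, 0, 0, 1, 1, 1]   # 0 = normal, 1 = spam

# The program is never told a rule like "lots of hype words means
# spam". It infers one from the labeled examples, which is what
# makes this supervised learning.
model = LogisticRegression().fit(emails, labels)

print(model.predict([[0, 1], [8, 3]]))   # prints [0 1]: normal, spam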

You don’t really need to be clued up on these, though, since the most popular applications of machine learning use a neural network. 

What’s a neural network? 

It’s a computer system loosely inspired by the human brain that’s been going in and out of fashion for more than 70 years. 
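
That “loosely inspired” part is easier to see in code than in prose. Below is a minimal sketch of a single artificial neuron, the building block neural networks are made of. The weights here are made up for illustration; in a real system they would be learned from data:

import numpy as np

def neuron(inputs, weights, bias):
    # Weigh each input, add them up, then squash the total into a
    # number between 0 and 1 (a "sigmoid" activation), a very rough
    # software analogue of a brain cell deciding whether to fire.
    total = np.dot(inputs, weights) + bias
    return 1 / (1 + np.exp(-total))

print(neuron(np.array([0.5, 0.8]), np.array([0.4, -0.2]), 0.1))  # ~0.53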

So what is “deep learning?” 

That’s a specific approach to using a neural network – essentially, a neural network with many layers (hence “deep”). The technique has led to popular services we use today, including speech recognition on smartphones and Google’s automatic translation.   

In practice, each layer can represent increasingly abstract features. A social media company might, for instance, use a “deep neural network” to recognize faces. One of the first layers describes the dark edges around someone’s head, another describes the edges of a nose and mouth, and another describes blotches of shading. The layers become increasingly abstract, but put together they can represent an entire face.   
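
To show what “lots of layers” means, here is a toy deep network in Python: the neuron’s weigh-and-squash step, repeated layer after layer. The layer sizes and random weights are purely illustrative and aren’t taken from any real face-recognition system:

import numpy as np

rng = np.random.default_rng(0)

# Five layers of neurons, shrinking from 64 "pixel" inputs to 2 outputs.
sizes = [64, 32, 16, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes, sizes[1:])]

def forward(x):
    for w in weights:
        # Each layer transforms the previous layer's output, which is
        # how later layers come to represent more abstract features
        # (edges, then noses and mouths, then whole faces).
        x = np.maximum(0, x @ w)   # a "ReLU" activation
    return x

print(forward(rng.normal(size=64)))   # the 2 outputs for one "image"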

What does a neural network look like on a screen — a jumble of computer code? 

Basically, yes. Engineers at Google’s AI subsidiary DeepMind write nearly all their code in Python, a general-purpose programming language first released in 1991. 

Python has been used to develop all sorts of programs, both basic and highly complex, including some of the most popular services on the web today: YouTube, Instagram and Google. You can learn the basics of Python from free tutorials online. 
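
For a taste of what that looks like on screen, here is a small neural network defined with PyTorch, the free Facebook library mentioned earlier. The layer sizes are arbitrary examples, not anyone’s production code:

import torch
import torch.nn as nn

# Three layers: 784 inputs (say, the pixels of a small image),
# two hidden layers, and 10 outputs (say, the digits 0 to 9).
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

output = model(torch.rand(1, 784))   # push one random "image" through
print(output.shape)                  # prints torch.Size([1, 10])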

Does everyone agree that deep-learning neural networks are the best approach to AI? 

No. While neural networks combined with deep learning are seen as the most promising approach to AI today, that could all change in five years. 

———

This is the first in a series of guides to complicated but important new technology. Stay tuned for our next primer on quantum computing. Got a tip or suggestion for what we should cover next? Reach me by e-mail or on Twitter.

With thanks to Murray Shanahan, professor at Imperial College London and senior research scientist at DeepMind, and Luca Crnkovic-Friis, co-founder and CEO of machine-learning startup Peltarion.
