Today, Artificial Intelligence is a once-in-a-lifetime commercial and defense game changer.
Until recently, the hype exceeded reality. Today, however, advances in AI in several important areas equal and even surpass human capabilities.
If you haven’t paid attention, now’s the time.
Artificial Intelligence and the Department of Defense (DoD)
The Department of Defense believes Artificial Intelligence is such a foundational set of technologies that it stood up a dedicated organization, the Joint Artificial Intelligence Center (JAIC), to enable and implement artificial intelligence across the Department. The JAIC provides the infrastructure, tools, and technical expertise DoD users need to successfully build and deploy their AI-accelerated projects.
Some specific defense-related AI applications are listed later in this document.
We’re in the Middle of a Revolution
Imagine it’s 1950, and you’re a visitor who traveled back in time from today. Your job is to explain the impact computers will have on business, defense and society to people who are using manual calculators and slide rules. You succeed in convincing one company and a government to adopt computers and learn to code much faster than their competitors/adversaries. And they figure out how they could digitally enable their business: supply chain, customer interactions, etc. Think about the competitive edge they’d have by today in business or as a nation. They’d steamroll everyone.
That’s where we are today with Artificial Intelligence and Machine Learning. These technologies will transform businesses and government agencies. Today, hundreds of billions of dollars in private capital have been invested in thousands of AI startups. The U.S. Department of Defense has created a dedicated organization to ensure its deployment.
But What Is It?
Compared to the classic computing we’ve had for the last 75 years, AI has led to new types of applications, e.g. facial recognition; new types of algorithms, e.g. machine learning; new types of computer architectures, e.g. neural nets; new hardware, e.g. GPUs; new types of software developers, e.g. data scientists; all under the overarching theme of artificial intelligence. The sum of these feels like buzzword bingo, but they herald a sea change in what computers are capable of doing, how they do it, and what hardware and software are needed to do it. This brief will attempt to describe all of it.
New Words to Define Old Things
One of the reasons the world of AI/ML is confusing is that it’s created its own language and vocabulary. It uses new words to define programming steps, job descriptions, development tools, etc. But once you understand how the new world maps onto the classic computing world, it starts to make sense. So first a short list of some key definitions.
AI/ML – a shorthand for Artificial Intelligence/Machine Learning
Artificial Intelligence (AI) – a catchall term used to describe “intelligent machines” that can solve problems, make or suggest decisions, and perform tasks that have traditionally required humans to do them. AI is not a single thing, but a constellation of different technologies.
Machine Learning (ML) – a subfield of artificial intelligence. Humans combine data with algorithms (see here for a list) to train a model using that data. This trained model can then make predictions about new data (is this picture a cat, a dog or a person?) or support decision-making processes (like understanding text and images) without being explicitly programmed to do so.
Machine learning algorithms – computer programs that adjust themselves to perform better as they are exposed to more data. The “learning” part of machine learning means these programs change how they process data over time. In other words, a machine-learning algorithm can adjust its own settings, given feedback on its previous performance in making predictions about a collection of data.
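To make the “train on examples, then predict on new data” idea concrete, here is a minimal sketch in Python using the scikit-learn library. The animal measurements and labels are invented purely for illustration.

```python
# A minimal sketch of machine learning: train a model on labeled examples,
# then ask it to predict a label for data it has never seen.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: each example is [height_cm, weight_kg] plus a human-supplied label.
X_train = [[23, 4.0], [25, 5.5], [60, 25.0], [55, 30.0]]
y_train = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # training: the algorithm derives the rules from the data

print(model.predict([[58, 28.0]]))   # inference: predict a label for a new, unseen example
# -> ['dog']
```

Notice that no one wrote an explicit rule like “anything heavier than 10 kg is a dog”; the decision boundary comes from the examples.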
Data Science – a new field of computer science. Broadly it encompasses data systems and processes aimed at maintaining data sets and deriving meaning out of them. In the context of AI, it’s the practice of people who are doing machine learning.
Data Scientists – responsible for extracting insights that help businesses make decisions. They explore and analyze data using machine learning platforms to create models about customers, processes, risks, or whatever they’re trying to predict.
What’s Different? Why is Machine Learning Possible Now?
To understand why AI/Machine Learning can do these things, let’s compare them to computers before AI came on the scene. (Warning – simplified examples below.)
Classic Computers
For the last 75 years computers (we’ll call these classic computers) have both shrunk to pocket size (iPhones) and grown to the size of warehouses (cloud data centers), yet they all continued to operate essentially the same way.
Classic computers are designed to do anything a human explicitly tells them to do. People (programmers) write software code (programming) to develop applications, thinking a priori about all the rules, logic and knowledge that need to be built into an application so that it can deliver a specific result. These rules are explicitly coded into a program using a software language (Python, JavaScript, C#, Rust, …).
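By way of contrast with the machine learning sketch above, here is a contrived Python example of classic rule-based programming, where every rule and threshold is chosen and written out by the programmer ahead of time:

```python
# Classic programming: the programmer explicitly encodes all the rules and logic.
def classify_animal(height_cm, weight_kg):
    # Thresholds chosen by the programmer, not learned from data.
    if height_cm < 30 and weight_kg < 10:
        return "cat"
    elif height_cm >= 30 and weight_kg >= 10:
        return "dog"
    else:
        return "unknown"

print(classify_animal(25, 5.0))   # -> cat
```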
Classic Computers – Compiling
The code is then compiled using software to translate the programmer’s source code into a version that can be run on a target computer/browser/phone. For most of today’s programs, the computer used to develop and compile the code does not have to be that much faster than the one that will run it.
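Python is an interpreted language, so the following is only a rough analogy for the compile step, but the idea is the same: translate the programmer’s source code into a form the machine can execute (here, Python bytecode rather than machine code).

```python
# A toy illustration of the "compile" step using Python's bytecode compiler.
import py_compile

# Write a one-line "program" to disk, then compile it to a bytecode file (hello.pyc).
with open("hello.py", "w") as f:
    f.write('print("Hello, world")\n')

py_compile.compile("hello.py", cfile="hello.pyc")
```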
Classic Computers – Running/Executing Programs
Once a program is coded and compiled, it can be deployed and run (executed) on a desktop computer, phone, in a browser window, a data center cluster, in special hardware, etc. Programs/applications can be games, social media, office applications, missile guidance systems, bitcoin mining, or even operating systems, e.g. Linux, Windows, iOS. These programs run on the same type of classic computer architectures they were programmed on.
Machine Learning
In contrast to programming on classic computing with fixed rules, machine learning is just like it sounds – we can train/teach a computer to “learn by example” by feeding it lots and lots of examples. (For images a rule of thumb is that a machine learning algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance.) Once it is trained, the computer runs on its own and can make predictions and/or complex decisions.
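As a hypothetical illustration of what that labeled training data looks like for an image classifier (the file names and labels below are made up):

```python
# Labeled training examples for a hypothetical cat/dog/person image classifier.
# Each entry pairs a raw input (here, an image file path) with a human-supplied label.
labeled_examples = [
    {"image": "img_00001.jpg", "label": "cat"},
    {"image": "img_00002.jpg", "label": "dog"},
    {"image": "img_00003.jpg", "label": "person"},
    # ...in practice, thousands of labeled images per category (the 5,000-per-category
    # rule of thumb mentioned above)
]
```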
Just as traditional programming has three steps (first coding a program, next compiling it, and then running it), machine learning also has three steps: training (teaching), pruning, and inference (predicting by itself).
Machine Learning – Training
Unlike programming classic computers with explicit rules, training is the process of “teaching” a computer to perform a task, e.g. recognize faces or signals, understand text, etc. (Now you know why you’re asked to click on images of traffic lights, crosswalks, stop signs, and buses, or to type the text of a scanned image, in reCAPTCHA.) Humans provide massive volumes of “training data” (the more data, the better the model’s performance) and select the appropriate algorithm to find the best optimized outcome.
By running an algorithm selected by a data scientist on a set of training data, the Machine Learning system generates the rules embedded in a trained model. The system learns from examples (training data), rather than being explicitly programmed. (See the “Types of Machine Learning” section for more detail.) This self-correction is pretty cool. An input to a neural net results in a guess about what that input is. The neural net then takes its guess and compares it to a ground truth about the data, effectively asking an expert “Did I get this right?” The difference between the network’s guess and the ground truth is its error. The network measures that error and walks the error back over its model, adjusting weights to the extent that they contributed to the error.
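To illustrate the “guess, measure the error, walk it back over the weights” loop described above, here is a toy Python sketch in which a single weight learns to convert kilometers to miles. Real neural networks do the same thing across millions of weights.

```python
# A toy training loop: guess, measure the error against the ground truth,
# then nudge the weight in the direction that reduces the error.
training_data = [(1.0, 0.621), (2.0, 1.243), (5.0, 3.107), (10.0, 6.214)]  # (km, miles)

weight = 0.0            # the model's single adjustable setting
learning_rate = 0.01    # how big a correction to make on each example

for epoch in range(200):
    for km, miles_truth in training_data:
        guess = weight * km                    # the model's prediction
        error = guess - miles_truth            # "Did I get this right?"
        weight -= learning_rate * error * km   # walk the error back onto the weight

print(round(weight, 3))   # settles near 0.621, the km-to-miles conversion factor
```

The conversion factor 0.621 never appears in the code; the model arrives at it purely from the examples.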
Just to make the point again: The algorithms combined with the training data – not external human computer programmers – create the rules that the AI uses. The resulting model is capable of solving complex tasks such as recognizing objects it’s never seen before, translating text or speech, or controlling a drone swarm.