Function Approximation: Reverse Engineering Unknown Functions from Data
When we don’t know the function but only know some of its x and y values—the inputs and outputs but not the function used to produce them—we need a way to reverse engineer that function.
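To make this concrete, here is a minimal sketch, assuming a handful of made-up sample points and the simplest possible guess (a straight line fit by least squares); both the data and the guess are illustrative, not part of the original setup.

```python
# We only see sampled (x, y) pairs, not the rule that produced them.
# These points are made up for illustration; the hidden rule happens
# to be y = 2x + 1.
samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Naive reverse engineering: assume the rule is a line y = w*x + b
# and pick the w and b that best match the samples (least squares).
n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
    sum((x - mean_x) ** 2 for x, _ in samples)
b = mean_y - w * mean_x

print(f"guessed rule: y = {w:.2f}*x + {b:.2f}")  # y = 2.00*x + 1.00
```

Neural networks generalize exactly this move: guess a flexible family of functions, then adjust its parameters until it matches the data.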
Neurons as Building Blocks: Simple Functions Combining into Complexity
Neurons are the building blocks of the larger network: building blocks that can be stretched and squeezed and shifted around, and that ultimately work with other blocks to construct something larger than themselves.
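As a sketch of what a single neuron computes, assuming the usual weight-times-input-plus-bias form: the weight stretches or squeezes the input, and the bias shifts it. The numbers below are arbitrary.

```python
# One neuron acting on one input: the weight scales (stretches or
# squeezes) the input, the bias shifts the result up or down.
def neuron(x, weight, bias):
    return weight * x + bias

print(neuron(2.0, weight=3.0, bias=-1.0))  # 3*2 - 1 = 5.0
```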
The Linearity Problem: Why Linear Functions Can Only Combine to Make Lines
A naive neuron defined as a linear function with weight and bias parameters creates a fundamental problem: no matter how many linear functions you combine, the result is still just one linear function.
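A quick way to see the collapse, using two hand-picked linear neurons (the weights and biases are arbitrary assumptions): stacking them produces nothing a single linear neuron couldn’t already do.

```python
# Stacking two purely linear neurons buys nothing: the composition
# collapses into a single linear function.
def layer1(x):
    return 2.0 * x + 1.0      # w1 = 2, b1 = 1

def layer2(h):
    return -3.0 * h + 4.0     # w2 = -3, b2 = 4

def stacked(x):
    return layer2(layer1(x))  # equals (w2*w1)*x + (w2*b1 + b2) = -6x + 1

for x in (0.0, 1.0, 2.0):
    assert stacked(x) == -6.0 * x + 1.0
```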
ReLU: Almost Linear Non-Linearity Enabling Complex Function Approximation
ReLU (Rectified Linear Unit) serves as an activation function—about as close as you can get to a linear function without actually being one—that introduces the non-linearity neural networks need to approximate complex functions.
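Here is a sketch of ReLU itself and of the difference it makes, reusing the arbitrary weights from the previous example: with ReLU between the two neurons, the result no longer collapses into a single straight line.

```python
# ReLU: pass positive values through unchanged, clamp negatives to zero.
def relu(x):
    return max(0.0, x)

# The same two neurons as before, now with ReLU in the middle.
def bent(x):
    return -3.0 * relu(2.0 * x + 1.0) + 4.0

print(bent(-1.0), bent(0.0), bent(1.0))  # 4.0 1.0 -5.0 (not on one line)
```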
Backpropagation: Automatic Parameter Tuning Through Gradient Descent
Backpropagation, paired with gradient descent, is the most common method for automatically finding good weights and biases: it works out how much each parameter contributed to the error, then tweaks and tunes the parameters of the network bit by bit to improve the approximation.
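A minimal sketch of that tweak-and-tune loop for a single linear neuron with a squared-error loss; the gradients are written out by hand here, which is the bookkeeping backpropagation automates for deeper networks. The data, learning rate, and step count are illustrative assumptions.

```python
# Fit one linear neuron to data by gradient descent. The "hidden"
# rule behind the data is y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = 0.0, 0.0   # start with a bad guess
lr = 0.05         # learning rate: how big each nudge is

for step in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y      # how wrong the current guess is
        grad_w += 2 * err * x      # d(err^2)/dw
        grad_b += 2 * err          # d(err^2)/db
    w -= lr * grad_w / len(data)   # nudge each parameter downhill
    b -= lr * grad_b / len(data)

print(f"learned w = {w:.2f}, b = {b:.2f}")  # approaches w = 2, b = 1
```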
Universal Approximation Theorem: Neural Networks Can Approximate Any Function
Neural networks can be rigorously proven to be universal function approximators: by adding more neurons, they can approximate any continuous function to any degree of precision.
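To get a feel for why "add more neurons" works, here is a hand-built (not learned) sum of shifted ReLU neurons approximating x**2 on [0, 1] as a piecewise-linear curve; the target function and the number of pieces are illustrative choices.

```python
# Approximate x**2 on [0, 1] with a sum of shifted ReLU neurons:
# one neuron per "kink" in a piecewise-linear curve.
def relu(x):
    return max(0.0, x)

def approx_square(x, pieces=4):
    h = 1.0 / pieces
    knots = [i * h for i in range(pieces)]
    slopes = [((k + h) ** 2 - k ** 2) / h for k in knots]        # slope on each segment
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, pieces)]
    return sum(c * relu(x - k) for c, k in zip(coeffs, knots))   # one ReLU neuron per knot

worst = max(abs(approx_square(i / 100) - (i / 100) ** 2) for i in range(101))
print(f"max error with 4 neurons: {worst:.4f}")  # ~0.016, and it shrinks as pieces grows
```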
Encoding the World as Numbers: Making Reality Computable
If you can express any intelligent behavior, any process, any task as a function, then a network can learn it; you just need to be able to encode your inputs and outputs as numbers, and computers do this all the time.
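A sketch of what that encoding can look like in practice: text as character codes, a tiny grayscale image as a list of brightness values, and a category label as a one-hot vector. These particular encodings are illustrative choices, not the only options.

```python
# Text: each character becomes its character code.
text_as_numbers = [ord(ch) for ch in "cat"]          # [99, 97, 116]

# Image: a 2x2 grayscale "image" as brightness values (0 = black, 1 = white),
# flattened into a single list of numbers.
image = [[0.0, 0.5],
         [0.5, 1.0]]
image_as_numbers = [pixel for row in image for pixel in row]

# Category: "cat" out of ["cat", "dog", "bird"] as a one-hot vector.
labels = ["cat", "dog", "bird"]
one_hot_cat = [1.0 if lbl == "cat" else 0.0 for lbl in labels]   # [1.0, 0.0, 0.0]
```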
Neural Networks as Turing-Complete Systems: Learning Any Algorithm
Under a few more assumptions, neural networks are provably Turing complete, meaning they can solve all of the same kinds of problems that any computer can solve.
Practical Limitations: Why Neural Networks Can't Actually Learn Everything
Despite these theoretical guarantees, neural networks face real practical limitations: you can’t have an infinite number of neurons, so there are hard limits on network size and on what can actually be modeled in the real world.
Neural Networks for Fuzzy Problems: Learning What We Can't Program
Neural networks have proven themselves indispensable for a number of famously difficult problems for computers: problems that usually require a level of intuition and fuzzy logic that hand-written programs generally lack.
Functions as a General Framework for Understanding the World
This is all because of the humble function—a simple yet powerful way to think about the world—and by combining simple computations we can get computers to construct any function we could ever want.