Radial Basis Function (RBF) Networks are a particular type of artificial neural network used for function approximation problems. RBF networks differ from other neural networks in their three-layer architecture, universal approximation capability, and faster learning speed. In this article, we'll describe the Radial Basis Function Neural Network, how it works, its architecture, and its use as a non-linear classifier.

What Are Radial Basis Functions?

Radial Basis Function Networks are a special class of feed-forward neural networks consisting of three layers: an input layer, a hidden layer, and an output layer. This is fundamentally different from most neural network architectures, which are composed of many layers and introduce non-linearity by repeatedly applying non-linear activation functions. The input layer receives the input data and passes it to the hidden layer, where the computation occurs. The hidden layer is what makes an RBF network distinctive and different from most neural networks: each of its neurons responds to the distance between the input and the neuron's center. The output layer handles the prediction task, such as classification or regression.

How Do RBF Networks Work?

RBF neural networks are conceptually similar to K-Nearest Neighbor (k-NN) models, though the two are implemented very differently. The fundamental idea of an RBF network is that an item's predicted target value is likely to be similar to that of other items with close values of the predictor variables. An RBF network places one or more RBF neurons in the space described by the predictor variables; this space has as many dimensions as there are predictor variables. For each point to be evaluated, we calculate the Euclidean distance from that point to the center of each neuron. A Radial Basis Function (RBF), also known as a kernel function, is then applied to this distance to calculate every neuron's weight (influence): Weight = RBF(distance). The function takes its name from the radius, that is, the distance, which is its argument. The greater the distance of a neuron's center from the point being evaluated, the less influence (weight) it has.
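To make this concrete, here is a minimal sketch of the distance-to-weight computation using a Gaussian kernel. The centers, the query point, and the spread parameter beta are illustrative assumptions, not values from any particular library or dataset.

import numpy as np

def gaussian_rbf(distance, beta):
    # Gaussian kernel: the weight decays exponentially with squared distance.
    return np.exp(-beta * distance ** 2)

def neuron_weights(x, centers, beta):
    # Euclidean distance from the query point x to every neuron center,
    # then the kernel applied to each distance gives one weight per neuron.
    distances = np.linalg.norm(centers - x, axis=1)
    return gaussian_rbf(distances, beta)

# Hypothetical example: three RBF neurons in a two-dimensional predictor space.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.5]])
x = np.array([0.9, 1.1])
print(neuron_weights(x, centers, beta=1.0))  # the nearest center gets the largest weight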

Radial Basis Functions

A radial basis function is a real-valued function whose value depends only on the distance of its argument from the origin (or, in a network, from the neuron's center). Although various types of radial basis functions can be used, the Gaussian function is the most common.

When there is more than one predictor variable, the RBF network operates in a space with the same number of dimensions as there are variables. If, for example, three neurons are placed in a space described by two predictor variables, the best-predicted value for a new point is calculated by adding the output values of the RBF functions multiplied by the weights learned for each neuron, as sketched below.
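Building on the kernel sketch above, the following hypothetical snippet shows that weighted-sum prediction; the output weights w are assumed to have been learned already, and all values are illustrative.

import numpy as np

def rbf_predict(x, centers, beta, w):
    # Activation of each RBF neuron for the query point x.
    distances = np.linalg.norm(centers - x, axis=1)
    activations = np.exp(-beta * distances ** 2)
    # Predicted value = sum of each neuron's activation times its output weight.
    return activations @ w

centers = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.5]])  # three neurons, two predictors
w = np.array([0.5, 1.2, -0.3])                            # assumed learned output weights
print(rbf_predict(np.array([0.9, 1.1]), centers, beta=1.0, w=w))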

The radial basis function for a neuron consists of a center and a radius (also called the spread). The radius may vary between different neurons. In DTREG-generated RBF networks, each dimension's radius can differ. 

As the spread grows larger, neurons at a distance from a point have more influence. 

RBF Network Architecture

The typical architecture of a radial basis functions neural network consists of an input layer, hidden layer, and summation layer. 

Input Layer 

The input layer consists of one neuron for every predictor variable. For categorical variables, N-1 neurons are used, where N denotes the number of categories. The input neurons standardize the range of values by subtracting the median and dividing by the interquartile range, and then pass the values to each neuron in the hidden layer.

Hidden Layer

The hidden layer contains a variable number of neurons (the ideal number is determined by the training process). Each neuron applies a radial basis function centered on a point whose number of dimensions coincides with the number of predictor variables. The radius or spread of the RBF function may vary for each dimension.

When a vector of input values x is fed from the input layer, each hidden neuron calculates the Euclidean distance between the test case and the neuron's center point and then applies the kernel function to that distance using the spread values. The resulting value is passed on to the summation layer.
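Because the spread can differ per dimension, the kernel can be evaluated with a separate sigma for each predictor. The sketch below illustrates one such anisotropic Gaussian hidden neuron; the center and per-dimension spreads are illustrative assumptions.

import numpy as np

def hidden_neuron_activation(x, center, sigmas):
    # Squared distance along each dimension, scaled by that dimension's spread.
    scaled_sq = ((x - center) / sigmas) ** 2
    # Gaussian kernel applied to the combined (scaled) distance.
    return np.exp(-0.5 * np.sum(scaled_sq))

center = np.array([1.0, 4.0])    # hypothetical neuron center
sigmas = np.array([0.5, 2.0])    # a different spread for each predictor dimension
print(hidden_neuron_activation(np.array([1.2, 3.5]), center, sigmas))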

Output Layer or Summation Layer

The value coming out of each hidden-layer neuron is multiplied by a weight associated with that neuron and passed to the summation layer. Here the weighted values are added up, and the sum is presented as the network's output. Classification problems have one output per target category, the value being the probability that the case being evaluated belongs to that category.

The Input Vector  

It is the n-dimensional vector that you're attempting to classify. The whole input vector is presented to each of the RBF neurons. 

The RBF Neurons

Every RBF neuron stores a prototype vector (also known as the neuron's center) taken from amongst the vectors of the training set. An RBF neuron compares the input vector with its prototype and outputs a value between 0 and 1 as a measure of similarity. If an input is identical to the prototype, the neuron's output is 1. As the difference between the input and the prototype grows, the output falls exponentially towards 0. The RBF neuron's response as a function of distance has the shape of a bell curve. The response value is also called the activation value.
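As a brief check of this behaviour, the hypothetical snippet below uses the Gaussian activation exp(-beta * ||x - mu||^2): it returns exactly 1 when the input equals the prototype and shrinks towards 0 as the input moves away.

import numpy as np

def rbf_activation(x, prototype, beta=1.0):
    # Similarity score in (0, 1]; equals 1 only when x matches the prototype.
    return np.exp(-beta * np.sum((x - prototype) ** 2))

mu = np.array([2.0, -1.0])                        # hypothetical prototype (neuron center)
print(rbf_activation(mu, mu))                     # 1.0: input identical to the prototype
print(rbf_activation(np.array([3.0, 0.0]), mu))   # smaller value: input farther away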

The Output Nodes

The network's output layer comprises one node for each category you're trying to classify. Each output node computes a score for its category, and the classification decision is generally made by assigning the input to the category with the highest score.

The score is calculated as a weighted sum of the activation values from all of the RBF neurons. An output node usually gives a positive weight to the RBF neurons belonging to its own category and a negative weight to the others, and each output node has its own set of weights.
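A minimal sketch of these per-category scores, assuming the RBF activations have already been computed; the weight values and the argmax decision rule shown here are illustrative.

import numpy as np

# Hypothetical activations of 4 RBF neurons for one input vector.
activations = np.array([0.9, 0.1, 0.05, 0.7])

# One weight row per output node (category): positive weights for neurons of the
# node's own category, negative weights for neurons of the other category.
W = np.array([[ 1.0,  0.8, -0.6, -0.9],   # category 1 output node
              [-1.0, -0.8,  0.6,  0.9]])  # category 2 output node

scores = W @ activations          # one score per category
predicted = np.argmax(scores)     # pick the category with the highest score
print(scores, predicted)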

Radial Basis Function Example

Let us consider a fully trained RBF network as an example.

Suppose a dataset consists of two-dimensional data points belonging to two separate classes, and an RBF network with 20 RBF neurons has been trained on it. We can mark the prototypes that were selected and visualize the category 1 score over the input space, either as a 3-D mesh or as a contour plot.

The areas of highest and lowest category 1 score can be marked separately.

In the case of the category 1 output node:

  • All the weights for category 2 RBF neurons will be negative.
  • All the weights for category 1 RBF neurons will be positive. 

Finally, an approximation of the decision boundary can be plotted by computing the scores over a finite grid of points, as sketched below.
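The following hypothetical snippet shows one way to approximate the boundary: evaluate the two category scores on a grid and keep the grid points where the scores are nearly equal. The "trained" centers, beta, and weight matrix are stand-in values, not results from an actual training run.

import numpy as np

def category_scores(x, centers, beta, W):
    # RBF activations for the point x, then one weighted sum per output node.
    activations = np.exp(-beta * np.linalg.norm(centers - x, axis=1) ** 2)
    return W @ activations

# Stand-in "trained" network: 4 centers, 2 output nodes.
centers = np.array([[0.0, 0.0], [1.0, 0.5], [3.0, 3.0], [3.5, 2.0]])
W = np.array([[ 1.0,  1.0, -1.0, -1.0],
              [-1.0, -1.0,  1.0,  1.0]])
beta = 1.0

# Evaluate scores over a finite grid and keep points where the two scores nearly tie.
xs, ys = np.meshgrid(np.linspace(-1, 5, 200), np.linspace(-1, 5, 200))
boundary = [(x, y) for x, y in zip(xs.ravel(), ys.ravel())
            if abs(np.subtract(*category_scores(np.array([x, y]), centers, beta, W))) < 0.01]
print(len(boundary), "grid points lie close to the decision boundary")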

Training the RBFN

The training process includes selecting these parameters:

  • The prototype (mu) for each RBF neuron,
  • The beta coefficient for each RBF neuron, and
  • The matrix of output weights between the RBF neurons and the output nodes.

There are several approaches to selecting the prototypes, such as creating an RBF neuron for every training example or randomly choosing k prototypes from the training data.

When specifying the beta coefficients, a common heuristic is to set sigma equal to the average distance between the points in a neuron's cluster and the cluster center, and then derive beta from sigma.

Output weights can be trained using gradient descent, as in the training sketch below.
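Putting these three steps together, here is a hedged end-to-end training sketch: prototypes are chosen at random from the training data, each beta is derived from the average distance to the prototype using the common convention beta = 1 / (2 * sigma^2) (an assumption, not something the article prescribes), and the output weights are fitted with plain gradient descent. All names and hyperparameters are illustrative.

import numpy as np

def train_rbfn(X, y, k=10, lr=0.1, epochs=500, seed=0):
    rng = np.random.default_rng(seed)

    # 1. Prototypes: randomly choose k training examples as neuron centers.
    centers = X[rng.choice(len(X), size=k, replace=False)]

    # 2. Betas: sigma = average distance from each center to the training points,
    #    then beta = 1 / (2 * sigma^2) (a common convention, assumed here).
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # shape (n, k)
    sigmas = dists.mean(axis=0)
    betas = 1.0 / (2.0 * sigmas ** 2)

    # 3. Output weights: gradient descent on the squared error of the weighted sum.
    activations = np.exp(-betas * dists ** 2)        # (n, k) RBF activations
    w = np.zeros(k)
    for _ in range(epochs):
        pred = activations @ w
        grad = activations.T @ (pred - y) / len(X)
        w -= lr * grad
    return centers, betas, w

# Tiny illustrative dataset: approximate y = sin(x1) + x2.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + X[:, 1]
centers, betas, w = train_rbfn(X, y, k=15)
print("training done; first few output weights:", w[:3])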

Advantages of RBFN

  • Easy Design
  • Good Generalization
  • Faster Training 
  • Only one hidden layer
  • A straightforward interpretation of the meaning or function of each node in the hidden layer

Conclusion

If you are interested in gaining a deeper understanding of the Radial Basis Function Network or any other neural network, join the Simplilearn Machine Learning Course now and become a machine learning expert.

Our Learners Also Ask

1. What is the radial basis function neural network used for?

Radial Basis Function neural networks are artificial neural networks commonly used for function approximation problems; radial basis functions are also widely used as kernels in support vector machine classification.

2. What is the role of the radial basis?

Radial basis functions provide ways to approximate multivariable functions by using linear combinations of terms that are based on a single univariate function. 

3. What is the radial basis function in ML?

In machine learning (ML), a Radial Basis Function (RBF) is a real-valued function whose value depends only on the distance between the input and a certain fixed point. RBF networks use these functions in supervised learning to act as non-linear classifiers.

4. What is the advantage of the RBF neural network?

The main advantages of the RBF neural network are:

  • Easy Design
  • Good Generalization
  • Faster Training 
  • Only one hidden layer
  • Strong tolerance to input noise
  • Easy interpretation of the meaning or function of each node in the hidden layer

5. What is the difference between RBF and MLP?

The Multilayer Perceptron (MLP) and the Radial Basis Function (RBF) network are both popular feed-forward neural network architectures. The main differences between RBF and MLP are:

MLP consists of one or several hidden layers, while RBF consists of just one hidden layer. 

An RBF network has a faster learning speed than an MLP. In an MLP, training is usually done through backpropagation for every layer, whereas in an RBF network, training can be done either through backpropagation or through hybrid learning, in which the hidden-layer parameters and the output weights are trained separately.
