Mar 18, 2025 6 min read

Weights and Bias in Artificial Neural Networks


An artificial neural network is a machine learning system made up of numerous interconnected neurons arranged in layers to simulate the structure and function of a biological brain.

Learning occurs in a biological brain, in part, by the strengthening of connections between neurons. When you are learning a new subject or a new skill, the neurons in your brain form new connections with other neurons. The more you study or practice, the stronger these connections become.

In an artificial neural network, learning occurs in a similar way:

  1. Each neuron (also referred to as a "node") receives one or more inputs from an external source or from other neurons.
  2. Each input is multiplied by a weight to indicate the input's relative importance.
  3. The sum of the weighted input(s) is fed into the neuron.
  4. Bias is added to the sum of the weighted inputs.
  5. An activation function within the neuron performs a calculation on the total.
  6. The result is the neuron's output, which is passed to other neurons or delivered to the external world as the machine's output.
  7. The output passes through a loss function or cost function that evaluates the accuracy of the neural network's prediction, and results are fed back through the network, indicating adjustments that must be made to the weights and biases.
  8. The process is repeated in multiple iterations to optimize the accuracy of the output; weights and biases are adjusted with each iteration.
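The forward pass in steps 1 through 6 can be sketched in a few lines of Python. This is a minimal single-neuron example; the sigmoid activation and the sample numbers are illustrative, not prescribed:

```python
import math

def neuron(inputs, weights, bias):
    # Steps 2-4: weight each input, sum the results, and add the bias
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step 5: the activation function (a sigmoid here) acts on the total
    return 1 / (1 + math.exp(-total))

# Step 6: the result is this neuron's output
out = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
```

Steps 7 and 8 (the loss function and repeated adjustment of weights and biases) are covered further down.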

Activation Function

Each node in an artificial neural network has an activation function that performs a mathematical operation on the sum of its weighted inputs and bias and produces an output. (A function is a special relationship in which each input has a single output.)

Different functions produce different outputs, and when you graph the outputs, you get different shapes. The most basic activation function used in machine learning is the Heaviside step function. This function outputs 1 (one) if its weighted input plus bias is zero or positive, and it outputs 0 (zero) if its weighted input plus bias is negative. In other words, the neuron either fires or doesn't. When you graph this function, you get something that looks like a step in a stairway: the output can be one or zero, nothing in between.
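The Heaviside step function is simple enough to write in one line of Python (a sketch; the function name is my own):

```python
def heaviside(weighted_sum_plus_bias):
    # The neuron fires (1) when the total is zero or positive,
    # and stays silent (0) when the total is negative
    return 1 if weighted_sum_plus_bias >= 0 else 0
```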

The sigmoid function provides infinitely more variation in values than you get with a binary function like the Heaviside step function. When graphed, the output of a sigmoid function forms an "S" shape. Sigmoid functions are commonly used in neural networks because they allow for making small adjustments within a limited range (0.0 to 1.0 on the vertical axis) during the machine learning process.
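The sigmoid's behavior is easy to check numerically (a small sketch; the sample inputs are illustrative):

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1)
    return 1 / (1 + math.exp(-z))

# Large negative totals land near 0, large positive totals near 1,
# and zero maps exactly to 0.5 -- the middle of the "S"
low, mid, high = sigmoid(-6), sigmoid(0), sigmoid(6)
```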

Note that linear regression-style functions, which are also used in machine learning, are less useful as activation functions in an artificial neural network because their output range is unbounded. The narrow range of a sigmoid function (from 0.0 to 1.0 on the vertical axis) makes the model more efficient and stable.

Weights

Weights enable the artificial neural network to dial up or dial down connections between neurons. For example, suppose you create an artificial neural network to distinguish among different dog breeds. Each neuron in one layer of the neural network may focus on a different characteristic. These could be the snout, ears, eyes, tail, size, shape and color. With weighted inputs, the network can increase or decrease the strength of the connection between each neuron in this layer and the neurons in the next layer to place less emphasis on the tail, for example, and more on the size and shape.

Bias

While weights enable an artificial neural network to adjust the strength of connections between neurons, bias can be used to make adjustments within neurons. Bias can be positive or negative, increasing or decreasing a neuron’s output. The neuron gathers and sums its inputs, adds bias (positive or negative) and then passes the total to the activation function.

With the sigmoid function graph, weight impacts the steepness of the curve, while bias shifts the curve left or right without changing the shape of the curve.
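Both effects can be checked numerically (a small sketch; the function name and sample values are my own):

```python
import math

def sigmoid_neuron(x, w, b):
    # Output of a single sigmoid neuron with weight w and bias b
    return 1 / (1 + math.exp(-(w * x + b)))

# Larger weight -> steeper curve: the output climbs faster near the midpoint
gentle = sigmoid_neuron(0.5, w=1, b=0)
steep = sigmoid_neuron(0.5, w=5, b=0)

# Bias shifts the curve left or right: with w=1 and b=-2,
# the midpoint (where the output is 0.5) moves from x=0 to x=2
shifted_midpoint = sigmoid_neuron(2, w=1, b=-2)
```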

Cost Function

A cost function indicates the accuracy of the model. Its output tells the neural network whether weights and biases need to be adjusted to improve the model's accuracy. Think of the cost function as the means to reward the machine for success and/or punish it for failure. It enables the machine to learn from its achievements or mistakes.
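One common cost function is mean squared error; here's a minimal sketch (one of many possible cost functions, shown here for illustration):

```python
def mse(predictions, targets):
    # Mean squared error: the average squared gap between the network's
    # predictions and the true values -- lower means a more accurate model
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
```

A prediction closer to the target yields a lower cost, which is the signal the network uses to decide how to adjust its weights and biases.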

To understand the interaction of functions, weights, and biases, imagine a neural network as a sound system with various dials for adjusting different parameters. During the machine learning process, a neural network may turn dozens, hundreds, or even thousands of dials, making tiny adjustments to weights and biases, and then checking the end result. It repeats this process over and over to optimize the output's accuracy with every iteration.

Frequently Asked Questions

What are neural network weights and how do they influence predictions?

In an artificial neural network, weights measure the strength of the connections between neurons: they indicate how much each input matters. Think of neurons as tiny decision-makers in the network; the weights determine how strongly an input can activate a neuron and influence a prediction.

During training, the network adjusts these weights based on the training data, working to reduce its mistakes. Well-tuned weights are what allow the network to make good predictions.

How does bias contribute to the function of a neural network?

Bias in a neural network acts as an extra parameter: it shifts a neuron's output alongside the weighted sum of its inputs.

Bias units are key because they help the network represent patterns that weights alone cannot.

Here's why bias units are useful:

1. Better pattern fitting: they let the network represent functions that don't pass through the origin.
2. Flexible learning: they shift each neuron's activation threshold, giving it more room to adapt.
3. Improved accuracy: that extra degree of freedom lets the network fit the data more closely.

Without bias, a neural network struggles to fit the data whenever the correct output requires an offset that the weights alone can't provide. Bias gives the network that room to adjust.

Why do we need activation functions in a neural network?

Activation functions in neural networks add non-linearity, which lets the network learn complex patterns. Without them, the network could only perform linear transformations no matter how many layers it had, reducing it to something like linear regression. It couldn't solve hard problems like image recognition or understanding natural language.
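The "linear layers collapse" point can be demonstrated directly (a sketch with illustrative numbers):

```python
# Two "layers" with no activation function between them...
w1, b1 = 2.0, 1.0    # first linear layer (illustrative values)
w2, b2 = 3.0, -0.5   # second linear layer

def two_linear_layers(x):
    return w2 * (w1 * x + b1) + b2

# ...collapse into a single equivalent linear layer, so the extra
# depth adds no expressive power without a non-linear activation
def one_equivalent_layer(x):
    return (w2 * w1) * x + (w2 * b1 + b2)
```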

How do you train a neural network?

Training a neural network means adjusting its parameters based on the mistakes it makes. You start by feeding data through the network to get predictions. Then you measure how wrong those predictions are using a loss function, which quantifies the error.

You then send this error backward through the network using a method called backpropagation, which works out how much each weight and bias contributed to the error. An optimization method such as Stochastic Gradient Descent then updates those parameters in the direction that reduces the error. You repeat this many times until the network performs well.
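Here's a toy version of that loop for a single neuron with one weight and one bias, trained with gradient descent to fit the line y = 2x + 1. The data, learning rate, and iteration count are illustrative; real networks repeat the same idea across many parameters:

```python
# Toy training loop: learn w and b so that w*x + b fits y = 2x + 1
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
w, b = 0.0, 0.0   # start with untrained parameters
lr = 0.05         # learning rate: how big each adjustment is

for _ in range(500):              # repeat over many iterations
    for x, y in data:
        pred = w * x + b          # forward pass: make a prediction
        error = pred - y          # loss is error**2; its gradients are below
        w -= lr * 2 * error * x   # d(error**2)/dw = 2 * error * x
        b -= lr * 2 * error       # d(error**2)/db = 2 * error
```

After training, w and b should end up close to 2 and 1, the values used to generate the data.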

What role do hidden layers play in a neural network algorithm?

Hidden layers help neural networks understand complex data. Each hidden layer learns something new from the data. The next layer builds on what the previous one learned.

This step-by-step learning makes neural networks very good at tasks like recognizing images and voices.

These tasks require the network to turn simple patterns into meaningful ones. The number of hidden layers and the number of neurons in each layer matter: they determine how much the network can represent and how complex it can be.

- Hidden layers learn new features.
- Next layers build on previous layers.
- More hidden layers and neurons mean more power.
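Layer stacking can be sketched by reusing one layer function (a minimal example; the layer sizes, weights, and biases are illustrative):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # One fully connected layer: each neuron takes a weighted sum
    # of all inputs, adds its bias, and applies the activation
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Hidden layer: 2 inputs feed 3 neurons; output layer: those 3 feed 1
hidden = layer([0.5, -1.0],
               [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]],
               [0.0, 0.1, -0.1])
output = layer(hidden, [[0.5, -0.6, 0.9]], [0.05])
```

Each call to `layer` builds on what the previous layer produced, which is the step-by-step learning described above.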

This is my weekly newsletter that I call The Deep End because I want to go deeper than results you’ll see from searches or LLMs. Each week I’ll go deep to explain a topic that’s relevant to people who work with technology. I’ll be posting about artificial intelligence, data science, and ethics.

This newsletter is 100% human written 💪 (* aside from a quick run through grammar and spell check).
