Artificial neural networks learn through a combination of functions, weights, and biases. Each artificial neuron receives weighted inputs from the outside world or from other neurons, adds a bias to the sum of those weighted inputs, and then applies a function to the total to produce an output. At the start of the learning process, the weights across the entire network are assigned randomly; the network then adjusts them to increase its overall accuracy in performing its task, such as deciding how likely it is that a certain credit card transaction is fraudulent.
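The behavior of a single neuron described above can be sketched in a few lines of code. This is a minimal illustration, not the article's own implementation; the function name, the sample numbers, and the simple step activation are all assumptions chosen for clarity.

```python
# Minimal sketch of a single artificial neuron (names and numbers are illustrative).
def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus the bias...
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through an activation function (a simple step function here).
    return 1.0 if total > 0 else 0.0

# (0.5 * 0.2) + (0.8 * -0.4) + 0.3 = 0.08, which is positive, so the neuron fires.
output = neuron(inputs=[0.5, 0.8], weights=[0.2, -0.4], bias=0.3)
```

Real networks use smoother activation functions than a step (more on those in the FAQ below), but the structure is the same: multiply, sum, add bias, apply a function.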

Imagine weights and biases as dials on a sound system. Just as you can turn the dials to control the volume, balance, and tone to produce the desired sound quality, the machine can adjust its dials (weights and biases) to fine-tune its accuracy. These dials are the parameters of the network.
Setting Random Weights and Biases in Artificial Neural Networks

When you’re setting up an artificial neural network, you have to start somewhere. You could start by cranking the dials all the way up or all the way down, but then you would have too much symmetry in the network, making it more difficult for the network to learn.
Specifically, if neighboring nodes in the hidden layers of the neural network are connected to the same inputs and those connections have identical weights, the learning algorithm computes identical updates for all of them, so the neurons remain copies of one another and the model gets stuck. So no learning 😕.
Instead, you want to assign different values to the weights. These are typically small values, close to zero but not zero.
By default, the bias in each neuron is set to zero. During the learning process, the network can then dial each bias up or down to make additional adjustments.
In the absence of any prior knowledge, a plausible solution is to assign totally random values to the weights.
For now, just think of the random values as small, unrelated numbers close to (but not exactly) zero. What's important is that these random values provide a starting point from which the network can adjust each weight up or down to improve its accuracy. The network can also make adjustments by dialing the bias within each neuron up or down.
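Putting the last few paragraphs together, here is one plausible way to initialize a layer: small random weights to break symmetry, and biases at zero. The function name, layer sizes, and the ±0.1 range are illustrative assumptions, not prescriptions from the article.

```python
import random

random.seed(42)  # fixed seed only so this sketch is reproducible

def init_layer(n_inputs, n_neurons):
    # Small random weights near zero break the symmetry between neurons.
    weights = [[random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
               for _ in range(n_neurons)]
    # Biases start at zero; the network dials them up or down during learning.
    biases = [0.0] * n_neurons
    return weights, biases

weights, biases = init_layer(n_inputs=3, n_neurons=4)
```

In practice, libraries use more principled schemes (such as Xavier or Kaiming initialization) that scale the random range to the layer size, but the idea is the same: random, small, and different for every connection.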
The Difference between Deterministic and Non-Deterministic Algorithms
For an artificial neural network to learn, it requires a machine learning algorithm. This is a process or set of procedures that enables the machine to create a model that can process the input data in a way that achieves the network's desired objective. Algorithms come in two types:
- Deterministic: Every time the algorithm is given the same problem, it takes the same steps in the same sequence to solve it and produces the same outcome. An example of a deterministic algorithm is the sort feature in a word processor. Every time you use the feature to sort a list, it takes the same steps to arrange the items in the same order.
- Non-deterministic: Every time the algorithm is given the same problem, it may take the steps in a different sequence, which may produce a slightly different outcome. An example of a non-deterministic algorithm is an electronic card game that shuffles the cards before dealing them. The cards must be shuffled in a way that places them in a random order, so players cannot "guess" the order of the cards.
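The two examples above map directly onto Python's standard library: sorting is deterministic, while shuffling depends on the random state. This is a sketch of the contrast, not of anything specific to neural networks.

```python
import random

items = [3, 1, 2]

# Deterministic: sorting the same input always yields the same output.
assert sorted(items) == [1, 2, 3]
assert sorted(items) == sorted(items)

# Non-deterministic in effect: the shuffled order depends on the random
# state, so repeated runs generally produce different orders, even though
# the same 52 cards are always present.
deck = list(range(52))
random.shuffle(deck)
```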
As a rule of thumb, use deterministic algorithms to solve problems with concrete answers, such as determining which route is shortest in a GPS program. Use non-deterministic algorithms when an approximate answer is good enough and too much processing power and time would be required for the computer to arrive at a more accurate answer or solution.
An artificial neural network uses a non-deterministic algorithm, so the network can experiment with different approaches and then adjust accordingly to optimize its accuracy.
What Happens During the Learning Process?

Suppose you are training an artificial neural network to distinguish among different dog breeds. As you feed your training data (pictures of dogs labeled with their breeds) into the network, it adjusts the weights and biases to identify a relationship between each picture and its label (the dog breed), and it begins to distinguish between different breeds.
Early in training, it may be a little unsure whether the dog in a certain picture is one breed or another. It may indicate that it’s 40% sure it’s a beagle, 30% sure it’s a dachshund, 20% sure it’s a Doberman, and 10% sure it’s a German shepherd.
Suppose it is a dachshund. The machine can autocorrect: it adjusts the weights and biases and tries again. This time, the machine indicates that it's 80% sure it's a dachshund and 20% sure it's a beagle. This time it is correct, and no further adjustment is needed. Of course, the machine may need to make further adjustments later if it makes another mistake.
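Where do percentage scores like "40% beagle, 30% dachshund" come from? One common way (an assumption here; the article doesn't name it) is the softmax function, which converts a network's raw output scores into probabilities that sum to 1. The raw scores below are made up for illustration.

```python
import math

def softmax(scores):
    # Exponentiate each raw score, then normalize so the results sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for [beagle, dachshund, doberman, german shepherd].
probs = softmax([2.0, 1.7, 1.3, 0.6])
```

Training then nudges the weights and biases so that, for a dachshund picture, the dachshund score ends up highest.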
The good news is that during the machine learning process, the artificial neural network does most of the heavy lifting. It turns the dials up and down to make the necessary adjustments. You just need to make sure that you give it a good starting point by assigning random weights and that you continue to feed it relevant input to enable it to make further adjustments.
Frequently Asked Questions
What are neural network weights and biases?
Neural network weights are the parameters that determine the strength of the connection between neurons in the network.
Biases are parameters that allow the model to shift the activation function to better fit the data during the training process.
Why are weights and biases important in deep learning?
Weights and biases are important because they help the neural network learn the relationship between the input data and the desired output.
By adjusting these parameters during training, the network can make accurate predictions and optimize its performance.
How are weights initialized in a neural network?
Weights in a neural network are typically initialized with small random values. This initialization is important because it affects the learning process and the convergence of the model to the best solution possible.
How are weights updated during training?
Weights are updated during the training process using optimization algorithms like gradient descent.
The algorithm adjusts the weights by computing the gradient of the loss function with respect to each weight and updating them in the direction that minimizes the error.
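The update rule described above can be written in one line: subtract the learning rate times the gradient from each weight. This is a minimal sketch of a single gradient descent step with made-up numbers; real training computes the gradients via backpropagation.

```python
def gradient_descent_step(weights, gradients, learning_rate=0.1):
    # Move each weight a small step opposite its gradient to reduce the loss.
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

# With learning rate 0.1: 0.5 - 0.1*1.0 and -0.2 - 0.1*(-0.5),
# giving approximately [0.4, -0.15].
updated = gradient_descent_step([0.5, -0.2], [1.0, -0.5])
```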
What is the significance of the activation function in neural networks?
The activation function determines whether a neuron should be activated or not.
It introduces non-linearity into the model, which enables the network to learn more complex patterns in the data. Common activation functions include ReLU and Sigmoid.
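Both activation functions named above are short enough to write out directly. ReLU passes positive values through and zeroes out negatives; sigmoid squashes any real number into the range (0, 1).

```python
import math

def relu(x):
    # Rectified linear unit: max(0, x).
    return max(0.0, x)

def sigmoid(x):
    # Logistic function: maps any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))
```

Because neither function is a straight line over its whole domain, stacking layers of such neurons lets the network represent curves and more complex patterns, not just linear relationships.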
How do overfitting and regularization relate to weights in neural networks?
Overfitting occurs when a neural network learns the noise in the training data instead of the actual patterns. Regularization techniques, such as L1 and L2 penalties, discourage excessively large weights, which helps the network generalize to new data instead of memorizing the training set.
What is the role of biases in a perceptron?
In a perceptron, biases are added to the weighted sum of inputs before applying the activation function.
The bias helps in adjusting the position of the decision boundary, which allows the perceptron to better classify the input data.
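A tiny worked example of the perceptron described above: with weights of 1 on both inputs, the bias alone decides where the decision boundary sits. The choice of bias -1.5 (which makes the perceptron compute logical AND) is an illustrative assumption.

```python
def perceptron(inputs, weights, bias):
    # Step activation over the biased weighted sum.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total >= 0 else 0

# With weights [1, 1] and bias -1.5, only the input (1, 1) pushes the
# sum (2 - 1.5 = 0.5) past zero, so the perceptron acts as an AND gate.
assert perceptron([1, 1], [1, 1], -1.5) == 1
assert perceptron([1, 0], [1, 1], -1.5) == 0
```

Changing the bias to -0.5 would shift the boundary so any single active input fires the neuron (an OR gate), which is exactly the "adjusting the position of the decision boundary" role described above.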
Can you explain the concept of weights in the context of predictive modeling?
In predictive modeling, weights are used to quantify the importance of each input feature in predicting the target variable.
By optimizing these weights during training, the model learns to make more accurate predictions based on the input features.

This is my weekly newsletter that I call The Deep End because I want to go deeper than results you’ll see from searches or LLMs. Each week I’ll go deep to explain a topic that’s relevant to people who work with technology. I’ll be posting about artificial intelligence, data science, and ethics.
This newsletter is 100% human written 💪 (* aside from a quick run through grammar and spell check).
More sources
- https://www.geeksforgeeks.org/the-role-of-weights-and-bias-in-neural-networks/
- https://towardsdatascience.com/whats-the-role-of-weights-and-bias-in-a-neural-network-4cf7e9888a0f
- https://www.turing.com/kb/necessity-of-bias-in-neural-networks
- https://stackoverflow.com/questions/2480650/what-is-the-role-of-the-bias-in-neural-networks
- https://www.geeksforgeeks.org/difference-between-deterministic-and-non-deterministic-algorithms/
- https://www.larksuite.com/en_us/topics/ai-glossary/nondeterministic-algorithm
- https://www.tutorialspoint.com/difference-between-deterministic-and-non-deterministic-algorithms
- https://www.geeksforgeeks.org/ml-architecture-and-learning-process-in-neural-network/
- https://towardsdatascience.com/learning-process-of-a-deep-neural-network-5a9768d7a651
- https://towardsdatascience.com/how-neural-network-learn-3b56c175b5ca