Neural networks: Nodes and hidden layers
To build a neural network that learns nonlinearities, begin with the following familiar model structure: a linear model of the form $y' = b + w_1x_1 + w_2x_2 + w_3x_3$.
We can visualize this equation as shown below, where $x_1$, $x_2$, and $x_3$ are our three input nodes (in blue), and $y'$ is our output node (in green).
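The linear model above can be sketched in a few lines of Python. The weight and bias values here are illustrative placeholders, not the randomly initialized values used in the interactive widget:

```python
# Sketch of the linear model y' = b + w1*x1 + w2*x2 + w3*x3.
# The weights and bias below are illustrative, not the widget's values.
def linear_model(x, weights, bias):
    """Return the weighted sum of the inputs plus the bias."""
    return bias + sum(w * xi for w, xi in zip(weights, x))

# Example: the input values used in Exercise 1.
y = linear_model(x=[1.00, 2.00, 3.00], weights=[0.5, -0.2, 0.3], bias=0.1)
print(round(y, 2))  # 0.1 + 0.5 - 0.4 + 0.9 = 1.1
```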
Exercise 1
In the model above, the weight and bias values have been randomly initialized. Perform the following tasks to familiarize yourself with the interface and explore the linear model. You can ignore the Activation Function dropdown for now; we'll discuss this topic later on in the module.
Click the Play (▶️) button above the network to calculate the value of the output node for the input values $x_1 = 1.00$, $x_2 = 2.00$, and $x_3 = 3.00$.
Click the second node in the input layer, and increase its value from 2.00 to 2.50. Note that the value of the output node changes. Select the output node (in green) and review the Calculations panel to see how the output value was calculated.
Notes about calculations:
- Values displayed are rounded to the hundredths place.
- The Linear() function simply returns the value it is passed.
Click the output node (in green) to see the weight ($w_1$, $w_2$, $w_3$) and bias ($b$) parameter values. Decrease the weight value for $w_3$ (again, note that the value of the output node and the calculations below have changed). Then, increase the bias value. Review how these changes have affected the model output.
Adding layers to the network
Note that when you adjusted the weight and bias values of the network in Exercise 1, that didn't change the overall mathematical relationship between input and output. Our model is still a linear model.
But what if we add another layer to the network, in between the input layer and the output layer? In neural network terminology, additional layers between the input layer and the output layer are called hidden layers, and the nodes in these layers are called neurons.
The value of each neuron in the hidden layer is calculated the same way as the output of a linear model: take each of its inputs (the nodes in the previous network layer), multiply each input by a unique weight parameter, sum those products, and add the bias. Similarly, the neurons in the next layer (here, the output layer) are calculated using the hidden layer's neuron values as inputs.
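The per-layer computation just described can be sketched as a single helper applied twice, once for the hidden layer and once for the output layer. All weight and bias values below are illustrative assumptions, not the widget's parameters:

```python
# Forward pass through one hidden layer: each hidden neuron is a linear
# model of the inputs, and the output node is a linear model of the
# hidden neuron values. All weights and biases are illustrative.
def dense(inputs, weights, biases):
    """One fully connected layer: each row of `weights` feeds one neuron."""
    return [b + sum(w * x for w, x in zip(ws, inputs))
            for ws, b in zip(weights, biases)]

x = [1.00, 2.00, 3.00]

# Hidden layer with four neurons (four weight rows, four biases).
hidden = dense(x,
               weights=[[0.1, 0.2, 0.3],
                        [0.4, 0.5, 0.6],
                        [0.7, 0.8, 0.9],
                        [0.2, 0.1, 0.0]],
               biases=[0.1, 0.2, 0.3, 0.4])

# Output layer: one neuron fed by the four hidden values.
output = dense(hidden, weights=[[0.25, -0.5, 0.75, 1.0]], biases=[0.05])
print([round(h, 2) for h in hidden], round(output[0], 2))
```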
This new hidden layer allows our model to recombine the input data using anotherset of parameters. Can this help our model learn nonlinear relationships?
Exercise 2
We've added a hidden layer containing four neurons to the model.
Click the Play (▶️) button above the network to calculate the value of the four hidden-layer nodes and the output node for the input values $x_1 = 1.00$, $x_2 = 2.00$, and $x_3 = 3.00$.
Then explore the model, and use it to answer the following questions.
Try modifying the model parameters, and observe the effect on the hidden-layer node values and the output value (you can review the Calculations panel below to see how these values were calculated).
Can this model learn nonlinearities?
If you click on each of the nodes in the hidden layer and review the calculations below, you'll see that all of them are linear (comprising multiplication and addition operations).
If you then click on the output node and review the calculation below, you'll see that this calculation is also linear. Linear calculations performed on the output of linear calculations are also linear, which means this model cannot learn nonlinearities.
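The collapse of stacked linear layers into a single linear model can be verified numerically: composing $W_2(W_1x + b_1) + b_2$ gives $(W_2W_1)x + (W_2b_1 + b_2)$, itself one linear model. The weights below are illustrative:

```python
# Demonstrate that two linear layers collapse into one linear model:
# W2·(W1·x + b1) + b2  ==  (W2·W1)·x + (W2·b1 + b2).
# All parameter values are illustrative.
def dense(inputs, weights, biases):
    return [b + sum(w * x for w, x in zip(ws, inputs))
            for ws, b in zip(weights, biases)]

W1 = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]   # hidden layer: 2 neurons
b1 = [0.1, 0.2]
W2 = [[0.7, -0.3]]                         # output layer: 1 neuron
b2 = [0.05]

x = [1.0, 2.0, 3.0]
two_layer = dense(dense(x, W1, b1), W2, b2)[0]

# Fold both layers into one equivalent linear model of x.
W = [sum(W2[0][j] * W1[j][i] for j in range(2)) for i in range(3)]
b = b2[0] + sum(W2[0][j] * b1[j] for j in range(2))
one_layer = b + sum(w * xi for w, xi in zip(W, x))

print(abs(two_layer - one_layer) < 1e-9)  # True
```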
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-03 UTC.