pygad.nn Module

This section of the PyGAD library's documentation discusses the pygad.nn module.

Using the pygad.nn module, artificial neural networks are created. The purpose of this module is to only implement the forward pass of a neural network without using a training algorithm. The pygad.nn module builds the network layers, implements the activation functions, makes predictions, and more. Although the module offers a train() function, that function only makes small learning-rate-based changes to the weights; it does not implement a learning algorithm.

Later, the pygad.gann module is used to train the pygad.nn network using the genetic algorithm built in the pygad module.

Starting from PyGAD 2.7.1, the pygad.nn module supports both classification and regression problems. For more information, check the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions.

Supported Layers

Each layer supported by the pygad.nn module has a corresponding class. The layers and their classes are:

  1. Input: Implemented using the pygad.nn.InputLayer class.

  2. Dense (Fully Connected): Implemented using the pygad.nn.DenseLayer class.

In the future, more layers will be added. The next subsections discuss such layers.

pygad.nn.InputLayer Class

The pygad.nn.InputLayer class creates the input layer for the neural network. For each network, there is only a single input layer. The network architecture must start with an input layer.

This class has no methods or class attributes. All it has is a constructor that accepts a parameter named num_neurons representing the number of neurons in the input layer.

An instance attribute named num_neurons is created within the constructor to keep such a number. Here is an example of building an input layer with 20 neurons.

input_layer = pygad.nn.InputLayer(num_neurons=20)

Here is how the single attribute num_neurons within the instance of the pygad.nn.InputLayer class can be accessed.

num_input_neurons = input_layer.num_neurons

print("Number of input neurons =", num_input_neurons)

This is everything about the input layer.

pygad.nn.DenseLayer Class

Using the pygad.nn.DenseLayer class, dense (fully connected) layers can be created. To create a dense layer, just create a new instance of the class. The constructor accepts the following parameters:

  • num_neurons: Number of neurons in the dense layer.

  • previous_layer: A reference to the previous layer. Using the previous_layer attribute, a linked list is created that connects all network layers.

  • activation_function: A string representing the activation function to be used in this layer. Defaults to "sigmoid". Currently, the supported values for the activation functions are "sigmoid", "relu", "softmax" (supported in PyGAD 2.3.0 and higher), and "None" (supported in PyGAD 2.7.0 and higher). When a layer has its activation function set to "None", it means no activation function is applied. For a regression problem, set the activation function of the output (last) layer to "None". If all outputs in the regression problem are nonnegative, then it is possible to use the ReLU function in the output layer.

Within the constructor, the accepted parameters are used as instance attributes. Besides the parameters, some new instance attributes are created, which are:

  • initial_weights: The initial weights for the dense layer.

  • trained_weights: The trained weights of the dense layer. This attribute is initialized by the value in the initial_weights attribute.

Here is an example of creating a dense layer with 12 neurons. Note that the previous_layer parameter is assigned the input layer input_layer.

dense_layer = pygad.nn.DenseLayer(num_neurons=12,
                                  previous_layer=input_layer,
                                  activation_function="relu")

Here is how to access some attributes in the dense layer:

num_dense_neurons = dense_layer.num_neurons
dense_initial_weights = dense_layer.initial_weights

print("Number of dense layer neurons =", num_dense_neurons)
print("Initial weights of the dense layer :", dense_initial_weights)

Because dense_layer holds a reference to the input layer, the number of input neurons can be accessed.

input_layer = dense_layer.previous_layer
num_input_neurons = input_layer.num_neurons

print("Number of input neurons =", num_input_neurons)

Here is another dense layer. This dense layer's previous_layer attribute points to the previously created dense layer.

dense_layer2 = pygad.nn.DenseLayer(num_neurons=5,
                                   previous_layer=dense_layer,
                                   activation_function="relu")

Because dense_layer2 holds a reference to dense_layer in its previous_layer attribute, the number of neurons in dense_layer can be accessed.

dense_layer = dense_layer2.previous_layer
dense_layer_neurons = dense_layer.num_neurons

print("Number of dense neurons =", dense_layer_neurons)

After getting the reference to dense_layer, we can use it to access the number of input neurons.

dense_layer = dense_layer2.previous_layer
input_layer = dense_layer.previous_layer
num_input_neurons = input_layer.num_neurons

print("Number of input neurons =", num_input_neurons)

Assuming that dense_layer2 is the last dense layer, it is regarded as the output layer.

previous_layer Attribute

The previous_layer attribute in the pygad.nn.DenseLayer class creates a one-way linked list between all the layers in the network architecture, as described by the next figure.

The last (output) layer, indexed N, points to layer N-1; layer N-1 points to layer N-2; layer N-2 points to layer N-3; and so on, until reaching the end of the linked list, which is layer 1 (the input layer).

The one-way linked list allows returning all properties of all layers in the network architecture by just passing the last layer in the network. The linked list moves from the output layer towards the input layer.

Using the previous_layer attribute of layer N, layer N-1 can be accessed. Using the previous_layer attribute of layer N-1, layer N-2 can be accessed. The process continues until reaching a layer that does not have a previous_layer attribute (which is the input layer).

The properties of the layers include the weights (initial or trained), activation functions, and more. Here is how a while loop is used to iterate through all the layers. The while loop stops only when the current layer does not have a previous_layer attribute. This layer is the input layer.

layer = dense_layer2

while "previous_layer" in layer.__init__.__code__.co_varnames:
    print("Number of neurons =", layer.num_neurons)

    # Go to the previous layer.
    layer = layer.previous_layer

Functions to Manipulate Neural Networks

There are a number of functions in the pygad.nn module that help to manipulate the neural network.

pygad.nn.layers_weights()

Creates and returns a list holding the weights matrices of all layers in the neural network.

Accepts the following parameters:

  • last_layer: A reference to the last (output) layer in the network architecture.

  • initial: When True (default), the function returns the initial weights of the layers using the layers' initial_weights attribute. When False, it returns the trained weights of the layers using the layers' trained_weights attribute. The initial weights are only needed before network training starts. The trained weights are needed to predict the network outputs.

The function uses a while loop to iterate through the layers using their previous_layer attribute. For each layer, either the initial weights or the trained weights are returned based on whether the initial parameter is True or False.
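For example, assuming a network was already built and output_layer references its last layer (as in the earlier examples), a minimal usage sketch is:

# Fetch the trained weights matrices of all layers.
trained_weights = pygad.nn.layers_weights(last_layer=output_layer,
                                          initial=False)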

pygad.nn.layers_weights_as_vector()

Creates and returns a list holding the weights vectors of all layers in the neural network. The weights array of each layer is reshaped to get a vector.

This function is similar to the layers_weights() function except that it returns the weights of each layer as a vector, not as an array.

Accepts the following parameters:

  • last_layer: A reference to the last (output) layer in the network architecture.

  • initial: When True (default), the function returns the initial weights of the layers using the layers' initial_weights attribute. When False, it returns the trained weights of the layers using the layers' trained_weights attribute. The initial weights are only needed before network training starts. The trained weights are needed to predict the network outputs.

The function uses a while loop to iterate through the layers using their previous_layer attribute. For each layer, either the initial weights or the trained weights are returned based on whether the initial parameter is True or False.
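A minimal usage sketch, again assuming output_layer references the network's last layer:

# Fetch the initial weights of all layers, one vector per layer.
initial_weights_vectors = pygad.nn.layers_weights_as_vector(last_layer=output_layer,
                                                            initial=True)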

pygad.nn.layers_weights_as_matrix()

Converts the network weights from vectors to matrices.

Compared to the layers_weights_as_vector() function, which only accepts a reference to the last layer and returns the network weights as vectors, this function accepts a reference to the last layer in addition to a list holding the weights as vectors. Such vectors are converted into matrices.

Accepts the following parameters:

  • last_layer: A reference to the last (output) layer in the network architecture.

  • vector_weights: The network weights as vectors where the weights of each layer form a single vector.

The function uses a while loop to iterate through the layers using their previous_layer attribute. For each layer, the shape of its weights array is returned. This shape is used to reshape the weights vector of the layer into a matrix.
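A minimal usage sketch, reusing the initial_weights_vectors list returned by pygad.nn.layers_weights_as_vector() in the previous sketch:

# Convert the per-layer weight vectors back into matrices.
weights_matrices = pygad.nn.layers_weights_as_matrix(last_layer=output_layer,
                                                     vector_weights=initial_weights_vectors)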

pygad.nn.layers_activations()

Creates and returns a list holding the names of the activation functionsof all layers in the neural network.

Accepts the following parameter:

  • last_layer: A reference to the last (output) layer in the network architecture.

The function uses a while loop to iterate through the layers using their previous_layer attribute. For each layer, the name of the activation function used is returned using the layer's activation_function attribute.
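A minimal usage sketch, assuming output_layer references the network's last layer:

# One activation function name is returned per layer.
activations = pygad.nn.layers_activations(last_layer=output_layer)
print(activations)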

pygad.nn.sigmoid()

Applies the sigmoid function and returns its result.

Accepts the following parameter:

  • sop: The input to which the sigmoid function is applied.
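For reference, here is a NumPy sketch of what the sigmoid computes; the module's own implementation may differ in details.

import numpy

def sigmoid_sketch(sop):
    # sigmoid(x) = 1 / (1 + exp(-x)), applied element-wise.
    return 1.0 / (1.0 + numpy.exp(-1 * sop))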

pygad.nn.relu()

Applies the rectified linear unit (ReLU) function and returns its result.

Accepts the following parameter:

  • sop: The input to which the ReLU function is applied.
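For reference, here is a NumPy sketch of the ReLU computation; the module's own implementation may differ in details.

import numpy

def relu_sketch(sop):
    # ReLU(x) = max(x, 0), applied element-wise.
    return numpy.maximum(sop, 0)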

pygad.nn.softmax()

Applies the softmax function and returns its result.

Accepts the following parameter:

  • sop: The input to which the softmax function is applied.
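For reference, here is a NumPy sketch of the softmax computation. Subtracting the maximum before exponentiation is a common numerical-stability trick and may not match the module's exact implementation.

import numpy

def softmax_sketch(sop):
    # softmax(x_i) = exp(x_i) / sum(exp(x)), applied over the vector.
    e = numpy.exp(sop - numpy.max(sop))
    return e / numpy.sum(e)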

pygad.nn.train()

Trains the neural network.

Accepts the following parameters:

  • num_epochs: Number of epochs.

  • last_layer: Reference to the last (output) layer in the network architecture.

  • data_inputs: Data features.

  • data_outputs: Data outputs.

  • problem_type: The type of the problem, which can be either "classification" or "regression". Added in PyGAD 2.7.0 and higher.

  • learning_rate: Learning rate.

For each epoch, all the data samples are fed to the network to return their predictions. After each epoch, the weights are updated using only the learning rate. No learning algorithm is used because the purpose of this project is to only build the forward pass of a neural network.

pygad.nn.update_weights()

Calculates and returns the updated weights. Even though no training algorithm is used in this project, the weights are updated using the learning rate. It is not the best way to update the weights, but it is better than keeping the weights unchanged, as it makes some small changes to them.

Accepts the following parameters:

  • weights: The current weights of the network.

  • network_error: The network error.

  • learning_rate: The learning rate.
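A minimal usage sketch, where current_weights and error are hypothetical placeholders for the current network weights and the measured network error; only the parameter names are taken from the documented signature.

# current_weights and error are placeholders produced by the training loop.
new_weights = pygad.nn.update_weights(weights=current_weights,
                                      network_error=error,
                                      learning_rate=0.01)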

pygad.nn.update_layers_trained_weights()

After the network weights are trained, this function updates the trained_weights attribute of each layer by the weights calculated after passing all the epochs (such weights are passed in the final_weights parameter).

By just passing a reference to the last layer in the network (i.e. the output layer) in addition to the final weights, this function updates the trained_weights attribute of all layers.

Accepts the following parameters:

  • last_layer: A reference to the last (output) layer in the network architecture.

  • final_weights: An array of weights of all layers in the network after passing through all the epochs.

The function uses a while loop to iterate through the layers using their previous_layer attribute. For each layer, its trained_weights attribute is assigned the weights of the layer from the final_weights parameter.
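A minimal usage sketch, where final_weights is a hypothetical placeholder for the weights produced after the last epoch and output_layer references the network's last layer:

# final_weights is a placeholder for the weights after the last epoch.
pygad.nn.update_layers_trained_weights(last_layer=output_layer,
                                       final_weights=final_weights)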

pygad.nn.predict()

Uses the trained weights for predicting the samples’ outputs. It returnsa list of the predicted outputs for all samples.

Accepts the following parameters:

  • last_layer: A reference to the last (output) layer in the network architecture.

  • data_inputs: Data features.

  • problem_type: The type of the problem, which can be either "classification" or "regression". Added in PyGAD 2.7.0 and higher.

All the data samples are fed to the network to return their predictions.

Helper Functions

There are functions in the pygad.nn module that do not directly manipulate the neural networks.

pygad.nn.to_vector()

Converts the NumPy array (of any dimensionality) passed to its array parameter into a 1D vector and returns the vector.

Accepts the following parameter:

  • array: The NumPy array to be converted into a 1D vector.
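A minimal usage sketch:

import numpy
import pygad.nn

# Flatten a 3x4 array into a 1D vector of 12 elements.
arr = numpy.zeros(shape=(3, 4))
vec = pygad.nn.to_vector(array=arr)
print(vec.shape) # Expected to be (12,).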

pygad.nn.to_array()

Converts the vector passed to its vector parameter into a NumPy array and returns the array.

Accepts the following parameters:

  • vector: The 1D vector to be converted into an array.

  • shape: The target shape of the array.
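A minimal usage sketch, reshaping the vector from the previous sketch back into a 3x4 array:

# Restore the original shape of the flattened array.
arr_again = pygad.nn.to_array(vector=vec,
                              shape=(3, 4))
print(arr_again.shape) # Expected to be (3, 4).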

Supported Activation Functions

The supported activation functions are:

  1. Sigmoid: Implemented using the pygad.nn.sigmoid() function.

  2. Rectified Linear Unit (ReLU): Implemented using the pygad.nn.relu() function.

  3. Softmax: Implemented using the pygad.nn.softmax() function.

Steps to Build a Neural Network

This section discusses how to use the pygad.nn module for building a neural network. The summary of the steps is as follows:

  • Reading the Data

  • Building the Network Architecture

  • Training the Network

  • Making Predictions

  • Calculating Some Statistics

Reading the Data

Before building the network architecture, the first thing to do is to prepare the data that will be used for training the network.

In this example, 4 classes of the Fruits360 dataset are used for preparing the training data. The 4 classes are:

  1. Apple Braeburn: This class's data is available at https://github.com/ahmedfgad/NumPyANN/tree/master/apple

  2. Lemon Meyer: This class's data is available at https://github.com/ahmedfgad/NumPyANN/tree/master/lemon

  3. Mango: This class's data is available at https://github.com/ahmedfgad/NumPyANN/tree/master/mango

  4. Raspberry: This class's data is available at https://github.com/ahmedfgad/NumPyANN/tree/master/raspberry

The features of these 4 classes are extracted according to the next code. This code reads the raw images of the 4 classes of the dataset, prepares the features and the outputs as NumPy arrays, and saves the arrays in 2 files.

This code extracts a feature vector from each image representing the color histogram of the HSV space's hue channel.

import numpy
import skimage.io, skimage.color, skimage.feature
import os

fruits = ["apple", "raspberry", "mango", "lemon"]
# Number of samples in the dataset used = 492+490+490+490 = 1,962
# 360 is the length of the feature vector.
dataset_features = numpy.zeros(shape=(1962, 360))
outputs = numpy.zeros(shape=(1962))

idx = 0
class_label = 0
for fruit_dir in fruits:
    curr_dir = os.path.join(os.path.sep, fruit_dir)
    all_imgs = os.listdir(os.getcwd() + curr_dir)
    for img_file in all_imgs:
        if img_file.endswith(".jpg"): # Ensures reading only JPG files.
            fruit_data = skimage.io.imread(fname=os.path.sep.join([os.getcwd(), curr_dir, img_file]),
                                           as_gray=False)
            fruit_data_hsv = skimage.color.rgb2hsv(rgb=fruit_data)
            hist = numpy.histogram(a=fruit_data_hsv[:, :, 0], bins=360)
            dataset_features[idx, :] = hist[0]
            outputs[idx] = class_label
            idx = idx + 1
    class_label = class_label + 1

# Saving the extracted features and the outputs as NumPy files.
numpy.save("dataset_features.npy", dataset_features)
numpy.save("outputs.npy", outputs)

To save your time, the training data is already prepared, and the 2 files created by the previous code are available for download at these links:

  1. dataset_features.npy: The features: https://github.com/ahmedfgad/NumPyANN/blob/master/dataset_features.npy

  2. outputs.npy: The class labels: https://github.com/ahmedfgad/NumPyANN/blob/master/outputs.npy

The outputs.npy file gives the following labels for the 4 classes:

  1. Apple Braeburn: Class label is 0

  2. Lemon Meyer: Class label is 1

  3. Mango: Class label is 2

  4. Raspberry: Class label is 3

The project has 4 folders holding the images for the 4 classes.

After the 2 files are created, just read them back to return the NumPy arrays according to the next 2 lines:

data_inputs = numpy.load("dataset_features.npy")
data_outputs = numpy.load("outputs.npy")

After the data is prepared, next is to create the network architecture.

Building the Network Architecture

The input layer is created by instantiating the pygad.nn.InputLayer class according to the next code. A network can only have a single input layer.

import pygad.nn

num_inputs = data_inputs.shape[1]

input_layer = pygad.nn.InputLayer(num_inputs)

After the input layer is created, next is to create a number of dense layers according to the next code. Normally, the last dense layer is regarded as the output layer. Note that the output layer has a number of neurons equal to the number of classes in the dataset, which is 4.

HL1_neurons = 150

hidden_layer = pygad.nn.DenseLayer(num_neurons=HL1_neurons,
                                   previous_layer=input_layer,
                                   activation_function="relu")
output_layer = pygad.nn.DenseLayer(num_neurons=4,
                                   previous_layer=hidden_layer,
                                   activation_function="softmax")

After both the data and the network architecture are prepared, the nextstep is to train the network.

Training the Network

Here is an example of using the pygad.nn.train() function.

pygad.nn.train(num_epochs=10,
               last_layer=output_layer,
               data_inputs=data_inputs,
               data_outputs=data_outputs,
               learning_rate=0.01)

After training the network, the next step is to make predictions.

Making Predictions

The pygad.nn.predict() function uses the trained network for making predictions. Here is an example.

predictions = pygad.nn.predict(last_layer=output_layer,
                               data_inputs=data_inputs)

It is not expected to have high accuracy in the predictions because notraining algorithm is used.

Calculating Some Statistics

Based on the predictions the network made, some statistics can be calculated, such as the number of correct and wrong predictions in addition to the classification accuracy.

num_wrong = numpy.where(predictions != data_outputs)[0]
num_correct = data_outputs.size - num_wrong.size
accuracy = 100 * (num_correct / data_outputs.size)

print(f"Number of correct classifications : {num_correct}.")
print(f"Number of wrong classifications : {num_wrong.size}.")
print(f"Classification accuracy : {accuracy}.")

It is very important to note that it is not expected that the classification accuracy is high because no training algorithm is used. Please check the documentation of the pygad.gann module for training the network using the genetic algorithm.

Examples

This section gives the complete code of some examples that build neural networks using pygad.nn. Each subsection builds a different network.

XOR Classification

This is an example of building a network with 1 hidden layer of 2 neurons that simulates the XOR logic gate. Because the XOR problem has 2 classes (0 and 1), the output layer has 2 neurons, one for each class.

import numpy
import pygad.nn

# Preparing the NumPy array of the inputs.
data_inputs = numpy.array([[1, 1],
                           [1, 0],
                           [0, 1],
                           [0, 0]])

# Preparing the NumPy array of the outputs.
data_outputs = numpy.array([0, 1, 1, 0])

# The number of inputs (i.e. feature vector length) per sample
num_inputs = data_inputs.shape[1]
# Number of outputs per sample
num_outputs = 2

HL1_neurons = 2

# Building the network architecture.
input_layer = pygad.nn.InputLayer(num_inputs)
hidden_layer1 = pygad.nn.DenseLayer(num_neurons=HL1_neurons,
                                    previous_layer=input_layer,
                                    activation_function="relu")
output_layer = pygad.nn.DenseLayer(num_neurons=num_outputs,
                                   previous_layer=hidden_layer1,
                                   activation_function="softmax")

# Training the network.
pygad.nn.train(num_epochs=10,
               last_layer=output_layer,
               data_inputs=data_inputs,
               data_outputs=data_outputs,
               learning_rate=0.01)

# Using the trained network for predictions.
predictions = pygad.nn.predict(last_layer=output_layer,
                               data_inputs=data_inputs)

# Calculating some statistics
num_wrong = numpy.where(predictions != data_outputs)[0]
num_correct = data_outputs.size - num_wrong.size
accuracy = 100 * (num_correct / data_outputs.size)

print(f"Number of correct classifications : {num_correct}.")
print(f"Number of wrong classifications : {num_wrong.size}.")
print(f"Classification accuracy : {accuracy}.")

Image Classification

This example is discussed in the Steps to Build a Neural Network section and its complete code is listed below.

Remember to either download or create the dataset_features.npy and outputs.npy files before running this code.

import numpy
import pygad.nn

# Reading the data features. Check the 'extract_features.py' script for extracting the features & preparing the outputs of the dataset.
data_inputs = numpy.load("dataset_features.npy") # Download from https://github.com/ahmedfgad/NumPyANN/blob/master/dataset_features.npy

# Optional step for filtering the features using the standard deviation.
features_STDs = numpy.std(a=data_inputs, axis=0)
data_inputs = data_inputs[:, features_STDs > 50]

# Reading the data outputs. Check the 'extract_features.py' script for extracting the features & preparing the outputs of the dataset.
data_outputs = numpy.load("outputs.npy") # Download from https://github.com/ahmedfgad/NumPyANN/blob/master/outputs.npy

# The number of inputs (i.e. feature vector length) per sample
num_inputs = data_inputs.shape[1]
# Number of outputs per sample
num_outputs = 4

HL1_neurons = 150
HL2_neurons = 60

# Building the network architecture.
input_layer = pygad.nn.InputLayer(num_inputs)
hidden_layer1 = pygad.nn.DenseLayer(num_neurons=HL1_neurons,
                                    previous_layer=input_layer,
                                    activation_function="relu")
hidden_layer2 = pygad.nn.DenseLayer(num_neurons=HL2_neurons,
                                    previous_layer=hidden_layer1,
                                    activation_function="relu")
output_layer = pygad.nn.DenseLayer(num_neurons=num_outputs,
                                   previous_layer=hidden_layer2,
                                   activation_function="softmax")

# Training the network.
pygad.nn.train(num_epochs=10,
               last_layer=output_layer,
               data_inputs=data_inputs,
               data_outputs=data_outputs,
               learning_rate=0.01)

# Using the trained network for predictions.
predictions = pygad.nn.predict(last_layer=output_layer,
                               data_inputs=data_inputs)

# Calculating some statistics
num_wrong = numpy.where(predictions != data_outputs)[0]
num_correct = data_outputs.size - num_wrong.size
accuracy = 100 * (num_correct / data_outputs.size)

print(f"Number of correct classifications : {num_correct}.")
print(f"Number of wrong classifications : {num_wrong.size}.")
print(f"Classification accuracy : {accuracy}.")

Regression Example 1

The next code listing builds a neural network for regression. Here is what to do to make the code work for regression:

  1. Set the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions to the string "regression".

pygad.nn.train(...,
               problem_type="regression")

predictions = pygad.nn.predict(...,
                               problem_type="regression")

  2. Set the activation function for the output layer to the string "None".

output_layer = pygad.nn.DenseLayer(num_neurons=num_outputs,
                                   previous_layer=hidden_layer1,
                                   activation_function="None")

  3. Calculate the prediction error according to your preferred error function. Here is how the mean absolute error is calculated.

abs_error = numpy.mean(numpy.abs(predictions - data_outputs))
print(f"Absolute error : {abs_error}.")

Here is the complete code. Yet, there is no algorithm used to train the network, and thus the network is expected to give bad results. Later, the pygad.gann module is used to train either regression or classification networks.

import numpy
import pygad.nn

# Preparing the NumPy array of the inputs.
data_inputs = numpy.array([[2, 5, -3, 0.1],
                           [8, 15, 20, 13]])

# Preparing the NumPy array of the outputs.
data_outputs = numpy.array([0.1, 1.5])

# The number of inputs (i.e. feature vector length) per sample
num_inputs = data_inputs.shape[1]
# Number of outputs per sample
num_outputs = 1

HL1_neurons = 2

# Building the network architecture.
input_layer = pygad.nn.InputLayer(num_inputs)
hidden_layer1 = pygad.nn.DenseLayer(num_neurons=HL1_neurons,
                                    previous_layer=input_layer,
                                    activation_function="relu")
output_layer = pygad.nn.DenseLayer(num_neurons=num_outputs,
                                   previous_layer=hidden_layer1,
                                   activation_function="None")

# Training the network.
pygad.nn.train(num_epochs=100,
               last_layer=output_layer,
               data_inputs=data_inputs,
               data_outputs=data_outputs,
               learning_rate=0.01,
               problem_type="regression")

# Using the trained network for predictions.
predictions = pygad.nn.predict(last_layer=output_layer,
                               data_inputs=data_inputs,
                               problem_type="regression")

# Calculating some statistics
abs_error = numpy.mean(numpy.abs(predictions - data_outputs))
print(f"Absolute error : {abs_error}.")

Regression Example 2 - Fish Weight Prediction

This example uses the Fish Market Dataset available at Kaggle (https://www.kaggle.com/aungpyaeap/fish-market). Simply download the CSV dataset from this link (https://www.kaggle.com/aungpyaeap/fish-market/download). The dataset is also available at the GitHub project of the pygad.nn module: https://github.com/ahmedfgad/NumPyANN

Using the Pandas library, the dataset is read using the read_csv() function.

data = numpy.array(pandas.read_csv("Fish.csv"))

The last 5 columns in the dataset are used as inputs and the Weight column is used as output.

# Preparing the NumPy array of the inputs.
data_inputs = numpy.asarray(data[:, 2:], dtype=numpy.float32)

# Preparing the NumPy array of the outputs.
data_outputs = numpy.asarray(data[:, 1], dtype=numpy.float32) # Fish Weight

Note how the activation function at the last layer is set to "None". Moreover, the problem_type parameter in the pygad.nn.train() and pygad.nn.predict() functions is set to "regression".

After the pygad.nn.train() function completes, the mean absolute error is calculated.

abs_error = numpy.mean(numpy.abs(predictions - data_outputs))
print(f"Absolute error : {abs_error}.")

Here is the complete code.

import numpy
import pygad.nn
import pandas

data = numpy.array(pandas.read_csv("Fish.csv"))

# Preparing the NumPy array of the inputs.
data_inputs = numpy.asarray(data[:, 2:], dtype=numpy.float32)

# Preparing the NumPy array of the outputs.
data_outputs = numpy.asarray(data[:, 1], dtype=numpy.float32) # Fish Weight

# The number of inputs (i.e. feature vector length) per sample
num_inputs = data_inputs.shape[1]
# Number of outputs per sample
num_outputs = 1

HL1_neurons = 2

# Building the network architecture.
input_layer = pygad.nn.InputLayer(num_inputs)
hidden_layer1 = pygad.nn.DenseLayer(num_neurons=HL1_neurons,
                                    previous_layer=input_layer,
                                    activation_function="relu")
output_layer = pygad.nn.DenseLayer(num_neurons=num_outputs,
                                   previous_layer=hidden_layer1,
                                   activation_function="None")

# Training the network.
pygad.nn.train(num_epochs=100,
               last_layer=output_layer,
               data_inputs=data_inputs,
               data_outputs=data_outputs,
               learning_rate=0.01,
               problem_type="regression")

# Using the trained network for predictions.
predictions = pygad.nn.predict(last_layer=output_layer,
                               data_inputs=data_inputs,
                               problem_type="regression")

# Calculating some statistics
abs_error = numpy.mean(numpy.abs(predictions - data_outputs))
print(f"Absolute error : {abs_error}.")