Commit babadc8

Merge pull request #268 from star1327p/sentiment-blog

2 parents 74f1ba5 + ca0f62b, commit babadc8

File tree

1 file changed: +1 −1 lines changed


content/tutorial-nlp-from-scratch.md

Lines changed: 1 addition & 1 deletion
@@ -652,7 +652,7 @@ def initialize_grads(parameters):
     return grads
 ```
 
-Now, for each gate and the fully connected layer, we define a function to calculate the gradient of the loss with respect to the input passed and the parameters used. To understand the mathematics behind how the derivatives were calculated we suggest you to follow this helpful [blog](https://christinakouridi.blog/2019/06/19/backpropagation-lstm/) by Christina Kouridi.
+Now, for each gate and the fully connected layer, we define a function to calculate the gradient of the loss with respect to the input passed and the parameters used. To understand the mathematics behind how the derivatives were calculated we suggest you to follow this helpful [blog](https://christinakouridi.github.io/posts/backprop-lstm/) by Christina Kouridi.
 
 
 Define a function to calculate the gradients in the **Forget Gate**:
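For orientation, the changed paragraph sits just before the tutorial's forget-gate gradient function. Below is a minimal sketch of what such a forget-gate backward step typically looks like in NumPy; the function name, the `fgw`/`dfgw`/`dfgb` dictionary keys, and the column-vector shape convention are assumptions for illustration and are not taken from the tutorial itself.

```python
import numpy as np

def calculate_forget_gate_gradient(d_cell_state, prev_cell_state,
                                    forget_activation, concat_input,
                                    parameters, grads):
    """Illustrative forget-gate backward pass (names and keys are hypothetical).

    d_cell_state      : gradient of the loss w.r.t. the current cell state
    prev_cell_state   : cell state from the previous time step
    forget_activation : sigmoid output of the forget gate at this step
    concat_input      : previous hidden state concatenated with the current input
    """
    # The forget gate scales the previous cell state, so its local gradient is
    # d_cell_state * prev_cell_state, pushed back through the sigmoid derivative.
    d_preactivation = (d_cell_state * prev_cell_state
                       * forget_activation * (1.0 - forget_activation))

    # Accumulate parameter gradients for the forget-gate weights and bias
    # (dictionary keys "dfgw"/"dfgb" are assumed for this sketch).
    grads["dfgw"] += np.dot(d_preactivation, concat_input.T)
    grads["dfgb"] += np.sum(d_preactivation, axis=1, keepdims=True)

    # Gradient flowing back into the concatenated [hidden state, input].
    d_concat_input = np.dot(parameters["fgw"].T, d_preactivation)
    return d_concat_input, grads
```

The same pattern repeats for the input, output, and candidate gates, differing only in which upstream gradient multiplies the gate's activation derivative, which is why the tutorial defines one small function per gate.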

0 commit comments
