positional-encoding
Here are 75 public repositories matching this topic...
Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch
- Updated Jul 27, 2025 - Python
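The rotary scheme this repository implements rotates each pair of feature channels by an angle proportional to the token's position, so relative offsets show up in dot products. A minimal sketch (NumPy for self-containment; the repository itself is PyTorch, and the pairing convention below is one common variant, not necessarily the repo's exact layout):

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    """Apply RoFormer-style rotary position embeddings to x of shape (seq_len, dim).
    Channel i of the first half is paired with channel i of the second half,
    and each pair is rotated by an angle that grows with position."""
    seq_len, dim = x.shape
    half = dim // 2
    # per-pair rotation frequencies, decaying geometrically across channels
    inv_freq = 1.0 / (base ** (np.arange(half) / half))
    # rotation angles, shape (seq_len, half): position * frequency
    angles = np.outer(np.arange(seq_len), inv_freq)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # standard 2-D rotation applied to each channel pair
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)
```

Because each pair undergoes a pure rotation, norms are preserved and position 0 is left unchanged, which is an easy sanity check when porting this to queries and keys in attention.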
Implement Llama 3 inference step by step: grasp the core concepts, master the derivation, and write the code.
- Updated Feb 24, 2025 - Jupyter Notebook
Cameras as Relative Positional Encoding
- Updated Oct 20, 2025 - Python
Official implementation for "DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion".
- Updated Nov 25, 2025 - Python
PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces (CVPR 2023)
- Updated May 2, 2024 - Python
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
- Updated Feb 10, 2022 - Python
[CVPR 2021] Adversarial Generation of Continuous Images
- Updated Nov 10, 2021 - Python
[CVPR 2023] This is the official PyTorch implementation for "Dynamic Focus-aware Positional Queries for Semantic Segmentation".
- Updated Mar 4, 2023 - Python
Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding
- Updated Sep 30, 2024 - Python
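The Fourier-feature approach behind this entry maps a multi-dimensional position through a linear projection and then through sin/cos, giving a smooth encoding of arbitrary-dimensional coordinates. A hedged sketch (the projection `W` is learnable in the paper's setting; here it is randomly initialized purely for illustration):

```python
import numpy as np

def fourier_features(pos, W):
    """Encode positions of shape (N, D) as Fourier features [cos(pos @ W), sin(pos @ W)].
    W has shape (D, F); in the learnable variant W is a trained parameter,
    here it is just a fixed random matrix for demonstration."""
    proj = pos @ W  # (N, F) projected coordinates
    # concatenate cosine and sine parts, scaled so feature norms stay bounded
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1) / np.sqrt(W.shape[1])

rng = np.random.default_rng(0)
positions = rng.normal(size=(5, 2))   # e.g. 2-D spatial coordinates
W = rng.normal(size=(2, 8))           # projection to 8 frequencies -> 16-dim encoding
features = fourier_features(positions, W)
```

In a full model these features typically pass through a small MLP before being added to token embeddings; the sketch stops at the encoding itself.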
Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch
- Updated Dec 28, 2022 - Python
Trading Positional Complexity vs Deepness in Coordinate Networks
- Updated Sep 2, 2023 - Jupyter Notebook
"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" by Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.
- Updated May 7, 2024 - Python
Developed the ViViT model for medical video classification, enhancing 3D organ image analysis using transformer-based architectures.
- Updated May 22, 2024 - Jupyter Notebook
This repository offers a comprehensive overview and quantitative benchmarking of positional encoding methods in transformer-based time series models.
- Updated Nov 4, 2025 - Jupyter Notebook
Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Long-Range and Hierarchical Structures
- Updated Oct 27, 2023 - Python
Context-aware Biases for Length Extrapolation
- Updated May 24, 2025 - Python
This repository is positioned as an AI paper reproduction / from-scratch Transformer implementation. The code follows the module breakdown of the original paper, covering all components (positional encoding, multi-head attention, feed-forward network, encoder-decoder), with detailed Chinese walkthrough docs and English comments for learning and further development.
- Updated May 3, 2025 - Python
PyTorch implementation of "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
- Updated Aug 26, 2023 - Python
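The positional encoding in "Attention Is All You Need" is fixed and sinusoidal: PE[pos, 2i] = sin(pos / 10000^(2i/d_model)) and PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model)). A minimal sketch of that formula (NumPy for self-containment; implementations like the one above typically produce the same table as a PyTorch buffer):

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model, base=10000.0):
    """Build the (seq_len, d_model) sinusoidal positional-encoding table from
    "Attention Is All You Need": sines on even channels, cosines on odd channels,
    with wavelengths forming a geometric progression. Assumes d_model is even."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2) channel-pair index
    angles = pos / (base ** (2 * i / d_model)) # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even channels: sine
    pe[:, 1::2] = np.cos(angles)               # odd channels: cosine
    return pe
```

The resulting table is added to the token embeddings before the first encoder layer; since it is deterministic, it is computed once and cached rather than learned.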
🧮 Algebraic Positional Encodings.
- Updated Aug 20, 2025 - Python
A clean, ground-up implementation of the Transformer architecture in PyTorch, including positional encoding, multi-head attention, encoder-decoder layers, and masking. Great for learning or building upon the core model.
- Updated Aug 30, 2025 - Jupyter Notebook