🎯 Focusing
Computational imaging / machine learning researcher
- Luxembourg (UTC+01:00)
- ggluo.github.io
- in/guanxiong-luo
Pinned
- Minimal_Flash_Attention (Public): a minimal CUDA library for flash-attention inference.
- Self-Diffusion (Public): self-diffusion for solving inverse problems without the need for pretrained priors. Python, 13 stars.
- mrirecon/spreco (Public archive)
- mrirecon/aid (Public)
- TensorRT-Cpp-Example (Public): a C++/C TensorRT inference example for models created with PyTorch/JAX/TF.
- Minimal_Softmax (Public): a minimal CUDA library for softmax practice. CUDA.

