- Beijing, China
- https://soulteary.com
- @soulteary
Highlights
I'm an open-source software developer and blogger, and I'm often active in the Zhihu community. Since late 2022, I've been studying at an investment institution while collaborating with its partners and tech community allies to research and implement cutting-edge technologies.
I also serve as an ambassador for Dify and PerfXCloud in China. If you're interested in community partnerships, I welcome the opportunity to discuss potential collaborations. 👋
Previously, I held roles as: Open-Source Evangelist at Milvus; Technical Director of Community R&D and IT at the Beijing Artificial Intelligence Research Institute; Technical Evangelist at Meituan; Front-End Architect at X Financial; and Senior Front-End Development Engineer at Alibaba Cloud, Meituan, Taobao, and Sina Cloud.
I often use:
Docker | Go | Node.js | JavaScript | Bash | Python | PHP
Beyond these, I choose programming languages and tools based on mood and performance requirements, staying flexible in my selection.
I regularly share insights on Zhihu and Weibo. If you're active on these platforms, feel free to connect and engage with me.
Interested in networking? Learn more about making friends with me.
Pinned
- docker-flare
Flare ✨ A lightweight, high-performance, self-hosted navigation page; resource usage under 1% CPU and 30 MB of memory, Docker image under 10 MB.
- llama-docker-playground (forked from meta-llama/llama)
Quick-start LLaMA models with multiple methods, and fine-tune the 7B/65B models with one click.
- docker-prompt-generator
Using a model to generate prompts for model applications: a time-saving tool for generating image-generation prompts, supporting MidJourney, Stable Diffusion, and more.
- certs-maker
100% test coverage! A lightweight self-signed certificate generator, between 1.5 MB (executable) and 5 MB (Docker image) in size.
- docker-llama2-chat
Play with LLaMA2 (official / Chinese / INT4 / llama2.cpp) in only 3 steps! (no GPU / 5 GB vRAM / 8~14 GB vRAM)