ChatGPT has become the poster child for artificial intelligence and large language models everywhere, but if you want something more specialized, or you want something you can guarantee is private, it isn't your only option.
I've been running a handful of AIs on my own PC for a year now instead of paying for ChatGPT—here's how.
Why run an AI locally on your own PC?
ChatGPT is responsive, relatively smart, and continuously receives updates, so why mess with hosting your own large language model at all?
There are three big reasons: integration with my projects, privacy, and specialization.
ChatGPT costs money to use
If you're self-hosting a smart home and want to integrate ChatGPT into your system, you're going to have to pay for access. Depending on how much you use it, that could range from a few cents per month to hundreds of dollars.
Hosting your own AI doesn't completely solve that problem, since you still have to pay for electricity, but it does mean you won't see an unexpected jump in the cost of access or accidentally rack up a huge fee through overuse. Even the most powerful home PCs would struggle to cost more than a few dollars per day in electricity, and that assumes the system is running completely maxed out 24 hours a day.
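As a rough sanity check on that claim, here is the worst-case arithmetic. The 700W draw and $0.17/kWh rate below are illustrative assumptions, not measured figures; plug in your own system's draw and local rate:

```python
# Worst-case daily electricity cost for a maxed-out home PC.
# Assumed numbers: 700 W sustained draw, $0.17/kWh residential rate.
watts = 700            # sustained system draw under full load (assumed)
hours = 24             # running flat out all day
price_per_kwh = 0.17   # illustrative US residential rate

kwh_per_day = watts / 1000 * hours        # 16.8 kWh
cost_per_day = kwh_per_day * price_per_kwh

print(f"${cost_per_day:.2f} per day")     # about $2.86, even in this worst case
```

In practice the GPU idles between prompts, so real-world costs land well below this ceiling.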
Self-hosted AIs are private
ChatGPT is a fantastic tool, but it isn't private. If you're concerned about how your data might be used in the future, or if you're handling confidential information that cannot be shared outside your organization, a local AI is a fantastic option.
You can ensure that nothing leaves your PC, and so long as your PC is secure, you can be sure that data you provide won't be used for training some time in the future or leak because of a security bug.
Local AI can be fine-tuned to your needs
Not every AI or LLM is the same. If you ask Gemini or ChatGPT the same questions, you'll get slightly different answers. That sort of difference shows through in the AI you can host locally, too.
OpenAI's gpt-oss will provide different responses from Qwen3, and Gemma will provide different answers from Kimi. Additionally, these open models are subject to the same AI arms race as the commercial models. Some of them are just better at certain tasks than others, and which AI is best at which job changes with the technology and new releases.
The ability to quickly switch between models for a specific job is incredibly handy, and one I leverage a lot. If I need complex feedback on an idea, I reach for a larger model like Qwen3 32B. If I just need something to parse basic text, Gemma 3 4B is perfectly fine for the job.

If you're self-hosting AI to handle tasks in your homelab, delegating simple jobs to lighter LLMs is a great way to save on resources. Additionally, you can attach other AI, like those specialized in machine vision or natural language processing, to perform more specialized jobs.
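Here is a minimal sketch of what that delegation can look like. LM Studio can serve loaded models over an OpenAI-style chat API on a local port (1234 by default); the model identifiers and the routing rules below are made-up examples, not a fixed recipe:

```python
# Route light jobs to a small model and heavy jobs to a large one,
# all served by LM Studio's local OpenAI-compatible server.
import json
import urllib.request

MODELS = {
    "light": "gemma-3-4b",   # quick parsing and extraction (example name)
    "heavy": "qwen3-32b",    # complex feedback and reasoning (example name)
}

def pick_model(task: str) -> str:
    """Send simple parsing jobs to the small model, everything else up."""
    light_jobs = {"summarize", "extract", "classify"}
    return MODELS["light"] if task in light_jobs else MODELS["heavy"]

def chat(task: str, prompt: str, base_url: str = "http://localhost:1234/v1") -> str:
    """POST a chat completion to the local server (assumed default port)."""
    body = json.dumps({
        "model": pick_model(task),
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

print(pick_model("extract"))    # the small model handles this
print(pick_model("critique"))   # the large model handles this
```

Because the API shape matches OpenAI's, most existing ChatGPT integrations can be pointed at the local server just by changing the base URL.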
What do you need to host your own ChatGPT?
The first thing you need to run your own LLM is LM Studio, which provides a convenient interface to chat with an LLM much like you would talk with ChatGPT. It also makes trialing new LLMs extremely simple.

Pretty much any modern gaming PC can run at least some local AI models, though the main limiting factor is the amount of VRAM available on your GPU. If you're buying new, 16GB of VRAM is probably a reasonable middle ground that will make a large range of very capable AI accessible to you. 12GB is probably the minimum.
Other than that, it helps to have a zippy SSD to make loading and unloading models faster, and a healthy amount of system RAM (32GB or more) is ideal if you're going to try to offload some of the AI tasks from your GPU to your CPU.
If you're not sure what models your system can run, there is a handy project on GitHub that can make recommendations based on what you want to do and what your system specs are.

Running your own ChatGPT
Once you download and install LM Studio, all you need to do is click the magnifying glass icon, browse for the model you want, and click "Download" towards the bottom.

If you've found a model elsewhere that you'd like to use, you need to drop it into the correct folder on your PC. By default, that will be:
C:\Users\(YOURUSERNAME)\.lmstudio\models
Where (YOURUSERNAME) is your user account name. So, in my case it was "C:\Users\Equinox\.lmstudio\models."
Once you do that, it'll appear in your list of models just like any other.
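LM Studio generally expects sideloaded models to sit in a publisher/model subfolder rather than loose in the root of that folder. A sketch of the layout (every folder and file name here is a placeholder, and on Windows the root is the C:\Users\...\.lmstudio\models path above):

```shell
# Sideload a GGUF file you downloaded elsewhere.
# "example-publisher" and "example-model" are placeholders.
MODELS_DIR="$HOME/.lmstudio/models"
mkdir -p "$MODELS_DIR/example-publisher/example-model"
# mv ~/Downloads/example-model-Q4_K_M.gguf "$MODELS_DIR/example-publisher/example-model/"
ls "$MODELS_DIR"
```

If the model doesn't show up after a move like this, double-check that it isn't sitting directly in the models root.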
What can your own ChatGPT do?
What your own, privately hosted LLM can do depends on what model you're using, what hardware you have, and how good you are at writing a prompt.
There are dozens of models (or more) out there with specialized functions, but at a minimum, you can get them to read sources, generate summaries, discuss the content of resources you provide, and parse the contents of images or videos. Many are optimized for tool use, which means if you want them to, they can even interact with external applications to perform extra jobs, or get information automatically.
If you're willing to experiment, you can even fully integrate them with Home Assistant to create your own talking, thinking (sorta) smart home.
Above and beyond the cost savings, though, there is something else about hosting your own LLM: It is just fun. It isn't every day that a brand-new technology becomes widely accessible to home users, especially one that is slated to be as disruptive as AI.