Set up your own ChatGPT locally (yeah, me too!)

INTRODUCTION

Alright folks, nothing new here. A friend and former colleague of mine posted on LinkedIn (https://lnkd.in/dAgb_Qip) about running DeepSeek locally, and I was quite interested in this from an academic perspective.

It’s cool, and it works – but I find videos to be … I don’t know – not my generation? Copy/pasting code from a video is also not so easy. I know, Marian, you probably host the details somewhere on GitHub, but hey, I wanted to give it a try on my own!

SYSTEM SETUP

We’ve all heard and read that a GPU can go a long way (NVIDIA, anyone?) in running LLMs. Luckily, I had an unused PC with an AMD 5800X3D and an NVIDIA RTX 3060 Ti lying around (that’s one of the perks of having a gamer for a son). So, on this cloudy Saturday morning, I decided to give it a try.

Turns out I already had Ubuntu Server 20.04 running on it. After a little upgrade, I was on 24.04 and ready to go! Interestingly enough, Ollama’s Download Ollama on Linux page has all the information you need. I just ran the command, and quickly enough I was up and running:

curl -fsSL https://ollama.com/install.sh | sh
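Once the installer finishes, the ollama CLI is on your PATH. Here is roughly what trying out a model looks like – the model tag below is just an example, so check the Ollama library for what’s actually available, and the commands are guarded so they only run if ollama is installed:

```shell
#!/bin/sh
# Only run if the ollama CLI is actually installed.
if command -v ollama >/dev/null 2>&1; then
    # Pull a model from the Ollama library (example tag; see ollama.com/library)
    ollama pull deepseek-r1:8b

    # Ask a one-off question non-interactively
    ollama run deepseek-r1:8b "Explain what a reverse proxy does in one sentence."

    # List the models installed locally
    ollama list
fi
```

Running `ollama run deepseek-r1:8b` with no prompt drops you into an interactive chat, which is the closest thing to a terminal ChatGPT.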

I thought, “This is all cool, but not convenient!” I want the full ChatGPT experience so I can use it the way I am used to. And I want to learn how to use it to actually do stuff, not just answer questions…

THE WEB APPLICATION

That’s where Open WebUI comes into play! It is really a great piece of software and very easy to set up. In no time, I had it running:

Easy and AWESOME!
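For anyone wanting to reproduce this, the usual quick-start is the Docker route (this mirrors the command in the Open WebUI README; the port mapping and volume name are just the defaults, so adjust to taste). It is guarded so it only runs where Docker is present:

```shell
#!/bin/sh
# Only run if Docker is installed.
if command -v docker >/dev/null 2>&1; then
    # Run Open WebUI on port 3000, persisting its data in a named volume.
    # host.docker.internal lets the container reach Ollama on the host.
    docker run -d \
        -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data \
        --name open-webui \
        --restart always \
        ghcr.io/open-webui/open-webui:main
fi
```

After that, pointing a browser at http://localhost:3000 brings up the familiar chat interface, with the local Ollama models selectable from a dropdown.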

CONCLUSION

What is cool here is that I have it running on my LAN, so I can access it from any PC internally. Obviously, I could (and I did!) set up an Nginx reverse proxy to access it from the outside over SSL. Now I have my own private ChatGPT. OK, it is not as capable as the actual OpenAI models, but there are quite a few good open-source models on Ollama, and I will be trying them.
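For the curious, a minimal sketch of such a reverse-proxy server block – assuming Open WebUI listens on port 3000 and the certificates come from Let’s Encrypt; the domain is a placeholder. Note that Open WebUI streams responses over WebSockets, so the Upgrade headers matter:

```nginx
server {
    listen 443 ssl;
    server_name chat.example.com;  # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/chat.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chat.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # Required for Open WebUI's WebSocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```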

Another great feature of Open WebUI is its webhooks. I still need to play with them, but I envision that the output of a chat could be sent to a webhook, which would parse the JSON and do something with it. My plan is to have a webhook that generates project files in the selected language. At this point, I am going to explore the models.
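I haven’t built this yet, but the parsing step could look something like the sketch below. To be clear, the payload here is made up – the field names are my assumptions about what a chat-completion webhook might carry, not Open WebUI’s actual schema:

```shell
#!/bin/sh
# Hypothetical webhook payload - field names are assumptions, not
# Open WebUI's actual schema.
cat > payload.json <<'EOF'
{"model": "deepseek-r1:8b", "language": "python", "message": {"content": "print('hello')"}}
EOF

# Parse the JSON with jq and turn the chat output into a project file.
lang=$(jq -r '.language' payload.json)
jq -r '.message.content' payload.json > "main.${lang}"

echo "Wrote main.${lang}"
```

From there, the same idea extends to writing a whole directory of files – one jq query per file the model emits.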

All in all, this whole thing took less than an hour to pull together. I can add models with a few clicks and easily compare results.

To be continued…
