Setting Up a Local ChatGPT Experience with Anything LLM and Ollama
Author: Erdi Köse

All tools used in this setup are open-source and free! (As of November 5th, 2024.)
As large language models (LLMs) become essential for work and personal projects, many users are seeking local solutions instead of relying on cloud-based options. Thanks to advancements in open-source tools, setting up a local GPT-like experience is now easier than ever, offering freedom from internet connectivity and subscription fees. Today, we’ll explore two tools, Anything LLM and Ollama, that allow you to create a private, customizable LLM setup for enhanced privacy and control.
Step 1: Installing Ollama
To begin, install Ollama and download a model. Visit Ollama’s download page and select the appropriate installer for your system. Once Ollama is installed and the ollama command is available on your PATH, run the following command to pull the model and start an interactive session:
ollama run llama3.2
I recommend llama3.2, as it’s a capable and lightweight open-source model that handles everyday tasks like chatting and coding well.
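Beyond the interactive CLI, Ollama also serves a local REST API on port 11434 by default; this is the same endpoint Anything LLM will connect to in the next step. A minimal sketch of querying it directly with curl, assuming the default port and that llama3.2 has already been pulled (the prompt text is just a placeholder):

```shell
# Send a one-off generation request to the local Ollama server.
# "stream": false returns a single JSON object instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

This is handy for confirming the server works before wiring up any front end.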
Step 2: Setting Up Anything LLM
With Ollama running on your system, navigate to Anything LLM’s website and download the installer. Anything LLM is highly flexible and can connect with various models, but for this setup, we’ll configure it to use Ollama as our model provider.
Once installed, open the settings in Anything LLM. Under the configuration menu, select Ollama as your LLM source and set llama3.2 as the default model. This ensures you’re leveraging the capabilities of this open-source model in your local setup. Your settings should resemble the image below:
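If the model doesn’t appear in Anything LLM’s model dropdown, a couple of quick checks from the terminal usually pinpoint the problem. This sketch assumes Ollama’s default port of 11434; adjust the URL if you’ve changed it:

```shell
# 1. List the models pulled locally; llama3.2 should appear in the output.
ollama list

# 2. Confirm the HTTP endpoint Anything LLM talks to is reachable;
#    this returns a JSON listing of available models.
curl -s http://localhost:11434/api/tags
```

If the second command fails, the Ollama server isn’t running, and Anything LLM won’t be able to connect either.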
Now, you’re ready to enjoy a ChatGPT-like experience offline with your preferred model. Whether you’re using it for conversations or coding, this setup is flexible and powerful — customizable to suit your specific needs.
If you’re interested in a scalable way to deploy Ollama on Kubernetes, check out my previous post for a detailed guide: Have Your LLM API on Your Kubernetes Cluster.