Running your own LLM using Ollama

High Level

Ollama hosts LLM models and allows you to interact with them, all locally

Open WebUI is a nice GUI front end for Ollama and its models

Sites

Download Ollama on Linux – https://ollama.com/download/linux

Model library – https://ollama.com/library

Open WebUI – https://github.com/open-webui/open-webui

Run Through

Run from an AWS VM; the basic micro instance doesn't have enough /tmp space and you have to fudge around with things.

The quickest solution is upping the instance type to something with more power; a t2.xlarge seems to work well, as in the sketch below.
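
If you want to script the resize rather than click through the console, here is a minimal boto3 sketch (assuming AWS credentials are already configured; the instance id below is a hypothetical placeholder):

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance id

# An instance must be stopped before its type can be changed
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Bump to a t2.xlarge, then start the instance back up
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "t2.xlarge"})
ec2.start_instances(InstanceIds=[instance_id])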

Bundled install

curl -fsSL https://ollama.com/install.sh | sh
yum install pip -y 
pip install ollama
ollama run llama3.1

Install Ollama – Shell

curl -fsSL https://ollama.com/install.sh | sh

by following the instructions on the Download Ollama on Linux page

Install Ollama – Pip

yum install pip
pip install ollama

Install an LLM Model

ollama run llama3.1

Find the model in the library and copy its run command; a model can also be pulled from Python, as sketched below.
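
A minimal sketch of pulling a model with the ollama Python package installed above (equivalent of running ollama pull from the shell):

import ollama

# Download the model from the Ollama registry so it is available locally
ollama.pull("gemma2:2b")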

To check what LLM models you have installed, and to see what else the ollama CLI can do

$ ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:latest 42182419e950    4.7 GB  38 minutes ago
gemma2:2b       8ccf136fdd52    1.6 GB  2 hours ago
$ ollama
Usage:
  ollama [flags]
  ollama [command]
Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command
Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
Use "ollama [command] --help" for more information about a command.

Passing Input files – Bash

$ cat /home/ollama_files/helloworld_testfile
i have 5 oranges and 2 apples
if i eat 4 oranges and 1 apple
how much is left?
$ cat /home/ollama_files/helloworld_testfile | ollama run gemma2:2b "prompt"
Here's how to figure out the remaining fruit:
Oranges Left: You started with 5 oranges, and you ate 4, so you have
5 - 4 = 1 orange left.
Apples Left:  You started with 2 apples, and you ate 1, leaving you
with 2 - 1 = 1 apple.
Answer: You have 1 orange and 1 apple left. 🍊🍎

Passing Input files – Python

# cat ./llm_test.py
#!/usr/bin/python3.9
import ollama

# Read the question from the notes file (relative to the current directory)
notes = "helloworld_testfile"
with open(notes, 'r') as file:
    content = file.read()

# Wrap the file contents in a prompt and send it to the local model
my_prompt = f"give me the answer {content}"
response = ollama.generate(model="gemma2:2b", prompt=my_prompt)

# The generated text lives under the "response" key
actual_response = response["response"]
print(actual_response)
#  ./llm_test.py
Here's how to solve that:
Oranges: You started with 5, and you ate 4, so you have 5 - 4 = 1 orange left.
Apples: You started with 2, and you ate 1, so you have 2 - 1 = 1 apple left.
Answer: You have 1 orange and 1 apple left.
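
Note that the script opens helloworld_testfile with a relative path, so run it from the directory that holds the file (or switch to the full /home/ollama_files/... path used in the Bash example).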

Quick Chat

$ ollama run gemma2:2b
>>> tell me a joke
Why don't scientists trust atoms?
Because they make up everything! 😄
Let me know if you want to hear another one! 😊
>>> Send a message (/? for help)
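
The same kind of chat can be scripted with the Python library. A minimal sketch using ollama.chat, streaming the reply as it is generated; the dict-style access to the chunks follows the ollama-python examples, though exact response types can vary by client version:

import ollama

# One-shot chat: send a user message and stream the reply token by token
stream = ollama.chat(
    model="gemma2:2b",
    messages=[{"role": "user", "content": "tell me a joke"}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()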