Author: user

  • Catan & the Power of AI

    Why?

    Recently, while doing some Xmas shopping, I came across the board game Catan. It fitted an idea I've had for a while for a game along these lines.

    I also wanted to see if I could build a game with the help of AI without coding a single thing myself.

    After 26 iterations, I present you with:

    https://github.com/herepete/games/blob/main/catan.py

    You will only need to install one package:

    pip3.11 install PrettyTable

    It might not be fully finished, but it was a fun experiment playing with AI.
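
    PrettyTable is what renders the resource table you can see in the sample run below. As a rough illustration of the only dependency, here is a minimal sketch of that style of table (the column names match the game's output; the row values and variable names are just for illustration, not lifted from the script):

    from prettytable import PrettyTable

    # Build a table with the same columns as the in-game resource summary.
    table = PrettyTable()
    table.field_names = [
        "Player", "Brick", "Lumber", "Ore", "Grain", "Wool",
        "Settlements", "Cities", "Roads", "Victory Points",
    ]

    # One row per player; in the real game these values come from the player state.
    table.add_row(["asdas", 2, 0, 1, 1, 1, 0, 0, 0, 0])
    table.add_row(["AI Player 1", 2, 0, 1, 1, 1, 0, 0, 0, 0])

    print(table)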

    I found the default ChatGPT model, GPT-4o, pretty good for the initial setup, but once the script grew to around 200+ lines it started struggling, mainly by removing core features without being asked.

    That got quite frustrating, so I then tried the ChatGPT o1 model, which has worked really well. It made the occasional error between iterations, but it was helpful.

    AI Guard Rails

    I found that giving these instructions helped give the AI some guard rails…

    Version Incrementing:
    Each new full version of the code should have its version number incremented on the second line. For example, after # Version 1.13, the next full version you provide should be # Version 1.14, then # Version 1.15, and so forth.

    Full Code with Each Update:
    Whenever you request changes, I should provide the complete updated script—not just a snippet—so you have a full, up-to-date version of the code at each iteration.

    Preserve Existing Code Unless Necessary:
    Do not remove or rewrite large sections of code unless it’s required to implement the requested changes. Keep as much of the original logic and structure as possible, only adjusting or adding code where needed.

    Implement Requested Features/Modifications Incrementally:
    Each time you requested changes—like adding a 4th AI player, explaining aspects of the game, improving the trading logic, or allowing the human player to accept/reject/counter AI-offered trades—I incorporated those changes step-by-step, ensuring stability and that previous features remained intact.

    Clarification and Reasoning:
    Before implementing changes, I asked clarifying questions when needed to ensure I understood your requirements correctly. Where possible, I explained what was done and why, so you understood the reasoning behind each update.

    No Removal Without Reason:
    Unless you explicitly allowed or it was necessary for the change, I avoided removing or altering code unrelated to the requested features to maintain code integrity and continuity.
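
    For what it's worth, if you drive the model through the API rather than the ChatGPT UI, the same guard rails can simply be sent as the system message with every request. A rough sketch using the pre-1.0 openai library that the later posts use (the condensed prompt text and the example request are illustrative, not taken from the actual sessions):

    import os
    import openai

    openai.api_key = os.getenv("OPENAI_API_KEY")

    # The guard rails above, condensed into a reusable system prompt.
    GUARD_RAILS = (
        "Increment the '# Version' comment on line 2 with every full version. "
        "Always return the complete updated script, never just a snippet. "
        "Preserve existing code unless a change requires otherwise. "
        "Implement requested changes incrementally, ask clarifying questions "
        "first, and never remove unrelated code without a reason."
    )

    response = openai.ChatCompletion.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": GUARD_RAILS},
            {"role": "user", "content": "Add a 4th AI player to the script."},
        ],
    )
    print(response["choices"][0]["message"]["content"])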

    End Result

    Enter your name: asdas
    Welcome to Catan!
    
    --- Purpose of the Game ---
    Earn 10 Victory Points (VP) by building settlements, roads, and cities.
    
    --- How the Game Works ---
    1. The board is composed of hexes, each producing a specific resource (brick, lumber, ore, grain, wool) or desert.
       Each hex has a number (2-12). At the start of a turn, you roll two dice. The sum determines which hexes produce resources.
    2. Settlements adjacent to a producing hex earn 1 resource; cities earn 2 of that resource. Desert hexes never produce.
    3. If a 7 is rolled, no one collects resources and the robber would be activated (not fully implemented here).
    4. Your goal is to reach 10 VP. Settlements grant 1 VP, cities grant an additional VP over a settlement, reaching 2 total.
    5. On your turn, you can:
       - Build: Use resources to construct a settlement, road, or upgrade a settlement to a city.
       - Trade: Offer your resources and request others. AI players consider fairness, scarcity, and personal benefit. You can accept, reject, or counter trades offered to you.
       - Pass: If you pass without having built or traded, you gain 1 random resource as a bonus.
    6. The game features AI players with different personalities (generous, fair, greedy) who evaluate trades differently.
    7. Once you or another player reaches 10 VP, the game ends immediately and that player wins.
    8. After the last player in a round finishes their turn, press Enter to continue and start the next round.
    
    Starting the game!
    
    asdas's turn!
    
    --- Board ---
    [1] wool (3)                             [2] grain (11)                           [3] wool (10)                            [4] desert (2)
    [5] lumber (7)                           [6] ore (3)                              [7] grain (5)                            [8] lumber (7)
    [9] brick (12)                           [10] brick (8)                           [11] brick (7)                           [12] ore (8)
    [13] wool (11)                           [14] desert (6)                          [15] ore (8)                             [16] grain (2)
    [17] lumber (12)                         [18] desert (12)
    
    --- Dice Roll Explanation ---
    You rolled a 7. The robber would be activated (not yet implemented):
    - No hexes produce resources this turn.
    - The robber would move to a chosen hex, blocking it.
    - Players with >7 cards would discard half of them.
    +-------------+-------+--------+-----+-------+------+-------------+--------+-------+----------------+
    |    Player   | Brick | Lumber | Ore | Grain | Wool | Settlements | Cities | Roads | Victory Points |
    +-------------+-------+--------+-----+-------+------+-------------+--------+-------+----------------+
    |    asdas    |   2   |   0    |  1  |   1   |  1   |      0      |   0    |   0   |       0        |
    | AI Player 1 |   2   |   0    |  1  |   1   |  1   |      0      |   0    |   0   |       0        |
    | AI Player 2 |   0   |   0    |  3  |   1   |  1   |      0      |   0    |   0   |       0        |
    | AI Player 3 |   2   |   1    |  1  |   1   |  0   |      0      |   0    |   0   |       0        |
    +-------------+-------+--------+-----+-------+------+-------------+--------+-------+----------------+
    
    Actions: 1. Build  2. Pass  3. Trade
    Choose an action:
    ....
    
  • AI Agents

    I was playing around with OpenAI Swarm and ChatGPT, and this was the result (although, as you can see, I went off on a tangent and didn't end up using Swarm). A fun exercise nonetheless.

    https://colab.research.google.com/drive/1gx5zmdIcJwwKIvDmNRoJmqpdeLh6UnCN?usp=sharing#scrollTo=4k3_qWopAGE_

    https://github.com/openai/swarm

    I have uploaded it to https://github.com/herepete/Ai_playing/blob/main/ai_agents.py

    See my other AI posts about putting the OpenAI key in as a variable 🙂

    $ cat ai_agents.py
    #!/usr/bin/python3.11
    
    import openai
    import os
    os.system('clear')
    
    openai.api_key = os.getenv('OPENAI_API_KEY')
    
    # Function to determine the most appropriate agent to respond
    def select_agent(agents, user_input):
        if "joke" in user_input.lower():
            return "Joke Creator"
        elif "fact" in user_input.lower() or "verify" in user_input.lower():
            return "Fact Checker"
        else:
            return "Creative Thinker"
    
    # Function for the agent to respond based on instructions
    def agent_respond(agent, context):
        try:
            # Make the call to the OpenAI API with clear and explicit structure
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": agent["instructions"]},
                    *context
                ],
                max_tokens=150
            )
            return response['choices'][0]['message']['content'].strip()
        except Exception as e:
            print(f"Error: {e}")
            return None
    
    # Function to create an agent
    def create_agent(name, role, instructions):
        return {"name": name, "role": role, "instructions": instructions}
    
    # Create agents
    agent_1 = create_agent("Fact Checker", "assistant", "You are a detailed fact-checker. Provide accurate and concise responses.")
    agent_2 = create_agent("Creative Thinker", "assistant", "You are a creative agent that provides out-of-the-box thinking.")
    agent_3 = create_agent("Joke Creator", "assistant", "You are a joke creator. Provide funny jokes when asked.")
    
    # List of agents
    agents = [agent_1, agent_2, agent_3]
    
    # Initial explanation to the user
    print("Welcome! We have three agents here to assist you:")
    print("1. Fact Checker: This agent helps with verifying information, providing accurate answers, and fact-checking.")
    print("2. Creative Thinker: This agent helps with brainstorming ideas, creative problem-solving, and thinking outside the box.")
    print("3. Joke Creator: This agent helps you by creating jokes and providing humor.")
    print("Feel free to ask any questions, and our most suitable agent will assist you.")
    
    # Run an interactive conversation loop
    while True:
        # Ask user for input
        user_input = input("\nWhat do you need help with today?\nYou: ")
    
        # Break loop if user wants to quit
        if user_input.lower() in ["quit", "exit"]:
            print("Ending the conversation.")
            break
    
        # Determine the most appropriate agent based on user input
        selected_agent_name = select_agent(agents, user_input)
        selected_agent = next(agent for agent in agents if agent["name"] == selected_agent_name)
    
        # Reset messages to contain only the most recent user input for new prompts
        messages = [{"role": "user", "content": user_input}]
    
        # Run the selected agent to process the current context
        response = agent_respond(selected_agent, messages)
        if response:
            messages.append({"role": "assistant", "content": f"{selected_agent['name']} response: {response}"})
            print(f"{selected_agent['name']} response: {response}")
        else:
            print(f"No response from {selected_agent['name']}.")
    

    Result

    Welcome! We have three agents here to assist you:
    1. Fact Checker: This agent helps with verifying information, providing accurate answers, and fact-checking.
    2. Creative Thinker: This agent helps with brainstorming ideas, creative problem-solving, and thinking outside the box.
    3. Joke Creator: This agent helps you by creating jokes and providing humor.
    Feel free to ask any questions, and our most suitable agent will assist you.
    
    What do you need help with today?
    You: tell me a joke
    Joke Creator response: Why did the scarecrow win an award? Because he was outstanding in his field!
    
    What do you need help with today?
    You: tell me a fact
    Fact Checker response: Fact: The Earth is the only planet in our solar system known to support life.
    
    What do you need help with today?
    You: what is the meaning of life?
    Creative Thinker response: The meaning of life is a deeply personal and subjective question that can have many different answers depending on individual beliefs, experiences, and perspectives. Some people find meaning in pursuing their passions, building connections with others, contributing to society, seeking personal growth, exploring spirituality, or simply in enjoying the present moment. Ultimately, the meaning of life is something that each person must decide for themselves based on what brings them fulfillment and purpose.
    
    What do you need help with today?
    You:
    

  • OpenAI chat with a bot from the Linux CLI

    Available in Github at https://github.com/herepete/Fidelity/blob/main/chat_with_openai.py

    A standalone script that lets you chat with OpenAI from the command line.

    [ec2-user@ip-172-31-33-224 Fidelity]$ cat chat_with_openai.py
    #!/usr/bin/python3
    import openai
    import os
    
    
    # Your OpenAI API key
    openai.api_key = os.getenv("OPENAI_API_KEY")
    
    def chat_with_ai():
        messages = [
            {"role": "system", "content": "You are a helpful assistant."}
        ]
    
        print("Start chatting with the AI! Type 'exit' to end the conversation.")
        while True:
            # Get user input
            user_input = input("You: ")
    
            # Exit the chat loop if the user types 'exit'
            if user_input.lower() == 'exit':
                print("Ending the conversation. Goodbye!")
                break
    
            # Add the user's message to the conversation history
            messages.append({"role": "user", "content": user_input})
    
            # Get the AI's response
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=messages,
                max_tokens=150  # Set the maximum number of tokens for each response
            )
    
            # Extract and print the AI's response
            ai_response = response['choices'][0]['message']['content']
            print(f"AI: {ai_response}")
    
            # Add the AI's response to the conversation history
            messages.append({"role": "assistant", "content": ai_response})
    
    
    chat_with_ai()
    

    Sample Run

    ]# ./chat_with_openai.py
    Start chatting with the AI! Type 'exit' to end the conversation.
    You: Tell me  a Joke
    AI: Sure, here's a joke for you:
    
    Why couldn't the bicycle stand up by itself?
    
    Because it was two-tired!
    You: nice one, tell me an intresting fact?
    AI: Here's an interesting fact for you:
    
    Honey never spoils! Archaeologists have found pots of honey in ancient Egyptian tombs that are over 3000 years old and still perfectly edible. Honey's high sugar content and low moisture levels create an environment that prevents the growth of bacteria and yeasts, making it one of the only foods that never spoils.
    You:
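
    One caveat: this script and the agents script above both use the pre-1.0 openai Python package (openai.ChatCompletion). With openai 1.x installed that call raises an error; the equivalent with the newer client looks roughly like this (a sketch of the single API call, not a drop-in replacement for the whole script):

    from openai import OpenAI

    # The client picks up OPENAI_API_KEY from the environment by default.
    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Tell me a joke"}],
        max_tokens=150,
    )
    print(response.choices[0].message.content)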

  • Running your own LLM using Ollama

    High Level

    Ollama hosts LLM models and allows you to interact with them, all locally.

    Open WebUI is a nice GUI front end for Ollama and its models.

    Sites

    Download Ollama on Linux

    library

    Open WebUI

    Videos

    Run Through

    Ran from an AWS VM; the basic micro instance doesn't have enough /tmp space and you have to fudge around with things.

    The quickest solution is upping the instance type to something with more power; a t2.xlarge seems to work well.

    Bundled install

    curl -fsSL https://ollama.com/install.sh | sh
    yum install pip -y 
    pip install ollama
    ollama run llama3.1

    Install Ollama – Shell

    curl -fsSL https://ollama.com/install.sh | sh

    by following the instructions on the Download Ollama on Linux page

    Install Ollama – Pip

    yum install pip
    pip install ollama

    Install an LLM Model

    ollama run llama3.1

    Find the model in the library and copy its run command.

    To check which LLM models you have installed, and to see the other Ollama subcommands:

    $ ollama list
    NAME            ID              SIZE    MODIFIED
    llama3.1:latest 42182419e950    4.7 GB  38 minutes ago
    gemma2:2b       8ccf136fdd52    1.6 GB  2 hours ago
    $ ollama
    Usage:
      ollama [flags]
      ollama [command]
    Available Commands:
      serve       Start ollama
      create      Create a model from a Modelfile
      show        Show information for a model
      run         Run a model
      pull        Pull a model from a registry
      push        Push a model to a registry
      list        List models
      ps          List running models
      cp          Copy a model
      rm          Remove a model
      help        Help about any command
    Flags:
      -h, --help      help for ollama
      -v, --version   Show version information
    Use "ollama [command] --help" for more information about a command.

    Passing Input files – Bash

    $ cat /home/ollama_files/helloworld_testfile
    i have 5 oranges and 2 apples
    if i eat 4 oranges and 1 apple
    how much is left?
    $ cat /home/ollama_files/helloworld_testfile | ollama run gemma2:2b  "prompt"
    Here's how to figure out the remaining fruit:
    Oranges Left: You started with 5 oranges, and you ate 4, so you have
    5 - 4 = 1 orange left.
    Apples Left:  You started with 2 apples, and you ate 1, leaving you
    with 2 - 1 = 1 apple.
    Answer: You have 1 orange and 1 apple left. 🍊🍎

    Passing Input files – Python

    # cat ./llm_test.py
    #!/usr/bin/python3.9
    import ollama
    notes = "helloworld_testfile"
    with open(notes,'r') as file:
        content= file.read()
    my_prompt = f"give me the answer {content}"
    response = ollama.generate(model="gemma2:2b", prompt=my_prompt)
    actual_response = response["response"]
    print(actual_response)
    #  ./llm_test.py
    Here's how to solve that:
    Oranges: You started with 5, and you ate 4, so you have 5 - 4 = 1 orange left.
    Apples: You started with 2, and you ate 1, so you have 2 - 1 = 1 apple left.
    Answer: You have 1 orange and 1 apple left.

    Quick Chat

    $ ollama run gemma2:2b
    >>> tell me a joke
    Why don't scientists trust atoms?
    Because they make up everything! 😄
    Let me know if you want to hear another one! 😊
    >>> Send a message (/? for help)
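
    Ollama also exposes a local HTTP API (on port 11434 by default), which is what both the Python library above and Open WebUI talk to. A quick sketch of calling it directly from Python, assuming llama3.1 has already been pulled (standard library only, so nothing extra to install):

    import json
    import urllib.request

    # /api/generate is Ollama's one-shot completion endpoint.
    payload = {"model": "llama3.1", "prompt": "tell me a joke", "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])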