AI Agents

Playing around with OpenAI Swarm and ChatGPT, and this was the result (although, as you can see, I went off on a tangent and didn't end up using Swarm). It was a fun exercise all the same.

https://colab.research.google.com/drive/1gx5zmdIcJwwKIvDmNRoJmqpdeLh6UnCN?usp=sharing#scrollTo=4k3_qWopAGE_

https://github.com/openai/swarm

I have uploaded it to https://github.com/herepete/Ai_playing/blob/main/ai_agents.py

See my other AI posts about putting the OpenAI key in as an environment variable 🙂
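For comparison, plain Swarm usage looks roughly like the snippet below. It is a minimal sketch based on the example in the Swarm README (check the repo linked above in case the API has changed), reusing one of the agents from my script:

from swarm import Swarm, Agent

client = Swarm()   # uses the same OPENAI_API_KEY environment variable

def transfer_to_joke_creator():
    # in Swarm, handing a conversation to another agent is just a function
    # that returns that agent
    return joke_creator

triage = Agent(
    name="Triage",
    instructions="Decide which agent should answer and hand over if needed.",
    functions=[transfer_to_joke_creator],
)

joke_creator = Agent(
    name="Joke Creator",
    instructions="You are a joke creator. Provide funny jokes when asked.",
)

response = client.run(
    agent=triage,
    messages=[{"role": "user", "content": "tell me a joke"}],
)
print(response.messages[-1]["content"])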

$ cat ai_agents.py
#!/usr/bin/python3.11

import openai
import os
os.system('clear')

openai.api_key = os.getenv('OPENAI_API_KEY')

# Function to determine the most appropriate agent to respond
def select_agent(agents, user_input):
    if "joke" in user_input.lower():
        return "Joke Creator"
    elif "fact" in user_input.lower() or "verify" in user_input.lower():
        return "Fact Checker"
    else:
        return "Creative Thinker"

# Function for the agent to respond based on instructions
def agent_respond(agent, context):
    try:
        # Make the call to the OpenAI API with clear and explicit structure
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": agent["instructions"]},
                *context
            ],
            max_tokens=150
        )
        return response['choices'][0]['message']['content'].strip()
    except Exception as e:
        print(f"Error: {e}")
        return None

# Function to create an agent
def create_agent(name, role, instructions):
    return {"name": name, "role": role, "instructions": instructions}

# Create agents
agent_1 = create_agent("Fact Checker", "assistant", "You are a detailed fact-checker. Provide accurate and concise responses.")
agent_2 = create_agent("Creative Thinker", "assistant", "You are a creative agent that provides out-of-the-box thinking.")
agent_3 = create_agent("Joke Creator", "assistant", "You are a joke creator. Provide funny jokes when asked.")

# List of agents
agents = [agent_1, agent_2, agent_3]

# Initial explanation to the user
print("Welcome! We have three agents here to assist you:")
print("1. Fact Checker: This agent helps with verifying information, providing accurate answers, and fact-checking.")
print("2. Creative Thinker: This agent helps with brainstorming ideas, creative problem-solving, and thinking outside the box.")
print("3. Joke Creator: This agent helps you by creating jokes and providing humor.")
print("Feel free to ask any questions, and our most suitable agent will assist you.")

# Run an interactive conversation loop
while True:
    # Ask user for input
    user_input = input("\nWhat do you need help with today?\nYou: ")

    # Break loop if user wants to quit
    if user_input.lower() in ["quit", "exit"]:
        print("Ending the conversation.")
        break

    # Determine the most appropriate agent based on user input
    selected_agent_name = select_agent(agents, user_input)
    selected_agent = next(agent for agent in agents if agent["name"] == selected_agent_name)

    # Reset messages to contain only the most recent user input for new prompts
    messages = [{"role": "user", "content": user_input}]

    # Run the selected agent to process the current context
    response = agent_respond(selected_agent, messages)
    if response:
        messages.append({"role": "assistant", "content": f"{selected_agent['name']} response: {response}"})
        print(f"{selected_agent['name']} response: {response}")
    else:
        print(f"No response from {selected_agent['name']}.")

Result

Welcome! We have three agents here to assist you:
1. Fact Checker: This agent helps with verifying information, providing accurate answers, and fact-checking.
2. Creative Thinker: This agent helps with brainstorming ideas, creative problem-solving, and thinking outside the box.
3. Joke Creator: This agent helps you by creating jokes and providing humor.
Feel free to ask any questions, and our most suitable agent will assist you.

What do you need help with today?
You: tell me a joke
Joke Creator response: Why did the scarecrow win an award? Because he was outstanding in his field!

What do you need help with today?
You: tell me a fact
Fact Checker response: Fact: The Earth is the only planet in our solar system known to support life.

What do you need help with today?
You: what is the meaning of life?
Creative Thinker response: The meaning of life is a deeply personal and subjective question that can have many different answers depending on individual beliefs, experiences, and perspectives. Some people find meaning in pursuing their passions, building connections with others, contributing to society, seeking personal growth, exploring spirituality, or simply in enjoying the present moment. Ultimately, the meaning of life is something that each person must decide for themselves based on what brings them fulfillment and purpose.

What do you need help with today?
You:

OpenAI chat with a bot from the Linux CLI

Available on GitHub at https://github.com/herepete/Fidelity/blob/main/chat_with_openai.py

A standalone script that lets you chat with OpenAI from the command line.

[ec2-user@ip-172-31-33-224 Fidelity]$ cat chat_with_openai.py
#!/usr/bin/python3
import openai
import os


# Your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

def chat_with_ai():
    messages = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

    print("Start chatting with the AI! Type 'exit' to end the conversation.")
    while True:
        # Get user input
        user_input = input("You: ")

        # Exit the chat loop if the user types 'exit'
        if user_input.lower() == 'exit':
            print("Ending the conversation. Goodbye!")
            break

        # Add the user's message to the conversation history
        messages.append({"role": "user", "content": user_input})

        # Get the AI's response
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            max_tokens=150  # Set the maximum number of tokens for each response
        )

        # Extract and print the AI's response
        ai_response = response['choices'][0]['message']['content']
        print(f"AI: {ai_response}")

        # Add the AI's response to the conversation history
        messages.append({"role": "assistant", "content": ai_response})


chat_with_ai()

Sample Run

]# ./chat_with_openai.py
Start chatting with the AI! Type 'exit' to end the conversation.
You: Tell me  a Joke
AI: Sure, here's a joke for you:

Why couldn't the bicycle stand up by itself?

Because it was two-tired!
You: nice one, tell me an intresting fact?
AI: Here's an interesting fact for you:

Honey never spoils! Archaeologists have found pots of honey in ancient Egyptian tombs that are over 3000 years old and still perfectly edible. Honey's high sugar content and low moisture levels create an environment that prevents the growth of bacteria and yeasts, making it one of the only foods that never spoils.
You:

API, OpenAI & Finance Python Project

High level

So combining a few of my interests:
1) Python

2) Finance

3) AI

I have uploaded the script to

https://github.com/herepete/Fidelity

Setup steps

From an AWS Linux VM

yum install git
yum install pip
pip3.9 install bs4 pandas
pip3.9 install openai==0.28

Create an OpenAI account (https://platform.openai.com/docs/overview), create an API key, and add the key to your .bashrc, for example:

export OPENAI_API_KEY="bla"


Make sure it's visible when running:

 env | grep OPEN

run the test API script (test_openai.py) – fix any issues (a minimal sketch of what this step checks is shown below)
run option 1 on the menu within main.py – fix any issues

run option 2
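For reference, such a smoke test might look roughly like this. It is a hypothetical sketch, not the actual test_openai.py from the repo; it assumes the openai==0.28 interface installed above and the OPENAI_API_KEY exported earlier.

#!/usr/bin/python3.9
# Minimal API smoke test (sketch only, not the repo's test_openai.py).
# Assumes openai==0.28 and OPENAI_API_KEY exported in the environment.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

try:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say 'API test OK'"}],
        max_tokens=10,
    )
    print(response["choices"][0]["message"]["content"].strip())
except Exception as e:
    print(f"API test failed: {e}")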

High Level Aim

A script to find a list of income funds I am interested in.

It then passes that list to OpenAI for some feedback.

What it does

Based on the config in config.py

connect to Fidelity and OpenAI and, keeping the number of calls to a minimum, check each fund against the criteria, namely:

Is it an income fund?

Does it have a yield over 4%?

Is the cost under 1%?

Has it produced a return of over 3 in each of the last 1, 3 & 5 years?

If the API fails for any reason, or a criterion is not met, the fund is rejected straight away to reduce the number of API calls needed.

If all the tests are passed, some extra info is grabbed: last year's yield and the overall Morningstar rating.

At the end you should have a list of funds which have passed all of the tests; that information is then passed to OpenAI to get some feedback.
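To make that flow concrete, here is a rough sketch of the pass/fail logic. The function and field names are hypothetical; the real checks live in main.py and read their thresholds from config.py.

# Rough sketch of the rejection logic described above (not the actual main.py code).
def check_fund(fund):
    """Return True only if the fund passes every criterion; reject early otherwise."""
    try:
        if not fund["is_income"]:                           # must be an income fund
            return False
        if fund["yield_pct"] <= 4.0:                        # yield must be over 4%
            return False
        if fund["fee_pct"] >= 1.0:                          # cost must be under 1%
            return False
        if any(fund[k] <= 3 for k in ("y1", "y3", "y5")):   # return over 3 in 1, 3 & 5 years
            return False
    except Exception:
        # any scrape/API failure rejects the fund straight away,
        # which keeps the number of OpenAI calls down
        return False
    return True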

Considerations and Ideas

I am the first to admit this script needs tweaking and improvement, but my aim was to get a first version published rather than having a perfect solution.

I thought about multi-threading or multi-processing to speed everything up, but I don't want to hammer the API.

I could do more with the AI, e.g. LangChain, but my aim was to get version 1 working.

I did wonder about getting the results written to a DB like SQLite, but that's another option for version 2.

In the output I could use colour and ask OpenAI for more information.

You need quite a wide screen to display the output as designed.

Maybe play with different OpenAI models to compare output.

Maybe open a chat-type window from the command line to allow you to ask questions about the funds.

Play with the OpenAI instructions to get better answers; I haven't done any fine-tuning of the instructions.

Files

config.py – basic config; set testing to 0 if you want to unleash the power, and set any criteria you wish on the other fields.
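Purely as an illustration, the fields are along these lines (the names here are guesses; check config.py in the repo for the real ones):

# Illustrative only -- see config.py in the repo for the real field names.
testing = 1        # 1 = test mode (only a handful of funds), 0 = unleash the power
min_yield = 4.0    # reject funds yielding 4% or less
max_fee = 1.0      # reject funds costing 1% or more
min_return = 3     # minimum return over the last 1, 3 and 5 years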

main.py – main script to run

test_openai.py – a basic test to check OpenAI is being called; it can be run on its own or from the menu in main.py.

failer_test_fund.txt – a log of which funds errored and why; this is only generated when option 2 from main.py is run. The funds listed here are marked as previously checked so we don't unnecessarily test them again.

fidelity_funds.csv – this is your master list of funds to consider, roughly 3,000 of them. This file is only generated when option 1 from main.py is run. The first check in option 2 of main.py looks through all the fund names and keeps only those with the word "Inc" in them (the income funds); see the sketch below.
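A rough sketch of that initial screening might look like this. The file handling and column name are assumptions, not the actual main.py code:

# Sketch of the initial screening in option 2 (hypothetical, not the repo code).
import csv

# funds rejected on previous runs, so we don't test them again
# (assumes one "fund name, reason" entry per line in failer_test_fund.txt)
with open("failer_test_fund.txt") as f:
    previously_checked = {line.split(",")[0].strip() for line in f if line.strip()}

income_funds = []
with open("fidelity_funds.csv", newline="") as f:
    for row in csv.DictReader(f):
        name = row["Fund Name"]              # assumed column header
        if "inc" not in name.lower():        # keep income funds only
            continue
        if name in previously_checked:       # skip known failures
            continue
        income_funds.append(row)

print(f"Income Stocks Found: {len(income_funds)}")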

Typical workflow

set up the environment

download the Git scripts

run main.py and choose option 1 to create your master list of funds

run main.py and choose option 2 – 10 funds are checked

edit config.py and change testing to 0

run main.py and choose option 2 – sit back, watch, and review the results

Interface

It's all command line, but I have tried to make it look as nice as possible.

The page should refresh:

Income Stocks Found – should be about 1,117 or so; this is all the income funds found in fidelity_funds.csv.

Previously checked – funds found in failer_test_fund.txt, i.e. funds that failed on any previous runs.

This round Checking – how many funds to check this round, based on the config values.

This round rejected – how many funds failed

This round suitable funds found – how many funds passed

Verbose Stuff – a log of the 10 most recent things that happened.
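As a rough idea of how that refreshing page could be drawn, here is a hypothetical sketch (not the code in main.py):

# Hypothetical sketch of the refreshing dashboard, not the actual main.py code.
import os

def draw_dashboard(found, prev, checking, rejected, suitable, verbose_log):
    os.system("clear")                               # redraw the whole page each update
    headers = ["Income Stocks Found", "Previously checked", "This round Checking",
               "This Round Rejected", "This Round Suitable Funds found"]
    values = [found, prev, checking, rejected, suitable]
    print("".join(h.ljust(30) for h in headers))
    print("".join(str(v).ljust(30) for v in values))
    print("\nVerbose Stuff...")
    for line in verbose_log[-10:]:                   # only the 10 most recent events
        print(line)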

Basic run with no Funds found

Basic run with Funds found

Note: anything below the line "**Analysis:**" is AI generated.

A screenshot didn't look as good here, so I have copied and pasted the output as a code block.

Income Stocks Found           Previously checked            This round Checking           This Round Rejected           This Round Suitable Funds found

1117                          537                           20                            18                            2

Verbose Stuff...
Starting work on - Rathbone M-A Strategic Growth Port S Inc
Yield check failure
Starting work on - Rathbone M-A Strategic Income Port S Inc
Yield check failure
Starting work on - Rathbone M-A Total Return Port S Inc
Yield check failure
Starting work on - Schroder High Yield Opportunities Z Inc
Suitable Fund Found=Schroder High Yield Opportunities Z Inc
Starting work on - Schroder Strategic Credit L Inc
Suitable Fund Found=Schroder Strategic Credit L Inc

Here are the raw results and the AI feedback on those funds...
Based on the provided data, here is a comparison of the two income funds:

| Fund Name                           | ISIN          | Fee (%) | Yield (%) | Frequency     | Y1_Annualized | Y3_Annualized | Y5_Annualized | Last Year's Yield | Morning Star Rating |
|------------------------------------|-------------------------------|--------|----------|----------------|----------------|----------------|----------------|------------------|---------------------
| Schroder High Yield Opportunities Z Inc | GB00B5143284 | 0.72   | 7.71      | Monthly        | 15             | 3             | 5                | 7.82                 | 5                   |
| Schroder Strategic Credit L Inc         | GB00B11DP098 | 0.67   | 6.33      | Semi-Annually | 12             | 3             | 4                | 5.90                 | 5                   |

**Analysis:**

**Schroder High Yield Opportunities Z Inc:**
- **Pros:**
  - Higher yield (7.71%)
  - Monthly frequency of payouts
  - Higher Y1_Annualized return (15%)
  - Higher Last Year's Yield (7.82%)
  - Morning Star Rating of 5

- **Cons:**
  - Slightly higher fee compared to the other fund (0.72%)
  - Lower Y3_Annualized and Y5_Annualized returns

**Schroder Strategic Credit L Inc:**
- **Pros:**
  - Lower fee (0.67%)
  - Lower fee than the other fund
  - Consistent Morning Star Rating of 5

- **Cons:**
  - Lower yield (6.33%)
  - Semi-annual frequency of payouts
  - Lower Y1_Annualized and Last Year's Yield
  - Lower Y3_Annualized and Y5_Annualized returns compared to the other fund

**Additional Research:**
- Further research into the credit quality of the underlying assets might provide insights into the risk profile of these funds.
- Comparison of the fund performance against relevant benchmark indices.

In summary, the Schroder High Yield Opportunities Z Inc fund offers a higher yield and better short-term performance, but at a slightly higher fee. On the other hand, the Schroder Strategic Credit L Inc fund has a lower fee and consistent Morning Star Rating, but with lower yield and overall returns. Investors may consider their investment goals, risk tolerance, and liquidity preferences when choosing between these two funds.
AI Anslysis completed

Running your own LLM using Ollama

High level

Ollama hosts LLM models and allows you to interact with them entirely locally.

Open WebUI is a nice GUI front end for Ollama and its models.

Sites

Download Ollama on Linux

library

Open WebUI

Videos

Run Through

Ran from an AWS VM; the basic micro instance doesn't have enough /tmp space and you have to fudge around with things.

The quickest solution is upping the instance type to something with more power; a t2.xlarge seems to work well.

Bundled install

curl -fsSL https://ollama.com/install.sh | sh
yum install pip -y 
pip install ollama
ollama run llama3.1

Install Ollama – Shell

curl -fsSL https://ollama.com/install.sh | sh

by following the instructions on the Download Ollama on Linux page

Install Ollama – Pip

yum install pip
pip install ollama

Install a LLM Model

ollama run llama3.1

Find the model in the library and copy its run command.

To check what LLM models you have, and other Ollama commands:

$ ollama list
NAME            ID              SIZE    MODIFIED
llama3.1:latest 42182419e950    4.7 GB  38 minutes ago
gemma2:2b       8ccf136fdd52    1.6 GB  2 hours ago
$ ollama
Usage:
  ollama [flags]
  ollama [command]
Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command
Flags:
  -h, --help      help for ollama
  -v, --version   Show version information
Use "ollama [command] --help" for more information about a command.

Passing Input files – Bash

$ cat /home/ollama_files/helloworld_testfile
i have 5 oranges and 2 apples
if i eat 4 oranges and 1 apple
how much is left?
$ cat /home/ollama_files/helloworld_testfile | ollama run gemma2:2b  "prompt"
Here's how to figure out the remaining fruit:
Oranges Left: You started with 5 oranges, and you ate 4, so you have
5 - 4 = 1 orange left.
Apples Left:  You started with 2 apples, and you ate 1, leaving you
with 2 - 1 = 1 apple.
Answer: You have 1 orange and 1 apple left. 🍊🍎

Passing Input files – Python

# cat ./llm_test.py
#!/usr/bin/python3.9
import ollama
notes = "helloworld_testfile"            # the input file shown above
with open(notes, 'r') as file:
    content = file.read()

# build the prompt from the file contents and send it to the local model
my_prompt = f"give me the answer {content}"
response = ollama.generate(model="gemma2:2b", prompt=my_prompt)
actual_response = response["response"]
print(actual_response)
#  ./llm_test.py
Here's how to solve that:
Oranges: You started with 5, and you ate 4, so you have 5 - 4 = 1 orange left.
Apples: You started with 2, and you ate 1, so you have 2 - 1 = 1 apple left.
Answer: You have 1 orange and 1 apple left.

Quick Chat

$ ollama run gemma2:2b
>>> tell me a joke
Why don't scientists trust atoms?
Because they make up everything! 😄
Let me know if you want to hear another one! 😊
>>> Send a message (/? for help)

AI Art

Introduction

Most pages on this site have a cartoon at the top of the page & 99% of them have been generated by AI.

This page is dedicated to playing around with AI art to try and get the best pictures possible.

This page has some really good examples of styles you can use in your prompts:

https://www.shutterstock.com/ai-image-generator/styles?consentChanged=true

Going straight to DALL·E gives nicer feedback than calling it from ChatGPT.

Coney Island

Here’s the image depicting a young girl, accompanied by her parents, as she first steps foot on land at Coney Island, New York, in the 1800s. The scene captures the moment of their arrival, filled with hope and trepidation, rendered in a vivid oil painting style. Feel free to use this for your project!

Pyramids

Here are two images depicting the construction of the pyramids in ancient Egypt. These scenes show workers engaged in various activities, with the grandeur of the partially constructed pyramids and the geographical context highlighted. Feel free to use these images for your project!

Picturesque coastal village at sunset.

I pasted this in and asked "do a oil painting of this"

“do a futuristic version please”

and now a plastic brick version

“show me 2 more versions of the orginal picture uploaded at the start of this conversation

1) watercolor

2) pencil sketch”

Give me a representation of what £X looks like

give me a representation of what does £1,000 look like

Here are the representations of £10,000 in cash, featuring two hundred £50 notes.

Here are the representations of £100,000 in cash, featuring two thousand £50 notes.

A large pile of £100,000 in cash, with two thousand £50 notes stacked and fanned out to show the amount. The notes are new, crisp, and arranged neatly in multiple bundles. The background is a clean, white surface to emphasize the money.

Here are the representations of £1,000,000 in cash, featuring twenty thousand £50 notes.

Relativity (M. C. Escher)

After a few issues with copyright, the best way to find a similar picture was typing "Recreate this" and pasting in a picture taken from Google Photos.

“To recreate an image inspired by the provided one, I’ll describe a scene based on it without directly replicating the copyrighted artwork.

Let’s create an image featuring a surreal architectural scene with multiple staircases intersecting at impossible angles, with figures walking on them in various directions, some upside down, and the overall structure having a maze-like appearance.

I’ll go ahead and generate this for you.”

Roman Times

“create a picture in roman times of soldiers coming out of a tavern into the dirtiness of the city”

Old British painters

“A serene rural landscape inspired by John Constable’s “The Hay Wain.” The scene features a picturesque countryside with a gently flowing river in the foreground. A wooden cart is half-submerged in the water, being pulled by horses. On the left, there is a charming old farmhouse surrounded by trees, with smoke gently rising from the chimney. The sky is filled with fluffy white clouds, casting soft shadows on the lush green fields. The painting should evoke a sense of peacefulness and natural beauty, with attention to detail in the foliage, water reflections, and overall composition, capturing the essence of Constable’s pastoral scenes.”

An oil painting similar to John Constable's 'The Hay Wain,' imagining it as if Constable painted another version one year later. The scene features a picturesque countryside with a gently flowing river in the foreground. A wooden cart is half-submerged in the water, being pulled by horses led by a laborer. On the left, there is a charming old farmhouse surrounded by trees, with smoke gently rising from the chimney. The sky is filled with fluffy white clouds, casting soft shadows on the lush green fields. The painting should evoke a sense of peacefulness and natural beauty, with attention to detail in the foliage, water reflections, and overall composition. The oil painting technique should emphasize rich textures and colors, capturing the essence and feel of Constable's original masterpiece.

Lavender field

I love this picture, which is from a Bing background.

I struggled to get anything better than the original.

My prompts were:

“replicate as near as you can this picture , its a lavender field”

make it look like a photo you would take

With the original I added: make it an oil painting, the technique should emphasize rich textures and colors.

With the original I added: make it an oil painting, the technique should emphasize rich textures and colors, like how Constable would have painted it.

give me the prompt i would need to recreate this

Here is the prompt you would need to recreate the oil painting:

“An oil painting of a vast lavender field at sunset, with rows of vibrant purple flowers stretching towards the horizon. The technique emphasizes rich textures and colors, reminiscent of John Constable’s style, with thick brushstrokes creating a sense of depth and movement. The sky is painted with hues of pink, orange, and purple, blending beautifully into a gradient. In the center of the field stands a solitary tree, adding a focal point to the landscape. Mountains can be seen in the distant background. The atmosphere is serene and picturesque, captured in a vibrant and textured style.”

Serene lakeside village

Do this as a water colour

“Here are the watercolor versions of the serene lakeside village.”

Here are the futuristic versions of the serene lakeside village

Learning AI

Introduction

While I am very interested in AI and try to use it regularly, ironically this is one of the pages not written by AI, nor does it have an AI-generated cartoon at the top.

I only want to include real solid information on these pages.

Great Videos

This is a must-watch for everyone and anyone.