AI Agents

Playing around with OpenAI Swarm and ChatGPT, and this was the result (although, as you can see, I went off on a tangent and didn't actually use Swarm), but it was a fun exercise.

https://colab.research.google.com/drive/1gx5zmdIcJwwKIvDmNRoJmqpdeLh6UnCN?usp=sharing#scrollTo=4k3_qWopAGE_

https://github.com/openai/swarm

I have uploaded it to https://github.com/herepete/Ai_playing/blob/main/ai_agents.py

See my other AI posts about putting the OpenAI key in as a variable 🙂

$ cat ai_agents.py
#!/usr/bin/python3.11

import openai
import os
os.system('clear')

# Read the API key from the environment and hand it to the client
openai.api_key = os.getenv('OPENAI_API_KEY')

# Function to determine the most appropriate agent to respond
def select_agent(agents, user_input):
    if "joke" in user_input.lower():
        return "Joke Creator"
    elif "fact" in user_input.lower() or "verify" in user_input.lower():
        return "Fact Checker"
    else:
        return "Creative Thinker"

# Function for the agent to respond based on instructions
def agent_respond(agent, context):
    try:
        # Make the call to the OpenAI API with clear and explicit structure
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": agent["instructions"]},
                *context
            ],
            max_tokens=150
        )
        return response['choices'][0]['message']['content'].strip()
    except Exception as e:
        print(f"Error: {e}")
        return None

# Function to create an agent
def create_agent(name, role, instructions):
    return {"name": name, "role": role, "instructions": instructions}

# Create agents
agent_1 = create_agent("Fact Checker", "assistant", "You are a detailed fact-checker. Provide accurate and concise responses.")
agent_2 = create_agent("Creative Thinker", "assistant", "You are a creative agent that provides out-of-the-box thinking.")
agent_3 = create_agent("Joke Creator", "assistant", "You are a joke creator. Provide funny jokes when asked.")

# List of agents
agents = [agent_1, agent_2, agent_3]

# Initial explanation to the user
print("Welcome! We have three agents here to assist you:")
print("1. Fact Checker: This agent helps with verifying information, providing accurate answers, and fact-checking.")
print("2. Creative Thinker: This agent helps with brainstorming ideas, creative problem-solving, and thinking outside the box.")
print("3. Joke Creator: This agent helps you by creating jokes and providing humor.")
print("Feel free to ask any questions, and our most suitable agent will assist you.")

# Run an interactive conversation loop
while True:
    # Ask user for input
    user_input = input("\nWhat do you need help with today?\nYou: ")

    # Break loop if user wants to quit
    if user_input.lower() in ["quit", "exit"]:
        print("Ending the conversation.")
        break

    # Determine the most appropriate agent based on user input
    selected_agent_name = select_agent(agents, user_input)
    selected_agent = next(agent for agent in agents if agent["name"] == selected_agent_name)

    # Reset messages to contain only the most recent user input for new prompts
    messages = [{"role": "user", "content": user_input}]

    # Run the selected agent to process the current context
    response = agent_respond(selected_agent, messages)
    if response:
        messages.append({"role": "assistant", "content": f"{selected_agent['name']} response: {response}"})
        print(f"{selected_agent['name']} response: {response}")
    else:
        print(f"No response from {selected_agent['name']}.")

Result

Welcome! We have three agents here to assist you:
1. Fact Checker: This agent helps with verifying information, providing accurate answers, and fact-checking.
2. Creative Thinker: This agent helps with brainstorming ideas, creative problem-solving, and thinking outside the box.
3. Joke Creator: This agent helps you by creating jokes and providing humor.
Feel free to ask any questions, and our most suitable agent will assist you.

What do you need help with today?
You: tell me a joke
Joke Creator response: Why did the scarecrow win an award? Because he was outstanding in his field!

What do you need help with today?
You: tell me a fact
Fact Checker response: Fact: The Earth is the only planet in our solar system known to support life.

What do you need help with today?
You: what is the meaning of life?
Creative Thinker response: The meaning of life is a deeply personal and subjective question that can have many different answers depending on individual beliefs, experiences, and perspectives. Some people find meaning in pursuing their passions, building connections with others, contributing to society, seeking personal growth, exploring spirituality, or simply in enjoying the present moment. Ultimately, the meaning of life is something that each person must decide for themselves based on what brings them fulfillment and purpose.

What do you need help with today?
You:
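One obvious tweak to the script above: select_agent matches a single keyword per agent, so something like "tell me something funny" would miss the Joke Creator. A minimal sketch of a scoring-based router that could replace it (the keyword lists here are made up for illustration):

```python
# Hypothetical extension of select_agent: score each agent by the
# number of keyword hits instead of matching one word, falling back
# to the Creative Thinker when nothing matches at all.
KEYWORDS = {
    "Joke Creator": ["joke", "funny", "laugh", "pun"],
    "Fact Checker": ["fact", "verify", "true", "accurate"],
    "Creative Thinker": ["idea", "brainstorm", "creative", "imagine"],
}

def select_agent_scored(user_input):
    text = user_input.lower()
    # Simple substring matching - fine for a sketch, crude in practice
    scores = {
        name: sum(word in text for word in words)
        for name, words in KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Default agent when no keyword matched
    return best if scores[best] > 0 else "Creative Thinker"
```

Swapping this in for select_agent would need no other changes to the loop, since it still returns an agent name.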

OpenAI chat with a bot from the Linux CLI

Available in Github at https://github.com/herepete/Fidelity/blob/main/chat_with_openai.py

A standalone script that lets you chat with OpenAI from the command line.

[ec2-user@ip-172-31-33-224 Fidelity]$ cat chat_with_openai.py
#!/usr/bin/python3
import openai
import os


# Your OpenAI API key
openai.api_key = os.getenv("OPENAI_API_KEY")

def chat_with_ai():
    messages = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

    print("Start chatting with the AI! Type 'exit' to end the conversation.")
    while True:
        # Get user input
        user_input = input("You: ")

        # Exit the chat loop if the user types 'exit'
        if user_input.lower() == 'exit':
            print("Ending the conversation. Goodbye!")
            break

        # Add the user's message to the conversation history
        messages.append({"role": "user", "content": user_input})

        # Get the AI's response
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            max_tokens=150  # Set the maximum number of tokens for each response
        )

        # Extract and print the AI's response
        ai_response = response['choices'][0]['message']['content']
        print(f"AI: {ai_response}")

        # Add the AI's response to the conversation history
        messages.append({"role": "assistant", "content": ai_response})


chat_with_ai()

Sample Run

]# ./chat_with_openai.py
Start chatting with the AI! Type 'exit' to end the conversation.
You: Tell me  a Joke
AI: Sure, here's a joke for you:

Why couldn't the bicycle stand up by itself?

Because it was two-tired!
You: nice one, tell me an intresting fact?
AI: Here's an interesting fact for you:

Honey never spoils! Archaeologists have found pots of honey in ancient Egyptian tombs that are over 3000 years old and still perfectly edible. Honey's high sugar content and low moisture levels create an environment that prevents the growth of bacteria and yeasts, making it one of the only foods that never spoils.
You:
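One thing to watch with the loop above: messages grows with every turn, so a long chat will eventually blow past the model's context window. A minimal sketch of capping the history while always keeping the system prompt (max_turns is an arbitrary illustrative limit):

```python
def trim_history(messages, max_turns=10):
    # Keep the system prompt plus only the most recent turns
    system, rest = messages[:1], messages[1:]
    return system + rest[-max_turns:]
```

Calling `messages = trim_history(messages)` just before each ChatCompletion.create call would keep requests bounded without changing anything the user sees.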

API, OpenAI & Finance Python Project

High level

So combining a few of my interests:
1) Python

2) Finance

3) AI

I have uploaded the script to

https://github.com/herepete/Fidelity

Setup steps

From an AWS Linux VM

yum install git
yum install pip
pip3.9 install bs4 pandas
pip3.9 install openai==0.28

Create an OpenAI account at https://platform.openai.com/docs/overview, create an API key, and add the key to your .bashrc, for example:

export OPENAI_API_KEY="bla"


Make sure it's visible by running:

 env | grep OPEN

Run the test API script (test_openai.py) and fix any issues.

Run option 1 on the menu within main.py and fix any issues.

Run option 2.
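I haven't reproduced test_openai.py here, but a smoke test in the same spirit might look like this (a sketch using the pinned openai==0.28 interface; the fail-fast key check and the exact prompt are my additions):

```python
import os

def api_key_present():
    # Fail fast before attempting any network call
    return bool(os.getenv("OPENAI_API_KEY"))

def smoke_test():
    if not api_key_present():
        return "OPENAI_API_KEY is not set - check your .bashrc"
    import openai  # openai==0.28, as pinned in the setup steps
    openai.api_key = os.getenv("OPENAI_API_KEY")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello"}],
        max_tokens=5,
    )
    return response["choices"][0]["message"]["content"]
```

Running `print(smoke_test())` with no key exported just prints the reminder instead of a stack trace.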

High Level Aim

A script to find a list of income funds I am interested in.

Then pass that list to OpenAI for some feedback.

What it does

Based on the config in config.py

Connect to Fidelity and, with a minimal number of calls, check each fund against the criteria, namely:

Is it an income fund?

Does it have a yield over 4%?

Is the cost under 1%?

Has it produced a return of over 3% in each of the last 1, 3 & 5 years?

If the API fails for any reason, or a criterion is not met, the fund is rejected straight away to reduce the number of API calls needed.

If all the tests are passed, some extra info is grabbed: Last Year's Yield and the Overall Morningstar Rating.

At the end you should have a list of funds which have passed all of the tests, and that information is then passed to OpenAI to get some feedback.
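The fail-fast logic described above can be sketched as a chain of early returns (the field names and exact thresholds here are illustrative assumptions; the real criteria come from config.py):

```python
def passes_criteria(fund):
    """Reject a fund at the first failed check, mirroring the
    'reject straight away' behaviour that saves API calls."""
    if not fund.get("is_income"):
        return False
    if fund.get("yield", 0) <= 4.0:    # yield must be over 4%
        return False
    if fund.get("fee", 100) >= 1.0:    # cost must be under 1%
        return False
    for period in ("y1", "y3", "y5"):  # return over 3% in each period
        if fund.get(period, 0) <= 3.0:
            return False
    return True
```

Because each check returns immediately, any later, more expensive lookups for that fund are simply never made.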

Considerations and Ideas

I am the first to admit this script needs tweaking and improvement, but my aim was to get a first version published rather than have a perfect solution.

I thought about multi-threading or multi-processing to speed everything up, but I don't want to hammer the API.

I could do more with the AI (e.g. LangChain), but my aim was to get version 1 working.

I did wonder about writing the results to a database like SQLite, but that's another option for version 2.

In the output I could use colour and ask OpenAI for more information.

You need quite a wide screen to display the output as designed.

Maybe play with different OpenAI models to check the output.

Maybe open a chat-type window from the command line to allow you to ask questions about the funds.

Play with the OpenAI instructions to get better answers; I haven't done any fine-tuning of the prompts.

Files

config.py – basic config; set testing to 0 if you want to unleash the full power, and set any criteria you wish on the other fields.

main.py – main script to run

test_openai.py – a basic test to check OpenAI is being called; it can be run on its own or from the menu in main.py.

failer_test_fund.txt – a log of which funds errored and why; this is only generated when option 2 from main.py is run. The funds listed here are marked as "Previously checked" so we don't unnecessarily test them again.

fidelity_funds.csv – your master list of funds to consider, about 3,000-ish. This file is only generated when option 1 from main.py is run. The first check in option 2 of main.py looks through all the fund names and removes anything without the word "Inc" in it.
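That first name-based pass can be sketched as a one-line filter (a hypothetical helper; the real code reads fidelity_funds.csv, and I'm assuming the goal is to keep the income, i.e. "Inc", share classes):

```python
def income_funds(names):
    # Case-insensitive check for "inc" anywhere in the fund name
    return [name for name in names if "inc" in name.lower()]
```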

Typical workflow

set up the environment

download the Git scripts

run main.py and choose option 1 to create your full fund lookup file

run main.py and choose option 2 – 10 funds are checked

edit config.py and change testing to 0

run main.py and choose option 2 – sit back, watch, and review the results

Interface

It's all command line, but I have tried to make it look as nice as possible.

The page should refresh:

Income Stocks Found – should be about 1,117-ish; all the income funds found in fidelity_funds.csv.

Previously Checked – funds found in failer_test_fund.txt, i.e. funds that failed in any previous runs.

This Round Checking – how many funds to check, based on the Db.config values.

This Round Rejected – how many funds failed.

This Round Suitable Funds Found – how many funds passed.

Verbose Stuff – a log of the 10 most recent things that happened.

Basic run with no Funds found

Basic run with Funds found

Note: anything below the line "**Analysis:**" is AI generated.

A screenshot didn't look as good here, so I have copied and pasted the output as a code block.

Income Stocks Found           Previously checked            This round Checking           This Round Rejected           This Round Suitable Funds found

1117                          537                           20                            18                            2

Verbose Stuff...
Starting work on - Rathbone M-A Strategic Growth Port S Inc
Yield check failure
Starting work on - Rathbone M-A Strategic Income Port S Inc
Yield check failure
Starting work on - Rathbone M-A Total Return Port S Inc
Yield check failure
Starting work on - Schroder High Yield Opportunities Z Inc
Suitable Fund Found=Schroder High Yield Opportunities Z Inc
Starting work on - Schroder Strategic Credit L Inc
Suitable Fund Found=Schroder Strategic Credit L Inc

Here are the raw results and the AI feedback on those funds...
Based on the provided data, here is a comparison of the two income funds:

| Fund Name                           | ISIN          | Fee (%) | Yield (%) | Frequency     | Y1_Annualized | Y3_Annualized | Y5_Annualized | Last Year's Yield | Morning Star Rating |
|------------------------------------|-------------------------------|--------|----------|----------------|----------------|----------------|----------------|------------------|---------------------
| Schroder High Yield Opportunities Z Inc | GB00B5143284 | 0.72   | 7.71      | Monthly        | 15             | 3             | 5                | 7.82                 | 5                   |
| Schroder Strategic Credit L Inc         | GB00B11DP098 | 0.67   | 6.33      | Semi-Annually | 12             | 3             | 4                | 5.90                 | 5                   |

**Analysis:**

**Schroder High Yield Opportunities Z Inc:**
- **Pros:**
  - Higher yield (7.71%)
  - Monthly frequency of payouts
  - Higher Y1_Annualized return (15%)
  - Higher Last Year's Yield (7.82%)
  - Morning Star Rating of 5

- **Cons:**
  - Slightly higher fee compared to the other fund (0.72%)
  - Lower Y3_Annualized and Y5_Annualized returns

**Schroder Strategic Credit L Inc:**
- **Pros:**
  - Lower fee (0.67%)
  - Lower fee than the other fund
  - Consistent Morning Star Rating of 5

- **Cons:**
  - Lower yield (6.33%)
  - Semi-annual frequency of payouts
  - Lower Y1_Annualized and Last Year's Yield
  - Lower Y3_Annualized and Y5_Annualized returns compared to the other fund

**Additional Research:**
- Further research into the credit quality of the underlying assets might provide insights into the risk profile of these funds.
- Comparison of the fund performance against relevant benchmark indices.

In summary, the Schroder High Yield Opportunities Z Inc fund offers a higher yield and better short-term performance, but at a slightly higher fee. On the other hand, the Schroder Strategic Credit L Inc fund has a lower fee and consistent Morning Star Rating, but with lower yield and overall returns. Investors may consider their investment goals, risk tolerance, and liquidity preferences when choosing between these two funds.
AI Analysis completed