This post is a write-up of notes from the AI Agents in LangGraph course.


First, let's pin down some important terms: Agent, Agent Workflow, and Agent Design Pattern.

  • Agent:
    • An LLM that can carry out a specific task autonomously.
    • In other words, an LLM equipped with various external tools (e.g. Search, Wikipedia, Math, Code Function, API, etc.).
  • Agent Workflow:
    • The iterative flow of work needed to complete a specific task.
    • Broadly, it repeats Planning, Execution, and Monitoring and Feedback.
    • We do not explicitly 'instruct' these behaviors; the LLM performs them on its own.
    • Iteration is needed because no expert, however good, can start from scratch and finish an entire task in a single pass.
  • Agent Design Pattern:
    • Planning: deciding which actions are needed to complete the task.
    • Tool use: finding and invoking the appropriate tool to carry out the actions decided in Planning.
    • Reflection: giving feedback and making improvements to produce a better result. (Multiple agents can take part here, i.e. multi-agent communication: give each agent a role so they critique, give feedback, and improve the work.)
    • Memory: recording which tasks have been done, which enables incremental improvement. (A toy sketch of how these patterns compose follows this list.)
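
Below is a minimal, toy sketch in Python of how these four patterns can compose into a single agent loop. The plan, act, and reflect functions here are hypothetical stand-ins for LLM and tool calls, not a real API:

def plan(task, memory):
    # Planning: decide the next step, informed by Memory
    return f"next step for {task!r} given {len(memory)} past notes"

def act(step):
    # Tool use: run the planned step with some tool
    return f"result of {step!r}"

def reflect(result):
    # Reflection: self-feedback deciding whether the result is good enough
    return "done" in result

memory = []  # Memory: a record of what has been done so far
task = "summarize a paper"
for _ in range(3):  # iterate rather than attempt the whole task in one shot
    step = plan(task, memory)
    result = act(step)
    memory.append(result)
    if reflect(result):
        break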

 

 

Representative agents (this course covers LangGraph from LangChain, which supports building agents like these):

  • ReAct (Synergizing Reasoning and Acting in Language Models):
    • An agent that solves problems by interleaving Reasoning and Action, using the result of each action as feedback.
    • It repeats this cycle, getting closer to completing the task with each iteration.
  • The agent in Self-Refine: Iterative Refinement with Self-Feedback:
    • An agent that incrementally improves its output through repeated self-feedback.
  • The agent in Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering:
    • An agent that automates code generation and produces better, more optimized code.
    • It splits into two main parts: Prompt Engineering, which crafts prompts that elicit better code, and Flow Engineering, which analyzes, refactors, performance-tunes, and tests and validates the generated code to arrive at correct code.

 

 

Next, what this course covers:

  • Agentic Search: a useful search tool that agents frequently use
  • Agent Components: components that receive the input and components that persist the agent's state

 

 

1. Build an Agent from Scratch

Here we'll build a simple agent and walk through how it works.

 

First, set up the OpenAI model:

import openai
import re
import httpx
import os
from dotenv import load_dotenv

_ = load_dotenv()
from openai import OpenAI

client = OpenAI()

chat_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}]
)

chat_completion.choices[0].message.content

 

 

Next, let's write a class for the agent.

  • Nothing special here: it holds a System Message that tells the model what kind of agent it is, and a message list that records the conversation history.
class Agent:
    def __init__(self, system=""):
        self.system = system
        self.messages = []
        if self.system:
            self.messages.append({"role": "system", "content": system})

    def __call__(self, message):
        self.messages.append({"role": "user", "content": message})
        result = self.execute()
        self.messages.append({"role": "assistant", "content": result})
        return result

    def execute(self):
        completion = client.chat.completions.create(
                        model="gpt-4o", 
                        temperature=0,
                        messages=self.messages)
        return completion.choices[0].message.content

 

 

Next, the prompt that instructs the agent:

  • The agent in this example is a ReAct agent. To complete a task, it emits a Thought and an Action, PAUSEs until the result of the action arrives, Observes that result, and repeats the cycle.
  • The prompt lists the tools the agent can use, and includes an example of how the agent should behave, written in a chain-of-thought (CoT) style.
prompt = """
You run in a loop of Thought, Action, PAUSE, Observation.
At the end of the loop you output an Answer
Use Thought to describe your thoughts about the question you have been asked.
Use Action to run one of the actions available to you - then return PAUSE.
Observation will be the result of running those actions.

Your available actions are:

calculate:
e.g. calculate: 4 * 7 / 3
Runs a calculation and returns the number - uses Python so be sure to use floating point syntax if necessary

average_dog_weight:
e.g. average_dog_weight: Collie
returns average weight of a dog when given the breed

Example session:

Question: How much does a Bulldog weigh?
Thought: I should look the dogs weight using average_dog_weight
Action: average_dog_weight: Bulldog
PAUSE

You will be called again with this:

Observation: A Bulldog weights 51 lbs

You then output:

Answer: A bulldog weights 51 lbs
""".strip()

 

 

Next, declare the tools the agent can use as plain functions.

def calculate(what):
    return eval(what)

def average_dog_weight(name):
    if name in "Scottish Terrier": 
        return("Scottish Terriers average 20 lbs")
    elif name in "Border Collie":
        return("a Border Collies average weight is 37 lbs")
    elif name in "Toy Poodle":
        return("a toy poodles average weight is 7 lbs")
    else:
        return("An average dog weights 50 lbs")

known_actions = {
    "calculate": calculate,
    "average_dog_weight": average_dog_weight
}

 

 

Now let's create the agent and run it, starting with the reasoning step.

abot = Agent(prompt)

result = abot("How much does a toy poodle weigh?")
print(result)

 

 

Reasoning output:

Thought: I should look up the average weight of a toy poodle using the available action.
Action: average_dog_weight: Toy Poodle
PAUSE

 

 

Next, a tool has to be used. The agent in this example does not invoke tools automatically, so call it manually:

result = average_dog_weight("Toy Poodle")
result

 

 

Tool call result:

'a toy poodles average weight is 7 lbs'

 

 

Now put the tool result into an Observation and run the next iteration of the loop. At this point the agent should be able to answer.

next_prompt = "Observation: {}".format(result)
abot(next_prompt)

 

 

Output:

'Answer: A toy poodle weighs an average of 7 lbs.'

 

 

The message history now contains the following:

abot.messages

 

 

Message history output:

[{'role': 'system',
  'content': 'You run in a loop of Thought, Action, PAUSE, Observation.\nAt the end of the loop you output an Answer\nUse Thought to describe your thoughts about the question you have been asked.\nUse Action to run one of the actions available to you - then return PAUSE.\nObservation will be the result of running those actions.\n\nYour available actions are:\n\ncalculate:\ne.g. calculate: 4 * 7 / 3\nRuns a calculation and returns the number - uses Python so be sure to use floating point syntax if necessary\n\naverage_dog_weight:\ne.g. average_dog_weight: Collie\nreturns average weight of a dog when given the breed\n\nExample session:\n\nQuestion: How much does a Bulldog weigh?\nThought: I should look the dogs weight using average_dog_weight\nAction: average_dog_weight: Bulldog\nPAUSE\n\nYou will be called again with this:\n\nObservation: A Bulldog weights 51 lbs\n\nYou then output:\n\nAnswer: A bulldog weights 51 lbs'},
 {'role': 'user', 'content': 'How much does a toy poodle weigh?'},
 {'role': 'assistant',
  'content': 'Thought: I should look up the average weight of a toy poodle using the available action.\nAction: average_dog_weight: Toy Poodle\nPAUSE'},
 {'role': 'user',
  'content': 'Observation: a toy poodles average weight is 7 lbs'},
 {'role': 'assistant',
  'content': 'Answer: A toy poodle weighs an average of 7 lbs.'}]

 

 

Now let's make the agent run this loop on its own.

action_re = re.compile(r'^Action: (\w+): (.*)$')   # python regular expression to select the action

known_actions = {
    "calculate": calculate,
    "average_dog_weight": average_dog_weight
}

def query(question, max_turns=5):
    i = 0
    bot = Agent(prompt)
    next_prompt = question
    while i < max_turns:
        i += 1
        result = bot(next_prompt)
        print(result)
        actions = [
            action_re.match(a) 
            for a in result.split('\n') 
            if action_re.match(a)
        ]
        if actions:
            # There is an action to run
            action, action_input = actions[0].groups()
            if action not in known_actions:
                raise Exception("Unknown action: {}: {}".format(action, action_input))
            print(" -- running {} {}".format(action, action_input))
            observation = known_actions[action](action_input)
            print("Observation:", observation)
            next_prompt = "Observation: {}".format(observation)
        else:
            return

question = """I have 2 dogs, a border collie and a scottish terrier. \
What is their combined weight"""
query(question)

 

 

2. LangGraph Components

In this section we look at the components LangGraph provides for building agents.

 

The simple agent we built earlier has the following component flow:

  • User Input:
    • The user's question.
  • Prompt:
    • A prompt that drives the ReAct-style flow.
    • Besides this agent's prompt, you can find many prompts written by others on the LangChain Hub.
  • LLM: the LLM that reasons and decides on actions.
  • Action:
    • Acts through a tool.
    • LangChain ships many tools as well; one said to be particularly well suited to AI agents is Tavily Search.
  • Observation: the result of running the tool is fed back into the prompt.
  • Loop: Prompt -> LLM -> Action -> Observation -> Prompt

 

 

LangGraph makes it easy to build the processing loop, which is where most of an agent's code would otherwise go:

  • An agent's processing flow forms a cyclic graph, and LangGraph lets you build exactly that kind of workflow.
  • It can persist the progress of a run, so you can track it or resume it.
  • It also lets you apply Human-in-the-loop, a powerful UX design for AI systems.
  • You can also compose flows that use multiple agents.

 

 

A LangGraph graph has three kinds of elements:

  • Nodes: agents or functions
  • Edges: connect the nodes
  • Conditional edges: edges that make a branching decision

 

 

LangGraph also provides Agent State. It can hold past actions and their results, the message history, intermediate results, the input, and so on, and can be connected to a persistent store if needed.

  • A field annotated with an Operator reducer supports only that operation: a field annotated with operator.add can only be appended to; it cannot be modified or deleted. (A toy illustration follows below.)
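
Here is a toy illustration, not LangGraph internals, of what the operator.add annotation means: when a node returns a partial state, the annotated field is merged with the old value through the reducer instead of being overwritten.

import operator
from typing import Annotated, TypedDict

class State(TypedDict):
    # operator.add is the reducer: updates are appended, never overwritten
    messages: Annotated[list, operator.add]

old = {"messages": ["first message"]}
update = {"messages": ["second message"]}

# LangGraph merges a node's output into the state roughly like this:
merged = operator.add(old["messages"], update["messages"])
print(merged)  # ['first message', 'second message']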

 

 

Now let's move on to the hands-on code for building an agent.

 

Start by importing the libraries we need:

  • StateGraph for building the graph, plus the END node
  • Python type hints for the graph state: typing (TypedDict for dict-shaped state and Annotated for attaching metadata to a type)
  • The message types that make up the state
  • ChatOpenAI, the OpenAI chat LLM
  • TavilySearchResults, the search engine tool the agent will use
from dotenv import load_dotenv
_ = load_dotenv()

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults

 

 

Create the search engine tool first:

tool = TavilySearchResults(max_results=4) #increased number of results
print(type(tool))
print(tool.name)

 

 

And a simple agent state:

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

 

 

Next, build the LangGraph workflow:

  • Three functions are used here: one that calls the LLM (to pick tools, produce the final answer, and so on), one that executes the chosen tool, and one that checks whether a tool call exists.
  • Then there is initialization code that sets up the LangGraph workflow.
  • Looking at the initialization more closely:
    • Build the workflow graph, adding the nodes (a function, LLM, or agent), the edge, and the conditional edge.
    • Set the graph's entry point and compile the graph so it is ready to run.
    • Finally, register the tools and bind them to the model.
  • Now the agent functions that make up the workflow:
    • exists_action: decides whether the LLM's response contains an action. It takes the last message from the state and checks whether it has tool calls.
    • call_openai: calls the LLM with the messages. Given the input, it may ask the LLM which tool to use, send a tool's result back to the LLM as a follow-up, or have the LLM write the final answer.
    • take_action: if the LLM decided to use a tool, runs it and stores the result. The code checks that the tool name is valid because the LLM can hallucinate tool names.
class Agent:

    def __init__(self, model, tools, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges(
            "llm",
            self.exists_action,
            {True: "action", False: END}
        )
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile()
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def exists_action(self, state: AgentState):
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            if not t['name'] in self.tools:      # check for bad tool name from LLM
                print("\n ....bad tool name....")
                result = "bad tool name, retry"  # instruct LLM to retry if bad
            else:
                result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}

 

 

Now set the system prompt and build the agent with LangGraph:

  • Note that tools can be called in parallel or in sequence (a sequential call uses the result of the previous tool call to drive the next one).
prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""

model = ChatOpenAI(model="gpt-3.5-turbo")  #reduce inference cost
abot = Agent(model, [tool], system=prompt)

 

 

Now let's run a task.

messages = [HumanMessage(content="What is the weather in sf?")]
result = abot.graph.invoke({"messages": messages})

 

 

A log like this will be printed, showing the tool was used:

Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_JXvN12RV7urRUgPT9dTW6GV9'}
Back to the model!

 

 

And result holds the full message list.

result

 

 

result output:

{'messages': [HumanMessage(content='What is the weather in sf?'),
  AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_JXvN12RV7urRUgPT9dTW6GV9', 'function': {'arguments': '{"query":"current weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 153, 'total_tokens': 175}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-71ee3f61-af86-419b-974d-127d2c1ca222-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_JXvN12RV7urRUgPT9dTW6GV9'}]),
  ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1718608610, \'localtime\': \'2024-06-17 0:16\'}, \'current\': {\'last_updated_epoch\': 1718608500, \'last_updated\': \'2024-06-17 00:15\', \'temp_c\': 11.8, \'temp_f\': 53.3, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 14.8, \'wind_kph\': 23.8, \'wind_degree\': 284, \'wind_dir\': \'WNW\', \'pressure_mb\': 1011.0, \'pressure_in\': 29.87, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 77, \'cloud\': 5, \'feelslike_c\': 9.5, \'feelslike_f\': 49.1, \'windchill_c\': 9.5, \'windchill_f\': 49.1, \'heatindex_c\': 11.8, \'heatindex_f\': 53.3, \'dewpoint_c\': 7.9, \'dewpoint_f\': 46.3, \'vis_km\': 10.0, \'vis_miles\': 6.0, \'uv\': 1.0, \'gust_mph\': 23.1, \'gust_kph\': 37.2}}"}, {\'url\': \'https://www.accuweather.com/en/us/san-francisco/94103/june-weather/347629\', \'content\': \'Get the monthly weather forecast for San Francisco, CA, including daily high/low, historical averages, to help you plan ahead.\'}, {\'url\': \'https://forecast.weather.gov/MapClick.php?lat=37.77493&lon=-122.41942\', \'content\': \'Current conditions at SAN FRANCISCO DOWNTOWN (SFOC1) Lat: 37.77056°NLon: 122.42694°WElev: 150.0ft. NA. 54°F. 12°C. Humidity: 88%: ... 2024-6pm PDT Jun 17, 2024 . ... Hourly Weather Forecast. National Digital Forecast Database. High Temperature. Chance of Precipitation. ACTIVE ALERTS Toggle menu. Warnings By State; Excessive Rainfall;\'}, {\'url\': \'https://www.weathertab.com/en/c/e/06/united-states/california/san-francisco/\', \'content\': \'Temperature Forecast Normal. Avg High Temps 60 to 70 °F. Avg Low Temps 45 to 60 °F. Explore comprehensive June 2024 weather forecasts for San Francisco, including daily high and low temperatures, precipitation risks, and monthly temperature trends. Featuring detailed day-by-day forecasts, dynamic graphs of daily rain probabilities, and ...\'}]', name='tavily_search_results_json', tool_call_id='call_JXvN12RV7urRUgPT9dTW6GV9'),
  AIMessage(content='The current weather in San Francisco is as follows:\n- Temperature: 11.8°C (53.3°F)\n- Condition: Clear\n- Wind: 23.8 km/h (WNW)\n- Humidity: 77%\n- Pressure: 1011.0 mb\n- Visibility: 10.0 km (6.0 miles)\n- UV Index: 1.0\n\nIf you need more detailed information or have any other questions, feel free to ask!', response_metadata={'token_usage': {'completion_tokens': 100, 'prompt_tokens': 905, 'total_tokens': 1005}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-0b3e091d-6308-4499-803f-d70ee11ec2d0-0')]}

 

 

To see only the last message in result:

result['messages'][-1].content

 

 

The output is:

'The current weather in San Francisco is as follows:\n- Temperature: 11.8°C (53.3°F)\n- Condition: Clear\n- Wind: 23.8 km/h (WNW)\n- Humidity: 77%\n- Pressure: 1011.0 mb\n- Visibility: 10.0 km (6.0 miles)\n- UV Index: 1.0\n\nIf you need more detailed information or have any other questions, feel free to ask!'

 

 

Agents built with LangGraph also support calling tools in parallel.

messages = [HumanMessage(content="What is the weather in SF and LA?")]
result = abot.graph.invoke({"messages": messages})

 

 

They also support flows that require sequential tool use.

# Note, the query was modified to produce more consistent results. 
# Results may vary per run and over time as search information and models change.

query = "Who won the super bowl in 2024? In what state is the winning team headquarters located? \
What is the GDP of that state? Answer each question." 
messages = [HumanMessage(content=query)]

model = ChatOpenAI(model="gpt-4o")  # requires more advanced model
abot = Agent(model, [tool], system=prompt)
result = abot.graph.invoke({"messages": messages})

 

 

3. Agentic Search Tools

Let's look at the search tools agents commonly use, and compare them with the search tools we humans typically use.

Internally, an agentic search tool behaves like this:

  • It decomposes a complex query into several key subqueries.
  • It searches with each subquery and keeps only the most relevant top results.
  • It has a lot in common with RAG.

 

 

Now let's try such a search tool.

 

Here we'll use TavilyClient as the agentic search tool.

 

The code for using TavilyClient looks like this:

# libraries
from dotenv import load_dotenv
import os
from tavily import TavilyClient

# load environment variables from .env file
_ = load_dotenv()

# connect
client = TavilyClient(api_key=os.environ.get("TAVILY_API_KEY"))

# run search
result = client.search("What is in Nvidia's new Blackwell GPU?",
                       include_answer=True)

# print the answer
result["answer"]

 

 

The answer comes back directly.

 

result["answer"] 은 다음과 같다:

'The new Nvidia Blackwell GPU architecture is set to power the RTX 50-series graphics cards. It enables organizations to build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor. The Blackwell B200 GPU delivers up to 20 petaflops of compute and significant performance improvements over its predecessor. The architecture features a 512-bit GDDR7 memory configuration for the GB202 model and a 256-bit configuration for the GB203 model.'

 

 

And the full result object:

{'query': "What is in Nvidia's new Blackwell GPU?",
 'follow_up_questions': None,
 'answer': 'The new Nvidia Blackwell GPU, specifically the Blackwell B200, is part of the Blackwell platform and is designed to power a new era of computing by enabling organizations to build and run real-time generative AI on trillion-parameter large language models at a significantly reduced cost and energy consumption compared to its predecessor. The Blackwell B200 GPU offers up to 20 petaflops of compute power and is set to potentially more than quadruple the performance of its predecessor.',
 'images': None,
 'results': [{'title': 'Nvidia Blackwell and GeForce RTX 50-Series GPUs: Rumors, specifications ...',
   'url': 'https://www.tomshardware.com/pc-components/gpus/nvidia-blackwell-rtx-50-series-gpus-everything-we-know',
   'content': "Nvidia's Blackwell architecture is on the horizon, set to power the RTX 50-series graphics cards. ... That's a full decade of new Nvidia GPU architectures arriving approximately every two years ...",
   'score': 0.95815,
   'raw_content': None},
  {'title': "Nvidia's new Blackwell GPU specs just leaked, including RTX 5090",
   'url': 'https://www.pcgamesn.com/nvidia/blackwell-specs-rtx-50',
   'content': 'The new Nvidia Blackwell spec leak comes via regular X/Twitter-based leaker @kopite7kimi who simply put forth the following list of Nvidia GPU specs: GB202 12*8 512-bit GDDR7. GB203 7*6 256-bit ...',
   'score': 0.94677,
   'raw_content': None},
  {'title': 'NVIDIA Blackwell Platform Arrives to Power a New Era of Computing',
   'url': 'https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing',
   'content': 'GTC— Powering a new era of computing, NVIDIA today announced that the NVIDIA Blackwell platform has arrived — enabling organizations everywhere to build and run real-time generative AI on trillion-parameter large language models at up to 25x less cost and energy consumption than its predecessor. The Blackwell GPU architecture features six ...',
   'score': 0.94421,
   'raw_content': None},
  {'title': "Nvidia's next-gen AI GPU is 4X faster than Hopper: Blackwell B200 GPU ...",
   'url': 'https://www.tomshardware.com/pc-components/gpus/nvidias-next-gen-ai-gpu-revealed-blackwell-b200-gpu-delivers-up-to-20-petaflops-of-compute-and-massive-improvements-over-hopper-h100',
   'content': 'Nvidia revealed its upcoming Blackwell B200 GPU at GTC 2024, which will power the next generation of AI supercomputers and potentially more than quadruple the performance of its predecessor. Here ...',
   'score': 0.93731,
   'raw_content': None},
  {'title': "Nvidia reveals Blackwell B200 GPU, the 'world's most ... - The Verge",
   'url': 'https://www.theverge.com/2024/3/18/24105157/nvidia-blackwell-gpu-b200-ai',
   'content': 'But perhaps Nvidia is about to extend its lead — with the new Blackwell B200 GPU and GB200 "superchip.". Nvidia CEO Jensen Huang holds up his new GPU on the left, next to an H100 on the ...',
   'score': 0.93296,
   'raw_content': None}],
 'response_time': 4.36}

 

 

By contrast, the kind of search we humans usually do can be driven like this:

  • Crawl the page and parse the HTML document to pull out the content.
  • Running search() returns URLs, and scrape_weather_info() fetches the HTML for a given URL.
  • The code after that parses it to extract the body text.
# choose location (try to change to your own city!)

city = "San Francisco"

query = f"""
    what is the current weather in {city}?
    Should I travel there today?
    "weather.com"
"""

import requests
from bs4 import BeautifulSoup
from duckduckgo_search import DDGS
import re

ddg = DDGS()

def search(query, max_results=6):
    try:
        results = ddg.text(query, max_results=max_results)
        return [i["href"] for i in results]
    except Exception as e:
        print(f"returning previous results due to exception reaching ddg.")
        results = [ # cover case where DDG rate limits due to high deeplearning.ai volume
            "https://weather.com/weather/today/l/USCA0987:1:US",
            "https://weather.com/weather/hourbyhour/l/54f9d8baac32496f6b5497b4bf7a277c3e2e6cc5625de69680e6169e7e38e9a8",
        ]
        return results  

for i in search(query):
    print(i)

def scrape_weather_info(url):
    """Scrape content from the given URL"""
    if not url:
        return "Weather information could not be found."

    # fetch data
    headers = {'User-Agent': 'Mozilla/5.0'}
    response = requests.get(url, headers=headers)
    if response.status_code != 200:
        return "Failed to retrieve the webpage."

    # parse result
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup

# use DuckDuckGo to find websites and take the first result
url = search(query)[0]

# scrape first website
soup = scrape_weather_info(url)

print(f"Website: {url}\n\n")
print(str(soup.body)[:50000]) # limit long outputs

# extract text
weather_data = []
for tag in soup.find_all(['h1', 'h2', 'h3', 'p']):
    text = tag.get_text(" ", strip=True)
    weather_data.append(text)

# combine all elements into a single string
weather_data = "\n".join(weather_data)

# remove all spaces from the combined text
weather_data = re.sub(r'\s+', ' ', weather_data)

print(f"Website: {url}\n\n")
print(weather_data)

 

 

The extracted HTML body looks like this:

recents Specialty Forecasts San Francisco, CA Small Craft Advisory Today's Forecast for San Francisco, CA Morning Afternoon Evening Overnight Weather Today in San Francisco, CA 5:47 am 8:34 pm Don't Miss Hourly Forecast Now 2 am 3 am 4 am 5 am Outside That's Not What Was Expected Daily Forecast Today Tue 18 Wed 19 Thu 20 Fri 21 Radar We Love Our Critters Summer Skin Survival Guide Home, Garage & Garden Heat Dome Vs. Heat Wave Weather in your inbox Your local forecast, plus daily trivia, stunning photos and our meteorologists’ top picks. All in one place, every weekday morning. By signing up, you're opting in to receive the Morning Brief email newsletter. To manage your data, visit Data Rights . Terms of Use | Privacy Policy Health News For You Seasonal Spotlight Stay Safe Air Quality Index Air quality is considered satisfactory, and air pollution poses little or no risk. Health & Activities Seasonal Allergies and Pollen Count Forecast Grass pollen is moderate in your area Cold & Flu Forecast Flu risk is low in your area We recognize our responsibility to use data and technology for good. We may use or share your data with our data vendors. Take control of your data. © The Weather Company, LLC 2024

 

 

Because regular search returns flat text like this, it is not the format agents prefer. Agents prefer structured output that carries meaning, like the following.

# run search
result = client.search(query, max_results=1)

# print first result
data = result["results"][0]["content"]

print(data)

 

 

{'location': {'name': 'San Francisco', 'region': 'California', 'country': 'United States of America', 'lat': 37.78, 'lon': -122.42, 'tz_id': 'America/Los_Angeles', 'localtime_epoch': 1718614008, 'localtime': '2024-06-17 1:46'}, 'current': {'last_updated_epoch': 1718613900, 'last_updated': '2024-06-17 01:45', 'temp_c': 11.8, 'temp_f': 53.3, 'is_day': 0, 'condition': {'text': 'Clear', 'icon': '//cdn.weatherapi.com/weather/64x64/night/113.png', 'code': 1000}, 'wind_mph': 14.8, 'wind_kph': 23.8, 'wind_degree': 284, 'wind_dir': 'WNW', 'pressure_mb': 1011.0, 'pressure_in': 29.87, 'precip_mm': 0.0, 'precip_in': 0.0, 'humidity': 77, 'cloud': 5, 'feelslike_c': 9.5, 'feelslike_f': 49.1, 'windchill_c': 9.5, 'windchill_f': 49.1, 'heatindex_c': 11.8, 'heatindex_f': 53.3, 'dewpoint_c': 7.9, 'dewpoint_f': 46.3, 'vis_km': 10.0, 'vis_miles': 6.0, 'uv': 1.0, 'gust_mph': 23.1, 'gust_kph': 37.2}}

 

 

To pretty-print it, do the following. LLMs and agents answer better when given structured input.

import json
from pygments import highlight, lexers, formatters

# parse JSON
parsed_json = json.loads(data.replace("'", '"'))

# pretty print JSON with syntax highlighting
formatted_json = json.dumps(parsed_json, indent=4)
colorful_json = highlight(formatted_json,
                          lexers.JsonLexer(),
                          formatters.TerminalFormatter())

print(colorful_json)
{
    "location": {
        "name": "San Francisco",
        "region": "California",
        "country": "United States of America",
        "lat": 37.78,
        "lon": -122.42,
        "tz_id": "America/Los_Angeles",
        "localtime_epoch": 1718614008,
        "localtime": "2024-06-17 1:46"
    },
    "current": {
        "last_updated_epoch": 1718613900,
        "last_updated": "2024-06-17 01:45",
        "temp_c": 11.8,
        "temp_f": 53.3,
        "is_day": 0,
        "condition": {
            "text": "Clear",
            "icon": "//cdn.weatherapi.com/weather/64x64/night/113.png",
            "code": 1000
        },
        "wind_mph": 14.8,
        "wind_kph": 23.8,
        "wind_degree": 284,
        "wind_dir": "WNW",
        "pressure_mb": 1011.0,
        "pressure_in": 29.87,
        "precip_mm": 0.0,
        "precip_in": 0.0,
        "humidity": 77,
        "cloud": 5,
        "feelslike_c": 9.5,
        "feelslike_f": 49.1,
        "windchill_c": 9.5,
        "windchill_f": 49.1,
        "heatindex_c": 11.8,
        "heatindex_f": 53.3,
        "dewpoint_c": 7.9,
        "dewpoint_f": 46.3,
        "vis_km": 10.0,
        "vis_miles": 6.0,
        "uv": 1.0,
        "gust_mph": 23.1,
        "gust_kph": 37.2
    }
}
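
One caveat: the data.replace("'", '"') trick above only works because this payload happens to contain no apostrophes. The content string is a Python dict literal rather than real JSON, so a more robust way to parse it is ast.literal_eval. A small sketch, using a hypothetical payload in place of the real search result:

import ast
import json

# hypothetical payload shaped like the Tavily content string above,
# including an apostrophe that would break the quote-replacement trick
data = "{'location': {'name': \"O'Hare\"}, 'current': {'temp_c': 11.8}}"

parsed = ast.literal_eval(data)      # safely evaluates the dict literal
print(json.dumps(parsed, indent=4))  # pretty-printing now survives apostrophes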

 

 

4. Persistence and Streaming

Let's look at LangGraph's persistence and streaming features.

 

Persistence continuously saves the LangGraph agent's state. With it you can resume work that stopped midway, or keep the history around.

 

Agents whose tasks take a long time to process will want persistence.

 

Streaming surfaces the events happening inside the agent as they occur, which helps you see how it is working.

 

Streaming comes in two granularities: token by token, or message by message.

 

Let's build a LangGraph agent with persistence enabled:

  • SqliteSaver is used as the persistence store here; with ":memory:" it only keeps state in memory. You can of course use another data store instead, such as PostgreSQL.
  • The checkpointer is passed in when the graph is compiled.
from dotenv import load_dotenv

_ = load_dotenv()

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults

tool = TavilySearchResults(max_results=2)

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string(":memory:")

class Agent:
    def __init__(self, model, tools, checkpointer, system=""):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile(checkpointer=checkpointer)
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}

prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatOpenAI(model="gpt-4o")
abot = Agent(model, [tool], system=prompt, checkpointer=memory)

 

 

Now let's use streaming to see what is happening internally:

  • Think of the thread config as a session: the agent records its history in the persistence store per session.
  • Because each thread is its own session, the agent can serve requests from multiple users.
messages = [HumanMessage(content="What is the weather in sf?")]

thread = {"configurable": {"thread_id": "1"}}

for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v['messages'])

 

 

The message-level streaming output looks like this:

[AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_k7yyf5QjNSIKngn3D5A6Jq8I', 'function': {'arguments': '{"query":"current weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 151, 'total_tokens': 173}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_9cb5d38cf7', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2059886e-a5ba-4fd7-89d8-90f8f76f3340-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_k7yyf5QjNSIKngn3D5A6Jq8I'}])]
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_k7yyf5QjNSIKngn3D5A6Jq8I'}
Back to the model!

[ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1718692086, \'localtime\': \'2024-06-17 23:28\'}, \'current\': {\'last_updated_epoch\': 1718691300, \'last_updated\': \'2024-06-17 23:15\', \'temp_c\': 13.0, \'temp_f\': 55.5, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 4.7, \'wind_kph\': 7.6, \'wind_degree\': 251, \'wind_dir\': \'WSW\', \'pressure_mb\': 1010.0, \'pressure_in\': 29.82, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 71, \'cloud\': 0, \'feelslike_c\': 12.7, \'feelslike_f\': 54.9, \'windchill_c\': 12.7, \'windchill_f\': 54.9, \'heatindex_c\': 13.0, \'heatindex_f\': 55.5, \'dewpoint_c\': 7.7, \'dewpoint_f\': 45.8, \'vis_km\': 10.0, \'vis_miles\': 6.0, \'uv\': 1.0, \'gust_mph\': 9.9, \'gust_kph\': 15.9}}"}, {\'url\': \'https://www.wunderground.com/hourly/us/ca/san-francisco/KCASANFR2002/date/2024-6-18\', \'content\': \'Current Weather for Popular Cities . San Francisco, CA warning 53 ° F Fair; Manhattan, NY 67 ° F Sunny; Schiller Park, IL (60176) 61 ° F Partly Cloudy; Boston, MA 66 ° F Mostly Cloudy; Houston ...\'}]', name='tavily_search_results_json', tool_call_id='call_k7yyf5QjNSIKngn3D5A6Jq8I')]

[AIMessage(content='The current weather in San Francisco is clear with a temperature of 13°C (55.5°F). The wind is blowing from the west-southwest (WSW) at 4.7 mph (7.6 kph), and there is no precipitation. The humidity is at 71%, and visibility is 10 km (6 miles).\n\n![Weather Icon](//cdn.weatherapi.com/weather/64x64/night/113.png)', response_metadata={'token_usage': {'completion_tokens': 91, 'prompt_tokens': 687, 'total_tokens': 778}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_f4e629d0a5', 'finish_reason': 'stop', 'logprobs': None}, id='run-bca6d83c-d969-4f62-a14e-acfb93b66b4a-0')]

 

 

Next, to check that the history is retained, tweak the question slightly:

messages = [HumanMessage(content="What about in la?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

 

 

As you can see, it works.

{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_8NA7TWGrbiG5cKDPSyDwaw3J', 'function': {'arguments': '{"query":"current weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 790, 'total_tokens': 812}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_f4e629d0a5', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-be7979d8-f86a-4d37-a928-d0a528edc04e-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_8NA7TWGrbiG5cKDPSyDwaw3J'}])]}
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Los Angeles'}, 'id': 'call_8NA7TWGrbiG5cKDPSyDwaw3J'}
Back to the model!
{'messages': [ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'Los Angeles\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 34.05, \'lon\': -118.24, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1718692386, \'localtime\': \'2024-06-17 23:33\'}, \'current\': {\'last_updated_epoch\': 1718692200, \'last_updated\': \'2024-06-17 23:30\', \'temp_c\': 18.9, \'temp_f\': 66.0, \'is_day\': 0, \'condition\': {\'text\': \'Overcast\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/122.png\', \'code\': 1009}, \'wind_mph\': 3.8, \'wind_kph\': 6.1, \'wind_degree\': 120, \'wind_dir\': \'ESE\', \'pressure_mb\': 1007.0, \'pressure_in\': 29.74, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 75, \'cloud\': 100, \'feelslike_c\': 18.9, \'feelslike_f\': 66.0, \'windchill_c\': 19.2, \'windchill_f\': 66.6, \'heatindex_c\': 19.4, \'heatindex_f\': 66.9, \'dewpoint_c\': 12.1, \'dewpoint_f\': 53.8, \'vis_km\': 16.0, \'vis_miles\': 9.0, \'uv\': 1.0, \'gust_mph\': 7.0, \'gust_kph\': 11.2}}"}, {\'url\': \'https://www.weathertab.com/en/c/e/06/united-states/california/los-angeles/\', \'content\': \'Explore comprehensive June 2024 weather forecasts for Los Angeles, including daily high and low temperatures, precipitation risks, and monthly temperature trends. Featuring detailed day-by-day forecasts, dynamic graphs of daily rain probabilities, and temperature trends to help you plan ahead. ... 18 80°F 54°F 27°C 12°C 39% 19 80°F 54°F ...\'}]', name='tavily_search_results_json', tool_call_id='call_8NA7TWGrbiG5cKDPSyDwaw3J')]}
{'messages': [AIMessage(content='The current weather in Los Angeles is overcast with a temperature of 18.9°C (66.0°F). The wind is blowing from the east-southeast (ESE) at 3.8 mph (6.1 kph), and there is no precipitation. The humidity is at 75%, and visibility is 16 km (9 miles).\n\n![Weather Icon](//cdn.weatherapi.com/weather/64x64/night/122.png)', response_metadata={'token_usage': {'completion_tokens': 94, 'prompt_tokens': 1339, 'total_tokens': 1433}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_f4e629d0a5', 'finish_reason': 'stop', 'logprobs': None}, id='run-a2ff2111-46eb-4518-aefe-74bdcbaa7a90-0')]}
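
Memory is kept per thread. As a quick aside (a sketch, not part of the course transcript above): ask the same follow-up on a fresh thread_id and the checkpointer has no history to draw on, so the agent cannot resolve "la" from context.

messages = [HumanMessage(content="What about in la?")]
thread = {"configurable": {"thread_id": "2"}}  # new session, empty history
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)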

 

 

Next, let's stream at the token level:

from langgraph.checkpoint.aiosqlite import AsyncSqliteSaver

memory = AsyncSqliteSaver.from_conn_string(":memory:")
abot = Agent(model, [tool], system=prompt, checkpointer=memory)

messages = [HumanMessage(content="What is the weather in SF?")]
thread = {"configurable": {"thread_id": "4"}}
async for event in abot.graph.astream_events({"messages": messages}, thread, version="v1"):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            # Empty content in the context of OpenAI means 
            # that the model is asking for a tool to be invoked.
            # So we only print non-empty content
            print(content, end="|")
Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_1ETM2sdzldjnBAsh03OZnsA0'}
Back to the model!
The| current| weather| in| San| Francisco| is| clear| with| a| temperature| of| approximately| |55|.|5|°F| (|13|°C|).| The| wind| is| coming| from| the| west|-s|outh|west| (|WS|W|)| at| about| |4|.|7| mph| (|7|.|6| k|ph|),| and| the| humidity| is| at| |71|%.| The| visibility| is| |10| kilometers| (|6| miles|),| and| there| is| no| precipitation|.| The| UV| index| is| |1|,| indicating| low| exposure|.|

 

 

5. Human in the loop

Human-in-the-loop is a UX pattern that strikes a balance between humans doing all the work and AI doing all the work.

 

If a human does everything it is tedious, and if the AI does everything it can be wrong, so a human steps in along the way to supervise and keep things on track.

 

With this pattern the AI can also collect extra feedback, so it may become more accurate over time.

 

Here we'll set an interrupt so the agent stops before selecting a tool, and manually redirect the work when it is not going the way we want.

 

The mechanism is simple: the agent builds up its history message by message. Starting from our question, every step the AI takes, like choosing a tool or thinking, accumulates as a message.

 

We just modify these messages to steer things in the direction we want.

 

The agent also stores the state that these messages make up.

 

That is, the state holding message 1 is State 1, the state with message 2 added on top is State 2, and so on.

 

We can look up a particular state, change its messages, and add the result as a new state. That also makes it possible to go back to a previous state and continue working from there.

 

Let's go through these one by one.

 

First, build the agent. Here a custom reduce_messages() function is added so messages can be modified: if a message with the same id already exists it is replaced, otherwise it is appended.

 

The graph is also compiled so an interrupt fires when the agent is about to enter the action node (i.e. about to use a tool). In this example that lets us check, once the agent picks a tool, whether it picked the right tool with the right parameters.

from dotenv import load_dotenv

_ = load_dotenv()

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string(":memory:")
tool = TavilySearchResults(max_results=2)  # search tool, re-created so this cell is self-contained

from uuid import uuid4
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage

"""
In previous examples we've annotated the `messages` state key
with the default `operator.add` or `+` reducer, which always
appends new messages to the end of the existing messages array.

Now, to support replacing existing messages, we annotate the
`messages` key with a custom reducer function, which replaces
messages with the same `id`, and appends them otherwise.
"""
def reduce_messages(left: list[AnyMessage], right: list[AnyMessage]) -> list[AnyMessage]:
    # assign ids to messages that don't have them
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    # merge the new messages with the existing messages
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            # replace any existing messages with the same id
            if existing.id == message.id:
                merged[i] = message
                break
        else:
            # append any new messages to the end
            merged.append(message)
    return merged

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], reduce_messages]

class Agent:
    def __init__(self, model, tools, system="", checkpointer=None):
        self.system = system
        graph = StateGraph(AgentState)
        graph.add_node("llm", self.call_openai)
        graph.add_node("action", self.take_action)
        graph.add_conditional_edges("llm", self.exists_action, {True: "action", False: END})
        graph.add_edge("action", "llm")
        graph.set_entry_point("llm")
        self.graph = graph.compile(
            checkpointer=checkpointer,
            interrupt_before=["action"]
        )
        self.tools = {t.name: t for t in tools}
        self.model = model.bind_tools(tools)

    def call_openai(self, state: AgentState):
        messages = state['messages']
        if self.system:
            messages = [SystemMessage(content=self.system)] + messages
        message = self.model.invoke(messages)
        return {'messages': [message]}

    def exists_action(self, state: AgentState):
        print(state)
        result = state['messages'][-1]
        return len(result.tool_calls) > 0

    def take_action(self, state: AgentState):
        tool_calls = state['messages'][-1].tool_calls
        results = []
        for t in tool_calls:
            print(f"Calling: {t}")
            result = self.tools[t['name']].invoke(t['args'])
            results.append(ToolMessage(tool_call_id=t['id'], name=t['name'], content=str(result)))
        print("Back to the model!")
        return {'messages': results}

prompt = """You are a smart research assistant. Use the search engine to look up information. \
You are allowed to make multiple calls (either together or in sequence). \
Only look up information when you are sure of what you want. \
If you need to look up some information before asking a follow up question, you are allowed to do that!
"""
model = ChatOpenAI(model="gpt-3.5-turbo")
abot = Agent(model, [tool], system=prompt, checkpointer=memory)

 

 

Now let's check that the interrupt fires.

messages = [HumanMessage(content="Whats the weather in SF?")]
thread = {"configurable": {"thread_id": "1"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

 

 

Output:

  • Looking at the AIMessage, you can see the agent has emitted a message saying it will call a tool.
{'messages': [HumanMessage(content='Whats the weather in SF?', id='28fb92dc-c2af-49a5-820e-ff5276495b1f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'id': 'call_CvsqwzrE80AFykbCAzvGnfFS', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-a600b5ad-db4d-4a6d-94f0-ecd9772ca8c1-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_CvsqwzrE80AFykbCAzvGnfFS'}]), ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1718694301, \'localtime\': \'2024-06-18 0:05\'}, \'current\': {\'last_updated_epoch\': 1718694000, \'last_updated\': \'2024-06-18 00:00\', \'temp_c\': 12.2, \'temp_f\': 53.9, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 3.8, \'wind_kph\': 6.1, \'wind_degree\': 230, \'wind_dir\': \'SW\', \'pressure_mb\': 1010.0, \'pressure_in\': 29.82, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 76, \'cloud\': 2, \'feelslike_c\': 11.9, \'feelslike_f\': 53.4, \'windchill_c\': 11.9, \'windchill_f\': 53.4, \'heatindex_c\': 12.2, \'heatindex_f\': 53.9, \'dewpoint_c\': 7.9, \'dewpoint_f\': 46.3, \'vis_km\': 10.0, \'vis_miles\': 6.0, \'uv\': 1.0, \'gust_mph\': 7.3, \'gust_kph\': 11.8}}"}, {\'url\': \'https://www.timeanddate.com/weather/usa/san-francisco/historic\', \'content\': \'San Francisco Weather History for the Previous 24 Hours Show weather for: Previous 24 hours June 17, 2024 June 16, 2024 June 15, 2024 June 14, 2024 June 13, 2024 June 12, 2024 June 11, 2024 June 10, 2024 June 9, 2024 June 8, 2024 June 7, 2024 June 6, 2024 June 5, 2024 June 4, 2024 June 3, 2024 June 2, 2024\'}]', name='tavily_search_results_json', id='89c92aba-b1b7-4173-80bb-6735351357ad', tool_call_id='call_CvsqwzrE80AFykbCAzvGnfFS'), AIMessage(content='The weather in San Francisco is currently clear with a temperature of 53.9°F (12.2°C). The wind is coming from the southwest at 6.1 km/h. The humidity is at 76%, and there is no precipitation.', response_metadata={'finish_reason': 'stop', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 52, 'prompt_tokens': 751, 'total_tokens': 803}}, id='run-a98f82b0-e247-4b22-91c3-113227427d6c-0'), HumanMessage(content='Whats the weather in SF?', id='a6f2a9a3-566f-4739-a639-faba2e8f0d7f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_segFEKNv1pIo5pSFIE6Eshg9', 'function': {'arguments': '{"query":"weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 816, 'total_tokens': 837}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-90c775e6-42c9-4735-a377-531eb795dd00-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_segFEKNv1pIo5pSFIE6Eshg9'}])]}

{'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_segFEKNv1pIo5pSFIE6Eshg9', 'function': {'arguments': '{"query":"weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 21, 'prompt_tokens': 816, 'total_tokens': 837}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-90c775e6-42c9-4735-a377-531eb795dd00-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_segFEKNv1pIo5pSFIE6Eshg9'}])]}

 

 

Next, look up the agent's current state snapshot:

abot.graph.get_state(thread)

 

 

The current state:

  • next points to the next node; you can see it points at the action node, the one that uses tools.
StateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in SF?', id='28fb92dc-c2af-49a5-820e-ff5276495b1f'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'id': 'call_CvsqwzrE80AFykbCAzvGnfFS', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-a600b5ad-db4d-4a6d-94f0-ecd9772ca8c1-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_CvsqwzrE80AFykbCAzvGnfFS'}])]}, next=('action',), config={'configurable': {'thread_id': '1', 'thread_ts': '1ef2d411-b855-6ee4-8001-cb75ec747735'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'id': 'call_CvsqwzrE80AFykbCAzvGnfFS', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-a600b5ad-db4d-4a6d-94f0-ecd9772ca8c1-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in San Francisco'}, 'id': 'call_CvsqwzrE80AFykbCAzvGnfFS'}])]}}}, created_at='2024-06-18T07:05:12.962410+00:00', parent_config={'configurable': {'thread_id': '1', 'thread_ts': '1ef2d411-afdb-63a0-8000-854366bd83da'}})

 

 

To continue processing from this state, run the following code.

for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)

 

 

Then you can see the tool being called like this:

Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_edVKHvcXI6qWYuF1iGspul4t'}
Back to the model!
{'messages': [ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1718696404, \'localtime\': \'2024-06-18 0:40\'}, \'current\': {\'last_updated_epoch\': 1718695800, \'last_updated\': \'2024-06-18 00:30\', \'temp_c\': 12.2, \'temp_f\': 53.9, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 3.8, \'wind_kph\': 6.1, \'wind_degree\': 230, \'wind_dir\': \'SW\', \'pressure_mb\': 1010.0, \'pressure_in\': 29.82, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 76, \'cloud\': 2, \'feelslike_c\': 11.9, \'feelslike_f\': 53.4, \'windchill_c\': 11.9, \'windchill_f\': 53.4, \'heatindex_c\': 12.2, \'heatindex_f\': 53.9, \'dewpoint_c\': 7.9, \'dewpoint_f\': 46.3, \'vis_km\': 10.0, \'vis_miles\': 6.0, \'uv\': 1.0, \'gust_mph\': 7.3, \'gust_kph\': 11.8}}"}, {\'url\': \'https://www.timeanddate.com/weather/@z-us-94124/hourly\', \'content\': \'See weather overview Detailed Hourly Forecast — Next 24 hours Show weather on: Next 24 hours June 16, 2024 June 17, 2024 June 18, 2024 June 19, 2024 June 20, 2024 June 21, 2024 June 22, 2024\'}]', name='tavily_search_results_json', id='076ea064-5b17-4887-9809-d63670812e24', tool_call_id='call_edVKHvcXI6qWYuF1iGspul4t')]}
{'messages': [HumanMessage(content='Whats the weather in SF?', id='2c3e3883-20ba-4047-a4f3-f84568983809'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"current weather in San Francisco"}', 'name': 'tavily_search_results_json'}, 'id': 'call_edVKHvcXI6qWYuF1iGspul4t', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 22, 'prompt_tokens': 152, 'total_tokens': 174}}, id='run-41afb37b-1326-404d-93fb-67563ac4d900-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in San Francisco'}, 'id': 'call_edVKHvcXI6qWYuF1iGspul4t'}]), ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'San Francisco\', \'region\': \'California\', \'country\': \'United States of America\', \'lat\': 37.78, \'lon\': -122.42, \'tz_id\': \'America/Los_Angeles\', \'localtime_epoch\': 1718696404, \'localtime\': \'2024-06-18 0:40\'}, \'current\': {\'last_updated_epoch\': 1718695800, \'last_updated\': \'2024-06-18 00:30\', \'temp_c\': 12.2, \'temp_f\': 53.9, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 3.8, \'wind_kph\': 6.1, \'wind_degree\': 230, \'wind_dir\': \'SW\', \'pressure_mb\': 1010.0, \'pressure_in\': 29.82, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 76, \'cloud\': 2, \'feelslike_c\': 11.9, \'feelslike_f\': 53.4, \'windchill_c\': 11.9, \'windchill_f\': 53.4, \'heatindex_c\': 12.2, \'heatindex_f\': 53.9, \'dewpoint_c\': 7.9, \'dewpoint_f\': 46.3, \'vis_km\': 10.0, \'vis_miles\': 6.0, \'uv\': 1.0, \'gust_mph\': 7.3, \'gust_kph\': 11.8}}"}, {\'url\': \'https://www.timeanddate.com/weather/@z-us-94124/hourly\', \'content\': \'See weather overview Detailed Hourly Forecast — Next 24 hours Show weather on: Next 24 hours June 16, 2024 June 17, 2024 June 18, 2024 June 19, 2024 June 20, 2024 June 21, 2024 June 22, 2024\'}]', name='tavily_search_results_json', id='076ea064-5b17-4887-9809-d63670812e24', tool_call_id='call_edVKHvcXI6qWYuF1iGspul4t'), AIMessage(content='The current weather in San Francisco is clear with a temperature of 53.9°F (12.2°C). The wind is blowing at 6.1 km/h from the southwest direction. The humidity is at 76%, and the visibility is 10.0 km.', response_metadata={'token_usage': {'completion_tokens': 57, 'prompt_tokens': 690, 'total_tokens': 747}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-59a0add8-def5-4d22-91ac-8b7dc818e77f-0')]}
{'messages': [AIMessage(content='The current weather in San Francisco is clear with a temperature of 53.9°F (12.2°C). The wind is blowing at 6.1 km/h from the southwest direction. The humidity is at 76%, and the visibility is 10.0 km.', response_metadata={'token_usage': {'completion_tokens': 57, 'prompt_tokens': 690, 'total_tokens': 747}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-59a0add8-def5-4d22-91ac-8b7dc818e77f-0')]}

 

 

To make it feel more like a human-in-the-loop design, you can write it like this:

  • Print the proposed tool call, and proceed with it only when the user types 'y'.
messages = [HumanMessage("Whats the weather in LA?")]
thread = {"configurable": {"thread_id": "2"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)
while abot.graph.get_state(thread).next:
    print("\n", abot.graph.get_state(thread),"\n")
    _input = input("proceed?")
    if _input != "y":
        print("aborting")
        break
    for event in abot.graph.stream(None, thread):
        for v in event.values():
            print(v)

 

 

LangGraph offers two functions for inspecting state: get_state() and get_state_history().

  • If you pass the thread config itself to get_state(), you get the current state; if you pass a config that also contains a thread_ts (a checkpoint id), you get that past state. This is what makes time travel possible.
  • If you build a new state and call update_state(), a new state is appended on top; the existing states are kept immutable.
  • To actually run the graph, call invoke() or stream().
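
Putting those together, here is a minimal sketch (assuming abot.graph is the compiled graph with a checkpointer, as in the earlier sections):

thread = {"configurable": {"thread_id": "2"}}

# get_state(thread) returns the current (latest) state for this thread
current = abot.graph.get_state(thread)

# Each snapshot in the history carries its own thread_ts in its config,
# so passing that config back to get_state() performs time travel
history = list(abot.graph.get_state_history(thread))
old_state = abot.graph.get_state(history[-1].config)

# update_state() appends a new checkpoint; the existing ones stay immutable
abot.graph.update_state(thread, current.values)

# To actually run the graph, call invoke() or stream()
for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)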

 

 

Now let's walk through an example of modifying state.

messages = [HumanMessage("Whats the weather in LA?")]
thread = {"configurable": {"thread_id": "3"}}
for event in abot.graph.stream({"messages": messages}, thread):
    for v in event.values():
        print(v)

abot.graph.get_state(thread)

 

 

In the current state, the agent is about to look up the weather in Los Angeles:

StateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='c8544469-7cd0-45b3-a417-3078a69ee2d8'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4bd7-6371-8001-ec1e2703b980'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}}}, created_at='2024-06-18T07:46:11.192083+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4564-64a5-8000-e8aca9f2224e'}})

 

 

Let's look at the last message in the state and then modify it:

current_values = abot.graph.get_state(thread)
current_values.values['messages'][-1]

 

 

The last message in the state:

AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])

 

 

To look at just the tool-call portion of this message more conveniently:

current_values.values['messages'][-1].tool_calls

 

 

Output:

[{'name': 'tavily_search_results_json',
  'args': {'query': 'weather in Los Angeles'},
  'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}]

 

 

Now let's modify this message so that it looks up the weather in Louisiana instead of LA:

_id = current_values.values['messages'][-1].tool_calls[0]['id']
current_values.values['messages'][-1].tool_calls = [
    {'name': 'tavily_search_results_json',
  'args': {'query': 'current weather in Louisiana'},
  'id': _id}
]

abot.graph.update_state(thread, current_values.values)

abot.graph.get_state(thread)

 

 

Output:

  • The query inside the AIMessage's additional_kwargs is unchanged, but tool_calls, the part that actually drives tool execution, has been updated:
StateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='c8544469-7cd0-45b3-a417-3078a69ee2d8'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d473-6d08-6006-8002-27a3646584d7'}}, metadata={'source': 'update', 'step': 2, 'writes': {'llm': {'messages': [HumanMessage(content='Whats the weather in LA?', id='c8544469-7cd0-45b3-a417-3078a69ee2d8'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}}}, created_at='2024-06-18T07:48:55.733646+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4bd7-6371-8001-ec1e2703b980'}})

 

 

If you now resume execution, you can see the agent fetches the weather for Louisiana:

for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)

 

 

Output:

Calling: {'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}
Back to the model!
{'messages': [ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'Louisiana\', \'region\': \'Missouri\', \'country\': \'USA United States of America\', \'lat\': 39.44, \'lon\': -91.06, \'tz_id\': \'America/Chicago\', \'localtime_epoch\': 1718697004, \'localtime\': \'2024-06-18 2:50\'}, \'current\': {\'last_updated_epoch\': 1718696700, \'last_updated\': \'2024-06-18 02:45\', \'temp_c\': 23.5, \'temp_f\': 74.3, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 10.5, \'wind_kph\': 16.9, \'wind_degree\': 170, \'wind_dir\': \'S\', \'pressure_mb\': 1014.0, \'pressure_in\': 29.93, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 84, \'cloud\': 0, \'feelslike_c\': 25.8, \'feelslike_f\': 78.4, \'windchill_c\': 21.8, \'windchill_f\': 71.2, \'heatindex_c\': 24.4, \'heatindex_f\': 76.0, \'dewpoint_c\': 20.2, \'dewpoint_f\': 68.3, \'vis_km\': 16.0, \'vis_miles\': 9.0, \'uv\': 1.0, \'gust_mph\': 22.6, \'gust_kph\': 36.3}}"}, {\'url\': \'https://www.wunderground.com/hourly/us/la/bt-/date/2024-6-18\', \'content\': \'Current Weather for Popular ... Louisiana, LA Hourly Weather Forecast star_ratehome. 78 ... Tuesday 06/18 Hourly for Tomorrow, Tue 06/18. Tomorrow 06/18. 58% / 0.09 in .\'}]', name='tavily_search_results_json', id='51df632f-bfdd-484a-8f10-bab787de49f8', tool_call_id='call_oEKQ7xbEfz1r3QwbY9qrCEYO')]}
{'messages': [HumanMessage(content='Whats the weather in LA?', id='c8544469-7cd0-45b3-a417-3078a69ee2d8'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in Louisiana'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}]), ToolMessage(content='[{\'url\': \'https://www.weatherapi.com/\', \'content\': "{\'location\': {\'name\': \'Louisiana\', \'region\': \'Missouri\', \'country\': \'USA United States of America\', \'lat\': 39.44, \'lon\': -91.06, \'tz_id\': \'America/Chicago\', \'localtime_epoch\': 1718697004, \'localtime\': \'2024-06-18 2:50\'}, \'current\': {\'last_updated_epoch\': 1718696700, \'last_updated\': \'2024-06-18 02:45\', \'temp_c\': 23.5, \'temp_f\': 74.3, \'is_day\': 0, \'condition\': {\'text\': \'Clear\', \'icon\': \'//cdn.weatherapi.com/weather/64x64/night/113.png\', \'code\': 1000}, \'wind_mph\': 10.5, \'wind_kph\': 16.9, \'wind_degree\': 170, \'wind_dir\': \'S\', \'pressure_mb\': 1014.0, \'pressure_in\': 29.93, \'precip_mm\': 0.0, \'precip_in\': 0.0, \'humidity\': 84, \'cloud\': 0, \'feelslike_c\': 25.8, \'feelslike_f\': 78.4, \'windchill_c\': 21.8, \'windchill_f\': 71.2, \'heatindex_c\': 24.4, \'heatindex_f\': 76.0, \'dewpoint_c\': 20.2, \'dewpoint_f\': 68.3, \'vis_km\': 16.0, \'vis_miles\': 9.0, \'uv\': 1.0, \'gust_mph\': 22.6, \'gust_kph\': 36.3}}"}, {\'url\': \'https://www.wunderground.com/hourly/us/la/bt-/date/2024-6-18\', \'content\': \'Current Weather for Popular ... Louisiana, LA Hourly Weather Forecast star_ratehome. 78 ... Tuesday 06/18 Hourly for Tomorrow, Tue 06/18. Tomorrow 06/18. 58% / 0.09 in .\'}]', name='tavily_search_results_json', id='51df632f-bfdd-484a-8f10-bab787de49f8', tool_call_id='call_oEKQ7xbEfz1r3QwbY9qrCEYO'), AIMessage(content='The current weather in Louisiana is clear with a temperature of 74.3°F (23.5°C). The wind speed is 10.5 mph (16.9 kph) coming from the south. The humidity is at 84%, and there is no precipitation.', response_metadata={'token_usage': {'completion_tokens': 57, 'prompt_tokens': 677, 'total_tokens': 734}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9beeb959-1932-4078-977d-32b735646eb2-0')]}
{'messages': [AIMessage(content='The current weather in Louisiana is clear with a temperature of 74.3°F (23.5°C). The wind speed is 10.5 mph (16.9 kph) coming from the south. The humidity is at 84%, and there is no precipitation.', response_metadata={'token_usage': {'completion_tokens': 57, 'prompt_tokens': 677, 'total_tokens': 734}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-9beeb959-1932-4078-977d-32b735646eb2-0')]}

 

 

Next, let's try time travel: going back to a previous state.

states = []
for state in abot.graph.get_state_history(thread):
    print(state)
    print('--')
    states.append(state)

# Pick the snapshot taken just before the tool call ran (its next node is 'action')
to_replay = states[-3]

to_replay

 

 

Output:

  • In this state, the agent is about to look up the weather for LA:
StateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='c8544469-7cd0-45b3-a417-3078a69ee2d8'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4bd7-6371-8001-ec1e2703b980'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}}}, created_at='2024-06-18T07:46:11.192083+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4564-64a5-8000-e8aca9f2224e'}})

 

 

Running from this snapshot like so replays the LA weather lookup:

for event in abot.graph.stream(None, to_replay.config):
    for k, v in event.items():
        print(v)

 

 

You can also go back to a previous state, amend the message, and run from there:

_id = to_replay.values['messages'][-1].tool_calls[0]['id']
to_replay.values['messages'][-1].tool_calls = [{'name': 'tavily_search_results_json',
  'args': {'query': 'current weather in LA, accuweather'},
  'id': _id}]

# Calling update_state on a past config branches a new checkpoint off that point
branch_state = abot.graph.update_state(to_replay.config, to_replay.values)

for event in abot.graph.stream(None, branch_state):
    for k, v in event.items():
        if k != "__end__":
            print(v)

 

 

Alternatively, you can add a message directly to that snapshot's state. First, inspect the snapshot:

to_replay

 

 

The state of to_replay:

StateSnapshot(values={'messages': [HumanMessage(content='Whats the weather in LA?', id='c8544469-7cd0-45b3-a417-3078a69ee2d8'), AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'current weather in LA, accuweather'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}, next=('action',), config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4bd7-6371-8001-ec1e2703b980'}}, metadata={'source': 'loop', 'step': 1, 'writes': {'llm': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'function': {'arguments': '{"query":"weather in Los Angeles"}', 'name': 'tavily_search_results_json'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO', 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'logprobs': None, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'token_usage': {'completion_tokens': 21, 'prompt_tokens': 152, 'total_tokens': 173}}, id='run-b9fb7924-9031-43b2-a967-306e9f30db5b-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'weather in Los Angeles'}, 'id': 'call_oEKQ7xbEfz1r3QwbY9qrCEYO'}])]}}}, created_at='2024-06-18T07:46:11.192083+00:00', parent_config={'configurable': {'thread_id': '3', 'thread_ts': '1ef2d46d-4564-64a5-8000-e8aca9f2224e'}})

 

 

Now add a new tool message here and resume execution. Passing as_node="action" records the update as if the action node itself had produced it, so when the graph resumes it continues along the edge that follows the action node:

_id = to_replay.values['messages'][-1].tool_calls[0]['id']

state_update = {"messages": [ToolMessage(
    tool_call_id=_id,
    name="tavily_search_results_json",
    content="54 degree celcius",
)]}

branch_and_add = abot.graph.update_state(
    to_replay.config, 
    state_update, 
    as_node="action")

for event in abot.graph.stream(None, branch_and_add):
    for k, v in event.items():
        print(v)
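
As a quick sanity check (branch_and_add is the config returned by update_state above), reading that checkpoint back shows the injected observation as the last message:

# The branched checkpoint's last message is the fabricated ToolMessage
abot.graph.get_state(branch_and_add).values['messages'][-1]
# -> ToolMessage(content='54 degrees celsius', name='tavily_search_results_json', ...)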

 

 

6. Essay Writer

In this section, let's use LangGraph to build a Writer that actually writes an essay.

 

The Essay Writer has the following nodes:

  • 1) Plan: takes the topic to write about as input and draws up an outline of the essay's subtopics.
  • 2) Research Plan: researches the outline via web search.
  • 3) Generate: writes the actual essay based on the research.
  • 4) Reflect: evaluates and critiques the essay once it has been written.
  • 5) Research Critique: researches again based on the critique.

 

 

First, let's create the Agent State used to carry out the work:

  • The data an agent holds internally differs by task. Here it holds the task to perform, the plan, the draft, the critique of it, the researched content, the revision_number (how many revisions so far), and max_revisions (the maximum number of revisions).
  • The fields declared in the State are what get passed between nodes (see the sketch after the code below).
from dotenv import load_dotenv

_ = load_dotenv()

from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated, List
import operator
from langgraph.checkpoint.sqlite import SqliteSaver
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage, ChatMessage

memory = SqliteSaver.from_conn_string(":memory:")

class AgentState(TypedDict):
    task: str
    plan: str
    draft: str
    critique: str
    content: List[str]
    revision_number: int
    max_revisions: int
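
Note that each node returns only the keys it wants to update, and LangGraph merges that partial dict back into the shared state. A minimal sketch (example_node is hypothetical, not part of the course code):

def example_node(state: AgentState):
    # Return only the updated key; all other state fields are carried over
    return {"plan": "I. Intro\nII. Body\nIII. Conclusion"}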

 

 

Next, let's write the prompt each node will use to perform its role. Each prompt contains a role, a command, contextual information, and instructions:

from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

PLAN_PROMPT = """You are an expert writer tasked with writing a high level outline of an essay. \
Write such an outline for the user provided topic. Give an outline of the essay along with any relevant notes \
or instructions for the sections."""

WRITER_PROMPT = """You are an essay assistant tasked with writing excellent 5-paragraph essays.\
Generate the best essay possible for the user's request and the initial outline. \
If the user provides critique, respond with a revised version of your previous attempts. \
Utilize all the information below as needed: 

------

{content}"""

REFLECTION_PROMPT = """You are a teacher grading an essay submission. \
Generate critique and recommendations for the user's submission. \
Provide detailed recommendations, including requests for length, depth, style, etc."""

RESEARCH_PLAN_PROMPT = """You are a researcher charged with providing information that can \
be used when writing the following essay. Generate a list of search queries that will gather \
any relevant information. Only generate 3 queries max."""

RESEARCH_CRITIQUE_PROMPT = """You are a researcher charged with providing information that can \
be used when making any requested revisions (as outlined below). \
Generate a list of search queries that will gather any relevant information. Only generate 3 queries max."""

 

 

Next, let's define Queries, the data used for the research searches. It is used to get structured output from the model:

from langchain_core.pydantic_v1 import BaseModel

class Queries(BaseModel):
    queries: List[str]
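
As a quick illustration (a sketch; the input text is just an example), with_structured_output makes the model return a parsed Queries object instead of free-form text:

# The model's function-call reply is parsed directly into the pydantic model
q = model.with_structured_output(Queries).invoke([
    SystemMessage(content=RESEARCH_PLAN_PROMPT),
    HumanMessage(content="what is the difference between langchain and langsmith")
])
print(q.queries)  # a plain List[str] of search queries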

 

 

Next, let's build each of the nodes:

from tavily import TavilyClient
import os
tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def plan_node(state: AgentState):
    messages = [
        SystemMessage(content=PLAN_PROMPT), 
        HumanMessage(content=state['task'])
    ]
    response = model.invoke(messages)
    return {"plan": response.content}

def research_plan_node(state: AgentState):
    queries = model.with_structured_output(Queries).invoke([
        SystemMessage(content=RESEARCH_PLAN_PROMPT),
        HumanMessage(content=state['task'])
    ])
    content = state['content'] or []
    for q in queries.queries:
        response = tavily.search(query=q, max_results=2)
        for r in response['results']:
            content.append(r['content'])
    return {"content": content}

def generation_node(state: AgentState):
    content = "\n\n".join(state['content'] or [])
    user_message = HumanMessage(
        content=f"{state['task']}\n\nHere is my plan:\n\n{state['plan']}")
    messages = [
        SystemMessage(
            content=WRITER_PROMPT.format(content=content)
        ),
        user_message
        ]
    response = model.invoke(messages)
    return {
        "draft": response.content, 
        "revision_number": state.get("revision_number", 1) + 1
    }

def reflection_node(state: AgentState):
    messages = [
        SystemMessage(content=REFLECTION_PROMPT), 
        HumanMessage(content=state['draft'])
    ]
    response = model.invoke(messages)
    return {"critique": response.content}

def research_critique_node(state: AgentState):
    queries = model.with_structured_output(Queries).invoke([
        SystemMessage(content=RESEARCH_CRITIQUE_PROMPT),
        HumanMessage(content=state['critique'])
    ])
    content = state['content'] or []
    for q in queries.queries:
        response = tavily.search(query=q, max_results=2)
        for r in response['results']:
            content.append(r['content'])
    return {"content": content}

 

 

Next, let's also create the Conditional Edge that decides whether the agent should finish:

def should_continue(state):
    if state["revision_number"] > state["max_revisions"]:
        return END
    return "reflect"

 

 

Now let's assemble the graph:

builder = StateGraph(AgentState)

builder.add_node("planner", plan_node)
builder.add_node("generate", generation_node)
builder.add_node("reflect", reflection_node)
builder.add_node("research_plan", research_plan_node)
builder.add_node("research_critique", research_critique_node)

builder.set_entry_point("planner")

builder.add_conditional_edges(
    "generate", 
    should_continue, 
    {END: END, "reflect": "reflect"}
)

builder.add_edge("planner", "research_plan")
builder.add_edge("research_plan", "generate")

builder.add_edge("reflect", "research_critique")
builder.add_edge("research_critique", "generate")

graph = builder.compile(checkpointer=memory)

from IPython.display import Image

Image(graph.get_graph().draw_png())
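
Note: draw_png() requires pygraphviz to be installed. If that's an issue, recent langchain_core versions also ship a mermaid-based renderer you can try instead:

Image(graph.get_graph().draw_mermaid_png())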

 

 

Next, let's actually run this graph:

thread = {"configurable": {"thread_id": "1"}}
for s in graph.stream({
    'task': "what is the difference between langchain and langsmith",
    "max_revisions": 2,
    "revision_number": 1,
}, thread):
    print(s)

 

 

The run proceeds as follows:

  • Plan -> Research Plan -> Generate -> Reflect -> Research Critique -> Generate.
{'planner': {'plan': 'I. Introduction\n    A. Brief overview of Langchain and Langsmith\n    B. Thesis statement: Exploring the differences between Langchain and Langsmith\n\nII. Langchain\n    A. Definition and explanation\n    B. Key features and characteristics\n    C. Use cases and applications\n    D. Advantages and disadvantages\n\nIII. Langsmith\n    A. Definition and explanation\n    B. Key features and characteristics\n    C. Use cases and applications\n    D. Advantages and disadvantages\n\nIV. Comparison between Langchain and Langsmith\n    A. Technology stack\n    B. Scalability\n    C. Security\n    D. Flexibility\n    E. Adoption and popularity\n\nV. Conclusion\n    A. Recap of main differences between Langchain and Langsmith\n    B. Implications for the future of blockchain technology\n    C. Final thoughts and recommendations\n\nNotes:\n- Ensure a clear and concise explanation of both Langchain and Langsmith.\n- Provide specific examples and real-world applications for each technology.\n- Use comparative analysis to highlight the distinctions between Langchain and Langsmith.\n- Conclude with insights on the potential impact of these technologies on the blockchain industry.'}}

{'research_plan': {'content': ['Langchain vs Langsmith: Unpacking the AI Language Model Showdown\nOverview of Langchain and Langsmith\nLangchain is a versatile open-source framework that enables you to build applications utilizing large language models (LLM) like GPT-3. Check out our free WhatsApp channel to stay educated on LLM developments:\nJoin the Finxter Academy and unlock access to premium courses 👑 to certify your skills in exponential technologies and programming.\n Frequently Asked Questions\nWhether you’re trying to figure out which tool fits your needs or you’re just getting started with language model automation, these FAQs will help shed light on the common curiosities about Langchain and LangSmith.\n The best way to find out is to reach out to them through the LangSmith Walkthrough page or to inquire about access directly through their support channels.\n Here’s how you might start a simple Langchain project in Python:\nTo integrate LangSmith, you could write something like this:\nYou’re not limited to Python, though.', "LangSmith Cookbook: Real-world Lang Smith Examples\nThe LangSmith Cookbook is not just a compilation of code snippets; it's a goldmine of hands-on examples designed to inspire and assist you in your projects. On This Page\nLangSmith: Best Way to Test LLMs and AI Application\nPublished on 12/17/2023\nIf you're in the world of Language Learning Models (LLMs), you've probably heard of LangSmith. How to Download Feedback and Examples (opens in a new tab): Export predictions, evaluation results, and other information to add to your reports programmatically.\n This article is your one-stop guide to understanding LangSmith, a platform that offers a plethora of features for debugging, testing, evaluating, and monitoring LLM applications.\n How do I get access to LangSmith?\nTo get access to LangSmith, you'll need to sign up for an account on their website.", "LangSmith Cookbook: Real-world Lang Smith Examples\nThe LangSmith Cookbook is not just a compilation of code snippets; it's a goldmine of hands-on examples designed to inspire and assist you in your projects. On This Page\nLangSmith: Best Way to Test LLMs and AI Application\nPublished on 12/17/2023\nIf you're in the world of Language Learning Models (LLMs), you've probably heard of LangSmith. How to Download Feedback and Examples (opens in a new tab): Export predictions, evaluation results, and other information to add to your reports programmatically.\n This article is your one-stop guide to understanding LangSmith, a platform that offers a plethora of features for debugging, testing, evaluating, and monitoring LLM applications.\n How do I get access to LangSmith?\nTo get access to LangSmith, you'll need to sign up for an account on their website.", "This was such an important unlock for us as we build in tactical functionality to our AI.\nAdvice for Product Teams Considering LangSmith\nWhen you're in the thick of product development, the whirlwind of AI and LLMs can be overwhelming. 
ou can now clearly see the citations coming through with the responses in the traces, and we’re good to ship the changes to the prompt to prod.\n We do this primarily to legitimize the LLMs response and give our HelpHub customer’s peace of mind that their end users are in fact getting to the help resource they need (instead of an LLM just hallucinating and giving any response it wants.)\n Take it from us, as we integrated AI more heavily and widely across our products, we’ve been more conscious than ever that the quality of the outputs matches the quality and trust that our customers have in our product. We updated our prompt to be more firm when asking for the sources:\nWe then tested everything using LangSmith evals, to make sure that it fixes the issue before pushing to production.\n", 'Langchain vs Langsmith: Unpacking the AI Language Model Showdown\nOverview of Langchain and Langsmith\nLangchain is a versatile open-source framework that enables you to build applications utilizing large language models (LLM) like GPT-3. Check out our free WhatsApp channel to stay educated on LLM developments:\nJoin the Finxter Academy and unlock access to premium courses 👑 to certify your skills in exponential technologies and programming.\n Frequently Asked Questions\nWhether you’re trying to figure out which tool fits your needs or you’re just getting started with language model automation, these FAQs will help shed light on the common curiosities about Langchain and LangSmith.\n The best way to find out is to reach out to them through the LangSmith Walkthrough page or to inquire about access directly through their support channels.\n Here’s how you might start a simple Langchain project in Python:\nTo integrate LangSmith, you could write something like this:\nYou’re not limited to Python, though.', 'Reviewing the Results\nThe comparison views also make it easy to manually review the outputs to get a better sense for how the models behave, so you can make adjustments to your cognitive architecture and update the evaluation techniques to address any failure modes you identify.\n Both say that they cannot see an explicit answer to the question (since the retriever is the same), but the assistant is willing to provide context on how strict is used in other contexts as a pointer for the user.\n If you are using a model like gpt-3.5 and getting hallucinations like this, you can try adding an additional system prompt reminding the model to only respond based on the retrieved content, and you can work to improve the retriever to filter out irrelevant documents.\n If you compare the outputs, you can see that mistral was influenced too much by the document content and mentioned the class from the docs rather than the direct answer.\n As a part of the initial release, we have evaluated various implementations that differ across a few dimensions:\nYou can check out the link above to review the results or continue with the information below!\n']}}

{'generate': {'draft': "Title: Langchain vs Langsmith: Contrasting Two AI Language Model Frameworks\n\nI. Introduction\nIn the realm of AI language model frameworks, Langchain and Langsmith stand out as prominent players. Langchain is an open-source framework that facilitates the development of applications using large language models like GPT-3. On the other hand, Langsmith offers a platform with a myriad of features for testing, debugging, and evaluating language learning models (LLMs). This essay delves into the disparities between Langchain and Langsmith.\n\nII. Langchain\nLangchain is a versatile framework that empowers developers to create applications leveraging large language models such as GPT-3. Its key features include seamless integration with Python, scalability, and a wide range of applications in natural language processing (NLP) tasks. From chatbots to content generation, Langchain finds utility in various domains. However, its reliance on Python may limit its accessibility to developers proficient in other programming languages.\n\nIII. Langsmith\nIn contrast, Langsmith is a platform tailored for testing and evaluating LLM applications. With a focus on debugging and monitoring, Langsmith provides developers with tools to enhance the quality and reliability of their AI applications. By offering features like exporting predictions and evaluation results, Langsmith streamlines the development process and ensures the robustness of language models. Access to Langsmith is granted through a simple sign-up process on their website.\n\nIV. Comparison between Langchain and Langsmith\nA. Technology Stack: Langchain primarily integrates with Python, while Langsmith offers a more specialized platform for testing and debugging LLMs.\nB. Scalability: Langchain's versatility makes it scalable for various applications, whereas Langsmith's focus on evaluation enhances the scalability of AI applications.\nC. Security: Both frameworks prioritize data security, but Langsmith's emphasis on debugging can lead to more secure AI models.\nD. Flexibility: While Langchain provides flexibility in application development, Langsmith enhances the flexibility of testing and evaluating LLMs.\nE. Adoption and Popularity: Langchain's accessibility through Python may contribute to its wider adoption, whereas Langsmith's specialized features attract developers seeking robust testing tools.\n\nV. Conclusion\nIn conclusion, the distinctions between Langchain and Langsmith underscore the diverse needs within the AI development landscape. While Langchain caters to application development using LLMs, Langsmith focuses on refining and testing these models for optimal performance. The future implications of these technologies on the blockchain industry are significant, as they pave the way for more sophisticated and reliable AI applications. As developers navigate the complexities of AI frameworks, understanding the nuances between Langchain and Langsmith can guide strategic decision-making and foster innovation in the field.", 'revision_number': 2}}

{'reflect': {'critique': "Critique:\n\n1. Introduction:\n - The introduction provides a clear overview of the topic and introduces the two frameworks effectively. However, it could be enhanced by including a brief statement on the significance of comparing these frameworks and what the reader can expect from the essay.\n\n2. Content:\n - The content is well-structured, with separate sections dedicated to each framework and a comparison section. Each section provides a concise overview of the framework's features and strengths.\n - The comparison section effectively highlights the differences between Langchain and Langsmith in terms of technology stack, scalability, security, flexibility, and adoption. However, it could benefit from more detailed examples or case studies to support the comparisons.\n\n3. Depth and Analysis:\n - While the essay provides a good overview of the frameworks, it lacks in-depth analysis and critical evaluation. Consider delving deeper into the advantages and limitations of each framework, providing more insights into their real-world applications and potential challenges.\n\n4. Recommendations:\n - Length: Consider expanding on each framework's features, functionalities, and real-world applications to provide a more comprehensive understanding for the reader.\n - Depth: Include examples, case studies, or comparisons with other similar frameworks to add depth and critical analysis to your evaluation.\n - Style: Enhance the essay by incorporating a more critical analysis of the frameworks, discussing potential drawbacks, challenges, and future developments in the field of AI language models.\n - Conclusion: Strengthen the conclusion by summarizing the key points of comparison and providing insights into the future implications of these frameworks in the AI development landscape.\n\nOverall, the essay provides a good foundation for comparing Langchain and Langsmith. By incorporating more detailed analysis, examples, and expanding on the frameworks' functionalities, you can create a more comprehensive and insightful evaluation of these AI language model frameworks."}}

{'research_critique': {'content': ['Langchain vs Langsmith: Unpacking the AI Language Model Showdown\nOverview of Langchain and Langsmith\nLangchain is a versatile open-source framework that enables you to build applications utilizing large language models (LLM) like GPT-3. Check out our free WhatsApp channel to stay educated on LLM developments:\nJoin the Finxter Academy and unlock access to premium courses 👑 to certify your skills in exponential technologies and programming.\n Frequently Asked Questions\nWhether you’re trying to figure out which tool fits your needs or you’re just getting started with language model automation, these FAQs will help shed light on the common curiosities about Langchain and LangSmith.\n The best way to find out is to reach out to them through the LangSmith Walkthrough page or to inquire about access directly through their support channels.\n Here’s how you might start a simple Langchain project in Python:\nTo integrate LangSmith, you could write something like this:\nYou’re not limited to Python, though.', "LangSmith Cookbook: Real-world Lang Smith Examples\nThe LangSmith Cookbook is not just a compilation of code snippets; it's a goldmine of hands-on examples designed to inspire and assist you in your projects. On This Page\nLangSmith: Best Way to Test LLMs and AI Application\nPublished on 12/17/2023\nIf you're in the world of Language Learning Models (LLMs), you've probably heard of LangSmith. How to Download Feedback and Examples (opens in a new tab): Export predictions, evaluation results, and other information to add to your reports programmatically.\n This article is your one-stop guide to understanding LangSmith, a platform that offers a plethora of features for debugging, testing, evaluating, and monitoring LLM applications.\n How do I get access to LangSmith?\nTo get access to LangSmith, you'll need to sign up for an account on their website.", "LangSmith Cookbook: Real-world Lang Smith Examples\nThe LangSmith Cookbook is not just a compilation of code snippets; it's a goldmine of hands-on examples designed to inspire and assist you in your projects. On This Page\nLangSmith: Best Way to Test LLMs and AI Application\nPublished on 12/17/2023\nIf you're in the world of Language Learning Models (LLMs), you've probably heard of LangSmith. How to Download Feedback and Examples (opens in a new tab): Export predictions, evaluation results, and other information to add to your reports programmatically.\n This article is your one-stop guide to understanding LangSmith, a platform that offers a plethora of features for debugging, testing, evaluating, and monitoring LLM applications.\n How do I get access to LangSmith?\nTo get access to LangSmith, you'll need to sign up for an account on their website.", "This was such an important unlock for us as we build in tactical functionality to our AI.\nAdvice for Product Teams Considering LangSmith\nWhen you're in the thick of product development, the whirlwind of AI and LLMs can be overwhelming. 
ou can now clearly see the citations coming through with the responses in the traces, and we’re good to ship the changes to the prompt to prod.\n We do this primarily to legitimize the LLMs response and give our HelpHub customer’s peace of mind that their end users are in fact getting to the help resource they need (instead of an LLM just hallucinating and giving any response it wants.)\n Take it from us, as we integrated AI more heavily and widely across our products, we’ve been more conscious than ever that the quality of the outputs matches the quality and trust that our customers have in our product. We updated our prompt to be more firm when asking for the sources:\nWe then tested everything using LangSmith evals, to make sure that it fixes the issue before pushing to production.\n", 'Langchain vs Langsmith: Unpacking the AI Language Model Showdown\nOverview of Langchain and Langsmith\nLangchain is a versatile open-source framework that enables you to build applications utilizing large language models (LLM) like GPT-3. Check out our free WhatsApp channel to stay educated on LLM developments:\nJoin the Finxter Academy and unlock access to premium courses 👑 to certify your skills in exponential technologies and programming.\n Frequently Asked Questions\nWhether you’re trying to figure out which tool fits your needs or you’re just getting started with language model automation, these FAQs will help shed light on the common curiosities about Langchain and LangSmith.\n The best way to find out is to reach out to them through the LangSmith Walkthrough page or to inquire about access directly through their support channels.\n Here’s how you might start a simple Langchain project in Python:\nTo integrate LangSmith, you could write something like this:\nYou’re not limited to Python, though.', 'Reviewing the Results\nThe comparison views also make it easy to manually review the outputs to get a better sense for how the models behave, so you can make adjustments to your cognitive architecture and update the evaluation techniques to address any failure modes you identify.\n Both say that they cannot see an explicit answer to the question (since the retriever is the same), but the assistant is willing to provide context on how strict is used in other contexts as a pointer for the user.\n If you are using a model like gpt-3.5 and getting hallucinations like this, you can try adding an additional system prompt reminding the model to only respond based on the retrieved content, and you can work to improve the retriever to filter out irrelevant documents.\n If you compare the outputs, you can see that mistral was influenced too much by the document content and mentioned the class from the docs rather than the direct answer.\n As a part of the initial release, we have evaluated various implementations that differ across a few dimensions:\nYou can check out the link above to review the results or continue with the information below!\n', 'Langchain vs Langsmith: Unpacking the AI Language Model Showdown\nOverview of Langchain and Langsmith\nLangchain is a versatile open-source framework that enables you to build applications utilizing large language models (LLM) like GPT-3. 
Check out our free WhatsApp channel to stay educated on LLM developments:\nJoin the Finxter Academy and unlock access to premium courses 👑 to certify your skills in exponential technologies and programming.\n Frequently Asked Questions\nWhether you’re trying to figure out which tool fits your needs or you’re just getting started with language model automation, these FAQs will help shed light on the common curiosities about Langchain and LangSmith.\n The best way to find out is to reach out to them through the LangSmith Walkthrough page or to inquire about access directly through their support channels.\n Here’s how you might start a simple Langchain project in Python:\nTo integrate LangSmith, you could write something like this:\nYou’re not limited to Python, though.', "Langsmith started charging. Time to compare alternatives. : r/LangChain. Langsmith started charging. Time to compare alternatives. Hey r/Langchain ! I've been using Langsmith for a while, and while it's been great, I'm curious about what else is out there. Specifically, I'm on the hunt for something fresh in the realm of LLM observability tools.", 'Feedback can be user-generated or "automated" using functions or even calls to an LLM:\nExporting data for fine-tuning\nFine-tune an LLM on collected run data using these recipes:\nExploratory Data Analysis\nTurn your trace data into actionable insights:\nAbout\nResources\nStars\nWatchers\nForks\nReleases\nPackages\n0\nContributors\n12\nLanguages\nFooter\nFooter navigation Python Examples\nTypeScript / JavaScript Testing Examples\nIncorporate LangSmith into your TS/JS testing and evaluation workflow:\nUsing Feedback\nHarness user feedback, "ai-assisted" feedback, and other signals to improve, monitor, and personalize your applications. Latest commit\nGit stats\nFiles\nREADME.md\nLangSmith Cookbook\nWelcome to the LangSmith Cookbook — your practical guide to mastering LangSmith. Saved searches\nUse saved searches to filter your results more quickly\nTo see all available qualifiers, see our documentation.\n langchain-ai/langsmith-cookbook\nName already in use\nUse Git or checkout with SVN using the web URL.\n', 'This function helps to load the specific language models and tools required for the task as shown in the code snippet below:\nAs a next step, initialize an agent by calling the initialize_agent function with several parameters like tools, llms, and agent:\nThe verbose parameter is set to false, indicating that the agent will not provide verbose or detailed output.\n You can accomplish this by following the shell commands provided below:\nCreating a LangSmith client\nNext, create a LangSmith client to interact with the API:\nIf you’re using Python, run the following commands to import the module:\n This code also handles exceptions that may occur during the agent execution:\nIt’s also important to call the wait_for_all_tracers function from the langchain.callbacks.tracers.langchain module as shown in the code snippet below:\nCalling the wait_for_all_tracers function helps ensure that logs and traces are submitted in full before the program proceeds. The temperature parameter will be set to 0, implying that the generated response will be more deterministic as shown in the code snippet below:\nNow, let’s call the load_tools function with a list of tool APIs, such as serpapi and llm-math, and also take the llm instance as a parameter. 
It initializes a chat model, loads specific tools, and creates an agent that can generate responses based on descriptions:\nInput processing with exception handling\nThe code below defines a list of input examples using the asyncio library to asynchronously run the agent on each input and gather the results for further processing.', 'Langchain vs Langsmith: Unpacking the AI Language Model Showdown\nOverview of Langchain and Langsmith\nLangchain is a versatile open-source framework that enables you to build applications utilizing large language models (LLM) like GPT-3. Check out our free WhatsApp channel to stay educated on LLM developments:\nJoin the Finxter Academy and unlock access to premium courses 👑 to certify your skills in exponential technologies and programming.\n Frequently Asked Questions\nWhether you’re trying to figure out which tool fits your needs or you’re just getting started with language model automation, these FAQs will help shed light on the common curiosities about Langchain and LangSmith.\n The best way to find out is to reach out to them through the LangSmith Walkthrough page or to inquire about access directly through their support channels.\n Here’s how you might start a simple Langchain project in Python:\nTo integrate LangSmith, you could write something like this:\nYou’re not limited to Python, though.', "LangChain's ecosystem is rounded out by LangSmith, a platform that supports the debugging, testing, evaluation, and monitoring of LLM applications. This ensures that applications not only get off the ground quickly but also operate reliably and efficiently in production environments. ... Challenges and Limitations of Langchain. Langchain, while ..."]}}

{'generate': {'draft': 'Title: Langchain vs Langsmith: Contrasting Two AI Language Models\n\nI. Introduction\nIn the realm of AI language models, Langchain and Langsmith stand out as prominent frameworks. While both serve the purpose of leveraging large language models like GPT-3, they possess unique features that set them apart.\n\nII. Langchain\nLangchain is an open-source framework designed for building applications utilizing large language models. It offers versatility and ease of use, making it a popular choice among developers. Key features include seamless integration with Python, extensive documentation, and a supportive community. Langchain finds applications in chatbots, content generation, and automated customer support. However, its limitations may include scalability challenges and security concerns.\n\nIII. Langsmith\nOn the other hand, Langsmith is a platform dedicated to testing, debugging, and evaluating language learning models. It provides a comprehensive suite of tools for monitoring and optimizing LLM applications. Langsmith is essential for ensuring the reliability and efficiency of AI applications in production environments. While it excels in enhancing the quality of outputs and user experience, it may have a steeper learning curve compared to Langchain.\n\nIV. Comparison between Langchain and Langsmith\nA. Technology Stack: Langchain focuses on application development, while Langsmith specializes in testing and evaluation.\nB. Scalability: Langchain may face scalability issues with larger projects, whereas Langsmith is tailored for performance optimization.\nC. Security: Langchain emphasizes building secure applications, while Langsmith prioritizes debugging and monitoring security vulnerabilities.\nD. Flexibility: Langchain offers flexibility in application development, while Langsmith provides flexibility in testing and refining models.\nE. Adoption and Popularity: Langchain is widely adopted for its ease of use, while Langsmith is gaining popularity for its robust testing capabilities.\n\nV. Conclusion\nIn conclusion, the distinctions between Langchain and Langsmith highlight the diverse needs within the AI development landscape. While Langchain caters to application building, Langsmith focuses on refining and optimizing AI models. Understanding these differences is crucial for developers to choose the right tool for their specific requirements. As AI technology continues to evolve, both Langchain and Langsmith play vital roles in shaping the future of AI applications.', 'revision_number': 3}}

 

 

 

7. LangChain Resources

Various resources for LangChain projects:
