Classify Text into Labels

Tagging means labeling a document with classes such as:

  • Sentiment
  • Language
  • Style (formal, informal etc.)
  • Covered topics
  • Political tendency


Overview

Tagging has a few components:

  • function: Like extraction, tagging uses functions to specify how the model should tag a document
  • schema: defines how we want to tag the document
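Concretely, the "function" component is an OpenAI-style function/tool schema derived from the tagging schema: the model is asked to "call" the function, and the arguments it fills in are the tags. A hand-written sketch of such a definition (the names and descriptions here are illustrative, not what LangChain generates verbatim):

```python
import json

# Hand-written approximation of an OpenAI-style function schema for tagging.
# In practice LangChain derives this from your schema automatically.
tagging_function = {
    "name": "Classification",
    "description": "Tag the passage with the requested properties.",
    "parameters": {
        "type": "object",
        "properties": {
            "sentiment": {
                "type": "string",
                "description": "The sentiment of the text",
            },
            "language": {
                "type": "string",
                "description": "The language the text is written in",
            },
        },
        "required": ["sentiment", "language"],
    },
}

print(json.dumps(tagging_function, indent=2))
```

The model's "arguments" to this function are guaranteed to be JSON matching the parameters schema, which is what makes tagging reliable compared to free-form prompting.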

Quickstart

Let's see a very straightforward example of how we can use OpenAI tool calling for tagging in LangChain. We'll use the with_structured_output method supported by OpenAI models.

pip install --upgrade --quiet langchain-core

We'll need to load a chat model:

pip install -qU "langchain[google-genai]"

import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")

Let's specify a Pydantic model with a few properties and their expected type in our schema.

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

tagging_prompt = ChatPromptTemplate.from_template(
    """
Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
"""
)


class Classification(BaseModel):
    sentiment: str = Field(description="The sentiment of the text")
    aggressiveness: int = Field(
        description="How aggressive the text is on a scale from 1 to 10"
    )
    language: str = Field(description="The language the text is written in")


# Structured LLM
structured_llm = llm.with_structured_output(Classification)

inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
prompt = tagging_prompt.invoke({"input": inp})
response = structured_llm.invoke(prompt)

response
Classification(sentiment='positive', aggressiveness=1, language='Spanish')

If we want dictionary output, we can just call .model_dump():

inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
prompt = tagging_prompt.invoke({"input": inp})
response = structured_llm.invoke(prompt)

response.model_dump()
{'sentiment': 'enojado', 'aggressiveness': 8, 'language': 'es'}

As we can see in the examples, it correctly interprets what we want.

The results vary: we may get, for example, sentiments expressed in different languages ('positive', 'enojado', etc.).

We will see how to control these results in the next section.
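Because .model_dump() returns a plain dict, tagged results are easy to collect and serialize with nothing but the standard library. A small sketch using hypothetical, pre-computed results (no model call is made here):

```python
import json

# Hypothetical outputs of response.model_dump() for a batch of passages
results = [
    {"sentiment": "positive", "aggressiveness": 1, "language": "Spanish"},
    {"sentiment": "enojado", "aggressiveness": 8, "language": "es"},
]

# Plain dicts serialize directly, e.g. to JSON Lines for later analysis
lines = [json.dumps(r, ensure_ascii=False) for r in results]
for line in lines:
    print(line)

# Note the inconsistent labels ('positive' vs. 'enojado', 'Spanish' vs. 'es'):
# these free-form values are exactly what an enum-restricted schema tightens up.
```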

Finer control

Careful schema definition gives us more control over the model's output.

Specifically, we can define:

  • Possible values for each property
  • Description to make sure that the model understands the property
  • Required properties to be returned

Let's redeclare our Pydantic model to control for each of the previously mentioned aspects using enums:

class Classification(BaseModel):
    sentiment: str = Field(..., enum=["happy", "neutral", "sad"])
    aggressiveness: int = Field(
        ...,
        description="describes how aggressive the statement is, the higher the number the more aggressive",
        enum=[1, 2, 3, 4, 5],
    )
    language: str = Field(
        ..., enum=["spanish", "english", "french", "german", "italian"]
    )


tagging_prompt = ChatPromptTemplate.from_template(
    """
Extract the desired information from the following passage.

Only extract the properties mentioned in the 'Classification' function.

Passage:
{input}
"""
)

llm = ChatOpenAI(temperature=0, model="gpt-4o-mini").with_structured_output(
    Classification
)
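The extra enum keyword arguments on Field are passed through into the JSON schema that with_structured_output sends to the model. A rough, hand-written approximation (not Pydantic's exact generated output) of the restricted per-property schemas:

```python
# Hand-written approximation of the per-property JSON schema the enums produce;
# the real schema is generated by Pydantic/LangChain and may differ in detail.
restricted_properties = {
    "sentiment": {"type": "string", "enum": ["happy", "neutral", "sad"]},
    "aggressiveness": {
        "type": "integer",
        "description": "describes how aggressive the statement is, the higher the number the more aggressive",
        "enum": [1, 2, 3, 4, 5],
    },
    "language": {
        "type": "string",
        "enum": ["spanish", "english", "french", "german", "italian"],
    },
}

# A compliant model response can only pick values from these lists,
# so a free-form label like 'enojado' is no longer a valid sentiment.
assert "enojado" not in restricted_properties["sentiment"]["enum"]
```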

Now the answers will be restricted in a way we expect!

inp = "Estoy increiblemente contento de haberte conocido! Creo que seremos muy buenos amigos!"
prompt = tagging_prompt.invoke({"input": inp})
llm.invoke(prompt)
Classification(sentiment='happy', aggressiveness=1, language='spanish')
inp = "Estoy muy enojado con vos! Te voy a dar tu merecido!"
prompt = tagging_prompt.invoke({"input": inp})
llm.invoke(prompt)
Classification(sentiment='sad', aggressiveness=5, language='spanish')
inp = "Weather is ok here, I can go outside without much more than a coat"
prompt = tagging_prompt.invoke({"input": inp})
llm.invoke(prompt)
Classification(sentiment='neutral', aggressiveness=1, language='english')

The LangSmith trace lets us peek under the hood.

Going deeper

  • You can use the metadata tagger document transformer to extract metadata from a LangChain Document.
  • This covers the same basic functionality as the tagging chain, only applied to a LangChain Document.
