
Commit e6bc346

Merge pull request #195 from raspawar/raspawar/nvidia_integration

NVIDIA NIM CrewAI Integration

2 parents f5c5b29 + 8069997

File tree

21 files changed: +834 −0 lines changed

nvidia_models/intro/.env.example

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
NVIDIA_API_KEY=
MODEL=meta/llama-3.1-8b-instruct

nvidia_models/intro/.gitignore

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
.env
.DS_Store
__pycache__
.venv
poetry.lock
.ruff_cache

nvidia_models/intro/Makefile

Lines changed: 40 additions & 0 deletions

@@ -0,0 +1,40 @@
.PHONY: all format lint test tests integration_tests help

# Default target executed when no arguments are given to make.
all: help

install: ## Install the poetry environment and dependencies
	poetry install --no-root

clean: ## Clean up cache directories and build artifacts
	find . -type d -name "__pycache__" -exec rm -rf {} +
	find . -type d -name "*.pyc" -exec rm -rf {} +
	find . -type d -name ".ruff_cache" -exec rm -rf {} +
	find . -type d -name ".pytest_cache" -exec rm -rf {} +
	find . -type d -name ".coverage" -exec rm -rf {} +
	rm -rf dist/
	rm -rf build/

######################
# LINTING AND FORMATTING
######################

# Define a variable for Python and notebook files.
PYTHON_FILES=.
MYPY_CACHE=.mypy_cache
lint: ## Run code quality tools
	poetry run ruff check $(PYTHON_FILES)
	poetry run ruff format $(PYTHON_FILES) --check

format: ## Format code using ruff
	poetry run ruff format $(PYTHON_FILES)
	poetry run ruff check $(PYTHON_FILES) --fix

######################
# HELP
######################

help:
	@echo '----'
	@echo 'format - run code formatters'
	@echo 'lint   - run linters'

nvidia_models/intro/README.md

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
# AI Crew using NVIDIA NIM Endpoint

## Introduction
This is a simple example using the CrewAI framework with an NVIDIA NIM endpoint and the langchain-nvidia-ai-endpoints integration.

## Running the Script
This example uses the NVIDIA NIM API to call a model.

- **Configure Environment**: Set NVIDIA_API_KEY to the appropriate API key. Set MODEL to select the appropriate model.
- **Install Dependencies**: Run `make install`.
- **Execute the Script**: Run `python main.py` to run the example crew.

## Details & Explanation
- **Running the Script**: Execute `python main.py`. The script will leverage the CrewAI framework to run the research task and return its output.
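The configuration steps above boil down to the two variables in `.env.example`. A minimal sketch of how they are consumed (the key value below is a hypothetical placeholder; the `MODEL` fallback and the `nvidia_nim/` prefix mirror what `main.py` does):

```python
import os

# Placeholder values standing in for the real .env entries (key value is fake).
os.environ["NVIDIA_API_KEY"] = "nvapi-xxxx"
os.environ["MODEL"] = "meta/llama-3.1-8b-instruct"

# main.py reads MODEL with the same default, prefixes it for LiteLLM routing,
# and mirrors NVIDIA_API_KEY into NVIDIA_NIM_API_KEY.
model = os.environ.get("MODEL", "meta/llama-3.1-8b-instruct")
nim_model_id = "nvidia_nim/" + model
os.environ["NVIDIA_NIM_API_KEY"] = os.environ.get("NVIDIA_API_KEY")
```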

nvidia_models/intro/main.py

Lines changed: 152 additions & 0 deletions

@@ -0,0 +1,152 @@
import logging
import os
from typing import Any, Dict, List, Optional, Union

import litellm
from crewai import LLM, Agent, Crew, Process, Task
from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededException,
)
from dotenv import load_dotenv
from langchain_nvidia_ai_endpoints import ChatNVIDIA

load_dotenv()


class nvllm(LLM):
    def __init__(
        self,
        llm: ChatNVIDIA,
        model_str: str,
        timeout: Optional[Union[float, int]] = None,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        n: Optional[int] = None,
        stop: Optional[Union[str, List[str]]] = None,
        max_completion_tokens: Optional[int] = None,
        max_tokens: Optional[int] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        logit_bias: Optional[Dict[int, float]] = None,
        response_format: Optional[Dict[str, Any]] = None,
        seed: Optional[int] = None,
        logprobs: Optional[bool] = None,
        top_logprobs: Optional[int] = None,
        base_url: Optional[str] = None,
        api_version: Optional[str] = None,
        api_key: Optional[str] = None,
        callbacks: List[Any] = None,
        **kwargs,
    ):
        self.model = model_str
        self.timeout = timeout
        self.temperature = temperature
        self.top_p = top_p
        self.n = n
        self.stop = stop
        self.max_completion_tokens = max_completion_tokens
        self.max_tokens = max_tokens
        self.presence_penalty = presence_penalty
        self.frequency_penalty = frequency_penalty
        self.logit_bias = logit_bias
        self.response_format = response_format
        self.seed = seed
        self.logprobs = logprobs
        self.top_logprobs = top_logprobs
        self.base_url = base_url
        self.api_version = api_version
        self.api_key = api_key
        self.callbacks = callbacks
        self.kwargs = kwargs
        self.llm = llm

        if callbacks is None:
            self.callbacks = callbacks = []

        self.set_callbacks(callbacks)

    def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = None) -> str:
        if callbacks is None:
            callbacks = []
        if callbacks and len(callbacks) > 0:
            self.set_callbacks(callbacks)

        try:
            params = {
                "model": self.llm.model,
                "input": messages,
                "timeout": self.timeout,
                "temperature": self.temperature,
                "top_p": self.top_p,
                "n": self.n,
                "stop": self.stop,
                "max_tokens": self.max_tokens or self.max_completion_tokens,
                "presence_penalty": self.presence_penalty,
                "frequency_penalty": self.frequency_penalty,
                "logit_bias": self.logit_bias,
                "response_format": self.response_format,
                "seed": self.seed,
                "logprobs": self.logprobs,
                "top_logprobs": self.top_logprobs,
                "api_key": self.api_key,
                **self.kwargs,
            }

            response = self.llm.invoke(**params)
            return response.content
        except Exception as e:
            if not LLMContextLengthExceededException(str(e))._is_context_limit_error(
                str(e)
            ):
                logging.error(f"LiteLLM call failed: {str(e)}")

            raise  # Re-raise the exception after logging

    def set_callbacks(self, callbacks: List[Any]):
        callback_types = [type(callback) for callback in callbacks]
        for callback in litellm.success_callback[:]:
            if type(callback) in callback_types:
                litellm.success_callback.remove(callback)

        for callback in litellm._async_success_callback[:]:
            if type(callback) in callback_types:
                litellm._async_success_callback.remove(callback)

        litellm.callbacks = callbacks


model = os.environ.get("MODEL", "meta/llama-3.1-8b-instruct")
llm = ChatNVIDIA(model=model)
default_llm = nvllm(model_str="nvidia_nim/" + model, llm=llm)

os.environ["NVIDIA_NIM_API_KEY"] = os.environ.get("NVIDIA_API_KEY")

# Create a researcher agent
researcher = Agent(
    role="Senior Researcher",
    goal="Discover groundbreaking technologies",
    verbose=True,
    llm=default_llm,
    backstory=(
        "A curious mind fascinated by cutting-edge innovation and the potential "
        "to change the world, you know everything about tech."
    ),
)

# Task for the researcher
research_task = Task(
    description="Identify the next big trend in AI",
    agent=researcher,  # Assigning the task to the researcher
    expected_output="Data Insights",
)


# Instantiate your crew
tech_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,  # Tasks will be executed one after the other
)

# Begin the task execution
tech_crew.kickoff()
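The `set_callbacks` method above evicts any already-registered litellm callback whose type matches an incoming one before installing the new list, so re-running setup never stacks duplicate handlers. A stdlib-only sketch of that replace-by-type logic, with a plain list standing in for litellm's global registries (class and function names here are hypothetical):

```python
class LoggingCallback:
    pass

class MetricsCallback:
    pass

def replace_by_type(registry, new_callbacks):
    """Remove registry entries whose type matches any incoming callback, then add the new ones."""
    new_types = [type(cb) for cb in new_callbacks]
    for cb in registry[:]:  # iterate over a copy so in-place removal is safe
        if type(cb) in new_types:
            registry.remove(cb)
    registry.extend(new_callbacks)

success_registry = [LoggingCallback(), MetricsCallback()]
fresh = LoggingCallback()
replace_by_type(success_registry, [fresh])
# The stale LoggingCallback is evicted; MetricsCallback is untouched.
```

Iterating over the slice copy `registry[:]` matters: removing items from a list while iterating it directly skips elements.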

nvidia_models/intro/pyproject.toml

Lines changed: 38 additions & 0 deletions

@@ -0,0 +1,38 @@
[tool.poetry]
name = "nvidia-intro-crewai-example"
version = "0.1.0"
description = ""
authors = ["raspawar <raspawar@nvidia.com>"]

[tool.poetry.dependencies]
python = ">=3.10.0,<3.12"
python-dotenv = "1.0.0"
litellm = "^1.52.10"
langchain-nvidia-ai-endpoints = "^0.3.5"
crewai = "^0.80.0"

[tool.pyright]
# https://github.com/microsoft/pyright/blob/main/docs/configuration.md
useLibraryCodeForTypes = true
exclude = [".cache"]

[tool.ruff.lint]
select = [
    "E",    # pycodestyle
    "F",    # pyflakes
    "I",    # isort
    "B",    # flake8-bugbear
    "C4",   # flake8-comprehensions
    "ARG",  # flake8-unused-arguments
    "SIM",  # flake8-simplify
    "T201", # print
]
ignore = [
    "W291", # trailing whitespace
    "W292", # no newline at end of file
    "W293", # blank line contains whitespace
]

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
Lines changed: 27 additions & 0 deletions

@@ -0,0 +1,27 @@
#!/bin/bash
#
# This script searches for lines starting with "import pydantic" or "from pydantic"
# in tracked files within a Git repository.
#
# Usage: ./scripts/check_pydantic.sh /path/to/repository

# Check if a path argument is provided
if [ $# -ne 1 ]; then
  echo "Usage: $0 /path/to/repository"
  exit 1
fi

repository_path="$1"

# Search for lines matching the pattern within the specified repository
result=$(git -C "$repository_path" grep -E '^import pydantic|^from pydantic')

# Check if any matching lines were found
if [ -n "$result" ]; then
  echo "ERROR: The following lines need to be updated:"
  echo "$result"
  echo "Please replace the code with an import from langchain_core.pydantic_v1."
  echo "For example, replace 'from pydantic import BaseModel'"
  echo "with 'from langchain_core.pydantic_v1 import BaseModel'"
  exit 1
fi
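The `git grep -E` pattern in the script above is anchored to the start of a line, so indented or nested imports are not flagged. An illustrative Python equivalent of the same check (the sample lines are hypothetical):

```python
import re

# Same anchored alternation the script passes to `git grep -E`.
PATTERN = re.compile(r"^import pydantic|^from pydantic")

sample_lines = [
    "from pydantic import BaseModel",                    # flagged
    "import pydantic",                                   # flagged
    "from langchain_core.pydantic_v1 import BaseModel",  # the suggested replacement
    "    from pydantic import Field",                    # indented, so not matched
]
flagged = [line for line in sample_lines if PATTERN.match(line)]
```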
Lines changed: 17 additions & 0 deletions

@@ -0,0 +1,17 @@
#!/bin/bash

set -eu

# Initialize a variable to keep track of errors
errors=0

# make sure not importing from langchain or langchain_experimental
git --no-pager grep '^from langchain\.' . && errors=$((errors+1))
git --no-pager grep '^from langchain_experimental\.' . && errors=$((errors+1))

# Decide on an exit status based on the errors
if [ "$errors" -gt 0 ]; then
  exit 1
else
  exit 0
fi
Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
SERPER_API_KEY=
NVIDIA_API_KEY=
MODEL=meta/llama-3.1-8b-instruct
Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
.env
.DS_Store
__pycache__
.venv
poetry.lock
.ruff_cache

0 commit comments
