NIM Anywhere

Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench

Please join the #cdd-nim-anywhere Slack channel if you are an internal user, or open an issue if you are external, for any questions and feedback.

One of the primary benefits of using AI for enterprises is the ability to work with and learn from their internal data. Retrieval-Augmented Generation (RAG) is one of the best ways to do so. NVIDIA has developed a set of microservices called NIM microservices to help our partners and customers build effective RAG pipelines with ease.

NIM Anywhere contains all the tooling required to start integrating NIMs for RAG. It natively scales out to full-sized labs and up to production environments. This is great news for building a RAG architecture and easily adding NIMs as needed. If you're unfamiliar with RAG, it dynamically retrieves relevant external information during inference without modifying the model itself. Imagine you're the tech lead of a company with a local database containing confidential, up-to-date information. You don't want OpenAI to access your data, but you need the model to understand it to answer questions accurately. The solution is to connect your language model to the database and feed it the information.

To learn more about why RAG is an excellent solution for boosting the accuracy and reliability of your generative AI models, read this blog.

Get started with NIM Anywhere now with the quick-start instructions and build your first RAG application using NIMs!

NIM Anywhere Screenshot

Quick-start

Generate your NGC Personal Key

To allow AI Workbench to access NVIDIA's cloud resources, you'll need to provide it with a Personal Key. These keys begin with nvapi-.

Expand this section for instructions for creating this key.
  1. Go to the NGC Personal Key Manager. If you are prompted to, then register for a new account and sign in.

    HINT You can find this tool by logging into ngc.nvidia.com, expanding your profile menu on the top right, selecting Setup, and then selecting Generate Personal Key.

  2. Select Generate Personal Key.

    Generate Personal Key

  3. Enter any value as the Key name, an expiration of 12 months is fine, and select all the services. Press Generate Personal Key when you are finished.

    Personal Key Form

  4. Save your personal key for later. Workbench will need it and there is no way to retrieve it later. If the key is lost, a new one must be created. Protect this key as if it were a password.

    Personal Key

Authenticate with Docker

Workbench will use your system's Docker client to pull NVIDIA NIM containers, so before continuing, make sure to follow these steps to authenticate your Docker client with your NGC Personal Key.

  1. Run the following Docker login command

    docker login nvcr.io
  2. When prompted for your credentials, use the following values:

    • Username: $oauthtoken
    • Password: Use your NGC Personal Key beginning with nvapi-

Install AI Workbench

This project is designed to be used with NVIDIA AI Workbench. While this is not a requirement, running this demo without AI Workbench will require manual work as the pre-configured automation and integrations may not be available.

This quick start guide will assume a remote lab machine is being used for development and the local machine is a thin client for remotely accessing the development machine. This allows for compute resources to stay centrally located and for developers to be more portable. Note, the remote lab machine must run Ubuntu, but the local client can run Windows, MacOS, or Ubuntu. To install this project locally only, simply skip the remote install.

flowchart LR
    local
    subgraph lab environment
        remote-lab-machine
    end
    local <-.ssh.-> remote-lab-machine

Client Machine Install

Ubuntu is required if the local client will also be used for development. When using a remote lab machine, this can be Windows, MacOS, or Ubuntu.

Expand this section for a Windows install.

For full instructions, see the NVIDIA AI Workbench User Guide.

  1. Install Prerequisite Software

    1. If this machine has an NVIDIA GPU, ensure the GPU drivers are installed. It is recommended to use the GeForce Experience tooling to manage the GPU drivers.
    2. Install Docker Desktop for local container support. Please be mindful of Docker Desktop's licensing for enterprise use. Rancher Desktop may be a viable alternative.
    3. [OPTIONAL] If Visual Studio Code integration is desired, install Visual Studio Code.
  2. Download the NVIDIA AI Workbench installer and execute it. Authorize Windows to allow the installer to make changes.

  3. Follow the instructions in the installation wizard. If you need to install WSL2, authorize Windows to make the changes and reboot the local machine when requested. When the system restarts, the NVIDIA AI Workbench installer should automatically resume.

  4. Select Docker as your container runtime.

  5. Log into your GitHub Account by using the Sign in through GitHub.com option.

  6. Enter your git author information if requested.

Expand this section for a MacOS install.

For full instructions, see the NVIDIA AI Workbench User Guide.

  1. Install Prerequisite Software

    1. Install Docker Desktop for local container support. Please be mindful of Docker Desktop's licensing for enterprise use. Rancher Desktop may be a viable alternative.
    2. [OPTIONAL] If Visual Studio Code integration is desired, install Visual Studio Code. When using VSCode on a Mac, an additional step must be performed to install the VSCode CLI interface used by Workbench.
  2. Download the NVIDIA AI Workbench disk image (.dmg file) and open it.

  3. Drag AI Workbench into the Applications folder and run NVIDIA AI Workbench from the application launcher.

    Mac DMG Install Interface

  4. Select Docker as your container runtime.

  5. Log into your GitHub Account by using the Sign in through GitHub.com option.

  6. Enter your git author information if requested.

Expand this section for an Ubuntu install.

For full instructions, see the NVIDIA AI Workbench User Guide. Run this installation as the user who will be using Workbench. Do not run these steps as root.

  1. Install Prerequisite Software

    1. [OPTIONAL] If Visual Studio Code integration is desired, install Visual Studio Code.
  2. Download the NVIDIA AI Workbench installer, make it executable, and then run it. You can make the file executable with the following command:

    chmod +x NVIDIA-AI-Workbench-*.AppImage
  3. AI Workbench will install the NVIDIA drivers for you (if needed). You will need to reboot your local machine after the drivers are installed and then restart the AI Workbench installation by double-clicking the NVIDIA AI Workbench icon on your desktop.

  4. Select Docker as your container runtime.

  5. Log into your GitHub Account by using the Sign in through GitHub.com option.

  6. Enter your git author information if requested.

Remote Machine Install

Only Ubuntu is supported for remote machines.

Expand this section for a remote Ubuntu install.

For full instructions, see the NVIDIA AI Workbench User Guide. Run this installation as the user who will be using Workbench. Do not run these steps as root.

  1. Ensure SSH key based authentication is enabled from the local machine to the remote machine. If this is not currently enabled, the following commands will enable this in most situations. Change REMOTE_USER and REMOTE-MACHINE to reflect your remote address.

    • From a Windows local client, use the following PowerShell:
      ssh-keygen -f "C:\Users\local-user\.ssh\id_rsa" -t rsa -N '""'
      type $env:USERPROFILE\.ssh\id_rsa.pub | ssh REMOTE_USER@REMOTE-MACHINE "cat >> .ssh/authorized_keys"
    • From a MacOS or Linux local client, use the following shell:
      if [ ! -e ~/.ssh/id_rsa ]; then ssh-keygen -f ~/.ssh/id_rsa -t rsa -N ""; fi
      ssh-copy-id REMOTE_USER@REMOTE-MACHINE
  2. SSH into the remote host. Then, use the following commands to download and execute the NVIDIA AI Workbench Installer.

    mkdir -p $HOME/.nvwb/bin && \
    curl -L https://workbench.download.nvidia.com/stable/workbench-cli/$(curl -L -s https://workbench.download.nvidia.com/stable/workbench-cli/LATEST)/nvwb-cli-$(uname)-$(uname -m) --output $HOME/.nvwb/bin/nvwb-cli && \
    chmod +x $HOME/.nvwb/bin/nvwb-cli && \
    sudo -E $HOME/.nvwb/bin/nvwb-cli install
  3. AI Workbench will install the NVIDIA drivers for you (if needed). You will need to reboot your remote machine after the drivers are installed and then restart the AI Workbench installation by re-running the commands in the previous step.

  4. Select Docker as your container runtime.

  5. Log into your GitHub Account by using the Sign in through GitHub.com option.

  6. Enter your git author information if requested.

  7. Once the remote installation is complete, the Remote Location can be added to the local AI Workbench instance. Open the AI Workbench application, click Add Remote Location, and then enter the required information. When finished, click Add Location.

    • Location Name: Any short name for this new location
    • Description: Any brief metadata for this location.
    • Hostname or IP Address: The hostname or address used to remotely SSH. If step 1 was followed, this should be the same as REMOTE-MACHINE.
    • SSH Port: Usually left blank. If a nonstandard SSH port is used, it can be configured here.
    • SSH Username: The username used for making an SSH connection. If step 1 was followed, this should be the same as REMOTE_USER.
    • SSH Key File: The path to the private key for making SSH connections. If step 1 was followed, this should be: /home/USER/.ssh/id_rsa.
    • Workbench Directory: Usually left blank. This is where Workbench will remotely save state.

Download this project

There are two ways to download this project for local use: Cloning and Forking.

Cloning this repository is the recommended way to start. It does not allow for local modifications, but it is the fastest way to get started and the easiest way to pull updates.

Forking this repository is recommended for development as changes will be able to be saved. However, to get updates, the fork maintainer will have to regularly pull from the upstream repo. To work from a fork, follow GitHub's instructions and then reference the URL to your personal fork in the rest of this section.

Expand this section for details on downloading this project.
  1. Open the local NVIDIA AI Workbench window. From the list of locations displayed, select either the remote one you just set up, or local if you're going to work locally.

    AI Workbench Locations Menu

  2. Once inside the location, select Clone Project.

    AI Workbench Projects Menu

  3. In the 'Clone Project' pop-up window, set the Repository URL to https://github.com/NVIDIA/nim-anywhere.git. You can leave the Path as the default of /home/REMOTE_USER/nvidia-workbench/nim-anywhere.git. Click Clone.

    AI Workbench Clone Project Menu

  4. You will be redirected to the new project's page. Workbench will automatically bootstrap the development environment. You can view real-time progress by expanding the Output from the bottom of the window.

    AI Workbench Log Viewer

Configure this project

The project must be configured to use your NGC personal key.

Expand this section for details on configuring this project.
  1. Before running for the first time, your NGC personal key must be configured in Workbench. This is done using the Environment tab from the left-hand panel.

    AI Workbench Side Menu

  2. Scroll down to the Secrets section and find the NGC_API_KEY entry. Press Configure and provide the personal key for NGC that was generated earlier.

Start This Project

Even the most basic LLM chains depend on a few additional microservices. During development these can be replaced with in-memory alternatives, but code changes are then required to go to production. Thankfully, Workbench manages those additional microservices for development environments.

Expand this section for details on starting the demo application.

HINT: For each application, the debug output can be monitored in the UI by clicking the Output link in the lower left corner, selecting the dropdown menu, and choosing the application of interest (or Compose for applications started via compose).

Since you can either pull NIMs and run them locally or use the endpoints from ai.nvidia.com, you can run this project with or without GPUs.

  1. The applications bundled in this workspace can be controlled by navigating to two tabs:

    • Environment > Compose
    • Environment > Applications
  2. First, navigate to the Environment > Compose tab. If you're not working in an environment with GPUs, you can just click Start to run the project using a lightweight deployment. This default configuration will run the following containers:

    • Milvus Vector DB: An unstructured knowledge base

    • Redis: Used to store conversation histories

  3. If you have access to GPU resources and want to run any NIMs locally, use the dropdown menu under Compose and select which set of NIMs you want to run locally. Note that you must have at least 1 available GPU per NIM you plan to run locally. Below is an outline of the available configurations:

    • Local LLM (min 1 GPU required)

      • The first time the LLM NIM is started, it will take some time to download the image and the optimized models.
        • During a long start, you can confirm the LLM NIM is starting by viewing the logs in the Output pane on the bottom left of the UI.

        • If the logs indicate an authentication error, the provided NGC_API_KEY does not have access to the NIMs. Please verify it was generated correctly and in an NGC organization that has NVIDIA AI Enterprise support or trial.

        • If the logs appear to be stuck on ..........: Pull complete., ..........: Verifying complete, or ..........: Download complete; this is normal output from Docker indicating that the various layers of the container image have been downloaded.

        • Any other failures here need to be addressed.

    • Local LLM + Embedding (min 2 GPUs required)

    • Local LLM + Embedding + Reranking (min 3 GPUs required)

    NOTE:

    • Each profile will also run Milvus Vector DB and Redis.
    • Due to the nature of Docker Compose profiles, the UI will let you select multiple profiles at the same time. In the context of this project, selecting multiple profiles does not make sense. It will not cause any errors; however, we recommend only selecting one profile at a time for simplicity.
  4. Once the compose services have been started, navigate to the Environment > Applications tab. Now, the Chain Server can safely be started. This contains the custom LangChain code for performing our reasoning chain. By default, it will use the local Milvus and Redis, but use ai.nvidia.com for LLM, Embedding, and Reranking model inferencing.

  5. Once the Chain Server is up, the Chat Frontend can be started. Starting the interface will automatically open it in a browser window. If you are running any local NIMs, you can edit the config to connect to them via the Chat Frontend.

NIM Anywhere Frontend

Populating the Knowledge Base

To get started developing demos, a sample dataset is provided along with a Jupyter Notebook showing how data is ingested into a Vector Database.

  1. To import PDF documentation into the Vector Database, open Jupyter using the app launcher in AI Workbench.

  2. Use the Jupyter Notebook at code/upload-pdfs.ipynb to ingest the default dataset. If using the default dataset, no changes are necessary.

  3. If using a custom dataset, upload it to the data/ directory in Jupyter and modify the provided notebook as necessary.
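
For orientation, the sketch below illustrates the kind of ingestion flow such a notebook performs: load a PDF, split it into chunks, embed the chunks, and store them in Milvus. This is a hypothetical example, not the contents of code/upload-pdfs.ipynb; the file name data/example.pdf is a placeholder, and it assumes the langchain-community, langchain-text-splitters, langchain-nvidia-ai-endpoints, and pymilvus packages are available. The model name, collection name, and Milvus URL are taken from the default Chain Server configuration shown later in this README.

# Hypothetical ingestion sketch; the real notebook may use different helpers and settings.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Milvus
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = PyPDFLoader("data/example.pdf").load()  # placeholder PDF in the data/ directory
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)  # split pages into passages for retrieval

embedder = NVIDIAEmbeddings(model="nvidia/nv-embedqa-e5-v5")  # default embedding model
Milvus.from_documents(
    chunks,
    embedder,
    collection_name="collection_1",                      # default collection name
    connection_args={"uri": "http://localhost:19530"},   # local Milvus from the Compose stack
)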

Developing Your Own Applications

This project contains applications for a few demo services as well as integrations with external services. These are all orchestrated by NVIDIA AI Workbench.

The demo services are all in the code folder. The root level of the code folder has a few interactive notebooks meant for technical deep dives. The Chain Server is a sample application utilizing NIMs with LangChain. (Note that the Chain Server here gives you the option to experiment with and without RAG.) The Chat Frontend folder contains an interactive UI server for exercising the chain server. Finally, sample notebooks are provided in the Evaluation directory to demonstrate retrieval scoring and validation.

mindmap
  root((AI Workbench))
    Demo Services
        Chain Server<br />LangChain + NIMs
        Frontend<br />Interactive Demo UI
        Evaluation<br />Validate the results
        Notebooks<br />Advanced usage
    Integrations
        Redis<br />Conversation History
        Milvus<br />Vector Database
        LLM NIM<br />Optimized LLMs

Application Configuration

The Chain Server can be configured with either a configuration file or environment variables.

Config from a file

By default, the application will search for a configuration file in all of the following locations. If multiple configuration files are found, values from lower files in the list will take precedence.

  • ./config.yaml
  • ./config.yml
  • ./config.json
  • ~/app.yaml
  • ~/app.yml
  • ~/app.json
  • /etc/app.yaml
  • /etc/app.yml
  • /etc/app.json

Config from a custom file

An additional config file path can be specified through an environment variable named APP_CONFIG. The value in this file will take precedence over all the default file locations.

export APP_CONFIG=/etc/my_config.yaml

Config from env vars

Configuration can also be set using environment variables. The variable names will be in the form APP_FIELD__SUB_FIELD; for example, APP_MILVUS__COLLECTION_NAME sets milvus.collection_name. Values specified as environment variables will take precedence over all values from files.

Chain Server config schema

# Your API key for authentication to AI Foundation.
# ENV Variables: NGC_API_KEY, NVIDIA_API_KEY, APP_NVIDIA_API_KEY
# Type: string, null
nvidia_api_key: ~

# The Data Source Name for your Redis DB.
# ENV Variables: APP_REDIS_DSN
# Type: string
redis_dsn: redis://localhost:6379/0

llm_model:
    # The name of the model to request.
    # ENV Variables: APP_LLM_MODEL__NAME
    # Type: string
    name: meta/llama3-8b-instruct

    # The URL to the model API.
    # ENV Variables: APP_LLM_MODEL__URL
    # Type: string
    url: https://integrate.api.nvidia.com/v1

embedding_model:
    # The name of the model to request.
    # ENV Variables: APP_EMBEDDING_MODEL__NAME
    # Type: string
    name: nvidia/nv-embedqa-e5-v5

    # The URL to the model API.
    # ENV Variables: APP_EMBEDDING_MODEL__URL
    # Type: string
    url: https://integrate.api.nvidia.com/v1

reranking_model:
    # The name of the model to request.
    # ENV Variables: APP_RERANKING_MODEL__NAME
    # Type: string
    name: nv-rerank-qa-mistral-4b:1

    # The URL to the model API.
    # ENV Variables: APP_RERANKING_MODEL__URL
    # Type: string
    url: https://integrate.api.nvidia.com/v1

milvus:
    # The host machine running Milvus vector DB.
    # ENV Variables: APP_MILVUS__URL
    # Type: string
    url: http://localhost:19530

    # The name of the Milvus collection.
    # ENV Variables: APP_MILVUS__COLLECTION_NAME
    # Type: string
    collection_name: collection_1

log_level:

Chat Frontend config schema

The chat frontend has a few configuration options as well. They can be set in the same manner as the chain server.

# The URL to the chain on the chain server.
# ENV Variables: APP_CHAIN_URL
# Type: string
chain_url: http://localhost:3030/

# The url prefix when this is running behind a proxy.
# ENV Variables: PROXY_PREFIX, APP_PROXY_PREFIX
# Type: string
proxy_prefix: /

# Path to the chain server's config.
# ENV Variables: APP_CHAIN_CONFIG_FILE
# Type: string
chain_config_file: ./config.yaml

log_level:

Contributing

All feedback and contributions to this project are welcome. When making changes to this project, either for personal use or for contributing, it is recommended to work on a fork of this project. Once the changes have been completed on the fork, a Merge Request should be opened.

Code Style

This project has been configured with Linters that have been tuned to help the code remain consistent while not being overly burdensome. We use the following Linters:

  • Bandit is used for security scanning
  • Pylint is used for Python Syntax Linting
  • MyPy is used for type hint linting
  • Black is configured for code styling
  • A custom check is run to ensure Jupyter Notebooks do not have any output
  • Another custom check is run to ensure the README.md file is up to date

The embedded VSCode environment is configured to run the linting and checking in real time.

To manually run the linting that is done by the CI pipelines, execute /project/code/tools/lint.sh. Individual tests can be run by specifying them by name: /project/code/tools/lint.sh [deps|pylint|mypy|black|docs|fix]. Running the lint tool in fix mode will automatically correct what it can by running Black, updating the README, and clearing the cell output on all Jupyter Notebooks.

Updating the frontend

The frontend has been designed in an effort to minimize the required HTML and Javascript development. A branded and styled Application Shell is provided that has been created with vanilla HTML, Javascript, and CSS. It is designed to be easy to customize, but it should never be required. The interactive components of the frontend are all created in Gradio and mounted in the app shell using iframes.

Along the top of the app shell is a menu listing the available views. Each view may have its own layout consisting of one or a few pages.

Creating a new page

Pages contain the interactive components for a demo. The code for the pages is in the code/frontend/pages directory. To create a new page:

  1. Create a new folder in the pages directory
  2. Create an __init__.py file in the new directory that uses Gradio to define the UI. The Gradio Blocks layout should be defined in a variable called page.
  3. It is recommended that any CSS and JS files needed for this view be saved in the same directory. See the chat page for an example.
  4. Open the code/frontend/pages/__init__.py file, import the new page, and add the new page to the __all__ list.

NOTE: Creating a new page will not add it to the frontend. It must be added to a view to appear on the Frontend.
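
As an illustration, a minimal page module might look like the following sketch. The folder name my_new_page and the components inside it are hypothetical; the only requirement from the steps above is that the module exposes a Gradio Blocks layout in a variable called page.

# code/frontend/pages/my_new_page/__init__.py (hypothetical example)
import gradio as gr

# The app shell mounts this layout in an iframe; the module only needs to
# expose a Blocks object named `page`.
with gr.Blocks() as page:
    gr.Markdown("# My New Page")
    question = gr.Textbox(label="Question")
    answer = gr.Textbox(label="Answer", interactive=False)

    # Placeholder handler; a real page would call the Chain Server instead.
    question.submit(lambda q: f"You asked: {q}", inputs=question, outputs=answer)

The new module would then be imported in code/frontend/pages/__init__.py, added to the __all__ list as described in step 4, and finally attached to a view as described in the next section.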

Adding a view

Views consist of one or a few pages and should function independently of each other. Views are all defined in the code/frontend/server.py module. All declared views will automatically be added to the Frontend's menu bar and made available in the UI.

To define a new view, modify the list named views. This is a list of View objects. The order of the objects will define their order in the Frontend menu. The first defined view will be the default.

View objects describe the view name and layout. They can be declared as follows:

my_view = frontend.view.View(
    name="My New View",                 # the name in the menu
    left=frontend.pages.sample_page,    # the page to show on the left
    right=frontend.pages.another_page,  # the page to show on the right
)

All of the page declarations, View.left or View.right, are optional. If they are not declared, then the associated iframes in the web layout will be hidden. The other iframes will expand to fill the gaps. The following diagrams show the various layouts.

  • All pages are defined

block-beta
    columns 1
    menu["menu bar"]
    block
        columns 2
        left right
    end

  • Only left is defined

block-beta
    columns 1
    menu["menu bar"]
    block
        columns 1
        left:1
    end

Frontend branding

The frontend contains a few branded assets that can be customized for different use cases.

Logo

The frontend contains a logo on the top left of the page. To modify the logo, an SVG of the desired logo is required. The app shell can then be easily modified to use the new SVG by modifying the code/frontend/_assets/index.html file. There is a single div with an ID of logo. This box contains a single SVG. Update this to the desired SVG definition.

<div id="logo" class="logo">
    <svg viewBox="0 0 164 30">...</svg>
</div>

Color scheme

The styling of the App Shell is defined in code/frontend/_static/css/style.css. The colors in this file may be safely modified.

The styling of the various pages is defined in code/frontend/pages/*/*.css. These files may also require modification for custom color schemes.

Gradio theme

The Gradio theme is defined in the file code/frontend/_assets/theme.json. The colors in this file can safely be modified to the desired branding. Other styles in this file may also be changed, but may cause breaking changes to the frontend. The Gradio documentation contains more information on Gradio theming.

Messaging between pages

NOTE: This is an advanced topic that most developers will never require.

Occasionally, it may be necessary to have multiple pages in a view that communicate with each other. For this purpose, Javascript's postMessage messaging framework is used. Any trusted message posted to the application shell will be forwarded to each iframe where the pages can handle the message as desired. The control page uses this feature to modify the configuration of the chat page.

The following will post a message to the app shell (window.top). The message will contain a dictionary with the key use_kb and a value of true. Using Gradio, this Javascript can be executed by any Gradio event.

window.top.postMessage({"use_kb": true}, '*');

This message will automatically be sent to all pages by the app shell. The following sample code will consume the message on another page. This code will run asynchronously when a message event is received. If the message is trusted, a Gradio component with the elem_id of use_kb will be updated to the value specified in the message. In this way, the value of a Gradio component can be duplicated across pages.

window.addEventListener(
    "message",
    (event) => {
        if (event.isTrusted) {
            use_kb = gradio_config.components.find(
                (element) => element.props.elem_id == "use_kb"
            );
            use_kb.props.value = event.data["use_kb"];
        }
    },
    false
);

Updating documentation

The README is rendered automatically; direct edits will be overwritten. In order to modify the README you will need to edit the files for each section separately. All of these files will be combined and the README will be automatically generated. You can find all of the related files in the docs folder.

Documentation is written in GitHub Flavored Markdown and then rendered to a final Markdown file by Pandoc. The details for this process are defined in the Makefile. The order of the generated files is defined in docs/_TOC.md. The documentation can be previewed in the Workbench file browser window.

Header file

The header file is the first file used to compile the documentation. This file can be found at docs/_HEADER.md. The contents of this file will be written verbatim, without any manipulation, to the README before anything else.

Summary file

The summary file contains a quick description and graphic that describe this project. The contents of this file will be added to the README immediately after the header and just before the table of contents. This file is processed by Pandoc to embed images before writing to the README.

Table of Contents file

The most important file for the documentation is the table of contents file at docs/_TOC.md. This file defines a list of files that should be concatenated in order to generate the final README manual. Files must be on this list to be included.

Static Content

Save all static content, including images, to the _static folder. This will help with organization.

Dynamic documentation

It may be helpful to have documents that update and write themselves. To create a dynamic document, simply create an executable file that writes the Markdown formatted document to stdout. During build time, if an entry in the table of contents file is executable, it will be executed and its stdout will be used in its place.
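
As a sketch, a dynamic document could be a small executable script like the one below. The file name docs/python_deps.md.py and its heading are hypothetical; the only requirements from the paragraph above are that the file is marked executable, listed in docs/_TOC.md, and prints Markdown to stdout.

#!/usr/bin/env python3
# docs/python_deps.md.py (hypothetical): emits a Markdown section to stdout.
# If this file were listed in docs/_TOC.md and marked executable (chmod +x),
# the documentation build would run it and splice its output into the README.
from pathlib import Path

print("## Python Dependencies\n")
for line in Path("requirements.txt").read_text().splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        print(f"- {line}")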

Rendering documentation

When a documentation-related commit is pushed, a GitHub Action will render the documentation. Any changes to the README will be automatically committed.

Managing your Development Environment

Environment Variables

Most of the configuration for the development environment happens with Environment Variables. To make permanent changes to environment variables, modify variables.env or use the Workbench UI.

Python Environment Packages

This project uses one Python environment at /usr/bin/python3 and dependencies are managed with pip. Because all development is done inside a container, any changes to the Python environment will be ephemeral. To permanently install a Python package, add it to the requirements.txt file or use the Workbench UI.

Operating System Configuration

The development environment is based on Ubuntu 22.04. The primary user has password-less sudo access, but all changes to the system will be ephemeral. To make permanent changes to installed packages, add them to the apt.txt file. To make other changes to the operating system, such as manipulating files, adding environment variables, etc., use the postBuild.bash and preBuild.bash files.

Updating Dependencies

It is typically good practice to update dependencies monthly to ensure no CVEs are exposed through misused dependencies. The following process can be used to patch this project. It is recommended to run the regression testing after the patch to ensure nothing has broken in the update.

  1. Update Environment: In the Workbench GUI, open the project and navigate to the Environment pane. Check if there is an update available for the base image. If an updated base image is available, apply the update and rebuild the environment. Address any build errors. Ensure that all of the applications can start.
  2. Update Python Packages and NIMs: The Python dependencies and NIM applications can be updated automatically by running the /project/code/tools/bump.sh script.
  3. Update Remaining Applications: For the remaining applications, manually check their default tag and compare to the latest. Update where appropriate and ensure that the applications still start up successfully.
  4. Restart and rebuild the environment.
  5. Audit Python Environment: It is now best to check the installed versions of ALL Python packages, not just the direct dependencies. To accomplish this, run /project/code/tools/audit.sh. This script will print out a report of all Python packages in a warning state and all packages in an error state. Anything in an error state must be resolved as it will have active CVEs and known vulnerabilities.
  6. Check Dependabot Alerts: Check all of the Dependabot alerts and ensure they should be resolved.
  7. Regression testing: Run through the entire demo, from document ingesting to the frontend, and ensure it is still functional and that the GUI looks correct.
