Jina 3.27.17 documentation


Welcome to Jina!

Survey

Take our user experience survey to let us know your thoughts and help shape the future of Jina!

Jina lets you build multimodal AI services and pipelines that communicate via gRPC, HTTP and WebSockets, then scale them up and deploy them to production. You can focus on your logic and algorithms, without worrying about infrastructure complexity.

Jina provides a smooth Pythonic experience for serving ML models, transitioning from local deployment to advanced orchestration frameworks such as Docker Compose, Kubernetes, or Jina AI Cloud. Jina makes advanced solution engineering and cloud-native technologies accessible to every developer.

Wait, how is Jina different from FastAPI? Jina's value proposition may seem quite similar to that of FastAPI. However, there are several fundamental differences:

Data structure and communication protocols

  • FastAPI communication relies on Pydantic, while Jina relies on DocArray, allowing Jina to support multiple protocols to expose its services. Support for the gRPC protocol is especially useful for data-intensive applications, such as embedding services, where embeddings and tensors can be serialized more efficiently.
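The serialization point can be illustrated without Jina at all: a float vector encoded as JSON text is several times larger than the same vector in packed binary form, which is the kind of representation protobuf-based gRPC payloads carry. A minimal stdlib-only sketch (the 256-dimensional vector is a stand-in for a real embedding):

```python
import json
import struct

# A toy 256-dimensional embedding; stands in for a real model output.
embedding = [0.123456789] * 256

# Text encoding, as a typical JSON/REST payload would carry it.
json_bytes = json.dumps(embedding).encode('utf-8')

# Packed binary encoding (4 bytes per float32), as a protobuf tensor field would.
binary_bytes = struct.pack(f'{len(embedding)}f', *embedding)

print(len(json_bytes), len(binary_bytes))  # the binary form is several times smaller
```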

Advanced orchestration and scaling capabilities

  • Jina allows you to easily containerize and orchestrate your services and models, providing concurrency and scalability.

  • Jina lets you deploy applications formed from multiple microservices that can be containerized and scaled independently.
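As a sketch of what independent scaling looks like in practice, a Deployment can request multiple replicas directly in its YAML (following the `replicas` option in Jina's scaling documentation; `MyExecutor` and `my_executor.py` are placeholder names):

```yaml
jtype: Deployment
with:
  uses: MyExecutor          # placeholder Executor name
  py_modules:
    - my_executor.py        # placeholder module
  replicas: 2               # run two copies and load-balance requests between them
```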

Journey to the cloud

  • Jina provides a smooth transition from local development (using DocArray), to local serving (using Deployment and Flow), to production-ready services that use Kubernetes' capacity to orchestrate the lifetime of containers.

  • By using Jina AI Cloud you get access to scalable and serverless deployments of your applications in one command.

Install

Make sure that you have Python 3.7+ installed on Linux/macOS/Windows.

```shell
pip install -U jina
```

or, with conda:

```shell
conda install jina -c conda-forge
```

Getting Started

Jina supports developers in building AI services and pipelines:

Let’s build a fast, reliable and scalable gRPC-based AI service. In Jina we call this an Executor. Our simple Executor will wrap the StableLM LLM from Stability AI. We’ll then use a Deployment to serve it.

Note: A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a Flow.

Let’s implement the service’s logic:

executor.py
```python
from jina import Executor, requests
from docarray import DocList, BaseDoc
from transformers import pipeline


class Prompt(BaseDoc):
    text: str


class Generation(BaseDoc):
    prompt: str
    text: str


class StableLM(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.generator = pipeline(
            'text-generation', model='stabilityai/stablelm-base-alpha-3b'
        )

    @requests
    def generate(self, docs: DocList[Prompt], **kwargs) -> DocList[Generation]:
        generations = DocList[Generation]()
        prompts = docs.text
        llm_outputs = self.generator(prompts)
        for prompt, output in zip(prompts, llm_outputs):
            # each output is a list of candidate generations; take the first one
            generations.append(
                Generation(prompt=prompt, text=output[0]['generated_text'])
            )
        return generations
```

Then we deploy it with either the Python API or YAML:

Python API: deployment.py

```python
from jina import Deployment
from executor import StableLM

dep = Deployment(uses=StableLM, timeout_ready=-1, port=12345)

with dep:
    dep.block()
```

YAML: deployment.yml

```yaml
jtype: Deployment
with:
  uses: StableLM
  py_modules:
    - executor.py
  timeout_ready: -1
  port: 12345
```

And run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`

Use Jina Client to make requests to the service:

```python
from jina import Client
from docarray import DocList, BaseDoc


class Prompt(BaseDoc):
    text: str


class Generation(BaseDoc):
    prompt: str
    text: str


prompt = Prompt(
    text='suggest an interesting image generation prompt for a mona lisa variant'
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[prompt], return_type=DocList[Generation])
print(response[0].text)
```
a steampunk version of the Mona Lisa, incorporating mechanical gears, brass elements, and Victorian era clothing details

Sometimes you want to chain microservices together into a pipeline. That’s where a Flow comes in.

A Flow is a DAG pipeline composed of a set of steps. It orchestrates a set of Executors and a Gateway to offer an end-to-end service.
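Conceptually, a linear Flow is just function composition over batches of documents. A framework-free sketch (the function names here are illustrative, not part of the Jina API):

```python
# Each step transforms a batch of documents and hands it to the next step.
def llm_step(docs):
    return [f"prompt for: {d}" for d in docs]


def image_step(docs):
    return [f"image generated from '{d}'" for d in docs]


def run_pipeline(docs, steps):
    # A linear DAG: the output of one step is the input of the next.
    for step in steps:
        docs = step(docs)
    return docs


result = run_pipeline(['a mona lisa variant'], [llm_step, image_step])
print(result[0])
```

A real Flow adds what this sketch lacks: each step runs as its own network service that can be containerized, replicated, and scaled independently.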

Note: If you just want to serve a single Executor, you can use a Deployment.

For instance, let’s combine our StableLM language model with a Stable Diffusion image generation model. Chaining these services together into a Flow gives us a service that generates images based on a prompt produced by the LLM.

text_to_image.py
```python
import numpy as np
from jina import Executor, requests
from docarray import BaseDoc, DocList
from docarray.documents import ImageDoc


class Generation(BaseDoc):
    prompt: str
    text: str


class TextToImage(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        from diffusers import StableDiffusionPipeline
        import torch

        self.pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
        ).to("cuda")

    @requests
    def generate_image(self, docs: DocList[Generation], **kwargs) -> DocList[ImageDoc]:
        result = DocList[ImageDoc]()
        # images are returned in PIL format (https://pillow.readthedocs.io/en/stable/)
        images = self.pipe(docs.text).images
        for image in images:
            result.append(ImageDoc(tensor=np.array(image)))
        return result
```

Build the Flow with either Python or YAML:

Python API: flow.py

```python
from jina import Flow
from executor import StableLM
from text_to_image import TextToImage

flow = (
    Flow(port=12345)
    .add(uses=StableLM, timeout_ready=-1)
    .add(uses=TextToImage, timeout_ready=-1)
)

with flow:
    flow.block()
```

YAML: flow.yml

```yaml
jtype: Flow
with:
  port: 12345
executors:
  - uses: StableLM
    timeout_ready: -1
    py_modules:
      - executor.py
  - uses: TextToImage
    timeout_ready: -1
    py_modules:
      - text_to_image.py
```

Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`
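The Flow above exposes the default gRPC Gateway. Following Jina's Gateway configuration options, the protocol can be switched in the same YAML; a sketch:

```yaml
jtype: Flow
with:
  port: 12345
  protocol: http            # Gateway protocol: grpc (default), http or websocket
executors:
  - uses: StableLM
    timeout_ready: -1
    py_modules:
      - executor.py
```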

Then, use Jina Client to make requests to the Flow:

```python
from jina import Client
from docarray import DocList, BaseDoc
from docarray.documents import ImageDoc


class Prompt(BaseDoc):
    text: str


prompt = Prompt(
    text='suggest an interesting image generation prompt for a mona lisa variant'
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[prompt], return_type=DocList[ImageDoc])
response[0].display()
```

Next steps

Learn DocArray API

DocArray is the foundational data structure of Jina. Before starting with Jina, first learn DocArray to quickly build a PoC.

Learn Executor

Executor is a Python class that can serve logic using Documents.

Learn Deployment

Deployment serves an Executor as a scalable service, making it available to receive Documents using gRPC or HTTP.

Learn Flow

Flow orchestrates Executors, each in its own Deployment, into a processing pipeline to accomplish a task.

Learn Gateway

The Gateway is a microservice that serves as the entrypoint of a Flow. It exposes multiple protocols for external communication and routes all internal traffic.

Explore Executor Hub

Executor Hub allows you to containerize, share, explore and make Executors ready for the cloud.

Deploy a Flow to Cloud

Jina AI Cloud is the MLOps platform for hosting Jina projects.

Support

Join Us

Jina is backed by Jina AI and licensed under Apache-2.0.


Copyright © Jina AI Limited. All rights reserved.
Last updated on Oct 01, 2024
