
A full-stack web UI implementation of large language models, such as ChatGPT or LLaMA.

License


c0sogi/LLMChat


👋 Welcome to the LLMChat repository, a full-stack implementation of an API server built with Python FastAPI, and a beautiful frontend powered by Flutter. 💬 This project is designed to deliver a seamless chat experience with ChatGPT and other advanced LLM models. 🔝 It offers a modern infrastructure that can be easily extended when GPT-4's multimodal and plugin features become available. 🚀 Enjoy your stay!

Demo


Enjoy the beautiful UI and rich set of customizable widgets provided by Flutter.

  • It supports both mobile and PC environments.
  • Markdown is also supported, so you can use it to format your messages.

Web Browsing

  • Duckduckgo

    You can use the Duckduckgo search engine to find relevant information on the web. Just activate the 'Browse' toggle button!

    Watch the demo video for full browsing: https://www.youtube.com/watch?v=mj_CVrWrS08

Browse Web


Vector Embedding

  • Embed Any Text

    With the /embed command, you can store text indefinitely in your own private vector database and query it later, anytime. If you use the /share command, the text is stored in a public vector database that everyone can share. Enabling the Query toggle button or using the /query command helps the AI generate contextualized answers by searching for text similarities in the public and private databases. This addresses one of the biggest limitations of language models: memory.

  • Upload Your PDF File

    You can embed a PDF file by clicking Embed Document at the bottom left. In a few seconds, the text content of the PDF will be converted to vectors and embedded into the Redis cache.

Upload Your PDF File


  • Change your chat model

    You can change your chat model via the dropdown menu. You can define whatever model you want to use in LLMModels, located in app/models/llms.py.

    Change your chat model


  • Change your chat title

    You can change your chat title by clicking the title of the chat. This will be stored until you change or delete it!

    Change your chat title


🦙 Local LLMs

llama api

Local Llama LLMs are assumed to run only in the local environment and use the http://localhost:8002/v1/completions endpoint. The server continuously checks the status of the llama API server by polling http://localhost:8002/health once a second for a 200 OK response; if none is returned, it automatically spawns a separate process to start the API server.
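The polling behavior described above can be sketched as follows. This is an illustrative, stdlib-only sketch: the `llama_api` module name passed to the spawned process is a hypothetical placeholder, not the repository's actual entry point.

```python
import subprocess
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8002/health"  # endpoint named above


def server_is_healthy(url: str = HEALTH_URL, timeout: float = 1.0) -> bool:
    """Return True if the llama API server answers 200 OK."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def watch_llama_server(poll_interval: float = 1.0) -> None:
    """Poll the health endpoint once a second; spawn the server if it is down."""
    process = None
    while True:
        if not server_is_healthy() and process is None:
            # "llama_api" is a hypothetical module name for illustration only
            process = subprocess.Popen(["python", "-m", "llama_api"])
        time.sleep(poll_interval)
```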

Llama.cpp

The main goal of llama.cpp is to run the LLaMA model using GGML 4-bit quantization in a plain C/C++ implementation without dependencies. You have to download a GGML .bin file from Hugging Face, put it in the llama_models/ggml folder, and define an LLMModel in app/models/llms.py. There are a few examples, so you can easily define your own model. Refer to the llama.cpp repository for more information: https://github.com/ggerganov/llama.cpp

Exllama

A standalone Python/C++/CUDA implementation of Llama for use with 4-bit GPTQ weights, designed to be fast and memory-efficient on modern GPUs. It uses pytorch and sentencepiece to run the model. It is assumed to work only in the local environment, and at least one NVIDIA CUDA GPU is required. You have to download the tokenizer, config, and GPTQ files from Hugging Face, put them in the llama_models/gptq/YOUR_MODEL_FOLDER folder, and define an LLMModel in app/models/llms.py. There are a few examples, so you can easily define your own model. Refer to the exllama repository for more detailed information: https://github.com/turboderp/exllama


Key Features

  • FastAPI - High-performance web framework for building APIs with Python.
  • Flutter - Webapp frontend with a beautiful UI and a rich set of customizable widgets.
  • ChatGPT - Seamless integration with the OpenAI API for text generation and message management.
  • LLaMA - Supporting local LLMs: LlamaCpp and Exllama models.
  • WebSocket Connection - Real-time, two-way communication with ChatGPT and other LLM models, via the Flutter frontend webapp.
  • Vectorstore - Using Redis and Langchain, store and retrieve vector embeddings for similarity search, helping the AI generate more relevant responses.
  • Auto summarization - Using Langchain's summarize chain, summarize the conversation and store it in the database, saving a lot of tokens.
  • Web Browsing - Using the DuckDuckGo search engine, browse the web and find relevant information.
  • Concurrency - Asynchronous programming with async/await syntax for concurrency and parallelism.
  • Security - Token validation and authentication to keep the API secure.
  • Database - Manage database connections and execute MySQL queries. Easily perform Create, Read, Update, and Delete actions with sqlalchemy.asyncio.
  • Cache - Manage cache connections and execute Redis queries with aioredis. Easily perform Create, Read, Update, and Delete actions.

Getting Started / Installation

To set up the project on your local machine, follow these simple steps. Before you begin, ensure you have docker and docker-compose installed on your machine. If you want to run the server without Docker, you also have to install Python 3.11. Even then, you need Docker to run the DB servers.

1. Clone the repository

To recursively clone the submodules (needed for Exllama or llama.cpp models), use the following command:

```bash
git clone --recurse-submodules https://github.com/c0sogi/llmchat.git
```

If you only want to use the core features (OpenAI), use the following command:

```bash
git clone https://github.com/c0sogi/llmchat.git
```

2. Change to the project directory

```bash
cd LLMChat
```

3. Create a .env file

Set up an env file, referring to the .env-sample file. Enter the database information, your OpenAI API key, and other necessary configurations. Optional values are not required; just leave them as they are.

4. To run the server

Execute the following. It may take a few minutes to start the server for the first time:

```bash
docker-compose -f docker-compose-local.yaml up
```

5. To stop the server

```bash
docker-compose -f docker-compose-local.yaml down
```

6. Enjoy it

Now you can access the server at http://localhost:8000/docs, the database at db:3306, and the cache at cache:6379. You can also access the app at http://localhost:8000/chat.

  • To run the server without Docker: install Python 3.11. You still need Docker to run the DB servers. Stop the API server if it is already running with docker-compose -f docker-compose-local.yaml down api. Don't forget to keep the other DB servers running on Docker! Then, run the following command:

```bash
python -m main
```

    Your server should now be up and running at http://localhost:8001 in this case.

License

This project is licensed under the MIT License, which allows free use, modification, and distribution, as long as the original copyright and license notice are included in any copy or substantial portion of the software.

Why FastAPI?

🚀 FastAPI is a modern web framework for building APIs with Python. 💪 It is high-performance, easy to learn, fast to code, and ready for production. 👍 One of the main features of FastAPI is that it supports concurrency and async/await syntax. 🤝 This means you can write code that handles multiple tasks at the same time without blocking, especially for I/O-bound operations such as network requests, database queries, and file operations.

Why Flutter?

📱 Flutter is an open-source UI toolkit developed by Google for building native user interfaces for mobile, web, and desktop platforms from a single codebase. 👨‍💻 It uses Dart, a modern object-oriented programming language, and provides a rich set of customizable widgets that can adapt to any design.

WebSocket Connection

You can access ChatGPT or LlamaCpp through a WebSocket connection using two modules: app/routers/websocket and app/utils/chat/chat_stream_manager. These modules facilitate communication between the Flutter client and the chat model through a WebSocket. With the WebSocket, you can establish a real-time, two-way communication channel to interact with the LLM.

Usage

To start a conversation, connect to the WebSocket route /ws/chat/{api_key} with a valid API key registered in the database. Note that this API key is not the same as the OpenAI API key; it is only used by your server to validate the user. Once connected, you can send messages and commands to interact with the LLM model. The WebSocket will send back chat responses in real time. This WebSocket connection is established via the Flutter app, which can be accessed at the /chat endpoint.

websocket.py

websocket.py is responsible for setting up a WebSocket connection and handling user authentication. It defines the WebSocket route /chat/{api_key}, which accepts a WebSocket and an API key as parameters.

When a client connects to the WebSocket, it first checks the API key to authenticate the user. If the API key is valid, the begin_chat() function is called from the stream_manager.py module to start the conversation.

In case of an unregistered API key or an unexpected error, an appropriate message is sent to the client and the connection is closed.

```python
@router.websocket("/chat/{api_key}")
async def ws_chat(websocket: WebSocket, api_key: str):
    ...
```

stream_manager.py

stream_manager.py is responsible for managing the conversation and handling user messages. It defines the begin_chat() function, which takes a WebSocket and a user as parameters.

The function first initializes the user's chat context from the cache manager. Then, it sends the initial message history to the client through the WebSocket.

The conversation continues in a loop until the connection is closed. During the conversation, the user's messages are processed and GPT's responses are generated accordingly.

```python
class ChatStreamManager:
    @classmethod
    async def begin_chat(cls, websocket: WebSocket, user: Users) -> None:
        ...
```

Sending Messages to WebSocket

The SendToWebsocket class is used for sending messages and streams to the WebSocket. It has two methods: message() and stream(). The message() method sends a complete message to the WebSocket, while the stream() method sends a stream.

```python
class SendToWebsocket:
    @staticmethod
    async def message(...):
        ...

    @staticmethod
    async def stream(...):
        ...
```

Handling AI Responses

The MessageHandler class also handles AI responses. The ai() method sends the AI response to the WebSocket. If translation is enabled, the response is translated using the Google Translate API before being sent to the client.

```python
class MessageHandler:
    ...

    @staticmethod
    async def ai(...):
        ...
```

Handling Custom Commands

User messages are processed using the HandleMessage class. If a message starts with /, such as /YOUR_CALLBACK_NAME, it is treated as a command and the appropriate command response is generated. Otherwise, the user's message is processed and sent to the LLM model to generate a response.

Commands are handled using the ChatCommands class. It executes the corresponding callback function depending on the command. You can add new commands simply by adding a callback to the ChatCommands class in app.utils.chat.chat_commands.
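To illustrate the dispatch pattern described above, here is a hedged, self-contained sketch. The `/ping` command and the handler body are hypothetical examples, not the repository's actual implementation:

```python
import asyncio


class ChatCommands:
    @classmethod
    async def ping(cls) -> str:
        """/ping -> simple liveness check (hypothetical example command)."""
        return "pong"


async def command_handler(message: str) -> str:
    """Route '/name arg1 arg2 ...' to the matching ChatCommands callback."""
    name, *args = message.lstrip("/").split()
    callback = getattr(ChatCommands, name, None)
    if callback is None:
        return f"Unknown command: /{name}"
    return await callback(*args)


# asyncio.run(command_handler("/ping")) -> "pong"
```

Adding a new command is then just a matter of defining another async method on the class; the dispatcher picks it up by name.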

🌟 Vector Embedding

Using Redis for storing vector embeddings of conversations 🗨️ can aid the ChatGPT model 🤖 in several ways, such as efficient and fast retrieval of conversation context 🕵️‍♀️, handling large amounts of data 📊, and providing more relevant responses through vector similarity search 🔎.

Some fun examples of how this could work in practice:

  • Imagine a user is chatting with ChatGPT about their favorite TV show 📺 and mentions a specific character 👤. Using Redis, ChatGPT could retrieve previous conversations where that character was mentioned and use that information to provide more detailed insights or trivia about that character 🤔.
  • Another scenario could be a user discussing their travel plans✈️ with ChatGPT. If they mention a particular city 🌆 or landmark 🏰, ChatGPT could use vector similarity search to retrieve previous conversations that discussed the same location and provide recommendations or tips based on that context 🧳.
  • If a user mentions a particular cuisine 🍝 or dish 🍱, ChatGPT could retrieve previous conversations that discussed those topics and provide recommendations or suggestions based on that context 🍴.

1. Embedding text using the /embed command

When a user enters a command in the chat window like /embed <text_to_embed>, the VectorStoreManager.create_documents method is called. This method converts the input text into a vector using OpenAI's text-embedding-ada-002 model and stores it in the Redis vectorstore.

```python
@staticmethod
@command_response.send_message_and_stop
async def embed(text_to_embed: str, /, buffer: BufferedUserContext) -> str:
    """Embed the text and save its vectors in the redis vectorstore.\n
    /embed <text_to_embed>"""
    ...
```

2. Querying embedded data using the /query command

When the user enters the /query <query> command, the asimilarity_search function is used to find up to three results with the highest vector similarity to the embedded data in the Redis vectorstore. These results are temporarily stored in the chat context, which helps the AI answer the query by referring to them.

```python
@staticmethod
async def query(query: str, /, buffer: BufferedUserContext, **kwargs) -> Tuple[str | None, ResponseType]:
    """Query from redis vectorstore\n
    /query <query>"""
    ...
```
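The "up to three results with the highest vector similarity" step can be illustrated with a framework-free sketch. The cosine-similarity ranking below is a common way such searches work; the actual project delegates this to Langchain's Redis vectorstore, so treat the function names as illustrative:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float], documents: list[tuple[str, list[float]]], k: int = 3) -> list[str]:
    """Return the texts of the k documents most similar to the query vector."""
    ranked = sorted(documents, key=lambda doc: cosine(query_vec, doc[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```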

3. Automatically embedding uploaded text files

When running the begin_chat function, if a user uploads a file containing text (e.g., a PDF or TXT file), the text is automatically extracted from the file and its vector embedding is saved to Redis.

```python
@classmethod
async def embed_file_to_vectorstore(cls, file: bytes, filename: str, collection_name: str) -> str:
    # if user uploads file, embed it
    ...
```

4. commands.py functionality

In the commands.py file, there are several important components:

  • command_response: This class is used as a decorator on a command method to specify the next action. It helps define various response types, such as sending a message and stopping, sending a message and continuing, handling user input, handling AI responses, and more.
  • command_handler: This function is responsible for performing a command callback method based on the text entered by the user.
  • arguments_provider: This function automatically supplies the arguments required by the command method based on the annotation type of the command method.
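The arguments_provider idea can be sketched with a toy implementation: supply a callback's arguments by matching each parameter's type annotation against the values available in the chat context. The matching rule and names here are assumptions, not the project's exact logic:

```python
import inspect


def arguments_provider(callback, available: dict) -> dict:
    """Build kwargs for `callback` by matching annotation types to `available` values."""
    kwargs = {}
    for name, param in inspect.signature(callback).parameters.items():
        if param.annotation is inspect.Parameter.empty:
            continue  # unannotated parameters are skipped in this sketch
        for value in available.values():
            if isinstance(value, param.annotation):
                kwargs[name] = value
                break
    return kwargs


async def example_command(count: int, text: str) -> str:
    """Hypothetical command callback used only to demonstrate the matching."""
    return text * count


# arguments_provider(example_command, {"buffer": "hi", "n": 2})
# -> {"count": 2, "text": "hi"}
```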

📝 Auto Summarization

There is a way to save tokens by giving the LLM a task that summarizes the message. The auto-summarization task is a crucial feature that enhances the chatbot's efficiency. Let's break down this feature:

  1. Task Triggering: This feature is activated whenever a user types a message or the AI responds with a message. At this point, an automatic summarization task is generated to condense the text content.

  2. Task Storage: The auto-summarization task is then stored in the task_list attribute of the BufferUserChatContext. This serves as a queue for managing tasks linked to the user's chat context.

  3. Task Harvesting: Following the completion of a user-AI question-and-answer cycle by the MessageHandler, the harvest_done_tasks function is invoked. This function collects the results of the summarization tasks, making sure nothing is left out.

  4. Summarization Application: After the harvesting process, the summarized results replace the actual message when the chatbot requests answers from large language models (LLMs), such as OPENAI and LLAMA_CPP. By doing so, we can send much more succinct prompts than the initial lengthy message.

  5. User Experience: Importantly, from the user's perspective, they only see the original message. The summarized version of the message is not shown to them, maintaining transparency and avoiding potential confusion.

  6. Simultaneous Tasks: Another key feature of this auto-summarization task is that it doesn't impede other tasks. In other words, while the chatbot is busy summarizing the text, other tasks can still be carried out, thereby improving the overall efficiency of our chatbot.

By default, the summarize chain only works for messages of 512 tokens or more. This can be turned on/off, and the threshold set, in ChatConfig.
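The scheduling and harvesting flow above can be sketched with plain asyncio. The function names mirror those in the text, but the bodies (including the stand-in summarizer) are illustrative assumptions:

```python
import asyncio

SUMMARIZE_THRESHOLD = 512  # token threshold from ChatConfig, per the text


async def summarize(text: str) -> str:
    """Stand-in for Langchain's summarize chain."""
    return text[:40] + "..."


def schedule_summary(task_list: list, message: str, num_tokens: int) -> None:
    """Steps 1-2: queue a background summarization task for long messages."""
    if num_tokens >= SUMMARIZE_THRESHOLD:
        task_list.append(asyncio.ensure_future(summarize(message)))


async def harvest_done_tasks(task_list: list) -> list:
    """Step 3: collect finished summaries after the Q&A cycle completes."""
    return list(await asyncio.gather(*task_list))
```

Because the summarization runs as a background task, the main question-and-answer cycle is never blocked (step 6), and short messages bypass the queue entirely.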

📚 LLM Models

This repository contains different LLM models, defined in llms.py. Each LLM model class inherits from the base class LLMModel. The LLMModels enum is a collection of these LLMs.

All operations are handled asynchronously without interrupting the main thread. However, local LLMs are not able to handle multiple requests at the same time, as they are too computationally expensive. Therefore, a Semaphore is used to limit the number of concurrent requests to 1.

📌 Usage

The default LLM model used via UserChatContext.construct_default is gpt-3.5-turbo. You can change the default in that function.

📖 Model Descriptions

1️⃣ OpenAIModel

OpenAIModel generates text asynchronously by requesting chat completion from the OpenAI server. It requires an OpenAI API key.

2️⃣ LlamaCppModel

LlamaCppModel reads a locally stored GGML model. The llama.cpp GGML model must be put in the llama_models/ggml folder as a .bin file. For example, if you downloaded a q4_0 quantized model from https://huggingface.co/TheBloke/robin-7B-v2-GGML, the path of the model has to be "robin-7b.ggmlv3.q4_0.bin".

3️⃣ ExllamaModel

ExllamaModel reads a locally stored GPTQ model. The Exllama GPTQ model must be put in the llama_models/gptq folder as a folder. For example, if you downloaded these 3 files from https://huggingface.co/TheBloke/orca_mini_7B-GPTQ/tree/main:

  • orca-mini-7b-GPTQ-4bit-128g.no-act.order.safetensors
  • tokenizer.model
  • config.json

Then you need to put them in a folder. The path of the model has to be the folder name, say "orca_mini_7b", which contains the 3 files.

📝 Handling Exceptions

Exceptions that may occur during text generation are handled here. If a ChatLengthException is thrown, a routine automatically re-limits the message history to within the token limit via the cutoff_message_histories function and resends the request. This ensures a smooth chat experience regardless of the token limit.
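The retry-on-overflow routine can be sketched as follows. The exception class name and cutoff_message_histories come from the text; the whitespace-based token counting and the retry wrapper are simplifying assumptions:

```python
class ChatLengthException(Exception):
    """Raised when the prompt exceeds the model's token limit."""


def cutoff_message_histories(messages: list, limit: int) -> list:
    """Drop the oldest messages until the (toy) token count fits the limit."""
    while sum(len(m.split()) for m in messages) > limit and len(messages) > 1:
        messages = messages[1:]
    return messages


def generate_with_retry(generate, messages: list, limit: int) -> str:
    """Try once; on overflow, trim the history and resend."""
    try:
        return generate(messages)
    except ChatLengthException:
        return generate(cutoff_message_histories(messages, limit))
```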

Behind the WebSocket Connection...

This project aims to create an API backend to enable the large language model chatbot service. It utilizes a cache manager to store messages and user profiles in Redis, and a message manager to safely cache messages so that the number of tokens does not exceed an acceptable limit.

Cache Manager

The Cache Manager (CacheManager) is responsible for handling user context information and message histories. It stores these data in Redis, allowing for easy retrieval and modification. The manager provides several methods to interact with the cache, such as:

  • read_context_from_profile: Reads the user's chat context from Redis, according to the user's profile.
  • create_context: Creates a new user chat context in Redis.
  • reset_context: Resets the user's chat context to default values.
  • update_message_histories: Updates the message histories for a specific role (user, ai, or system).
  • lpop_message_history /rpop_message_history: Removes and returns the message history from the left or right end of the list.
  • append_message_history: Appends a message history to the end of the list.
  • get_message_history: Retrieves the message history for a specific role.
  • delete_message_history: Deletes the message history for a specific role.
  • set_message_history: Sets a specific message history for a role and index.

Message Manager

The Message Manager (MessageManager) ensures that the number of tokens in message histories does not exceed the specified limit. It safely handles adding, removing, and setting message histories in the user's chat context while maintaining token limits. The manager provides several methods to interact with message histories, such as:

  • add_message_history_safely: Adds a message history to the user's chat context, ensuring that the token limit is not exceeded.
  • pop_message_history_safely: Removes and returns the message history from the right end of the list while updating the token count.
  • set_message_history_safely: Sets a specific message history in the user's chat context, updating the token count and ensuring that the token limit is not exceeded.

Usage

To use the cache manager and message manager in your project, import them as follows:

```python
from app.utils.chat.managers.cache import CacheManager
from app.utils.chat.message_manager import MessageManager
```

Then, you can use their methods to interact with the Redis cache and manage message histories according to your requirements.

For example, to create a new user chat context:

```python
user_id = "example@user.com"  # email format
chat_room_id = "example_chat_room_id"  # usually the 32 characters from `uuid.uuid4().hex`
default_context = UserChatContext.construct_default(
    user_id=user_id,
    chat_room_id=chat_room_id,
)
await CacheManager.create_context(user_chat_context=default_context)
```

To safely add a message history to the user's chat context:

```python
user_chat_context = await CacheManager.read_context_from_profile(
    user_chat_profile=UserChatProfile(user_id=user_id, chat_room_id=chat_room_id)
)
content = "This is a sample message."
role = ChatRoles.USER  # can be ChatRoles.USER, ChatRoles.AI, or ChatRoles.SYSTEM
await MessageManager.add_message_history_safely(user_chat_context, content, role)
```

Middlewares

This project uses the token_validator middleware alongside other middlewares in the FastAPI application. These middlewares control access to the API, ensuring only authorized and authenticated requests are processed.

Examples

The following middlewares are added to the FastAPI application:

  1. Access Control Middleware: Ensures that only authorized requests are processed.
  2. CORS Middleware: Allows requests from specific origins, as defined in the app configuration.
  3. Trusted Host Middleware: Ensures that requests are coming from trusted hosts, as defined in the app configuration.

Access Control Middleware

The Access Control Middleware is defined in the token_validator.py file. It is responsible for validating API keys and JWT tokens.

State Manager

The StateManager class is used to initialize request state variables. It sets the request time, start time, IP address, and user token.

Access Control

The AccessControl class contains two static methods for validating API keys and JWT tokens:

  1. api_service: Validates API keys by checking the existence of required query parameters and headers in the request. It calls the Validator.api_key method to verify the API key, secret, and timestamp.
  2. non_api_service: Validates JWT tokens by checking the existence of the 'authorization' header or 'Authorization' cookie in the request. It calls the Validator.jwt method to decode and verify the JWT token.

Validator

The Validator class contains two static methods for validating API keys and JWT tokens:

  1. api_key: Verifies the API access key, hashed secret, and timestamp. Returns a UserToken object if the validation is successful.
  2. jwt: Decodes and verifies the JWT token. Returns a UserToken object if the validation is successful.

Access Control Function

The access_control function is an asynchronous function that handles the request and response flow for the middleware. It initializes the request state using the StateManager class, determines the type of authentication required for the requested URL (API key or JWT token), and validates the authentication using the AccessControl class. If an error occurs during the validation process, an appropriate HTTP exception is raised.

Token

Token utilities are defined in the token.py file. It contains two functions:

  1. create_access_token: Creates a JWT token with the given data and expiration time.
  2. token_decode: Decodes and verifies a JWT token. Raises an exception if the token is expired or cannot be decoded.

Params Utilities

The params_utils.py file contains a utility function for hashing query parameters with a secret key using HMAC and SHA256:

  1. hash_params: Takes query parameters and secret key as input and returns a base64 encoded hashed string.
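A plausible sketch of hash_params as described: an HMAC-SHA256 digest of the query string under the secret key, base64-encoded. The exact canonicalization of the parameters is an assumption here:

```python
import base64
import hashlib
import hmac


def hash_params(query_params: str, secret_key: str) -> str:
    """Return a base64-encoded HMAC-SHA256 of the query string."""
    digest = hmac.new(
        secret_key.encode(), query_params.encode(), hashlib.sha256
    ).digest()
    return base64.b64encode(digest).decode()
```

The server can recompute this hash from the received parameters and compare it against the client-supplied value to verify the request was signed with the shared secret.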

Date Utilities

The date_utils.py file contains the UTC class with utility functions for working with dates and timestamps:

  1. now: Returns the current UTC datetime with an optional hour difference.
  2. timestamp: Returns the current UTC timestamp with an optional hour difference.
  3. timestamp_to_datetime: Converts a timestamp to a datetime object with an optional hour difference.
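A minimal sketch of the three UTC helpers, assuming the hour-difference parameter simply offsets the result:

```python
from datetime import datetime, timedelta, timezone


class UTC:
    @staticmethod
    def now(hour_diff: int = 0) -> datetime:
        """Current UTC datetime, shifted by an optional number of hours."""
        return datetime.now(timezone.utc) + timedelta(hours=hour_diff)

    @staticmethod
    def timestamp(hour_diff: int = 0) -> int:
        """Current UTC timestamp, shifted by an optional number of hours."""
        return int(UTC.now(hour_diff).timestamp())

    @staticmethod
    def timestamp_to_datetime(ts: float, hour_diff: int = 0) -> datetime:
        """Convert a Unix timestamp to a UTC datetime with an optional hour offset."""
        return datetime.fromtimestamp(ts, tz=timezone.utc) + timedelta(hours=hour_diff)
```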

Logger

The logger.py file contains the ApiLogger class, which logs API request and response information, including the request URL, method, status code, client information, processing time, and error details (if applicable). The logger is called at the end of the access_control function to log the processed request and response.

Usage

To use the token_validator middleware in your FastAPI application, simply import the access_control function and add it as middleware to your FastAPI instance:

```python
from app.middlewares.token_validator import access_control

app = FastAPI()
app.add_middleware(dispatch=access_control, middleware_class=BaseHTTPMiddleware)
```

Make sure to also add the CORS and Trusted Host middlewares for complete access control:

```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=config.allowed_sites,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=config.trusted_hosts,
    except_path=["/health"],
)
```

Now, any incoming requests to your FastAPI application will be processed by the token_validator middleware and the other middlewares, ensuring that only authorized and authenticated requests are processed.

Database Connection

The app.database.connection module provides an easy-to-use interface for managing database connections and executing SQL queries using SQLAlchemy and Redis. It supports MySQL and can be easily integrated with this project.

Features

  • Create and drop databases
  • Create and manage users
  • Grant privileges to users
  • Execute raw SQL queries
  • Manage database sessions with async support
  • Redis caching support for faster data access

Usage

First, import the required classes from the module:

```python
from app.database.connection import MySQL, SQLAlchemy, CacheFactory
```

Next, create an instance of the SQLAlchemy class and configure it with your database settings:

```python
from app.common.config import Config

config: Config = Config.get()
db = SQLAlchemy()
db.start(config)
```

Now you can use the db instance to execute SQL queries and manage sessions:

```python
# Execute a raw SQL query
result = await db.execute("SELECT * FROM users")

# Use the run_in_session decorator to manage sessions
@db.run_in_session
async def create_user(session, username, password):
    await session.execute(
        "INSERT INTO users (username, password) VALUES (:username, :password)",
        {"username": username, "password": password},
    )

await create_user("JohnDoe", "password123")
```

To use Redis caching, create an instance of the CacheFactory class and configure it with your Redis settings:

```python
cache = CacheFactory()
cache.start(config)
```

You can now use the cache instance to interact with Redis:

```python
# Set a key in Redis
await cache.redis.set("my_key", "my_value")

# Get a key from Redis
value = await cache.redis.get("my_key")
```

In fact, in this project, the MySQL class does the initial setup at app startup, and all database connections are made through the db and cache variables defined at the end of the module. 😅

All DB settings are done in create_app() in app.common.app_settings. For example, the create_app() function will look like this:

```python
def create_app(config: Config) -> FastAPI:
    # Initialize app & db & js
    new_app = FastAPI(
        title=config.app_title,
        description=config.app_description,
        version=config.app_version,
    )
    db.start(config=config)
    cache.start(config=config)
    js_url_initializer(js_location="app/web/main.dart.js")
    # Register routers
    # ...
    return new_app
```

Database CRUD Operations

This project uses a simple and efficient way to handle database CRUD (Create, Read, Update, Delete) operations using SQLAlchemy and two modules: app.database.models.schema and app.database.crud.

Overview

app.database.models.schema

The schema.py module is responsible for defining database models and their relationships using SQLAlchemy. It includes a set of classes that inherit from Base, an instance of declarative_base(). Each class represents a table in the database, and its attributes represent columns in the table. These classes also inherit from a Mixin class, which provides common methods and attributes for all the models.

Mixin Class

The Mixin class provides some common attributes and methods for all the classes that inherit from it. Some of the attributes include:

  • id: Integer primary key for the table.
  • created_at: Datetime for when the record was created.
  • updated_at: Datetime for when the record was last updated.
  • ip_address: IP address of the client that created or updated the record.

It also provides several class methods that perform CRUD operations using SQLAlchemy, such as:

  • add_all(): Adds multiple records to the database.
  • add_one(): Adds a single record to the database.
  • update_where(): Updates records in the database based on a filter.
  • fetchall_filtered_by(): Fetches all records from the database that match the provided filter.
  • one_filtered_by(): Fetches a single record from the database that matches the provided filter.
  • first_filtered_by(): Fetches the first record from the database that matches the provided filter.
  • one_or_none_filtered_by(): Fetches a single record or returns None if no records match the provided filter.

app.database.crud

The users.py and api_keys.py modules contain a set of functions that perform CRUD operations using the classes defined in schema.py. These functions use the class methods provided by the Mixin class to interact with the database.

Some of the functions in this module include:

  • create_api_key(): Creates a new API key for a user.
  • get_api_keys(): Retrieves all API keys for a user.
  • get_api_key_owner(): Retrieves the owner of an API key.
  • get_api_key_and_owner(): Retrieves an API key and its owner.
  • update_api_key(): Updates an API key.
  • delete_api_key(): Deletes an API key.
  • is_email_exist(): Checks if an email exists in the database.
  • get_me(): Retrieves user information based on user ID.
  • is_valid_api_key(): Checks if an API key is valid.
  • register_new_user(): Registers a new user in the database.
  • find_matched_user(): Finds a user with a matching email in the database.

Usage

To use the provided CRUD operations, import the relevant functions from the crud modules and call them with the required parameters. For example:

```python
import asyncio

from app.database.crud.users import register_new_user, get_me, is_email_exist
from app.database.crud.api_keys import create_api_key, get_api_keys, update_api_key, delete_api_key


async def main():
    # `user_id` is an integer index in the MySQL database, and `email` is the user's actual name.
    # The email will be used as `user_id` in chat. Don't confuse it with `user_id` in MySQL.

    # Register a new user
    new_user = await register_new_user(email="test@test.com", hashed_password="...")

    # Get user information
    user = await get_me(user_id=1)

    # Check if an email exists in the database
    email_exists = await is_email_exist(email="test@test.com")

    # Create a new API key for user with ID 1
    new_api_key = await create_api_key(user_id=1, additional_key_info={"user_memo": "Test API Key"})

    # Get all API keys for user with ID 1
    api_keys = await get_api_keys(user_id=1)

    # Update the first API key in the list
    updated_api_key = await update_api_key(
        updated_key_info={"user_memo": "Updated Test API Key"},
        access_key_id=api_keys[0].id,
        user_id=1,
    )

    # Delete the first API key in the list
    await delete_api_key(
        access_key_id=api_keys[0].id,
        access_key=api_keys[0].access_key,
        user_id=1,
    )


if __name__ == "__main__":
    asyncio.run(main())
```
