Insights: PromtEngineer/localGPT
Overview
6 Pull requests merged by 2 people
- Fix: Add comprehensive NaN handling for LanceDB indexing (#871, merged Jul 18, 2025)
- fix: implement automatic database path detection for multi-environment compatibility (#870, merged Jul 18, 2025)
- Fix Docker container SQLite database path issue (#849) (#853, merged Jul 16, 2025)
- Fix excessive empty lines in streaming markdown responses (#848, merged Jul 15, 2025)
- Fix: Default to token-based chunking for accurate chunk sizing (#847, merged Jul 15, 2025)
- Updated README with installation, API, and configuration details (#846, merged Jul 13, 2025)
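The NaN-handling fix listed above (#871) addresses a common failure mode: records whose float fields contain NaN or infinity can break vector-store inserts. The sketch below illustrates the general sanitization technique only; `sanitize_record` is a hypothetical helper, not the PR's actual code.

```python
import math

def sanitize_record(record: dict) -> dict:
    """Replace non-finite floats (NaN, +/-inf) with None so the row indexes safely.

    Hypothetical helper illustrating the general idea behind NaN handling
    before inserting rows into a vector store such as LanceDB.
    """
    clean = {}
    for key, value in record.items():
        if isinstance(value, float) and not math.isfinite(value):
            clean[key] = None
        else:
            clean[key] = value
    return clean

rows = [{"id": 1, "score": float("nan")}, {"id": 2, "score": 0.5}]
safe_rows = [sanitize_record(r) for r in rows]
# safe_rows[0]["score"] is None; safe_rows[1] is unchanged
```

A sanitization pass like this is typically run once over each batch just before the insert call, so invalid floats never reach the index.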
2 Pull requests opened by 1 person
- Fix UTF-8 encoding issues with emoji characters in logging (#852, opened Jul 15, 2025)
- Fix Ollama setup to support user-selected models beyond qwen (#860, opened Jul 17, 2025)
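The UTF-8 logging PR above (#852) targets a well-known Windows pitfall: legacy console code pages (e.g. cp1252) cannot encode emoji, so logging them raises UnicodeEncodeError. A common general-purpose workaround, sketched here as an assumption rather than the PR's actual implementation, is to wrap the output stream with an explicit UTF-8 encoder:

```python
import io
import logging
import sys

# Wrap stdout so emoji and other non-ASCII characters are encoded as UTF-8,
# substituting a replacement character instead of raising UnicodeEncodeError
# on narrow console code pages.
stream = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8", errors="replace")
handler = logging.StreamHandler(stream)
logger = logging.getLogger("localgpt")  # hypothetical logger name
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Ingestion complete ✅")
```

On Python 3.7+, setting the `PYTHONUTF8=1` environment variable achieves a similar effect process-wide without code changes.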
436 Issues closed by 6 people
- sqlite3 OperationalError (on Windows) (#867, Jul 18, 2025)
- 'No Models' in 'Overview LLM' (#868, Jul 18, 2025)
- [v2] Can't install auto-gptq 0.6.0 (#859, Jul 16, 2025)
- v2: unable to install docker image. (#851, Jul 16, 2025)
- [v2] Error running it via Docker: sqlite3.OperationalError: unable to open database file (#849, Jul 16, 2025)
- [Question] Can you help me, please? 100k PDFs! (#757, Jul 15, 2025)
- Your GPU is probably not used at all, which would explain the slow speed in answering. (#750, Jul 15, 2025)
- While executing "ingest.py" I am getting below error (#752, Jul 15, 2025)
- Docker not using GPU (#746, Jul 15, 2025)
- Hugging Faces Down: Unable to run models that have already been downloaded. (#744, Jul 15, 2025)
- Docker Build no module named 'utils' (#739, Jul 15, 2025)
- Question: How to run UI from Docker? (#742, Jul 15, 2025)
- Exllama kernel does not support query (#740, Jul 15, 2025)
- Download the source document (#736, Jul 15, 2025)
- GPU layers/batch size/models selction (#738, Jul 15, 2025)
- [macOs] ingest.py: IndexError: list index out of range (#734, Jul 15, 2025)
- Unable to load llama model from path (#726, Jul 15, 2025)
- Ingestion Error / Batch processing (#724, Jul 15, 2025)
- "TypeError: 'HuggingFaceInstructEmbeddings' object is not callable" after enter a query (#731, Jul 15, 2025)
- run_localGPT.py fail: python run_localGPT.py --device_type cpu (#713, Jul 15, 2025)
- Suggestion to improve or fine tune the model with custom documents (#711, Jul 15, 2025)
- multi-user or async prompt requests crashes the app (#714, Jul 15, 2025)
- Requested tokens exceed context window of 4096 (#708, Jul 15, 2025)
- KeyError: 'Cache only has 0 layers, attempted to access layer with index 0' (#709, Jul 15, 2025)
- Problem Running Model with GPU (#575, Jul 15, 2025)
- problem with CHROMA_SETTINGS (#574, Jul 15, 2025)
- localGPT can't answer the question in my customization file! (#576, Jul 15, 2025)
- Error while running Mistral-7B-Instruct-v0.1-GPTQ (#573, Jul 15, 2025)
- float division by zero (#571, Jul 15, 2025)
- Anyone achieved to run it with Windows on GPU ? (#570, Jul 15, 2025)
- Why different models store different paths? (#568, Jul 15, 2025)
- Why is there such a big difference in response speed between different models? (#567, Jul 15, 2025)
- NVIDIA Driver on your system is too OLD ---> alternatively go pytorch version (#569, Jul 15, 2025)
- Isolated indices for specified folder of source documents. (#565, Jul 15, 2025)
- Recommended PC specs (#561, Jul 15, 2025)
- torch.cuda.OutOfMemoryError: CUDA out of memory but it is a bug (#560, Jul 15, 2025)
- protobuff==3.20.0 conflict stopping from installing requirements (#562, Jul 15, 2025)
- Ingest crashes on MacBook (#557, Jul 15, 2025)
- Ingestion crashes upon encountering a corrupted / invalid Zip file (#556, Jul 15, 2025)
- dockerfile broken (#554, Jul 15, 2025)
- Token content is exceeded when making more than 3 queries in succession (#558, Jul 15, 2025)
- database is locked (#551, Jul 15, 2025)
- How could we add the streaming support to enhance the output effect? (#546, Jul 15, 2025)
- 100% cpu usage when running or ingesting, causes crashes. (#547, Jul 15, 2025)
- Not more than 8 vCPUs ? (#545, Jul 15, 2025)
- chunking doesnt work on csv data (#548, Jul 15, 2025)
- Keeps re-downloading entire model every new session (#544, Jul 15, 2025)
- Predicted wrongly based on the questions (#543, Jul 15, 2025)
- GPTQ models are not loaded while running run_localgpt.py (#407, Jul 15, 2025)
- It is too slow to run on colab with T4 GPU (#409, Jul 15, 2025)
- Is there a way to use plugin such as Wolfram with llama-2? (#398, Jul 15, 2025)
- Issue for loading the quantized model (#406, Jul 15, 2025)
- not accurate - very random asnwer (#404, Jul 15, 2025)
- Error while running pip install requirements.txt (#397, Jul 15, 2025)
- How can I run the UI in Google Colab? (#395, Jul 15, 2025)
- GPU not being used (#391, Jul 15, 2025)
- how to turn off this? (#392, Jul 15, 2025)
- model inference is pretty slow (#394, Jul 15, 2025)
- Why is the interference so slow on cuda using GPU RTX4090? Is it normal? (#390, Jul 15, 2025)
- run_localGPT_API.py file errors. (#381, Jul 15, 2025)
- CUDA Out of Memory Error when asking more than 4 questions on Google Colab T4 (#384, Jul 15, 2025)
- AttributeError: 'Llama' object has no attribute 'ctx' (#383, Jul 15, 2025)
- Is possible to use huggingface/text-generation-inference as inference for llm (#336, Jul 15, 2025)
- Add a prefix sentence in all the batches (#339, Jul 15, 2025)
- not scalable - is it limitation (#379, Jul 15, 2025)
- Not able to run run_localGPT_API.py (#330, Jul 15, 2025)
- auto_gpt not found (#328, Jul 15, 2025)
- Reduce GGML RAM consumption (#329, Jul 15, 2025)
- I am truing to run API but getting this error is there any solution? (#327, Jul 15, 2025)
- how to speed up response (#321, Jul 15, 2025)
- I followed the instructions in the setup but it always defaults to CPU (#318, Jul 15, 2025)
- FreeWilly2 support ?? ? ? ? (#323, Jul 15, 2025)
- Petals integration? (#313, Jul 15, 2025)
- run_localGPT_API.py reset my DB (#311, Jul 15, 2025)
- Should SOURCE_DOCUMENTS folder only contain NEW documents? (#309, Jul 15, 2025)
- stop HuggingFaceInstructEmbeddings from loading all the base model bins (#303, Jul 15, 2025)
- Response Is Very Slow. (#300, Jul 15, 2025)
- Input length greater than the max_length (#307, Jul 15, 2025)
- Requirements.txt OUTDATED! (#306, Jul 15, 2025)
- issue if config.json has a different name (#297, Jul 15, 2025)
- Language model not willing to give "legal advices" (#299, Jul 15, 2025)
- size limit for ingest.py (#298, Jul 15, 2025)
- How to solve the problem with "torch.cuda.OutOfMemoryError"? (#293, Jul 15, 2025)
- RX 580 8GBs are useless for AI or can i use it somehow ? (#290, Jul 15, 2025)
- Xformers is not installed correctly (#296, Jul 15, 2025)
- Windows / CUDA multiple issues (#287, Jul 15, 2025)
- model download "read timed out" (#289, Jul 15, 2025)
- PC freezes with `python ingest.py --device_type cpu` and XMP active (#288, Jul 15, 2025)
- Llama.generate: prefix-match hit (#282, Jul 15, 2025)
- CUDA out of memory. (#285, Jul 15, 2025)
- OSError: Can't load tokenizer for 'TheBloke/Llama-2-7B-Chat-GGML'. (#280, Jul 15, 2025)
- Maybe a Conflict with Click Lib (#275, Jul 15, 2025)
- Not able to install llma-ccp-python the whole installation exit out (#276, Jul 15, 2025)
- Issue executing "run_langest_commands" using only python instead of python3 (#269, Jul 15, 2025)
- Bitsandbytes does not work in Windows (#265, Jul 15, 2025)
- Different language output. (#263, Jul 15, 2025)
- window 10 ingest faile (#267, Jul 15, 2025)
- Syntax error in ingest.py - type annotations must be enclosed in quotes (#214, Jul 15, 2025)
- FileNotFoundError: No files were found inside SOURCE_DOCUMENTS? (#212, Jul 15, 2025)
- where is docs_path? (#211, Jul 15, 2025)
- Loading GPTQ tokenizer takes a long time (#208, Jul 15, 2025)
- CUDA out of memory error (#207, Jul 15, 2025)
- FileNotFoundError (#191, Jul 15, 2025)
- Documentation for building and installing project dependencies. (#190, Jul 15, 2025)
- ERROR: Could not find a version that satisfies the requirement langchain? (#203, Jul 15, 2025)
- Forever to load (#182, Jul 15, 2025)
- Support sub directories in ingestion (#179, Jul 15, 2025)
- I got the same error message as below, eager to get some assistance from the owner. Thanks (#174, Jul 15, 2025)
- Handling ingestion file types (#171, Jul 15, 2025)
- JSONLoader (#175, Jul 15, 2025)
- Run UI on CPU (#168, Jul 15, 2025)
- ingest failes on Windows 11 (#169, Jul 15, 2025)
- IndexError: list index out of range (#159, Jul 15, 2025)
- Can I restrict the program to only one source document (#162, Jul 15, 2025)
- Add .docx file support for ingest (#157, Jul 15, 2025)
- Programming Language Support for Documents (#165, Jul 15, 2025)
- Answers aren't exclusive to our docs using the UI (#155, Jul 15, 2025)
- Not working on Amazon EC2 linux machine (#154, Jul 15, 2025)
- AssertionError: Torch not compiled with CUDA enabled (#156, Jul 15, 2025)
- Does LocalGPT support Chinese or Japanese? (#85, Jul 15, 2025)
- ingest.py is not running / mbp16 m1 (#81, Jul 15, 2025)
- gradio demo (#74, Jul 15, 2025)
- Integrate permissively licensed open_llama model (#71, Jul 15, 2025)
- ERROR: Failed building wheel for llama-cpp-python (#78, Jul 15, 2025)
- chromadb installation error [original title in Chinese] (#66, Jul 15, 2025)
- load INSTRUCTOR_Transformer Terminated (#68, Jul 15, 2025)
- xformers can't load C++/CUDA extensions (#64, Jul 15, 2025)
- charmap codec error for input file (#57, Jul 15, 2025)
- Option for usage of other AI models (or maybe just 13-b) (#63, Jul 15, 2025)
- Can you use the same convention for model names as the TextGeneration-webUI project (#56, Jul 15, 2025)
- multi folders (#53, Jul 15, 2025)
- WSL doesn't seem to work (gpu or cpu) - always results in "Killed" (#51, Jul 15, 2025)
- Vulnerability in protobuf 3.20.0 (#55, Jul 15, 2025)
- Process is Killed on CPU (#45, Jul 15, 2025)
- ingest.py does not recognize html documents (#47, Jul 15, 2025)
- GUI support etc. (#48, Jul 15, 2025)
- You could move the C++ compiler notice further up, it's quite a common issue. (#40, Jul 15, 2025)
- Feature request (#41, Jul 15, 2025)
- Error when running run_localGPT.py with the --device_type cpu flag (#42, Jul 15, 2025)
- Is there going to be option to run GGLM models in future? (#34, Jul 15, 2025)
- Can other models and embeddings be used? (#39, Jul 15, 2025)
- Computer hardware fleet [original title in French: "Parc informatique"] (#33, Jul 15, 2025)
- AssertionError: Torch not compiled with CUDA enabled (#32, Jul 15, 2025)
- RuntimeError :out of memory (#30, Jul 15, 2025)
- Program is running on CPU while set on GPU (#22, Jul 15, 2025)
- Other llm models? (#23, Jul 15, 2025)
- running out of resources? (#25, Jul 15, 2025)
- Running on google colab (#27, Jul 15, 2025)
- AssertionError: Torch not compiled with CUDA enabled (#16, Jul 15, 2025)
- Running localGPT (#21, Jul 15, 2025)
- Security Policy issues on Ubuntu 23.04. (#18, Jul 15, 2025)
- HuggingFace timeout (#15, Jul 15, 2025)
- Running successfully but not on my 1090TI GPU (#14, Jul 15, 2025)
- This is the error i get when i start ingest.py (#10, Jul 15, 2025)
- Bug on Ubuntu 22: software doesnt work (#9, Jul 15, 2025)
- How to improve performance? (#5, Jul 15, 2025)
- I get this error (#2, Jul 15, 2025)
- Cuda mismatch between installed and PyTorch causing AutoGPTQ error (#150, Jul 15, 2025)
- only one GPU working (#148, Jul 15, 2025)
- The answer of the query not accurate (#140, Jul 15, 2025)
- is:issue TypeError: issubclass() arg 1 must be a class (#144, Jul 15, 2025)
- pip subprocess to install build dependencies did not run successfully. (#134, Jul 15, 2025)
- not found instructor.py (#125, Jul 15, 2025)
- RuntimeError: Found no NVIDIA driver on your system (#137, Jul 15, 2025)
- Getting "AssertionError: Torch not compiled with CUDA enabled" (#124, Jul 15, 2025)
- Add only new documents when re-ingesting + add progress bar for ingest process (#120, Jul 15, 2025)
- So many issues i need tissues (#108, Jul 15, 2025)
- Please integrate your great Anki Flashcard generator (#110, Jul 15, 2025)
- Code is not using my GPU (#103, Jul 15, 2025)
- The computer encountered a blue screen while running the script "run_localGPT.py". (#107, Jul 15, 2025)
- I cannot ingest documents (#101, Jul 15, 2025)
- process killed (#99, Jul 15, 2025)
- python run_localGPT.py --device-type=cpu crashing shell. no warnings! (#97, Jul 15, 2025)
- where is the model downloaded, and what model versions are compatible? (#96, Jul 15, 2025)
- How to load a quantized model ? (#92, Jul 15, 2025)
- UserWarning on entering a query (#86, Jul 15, 2025)
- I am using mac os and getting this error despite having source documents in the folder (#260, Jul 15, 2025)
- AutoGPTQForCausalLM has changed API? (#262, Jul 15, 2025)
- Can we integrate API with oobabooga textui somehow ? (#258, Jul 15, 2025)
- Would you add Chinese README document for this project? (#256, Jul 15, 2025)
- ValueError: You are using a deprecated configuration of Chroma. (#252, Jul 15, 2025)
- How to globalise qa so that i can query using function?? (#254, Jul 15, 2025)
- NameError: name 'autogptq_cuda_256' is not defined (#251, Jul 15, 2025)
- bug:ValueError: Requested tokens (4164) exceed context window of 2048 (#255, Jul 15, 2025)
- Adding a separate config.yml (YAML) file instead of changing in run_localgpt.py (#247, Jul 15, 2025)
- Not clear how to change Embeddings (#246, Jul 15, 2025)
- Traceback (#243, Jul 15, 2025)
- ingest fails on Windows 10 (#238, Jul 15, 2025)
- Bug using GGML models (#239, Jul 15, 2025)
- CUDA Setup failed despite GPU being available (#241, Jul 15, 2025)
- Support llama.cpp CUBlas (#242, Jul 15, 2025)
- Number of source documents strange (#234, Jul 15, 2025)
- Does this project support Chinese? (#232, Jul 15, 2025)
- Failed to import transformers.models.t5.modeling_t5 (#230, Jul 15, 2025)
- Is 65B model the way to go? (#222, Jul 15, 2025)
- result with csv is too bad why? (#224, Jul 15, 2025)
- How much data can be digested (#223, Jul 15, 2025)
- which is the best model 7b for chat with document? (#216, Jul 15, 2025)
- how to use many csv file in the same time ? (#215, Jul 15, 2025)
- CSV file in the SOURCE_DOCUMENTS is not ingested in to DB (#217, Jul 15, 2025)
- Using OpenAI API specification (#378, Jul 15, 2025)
- ingest.py is stuck - no error (#374, Jul 15, 2025)
- Model - Information out of documents (#370, Jul 15, 2025)
- DB Folder is missing while running run_localGPT_API.py (#377, Jul 15, 2025)
- blank answer to every question with assitant model (#371, Jul 15, 2025)
- data privacy (#365, Jul 15, 2025)
- Building wheel for auto-gptq (setup.py) ... error (#366, Jul 15, 2025)
- [Enhancement] Enable folder recursion in ingest.py (#368, Jul 15, 2025)
- ValueError: Requested tokens (4257) exceed context window of 4096 (#364, Jul 15, 2025)
- How can I load a locally existing model instead of downloading it again from Hugging Face? (#362, Jul 15, 2025)
- NoneType' object has no attribute 'group' (#358, Jul 15, 2025)
- How do i fix this issue? It is just stuck here (#353, Jul 15, 2025)
- ggml_new_tensor_impl: not enough space in the scratch memory pool (#354, Jul 15, 2025)
- ingest.py (#359, Jul 15, 2025)
- 24G GPU memory is NOT enough to fun localGPT??? (#349, Jul 15, 2025)
- How to improve localGPT performance? (#350, Jul 15, 2025)
- Unable to load GGML (#352, Jul 15, 2025)
- Error while quering the model (#342, Jul 15, 2025)
- Issue while trying to ask the model a question. (#348, Jul 15, 2025)
- ERROR - chroma.py:129 - Chroma collection langchain contains fewer than 2 elements. (#343, Jul 15, 2025)
- it is not restrict to the given pdf (#341, Jul 15, 2025)
- Support for Qwen-7B-Chat (#340, Jul 15, 2025)
- Ingesting json files (#451, Jul 15, 2025)
- Running on docker produces gguf_init_from_file: invalid magic number 67676a74 (#447, Jul 15, 2025)
- llama_tokenize_with_model: too many tokens (#441, Jul 15, 2025)
- I am running on CPU and I have 6.4GB worth of file (#448, Jul 15, 2025)
- Could not load Llama model from path: xxxx/llama-2-7b-chat.ggmlv3.q4_0.bin (#438, Jul 15, 2025)
- It works! But everything slow!!! (#437, Jul 15, 2025)
- Change verbosity (reponse length) (#439, Jul 15, 2025)
- Unable to build wheel for llama-cpp-python (#433, Jul 15, 2025)
- Providing an uploading functionality on streamlit for users ?I can help with that ? (#435, Jul 15, 2025)
- Tokens running out of space (#432, Jul 15, 2025)
- Getting not "not enough space in the buffer" (#434, Jul 15, 2025)
- how to ingest files from subfolders in source_documents? (#431, Jul 15, 2025)
- Support for Inter iRIS xe GPU (#429, Jul 15, 2025)
- Connection Refused Error using the UI (#427, Jul 15, 2025)
- Setting `pad_token_id` to `eos_token_id`:2 for open-end generation. (#425, Jul 15, 2025)
- Disable helpful answers? (#420, Jul 15, 2025)
- a question about intstall localgpt (#426, Jul 15, 2025)
- UI errors (#419, Jul 15, 2025)
- how to let the language model know about previous exchanges when using the API? (#417, Jul 15, 2025)
- chromadb keep adding more docs to indexes (#418, Jul 15, 2025)
- Input query length to LLM getting more than 'max_length' (#412, Jul 15, 2025)
- Mac Metal Can run, but output nothing. (#416, Jul 15, 2025)
- Chat Flow UI? (#413, Jul 15, 2025)
- Dimensionality of (768) does not match index dimensionality (384) (#411, Jul 15, 2025)
- GPU shows performance slow- Integration for api generations (#410, Jul 15, 2025)
- doesn't take in all data for llama 2 7b (#659, Jul 15, 2025)
- Error everytime I interact with the UI (#657, Jul 15, 2025)
- Failed to build docker image (#653, Jul 15, 2025)
- pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain llm (#656, Jul 15, 2025)
- Self-contained docker image (#658, Jul 15, 2025)
- AttributeError: type object 'hnswlib.Index' has no attribute 'file_handle_count' (#650, Jul 15, 2025)
- Add support for AWQ models (#645, Jul 15, 2025)
- Unable to use Mistral model (#651, Jul 15, 2025)
- can i put chinese file to SOURCE_DOCUMENTS? (#644, Jul 15, 2025)
- i can't update the files in the source_documents (#641, Jul 15, 2025)
- Ollama Support (#638, Jul 15, 2025)
- Multiple Request in APi (#637, Jul 15, 2025)
- Adding chat history gives wrong answer (#635, Jul 15, 2025)
- issue with installation (#633, Jul 15, 2025)
- will there be any option to modify the UI,or to modify the front end content? (#631, Jul 15, 2025)
- warnings.warn("Can't initialize NVML") (#632, Jul 15, 2025)
- Very slow inference with CPU (#622, Jul 15, 2025)
- Error while running the ingest.py (#623, Jul 15, 2025)
- regarding chroma (#620, Jul 15, 2025)
- Regarding ingestion (#618, Jul 15, 2025)
- Current build version wont ingest PDF's (#616, Jul 15, 2025)
- Integrate Chainlit UI (#613, Jul 15, 2025)
- Error running run_localGPT.py with Mac M2 (#501, Jul 15, 2025)
- Encounter an error when run !python run_localGPT.py in google colab (#499, Jul 15, 2025)
- Ingest via WebGUI doesnt work (#492, Jul 15, 2025)
- GPU Issue (#497, Jul 15, 2025)
- should try the next file (#495, Jul 15, 2025)
- [Question] sqlite3.OperationalError: too many SQL variables (#489, Jul 15, 2025)
- The model does not understand that it is only the context it has to look for to find answer (#486, Jul 15, 2025)
- Docker file update (#488, Jul 15, 2025)
- Can't run Web UI (#483, Jul 15, 2025)
- How to use branches models ? (#476, Jul 15, 2025)
- Run interactive session of localGPT in jupyter notebook (#482, Jul 15, 2025)
- Having trouble download pytorch_model.bin (#474, Jul 15, 2025)
- Answer is ok but very slow (#470, Jul 15, 2025)
- Chat history between program runs? (#465, Jul 15, 2025)
- NetworkConnectionError !! (#469, Jul 15, 2025)
- GPU RAM is not used (#466, Jul 15, 2025)
- how to use GGUF models ? (#460, Jul 15, 2025)
- How to add Support for TheBloke/CodeLlama-13B-fp16 (#463, Jul 15, 2025)
- Ingest: UnicodeDecodeError (#457, Jul 15, 2025)
- very slow GPU compared with CPU (#456, Jul 15, 2025)
- Unsupported version of sqlite3 (#458, Jul 15, 2025)
- FileNotFound on Debian Server when running app.py (Unable to download, find the model) (#452, Jul 15, 2025)
- OperationalError: too many SQL variables when the quantity of document is large. (#540, Jul 15, 2025)
- Confused about model selection (already stored locally) (#539, Jul 15, 2025)
- Is it normal to take nearly 40 seconds to handle one prompt on GPU 4090? (#538, Jul 15, 2025)
- How to speed up answer time ? (#537, Jul 15, 2025)
- Does localGPT have any requirements for document formats? (#534, Jul 15, 2025)
- Is there a way to change language for LLama 2? (#530, Jul 15, 2025)
- none is not an allowed value (type=type_error.none.not_allowed) (#525, Jul 15, 2025)
- Embedding dimension 1024 does not match collection dimensionality 768 (#529, Jul 15, 2025)
- RuntimeError: CUDA error: device-side assert triggered with Wizard-Vicuna-7B-Uncensored-GPTQ (#527, Jul 15, 2025)
- ValueError: check_hostname requires server_hostname (#522, Jul 15, 2025)
- BLAS = 0 Always (#520, Jul 15, 2025)
- I ran the code successfully but can't receive the answer (#518, Jul 15, 2025)
- How to use local embedding (#517, Jul 15, 2025)
- ingest.py: sqlite3.OperationalError: too many SQL variables (#519, Jul 15, 2025)
- Can not load ggml/gguf models (#513, Jul 15, 2025)
- cannot get it to work on linux (#512, Jul 15, 2025)
- GGUF models just ignore GPU but using CPU to inference only, slowly response (#514, Jul 15, 2025)
- TheBloke/wizard-vicuna-13B-GGML, ValueError: too many values to unpack (expected 2) (#511, Jul 15, 2025)
- Could not find model in TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ (#510, Jul 15, 2025)
- AMD (#508, Jul 15, 2025)
- error: zsh: illegal hardware instruction python run_localGPT.py --device_type cpu (#509, Jul 15, 2025)
- DB folder doesn't appear and is empty when manually created (#502, Jul 15, 2025)
- ingest.py line 51 name parameter is not defined. (#610, Jul 15, 2025)
- Error In generating answer , prompt template (#611, Jul 15, 2025)
- How many epochs and hidden layers it is using? (#612, Jul 15, 2025)
- Error on run_localGPT.py --device_type mps (#608, Jul 15, 2025)
- Tesseract is an undeclared dependency (#605, Jul 15, 2025)
- Some files causing ingestion problems (#604, Jul 15, 2025)
- localGPT for local email (#601, Jul 15, 2025)
- 0 chunks indexError: list index is out of range (#602, Jul 15, 2025)
- Ingesting XML file (#603, Jul 15, 2025)
- The model 'LlamaGPTQForCausalLM' is not supported for text-generation. (#600, Jul 15, 2025)
- Prompt Framing (#596, Jul 15, 2025)
- API is not able to handle multiple requests at the sametime (#598, Jul 15, 2025)
- logging.INFO("If you were using GGML model, LLAMA-CPP Dropped Support, Use GGUF Instead") (#594, Jul 15, 2025)
- Error when running localgpt.py (#590, Jul 15, 2025)
- Use pipenv instead of conda (on Ubuntu 22.04.03) (#591, Jul 15, 2025)
- BLAS = 1 but no GPU Usage (#589, Jul 15, 2025)
- TypeError: mistral isn't supported yet. (#588, Jul 15, 2025)
- Add Document from localGPT UI is not working (#587, Jul 15, 2025)
- error on run_localGPT.py: none is not an allowed value (type=type_error.none.not_allowed) (#584, Jul 15, 2025)
- llama2 70B running failed (#583, Jul 15, 2025)
- Ingest fails: Resource punkt not found. (#586, Jul 15, 2025)
- Docker keep interrupting (#578, Jul 15, 2025)
- How do access the localGPT API in parallel? (#582, Jul 15, 2025)
- how to configure GPU selection (#577, Jul 15, 2025)
- PowerInfer integration (#699, Jul 15, 2025)
- Type=Value Error whenever we load any other LLM model rather than Llama and Mistral (#697, Jul 15, 2025)
- IndexError: list index out of range (#694, Jul 15, 2025)
- Terminal VS Interface GPU problem (#693, Jul 15, 2025)
- /lib64/libgcc_s.so.1: version `GCC_7.0.0' not found libarrow.so.1400 (#691, Jul 15, 2025)
- 500 Internal Server Error (#689, Jul 15, 2025)
- Phi-2 support (#692, Jul 15, 2025)
- Answer is not being generated. (#682, Jul 15, 2025)
- UI interface is confusing (#683, Jul 15, 2025)
- Error in running this run_localGPT.py (#688, Jul 15, 2025)
- run_localGPT_API.py is reporting error sqlite3 - The process cannot access (#681, Jul 15, 2025)
- ingest.py script is not exiting when I am trying to upload multiple documents (#680, Jul 15, 2025)
- Add Document from localGPT UI is not working (#675, Jul 15, 2025)
- How to run 70 B llama model (#678, Jul 15, 2025)
- Issue while running UI (#676, Jul 15, 2025)
- Mistral 7b model not giving expected result every time (#674, Jul 15, 2025)
- 'Collection' object has no attribute '__pydantic_private__',how to solve it? (#672, Jul 15, 2025)
- Ingested pdf document is not recognized (#671, Jul 15, 2025)
- Add api functionality for memgpt can call localgpt api (#669, Jul 15, 2025)
- embedded model (#663, Jul 15, 2025)
- Does this support non English language ? (#666, Jul 15, 2025)
- TypeError: 'NoneType' object is not callable in llama.py - line 1597 (#661, Jul 15, 2025)
- Internal Server Issue (#660, Jul 15, 2025)
- Can I reuse the models which I have running locally via ollama service ? (#795, Jul 15, 2025)
- Small pdf file, simple question => inference takes a lot of time! (#805, Jul 15, 2025)
- Improved metadata at ingest (#800, Jul 15, 2025)
- Unable to execute 'python run_localGPT.py --device_type cpu' (#794, Jul 15, 2025)
- If I want to improve the Recall access the reranker model ,how can I do it? (#792, Jul 15, 2025)
- run_localGPT_API (#788, Jul 15, 2025)
- error in /opt/nvidia/nvidia_entrypoint.sh (#787, Jul 15, 2025)
- I encountered a mistake when I executed run_localGPT_API.py (#785, Jul 15, 2025)
- auto-gptq and auto awq is breaking in requiremetns.txt (#784, Jul 15, 2025)
- Extra Options with run_localGPT_API.py? (#786, Jul 15, 2025)
- problem when ingesting (just CPU) (#783, Jul 15, 2025)
- How do I add memory to chat-zero-shot-react-description? (#775, Jul 15, 2025)
- Mistral not supported (#778, Jul 15, 2025)
- cpp-llama-python not found. (#782, Jul 15, 2025)
- Getting error when I try to python ingest.py (#766, Jul 15, 2025)
- Why its sharing questions and data from different browser sessions? (#774, Jul 15, 2025)
- Wrong answer (#764, Jul 15, 2025)
- Not using the GPU (#768, Jul 15, 2025)
- Error when starting python ingest.py (#762, Jul 15, 2025)
- No module named 'triton' (#761, Jul 15, 2025)
- Docker not building: ModuleNotFoundError: No module named 'utils' (#763, Jul 15, 2025)
- Potential Command Injection via Subprocess Call with shell=True in crawl.py (#842, Jul 15, 2025)
- installing requirements is VERY slow and has not finished so far after several hours (#837, Jul 15, 2025)
- Installing depencies in conda takes forever (#834, Jul 15, 2025)
- Conflicting Dependencies when using it with Groq (#836, Jul 15, 2025)
- localGPT install error (#841, Jul 15, 2025)
- Installation with Docker image and Windows 11 is very slow (#830, Jul 15, 2025)
- How to solve ValueError: Dependencies for InstructorEmbedding not found. Error (#833, Jul 15, 2025)
- Issue Installing llama-cpp-python (#831, Jul 15, 2025)
- Unable to run run_localGPT.py (#825, Jul 15, 2025)
- Remove extra hyphen in google-generativeai (#826, Jul 15, 2025)
- Support for images in inputs (#824, Jul 15, 2025)
- Missing context if total number of chunks exceed 100 (#818, Jul 15, 2025)
- localGPT exits back to the command prompt after I ask a query (#821, Jul 15, 2025)
- Difference between LocalGPT and GPT4All (#820, Jul 15, 2025)
- Truncation not explicitly mention (#813, Jul 15, 2025)
- cmake missing from requirements (#817, Jul 15, 2025)
- IndexError: list index out of range (#810, Jul 15, 2025)
- Github Security Lab Vulnerability Report (#809, Jul 15, 2025)
- Can ingest the source documents. (#806, Jul 15, 2025)
- Poor performance for PDF QA (#808, Jul 15, 2025)
6 Issues opened by 6 people
- [V2] Adding more refence docs (#858, opened Jul 16, 2025)
- Error while getting response (#857, opened Jul 16, 2025)
- Docker connection issue between rag-api container and local Ollama service (#856, opened Jul 16, 2025)
- Backend fails with ModuleNotFoundError: No module named 'cgi' (#854, opened Jul 16, 2025)
- Windows Powershell doesn't like Emoji getting logged out of the box. V2 branch (#850, opened Jul 15, 2025)
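The "No module named 'cgi'" report above (#854) is typical of running an older backend on Python 3.13+, where the stdlib `cgi` module was removed (PEP 594). A quick way to diagnose this class of failure is to probe for the module before the backend imports it; `has_module` below is a hypothetical helper for illustration:

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if the named module can be imported in this interpreter.

    Useful for diagnosing stdlib removals such as 'cgi' on Python 3.13+,
    where affected code fails with ModuleNotFoundError at import time.
    """
    return importlib.util.find_spec(name) is not None

print("cgi available:", has_module("cgi"))
```

If the module is missing, the usual remedies are pinning an older Python or installing a backport package, depending on what the backend supports.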