# MELODI
Use local Large Language Models (LLMs) while monitoring energy usage. This project allows you to run prompts on LLMs and measure the energy consumption during the inference process.
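Conceptually, the energy reported for an inference run is the time integral of the sampled power draw. The sketch below is illustrative only (the function name and sampling format are assumptions, not the project's actual API); it shows how a series of power samples in watts becomes energy in joules:

```python
# Illustrative only: the project collects power samples via external tools;
# this sketch shows how sampled power (watts) becomes energy (joules)
# using the trapezoidal rule.
def energy_joules(timestamps, watts):
    """timestamps in seconds, watts sampled at those instants."""
    return sum(
        (watts[i] + watts[i + 1]) / 2 * (timestamps[i + 1] - timestamps[i])
        for i in range(len(timestamps) - 1)
    )
```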
To install the required dependencies and set up the project, follow these steps:

- Clone the repository:

```shell
git clone https://github.com/ejhusom/llm-energy-consumption.git
cd llm-energy-consumption
```

- Install Ollama.
- Install and configure `nvidia-smi` and `scaphandre` for monitoring (if not already installed).
- Ensure that the RAPL file (used to measure power consumption) can be read without root access (see CodeCarbon GitHub Issue #224):

```shell
sudo chmod -R a+r /sys/class/powercap/intel-rapl
```
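To verify that the permission change worked, you can try reading a RAPL energy counter directly. This helper is a sketch, not part of the project's code, and the exact sysfs path varies between machines:

```python
import os

# Assumed path for the first RAPL package domain; adjust for your machine.
RAPL_PATH = "/sys/class/powercap/intel-rapl/intel-rapl:0/energy_uj"

def read_rapl_energy_uj(path=RAPL_PATH):
    """Return the cumulative package energy in microjoules, or None if
    RAPL is unavailable or unreadable (e.g. permissions not yet relaxed)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, PermissionError, NotADirectoryError):
        return None
```

If this returns `None` after the `chmod`, the counter path likely differs on your system.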
Ensure that no processes other than your LLM service are using the GPU. If necessary, move the display service to the integrated graphics:

- Edit the X11 configuration:

```shell
sudo nano /etc/X11/xorg.conf
```

- Paste:

```
Section "Device"
    Identifier "intelgpu0"
    Driver "intel"    # Use the Intel driver
EndSection
```

- Restart the display manager:

```shell
sudo systemctl restart display-manager
```
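One quick way to confirm that the GPU is free is to list active compute processes with `nvidia-smi`. This helper is an illustrative sketch, not part of the project:

```python
import shutil
import subprocess

def gpu_compute_processes():
    """Return a list of (pid, process_name) tuples for processes using the
    NVIDIA GPU, or None if nvidia-smi is not installed on this machine."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-compute-apps=pid,process_name",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(line.split(", ")) for line in out.splitlines() if line]
```

Ideally the list should be empty (or contain only your LLM service) before starting a measurement run.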
To run the script that prompts LLMs and monitors energy consumption, use the following command:

```shell
python3 LLMEC.py [PATH_TO_DATASET]
```
Or use the tool programmatically like this:

```python
from LLMEC import LLMEC

# Create an instance of LLMEC
llm_ec = LLMEC(config_path='path/to/config.ini')

# Run a prompt and monitor energy consumption
df = llm_ec.run_prompt_with_energy_monitoring(
    prompt="How can we use Artificial Intelligence for a better society?",
    save_power_data=True,
    plot_power_usage=True,
)
```
The script uses a configuration file for various settings. The default configuration file path is specified in the `config` module. Below are some of the configurable options:

- `llm_service`: The LLM service to use (default: `ollama`).
- `llm_api_url`: The API URL of the LLM service (default: `http://localhost:11434/api/chat`).
- `model_name`: The model name for the request (default: `mistral`).
- `verbosity`: Level of verbosity for logging (default: `0`).
Example configuration (`config.ini`):

```ini
[General]
llm_service = ollama
llm_api_url = http://localhost:11434/api/chat
model_name = mistral
verbosity = 1
```
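Options like these can be read with Python's standard `configparser`. The following is a hypothetical sketch of how such defaults might be applied; the `load_config` helper is illustrative, not the project's actual API:

```python
import configparser

# Defaults mirror the options documented above.
DEFAULTS = {
    "llm_service": "ollama",
    "llm_api_url": "http://localhost:11434/api/chat",
    "model_name": "mistral",
    "verbosity": "0",
}

def load_config(path="config.ini"):
    """Read the [General] section, falling back to defaults for any
    missing option (or for a missing file entirely)."""
    parser = configparser.ConfigParser()
    parser.read(path)  # a missing file is silently ignored
    section = parser["General"] if parser.has_section("General") else {}
    return {key: section.get(key, default) for key, default in DEFAULTS.items()}
```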
We have produced a dataset of energy consumption measurements for a diverse set of open-source LLMs. This dataset is available on Hugging Face Datasets: LLM Energy Consumption Dataset.
We welcome contributions! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes and commit them (`git commit -m 'Add new feature'`).
- Push to the branch (`git push origin feature-branch`).
- Create a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.
Maintained by Erik Johannes Husom. For any inquiries, please reach out via:
- Email: erik.johannes.husom@sintef.no
- GitHub: ejhusom