MELODI

Use local Large Language Models (LLMs) while monitoring energy usage. This project allows you to run prompts on LLMs and measure the energy consumption during the inference process.

Table of Contents

  • Installation
  • Usage
  • Configuration
  • Data
  • Contributing
  • License
  • Contact Information

Installation

To install the required dependencies and set up the project, follow these steps:

  1. Clone the repository:

    git clone https://github.com/ejhusom/llm-energy-consumption.git
    cd llm-energy-consumption

  2. Install Ollama.

  3. Install and configure nvidia-smi and scaphandre for power monitoring (if not already installed).

  4. Ensure that it is possible to read from the RAPL file (to measure power consumption) without root access (see CodeCarbon GitHub Issue #224); a quick check is sketched after this list:

    sudo chmod -R a+r /sys/class/powercap/intel-rapl

  5. Ensure that no processes other than your LLM service are using the GPU. If need be, move the display service to the integrated graphics:

    • sudo nano /etc/X11/xorg.conf
    • Paste:

      Section "Device"
          Identifier "intelgpu0"
          Driver "intel"    # Use the Intel driver
      EndSection

    • Restart the display manager: sudo systemctl restart display-manager
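
If the chmod in step 4 succeeded, the RAPL energy counter can be read like an ordinary file. The following minimal sketch (not part of MELODI; the zone path is a common default and may differ on your machine) samples the package-0 counter for one second and prints the average power:

    import time

    # Package-0 energy counter in microjoules; adjust the zone path if your
    # system exposes a different layout under /sys/class/powercap/.
    RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def read_energy_uj() -> int:
        with open(RAPL_FILE) as f:
            return int(f.read().strip())

    start = read_energy_uj()
    time.sleep(1.0)
    end = read_energy_uj()

    # The counter wraps at max_energy_range_uj; ignoring wraparound is fine
    # for a one-second sanity check.
    print(f"Average package power over 1 s: {(end - start) / 1e6:.2f} W")

If this prints a plausible wattage instead of raising PermissionError, power measurement should work without root.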

Usage

To run the script that prompts LLMs and monitors energy consumption, use the following command:

python3 LLMEC.py [PATH_TO_DATASET]

Or use the tool programmatically like this:

    from LLMEC import LLMEC

    # Create an instance of LLMEC
    llm_ec = LLMEC(config_path='path/to/config.ini')

    # Run a prompt and monitor energy consumption
    df = llm_ec.run_prompt_with_energy_monitoring(
        prompt="How can we use Artificial Intelligence for a better society?",
        save_power_data=True,
        plot_power_usage=True,
    )
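
Under the default settings (see Configuration below), the request above goes to Ollama's chat endpoint. The following sketch is not MELODI code; it assumes the default llm_api_url and model_name and uses Ollama's documented chat API to show the kind of request issued during inference:

    import requests

    # Defaults taken from the Configuration section below; adjust to your setup.
    OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

    payload = {
        "model": "mistral",
        "messages": [
            {"role": "user", "content": "How can we use Artificial Intelligence for a better society?"},
        ],
        "stream": False,  # return a single JSON object instead of a token stream
    }

    response = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=300)
    response.raise_for_status()
    print(response.json()["message"]["content"])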

Configuration

The script uses a configuration file for various settings. The default configuration file path is specified in the config module. Below are some of the configurable options:

  • llm_service: The LLM service to use (default: "ollama").
  • llm_api_url: The API URL of the LLM service (default: "http://localhost:11434/api/chat").
  • model_name: The model name for the request (default: "mistral").
  • verbosity: Level of verbosity for logging (default: 0).

Example configuration (config.ini):

    [General]
    llm_service = ollama
    llm_api_url = http://localhost:11434/api/chat
    model_name = mistral
    verbosity = 1
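
As a sketch of how these settings could be loaded (MELODI's own config module may differ), the standard-library configparser handles the file above directly:

    import configparser

    # Parse config.ini; fall back to the documented defaults for missing keys.
    config = configparser.ConfigParser()
    config.read("config.ini")

    general = config["General"]
    llm_service = general.get("llm_service", fallback="ollama")
    llm_api_url = general.get("llm_api_url", fallback="http://localhost:11434/api/chat")
    model_name = general.get("model_name", fallback="mistral")
    verbosity = general.getint("verbosity", fallback=0)

    print(llm_service, llm_api_url, model_name, verbosity)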

Data

We have produced a dataset of energy consumption measurements for a diverse set of open-source LLMs. This dataset is available on Hugging Face Datasets: LLM Energy Consumption Dataset.
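
The dataset link did not survive extraction, so the ID below is a hypothetical placeholder; replace it with the actual ID from the Hugging Face page. Loading follows the standard datasets-library pattern:

    from datasets import load_dataset

    # "ejhusom/llm-energy-consumption" is a hypothetical placeholder ID;
    # substitute the real dataset ID from the Hugging Face page linked above.
    ds = load_dataset("ejhusom/llm-energy-consumption")
    print(ds)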

Contributing

We welcome contributions! Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature-branch).
  3. Make your changes and commit them (git commit -m 'Add new feature').
  4. Push to the branch (git push origin feature-branch).
  5. Create a pull request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contact Information

Maintained by Erik Johannes Husom. For any inquiries, please reach out via GitHub.
