└─$ export OPENAI_API_KEY=vc-[redacted]
└─$ lwe --version
/home/alp/.local/bin/lwe version 0.22.2
└─$ lwe --debug
SchemaUpdater - DEBUG - Creating alembic config using .ini: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/backends/api/schema/alembic.ini
SchemaUpdater - DEBUG - Schema versioning initialized: True
SchemaUpdater - DEBUG - Initialized SchemaUpdater with database URL: sqlite:////home/alp/.local/share/llm-workflow-engine/profiles/default/storage.db
Database - DEBUG - The database schema exists.
SchemaUpdater - INFO - Current schema version for database: 4e642f725923
SchemaUpdater - INFO - Latest schema version: 4e642f725923
SchemaUpdater - INFO - Schema is up to date.
PresetManager - DEBUG - Loading presets from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/presets, /home/alp/.config/llm-workflow-engine/presets, /home/alp/.config/llm-workflow-engine/profiles/default/presets
PresetManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/presets
PresetManager - DEBUG - Loading YAML file: gpt-4o-mini.yaml
PresetManager - INFO - Successfully loaded preset: gpt-4o-mini
PresetManager - DEBUG - Loading YAML file: gpt-4-code-generation.yaml
PresetManager - INFO - Successfully loaded preset: gpt-4-code-generation
PresetManager - DEBUG - Loading YAML file: gpt-4-creative-writing.yaml
PresetManager - INFO - Successfully loaded preset: gpt-4-creative-writing
PresetManager - DEBUG - Loading YAML file: gpt-4-chatbot-responses.yaml
PresetManager - INFO - Successfully loaded preset: gpt-4-chatbot-responses
PresetManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/presets
PresetManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/presets
PresetManager - DEBUG - Loading YAML file: turbo-16k-code-generation.yaml
PresetManager - INFO - Successfully loaded preset: turbo-16k-code-generation
PresetManager - DEBUG - Loading YAML file: turbo.yaml
PresetManager - INFO - Successfully loaded preset: turbo
PresetManager - DEBUG - Loading YAML file: gpt-4-exploratory-code-writing.yaml
PresetManager - INFO - Successfully loaded preset: gpt-4-exploratory-code-writing
PluginManager - DEBUG - Plugin paths: ['/home/alp/.config/llm-workflow-engine/profiles/default/plugins', '/home/alp/.config/llm-workflow-engine/plugins', '/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins']
PluginManager - INFO - Scanning for package plugins
PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/profiles/default/plugins/provider_chat_openai.py
PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/plugins/provider_chat_openai.py
PluginManager - DEBUG - Searching for plugin file /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/provider_chat_openai.py
PluginManager - INFO - Loading plugin provider_chat_openai from /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/provider_chat_openai.py
PluginManager - DEBUG - Merging plugin plugins.provider_chat_openai config, default: {}, user: {}
PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/profiles/default/plugins/echo.py
PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/plugins/echo.py
PluginManager - DEBUG - Searching for plugin file /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/echo.py
PluginManager - INFO - Loading plugin echo from /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/echo.py
PluginManager - DEBUG - Merging plugin plugins.echo config, default: {'response': {'prefix': 'Echo'}}, user: {}
Echo - INFO - This is the echo plugin, running with backend: api
PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/profiles/default/plugins/examples.py
PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/plugins/examples.py
PluginManager - DEBUG - Searching for plugin file /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/examples.py
PluginManager - INFO - Loading plugin examples from /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/examples.py
PluginManager - DEBUG - Merging plugin plugins.examples config, default: {'confirm_overwrite': True, 'default_types': ['presets', 'templates', 'workflows', 'tools']}, user: {}
Examples - INFO - This is the examples plugin, running with profile dir: /home/alp/.config/llm-workflow-engine/profiles/default, examples root: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/examples, default types: ['presets', 'templates', 'workflows', 'tools'], confirm overwrite: True
WorkflowManager - DEBUG - Loading workflows from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflows, /home/alp/.config/llm-workflow-engine/workflows, /home/alp/.config/llm-workflow-engine/profiles/default/workflows
WorkflowManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflows
WorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/workflows
WorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/workflows
WorkflowManager - DEBUG - Loading workflows from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflows, /home/alp/.config/llm-workflow-engine/workflows, /home/alp/.config/llm-workflow-engine/profiles/default/workflows
WorkflowManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflows
WorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/workflows
WorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/workflows
ApiBackend - INFO - System message set to: You are a helpful assistant.
ApiBackend - DEBUG - Setting provider to: provider_chat_openai, with customizations: None, reset: False
ProviderManager - DEBUG - Attempting to load provider: provider_chat_openai
ProviderManager - DEBUG - Found provider: ProviderChatOpenai
ProviderManager - INFO - Successfully loaded provider: provider_chat_openai
ApiBackend - DEBUG - Setting model to: gpt-4o-mini
TemplateManager - DEBUG - Loading templates from dirs: /home/alp/.config/llm-workflow-engine/profiles/default/templates, /home/alp/.config/llm-workflow-engine/templates, /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/templates, /tmp/lwe-temp-templates
UserManager - DEBUG - Retrieving all Users
ApiBackend - DEBUG - Setting current user to me
ApiBackend - INFO - System message set to: You are a helpful assistant.
ApiBackend - DEBUG - Setting provider to: provider_chat_openai, with customizations: None, reset: False

Provide a prompt, or type /help or ? to list commands.

[New Conversation](0.7/131072/0): default
me@gpt-4o-mini 1> /model openai_api_base https://api.zanity.xyz/v1
Set openai_api_base to https://api.zanity.xyz/v1
[New Conversation](0.7/131072/0): default
me@gpt-4o-mini 1> /template edit-run deftemplate.md
TemplateManager - DEBUG - Ensuring template deftemplate.md exists
TemplateManager - DEBUG - Loading templates from dirs: /home/alp/.config/llm-workflow-engine/profiles/default/templates, /home/alp/.config/llm-workflow-engine/templates, /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/templates, /tmp/lwe-temp-templates
TemplateManager - DEBUG - Template deftemplate.md exists
ApiBackend - INFO - Setting up run of template: tmpojf352rh.md
TemplateManager - DEBUG - Rendering template: tmpojf352rh.md
ApiRepl - INFO - Running template
[New Conversation](0.7/131072/0): default
me@gpt-4o-mini 1>
[New Conversation](0.7/131072/0): default
me@gpt-4o-mini 1> test
ApiBackend - INFO - Starting 'ask' request
ApiBackend - DEBUG - Extracting activate preset configuration from request_overrides: {'print_stream': True, 'stream': True}
ApiRequest - DEBUG - Inintialized ApiRequest with input: test, default preset name: None, system_message: You are a helpful assistant., max_submission_tokens: 131072, request_overrides: {'print_stream': True, 'stream': True}, return only: False
ApiRequest - DEBUG - Extracting preset configuration from request_overrides: {'print_stream': True, 'stream': True}
ApiRequest - DEBUG - Using current provider
ProviderManager - DEBUG - Attempting to load provider: provider_chat_openai
ProviderManager - DEBUG - Found provider: ProviderChatOpenai
ProviderManager - INFO - Successfully loaded provider: provider_chat_openai
ToolManager - DEBUG - Loading tools from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools, /home/alp/.config/llm-workflow-engine/tools, /home/alp/.config/llm-workflow-engine/profiles/default/tools
ToolManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools
ToolManager - DEBUG - Loading tool file test_tool.py from directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools
ToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/tools
ToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/tools
ToolManager - DEBUG - Loading tool file reverse_content.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/tools
ToolManager - DEBUG - Loading tool file store_sentiment_and_topics.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/tools
ApiRequest - DEBUG - Built LLM based on preset_name: None, metadata: {'provider': 'provider_chat_openai'}, customizations: {'model_name': 'gpt-4o-mini', 'n': 1, 'temperature': 0.7, 'openai_api_base': 'https://api.zanity.xyz/v1'}, preset_overrides: {}
ApiRequest - DEBUG - Stripping messages over max tokens: 131072, initial token count: 19
ApiRequest - DEBUG - Calling LLM with message count: 2
ApiRequest - DEBUG - Building messages for LLM, message count: 2
ApiRequest - DEBUG - Started streaming request at 2024-12-20T17:23:08.850023
ApiRequest - DEBUG - Streaming with LLM attributes: {'model_name': 'gpt-4o-mini', 'model': 'gpt-4o-mini', 'stream': False, 'n': 1, 'temperature': 0.7, '_type': 'chat_openai'}
Hello! How can I assist you today?
ApiRequest - DEBUG - Stopped streaming response at 2024-12-20T17:23:15.772227
ApiBackend - DEBUG - LLM Response: {'content': 'Hello! How can I assist you today?', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'stop', 'model_name': 'gpt-4o-mini', 'system_fingerprint': 'fp_vxDuDMkGIv'}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-953eb75b-5a73-432b-9712-c36684f0cbcc', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}
ToolManager - DEBUG - Loading tools from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools, /home/alp/.config/llm-workflow-engine/tools, /home/alp/.config/llm-workflow-engine/profiles/default/tools
ToolManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools
ToolManager - DEBUG - Loading tool file test_tool.py from directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools
ToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/tools
ToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/tools
ToolManager - DEBUG - Loading tool file reverse_content.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/tools
ToolManager - DEBUG - Loading tool file store_sentiment_and_topics.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/tools
ConversationStorageManager - DEBUG - Storing conversation messages for conversation: new
ConversationManager - DEBUG - Retrieving User with id 1
ConversationManager - INFO - Added Conversation with title: None for User me
ConversationManager - DEBUG - Retrieving Conversation with id 5
MessageManager - DEBUG - Retrieving Conversation with id 5
MessageManager - INFO - Added Message with role: system, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-4o-mini, preset: for Conversation with id 5
ConversationManager - DEBUG - Retrieving Conversation with id 5
MessageManager - DEBUG - Retrieving Conversation with id 5
MessageManager - INFO - Added Message with role: user, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-4o-mini, preset: for Conversation with id 5
ConversationManager - DEBUG - Retrieving Conversation with id 5
MessageManager - DEBUG - Retrieving Conversation with id 5
MessageManager - INFO - Added Message with role: assistant, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-4o-mini, preset: for Conversation with id 5
ConversationStorageManager - INFO - Generating title for conversation 5
ConversationManager - DEBUG - Retrieving Conversation with id 5
MessageManager - DEBUG - Retrieving Messages for Conversation with id 5
ConversationManager - DEBUG - Retrieving Conversation with id 5
MessageManager - DEBUG - Retrieving Messages for Conversation with id 5
ConversationManager - DEBUG - Retrieving Conversation with id 5
MessageManager - DEBUG - Retrieving last Message for Conversation with id 5
ConversationManager - DEBUG - Retrieving Conversation with id 5
[Untitled](0.7/131072/33): default
me@gpt-4o-mini 2> ConversationStorageManager - DEBUG - Title generation LLM provider: provider_chat_openai, model: gpt-4o-mini
Exception in thread Thread-1 (gen_title_thread):
Traceback (most recent call last):
  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.11/threading.py", line 982, in run
    self._target(*self._args, **self._kwargs)
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/backends/api/conversation_storage_manager.py", line 217, in gen_title_thread
    result = llm.invoke(new_messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
    self.generate_prompt(
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
    raise e
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
    self._generate_with_cache(
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
    result = self._generate(
             ^^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 689, in _generate
    response = self.client.create(**payload)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 829, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
[Untitled](0.7/131072/33): default
me@gpt-4o-mini 2> ConversationManager - DEBUG - Retrieving Conversation with id 5
[Untitled](0.7/131072/33): default
me@gpt-4o-mini 2> GoodBye!