Is there a way I can change the OpenAI endpoint URL? #343


Hello,

I might be misreading the docs, but I'm not sure how to configure this.

I want to use a service called ConvoAI; they have an OpenAI-compatible API endpoint, which is this: https://api.convoai.tech/v1/

And then all I have to do is set my API key.

The thing is... I can't find a place where I can change the default OpenAI API URL endpoint (which I think is https://api.openai.com/v1/chat) to the one for ConvoAI.

Thanks in advance!


Most commands in LWE support tab completion.

/model [TAB] will show you all available options you can set on a particular provider.

The default chat_openai provider has options for setting both base url and api key.

Once you set those in the CLI, you can save those settings as a preset.
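
For example, in the REPL (openai_api_key as an option name also comes up later in this thread; use /model [TAB] to confirm the exact option names in your version):

/model openai_api_base https://api.convoai.tech/v1/
/model openai_api_key <your-convoai-key>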

@ForeverNooob

/model openai_api_base https://api.convoai.tech/v1/ - Yep, this did the trick, thanks!

Though I got some error afterwards; I'm not sure whether these two events are related: #265 (comment)

Answer selected by ForeverNooob

#265 has nothing to do with it.

The error is occurring in the thread that generates the title.

By default LWE uses OpenAI's GPT-3.5 to generate a short title for all conversations. If you don't have a valid OPENAI_API_KEY environment variable set for the OpenAI API, this would throw an error. My assumption is you have that environment variable set to the API key for convoai.

There is a config setting backend_options.title_generation.provider that allows you to override using the chat_openai provider to generate titles, but you'd need to set up another provider for that.

Given the proliferation of OpenAI compatible endpoints, it's probably best to add a backend_options.title_generation.preset config setting, which you could set to a configured preset to use whatever provider/model you want for title generation.


I took a look at adding backend_options.title_generation.preset, and it's just going to be too messy given the architecture of the code.

Probably the cleanest and easiest option for you is to set openai_api_key in your preset to the convoai API key, and set the OPENAI_API_KEY environment variable to a valid OpenAI API key.
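
Concretely, that combination would look something like this (key values are placeholders; the openai_api_key setting can then be saved into your preset):

# In your shell: a valid OpenAI key, used by the title-generation thread.
export OPENAI_API_KEY=sk-<valid-openai-key>

# In the LWE REPL: point chat requests at convoai.
/model openai_api_base https://api.convoai.tech/v1/
/model openai_api_key <your-convoai-key>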

You could also use backend_options.title_generation.provider to use a different provider; I did add some doc on that setting: https://github.com/llm-workflow-engine/llm-workflow-engine/blob/main/config.sample.yaml#L24-L28

LWE has quite a few other providers: https://github.com/orgs/llm-workflow-engine/repositories?q=provider

Finally, if you just want to go w/o titles, this will work to suppress the error:

Adjust config as follows:

backend_options:
  title_generation:
    provider: fake_llm
plugins:
  enabled:
    - provider_fake_llm

That's the testing class, and will just title everything "test response".

@ForeverNooob

Thanks! I chose to go without titles because I prefer to set my own, but despite that I'm still getting that error.

This is inside my ~/.config/llm-workflow-engine/config.yaml:

backend: api
backend_options:
  auto_create_first_user: None
  default_user: None
  default_conversation_id: None
  title_generation:
    provider: fake_llm
plugins:
  enabled:
    - echo
    - examples
    - provider_fake_llm

└─$ lwe --version
/home/me/.local/bin/lwe version 0.18.11

It's working fine for me:

backend_options:
  title_generation:
    provider: fake_llm
# other settings...
plugins:
  enabled:
    - provider_fake_llm

[New Conversation](0.5/204800/0): default
hunmonk@claude-chatbot-responses 10> say hello
Hello! How can I assist you today?
[Untitled](0.5/204800/20): default
hunmonk@claude-chatbot-responses 11>
test response(0.5/204800/20): default
hunmonk@claude-chatbot-responses 11>

You can see the test response title generated from the fake_llm provider.

Are you sure it's the same error, or is it another error? Have you run LWE with the debug flag and compared the backtrace?

@ForeverNooob

To me it looks like the same error yeah:

└─$  lwe --debugSchemaUpdater - DEBUG - Creating alembic config using .ini: /home/me/.local/lib/python3.10/site-packages/lwe/backends/api/schema/alembic.iniSchemaUpdater - DEBUG - Schema versioning initialized: TrueSchemaUpdater - DEBUG - Initialized SchemaUpdater with database URL: sqlite:////home/me/.local/share/llm-workflow-engine/profiles/default/storage.dbDatabase - DEBUG - The database schema exists.SchemaUpdater - INFO - Current schema version for database: cc8f2aecf9ffSchemaUpdater - INFO - Latest schema version: cc8f2aecf9ffSchemaUpdater - INFO - Schema is up to date.PresetManager - DEBUG - Loading presets from dirs: /home/me/.local/lib/python3.10/site-packages/lwe/presets, /home/me/.config/llm-workflow-engine/presets, /home/me/.config/llm-workflow-engine/profiles/default/presetsPresetManager - INFO - Processing directory: /home/me/.local/lib/python3.10/site-packages/lwe/presetsPresetManager - DEBUG - Loading YAML file: gpt-4-chatbot-responses.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-chatbot-responsesPresetManager - DEBUG - Loading YAML file: gpt-4-code-generation.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-code-generationPresetManager - DEBUG - Loading YAML file: gpt-4-creative-writing.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-creative-writingPresetManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/presetsPresetManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/profiles/default/presetsPluginManager - DEBUG - Plugin paths: ['/home/me/.config/llm-workflow-engine/profiles/default/plugins', '/home/me/.config/llm-workflow-engine/plugins', '/home/me/.local/lib/python3.10/site-packages/lwe/plugins']PluginManager - INFO - Scanning for package pluginsPluginManager - DEBUG - Searching for plugin file /home/me/.config/llm-workflow-engine/profiles/default/plugins/echo.pyPluginManager - DEBUG - Searching for plugin file /home/me/.config/llm-workflow-engine/plugins/echo.pyPluginManager - DEBUG - Searching for plugin file /home/me/.local/lib/python3.10/site-packages/lwe/plugins/echo.pyPluginManager - INFO - Loading plugin echo from /home/me/.local/lib/python3.10/site-packages/lwe/plugins/echo.pyPluginManager - DEBUG - Merging plugin plugins.echo config, default: {'response': {'prefix': 'Echo'}}, user: {}Echo - INFO - This is the echo plugin, running with backend: apiPluginManager - DEBUG - Searching for plugin file /home/me/.config/llm-workflow-engine/profiles/default/plugins/provider_chat_openai.pyPluginManager - DEBUG - Searching for plugin file /home/me/.config/llm-workflow-engine/plugins/provider_chat_openai.pyPluginManager - DEBUG - Searching for plugin file /home/me/.local/lib/python3.10/site-packages/lwe/plugins/provider_chat_openai.pyPluginManager - INFO - Loading plugin provider_chat_openai from /home/me/.local/lib/python3.10/site-packages/lwe/plugins/provider_chat_openai.pyPluginManager - DEBUG - Merging plugin plugins.provider_chat_openai config, default: {}, user: {}PluginManager - DEBUG - Searching for plugin file /home/me/.config/llm-workflow-engine/profiles/default/plugins/examples.pyPluginManager - DEBUG - Searching for plugin file /home/me/.config/llm-workflow-engine/plugins/examples.pyPluginManager - DEBUG - Searching for plugin file /home/me/.local/lib/python3.10/site-packages/lwe/plugins/examples.pyPluginManager - INFO - Loading plugin examples from /home/me/.local/lib/python3.10/site-packages/lwe/plugins/examples.pyPluginManager - DEBUG - Merging plugin plugins.examples 
config, default: {'confirm_overwrite': True, 'default_types': ['presets', 'templates', 'workflows', 'functions']}, user: {}Examples - INFO - This is the examples plugin, running with profile dir: /home/me/.config/llm-workflow-engine/profiles/default, examples root: /home/me/.local/lib/python3.10/site-packages/lwe/examples, default types: ['presets', 'templates', 'workflows', 'functions'], confirm overwrite: TrueWorkflowManager - DEBUG - Loading workflows from dirs: /home/me/.local/lib/python3.10/site-packages/lwe/workflows, /home/me/.config/llm-workflow-engine/workflows, /home/me/.config/llm-workflow-engine/profiles/default/workflowsWorkflowManager - INFO - Processing directory: /home/me/.local/lib/python3.10/site-packages/lwe/workflowsWorkflowManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/workflowsWorkflowManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/profiles/default/workflowsWorkflowManager - DEBUG - Loading workflows from dirs: /home/me/.local/lib/python3.10/site-packages/lwe/workflows, /home/me/.config/llm-workflow-engine/workflows, /home/me/.config/llm-workflow-engine/profiles/default/workflowsWorkflowManager - INFO - Processing directory: /home/me/.local/lib/python3.10/site-packages/lwe/workflowsWorkflowManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/workflowsWorkflowManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/profiles/default/workflowsApiBackend - INFO - System message set to: You are a helpful assistant.ApiBackend - DEBUG - Setting provider to: provider_chat_openai, with customizations: None, reset: FalseProviderManager - DEBUG - Attempting to load provider: provider_chat_openaiProviderManager - DEBUG - Found provider: ProviderChatOpenaiProviderManager - INFO - Successfully loaded provider: provider_chat_openaiApiBackend - DEBUG - Setting model to: gpt-3.5-turboTemplateManager - DEBUG - Loading templates from dirs: /home/me/.config/llm-workflow-engine/profiles/default/templates, /home/me/.config/llm-workflow-engine/templates, /home/me/.local/lib/python3.10/site-packages/lwe/templates, /tmp/lwe-temp-templatesUserManager - DEBUG - Retrieving all UsersApiBackend - DEBUG - Setting current user to meApiBackend - INFO - System message set to: You are a helpful assistant.ApiBackend - DEBUG - Setting provider to: provider_chat_openai, with customizations: None, reset: False                                                                                      Provide a prompt, or type /help or ? 
to list commands.[New Conversation](0.7/16384/0): defaultme@gpt-3.5-turbo 1>[New Conversation](0.7/16384/0): defaultme@gpt-3.5-turbo 1> /model openai_api_base https://api.convoai.tech/v1/Set openai_api_base to https://api.convoai.tech/v1/[New Conversation](0.7/16384/0): defaultme@gpt-3.5-turbo 1> testApiBackend - INFO - Starting 'ask' requestApiBackend - DEBUG - Extracting activate preset configuration from request_overrides: {'print_stream': True, 'stream': True}ApiRequest - DEBUG - Inintialized ApiRequest with input: test, default preset name: None, system_message: You are a helpful assistant., max_submission_tokens: 16384, request_overrides: {'print_stream': True, 'stream': True}, return only: FalseApiRequest - DEBUG - Extracting preset configuration from request_overrides: {'print_stream': True, 'stream': True}ApiRequest - DEBUG - Using current providerProviderManager - DEBUG - Attempting to load provider: provider_chat_openaiProviderManager - DEBUG - Found provider: ProviderChatOpenaiProviderManager - INFO - Successfully loaded provider: provider_chat_openaiFunctionManager - DEBUG - Loading functions from dirs: /home/me/.local/lib/python3.10/site-packages/lwe/functions, /home/me/.config/llm-workflow-engine/functions, /home/me/.config/llm-workflow-engine/profiles/default/functionsFunctionManager - INFO - Processing directory: /home/me/.local/lib/python3.10/site-packages/lwe/functionsFunctionManager - DEBUG - Loading function file test_function.py from directory: /home/me/.local/lib/python3.10/site-packages/lwe/functionsFunctionManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/functionsFunctionManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/profiles/default/functionsApiRequest - DEBUG - Built LLM based on preset_name: , metadata: {'provider': 'provider_chat_openai'}, customizations: {'model_name': 'gpt-3.5-turbo', 'n': 1, 'temperature': 0.7, 'openai_api_base': 'https://api.convoai.tech/v1/'}, preset_overrides: {}ApiRequest - DEBUG - Stripping messages over max tokens: 16384, initial token count: 19ApiRequest - DEBUG - Calling LLM with message count: 2ApiRequest - DEBUG - Building messages for LLM, message count: 2ApiRequest - DEBUG - Started streaming request at 2024-04-29T20:41:32.452933ApiRequest - DEBUG - Streaming with LLM attributes: {'model_name': 'gpt-3.5-turbo', 'model': 'gpt-3.5-turbo', 'stream': False, 'n': 1, 'temperature': 0.7, '_type': 'chat_openai'}Hello! 
How can I assist you today?ApiRequest - DEBUG - Stopped streaming response at 2024-04-29T20:41:33.675576FunctionManager - DEBUG - Loading functions from dirs: /home/me/.local/lib/python3.10/site-packages/lwe/functions, /home/me/.config/llm-workflow-engine/functions, /home/me/.config/llm-workflow-engine/profiles/default/functionsFunctionManager - INFO - Processing directory: /home/me/.local/lib/python3.10/site-packages/lwe/functionsFunctionManager - DEBUG - Loading function file test_function.py from directory: /home/me/.local/lib/python3.10/site-packages/lwe/functionsFunctionManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/functionsFunctionManager - INFO - Processing directory: /home/me/.config/llm-workflow-engine/profiles/default/functionsConversationStorageManager - DEBUG - Storing conversation messages for conversation: newConversationManager - DEBUG - Retrieving User with id 1ConversationManager - INFO - Added Conversation with title: None for User meConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Conversation with id 5MessageManager - INFO - Added Message with role: system, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-3.5-turbo, preset:  for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Conversation with id 5MessageManager - INFO - Added Message with role: user, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-3.5-turbo, preset:  for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Conversation with id 5MessageManager - INFO - Added Message with role: assistant, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-3.5-turbo, preset:  for Conversation with id 5ConversationStorageManager - INFO - Generating title for conversation 5ConversationManager - DEBUG - Retrieving Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Messages for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Messages for Conversation with id 5MessageManager - DEBUG - Retrieving last Message for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5[Untitled](0.7/16384/33): defaultme@gpt-3.5-turbo 2> Exception in thread Thread-1 (gen_title_thread):Traceback (most recent call last):  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner    self.run()  File "/usr/lib/python3.10/threading.py", line 953, in run    self._target(*self._args, **self._kwargs)  File "/home/me/.local/lib/python3.10/site-packages/lwe/backends/api/conversation_storage_manager.py", line 195, in gen_title_thread    result = llm.invoke(new_messages)  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke    self.generate_prompt(  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate    raise e  File 
"/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate    self._generate_with_cache(  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache    result = self._generate(  File "/home/me/.local/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 548, in _generate    response = self.client.create(messages=message_dicts, **params)  File "/home/me/.local/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper    return func(*args, **kwargs)  File "/home/me/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 581, in create    return self._post(  File "/home/me/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1232, in post    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))  File "/home/me/.local/lib/python3.10/site-packages/openai/_base_client.py", line 921, in request    return self._request(  File "/home/me/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1012, in _request    raise self._make_status_error_from_response(err.response) from None[Untitled](0.7/16384/33): defaultme@gpt-3.5-turbo 2>ConversationManager - DEBUG - Retrieving Conversation with id 5[Untitled](0.7/16384/33): defaultme@gpt-3.5-turbo 2>

That traceback doesn't say WHAT the error is. Kinda hard for me to debug if I don't know what the error is.

You can try putting some debug statements in a few spots along that traceback; without more data I cannot help, and I cannot reproduce the issue.

@ForeverNooob

Oh, sorry, I thought Exception in thread Thread-1 (gen_title_thread): (and the stuff following it) was the error.
I'm not sure how I can insert some debug statements like you've described (I'm afraid my Python skills are almost non-existent), but I'll try to learn about this if I have time.

What I did notice was that when I exited lwe, I got some more stuff printed out of it:

Traceback (most recent call last):
  File "/home/me/.local/bin/lwe", line 8, in <module>
    sys.exit(main())
  File "/home/me/.local/lib/python3.10/site-packages/lwe/main.py", line 209, in main
    shell.cmdloop()
  File "/home/me/.local/lib/python3.10/site-packages/lwe/core/repl.py", line 1425, in cmdloop
    user_input = self.prompt_session.prompt(
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1026, in prompt
    return self.app.run(
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 1002, in run
    return asyncio.run(coro)
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 886, in run_async
    return await _run_async(f)
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 746, in _run_async
    result = await f
EOFError

EOFError is a reason, but extremely unhelpful.

The stack trace is like a map of the files and line numbers where the code execution was when the program crashed. So you already have a map of the actual files and locations: you'd look there for relevant variables being passed into those function calls, and add debug log statements. The LWE debug facility should work in all cases if you installed it as a package:

from lwe import debug

# varname is a variable you want to see the value of, it can be any kind of variable.
debug.console(varname)

Since you'll be hacking stuff in site-packages, you'll probably want to do that in a Python virtual environment, so you can just scrap the environment after you're done.
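
One way to see the actual error without patching LWE at all is a standalone reproduction of the title-generation call. This is a rough sketch, assuming the langchain_openai package shown in your traceback; the endpoint is the one from this thread, and the key is a placeholder:

from langchain_openai import ChatOpenAI

# Build the same kind of client the gen_title thread builds (per the traceback).
llm = ChatOpenAI(
    model="gpt-3.5-turbo",
    openai_api_base="https://api.convoai.tech/v1/",  # endpoint from this thread
    openai_api_key="sk-placeholder",  # your convoai key
)

try:
    print(llm.invoke("Say hello").content)
except Exception as e:
    # The exception type and response body printed here are exactly the
    # details missing from the threaded traceback above.
    print(type(e).__name__, e)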

Hope that helps.

@ForeverNooob

Thanks once again. I'll try to grok the basics of Python when I can.
In the meantime, would it be an idea for a feature request to allow setting a title even before the first request/response from the LLM? Because then I could just set my own title and not see that error.


You can use a template with front matter; it should allow you to set a custom title: https://llm-workflow-engine.readthedocs.io/en/latest/templates.html#front-matter

There's an `edit-run` action for templates that will open the template in your CLI editor; you type your prompt and save, and it'll send the prompt and use the custom title.
@ForeverNooob

Hello, it seems like it still gives me that error, though it looks like I got a new error alongside it (about "Pydantic serializer").

I'm still in the process of adding print statements to (hopefully relevant) parts of the code, but in the meantime perhaps this might be of some use as well.

Also maybe handy to note: I got this behavior from another API endpoint as well.

$ lwe --version
/home/me/.local/bin/lwe version 0.19.1

me@gpt-3.5-turbo 3> /template edit-run deftemplate.md
Yeah that was expected. I'm just testing out a client. Are you receiving this?
/home/me/.local/lib/python3.10/site-packages/pydantic/main.py:328: UserWarning: Pydantic serializer warnings:
  Expected `int` but got `float` - serialized value may not be as expected
  return self.__pydantic_serializer__.to_python(
Yes, I am receiving your messages. Feel free to continue testing or ask any questions you may have. I'm here to help!
[Untitled](0.7/16384/138): default
me@gpt-3.5-turbo 4> Exception in thread Thread-3 (gen_title_thread):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/me/.local/lib/python3.10/site-packages/lwe/backends/api/conversation_storage_manager.py", line 208, in gen_title_thread
    result = llm.invoke(new_messages)
  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
    self.generate_prompt(
  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
    raise e
  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
    self._generate_with_cache(
  File "/home/me/.local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
    result = self._generate(
  File "/home/me/.local/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 548, in _generate
    response = self.client.create(messages=message_dicts, **params)
  File "/home/me/.local/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/home/me/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 581, in create
    return self._post(
  File "/home/me/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1232, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/me/.local/lib/python3.10/site-packages/openai/_base_client.py", line 921, in request
    return self._request(
  File "/home/me/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1012, in _request
    raise self._make_status_error_from_response(err.response) from None
[Untitled](0.7/16384/138): default
me@gpt-3.5-turbo 4>
[Untitled](0.7/16384/138): default
me@gpt-3.5-turbo 4>

And this is what happens when I exit it (using Ctrl+d):

me@gpt-3.5-turbo 4>
Traceback (most recent call last):
  File "/home/me/.local/bin/lwe", line 8, in <module>
    sys.exit(main())
  File "/home/me/.local/lib/python3.10/site-packages/lwe/main.py", line 213, in main
    shell.cmdloop()
  File "/home/me/.local/lib/python3.10/site-packages/lwe/core/repl.py", line 1440, in cmdloop
    user_input = self.prompt_session.prompt(
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/shortcuts/prompt.py", line 1026, in prompt
    return self.app.run(
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 1002, in run
    return asyncio.run(coro)
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 886, in run_async
    return await _run_async(f)
  File "/home/me/.local/lib/python3.10/site-packages/prompt_toolkit/application/application.py", line 746, in _run_async
    result = await f
EOFError

The contents of ~/.config/llm-workflow-engine/templates/deftemplate.md:

---
description: Default template in order to set the title.
request_overrides:
  title: default_title
---

Stack trace on Ctrl+d really doesn't mean anything, except that the REPL loop hasn't been properly catching those signals and cleanly exiting. dfe329f fixes that and exits cleanly.


Again, I cannot reproduce the other issue you are reporting:

[screenshot: 2024-05-24_15-11]

This code is what decides whether or not to use auto-title generation; if a title already exists, then gen_title() is NOT called: https://github.com/llm-workflow-engine/llm-workflow-engine/blob/main/lwe/backends/api/conversation_storage_manager.py#L71-L76

As you can see from my example, the template properly inserts the provided custom title. That title is present in conversation.title, and gen_title() will not be called.
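
In other words, the logic is roughly this (a paraphrased, self-contained sketch; the linked file is the real implementation):

class Conversation:
    def __init__(self, title=None):
        self.title = title

def gen_title(conversation):
    # Stand-in for the threaded LLM call that auto-generates a short title.
    conversation.title = "generated title"

def maybe_generate_title(conversation):
    # Paraphrase of the guard in conversation_storage_manager.py (linked above):
    # auto-titling runs only when no title exists yet.
    if conversation.title is None:
        gen_title(conversation)

# A custom title (e.g. set via template front matter) suppresses gen_title():
conversation = Conversation(title="default_title")
maybe_generate_title(conversation)
assert conversation.title == "default_title"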

I'm really not sure what you're doing differently, but everything I see is showing this logic working fine.


This also seems to happen on Alpine Linux v3.19 (probably on v3.20 as well).

Still didn't have time to learn Python just yet, so this is more of an additional datapoint which might be useful in the future.
The way lwe was installed is documented in this post.
Also a bit unfortunate that I have to send a message first in order to set a title; perhaps consider allowing conversation titles to be set from the start?

└─$  export OPENAI_API_KEY=vc-[redacted]└─$  lwe --version/home/alp/.local/bin/lwe version 0.22.2└─$  lweCreating database schema for: sqlite:////home/alp/.local/share/llm-workflow-engine/profiles/default/storage.dbINFO  [alembic.runtime.migration] Context impl SQLiteImpl.INFO  [alembic.runtime.migration] Will assume non-transactional DDL.INFO  [alembic.runtime.migration] Running stamp_revision  -> 4e642f725923Database schema installedNo users in database. Creating one...┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓┃                                                                                            Welcome to the LLM Workflow Engine shell!                                                                                            ┃┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛This shell interacts directly with ChatGPT and other LLMs via their API, and stores conversations and messages in the configured database.Before you can start using the shell, you must create a new user.Enter username (no spaces): meEnter email:Enter password (leave blank for passwordless login):User successfully registered.Login successful.Would you like to install example configurations for: presets, templates, workflows, tools? [y/N] yInstalling examples for: presetsInstalled turbo-16k-code-generation.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/presets/turbo-16k-code-generation.yamlInstalled turbo.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/presets/turbo.yamlInstalled gpt-4-exploratory-code-writing.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/presets/gpt-4-exploratory-code-writing.yamlInstalling examples for: templatesInstalled example-code-spec-generator.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-code-spec-generator.mdInstalled example-code-documentation.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-code-documentation.mdInstalled example-antonym-generator.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-antonym-generator.mdInstalled example-summarize-conversation.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-summarize-conversation.mdInstalled example-prompt-engineer.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-prompt-engineer.mdInstalled example-sentiment-analysis.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-sentiment-analysis.mdInstalled example-coding-gpt-chain-of-thought-collaboration.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-coding-gpt-chain-of-thought-collaboration.mdInstalled example-code-refactor.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-code-refactor.mdInstalled example-voicemail-sentiment-analysis.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-voicemail-sentiment-analysis.mdInstalled example-code-generator.md to /home/alp/.config/llm-workflow-engine/profiles/default/templates/example-code-generator.mdInstalling examples for: workflowsInstalled example-saved-conversation.yaml to 
/home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-saved-conversation.yamlInstalled example-file-summarizer.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-file-summarizer.yamlInstalled example-iterative-task.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-iterative-task.yamlInstalled example-persona-generator-create-persona.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-persona-generator-create-persona.yamlInstalled example-social-media-content-generator.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-social-media-content-generator.yamlInstalled example-persona-generator-create-characteristics.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-persona-generator-create-characteristics.yamlInstalled example-analyze-voicemail-transcriptions-process-row.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-analyze-voicemail-transcriptions-process-row.yamlInstalled example-question-answer-feedback-answer.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-question-answer-feedback-answer.yamlInstalled example-analyze-voicemail-transcriptions.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-analyze-voicemail-transcriptions.yamlInstalled example-ask-question.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-ask-question.yamlInstalled example-say-hello.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-say-hello.yamlInstalled example-multiple-workflows.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-multiple-workflows.yamlInstalled example-persona-generator.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-persona-generator.yamlInstalled example-iterative.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-iterative.yamlInstalled example-summarize-content.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-summarize-content.yamlInstalled example-file-summarizer-summarize-file.yaml to /home/alp/.config/llm-workflow-engine/profiles/default/workflows/example-file-summarizer-summarize-file.yamlInstalling examples for: toolsInstalled reverse_content.py to /home/alp/.config/llm-workflow-engine/profiles/default/tools/reverse_content.pyInstalled store_sentiment_and_topics.py to /home/alp/.config/llm-workflow-engine/profiles/default/tools/store_sentiment_and_topics.pyFinished installing examples                                                                                      Provide a prompt, or type /help or ? to list commands.[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1>[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> /model openai_api_base https://api.zanity.xyz/v1Set openai_api_base to https://api.zanity.xyz/v1[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1>[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> testTest successful! 
How can I assist you today?[Untitled](0.7/131072/34): defaultme@gpt-4o-mini 2> Exception in thread Thread-1 (gen_title_thread):Traceback (most recent call last):  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner    self.run()  File "/usr/lib/python3.11/threading.py", line 982, in run    self._target(*self._args, **self._kwargs)  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/backends/api/conversation_storage_manager.py", line 217, in gen_title_thread    result = llm.invoke(new_messages)             ^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke    self.generate_prompt(  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate    raise e  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate    self._generate_with_cache(  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache    result = self._generate(             ^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 689, in _generate    response = self.client.create(**payload)               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper    return func(*args, **kwargs)           ^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 829, in create    return self._post(           ^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request    return self._request(           ^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request    raise self._make_status_error_from_response(err.response) from None[Untitled](0.7/131072/34): defaultme@gpt-4o-mini 2>[Untitled](0.7/131072/34): defaultme@gpt-4o-mini 2> /title just_title • Fetching conversation history... • Setting title...Title set to: just_titlejust_title(0.7/131072/34): defaultme@gpt-4o-mini 2>just_title(0.7/131072/34): defaultme@gpt-4o-mini 2> test testIt looks like you're testing the chat functionality. How can I assist you further today?just_title(0.7/131072/63): defaultme@gpt-4o-mini 3>GoodBye!
# This time with that default name template└─$  cat ~/.config/llm-workflow-engine/templates/deftemplate.md---description: Default template in order to set the title.request_overrides:  title: default_title---└─$  lwe                                                                                      Provide a prompt, or type /help or ? to list commands.[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> /model openai_api_base https://api.zanity.xyz/v1Set openai_api_base to https://api.zanity.xyz/v1[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> /template edit-run deftemplate.md[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1>[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> testHello! How can I assist you today?[Untitled](0.7/131072/33): defaultme@gpt-4o-mini 2> Exception in thread Thread-1 (gen_title_thread):Traceback (most recent call last):  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner    self.run()  File "/usr/lib/python3.11/threading.py", line 982, in run    self._target(*self._args, **self._kwargs)  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/backends/api/conversation_storage_manager.py", line 217, in gen_title_thread    result = llm.invoke(new_messages)             ^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke    self.generate_prompt(  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate    raise e  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate    self._generate_with_cache(  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache    result = self._generate(             ^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 689, in _generate    response = self.client.create(**payload)               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper    return func(*args, **kwargs)           ^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 829, in create    return self._post(           ^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request    return self._request(        
   ^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request    raise self._make_status_error_from_response(err.response) from None[Untitled](0.7/131072/33): defaultme@gpt-4o-mini 2>[Untitled](0.7/131072/33): defaultme@gpt-4o-mini 2>

Start LWE with the --debug flag, run the second case (with the template), and post the full output.

@ForeverNooob

└─$   export OPENAI_API_KEY=vc-[redacted]└─$  lwe --version/home/alp/.local/bin/lwe version 0.22.2└─$  lwe --debugSchemaUpdater - DEBUG - Creating alembic config using .ini: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/backends/api/schema/alembic.iniSchemaUpdater - DEBUG - Schema versioning initialized: TrueSchemaUpdater - DEBUG - Initialized SchemaUpdater with database URL: sqlite:////home/alp/.local/share/llm-workflow-engine/profiles/default/storage.dbDatabase - DEBUG - The database schema exists.SchemaUpdater - INFO - Current schema version for database: 4e642f725923SchemaUpdater - INFO - Latest schema version: 4e642f725923SchemaUpdater - INFO - Schema is up to date.PresetManager - DEBUG - Loading presets from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/presets, /home/alp/.config/llm-workflow-engine/presets, /home/alp/.config/llm-workflow-engine/profiles/default/presetsPresetManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/presetsPresetManager - DEBUG - Loading YAML file: gpt-4o-mini.yamlPresetManager - INFO - Successfully loaded preset: gpt-4o-miniPresetManager - DEBUG - Loading YAML file: gpt-4-code-generation.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-code-generationPresetManager - DEBUG - Loading YAML file: gpt-4-creative-writing.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-creative-writingPresetManager - DEBUG - Loading YAML file: gpt-4-chatbot-responses.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-chatbot-responsesPresetManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/presetsPresetManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/presetsPresetManager - DEBUG - Loading YAML file: turbo-16k-code-generation.yamlPresetManager - INFO - Successfully loaded preset: turbo-16k-code-generationPresetManager - DEBUG - Loading YAML file: turbo.yamlPresetManager - INFO - Successfully loaded preset: turboPresetManager - DEBUG - Loading YAML file: gpt-4-exploratory-code-writing.yamlPresetManager - INFO - Successfully loaded preset: gpt-4-exploratory-code-writingPluginManager - DEBUG - Plugin paths: ['/home/alp/.config/llm-workflow-engine/profiles/default/plugins', '/home/alp/.config/llm-workflow-engine/plugins', '/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins']PluginManager - INFO - Scanning for package pluginsPluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/profiles/default/plugins/provider_chat_openai.pyPluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/plugins/provider_chat_openai.pyPluginManager - DEBUG - Searching for plugin file /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/provider_chat_openai.pyPluginManager - INFO - Loading plugin provider_chat_openai from /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/provider_chat_openai.pyPluginManager - DEBUG - Merging plugin plugins.provider_chat_openai config, default: {}, user: {}PluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/profiles/default/plugins/echo.pyPluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/plugins/echo.pyPluginManager - DEBUG - Searching for plugin 
file /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/echo.pyPluginManager - INFO - Loading plugin echo from /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/echo.pyPluginManager - DEBUG - Merging plugin plugins.echo config, default: {'response': {'prefix': 'Echo'}}, user: {}Echo - INFO - This is the echo plugin, running with backend: apiPluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/profiles/default/plugins/examples.pyPluginManager - DEBUG - Searching for plugin file /home/alp/.config/llm-workflow-engine/plugins/examples.pyPluginManager - DEBUG - Searching for plugin file /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/examples.pyPluginManager - INFO - Loading plugin examples from /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/plugins/examples.pyPluginManager - DEBUG - Merging plugin plugins.examples config, default: {'confirm_overwrite': True, 'default_types': ['presets', 'templates', 'workflows', 'tools']}, user: {}Examples - INFO - This is the examples plugin, running with profile dir: /home/alp/.config/llm-workflow-engine/profiles/default, examples root: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/examples, default types: ['presets', 'templates', 'workflows', 'tools'], confirm overwrite: TrueWorkflowManager - DEBUG - Loading workflows from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflows, /home/alp/.config/llm-workflow-engine/workflows, /home/alp/.config/llm-workflow-engine/profiles/default/workflowsWorkflowManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflowsWorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/workflowsWorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/workflowsWorkflowManager - DEBUG - Loading workflows from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflows, /home/alp/.config/llm-workflow-engine/workflows, /home/alp/.config/llm-workflow-engine/profiles/default/workflowsWorkflowManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/workflowsWorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/workflowsWorkflowManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/workflowsApiBackend - INFO - System message set to: You are a helpful assistant.ApiBackend - DEBUG - Setting provider to: provider_chat_openai, with customizations: None, reset: FalseProviderManager - DEBUG - Attempting to load provider: provider_chat_openaiProviderManager - DEBUG - Found provider: ProviderChatOpenaiProviderManager - INFO - Successfully loaded provider: provider_chat_openaiApiBackend - DEBUG - Setting model to: gpt-4o-miniTemplateManager - DEBUG - Loading templates from dirs: /home/alp/.config/llm-workflow-engine/profiles/default/templates, /home/alp/.config/llm-workflow-engine/templates, /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/templates, /tmp/lwe-temp-templatesUserManager - DEBUG - Retrieving all UsersApiBackend - DEBUG - Setting current user to meApiBackend - INFO - System 
message set to: You are a helpful assistant.ApiBackend - DEBUG - Setting provider to: provider_chat_openai, with customizations: None, reset: False                                                                                      Provide a prompt, or type /help or ? to list commands.[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> /model openai_api_base https://api.zanity.xyz/v1Set openai_api_base to https://api.zanity.xyz/v1[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> /template edit-run deftemplate.mdTemplateManager - DEBUG - Ensuring template deftemplate.md existsTemplateManager - DEBUG - Loading templates from dirs: /home/alp/.config/llm-workflow-engine/profiles/default/templates, /home/alp/.config/llm-workflow-engine/templates, /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/templates, /tmp/lwe-temp-templatesTemplateManager - DEBUG - Template deftemplate.md existsApiBackend - INFO - Setting up run of template: tmpojf352rh.mdTemplateManager - DEBUG - Rendering template: tmpojf352rh.mdApiRepl - INFO - Running template[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1>[New Conversation](0.7/131072/0): defaultme@gpt-4o-mini 1> testApiBackend - INFO - Starting 'ask' requestApiBackend - DEBUG - Extracting activate preset configuration from request_overrides: {'print_stream': True, 'stream': True}ApiRequest - DEBUG - Inintialized ApiRequest with input: test, default preset name: None, system_message: You are a helpful assistant., max_submission_tokens: 131072, request_overrides: {'print_stream': True, 'stream': True}, return only: FalseApiRequest - DEBUG - Extracting preset configuration from request_overrides: {'print_stream': True, 'stream': True}ApiRequest - DEBUG - Using current providerProviderManager - DEBUG - Attempting to load provider: provider_chat_openaiProviderManager - DEBUG - Found provider: ProviderChatOpenaiProviderManager - INFO - Successfully loaded provider: provider_chat_openaiToolManager - DEBUG - Loading tools from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools, /home/alp/.config/llm-workflow-engine/tools, /home/alp/.config/llm-workflow-engine/profiles/default/toolsToolManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/toolsToolManager - DEBUG - Loading tool file test_tool.py from directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/toolsToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/toolsToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/toolsToolManager - DEBUG - Loading tool file reverse_content.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/toolsToolManager - DEBUG - Loading tool file store_sentiment_and_topics.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/toolsApiRequest - DEBUG - Built LLM based on preset_name: None, metadata: {'provider': 'provider_chat_openai'}, customizations: {'model_name': 'gpt-4o-mini', 'n': 1, 'temperature': 0.7, 'openai_api_base': 'https://api.zanity.xyz/v1'}, preset_overrides: {}ApiRequest - DEBUG - Stripping messages over max tokens: 131072, initial token count: 19ApiRequest - DEBUG - Calling LLM with message count: 2ApiRequest - DEBUG - Building messages for LLM, message count: 2ApiRequest - DEBUG - Started streaming request at 2024-12-20T17:23:08.850023ApiRequest 
- DEBUG - Streaming with LLM attributes: {'model_name': 'gpt-4o-mini', 'model': 'gpt-4o-mini', 'stream': False, 'n': 1, 'temperature': 0.7, '_type': 'chat_openai'}Hello! How can I assist you today?ApiRequest - DEBUG - Stopped streaming response at 2024-12-20T17:23:15.772227ApiBackend - DEBUG - LLM Response: {'content': 'Hello! How can I assist you today?', 'additional_kwargs': {}, 'response_metadata': {'finish_reason': 'stop', 'model_name': 'gpt-4o-mini', 'system_fingerprint': 'fp_vxDuDMkGIv'}, 'type': 'AIMessageChunk', 'name': None, 'id': 'run-953eb75b-5a73-432b-9712-c36684f0cbcc', 'example': False, 'tool_calls': [], 'invalid_tool_calls': [], 'usage_metadata': None, 'tool_call_chunks': []}ToolManager - DEBUG - Loading tools from dirs: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/tools, /home/alp/.config/llm-workflow-engine/tools, /home/alp/.config/llm-workflow-engine/profiles/default/toolsToolManager - INFO - Processing directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/toolsToolManager - DEBUG - Loading tool file test_tool.py from directory: /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/toolsToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/toolsToolManager - INFO - Processing directory: /home/alp/.config/llm-workflow-engine/profiles/default/toolsToolManager - DEBUG - Loading tool file reverse_content.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/toolsToolManager - DEBUG - Loading tool file store_sentiment_and_topics.py from directory: /home/alp/.config/llm-workflow-engine/profiles/default/toolsConversationStorageManager - DEBUG - Storing conversation messages for conversation: newConversationManager - DEBUG - Retrieving User with id 1ConversationManager - INFO - Added Conversation with title: None for User meConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Conversation with id 5MessageManager - INFO - Added Message with role: system, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-4o-mini, preset:  for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Conversation with id 5MessageManager - INFO - Added Message with role: user, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-4o-mini, preset:  for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Conversation with id 5MessageManager - INFO - Added Message with role: assistant, message_type: content, message_metadata: None, provider: provider_chat_openai, model: gpt-4o-mini, preset:  for Conversation with id 5ConversationStorageManager - INFO - Generating title for conversation 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Messages for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving Messages for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5MessageManager - DEBUG - Retrieving last Message for Conversation with id 5ConversationManager - DEBUG - Retrieving Conversation with id 5[Untitled](0.7/131072/33): defaultme@gpt-4o-mini 2> ConversationStorageManager - DEBUG - Title generation LLM provider: provider_chat_openai, 
model: gpt-4o-miniException in thread Thread-1 (gen_title_thread):Traceback (most recent call last):  File "/usr/lib/python3.11/threading.py", line 1045, in _bootstrap_inner    self.run()  File "/usr/lib/python3.11/threading.py", line 982, in run    self._target(*self._args, **self._kwargs)  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/backends/api/conversation_storage_manager.py", line 217, in gen_title_thread    result = llm.invoke(new_messages)             ^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke    self.generate_prompt(  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate    raise e  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate    self._generate_with_cache(  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache    result = self._generate(             ^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 689, in _generate    response = self.client.create(**payload)               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_utils/_utils.py", line 275, in wrapper    return func(*args, **kwargs)           ^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 829, in create    return self._post(           ^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1280, in post    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 957, in request    return self._request(           ^^^^^^^^^^^^^^  File "/home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/openai/_base_client.py", line 1061, in _request    raise self._make_status_error_from_response(err.response) from None[Untitled](0.7/131072/33): defaultme@gpt-4o-mini 2>ConversationManager - DEBUG - Retrieving Conversation with id 5[Untitled](0.7/131072/33): defaultme@gpt-4o-mini 2>GoodBye!

I'm not sure what you're doing here, but this output makes no sense:

me@gpt-4o-mini 1> /template edit-run deftemplate.md
TemplateManager - DEBUG - Ensuring template deftemplate.md exists
TemplateManager - DEBUG - Loading templates from dirs: /home/alp/.config/llm-workflow-engine/profiles/default/templates, /home/alp/.config/llm-workflow-engine/templates, /home/alp/.local/share/pipx/venvs/llm-workflow-engine/lib/python3.11/site-packages/lwe/templates, /tmp/lwe-temp-templates
TemplateManager - DEBUG - Template deftemplate.md exists
ApiBackend - INFO - Setting up run of template: tmpojf352rh.md
TemplateManager - DEBUG - Rendering template: tmpojf352rh.md
ApiRepl - INFO - Running template
[New Conversation](0.7/131072/0): default
me@gpt-4o-mini 1>
[New Conversation](0.7/131072/0): default
me@gpt-4o-mini 1> test
ApiBackend - INFO - Starting 'ask' request

/template edit-run should be running the template immediately after you exit the editor, but that output indicates that you're returned to a prompt, and you're then entering test and hitting enter. This would not work, as it's not using the template.

It looks to me like you open the template for editing, but when you exit, there's no actual message content (only the YAML front matter). When you exit the template, LWE processes it, sees no message content, and returns you to the prompt.

I think you might be confusing the concepts of templates and presets. Templates are one-shot -- the final contents of the completed template are processed through that run, then tossed. From the documentation:

Templates allow storing text in template files, and quickly leveraging the contents as your user input.

To use /template edit-run correctly, you need to put your message in the template, after the YAML front matter, then save/exit the editor.
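
For example, a version of deftemplate.md that would actually send a prompt might look like this (the body line below the front matter is an illustrative placeholder):

---
description: Default template in order to set the title.
request_overrides:
  title: default_title
---
Say hello, please.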
