[Prototype] Self-hosted CodeLlama LLM for code autocompletion #576
Conversation
lihebi commented Nov 5, 2023 (edited)
I also came across multiple providers the other day, and it is well supported: trpc#3049. I have implemented multiple providers in https://github.com/codepod-io/codepod-cloud/pull/11. Related code:
lihebi commented Nov 5, 2023 (edited)
With that said, it could actually be better and simpler to leave it in
Thanks, Sen! It works well. I left some minor comments in the code.
One issue is that the automatic completion by `InlineCompletionsProvider` is not very responsive: sometimes it fires, and sometimes it doesn't. How about using a shortcut to trigger it manually, and disabling the automatic triggering?
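One way to make the automatic trigger more predictable (a sketch under my own assumptions, not code from this PR) is to debounce the completion requests so the provider fires once after the user pauses typing, instead of on every keystroke:

```typescript
// Hypothetical helper (not from the PR): collapse a burst of keystrokes into a
// single completion request that fires after `ms` of idle time.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

For the manual-shortcut route, Monaco also ships a built-in command id (`editor.action.inlineSuggest.trigger`, if I recall correctly) that could be bound to a key via `editor.addCommand`.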
```diff
@@ -59,6 +68,8 @@ export async function startServer({ port, repoDir }) {
   });
   http_server.listen({ port }, () => {
-    console.log(`🚀 Server ready at http://localhost:${port}`);
+    console.log(
+      `🚀 Server ready at http://localhost:${port}, LLM Copilot is hosted at ${copilotIpAddress}:${copilotPort}`
```
I'd revert this print statement: people may choose to run CodePod without the copilot server, and this info is misleading in that case. Staying silent should be fine.
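A sketch of what that could look like (my assumption of the shape, not the PR's actual code): build the startup message in one place, and only mention the copilot endpoint when one was actually configured:

```typescript
// Hypothetical refactor: the copilot endpoint is only mentioned when both the
// IP and port were supplied; otherwise the server stays silent about it.
function startupMessage(
  port: number,
  copilotIpAddress?: string,
  copilotPort?: number
): string {
  const base = `🚀 Server ready at http://localhost:${port}`;
  return copilotIpAddress && copilotPort
    ? `${base}, LLM Copilot is hosted at ${copilotIpAddress}:${copilotPort}`
    : base; // no copilot configured: don't mislead the user
}
```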
ui/src/lib/trpc.ts (Outdated)
```ts
} else {
  remoteUrl = `${window.location.hostname}:${window.location.port}`;
}

export const trpcProxyClient = createTRPCProxyClient<AppRouter>({
```
We already have a trpc client in App.tsx. You can access the client in `llamaInlineCompletionProvider` like this:

```tsx
// MyMonaco.tsx
function MyMonaco() {
  // ...
  const { client } = trpc.useUtils();
  const llamaCompletionProvider = new llamaInlineCompletionProvider(id, editor, client);
}
```
A second thought: since the copilot is already a REST API, and we are not going to further customize it or add authentication for this Desktop app, let's call the REST API directly from the frontend.
trpc is preferred in the cloud app.
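A minimal sketch of what calling the copilot REST API directly from the frontend could look like. The `/complete` route, the request payload shape, and the `completion` response field are all assumptions for illustration; the real endpoint may differ:

```typescript
// Hypothetical request shape (not taken from the PR).
interface CompletionRequest {
  prompt: string;
  max_tokens: number;
}

// Pure builder, easy to unit-test without a running server.
function buildCompletionRequest(prefix: string, maxTokens = 64): CompletionRequest {
  return { prompt: prefix, max_tokens: maxTokens };
}

// Plain fetch instead of a trpc procedure; `baseUrl` would be built from the
// configured endpoint, e.g. `http://${copilotIpAddress}:${copilotPort}`.
async function fetchCompletion(baseUrl: string, prefix: string): Promise<string> {
  const res = await fetch(`${baseUrl}/complete`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildCompletionRequest(prefix)),
  });
  if (!res.ok) throw new Error(`copilot server returned ${res.status}`);
  const data = await res.json();
  return data.completion as string;
}
```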
This isn't that critical. We can assume that the service is up.
This is quite important: we quite often edit code in the middle of a file.
After the discussion, we decided to leave this PR as a reference for integrating the self-hosted copilot, and to address the comments in the codepod-cloud repo.
Summary
This PR provides a solution for using CodeLlama for self-hosted code autocompletion.
The copilot-related code lives in `api/src/server`; initially we talked about creating a standalone server for the copilot-related services.
Test
Assume the copilot server IP is `x.x.x.x` and the port is `9090`:

```shell
cd codepod/api/
pnpm dev --copilotIP x.x.x.x --copilotPort 9090
```