x/oscar feedback #68490
-
Gabyhelp is a first prototype experiment for a larger effort to explore creating automated help for open-source maintenance. We have posted an overview of the vision in x/oscar/README.md. This discussion is for collecting feedback about the overall vision, not for specifics about gabyhelp (use the gabyhelp discussion for that). Please read the README. After that, we would be happy to hear any feedback about concerns we are missing, and also ways to reduce maintainer toil that we should be considering beyond what's listed there. Thanks very much!
-
The examples that show similar issues are not very impressive; similar tools existed years before the AI hype. The root problem here is more that GitHub search is not very good and often returns way too many issues, because it seemingly matches against the entire thread. This can be worked around by adding in:title to the search if you know what you are looking for. But my main concern is that we will probably soon need to fight AIs, in addition to stale bots, in issues, and in the process create lots of irrelevant comments.
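For reference, the in:title workaround mentioned above is a standard GitHub search qualifier; a query along these lines restricts matching to issue titles rather than whole threads (the repository and search phrase here are just illustrative):

```
repo:golang/go is:issue in:title "connection reset"
```

Without the in:title qualifier, the same phrase would also match anywhere in issue bodies and comments, which is the over-matching behavior described above.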
-
It's definitely a goal to avoid irrelevant comments. More generally, it is a goal to be helpful. If certain comments are not helpful, we will try to avoid them. There are lots of potential ways to go wrong here, but that doesn't mean every path is wrong.
-
This is so cool! I'm excited to see another language ecosystem start looking into this. At a high level, I think most of what I see in the oscar README, and what I expect oscar to be in practice, amounts to more effective project management:
There are other things that come to mind (e.g. making it easier for a newcomer to learn about the existing communication structures and ticketing/org processes of a given OSS org, or even just updating onboarding docs to reflect the new practices of an org), but none of those are super concrete. Two additional points I'll throw out as food for thought:
-
I too think that the UX of GitHub issues is problematic, and came here to suggest a different interface for interacting with oscar, or for reporting issues in general. It would be great if oscar could help the reporter get enough context from previous issues and guide them through the process (what's important about the environment for this particular issue, etc.) before even creating an issue. That way, inexperienced reporters would get help writing well-crafted issues with less effort, and might decide that something is a non-issue and not open an issue at all, or instead provide additional context on an existing issue. Think of it as chatting with the issue tracker in natural language before committing the issue.
-
I do wonder a bit about command-line interaction with gaby/oscar. That might not be the right UX for daily usage, but maybe there are places to expose the core LLM functionality to support improvisation, and to experiment with what works or doesn't?
-
Also, taking a quick skim through golang's issue tracker specifically, it looks like y'all have a lot of issues being filed by automation. I definitely remember a lot of those being very noisy when I was at Google (over in the databases world), and the constant frustration of "ok, here I go again, triaging automated issues...". There's definitely room here to reduce the noise: in particular, when a potentially useful signal gets so noisy that people disregard it entirely, there's not much utility left in the signal itself, and I think that's also an area where LLMs can help out. The tricky part, though, is that a lot of what we're imagining along these lines requires a lot of dedicated engineering work to build the integrations for every action we're envisioning gabyhelp might take, and, well, if that work hasn't been done already, why would adding an LLM to the mix make it more likely that someone will prioritize it?
-
I think I disagree with this. It seems better to me to raise the signal and lower the noise at the source. Our approach focuses on issues filed by people: bot-written issues such as backports and watchflakes are exempted from gaby's "related issue" posts.
-
Hmm, I was more suggesting that another use case for oscar/gabyhelp is to give maintainers a mechanism for refining a noisy signal into a useful one, not necessarily that the filtering should happen after it's been posted to GitHub. In particular, I think lowering the noise at the source requires a lot of manual heuristic creation and manual thresholding, both of which are areas where prompt-driven tooling could help.
-
Last year I created an LLM agent framework and an example system that added source code comments to code. It used the AST facilities in Go, along with some other small deterministic tools, to extract code for review, ChatGPT to write comments and improve them through a multi-agent conversation, and then more utilities to insert them back into the code. This worked quite well, producing decent comments that would help both new developers coming into a project and maintainers learning new parts of one. The next step would have been to add a vector store to enable good package-level structural comments; unfortunately the project was cut short. I suggest that good source comments are a huge help to maintainers and, where missing, the best addition to aid maintainability. Not all developers like writing comments and many OSS projects do not have them -- indeed golang is unusually lucky in the quality of its comments. So I suggest adding "producing good in-source comments where not already available" as a future goal for Oscar :)
-
My initial reaction is that this could be a massive help. I help maintain https://github.com/a-h/templ, and even at that smaller scale issue management can become overwhelming at times, particularly because it is not my full-time job and most of my motivation comes from bug fixes and new features, not from chasing down issues I know have already been raised or gathering documentation links (we have a docs site, https://templ.guide, and once oscar becomes generally usable I can imagine us scraping it). Am I right in assuming that there will be some way to feed back to oscar in cases where a comment is not useful, or, conversely, where a comment from oscar results in a productive outcome? For example
-
Emoji reactions to gabyhelp comments are currently tracked to help us assess how gabyhelp is performing. |
-
Hey Ian, glad to hear that it's been thought about. I actually sought out Oscar after the conversation Arman and I had with you at GopherconUK :) I guess there are signals built into the issue-tracking flow: if a ticket is raised, then commented on by Oscar, and finally closed by the original author or a maintainer, that sounds like a successful response from Oscar.