Add issue_graph tool to visualize issue/PR relationships and hierarchy #1510

Status: Open
Assignee: copilot-swe-agent

Description (by @SamMorrowDrums)

Create a new tool called issue_graph in the issues toolset that takes any pull request or issue and provides a graph representation of its links: for each node, the issue/PR number, whether it is an issue or a PR, the title, and the first 5 lines of the body text. You have a max depth of 5, and links can be circular too, so we need to support de-duplication before crawling further. If a node is an issue with a label that includes the text "epic", or has "epic" in the title, then it is likely a parent node, and if the issue is a sub-issue of another issue then that sub-issue is a child of the parent issue. Often the graph would be sorted [epic issue] -> [batch issue] -> [task issue] -> [pull-request], and sometimes it's [task issue] -> [pull request]. Pull requests are usually leaf nodes, and other pull requests linked are likely siblings rather than parents or children (but that context is difficult to express), so a child node is OK also.

The code should be resilient to strange and circular links, as this is a human-made graph and many errors are present.

The focus of the graph should be the provided node, and where possible we want to highlight the hierarchy of the provided node (issue or PR) as the main edges to care about. Other related issues should be noted (again with respect to depth), but we should be careful not to provide too much detail.

Our crawler should pull out issues from the issue body, sub-issues, and the issue timeline (and if nothing is found, it can check the commits for issue numbers in #xxx style). It should use supporting evidence where it is structured, to make this as deterministic as possible, and should update the graph based on the discovered evidence. Efficiency and context reduction are extremely important too, so perhaps we should also reduce the body lines provided based on distance from the main node (maybe going from 8 for the current node down to 3 for 5 steps away, for example). A smarter approach that factors in relevant words could work, but is likely not very efficient, and that matters as this happens on a web server.

Can you attempt to simplify this and think carefully through how to produce a graph? The output type is usually JSON, but you can provide a field with structured text as long as an LLM model will read it well (so, for example, CSV with linked edges could work).

Examples

DOT:

There is a standard text-based format called DOT which allows you to work with directed and undirected graphs, and would give you the benefit of using a variety of different libraries to work with your graphs; notably Graphviz, which allows you to read and write DOT files, as well as plot them graphically using matplotlib.

graph graph_4345345 {
    A -- B;
    B -- C;
    C -- E;
    E -- B;
}

graph graph_3234766 {
    F -- D;
    B -- C;
}

adjacency style:

The two most known graph representations as data structures are:

  • Adjacency matrices
  • Adjacency lists

Adjacency matrices

For a graph with |V| vertices, an adjacency matrix is a |V| × |V| matrix of 0s and 1s, where the entry in row i and column j is 1 if and only if the edge (i,j) is in the graph. If you want to indicate an edge weight, put it in the row i, column j entry, and reserve a special value (perhaps null) to indicate an absent edge. With an adjacency matrix, we can find out whether an edge is present in constant time, by just looking up the corresponding entry in the matrix. For example, if the adjacency matrix is named graph, then we can query whether edge (i,j) is in the graph by looking at graph[i][j].

For an undirected graph, the adjacency matrix is symmetric: the row i, column j entry is 1 if and only if the row j, column i entry is 1. For a directed graph, the adjacency matrix need not be symmetric.

Adjacency lists

Representing a graph with adjacency lists combines adjacency matrices with edge lists. For each vertex i, store an array of the vertices adjacent to it. We typically have an array of |V| adjacency lists, one adjacency list per vertex. Vertex numbers in an adjacency list are not required to appear in any particular order, though it is often convenient to list them in increasing order.

We can get to each vertex's adjacency list in constant time, because we just have to index into an array. To find out whether an edge (i,j) is present in the graph, we go to i's adjacency list in constant time and then look for j in i's adjacency list. In an undirected graph, vertex j is in vertex i's adjacency list if and only if i is in j's adjacency list. If the graph is weighted, then each item in each adjacency list is either a two-item array or an object, giving the vertex number and the edge weight.

Google also has an awesome blog on this: https://research.google/blog/talk-like-a-graph-encoding-graphs-for-large-language-models/

Talk like a graph: Encoding graphs for large language models
March 12, 2024 — Bahare Fatemi, Research Scientist, Google Research, and Bryan Perozzi, Research Scientist, Google Research

We dug deep into how to best represent graphs as text so LLMs can understand them — our investigation found three major factors that affect the results.

Imagine all the things around you — your friends, tools in your kitchen, or even the parts of your bike. They are all connected in different ways. In computer science, the term graph is used to describe connections between objects. Graphs consist of nodes (the objects themselves) and edges (connections between two nodes, indicating a relationship between them). Graphs are everywhere now. The internet itself is a giant graph of websites linked together. Even the knowledge search engines use is organized in a graph-like way.

Furthermore, consider the remarkable advancements in artificial intelligence — such as chatbots that can write stories in seconds, and even software that can interpret medical reports. This exciting progress is largely thanks to large language models (LLMs). New LLM technology is constantly being developed for different uses.

Since graphs are everywhere and LLM technology is on the rise, in "Talk like a Graph: Encoding Graphs for Large Language Models", presented at ICLR 2024, we present a way to teach powerful LLMs how to better reason with graph information. Graphs are a useful way to organize information, but LLMs are mostly trained on regular text. The objective is to test different techniques to see what works best and gain practical insights. Translating graphs into text that LLMs can understand is a remarkably complex task. The difficulty stems from the inherent complexity of graph structures with multiple nodes and the intricate web of edges that connect them. Our work studies how to take a graph and translate it into a format that an LLM can understand.

We also design a benchmark called GraphQA to study different approaches on different graph reasoning problems and show how to phrase a graph-related problem in a way that enables the LLM to solve the graph problem. We show that LLM performance on graph reasoning tasks varies on three fundamental levels: 1) the graph encoding method, 2) the nature of the graph task itself, and 3) interestingly, the very structure of the graph considered. These findings give us clues on how to best represent graphs for LLMs. Picking the right method can make the LLM up to 60% better at graph tasks!

[Figure: the process of encoding a graph as text using two different approaches and feeding the text and a question about the graph to the LLM.]

Graphs as text

To be able to systematically find out what is the best way to translate a graph to text, we first design a benchmark called GraphQA. Think of GraphQA as an exam designed to evaluate powerful LLMs on graph-specific problems. We want to see how well LLMs can understand and solve problems that involve graphs in different setups. To create a comprehensive and realistic exam for LLMs, we don't just use one type of graph, we use a mix of graphs ensuring breadth in the number of connections. This is mainly because different graph types make solving such problems easier or harder. This way, GraphQA can help expose biases in how an LLM thinks about the graphs, and the whole exam gets closer to a realistic setup that LLMs might encounter in the real world.

[Figure: overview of our framework for reasoning with graphs using LLMs.]

GraphQA focuses on simple tasks related to graphs, like checking if an edge exists, calculating the number of nodes or edges, finding nodes that are connected to a specific node, and checking for cycles in a graph. These tasks might seem basic, but they require understanding the relationships between nodes and edges. By covering different types of challenges, from identifying patterns to creating new connections, GraphQA helps models learn how to analyze graphs effectively. These basic tasks are crucial for more complex reasoning on graphs, like finding the shortest path between nodes, detecting communities, or identifying influential nodes. Additionally, GraphQA includes generating random graphs using various algorithms like Erdős–Rényi, scale-free networks, the Barabási–Albert model, and the stochastic block model, as well as simpler graph structures like paths, complete graphs, and star graphs, providing a diverse set of data for training.

When working with graphs, we also need to find ways to ask graph-related questions that LLMs can understand. Prompting heuristics are different strategies for doing this. Let's break down the common ones:

  • Zero-shot: simply describe the task ("Is there a cycle in this graph?") and tell the LLM to go for it. No examples provided.
  • Few-shot: This is like giving the LLM a mini practice test before the real deal. We provide a few example graph questions and their correct answers.
  • Chain-of-Thought: Here, we show the LLM how to break down a problem step-by-step with examples. The goal is to teach it to generate its own "thought process" when faced with new graphs.
  • Zero-CoT: Similar to CoT, but instead of training examples, we give the LLM a simple prompt, like "Let's think step-by-step," to trigger its own problem-solving breakdown.
  • BAG (build a graph): This is specifically for graph tasks. We add the phrase "Let's build a graph..." to the description, helping the LLM focus on the graph structure.

We explored different ways to translate graphs into text that LLMs can work with. Our key questions were:

  • Node encoding: How do we represent individual nodes? Options tested include simple integers, common names (people, characters), and letters.
  • Edge encoding: How do we describe the relationships between nodes? Methods involved parenthesis notation, phrases like "are friends", and symbolic representations like arrows.

Various node and edge encodings were combined systematically. This led to functions like the ones in the following figure:

[Figure: examples of graph encoding functions used to encode graphs via text.]

Analysis and results

We carried out three key experiments: one to test how LLMs handle graph tasks, and two to understand how the size of the LLM and different graph shapes affected performance. We run all our experiments on GraphQA.

How LLMs handle graph tasks

In this experiment, we tested how well pre-trained LLMs tackle graph problems like identifying connections, cycles, and node degrees. Here is what we learned:

  • LLMs struggle: On most of these basic tasks, LLMs did not do much better than a random guess.
  • Encoding matters significantly: How we represent the graph as text has a great effect on LLM performance. The "incident" encoding excelled for most of the tasks in general.

Our results are summarized in the following chart.

[Figure: comparison of various graph encoder functions based on their accuracy on different graph tasks. The main conclusion from this figure is that the graph encoding functions matter significantly.]

Bigger is (usually) better

In this experiment, we wanted to see if the size of the LLM (in terms of the number of parameters) affects how well they can handle graph problems. For that, we tested the same graph tasks on the XXS, XS, S, and L sizes of PaLM 2. Here is a summary of our findings:

  • In general, bigger models did better on graph reasoning tasks. It seems like the extra parameters gave them space to learn more complex patterns.
  • Oddly, size didn't matter as much for the "edge existence" task (finding out if two nodes in a graph are connected).
  • Even the biggest LLM couldn't consistently beat a simple baseline solution on the cycle check problem (finding out if a graph contains a cycle or not). This shows LLMs still have room to improve with certain graph tasks.

[Figure: effect of model capacity on graph reasoning tasks for PaLM 2-XXS, XS, S, and L.]

Do different graph shapes confuse LLMs?

We wondered if the "shape" of a graph (how nodes are connected) influences how well LLMs can solve problems on it. Think of the following figure as different examples of graph shapes.

[Figure: samples of graphs generated with different graph generators from GraphQA. ER, BA, SBM, and SFN refer to Erdős–Rényi, Barabási–Albert, stochastic block model, and scale-free network respectively.]

We found that graph structure has a big impact on LLM performance. For example, in a task asking if a cycle exists, LLMs did great on tightly interconnected graphs (cycles are common there) but struggled on path graphs (where cycles never happen). Interestingly, providing some mixed examples helped it adapt. For instance, for cycle check, we added some examples containing a cycle and some examples with no cycles as few-shot examples in our prompt. Similar patterns occurred with other tasks.

[Figure: comparing different graph generators on different graph tasks. The main observation here is that graph structure has a significant impact on the LLM's performance.]

Conclusion

In short, we dug deep into how to best represent graphs as text so LLMs can understand them. We found three major factors that make a difference:

  • How to translate the graph to text: how we represent the graph as text significantly influences LLM performance. The incident encoding excelled for most of the tasks in general.
  • Task type: Certain types of graph questions tend to be harder for LLMs, even with a good translation from graph to text.
  • Graph structure: Surprisingly, the "shape" of the graph on which we do inference (dense with connections, sparse, etc.) influences how well an LLM does.

This study revealed key insights about how to prepare graphs for LLMs. The right encoding techniques can significantly boost an LLM's accuracy on graph problems (ranging from around 5% to over 60% improvement). Our new benchmark, GraphQA, will help drive further research in this area.

Acknowledgements

We would like to express our gratitude to our co-author, Jonathan Halcrow, for his valuable contributions to this work. We express our sincere gratitude to Anton Tsitsulin, Dustin Zelle, Silvio Lattanzi, Vahab Mirrokni, and the entire graph mining team at Google Research, for their insightful comments, thorough proofreading, and constructive feedback which greatly enhanced the quality of our work. We would also like to extend special thanks to Tom Small for creating the animation used in this post.

So taking this advice on board please try to come up with a graph solution which will be extremely clear to an LLM and also efficient to produce on a web server, and not wasteful of tokens.

The issue-parsing and crawling algorithm can use goroutines to speed up the process where possible; the most important thing is that when combining the data and providing the links we make sure the intent is clear.

Another important note. We should also provide a natural language summary of the graph found, focusing on the main node provided and its key relationships, and highlighting any epic/batch/task relationships found. This should be concise but informative, and can suggest next steps.

Also very significant is the description of the tool, and therefore when and why to call it. Roughly, the reason is to understand the relationship of the issue or PR provided to other issues and PRs in the repository, and to help visualize the hierarchy of work involved, especially where epics and batch issues are involved. This can help with planning, understanding scope, and identifying related work. The graph can also help identify potential blockers or dependencies that may impact the progress of the issue or PR provided, and should enable the agent/model to gain a better understanding of the motivation for the work without having to make a tool call for every node in the graph it is interested in. It should call this early on in work to gather appropriate context.

There should also be a sane maximum number of characters per line (and perhaps we can filter out hyperlinks/URLs in the body, runs of whitespace, and other non-useful content to help with this) to ensure we do not exceed any limits.

================================================

Issue Graph Tool Design Document

Overview

The issue_graph tool provides a graph representation of GitHub issues and pull requests, showing their relationships to help understand the hierarchy of work, especially in projects using epics and batch issues.

Motivation

Problem Statement

When working on a GitHub issue or PR, understanding its context requires multiple API calls:

  • What epic does this belong to?
  • Are there related issues or PRs?
  • What's the hierarchy of work (epic → batch → task → PR)?
  • Are there blockers or dependencies?

Currently, an AI agent must make individual tool calls for each related issue to understand context, which is:

  • Slow: Multiple sequential API calls
  • Token-wasteful: Each response includes full issue bodies
  • Incomplete: Easy to miss relationships buried in issue bodies or timelines

Solution

A single tool call that returns a graph of related issues/PRs with:

  • Condensed node information (title + truncated body)
  • Clear parent/child/sibling relationships
  • Natural language summary for quick understanding
  • Depth-limited crawling to balance completeness vs. efficiency

When to Use This Tool

The agent should call issue_graph early in any workflow involving:

  • Understanding the scope of work for an issue or PR
  • Planning implementation for a task that's part of a larger epic
  • Identifying blockers or dependencies
  • Finding related work that might conflict or overlap
  • Understanding why a piece of work exists (tracing to parent epic)

Design Decisions

Graph Encoding Format

Based on Google's research on "Talk like a Graph" (ICLR 2024), the incident encoding format performs best for LLM comprehension. We'll use a hybrid approach:

GRAPH SUMMARY
=============
Focus: #123 (task) "Implement feature X"
Hierarchy: #100 (epic) → #110 (batch) → #123 (task) → #125 (PR)

NODES
=====
#100|epic|open|Improve performance|First 3 lines of body...
#110|batch|open|Backend optimizations|First 3 lines...
#123|task|open|Implement feature X|First 5 lines...
#125|pr|open|feat: implement X|First 3 lines...

EDGES (parent → child)
======================
#100 → #110
#110 → #123
#123 → #125

RELATED (siblings/mentions)
===========================
#123 ~ #124 (sibling task in same batch)
#123 ~ #130 (mentioned in body)

This format:

  • Uses clear section headers
  • Provides hierarchy at a glance in summary
  • Uses consistent delimiters (| for fields, → for parent-child, ~ for related)
  • Scales body preview based on distance from focus node

Node Classification

| Type  | Detection Method |
|-------|------------------|
| Epic  | Has label containing "epic" OR title contains "epic" (case-insensitive) |
| Batch | Is a parent issue (has sub-issues) but not an epic |
| Task  | Regular issue that's not epic/batch |
| PR    | Is a pull request |
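As a sketch, the detection rules above might translate to Go like this (the NodeType and classifyNode names are illustrative, not from the actual codebase):

```go
package main

import "strings"

// NodeType classifies a node for display in the graph output.
type NodeType string

const (
	NodeEpic  NodeType = "epic"
	NodeBatch NodeType = "batch"
	NodeTask  NodeType = "task"
	NodePR    NodeType = "pr"
)

// classifyNode applies the detection table: PRs first, then epics by
// label or title (case-insensitive), then batches (issues that have
// sub-issues), and everything else is a plain task.
func classifyNode(title string, labels []string, isPR, hasSubIssues bool) NodeType {
	if isPR {
		return NodePR
	}
	for _, l := range labels {
		if strings.Contains(strings.ToLower(l), "epic") {
			return NodeEpic
		}
	}
	if strings.Contains(strings.ToLower(title), "epic") {
		return NodeEpic
	}
	if hasSubIssues {
		return NodeBatch
	}
	return NodeTask
}
```

Note the ordering: the PR check comes first because a PR is never an epic or batch, and the epic check precedes the batch check so an epic that also has sub-issues still classifies as an epic.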

Relationship Detection

| Relationship | Source | Direction |
|--------------|--------|-----------|
| Sub-issue | GitHub sub-issues API | Parent → Child |
| Closes/Fixes | PR body keywords | PR → Issue (child → parent) |
| References | #xxx in body/timeline | Undirected (related) |
| Commits | Commit messages | PR → Issue (child → parent) |

Depth & Content Scaling

| Distance from Focus | Body Lines | Max Line Length |
|---------------------|------------|-----------------|
| 0 (focus node) | 8 | 120 |
| 1 | 5 | 100 |
| 2 | 4 | 80 |
| 3 | 3 | 60 |
| 4 | 2 | 50 |
| 5 | 1 | 40 |
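The scaling table is small enough to encode as a lookup; a possible sketch (bodyBudget is a hypothetical helper name):

```go
package main

// bodyBudget returns the body-line and line-length budget for a node
// at the given distance from the focus node, per the scaling table.
// Distances beyond 5 are clamped to the depth-5 budget.
func bodyBudget(distance int) (maxLines, maxLineLen int) {
	if distance < 0 {
		distance = 0
	}
	if distance > 5 {
		distance = 5
	}
	lines := []int{8, 5, 4, 3, 2, 1}
	lens := []int{120, 100, 80, 60, 50, 40}
	return lines[distance], lens[distance]
}
```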

Content Sanitization

Before including body text:

  1. Remove URLs (replace with [link])
  2. Collapse multiple whitespace to single space
  3. Remove markdown images ![...](...)
  4. Remove HTML tags
  5. Truncate to max chars per line
  6. Remove empty lines
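A possible Go sketch of the six sanitization steps (names and exact regexes are illustrative; this version strips image syntax before bare URLs so image URLs don't leave stray [link] markers behind):

```go
package main

import (
	"regexp"
	"strings"
)

var (
	imageRe = regexp.MustCompile(`!\[[^\]]*\]\([^)]*\)`) // ![alt](url)
	urlRe   = regexp.MustCompile(`https?://\S+`)
	htmlRe  = regexp.MustCompile(`<[^>]+>`)
	wsRe    = regexp.MustCompile(`\s+`)
)

// sanitizeBody keeps at most maxLines non-empty lines, each truncated
// to maxLineLen, with images/HTML removed and URLs replaced by [link].
func sanitizeBody(body string, maxLines, maxLineLen int) string {
	var out []string
	for _, line := range strings.Split(body, "\n") {
		line = imageRe.ReplaceAllString(line, "")
		line = urlRe.ReplaceAllString(line, "[link]")
		line = htmlRe.ReplaceAllString(line, "")
		line = strings.TrimSpace(wsRe.ReplaceAllString(line, " "))
		if line == "" {
			continue // step 6: drop empty lines
		}
		if len(line) > maxLineLen {
			line = line[:maxLineLen] // step 5: truncate
		}
		out = append(out, line)
		if len(out) == maxLines {
			break
		}
	}
	return strings.Join(out, " ")
}
```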

Implementation Plan

Phase 1: Core Data Structures & Utilities

Goal: Define types and helper functions for graph operations

Tasks

  1. Define node types (issue_graph_types.go)

    • GraphNode struct: number, type (epic/batch/task/pr), state, title, body preview, depth
    • GraphEdge struct: from, to, relationship type
    • IssueGraph struct: focus node, nodes map, edges slice, summary
  2. Content sanitization utilities

    • sanitizeBody(body string, maxLines int, maxLineLen int) string
    • extractIssueRefs(text string) []int - find all #xxx references
    • classifyNode(issue, labels) NodeType - determine if epic/batch/task/pr
  3. Reference extraction regex patterns

    • Issue/PR references: #(\d+)
    • Closes keywords: (?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)
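These patterns translate directly to Go's regexp package; a sketch (function names follow the Phase 1 list above, and extractClosedRefs is an illustrative addition):

```go
package main

import (
	"regexp"
	"strconv"
)

var (
	refRe    = regexp.MustCompile(`#(\d+)`)
	closesRe = regexp.MustCompile(`(?i)(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)\s+#(\d+)`)
)

// extractIssueRefs returns all #NNN references, de-duplicated in
// order of first appearance.
func extractIssueRefs(text string) []int {
	seen := map[int]bool{}
	var refs []int
	for _, m := range refRe.FindAllStringSubmatch(text, -1) {
		n, _ := strconv.Atoi(m[1])
		if !seen[n] {
			seen[n] = true
			refs = append(refs, n)
		}
	}
	return refs
}

// extractClosedRefs returns issue numbers named by closing keywords
// ("closes #1", "Fixes #2", "resolved #3", ...), case-insensitively.
func extractClosedRefs(text string) []int {
	var refs []int
	for _, m := range closesRe.FindAllStringSubmatch(text, -1) {
		n, _ := strconv.Atoi(m[1])
		refs = append(refs, n)
	}
	return refs
}
```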

Phase 2: Graph Crawler

Goal: Implement concurrent BFS crawler with deduplication

Tasks

  1. Crawler state management

    • visited map for deduplication
    • pending queue for BFS traversal
    • mu mutex for concurrent access
  2. Issue/PR fetching

    • Fetch issue details (title, body, labels, state)
    • Fetch sub-issues if available
    • Fetch timeline events for references
    • Fallback to commit messages for PRs
  3. Concurrent crawling with goroutines

    • Worker pool pattern (limit concurrent API calls)
    • Context for cancellation/timeout
    • Error handling that doesn't break entire graph
  4. Relationship inference

    • If issue A has sub-issue B → A is parent of B
    • If PR mentions "closes #X" → PR is child of X
    • If issue body references #Y → related (not parent/child)
    • If issue has epic label → likely parent in hierarchy
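The Phase 2 tasks above could be sketched as a depth-limited BFS that de-duplicates before enqueueing, so circular links terminate naturally. This is a minimal illustration: the fetchFn abstraction stands in for the GitHub API calls, and the real crawler would carry a context and collect edges, not just depths.

```go
package main

import "sync"

// fetchFn returns the issue numbers linked from a given node; in the
// real crawler this wraps the issue/sub-issue/timeline API calls.
type fetchFn func(number int) []int

// crawl runs a depth-limited BFS from the focus node. Each depth
// level is fetched with up to maxWorkers concurrent goroutines, and a
// shared visited map (depths) guarantees each node is fetched once.
func crawl(focus, maxDepth, maxWorkers int, fetch fetchFn) map[int]int {
	depths := map[int]int{focus: 0} // node → distance from focus
	frontier := []int{focus}

	for depth := 0; depth < maxDepth && len(frontier) > 0; depth++ {
		var (
			mu   sync.Mutex
			wg   sync.WaitGroup
			next []int
			sem  = make(chan struct{}, maxWorkers) // worker limit
		)
		for _, n := range frontier {
			wg.Add(1)
			sem <- struct{}{}
			go func(n int) {
				defer wg.Done()
				defer func() { <-sem }()
				for _, linked := range fetch(n) {
					mu.Lock()
					if _, seen := depths[linked]; !seen {
						depths[linked] = depth + 1
						next = append(next, linked)
					}
					mu.Unlock()
				}
			}(n)
		}
		wg.Wait()
		frontier = next
	}
	return depths
}
```

Level-by-level crawling keeps the depth bookkeeping trivial and makes the distance-based body scaling easy to apply afterwards, since each node's depth is fixed the first time it is seen.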

Phase 3: Graph Assembly & Output

Goal: Build graph representation optimized for LLM consumption

Tasks

  1. Hierarchy detection

    • Find path from focus node to root epic (if exists)
    • Mark nodes on main hierarchy path
    • Identify siblings at each level
  2. Output formatting

    • Generate summary section with hierarchy chain
    • Generate nodes section with scaled body previews
    • Generate edges section (parent → child)
    • Generate related section (siblings/mentions)
  3. Natural language summary generation

    • Describe focus node and its role
    • Explain hierarchy if epic/batch found
    • Count related items at each level
    • Suggest next steps based on graph structure

Phase 4: MCP Tool Integration

Goal: Register tool with proper schema and handle requests

Tasks

  1. Tool definition (issues.go addition)

    • Name: issue_graph
    • Description: Clear explanation of when/why to use
    • Parameters: owner, repo, issue_number
    • Return: structured graph + summary
  2. Schema with annotations

    • ReadOnly: true (no mutations)
    • Required: owner, repo, issue_number
  3. Handler implementation

    • Parse parameters
    • Create crawler with context
    • Execute graph building
    • Format and return response

Phase 5: Testing & Documentation

Goal: Ensure reliability and maintainability

Tasks

  1. Unit tests

    • Content sanitization functions
    • Reference extraction
    • Node classification
    • Graph assembly
  2. Integration tests with mocks

    • Mock GitHub API responses
    • Test circular reference handling
    • Test depth limiting
    • Test error resilience
  3. Toolsnap creation

    • Run with UPDATE_TOOLSNAPS=true
    • Commit the .snap file
  4. Documentation updates

    • Run script/generate-docs
    • Update README if needed

File Structure

pkg/github/
├── issue_graph.go          # Main tool implementation
├── issue_graph_types.go    # Data structures
├── issue_graph_crawler.go  # BFS crawler with goroutines
├── issue_graph_format.go   # Output formatting
├── issue_graph_test.go     # Unit and integration tests
└── __toolsnaps__/
    └── issue_graph.snap    # Schema snapshot

API Considerations

Rate Limiting

The crawler makes multiple API calls per node:

  • 1 call for issue/PR details
  • 1 call for sub-issues (if issue)
  • 1 call for timeline (optional, if few refs found)
  • 1 call for commits (PRs only, if no refs found)

Mitigation:

  • Concurrent worker limit (e.g., 5 parallel requests)
  • Context timeout (e.g., 30 seconds max)
  • Fail gracefully on rate limit (return partial graph)

Error Handling

| Error | Handling |
|-------|----------|
| 404 Not Found | Skip node, note in summary |
| 403 Rate Limited | Stop crawling, return partial graph |
| Timeout | Return what we have with warning |
| Invalid reference | Skip, don't add to graph |

Output Example

GRAPH SUMMARY
=============
Focus: #456 (task) "Add caching layer"
State: open
Hierarchy: #100 (epic) "Performance improvements" → #200 (batch) "Backend caching" → #456 (task)

This task is part of a larger performance initiative. There are 3 sibling tasks in the same batch (#457, #458, #459). One PR (#470) is linked to this task.

Suggested next steps:
- Review parent batch #200 for context on caching strategy
- Check sibling #457 for potential conflicts (also modifies cache config)

NODES (7 total)
===============
#100|epic|open|Performance improvements|Q4 goal to improve response times by 50%...
#200|batch|open|Backend caching|Implement caching across API endpoints...
#456|task|open|Add caching layer|Implement Redis caching for user queries. Should handle cache invalidation on updates. Consider TTL strategy...
#457|task|open|Cache configuration|Add config options for cache...
#458|task|closed|Cache metrics|Add prometheus metrics...
#459|task|open|Cache documentation|Document caching...
#470|pr|open|feat: add redis cache|Implements #456...

EDGES (parent → child)
======================
#100 → #200
#200 → #456
#200 → #457
#200 → #458
#200 → #459
#456 → #470

RELATED
=======
#456 ~ #457 (sibling, same batch)
#456 ~ #458 (sibling, same batch)
#456 ~ #459 (sibling, same batch)

Success Criteria

  1. Correctness: Graph accurately represents issue relationships
  2. Performance: Returns within 3 seconds for typical graphs (< 30 nodes)
  3. Resilience: Handles circular refs, missing nodes, API errors gracefully
  4. Usability: LLM can understand hierarchy from output without additional calls
  5. Efficiency: Token count scales linearly with depth, not exponentially

Dependencies

  • google/go-github - GitHub REST API client
  • shurcooL/githubv4 - GitHub GraphQL client (for sub-issues)
  • mark3labs/mcp-go - MCP protocol framework
  • Standard library: regexp, sync, context

Timeline Estimate

| Phase | Effort | Notes |
|-------|--------|-------|
| Phase 1: Types & Utils | 2 hours | Foundational, test early |
| Phase 2: Crawler | 4 hours | Core complexity, needs careful testing |
| Phase 3: Assembly | 2 hours | Format experimentation |
| Phase 4: Integration | 1 hour | Standard MCP pattern |
| Phase 5: Testing | 3 hours | Comprehensive mocks needed |
| Total | ~12 hours | |

Open Questions

  1. Should we support cross-repo references (e.g., owner/repo#123)?

    • Recommendation: V1 - same repo only; V2 - cross-repo with opt-in
  2. Should depth be configurable by the caller?

    • Recommendation: Fixed at 5 for V1; prevents abuse and keeps output predictable
  3. Should we cache graphs for repeated calls?

    • Recommendation: No caching in V1; graphs change frequently

==========================================================================

Summary

Graph Encoding

Based on Google's LLM research, I chose a hybrid incident encoding format with:

  • Clear section headers (SUMMARY, NODES, EDGES, RELATED)
  • Pipe-delimited node fields
  • Arrow notation for parent → child relationships
  • Tilde for sibling/related relationships

Node Classification

  • Epic: Label contains "epic" or title contains "epic"
  • Batch: Parent issue (has sub-issues) but not epic
  • Task: Regular issue
  • PR: Pull request (typically leaf nodes)

Depth-Based Content Scaling

Body preview shrinks with distance from focus node (8 lines at depth 0 → 1 line at depth 5) to conserve tokens.

Implementation Phases

  1. Core Data Structures - Types for nodes, edges, graph
  2. Graph Crawler - Concurrent BFS with deduplication
  3. Graph Assembly - Hierarchy detection and LLM-friendly formatting
  4. MCP Tool Integration - Register in issues toolset
  5. Testing & Documentation - Mocks, toolsnaps, README update

Key Features

  • Natural language summary explaining hierarchy and suggesting next steps
  • Resilient to circular references and API errors
  • Content sanitization (remove URLs, whitespace, HTML)
  • Concurrent crawling with worker pool pattern
  • ~5 second timeout, graceful degradation

Possibly reduce the depth if it will be too slow (you can practice by getting issue data from public repos and following links if it helps).
