start of adding touch input #1898


Open
Pyrdacor wants to merge 21 commits into dotnet:main from Pyrdacor:add-touch-input

Conversation

Pyrdacor
Contributor

Summary of the PR

This adds touch devices to the input abstraction. The goal is to add touch gesture recognition to the SDL backend to enable touch gestures and simple touch finger events on Android and iOS.

Related issues, Discord discussions, or proposals

I proposed this a long time ago and finally found some time for it.

@Perksey
Member

I proposed this a long time ago

Where?

@Pyrdacor requested a review from a team as a code owner on January 14, 2024 23:05
@Pyrdacor
Contributor (Author)

I proposed this a long time ago

Where?

On Discord. But my memory was poor. It was more of a question, and you said I could give it a try. I guess I started to work on this a bit later but then abandoned it for lack of time.


@Perksey
Member

Thanks

Comment on lines 23 to 36
/// <summary>
/// The last known normalized position of the finger.
/// </summary>
public Vector2 NormalizedPosition { get; }

/// <summary>
/// The last known speed of the finger.
/// </summary>
public Vector2 Speed { get; }

/// <summary>
/// The last known normalized speed of the finger.
/// </summary>
public Vector2 NormalizedSpeed { get; }
Contributor

Could there be some documentation in here about what normalized means in this specific context? In the case of graphics programming this could very well mean -1 -> 1 or 0 -> 1.

Contributor (Author)

@Pyrdacor Jan 14, 2024 (edited)

I added (0..1) and (-1..1). Is this enough? In general, positions are normalized to the range 0 to 1, and speed values to the range -1 to 1, since they have a direction. I tried to add this information everywhere before, but I guess I forgot it here.
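The convention described here could be sketched as follows. This is a hypothetical illustration, not Silk.NET's actual implementation: the `FingerSketch` type, the `windowSize` parameter, and the "one window per second" reference speed are all assumptions made for the example.

```csharp
using System.Numerics;

// Hypothetical sketch of the normalization convention discussed above:
// positions normalize to 0..1, speeds to -1..1 (speeds keep a direction).
public readonly struct FingerSketch
{
    public Vector2 Position { get; init; } // raw position in pixels
    public Vector2 Speed { get; init; }    // raw speed in pixels per second

    // Normalized position: each axis mapped into 0..1 relative to window size.
    public Vector2 NormalizedPosition(Vector2 windowSize) => Position / windowSize;

    // Normalized speed: -1..1 per axis. "One window width/height per second"
    // is assumed as the reference speed here, purely for illustration.
    public Vector2 NormalizedSpeed(Vector2 windowSize) =>
        Vector2.Clamp(Speed / windowSize, new Vector2(-1f), new Vector2(1f));
}
```

Under this sketch, a finger at (400, 300) in an 800x600 window normalizes to (0.5, 0.5), and any speed faster than the reference clamps to the -1..1 range while keeping its sign.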

@ewu1992

I did a test of the multitouch feature and it is working well. When is it planned to be released?

@Pyrdacor
ContributorAuthor

I don't know. Nobody reviewed this yet.

@Beyley
Contributor

I did a test of the multitouch feature and it is working well. When is it planned to be released?

I don't know. Nobody reviewed this yet.

Everyone on the team has been very busy the past few months and hasn't had a whole lot of time to put towards Silk.NET, so open PRs such as this one have stalled quite a bit. We apologize for the lengthy review/response times, and we have no ETA for the next release or for when this PR will be merged.

@Perksey
Member

I notice this currently doesn't have a proposal in documentation/proposals. Ideally, this should be done to ensure we can review the API at a high-level and keep a record of our discussion/decisions.

@domportera
Collaborator

if this ends up being accepted, I am very willing to port my gesture recognition work I'm making for Godot to Silk.

IIRC the only work I have left on it is Godot-specific implementation details. It features support for multiple types of gestures occurring simultaneously and felt really good to me at the time I had it running on Godot 3.

@Perksey
Member

Thanks @domportera. It's hopeful to see the positivity on this PR! At the moment this is blocked on a proposal needing to be written (as per documentation/proposals). This doesn't necessarily have to be done by Pyrdacor, and I don't know what their availability is like, so anyone should feel free to do this. This allows the Working Group to bikeshed the API details in a meeting and ensure our usual design processes are followed.


@domportera
Collaborator

commented Nov 14, 2024 (edited)

looking at the code in this PR, there is actually a ton of overlap between what this addresses and what mine does, so maybe it's best not to step on anyone's toes

that said, I can probably whip up a proposal that aligns pretty closely with the original author's intent. I'm not terribly informed on the state of touch in Silk though, as I've only ever used Silk for desktop imgui stuff at this point. I'm assuming it currently looks like "raw" touch events with finger persistence from SDL's implementation, but not so low level as to have actual sensor information? Do other backends have a working touch implementation?

@Perksey
Member

Anything you can get through SDL is at your disposal. I'm assuming GLFW doesn't have the APIs we require.

@Perksey
Member

@domportera Note that there's some interest from the Stride project about sensors. I think it may just be worth writing this proposal atop Multi-Backend Input (i.e. 3.0) or at least evaluating the differences between 2.X and 3.0 to see if there's a way we can have one proposal for both 2.X and 3.0, if not I'd rather we focus on 3.0.

I have copied the feedback I have received from Stride so far on Multi-Backend Input below for reference.


I read over the whole Input proposal (okay, I skimmed some parts of it), as I have decent experience in that area.
It looks pretty good, and I don't see any issues if Stride switched to it, with just a thin wrapper around it.

There are a few things

  • It doesn't look like pens and touch screens are supported? Basically pointer devices. Stride doesn't actually expose the concept of a Mouse to the user, only a Pointer, with an exception being the virtual input system (where you define an 'action' and set different input device buttons to raise that action).
  • It doesn't look like gestures are a built-in thing. Stride has them for things like flicks, long presses, taps, etc., and supports setting the needed number of fingers and such. (Not 100% clear if these are actually connected to devices currently in Stride.)
  • More of a question: is InputContext.ConnectionChanged raised when a device is connected/disconnected, like if a gamepad gets disconnected?
  • Stride has the concept of an ISensorDevice, which is a type of IInputDevice. These sensors are things like an accelerometer, gyroscope, etc. that give real-world motion data about the device. It doesn't seem like they would be supported in the current proposal.
  • It doesn't look like mouse delta is supported. It is relatively easy for a user (like Stride) to implement, but might be worth adding?
  • I didn't see a way to get, or a good way to implement getting, the latest input device to receive input. This info is mostly used in games to update UI prompts that show icons for inputs (like switching from "Press E to open" to "Press (X) to open").
  • I would recommend using the cardinal directions for the gamepad buttons and d-pad instead of up/down/left/right, so north/south/east/west. This avoids having to check whether "up" is pressed down, which can be a bit confusing.
  • I don't see text input events? But maybe that is supported elsewhere and I just didn't see it?
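The cardinal-direction suggestion in the list above could look something like the following. This is a hypothetical enum sketched for illustration only; `GamepadButtonSketch` is not Silk.NET's or Stride's actual API.

```csharp
// Hypothetical: naming face/d-pad buttons by cardinal direction avoids the
// ambiguity of asking whether the "up" button is pressed *down*.
public enum GamepadButtonSketch
{
    North, // e.g. Y on Xbox-style pads, Triangle on PlayStation-style pads
    South, // e.g. A on Xbox-style pads, Cross on PlayStation-style pads
    East,  // e.g. B on Xbox-style pads, Circle on PlayStation-style pads
    West,  // e.g. X on Xbox-style pads, Square on PlayStation-style pads
}
```

A check then reads unambiguously, e.g. `if (gamepad.IsPressed(GamepadButtonSketch.North)) { ... }` (where `IsPressed` is likewise a hypothetical method for this sketch).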

@domportera
Collaborator

damn ok this seems right up my alley, as I have another project (in its ABSOLUTE infancy, called omnio, made in Godot, also will be using my gesture library and arbitrary sensors lmao) that is going to have to tackle very similar issues of abstracting inputs from all sorts of sensors, gamepads, and mouse/KB/touch. I'm also interested in expanding the input capabilities of tooll3 after our next release and have begun to sneak Silk windows into the codebase already. so this seems like a really good opportunity to get the whole damn .NET ecosystem covered

if we wanna go that deep I'm absolutely game and have started to put some thought into that. I work with weird sensor inputs in C# for a living and have had my fair share of pain points so I can probably brainstorm something that more or less covers all the bases. I'm down to propose an everything-input-proposal but would need to seriously ramp up on Silk to keep it relevant

@Perksey
Member

agree

Reviewers

@Beyley left review comments

@ThomasMiz: awaiting requested review (ThomasMiz is a code owner)

At least 1 approving review is required to merge this pull request.

Assignees
No one assigned
Labels
None yet
Projects
Status: Todo
Milestone
No milestone

5 participants
@Pyrdacor, @Perksey, @ewu1992, @Beyley, @domportera
