Premise: assume we use only triangles as primitives.

I have a big point of confusion. My understanding so far:
- Using a camera, we capture some part of a real-world 3D object, for example a scene with mountains and land, or, for simplicity, just a ball (to display it on screen using pixels).
- Some image processing is applied to the captured image, and the output is: a digital image (the 3D object divided into primitives) and a texture image (to cover the 3D objects).
My first doubt: the camera captures the real world as a rectangle. Now, do we: (a) break this rectangle into smaller triangles and pass them to the input assembler as primitives, or (b) do edge detection during image processing, break those segmented geometries apart, and then pass them to the input assembler? For example, separating the mountains from the land, and then breaking the mountains into triangles and the land into triangles?
- The input assembler receives vertices as input. But vertices need to have (x, y, z) position data, right? Instead, we seem to pass the numbering/ordering of the triangle vertices. So what exactly does the vertex buffer consist of: positions or coordinates? And how exactly are these positions helpful in recreating the digital geometric object?
I am sorry if my doubt seems silly; I don't have much knowledge of image processing, so I am not getting a clear picture of what is actually being passed to the input assembler.
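To make the vertex-buffer question concrete, here is a minimal sketch of what the input assembler typically works with. This is an illustration only, not any particular API: plain Python lists stand in for GPU buffers, and the function name is made up. The key idea is that the vertex buffer holds per-vertex data (positions, and often texture coordinates and normals too), while a separate index buffer names triangles by referring back into the vertex buffer.

```python
# Sketch: a unit quad built from two triangles, the way it might sit in
# a vertex buffer + index buffer. Plain lists stand in for GPU buffers.

# Vertex buffer: one (x, y, z) position per vertex. Real buffers often
# interleave more attributes (texture coordinates, normals) per vertex.
vertex_buffer = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

# Index buffer: each group of three indices names one triangle. Shared
# vertices (0 and 2 here) are stored once and referenced twice.
index_buffer = [0, 1, 2,   # first triangle
                0, 2, 3]   # second triangle

def assemble_triangles(vertices, indices):
    """Mimic the input assembler: resolve indices into triangles of positions."""
    return [(vertices[indices[i]],
             vertices[indices[i + 1]],
             vertices[indices[i + 2]])
            for i in range(0, len(indices), 3)]

triangles = assemble_triangles(vertex_buffer, index_buffer)
print(len(triangles))  # 2
print(triangles[0])    # ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0))
```

So the answer to "positions or coordinates?" is both: the vertex buffer stores actual (x, y, z) coordinates, and the index buffer stores the vertex numbering that stitches them into triangles.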
- Your question is about two things: deriving a mesh from a photograph, and rasterizing a mesh. These are very different processes and areas of expertise. Please edit your question so that it focuses on only one of them; you can post the other one as a separate question. (Kevin Reid, Aug 23, 2024)
- Just like Kevin Reid commented, the input is already triangles; they don't get created. So the mountains are input as a list of triangles, and the ball is input as a list of triangles. Those triangles each have x, y, z positions on input. Turning the ball or mountains into triangles is a very different task from taking a list of triangles that represent the ball or mountains and turning them back into something you see on the screen. (pmw1234, Aug 23, 2024)
- @KevinReid I then went through various resources and got a clearer understanding: if we take a photograph, it goes through image processing. Graphics come into the picture either when we use geometric tools like CAD, or when we use multiple cameras to convert a 3D model to a 2D graphics image. The 3D model is read as a mesh of triangles (or any polygon, but mostly triangles), i.e. as a bunch of triangle vertices and their attributes (such as texture coordinates, normals, and more), on which we run the shaders. I hope I am correct this time? (Manika Sharma, Aug 29, 2024)
- @pmw1234 That's true. Please check my comment; if it seems good, I will edit my question. (Manika Sharma, Aug 29, 2024)
- I am not sure what you mean by the sentence that starts with "Graphics come into the picture…" — there may be a misunderstanding there, or there may not. The rest seems correct. Please do edit your question — more words, on a more narrow topic, will help us understand. (Kevin Reid, Aug 29, 2024)
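The comments above say the ball arrives at the pipeline already triangulated. As one illustration of where such a triangle list can come from, here is a rough sketch of a procedurally generated "UV sphere": vertices are placed on a latitude/longitude grid and each grid quad is split into two triangles. The function name and parameters are invented for this example; modeling tools and asset loaders produce the same kind of output (a vertex list plus a triangle index list).

```python
import math

def uv_sphere(stacks=4, slices=8, radius=1.0):
    """Build a triangle mesh approximating a ball: a list of (x, y, z)
    vertex positions and a flat list of triangle indices into it."""
    vertices = []
    for i in range(stacks + 1):
        theta = math.pi * i / stacks          # latitude angle, 0..pi
        for j in range(slices + 1):
            phi = 2.0 * math.pi * j / slices  # longitude angle, 0..2*pi
            vertices.append((radius * math.sin(theta) * math.cos(phi),
                             radius * math.cos(theta),
                             radius * math.sin(theta) * math.sin(phi)))
    indices = []
    for i in range(stacks):
        for j in range(slices):
            a = i * (slices + 1) + j          # top-left corner of this grid quad
            b = a + slices + 1                # the vertex one stack below it
            indices += [a, b, a + 1,          # two triangles per quad
                        a + 1, b, b + 1]
    return vertices, indices

verts, idx = uv_sphere()
print(len(verts), len(idx) // 3)  # 45 64
```

This is exactly the shape of data the input assembler consumes: real coordinates in the vertex list, and triangle connectivity as indices, with no image processing involved anywhere.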
