RGBD viewer (RGB + depth map) #1111

Unanswered
salexx asked this question in Q&A
Oct 14, 2024 · 4 comments · 1 reply

Hello everyone. There are many ways to create a depth map from any rgb image.

Here is an example of use: https://pearsonkyle.github.io/Art-with-Depth/

This would be very interesting in VR.

I want to implement this in SK. (I really like this framework.)

I don't know the algorithm for implementing such an effect.
I'm thinking of either creating a grid mesh based on the depth map and applying a regular texture,

or using a vertex shader to displace the grid's vertices (a displacement map).

I would be glad for any tips. Thank you.
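For the first approach, here is a minimal CPU-side sketch of the idea. This is an illustration, not StereoKit API: it assumes the depth image has already been decoded into a row-major float[] of values in [0, 1], and it only produces positions, which you would then feed into a mesh yourself (e.g. via something like StereoKit's Mesh.SetVerts, together with the matching index buffer).

```csharp
// Build a cols x rows grid of positions, pushing each vertex forward by the
// depth sample under it. Output is row-major: one (x, y, z) triple per vertex.
static float[,] BuildDisplacedGrid(float[] depth, int cols, int rows,
                                   float size, float depthScale)
{
    var verts = new float[cols * rows, 3];
    for (int y = 0; y < rows; y++)
    for (int x = 0; x < cols; x++)
    {
        float u = x / (float)(cols - 1); // 0..1 across the grid
        float v = y / (float)(rows - 1);
        int   i = y * cols + x;
        float d = depth[i];              // nearest-neighbor depth lookup
        verts[i, 0] = (u - 0.5f) * size; // center the plane on the origin
        verts[i, 1] = (v - 0.5f) * size;
        verts[i, 2] = d * depthScale;    // displacement along the plane normal
    }
    return verts;
}
```

A bilinear depth lookup would give smoother displacement than the nearest-neighbor sample used here, at the cost of a few more reads per vertex.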

Replies: 4 comments · 1 reply


I don't have any personal experience with generating depth from a single image, outside of Photoshop or Kinect! I expect that this would require some kind of machine learning setup with a good model. ONNX has always seemed like a good match for running ML stuff in C#, but I'm afraid I don't have much more advice than that!

Once you have the depth though, my favorite technique is to have a tessellated quad and a custom shader. Then you can just load the depth map as a texture, and use that for offsetting the verts in the vertex shader.


I didn't plan to prepare the depth map in C#.

You are right, depth maps are created using machine learning,
and there are already many such methods, with new ones appearing all the time, for example:
https://github.com/thygate/stable-diffusion-webui-depthmap-script
https://github.com/nagadomi/nunif

I am not very good at programming shaders,
but you have already pointed me in the direction to move in.

Perhaps there is an example of how to take a value from a depth map and change the position of a vertex?

@maluoi

Here's a modified (untested) version of SK's unlit shader that will offset the vertex by a matching depth texture. Note that this doesn't do any tessellation on its own, so you'll need to provide a pre-tessellated plane. SK can make one with varying levels of tessellation via Mesh.GeneratePlane.

```hlsl
#include "stereokit.hlsli"

//--color:color = 1, 1, 1, 1
//--tex_trans   = 0,0,1,1
//--depth_scale = 0.1
//--diffuse     = white
//--depth       = black
float4       color;
float4       tex_trans;
float        depth_scale;
Texture2D    diffuse   : register(t0);
SamplerState diffuse_s : register(s0);
Texture2D    depth     : register(t1);
SamplerState depth_s   : register(s1);

struct vsIn {
	float4 pos  : SV_Position;
	float3 norm : NORMAL0;
	float2 uv   : TEXCOORD0;
	float4 col  : COLOR0;
};
struct psIn {
	float4 pos     : SV_POSITION;
	float2 uv      : TEXCOORD0;
	half4  color   : COLOR0;
	uint   view_id : SV_RenderTargetArrayIndex;
};

psIn vs(vsIn input, uint id : SV_InstanceID) {
	psIn o;
	o.view_id = id % sk_view_count;
	id        = id / sk_view_count;

	// Offset the vertex by the depth texture at this UV coordinate. This may
	// need some adjustment of depth_scale to achieve the results you're
	// looking for
	input.pos += input.norm * depth.SampleLevel(depth_s, input.uv, 0).r * depth_scale;

	float4 world = mul(float4(input.pos.xyz, 1), sk_inst[id].world);
	o.pos        = mul(world,                    sk_viewproj[o.view_id]);

	o.uv    = (input.uv * tex_trans.zw) + tex_trans.xy;
	o.color = input.col * color * sk_inst[id].color;
	return o;
}

half4 ps(psIn input) : SV_TARGET {
	return diffuse.Sample(diffuse_s, input.uv) * input.color;
}
```

Thank you very much. This is exactly what I needed!!! )))


(Screenshots: rendered result, rgb image, depth map)

This is cool and it works! And yes, it looks cool in VR.

The compiler gave an error only on this line:

```hlsl
input.pos += input.norm * depth.SampleLevel(depth_s, input.uv, 0).r * depth_scale;
```

input.pos is a float4, but input.norm times a scalar is a float3, so the addition doesn't type-check.

I replaced it with this code:

```hlsl
input.pos.x += input.norm.x * depth.SampleLevel(depth_s, input.uv, 0).r * depth_scale;
input.pos.y += input.norm.y * depth.SampleLevel(depth_s, input.uv, 0).g * depth_scale;
input.pos.z += input.norm.z * depth.SampleLevel(depth_s, input.uv, 0).b * depth_scale;
```
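An equivalent one-line fix (assuming a grayscale depth map, where the r, g, and b channels hold the same value) is to offset only the vector part of the position, since input.pos is a float4:

```hlsl
// Offset only the xyz components; float3 += float3 type-checks fine.
input.pos.xyz += input.norm * depth.SampleLevel(depth_s, input.uv, 0).r * depth_scale;
```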

I create the mesh like this:

```csharp
Mesh     planeMesh    = Mesh.GeneratePlane(Vec2.One, Vec3.Forward, Vec3.Up, 1000, true);
Material rgbdMaterial = new Material("RGBDShader.hlsl");
rgbdMaterial["diffuse"]     = Tex.FromFile("\\src\\rgb.png");
rgbdMaterial["depth"]       = Tex.FromFile("\\src\\depth.png");
rgbdMaterial["depth_scale"] = 0.3f;
```

The depth map is 16-bit. I don't know whether depth.SampleLevel(depth_s, input.uv, 0) gives a 16-bit value or an 8-bit one.

There are some artifacts when viewing, but I think some more complex algorithm is needed here.

I would be very glad to receive any tips on how to improve the algorithm.

Without your example, I would have spent a long time figuring out this shader. Thank you very much! )

Category: Q&A
Labels: None yet
2 participants: @salexx @maluoi
