-
Hello everyone. There are many ways to create a depth map from any RGB image. Here is an example of one in use: https://pearsonkyle.github.io/Art-with-Depth/ This would be very interesting in VR, and I want to implement it in SK (I really like this framework). I don't know the algorithm for implementing such an effect. Or should I use a vertex shader to drive the grid (a displacement map)? I would be glad for any tips. Thank you.
Replies: 4 comments 1 reply
-
I don't have any personal experience with generating depth from a single image, outside of Photoshop or Kinect! I expect that this would require some kind of machine learning setup with a good model. ONNX has always seemed like a good match for running ML models in C#, but I'm afraid I don't have much more advice than that! Once you have the depth, though, my favorite technique is a tessellated quad and a custom shader: load the depth map as a texture, and use it to offset the verts in the vertex shader.
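As a rough illustration of the ONNX route mentioned above, here is an untested sketch using the ONNX Runtime C# package. The model file name, input tensor name, input resolution, and the `LoadAndPreprocess` helper are all assumptions, not part of any real model's API; check your model's actual metadata before using this.

```csharp
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Hypothetical monocular depth model: "midas_small.onnx" with a
// 1x3x256x256 float input named "input". Substitute your model's
// real file name, input name, and shape.
using var session = new InferenceSession("midas_small.onnx");

// 'pixels' is assumed to be the image already resized to 256x256,
// normalized to 0..1, in CHW order (R plane, then G, then B).
// LoadAndPreprocess is a hypothetical helper, not a real API.
float[] pixels = LoadAndPreprocess("photo.png");
var input = new DenseTensor<float>(pixels, new[] { 1, 3, 256, 256 });

using var results = session.Run(new[] {
    NamedOnnxValue.CreateFromTensor("input", input)
});

// The output is a per-pixel depth estimate that can be saved as a
// grayscale texture for the displacement shader discussed below.
float[] depth = results.First().AsEnumerable<float>().ToArray();
```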
-
I didn't plan to prepare the depth map in C#. You are right, depth maps are created using machine learning, and there are already many such methods, with new ones appearing all the time, for example: I am not very good at programming shaders. Perhaps there is an example of how to take a value from a depth map and change the position of a vertex?
-
Here's a modified (untested) version of SK's unlit shader that will offset each vertex by a matching depth texture. Note that this doesn't do any tessellation on its own, so you'll need to provide a pre-tessellated plane. SK can make one with varying levels of tessellation via Mesh.GeneratePlane.

```hlsl
#include "stereokit.hlsli"

//--color:color  = 1, 1, 1, 1
//--tex_trans   = 0,0,1,1
//--depth_scale = 0.1
//--diffuse = white
//--depth   = black

float4 color;
float4 tex_trans;
float  depth_scale;
Texture2D    diffuse   : register(t0);
SamplerState diffuse_s : register(s0);
// Depth gets its own register slots, separate from diffuse.
Texture2D    depth     : register(t1);
SamplerState depth_s   : register(s1);

struct vsIn {
	float4 pos  : SV_Position;
	float3 norm : NORMAL0;
	float2 uv   : TEXCOORD0;
	float4 col  : COLOR0;
};
struct psIn {
	float4 pos     : SV_POSITION;
	float2 uv      : TEXCOORD0;
	half4  color   : COLOR0;
	uint   view_id : SV_RenderTargetArrayIndex;
};

psIn vs(vsIn input, uint id : SV_InstanceID) {
	psIn o;
	o.view_id = id % sk_view_count;
	id        = id / sk_view_count;

	// Offset the vertex by the depth texture at this UV coordinate. This
	// may need some adjustment of depth_scale to achieve the results
	// you're looking for. Only .xyz is offset, since norm is a float3.
	input.pos.xyz += input.norm * depth.SampleLevel(depth_s, input.uv, 0).r * depth_scale;

	float4 world = mul(float4(input.pos.xyz, 1), sk_inst[id].world);
	o.pos        = mul(world, sk_viewproj[o.view_id]);

	o.uv    = (input.uv * tex_trans.zw) + tex_trans.xy;
	o.color = input.col * color * sk_inst[id].color;
	return o;
}

half4 ps(psIn input) : SV_TARGET {
	return diffuse.Sample(diffuse_s, input.uv) * input.color;
}
```
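To round this out, here is a hedged sketch of how a shader like this might be hooked up from C#. The file names are assumptions; the material parameter names (`diffuse`, `depth`, `depth_scale`) just need to match whatever the shader declares.

```csharp
using StereoKit;

SK.Initialize(new SKSettings { appName = "DepthDisplace" });

// Assumed asset file names: the HLSL above saved as depth_displace.hlsl,
// plus a color image and its matching depth map.
Material displace = new Material(Shader.FromFile("depth_displace.hlsl"));
displace["diffuse"]     = Tex.FromFile("photo.png");
displace["depth"]       = Tex.FromFile("photo_depth.png");
displace["depth_scale"] = 0.1f;

// A densely subdivided plane gives the vertex shader verts to push
// around; higher subdivision counts capture finer depth detail.
Mesh plane = Mesh.GeneratePlane(Vec2.One, 64);

SK.Run(() => {
    plane.Draw(displace, Matrix.TS(new Vec3(0, 1.5f, -0.6f), 1));
});
```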
-
Thank you very much. This is exactly what I needed!!! )))
-
This is cool and it works! And yes, it looks cool in VR. The compiler gave an error on only one line: the one where `input.norm`, a float3, is multiplied by a scalar value and added to the position. I replaced it with this code: I create the mesh like this: The depth map is 16-bit. I don't know whether `depth.SampleLevel(depth_s, input.uv, 0)` gives a 16-bit value or an 8-bit one. There are some artifacts when viewing, but I think some more complex algorithm is needed here. I would be very glad to receive any tips on how to improve the algorithm. Without your example, I would have spent a long time figuring out this shader, thank you very much! )
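For reference, the type error described above comes from adding a float3 result to the float4 position. The poster's replacement code isn't shown, but a minimal fix (an assumption on my part) is to offset only the `.xyz` components:

```hlsl
// input.norm is a float3, so the product can't be added to the float4
// position directly; offset only the xyz components.
input.pos.xyz += input.norm * depth.SampleLevel(depth_s, input.uv, 0).r * depth_scale;
```

On the bit-depth question: `SampleLevel` returns a normalized float regardless of the texture's storage format, so the precision you get depends on how the texture was uploaded. If a 16-bit depth map is loaded into an 8-bit texture format, the quantization can show up as exactly the kind of stair-step artifacts described, so it is worth checking that the texture is created with a 16-bit single-channel format.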


