Aaron Powell for Microsoft Azure
Originally published at aaron-powell.com

Building a Video Chat App, Part 3 - Displaying Video

On my Twitch channel we’re continuing to build our video chat application on Azure Communication Services (ACS).

Last time we learnt how to access the camera and microphone using the ACS SDK, and today we’ll look at displaying that camera feed on the screen.

Displaying Video

As we learnt in the last post, cameras are available via a MediaStream in the browser, which we get when the user grants us access to their cameras. With raw JavaScript this can be set as the srcObject of a <video> element and the camera feed is displayed. But there's some orchestration code to set up and events to handle, so thankfully ACS gives us an API to work with: LocalVideoStream and Renderer.
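For comparison, here's a minimal sketch of that raw browser approach, with no ACS involved (the function and element names are just for illustration):

```typescript
// A minimal sketch of the raw browser approach: request a camera MediaStream
// and attach it to a <video> element ourselves, with none of the ACS plumbing.
async function showRawCamera(videoElement: HTMLVideoElement): Promise<void> {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    videoElement.srcObject = stream;
    await videoElement.play();
}
```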

Creating a LocalVideoStream

The LocalVideoStream type requires a VideoDeviceInfo to be provided to it, and this type is what we get back from the DeviceManager (well, we get an array of them; you then pick the one you want).
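As a minimal sketch (not the app's actual code), picking a camera out of that array and wrapping it in a LocalVideoStream looks something like this, where the cameras array is the one we retrieved from the DeviceManager last time:

```typescript
import { LocalVideoStream, VideoDeviceInfo } from "@azure/communication-calling";

// Hypothetical helper: given the VideoDeviceInfo[] from the DeviceManager,
// pick one (here just the first) and wrap it in a LocalVideoStream.
function createStreamFor(cameras: VideoDeviceInfo[]): LocalVideoStream | undefined {
    const camera = cameras[0]; // in the app, this comes from the user's selection
    return camera ? new LocalVideoStream(camera) : undefined;
}
```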

We'll start by creating a new React context which will contain all the information that a user has selected for the current call.

```typescript
export type UserCallSettingsContextType = {
    setCurrentCamera: (camera?: VideoDeviceInfo) => void;
    setCurrentMic: (mic?: AudioDeviceInfo) => void;
    setName: (name: string) => void;
    setCameraEnabled: (enabled: boolean) => void;
    setMicEnabled: (enabled: boolean) => void;
    currentCamera?: VideoDeviceInfo;
    currentMic?: AudioDeviceInfo;
    videoStream?: LocalVideoStream;
    name: string;
    cameraEnabled: boolean;
    micEnabled: boolean;
};

const nie = <T extends unknown>(_: T): void => {
    throw Error("Not Implemented");
};

const UserCallSettingsContext = createContext<UserCallSettingsContextType>({
    setCurrentCamera: nie,
    setCurrentMic: nie,
    setName: nie,
    setCameraEnabled: nie,
    setMicEnabled: nie,
    name: "",
    cameraEnabled: false,
    micEnabled: false
});
```

Note: nie is a stub function that throws an exception; I'm using it as the default implementation for the hook's setter functions.

The context will provide a few other pieces of data that the user is selecting, such as their preferred mic and their name, but we're really focusing on the videoStream that will be exposed.

Now let's implement the context provider:

```tsx
export const UserCallSettingsContextProvider = (props: {
    children: React.ReactNode;
}) => {
    const [currentCamera, setCurrentCamera] = useState<VideoDeviceInfo>();
    const [currentMic, setCurrentMic] = useState<AudioDeviceInfo>();
    const [videoStream, setVidStream] = useState<LocalVideoStream>();
    const { clientPrincipal } = useAuthenticationContext();
    const [name, setName] = useState("");
    const [cameraEnabled, setCameraEnabled] = useState(true);
    const [micEnabled, setMicEnabled] = useState(true);

    useEffect(() => {
        if (clientPrincipal && !name) {
            setName(clientPrincipal.userDetails);
        }
    }, [clientPrincipal, name]);

    useEffect(() => {
        // TODO - handle camera selection
    }, [currentCamera, videoStream]);

    return (
        <UserCallSettingsContext.Provider
            value={{
                setCurrentCamera,
                setCurrentMic,
                currentCamera,
                currentMic,
                videoStream,
                setName,
                name,
                setCameraEnabled,
                cameraEnabled,
                setMicEnabled,
                micEnabled
            }}
        >
            {props.children}
        </UserCallSettingsContext.Provider>
    );
};

export const useUserCallSettingsContext = () =>
    useContext(UserCallSettingsContext);
```
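To make the context available, we wrap the relevant part of the application in the provider. A rough usage sketch (the component tree here is illustrative, not the sample app's exact layout) would be:

```tsx
// Illustrative only: wrap the app (or the pre-call screen) in the provider so
// that components like the VideoStream component we'll build below can read
// the selected devices via useUserCallSettingsContext.
const App = () => (
    <UserCallSettingsContextProvider>
        <VideoStream />
    </UserCallSettingsContextProvider>
);
```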

When the currentCamera is changed (by user selection or otherwise) we're going to want to update the LocalVideoStream, and that's the missing useEffect implementation. First off, we'll need to create one if it doesn't exist, but since we can't create it until there's a selected camera, we'll check for that:

```typescript
useEffect(() => {
    if (currentCamera && !videoStream) {
        const lvs = new LocalVideoStream(currentCamera);
        setVidStream(lvs);
    }
}, [currentCamera, videoStream]);
```

Using the LocalVideoStream

We've got ourselves a video stream, but what do we do with it? We need to create a Renderer that will handle the DOM elements for us.

Let's create a component that uses the context to access the LocalVideoStream:

```tsx
const VideoStream = () => {
    const { videoStream } = useUserCallSettingsContext();

    return <div>Show video here</div>;
};

export default VideoStream;
```

The Renderer, which we're going to create shortly, gives us a DOM element that we need to inject into the DOM that React is managing for us, and to do that we'll need access to that DOM element, which we obtain using a ref.

```tsx
const VideoStream = () => {
    const { videoStream } = useUserCallSettingsContext();
    const vidRef = useRef<HTMLDivElement>(null);

    return <div ref={vidRef}>Show video here</div>;
};
```

Since our videoStream might be null (the camera is off or just unselected), we'll only create the Renderer when needed:

```tsx
const VideoStream = () => {
    const { videoStream } = useUserCallSettingsContext();
    const vidRef = useRef<HTMLDivElement>(null);
    const [renderer, setRenderer] = useState<Renderer>();

    useEffect(() => {
        if (videoStream && !renderer) {
            setRenderer(new Renderer(videoStream));
        }
    }, [videoStream, renderer]);

    return <div ref={vidRef}>Show video here</div>;
};
```

With the Renderer created, the next thing to do is request a view from it, which displays the camera feed. We'll do this in a separate hook for simplicity's sake:

```tsx
const VideoStream = () => {
    const { videoStream } = useUserCallSettingsContext();
    const vidRef = useRef<HTMLDivElement>(null);
    const [renderer, setRenderer] = useState<Renderer>();

    useEffect(() => {
        if (videoStream && !renderer) {
            setRenderer(new Renderer(videoStream));
        }
    }, [videoStream, renderer]);

    useEffect(() => {
        if (renderer) {
            renderer.createView().then((view) => {
                vidRef.current!.appendChild(view.target);
            });
        }

        return () => {
            if (renderer) {
                renderer.dispose();
            }
        };
    }, [renderer, vidRef]);

    return <div ref={vidRef}></div>;
};
```

The createView method from the Renderer returns a Promise<RendererView> that has information on the scaling mode and whether the video is mirrored (so you could apply your own mirror transform), as well as the target DOM element, which we can append to the children of the DOM element captured via the vidRef ref. You'll notice that I'm using !. before appendChild, and this is to trick the TypeScript compiler, as it doesn't properly understand the useRef assignment. Yes, it's true that vidRef.current could be null (its default value), but that would require the hooks and Promise to execute synchronously, which isn't possible, so we can override the type check using the ! postfix assertion.
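For example, if you did want to apply your own mirror transform, a hedged sketch of that same then callback might look like this (the isMirrored property name is my assumption about how the view exposes the mirroring information described above):

```typescript
// Hedged sketch: flip the rendered element ourselves if the view reports that
// it's mirrored, then attach it to the div captured by vidRef as before.
renderer.createView().then((view) => {
    if (view.isMirrored) {
        (view.target as HTMLElement).style.transform = "scaleX(-1)";
    }
    vidRef.current!.appendChild(view.target);
});
```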

Changing Camera Feeds

It's possible that someone has multiple cameras on their machine and wants to switch between them, so how would you go about doing that?

The first thought might be that we create a new LocalVideoStream and Renderer, but it's actually a lot simpler than that: the LocalVideoStream provides a switchSource method that will change the underlying camera source and, in turn, cascade that across to the Renderer.

We'll update our context with that support:

```typescript
useEffect(() => {
    if (currentCamera && !videoStream) {
        const lvs = new LocalVideoStream(currentCamera);
        setVidStream(lvs);
    } else if (
        currentCamera &&
        videoStream &&
        videoStream.getSource() !== currentCamera
    ) {
        videoStream.switchSource(currentCamera);
    }
}, [currentCamera, videoStream]);
```

This new conditional branch makes sure we have a camera and a video stream, and that the selected camera isn't already the stream's source (that last check was a side effect of React hooks and not something you'd necessarily need to do). That's all we need for switching; we don't need to touch our Renderer at all.
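To tie this back to the UI, a camera picker only has to call setCurrentCamera and the effect above takes care of the rest. Here's a hedged sketch (this component and its cameras prop are my own illustration, assuming VideoDeviceInfo exposes id and name):

```tsx
// Illustrative camera picker: selecting an option updates currentCamera in the
// context, which triggers the switchSource branch in the effect above.
const CameraSelector = (props: { cameras: VideoDeviceInfo[] }) => {
    const { currentCamera, setCurrentCamera } = useUserCallSettingsContext();

    return (
        <select
            value={currentCamera?.id}
            onChange={(e) =>
                setCurrentCamera(props.cameras.find((c) => c.id === e.target.value))
            }
        >
            {props.cameras.map((camera) => (
                <option key={camera.id} value={camera.id}>
                    {camera.name}
                </option>
            ))}
        </select>
    );
};
```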

Conclusion

There we have it: we're now displaying the camera feed and you can see yourself. Using the LocalVideoStream and Renderer from the ACS SDK makes it a lot simpler to handle the events and lifecycle of the objects we need to work with.

If you want to see the full code from the sample application we're building, you'll find it on my GitHub.

If you want to catch up on the whole episode, as well as look at how we integrate this into the overall React application, you can catch the recording on YouTube, along with the full playlist.
