A JavaScript client library for integrating multi-party communications powered by the Amazon Chime service.
Build video calling, audio calling, messaging, and screen sharing applications powered by the Amazon Chime SDK
The Amazon Chime SDK is a set of real-time communications components that developers can use to quickly add messaging, audio, video, and screen sharing capabilities to their web or mobile applications.
Developers can build on AWS's global communications infrastructure to deliver engaging experiences in their applications. For example, they can add video to a health application so patients can consult remotely with doctors on health issues, or create customized audio prompts for integration with the public telephone network.
The Amazon Chime SDK for JavaScript works by connecting to meeting session resources that you create in your AWS account. The SDK has everything you need to build custom calling and collaboration experiences in your web application, including methods to configure meeting sessions, list and select audio and video devices, start and stop screen share and screen share viewing, receive callbacks when media events such as volume changes occur, and control meeting features such as audio mute and video tile bindings.
If you are building a React application, consider using the Amazon Chime SDK React Component Library, which supplies client-side state management and reusable UI components for common web interfaces used in audio and video conferencing applications. Amazon Chime also offers the Amazon Chime SDK for iOS and the Amazon Chime SDK for Android for native mobile application development.
The Amazon Chime SDK Project Board captures the status of community feature requests across all our repositories. The descriptions of the columns on the board are captured in this guide.
- Amazon Chime SDK Overview
- Understanding security in Amazon Chime Application and SDK
- Pricing
- Supported Browsers
- Getting Started Guides
- Developer Guide
- Control Plane API Reference
- Frequently Asked Questions (FAQ)
In addition to the posts below, here is a list of all blog posts about the Amazon Chime SDK.
- Transforming Audio and Shared Content
- Quickly Launch an Amazon Chime SDK Application With AWS Amplify
- Capturing Amazon Chime SDK Meeting Content
- Monitoring and Troubleshooting With Amazon Chime SDK Meeting Events
- Build Meetings features into your Amazon Chime SDK messaging application
- Using the Amazon Chime SDK to Create Automated Outbound Call Notifications
- Building voice menus and call routing with the Amazon Chime SDK
- Use channel flows to remove profanity and sensitive content from messages in Amazon Chime SDK messaging
- Automated Moderation and Sentiment Analysis Blog (example using Kinesis Data Streams)
- Build chat applications in iOS and Android with Amazon Chime SDK messaging
- Building chat features into your application with Amazon Chime SDK messaging
- Integrate your Identity Provider with Amazon Chime SDK Messaging
- Creating Read-Only Chat Channels for Announcements
- Real-time Collaboration Using Amazon Chime SDK messaging
- Building a Live Streaming Chat Application
- Capture Amazon Chime SDK Meetings Using Media Capture Pipelines
- Amazon Chime SDK launches live connector for streaming
The following developer guides cover specific topics for a technical audience.
- API Overview
- Frequently Asked Questions (FAQ)
- Content Share
- Quality, Bandwidth, and Connectivity
- Simulcast
- Meeting Events
- Integrating Amazon Voice Focus and Echo Reduction Into Your Application
- Adding Frame-By-Frame Processing to an Outgoing Video Stream
- Adding Background Filtering to an Outgoing Video Stream
- Adapting Video to Limited Bandwidth Using a Priority-Based Video Downlink Policy
- Client Event Ingestion
- Content Security Policy
- Managing Video Quality for Different Video Layouts
The following developer guides cover the Amazon Chime SDK more broadly.
- Amazon Chime SDK Samples — Amazon Chime SDK Samples repository
- Meeting Demo — A browser meeting application with a local server
- Serverless Meeting Demo — A self-contained serverless meeting application
- Single JS — A script to bundle the SDK into a single `.js` file
- Transcription and Media Capture Demo — A demo showcasing transcription and media capture capabilities
- Virtual Classroom — An online classroom built with Electron and React
- Live Events — Interactive live events solution
- Amazon Chime SDK Smart Video Sending Demo — Demo showcasing how to dynamically display up to 25 video tiles from a pool of up to 250 meeting attendees
- Amazon Chime SDK and Amazon Connect Integration — Build a video contact center with Amazon Connect and Amazon Chime SDK
- Device Integration — Using the Amazon Chime SDK for 3rd party devices
- Messaging — Build chat features into your application with Amazon Chime SDK messaging
- Load Testing Applications — A tool to load test audio-video communication applications
- PSTN Dial In — Add PSTN dial-in capabilities to your Amazon Chime SDK Meeting using SIP media application
- Outbound Call Notifications — Send meeting reminders with SIP media application and get real time results back
- Update In-Progress Call - Update an in-progress SIP media application call via API call
Review the resources given in the README and use our client documentation for guidance on how to develop on the Chime SDK for JavaScript. Additionally, search our issues database and FAQs to see if your issue is already addressed. If not, please open an issue using the provided templates.
The blog post Monitoring and Troubleshooting With Amazon Chime SDK Meeting Events goes into detail about how to use meeting events to troubleshoot your application by logging to Amazon CloudWatch.
If you have more questions, or require support for your business, you can reach out to AWS Customer Support. You can review our support plans here.
The Amazon Chime SDK for JavaScript uses WebRTC, the real-time communication API supported in most modern browsers. Here are some general resources on WebRTC.
- WebRTC Basics
- WebRTC Org - Getting started, presentation, samples, tutorials, books and more resources
- High Performance Browser Networking - WebRTC (Browser APIs and Protocols)
- MDN - WebRTC APIs
Make sure you have Node.js version 18 or higher. Node 20 is recommended and supported.
To add the Amazon Chime SDK for JavaScript into an existing application, install the package directly from npm:
npm install amazon-chime-sdk-js --save
Note that the Amazon Chime SDK for JavaScript targets ES2015, which is fully compatible with all supported browsers.
Create a meeting session in your client application.
```js
import {
  ConsoleLogger,
  DefaultDeviceController,
  DefaultMeetingSession,
  LogLevel,
  MeetingSessionConfiguration
} from 'amazon-chime-sdk-js';

const logger = new ConsoleLogger('MyLogger', LogLevel.INFO);
const deviceController = new DefaultDeviceController(logger);

// You need responses from server-side Chime API. See below for details.
const meetingResponse = /* The response from the CreateMeeting API action */;
const attendeeResponse = /* The response from the CreateAttendee or BatchCreateAttendee API action */;
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);

// In the usage examples below, you will use this meetingSession object.
const meetingSession = new DefaultMeetingSession(configuration, logger, deviceController);
```
You can use an AWS SDK, the AWS Command Line Interface (AWS CLI), or the REST API to make API calls. In this section, you will use the AWS SDK for JavaScript in your server application, e.g. Node.js. See the Amazon Chime SDK API Reference for more information.
⚠️ The server application does not require the Amazon Chime SDK for JavaScript.
```js
const AWS = require('aws-sdk');
const { v4: uuid } = require('uuid');

// You must use "us-east-1" as the region for Chime API and set the endpoint.
const chime = new AWS.ChimeSDKMeetings({ region: 'us-east-1' });

const meetingResponse = await chime
  .createMeeting({
    ClientRequestToken: uuid(),
    MediaRegion: 'us-west-2', // Specify the region in which to create the meeting.
  })
  .promise();

const attendeeResponse = await chime
  .createAttendee({
    MeetingId: meetingResponse.Meeting.MeetingId,
    ExternalUserId: uuid(), // Link the attendee to an identity managed by your application.
  })
  .promise();
```
Now securely transfer the `meetingResponse` and `attendeeResponse` objects to your client application. These objects contain all the information needed for a client application using the Amazon Chime SDK for JavaScript to join the meeting.
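For example, a server endpoint can create the meeting and attendee and return both responses to the browser in a single JSON payload. This is a minimal sketch only; the Express app, route path, and `createMeetingAndAttendee` helper (which wraps the `createMeeting`/`createAttendee` calls shown above) are hypothetical:

```js
// Hypothetical Express route that returns the CreateMeeting and CreateAttendee
// responses produced by the server-side code above to the client application.
const express = require('express');
const app = express();

app.post('/join', async (req, res) => {
  try {
    // Placeholder for the chime.createMeeting / chime.createAttendee calls shown above.
    const { meetingResponse, attendeeResponse } = await createMeetingAndAttendee();
    res.json({ JoinInfo: { Meeting: meetingResponse, Attendee: attendeeResponse } });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(8080);
```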
The value of the MediaRegion parameter in the CreateMeeting API call should ideally be set to the media region closest to the user creating the meeting. An implementation can be found under the topic "Choosing the nearest media Region" in the Amazon Chime SDK Media Regions documentation.
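As a sketch, that documentation topic describes an endpoint that returns the caller's nearest media region; something along these lines could run in the browser before asking your server to create the meeting. The endpoint URL, response shape, and fallback region below are assumptions to verify against that documentation:

```js
// Ask the nearest-media-region endpoint (see the Media Regions documentation)
// which media region is closest to this client, falling back to a default on failure.
async function getNearestMediaRegion() {
  try {
    const response = await fetch('https://nearest-media-region.l.chime.aws', { method: 'GET' });
    const body = await response.json();
    return body.region; // e.g. 'us-west-2'
  } catch (error) {
    console.error('Could not determine the nearest media region', error);
    return 'us-east-1'; // Fallback; choose a region appropriate for your users.
  }
}
```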
Create a messaging session in your client application to receive messages from Amazon Chime SDK for Messaging.
```js
import { ChimeSDKMessagingClient } from '@aws-sdk/client-chime-sdk-messaging';

import {
  ConsoleLogger,
  DefaultMessagingSession,
  LogLevel,
  MessagingSessionConfiguration,
} from 'amazon-chime-sdk-js';

const logger = new ConsoleLogger('SDK', LogLevel.INFO);

// You will need AWS credentials configured before calling AWS or Amazon Chime APIs.
const chime = new ChimeSDKMessagingClient({ region: 'us-east-1' });

const userArn = /* The userArn */;
const sessionId = /* The sessionId */;
const configuration = new MessagingSessionConfiguration(userArn, sessionId, undefined, chime);
const messagingSession = new DefaultMessagingSession(configuration, logger);
```
If you would like to enable the prefetch feature when connecting to a messaging session, you can follow the code below. The prefetch feature sends out a CHANNEL_DETAILS event upon websocket connection, which includes information about the channel, channel messages, channel memberships, etc. The prefetch sort order can be adjusted with `prefetchSortBy`, setting it to either `unread` (the default value if not set) or `lastMessageTimestamp`.
```js
configuration.prefetchOn = Prefetch.Connect;
configuration.prefetchSortBy = PrefetchSortBy.Unread;
```
```
git fetch --tags https://github.com/aws/amazon-chime-sdk-js
npm run build
npm run test
```
After running `npm run test` the first time, you can use `npm run test:fast` to speed up the test suite.
Tags are fetched in order to correctly generate versioning metadata.
To view code coverage results, open `coverage/index.html` in your browser after running `npm run test`.
If you run `npm run test` and the tests run but the coverage report is not generated, you might have a resource cleanup issue. In Mocha v4.0.0 or newer, the implementation was changed so that Mocha processes do not force exit when the test run is complete.
For example, if you have a `DefaultVideoTransformDevice` in your unit test, then you must call `await device.stop();` to clean up the resources and not run into this issue. You can also look into the usage of `done();` in the Mocha documentation.
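For instance, a test that creates a `DefaultVideoTransformDevice` could release it in an `afterEach` hook so the Mocha process can exit and emit coverage. This is a minimal sketch; `logger` and `myProcessor` are placeholders for objects defined in your own test:

```js
import { DefaultVideoTransformDevice } from 'amazon-chime-sdk-js';

describe('my video transform tests', () => {
  let device;

  afterEach(async () => {
    if (device) {
      // Release the worker and media resources held by the transform device
      // so the test process can exit cleanly and coverage can be written.
      await device.stop();
      device = undefined;
    }
  });

  it('creates a transform device', async () => {
    device = new DefaultVideoTransformDevice(logger, 'camera-device-id', [myProcessor]);
    // ... exercise the device ...
  });
});
```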
To generate JavaScript API reference documentation run:
```
npm run build
npm run doc
```
Then open `docs/index.html` in your browser.
If you discover a potential security issue in this project, we ask that you notify AWS/Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.
- Device
- Starting a session
- Audio
- Video
- Screen and content share
- Attendees
- Monitoring and alerts
- Stopping a session
- Meeting readiness checker
- Selecting an Audio Profile
- Starting a Messaging Session
- Providing application metadata
Note: Before starting a session, you need to choose your microphone, speaker, and camera.
Use case 1. List audio input, audio output, and video input devices. The browser will ask for microphone and camera permissions.
With the `forceUpdate` parameter set to true, cached device information is discarded and updated after the device label trigger is called. In some cases, builders need to delay the triggering of permission dialogs, e.g., when joining a meeting in view-only mode, and then later be able to trigger a permission prompt in order to show device labels; specifying `forceUpdate` allows this to occur.
```js
const audioInputDevices = await meetingSession.audioVideo.listAudioInputDevices();
const audioOutputDevices = await meetingSession.audioVideo.listAudioOutputDevices();
const videoInputDevices = await meetingSession.audioVideo.listVideoInputDevices();

// An array of MediaDeviceInfo objects
audioInputDevices.forEach(mediaDeviceInfo => {
  console.log(`Device ID: ${mediaDeviceInfo.deviceId} Microphone: ${mediaDeviceInfo.label}`);
});
```
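If you later want to refresh the cached list (for example, after the user grants permissions), the `forceUpdate` behavior described above can be applied when listing devices. A minimal sketch, assuming `forceUpdate` is accepted as an optional argument in your SDK version:

```js
// Discard cached device information and re-run the device label trigger
// before returning the list (forceUpdate = true).
const refreshedAudioInputDevices = await meetingSession.audioVideo.listAudioInputDevices(true);
```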
Use case 2. Choose audio input and audio output devices by passing the `deviceId` of a `MediaDeviceInfo` object. Note that you need to call `listAudioInputDevices` and `listAudioOutputDevices` first.
```js
const audioInputDeviceInfo = /* An array item from meetingSession.audioVideo.listAudioInputDevices */;
await meetingSession.audioVideo.startAudioInput(audioInputDeviceInfo.deviceId);

const audioOutputDeviceInfo = /* An array item from meetingSession.audioVideo.listAudioOutputDevices */;
await meetingSession.audioVideo.chooseAudioOutput(audioOutputDeviceInfo.deviceId);
```
Use case 3. Choose a video input device by passing the `deviceId` of a `MediaDeviceInfo` object. Note that you need to call `listVideoInputDevices` first.
If there is an LED light next to the attendee's camera, it will be turned on, indicating that it is now capturing from the camera. You probably want to choose a video input device when you start sharing your video.
```js
const videoInputDeviceInfo = /* An array item from meetingSession.audioVideo.listVideoInputDevices */;
await meetingSession.audioVideo.startVideoInput(videoInputDeviceInfo.deviceId);

// Stop video input. If the previously chosen camera has an LED light on,
// it will turn off indicating the camera is no longer capturing.
await meetingSession.audioVideo.stopVideoInput();
```
Use case 4. Add a device change observer to receive the updated device list. For example, when you pair Bluetooth headsets with your computer, `audioInputsChanged` and `audioOutputsChanged` are called with the device list including the headsets.
You can use the `audioInputMuteStateChanged` callback to track the underlying hardware mute state on browsers and operating systems that support that.
```js
const observer = {
  audioInputsChanged: freshAudioInputDeviceList => {
    // An array of MediaDeviceInfo objects
    freshAudioInputDeviceList.forEach(mediaDeviceInfo => {
      console.log(`Device ID: ${mediaDeviceInfo.deviceId} Microphone: ${mediaDeviceInfo.label}`);
    });
  },
  audioOutputsChanged: freshAudioOutputDeviceList => {
    console.log('Audio outputs updated: ', freshAudioOutputDeviceList);
  },
  videoInputsChanged: freshVideoInputDeviceList => {
    console.log('Video inputs updated: ', freshVideoInputDeviceList);
  },
  audioInputMuteStateChanged: (device, muted) => {
    console.log('Device', device, muted ? 'is muted in hardware' : 'is not muted');
  },
};

meetingSession.audioVideo.addDeviceChangeObserver(observer);
```
Use case 5. Start a session. To hear audio, you need to bind a device and stream to an `<audio>` element. Once the session has started, you can talk and listen to attendees. Make sure you have chosen your microphone and speaker (see the "Device" section), and that at least one other attendee has joined the session.
```js
const audioElement = /* HTMLAudioElement object e.g. document.getElementById('audio-element-id') */;
meetingSession.audioVideo.bindAudioElement(audioElement);

const observer = {
  audioVideoDidStart: () => {
    console.log('Started');
  }
};

meetingSession.audioVideo.addObserver(observer);

meetingSession.audioVideo.start();
```
Use case 6. Add an observer to receive session lifecycle events: connecting, start, and stop.
Note: You can remove an observer by calling `meetingSession.audioVideo.removeObserver(observer)`. In a component-based architecture (such as React, Vue, and Angular), you may need to add an observer when a component is mounted, and remove it when unmounted.
```js
const observer = {
  audioVideoDidStart: () => {
    console.log('Started');
  },
  audioVideoDidStop: sessionStatus => {
    // See the "Stopping a session" section for details.
    console.log('Stopped with a session status code: ', sessionStatus.statusCode());
  },
  audioVideoDidStartConnecting: reconnecting => {
    if (reconnecting) {
      // e.g. the WiFi connection is dropped.
      console.log('Attempting to reconnect');
    }
  },
};

meetingSession.audioVideo.addObserver(observer);
```
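As a sketch of the component-based pattern mentioned in the note above, a React component might add the observer on mount and remove it on unmount. The component name and the way `meetingSession` is passed in are hypothetical:

```js
import { useEffect } from 'react';

function MeetingLifecycleLogger({ meetingSession }) {
  useEffect(() => {
    const observer = {
      audioVideoDidStart: () => console.log('Started'),
      audioVideoDidStop: sessionStatus =>
        console.log('Stopped with a session status code: ', sessionStatus.statusCode()),
    };
    // Add the observer when the component mounts...
    meetingSession.audioVideo.addObserver(observer);
    // ...and remove it when the component unmounts.
    return () => meetingSession.audioVideo.removeObserver(observer);
  }, [meetingSession]);

  return null;
}
```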
Note: So far, you've added observers to receive device and session lifecycle events.In the following use cases, you'll use the real-time API methods to send and receive volume indicators and control mute state.
Use case 7. Mute and unmute an audio input.
```js
// Mute
meetingSession.audioVideo.realtimeMuteLocalAudio();

// Unmute
const unmuted = meetingSession.audioVideo.realtimeUnmuteLocalAudio();
if (unmuted) {
  console.log('Other attendees can hear your audio');
} else {
  // See the realtimeSetCanUnmuteLocalAudio use case below.
  console.log('You cannot unmute yourself');
}
```
Use case 8. To check whether the local microphone is muted, use this method rather than keeping track of your own mute state.
```js
const muted = meetingSession.audioVideo.realtimeIsLocalAudioMuted();
if (muted) {
  console.log('You are muted');
} else {
  console.log('Other attendees can hear your audio');
}
```
Use case 9. Disable unmute. If you want to prevent users from unmuting themselves (for example during a presentation), use these methods rather than keeping track of your own can-unmute state.
```js
meetingSession.audioVideo.realtimeSetCanUnmuteLocalAudio(false);

// Optional: Force mute.
meetingSession.audioVideo.realtimeMuteLocalAudio();

const unmuted = meetingSession.audioVideo.realtimeUnmuteLocalAudio();
console.log(`${unmuted} is false. You cannot unmute yourself`);
```
Use case 10. Subscribe to volume changes of a specific attendee. You can use this to build a real-time volume indicator UI.
```js
import { DefaultModality } from 'amazon-chime-sdk-js';

// This is your attendee ID. You can also subscribe to another attendee's ID.
// See the "Attendees" section for an example on how to retrieve other attendee IDs
// in a session.
const presentAttendeeId = meetingSession.configuration.credentials.attendeeId;

meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
  presentAttendeeId,
  (attendeeId, volume, muted, signalStrength) => {
    const baseAttendeeId = new DefaultModality(attendeeId).base();
    if (baseAttendeeId !== attendeeId) {
      // See the "Screen and content share" section for details.
      console.log(`The volume of ${baseAttendeeId}'s content changes`);
    }

    // A null value for any field means that it has not changed.
    console.log(`${attendeeId}'s volume data: `, {
      volume, // a fraction between 0 and 1
      muted, // a boolean
      signalStrength, // 0 (no signal), 0.5 (weak), 1 (strong)
    });
  }
);
```
Use case 11. Subscribe to mute or signal strength changes of a specific attendee. You can use this to build UI for only mute or only signal strength changes.
```js
// This is your attendee ID. You can also subscribe to another attendee's ID.
// See the "Attendees" section for an example on how to retrieve other attendee IDs
// in a session.
const presentAttendeeId = meetingSession.configuration.credentials.attendeeId;

// To track mute changes
meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
  presentAttendeeId,
  (attendeeId, volume, muted, signalStrength) => {
    // A null value for volume, muted and signalStrength field means that it has not changed.
    if (muted === null) {
      // muted state has not changed, ignore volume and signalStrength changes
      return;
    }

    // mute state changed
    console.log(`${attendeeId}'s mute state changed: `, {
      muted, // a boolean
    });
  }
);

// To track signal strength changes
meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
  presentAttendeeId,
  (attendeeId, volume, muted, signalStrength) => {
    // A null value for volume, muted and signalStrength field means that it has not changed.
    if (signalStrength === null) {
      // signalStrength has not changed, ignore volume and muted changes
      return;
    }

    // signal strength changed
    console.log(`${attendeeId}'s signal strength changed: `, {
      signalStrength, // 0 (no signal), 0.5 (weak), 1 (strong)
    });
  }
);
```
Use case 12. Detect the most active speaker. For example, you can enlarge the active speaker's video element if available.
```js
import { DefaultActiveSpeakerPolicy } from 'amazon-chime-sdk-js';

const activeSpeakerCallback = attendeeIds => {
  if (attendeeIds.length) {
    console.log(`${attendeeIds[0]} is the most active speaker`);
  }
};

meetingSession.audioVideo.subscribeToActiveSpeakerDetector(
  new DefaultActiveSpeakerPolicy(),
  activeSpeakerCallback
);
```
Note: In Chime SDK terms, a video tile is an object containing an attendee ID, a video stream, etc. To view a video in your application, you must bind a tile to a `<video>` element.

- Make sure you bind a tile to the same video element until the tile is removed.
- A local video tile can be identified using the `localTile` property.
- A tile is created with a new tile ID when the same remote attendee restarts the video.
- Media Capture Pipeline relies on the meeting session to get the attendee info. After calling `this.meetingSession.audioVideo.start();`, wait for the `audioVideoDidStart` event to be received before calling `startLocalVideoTile`.
Use case 13. Start sharing your video. The local video element is flipped horizontally (mirrored mode).
```js
const videoElement = /* HTMLVideoElement object e.g. document.getElementById('video-element-id') */;

// Make sure you have chosen your camera. In this use case, you will choose the first device.
const videoInputDevices = await meetingSession.audioVideo.listVideoInputDevices();

// The camera LED light will turn on indicating that it is now capturing.
// See the "Device" section for details.
await meetingSession.audioVideo.startVideoInput(videoInputDevices[0].deviceId);

const observer = {
  // videoTileDidUpdate is called whenever a new tile is created or tileState changes.
  videoTileDidUpdate: tileState => {
    // Ignore a tile without attendee ID and other attendee's tile.
    if (!tileState.boundAttendeeId || !tileState.localTile) {
      return;
    }

    meetingSession.audioVideo.bindVideoElement(tileState.tileId, videoElement);
  }
};

meetingSession.audioVideo.addObserver(observer);

meetingSession.audioVideo.startLocalVideoTile();
```
Use case 14. Stop sharing your video.
```js
const videoElement = /* HTMLVideoElement object e.g. document.getElementById('video-element-id') */;

let localTileId = null;
const observer = {
  videoTileDidUpdate: tileState => {
    // Ignore a tile without attendee ID and other attendee's tile.
    if (!tileState.boundAttendeeId || !tileState.localTile) {
      return;
    }

    // videoTileDidUpdate is also invoked when you call startLocalVideoTile or tileState changes.
    // The tileState.active can be false in poor Internet connection, when the user paused the video tile,
    // or when the video tile first arrived.
    console.log(`If you called stopLocalVideoTile, ${tileState.active} is false.`);
    meetingSession.audioVideo.bindVideoElement(tileState.tileId, videoElement);
    localTileId = tileState.tileId;
  },
  videoTileWasRemoved: tileId => {
    if (localTileId === tileId) {
      console.log(`You called removeLocalVideoTile. videoElement can be bound to another tile.`);
      localTileId = null;
    }
  }
};

meetingSession.audioVideo.addObserver(observer);

meetingSession.audioVideo.stopLocalVideoTile();

// Stop video input. If the previously chosen camera has an LED light on,
// it will turn off indicating the camera is no longer capturing.
await meetingSession.audioVideo.stopVideoInput();

// Optional: You can remove the local tile from the session.
meetingSession.audioVideo.removeLocalVideoTile();
```
Use case 15. View one attendee video, e.g. in a 1-on-1 session.
```js
const videoElement = /* HTMLVideoElement object e.g. document.getElementById('video-element-id') */;

const observer = {
  // videoTileDidUpdate is called whenever a new tile is created or tileState changes.
  videoTileDidUpdate: tileState => {
    // Ignore a tile without attendee ID, a local tile (your video), and a content share.
    if (!tileState.boundAttendeeId || tileState.localTile || tileState.isContent) {
      return;
    }

    meetingSession.audioVideo.bindVideoElement(tileState.tileId, videoElement);
  }
};

meetingSession.audioVideo.addObserver(observer);
```
Use case 16. View up to 25 attendee videos. Assume that you have 25 video elements in your application, and that an empty cell means it's taken.
```js
/*
  No one is sharing video                   e.g. 9 attendee videos (9 empty cells)

  Next available:                           Next available:
  videoElements[0]                          videoElements[7]
  ╔════╦════╦════╦════╦════╗                ╔════╦════╦════╦════╦════╗
  ║  0 ║  1 ║  2 ║  3 ║  4 ║                ║    ║    ║    ║    ║    ║
  ╠════╬════╬════╬════╬════╣                ╠════╬════╬════╬════╬════╣
  ║  5 ║  6 ║  7 ║  8 ║  9 ║                ║    ║    ║  7 ║  8 ║    ║
  ╠════╬════╬════╬════╬════╣                ╠════╬════╬════╬════╬════╣
  ║ 10 ║ 11 ║ 12 ║ 13 ║ 14 ║                ║ 10 ║    ║ 12 ║ 13 ║ 14 ║
  ╠════╬════╬════╬════╬════╣                ╠════╬════╬════╬════╬════╣
  ║ 15 ║ 16 ║ 17 ║ 18 ║ 19 ║                ║ 15 ║ 16 ║ 17 ║ 18 ║ 19 ║
  ╠════╬════╬════╬════╬════╣                ╠════╬════╬════╬════╬════╣
  ║ 20 ║ 21 ║ 22 ║ 23 ║ 24 ║                ║ 20 ║ 21 ║ 22 ║ 23 ║ 24 ║
  ╚════╩════╩════╩════╩════╝                ╚════╩════╩════╩════╩════╝
 */
const videoElements = [
  /* an array of 25 HTMLVideoElement objects in your application */
];

// index-tileId pairs
const indexMap = {};

const acquireVideoElement = tileId => {
  // Return the same video element if already bound.
  for (let i = 0; i < 25; i += 1) {
    if (indexMap[i] === tileId) {
      return videoElements[i];
    }
  }
  // Return the next available video element.
  for (let i = 0; i < 25; i += 1) {
    if (!indexMap.hasOwnProperty(i)) {
      indexMap[i] = tileId;
      return videoElements[i];
    }
  }
  throw new Error('no video element is available');
};

const releaseVideoElement = tileId => {
  for (let i = 0; i < 25; i += 1) {
    if (indexMap[i] === tileId) {
      delete indexMap[i];
      return;
    }
  }
};

const observer = {
  // videoTileDidUpdate is called whenever a new tile is created or tileState changes.
  videoTileDidUpdate: tileState => {
    // Ignore a tile without attendee ID, a local tile (your video), and a content share.
    if (!tileState.boundAttendeeId || tileState.localTile || tileState.isContent) {
      return;
    }

    meetingSession.audioVideo.bindVideoElement(
      tileState.tileId,
      acquireVideoElement(tileState.tileId)
    );
  },
  videoTileWasRemoved: tileId => {
    releaseVideoElement(tileId);
  },
};

meetingSession.audioVideo.addObserver(observer);
```
Use case 17. Add an observer to know all the remote video sources when changed.
```js
const observer = {
  remoteVideoSourcesDidChange: videoSources => {
    videoSources.forEach(videoSource => {
      const { attendee } = videoSource;
      console.log(`An attendee (${attendee.attendeeId} ${attendee.externalUserId}) is sending video`);
    });
  },
};

meetingSession.audioVideo.addObserver(observer);
```
You can also call the method below to get all the remote video sources:

Note: The `getRemoteVideoSources` method is different from `getAllRemoteVideoTiles`: `getRemoteVideoSources` returns all the remote video sources that are available to be viewed, while `getAllRemoteVideoTiles` returns the ones that are actually being seen.
```js
const videoSources = meetingSession.audioVideo.getRemoteVideoSources();
videoSources.forEach(videoSource => {
  const { attendee } = videoSource;
  console.log(`An attendee (${attendee.attendeeId} ${attendee.externalUserId}) is sending video`);
});
```
Note: When you or other attendees share content (a screen capture, a video file, or any other MediaStream object), the content attendee (attendee-id#content) joins the session and shares content as if a regular attendee shares a video.

For example, if your attendee ID is "my-id" and you call `meetingSession.audioVideo.startContentShare`, the content attendee "my-id#content" will join the session and share your content.
Use case 18. Start sharing your screen.
```js
import { DefaultModality } from 'amazon-chime-sdk-js';

const observer = {
  videoTileDidUpdate: tileState => {
    // Ignore a tile without attendee ID and videos.
    if (!tileState.boundAttendeeId || !tileState.isContent) {
      return;
    }

    const yourAttendeeId = meetingSession.configuration.credentials.attendeeId;

    // tileState.boundAttendeeId is formatted as "attendee-id#content".
    const boundAttendeeId = tileState.boundAttendeeId;

    // Get the attendee ID from "attendee-id#content".
    const baseAttendeeId = new DefaultModality(boundAttendeeId).base();
    if (baseAttendeeId === yourAttendeeId) {
      console.log('You called startContentShareFromScreenCapture');
    }
  },
  contentShareDidStart: () => {
    console.log('Screen share started');
  },
  contentShareDidStop: () => {
    // Chime SDK allows 2 simultaneous content shares per meeting.
    // This method will be invoked if two attendees are already sharing content
    // when you call startContentShareFromScreenCapture or startContentShare.
    console.log('Screen share stopped');
  },
};

meetingSession.audioVideo.addContentShareObserver(observer);
meetingSession.audioVideo.addObserver(observer);

// A browser will prompt the user to choose the screen.
const contentShareStream = await meetingSession.audioVideo.startContentShareFromScreenCapture();
```
If you want to display the content share stream for the sharer, you can bind the returned content share stream to a video element using `connectVideoStreamToVideoElement` from `DefaultVideoTile`.
```js
DefaultVideoTile.connectVideoStreamToVideoElement(contentShareStream, videoElement, false);
```
Use case 19. Start sharing your screen in an environment that does not support a screen picker dialog, e.g. Electron.
```js
const sourceId = /* Window or screen ID e.g. the ID of a DesktopCapturerSource object in Electron */;

await meetingSession.audioVideo.startContentShareFromScreenCapture(sourceId);
```
Use case 20. Start streaming your video file from an `<input>` element of type `file`.
```js
const videoElement = /* HTMLVideoElement object e.g. document.getElementById('video-element-id') */;
const inputElement = /* HTMLInputElement object e.g. document.getElementById('input-element-id') */;

inputElement.addEventListener('change', async () => {
  const file = inputElement.files[0];
  const url = URL.createObjectURL(file);
  videoElement.src = url;
  await videoElement.play();

  const mediaStream = videoElement.captureStream(); /* use mozCaptureStream for Firefox e.g. videoElement.mozCaptureStream(); */
  await meetingSession.audioVideo.startContentShare(mediaStream);
  inputElement.value = '';
});
```
Use case 21. Stop sharing your screen or content.
```js
const observer = {
  contentShareDidStop: () => {
    console.log('Content share stopped');
  },
};

meetingSession.audioVideo.addContentShareObserver(observer);

await meetingSession.audioVideo.stopContentShare();
```
Use case 22. View up to 2 attendee content or screens. Chime SDK allows 2 simultaneous content shares per meeting.
```js
import { DefaultModality } from 'amazon-chime-sdk-js';

const videoElementStack = [
  /* an array of 2 HTMLVideoElement objects in your application */
];

// tileId-videoElement map
const tileMap = {};

const observer = {
  videoTileDidUpdate: tileState => {
    // Ignore a tile without attendee ID and videos.
    if (!tileState.boundAttendeeId || !tileState.isContent) {
      return;
    }

    const yourAttendeeId = meetingSession.configuration.credentials.attendeeId;

    // tileState.boundAttendeeId is formatted as "attendee-id#content".
    const boundAttendeeId = tileState.boundAttendeeId;

    // Get the attendee ID from "attendee-id#content".
    const baseAttendeeId = new DefaultModality(boundAttendeeId).base();
    if (baseAttendeeId !== yourAttendeeId) {
      console.log(`${baseAttendeeId} is sharing screen now`);

      // Get the already bound video element if available, or use an unbound element.
      const videoElement = tileMap[tileState.tileId] || videoElementStack.pop();
      if (videoElement) {
        tileMap[tileState.tileId] = videoElement;
        meetingSession.audioVideo.bindVideoElement(tileState.tileId, videoElement);
      } else {
        console.log('No video element is available');
      }
    }
  },
  videoTileWasRemoved: tileId => {
    // Release the unused video element.
    const videoElement = tileMap[tileId];
    if (videoElement) {
      videoElementStack.push(videoElement);
      delete tileMap[tileId];
    }
  },
};

meetingSession.audioVideo.addObserver(observer);
```
Use case 23. Subscribe to attendee presence changes. When an attendee joins or leaves a session, the callback receives `presentAttendeeId` and `present` (a boolean).
```js
const attendeePresenceSet = new Set();
const callback = (presentAttendeeId, present) => {
  console.log(`Attendee ID: ${presentAttendeeId} Present: ${present}`);
  if (present) {
    attendeePresenceSet.add(presentAttendeeId);
  } else {
    attendeePresenceSet.delete(presentAttendeeId);
  }
};

meetingSession.audioVideo.realtimeSubscribeToAttendeeIdPresence(callback);
```
Use case 24. Create a simple roster by subscribing to attendee presence and volume changes.
```js
import { DefaultModality } from 'amazon-chime-sdk-js';

const roster = {};

meetingSession.audioVideo.realtimeSubscribeToAttendeeIdPresence(
  (presentAttendeeId, present) => {
    if (!present) {
      delete roster[presentAttendeeId];
      return;
    }

    meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
      presentAttendeeId,
      (attendeeId, volume, muted, signalStrength) => {
        const baseAttendeeId = new DefaultModality(attendeeId).base();
        if (baseAttendeeId !== attendeeId) {
          // Optional: Do not include the content attendee (attendee-id#content) in the roster.
          // See the "Screen and content share" section for details.
          return;
        }

        if (roster.hasOwnProperty(attendeeId)) {
          // A null value for any field means that it has not changed.
          roster[attendeeId].volume = volume; // a fraction between 0 and 1
          roster[attendeeId].muted = muted; // a boolean
          roster[attendeeId].signalStrength = signalStrength; // 0 (no signal), 0.5 (weak), 1 (strong)
        } else {
          // Add an attendee.
          // Optional: You can fetch more data, such as attendee name,
          // from your server application and set them here.
          roster[attendeeId] = {
            attendeeId,
            volume,
            muted,
            signalStrength,
          };
        }
      }
    );
  }
);
```
Use case 25. Add an observer to receive WebRTC metrics processed by the Chime SDK, such as bitrate, packet loss, and bandwidth. See `AudioVideoObserver` for more available metrics.
```js
const observer = {
  metricsDidReceive: clientMetricReport => {
    const metricReport = clientMetricReport.getObservableMetrics();

    const {
      videoPacketSentPerSecond,
      videoUpstreamBitrate,
      availableOutgoingBitrate,
      availableIncomingBitrate,
      audioSpeakerDelayMs,
    } = metricReport;

    console.log(
      `Sending video bitrate in kilobits per second: ${videoUpstreamBitrate / 1000} and sending packets per second: ${videoPacketSentPerSecond}`
    );
    console.log(
      `Sending bandwidth is ${availableOutgoingBitrate / 1000}, and receiving bandwidth is ${availableIncomingBitrate / 1000}`
    );
    console.log(`Audio speaker delay is ${audioSpeakerDelayMs}`);
  },
};

meetingSession.audioVideo.addObserver(observer);
```
Use case 26. Add an observer to receive alerts. You can use these alerts to notify users of connection problems.
```js
const observer = {
  connectionDidBecomePoor: () => {
    console.log('Your connection is poor');
  },
  connectionDidSuggestStopVideo: () => {
    console.log('Recommend turning off your video');
  },
  videoSendDidBecomeUnavailable: () => {
    // Chime SDK allows a total of 25 simultaneous videos per meeting.
    // If you try to share more video, this method will be called.
    // See videoAvailabilityDidChange below to find out when it becomes available.
    console.log('You cannot share your video');
  },
  videoAvailabilityDidChange: videoAvailability => {
    // canStartLocalVideo will also be true if you are already sharing your video.
    if (videoAvailability.canStartLocalVideo) {
      console.log('You can share your video');
    } else {
      console.log('You cannot share your video');
    }
  },
};

meetingSession.audioVideo.addObserver(observer);
```
Use case 27. Leave a session.
```js
import { MeetingSessionStatusCode } from 'amazon-chime-sdk-js';

const observer = {
  audioVideoDidStop: sessionStatus => {
    const sessionStatusCode = sessionStatus.statusCode();
    if (sessionStatusCode === MeetingSessionStatusCode.Left) {
      /*
        You called meetingSession.audioVideo.stop().
      */
      console.log('You left the session');
    } else {
      console.log('Stopped with a session status code: ', sessionStatusCode);
    }
  },
};

meetingSession.audioVideo.addObserver(observer);

meetingSession.audioVideo.stop();
```
Use case 28. Add an observer to get notified when a session has ended.
```js
import { MeetingSessionStatusCode } from 'amazon-chime-sdk-js';

const observer = {
  audioVideoDidStop: sessionStatus => {
    const sessionStatusCode = sessionStatus.statusCode();
    if (sessionStatusCode === MeetingSessionStatusCode.MeetingEnded) {
      /*
        - You (or someone else) have called the DeleteMeeting API action in your server application.
        - You attempted to join a deleted meeting.
        - No audio connections are present in the meeting for more than five minutes.
        - The meeting time exceeds 24 hours.
        See https://docs.aws.amazon.com/chime-sdk/latest/dg/mtgs-sdk-mtgs.html for details.
      */
      console.log('The session has ended');
    } else {
      console.log('Stopped with a session status code: ', sessionStatusCode);
    }
  },
};

meetingSession.audioVideo.addObserver(observer);
```
Use case 29. Initialize the meeting readiness checker.
```js
import { DefaultMeetingReadinessChecker } from 'amazon-chime-sdk-js';

// In the usage examples below, you will use this meetingReadinessChecker object.
const meetingReadinessChecker = new DefaultMeetingReadinessChecker(logger, meetingSession);
```
Use case 30. Use the meeting readiness checker to perform local checks.
```js
import { CheckAudioInputFeedback } from 'amazon-chime-sdk-js';

const audioInputDeviceInfo = /* An array item from meetingSession.audioVideo.listAudioInputDevices */;
const audioInputFeedback = await meetingReadinessChecker.checkAudioInput(audioInputDeviceInfo.deviceId);

switch (audioInputFeedback) {
  case CheckAudioInputFeedback.Succeeded:
    console.log('Succeeded');
    break;
  case CheckAudioInputFeedback.Failed:
    console.log('Failed');
    break;
  case CheckAudioInputFeedback.PermissionDenied:
    console.log('Permission denied');
    break;
}
```
Use case 31. Use the meeting readiness checker to perform end-to-end checks, e.g. audio, video, and content share.
```js
import {
  CheckAudioConnectivityFeedback,
  CheckContentShareConnectivityFeedback,
  CheckVideoConnectivityFeedback
} from 'amazon-chime-sdk-js';

// Tests audio connection
const audioDeviceInfo = /* An array item from meetingSession.audioVideo.listAudioInputDevices */;
const audioFeedback = await meetingReadinessChecker.checkAudioConnectivity(audioDeviceInfo.deviceId);
console.log(`Feedback result: ${CheckAudioConnectivityFeedback[audioFeedback]}`);

// Tests video connection
const videoInputInfo = /* An array item from meetingSession.audioVideo.listVideoInputDevices */;
const videoFeedback = await meetingReadinessChecker.checkVideoConnectivity(videoInputInfo.deviceId);
console.log(`Feedback result: ${CheckVideoConnectivityFeedback[videoFeedback]}`);

// Tests content share connectivity
const contentShareFeedback = await meetingReadinessChecker.checkContentShareConnectivity();
console.log(`Feedback result: ${CheckContentShareConnectivityFeedback[contentShareFeedback]}`);
```
Use case 32. Use the meeting readiness checker to perform network checks, e.g. TCP and UDP.
```js
import {
  CheckNetworkUDPConnectivityFeedback,
  CheckNetworkTCPConnectivityFeedback,
} from 'amazon-chime-sdk-js';

// Tests for UDP network connectivity
const networkUDPFeedback = await meetingReadinessChecker.checkNetworkUDPConnectivity();
console.log(`Feedback result: ${CheckNetworkUDPConnectivityFeedback[networkUDPFeedback]}`);

// Tests for TCP network connectivity
const networkTCPFeedback = await meetingReadinessChecker.checkNetworkTCPConnectivity();
console.log(`Feedback result: ${CheckNetworkTCPConnectivityFeedback[networkTCPFeedback]}`);
```
Use case 33. Set the audio quality of the main audio input to optimize for speech or music:
Use the following setting to optimize the audio bitrate of the main audio input for fullband speech with a mono channel:
```js
meetingSession.audioVideo.setAudioProfile(AudioProfile.fullbandSpeechMono());
```
Use case 34. Set the audio quality of content share audio to optimize for speech or music:
Use the following setting to optimize the audio bitrate of content share audio for fullband music with a mono channel:
```js
meetingSession.audioVideo.setContentAudioProfile(AudioProfile.fullbandMusicMono());
```
Use case 35. Send and receive stereo audio.
You can send an audio stream with stereo channels either as content or through the main audio input.
Use the following setting to optimize the main audio input and output for an audio stream with stereo channels:
```js
meetingSession.audioVideo.setAudioProfile(AudioProfile.fullbandMusicStereo());
```
Use the following setting to optimize the content share audio for an audio stream with stereo channels:
```js
meetingSession.audioVideo.setContentAudioProfile(AudioProfile.fullbandMusicStereo());
```
Use case 36. Redundant audio.
Starting with version 3.18.2, the SDK sends redundant audio data to our servers when it detects packet loss, to help reduce its effect on audio quality. Redundant audio packets are only sent out for packets containing active audio, i.e., speech or music. This may increase the bandwidth consumed by audio to up to 3 times the normal amount, depending on the amount of packet loss detected. The SDK will automatically stop sending redundant data if it hasn't detected any packet loss for 5 minutes.
This feature requires `blob:` to be in your content security policy under the `worker-src` directive. Without this, we will not be able to send out redundant audio data.
This feature is not supported on Firefox at the moment. We were able to successfully send redundant audio on Safari 16.1 onwards; Safari 15.6.1 advertises support as well but is untested. Chrome advertises support for redundant audio from version M96.
To disable this feature for attendee audio, you can use the following:
```js
meetingSession.audioVideo.setAudioProfile(new AudioProfile(null, false));
```
If you are using bitrate optimization and want to disable audio redundancy, you can use the line below. The example uses fullbandSpeechMono, but you can use fullbandMusicMono or fullbandMusicStereo depending on your use case.
```js
meetingSession.audioVideo.setAudioProfile(AudioProfile.fullbandSpeechMono(false));
```
To disable this feature for content share audio, you can use any one of the following:
```js
meetingSession.audioVideo.setContentAudioProfile(new AudioProfile(null, false));
```
If you are using bitrate optimization and want to disable audio redundancy, you can use the line below. The example uses fullbandSpeechMono, but you can use fullbandMusicMono or fullbandMusicStereo depending on your use case.
```js
meetingSession.audioVideo.setContentAudioProfile(AudioProfile.fullbandSpeechMono(false));
```
While there is an option to disable the feature, we recommend keeping it enabled for improved audio quality.One possible reason to disable it might be if your customers have very strict bandwidth limitations.
Use case 37. Set up an observer to receive events (connecting, start, stop, and receive message) and start a messaging session.
Note: You can remove an observer by calling `messagingSession.removeObserver(observer)`. In a component-based architecture (such as React, Vue, and Angular), you may need to add an observer when a component is mounted, and remove it when unmounted.
```js
const observer = {
  messagingSessionDidStart: () => {
    console.log('Session started');
  },
  messagingSessionDidStartConnecting: reconnecting => {
    if (reconnecting) {
      console.log('Start reconnecting');
    } else {
      console.log('Start connecting');
    }
  },
  messagingSessionDidStop: event => {
    console.log(`Closed: ${event.code} ${event.reason}`);
  },
  messagingSessionDidReceiveMessage: message => {
    console.log(`Receive message type ${message.type}`);
  },
};

messagingSession.addObserver(observer);

await messagingSession.start();
```
Amazon Chime SDK for JavaScript allows builders to provide application metadata in the meeting session configuration. This field is optional. Amazon Chime uses application metadata to analyze meeting health trends or identify common failures to improve your meeting experience.
⚠️ Do not pass any personally identifiable information (PII).
Use case 38. Provide application metadata to the meeting session configuration.
```js
import { MeetingSessionConfiguration, ApplicationMetadata } from 'amazon-chime-sdk-js';

const createMeetingResponse = // CreateMeeting API response.
const createAttendeeResponse = // CreateAttendee API response.

const meetingSessionConfiguration = new MeetingSessionConfiguration(
  createMeetingResponse,
  createAttendeeResponse
);

meetingSessionConfiguration.applicationMetadata = ApplicationMetadata.create({
  appName: 'AppName',
  appVersion: '1.0.0'
});
```
```ts
// The appName must be between 1-32 characters.
// The appName must satisfy the following regular expression:
// /^[a-zA-Z0-9]+[a-zA-Z0-9_-]*[a-zA-Z0-9]+$/g
appName: string;

// The appVersion must be between 1-32 characters.
// The appVersion must follow the Semantic Versioning format.
// https://semver.org/
appVersion: string;
```
The use of Amazon Voice Focus, background blur, and background replacement via this SDK involves the downloading and execution of code at runtime by end users.
The use of Amazon Voice Focus, background blur, and background replacement runtime code is subject to additional notices. See this Amazon Voice Focus NOTICES file, the background blur and background replacement NOTICES file, and the background blur 2.0 and background replacement 2.0 NOTICES file for details. You agree to make these additional notices available to all end users who use Amazon Voice Focus, background blur and background replacement, or background blur 2.0 and background replacement 2.0 runtime code via this SDK.
The browser demo applications in the demos directory use TensorFlow.js and pre-trained TensorFlow.js models for image segmentation. Use of these third-party models involves downloading and execution of code at runtime from jsDelivr by end user browsers. For the jsDelivr Acceptable Use Policy, please visit this link.
The use of TensorFlow runtime code referenced above may be subject to additional license requirements. See the licenses page for TensorFlow.js here and TensorFlow.js models here for details.
You and your end users are responsible for all Content (including any images) uploaded for use with background replacement, and must ensure that such Content does not violate the law, infringe or misappropriate the rights of any third party, or otherwise violate a material term of your agreement with Amazon (including the documentation, the AWS Service Terms, or the Acceptable Use Policy).
Live transcription using the Amazon Chime SDK for JavaScript is powered by Amazon Transcribe. Use of Amazon Transcribe is subject to theAWS Service Terms, including the terms specific to the AWS Machine Learning and Artificial Intelligence Services. Standard charges for Amazon Transcribe and Amazon Transcribe Medical will apply.
You and your end users understand that recording Amazon Chime SDK meetings may be subject to laws or regulations regarding the recording of electronic communications. It is your and your end users' responsibility to comply with all applicable laws regarding the recordings, including properly notifying all participants in a recorded session or communication that the session or communication is being recorded, and obtaining their consent.
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.