Amazon Chime SDK Project Board
Amazon Chime SDK React Components
The Amazon Chime SDK is a set of real-time communications components that developers can use to quickly add messaging, audio, video, and screen sharing capabilities to their web or mobile applications.
Developers can build on AWS's global communications infrastructure to deliver engaging experiences in their applications. For example, they can add video to a health application so patients can consult remotely with doctors on health issues, or create customized audio prompts for integration with the public telephone network.
The Amazon Chime SDK for JavaScript works by connecting to meeting session resources that you create in your AWS account. The SDK has everything you need to build custom calling and collaboration experiences in your web application, including methods to configure meeting sessions, list and select audio and video devices, start and stop screen share and screen share viewing, receive callbacks when media events such as volume changes occur, and control meeting features such as audio mute and video tile bindings.
If you are building a React application, consider using the Amazon Chime SDK React Component Library that supplies client-side state management and reusable UI components for common web interfaces used in audio and video conferencing applications. Amazon Chime also offers Amazon Chime SDK for iOS and Amazon Chime SDK for Android for native mobile application development.
The Amazon Chime SDK Project Board captures the status of community feature requests across all our repositories. The descriptions of the columns on the board are captured in this guide.
In addition to the below, here is a list of all blog posts about the Amazon Chime SDK.
The following developer guides cover specific topics for a technical audience.
The following developer guides cover the Amazon Chime SDK more broadly.
Review the resources given in the README and use our client documentation for guidance on how to develop on the Chime SDK for JavaScript. Additionally, search our issues database and FAQs to see if your issue is already addressed. If not, please cut us an issue using the provided templates.
The blog post Monitoring and Troubleshooting With Amazon Chime SDK Meeting Events goes into detail about how to use meeting events to troubleshoot your application by logging to Amazon CloudWatch.
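As a sketch of that approach, you can collect meeting events with an observer and forward them to your own log sink. The observer shape follows the eventDidReceive callback described in the meeting events guide; the registration call shown in the trailing comment is for the v3-style eventController, so verify it against the SDK version you use.

```javascript
// A minimal sketch, assuming the eventDidReceive observer callback from the
// meeting events guide. Captured events can be forwarded to Amazon CloudWatch
// (or any other log sink) by your own reporting code.
const capturedEvents = [];

const eventObserver = {
  eventDidReceive(name, attributes) {
    // name is an event such as 'meetingStartSucceeded' or 'audioInputFailed';
    // attributes carries context such as the recent meetingHistory.
    capturedEvents.push({ name, attributes });
  },
};

// In your application, register the observer with the meeting session, e.g.:
// meetingSession.eventController.addObserver(eventObserver);
```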
If you have more questions, or require support for your business, you can reach out to AWS Customer support. You can review our support plans here.
The Amazon Chime SDK for JavaScript uses WebRTC, the real-time communication API supported in most modern browsers. Here are some general resources on WebRTC.
Make sure you have Node.js version 18 or higher. Node 20 is recommended and supported.
To add the Amazon Chime SDK for JavaScript into an existing application, install the package directly from npm:
npm install amazon-chime-sdk-js --save
Note that the Amazon Chime SDK for JavaScript targets ES2015, which is fully compatible with all supported browsers.
Create a meeting session in your client application.
import {
ConsoleLogger,
DefaultDeviceController,
DefaultMeetingSession,
LogLevel,
MeetingSessionConfiguration
} from 'amazon-chime-sdk-js';
const logger = new ConsoleLogger('MyLogger', LogLevel.INFO);
const deviceController = new DefaultDeviceController(logger);
// You need responses from server-side Chime API. See below for details.
const meetingResponse = /* The response from the CreateMeeting API action */;
const attendeeResponse = /* The response from the CreateAttendee or BatchCreateAttendee API action */;
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);
// In the usage examples below, you will use this meetingSession object.
const meetingSession = new DefaultMeetingSession(
configuration,
logger,
deviceController
);
You can use an AWS SDK, the AWS Command Line Interface (AWS CLI), or the REST API to make API calls. In this section, you will use the AWS SDK for JavaScript in your server application, e.g. Node.js. See Amazon Chime SDK API Reference for more information.
Note: The server application does not require the Amazon Chime SDK for JavaScript.
const AWS = require('aws-sdk');
const { v4: uuid } = require('uuid');
// The ChimeSDKMeetings client makes control-plane API calls in the region you
// specify here; this is independent of the meeting's MediaRegion below.
const chime = new AWS.ChimeSDKMeetings({ region: 'us-east-1' });
const meetingResponse = await chime
.createMeeting({
ClientRequestToken: uuid(),
MediaRegion: 'us-west-2', // Specify the region in which to create the meeting.
})
.promise();
const attendeeResponse = await chime
.createAttendee({
MeetingId: meetingResponse.Meeting.MeetingId,
ExternalUserId: uuid(), // Link the attendee to an identity managed by your application.
})
.promise();
Now securely transfer the meetingResponse and attendeeResponse objects to your client application.
These objects contain all the information needed for a client application using the Amazon Chime SDK for JavaScript to join the meeting.
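One common pattern for that transfer is to return both responses from a single join endpoint. The JoinInfo wrapper and field names below are illustrative (similar to the SDK demo applications), not a required format.

```javascript
// A minimal sketch of packaging the two Chime API responses for the client.
// The JoinInfo wrapper and its field names are illustrative, not required.
function makeJoinInfo(meetingResponse, attendeeResponse) {
  return {
    JoinInfo: {
      Meeting: meetingResponse.Meeting,
      Attendee: attendeeResponse.Attendee,
    },
  };
}

// On the server, return JSON.stringify(makeJoinInfo(...)) from your join
// endpoint; on the client, pass JoinInfo.Meeting and JoinInfo.Attendee to
// new MeetingSessionConfiguration(...).
```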
The value of the MediaRegion parameter in createMeeting() should ideally be set to the media Region closest to the user creating the meeting. An implementation can be found under the topic 'Choosing the nearest media Region' in the Amazon Chime SDK Media Regions documentation.
Create a messaging session in your client application to receive messages from Amazon Chime SDK for Messaging.
import { ChimeSDKMessagingClient } from '@aws-sdk/client-chime-sdk-messaging';
import {
ConsoleLogger,
DefaultMessagingSession,
LogLevel,
MessagingSessionConfiguration,
} from 'amazon-chime-sdk-js';
const logger = new ConsoleLogger('SDK', LogLevel.INFO);
// You will need AWS credentials configured before calling AWS or Amazon Chime APIs.
const chime = new ChimeSDKMessagingClient({ region: 'us-east-1'});
const userArn = /* The userArn */;
const sessionId = /* The sessionId */;
const configuration = new MessagingSessionConfiguration(userArn, sessionId, undefined, chime);
const messagingSession = new DefaultMessagingSession(configuration, logger);
If you would like to enable the prefetch feature when connecting to a messaging session, you can follow the code below. The prefetch feature sends CHANNEL_DETAILS events upon websocket connection, which include information about the channel, channel messages, channel memberships, and so on. The prefetch sort order can be adjusted with prefetchSortBy, setting it to either unread (the default if not set) or lastMessageTimestamp.
configuration.prefetchOn = Prefetch.Connect;
configuration.prefetchSortBy = PrefetchSortBy.Unread;
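With prefetch enabled, the client receives the CHANNEL_DETAILS messages through the messaging session observer. A hedged sketch of handling them follows; the message type string and JSON payload shape are assumptions to verify against the events your application actually receives.

```javascript
// Collect the CHANNEL_DETAILS messages delivered by the prefetch feature.
// The payload is assumed to be a JSON string describing one channel and its
// recent messages/memberships; inspect your own traffic to confirm the shape.
const prefetchedChannels = [];

const observer = {
  messagingSessionDidStart: () => {
    console.log('Messaging session started');
  },
  messagingSessionDidReceiveMessage: message => {
    if (message.type === 'CHANNEL_DETAILS') {
      prefetchedChannels.push(JSON.parse(message.payload));
    }
  },
};

// messagingSession.addObserver(observer);
// messagingSession.start();
```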
git fetch --tags https://github.com/aws/amazon-chime-sdk-js
npm run build
npm run test
After running npm run test the first time, you can use npm run test:fast to speed up the test suite. Tags are fetched in order to correctly generate versioning metadata. To view code coverage results, open coverage/index.html in your browser after running npm run test.
If you run npm run test and the tests run but the coverage report is not generated, you might have a resource clean-up issue. In Mocha v4.0.0 or newer, the implementation was changed so that Mocha processes do not force exit when the test run is complete. For example, if you have a DefaultVideoTransformDevice in your unit test, you must call await device.stop(); to clean up its resources and avoid this issue. You can also look into the usage of done(); in the Mocha documentation.
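One way to keep such clean-up tidy is a small tracker for stoppable SDK resources. This is illustrative test scaffolding, not an SDK API; the only assumption is that tracked objects expose the async stop() method described above.

```javascript
// Track stoppable objects created during a test and stop them all afterwards,
// so Mocha can exit cleanly and write the coverage report.
const disposables = [];

function track(resource) {
  disposables.push(resource);
  return resource;
}

async function stopAll() {
  while (disposables.length) {
    await disposables.pop().stop();
  }
}

// In a Mocha suite:
// const device = track(new DefaultVideoTransformDevice(logger, deviceId, stages));
// afterEach(() => stopAll());
```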
To generate JavaScript API reference documentation run:
npm run build
npm run doc
Then open docs/index.html in your browser.
If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our vulnerability reporting page. Please do not create a public GitHub issue.
Note: Before starting a session, you need to choose your microphone, speaker, and camera.
Use case 1. List audio input, audio output, and video input devices. The browser will ask for microphone and camera permissions.
With the forceUpdate parameter set to true, cached device information is discarded and updated after the device label trigger is called. In some cases, builders need to delay the triggering of permission dialogs, e.g., when joining a meeting in view-only mode, and then later trigger a permission prompt in order to show device labels; specifying forceUpdate allows this to occur.
const audioInputDevices = await meetingSession.audioVideo.listAudioInputDevices();
const audioOutputDevices = await meetingSession.audioVideo.listAudioOutputDevices();
const videoInputDevices = await meetingSession.audioVideo.listVideoInputDevices();
// An array of MediaDeviceInfo objects
audioInputDevices.forEach(mediaDeviceInfo => {
console.log(`Device ID: ${mediaDeviceInfo.deviceId} Microphone: ${mediaDeviceInfo.label}`);
});
Use case 2. Choose audio input and audio output devices by passing the deviceId of a MediaDeviceInfo object. Note that you need to call listAudioInputDevices and listAudioOutputDevices first.
const audioInputDeviceInfo = /* An array item from meetingSession.audioVideo.listAudioInputDevices */;
await meetingSession.audioVideo.startAudioInput(audioInputDeviceInfo.deviceId);
const audioOutputDeviceInfo = /* An array item from meetingSession.audioVideo.listAudioOutputDevices */;
await meetingSession.audioVideo.chooseAudioOutput(audioOutputDeviceInfo.deviceId);
Use case 3. Choose a video input device by passing the deviceId of a MediaDeviceInfo object. Note that you need to call listVideoInputDevices first.
If there is an LED light next to the attendee's camera, it will be turned on indicating that it is now capturing from the camera. You probably want to choose a video input device when you start sharing your video.
const videoInputDeviceInfo = /* An array item from meetingSession.audioVideo.listVideoInputDevices */;
await meetingSession.audioVideo.startVideoInput(videoInputDeviceInfo.deviceId);
// Stop video input. If the previously chosen camera has an LED light on,
// it will turn off indicating the camera is no longer capturing.
await meetingSession.audioVideo.stopVideoInput();
Use case 4. Add a device change observer to receive the updated device list. For example, when you pair Bluetooth headsets with your computer, audioInputsChanged and audioOutputsChanged are called with a device list that includes the headsets. You can use the audioInputMuteStateChanged callback to track the underlying hardware mute state on browsers and operating systems that support it.
const observer = {
audioInputsChanged: freshAudioInputDeviceList => {
// An array of MediaDeviceInfo objects
freshAudioInputDeviceList.forEach(mediaDeviceInfo => {
console.log(`Device ID: ${mediaDeviceInfo.deviceId} Microphone: ${mediaDeviceInfo.label}`);
});
},
audioOutputsChanged: freshAudioOutputDeviceList => {
console.log('Audio outputs updated: ', freshAudioOutputDeviceList);
},
videoInputsChanged: freshVideoInputDeviceList => {
console.log('Video inputs updated: ', freshVideoInputDeviceList);
},
audioInputMuteStateChanged: (device, muted) => {
console.log('Device', device, muted ? 'is muted in hardware' : 'is not muted');
},
};
meetingSession.audioVideo.addDeviceChangeObserver(observer);
Use case 5. Start a session. To hear audio, you need to bind a device and stream to an <audio> element.
Once the session has started, you can talk and listen to attendees.
Make sure you have chosen your microphone and speaker (See the "Device" section), and at least one other attendee has joined the session.
const audioElement = /* HTMLAudioElement object e.g. document.getElementById('audio-element-id') */;
meetingSession.audioVideo.bindAudioElement(audioElement);
const observer = {
audioVideoDidStart: () => {
console.log('Started');
}
};
meetingSession.audioVideo.addObserver(observer);
meetingSession.audioVideo.start();
Use case 6. Add an observer to receive session lifecycle events: connecting, start, and stop.
Note: You can remove an observer by calling meetingSession.audioVideo.removeObserver(observer). In a component-based architecture (such as React, Vue, or Angular), you may need to add an observer when a component is mounted and remove it when unmounted.
const observer = {
audioVideoDidStart: () => {
console.log('Started');
},
audioVideoDidStop: sessionStatus => {
// See the "Stopping a session" section for details.
console.log('Stopped with a session status code: ', sessionStatus.statusCode());
},
audioVideoDidStartConnecting: reconnecting => {
if (reconnecting) {
// e.g. the WiFi connection is dropped.
console.log('Attempting to reconnect');
}
},
};
meetingSession.audioVideo.addObserver(observer);
Note: So far, you've added observers to receive device and session lifecycle events. In the following use cases, you'll use the real-time API methods to send and receive volume indicators and control mute state.
Use case 7. Mute and unmute an audio input.
// Mute
meetingSession.audioVideo.realtimeMuteLocalAudio();
// Unmute
const unmuted = meetingSession.audioVideo.realtimeUnmuteLocalAudio();
if (unmuted) {
console.log('Other attendees can hear your audio');
} else {
// See the realtimeSetCanUnmuteLocalAudio use case below.
console.log('You cannot unmute yourself');
}
Use case 8. To check whether the local microphone is muted, use this method rather than keeping track of your own mute state.
const muted = meetingSession.audioVideo.realtimeIsLocalAudioMuted();
if (muted) {
console.log('You are muted');
} else {
console.log('Other attendees can hear your audio');
}
Use case 9. Disable unmute. If you want to prevent users from unmuting themselves (for example during a presentation), use these methods rather than keeping track of your own can-unmute state.
meetingSession.audioVideo.realtimeSetCanUnmuteLocalAudio(false);
// Optional: Force mute.
meetingSession.audioVideo.realtimeMuteLocalAudio();
const unmuted = meetingSession.audioVideo.realtimeUnmuteLocalAudio();
console.log(`${unmuted} is false. You cannot unmute yourself`);
Use case 10. Subscribe to volume changes of a specific attendee. You can use this to build a real-time volume indicator UI.
import { DefaultModality } from 'amazon-chime-sdk-js';
// This is your attendee ID. You can also subscribe to another attendee's ID.
// See the "Attendees" section for an example on how to retrieve other attendee IDs
// in a session.
const presentAttendeeId = meetingSession.configuration.credentials.attendeeId;
meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
presentAttendeeId,
(attendeeId, volume, muted, signalStrength) => {
const baseAttendeeId = new DefaultModality(attendeeId).base();
if (baseAttendeeId !== attendeeId) {
// See the "Screen and content share" section for details.
console.log(`The volume of ${baseAttendeeId}'s content changed`);
}
// A null value for any field means that it has not changed.
console.log(`${attendeeId}'s volume data: `, {
volume, // a fraction between 0 and 1
muted, // a boolean
signalStrength, // 0 (no signal), 0.5 (weak), 1 (strong)
});
}
);
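Because a null field in the callback means "unchanged", UI code typically merges each callback into the previously known state. A small helper sketch (the function name is illustrative):

```javascript
// Merge one volume-indicator callback into previous state, honoring the rule
// that a null field means the value has not changed.
function mergeVolumeIndicator(prev, volume, muted, signalStrength) {
  return {
    volume: volume === null ? prev.volume : volume,
    muted: muted === null ? prev.muted : muted,
    signalStrength: signalStrength === null ? prev.signalStrength : signalStrength,
  };
}

// Inside the realtimeSubscribeToVolumeIndicator callback:
// state = mergeVolumeIndicator(state, volume, muted, signalStrength);
```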
Use case 11. Subscribe to mute or signal strength changes of a specific attendee. You can use this to build UI for only mute or only signal strength changes.
// This is your attendee ID. You can also subscribe to another attendee's ID.
// See the "Attendees" section for an example on how to retrieve other attendee IDs
// in a session.
const presentAttendeeId = meetingSession.configuration.credentials.attendeeId;
// To track mute changes
meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
presentAttendeeId,
(attendeeId, volume, muted, signalStrength) => {
// A null value for volume, muted and signalStrength field means that it has not changed.
if (muted === null) {
// muted state has not changed, ignore volume and signalStrength changes
return;
}
// mute state changed
console.log(`${attendeeId}'s mute state changed: `, {
muted, // a boolean
});
}
);
// To track signal strength changes
meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(
presentAttendeeId,
(attendeeId, volume, muted, signalStrength) => {
// A null value for volume, muted and signalStrength field means that it has not changed.
if (signalStrength === null) {
// signalStrength has not changed, ignore volume and muted changes
return;
}
// signal strength changed
console.log(`${attendeeId}'s signal strength changed: `, {
signalStrength, // 0 (no signal), 0.5 (weak), 1 (strong)
});
}
);
Use case 12. Detect the most active speaker. For example, you can enlarge the active speaker's video element if available.
import { DefaultActiveSpeakerPolicy } from 'amazon-chime-sdk-js';
const activeSpeakerCallback = attendeeIds => {
if (attendeeIds.length) {
console.log(`${attendeeIds[0]} is the most active speaker`);
}
};
meetingSession.audioVideo.subscribeToActiveSpeakerDetector(
new DefaultActiveSpeakerPolicy(),
activeSpeakerCallback
);
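If you also need relative loudness rather than just the ranked list, subscribeToActiveSpeakerDetector accepts an optional scores callback and reporting interval. This is a sketch; verify the optional parameters against the SDK version you use.

```javascript
// Rank attendees by active speaker score. The optional scores callback is
// assumed to receive an attendeeId-to-score map at the given interval.
function topSpeakers(scores, n) {
  return Object.entries(scores)
    .sort(([, a], [, b]) => b - a)
    .slice(0, n)
    .map(([attendeeId]) => attendeeId);
}

// meetingSession.audioVideo.subscribeToActiveSpeakerDetector(
//   new DefaultActiveSpeakerPolicy(),
//   activeSpeakerCallback,
//   scores => console.log('Top speakers:', topSpeakers(scores, 2)),
//   1000 // scores callback interval in milliseconds
// );
```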
Note: In Chime SDK terms, a video tile is an o