6.10.0
The SDK creation factory. Create an instance of the SDK by calling this factory with the desired configurations. The SDK instance will be referred to as 'api' throughout the rest of the documentation.
(config) The configuration object.
api : The SDK instance.
// Instantiate the SDK.
import { create } from '@rbbn/webrtc-js-sdk'
const client = create({
authentication: { ... },
logs: { ... },
...
});
// Use the SDK's API.
client.on( ... );
The configuration object. This object defines what different configuration values you can use when instantiating the SDK using the create function.
Configuration options for the Logs feature.
The SDK will log information about the operations it is performing. The amount of information will depend on how the Logs feature is configured.
The format of logs can also be customized by providing a LogHandler. This function will receive a LogEntry which it can handle as it sees fit. By default, the SDK will log information to the console. For more information, see the Logs feature description.
(Object)
Logs configs.
Name | Description |
---|---|
logs.logLevel string (default 'debug') | Log level to be set. See logger.levels. |
logs.handler logger.LogHandler? | The function to receive log entries from the SDK. If not provided, a default handler will be used that logs entries to the console. |
logs.logActions (Object or boolean) (default false) | Options specifically for action logs when logLevel is at DEBUG+ levels. Set this to false to not output action logs. |
logs.logActions.handler logger.LogHandler? | The function to receive action log entries from the SDK. If not provided, a default handler will be used that logs actions to the console. |
logs.logActions.actionOnly boolean (default false) | Only output information about the action itself, omitting the SDK context for when it occurred. |
logs.logActions.collapsed boolean (default false) | Whether logs should be minimized when initially output. The full log is still output and can be inspected on the console. |
logs.logActions.diff boolean (default false) | Include a diff of what SDK context was changed by the action. |
logs.logActions.level string (default 'debug') | Log level to be set on the action logs. |
logs.logActions.exposePayloads boolean (default true) | Allow action payloads to be exposed in the logs, potentially displaying sensitive information. |
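As a sketch of a custom handler, a logs.handler can reformat entries before printing. The LogEntry fields used below (level, timestamp, messages) are assumptions based on typical log entry shapes; consult the Logs feature description for the actual format.

```javascript
// A minimal custom LogHandler sketch. The LogEntry fields used here
// (level, timestamp, messages) are assumed, not a confirmed shape.
function myLogHandler (logEntry) {
  const { level, timestamp, messages } = logEntry
  // Prefix each entry with its level and time.
  return `[${level}] ${new Date(timestamp).toISOString()}: ${messages.join(' ')}`
}

// The handler would be provided when creating the SDK, e.g.:
// create({ logs: { logLevel: 'debug', handler: entry => console.log(myLogHandler(entry)) } })
```

Returning a string here is purely for illustration; a real handler would typically write to the console or a logging backend.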
Configuration options for the anonymous Authentication feature.
(Object)
Authentication configs.
Name | Description |
---|---|
authentication.subscription Object | |
authentication.subscription.serviceUnavailableMaxRetries number (default 3) | The maximum number of times this client will retry subscribing for a given service while receiving 'Service Unavailable' from the backend. |
authentication.subscription.protocol string (default 'https') | Protocol to be used for subscription requests. |
authentication.subscription.server string | Server to be used for subscription requests. |
authentication.subscription.port Number (default 443) | Port to be used for subscription requests. |
authentication.subscription.service Array? | Services to subscribe to for notifications. |
authentication.websocket Object | |
authentication.websocket.protocol string (default 'wss') | Protocol to be used for websocket notifications. |
authentication.websocket.server string | Server to be used for websocket notifications. |
authentication.websocket.port Number (default 443) | Port to be used for websocket notifications. |
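Putting the table together, a sketch of an authentication section follows. The hostnames are placeholders, and the 'call' service name is an assumption; use the values for your deployment.

```javascript
// Sketch of an authentication config. Server names are placeholders,
// and the 'call' service entry is an assumed example value.
const authentication = {
  subscription: {
    protocol: 'https',
    server: 'webrtc.example.com', // placeholder hostname
    port: 443,
    serviceUnavailableMaxRetries: 3,
    service: ['call'] // services to subscribe to for notifications
  },
  websocket: {
    protocol: 'wss',
    server: 'webrtc.example.com', // placeholder hostname
    port: 443
  }
}
```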
Configuration options for the call feature.
(Object)
The call configuration object.
Name | Description |
---|---|
call.defaultPeerConfig call.RTCPeerConnectionConfig? | A key-value dictionary that corresponds to the available RTCPeerConfiguration normally passed when creating an RTCPeerConnection. See RTCPeerConnection's configuration parameters for more information. This is the recommended way of setting ICE servers and other RTCPeerConnection-related configuration. |
call.iceCollectionIdealTimeout number (default 1000) | The amount of time to wait for an ideal candidate, in milliseconds. An ideal list of candidates is a complete list of candidates considering the RTCPeerConnection configuration. Note that this value will not be considered if a custom function is passed through iceCollectionCheckFunction; any timeouts must then be handled by the custom function. |
call.iceCollectionMaxTimeout number (default 3000) | The maximum amount of time to wait for ICE collection, in milliseconds. After this time has been reached, the call will proceed with the currently gathered candidates. Note that this value will not be considered if a custom function is passed through iceCollectionCheckFunction; any timeouts must then be handled by the custom function. |
call.iceCollectionCheckFunction Function? | Override the default IceCollectionCheckFunction to manually decide when to proceed with operations, error out, or wait for the appropriate states and candidates. The function will receive an object containing the ICE collection info; see IceCollectionInfo for more details. The function must return a result object with details on how to proceed with the ICE collection check or operation; see the IceCollectionResult object for details on the format of the return object. See IceCollectionCheckFunction for more information on the form of the function, as well as information about the default IceCollectionCheckFunction that is used if nothing is provided. |
call.serverTurnCredentials boolean (default true) | Whether server-provided TURN credentials should be used. |
call.sdpHandlers Array<call.SdpHandlerFunction>? | List of SDP handler functions to modify SDP. Advanced usage. |
call.earlyMedia boolean (default false) | Whether early media should be supported for calls. Not supported on Firefox. |
call.resyncOnConnect boolean (default false) | Whether the SDK should re-sync all call states after connecting (requires WebRTC Gateway 4.7.1+). |
call.mediaBrokerOnly boolean (default false) | Whether all calls will be anchored on the MediaBroker instead of being peer-to-peer. Set to true if the backend is configured for broker-only mode. |
call.removeBundling boolean (default false) | Whether to remove a=group attributes from incoming and outgoing SDP messages to stop media bundling. |
call.ringingFeedbackMode string (default 'auto') | The mode for sending ringing feedback to the caller ('auto' or 'manual'). By default, feedback will be sent automatically when a call has been received. In 'manual' mode, the application must initiate sending the feedback. See the call.sendRingingFeedback API for more info. |
call.callAuditTimer number (default 25000) | Time interval, in milliseconds, between call audits. |
call.mediaConnectionRetryDelay number (default 3000) | Delay, in milliseconds, for the passive side of a call to wait before trying a media reconnection. |
call.normalizeDestination boolean (default true) | Whether SIP address normalization will be applied. |
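For example, ICE servers are set through defaultPeerConfig, as recommended above. The STUN/TURN URLs and credentials below are placeholders, not real servers.

```javascript
// Sketch of a call config setting ICE servers via defaultPeerConfig.
// URLs and credentials are placeholders for illustration only.
const call = {
  defaultPeerConfig: {
    iceServers: [
      { urls: 'stun:stun.example.com:3478' },
      {
        urls: 'turns:turn.example.com:443?transport=tcp',
        username: 'user',    // placeholder
        credential: 'secret' // placeholder
      }
    ]
  },
  iceCollectionIdealTimeout: 1000,
  iceCollectionMaxTimeout: 3000
}
```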
Configuration options for the Connectivity feature. The SDK can only use keepalive as the connectivity check.
Keep Alive: The client sends "keepalive" messages (to the server) on the websocket at regular intervals. This lets the server know that the client is still connected, and that it should "keep the connection alive".
For more information on keepalive see here: https://en.wikipedia.org/wiki/Keepalive
(Object)
Connectivity configs.
Name | Description |
---|---|
connectivity.pingInterval Number (default 30000) | Time between websocket ping attempts (milliseconds). |
connectivity.reconnectLimit Number (default 5) | Number of failed reconnect attempts before reporting an error. Can be set to 0 to not limit reconnection attempts. |
connectivity.reconnectDelay Number (default 5000) | Base time between websocket reconnect attempts (milliseconds). |
connectivity.reconnectTimeMultiplier Number (default 1) | Reconnect delay multiplier for subsequent attempts. The reconnect delay time will be multiplied by this after each failed reconnect attempt to increase the delay between attempts, e.g. 5000 ms, then 10000 ms, then 20000 ms if the value is 2. |
connectivity.reconnectTimeLimit Number (default 640000) | Maximum time delay between reconnect attempts (milliseconds). Used in conjunction with the reconnect time multiplier to prevent overly long delays between reconnection attempts. |
connectivity.autoReconnect Boolean (default true) | Flag to determine whether the SDK will attempt to automatically reconnect after connectivity disruptions. |
connectivity.maxMissedPings Number (default 3) | Maximum pings sent (without receiving a response) before reporting an error. |
connectivity.checkConnectivity Boolean (default true) | Flag to determine whether the SDK should check connectivity. |
connectivity.webSocketOAuthMode string (default 'query') | 'query' will send the bearer access token to authenticate the websocket; 'none' will not send it. |
Configuration options for the notification feature.
(Object)
The notifications configuration object.
Name | Description |
---|---|
notifications.idCacheLength number (default 100) | Default amount of event IDs to remember for de-duplication purposes. |
notifications.incomingCallNotificationMode string (default 'any-channel') | Communication channel mode used for incoming call notifications. Supported values are 'any-channel' and 'push-channel-only'. |
notifications.pushRegistration Object? | Object describing the server to use for push services. |
notifications.pushRegistration.server string? | Hostname for the push registration server. |
notifications.pushRegistration.port string? | Port for the push registration server. |
notifications.pushRegistration.protocol string? | Protocol for the push registration server. |
notifications.pushRegistration.version string? | Version for the push registration server. |
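A sketch of a notifications section follows. The push server values are placeholders; use the values for your push registration service.

```javascript
// Sketch of a notifications config. The pushRegistration values are
// placeholders, not a real push service.
const notifications = {
  idCacheLength: 100,
  incomingCallNotificationMode: 'any-channel',
  pushRegistration: {
    server: 'push.example.com', // placeholder hostname
    port: '443',
    protocol: 'https',
    version: 'v1' // placeholder version
  }
}
```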
The 'api' is the type returned by the create function. It contains various top-level functions that pertain to the SDK instance as a whole, as well as several nested namespaces that pertain to individual features (e.g. call, contacts, presence).
Returns the current version of the API.
Destroys the SDK, and removes its state, rendering the SDK unusable. Useful when a user logs out and their call data needs to be destroyed. The SDK must be recreated to be usable again. The destroy command is async, and will happen on the next tick so as not to interfere with any ongoing events.
// Instantiate the SDK.
import { create } from '@rbbn/webrtc-js-sdk'
const config = {
authentication: { ... },
logs: { ... },
...
}
let client = create(config);
client.on( ... )
// Use the SDK
...
// Destroy the SDK, then recreate on the next step
client.destroy()
client = create(config)
client.on( ... )
Update the configuration values for the SDK to use.
This API will only modify the configurations provided, leaving other configurations as they were originally set, by performing a merge of the new values into the previous values.
Please note that the object provided to the updateConfig API may be different from the object retrieved from the getConfig API. This can happen when a format change has occurred and the SDK modifies the provided format to alleviate backwards-compatibility issues. We recommend ensuring the configurations you provide are as described by the config section.
// Instantiate the SDK with certain configs.
const client = create({
authentication: { ... },
logs: { ... },
...
})
// Modify a subsection of the configs at a later time.
// This will only update the specified configurations.
client.updateConfig({
logs: {
    logLevel: 'debug'
}
})
Add an event listener for the specified event type. The event is emitted by the SDK instance.
(string)
The event type for which to add the listener.
(Function)
The listener for the event type. The parameters of the listener depend on the event type.
// Listen for events of a specific type emitted by the SDK.
client.on('dummy:event', function (params) {
// Handle the event.
})
Retrieve information about the browser being used.
Browser information being defined indicates that the browser supports basic WebRTC scenarios.
const details = client.getBrowserDetails()
log(`Browser in use: ${details.browser}, version ${details.version}.`)
Retrieves information about the current user.
Object : user The user data.
string : user.username The username of the current user. Note that this username can take different encoded forms; it is not meant to be displayed to a user.
string : user.token The current access token.
The authentication credentials have been set. You can check the set user details with the getUserInfo API.
(Object)
There was an error with authentication.
(Object)
Name | Description |
---|---|
params.error api.BasicError | The Basic error object. |
An error occurred with server authorization.
This event will be emitted anytime a REST request to the server is rejected due to an authorization issue. This may occur for invalid credentials or expired tokens, depending on which form of authentication the application has chosen to use.
(Object)
Name | Description |
---|---|
params.error api.BasicError | The Basic error object. |
The 'call' namespace (within the 'api' type) is used to make audio and video calls to and from SIP users and PSTN phones.
Call functions are all part of the 'call' namespace.
Information about a Call.
Can be retrieved using the call.getAll or call.getById APIs.
Type: Object
(string)
: The ID of the call.
(user.UserID)
: A unique identifier (uri) of the person who made the call.
(user.UserID)
: A unique identifier (uri) of the person who receives the call.
(string)
: The direction in which the call was created. Can be 'outgoing' or 'incoming'.
(string)
: The current status of the call's media connection. See
call.mediaConnectionStates
for possible states.
(boolean)
: Indicates whether this call is currently being held locally.
(boolean)
: Indicates whether this call is currently being held remotely.
(Array<string>)
: A list of Track IDs that the call is sending to the remote participant.
(Array<string>)
: A list of Track IDs that the call is receiving from the remote participant.
(call.MediaOffered?)
: Information about what media was offered by the person who made the call.
(Object)
: Information about the other call participant.
(call.BandwidthControls)
: The bandwidth limitations set for the call.
(Array<call.CustomParameter>)
: The locally set Custom Parameters for the call.
(number)
: The start time of the call in milliseconds since the epoch.
(number?)
: The end time of the call in milliseconds since the epoch.
The MediaConstraint type defines the format for configuring media options.
Either the exact or ideal property should be provided. If both are present, the exact value will be used.
When the exact value is provided, it will be the only value considered for the option. If it cannot be used, the constraint will be considered an error.
When the ideal value is provided, it will be considered as the optimal value for the option. If it cannot be used, the closest acceptable value will be used instead.
A string value can be provided directly instead of using the MediaConstraint format. Using a string directly is not recommended, since behaviour may differ depending on browser and media property. For most properties, a direct string value will be handled as ideal behaviour, but some properties may follow the exact behaviour (e.g. deviceId).
Type: Object
(string?)
: The required value for the constraint. Other values will not be accepted.
(string?)
: The ideal value for the constraint. Other values will be considered if necessary.
// Specify video resolution when making a call.
client.call.make(destination, {
audio: true,
video: true,
videoOptions: {
// Set height and width constraints to ideally be 1280x720.
height: { ideal: 720 },
width: { ideal: 1280 }
}
})
The MediaOffered type defines what media capabilities are offered by the person making a call. This is an optional property and may therefore be null if it is not known or if it is associated with the caller's side of the call.
Type: Object
(boolean)
: Specifies if any audio capability has been offered by the caller. If set to true, then the caller is capable of supporting at least one audio stream in the current call.
(boolean)
: Specifies if any video capability has been offered by the caller. If set to true, then the caller is capable of supporting at least one video stream in the current call.
The BandwidthControls type defines the format for configuring media and/or track bandwidth options. BandwidthControls only affect received remote tracks of the specified type.
Type: Object
(number?)
: The desired combined bandwidth bitrate in kilobits per second for all media in the call.
(number?)
: The desired bandwidth bitrate in kilobits per second for received remote audio.
(number?)
: The desired bandwidth bitrate in kilobits per second for received remote video.
// Specify received remote video bandwidth limits when making a call.
client.call.make(destination, mediaConstraints,
{
bandwidth: {
video: 5
}
}
)
The DSCPControls type defines the format for configuring network priorities (DSCP marking) for the media traffic.
If DSCPControls are not configured for a call, the network priority of the traffic for all media kinds will be the default (i.e., "low").
Type: Object
(RTCPriorityType?) : The desired network priority for audio traffic (see the RTCPriorityType Enum for the list of possible values).
(RTCPriorityType?) : The desired network priority for video traffic (see the RTCPriorityType Enum for the list of possible values).
(RTCPriorityType?) : The desired network priority for screen share traffic (see the RTCPriorityType Enum for the list of possible values).
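A sketch of the shape of a DSCPControls object follows. The property names (audioNetworkPriority and the like) are assumptions, since they are not spelled out above, and whether the priorities take effect depends on browser support for RTCPriorityType.

```javascript
// Sketch of a DSCPControls object, e.g. passed in call options as
// client.call.make(destination, mediaConstraints, { dscpControls }).
// Property names below are assumed; browser support for RTCPriorityType varies.
const dscpControls = {
  audioNetworkPriority: 'high',
  videoNetworkPriority: 'medium',
  screenNetworkPriority: 'low'
}
```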
Configuration options for an RTCPeerConnection. It represents an RTCPeerConfiguration dictionary, whose parameters are documented here.
Type: Object
(string = 'unified-plan') : The sdpSemantics to use ('unified-plan' or 'plan-b'). As 'plan-b' has become a deprecated option, it will be removed in the future.
(number?) : An unsigned 16-bit integer value which specifies the size of the prefetched ICE candidate pool. The default value is 0 (meaning no candidate prefetching will occur).
(string?) : The current ICE transport policy; if the policy isn't specified, 'all' is assumed by default. Possible values are: 'all', 'public', 'relay'.
(string?) : For further description of this and other properties, see RTCPeerConnection's configuration parameters.
Type: Object
((Array<string> | string)) : Either an array of URLs for reaching several ICE servers, or a single URL for reaching one ICE server. See the RTCIceServers.urls documentation to learn more about the actual URL format. Starting with Chromium 110, TURN(S) URLs must only contain a transport parameter in the query section, and STUN URLs must not specify any query section.
(string?) : The username needed by the ICE server.
(string?) : The credential needed by the ICE server.
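The Chromium 110 URL restriction can be illustrated with a few example URL forms. The hostnames are placeholders.

```javascript
// Example ICE server URL forms per the Chromium 110 restriction above.
// Hostnames are placeholders.
const validUrls = [
  'stun:stun.example.com:3478',               // STUN: no query section
  'turn:turn.example.com:3478?transport=udp', // TURN: transport parameter only
  'turns:turn.example.com:443?transport=tcp'
]
const invalidUrls = [
  'stun:stun.example.com:3478?transport=udp',        // STUN must not have a query section
  'turn:turn.example.com:3478?transport=udp&foo=bar' // extra query parameter not allowed
]
```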
This object is provided to the IceCollectionCheckFunction, and contains the necessary information about the call (i.e., call ID, current call operation), and information about the ongoing ICE collection, such as the list of all ICE candidates collected so far and the ICE gathering state.
Type: Object
(string)
: The ID of the call.
(string)
: The current operation of the call.
(string) : The reason the check function was called. Three possible values:
'NewCandidate' - A new ICE candidate was collected. Note: multiple new ICE candidates may have been collected.
'IceGatheringStateChanged' - The ICE gathering state changed.
'Scheduled' - A scheduled call (the first invocation, and subsequent invocations based on the wait value returned by the IceCollectionCheckFunction).
(Array<RTCIceCandidate>)
: An array of all ICE candidates collected so far.
(number)
: The time elapsed since the start of the ICE collection process.
(string) : The current ICE gathering state. See RTCPeerConnection.iceGatheringState.
(Object) : The current configuration for the RTCPeerConnection.
(string) : The current local session description set on the peer.
The form of the ICE collection check function, the arguments that it receives, and the outputs expected.
This function is provided the necessary details of the current WebRTC session and ICE collection (IceCollectionInfo), which it can use to dictate how to proceed with a call. The function can be invoked for three different reasons: a new ICE candidate was collected, the ICE gathering state changed, or a scheduled call based on the wait time set after an initial invocation of the function.
The function must then return an appropriate result object in the format of IceCollectionCheckResult, which will dictate how the call will proceed. An incorrect return object, or result type, will cause the call to end with an error.
[Default] The default IceCollectionCheckFunction proceeds to negotiation if iceGatheringState is "complete" at any stage; otherwise it waits, bounded by the ideal and maximum collection timeouts. These timeouts can be configured with the iceCollectionIdealTimeout and iceCollectionMaxTimeout properties of the call config.
Type: Function
(call.IceCollectionInfo)
Information about the current status of the ICE candidate collection.
(Object)
Configurations provided to the SDK for ICE collection timeout boundaries.
Name | Description |
---|---|
iceTimeouts.iceCollectionIdealTimeout number | The amount of time to wait for ideal candidates, in milliseconds. See config.call for more information. |
iceTimeouts.iceCollectionMaxTimeout number | The maximum amount of time to wait for ICE collection, in milliseconds. See config.call for more information. |
call.IceCollectionCheckResult : Information on how to proceed with the call and/or ICE collection.
function isRelayCandidate (candidate) {
// NOTE: This would need to be different for Firefox since the `.type` property doesn't exist
// and we would need to parse it ourselves in the `.candidate` property.
return candidate.type === 'relay'
}
function myIceCollectionCheck ({ iceGatheringState, iceCandidates }, iceTimeouts) {
if (iceGatheringState === 'complete') {
if (iceCandidates.some(isRelayCandidate)) {
return { type: 'StartCall' }
} else {
      return { type: 'Error', error: new Error('Failed to start call because there are no relay candidates.') }
}
} else {
return { type: 'Wait' }
}
}
Type: Object
(string) : Indicates how the system should proceed with the call operation / ICE collection. The possible values are:
'StartCall' - instruct the system to start the call with the currently gathered ICE candidates and other information.
'Error' - instruct the system to fail the call with an error. The error to communicate to the user should be specified in the error property.
'Wait' - instruct the system to wait for the specified amount of time before triggering a new ICE collection check. The amount of wait time should be specified in the wait property.
(string) : An error to be sent to the user when the type of the result is IceCollectionCheckResultType.Error.
(number) : The amount of time (in milliseconds) to wait before triggering a new ICE collection check. This is only valid if the type of the result is IceCollectionCheckResultType.Wait. If a value is not provided, the ICE collection check function will only be triggered for new candidates or when the ICE gathering state changes.
Type: Object
(string)
: The id corresponding to the call for which the handler is being run.
(RTCSdpType) : The session description's type.
(string) : The step that will occur after the SDP Handlers are run. Will be either 'set' (the SDP will be set locally) or 'send' (the SDP will be sent to the remote endpoint).
(string) : Which end of the connection created the SDP.
The form of an SDP handler function and the expected arguments that it receives.
Type: Function
(Object)
The SDP so far (could have been modified by previous handlers).
(call.SdpHandlerInfo)
Additional information that might be useful when making SDP modifications.
(Object)
The SDP in its initial state.
Object : The resulting modified SDP based on the changes made by this function.
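As an illustration, a minimal pass-through handler is sketched below. It only logs which step is about to occur and returns the SDP unchanged; a real handler would inspect and return a modified copy of the SDP.

```javascript
// Sketch of an SdpHandlerFunction: logs the upcoming step and returns
// the SDP unchanged. Real handlers return a (possibly modified) SDP.
function logStepHandler (newSdp, info, originalSdp) {
  // info.step is 'set' or 'send'; info.callId identifies the call.
  console.log(`SDP handler run for call ${info.callId} before '${info.step}'`)
  return newSdp
}

// Handlers are provided to the SDK at creation time:
// create({ call: { sdpHandlers: [logStepHandler] } })
```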
The state representation of a Media object. Media is a collection of Track objects.
Type: Object
(string)
: The ID of the Media object.
(boolean)
: Indicator on whether this media is local or remote.
(Array<call.TrackObject>)
: A list of Track objects that are contained in this Media object.
A Track is a stream of audio or video media from a single source.
Tracks can be retrieved using the Media module's getTrackById API and manipulated with other functions of the Media module.
Type: Object
(boolean)
: Indicator of whether this Track is disabled or not. If disabled, it cannot be re-enabled.
(boolean)
: Indicator of whether this Track is a locally created one or is a remote one.
(string)
: The ID of the Track.
(string)
: The kind of Track this is (audio, video).
(string)
: The label of the device this Track uses.
(boolean)
: Indicator on whether this Track is muted or not.
(boolean)
: Indicator on whether this Track is receiving media from its source or not. When true, the Track can be considered removed. This indicator is affected by actions outside the control of the SDK, such as the remote endpoint of a Call ceasing to send media for a remote Track, or the browser temporarily disabling the SDK's access to a local Track's source.
(string)
: The state of this Track. Can be 'live' or 'ended'.
(string)
: The ID of the Media Stream that includes this Track.
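Given the TrackObject fields above, selecting a set of live video tracks from a list of track objects can be sketched as follows. The 'kind' and 'state' field names come from the description above.

```javascript
// Sketch: pick live video tracks from an array of TrackObjects, using
// the 'kind' ('audio'/'video') and 'state' ('live'/'ended') fields.
function liveVideoTracks (tracks) {
  return tracks.filter(track => track.kind === 'video' && track.state === 'live')
}
```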
A collection of media devices and their information.
Type: Object
(Array<call.DeviceInfo>)
: A list of camera device information.
(Array<call.DeviceInfo>)
: A list of microphone device information.
(Array<call.DeviceInfo>)
: A list of speaker device information.
Custom SIP headers can be used to convey additional information to a SIP endpoint.
These parameters must be configured on the server prior to making a request, otherwise the request will fail when trying to include the parameters.
These parameters can be specified with the call.make and call.answer APIs. They can also be set after a Call is established using the call.setCustomParameters API, and sent using the call.sendCustomParameters API.
Custom headers may be received at any time throughout the duration of a call. A remote endpoint may send custom headers when starting a call, answering a call, or during call updates such as hold/unhold and the addition/removal of media in the call. When these custom headers are received, the SDK will emit a call:customParameters event which will contain the custom parameters that were received.
A Call's custom parameters are stored on the Call's CallObject, which can be retrieved using the call.getById or call.getAll APIs. These are the parameters that will be sent to the remote endpoint of the Call. Parameters received from a Call are not stored as part of the CallObject and are only provided via the call:customParameters event.
Type: Object
// Specify custom parameters when making a call.
client.call.make(destination, mediaConstraints,
{
customParameters: [
{
name: 'X-GPS',
value: '42.686032,23.344565'
}
]
}
)
Starts an outgoing call as an anonymous user.
(string)
Full user ID of the call recipient.
(Object)
Information needed to validate a token anonymous call.
Name | Description |
---|---|
credentials.realm Object | The realm used to encrypt the tokens. |
credentials.accountToken Object? | The encrypted account token of the account making the call. |
credentials.fromToken Object? | The encrypted SIP address of the account/caller. |
credentials.toToken Object? | The encrypted SIP address of the callee. |
credentials.authAccount Object? | The account used to authenticate if no token is provided. |
(Object)
Call options.
Name | Description |
---|---|
callOptions.from string | The URI of the user making the call. |
callOptions.audio Boolean (default true) | Whether the call should have audio on start. Currently, audio-less calls are not supported. |
callOptions.audioOptions Object? | Options for configuring the call's audio. |
callOptions.audioOptions.deviceId call.MediaConstraint? | ID of the microphone to receive audio from. |
callOptions.bandwidth call.BandwidthControls? | Options for configuring media's bandwidth. |
callOptions.video Boolean (default false) | Whether the call should have video on start. |
callOptions.videoOptions Object? | Options for configuring the call's video. |
callOptions.videoOptions.deviceId call.MediaConstraint? | ID of the camera to receive video from. |
callOptions.videoOptions.height call.MediaConstraint? | The height of the video. |
callOptions.videoOptions.width call.MediaConstraint? | The width of the video. |
callOptions.videoOptions.frameRate call.MediaConstraint? | The frame rate of the video. |
callOptions.screen Boolean (default false) | Whether the call should have screenshare on start. |
callOptions.screenOptions Object? | Options for configuring the call's screenShare. |
callOptions.screenOptions.height call.MediaConstraint? | The height of the screenShare. |
callOptions.screenOptions.width call.MediaConstraint? | The width of the screenShare. |
callOptions.screenOptions.frameRate call.MediaConstraint? | The frame rate of the screenShare. |
callOptions.displayName string? | Custom display name to be provided to the destination. Only used with token-less anonymous calls. Not supported in all environments; the default display name may be used instead. |
callOptions.customParameters Array<call.CustomParameter>? | Custom SIP header parameters for the SIP backend. |
string : The ID of the outgoing call.
// Make a basic anonymous call.
let callee = 'user1@example.com';
let callOptions = { ... };
let callId = client.call.makeAnonymous(callee, {}, callOptions);
// Make a time-limited token anonymous call.
let callee = 'user1@example.com';
let account = 'user2@example.com';
let callOptions = { ...
customParameters: [
{
"name": "X-GPS",
"value": "42.686032,23.344565"
}
],
...
};
// Generate / Retrieve the encrypted tokens.
const key = 'abc123...';
const credentials = {
accountToken: createToken(account, key),
fromToken: createToken('sip:' + account, key),
toToken: createToken('sip:' + callee, key),
realm: 'realmAbc123...'
};
let callId = client.call.makeAnonymous(callee, credentials, callOptions);
Ends an ongoing call.
The SDK will stop any/all local media associated with the call. Events will be emitted to indicate which media tracks were stopped. See the call:trackEnded event for more information.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:stateChange event locally when the operation completes. The remote participant will be notified, through their own call:stateChange event, that the call was ended.
(string)
The ID of the call to end.
Puts a call on hold.
The specified call to hold must not already be locally held. Any/all media received from the remote participant will stop being received, and any/all media being sent to the remote participant will stop being sent.
Some call operations cannot be performed while the call is on hold. The call can be taken off hold with the call.unhold API.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:stateChange event locally when the operation completes. The remote participant will be notified of the operation through a call:stateChange event as well.
(string)
The ID of the call to hold.
Takes a call off hold.
The specified call to unhold must be locally held. If the call is not also remotely held, call media will be reconnected as it was before the call was held.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:stateChange event locally when the operation completes. The remote participant will be notified of the operation through a call:stateChange event as well.
(string)
The ID of the call to unhold.
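A sketch of toggling between hold and unhold based on the call's current information. The localHold property name used below is an assumption about the CallObject shape; check the CallObject documentation for the actual field name.

```javascript
// Hypothetical helper: decide whether to hold or unhold, based on whether
// the call is already locally held (`localHold` is an assumed field name).
function nextHoldAction(call) {
  return call && call.localHold ? 'unhold' : 'hold'
}

// Usage sketch (assumes `client` and `callId` from earlier examples):
// const call = client.call.getById(callId)
// if (nextHoldAction(call) === 'unhold') {
//   client.call.unhold(callId)
// } else {
//   client.call.hold(callId)
// }
```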
Adds new media tracks to an ongoing call. New media tracks will be retrieved from the specified sources and added to the call.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:newTrack event for both the local and remote users to indicate a track has been added to the Call.
(string)
The ID of the call to add media to.
(Object = {})
The media options to add to the call.
Name | Description
---|---
media.audio boolean (default false) | Whether to add audio to the call.
media.video boolean (default false) | Whether to add video to the call.
media.screen boolean (default false) | Whether to add the screenshare to the call. (Note: Screensharing is not supported on iOS Safari.)
media.audioOptions Object? | Options for configuring the call's audio.
media.audioOptions.deviceId call.MediaConstraint? | ID of the microphone to receive audio from.
media.videoOptions Object? | Options for configuring the call's video.
media.videoOptions.deviceId call.MediaConstraint? | ID of the camera to receive video from.
media.videoOptions.height call.MediaConstraint? | The height of the video.
media.videoOptions.width call.MediaConstraint? | The width of the video.
media.videoOptions.frameRate call.MediaConstraint? | The frame rate of the video.
media.screenOptions Object? | Options for configuring the call's screenShare.
media.screenOptions.height call.MediaConstraint? | The height of the screenShare.
media.screenOptions.width call.MediaConstraint? | The width of the screenShare.
media.screenOptions.frameRate call.MediaConstraint? | The frame rate of the screenShare.
(Object? = {})
Name | Description
---|---
options.bandwidth call.BandwidthControls? | Options for configuring media's bandwidth.
options.dscpControls call.DSCPControls? | Options for configuring DSCP markings on the media traffic.
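A sketch of building the media options for call.addMedia. The device IDs and dimension values below are hypothetical placeholders, and the exact/ideal constraint shape mirrors the browser's MediaTrackConstraints (an assumption about call.MediaConstraint).

```javascript
// Select audio and video, with constraints on the devices used
// (all IDs and values here are illustrative placeholders).
const media = {
  audio: true,
  audioOptions: {
    deviceId: { exact: 'microphoneId' }
  },
  video: true,
  videoOptions: {
    deviceId: { exact: 'cameraId' },
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { ideal: 30 }
  }
}

// Add the selected media to an ongoing call (assumes `client` and `callId`
// from earlier examples):
// client.call.addMedia(callId, media)
```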
Removes tracks from an ongoing call.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:trackEnded event for both the local and remote users to indicate that a track has been removed.
(string)
The ID of the call to remove media from.
(Array)
A list of track IDs to remove.
(Object? = {})
Name | Description
---|---
options.bandwidth call.BandwidthControls? | Options for configuring media's bandwidth.
Adds local video to an ongoing Call, to start sending to the remote participant.
Can only be used in a basic media scenario, where the Call does not already have video. For more advanced scenarios, the call.addMedia API can be used.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:tracksAdded event both for the local and remote users to indicate a track has been added to the Call.
(string)
ID of the call being acted on.
(Object?)
Options for configuring the call's video.
Name | Description
---|---
videoOptions.deviceId call.MediaConstraint? | ID of the camera to receive video from.
videoOptions.height call.MediaConstraint? | The height of the video.
videoOptions.width call.MediaConstraint? | The width of the video.
videoOptions.frameRate call.MediaConstraint? | The frame rate of the video.
(Object?)
Name | Description
---|---
options.bandwidth call.BandwidthControls? | Options for configuring media's bandwidth.
options.dscpControls call.DSCPControls? | Options for configuring DSCP markings on the media traffic.
Removes local video from an ongoing Call, stopping it from being sent to the remote participant.
Can only be used in a basic media scenario, where the Call has only one video track. For more advanced scenarios, the call.removeMedia API can be used.
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:tracksRemoved event for both the local and remote users to indicate that a track has been removed.
(string)
ID of the call being acted on.
Adds local screenshare to an ongoing Call, to start sending to the remote participant.
The latest SDK release (v4.X+) has not yet implemented this API in the same way that it was available in previous releases (v3.X). In place of this API, the SDK has a more general API that can be used for this same behaviour.
The call.addMedia API is a general-purpose API for adding media to a call, and covers the same functionality as startScreenshare. Selecting only the screen options when using call.addMedia performs the same behaviour as startScreenshare.
// Select media options for adding only screenshare.
const media = {
audio: false,
video: false,
screen: true,
screenOptions: { ... }
}
// Add the selected media to the call.
client.call.addMedia(callId, media)
Removes local screenshare from an ongoing Call, stopping it from being sent to the remote participant.
The latest SDK release (v4.X+) has not yet implemented this API in the same way that it was available in previous releases (v3.X). In place of this API, the SDK has a more general API that can be used for this same behaviour.
The call.removeMedia API is a general-purpose API for removing media from a call, and covers the same functionality as stopScreenshare. Specifying only the screen track when using call.removeMedia performs the same behaviour as stopScreenshare.
There is a caveat that if a Call has multiple video tracks (for example, both a video and a screen track), the SDK itself cannot yet differentiate one from the other. The application will need to know which track was the screen track in this scenario.
const call = client.call.getById(callId)
// Get the ID of any/all video tracks on the call.
const videoTracks = call.localTracks.filter(trackId => {
const track = call.media.getTrackById(trackId)
// Both video and screen tracks have kind of 'video'.
return track.kind === 'video'
})
// Pick out the screen track.
const screenTrack = videoTracks[0]
// Remove screen from the call.
client.call.removeMedia(callId, [ screenTrack ])
Replace a call's track with a new track of the same media type.
The operation will remove the old track from the call and add a new track to the call. This effectively allows for changing the track constraints (e.g. the device used) for an ongoing call.
Because it completely replaces an old track with a new one, the old track's state characteristics are not carried over to the new track. (e.g. if the old track's state was 'muted' and replacement of this track has occurred, the new track's state will be 'unmuted', as this is its default state)
The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:trackReplaced event locally when the operation completes. The newly added track will need to be handled by the local application. The track will be replaced seamlessly for the remote application, which will not receive an event.
(string)
The ID of the call to replace the track of.
(string)
The ID of the track to replace.
(Object = {})
The media options.
Name | Description
---|---
media.audio boolean (default false) | Whether to create an audio track.
media.audioOptions Object? | Options for configuring the audio track.
media.audioOptions.deviceId call.MediaConstraint? | ID of the microphone to receive audio from.
media.video boolean (default false) | Whether to create a video track.
media.videoOptions Object? | Options for configuring the video track.
media.videoOptions.deviceId call.MediaConstraint? | ID of the camera to receive video from.
media.videoOptions.height call.MediaConstraint? | The height of the video.
media.videoOptions.width call.MediaConstraint? | The width of the video.
media.videoOptions.frameRate call.MediaConstraint? | The frame rate of the video.
const callId = ...
// Get the video track used by the call.
const videoTrack = ...
// Replace the specified video track of the call with a new
// video track.
client.call.replaceTrack(callId, videoTrack.id, {
// The track should be replaced with a video track using
// a specific device. This effectively changes the input
// device for an ongoing call.
video: true,
videoOptions: {
deviceId: cameraId
}
})
const callId = ...
// Get the video track used by the call.
const videoTrack = ...
// The specified video track can also be replaced with a new screen-sharing
// track, because screen sharing is delivered to the remote peer as a video
// stream. The user will then be prompted to pick a specific screen to share,
// which selects the device ID.
client.call.replaceTrack(callId, videoTrack.id, {
  // The track should be replaced with a screen sharing track.
  // Note that the 'screenOptions' property is not mandatory; the API will use
  // default values for properties like width, height, and frameRate.
screen: true
})
Attempt to re-establish a media connection for a call.
This API will perform a "refresh" operation on the call with the intention of resolving media issues that may have been encountered. This API is only necessary after the Call's mediaConnectionState has entered the failed state, but may be used in other scenarios.
After the operation completes successfully, the Call will be re-establishing its media connection. By this time, or shortly after, the Call's mediaConnectionState should have transitioned to checking (via a call:mediaConnectionChange event) to signify the re-establishment. It will then transition to either the connected or failed state, similar to during the initial Call establishment.
If this operation fails, then the Call will not attempt the re-establishment and will remain in the failed mediaConnectionState.
Behaviour during the operation may differ slightly based on the browser. Notably, Firefox will always transition to the checking mediaConnectionState no matter what the previous state was, whereas Chrome will skip the checking state, transitioning directly to either connected or failed. This has the implication for Chrome that if the state does not change (for example, the Call is in the failed state before the media restart operation, and media re-establishment fails), then there will be no call:mediaConnectionChange event emitted. For this reason, Chrome-based applications may need a short delay after receiving the call:mediaRestart event before checking the Call's updated mediaConnectionState, to ensure the application is acting on the "latest" state.
The SDK will emit a call:mediaRestart event when the operation completes.
The progress of the operation will be tracked via the call:operation event.
(string)
The ID of the call to act on.
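Following the Chrome caveat above, an application may want to delay briefly after the call:mediaRestart event before reading the media connection state. A minimal sketch; the 500 ms delay value is an arbitrary assumption.

```javascript
// Resolve after the given number of milliseconds.
function wait(ms) {
  return new Promise(resolve => setTimeout(resolve, ms))
}

// Usage sketch (assumes `client` from earlier examples):
// client.on('call:mediaRestart', async params => {
//   await wait(500) // Give Chrome time to settle the state (assumption).
//   const call = client.call.getById(params.callId)
//   if (call.mediaConnectionState === client.call.mediaConnectionStates.FAILED) {
//     // Re-establishment failed; inform the user or retry.
//   }
// })
```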
Plays an audio file to the remote side of the Call. This API will temporarily replace the Call's local audio track with an audio file for the duration of the audio file.
The Call must be in the Connected state and have a local audio track for this operation.
This API will not affect media other than the local audio track. Other media on the Call, such as local video or remote audio, can be muted or unrendered during this operation if desired.
This operation will use the browser's Audio constructor to read in the audio file. The filePath parameter will be used directly with Audio, so it can be either a relative file path to your audio file or a URL pointing to a file.
This API returns a promise that can be used to track the progress of the operation. The promise will resolve after the operation completes or reject if an error is encountered. Additionally, an extra onPlaying callback is provided on the Promise to indicate when the audio file actually begins to play. See the code example below for a sample.
The SDK will emit call:operation events locally as the operation progresses. The remote endpoint will not receive an event for this operation.
If an error is encountered during the operation and the SDK is unable to replace the original local audio track, then that track will be forcibly ended and a media:trackEnded event will be emitted for it. This will release the microphone and avoid losing access to the track while it is active, allowing the application to resolve the scenario by using the call.replaceTrack API to revert the local audio track.
Promise
:
Promise that resolves when the operation is complete.
// The API returns a promise which will provide feedback about the operation.
client.call.playAudioFile(callId, filePath)
.then(() => {
// Audio file has finished playing; call has reverted to previous audio.
})
.catch(err => {
// An error has occurred during the operation.
})
// The returned promise can optionally provide feedback midway through the
// operation. A chainable `onPlaying` method denotes when the audio file has
// started to play and the Call's audio has been replaced.
client.call.playAudioFile(callId, filePath)
.onPlaying(({ duration }) => {
// Audio file has started playing; call audio is now the file.
// Note: Calling `onPlaying` must be done before `then` and `catch` for it
// to be chainable.
})
.then(() => { ... })
.catch(err => { ... })
Retrieves the information of all calls made during the current session.
Array<call.CallObject>
:
Call objects.
let calls = client.call.getAll()
let currentCalls = calls.filter(call => {
return call.state === client.call.states.CONNECTED
})
Retrieves the information of a single call with a specific call ID.
(string)
The ID of the call to retrieve.
call.CallObject
:
A call object.
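A small guard can avoid acting on a missing CallObject when the ID does not match a known call; that getById returns undefined for unknown IDs is an assumption here.

```javascript
// Hypothetical guard around call.getById results.
function describeCall(call) {
  if (!call) {
    return 'No call found for that ID'
  }
  return 'Call state: ' + call.state
}

// Usage sketch (assumes `client` and `callId` from earlier examples):
// const call = client.call.getById(callId)
// console.log(describeCall(call))
```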
Set the Custom Parameters of a Call, to be provided to the remote endpoint.
The specified parameters will be saved as part of the call's information throughout the duration of the call. All subsequent call operations will include these custom parameters. Therefore, invalid parameters, or parameters not previously configured on the server, will cause subsequent call operations to fail.
A Call's custom parameters are a property of the Call's CallObject, which can be retrieved using the call.getById or call.getAll APIs.
The custom parameters set on a call can be sent directly with the call.sendCustomParameters API.
Custom parameters can be removed from a call's information by setting them as undefined (e.g., call.setCustomParameters(callId)). Subsequent call operations will then no longer send custom parameters.
(string)
The ID of the call.
(Array<call.CustomParameter>)
The custom parameters to set.
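A sketch of setting custom parameters, reusing the X-GPS parameter shape shown in the makeAnonymous example earlier. Remember that parameter names must already be configured on the server.

```javascript
// Custom parameters follow the { name, value } shape used earlier in
// this document.
const customParameters = [
  {
    name: 'X-GPS',
    value: '42.686032,23.344565'
  }
]

// Usage sketch (assumes `client` and `callId` from earlier examples):
// client.call.setCustomParameters(callId, customParameters)
// Remove them again later by passing no parameters:
// client.call.setCustomParameters(callId)
```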
Send the custom parameters on an ongoing call to the server. The server may either consume the headers or relay them to another endpoint, depending on how the server is configured.
A Call's custom parameters are a property of the Call's CallObject, which can be retrieved using the call.getById or call.getAll APIs.
Before sending custom parameters, they need to be first set on the existing Call. To set, change or remove the custom parameters on a call, use the call.setCustomParameters API.
(string)
The ID of the call being acted on.
Send DTMF tones to a call's audio.
The provided tone can either be a single DTMF tone (eg. '1') or a sequence of DTMF tones (eg. '123') which will be played one after the other.
The specified call must be either in Connected, Ringing, or Early Media state, otherwise invoking this API will have no effect.
The tones will be sent as out-of-band tones if supported by the call, otherwise they will be added in-band to the call's audio.
The progress of the operation will be tracked via the call:operation event.
(string)
ID of the call being acted on.
(string)
DTMF tone(s) to send. Valid characters are ['0','1','2','3','4','5','6','7','8','9','#','*' and ','].
(number = 100)
The amount of time, in milliseconds, that each DTMF tone should last.
(number = 70)
The length of time, in milliseconds, to wait between tones.
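Since invalid characters make the operation a no-op, an application may want to validate the tone string first. A sketch with a hypothetical validation helper based on the valid-character list above:

```javascript
// Hypothetical helper: check a tone string against the valid DTMF
// characters listed above ('0'-'9', '#', '*' and ',').
function isValidDtmfSequence(tones) {
  return /^[0-9#*,]+$/.test(tones)
}

// Usage sketch (assumes `client` and `callId` from earlier examples):
// if (isValidDtmfSequence('123#')) {
//   client.call.sendDTMF(callId, '123#', 100, 70)
// }
```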
Retrieve a snapshot of the low-level information of the Call through a statistical report.
The data retrieved is a RTCStatsReport object, which contains many individual RTCStats. These are advanced statistics gathered by the browser providing insights into the Call at a certain point in time. Aggregating reports over a period of time would allow a low-level analysis of the Call for that period. As an example, this could be done to determine the media quality during the Call.
A Track ID can optionally be provided to get a report for a specific local Track of the Call.
This API will return a promise which, when resolved, will contain the report of the particular call. The progress of the operation will be tracked via the call:operation event.
The SDK will emit a call:statsReceived event, after the operation completes, that has the report.
(string)
The ID of the Call to retrieve the report.
(string?)
ID of a local Track being used by the Call. If not provided, the RTCStatsReport is generated for the Call itself.
Promise
:
A promise that will resolve with the stats report or an error if it fails.
// Get a snapshot of the Call's stats.
// This may be done on a regular interval to collect data over time.
try {
// The API will return a promise that resolves with the stats.
const result = await client.call.getStats(callId)
result.forEach(stats => {
// Handle the data on its own or collate with previously gathered stats
// for analysis.
...
})
} catch (err) {
// Handle the error.
const { code, message } = err
...
}
Retrieve the list of available and supported codecs based on the browser's capabilities for sending media.
This API will return a promise which, when resolved, will contain the list of available and supported codecs. In addition, the SDK emits a call:availableCodecs event upon retrieving that list of codecs.
This API is a wrapper for the static method RTCRtpSender.getCapabilities().
(string)
The kind of media, i.e., 'audio' or 'video', to get the list of available codecs of.
Promise
:
A promise that will resolve with an object containing the available codecs, along with the kind parameter that was supplied. If there was an error, it will return undefined.
try {
// The API will return a promise that resolves with the codecs.
const result = await client.call.getAvailableCodecs('audio')
result.forEach(codec => {
// Inspect the codec supported by browser by looking at its properties.
...
})
} catch (err) {
// Handle the error.
const { code, message } = err
...
}
Retrieve the call metrics report for a call.
The object returned from this API will be in JSON format. The top level object is the report and will include a timeline of events that were recorded during a call as well as a map object containing computed metrics.
Any event in a timeline will have its own timeline that may have recorded events. Events in a timeline are scoped to that timeline's event or report.
The report and some events may have additional data included in a data property.
See the reportEvents documentation for events and the metrics documentation for metrics.
(string)
The ID of the call to retrieve the report on.
Object
:
An object containing all metrics and data tracked against this call.
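A sketch of reading a duration out of a report's timeline. The report object is mocked here with a hypothetical event shape matching the description above (a timeline of events with start/end timestamps); in an application it would come from call.getReport.

```javascript
// Mocked report with a hypothetical event shape, for illustration only.
const report = {
  timeline: [
    { type: 'MAKE', start: 100, end: 350, timeline: [] }
  ],
  metrics: {}
}

// Find an event in the timeline and compute how long it took.
const makeEvent = report.timeline.find(event => event.type === 'MAKE')
const durationMs = makeEvent.end - makeEvent.start

// In an application, retrieve the real report instead:
// const report = client.call.getReport(callId)
```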
Set SDP Handler Functions that will be run as part of a pipeline for all future calls. This will replace any SDP Handlers that were previously set.
SDP handlers can be used to make modifications to the SDP (e.g., removing certain codecs) before they are processed or sent to the other side.
This is an advanced feature; changing the SDP handlers mid-call may cause unexpected behaviour in future call operations for that call.
(Array<call.SdpHandlerFunction>)
The list of SDP handler functions to modify SDP.
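A sketch of an SDP handler function. The (newSdp, info, originalSdp) signature and the VP8 filtering below are assumptions for illustration; a real handler should parse the SDP (rtpmap/fmtp lines) rather than string-match.

```javascript
// Hypothetical SDP handler: drop any SDP line mentioning the VP8 codec.
function removeVp8(newSdp, info, originalSdp) {
  return newSdp
    .split('\r\n')
    .filter(line => !line.includes('VP8'))
    .join('\r\n')
}

// Usage sketch (assumes `client` from earlier examples):
// client.call.setSdpHandlers([removeVp8])
```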
Changes the camera and/or microphone used for a Call's media input.
The latest SDK release (v4.X+) has not yet implemented this API in the same way that it was available in previous releases (v3.X). In place of this API, the SDK has a more general API that can be used for this same behaviour.
The same behaviour as the changeInputDevices API can be implemented using the general-purpose call.replaceTrack API. This API can be used to replace an existing media track with a new track of the same type, allowing an application to change certain aspects of the media, such as the input device.
const call = client.call.getById(callId)
// Get the ID of the Call's video track.
const videoTrack = call.localTracks.find(trackId => {
const track = client.media.getTrackById(trackId)
return track.kind === 'video'
})
// Select the new video options.
const media = {
video: true,
videoOptions: {
deviceId: 'cameraId'
}
}
// Change the call's camera by replacing the video track.
client.call.replaceTrack(callId, videoTrack, media)
Changes the speaker used for a Call's audio output. Supported on browsers that support HTMLMediaElement.setSinkId().
The latest SDK release (v4.X+) has not yet implemented this API in the same way that it was available in previous releases (v3.X). In place of this API, the SDK has a more general API that can be used for this same behaviour.
The same behaviour as the changeSpeaker API can be implemented by re-rendering the Call's audio track. A speaker can be selected when rendering an audio track, so changing a speaker can be simulated by unrendering the track with media.removeTracks, then re-rendering it with a new speaker with media.renderTracks.
const call = client.call.getById(callId)
// Get the ID of the Call's audio track.
const audioTrack = call.localTracks.find(trackId => {
const track = client.media.getTrackById(trackId)
return track.kind === 'audio'
})
// Where the audio track was previously rendered.
const audioContainer = ...
// Unrender the audio track we want to change speaker for.
client.media.removeTracks([ audioTrack ], audioContainer)
// Re-render the audio track with a new speaker.
client.media.renderTracks([ audioTrack ], audioContainer, {
speakerId: 'speakerId'
})
Possible states that a Call can be in.
A Call's state describes the current status of the Call. An application should use the state to understand how the Call, and any media associated with the Call, should be handled. Which state the Call is in defines how it can be interacted with, as certain operations can only be performed while in specific states, and tells an application whether the Call currently has media flowing between users. Unless stated otherwise, the Call's state pertains to both caller & callee.
The Call's state is a property of the CallObject, which can be retrieved using the call.getById or call.getAll APIs.
The SDK emits a call:stateChange event when a Call's state changes from one state to another.
Type: Object
(string)
: The (outgoing) call is being started. While in this state, no Call operations can be performed until the Call reaches the Initiated state.
(string)
: A call has been started and both the callee and caller may now perform further operations on the call object.
(string)
: The call has been received by both parties, and is waiting to be answered.
(string)
: The call has not been answered, but media is already being received. This may be network ringing media, IVR system media, or other pre-answer media. When early media is supported, this state may replace the Ringing state. This state is valid only for the caller's side.
(string)
: The call was disconnected before it could be answered. This state is valid only for the callee's side.
(string)
: Both parties are connected and media is flowing.
(string)
: Both parties are connected but no media is flowing.
(string)
: The call has ended.
// Use the call states to know how to handle a change in the call.
client.on('call:stateChange', function (params) {
const call = client.call.getById(params.callId)
// Check if the call now has media flowing.
if (call.state === client.call.states.CONNECTED) {
// The call is now active, and can perform midcall operations.
}
})
Possible states that a Call's media connection can be in.
A Call's media connection state describes the current status of media within the call. An application should use this state to understand whether the Call participants are able to see/hear each other or may be experiencing connection issues. The media connection state can be used alongside the Call state to determine if media issues are occurring while the participants are expecting to be connected.
An important state to check for is the FAILED state. This state signifies that there is no media connection between the call participants, and an action must be taken to resolve the problem. Using the call.restartMedia API will attempt to reconnect the media. See the call.restartMedia API description for more information.
These states are direct reflections of the possible RTCPeerConnection.iceConnectionState values.
The Call's media connection state is a property of the CallObject, which can be retrieved using the call.getById or call.getAll APIs.
The SDK emits a call:mediaConnectionChange event when a Call's media connection state changes from one state to another.
Type: Object
(string)
: The Call is initializing the local side of the connection and waiting on information from the remote side. When the information is received, the state will transition into checking as the Call automatically begins the connection process.
(string)
: The Call has received information from the remote endpoint and is working to establish the media connection. When finished, the state will transition to either connected or failed.
(string)
: A usable connection has been made and the Call will now have media. The connection may not be optimal, though, so the Call will continue establishment to improve the connection before going into the completed state.
(string)
: The media connection process has fully completed and the optimal connection has been established. While in this state, the Call endpoints will receive each other's media.
(string)
: Media has become disconnected and the Call endpoints have stopped receiving each other's media. The Call will automatically attempt to reconnect, transitioning back to completed if successful or to failed if not.
(string)
: The connection has failed and cannot be recovered automatically. A full media connection refresh is required to reestablish a connection. See the call.restartMedia API.
(string)
: The connection has been shut down and is no longer in use.
// Use the media connection states to check the status of the media connection of the Call.
client.on('call:mediaConnectionChange', function (params) {
// Retrieve the state of the Call this event is for.
const call = client.call.getById(params.callId)
const mediaConnectionStates = client.call.mediaConnectionStates
// Check the mediaConnectionState to determine which scenario the call is in.
switch (call.mediaConnectionState) {
case mediaConnectionStates.CONNECTED:
case mediaConnectionStates.COMPLETED:
// Media established: The Call's media is connected. The Call's participants
// are able to see/hear each other.
// These states will occur after Call establishment.
...
break
case mediaConnectionStates.NEW:
case mediaConnectionStates.CHECKING:
case mediaConnectionStates.DISCONNECTED:
// Media pending: The Call's media is not connected. The Call is working
// to connect media automatically.
// These states will occur during Call establishment and may occur midcall if there are
// connection issues (eg. poor network quality) or a Call participant has changed (eg. transfer).
...
break
case mediaConnectionStates.FAILED:
// Media has failed. The call requires a media refresh to reestablish.
// This state will occur after the `DISCONNECTED` state is encountered.
...
break
case mediaConnectionStates.CLOSED:
// Media ended due to the Call being ended.
// This state will occur after Call establishment.
...
break
}
})
Events used in the SDK's call reports.
As a call progresses, the operation(s)/function(s) being performed throughout the duration of a call are recorded as events in a call report. The call report can be retrieved via the call.getReport API. An application can use these event names to find the associated event(s) in the call report for more information on the event. See Call Reports tutorial for more information on call reports and events.
Type: Object
(string)
: Starts when the make operation starts. Ends when the make operation finishes.
(string)
: Starts when the send ringing feedback operation starts. Ends when the ringing feedback operation finishes.
(string)
: Starts when the SDK receives a call and ends when the incoming call is setup.
(string)
(string)
: Starts when the answer operation starts. Ends when the answer operation finishes.
(string)
: Starts when user media is requested from the browser and ends when the media is created.
(string)
: Starts when the local media begins processing, and ends when the offer is set and ice collection completes.
(string)
: Starts when the remote response is received, and ends when the remote media is set.
(string)
: Starts when ice candidate collection starts and ends when collection is complete.
(string)
: Starts and ends when a relay candidate is collected. Event data contains info on the candidate.
(string)
: Starts when the ignore operation starts. Ends when the ignore operation finishes.
(string)
: Starts when the reject operation starts. Ends when the reject operation finishes.
(string)
: Starts when the forward call operation starts. Ends when the forward operation finishes.
(string)
: Starts when the end operation starts. Ends when the end operation finishes.
(string)
: Starts when the call status update ended operation starts. Ends when the call status update ended operation finishes.
(string)
: Starts when the add basic media operation starts. Ends when the add basic media operation finishes.
(string)
: Starts when the add media operation starts. Ends when the add media operation finishes.
(string)
: Starts when a remote add media notification is received and ends when the operation is handled.
(string)
: Starts when the remove basic media operation starts. Ends when the remove basic operation finishes.
(string)
: Starts when the remove media operation starts. Ends when the remove media operation finishes.
(string)
: Starts when a remote remove media notification is received and ends when the operation is handled.
(string)
: Starts when the media restart operation starts. Ends when the media restart operation finishes.
(string)
: Starts when the replace track operation starts. Ends when the replace track operation finishes.
(string)
: Starts when the hold operation starts. Ends when the hold operation finishes.
(string)
: Starts when a remote hold notification is received and ends when the operation is handled.
(string)
: Starts when the unhold operation starts. Ends when the unhold operation finishes.
(string)
: Starts when a remote unhold notification is received and ends when the operation is handled.
(string)
: Starts when a REST request is to be made for an operation and ends when a response is received, or it times out.
(string)
: Starts when the play audio operation starts. Ends when the play audio operation finishes.
(string)
: Starts when the start music on hold operation starts. Ends when the start music on hold operation finishes.
(string)
: Starts when the stop music on hold operation starts. Ends when the stop music on hold operation finishes.
(string)
: Starts when the send custom parameters operation starts. Ends when the send custom parameters operation finishes.
(string)
: Starts when the get stats operation starts. Ends when the get stats operation finishes.
(string)
: Starts when the send DTMF operation starts. Ends when the DTMF operation finishes.
(string)
: Starts when the resync operation starts. Ends when the resync operation finishes.
(string)
: Starts when the direct transfer operation starts. Ends when the direct transfer operation finishes.
(string)
: Starts when the consultative transfer operation starts. Ends when the consultative transfer operation finishes.
(string)
: Starts when the join operation starts. Ends when the join operation finishes.
(string)
: Starts when the get available codecs operation starts. Ends when the get available codecs operation finishes.
(string)
: Starts when the slow start operation starts. Ends when the slow start operation finishes.
const report = client.call.getReport('callId')
const getAvailableCodecsEvent = report.timeline.find(event => event.type === client.call.reportEvents.GET_AVAILABLE_CODECS)
log(`Took ${getAvailableCodecsEvent.end - getAvailableCodecsEvent.start}ms to get available codecs.`)
List of metrics available as part of a Call Report. Metrics are calculated only for the successful scenarios.
As a call progresses, timings are calculated for the duration of operations and other events. They are recorded in a call report that can be retrieved via the call.getReport API.
Type: Object
(string)
: The duration of a completed call starting from the make call API call or incoming call notification until the call ends.
(string)
: The amount of time it takes from when the
make call
operation starts up until right before we set local description.
(string)
: The amount of time it takes from when a call is made until the call is set up locally. This does not include any remote session creation.
(string)
: The amount of time it takes from when the create session request is sent until the SDK processes the response.
(string)
: The amount of time it takes from when the
answer call
operation starts until it is set up locally.
(i.e. from the time an incoming call is answered until media is connected)
(string)
: The amount of time it takes from when the
answer call
operation starts up until right before we set local description.
(string)
: For incoming calls, the time from the call first being received until it has been answered. Includes call processing and setup, as well as time for the answer API to have been called.
(string)
: For incoming calls, the time from the call first being received until media is connected. Similar to
TIME_FROM_RECEIVE_TO_ANSWER
, but without the
answer
REST request.
(string)
: The amount of time it takes from when a call is made until the SDK receives the remote ringing notification.
(string)
: The amount of time it takes for the ignore call to complete.
(string)
: The amount of time it takes for the reject call to complete.
(string)
: The amount of time it takes from when the local
add media
operation starts until it has finished.
(string)
: The amount of time it takes from when the SDK receives a remote
add media
notification until it is handled and operation completes.
(string)
: The amount of time it takes from when the local
remove media
operation starts until it has finished.
(string)
: The amount of time it takes from when the SDK receives a remote
remove media
notification until it is handled and operation completes.
(string)
: The amount of time it takes from when the
restart media
operation starts until it has finished.
(string)
: The amount of time it takes from when the local
hold
operation starts until it has finished.
(string)
: The amount of time it takes from when the SDK receives a remote
hold
notification until it is handled and operation completes.
(string)
: The amount of time it takes from when the local
unhold
operation starts until it has finished.
(string)
: The amount of time it takes from when the SDK receives a remote
unhold
notification until it is handled and operation completes.
(string)
: The amount of time it takes from when the local description is set to when all ICE candidates have been collected.
(string)
: The amount of time it takes from when the
ice collection
operation starts until each relay candidate has been received.
(string)
: The amount of time it takes from when the
send custom parameters
operation starts until it has finished.
(string)
: The amount of time it takes from when the
forward call
operation starts until it has finished.
(string)
: The amount of time it takes from when the
direct transfer
operation starts until it has finished.
(string)
: The amount of time it takes from when the
consultative transfer
operation starts until it has finished.
(string)
: The amount of time it takes from when the
join call
operation starts until it has finished.
const report = client.call.getReport(callId)
const callDuration = report.metrics.find(metric => metric.type === client.call.metrics.CALL_DURATION)
log(`Call duration was ${callDuration.data}ms.`)
A call operation has either started, been updated, or finished.
Information about ongoing call operations is stored on the CallObject. This event indicates that an operation's information has changed.
The status of an operation indicates whether the local or remote side of the call is currently processing it, with values being 'ONGOING' or 'PENDING', respectively. All operations will begin as 'ONGOING' status with an event indicating the 'START' transition. Operations that require a response from the remote side will have an 'UPDATE' transition to the 'PENDING' status once it starts to wait for the response. Once complete, an event will indicate a 'FINISH' transition and the operation will be removed from the call state.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID for the call being operated on. |
params.operation string
|
The type of operation causing this event. |
params.operationId string
|
The unique ID of the call operation. |
params.transition string
|
The transition reason for the operation change. |
params.isLocal boolean
|
Flag indicating whether the operation was local or not. |
params.previous Object?
|
The operation information before this change. If the transition is to "start" the operation, there will be no previous information. |
params.previous.operation string?
|
The operation that was ongoing. |
params.previous.status string?
|
The operation status before this change. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
client.on('call:operation', (params) => {
const { callId, operationId } = params
// Get the operation from the call's state that this event is about.
const call = client.call.getById(callId)
const operation = call.currentOperations.find(op => op.id === operationId)
log(`${operation.type} operation is now ${operation.status} for call ${callId}.`)
})
An outgoing call has been started.
Information about the Call can be retrieved using the call.getById API.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID of the call. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
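For example, an application might listen for this event to update its UI once an outgoing call begins. A minimal sketch, assuming the event name is 'call:start' (the event heading is not visible in this excerpt):

```javascript
// Assumption: the event for "an outgoing call has been started" is 'call:start'.
client.on('call:start', function (params) {
  if (params.error) {
    // The call could not be started.
    const { code, message } = params.error
    log(`Call failed to start: ${message} (${code}).`)
  } else {
    // Retrieve the new call's information.
    const call = client.call.getById(params.callId)
    log(`Outgoing call started; state is ${call.state}.`)
  }
})
```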
A new incoming call has been received.
Information about the Call can be retrieved using the call.getById API.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID of the call. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
client.on('call:receive', function(params) {
// We have received a call, prompt the user to respond.
promptUser(client.call.getById(params.callId));
});
A Call's state has changed.
See call.states for information about call states.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID of the Call that was operated on. |
params.previous Object
|
The call's properties before the operation changed it. |
params.previous.state string
|
The previous state of the call. |
params.previous.localHold boolean?
|
The previous local hold state. Present when the state change was a hold/unhold operation. |
params.previous.remoteHold boolean?
|
The previous remote hold state. Present when the state change was a hold/unhold operation. |
params.transition Object?
|
Contains more detailed information about the state change. |
params.transition.statusCode number?
|
The status code associated with the particular state change's reason. |
params.transition.reasonText string?
|
The reason for the state change. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
client.on('call:stateChange', function (params) {
const call = client.call.getById(params.callId)
const prevState = params.previous.state
log(`Call changed from ${prevState} to ${call.state} state.`)
// Handle the event depending on the new call state.
switch (call.state) {
case client.call.states.CONNECTED:
// Handle being on call with media.
break
case client.call.states.ENDED:
// Handle call ending.
break
...
}
})
New media has been added to the call.
Tracks have been added to the Call after an SDK operation. Both sides of the Call are now able to render these tracks.
Tracks are added to a Call when either the local or remote user adds new media to the Call, using the call.addMedia API for example, or when the Call is unheld with the call.unhold API.
Remote tracks are similarly added to a Call when new tracks are added by the remote user or either user unholds the call.
This event can indicate that multiple tracks have been added by the same operation. For example, if the remote user added video to the call, this event would indicate a single, remote video track was added. If the local user unheld the call, this event would indicate that any tracks previously on the call have been re-added, both local and remote.
Information about a Track can be retrieved using the media.getTrackById API.
client.on('call:tracksAdded', function (params) {
// Get the information for each track.
const tracks = params.trackIds.map(client.media.getTrackById)
tracks.forEach(track => {
const { id, kind, isLocal } = track
// Handle the track depending on whether it is audio or video, and local or remote.
...
})
})
Tracks have been removed from the Call after an SDK operation. The tracks may still exist, but the media is not available to both sides of the Call any longer.
Tracks are removed from a Call when either the local or remote user stops the tracks, by using the call.removeMedia API for example, or when the Call is held with the call.hold API.
This event can indicate that multiple tracks have been removed by the same operation. For example, if the remote user removed video from the call, this event would indicate a single, remote video track was removed. If the local user held the call, this event would indicate that all tracks on the call have been removed, both local and remote.
Information about a Track can be retrieved using the media.getTrackById API.
client.on('call:tracksRemoved', function (params) {
// Get the information for each track.
const tracks = params.trackIds.map(client.media.getTrackById)
tracks.forEach(track => {
const { id, kind, isLocal } = track
// Handle the track depending on whether it is audio or video, and local or remote.
...
})
})
Stats have been retrieved for a Call or specific Track of a Call.
See the call.getStats API for more information.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID of the Call to retrieve stats for. |
params.trackId string?
|
The ID of the Track to retrieve stats for. |
params.result Map?
|
The RTCStatsReport. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
client.on('call:statsReceived', function (params) {
if (params.error) {
// Handle the error from the operation.
const { code, message } = params.error
...
} else {
// Iterate over each individual statistic inside the RTCStatsReport Map.
// Handle the data on its own or collate with previously gathered stats
// for analysis.
params.result.forEach(stat => {
...
})
}
})
A local Track has been replaced by the call.replaceTrack API.
This event is a combination of a track being removed from the Call and a new track being added to the Call. The previous Track's media is no longer available, similar to the call:tracksRemoved event, and the new Track is available in its place, similar to the call:tracksAdded event. The event includes information about the Track that was replaced to help an application replace it seamlessly.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID of the call where a track was replaced. |
params.newTrackId string?
|
The ID of the new track that replaced the old track. |
params.oldTrack call.TrackObject?
|
State of the replaced track. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
client.on('call:trackReplaced', function (params) {
const { callId, oldTrack, newTrackId } = params
// Unrender the removed track.
handleTrackGone(oldTrack, callId)
// Render the added track.
const track = client.media.getTrackById(newTrackId)
handleTrackAdded(track, callId)
})
The list of codecs available and supported by the browser has been retrieved.
This event is emitted as a result of the call.getAvailableCodecs API. Please refer to the API for more information.
client.on('call:availableCodecs', function (codecs) {
// Iterate over each codec.
codecs.forEach(codec => {
// Handle the data by analysing its properties.
// Some codec instances may have the same name, but different characteristics.
// (i.e. for a given audio codec, the number of supported channels may differ (e.g. mono versus stereo))
...
})
})
A Call's media connection state has changed.
This event is emitted as a result of changes to the media connection of the Call. These state changes occur during call establishment, connection loss/re-establishment, call completion, etc.
To check the media connection state of a call, retrieve the call's information using the call.getById API,
and check the mediaConnectionState
property of the call.
See call.mediaConnectionStates for the list of possible values and descriptions.
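A sketch of reacting to this event, assuming it provides params.callId like the other call events in this document:

```javascript
client.on('call:mediaConnectionChange', function (params) {
  // Assumption: params.callId identifies the affected call.
  const call = client.call.getById(params.callId)
  log(`Media connection state is now ${call.mediaConnectionState}.`)
})
```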
A media restart operation for a Call has been attempted.
This event is emitted as a result of the call.restartMedia API being called. See the description for call.restartMedia for information about its usage.
The call:mediaConnectionChange event will also be emitted alongside this event when the media restart operation is successful.
(Object)
Name | Description |
---|---|
params.callId string
|
The ID of the Call that was acted on. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
client.on('call:mediaRestart', function (params) {
if (params.error) {
// The operation failed. May want to determine whether to re-attempt the
// operation (if the failure was simply a connectivity issue) or to
// consider the call's media irrecoverable.
...
} else {
// The call should be re-establishing media, with the call's
// `mediaConnectionState` being updated.
const mediaState = client.call.getById(params.callId).mediaConnectionState
...
}
})
The 'connection' namespace is used to connect and maintain connections between the SDK and one or more backend servers.
Information about a websocket connection.
Can be retrieved using the connection.getSocketState API.
Type: Object
(boolean)
: The state of the websocket connection.
(boolean)
: True if the client has sent a ping to the server and is still waiting for a pong response.
(Object)
: Information about how the websocket is being used.
(string)
: The SDK platform being used.
(number)
: How often the client will ping the server to test for websocket connectivity.
(number)
: How many times the SDK will try to reconnect a disconnected websocket.
(number)
: How long the SDK will wait before retrying websocket reconnection.
(number)
: Reconnect delay multiplier for subsequent attempts. The reconnect delay time will be multiplied by this after each failed reconnect attempt to increase the delay between attempts. eg. 5000ms then 10000ms then 20000ms delay if value is 2.
(number)
: Maximum time delay between reconnect attempts (milliseconds). Used in conjunction with
reconnectTimeMultiplier
to prevent overly long delays between reconnection attempts.
(boolean)
: Indicates if the SDK should automatically try reconnecting a disconnected websocket.
(number)
: How many missed pings before the SDK stops trying to reconnect a disconnected websocket.
(string)
: The mode used for authenticating with the server.
(Object)
: Information required to connect a websocket to the server.
wsInfo.protocol
string?
The protocol to use to connect a websocket.
wsInfo.server
string?
The domain name or IP address of the server to connect to.
wsInfo.port
number?
The port of the server to connect to.
wsInfo.url
string?
The URL path to use to request a websocket connection.
wsInfo.params
string?
Any additional params that might be required by the server to establish the websocket connection.
(number)
: The date and time that the last known contact with the server was.
Get the state of the websocket.
(string
= 'link'
)
Backend platform for which to request the websocket's state.
connection.WSConnectionObject
:
Details about the current websocket connection, including state and configuration.
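For example, an application could inspect the socket state after a suspected network issue. The property name used below is an assumption based on the WSConnectionObject description above:

```javascript
// Retrieve the websocket state for the default ('link') platform.
const wsState = client.connection.getSocketState('link')
// Assumption: the "last known contact" field is exposed as `lastContact`.
log(`Last contact with the server: ${new Date(wsState.lastContact)}.`)
```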
Triggers a reset in the connection to the WebSocket being used for notifications. This can be used in scenarios where a network issue (undetectable by the SDK) is detected by an application.
If there is no WebSocket currently connected, this function has no effect. Calling this function will trigger all the normal WebSocket and connectivity lifecycle events, as well as the re-connection processing that follows the SDK's configuration. Calling this function always carries the risk of some events being lost by the SDK, which may prevent proper operation.
The SDK has an internal logging system for providing information about its behaviour. The SDK will generate logs, at different levels for different types of information, which are routed to a "Log Handler" for consumption. An application can provide their own Log Handler (see config.logs) to customize how the logs are handled, or allow the default Log Handler to print the logs to the console.
The SDK's default Log Handler is merely a thin wrapper around the browser's
console API (ie. window.console
). It receives the log generated by the
SDK, called a "Log Entry", formats a
human-readable message with it, then uses the console to log it at the
appropriate level. This is important to be aware of, since your browser's
console may affect how you see the SDK's default log messages. Since the
default Log Handler uses the console's levels, the browser may filter
which messages are shown depending on which levels it has configured. For
a user that understands console log levels, this can be helpful for
filtering the logs to only the relevant information. But it can equally
be a hindrance by hiding the more detailed log messages (at the 'debug'
level), since browsers can have this level hidden by default. For this
reason, we recommend providing a custom Log Handler to the SDK that is
better suited for your application and its users.
A LogEntry object is the data that the SDK compiles when information is logged. It contains both the logged information and meta-info about when and who logged it.
A LogHandler provided to the SDK (see config.logs) will need to handle LogEntry objects.
Type: Object
(number)
: When the log was created, based on UNIX epoch.
(string)
: The log function that was used to create the log.
(string)
: The level of severity of the log.
(Object)
: The subject that the log is about.
(Array)
: The logged information, given to the Logger
method as parameters.
(Object?)
: Timing data, if the log method was a timer method.
function defaultLogHandler (logEntry) {
// Compile the meta info of the log for a prefix.
const { timestamp, level, target } = logEntry
let { method } = logEntry
const logInfo = `${timestamp} - ${target.type} - ${level}`
// Assume that the first message parameter is a string.
const [log, ...extra] = logEntry.messages
// For the timer methods, don't actually use the console methods.
// The Logger already did the timing, so simply log out the info.
if (['time', 'timeLog', 'timeEnd'].includes(method)) {
method = 'debug'
}
console[method](`${logInfo} - ${log}`, ...extra)
}
A LogHandler can be used to customize how the SDK should log information. By default, the SDK will log information to the console, but a LogHandler can be configured to change this behaviour.
A LogHandler can be provided to the SDK as part of its configuration (see config.logs). The SDK will then provide this function with the logged information.
Type: Function
(Object)
The LogEntry to be logged.
// Define a custom function to handle logs.
function logHandler (logEntry) {
// Compile the meta info of the log for a prefix.
const { timestamp, level, target } = logEntry
let { method } = logEntry
const logInfo = `${timestamp} - ${target.type} - ${level}`
// Assume that the first message parameter is a string.
const [log, ...extra] = logEntry.messages
// For the timer methods, don't actually use the console methods.
// The Logger already did the timing, so simply log out the info.
if (['time', 'timeLog', 'timeEnd'].includes(method)) {
method = 'debug'
}
console[method](`${logInfo} - ${log}`, ...extra)
}
// Provide the LogHandler as part of the SDK configurations.
const configs = { ... }
configs.logs.handler = logHandler
const client = create(configs)
Possible levels for the SDK logger.
The SDK will provide Log Entries to the Log Handler for all logs at or above the set log level. 'debug' is considered the lowest level and 'silent' the highest level. For example, if the current level is 'info', then the Log Handler will receive Log Entries for logs at 'info', 'warn', and 'error', but not for the 'debug' level.
(string)
: Nothing will be logged.
(string)
: Unhandled error information will be logged. If
the SDK encounters an issue it cannot resolve, the error will be included
in the logs. This likely points to an issue with the SDK itself or an
issue with how the SDK is being used.
(string)
: Warning messages for the application developer will
be logged. If the SDK encounters an issue that it can recover and continue,
a warning about the issue will be included in the logs. These logs point
to issues that need to be handled by the application. For example, providing
an invalid configuration to the SDK will cause a warning log that explains
the issue.
(string)
: General information about the SDK's operations will
be logged, outlining how the SDK is handling the operations. Reading through
these logs should provide a high-level view of what the SDK is doing,
and why it is doing it.
(string)
: Detailed information about the SDK's operations,
meant for debugging issues, will be logged. Specific information and relevant
operation data are provided for understanding the scenario that the SDK
was in during the operation.
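The level ordering described above can be illustrated with a small standalone helper. This is not an SDK API, only a sketch of which entries a Log Handler would receive for a configured level:

```javascript
// Illustrative only; not part of the SDK. Models the filtering rule:
// a Log Handler receives entries at or above the configured level.
const LEVELS = ['debug', 'info', 'warn', 'error']

function reachesHandler (entryLevel, configuredLevel) {
  if (configuredLevel === 'silent') return false
  return LEVELS.indexOf(entryLevel) >= LEVELS.indexOf(configuredLevel)
}

// With logLevel 'info': 'warn' entries are delivered, 'debug' entries are not.
```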
The 'media' namespace provides an interface for interacting with Media that the SDK has access to. Media is used in conjunction with the Calls feature to manipulate and render the Tracks sent and received from a Call.
Media and Track objects are not created directly, but are created as part of Call operations. Media and Tracks will either be marked as "local" or "remote" depending on whether their source is the local user's machine or a remote user's machine.
The Media feature also keeps track of media devices that the user's machine can access. Any media device (eg. USB headset) connected to the machine can be used as a source for media. Available devices can be found using the media.getDevices API.
Retrieves the available media devices for use.
The devices:change event will be emitted when the available media devices have changed.
Object
:
The lists of camera, microphone, and speaker devices.
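For example, an application might let the end-user pick a specific device by its label. A standalone sketch; the device object shape (deviceId, label) follows the browser's standard MediaDeviceInfo fields and is an assumption here, as is the `microphone` property name on the returned object:

```javascript
// Illustrative helper, not an SDK API. Assumes each device entry exposes
// MediaDeviceInfo-style fields (deviceId, label).
function findDeviceByLabel (deviceList, label) {
  return deviceList.find(device => device.label === label)
}

// Usage with the SDK (the `microphone` property name is an assumption):
// const devices = client.media.getDevices()
// const mic = findDeviceByLabel(devices.microphone, 'USB Headset')
```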
Retrieves an available Media object with a specific Media ID.
(string)
The ID of the Media to retrieve.
call.MediaObject
:
A Media object.
Retrieve an available Track object with a specific Track ID.
(string)
The ID of the Track to retrieve.
call.TrackObject
:
A Track object.
Requests permission to access media devices on the end-user's machine.
This API will trigger the browser to ask the end-user for permission to access their camera and/or microphone. These permissions are needed for the SDK to read information about the devices (the label, for example) and for using the devices for a call.
If the browser does not yet have permission, it will prompt the end-user with a small pop-up window, giving the user a chance to allow/deny the permissions. The behaviour of this pop-up window differs slightly based on the browser; it may automatically save the user's decision (such as in Chrome and Safari) or it may require the user to choose whether their decision should be saved (such as in Firefox).
This API is not required for proper usage of media and/or calls, but helps to prepare a user before a call is made or received. It allows an application to prompt the user for device permissions when it is convenient for them, rather than during call setup. If the user saves their decision, they will not be prompted again when the SDK accesses those devices for a call.
For device information, the media.getDevices API will retrieve the list of media devices available for the SDK to use. If this list is empty, or is missing information, it is likely that the browser does not have permission to access the device's information. We recommend using the media.initializeDevices API in this scenario if you would like to allow the end-user to select which device(s) they would like to use when they make a call, rather than using the system default.
The SDK will emit a devices:change event when the operation is successful or a devices:error event if an error is encountered.
// The SDK will ask for both audio and video permissions by default.
client.media.initializeDevices()
// The SDK will only ask for audio permissions.
client.media.initializeDevices({ audio: true, video: false })
Render Media Tracks in a container.
The container is specified by providing a CSS selector string that corresponds to the HTMLElement to act as the container.
(string)
A CSS selector string that uniquely
identifies an element. Ensure that special characters are properly
escaped.
// When a Call receives a new track, render it.
client.on('call:tracksAdded', function (params) {
params.trackIds.forEach(trackId => {
const track = client.media.getTrackById(trackId)
const container = track.isLocal ? localContainer : remoteContainer
// Render the Call's new track when it first becomes available.
client.media.renderTracks([ trackId ], container)
})
})
Remove Media Tracks from a container.
The container is specified by providing a CSS selector string that corresponds to the HTMLElement to act as the container.
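A sketch of un-rendering tracks when they leave a Call. Both the method name (removeTracks) and its (trackIds, selector) signature are assumptions, inferred from the renderTracks API above:

```javascript
client.on('call:tracksRemoved', function (params) {
  // Assumption: removeTracks mirrors renderTracks' (trackIds, selector) signature.
  client.media.removeTracks(params.trackIds, '#remote-container')
})
```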
Mutes the specified Tracks.
This API prevents the media of the specified Tracks from being rendered. Audio Tracks will become silent and video Tracks will be a black frame. This does not stop media from being received by those Tracks. The media simply cannot be used by the application while the Track is muted.
If a local Track being sent in a Call is muted, the Track will be noticeably muted for the remote user. If a remote Track received in a call is muted, the result will only be noticeable locally.
This mute operation acts on those specified Tracks directly. It does not act on the active Call as a whole.
The SDK will emit a media:muted event when a Track has been muted.
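For example, muting only a Call's local audio tracks (a typical "mute microphone" action). The `localTracks` property name on the call object is an assumption; track objects are assumed to expose `id` and `kind` as in the call:tracksAdded example:

```javascript
// Assumption: the call object lists its local track IDs as `localTracks`.
const call = client.call.getById(callId)
const localAudioIds = call.localTracks
  .map(client.media.getTrackById)
  .filter(track => track.kind === 'audio')
  .map(track => track.id)
// Mute only the local audio tracks; remote users will hear silence.
client.media.muteTracks(localAudioIds)
```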
Unmutes the specified Tracks.
Media will resume its normal rendering for the Tracks. Like the 'muteTracks' API, this unmute operation acts on the specified Tracks directly; it does not act on the active Call as a whole.
The SDK will emit a media:unmuted event when a Track has been unmuted.
The media devices available for use have changed.
Information about the available media devices can be retrieved using the media.getDevices API.
// Listen for changes to available media devices.
client.on('devices:change', function () {
// Retrieve the latest media device lists.
const devices = client.media.getDevices()
})
An error occurred while trying to access media devices.
The most common causes of this error are when the browser does not have permission from the end-user to access the devices, or when the browser cannot find a media device that fulfills the MediaConstraint(s) that was provided.
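A sketch of handling this event, assuming the error is provided as params.error like the other events in this document:

```javascript
client.on('devices:error', function (params) {
  // Assumption: params.error is a BasicError with code and message.
  const { code, message } = params.error
  log(`Device access failed: ${message} (${code}).`)
})
```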
The specified Tracks have been muted.
A Track can be muted using the media.muteTracks API.
The specified Tracks have been unmuted.
A Track can be unmuted using the media.unmuteTracks API.
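A sketch of reacting to these events, assuming they provide the affected track IDs as params.trackIds like the other track-related events:

```javascript
client.on('media:muted', function (params) {
  // Assumption: params.trackIds lists the affected tracks.
  params.trackIds.forEach(trackId => {
    const track = client.media.getTrackById(trackId)
    log(`${track.kind} track ${trackId} is now muted.`)
  })
})
```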
The specified Track has had its media source muted.
The Track is still active, but is not receiving media any longer. An audio track will be silent and a video track will be a black frame. It is possible for the track to start receiving media again (see the media:sourceUnmuted event).
This event is generated outside the control of the SDK. This will predominantly happen for a remote track during network issues, where media will lose frames and be "choppy". This may also happen for a local track if the browser or end-user stops allowing the SDK to access the media device, for example.
The specified Track has started receiving media from its source once again.
The Track returns to the state before it was muted (see the media:sourceMuted event), and will be able to display video or play audio once again.
This event is generated outside the control of the SDK, when the cause of the media source being muted had been undone.
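A sketch of surfacing these events in a UI. The params.trackId parameter name and the indicator helpers are hypothetical:

```javascript
client.on('media:sourceMuted', function (params) {
  // e.g. show a "poor connection" indicator over a remote video track.
  showMediaInterruptedIndicator(params.trackId) // hypothetical app helper
})
client.on('media:sourceUnmuted', function (params) {
  hideMediaInterruptedIndicator(params.trackId) // hypothetical app helper
})
```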
The specified Track has been rendered into an element.
(Object)
Name | Description |
---|---|
params.trackIds Array<string>
|
The list of track id's that were rendered. |
params.selector string
|
The css selector used to identify the element the track is rendered into. |
params.error api.BasicError?
|
An error object, if the operation was not successful. |
A local Track has ended unexpectedly. The Track may still be part of a Call but has become disconnected from its media source and is not recoverable.
This event is emitted when an action other than an SDK operation stops the track. The most common scenario is a device being used for a Call disconnecting; any local tracks (such as audio from a bluetooth headset's microphone or video from a USB camera) from that device will be ended. Another scenario is screensharing, where some browsers provide the ability to stop screensharing directly rather than through an SDK operation.
When a local track ends this way, it will still be part of the Call but will not have any media. The track can be removed from the call with the call.removeMedia API so the remote side of the Call knows the track has stopped, or the track can be replaced with a new track using the call.replaceTrack API to prevent any interruption.
The 'notification' namespace allows users to register for (and deregister from) push notifications, as well as enable and disable the processing of websocket notifications.
Provides an external notification to the system for processing.
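A sketch of handing an externally received push payload to the SDK. The namespace appears as both 'notification' and 'notifications' in this excerpt, and the payload shape depends on the push service, so both are assumptions:

```javascript
// `pushPayload` is the notification exactly as received from the push service;
// its shape is not shown in this excerpt.
client.notifications.process(pushPayload)
```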
Registers with Apple push notification service. Once registration is successful, the application will be able to receive standard and/or voip push notifications. It can then send these notifications to the SDK with api.notifications.process in order for the SDK to process them.
(Object)
Name | Description |
---|---|
params.services Array<string>
|
Array of services for which we wish to receive notifications. |
params.voipDeviceToken string
|
The voip device token used for voip push on iOS. This token is required if registering for call service notifications on iOS. |
params.standardDeviceToken string
|
The standard device token used for standard push on iOS. This token is required when registering for non-call service notifications. |
params.bundleId string
|
The bundleId to identify the application receiving the push notification. |
params.clientCorrelator string
|
Unique identifier for a client device. |
params.realm string
|
The realm used by the push registration service to identify and establish a connection with the service gateway. |
params.isProduction boolean
|
If true, push notification will be sent to production. If false, push notification will be sent to sandbox. |
Promise
:
When successful, the information of the registration.
Promise will reject with error object otherwise.
Registers with Google push notification service. Once registration is successful, the application will be able to receive standard and/or voip push notifications. It can then send these notifications to the SDK with api.notifications.process in order for the SDK to process them.
(Object)
Name | Description |
---|---|
params.services Array<string> | Array of services to register for. |
params.deviceToken string | The device token used for standard push on Android. This token is required when registering for all related service notifications. |
params.bundleId string | The bundleId identifying the application receiving the push notification. |
params.clientCorrelator string | Unique identifier for a client device. |
params.realm string | The realm used by the push registration service to identify and establish a connection with the service gateway. |
Promise
:
Resolves with the registration information when successful; rejects with an error object otherwise.
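Similarly, a sketch of the Android registration parameters from the table above. The registration function name (`registerAndroidPush`) is an assumption not confirmed by this excerpt; verify it against the notifications namespace reference.

```javascript
// Hypothetical example: parameter names follow the table above; the function
// name (registerAndroidPush) and service values are assumptions.
const androidParams = {
  services: ['call', 'chat'],         // illustrative service names
  deviceToken: '<fcm-device-token>',  // required for all related service notifications
  bundleId: 'com.example.myApp',
  clientCorrelator: 'device-1234',
  realm: 'example.realm.com'
};

// client.notifications.registerAndroidPush(androidParams)
//   .then(info => console.log('Registered:', info))
//   .catch(err => console.error('Registration failed:', err));
```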
An error occurred with push notifications.
(Object)
Name | Description |
---|---|
params.error api.BasicError | The Basic error object. |
params.channel string | The channel for the notification. |
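A minimal sketch of handling this error payload. The event name (`notifications:error`) is an assumption, as this excerpt does not name the event; check the SDK's event reference for the actual name.

```javascript
// Format the error payload described above into a log message.
function formatNotificationError({ error, channel }) {
  return `Push notification error on channel "${channel}": ${error.message}`;
}

// Hypothetical wiring; the event name is an assumption:
// client.on('notifications:error', params => {
//   console.error(formatNotificationError(params));
// });
```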
The 'request' namespace (within the 'api' type) is used to make network requests to the server.
Send a request to the underlying REST service with the appropriate configuration and authentication. This is a wrapper on top of the browser's fetch API and behaves very similarly, but uses the SDK's configuration for the base URL and authentication, as well as the SDK's logging.
(string)
The full path of the resource to fetch from the underlying service. This should include any REST version
or user information. This path will be appended to the base URL according to SDK configuration.
(RequestInit)
An object containing any custom settings that you want to apply to the request. See
fetch API
for a full description and defaults.
// Send a REST request to the server
// Create a request options object following [fetch API](https://developer.mozilla.org/en-US/docs/Web/API/fetch)
const requestOptions = {
method: 'POST',
body: JSON.stringify({
test: 123
})
}
// Note that you will need to subscribe for the `custom` service in order to
// receive notifications from the `externalnotification` service.
const response = await client.request.fetch('/rest/version/1/user/xyz@test.com/externalnotification', requestOptions)
A set of SdpHandlerFunctions for manipulating SDP information. These handlers are used to customize low-level call behaviour for very specific environments and/or scenarios.
Note that SDP handlers are exposed on the entry point of the SDK. They can be added during initialization of the SDK using the config.call.sdpHandlers configuration parameter. They can also be set after the SDK's creation by using the call.setSdpHandlers function.
import { create, sdpHandlers } from '@rbbn/webrtc-js-sdk';
const codecRemover = sdpHandlers.createCodecRemover(['VP8', 'VP9'])
const client = create({
call: {
sdpHandlers: [ codecRemover, <Your-SDP-Handler-Function>, ...]
}
})
// Through the Call API post-instantiation
client.call.setSdpHandlers([ codecRemover, <Your-SDP-Handler-Function>, ...])
This function creates an SDP handler that will remove codecs matching the specified selectors from SDP offers and answers.
In some scenarios it's necessary to remove certain codecs being offered by the SDK to remote parties. For example, some legacy call services limit the SDP length (usually to 4KB) and will reject calls that have SDP size above this amount.
While creating an SDP handler would allow a user to perform this type of manipulation, it is a non-trivial task that requires in-depth knowledge of WebRTC SDP.
To facilitate this common task, the createCodecRemover function creates a codec removal handler that can be used for this purpose. Applications can use this codec removal handler in combination with the call.getAvailableCodecs function in order to build logic to determine the best codecs to use for their application.
call.SdpHandlerFunction
:
The resulting SDP handler function that removes the specified codecs.
import { create, sdpHandlers } from '@rbbn/webrtc-js-sdk';
const codecRemover = sdpHandlers.createCodecRemover([
// Remove all VP8 and VP9 codecs.
'VP8',
'VP9',
// Remove all H264 codecs with the specified FMTP parameters.
{
name: 'H264',
fmtpParams: ['packetization-mode=0']
}
])
const client = create({
call: {
sdpHandlers: [codecRemover]
}
})