Kurento
Release 6.5.0
kurento.org
Contents

I What's Kurento?
II Introducing Kurento
III
IV Kurento Tutorials
7 Hello world
7.1 Java - Hello world
7.2 JavaScript - Hello world
7.3 Node.js - Hello world
V Mastering Kurento
20 Kurento Architecture
21 Kurento API Reference
22 Kurento Protocol
23 Advanced Installation Guide
24 Working with Nightly Builds
25 Kurento Modules
26 WebRTC Statistics
27 Kurento Utils JS
28 Kurento Java Client JavaDoc
VI Kurento FAQ
32 How do I...
32.1 ...install Kurento Media Server in an Amazon EC2 instance?
32.2 ...know how many Media Pipelines do I need for my Application?
32.3 ...know how many Endpoints do I need?
32.4 ...know to what client a given WebRtcEndPoint belongs or where is it coming from?
VII Glossary
Part I
What's Kurento?
Kurento is a WebRTC media server and a set of client APIs that make it simple to develop advanced video applications for web and smartphone platforms. Kurento features include group communications, transcoding, recording, mixing, broadcasting and routing of audiovisual flows.
Kurento also provides advanced media processing capabilities involving computer vision, video indexing, augmented reality and speech analysis. Kurento's modular architecture simplifies the integration of third-party media processing algorithms (e.g. speech recognition, sentiment analysis, face recognition, etc.), which application developers can use as transparently as the rest of Kurento's built-in features.
Kurento's core element is the Kurento Media Server, responsible for media transmission, processing, loading and recording. It is implemented in low-level technologies based on GStreamer to optimize resource consumption. It provides the following features:
Networked streaming protocols, including HTTP, RTP and WebRTC.
Group communications (MCU and SFU functionality) supporting both media mixing and media routing/dispatching.
Generic support for computer vision and augmented reality filters.
Media storage supporting writing operations for WebM and MP4, and playback in all formats supported by GStreamer.
Automatic media transcoding between any of the codecs supported by GStreamer, including VP8, H.264, H.263, AMR, Opus, Speex, G.711, etc.
Kurento Client libraries are available in Java and JavaScript to control Kurento Media Server from applications. If you prefer another programming language, you can use the Kurento Protocol, based on WebSocket and JSON-RPC.
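As a taste of what the Kurento Protocol looks like on the wire, the sketch below builds a JSON-RPC 2.0 request of the kind a client sends over the WebSocket, here asking the server to create a media pipeline. The helper name is ours, and the exact method and parameter names should be checked against the Kurento Protocol section:

```javascript
// Sketch (not the official client): serializing a Kurento-Protocol-style
// JSON-RPC 2.0 request. The 'create' method with a 'type' parameter follows
// the protocol description in this documentation.
function createRequest(id, method, params) {
  return JSON.stringify({
    jsonrpc: '2.0', // JSON-RPC protocol version
    id: id,         // correlates the response with this request
    method: method, // operation to invoke on the server
    params: params  // operation arguments
  });
}

var msg = createRequest(1, 'create', { type: 'MediaPipeline' });
// In a real client this string would be sent through the WebSocket:
//   ws.send(msg);
console.log(msg);
```

The server replies with a JSON-RPC response carrying the same id, which is how a client matches requests and responses over the single WebSocket connection.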
Kurento is open source, released under the terms of the Apache 2.0 license. Its source code is hosted on GitHub.
If you want to get started quickly, the best way is to install Kurento Media Server and take a look at our tutorials, which come in the form of working demo applications. You can choose your favorite technology to build multimedia applications: Java, browser JavaScript or Node.js.
If you want to make the most of Kurento, please take a look at the advanced documentation.
Part II
Introducing Kurento
CHAPTER 1
WebRTC is an open source technology that enables web browsers with Real-Time Communications (RTC) capabilities
via JavaScript APIs. It has been conceived as a peer-to-peer technology where browsers can directly communicate
without the mediation of any kind of infrastructure. This model is enough for creating basic applications but features
such as group communications, media stream recording, media broadcasting or media transcoding are difficult to
implement on top of it. For this reason, many applications require using a media server.
Fig. 1.1: Peer-to-peer WebRTC approach vs. WebRTC through a media server
Conceptually, a WebRTC media server is just a kind of multimedia middleware (it sits in the middle of the communicating peers) through which media traffic passes when moving from source to destination. Media servers are capable of processing media streams and offering different capabilities, including group communications (distributing the media stream one peer generates among several receivers, i.e. acting as a Multi-Conference Unit, MCU), mixing (transforming several incoming streams into one single composite stream), transcoding (adapting codecs and formats between incompatible clients), recording (storing in a persistent way the media exchanged among peers), etc.
CHAPTER 2
At the heart of the Kurento architecture there is a media server called the Kurento Media Server (KMS). Kurento Media Server is based on pluggable media processing capabilities, meaning that any of its provided features is a pluggable module that can be activated or deactivated. Moreover, developers can seamlessly create additional modules extending Kurento Media Server with new functionalities, which can be plugged in dynamically.
Kurento Media Server provides, out of the box, group communications, mixing, transcoding, recording and playing.
In addition, it also provides advanced modules for media processing including computer vision, augmented reality,
alpha blending and much more.
CHAPTER 3
Kurento Media Server capabilities are exposed to application developers through the Kurento API. This API is implemented by means of libraries called Kurento Clients. Kurento offers two clients out of the box, for Java and JavaScript. If you have another favorite language, you can still use Kurento directly through the Kurento Protocol. This protocol allows controlling Kurento Media Server and is based on Internet standards such as WebSocket and JSON-RPC. The picture below shows how to use Kurento Clients in three scenarios:
Using the Kurento JavaScript Client directly in a compliant WebRTC browser
Using the Kurento Java Client in a Java EE Application Server
Using the Kurento JavaScript Client in a Node.js server
Complete examples for these three technologies are described in the tutorials section.
The Kurento Clients API is based on the concept of the Media Element. A Media Element holds a specific media capability.
For example, the media element called WebRtcEndpoint holds the capability of sending and receiving WebRTC media
streams, the media element called RecorderEndpoint has the capability of recording into the file system any media
streams it receives, the FaceOverlayFilter detects faces on the exchanged video streams and adds a specific overlaid
image on top of them, etc. Kurento exposes a rich toolbox of media elements as part of its APIs.
To better understand these concepts it is recommended to take a look at the Kurento API and Kurento Protocol sections. You can also take a look at the JavaDoc and JsDoc:
kurento-client-java : JavaDoc of the Kurento Java Client.
kurento-client-js : JsDoc of the Kurento JavaScript Client.
kurento-utils-js : JsDoc of a utility JavaScript library aimed at simplifying the development of WebRTC applications.
Fig. 3.1: Connection of Kurento Clients (Java and JavaScript) to Kurento Media Server
Fig. 3.2: Some Media Elements provided out of the box by Kurento
CHAPTER 4
From the application developer's perspective, Media Elements are like Lego pieces: you just need to take the elements needed for an application and connect them following the desired topology. In Kurento jargon, a graph of connected media elements is called a Media Pipeline. Hence, when creating a pipeline, developers need to determine the capabilities they want to use (the media elements) and the topology determining which media elements provide media to which other media elements (the connectivity). The connectivity is controlled through the connect primitive, exposed on all Kurento Client APIs. This primitive is always invoked on the element acting as source and takes as argument the sink element, following this scheme:
sourceMediaElement.connect(sinkMediaElement)
For example, if you want to create an application recording WebRTC streams into the file system, you'll need two media elements: WebRtcEndpoint and RecorderEndpoint. When a client connects to the application, you will need to instantiate these media elements so that the stream received by the WebRtcEndpoint (which is capable of receiving WebRTC streams) is fed to the RecorderEndpoint (which is capable of recording media streams into the file system). Finally, you will need to connect them so that the stream received by the former is fed into the latter:
WebRtcEndpoint.connect(RecorderEndpoint)
To simplify the handling of WebRTC streams on the client side, Kurento provides a utility called WebRtcPeer. Nevertheless, the standard WebRTC API (getUserMedia, RTCPeerConnection, and so on) can also be used to connect to WebRtcEndpoints. For further information please visit the tutorials section.
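To make the pipeline-as-graph idea concrete, here is a small toy model in plain JavaScript (explicitly not the Kurento client library) where elements follow the same source.connect(sink) convention and the pipeline is simply the resulting directed graph:

```javascript
// Toy model of a Media Pipeline: media elements are nodes and connect()
// adds a directed edge from source to sink. This is NOT the Kurento API;
// it only illustrates the topology described above.
function MediaElement(name) {
  this.name = name;
  this.sinks = []; // elements this one feeds media into
}
MediaElement.prototype.connect = function (sink) {
  this.sinks.push(sink);
  return sink; // returning the sink allows simple chaining
};

// The recording topology from the text: WebRTC in, recorder out.
var webRtc = new MediaElement('WebRtcEndpoint');
var recorder = new MediaElement('RecorderEndpoint');
webRtc.connect(recorder);

console.log(webRtc.sinks[0].name); // → RecorderEndpoint
```

In the real API the same call shape applies, but the elements live inside Kurento Media Server and the connection moves actual media.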
Part III
Kurento Media Server (KMS) has to be installed on Ubuntu 14.04 LTS (64 bits).
In order to install the latest stable Kurento Media Server version (6.5.0) you have to type the following commands,
one at a time and in the same order as listed here. When asked for any kind of confirmation, reply affirmatively:
echo
wget
sudo
sudo
Now, Kurento Media Server has been installed. Use the following commands to start and stop it respectively:
sudo service kurento-media-server-6.0 start
sudo service kurento-media-server-6.0 stop
CHAPTER 5
The current stable version of Kurento Media Server uses the Trickle ICE protocol for WebRTC connections. Trickle ICE is the name given to the extension to the Interactive Connectivity Establishment (ICE) protocol that allows ICE agents (in this case Kurento Media Server and Kurento Client) to send and receive candidates incrementally rather than exchanging complete lists. In short, Trickle ICE allows WebRTC connectivity to begin much faster.
This feature makes Kurento Media Server 6 incompatible with former versions. If you are using Kurento Media Server 5.1 or lower, it is strongly recommended to upgrade your KMS. To do that, first you need to uninstall KMS as follows:
sudo apt-get remove kurento-media-server
sudo apt-get purge kurento-media-server
sudo apt-get autoremove
Finally, the references to the Kurento Media Server in the APT sources should be removed:
# Delete any file in /etc/apt/sources.list.d folder related to kurento
sudo rm /etc/apt/sources.list.d/kurento*
# Edit sources.list and remove references to kurento
sudo vi /etc/apt/sources.list
After that, install Kurento Media Server 6 as depicted at the top of this page.
CHAPTER 6
If Kurento Media Server is located behind a NAT, you need to use a STUN or TURN server in order to achieve NAT traversal. In most cases, a STUN server will do the trick. A TURN server is only necessary when the NAT is symmetric.
In order to set up a STUN server you should uncomment the following lines in the Kurento Media Server configuration file, located at /etc/kurento/modules/kurento/WebRtcEndpoint.conf.ini:
stunServerAddress=<stun_ip_address>
stunServerPort=<stun_port>
Note: Be careful, since inline comments (with ;) are not allowed for parameters such as stunServerAddress.
Thus, the following configuration is not correct:
stunServerAddress=<stun_ip_address> ; Only IP address are supported
The parameter stunServerAddress should be an IP address (not a domain name). There are plenty of public STUN servers available, for example:
173.194.66.127:19302
173.194.71.127:19302
74.125.200.127:19302
74.125.204.127:19302
173.194.72.127:19302
74.125.23.127:3478
77.72.174.163:3478
77.72.174.165:3478
77.72.174.167:3478
77.72.174.161:3478
208.97.25.20:3478
62.71.2.168:3478
212.227.67.194:3478
212.227.67.195:3478
107.23.150.92:3478
77.72.169.155:3478
77.72.169.156:3478
77.72.169.164:3478
77.72.169.166:3478
77.72.174.162:3478
77.72.174.164:3478
77.72.174.166:3478
77.72.174.160:3478
54.172.47.69:3478
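Because stunServerAddress rejects hostnames, it can be handy to sanity-check a value before writing it to the configuration file. The helper below is our own illustrative sketch of such a check for dotted-quad IPv4 literals:

```javascript
// Sketch: check that a candidate stunServerAddress value is a literal
// IPv4 address, since domain names are not accepted for this parameter.
function isIPv4(addr) {
  var parts = addr.split('.');
  return parts.length === 4 && parts.every(function (p) {
    // each octet must be all digits and within 0..255
    return /^\d+$/.test(p) && Number(p) <= 255;
  });
}

console.log(isIPv4('173.194.66.127'));    // → true
console.log(isIPv4('stun.l.google.com')); // → false
```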
In order to set up a TURN server you should uncomment the following lines in the Kurento Media Server configuration file, located at /etc/kurento/modules/kurento/WebRtcEndpoint.conf.ini:
turnURL=user:password@address:port
As before, the TURN address should be an IP address (not a domain name). See an example of TURN configuration below:
turnURL=kurento:[email protected]:3478
An open source implementation of a TURN server is coturn. In the FAQ section there is a description of how to install a coturn server.
Part IV
Kurento Tutorials
This section contains tutorials showing how to use the Kurento framework to build different types of WebRTC and multimedia applications. Tutorials come in three flavors:
Java: These show applications where clients interact with a Spring Boot-based application server that hosts the logic orchestrating the communication among clients and controlling Kurento Media Server capabilities.
Browser JavaScript: These show applications executing in the browser and communicating directly with Kurento Media Server. In these tutorials all the logic is hosted by the browser, so no application server is necessary.
Node.js: These show applications where clients interact with an application server based on Node.js. The application server holds the logic orchestrating the communication among the clients and controlling Kurento Media Server capabilities on their behalf.
Note: The tutorials have been created for learning purposes. They are not intended to be used in production environments, where different unmanaged error conditions may emerge. Use at your own risk!
Note: These tutorials require HTTPS in order to use WebRTC. The following instructions provide further information about how to enable application security.
CHAPTER 7
Hello world
This is one of the simplest WebRTC applications you can create with Kurento. It implements a WebRTC loopback (a WebRTC media stream going from client to Kurento and back to the client).
Access the application connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial. However, it is possible to connect to a remote KMS on another machine, simply by adding the flag kms.url to the JVM executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
@EnableWebSocket
@SpringBootApplication
public class HelloWorldApp implements WebSocketConfigurer {
final static String DEFAULT_KMS_WS_URI = "ws://localhost:8888/kurento";
@Bean
public HelloWorldHandler handler() {
return new HelloWorldHandler();
}
@Bean
public KurentoClient kurentoClient() {
return KurentoClient.create(System.getProperty("kms.url", DEFAULT_KMS_WS_URI));
}
@Override
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(handler(), "/helloworld");
}
public static void main(String[] args) throws Exception {
new SpringApplication(HelloWorldApp.class).run(args);
}
}
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate the client with the application server by means of requests and responses. Specifically, the main app class implements the interface WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path /helloworld.
The HelloWorldHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece of this class is the method handleTextMessage. This method implements the actions for requests, returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in the previous sequence diagram.
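Before diving into the handler code, it may help to see the shape of the signaling messages this protocol exchanges as JSON over the WebSocket. The sketch below builds the three server-to-client messages used by the handler (the ids startResponse, iceCandidate and error are taken from the handler code shown in this section):

```javascript
// Sketch of the server-to-client signaling messages used in this tutorial.
// Each object is serialized with JSON.stringify before being sent through
// the WebSocket session.
function startResponse(sdpAnswer) {
  return { id: 'startResponse', sdpAnswer: sdpAnswer };
}
function iceCandidateMessage(candidate) {
  return { id: 'iceCandidate', candidate: candidate };
}
function errorMessage(text) {
  return { id: 'error', message: text };
}

// On the Java side the equivalent object is built with Gson and sent via
// session.sendMessage(new TextMessage(...)).
console.log(JSON.stringify(errorMessage('No media pipeline')));
```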
public class HelloWorldHandler extends TextWebSocketHandler {
private final Logger log = LoggerFactory.getLogger(HelloWorldHandler.class);
private static final Gson gson = new GsonBuilder().create();
@Autowired
private KurentoClient kurento;
users.put(session.getId(), user);
// 3. SDP negotiation
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
JsonObject response = new JsonObject();
response.addProperty("id", "startResponse");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
// 4. Gather ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.error(e.getMessage());
}
}
});
webRtcEndpoint.gatherCandidates();
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
}
3. WebRTC SDP negotiation: In WebRTC, SDP (Session Description Protocol) is used for negotiating media exchanges between peers. This negotiation is based on the SDP offer and answer exchange mechanism. The negotiation is finished in the third part of the method processRequest, using the SDP offer obtained from the browser client and returning an SDP answer generated by the WebRtcEndpoint.
4. Gather ICE candidates: As of version 6, Kurento fully supports the Trickle ICE protocol. For that reason, WebRtcEndpoint can receive ICE candidates asynchronously. To handle this, each WebRtcEndpoint offers a listener (addOnIceCandidateListener) that receives an event each time a new ICE candidate is gathered, so it can be relayed to the client.
id : 'stop'
}
sendMessage(message);
}
hideSpinner(videoInput, videoOutput);
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
7.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Java Client dependency (kurento-client) and the JavaScript Kurento utility library (kurento-utils-js) for the client side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are defined in the file bower.json. The command bower install is automatically called from Maven, so Bower should be present in your system. It can be installed on an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
Due to the Same-Origin Policy, this demo has to be served by an HTTP server. A very simple way of doing this is by means of an HTTP Node.js server, which can be installed using npm:
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-hello-world
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial. However, it is possible to connect to a remote KMS on another machine, simply by adding the parameter ws_uri to the URL, as follows:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one showing the local stream (as captured by the device webcam) and the other showing the remote stream sent by the media server back to the client.
The logic of the application is quite simple: the local stream is sent to the Kurento Media Server, which sends it back to the client without modifications. To implement this behavior, we need to create a Media Pipeline composed of a single Media Element, i.e. a WebRtcEndpoint, which holds the capability of exchanging full-duplex (bidirectional) WebRTC media flows. This media element is connected to itself so that the media it receives (from the browser) is sent back (to the browser). This media pipeline is illustrated in the following picture:
ekko-lightbox : Module for Bootstrap to open modal images, videos, and galleries.
demo-console : Custom JavaScript console.
The specific logic of the Hello World JavaScript demo is coded in the following JavaScript file: index.js. In this file,
there is a function which is called when the green button labeled as Start in the GUI is clicked.
var startButton = document.getElementById("start");
startButton.addEventListener("click", function() {
var options = {
localVideo: videoInput,
remoteVideo: videoOutput
};
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
if(error) return onError(error)
this.generateOffer(onOffer)
});
[...]
}
The function WebRtcPeer.WebRtcPeerSendrecv abstracts the WebRTC internal details (i.e. PeerConnection and getUserMedia) and makes it possible to start a full-duplex WebRTC communication, using the HTML video tag with id videoInput to show the video camera (local stream) and the video tag videoOutput to show the remote stream provided by the Kurento Media Server.
Inside this function, a call to generateOffer is performed. This function accepts a callback in which the SDP offer is received. In this callback we create an instance of the KurentoClient class that will manage communications with the Kurento Media Server, so we need to provide the URI of its WebSocket endpoint. In this example, we assume it is listening on port 8888 on the same host as the HTTP server serving the application.
[...]
var args = getopts(location.search,
{
default:
{
ws_uri: 'ws://' + location.hostname + ':8888/kurento',
ice_servers: undefined
}
});
[...]
kurentoClient(args.ws_uri, function(error, client){
[...]
};
If everything works correctly, we will have an instance of a media pipeline (variable _pipeline in this example). With it, we are able to create Media Elements. In this example we just need a single WebRtcEndpoint.
In WebRTC, SDP is used for negotiating media exchanges between applications. This negotiation is based on the SDP offer and answer exchange mechanism, together with the gathering of ICE candidates, as follows:
pipeline = _pipeline;
pipeline.create("WebRtcEndpoint", function(error, webRtc){
if(error) return onError(error);
setIceCandidateCallbacks(webRtcPeer, webRtc, onError)
webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
if(error) return onError(error);
webRtcPeer.processAnswer(sdpAnswer, onError);
});
webRtc.gatherCandidates(onError);
[...]
});
Note: The TURN and STUN servers to be used can be configured simply by adding the parameter ice_servers to the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","cre
7.2.4 Dependencies
All dependencies of this demo can be obtained using Bower. These dependencies are defined in the bower.json file, as follows:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at Bower.
knowledge of JavaScript, Node.js, HTML and WebRTC. We also recommend reading the Introducing Kurento section
before starting this tutorial.
Note: This tutorial has been configured to use HTTPS. Follow these instructions to secure your application.
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-hello-world
git checkout 6.5.0
npm install
npm start
If you have problems installing any of the dependencies, please remove them and clean the npm cache, and try to
install them again:
rm -r node_modules
npm cache clean
Access the application connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial. However, it is possible to connect to a remote KMS on another machine, simply by adding the argument ws_uri to the npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
The logic of the application is quite simple: the local stream is sent to the Kurento Media Server, which sends it back to the client without modifications. To implement this behavior we need to create a Media Pipeline composed of a single Media Element, i.e. a WebRtcEndpoint, which holds the capability of exchanging full-duplex (bidirectional) WebRTC media flows. This media element is connected to itself so that the media it receives (from the browser) is sent back (to the browser). This media pipeline is illustrated in the following picture:
default:
ws.send(JSON.stringify({
id : 'error',
message : 'Invalid message ' + message
}));
break;
}
});
});
In order to control the media capabilities provided by the Kurento Media Server, we need an instance of the KurentoClient in the Node.js application server. In order to create this instance, we need to specify to the client library the location of the Kurento Media Server. In this example, we assume it is located at localhost, listening on port 8888.
var kurento = require('kurento-client');
var kurentoClient = null;
var argv = minimist(process.argv.slice(2), {
default: {
as_uri: 'https://localhost:8443/',
ws_uri: 'ws://localhost:8888/kurento'
}
});
[...]
function getKurentoClient(callback) {
if (kurentoClient !== null) {
return callback(null, kurentoClient);
}
kurento(argv.ws_uri, function(error, _kurentoClient) {
if (error) {
console.log("Could not find media server at address " + argv.ws_uri);
return callback("Could not find media server at address" + argv.ws_uri
+ ". Exiting with error " + error);
}
kurentoClient = _kurentoClient;
callback(null, kurentoClient);
});
}
Once the Kurento Client has been instantiated, you are ready to communicate with Kurento Media Server. Our first operation is to create a Media Pipeline; then we need to create the Media Elements and connect them. In this example, we just need a single WebRtcEndpoint connected to itself (i.e. in loopback). These functions are called in the start function, which is fired when the start message is received:
function start(sessionId, ws, sdpOffer, callback) {
if (!sessionId) {
return callback('Cannot use undefined sessionId');
}
getKurentoClient(function(error, kurentoClient) {
if (error) {
return callback(error);
}
kurentoClient.create('MediaPipeline', function(error, pipeline) {
if (error) {
return callback(error);
}
createMediaElements(pipeline, ws, function(error, webRtcEndpoint) {
if (error) {
pipeline.release();
return callback(error);
}
if (candidatesQueue[sessionId]) {
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
connectMediaElements(webRtcEndpoint, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
pipeline.release();
return callback(error);
}
sessions[sessionId] = {
'pipeline' : pipeline,
'webRtcEndpoint' : webRtcEndpoint
}
return callback(null, sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
return callback(error);
}
});
});
});
});
});
}
As of Kurento Media Server 6.0, the WebRTC negotiation is done by exchanging ICE candidates between the WebRTC peers. To implement this protocol, the webRtcEndpoint receives candidates from the client in the onIceCandidate function. These candidates are stored in a queue when the webRtcEndpoint is not available yet. Then these candidates are added to the media element by calling the addIceCandidate method.
var candidatesQueue = {};
[...]
function onIceCandidate(sessionId, _candidate) {
var candidate = kurento.register.complexTypes.IceCandidate(_candidate);
if (sessions[sessionId]) {
console.info('Sending candidate');
var webRtcEndpoint = sessions[sessionId].webRtcEndpoint;
webRtcEndpoint.addIceCandidate(candidate);
}
else {
console.info('Queueing candidate');
if (!candidatesQueue[sessionId]) {
candidatesQueue[sessionId] = [];
}
candidatesQueue[sessionId].push(candidate);
}
}
In the function start the method WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js is used to create the
webRtcPeer object, which is used to handle the WebRTC communication.
videoInput = document.getElementById('videoInput');
videoOutput = document.getElementById('videoOutput');
[...]
function start() {
console.log('Starting video call ...')
// Disable start button
setState(I_AM_STARTING);
showSpinner(videoInput, videoOutput);
console.log('Creating WebRtcPeer and generating local sdp offer ...');
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
if(error) return onError(error);
this.generateOffer(onOffer);
});
}
7.3.5 Dependencies
Server-side dependencies of this demo are managed using npm. Our main dependency is the Kurento Client JavaScript
(kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
[...]
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the
following section:
"dependencies": {
[...]
"kurento-utils" : "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at npm and Bower.
CHAPTER 8
This web application consists of a WebRTC video communication in loopback, adding a funny hat over detected faces.
This is an example of a computer vision and augmented reality filter.
The web application starts on port 8443 in the localhost by default. Therefore, open the URL https://localhost:8443/
in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM
executing the demo. As we are using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
Fig. 8.1: Kurento Magic Mirror Screenshot: WebRTC with filter in loopback
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is
sent to Kurento Media Server, which processes it and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
WebRtcEndpoint: Provides full-duplex (bidirectional) WebRTC capabilities.
FaceOverlay filter: Computer vision filter that detects faces in the video stream and puts an image on top of
them. In this demo the filter is configured to put a Super Mario hat.
This web application follows a Single Page Application architecture (SPA), and uses a WebSocket to communicate the
client with the application server by means of requests and responses. Specifically, the main app class implements the
interface WebSocketConfigurer to register a WebSocketHandler to process WebSocket requests in the path
/magicmirror.
The MagicMirrorHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central
piece of this class is the method handleTextMessage. This method implements the actions for requests, returning
responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in
the previous sequence diagram.
In the designed protocol there are three different kinds of incoming messages to the Server: start, stop and
onIceCandidate. These messages are treated in the switch clause, taking the proper steps in each case.
public class MagicMirrorHandler extends TextWebSocketHandler {
private final Logger log = LoggerFactory.getLogger(MagicMirrorHandler.class);
private static final Gson gson = new GsonBuilder().create();
In the following snippet, we can see the start method. It handles the ICE candidate gathering, creates a Media
Pipeline, creates the Media Elements (WebRtcEndpoint and FaceOverlayFilter) and makes the connections
among them. A startResponse message is sent back to the client with the SDP answer.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
try {
// User session
UserSession user = new UserSession();
MediaPipeline pipeline = kurento.createMediaPipeline();
user.setMediaPipeline(pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
user.setWebRtcEndpoint(webRtcEndpoint);
users.put(session.getId(), user);
// ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
// Media logic
FaceOverlayFilter faceOverlayFilter = new FaceOverlayFilter.Builder(pipeline).build();
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
Note: Notice the hat URL is provided by the application server and consumed by the KMS. This logic assumes that
the application server is hosted on the local machine (localhost), so by default the hat URL is
https://localhost:8443/img/mario-wings.png. If your application server is hosted on a different host, this can be easily
changed by means of the configuration parameter app.server.url, for example:
mvn compile exec:java -Dapp.server.url=https://app_server_host:app_server_port
The sendError method is quite simple: it sends an error message to the client when an exception is caught in the
server-side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
8.1.4 Client-Side
Let's move now to the client-side of the application. To call the previously created WebSocket service in the
server-side, we use the JavaScript class WebSocket. We use a specific Kurento JavaScript library called kurento-utils.js
to simplify the WebRTC interaction with the server. This library depends on adapter.js, which is a JavaScript
WebRTC utility maintained by Google that abstracts away browser differences. Finally, jquery.js is also needed in this
application.
These libraries are linked in the index.html web page, and are used in the index.js. In the following snippet we can see
the creation of the WebSocket (variable ws) in the path /magicmirror. Then, the onmessage listener of the
WebSocket is used to implement the JSON signaling protocol in the client-side. Notice that there are three incoming
messages to the client: startResponse, error, and iceCandidate. Convenient actions are taken to implement each
step in the communication. For example, in the function start, the function WebRtcPeer.WebRtcPeerSendrecv
of kurento-utils.js is used to start a WebRTC communication.
var ws = new WebSocket('ws://' + location.host + '/magicmirror');
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'startResponse':
startResponse(parsedMessage);
break;
case 'error':
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError("Error message from server: " + parsedMessage.message);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function (error) {
if (error) {
console.error("Error adding candidate: " + error);
return;
}
});
break;
default:
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Unrecognized message', parsedMessage);
}
}
function start() {
console.log("Starting video call ...")
// Disable start button
setState(I_AM_STARTING);
showSpinner(videoInput, videoOutput);
console.log("Creating WebRtcPeer and generating local sdp offer ...");
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate: onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function (error) {
if (error) {
return console.error(error);
}
webRtcPeer.generateOffer(onOffer);
});
}
function onOffer(offerSdp) {
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'start',
sdpOffer : offerSdp
}
sendMessage(message);
}
function onIceCandidate(candidate) {
console.log("Local candidate" + JSON.stringify(candidate));
var message = {
id: 'onIceCandidate',
candidate: candidate
};
sendMessage(message);
}
8.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where the Kurento
dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Java Client
(kurento-client) and the JavaScript Kurento utility library (kurento-utils-js) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but it is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
Due to the Same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by
means of an HTTP Node.js server, which can be installed using npm:
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-magic-mirror
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
Kurento Media Server must use WebSockets over SSL/TLS (WSS), so make sure you check this too. It is possible to
locate the KMS in another machine simply adding the parameter ws_uri to the URL:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
Fig. 8.5: Kurento Magic Mirror Screenshot: WebRTC with filter in loopback
The function WebRtcPeer.WebRtcPeerSendrecv abstracts the WebRTC internal details (i.e. PeerConnection and
getUserMedia) and makes it possible to start a full-duplex WebRTC communication, using the HTML video tag with id
videoInput to show the video camera (local stream) and the video tag videoOutput to show the remote stream provided
by the Kurento Media Server.
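Since this section does not reproduce the call itself, its calling convention can be sketched with a stand-in (the WebRtcPeerSendrecv below is a hypothetical stub, not the real kurento-utils implementation, which works asynchronously against the browser's WebRTC stack):

```javascript
// Hypothetical stub of kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv, showing
// only the calling convention: an options object plus an error-first
// callback that runs with the new peer bound to `this`.
var kurentoUtils = { WebRtcPeer: {} };
kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv = function(options, callback) {
  var peer = {
    options: options,
    // Stand-in: the real generateOffer produces an SDP offer asynchronously
    generateOffer: function(cb) { cb(null, 'v=0 (stand-in SDP offer)'); }
  };
  callback.call(peer, null);
  return peer;
};

var receivedOffer = null;
var webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(
  { localVideo: 'videoInput', remoteVideo: 'videoOutput' },
  function(error) {
    if (error) return console.error(error);
    this.generateOffer(function(error, sdpOffer) {
      receivedOffer = sdpOffer;
    });
  });

console.log(receivedOffer !== null);  // true
```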
Inside this function, a call to generateOffer is performed. This function accepts a callback in which the SDP offer is
received. In this callback we create an instance of the KurentoClient class that will manage communications with the
Kurento Media Server. So, we need to provide the URI of its WebSocket endpoint. In this example, we assume it is
listening on port 8888 at the same host as the HTTP server serving the application.
[...]
var args = getopts(location.search,
{
default:
{
ws_uri: 'ws://' + location.hostname + ':8888/kurento',
ice_servers: undefined
}
});
[...]
kurentoClient(args.ws_uri, function(error, client){
[...]
};
Once we have an instance of kurentoClient, the following step is to create a Media Pipeline, as follows:
client.create("MediaPipeline", function(error, _pipeline){
[...]
});
If everything works correctly, we have an instance of a media pipeline (variable pipeline in this example). With
this instance, we are able to create Media Elements. In this example we just need a WebRtcEndpoint and a FaceOverlayFilter. Then, these media elements are interconnected:
pipeline.create('WebRtcEndpoint', function(error, webRtcEp) {
if (error) return onError(error);
setIceCandidateCallbacks(webRtcPeer, webRtcEp, onError)
webRtcEp.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) return onError(error);
webRtcPeer.processAnswer(sdpAnswer, onError);
});
webRtcEp.gatherCandidates(onError);
pipeline.create('FaceOverlayFilter', function(error, filter) {
if (error) return onError(error);
filter.setOverlayedImage(args.hat_uri, -0.35, -1.2, 1.6, 1.6,
function(error) {
if (error) return onError(error);
});
client.connect(webRtcEp, filter, webRtcEp, function(error) {
if (error) return onError(error);
console.log("WebRtcEndpoint --> filter --> WebRtcEndpoint");
});
});
});
Note: The TURN and STUN servers to be used can be configured simply adding the parameter ice_servers to
the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","cre
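The ice_servers value is a JSON array of server entries. A minimal sketch of how such a parameter value maps onto the iceServers configuration shape used by the browser's RTCPeerConnection (illustrative only, not the exact kurento-utils code):

```javascript
// Illustrative: parsing an ice_servers query value into an iceServers
// configuration object. The raw string below is a sample value.
var raw = '[{"urls":"stun:stun1.example.net"}]';  // value of ?ice_servers=
var config = { iceServers: JSON.parse(raw) };
console.log(config.iceServers[0].urls);  // stun:stun1.example.net
```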
8.2.4 Dependencies
The dependencies of this demo have to be obtained using Bower. These dependencies are defined in the bower.json
file, as follows:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at Bower.
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-magic-mirror
git checkout 6.5.0
npm install
npm start
If you have problems installing any of the dependencies, please remove them, clean the npm cache, and try to
install them again:
rm -r node_modules
npm cache clean
Access the application connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the argument ws_uri to the
npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
Fig. 8.7: Kurento Magic Mirror Screenshot: WebRTC with filter in loopback
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is
sent to Kurento Media Server, which processes it and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
WebRtcEndpoint: Provides full-duplex (bidirectional) WebRTC capabilities.
FaceOverlay filter: Computer vision filter that detects faces in the video stream and puts an image on top of
them. In this demo the filter is configured to put a Super Mario hat.
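The helper that wires this loopback topology is not reproduced in the snippets below; a hedged sketch of what connectMediaElements can look like, following the error-first callback style of the rest of the Node.js code (the stand-in elements at the bottom exist only so the sketch runs on its own):

```javascript
// Hedged sketch: connecting WebRtcEndpoint -> FaceOverlayFilter -> WebRtcEndpoint
function connectMediaElements(webRtcEndpoint, faceOverlayFilter, callback) {
  webRtcEndpoint.connect(faceOverlayFilter, function(error) {
    if (error) return callback(error);
    faceOverlayFilter.connect(webRtcEndpoint, function(error) {
      if (error) return callback(error);
      return callback(null);
    });
  });
}

// Stand-in media elements recording their connections (illustration only)
function fakeElement(name, log) {
  return {
    name: name,
    connect: function(sink, cb) { log.push(name + '->' + sink.name); cb(null); }
  };
}
var log = [];
var ep = fakeElement('WebRtcEndpoint', log);
var filter = fakeElement('FaceOverlayFilter', log);
connectMediaElements(ep, filter, function(error) {
  if (error) throw error;
  console.log(log.join(', '));
  // WebRtcEndpoint->FaceOverlayFilter, FaceOverlayFilter->WebRtcEndpoint
});
```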
var wss = new ws.Server({
server : server,
path : '/magicmirror'
});
/*
* Management of WebSocket messages
*/
wss.on('connection', function(ws) {
var sessionId = null;
var request = ws.upgradeReq;
var response = {
writeHead : {}
};
sessionHandler(request, response, function(err) {
sessionId = request.session.id;
console.log('Connection received with sessionId ' + sessionId);
});
ws.on('error', function(error) {
console.log('Connection ' + sessionId + ' error');
stop(sessionId);
});
ws.on('close', function() {
console.log('Connection ' + sessionId + ' closed');
stop(sessionId);
});
ws.on('message', function(_message) {
var message = JSON.parse(_message);
console.log('Connection ' + sessionId + ' received message ', message);
switch (message.id) {
case 'start':
sessionId = request.session.id;
start(sessionId, ws, message.sdpOffer, function(error, sdpAnswer) {
if (error) {
return ws.send(JSON.stringify({
id : 'error',
message : error
}));
}
ws.send(JSON.stringify({
id : 'startResponse',
sdpAnswer : sdpAnswer
}));
});
break;
case 'stop':
stop(sessionId);
break;
case 'onIceCandidate':
onIceCandidate(sessionId, message.candidate);
break;
default:
ws.send(JSON.stringify({
id : 'error',
message : 'Invalid message ' + message
}));
break;
}
});
});
In order to control the media capabilities provided by the Kurento Media Server, we need an instance of the
KurentoClient in the Node application server. In order to create this instance, we need to specify to the client library
the location of the Kurento Media Server. In this example, we assume it is located at localhost, listening on port 8888.
var kurento = require('kurento-client');
var kurentoClient = null;
var argv = minimist(process.argv.slice(2), {
default: {
as_uri: 'https://localhost:8443/',
ws_uri: 'ws://localhost:8888/kurento'
}
});
[...]
function getKurentoClient(callback) {
if (kurentoClient !== null) {
return callback(null, kurentoClient);
}
kurento(argv.ws_uri, function(error, _kurentoClient) {
if (error) {
console.log("Could not find media server at address " + argv.ws_uri);
return callback("Could not find media server at address" + argv.ws_uri
+ ". Exiting with error " + error);
}
kurentoClient = _kurentoClient;
callback(null, kurentoClient);
});
}
Once the Kurento Client has been instantiated, you are ready to communicate with Kurento Media Server. Our
first operation is to create a Media Pipeline; then we need to create the Media Elements and connect them. In this
example, we need a WebRtcEndpoint connected to a FaceOverlayFilter, which is connected to the sink of the same
WebRtcEndpoint. These functions are called in the start function, which is fired when the start message is
received:
function start(sessionId, ws, sdpOffer, callback) {
if (!sessionId) {
return callback('Cannot use undefined sessionId');
}
getKurentoClient(function(error, kurentoClient) {
if (error) {
return callback(error);
}
kurentoClient.create('MediaPipeline', function(error, pipeline) {
if (error) {
return callback(error);
}
createMediaElements(pipeline, ws, function(error, webRtcEndpoint, faceOverlayFilter) {
if (error) {
pipeline.release();
return callback(error);
}
if (candidatesQueue[sessionId]) {
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
connectMediaElements(webRtcEndpoint, faceOverlayFilter, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
pipeline.release();
return callback(error);
}
sessions[sessionId] = {
'pipeline' : pipeline,
'webRtcEndpoint' : webRtcEndpoint
}
return callback(null, sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
return callback(error);
}
});
});
});
});
});
}
As of Kurento Media Server 6.0, the WebRTC negotiation is done by exchanging ICE candidates between the WebRTC
peers. To implement this protocol, the webRtcEndpoint receives candidates from the client in the onIceCandidate
function. While the webRtcEndpoint is not yet available, these candidates are stored in a queue. Once it exists, the
queued candidates are added to the media element by calling its addIceCandidate method.
var candidatesQueue = {};
[...]
function onIceCandidate(sessionId, _candidate) {
var candidate = kurento.register.complexTypes.IceCandidate(_candidate);
if (sessions[sessionId]) {
console.info('Sending candidate');
var webRtcEndpoint = sessions[sessionId].webRtcEndpoint;
webRtcEndpoint.addIceCandidate(candidate);
}
else {
console.info('Queueing candidate');
if (!candidatesQueue[sessionId]) {
candidatesQueue[sessionId] = [];
}
candidatesQueue[sessionId].push(candidate);
}
}
To simplify the WebRTC interaction with the server, we use the kurento-utils.js library. It depends on adapter.js, a
JavaScript WebRTC utility maintained by Google that abstracts away browser differences. Finally, jquery.js is also
needed in this application. These libraries are linked in the index.html web page, and are used in the index.js. In the
following snippet we can see the creation of the WebSocket (variable ws) in the path /magicmirror. Then, the
onmessage listener of the WebSocket is used to implement the JSON signaling protocol in the client-side. Notice
that there are three incoming messages to the client: startResponse, error, and iceCandidate. Convenient
actions are taken to implement each step in the communication.
var ws = new WebSocket('ws://' + location.host + '/magicmirror');
var webRtcPeer;
const I_CAN_START = 0;
const I_CAN_STOP = 1;
const I_AM_STARTING = 2;
[...]
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'startResponse':
startResponse(parsedMessage);
break;
case 'error':
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Error message from server: ' + parsedMessage.message);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate)
break;
default:
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Unrecognized message', parsedMessage);
}
}
In the function start the method WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js is used to create the
webRtcPeer object, which is used to handle the WebRTC communication.
videoInput = document.getElementById('videoInput');
videoOutput = document.getElementById('videoOutput');
[...]
function start() {
console.log('Starting video call ...')
// Disable start button
setState(I_AM_STARTING);
showSpinner(videoInput, videoOutput);
console.log('Creating WebRtcPeer and generating local sdp offer ...');
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
if(error) return onError(error);
this.generateOffer(onOffer);
});
}
function onIceCandidate(candidate) {
console.log('Local candidate' + JSON.stringify(candidate));
var message = {
id : 'onIceCandidate',
candidate : candidate
};
sendMessage(message);
}
function onOffer(error, offerSdp) {
if(error) return onError(error);
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'start',
sdpOffer : offerSdp
}
sendMessage(message);
}
8.3.5 Dependencies
Server-side dependencies of this demo are managed using npm. Our main dependency is the Kurento Client JavaScript
(kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
[...]
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the
following section:
"dependencies": {
[...]
"kurento-utils" : "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at npm and Bower.
CHAPTER 9
Video broadcasting for WebRTC. One peer transmits a video stream and N peers receive it.
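Conceptually, the media plane connects the presenter's WebRtcEndpoint to one WebRtcEndpoint per viewer. The fan-out can be sketched with stand-in endpoints (illustration only; the real wiring uses the Kurento connect API shown in the server-side code later in this chapter):

```javascript
// Conceptual sketch of the one-to-many fan-out: one presenter endpoint
// connected to N viewer endpoints. Endpoint is a stand-in, not a Kurento class.
function Endpoint(name) { this.name = name; this.sinks = []; }
Endpoint.prototype.connect = function(sink) { this.sinks.push(sink.name); };

var presenter = new Endpoint('presenter');
var viewers = [new Endpoint('viewer1'), new Endpoint('viewer2'), new Endpoint('viewer3')];
// Each new viewer gets the presenter's stream
viewers.forEach(function(v) { presenter.connect(v); });

console.log(presenter.sinks.join(','));  // viewer1,viewer2,viewer3
```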
The web application starts on port 8443 in the localhost by default. Therefore, open the URL https://localhost:8443/
in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM
executing the demo. As we are using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
WebRtcEndpoints. The following picture shows a screenshot of the Presenter's web GUI:
DEFAULT_KMS_WS_URI));
}
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(callHandler(), "/call");
}
public static void main(String[] args) throws Exception {
new SpringApplication(One2ManyCallApp.class).run(args);
}
}
This web application follows a Single Page Application architecture (SPA), and uses a WebSocket to communicate the
client with the server by means of requests and responses. Specifically, the main app class implements the interface
WebSocketConfigurer to register a WebSocketHandler to process WebSocket requests in the path /call.
The CallHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece
of this class is the method handleTextMessage. This method implements the actions for requests, returning
responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in
the previous sequence diagram.
In the designed protocol there are four different kinds of incoming messages to the Server: presenter, viewer,
stop, and onIceCandidate. These messages are treated in the switch clause, taking the proper steps in each case.
public class CallHandler extends TextWebSocketHandler {
private static final Logger log = LoggerFactory.getLogger(CallHandler.class);
private static final Gson gson = new GsonBuilder().create();
@Autowired
private KurentoClient kurento;
private MediaPipeline pipeline;
private UserSession presenterUserSession;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
log.debug("Incoming message from session '{}': {}", session.getId(), jsonMessage);
switch (jsonMessage.get("id").getAsString()) {
case "presenter":
try {
presenter(session, jsonMessage);
} catch (Throwable t) {
handleErrorResponse(t, session, "presenterResponse");
}
break;
case "viewer":
try {
viewer(session, jsonMessage);
} catch (Throwable t) {
handleErrorResponse(t, session, "viewerResponse");
}
break;
case "onIceCandidate": {
JsonObject candidate = jsonMessage.get("candidate").getAsJsonObject();
UserSession user = null;
if (presenterUserSession != null) {
if (presenterUserSession.getSession() == session) {
user = presenterUserSession;
} else {
user = viewers.get(session.getId());
}
}
if (user != null) {
IceCandidate cand = new IceCandidate(candidate.get("candidate").getAsString(),
candidate.get("sdpMid").getAsString(), candidate.get("sdpMLineIndex").getAsInt());
user.addCandidate(cand);
}
break;
}
case "stop":
stop(session);
break;
default:
break;
}
}
private void handleErrorResponse(Throwable t, WebSocketSession session,
String responseId) throws IOException {
stop(session);
log.error(t.getMessage(), t);
JsonObject response = new JsonObject();
response.addProperty("id", responseId);
response.addProperty("response", "rejected");
response.addProperty("message", t.getMessage());
session.sendMessage(new TextMessage(response.toString()));
}
@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
stop(session);
}
}
In the following snippet, we can see the presenter method. It creates a Media Pipeline and the WebRtcEndpoint
for the presenter:
response.addProperty("response", "accepted");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
presenterUserSession.sendMessage(response);
}
presenterWebRtc.gatherCandidates();
} else {
JsonObject response = new JsonObject();
response.addProperty("id", "presenterResponse");
response.addProperty("response", "rejected");
response.addProperty("message", "Another user is currently acting as sender. Try again later ...");
session.sendMessage(new TextMessage(response.toString()));
}
}
The viewer method is similar, but not equal: if there is an active presenter, the presenter's WebRtcEndpoint is connected to each of the viewers' WebRtcEndpoints; otherwise an error is sent back to the client.
private synchronized void viewer(final WebSocketSession session, JsonObject jsonMessage) throws IOException {
if (presenterUserSession == null || presenterUserSession.getWebRtcEndpoint() == null) {
JsonObject response = new JsonObject();
response.addProperty("id", "viewerResponse");
response.addProperty("response", "rejected");
response.addProperty("message", "No active sender now. Become sender or try again later ...");
session.sendMessage(new TextMessage(response.toString()));
} else {
if (viewers.containsKey(session.getId())) {
JsonObject response = new JsonObject();
response.addProperty("id", "viewerResponse");
response.addProperty("response", "rejected");
response.addProperty("message",
"You are already viewing in this session. Use a different browser to add additional viewers.");
session.sendMessage(new TextMessage(response.toString()));
return;
}
UserSession viewer = new UserSession(session);
viewers.put(session.getId(), viewer);
String sdpOffer = jsonMessage.getAsJsonPrimitive("sdpOffer").getAsString();
WebRtcEndpoint nextWebRtc = new WebRtcEndpoint.Builder(pipeline).build();
nextWebRtc.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
viewer.setWebRtcEndpoint(nextWebRtc);
presenterUserSession.getWebRtcEndpoint().connect(nextWebRtc);
String sdpAnswer = nextWebRtc.processOffer(sdpOffer);
JsonObject response = new JsonObject();
response.addProperty("id", "viewerResponse");
response.addProperty("response", "accepted");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
viewer.sendMessage(response);
}
nextWebRtc.gatherCandidates();
}
}
9.1.4 Client-Side
Let's move now to the client-side of the application. To call the previously created WebSocket service in the
server-side, we use the JavaScript class WebSocket. We use a specific Kurento JavaScript library called kurento-utils.js
to simplify the WebRTC interaction with the server. This library depends on adapter.js, which is a JavaScript
WebRTC utility maintained by Google that abstracts away browser differences. Finally, jquery.js is also needed in this
application.
These libraries are linked in the index.html web page, and are used in the index.js. In the following snippet we
can see the creation of the WebSocket (variable ws) in the path /call. Then, the onmessage listener of the
WebSocket is used to implement the JSON signaling protocol in the client-side. Notice that there are four incoming
messages to the client: presenterResponse, viewerResponse, iceCandidate, and stopCommunication.
Convenient actions are taken to implement each step in the communication. For example, in the function presenter
the function WebRtcPeer.WebRtcPeerSendonly of kurento-utils.js is used to start a WebRTC communication.
Then, WebRtcPeer.WebRtcPeerRecvonly is used in the viewer function.
var ws = new WebSocket('ws://' + location.host + '/call');
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'presenterResponse':
presenterResponse(parsedMessage);
break;
case 'viewerResponse':
viewerResponse(parsedMessage);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function (error) {
if (!error) return;
console.error("Error adding candidate: " + error);
});
break;
case 'stopCommunication':
dispose();
break;
default:
console.error('Unrecognized message', parsedMessage);
}
}
function presenter() {
if (!webRtcPeer) {
showSpinner(video);
var options = {
localVideo: video,
onicecandidate: onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendonly(options,
function (error) {
if(error) {
return console.error(error);
}
webRtcPeer.generateOffer(onOfferPresenter);
});
}
}
function viewer() {
if (!webRtcPeer) {
showSpinner(video);
var options = {
remoteVideo: video,
onicecandidate: onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function (error) {
if(error) {
return console.error(error);
}
this.generateOffer(onOfferViewer);
});
}
}
9.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-one2many-call
git checkout 6.5.0
npm install
npm start
If you have problems installing any of the dependencies, please remove them and clean the npm cache, and try to
install them again:
rm -r node_modules
npm cache clean
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC-capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine by simply adding the argument ws_uri to the
npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
To implement this behavior we have to create a Media Pipeline composed of 1+N WebRtcEndpoints. The Presenter
peer sends its stream to the rest of the Viewers, which are configured in receive-only mode. The implemented media
pipeline is illustrated in the following picture:
var ws = require('ws');
[...]
var wss = new ws.Server({
server : server,
path : '/one2many'
});
/*
* Management of WebSocket messages
*/
wss.on('connection', function(ws) {
var sessionId = nextUniqueId();
console.log('Connection received with sessionId ' + sessionId);
ws.on('error', function(error) {
console.log('Connection ' + sessionId + ' error');
stop(sessionId);
});
ws.on('close', function() {
console.log('Connection ' + sessionId + ' closed');
stop(sessionId);
});
ws.on('message', function(_message) {
var message = JSON.parse(_message);
console.log('Connection ' + sessionId + ' received message ', message);
switch (message.id) {
case 'presenter':
startPresenter(sessionId, ws, message.sdpOffer, function(error, sdpAnswer) {
if (error) {
return ws.send(JSON.stringify({
id : 'presenterResponse',
response : 'rejected',
message : error
}));
}
ws.send(JSON.stringify({
id : 'presenterResponse',
response : 'accepted',
sdpAnswer : sdpAnswer
}));
});
break;
case 'viewer':
startViewer(sessionId, ws, message.sdpOffer, function(error, sdpAnswer) {
if (error) {
return ws.send(JSON.stringify({
id : 'viewerResponse',
response : 'rejected',
message : error
}));
}
ws.send(JSON.stringify({
id : 'viewerResponse',
response : 'accepted',
sdpAnswer : sdpAnswer
}));
});
break;
case 'stop':
stop(sessionId);
break;
case 'onIceCandidate':
onIceCandidate(sessionId, message.candidate);
break;
default:
ws.send(JSON.stringify({
id : 'error',
message : 'Invalid message ' + message
}));
break;
}
});
});
In order to control the media capabilities provided by the Kurento Media Server, we need an instance of the KurentoClient in the Node application server. To create this instance, we need to specify to the client library the
location of the Kurento Media Server. In this example, we assume it is located at localhost, listening on port 8888.
var kurento = require('kurento-client');
var kurentoClient = null;
var argv = minimist(process.argv.slice(2), {
default: {
as_uri: 'https://localhost:8443/',
ws_uri: 'ws://localhost:8888/kurento'
}
});
[...]
function getKurentoClient(callback) {
if (kurentoClient !== null) {
return callback(null, kurentoClient);
}
kurento(argv.ws_uri, function(error, _kurentoClient) {
if (error) {
console.log("Could not find media server at address " + argv.ws_uri);
return callback("Could not find media server at address " + argv.ws_uri
+ ". Exiting with error " + error);
}
kurentoClient = _kurentoClient;
callback(null, kurentoClient);
});
}
Once the Kurento Client has been instantiated, you are ready to communicate with Kurento Media Server. Our first
operation is to create a Media Pipeline; then we need to create the Media Elements and connect them. In this example,
we need a WebRtcEndpoint (in send-only mode) for the presenter, connected to N WebRtcEndpoints (in receive-only
mode) for the viewers. These elements are created in the startPresenter and startViewer functions, which are
fired when the presenter and viewer messages are received, respectively:
function startPresenter(sessionId, ws, sdpOffer, callback) {
clearCandidatesQueue(sessionId);
if (presenter !== null) {
stop(sessionId);
return callback("Another user is currently acting as presenter. Try again later ...");
}
presenter = {
id : sessionId,
pipeline : null,
webRtcEndpoint : null
}
getKurentoClient(function(error, kurentoClient) {
if (error) {
stop(sessionId);
return callback(error);
}
if (presenter === null) {
stop(sessionId);
return callback(noPresenterMessage);
}
kurentoClient.create('MediaPipeline', function(error, pipeline) {
if (error) {
stop(sessionId);
return callback(error);
}
if (presenter === null) {
stop(sessionId);
return callback(noPresenterMessage);
}
presenter.pipeline = pipeline;
pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
if (error) {
stop(sessionId);
return callback(error);
}
if (presenter === null) {
stop(sessionId);
return callback(noPresenterMessage);
}
presenter.webRtcEndpoint = webRtcEndpoint;
if (candidatesQueue[sessionId]) {
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
stop(sessionId);
return callback(error);
}
if (presenter === null) {
stop(sessionId);
return callback(noPresenterMessage);
}
callback(null, sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
stop(sessionId);
return callback(error);
}
});
});
});
});
}
function startViewer(sessionId, ws, sdpOffer, callback) {
clearCandidatesQueue(sessionId);
if (presenter === null) {
stop(sessionId);
return callback(noPresenterMessage);
}
presenter.pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
if (error) {
stop(sessionId);
return callback(error);
}
viewers[sessionId] = {
"webRtcEndpoint" : webRtcEndpoint,
"ws" : ws
}
[...]
As of Kurento Media Server 6.0, the WebRTC negotiation is done by exchanging ICE candidates between the WebRTC
peers. To implement this protocol, the webRtcEndpoint receives candidates from the client in the onIceCandidate
function. These candidates are stored in a queue while the webRtcEndpoint is not yet available; once it has been
created, the queued candidates are added to the media element by calling its addIceCandidate method.
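The queueing behavior just described can be sketched as follows. This is a minimal sketch, not the tutorial's exact listing: candidatesQueue and addIceCandidate mirror the names used in this tutorial, while queueOrDeliverCandidate and drainCandidatesQueue are hypothetical helper names chosen for illustration.

```javascript
// Per-session queues for ICE candidates that arrive before the
// WebRtcEndpoint for that session has been created.
var candidatesQueue = {};

// Called for each 'onIceCandidate' message received from a client.
function queueOrDeliverCandidate(sessionId, candidate, webRtcEndpoint) {
    if (webRtcEndpoint) {
        // The endpoint already exists: deliver the candidate directly.
        webRtcEndpoint.addIceCandidate(candidate);
    } else {
        // The endpoint is not available yet: park the candidate.
        if (!candidatesQueue[sessionId]) {
            candidatesQueue[sessionId] = [];
        }
        candidatesQueue[sessionId].push(candidate);
    }
}

// Called right after the WebRtcEndpoint has been created,
// to flush any candidates that arrived early.
function drainCandidatesQueue(sessionId, webRtcEndpoint) {
    while (candidatesQueue[sessionId] && candidatesQueue[sessionId].length) {
        webRtcEndpoint.addIceCandidate(candidatesQueue[sessionId].shift());
    }
}
```

This is the same pattern visible in the startPresenter listing above, where the queue is drained immediately after pipeline.create('WebRtcEndpoint', ...) succeeds.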
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'presenterResponse':
presenterResponse(parsedMessage);
break;
case 'viewerResponse':
viewerResponse(parsedMessage);
break;
case 'stopCommunication':
dispose();
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate)
break;
default:
console.error('Unrecognized message', parsedMessage);
}
}
function presenterResponse(message) {
if (message.response != 'accepted') {
var errorMsg = message.message ? message.message : 'Unknown error';
console.warn('Call not accepted for the following reason: ' + errorMsg);
dispose();
} else {
webRtcPeer.processAnswer(message.sdpAnswer);
}
}
function viewerResponse(message) {
if (message.response != 'accepted') {
var errorMsg = message.message ? message.message : 'Unknown error';
console.warn('Call not accepted for the following reason: ' + errorMsg);
dispose();
} else {
webRtcPeer.processAnswer(message.sdpAnswer);
}
}
On the one hand, the function presenter uses the method WebRtcPeer.WebRtcPeerSendonly of kurento-utils.js to start a WebRTC communication in send-only mode. On the other hand, the function viewer uses the
method WebRtcPeer.WebRtcPeerRecvonly of kurento-utils.js to start a WebRTC communication in receive-only mode.
function presenter() {
if (!webRtcPeer) {
showSpinner(video);
var options = {
localVideo: video,
onicecandidate : onIceCandidate
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendonly(options, function(error) {
if(error) return onError(error);
this.generateOffer(onOfferPresenter);
});
}
}
function onOfferPresenter(error, offerSdp) {
if (error) return onError(error);
var message = {
id : 'presenter',
sdpOffer : offerSdp
};
sendMessage(message);
}
function viewer() {
if (!webRtcPeer) {
showSpinner(video);
var options = {
remoteVideo: video,
onicecandidate : onIceCandidate
}
webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options, function(error) {
if(error) return onError(error);
this.generateOffer(onOfferViewer);
});
}
}
function onOfferViewer(error, offerSdp) {
if (error) return onError(error)
var message = {
id : 'viewer',
sdpOffer : offerSdp
}
sendMessage(message);
}
9.2.5 Dependencies
Server-side dependencies of this demo are managed using npm. Our main dependency is the Kurento Client JavaScript
(kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
[...]
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the
following section:
"dependencies": {
[...]
"kurento-utils" : "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at npm and Bower.
CHAPTER 10
The web application starts on port 8443 on localhost by default. Therefore, open the URL https://localhost:8443/
in a WebRTC-compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine by simply adding the flag kms.url to the JVM
executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the local stream
and the other for the remote peer stream. If two users, A and B, are using the application, the media flow goes this way:
the video camera stream of user A is sent to the Kurento Media Server, which sends it to user B. In the same way, B
sends to the Kurento Media Server, which forwards it to A. This means that KMS is providing a B2B (back-to-back) call
service.
To implement this behavior, we have to create a Media Pipeline composed of two WebRtcEndpoints connected in B2B. The
implemented media pipeline is illustrated in the following picture:
Fig. 10.4: Server-side class diagram of the one to one video call app
@EnableWebSocket
@SpringBootApplication
public class One2OneCallApp implements WebSocketConfigurer {
final static String DEFAULT_KMS_WS_URI = "ws://localhost:8888/kurento";
@Bean
public CallHandler callHandler() {
return new CallHandler();
}
@Bean
public UserRegistry registry() {
return new UserRegistry();
}
@Bean
public KurentoClient kurentoClient() {
return KurentoClient.create(System.getProperty("kms.url",
DEFAULT_KMS_WS_URI));
}
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(callHandler(), "/call");
}
public static void main(String[] args) throws Exception {
new SpringApplication(One2OneCallApp.class).run(args);
}
}
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate
the client with the server by means of requests and responses. Specifically, the main app class implements the interface
WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path /call.
The CallHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece
of this class is the method handleTextMessage. This method implements the actions for requests, returning
responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in
the previous sequence diagram.
In the designed protocol there are five different kinds of incoming messages to the application server: register,
call, incomingCallResponse, onIceCandidate and stop. These messages are processed in the switch
clause, taking the proper steps in each case.
public class CallHandler extends TextWebSocketHandler {
private static final Logger log = LoggerFactory.getLogger(CallHandler.class);
private static final Gson gson = new GsonBuilder().create();
[...]
case "onIceCandidate": {
JsonObject candidate = jsonMessage.get("candidate").getAsJsonObject();
UserSession user = registry.getBySession(session);
if (user != null) {
IceCandidate cand = new IceCandidate(candidate.get("candidate").getAsString(),
candidate.get("sdpMid").getAsString(), candidate.get("sdpMLineIndex").getAsInt());
user.addCandidate(cand);
}
break;
}
case "stop":
stop(session);
break;
default:
break;
}
}
private void handleErrorResponse(Throwable t, WebSocketSession session,
String responseId) throws IOException {
stop(session);
log.error(t.getMessage(), t);
JsonObject response = new JsonObject();
response.addProperty("id", responseId);
response.addProperty("response", "rejected");
response.addProperty("message", t.getMessage());
session.sendMessage(new TextMessage(response.toString()));
}
private void register(WebSocketSession session, JsonObject jsonMessage) throws IOException {
...
}
private void call(UserSession caller, JsonObject jsonMessage) throws IOException {
...
}
@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
stop(session);
registry.removeBySession(session);
}
}
In the following snippet, we can see the register method. Basically, it obtains the name attribute from the register
message and checks whether there is already a registered user with that name. If not, the new user is registered and an
acceptance message is sent to it.
private void register(WebSocketSession session, JsonObject jsonMessage) throws IOException {
String name = jsonMessage.getAsJsonPrimitive("name").getAsString();
UserSession caller = new UserSession(session, name);
String responseMsg = "accepted";
if (name.isEmpty()) {
responseMsg = "rejected: empty user name";
} else if (registry.exists(name)) {
responseMsg = "rejected: user '" + name + "' already registered";
} else {
registry.register(caller);
}
JsonObject response = new JsonObject();
response.addProperty("id", "resgisterResponse");
response.addProperty("response", responseMsg);
caller.sendMessage(response);
}
In the call method, the server checks if there is a registered user with the name specified in the to message attribute,
and sends an incomingCall message to it. If there is no user with that name, a callResponse message is sent to the
caller rejecting the call.
private void call(UserSession caller, JsonObject jsonMessage) throws IOException {
String to = jsonMessage.get("to").getAsString();
String from = jsonMessage.get("from").getAsString();
JsonObject response = new JsonObject();
if (registry.exists(to)) {
UserSession callee = registry.getByName(to);
caller.setSdpOffer(jsonMessage.getAsJsonPrimitive("sdpOffer").getAsString());
caller.setCallingTo(to);
response.addProperty("id", "incomingCall");
response.addProperty("from", from);
callee.sendMessage(response);
callee.setCallingFrom(from);
} else {
response.addProperty("id", "callResponse");
response.addProperty("response", "rejected: user '" + to + "' is not registered");
caller.sendMessage(response);
}
}
The stop method ends the video call. It can be called both by the caller and the callee in the communication. The result is
that both peers release the Media Pipeline and end the video communication:
public void stop(WebSocketSession session) throws IOException {
String sessionId = session.getId();
if (pipelines.containsKey(sessionId)) {
CallMediaPipeline pipeline = pipelines.remove(sessionId);
pipeline.release();
// Both users can stop the communication. A 'stopCommunication'
// message will be sent to the other peer.
UserSession stopperUser = registry.getBySession(session);
if (stopperUser != null) {
UserSession stoppedUser = (stopperUser.getCallingFrom() != null)
? registry.getByName(stopperUser.getCallingFrom())
: stopperUser.getCallingTo() != null
? registry.getByName(stopperUser.getCallingTo())
: null;
if (stoppedUser != null) {
JsonObject message = new JsonObject();
message.addProperty("id", "stopCommunication");
stoppedUser.sendMessage(message);
stoppedUser.clear();
}
stopperUser.clear();
}
}
}
In the incomingCallResponse method, if the callee user accepts the call, it is established and the media elements are created to connect the caller with the callee in a B2B manner. Basically, the server creates a
CallMediaPipeline object to encapsulate the media pipeline creation and management. Then, this object is
used to negotiate the media interchange with the users' browsers.
The negotiation between the WebRTC peer in the browser and the WebRtcEndpoint in Kurento Media Server is made
by means of SDP generation at the client (offer) and SDP generation at the server (answer). The SDP answers are generated with the Kurento Java Client inside the class CallMediaPipeline (as we will see in a moment). The methods used to generate SDP are generateSdpAnswerForCallee(calleeSdpOffer) and
generateSdpAnswerForCaller(callerSdpOffer):
calleer.setWebRtcEndpoint(pipeline.getCallerWebRtcEP());
pipeline.getCallerWebRtcEP().addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (calleer.getSession()) {
calleer.getSession().sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
String callerSdpAnswer = pipeline.generateSdpAnswerForCaller(callerSdpOffer);
JsonObject startCommunication = new JsonObject();
startCommunication.addProperty("id", "startCommunication");
startCommunication.addProperty("sdpAnswer", calleeSdpAnswer);
synchronized (callee) {
callee.sendMessage(startCommunication);
}
pipeline.getCalleeWebRtcEP().gatherCandidates();
JsonObject response = new JsonObject();
response.addProperty("id", "callResponse");
response.addProperty("response", "accepted");
response.addProperty("sdpAnswer", callerSdpAnswer);
synchronized (calleer) {
calleer.sendMessage(response);
}
pipeline.getCallerWebRtcEP().gatherCandidates();
} catch (Throwable t) {
log.error(t.getMessage(), t);
if (pipeline != null) {
pipeline.release();
}
pipelines.remove(calleer.getSessionId());
pipelines.remove(callee.getSessionId());
JsonObject response = new JsonObject();
response.addProperty("id", "callResponse");
response.addProperty("response", "rejected");
calleer.sendMessage(response);
response = new JsonObject();
response.addProperty("id", "stopCommunication");
callee.sendMessage(response);
}
} else {
JsonObject response = new JsonObject();
response.addProperty("id", "callResponse");
response.addProperty("response", "rejected");
calleer.sendMessage(response);
}
}
The media logic in this demo is implemented in the class CallMediaPipeline. As you can see, the media pipeline
of this demo is quite simple: two WebRtcEndpoint elements directly interconnected. Please take note that the
WebRtcEndpoints need to be connected twice, one for each media direction.
public class CallMediaPipeline {
private MediaPipeline pipeline;
private WebRtcEndpoint callerWebRtcEP;
private WebRtcEndpoint calleeWebRtcEP;
public CallMediaPipeline(KurentoClient kurento) {
try {
this.pipeline = kurento.createMediaPipeline();
this.callerWebRtcEP = new WebRtcEndpoint.Builder(pipeline).build();
this.calleeWebRtcEP = new WebRtcEndpoint.Builder(pipeline).build();
this.callerWebRtcEP.connect(this.calleeWebRtcEP);
this.calleeWebRtcEP.connect(this.callerWebRtcEP);
} catch (Throwable t) {
if (this.pipeline != null) {
pipeline.release();
}
}
}
public String generateSdpAnswerForCaller(String sdpOffer) {
return callerWebRtcEP.processOffer(sdpOffer);
}
public String generateSdpAnswerForCallee(String sdpOffer) {
return calleeWebRtcEP.processOffer(sdpOffer);
}
public void release() {
if (pipeline != null) {
pipeline.release();
}
}
public WebRtcEndpoint getCallerWebRtcEP() {
return callerWebRtcEP;
}
public WebRtcEndpoint getCalleeWebRtcEP() {
return calleeWebRtcEP;
}
}
10.1.4 Client-Side
Let's move now to the client-side of the application. To call the previously created WebSocket service on the server-side, we use the JavaScript class WebSocket. We also use a specific Kurento JavaScript library called kurento-utils.js
to simplify the WebRTC interaction with the server. This library depends on adapter.js, a JavaScript WebRTC utility maintained by Google that abstracts away browser differences. Finally, jquery.js is also needed in this
application.
These libraries are linked in the index.html web page, and are used in index.js.
In the following snippet we can see the creation of the WebSocket (variable ws) in the path /call. Then, the
onmessage listener of the WebSocket is used to implement the JSON signaling protocol on the client-side. Notice that there are five incoming messages to the client: resgisterResponse, callResponse, incomingCall,
iceCandidate and startCommunication. Convenient actions are taken to implement each step in the communication. For example, in the functions call and incomingCall (for the caller and callee respectively), the method
WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js is used to start a WebRTC communication.
var ws = new WebSocket('ws://' + location.host + '/call');
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'resgisterResponse':
resgisterResponse(parsedMessage);
break;
case 'callResponse':
callResponse(parsedMessage);
break;
case 'incomingCall':
incomingCall(parsedMessage);
break;
case 'startCommunication':
startCommunication(parsedMessage);
break;
case 'stopCommunication':
console.info("Communication ended by remote peer");
stop(true);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function (error) {
if (!error) return;
console.error("Error adding candidate: " + error);
});
break;
default:
console.error('Unrecognized message', parsedMessage);
}
}
function incomingCall(message) {
// If busy, just reject without disturbing the user
if (callState != NO_CALL) {
var response = {
id : 'incomingCallResponse',
from : message.from,
callResponse : 'reject',
message : 'bussy'
};
return sendMessage(response);
}
setCallState(PROCESSING_CALL);
if (confirm('User ' + message.from
+ ' is calling you. Do you accept the call?')) {
showSpinner(videoInput, videoOutput);
from = message.from;
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate: onIceCandidate,
onerror: onError
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function (error) {
if(error) {
return console.error(error);
}
webRtcPeer.generateOffer (onOfferIncomingCall);
});
} else {
var response = {
id : 'incomingCallResponse',
from : message.from,
callResponse : 'reject',
message : 'user declined'
};
sendMessage(response);
stop();
}
}
function call() {
if (document.getElementById('peer').value == '') {
window.alert("You must specify the peer name");
return;
}
setCallState(PROCESSING_CALL);
showSpinner(videoInput, videoOutput);
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate: onIceCandidate,
onerror: onError
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function (error) {
if(error) {
return console.error(error);
}
webRtcPeer.generateOffer(onOfferCall);
});
}
10.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, adapter.js, and draggabilly) are handled with Bower. These
dependencies are defined in the file bower.json. The command bower install is automatically called from Maven.
Thus, Bower should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-one2one-call
git checkout 6.5.0
npm install
npm start
If you have problems installing any of the dependencies, please remove them and clean the npm cache, and try to
install them again:
rm -r node_modules
npm cache clean
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC-capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine by simply adding the argument ws_uri to the
npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
sends to the Kurento Media Server, which forwards it to A. This means that KMS is providing a B2B (back-to-back) call
service.
To implement this behavior, create a Media Pipeline composed of two WebRtcEndpoints connected in B2B. The
implemented media pipeline is illustrated in the following picture:
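The back-to-back wiring just described can be sketched with the Kurento JavaScript client as follows. This is a sketch only, under the assumption of a connected kurentoClient: the create and connect calls mirror those used elsewhere in this tutorial, while the helper name createB2BPipeline and the reduced error handling are illustrative choices, not the tutorial's exact listing.

```javascript
// Create a pipeline holding two WebRtcEndpoints and connect them in
// both directions, so each peer receives the other peer's stream.
function createB2BPipeline(kurentoClient, callback) {
    kurentoClient.create('MediaPipeline', function(error, pipeline) {
        if (error) return callback(error);
        pipeline.create('WebRtcEndpoint', function(error, callerWebRtc) {
            if (error) return callback(error);
            pipeline.create('WebRtcEndpoint', function(error, calleeWebRtc) {
                if (error) return callback(error);
                // One connect() call per media direction.
                callerWebRtc.connect(calleeWebRtc, function(error) {
                    if (error) return callback(error);
                    calleeWebRtc.connect(callerWebRtc, function(error) {
                        if (error) return callback(error);
                        callback(null, pipeline, callerWebRtc, calleeWebRtc);
                    });
                });
            });
        });
    });
}
```

Note the two connect calls: a single connect only establishes one media direction, so a bidirectional call needs both.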
case 'incomingCallResponse':
incomingCallResponse(sessionId, message.from, message.callResponse, message.sdpOffer, ws)
break;
case 'stop':
stop(sessionId);
break;
case 'onIceCandidate':
onIceCandidate(sessionId, message.candidate);
break;
default:
ws.send(JSON.stringify({
id : 'error',
message : 'Invalid message ' + message
}));
break;
}
});
});
In order to perform a call, each user (the caller and the callee) must be registered in the system. For this reason, on the
server-side there is a class named UserRegistry to store and locate users. The register message then fires the
execution of the following function:
// Represents registrar of users
function UserRegistry() {
this.usersById = {};
this.usersByName = {};
}
UserRegistry.prototype.register = function(user) {
this.usersById[user.id] = user;
this.usersByName[user.name] = user;
}
UserRegistry.prototype.unregister = function(id) {
var user = this.getById(id);
if (user) delete this.usersById[id]
if (user && this.getByName(user.name)) delete this.usersByName[user.name];
}
UserRegistry.prototype.getById = function(id) {
return this.usersById[id];
}
UserRegistry.prototype.getByName = function(name) {
return this.usersByName[name];
}
UserRegistry.prototype.removeById = function(id) {
var userSession = this.usersById[id];
if (!userSession) return;
delete this.usersById[id];
delete this.usersByName[userSession.name];
}
function register(id, name, ws, callback) {
function onError(error) {
ws.send(JSON.stringify({id:'registerResponse', response : 'rejected ', message: error}));
}
if (!name) {
return onError("empty user name");
}
if (userRegistry.getByName(name)) {
return onError("User " + name + " is already registered");
}
userRegistry.register(new UserSession(id, name, ws));
try {
ws.send(JSON.stringify({id: 'registerResponse', response: 'accepted'}));
} catch(exception) {
onError(exception);
}
}
In order to control the media capabilities provided by the Kurento Media Server, we need an instance of the KurentoClient in the Node application server. To create this instance, we need to specify to the client library the
location of the Kurento Media Server. In this example, we assume it is located at localhost, listening on port 8888.
var kurento = require('kurento-client');
var kurentoClient = null;
var argv = minimist(process.argv.slice(2), {
default: {
as_uri: 'https://localhost:8443/',
ws_uri: 'ws://localhost:8888/kurento'
}
});
[...]
function getKurentoClient(callback) {
if (kurentoClient !== null) {
return callback(null, kurentoClient);
}
kurento(argv.ws_uri, function(error, _kurentoClient) {
if (error) {
console.log("Could not find media server at address " + argv.ws_uri);
return callback("Could not find media server at address " + argv.ws_uri
+ ". Exiting with error " + error);
}
kurentoClient = _kurentoClient;
callback(null, kurentoClient);
});
}
Once the Kurento Client has been instantiated, you are ready to communicate with Kurento Media Server. Our first
operation is to create a Media Pipeline; then we need to create the Media Elements and connect them. In this example,
we need two WebRtcEndpoints: one for the caller and another for the callee. This media logic is implemented in
the class CallMediaPipeline. Note that the WebRtcEndpoints need to be connected twice, once for each media
direction. This object is created in the function incomingCallResponse, which is fired in the callee peer after
the caller executes the function call:
function call(callerId, to, from, sdpOffer) {
clearCandidatesQueue(callerId);
var caller = userRegistry.getById(callerId);
var rejectCause = 'User ' + to + ' is not registered';
if (userRegistry.getByName(to)) {
var callee = userRegistry.getByName(to);
caller.sdpOffer = sdpOffer
callee.peer = from;
caller.peer = to;
var message = {
id: 'incomingCall',
from: from
};
try{
return callee.sendMessage(message);
} catch(exception) {
rejectCause = "Error " + exception;
}
}
var message = {
id: 'callResponse',
response: 'rejected',
message: rejectCause
};
caller.sendMessage(message);
}
function incomingCallResponse(calleeId, from, callResponse, calleeSdp, ws) {
clearCandidatesQueue(calleeId);
function onError(callerReason, calleeReason) {
if (pipeline) pipeline.release();
if (caller) {
var callerMessage = {
id: 'callResponse',
response: 'rejected'
}
if (callerReason) callerMessage.message = callerReason;
caller.sendMessage(callerMessage);
}
var calleeMessage = {
id: 'stopCommunication'
};
if (calleeReason) calleeMessage.message = calleeReason;
callee.sendMessage(calleeMessage);
}
var callee = userRegistry.getById(calleeId);
if (!from || !userRegistry.getByName(from)) {
return onError(null, 'unknown from = ' + from);
}
var caller = userRegistry.getByName(from);
if (callResponse === 'accept') {
var pipeline = new CallMediaPipeline();
pipelines[caller.id] = pipeline;
pipelines[callee.id] = pipeline;
pipeline.createPipeline(caller.id, callee.id, ws, function(error) {
if (error) {
return onError(error, error);
}
pipeline.generateSdpAnswer(caller.id, caller.sdpOffer, function(error, callerSdpAnswer) {
if (error) {
return onError(error, error);
}
pipeline.generateSdpAnswer(callee.id, calleeSdp, function(error, calleeSdpAnswer) {
if (error) {
return onError(error, error);
}
var message = {
id: 'startCommunication',
sdpAnswer: calleeSdpAnswer
};
callee.sendMessage(message);
message = {
id: 'callResponse',
response: 'accepted',
sdpAnswer: callerSdpAnswer
};
caller.sendMessage(message);
});
});
});
} else {
var decline = {
id: 'callResponse',
response: 'rejected',
message: 'user declined'
};
caller.sendMessage(decline);
}
}
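The CallMediaPipeline wiring mentioned above (two WebRtcEndpoints connected once per media direction) can be sketched without a media server. The real class uses the Kurento client's connect() calls; here the graph is modeled with plain objects purely for illustration:

```javascript
// Bidirectional endpoint wiring sketch: each endpoint must be connected to
// the other once per media direction.
function Endpoint(name) {
    this.name = name;
    this.sinks = [];
}
Endpoint.prototype.connect = function(other) {
    this.sinks.push(other.name);
};

var callerEndpoint = new Endpoint('caller');
var calleeEndpoint = new Endpoint('callee');
callerEndpoint.connect(calleeEndpoint); // caller -> callee media
calleeEndpoint.connect(callerEndpoint); // callee -> caller media

console.log(callerEndpoint.sinks); // [ 'callee' ]
console.log(calleeEndpoint.sinks); // [ 'caller' ]
```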
As of Kurento Media Server 6.0, the WebRTC negotiation is done by exchanging ICE candidates between the WebRTC peers. To implement this protocol, the webRtcEndpoint receives candidates from the client in the onIceCandidate function. These candidates are stored in a queue while the webRtcEndpoint is not available yet, and are later added to the media element by calling the addIceCandidate method.
var candidatesQueue = {};
[...]
function onIceCandidate(sessionId, _candidate) {
var candidate = kurento.register.complexTypes.IceCandidate(_candidate);
var user = userRegistry.getById(sessionId);
if (pipelines[user.id]) {
var pipeline = pipelines[user.id];
pipeline.addIceCandidate(user.id, candidate);
} else {
if (!candidatesQueue[user.id]) {
candidatesQueue[user.id] = [];
}
candidatesQueue[sessionId].push(candidate);
}
}
10.2.4 Client-Side
Let's move now to the client-side of the application. In the following snippet we can see the creation of the WebSocket (variable ws) in the path /one2one. Then, the onmessage listener of the WebSocket is used to implement the JSON signaling protocol in the client-side. Notice that there are six incoming messages to the client: registerResponse, callResponse, incomingCall, startCommunication, stopCommunication and iceCandidate. Convenient actions are taken to implement each step in the communication. For example, in the functions call and incomingCall the function WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js is used to start a WebRTC communication.
var ws = new WebSocket('ws://' + location.host + '/one2one');
var webRtcPeer;
[...]
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'registerResponse':
resgisterResponse(parsedMessage);
break;
case 'callResponse':
callResponse(parsedMessage);
break;
case 'incomingCall':
incomingCall(parsedMessage);
break;
case 'startCommunication':
startCommunication(parsedMessage);
break;
case 'stopCommunication':
console.info("Communication ended by remote peer");
stop(true);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate)
break;
default:
console.error('Unrecognized message', parsedMessage);
}
}
On the one hand, the function call is executed on the caller client side, using the method WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js to start a WebRTC communication in duplex mode. On the other hand, the function incomingCall on the callee client side also uses the method WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js to complete the WebRTC call.
function call() {
if (document.getElementById('peer').value == '') {
window.alert("You must specify the peer name");
return;
}
setCallState(PROCESSING_CALL);
showSpinner(videoInput, videoOutput);
var options = {
localVideo : videoInput,
remoteVideo : videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function(error) {
if (error) {
return console.error(error);
}
this.generateOffer(onOfferCall);
});
}

function incomingCall(message) {
if (confirm('User ' + message.from + ' is calling you. Do you accept the call?')) {
showSpinner(videoInput, videoOutput);
var options = {
localVideo : videoInput,
remoteVideo : videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function(error) {
if (error) {
return console.error(error);
}
this.generateOffer(function(error, offerSdp) {
if (error) {
return console.error(error);
}
var response = {
id : 'incomingCallResponse',
from : message.from,
callResponse : 'accept',
sdpOffer : offerSdp
};
sendMessage(response);
});
});
} else {
var response = {
id : 'incomingCallResponse',
from : message.from,
callResponse : 'reject',
message : 'user declined'
};
sendMessage(response);
stop(true);
}
}
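The caller/callee exchange implemented by the snippets above can be summarized with an in-memory message router. This is only a sketch of the signaling flow (call, incomingCall, incomingCallResponse, startCommunication, callResponse), not the real WebSocket server:

```javascript
// In-memory signaling sketch: the "server" relays protocol messages between
// a fake caller inbox and a fake callee inbox.
var inbox = { caller: [], callee: [] };

function serverReceive(message) {
    if (message.id === 'call') {
        // ring the callee
        inbox.callee.push({ id: 'incomingCall', from: message.from });
    } else if (message.id === 'incomingCallResponse') {
        if (message.callResponse === 'accept') {
            inbox.callee.push({ id: 'startCommunication' });
            inbox.caller.push({ id: 'callResponse', response: 'accepted' });
        } else {
            inbox.caller.push({ id: 'callResponse', response: 'rejected' });
        }
    }
}

serverReceive({ id: 'call', from: 'alice', to: 'bob' });
console.log(inbox.callee[0].id);        // 'incomingCall'
serverReceive({ id: 'incomingCallResponse', from: 'alice', callResponse: 'accept' });
console.log(inbox.callee[1].id);        // 'startCommunication'
console.log(inbox.caller[0].response);  // 'accepted'
```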
10.2.5 Dependencies
Server-side dependencies of this demo are managed using npm. Our main dependency is the Kurento Client JavaScript
(kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
[...]
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the following section:
"dependencies": {
[...]
"kurento-utils" : "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at npm and Bower.
CHAPTER 11
This is an enhanced version of the one-to-one application, with video recording and augmented reality.
The web application starts on port 8443 in the localhost by default. Therefore, open the URL https://localhost:8443/
in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. However, it is possible to connect to a remote KMS on another machine, simply adding the flag kms.url to the JVM executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
The following picture shows a screenshot of this demo running in a web browser:
Chapter 11. WebRTC one-to-one video call with recording and filtering
Fig. 11.2: Advanced one to one video call media pipeline (1)
Fig. 11.3: Advanced one to one video call media pipeline (2)
Fig. 11.5: Server-side class diagram of the advanced one to one video call app
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate client and server by means of requests and responses. Specifically, the main app class implements the interface WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path /call.
The CallHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece of this class is the method handleTextMessage. This method implements the actions for requests, returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in the previous sequence diagram.
In the designed protocol there are five different kinds of incoming messages to the server: register, call, incomingCallResponse, onIceCandidate and play. These messages are handled in the switch clause, taking the proper steps in each case.
public class CallHandler extends TextWebSocketHandler {
private static final Logger log = LoggerFactory
.getLogger(CallHandler.class);
In the following snippet, we can see the register method. Basically, it obtains the name attribute from the register message and checks whether a user with that name is already registered. If not, the new user is registered and an acceptance message is sent to it.
private void register(WebSocketSession session, JsonObject jsonMessage)
throws IOException {
String name = jsonMessage.getAsJsonPrimitive("name").getAsString();
UserSession caller = new UserSession(session, name);
String responseMsg = "accepted";
if (name.isEmpty()) {
responseMsg = "rejected: empty user name";
} else if (registry.exists(name)) {
responseMsg = "rejected: user '" + name + "' already registered";
} else {
registry.register(caller);
}
JsonObject response = new JsonObject();
response.addProperty("id", "resgisterResponse");
response.addProperty("response", responseMsg);
caller.sendMessage(response);
}
In the call method, the server checks if there is a registered user with the name specified in the to message attribute, and sends an incomingCall message to it. Or, if there isn't any user with that name, a callResponse message is sent to the caller rejecting the call.
private void call(UserSession caller, JsonObject jsonMessage)
throws IOException {
String to = jsonMessage.get("to").getAsString();
String from = jsonMessage.get("from").getAsString();
JsonObject response = new JsonObject();
if (registry.exists(to)) {
UserSession callee = registry.getByName(to);
caller.setSdpOffer(jsonMessage.getAsJsonPrimitive("sdpOffer")
.getAsString());
caller.setCallingTo(to);
response.addProperty("id", "incomingCall");
response.addProperty("from", from);
callee.sendMessage(response);
callee.setCallingFrom(from);
} else {
response.addProperty("id", "callResponse");
response.addProperty("response", "rejected");
response.addProperty("message", "user '" + to
+ "' is not registered");
caller.sendMessage(response);
}
}
In the incomingCallResponse method, if the callee user accepts the call, it is established and the media elements are created to connect the caller with the callee. Basically, the server creates a CallMediaPipeline object to encapsulate the media pipeline creation and management. Then, this object is used to negotiate media interchange with the users' browsers.
As explained in the Magic Mirror tutorial, the negotiation between the WebRTC peer in the browser and the WebRtcEndpoint in Kurento Media Server is made by means of SDP generation at the client (offer) and SDP generation at the server (answer). The SDP answers are generated with the Kurento Java Client inside the class CallMediaPipeline (as we will see in a moment). The methods used to generate SDP are generateSdpAnswerForCallee(calleeSdpOffer) and generateSdpAnswerForCaller(callerSdpOffer):
private void incomingCallResponse(final UserSession callee,
JsonObject jsonMessage) throws IOException {
String callResponse = jsonMessage.get("callResponse").getAsString();
String from = jsonMessage.get("from").getAsString();
final UserSession calleer = registry.getByName(from);
String to = calleer.getCallingTo();
if ("accept".equals(callResponse)) {
log.debug("Accepted call from '{}' to '{}'", from, to);
CallMediaPipeline callMediaPipeline = new CallMediaPipeline(
kurento, from, to);
pipelines.put(calleer.getSessionId(),
callMediaPipeline.getPipeline());
pipelines.put(callee.getSessionId(),
callMediaPipeline.getPipeline());
String calleeSdpOffer = jsonMessage.get("sdpOffer").getAsString();
String calleeSdpAnswer = callMediaPipeline
.generateSdpAnswerForCallee(calleeSdpOffer);
callee.setWebRtcEndpoint(callMediaPipeline.getCalleeWebRtcEP());
callMediaPipeline.getCalleeWebRtcEP().addOnIceCandidateListener(
new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils
.toJsonObject(event.getCandidate()));
try {
synchronized (callee.getSession()) {
callee.getSession()
.sendMessage(
new TextMessage(response
.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
JsonObject startCommunication = new JsonObject();
startCommunication.addProperty("id", "startCommunication");
startCommunication.addProperty("sdpAnswer", calleeSdpAnswer);
synchronized (callee) {
callee.sendMessage(startCommunication);
}
callMediaPipeline.getCalleeWebRtcEP().gatherCandidates();
String callerSdpOffer = registry.getByName(from).getSdpOffer();
calleer.setWebRtcEndpoint(callMediaPipeline.getCallerWebRtcEP());
callMediaPipeline.getCallerWebRtcEP().addOnIceCandidateListener(
new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils
.toJsonObject(event.getCandidate()));
try {
synchronized (calleer.getSession()) {
calleer.getSession()
.sendMessage(
new TextMessage(response
.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
String callerSdpAnswer = callMediaPipeline
.generateSdpAnswerForCaller(callerSdpOffer);
JsonObject response = new JsonObject();
response.addProperty("id", "callResponse");
response.addProperty("response", "accepted");
response.addProperty("sdpAnswer", callerSdpAnswer);
synchronized (calleer) {
calleer.sendMessage(response);
}
callMediaPipeline.getCallerWebRtcEP().gatherCandidates();
callMediaPipeline.record();
} else {
JsonObject response = new JsonObject();
response.addProperty("id", "callResponse");
response.addProperty("response", "rejected");
calleer.sendMessage(response);
}
}
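Both OnIceCandidate listeners above do the same job: each candidate gathered by KMS is wrapped in a JSON message with id "iceCandidate" and relayed to the corresponding browser over the WebSocket. The message shape can be sketched like this (the candidate fields shown are illustrative):

```javascript
// Build the iceCandidate relay message sent from the server to the browser.
function iceCandidateMessage(candidate) {
    return JSON.stringify({ id: 'iceCandidate', candidate: candidate });
}

var raw = iceCandidateMessage({
    candidate: 'candidate:1 1 UDP 2013266431 192.168.1.10 54321 typ host',
    sdpMid: 'video',
    sdpMLineIndex: 0
});
var parsed = JSON.parse(raw);
console.log(parsed.id);               // 'iceCandidate'
console.log(parsed.candidate.sdpMid); // 'video'
```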
Finally, the play method instantiates a PlayMediaPipeline object, which creates the Media Pipeline in charge of the playback of the recorded streams in the Kurento Media Server.
private void play(final UserSession session, JsonObject jsonMessage)
throws IOException {
String user = jsonMessage.get("user").getAsString();
log.debug("Playing recorded call of user '{}'", user);
JsonObject response = new JsonObject();
response.addProperty("id", "playResponse");
if (registry.getByName(user) != null
&& registry.getBySession(session.getSession()) != null) {
final PlayMediaPipeline playMediaPipeline = new PlayMediaPipeline(
kurento, user, session.getSession());
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
session.setPlayingWebRtcEndpoint(playMediaPipeline.getWebRtc());
playMediaPipeline.getPlayer().addEndOfStreamListener(
new EventListener<EndOfStreamEvent>() {
@Override
public void onEvent(EndOfStreamEvent event) {
UserSession user = registry
.getBySession(session.getSession());
releasePipeline(user);
playMediaPipeline.sendPlayEnd(session.getSession());
}
});
playMediaPipeline.getWebRtc().addOnIceCandidateListener(
new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils
.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.getSession()
.sendMessage(
new TextMessage(response
.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
String sdpAnswer = playMediaPipeline.generateSdpAnswer(sdpOffer);
response.addProperty("response", "accepted");
response.addProperty("sdpAnswer", sdpAnswer);
playMediaPipeline.play();
pipelines.put(session.getSessionId(),
playMediaPipeline.getPipeline());
synchronized (session.getSession()) {
session.sendMessage(response);
}
playMediaPipeline.getWebRtc().gatherCandidates();
} else {
response.addProperty("response", "rejected");
response.addProperty("error", "No recording for user '" + user
+ "'. Please type a correct user in the 'Peer' field.");
session.getSession().sendMessage(
new TextMessage(response.toString()));
}
}
The media logic in this demo is implemented in the classes CallMediaPipeline and PlayMediaPipeline. The first media pipeline consists of two WebRtcEndpoint elements interconnected with a FaceOverlayFilter in between, and also with a RecorderEndpoint to carry out the recording of the WebRTC communication. Please note that the WebRtc endpoints need to be connected twice, once for each media direction. In this class we can see the implementation of the methods generateSdpAnswerForCaller and generateSdpAnswerForCallee. These methods delegate to the WebRtc endpoints to create the appropriate answer.
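One possible wiring consistent with the description above can be written down as a plain list of (source, sink) pairs. This is only an illustration of the topology; the exact element order is defined by the Java class CallMediaPipeline, which is not reproduced here in full:

```javascript
// Connection-graph sketch of the advanced pipeline: filter in one media
// direction, direct connection in the other, plus one recorder per stream.
var connections = [
    ['webRtcCaller', 'faceOverlayFilter'],  // caller media through the filter
    ['faceOverlayFilter', 'webRtcCallee'],  // ... and on to the callee
    ['webRtcCallee', 'webRtcCaller'],       // reverse media direction
    ['webRtcCaller', 'recorderCaller'],     // record the caller stream
    ['webRtcCallee', 'recorderCallee']      // record the callee stream
];

// Every element that sends media appears as a source at least once:
var sources = connections.map(function(c) { return c[0]; });
console.log(sources.indexOf('webRtcCallee') !== -1); // true
console.log(connections.length);                     // 5
```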
final MediaPipeline pipeline;
final WebRtcEndpoint webRtcCaller;
final WebRtcEndpoint webRtcCallee;
final RecorderEndpoint recorderCaller;
final RecorderEndpoint recorderCallee;
Note: Notice that the hat URLs are provided by the application server and consumed by the KMS. This logic assumes that the application server is hosted locally (localhost), and by default the hat URLs are https://localhost:8443/img/mario-wings.png and https://localhost:8443/img/Hat.png. If your application server is hosted on a different host, this can easily be changed by means of the configuration parameter app.server.url, for example:
mvn compile exec:java -Dapp.server.url=https://app_server_host:app_server_port
}
});
}
public void sendPlayEnd(WebSocketSession session) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "playEnd");
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Error sending playEndOfStream message", e);
}
}
public void play() {
player.play();
}
public String generateSdpAnswer(String sdpOffer) {
return webRtc.processOffer(sdpOffer);
}
public MediaPipeline getPipeline() {
return pipeline;
}
public WebRtcEndpoint getWebRtc() {
return webRtc;
}
public PlayerEndpoint getPlayer() {
return player;
}
}
11.1.4 Client-Side
Lets move now to the client-side of the application. To call the previously created WebSocket service in the serverside, we use the JavaScript class WebSocket. We use a specific Kurento JavaScript library called kurento-utils.js
to simplify the WebRTC interaction with the server. This library depends on adapter.js, which is a JavaScript WebRTC utility maintained by Google that abstracts away browser differences. Finally jquery.js is also needed in this
application.
These libraries are linked in the index.html web page, and are used in the index.js.
In the following snippet we can see the creation of the WebSocket (variable ws) in the path /call. Then, the
onmessage listener of the WebSocket is used to implement the JSON signaling protocol in the client-side. Notice that there are six incoming messages to client: resgisterResponse, callResponse, incomingCall,
startCommunication, iceCandidate and play. Convenient actions are taken to implement each step in
the communication. On the one hand, in functions call and incomingCall (for caller and callee respectively),
the function WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js is used to start a WebRTC communication.
On the other hand, in the function play, the function WebRtcPeer.WebRtcPeerRecvonly is called, since the WebRtcEndpoint is used in receive-only mode.
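The onmessage dispatch described above can also be sketched as a handler table rather than a switch. The message ids mirror this tutorial's protocol; the handler bodies are stubs that merely report which step would run:

```javascript
// Handler-table sketch for the client-side signaling dispatch.
var handlers = {
    resgisterResponse:  function(m) { return 'register:' + m.response; },
    callResponse:       function(m) { return 'call:' + m.response; },
    incomingCall:       function(m) { return 'ring:' + m.from; },
    startCommunication: function(m) { return 'start'; },
    iceCandidate:       function(m) { return 'candidate'; },
    play:               function(m) { return 'play'; }
};

function dispatch(raw) {
    var message = JSON.parse(raw);
    var handler = handlers[message.id];
    return handler ? handler(message) : 'unknown:' + message.id;
}

console.log(dispatch(JSON.stringify({ id: 'incomingCall', from: 'alice' }))); // 'ring:alice'
console.log(dispatch(JSON.stringify({ id: 'noSuchMessage' })));               // 'unknown:noSuchMessage'
```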
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate: onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function (error) {
if(error) {
return console.error(error);
}
this.generateOffer (onOfferIncomingCall);
});
} else {
var response = {
id : 'incomingCallResponse',
from : message.from,
callResponse : 'reject',
message : 'user declined'
};
sendMessage(response);
stop();
}
}
function call() {
if (document.getElementById('peer').value == '') {
document.getElementById('peer').focus();
window.alert("You must specify the peer name");
return;
}
setCallState(DISABLED);
showSpinner(videoInput, videoOutput);
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate: onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function (error) {
if(error) {
return console.error(error);
}
this.generateOffer (onOfferCall);
});
}
function play() {
var peer = document.getElementById('peer').value;
if (peer == '') {
window.alert("You must insert the name of the user recording to be played (field 'Peer')");
document.getElementById('peer').focus();
return;
}
document.getElementById('videoSmall').style.display = 'none';
setCallState(DISABLED);
showSpinner(videoOutput);
var options = {
remoteVideo: videoOutput,
onicecandidate: onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function (error) {
if(error) {
return console.error(error);
}
this.generateOffer (onOfferPlay);
});
}
function stop(message) {
var stopMessageId = (callState == IN_CALL) ? 'stop' : 'stopPlay';
setCallState(POST_CALL);
if (webRtcPeer) {
webRtcPeer.dispose();
webRtcPeer = null;
if (!message) {
var message = {
id : stopMessageId
}
sendMessage(message);
}
}
hideSpinner(videoInput, videoOutput);
document.getElementById('videoSmall').style.display = 'block';
}
11.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, adapter.js, and draggabilly) are handled with Bower. These
dependencies are defined in the file bower.json. The command bower install is automatically called from Maven.
Thus, Bower should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
CHAPTER 12
This tutorial connects several participants to the same video conference. A group call will consist, on the media server side, of N*N WebRTC endpoints, where N is the number of clients connected to that conference.
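The N*N endpoint count can be made explicit: each of the N clients publishes through one WebRTC endpoint and subscribes to the other N-1 participants, so KMS hosts N endpoints per participant, N*N in total. A quick arithmetic sketch:

```javascript
// Endpoint count for a room of n participants, as described above.
function endpointsPerParticipant(n) {
    return 1 + (n - 1); // one sender plus (n - 1) receivers
}
function endpointsPerRoom(n) {
    return n * endpointsPerParticipant(n);
}

console.log(endpointsPerRoom(2)); // 4
console.log(endpointsPerRoom(4)); // 16
```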
Access the application connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. However, it is possible to connect to a remote KMS on another machine, simply adding the flag kms.url to the JVM executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
}
@Bean
public RoomManager roomManager() {
return new RoomManager();
}
@Bean
public CallHandler groupCallHandler() {
return new CallHandler();
}
@Bean
public KurentoClient kurentoClient() {
return KurentoClient.create(System.getProperty("kms.url", DEFAULT_KMS_WS_URI));
}
public static void main(String[] args) throws Exception {
SpringApplication.run(GroupCallApp.class, args);
}
@Override
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(groupCallHandler(), "/groupcall");
}
}
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate client and application server by means of requests and responses. Specifically, the main app class implements the interface WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path /groupcall.
The CallHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece of this class is the method handleTextMessage. This method implements the actions for requests, returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in the previous sequence diagram.
In the designed protocol there are four different kinds of incoming messages to the application server: joinRoom, receiveVideoFrom, leaveRoom and onIceCandidate. These messages are handled in the switch clause, taking the proper steps in each case.
public class CallHandler extends TextWebSocketHandler {
private static final Logger log = LoggerFactory.getLogger(CallHandler.class);
private static final Gson gson = new GsonBuilder().create();
@Autowired
private RoomManager roomManager;
@Autowired
private UserRegistry registry;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
final JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
final UserSession user = registry.getBySession(session);
if (user != null) {
log.debug("Incoming message from user '{}': {}", user.getName(), jsonMessage);
} else {
log.debug("Incoming message from new user: {}", jsonMessage);
}
switch (jsonMessage.get("id").getAsString()) {
case "joinRoom":
joinRoom(jsonMessage, session);
break;
case "receiveVideoFrom":
final String senderName = jsonMessage.get("sender").getAsString();
final UserSession sender = registry.getByName(senderName);
final String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
user.receiveVideoFrom(sender, sdpOffer);
break;
case "leaveRoom":
leaveRoom(user);
break;
case "onIceCandidate":
JsonObject candidate = jsonMessage.get("candidate").getAsJsonObject();
if (user != null) {
IceCandidate cand = new IceCandidate(candidate.get("candidate").getAsString(),
candidate.get("sdpMid").getAsString(), candidate.get("sdpMLineIndex").getAsInt());
user.addCandidate(cand, jsonMessage.get("name").getAsString());
}
break;
default:
break;
}
}
@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
...
}
private void joinRoom(JsonObject params, WebSocketSession session) throws IOException {
...
}
private void leaveRoom(UserSession user) throws IOException {
...
}
}
@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
UserSession user = registry.removeBySession(session);
roomManager.getRoom(user.getRoomName()).leave(user);
}
In the joinRoom method, the server checks if there is a registered room with the specified name, adds the user into this room, and registers the user.
The leaveRoom method finishes the video call for one user.
private void leaveRoom(UserSession user) throws IOException {
final Room room = roomManager.getRoom(user.getRoomName());
room.leave(user);
if (room.getParticipants().isEmpty()) {
roomManager.removeRoom(room);
}
}
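The room bookkeeping above (leave the room, then drop the room itself once it is empty) can be sketched with plain maps and arrays. Names here are illustrative, not the tutorial's Java classes:

```javascript
// Room bookkeeping sketch: empty rooms are removed from the manager, as the
// Java leaveRoom method does via RoomManager.removeRoom().
var rooms = {};

function joinRoom(roomName, userName) {
    if (!rooms[roomName]) rooms[roomName] = [];
    rooms[roomName].push(userName);
}

function leaveRoom(roomName, userName) {
    var participants = rooms[roomName] || [];
    rooms[roomName] = participants.filter(function(p) { return p !== userName; });
    if (rooms[roomName].length === 0) {
        delete rooms[roomName]; // last participant left: drop the room
    }
}

joinRoom('kurento', 'alice');
joinRoom('kurento', 'bob');
leaveRoom('kurento', 'alice');
console.log(rooms['kurento']);   // [ 'bob' ]
leaveRoom('kurento', 'bob');
console.log('kurento' in rooms); // false
```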
break;
case 'receiveVideoAnswer':
receiveVideoResponse(parsedMessage);
break;
case 'iceCandidate':
participants[parsedMessage.name].rtcPeer.addIceCandidate(parsedMessage.candidate, function (error) {
if (error) {
console.error("Error adding candidate: " + error);
return;
}
});
break;
default:
console.error('Unrecognized message', parsedMessage);
}
}
function register() {
name = document.getElementById('name').value;
var room = document.getElementById('roomName').value;
document.getElementById('room-header').innerText = 'ROOM ' + room;
document.getElementById('join').style.display = 'none';
document.getElementById('room').style.display = 'block';
var message = {
id : 'joinRoom',
name : name,
room : room,
}
sendMessage(message);
}
function onNewParticipant(request) {
receiveVideo(request.name);
}
function receiveVideoResponse(result) {
participants[result.name].rtcPeer.processAnswer (result.sdpAnswer, function (error) {
if (error) return console.error (error);
});
}
function callResponse(message) {
if (message.response != 'accepted') {
console.info('Call not accepted by peer. Closing call');
stop();
} else {
webRtcPeer.processAnswer(message.sdpAnswer, function (error) {
if (error) return console.error (error);
});
}
}
function onExistingParticipants(msg) {
var constraints = {
audio : true,
video : {
mandatory : {
maxWidth : 320,
maxFrameRate : 15,
minFrameRate : 15
}
}
};
console.log(name + " registered in room " + room);
var participant = new Participant(name);
participants[name] = participant;
var video = participant.getVideoElement();
var options = {
localVideo: video,
mediaConstraints: constraints,
onicecandidate: participant.onIceCandidate.bind(participant)
}
participant.rtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendonly(options,
function (error) {
if(error) {
return console.error(error);
}
this.generateOffer (participant.offerToReceiveVideo.bind(participant));
});
msg.data.forEach(receiveVideo);
}
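The join flow implemented by onExistingParticipants boils down to: the newcomer creates one sendonly peer for its own stream plus one recvonly peer per participant already in the room. A small sketch of that fan-out (helper name is hypothetical):

```javascript
// Peer fan-out sketch for a participant joining a room.
function peersToCreate(self, existingParticipants) {
    var peers = [{ name: self, mode: 'sendonly' }]; // publish own stream
    existingParticipants.forEach(function(participant) {
        peers.push({ name: participant, mode: 'recvonly' }); // subscribe
    });
    return peers;
}

var peers = peersToCreate('dave', ['alice', 'bob', 'carol']);
console.log(peers.length);  // 4
console.log(peers[0].mode); // 'sendonly'
console.log(peers[3].name); // 'carol'
```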
function leaveRoom() {
sendMessage({
id : 'leaveRoom'
});
for ( var key in participants) {
participants[key].dispose();
}
document.getElementById('join').style.display = 'block';
document.getElementById('room').style.display = 'none';
ws.close();
}
function receiveVideo(sender) {
var participant = new Participant(sender);
participants[sender] = participant;
var video = participant.getVideoElement();
var options = {
remoteVideo: video,
onicecandidate: participant.onIceCandidate.bind(participant)
}
participant.rtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function (error) {
if(error) {
return console.error(error);
}
this.generateOffer (participant.offerToReceiveVideo.bind(participant));
});
}
function onParticipantLeft(request) {
console.log('Participant ' + request.name + ' left');
var participant = participants[request.name];
participant.dispose();
delete participants[request.name];
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
12.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
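For context, the fragment above is only the dependencies object; it lives inside a complete bower.json file. A minimal file might look like the following (the name field and any additional entries such as the bootstrap, ekko-lightbox, and adapter.js dependencies mentioned earlier are illustrative, not taken from the demo):

```json
{
  "name": "kurento-tutorial",
  "dependencies": {
    "kurento-utils": "6.5.0"
  }
}
```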
CHAPTER 13
This tutorial detects and draws faces present in the webcam video. It connects two filters: KmsDetectFaces and KmsShowFaces.
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine by simply adding the flag kms.url to the JVM
executing the demo. As we will be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
Note: This demo needs the kms-datachannelexample module installed in the media server. That module is available
in the Kurento repositories, so it is possible to install it with:
}
@Override
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(handler(), "/metadata");
}
public static void main(String[] args) throws Exception {
new SpringApplication(MetadataApp.class).run(args);
}
}
This web application follows a Single Page Application architecture (SPA), and uses a WebSocket to communicate
the client with the application server by means of requests and responses. Specifically, the main app class implements the
interface WebSocketConfigurer to register a WebSocketHandler to process WebSocket requests in the path
/metadata.
MetadataHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central
piece of this class is the method handleTextMessage. This method implements the actions for requests, returning
responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in
the previous sequence diagram.
In the designed protocol there are three different kinds of incoming messages to the Server: start, stop and
onIceCandidate. These messages are treated in the switch clause, taking the proper steps in each case.
public class MetadataHandler extends TextWebSocketHandler {
private final Logger log = LoggerFactory.getLogger(MetadataHandler.class);
private static final Gson gson = new GsonBuilder().create();
private final ConcurrentHashMap<String, UserSession> users = new ConcurrentHashMap<>();
@Autowired
private KurentoClient kurento;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
log.debug("Incoming message: {}", jsonMessage);
switch (jsonMessage.get("id").getAsString()) {
case "start":
start(session, jsonMessage);
break;
case "stop": {
UserSession user = users.remove(session.getId());
if (user != null) {
user.release();
}
break;
}
case "onIceCandidate": {
JsonObject jsonCandidate = jsonMessage.get("candidate").getAsJsonObject();
UserSession user = users.get(session.getId());
if (user != null) {
IceCandidate candidate = new IceCandidate(jsonCandidate.get("candidate").getAsString(),
jsonCandidate.get("sdpMid").getAsString(),
jsonCandidate.get("sdpMLineIndex").getAsInt());
user.addCandidate(candidate);
}
break;
}
default:
sendError(session, "Invalid message with id " + jsonMessage.get("id").getAsString());
break;
}
}
private void start(final WebSocketSession session, JsonObject jsonMessage) {
...
}
private void sendError(WebSocketSession session, String message) {
...
}
}
In the following snippet, we can see the start method. It handles the ICE candidates gathering, creates a Media
Pipeline, creates the Media Elements (WebRtcEndpoint, KmsShowFaces and KmsDetectFaces) and makes
the connections among them. A startResponse message is sent back to the client with the SDP answer.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
try {
// User session
UserSession user = new UserSession();
MediaPipeline pipeline = kurento.createMediaPipeline();
user.setMediaPipeline(pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
user.setWebRtcEndpoint(webRtcEndpoint);
users.put(session.getId(), user);
// ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
// Media logic
KmsShowFaces showFaces = new KmsShowFaces.Builder(pipeline).build();
KmsDetectFaces detectFaces = new KmsDetectFaces.Builder(pipeline).build();
webRtcEndpoint.connect(detectFaces);
detectFaces.connect(showFaces);
showFaces.connect(webRtcEndpoint);
The sendError method is quite simple: it sends an error message to the client when an exception is caught in the
server-side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
startResponse(parsedMessage);
break;
case 'error':
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError("Error message from server: " + parsedMessage.message);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function(error) {
if (error) {
console.error("Error adding candidate: " + error);
return;
}
});
break;
default:
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Unrecognized message', parsedMessage);
}
}
function start() {
console.log("Starting video call ...")
// Disable start button
setState(I_AM_STARTING);
showSpinner(videoInput, videoOutput);
console.log("Creating WebRtcPeer and generating local sdp offer ...");
var options = {
localVideo : videoInput,
remoteVideo : videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function(error) {
if (error) {
return console.error(error);
}
webRtcPeer.generateOffer(onOffer);
});
}
function onOffer(error, offerSdp) {
if (error)
return console.error("Error generating the offer");
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'start',
sdpOffer : offerSdp
}
sendMessage(message);
}
function onError(error) {
console.error(error);
}
function onIceCandidate(candidate) {
console.log("Local candidate" + JSON.stringify(candidate));
var message = {
id : 'onIceCandidate',
candidate : candidate
};
sendMessage(message);
}
function startResponse(message) {
setState(I_CAN_STOP);
console.log("SDP answer received from server. Processing ...");
webRtcPeer.processAnswer(message.sdpAnswer, function(error) {
if (error)
return console.error(error);
});
}
function stop() {
console.log("Stopping video call ...");
setState(I_CAN_START);
if (webRtcPeer) {
webRtcPeer.dispose();
webRtcPeer = null;
var message = {
id : 'stop'
}
sendMessage(message);
}
hideSpinner(videoInput, videoOutput);
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
13.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
CHAPTER 14
This tutorial reads a file from disk and plays the video to WebRTC.
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine by simply adding the flag kms.url to the JVM
executing the demo. As we will be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
This is a web application, and therefore it follows a client-server architecture. At the client-side, the logic is implemented in JavaScript. At the server-side, we use a Spring-Boot based server application consuming the Kurento
Java Client API to control Kurento Media Server capabilities. All in all, the high-level architecture of this demo
is three-tier. To connect these entities, two WebSockets are used. First, a WebSocket is created between the client
and the application server to implement a custom signaling protocol. Second, another WebSocket is used to perform the
communication between the Kurento Java Client and the Kurento Media Server. This communication takes place
using the Kurento Protocol. For further information on it, please see this page of the documentation.
The following sections analyze in depth the server (Java) and client-side (JavaScript) code of this application. The
complete source code can be found in GitHub.
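As a rough illustration of the Kurento Protocol mentioned above, its messages are JSON-RPC 2.0 requests and responses exchanged over the second WebSocket. The following sketch builds one such request in JavaScript; the helper itself is hypothetical (it is not part of the demo code), although the jsonrpc, id, method and params fields follow the JSON-RPC 2.0 layout that the protocol uses:

```javascript
// Hypothetical sketch of a Kurento Protocol request (JSON-RPC 2.0).
// The "create" primitive and its params follow the shape described in
// the Kurento Protocol documentation; requestId handling is illustrative.
function buildCreateRequest(requestId, mediaType) {
  return JSON.stringify({
    jsonrpc: '2.0',           // fixed by the JSON-RPC 2.0 specification
    id: requestId,            // correlates the response with this request
    method: 'create',         // Kurento Protocol primitive
    params: { type: mediaType }
  });
}

// Example: request the creation of a Media Pipeline.
var request = buildCreateRequest(1, 'MediaPipeline');
console.log(request);
```

In the demo itself, the Kurento Java Client performs this exchange transparently; application code never builds these messages by hand.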
This web application follows a Single Page Application architecture (SPA), and uses a WebSocket to communicate
the client with the application server by means of requests and responses. Specifically, the main app class implements the
interface WebSocketConfigurer to register a WebSocketHandler to process WebSocket requests in the path
/player.
PlayerHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece
of this class is the method handleTextMessage. This method implements the actions for requests, returning
responses through the WebSocket. In other words, it implements the server part of the signaling protocol depicted in
the previous sequence diagram.
In the designed protocol, there are seven different kinds of incoming messages to the Server: start, stop, pause,
resume, doSeek, getPosition and onIceCandidate. These messages are treated in the switch clause,
taking the proper steps in each case.
public class PlayerHandler extends TextWebSocketHandler {
@Autowired
private KurentoClient kurento;
private final Logger log = LoggerFactory.getLogger(PlayerHandler.class);
private static final Gson gson = new GsonBuilder().create();
private final ConcurrentHashMap<String, UserSession> users =
new ConcurrentHashMap<>();
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
String sessionId = session.getId();
log.debug("Incoming message {} from sessionId {}", jsonMessage, sessionId);
try {
switch (jsonMessage.get("id").getAsString()) {
case "start":
start(session, jsonMessage);
break;
case "stop":
stop(sessionId);
break;
case "pause":
pause(sessionId);
break;
case "resume":
resume(session);
break;
case "doSeek":
doSeek(session, jsonMessage);
break;
case "getPosition":
getPosition(session);
break;
case "onIceCandidate":
onIceCandidate(sessionId, jsonMessage);
break;
default:
sendError(session, "Invalid message with id " + jsonMessage.get("id").getAsString());
break;
}
} catch (Throwable t) {
log.error("Exception handling message {} in sessionId {}", jsonMessage, sessionId, t);
sendError(session, t.getMessage());
}
}
In the following snippet, we can see the start method. It handles the ICE candidates gathering, creates a Media
Pipeline, creates the Media Elements (WebRtcEndpoint and PlayerEndpoint) and makes the connections
between them and plays the video. A startResponse message is sent back to the client with the SDP answer.
When the MediaConnected event is received, info about the video is retrieved and sent back to the client in a
videoInfo message.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
final UserSession user = new UserSession();
MediaPipeline pipeline = kurento.createMediaPipeline();
user.setMediaPipeline(pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
user.setWebRtcEndpoint(webRtcEndpoint);
String videourl = jsonMessage.get("videourl").getAsString();
final PlayerEndpoint playerEndpoint = new PlayerEndpoint.Builder(pipeline, videourl).build();
user.setPlayerEndpoint(playerEndpoint);
users.put(session.getId(), user);
playerEndpoint.connect(webRtcEndpoint);
// 2. WebRtcEndpoint
// ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
playerEndpoint.play();
}
The pause method retrieves the user associated with the current session, and invokes the pause method on the
PlayerEndpoint.
private void pause(String sessionId) {
UserSession user = users.get(sessionId);
if (user != null) {
user.getPlayerEndpoint().pause();
}
}
The resume method resumes playback on the PlayerEndpoint of the current user, sending back information about the
video so that the client side can refresh its stats.
private void resume(final WebSocketSession session) {
UserSession user = users.get(session.getId());
if (user != null) {
user.getPlayerEndpoint().play();
VideoInfo videoInfo = user.getPlayerEndpoint().getVideoInfo();
JsonObject response = new JsonObject();
response.addProperty("id", "videoInfo");
response.addProperty("isSeekable", videoInfo.getIsSeekable());
response.addProperty("initSeekable", videoInfo.getSeekableInit());
response.addProperty("endSeekable", videoInfo.getSeekableEnd());
response.addProperty("videoDuration", videoInfo.getDuration());
sendMessage(session, response.toString());
}
}
The doSeek method gets the user by sessionId, and calls the method setPosition of the PlayerEndpoint with the
new playing position. A seek message is sent back to the client if the seek fails.
private void doSeek(final WebSocketSession session, JsonObject jsonMessage) {
UserSession user = users.get(session.getId());
if (user != null) {
try {
user.getPlayerEndpoint().setPosition(jsonMessage.get("position").getAsLong());
} catch (KurentoException e) {
log.debug("The seek cannot be performed");
JsonObject response = new JsonObject();
response.addProperty("id", "seek");
response.addProperty("message", "Seek failed");
sendMessage(session, response.toString());
}
}
}
The getPosition method calls getPosition on the PlayerEndpoint of the current user. A position
message is sent back to the client with the current position of the video.
private void getPosition(final WebSocketSession session) {
UserSession user = users.get(session.getId());
if (user != null) {
long position = user.getPlayerEndpoint().getPosition();
JsonObject response = new JsonObject();
response.addProperty("id", "position");
response.addProperty("position", position);
sendMessage(session, response.toString());
}
}
The stop method is quite simple: it looks up the user by sessionId and stops the PlayerEndpoint. Finally, it
releases the media elements and removes the user from the list of active users.
private void stop(String sessionId) {
UserSession user = users.remove(sessionId);
if (user != null) {
user.release();
}
}
The sendError method is quite simple: it sends an error message to the client when an exception is caught in the
server-side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
case 'startResponse':
startResponse(parsedMessage);
break;
case 'error':
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Error message from server: ' + parsedMessage.message);
break;
case 'playEnd':
playEnd();
break;
case 'videoInfo':
showVideoData(parsedMessage);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function(error) {
if (error)
return console.error('Error adding candidate: ' + error);
});
break;
case 'seek':
console.log(parsedMessage.message);
break;
case 'position':
document.getElementById("videoPosition").value = parsedMessage.position;
break;
default:
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Unrecognized message', parsedMessage);
}
}
function start() {
// Disable start button
setState(I_AM_STARTING);
showSpinner(video);
var mode = $('input[name="mode"]:checked').val();
console.log('Creating WebRtcPeer in ' + mode + ' mode and generating local sdp offer ...');
// Video and audio by default
var userMediaConstraints = {
audio : true,
video : true
}
if (mode == 'video-only') {
userMediaConstraints.audio = false;
} else if (mode == 'audio-only') {
userMediaConstraints.video = false;
}
var options = {
remoteVideo : video,
mediaConstraints : userMediaConstraints,
onicecandidate : onIceCandidate
}
console.info('User media constraints: ' + JSON.stringify(userMediaConstraints));
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function(error) {
if (error)
return console.error(error);
webRtcPeer.generateOffer(onOffer);
});
}
function onOffer(error, offerSdp) {
if (error)
return console.error('Error generating the offer');
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'start',
sdpOffer : offerSdp,
videourl : document.getElementById('videourl').value
}
sendMessage(message);
}
function onError(error) {
console.error(error);
}
function onIceCandidate(candidate) {
console.log('Local candidate' + JSON.stringify(candidate));
var message = {
id : 'onIceCandidate',
candidate : candidate
}
sendMessage(message);
}
function startResponse(message) {
setState(I_CAN_STOP);
console.log('SDP answer received from server. Processing ...');
webRtcPeer.processAnswer(message.sdpAnswer, function(error) {
if (error)
return console.error(error);
});
}
function pause() {
togglePause()
console.log('Pausing video ...');
var message = {
id : 'pause'
}
sendMessage(message);
}
function resume() {
togglePause()
console.log('Resuming video ...');
var message = {
id : 'resume'
}
sendMessage(message);
}
function stop() {
console.log('Stopping video ...');
setState(I_CAN_START);
if (webRtcPeer) {
webRtcPeer.dispose();
webRtcPeer = null;
var message = {
id : 'stop'
}
sendMessage(message);
}
hideSpinner(video);
}
function playEnd() {
setState(I_CAN_START);
hideSpinner(video);
}
function doSeek() {
var message = {
id : 'doSeek',
position: document.getElementById("seekPosition").value
}
sendMessage(message);
}
function getPosition() {
var message = {
id : 'getPosition'
}
sendMessage(message);
}
function showVideoData(parsedMessage) {
//Show video info
isSeekable = parsedMessage.isSeekable;
if (isSeekable) {
document.getElementById('isSeekable').value = "true";
enableButton('#doSeek', 'doSeek()');
} else {
document.getElementById('isSeekable').value = "false";
}
document.getElementById('initSeek').value = parsedMessage.initSeekable;
document.getElementById('endSeek').value = parsedMessage.endSeekable;
document.getElementById('duration').value = parsedMessage.videoDuration;
enableButton('#getPosition', 'getPosition()');
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
14.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
CHAPTER 15
This tutorial injects video into a QR filter and then sends the stream to WebRTC. QR detection events are delivered by
means of WebRTC data channels, to be displayed in the browser.
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running on the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine by simply adding the flag kms.url to the JVM
executing the demo. As we will be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
Note: This demo needs the kms-datachannelexample module installed in the media server. That module is available
in the Kurento repositories, so it is possible to install it with:
}
@Override
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(handler(), "/senddatachannel");
}
public static void main(String[] args) throws Exception {
new SpringApplication(SendDataChannelApp.class).run(args);
}
}
This web application follows a Single Page Application architecture (SPA), and uses a WebSocket to communicate
the client with the application server by means of requests and responses. Specifically, the main app class implements the
interface WebSocketConfigurer to register a WebSocketHandler to process WebSocket requests in the path
/senddatachannel.
SendDataChannelHandler class implements TextWebSocketHandler to handle text WebSocket requests. The
central piece of this class is the method handleTextMessage. This method implements the actions for requests,
returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol
depicted in the previous sequence diagram.
In the designed protocol there are three different kinds of incoming messages to the Server: start, stop and
onIceCandidate. These messages are treated in the switch clause, taking the proper steps in each case.
public class SendDataChannelHandler extends TextWebSocketHandler {
private final Logger log = LoggerFactory.getLogger(SendDataChannelHandler.class);
private static final Gson gson = new GsonBuilder().create();
private final ConcurrentHashMap<String, UserSession> users = new ConcurrentHashMap<>();
@Autowired
private KurentoClient kurento;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
log.debug("Incoming message: {}", jsonMessage);
switch (jsonMessage.get("id").getAsString()) {
case "start":
start(session, jsonMessage);
break;
case "stop": {
UserSession user = users.remove(session.getId());
if (user != null) {
user.release();
}
break;
}
case "onIceCandidate": {
JsonObject jsonCandidate = jsonMessage.get("candidate").getAsJsonObject();
UserSession user = users.get(session.getId());
if (user != null) {
IceCandidate candidate = new IceCandidate(jsonCandidate.get("candidate").getAsString(),
jsonCandidate.get("sdpMid").getAsString(),
jsonCandidate.get("sdpMLineIndex").getAsInt());
user.addCandidate(candidate);
}
break;
}
default:
sendError(session, "Invalid message with id " + jsonMessage.get("id").getAsString());
break;
}
}
private void start(final WebSocketSession session, JsonObject jsonMessage) {
...
}
private void sendError(WebSocketSession session, String message) {
...
}
}
In the following snippet, we can see the start method. It handles the ICE candidates gathering, creates a Media
Pipeline, creates the Media Elements (WebRtcEndpoint, KmsSendData and PlayerEndpoint) and makes the
connections among them. A startResponse message is sent back to the client with the SDP answer.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
try {
// User session
UserSession user = new UserSession();
MediaPipeline pipeline = kurento.createMediaPipeline();
user.setMediaPipeline(pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).useDataChannels()
.build();
user.setWebRtcEndpoint(webRtcEndpoint);
PlayerEndpoint player = new PlayerEndpoint.Builder(pipeline,
"http://files.kurento.org/video/filter/barcodes.webm").build();
user.setPlayer(player);
users.put(session.getId(), user);
// ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
// Media logic
KmsSendData kmsSendData = new KmsSendData.Builder(pipeline).build();
player.connect(kmsSendData);
kmsSendData.connect(webRtcEndpoint);
// SDP negotiation (offer and answer)
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
JsonObject response = new JsonObject();
response.addProperty("id", "startResponse");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
webRtcEndpoint.gatherCandidates();
player.play();
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
The sendError method is quite simple: it sends an error message to the client when an exception is caught in the
server-side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
switch (parsedMessage.id) {
case 'startResponse':
startResponse(parsedMessage);
break;
case 'error':
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError("Error message from server: " + parsedMessage.message);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function(error) {
if (error) {
console.error("Error adding candidate: " + error);
return;
}
});
break;
default:
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Unrecognized message', parsedMessage);
}
}
function start() {
console.log("Starting video call ...")
// Disable start button
setState(I_AM_STARTING);
showSpinner(videoOutput);
var servers = null;
var configuration = null;
var peerConnection = new RTCPeerConnection(servers, configuration);
console.log("Creating channel");
var dataConstraints = null;
channel = peerConnection.createDataChannel(getChannelName(), dataConstraints);
channel.onmessage = onMessage;
var dataChannelReceive = document.getElementById('dataChannelReceive');
function onMessage (event) {
console.log("Received data " + event["data"]);
dataChannelReceive.value = event["data"];
}
console.log("Creating WebRtcPeer and generating local sdp offer ...");
var options = {
peerConnection: peerConnection,
remoteVideo : videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function(error) {
if (error) {
return console.error(error);
}
webRtcPeer.generateOffer(onOffer);
});
}
function closeChannels(){
if(channel){
channel.close();
$('#dataChannelSend').disabled = true;
$('#send').attr('disabled', true);
channel = null;
}
}
function onOffer(error, offerSdp) {
if (error)
return console.error("Error generating the offer");
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'start',
sdpOffer : offerSdp
}
sendMessage(message);
}
function onError(error) {
console.error(error);
}
function onIceCandidate(candidate) {
console.log("Local candidate" + JSON.stringify(candidate));
var message = {
id : 'onIceCandidate',
candidate : candidate
};
sendMessage(message);
}
function startResponse(message) {
setState(I_CAN_STOP);
console.log("SDP answer received from server. Processing ...");
webRtcPeer.processAnswer(message.sdpAnswer, function(error) {
if (error)
return console.error(error);
});
}
function stop() {
console.log("Stopping video call ...");
setState(I_CAN_START);
if (webRtcPeer) {
closeChannels();
webRtcPeer.dispose();
webRtcPeer = null;
var message = {
id : 'stop'
}
sendMessage(message);
}
hideSpinner(videoOutput);
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
15.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
CHAPTER 16
Hello World with Data Channels
This tutorial shows how text messages sent from the browser can be delivered through data channels, and displayed
together with the loopback video.
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM
executing the demo. As we will be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
Note: This demo needs the kms-datachannelexample module installed in the media server. That module is available
in the Kurento repositories, so it is possible to install it with:
sudo apt-get install kms-datachannelexample
@Override
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(handler(), "/showdatachannel");
}
public static void main(String[] args) throws Exception {
new SpringApplication(ShowDataChannelApp.class).run(args);
}
}
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate
the client with the application server by means of requests and responses. Specifically, the main app class implements
the interface WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path
/showdatachannel.
ShowDataChannelHandler class implements TextWebSocketHandler to handle text WebSocket requests. The
central piece of this class is the method handleTextMessage. This method implements the actions for requests,
returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol
depicted in the previous sequence diagram.
In the designed protocol there are three different kinds of incoming messages to the server: start, stop and
onIceCandidate. These messages are treated in the switch clause, taking the proper steps in each case.
public class ShowDataChannelHandler extends TextWebSocketHandler {
private final Logger log = LoggerFactory.getLogger(ShowDataChannelHandler.class);
private static final Gson gson = new GsonBuilder().create();
private final ConcurrentHashMap<String, UserSession> users = new ConcurrentHashMap<>();
@Autowired
private KurentoClient kurento;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
log.debug("Incoming message: {}", jsonMessage);
switch (jsonMessage.get("id").getAsString()) {
case "start":
start(session, jsonMessage);
break;
case "stop": {
UserSession user = users.remove(session.getId());
if (user != null) {
user.release();
}
break;
}
case "onIceCandidate": {
JsonObject jsonCandidate = jsonMessage.get("candidate").getAsJsonObject();
UserSession user = users.get(session.getId());
if (user != null) {
IceCandidate candidate = new IceCandidate(jsonCandidate.get("candidate").getAsString(),
jsonCandidate.get("sdpMid").getAsString(),
jsonCandidate.get("sdpMLineIndex").getAsInt());
user.addCandidate(candidate);
}
break;
}
default:
sendError(session, "Invalid message with id " + jsonMessage.get("id").getAsString());
break;
}
}
private void start(final WebSocketSession session, JsonObject jsonMessage) {
...
}
private void sendError(WebSocketSession session, String message) {
...
}
}
The following snippet shows the start method, where ICE candidates are gathered and the Media Pipeline and Media
Elements (WebRtcEndpoint and KmsShowData) are created and connected. The startResponse message is sent
back to the client carrying the SDP answer.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
try {
// User session
UserSession user = new UserSession();
MediaPipeline pipeline = kurento.createMediaPipeline();
user.setMediaPipeline(pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).useDataChannels()
.build();
user.setWebRtcEndpoint(webRtcEndpoint);
users.put(session.getId(), user);
// ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
// Media logic
KmsShowData kmsShowData = new KmsShowData.Builder(pipeline).build();
webRtcEndpoint.connect(kmsShowData);
kmsShowData.connect(webRtcEndpoint);
// SDP negotiation (offer and answer)
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
The sendError method is quite simple: it sends an error message to the client when an exception is caught on the
server side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError("Error message from server: " + parsedMessage.message);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function(error) {
if (error) {
console.error("Error adding candidate: " + error);
return;
}
});
break;
default:
if (state == I_AM_STARTING) {
setState(I_CAN_START);
}
onError('Unrecognized message', parsedMessage);
}
}
function start() {
console.log("Starting video call ...")
// Disable start button
setState(I_AM_STARTING);
showSpinner(videoInput, videoOutput);
var servers = null;
var configuration = null;
var peerConnection = new RTCPeerConnection(servers, configuration);
console.log("Creating channel");
var dataConstraints = null;
channel = peerConnection.createDataChannel(getChannelName(), dataConstraints);
channel.onopen = onSendChannelStateChange;
channel.onclose = onSendChannelStateChange;
function onSendChannelStateChange(){
if(!channel) return;
var readyState = channel.readyState;
console.log("sendChannel state changed to " + readyState);
if(readyState == 'open'){
dataChannelSend.disabled = false;
dataChannelSend.focus();
$('#send').attr('disabled', false);
} else {
dataChannelSend.disabled = true;
$('#send').attr('disabled', true);
}
}
var sendButton = document.getElementById('send');
var dataChannelSend = document.getElementById('dataChannelSend');
sendButton.addEventListener("click", function(){
var data = dataChannelSend.value;
setState(I_CAN_STOP);
console.log("SDP answer received from server. Processing ...");
webRtcPeer.processAnswer(message.sdpAnswer, function(error) {
if (error)
return console.error(error);
});
}
function stop() {
console.log("Stopping video call ...");
setState(I_CAN_START);
if (webRtcPeer) {
closeChannels();
webRtcPeer.dispose();
webRtcPeer = null;
var message = {
id : 'stop'
}
sendMessage(message);
}
hideSpinner(videoInput, videoOutput);
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
16.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
Due to the same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by
means of a Node.js HTTP server, which can be installed using npm:
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-hello-world-data-channel
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the parameter ws_uri to the
URL, as follows:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
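The ws_uri query parameter above is read by the page at load time. As a rough sketch of that lookup (the tutorial itself uses the getopts helper; this version uses only the standard URLSearchParams API, so treat the function name as illustrative):

```javascript
// Read the optional ws_uri query parameter, falling back to the default
// KMS WebSocket URI on the same host (port 8433, as in the demo code).
function getWsUri(search, hostname) {
  var params = new URLSearchParams(search);
  return params.get('ws_uri') || 'wss://' + hostname + ':8433/kurento';
}
```

With no parameter present, the default wss://<hostname>:8433/kurento is used, which matches the getopts defaults shown later in this section.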
Note: This demo needs the kms-datachannelexample module installed in the media server. That module is available
in the Kurento repositories, so it is possible to install it with:
sudo apt-get install kms-datachannelexample
[...]
}
The function WebRtcPeer.WebRtcPeerSendrecv abstracts the WebRTC internal details (i.e. PeerConnection and
getUserMedia) and makes it possible to start a full-duplex WebRTC communication, using the HTML video tag with id
videoInput to show the video camera (local stream) and the video tag videoOutput to show the remote stream provided
by the Kurento Media Server.
Inside this function, a call to generateOffer is performed. This function accepts a callback in which the SDP offer is
received. In this callback we create an instance of the KurentoClient class that will manage communications with the
Kurento Media Server. So, we need to provide the URI of its WebSocket endpoint. In this example, we assume it is
listening on port 8433 of the same host that serves the application over HTTP.
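As a sketch of the elided setup, the options object handed to WebRtcPeerSendrecv can be assembled as below (field names as used by kurento-utils; the constructor call itself is shown as a comment because it needs a browser environment, and the helper name is ours):

```javascript
// Assemble the options for kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv.
// videoInput / videoOutput are the <video> tags named in the text above.
function buildWebRtcOptions(videoInput, videoOutput, onIceCandidate) {
  return {
    localVideo: videoInput,      // shows the local camera stream
    remoteVideo: videoOutput,    // shows the remote stream from KMS
    onicecandidate: onIceCandidate
  };
}

// In the browser (not runnable outside it):
// webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
//   function(error) {
//     if (error) return onError(error);
//     this.generateOffer(onOffer); // SDP offer delivered to the callback
//   });
```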
[...]
var args = getopts(location.search,
{
default:
{
ws_uri: 'wss://' + location.hostname + ':8433/kurento',
ice_servers: undefined
}
});
[...]
kurentoClient(args.ws_uri, function(error, client){
[...]
});
Once we have an instance of kurentoClient, the following step is to create a Media Pipeline, as follows:
client.create("MediaPipeline", function(error, _pipeline){
[...]
});
If everything works correctly, we have an instance of a media pipeline (variable pipeline in this example).
With this instance, we are able to create Media Elements. In this example we just need a WebRtcEndpoint with the
useDataChannels property set to true. Then, this media element is connected to itself (loopback):
pipeline.create("WebRtcEndpoint", {useDataChannels: true}, function(error, webRtc){
if(error) return onError(error);
setIceCandidateCallbacks(webRtcPeer, webRtc, onError)
webRtc.processOffer(sdpOffer, function(error, sdpAnswer){
if(error) return onError(error);
webRtc.gatherCandidates(onError);
webRtcPeer.processAnswer(sdpAnswer, onError);
});
webRtc.connect(webRtc, function(error){
if(error) return onError(error);
console.log("Loopback established");
});
});
In the following snippet, we can see how the channel is created and how a message is sent through it.
var dataConstraints = null;
var channel = peerConnection.createDataChannel(getChannelName(), dataConstraints);
...
sendButton.addEventListener("click", function(){
...
channel.send(data);
...
});
Note: The TURN and STUN servers to be used can be configured by simply adding the parameter ice_servers to
the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.example.net"}]
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
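A sketch of what the page does with such a value: the JSON array from the URL maps onto the iceServers field of an RTCConfiguration. Both the helper name and the "configuration" option key are assumptions of this sketch, based on how kurento-utils forwards configuration to RTCPeerConnection:

```javascript
// Turn the raw ice_servers query value (a JSON array, as in the URLs above)
// into an RTCConfiguration fragment on the WebRtcPeer options object.
// The "configuration" key is an assumption, not confirmed by this tutorial.
function withIceServers(options, rawIceServers) {
  if (rawIceServers) {
    options.configuration = { iceServers: JSON.parse(rawIceServers) };
  }
  return options;
}
```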
16.2.4 Dependencies
Demo dependencies are defined in file bower.json. They are managed using Bower.
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at Bower.
CHAPTER 17
WebRTC recording
This tutorial has two parts. First, it implements a WebRTC loopback and records the stream to disk. Second, it plays
back the recorded stream. Users can choose which type of media to send and record: audio, video or both.
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM
executing the demo. As we will be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate
the client with the application server by means of requests and responses. Specifically, the main app class implements
the interface WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path
/recording.
HelloWorldRecHandler class implements TextWebSocketHandler to handle text WebSocket requests. The
central piece of this class is the method handleTextMessage. This method implements the actions for requests,
returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol
depicted in the previous sequence diagram.
In the designed protocol there are four different kinds of incoming messages to the server: start, stop, play
and onIceCandidate. These messages are treated in the switch clause, taking the proper steps in each case.
public class HelloWorldRecHandler extends TextWebSocketHandler {
private static final String RECORDER_FILE_PATH = "file:///tmp/HelloWorldRecorded.webm";
private final Logger log = LoggerFactory.getLogger(HelloWorldRecHandler.class);
private static final Gson gson = new GsonBuilder().create();
@Autowired
private UserRegistry registry;
@Autowired
private KurentoClient kurento;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
log.debug("Incoming message: {}", jsonMessage);
UserSession user = registry.getBySession(session);
if (user != null) {
log.debug("Incoming message from user '{}': {}", user.getId(), jsonMessage);
} else {
log.debug("Incoming message from new user: {}", jsonMessage);
}
switch (jsonMessage.get("id").getAsString()) {
case "start":
start(session, jsonMessage);
break;
case "stop":
case "stopPlay":
if (user != null) {
user.release();
}
break;
case "play":
play(user, session, jsonMessage);
break;
case "onIceCandidate": {
In the following snippet, we can see the start method. It handles the ICE candidate gathering, creates a Media
Pipeline, creates the Media Elements (WebRtcEndpoint and RecorderEndpoint) and makes the connections
among them. A startResponse message is sent back to the client with the SDP answer.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
try {
// 1. Media logic (webRtcEndpoint in loopback)
MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
webRtcEndpoint.connect(webRtcEndpoint);
MediaProfileSpecType profile = getMediaProfileFromMessage(jsonMessage);
RecorderEndpoint recorder = new RecorderEndpoint.Builder(pipeline, RECORDER_FILE_PATH)
.withMediaProfile(profile).build();
connectAccordingToProfile(webRtcEndpoint, recorder, profile);
// 2. Store user session
UserSession user = new UserSession(session);
user.setMediaPipeline(pipeline);
user.setWebRtcEndpoint(webRtcEndpoint);
registry.register(user);
// 3. SDP negotiation
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
The play method creates a Media Pipeline with the Media Elements (WebRtcEndpoint and PlayerEndpoint)
and makes the connections among them. It then sends the recorded media to the client.
private void play(UserSession user, final WebSocketSession session, JsonObject jsonMessage) {
try {
// 1. Media logic
final MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
PlayerEndpoint player = new PlayerEndpoint.Builder(pipeline, RECORDER_FILE_PATH).build();
player.connect(webRtcEndpoint);
// Player listeners
player.addErrorListener(new EventListener<ErrorEvent>() {
@Override
public void onEvent(ErrorEvent event) {
log.info("ErrorEvent for session '{}': {}", session.getId(), event.getDescription());
sendPlayEnd(session, pipeline);
}
});
player.addEndOfStreamListener(new EventListener<EndOfStreamEvent>() {
@Override
public void onEvent(EndOfStreamEvent event) {
log.info("EndOfStreamEvent for session '{}'", session.getId());
sendPlayEnd(session, pipeline);
}
});
// 2. Store user session
user.setMediaPipeline(pipeline);
user.setWebRtcEndpoint(webRtcEndpoint);
// 3. SDP negotiation
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
JsonObject response = new JsonObject();
response.addProperty("id", "playResponse");
response.addProperty("sdpAnswer", sdpAnswer);
// 4. Gather ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.error(e.getMessage());
}
}
});
// 5. Play recorded stream
player.play();
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
webRtcEndpoint.gatherCandidates();
} catch (Throwable t) {
log.error("Play error", t);
sendError(session, t.getMessage());
}
}
The sendError method is quite simple: it sends an error message to the client when an exception is caught on the
server side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
webRtcPeer.dispose();
webRtcPeer = null;
var message = {
id : stopMessageId
}
sendMessage(message);
}
hideSpinner(videoInput, videoOutput);
}
function play() {
console.log("Starting to play recorded video...");
// Disable start button
setState(DISABLED);
showSpinner(videoOutput);
console.log('Creating WebRtcPeer and generating local sdp offer ...');
var options = {
remoteVideo : videoOutput,
mediaConstraints : getConstraints(),
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function(error) {
if (error)
return console.error(error);
webRtcPeer.generateOffer(onPlayOffer);
});
}
function onPlayOffer(error, offerSdp) {
if (error)
return console.error('Error generating the offer');
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'play',
sdpOffer : offerSdp
}
sendMessage(message);
}
function getConstraints() {
var mode = $('input[name="mode"]:checked').val();
var constraints = {
audio : true,
video : true
}
if (mode == 'video-only') {
constraints.audio = false;
} else if (mode == 'audio-only') {
constraints.video = false;
}
return constraints;
}
function playResponse(message) {
setState(IN_PLAY);
webRtcPeer.processAnswer(message.sdpAnswer, function(error) {
if (error)
return console.error(error);
});
}
function playEnd() {
setState(POST_CALL);
hideSpinner(videoInput, videoOutput);
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
17.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
Due to the same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by
means of a Node.js HTTP server, which can be installed using npm:
sudo npm install http-server -g
You will need to download the source code from GitHub. There are two implementations of this tutorial, but they are
functionally the same. It is just the internal implementation that changes. After checking out the code, you can start
the web server.
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-recorder
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-hello-world-recorder-generator
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the parameter ws_uri to the
URL, as follows:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
[...]
}
The function WebRtcPeer.WebRtcPeerSendrecv abstracts the WebRTC internal details (i.e. PeerConnection and
getUserMedia) and makes it possible to start a full-duplex WebRTC communication, using the HTML video tag with id
videoInput to show the video camera (local stream) and the video tag videoOutput to show the remote stream provided
by the Kurento Media Server.
Inside this function, a call to generateOffer is performed. This function accepts a callback in which the SDP offer is
received. In this callback we create an instance of the KurentoClient class that will manage communications with the
Kurento Media Server. So, we need to provide the URI of its WebSocket endpoint. In this example, we assume it is
listening on port 8433 of the same host that serves the application over HTTP.
[...]
var args = getopts(location.search,
{
default:
{
ws_uri: 'wss://' + location.hostname + ':8433/kurento',
file_uri: 'file:///tmp/recorder_demo.webm', // file to be stored in media server
ice_servers: undefined
}
});
[...]
kurentoClient(args.ws_uri, function(error, client){
[...]
});
Once we have an instance of kurentoClient, the following step is to create a Media Pipeline, as follows:
client.create("MediaPipeline", function(error, _pipeline){
[...]
});
If everything works correctly, we have an instance of a media pipeline (variable pipeline in this example). With this
instance, we are able to create Media Elements. In this example we just need a WebRtcEndpoint and a RecorderEndpoint. Then, these media elements are interconnected:
var elements =
[
{type: 'RecorderEndpoint', params: {uri : args.file_uri}},
{type: 'WebRtcEndpoint', params: {}}
]
pipeline.create(elements, function(error, elements){
if (error) return onError(error);
var recorder = elements[0]
var webRtc = elements[1]
setIceCandidateCallbacks(webRtcPeer, webRtc, onError)
webRtc.processOffer(offer, function(error, answer) {
if (error) return onError(error);
console.log("offer");
webRtc.gatherCandidates(onError);
webRtcPeer.processAnswer(answer);
});
client.connect(webRtc, webRtc, recorder, function(error) {
if (error) return onError(error);
console.log("Connected");
recorder.record(function(error) {
if (error) return onError(error);
console.log("record");
});
});
});
When the stop button is clicked, the recorder stops recording and all elements are released.
stopRecordButton.addEventListener("click", function(event){
recorder.stop();
pipeline.release();
webRtcPeer.dispose();
videoInput.src = "";
videoOutput.src = "";
hideSpinner(videoInput, videoOutput);
var playButton = document.getElementById('play');
playButton.addEventListener('click', startPlaying);
})
In the second part, after the play button is clicked, we have an instance of a media pipeline (variable pipeline in this
example). With this instance, we are able to create Media Elements. In this example we just need a WebRtcEndpoint
and a PlayerEndpoint whose uri option points to the path where the media was recorded. Then, these media elements
are interconnected:
var options = {uri : args.file_uri}
pipeline.create("PlayerEndpoint", options, function(error, player) {
if (error) return onError(error);
player.on('EndOfStream', function(event){
pipeline.release();
videoPlayer.src = "";
hideSpinner(videoPlayer);
});
player.connect(webRtc, function(error) {
if (error) return onError(error);
player.play(function(error) {
if (error) return onError(error);
console.log("Playing ...");
});
});
});
Note: The TURN and STUN servers to be used can be configured by simply adding the parameter ice_servers to
the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.example.net"}]
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
17.2.4 Dependencies
Demo dependencies are defined in the file bower.json. They are managed using Bower.
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at Bower.
CHAPTER 18
WebRTC repository
This is similar to the recording tutorial, but using the repository to store metadata.
Access the application by connecting to the URL https://localhost:8443/ in a WebRTC capable browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM
executing the demo. In addition, by default this demo also supposes that the Kurento Repository is up and running
in the localhost. This can be changed by means of the property repository.uri. All in all, since we can use Maven
to run the tutorial, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento \
-Drepository.uri=http://repository_host:repository_port
@Bean
public RepositoryClient repositoryServiceProvider() {
return REPOSITORY_SERVER_URI.startsWith("file://") ? null
: RepositoryClientProvider.create(REPOSITORY_SERVER_URI);
}
@Bean
public UserRegistry registry() {
return new UserRegistry();
}
public static void main(String[] args) throws Exception {
new SpringApplication(HelloWorldRecApp.class).run(args);
}
}
This web application follows a Single Page Application (SPA) architecture, and uses a WebSocket to communicate
the client with the application server by means of requests and responses. Specifically, the main app class implements
the interface WebSocketConfigurer to register a WebSocketHandler that processes WebSocket requests in the path
/repository.
HelloWorldRecHandler class implements TextWebSocketHandler to handle text WebSocket requests. The central piece of this class is the method handleTextMessage. This method implements the actions for requests,
returning responses through the WebSocket. In other words, it implements the server part of the signaling protocol
depicted in the previous sequence diagram.
In the designed protocol there are five different kinds of incoming messages to the server: start, stop,
stopPlay, play and onIceCandidate. These messages are treated in the switch clause, taking the proper
steps in each case.
public class HelloWorldRecHandler extends TextWebSocketHandler {
// slightly larger timeout
private static final int REPOSITORY_DISCONNECT_TIMEOUT = 5500;
private static final String RECORDING_EXT = ".webm";
private final Logger log = LoggerFactory.getLogger(HelloWorldRecHandler.class);
private static final SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd_HH-mm-ss-S");
private static final Gson gson = new GsonBuilder().create();
@Autowired
private UserRegistry registry;
@Autowired
private KurentoClient kurento;
@Autowired
private RepositoryClient repositoryClient;
@Override
public void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
JsonObject jsonMessage = gson.fromJson(message.getPayload(), JsonObject.class);
log.debug("Incoming message: {}", jsonMessage);
UserSession user = registry.getBySession(session);
if (user != null) {
In the following snippet, we can see the start method. If a repository REST client or interface has been created, it
will obtain a RepositoryItem from the remote service. This item contains an ID and a recording URI that will be used
by the Kurento Media Server. The ID will be used after the recording ends in order to manage the stored media. If the
client doesn't exist, the recording will be performed to a local URI, on the same machine as the KMS. This method also
deals with ICE candidate gathering, creates a Media Pipeline, creates the Media Elements (WebRtcEndpoint
and RecorderEndpoint) and makes the connections between them. A startResponse message is sent back
to the client with the SDP answer.
private void start(final WebSocketSession session, JsonObject jsonMessage) {
try {
// 0. Repository logic
RepositoryItemRecorder repoItem = null;
if (repositoryClient != null) {
try {
Map<String, String> metadata = Collections.emptyMap();
repoItem = repositoryClient.createRepositoryItem(metadata);
} catch (Exception e) {
log.warn("Unable to create kurento repository items", e);
}
} else {
String now = df.format(new Date());
String filePath = HelloWorldRecApp.REPOSITORY_SERVER_URI + now + RECORDING_EXT;
repoItem = new RepositoryItemRecorder();
repoItem.setId(now);
repoItem.setUrl(filePath);
}
log.info("Media will be recorded {}by KMS: id={} , url={}",
(repositoryClient == null ? "locally" : ""), repoItem.getId(), repoItem.getUrl());
// 1. Media logic (webRtcEndpoint in loopback)
MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
webRtcEndpoint.connect(webRtcEndpoint);
RecorderEndpoint recorder = new RecorderEndpoint.Builder(pipeline, repoItem.getUrl())
.withMediaProfile(MediaProfileSpecType.WEBM).build();
webRtcEndpoint.connect(recorder);
// 2. Store user session
UserSession user = new UserSession(session);
user.setMediaPipeline(pipeline);
user.setWebRtcEndpoint(webRtcEndpoint);
user.setRepoItem(repoItem);
registry.register(user);
// 3. SDP negotiation
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
// 4. Gather ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.error(e.getMessage());
}
}
});
The play method creates a Media Pipeline with the Media Elements (WebRtcEndpoint and PlayerEndpoint)
and makes the connections between them. It will then send the recorded media to the client. The media can be served
from the repository or directly from disk. If the repository interface exists, it will try to connect to the remote
service in order to obtain a URI from which the KMS will read the media streams. The inner workings of the
repository prevent reading an item before it has been closed (i.e. after the upload has finished). This happens only when
a certain number of seconds elapse after the last byte of media is uploaded by the KMS (a safeguard for gaps in the
network communications).
private void play(UserSession user, final WebSocketSession session, JsonObject jsonMessage) {
try {
// 0. Repository logic
RepositoryItemPlayer itemPlayer = null;
if (repositoryClient != null) {
try {
Date stopTimestamp = user.getStopTimestamp();
if (stopTimestamp != null) {
Date now = new Date();
long diff = now.getTime() - stopTimestamp.getTime();
if (diff >= 0 && diff < REPOSITORY_DISCONNECT_TIMEOUT) {
log.info(
"Waiting for {}ms before requesting the repository read endpoint "
+ "(requires {}ms before upload is considered terminated "
+ "and only {}ms have passed)",
REPOSITORY_DISCONNECT_TIMEOUT - diff, REPOSITORY_DISCONNECT_TIMEOUT, diff);
Thread.sleep(REPOSITORY_DISCONNECT_TIMEOUT - diff);
}
} else {
log.warn("No stop timeout was found, repository endpoint might not be ready");
}
itemPlayer = repositoryClient.getReadEndpoint(user.getRepoItem().getId());
} catch (Exception e) {
log.warn("Unable to obtain kurento repository endpoint", e);
}
} else {
itemPlayer = new RepositoryItemPlayer();
itemPlayer.setId(user.getRepoItem().getId());
itemPlayer.setUrl(user.getRepoItem().getUrl());
}
log.debug("Playing from {}: id={}, url={}",
(repositoryClient == null ? "disk" : "repository"), itemPlayer.getId(),
itemPlayer.getUrl());
// 1. Media logic
final MediaPipeline pipeline = kurento.createMediaPipeline();
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline).build();
PlayerEndpoint player = new PlayerEndpoint.Builder(pipeline, itemPlayer.getUrl()).build();
player.connect(webRtcEndpoint);
// Player listeners
player.addErrorListener(new EventListener<ErrorEvent>() {
@Override
public void onEvent(ErrorEvent event) {
log.info("ErrorEvent for session '{}': {}", session.getId(), event.getDescription());
sendPlayEnd(session, pipeline);
}
});
player.addEndOfStreamListener(new EventListener<EndOfStreamEvent>() {
@Override
public void onEvent(EndOfStreamEvent event) {
log.info("EndOfStreamEvent for session '{}'", session.getId());
sendPlayEnd(session, pipeline);
}
});
// 2. Store user session
user.setMediaPipeline(pipeline);
user.setWebRtcEndpoint(webRtcEndpoint);
// 3. SDP negotiation
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
JsonObject response = new JsonObject();
response.addProperty("id", "playResponse");
response.addProperty("sdpAnswer", sdpAnswer);
// 4. Gather ICE candidates
webRtcEndpoint.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
} catch (IOException e) {
log.error(e.getMessage());
}
}
});
The sendError method is quite simple: it sends an error message to the client when an exception is caught on the
server side.
private void sendError(WebSocketSession session, String message) {
try {
JsonObject response = new JsonObject();
response.addProperty("id", "error");
response.addProperty("message", message);
session.sendMessage(new TextMessage(response.toString()));
} catch (IOException e) {
log.error("Exception sending message", e);
}
}
actions are taken to implement each step in the communication. For example, in the function start, the function
WebRtcPeer.WebRtcPeerSendrecv of kurento-utils.js is used to start a WebRTC communication.
var ws = new WebSocket('wss://' + location.host + '/repository');
ws.onmessage = function(message) {
var parsedMessage = JSON.parse(message.data);
console.info('Received message: ' + message.data);
switch (parsedMessage.id) {
case 'startResponse':
startResponse(parsedMessage);
break;
case 'playResponse':
playResponse(parsedMessage);
break;
case 'playEnd':
playEnd();
break;
case 'error':
setState(NO_CALL);
onError('Error message from server: ' + parsedMessage.message);
break;
case 'iceCandidate':
webRtcPeer.addIceCandidate(parsedMessage.candidate, function(error) {
if (error)
return console.error('Error adding candidate: ' + error);
});
break;
default:
setState(NO_CALL);
onError('Unrecognized message', parsedMessage);
}
}
function start() {
console.log('Starting video call ...');
// Disable start button
setState(DISABLED);
showSpinner(videoInput, videoOutput);
console.log('Creating WebRtcPeer and generating local sdp offer ...');
var options = {
localVideo : videoInput,
remoteVideo : videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options,
function(error) {
if (error)
return console.error(error);
webRtcPeer.generateOffer(onOffer);
});
}
function onOffer(error, offerSdp) {
if (error)
return console.error('Error generating the offer');
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'start',
sdpOffer : offerSdp
}
sendMessage(message);
}
function play() {
var options = {
remoteVideo : videoOutput,
onicecandidate : onIceCandidate
}
webRtcPeer = new kurentoUtils.WebRtcPeer.WebRtcPeerRecvonly(options,
function(error) {
if (error)
return console.error(error);
webRtcPeer.generateOffer(onPlayOffer);
});
}
function onPlayOffer(error, offerSdp) {
if (error)
return console.error('Error generating the offer');
console.info('Invoking SDP offer callback function ' + location.host);
var message = {
id : 'play',
sdpOffer : offerSdp
}
sendMessage(message);
}
function playResponse(message) {
setState(IN_PLAY);
webRtcPeer.processAnswer(message.sdpAnswer, function(error) {
if (error)
return console.error(error);
});
}
function playEnd() {
setState(POST_CALL);
hideSpinner(videoInput, videoOutput);
}
function sendMessage(message) {
var jsonMessage = JSON.stringify(message);
console.log('Sending message: ' + jsonMessage);
ws.send(jsonMessage);
}
18.1.5 Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need two dependencies: the Kurento Client Java dependency
(kurento-client) and the JavaScript Kurento utility library (kurento-utils) for the client-side:
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest version of Kurento Java Client at Maven Central.
Kurento Java Client has a minimum requirement of Java 7. Hence, you need to include the following properties in
your pom:
<maven.compiler.target>1.7</maven.compiler.target>
<maven.compiler.source>1.7</maven.compiler.source>
Browser dependencies (i.e. bootstrap, ekko-lightbox, and adapter.js) are handled with Bower. These dependencies are
defined in the file bower.json. The command bower install is automatically called from Maven. Thus, Bower
should be present in your system. It can be installed in an Ubuntu machine as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash
sudo apt-get install -y nodejs
sudo npm install -g bower
Note: kurento-utils-js can be resolved as a Java dependency, but is also available on Bower. To use this library from
Bower, add this dependency to the file bower.json:
"dependencies": {
"kurento-utils": "6.5.0"
}
CHAPTER 19
WebRTC statistics
This tutorial implements a WebRTC loopback and shows how to collect WebRTC statistics.
Due to the Same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by
means of a Node.js HTTP server, which can be installed using npm:
sudo npm install http-server -g
Clone source code from GitHub and then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-loopback-stats
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
The function WebRtcPeer.WebRtcPeerSendrecv hides internal details (i.e. PeerConnection and getUserMedia) and
makes it possible to start a full-duplex WebRTC communication, using the HTML video tag with id videoInput to show
the video camera (local stream) and the video tag videoOutput to show the remote stream provided by the Kurento
Media Server.
Inside this function, a call to generateOffer is performed. This function accepts a callback in which the SDP offer is
received. In this callback we create an instance of the KurentoClient class that will manage communications with the
Kurento Media Server. So, we need to provide the URI of its WebSocket endpoint. In this example, we assume it is
listening on port 8433 at the same host as the HTTP server serving the application.
[...]
var args = getopts(location.search,
{
default:
{
ws_uri: 'wss://' + location.hostname + ':8433/kurento',
ice_servers: undefined
}
});
[...]
kurentoClient(args.ws_uri, function(error, client){
[...]
});
Once we have an instance of kurentoClient, the next step is to create a Media Pipeline, as follows:
client.create("MediaPipeline", function(error, _pipeline){
[...]
});
If everything works correctly, we will have an instance of a media pipeline (variable pipeline in this example). With
this instance, we are able to create Media Elements. In this example we just need a WebRtcEndpoint. Then, this media
element is connected to itself in loopback:
pipeline.create("WebRtcEndpoint", function(error, webRtc) {
if (error) return onError(error);
webRtcEndpoint = webRtc;
setIceCandidateCallbacks(webRtcPeer, webRtc, onError)
webRtc.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) return onError(error);
webRtc.gatherCandidates(onError);
webRtcPeer.processAnswer(sdpAnswer, onError);
});
webRtc.connect(webRtc, function(error) {
if (error) return onError(error);
console.log("Loopback established");
webRtcEndpoint.on('MediaStateChanged', function(event) {
if (event.newState == "CONNECTED") {
console.log("MediaState is CONNECTED ... printing stats...")
activateStatsTimeout();
}
});
});
});
In the following snippet, we can see the getStats method. This method returns several statistics of the WebRtcEndpoint.
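As a rough sketch of what consuming such statistics can look like (the report shape below, a map of ids to stat entries with a type field, is an assumption for illustration and not the exact Kurento API), a helper that pulls out inbound-RTP values might be:

```javascript
// Sketch: extract a few values from a WebRTC stats report.
// The report shape (id -> entry with a 'type' field) and the
// field names are illustrative assumptions, not the Kurento API.
function extractInboundVideoStats(statsReport) {
  var result = null;
  for (var key in statsReport) {
    var stat = statsReport[key];
    if (stat.type === 'inboundrtp') {
      // keep the last matching entry found in the report
      result = {
        bytesReceived: stat.bytesReceived,
        jitter: stat.jitter
      };
    }
  }
  return result;
}

// Example with a mocked report:
var mocked = {
  stat_1: { type: 'inboundrtp', bytesReceived: 12345, jitter: 0.002 },
  stat_2: { type: 'codec' }
};
console.log(extractInboundVideoStats(mocked).bytesReceived); // 12345
```

In the real demo the report would come from the endpoint's stats callback rather than a mocked object.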
Note: The TURN and STUN servers to be used can be configured simply by adding the parameter ice_servers to
the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.example.net"}]
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
19.1.4 Dependencies
Demo dependencies are located in the file bower.json. Bower is used to collect them:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0"
}
Note: We are in active development. You can find the latest version of Kurento JavaScript Client at Bower.
Part V
Mastering Kurento
CHAPTER 20
Kurento Architecture
Fig. 20.1: Kurento Architecture. Kurento architecture follows the traditional separation between signaling and media
planes.
The right side of the picture shows the application, which is in charge of the signaling plane and contains the business
logic and connectors of the particular multimedia application being deployed. It can be built with any programming
technology like Java, Node.js, PHP, Ruby, .NET, etc. The application can use mature technologies such as HTTP and
SIP Servlets, Web Services, database connectors, messaging services, etc. Thanks to this, this plane provides access
to the multimedia signaling protocols commonly used by end-clients, such as SIP, RESTful and raw HTTP based
formats, SOAP, RMI, CORBA or JMS. These signaling protocols are used by the client side of applications to command
the creation of media sessions and to negotiate their desired characteristics on their behalf. Hence, this is the part of
the architecture that is in contact with application developers and, for this reason, it needs to be designed pursuing
simplicity and flexibility.
On the left side, we have the Kurento Media Server, which implements the media plane capabilities providing access
to the low-level media features: media transport, media encoding/decoding, media transcoding, media mixing, media
processing, etc. The Kurento Media Server must be capable of managing the multimedia streams with minimal latency
and maximum throughput. Hence the Kurento Media Server must be optimized for efficiency.
Kurento modules architecture. Kurento Media Server can be extended with built-in modules (crowddetector, pointerdetector, chroma, platedetector) and also with other custom modules.
For further details please visit the Kurento Modules page.
Application logic: This layer provides the specific multimedia logic. In other words, this layer is in charge of
building the appropriate pipeline (by chaining the desired media elements) that the multimedia flows involved
in the application will need to traverse.
Service layer: This layer provides the multimedia services that support the application logic such as media
recording, media ciphering, etc. The Kurento Media Server (i.e. the specific pipeline of media elements) is in
charge of this layer.
The interesting aspect of this discussion is that, as happens with WWW development, Kurento applications can place
the Presentation layer at the client side and the Service layer at the server side. However, the Application Logic layer,
in both cases, can be located at either of the sides or even distributed between them. This idea is represented in the
following picture:
Layered architecture of web and multimedia applications. Applications created using Kurento (right) can be similar to
standard WWW applications (left). Both types of applications may choose to place the application logic at the client
or at the server code.
This means that Kurento developers can choose to include the code creating the specific media pipeline required by
their applications at the client side (using a suitable Kurento Client or directly with Kurento Protocol) or can place it
at the server side.
Both options are valid but each of them leads to a different development style. Having said this, it is important to
note that in the WWW developers usually tend to maintain client side code as simple as possible, bringing most of
their application logic to the server. Reproducing this kind of development experience is the most usual way of using
Kurento. That is, by locating the multimedia application logic at the server side, so that the specific media pipelines
are created using the Kurento Client for your favorite language.
Note: In the following sections it is assumed that all Kurento handling is done at the server side. Although this
is the most common way of using Kurento, it is important to note that all multimedia logic can also be implemented at the
client with the Kurento JavaScript Client.
Main interactions between architectural modules. Main interactions occur in two phases: negotiation and media
exchange. Remark that the color of the different arrows and boxes is aligned with the architectural figures presented
above, so that, for example, orange arrows show exchanges belonging to the signaling plane, blue arrows show
exchanges belonging to the Kurento Protocol, red boxes are associated with the Kurento Media Server and green boxes
with the application.
1. Media negotiation phase (signaling)
As it can be observed, at a first stage, a client (a browser in a computer, a mobile application, etc.) issues a message to
the application requesting some kind of multimedia capability. This message can be implemented with any protocol
(http, websockets, SIP, etc.). For instance, that request could ask for the visualization of a given video clip.
When the application receives the request, if appropriate, it will carry out the specific server side application logic,
which can include Authentication, Authorization and Accounting (AAA), CDR generation, consuming some type of
web service, etc.
After that, the application processes the request and, according to the specific instructions programmed by the developer, commands Kurento Media Server to instantiate the suitable media elements and to chain them in an appropriate
media pipeline. Once the pipeline has been created successfully, Kurento Media Server responds accordingly and the
application forwards the successful response to the client, showing it how and where the media service can be reached.
During the above mentioned steps no media data is really exchanged. All the interactions have the objective of
negotiating the whats, hows, wheres and whens of the media exchange. For this reason, we call it the negotiation
phase. Clearly, during this phase only signaling protocols are involved.
2. Media exchange phase
After that, a new phase starts devoted to producing the actual media exchange. The client addresses a request for the
media to the Kurento Media Server using the information gathered during the negotiation phase. Following with the
video-clip visualization example mentioned above, the browser will send a GET request to the IP address and port of
the Kurento Media Server where the clip can be obtained and, as a result, an HTTP response with the media will be
received.
Following the discussion with that simple example, one may wonder why such a complex scheme for just playing a
video, when in most usual scenarios clients just send the request to the appropriate URL of the video without requiring
any negotiation. The answer is straightforward. Kurento is designed for media applications involving complex media
processing. For this reason, we need to establish a two-phase mechanism enabling a negotiation before the media
exchange. The price to pay is that simple applications, such as one just downloading a video, also need to get through
these phases. However, the advantage is that when creating more advanced services the same simple philosophy will
hold. For example, if we want to add augmented reality or computer vision features to that video-clip, we just need to
create the appropriate pipeline holding the desired media element during the negotiation phase. After that, from the
client perspective, the processed clip will be received as any other video.
Main interactions in a WebRTC session. Interactions taking place in a Real Time Communications (RTC) session.
During the negotiation phase, a Session Description Protocol (SDP) message is exchanged offering the capabilities of
the client. As a result, Kurento Media Server generates an SDP answer that can be used by the client for establishing
the media exchange.
The application developer is able to create the desired pipeline during the negotiation phase, so that the real time
multimedia stream is processed accordingly to the application needs. Just as an example, imagine that we want to
create a WebRTC application recording the media received from the client and augmenting it so that if a human face is
found, a hat will be rendered on top of it. This pipeline is schematically shown in the figure below, where we assume
that the Filter element is capable of detecting the face and adding the hat to it.
Example pipeline for a WebRTC session. During the negotiation phase, the application developer can create a pipeline
providing the desired specific functionality. For example, this pipeline uses a WebRtcEndpoint for communicating with
the client, which is connected to a RecorderEndpoint storing the received media stream and to an augmented reality
filter, which feeds its output media stream back to the client. As a result, the end user will receive its own image filtered
(e.g. with a hat added onto her head) and the stream will be recorded and made available for further recovery into a
repository (e.g. a file).
Distribution of Media and Application Services Kurento Media Server and applications can be co-located, scaled or distributed among different machines.
A single application can invoke the services of more than one Kurento Media Server. The opposite
also applies, that is, a Kurento Media Server can attend the requests of more than one application.
Suitable for the Cloud Kurento is suitable to be integrated into cloud environments to act as a PaaS
(Platform as a Service) component.
Media Pipelines Chaining Media Elements via Media Pipelines is an intuitive approach to challenge the
complexity of multimedia processing.
Application development Developers do not need to be aware of internal Kurento Media Server complexities: applications can be deployed in any technology or framework the developer likes, from
client to server, from browsers to cloud services.
End-to-end Communication Capability Kurento provides end-to-end communication capabilities so
developers do not need to deal with the complexity of transporting, encoding/decoding and rendering media on client devices.
Fully Processable Media Streams Kurento enables not only interactive interpersonal communications
(e.g. Skype-like with conversational call push/reception capabilities), but also human-to-machine
(e.g. Video on Demand through real-time streaming) and machine-to-machine (e.g. remote video
recording, multisensory data exchange) communications.
Modular Processing of Media Modularization achieved through media elements and pipelines allows
defining the media processing functionality of an application through a graph-oriented language,
where the application developer is able to create the desired logic by chaining the appropriate functionalities.
Auditable Processing Kurento is able to generate rich and detailed information for QoS monitoring,
billing and auditing.
Seamless IMS integration Kurento is designed to support seamless integration into the IMS infrastructure of Telephony Carriers.
Transparent Media Adaptation Layer Kurento provides a transparent media adaptation layer to make
the convergence among different devices having different requirements in terms of screen size,
power consumption, transmission rate, etc. possible.
CHAPTER 21
Kurento API Reference
Fig. 21.1: Example of a Media Pipeline implementing an interactive multimedia application receiving media from a
WebRtcEndpoint, overlaying an image on the detected faces and sending back the resulting stream
The Kurento API is an object-oriented API. That is, there are classes that can be instantiated. These classes define operations
that can be invoked on objects of these classes. A class can have an inheritance relationship with other classes,
so that child classes inherit operations from their parent classes.
The following class diagram shows some of the relationships of the main classes in the Kurento API.
21.1.2 Endpoints
Let us discuss briefly the different Endpoints offered by Kurento.
A WebRtcEndpoint is an output and input endpoint that provides media streaming for Real Time Communications
(RTC) through the web. It implements WebRTC technology to communicate with browsers.
A RtpEndpoint is an output and input endpoint. That is, it provides bidirectional content delivery capabilities with
remote networked peers. To send and receive media through the network it uses the RTP protocol for transport and SDP
for media negotiation.
An HttpPostEndpoint is an input endpoint that accepts media using HTTP POST requests, like the HTTP file upload function.
A PlayerEndpoint is an input endpoint that retrieves content from the file system, an HTTP URL or an RTSP URL and injects it into
the media pipeline.
A RecorderEndpoint is an output endpoint that provides functionality to store content reliably (it does not discard
data). It contains Media Sink pads for audio and video.
The following class diagram shows the relationships of the main endpoint classes.
21.1.3 Filters
Filters are MediaElements that perform media processing, computer vision, augmented reality, and so on. Let's see the
available filters in Kurento:
The ZBarFilter filter detects QR and bar codes in a video stream. When a code is found, the filter raises a
CodeFoundEvent. Clients can add a listener to this event to execute some action.
The FaceOverlayFilter filter detects faces in a video stream and overlays them with a configurable image.
GStreamerFilter is a generic filter interface that allows using GStreamer filters in Kurento Media Pipelines.
The following class diagram shows the relationships of the main filter classes.
21.1.4 Hubs
Hubs are media objects in charge of managing multiple media flows in a pipeline. A Hub has several hub ports where
other media elements are connected. Let's see the available hubs in Kurento:
Composite is a hub that mixes the audio stream of its connected inputs and constructs a grid with the video streams
of them.
DispatcherOneToMany is a Hub that sends a given input to all the connected output HubPorts.
Dispatcher is a hub that allows routing between arbitrary input-output HubPort pairs.
CHAPTER 22
Kurento Protocol
To allow this rich API, Kurento Clients require full-duplex communications between the client and server infrastructure. For this reason, the Kurento Protocol is based on WebSocket transports.
Prior to issuing commands, the Kurento Client requires establishing a WebSocket connection with the Kurento Media
Server at the URL: ws://hostname:port/kurento
Once the WebSocket has been established, the Kurento Protocol offers different types of request/response messages:
ping: Keep-alive method between client and Kurento Media Server.
create: Instantiates a new media object, that is, a pipeline or media element.
invoke: Calls a method of an existing media object.
subscribe: Creates a subscription to an event in a object.
unsubscribe: Removes an existing subscription to an event.
release: Deletes the object and releases the resources used by it.
The Kurento Protocol also allows Kurento Media Server to send requests to clients:
onEvent: This request is sent from Kurento Media server to clients when an event occurs.
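All of these request messages share the JSON-RPC 2.0 envelope shown in the examples of this chapter. As a minimal sketch (the buildRequest helper below is ours for illustration and not part of any Kurento library; only the message shape follows the protocol), a client could construct such envelopes like this:

```javascript
// Sketch: build JSON-RPC 2.0 request envelopes as used by the
// Kurento Protocol. The helper name is illustrative; the envelope
// fields (id, method, params, jsonrpc) follow the protocol.
var nextId = 0;

function buildRequest(method, params) {
  nextId += 1; // each request carries a fresh id, matched by the response
  return {
    id: nextId,
    method: method,
    params: params,
    jsonrpc: '2.0'
  };
}

// For example, a 'create' request for a MediaPipeline:
var createMsg = buildRequest('create', {
  type: 'MediaPipeline',
  constructorParams: {},
  properties: {}
});
console.log(JSON.stringify(createMsg));
```

In a real client the serialized message would then be sent over the established WebSocket connection.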
Ping
In order to guarantee the WebSocket connectivity between the client and the Kurento Media Server, a keep-alive
method is implemented. This method is based on a ping method sent by the client, which must be replied to with a
pong message from the server. If no response is obtained within a time interval, the client is aware that the connectivity
with the media server has been lost. The parameter interval is the timeout to receive the pong message from the
server, in milliseconds. By default this value is 240000 (i.e. 240 seconds). This is an example of a ping request:
{
"id": 1,
"method": "ping",
"params": {
"interval": 240000
},
"jsonrpc": "2.0"
}
The response to a ping request must contain a result object with a value parameter with a fixed name: pong.
The following snippet shows the pong response to the previous ping request:
{
"id": 1,
"result": {
"value": "pong"
},
"jsonrpc": "2.0"
}
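The ping/pong exchange above can be sketched as a pair of small client-side helpers. The helper names and the fake round-trip below are illustrative assumptions; only the message shapes come from the examples in this section:

```javascript
// Sketch: client-side keep-alive helpers for the Kurento Protocol.
// 'makePing' builds the request; 'isPong' checks that a response
// matches the request id and carries the fixed value "pong".
var PING_INTERVAL = 240000; // milliseconds, as in the example above

function makePing(id) {
  return {
    id: id,
    method: 'ping',
    params: { interval: PING_INTERVAL },
    jsonrpc: '2.0'
  };
}

function isPong(request, response) {
  return response.id === request.id &&
         !!response.result &&
         response.result.value === 'pong';
}

// Usage with a fake round-trip (no real server involved):
var ping = makePing(1);
var reply = { id: 1, result: { value: 'pong' }, jsonrpc: '2.0' };
console.log(isPong(ping, reply)); // true
```

A real client would send the serialized ping over the WebSocket and start a timer; if no matching pong arrives before the timer fires, the connection is considered lost.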
Create
The create message requests the creation of an object of the Kurento API (Media Pipelines and Media Elements). The
parameter type specifies the type of the object to be created. The parameter constructorParams contains all the
information needed to create the object. Each message needs different constructorParams to create the object.
These parameters are defined in the Kurento API section.
Media Elements have to be contained in a previously created Media Pipeline. Therefore, before creating Media
Elements, a Media Pipeline must exist. The response of the creation of a Media Pipeline contains a parameter called
sessionId, which must be included in the next create requests for Media Elements.
The following example shows a request message requesting the creation of an object of the type MediaPipeline:
{
"id": 2,
"method": "create",
"params": {
"type": "MediaPipeline",
"constructorParams": {},
"properties": {}
},
"jsonrpc": "2.0"
}
The response to this request message is as follows. Notice that the parameter value identifies the created Media
Pipeline, and sessionId is the identifier of the current session:
{
"id": 2,
"result": {
"value": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline",
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
The response message contains the identifier of the new object in the field value. As usual, the message id must
match the one of the request message. The sessionId is also returned in each response. The following example shows
a request message requesting the creation of an object of the type WebRtcEndpoint within an existing Media
Pipeline (identified by the parameter mediaPipeline). Notice that in this request, the sessionId is already
present, while in the previous example it was not (since at that point it was unknown to the client):
{
"id": 3,
"method": "create",
"params": {
"type": "WebRtcEndpoint",
"constructorParams": {
"mediaPipeline": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline"
},
"properties": {},
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
The following example shows the response to the previous request message. It contains the identifier of the new
WebRtcEndpoint in the value field, together with the sessionId:
{
"id": 3,
"result": {
"value": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline/087b7777-aab5-4787-816f",
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
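The session bootstrap shown above can be sketched in JavaScript. The `buildCreate` helper is a hypothetical name, and the identifiers are the illustrative values from the examples; the point is that the first request carries no sessionId, while every later request must echo the sessionId the server returned:

```javascript
// Sketch of the create flow with sessionId propagation.
let nextId = 2;

function buildCreate(type, constructorParams, sessionId) {
  const params = { type: type, constructorParams: constructorParams, properties: {} };
  if (sessionId !== undefined) {
    params.sessionId = sessionId; // unknown only for the very first request
  }
  return { id: nextId++, method: 'create', params: params, jsonrpc: '2.0' };
}

// 1) Create the Media Pipeline (no sessionId known yet).
const createPipeline = buildCreate('MediaPipeline', {});

// 2) Using the ids from the (illustrative) response, create an endpoint
//    inside that pipeline, echoing the sessionId.
const pipelineId = '6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline';
const sessionId = 'bd4d6227-0463-4d52-b1c3-c71f0be68466';
const createEndpoint = buildCreate('WebRtcEndpoint',
                                   { mediaPipeline: pipelineId }, sessionId);
```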
Invoke
Invoke message requests the invocation of an operation in the specified object. The parameter object indicates the
id of the object in which the operation will be invoked. The parameter operation carries the name of the operation
to be executed. Finally, the parameter operationParams has the parameters needed to execute the operation.
The following example shows a request message requesting the invocation of the operation connect on a
PlayerEndpoint connected to a WebRtcEndpoint:
{
"id": 5,
"method": "invoke",
"params": {
"object": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline/76dcb8d7-5655-445b-8cb7",
"operation": "connect",
"operationParams": {
"sink": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline/087b7777-aab5-4787-81"
},
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
The response message contains the value returned by the operation, or nothing if the operation doesn't return any
value.
The following example shows a typical response to the invocation of the connect operation (which doesn't return anything):
{
"id": 5,
"result": {
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
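A hedged sketch of how a client could assemble such an invoke request follows; `buildInvoke` and the short element ids are hypothetical, standing in for the long pipeline-scoped identifiers used in the real examples:

```javascript
// Sketch: assembling a JSON-RPC invoke request.
function buildInvoke(id, objectId, operation, operationParams, sessionId) {
  return {
    id: id,
    method: 'invoke',
    params: {
      object: objectId,                       // element the operation runs on
      operation: operation,                   // e.g. 'connect'
      operationParams: operationParams || {},
      sessionId: sessionId
    },
    jsonrpc: '2.0'
  };
}

// Connect a (hypothetical) player element to a WebRTC element.
const connectReq = buildInvoke(5, 'pipeline/player', 'connect',
                               { sink: 'pipeline/webrtc' }, 'session-1');
```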
Release
Release message requests the release of the specified object. The parameter object indicates the id of the object to
be released:
{
"id": 36,
"method": "release",
"params": {
"object": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline",
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
The response message only contains the sessionId. The following example shows the typical response of a release
request:
{
"id": 36,
"result": {
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
Subscribe
Subscribe message requests a subscription to a certain kind of events in the specified object. The parameter object
indicates the id of the object to subscribe for events. The parameter type specifies the type of the events. If a client
is subscribed to a certain type of events in an object, each time an event is fired in this object a request with method
onEvent is sent from Kurento Media Server to the client. This kind of request is described a few sections later.
The following example shows a request message requesting the subscription of the event type EndOfStream on a
PlayerEndpoint object:
{
"id": 11,
"method": "subscribe",
"params": {
"type": "EndOfStream",
"object": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline/76dcb8d7-5655-445b-8cb7",
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
The response message contains the subscription identifier. This value can be used later to remove the subscription.
The following example shows the response to a subscription request. The value attribute contains the subscription id:
{
"id": 11,
"result": {
"value": "052061c1-0d87-4fbd-9cc9-66b57c3e1280",
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
Unsubscribe
Unsubscribe message requests the cancellation of a previous event subscription. The parameter subscription contains
the subscription id received from the server when the subscription was created.
The following example shows a request message requesting the cancellation of the subscription
052061c1-0d87-4fbd-9cc9-66b57c3e1280 for a given object:
{
"id": 38,
"method": "unsubscribe",
"params": {
"subscription": "052061c1-0d87-4fbd-9cc9-66b57c3e1280",
"object": "6ba9067f-cdcf-4ea6-a6ee-d74519585acd_kurento.MediaPipeline/76dcb8d7-5655-445b-8cb7",
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
The response message only contains the sessionId. The following example shows the typical response of an
unsubscription request:
{
"id": 38,
"result": {
"sessionId": "bd4d6227-0463-4d52-b1c3-c71f0be68466"
},
"jsonrpc": "2.0"
}
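All the request types above follow the same pattern: the client sends a request with an id, and the server's response echoes that id. A minimal sketch of the correlation logic a client might keep on top of the WebSocket is shown below; `createRpcClient` is a hypothetical helper, and `send` is injected so the logic can be exercised without a real connection:

```javascript
// Sketch: a tiny request/response correlator keyed by message id.
function createRpcClient(send) {
  let nextId = 1;
  const pending = new Map(); // id -> callback waiting for the response

  return {
    request(method, params, callback) {
      const id = nextId++;
      pending.set(id, callback);
      send({ id: id, method: method, params: params, jsonrpc: '2.0' });
      return id;
    },
    handleMessage(msg) {
      // Only responses carry an id we are waiting for; onEvent requests
      // coming from the server would be handled elsewhere.
      if (msg.id !== undefined && pending.has(msg.id)) {
        pending.get(msg.id)(msg.error || null, msg.result);
        pending.delete(msg.id);
      }
    }
  };
}
```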
OnEvent
When a client is subscribed to a type of events in an object, the server sends an onEvent request each time an event
of that type is fired in the object. This is possible because the Kurento Protocol is implemented with WebSockets, and
there is a full-duplex channel between client and server. The request that the server sends to the client has all the
information about the event:
source: the object that is the source of the event.
type: the type of the event.
timestamp: date and time of the media server.
tags: media elements can be labeled using the methods setSendTagsInEvents and addTag, present in
each element. These tags are key-value metadata that can be used by developers for custom purposes. Tags are
returned with each event by the media server in this field.
The following example shows a notification sent from server to client to notify an event of type EndOfStream for a
PlayerEndpoint object:
{
"jsonrpc":"2.0",
"method":"onEvent",
"params":{
"value":{
"data":{
"source":"681f1bc8-2d13-4189-a82a-2e2b92248a21_kurento.MediaPipeline/e983997e-ac19-4f4b-9575",
"tags":[],
"timestamp":"1441277150",
"type":"EndOfStream"
},
"object":"681f1bc8-2d13-4189-a82a-2e2b92248a21_kurento.MediaPipeline/e983997e-ac19-4f4b-9575",
"type":"EndOfStream"
}
}
}
Notice that this message has no id field because no response is required.
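On the client side, such notifications are usually routed to per-type handlers. The following sketch (the dispatcher name and shape are illustrative, not official client API) shows one way to do this with the message structure shown above:

```javascript
// Sketch: routing onEvent notifications to handlers registered per type.
function createEventDispatcher() {
  const handlers = {}; // event type -> array of callbacks

  return {
    on(type, callback) {
      (handlers[type] = handlers[type] || []).push(callback);
    },
    dispatch(msg) {
      if (msg.method !== 'onEvent') return false; // not an event notification
      const data = msg.params.value.data;
      (handlers[data.type] || []).forEach(cb => cb(data));
      return true;
    }
  };
}
```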
A Media Element is collected when its client stays disconnected for longer than 4 minutes. After that time, these media
elements are disposed automatically.
Therefore, the WebSocket connection between client and KMS must be active at all times. In case of a temporary
network disconnection, KMS implements a mechanism to allow the client to reconnect.
For this, there is a special kind of message with the format below. This message allows a client to reconnect to the
same KMS it was previously connected to:
{
"jsonrpc": "2.0",
"id": 7,
"method": "connect",
"params": {
"sessionId":"4f5255d5-5695-4e1c-aa2b-722e82db5260"
}
}
If this request succeeds, the client is reconnected to the same KMS. In case of reconnection to another KMS, the
response message is the following:
{
"jsonrpc":"2.0",
"id": 7,
"error":{
"code":40007,
"message":"Invalid session",
"data":{
"type":"INVALID_SESSION"
}
}
}
In this case, the client is supposed to invoke the connect primitive once again in order to get a new sessionId:
{
"jsonrpc":"2.0",
"id": 7,
"method":"connect"
}
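The reconnection decision described above can be sketched as a pure function; `buildConnect` and `nextConnectAttempt` are illustrative names, not official client API:

```javascript
// Sketch of the reconnection logic: try to resume the old session first;
// if the server answers INVALID_SESSION, start a fresh one.
function buildConnect(id, sessionId) {
  const msg = { jsonrpc: '2.0', id: id, method: 'connect' };
  if (sessionId !== undefined) msg.params = { sessionId: sessionId };
  return msg;
}

function nextConnectAttempt(response, lastId) {
  const err = response.error;
  if (err && err.data && err.data.type === 'INVALID_SESSION') {
    // Reconnected to another KMS: ask for a brand-new session
    // by sending connect without a sessionId.
    return buildConnect(lastId + 1);
  }
  return null; // same KMS, session resumed: nothing more to do
}
```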
2. KMS sends a response message with the identifier for the media pipeline and the media session:
{
"id":1,
"result":{
"value":"c4a84b47-1acd-4930-9f6d-008c10782dfe_MediaPipeline",
"sessionId":"ba4be2a1-2b09-444e-a368-f81825a6168c"
},
"jsonrpc":"2.0"
}
4. KMS creates the WebRtcEndpoint sending back the media element identifier to the client:
{
"id":2,
"result":{
"value":"c4a84b47-1acd-4930-9f6d-008c10782dfe_MediaPipeline/e72a1ff5-e416-48ff-99ef-02f7fadabaf7",
"sessionId":"ba4be2a1-2b09-444e-a368-f81825a6168c"
},
"jsonrpc":"2.0"
}
5. Client invokes the connect primitive in the WebRtcEndpoint in order to create a loopback:
{
"id":3,
"method":"invoke",
"params":{
"object":"c4a84b47-1acd-4930-9f6d-008c10782dfe_MediaPipeline/e72a1ff5-e416-48ff-99ef-02f7fadabaf7",
"operation":"connect",
"operationParams":{
"sink":"c4a84b47-1acd-4930-9f6d-008c10782dfe_MediaPipeline/e72a1ff5-e416-48ff-99ef-02f7fadabaf7"
},
"sessionId":"ba4be2a1-2b09-444e-a368-f81825a6168c"
},
"jsonrpc":"2.0"
}
7. Client invokes the processOffer primitive in the WebRtcEndpoint in order to negotiate SDP in WebRTC:
{
"id":4,
"method":"invoke",
"params":{
"object":"c4a84b47-1acd-4930-9f6d-008c10782dfe_MediaPipeline/e72a1ff5-e416-48ff-99ef-02f7fadabaf7",
"operation":"processOffer",
"operationParams":{
"offer":"SDP"
},
"sessionId":"ba4be2a1-2b09-444e-a368-f81825a6168c"
},
"jsonrpc":"2.0"
}
8. KMS carries out the SDP negotiation and returns the SDP answer:
{
"id":4,
"result":{
"value":"SDP"
},
"jsonrpc":"2.0"
}
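The client side of the loopback exchange above can be summarized as an ordered list of JSON-RPC requests. The following sketch (the helper name is illustrative) mirrors the ids and ordering of the example; note that only the first request omits the sessionId, and that the loopback is created by connecting the endpoint to itself:

```javascript
// Sketch: the loopback negotiation as an ordered list of client requests.
function loopbackRequests(pipelineId, endpointId, sessionId, sdpOffer) {
  return [
    { id: 1, method: 'create',
      params: { type: 'MediaPipeline', constructorParams: {}, properties: {} },
      jsonrpc: '2.0' },
    { id: 2, method: 'create',
      params: { type: 'WebRtcEndpoint',
                constructorParams: { mediaPipeline: pipelineId },
                properties: {}, sessionId: sessionId },
      jsonrpc: '2.0' },
    // Connecting the endpoint to itself creates the loopback.
    { id: 3, method: 'invoke',
      params: { object: endpointId, operation: 'connect',
                operationParams: { sink: endpointId }, sessionId: sessionId },
      jsonrpc: '2.0' },
    { id: 4, method: 'invoke',
      params: { object: endpointId, operation: 'processOffer',
                operationParams: { offer: sdpOffer }, sessionId: sessionId },
      jsonrpc: '2.0' }
  ];
}
```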
The aim of these tools is to generate the client code and also the glue code needed on the server side. For code generation
it uses Freemarker as template engine. The typical way to use Kurento Module Creator is by running a command like
this:
kurento-module-creator -c <CODEGEN_DIR> -r <ROM_FILE> -t <TEMPLATES_DIR>
Where:
CODEGEN_DIR: Destination directory for generated files.
ROM_FILE: A space-separated list of Kurento Media Element Description (kmd) files or folders containing these
files. As an example, you can take a look at the kmd files within the Kurento Media Server source code.
TEMPLATES_DIR: Directory that contains the template files. As an example, you can take a look at the internal
Java and JavaScript templates.
CHAPTER 23
Advanced Installation Guide
As of Kurento Media Server version 6, in addition to this general configuration file, the specific features of KMS are
tuned as individual modules. Each of these modules has its own configuration file:
/etc/kurento/modules/kurento/MediaElement.conf.ini: Generic parameters for Media Elements.
/etc/kurento/modules/kurento/SdpEndpoint.conf.ini: Parameters for SdpEndpoints (i.e. WebRtcEndpoint and RtpEndpoint).
1270  0  08:52  ?  00:01:00  /usr/bin/kurento-media-server
WebSocket Port
Unless configured otherwise, KMS will open the port 8888 to receive requests and send responses by means of the
Kurento Protocol. To verify whether this port is listening, execute the following command:
sudo netstat -putan | grep kurento
0  :::8888  :::*  LISTEN  1270/kurento-media-server
CHAPTER 24
Working with Nightly Builds
As you can imagine, it is not possible to have the latest stable version and the latest development version of Kurento
Media Server installed at the same time.
Older versions can be manually downloaded from http://ubuntu.kurento.org/pool/main/k/. Notice that dependencies
will be downgraded as required by the old package. For example:
sudo dpkg -i kurento-media-server-dbg_5.1.4~20150528151643.2.g75f094f.trusty_amd64.deb
sudo apt-get -f install
Then, you have to change the dependency in your application's pom.xml to point to a development version. There is
no way in Maven to use the latest development version of an artifact. You have to specify the concrete development
version you want to depend on. To know what the current Kurento Java Client development version is, you can take
a look at the internal Kurento Maven repository and search for the latest version. Then, you have to include in your
application's pom.xml the following dependency:
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>latest-version-SNAPSHOT</version>
</dependency>
If you are using Kurento JavaScript Client from a Node.js application and want to use the latest development version
of this library, you have to change the dependencies section in the application's package.json. You have to
point directly to the development repository, that is:
"dependencies": {
"kurento-client": "Kurento/kurento-client-js#develop"
}
If you are using Kurento JavaScript Client from a browser application with Bower and want to use the latest development version of this library, you have to change the dependencies section in the application's bower.json. You
have to point directly to the development Bower repository, that is:
"dependencies": {
"kurento-client": "develop",
"kurento-utils": "develop"
}
Alternatively, if your browser application is pointing directly to JavaScript libraries from HTML resources, then, you
have to change to development URLs:
<script type="text/javascript"
src="http://builds.kurento.org/dev/master/latest/js/kurento-client.min.js"></script>
CHAPTER 25
Kurento Modules
kms-chroma: Filter that makes transparent a color range in the top layer, revealing another image behind.
sudo apt-get install kms-chroma-6.0
Warning: The plate detector module is a prototype and its results are not always accurate. Consider this if
you are planning to use this module in a production environment.
Custom modules. Extensions to Kurento Media Server which provide new media capabilities. If you are
planning to develop your own custom module, please take a look at the following page:
2. GStreamer module:
kurento-module-scaffold.sh <module_name> <output_directory>
The tool generates the folder tree, all the necessary CMakeLists.txt files, and example Kurento module
descriptor files (.kmd). These files describe our modules: the constructor, the methods, the properties, the events and
the complex types defined by the developer.
Once the kmd files are completed, we can generate code. The tool kurento-module-creator generates glue code
for the server side. From the root directory:
cd build
cmake ..
The following section details how to create your module depending on the filter type you chose (OpenCV or
GStreamer):
OpenCV module
We have four files in src/server/implementation:
ModuleNameImpl.cpp
ModuleNameImpl.hpp
ModuleNameOpenCVImpl.cpp
ModuleNameOpenCVImpl.hpp
The first two files should not be modified. The last two files will contain the logic of your module. The file
ModuleNameOpenCVImpl.cpp contains functions to deal with the methods and the parameters (you must implement the logic). Also, this file contains a function called process. This function will be called with each new frame,
thus you must implement the logic of your filter inside this function.
GStreamer module
In this case, we have two directories inside the src folder. The gst-plugins folder contains the implementation of your GStreamer element (the kurento-module-scaffold generates a dummy filter). Inside the
server/objects folder you have two files:
ModuleNameImpl.cpp
ModuleNameImpl.hpp
In the file ModuleNameImpl.cpp you have to invoke the methods of your GStreamer element. The module logic
will be implemented in the GStreamer element.
For both kinds of modules
If you need extra compilation dependencies you can add compilation rules to the kurento-module-creator using the
function generate_code in the CmakeLists.txt file in src/server. The following parameters are available:
MODELS (required): This parameter receives the folders where the models (.kmd files) are located.
INTERFACE_LIB_EXTRA_SOURCES, INTERFACE_LIB_EXTRA_HEADERS,
INTERFACE_LIB_EXTRA_INCLUDE_DIRS, INTERFACE_LIB_EXTRA_LIBRARIES: These parameters
allow adding additional source code to the static library. Files included in
INTERFACE_LIB_EXTRA_HEADERS will be installed in the system as headers for this library. All
the parameters accept a list as input.
SERVER_IMPL_LIB_EXTRA_SOURCES, SERVER_IMPL_LIB_EXTRA_HEADERS,
SERVER_IMPL_LIB_EXTRA_INCLUDE_DIRS, SERVER_IMPL_LIB_EXTRA_LIBRARIES: These parameters
allow adding additional source code to the interface library. Files included in
SERVER_IMPL_LIB_EXTRA_HEADERS will be installed in the system as headers for this library. All
the parameters accept a list as input.
MODULE_EXTRA_INCLUDE_DIRS, MODULE_EXTRA_LIBRARIES: These parameters allow adding extra
include directories and libraries to the module.
SERVER_IMPL_LIB_FIND_CMAKE_EXTRA_LIBRARIES: This parameter receives a list of strings. Each
string has the format libname[ libversion range] (possible ranges can use the symbols AND OR < <= >
>= ^ and ~):
^ indicates a compatible version, using Semantic Versioning.
~ indicates a similar version, in which only the last indicated version component can change.
SERVER_STUB_DESTINATION (required): The generated code that you may need to modify will be generated in the folder indicated by this parameter.
Once the module logic is implemented and the compilation process is finished, you need to install your module in your
system. You can follow two different ways:
You can generate the Debian package (debuild -us -uc) and install it (dpkg -i), or you can define the
following environment variables in the file /etc/default/kurento:
KURENTO_MODULES_PATH=<module_path>/build/src
GST_PLUGIN_PATH=<module_path>/build/src
Now, you need to generate code for Java or JavaScript to use your module from the client-side.
For Java, from the build directory you have to execute the command
cmake .. -DGENERATE_JAVA_CLIENT_PROJECT=TRUE, which generates a java folder with the client code.
You can run make java_install and your module will be installed in your local Maven repository. To use
the module in your Maven project, you have to add the dependency to the pom.xml file:
<dependency>
<groupId>org.kurento.module</groupId>
<artifactId>modulename</artifactId>
<version>moduleversion</version>
</dependency>
Examples
Simple examples for both kinds of modules are available on GitHub:
OpenCV module
GStreamer module
There are a lot of examples of how to define methods, parameters or events in all our public built-in modules:
kms-pointerdetector
kms-crowddetector
kms-chroma
kms-platedetector
Moreover, all our modules are developed using this methodology; for that reason you can take a look at our main
modules:
kms-core
kms-elements
kms-filters
The following picture shows a schematic view of the Kurento Media Server as described before:
Taking into account the built-in modules, the Kurento toolbox is extended as follows:
The remainder of this page is structured in four sections in which the built-in modules (kms-pointerdetector,
kms-chroma, kms-crowddetector, kms-platedetector) are used to develop simple applications (tutorials) aimed at showing how to use them.
Fig. 25.1: Kurento modules architecture. Kurento Media Server can be extended with built-in modules (crowddetector, pointerdetector, chroma, platedetector) and also with other custom modules.
Fig. 25.2: Extended Kurento Toolbox. The basic Kurento toolbox (left side of the picture) is extended with more
computer vision and augmented reality filters (right side of the picture) provided by the built-in modules.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further
information. In addition, the built-in module kms-pointerdetector-6.0 should also be installed:
sudo apt-get install kms-pointerdetector-6.0
To launch the application, you need to clone the GitHub project where this demo is hosted, and then run the main
class:
git clone https://github.com/Kurento/kurento-tutorial-java.git
cd kurento-tutorial-java/kurento-pointerdetector
git checkout 6.5.0
mvn compile exec:java
The web application starts on port 8443 in the localhost by default. Therefore, open the URL https://localhost:8443/
in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM
executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
This application uses computer vision and augmented reality techniques to detect a pointer in a WebRTC stream based
on color tracking.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is
sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
When the desired color to track is filling that box, a calibration message is sent from the client to the server. This is
done by clicking on the Calibrate blue button of the GUI.
After that, the color of the pointer is tracked in real time by Kurento Media Server. PointerDetectorFilter can
also define regions in the screen, called windows, in which some actions are performed when the pointer enters
(WindowInEvent event) and exits (WindowOutEvent event) the windows. This is implemented
in the server-side logic as follows:
// Media Logic (Media Pipeline and Elements)
UserSession user = new UserSession();
MediaPipeline pipeline = kurento.createMediaPipeline();
user.setMediaPipeline(pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline)
.build();
user.setWebRtcEndpoint(webRtcEndpoint);
users.put(session.getId(), user);
webRtcEndpoint
.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate", JsonUtils
.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(
response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
pointerDetectorFilter = new PointerDetectorFilter.Builder(pipeline,
new WindowParam(5, 5, 30, 30)).build();
pointerDetectorFilter
.addWindow(new PointerDetectorWindowMediaParam("window0",
50, 50, 500, 150));
pointerDetectorFilter
.addWindow(new PointerDetectorWindowMediaParam("window1",
50, 50, 500, 250));
webRtcEndpoint.connect(pointerDetectorFilter);
pointerDetectorFilter.connect(webRtcEndpoint);
pointerDetectorFilter
.addWindowInListener(new EventListener<WindowInEvent>() {
@Override
public void onEvent(WindowInEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "windowIn");
response.addProperty("roiId", event.getWindowId());
try {
session.sendMessage(new TextMessage(response
.toString()));
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
});
pointerDetectorFilter
.addWindowOutListener(new EventListener<WindowOutEvent>() {
@Override
public void onEvent(WindowOutEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "windowOut");
response.addProperty("roiId", event.getWindowId());
try {
session.sendMessage(new TextMessage(response
.toString()));
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
});
// SDP negotiation (offer and answer)
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
// Sending response back to client
JsonObject response = new JsonObject();
response.addProperty("id", "startResponse");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
webRtcEndpoint.gatherCandidates();
The following picture illustrates the pointer tracking in one of the defined windows:
In order to send the calibration message from the client side, this function is used in the JavaScript side of this demo:
function calibrate() {
console.log("Calibrate color");
var message = {
id : 'calibrate'
}
sendMessage(message);
}
When this message is received in the application server side, this code is executed to carry out the calibration:
private void calibrate(WebSocketSession session, JsonObject jsonMessage) {
if (pointerDetectorFilter != null) {
pointerDetectorFilter.trackColorFromCalibrationRegion();
}
}
Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need three dependencies: the Kurento Client Java
dependency (kurento-client), the JavaScript Kurento utility library (kurento-utils) for the client-side, and the pointer
detector module (pointerdetector):
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento.module</groupId>
<artifactId>pointerdetector</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest versions at Maven Central.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further
information. In addition, the built-in module kms-pointerdetector-6.0 should also be installed:
sudo apt-get install kms-pointerdetector-6.0
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
Due to the Same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by
means of a Node.js HTTP server, which can be installed using npm:
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-pointerdetector
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
Kurento Media Server must use WebSockets over SSL/TLS (WSS), so make sure you check this too. It is possible to
locate the KMS in another machine, simply adding the parameter ws_uri to the URL:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
This application uses computer vision and augmented reality techniques to detect a pointer in a WebRTC stream based
on color tracking.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is
sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
filter.addWindow(options, onError);
filter.on ('WindowIn', function (data){
console.log ("Event window in detected in window " + data.windowId);
});
filter.on ('WindowOut', function (data){
console.log ("Event window out detected in window " + data.windowId);
});
console.log("Connecting ...");
client.connect(webRtc, filter, webRtc, function(error) {
if (error) return onError(error);
console.log("WebRtcEndpoint --> Filter --> WebRtcEndpoint");
});
});
});
});
});
The following picture illustrates the pointer tracking in one of the defined windows:
function calibrate() {
if(filter) filter.trackColorFromCalibrationRegion(onError);
}
function onError(error) {
if(error) console.error(error);
}
Note: The TURN and STUN servers to be used can be configured simply adding the parameter ice_servers to
the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.example.net"}]
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
Dependencies
The dependencies of this demo have to be obtained using Bower. These dependencies are defined in
the bower.json file, as follows:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0",
"kurento-module-pointerdetector": "6.5.0"
}
Note: We are in active development. You can find the latest versions at Bower.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further
information. In addition, the built-in module kms-pointerdetector-6.0 should also be installed:
sudo apt-get install kms-pointerdetector-6.0
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-pointerdetector
git checkout 6.5.0
npm install
If you have problems installing any of the dependencies, please remove them and clean the npm cache, and try to
install them again:
rm -r node_modules
npm cache clean
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS in another machine, simply adding the argument ws_uri to the
npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
This application uses computer vision and augmented reality techniques to detect a pointer in a WebRTC stream based
on color tracking.
The interface of the application (an HTML web page) is composed by two HTML5 video tags: one for the video
camera stream (the local client-side stream) and other for the mirror (the remote stream). The video camera stream is
sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed by the following Media Element s:
This example is a modified version of the Magic Mirror tutorial. In this case, this demo uses a PointerDetector instead
of FaceOverlay filter.
In order to perform pointer detection, there must be a calibration stage, in which the color of the pointer is registered
by the filter. To accomplish this step, the pointer should be placed in a square in the upper left corner of the video, as
follows:
if (error) {
return callback(error);
}
createMediaElements(pipeline, ws, function(error, webRtcEndpoint, filter) {
if (error) {
pipeline.release();
return callback(error);
}
if (candidatesQueue[sessionId]) {
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
connectMediaElements(webRtcEndpoint, filter, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
filter.on('WindowIn', function (_data) {
return callback(null, 'WindowIn', _data);
});
filter.on('WindowOut', function (_data) {
return callback(null, 'WindowOut', _data);
});
var options1 = PointerDetectorWindowMediaParam({
id: 'window0',
height: 50,
width: 50,
upperRightX: 500,
upperRightY: 150
});
filter.addWindow(options1, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
});
var options2 = PointerDetectorWindowMediaParam({
id: 'window1',
height: 50,
width:50,
upperRightX: 500,
upperRightY: 250
});
filter.addWindow(options2, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
pipeline.release();
return callback(error);
}
sessions[sessionId] = {
'pipeline' : pipeline,
'webRtcEndpoint' : webRtcEndpoint,
'pointerDetector' : filter
}
return callback(null, 'sdpAnswer', sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
return callback(error);
}
});
});
});
});
});
}
function createMediaElements(pipeline, ws, callback) {
pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
if (error) {
return callback(error);
}
var options = {
calibrationRegion: WindowParam({
topRightCornerX: 5,
topRightCornerY:5,
width:30,
height: 30
})
};
pipeline.create('PointerDetectorFilter', options, function(error, filter) {
if (error) {
return callback(error);
}
return callback(null, webRtcEndpoint, filter);
});
});
}
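The per-session candidate queue used in the listing above is a common pattern: ICE candidates received from the browser before the WebRtcEndpoint exists are buffered, then flushed once the endpoint is ready. A minimal self-contained sketch of that pattern (the helper names are ours, not part of the Kurento API):

```javascript
// Candidates that arrive before the WebRtcEndpoint exists are queued per
// session, then drained once the endpoint has been created.
const candidatesQueue = {};

// Called for every ICE candidate received over the WebSocket.
function onIceCandidate(sessionId, candidate, endpoint) {
  if (endpoint) {
    // Endpoint already created: hand the candidate over immediately.
    endpoint.addIceCandidate(candidate);
  } else {
    // Endpoint not ready yet: keep the candidate for later.
    (candidatesQueue[sessionId] = candidatesQueue[sessionId] || []).push(candidate);
  }
}

// Called right after pipeline.create('WebRtcEndpoint', ...) succeeds.
function drainCandidates(sessionId, endpoint) {
  const queue = candidatesQueue[sessionId] || [];
  while (queue.length) {
    endpoint.addIceCandidate(queue.shift());
  }
}
```

This is exactly what the `while(candidatesQueue[sessionId].length)` loop in the listing does before `connectMediaElements` is invoked.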
The following picture illustrates the pointer tracking in one of the defined windows:
Dependencies
Dependencies of this demo are managed using NPM. Our main dependency is the Kurento Client JavaScript (kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the following section:
"dependencies": {
"kurento-utils" : "6.5.0",
"kurento-module-pointerdetector": "6.5.0"
}
Note: We are in active development. You can find the latest versions at npm and Bower.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further information. In addition, the built-in module kms-chroma-6.0 should also be installed:
sudo apt-get install kms-chroma-6.0
To launch the application, you need to clone the GitHub project where this demo is hosted, and then run the main
class:
git clone https://github.com/Kurento/kurento-tutorial-java.git
cd kurento-tutorial-java/kurento-chroma
git checkout 6.5.0
mvn compile exec:java
The web application starts on port 8443 in localhost by default. Therefore, open the URL https://localhost:8443/ in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
This application uses computer vision and augmented reality techniques to detect a chroma in a WebRTC stream based
on color tracking.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:
Once the calibration is finished, the square disappears and the chroma is substituted with the configured image. Take into account that this process requires good lighting conditions; otherwise, the chroma substitution will not be perfect. This behavior can be seen in the upper right corner of the following screenshot:
response.add("candidate", JsonUtils
.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(
response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
ChromaFilter chromaFilter = new ChromaFilter.Builder(pipeline,
new WindowParam(5, 5, 40, 40)).build();
String appServerUrl = System.getProperty("app.server.url",
ChromaApp.DEFAULT_APP_SERVER_URL);
chromaFilter.setBackground(appServerUrl + "/img/mario.jpg");
webRtcEndpoint.connect(chromaFilter);
chromaFilter.connect(webRtcEndpoint);
// SDP negotiation (offer and answer)
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
// Sending response back to client
JsonObject response = new JsonObject();
response.addProperty("id", "startResponse");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
webRtcEndpoint.gatherCandidates();
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
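The signaling messages this handler sends back to the browser have a small, fixed JSON shape. A sketch of the two message builders (the helper names are ours; the shapes are taken from the snippet above):

```javascript
// Builds the response to the client's "start" request, carrying the SDP
// answer produced by processOffer().
function startResponse(sdpAnswer) {
  return JSON.stringify({ id: 'startResponse', sdpAnswer: sdpAnswer });
}

// Wraps a gathered ICE candidate so the browser can add it to its
// RTCPeerConnection.
function iceCandidateMessage(candidate) {
  return JSON.stringify({ id: 'iceCandidate', candidate: candidate });
}
```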
Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need three dependencies: the Kurento Client Java dependency (kurento-client), the JavaScript Kurento utility library (kurento-utils) for the client side, and the chroma module (chroma):
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento.module</groupId>
<artifactId>chroma</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest versions at Maven Central.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further information. In addition, the built-in module kms-chroma-6.0 should also be installed:
sudo apt-get install kms-chroma-6.0
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
Due to the Same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by means of a Node.js HTTP server, which can be installed using npm:
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-chroma
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. Kurento Media Server must use WebSockets over SSL/TLS (WSS), so make sure you check this too. It is possible to locate the KMS in another machine, simply adding the parameter ws_uri to the URL:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
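For reference, reading such a ws_uri query parameter with a fallback default can be sketched as follows (our own illustration, not the tutorial's code; the fallback URI is a placeholder):

```javascript
// Extract the ws_uri query parameter from the page URL; if absent, fall
// back to a default KMS WebSocket URI.
function getWsUri(href, fallbackUri) {
  const wsUri = new URL(href).searchParams.get('ws_uri');
  return wsUri !== null ? wsUri : fallbackUri;
}
```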
This application uses computer vision and augmented reality techniques to detect a chroma in a WebRTC stream based
on color tracking.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:
});
console.log("Got WebRtcEndpoint");
var options =
{
window: WindowParam({
topRightCornerX: 5,
topRightCornerY: 5,
width: 30,
height: 30
})
}
pipeline.create('ChromaFilter', options, function(error, filter) {
if (error) return onError(error);
console.log("Got Filter");
filter.setBackground(args.bg_uri, function(error) {
if (error) return onError(error);
console.log("Set Image");
});
client.connect(webRtc, filter, webRtc, function(error) {
if (error) return onError(error);
console.log("WebRtcEndpoint --> filter --> WebRtcEndpoint");
});
});
});
});
});
Note: The TURN and STUN servers to be used can be configured simply by adding the parameter ice_servers to the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","cre
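The value of ice_servers is a JSON array of ICE server descriptions. Decoding it can be sketched as follows (our own illustration, not tutorial code):

```javascript
// Read and parse the ice_servers query parameter; returns an empty array
// when the parameter is absent.
function parseIceServers(href) {
  const raw = new URL(href).searchParams.get('ice_servers');
  return raw === null ? [] : JSON.parse(raw);
}
```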
Dependencies
The dependencies of this demo have to be obtained using Bower. These dependencies are defined in the bower.json file, as follows:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0",
"kurento-module-chroma": "6.5.0"
}
bower install
Note: We are in active development. You can find the latest versions at Bower.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further information. In addition, the built-in module kms-chroma-6.0 should also be installed:
sudo apt-get install kms-chroma-6.0
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-chroma
git checkout 6.5.0
npm install
If you have problems installing any of the dependencies, please remove them, clean the npm cache, and try to install them again:
rm -r node_modules
npm cache clean
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. However, it is possible to connect to a remote KMS in another machine, simply adding the argument ws_uri to the npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
This application uses computer vision and augmented reality techniques to detect a chroma in a WebRTC stream based
on color tracking.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
connectMediaElements(webRtcEndpoint, filter, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
pipeline.release();
return callback(error);
}
sessions[sessionId] = {
'pipeline' : pipeline,
'webRtcEndpoint' : webRtcEndpoint
}
return callback(null, sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
return callback(error);
}
});
});
});
});
});
}
function createMediaElements(pipeline, ws, callback) {
pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
if (error) {
return callback(error);
}
var options = {
window: kurento.register.complexTypes.WindowParam({
topRightCornerX: 5,
topRightCornerY: 5,
width: 30,
height: 30
})
}
Dependencies
Dependencies of this demo are managed using NPM. Our main dependency is the Kurento Client JavaScript (kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the following section:
"dependencies": {
"kurento-utils" : "6.5.0",
"kurento-module-chroma": "6.5.0"
}
Note: We are in active development. You can find the latest versions at npm and Bower.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further information. In addition, the built-in module kms-crowddetector-6.0 should also be installed:
sudo apt-get install kms-crowddetector-6.0
To launch the application, you need to clone the GitHub project where this demo is hosted, and then run the main
class:
git clone https://github.com/Kurento/kurento-tutorial-java.git
cd kurento-tutorial-java/kurento-crowddetector
git checkout 6.5.0
mvn compile exec:java
The web application starts on port 8443 in localhost by default. Therefore, open the URL https://localhost:8443/ in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. However, it is possible to connect to a remote KMS in another machine, simply adding the flag kms.url to the JVM executing the demo. As we'll be using Maven, you should execute the following command:
mvn compile exec:java -Dkms.url=ws://kms_host:kms_port/kurento
This application uses computer vision and augmented reality techniques to detect a crowd in a WebRTC stream.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:
The complete source code of this demo can be found in GitHub.
This example is a modified version of the Magic Mirror tutorial. In this case, the demo uses a CrowdDetector filter instead of the FaceOverlay filter.
To set up a CrowdDetectorFilter, first we need to define one or more regions of interest (ROIs). A ROI determines the zone within the video stream that is going to be monitored and analyzed by the filter. To define a ROI,
CrowdDetectorOccupancyEvent. Event raised when a level of occupancy is detected in a ROI. Occupancy can be seen as the level of agglomeration in the stream.
CrowdDetectorDirectionEvent. Event raised when a movement direction is detected in a ROI by a crowd.
Both fluidity and occupancy are quantified on a relative scale from 0 to 100%. Then, both attributes are qualified into three categories: i) Minimum (min); ii) Medium (med); iii) Maximum (max).
Regarding direction, it is quantified as an angle (0-360), where 0 is the direction from the central point of the video to the top (i.e., north), 90 corresponds to the direction to the right (east), 180 is the south, and finally 270 is the west.
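Under this convention, mapping a reported angle to a compass label is straightforward. The helper below is our own illustrative sketch, not part of the Kurento API; in particular, the sign convention we apply for the counterclockwise offset (see opticalFlowAngleOffset) is an assumption:

```javascript
// Map a direction angle (degrees) to a compass label, where 0=north,
// 90=east, 180=south, 270=west. offsetCCW is an assumed counterclockwise
// axis offset, subtracted before normalization.
function directionLabel(angle, offsetCCW = 0) {
  // Normalize to [0, 360) after applying the offset.
  const a = ((angle - offsetCCW) % 360 + 360) % 360;
  if (a >= 315 || a < 45) return 'north';
  if (a < 135) return 'east';
  if (a < 225) return 'south';
  return 'west';
}
```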
With all these concepts, now we can check out the Java server-side code of this demo. As depicted in the snippet
below, we create a ROI by adding RelativePoint instances to a list. Each ROI is then stored into a list of
RegionOfInterest instances.
Then, each ROI should be configured. To do that, we have the following methods:
setFluidityLevelMin: Fluidity level (0-100%) for the category minimum.
setFluidityLevelMed: Fluidity level (0-100%) for the category medium.
setFluidityLevelMax: Fluidity level (0-100%) for the category maximum.
setFluidityNumFramesToEvent: Number of consecutive frames detecting a fluidity level needed to raise an event.
setOccupancyLevelMin: Occupancy level (0-100%) for the category minimum.
setOccupancyLevelMed: Occupancy level (0-100%) for the category medium.
setOccupancyLevelMax: Occupancy level (0-100%) for the category maximum.
setOccupancyNumFramesToEvent: Number of consecutive frames detecting an occupancy level needed to raise an event.
setSendOpticalFlowEvent: Boolean value that indicates whether or not direction events are going to be tracked by the filter. Be careful with this feature, since it is very demanding in terms of resource usage (CPU, memory) in the media server. Set this parameter to true only when you are going to need direction events on your client side.
setOpticalFlowNumFramesToEvent: Number of consecutive frames detecting a direction needed to raise an event.
setOpticalFlowNumFramesToReset: Number of consecutive frames detecting an occupancy level after which the counter is reset.
setOpticalFlowAngleOffset: Counterclockwise offset of the angle. This parameter is useful to move the default axis for directions (0=north, 90=east, 180=south, 270=west).
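The three-category qualification described above can be sketched as a plain function. This is our own illustration of the thresholding idea, not Kurento code; the exact comparison semantics inside KMS may differ:

```javascript
// Qualify a relative metric (0-100%) against the configured min/med/max
// thresholds, returning the highest category reached.
function qualifyLevel(percentage, levelMin, levelMed, levelMax) {
  if (percentage >= levelMax) return 'max';
  if (percentage >= levelMed) return 'med';
  if (percentage >= levelMin) return 'min';
  return 'none';
}
```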
All in all, the media pipeline of this demo is implemented as follows:
// Media Logic (Media Pipeline and Elements)
MediaPipeline pipeline = kurento.createMediaPipeline();
pipelines.put(session.getId(), pipeline);
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(pipeline)
.build();
webRtcEndpoint
.addOnIceCandidateListener(new EventListener<OnIceCandidateEvent>() {
@Override
public void onEvent(OnIceCandidateEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "iceCandidate");
response.add("candidate",
JsonUtils.toJsonObject(event.getCandidate()));
try {
synchronized (session) {
session.sendMessage(new TextMessage(response
.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
List<RegionOfInterest> rois = new ArrayList<>();
List<RelativePoint> points = new ArrayList<RelativePoint>();
points.add(new RelativePoint(0, 0));
points.add(new RelativePoint(0.5F, 0));
points.add(new RelativePoint(0.5F, 0.5F));
points.add(new RelativePoint(0, 0.5F));
}
});
crowdDetectorFilter.addCrowdDetectorFluidityListener(
new EventListener<CrowdDetectorFluidityEvent>() {
@Override
public void onEvent(CrowdDetectorFluidityEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "fluidityEvent");
response.addProperty("roiId", event.getRoiID());
response.addProperty("level",
event.getFluidityLevel());
response.addProperty("percentage",
event.getFluidityPercentage());
try {
session.sendMessage(new TextMessage(response
.toString()));
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
});
crowdDetectorFilter.addCrowdDetectorOccupancyListener(
new EventListener<CrowdDetectorOccupancyEvent>() {
@Override
public void onEvent(CrowdDetectorOccupancyEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "occupancyEvent");
response.addProperty("roiId", event.getRoiID());
response.addProperty("level",
event.getOccupancyLevel());
response.addProperty("percentage",
event.getOccupancyPercentage());
try {
session.sendMessage(new TextMessage(response
.toString()));
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
});
// SDP negotiation (offer and answer)
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
// Sending response back to client
JsonObject response = new JsonObject();
response.addProperty("id", "startResponse");
response.addProperty("sdpAnswer", sdpAnswer);
session.sendMessage(new TextMessage(response.toString()));
webRtcEndpoint.gatherCandidates();
Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento dependencies are declared. As the following snippet shows, we need three dependencies: the Kurento Client Java dependency (kurento-client), the JavaScript Kurento utility library (kurento-utils) for the client side, and the crowd detector module (crowddetector):
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento.module</groupId>
<artifactId>crowddetector</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest versions at Maven Central.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further information. In addition, the built-in module kms-crowddetector-6.0 should also be installed:
sudo apt-get install kms-crowddetector-6.0
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
Due to the Same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by means of a Node.js HTTP server, which can be installed using npm:
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
git clone https://github.com/Kurento/kurento-tutorial-js.git
cd kurento-tutorial-js/kurento-crowddetector
git checkout 6.5.0
bower install
http-server -p 8443 -S -C keys/server.crt -K keys/server.key
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. Kurento Media Server must use WebSockets over SSL/TLS (WSS), so make sure you check this too. It is possible to locate the KMS in another machine, simply adding the parameter ws_uri to the URL:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
This application uses computer vision and augmented reality techniques to detect a crowd in a WebRTC stream.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:
CrowdDetectorFluidityEvent. Event raised when a certain level of fluidity is detected in a ROI. Fluidity can be
seen as the level of general movement in a crowd.
CrowdDetectorOccupancyEvent. Event raised when a level of occupancy is detected in a ROI. Occupancy can be seen as the level of agglomeration in the stream.
CrowdDetectorDirectionEvent. Event raised when a movement direction is detected in a ROI by a crowd.
Both fluidity and occupancy are quantified on a relative scale from 0 to 100%. Then, both attributes are qualified into three categories: i) Minimum (min); ii) Medium (med); iii) Maximum (max).
Regarding direction, it is quantified as an angle (0-360), where 0 is the direction from the central point of the video to the top (i.e., north), 90 corresponds to the direction to the right (east), 180 is the south, and finally 270 is the west.
With all these concepts, now we can check out the JavaScript code of this demo. As depicted in the snippet below, we create a ROI by adding RelativePoint instances to a list. Each ROI is then stored into a list of RegionOfInterest instances.
Then, each ROI should be configured. To do that, we have the following parameters:
fluidityLevelMin: Fluidity level (0-100%) for the category minimum.
fluidityLevelMed: Fluidity level (0-100%) for the category medium.
fluidityLevelMax: Fluidity level (0-100%) for the category maximum.
fluidityNumFramesToEvent: Number of consecutive frames detecting a fluidity level needed to raise an event.
occupancyLevelMin: Occupancy level (0-100%) for the category minimum.
occupancyLevelMed: Occupancy level (0-100%) for the category medium.
occupancyLevelMax: Occupancy level (0-100%) for the category maximum.
occupancyNumFramesToEvent: Number of consecutive frames detecting an occupancy level needed to raise an event.
sendOpticalFlowEvent: Boolean value that indicates whether or not direction events are going to be tracked by the filter. Be careful with this feature, since it is very demanding in terms of resource usage (CPU, memory) in the media server. Set this parameter to true only when you are going to need direction events on your client side.
opticalFlowNumFramesToEvent: Number of consecutive frames detecting a direction needed to raise an event.
opticalFlowNumFramesToReset: Number of consecutive frames detecting an occupancy level after which the counter is reset.
opticalFlowAngleOffset: Counterclockwise offset of the angle. This parameter is useful to move the default axis for directions (0=north, 90=east, 180=south, 270=west).
All in all, the media pipeline of this demo is implemented as follows:
kurentoClient(args.ws_uri, function(error, client) {
if (error) return onError(error);
client.create('MediaPipeline', function(error, p) {
if (error) return onError(error);
pipeline = p;
console.log("Got MediaPipeline");
pipeline.create('WebRtcEndpoint', function(error, webRtc) {
Note: The TURN and STUN servers to be used can be configured simply by adding the parameter ice_servers to the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","cre
Dependencies
The dependencies of this demo have to be obtained using Bower. These dependencies are defined in the bower.json file, as follows:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0",
"kurento-module-crowddetector": "6.5.0"
}
Note: We are in active development. You can find the latest versions at Bower.
Note: This tutorial has been configured to use HTTPS. Follow these instructions for securing your application.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further information. In addition, the built-in module kms-crowddetector-6.0 should also be installed:
sudo apt-get install kms-crowddetector-6.0
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-crowddetector
git checkout 6.5.0
npm install
If you have problems installing any of the dependencies, please remove them, clean the npm cache, and try to install them again:
rm -r node_modules
npm cache clean
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial. However, it is possible to connect to a remote KMS in another machine, simply adding the argument ws_uri to the npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2. To update it you can use this command:
sudo npm install npm -g
This application uses computer vision and augmented reality techniques to detect a crowd in a WebRTC stream.
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this, we need to create a Media Pipeline composed of the following Media Elements:
The complete source code of this demo can be found in GitHub.
This example is a modified version of the Magic Mirror tutorial. In this case, the demo uses a CrowdDetector filter instead of the FaceOverlay filter.
CrowdDetectorFluidityEvent. Event raised when a certain level of fluidity is detected in a ROI. Fluidity can be
seen as the level of general movement in a crowd.
CrowdDetectorOccupancyEvent. Event raised when a level of occupancy is detected in a ROI. Occupancy can be seen as the level of agglomeration in the stream.
CrowdDetectorDirectionEvent. Event raised when a movement direction is detected in a ROI by a crowd.
Both fluidity and occupancy are quantified on a relative scale from 0 to 100%. Then, both attributes are qualified into three categories: i) Minimum (min); ii) Medium (med); iii) Maximum (max).
Regarding direction, it is quantified as an angle (0-360), where 0 is the direction from the central point of the video to the top (i.e., north), 90 corresponds to the direction to the right (east), 180 is the south, and finally 270 is the west.
With all these concepts, now we can check out the Node.js server-side code of this demo. As depicted in the snippet below, we create a ROI by adding RelativePoint instances to a list. Each ROI is then stored into a list of RegionOfInterest instances.
Then, each ROI should be configured. To do that, we have the following parameters:
fluidityLevelMin: Fluidity level (0-100%) for the category minimum.
fluidityLevelMed: Fluidity level (0-100%) for the category medium.
fluidityLevelMax: Fluidity level (0-100%) for the category maximum.
fluidityNumFramesToEvent: Number of consecutive frames detecting a fluidity level needed to raise an event.
occupancyLevelMin: Occupancy level (0-100%) for the category minimum.
occupancyLevelMed: Occupancy level (0-100%) for the category medium.
occupancyLevelMax: Occupancy level (0-100%) for the category maximum.
occupancyNumFramesToEvent: Number of consecutive frames detecting an occupancy level needed to raise an event.
sendOpticalFlowEvent: Boolean value that indicates whether or not direction events are going to be tracked by the filter. Be careful with this feature, since it is very demanding in terms of resource usage (CPU, memory) in the media server. Set this parameter to true only when you are going to need direction events on your client side.
opticalFlowNumFramesToEvent: Number of consecutive frames detecting a direction needed to raise an event.
opticalFlowNumFramesToReset: Number of consecutive frames detecting an occupancy level after which the counter is reset.
opticalFlowAngleOffset: Counterclockwise offset of the angle. This parameter is useful to move the default axis for directions (0=north, 90=east, 180=south, 270=west).
All in all, the media pipeline of this demo is implemented as follows:
function start(sessionId, ws, sdpOffer, callback) {
if (!sessionId) {
return callback('Cannot use undefined sessionId');
}
getKurentoClient(function(error, kurentoClient) {
if (error) {
return callback(error);
}
kurentoClient.create('MediaPipeline', function(error, pipeline) {
if (error) {
return callback(error);
}
createMediaElements(pipeline, ws, function(error, webRtcEndpoint, filter) {
if (error) {
pipeline.release();
return callback(error);
}
if (candidatesQueue[sessionId]) {
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
connectMediaElements(webRtcEndpoint, filter, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
filter.on('CrowdDetectorDirection', function (_data){
return callback(null, 'crowdDetectorDirection', _data);
});
filter.on('CrowdDetectorFluidity', function (_data){
return callback(null, 'crowdDetectorFluidity', _data);
});
filter.on('CrowdDetectorOccupancy', function (_data){
return callback(null, 'crowdDetectorOccupancy', _data);
});
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
pipeline.release();
return callback(error);
}
sessions[sessionId] = {
'pipeline' : pipeline,
'webRtcEndpoint' : webRtcEndpoint
}
return callback(null, 'sdpAnswer', sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
return callback(error);
}
});
});
});
});
});
}
function createMediaElements(pipeline, ws, callback) {
pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
if (error) {
return callback(error);
}
var options = {
rois: [
RegionOfInterest({
id: 'roi1',
points: [
RelativePoint({x: 0 , y: 0 }),
RelativePoint({x: 0.5, y: 0 }),
RelativePoint({x: 0.5, y: 0.5}),
RelativePoint({x: 0 , y: 0.5})
],
regionOfInterestConfig: RegionOfInterestConfig({
occupancyLevelMin: 10,
occupancyLevelMed: 35,
occupancyLevelMax: 65,
occupancyNumFramesToEvent: 5,
fluidityLevelMin: 10,
fluidityLevelMed: 35,
fluidityLevelMax: 65,
fluidityNumFramesToEvent: 5,
sendOpticalFlowEvent: false,
opticalFlowNumFramesToEvent: 3,
opticalFlowNumFramesToReset: 3,
opticalFlowAngleOffset: 0
})
})
]
}
pipeline.create('CrowdDetectorFilter', options, function(error, filter) {
if (error) {
return callback(error);
}
return callback(null, webRtcEndpoint, filter);
});
});
}
Dependencies
Dependencies of this demo are managed using NPM. Our main dependency is the Kurento Client JavaScript
(kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the
following section:
"dependencies": {
"kurento-utils" : "6.5.0",
"kurento-module-crowddetector": "6.5.0"
}
Note: We are in active development. You can find the latest versions at npm and Bower.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further
information. In addition, the built-in module kms-platedetector-6.0 should also be installed:
sudo apt-get install kms-platedetector-6.0
Warning: The plate detector module is a prototype and its results are not always accurate. Consider this if you are
planning to use this module in a production environment.
To launch the application, you need to clone the GitHub project where this demo is hosted, and then run the main
class:
git clone https://github.com/Kurento/kurento-tutorial-java.git
cd kurento-tutorial-java/kurento-platedetector
git checkout 6.5.0
mvn compile exec:java
The web application starts on port 8443 on localhost by default. Therefore, open the URL https://localhost:8443/
in a WebRTC compliant browser (Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine, simply adding the flag kms.url to the JVM
executing the demo. As we are using Maven, this is done by passing it as a system property (-Dkms.url) on the mvn
command line.
This application uses computer vision and augmented reality techniques to detect a plate in a WebRTC stream
using optical character recognition (OCR).
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream
is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
synchronized (session) {
session.sendMessage(new TextMessage(
response.toString()));
}
} catch (IOException e) {
log.debug(e.getMessage());
}
}
});
PlateDetectorFilter plateDetectorFilter = new PlateDetectorFilter.Builder(
pipeline).build();
webRtcEndpoint.connect(plateDetectorFilter);
plateDetectorFilter.connect(webRtcEndpoint);
plateDetectorFilter
.addPlateDetectedListener(new EventListener<PlateDetectedEvent>() {
@Override
public void onEvent(PlateDetectedEvent event) {
JsonObject response = new JsonObject();
response.addProperty("id", "plateDetected");
response.addProperty("plate", event.getPlate());
try {
session.sendMessage(new TextMessage(response
.toString()));
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
});
// SDP negotiation (offer and answer)
String sdpOffer = jsonMessage.get("sdpOffer").getAsString();
String sdpAnswer = webRtcEndpoint.processOffer(sdpOffer);
// Sending response back to client
JsonObject response = new JsonObject();
response.addProperty("id", "startResponse");
response.addProperty("sdpAnswer", sdpAnswer);
synchronized (session) {
session.sendMessage(new TextMessage(response.toString()));
}
webRtcEndpoint.gatherCandidates();
} catch (Throwable t) {
sendError(session, t.getMessage());
}
}
Dependencies
This Java Spring application is implemented using Maven. The relevant part of the pom.xml is where Kurento
dependencies are declared. As the following snippet shows, we need three dependencies: the Kurento Client Java
dependency (kurento-client), the JavaScript Kurento utility library (kurento-utils) for the client side, and the plate
detector module (platedetector):
<dependencies>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-client</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento</groupId>
<artifactId>kurento-utils-js</artifactId>
<version>6.5.0</version>
</dependency>
<dependency>
<groupId>org.kurento.module</groupId>
<artifactId>platedetector</artifactId>
<version>6.5.0</version>
</dependency>
</dependencies>
Note: We are in active development. You can find the latest versions at Maven Central.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further
information. In addition, the built-in module kms-platedetector-6.0 should also be installed:
sudo apt-get install kms-platedetector-6.0
Warning: The plate detector module is a prototype and its results are not always accurate. Consider this if you are
planning to use this module in a production environment.
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
Due to Same-origin policy, this demo has to be served by an HTTP server. A very simple way of doing this is by
means of an HTTP Node.js server which can be installed using npm :
sudo npm install http-server -g
You also need the source code of this demo. You can clone it from GitHub. Then start the HTTP server:
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
Kurento Media Server must use WebSockets over SSL/TLS (WSS), so make sure you check this too. It is possible to
locate the KMS at another machine, simply adding the parameter ws_uri to the URL:
https://localhost:8443/index.html?ws_uri=wss://kms_host:kms_port/kurento
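Inside the demo's JavaScript, this parameter can be picked up from the page URL with a small helper. The sketch below is illustrative only: the function name and the default URI are our assumptions, not part of the tutorial code.

```javascript
// Illustrative helper: extract the ws_uri query parameter from a search
// string (e.g. window.location.search), falling back to a default URI.
function getWsUri(search, defaultUri) {
  var match = /[?&]ws_uri=([^&]+)/.exec(search || '');
  return match ? decodeURIComponent(match[1]) : defaultUri;
}

// The resulting URI would then be handed to the Kurento JavaScript client,
// roughly: kurentoClient(getWsUri(location.search, defaultUri), callback);
```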
This application uses computer vision and augmented reality techniques to detect a plate in a WebRTC stream
using optical character recognition (OCR).
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream
is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
console.log("Got MediaPipeline");
pipeline.create('WebRtcEndpoint', function(error, webRtc) {
if (error) return onError(error);
console.log("Got WebRtcEndpoint");
setIceCandidateCallbacks(webRtcPeer, webRtc, onError)
webRtc.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) return onError(error);
console.log("SDP answer obtained. Processing...");
webRtc.gatherCandidates(onError);
webRtcPeer.processAnswer(sdpAnswer);
});
pipeline.create('PlateDetectorFilter', function(error, filter) {
if (error) return onError(error);
console.log("Got Filter");
filter.on('PlateDetected', function (data){
console.log("License plate detected " + data.plate);
});
client.connect(webRtc, filter, webRtc, function(error) {
if (error) return onError(error);
console.log("WebRtcEndpoint --> filter --> WebRtcEndpoint");
});
});
});
});
});
Note: The TURN and STUN servers to be used can be configured by simply adding the parameter ice_servers to
the application URL, as follows:
https://localhost:8443/index.html?ice_servers=[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.example.net"}]
https://localhost:8443/index.html?ice_servers=[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
Dependencies
The dependencies of this demo have to be obtained using Bower. These dependencies are defined in
the bower.json file, as follows:
"dependencies": {
"kurento-client": "6.5.0",
"kurento-utils": "6.5.0",
"kurento-module-platedetector": "6.5.0"
}
To get these dependencies, just run:
bower install
Note: We are in active development. You can find the latest versions at Bower.
First of all, you should install Kurento Media Server to run this demo. Please visit the installation guide for further
information. In addition, the built-in module kms-platedetector-6.0 should also be installed:
sudo apt-get install kms-platedetector-6.0
Warning: The plate detector module is a prototype and its results are not always accurate. Consider this if you are
planning to use this module in a production environment.
Be sure to have installed Node.js and Bower in your system. In an Ubuntu machine, you can install both as follows:
curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs
sudo npm install -g bower
To launch the application, you need to clone the GitHub project where this demo is hosted, install it and run it:
git clone https://github.com/Kurento/kurento-tutorial-node.git
cd kurento-tutorial-node/kurento-platedetector
git checkout 6.5.0
npm install
If you have problems installing any of the dependencies, please remove them and clean the npm cache, and try to
install them again:
rm -r node_modules
npm cache clean
Finally, access the application connecting to the URL https://localhost:8443/ through a WebRTC capable browser
(Chrome, Firefox).
Note: These instructions work only if Kurento Media Server is up and running in the same machine as the tutorial.
However, it is possible to connect to a remote KMS on another machine, simply adding the argument ws_uri to the
npm execution command, as follows:
npm start -- --ws_uri=ws://kms_host:kms_port/kurento
In this case you need to use npm version 2; update npm to that version if needed.
This application uses computer vision and augmented reality techniques to detect a plate in a WebRTC stream
using optical character recognition (OCR).
The interface of the application (an HTML web page) is composed of two HTML5 video tags: one for the video
camera stream (the local client-side stream) and the other for the mirror (the remote stream). The video camera stream
is sent to Kurento Media Server, which processes and sends it back to the client as a remote stream. To implement this,
we need to create a Media Pipeline composed of the following Media Elements:
if (candidatesQueue[sessionId]) {
while(candidatesQueue[sessionId].length) {
var candidate = candidatesQueue[sessionId].shift();
webRtcEndpoint.addIceCandidate(candidate);
}
}
connectMediaElements(webRtcEndpoint, filter, function(error) {
if (error) {
pipeline.release();
return callback(error);
}
webRtcEndpoint.on('OnIceCandidate', function(event) {
var candidate = kurento.register.complexTypes.IceCandidate(event.candidate);
ws.send(JSON.stringify({
id : 'iceCandidate',
candidate : candidate
}));
});
filter.on('PlateDetected', function (data){
return callback(null, 'plateDetected', data);
});
webRtcEndpoint.processOffer(sdpOffer, function(error, sdpAnswer) {
if (error) {
pipeline.release();
return callback(error);
}
sessions[sessionId] = {
'pipeline' : pipeline,
'webRtcEndpoint' : webRtcEndpoint
}
return callback(null, 'sdpAnswer', sdpAnswer);
});
webRtcEndpoint.gatherCandidates(function(error) {
if (error) {
return callback(error);
}
});
});
});
});
});
}
function createMediaElements(pipeline, ws, callback) {
pipeline.create('WebRtcEndpoint', function(error, webRtcEndpoint) {
if (error) {
return callback(error);
}
pipeline.create('PlateDetectorFilter', function(error, filter) {
if (error) {
return callback(error);
}
return callback(null, webRtcEndpoint, filter);
});
});
}
Dependencies
Dependencies of this demo are managed using NPM. Our main dependency is the Kurento Client JavaScript
(kurento-client). The relevant part of the package.json file for managing this dependency is:
"dependencies": {
"kurento-client" : "6.5.0"
}
At the client side, dependencies are managed using Bower. Take a look at the bower.json file and pay attention to the
following section:
"dependencies": {
"kurento-utils" : "6.5.0",
"kurento-module-platedetector": "6.5.0"
}
Note: We are in active development. You can find the latest versions at npm and Bower.
CHAPTER 26
WebRTC Statistics
});
});
Once WebRTC statistics are enabled, the second step is reading the statistics values using the method getStats of
a Media Element. For example, to read the statistics of a WebRtcEndpoint object in Java:
WebRtcEndpoint webRtcEndpoint = new WebRtcEndpoint.Builder(mediaPipeline).build();
MediaType mediaType = ... // it can be MediaType.VIDEO, MediaType.AUDIO, or MediaType.DATA
Map<String, Stats> statsMap = webRtcEndpoint.getStats(mediaType);
// ...
Notice that the WebRTC statistics are read as a map. Therefore, each entry of this collection has a key and a value, in
which the key is the specific statistic, with a given value at the reading time. Take into account that these values refer
to real-time properties, and so they vary in time depending on multiple factors (for instance network
performance, KMS load, and so on). The complete description of the statistics is given in the KMD interface
description. The most relevant statistics are listed below:
ssrc: The synchronization source (SSRC).
firCount: Counts the total number of Full Intra Request (FIR) packets received by the sender. This metric is
only valid for video and is sent by the receiver.
pliCount: Counts the total number of Packet Loss Indication (PLI) packets received by the sender and is sent
by the receiver.
nackCount: Counts the total number of Negative ACKnowledgement (NACK) packets received by the sender
and is sent by the receiver.
sliCount: Counts the total number of Slice Loss Indication (SLI) packets received by the sender. This metric
is only valid for video and is sent by the receiver.
remb: The Receiver Estimated Maximum Bitrate (REMB). This metric is only valid for video.
packetsLost: Total number of RTP packets lost for this SSRC.
packetsReceived: Total number of RTP packets received for this SSRC.
bytesReceived: Total number of bytes received for this SSRC.
jitter: Packet jitter measured in seconds for this SSRC.
packetsSent: Total number of RTP packets sent for this SSRC.
bytesSent: Total number of bytes sent for this SSRC.
targetBitrate: Presently configured bitrate target of this SSRC, in bits per second.
roundTripTime: Estimated round trip time (seconds) for this SSRC based on the RTCP timestamps.
audioE2ELatency: End-to-end audio latency measured in nanoseconds.
videoE2ELatency: End-to-end video latency measured in nanoseconds.
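The same stats map can also be read from the JavaScript client. The following sketch assumes an already-created webRtcEndpoint, and summarizes a few of the fields listed above; the summarizeStats helper is ours, not part of the kurento-client API.

```javascript
// Our helper: pick a few of the relevant fields out of a KMS stats map,
// which is keyed by statistic id.
function summarizeStats(statsMap) {
  var summary = {};
  Object.keys(statsMap).forEach(function(key) {
    var s = statsMap[key];
    summary[key] = {
      packetsLost: s.packetsLost,
      jitter: s.jitter,
      bytesSent: s.bytesSent,
      bytesReceived: s.bytesReceived
    };
  });
  return summary;
}

// With kurento-client it would be used roughly as follows:
// webRtcEndpoint.getStats('VIDEO', function(error, statsMap) {
//   if (error) return console.error(error);
//   console.log(summarizeStats(statsMap));
// });
```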
All in all, the process for gathering WebRTC statistics in the KMS can be summarized in two steps: 1) Enable WebRTC
statistics; 2) Read WebRTC statistics. This process is illustrated in the following picture. This diagram also describes
the JSON-RPC messages exchanged between Kurento client and KMS following the Kurento Protocol:
26.1.3 Example
There is a running tutorial which uses the WebRTC statistics gathering described above. This demo has been implemented
using the JavaScript client and it is available on GitHub: kurento-loopback-stats.
From the Media Pipeline point of view, this demo application consists of a WebRtcEndpoint in loopback. Once
the demo is up and running, WebRTC statistics are enabled and gathered with a rate of 1 second.
In addition to the KMS WebRTC statistics, the client-side statistics (i.e. those of the browser WebRTC peer) are also
gathered by the application. This is done using the standard method provided by the peerConnection object, i.e. using
its method getStats. Please check out the JavaScript logic located in the index.js file for implementation details.
Both kinds of WebRTC statistics values (i.e. browser and KMS side) are updated and shown each second in the
application GUI, as follows:
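On the browser side, the filtering step could be sketched as follows. The stat type names follow the W3C statistics identifiers and may vary across browser versions, and the rtpStats helper is ours, not part of the demo code.

```javascript
// Our helper: keep only the RTP-related entries of a stats report,
// modeled here as a plain array of stats objects.
function rtpStats(statsList) {
  return statsList.filter(function(s) {
    return s.type === 'inbound-rtp' || s.type === 'outbound-rtp';
  });
}

// In the browser it would be fed from the standard API, roughly:
// peerConnection.getStats().then(function(report) {
//   var list = [];
//   report.forEach(function(s) { list.push(s); });
//   console.log(rtpStats(list));
// });
```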
CHAPTER 27
Kurento Utils JS
27.1.3 Examples
There are several tutorials that show kurento-utils used in complete WebRTC applications developed in Java, Node.js
and JavaScript. These tutorials are in GitHub, and you can download and run them at any time.
Java - https://github.com/Kurento/kurento-tutorial-java
Node - https://github.com/Kurento/kurento-tutorial-node
JavaScript - https://github.com/Kurento/kurento-tutorial-js
In the following lines we will show how to use the library to create an RTCPeerConnection, and how to negotiate
the connection with another peer. The library offers a WebRtcPeer object, which is a wrapper of the browser's
RTCPeerConnection API. Peer connections can be of different types: unidirectional (send or receive only) or
bidirectional (send and receive). The following code shows how to create the latter, in order to be able to send and
receive media (audio and video). The code assumes that there are two video tags in the page that loads the script.
These tags will be used to show the video as captured by your own client browser, and the media received from the
other peer. The constructor receives a property that holds all the information needed for the configuration.
var videoInput = document.getElementById('videoInput');
var videoOutput = document.getElementById('videoOutput');
var constraints = {
audio: true,
video: {
width: 640,
framerate: 15
}
};
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate : onIceCandidate,
mediaConstraints: constraints
};
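The creation of the peer itself does not appear in the snippet above; it looks roughly like the following sketch, using the bidirectional WebRtcPeerSendrecv flavor of the library. We have factored it into a helper function so it can be shown (and exercised) in isolation; in a real page it is usually written inline.

```javascript
// Sketch: create a bidirectional WebRtcPeer. kurentoUtils is the global
// object exposed by kurento-utils.js in the page.
function startWebRtcPeer(kurentoUtils, options, onOffer, onError) {
  return new kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
    if (error) return onError(error);
    // Once the RTCPeerConnection is ready, generate the SDP offer.
    // The library binds 'this' to the WebRtcPeer instance.
    this.generateOffer(onOffer);
  });
}
```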
With this little code, the library takes care of creating the RTCPeerConnection, and invoking getUserMedia in
the browser if needed. The constraints in the property are used in the invocation, and in this case both microphone and
webcam will be used. However, this does not create the connection. This is only achieved after completing the SDP
negotiation between peers. This process implies exchanging the SDP offer and answer and, since Trickle ICE is used,
a number of candidates describing the capabilities of each peer. How the negotiation works is out of the scope of this
document. More info can be found in this link.
In the previous piece of code, when the webRtcPeer object gets created, the SDP offer is generated with
this.generateOffer(onOffer). The only argument passed is a function that will be invoked once the
browser's peer connection has generated that offer. The onOffer callback method is responsible for sending this
offer to the other peer, by any means devised in your application. Since that is part of the signaling plane and business
logic of each particular application, it won't be covered in this document.
Assuming that the SDP offer has been received by the remote peer, it must have generated an SDP answer, which
should be received in return. This answer must be processed by the webRtcPeer, in order to fulfill the negotiation.
This could be the implementation of the onOffer callback function. We've assumed that there's a function somewhere
in the scope that allows sending the SDP to the remote peer.
function onOffer(error, sdpOffer) {
  if (error) return onError(error);

  // We've made this function up
  sendOfferToRemotePeer(sdpOffer, function(sdpAnswer) {
    webRtcPeer.processAnswer(sdpAnswer);
  });
}
As we've commented before, the library assumes the use of Trickle ICE to complete the connection between both
peers. In the configuration of the webRtcPeer, there is a reference to an onIceCandidate callback function. The
library will use this function to send ICE candidates to the remote peer. Since this is particular to each application, we
will just show the signature:
function onIceCandidate(candidate) {
// Send the candidate to the remote peer
}
In turn, our client application must be able to receive ICE candidates from the remote peer. Assuming the signaling
takes care of receiving those candidates, it is enough to invoke the following method on the webRtcPeer to add
the ICE candidate:
webRtcPeer.addIceCandidate(candidate);
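Putting both directions of signaling together, a dispatcher on the client could look like the following sketch. The message schema (a JSON message with an id field) is an assumption modeled on the tutorials in this document, not a fixed part of the library.

```javascript
// Sketch: route incoming signaling messages to the webRtcPeer.
// The 'iceCandidate' / 'sdpAnswer' ids are assumed message types.
function handleSignalingMessage(webRtcPeer, rawMessage) {
  var message = JSON.parse(rawMessage);
  switch (message.id) {
  case 'iceCandidate':
    webRtcPeer.addIceCandidate(message.candidate);
    return 'candidate';
  case 'sdpAnswer':
    webRtcPeer.processAnswer(message.sdpAnswer);
    return 'answer';
  default:
    return 'ignored';
  }
}
```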
* webcam
* screen
* window
onstreamended: Method that will be invoked when the stream ended event happens
onicecandidate: Method that will be invoked when the ice candidate event happens
oncandidategatheringdone: Method that will be invoked when all candidates have been gathered
simulcast: Indicates whether simulcast is going to be used. Value is true|false
configuration: It is a JSON object where ICE servers are defined using
* iceServers: The format for this variable is like:
[{"urls":"turn:turn.example.org","username":"user","credential":"myPassword"}]
[{"urls":"stun:stun1.example.net"},{"urls":"stun:stun2.example.net"}]
Constraints provide a general control surface that allows applications to both select an appropriate source for a track
and, once selected, to influence how a source operates. getUserMedia() uses constraints to help select an appropriate
source for a track and configure it. For more information about media constraints and their values, you can check
here.
By default, if mediaConstraints is undefined, these constraints are used when getUserMedia is called:
{
audio: true,
video: {
width: 640,
framerate: 15
}
}
If mediaConstraints has any value, the library uses this value for the invocation of getUserMedia. It is up to the
browser whether those constraints are accepted or not.
In the examples section, there is one example about the use of media constraints.
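For instance, to request a higher resolution than the defaults above, the constraints can be overridden through the options object. The sketch below is illustrative: the buildOptions helper is ours, and the 1280/30 values are examples, not requirements.

```javascript
// Illustrative: build a WebRtcPeer options object with custom constraints.
function buildOptions(videoInput, videoOutput, onIceCandidate) {
  return {
    localVideo: videoInput,
    remoteVideo: videoOutput,
    onicecandidate: onIceCandidate,
    mediaConstraints: {
      audio: true,
      video: { width: 1280, framerate: 30 }  // overrides the 640/15 defaults
    }
  };
}
```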
Methods
getPeerConnection Using this method the user can get the peerConnection and use it directly.
showLocalVideo Use this method for showing the local video.
getLocalStream Using this method the user can get the local stream. You can use muted property to silence the
audio, if this property is true.
getRemoteStream Using this method the user can get the remote stream.
getCurrentFrame Using this method the user can get the current frame and get a canvas with an image of the current
frame.
processAnswer Callback function invoked when an SDP answer is received. Developers are expected to invoke this
function in order to complete the SDP negotiation. This method has two parameters:
sdpAnswer: Description of the SDP answer
callback: A function with an error parameter. It is called when the remote description has been set
successfully.
processOffer Callback function invoked when an SDP offer is received. Developers are expected to invoke this
function in order to complete the SDP negotiation. This method has two parameters:
sdpOffer: Description of the SDP offer
callback: A function with error and sdpAnswer parameters. It is called when the remote description
has been set successfully.
dispose This method frees the resources used by WebRtcPeer.
addIceCandidate Callback function invoked when an ICE candidate is received. Developers are expected to invoke
this function in order to complete the SDP negotiation. This method has two parameters:
iceCandidate: Literal object with the ICE candidate description
callback: A function with an error parameter. It is called when the ICE candidate has been added.
getLocalSessionDescriptor Using this method the user can get the peer connection's local session descriptor.
getRemoteSessionDescriptor Using this method the user can get the peer connection's remote session descriptor.
generateOffer Creates an offer that is a request to find a remote peer with a specific configuration.
How to do screen share
Screen and window sharing depends on the proprietary module kurento-browser-extensions. To enable its support,
you'll need to install the package dependency manually or provide a getScreenConstraints function yourself at runtime.
The option sendSource must be set to window or screen before creating the WebRtcPeer. If the module is not
available, trying to share the screen or a window content will throw an exception.
Alternatively, you can download the code using Git and install its dependencies manually:
git clone https://github.com/Kurento/kurento-utils
cd kurento-utils
npm install
CHAPTER 28
kurento-client-java
CHAPTER 29
kurento-client-js
CHAPTER 30
kurento-utils-js
CHAPTER 31
Note: If you plan on using a web server as proxy, like Nginx or Apache, you'll need to call setAllowedOrigins
when registering the handler. Please read the official Spring documentation entry for more info.
var express = require('express');
var ws = require('ws');
var fs = require('fs');
var https = require('https');

var options = {
  key: fs.readFileSync('keys/server.key'),
  cert: fs.readFileSync('keys/server.crt')
};

var app = express();

var server = https.createServer(options, app).listen(port, function() {
  ...
});
...
var wss = new ws.Server({
  server: server,
  path: '/'
});

wss.on('connection', function(ws) {
....
Start application
npm start
To enable secure WebSocket (WSS), edit the configuration file of Kurento Media Server, i.e.
/etc/kurento/kurento.conf.json, and uncomment the secure section:
"secure": {
"port": 8433,
"certificate": "defaultCertificate.pem",
"password": ""
},
If this PEM certificate is a signed certificate (by a Certificate Authority such as Verisign), then you are done. If you
are going to use a self-signed certificate (suitable for development), then there is still more work to do.
You can generate a self signed certificate by doing this:
certtool --generate-privkey --outfile defaultCertificate.pem
echo 'organization = your organization name' > certtool.tmpl
certtool --generate-self-signed --load-privkey defaultCertificate.pem \
--template certtool.tmpl >> defaultCertificate.pem
sudo chown kurento defaultCertificate.pem
Because the certificate is self-signed, applications will reject it by default. For this reason, you'll need to
force them to accept it.
Browser applications: You'll need to manually accept the certificate as a trusted one before secure WebSocket
connections can be established. By default, this can be done by connecting to
https://localhost:8433/kurento and accepting the certificate in the browser.
Java applications: Follow the instructions of this link (get InstallCert.java from here). The KurentoClient
needs to be configured to allow the use of self-signed certificates. For this purpose, we need
to create our own JsonRpcClient:
SslContextFactory sec = new SslContextFactory(true);
sec.setValidateCerts(false);
JsonRpcClientWebSocket rpcClient = new JsonRpcClientWebSocket(uri, sec);
KurentoClient kurentoClient = KurentoClient.createFromJsonRpcClient(rpcClient);
Part VI
Kurento FAQ
This is a list of Frequently Asked Questions about Kurento. Feel free to suggest new entries or different wording for
answers!
CHAPTER 32
How do I...
Modify the DAEMON_ARGS var to take these IPs into account, along with the long-term credentials
user and password (kurento:kurento in this case, but could be different), realm and some other
options:
DAEMON_ARGS="-c /etc/turnserver.conf -f -o -a -v -r kurento.org
-u kurento:kurento --no-stdout-log --external-ip $EXTERNAL_IP/$LOCAL_IP"
4. Then let's enable the turnserver to run as an automatic service daemon. For this, open the file
/etc/default/coturn and uncomment the key:
TURNSERVER_ENABLED=1
5. Now, you have to tell Kurento Media Server where the TURN server is installed. For this, modify the turnURL
key in /etc/kurento/modules/kurento/WebRtcEndpoint.conf.ini:
turnURL=kurento:kurento@<public-ip>:3478
stunServerAddress=<public-ip>
stunServerPort=3478
49152 - 65535 UDP: As per RFC 5766, these are the ports that the TURN server will use to exchange
media. These ports can be changed using the --max-port and --min-port options of the
turnserver.
6. The last thing to do is to start the coturn server and the media server:
sudo service coturn start && sudo service kurento-media-server-6.0 restart
CHAPTER 33
apt-get remove kurento*
apt-get autoremove
apt-get update
apt-get dist-upgrade
apt-get install kurento-media-server-6.0
Part VII
Glossary
This is a glossary of terms that often appear in discussions about multimedia transmissions. Most of the terms are
described and linked to their Wikipedia, RFC or W3C relevant documents. Some of the terms are specific to GStreamer
or Kurento.
Agnostic, Media One of the big problems of media is that the number of variants of video and audio codecs, formats
and variants quickly creates high complexity in heterogeneous applications. So Kurento developed the concept
of an automatic converter of media formats that enables development of agnostic elements. Whenever a media
element's source is connected to another media element's sink, the Kurento framework verifies if media adaptation
and transcoding is necessary and, if needed, it transparently incorporates the appropriate transformations, making
possible the chaining of the two elements into the resulting pipeline.
AVI Audio Video Interleaved, known by its initials AVI, is a multimedia container format introduced by Microsoft
in November 1992 as part of its Video for Windows technology. AVI files can contain both audio and video
data in a file container that allows synchronous audio-with-video playback. AVI is a derivative of the Resource
Interchange File Format (RIFF).
See also:
Wikipedia reference of the AVI format
Wikipedia reference of the RIFF format
Bower Bower is a package manager for the web. It offers a generic solution to the problem of front-end package
management, while exposing the package dependency model via an API that can be consumed by a build stack.
Builder Pattern The builder pattern is an object creation software design pattern whose intention is to find a solution
to the telescoping constructor anti-pattern. The telescoping constructor anti-pattern occurs when the increase of
object constructor parameter combination leads to an exponential list of constructors. Instead of using numerous
constructors, the builder pattern uses another object, a builder, that receives each initialization parameter step by
step and then returns the resulting constructed object at once.
See also:
Wikipedia reference of the Builder Pattern
CORS Cross-Origin Resource Sharing (CORS) is a mechanism that allows JavaScript code on a web page to make
XMLHttpRequests to domains different from the one the JavaScript originated from. It works by adding new HTTP
headers that allow servers to serve resources to permitted origin domains. Browsers support these headers and enforce
the restrictions they establish.
See also:
enable-cors.org for information on the relevance of CORS and how and when to enable it.
DOM, Document Object Model Document Object Model is a cross-platform and language-independent convention
for representing and interacting with objects in HTML, XHTML and XML documents.
EOS Acronym of End Of Stream. In Kurento some elements will raise an EndOfStream event when the media
they are processing is finished.
GStreamer GStreamer is a pipeline-based multimedia framework written in the C programming language.
H.264 A video compression format. The H.264 standard can be viewed as a family of standards composed of a
number of profiles. Each specific decoder deals with at least one such profile, but not necessarily all.
See also:
RFC 6184 RTP Payload Format for H.264 Video. This RFC obsoletes RFC 3984.
HTTP The HyperText Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the
foundation of data communication for the World Wide Web.
See also:
RFC 2616
ICE, Interactive Connectivity Establishment Interactive Connectivity Establishment (ICE) is a technique used to
achieve NAT Traversal. ICE makes use of the STUN protocol and its extension, TURN. ICE can be used by any
protocol utilizing the offer/answer model.
See also:
RFC 5245
Wikipedia reference of ICE
IMS, IP Multimedia Subsystem The IP Multimedia Subsystem (IMS) is the 3GPP's Mobile Architectural Framework for delivering IP Multimedia Services in 3G (and beyond) Mobile Networks.
See also:
RFC 3574
Java EE Java EE, or Java Platform, Enterprise Edition, is a standardised set of APIs for Enterprise software development.
See also:
Oracle Site Java EE Overview
Wikipedia
jQuery jQuery is a cross-platform JavaScript library designed to simplify the client-side scripting of HTML.
JSON JSON (JavaScript Object Notation) is a lightweight data-interchange format. It is designed to be easy to
understand and write for humans and easy to parse for machines.
JSON-RPC JSON-RPC is a simple remote procedure call protocol encoded in JSON. JSON-RPC allows for notifications, and for multiple calls to be sent to the server, which may be answered out of order.
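The shape of JSON-RPC 2.0 messages can be sketched as follows (the "subtract" method is the illustrative example used by the JSON-RPC specification itself, not a Kurento method):

```javascript
// A JSON-RPC 2.0 request: the "id" member correlates the eventual response
// with the call, which is what lets multiple calls be answered out of order.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'subtract',                      // illustrative method name
  params: { minuend: 42, subtrahend: 23 }
};

// The matching response carries the same "id".
const response = { jsonrpc: '2.0', id: 1, result: 19 };

// A notification is a request without an "id": no response is expected.
const notification = { jsonrpc: '2.0', method: 'update', params: [1, 2] };

// On the wire, messages are plain JSON text.
const wire = JSON.stringify(request);
const parsed = JSON.parse(wire);
```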
Kurento Kurento is a platform for the development of multimedia-enabled applications. Kurento is the Esperanto
term for the English word stream. We chose this name because we believe the Esperanto principles are inspiring
for what the multimedia community needs: simplicity, openness and universality. Kurento is open source, released under the Apache 2.0 license, and has several components that provide solutions to most common multimedia
service requirements. Those components include: Kurento Media Server, Kurento API, Kurento Protocol, and Kurento
Client.
Kurento API Kurento API is an object-oriented API used to create media pipelines and control media. It can be seen as
an interface to Kurento Media Server. It can be used through the Kurento Protocol or from Kurento Clients.
Kurento Client A Kurento Client is a programming library (Java or JavaScript) used to control Kurento Media
Server from an application. For example, with this library, any developer can create a web application that uses
Kurento Media Server to receive audio and video from the user web browser, process it and send it back again
over Internet. Kurento Client exposes the Kurento API to app developers.
Kurento Protocol The communication protocol between KMS and its clients, based on JSON-RPC messages. It uses
WebSocket as transport and JSON-RPC v2.0 messages for making requests and sending responses.
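As a rough sketch, a client asks KMS to create a MediaPipeline with a JSON-RPC "create" request sent over the WebSocket; the object and session identifiers below are placeholders. See the Kurento Protocol chapter for the authoritative message formats.

```javascript
// Sketch of a Kurento Protocol exchange (see the Kurento Protocol chapter
// for the full specification). The client sends a JSON-RPC 2.0 "create"
// request over WebSocket asking KMS to instantiate a MediaPipeline.
const createRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'create',
  params: { type: 'MediaPipeline' }
};

// KMS answers with the identifier of the new object plus a session id,
// both shown here as placeholders.
const createResponse = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    value: 'pipeline-id-placeholder',    // placeholder object id
    sessionId: 'session-id-placeholder'  // placeholder session id
  }
};
```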
Kurento Media Server Kurento Media Server is the core element of Kurento, since it is responsible for media transmission, processing, loading and recording.
Maven Maven is a build automation tool used primarily for Java projects.
Media Element A Media Element is a module that encapsulates a specific media capability. For example:
RecorderEndpoint, PlayerEndpoint, etc.
Media Pipeline A Media Pipeline is a chain of media elements, where the output stream generated by one element
(source) is fed into one or more other elements input streams (sinks). Hence, the pipeline represents a machine
capable of performing a sequence of operations over a stream.
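The chaining idea can be sketched with a toy model. This is not the Kurento API; it only illustrates how each element's output feeds the next element's input:

```javascript
// Toy model of a media pipeline (NOT the Kurento API): each element applies
// a transformation to a stream and feeds its output into the next element.
class Element {
  constructor(transform) { this.transform = transform; this.next = null; }
  connect(sink) { this.next = sink; return sink; }  // source -> sink, chainable
  process(stream) {
    const out = this.transform(stream);
    return this.next ? this.next.process(out) : out;
  }
}

// A three-element chain: decode -> filter -> encode.
const decoder = new Element(s => s + ' | decoded');
const filter  = new Element(s => s + ' | filtered');
const encoder = new Element(s => s + ' | encoded');

decoder.connect(filter).connect(encoder);
const result = decoder.process('raw-stream');
// result: 'raw-stream | decoded | filtered | encoded'
```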
Media Plane In a traditional IMS architecture, the handling of media is conceptually split into two layers. The layer that handles the
media itself, with functionalities such as media transport, encoding/decoding, and processing, is called the Media
Plane.
See also:
Signaling Plane
MP4 MPEG-4 Part 14, or MP4, is a digital multimedia format most commonly used to store video and audio, but it can
also store other data such as subtitles and still images.
See also:
Wikipedia definition of MP4
Multimedia Multimedia is concerned with the computer-controlled integration of text, graphics, video, animation,
audio, and any other media where information can be represented, stored, transmitted and processed digitally.
There is a temporal relationship between many forms of media, for instance audio, video and animations. Two
forms of problems are involved:
Sequencing within the media, i.e. playing frames in the correct order or time frame.
Synchronisation, i.e. inter-media scheduling. For example, keeping video and audio synchronized, or displaying captions or subtitles in the required intervals.
See also:
Wikipedia definition of Multimedia
Multimedia container format Container or wrapper formats are metafile formats whose specification describes how
different data elements and metadata coexist in a computer file.
Simpler multimedia container formats can contain different types of audio formats, while more advanced container formats can support multiple audio and video streams, subtitles, chapter information, and metadata, along
with the synchronization information needed to play back the various streams together. In most cases, the file
header, most of the metadata and the synchronization chunks are specified by the container format.
See also:
Wikipedia definition of container formats
NAT, Network Address Translation Network address translation (NAT) is the technique of modifying network address information in Internet Protocol (IP) datagram packet headers while they are in transit across a traffic
routing device for the purpose of remapping one IP address space into another.
See also:
NAT definition at Wikipedia
NAT-T, NAT Traversal NAT traversal (sometimes abbreviated as NAT-T) is a general term for techniques that establish and maintain Internet protocol connections traversing network address translation (NAT) gateways, which
break end-to-end connectivity. Intercepting and modifying traffic can only be performed transparently in the
absence of secure encryption and authentication.
See also:
NAT Traversal White Paper White paper on NAT-T and solutions for end-to-end connectivity in its presence
Node.js Node.js is a cross-platform runtime environment for server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on OS X, Microsoft Windows and
Linux with no changes.
npm npm is the official package manager for Node.js.
OpenCL OpenCL is a standard framework for cross-platform, parallel programming of heterogeneous platforms
consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors
(DSPs), field-programmable gate arrays (FPGAs) and other processors.
OpenCV OpenCV (Open Source Computer Vision Library) is a BSD-licensed open source computer vision and
machine learning software library. OpenCV aims to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception.
Pad, Media A Media Pad is an element's interface with the outside world. Data streams flow from one element's MediaSource
pad to another element's MediaSink pad.
See also:
GStreamer Pad Definition of the Pad structure in GStreamer
PubNub PubNub is a publish/subscribe cloud service for sending and routing data. It streams data to global audiences
on any device using persistent socket connections. PubNub has been designed to deliver data with low latencies
to end-user devices. These devices can be behind firewalls, NAT environments, and other hard-to-reach network
environments. PubNub provides message caching for retransmission of lost signals over unreliable network
environments. This is accomplished by maintaining an always open socket connection to every device.
QR QR code (Quick Response code) is a type of two-dimensional barcode that became popular in the mobile phone
industry due to its fast readability and greater storage capacity compared to standard UPC barcodes.
See also:
Entry in Wikipedia
REST, Representational State Transfer Representational state transfer (REST)
is an architectural style consisting of a coordinated set of constraints applied to components, connectors, and
data elements within a distributed hypermedia system. The term representational state transfer was introduced
and defined in 2000 by Roy Fielding in his doctoral dissertation.
RTCP The RTP Control Protocol (RTCP) is a sister protocol of RTP that provides out-of-band statistics and control information for an RTP
flow.
See also:
RFC 3605
RTP The Real-time Transport Protocol (RTP) is a standard packet format designed for transmitting audio and video streams over IP networks. It is used in
conjunction with the RTP Control Protocol. Transmissions using RTP
typically use SDP to describe the technical parameters of the media streams.
See also:
RFC 3550
Same-origin policy The same-origin policy is a web application security model. The policy permits scripts running on pages originating
from the same site to access each other's DOM with no specific restrictions, but prevents access to the DOM on
different sites.
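"Same origin" can be sketched precisely: two URLs share an origin when scheme, host and port all match. A minimal check using the standard WHATWG URL class (available in browsers and Node.js):

```javascript
// Two URLs share an origin when scheme, host and port all match.
// The URL class normalizes away default ports (e.g. 443 for https).
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

const same = sameOrigin('https://example.com/a', 'https://example.com/b');
const diff = sameOrigin('https://example.com/', 'https://other.com/');
```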
SDP, Session Description Protocol The Session Description Protocol (SDP) describes initialization parameters for a streaming media session. Both
parties of a streaming media session exchange SDP files to negotiate and agree on the parameters to be used for
the streaming.
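A minimal, illustrative SDP description and a sketch of reading its media ("m=") lines, which state each stream's media type, port and transport profile (the addresses and ports below are made up):

```javascript
// A minimal, illustrative SDP description (addresses/ports are made up).
const sdp = [
  'v=0',
  'o=- 20518 0 IN IP4 203.0.113.1',
  's=-',
  'c=IN IP4 203.0.113.1',
  't=0 0',
  'm=audio 49170 RTP/AVP 0',
  'm=video 51372 RTP/AVP 96'
].join('\r\n');

// Each m= line reads: <media> <port> <proto> <formats...>
const mediaLines = sdp.split('\r\n')
  .filter(line => line.startsWith('m='))
  .map(line => {
    const [media, port, proto] = line.slice(2).split(' ');
    return { media, port: Number(port), proto };
  });
// mediaLines describes one audio and one video stream.
```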
See also:
RFC 4566