
UNIT II TELEMEDICAL TECHNOLOGY

Principles of Multimedia – Text, Audio, Video, Data; Data communications and networks; PSTN, POTS, ANT, ISDN, Internet; Air/wireless communications: GSM, satellite, and microwave; Modulation techniques; Types of antenna; Integration and operational issues; Communication infrastructure for telemedicine – LAN and WAN technology; Satellite communication; Mobile handheld devices and mobile communication; Internet technology and telemedicine using the World Wide Web (WWW); Video and audio conferencing; Clinical data – local and centralized.

SINDHU.G,AP,BIOMEDICAL ENGINEERING DEPARTMENT


TCP/IP

TCP/IP Protocol Architecture


TCP/IP protocols map to a four-layer conceptual model known as the DARPA model, named after the U.S.
government agency that initially developed TCP/IP. The four layers of the DARPA model are: Application,
Transport, Internet, and Network Interface. Each layer in the DARPA model corresponds to one or more
layers of the seven-layer Open Systems Interconnection (OSI) model.
Figure 1.1 shows the TCP/IP protocol architecture.



Figure 1.1 TCP/IP Protocol Architecture
Network Interface Layer
The Network Interface layer (also called the Network Access layer) is responsible for placing TCP/IP
packets on the network medium and receiving TCP/IP packets off the network medium. TCP/IP was
designed to be independent of the network access method, frame format, and medium. In this way,
TCP/IP can be used to connect differing network types. These include LAN technologies such as Ethernet
and Token Ring and WAN technologies such as X.25 and Frame Relay. Independence from any specific
network technology gives TCP/IP the ability to be adapted to new technologies such as Asynchronous
Transfer Mode (ATM).
The Network Interface layer encompasses the Data Link and Physical layers of the OSI model. Note that
the Internet layer does not take advantage of sequencing and acknowledgment services that might be
present in the Data Link layer. An unreliable Network Interface layer is assumed; reliable communication
through session establishment and the sequencing and acknowledgment of packets is the responsibility
of the Transport layer.

Internet Layer
The Internet layer is responsible for addressing, packaging, and routing functions. The core protocols of
the Internet layer are IP, ARP, ICMP, and IGMP.

• The Internet Protocol (IP) is a routable protocol responsible for IP addressing, routing, and the fragmentation and reassembly of packets.
• The Address Resolution Protocol (ARP) is responsible for the resolution of the Internet layer address to the Network Interface layer address, such as a hardware address.
• The Internet Control Message Protocol (ICMP) is responsible for providing diagnostic functions and reporting errors due to the unsuccessful delivery of IP packets.
• The Internet Group Management Protocol (IGMP) is responsible for the management of IP multicast groups.
The Internet layer is analogous to the Network layer of the OSI model.



Transport Layer
The Transport layer (also known as the Host-to-Host Transport layer) is responsible for providing the
Application layer with session and datagram communication services. The core protocols of the Transport
layer are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP).

• TCP provides a one-to-one, connection-oriented, reliable communications service. TCP is responsible for the establishment of a TCP connection, the sequencing and acknowledgment of packets sent, and the recovery of packets lost during transmission.
• UDP provides a one-to-one or one-to-many, connectionless, unreliable communications service. UDP is used when the amount of data to be transferred is small (such as the data that would fit into a single packet), when the overhead of establishing a TCP connection is not desired, or when the applications or upper layer protocols provide reliable delivery.
The Transport layer encompasses the responsibilities of the OSI Transport layer and some of the
responsibilities of the OSI Session layer.
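To make the TCP/UDP contrast above concrete, the following minimal sketch uses Python's standard socket module. The host and port values, and the assumption that something is listening on them, are illustrative placeholders rather than part of any particular system.

```python
# Minimal sketch contrasting TCP and UDP with Python's standard socket module.
# The host/port values are illustrative placeholders; the TCP example assumes a
# server is already listening on that port.
import socket

HOST, TCP_PORT, UDP_PORT = "127.0.0.1", 5000, 5001

# TCP: one-to-one, connection-oriented, reliable. A connection is established first,
# and the stack handles sequencing, acknowledgment, and retransmission of lost packets.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.connect((HOST, TCP_PORT))          # connection establishment (three-way handshake)
    tcp_sock.sendall(b"session-oriented data")  # delivered in order, or an error is raised

# UDP: one-to-one or one-to-many, connectionless, unreliable. Each datagram is sent
# independently, with no handshake and no delivery confirmation.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(b"small single-packet payload", (HOST, UDP_PORT))
```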

Application Layer
The Application layer provides applications the ability to access the services of the other layers and
defines the protocols that applications use to exchange data. There are many Application layer protocols
and new protocols are always being developed.
The most widely known Application layer protocols are those used for the exchange of user information:

• The Hypertext Transfer Protocol (HTTP) is used to transfer the files that make up the Web pages of the World Wide Web.
• The File Transfer Protocol (FTP) is used for interactive file transfer.
• The Simple Mail Transfer Protocol (SMTP) is used for the transfer of mail messages and attachments.
• Telnet, a terminal emulation protocol, is used for logging on remotely to network hosts.

Additionally, the following Application layer protocols help facilitate the use and management of TCP/IP networks:

• The Domain Name System (DNS) is used to resolve a host name to an IP address.
• The Routing Information Protocol (RIP) is a routing protocol that routers use to exchange routing information on an IP internetwork.
• The Simple Network Management Protocol (SNMP) is used between a network management console and network devices (routers, bridges, intelligent hubs) to collect and exchange network management information.
Examples of Application layer interfaces for TCP/IP applications are Windows Sockets and NetBIOS.
Windows Sockets provides a standard application programming interface (API) under Windows 2000.
NetBIOS is an industry standard interface for accessing protocol services such as sessions, datagrams, and
name resolution.
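As a small illustration of two of the Application layer protocols listed above, the sketch below performs a DNS lookup and a plain HTTP request using only the Python standard library; "example.com" is just a sample host.

```python
# Sketch of two Application layer protocols in action, using the Python standard library.
# "example.com" is a sample host used only for illustration.
import socket
import http.client

# DNS: resolve a host name to an IP address.
ip_address = socket.gethostbyname("example.com")
print("example.com resolves to", ip_address)

# HTTP: fetch a Web page over a TCP connection to port 80.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print("HTTP status:", response.status, response.reason)
conn.close()
```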
Understanding TCP/IP: Chapter 1 – Introduction to Network Protocols



1. Introduction to Network Protocols

Just as diplomats use diplomatic protocols in their meetings, computers use network protocols to communicate in computer networks. There are many network protocols in existence; TCP/IP is a family of network protocols that are used for the Internet.

A network protocol is a standard written down on a piece of paper (or, more precisely, with a text editor in a computer). The standards that are used for the Internet are called Requests For Comments (RFC). RFCs are numbered from 1 onwards. There are more than 4,500 RFCs today. Many of them have become out of date, so only a handful of the first thousand RFCs are still used today.

The International Organization for Standardization (ISO) has standardized a system of network protocols called ISO OSI. Another organization that issues communication standards is the International Telecommunication Union (ITU), located in Geneva. The ITU was formerly known as the CCITT and, having been founded in 1865, is one of the oldest worldwide organizations (for comparison, the Red Cross was founded in 1863). Some standards are also issued by the Institute of Electrical and Electronics Engineers (IEEE). RFCs, the standards released by RIPE (Réseaux IP Européens), and PKCS (Public Key Cryptography Standards) are freely available on the Internet and are easy to get hold of. Other organizations (ISO, ITU, and so on) do not provide their standards free of charge—you have to pay for them. If that presents a problem, then you have to spend some time doing some library research.

First of all, let’s have a look at why network communication is divided into several protocols. The answer is simple, although this is a very complex problem that reaches across many different professions. Most books concerning network protocols explain the problem using a metaphor of two foreigners (or philosophers, doctors, and so on) trying to communicate with each other. Each of the two can only communicate in his or her respective language. In order for them to be able to communicate with each other, they need a translator, as shown in the following figure:

Figure 1.1: Three-layer communication architecture

The two foreigners exchange ideas, i.e., they communicate. But they only do so
virtually. In reality, they are both handing over information to their interpreters,
who then transmit this information by sending vibrations through the surrounding
air with their vocal cords. Or if the parties are far away from each other, the
interpreters communicate over the phone; thus the information is physically
transmitted over phone lines. We can therefore talk about virtual communication
in the horizontal direction (philosophical communication, the shared language
between interpreters, and electronic signals transmitted via phone lines) and real
communication in the vertical direction (foreigner-to-interpreter and interpreter-to-phone). We can thus distinguish three levels of communication:

1. Between two foreigners
2. Between interpreters
3. Physical transmission of information using media (phone lines, sound waves, etc.)

Communication between the two foreigners and between the two interpreters is
only virtual. In fact, the only real communication happens between the foreigner
and his or her interpreter.

Even more layers are used in computer networks. The number of layers depends on which system of network protocols you choose to use. The system of network protocols is sometimes referred to as the network model. You will most commonly work with the system used on the Internet, which is also referred to as the TCP/IP family. In addition to TCP/IP, we will also come across the ISO OSI model that was standardized by the ISO.



Figure 1.2: Comparison of TCP/IP and ISO OSI network models

The TCP/IP family uses four layers while ISO OSI uses seven layers as shown in the
figure above. The TCP/IP and ISO OSI systems differ from each other significantly,
although they are very similar on the network and transport layers.

Apart from a few exceptions such as SLIP and PPP, the TCP/IP family does not deal with the link and physical layers. Therefore, even on the Internet, we use the link and physical protocols of the ISO OSI model.

1.1 ISO OSI


Communication between two computers is shown in the following figure:



Figure 1.3: Seven-layer architecture of ISO OSI

1.1.1 Physical Layer

The physical layer is responsible for activating the physical circuit between the Data Terminal Equipment (DTE) and the Data Circuit-terminating Equipment (DCE), communicating through it, and then deactivating it. Additionally, the physical layer is also responsible for the communication between DCEs (see Figure 1.3a). A computer or router can represent the DTE. The DCE, on the other hand, is usually represented by a modem or a multiplexer.



Figure 1.3a: DTE and DCE

To put it differently, the physical layer describes the electric or optical signals
used for communicating between two computers. Physical circuits are created on
the physical layer. Other appliances such as modems modulating a signal for a
phone line are often put in the physical circuits created between two computers.

Physical layer protocols specify the following:

• Electrical signals (for example, +1 V)
• Connector shapes (for example, V.35)
• Media type (twisted pair, coaxial cable, optical fiber, etc.)
• Modulation (for example, FM, PM, etc.)
• Coding (for example, RZ, NRZ, etc.)
• Synchronization (synchronous and asynchronous communication, time source, and so on)

1.1.2 Data Link Layer

For serial links, the link layer provides data exchange between neighboring computers; within a local network, it provides data exchange between any computers on that network.

For the link layer, the basic unit of data transfer is the data link packet, or frame (see Figure 1.4). A frame is composed of a header, payload, and trailer.

Figure 1.4: Data link packet or frame

A frame carries the destination link address, source link address, and other
control information in the header. The trailer usually contains the checksum of
the transported data. By using the checksum, we can find out whether the payload has been damaged during transfer. The network-layer packet is usually included in the payload.
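The header/payload/trailer structure and the checksum test described above can be sketched as follows; the field layout is invented purely for illustration (real link protocols such as Ethernet define their own frame formats and CRC rules).

```python
# Illustrative frame = header + payload + trailer, with a CRC-32 checksum in the trailer
# used to detect damage in transit. The layout is invented for this sketch only.
import struct
import zlib

def build_frame(dst_addr: bytes, src_addr: bytes, payload: bytes) -> bytes:
    header = dst_addr + src_addr                    # destination and source link addresses
    crc = zlib.crc32(header + payload)              # checksum over the transported data
    return header + payload + struct.pack("!I", crc)

def frame_is_intact(frame: bytes) -> bool:
    body, trailer = frame[:-4], frame[-4:]
    return struct.unpack("!I", trailer)[0] == zlib.crc32(body)

frame = build_frame(b"\x00\x01\x02\x03\x04\x05", b"\x06\x07\x08\x09\x0a\x0b",
                    b"network-layer packet")
print(frame_is_intact(frame))                       # True: frame arrived undamaged
corrupted = frame[:13] + b"X" + frame[14:]          # flip one payload byte "in transit"
print(frame_is_intact(corrupted))                   # False: checksum mismatch detected
```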

In Figure 1.3a, the link layer does not engage in a conversation between DTE and
DCE (the link layer does not see the DCE). It is engaged, however, in the frame
exchange between DTEs. (It relies on the physical layer to handle the DCE issue.)

The following figure illustrates that different protocols can be used for each end
of the connection on the physical layer. In our case, one of the ends uses the X.21
protocol while the other end uses the V.35 protocol. This rule is valid not only for
serial links, but also for local networks. In local networks, you are more likely to
encounter more complicated setups in which a switch that converts the link
frames of one link protocol into link frames of a second one (for example,
Ethernet into FDDI) is inserted between the two ends of the connection. This
obviously results in different protocols being used on the physical layer.

Figure 1.5: Link layer communication



A serial port or an Ethernet card can serve as a link interface. A link interface has a
link address that is unique within a particular Local Area Network (LAN).

1.1.3 Network Layer

The network layer ensures the data transfer between two remote computers
within a particular Wide Area Network (WAN). The basic unit of transfer is
a datagram that is wrapped (encapsulated) in a frame. The datagram is also
composed of a header and data field. Trailers are not very common in network
protocols.

Figure 1.6: Network packet and its insertion in the link frame

As shown in the figure above, the datagram header, together with data (network-
layer payload), creates the payload or data field of the frame.

There is usually at least one router on WANs between two computers. The
connection between two neighboring routers on the link layer is always direct.
The router unpacks the datagram from a frame, only to wrap it again into a different frame (or, more generally, into a frame of a different link protocol) before sending it on a different line. The network layer does not see the appliances on the physical and link layers (modems, repeaters, switches, etc.).



The network layer does not care what kind of link protocols are used en route between the source and the destination.

Figure 1.7: Network layer communication

A serial port or an Ethernet card can be used as a network interface. A network interface has one or more addresses that are unique within a particular WAN.

1.1.4 Transport Layer

The network layer facilitates the connection between two remote computers. As far
as the transport layer is concerned, it acts as if there were no modems, repeaters,
bridges, or routers along the way. The transport layer relies completely on the
services of lower layers. It also expects that the connection between two
computers has been established, and it can therefore fully dedicate its efforts to
the cooperation between two distant computers. Generally, the transport layer is
responsible for communication between two applications running on different
computers.



There can be several transport connections between two computers at any given
time (for example, one for a virtual terminal and another for email). On the
network layer, the transport packets are directed based on the address of the
computer (or its network interface). On the transport layer, individual
applications are addressed. Applications use unique addresses within one
computer, so the transport address is usually composed of both the network and
transport addresses.

Figure 1.8: Transport layer connection

In this case, the basic transmission unit is the segment that is composed of a
header and payload. The transport packet is transmitted within the payload of the
network packet.



Figure 1.9: Inserting transport packets into network packets that are then inserted
into link frames
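A toy sketch of the nesting shown in Figure 1.9 follows: the transport segment becomes the payload of the network datagram, which in turn becomes the payload of the link frame. The "headers" here are plain labels, not real protocol formats.

```python
# Toy illustration of layered encapsulation: each layer prepends its own header
# (and, for the link layer, appends a trailer) around the payload handed down to it.
def encapsulate(header: bytes, payload: bytes, trailer: bytes = b"") -> bytes:
    return header + payload + trailer

application_data = b"GET / HTTP/1.1\r\n\r\n"                            # produced by the application
segment  = encapsulate(b"[TCP header]", application_data)               # transport layer
datagram = encapsulate(b"[IP header]", segment)                         # network layer
frame    = encapsulate(b"[link header]", datagram, b"[link trailer]")   # link layer

print(frame)
```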

1.1.5 Session Layer

The session layer facilitates exchange of data between two applications. In other
words, it serves as a checkpoint and is involved in synchronizing transactions,
correctly closing files, and so on. Sharing a network disk is a good example of a
session. The disk can be shared for a certain period of time, but the disk is not
used for the entire time. When we need to work with a file on the network disk, a
connection is established on the transport layer from the time when the file is
opened to when it is closed. The session, however, exists on the session layer for
the entire time the disk is being shared.

The basic unit is a session layer PDU (Protocol Data Unit), which is inserted in a
segment. Other books often illustrate this with a figure of a session-layer PDU,
composed of the session header and payload, being inserted in the segment.
Starting with the session layer, however, this does not necessarily have to be the
case. The session layer information can be transmitted inside the payload. This
situation is even more noticeable if, for example, the presentation layer encrypts
the data, and thus changes the whole content of the session-layer PDU.

1.1.6 Presentation Layer


The presentation layer is responsible for representing and securing data. The
representation can differ on different computers. For example, it deals with the
problem of whether the highest bit is in the byte on the right or on the left. By
securing, we mean encrypting, ensuring data integrity, digital signing, and so
forth.

1.1.7 Application Layer

The application layer defines the format in which the data should be received
from or handed over to the applications. For example, the OSI Virtual Terminal
protocol describes how data should be formatted as well as the dialogue used
between the two ends of the connection.

Figure 1.10: Examples of network protocols from the ISO OSI protocols family
1.2 TCP/IP

With a few exceptions, the TCP/IP family does not deal with the physical or link
layers. In practice, Internet protocols often use protocols that adhere to the ISO
OSI standards for the physical and link layers.

What is the correlation between the ISO OSI protocols and TCP/IP? Each group of
protocols has its definition of its own layers as well as the protocols used on these
layers. Generally speaking, ISO OSI protocols and TCP/IP are incompatible. In
practice, ISO OSI-compliant communication appliances need to be used for
transferring IP datagrams, or on the other hand, services based on ISO OSI need
to be provided via the Internet.

1.2.1 Internet Protocol

Internet Protocol (IP) basically corresponds to the network layer. IP is used for
transmitting IP datagrams between remote computers. Each IP datagram header
contains the destination address, which is the complete routing information used
for delivering the IP datagram to its destination. Therefore, the network can only
transmit each datagram individually. IP datagrams of one session can be
transmitted through different paths and can thus be received by the destination
in a different order than they were sent.

Each network interface on the large Internet network has one or more IP addresses that are unique worldwide. (One network interface can have several IP addresses, but one IP address cannot be used by many network interfaces.) The Internet is composed of individual networks that are interconnected via routers. Routers are also referred to as gateways in older literature.
1.2.2 TCP and UDP

TCP and UDP correspond to the transport layer. TCP transports data using
TCP segments that are addressed to individual applications. UDP transports data
using UDP datagrams.

TCP and UDP arrange a connection between applications that run on remote
computers. TCP and UDP can also facilitate communication between processes
running on the same computer, but this is not very interesting for our purposes.

The difference between TCP and UDP is that TCP is a connection-oriented service—the destination confirms the data received. If some data (TCP segments) gets lost, the destination requests a retransmission of the lost data. UDP transports data using datagrams (delivery is not guaranteed); in other words, the source sends the datagram without worrying about whether it has been received. UDP is a connectionless service.

A port is used as the address. To understand the difference between an IP address and a port number, think of a mailing address: the IP address corresponds to the address of a house, while the port tells you the name of the person that should receive the letter.

TCP is described in Chapter 9 and UDP in Chapter 10.

1.2.3 Application Protocols



Application protocols correspond to several ISO OSI layers. The session,
presentation, and application ISO OSI layers are reduced to one TCP/IP application
layer.

The absence of a presentation layer is made up for by introducing specialized presentation-application protocols such as SSL and S/MIME, which specialize in securing data, or the Virtual Terminal and ASN.1 protocols, which are designed for presenting data. The Virtual Terminal protocol (not to be confused with the ISO OSI protocol of the same name) specifies the network data presentation for character-oriented network protocols (Telnet, FTP, SMTP, and, partly, HTTP). Similarly, ASN.1 is often used for binary-oriented network transport. ASN.1 (including BER or DER encoding) was initially used by SNMP, but today it is also used by S/MIME.

There are many different application protocols. For practical purposes, they can be divided into two groups:

• User protocols utilized by user applications (HTTP, SMTP, Telnet, FTP, IMAP, POP3, and so on).
• Service protocols, i.e., the protocols that ordinary Internet users rarely encounter. These protocols make sure the Internet functions correctly. For example, these could be routing protocols that are used by routers to communicate with each other and correctly set their routing tables. Another example is SNMP usage in network administration.



Figure 1.11: Some protocols of the TCP/IP family

1.3 Methods of Information Transmission


There are many different network protocols and several protocols can be
available even on a single layer. Especially with lower-layer protocols, we
distinguish between the types of transmission that they facilitate, whether they
provide connection-oriented or connection-less services, if the protocol uses
virtual circuits, and so on. We also distinguish between synchronous, packet, and
asynchronous transmission.

1.3.1 Synchronous Transmission


Synchronous transmission is needed when it is necessary to provide a stable
(guaranteed) bandwidth, for example, in audio and video. If the source does not
use the provided bandwidth it remains unused. Synchronous transmission uses
frames that are of fixed length and are transmitted at constant speeds.



Figure 1.12: Frames divided into slots in synchronous transmission
In synchronous transmission, the guaranteed bandwidth is established by dividing
the transmitted frames into slots (see Figure 1.12). One or more slots in any
transmitted frame are reserved for a particular connection. Let’s say that each
frame has slot 1 reserved for our connection. Since the frames follow each other
steadily in a network, our application has a guaranteed bandwidth consisting of
the number of slot 1s that can be transmitted through the network in one second.
The concept becomes even clearer if we draw several frames under each other,
creating a ‘super-frame’ (see Figure 1.13). The slots located directly under each
other belong to the same connection.

Figure 1.13: Super-frame


Synchronous transmission is used, for example, to connect your company switchboard to the phone company exchange. In this case, we use an E1 (or, in the United States, a T1) link containing 32 slots of 64 kbps each. A slot can be used for making a phone call. Therefore, in theory, 32 calls are guaranteed at the same time (although some slots are probably used for servicing).
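A quick check of the E1 figures quoted above (the T1 numbers are added for comparison):

```python
# Worked arithmetic for the E1 link described above: 32 slots of 64 kbps each.
e1_total_kbps = 32 * 64
print("E1 aggregate rate:", e1_total_kbps, "kbps")   # 2048 kbps = 2.048 Mbps

# For comparison, a T1 link carries 24 slots of 64 kbps plus 8 kbps of framing.
t1_total_kbps = 24 * 64 + 8
print("T1 aggregate rate:", t1_total_kbps, "kbps")   # 1544 kbps = 1.544 Mbps
```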
The Internet does not use synchronous transmission, i.e., in general, it does not guarantee bandwidth. Quality audio or video transmission on the Internet is usually achieved by over-dimensioning the transmission lines. Recently, there has been a steady increase in requests for audio and video transmission via the Internet, so more and more often we come across systems that guarantee bandwidth even on the Internet with the help of Quality of Service (QoS). In order to reach the expected results, however, all appliances en route from the source to the destination must support these services. Today, we are more likely to get involved with only those areas of the Internet that guarantee bandwidth, such as within a particular Internet provider.

1.3.2 Packet Transmission

(From now onwards we will use the term packet to refer to ‘packet’, ‘datagram’,
‘segment’, ‘protocol data unit’.) Packet transmission is especially valuable for
transferring data. Packets usually carry data of variable size.

Figure 1.14: Packet data transmission

One packet always carries data of one particular application (of one connection).
It is not possible to guarantee bandwidth, because the packets are of various
lengths. On the other hand, we can use the bandwidth more effectively because if
one application does not transmit data, then other applications can use the
bandwidth instead.
1.3.3 Asynchronous Transmission
Asynchronous transmission is used in the ATM protocol. This transmission type
combines features of packet transmission with features of synchronous
transmission.

Figure 1.15: Asynchronous data transfer


Similarly to synchronous transmission, in asynchronous transmission, the data are
transmitted in packets that are rather small, but are all of the same size; these
packets are called cells. Similarly to packet transmission, data for one application
(one connection) is transmitted in one cell. All cells have the same length; so if we
guarantee that the nth cell will be available for a certain application (a particular
connection), the bandwidth will be guaranteed by this as well. Additionally, it
doesn’t really matter if the application does not send the cell since a different
application’s cell might be sent instead.
1.4 Virtual Circuit
Some network protocols create virtual circuits in networks. A virtual circuit is
conducted through the network and all packets of a particular connection go via
the circuit. If the circuit gets interrupted anywhere, then the connection is
interrupted, a new circuit is established, and data transmission continues.

Figure 1.16: Virtual circuit

In the figure above, a virtual circuit between nodes A and D is established via
nodes B, F, and G. All packets must go through this circuit.

Datagrams can be transmitted via the virtual circuit in two ways:

• The circuit does not guarantee the datagram’s delivery to its destination. (If network congestion occurs, the circuit can even throw the datagram away.) An example is the Frame Relay protocol.
• The virtual circuit can establish a connection and guarantee the data delivery, i.e., the data packets transmitted are numbered and the destination confirms their reception. If any data gets lost, a request to resend the data is made. For example, this mechanism is used in the X.25 protocol.

The advantage of virtual circuits is that they are first established (using
signalization) and then the data is inserted only into the established circuit. Each
packet does not have to carry the globally unique address of the destination
(complete routing information) in its header. It only needs the circuit ID.

The virtual mechanism is not used on the Internet, which was primarily aimed for
use by the U.S. Department of Defense, since the destruction of a node in the
virtual circuit would result in the transmission being interrupted—a fact that the
authors of TCP/IP did not like. For this reason, IP does not use virtual circuits. Each
IP datagram carries a destination IP address (complete routing information) and is
therefore transported independently. If a node is destroyed, only the
IP datagrams currently being transmitted through that particular node are
destroyed. The remaining datagrams are routed via different nodes.

Figure 1.17: IP does not use virtual circuits



As the figure above shows, IP datagrams 1, 2, and 3 start from the node A to node
B, but from this point, datagrams 1 and 3 are routed through a different path than
datagram 2. The destination (node D) is then reached by each of them via a
different path. Generally, IP datagrams may reach their destination in a different
order than the order in which they were sent. So our IP datagrams could be
received in the following order: 2, 1, and then 3.

In the Internet hierarchy, TCP—a higher-layer protocol that establishes a connection and guarantees the delivery of data—is used above the connectionless IP. If some of the data packets are lost, their retransmission is requested. If the data packets were lost due to the destruction of a node along the way and there is another route possible within the network, then the transmission is automatically repeated using the other path.
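The reordering idea can be sketched in a few lines: the receiver holds segments keyed by sequence number and hands them to the application in order, regardless of the order in which the IP datagrams arrived. The sequence numbers and payloads below are illustrative only.

```python
# Sketch of in-order reassembly at the receiver. IP may deliver the datagrams as 2, 1, 3;
# a sequence number carried in each segment lets the higher layer restore the order.
received = [(2, b"second part "), (1, b"first part "), (3, b"third part")]

reassembled = b"".join(payload for _, payload in sorted(received))
print(reassembled)   # b'first part second part third part'
```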

Virtual circuits are divided into the following groups:

• Permanent Virtual Circuits (PVC), i.e., circuits permanently built by the network administrator.
• Switched Virtual Circuits (SVC), i.e., virtual circuits that are created dynamically as the need arises. An SVC is created with the help of signalizing protocols that can be used for communicating between the user and the network itself. The network signalizes to the user various events that can be used for network monitoring and administration. SVC communication consists of two steps: creating the virtual circuit and using it for communication.

PVC corresponds to leased lines and SVC corresponds to the dial-up lines of a
phone network.

PSTN
PSTN, POTS, ANT, ISDN, Internet, Air/wireless communications: GSM, satellite, and microwave; Modulation techniques; Types of antenna

POTS

TYPES OF ANTENNA
In this modern era of wireless communication, many engineers are interested in specializing in communication fields, but this requires knowledge of fundamental communication concepts such as the types of antennas, electromagnetic radiation, and the various phenomena related to propagation. In wireless communication systems, antennas play a prominent role because they convert electronic signals into electromagnetic waves efficiently.

Antennas are basic components of any electrical circuit, as they provide the interconnecting link between the transmitter and free space or between free space and the receiver. Before we discuss antenna types, there are a few properties that need to be understood. Apart from these properties, we also cover the different types of antennas used in communication systems in detail.
Properties of Antennas

• Antenna gain
• Aperture
• Directivity and bandwidth
• Polarization
• Effective length
• Polar diagram
Antenna Gain: The parameter that measures the degree of directivity of an antenna’s radiation pattern is known as gain. An antenna with a higher gain is more effective in its radiation pattern. Antennas are designed in such a way that the power increases in the wanted direction and decreases in unwanted directions.

G = (power radiated by the antenna) / (power radiated by a reference antenna)
Aperture: The effective aperture of an antenna is the area that actively participates in the transmission and reception of electromagnetic waves. The power received by the antenna is associated with this collecting area, which is known as the effective aperture.

Pr = Pd × A watts
A = Pr / Pd m²
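The two relations above can be tied together in a short numerical sketch; the power values are illustrative only, and the decibel conversion (10·log10 of the power ratio) is standard dB notation rather than anything specific to a particular antenna.

```python
# Illustrative numbers for the gain and aperture relations given above.
import math

# Gain: ratio of power radiated in the wanted direction to that of a reference antenna.
p_antenna, p_reference = 4.0, 1.0          # watts (illustrative)
gain_ratio = p_antenna / p_reference
gain_db = 10 * math.log10(gain_ratio)      # express the ratio in decibels
print(f"G = {gain_ratio:.1f} ({gain_db:.1f} dB)")          # G = 4.0 (6.0 dB)

# Aperture: received power Pr = Pd * A for incident power density Pd and effective aperture A.
power_density = 2e-6                       # Pd in W/m^2 (illustrative)
aperture = 0.5                             # A in m^2 (illustrative)
print("Pr =", power_density * aperture, "W")               # 1e-06 W
```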
Directivity and Bandwidth: The directivity of an antenna is defined as a measure of the concentration of radiated power in a particular direction. It may be considered as the capability of an antenna to direct radiated power in a given direction, and it can also be expressed as the ratio of the radiation intensity in a given direction to the average radiation intensity. Bandwidth is one of the desired parameters when choosing an antenna. It can be defined as the range of frequencies over which an antenna can properly radiate and receive energy.

Polarization: An electromagnetic wave launched from an antenna may be polarized vertically or horizontally. If the wave is polarized in the vertical direction, the E vector is vertical and a vertical antenna is required. If the E vector is horizontal, a horizontal antenna is needed to launch it. Sometimes circular polarization, a combination of the horizontal and vertical forms, is used.

Effective Length: The effective length is the antenna parameter that characterizes the efficiency of an antenna in transmitting and receiving electromagnetic waves. Effective length can be defined for both transmitting and receiving antennas. The ratio of the EMF at the receiver input to the intensity of the electric field incident on the antenna is known as the receiving antenna’s effective length. The effective length of a transmitting antenna can be defined as the length of an equivalent conductor in free space whose uniform current distribution generates the same field intensity in the direction of radiation.

Effective length = (area under the non-uniform current distribution) / (area under the uniform current distribution)
Polar diagram: The most significant property of an antenna is its radiation pattern or polar diagram. For a transmitting antenna, this is a plot that describes the strength of the power field radiated by the antenna in various angular directions, as shown in the plot below. A plot can also be obtained for the vertical and horizontal planes; these are named the vertical and horizontal patterns, respectively.

So far we have covered the properties of antennas; now we will discuss the different types of antennas that are used for different applications.

Types of Antennas

Log-Periodic Antennas
• Bow-Tie Antennas
• Log-Periodic Dipole Array

Wire Antennas
• Short Dipole Antenna
• Dipole Antenna
• Monopole Antenna
• Loop Antenna

Travelling Wave Antennas
• Helical Antennas
• Yagi-Uda Antennas

Microwave Antennas
• Rectangular Microstrip Antennas
• Planar Inverted-F Antennas

Reflector Antennas
• Corner Reflector
• Parabolic Reflector
1. Log-Periodic Antennas

Log Periodic Antenna


A log-periodic antenna is also known as a log-periodic array. It is a multi-element, directional, narrow-beam antenna that works over a wide range of frequencies. The antenna is made of a series of dipoles placed along the antenna axis at spacing intervals that follow a logarithmic function of frequency. Log-periodic antennas are used in a wide range of applications where variable bandwidth is required along with antenna gain and directivity.

Bow-Tie Antennas
Bow Tie Antenna
A bow-tie antenna is also known as a biconical antenna or butterfly antenna. A biconical antenna is an omnidirectional wide-band antenna. Because of its size, this antenna has a low-frequency response and acts as a high-pass filter. As the frequency goes to higher limits, away from the design frequency, the radiation pattern of the antenna gets distorted and spreads.

Most bow-tie antennas are derivatives of biconical antennas. The discone is a type of half-biconical antenna. The bow-tie antenna is planar and is therefore a directional antenna.

Log-Periodic Dipole Array


The most common type of antenna used in wireless communication technology is the log-periodic dipole array, which fundamentally comprises a number of dipole elements. These dipole elements reduce in size from the back end of the array to the front end. The main beam of this RF antenna comes from the smaller front end.

Log Periodic Dipole Antenna


The element at the back end of the array is the largest, with a half wavelength corresponding to the lowest operating frequency. The element spacing reduces towards the front end of the array, where the smallest elements are placed. During operation, as the frequency varies, a smooth transition takes place along the array of elements, which leads to the formation of an active region.

2. Wire Antennas

Wire Antenna
Wire antennas are also known as linear or curved antennas. These antennas are very simple and cheap, and they are used in a wide range of applications. They are further subdivided into the four types explained below.

Dipole Antenna
A dipole antenna is one of the most straightforward antenna arrangements. It consists of two thin metal rods with a sinusoidal voltage difference between them. The length of the rods is chosen so that each is a quarter of the wavelength at the operating frequency. Dipoles are very simple to construct and use, and they are used either on their own or as elements of other antennas.



The dipole antenna consists of two metallic rods through which current flows; this current and the associated voltage generate an electromagnetic wave, and the radio signals are radiated. The radiating element is split at the center, where a feeder connects it to the transmitter output or the receiver input. The different types of dipole antennas used as RF antennas include half-wave, multiple, folded, non-resonant, and so on.
Short-Dipole Antenna:

Short Dipole Antenna


It is the simplest of all types of antennas. This antenna is an open-circuited wire in which "short" means "relative to a wavelength"; what matters is the size of the wire relative to the wavelength of the operating frequency, not the absolute size of the dipole. The short dipole antenna is made up of two co-linear conductors placed end to end, with a small gap between them driven by a feeder. A dipole is considered short if the length of the radiating element is less than a tenth of the wavelength:

L < λ/10
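As a quick worked example of the L < λ/10 criterion (the 100 MHz figure is chosen only for illustration):

```python
# Check the "short" criterion L < lambda/10 for a sample operating frequency.
c = 3e8       # speed of light in m/s
f = 100e6     # operating frequency: 100 MHz (illustrative)

wavelength = c / f            # 3.0 m
limit = wavelength / 10       # 0.3 m: a dipole shorter than this counts as "short"
print(wavelength, limit)
```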

The short dipole antenna is seldom satisfactory from an efficiency viewpoint, because most of the power entering it is dissipated as heat and the resistive losses become progressively high.

Monopole Antenna
A monopole antenna is half of a simple dipole antenna located over a grounded plane
as shown in the figure below.

The radiation pattern above the ground plane will be the same as that of a half-wave dipole antenna; however, the total power radiated is half that of a dipole, because the field is radiated only in the upper hemisphere. The directivity of these antennas becomes double that of the dipole antennas. Monopole antennas are also used as vehicle-mounted antennas, as the vehicle body provides the ground plane required for antennas mounted above the earth.
Loop Antenna



Loop Antenna
Loop antennas share similar characteristics with both dipole and monopole antennas
because they are simple and easy to construct. Loop antennas are available in different
shapes like circular, elliptical, rectangular, etc. The fundamental characteristics of the
loop antenna are independent of its shape. They are widely used in communication
links with the frequency of around 3 GHz. These antennas can also be used as
electromagnetic field probes in the microwave bands.

The circumference of the loop antenna determines the efficiency of the antenna as
similar to that of dipole and monopole antennas. These antennas are further classified
into two types: electrically small and electrically large based on the circumference of the
loop.

Electrically small loop antenna: circumference ≤ λ/10
Electrically large loop antenna: circumference ≈ λ
Electrically small loops of a single turn have small radiation resistance compared to their
loss resistance. The radiation resistance of small loop antennas can be improved by
adding more turns. Multi-turn loops have better radiation resistance even if they have
less efficiency.



Small Loop Antenna
Because of this, small loop antennas are mostly used as receiving antennas, where losses are not critical. Small loops are not used as transmitting antennas due to their low efficiency.

Resonant loop antennas are relatively large, their size being governed by the operating wavelength. They are also known as large loop antennas, as they are used at higher frequencies, such as VHF and UHF, where their size is convenient. They can be viewed as a folded-dipole antenna deformed into different shapes, such as circular, square, etc., and they have similar characteristics, such as high radiation efficiency.

3. Travelling Wave Antennas


Helical Antennas
Helical antennas are also known as helix antennas. They have relatively simple
structures with one, two or more wires each wound to form a helix, usually backed by a
ground plane or shaped reflector and driven by an appropriate feed. The most common
design is a single wire backed by the ground and fed with a coaxial line.

In General, the radiation properties of a helical antenna are associated with this
specification: the electrical size of the structure, wherein the input impedance is more
sensitive to the pitch and wire size.



Helical Antenna
Helical antennas have two predominant radiation modes: the normal mode and the axial mode. The axial mode is used in a wide range of applications. In the normal mode, the dimensions of the helix are small compared to the wavelength, and the antenna acts like a short dipole or monopole antenna. In the axial mode, the dimensions of the helix are comparable to the wavelength, and the antenna works as a directional antenna.

Yagi-Uda Antenna

Yagi-Uda Antenna
Another antenna that makes use of passive elements is the Yagi-Uda antenna. This type
of antenna is inexpensive and effective. It can be constructed with one or more reflector
elements and one or more director elements. Yagi antennas can be made by using an
antenna with one reflector, a driven folded-dipole active element, and directors,
mounted for horizontal polarization in the forward direction.
4. Microwave Antennas
The antennas operating at microwave frequencies are known as microwave antennas.
These antennas are used in a wide range of applications.
Rectangular Microstrip Antennas



Rectangular Microstrip Antennas
For spacecraft or aircraft applications – based on specifications such as size, weight, cost, performance, and ease of installation – low-profile antennas are preferred. These antennas are known as rectangular microstrip antennas or patch antennas; they require only the space for the feed line, which is normally placed behind the ground plane. The major disadvantage of these antennas is their inefficiency and very narrow bandwidth, which is typically a fraction of a percent or, at most, a few percent.

Planar Inverted-F Antennas


A planar inverted-F antenna can be considered a type of linear inverted-F antenna (IFA) in which the wire radiating element is replaced by a plate to increase the bandwidth. One advantage of these antennas is that they can be hidden in the housing of a mobile device, unlike whip, rod, or helical antennas. Another advantage is that they can reduce backward radiation towards the top of the antenna by absorbing power, which enhances efficiency. They provide high gain in both the horizontal and vertical states. This feature is most important for any kind of antenna used in wireless communications.

5. Reflector Antennas
Corner Reflector Antenna



Corner Reflector Antenna
An antenna that comprises one or more dipole elements placed in front of a corner reflector is known as a corner-reflector antenna. The directivity of any antenna can be increased by using reflectors. In the case of a wire antenna, a conducting sheet is placed behind the antenna to direct the radiation in the forward direction.

Parabolic-Reflector Antenna
The radiating surface of a parabolic antenna has very large dimensions compared to its wavelength. Geometrical optics, which depends on rays and wavefronts, is used to understand certain features of these antennas. Some important properties of these antennas can be studied using ray optics, and others using electromagnetic field theory.

Parabolic Antenna



One of the useful properties of this antenna is the conversion of a diverging spherical wavefront into a parallel wavefront, which produces the antenna's narrow beam. The various types of feed used with the parabolic reflector include horn feeds, Cassegrain feeds, and dipole feeds.

This section has covered the different types of antennas, their applications in wireless communications, and the use of antennas for transmitting and receiving data.

LAN TECHNOLOGY
A local area network (LAN) supplies networking capability to a group of computers in
close proximity to each other, like in an office building, school, or home. LANs are
usually built to enable the sharing of resources and services like files, printers, games,
applications, email, or internet access.

Multiple local networks may stand alone, disconnected from any other network, or might
connect to other LANs or a WAN (like the internet).

Traditional home networks are individual LANs, but it is possible to have multiple LANs within a home, such as when a guest network is set up.

Technologies Used to Build a LAN

Modern local area networks predominantly use either Wi-Fi or Ethernet to connect their
devices together.

A traditional Wi-Fi LAN operates one or more wireless access points that devices within
signal range connect to. These access points in turn manage network traffic flowing to
and from the local devices and can also interface the local network with outside
networks. On a home LAN, wireless broadband routers perform the functions of an
access point.

A traditional Ethernet LAN consists of one or more hubs, switches, or traditional routers that individual devices connect to through Ethernet cables.
Both Wi-Fi and Ethernet also allow devices to connect to each other directly (e.g. peer
to peer or ad hoc connections) rather than through a central device, although the
functionality of these networks is limited.

Though Ethernet and Wi-Fi are used in most businesses and homes, owing to their low cost and adequate speed, a LAN may be set up with fiber if there is enough reason to do so.

Internet Protocol (IP) is by far the predominant choice of network protocol used on
LANs. All popular network operating systems have built-in support for the
required TCP/IP technology.
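As a small illustration of TCP/IP support on a LAN host, the sketch below finds the IP address this machine uses on its local network. It relies on the common trick of "connecting" a UDP socket to an outside address (no packets are actually sent); the 8.8.8.8 address is just a routable placeholder.

```python
# Find the address this host uses on its LAN. Connecting a UDP socket does not send any
# traffic; it only makes the OS pick the outgoing interface, whose address we then read.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))        # any routable address works; nothing is transmitted
    local_ip = s.getsockname()[0]

print("This host's LAN address:", local_ip)
```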

How Big Is a LAN?

A local network can contain anywhere from one or two devices up to many thousands.
Some devices like servers and printers stay permanently associated with the LAN while
mobile devices like laptop computers and phones may join and leave the network at
various times.

Both the technologies used to build a LAN and also its purpose determine its physical
size. Wi-Fi local networks, for example, tend to be sized according to the coverage area
of individual access points, whereas Ethernet networks tend to span the distances that
individual Ethernet cables can cover.

In both cases, though, LANs can be extended to cover much larger distances if needed
by aggregating together multiple access points or switches.

Note: Other types of area networks may be larger than LANs, like MANs and CANs.

Benefits of a Local Area Network

There are plenty of advantages to LANs. The most obvious one, as mentioned above, is that software (plus licenses), files, and hardware can be shared with all the devices that connect to the LAN. This not only makes things easier but also reduces the cost of having to buy multiples.

For example, a business can avoid having to buy a printer for each employee and
computer by setting up a LAN to share the printer over the whole network, which lets
more than just one person print to it, fax things, scan documents, etc.

Since sharing is a major role of a local area network, it's clear that this type of network means faster communication. Not only can files and other data be shared much more quickly if they stay within the local network instead of reaching the internet first, but point-to-point communication can also be set up for faster exchanges.



Also on this note, sharing resources on a network means there's central administrative
control, which means it's easier to make changes, monitor, update, troubleshoot, and
maintain those resources.

LAN Topologies

A computer network topology is the underlying communication structure for the components of a LAN.

Those who design network technologies consider topologies, and understanding them
gives some additional insight into how networks work. However, the average user of a
computer network does not need to know much about them.

Bus, ring, and star topologies are the three basic forms that are known by most
networking-literate people.

BUS TOPOLOGY

Alternatively referred to as a line topology, a bus topology is a network setup in which each computer
and network device are connected to a single cable or backbone. The following sections contain both
the advantages and disadvantages of using a bus topology with your devices.
Advantages of bus topology

• It works well when you have a small network.
• Easiest network topology for connecting computers or peripherals in a linear fashion.
• Requires less cable length than a star topology.

Disadvantages of bus topology

• Difficult to identify the problems if the whole network goes down.
• It can be hard to troubleshoot individual device issues.
• Not great for large networks.
• Terminators are required for both ends of the main cable.
• Additional devices slow the network down.
• If a main cable is damaged, the network fails or splits into two.



RING TOPOLOGY

Alternatively referred to as a ring network, a ring topology is a computer network configuration where
the devices are connected to each other in a circular shape. Each packet is sent around the ring until it
reaches its final destination. Ring topologies are used in both local area network (LAN) and wide area
network (WAN) setups. The picture to the right is a visual example of a network using the ring topology
to connect several computers together.
Additional information

In the past, the ring topology was most commonly used in schools, offices, and smaller buildings where
networks were smaller. However, today, the ring topology is seldom used, having been switched to
another type of network topology for improved performance, stability, or support.
Advantages of ring topology

• All data flows in one direction, reducing the chance of packet collisions.
• A network server is not needed to control network connectivity between each workstation.
• Data can transfer between workstations at high speeds.
• Additional workstations can be added without impacting performance of the network.

Disadvantages of ring topology

• All data being transferred over the network must pass through each workstation on the network, which can make it slower than a star topology.
• The entire network will be impacted if one workstation shuts down.
• The hardware needed to connect each workstation to the network is more expensive than Ethernet cards and hubs/switches.

STAR TOPOLOGY



Alternatively referred to as a star network, star topology is one of the most common network setups. In
this configuration, every node connects to a central network device, like a hub, switch, or computer. The
central network device acts as a server and the peripheral devices act as clients.
The picture to the right shows how this network setup gets its name, as it is shaped like a star.
Advantages of star topology

• Centralized management of the network, through the use of the central computer, hub, or switch.
• Easy to add another computer to the network.
• If one computer on the network fails, the rest of the network continues to function normally.

Disadvantages of star topology

• Can have a higher cost to implement, especially when using a switch or router as the central network device.
• The central network device determines the performance and number of nodes the network can handle.
• If the central computer, hub, or switch fails, the entire network goes down and all computers are disconnected from the network.

What Is a LAN Party?

LAN party refers to a type of multiplayer computer gaming and social event where
participants bring their own computers and build a temporary local network.

Before cloud-based game services and internet gaming matured, LAN parties were
essential for bringing together players for matchmaking with the benefit of high-speed,
low-latency connections to support real-time game types.

POTS

ISDN

AIR/WIRELESS COMMUNICATION

GSM SATELLITE

MODULATION TECHNIQUES FOR TELEMEDICINE

INTERNET TECHNOLOGY AND TELEMEDICINE USING WORLD WIDE WEB (WWW)

INTERNET AND TELEMEDICINE

CLINICAL DATA-LOCAL AND CENTRALIZED
From the moment ALS responders arrive on the scene of a critical care patient, their hands are full stabilizing the patient and preparing for transport. Whether the patient is suffering a cardiac event, stroke, trauma or some other critical care condition, the immediate focus is on patient care. In these situations, ALS responders have little, if any, time to document, manually transmit data, or call ahead to medical control or the receiving hospital to advise that they are inbound. How, then, can this information be conveyed in a timely manner to alert emergency departments and treatment centers of the clinical status of inbound patients?

It has already been established that transmitting prehospital ECGs and 12-lead ECG reports to the hospital can help reduce first-medical-contact-to-balloon or door-to-balloon (D2B) time for patients in need of percutaneous coronary intervention (PCI).1,2 For patients suffering ST-segment elevation myocardial infarction (STEMI), EMS agencies and PCI hospitals are working together to create regional STEMI networks in their service areas that link responder agencies with receiving centers. This link results in significant conformity to a D2B time of less than or equal to 90 minutes, surpassing the American College of Cardiology D2B Alliance benchmark.3 These same networks can also be leveraged to support more than just STEMI patients. Similarly, EMS personnel have transmitted ECGs to offsite cardiologists via a fully automated wireless network, resulting in shortened D2B times (63 minutes on average), as well as reduced infarct size and shortened length of hospital stay.4

You can support all types of critical care patients when EMS agencies and PCI hospitals automatically receive patient vitals, 12-lead ECG reports, rhythm strips, and other clinical data on a periodic or event-driven basis through appropriate data transmission solutions. This application note describes the basic components of an emergency care clinical data transmission network, how they function, and what is needed to put them into action.
Clinical Data Transmission Solutions

Philips offers a suite of open data management solutions that streamline information delivery in an
effort to optimize patient care and operational efficiency. One of these is focused on emergency care
clinical data transmission from the ambulance to either the medical control office or directly to the
receiving hospital. From there, this data can be shared with the emergency department and other
specialists such as cardiologists, surgeons, neurologists, and respiratory therapists.
Send from the point of care
The solution begins with a Philips HeartStart MRx Monitor/Defibrillator. The MRx is a multi-modality ALS
monitor capable of delivering electric therapy for pacing, cardioversion and defibrillation. It can be
equipped with cellular, WiFi (Wireless Link), and/or Bluetooth wireless technology to send data to the
hospital ahead of the patient. Once connected to the internet, the MRx is able to send data to Philips’
HeartStart Telemedicine System, a software application that receives, stores, and forwards the clinical
data to a number of destinations. With the push of a few buttons on the MRx, 12-Lead reports and
periodic clinical data can be sent to pre-configured distribution lists via the Telemedicine System. Figures
1-3 illustrate the end-to-end Wireless Link and Bluetooth data transmission flows. NOTE: Only 12-Lead
ECGs can be transmitted to Philips’ TraceMasterVue ECG database.
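
The flow described above, a monitor at the point of care sending to a store-and-forward server that forwards to pre-configured distribution lists, can be sketched as follows. This is not Philips' actual software or interface; the class, field, and destination names are assumptions used only to illustrate the store-and-forward pattern.

# Illustrative store-and-forward sketch of an emergency clinical data flow.
# All names (TelemedicineServer, distribution lists, payload fields) are hypothetical.

import json, time

class TelemedicineServer:
    """Receives events from the field, stores them, and forwards them to destinations."""
    def __init__(self, distribution_lists):
        self.distribution_lists = distribution_lists   # e.g. {"stemi": ["ED", "Cath Lab"]}
        self.archive = []                              # stored events

    def receive(self, event):
        self.archive.append(event)                     # store first...
        for dest in self.distribution_lists.get(event["list"], []):
            self.forward(event, dest)                  # ...then forward

    def forward(self, event, destination):
        print(f"Forwarding {event['type']} for unit {event['unit']} to {destination}")

def build_event(unit_id, event_type, vitals):
    """Package periodic or event-driven clinical data as a JSON-ready record."""
    return {"unit": unit_id, "type": event_type, "list": "stemi",
            "timestamp": time.time(), "vitals": vitals}

server = TelemedicineServer({"stemi": ["Emergency Department", "Cardiology"]})
event = build_event("MRx-042", "12-lead ECG report",
                    {"HR": 96, "SpO2": 94, "NIBP": "138/88"})
server.receive(event)
print(json.dumps(event, indent=2))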

Prepare and respond at the receiving center
HeartStart Telemedicine System software can reside on a server at an EMS dispatch center, medical
control, one hospital, or a number of hospitals.

CLINICAL DATA TRANSMISSION


The HeartStart Telemedicine System can be shared among EMS agencies and can serve multiple
hospitals and destinations in a regional network. A viewer component (HeartStart Telemedicine Viewer)
allows clinicians or system administrators to interact with patient data found on the server and perform
limited tasks, such as forwarding events to a variety of locations.

CLINICAL DATA TRANSMISSION BENEFITS


Philips’ emergency care clinical data transmission solutions provide benefits to both EMS and hospital organizations. For EMS, it means:
* Hands-free, hassle-free transmission: Start data transmission, then forget it, so you can keep your focus on patient care.
* Informed clinical support for help with critical patients: Screen unnecessary transports and get assistance (from the receiving hospital) with patients who are refusing transport. If there is a dispatch or call center involved in the EMS system, personnel can see patient clinical status and provide medics remote support for critical patients. If needed, they can direct the medics to a more appropriate facility. The net result? A more informed emergency department (ED) means shorter handoffs, which enables you to reduce out-of-service times and to meet service level commitments.
For a receiving hospital, it means:
* Advance notice of inbound critical care patients: You’ll have legible, objective data in hand to reduce reliance on manually recorded and verbally recalled information.
* Making appropriate decisions: Find a bed for a patient, consult a specialist as required (or not), and refer to a patient’s history. The net result? A more informed receiving team may be able to reduce a patient's stay in the ED and better utilize the Cath Lab, Radiology, Surgery, and/or ICU.

MOBILE HANDHELD DEVICES AND MOBILE COMMUNICATION

INTRODUCTION
The use of mobile devices by health care professionals (HCPs) has transformed many aspects of clinical
practice. Mobile devices have become commonplace in health care settings, leading to rapid growth in the
development of medical software applications (apps) for these platforms. Numerous apps are now
available to assist HCPs with many important tasks, such as: information and time management; health
record maintenance and access; communications and consulting; reference and information gathering;
patient management and monitoring; clinical decision-making; and medical education and training.
Mobile devices and apps provide many benefits for HCPs, perhaps most significantly increased access to
point-of-care tools, which has been shown to support better clinical decision-making and improved
patient outcomes. However, some HCPs remain reluctant to adopt their use.1,4 Despite the benefits they
offer, better standards and validation practices regarding mobile medical apps need to be established to
ensure the proper use and integration of these increasingly sophisticated tools into medical practice. These
measures will raise the barrier for entry into the medical app market, increasing the quality and safety of
the apps currently available for use by HCPs.

USE OF MOBILE DEVICES BY HEALTH CARE PROFESSIONALS

Types and Prevalence of Devices Used


The introduction of mobile computing devices (personal digital assistants [PDAs], followed by
smartphones and tablet computers) has greatly impacted many fields, including medicine. Health care professionals now use smartphones or tablet computers for functions that once required a pager, cellphone, and PDA.7 Smartphones and tablets combine both computing and communication features
in a single device that can be held in a hand or stored in a pocket, allowing easy access and use at the
point of care. In addition to voice and text, new mobile device models offer more advanced features, such
as web searching, global positioning systems (GPS), high-quality cameras, and sound recorders. With
these features, as well as powerful processers and operating systems, large memories, and high-resolution
screens, mobile devices have essentially become handheld computers.
The first mobile device that incorporated both communication and computing features was the
Blackberry, which was introduced in 2002.5 After the Blackberry was brought to market, other handheld
mobile devices were introduced. Perhaps most notably, in January 2007, Apple launched the first-generation iPhone.5 Subsequently, smartphones that run the Google Android operating system were introduced in October 2008.5 Because of the intuitive touch-screen user interfaces and advanced features
and capabilities that the iPhone and Android smartphones offer, ownership of mobile devices has
increased rapidly.12 In April 2010, Apple introduced a new innovation, the iPad tablet computer, which
because of ease of use, portability, and a comparatively large screen was yet another transformative
computing tool.5 The iPad ignited the tablet computer market.9 Tablets that run the Google Android
operating system (Samsung Galaxy and others) were launched later that year, making the use of these
mobile devices even more widespread.5
Without a doubt, medicine is one of the disciplines that has been profoundly affected by the availability of
mobile devices.4 This is evident in many surveys of HCPs that reveal a high ownership rate of these tools,
which HCPs use in both clinical practice and education.2 Smartphones and tablets have even replaced
desktop systems as the preferred computing devices for HCPs who need fast access to information at the
point of care.9
The June 2012 Manhattan Research/Physician Channel Adoption Study found that doctors’ ownership
and use of mobile devices is pervasive, with 87% using a smartphone or tablet device in their workplace,
compared to 99% who use a computer.13 Surveys have shown that around 80% of physicians use an
iPhone, while most of the remainder opt for Android smartphones.1,14 An estimated 66% of doctors own a
tablet computer, which 54% use in their practices.13 Interestingly, the popularity of mobile devices does
not correspond with age, since 80% of physicians ages 55 and older own a smartphone. 13 Similar results
reflecting the pervasive use of mobile devices by HCPs were reported in a survey of medical school
faculty, residents, and students.1 The results of this study found that 85%, 90%, and 85% of respondents,
respectively, use mobile devices in a wide variety of clinical settings ranging from classrooms to
hospitals.1

Need for Mobile Devices at the Point of Care


One major motivation driving the widespread adoption of mobile devices by HCPs has been the need for
better communication and information resources at the point of care.7,14 Ideally, HCPs require access to
many types of resources in a clinical setting, including:

 Communication capabilities—voice calling, video conferencing, text, and e-mail 7


 Hospital information systems (HISs)—electronic health records (EHRs), electronic medical
records (EMRs), clinical decision support systems (CDSSs), picture archiving and
communication systems (PACSs), and laboratory information systems (LISs)7
 Informational resources—textbooks, guidelines, medical literature, drug references7
 Clinical software applications—disease diagnosis aids, medical calculators7

Prior to the development of mobile devices, these resources were mainly provided by stationary
computers, which do not support the need for mobility in health care settings.7 In an attempt to address
this need, some health care environments set up portable, wireless mobile information stations such as
Computers on Wheels (COWs) or Workstations on Wheels (WOWs).7 With the availability of mobile
devices, however, clinicians now have access to a wellspring of information at their fingertips, through
their smartphones and tablets.10
The results of the 2012 Manhattan Research/Physician Channel Adoption Study also identified the
purposes for which HCPs rely on mobile devices.13 Searching was the most popular activity among HCPs,
with 98% using their desktops/laptops to search, 63% using their tablets, and 56% using their
smartphones.13 Focusing on smartphone use for doctors alone, searching is again the most common
activity, occupying 48% of phone time, with professional apps consuming an additional 38%.13 Physicians
were also found to spend an average of three hours per week watching web videos for professional
purposes on desktops/laptops (67%), tablets (29%), and smartphones (13%); the most frequently viewed
content (55%) was continuing medical education (CME) activities.13 A frequent reliance on mobile
devices was also reported in the survey of medical school HCPs and students, with 85% reporting the use
of a mobile device at least once daily for clinical purposes, often for information and time management or
communication relating to education and patient care.1

MOBILE APPS FOR HEALTH CARE PROFESSIONALS

What Are “Apps”?


The rapid integration of mobile devices into clinical practice has, in part, been driven by the rising
availability and quality of medical software applications, or “apps.” 2 Apps are software programs that
have been developed to run on a computer or mobile device to accomplish a specific purpose.1 Faster
processors, improved memory, smaller batteries, and highly efficient open-source operating systems that
perform complex functions have paved the way for the development of a flood of medical mobile device
apps for both professional and personal use.4

The ability to download medical apps on mobile devices has made a wealth of mobile clinical resources
available to HCPs.15 Medical apps for many purposes are available, including ones for electronic
prescribing, diagnosis and treatment, practice management, coding and billing, and CME or e-
learning.9,10 A broad choice of apps that assist with answering clinical practice and other questions at the
point of care exist, such as: drug reference guides, medical calculators, clinical guidelines and other
decision support aids, textbooks, and literature search portals.7,13,15 There are even mobile apps that
simulate surgical procedures or that can conduct simple medical exams, such as hearing or vision
tests.6,7 Many mobile apps are not intended to replace desktop applications, but are meant to complement
them in order to provide a resource that has the potential to improve outcomes at the point of care. 7 The
use of medical apps has become frequent and widespread; 70% of medical school HCPs and students
reported using at least one medical app regularly, with 50% using their favorite app daily.1,9
In July 2008, access to apps was further revolutionized by the launch of the Apple iTunes Appstore,
which gave iPad, iPhone, and iPod Touch (iTouch) users the ability to shop for and download apps from
an online marketplace.9,15 As of January 2014, Apple reported that a staggering 1 million apps were
available through the iTunes Appstore.16 In 2011, Apple created the “Apps for Healthcare Professionals”
section within the medical category of the iTunes Appstore, a unique feature among mobile app
marketplaces.17 In 2013, this section was further divided into subcategories including: reference, medical
education, EMR and patient monitoring, nursing, imaging, patient education, and personal care. 18 Google
similarly launched a “Google Play” shop that provides a wide variety of apps, including some for HCPs,
for mobile devices that use the Android operating system.15 To reach more users, some mobile apps have
been made available for use on either Apple or Android platforms.9
The primary criterion for choosing an app is often cost; users may prefer to download a free app but will
replace or upgrade it later, if necessary, with one that requires payment.9 Some free apps are fully
functional, while others are nonfunctional or partially functional unless a subscription is
purchased.9 Many well-known medical journals and medical textbooks can be purchased as mobile apps
after payment of a subscription fee.9 Although some medical apps may initially be costly, they can
ultimately be cost-effective if updates are included.9,15 For example, medical textbook apps are often
updated annually, eliminating the need to buy newer editions.9

How HCPs Use Mobile Devices and Apps


Health care professionals use medical devices and apps for many purposes, most of which can be grouped
under five broad categories: administration, health record maintenance and access, communications and
consulting, reference and information gathering, and medical education.
Information and Time Management
One of HCPs’ most frequent uses for mobile devices is information and time management. 1 Popular
information management apps, such as Evernote and Notability, enable users to write or dictate notes,
record audio, store photographs, and organize material into categories within a searchable electronic
database.6 E-book reader apps, such as GoodReader and iAnnotate, allow users to view, underline,
highlight, enlarge, and annotate text in PDF files.2,5
Cloud-based storage and file-sharing services that can be accessed using a mobile device are also useful
for information management, since they allow users to store, update, and share documents or photographs
with others without exchanging a flash drive or CD.2,5,6 Most cloud-based storage systems provide users
with a few gigabytes of memory for free; additional space often requires payment of an annual subscription.2 Cloud-based information storage provides the additional advantage of permitting
information to be accessed instantaneously from multiple devices, which allows people who are
collaborating together to share materials quickly.2,5,6

Dropbox is a popular cloud-based storage service that can be used with multiple devices.2,5 Google Drive
also allows file uploading, sharing, and management through Google Docs.2,5 SkyDrive, a cloud-based
storage service that is compatible with mobile devices, uses the Windows 8 operating system.5 Box is a
cloud-based storage service that is reportedly compliant with both the Health Insurance Portability and
Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health
(HITECH) Act.19 However, it should be noted that some cloud-based storage services are not compliant,
and therefore may not be suitable for storing or exchanging patient information.2 Access to cloud-based
storage services requires an Internet connection, but files can be saved to the mobile device’s internal
memory for offline use.5
An additional advantage provided by information management apps is that they can be used in
combination. For example, GoodReader can be connected to a cloud service, allowing PDF files to be
downloaded from the cloud into the reader app.5 Evernote, as well as some other information management
apps, can be used in conjunction with a cloud service and reader.5 This enables a PDF downloaded from
the cloud to be viewed with a reader, then sections of the document can be cut and pasted into the
information management app.5
HCPs frequently use mobile device apps for time management. This doesn’t require a special app; native
apps that come installed on mobile devices are often sufficient to organize and track appointments,
meetings, call schedules, and other clinical obligations.1,3 Mobile apps such as ZocDoc, which allow
patients to view information about and make appointments with participating doctors, are also available
for the iPhone, Android, and Blackberry devices.4

Health Record Maintenance and Access


Apps are also available that aid in data collection and retrieval, such as entering information into a
patient’s EHR or EMR.3,7 Hospital information systems often include features that allow HCP
management of EHRs and PACSs, permitting secure access to patient information (medical history,
vitals, prescriptions, lab results, x-rays, scans, consultations, and discharge notes) either on site or
remotely.3,7
One health care software company, Epic Systems, has partnered with Apple and released versions of the
Epic scheduling, billing, and clinical support app for the iPhone and iPad.4 PatientKeeper Mobile Clinical
Results provides physicians with access to patient clinical data via either Apple or Android mobile
devices.7 Teamviewer is a general-purpose record maintenance and access app that can be installed on
mobile devices, allowing remote access to desktop PCs.5 In the absence of such apps, a virtual private
network (VPN) log-in can often be obtained from the hospital to allow remote secure access into the in-
house network through the Internet to view records for emergency consultations.5
Specialized apps are also available for remote viewing of medical imaging scans.10 Mobile MIM is a free
app for the iPad and iPhone, approved by the Food and Drug Administration, that allows remote viewing
of x-rays and imaging scans when users cannot access imaging workstations.6 This software works with a
paid subscription or pay-per-use plan using MIMCloud, a HIPAA-compliant server that allows users to
store and share medical images.6 Images can be downloaded from the cloud and viewed with the
MIMViewer paid app in any setting, whether during discussions with team members or patients.6
In some instances, remote evaluation of image scans via a mobile device has been proven to be just as effective as viewing them at a standard workstation.4 In fact, one group demonstrated that its members
could use their iPhones to diagnose acute stroke on CT brain scans just as accurately as when a
workstation was used.4 Mobile devices’ cameras are also useful for documenting images to aid in
diagnoses, such as taking pictures of gross or microscopic pathology specimens to get a colleague’s
opinion or for personal reference.3

Communication and Consulting
Health care systems are often highly dispersed, encompassing multiple locations such as clinics, inpatient
wards, outpatient services, emergency departments, operating theaters, intensive care units, and
labs.7 Consequently, HCPs not only need to be mobile themselves, they also need to be able to
communicate and collaborate with people in different locations.7 Mobile devices satisfy this need by
offering multiple means of communication, including: voice and video calling; text, e-mail, and
multimedia messaging; and video conferencing.3,7 Clinical communication apps are available for mobile
devices that are specifically designed to simplify communication among clinicians.7
Mobile devices have been proven to improve contact between HCPs and their colleagues. 1,4 In one study,
mobile devices were shown to improve communication between doctors and nurses on inpatient
wards.4 In a survey of medical school HCPs and students, more than 80% of respondents described using
mobile devices to communicate with colleagues about patient care via e-mail, telephone, and text
messages.1 They described texting as a more efficient means of communication than telephone
conversations or in-person meetings.1 Mobile devices also allow rapid response to e-mail, allowing users
to keep up with communication.1 Texting or calling colleagues directly on their mobile devices, rather
than paging them, has also been shown to save critical time in emergency cases.3,7 Mobile devices can
also be used by HCPs to aid long-distance patients by allowing them to text or send pictures regarding
problems or questions.3
Social networking apps are useful tools for enabling discussion, consultations, and collaboration among
HCPs.5 Doximity is a HIPAA-compliant social networking site that has been described as a “Facebook for
doctors.”4 Registration on Doximity requires validation of a potential user’s credentials through
verification against a medical license database.4 Once registered, physicians can network with colleagues
from medical school, residency, or elsewhere, and exchange patient-related information via text
messages.4 Facebook itself has also reportedly been used to establish a “nondisclosed” forum for
consultations, discussions, and mini-lectures among infectious disease specialists who are registered
university professors.5 Such forums can provide a convenient and efficient means for medical specialists to rapidly share opinions.5 Chatting apps that allow text messaging and image exchange can
be used to trade detailed information during consultations.5 It should be noted that Facebook, as well as
many other social media and chatting apps, is not HIPAA compliant.5

Reference and Information Gathering

Literature Research and Review


Mobile devices are invaluable tools for HCPs to use to search or access medical literature, as well as other
information sources.1 The survey of medical school HCPs and students found that mobile devices were
often used to access medical journal websites (60%) or medical news online (74%). 1 Several medical
journals, such as the New England Journal of Medicine, The Lancet, and BMJ (formerly the British
Medical Journal), provide apps that allow articles to be viewed on mobile devices.5 However, journals
rarely provide free access to articles without the purchase of a subscription.5
Search applications for HCPs, such as PubMed/MEDLINE, also facilitate searches of medical literature
databases to identify published medical information.7 Mobile medical literature search apps used by HCPs
include: PubSearch, PubMed on Tap, Medscape, MEDLINE Database on Tap (MD on Tap or MDoT),
Docphin, Docwise, Read by QxMD, askMEDLINE, PICO, and Disease Associations. 7 PubSearch is
available for free, while PubMed on Tap is available for several dollars. 2,7 Both apps work with the iOS
platform to facilitate PubMed/MEDLINE searches using the iPhone, iTouch, and iPad.7 The free app MD
on Tap is provided by the National Library of Medicine to help HCPs using PDAs access medical
information at the point of care through three search engines: PubMed, Essie, and Google. 7
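
Under the hood, most of these literature tools query PubMed through NCBI's public E-utilities web service. A minimal sketch of such a query is given below; it assumes network access and uses the public esearch endpoint (real applications should follow NCBI's current usage guidelines on rate limits and API keys).

# Minimal sketch of a PubMed query using the public NCBI E-utilities esearch endpoint.
# Assumes internet access; consult NCBI usage policies before using this in a real app.

import json
from urllib.parse import urlencode
from urllib.request import urlopen

def search_pubmed(term, max_results=5):
    """Return a list of PubMed IDs (PMIDs) matching the search term."""
    params = urlencode({
        "db": "pubmed",
        "term": term,
        "retmax": max_results,
        "retmode": "json",
    })
    url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
    with urlopen(url) as response:
        data = json.load(response)
    return data["esearchresult"]["idlist"]

if __name__ == "__main__":
    print(search_pubmed("telemedicine AND emergency care"))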

Drug References
Drug reference applications are generally used to access information including: drug names, indications,
dosages, pharmacology, interactions, contraindications, cost, formulary status, identification guides, and
dose by weight calculators.3,7 The most frequently used mobile drug reference apps include: Epocrates,
Skyscape RxDrugs/Omnio, Micromedex, FDA Drugs, and DrugDoses.net.2,4,7 Epocrates, Skyscape RxDrugs/Omnio, and FDA Drugs allow users to check multiple drug–drug interactions at the same
time.7 FDA Drugs, which includes official FDA labeling for prescription and over-the-counter drugs,
permits searching by active ingredients.7 The authors of Epocrates, the most commonly used drug
reference app, found that 90% of physicians use mobile device apps to access drug information.7,14
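
Conceptually, a multiple drug-drug interaction check reduces to looking up every pair in the patient's medication list against an interaction table. The sketch below uses a tiny made-up table (the pairs and notes are placeholders, not clinical guidance) purely to show the pairwise-checking logic.

# Minimal sketch of a multiple drug-drug interaction check.
# The interaction table is a made-up placeholder, not clinical data.

from itertools import combinations

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "increased statin exposure",
}

def check_interactions(medications):
    """Return any known interaction for every pair in the medication list."""
    findings = []
    for a, b in combinations(sorted(medications), 2):
        note = INTERACTIONS.get(frozenset({a, b}))
        if note:
            findings.append(f"{a} + {b}: {note}")
    return findings

print(check_interactions(["aspirin", "warfarin", "metformin"]))
# ['aspirin + warfarin: increased bleeding risk']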

News Acquisition
MedPage Today is one of the most popular apps among HCPs for accessing breaking medical news,
organizing news by interest, and earning CME credits.4 The MedPage Today app provides information
about drugs, diseases, and medical procedures, as well as daily podcasts, videos, and news updates. 2,3 It
encompasses 30 medical specialties and provides annual coverage of more than 60 meetings and
symposia.2
Other medical news apps are available.5 For example, the “Outbreaks Near Me” app for users of either
Apple or Android mobile devices provides real-time information regarding disease outbreaks according to
geography.4 This information is gathered from multiple resources, including online news, eyewitness
accounts, and official reports.4 The Outbreaks Near Me app was funded by Google and developed in
collaboration with the Centers for Disease Control and Prevention, as well as other organizations.4

Patient Management

Clinical Decision-Making
Mobile devices provide HCPs with convenient and rapid access to evidence-based information,
supporting clinical decision-making at the point of care.8 HCPs’ increased reliance on electronic resources
for this purpose was identified in the Manhattan Research/Physician Channel Adoption Study, which
reported that physicians spend the majority (64%) of their online time looking for information to make or
support clinical decisions, double the time spent reviewing print resources.13
Many evidence-based software apps serve as useful bedside clinical decision-making tools.7 Printed
medical references often used in disease diagnosis are now available as mobile device apps that provide
information on diagnosis, treatment, differential diagnosis, infectious diseases, pathogens, and other
topics.7 Such apps include: Johns Hopkins Antibiotic Guide (JHABx), Dynamed, UpToDate, 5-Minute
Clinical Consult (5MCC), 5-Minute Infectious Diseases Consult (5MIDC), Sanford Guide to
Antimicrobial Therapy (SG), ePocrates ID, Infectious Disease Notes (ID Notes), Pocket Medicine
Infectious Diseases (PMID), and IDdx.2,7
Diagnosaurus, a popular, low-cost mobile differential diagnosis app for the iPhone, iPad, and iTouch, can
help ensure that alternative diagnoses are not overlooked.4 Flowcharts to help physicians identify
diagnostic possibilities are included in the apps 5MCC and Pocket Guide to Diagnostic Tests. 7 Other
diagnostic mobile apps apply clinical algorithms to aid physicians in determining a disease
diagnosis.7 Mobile devices can also be used to access CDSSs installed on desktop computers in clinical
settings to aid in diagnosis and treatment decisions.8
Mobile apps can also help clinicians identify the appropriate scans or tests to order, decreasing
unnecessary procedures and reducing cost of care.7 Lab test apps provide information such as: reference
values and interpretation, causes for abnormal values, and laboratory unit conversions. 7 They include:
Pocket Lab Values, Lab Pro Values, Palm LabDX, Normal Lab Values, Lab Unit Converter, Labs 360,
Davis’s Laboratory and Diagnostic Tests, and Pocket Guide to Diagnostic Tests.2,7
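
Laboratory unit conversion, one of the features listed above, is typically just a multiplication by a stored conversion factor. A minimal sketch, limited to two commonly quoted analytes and intended for illustration rather than clinical use:

# Sketch of a laboratory unit converter: conventional units -> SI units.
# Only two example analytes; factors are the commonly quoted ones (illustration only).

CONVENTIONAL_TO_SI = {
    "glucose":    ("mg/dL", "mmol/L", 1 / 18.0),    # divide mg/dL by about 18
    "creatinine": ("mg/dL", "umol/L", 88.4),        # multiply mg/dL by about 88.4
}

def to_si(analyte, value):
    conv_unit, si_unit, factor = CONVENTIONAL_TO_SI[analyte]
    return value * factor, si_unit

value, unit = to_si("glucose", 126)      # fasting glucose of 126 mg/dL
print(f"{value:.1f} {unit}")             # 7.0 mmol/L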
Mobile apps can also be used directly to conduct simple examinations for visual acuity or color blindness,
as well as blood pressure or glucose level.3,5,7 The iPhone iSeismometer app, which is used to measure
tremor frequency, has been reported to match more sophisticated and expensive devices used for
electromyogram analysis.3 The iMurmur app provides recordings of 20 types of heart murmurs, allowing
a physician to match and identify what she or he hears.3 Many apps are available to determine pregnancy
due dates by using a patient’s sonogram and date of last period, such as “Perfect OB Wheel.” 3 These apps
have been said to predict pregnancy due dates more accurately than the paper wheels that had previously
been the standard.3
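
The due-date calculation itself is simple date arithmetic (Naegele's rule: roughly 280 days from the first day of the last menstrual period), which is why it ports so easily to a mobile app. A minimal sketch, for illustration only:

# Naegele's rule sketch: estimated due date = last menstrual period + 280 days.
# Illustration only; real obstetric apps also adjust for cycle length and ultrasound dating.

from datetime import date, timedelta

def estimated_due_date(last_menstrual_period: date) -> date:
    return last_menstrual_period + timedelta(days=280)

print(estimated_due_date(date(2024, 1, 10)))   # 2024-10-16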
Current treatment guidelines available at the point of care via mobile apps also provide a valuable
resource for HCPs.6 Several guidelines are accessible on mobile platforms, including the National
Comprehensive Cancer Network guidelines for cancer care available through the Epocrates app, and the
American College of Chest Physicians antithrombotic therapy guidelines available via the CHEST
app.6 The Johns Hopkins ABX Guide app provides an impressive compilation of antimicrobial
recommendations and guidelines, including some for surgical prophylaxis and surgical site infection
treatment.6
Other mobile apps, such as medical calculators, use standard formulas to make calculations to determine
risk scores and other measures, such as body mass index (BMI), body surface area (BSA), and proper
drug doses.4,7 Calculation of clinical scores or indices typically involves utilizing complex formulas that
require several input parameters.7 Even if a HCP knows the formula, performing even simple clinical
score calculations manually can be surprisingly time consuming and error prone in a fast-paced clinical
environment.7 In contrast, HCPs who use medical calculators do not necessarily need to know the formula
for calculating a clinical score or index; they only need to enter the parameters to quickly produce a
reliable result.7
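
For example, two of the most common calculator formulas, body mass index and the Mosteller body surface area estimate, reduce to a couple of lines of code that such apps simply wrap in an input form. The sketch below is illustrative only:

# Common bedside formulas wrapped by medical calculator apps (illustrative only).

from math import sqrt

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def bsa_mosteller(weight_kg: float, height_cm: float) -> float:
    """Mosteller estimate: BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return sqrt(height_cm * weight_kg / 3600)

print(round(bmi(70, 1.75), 1))            # 22.9
print(round(bsa_mosteller(70, 175), 2))   # 1.84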
Popular medical calculators include: Epocrates MedMath, MedCalc, Mediquations, Calculate, Medical
Calculator, Archimedes, uBurn Lite, Softforce’s Antibiotic Dosage Calculator, and Paeds ED. 2,6 Others
that are available are: Vancomycin ClinCalc Full, Softforce’s Antibiotic Dosage Calculator, and
MedCalc 3000 Pharmacology.5 Calculate by QxMD is a free app that calculates heart disease and stroke
risk based on various patient variables.3 Since the results are visual, this app can be very effective in
communicating risks to patients during discussions about potential behavior change. 3 Another free
medical calculator called Archimedes is available through Skyscape.4
The Agency for Healthcare Research and Quality (AHRQ), part of the U.S. Department of Health and
Human Services, provides the free Electronic Preventive Services Selector (AHRQ ePSS) app. 3,4 This app
is designed to assist primary care physicians in screening, counseling, and identifying preventive
measures, based on a patient’s age, gender, sexual activity, tobacco use, and other risk factors.3,4 Surgical
risk calculators are also available, such as the euroSCORE calculator, which uses recommendations from
the Society of Thoracic Surgeons to calculate operative risk at the point of care.6 The American College of Surgeons National Surgical Quality Improvement Program is also developing a surgical risk calculator.6

Patient Monitoring
The use of mobile devices to remotely monitor the health or location of patients with chronic diseases or
conditions has already become a viable option.7 Mobile device apps can provide public health
surveillance, aid in community data collection, or assist disabled persons with independent living.12 In one
study, a single-lead electrocardiograph (ECG) was connected to a smartphone to diagnose and follow
treatment of patients with sleep apnea, providing a possible alternative to costly and labor-intensive
polysomnography.4 Sensors attached to garments that communicate with mobile devices have also been
used to remotely monitor and collect medical data regarding chronically ill elderly patients.4
A clinical monitoring system was developed to monitor an entire unit or one bed in intensive care via
smartphone; it displays an alarm, color-coded according to severity, based on patient vital signs.7 The app
iWander for Android was developed to monitor and track patients with early Alzheimer’s disease who are
prone to wandering by using the mobile device GPS.4 HanDBase, a HIPAA-compliant relational database
software program, can be used on mobile devices to track hospitalized patients according to their
locations, diagnosis, tests, treatments, and billing information.3 Smartphone apps have also been used to
monitor patients during rehabilitation.4 For example, a smartphone connected via Bluetooth to a single-
lead ECG device enabled the monitoring of patients in their own neighborhoods when they were unable to
reach traditional hospital-based rehabilitation.4 Although potentially useful, patient monitoring apps can
be limited by factors such as Internet and GPS reliability, as well as the patient’s ability to use the device. 4
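
The severity-coded alarm logic mentioned above can be approximated by simple threshold checks over the stream of vital signs. The sketch below is a hypothetical simplification; the thresholds and colour bands are placeholders, not clinical alarm limits.

# Hypothetical severity-coded alarm check for streamed vital signs.
# Thresholds are placeholders for illustration, not clinical alarm limits.

def classify_vitals(heart_rate, spo2):
    """Return a colour-coded severity for a single set of readings."""
    if spo2 < 85 or heart_rate > 150 or heart_rate < 40:
        return "red"       # critical: notify the responsible doctor immediately
    if spo2 < 92 or heart_rate > 120 or heart_rate < 50:
        return "yellow"    # warning: flag on the remote display
    return "green"         # within the configured normal range

stream = [(78, 98), (118, 93), (135, 88), (155, 82)]   # (heart rate, SpO2) samples
for hr, spo2 in stream:
    print(hr, spo2, "->", classify_vitals(hr, spo2))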
Mobile apps that supplement medical devices are being developed.5 One example is iStethoscope, which
uses the microphone function of the iPhone to auscultate and record.5 While this app isn’t officially
intended for use as a medical device, it is significant in that its existence suggests that mobile devices can
eventually replace medical devices.5 Mobile devices have also been used to accurately track heart rate and
heart-rate variability.4 In January 2011, MobiSante became the first company to receive FDA approval for
a smartphone-based medical diagnostic tool that uses an ultrasound probe for echocardiography. 4 Work
has also already been initiated to develop ECG recording devices that work with smartphones.4

Medical Education and Training


Mobile devices play an increasingly important role in medical education as students and schools use more
technology during training.4 Mobile devices are used by health care students in a variety of ways: to log
their experiences, to access information about medical conditions and drug treatment, to perform
calculations, and to make basic notes.1
Mobile devices have become ubiquitous in educational settings, particularly because they are a “learn
anywhere” resource for accessing information or double-checking knowledge.1,15 Health care students are
increasingly relying on mobile devices as a “pocket brain” for quick, easy access to information they need
in order to succeed in their programs and careers.9 Resources frequently used by health care students
include: online textbooks and lectures, medical podcasts, medical calculators, and search engines to look
up unfamiliar terms.1 In addition, many mobile apps for health care students can be used for knowledge
assessment, such as case study quizzes or tests to help prepare for board examinations. 6,7 The ability to
access all of these resources has been shown to enhance student learning in the clinical environment and
to increase student knowledge scores.1
Mobile devices are also used by practicing HCPs for educational purposes, especially for CME activities
that keep them informed about the most current evidence-based information and medical
practices.3,4,7 QuantiaMD has a mobile CME app that provides well-scripted interactive case studies that
can be shared with colleagues.4 In a survey of medical school faculty, residents, and students, 75%, 95%,
and 55%, respectively, agreed that using a mobile device for rapid access to educational resources while
on the go had a positive educational effect.1
Mobile devices have been shown to be important tools for teaching medical curricula. In one example,
doctors who used a mobile device app during advanced life-support training had significantly improved
scores during cardiac arrest simulation testing.4 Mobile apps such as Touch Surgery or vCath are available
for simulated surgery training.6 In addition, Northwestern University’s Feinberg School of Medicine
faculty uses an iPhone app to assess residents’ autonomy and skill level in the operating room based on
self-assessment and the attending surgeon’s evaluation.6

BENEFITS PROVIDED BY MOBILE DEVICES AND APPS FOR HEALTH CARE PROFESSIONALS

Mobile devices and apps have provided many benefits for HCPs, allowing them to make more rapid
decisions with a lower error rate, increasing the quality of data management and accessibility, and
improving practice efficiency and knowledge.1,7,8,10,20 Most importantly, these benefits have been shown to
have a positive effect on patient care outcomes, as evidenced by a reduction in adverse events and
hospital length of stay.8,10 These and other benefits mobile devices and apps provide to HCPs are
discussed in the following section.

Convenience
Many mobile apps have made the practice of evidence-based medicine at the point of care more
convenient.7,14 Health care professionals associate numerous conveniences with using a mobile device in
clinical practice, such as: portability, rapid access to information and multimedia resources, flexible
communications, and a choice of powerful apps to accomplish many different purposes.1,12 Medical school
HCPs and students cite access to information instantaneously at the time of need as a major
convenience.1Other studies describe keeping current through access to updates about new books,
guidelines, reviews, and medical literature as an appreciated convenience.10 Health care students also no
longer have to carry reference books, since many can now be accessed with a mobile
device.6 Consequently, students can carry all of the information found in standard medical textbooks and
other necessary references in one small device that fits in a lab-coat pocket.9

Better Clinical Decision-Making


Many medical apps make mobile devices invaluable tools that support clinical decision-making at the
point of care.7,10 This quality is very important when practicing evidence-based medicine, since clinicians
may not always seek answers to clinical questions after the completion of every clinical
encounter.2,7,10 Practicing clinicians, as well as medical and nursing students, cite the most useful mobile
tools for supporting evidence-based medicine and clinical decision-making as being drug reference,
medical textbook, disease diagnosis, and medical calculator apps.7 The use of mobile devices can also
support better decision-making by pharmacists by providing instant access to multiple drug information
sources and other medical references.2
Studies have reported an increase in the appropriateness of diagnoses and treatment decisions when
mobile devices were used for clinical decision support, particularly when a CDSS app was used.8 Data
have shown that when electronic references were consulted, there were twice as many adjustments in
patient management decisions compared to cases in which only paper resources were available.10 Another
study tested participants’ understanding of prescribing accuracy and found that the use of a mobile device
improved drug knowledge and understanding to a statistically significant level (P = 0.005).8 Similarly, a
risk assessment study evaluated the occurrence of gastrointestinal side effects with nonsteroidal anti-
inflammatory drugs and found that unsafe prescribing was significantly reduced (P = 0.001) in the group
of HCPs that used a mobile device app compared with a control group.8,10

Improved Accuracy
Mobile devices have repeatedly been found to improve the completeness and accuracy of patient
documentation, an effect that has often been attributed to ease of use.2,8,10,15 More accurate diagnostic
coding, more frequent documentation of side effects, and increased medication safety through reduced
medical errors have been reported.10 Based on a more detailed description of clinical findings and a
correct progress assessment, documentation prepared using a mobile device was judged to be of higher
quality than documentation prepared using paper records.10 Inclusion of specific intervention rules on a
mobile device has been found to significantly reduce prescription error rates (P < 0.05).10 Use of a mobile
device significantly reduced discharge order list errors (from 22% to 8%, P < 0.05) and yielded fewer
discrepancies in recording neonatal patient weights in intensive care compared to using paper
records.10 Timely communication within hospitals has also been determined to reduce medical errors,
especially in critical care environments.7

Increased Efficiency
Evidence has shown that mobile devices allow HCPs to be more efficient in their work practices. 3,10 The
Deloitte Center for Health Solutions 2013 Survey of U.S. Physicians found that most doctors believe that
meaningful adoption of health information technology (EHRs, e-prescribing, health information
exchange, analytics/decision support, patient support tools [websites, mobile apps, tools to track and
manage health and wellness], and mobile health technologies [ tablets, smartphones]) can improve the
efficiency of clinical practice.21
The use of mobile devices has been shown to provide HCPs with numerous enhanced efficiencies,
including: increased quality of patient documentation through fewer errors and more complete records,
more rapid access to new information, and improved workflow patterns.10 Physicians have reported that
the use of a mobile device for retrieving information from a drug database led to more efficient decision-
making and patient care.10 Physicians working in health care organizations have cited improved care
coordination, as well as quicker and more efficient access to clinical support resources (guidelines, lab
tests, and reports) as principal benefits associated with mobile device use.10 Physicians who used mobile
devices during patient rounds reported spending less time accessing, retrieving, and recording data and
said that the increased efficiency freed up more time for direct patient care. 10 In contrast, another study
found that the increased efficiency in median doctor–patient encounter time (227 vs. 301 seconds)
provided by the use of mobile devices, rather than paper resources, resulted in less time spent with the
patient.10

Enhanced Productivity
Research has shown that the use of mobile devices at the point of care has helped streamline workflow
and increase the productivity of HCPs.2 Mobile devices have been found to cause a significant increase in
the average rate of electronic prescribing, from 52% to 64% (P = 0.03).10 Mobile apps can also increase
pharmacist productivity by allowing important drug information, such as contraindications and
interactions, to be checked quickly, resulting in more rapid processing of prescriptions.22 Pharmacists
using a mobile device reported recording more information and completing more fields, which resulted in
more thorough documentation.10
Studies that investigated patient record maintenance and revision found that more patient information was
documented when a mobile device was used, reportedly because of ease of use in comparison to paper
records.8 Another study found a statistically significant difference (P = 0.0001) in the number of
diagnoses documented with a mobile device compared to paper records.8 Mobile apps can also help
increase productivity by improving professional and personal time and information management.2

FUTURE TRENDS FOR MOBILE DEVICES AND APPS IN HEALTH CARE


Several interesting trends regarding the use of mobile devices and apps in health care have been predicted
for the future. As better health outcomes become the ultimate goal of the health care system, apps will be
needed to fulfill that purpose.23 The prevention and management of chronic health conditions, such as
diabetes, obesity, and heart disease, present serious problems for HCPs, patients, and the health care
system.23 Patient care management and compliance are difficult challenges, too, so apps that successfully
address these issues are needed and eagerly awaited.22 Apps that support caregivers and promote better
communication among patients, physicians, and other resources have also been identified as important
unmet needs.23 As patient ownership of mobile devices increases, new opportunities for direct
communication with HCPs and for improved self-monitoring and disease prevention are expected to
develop.10
Mobile device hardware and apps are expected to continue to improve, bringing additional and enhanced
benefits to clinical practice.1,10 Future mobile apps are expected to include even larger databases, as well
as CDSS prompts that will aid in clinical decision-making, similar to features that are already built into
the EMR systems on desktop computers in clinical settings.5 Various other types of mobile apps will
continue to evolve and transform into CDSS apps that incorporate artificial intelligence–oriented
algorithms.5 There is also a need to develop standards for mobile apps so that they can integrate
seamlessly with HIS capabilities, such as EMRs and patient monitoring systems.7,8 This may require in-
house CDSSs that are carefully custom designed for each patient care setting.8 Such measures will enable
HCPs to use mobile apps in a more meaningful way that hopefully leads to improved patient care.7
The role played by mobile devices and apps in health care education is also expected to grow. 1,4 Medical
school HCPs and students predict that mobile devices and apps will become even more integrated into
patient care and will eventually completely replace textbooks.1 As the use of medical devices and apps
expands, more educational health care programs are expected to incorporate them into medical
curricula.1,4
Several issues challenge the future integration of mobile devices and apps into health care
practice.2 While the majority of HCPs have adopted the use of mobile devices, the use of these tools in
clinical care has been debated since their introduction, with opinions ranging from overwhelming support
to strong opposition.1,4 Among the concerns raised regarding mobile devices are: their reliability for
making clinical decisions; protection of patient data with respect to privacy; impact on the doctor–patient
relationship; and proper integration into the workplace.10,14,22 In addition, HCPs have expressed concerns
about lack of oversight with respect to standards or content accuracy, especially for apps involved in
patient management.14 Older HCPs, as well as those who are intimidated by or less inclined to use new
technologies, may be at a disadvantage if the use of mobile devices becomes a requirement within the
health care fields.4
The increased use of these devices by clinicians in their personal and working lives has also raised
important medicolegal and ethical implications.8 Consequently, establishing standards and policies within
health care institutions will be necessary to ensure ethical and transparent conduct.7,11 A call has also been
made for the examination of the effect of mobile devices and medical apps on clinical
education.4 Adoption of these recommended measures will be greatly helpful in guiding clinicians,
administrators, educators, and researchers in determining how to best incorporate these increasingly
sophisticated tools into clinical practice.10 Best-practice standards for medical app developers should also
be established.11 These standards will raise the barrier for entry into the medical app market, limiting the
overwhelming quantity and increasing the quality of the apps currently available to HCPs and patients.11
It is also important that mobile medical apps that claim diagnostic or therapeutic efficacy be evaluated
with regard to claimed outcome, as well as utility in clinical practice.4,8,10,11 While many mobile medical
apps have been available for years and are very popular, there is still a lack of data that support or identify
the best approach to their use.4,10 As more data become available, this will lead to a more useful selection
of validated mobile medical apps for HCPs.11 Toward this end, in September 2013, the FDA released
long-awaited guidelines concerning regulation of mobile device apps, announcing that the agency will
evaluate apps that are “used as an accessory to a regulated medical device; or transform a mobile platform
into a regulated medical device.” 22,24 The FDA has chosen to exercise only enforcement discretion for
apps that are deemed to pose less risk, such as those that inform or assist patients in managing their
disease without providing treatment suggestions, or simple tools that allow patients to track or organize
health information or interact with their EHRs.

CONCLUSION
Medical devices and apps are already invaluable tools for HCPs, and as their features and uses expand,
they are expected to become even more widely incorporated into nearly every aspect of clinical
practice.1,2However, some HCPs remain reluctant to adopt their use in clinical practice. 1,4 Although
medical devices and apps inarguably provide the HCP with many advantages, they are currently being
used without a thorough understanding of their associated risks and benefits.11 Rigorous evaluation,
validation, and the development of best-practice standards for medical apps are greatly needed to ensure a
fundamental level of quality and safety when these tools are used.11 With the implementation of such
measures, the main determinant of an app’s value may ultimately be its ability to provide meaningful,
accurate, and timely information and guidance to the end user in order to serve the vital purpose of
improving patient outcomes.
The provision of effective emergency telemedicine and home monitoring solutions forms the major field of interest discussed in this study. Ambulances, Rural Health Centers (RHC) and other remote health locations, such as ships navigating in wide seas, are common examples of possible emergency sites, while critical care telemetry and telemedicine home follow-ups are important issues of telemonitoring. In order to support these different, growing application fields, we created a combined real-time and store-and-forward facility that consists of a base unit and a telemedicine (mobile) unit. This integrated system:
 can be used when handling emergency cases in ambulances, RHC or ships, by using a mobile telemedicine unit at the emergency site and a base unit at the hospital expert's site;
 enhances intensive health care provision, by giving a mobile base unit to the ICU doctor while the telemedicine unit remains at the ICU patient's site; and
 enables home telemonitoring, by installing the telemedicine unit at the patient's home while the base unit remains at the physician's office or hospital.
The system allows the transmission of vital biosignals (3–12 lead ECG, SpO2, NIBP, IBP, Temp) and still images of the patient. The transmission is performed through the GSM mobile telecommunication network, through satellite links (where GSM is not available) or through Plain Old Telephony Systems (POTS) where available. Using this device, a specialist doctor can telematically "move" to the patient's site and instruct unspecialized personnel when handling an emergency or telemonitoring case. Due to the need for storing and archiving all data interchanged during the telemedicine sessions, we have equipped the consultation site with a multimedia database able to store and manage the data collected by the system. The performance of the system has been technically tested over several telecommunication means; in addition, the system has been clinically validated in three different countries using a standardized medical protocol.
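
The combined real-time and store-and-forward behaviour can be pictured as a queue of biosignal records that is flushed over whichever bearer (GSM, satellite link, or POTS) is currently available. The Python below is a conceptual simplification; the bearer priorities and record fields are assumptions, not the actual implementation described in the study.

# Conceptual sketch of the telemedicine unit's store-and-forward behaviour.
# Bearer priorities and record fields are illustrative assumptions only.

from collections import deque

BEARER_PRIORITY = ["GSM", "satellite", "POTS"]   # try GSM first, then fall back

class TelemedicineUnit:
    def __init__(self, available_bearers):
        self.available = set(available_bearers)
        self.outbox = deque()                    # biosignal records awaiting transmission

    def record(self, signal_type, value):
        self.outbox.append({"signal": signal_type, "value": value})

    def flush(self):
        """Send queued records over the first available bearer, or keep storing them."""
        bearer = next((b for b in BEARER_PRIORITY if b in self.available), None)
        if bearer is None:
            return f"No link available; {len(self.outbox)} records stored for later"
        sent = len(self.outbox)
        self.outbox.clear()
        return f"Sent {sent} records to the base unit over {bearer}"

unit = TelemedicineUnit(available_bearers=[])    # e.g. a ship outside GSM coverage
unit.record("12-lead ECG", "waveform blob")
unit.record("SpO2", 93)
print(unit.flush())                              # no link yet: records are stored
unit.available.add("satellite")
print(unit.flush())                              # satellite link comes up: forward them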

Keywords
Emergency Health Care Telemedicine GSM Satellite

Background
Telemedicine is defined as the delivery of health care and sharing of medical knowledge over a
distance using telecommunication means. Thus, the aim of Telemedicine is to provide expert-
based health care to understaffed remote sites and to provide advanced emergency care through
modern telecommunication and information technologies. The concept of Telemedicine was
introduced about 30 years ago through the use of nowadays-common technologies like telephone
and facsimile machines. Today, Telemedicine systems are supported by State of the Art
Technologies like Interactive video, high resolution monitors, high speed computer networks and
switching systems, and telecommunications superhighways including fiber optics, satellites and
cellular telephony [1].

The availability of prompt and expert medical care can meaningfully improve health care
services in understaffed rural or remote areas. The provision of effective emergency
Telemedicine and home monitoring solutions is the major field of interest discussed in this
study, and there is a wide variety of settings where these solutions are crucial.
Ambulances, Rural Health Centers (RHC) and ships navigating in open seas are
common examples of possible emergency sites, while critical care telemetry and Telemedicine
home follow-up are important telemonitoring issues. In emergency cases where immediate
medical treatment is the issue, recent studies conclude that early and specialized pre-hospital
patient management contributes to the patient's survival [2]. Especially in cases of serious head
injuries, spinal cord or internal organ trauma, the way the incidents are treated and transported
is crucial for the future well-being of the patients.

A quick look at past car-accident statistics makes the point clear: during 1997, 6,753,500
incidents were reported in the United States [3], in which about 42,000 people lost their lives and
2,182,660 drivers and 1,125,890 passengers were injured. In Europe during the same period, 50,000
people died as a result of car-crash injuries and about half a million were severely injured.
Furthermore, studies completed in 1997 in Greece [4], a country with the world's third highest
death rate due to car crashes, show that 77.4% of the 2,500 people fatally injured in accidents were
injured far away from any competent healthcare institution, thus resulting in long response times.
In addition, the same studies reported that 66% of the deceased passed away during the first
24 hours.

Coronary artery disease is another common example of high death rates in emergency or home
monitoring cases, since two thirds of all patients still die before reaching the central hospital. In a
study performed in the UK in 1998 [5], it is sobering to see that among patients above 55 years
of age who die from cardiac arrest, 91% do so outside hospital, due to a lack of immediate
treatment. In cases where thrombolysis is required, survival is related to the "call to needle" time,
which should be less than 60 minutes [6]. Thus, time is the enemy in the acute treatment of heart
attack or sudden cardiac death (SCD). Many studies worldwide have proven that a rapid
response time in pre-hospital settings for the treatment of acute cardiac events decreases
mortality and improves patient outcomes dramatically [7]-[12]. In addition, other studies have
shown that a 12-lead ECG performed during transportation increases the time available to perform
thrombolytic therapy effectively, thus preventing death and maintaining heart muscle function
[13]. The reduction of all these high death rates is definitely achievable through strategies and
measures which improve access to care, administration of pre-hospital care and patient
monitoring techniques.

Critical care telemetry is another case of handling emergency situations. The main point is to
continuously monitor intensive care unit (ICU) patients at a hospital and at the same time to
display all telemetry information to the competent doctors anywhere, anytime [14]. In this
pattern, the responsible doctor can be informed about the patient's condition on a 24-hour basis
and provide vital consulting even when not physically present. This is feasible through advanced
telecommunication means, in other words via Telemedicine.

Another important Telemedicine application field is home monitoring. Recent studies [15] show that
the number of patients being managed at home is increasing, in an effort to cut part of the
high cost of hospitalization while trying to increase patient comfort. Using low-cost televideo
equipment that runs over regular phone lines, providers are expanding the level of care while reducing
the frequency of visits to healthcare institutions [16]. In addition, a variety of diagnostic devices
can be attached to the system, giving the physician the ability to see and interact directly with
the patient. For example, pulse oximetry and respiratory flow data can be electronically
transmitted (for patients with chronic obstructive pulmonary disease). Diabetes patients can have
their blood glucose and insulin syringes monitored prior to injection to verify the correct insulin dosage.
Furthermore, obstetric patients can have their blood pressure and fetal heart pulses monitored
remotely and stay at home rather than being prematurely admitted to a hospital.

It is common knowledge that the people who monitor patients at home or who are the first to handle
emergency situations do not always have the advanced theoretical background and
experience required to manage all cases properly. Emergency Telemedicine and home monitoring can
solve this problem by enabling experienced neurosurgeons, cardiologists, orthopedists and other
skilled people to be virtually present at the emergency medical site. This is done through
wireless transmission of vital biosignals and on-scene images of the patient to the experienced
doctor. A survey [17] of the Telemedicine market states that emergency Telemedicine is the
fourth most needed Telemedicine topic, with 39.8% coverage of market requests, while home
healthcare covers 23.1%. The same survey also points out that the use of such state-of-the-art
technologies has enhanced patient outcomes by 23%.

Several systems that could cover emergency cases [18]-[23], home monitoring cases [24]-[25]
and critical care telemetry [14] have been presented over the years. Recent developments in
mobile telecommunications and information technology have enhanced the capability to develop
telemedicine systems using wireless communication means [26]-[32]. In most cases, however,
only the store-and-forward procedure was successfully elaborated, while the great majority of
emergency cases require real-time transmission of data.

In order to cover as many of the above different and growing demands as possible, we created a
combined real-time and store-and-forward facility that consists of a base unit and a telemedicine
unit. This integrated system:

 Can be used when handling emergency cases in ambulances, RHC or ships, with the
Telemedicine unit at the emergency site and the expert's medical consultation provided from the base unit
 Enhances intensive health care provision by giving the telemedicine unit to the ICU doctor while
the base unit is incorporated with the ICU's in-house telemetry system
 Enables home telemonitoring, by installing the telemedicine unit at the patient's home while the
base unit remains at the physician's office or hospital.
The Telemedicine device is compatible with some of the main vital-signs monitors, such as the
Johnson & Johnson CRITIKON Dinamap Plus and the Welch Allyn – Protocol (Propaq). It is
able to transmit both 3- and 12-lead ECGs, vital signs (non-invasive blood pressure, temperature,
heart rate, oxygen saturation and invasive blood pressure) and still images of a patient using a
wide variety of communication means (Satellite, GSM and Plain Old Telephone Service –
POTS). The base unit comprises a set of user-friendly software modules that can receive
data from the Telemedicine device, transmit information back to it and store all data in a
database at the base unit. The communication between the two parts is based on the TCP/IP
protocol. The general framework for the above system was developed under the EU-funded TAP
(Telematics Applications Programme) projects, the EMERGENCY 112 project (HC 4027) [33]
and the AMBULANCE project (HC 1001) [22].

Methods
Trends and needs of Telemedicine systems
As mentioned above, the scope of this study was to design and implement an integrated
Telemedicine system able to handle different Telemedicine needs, especially in the fields of:

 Emergency health care provision in ambulances, Rural Health Centers (or any other remotely
located health center) and navigating ships
 Intensive care patient monitoring
 Home telecare, especially for patients suffering from chronic and/or permanent diseases (like
heart disease).
In other words, we defined a "multi-purpose" system consisting of two major parts: a) a
Telemedicine unit (portable or not, depending on the case) and b) a base unit or doctor's unit
(portable or not, depending on the case, and usually located at a central hospital).

Figure 1 describes the overall system architecture. In each different application the Telemedicine unit is
located at the patient's site, whereas the base unit (or doctor's unit) is located at the place where the
signals and images of the patient are sent and monitored. The Telemedicine device is responsible for
collecting data (biosignals and images) from the patient and automatically transmitting them to the base unit.
The base unit is comprised of a set of user-friendly software modules, which can receive data from the
Telemedicine device, transmit information back to it and store important data in a local database. The
system has several different applications (with small changes each time), according to the nature and
needs of the healthcare provision in each case.



Figure 1

Overall system architecture


Before the system's technical implementation, an overview of the current trends and needs in the
aforementioned Telemedicine applications was made, so that the different requirements could be taken into
account during design and development, thus ensuring maximum applicability and usability of the final
system in distinct environments and situations. Table 1 provides the results of this overview, which was
done against a predefined list of criteria that usually influence a Telemedicine application
implementation (cost, portability, autonomy, weight and size of the Telemedicine device, type and quality of
PC and camera, communication means used).

Table 1

Overview of current trends and needs in Telemedicine applications

Telemedicine application | Cost | Portability | Autonomy | Small weight & size | PC type | Camera quality | Communication means
Ambulance | Medium/High | High | High | High | Palmtop | Medium/High | GSM
RHC | Medium/High | Low | Low | Low | Desktop/Laptop | Medium/High | POTS, GSM
Ship | Medium/High | Low/Medium | Low/Medium | Low | Desktop/Laptop | Medium/High | GSM, Satellite
Home care | Low | Low/Medium | Low/Medium | Low | Desktop/Laptop | High | POTS
Intensive care room | Medium/High | Low | Low | Low | Desktop | High | POTS, GSM
As shown in Table 1, low cost is a very crucial aspect for home telecare, since the costs are
covered by the patient and not by the hospital (in contrast with all other applications). Portability,
autonomy and small weight & size of the Telemedicine device are very important
in ambulance applications, where the device also needs to be carried to the scene (outside the
ambulance). This is also related to the PC type, which in ambulances must be a palmtop or a
sub-notebook, while in other cases it can be a desktop or laptop. The camera quality in all
applications should be as high as possible, but in certain cases, like the intensive care room, it is of
utmost importance. As far as communication is concerned, in ambulances and ships GSM is
the major means, while in RHC and home care it is POTS. Satellite links are suggested mainly for
ships, but it should always be taken into account that costs rise steeply with the quality of
the links (in other words, a lot of money must be spent to obtain reliable equipment for
transmission via satellite links). User friendliness is important in all applications, but even more
important in home telecare, where non-specialized or untrained staff use the device.

Besides the above, the Telemedicine applications can be examined against other criteria, such as
security needs, transmission type (continuous, store & forward), ECG leads
required (3 or 12 leads), etc. These are examined in more detail in the next paragraph, where
the overall technical description of the system is provided.

System design and technical implementation


As mentioned above, the system consists of two separate modules (Figure 1): a) the unit located
at the patient's site called "Telemedicine unit" and b) the unit located at doctor's site called "Base
Unit". The Doctor might be using the system either in an Emergency case or when monitoring a
patient from a remote place.

The design and implementation of the system was based on a detailed user requirements
analysis, as well as the corresponding system functional specifications. The study was mainly
based on the experience of the Telemedicine projects AMBULANCE [22] and Emergency
112 [33], in which functional prototypes of a device with emergency Telemedicine functionalities
were built and extensively evaluated. Through these projects we faced the need to implement
a telemedicine device which would facilitate a flexible architecture and could be used in several
emergency or monitoring cases that have similar information-transmission needs.
The Telemedicine unit is responsible for collecting and transmitting biosignals and still images of the
patients from the incident place to the Doctor's location while the Doctor's unit is responsible for
receiving and displaying incoming data. The information flow (using a layered description) between the
two sites can be seen in Figure 2.

Figure 2

Information Flow within the Telemedicine system (Telemedicine and base units)
The software design and implementation follows the client-server model; it was done using
Borland Delphi 5 [34] for the Windows 95/98/NT/2000 platform. The Telemedicine unit site is the
client, while the Base unit site is the server. Communication between the two parts is achieved
using TCP/IP as the network protocol, which ensures safe data transmission and interoperability
over different telecommunication means (GSM, Satellite, and POTS). System communications
are based on a predefined communication protocol for data interchange, which is used to control
and maintain the connection between the two sites, thus ensuring portability, interoperability and
security of the transmitted data. During the design and implementation phase, an extended
codification scheme based on the "VITAL" and "DICOM" standards was developed [35]. The
communication protocol was created based on this experience.
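
As an illustration of how such a predefined data-interchange protocol can be realized on top of TCP/IP, the following Python sketch frames each message with a fixed header carrying a message type and a payload length; the message-type codes and header layout shown here are illustrative assumptions and not the system's actual VITAL/DICOM-based codification scheme.

    # Minimal sketch of a framed message protocol over a TCP connection
    # (illustrative only; the real codification scheme differs).
    import socket
    import struct

    # Hypothetical message-type codes (assumptions, not the system's real codes)
    MSG_ECG = 1        # buffer of ECG samples
    MSG_TRENDS = 2     # NIBP/SpO2/HR/Temp trend values
    MSG_IMAGE = 3      # JPEG image data
    MSG_COMMAND = 4    # command from the base unit (e.g. change lead)

    HEADER = struct.Struct("!BI")   # 1-byte type + 4-byte payload length

    def send_message(sock: socket.socket, msg_type: int, payload: bytes) -> None:
        """Prefix the payload with a type/length header and send it."""
        sock.sendall(HEADER.pack(msg_type, len(payload)) + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        """Read exactly n bytes from the TCP stream (recv may return fewer)."""
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("connection closed")
            data += chunk
        return data

    def recv_message(sock: socket.socket) -> tuple[int, bytes]:
        """Read one framed message and return (type, payload)."""
        msg_type, length = HEADER.unpack(recv_exact(sock, HEADER.size))
        return msg_type, recv_exact(sock, length)

Because both sides read complete frames, either unit can send a command or a data buffer at any time without the two streams getting out of step.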



a) Telemedicine Unit
The Telemedicine unit mainly consists of four modules: the biosignal acquisition module, which
is responsible for biosignal acquisition; a digital camera, responsible for image capture; a
processing unit, which is basically a personal computer; and a communication module (GSM,
Satellite or POTS modem).

The biosignal acquisition module was designed to operate with some of the most common
portable biosignal monitors used in emergency cases or in Intensive care Units such as a)
CRITIKON DINAMAP PLUS Monitor Model 8700/9700 family of monitors, b) PROTOCOL-
Welch Allyn Propaq 1xx Vital Signs Monitor, c) PROTOCOL-Welch Allyn Propaq Encore 2xx
Vital Signs Monitor.

The biosignals collected from the patient (and then transmitted to the Base Unit) are:

 ECG up to 12 lead, depending on the monitor used in each case.


 Oxygen Saturation (SpO2).
 Heart Rate (HR).
 Non-Invasive Blood Pressure (NIBP).
 Invasive blood Pressure (IP).
 Temperature (Temp)
 Respiration (Resp)
The PC used depends on the type of Telemedicine application (the role of the Telemedicine unit). As
shown in Table 1: a) in cases where the autonomy and small size of the system are important (mainly in
ambulances), a sub-notebook such as the Toshiba Libretto 100CT portable PC is used; a picture of a portable
device is shown in Figure 3; b) in cases where some autonomy is needed but size is not considered an
important element, a typical Pentium portable PC is used; c) in cases where autonomy, portability and
small system size are not necessary, a typical Pentium desktop PC is used.



Figure 3

Picture of telemedicine mobile unit (monitor Propaq 2xx is used)


As mentioned before, data interchange is done using the TCP/IP network protocol, which allows
operation over several communication means. The PC is equipped with the proper modem for
each case, i.e. GSM, Satellite or POTS. The design was done for standard Hayes-compatible modems; the
system supports the ETSI AT command set for GSM modems, for Satellite modems and for
standard POTS modems. Several modem types were used for testing: a) a NOKIA Card Phone
2.0 GSM 900/1800 PCMCIA modem card and an Option FirstFone GSM 900 PCMCIA modem card
were used for GSM communication, b) a Micronet 56 K PCMCIA POTS modem and a US
Robotics 33.6 K external modem were used for POTS communication, c) a Mini-M terminal for
ships, the "Thrane & Thrane TT-3064A CAPSAT Inmarsat Maritime Phone", was used for satellite
communication.

The Telemedicine unit is also responsible for the collection and transmission of images of the
patient to the base unit. In order to implement a hardware-independent system, this module was
designed to operate using Microsoft Video for Windows. Several cameras were used while
testing the system: a) a ZOOM digital camera model 1585 connected to the PC's parallel port, b)
a ZOOM digital camera model 1595 connected to the PC's USB port, c) a Logitech QuickCam Express
digital USB camera, d) a Connectix QuickCam VC parallel-port camera, and e) a Creative camera connected to
the USB port.

The control of the Telemedicine unit is fully automatic. The only thing the telemedicine unit user
has to do is connect the biosignal monitor to the patient and turn on the PC. The PC then
performs the connection to the base unit automatically. Although the base unit basically controls
the overall system operation, the Telemedicine unit user can also execute a number of
commands. This option is useful when the system is used in a distant health center or on a ship
and a conversation between the two sites takes place.
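
A minimal sketch of this automatic connection behaviour, under the assumption that the base unit listens on a known address and port, is shown below; the host, port and retry interval are illustrative values only.

    # Sketch: the telemedicine unit dials the base unit until a connection succeeds.
    import socket
    import time

    BASE_UNIT_HOST = "192.0.2.10"   # hypothetical base unit address
    BASE_UNIT_PORT = 5000           # hypothetical service port
    RETRY_SECONDS = 10

    def connect_to_base_unit() -> socket.socket:
        """Keep dialling the base unit until a TCP connection is established."""
        while True:
            try:
                return socket.create_connection((BASE_UNIT_HOST, BASE_UNIT_PORT), timeout=30)
            except OSError:
                time.sleep(RETRY_SECONDS)   # line busy or link down: retry later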

b) Base Unit (or Doctor's Unit)


The base unit mainly consists of a dedicated PC equipped with a modem, which is responsible
for data interchange. In addition, the base unit PC is responsible for displaying incoming signals
from the Telemedicine unit. When an expert doctor uses the base unit outside the hospital
area (as in the Intensive Care Room application – see Figure 1), a portable PC equipped with a
GSM modem or a desktop PC equipped with a POTS modem is used. When the base unit is
located in the hospital, a desktop PC connected to the Hospital Information System (HIS) network and
equipped with a POTS modem can additionally be used; the expert doctor uses it as a processing
terminal.

Through the base unit, the user has full control of the telemedicine session. The user is able to monitor
the connection with a client (telemedicine unit) and send commands to the telemedicine unit, such as the
operation mode (biosignals or images) (Figure 4). In cases where the base station is connected to a Hospital
LAN, the user can choose which of the telemedicine units to connect to; as shown in Figure 5, the user
of the base unit is able to choose and connect to any one of the telemedicine units connected to the
network. The units connected to the network can be ICU telemedicine units or distant mobile
telemedicine units connected through phone lines.
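
The sketch below illustrates one way a base unit could accept several telemedicine units at once and let the operator choose which one to monitor; the idea of each unit announcing a short identifier on connection is an assumption made for illustration.

    # Sketch: base unit accepting several telemedicine units and keeping a registry
    # the operator can select from (the unit-ID handshake is an assumed convention).
    import socket
    import threading

    connected_units: dict[str, socket.socket] = {}
    lock = threading.Lock()

    def register_unit(conn: socket.socket) -> None:
        unit_id = conn.recv(64).decode().strip()   # assumed: unit sends its ID first
        with lock:
            connected_units[unit_id] = conn

    def serve(port: int = 5000) -> None:
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("", port))
        server.listen()
        while True:
            conn, _addr = server.accept()
            threading.Thread(target=register_unit, args=(conn,), daemon=True).start()

    def select_unit(unit_id: str) -> socket.socket:
        """Return the connection of the unit the operator wants to monitor."""
        with lock:
            return connected_units[unit_id]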

Figure 4

Control Window – Base Unit



Figure 5

Telemedicine Network control – Base Unit


The Base Unit user can monitor biosignals or still images coming from the Telemedicine unit, thus
keeping continuous online communication with the patient site. This unit has full control of the
Telemedicine session. The doctor (user) can send all possible commands concerning both still image
transmission and biosignal transmission. Figure 6 presents a typical biosignal-receiving window
(continuous operation).



Figure 6

Biosignal receiving window at Base Unit


When the system operates in still-image mode, the doctor can draw annotations on the image and
send them back to the Telemedicine unit. The Telemedicine unit user can also
annotate the frozen image, and the annotations will then again be transferred to the Base unit.

When operating in biosignal mode (Figure 6), the transmission of vital biosignals can be done in
two ways, continuous or store-and-forward, depending on the number of ECG waveform channels
transmitted and the data transfer rate of the telecommunication channel. In continuous
operation, the Base Unit user can send commands to the Telemedicine Unit monitor, such as lead
change or blood pressure determination; the user can also pause the incoming ECG, move it forward
or backward and perform measurements on the waveform.

c) Hospital Database Unit


When the Base Unit is located in a hospital (especially in emergency handling or in home
telecare), a Hospital Database Unit can be integrated into the system in order to record information
concerning the cases handled. When the system is used for emergency cases, predefined
information is registered for each case, including the incident number, date, time,
initial and final diagnosis, Telemedicine files, etc. This information is compliant with the directive
"Standard Guide for View of Emergency Medical Care in the Computerized Patient Record"
(designation E-1744-95) of the American Society for Testing and Materials (ASTM). When the
system is used in an Ambulance Emergency Medical Service, the database unit is also
responsible for accepting and recording emergency calls, as well as managing the ambulance
vehicle fleet.



In cases where a Hospital Information System (HIS) is already available at the Base Unit site (Hospital),
the doctor (Base Unit user) can retrieve information (using the hospital archiving unit) concerning the
patient's medical history. When HIS is not available, the Hospital Database Unit can handle the patient
medical record by itself (Figure 7).

Figure 7

Patient Information window, Hospital database Unit


The database was designed using Paradox 7 and was equipped with graphical user interface
features built in Borland Delphi 5 for increased user friendliness. All parts of the database are
compatible with Microsoft Windows 95/98/NT/2000. For security reasons, according to
directive 95/46/EC, the database is fully protected against unauthorized access, password
protected and encrypted, while the whole application is password protected with several
access levels depending on user groups.

d) Technical Constraints – Feasibility


Biosignals transmission
Along with the biosignals, information concerning the monitor, such as the alarms or the monitor
status, is transmitted from the Telemedicine unit to the base unit. The ECG waveform and the SpO2
or CO2 waveform (where available) are the continuous signals transmitted; trends are transmitted
for the rest of the data. ECG data are sampled at a rate of 200 samples/sec with 10 bits/sample or 12
bits/sample, for all monitors used, thus generating 2000 bits/sec or 2400
bits/sec for one ECG channel. SpO2 and CO2 waveforms are sampled at a rate of 100
samples/sec with 10 bits/sample, thus generating 1000 bits/sec for one channel.
Trends for SpO2, HR, NIBP, BP, Temp and monitor data are updated with a refresh rate of once
per second, thus adding a small fraction of data to be transmitted, approximately up to 200
bits/sec. All biosignal monitors used with the system can provide digital output of the collected
signals [36, 37, 38].
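
The bit rates quoted above follow directly from the sample rates and sample sizes; the short calculation below reproduces them, using the figures stated in this paragraph (the 200 bits/sec trend overhead is the approximate value given above).

    # Worked calculation of the aggregate biosignal bit rate described above.
    ECG_RATE = 200     # ECG samples per second
    ECG_BITS = 12      # bits per sample (10 or 12 depending on the monitor)
    WAVE_RATE = 100    # samples per second for an SpO2/CO2 waveform
    WAVE_BITS = 10     # bits per sample
    TREND_BPS = 200    # approximate trend/status overhead per second

    def biosignal_bps(ecg_channels: int, extra_waveforms: int = 0) -> int:
        """Bits per second generated for a given channel mix."""
        return (ecg_channels * ECG_RATE * ECG_BITS
                + extra_waveforms * WAVE_RATE * WAVE_BITS
                + TREND_BPS)

    print(biosignal_bps(1))      # one 12-bit ECG channel plus trends -> 2600 bps
    print(biosignal_bps(2, 1))   # two ECG channels + SpO2 + trends   -> 6000 bps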

Image transmission
Images captured by the Telemedicine unit's camera have a resolution of 320 × 240 pixels and are
compressed using the JPEG compression algorithm; the resulting data set is approximately 5–6
KB, depending on the compression rate used for the JPEG algorithm [39].
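
A minimal sketch of this compress-before-transmission step is shown below, using the Pillow imaging library as a stand-in for the Video for Windows capture path actually used; the quality setting is an illustrative assumption.

    # Sketch: compress a captured 320x240 frame to JPEG before transmission.
    from io import BytesIO
    from PIL import Image   # Pillow, used here only for illustration

    def compress_frame(frame: Image.Image, quality: int = 60) -> bytes:
        """Return the JPEG-encoded bytes of a 320x240 version of the frame."""
        frame = frame.convert("RGB").resize((320, 240))
        buf = BytesIO()
        frame.save(buf, format="JPEG", quality=quality)
        return buf.getvalue()   # typically a few kilobytes, as noted above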

Transmission rate
Signal transmission is done using GSM, satellite and POTS links. For the time being, the GSM
network on which the system was technically tested allows transmission of data up to 9600 bps (when
operating in normal mode) and is able to reach up to 43200 bps when using HSCSD (High Speed
Circuit Switched Data). The satellite link transmission rate depends on the equipment and the satellite
system used in each case; it ranges from 2400 bps up to 64000 bps. The use of different satellite
systems can increase the cost of equipment and the cost of use; in our case we used an Inmarsat
Mini-M phone system, which can transmit data only up to 2400 bps but has low equipment and usage cost.
The Plain Old Telephone Service (POTS) allows the transmission of data at rates up to 56000 bps, thus
enabling continuous and fast information transmission (Table 2).

Table 2

ECG channels and way of transferring over several telecommunication means

Tel. mean | 1 ECG channel | 2 ECG channels | 2 ECG channels + other waveform (SpO2, CO2) | 12 ECG channels
GSM | Continuous / Store & Forward | Continuous / Store & Forward | Continuous / Store & Forward | Store & Forward
POTS | Continuous / Store & Forward | Continuous / Store & Forward | Continuous / Store & Forward | Store & Forward
Satellite | Store & Forward | Store & Forward | Store & Forward | Store & Forward
The practical maximum data transfer rate over telecommunication means is never as high as the
theoretical rate; practical data rates depend on the time and the area where the system is used.
Biosignal data transmission can be done in two ways: real-time transmission, where a continuous
signal is transmitted from client to server, or store-and-forward transmission, where signals of a
predefined period of time are stored on the client and transmitted as files to the server. The choice
mainly depends on the maximum data transfer rate of the telecommunication link used
and the digital data output of the biosignal monitor in each case.
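
The choice between the two modes can be expressed as a simple feasibility check: if the bit rate generated by the monitor exceeds the practical rate of the link, the system must fall back to store-and-forward. The sketch below makes that rule explicit; the example rates are the ones discussed in the text.

    # Decide between continuous and store-and-forward transmission, as described above.
    def choose_mode(signal_bps: float, link_bps: float) -> str:
        """Continuous transmission is feasible only if the link can carry the signal."""
        return "continuous" if signal_bps <= link_bps else "store-and-forward"

    print(choose_mode(2400, 9600))   # one ECG channel over GSM    -> continuous
    print(choose_mode(5400, 2400))   # two ECG + SpO2 over Mini-M  -> store-and-forward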



Waveforms Transmission
As mentioned above, monitors from two of the major portable monitor manufacturers were used in this
study; they can provide three- to twelve-lead ECG waveforms and numeric data from other
biosignals (HR, SpO2, NIBP, IP, Temp).

The first of the monitors used, the CRITIKON DINAMAP PLUS monitor, has a digital output of a
continuous one-channel ECG plus biosignals such as NIBP, SpO2, HR and IP, and data concerning
monitor alarms, etc.; all of the above information can be transferred at up to 2200 bps. For this
reason, the continuous transmission of signals from this monitor can be done over GSM,
POTS and 2400 bps satellite links.

The second of the monitors used, the PROTOCOL Propaq monitor, has a digital output of a
continuous one-channel (model 1xx) or two-channel (model 2xx) ECG, plus another waveform such as
SpO2 or CO2, plus biosignal trends such as NIBP, SpO2, HR and IP, and data concerning monitor
alarms, etc. All of the above information can be transferred at up to 2400 bps for a one-channel ECG,
up to 4400 bps for two ECG channels, or up to 5400 bps for two ECG channels plus another
waveform (SpO2 or CO2). For this reason, the continuous transmission of signals from this
monitor can be done over GSM and POTS, but only a one-lead ECG can be transmitted continuously over
2400 bps satellite links.

Compression & encryption


In order to decrease the data size, a lossless ECG compression algorithm based on the Huffman coding
algorithm [40] is implemented in the system and can be applied to the transmitted signals when
needed by the Base Unit user.
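
A generic sketch of Huffman coding, the lossless technique on which the ECG compressor is based, is given below; it builds a code table from symbol frequencies and encodes a byte buffer. This is an illustration of the principle, not the system's actual implementation [40].

    # Minimal Huffman coder (illustrates the lossless scheme the ECG compressor
    # is based on; not the actual implementation).
    import heapq
    from collections import Counter

    def build_codes(data: bytes) -> dict[int, str]:
        """Build a Huffman code table (symbol -> bit string) from byte frequencies."""
        if not data:
            return {}
        heap = [[freq, i, sym, None, None]
                for i, (sym, freq) in enumerate(Counter(data).items())]
        heapq.heapify(heap)
        next_id = len(heap)
        while len(heap) > 1:
            left = heapq.heappop(heap)
            right = heapq.heappop(heap)
            heapq.heappush(heap, [left[0] + right[0], next_id, None, left, right])
            next_id += 1
        codes: dict[int, str] = {}
        def walk(node, prefix=""):
            if node[2] is not None:            # leaf node holds a symbol
                codes[node[2]] = prefix or "0"
            else:
                walk(node[3], prefix + "0")
                walk(node[4], prefix + "1")
        walk(heap[0])
        return codes

    def encode(data: bytes) -> str:
        """Encode a buffer as a bit string using its own Huffman code table."""
        codes = build_codes(data)
        return "".join(codes[b] for b in data)

Frequent sample values receive short codes, so the more repetitive the ECG byte stream, the larger the size reduction.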

Security of the Telemedicine unit was designed according to directive 97/66/EC, concerning the
processing of medical data in the telecommunication sector. An encryption algorithm was
implemented in the system and can be used when needed by the hospital unit user. The system
can encrypt interchanged data using the Blowfish cipher algorithm [41]. The use of encryption is
optional and can be selected by the user; authentication and connection between the base and
telemedicine units are done using encrypted messages. In any case, in the communication between
the Telemedicine and the base unit only the incident's ID number is used, while the patient's
name or any other relevant information is never mentioned, thus increasing the security of the
whole system.
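
The sketch below shows Blowfish encryption and decryption of a data buffer using the pycryptodome library; the key handling, CBC mode and padding choices are illustrative assumptions rather than the system's actual configuration [41].

    # Sketch: encrypt/decrypt a message buffer with Blowfish (pycryptodome).
    # Key handling, CBC mode and PKCS#7 padding are illustrative assumptions.
    from Crypto.Cipher import Blowfish
    from Crypto.Random import get_random_bytes
    from Crypto.Util.Padding import pad, unpad

    KEY = get_random_bytes(16)   # in practice a secret shared between the two units

    def encrypt(plaintext: bytes) -> bytes:
        iv = get_random_bytes(Blowfish.block_size)          # 8-byte IV
        cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, iv)
        return iv + cipher.encrypt(pad(plaintext, Blowfish.block_size))

    def decrypt(ciphertext: bytes) -> bytes:
        iv, body = ciphertext[:Blowfish.block_size], ciphertext[Blowfish.block_size:]
        cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, iv)
        return unpad(cipher.decrypt(body), Blowfish.block_size)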

Compression and encryption of signals add some delay, especially when a powerful PC is not used for the
Telemedicine unit. This is the reason why both are added to the system as extra
options, which can be disabled by the Base unit user.

Results – Discussion
The final result is a "multi-purpose" Telemedicine system with a flexible
architecture that can be adapted to several different application fields. The system has been
tested and validated with a variety of medical devices and telecommunication means. The results
presented in this section are typical of the needs of system use in Rural Health Centers, in
ambulance vehicles or on a navigating ship.

Data transmission is done using the TCP/IP network protocol. Transmitting data over TCP/IP is
a trivial and easy task when using networks that have high bandwidth and a low error rate. In
order to transmit a buffer of n bytes through TCP/IP, a header of about 55 bytes is added; this adds
a considerable amount of data, especially when small buffers are transmitted (e.g. when
transmitting a buffer of 10 bytes the network protocol will increase this buffer to 65 bytes).
When transmitting a buffer larger than the Maximum Transmission Unit (MTU), the
buffer is fragmented into smaller packets, each of them the size of the MTU; all the small
packets are reassembled when they arrive at the destination site, which causes problems
when one of the fragments is lost [42].

Considering the above two cases, the transmission of data, especially through networks that have
low bandwidth and high error rates (such as the GSM mobile network and satellite links), has to be
done in a way that utilizes the network as efficiently as possible. The transmitted buffers must
have a size that is neither too small nor too large.
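
The overhead argument above can be quantified with a short calculation: for each candidate buffer size, the fraction of the transmitted bytes that is protocol header follows from the roughly 55-byte per-packet overhead mentioned in the text.

    # Per-packet overhead for the candidate buffer sizes, using the ~55-byte
    # header figure quoted above (illustrative calculation).
    HEADER_BYTES = 55
    BUFFER_SIZES = [71, 95, 143, 239, 287, 335, 383, 431, 455, 479]

    for size in BUFFER_SIZES:
        overhead = HEADER_BYTES / (size + HEADER_BYTES)
        print(f"{size:4d}-byte buffer: {overhead:.0%} of transmitted bytes is header")
    # e.g. a 71-byte buffer spends about 44% of the link on headers, a 431-byte
    # buffer about 11%.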

In order to measure the performance of TCP/IP over the GSM network, several sizes of data
buffers were tested. The tests were performed using a GSM modem, a Nokia Card Phone 2.0, for
the telemedicine unit, and a POTS modem, a US Robotics Sportster Voice 56 kbps, for the base
unit. These two devices support the V.42bis compression protocol.

In order to perform the tests, buffers from 71 up to 479 bytes were selected; the buffer sizes are
proportional to the data rate that the Propaq 2xx sends through the RS-232 serial port. The
packets had sizes of 71, 95, 143, 239, 287, 335, 383, 431, 455 and 479 bytes.

Using all the above buffers, we made measurements of the bytes that were received and
transmitted to and from the base unit of the telemedicine system. Figure 8 shows the
bytes transmitted and received by the server unit when a telemedicine unit was connected to the server via
GSM. Numbers 1 to 10 represent the sizes of the buffers used, 1 for the smallest (71 bytes)
up to 10 for the largest (479 bytes). The mean value of the bytes transmitted/received per second was
recorded for 2 minutes per case. As can be seen, transmitting small packets of data caused the
transmission of more bytes because of the overhead added to each buffer. The continuous transmission
of small buffers also caused some problems in the communication and in the overall telemedicine unit
operation; it could stop the operation of the protocol or cause problems when reading data from the
medical monitor (too many system resources were used).



Figure 8

Received and Transmitted bytes per second, using several buffers


Taking all of the above and the measurements of transmitted bytes (Figure 8) into account, we had to
select a buffer size that would not add too much overhead to the transmitted data, would not
cause fragmentation of the transmitted buffers, and would not add too much delay to the real-time
transmitted signal. The selected buffer size was 431 bytes.

Data transmission through GSM


In order to measure the performance of the GSM network [43]-[45] using the optimum buffer
size and the equipment mentioned above, several measurements from a moving vehicle were
performed. The measurements were made from a vehicle moving along some of the main streets of
the city of Athens (Greece) at a speed of approximately 60 km/h.
The tests were performed in such a way, and using the relevant equipment, as to simulate
data calls from an ambulance vehicle. The transmission of ECG signals was done in
continuous mode, while at the same time images of several sizes were transmitted. Tests were
performed for one month using different routes and at different times of day.

Having performed 40 tests of approximately 30 minutes each, we obtained the following results:

a) In order to establish the connection between the telemedicine unit and the base unit, an average time
of 28 seconds was required.

b) 10 images per test were successfully transmitted. The average transmission time for the various image
files was from 18 to 26 seconds (Table 3). Around 93% of image transmissions were achieved on the first
attempt; the remaining 7% were transmitted on a second attempt because of a line failure.

Table 3

Image transfer times – GSM and Inmarsat M satellite

File size | GSM: mean transfer time over 100 files (sec) | GSM: transfer rate (bps) | Inmarsat M: mean transfer time over 100 files (sec) | Inmarsat M: transfer rate (bps)
6 KB | 18 | 2666.7 | 40 | 1200
7 KB | 20.5 | 2731.7 | 43 | 1302.3
8 KB | 24 | 2666.7 | 46.5 | 1376.3
9 KB | 26 | 2769.7 | 47 | 1531.9
c) The transmission of a one-lead ECG waveform was performed in real time. The connection was
interrupted at least once in 15% of all cases, and in some cases there was more than one interruption
(Table 4); reconnection of the telemedicine unit to the base unit was performed successfully in all cases of
interruption.

Table 4

Interruptions for GSM and Inmarsat M satellite connections

Number of interruptions | GSM (percentage of total number of interruptions) | Inmarsat M (percentage of total number of interruptions)
1 | 33.3% | 60%
2 | 16.7% | 20%
3 | 16.7% | 10%
4 | 25% | 10%
More | 8.3% | 0%

Data transmission through Inmarsat satellite links


In order to measure the performance of the system over the Inmarsat satellite network [46],
several measurements were performed. The measurements were made on a yacht using a Mini-M
terminal for ships, the "Thrane & Thrane TT-3064A CAPSAT Inmarsat Maritime Phone", for the
telemedicine unit, and a POTS modem, a US Robotics Sportster Voice 33.6 kbps, for the base unit.
The Mini-M device is able to transmit data at a rate of up to 2400 bps. The GSM optimum buffer
size was used in this case too. The tests were performed in such a way, and using the relevant
equipment, as to simulate emergency data calls from a yacht. The transmission of ECG
signals was done in continuous mode, while at the same time images of several sizes were
transmitted.



Having performed 40 tests of approximately 30 minutes each, at different times of day, we obtained the
following results:

a) In order to establish the connection between the telemedicine unit and the base unit, an average time
of 40 seconds was required.

b) 10 images per test were successfully transmitted. The average transmission time for the various image
files was from 40 to 47 seconds (Table 3). Around 90% of image transmissions were achieved on the
first attempt; the remaining 10% were transmitted on a second attempt because of a line failure.

c) The transmission of two ECG lead waveforms and a pulse oximetry waveform was performed in real
time. The connection was interrupted at least once in 20% of all cases, and in some cases there was more
than one interruption (Table 4); reconnection of the telemedicine unit to the base unit was performed
successfully in all cases of interruption.

Clinical Tests
The system has been clinically tested through installation and extended validation of the system
in a number of distinct demonstration sites across Europe.

More specifically, the use of the developed system for handling emergency cases in ambulances has been
extensively demonstrated in Greece (Athens Medical Centre), Cyprus (Nicosia General Hospital), Italy
(Azienda Ospedaliera Pisa) and Sweden (Malmo Ambulance Services). The initial demonstration of the
system for ambulance emergency cases was performed on 100 (not severe) emergency cases for each
hospital. The results of this phase were very promising. The system was able to reduce the percentage
of emergency incidents in which the initial diagnosis did not match the final diagnosis: for 100 cases
handled without the system, 13% of the initial diagnoses did not match the final diagnosis, while in 100
cases handled with the system, only 8% of the initial diagnoses did not match the final diagnosis. The use
of the system in Rural Health Centers has been tested extensively in Cyprus, where the national
emergency system will be built on top of the already installed application. The use of the system on a
ship is currently being tested in Athens, Greece, and the use in home telecare is also being tested in
Athens, Greece. The system is currently installed and in use in two different countries, Greece (Figure 9)
and Cyprus (Figure 10).



Figure 9

Telemedicine network – Greece

Figure 10

Telemedicine network – Cyprus


Conclusions
We have developed a medical device for telemedicine applications. The device uses GSM
mobile telephony links, satellite links or POTS links and allows the collection and transmission
of vital biosignals and still images of the patient, with bi-directional telepointing capability. The
advanced man-machine interface enhances the system functionality by allowing the users to
operate in hands-free mode while receiving data and communicating with specialists. In order to
introduce the system into daily health care provision, the system has been clinically tested using a
controlled medical protocol. The final system is currently installed and used in two different
countries, Greece and Cyprus. Results from the system's use are very promising, encouraging
us to continue the development and improvement of the system in order to cover
additional future needs.
