
01 - Chapter Introduction To AMQ Streams


 

Introduction 
AMQ Streams  
Audience 
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been 
the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of 
type and scrambled it to make a type specimen book. It has survived not only five centuries, but 
also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 
1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently 
with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum. 

Course Objectives 
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been 
the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of 
type and scrambled it to make a type specimen book. It has survived not only five centuries, but 
also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 
1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently 
with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum. 

Prerequisites 
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been 
the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of 
type and scrambled it to make a type specimen book. It has survived not only five centuries, but 
also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 
1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently 
with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum. 

   

Copyright © 2020 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Red Hat logo, and JBoss are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
 

Orientation to the Classroom Environment

Diagram of Kafka cluster on OCP

Table 1. Classroom Environment

URL | User | Password

Overview of AMQ Streams 


Objectives 

After completing this section, students should be able to:

● Describe the key features of AMQ Streams.
● Identify the components of AMQ Streams.

Introducing AMQ Streams

AMQ Streams is an enterprise-grade distributed streaming platform that scales elastically to handle massive volumes of streaming data. Based on the upstream Apache Kafka project, it is built on a storage system that persists and replicates data, and can retain it for as long as needed.

LinkedIn engineering built Kafka to support real-time analytics. Kafka was designed to feed analytics systems that process streams in real time, and LinkedIn developed it as a unified platform for handling streaming data feeds. The goal behind Kafka was to build a high-throughput streaming data platform that supports high-volume event streams such as log aggregation and user activity tracking.

To scale to meet LinkedIn's demands, Kafka is distributed and supports sharding and load balancing. These scaling needs inspired Kafka's partitioning and consumer model: Kafka scales writes and reads with partitioned, distributed commit logs. Kafka's form of sharding is called partitioning. (Kinesis, which is similar to Kafka, calls partitions shards.)
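The partitioning model can be sketched in a few lines. This is an illustrative stand-in, not Kafka's actual partitioner (which hashes keys with murmur2); it only shows the idea that a record's key deterministically selects a partition, preserving per-key ordering:

```python
import hashlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    """Map a record key to a partition, similar in spirit to Kafka's
    default partitioner (which uses murmur2 rather than MD5)."""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records with the same key always land in the same partition,
# so all events for one entity stay in order.
assert assign_partition(b"order-42", 6) == assign_partition(b"order-42", 6)
```

Because the mapping depends only on the key and the partition count, any broker or client computes the same placement without coordination.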

A database shard is a horizontal partition of data in a database or search engine. Each individual partition is referred to as a shard, and each shard is held on a separate database server instance to spread load.

Kafka was designed to handle periodic large data loads from offline systems as well as traditional low-latency messaging use cases.

MOM is message-oriented middleware: think IBM MQSeries, JMS, ActiveMQ, and RabbitMQ. Like many MOMs, Kafka provides fault tolerance for node failures through replication and leader election. However, the design of Kafka is more like a distributed database transaction log than a traditional messaging system. Unlike in many MOMs, replication was built into Kafka's low-level design rather than added as an afterthought.

AMQ Streams main components

Kafka Broker 

A messaging broker responsible for delivering records from producing clients to consuming clients.

Apache ZooKeeper is a core dependency of Kafka, providing a highly reliable cluster coordination service.

Kafka Streams API 

API for writing stream processor applications. 

Producer and Consumer APIs 

Java-based APIs for producing and consuming messages to and from Kafka brokers. 
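A minimal sketch of the configuration these clients consume; the property names are standard Kafka client settings, while the bootstrap address, group ID, and serializer choices are placeholders for your environment:

```properties
# Producer settings (e.g. producer.properties)
bootstrap.servers=my-cluster-kafka-bootstrap:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
acks=all

# Consumer settings (e.g. consumer.properties)
group.id=my-consumer-group
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
auto.offset.reset=earliest
```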

Kafka Bridge 

AMQ Streams Kafka Bridge provides a RESTful interface that allows HTTP-based clients to 
interact with a Kafka cluster. 
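As an illustration, producing a record through the bridge is a plain HTTP request. The request shape below follows the Strimzi Kafka Bridge API on which AMQ Streams is based; the hostname, topic name, and payload are placeholders:

```http
POST /topics/my-topic HTTP/1.1
Host: my-bridge:8080
Content-Type: application/vnd.kafka.json.v2+json

{"records": [{"key": "order-42", "value": {"amount": 10}}]}
```

This lets clients in any language interact with Kafka without a native Kafka client library.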

Kafka Connect 

A toolkit for streaming data between Kafka brokers and other systems using Connector plugins. 
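Connectors are defined declaratively and submitted to Kafka Connect's REST API as JSON. The sketch below uses the FileStreamSource connector that ships with Apache Kafka as an example; the connector name, file path, and topic are placeholders:

```json
{
  "name": "file-source-example",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "my-topic"
  }
}
```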

Kafka MirrorMaker 

Replicates data between two Kafka clusters, within or across data centers. 

Kafka Exporter

An exporter that extracts Kafka metrics data for monitoring.

AMQ Streams architecture

A cluster of Kafka brokers is the hub connecting all of these components. The brokers use Apache ZooKeeper to store configuration data and to coordinate the cluster. An Apache ZooKeeper cluster must be running before Apache Kafka is started.
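A minimal broker configuration reflects this dependency: every broker points at the ZooKeeper ensemble. The keys below are standard Kafka server.properties settings; the hostnames and paths are placeholders:

```properties
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/var/lib/kafka/data
zookeeper.connect=zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
```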

     

 
 

Figure 1.1. AMQ Streams architecture 

   

     

 
 

AMQ Streams capabilities

The underlying data stream-processing capabilities and component architecture of Kafka can deliver:

● Data sharing between microservices and other applications with extremely high throughput and low latency
● Message ordering guarantees
● Message rewind/replay from data storage to reconstruct an application's state
● Message compaction to remove old records when using a key-value log
● Horizontal scalability in a cluster configuration
● Replication of data for fault tolerance
● Retention of high volumes of data for immediate access
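Two of these capabilities, rewind/replay and compaction, can be illustrated with a toy in-memory log. This is a sketch of the semantics only, not of the broker's on-disk implementation:

```python
# A topic partition modeled as an append-only list of (key, value) records.
log = [("user-1", "a"), ("user-2", "b"), ("user-1", "c")]

def replay(log, from_offset=0):
    """Re-read retained records starting from any offset (rewind/replay)."""
    return log[from_offset:]

def compact(log):
    """Keep only the latest value for each key (message compaction)."""
    latest = {}
    for offset, (key, value) in enumerate(log):
        latest[key] = (offset, (key, value))
    return [record for _, record in sorted(latest.values())]

# Replaying from offset 0 reconstructs the full application state;
# compaction drops the stale ("user-1", "a") record.
assert replay(log, 1) == [("user-2", "b"), ("user-1", "c")]
assert compact(log) == [("user-2", "b"), ("user-1", "c")]
```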

Kafka’s capabilities make it suitable for: 

● Event-driven architectures 
● Event sourcing to capture changes to the state of an application as a log of events 
● Message brokering 
● Website activity tracking 
● Operational monitoring through metrics 
● Log collection and aggregation 
● Commit logs for distributed systems
● Stream processing so that applications can respond to data in real time

AMQ Streams characteristics 

Among the many publish/subscribe systems available, AMQ Streams stands out for several characteristics that it inherits from Apache Kafka:

Multiple consumers and producers 

Apache Kafka allows multiple producers to stream their data to a single topic, to be consumed by multiple consumers. This helps in a scenario where multiple selling channels send their orders to a single topic, allowing multiple microservices, such as an auditing service and several vendors, to consume the same information when they need it, without interfering with each other.
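The key point is that each consumer group keeps its own position in the topic, so groups never steal messages from one another. A toy sketch of that bookkeeping (hypothetical group names, not Kafka's actual implementation):

```python
# Toy model: one topic, several independent consumer groups.
topic = ["order-1", "order-2", "order-3"]

# Each group tracks its own offset into the topic.
offsets = {"auditing": 0, "billing": 0}

def poll(group):
    """Return the next unread message for a group, advancing only
    that group's offset."""
    pos = offsets[group]
    if pos >= len(topic):
        return None
    offsets[group] = pos + 1
    return topic[pos]

assert poll("auditing") == "order-1"
assert poll("auditing") == "order-2"
# Billing still starts from the beginning; auditing's reads did not
# consume anything on its behalf.
assert poll("billing") == "order-1"
```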

   

     

 
 

Disk-based retention 

Kafka keeps your data safely on disk for a configurable retention period. If a consumer goes down, it can pick up its messages when it becomes operational again, with no risk of data loss. Kafka also lets a consumer choose the message from which it wants to start consuming, allowing developers to resume from where they stopped, consume only new messages sent after the consumer started listening, or replay all messages from the beginning.
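Those three starting points can be sketched as a small decision rule, mimicking the spirit of a committed offset plus Kafka's auto.offset.reset setting (the log contents are placeholders):

```python
# A retained partition log; messages stay on disk until retention
# expires, so a consumer can choose where to begin.
log = ["m0", "m1", "m2", "m3"]

def starting_offset(policy, committed=None, log_end=len(log)):
    """Pick the first offset to consume: resume from a committed
    offset if one exists, otherwise apply the reset policy."""
    if committed is not None:   # resume where we stopped
        return committed
    if policy == "earliest":    # replay everything from the start
        return 0
    if policy == "latest":      # only messages sent from now on
        return log_end
    raise ValueError(policy)

assert starting_offset("earliest") == 0             # full replay
assert starting_offset("latest") == 4               # skip old messages
assert starting_offset("latest", committed=2) == 2  # resume
```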

Native Scalability 

Apache Kafka is built to fail over and to offer high availability in clusters of many sizes, allowing you to start small and scale out when needed.

     

 
 

 
  

Quiz: AMQ Streams Features 


 

Match the items below to their counterparts in the table. 

1) Kafka Broker 
2) Consumer 
3) Producer 
4) Kafka Bridge 
5) Kafka Connect 
6) Kafka MirrorMaker 
7) Kafka Exporter 

 
 

Description                                                      Component

Provides a RESTful interface that allows HTTP-based clients
to interact with a Kafka cluster

Metrics tool that allows you to monitor your Kafka cluster

API used in applications to send messages to a topic

A toolkit for streaming data between Kafka brokers

Distributed system responsible for receiving messages from
producers and delivering them to consumers

API used in applications to receive messages from a topic

Replicates data between two Kafka clusters

     

 
 

Solution 
 

Match the items below to their counterparts in the table. 


 

Description                                                      Component

Provides a RESTful interface that allows HTTP-based clients      4) Kafka Bridge
to interact with a Kafka cluster

Metrics tool that allows you to monitor your Kafka cluster       7) Kafka Exporter

API used in applications to send messages to a topic             3) Producer

A toolkit for streaming data between Kafka brokers               5) Kafka Connect

Distributed system responsible for receiving messages from       1) Kafka Broker
producers and delivering them to consumers

API used in applications to receive messages from a topic        2) Consumer

Replicates data between two Kafka clusters                       6) Kafka MirrorMaker

     
