01 - Chapter Introduction To AMQ Streams
Introduction
AMQ Streams
Audience
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been
the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of
type and scrambled it to make a type specimen book. It has survived not only five centuries, but
also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the
1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently
with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.
Course Objectives
Prerequisites
Copyright © 2020 Red Hat, Inc. Red Hat, Red Hat Enterprise Linux, the Red Hat logo, and JBoss are trademarks or registered trademarks of Red Hat, Inc.
or its subsidiaries in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.
AMQ Streams is an enterprise-grade distributed streaming platform that allows you to scale
elastically to handle massive volumes of streaming data. Based on the upstream Apache Kafka
project, it is built on a storage system that persists and replicates data, and can retain it for as long
as needed.
LinkedIn engineering built Kafka to support real-time analytics. Kafka was designed to feed
analytics systems that performed real-time processing of streams, and LinkedIn developed it as a
unified platform for handling streaming data feeds in real time. The goal behind Kafka was to build
a high-throughput streaming data platform that supports high-volume event streams such as log
aggregation and user activity tracking.
To scale to meet LinkedIn's demands, Kafka is distributed and supports sharding and load
balancing. These scaling needs inspired Kafka's partitioning and consumer model: Kafka scales
writes and reads with partitioned, distributed commit logs. Kafka's form of sharding is called
partitioning. (Kinesis, which is similar to Kafka, calls partitions shards.)
A database shard is a horizontal partition of data in a database or search engine. Each individual
partition is referred to as a shard or database shard, and each shard is held on a separate
database server instance to spread load.
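To illustrate how key-based partitioning spreads records, the simplified Python sketch below maps each record key to a partition by hashing. Records with the same key always land in the same partition, which preserves per-key ordering while distributing load. Note that the real Kafka Java client's default partitioner uses murmur2 hashing; the md5-based scheme and the partition count here are only illustrative.

```python
import hashlib

NUM_PARTITIONS = 3  # illustrative partition count for one topic


def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition, mimicking Kafka's key-hash scheme.

    Kafka's Java client uses murmur2; md5 is used here only so the sketch
    is deterministic and dependency-free.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions


# All events for one user hash to one partition, preserving their order.
events = [("user-1", "login"), ("user-2", "click"), ("user-1", "logout")]
placements = [(key, partition_for(key)) for key, _ in events]
```

Because `user-1` hashes to the same partition every time, its `login` and `logout` events are stored and consumed in order, regardless of how many partitions the topic has.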
Kafka Broker
Messaging broker responsible for delivering records from producing clients to consuming clients.
ZooKeeper
Apache ZooKeeper is a core dependency for Kafka, providing a cluster coordination service for
highly reliable distributed coordination.
Producer and Consumer APIs
Java-based APIs for producing messages to and consuming messages from Kafka brokers.
Kafka Bridge
AMQ Streams Kafka Bridge provides a RESTful interface that allows HTTP-based clients to
interact with a Kafka cluster.
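As an illustration of the Bridge's RESTful interface, the sketch below only builds and inspects the HTTP request an HTTP-based client would send to produce records. The endpoint path and content type follow the Kafka Bridge conventions, but the host `localhost:8080` and topic `my-topic` are assumptions for this example.

```python
import json

# Hypothetical bridge location and topic; adjust for a real deployment.
BRIDGE_URL = "http://localhost:8080"
TOPIC = "my-topic"

# The Bridge accepts batches of records as a JSON document.
payload = {
    "records": [
        {"key": "order-42", "value": {"status": "created"}},
    ]
}

# An HTTP client would POST this body to the bridge's topic endpoint.
endpoint = f"{BRIDGE_URL}/topics/{TOPIC}"
headers = {"Content-Type": "application/vnd.kafka.json.v2+json"}
body = json.dumps(payload)
```

Any HTTP client (curl, a browser application, a service without a Kafka library) can send such a request, which is the point of the Bridge: Kafka access without a native Kafka client.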
Kafka Connect
A toolkit for streaming data between Kafka brokers and other systems using Connector plugins.
Kafka MirrorMaker
Replicates data between two Kafka clusters, within or across data centers.
Kafka Exporter
Extracts additional Kafka metrics data for analysis, such as data relating to offsets, consumer
groups, consumer lag, and topics.
A cluster of Kafka brokers is the hub connecting all of these components. The broker uses Apache
ZooKeeper for storing configuration data and for cluster coordination, so an Apache ZooKeeper
cluster must be running before Apache Kafka is started.
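As a sketch of that dependency, a broker's `server.properties` points at the ZooKeeper ensemble through the standard `zookeeper.connect` setting. The hostnames and paths below are placeholders, not values from this course.

```properties
# Placeholder ZooKeeper ensemble; replace with real hostnames.
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
# Unique id of this broker within the cluster.
broker.id=0
# Directory where this broker stores its partition logs.
log.dirs=/var/lib/kafka/data
```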
The underlying data stream-processing capabilities and component architecture of Kafka can
deliver:
● Data sharing between microservices and other applications with extremely high throughput
and low latency
● Message ordering guarantees
● Message rewind/replay from data storage to reconstruct an application state
● Message compaction to remove old records when using a key-value log
● Horizontal scalability in a cluster configuration
● Replication of data to control fault tolerance
● Retention of high volumes of data for immediate access
● Event-driven architectures
● Event sourcing to capture changes to the state of an application as a log of events
● Message brokering
● Website activity tracking
● Operational monitoring through metrics
● Log collection and aggregation
● Commit logs for distributed systems
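To make one of the capabilities above concrete, the sketch below simulates message compaction on a key-value log: compaction retains only the most recent record per key. This is a simplified model of a compacted Kafka topic; real compaction runs in the background and also retains delete markers (tombstones) for a configurable period.

```python
def compact(log):
    """Keep only the latest record per key, in offset order of the
    surviving records -- a simplified model of a compacted Kafka topic."""
    latest = {}
    for offset, (key, value) in enumerate(log):
        latest[key] = (offset, value)  # later offsets overwrite earlier ones
    # Emit survivors ordered by the offset of their latest record.
    return [(key, value) for key, (offset, value) in
            sorted(latest.items(), key=lambda kv: kv[1][0])]


log = [("user-1", "a@old"), ("user-2", "b"), ("user-1", "a@new")]
compacted = compact(log)
# compacted -> [("user-2", "b"), ("user-1", "a@new")]
```

After compaction the log still answers "what is the current value for each key?" while old versions of `user-1`'s record have been removed.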
Disk-based retention
Kafka keeps your data safe on disk for as long as it is configured to retain it. If a consumer fails to
fetch a message, it can do so once it is operational again, with no risk of data loss. Kafka lets a
consumer choose the offset from which it starts consuming, so developers can resume from
where they stopped, consume only new messages sent after the consumer started listening, or
even consume all messages again.
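The offset model described above can be sketched as follows: a consumer reads from an offset into the partition log and may start from the beginning (replay everything), from where it previously stopped, or from the end (only new messages). The classes here are a toy illustration, not the real client API.

```python
class PartitionLog:
    """A toy partition: an append-only list of messages with offsets."""

    def __init__(self):
        self.messages = []

    def append(self, msg):
        self.messages.append(msg)

    def read_from(self, offset):
        """Return all messages at or after the given offset."""
        return self.messages[offset:]


log = PartitionLog()
for msg in ["m0", "m1", "m2"]:
    log.append(msg)

replay_all = log.read_from(0)                 # consume all messages again
resume = log.read_from(2)                     # resume where a consumer stopped
only_new = log.read_from(len(log.messages))   # only messages sent from now on
```

Because the broker keeps the log on disk rather than deleting messages on delivery, each consumer is free to pick any of these starting points independently.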
Native Scalability
1) Kafka Broker
2) Consumer
3) Producer
4) Kafka Bridge
5) Kafka Connect
6) Kafka MirrorMaker
7) Kafka Exporter