Memcached

The document provides an overview of memcached, an in-memory key-value store, detailing its memory management, LRU eviction policy, threading model, read/write operations, locking mechanisms, and distributed cache functionality. It explains how memory is allocated in pages to avoid fragmentation and describes the use of per-item locks for improved concurrency. Additionally, it mentions a demo using Docker, telnet, and Node.js to illustrate these concepts.



memcached
Simple in-memory key-value store
Agenda

● Memory management
● LRU
● Threads
● Reads/Writes
● Locking
● Distributed Cache
● Demo
In-memory key value store

● An item consists of a key and a value
● A key is a string of up to 250 characters
● A value can be arbitrary data, up to 1MB by default (configurable)
● Keys can have an expiration time (TTL)
● Everything is stored in memory
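The item model above can be sketched as a tiny in-memory store with lazy TTL expiry. This is an illustrative sketch of the concept, not memcached's actual code; the class and method names are made up:

```javascript
// Minimal sketch of an in-memory key-value store with TTL expiry.
class MiniCache {
  constructor() {
    this.items = new Map(); // key -> { value, expiresAt }
  }

  // ttlSeconds = 0 means "never expires", mirroring memcached's convention
  set(key, value, ttlSeconds = 0) {
    const expiresAt = ttlSeconds > 0 ? Date.now() + ttlSeconds * 1000 : Infinity;
    this.items.set(key, { value, expiresAt });
  }

  get(key) {
    const item = this.items.get(key);
    if (!item) return undefined;
    if (Date.now() >= item.expiresAt) {
      // Lazily drop expired items when they are next accessed
      this.items.delete(key);
      return undefined;
    }
    return item.value;
  }
}
```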
Memory management

● Naive allocation places items at arbitrary free addresses
● Items being freed and allocated leaves gaps of varying sizes
● This results in memory fragmentation
● New items can no longer fit, even though enough total memory is free

(Diagram: fragmented memory, with free gaps scattered between items)
Memory management

● Memory is allocated in pages (1MB by default, configurable)
● Pages are organized by slab classes
● A page consists of chunks of a fixed size
● Items are stored in chunks
● Each slab class has a fixed chunk size
● This avoids memory fragmentation

(Diagram: a page of up to 1MB divided into equal-size chunks)
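The slab-class idea can be sketched as follows: chunk sizes grow geometrically from a base size up to the page size, and an item goes into the smallest chunk that fits. The base size (72 bytes) matches the slides; the growth factor of 1.25 is an assumption (it is memcached's default, but the slides don't state it):

```javascript
// Sketch of slab-class selection by item size.
const PAGE_SIZE = 1024 * 1024; // 1MB page

function buildChunkSizes(base = 72, growthFactor = 1.25) {
  const sizes = [];
  let size = base;
  while (size < PAGE_SIZE) {
    sizes.push(Math.floor(size)); // one entry per slab class
    size *= growthFactor;
  }
  sizes.push(PAGE_SIZE); // largest class: 1 chunk per page
  return sizes;
}

// Returns the chunk size of the smallest slab class that fits the item
function chunkSizeFor(itemSize, chunkSizes) {
  for (const size of chunkSizes) {
    if (itemSize <= size) return size;
  }
  return null; // larger than a page: cannot be stored
}
```

With these parameters, a 40-byte item lands in the 72-byte class and a 900KB item in the 1MB class, matching the examples on the next slides.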


Memory management

(Diagram: slab classes from smallest to largest chunk size)

Slab Class 1:  chunk size 72 bytes → 14,563 chunks per page
…
Slab Class 43: chunk size 1MB → 1 chunk per page (page = 1MB)


Memory management

(Diagram: a new item goes into the slab class with the smallest chunk that fits)

New item (40 bytes) → Slab Class 1 (chunk size 72 bytes, 14,563 chunks per page)
New item (900KB)    → Slab Class 43 (chunk size 1MB, 1 chunk per page)


Memory management

(Diagram: all chunks of Slab Class 1 are FULL, so a new 40-byte item needs a
free chunk in that class; Slab Class 43 still has room for 1MB chunks)

Chunk sizes grow from class to class by a configurable growth factor


LRU - Least recently used

● Memory is limited
● When an item is accessed, it moves to the head of the LRU list
● Items that aren't used may get evicted
● An LRU crawler/daemon handles cache eviction
● There is an LRU list per slab class

(Diagram: LRU list from head, most recently used, to tail, least recently used)
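The head/tail behavior above can be sketched with a JavaScript `Map`, whose iteration order is insertion order: re-inserting on access moves an item to the "head" (most recently used), and the first key is the "tail" candidate for eviction. This is a concept sketch, not memcached's implementation (which uses an intrusive doubly linked list per slab class):

```javascript
// Sketch of an LRU cache on top of a Map.
class LruCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Move to head: accessed items become most recently used
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity) {
      // Evict the least recently used item (the tail)
      const lru = this.map.keys().next().value;
      this.map.delete(lru);
    }
    this.map.set(key, value);
  }
}
```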
LRU in big picture

(Diagram: each slab class has its own LRU list, from Head to Tail)

Slab Class 1:  chunk size 72 bytes
…
Slab Class 43: chunk size 1MB, 1 chunk per page (page = 1MB)


Threading

(Diagram: a listener thread on TCP port 11211 hands connections to connection threads)

● One listener thread accepts connections on TCP port 11211
● Each connection is handed to a connection thread
● There used to be one global lock
● It was changed to a per-item lock


Read

(Diagram: Read key "test" → Hash(test) % N selects a bucket in the hash
table of size N, holding items a, b, c, d; the LRU list stays
tail ← a, b, c, d → head)
Read 2

(Diagram: Read key "buzz" → Hash(buzz) % N selects the bucket holding
item c; the read moves c to the head of the LRU list:
tail ← a, b, d, c → head)
Write

(Diagram: Write key "new" → Hash(new) % N selects a bucket in the hash
table of size N; the 44-byte item is allocated from a chunk of the
matching slab class and linked into the bucket)
Write (collision)

(Diagram: Write key "Nani" → Hash(Nani) % N selects a bucket that is
already occupied; the new 44-byte item is chained onto the same bucket)
Read (collision)

(Diagram: Read key "Nani" → Hash(Nani) % N selects the bucket; the chain
is walked comparing keys: the first entry does not match, the second
entry matches and is returned to the user)
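The read/write path above can be sketched as a hash table with chaining: `hash(key) % N` picks a bucket, and a read walks the bucket's chain comparing full keys until one matches. The djb2-style hash function here is illustrative, not memcached's actual hash:

```javascript
// Simple string hash (djb2 variant), for illustration only
function hash(key) {
  let h = 5381;
  for (let i = 0; i < key.length; i++) {
    h = ((h * 33) + key.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Sketch of a hash table that resolves collisions by chaining.
class HashTable {
  constructor(nBuckets) {
    this.buckets = Array.from({ length: nBuckets }, () => []);
  }

  set(key, value) {
    const chain = this.buckets[hash(key) % this.buckets.length];
    const entry = chain.find((e) => e.key === key);
    if (entry) entry.value = value;
    else chain.push({ key, value }); // collision: append to the chain
  }

  get(key) {
    const chain = this.buckets[hash(key) % this.buckets.length];
    // Compare full keys: different keys can land in the same bucket
    const entry = chain.find((e) => e.key === key);
    return entry ? entry.value : undefined;
  }
}
```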
Locking

● There used to be one global lock, blocking all other threads
● It was changed to a per-item lock for better concurrency
● Refcounting keeps an item alive while a thread is still using it

(Diagram: threads blocked waiting on a single global lock)
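"Per-item" locking is commonly implemented as lock striping: a fixed pool of locks is indexed by the item's hash, so operations on different keys usually contend on different locks. The sketch below shows only the lock-selection step; the lock count and hash function are illustrative assumptions:

```javascript
// A power-of-two stripe count lets us mask instead of using modulo
const N_LOCKS = 16; // illustrative; real lock tables are sized at startup

// Simple string hash (djb2 variant), for illustration only
function hash(key) {
  let h = 5381;
  for (let i = 0; i < key.length; i++) {
    h = ((h * 33) + key.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Every operation on the same key always selects the same lock,
// while different keys are spread across the lock pool
function lockIndexFor(key) {
  return hash(key) & (N_LOCKS - 1);
}
```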
Distributed Cache

● Memcached servers are unaware of each other
● The client is configured with a server pool
● Clients distribute keys across servers through consistent hashing
● It is up to clients to reshuffle keys when adding/removing servers
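Client-side consistent hashing can be sketched as a hash ring: each server is placed at several points on the ring, and a key is served by the first server point at or after the key's hash. The virtual-node count, server names, and hash function below are illustrative assumptions:

```javascript
const VNODES = 100; // points per server on the ring (illustrative)

// Simple string hash (djb2 variant), for illustration only
function hash(key) {
  let h = 5381;
  for (let i = 0; i < key.length; i++) {
    h = ((h * 33) + key.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Sketch of a consistent-hashing ring over a server pool.
class Ring {
  constructor(servers) {
    this.points = [];
    for (const server of servers) {
      for (let v = 0; v < VNODES; v++) {
        this.points.push({ point: hash(`${server}#${v}`), server });
      }
    }
    this.points.sort((a, b) => a.point - b.point);
  }

  serverFor(key) {
    const h = hash(key);
    // First point clockwise from the key's hash; wrap around to the start
    const hit = this.points.find((p) => p.point >= h);
    return (hit || this.points[0]).server;
  }
}
```

Because only the ring points near an added or removed server change owners, most keys keep their server, which is why clients prefer this over a plain `hash(key) % serverCount`.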
memcached
Demo with Docker, telnet and Node.js
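A minimal version of such a demo might look like the session below. The official `memcached` Docker image and the plain-text protocol commands (`set <key> <flags> <exptime> <bytes>`, `get <key>`) are real; the exact session shown is an illustrative sketch:

```
$ docker run -d --name memcached -p 11211:11211 memcached
$ telnet localhost 11211
set greeting 0 60 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
quit
```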
Summary

● Memory management
● LRU
● Threads
● Reads/Writes
● Locking
● Distributed Cache
● Demo
