CA-NFS
Paper: A. Batsakis, R. Burns, A. Kanevsky, J. Lentini and T. Talpey
Slides: Joe Buck, for CMPS 292 - Spring 2010
04/12/2010
My slides are based heavily on Batsakis’ FAST ’09 slides. Link in the last slide
The Goal
✤ Greedy scheduling
Memory is used on both the client, which can flush its cache, and on the server
Keep going until bad things happen
This is the status quo
Make clients aware of other clients' resource usage (and the cost of further use)
Make clients aware of the server's resource usage (and the cost of further use)
How to quantify congestion
✤ How to measure
✤ Pi(ui) = Pmax × (k^ui − 1) / (k − 1), where ui is the utilization of resource i, normalized to [0, 1] (a code sketch follows this slide)
✤ Device-specific
✤ Can be time-shifted
Clients make their decisions based on server prices and local prices
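A minimal Python sketch (mine, not the authors' code) of this exponential pricing function; P_MAX and K are illustrative constants I chose, not values from the paper:

# Exponential pricing for one resource: P(u) = P_max * (k**u - 1) / (k - 1),
# where u is the resource's utilization normalized to [0, 1].
P_MAX = 100.0   # price charged when the resource is saturated (illustrative)
K = 64.0        # exponential base; price stays low until utilization is high (illustrative)

def price(utilization, p_max=P_MAX, k=K):
    """Price of placing one more unit of work on a resource at this utilization."""
    u = min(max(utilization, 0.0), 1.0)        # clamp to [0, 1]
    return p_max * (k ** u - 1.0) / (k - 1.0)  # ~0 when idle, p_max when saturated

# Lightly used resources are nearly free; the price climbs steeply near saturation.
for u in (0.2, 0.6, 0.9):
    print("u=%.1f  price=%.1f" % (u, price(u)))

A lightly loaded resource prices close to zero, so clients schedule work on it freely; a nearly saturated resource prices close to Pmax, pushing clients to defer work or to use their own resources instead.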
Accelerating Writes
✤ Write-behind is bypassed
✤ Non-issue
If the server is still loaded later, then the deferred writes take a bigger latency hit (bursting)
Typically caused by high server load
Heuristics throttle the deferral of small writes (see the sketch below)
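A sketch of the resulting write-scheduling decision (my reading of the slides, not the CA-NFS implementation): the client compares its local price for holding dirty data against the server's advertised write price, with a hedge for small writes per the heuristic above. All names and thresholds here are hypothetical.

# Hypothetical decision helper: defer vs. accelerate an asynchronous write.
def schedule_write(local_price, server_write_price, size_bytes,
                   small_write_threshold=4096):
    """Return 'defer' or 'accelerate' for an asynchronous (write-behind) write."""
    # One reading of the slide's heuristic: don't bother deferring small writes,
    # since batching them saves little and risks a latency burst later.
    if size_bytes <= small_write_threshold:
        return "accelerate"
    # Deferring pays off only while the server is the more congested side.
    if server_write_price > local_price:
        return "defer"        # keep the dirty pages in the client cache for now
    return "accelerate"       # bypass write-behind and push the data to the server now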
Example 1: CA-NFS in Practice
Server: memory 65%, network 20%, disk 90%
Cache hit rate: 40%
Prices: writes 40, reads 12
Client 2 defers its writes since the cost on the server is higher than its local cost.
Also, client 2 is seeing good cache hit rates, so deferring works well for it.
Example 2: CA-NFS in Practice
Server: memory 65%, network 20%, disk 60%
Cache hit rate: 40%
Prices: writes 55, reads 12
Client 1 has freed up some memory and client 2 is using more memory, so client 2 now starts writing again.
Writes are cheaper at the server than at either client.
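Walking the two examples through that price comparison, with hypothetical local prices for client 2 (the slides only give the server's prices):

# Hypothetical local prices chosen to match the narrative of the two examples.
examples = [
    # (slide, server write price, assumed local price at client 2)
    ("Example 1", 40, 25),   # plenty of free client memory: deferring is cheap locally
    ("Example 2", 55, 70),   # client 2 now short on memory: holding dirty data is costly
]
for name, server_price, local_price in examples:
    action = "defer" if server_price > local_price else "accelerate"
    print("%s: server=%d  local=%d  -> %s" % (name, server_price, local_price, action))
# Example 1: client 2 defers its writes; Example 2: client 2 starts writing to the server again.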
Benchmarks
✤ Fileserver test
✤ mostly synchronous
Performance improves for a single client because writes and reads are batched a bit better
Fileserver Results
Fileserver Results 2
Future Work
✤ Cost metrics are based on this paper: B. Awerbuch, Y. Azar and S. Plotkin, “Throughput-Competitive On-Line Routing,” FOCS ’93
✤ Questions?
✤ Comments?
✤ Contact: [email protected]